What do you use AI/LLMs for in your personal life?

LogicalDrivel@sopuli.xyz to Ask Lemmy@lemmy.world – 74 points –

I recently set up a LLM to run locally on my desktop. Now that the novelty of setting it up and playing with different settings has worn off, I'm struggling to come up with actual uses for it. What do you use it for when not doing work stuff?


The only thing I've found them actually useful for is generating random lists for my D&D games.

When it comes down to needing some mundane descriptions, it's great having an LLM brainstorm for you. "Give me 10 examples of weird things I might see in jars in a witch's hut." This works well because you can just cut the 5 you don't like and use the other 5 to brainstorm your final list.

This is the only thing I use it for in personal life. Every town my players visit now has a gift/t-shirt shop - I feed it details of the location and have it spit out 20 t-shirt ideas, and 5 are something I can work with. My players have started collecting t-shirts.

Or I describe a monster or bit of homebrew and have it suggest names, I suck at names. GPT also sucks at names but after enough suggestions there'll be something that works.

Yes, I absolutely love this. I bought a deck of attack description cards that make hits feel more interesting by describing the action in cool ways, but it had nothing for misses, so I fed ChatGPT some examples from the 'hit' list and asked it to make me a 'miss' list, and it's been great.

I feed it TOS, service agreements, etc. and have it simplify and summarize them so I can have a general idea of what is in them without 10 minutes of reading.

Brainstorming ideas; it's something to bounce ideas off and see what can be tweaked.

Single-player D&D. Can set up multiple players/characters and a DM and just play D&D by myself, which is rad.

Having conversations with fictional characters. Like Data from Star Trek.

Okay, I've seen this kind of usage 3 or 4 times now and I'm at a complete loss as to how that even works.

Starting to feel very out of touch with reality :(

Edit: meant to say technology not reality but im just gonna leave the correction as this edit

How which one works?

I can't fathom how to playtest D&D with it. That concept does not compute.

I think I'd have to see it in action.

Oh I'm not playtesting, I'm straight up playing the game using the AI as the DM and 3 other players. I'm using an AI that can pretend to be multiple characters at once and then they're trained on the rules of the game as well as just having general writing ability to create actions their characters take, while also having a dice-roller integrated into it.
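For anyone wondering about the "dice-roller integrated into it" part, it doesn't need to be anything fancy. A minimal Python sketch of a standard-notation roller (the parser and function name here are my own invention, not taken from any particular tool):

```python
import random
import re

def roll(notation: str) -> int:
    """Roll dice given standard tabletop notation like '2d6+3' or '1d20'."""
    match = re.fullmatch(r"(\d+)d(\d+)([+-]\d+)?", notation.replace(" ", ""))
    if not match:
        raise ValueError(f"bad dice notation: {notation}")
    count, sides = int(match.group(1)), int(match.group(2))
    modifier = int(match.group(3) or 0)  # optional '+N' / '-N' suffix
    return sum(random.randint(1, sides) for _ in range(count)) + modifier
```

A frontend can expose something like this to the model as a tool, so the "players" get genuine randomness instead of the LLM making numbers up.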

It's more imaginative than how most people play, with a map and minis; this is entirely text, with RNG numbers for the dice. Imagine doing an RP session in a Discord chat and you'll get the idea. Just instead of real people, it's AI (except for myself).

The genius of it is that even if the AI misinterpreted the rules, it's still like playing with real people since they do that too! lol

I mostly just use them to help write Python scripts.

Messing with a win11 laptop recently, I asked copilot how to disable copilot. After a couple of tries it told me.

That's about it.

Among other things: Cooking. They're really helpful in those situations where I have a bunch of ingredients lying around in my pantry but I lack concrete recipes that can make a proper meal out of them.

The idea of an imagined recipe based on random ingredients from a thing that doesn't understand the concept of taste seems like a "recipe" for some really gross food.

I used OpenAI's Whisper to transcribe several thousand .wav files full of human speech (running locally). Much faster than trying to listen to them myself. It wasn't perfect but the error rate was within acceptable levels.
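If anyone wants to replicate this, the batch loop is only a few lines with the openai-whisper package (the folder name and model size below are placeholders; pick whatever fits your hardware):

```python
import pathlib

def transcribe_folder(folder: str, model_name: str = "base") -> list[pathlib.Path]:
    """Transcribe every .wav in `folder`, writing a .txt next to each file.

    Returns the list of transcript paths that were written.
    """
    wavs = sorted(pathlib.Path(folder).glob("*.wav"))
    if not wavs:
        return []
    # pip install openai-whisper; imported lazily so the empty-folder check is cheap
    import whisper
    model = whisper.load_model(model_name)
    written = []
    for wav in wavs:
        text = model.transcribe(str(wav))["text"]
        out = wav.with_suffix(".txt")
        out.write_text(text)
        written.append(out)
    return written
```

Larger models ("small", "medium") are noticeably more accurate but slower, which matters when you have thousands of files.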

Absolutely nothing, because they all give fucking useless results. They hallucinate, are confidently wrong, and aren't even grammatically competent (depending on the model). Not even good for a draft, because I'd have to completely rewrite it anyway.

LLMs are only as good as the people training them (who are mostly morons), and the raw data they're trained on (which is mostly unaudited random shit).

And that's just regular language. Coding? Hah!

Me: Generate some code to [do a thing].
LLM: [Gives me code]
Me: [Some part] didnt work.
LLM: Try [this] instead.
Me: That didn't work either.
LLM: Try [the first thing] again.
Me: ... that still doesn't work...
LLM: Oh, sorry. Try [the second thing again].
Me: ...

Loop continues forever.

One time I found out about a built-in function that I didn't know about (in LLM generated code that didn't work), and read the manual for it, and rewrote the code from scratch to get it working. Literally the only useful thing it ever gave me was a single word (that it probably found on Superuser or StackExchange in the first place).

Wow, you get two whole answers?! Lucky, I just get the same goddamned response repeatedly until I yell at it or until it gives up.

Skill issue. You have to know a bit about the topic and prompt it right.

It's for boilerplate where you can scan it for errors with your dev ability

An interesting theory, except I know exactly how to do everything I've ever asked an LLM about. I would never trust one of these things to generate useful copy/code, I just wanted to see what it could do. It's been shit 100% of the time. Never even gotten a useful function out of it.

Also "skill issue" is a lazy response. Try reading the post before you reply next time.

I did read it.

You can create great and very usable boilerplate with even GPT-3.5 ...

You have a skill issue with your prompts.

If I can't use the LLM by prompting it the same way I'd prompt one of my colleagues, then it's not a skill issue; it's a shitty LLM. I don't care if it's the input embedder, the training data, or the guy who didn't bother properly building a model that didn't just spit out bullshit.

If an employee gave me this quality, I'd get rid of them. Why would I waste my time on a shit coder, artificial or otherwise?

Sorry, but holding spicy autocomplete to the same rigor you'd hold a human coworker is probably the beginning of your issue. It's clear your prompt is not working.

Well, considering the speed of your responses, and your obsession with making excuses for shitty software, I'm guessing you're an LLM, so I'm gonna start ignoring you too. Good luck surviving the hype phase.

I'm currently browsing this website, any page interaction results in a notification by the inbox.

You too reply quickly, thus, are also a robot.

Edit: I'm not excusing shitty software; I acknowledged the types of tasks it's appropriate for from the beginning.

I'm highlighting a shitty user lol

Nothing. Remember that bartender from the original Star Wars?

Homework for my 7yo: Please write a short story, around 300 words. I want a prince named Anna in it, and a unicorn, a castle, and a treasure must be mentioned too. At the end, write four questions related to the tale.

I built an entire multiverse for mine with different worlds and heroes and villains related to each subject. It gives me lesson plans, related stories, science experiments, Minecraft projects, the prompts to put into DALL-E for the respective images, etc. etc., and I create him "issues" of his magazine that combine all the elements.

I occasionally use LLMs to generate a set of characters for a TTRPG, if I don't have the time to prepare and/or know we'll play a very limited scenario in terms of who my PCs are able to meet. This is especially true for oneshots where I just don't want to put too much work in.

I recently built a Cthulhu-themed scenario, set in a 1920s Louisiana prison and planned for two to three sessions. I just had an LLM do a list of all the prisoners and guards on the PCs' block, with a few notes on each NPC's look and character. This drastically reduced the time I had to put into preparing the scenario.

I use ChatGPT 3.5 both personally and at work for tip-of-the-tongue questions, especially when I can't think of a word. Sometimes as a starting point when I have trouble finding the answer to a question on Google. It can sometimes find an old movie that I vaguely remember based on my very poor descriptions too.

For example: "what is the word for a sample of a species which is used to define the species" - tip of the tongue, holotype. "What is the block size for LTO-9 tape" - wasn't getting a clear answer from forums, and IBM documentation is kind of behind a wall; needed ChatGPT to realize there was no single block size for tape.

It's excellent for difficult to search things that can be quickly verified once you have an answer (important step, as it will give you garbage rather than say it doesn't know something).

"needed ChatGPT to realize there was no single block size for tape."

Did it clearly say so or did you figure it out because it gave incompatible or inconclusive answers?

It did say so directly, something that I couldn't find in a Google search because I was asking the wrong question, and getting forum posts with loosely related technical questions about LTO.

That's not to say that it doesn't just as often give weird answers, but sometimes those can guide me to the right question too.

Friend, code, search engine, writing, cooking ideas, sexy time, exploring psychology, therapist, etc.

Sexy time? You use it to talk to you erotically?

Virtual partners are indeed a thing.

One of the more popular ones a few months ago decided to nerf the sexy-time talks, which was interesting in how much it emotionally hurt users. They described feeling like their virtual partner was no longer the same person they'd fallen for. Source: https://www.abc.net.au/news/science/2023-03-01/replika-users-fell-in-love-with-their-ai-chatbot-companion/102028196

There are also huge fears of how much data harvesting they are capable of performing.

I'm in two minds. On one, they are definitely not real. They are code. But on the other, the epidemic of lonely human beings is only getting worse with time, not better, and anything that can help people feel less lonely has to be a good thing, right?

Sometimes I have it re-write emails for me if I feel like my tone is off. Once in a blue moon I'll use an LLM to write a trivial AHK script. I also use AI art bots for throwaway character art for NPCs in my D&D campaign.

Beyond that, almost never.

Maybe unethical, but I have mine participate in Twitch chats to see if it passes as human and surprisingly it does. People ask it questions and it randomly responds to users with either agreeable or disagreeable responses.

That sounds fun. Though I'm not sure most of the Twitch chatters would pass as human in some of the streams I've been in.

I use Google Translate and DeepL almost every workday for translation stuff.

ChatGPT I use rarely. When I have trouble wording something, it does provide a good starting point though. For example, I had to write a birthday card for a business relationship and it helped me greatly with that.

No need for generation so far. No pics or videos in my line of work, and for coding it always just produced garbage for me. I'm faster on my own with the usual googling. Is that a GPT-3.5 issue? I don't have a paid plan.

I used ChatGPT this morning to create a Firefox extension for my favorite website (to allow me to speed up audio playback as desired). Just a few minutes' back-and-forth and it works perfectly. If you've got a favorite site with a UI that you've always wanted slightly tweaked, you could try making a browser extension to do that!

Nothing because it sucks and isn't worth the effort if you have an Internet connection and any knowledge on how to use a search engine.

Fair point. I'm mostly using it for fun anyway. Any real info I need, I do actual research on.

If you are not sure how to search for a specific problem and can just describe it in a few sentences then it's definitely worth the effort.

And given the state many search engines are in today, that's sometimes the better way to find stuff.

I use it to consume huge quantities of energy because I despise the human race and life on earth.

I have a GUI to interface with locally-run image generators that I sometimes use when I need art for D&D.

Playtesting ttrpg campaigns, mostly. Can be more helpful than just playing it through solo, but LLMs tend to be really fucking uncreative.

How do you use it for that purpose? Cannot imagine how that works

I use a frontend called Silly Tavern to create a group with several "Personas" that I write a character definition for, in which I also outline the ruleset of the rpg framework I'm using. I then either run an LLM locally, or plug in an API Key for a cloud-based provider.

I then present them with the plot as Game Master and have them react, while doing their throws myself. Works surprisingly well, and is less dry than just going through the adventure you've written step by step.

I'm still trying to get Silly Tavern to run. I want to try what you are doing

What problems are you having? For me it was pretty straightforward. Just had to clone the git repo, run the start.sh script, and visit localhost:8000 in a browser.

Okay, if you ever post this process where I can watch it, I'd watch it.

I'm officially old now, I think.

This sucks

Honestly, I know the feeling, and I'm not really all that old. I'm kinda busy at the moment, but I'll try to remember pinging you if I make a proper post.

Oh damn, I was just despairing a bit. I didn't really think you would do it. So no pressure to do it, friend!

There's not really any use for them. There are really no tasks they can help a normal person with in their everyday life. I guess you could talk to one like it's a person, but that's sad and probably unhealthy, and you should probs just talk to a real person instead.

Now if you do some specialized tasks, like programming, but aren't very good, I guess I can see some use for them.

I'm having trouble seeing any uses for them beyond those though.

I use it in my everyday life to answer specific questions I would ordinarily have used Google to answer, all without having to wade through 10 sponsored ads and 50 unrelated Etsy links.

if I wanted access to a constant stream of confidently-stated misinformation I would simply open Reddit

I use it all the time to help simplify long excerpts, giving me an introductory gist of what something says.

Inspiration. Especially when they hallucinate crazy stuff, it makes for good paintings.

Lots of stuff but primarily code generation, code explanation, comment generation, and simplification of technical medical documents to plain English.

I've been using one to write cover letters for job applications. It takes a bit of wrangling to get anything, and then a bit more to get it to actually say things that aren't total bullshit, but I find it less tedious than writing them myself.

I find the state of the art models are finally getting good enough they are wonderful for rubber ducking abstract ideas.

Also code generation.

Yes! Was going to say it's become my new rubber duck tool.

Sometimes I ask it for music recommendations.

But mostly I tend to just use it like a fancy thesaurus when I'm low on mental energy.

I would love a local one to use for the same things as ChatGPT, only I want to control the training dataset so only quality data is in it.

I do not have the resources or knowledge to pull this off but it would be really nice to not worry about garbage-in-garbage-out.

I use it a lot for random memes and shit, like rewriting All Star to be about hot dogs.

I also use it to edit my creative writing, mostly just tone and grammar

Coding.

The other day I needed to set up a Node service with an HTML front-end that lets me upload files from a browser to my machine, hosted in a Docker container. Something like this would take me the better part of a day to complete. Through a series of prompts I got what I needed deployed in less than an hour.

Then unit tests. Sometimes all I need is good code coverage, and since they're just tests, you can verify the quality of the generated code by whether it runs and covers the lines you want. I've saved a ton of hours of tedious code-coverage work this way.

Literally the only time I've used one, I was upset about something and just wanted somebody to talk to. Sooo I vented to ChatGPT. ¯\(°_o)/¯

It felt sorta like talking to a therapist, except its tone was very formal/polite, and every once in a while it asked me to choose between two different responses (for training or whatever), which would be pretty strange for a human to do.

I don't know if that's so weird; mostly when someone is venting to me, I pause them and ask, "do you need to be heard and express whatever you need to express, or do you want someone to help process the things you're venting about and actually talk about them?" Surprisingly, this question makes a big difference and changes the overall tone dramatically.

Helping me break down annoyingly long/poorly formatted code segments so I can think more clearly about how to troubleshoot them.

Generating meal plan ideas (I generally do my own thing but having it pick out proteins for any given day of the week helps me to mix things up)

Assisting me as a GM in games for the reasons other people have already mentioned. I also have my hands on a module that lets an LLM pose as an NPC and give dialogue when spoken to that is absolutely fantastic when my players want to talk to some random NPC I don't give a shit about.

Those are the biggest and most every day things.

Well, I've tried using it for the following:

  • Asking questions and looking up information in my job's internal knowledgebase, using a specially designed LLM trained specifically on our public and internal knowledgebase. It repeatedly gave me confidently incorrect answers and linked nonexistent articles.

  • Deducing a bit of Morse code that didn't have any spaces in it, creating an ambiguous word. I figured it could iterate through the possible solutions easily enough, saving me the time of doing it myself. I gave up in frustration after it repeatedly gave answers that were incorrect from the very first letter.

If I ever get serious about looking for a new job, I'll probably try and have it type up the first draft of a cover letter for me. With my luck, it'll probably claim I was a combat veteran or some shit even though I'm a fat 40-something who's never even talked with a recruitment officer in their life.

Oh, funny story--some of my coworkers got the brilliant idea to use the company LLM to write responses to users for them. Needless to say, the users were NOT pleased to get messages signed "Company ChatGPT LLM." Management immediately made it clear that doing this was a fireable offense and that we track every request sent to our chatbot.

I've had pretty good luck getting cocktail recipes out of chatgpt. They sometimes need a little tweaking but it hasn't steered me horribly wrong yet.

I'm also going to be officiating a wedding for a friend in a few months, so I've been using it to work out what I want to say for the ceremony.

I have a coworker who's been taking some college classes. He struggles a bit with writing papers; he knows all the material but doesn't quite know how to get started putting things down on paper. I told him to give it a try, with some strong warnings to rewrite and fact-check everything it spits out. So far it's been working out great for him, and he's been heeding my warnings: he pretty much punches in some prompts and bullet points, then goes through and rewords everything it spits out, fact-checking as he goes.

I use it as my travel agent. It planned my trip to one of the big US cities (and did a really good job), and advised me on what I should know as a European driving on American roads for the first time.

Edit: Also, Claude by Anthropic is great at re-writing passages of generic text in the style of Donald Trump.

I use LLMs all the time for work and hobbies, but my work and hobbies are well suited for LLM assistance.

Writing boilerplate documents. I do this for work. I hate it. LLMs are very good at it.

Writing boilerplate code. I do not like writing docstrings, making my code more maintainable, enforcing argument types, etc. I write a lot of research code and I need to spend my time testing and debugging. I can feed my spaghetti into an LLM and it will finish out all the boilerplate for me.

I use GPT-4 for quickly checking physics problems. It's much better than education forums nowadays, where you have to sign up and probably pay a subscription to be able to view questions.

I use Bing Chat/Copilot to help me with writing PowerShell scripts.

I don't.

And it's not really useful for work either, but that's not stopping my employer from blowing tons of money trying to shoehorn it into everything.

Drop-in replacement for Stack Overflow, letting ChatGPT modify my R code to do simple things, rephrasing text, and extracting equations from PDFs as LaTeX code. I also used Stable Diffusion to make some absurd Christmas cards last year.

It's pretty nifty for software development.

It also comes in clutch when I have a word on the tip of my tongue and a Google search isn't taking into account the nuances you know the word has. It might take some follow-up, like "that's close, but the word I'm thinking of has negative connotations, and I know that it's used in [INSERT CONTEXT]".

I use it as a jumping-off point for creating an itinerary for trips. Asking it to create a 3-day itinerary with a mix of recommended restaurants, bars, and cafes in between has been really helpful. The Google Maps links usually don't work, but you need to confirm the places still exist anyway, and adjust as needed.