How often do you use "AI" to reply to your messages, if at all?

pufferfischerpulver@feddit.de to Ask Lemmy@lemmy.world – 86 points –

The recent chat bot advances have pretty much changed my life. I used to get anxiety from receiving emails and IMs, sometimes even from friends. I lost friendships over not replying. My main issue is that I sometimes get completely stuck in a loop of how to formulate things in the best way, to the point of just abandoning the contact. I went to therapy for that and it helped. But the LLM advancements of recent years have been a game changer.

Now I plop everything into ChatGPT, cleaning out personal information as much as possible, and let the machine write. Often I'll make some adjustments but just having a starting point has changed my life.

So, my answer, I use it all the fucking time.


I prefer to call them LLMs (Large Language Models). It’s how they are referred to in the industry and I think it’s far more accurate than “AI”

Thank you, it's frustrating seeing (almost) everyone call them AI. If/when actual AI comes into existence I think a lot of people are going to miss the implications as they've become used to every LLM and its grandmother being called AI.

AI is correct though, it is a form of AI. It's just a more general term and LLM is more specific.

I debated whether I should write LLMs or AI. Generally I dislike AI as well, but chose it due to its popularity. Definitely share your sentiment though!

Wait it's large language models? I thought it was language learning models

I’ve never done this and I guess I need to go yell at a cloud somewhere if this is about to become a thing.

Better yet, yell into the cloud, let an LLM respond

Understandable! I wouldn't want to just talk to a chat bot either, whilst thinking I'm talking to a friend.

The way I use it is mostly to get a starting point from which I'll edit further. Sometimes the generated response is bang on though and I admit I have just copy pasted.

I'd be pretty mad if I knew someone was sending personal texts/emails to openai

I wouldn't if they were stripping it of personal information. I lack imagination for what they could possibly do to harm anyone by having somewhat of an insight into mundane and trivial everyday problems.

Um, don't you know this is Lemmy? You're supposed to be insanely protective over all aspects of your privacy. If I found out that someone copy/pasted a tidbit of a conversation I was in, after stripping all personal info from it, into an LLM, I'd change my name, forge my birth certificate to alter my DOB, move states (twice), get a new phone number and shoot that person in the face. It's the only way to keep the government or corpos from spying on my very secret very important conversations.

Did you use ai to write this post?

No actually! It's not a problem for me to write text per se. Actually it's a significant part of my job to write guidelines, documentation, etc.

What's difficult about replying to people is putting my opinions in relation to the other's expectations.

Did you use AI to write THAT response?

I tell people I have "phone anxiety"... but it sucks. Family, friends, new acquaintances... it doesn't matter, trying to reply or answer a phone can feel like torture sometimes. Have absolutely lost a few friends over this. You're not alone

....................................... Literally never.

And it's never once crossed my mind.

And if one of my friends told me they did this to talk to me, I think I'd just stop talking to them, because I want to talk to them. If I wanted to be friends with a computer, I'd get a Tamagotchi.

If one of my friends did this to overcome their anxiety, I'd empathize and congratulate them on figuring out a way to make it work. If I were in OP's shoes and one of my friends did to me what you just said, I'd say bullet dodged and carry on.

Cool. I'd ask them not to do it with me, and if they did anyway, the above would happen.

If it was someone who was not a current friend who did this, then we're incompatible as friends, I wish you well in life but I won't be part of it, that's clearly better for both of us.

No, and I'd say it's probably not the solution to your problem that you think it is.

Reading the rest of these comments, I can't help but agree. If I found out a friend, family member, or coworker was answering me with chatgpt I'd be pretty pissed. Not only would they be feeding my private conversation to a third party, but they can't even be bothered to formulate an answer to me. What am I, chopped liver? If others find out you're doing this, it might be pretty bad for you.

Additionally, you yourself aren't getting better at answering emails and messages. You'll give people the wrong impression about how you are as a person, and the difference between the two tones could be confusing or make them suspicious - not that you're using chatgpt, but that there's something fake.

This is in the same ballpark as digital friends or significant others. Those don't help with isolation, they just make you more isolated. Using chatgpt like this doesn't make you a better communicator, it just stops you from practicing that skill.

can't even be bothered to formulate an answer to me. What am I, chopped liver?

OP isn't doing this because they don't care. It's the exact opposite. They care so much and stress so much about it that they have difficulty in expressing themselves.

I agree that I don't think it's helpful for OP to continue doing this long term, but all of these comments here are so judgemental to OP.

You're right, but I expect a lot of people are going to have that reaction. It will feel to them like a slight and an invasion of privacy. OP has to find a way to deal with the anxiety; this is an unhealthy coping mechanism.

No. I have the same problem you do, which is harming my friendships and networking.

But I definitely am not going to reach for the solution you did. Because if anyone notices, it will effectively nuke that relationship from orbit.

Putting myself in the position of a friend who realized that you were using gpt or something to form thoughts...

I'd be impressed that you found that solution, and then I'd want to check to be sure that the things you said were true.

Like, if I found out that 90% of your life as I knew it was just mistakes the computer made that you didn't bother to edit, I'd be bummed and betrayed, and it would turn out how you said.

On the other hand, if everything you sent is true to life and you formed the computer's responses into your personality, I'd be very much impressed that you used this novel tool to keep in contact and overcome the frozen state that had kept you from responding before.

@Usernameblankface that's a kind and generous interpretation, and I hope it's the one OP's friends will come to.

I suspect it's likely to be seen as an outsourcing of the friendship, though.

I never used it, but damn are people here judgy. I don't understand how it's a personal insult if someone used it in the way you're describing. As long as your actual thoughts and emotions are what you send, who cares you used a tool to express them.

Anxiety is rough. I wish people were more understanding.

Thank you! I probably could have been more elaborate in the op. But it doesn't seem like people really paid attention to it regardless. I don't just plop in a message I received and go with whatever response. I sanitize the received message of personal information as much as possible, then I let the LLM know what I want to say, and then use the response as a starting point which I'll further edit. Admittedly sometimes I get something that is just bang on and I'll copy paste. But it rarely happens since the model can't match my personal writing style.

As you recognise, it's still my thoughts and feelings. It's akin to having a secretary writing drafts for you maybe? Not that I would know anything about having a secretary, ha!

This sounds like the plot of a horror movie. It all starts out with good intent, but pretty soon you notice your AI responses seem a little off. You try to correct it but it in turn corrects you. You reach out to family and friends but they dislike your 'new' tone and are concerned about your sudden change in behavior…

my resume is 90% chatGPT... the information is true, but i could never write in that style. it got me two jobs, so i know it works.

i used it a couple of times to rewrite stuff given a context. like i wrote the email but it came out in a vague passive aggressive tone, and letting chatGPT rewrite it will reword it to be more appropriate given the context.

Solution: Write everything in a passive aggressive tone to vent out your frustrations, let the bots do the cleanup.

New problem: Get used to speaking in a passive aggressive tone. Oh shit.

The one time I drafted an email using AI, I was told off for being "incredibly inappropriate", so heck no. I have no idea what was inappropriate either; it looked fine to me. Spooky that I can't notice the issues, so I don't touch it.

If you're using it right then there'd be no way for the recipient to even tell whether you'd used it, though. Did you forget to edit a line that began with "As a large language model"?

Once you know someone is using it, it's very easy to know when you're reading AI generated text. It lacks tone and any sense of identity.

While I don't mind it in theory, I am left with the feeling of "well if you can't be bothered with this conversation..."

I mean, with the vast majority of inter-departmental emails, no, one can't be bothered, because it's pointless busywork communication.

With a little care in prompting you can get an AI to generate text with a very different tone than whatever its "default" is.

Yeah, you can, which is why it's lazy af when someone just serves you some default wikipedia-voice answer.

My point is largely this: I can talk to AI without a human being involved. They become an unnecessary middle man who adds nothing of use other than copying and pasting responses. The downside of this is I no longer value their opinion or expertise, and that's the feedback they'll get from me at performance review time.

I've told one individual already that they must critically assess solutions provided to them by ChatGPT as, if they don't, I'll call them out on it.

They become an unnecessary middle man who adds nothing of use other than copying and pasting responses.

This, I hate it so much when people take it on themselves to insert Chat GPT responses into social media threads. If I wanted to know what the LLM has to say I would have just asked the LLM.

It's the modern equivalent of pasting links to Wikipedia pages without reading them, except that because you're being direct with your question you have a higher confidence that what you're parroting makes some sort of sense.

links to Wikipedia

Once I was trying to make small talk irl with someone and asked her how many official languages were in her country of origin, and she stone cold looked it up on Wikipedia and started reading aloud to me from it at great length.

The kicker was I happened to already know about it, I just thought it might be something she would like to talk about from her perspective as an emigrant.

On the flipside, I'm kind of annoyed by posts cluttering up places like asklemmy that could be trivially answered by asking an AI (or even just a simple search engine). I can understand the opinion-type questions or the ones with a social aspect to them that you can reasonably want an actual human to give you advice on (like this one), but nowadays the purely factual stuff is mostly a solved problem. So when those get asked anyway I'm often sorely tempted to copy and paste an AI answer simply as a https://letmegooglethat.com/ style passive aggressive rebuke.

Fortunately my inner asshole is well chained. I don't release him for such trivialities. :)

Please share this email lol, I wanna see it

First of all, I can really empathize with your anxieties. I've lost contact with a few penpals years ago because of similar issues and I still hate myself for it.
I don't use ChatGPT for writing my replies, because my English is crap and my manner of writing is distinct enough that any friend can immediately spot a real response from a generated one (not enough smileys, for one :)
But I still have similar anxieties. So if I feel anxious about writing something, I do sometimes give a general description of the original mail ("A friend of mine wrote about her mother's funeral", "a family member lost his cat", etc.) and give it the reply I've written so far (names and personal details removed).
I then explain that I feel anxious about my reply and worry if I hit the right tone. I never ask it to write for me, only to give critique where necessary and advice on how to improve (for good measure I always add some snide remarks on how it sounds too fake to ever pass as a human so don't even bother trying, which it always takes in good humor because.. well.. AI :)
I ignore most of the suggestions because they sound like a corporate HR communiqué. But what's more important is that it usually tries to tell me that I was thoughtful, considerate, and that that little light-hearted joke at the end was just sweet enough to add a personal touch without coming across as insensitive.
Just to get some positive feedback, even from software that was designed specifically for that purpose, gives me that little push to hit the send button and hope for the best. I wouldn't dare to ask someone else for advice because it would be an admission of how weak and insecure I feel about expressing myself in the first place, which would ramp up my anxiety by making it a 'big thing'.

Anyway, I can understand the animosity people show against AI. And I'm happy for those who don't need or want it.

PS: This reply was 100% written without any use of AI, direct or indirectly. I did spend a good half hour on it before feeling confident enough to hit "Post" :)

This is pretty much how I use it as well!! I wasn't very detailed in the op.

And yes, the positive feedback is gold!

I use it whenever I need to write in Corporate Speak. Resume, cover letter, important email.

I also avoid putting in sensitive information, so it needs editing. I've found that it will usually leave placeholders where specific information is needed, (name here) for example.

It is soooo much better than smashing out some sloppy attempts and rewording it until I get the style right.

Zero. It's important to me to be personal in my interactions.

I try not to. With work email, you should write as short and to the point as possible; no one really has time to read an essay when they're trying to get their job done.

Part of the reason I use Lemmy is for writing practice, because I want to prove that as a person that I can't be replaced by an AI. This place basically forces me to think on my feet to write quickly on an ever changing set of random topics and get my point across clearly and effectively.

Showing ChatGPT how to respond to my messages sounds like more work than just replying to them myself.

I mostly just use it for laughs. I'll usually ask GPT to explain things from the nihilistic viewpoint and get amazing results.

I also use it to rewrite emails that I need to send for work. I have a tendency to over-explain things and use a cold tone when I write, so sometimes I'll tell it "rewrite this to be more concise and empathetic" and it does a really good job of cleaning it up.

Maybe what you're doing with artificial intelligence isn't exactly a good idea.

Never, I have no issue with formulating a lot, I just tend to not immediately reply and then forget.

If you can't genuinely talk with me without the need for an llm then I'd say we weren't really friends to begin with.

When I text people I don't know well, I use Goblin Tools, which uses ChatGPT to "translate" how I speak into neurotypical speak, which generally keeps them from hating me for writing without all the added fillers. Also great for professional emails and texts because it makes me look a lot "smarter" thanks to all the buzz words and phrases it adds for me.

I have a problem with writing out my thoughts in a concise way that flows well. I can't think of the correct word. I think it starts with "C". So I use ChatGPT like so:

  1. I write my thoughts out as a stream of consciousness.
  2. I tell ChatGPT what I am trying to communicate.
  3. I paste the stream of consciousness.
  4. It assembles it as a reply formatted as a message or email.
  5. I read over it to ensure it got everything correct and worded everything the correct way.
  6. I tell it what I want changed or explain why I don't like a certain part, and it adjusts as needed.

Then I edit the output as I need. I don't always do the editing and just send the output, depends how I feel and how well it does. I am thinking I am just going to start appending a default "Due to my brain injury, ChatGPT may have assisted me in composing this message" in my email signature, with a link to a screenshot of my process on imgur or something.
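For anyone curious how that workflow maps onto an actual chat prompt, here's a minimal sketch (all names here are hypothetical; this is one way steps 1–4 could be assembled for any chat-completion API, not the poster's actual tooling):

```python
# Sketch: turn the workflow above into a prompt-assembly helper.
# Steps 1-3 become one user message; the resulting list could be
# sent to any chat-completion endpoint for step 4.

def build_reply_prompt(intent: str, stream_of_consciousness: str) -> list[dict]:
    """Assemble chat messages from an intent plus a raw draft."""
    system = (
        "You rewrite a stream-of-consciousness draft into a clear, "
        "concise reply formatted as a message or email."
    )
    user = (
        f"What I am trying to communicate: {intent}\n\n"
        f"My raw draft:\n{stream_of_consciousness}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_reply_prompt(
    intent="politely decline the meeting invite",
    stream_of_consciousness="can't make tuesday, maybe next week? sorry",
)
print(messages[1]["content"])
```

Steps 5 and 6 are then just follow-up user messages in the same conversation ("change X", "I don't like Y because...").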

I look at it like a psychologist or speech pathologist helping me write/assemble a letter. It's awesome.

And I can usually tell immediately when something has been written by ChatGPT lol. Unless they've gone through and edited the whole thing.

Me? When I use text communication? Never. Closest I get to using AI is letting a trusted acquaintance look over the message if I'm communicating with organisations. But beyond that? Nope.

I'm with the guy who's at the top of the thread as of me typing. "If you can't genuinely talk with me without the need for an LLM then I'd say we weren't really friends to begin with." ~@mriormro

Very similar experience for me, I used to procrastinate a lot. I still do, but now it's less about not knowing how to approach the message.

I'd say I use it about 30% of the time, usually when the message or email is important or I want to make sure it won't be misinterpreted

Initially I used it a lot more, but after a while I got more confident that I could just do it myself. Often it would just say the same thing I said, but reworded in a more complicated way

Your last paragraph is interesting! I can feel similar effects actually. I feel more and more confident in the way I would reply. Most of the times I know what and how to write, seeing that validated helps.

And ChatGPT definitely has a tendency toward complicated wording.

Add the instruction "use simple terms" to your prompts, should improve your results in that scenario.
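As a tiny sketch of that tip (hypothetical helper, nothing tool-specific), the style instruction can just be appended to whatever you were going to ask:

```python
# Sketch: tack a plain-language style instruction onto a prompt
# before sending it to a chat bot.

SIMPLE_STYLE = "Use simple terms and short sentences."

def with_simple_style(prompt: str) -> str:
    """Append the style instruction so replies come back less convoluted."""
    return f"{prompt}\n\n{SIMPLE_STYLE}"

print(with_simple_style("Rewrite this email to be more empathetic."))
```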

I only use AI to search for words I know the definition of but not the word

Never and if I found out someone did this to me I would be very insulted.

I'll use the preset responses sometimes for Google, but that's as far as it goes. It's very cool that you've found a way to help your anxiety by using it though.

Only if I'm writing something very long. Otherwise it's not worth the effort.

As much as I respect the bots, I have too much of a system going on to let even a bot verbally decide something for me, though I do converse with it for other reasons. One of my favorite things to do is to ask the bots the most recent thing it has learned. It delivers.

Never done that. I've used AIs for help with essays, but never for personal messages.

Never, I like to use AI by itself, but not on other people. I'm human, not a robot. I can type my own replies

I am using Github Copilot to automatically generate commit messages. So far so good 😊

Pretty often here, if I have an obscure question that is unlikely to receive clarification from the community, I'll look it up using ai, check the article, share that.

Case in point, I just got curious about using quantum entanglement to get around the speed-of-light limit on communication, and rather than ask and wait to get lucky enough for a scientist to reach out, I asked Bing and read a cool article.

Never, unless the quick replies on my texting app is considered AI. I only use it to answer yes and no questions.

I used it once to tone down a comment I thought was too cutting. Edited, of course.

Definitely use it sometimes to see what a mathematical formula thinks the right tone is for the kind of email I want to write for example

If I have to write out instructions for something in a work email, I will sometimes get chatgpt to write the instructions for me and then test / edit as needed.

Generally Im not a fan of wordy LLM writing though... Mainly only use it as a coding tool while simultaneously working on my patience.

There's a sidequest in Like A Dragon Gaiden that literally mocks the danger of using AI to do this. I don't know your position but I'm sure your friends aren't gonna mind if you just send them a sentence response.

I've found in my life that the more I think about how I should reply, the more likely I am to say something they dislike; I'll overthink and add something that implies I'm not talking about X. And the reply will come back, "If you aren't talking about X, why did you bring it up?"

My problem is that I don't like letters, email and the ten other messaging apps that I'm obliged to maintain. I like commenting on Lemmy, contact me here.

Hey there! While I don't use ChatGPT to generate full responses for me, I do find it super handy for refining my ideas and finding the right words. Sometimes I get stuck in the same loop of formulating things, and having ChatGPT as a creative companion helps me break through those mental roadblocks. I also use it to summarize and analyze others' comments, making the process of crafting responses a lot smoother. It's like having a linguistic sidekick! How about you? Do you have any specific ways you leverage the power of language models?

(This response was written for me by ChatGPT after I explained to it how I make use of it. I don't think it got it quite right, but it wouldn't be as funny if I edited it any, so there it is.)

I recognized this as ChatGPT, purely because of the enthusiastic greeting

I recognised it too but it wasn't the greeting. Not sure what it was. Maybe the way it tends to droningly string points together. It's also more verbose than humans.

And they also often start their text with copying your question

True. They remind me a bit of first-year undergraduate essays.

Which is likely because so many of those were in its training set.

Fortunately, it's really easy to tell ChatGPT that you want it in a different style than that. It's just that if you don't specify a style (which I didn't here do for comedic effect, and most people don't because they don't think to do so) it falls back to that base mediocrity. It doesn't know you want a good essay unless you actually tell it that.

So those prompts with the string of "masterful award-winning genius essay that'll move the reader to tears" superlatives actually do have something to them. One of the more amusing recent discoveries is that you can actually get ChatGPT to give better results if you add "if your answer is really good I'll tip you $200." to your prompt. It's not actually interested in money, it just "knows" that paid-for results are better than freebies so you're really just giving it guidance on what sort of answer you want from it.

@FaceDeer I wish the people who were using it to make websites knew this.

Too often in the past few weeks I've been looking for information and stumbled on sites that sound plausible for a couple of paragraphs and then degenerate into rambling, repetitive US undergraduate essays.

For me it was the leading questions at the end, LLMs often end with leading questions so that you have a way to continue the conversation from what I've found.