What are you using AI (ChatGPT, Perplexity AI) for in your daily routine?

GreenSofaBed@lemmy.zip to Ask Lemmy@lemmy.world – 44 points –

Been using Perplexity AI quite a bit lately for random queries, like travel suggestions.

So I started wondering what random things people are using it for to help with daily tasks. Do you use it more than Google/etc?

Also if anyone is paying for Pro versions? Thinking if it's worth it paying for Perplexity AI Pro or not.


I don't.

Yeah I've used it occasionally to goof around with and try to get silly answers. And I've occasionally used it when stuck on an idea to try to get something useful out of it...the latter wasn't too successful.

Quite frankly I don't at all understand how anyone could possibly be using this stuff daily. The average consumer doesn't have a need imo.

Basically nothing. I'm good at using search engines and the porn feels boringly samey from it so the only use case left for me is making meme images, which is rare at best.

I don't use it for daily tasks. I've been tinkering around with local LLMs for recreation. Roleplay, being my dungeon master in a text adventure. Telling it to be my "waifu". Or generating amateur short stories. At some time I'd like to practice my foreign language skills with it.

I haven't had good success with tasks that rely on "correctness" or factual information. However, sometimes I have it draft an email for me or come up with an argument for a text that I'm writing. That happens every other week, not daily. And I generously edit and restructure it afterwards, or just incorporate some of the paragraphs into my final result.

D&D-related things actually seem like a decent use case. For most other things, I don't understand how people find it useful enough to build daily tasks around it.

Agree. I've tried some of the use cases other people mentioned here, like summarization, "online" search, tech troubleshooting, recipes... All I got were sub-par results and output that needed extensive fact-checking and reworking. So I can't really relate to those experiences. I wouldn't use AI for tasks like that as of now.

And this is how I ended up with fiction and roleplay. It seems to be better suited for that. And somehow AI can do small coding tasks, like writing boilerplate code and helping with some of the more tedious work. At some point I should feed another of my real-life problems to the current version of ChatGPT, but I doubt it'll solve it for me. And it can come up with nice ideas for stories. Unguided storywriting gets dull in my experience. I guess the roleplaying is nice, though.

Edit: And I forgot about translation. That also works great with AI.

Nothing. I'm a software developer, but don't use any AI tools with any regularity. I think I only asked ChatGPT or similar something once about programming because the documentation was awful, but I do remember that as having been helpful.

The only thing that might be close, though not directly, is translation software (kanji be hard).

The only thing that might be close, though not directly, is translation software (kanji be hard).

Well that's the dirty little open secret, isn't it? These "AI" programs are just beefier versions of the same kinds of translation, predictive text, "smart" image editing, and chatbot software we've had for a while. Significantly more sophisticated and more powerful, but not exactly new. That's why "AI" is suddenly appearing everywhere: in many cases, a less sophisticated predecessor of it was already there, they just didn't use the marketing language OpenAI popularized.

I legit had a spelling and grammar checking add-on that rebranded itself to "AI", and it did absolutely nothing different than what it already did.

And the whole point is that absolutely none of this is "AI" in any meaningful way. It's like when that company tried to brand their new skateboard/segway things from a few years ago as "hoverboards". You didn't achieve the thing, you're just reducing what the term means to make it apply to your new thing.

I’m a professional software dev and I use GitHub Copilot.

It’s most useful for repetitive or boilerplate code where it has an existing pattern it can copy. It basically saves me some typing and little typo errors that can creep in when writing that type of code by hand.

It’s less useful for generating novel code. Occasionally it can help with known algorithms or obvious code constructs that can be inferred from the context. Prompting it with code comments can help although it still has a tendency to hallucinate about APIs that don’t exist.

I think it will improve with time. Both the models themselves and the tools integrating the models with IDEs etc.

I used Copilot for a while (in a Rust codebase fwiw) and it was... both useful and not for me? Its best suggestions came with some of the short-but-tedious completions like path().unwrap().to_str().into() etc. Those in and of themselves could be hit-or-miss, but useful often enough that I might as well take the suggestion and then see if it compiles.

Anything longer than that was OK sometimes, but often it'd be suggesting code for an older version of a particular API, or just trying to write a little algorithm for something I didn't want to do in the first place. It was still correct often enough when filling out particular structures to be technically useful, but leaning on it more I noticed that my code was starting to bloat with things I really should have pulled off into another function instead of autocompleting a particular structure every time. And that's on me, but I stopped using copilot because it just got too easy for me to write repetitive code but with like a 25% chance of needing to correct something in the suggestion, which is an interrupt my ADHD ass doesn't need.

So whether it's helpful for you is probably down to how you work/think/write code. I'm interested to see how it improves, but right now it's too much of a nuisance for me to justify.

  • Proofread/rewrite emails and messages
  • Recipes
  • Find specs for computers, gadgets, cars etc.
  • Compare products
  • Troubleshoot software issues
  • Find meaning of idioms
  • Video game guide/walkthrough/reviews
  • Summarise articles
  • Find out if a website is legit (and ownership of the sites)

I don't see any need for Pro versions. ChatGPT 4 is already available for free via Bing. I simply use multiple AI tools and compare the results. (Copilot / Gemini / Claude / Perplexity)

I've only used ChatGPT and it's mostly good for language-related tasks. I use it for finding tip-of-my-tongue words or completing/paraphrasing sentences. Basically fancy autocorrect. It's also good at debugging stuff sometimes when the language itself doesn't give useful errors (looking at you sql). Other than that, any time I've asked for factual information it's been wrong in some way or simply not helpful.

I use LLM bots mostly

  • as websearch - e.g. "list sites containing growing conditions for pepper plants";
  • for practical ideas - e.g. "suggest me a savoury spice mix containing ginger"

I never use them for the info itself. It's foolish to trust a system that behaves like a specially irrational assumer. (It makes shit up, it has the verbal intelligence of a potato, and fails to follow simple logic.)

I'm not using any Pro version.

For reference: nowadays I'm using ChatGPT 3.5 and Claude 1.2, both through DuckDuckGo. I used Gemini a fair bit, but ditched it - not just for privacy, but because Gemini's "tone" rubs me the wrong way.

Yeah Gemini's tone is weird. It is constantly reminding you that Gemini does not have an opinion on anything. It actively tries to avoid giving definitive answers whenever possible.

That's related; what grates on me the most is how patronising it sounds - going out of its way to lecture you with uncalled-for advice, assuming your intentions behind the prompt (always the worst ones), and wasting your time with "social grease". And this is clearly not a consequence of the underlying tech, as neither Claude nor ChatGPT does it as badly; it's something Google tailored into Gemini.

I don't word good and ChatGPT bro helps me use my nouns.

That's only kind of a joke, I have anomic aphasia and use ChatGPT to help me find the words when I lose them. I used to use Google but it doesn't really work anymore.

Yeah. Wtf did Google do to itself lol. I'm in the same boat usage-wise. No diagnosis, but severe ADHD, so I assume it's dyslexia on my end lol

I'm going to continue to monitor this thread but so far I'm surprised at how little use most are getting from AI tools. And the highest upvoted comment is that one does NOT use AI tools in their daily routine.

So much hype around AI recently and I'm not seeing/hearing a lot of REAL, PRACTICAL use case for it.

Interesting.

Replaced forums like Stack for me. Both could give me incorrect information, but one doesn't care how dumb my questions are.

My job pays for premium, and it's been useful for clearing up certain issues I've had with tutorials for the language I'm currently learning. In an IDE, Copilot can get a bit in the way, and its suggestions aren't as good as they once were, but I've got the settings down to where it's a fancy spell check and synergises well with vim motions to bang out some lines.

It's only replaced the basic interactions I would have had without having to wait for responses or having a thread ignored.

As an SEO - hell no. Those that did use it got penalized by the latest algorithm update from Google.

As a DM? Yes! It helped me write a nice poem for a bard that will hopefully give my players some context to what they will be encountering as they move further in my campaign.

I'm using local models. Why pay somebody else or hand them my data?

  • Sometimes you need to search for something and it's impossible because of SEO, however you word it. An LLM won't necessarily give you a useful answer, but it'll at least take your query at face value, and usually tell you some context around your question that'll make web search easier, should you decide to look further.
  • Sometimes you need to troubleshoot something unobvious, and using a local LLM is the most straightforward option.
  • Using an LLM in scripts adds a semantic layer to whatever you're trying to automate: you can process a large number of small files in a way that's hard to script, as it depends on what's inside them.
  • Some put together an LLM, a speech-to-text model, a text-to-speech model and function calling to make an assistant that can do what you tell it without you touching your computer. Sounds like plenty of work to make it all play together, but I may try that later.
  • Some use RAG to query large amounts of information. I think it's a hopeless struggle, and the real solution is an architecture other than a variation of Transformer/SSM: it should address real-time learning, long-term memory and agency properly.
  • Some use LLMs as editor-integrated coding assistants. I've never tried anything like that yet (I do ask coding questions sometimes though), but I'm going to at some point. The 8B version of LLaMA 3 should be good and quick enough.
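The "semantic layer in scripts" idea above can be sketched pretty simply. Below is a minimal, hypothetical Python example that asks a locally served model to put a one-word label on a snippet of text; it assumes the server exposes an OpenAI-compatible /v1/chat/completions route (as llama.cpp's server binary does), and the port, model name, and label set are all placeholders.

```python
import json
import urllib.request

# Hypothetical local endpoint: llama.cpp's server binary exposes an
# OpenAI-compatible /v1/chat/completions route (port 8080 here is a guess --
# use whatever you started the server with).
API_URL = "http://localhost:8080/v1/chat/completions"

def build_request(text: str) -> dict:
    """Build a chat-completion payload asking the model to label one snippet."""
    return {
        "model": "local",  # many local servers accept any model name
        "messages": [
            {"role": "system",
             "content": "Reply with exactly one word: invoice, note, or other."},
            {"role": "user", "content": text[:2000]},  # keep prompts short
        ],
        "temperature": 0,  # we want stable labels, not creativity
    }

def classify(text: str) -> str:
    """Send one snippet to the local server and return its one-word label."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"].strip().lower()

if __name__ == "__main__":
    # e.g. loop this over a folder of text files to sort them by what's inside
    print(classify("Amount due: $120, payable within 30 days."))
```

The point is that the loop around `classify` is ordinary scripting; only the "what is this file about?" judgment is delegated to the model.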

Got any links teaching how to run a self hosted RAG LLM?

Never ran RAG, so unfortunately no. But there're quite a few projects doing the necessary handling already - I'd expect them to have manuals.

Got any links to those please?

Thank you. The "jonfairbanks" github repo is exactly what I was looking for, because FUCK sending any of my data to an AI company using their APIs for them to ingest my information to sell off to others.

You are the best!

You're welcome!

As far as I understand, all of them can be made to work locally (especially if your local model is served via an OpenAI-compatible API, e.g. see llama.cpp's server binary) with varying degrees of effort required.

Not comment-OP, but you could start here: !fosai@lemmy.world. The latest post is to a RAG tutorial, and there are various other resources in the sidebar.

Mainly as a search engine replacement, finding docs or information without getting terrible search results. Also for recipes, it’s really good at recipes.

GPT 4 though, 3.5 is about as sharp as a bag of wet mice.

I use chatGPT as a diary. Whenever I feel down or frustrated with feelings I can't quite describe, or just insecure, I start a session and just pour out my heart. I complain, yammer on and on about what's bothering me, and just say whatever comes to mind. Basically all the stuff I would never bother a friend or loved one with because I know it'll come across as needy and I don't want to push this on them.
And all it does is give positive and supportive comments, ask some follow-up questions, maybe make an attempt at giving a helpful suggestion. I know what I'm talking with, I am under no illusions that this is anything but a big mathematical model, but it helps me get through some difficult emotions by just letting it all out. There's no judgement and that's kind of nice. I could just write a journal, but the interaction and positive feedback adds a little motivation for me. And of course it goes without saying that I keep names and other personal details to myself :)

Oh and I use it for some cloud architecture problems, some coding and other tech stuff. But that's not very interesting.

Also, if you use ChatGPT and haven't done so, be sure to use their privacy page and opt-out of having your chats used for model training. https://privacy.openai.com/policies?modal=take-control
Not sure for US, but it works for EU citizens.

I swapped over from GPT-4 to Perplexity Pro. It's almost taken over as my default search now. I use it to troubleshoot home assistant issues, or even game mod issues. The Pro version is nice because it will actually ask you to clarify certain things before giving a better output.

It performs well with all the usual email replies, writing, etc. I do like that I have the option to switch between GPT, Claude, and Mistral within Perplexity; the last will actually return results if I ask for help with stuff like torrents.

I once used AI to make a mockup of a t-shirt design I had in my head, just out of curiosity. It made exactly what I wanted, and now I don't feel like it's my design anymore. Who knows what artists it took from. Even after redrawing it I lost appreciation for it. Haven't touched AI since, minus some bored conversations with dead celebrity models.

I don’t trust the search results to be accurate, its desire to please the user makes it unreliable. When it comes to image generation it takes from artists. AI is great for menial time consuming tasks like say cropping out the background of an image for example but because of the reasons above I don’t tend to use it all that much, and my respect for it is quite low.

I tried to use it to find me a decent phone under $500 and half of the listed options were $900+ so uhh.. Not too useful.

Tbf, I think ChatGPT's internet dump is a few years old. So maybe it recommends a banger of a phone from 2020 or so, and the pricing data is now garbage.

The iPhone 13 was listed, and while it's a good phone, it's uhh.. not $500

Oh haha. Yeah, I've kinda had that sort of problem too, where I'll give it certain criteria or a filter that it sticks to perfectly, then there's just one oddball that sticks out like a sore thumb.

I've tried paid versions of ChatGPT, Claude, and Gemini. I am currently using Gemini, and it is working reasonably well for me.

I mostly use it to replace searches. I haven't used Google in years, but mainly relied on DuckDuckGo until SEO made it less useful. My secondary use case is for programming. I tend to jump around to a lot of different languages and frameworks, and it's hugely helpful to get sample code describing what I want to do when I don't know the syntax.

Once in a great while, I will have it rewrite something for me. That is mostly for inspiration if I want to change the tone of something I wrote (then I'll edit). I think that all of the LLMs suck at writing.

I think I'm using ChatGPT because that's what Bing's chatbot is. I've searched for so many dumb Python questions that it started promoting paid-for courses at one point. It gave me exactly what I was looking for the other day, but that doesn't mean it never hallucinates answers, or provides the answer to the question it assumes I must be searching for (rather than the one I am searching for).

It gave me exactly what I was looking for the other day, but that doesn’t mean it never hallucinates answers

This is why I don't trust LLMs for programming advice. I suck at programming and tools like ChatGPT would be great if it could actually translate what I want into something that I could just plug into my existing code and run with. Instead, I get answers to questions that reference the API of an entirely different programming language, make up fake functions, or just don't operate the way I described, if at all.

Maybe some of my problems with AI are just "skill issue" and I need to figure out how to phrase shit correctly just like how you had to know exactly how to tickle search engines back in the day by not asking a question verbatim but plugging in keywords to have it give you what you actually wanted instead of some nonsense that it thought you wanted. We called it "Google-Fu", but it has become less important now with SEO.

Also, I feel like LLMs are just creatively bankrupt. Case in point, I have a friend who is leaning on AI tools to help craft his next homebrew D&D campaign, and I thought that was a great use of that technology so I tried it out as well and, well... it ended up generating a lot of the same narrative that he got from it, including re-using proper nouns for places/people. Everything was just so generic fantasy and boring, even when you fed it your own ideas it just spit back out regurgitated fantasy tropes and stuff that sounds like it could have come out of a setting guide somewhere (and probably did if it was trained on that dataset).

Hmmm. I guess I only use it for generating images based on song lyrics to post to The Lyrics Game here on Lemmy.

There's now a privacy-respecting offer on DDG, use the !ai bang to get to it.

To answer your question, any "natural language" query of modest importance, where asking a question like "will there be any more movies in that series by this director?" is easier than checking the usual movies websites.

How does that work? You just type that before your query and it gives you ai answers?

Yup, there's many of them. I use DDG all the time, but this feature probably works in other search engines too - I just don't know

From any search bar configured to use DDG, just type !ai followed by your query

You can do this from the DDG website too of course. Other useful "bangs" include !w for Wikipedia, or !aw for the Arch Linux wiki, or even just !img for image search.

https://duckduckgo.com/bangs

I've been using Google's Gemini to write cover letters for job applications. Just plug in the job description, do a little proofreading and tweaking, and boom. It's made the process so much easier for "personalized" cover letters.

Hierarchical location data for a given place to verify location records in genealogy

I don't like the idea of wasting energy on inefficient things so I don't use "AI".

A.I. use is directly responsible for carbon emissions from non-renewable electricity and for the consumption of millions of gallons of fresh water, and it indirectly boosts impacts from building and maintaining the power-hungry equipment on which A.I. runs.

As Use of A.I. Soars, So Does the Energy and Water It Requires

I've got a local LLM set up for code suggestions and run GitHub copilot for spots where the local isn't good enough. I can start writing out a thought and pseudo implementation and have a mostly viable real implementation instantly which I can then modify to suit my needs. It also takes a lot of the busywork out of things that need boilerplate. The local is trained on the style of my repos so I can keep up with style standards too which is helpful. Also great for explaining legacy code and coming up with more semantic variable names in old code too.

What are you using for your local installation?

I was using Mixtral, but I'm currently testing out the new Llama 3 models. Decent improvement, and hopefully we'll see some good fine-tuned versions of it soon.

I occasionally use ChatGPT, I don't find it that useful though. I mostly use it to summarize long text, etc. I also like Phind, which can be used without creating an account. I would never pay for AI.

I stopped using Perplexity; only used it briefly. ChatGPT? OpenAI specifically?

Lots of things.

To generate backstories for conversational AI "friend" characters, because sometimes they have to be long and include lots of info.

To summarize Reddit posts asking for advice. You know, sometimes people make them longer than they need to be. It summarizes them for me when I'm lazy.

Reframe verbiage

Just a few

I use Perplexity or the Google one formerly known as Bard for when I want specific information but I don't want to do multiple searches plus reading several sites to find the answer.

I use Bing to generate pictures to entertain myself. Sometimes I post them.

ChatGPT Plus and GitHub Copilot… but less every day. They just don't keep up enough with current APIs and are often confused and unable to actually provide useful solutions.

I mostly use ChatGPT Plus as a Google replacement nowadays. And Copilot as a, sadly, mostly useless autocomplete.

I use Kagi, they provide access to all the main models in a chat interface and have a mode that feeds search engine results to them. It's mostly replaced search engines for me. For programming work I find them very useful for using unfamiliar tools and libraries, I can ask it what I want to so and it'll generally tell me how correctly. Importantly, the search engine mode has citations. $25 a month, but worth it.

I am not paying for a Pro version on a monthly basis. I am using openrouter.ai: I load $5 and pick the bot when I need to use one.

Still have $3 left after a few months.

Pay-as-you-go AI doesn't cost that much.

Ollama running dolphin-mistral. It's been neat using it with the Continue plugin in VS Code, nothing life-changing though. I think further embeddings and RAG over curated data sources would go a long way to make it more useful for me.
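For anyone curious what talking to Ollama from a script looks like: it serves a small HTTP API on port 11434 by default, and with `stream` set to false its /api/generate endpoint returns a single JSON object. A rough sketch (error handling omitted; the model tag assumes you've pulled dolphin-mistral):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate; stream=False means one JSON reply."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST a prompt to the local Ollama daemon and return the completion."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(generate("dolphin-mistral", "Explain RAG in one sentence."))
```

Tools like the Continue plugin are essentially wrapping this same API for you inside the editor.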

Once upon a time I had it writing summaries of some of the comic series I was selling on Whatnot. Did that twice. Haven’t used it for anything else.

I use Gemini as a replacement for Google search. Still kinda shit, but everything else wants money or bans VPN users.

I run Mistral locally for anything personal or fun but I need a new GPU. My 1080ti is finally showing its age.

Bing's AI search is pretty competent, and works fine via VPN.

I use ChatGPT 4 and 3.5 via the API sometimes.

I've recently started using Claude Opus, which I prefer for coding and maybe in general, apart from the crappy message limit and lack of stop and edit buttons. I think I might stick with it and cancel GPT.

In general I use AI to help with my job as a general IT guy, marker, admin, data entry, etc. I use it to proofread, to get ideas, to transform data, to write or comment code, to translate, to transcribe, and to bug-fix. I always feel I could be making better use of it in my day, but it is transformative for the stuff it does.

For the API I'm using an Alfred workflow on Mac that gives me really quick keypress access to prompts and questions on selected text. I love it.

I work as a research economist, use half a model zoo and various APIs regularly, and have even written a small R package for working with LLM APIs. ChatGPT Pro in the interface for programming questions (helping me write or document my R code; for example, I have used it to easily translate tax laws into R functions to make microsimulations [of course I double-check them]). Anthropic's or Groq's APIs to process large amounts of documents fast (for example, creating JSON lists about papers to make them more easily searchable). I have one small script with many useful prompts that really helps me rephrase texts (e.g. "Please rephrase this paragraph to be more clear and concise. Give me {n} versions. {paragraph}"), which I have included in my browser. I used Stable Diffusion to generate images for my Christmas cards.
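A prompt-library script like that is mostly just string templating. A rough Python analogue of the idea (the commenter's version is in R; the prompt names and fields below are made up for illustration):

```python
# A tiny library of reusable prompt templates, filled in at call time.
# Template names and fields here are illustrative, not the commenter's.
PROMPTS = {
    "rephrase": (
        "Please rephrase this paragraph to be more clear and concise. "
        "Give me {n} versions.\n\n{paragraph}"
    ),
    "summarise": (
        "Summarise the following abstract in two sentences:\n\n{text}"
    ),
}

def render(name: str, **fields) -> str:
    """Fill the named template with the given fields and return the prompt."""
    return PROMPTS[name].format(**fields)

# Example: render("rephrase", n=3, paragraph="Taxes are complicated.")
# The rendered string is then sent to whichever model/API you prefer.
```

Keeping the templates in one place makes it easy to reuse the same prompts from a browser shortcut, an editor macro, or a batch job.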

I'll typically only use it for language and coding problems.

Synonyms, word for xyz, how can I make this sentence more clear.

But if I can't find anything on Google I'll ask it other questions.

Not a daily routine.

I've been doing some visual character stuff for my fictional story!
It's nice to actually see some characters visually for the story - it adds to the motivation to work on it 😁

I don't, unless I am trying to make my code more efficient, want to know how to do a simple programming task, or want some additional info on something I don't really understand.

For programming I use it a lot, though that's because the book hasn't been helpful and the professor is non-existent.

I use the Bingilator to create images for invites to my weekly donut meetings.

I use it to write Ansible scripts; simply because Ansible sucks so fucking hard and I hate to do it myself.