Do any non-corpos actually like AI slop?

lohky@lemmy.world to Ask Lemmy@lemmy.world – 117 points –

I've found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?

I work on the Salesforce platform and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight that I have seen is the guy that made the tool to spam job applications to combat worthless AI job recruiters and HR tools.


ChatGPT is incredibly good at helping you with random programming questions, or at taking a full-ass error dump and telling you exactly what's wrong.

This afternoon I used ChatGPT to figure out the error that was preventing me from updating my ESXi server. I just copy-pasted the entire error text, which was one entire terminal window's worth of shit, and it knew that there was an issue accessing the zip. It wasn't smart enough to figure out "hey dumbass, give it a full file path, not a relative one," but eventually I got there. Earlier this morning I used it to write a CROSS APPLY instead of using multiple sub-select statements. It forgot to update the ORDER BY, but that was a simple fix. I use it for all sorts of other things we do at work too. ChatGPT won't replace any programmers, but it will help them be more productive.

I thought it was pretty fun to play around with making limericks and rap battles with friends, but I haven't found a particularly useful use case for LLMs.

I like asking ChatGPT for movie recommendations. Sometimes it makes some shit up but it usually comes through, I've already watched a few flicks I really like that I never would've heard of otherwise

Chat GPT enabled me to automate a small portion of my former job. So that was nice.

Personally I use it when I can't easily find an answer online. I still keep some skepticism about the answers given until I find other sources to corroborate, but in a pinch it works well.

Because of the way it's trained on internet data, large models like ChatGPT can actually work pretty well as a sort of first-line search engine. My girlfriend uses it like that all the time, especially for obscure stuff in one of her legal classes; it can bring up the right details to point you towards googling the correct document rather than muddling through really shitty library case page searches.

Especially when you use something with inline citations, like Bing.

I got high and put in prompts to see what insane videos it would make. That was fun. I even made some YouTube videos from it. I also saw some cool & spooky short videos that are basically "liminal" since it's such an inhuman construction.

But generally, no. It's making the internet worse. And as a customer I definitely never want to deal with an AI instead of a human.

100%. I don't need help finding what's on your website. I can find that myself. If I'm contacting customer support it's because my problem needs another brain on it, from the inside. Someone who can think and take action to help me. Might require creativity or flexibility. AI has never helped me solve anything.

I think people were already making the internet worse. AI just helps them make it worse faster.

I mean, yeah, but that difference is quite crucial.

People have always wanted to be the top search result without putting effort in, because that brings in ad money.
But without putting effort in, their articles were generally short, had typos, and there were relatively few such articles.

Now, LLMs allow these same people to pump out a hundred times as much garbage, consisting of lengthy articles in many languages. And because LLMs are specifically trained to produce text that is human-like, it's difficult for search engines to filter out these low-quality results.

AI is used extensively in science to sift through gigantic data sets. Mechanical Turk-style programs like Galaxy Zoo are used to train the algorithms. And scientists can use it to look at everything in more detail.

Apart from that AI is just plain fun to play around with. And with the rapid advancements it will probably keep getting more fun.

Personally I hope to one day have an easy and quick way to sort all the images I have taken over the years. I probably only need a GPU in my server for that one.

anyone who uses machine learning like that would probably take issue with it being called AI too

Even before AI, the corps have been following a strategy of understaffing, with the idea that software will make up for it, and it hasn't. It's beyond the pale how much work I have to do now for almost anything related to the private sector (as their customer, not as an employee).

You can whip up a whole album of aggressively mid music just cyberbullying the shit out of one person.

There's someone I sometimes encounter in a Discord I'm in who makes a hobby of doing stuff with them. From what I gather, they do more with it than just giving it a prompt and leaving it at that, at least partly because it doesn't generally give them something they're happy with initially, and they end up having to ask the thing to edit specific bits of it in different ways over and over until it does. I don't really understand what exactly this entails, as what they seem to most like making it do is code "shaders" for them that create unrecognizable abstract patterns, but they spend a lot of time talking at length about the technical parameters of various models and what they like and don't like about them, so I assume the guy must find something enjoyable in it all. That being said, using it as a sort of strange toy isn't really the most useful use case.

The image generators have been great for making token art for my dnd campaign. Other than that, no.

I for one welcome our new overlords. (for the funny only)

I think it’s a fun toy that is being misused and forced into a lot of things it isn’t ready for.

I’m doing a lot with AI but it’s pretty much slop. I use self hosted stable diffusion, Ollama, and whisper for a discord bot, code help, writing assistance, and I pay elevenlabs for TTS so I can talk to it. It’s been pretty useful. It’s all running on an old computer with a 3060. Voice chat is a little slow and has its own problems but it’s all been fun to learn.
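For anyone curious, the Ollama piece of a setup like that is only a few lines. Here's a minimal sketch using the ollama Python client, assuming the Ollama server is running locally and a model has already been pulled (the model name and prompt are placeholders, not what this setup actually runs):

```python
# Minimal local-LLM call via the ollama Python client (pip install ollama).
# Assumes `ollama serve` is running and a model has been pulled,
# e.g. `ollama pull llama3` - the model name below is just a placeholder.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Suggest a name for my Discord bot."}],
)
print(response["message"]["content"])
```

A Discord bot just wraps a call like that in a message handler and posts the reply back to the channel.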

It’s really helped me get recipes without website ads overtaxing my old surface.

Do I think it's generally useful? No, not at all.

But for very specific purposes it's worth considering as an option.

Text-to-image generation has been worth it to get a jumping-off point for a sketch, or to get a rough portrait for a D&D character.

Regular old ChatGPT has been good on a couple occasions for humor (again D&D related; I asked it for a "help wanted" ad in the style of newspaper personals and the result was hilariously campy).

In terms of actual problem solving... There have been a couple instances where, when Google or Stack Overflow haven't helped, I've asked it for troubleshooting ideas as a last resort. It did manage to pinpoint the issue once, but usually it just ends up that one of the topics or strategies it floats proves to be useful after further investigation. I would never trust anything factual without verifying, or copy/paste code from it directly, though.

I went for a routine dental cleaning today and my dentist integrated a specialized AI tool to help identify cavities and estimate the progress of decay. Comparing my x-rays between the raw image and the overlay from the AI, we saw a total of 5 cavities. Without the AI, my dentist would have wanted to fill all of them. With the AI, it was narrowed down to 2 that need attention, and the others are early enough that they can be maintained.

I'm all for these types of specialized AIs, and hope to see even further advances in the future.

I usually keep abreast of the scene so I'll give a lot of stuff a try. Entertainment wise, making music and images or playing dnd with it is fun but the novelty tends to wear off. Image gen can be useful for personal projects.

Work wise, I mostly use it to do deep dives into things like datasheets and libraries, or doing the boring coding bits. I verify the info and use it in conjunction with regular research but it makes things a lot easier.

Oh, also tts is fun. The actor who played Dumbledore reads me the news and Emma Watson tells me what exercise is next during my workout, although some might frown on using their voices without consent.

When it first came out, I had AI write fanfiction that no sane person would write, and other silly things. I liked that. That and trail cam photos of the Duolingo mascot.

I think my complaints are more with how capitalism treats new technology, though, and not just the lost jobs and the toll on the climate. Greed and competition are making it worse and worse as a technology; AI itself, within a year's span, has been enshittified. There are use cases where it can do a world of good, though, just like everything else that bad people ruin.

To me AI is useless. It's not intelligent, it's just a blender that blends up tons of results into one hot steaming mug of "knowledge". If you toss a nugget of shit into a smoothie while it's being blended, it's gonna taste like shit. Considering the amount of misinformation on the internet, everything AI spits out is shit.

It is purely derivative, devoid of any true originality, with a vague facade of intelligence in an attempt to bypass existing copyright law.

Your last line pretty much sums up my feelings entirely.

It's really good for all kinds of scams.

So...

Low Tier / Wannabe Corpos?

It's great at summarization and translations.

Might want to rethink the summarization part.

AI also hasn’t made any huge improvements in machine translation AFAIK. Translators still get hired because AI can’t do the job as well.

Thank you for pointing that out. I don't use it for anything critical, and it's been very useful because Kagi's summarizer works on things like YouTube videos friends link which I don't care enough to watch. I speak the language pair I use DeepL on, but DeepL often writes more natively than I can. In my anecdotal experience, LLMs have greatly improved the quality of machine translation.

LLMs are TERRIBLE at summarization

Downvoters need to read some peer-reviewed studies and not lap up whatever BS comes from OpenAI, who are selling you a bogus product lmao. I too was excited for the summarization use case of AI when LLMs were the new shiny toy, until people actually started testing it and got a big reality check.

Until it makes shit up that the original work never said.

The services I use, Kagi's autosummarizer and DeepL, haven't done that when I've checked. The downside of the summarizer is that it might remove some subtle things sometimes that I'd have liked it to keep. I imagine that would occur if I had a human summarize too, though. DeepL has been very accurate.

LLMs are especially bad at summarization for the use case of presenting search results. The source is just as critical a piece of information for search as the information itself, and LLMs obfuscate this critical source information and combine results from multiple sources together...

If used in the specific niche use cases it's trained for, as long as it's used as a tool and not a final product. For example, using AI to generate background elements of a complete image. The AI elements aren't the focus, and should be things that shouldn't matter, but it might be better to use an AI element rather than doing a bare-minimum element by hand. This might be something like a blurred-out environment background behind a piece of hand-drawn character art - otherwise it might just be a gradient or solid colour because it isn't important, but having something low-quality is better than having effectively nothing.

In a similar case, for multidisciplinary projects where the artists can't realistically work proficiently in every field required, AI assets may be good enough to meet the minimum requirements to at least complete the project. For example, I do a lot of game modding - I'm proficient with programming, game/level design, and 3D modeling, but not good enough to make dozens of textures and sounds that are up to snuff. I might be able to dedicate time to make a couple of the most key resources myself or hire someone, but seeing as this is a non-commercial, non-monetized project, I can't buy resources regularly. AI can be a good-enough solution to get the project out the door.

In the same way, LLM tools can be good if used as a way to "extend" existing works. It's a generally bad idea to rely entirely on them, but if you use it to polish a sentence you wrote, come up with phrasing ideas, or write your long if-chain for you, then it's a way of improving or speeding up your work.

Basically, AI tools as they are, should be seen as another tool by those in or adjacent to the related profession - another tool in the toolbox rather than a way to replace the human.

Tbh it's made a pretty significant improvement in my life as a software developer. Yeah, it makes shit up/generates garbage code sometimes, but if you know how to read code, debug, and program in general, it really saves a lot of grunt work and smooths over tedious language barriers. It can also be a solid rubber duck for debugging.

Basically any time I just need a little script to take x input and give me y output, or a regex, I'll have ChatGPT write it for me.

My primary use of AI is for programming and debugging. It's a great way to get boilerplate code blocks, bootstrap scripts, one-liner shell commands, creating regular expressions etc. More often than not, I've also learned new things because it ends up using something new that I didn't know about, or approaches I didn't know were possible.

I also find it's a good tool to learn about new things or topics. It's very flexible in giving you a high level summary, and then digging deeper into the specifics of something that might interest you. Summarizing articles, and long posts is also helpful.

Of course, it's not always accurate, and it doesn't always work. But for me, it works more often than not and I find that valuable.

Like every technology, it will follow the Gartner Hype Cycle. We are definitely in the times of "everything-AI" or AI for everything - but I'm sure things will calm down and people will find it valuable for a number of specific things.

Playing with it on my own computer, locally hosting it and running it offline, has been pretty cool. I find it really impressive when it's something open source and community driven. I also think there are a lot of useful applications for things that are traditionally not solvable with traditional programming.

However a lot of the pushed corporate AI feels not that useful, and there's something about it that really rubs me the wrong way.

I love chatgpt, and am dumbfounded at all the AI hate on lemmy. I use it for work. It's not perfect, but helps immensely with snippets of code, as well as learning STEM concepts. Sometimes I've already written some code that I remember vaguely, but it was a long time ago and I need to do it again. The time it would take to either go find my old code, or just research it completely again, is WAY longer than just asking chatgpt. It's extremely helpful, and definitely faster for what I'd already have to do.

I guess it depends on what you use it for ¯\_(ツ)_/¯.

I hope it continues to improve. I hope we get full open source. If I could "teach" it to do certain tasks someday, that would be friggin awesome.

I like some of the art. Especially things that would be difficult or almost impossible for a human to do.

One of the more interesting ones is horror. AI is super good at making uncanny or gross stuff that most people wouldn't even think to make.

I use ChatGPT and Copilot as search engines, particularly for programming concepts or technical documentation. The way I figure, since these AI companies are scraping the internet to train these models, it's incredibly likely that they've picked up some bit of information that Google and DDG won't surface because of SEO.

Google's AI summary is a godsend for certain types of queries and is generally useful.

I use perplexity.ai more than Google now. I still don't love it, and it's more a testament to how far Google has fallen than to the usefulness of AI, but I do find myself using it to get a start on basic searches. It is, dare I say, good at calorie counting and language-learning things. It helps calculate calorie-to-gram ratios, and the math is usually correct. It also helps me with German, since it's good at finding patterns and how German people typically say what I am trying to say, instead of just running it through a translator which may or may not have the correct context.

I do miss the days where I could ask AI to talk like Obama while he’s taking a shit during an earthquake. ChatGPT would let you go off the rails when it first came out. That was a lot of fun and I laughed pretty hard at the stupid scenarios I could come up with. I’m probably the reason the guardrails got added.

I just tried it and was pleasantly surprised.

It also helps me with German, since it’s good at finding patterns and how German people typically say

Depending on your first language I can offer you my assistance as a native german :)
If you want to, pm me or send a message to my email: lemmy@relay2moritz.mozmail.com

I've used it to fill in the gaps for a DND storyline. I'll give it a prompt and a couple of story arcs, then I'll tell it to write in a certain style, say a cowardly king or dogmatic paladin. From there it will spit out a story. If I don't like certain effects, I'll tell it to rewrite a section with some other detail in mind. It does a fantastic job and saves me some of the guesswork.

For those interested, I just asked it to generate a campaign with a quick prompt and this is what it spit out. Not perfect, but a good basis to build from:

Campaign Framework: The Relic of Shadows

Introduction: The Call to Adventure
Setting: The campaign begins in the small, picturesque fiefdom of Ravenwood, ruled by the benevolent Lord Alaric. Known for his wisdom and kindness, Lord Alaric's peace is shattered when a relic of immense power, the Amulet of Shadows, is stolen by a band of notorious highwaymen.
Hook: Lord Alaric seeks the help of the adventurers, promising wealth and favor in return for the retrieval of the Amulet of Shadows. The relic is said to possess the ability to manipulate shadows, providing its bearer with unparalleled stealth and the power to traverse through the Shadow Realm.

Act 1: The Journey Begins
Initial Quest: The adventurers set off to track down the highwaymen, following clues and engaging in minor skirmishes along the way. They learn that the highwaymen are more than simple thieves - they are a fragmented faction of former soldiers who turned to banditry after being betrayed by a corrupt noble.
Twist: Upon confronting the highwaymen, the adventurers discover that Lord Alaric had once wronged them in his past. The highwaymen had stolen the Amulet of Shadows not out of greed, but to prevent Alaric from using its dark powers. They reveal that Alaric's benevolent facade hides his darker ambitions.

Act 2: The Corrupted Lord
Return and Betrayal: Despite the highwaymen's revelations, the adventurers decide to return the Amulet of Shadows to Lord Alaric, hoping to resolve the situation peacefully. However, upon returning the relic, Lord Alaric's true nature is revealed. Corrupted by the power of the amulet, he becomes malevolent and orders the adventurers imprisoned in a distant, forgotten dungeon.
Dungeon Escape: The adventurers must navigate the treacherous dungeon, facing traps, monsters, and puzzles. During their imprisonment, they encounter an exiled hermit named Elias, a former friend of Lord Alaric who was cast out due to discovering the lord's dark ambitions.

Act 3: The Path of Vengeance
The Hermit's Guidance: Elias offers to guide the adventurers back to Ravenwood and reveals crucial insights into Lord Alaric's plans. He explains that Alaric seeks to use the Amulet of Shadows to control the Shadow Realm and extend his dominion. Elias knows secret paths and hidden truths that can aid the adventurers.
Critical Choice: The adventurers must protect Elias throughout their journey. If Elias perishes, they lose his valuable knowledge, making their mission significantly harder. Without Elias, they will face additional challenges and must uncover Alaric's weaknesses through other means.

Act 4: The Final Confrontation
Return to Ravenwood: The adventurers, now armed with the knowledge provided by Elias (or through their own investigations if Elias is lost), must confront Lord Alaric. They rally allies, face Alaric's corrupted forces, and uncover the hidden vulnerabilities in Alaric's plans.
Climactic Battle: The campaign culminates in an epic showdown with Lord Alaric in his fortified castle. The adventurers must disable the Amulet of Shadows, overcome Alaric's dark magic, and ultimately restore peace to Ravenwood.

Resolution
Victory or Defeat: Depending on the adventurers' actions, they may emerge victorious, freeing Ravenwood from Alaric's tyrannical grasp, or they might succumb to the shadows, leaving the fiefdom to its dark fate. Their choices throughout the campaign will determine the final outcome and the legacy they leave behind.

Feel free to add or adjust details to fit your players' preferences and playstyle!
This framework should give you a solid foundation for a thrilling and complex campaign. Happy adventuring! 🐉⚔️

And below was my prompt; it took me appx 5 minutes to write from my phone. I like that the AI thought that Elias would recall secret passages from his youth, which I'm thinking could help the adventurers bypass some of the guards. I definitely would want to workshop that highwaymen twist; I mean, what kind of party would be willing to hand a relic of shadows back to a ruler they perceive as corrupt? It needs something a bit more convincing.

——

Provide me a framework for a DND campaign that will contain the following story arcs. A lord of a small fiefdom seeks a group of traveling adventurers to return a relic (you choose the relic, it must have magical powers) that was stolen from him by a group of highwaymen. The story must include a twist about the highwaymen. When it is returned, the lord becomes corrupted and throws the party in a far-off dungeon. The adventurers must work their way back to the lord and seek their revenge, with the assistance of a self-exiled hermit who formerly knew the lord in his youth, whom they encounter along the way. If the hermit dies, the party loses insight into the lord's intentions and it makes it much more challenging to win the campaign.

For the most part it's not useful, at least not the way people use it most of the time.
It's an engine for producing text that's most like the text it's seen before, or for telling you what text it's seen before is most like the text you just gave it.

When it comes to having a conversation, it can passably engage in small talk, or present itself as having just skimmed the Wikipedia article on some topic.
This is kinda nifty and I've actually recently found it useful for giving me literally any insignificant mental stimulation to keep me awake while feeding a baby in the middle of the night.

Using it to replace thinking or interaction gives you a substandard result.
Using it as a language interface to something else can give better results.

I've seen it used as an interface to a set of data collection interfaces, where all it needed to know how to do was tell the user what things they could ask about, and then convert their responses into inputs for the API, and show them the resulting chart. Since it wasn't doing anything to actually interpret the data, it never came across as "wrong".
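A rough sketch of that pattern in Python, with everything hypothetical (ask_llm, fetch_chart, and the field list are stand-ins rather than the actual system described above): the model's only job is to translate free text into a constrained set of API inputs, and the real API does the rest.

```python
# Hypothetical sketch of an LLM as a language interface to a data API.
# ask_llm() and fetch_chart() are placeholders for whatever model client
# and charting backend are actually in use; only the pattern matters here.
import json

ALLOWED_FIELDS = {"metric", "region", "start_date", "end_date"}

def query_to_params(user_text: str) -> dict:
    prompt = (
        "Convert the user's request into JSON using only these keys: "
        f"{sorted(ALLOWED_FIELDS)}. Request: {user_text}"
    )
    raw = ask_llm(prompt)  # placeholder LLM call
    params = json.loads(raw)
    # Drop anything the model invented so only known inputs reach the API.
    return {k: v for k, v in params.items() if k in ALLOWED_FIELDS}

params = query_to_params("show me sales in Europe for last March")
chart = fetch_chart(params)  # placeholder data/chart API
```

Because the chart comes straight from the real API, the model never gets a chance to make the numbers up; the worst it can do is ask for the wrong slice.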

We are going to burn the planet to make a shitty chat bot.

I have found ChatGPT to be better than Google for random questions I have and for general advice on a whole bunch of things, though I do know when to go to other sources. I also use it to extrapolate data, come up with scheduling for work (I organise some volunteer shifts), and write lots of Excel formulae.

Sometimes it’s easier to check ChatGPT’s answers, ask follow up questions, look at the sources it provides and live with the occasional hallucinations than to sift through the garbage pile that google search has become.

I've never had AI code run straight off the bat - generally because if I've resorted to asking an AI, I've already spent an hour googling - but it often gives me a starting point to narrow my search.

There's been a couple of times it's been useful outside of coding/config - for example, finding the name of some legal concepts can be fairly hard with traditional search, if you don't know the surrounding terminology.

For the most part, it's worthless garbage.

I have had fun with ChatGPT, but in terms of integrating it into my workflow: no. It just gives me too much garbage on a regular basis for me not to have to check and recheck anything it produces, so it's more efficient to do it myself.

And as entertainment, it's more expensive than e.g. a game, over time.

It's an overly broad term, and the "hype" use-cases dominate the discussion in a way that lacks vision. I'm using machine learning to optimize hardware accelerated processing for particle physics. So, ya, it's not all slop. And what is, may very well evolve.

The only things I use and I know they have AI are Spotify recommendations, live captions on videos and DLSS. I don't find generative AI to be interesting, but there's nothing wrong with machine learning itself imo if it's used for things that have purpose.

That's a bit of a loaded question. By AI I assume you're referring to GenAI/LLMs rather than AI broadly.

  • I use it to correct my spelling on longer posts and I find that it improves the clarity and helps my point come across better.
  • I use Dall-E to create pictures I never could have before, because despite my interest in drawing, I just never bothered to learn it myself. GenAI enables me to skip the learning and go straight to creating.
  • I like that it can simulate famous people and allows me to ask 'them' questions that I never could in real life. For example, yesterday I spent a good while chatting with 'Sam Harris' about the morality of lying and the edge cases where it might be justified. I find discussions like this genuinely enjoyable and insightful.
  • I also like using the voice mode where I can just talk with it. As a non-native english speaker, I find it to be good practise to help me improve my spelling.

As a non-native english speaker, I find it to be good practise to help me improve my spelling.

To nitpick a bit, you're probably thinking of pronunciation. Spelling is written, not spoken.

It stimulates my brain, and I enjoy the randomness of it all. It's like how in nature things can be perfectly imperfect - random and still beautiful - unintentional and still emotion-inducing. Sure, I see the ethical issues with how an AI is trained and how capitalism cares more about profit than people leading to job loss or exploitation; however, those are separate issues in my mind, and I can still find joy in the random output of an AI. I could easily tunnel on the bad parts of AI and what's happening as the world devours a new technology, but I still see benefits it can bring in the medical research and engineering fields.

I’m not impressed with the LLMs. They do make great synonym generators.

Stable Diffusion and other image diffusers are genuinely amazing. And I'm not talking about asking Copilot to make Fortnite Shrek. There are incredibly complex ways in which you can fine-tune it and tell it how to shape and refine the image. It has and is going to continue to revolutionize graphical art. And once the math shrinks down, it's going to be everywhere.

I use AI every day. I think it's an amazing tool. It helps me with work, with video games, with general information, with my dog, and with a whole lot of other things. Obviously verify the claims if it's an important matter, but it'll still save you a lot of time. Prompting AI with useful queries is a skill set that everyone should be developing right now. Like it or not, AI is here and it's going to impact everyone.

I have a custom agent that I ask questions to; it goes and finds sources, then answers my question. It can do math by writing Python code and using the result. I use it almost exclusively instead of regular search. AI makes coding far quicker: giving examples, remembering shit I can't remember how to use, writing basic functions, etc.
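The "do math by writing Python" trick is easy to sketch. This is a bare-bones, hypothetical version (ask_llm is a placeholder for whatever model client is in use), not the commenter's actual agent, and exec-ing model output like this is only sane for personal tooling:

```python
# Hypothetical sketch of letting a model compute via generated Python.
# ask_llm() is a placeholder; exec() on untrusted output is a real risk,
# so anything beyond personal tooling should sandbox this step.

def solve_with_python(question: str):
    code = ask_llm(
        "Write plain Python that answers the question below. "
        "Store the final answer in a variable named result.\n" + question
    )
    namespace: dict = {}
    exec(code, namespace)           # run the generated snippet
    return namespace.get("result")  # read back the computed answer

print(solve_with_python("What is 17.5% of 2,340?"))
```

The point is that the arithmetic is done by the interpreter rather than by the model guessing at digits.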

Writing emails. Making profile pictures.

I used to enjoy the tldr bot on lemmy till some fascist decided to kill it instead of just letting people block it.

I like messing with the locally hosted AI available. We have a locally hosted LLM trained on our command media at work that is occasionally useful. I avoid it otherwise if I didn't set it up myself or know who did.

Going through data and writing letters are the only tasks I've seen AI be useful for. I still wouldn't trust it as far as I could kick its ass, and I'd check it well before submitting it for work.

Kitboga has used AI (STT, LLMs, and TTS) to waste the time of scammers.

There are AI tools being used to develop new cures which will benefit everyone.

There are AI tools being used to help discover new planets.

I use DLSS for gaming.

I run a lot of my own local AI models for various reasons. Whisper - for Audio Transcriptions/Translations.

Different Diffusion Models (SD or Flux) - for some quick visuals to recap a D&D session.

Tesseract OCR - to scan an image and extract any text that it can find (makes it easy to pull out text from any image and make it searchable).

Local LLMs (Llama, Mixtral) for brainstorming ideas, reformatting text, etc. It's great for getting started with certain subjects/topics, as long as I verify everything that it says.

For fun I'll probably setup GLaDOS like what was done here: https://www.reddit.com/r/LocalLLaMA/comments/1csnexs/local_glados_now_running_on_windows_11_rtx_2060/
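For anyone who hasn't tried the Whisper or Tesseract pieces mentioned above, they are each only a few lines. This is a generic sketch with placeholder file names, not the exact setup from this comment:

```python
# Local transcription and OCR with off-the-shelf libraries:
#   pip install openai-whisper pytesseract pillow
# (pytesseract also needs the tesseract binary installed on the system).
import whisper
import pytesseract
from PIL import Image

# Transcribe an audio file with a small local Whisper model.
model = whisper.load_model("base")
transcript = model.transcribe("session_recording.mp3")["text"]

# Extract any text Tesseract can find in an image, making it searchable.
extracted = pytesseract.image_to_string(Image.open("handout_photo.png"))

print(transcript)
print(extracted)
```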

It's basically to replace their shitty chat bots. It's ok, I'm doing the course for it now. You guys hiring?

duck.ai is very helpful for niche/specific questions I have but can’t find online. It’s also helpful for super quick questions that don’t really warrant a forum post. However, I always take things with a grain of salt.

You know those people who have no creative skills or drive, but want to be thought of as a creative?

You know those people who have this really neat idea for an app, but they don't plan on making it themself because they're "just an ideas guy"?

You know those people who will offer to pay in exposure? I mean, do you really need to be paid just to draw some pictures anyway?

You know those guys who send you a picture they got from google images and claim this to be a girl they know?

That's the vast majority of the AI audience. I could probably sum that up with the word "parasite", but I wanted to be thorough.

I don't like commercial "AI" period.

That said, I did find some use for chatGPT last year. I had it explain to me some parts of Hawking's paper on black hole particle creation, this was only useful for this one case because Hawking had a habit of stating something is true without explaining it and often without providing useful references. For the record, chatGPT was not good at this task, but with enough prodding and steering I was eventually able to get it to explain some concepts well enough for my usage. I just needed to understand a topic, I definitely wasn't asking chatGPT to do any writing for me, most of what it spits out is flat out wrong.

I once spent a day trying to get it to solve a really basic QM problem, and it couldn't even keep the maths consistent from one line to another.