What are your AI use cases?
I've seen a lot of sentiment around Lemmy that AI is "useless". I think this tends to stem from the fact that AI has not delivered on, well, anything the capitalists pushing it have promised. That is to say, it has failed to meaningfully replace workers with a less expensive solution. AI products that actually attempt to replace people's jobs are incredibly expensive (and environmentally irresponsible), and the companies simply lie and say they're not - it's subsidized by that sweet, sweet VC capital so they can keep the lie up. And I say "attempt" because AI is truly horrible at actually replacing people. It's going to make mistakes, and while everybody's been trying real hard to make it less wrong, it's just never gonna be "smart" enough not to need a human reviewing its behavior. Then you've got AI being shoehorned into every little thing that really, REALLY doesn't need it. Judged on those terms, I'd say that AI is useless too.
But AI has been very useful to me. For one thing, it's much better at googling than I am. It saves me time by summarizing articles down to the broad strokes, and I can decide from there whether I want to go into the details. It's also a good idea generator - I've used it in creative writing just to explore things like "how might this story go?" or "what are interesting ways to describe this?". I never really use what comes out verbatim - whether image or text - but it's a good way to explore, and seeing things expressed in ways you never would've thought of (plus the juxtaposition of seeing them next to very obvious expressions) tends to push your mind in new directions.
Lastly, I don't know if it's just because there's an abundance of Japanese language learning content online, but GPT-4o has been incredibly useful for learning Japanese. I can ask it things like "how would a native speaker express X?" and it gives me good answers that even my Japanese teacher agreed with. It can also give some incredibly accurate breakdowns of grammar. I've tried it with less popular languages like Filipino and it just isn't the same, but as far as Japanese goes, it's like having a tutor on standby 24/7. In fact, that's exactly how I've been using it: I have it grade my own translations and give feedback on what could've been said more naturally.
All this to say: AI, when used as a tool rather than a dystopian stand-in for a human, can be a very useful one. So, what are some use cases you guys have where AI actually is pretty useful?
It's perfect for topics you have professional knowledge of but don't have perfect recall for. It can bring forward the context you need to be refreshed on, and you can fact-check it because you are an expert in that field.
If you need boilerplate code for a project but don't remember a specific library or built-in function that tackles your problem, you can use AI to generate an example, which you can then fix to make it run the way you wanted (see the sketch below).
Same thing with finding config examples for a program that isn’t well documented but you are familiar with.
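To make that concrete, here's the kind of boilerplate I mean - a hypothetical example, with the file and column names made up. You ask for "group this CSV by region and sum the amounts", then patch up whatever comes back:

```python
# Hypothetical AI-drafted boilerplate: I knew pandas could do this but had
# forgotten the exact calls. Verify the column names against your real data.
import pandas as pd

df = pd.read_csv("sales.csv")                  # made-up file
totals = df.groupby("region")["amount"].sum()  # made-up columns
print(totals.sort_values(ascending=False))
```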
Sorry all my examples are tech nerd stuff because I’m just another tech nerd on lemmy
On the flip side, I've found it to be quite bad at that. I can generally count on the AI answer to be fundamentally wrong.
Might depend on your industry. It's garbage at G-code.
It probably depends on how many good examples it has to pull together from Stack Overflow etc. It's usually fine writing Python, JavaScript, or PowerShell, but if you have any level of specific needs it will just hallucinate a fake module or library, naming it with a couple of words from your prompt turned into a function name. Still, it's usually good enough to get me started - either I write my own code from there, or it gives me enough context to google what the actual module is and find some real documentation. So my revised qualifier: useful to subject matter experts, if there is enough training data.
AI is really good as a starting point for literally any document, report, or email that you have to write. Put in as detailed a prompt as you can, describing content, style, and length, and cut out two-thirds or more of your work. You'll need to edit it - somewhat heavily, probably - but it gives you the structure and baseline.
This is one of my two use cases for AI. I only recently found out, after a lifetime of being told I'm terrible at writing, that I'm actually really good at technical writing - things like guides, manuals, etc. that are quite literal and don't have any soul or personality. This means I'm awful at writing things directed at people, like emails and such. So AI gives me a platform where I can enter exactly what I want to say and tell it to rewrite it in a specific tone or level of professionalism, and it works pretty great. I usually have to edit what it gives me so it flows better, or remove inaccurate language, but my emails sound so much better now! It's also helped me put more personality into my resume and portfolio. So who knows, maybe it'll help me get a better job?
Yeah, I'm really bad at structuring my writing and coming up with ways to phrase some things, especially when starting with a blank page. Having an existing base to work off of and edit helps me immensely.
It is sometimes good at building SQL code examples, but almost always needs fine-tuning since it doesn't know the schema specifics.
Having said that, one time it gave me code that resulted in an error, so I went back to GPT and said "This code you gave me is giving this error, can you fix it?" - and all it would do is say something like "Correct, that code is wrong and will give an error."
I just pass the CREATE TABLE statements in after the instructions. It does pretty well up to 2 or 3 tables, but it will start to make mistakes when things get complicated.
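So the paste-in ends up looking roughly like this (tables made up for illustration):

```python
# Sketch of how the prompt gets assembled: instructions first, then the DDL
# so the model actually knows the schema. These tables are illustrative only.
ddl = """
CREATE TABLE customers (id INT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE orders (id INT PRIMARY KEY, customer_id INT, placed_at DATE);
"""
prompt = ("Write a SQL query that returns each customer's name and their "
          "number of orders placed in 2024. Use only these tables:\n" + ddl)
print(prompt)  # paste this into the chat
```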
On the plus side, it'll generate tedious code very well, and double-checking it is less draining than writing it yourself - especially because I make more typos than it does. I often use it to get a starting point, then write the business logic myself.
I've done several AI/ML projects at nation/state/landscape scale. I work mostly on problems that can be solved - or at least goals that can be worked towards - using computer vision, but I also do all kinds of other ML stuff.
So one example is a project I did for this group: https://www.swfwmd.state.fl.us/resources/data-maps
Southwest Florida Water Management District (aka "Swiftmud"). They had been doing manual updates to a land-cover/land-use map and wanted something more consistent, automated, and faster. They have several thousand square miles under their management, and they needed annual updates on how land was being used and what cover type or condition it was in. I developed a hybrid approach using random forests, superpixels, and U-Nets to look for regions of likely change, and then to try to identify the "to" and "from" classes of each change. I'm pretty sure my data products and methods are still in use largely as I developed them. I built those out right on the back of U-Nets becoming the backbone of modern image analysis (think early 2016), which is why we still had some RF in there (dating myself).
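If anyone wants a feel for how that hybrid fits together, here's a toy sketch with synthetic data - emphatically not the production pipeline, just the shape of it (assumes scikit-image and scikit-learn):

```python
# Toy superpixel + random-forest change screening. Real imagery, labels, and
# the U-Net stage are omitted; random arrays stand in for the two dates.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
year1 = rng.random((256, 256, 3))   # stand-in for the older image
year2 = rng.random((256, 256, 3))   # stand-in for the newer image

# Segment the newer image into superpixels.
segments = slic(year2, n_segments=400, compactness=10, start_label=0)

# One feature row per superpixel: mean band values from both dates.
X = np.array([
    np.concatenate([year1[segments == s].mean(axis=0),
                    year2[segments == s].mean(axis=0)])
    for s in range(segments.max() + 1)
])

# Labels would come from the manually updated maps; faked here.
y = rng.integers(0, 2, size=len(X))  # 1 = "likely change"

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
change_score = clf.predict_proba(X)[:, 1]  # regions to hand to the U-Net stage
```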
Another project I did was for the State of California. I developed both the computer vision and the statistical approaches for estimating outdoor water use for almost all residential properties in the state. Those numbers are, I think, still in use today (in fact, I know they are) and haven't been updated since I developed them. That project ran at a 1-square-foot pixel resolution and was just about wall-to-wall mapping for the entire state - effectively putting down an estimate for every single scrap of turf grass in California, and whether the state was going to allocate water budget for you or not. So if you got a nasty-gram from the water company about irrigation, my bad.
These days I work on a small team focused on identifying features relevant to wildfire risk. I'm trying to see if I can put together a short video of what I'm working on right now as I post this.
Example, fresh off the presses, for some random house in California:
This is really cool, thanks for sharing.
I've learned more C/C++ programming from the GitHub Copilot plugin than I ever did in my entire 42 years of life. I'm not a professional, though, just a hobbyist. I used to struggle through PHP and other languages back in the day, but after a year of Copilot I'm now leveraging templates and the C++ STL with ease and feelin' like a wizard.
Hell maybe I'll even try Rust.
Every LLM I've tried sucks at Rust. The Book is great, though - you learn all of the essentials of Rust, and it's also pretty easy to read.
I imagine that's because Rust is still a relative newcomer to the industry and C/C++ have half a century of code out there.
Genuinely, nothing so far.
I've tinkered with it, but I basically don't trust it. I don't trust it to summarise documents or articles accurately, I don't trust it to perform a full and comprehensive search, and I don't trust it not to provide me false or inaccurate information.
LLMs have potential to be useful tools, but what's been released is half baked and rushed to market as part of the current bubble.
Why would I use tools that inherently "hallucinate" - i.e. are error-strewn? I don't want to fact-check the output of an LLM.
This is in many ways the same as not relying on Wikipedia for information. It's a good quick summary but you have to take everything with a pinch of salt and go to primary sources. I've seen Wikipedia be wildly inaccurate about topics I know in depth, and I've seen AI do the same.
So pass until the quality goes up. I don't see that happening in the near future as the focus seems to be monetisation, not fixing the broken products. Sure, I'll tinker occasionally and see how it's getting on but this stuff is basically not fit for purpose yet.
As the saying goes, all that glitters is not gold. AI is superficially impressive, but once you scratch the surface and have to actually rely on it, it's just not fit for purpose beyond a curio for me.
If you already kinda know programming and are learning a new language or framework it can be useful. You can ask it "Give me an if statement in Tcl" or whatever and it will spit something out you can paste in and see if it works.
But remember that AI are like the fae: Do not trust them, and do not eat anything offered to you.
Software developer here, working for a tiny company of 27 employees and 2 owners. We use Copilot in Visual Studio Professional, and it's saved us countless hours because it learns from your code base. When you make enterprise software there are a lot of standards and practices that have been honed over time, which means we write the same things over and over and over again. That is a massive time sink, and this is where LLMs come in: they can do the boring stuff for us so we can actually solve the novel problems we are paid for. If I write a comment describing what I'm about to do, it will complete it.
For boilerplate stuff it's mostly 100% correct; for other things it can be anywhere from 0-100%, and even when it's not completely correct, it takes less time to make a slight change than to do it all ourselves.
One of the owners is the smartest person I've ever met, and he's also the lead engineer; if he can find it useful, then it has its use cases.
We even have an AI-based tool he built that watches our project. If I create a new model or add a field to a model, it will scaffold a lot of stuff: the schemas (mutations and queries), the TypeScript layer that integrates with GraphQL, and basic views. This alone saves us about 45 minutes per model. Sure, this could likely be achieved without an LLM, but it's a useful tool and we have embraced it.
Sorry to hear about your codebase being leaked.
This isn't something that happens when you're paying for a premium subscription. Sure, they could go against the terms and conditions, but that would mean lawsuits and such.
OP seems to be talking about generative AI rather than AI broadly. Personally I have three main uses for it:
Your third point reminded me of a kid who recently committed suicide, and whose only friend was an AI bot.
(commenting from alt account as lemm.ee is down again)
Don't worry, I'm relatively satisfied with my life and have no desire to end it. I'm just in the lonely chapter of my life where I've outgrown my old friend group but haven't yet found the new ones. I don't consider AI my friend; it's just something to bounce my esoteric thoughts off of.
I've been learning Docker over the last few weeks, and it's been very helpful for writing and debugging docker-compose configs. My server now has 9 different services running on it.
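As an example, it'll happily draft something like this for you to check against each image's docs (the service, ports, and paths here are just illustrative):

```yaml
# Hypothetical LLM-drafted compose service -- verify against the image docs.
services:
  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"          # host:container
    volumes:
      - ./nextcloud:/var/www/html
    restart: unless-stopped
```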
I use it for python development sometimes, maybe once per day. I'll paste in a chunk of code and describe how I want it altered or fixed and that usually goes pretty well. Or if I need a generic function that I know will have been coded a million times before I'll just ask ChatGPT for it.
It's far from "useless" and has made me somewhat more productive. I can't see it replacing anyone's job though, more of a supplemental tool that increases output.
I've definitely run into this as well in my own self-hosting journey. When you're learning, it's easier to have it just draft up a config - then learn what the options mean after the fact - than it is to RTFM from the beginning.
I switched to Linux a few weeks ago, and I'm running a local LLM (which was stupidly easy to set up compared to Windows), which I ask for tips with regex, bash scripts, common tools to get my system running the way I prefer, and translations/definitions. I don't copy/paste code; I let it explain stuff step by step, consult the man pages for the recommended tools, and then write my own stuff.
I will extend this to coding in the future. I do have a bit of coding experience, but it's mainly Pascal, which is horrendously outdated. At least I already have enough basic knowledge to know when the internal logic of what the LLM is spitting out is wrong.
Which local LLM do you use?
I'm currently using Alpaca with a few LLMs installed, but I really like llama2-uncensored, which is pretty fast and responsive on my system.
Llama 2 is really ancient now.
Try Qwen 2.5, whatever size fits on your system (probably 14B?). It's like night and day compared to llama2, and 34B/72B are like API-model smart.
Thanks for the recommendation, will try it out over the next few days :-)
I can link you to a good quantization, depending on your hardware!
And if you need long context (Qwen 2.5 is 32K, or potentially more), I can also point to the appropriate framework/settings.
AI isn't useless, but its current forms are just rebranded algorithms, with every company racing to get theirs out there. AI is a buzzword for tools that were never supposed to be labeled AI. Google has been doing summary excerpts for like a decade. People blindly trusted them and always said "Google told me". I'd consider myself an expert on one particular car, and I can't tell you how often those "answers" were straight-up wrong or completely irrelevant to that type of car (hint: the Lincoln LS does not have a blend door, so heat problems can't be caused by a faulty blend door).
You cite Google searches and summarization as its strong points. The problem is, if you don't know anything about the topic, or not enough, you'll never know when it makes mistakes. Mistakes are possible in Wikipedia, journal articles, forum posts, and classes too. However, those get reviewed by knowledgeable people as they spread; your AI results don't get that review. Your AI results pretend to be master of the universe, so their range of results is impossibly large, and that then goes on to be taken as pure fact by a typical user. Sure, AI is a tool that can educate, but it provably gets enough wrong that I'd call it a net-neutral change to our collective knowledge. Just because it gives an answer confidently doesn't mean it's correct. It has a knack for missing context from more opinionated sources and reporting the exact opposite of what is true. Yes, it's evolving, but keep in mind that one of the mega tech companies put out an AI that recommended using Elmer's glue to hold cheese to pizza and claimed cockroaches live in penises. ChatGPT had its hallucinatory days too; they just got forgotten due to Bard's flop and Cortana's unwelcome presence.
Use the other two comments currently here as an example. Ask it to make some code for you and see if it runs. Do you know how to code? If not, you'll have no idea whether the code works correctly. You don't know where it was sourced from; you don't know what it was trying to do. If you can't verify it yourself, how can you trust it to be accurate?
The biggest gripe for me is that it doesn't understand what it's looking at. It doesn't understand anything. It regurgitates some pattern of words it saw a few times. It chops up your input and tries to match it to some other group of words. It bundles it up with some generic, human-friendly language and tricks the average user into believing it's sentient. It's not intelligent, just artificial.
So what's the use? If it was specifically trained for certain tasks, it'd probably do fine. That's what we really already had with algorithmic functions and machine learning via statistics, though, right? But parsing the entire internet in a few seconds? Not a chance.
Edit: can't believe I there'd a their
I use it like an intern/other team member since the non-profit I work for doesn't have any money to hire more people. Things like:
Taking transcripts of meetings and turning them into neat and ordered meeting minutes/summaries, or pulling out any key actions/next steps
Putting together objectives and agendas for meetings based on some loose info and ideas I give it
Summarise the key points from articles/long documents I don't have time or patience to read through fully.
Making my emails sound more professional/nicer/make up for my brainfarts
Giving me ideas on how to format/word slides and documents depending on what tone I want to employ - is it meant for leadership? Other team members?
Make my writing more organised/better structured/more professional sounding
Writing emails in foreign languages with a professional tone. Caveat is I'm fluent enough in those languages to know if the output sounds right. Before AI I would rely on google translate (meh), dictionaries, language forums, etc and it would take me HOURS to write a simple email using the correct terminology. Also helpful to check grammar and sentence structure in ways that aren't always picked up by Word.
I sound more like a robot than an actual robot, so I ask the robot to reword my emails/messages to sound more "human" when the need arises (like a colleague is leaving, had a baby, etc).
Bouncing off ideas. This doesn't always work and I know it doesn't actually have an opinion, but it helps get the ball rolling, especially if I'm struggling with procrastination.
If my sentences are too long for a document, I ask it to shorten/reword and it's pretty capable of doing that without losing too much of the essence of what I want to get across
Of course I don't just take whatever it spits out and paste it. I read through everything, make sure it still sounds more or less like "me". Sometimes it'll take a couple of prompts to get it to go where I want it, and takes a bit of review and editing but it saves me literal hours. It's not necessarily perfect, but it does the job. I get it's not a panacea, and it's not great for the environment, but this tech is literally saving my sanity right now.
I couldn't let an AI do any of this for me.
As in... I couldn't let anyone make my emails more professional or whatever.
It's not like I think my emails are always the best and can not be improved upon, it's just that my emails are "me".
I never have cause to write an email in a foreign language.
To each their own ¯\_(ツ)_/¯
Troubleshooting technology.
I've been using Linux as my daily driver for a year and a half and my learning is going a lot quicker thanks to AI. It's so much easier to ask a question and get an answer instead of searching through stack overflow for 30 minutes.
That isn't to say that the LLM never gives terrible advice. In fact, two weeks ago, I was digging through my logs for a potential intruder (false alarm) and the LLM gave me instructions that ended up deleting journal logs completely.
The good far outweighs the bad for sure tho.
The Linux community specifically has an anti-AI tilt that is embarrassing at times. LLMs are amazing, and much like random strangers on the internet, you don't blindly trust/follow everything they say, and you'll be just fine.
The best way I think of AI is that it's going through a bubble not unlike the early days of the internet. There were a lot of overvalued companies and scams, but it still ushered in a new era.
Another analogy that comes to mind is how people didn't trust wikipedia 20 years ago because anyone could edit it, and now it is one of the most trusted sources for information out there. AI will never be as 'dumb' as it is today, which is ironic because a lot of the perspective I see on AI was formed around free models from 2023.
I really hate AI as it is now, but only because of all the weird marketing people are doing for it - pretending they don't know how it works, like "omg it's not supposed to do that, idk why it's doing that". Anyone can see its potential, though, once they see through all the SEO bullshit. Like you said, it's in its infancy now; it will take a long time to truly mature, and it will be amazing when/if it does.
There is an inherent catch-22: the people who believe AI is a fluke will rarely have the time or interest to see the vast improvements since the initial launch 2-3 years ago.
I am as frugal as they come, yet I shell out money for the paid version, so I have a reference point for its helpfulness that is grounded in experience. It is almost impossible to establish that reference point with those who have no (recent) experience and refuse to get any.
It's human nature/Dunning-Kruger, so I can't be too frustrated, I suppose.
I used it to write a GUI frontend for yt-dlp in python so I can rip MP3s from YouTube videos in two clicks to listen to them on my phone while I'm running with no signal, instead of hand-crafting and running yt-dlp commands in CMD.
Also does HD video rips with audio encoding, if I want.
It took us about a day to make a fully polished product over 9 iterative versions.
It would have taken me a couple weeks to write it myself (and therefore I would not have done so, as I am supremely lazy)
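The heart of it is barely more than this - a stripped-down sketch, not the actual app, and it assumes yt-dlp is installed and on your PATH:

```python
# Two-click MP3 ripper: paste a URL, hit the button, yt-dlp does the rest.
import subprocess
import tkinter as tk

def rip_mp3():
    # -x extracts audio; --audio-format mp3 re-encodes it for the phone.
    subprocess.run(["yt-dlp", "-x", "--audio-format", "mp3", entry.get()])

root = tk.Tk()
root.title("MP3 ripper")
entry = tk.Entry(root, width=50)
entry.pack()
tk.Button(root, text="Rip MP3", command=rip_mp3).pack()
root.mainloop()
```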
I take pictures of my recipe books and ask ChatGPT to scan and convert them to the schema.org recipe format so I can import them into my Nextcloud cookbook.
Woah, cool! Can you share your prompt for that? I'd like to try it.
I don't do anything too sophisticated, just something like:
Scan this image of a recipe and format it as JSON that conforms to the schema defined at https://schema.org/Recipe.
Sometimes it puts placeholders in that aren't valid JSON, so I don't have it fully automated.. But it's good enough for my needs.
I've thought that the various Nextcloud cookbook apps should do this for sites that don't have the recipe object.. But I don't feel motivated to implement this myself.
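If you'd rather script the loop than use the chat UI, something like this should work (a sketch assuming the OpenAI Python SDK; the model name and file path are illustrative):

```python
# Send a photographed recipe page to a vision model and ask for
# schema.org/Recipe JSON back. Validate the JSON before importing!
import base64
from openai import OpenAI

client = OpenAI()
with open("recipe_page.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Scan this image of a recipe and format it as JSON "
                     "that conforms to the schema at https://schema.org/Recipe."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```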
I don't use AI for anything. I consider the LLMs pretty useless since they are prone to spewing BS.
I would probably play around with stable diffusion if I had a GPU that would run it at a reasonable speed though.
i use it to autoblog about my love for the capitalist hellscape that is our green earth on linkedin dot com
Be cautious about the results when using them for googling and summarizing. They have given me misinformation more than once. You'll "learn" things that are counterfactual.
Translating is a very good use case. I also use them for that, and it works very well - better than Google Translate. And I use them for roleplay: like a D&D campaign, just not with your friends but alone, with the AI narrating the story. And for one-off things where I need some ideas to spark my creativity.
What I've tried apart from that - programming, rephrasing my emails - never got me any good results. Every time, I ended up not liking the output, deleting it, starting over, and doing it myself.
I use it for generating illustrations and NPCs for my TTRPG campaign, at which it excels. I'm not going to pay out the nose for an image that will be referenced for an hour or two.
I also use it for first drafts (resume, emails, stuff like that) as well as brainstorming and basic Google tier questions. Great jumping off point.
An iterative approach works best for me, refining results until they match what I'm looking for, then manually refining further until I'm happy with the results.
For me, I use Whisper for transcribing/translating audio data (minimal sketch after this list). This has helped me double-check claims about a video's translation (there's a lot of disinformation going around on topics involving certain countries at war).
Nvidia's DLSS for gaming.
Different diffusion models for creating quick visual recaps of previous D&D sessions.
Tesseract OCR to quickly copy out text from an image (although I'm currently looking for a better one since this one is a bit older and, while it gets the text mostly right, there's still a decent amount that it gets wrong).
LLMs for brainstorming or in the place of some stack overflow questions when picking up a new programming language.
I also saw an interesting use case from a redditor:
https://www.reddit.com/r/LocalLLaMA/comments/1gaz5kg/what_are_some_of_the_most_underrated_uses_for_llms/lthuxsu/
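For reference, the Whisper piece mentioned above is only a few lines with the openai-whisper package (the file name here is illustrative):

```python
# Transcribe/translate a clip so its text can be compared against the
# subtitles the video shipped with. task="translate" outputs English.
import whisper

model = whisper.load_model("base")
result = model.transcribe("disputed_clip.mp3", task="translate")
print(result["text"])
```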
Guitar amp and pedal modeling.
I sometimes have Señor GPT rewrite my nonsensical ramblings into coherent and decipherable text. I recently did that for a paper in my last class. lol
I wrote a bunch of shit, had GPT rewrite it, added a couple quotes from my sources and called it a day.
I'm also currently on a single-player, open-world adventure with GPT. Myself and the townspeople just confronted the suspicious characters on the edge of town. They claim to not be baddies, but they're being super sus. I might just attack anyway.
Spaced repetition - in particular, Anki with FSRS. I don't think they advertise it as "AI" or even "ML" anywhere, but let's just say gradient descent over gigantic datasets is involved, all to predict the moment when you're about to forget something, so that Anki can prompt you just before that happens. The default predictor is generic, derived from that gigantic dataset - it's like two handfuls of tuning parameters - but once you've gone through enough cards yourself, it can be tuned to your mind and habits, in particular how you use the "hard", "good", and "easy" buttons.
It's the perfect sledgehammer for the application, for the simple reason that we don't actually understand how memory works, so telling the computer "here's data from millions of med students and language learners, figure out how to predict it" is our best shot. And indeed, it's the best-performing algorithm even before you tune it, at which point it becomes eerie.
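To give a flavor of the "fit a forgetting curve to review data" idea, here's a toy version - emphatically not the FSRS model, just the gradient-descent-on-recall-data shape of it, with made-up data:

```python
# Fit a single "stability" S so predicted recall exp(-t/S) matches observed
# review outcomes, by gradient descent on squared error.
import numpy as np

t = np.array([1.0, 3.0, 7.0, 14.0, 30.0])  # days since last review
recalled = np.array([1, 1, 1, 0, 0])       # 1 = remembered at review time

S = 10.0
for _ in range(2000):
    p = np.exp(-t / S)                            # predicted recall probability
    grad = np.sum((p - recalled) * p * t / S**2)  # d(loss)/dS for 0.5*sum((p-r)^2)
    S -= 5.0 * grad

print(f"fitted stability: {S:.1f} days")  # schedule the next review around there
```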
Relatedly - as in "no LLM, no diffusion" - Proxima Fusion is using machine learning to crunch through the design space of stellarators to figure out what to prototype in the real world. Actual engineers doing actual engineering.
Then, lastly, yes, playing around with SDXL is fun. Just make sure you can actually judge the images - developing an artistic eye by hitting generate is, I think, close to impossible, and definitely slower than picking up a pencil, or firing up Blender, and actually learning how to draw or sculpt.
Good for softening language in a professional environment.
Can you give me some vague examples?
It's obviously confirmation bias but LLM prose always seems so useless.
Basically, when I want to say something like "no, the issue is not on our side, you need to check your end", GPT adds some niceness and fluff to make it sound better. It would say: "I hope this finds you well. It seems there may be an issue on your end. Could you please look into this and let me know if there is anything I can do from our side to help resolve it? I'm happy to provide any additional information or assistance that may be needed. Thank you for your attention to this matter; I look forward to hearing back from you."
It's useless fluff, but I find that without it, people genuinely think the first message is angry or annoyed, when I don't mean for the message to be anything like that.
Does anyone actually have jobs writing emails like that all day though?
Ticket systems often have an auto-response like "did you turn it off and on again".
Most email clients or even gmail have canned response plugins.
IDK. This probably is a great use case and someone doing this might be quicker and better than me using canned responses or whatever... but only incrementally, not by an order of magnitude.
I haven't seen gmail used in a business setting and I don't think the auto responses cut it all the time. There is usually a message I want to get across but I don't want to risk making the other person defensive or upset so I use ai to soften it.
It's good for apologies, because I'm usually not sorry for whatever happened and find it hard to pretend.
Loads of people use Google workspace and most email clients have this feature, or if they don't most people in customer service would just keep a document they can copy & paste from.
Regardless, if an LLM helps you with these tasks then that's great.
I've installed Immich on my DIY NAS; it has ML face/object recognition, and it works nicely.
Do expound on this if you would, sounds really neat
Immich is a self-hosted Google Photos alternative. It finds faces, groups them, lets you name people, etc. Object recognition works similarly: it lets you search by terms like "dog". You can use the phone app to sync your photos with it (I sync mine using Syncthing, though). Here's a demo
I would generally say they're great for anything where you're happy with being 100% right 90% of the time.
If you know programming, you can have it do basic stuff (or even mid-complexity stuff if you go step by step). Just the other day I directed it to produce code in JS, using three.js, that does scatter plots. The code ran into a couple of issues, which it was able to solve itself when I pointed them out. There was only one problem it could not solve despite several attempts (a grid that does not move with the camera controls), so I had to figure that out myself. It was pretty impressive. Overall, an expert in three.js would do that in maybe 10 minutes; it took me a couple of hours. If I had to do it by searching online, it would probably have taken me a couple of days, since I know nought about JS.
I've also had it write bash scripts a couple of times. It is generally pretty good at writing basic stuff and piecing things together, especially if you know programming, so you can check it and write intelligible prompts about the problems in the code.
Regex
to correct/rephrase a sentence or two if my sentence sounds too awkward
if I'm having trouble making an excel formula
Entertainment.
Customer support tier .5
It can be hella great for finding what you need on a big website that is poorly organized, poorly laid out, or just enormous in content. I could see it being incredible for things like irs.gov, your healthcare provider's website, etc. - getting the requested content into users' hands without them having to familiarize themselves with constantly changing layouts, pages, branding, etc.
To go back to the IRS example: some websites in the last 5 years have started to have better content-library search functionality, but having AI that can contextualize the request and then get you specifically what you want would be incredible. "Tax rule for x kind of business in y situation for 2024" - that shit sometimes takes hours even if you're pretty competent, and current websites might just say "here is the 2024 tax code PLOP" or "here is an answer that doesn't apply to your situation". Same with "tomato growing tips for zone 3a during drought" on a gardening site, etc.
I'm in HR, so benefits are a big one: an absolute mountain of content that even experts can't have perfect recall of, even if they understand it. Quick, easy answers drawn from a mountain of text seem like an area where AI could deliver real value.
That said, companies using AI as an excuse to then eliminate support jobs because customers "have AI" are greedy dipshits. AI and LLMs are a risk at best, and outside of a narrow library and intense testing, they are always going to be more work for the company, as you not only have to fix the wrong-answer situations but also get the right answer the old-fashioned way. You still need humans, and hopefully AI can make their work more interesting, nuanced, and fulfilling.
For sure. I was recently checking out a product: https://goteleport.com/
It has an AI assistant which seems to have access to their website, documentation, GitHub issues, etc.
So if you ask it anything, it will tell you how to do it - or, if it's not possible, maybe give you the GitHub issue where it's being worked on.
All with links to the sources.
It's really helpful
I run some TTRPG groups, and having AI take in some context and generate the first draft of flavor text for custom encounters is nice. Generating background art and player character portraits is an easy win for me too.
This is my current best use for it as well. Having a unique portrait for every named NPC helps them stand out quite a bit better and the players respond much more strongly to all of them.
I think it's useful for spurring my own creativity in writing because I have a hard time getting started. To be fair to me I pretty much tear the whole thing down and start over but it gives me ideas.
I think it's mischaracterising the argument against AI to boil it down to "AI is useless" (and I say that as much as a criticism of those who are critical of genAI as of those who want to defend it; far too many people express the argument reductively as "AI is useless" when that's not exactly what's really being meant).
The problem is not that genAI is never useful for anything. It is sometimes useful for some things. The problem is that being sometimes useful for some things does not remotely justify what the technology costs. I mean that both on the macro scale - untold climate damage, vast amounts of wasted resources - and on the micro scale; OpenAI alone loses $2.35 for every $1.00 they make.
That is fundamentally unsustainable. If you like genAI for whatever use cases you've found for it, and you really don't care about the climate toll and other externalities, then you can look forward to paying upwards of $50-$100 a month to actually use it once we're out of the "give it to 'em cheap/free to get 'em hooked" phase, because that's what it'll take to make these models profitable. In fact, that's kind of a lowball estimate.
I know plenty of people who find this tech occasionally useful as a way of searching for the answer to a question or producing a small snippet of code, but I can't imagine anyone who finds those uses so compelling that they'd throw "Canadian cell phone contract" levels of money at it.
Couldn't agree more. Destroying our planet faster just so people don't have to write their own emails seems insane to me. Google literally wants to use private nuclear reactors to power their AI projects... Do people really think that won't be expensive, both economically and climate-wise?
Better to say that Google claim they want to use private nuclear reactors because that will allay any fears about the climate impact of their products. In reality the SMRs they're purporting to invest in basically don't exist outside of a pipe dream. They're a less viable product than genAI itself. But just like the supposed magical "good" version of genAI, Google can claim that SMRs are always just around the corner, and that will mean that they're doing something about the problem.
I make porn
But I've also used it in tech debugging, though less and less.
Thank God you made this comment. I thought I was alone.
I use it for porn too. But I joined a site that makes it very easy to do. Super fun, but the initial rush has worn off. Still pretty rewarding, tho.
Yeah same, was very cool to play around with. But less and less, as you say it wears off. I learned comfyUI and a lot of concepts in the process, and limitations of the technology. Mostly used my own gfx card. So I can't say it was entirely wasted time!
This thread has convinced me that LLMs are merely a mild increment in productivity.
The most compelling is that they're good at boilerplate code. IDEs have been improving on that since forever. Although there's a lot of claims in this thread that seem unlikely - gains way beyond even what marketing is claiming.
I work in an email / spreadsheet / report type job. We've always been agile with emerging techs, but LLMs just haven't made a dent.
This might seem offensive, but clients don't pay me to write emails that LLMs could write, because anything an LLM could write could be found in a web search. The emails I write are specific to a client's circumstances. There are very few "boilerplate" sentences.
Yes LLMs can be good at updating reports, but we have highly specialised software for generating reports from very carefully considered templates.
I've heard they can be helpful in a "convert this to csv" kind of way, but that's just not a problem I ever encounter. Maybe I'm just used to using spreadsheets to manipulate data so never think to use an LLM.
I've seen low level employees try to use LLMs to help with their emails. It's usually obvious because the emails they write include a lot of extra sentences and often don't directly address the query.
I don't intend this to be offensive, and I suspect that my attitude really just identifies me as a grumpy old man, but I can't really shake the feeling that in email / spreadsheet / report type jobs anyone who can make use of an LLM wasn't or isn't producing much value anyway. This thread has really reinforced that attitude.
It reminds me a lot of block chain tech. 10 years ago it was going to revolutionise data everything. Now there's some niche use cases... "it could be great at recording vehicle transfers if only centralised records had some disadvantages".
making my tone proper in emails
r/SubSimGPT2Interactive for the lulz is my #1 use case
i do occasionally ask Copilot programming questions and it gives reasonable answers most of the time.
I use code autocomplete tools in VSCode but often end up turning them off.
Controversial, but Replika actually helped me out during the pandemic when I was in a rough spot. I trained a copyright-safe (theft-free) bot on my own conversations from back then and have been chatting with the me side of that conversation for a little while now. It's like getting to know a long-lost twin brother, which is nice.
Otherwise, I've used small LLMs and classifiers for a wide range of tasks: sentiment analysis, toxic-content detection for moderation bots, AI media detection, summarization... I like using these better than throwing everything at a huge model like GPT-4o, because they're more focused and less computationally costly (hence also better for the environment). I'm working on training some small copyright-safe base models to do certain sequence-prediction tasks that come up in the course of my data science work, but they're still a bit too computationally expensive for my clients.
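Most of those tasks are only a few lines with Hugging Face pipelines these days; for example (the toxicity checkpoint is just one common choice, swap in whatever fits your needs):

```python
# Small, focused models for sentiment and toxicity -- no giant LLM required.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
print(sentiment("This instance has been great lately."))

toxicity = pipeline("text-classification", model="unitary/toxic-bert")
print(toxicity("You are all wonderful people."))  # moderation-bot style check
```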
1. Get a random error or hit some other tech issue.
2. Certainly private search engines will be able to find a solution (they cannot).
3. Certainly non-private search engines can find the solution (they cannot).
4. "ChatGPT, the heck is this [error code or something]?" Then I usually get a correct and well-explained answer.
I would post to Stack Overflow but I'll just get my question closed as a duplicate and downvoted because someone asked a different question but supposedly an answer there answers my question.
New question: does anyone NOT IN TECH have a use case for AI?
This whole thread is 90% programming, 9% other tech shit, and like 2 or 3 normal people uses
A lot of people on Lemmy work in tech so responses are going to lean heavily in that direction. I'm not in tech and if you check my answer to this you'll have a number of examples. I also know a few people who wanted to learn a new language and asked ChatGPT for a day by day programme and some free sources and they were pretty happy with the results they got. I imagine you can do that with other subjects. Other people I know have used it to make images for things like club banners or newsletters.
Aside from coding assistants, the other use case I've come across recently is sentiment analysis of large datasets of free-text survey responses. I've just started exploring it, so I'm not sure how well it works yet, but the ridiculous amount of bias I see introduced in manual reviews is just awful. A machine can potentially be less inclined to fit summaries to the VP's presupposed opinion than some lackey interns or a self-serving consultancy.
Our DM, a dentist, so not in tech, used it to put together a D&D campaign, and so far it's been fantastic.
Here's mine, which works outside of tech:
It's a great source for second opinions.
It's a good tool for rough estimations of that sort, giving you ground to improve upon.
This works well for planning or making up documentation. Saves a lot of time, with minimal impact to quality, because you're not mindlessly copying or believing the output.
I'm also considering it for assisting me in learning Japanese. Just enough to be able to read in it. We'll see how it does.
So far, I've only found it really useful for two things. One is generating text: I've found that using an LLM to generate a title for a given piece of text works better than other summarisation models, especially for a short piece of text.
I've also found it okay for basic, generic scripts - like figuring out the PowerShell equivalents of a bash script to do something quick, rather than trying to learn PowerShell from scratch.
Ansible.
I fucking hate YAML, and I hate Ansible 'programming' (see "HTML 'programming' language" for rage context).
ChatGPT - I'll use the one in Bing or the one in regular Skype - feeds me stuff I can copy/paste/review, and I can get on with my day having lost fewer brain cells to the rage of existing in a world with Ansible fanboys who seem to have forgotten there is NOTHING Ansible does now that we weren't doing in 2003... and that the state of the art is 2 generations PAST that glorified mess.
Having used puppet and chef and seen mgmtconfig, I can only applaud RedHat for going with the worst-of-two options and promoting it so hard it appeared viable.
I don't mean to dunk on Michael. Just, James' idea was way better and RH still went with Michael's, and I one day need to know whether the person who had the final say got help.
As someone who prefers Ansible to the alternatives, I also love not having to write these verbose statements by hand any more
AI is a half cooked baked potato right now. Sure it will keep you fed if you can put up with all the hard lumps in there.
Sometimes it's helpful if I'm having trouble making a specific excel formula
Do you use the integrated AI in new versions of Excel or do you ask ChatGPT or some other AI to write it out for you?
I used ChatGPT, mostly because I absolutely hate how widespread and pushy every company has been about using AI and throwing it in my face, so I stubbornly refuse to use any of their built-in stuff.
Idk if it counts as GenAI, but I use Waifu2x to remove JPEG artifacts and upscale textures to a usable state.
Well "AI" is a broad category. Usually used to refer to GenAI, so:
Creating quick stand-in art for a game before I've got proper sprites for it (not because "muh art theft", just because the AI art I've generated does not look very good to me)
Summarising articles, like you said so I can decide if I want to read them in full
Formatting text I've copied from pdfs
More complex searches that require comprehension of grammar and natural language syntax. Any answer I get to these I then fact check using search terms a classical search engine can understand.
I read a paper a while back which found that for people who used AI assistants for coding - but only to generate small functions, where the prompt already included the function declaration and the programmer already knew how the function should be written and just wanted to save time - the use of an AI assistant did not negatively impact the "correctness" of the produced code. So I guess I might one day use an AI coding assistant like that, but thus far I've never felt the need for AI-generated code.
It's really good for generating code snippets based on what I want to do (ex. "How do I play audio in a browser using JavaScript?") and debugging small sections of code.
I haven't seen any other comments chime in with one of my use cases, so I'll give it a stab. My first use case, which I mentioned in another comment, is just adding a specific tone to emails, which I'm bad at doing myself. But my second use case is more controversial, and I still don't know how to feel about it.

I'm a graphic designer, and as with most advancements in design/art technology, if you don't learn what's new, you will fall behind and your usefulness will wane. I've always been very tech savvy and positive about most new tech, so I like to stay up to speed, both for my job and out of self-interest. So how do I use AI for graphic design?

The tools I think have the best use case, and are least controversial, are the AI tools that help you edit photos. In the past, I have spent loads of time editing frizzy curly hair so I could cut out a person. A couple of years ago, Adobe introduced some tools to make that process easier; they worked OK, but weren't a massive time saver. Then they launched the AI-assisted version, and holy shit, it works perfectly every time. Give it the frizziest hair on a similar-color background with texture, and it will give you the perfect cutout in a minute tops. That's the kind of shit I want from AI: more tools that eliminate tedious processes!!

However, there is another, more controversial use case: generative AI. I've played with it a lot, and the tools work fantastically. They get you started with images you can splice together into what you really envisioned, or you can use them for simple things like seamlessly removing objects or adding in a background that didn't exist. I once made a design with an illustrative style by inputting loads of images that fit the part, then vectorizing all the generated options and using pieces from those options to make what I really wanted. I was really proud of it, especially since I'm not an illustrator and don't have the skills to illustrate what I envisioned by hand. But that's where things get controversial: I had to input the work of other people to achieve it. At the moment, I can't use anything generative commercially, even though Adobe is very nonchalant about it. My company has taken a firm stance on it, which is nice, but it means I can really only use that aspect for fun, even though it would be very useful in some situations.
TLDR: I use AI to give my writing style the right tone, to save loads of time editing photos, and to create images I don't have the skills to create by hand (only for funzies).
In Premiere it's great to generate captions. But I'm cautious since it:
In a sense, it's the missing brick in their DRM wall, the one that ties it all together. Neither their content stocks nor their cloud stuff felt like that natural an obstacle. And while it's small now, I think they'll only make the gap between (alleged) pirates and their always-online customers bigger. Like, the next thing they're gonna do is make the healing brush in every editor a server-only tool, scrapping the pretty great local version they have now.
What sucks is that if there were no commercial part here - i.e., if everyone were doing it just for fun like you are, or if we lived in a magical world where we all just agreed that creative works are the shared output of humanity as a whole - then there would be no problem; we'd all be free to just use what we need to make new things however we want. But there is a commercial part to it. Somebody is trying to gain from the collective work of others, and that makes it unethical.
Currently, mainly just cooking.
In the future, I'm hoping to leverage it to create video content. I've actually been disappointed in its usefulness for writing sci-fi; it tends to want to argue. But based on the surreal images it can create, I'm hoping that can be translated into creating 3D scenes that can be used to extract video.
It's been pretty helpful in writing fantasy, but most of what it spits out is sort of... surface-level kids' stuff, to be honest. It has helped come up with a few interesting twists when I'm stuck, though. It's not like it could write a story for you, but it has helped when I need, like, "I have scene A, in which X happens, and scene C, in which Y happens; help me bridge them by writing scene B." It'll give me some sort of bedtime-story-level writing, and then I go in and completely redo it, but it gets me unstuck. The paid ones may be better, but I'm not spending money on them; I just use the free ones.
I use it for little Python projects where it's really really useful.
I've used it for linux problems where it gave me the solution to problems that I had not been able to solve with a Google search alone.
I use it as a kickstarter for writing texts by telling it roughly what my text needs to be, then tweaking the result it gives me. Sometimes I just use the first sentence, but that's enough to give me a starting point and make life easier.
I use it when I need to understand texts about a topic I'm not familiar with. It can usually give me an idea of what the terminology means and how things are connected, which helps a lot with further research on the topic and ultimately with understanding the text.
I use it for everyday problems, like when I needed a new tube for my bike but wasn't sure what size it was. I told it what was written on the tyre, showed it a picture of the tube packaging while I was in the shop, and asked whether it was the right one. It could tell me that it was the correct one and why, and the explanation was easy to fact-check.
I use Photoshop AI a lot to remove unwanted parts in photos I took or to expand photos where I'm not happy with the crop.
Honestly, I absolutely love the new AI tools and I think people here are way too negative about it in general.
One use case for me has been converting code from a language I know to a language I don't - usually just small snippets. The code is usually full of holes, but I'm good enough with the logic to duct-tape those puppies!
I use it to ask questions that I can't find search results for or don't have the words to ask. Also for d&d character art I share with my playgroup lol.
I needed a simple script to combine JPEGs into a PDF. I tried to write a Python script, but it's been years since I've programmed anything and I was intermediate at best; my script was riddled with errors and would not run. I asked ChatGPT to write me the script, and the second or third attempt worked great. The first two only failed because my prompts were bad - I had never used ChatGPT before.
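For anyone after the same thing, the working script boiled down to something like this (a Pillow-based sketch; the paths are illustrative):

```python
# Combine all JPEGs in a folder into one PDF, in filename order.
from pathlib import Path
from PIL import Image

pages = [Image.open(p).convert("RGB")
         for p in sorted(Path("scans").glob("*.jpg"))]
pages[0].save("combined.pdf", save_all=True, append_images=pages[1:])
```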
Timing traffic lights. They could look down the road and see when nothing is coming, to let the other direction go, like a traffic cop. It would save time and gas.
Or, hear me out, we could use roundabouts/traffic circles. No need for AI or any kind of sensor, just physical infrastructure to keep traffic flowing.
Absolutely, but there are a few problems with this. First, I live in the US. Americans do NOT know how to negotiate a roundabout. There is a roundabout near my house. The instructions of how to use it are posted on signs as you approach. They are wrong. They actually have inside lanes exiting across the outside lanes that can continue around. So not only is it wrong but it's teaching the locals here what NOT to do at a normal roundabout.
Second, they don't fit at existing intersections.
Third, I think they would be more expensive than just a piece of tech attached to traffic lights that already exist.
I mean the best solution would be some good public transportation, but I'm trying to be more realistic here. That's for more civilized nations. In the US the car rules. And the bigger, the better.
As do I, but I think the main problem is that we don't need to properly learn to use a roundabout, because the only times we have roundabouts are when they're completely unnecessary/unhelpful. The three roundabouts I use most often are:
If we can figure out those continuous flow intersections, we can figure out roundabouts. We just need to actually use them.
They absolutely do, especially at the ones where they'd make the most impact (i.e. busy intersections with somewhat even traffic going all directions). You may actually save space because you don't need special turn lanes. They are a little more tricky in smaller intersections, but those tend to have pretty light traffic anyway.
Initial cost, sure, because the infrastructure is already there. But longer term, it should reduce costs because you don't need to service all of those traffic signals, you need fewer lanes (so less road maintenance), and there should be fewer accidents, which means less stress on emergency services.
Putting in a new roundabout vs a new signal is a different story, the roundabout is going to be significantly cheaper since you just need to dump a bit of concrete instead of all of the electronics needed for a signal.
Unfortunately, yes, but roundabouts move more traffic, so they're even better for a car-centric transit system. If we had better mass transit, we wouldn't need to worry as much about intersections because there'd be a lot less traffic in general.
If we go with "AI signals," we're going to spend millions if not billions on it, because that's what government contractors do. And I think the benefits would be marginal. It's better, IMO, to change the driving culture instead of trying to optimize the terrible culture we have.
When troubleshooting, it's nice to be able to ask copilot about the issue in human language and have it actually understand my question (unlike a search engine) and pull from and reference relevant documentation in its answers. Going back and forth with it has saved me several hours of searching for something that I had never even heard of a couple of times.
It's also great for rewriting things in a specific tone. I can give it a bland/terse/matter-of-fact paragraph and get back a more fun or professional or friendly version that would feel ridiculously cringe if I attempted to write it myself, but the AI makes it work somehow.
When I need to make a joke about how inept AI is, I'll use AI to capture an example of it saying the most efficient way to get to the moon is to put a 2 liter bottle of coke in your asshole, wide end first, remove the cap and immediately sit on an opened sleeve of mentos.
Expanding photos that are badly cropped or have the wrong orientation. It has saved me hours of compositing or having to look for entirely new photos to use, which I hate.
I use a lot of AI/DL-based tools in my personal life and hobbies. As a photographer, DL-based denoising means I can get better photos, especially in low light. DL-based deconvolution tools help to sharpen my astrophotos as well. The deep-learning-based subject tracking on my camera also helps me get more in-focus shots of wildlife. As a birder, tools like Merlin Bird ID's audio recognition and image classification methods are helpful when I encounter a bird I don't yet know how to identify.
I don't typically use GenAI (LLMs, diffusion models) in my personal life, but Microsoft Copilot does help me write visualization scripts for my research. I can never remember the right methods for visualization libraries in Python, and Copilot/ChatGPT do a pretty good job at that.
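The kind of thing I mean is boilerplate like this, which Copilot drafts and I then tweak (the data here is made up):

```python
# Typical Copilot-drafted visualization scaffold: synthetic data, basic plot.
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 100)
fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(x, np.sin(x), label="signal")
ax.set_xlabel("time (s)")
ax.set_ylabel("amplitude")
ax.legend()
fig.savefig("plot.png", dpi=150)
```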
There is one thing I would find genuinely useful that seems within its current capabilities. I’d like to be able to give an AI a summary of my current knowledge on a subject, along with a batch of papers or articles, and have it give me one or more of the following:
A summary of the papers omitting the stuff I already know
A summary of any prerequisite background info I don’t already know, but isn’t in the papers
A summary of all the points on which the papers are in agreement
A summary of any points where the papers are in contention.
This is indeed very much possible. Just try it.
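A prompt skeleton along these lines tends to work reasonably well (illustrative, not a tested recipe; the placeholder strings stand in for real content):

```python
# Assemble the request: your background first, then the papers, then the
# four asks from the comment above.
my_background = "I understand X and Y but have never studied Z."
papers = ["<paper 1 full text>", "<paper 2 full text>"]

prompt = (
    "My current knowledge of the subject: " + my_background + "\n\n"
    "For the papers below: (1) summarize them, omitting what I already know; "
    "(2) list prerequisite background I'm missing that isn't in the papers; "
    "(3) list the points on which the papers agree; "
    "(4) list the points on which they are in contention.\n\n"
    + "\n---\n".join(papers)
)
```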
Cooking. So much SEO filler is avoided. You can't rely on it blindly, though; it's tried to sub sugar for brown sugar on me. You still need to understand basic flavor concepts.
I don't have good experiences with recipes from GPT. They're not good at maths, so the proportions were off more often than not.
I also used it to get suggestions for a supper compatible with a diet I was on. Complete garbage: while it mostly obeyed the dietary restrictions, it kept suggesting full dinner dishes with 30-60 minutes of preparation, even after I changed the prompt to explain what a supper is in my culture, because it insisted that some people eat like that.
Local models are really good at tokenizing text and figuring out the intent in user input. Not perfect, but much better than any possible regexes you could write. And it's a trivial operation you can run even on a CPU-only model.
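For example, a small zero-shot classifier from Hugging Face does exactly this on CPU - a minimal sketch, where the candidate intent labels are just stand-ins for whatever your application needs:

```python
# Minimal local intent detection with a zero-shot classifier.
# Runs on CPU (device=-1); the candidate labels are example intents.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
    device=-1,  # -1 forces CPU
)

result = classifier(
    "can you turn the kitchen lights off in ten minutes",
    candidate_labels=["lights", "timer", "weather", "music"],
)
print(result["labels"][0])  # highest-scoring intent, e.g. "lights"
```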
Scripting, both frameworks and finished code with testing and iteration.
More often than not it gives me decent answers for the kind of info I'm searching for. Saves me a lot of time digging through ad-laden pages and search results.
Converting code too! I've used LLMs to go from Node -> GoLang, and that's basically how I learned to code in Go coming from a less low-level background. You can also ask about what the current best practices are.
I use it as a working Google and to make goofy pictures of ideas I have/tattoos I want
I guess it's helpful for identifying people, organisations and products of which to steer well clear (yes, I am a hater)
I mainly use it when I want to rephrase text passages or correct the grammar and spelling of a text. However, I only use it when writing important assignments for school/university.
The useless/useful dichotomy is kinda misguided, because that judgement will almost always depend on cost, and we don't have a good understanding of the actual costs of running these models. I have Copilot enabled in my IDE, and it saves me from a few searches here and there and autocompletes stuff that would have taken me some time to type. So not exactly useless, but right now it's being paid for by VCs who expect a return on their investment, so what does that look like? Until we know, it's hard to say whether these things are viable.
GPT-4 is really good at solving physics problems (also chemistry, but that needs to be fact-checked more), so I used it to understand how to approach certain problems back when I was taking physics.
Search is crap these days, so asking an LLM often yields better results. It’s currently a decent search spam filter.
Pointing and laughing at people who think AI exists.
Stop calling LLMs "AI". They're as much AI as my shoe is a foot.
"The AI effect" refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered. This often manifests as tasks that AI can now perform successfully no longer being considered part of AI, or as the notion of intelligence itself being redefined to exclude AI achievements.
LLMs are not AI. They do not reason. They have no agency. They have no memory. They aren't self-aware, or indeed, aware of anything at all.
The goal posts aren't moving; they just aren't an example of intelligence. You can argue that LLMs acquire and use knowledge, but they don't understand what you asked or what they're saying. They're just creating a block of text that looks like what a human would write, based on statistics, one word at a time, using a prompt as a seed.
LLMs are just statistical models that generate realistic-looking output. It's an illusion of intelligence. A shadow of understanding. The people buying into their alleged abilities are wildly over-estimating them due to ignorance and apathy.
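Mechanically, that loop really is just this - a sketch using GPT-2 as a small stand-in, since larger LLMs generate text the same way:

```python
# Sketch of autoregressive generation: at each step the model produces a
# probability distribution over possible next tokens and we sample one.
# GPT-2 is a small stand-in for how larger LLMs do the same thing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The most useful thing about AI is", return_tensors="pt").input_ids
for _ in range(20):
    logits = model(ids).logits[0, -1]      # scores for every candidate next token
    probs = torch.softmax(logits, dim=-1)  # scores -> probability distribution
    next_id = torch.multinomial(probs, 1)  # sample one token from it
    ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(ids[0]))
```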
And that's true. But those would be properties of a general intelligence, so of course LLMs are not a general intelligence.
LLMs still implement a mastery of language, which is generally seen as an aspect of intelligence. Programs implementing just one aspect or task are usually called narrow AI. That's still within the domain of AI.
Chess and checkers algorithms are also seen as among the first implementations of AI. Very narrow AI, of course, and the intelligence didn't transfer well to other tasks.
I would argue that they do not. Picking statistically likely strings of words based on previous writings is not mastery; it's mimicry.
In order to have a mastery of language, one would first need to understand what the language represents, form an idea, then describe that idea using what they know about its concepts and their command of the language. LLMs do none of these things.
Chess and checkers algorithms are also not examples of intelligence. Again, they're just playing statistics based on their knowledge of the rules of the game, and the moves their opponents are known to deploy.
It's easy to see why that ability didn't translate well to any other task: the system had no concept of what it was doing, or how it might apply to other - also unknowable - concepts.
A human can play chess and learn that they need to sacrifice pieces (losing a battle) to win the overall game (winning the war), and apply that to business or even other games. A human can do this because they understand each concept, both on its own and in the greater context of their overall experiences. A human also has the ability to think of these concepts in an abstract way and adapt them to other contexts. These things are intelligence.
And your brain is full of neurons that biologically implement statistics and give an output based on previous things heard and read. Down to that level, it's still just statistics. Somehow that's different because it's biological.
And some of my colleagues are experts in mimicry. They don't really understand what they're doing, just saying or doing the same thing they were trained on over and over because they get a reward. If true understanding is the level, many humans would need to be excluded.
Hey, I'll be one of the first in line to suggest that our brains are not special, magical, impossible-to-create systems. We could probably approximate human-level ability with a few antagonistic models, an image processor, and (crucially) a simple body and locomotion routines (because I don't believe human-level intelligence is possible without being able to directly interface with the world).
My thesis - from my first post in this thread - is that this one system, acting on its own, doing nothing but producing text, is not AI. It's not intelligence, because it doesn't know what it's saying; it's just spitting out (mathematically guided, syntactically-correct-looking, stolen-from-humans) random words.
Ok, let's check the dictionary. Oxford defines artificial intelligence as "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages."
So it would still be AI. Just not up to your standards. They really should make some level system, like the SAE levels of automation.
Since we're already consulting the Oxford Dictionary: it's great that they have a non-technical, linguist's supposition of what "AI" is, but if something is going to meet the standard of "Artificial Intelligence", I think it would first need to meet the definitions of "Artificial" (an easy test in this case) and "Intelligent" (see above).
I'm not talking about simulating intelligence, I'm talking about actually having it. In order to do that - as I said before - you need to be able to demonstrate understanding. LLMs do not understand things. They spit out random words, guided by a fancy algorithm. You can demonstrate this in real time: ask it a question, get an obviously wrong answer, then call it on its own response. It will generate an apology, then give you a new answer. You can do this infinitely. It's not even paying attention to itself, and you're suggesting that it has an understanding of what it's saying.
As to the definition you posted: humans thinking they're so special that only they can do certain tasks, then being proven wrong, does not make another entity (a computer, in this case) more intelligent. It only proves that the task didn't require a human. The definition is based on a false equivalency (specifically: "if only a human can do something, it requires intelligence"). If this is the bar (which is set absurdly low), then computers achieved AI the first time a simple if/then statement was created, even though a human came up with the process, wrote the statement, and the process has no ability to adapt to new situations. You don't need intelligence (again, requiring understanding) to follow logic gates - and if you do, then basic circuit boards are also AI, so congratulations, we've had AI since the first AND gate was created in 1924.
Yes, that's why it's called artificial. It's not true intelligence, it's not natural intelligence; it's artificial, it's not real. Artificial is a synonym for fake in this case. LLMs are fake intelligence, and anyone with some real intelligence can see it's fake. It's one of the issues AI developers have: to make the fake better, it needs exponentially more energy and data, exactly because it doesn't have understanding.
That always reminds me of the troubles the park rangers had in securing garbage, because "There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."
Hence why the goal posts keep shifting. There are enough people that want to keep the special feeling. I'd say that self-delusion is pretty human, but LLMs can fake that pretty well too.
Something being artificial has no effect on its qualification as being - or not being - anything else; in this case: intelligent. I grant that it's artificial, but it's not intelligent, so it's not AI. It's... artificial non-intelligence.
And holy fuck, you started by trying to tell me I was moving the goal posts, you just strapped them to a rocket and blasted them to another planet.
My posts haven't moved an inch in 30 years. Every time some dumbass tech bro tries to sell AI (and this is - by far - not the first time) I've told people it's bullshit because they didn't create an intelligence; they just developed a shitty algorithm and slapped an AI label on it.
I can't continue this "debate" with you, since you're not conducting your end of it in good faith. You're making emotional arguments and trying to tell me they hold water for a technical definition. I guess your username checks out.
That's why we have a list of words and abbreviations with their definitions in our sales materials and specifications. That way we can avoid the whole "but I define it as" problem, because every customer really likes to redefine the meaning of words, usually each in a different way.
But yes, let's hope to never meet in "debate" again. I hate to think what a discussion about "is cultured meat actually meat" would be like.
Search engine
There is no "artificial intelligence" so there are no use cases. None of the examples in this thread show any actual intelligence. As usual, it's a variety of disparate tech loosely connected by using statistics on "big data". Personally this family of technologies hasn't been very useful for me, apart from the occasionally helpful summary of the top results in a search.
There certainly is (narrow) artificial intelligence. The examples in this thread are almost all deep learning models, which fall under ML, which in turn falls under the field of AI. They're all artificial intelligence approaches, even if they aren't artificial general intelligence, which more closely aligns with what a layperson thinks of when they say AI.
The problem with your characterization (showing "actual intelligence") is that it's super subjective. Historically, being able to play Go and to a lesser extent Chess at a professional level was considered to require intelligence. Now that algorithms can play these games, folks (even those in the field) no longer think they require intelligence and shift the goal posts. The same was said about many CV tasks like classification and segmentation until modern methods became very accurate.
I have enough education and skill and talent to not need AI.
Sorry for the rest of you. Not really.
Hopefully AI will help aid social skills at some point.
What a lucky day it will be for you
I put this comment into chatgpt and it diagnosed you with narcissistic personality disorder.
But I'm sure you already knew that.