Are there any genuine benefits to AI?

fiddlestix@lemmy.world to Technology@lemmy.world – 85 points –

I can see some minor benefits - I use it for the odd bit of mundane writing, some of the image creation stuff is interesting, and I know that a lot of people use it for coding etc. - but mostly it seems to be about making more cash for corporations and stuffing the internet with bots and fake content. Am I missing something here? Are there any genuine benefits?


Much like automated machinery, it could in theory free the workers to do more important, valuable work and leave the menial stuff for the machine/AI. In theory this should make everyone richer as the companies can produce stuff cheaper and so more of the profits can go to worker salaries.

Unfortunately what happens is that the extra productivity doesn't go to the workers, but just lets the owners of the companies take more of the money with fewer expenses. Usually the human worker gets fired rather than given a more useful position.

So yea I'm not sure myself tbh

No no, you found the actual "use" for AI as far as businesses go. They don't care about the human cost of adopting AI and firing large swaths of workers, just the profits.

Which is why governments should be quickly moving to highly regulate AI and its uses. But governments are slow, plodding things full of old people who get confused by toasters.

As always capitalism kills.

This is the part that bothers me the most, I think.

Trouble is the best way to regulate it isn't clear. If the new tool can do the job at least as well and cheaper, just disallowing it is less beneficial to society. You can tax its use until it is only a little cheaper, but then you have to get people to approve of taxes. Et cetera

This already happened with the industrial revolution. It did make the rich awfully rich, but let's be honest. People are way better off today too.

It's not perfect, but it does help in the long run. Also, there's a big difference in which country you're in.

Capitalist-socialism will be way better off than hardcore capitalism, because the mindset and systems are already in place to let it benefit the people more.

Yes, that way the government will be able to make sure it benefits the right people. And we will call it the national socialism.... wait... no!

The question wasn't "in theory, are there any genuine benefits"; it was whether there are any right now.

Most email spam detection and antimalware use ML. There are also use cases in medicine, like trying to predict early on whether someone has a condition.
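
For a concrete picture of the spam-filtering side, here's a minimal toy sketch with scikit-learn; the emails and labels are made up, and real filters train on millions of messages:

```python
# Toy spam filter: a Naive Bayes classifier over word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",          # spam
    "claim your free money today",   # spam
    "meeting notes attached",        # ham
    "lunch tomorrow at noon?",       # ham
]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)   # bag-of-words counts
model = MultinomialNB().fit(X, labels)

# Classify a new message the model has never seen.
print(model.predict(vectorizer.transform(["claim your free prize"])))
```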

It's also being used in drug R&D to find compounds with similar properties, like antimicrobial activity, afaik.

Medical use is absolutely revolutionary. From GP consultations to reading test results and radiology images, AI is already better than humans at some tasks and will be getting better and better.

Computers are exceptionally good at storing large amounts of data, and with ML they are great at taking a lot of input and inferring a result from it. This is essentially diagnosing in a nutshell.
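
In code terms, that input-to-inference loop is only a few lines. A minimal sketch using scikit-learn's built-in tumor-measurement dataset (purely illustrative, not a real diagnostic tool):

```python
# Train a classifier on 30 numeric measurements per sample and
# check how well it "diagnoses" held-out cases.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```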

I read that one LLM was so good at detecting TB from X-rays that they reverse engineered the "black box" code hoping for some insight doctors could use. Turns out, the AI was biased toward the age of the X-ray machine that took each photo, because TB is more common in developing countries that have older equipment. Womp womp.

A large language model was used to detect TB in X-rays? Do you not just mean machine learning?

There are supposedly multiple Large Language Model Radiology Report Generators in development. Can't say if any of them are actually useful at all, though.

Okay, but there still needs to be a part that processes the scan images, and that's not an LLM.

So you're saying because the LLM isn't operating the machinery and processing the data start to finish without any non-LLM software then none of it is LLM? Stay off the drugs, kid.

I hadn't considered this. It's interesting stuff. My old doctor used to just Google stuff in front of me and then repeat the info as if I hadn't been there for the last five minutes.

AI is a very broad topic. Unless you only want to talk about Large Language Models (like ChatGPT) or AI Image Generators (Midjourney) there are a lot of uses for AI that you seem to not be considering.

It's great for upscaling old videos (this would fall under image-generating AI, since it can be used for colorizing, improving details, and adding in additional frames), so that you end up with something like: https://www.youtube.com/watch?v=hZ1OgQL9_Cw

It's useful for scanning an image for text and being able to copy it out (OCR).
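
A minimal sketch of that, assuming the Tesseract engine plus the pytesseract and Pillow packages are installed (the filename is hypothetical):

```python
# Pull the text out of a scanned image so it can be copied.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("scanned_page.png"))
print(text)  # the recognized text
```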

It's excellent if you're deaf, or sitting in a lobby with a muted live broadcast and want to see what is being said with closed captions (Speech to Text).

Flying your own drone with object detection/avoidance.

There's a lot more, but basically, it's great at taking mundane tasks where you're stuck doing the same (or similar) thing over, and over, and over again, and automating it.

I think most of those are only labelled AI to generate tech hype, though? Like, sure, machine learning and maybe even LLMs can be and are used for those, but it isn't a machine taking human-discernible input and pretending to give human output.

"AI" is the broadest umbrella term for any of these tools. That's why I pointed out that OP really should be a bit more specific as to what they mean with their question.

AI doesn't have the same meaning that it had over 10 years ago when we used to use it exclusively for machines that could think for themselves.

They are the greatest gift to solo-brainstorming that I've ever encountered.


The fruit of those brainstorming sessions are like Homer Simpson designing a new car.

You're confusing brainstorming with content generation. LLMs are great for brainstorming: they can quickly churn out dozens of ideas for my D&D campaign, which I then look through, discard the garbage, keep the good bits of, and riff off of before incorporating into my campaign. If I just used everything it suggested blindly, yeah, nightmare fuel. For brainstorming though, it's fantastic.

I would retort that the exact opposite is true, that content generation is the only thing LLMs are good at because they often forget the context of their previous statements.

I think we're saying the same thing there: LLMs are great at spewing out a ton of content, which makes them a great tool for brainstorming. The content they create is not necessarily trustworthy or even good, but it can be great fuel for the creative process.

My stance is that spewing out a ton of flawed, unrelated content is not conducive to creating good content, and therefore LLMs are not useful for writing. That hasn't changed.

Exactly. It can generate those base-level ideas much faster and with higher fidelity than humans can without it, and that can serve us at the hobby level with D&D, or up at the business level with writers' rooms and such.

The important point is that you still need someone good at making the thing to look over and finish what you're making, or you end up with paintings with too many fingers or stories full of contradictions.

Any kid who uses it to craft their campaign is lazy and depriving themselves of a valuable experience, any professional who uses it to write a book, script, or study is wildly unethical, and both are creating a much, much worse product than a human without reliance on them. That is the reality of a model that at 100% accuracy would be exactly as flawed as human output, and we're nowhere near that accuracy.

But the point is that you don't use it to make the campaign or write the book. You use it as a tool to help yourself make a campaign or write a book. Ignoring the potential of AI as a tool just because it can't do the whole job for you is silly. That would be a bit like saying you are a fool for using a sponge when washing because it will never get everything by itself...

I get it now! You don't use it for the thing you use it for but instead as a tool to create the thing that you've used it for for yourself because the magic was inside all of us but also the GPT all along. /sarcasm

"don't feed the trolls," they said, but did she ever listen?

No, I guess I didn't...

Anything that requires tons of iteration can be done way faster with AI. Finding new chemical formulas for medicine, as an example. It takes a "throw everything at the wall and see what sticks" approach, but it's still more effective than a human.

Brute force is AI now?

Brute force would be "throw at the wall one at a time until one sticks".

As long as everything gets thrown it's still brute force, but the reason they use AI for it is because it can throw a lot more a lot faster.

I think by broad definitions it can be, yes.

Think about it. AI is just throwing a ton of sample data in and filtering out the results that are least correct.

Presumably in order to determine whether, e.g., the chemical is worth looking at in the first place.
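
A toy sketch of that screen-then-filter loop; `score_activity` here is a random stand-in for a real trained property-prediction model, and the candidate names are made up:

```python
# Score a huge candidate pool with a cheap learned model and keep
# only the top hits for expensive lab follow-up.
import random

def score_activity(candidate):
    # placeholder for a trained model's predicted activity
    return random.random()

candidates = [f"compound_{i}" for i in range(100_000)]
scored = ((score_activity(c), c) for c in candidates)
top_hits = sorted(scored, reverse=True)[:100]  # worth a closer look

print(top_hits[:3])
```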

AI has some interesting use cases, but should not be trusted 100%.

Like GitHub Copilot (or any "code copilot"):

  • Good for repeating stuff but with minor changes
  • Can help with common, easy coding errors
  • Code quality can take a big hit
  • For coding beginners, it can lead to a deficit of real understanding of your code
    (and because of that could lead to bugs, security backdoors...)

Like translations (code or natural language):

  • Good translation of the common/big languages (English, German...)
  • Can extend a brief summary into a big wall of text (and back)
  • A wrong translation can lead to someone else misunderstanding it and missing the point
  • It removes the "human" part. Most of the time, depending on the context, it can be easily identified.

Like classification of text/images for moderation:

  • Helps identify bad-faith text/images
  • False positives can be annoying to deal with.

But don't do anything IMPORTANT with AI; only use it for fun, or if you can verify that the code/text the AI wrote is correct!

Adding to the language section, it's also really good at guessing words if you give it a decent definition. I think this has other applications but it's quite useful for people like me with the occasionally leaky brain.

Actually the summaries are good, but you have to know some of it anyway and then check to see if it's just making stuff up. That's been my experience.

An interesting point that I saw about a trial at one of the small London Tube stations:

  • Most of the features involved a human who could come and assist or review the footage. The AI being able to flag wheelchair users was good because the station doesn't have wheelchair access without assistance.

  • When they tried to make a heuristic for automatically flagging aggressive people, they found that people with their arms up tend to be aggressive. This flagging system led to the unexpected feature that if a Transport For London (TFL) staff member needed assistance (i.e. if medical assistance was necessary, or if someone was being aggressive towards them), the TFL staff member could put their arms up to bring the attention onto them.

That last one especially seems neat. It seems like the kind of use case where AI has the most power when it's used as a tool to augment human systems, rather than taking humans out of stuff.

"Once implemented, the system was able to identify many black men who were then immediately confronted. Confrontations with black men are now documented at 87% of aggressive confrontations in TFL locations." /sarcasm

I don't think designing AI to make generalizations based on physical appearances is a very good idea to start with.

AI is a revolution in learning.

Very true. I learned how to code surprisingly fast.

And even the mistakes the AI made were good, because they made me learn so much from seeing what changes it did to fix them.

Bullshit. Reading a book on a language is just as fast and it doesn't randomly lie or make up entire documentations as an added bonus.


It's sped up my retouching workflows. I can automate things that a few years ago would've needed quite a lot of time spent with manual brush work.

Also in the creative industries, it's a massive time saver for conceptual work. Think storyboarding and scamping, first stage visuals that kind of thing.

One of the better uses I’ve heard of is in search and rescue type situations. Using AI to find specific items, people or anomalies on a map or video feed can be helpful.

An example regarding wildfires:

California turns to AI to help spot wildfires

Our software uses ML to detect tax fraud, and since tax offices are usually understaffed, they can now go after more cases. So yes?

Lots of boring applications that are beneficial in focused use cases.

Computer vision is great for optical character recognition, think scanning documents to digitize them, depositing checks from your phone, etc. Also some good computer vision use cases for scanning plants to see what they are, facial recognition for labeling the photos in your phone etc…

Also some decent opportunities in medical research with protein analysis for development of medicine, and (again) computer vision to detect cancerous cells, read X-rays and MRIs.

Today all the hype is about generative AI with content creation, which is enabled by Transformer technology, but it's basically just version 2 (or maybe more) of Recurrent Neural Networks, or RNNs. Back in 2015 I remember this essay, The Unreasonable Effectiveness of RNNs, being just as novel and exciting as ChatGPT.

We’re still burdened with this comment from the first paragraph, though.

Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense.

This will likely be a very difficult chasm to cross, because there is a lot more to human knowledge than thinking of the next letter in a word or the next word in a sentence. We have knowledge domains where, as an individual we may be brilliant, and others where we may be ignorant. Generative AI is trying to become a genius in all areas at once, and finds itself borrowing “knowledge” from Shakespearean literature to answer questions about modern philosophy because the order of the words in the sentences is roughly similar given a noun it used 200 words ago.

Enter Tiny Language Models. Using the technology from large language models, but hyper-focused to write children's stories, appears to show progress with specialization, and could allow generative AI to stay focused and stop sounding incoherent when the details matter.

This is relatively full circle in my opinion, RNNs were designed to solve one problem well, then they unexpectedly generalized well, and the hunt was on for the premier generalized model. That hunt advanced the technology by enormous amounts, and now that technology is being used in Tiny Models, which is again looking to solve specific use cases extraordinarily well.

Still very TBD to see what use cases can be identified that add value, but recent advancements seem ripe to transition gen AI from a novelty to something truly game changing.

Don't limit your thoughts to just generative AI, which is what you are talking about. Chatbots and media generation aren't the only uses for AI (by which I mean any trained neural network program that can do some sort of task).

Motor skills

AI can learn to solve the marble maze "Labyrinth" much, much faster than a human, and then speedrun it faster than any human ever has. Six hours. That's how long it took a brand new baby AI to beat the human world record. A human that has been learning hand-eye coordination and fine motor control all of its life, with a brain which evolved over millions of years to do exactly that.

No special code needed. The AI didn't need to be told how balls roll or knobs turn, or how walls block the ball. It learned all of that on the fly. The only special code it had was optical and mechanical. It knew it had "hands" in the form of two motors, and it knew how to use them. It also had eyes (a camera) and access to a neural network computer vision system. The AI started taking illegal shortcuts, and they had to instruct it to follow the prescribed path, which is printed on the maze.

Robots could work in factories, mines, and other dangerous, dehumanizing jobs. Why do we want workers to behave like robots at a factory job? Replace them with actual robots and let them perform a human job like customer service.

Think of a robot that has actual hands and arms, feet and legs, and various "muscles". We have it learn its motor control using a very accurate physics system on a computer that simulates its body. This allows the AI to learn at much faster speeds than by controlling a real robot. We can simulate thousands of robots in parallel and run the simulations much faster than real time. Train it to learn how to use its limbs and eyes to climb over obstacles, open doors and detain or kill people. We could replace police with them. Super agile robot cops with no racial bias or other prejudices. Arresting people and recording their crimes. Genuine benefit.

Computer Vision

AI can be trained to recognize objects, abstract shapes, people's individual faces, emotions, people's individual body shape, mannerisms, and gait. There are many genuine benefits to such systems. We can monitor every public location with cameras and an AI employing these tools. This would help you find lost loved ones, keep track of your kids as they navigate the city, and track criminal activity.

By recording all of this data, tagged with individual names, we can spontaneously view the public history of any person in the world for law enforcement purposes. Imagine we identify a person as a threat to public safety 10 years from now. We'd have 10 years of data showing everyone they've ever associated with and where they went. Then we could weed out entire networks of crime at once by finding patterns among the people they've associated with.

AI can even predict near-future crime from an individual's recent location history, employment history, etc. Imagine a person is fired from his job, then visits a gun store, then his previous place of employment. Pretty obvious what's going on, right? But what if it happens over the period of two weeks? Difficult for a human to detect a pattern like this in all the noise of millions of people doing their everyday tasks, but easy for an AI. Genuine benefit.

Managing Production

With enough data and processing power, we can manage the entire economy without the need for capitalism. People's needs could be calculated by an AI, and production can be planned years ahead of time to optimize inputs and outputs. The economy--as it stands today--is a distributed network of human brains and various computers. AI can eliminate the need for the humans, which is good because humans are greedy and neurotic. AI can do the same job without either. Again, humans are freed to pursue human endeavors instead of worrying about making sure each farm and factory has the resources it needs to feed and clothe everyone. Genuine benefit.

Togetherness

We will all be part of the same machine working in harmony instead of fighting over how to allocate resources. Genuine benefit!

Says the AI...

Even if it turns out to be organic, that wordwall is shill as fuck.

I was with it until it said let's train AI robots to kill people, and then use them to track every face on the planet and use that data to "identify threats"...

OP wants a robot overseer, but also wants it to be a police state.

Train it to learn how to use its limbs and eyes to climb over obstacles, open doors and detain or kill people. We could replace police with them. Super agile robot cops with no racial bias or other prejudices. Arresting people and recording their crimes. Genuine benefit.

I got as far as AI cops and became sceptical. Like, yeah, sure, but what you're describing isn't just a robot being controlled by an AI; it's also the AI making decisions and choosing who to pursue and such, which is a known weakness right now.

And then you let them kill people.

Don't discount the generative AI either!

Language generating AI like LLMs: Though we're in early stages yet and they don't really work for communication, these are going to be the foundation on which AI learns to talk and communicate information to people. Right now they just spit out correct-sounding responses, but eventually the trick to using that language generation to actually communicate will be resolved.

Image/video/music generating AI: How difficult it is right now for the average person to illustrate an idea or visual concept they have! But already these image generating AI are making such illustration available to the common person. As they advance further and adjusting their output based on natural conversational language becomes more effective, this will only get better. A picture paints a thousand words... and now the inverse will also be true, as anyone will be able to create a picture with sufficient description. And the same applies to video and music.

That said, I love your managing production point. It's something I've been thinking too - centrally planned economies have always had serious issues, but if predictive AI can accurately predict future need, the problems with them may be solvable, and we can then take advantage of the inherent efficiency of such a planned system.

That's funny, because the whole post was sarcastically outlining a dystopian nightmare.

If that kind of stuff was actually to become real, some dictator would take control of it and subjugate the entire country, or world... forever. There'd be no way to resist that level of surveillance or machine policing.

I think each one of those dystopian ideas can be done in a safe and humane way, but needless to say it is not the current trajectory.

Nice post. A while back I read something on reddit about a theory for technological advances always being used for the worst possible nightmare scenario. But I can't find it now. Fundamentally I'm a technological optimist but I can't even yet fully imagine the subtle systemic issues this will cause. Except the rather obvious one:

Algorithms on social media decide what people see, and that shapes their perception of the world. It is relatively easy to manipulate in subtle ways. If AI can learn to categorize the psychology of users and then "blindly anticipate" what and how they will respond to stimuli (memes / news / framing), then that will in theory allow for total control by means of psychological manipulation. I'm not sure how close we are to this; the counter argument would be that AI or LLMs currently don't understand at all what is going on, but public relations / advertising / propaganda works on emotional levels and doesn't "have to make sense". The emotional logic is much easier to categorize and generate. So even if there is no blatant evil master plan, just optimizing for max profit and max engagement could make the AI (dumbly) pursue a broad strategy that is more evil than that.

Another great one is science! Machine learning is used for physics, bio, and chem models, in things such as genetic sequencing and generation of new drugs, as well as being very useful in figuring out protein folding. It's very useful in all of the iterative "grunt work", so to speak. While it may not be the best at finding effective new drugs, it can certainly arrange molecules according to the general rules of organic chemistry much faster than any human, and because of that has already led to several drug breakthroughs. AI is hugely useful! LLMs are mostly hype.


Someone I know recently published in Nature Communications an enormous study where they used machine learning to pattern-match peptides that are clinically significant/bioactive (don't forget, the vast majority of peptides are currently believed to be degradation products).

Using mass spectrometry, they effectively shoot a sawed-off shotgun at a wall, then use machine learning to detect pellets that may have interesting effects. This opens up new understanding of the role peptides play in the translational game, as well as the potential for a huge number of new treatments for a vast swathe of diseases.

Sounds similar to some of the research my sister has done in her PhD so far. As I understand, she had a bunch of snapshots of proteins from a cryo electron microscope, but these snapshots are 2D. She used ML to construct 3D shapes of different types of proteins. And finding the shape of a protein is important because the shape defines the function. It's crazy stuff that would be ludicrously difficult and time-consuming to try to do manually.

Machine learning is important in healthcare and it's going to get better and better. If you train an algorithm on two sets of data where one is a collection of normal scans and the other from patients with an abnormality, it's often more accurate than a medical professional in sorting new scans.
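
A minimal sketch of that two-pile training setup; the folder names and image size are assumptions, and a real system would use a proper CNN rather than logistic regression:

```python
# Learn to sort scans into "normal" vs "abnormal" from two folders
# of labeled examples.
from pathlib import Path
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def load_scans(folder, label):
    X, y = [], []
    for path in Path(folder).glob("*.png"):
        img = Image.open(path).convert("L").resize((64, 64))
        X.append(np.asarray(img).ravel() / 255.0)  # pixels as features
        y.append(label)
    return X, y

X0, y0 = load_scans("scans/normal", 0)
X1, y1 = load_scans("scans/abnormal", 1)

model = LogisticRegression(max_iter=2000).fit(X0 + X1, y0 + y1)
# model.predict(...) then sorts any new scan into one of the two piles.
```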

As for the fancy chatbot side of things, I suspect it's only going to lead to a bunch of middle management dickheads believing they can lay off staff until the inevitable happens and it blows up in their faces.

Maybe you only do an "odd bit" of mundane writing and the image/music generation is a gimmick, but a lot of the modern world is mundane and pays people lots of money for mundane work. E.g. think of those internal corporate videos which require a script, stock photography and footage, basic corporate music following a 4 chord progression, a voiceover, all edited into a video.

Steve Taylor is most famous for being the voiceover for Kurzgesagt videos, but more generally he's a voiceover artist who features in a lot of these boring corporate videos. This type of content is in such high demand that there is an entire industry dedicated to it, which seems well suited to AI.

https://youtu.be/vDb2h1-7LA0

This does raise further ethical/economical issues though, as most people in these creative industries actually require income from this boring work to get by.

So you're saying it's really good at theft from common folks for the benefit of corporations?

This does raise further ethical/economical issues though, as most people in these creative industries actually require income from this boring work to get by.

That sounds more like a problem with capitalism than AI.

This, tbh.

Let's get a ubi or something going

There are lots of things that are very hard to program, but people can do very easily. For example, play Go or recognize that an animal is a bird.

Machine learning/AI makes it comparatively simple to make computers do some of these things, but at the cost of efficiency and speed at runtime. This is true of computers vs. people as well: a human brain is much slower, less efficient, and less accurate than a calculator.

Machine learning/AI is exciting because it enables computers to quickly be trained to do tasks that were impossible or would have required years of dedicated effort. The tech world is excited about it because whole new enterprises and areas of tech may spring up, big markets that were previously out of reach.
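
For instance, a rough sketch of "recognize that an animal is a bird" with a pretrained torchvision model; no bird-specific rules were ever written, and the image path is made up:

```python
# Classify an image with a pretrained ImageNet model.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()  # the matching preprocessing pipeline

img = preprocess(Image.open("animal.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)

label = weights.meta["categories"][probs.argmax().item()]
print(label)  # e.g. "robin" for a bird photo
```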

Downsides:

  • AI uses a lot more electricity. Especially for things that computers can already do, using AI is very inefficient.

  • Limited control. You train an AI model to do a task, but you don't have direct control over how it thinks. If ChatGPT gives a wrong answer, they can't just trace the program and figure out why. It takes serious effort to figure out how ChatGPT answers simple questions, so figuring out how it gets complex answers, or why an answer is wrong, is nearly impossible at this point. This also applies to unwanted behaviors: if you had a really good history chatbot that happened to turn out racist, you can't just turn that off. You end up having to retrain the model, or secretly add "make sure your answer isn't racist" to every submitted prompt (sketched below).
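
That last workaround looks roughly like this, assuming the OpenAI Python client; the model name and hidden instruction are illustrative, not any vendor's actual guardrail:

```python
# Prepend a hidden system instruction to every user prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def guarded_chat(user_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # the "secretly added" instruction the user never sees
            {"role": "system",
             "content": "Make sure your answer isn't racist."},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```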

My partner and I have founded a company that uses custom AI models trained on research to (partially) automate the process of peer review and replication. We can identify mistakes and some types of fraud in research to aid reviewers as well as extract methods and equations from papers and automatically verify findings. If you know anything about the state of research right now, those are some incredibly large benefits.

The legal industry is going to get turned on its head when AI can read, comment, and write contracts.

A 2023 study by researchers at Princeton University, the University of Pennsylvania and New York University found that “legal services” is among the industries most exposed to occupational change from generative AI.

https://arxiv.org/pdf/2303.01157.pdf

Another report, published in 2023 by economists at Goldman Sachs, estimated that 44 percent of legal work could be automated by emerging AI tools.

https://www.ansa.it/documents/1680080409454_ert.pdf

https://www.pymnts.com/news/artificial-intelligence/2024/lawyers-who-use-ai-will-replace-those-who-dont/

Something I'm not seeing here is business applications in supply chain. Managing forward-stocking warehouses, monitoring shipping lanes, ordering for seasonality, and identifying anomalies such as chargebacks, stock-outs, and outlier returns/damages/failures is typically managed by a handful of people mixing spreadsheets, ERP databases, and emailing people to tell them "your light bulbs are stuck in the Suez Canal and your recent batch of cables has a defect".

AI can replace these systems with ML, and use LLMs to generate the notifications.
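
A toy sketch of the anomaly-flagging piece with scikit-learn's isolation forest; the feature columns and numbers are made up for illustration:

```python
# Flag outlier orders for a human to review.
import numpy as np
from sklearn.ensemble import IsolationForest

# rows: [units_ordered, return_rate, days_in_transit]
orders = np.array([
    [100, 0.02,  4],
    [110, 0.03,  5],
    [ 95, 0.02,  4],
    [105, 0.01,  5],
    [ 90, 0.45, 38],   # the "stuck in the Suez Canal" batch
])

flags = IsolationForest(contamination=0.2, random_state=0).fit_predict(orders)
for row, flag in zip(orders, flags):
    if flag == -1:  # -1 marks an outlier
        print("review this one:", row)
```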

I would've found it extremely useful in school for studying advanced topics in biology, and now I use it to explain programming concepts to me, or to explain other languages. Some of the answers really do feel like you have a world-class tutor right next to you. It's not without errors but it's mostly accurate and insightful.

It's also really good at helping you search for things that you can't just type into a search box using keywords. Like, you can give it a general description of what you're thinking about and it'll guess. I've used it for TV shows from the 90-00s I largely forgot about, but also words, phrases, or concepts I can't quite remember. One time I was trying to remember a famous experiment but gave it the wrong scientist and it correctly guessed who it was and what the experiment was about.

It's also useful for brainstorming. You give it a general description of what you're doing and it'll give you somewhat generic recommendations of what you could expect other people to do so that you cover most bases. I've also used this for discussions where I'm not sure about my position so I'll ask it to get a better idea about the problem and to figure out what I'm not considering.

Overall, I think it's a great general purpose assistant.

OK, good points. I've had lots of hallucinations and fake info tho.

I pasted your question verbatim into Bing Chat. Here's what it responded with:

Artificial Intelligence (AI) indeed has a wide range of benefits that extend beyond the ones you’ve mentioned. Here are some areas where AI is making a significant impact:

  • Healthcare: AI is used in predicting disease outbreaks, drug discovery, personalized treatment plans, and improving patient care. For example, machine learning models can analyze medical images to detect diseases at early stages.

  • Education: AI can provide personalized learning experiences, identify gaps in learning materials, and automate administrative tasks. It can adapt to individual learning styles, making education more accessible.

  • Environment: AI can help in climate modeling, predicting natural disasters, and monitoring wildlife. It’s also used in optimizing energy usage in buildings and manufacturing processes, contributing to sustainability.

  • Transportation: Autonomous vehicles use AI for navigation, safety, and traffic management. AI can also optimize logistics, leading to reduced costs and environmental impact.

  • Security: AI can enhance cybersecurity by detecting unusual patterns or anomalies in data, helping to prevent cyber attacks. It’s also used in surveillance systems to identify potential threats.

  • Accessibility: AI can help people with disabilities by providing tools that improve their ability to interact with the world. Examples include speech recognition for those unable to use a keyboard, and visual recognition systems that can describe the environment to visually impaired individuals.

While it’s true that AI can be used to generate profits for corporations, it’s important to remember that many of these advancements also lead to societal benefits. However, like any technology, AI can be misused, and it’s crucial to have regulations and ethical guidelines in place to prevent such misuse. The creation of “bots and fake content” you mentioned is one such misuse, and efforts are ongoing to combat these issues.

In conclusion, AI has the potential to greatly benefit society in many ways, but it’s equally important to be aware of and address its challenges.

Seems like a pretty comprehensive list of the things I'm aware of myself. There's also tons of interesting future applications being worked on that, if they pan out, will be hugely beneficial in all sorts of ways. From what I've seen of what the tech is capable of we're looking at a revolution here.

Seems a bit biased to ask an AI for the benefits of AI... Not saying anything specific is wrong, just that appearances matter.

Was thinking the same... let's ask Honest Joe the car seller which one is the best means of transport.

I think implying that it has a bias is giving the Advanced Auto Prediction Engine a bit too much credit.

Oh, I am in fact giving the giant auto complete function little credit. But just like any computer system, an AI can reflect the biases of its creators and dataset. Similarly, the computer can only give an answer to the question it has been asked.

Dataset-wise, we don't know exactly what the bot was trained on, other than "a lot". I would like to hope its creators acted in good judgement, but as creators/maintainers of the AI, there may be an inherent (even if unintentional) bias towards the creation and adoption of AI. Just like how some speech recognition models have issues with some dialects, or image recognition has issues with some skin tones - both based on the datasets they ingested.

The question itself invites at least some bias and only asks for benefits. I work in IT, and I see this situation all the time with the questions some people have in tickets: the question will be "how do I do x", and while x is a perfectly reasonable thing for someone to want to do, it's not really the final answer. As reasoning humans, we can also take the context of a question to provide additional details without blindly reciting information from the first few lmgtfy results.

(Stop reading here if you don't want a ramble)


AI is growing yes and it's getting better, but it's still a very immature field. Many of its beneficial cases have serious drawbacks that mean it should NOT be "given full control of a starship", so to speak.

  • Driverless cars still need very good markings on the road to stay in lane, but a human has better pattern matching to find lanes - even in a snow drift.
  • Research queries are especially affected, with chatbots hallucinating references that don't exist despite being formatted correctly. To that specifically:
    • Two lawyers have been caught separately using chatbots for research and submitting their work without validating the answer. They were caught because they cited a case which supported their arguments but did not exist.
    • A chatbot trained to operate as a customer support representative invented a refund policy that did not exist. As decided by a small claims court, the airline was forced to honor this policy.
    • In an online forum while trying to determine if a piece of software had a specific functionality, I encountered a user who had copied the question into chatgpt and pasted the response. It was a command option that was exactly what I and the forum poster needed, but sadly did not exist. On further research, there was a bug report open for a few years to add this functionality that was not yet implemented
    • A coworker asked an LLM if a specific Windows powershell commands existed. It responded with documentation about a very nicely formatted command that was exactly what we needed, but alas did not exist. It had to be told that it was wrong four times before it gave us an answer that worked.

While OP's question is about the benefits, I think it's also important to talk about the drawbacks at the same time. All that information could be inadvertently filtered out. Would you blindly trust the health of your child or significant other to a chatbot that may or may not be hallucinating? Would you want your boss to fire you because the computer determined your recorded task time-to-resolution was low? What about all those dozens of people you helped in side chats that don't have tickets?

There's a great saying about not letting perfection get in the way of progress, meaning that we shouldn't get too caught up on getting the last 10-20% of completion. But with decision making that can affect people's lives and livelihoods, we need to be damn sure the computer is going to make the right decision every time, or not trust it to have full controls at all.

As the future currently stands, we still need humans constantly auditing the decisions of our computers (both standard procedural and AI) for safety's sake. All of those examples above could have been solved by a trained human gating the result. In the powershell case, my coworker was that person. If we're trusting the computers with as much decision making as that Bing answer proposes, the AI models need to be MUCH better trained at how to do their jobs than they currently are. Am I saying we should stop using and researching AI? No, but not enough people currently understand that these tools have incredibly rough edges and the ability for a human to verify answers is absolutely critical.

Lastly, are humans biased? Yes absolutely. You can probably see my own bias in the construction of this answer.

But with decision making that can affect people's lives and livelihoods, we need to be damn sure the computer is going to make the right decision every time, or not trust it to have full controls at all.

👏👏👏

Yes, dystopia has already arrived and we are all going to suffer. Here are just a few simple examples of blind trust in algorithms ruining people's lives. And day by day more are coming.

Before AI: https://sg.finance.yahoo.com/news/prison-bankruptcy-suicide-software-glitch-080025767.html

After AI: https://news.yahoo.com/man-raped-jail-ai-technology-210846029.html

https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/

It was in part a demonstration. I see a huge number of questions posted these days that could be trivially answered by an AI.

Try asking Bing Chat for negative aspects of AI, it'll give you those too.

The one thing I can say for sure is that, sometimes, when a library or something has bad documentation, it might be able to give a solution quicker than diving into the source code.

If by AI you mean current large language models, then it looks like they can do some useful stuff, and it's worrying how close they are to doing amazing things.

If by AI you mean a more general concept of artificial intelligence, then yeah. Intelligence is one of the most important resources for getting what we want. This is not to say there are not valid concerns with AI, but the potential is crazy, like humans-not-needing-to-work levels.

It's massively important in the sciences, both for computing purposes and theoretical design and investigation purposes.

AI is completely revolutionizing genetics research and subjects like biochemistry and pharmacology, because it's able to extrapolate from already identified genes and compounds and find new ones or identify the purposes of genes just from their sequence structure.

It's made processes that would take weeks or months just to identify a single new component to something that takes days or hours.

Once the technology has become embedded, our societal adjustments have completed (and they will be PAINFUL), and assuming the profit of AI is sufficiently taxed for the wealth to be redistributed, AI will be seen as the Industrial Revolution x10.

Most likely however, the rich will get richer.

assuming the profit of AI is sufficiently taxed for the wealth to be redistributed

AH - hah-hah-hah-hah !!!!!!!

Oh well, at least some of us will still be good for cleaning up messes and other physical things. And remember, like they used to say, hard work never killed anybody.

Yeah, I don't hold much hope either. I suspect it will lead to another revolution.

Depends on what kind of AI. In gaming, AI is part of the process to entertain and challenge the player, and has even been used to help model life systems.

I have yet to see how useful LLMs can be outside of being blatant plagiarists, but for a time, projects like AI Dungeon really did push the emphasis on "interactive dynamic narratives", and it was really fun for a while.

ML has been an important part in fraud detection for at least a decade now.

I think as a tool to synthesize and collect and organize information to help people make decisions, it has potential. Much like how machine learning is used to look at a bunch of MRI scans and highlight abnormalities and then medical professional looks at those anomalies to decide if they might be a tumor. But a machine is really good at finding things that are anomalous enough to be worth looking at. 

Things that you might have delegated to a secretary, assistant, or business analyst might be worthwhile to have done by an LLM: “sort all these papers by which ones understood the topic the best so I can read those first”, “Do any of these articles contain new information I haven’t seen before?”, “based on the Billboard top 20, create 5 catchy beats for a backing track”, “Draft a letter to this customer apologizing for our error and offering them a coupon for their next order”, “analyze this email I wrote and help me make the tone more professional”.

I am terrified by what is going to be possible with phishing scams, spam email, fake articles, deep fake videos, reproduction of copyrighted works, an overwhelming volume of trademarks and patents that are meaningless, obtuse contracts that are purposely difficult for a human to read but contain surreptitious loopholes, software that is full of flaws and back doors, and corporations putting more barriers between customers and customer service people.

“find me the 50 most popular articles on this topic, synthesize them all into a 20 bullet point summary and highlight for me the differences of opinion presented so I can understand both sides of the issue” - super useful

“Generate 100,000 unique variations on a very professional email correspondence from a Nigerian Prince offering to pay $50,000 transaction fee for assistance with an international wire transfer “ - no

Unfortunately I don’t think there are any incentives for the companies building these things to limit use or install the guard rails necessary. And our laws, which always run a little behind technology, are thoroughly outpaced by the rate of innovation here. The very old people in charge of governments have no chance of staying ahead of these companies. It will get much worse before it ever gets better.

Honestly, we should just stick to porn. The Internet should just be for porn because everything else we do with it seems to turn evil. 🫤

I've got some really bad news about the porn industry if you don't think it's evil.

Lots of it isn't actual AI. Nothing we have at the moment would I actually qualify as true AI. It's just algorithms spitting out answers based on what they interpret your question as. They don't think or create anything, just regurgitate things in predefined patterns.

No one knows the long-term benefits/costs yet, but it's potentially more empowering to small creators than large ones. Everyone has access to the same tools, and for instance, if it can offload a bunch of work from an indie game dev, that could let them focus more on the part of the game design process they are most skilled at/interested in.

Everyone has access to the same tools

go make me something with Sora to see what kind of equal access you think you have

Sure, the public doesn't have access to a cutting-edge research AI whose public results were only published a couple of days ago.

Right, it's a new technology, and its usage is being curated. Even once they release a publicly accessible application, it'll be like going to the hammer rental store: you get to go there and use a hammer, sometimes for a fee; you can't bring the hammer home or own it; they regulate what you can work on with it; it can be overused and inaccessible while demand is high; and they can discontinue access when they want.

And they can and do give more access to wealthier clients.

Have you tried writing prompts for an image-generating AI? If you have some idea and play around with it, it's quite a new thing. An extension of human imagination. YMMV.

AI is helping us to correctly predict protein folding, which will enable new medication. Afaik it's a major breakthrough that could alleviate a lot of suffering.

It's great for duping stupid people. There are no ethical benefits to the current LLM AI on the market.

It's got potential for use in research such as identifying potential medications to work with certain diseases based on molecular structure, something that would take a normal human countless hours but a machine could do in seconds, but AFAIK that field is a very tiny fraction of the market.

I work in IT for a Fortune 500 org.

We use LLMs to improve worker efficiency. There is a ton of taking info from here and there and entering it into our system. AI took the workload from humans doing data entry and freed the humans up to do less tedious things.

As I said, absolutely unethical. Even if the error percentage were the same as or less than a human's - which I really don't believe, given that human input was the training data, so even at 100% accuracy the LLM would be as flawed as a human - in any industry where decisions are being made that impact people, like accounting, forensics, logistics, etc., having a machine make all of those decisions without contextual awareness is a disaster waiting to happen.

But yeah you don't have to convince me that fortune 500 CEOs are cheap assholes cutting costs, I believe you there.