Why is there so much hype around artificial intelligence?

Kintarian@lemmy.world to No Stupid Questions@lemmy.world – 3 points –

I've tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn't work, why are they trying to make us all use it?


The last big fall in the price of Bitcoin, in December '22, was caused by a shift in the dynamics of mining: it became more expensive to mine new BTC than the coin was actually worth. Not only did this plunge the price of crypto, it also demolished demand for the expensive graphics chips that are repurposed to run the process-heavy math used in mining. Cheaper chips, collapsing demand, and server capacity that had been dedicated to mining-related activities threatened to wipe out profit margins in multiple tech sectors.

Six months later, ChatGPT is rolled out by OpenAI. The previous limitations on processing capability were gone, server space was cheap, and the tech was abundant. So all these tech sectors at risk of losing their ass in an overproduction-driven recession now had a way to pump the price of their services, and that way was to pump AI.

Additionally, around this time the world was recovering from covid lockdowns. The pandemic surge in demand for online services was dwindling (exacerbating the other crisis outlined above) as people returned to work and spent more time being social IRL rather than using services. Companies had hired lots of new workers: programmers, tech infrastructure workers, etc., to meet the exploding demand during covid. Now they had too many workers and their profits were being threatened.

The Federal Reserve had raised interest rates to stifle continued hiring of new employees. The solution the Fed had come up with to stifle inflation was to encourage laying off workers en masse -- what Marxists might call restoring the reserve army of labor, or relative surplus population -- which had been substantially depleted during the pandemic. But business owners were reluctant to do this; the tight labor market of the last few years had made owners and managers skittish about letting people go.

A basic principle at play here is that new technology is introduced for two reasons only: to sell as a new commodity and (what we are principally concerned with) to replace workers with machines. Another basic principle is that the capitalist system has to keep a certain percentage of its population unemployed and hyper-exploited in order to keep wages low.

So there was a confluence of incentives here:

1. Inexpensive server space and chips, whose producers were eager to restore profitability (or else face drastic consequences)
2. A need to lay off workers in order to stop inflation
3. Business incentives to actually do so

Laying off relatively highly paid technical/intellectual labor is the low-hanging fruit in this whole equation, and the rollout of AI did just that. Hundreds of thousands of highly paid workers were laid off across a variety of sectors, with assurances that AI would create so much efficiency that the need for many of these workers would be cut out. So they rolled out this garbage tech that doesn't work, but everyone in the industry, the media, and the government needs it to work, or else they face a massive economic crisis, which had already started with inflation.

At the end of the day it's just a massive grift, pushed out to compensate for excessive overproduction driven by another massive grift (cryptocurrency), combined with economic troubles that arose from an insufficient government response to a pandemic that killed millions of people; and rather than take other measures to stifle inflation, our leaders in global finance decided to shunt the consequences onto workers, as always. The excuse given was AI, which is nothing more than a predictive-text algorithm attached to a massive database created by exploited workers overseas and stolen IP, and a fuckload of processing power.

I appreciate the candid analysis, but perhaps "nothing to see here" (my paraphrase) is only one part of the story. The other part is that there is genuine innovation and new things within reach that were not possible before. For example, personalized learning--the dream of giving a tutor to each child, so we can overcome Bloom's 2 Sigma Problem--is far more likely with LLMs in the picture than before. It isn't a panacea, but it is certainly more useful than cryptocurrency kept promising to be IMO.

Again, I am highly skeptical that this technology (or any other) can be deployed for such a worthy social mission. I have a cousin who works for a company that produces educational materials for people who need a lot of accommodation, so I know there are definitely good people in those fields with the ability, and probably the desire, to deploy this tech responsibly and progressively in a manner that helps fulfill that and similar missions. But when I look at things systemically, I just don't see the incentive structures to do so. I won't deny being a skeptic of AI, especially since my personal and professional experience with it has been dramatically underwhelming. I'd love to believe things work better than they do, or even that they could, but with AI I see a lot of promises and nothing in the way of results, outside of modestly entertaining tricks. Although I gotta admit, Stable Diffusion is really cool. Commercially I think it's dogshit, but the way it creates the images is fascinating.

What would a good incentive structure look like? For example, would working with public school districts and being paid by them to ensure safe learning experiences count? Or are you thinking of something else?

Investors are dumb. It's a hot new tech that looks convincing (since LLMs are designed specifically to appear correct, not be correct), so anything with that buzzword gets a ton of money thrown at it. The same phenomenon has occurred with blockchain, big data, even the World Wide Web. After each bubble bursts, some residue remains that actually might have some value.

I genuinely think the best practical use of AI, especially language models, is malicious manipulation: propaganda and advertising bots. There's a joke that Reddit is mostly bots. I know there are some countermeasures to sniff them out, but think about it.

I'll keep Reddit as the example because I know it best. Comments are simple puns, one-liner jokes, or flawed/edgy opinions. But people also go to Reddit for advice and recommendations that you can't really get elsewhere.

Using an LLM, I could in theory make tons of convincing recommendations. I get paid by a corporation or state entity to convince lurkers to choose brand A over brand B, to support or disown a political stance, or to make it seem like tons of people support it when really few do.

And if it's factually incorrect, so what? It was just some kind stranger™ on the internet.

If by "best practical" you meant "best unmitigated capitalist profit optimization" or "most common", then sure, "malicious manipulation" is the answer. That's what literally everything else is designed for.

Who's making you use it?

It's useful for lots of things, but it requires a proof reader.

I try to do a search in Chrome and Gemini pops up and starts spewing its BS. I go into Messages to send a message and Gemini pops up and asks if I want it to send the message for me. No, I know how to write my own stupid messages. It's all integrated into Windows 11, and it's integrated into the Bing app. It's like swatting flies trying to get rid of it.

There is no artificial intelligence, just very large statistical models.
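To illustrate what "statistical model" means here, below is a toy next-word predictor (a hypothetical teaching example, not how any real LLM is built): it just counts which word follows which in some training text and predicts the most frequent continuation. Real language models use enormous neural networks over billions of tokens, but the core task is the same statistical one -- predict the next token from patterns in the training data.

```python
from collections import Counter, defaultdict

def train(text):
    # Build a bigram table: for each word, count the words seen right after it.
    words = text.lower().split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict(table, word):
    # Return the most frequent follower of `word`, or None if never seen.
    follows = table.get(word.lower())
    return follows.most_common(1)[0][0] if follows else None

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # -> "cat" ("cat" follows "the" more often than "mat")
```

Notice the model has no idea what a cat is; it only knows what tends to come next. Scale that idea up by many orders of magnitude and you get something that sounds fluent without any guarantee of being correct.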

It's easier for the marketing department. According to an article, it's neither artificial nor intelligent.

In what way is it not artificial?

Artificial intelligence (AI) is not artificial in the sense that it is not fake or counterfeit, but rather a human-created form of intelligence. AI is a real and tangible technology that uses algorithms and data to simulate human-like cognitive processes.

Is human intelligence artificial? #philosophy

Well, using the definition that artificial means man-made, then no. Human intelligence wasn't made by humans, therefore it isn't artificial.

I wonder if some of our intelligence is artificial. Being able to drive directly to any destination, for example, with a simple cell-phone lookup. Reading lifetimes worth of experience in books that doesn't naturally come at birth. Learning incredibly complex languages that are inherited not by genes, but by environment--and, depending on the language, being able to distinguish different colors.

From the day I was born, my environment shaped what I thought and felt. Entering the school system, I was indoctrinated into whatever society I was born to. All of the things that I think I know are shaped by someone else. I read a book and I regurgitate its contents to other people. I read a post online and I start pretending that it's the truth when I don't actually know. How often do humans actually have an original thought? Most of the time we're just regurgitating things that we've experienced, read, or heard from external forces rather than coming up with thoughts on our own.