Study Finds Consumers Are Actively Turned Off by Products That Use AI

morrowind@lemmy.ml to Technology@lemmy.world – 2181 points –
futurism.com

I think AI has mostly been about luring investors into pumping up share prices rather than offering something of genuine value to consumers.

Some people are gonna lose a lot of other people's money over it.

Definitely. Many companies have implemented AI without thinking with 3 brain cells.

Great and useful implementation of AI exists, but it's like 1/100 right now in products.

If my employer is anything to go by, much of it is just unimaginative businesspeople who are afraid of missing out on what everyone else is selling.

At work we were instructed to shove ChatGPT into our systems about a month after it became a thing. It makes no sense in our system, and many of us advised management it was irresponsible, since it's giving people advice on very sensitive matters without any guarantee that advice is any good. But no matter, we had to shove it in there, with small print to cover our asses. I bet no one even uses it, but sales can tell customers the product is "AI-driven".

My old company before they laid me off laid off our entire HR and Comms teams in exchange for ChatGPT Enterprise.

“We can just have an AI chatbot for HR and pay inquiries and ask Dall-e to create icons and other content”.

A friend who still works there told me they’re hiring a bunch of “prompt engineers” to improve the quality of the AI outputs haha

That's an even worse 'use case' than I could imagine.

HR should be one of the most protected fields against AI, because you actually need a human resource.

And "prompt engineer" is so stupid. The "job" is only necessary because the AI doesn't understand what you want well enough. The only productive person you could hire would be a programmer or someone who could actually tinker with the AI.

I'm sorry. Hope you find a better job, on the inevitable downswing of the hype, when someone realizes that a prompt can't replace a person in customer service. Customers will invest more time, i.e., even wait in a purposely engineered holding music hell, to have a real person listen to them.

Yes, I'm getting some serious dot-com bubble vibes from the whole AI thing. But the dot-com boom produced Amazon, and every company is basically going all-in hoping to be the next Amazon. In the end most will end up like pets.com, but it's a risk they're willing to take.

“You might lose all your money, but that is a risk I’m willing to take”

  • visionary AI techbro talking to investors

Investors pump money into a bunch of companies so that the chance of at least one of them making it big, and paying them back for all the failed investments, is almost guaranteed. That's what taking risks is all about.

Sure, but it SEEMS that some investors are relying on buzzwords and hype, without research, and ignoring the fundamentals of investing. I.e., besides the ever-evolving claims of the CEO: is the company well managed? What is their cash flow and where is it going a year from now? Do the upper-level managers have coke habits?

You’re right, but these fundamentals don’t really matter anymore, investors are buying hype and hoping to sell a bigger hype for more money later.

Seeing the whole thing as Knowingly Trading in Hype is actually a really good insight.

Certainly it neatly explains a lot.

Also called a Ponzi scheme, where every participant knows it's a scam but hopes to find some more fools before it crashes, and to leave with a positive balance.

If the whole sector turns out to be garbage it won't matter which particular set of companies within it you invest in; you will get burned if you cash out after everyone else.

OpenAI will fail. StabilityAI will fail. CivitAI will prevail, mark my words.

A lot of it is follow-the-leader type bullshit. Companies in areas where AI is actually beneficial have already been implementing it for years, quietly, because it isn't something new or exceptional. It's just the tool you use for solving certain problems.

Investors going to bubble though.

Yeah, it can make some products better, but most products these days that use AI don't actually need it. It's annoying to use products that actively shovel in AI when they don't even need it.

Ya know what product MIGHT be better with AI?

Toasters. They have ONE JOB, and everybody agrees their toaster is crap. But you're not going to buy another toaster, because that too will be crap.

How about a toaster that accurately and evenly toasts your bread, and then DOESN'T give you a heart attack at 5am when you're still half asleep???

IS THAT TOO MUCH TO ASK???

Nah. We already have AI toasters, and they're ambitious, but rubbish.

Adding AI is just serious overkill for a toaster, especially when it wouldn't add anything meaningful, not compared to just designing the toaster better.

It only needs one string of conditions that it can understand: don't catch on fire. Turn yourself off IF smoke.
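Honestly, the whole "smart toaster" could be one conditional. A toy sketch of that rule (the sensor reading and threshold are made up, obviously, not any real toaster's firmware):

```python
SMOKE_THRESHOLD = 0.2  # arbitrary sensor units; a real threshold would be calibrated

def control_step(smoke_level: float, heater_on: bool) -> bool:
    """Return the new heater state: turn yourself off IF smoke."""
    if smoke_level > SMOKE_THRESHOLD:
        return False
    return heater_on

# Clear air: keep toasting. Smoke: cut power, no 5am heart attack.
print(control_step(0.05, True))   # True
print(control_step(0.90, True))   # False
```

No neural network required; the hard part was never the logic.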


I tried to find the advert but I see this on YouTube a lot - an Adobe AI ad which depicts, without shame, AI writing out a newsletter/promo for a business owner's new product (cookies or ice cream or something), showing the owner putting no effort into their personal product and a customer happily consuming because they were attracted by the thoughtless promo.

How are producers/consumers okay with everything being so mediocre??

How are producers/consumers okay with everything being so mediocre??

I'm not. My particular beef is with plastics and toxic materials and chemicals being ubiquitous in everything I buy. It's a systemic problem that I can do almost nothing about, apart from making things myself out of raw materials.


As I mentioned in another post, about the same topic:

Slapping the words "artificial intelligence" onto your product makes you look like those shady used-car salesmen: at best it's misleading, at worst it's actually true but poorly done.

LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.

Often the answers are pretty good. But you never know if you got a good answer or a bad answer.

And the system doesn't know either.

For me this is the major issue. A human is capable of saying "I don't know". LLMs don't seem able to.

Accurate.

No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There's no concept of not knowing the answer, because they don't know anything in the first place.

The worst for me was a fairly simple programming question. The class it used didn't exist.

"You are correct, that class was removed in OLD version. Try this updated code instead."

Gave another made up class name.

Repeated with a newer version number.

It knows what answers smell like, and the same with excuses. Unfortunately there's no way of knowing whether it's actually bullshit until you take a whiff of it yourself.

So instead of Prompt Engineer, the more accurate term should be AI Taste Tester?

From what I’ve seen you’ll need an iron stomach.

They really aren't. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It's good at getting broad strokes but the details are very often wrong.

Now imagine someone that doesn't have your expertise reading that answer. They won't recognize those details are wrong until it's too late.

That is about the experience I have. I asked it for factual information in the field I work in. It didn't give correct answers, or it gave protocols which were strange and would not have been successful.

With a proper framework, decent assertions are possible.

  1. It must cite the source and provide the quote, not just a summary.
  2. An adversarial review must be conducted.

If that is done, the workload on the human is very low.

That said, it's STILL imperfect, but this is leagues better than one-shot question and answer.
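A minimal sketch of the citation check in step 1: the answer must name a real source and quote it verbatim, or it gets rejected. The document and the two example answers here are made up for illustration:

```python
# A tiny "corpus" standing in for whatever sources the system can cite.
documents = {
    "manual.txt": "The relay must be de-energized before servicing the unit.",
}

def verify_citation(answer: dict, documents: dict) -> bool:
    """Accept an answer only if its quote appears verbatim in the named source."""
    source = documents.get(answer.get("source", ""))
    return source is not None and answer.get("quote", "") in source

good = {"source": "manual.txt", "quote": "de-energized before servicing"}
bad  = {"source": "manual.txt", "quote": "safe to service while powered"}  # confabulated

print(verify_citation(good, documents))  # True
print(verify_citation(bad, documents))   # False
```

The adversarial review in step 2 is the harder, fuzzier half; this verbatim check is just the cheap mechanical gate in front of it.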

Except LLMs don't store sources.

They don't even store sentences.

It's all a stack of massive N-dimensional probability spaces roughly encoding the probabilities of certain tokens (which are mostly but not always words) appearing after groups of tokens in a certain order.

And all of that to just figure out "what's the most likely next token", an output which is then added to the input and fed into it again to get the next word and so on, producing sentences one word at a time.

Now, if you feed it as input a long, very precise sentence taken from a unique piece, maybe you're in luck and it will output the correct next word, but if you already have all that, you don't really need an LLM to give you the rest.

Maybe the "framework" you seek, which is quite akin to an indexer with a natural-language interface, can be made with AI, but it's not something you can do with LLMs, because their structure is entirely unsuited for it.
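The "most likely next token, append, repeat" loop can be sketched with a toy hand-made probability table standing in for the giant neural network (real models condition on far longer contexts and billions of weights; this is purely illustrative):

```python
# Maps the last two tokens of the context to a distribution over the next token.
NEXT_TOKEN_PROBS = {
    ("<start>", "the"): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"):     {"sat": 0.7, "ran": 0.3},
    ("cat", "sat"):     {"down": 0.9, "up": 0.1},
    ("sat", "down"):    {"<end>": 1.0},
}

def generate(prompt: str) -> str:
    tokens = ["<start>"] + prompt.split()
    while tokens[-1] != "<end>":
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]))
        if dist is None:                        # context never seen in "training"
            break
        tokens.append(max(dist, key=dist.get))  # greedy: take the most likely token
    return " ".join(t for t in tokens if not t.startswith("<"))

print(generate("the"))   # the cat sat down
```

Note there's no "knowledge" anywhere in that loop, just probabilities; which is exactly why it can't say "I don't know".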


Market shows that investors are actively turned on by products that use AI

Market shows that the market buys into hype, not value.

Market shows that hype is a cycle and the AI hype is nearing its end.

How can you tell when the cycle is ending?

When one of two things happens:

  • A new hype starts to replace it (can happen fast though!)
  • The hype starts to specialize into subcategories of the hype (e.g. AI images, AI videos, AI text generation)

When "AI" hype dies down we are likely to see "AI" removed from various topics because enough people know and understand the hyped parent topic. It'll just be "image generation", "video generation", "generated text", etc.

Customers worry about what they can do with it, while investors and spectators and vendors worry about buzzwords. Customers determine demand.

Sadly, what some of those customers want is to somehow improve their own business without thinking, and then they too care about buzzwords. That's how the hype spreads.

There are different types of people in the market. The informed ones hate AI, and the uninformed love it. The informed ones tend to be the cornerstones of businesses, and the uninformed ones tend to be in charge.

So we have... All this. All this nonsense. All because of stupid managers.


No shit, because we all see that AI is just technospeak for “harvest all your info”.

Not to mention the output is usually dog shit.

I refuse to use Facebook anymore, but my wife and others do. Apparently the search box is now a Meta AI box, and it pisses them off every time. They want the original search back.

That's another thing companies don't seem to understand. A lot of them aren't creating new products and services that use AI; they're removing the existing ones that people use daily and enjoy, and forcing some AI alternative on us. Of course people are going to be pissed off!

We aren't allowed new things. That might change their perfectly balanced money making machine.

And making search worse so it can pretend to be an ex is not what I or anyone is looking for in the search box.

Yes the cost is sending all of your data to the harvest, but what price can you put on having a virtual dumbass that is frequently wrong?

I doubt the general consumer thinks that; I'm sure most of them are turned away by the unreliability and how ham-fisted most implementations are.

+ a monthly service fee

for the price of a cup of coffee


LLM based AI was a fun toy when it first broke. Everyone was curious and wanted to play with it, which made it seem super popular. Now that the novelty has worn off, most people are bored and unimpressed with it. The problem is that the tech bros invested so much money in it and they are unwilling to take the loss. They are trying to force it so that they can say they didn't waste their money.

Honestly, they're still impressive and useful; it's just the hype-train overload, and people trying to implement them in areas where they either don't fit or don't work well enough yet.

AI does a good job of generating character portraits for my TTRPG games. But, really, beyond that I haven't found a good use for it.

So far that's been the best use of AI for me too. I've also used it to help flesh out character backgrounds, and then I just go through and edit it.

Yeah exactly, as a tool that doesn't need to be perfect to give you a starting point it's excellent. But companies sort of forgot the "as a tool" part and are just implementing ai outright in places it's not ready yet like drive-thru windows or voice only interface devices...it's not ready for that shit currently (if it ever truly will be).


One place where I found AI useful is in generating search queries in JIRA. Not having to deal with their query language every time I have to change a search filter, but being able to just use the built-in AI to query in natural language, has already saved me like two or three minutes in total in the last two months.


Even in areas where they would fit it's really annoying how some companies are trying to push it down our throats.

It's always some obnoxious UI element screaming its 3 example questions at me, and I always sigh and think: I have to assume you can only answer these 3 particular questions, and why would I ask those questions, and when I ask UI questions I expect precise answers, so why would I want to use AI for that?

I have no doubt that LLM's have more uses than I can think of, but come on...

I'm happy for studies like this. People who are trying to smear their AI all over our faces need to calm, the f..k, down.


Many of us who are old enough saw it as an advanced version of ELIZA and used it with the same level of amusement until that amusement faded (pretty quick) because it got old.

If anything, they are less impressive because tricking people into thinking a computer is actually having a conversation with them has been around for a long time.

So you want to tell me they all spent billions and built huge data centres that suck more power than a small country, so we can all play with it, generate some cringy smut, and then toss it away?

This is kinda insane if that’s how it will play out

Not the first time this has happened. Even recently. See NFTs. Venture capitalists hear "tech buzzword" and throw money at it because if they're lucky, it's the next Google. Or at least it gets an IPO and they can cash out.

Yeah, but the scale is bigger, and we could be doing something worthwhile with all these finite resources. It makes me a bit dizzy.

We could, but they don't care about making the world a better place. They care about getting rich. And then if everything collapses, they can go to their private island or their doomsday vault or whatever and enjoy the apocalypse.


I agree with this, my sentiments exactly as well. We're getting AI pushed at us from every direction and really never asked for it. I like to use it for certain things, but I go to it when needed. I don't want it in everything, at least personally.


They've overhyped the hell out of it and slapped those letters on everything, including a lot of half-baked ideas. Of course people are tired of it and beginning to associate AI with bad marketing.

This whole situation really does feel dotcommish. I suspect we will soon see an ai crash, then a decade or so later it will be ubiquitous but far less hyped.

Thing is, it already was ubiquitous before the AI "boom". That's why everything got an AI label added so quickly, because everything was already using machine learning! LLMs are new, but they're just one form of AI and tbh they don't do 90% of the stuff they're marketed as and most things would be better off without them.

What did they even expect, calling something "AI" when it's no more "AI" than a Perl script determining whether a picture contains more red color than green or vice versa.

Anything making some kind of determination via technical means, including microcontrollers and control systems, has been called AI.

When people start using the abbreviation as if it were "the" AI, naturally there's first a hype among clueless people, and then everybody understands that this is no different from what came before. Just lots of data and computing power to make a show.

For the love of god, defund MBAs.

Give them a box of crayons to eat so the adults can get some work done

Fallout was right.

Fallout was so on point. Only a lot of distance and humour makes it not outright painful or scary, knowing the damn nukes will be popping sooner or later; one just doesn't know if tomorrow or in 80 years. The question is not if, but when.

Take the hint, MBAs.

They don't care. At the moment AI is cheap for them (because some other investor is paying for it). As long as they believe AI reduces their operating costs*, and as long as they're convinced every other company will follow suit, it doesn't matter if consumers like it less. Modern history is a long string of companies making things worse and selling them to us anyway because there's no alternatives. Because every competitor is doing it, too, except the ones that are prohibitively expensive.

[*] Lol, it doesn't do that either

There are even companies slapping AI labels onto old tech with timers to trick people into buying it.

That one DankPods video of the "AI Rice cooker" comes to mind

For what it’s worth, rice cookers have been touting “fuzzy logic” for like 30 years. The term “AI” is pretty much the same, it just wasn’t as buzzy back then.

I can attest this is true for me. I was shopping for a new clothes washer and was strongly considering an LG until I saw it had "AI wash". I can see relevance for AI in some places, but washing clothes is NOT one of them. It gave me the feeling LG's clothes-washer division is full of shit.

Bought a SpeedQueen instead and been super happy with it. No AI bullshit anywhere in their product info.

I doubt there's any actual AI in the LG product, it's just a marketing buzzword like they used to use the term 'smartwash'

Much like all the companies who used to market their headphones as "MP3 compatible".

It's just more marketing nonsense.


I'd be fairly certain the washing machine has a few sensors and a fairly simple computer program (designed by humans) that can make some limited adjustments to the wash cycle on the fly.

I've seen quite a few instances of stuff like that suddenly being called "AI" as that's the big buzzword now.
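For fun, a toy sketch of what that kind of "simple program" probably amounts to; the sensor names, units, and thresholds here are entirely invented for illustration:

```python
def adjust_cycle(load_kg: float, turbidity: float) -> dict:
    """Pick wash time and water level from two sensor readings."""
    cycle = {"minutes": 40, "water_level": "medium"}
    if load_kg > 6.0:            # heavier load -> more water
        cycle["water_level"] = "high"
    if turbidity > 0.5:          # dirtier drain water -> wash longer
        cycle["minutes"] += 15
    return cycle

print(adjust_cycle(load_kg=7.2, turbidity=0.8))
# {'minutes': 55, 'water_level': 'high'}
```

A handful of if-statements designed by humans; calling it "AI" is pure marketing.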

Interestingly, LG's AI Wash pre-dates the public release of ChatGPT by almost two years. Truly pioneers.


Honestly, +1 for SpeedQueen. That’s the brand that every laundromat uses, because they’re basically the Crown Vic of washers; They’re uglier than sin, but they’ll run for literal decades with very little maintenance. They do exactly one thing, (clean your clothes), and they do that one thing very well. They’re the “somehow my grandma’s appliances still work 70 years later, while mine all break after three years" of washing machines.

SpeedQueen doesn’t have any of the modern bells or whistles… But that also means there’s nothing to break prematurely and turn the washer into the world’s largest paperweight. Samsung washers, for instance, have infamously shitty LCD panels, which are notorious for dying right after the warranty expires. And when it dies, the entire washer is dead until you replace basically the entire control interface. SpeedQueen doesn’t have this issue, because they don’t even have LCD panels; everything is just physical knobs and buttons. If something ever does break, it’s just a mechanical switch that you can swap out in 15 minutes with a YouTube tutorial.

FYI, all current Speed Queen models except the Classic Series dryer (DC5, not the washer) are electronically controlled. Even the ones with knobs. They are not mechanical and no longer use the oldschool sequencing drums.

The TR7/DR7 are at least still sold with a 7 year manufacturer's warranty, though. This is specifically to assuage consumer fears about the electronic control panel.

Yes! A washer doesn't need AI or wifi. It needs power, water, detergent and dirty laundry. Had a guest the other day pull out their phone and go, "Oh, my dishwasher is out of surfactant." Why the fuck do you need to know that when you're 20 min away by car?

I will pay more if an appliance isn't internet connected.

Speed Queen for the win. I recently replaced a couple of trusty machines that had finally given up after decades of abuse. Went for speed queen, no regrets.

Speed Queen is great stuff. It will last just about forever. When it does break it is built so it can be repaired.


I literally uninstalled and disabled every AI process and app in that latest Galaxy AI update (which was the whole update, btw). My reasons are:

1- Privacy and data sharing.

2- The battery, CPU, and RAM cost of AI bloatware running in the background 24/7.

3- It was changing and doing things which I didn't want, especially in the Gallery photo albums and the camera AI modes.

I was considering a new Samsung phone - is that baked into it? (Assuming you're talking Samsung anyway, based on the galaxy name)

Samsung is a nightmare, don't purchase their products.

For example: I used to have a Samsung phone. If I plugged it into the USB port on my computer Windows Explorer would not be able to see it to transfer files. My phone would tell me I need to download Samsung's drivers to transfer files. I could only get them by downloading Samsung's software. Once I installed the software Windows Explorer was able to see the device and transfer files. Once I uninstalled the software Windows Explorer couldn't see the device again.

Anything Samsung can do in your region to insert themselves between you and what you are trying to do they will do.

The software bloat is not dissimilar to what I've heard in the past, but I'd forgotten, since I haven't researched in depth yet. Which phones do we prefer today? Loosely off the top of my head: less bloat/intrusiveness, a nice camera, battery life enough for a day, and maybe on the smaller side to fit one hand are probably what I'll be looking into.

Apparently Pixel is the easiest to install an alternative OS on, going to start looking into that soon.


To give you a second opinion from the other guy, I've had quite a few Samsungs in a row at this point. From Galaxy S2 to S23Ultra skipping years between every purchase.

They are effectively the premium vendor of Android, at least for western audiences. The midrange has some good ones, but other companies do well there too. At the high end, Samsung might lose out a bit to google on images of people, but the phones Samsung sell are well built, have a long support life, have lots of features that usually end up being imported to AOSP and/or Google's own version of Android. The last few generations are the Apple of Android. The AI features they've added can be run on device if you want, and idk what the other guy is talking about, but the AI features aren't that obnoxiously pushed on my device, the S23 Ultra. I have some things on, most things off. Then again, I've used HTC for a few years and iPhone for two weeks, so except for helping my dad with his Pixel 6a while that device lasted, I've not really tried other brands. The added customization on Samsung is kind of a problem for me, because I don't feel like changing brands after being able to customize so much out of the box.

And I've never had issues connecting to a simple Windows computer, given that the phone has always been able to use the normal Plug-and-play driver that is there already. If you have a macbook like I do, it's a bit cringe, but that's a macbook issue moreso.

the Apple of Android

And here I thought I was being critical of them.

You are right of course, Samsung is very much like Apple. And if you don't care about a company trying to lock you into their software, inserting themselves in between everything you're trying to do, and denying you control over your own device, then I'm sure it works just fine.


Care to share how you disabled every bit of AI in the phone?

Yee. No root required (nor recommended for Samsung devices). In short: just enable developer mode from the phone settings, then connect with adb (the Android debug tool) to uninstall and disable any system app. You can also tweak colors, phone behaviors, properties and the overall look, and install and uninstall apps which you could not before... and so many other things.

Do you have to do this every time you update your phone?


I mean, pretty obvious if they advertise the technology instead of the capabilities it could provide.

Still waiting for that first good use case for LLMs.

It is legitimately useful for getting started with using a new programming library or tool. Documentation is not always easy to understand or easy to search, so having an LLM generate a baseline (even if it's got mistakes) or answer a few questions can save a lot of time.

So I used to think that, but I gave it a try as I’m a software dev. I personally didn’t find it that useful, as in I wouldn’t pay for it.

Usually when I want to get started, I just look up a basic guide and just copy their entire example to get started. You could do that with chatGPT too but what if it gave you wrong answers?

I also asked it more specific questions about how to do X in tool Y. Something I couldn’t quickly google. Well it didn’t give me a correct answer. Mostly because that question was rather niche.

So my conclusion was that it may help people who don't know how to google, or who are learning a very well-known tool/language with lots of good docs, but for those who already know how to use the industry tools, it's basically an expensive hint machine.

In all fairness, I’ll probably use it here and there, but I wouldn’t pay for it. Also, note my example was chatGPT specific. I’ve heard some companies might use it to make their docs more searchable which imo might be the first good use case (once it happens lol).

I just recently got Copilot in VS Code through work. I typed a comment that said, "create a new model in sqlalchemy named assets with columns a, b, c, d". It couldn't know the proper data types to use, but it output everything perfectly, including using my custom-defined annotations; only it used the same annotation for every column, which I then had to update. As a test, that was great, but Copilot also picked up a SQL query I had written in a comment to reference as I was making my models, and it generated that entire model for me as well.

It didn't do anything that I didn't know how to do, but it saved on some typing effort. I use it mostly for its auto complete functionality and letting it suggest comments for me.

That’s awesome, and I would probably would find those tools useful.

Code generators have existed for a long time, but they are usually free. These tools actually cost a lot of money; it costs way more to generate code this way than the traditional way.

So idk if it would be worth it once the venture capitalist money dries up.

That's fair. I don't know if I will ever pay my own money for it, but if my company will, I'll use it where it fits.


I'm actually working on a vector DB RAG system for my own documentation. Even in its rudimentary stages, it's been very helpful for finding functions in my own code that I don't remember exactly what project I implemented it in, but have a vague idea what it did.

E.g.:

Have I ever written a bash function that orders non-semver GitHub branches?

Yes! In your 'webwork automation' project, starting on line 234, you wrote a function that sorts Git branches based on WebWork's versioning conventions.


I've built a couple of useful products which leverage LLMs at one stage or another, but I don't shout about it, cos I don't see LLMs as something particularly exciting or relevant to consumers. To me they're just another tool in my toolbox, whose efficacy I consider when trying to solve a particular problem. I do think they are a new tool which is genuinely valuable when dealing with natural-language problems.

For example, in my most recent product, which includes the capability to automatically create karaoke music videos, the problem that long prevented me from bringing it to market was transcription quality: the ability to consistently get correct and complete lyrics for any song. Now, by using state-of-the-art transcription (which returns ~90% accurate results), plus an open-weight LLM with a fine-tuned prompt to correct the mistakes in that transcription, I've finally been able to create a product which produces high-quality results pretty consistently. Before LLMs that would've been much harder!
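The two-stage shape of that pipeline, sketched with both stages stubbed out (the stub outputs are made up; in the real product they would call a speech-to-text model and an open-weight LLM respectively):

```python
def transcribe(audio_path: str) -> str:
    """Stub: a real speech-to-text pass returns ~90%-accurate raw lyrics."""
    return "we will we will rock u"          # imperfect raw transcript

def llm_correct(raw_lyrics: str, song_title: str) -> str:
    """Stub: a real pass prompts an LLM to fix transcription mistakes."""
    fixes = {"u": "you"}                     # stand-in for the model's corrections
    return " ".join(fixes.get(w, w) for w in raw_lyrics.split())

raw = transcribe("some_song.mp3")
print(llm_correct(raw, "We Will Rock You"))
# we will we will rock you
```

The design point: neither stage is trusted alone; the transcriber supplies grounding, and the LLM only patches it, which keeps confabulation contained.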

Writing bad code that will hold together long enough for you to make your next career hop.

Haven't you been watching the Olympics and seen Google's ad for Gemini?

Premise: your daughter wants to write a letter to an athlete she admires. Instead of helping her as a parent, Gemini can magic-up a draft for her!

On the plus side for them, they can probably use Gemini to write their apology blog about how they missed the mark with that ad.

I think the LLM could be decent at the task of being a fairly dumb personal assistant. An LLM interface to a robot that could go get the mail or get you a cup of coffee would be nice in an "unnecessary luxury" sort of way. Of course, that would eliminate the "unpaid intern to add experience to a resume" jobs. I'm not sure if that's good or bad. I'm also not sure why anyone would want it, since unpaid interns are cheaper and probably more satisfying to abuse.

I can imagine an LLM being useful to simulate social interaction for people who would otherwise be completely alone. For example: elderly, childless people who have already had all their friends die or assholes that no human can stand being around.

Is that really an LLM? 'Cause using ML as part of a future AGI is not new, and was actually very promising and cutting-edge before ChatGPT.

So like using ML for vision recognition to know a video of a dog contains a dog. Or just speech to text. I don’t think that’s what people mean these days when they say LLM. Those are more for storing data and giving you data in forms of accurate guesses when prompted.

ML has a huge future, regardless of LLMs.

LLMs are ML... or did I miss something here?

Yes. But not all machine learning (ML) is an LLM. Machine learning refers to the general use of neural networks and similar techniques, while large language models (LLMs) refer more to an application's, or a bot's, ability to understand natural language, deduce context from it, and act accordingly.

ML in general has many more uses than just powering LLMs.


I feel like everyone who isn't heavily interacting with or developing these doesn't realize how much better they are than human assistants. Shit, for one, it doesn't cost me $20 an hour, and it doesn't take a shit or get sick, or talk back and not do its fucking job. I do fucking think we need to say a lot of shit, though, so we'll know it ain't an LLM, because I don't know of an LLM that I can make output like this. I just wish most people were a little less stuck in their Western opulence. Would really help us not get blindsided.


Wrote my last application with chat gpt. Changed small stuff and got the job

Please write a full page cover letter that no human will read.

Mostly true before, now 99.99%. The charades are so silly, because obviously as a worker all I care about is how much I get paid. That's it.

All the company will care about is that work gets done to their standards or above, at the absolute lowest price possible.

So my interests are diametrically opposed to theirs: my interest is to work as little as possible for as much money as possible; their goal is to get as much work out of me as possible for as little money as possible. We could just be honest about it and stop the stupid games. I don't give a shit about my employer any more than they give a shit about me. If I care about the work, that just means I'm that much more pissed that they're relying on my goodwill towards the people who use their products and/or services.

That's because businesses are using AI to weed out resumes.

Basically you beat the system by using the system. That's my plan too next time I look for work.

I actually think the idea of interpreting intent and connecting to actual actions is where this whole LLM thing will turn a small corner, at least. Apple has something like the right idea: “What was the restaurant Paul recommended last week?” “Make an album of all the photos I shot in Belize.” Etc.

But 98% of GenAI hype is bullshit so far.

How would it do that? Would LLMs not just take input as voice or text and then guess an output as text?

Wouldn’t the text output that is supposed to be commands for actions need to be correct, and not a guess?

It’s the whole guessing part that makes LLMs not useful, so imo they should only be used to improve stuff we already need to guess.

One of the ways to mitigate the core issue of an LLM, which is confabulation/inaccuracy, is to have a layer of either confirmation or simply forgiveness intrinsic to the task. Use the favor test. If you asked a friend to do you a favor and perform these actions, they’d give you results that you can either/both look over yourself to confirm they’re correct enough, or you’re willing to simply live with minor errors. If that works for you, go for it. But if you’re doing something that absolutely 100% must be correct, you are entirely dependent on independently reviewing the results.
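
That "favor test" layer can be sketched as a gate between the model proposing an action and the system committing it. All the function and action names below are made up for illustration; a real model call would replace `llm_suggest`.

```python
def llm_suggest(task):
    # Stand-in for a real model call: returns a *proposed* action only.
    return {"action": "delete_file", "target": "old_report.txt"}

def execute(action):
    # Stand-in for actually touching the system.
    return f"executed {action['action']} on {action['target']}"

def run_with_confirmation(task, confirm):
    # The model proposes; a human (or a stricter policy) approves or rejects.
    proposal = llm_suggest(task)
    if confirm(proposal):
        return execute(proposal)
    return "rejected: nothing was done"

# A human reviewing would be input(); here automatic policies stand in.
approve_all = lambda p: True
deny_deletes = lambda p: p["action"] != "delete_file"

print(run_with_confirmation("clean up my files", approve_all))
print(run_with_confirmation("clean up my files", deny_deletes))
```

The point is that nothing destructive happens on the model's say-so alone; the confirmation step is where the forgiveness or review lives.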

But one thing Apple is doing is training LLMs with action semantics, so you don’t have to think of its output as strictly textual. When you’re dealing with computers, the term “language” is much looser than you or I tend to understand it. You can have a “grammar” that is inclusive of the entirety of the English language but also includes commands and parameters, for example. So it will kinda speak English, but augmented with the ability to access data and perform actions within iOS as well.
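
Nobody outside Apple has published the exact output format, so here is only a hypothetical sketch of the idea: the model emits English augmented with a machine-readable action token, and the app layer parses and dispatches it. The `ACTION(...)` syntax and the handler names are entirely invented.

```python
import re

# Hypothetical model output: English plus an embedded action token.
# The "grammar" the model was trained on includes both.
model_output = 'Sure! ACTION(create_album, query="Belize", source="photos") Done.'

def create_album(query, source):
    # Stand-in for a real photo-library call.
    return f"album of {source} matching '{query}'"

HANDLERS = {"create_album": create_album}

def dispatch(text):
    # Pull out an ACTION(name, key="value", ...) token and run its handler.
    m = re.search(r'ACTION\((\w+)((?:,\s*\w+="[^"]*")*)\)', text)
    if not m:
        return None  # plain English, nothing to execute
    name, raw_args = m.group(1), m.group(2)
    kwargs = dict(re.findall(r'(\w+)="([^"]*)"', raw_args))
    return HANDLERS[name](**kwargs)

print(dispatch(model_output))
```

This is the same shape as "tool calling" in current LLM APIs: the language model's vocabulary is extended with structured output that ordinary code can act on.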

I've learned to hate companies that replaced their support staff with AI. I don't mind if it supplements easy stuff, that should take like 15 seconds, but when I have to jump through a bunch of hoops to get to the one lone bastard stuck running the support desk on their own, I start to wonder why I give them any money at all.

I love it when I have to trick those stupid ai chatbots to let me talk to a human customer service rep

It has been getting so bad that even boring regular phone trees will hang up on you if you insist on talking to a human. If it's ISP / cellular, nowadays I will typically just say I want to cancel my account, and then have cancellations route me to the correct department.

There really should be a right to adequate human support that's not hidden behind multiple barriers. As you said, it can be a timesaver for the simple stuff, but there's nothing worse than the dread when you know that your case is going to need some explanation and an actual human that is able to do more than just following a flowchart.

"AI" is certainly a turn-off for me, I would ask a salesman "do you have one that doesn't have that?" and I will now enumerate why:

  1. LLMs are wrongness machines. They do have an almost miraculous ability to string words together to form coherent sentences but when they have no basis at all in truth it's nothing but an extremely elaborate and expensive party trick. I don't want actual services like web searches replaced with elaborate party tricks.

  2. In a lot of cases it's being used as a buzzword to mean basically anything computer-controlled or networked. Last time I looked, they were using the word "smart" to mean that. A clothes dryer that can sense the humidity of the exhaust air to know when the clothes are dry isn't any more "AI" than my 90's microwave that can sense the puff of steam from a bag of popcorn. This is the kind of outright dishonest marketing I'd like to see fail so spectacularly that people in the advertising business go missing over it.

  3. I already avoided "smart" appliances and will avoid "AI" appliances for the same reasons: The "smart" functionality doesn't actually run locally, it has to connect to a server out on the internet to work, which means that while that server is still up and offering support to my device, I have a hole in my firewall. And then they'll stop support ten minutes after the warranty expires and the device will no longer work. For many of these devices there's no reason the "smart" functionality couldn't run locally on some embedded ARM chip or talk to some application running on a PC that I own inside my firewall, other than "then we don't get your data."

  4. AI is apparently consuming more electricity than air conditioning. In fact, I'm not convinced that power consumption isn't the selling point they're pushing at board meetings. "It'll keep our friends in the pollution industry in business."

Every company that has been trying to push their shiny, new AI feature (which definitely isn't part of a rush to try and capitalize on the prevalence of AI), my instant response is: "Yeah, no, I'm finding a way to turn this shit off."

My response is even harsher..."Yeah, no, I'm finding a way to never use this company's services ever again." Easier said than done, but I don't even want to associate with places that shove this in my face.

Be me

Early adopter of LLMs ever since a random tryout of Replika blew my mind and I set out to figure what the hell was generating its responses

Learn to fine-tune GPT-2 models and have a blast running 30+ subreddit parody bots on r/SubSimGPT2Interactive, including some that generate weird surreal imagery from post titles using VQGAN+CLIP

Have nagging concerns about the industry that produced these toys, start following Timnit Gebru

Begin to sense that something is going wrong when DALLE-2 comes out, clearly targeted at eliminating creative jobs in the bland corporate illustration market. Later, become more disturbed by Stable Diffusion making this, and many much worse things, possible, at massive scale

Try to do something about it by developing one of the first "AI Art" detection tools, intended for use by moderators of subreddits where such content is unwelcome. Get all of my accounts banned from Reddit immediately thereafter

Am dismayed by the viral release of ChatGPT, essentially the same thing as DALLE-2 but text

Grudgingly attempt to see what the fuss is about and install Github Copilot in VSCode. Waste hours of my time debugging code suggestions that turn out to be wrong in subtle, hard-to-spot ways. Switch to using Bing Copilot for "how-to" questions because at least it cites sources and lets me click through to the StackExchange post where the human provided the explanation I need. Admit the thing can be moderately useful and not just a fun dadaist shitposting machine. Have major FOMO about never capitalizing on my early adopter status in any money-making way

Get pissed off by Microsoft's plans to shove Copilot into every nook and cranny of Windows and Office; casually turn on the Olympics and get bombarded by ads for Gemini and whatever the fuck it is Meta is selling

Start looking for an alternative to Edge despite it being the best-performing web browser by many metrics, as well as despite my history with "AI" and OK-ish experience with Copilot. Horrified to find that Mozilla and Brave are doing the exact same thing

Install Vivaldi, then realize that the Internet it provides access to is dead and enshittified anyway

Daydream about never touching a computer again despite my livelihood depending on it

I liked the article I read where WW2 German soldiers were being generated by AI as Asians, Black women, etc. Glad it doesn't take context into consideration. lol

In other news, AI bros convince CEOs and investors that polls saying people don't like AI are out of touch with reality and those people actually want more AI, as proven by an AI that only outputs what those same AI bros want.

Just waiting for that to pop up in the news some time soon.

That's literally the sales response to this. "People don't really know what they want until we sell it to them"

It's pretty fucking gross.

"If I asked people what they want, they would say, better AI"

MBA tech bro: "so ... that means what they really want is the same shitty AI, right?"

I've found ChatGPT somewhat useful, but not amazingly so. The thing about ChatGPT is, I understand what the tool is, and our interactions are well defined. When I get a bullshit answer, I have the context to realize it's not working for me in this case and to go look elsewhere. When AI is built in to products in ways that you don't clearly understand what parts are AI and how your interactions are fed to it; that's absolutely and incurably horrible. You just have to reject the whole application; there is no other reasonable choice.

Give me a bunch of open AI models and a big GPU to play with and I'll have a great time. It's a wild world out there.

Shove a bunch of AI nonsense in my face when I didn't ask for it and I'm throwing your product out a window.

Give me a bunch of open AI models and a big GPU to play with and I'll generate twenty gigabytes of weird anime fetish content.

This is the only true use of AI

You forgot to add "and post it to Lemmy".

Also just listening and reading what people say. We don't want fucking AI anything. We understand what it might do. We don't want it.

I have just read the feature list of iOS 18.1's so-called Apple Intelligence.
TLDR: typing and sending messages for you, mostly, like one-click replies to email. Or… shifting text tone 🙄

So that confirms my fears that in the future bots will communicate with each other instead of us. Which is madness. I want to talk to a real human, not a bot that translates what the human wanted to say at around 75% accuracy, devoid of any authenticity.

If I see someone’s unfiltered written word I can infer their emotions, feelings, what kind of state they are in, etc. Cold bot-to-bot speech would truly fuck up society in unpredictable ways, undermining the foundations of communication.

Especially if you notice that most communication, even familial already happens online nowadays. So kids will learn to just ‘hey siri tell my mom I am sorry and I will improve myself’.
Mom: ‘hey siri summarize message’

My hope for the future relies on a study indicating that after 5 or so generations of training data tainted with AI generated information, the LLM models collapsed.

Hopefully, after enough LLMs have been fed LLM data, we will arrive in an LLM-free future.

Another possibility is LLMs will only be trained on historic data, meaning they will eventually start to sound very old-fashioned, making them easier to spot.

Future email writing: type the first three words, then spam-click the autocomplete on your LLM-based keyboard. Only stop when the output starts to not make sense anymore.

You can do that today with the FUTO keyboard lol. It uses a small language model for predictive text.
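
That workflow is basically greedy decoding. A toy bigram model (word counts only, nothing like FUTO's actual model) shows the idea of always taking the top suggestion:

```python
from collections import Counter, defaultdict

# "Train" a toy bigram model from a tiny made-up corpus.
corpus = "i will be late today . i will be there soon . i will call you".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(start, clicks):
    # Greedy decoding: always take the most frequent next word,
    # exactly like tapping the first suggestion over and over.
    words = [start]
    for _ in range(clicks):
        options = bigrams.get(words[-1])
        if not options:
            break  # no suggestion available, stop
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("i", 3))
```

Swap the bigram counts for a neural model and you have a predictive-text keyboard; the "only stop when it stops making sense" failure mode is exactly greedy decoding drifting off into the most statistically bland continuation.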

So kids will learn to just ‘hey siri tell my mom I am sorry and I will improve myself’.

What makes you think that kids aren't already doing things like this? Not with Siri, but it doesn't take much effort to get ChatGPT to write something for you.

Also I saw a South Park episode about this. https://en.wikipedia.org/wiki/Deep_Learning_(South_Park)

ChatGPT isn’t built into the very operating system of the phone, where you just tap "generate response" in iMessage. It is always about laziness. Privacy went away the same way, via the path of least effort, even though privacy-respecting alternatives always existed; they just required 10 seconds of extra effort.

In your own words, tell me why you're calling today.

My medication is in the wrong dosage.

You need to refill your medication is that right?

No, my medication is in the wrong dosage, it's supposed to be 10s and it came as 20s.

You need to change the pharmacy where you're picking up your medication?

I need to speak to a human please.

I understand that you want to speak to an agent, is that right?

Yes.

Chorus, 5x. (Please give me your group number, or dial it in at the keypad. For this letter press that number for that letter press this number. No I'm driving, just connect me with an agent so I can verify over the phone)

I'm sorry, I can't verify your identity please collect all your paperwork and try calling again. Click

Why ever would we be mad?

I went through a McDonald’s drive-thru the other day and had the most insane experience. For the context of this anecdote, I don’t do that often, so, what I experienced was just weird.

While not quite “AI,” the first thing that happened was an automated voice yells at me, “are you ordering using your mobile app today?”

There’s like three menu-speaker boxes, and due to where the car in front of me stopped, I’m like in between the last two. The other speaker begins to yell, “Are you ordering using your mobile app today?”

The person running drive-thru mumbles something about pull around. I do. Pass by the other menu “Are you ordering using your mobile app today?”

Dude walks out with a headset and starts taking orders from each car using a tablet.

I have no idea what is happening. I can’t even see a menu when the guy gets around to me. Turns the tablet around at me.

I realized that I was indeed ordering using the mobile app today.

This is because the AI of today is a shit sandwich that we’re being told is peanut butter and jelly.

For those who like to party: All the current “AI” technologies use statistics to approximate semantics. They can’t just be semantic, because we don’t know how meaning works or what gives rise to it. So the public is put off because they have an intuitive sense of the ruse.

As long as the mechanics of meaning remain a mystery, “AI” will be parlor tricks.

And I don’t mean to denigrate data science. It is important and powerful. And real machine intelligence may one day emerge from it (or data science may one day point the way). But data science just isn’t AI.

I find the tech interesting, but the rush to commercialize it was a bad idea. It’s not ready yet, total uncanny valley.

Literally only exciting use for it ive seen so far is that Skyrim companion. And even that doesn't work right yet.

I have rolled back, uninstalled, opted-out, or ripped apart every AI that every company is trying to shove down our throats. I wish I could do the same for search engines, but who uses the internet broadly anymore anyway.

I am impressed by the tech, I think it's amazing, but it's still utterly useless.

I have never, ever needed to interrupt my day's schedule to generate a convincing picture of Luke Skywalker fighting Batman while riding dinosaurs, I have never needed to have a text conversation with someone who seems "almost human," I mean, christ that already describes half the people I know and wish were more normal. I have never needed an article summarized badly, I enjoy reading things, I enjoy writing emails, so I can't figure out why they would make tools to take away the small pleasures we have. What exactly are they thinking?

Yesterday I gave it one more chance, asked one of the apps, I forget which, what tomorrow's weather will be like, the thing forecasted a hurricane coming right for me, a news event from last year. I'm so over AI, please someone notify me when it's really useful and can take over the menial, tedious tasks like managing my online accounts and offering financial advice or can actually help me find a job opening in my field.

All these things have been promised, and seem more out of reach than ever.

The MOST impressive thing I've seen AI do is make really, really convincing furry porn babes. The things are good at mixing features in images. Sometimes.

I used it to generate documents required by auditors for various ISO certification that few people read in full but still need to be correct in case the auditor decides to do more than skim it.

Got some documents out in a couple of hours per doc instead of days, including a glossary made by feeding it back the finished document and telling it to make a glossary. I probably did in 3 days what would have taken me a month otherwise.

Honestly it can be quite useful right now. The mistake is thinking it can do everything.

Maybe I'd be more interested in AI if there was any I with the A. At the moment, there's no more intelligence to these things than there is in a parrot with brain damage, or a human child. Language Models can mimic speech but are unable to formulate any original thoughts. Until they can, they aren't AI and I won't be the slightest bit interested beyond trying to break them into being slightly dirty (and therefore slightly funny).

Just so you know I totally agree with you but if you go far back enough in my comment history I had a really interesting (imo) discussion/argument with someone abt this very topic and the topic of how to determine if an AI 'thinks' or 'reasons' more broadly.

I wonder if we'll start recognizing these tech investor pump-n-dump patterns faster collectively, given how many have happened in such a short amount of time already.

Crypto, Internet of Things, Self Driving Cars, NFTs, now AI.

It feels like the futurism sheen has started to waver. When everything's a major revolution inserted into every product, then isn't, it gets exhausting.

Internet of Things

This is very much not a hype and is very widely used. It's not just smart bulbs and toasters. It's burglar/fire alarms, HVAC monitoring, commercial building automation, access control, traffic infrastructure (cameras, signal lights), ATMs, emergency alerting (like how a 911 center dispatches a fire station, there are systems that can be connected to a jurisdiction's network as a secondary path to traditional radio tones) and anything else not a computer or cell phone connected to the Internet. Now even some cars are part of the IoT realm. You are completely surrounded by IoT without even realizing it.

Huh, didn't know that! I mainly mentioned it for the fact that it was crammed into products that didn't need it, like fridges and toasters where it's usually seen as superfluous, much like AI.

I would beg to differ. I thoroughly enjoy downloading various toasting regimens. Everyone knows that a piece of white bread toasts differently than a slice of whole wheat. Now add sourdough home slice into the mix. It can get overwhelming quite quickly.

Don’t even get me started on English muffins.

With the toaster app I can keep all of my toasting regimens in one place, without having to wonder whether it’s going to toast my pop tart as though it were a hot pocket.

I mean, give the thing a USB interface so I can use an app to set timing presets instead of whatever UX nightmare it'd otherwise be, and I'm in. Nowadays it's probably cheaper to throw in a MOSFET and a tiny chip than to use a bimetallic strip: fewer and less fickle parts, and when you already have the capability to be programmable, why not use it. Connecting it to an actual network? Get out of here.

Bagels are a whole different set of data than bread. New bread toasts much more slowly than old bread.

I think that the dot com bubble is the closest, honestly. There can be some kind of useful products (mostly dealing with how we interact with a system, not actually trying to use AI to magically solve a problem; it is shit at that), but the hype is way too large

It's more of a macroeconomic issue. There's too much investor money chasing too few good investments. Until our laws stop favoring the investor class, we're going to keep getting more and more of these bubbles, regardless of what they are for

Yeah it's just investment profit chasing from larger and larger bank accounts.

I'm waiting for one of these bubble pops to do lasting damage, but with the amount of protections specifically for them, and money that can't be afforded to be "lost", it's just everyone else that has to eat dirt.

TimeSquirrel made a good point about Internet of Things, but Crypto and Self Driving Cars are still booming too.

IMHO it's a marketing problem. They're major evolutions taking root over decades. I think AI will gradually become as useful as lasers.

Let's see if this finally kills the AI hype. Big tech is pushing AI because it is the ultimate spyware, nothing more.

This is because AI is usually used to reduce the human cost to the company, and rarely to reduce the human labour for the customer.

That, or mass surveillance.

AI has some pretty good uses.

But in the majority of junk on the market it is nothing but marketing bloatware.

I can't really agree as a video producer. Luma, Krea, Runway, Ideogram, Udio, 11Labs, Perplexity, Claude, Firefly -> All worth more than they're charging, most with daily free options. They save me a ton of time. Honestly, the one I'm considering dropping at the moment is ChatGPT.

It does and AI is being tarnished by the hype/marketing.

Not long ago Firefox announced it would deliver client-side "AI" to describe web pages to differently-abled users. This is awesome.

Some people on Lemmy conflated AI with Large Language Models and complained about the addition. I don't blame them; not everyone is an IT pro equipped to understand the difference between machine learning models, LLMs and such. I mentioned Firefox has "AI" for client-side translation and that's a great thing. They wondered since when "AI" was used for translation. Machine learning/deep learning translation has been a thing for over a decade and it's amazing. It's not an LLM (even if LLMs are really good at translation).

The market has pushed "AI" too hard, making people cautious about it. They are turning it into the new "blockchain", where most people didn't find any benefit in the hype; on the contrary, they saw the vast majority of it being scams.

even if LLMs are really good at translation

As someone that actually played japanese RPG games translated with AI on dlsite, bullshit.

The irony is companies are being forced to implement it. Our board has literally told us we must have "AI in our product". It's a solution looking for a problem that doesn't exist.

It's because automated trading bots bid up companies whose names appear in headlines next to the word AI.

The stock market is an economic shitpost.

This just screams "The CEO read about it on linkedin while taking a dump and now feels it is vital to the company."

My boss's boss's boss asked for a summary of our roadmap. He read it, and provided his takeaways... 3 of the 4 bullet points were AI-related, and we never once mentioned anything about AI in what we gave him 😑 so I guess we're pivoting?

Okay but have you considered shoving AI down the throats of consumers and forcing them to use it? I say invest in more gigantic server farms!

Adobe Acrobat has added AI to their program and I hate it so much. Every other time I try to load a PDF it crashes. Wish I could convince my boss to use a different PDF reader.

Adobe sucks but they have sucked their whole existence. No AI needed.

I have no qualms about AI being used in products. But when you have to tell me that something is "powered by AI" as if that's your main selling point, then you do not have a good product. Tell me what it does, not how it does it.

Developer: Am I out of touch?

No, it's the consumers who are wrong.

Stakeholder: Am I pushing the wrong ideas onto the managers?

No, it's the developers who don't know how to implement the features I want.

If I could have the equivalent of a smart speaker that ran the AI model locally and could interface with other files on the system. I would be interested in buying that.

But I don't need AI in everything in the same way that I don't need Bluetooth in everything. Sometimes a kettle is just a kettle. It is bad enough we're putting screens on fridges.

I like the vast majority of my technology dumb. The last barely-smart kettle I bought (it had a little screen that showed you the temperature and allowed you to keep the water at a particular temperature for 3h) broke within a month. Now I once again have a dumb kettle; it only has the on/off button and has been working perfectly since I got it.

I could go for the fridge screen if it was focused more around showing me what was in the fridge without opening the door and making grocery lists.

Unsurprisingly. I have use for LLMs and find them helpful, but even I don't see why should we have the copilot button on new keyboards and mice, as well as on the LinkedIn's post input form.

And workers...

[image: Microsoft Copilot ad]

She looks so done with it. It's amazing how tone-deaf and incapable of detecting emotions the higher-ups must have been to OK that image. Not blaming anyone lower down who approved this; they are probably all fed up too and were happy to use it.

Plus, it's way too cold at her vast and empty warehouse hot desk, because she's wearing at least two sweaters. Please let this lady have a cubicle of her own with a little space heater.

Is that a real copilot ad?

This is the link I had I believe, but it's not loading for me now. Either it will work for you, or they pulled it. https://www.instagram.com/microsoft365/p/C7j8ipnxIiI/?img_index=1 (comments were brutal IIRC)

Related article about it: https://futurism.com/microsoft-brags-ai-attend-three-meetings

The post is still there.

I just can’t see anyone contributing anything meaningful to a meeting when they’re split across three different conversations. If that’s the case for this hypothetical employee, she’s part of the problem.

I just can’t see anyone contributing anything meaningful to a meeting when they’re split across three different conversations. If that’s the case for this hypothetical employee, she’s part of the problem.

I think the whole idea is that the AI handles two of those meetings for her (somehow) But yes, I try to put myself in the mind of someone who is enthused to finally be able to "attend" three meetings at once, and I just can't. I have a good job that I mostly enjoy, and am usually enthusiastic about my work. No fucking way.

The only people who could want this are the 1% (and wanna-be 1%), and they want it so the rest of us can attend three meetings at once to increase their wealth even faster.

It's people who brag about how hard they work and how many hours they work when other people say they hate their jobs.

And those people make me laugh. Oh really? You worked 80 hours last week? I "worked" 40, which meant about 4 hours of actual work a day, clocked out at 5 on the dot every day and spent time with my family.

I'm never contributing anything meaningful to the meetings I am continuously added to, so it would be nice to have an AI stand in. I could do the goddamn job I originally applied for instead of scrums, special project scrums, and meta scrums.

I mean, that’s exactly the advantage of slack over meetings but that doesn’t tickle middle management fancy as much.

<---Not this cat. I become highly aroused when i hear salespeople gargling out their marketing bullshit

Yeah, baby, lie for me. Mmmm call a LLM "AI" again.

fuck that's hot

Hey now, LLMs are AI!

... So is the code that makes those ghosts in Super Mario approach you when you look away and cower when you look at them.

Hi, I'm annoying and want to be helpful. Am I helpful? If I repeat the same options again when you've told me I'm not helpful, will that be helpful? I won't remember this conversation once it's ended.

Hi, which option have you told me you already don't want would you like?

Sorry, I didn't quite catch that, please rage again.

For me, if a company fails to make a clear-cut case for why a product of theirs needs AI, I'm going to assume they just want to misuse AI to cheaply deliver a mediocre product instead of putting in the necessary cost of man-hours.

I like my AI compartmentalized, I got a bookmark for chatGPT for when i want to ask a question, and then close it. I don't need a different flavor of the same thing everywhere.

It's really simple: There are a number of use cases where generative AI is a legitimate boon. But there are countless more use cases where AI is unnecessary and provides nothing but bloat, maybe novelty at best.

Generative AI is neither the harbinger of doom nor the savior of humanity. It's a tool. Just a tool. We're just caught in this weird moment where people are acting like it's an all-encompassing multipurpose tool instead of understanding it as the limited, specific tool it actually is.

It's a tool. Just a tool.

And, more often than not, it's a poorly implemented tool that didn't need to be added to the product in the first place.

Yes, that was literally my point. A plumbing wrench is a perfectly useful and wonderful tool, but it isn't going to be much help in the middle of brain surgery. Tools have use cases; they can't be applied to any situation

I don't know anyone who is actively looking for products that have "AI".

It's like companies drank their own Kool aid and think because they want AI, so do the consumers. I have no need for AI. My parents don't even understand what it is. I can't imagine Gen Z gives a hoot.

AI is not even truly AI right now; there's no intelligence. It's a statistical model trained on billions of stolen data points to spit out the thing most similar to what fits the prompt. It can get really creepy because it's very convincing, but on closer inspection it has jarring mistakes that trigger uncanny-valley shit. "Hallucinations" is giving it too much credit; maybe when we get AGI in a decade that'll be fitting.

You're not wrong, but the implementation doesn't really matter I think. If AI could spit out sentences convincingly enough, I'd be okay with that. But, yeah, it's not there yet.

Absolutely, I was pretty upset when Google added Gemini to their Messages app, then excited when the button (that you can't remove) was removed! Now I've updated Messages again and they brought the button back. Why would you ever need an LLM in a texting app?

Edit: and also Snapchat, Instagram, and any other social media app they're shoveling an AI chat bot into for no reason

Edit 2: AND GOOGLE TELLING ME "Try out Gemini!" EVERY TIME I USE GOOGLE ASSISTANT ON MY PHONE!!!!!

It's farcical.

When a company introduces something consumers want, we will research and find a way to get it and use it ASAP. Nobody needs to interrupt our workflow to tell us about it. I don't remember getting any in-app notifications for the Gmail select all "feature," but I figured it out pretty damn quickly.

I'd rather talk to my cat than an AI chat bot

My cat's replies make more consistent sense, and I don't need to worry about him plagiarizing something incorrectly.

At least when you say certain keywords to your pets they show some emotions!

Try saying "potty?" to an LLM and decoding its response to gauge if it needs to go potty or not, Google!

AI in consumer devices at this point stands for data harvesting, wonky functionality and questionable usefulness. No wonder nobody wants that crap.

They just don't get it. Once everyone is using an AI toilet and an AI toothbrush, they'll sing a different tune.

I definitely need a toilet that remembers and analyzes my shit. Yes.

They'll try to sell it to you as a way to detect possible health issues early. But it will just be used to analyze your food patterns and shove McDonald's ads at you.

For some reason I imagine a toilet that automates a stool test and blood test and gives you a health report every month.

If the toilet is receiving a blood sample I have bad news for your monthly health report.

A stool test sure, but I'm not going to trust a toilet to use a sterile needle to draw blood.

I've been applying similar thinking to my job search. When I see AI listed in a job description, I immediately put the company into one of 3 categories:

  1. It is an AI company that may go out of business suddenly within the next few years leaving me unemployed and possibly without any severance.
  2. Management has drunk the Kool-Aid and is hoping AI will drive their profit growth, which makes me question management competence. This also has a high likelihood of future job loss, but at least they might pay severance.
  3. The buzzword was tossed in to make the company look good to investors, but it is not highly relevant to their business. These companies get a partial pass for me.

A company in the first two categories would need to pay a lot to entice me and I would not value their equity offering. The third category is understandable, especially if the success of AI would threaten their business.

I get that AI has its uses, but I don't need my mouse to have anything AI related (looking at you, Logitech).

It's because consumers aren't the dumbasses these companies think they are and we all know that the AI being shoved into everything fucking sucks worse than the systems we had before "AI."

Honestly AI is the 3D glasses of consumer products and computing. There are a couple of places and applications where it absolutely improves things, everywhere else it's just an overhyped extra that they tack on in hopes that it will drive up interest.

I really fucking hated the android update where holding the power button summons Gemini before actually giving you the shut down menu.

Oh yes, I was so confused when trying to restart my phone and holding the power button just summoned Google Assistant.

AI is garbage.

AI is just an excuse to lay off your employees and replace them with an objectively less reliable computer program that somehow statistically beats us at logic.

I barely trust organics. Some CEO being rock hard about his newest repertoire of buzzwords doesn't help.

Think of the savings if you replace the CEO with an AI!

Yet companies are manipulating survey results to justify the FOMO jump onto the AI bandwagon. I don't know where companies get the idea that people want AI (looking at you, Proton).

I'm actively turned off because they suck up my data to use it.

I love the idea of local only AI and would use those products, and do play with local LLM/Image products.

Cuz everyone knows it's BS, or mostly BS with extra data mining

For the first time in years I thought about buying a new phone: the S23 Ultra. The previous versions had been improving significantly, but the price was a factor. Then I got a promotion and figured I would splurge on the S24 Ultra, but it was all about AI, so I just stayed where I am... my current phone does everything anyway.

Yeah, and that is largely fueled by two things: poor/forced use of AI, and anti-AI media sentiment (which is in turn fueled by reactionary/emotional narratives that keep hitting headlines, often full of ignorance).

AI can still provide actual value right now and can still improve. No it's not the end-all but it doesn't have to solve humanity's problems to be worth using.

This unfortunate situation is largely a result of the rush to market, because that's the world we live in these days. Nobody gives a fuck about completing a product; they only care about completing it first. Fuck quality, that can come later. As a sr software engineer myself, I see it all too often in the companies I've worked for. AI was heralded as Christ's second coming that would magically do all of this stuff while still in relative infancy, ensuring that an immature product was rushed out the door and applied to everything possible. That's how we got here, and my first statement is where we are now.

Listen up you kids, this old fart saw this same crap in the 70s when LCDs became common and LCD clocks became the norm. They felt that EVERYTHING needed to have an LCD clock stuck in it, lamps, radios, blocks of cheese, etc. A similar thing happened in the internet boom/bust in the late 90s where everyone needed a website, even gas stations. Now AI is the media and business darling so they are trying to stick AI in everything, partly to justify pissing away so much money on it. I can't even do a simple search on FB because it wants to force me to use the damn meta AI instead.

I occasionally use ChatGPT to find info on error code handling and coding snippets, but I feel like I'm in some sort of "can you phrase it exactly right?" contest. Anything with even the slightest vagueness to it returns useless garbage.

I'm 28, but I remember hating autocomplete suggestions in UIs and I still do. And they are still here.

I have at least two LCD clocks in my house that I will never bother to set, it's an utterly useless feature I will never use.

Hopefully, if people actively avoid products with AI, it will mean this feature doesn't become the default.

I keep thinking about how Google has implemented it. It sums up my broader feelings pretty well. They jammed this half-baked "AI" product into the very fucking top of their search results. I can't not see it there; it's huge and takes up most of my phone's screen after the search, but I always have to scroll down past it because it's wrong pretty often, or misses important details. Even if it sounds right, because I've had it be wrong before, I have to check the other links anyway. All it has succeeded at doing in practice is making me scroll down further before I get to my results (not unlike their ads, I might add). Like, if that's "AI," it's no fucking wonder people avoid it.

I absolutely hate having to scroll past garbage AI answers I don't care to see, nor would I trust

To be honest, I lost all interest in the new AMD CPUs because they fucking named the thing "AI" (with zero real-world application).

I'm in the market for a new PC next month and I'm gonna get the 7800X3D for my VR gaming needs.

I'll use it more when it has a proven, reliable use.

AI is a neat toy... but that's all it is. It's horrible at almost every real-world application it's been forced into, and that's before you wander into the whole shifting minefield of ethical concerns or consider how wildly untrustworthy they are.

I was at the optometrist recently and saw a poster for some lenses (Transitions) that somehow had "AI"... I was like, WTF, how / why / do you need to carry a small supercomputer around with you as well?

We're seeing a bunch of promises made when LLMs were the novel hot shit. Now that we've plateaued on how useful they are to the average consumer, every AI product is just a beta test that will drop support as soon as something newer and shinier comes along.

To me AI helps me bang out small functions and classes for personal projects and act as a Google alternative for mundane stuff.

Other than that any product that uses it is no different than a digital assistant asking chat gpt to do things. Or at least that seems like the perception from a consumer level.

Besides, it's bad enough that I probably use a home's worth of energy trying to make failing programming demos, much less ordering pizza from my watch or whatever.

I hate the feeling that, when it comes to support, they keep dumping real humans who can communicate and respond to issues outside a rigid framework. AI is also only as good as its data and design. It feels like someone built a self-driving car, stuck it on a freshly paved and painted highway, and decided it was good to go. Then you take it on an old rural road and end up hitting a tree.

In Defence of AI web search from my experiences:

When I have no idea what I'm talking about and have no (or incorrect) terminology, I have found Copilot and GPT-4 (separate, not the all-in-one) to be game-changing compared to flat Google.

I'm not using the data straight off the query result, but the links to the data that was provided in the result.

And embarrassingly, when I'm drunk and babbling into a microphone, Copilot finds the links to what I am looking for.

Now if you are just straight using the results and not researching the answers your mileage will vary.

Is that enough to mitigate how much worse bare Google is than it was ten years ago, back when they were winning against SEO bots? In my experience, it hasn't been, but I've not done enough AI-aided web searches to have a good sample size.

Who knew that new technologies that are great for businesses' bottom lines wouldn't also be great for consumer satisfaction.

Say it ain't so.

Initially great for bottom lines. Then consumer dissatisfaction finds a way.

Unless you have a legally reinforced monopoly.

More like people know when it's just being used as a buzzword, and they're smart enough to avoid it when that's (often) the case.

Well, maybe if they weren't using AI as a hypeword and just called it adaptive or GPT.

I think there is potential for using AI as a knowledge base. If it saves me hours of having to scour the internet for answers on how to do certain things, I could see a lot of value in that.

The problem is that generative AI can't determine fact from fiction, even when it has enough information to do so. For instance, I'll ask ChatGPT how to do something and it will very confidently spit out a wrong answer 9/10 times. If I tell it that that approach didn't work, it will respond with "Sorry about that. You can't do [x] with [y] because [z] reasons." The reasons are often correct, but ChatGPT isn't "intelligent" enough to ascertain, from data it already has, that an approach will fail before suggesting it.

It will then proceed to suggest a variation of the same failed approach several more times. Every once in a while it will eventually pivot towards a workable suggestion.

So basically, this generation of AI is just Cliff Clavin from Cheers: able to string together coherent sentences of mostly bullshit.
