Study Finds Consumers Are Actively Turned Off by Products That Use AI

morrowind@lemmy.ml to Technology@lemmy.world – 2196 points –
futurism.com


AI is garbage.

AI is just an excuse to lay off your employees in favor of an objectively less reliable computer program, one that somehow still statistically beats us at logic.

I've used LLMs a lot over the past couple of years. Pro tip: use them a lot and learn the models. They look much more intelligent as you, the user, become better. Obviously, if you prompt "Write me a shell script to calculate the meaning of life, make my coffee, and scratch my nuts before 9 AM," it will be a grave disappointment.

If you first design a ball-fondling/scratching robot, use multiple instances of LLMs to help you plan it out, and so on, then you may be impressed.

I think one of the biggest problems is that most people interacting with LLMs forget they are running on computers: they are digital, and they are not like us. You can't make the same assumptions you'd make with humans. Even with humans, doing that usually just gets you something you didn't want because you weren't clear enough. We are terrible at giving instructions, and this is something I hope AI will help us learn to do better, because ultimately bad instructions or incomplete information can't determine anything real. Computers are logic machines. If you tell a computer to go ride a bike, at best it'll do all the work to embody itself in a robot, buy a bike, and ride it. But wait, you don't even know it did, because you never told it to record the ride...

A very few of us are pretty good at giving computers clear instructions some of the time. I have also found that simply forcing models to reason in context is powerful. You have to know to tell it something like, "Use a drill-down, tree-style approach to problem solving. Use reflection and discussion to explore and find the optimal solution to reasoning through the problem." It might still give you bad results; that is why you have to experiment. It is a lot of fun if you just let your thoughts run wild, and right now it takes a lot of creative thinking to get the most out of these models. They should all be 110% open source and free for all. BTW, Gemini 1.5, Claude, and Llama 3.1 are all great, and Llama you can run locally or on a rented GPU VM. OpenAI I'm on the fence about, but given who is involved over there, I wouldn't say I trust them, especially since they are pursuing regulatory capture.
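The "force it to reason in context" trick above can be sketched in code. This is a minimal, hypothetical example: the `REASONING_PREAMBLE` text comes from the comment, the `build_reasoning_prompt` helper is made up for illustration, and the role/content dict format is just the common chat-completions convention most LLM APIs accept. You'd pass the resulting list to whichever model client you actually use.

```python
# Hypothetical sketch: wrap a task in a system message that pushes
# the model to reason step by step before answering. No real API is
# called here; the output is just the message list you would send.

REASONING_PREAMBLE = (
    "Use a drill-down, tree-style approach to problem solving. "
    "Use reflection and discussion to explore and find the optimal "
    "solution to reasoning through the problem."
)

def build_reasoning_prompt(task: str) -> list[dict]:
    """Return chat messages that prepend the reasoning instructions."""
    return [
        {"role": "system", "content": REASONING_PREAMBLE},
        {"role": "user", "content": task},
    ]

messages = build_reasoning_prompt("Plan the control loop for a scratching robot.")
print(messages[0]["role"])  # prints "system"
```

The point of keeping the preamble separate from the task is that you can experiment with different reasoning styles against the same task and compare results, which is exactly the kind of trial-and-error the comment recommends.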

Asking the chat models to hold a self-discussion and to use or simulate metacognition really seems to help. Play around with it. Oftentimes I am deep in a chat and I learn from its mistakes, and it kind of learns from my mistakes and feedback. It is all about working with them, not against them. At this point LLMs are just feed-forward neural networks trained on supercomputer clusters. We don't really know what they are fully capable of, because it is so hard to quantify, especially when you don't know exactly what has been learned.

Q-learning in language is also an interesting methodology I've been playing with. With an image generator, for example, you can just add "(Q-learning quality)" to a prompt and you may get more interesting, higher-quality results, which is itself very interesting to me.