aiccount

@aiccount@monyet.cc
0 Posts – 39 Comments
Joined 12 months ago

"A solution in search for a problem" is a phrase used way to much, and almost always in the wrong way. Even in the article it says that it has been solving problems for over a year, it just complains that it isn't solving the biggest problems possible yet. It is remarkable how hard it is for people to extrapolate based on the trajectory. The author of this paper would have been talking about how pointless computers are if they were alive in the early 90s, and how they are just "a solution in search for a problem".

Yeah, I absolutely agree. About a month ago, I would have said that Suno was clearly leading in AI music generation, but since then, Udio has definitely taken the lead. I can't imagine where things will be by the end of the year, let alone the end of the decade. This is why it's so crazy to me when people look at generative AI and act like it's no big deal, just a passing fad or whatever. They have no idea that there is a tsunami crashing down on us all, and they always seem to be the ones who bill themselves as the weather experts who have it all figured out. Nobody knows the implications of this, but it definitely isn't an inconsequential tech.

You don't have to be delusional to make sacrifices trying to make a difference. I'm so sick of people pretending there is nothing they could possibly do to help, so they just keep hurting others. It's just like every discussion on factory farms. At least try to help. It will make you feel better, and you can quit getting so defensive when people point out things that can be done.

Hundreds of Palestinians have been killed by Israeli forces since the start of the war in Gaza last October

I feel like I haven't heard a number this low since the end of October. I've been hearing >30,000 lately. Am I mistaken or something? I thought Reuters was supposed to generally do a pretty good job. What's the deal?

Why in the world is this being downvoted? An absurd number of dolphins are killed as an accidental byproduct of commercial net fishing every single day. What the hell goes through the mind of a fool who downvotes this comment?

Most positive use cases are agent-based, and the average user doesn't have access to good agent-based systems yet because they require a bit of willingness to do some "coding". That will soon not be the case, though. I can give my crew of AI agents a mission, for example, "find all the papers on baby owl vocalizations and make 10 different charts of the frequency range relative to their average size after each of their first 10 weeks of life", and come back an hour later to something that would have taken a grad student 100 hours just last year. Right now I have to wait an hour or so; soon it will be instant.

The real usefulness of these agents today is enormous; it is just outside the view of most people because their everyday lives don't call for this kind of power.
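
For anyone curious what giving a crew of agents a "mission" actually looks like, here's a minimal sketch using CrewAI (one of the frameworks for this kind of thing). The roles, goals, and task text are made up for illustration, and the exact constructor arguments can differ between CrewAI versions, so treat it as a starting point rather than a recipe:

```python
# Rough sketch of a "give a crew a mission" setup with CrewAI.
# The roles and task text are invented for illustration only.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Literature researcher",
    goal="Find papers on baby owl vocalizations",
    backstory="You search the literature and collect relevant papers.",
)
analyst = Agent(
    role="Data analyst",
    goal="Chart vocalization frequency ranges against owl size by week of life",
    backstory="You turn the collected findings into charts.",
)

mission = Task(
    description=(
        "Find papers on baby owl vocalizations and produce 10 charts of "
        "frequency range relative to average size for each of the first 10 weeks."
    ),
    expected_output="A set of 10 charts, with the sources used for each.",
    agent=analyst,
)

crew = Crew(agents=[researcher, analyst], tasks=[mission])
result = crew.kickoff()  # hand off the mission, come back later for the result
print(result)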
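```

You'd still need to wire the agents up with search and charting tools before expecting real output; the point is just that the mission is a plain-language task handed to a small team of role-specific agents.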

Yeah, it would be great to catch that one psycho, but it would be way better to catch the psychos that do this all day every day. I think that's the point they were trying to make.

Yeah, I've just set up a hotkey that types something like "back up your answer with multiple reputable sources", and I always paste it at the end of everything I ask. If it can't find webpages to show me to back up its claims, then I can't trust it. Of course, this isn't the case with coding; there I can actually run the code to verify it.

Yeah, it's a trajectory thing. Most people see the one-shot responses of something like ChatGPT's current web interface on OpenAI's website and think that's where we are at. It isn't, though; the cutting edge of what is currently openly available to people is things like CrewAI or AutoGen running agents powered by models like Claude Opus or Llama 3, and maybe the latest GPT-4 update.

When you use agents, you don't have to baby every response. The agents can run code, test code, check the latest information on the internet, and more. This way you can give a complex instruction, let it run, and come back to a finished product.
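
To make that loop concrete, this is roughly what the two-agent pattern looks like in AutoGen: one agent writes and revises, the other executes the code and feeds the results back until the task is done. The config values are placeholders and option names can vary between AutoGen versions, so this is a sketch of the shape of it, not a drop-in script:

```python
# Minimal sketch of the "give an instruction, let it run, come back later"
# loop using AutoGen's two-agent pattern. Config values are placeholders.
import autogen

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)

# The user proxy executes any code the assistant writes and feeds the
# results back, so the assistant can test and revise without hand-holding.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # run unattended
    code_execution_config={"work_dir": "workspace", "use_docker": False},
)

user_proxy.initiate_chat(
    assistant,
    message="Write and test a script that fetches today's arXiv cs.CL titles "
            "and saves them to titles.csv.",
)
```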

I say it is a trajectory thing because when you compare what was cutting-edge just one year ago, basically one-shot GPT-3.5, to an agent network running today's latest models, the difference is stark, and when you go a couple of years before that, to GPT-2, it is way beyond stark. When you go a step further and realise that a lot of custom hardware is being built (basically LLM ASICs, which have traditionally meant something like a 10,000x speedup over general-purpose GPUs), you can see that instant agent-based responses will soon be the norm.

All this compounds when you consider that we have not hit a plateau and that better datasets and more compute are still producing better models. Not to mention that other architectures, like the state-space model Mamba, are making remarkable achievements with very little compute so far. We have no idea how powerful things like Mamba would be if they were given the datasets and training that the current popular models are getting.

It is amazing to watch someone's mind melt like this just because the truth of their food source is pointed out to them. This is a full-blown insane comment.

This is an issue with many humans I've hired, though. Maybe they try to cut corners and do a shitty job, but I occasionally check; if they are bad at their job, I warn them, correct them, and maybe eventually fire them. For lots of stuff, AI can be interacted with in a very similar way.

This is so similar to many people's complaints about self-driving cars. Sure, there will still be accidents; they are not perfect, but neither are human drivers. If we hold AI to some standard that is way beyond people, then sure, it's not there. But if we say it just needs to be better than people, then it is there for many applications, and more importantly, it is rapidly improving. Even if it were only as good as people at something, it would still be way cheaper and faster. For some things, it's worth it even when it isn't as good as people yet.

I have very few issues with hallucinations anymore. When I use an LLM for anything involving facts, I always tell it to give sources for everything, and I can have another agent independently verify the sources before I see them. Oftentimes I provide the books or papers that I want it to source from specifically. Even if I am going to check all the sources myself after that, it is still way more efficient than if I did the whole thing myself. The thing is, with the setups I use, I literally never have it make up sources anymore. I remember that kind of thing happening back in the days when AI didn't have internet access and there really weren't agents yet. I realize some people are still back there, but in the future (that many of us are already in) it's basically solved. There are still logic mistakes and such, so that stuff can't be 100% depended on, but if you have a team of agents going back and forth to find an answer, then pass it to another team of agents to independently verify the answer, and have it cycle back if a flaw is found, many issues just go away. Maybe some mistakes make it through this whole process, but the same thing happens sometimes with people.
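
Stripped of any particular framework, the draft-and-verify cycle I'm describing is basically this control flow. ask_agent() is a placeholder you'd wire to whatever model or agent framework you actually use; the sketch only shows the loop, not a real API:

```python
# Framework-agnostic sketch of the draft/verify cycle described above.
# ask_agent() is a hypothetical stand-in for your actual LLM/agent call.

def ask_agent(system_prompt: str, user_prompt: str) -> str:
    raise NotImplementedError("plug in your LLM / agent framework call here")

def answer_with_verification(question: str, sources: str, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):
        # Drafting agent: answer using only the supplied sources, citing them.
        draft = ask_agent(
            "Answer using ONLY the provided sources. Cite a source for every claim.",
            f"Question: {question}\nSources:\n{sources}\nReviewer feedback:\n{feedback}",
        )
        # Independent verifier: check every cited claim against the sources.
        verdict = ask_agent(
            "Check every cited claim against the sources. Reply APPROVED, or list the problems.",
            f"Draft:\n{draft}\nSources:\n{sources}",
        )
        if verdict.strip().startswith("APPROVED"):
            return draft
        feedback = verdict  # cycle back with the objections
    return "No verified answer after max_rounds; escalate to a human."
```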

I don't have the link on hand, but there have been studies showing GPT-3.5 working in agentic cycles performing as well as or better than GPT-4 out of the box. The article I saw that in was saying that, essentially, there are already people using what GPT-5 will most likely be, just by running teams of agents with the latest models.

Yeah, the current popular LLMs absolutely are; you couldn't be more right.

We were talking about "AI" though. Are you implying that you think some day AI might be capable of creativity, and that creativity isn't strictly a human trait?

Anybody who gets so triggered and defensive when someone points out how disgusting factory farms are doesn't have a diet that they are proud of. Whether your cognitive dissonance allows you to acknowledge that or not is a different story.

I think having it give direct quotes and specific sources would help your experience quite a bit. I absolutely agree that if you just use the simplest forms of current LLMs and the "hello world" agent setups, there are hallucination issues and such, but a lot of this is no longer an issue once you get deeper into it. It's just a matter of time until the tools most people can easily use have this stuff baked in; none of it is impossible. I mean, I pretty much always have my agents tell me exactly where they get all their information from. The exception is when I have them writing code, because there the proof is in the results.

This is a really great way to phrase it. I am very curious to see if this difference in phrasing would really be received differently than the more blunt approach, which certainly doesn't seem to work for most people. Hopefully, we will all have AIs soon that can spoon feed anyone who can't connect the dots on their own.

It blows my mind that people can be reminded of the mass slaughter that is happening daily and think that it must somehow be excusing the one-off brutal slaughter of an individual. I always just assume that people hate to be reminded of the implications of their "sustainable" wild-caught tuna or whatever.

Yeah, you may be able to get all the way to a playable game if you use that prompt in a well set up AutoGen app. I would be interested to see if you give it a shot, so please share if you do. It's such a cool time to be alive for "idea" people!

I think without anything akin to extrapolation, we just need to wait and see what the future holds. In my view, most people are almost certainly going to be hit upside the head in the not too distant future. Many people haven't even considered what a world might be like where pretty much all the jobs that people are doing now are easily automated. It is almost like, instead of considering this, they are just clinging to some idea that the 100-meter wave hanging above us couldn't possibly crash down.

Yeah, you are definitely onto something there. If you are interested in checking out the current state of this, it is called "AutoGen". You can think of it like a committee of voices inside the bot's head. It takes longer to get stuff out, but it is much higher quality.

It is basically a group chat of bots working together on a common goal, but each with their own special abilities (internet access, APIs, code-running ability...), their own focuses, concerns, etc. It can be used to make anything; most projects right now seem to be focused on application development, but there is no reason it can't be stories, movie scripts, research papers, whatever. For example, you can have a main author; an editor fine-tuned on some editing guidelines/books; a few different fact checkers with access to the internet or datasets of research papers (or whatever reference materials) who are required to list sources for anything the author says (if no source can be found, the fact checkers tell the author, who must revise what they've written); and whatever other agents you can dream up. People are using designers, marketers, CEOs... Then you plug in some API keys, maybe give them a token limit, and let them run wild.
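
For a rough idea of what that looks like in code, here's a bare-bones AutoGen group chat with an author, an editor, and a fact checker. The role prompts are invented for illustration, and option names may differ between AutoGen versions; treat it as a starting point:

```python
# Sketch of a "group chat of bots" using AutoGen's GroupChat.
# Role prompts and config values are placeholders for illustration.
import autogen

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

author = autogen.AssistantAgent(
    name="author",
    system_message="You write the draft and revise it when given feedback.",
    llm_config=llm_config,
)
editor = autogen.AssistantAgent(
    name="editor",
    system_message="You edit the draft for clarity and style.",
    llm_config=llm_config,
)
fact_checker = autogen.AssistantAgent(
    name="fact_checker",
    system_message="You demand a source for every claim and reject anything unsourced.",
    llm_config=llm_config,
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,  # this crew only writes and reviews text
)

groupchat = autogen.GroupChat(
    agents=[user_proxy, author, editor, fact_checker],
    messages=[],
    max_round=20,  # rough turn budget so they don't run forever
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Write a sourced overview of owl vocal development.")
```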

A super early version of this idea was ChatDev. If you don't want to go down the whole rabbit hole and just want a quick glimpse, skip ahead to 4:25; ChatDev has an animated visual representation of what is happening. These days AutoGen is where it's at, though, and this same guy has a bunch of videos on it if you are looking to go a bit deeper.

Yeah, to be clear, I'm not arguing that current LLMs are as creative and intelligent as people.

I am saying that even before babies get human language input, they still get input from people in order to be made. The baby's algorithm for producing that spark is modelled on previous humans by the human data that is DNA. These future intelligent AIs will also be made from data that humans make. Even our current LLMs are not purely human language input; they also have an algorithm doing stuff with that data in order to show us the, albeit relatively weak, "intelligent spark" that it had before it got all that human language input.

Chatbots are not new; they started around 1965. Objectively, GPT-4 is more creative than the chatbots of 1965. The two are not equally able to create. This is an ongoing change, and in the future AI will be more creative than today's most creative AIs. AI will most likely continue on its trajectory and some day, if we don't all get destroyed, it will eventually be more intelligent and creative than humans.

I would love to hear a rebuttal to this that doesn't just base its argument on the fact that AI needs human language input. A baby and its spark are not impressively intelligent. What makes that baby intelligent is its initial algorithm plus the fact that it gets human language data. Requiring that AI do what the baby does without the human language data that babies get makes no sense to me as a requirement.

I think there may be some confusion about how much energy it takes to respond to a single query or generate boilerplate code. I can run Llama 3 on my computer, and it can do those things no problem. My computer would use about 6 kWh if I ran it for 24 hours; a person, in comparison, takes about half of that. If my computer spends 4 hours answering queries and writing code, then it would take 1 kWh, and that would be a whole lot of code and answers. The whole thing about powering a small town is a one-time cost when the model is trained, so to determine whether that is worth it, it needs to be spread over everyone who ends up using the model that is produced. The math for that would be a bit trickier.
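
Spelling out that back-of-the-envelope math with the same rough numbers:

```python
# Back-of-the-envelope check of the numbers above. These are the same rough
# estimates used in the comment, not measurements.
pc_daily_kwh = 6.0                  # computer running flat out for 24 h
pc_power_kw = pc_daily_kwh / 24     # = 0.25 kW (250 W)

human_daily_kwh = pc_daily_kwh / 2  # "about half of that" ~ 3 kWh/day
                                    # (a ~2,500 kcal/day diet is ~2.9 kWh)

inference_hours = 4
inference_kwh = pc_power_kw * inference_hours
print(f"4 h of local inference ~ {inference_kwh} kWh")  # ~ 1.0 kWh

# Training ("powering a small town") is a one-time cost; the per-user figure
# is train_kwh / number_of_users, which is the trickier part of the math.
```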

Compared to the amount of energy it would take to produce a group of people who can answer questions and write code, I'm very certain that the AI-model method uses considerably less. Hopefully, we don't start making our decisions about which one to produce based on energy efficiency. We might, though; if the people who choose the fate of the masses see us as livestock, we may end up having our numbers reduced in the name of efficiency. When cars were invented, horses didn't all end up living in paradise. There were just a whole lot fewer of them around.

When people show outrage about the abuse of a single animal, it is in no way "shoehorning" or a "non sequitur" to point out the massive animal abuse that many people are supporting. I understand that people hate hearing about it, but it's still true.

Yeah, I would be unable to respond in any meaningful way if I were trying to argue your side as well. I know why I'm downvoted. I'm downvoted because I point out a disgusting habit that many people have and hate to think about. That's fine, though; if I can get through to a single person, it is worth it. Think hard about which side you are on here; it's not a good side at all. Deep down you know that. Sometimes anger is the appropriate response. You'd be angry too if you developed a moral compass.

Well, then you didn't read very many of my comments. I made that first comment because the post I responded to was so absurd that I just exaggerated the ridiculousness of what they said. Of course AI is capable of creativity and intelligence. If you look at the long back and forth that this sparked, you will see that this is my stance. After I made this over-the-top, very sarcastic comment, OP corrected themself to clarify that when they said "AI" they actually only meant the current state of LLMs. They have since admitted that it is indeed true that AI absolutely can be capable of creativity and intelligence.

Even those future "real" AIs are going to be taking in human input and regurgitating it back to us. The only difference is that the algorithms processing the data will continue to get better and better. There is not some cutoff where we go from 100% unintelligent chatbot to 100% intelligent AI. It is a gradual spectrum.

I'm sorry you've been so hurt, I hope you get better.

I'm sorry it is so hard for you to make the connection between one abused animal and many abused animals. I don't know what else to say. This is textbook cognitive dissonance. Two things couldn't be more related.

Alright, no big deal. But yeah, your gut instinct was correct when you assumed there was a missing /s. I don't really like the /s that much, especially in situations where it is so obvious.

If you had read down through this thread first, then you would have seen the obviousness of the /s. I don't think my comment history outside of this thread would have done much, since I don't generally talk about this stuff. I just meant if you had looked more than a couple of comments into this particular back-and-forth discussion.

Standing up for what you believe isn't sanctimonious. I hope you eventually learn to quit caring so much what others think about you and that you can start to express your true opinions. Sure, some people will be offended and walk away, but those aren't the best types of people anyway. Quality is much more important than quantity when it comes to who you spend your time with.

Is this how you see human intelligence? Is human intelligence made without the input of other humans? I understand that even babies have some sort of spark before they learn anything from other people, but don't they have the human DNA input from their human parents? Why should the requirement for AI intelligence be no human input when even human intelligence seemingly requires human input to be made?

Sorry, lots of questions, just food for thought I suppose.

There is a very good reason why you can't even attempt to explain your justification for factory farms, and all you can do is say "...aka the dumbest take".

You have no moral justification; all you have is childish selfishness with no regard for anyone but yourself. Your entire life has led up to this: the best you can do is try to tear down compassionate people because you think it will give you a temporary feeling of not being disgusting.

You could do better, but first you need to at least want to develop willpower and self-control. I hope for your sake that you never have to experience the hell that you so giddily inflict on others.

Yeah, I'm the one without regard for others here. The whole side you are arguing for is not having regard for others. That's literally what this discussion is about. I say "have regard for others", and you say "no, lol".

It is hilarious to see you try to take the moral high ground on anyone. You cannot begin to grasp what moral behavior is.

If only they all simply got a cut along the superior vena cava. I know it would be great if they all had wonderful, happy deaths, but unfortunately they simply don't. For example, anyone who eats factory-farm eggs has the fact that countless baby chicks are thrown into blenders while still alive on their conscience. It is great when people state what they think happens on those farms, because it gives an opportunity to point out what is actually happening. Hopefully, more interactions like this will help end the hypocrisy of it all.

Look at all the downvotes I'm getting; people absolutely latch onto anything that makes them feel like the bad people are the ones who point out how awful these farms are.

Imagine you go over to someone's house, and as soon as you walk in, you get overwhelmed by the smell of feces. You walk into the living room, and there is a dog in a cage that it barely fits into. The cage is so tight around its body that it is unable to turn around. You realise there are inches of fecal sludge caked into the bottom of the entire cage. Upon close inspection, you realise that the teeth of the dog have been removed. You are told that by removing the teeth, it can't bite. You ask how it doesn't get so sick that it dies, and you are shown a handful of pills that it is given that fight off its infections and diseases.

You are absolutely disgusted, and you rightfully say so. The response of the owner is this, "This is the same tired argument of 'Nobody can have pets!' That always gets brought up."

This is exactly what you just did.

I never said anything about anyone not being allowed to eat meat, but you have been so conditioned that whenever anyone points out how bad factory farms are, you immediately try to defend them by acting like the only possible way to eat meat is to do it that way. This is not because you are an idiot; it is because of how clever and motivated the bastards doing this to animals are. They are able to convince good-meaning, kind people like yourself to fight in their defense whenever anyone tries to challenge them.

There are many people, now and all throughout history, who eat meat in a way that is not deplorable, but that way doesn't make large factory farms rich, and it doesn't put more money into the billionaires' pockets. So they recruited you, and many others, to work for them. They are very smart, and they succeeded.

By the way, I have no idea how you've taken anything I said to mean that I think it is OK to machine-gun dolphins if you also eat chickens. I never said anything remotely like that. I agree wholeheartedly that that is, indeed, a very dumb take.

I was agreeing with you. I'm so sick of people thinking that "someday AI might be creative". Like, no, it's literally impossible unless someday AI becomes human (impossible), because humans are the only thing capable of creativity. What have I said that you disagree with? You're not one of them, are you? What's with all this obsessive AI love?

I wasn't talking about the potential future downvotes from that comment. I was talking about all the past downvotes that I already got on my original comment.

As an aside, I'm pretty sure that no amount of downvotes can grant someone martyr status.

Yes, it is literally impossible for any AI to ever exist that can be creative. At no point in the future will it ever create anything creative, that is something only human beings can do. Anybody that doesn't understand this is simply incapable of using logic and they have no right to contribute to the conversation at all. This has all already been decided by people who understand things really well and anyone who objects is obviously stupid.

Wait till you hear what happens on the factory farms that nearly everyone's meat and dairy comes from. Animals would be lining up for a chance to be treated as well as this dolphin that died at the hands of this bastard.
