SirGolan

@SirGolan@lemmy.sdf.org
0 Posts – 57 Comments
Joined 1 year ago

It's just so tone deaf. And he's totally lying about users not supporting the blackout. In all the subreddits I was on where the mods asked people what they wanted to do, most of the comments were in favor of keeping them dark indefinitely. The rest were agreeing to the blackout in general. I don't remember seeing a single person objecting.


Wait a second here... I skimmed the paper and GitHub and didn't find an answer to a very important question: is this GPT3.5 or 4? There's a huge difference in code quality between the two and either they made a giant accidental omission or they are being intentionally misleading. Please correct me if I missed where they specified that. I'm assuming they were using GPT3.5, so yeah those results would be as expected. On the HumanEval benchmark, GPT4 gets 67% and that goes up to 90% with reflexion prompting. GPT3.5 gets 48.1%, which is exactly what this paper is saying. (source).
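For anyone curious, reflexion prompting is roughly: generate code, run it against tests, feed the failure output back, and ask the model to try again. A rough sketch of the loop (the model name and the run_tests helper are my own stand-ins, not the benchmark's actual harness):

```python
# Rough sketch of reflexion-style prompting: generate code, test it,
# show the model its own failures, and let it retry. The run_tests
# callable (returns (passed, feedback)) is an assumed stand-in.
import openai  # pre-1.0 style API

def ask(messages):
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return resp["choices"][0]["message"]["content"]

def solve_with_reflexion(task, run_tests, max_rounds=3):
    messages = [{"role": "user", "content": f"Write a Python function for: {task}"}]
    code = ask(messages)
    for _ in range(max_rounds):
        passed, feedback = run_tests(code)
        if passed:
            break
        # Reflexion step: feed the failure back and ask for a revision.
        messages += [
            {"role": "assistant", "content": code},
            {"role": "user", "content": (
                "Those tests failed:\n" + feedback +
                "\nReflect on what went wrong and rewrite the function."
            )},
        ]
        code = ask(messages)
    return code
```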


Etsy employee #3 or so here, but I haven't worked there in more than a decade. Rob is a great guy, but I don't think he could have grown Etsy the way it has grown. I'm sure some people will say that's not a bad thing, but my response is that you probably wouldn't know about Etsy if he had stayed on.

I think on the whole, the new CEO has done more good than bad for the company. Etsy has always drawn criticism for non-handmade stuff being sold there. I think they could do more on that front, and if the video is right that the new CEO is allowing non-handmade stuff on there, I don't agree with him on that. I haven't seen that myself, and I do still use the site. While he's made other decisions I don't agree with, encouraging sellers to offer free shipping was a good move; many buyers expect that thanks to Amazon. The fee increases, while they certainly had an impact on sellers' bottom lines, don't compare to what Amazon Handmade (if that still exists) and eBay charge (not to get into most other marketplaces, like the app stores, that charge 30%). The current CEO, in my opinion, understands Etsy way more than the other two they had after Rob was out.

Also, in terms of Fred Wilson, she should have done a little more homework on him. He was one of the original investors. He understands Etsy. He's also entitled to some return for making a very risky investment in 4 kids (they were around 20 when they started it). I haven't spoken to Fred in some time, so maybe he's changed, but I doubt it.

Anyway, I don't mean to be so negative about the video, but I also don't think Etsy has lost its way as much as the video implies. Granted I am not a seller, just a user at this point.

What's with all the hit jobs on ChatGPT?

Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface.

This is the second paper I've seen recently that complains ChatGPT is crap while using GPT3.5. There is a world of difference between 3.5 and 4. Unfortunately, news sites aren't savvy enough to pick up on that and just run with "ChatGPT sucks!" Also, it's not even ChatGPT if they're using that model. The paper is wrong (or it's old), because there's no way to use that specific model in the ChatGPT interface; I don't think there ever was, either. It was probably ChatGPT 0301 or something, which is (afaik) slightly different.

Anyway, TL;DR: the paper is similar to "I tried running Diablo 4 on my Windows 95 computer and it didn't work. Surprised Pikachu!"


Man, that video irks me. She is conflating AI with AGI. I think a lot of people are watching that video and repeating what she says as fact, yet her basic assertion is incorrect because she isn't using the right terminology. If she explained that up front, the video would be way more accurate. She almost goes there but stops short. I would also accept her saying that her definition of AI is anything a human can do that a computer currently can't. I'm not a fan of that definition, but it has been widely used for decades. I much prefer delineating AI vs. AGI. Anyway, this is the first time I've watched the video, and it explains a lot of the confidently wrong comments on AI I've seen lately. Also, please don't take your AI information from an astrophysicist, even one who uses AI at work. Get it from an expert in the field.

Anyway, ChatGPT is AI. It is not AGI, though per recent papers it is getting closer.

For anyone who doesn't know the abbreviations: AGI is Artificial General Intelligence, or human-level intelligence in a machine. ASI is Artificial Super Intelligence, which is beyond human level and is the really scary stuff in movies.


GPT-4 cannot alter its weights once it has been trained, so this is just factually wrong.

The bit you quoted is referring to training.

They are not intelligent. They create text based on inputs. That is not what intelligence is, unless you have an extremely dismal view of intelligence that humans are text creation machines with no thoughts, no feelings, no desires, no ability to plan... basically, no internal world at all.

Recent papers say otherwise.

The conclusion the author of that article comes to (LLMs can understand animal language) is... problematic, at the very least. I don't know how they expect that to happen.


All the articles about this I've seen are missing something. Netflix has been using machine learning in a bunch of ways for quite a few years. I bet this position they're hiring for has been around for most of that time and isn't some new "replace all actors and writers with AI" thing. Here's an article from 2019 talking about how they use AI. That was the oldest I could find but someone I know was working on ML at Netflix over a decade ago.

From ChatGPT 4:

Yes, the box is awesome.

According to the provided statements, the box is yellow and red.

"If the box is yellow, it is good." - So, since the box is yellow, it is good.

"If the box is red, it is happy." - And since the box is red, it is happy.

Finally, "If the box is good and happy, the box is awesome." - Therefore, because the box is both good (due to being yellow) and happy (due to being red), the box is indeed awesome.
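Just to show how mechanical that inference chain is, here's a toy forward-chaining version of the same three rules (my own illustration, obviously not how the model does it internally):

```python
# Toy forward-chaining version of the box puzzle, reproducing the
# inference chain above. Purely illustrative.
facts = {"yellow", "red"}

rules = [
    ({"yellow"}, "good"),            # if the box is yellow, it is good
    ({"red"}, "happy"),              # if the box is red, it is happy
    ({"good", "happy"}, "awesome"),  # if good and happy, it is awesome
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("awesome" in facts)  # True
```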


Wikipedia: In copyright law, a derivative work is an expressive creation that includes major copyrightable elements of a first, previously created original work.

I think you may be a bit off on what a derivative work is. I don't see LLMs spouting out major copyrightable elements of books. They can give a summary, sure, but CliffsNotes would like to have a word if you think that's copyright infringement.

Check out this recent paper that finds some evidence that LLMs aren't just stochastic parrots. They actually develop internal models of things.

What I wonder is why more cars don't have HUDs that are projected onto the windshield. That tech has been around and in cars for over 25 years. You don't have to take your eyes off the road at all.

I've been working on an autonomous AI helper that can take on tasks. You can give it whatever personality you like along with a job description and it will work on tasks either based on what you ask it or whatever it decides needs to be done based on the job description. Basically the AI in the movie Her without the romantic part.


You should check out the short story Manna. It's maybe a bit dated now but explores what could go wrong with that sort of thing.


My girlfriend and I recently decided to watch every Arnold Schwarzenegger movie in order. We saw Hercules in New York this weekend. It was pretty amusing. They clearly shot all the Mt. Olympus scenes in Central Park, because you can hear the traffic in the background and the occasional crying baby or whatnot.

At the end of the bit I quoted, you say: "basically, no internal world at all." But also, can you define what intelligence is? Are you sure it isn't whatever LLMs are doing under the hood, deep in the hidden layers? I guess having a world model is more akin to understanding than intelligence, but I don't think we have a great definition of either.

Edit to add: More... papers...


My concern here is that OpenAI didn't have to share GPT with the world. These lawsuits are going to discourage companies from doing that in the future, which means well-funded companies will just keep it under wraps. Once one of them eventually figures out AGI, they'll just use it internally until they dominate everything. Suddenly, Mark Zuckerberg is supreme leader and we all have to pledge allegiance to Facebook.

If we are talking about Copilot, then that's not ChatGPT. But I agree it's OK. It can do simple things well, but I go to GPT 4 for the hard stuff. (Or my own brain, haha.)

Hmm that's incorrect. ChatGPT (if you pay for it) does both.


Yeah, I generally agree there. And you're right. Nobody knows if they'll really be the starting point for AGI because nobody knows how to make AGI.

In terms of usefulness, I do use it for knowledge retrieval and have a very good success rate with that. Yes, I have to double check certain things to make sure it didn't make them up, but on the whole, GPT4 is right a large percentage of the time. Just yesterday I'd been Googling to find a specific law or regulation on whether airlines were required to refund passengers. I spent half an hour with no luck. ChatGPT with GPT4 pointed me to the exact document, down to the right subsection, on the first try. If you try that with GPT3.5 or really anything else out there, there's a much higher rate of failure, and I suspect a lot of people who use the "it gets stuff wrong" argument probably haven't spent much time with GPT4. Not saying it's perfect; it still confidently says incorrect things and will even double down if you press it, but 4 is really impressive.

Edit: Also agree, anyone saying LLMs are AGI or sentient or whatever doesn't understand how they work.

I’m not really interested in papers that either don’t understand LLMs or play word games with intelligence

I mean, my first paper was from Max Tegmark. My second paper was from Microsoft. You are discounting a well-known expert in the field and one of the leading companies working on AI as not understanding LLMs.

Human intelligence is a mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one’s environment.

I note that's the definition of "human intelligence." But either way, sure, LLMs alone can't learn from experience (after training and between separate contexts), and they can't manipulate their environment. BabyAGI, AgentGPT, and similar things certainly can manipulate their environment using LLMs, and can learn from experience. LLMs by themselves can totally adapt to new situations; the paper from Microsoft discusses that. However, for sure, they don't learn the way people do, and we aren't currently able to modify their weights after they've been trained (at least not without a lot of hardware). They can certainly do in-context learning, though.
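To make "in-context learning" concrete: you can teach the model a made-up labeling scheme entirely in the prompt, with zero weight updates. A minimal sketch (the blik/blor task is invented for illustration, and the model name is just an example):

```python
# In-context learning sketch: the model was never trained on the
# made-up "blik"/"blor" labels, but a few examples in the prompt are
# enough for it to apply the pattern to a new input. No weights change.
import openai  # pre-1.0 style API

prompt = """Label each number as 'blik' (even) or 'blor' (odd):
4 -> blik
7 -> blor
10 -> blik
13 ->"""

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(resp["choices"][0]["message"]["content"])  # expected: blor
```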

Yes. LLMs are not magic, they are math, and we understand how they work. Deep under the hood, they are manipulating mathematical vectors that in no way are connected representationally to words. In the end, the result of that math is reapplied to a linguistic model and the result is speech. It is an algorithm, not an intelligence.

We understand how they work? From the Wikipedia page on LLMs:

Large language models by themselves are "black boxes", and it is not clear how they can perform linguistic tasks. There are several methods for understanding how LLM work.

It goes on to mention a couple things people are trying to do, but only with small LLMs so far.

Here's a quote from Anthropic, another leader in AI:

We understand the math of the trained network exactly – each neuron in a neural network performs simple arithmetic – but we don't understand why those mathematical operations result in the behaviors we see.

They're working on trying to understand LLMs, but aren't there yet. So, if you understand how they do what they do, then please let us know! It'd be really helpful to make sure we can better align them.
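For anyone wondering what "simple arithmetic" means there: each artificial neuron is just a weighted sum plus a nonlinearity, something like this toy sketch. We understand this part completely; it's the behavior of billions of them composed together that nobody can explain:

```python
# The "simple arithmetic" a single artificial neuron performs: a
# weighted sum of its inputs plus a bias, passed through a
# nonlinearity. This part is fully understood; the mystery is why
# billions of these, stacked in layers, produce the behavior we see.
import math

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

print(neuron([0.5, -1.0, 2.0], [0.1, 0.4, -0.2], 0.3))  # ~0.39
```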

they are manipulating mathematical vectors that in no way are connected representationally to words

Is this not what word/sentence vectors are? Mathematical vectors that represent concepts that can then be linked to words/sentences?

Anyway, I think time will tell here. Let's see where we are in a couple years. :)



I think it might require Plus, but the iOS and Android apps do support voice-only conversation. You have to go into beta features and enable it.

If that were true, it shouldn't hallucinate about anything that was in its training data. LLMs don't work that way. There was a recent post with a nice simple description of how they work, but I'm not finding it. If you're interested, there are plenty of videos and articles describing how they work.

You guys should all check out Andrej Karpathy's Neural Networks: Zero to Hero videos. He has one on LLMs that explains all this.

Yes, available to anyone in the API or anyone who pays for the ChatGPT subscription.

As I see it, anybody who is not skeptical towards "yet another 'world changing' claim from the usual types" is either dumb as a doorknob, young and naive or a greedy fucker invested in it trying to make money out of any "suckers" that jump into that hype train.

I've been working on AI projects on and off for about 30 years now. Honestly, for most of that time I didn't think neural nets were the way to go, so when LLMs and transformers got popular, I was super skeptical. After learning the architecture and using them myself, I'm convinced they're part of, but not the whole, solution to AGI. As they are now, yes, they are world changing. They're capable of improving productivity in a wide range of industries, which seems pretty world changing to me. There are already products out there proving this (GitHub Copilot, Jasper, even ChatGPT). You're welcome to downplay it and be skeptical, but I'd highly recommend giving it an honest try. If you're right, then you'll have more to back up your opinion, and if you're wrong, you'll have learned to use the tech and won't be left behind.


Lots of different things. Lately I've been testing it on whatever I can think of, which has included having it order pizza for an office pizza party (it had to collect orders from both Slack and text message and then look up and call the pizza place), finding and scheduling a house cleaner, and tracking down events related to my interests happening this weekend plus a place to eat afterward. I had it review my changes to its code, write a commit message, and commit it to git. It can write code for itself (it wrote an interface for getting the weather forecast, for example).

Really I see it as eventually being able to do most tasks someone could do using a computer and cell phone. I'm just finishing up getting it connected to email, and it's already able to manage your calendar, so it should be able to schedule a meeting with someone over email based on when you're available.


Yeah, I think that's a big part of it. I also wonder if people are getting tired of the hype and seeing every company advertise AI enabled products (which I can sort of get because a lot of them are just dumb and obvious cash grabs).

At this point, it's pretty clear to me that there's going to be a shift in how the world works over the next 2 to 5 years, and people will have a choice of whether to embrace it or get left behind. I've estimated that for some programming tasks, I'm about 7 to 10x faster when using Copilot and ChatGPT4. I don't see how someone who isn't using AI could compete with that. And before anyone asks, I don't think the error rate in the code is any higher.

The one I like to give is tool use. I can present the LLM with a problem and give it a number of tools it can use to solve the problem and it is pretty good at that. Here's an older writeup that mentions a lot of others: https://www.jasonwei.net/blog/emergence
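To give a flavor of what handing an LLM tools looks like, here's a rough sketch using the function-calling style of the OpenAI chat API (the get_weather tool and its schema are made up for illustration):

```python
# Sketch of LLM tool use via function calling: describe a tool, and
# the model decides when to call it and produces structured arguments.
# The get_weather tool is invented for illustration.
import json
import openai  # pre-1.0 style API

functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

resp = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[{"role": "user", "content": "Do I need an umbrella in Boston today?"}],
    functions=functions,
)

msg = resp["choices"][0]["message"]
if msg.get("function_call"):
    # The model chose the tool; your code runs it and feeds the result back.
    args = json.loads(msg["function_call"]["arguments"])
    print(msg["function_call"]["name"], args)  # e.g. get_weather {'city': 'Boston'}
```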

That last bit already happened: an AI (allegedly) told a guy to commit suicide, and he did. A big part of the problem is that while GPT4, for instance, knows all about the things you just said and can probably do what you're suggesting, nobody can guarantee it won't get something horribly wrong at some point. Sort of like how self-driving cars can handle 95% of situations correctly, but the 5% of unexpected stuff that takes extra context a human has (and the car was never trained on) is very hard to get past.


I was going to say you could give it a math problem that uses big numbers, but I tried one on GPT4 and it succeeded. GPT3, though, will absolutely fail at nontrivial math every time.


Oh ok! Got it. I read it as you saying ChatGPT doesn't use GPT 4. It's still unclear what they used for part of it because of the bit before the part you quoted:

For each of the 517 SO questions, the first two authors manually used the SO question's title, body, and tags to form one question prompt and fed that to the Chat Interface [45] of ChatGPT.

It doesn't say if it's 4 or 3.5, but I'm going to assume 3.5. Anyway, in the end they got the same result for GPT 3.5 that it gets on HumanEval, which isn't anything interesting. Also, GPT 4 is much better, so I'm not really sure what the point is. Their analysis of the language used in the questions was pretty interesting, though.

Also, thanks for finding their mention of 3.5. I missed that in my skim through obviously.


Yeah, I think you're right about the students not being able to afford GPT4 (I don't blame them; the API version gets expensive quick). I agree, though, that it doesn't seem super well put together.

extraordinary claims without extraordinary proof

What are you looking for here? Do you want it to be self-aware, and anything less than that is hot garbage? The latest advances in AI have many uses. Sure, Bitcoin was overhyped and so is AI, but Bitcoin was always a solution with no problem. AI (as in AGI) offers literally a solution to all problems (or maybe the end of humans, but hopefully not, hah). The current tech, though, is widely useful. With GPT4 and GitHub Copilot, I can write good working code at multiple times my normal speed. It's not going to replace me as an engineer yet, but it can enhance my productivity by a huge amount. I've heard similar from many others in different jobs.

That's possible now. I've been working on such a thing for a while now, and it can generally do all of that, though I wouldn't advise using it for therapy (or medical advice), mostly for legal reasons rather than ability. When you create a new agent, you can tell it what type of personality you want. It doesn't just respond to commands but also figures out what needs to be done and does it independently.


From what I've seen, here's what happened: GPT 4 came out, and it can pass the bar exam and medical boards. Then more recently some studies came out. Some of them were from before GPT 4 was released and just finally got published or picked up by the press; others were poorly done or used GPT 3 (probably because GPT 4 is expensive), and the press doesn't pick up on the difference. GPT 4 is really good and has lots of uses. GPT 3 has many uses as well but is definitely way more prone to hallucinating.

Bing is GPT4-based, though I don't think it's the same version as ChatGPT. But either way, GPT4 can solve these types of problems all day.

Not surprised. I got access to Bard a while back and it does quite a lot more hallucinating than even GPT3.5.

Though doubling down on the wrong answer even when corrected is something I've seen GPT4 do in some cases too. It seems like once it says something, it usually sticks to it.

They are saying the internal vector space that LLMs use is too complicated and too unrelated to the output to be understandable to humans.

Yes, that's exactly what I'm saying.

That doesn't mean they're having thoughts in there

I mean, not in the way we do, and not with any agency, but I hadn't argued either way on thoughts because I don't know the answer to that.

we know exactly what they're doing inside that vector space -- performing very difficult math that seems totally meaningless to us.

Huh? We know what they are doing, but we don't? Yes, we know the math; people wrote it. I coded my first neural network 35 years ago. I understand the math. We don't understand how the math is able to do what LLMs do. If that's what you're saying, then we agree on this.

The vectors do not represent concepts. The vectors are math. When the vectors are sent through language decomposition they become words, but they were never concepts at any point.

"The neurons are cells. When neurotransmitters are sent through the synapses, they become words, but they were never concepts at any point."

What do you mean by "they were never concepts"? Concepts of things are abstract. Nothing physical can "be" an abstract concept. If you think about a chair, there isn't suddenly a physical chair in your head. There's some sort of abstract representation. That's what word vectors are. Different from how it works in a human brain, but performing a similar function.

A word vector is an attempt to mathematically represent the meaning of a word.

From this page. Or better still, this article explaining how they are used to represent concepts. Like this is the whole reason vector embeddings were invented.
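A quick toy illustration of vectors acting like concepts: related words end up near each other in the embedding space, which you can check with cosine similarity. The 3-d vectors here are made up; real embeddings are learned and have hundreds or thousands of dimensions:

```python
# Toy illustration of word vectors as concept representations:
# semantically related words sit close together (high cosine
# similarity). These vectors are hand-picked; real ones are learned.
import numpy as np

vecs = {
    "chair": np.array([0.9, 0.1, 0.0]),
    "stool": np.array([0.8, 0.2, 0.1]),
    "galaxy": np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vecs["chair"], vecs["stool"]))   # high: related concepts
print(cosine(vecs["chair"], vecs["galaxy"]))  # low: unrelated concepts
```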


Ahh, OK, that makes sense. I think even with GPT4, it's still going to be difficult for a non-programmer to use for anything that isn't fairly trivial. I still have to use my own knowledge to know the right things to ask. In Feb or Mar, you were using GPT3 (4 requires you to pay monthly). 3 is much worse at everything than 4.

Hah! That's the response I always give! I'm not saying our brains work the exact same way, because they don't and there's still a lot missing from current AI, but I've definitely noticed that, at least for myself, I do just predict the next word when I'm talking or writing (with some extra constraints). But even with LLMs there's more going on than that, since the attention mechanism allows them to consider parts of the prompt and what they've already written as they're trying to come up with the next word. On the other hand, I can go back and correct mistakes I make while writing, and LLMs can't do that... it's just a linear stream.
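For what it's worth, the outer "predict the next word" loop really is just a linear stream; all the looking back happens inside the model via attention. A toy sketch of that loop (the model call is a stand-in):

```python
# The outer loop of LLM text generation: score every possible next
# token, pick one, append it, repeat. `model` is a stand-in; inside a
# real one, attention lets each step look back over the prompt plus
# everything generated so far -- but a committed token is never revised.
import numpy as np

def sample_next(logits, temperature=0.8):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

def generate(model, tokens, n_new=20):
    for _ in range(n_new):
        logits = model(tokens)              # scores over the vocabulary
        tokens.append(sample_next(logits))  # commit; no going back
    return tokens
```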