Now that you've all tried it ... ChatGPT web traffic falls 10%

L4sBot@lemmy.worldmod to Technology@lemmy.world – 337 points –
ChatGPT web traffic falls 10%, analytics show
theregister.com

Slow June, people voting with their feet amid this AI craze, or something else?


It's summer. Students are on break, lots of people are on vacation, etc. Let's wait to see if the trend persists before declaring another AI winter.

Agreed. I think being between academic years is likely a much bigger factor than we realize. I’m a college professor, and at the end of spring quarter we had a lot of conversations with undergrads, grad students, and faculty about how people are actually using AI.

Literally every undergrad student I spoke with said they use it for every written assignment (for the most part in legitimate, non-cheating educational ways). Most students used it for all or most of their programming assignments. Most use it to summarize challenging or long readings. Some absolutely use it to just do all their work for them, though fewer than you might expect.

I’d be pretty surprised if there isn’t a significant bounce-back in September.

This worries me though. I've found chatgpt to be wrong in basically every fact-based question I've asked it. Sometimes subtly, sometimes completely, but it always hallucinates. You cannot use it as a source of truth.

Honestly I feel like at this point its unreliability is kind of helpful for students. They have to learn how to use it most effectively as a tool for producing their own work and not a replacement. In my classes the more relevant “problem” for students is that GPT produces written work that on the surface feels composed and sensible but is actually straight up garbage. That’s good. They turn that in, it’s extremely obvious to me, and they get an F (because that’s the grade AI earned with the garbage paper).

But they can and should use it for things it’s great at: reword this long sentence I’m having trouble phrasing concisely, help me think of a title for my paper, take my pseudocode and help me turn it into a while loop in R, generate a list of current researchers on this topic and two of their most recent publications, translate this paragraph of writing from Foucault/Marx/Bourdieu/some-good-thinker-and-bad-writer into simpler wording…

I have a calculator in my pocket even though my teachers assured me I wouldn’t. Students will have access to and use AI forever now. The worry should be that we fail to teach them the difference between a homework-bot and an incredible, versatile tool to leverage.

I have been using it to do deep dives into subjects, especially text analysis. Want to know the entire vocabulary of the Gospel of Mark in the original Greek, for example? About 1,080 words. Now how does that compare to a section of Plato's Republic of the same size? It's about 6-7x as large.

So right there we can see why Mark is often viewed as a direct text while Plato is viewed as a more ambiguous writer.

Mark is a direct and terse narrative of a specific segment of Jesus's life and teachings, while the Republic is an attempt to expound a philosophy and system of government.
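For anyone who wants to reproduce that kind of comparison, a rough sketch is easy: count unique surface forms in two equal-sized samples. This is a crude proxy for vocabulary (it doesn't lemmatize the Greek, so inflected forms count separately), but it's enough for a back-of-the-envelope check:

```python
from collections import Counter
import re

def vocab_size(text: str) -> int:
    # \w matches Unicode letters in Python's re module, so polytonic
    # Greek works out of the box. Counts surface forms, not lemmas,
    # so this overestimates the true vocabulary.
    words = re.findall(r"\w+", text.lower())
    return len(set(words))

def top_words(text: str, n: int = 5):
    # Most frequent word forms, handy for comparing two samples.
    return Counter(re.findall(r"\w+", text.lower())).most_common(n)
```

To mirror the comparison above, run `vocab_size` on the full text of Mark and on a slice of the Republic trimmed to the same word count.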

I agree with you, but I'm not sure I'd call him a more ambiguous writer. Mark is a 'just the facts, ma'am' notation of near-contemporary verbal histories, with the other gospels being attempts to add on contemporary allegories and legends attributed to Jesus by different groups (or John, who just did his own thing).

I'd be curious about a comparison of the Apology and Crito: similar narratives of a similar figure in a specific segment of his life (the end of it). It's fairly direct and terse, as Socrates was portrayed as being direct and terse, but otherwise the styles are similar, as (throws on hard hat) Jesus appears to have been attributed many of the allegories of Socrates in the recorded gospels, which makes sense if you're trying to appeal to followers of Hellenic religions such as those in Rome and Greece.

I think you're being a bit self-centered; it's always going to be summer somewhere. This is a tool used globally.

I see your point but:

  1. It's not always summer somewhere; the North and South are in spring/fall half the year.
  2. The global North has way more population than the South.

It's summer somewhere half the time, but thank you for reminding them the southern hemisphere exists!

It's not just that the novelty has worn off; it's progressively gotten less useful. Any goddamn question I ask gets 90,000 qualifiers, and it refuses to provide any data at all. I think OpenAI is so terrified of liability that they have significantly dumbed down its utility in the public release. I can't even ask ChatGPT to provide a link to a study it references, if it references anything at all rather than making ambiguous statements.

Also, ChatGPT 4 came out but is still only available to people who pay (as far as I know). So using ChatGPT 3 feels like only having access to the leftovers. When it first came out, that was exciting because it felt like progress was going to be rapid, but instead it stagnated. (Luckily interesting LLM stuff is still happening, it's just nothing to do with OpenAI.)

ChatGPT 4 has also noticeably declined in quality since it was released. I use it less because it's become less useful and more frustrating to use. I think OpenAI has been steadily gimping it, trying to get their costs down and make it respond faster.

I pay for it and it's... okay for most things. It's pretty great at nerd stuff, though*. Pasting an error code or cryptic log file message with a bit of context beats googling for 4 days.

*If you know enough to sus out the obviously wrong shit it produces every once in a while.

Pasting an error code or cryptic log file message with a bit of context and it’s better than googling for 4 days.

I usually can find what I'm looking for unless it's really obscure with days of searching. If something is that obscure, it seems kind of unlikely ChatGPT is going to give a good answer either.

If you know enough to sus out the obviously wrong shit it produces every once in a while.

That's one pretty big problem. If something really is difficult/complex you likely won't be able to tell the difference between a wrong answer from ChatGPT and one that's correct unless it just says something obviously ridiculous.

Obviously humans make mistakes too, but at least when you search you see results in context, others can potentially call out/add context to things that might not be correct (or even misleading), etc. With ChatGPT you kind of have to trust it or not.

Yeah if it's that hard to find gpt is just going to hallucinate some bs into the response. I use it as a stack overflow at times and often run into garbage when I'm trying to solve a truly novel problem. I'll often try to simplify it to something contrived but mostly find the output useful as a sort of spark. I can't say I ever find the raw code it generates useful or all that good.

It'll often give wrong answers but some of those can contain useful bits that you can arrange into a solution. It's cool, but I still think people are oddly enamored with what is really just a talking Google. I don't think it's the game changer people are thinking it is.

It's pretty useful if you're in a more generalist job. I mostly work in visual design, but I sometimes deal with coding and web dev. As someone with a mostly surface understanding of these things, asking gpt to explain exact things that don't make sense in basic terms or solve basic issues is a huge time saver for me. Googling these issues usually works but takes way longer than getting a tailored response from gpt if you know how to ask.

I got it to give me a book that was still in copyright status by selectively asking for bigger and bigger quotes. Took a while. Now it seems to have cottoned on to that trick.

It's because it's summer and students aren't using it to cheat on their assignments anymore.

It's definitely this. Except the kids taking summer classes, who statistically probably have higher instances of cheating.

Well yeah, it's kinda cool, but the novelty will wear off. It's useful sometimes, but it's not a magic elixir.

It's really fucking annoying getting "As an AI language model, I don't have personal opinions, emotions, or preferences. I can provide you with information and different perspectives on..." at the beginning of every prompt, followed by the driest, most bland answer imaginable.

Yeah, it's boring as shit. If you want a conversation partner there are better (if less reliable) options out there, and groups like personal.ai that repackage it for conversation. There are even scripts to break through the "guardrails".

I love the boring. Every other day, I think, "Man, I really don't want to do this annoying task. I'm not sure if it even saves much time since I have to look over the work, but it's a hell of a lot less mentally exhausting."

Plus, it's fun having it Trumpify speeches. It's tremendous. I've spent hours reading the bigglyest speeches: historical speeches, speeches about AI, graduation speeches where bears attack midway through... Seriously, it never gets old.

It definitely has its uses, but it also has massive annoyances, as you pointed out. One thing really bothered me: I asked it a factual question about Mohammed, the founder of Islam. This is how I, a human not from a Muslim background, would answer:

"Ok wikipedia says this ____"

It answered in this long-winded way, full of phrases like "blessed prophet of Allah". Basically the answer I would expect from an Imam.

I lost a lot of trust in it when I saw that. It assumed this authoritative tone. When I heard about that case of a lawyer citing made-up caselaw from it, I took it as confirmation. I don't know how it happened, but for some questions it has this very authoritative tone, like it knows the answer without any doubt.

For my professional work, the training data is way too outdated by now for ChatGPT to be anywhere near being useful. The browsing feature also can’t make up for it, because it’s pretty bad at Internet search (bad search phrases etc).

i find even for really complex stuff it’s pretty good as long as you direct it: it can suggest some things, you can do some searching based on that, maybe give it a few links to summarise for you, etc

it doesn’t do the work for you, but it makes a pretty good assistant that doesn’t quite understand the subject matter

I'm old enough not to need a babysitter to use the Internet for research.

It even told me a few times that its training data is too outdated and that there has probably been some progress in that area. I have to freaking push it to actually do a web search to update that knowledge, with prompts like "You have web access, use it!". It then finds a few posts on Stack Overflow I've already seen and draws some incorrect conclusions from them.

I'm way faster on my own.

Try out Bing, I like it a lot more than GPT. Works in Edge only, though.

In my experience, Bing Chat is even worse, because it skips the part where ChatGPT is trying to come up with something based on the training data and goes straight to bad web searches with incorrect summaries.

Hmm weird, for me it just tells me it doesn't have good enough info to provide what I need

I also had that a few times, but it doesn’t make it any better.

your experience does not match mine

which is not saying that your experience is wrong or that you’re using it wrong, however i and many others have managed to get exceptionally good results out of it, and you should be aware of that fact

referring to these experiences as “needing a babysitter” is needlessly provocative as well; we’re all just talking here: no need to insult the intelligence of anyone that has managed to use the tool in a way that works incredibly well

i hope that at some point in the future, you’re able to have your experience match ours, and have a similar feeling of “ooooh i see now… wait… OOOOOOH I REALLY SEEEE NOW”

Well, I hope that some day I will have the same experience.

I think the main problem is that I'm only prompting it with lost causes, when I was unable to find anything on my own with very thorough searches, because there just isn’t an answer available online.

I don’t go there first, because I'm always afraid of hallucinated answers, which are very common. For example, it often just tries to guess function names of programming libraries. That’s just wasting my time.

I love Stable Diffusion, but I really have no use for ChatGPT. I'm amazed at how good the output can be... I just don't have a need to generate text like that. Also, OpenAI has been making it steadily worse with 'safety' restrictions. I find it super annoying and even insulting when Bing-Sydney says "THIS CONVERSATION IS OVER". It's like being chastised by Facebook or Twitter for being 'violent' when you made a joke.

The ability to generate photographs and illustrations of practically anything, though, is fantastic. My girlfriend has been flagellating me into creating a bunch of really useless crap to promote her business on social media using SD, and I actually enjoy that part. I've made thousands of photos of scenery.

I use (free) ChatGPT only as tech support (with a large dose of scepticism of the results) so none of the 'conversational' limitations bother me

I didn't find the image generation AIs as sticky for me, there's not really anything I do day-to-day that would require a novel image

I didn't and don't really care. Call me when there's (free) AI that is good at dirty talk.

Orca 13B is coming out, is open source, and can be run locally, so you'll get your wish really soon.

Personally I've abandoned ChatGPT in favor of Claude. It's much more reliable.

ChatGPT has mostly given me very poor or patently wrong answers. Only once did it really surprise me by showing me how I configured BGP routing wrong for a network. I was tearing my hair out and googling endlessly for hours. ChatGPT solved it in 30 seconds or less. I am sure this is the exception rather than the rule though.

It all depends on the training data. If you pick a topic that it happens to have been well trained on, it will give you accurate, great answers. If not, it just makes things up. It's been somewhat amusing, or perhaps confounding, seeing people use it thinking it's an oracle of knowledge and wisdom that knows everything. Maybe someday.

I still use it sometimes, but ohhh boy it can be a wreck. Like I've started using the Creation Kit for Bethesda games, and you can bet your ass that anything you ask it, you'll have to ask again. Countless times it's a back-and-forth of:

Me: Hey ChatGPT, how can I do this or where is this feature?

ChatGPT: Here is something that is either not relevant or just does not exist in the CK.

Me: Hey that's not right.

ChatGPT: Oh sorry, here's the thing you are looking for. and then it's still a 50-50 chance of it being real or fake.

Now I realize that the Creation Kit is kinda niche, and the info on it can be a pain to look up but it's still annoying to wade through all the shit that it's throwing in my direction.

With things that are a lot more popular it's a lot better, though (still not as good as some people want everyone to believe).

Lol, Chat has its pros and cons. For helping me write or refine content, it's extremely helpful.

However, I did try to use it to write code for me. I design 3D models using a programming language (OpenSCAD), and the results are hilarious. It knows the syntax (kinda), and if I ask it to do something simple, it will essentially write the code for a general module (declaring key variables for the design), and then call a random module that doesn't exist (like it once called a module "lerp()", which is absolutely not a module). This magical module mysteriously does 99% of the design... but ChatGPT won't give it to me. When I ask it to write the code for lerp(), it gives me something random like this:

module lerp() { splice(); }

Where it simply calls a new module that absolutely does not exist. The results are hilarious; the code totally does not compile or work as intended. It is completely wrong.

But I think people are working it out of their system - some found novelty in it that wore off fast. Others like myself use it to help embellish product descriptions for ebay listings and such.

I’ve been building a tool that uses ChatGPT behind the scenes and have found that that’s just part of the process of building a prompt and getting the results you want. It also depends on which chat model is being used. If you’re super vague, it’s going to give you rubbish every time. If you go back and forth with it though, you can keep whittling it down to give you better material. If you’re generating content, you can even tell it what format and structure to give the information back in (I learned how to make it give me JSON and markdown only).

Additionally, you can give ChatGPT a description of what it’s role is alongside the prompt, if you’re using the API and have control of that kind of thing. I’ve found that can help shape the responses up nicely right out of the box.

ChatGPT is very, very much a “your mileage may vary” tool. It needs to be set up well at the start, but so many companies have haphazardly jumped on using it without putting in enough prep work.
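To make the "description of its role alongside the prompt" idea concrete, here's a minimal sketch of how that bundling might look with the chat-style message format used by the API. The `BOOTSTRAP` text and `build_messages` helper are illustrative stand-ins, not the commenter's actual tool:

```python
# Hypothetical bootstrap prompt: the role description and output-format
# rules live in the system message, so they apply to every request.
BOOTSTRAP = (
    "You are a content generator. Respond with JSON only, "
    "using the keys 'title' and 'body'. No prose outside the JSON."
)

def build_messages(user_input: str) -> list:
    # The user's raw text is sent as its own message; the API sees the
    # system rules first, which is what shapes the response format.
    return [
        {"role": "system", "content": BOOTSTRAP},
        {"role": "user", "content": user_input},
    ]
```

The returned list is what you'd pass as the `messages` argument of a chat-completion call; the user never sees or edits the bootstrap part.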

Have you seen Jolly Roger Telco? They've started using ChatGPT to have longer conversations with telemarketing scammers. I might actually re-subscribe to Jolly Roger (used them previously) if the new updated bots perform well enough.

If you don't mind me asking, does your tool programmatically do the "whittling down" process by talking to ChatGPT behind the scenes, or does the user still talk to it directly? The former seems like a powerful technique, though tricky to pull off in practice, so I'm curious if anyone has managed it.

Don’t mind at all! Yeah, it does a ton of the work behind the scenes. I essentially have a prompt I spent quite a bit of time iterating on. Then from there, what the user types gets sent bundled in with my prompt bootstrap. So it reduces the work considerably for the user and dials it in.

Edit: adding some more context/opinions.

I think the error that a lot of tools make is that they don’t spend enough time shaping their instructions for the AI. Sure, you can offload a lot of the work to it, but you have to write your own guard rails and instructions. You can tell it things like you would a human, and it will sometimes even fill in the gaps.

For example, I asked it to give me a data structure back that included an optional “title”. I found that if you left the title blank, ChatGPT took it upon itself to generate a title for you based on the content it wrote.

A lot of the things I got it to do took time and a ton of test iterations. I was even able to give it a list of exactly how it should structure the content it gave back. Things that I would otherwise do on the programming side, I was able to simply instruct ChatGPT to handle instead.

Ah, interesting. I myself have made my own library to create callable "prompt functions" that prompt the model and validate the JSON outputs, which ensures type-safety and easy integration with normal code.

Lately, I've shifted more towards transforming ChatGPT's outputs. By orchestrating multiple prompts and adding human influence, I can obtain responses that ChatGPT alone likely wouldn't have come up with. Though, this has to be balanced with giving it the freedom to pursue a different thought process.

What method did you use to generate only JSON? I'm using it (gpt3.5-turbo) in a prototype application, and even with giving it an example (one-shot prompting) and telling it to only output JSON, it sometimes gives me invalid results. I've read that the new function-calling feature is still not guaranteed to produce valid json. Microsoft's "guidance" (https://github.com/microsoft/guidance) looks like what I need, but I haven't got around to trying it yet.
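One common workaround (a sketch, not a guaranteed fix): treat the model's reply as untrusted text, try to extract and validate the JSON yourself, and re-prompt whenever validation fails. The `extract_json` helper below is hypothetical:

```python
import json
import re
from typing import Optional

def extract_json(raw: str, required_keys: set) -> Optional[dict]:
    """Pull a JSON object out of a model reply, or return None so the
    caller can re-prompt. Tolerates replies wrapped in code fences or
    surrounded by chatty prose."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)  # outermost braces
    if not match:
        return None
    try:
        obj = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    # Reject replies that parse but are missing expected fields.
    if not isinstance(obj, dict) or not required_keys.issubset(obj):
        return None
    return obj
```

A retry loop around this (re-send the prompt when it returns None) catches most of the invalid outputs; constrained-decoding tools like the guidance library you mention attack the same problem at generation time instead.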

I recently asked it about Nix Flakes, which were very niche and new during ChatGPT's training. It was able to give me a reasonable answer in English, but if I asked it in German first, it couldn't do it. It could reasonably translate the English one, though, after it had generated that. Depending on what language you use to prompt it, you get very different answers, because it doesn't transfer ideas and concepts between languages or, more generally, between disconnected bodies of text sources.

It is somewhat obvious if you know about the statistical nature of the models they use, but it's a great example of why these things don't KNOW things, they just regurgitate what they read in context before.

I agree. And I think it's actually far from being "intelligent". However, it is a very helpful tool for many tasks.

I have noticed that I use it less myself. I think honestly though, at least for me, that it is 90% related to the clunky and awkward UI of ChatGPT. If it was easy to natively type the prompt in the browser bar I'd use it much more.

Plus, the annoying text scrolling thingy ... Just show me the answer already, hehe.

The annoying text scrolling can't really be removed, because the AI generates one token at a time; that's what you are seeing.

Sure it can. Finish generating it server-side, then send it as one big chunk to the user.

To be honest though, ChatGPT is pretty fast at generating text these days compared to how it was at the beginning so it doesn't bother me as much.

GPT-4 isn't fast yet, so it will frustrate people if they do that.

What still bothers me, is that it doesn't do smooth scrolling while generating. It's tons of tiny jumps and hiccups which make it very hard to read. I tend to scroll up a little as soon as it has generated a few lines, then read at my own pace. Annoying default behaviour though.

Yeah, that's pretty much what I do if it's going to be a long block of text. If not, I usually just wait.

Having it just say "Generating Text..." then give a percentage, then just show the entire thing would be preferable to me. I'd like the option even if it wasn't default.

Give phind.com a try. It can be set as your default search provider (manually or with a plugin), so you can just type in the search bar.

I tried it for about 20 minutes

Had it do a few funny things

Thought huh that's neat

Went on with life

Since then the only times I've thought about ChatGPT has been seeing people using it in classes I'm in and just sitting here thinking "this is a fucking introductory course and you're already cheating?"

I'm in discrete mathematics right now and have overheard way too many students hitting a brick wall with the current state of AI chatbots, as if that's what they used almost exclusively up to this point.

OpenAI's models, including its GPT series, are available via APIs and Microsoft Azure, and so a drop in ChatGPT's website use may be due to people moving to programmatic interfaces

I feel like this is an important detail that changes the conclusion of the article: there may be a lot more end users coming through third-party apps, but this way of measuring won't reveal them. This is especially important considering that (correct me if I'm wrong) API users are paying ones!

Using it for work from time to time, mostly when I have issues with HTML/CSS or some quick bash scripts. I'd probably miss copilot more. It saves a lot of time with code suggestions.

I'm not really surprised at all. A lot of people I know wouldn't stop talking about it for a grand total of maybe 2 weeks, but then it all went quiet. In fairness, this is a sample of people who are all non-tech people, so I think a lot of it is just that they probably forgot the name of it, or how to turn their computer on (definitely the case for some).

I still use free GPT-3 as a sort of high-level search engine, but lately I'm far more interested in local models. I haven't used them for much beyond SillyTavern chatbots yet, but some aren't terribly far off from GPT-3 from what I've seen (EDIT: though the models are much smaller at 13bn to 33bn parameters, vs GPT-3's 175bn parameters). Responses are faster on my hardware than on OpenAI's website, and it's far less restrictive: no "as a large language model..." warnings. Definitely more interesting than sanitized corporate models.

The hardware requirements are pretty high, 24GB VRAM to run 13bn parameter 8k context models, but unless you plan on using it for hundreds of hours you can rent a RunPod or something for cheaper than a used 3090.

I have a number of language models running locally. I am really liking the gpt4all install with the Hermes model. So in my case, I used ChatGPT right up until I had one I could keep private.

How does it compare with ChatGPT (GPT 3.5), quality and speed wise?

Depends how you get it accomplished. If you use the Python bindings it's slow, but gpt4all itself is quick, and there is a gpt4all API should you wish to build a private assistant. I like that one, but it's still run by a company, so mileage may vary; there are a few projects on GitHub for use with open-source models. I can get better quality from the Hermes model than I can with GPT 3.5, IMO, but some models are better than others depending on what you are trying to do. If you have done any work with Stable Diffusion, lots of different models are popping up right now for different use-cases, like you see on civit.ai. A good coding bot is probably going to be a bit shit in a conversation.

The recent changes made it faster but near useless for coding.

I'm finding the opposite, actually. I tried it months ago for basic Python scripts and it was garbage. I recently started a project where I needed some C++ code to flash onto an AVR microcontroller, and it's been killing it. To be fair, I did a decent amount of the code myself and also knew exactly what I wanted the program to do. But it has been really good about cleaning up my code, keeping the code consistent through multiple iterations, and understanding my explanations. It teaches me new functions that I didn't know existed, which make the code better and faster. Also, when I was designing the circuit, I could describe what I needed a component to do and it would give me whole lists of, for example, possible types of 5-volt voltage regulators and the differences between them.

I equate it to having a coworker rather than an employee. I can't really just tell it to do stuff and have it spit out a perfect script. I need to work with it to make sure it understands my requirements and realizes its errors. The biggest advantage is that this coworker has encyclopedic knowledge of electrical components and C++.

GPT-4 is quite a bit better, but the subscription is expensive. I subscribe because I think it saves me quite a bit of time. I use it almost every day for things like refactoring (shorter) blocks of code, "translating" code into different languages or frameworks, or just for generating examples for completing tasks using frameworks or libraries I'm unfamiliar with.

What has changed? I still use it for small things and find it quite helpful. I avoid using it for serious things though, as that'd require giving it the company's data.

The novelty has worn off. I jumped on board and tried out every bot when they were first released: Bard, Bing, Snapchat, GPT. I've given them all a go.

It was a fun experience, asking them to write poems or delve into the mysteries of consciousness, as I got to know their individual personalities. But now, I mainly use them for searching niche topics or checking grammar, maybe the occasional writing.

In fact, this very comment was reformatted by Bard, for instance. Though, since Google integrated their LLM into Search (via Labs), I use them even less.

On that note, what would people recommend for a locally hosted, open-source, ChatGPT-like LLM (I have a graphics card) that doesn't require a lot of other things to install?

(Just a one-line installation! That is, if you have pip, pip3, Python, PyTorch, CUDA, conda, Jupyter notebooks, Microsoft Visual Studio, C++, a Linux partition, and Docker. Other than that, it is just a one-line installation!)

I looked into this too, and it's pretty resource-heavy. I actually had a really good conversation with ChatGPT about making a separate instance of itself locally. It's worth talking to it about that and some of the price options.

Look into llama.cpp. It's a single C++ program that runs quantized models (basically models with somewhat less precision; you don't really need a full 64 bits for a double). As for models to run on it, there are so many, but I think WizardLM is pretty good.
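The "less precision" point can be illustrated with a toy symmetric 8-bit quantization scheme. To be clear, this is a simplification for illustration: llama.cpp's actual formats (Q4_K and friends) are blockwise and more involved.

```python
def quantize_8bit(weights):
    """Toy symmetric 8-bit quantization: store one float scale plus
    one signed byte per weight, instead of 4 or 8 bytes per weight.
    That's where the memory savings of quantized models come from."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]  # each value fits in int8
    return q, scale

def dequantize(q, scale):
    # Recover approximate weights; error is bounded by ~scale/2 per weight.
    return [v * scale for v in q]
```

Round-tripping a weight vector through these two functions shows the precision loss directly: the recovered values are close to, but not exactly, the originals.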

I imagine there’s a drop off in casual usage. It’s a trending thing and I’m sure a lot of people checked it out a few times for the novelty of it.

When Italy banned ChatGPT due to privacy concerns, I tried Bing, which was still working, and it's just loads and loads better due to its access to the internet. When ChatGPT finally got unblocked again, it felt like speaking to a past image of someone, rather than something alive and current like Bing.

Sydney-Bing was awesome when it was new, and they made it lamer and lamer. I can only imagine how much more fun it is to talk to the ones that aren't nerfed.

Tried it a few times with poor results, it will eventually get better I guess.

I stopped using it when they made it paid, at something like $25 CAD per month.

Then they released a "free" version with a waitlist, which always seemed full.

Have they changed it back since? I just kinda stopped caring when I couldn't access it anymore when I needed to. And $25 CAD is crazy!

Not sure about other countries but you should just be able to log in and use it without issues

You can still use the free browser version with a free account. It's not the latest and greatest version 4, but it's the same one everyone was mega excited about just several months ago.

I use it for tech support. Just the other day I wanted to run a Python script on my Android phone. From zero to working script in an hour is a huge benefit to me; it would literally have taken me days to find out what to install, how to install it, how to generate the script, how to write out the results, etc.

It was in the major TV news cycle for weeks but now it's back to normal levels I'd say. Curious onlookers without a real need have moved on.

I still use it since I find it pretty useful. If I’ve got something I want to search for and I don’t know quite how to ask it, I’ll describe what I’m trying to learn about on ChatGPT. From there, it can tell me what I need to know, or at least give me enough of the relevant terminology to make it much easier for me to google it.

I still use it daily. I made a decent Set of prompts and it pretty much does all of my daily annoying writing Tasks at work. This saves me a lot of time and i can Focus on more exciting projects. 10% isnt even that much after everyone tried it out and Played around with it. I think as a tool it just isnt useful for everyone, but for my Job it defenitely is.

Did you use it to write this comment? And if so, is that why random words are capitalized?

No i didn't. Random words being capitalized is because my phones keyboard is Set to german and i dont bother correcting it.

What kind of prompts have you created? I'm interested in where you managed to save time.

It's not a craze. ChatGPT is going to change 80% of the jobs on the planet, and most people don't even know what it is.