The Fall of Stack Overflow

trashhalo@beehaw.org to Technology@beehaw.org – 341 points
The Fall of Stack Overflow
observablehq.com

Over the past one and a half years, Stack Overflow has lost around 50% of its traffic. This decline is similarly reflected in site usage, with approximately a 50% decrease in the number of questions and answers, as well as the number of votes these posts receive.

The charts below show usage as a 49-day moving average.
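For readers who want to reproduce that smoothing, here is a minimal sketch; the CSV file name and column names are hypothetical, and whether the article centers or trails its window is not stated.

```python
# Smooth daily usage counts with a 49-day moving average.
import pandas as pd

daily = pd.read_csv("so_usage.csv", parse_dates=["date"]).set_index("date")
smoothed = daily["questions"].rolling(window=49, center=True).mean()
print(smoothed.dropna().tail())
```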


What happened?


There is a lot of Stack Overflow hate in this thread. I never had a bad experience. I was always on there yelling at noobs, telling them to Google it, and linking to irrelevant questions. It was just wholesome fun that briefly dulled my crippling insecurities.

So you never had a bad experience, just were actively causing bad experiences for others?

I think you just fell for quite an obvious case of sarcasm.

It isn’t obvious unless it has the slash s!

We should have left the /s back on Reddit

Sadly, it really is necessary if one wants to be sure nobody actually takes the sarcasm seriously. It's hard for people to tell in a textual medium.

Heck, my style of humor in RL is often sarcasm or deliberately ludicrous comments, and people still sometimes go "wait, really?" even though they know me well.

I'm going to go without it from now on. I can handle clarifying myself if it's absolutely necessary for someone.

Yeah but those people who take the sarcasm seriously are fools and you can’t make things foolproof.

Encouraging and putting up with hair-splitting, lawyerly, ungenerous readings of comments is what leads to people just straight up interpreting any “Plus I’m being genuine here” messages as lies.

We need to trust our readers, else we end up in an echo chamber culture where any deviation from the Party line is interpreted as “disruptive person who must be banned to protect our community”.

These things are linked.

The ability to deliver and detect sarcasm without training wheels is a layer of communication we need and can’t afford to abandon, in order to maintain a productive conversational environment.

Yeah but those people who take the sarcasm seriously are fools and you can’t make things foolproof.

Or, you know, have a legitimately very hard time distinguishing it, for actual reasons.

Actual reasons like their stupidity? Yeah I admit that’s a real thing. But if we all give up on it then we all lose the ability and we lose the benefits of it.

Damn bro you're right, I'll just stop being autistic. I'm cured!

chinesescholarshadasimilarst
anceagainstallkindofpunctuat
ionclaimingtheabilitytodeliv
eranddetectmeaningwithouttra
iningwheelswasalayerofcommun
icationpeopleneededandcouldn
otaffordtoabandoninordertoma
intainaproductiveconversatio
nalenvironmentwithanyoneunab
letoreflectuponanddiscernthe
intendedmeaningbeingafoolnot
worthyoftheloftymessageswrit
tencommunicationwasintendedf
ortodiscern

https://en.m.wikipedia.org/wiki/Chinese_punctuation

(This is a lesson in history, so I'll let the discerning reader decide for themselves whether there is sarcasm contained in it)

Rather than cultivate a friendly and open community, they decided to be hostile and closed. I am not surprised by this at all, but I am surprised by how long the decline has taken. I have had a number of bad/silly experiences on Stack Overflow that have never been replicated on any other platform.


All questions have been asked and all answers have been given

and Copilot and ChatGPT give good enough answers without being unfriendly

ChatGPT has no knowledge of the answers it gives. It is simply a text completion algorithm. It is fundamentally the same as the thing above your phone keyboard that suggests words as you type, just with much more training data.
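To illustrate the principle with a deliberately tiny sketch (real LLMs predict tokens with neural networks, not word-frequency tables, but the predict-the-next-thing idea is the same):

```python
# Toy next-word suggester: count which word follows which in some
# training text, then suggest the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate".split()
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(word: str) -> str:
    """Most frequently observed next word, like a keyboard suggestion."""
    return followers[word].most_common(1)[0][0]

print(suggest("the"))  # -> 'cat' (seen twice after 'the')
```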

Who cares? It still gives me the answers I am looking for.

Yeah it gives you the answers you ask it to give you. It doesn't matter if they are true or not, only if they look like the thing you're looking for.

An incorrect answer can still be valuable. It can give some hint of where to look next.

@magic_lobster_party I can't believe someone wrote that. Incorrect answers do more harm than good. If the person asking doesn't know the subject, how should he or she know the answer is incorrect and where to look for a hint?

I don't know about others' experiences, but I've been completely stuck on problems I only figured out how to solve with ChatGPT. It's very forgiving when I don't know the name of something I'm trying to do or don't know how to phrase it well, so even if the actual answer is wrong it gives me somewhere to start and clues me in to the terminology to use.

In the context of coding it can be valuable. I produced two tables in a database and asked it to write a query, and it did 90% of the job. It was using an incorrect column for a join. If you are doing it for coding, you should notice very quickly what is wrong, at least if you have experience.
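A hypothetical reconstruction of that failure mode (table and column names invented for illustration): the wrong join is valid SQL and runs without error, so only a sanity check on the output catches it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'Ada'), (2, 'Bob');
    INSERT INTO orders VALUES (10, 1, 9.99), (11, 2, 5.00);
""")

# Joining on the wrong column (the kind of mistake described above):
wrong = "SELECT name, total FROM users JOIN orders ON users.id = orders.id"
# Corrected join: users.id matches orders.user_id.
right = "SELECT name, total FROM users JOIN orders ON users.id = orders.user_id"

print(con.execute(wrong).fetchall())  # [] -- silently returns nothing
print(con.execute(right).fetchall())  # [('Ada', 9.99), ('Bob', 5.0)]
```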

In my experience, with both coding and natural sciences, a slightly incorrect answer that you attempt to apply, realize is wrong in some way during initial testing/analysis, then tweak until it's correct, is very useful, especially compared to not receiving any answer or being ridiculed by internet randos.

Google the provided solution for additional sources. Often when I search for solutions to problems I don’t get the right answer directly. Often the provided solution may not even work for me.

But I might find other clues of the problem which can aid me in further research. In the end I finally have all the clues I need to find the answer to my question.

How do you Google anything when all the results are AI generated crap for generating ad revenue?

Well then I guess I have to survive with ChatGPT if the internet is so riddled with search engine optimized garbage. We’re thankfully not there yet, at least not with computer tech questions.

Well, if they're referring to coding solutions, they're right: sometimes non-working code can lead to a working solution. If you know what you're doing, ofc.

Even if you don't know what you're doing ChatGPT can still do well if you tell it what went wrong with the suggestion it gave you. It can debug its code or realize that it made wrong assumptions about what you were asking from further context.

What point are you trying to make? LLMs are incredibly useful tools

Yeah for generating prose, not for solving technical problems.

not for solving technical problems

One example is writing complex regex. A simple well written prompt can get you 90% the way there. It's a huge time saver.
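A hypothetical example of that last 10%: a first-draft date regex that looks right but over-matches, and the tightened version you finish yourself.

```python
import re

# Naive draft: also matches impossible dates like 2023-13-99.
naive = re.compile(r"\d{4}-\d{2}-\d{2}")
# Tightened: month restricted to 01-12, day to 01-31.
tight = re.compile(r"\d{4}-(?:0[1-9]|1[0-2])-(?:0[1-9]|[12]\d|3[01])")

text = "released 2023-07-14, not 2023-13-99"
print(naive.findall(text))  # ['2023-07-14', '2023-13-99']
print(tight.findall(text))  # ['2023-07-14']
```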

for generating prose

It's great at writing boilerplate code, so I can spend more of my time architecting solutions instead of typing.

How is that practically different from a user perspective than answers on SO? Either way, I still have to try the suggested solutions to see if they work in my particular situation.

At least with those, you can be reasonably confident that a single person at some point believed in their answer as a coherent solution

That doesn't exactly inspire confidence.

Better than knowing there's some possibility that the answer was generated purely because the sequence of characters had the highest probability of convincing the reader that it seems correct based on the sequence of characters it was given as input (+/- a decent amount of RNG)

Still debatable, IMO. Human belief is stubborn and self-justifying whereas an RNG can be rerolled as many times as needed.

Yeah but if you keep rerolling the RNG, how do you know when a right answer gets randomly generated?

Also, my point above was that if a human believed the solution was true, it probably was true at some point. With generative language models, there's no guarantee that there's any logic to what it tells you.

You know when the code compiles and does what you want it to do. What's the point in asking for code if you're not going to run it? You'd be doing that with anything you got off of Stack Overflow too, presumably.

The good thing, if it gives you the answer in a programming language, is that it's quite simple to test if the output is what you expect. Also, a lot of humans give wrong answers...

There was a story once that said if you put an infinite number of monkeys in front of an infinite number of typewriters, they would eventually produce the works of William Shakespeare.

So far, the Internet has not shown that to be true. Example: Twitter.

Now we have an artificial monkey remixing all of that, at our request, and we're trying to find something resembling Hamlet's Soliloquy in what it tells us. What it gives you is meaningless unless you interpret it in a way that works for you -- how do you know the answer is correct if you don't test it? In other words, you have to ensure the answers it gives are what you are looking for.

In that scenario, it's just a big expensive rubber duck you are using to debug your work.

There's a bunch of people telling you "ChatGPT helps me when I have coding problems." And you're responding "No it doesn't."

Your analogy is eloquent and easy to grasp and also wrong.

Fair point, and thank you. Let me clarify a bit.

It wasn't my intention to say ChatGPT isn't helpful. I've heard stories of people using it to great effect, but I've also heard stories of people who had it return the same non-solutions they had already found and dismissed. Just like any tool, actually...

I was just pointing out that it is functionally similar to scanning SO, tech docs, Slashdot, Reddit, and other sources looking for an answer to our question. ChatGPT doesn't have a magical source of knowledge that we collectively also do not have -- it just has speed and a lot of processing power. We all still have to verify the answers it gives, just like we would anything from SO.

My last sentence was rushed, not 100% accurate, and shows some of my prejudices about ChatGPT. I think ChatGPT works best when it is treated like a rubber duck -- give it your problem, ask it for input, but then use that as a prompt to spur your own learning and further discovery. Don't use it to replace your own thinking and learning.

Even if ChatGPT is giving exactly the same quality of answer as you can get out of Stack Overflow, it gives it to you much more quickly and pieces together multiple answers into a script you can copy and work with immediately. And it's polite when doing so, and will amend and refine its answers immediately for you if you engage it in some back-and-forth dialogue. That makes it better than Stack Overflow and not functionally similar.

I've done plenty of rubber duck programming before, and it's nothing like working with ChatGPT. The rubber duck never writes code for me. It never gives me new information that I didn't already know. Even though sometimes the information ChatGPT gives me is wrong, that's still far better than just mutely staring back at me like a rubber duck does. A rubber duck teaches me nothing.

"Verifying" the answer given by ChatGPT can be as simple as just going ahead and running it. I can't think of anything simpler than that, you're going to have to run the code eventually anyway. Even if I was the world's greatest expert on something, if I wrote some code to do a thing I would then run it to see if it worked rather than just pushing it to master and expecting everything to be fine.

This doesn't "replace your own thinking and learning" any more than copying and pasting a bit of code out of Stack Overflow does. Indeed, it's much easier to learn from ChatGPT because you can ask it "what does that line with the angle brackets do?" or "Could you add some comments to the loop explaining all the steps" or whatever and it'll immediately comply.

I honestly believe people are way overvaluing the responses ChatGPT gives.

For a lot of boilerplating scenarios or trying to resolve some pretty standard stuff, it's good.

I had an issue a while back with QueryDSL running against an MSSQL instance, which I tried resolving by asking ChatGPT some pretty straightforward questions regarding the tool. Without going too much into detail, I basically got stuck in a loop where ChatGPT kept suggesting solutions that were not viable at all in QueryDSL. I pointed this out, explaining why what it did was wrong, and it tried correcting itself by suggesting the same broken solutions.

The AI is great until whatever it has been taught previously doesn't cover your situation. My solution was a bit of digging on Google away, which helped me resolve the issue. But had I been stuck with only ChatGPT, I'd still be going around in loops.

It really doesn't work as a replacement for google/docs/forums. It's another tool in your belt, though, once you get a good feel for its limitations and use cases; I think of it more like upgraded rubber duck debugging. Bad for getting specific information and fixes, but great for getting new perspectives and/or directions to research further.

I agree! It has been a great help in those cases.

I just don't believe that it can fulfill the actual need for sites like StackOverflow. It probably never will be able to either, unless we manage to make it learn new stuff without reliable sources like SO, while also letting it snap up those obscure answers to problems without burying them in tons of broken solutions.

ChatGPT is great for simple questions that have been asked and answered a million times previously. I don’t see any downside to these types of questions not being posted to SO…

Exactly this. SO is now just a repository of answers that ChatGPT and its ilk can train against. A high percentage of the questions that SO users need answers to have already been asked and answered. New and novel problems arise so infrequently, thanks to the way modern tech companies are structured, that an AI that can read and train on the existing answers and update itself periodically is all most people need anymore... (I realize that was rambling, I hope it made sense)

A repository of often (or at least not seldom) outdated answers.

yes! this! is chatgpt intelligent: no! does it more often than not give good enough answers to daily but somewhat obscure and specific programming questions: yes! is a person on SO intelligent: maybe. do they give good enough answers to daily but somewhat obscure and specific programming questions: mostly

It's not great for complex stuff, but for quick questions when you are stuck, the answers come quicker, without snark, and usually work.

Amazing how much hate SO receives here. As a knowledge base it works super well. And yes, a lot of questions have been answered already. And also yes, just like in any other online community, there are bad apples which you unfortunately have to live with.

Idolizing ChatGPT as a viable replacement is laughable, because it has no knowledge, no understanding, of what it says. It's just repeating what it "learned" and connected. Ask about something new and it will simply lie, which is arguably worse than an unfriendly answer in my opinion.

The advice on Stack Overflow is trash because "that question has been answered already". Yeah, it was answered 10 years ago, on a completely different version. That answer is deprecated.

Not to mention the number of convoluted answers that get voted to the top, while someone with two upvotes at the bottom meekly gives the answer that you actually needed.

It's like that librarian from the New York Public Library who determined whether or not children's books would even get published.

She gave "Goodnight Moon" a bad score and it fell out of popularity for 30 years after the author died.

I don't think that's entirely fair. Typically answers are getting upvoted when they work for someone. So the top answer worked for more people than the other answers. Now there can be more than one solution to a problem but neither the people who try to answer the question, nor the people who vote on the answers, can possibly know which of them works specifically for you.

ChatGPT will just as well give you a technically correct, but for you wrong, answer. And only after some refinement give the answer you need. Not that different than reading all the answers and picking the one which works for you.

Of course older answers are going to have more upvotes if they technically work. That doesn't mean they're the best answers. It's possible that someone would like to post a new, better answer and is unable to because of SO's restrictions on posting.

The kinds of people who post on SO regularly aren't going to be the people with the best answers.

On top of that, SO gives badges for upvoting, and possibly other benefits I'm unaware of.

As we saw with Reddit, upvote systems can be inherently flawed; we have no way of knowing if an upvote is genuine.

Explains the huge swaths of bad advice shared on Reddit though. It's shared confidently and with a smile. Positive vibes only!

What's "Reddit"?

(I removed all my advice from there when it was considered "violent content" and "sexualization of minors"... go find your 3d printing, programming, system management and chemistry tips elsewhere, I did it anyway)

I hear you. I firmly believe that comparing the behavior of GPT with that of certain individuals on SO is like comparing apples to oranges though.

GPT is a machine, and unlike human users on SO, it doesn't harbor any intent to be exclusive or dismissive. The beauty of GPT lies in its willingness to learn and engage in constructive conversations. If it provides incorrect information, it is always open to being questioned and will readily explain its reasoning, allowing users to learn from the exchange.

In stark contrast, some users on SO seem to have a condescending attitude towards learners and are quick to shut them down, making it a challenging environment for those seeking genuine help. I'm sure that these individuals don't represent the entire SO community, but I have yet to have a positive encounter there.

While GPT will make errors, it does so unintentionally, and the motivation behind its responses is to be helpful, rather than asserting superiority. Its non-judgmental approach creates a more welcoming and productive atmosphere for those seeking knowledge.

The difference between GPT and certain SO users lies in their intent and behavior. GPT strives to be inclusive and helpful, always ready to educate and engage in a constructive manner. In contrast, some users on SO can be dismissive and unsupportive, creating an unfavorable environment for learners. Addressing this distinction is vital to fostering a more positive and nurturing learning experience for everyone involved.

In my opinion this is what makes SO ineffective, and it's largely why its traffic had dropped even before ChatGPT became publicly available.

Edit: I did use GPT to remove vitriol from and shorten my post. I'm trying to be nicer.

I think I see a core issue highlighted in your comment that seems like a common theme in this comment section.

At least from where I'm sitting, SO is not and has never been a place for learning, as in a substitute for novices learning by reading a book or documentation. In my 12-year experience with it, I've always seen it as a place where professionals and semi-professionals of various experience and overlap share answers typically not found in the manual, which speeds up the pace of investigations and work by filling each other's gaps. Not a place where people with plenty of time on their hands and/or a knack for teaching go to teach novices. Of course there are those people there too, but that's been a rare occurrence in my experience. And so if a person expects to get a nice lesson instead of a terse answer from someone with 5 minutes or less, those expectations will be perpetually broken. For me that terse answer is enough more often than not, and its accuracy is infinitely more important than the attitude used to say it.

I expect a terse answer. I also am a professional. My experience with SO users is that they do not behave professionally. There's not much more to it.

I don't want to compare the behavior, only the quality of the answers. An unintentional error of ChatGPT is still an error, even when it's delivered with a smile. I absolutely agree that the behavior of some SO users is detrimental and pushes people away.

I can also see ChatGPT (or whatever) as a solution to that - both as moderator and as source of solutions. If it knows the solution it can answer immediately (plus reference where it got it from), if it doesn't know the solution it could moderate the human answers (plus learn from them).

That's fair. You don't have to compare the behavior. There's plenty of that in the thread already.

I think the issue is how people got to Stack Overflow. People generally ask Google first, which hopefully would take you somewhere where somebody has already asked your question and it has answers.

Type a technical question into Google. Back in the day it would likely take you to Experts Exchange. Couple of years later it would take you to Stack Overflow. Now it takes you to some AI generated bullshit that scraped something that might have contained an answer, but was probably just more AI generated bullshit.

Either their SEO game is weak, they stopped paying Google as much for result placement, or they've just been overwhelmed with limitless nonsense made by bots for the sole purpose of selling advertising space that other bots will look at.

Or maybe I'm wrong and everybody is just asking ChatGPT their technical questions now, in which case god fucking help us all...

It gives decent answers and still ranks relatively high. However, if you need to ask something that isn't there, you're going to be either intimidated or your question is going to be left unanswered for months.

I'm more inclined to ask questions on sites like Reddit, because it's something I'm familiar with and there's a far better chance of getting an answer within a couple of hours.

ChatGPT is also far superior because there's a feedback loop almost in real time. It doesn't matter if it gives the wrong answer; it gives you something to work with and try, and you can keep asking for more ideas. That's much preferable to having to wait for months or even years to get an answer.

Yeah, I'm not sure what the deal with the hate is. ChatGPT gives you an excellent starting point, and if you give it good feedback and direction you can actually churn out some pretty decent code with it.

Understandably, it has become an increasingly hostile or apathetic environment over the years. If one checks questions from 10 years ago or so, one generally sees people eager to help one another.

Now they often expect you to have searched through possibly thousands of questions before you ask one, and immediately accuse you if you missed some – which is unfair, because a non-expert can often miss the connection between two questions phrased slightly differently.

On top of that, some of those questions and their answers are years old, so one wonders if their answers still apply. Often they don't. But again it feels like you're expected to know whether they still apply, as if you were an expert.

Of course it isn't all like that, there are still kind and helpful people there. It's just a statistical trend.

Possibly the site should implement an archival policy, where questions and answers are deleted or archived after a couple of years or so.

Human nature remembers negative experiences much better than positive ones, so it only takes like 5% assholes before it feels like everyone is toxic.

True that! And a change from 2% to 5% may feel much larger than that.

The worst is when you actually read all those questions, clearly state how they don't apply and that you already tried them, and a mod still closes your question as a duplicate.

I can't wait to read gems like "Answered 12/21/2005, you moron. Learn to search the website. No, I won't link it for you, this is not a Q&A website".

Answers from 2005 that may not be remotely relevant anymore, especially if a language has seen major updates in the TWENTY YEARS since!

More important for frameworks than languages, IMO. Frameworks change drastically in the span of 5-10 years.

No, they shouldn't be archived, even though technology can change. At some point they added a new sort method which favors more recent upvotes, and it helps more recent answers show above old ones with more votes. This can happen on very old posts whose participants might not use the site anymore. We shouldn't expect the original asker to switch the accepted answer potentially years down the line.

There are plenty of things wrong with SE and their community, but I don't think this is one that needs to change.

Google search going to absolute shit is what happened

I also attribute most of this to Google. I'm used to googling a coding question and getting 10 SO results I can quickly scan through. For the past year I've only gotten blog posts about the general behaviour of the thing I was googling.

This is the most likely explanation. It doesn't make sense to have such a dramatic dropoff in user behavior without an obvious trigger.

I don't understand. Google search has its issues for sure, but it always shows stack overflow highly when I search programming things.

Honestly.

Stack Overflow is a horrible place to ask anything.

I have had 100% legit, well-documented questions closed as duplicates of unrelated other questions.

It's... honestly, just not a friendly place to go. Full of a bunch of assholes...

Most of the answers actually suck too. Many times, you will find the correct answer downvoted, and incorrect or bad answers upvoted.

I found this when I was in college too. I only ever asked a few questions; they were all closed as duplicates, and I never figured out how the answers from those threads solved anything close to what I was asking. Lol

A few months ago I had a 7 year old question of mine closed as a duplicate of a 5 year old question. Just another sign that StackOverflow mods are hard at work.

I don't even know how to answer questions.

I lost my old account, and now I don't have points on my new account and I can't do anything. I can't vote, I can't comment, I can't answer questions. So I just dropped it. I can't even thank (by liking or upvoting) a person whose answer helped me.

I believe others have had similar experiences.

I feel you. At this point, it's a circle-jerk of who can close tickets with the most non-helpful, ridiculous responses...

SO is such a miserable and toxic place that oftentimes I'd rather read more documentation or reach out to someone elsewhere like Discord. And I would never post a question there or comment there.

I’d rather read the docs than just about anything. I love good documentation. I wanna know how and why things work.

The problem is that basically nobody has good docs. They are almost all either incomplete or unreadable.

A lot of companies won't employ technical writers, who exist to make good, thorough, complete, and well-presented documentation... they'd rather assume their engineers can just write the docs.

And no, no they can't... very few engineers study the principles of effective communication. They may understand things, but they can't explain them.

That’s fair. At my company we have technical writers for the external docs and internal docs are usually written by whoever has worked on something and got frustrated that nobody in the company could give them a high level overview, and they had to go through the code for a couple hours.

Tbf though, I’ll take docs that aren’t written super well that tell me how things from our internal libraries should be used. Or just comments. I’ll take comments telling me WHY we are doing something.

I don’t expect our internal docs to be MSDN docs. But I like to read an overview of at least the workflow before I jump into updating a large project.

While I agree, writing good docs is hard for a very intangible benefit. Honestly, it feels like doing the same work twice, with the prospect of doing it again and again in the future as the software is updated. It’s a little demoralizing.

It is hard, I agree. I’m not very good at it myself. But even semi-decent docs are better than googling around or stepping through a decompiled package.

And it’s super useful to new developers, and would have saved me a lot of time and frustration when I was new.

It's hostile to new users, and when you do ask, you will likely not get an answer; you might get scolded, or your question just gets closed as a duplicate. Then there is the fact that most questions already have answers, no matter whether they're outdated or just bad advice. Pretty much everything is on GitHub now, so if I have a genuine question I usually just raise it there and get an answer from the developers themselves. Or I go to their website's API/library docs, which have gotten good lately. And finally, the recent addition of ChatGPT: you can ask it just about any stupid question you have, and maybe it gives you some idea of how to fix the problem you've encountered. Pretty much the ultimate rubber duck buddy.

ChatGPT doesn't chastise me like a drill instructor whenever I ask it about coding problems.

It's funny because if you look at the numbers, it looks like traffic started to go down before ChatGPT was actually released to the public, indicating that maybe people thought the site was too much of a pain in the ass to deal with before that, and GPT is just the nail in the coffin.

Personally, of all my attempts at positive interactions on that site, only one succeeded, and at this point I treat it as a read-only site because it's not worth my time arguing with pedants just to get a question answered.

If I went to the library and all the librarians were assholes I probably wouldn't go to that library anymore either.

It just invents the answer out of thin air, or worse, it gives you subtle errors you won't notice until you're 20 hours into debugging

I agree with you that it sometimes gives wrong answers. But most of the time, it can help better than StackOverflow, especially with simple problems. I mean, there wouldn't be such an exodus from StackOverflow if ChatGPT answers were so bad, right?

But, for very specific subjects or bizarre situations, it obviously cannot replace SO.

And you won't know whether the answers it gave you are OK until it's too late. It seems like the Russian roulette of tech support: it's very helpful until it isn't.

Depending on Eliza MK50 for tech support doesn't stop feeling absurd to me

How do you know the answer that gets copied from SO will not have any downsides later? ChatGPT is just a tool. I can hit myself in the face with a wrench as well, if I use it in a dumb way. IMHO the people that get bitten in the ass by ChatGPT answers are the same ones that just copied SO code without even trying to understand what it is really doing...

Sounds the same as believing a random stranger.

How many SO topics have you seen with only one, universally agreed upon solution?

It's too much to attribute to any one effect. 50% is a lot for a website of this size (don't forget that Lemmy exploded from a migration of <5% Reddit usershare). Let's KISS by attributing likely causes in order of magnitude:

  1. ChatGPT became the world's fastest growing website in a single month and it's actually half-decent at being a code tutor
  2. ChatGPT bots got unleashed on SO and diluted a lot of SO's comparative advantages
  3. Stack Overflow moderators went on strike, which further damaged content quality
  4. Structurally speaking, SO is an environment which tends to become more elitist over time. As the userbase becomes progressively more self-selective, the population shrinks.
  5. The SO format requires a stream of novel questions, but novel questions generally get rarer over time
  6. Developer documentation has generally improved over time. On SO, asking about a well-documented thing is a short-circuit pathway to getting RTFM'd & discussion locked

ChatGPT came out after the beginning of the trend in the charts. That falsifies the first 2 points of the hypothesis. The strike happened a month ago, so that's gone too. 4, 5 and 6 do not appear to be abrupt processes even if we assume they're true, so they likely don't explain it. There must be something else that happened that could cause such a large and abrupt change before any of the above. I bet on a change in the major source of traffic - Google.

You've assumed that I want to explain the root cause of the initial decline. This is not the case. Historically, SO has seen several periods of decline. What I'm actually addressing is the question of why the decline has not stopped, because the sustained nature of this decline is what makes it unusual. If you look at the various charts, you can see a brief rally which gets cut off in late Winter 2022 -- this lines up rather nicely with the timing of ChatGPT's release, I feel.

Let's ignore that. Tell me more about your Google angle: what's the basis of your hypothesis?

I'm not who you were speaking to, but back when I used to read it occasionally, the stack overflow blog repeatedly mentioned that the vast majority of its traffic comes from Google. If the vast majority of your traffic comes from Google and then your traffic quantity changes dramatically, it's reasonable to look to the source of your traffic.

Thank you for doing my work for me. It's just Occam's razor.

But GitHub Copilot came out right around that time...

In my experience many of the answers have become out of date. It's gradually becoming an archive of the old ways of doing things for many languages / frameworks.

Questions are often closed as a duplicate when the linked question doesn't apply anymore. It's full of really bad ways of doing things.

I'm not really sure of the solution at this point.

Also ChatGPT.

It's a last resort for me nowadays.

Yeah, this is what they get and deserve. They rose by providing meaningful, helpful, and technically adept answers to questions. Then they encouraged an abusive moderator culture that marks questions as duplicates, linking to unrelated questions. They also still do not offer easy ways for the knowledge base to be updated as things change over time. Now the company is abusing its abusive moderators, causing them to basically go on strike right now.

Here's hoping the next thing doesn't suck as much ass as Stack Exchange ultimately has.

https://en.m.wikipedia.org/wiki/Fediverse

Based on that, there is no "Q&A" type of Fediverse software (a clear question and a clear "voted best" answer).

Stack Overflow had a huge number of "mod tools" to help curate the content (gold nuggets) given. They did not take the step of aggregating content (gold ingots) like Wikipedia has. The marking as duplicate could and should be tempered by "due diligence" or "age of the last time this was asked", but how it is implemented is up to them.

To be fair™ they did at least do a little bit to deal with the existing answers becoming obsolete by changing the default answer sorting. The "new" (it's already been at least a year IIRC) sorting pushes down older answers and allows newer answers to rise to the top with fewer votes. That still doesn't fix the issue that the accepted answer likely won't change as new ways of doing things become standard, but at least it's a step in the right direction.
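Conceptually it's something like this toy score (my own sketch with an invented half-life, not Stack Overflow's published formula): decaying each vote's weight by age lets a newer answer with fewer votes overtake an entrenched one.

```python
from datetime import datetime

HALF_LIFE_DAYS = 365.0  # invented illustration parameter

def trending_score(vote_dates: list[datetime], now: datetime) -> float:
    # Each upvote contributes less the older it is.
    return sum(0.5 ** ((now - voted).days / HALF_LIFE_DAYS)
               for voted in vote_dates)

now = datetime(2023, 7, 1)
old_answer = [datetime(2013, 1, 1)] * 100  # 100 decade-old votes
new_answer = [datetime(2023, 1, 1)] * 10   # 10 recent votes
print(trending_score(old_answer, now))  # ~0.07: heavily decayed
print(trending_score(new_answer, now))  # ~7.1: wins despite fewer votes
```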

One thing I've always wondered about stack overflow is why is there only one accepted answer ever possible even though this is programming and there are many different ways of doing any given thing?

Ironic, since one of ChatGPT's biggest weaknesses is that it's an archive of the old ways of doing things. You can't filter by time on ChatGPT, and ChatGPT isn't being retrained on the latest knowledge live. These aren't inherent to GPT, so it's possible that a future iteration will overcome these issues.

On ChatGPT, if a solution doesn't work, you can ask in real time for a different one. On SO, your post just gets locked for being a duplicate.

Asking in real-time wouldn't help in this scenario (e.g. some mirror is no longer accessible). If anything, it'd just lead you further astray and waste more time, because GPT's knowledgebase doesn't have this knowledge.

Why is everyone saying this is because Stack Overflow is toxic? Clearly the decline in traffic is because of ChatGPT. I can say from personal experience that I've been visiting Stack Overflow way less lately because ChatGPT is a better tool for answering my software development questions.

I was going to say ChatGPT.

I think the smugness of StackOverflow is still part of it. Even if ChatGPT sometimes fabricates imaginary code, its tone is flowery and helpful, compared to the typical pretentiousness of StackOverflow users.

Also, you can have it talk like a catgirl maid, so I find that's particularly helpful as well.


The timing doesn't really add up though. ChatGPT was released in November 2022. According to the graphs on the linked website, the traffic, the number of posts, and the number of votes were all already in visible decline and at their lowest values in more than 2 years. And this isn't even considering that ChatGPT took a while to get picked up into the average developer's daily workflow.

Anyhow though, I agree that the rise of ChatGPT most likely amplified StackOverflow's decline.

Half the time when I ask it for advice, ChatGPT recommends nonexistent APIs and offers examples in some Frankenstein code that uses a bit of this system and a bit of that, none of which will work. But I still find its hit rate to be no worse than Stack Overflow, and it doesn't try to humiliate you for daring to ask.

It depends on what sort of thing you're asking about. More obscure languages and systems will result in hallucinated APIs more often. If it's something like "how do I sort this list of whatever in some specific way in C#" or "can you write me a regex for such and such a task" then it's far more often right. And even when ChatGPT gets something wrong, if you tell it the error you encountered from the code it'll usually be good at correcting itself.

I find that if it gets it wrong in the first place, its corrections are often equally wrong. I guess this indicates that I've strayed into an area where its training data is not of good quality.

Yeah, if it's in a state where it's making up imaginary APIs whole cloth then in my experience you're asking it for help with something it just doesn't know enough about. I get the best results when I'm asking about popular stuff (such as "write me a python script to convert wav files to mp3" - it'll know the right APIs for that sort of task, generally speaking). If I'm working on something that's more obscure then sometimes it's better to ask ChatGPT for generalized versions of the actual question. For example, I was tinkering with a mod for Minetest a while back that was meant to import .obj models and convert them into a voxelized representation of the object in-game. ChatGPT doesn't know Minetest's API very well, so I was mostly asking it for Lua code to convert the .obj into a simple array of voxel coordinates and then doing the API stuff to make it Minetest-specific for myself. The vector math was the part that ChatGPT knew best so it did an okay job on its part of the task.
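(For what it's worth, the wav-to-mp3 task really is tiny, which is why it sits in ChatGPT's comfort zone. A minimal sketch using pydub, which requires ffmpeg to be installed; the file names are placeholders:)

```python
from pydub import AudioSegment

# Convert a WAV file to MP3; pydub shells out to ffmpeg for encoding.
AudioSegment.from_wav("input.wav").export("output.mp3", format="mp3")
```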

Your follow up question should be for ChatGPT to write those APIs for you.

Over the last five years, I'd click a link to Stack Overflow while googling, but I've never made an account because of the toxicity.

But yeah, chatGPT is definitely the nail in the coffin. Being able to give it my code and ask it to point out where the annoying bug is... is amazing.


One aspect that I've always been unsure about, with Stack Overflow, and even more with sibling sites like Physics Stack Exchange or Cross Validated (stats and probability), is the voting system. In the physics and stats sites, for example, it was not rare to see answers that were accepted and upvoted but actually wrong. The point is that users can end up voting for something that looks right or useful, even if it isn't (probably less the case when it comes to programming?).

Now an obvious reply to this comment is "And how do you know they were wrong, and non-accepted ones right?". That's an excellent question – and that's exactly the point.

In the end the judge about what's correct is only you and your own logical reasoning. In my opinion this kind of sites should get rid of the voting or acceptance system, and simply list the answers, with useful comments and counter-comments under each. When it comes to questions about science and maths, truth is not determined by majority votes or by authorities, but by sound logic and experiment. That's the very basis from which science started. As Galileo put it:

But in the natural sciences, whose conclusions are true and necessary and have nothing to do with human will, one must take care not to place oneself in the defense of error; for here a thousand Demostheneses and a thousand Aristotles would be left in the lurch by every mediocre wit who happened to hit upon the truth for himself.

For example, at some point in history there was probably only one human being on earth who thought "the notion of simultaneity is circular". And at that time point that human being was right, while the majority who thought otherwise were wrong. Our current education system and sites like those reinforce the anti-scientific view that students should study and memorize what "experts" says, and that majorities dictate what's logically correct or not. As Gibson said (1964): "Do we, in our schools and colleges, foster the spirit of inquiry, of skepticism, of adventurous thinking, of acquiring experience and reflecting on it? Or do we place a premium on docility, giving major recognition to the ability of the student to return verbatim in examinations that which he has been fed?"

Alright sorry for the rant and tangent! I feel strongly about this situation.

But it’s not truth that is implied by voting.

Voting determines the sorting precedence. It’s a way of handling the fact that the site contains more content than a person can read. It’s a way of guiding what they should read first given limited time.

That's how I interpret it. My question is if it's generally interpreted that way, or misinterpreted.

I have to agree with this, because in recent years I have run into not a couple but many cases where, even when a proper answer is given, the accepted one, despite being flawed or no longer recommended (Python 2->3 changes, for example), is still the highest voted one, and the proper answer is in 3rd or 4th place. This is where the old r/science shone: you could ask a really specific domain question there and a qualified scientist might just pop up and answer you in detail. (Not that they can't be wrong, just highly unlikely given the current understanding of those topics.)

Gibson was correct about much of our education system, and Galileo was certainly right about the consequences of overvaluing mediocre wit that merely happened to be well-timed. What neither of them had to contend with, however, was the internet, and how social media can combine the inability to reason critically and mediocre wit with crippling insecurities and anti-social personalities, to what should be predictable results.

At least Gibson understood that a technocratic future didn’t imply that people’s lives would necessarily improve.

Science is based on peer review, which means that a scientific opinion will be accepted only if it can convince a sufficient number of other scientists. This is not too different from using an explicit voting system to rank answers.

All scientists accept the possibility that what they currently believe to be true may one day be considered false. Science does not pretend to describe only eternal truths. So it's not a problem if the most popular answer today becomes the least popular answer in the future, or vice versa.

Peer review, as the name says, is review, not "acceptance". At least in principle, its goal is to help you check whether the logic behind your analysis is sound and your experiments have no flaws. That's why one can find articles with completely antithetical results or theses, both peer-reviewed (and I'm not speaking of purchased pseudo peer-review). Unfortunately it has also become a misused political or business tool, that's for sure – see "impact factors", "h-indexes", and similar bulls**t.

Peer review is a general principle that goes beyond the formalities of journal publication.

Even if you never submit your work to a peer-reviewed journal, your scientific claims will be judged by a community of scientific peers. If your work is not accepted by your scientific peers, then you are not contributing to scientific knowledge.

For example, most homeopathic claims are never submitted to journals. They are nevertheless judged by the scientific community, and are not persuasive enough to be accepted as scientific knowledge.

You're simplifying the situation and dynamics of science too much.

If you submit or share a work that contains a logical or experimental error – it says "2+2=5" somewhere – then yes, your work is not accepted, it's wrong, and you should discard it too.

But many works have no (visible) logical flaws and present hypotheses within current experimental errors. They explore or propose, or start from, alternative theses. They may be pursued and considered by a minority, even a very small one, while the majority pursues something else. But this doesn't make them "rejected". In fact, theories followed by minorities periodically have breakthroughs and suddenly win the majority. This is a vital part of scientific progress. Except in the "2+2=5" case, it's a matter of majority/minority, but that emphatically does not mean acceptance/rejection.

On top of that, the relationship between "truth" and "majority" is even more fascinatingly complex. Let me give you an example.

Probably (this is just statistics from personal experience) the vast majority of physicists would tell you that "energy is conserved". A physicist specialized in general relativity, however, would point out that there's a difference between a conserved quantity (somewhat like a fluid) and a balanced quantity. And energy strictly speaking is balanced, not conserved. This fact, however, creates no tension: if you have a simple conversation – 30 min or a couple hours – with a physicist who stated that "energy is conserved", and you explain the precise difference, show the equations, examine references together etc, that physicist will understand the clarification and simply agree; no biggie. In situations where that physicist works, this results in little practical difference (but obviously there are situations where the difference is important.)

A guided tour through general relativity (see this discussion by Baez as a starting point, for example) will also convince a physicist who still insisted that energy is conserved even after the balance vs conservation difference was clarified. With energy, either "conservation" makes no sense, or if we want to force a sense, then it's false. (I myself have been on both sides of this dialogue.)
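For the skeptical reader, the distinction can be sketched in standard GR notation (my summary, not a quote from the Baez discussion):

```latex
% Balance, not conservation: the stress-energy tensor satisfies
\nabla_\mu T^{\mu\nu} = 0 ,
% a local statement with the covariant derivative, which in general
% cannot be integrated into a conserved total energy. A genuine
% conservation law has the flat-divergence form
\partial_\mu j^\mu = 0 \quad\Longrightarrow\quad
Q = \int_\Sigma j^0 \, \mathrm{d}^3x \ \text{is constant} .
% Only when spacetime admits a timelike Killing vector \xi^\nu does
% one recover such a conserved current:
\nabla_{(\mu} \xi_{\nu)} = 0 \quad\Longrightarrow\quad
\nabla_\mu \left( T^{\mu\nu} \xi_\nu \right) = 0 .
```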

This shows a paradoxical situation: the majority may state something that's actually not true – but the majority itself would simply agree with this, if given the chance! This paradoxical discrepancy arises especially today owing to specialization and too little or too slow osmosis among the different specialities, plus excessive simplification in postgraduate education (approximate facts presented as exact). Large groups maintain some statements as facts simply because the more correct point of view is too slow to spread through their community. The energy claim is one example; there are others (thermodynamics and quantum theory have plenty). I think every physicist working in a specialized field is aware of a couple of such majority-vs-truth discrepancies. And this teaches humility, openness to revising one's beliefs, and reliance on logic, not "majorities".

Edit: a beautiful book by O'Connor & Weatherall, The Misinformation Age: How False Beliefs Spread, discusses this phenomenon and models of this phenomenon.

It couldn't happen to a more deserving group of smug, self-satisfied shitheads.

I miss when SO used to be a good place to ask questions.

I said I was a novice on the Code Review site, and the one answer I got told me to look into something like "mount genius and the valley of stupid". Like, dude, I fucking said I was a novice; I'm not claiming to be a genius. All over me using a term wrong. And when I asked what term they'd use, they still smarted off. It wasn't until I asked them again that they told me the term I was actually looking for.

I remember going to the VMware communities looking for help almost 20 years ago, and some smug person was really upset that I didn't use the right wording when I was starting out. He spent something like 2 whole days' worth of posting. It was a chore to divine what he was saying while stumbling through his weird rant/lecture about proper terminology. I eventually called him out on it and never went back.

So, long story short, communities and companies who don't nip this kind of behavior in the bud and heavily moderate the assholes almost universally turn into the next expertsexchange community. Stack Overflow leaned heavily into enshittification because of this; they eventually just stopped caring about what was being put on their forums, about maintaining high content quality, and about getting rid of argumentative power-users. Ironically, Reddit was a much nicer community, and usually you'd find an answer or get help without the attitude, especially in the IT space.

SO claims a lot of this is because it is meant to be a tool where people go for correct answers, and I get that, but getting downvoted or having your question closed as a duplicate feels mean regardless of how welcoming the admins claim they're trying to make the place.

A big part of the problem is that users seek out reasons to close questions as opposed to seeking ways to fix them and avoid them being closed. And they're rewarded for it! I think review queues are probably a net positive overall, but when you're sitting there going through them and you find a question that could be closed as-is but also could possibly be fixed, which are you going to do? Vote to close, which takes like one second of effort, or try to edit, which could take a lot longer and may even involve input from OP? And even if you do try to fix it, what if everyone else votes to close anyway?

I've had a question closed and my comments explaining why it wasn't a duplicate deleted. The response from everyone was that, because I had been using the site off and on for years, they expected me to understand the process, so they didn't explain that I needed to edit; they just deleted my comment and didn't tell me anything.

The amount of anxiety I have when asking a question there is insane. And I have 6k+ rep. They weren't wrong, I do know the site well. I have used it a lot. But like, if I, an experienced user, am afraid to ask a question, that's messed up. I've sat there and been like "okay, people will probably think it is a duplicate of this, I really hate getting questions closed as duplicates, so I'm going to preemptively explain why it isn't a dupe," and then they still close it as a dupe. It's insane. Or they find the one magical combination of words that I didn't quite think of, despite my spending a good ten minutes or so looking for dupes prior to asking, that did ask my question, then act smug about it.

I don't really use the sites anymore. Not even the more lighthearted and fun ones like RPG and World Building. I've just been so soured to it.

The amount of anxiety I have when asking a question there is insane. And I have 6k+ rep. They weren’t wrong, I do know the site well. I have used it a lot. But like, if I, an experienced user, am afraid to ask a question, that’s messed up.

Yup, that's practically the same problem I had. I posted maybe one question over the past 15 years. I got crapped on by one of their power users for not doing something properly, and I never posted or asked a question again. I don't even remember what account I originally used, either.

This is sort of why I like ChatGPT: I don't get harassed for asking something incredibly stupid, and the crappy answers are about as bad as the "marked as duplicate" nonsense that gets me nowhere anyways. Why bother trying to interface with those communities ever again? IT in general already tilts heavily towards salty misanthropes, I'll pass on that.

I tried to answer a question and got shut down by mods immediately. I was wondering how Stack Overflow was going to survive. Now I know it won't.

I bet Google searching in general has gone down too. It's oftentimes quicker to just ask ChatGPT for an answer, and usually you can tell whether an answer is correct or not. It's like the old days of manually searching on Google for StackOverflow questions, finding answers, and then trying to determine which one will work.

It's not just ChatGPT that's to blame. The VP of Knowledge & Information at Google mentioned that the younger generation doesn't search for things the same way.

“We keep learning, over and over again, that new internet users don’t have the expectations and the mindset that we have become accustomed to.” Raghavan said, adding, “the queries they ask are completely different.”

These users don’t tend to type in keywords but rather look to discover content in new, more immersive ways, he said.

“In our studies, something like almost 40% of young people, when they’re looking for a place for lunch, they don’t go to Google Maps or Search,” he continued. “They go to TikTok or Instagram.”

Anecdotally, I've witnessed younger people searching on YouTube for a video explanation of a technical issue (e.g. an error code when installing some software), rather than using Google Search. It's baffling to me, but Gen Z has a different way of consuming information.

Edit: Clarity

That may be, but I know my browsing history, even as I get older and older, and I am using StackOverflow hardly at all compared to ChatGPT which I am using almost a scary amount.

I know I am not the only developer, this is how things are going.

ChatGPT is a big, big part of it.

Half of a fuck-ton is still a lot. If they scale down their operational costs they can still run a very comfortable business for a long while on these kinds of numbers.

I think the point is not their viability as a business but their relevance in the industry.

Stack Exchange has been making a large number of bad calls over the past few years, basically pissing off their moderators. The first one was Monica, who actually sued them over it (libel or defamation or something; basically they said she was being transphobic when she wasn't) and they settled. Around that time, possibly before, they removed a site from their Hot Network Questions because of a single tweet. Combine that with them constantly ignoring Stack Exchange Meta (where users and admins are meant to interact for the better of the site and discuss the sites themselves). Moderators were understandably furious when their posts got ignored in the place where Stack Exchange says they're meant to communicate, while a random tweet got more attention and immediate action.

More recently they've given different instructions privately to moderators than what they said publicly with regards to suspected AI content.

I mean, combine all of that with how hostile the users of the site are. Accusing you of not searching before posting, marking your question as a duplicate because they think it is, and refusing to listen when you explain why it isn't.

I'm sure they are bad, because of the general corporate enshittification cycle, but when someone consistently mentions "a single tweet" or something like that, representing it as purely innocuous (but without any explanation or link to a source), it gets my suspicion radar WAY up...

Your suspicion makes sense, let me provide some context.

(Quick aside for the unaware, not necessarily Snapz, Stack Exchange (SE) is the company and family of sites behind Stack Overflow. Stack Overflow is the biggest and was the first and that's why it doesn't have the same "Blah Exchange" branding.)

I think this answer on SE Meta describes the tweets best. I can't find good archived links to the tweets and they seem to be deleted now. This answer has screenshots and quotes them. It is not the first thing that happened in chronological order, but it is the best thing I've found with quotes of the tweets, so just go there to see what the tweets were. I guess there were actually about three and not just the single one I remembered. Summary here,

stack exchange: the #1 site for your questions about dataframes and female treachery

normal website

  • IPS: How to approach a friend about his girlfriend asking to sleep with me?
  • IPS: How do I tell students at a school I volunteer at to stop flirting with me?
  • SciFi: Story about aliens nicknamed 'Eechees' who have created a network of tunnels on Mars

2:37 PM - 16 Oct 2018

1 Retweet 38 Likes

Someone then retweeted that,

When people seem confused about why Stack Overflow might not be the most welcoming/comfortable place for people to find answers to programming questions, show them this

[The tweet from above]

This question on Interpersonal Skills (IPS) Meta is (as far as I can find) when the community at large first found out about what happened. Then later there was this question on SE Meta (which the earlier answer is in response to). Both of these posts have most of the context.

Feel free to look over as much as you want; I'll just post some of the highlights proving the points I was talking about.

From the IPS Meta question, in this answer

Was the removal of this site from the [Hot Network Questions (HNQ)] in response to a Twitter complaint?

Yep.

Oh. Well, that seems... crummy.

Yep. Let me tell you about it.

The initial response to the tweet in an internal discussion wasn't actually "let's pull IPS out of the HNQ" it was "Maybe we should finally kill the HNQ or redesign it to make it better." I think that reworking the HNQ is something that many people want to see - myself included. Should a tweet be the final straw when it's been discussed so much over the years? No. Am I willing to be OK with that if it means something will change? Begrudgingly, yes... but that's a separate issue.

[...]

It's easy to panic and focus on optics instead of tenable solutions, and while it looks really drastic, pulling IPS from the HNQ was a pretty moderate response. Yes, it was a quick decision - like pulling your hand away from a hot stove when it burns. It was the solution we chose - without consulting IPS - because it was effective and easy to implement since it would fix the perceived problem immediately and there was already a technical solution in place for doing it.

[...]

We are going to have some internal discussions to improve how we respond in situations like this in the future. We don't want Twitter - or Reddit or any other external site - to be where users go to get real change to happen on the network. We love our meta system - the child meta sites and Meta Stack Exchange - and we need those to be where people feel they can come to and get a response from us.

This comment explains the community's feelings very well, I believe.

The immediate response doesn't set a great example and looks outwardly like we didn't think things over. I think is a massive, almost impossibly massive understatement. I don't know if you guys can ever recover any of the massive amount of community trust you lost that day. Finding out that yes, indeed, a twitter complaint is a more powerful force of site governance then months of meta discussions by the most engaged users of the site just means that there's no point participating at all until whatever dynamic causes this is completly [sic] and provably wiped out.

Also this

[...] Removing IPS and only IPS based on the outrage of a few Twitter users is incredibly unfair to this community and sends a very strong signal that SE considers the opinions and efforts of valuable contributors practically worthless. If y'all do care about this site, then please act like it? [...]

From the SE Meta question, this answer

[...]

What happened was that someone called SE out on Twitter for something you could conceivably see as problematic (two questions with out of context bad titles showing next to each other in that list). After that, not only was that change done within 40 minutes of it being pointed out, this happened after MONTHS of engaged users of that site asking for the HNQ to be adressed.

[Lemmy UI does not underline individual links, so here are the three links individually]

  1. https://interpersonal.meta.stackexchange.com/questions/1520/should-we-edit-titles-that-are-not-sufficiently-descriptive
  2. https://interpersonal.meta.stackexchange.com/questions/1291/should-we-step-up-our-voting-culture/1294#1294
  3. https://interpersonal.meta.stackexchange.com/questions/1314/moratorium-on-hot-network-questions-until-we-have-greater-control-over-content

Yet, this happens only after Twitter outrage from non-users of the site. Why is that? Even if you have the very best of intentions and had this cooking internally for a long time (which I'm going to just assume for the purposes of this argument - good faith and all), this couldn't possibly have had less fortunate timing.

I'm not trying to rag on Stack Exchange for doing this, but why was such a massive change made without consulting, collecting feedback from or even notifying the site's active user base? Why does an engaged user of IPS have to visit twitter of all places to find out SE has cut out more than half of their site's traffic overnight?

Why wasn't the community consulted on this? We had discussions on it before, a lot of people came down in favor of restricting IPS from showing up on the sidebar in some fashion or another, and now we get this. No feedback, no discussion. Someone that apparently SE wants to placate made a stink on Twitter, and somehow that's more effective than months of constructive reasoning in driving change. What reason, if at all, does an engaged user of the site have to trust the community governance model with this?

If it sounds like I'm really annoyed by this its because I am, yes I was in favor of removing IPS from HNQ before, but the circumstances under which it happened is making me lose all hope I have for SE's leadership's ability to formulate concrete plans to make changes constructively.


Edit: Make individual links as bullet points in one of the quotes since Lemmy UI does not make it clear it is three links.

Edit 2: Add summary of the tweets so more context is on this post.

If this and Reddit are going downhill, where will we look for our tech questions?! (/s, there will always be others)

My bets for the future:

  • RTFM
  • Have ChatGPT RTFM
  • Read a book about general principles
  • Ask ChatGPT to apply general principles to its own answer after RTFM, then ask it to double check it
  • Spin up a VM, just try the thing. If it doesn't work, ask ChatGPT why.

When everything else fails...

  • Ask a question at any random place (SO, Reddit, Discord, Mastodon, Lemmy, etc.)
  • Feed the answers to ChatGPT and have it summarize them, then double check its own answer

As alluded to by comments here already, a death that was a long time coming.

It will probably go down as a marker of the darker side of tech culture, which, not coincidentally (?), manifested at a time when the field was most confused about what constitutes its actual discipline and whether it is an engineering field at all.

People aren't considering that documentation has greatly improved over time: languages and frameworks have become more abstract and user-friendly, modern code is mostly self-explanatory, good documentation has become a priority for open source projects, and well-documented open source languages and frameworks have become the norm.

Fewer people asking programming-related questions can be explained by programming being an easier and less problematic experience nowadays; that much is true.

I don't entirely agree that more and better documentation removes bugs, problems, questions, or concerns, or that it accounts for much of a 50% drop in site usage. Documentation is just another tool on the toolbelt, to be used alongside community forums.

The discovery process for me and many of my coworkers has always been: look up obscure errors, problems, etc. to get an idea of what I'm dealing with, and then head off to the documentation.

They don't remove bugs, but they make it easier to solve them without having to wait for some random guy to answer on Stack Overflow.

I don't know about now (I haven't asked a question in ages), but it sometimes used to take weeks to get a good answer on Stack Overflow.

GitHub issues are usually more useful

As long as an LLM doesn't back itself into a corner, making the same mistakes over and over again, it is magical to just paste in some code, ask what's wrong with it, and receive a detailed explanation plus a fix. Even better is when you ask "now can you add this and this to it?" and it does.
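
For illustration, here's a minimal sketch of that paste-and-ask loop done through the OpenAI Python client; the model name, the prompt wording, and the buggy snippet are all assumptions made up for the example, not anything from the comment above:

    # Hypothetical sketch of "paste code, ask what's wrong" via an LLM API.
    # Assumes the `openai` package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()

    buggy_code = """
    def average(xs):
        return sum(xs) / len(xs)  # crashes on an empty list
    """

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any chat model would do
        messages=[{
            "role": "user",
            "content": "What's wrong with this code, and how would you fix it?\n" + buggy_code,
        }],
    )
    print(response.choices[0].message.content)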

I routinely skip SO unless I've already exhausted most possibilities. If it was ever a good place to get answers, I frankly didn't see it. What I did see was infinite amounts of bitching about "bad" questions, non-duplicate duplicates, lazy-ass people who just wanted an excuse not to answer, and assorted people tripping on their little iota of perceived "power".

Hell, even the indexed results on Google etc. just stopped being even remotely useful a few years back. After that, most shit I searched for ended up in an unanswered and possibly locked question with some passive-aggressive bullshit remark. It's got the culture of helpfulness of a 2003 gaming forum - except the people telling everyone else to go fuck themselves are mods, not pubertal kids. (Although if the mods were pubertal kids that would actually explain quite a bit)

This hasn't been my experience at all, but I'm old and have been using SO since it was new.

I have stopped visiting it to answer questions because the questions aren't interesting anymore. They're either "how to do this incredibly obscure thing in SOMELIBRARY" (where I've never heard of that library) or "why does my function exit early at the first return statement instead of continuing on" (basic "you misunderstand programming so fundamentally that a single answer is unlikely to help" kinds of questions).
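
(For anyone who hasn't seen that second category in the wild, a toy sketch of the misunderstanding, with made-up names:)

    # Toy example: `return` ends the function immediately, so the
    # loop body only ever runs once.
    def double_all(xs):
        for x in xs:
            return x * 2  # exits on the first iteration; the loop never continues

    print(double_all([1, 2, 3]))  # prints 2, not [2, 4, 6] as the asker expected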

As far as I can tell, the range of "I've tried this, and partially gotten it working, but this thing does FOO when it should do BAR" questions doesn't show up, or at least it doesn't show up when I open the site.

Answering basic questions again and again and again isn't fun. It's something I could be paid to do, I suppose, but I'm not paid for that.

Seriously, how should a community based on short two- to three-paragraph answers react to question after question like this:

I am new to python. I would like to write a program which can collect information from multiple excel and pdf documents to output that in one single excel document to show similarities and differences between the documents . Is this possible ? If so, how and where would I start writing such a programme in python? Thanks

I haven’t tried anything yet

I mean, I'm glad that someone looks at that problem and thinks "programming could do this", because it could, but it's kind of a big task, and getting someone from "I haven't tried anything and am brand new to python" to that is beyond any question-and-answer forum. Welcome to programming: you may be able to get there, but it's going to be a bit of a hike.
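
To give a sense of the scale: even a bare-bones sketch of that task already assumes a fair amount of background. Here's a rough outline assuming pandas (with openpyxl) for the Excel side and pypdf for the PDFs, with made-up file locations and a placeholder notion of "comparison":

    # Bare-bones sketch of the asker's task; assumes pandas, openpyxl and
    # pypdf are installed, and that the documents live in a ./docs folder.
    from pathlib import Path

    import pandas as pd
    from pypdf import PdfReader

    rows = []

    # Summarize every Excel workbook in ./docs (here: just its row count).
    for xlsx in Path("docs").glob("*.xlsx"):
        df = pd.read_excel(xlsx)
        rows.append({"file": xlsx.name, "kind": "excel", "size": len(df)})

    # Summarize every PDF in ./docs (here: just its extracted line count).
    for pdf in Path("docs").glob("*.pdf"):
        reader = PdfReader(pdf)
        text = "".join(page.extract_text() or "" for page in reader.pages)
        rows.append({"file": pdf.name, "kind": "pdf", "size": len(text.splitlines())})

    # One combined sheet; real "similarities and differences" would need
    # far more domain-specific logic than a row count.
    pd.DataFrame(rows).to_excel("comparison.xlsx", index=False)

And even that toy version quietly assumes the asker can install packages, handle malformed files, and decide what "similar" even means.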

Mostly it seemed to be people who didn't know what they were talking about answering questions badly in an attempt to win points, presumably in the belief that this would bolster their resume somehow. And people who can't tell a good answer from a bad one voting on the answers.

I suppose the same number of experts are still on Stack Overflow, and they're living in good times: there isn't too much spam to hate on.

Most visits to SO come from novice programmers, and currently they live off of AI answers and help from more experienced co-workers.

I think the school of SO will last, and the community is not hostile; but some people tend to forget that the quality of a question is very important.

Other factors:

SO Jobs was shut down.

There is no new technology that would enable a new SO chapter; there aren't too many new questions about AI.

What do you think?