theluddite

@theluddite@lemmy.ml
6 Posts – 270 Comments
Joined 1 year ago

I write about technology at theluddite.org

I cannot handle the fucking irony of that article being in Nature, one of the organizations most responsible for fucking it up in the first place. Nature is a peer-reviewed journal that charges people thousands upon thousands of dollars to publish (that's right, charges, not pays), asks peer reviewers to volunteer their time, and then charges the very institutions that produced the knowledge exorbitant rents to access it. It's all upside. Because they're the most prestigious journal (or maybe one of two or three), they can charge rent on that prestige, then leverage it to buy and start other subsidiary journals. Now they have this beast of an academic publishing empire that is a complete fucking mess.

AI systems in the future, since it helps us understand how difficult they might be to deal with," lead author Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, an AI research company, told Live Science in an email.

The media needs to stop falling for this. This is a "pre-print," aka a non-peer-reviewed paper, published by the AI company itself. These companies are quickly learning that, with the AI hype, they can get free marketing by pretending to do "research" on their own product. It doesn't matter what the conclusion is, whether it's very cool and going to save us or very scary and we should all be afraid, so long as it's attention-grabbing.

If the media wants to report on it, fine, but don't legitimize it by pretending that it's "researchers" when it's the company itself. The point of journalism is to speak truth to power, not regurgitate what the powerful say.

You can tell that technology is advancing rapidly because now you can type short-form text on the internet and everybody can read it. Truly innovative stuff.

"I gave an LLM a wildly oversimplified version of a complex human task and it did pretty well"

For how long will we be forced to endure different versions of the same article?

The study said 86.66% of the generated software systems were "executed flawlessly."

Like I said yesterday, in a post celebrating how ChatGPT can do medical questions with less than 80% accuracy, that is trash. A company with absolute shit code still has virtually all of it "execute flawlessly." Whether or not code executes is not the bar by which we judge it.

Even if it were to hit 100%, which it does not, there's so much more to making things than this obviously oversimplified simulation of a tech company. Real engineering involves getting people in a room, managing stakeholders, navigating conflicting desires from different stakeholders, getting to know the human beings who need a problem solved, and so on.

LLMs are not capable of this kind of meaningful collaboration, despite all this hype.

This is bad science at a very fundamental level.

Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management.

I've written about basically this before, but what this study actually did is collapse an extremely complex human situation into generating some text, and then reinterpret the LLM's generated text as the LLM having taken an action in the real world. That's a ridiculous thing to do, because we know how LLMs work. They have no will. They are not AIs. An LLM doesn't obtain tips or act upon them -- it generates text based on previous text. That's it. There's no need to put a black box around it and treat it like it's human while at the same time condensing human tasks into a game that LLMs can play, and then pretending that those two things can reasonably coexist as concepts.

To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.

Part of being a good scientist is studying things that mean something. There's no formula for that. You can do a rigorous and very serious experiment figuring out how many cotton balls the average person can shove up their ass. As far as I know, you'd be the first person to study that, but it's a stupid thing to study.

I'm becoming increasingly skeptical of the "destroying our mental health" framework that we've become obsessed with as a society. "Mental health" is so all-encompassing in its breadth (it's basically our entire subjective experience of the world), but at the same time, it's actually quite limiting in the solutions it implies, as if there are specific ailments or exercises or medications.

We're miserable because our world is bad. The mental health crisis is probably better understood as all of us being sad as we collectively and simultaneously burn the world and fill it with trash, seemingly on purpose, and we're not even having fun. The mental health framework, by converting our anger, loneliness, grief, and sadness into medicalized pathologies, stops us from understanding these feelings as valid and actionable. It leads us to seek clinical or technical fixes, like whether we should limit smart phones or whatever.

Maybe smart phones are bad for our mental health, but I think reducing our entire experience with the world into mental health is the worst thing for our mental health.

This has been ramping up for years. The first time that I was asked to do "homework" for an interview was probably in 2014 or so. Since then, it's gone from "make a quick prototype" to assignments that clearly take several full work days. The last time I job hunted, I'd politely accept the assignment and ask them if $120/hr is an acceptable rate, and if so, I can send over the contract and we can get started ASAP! If not, I refer them to my thousands upon thousands of lines of open source code.

My experience with these interactions is not that they're looking for the most qualified applicants, but that they're filtering for compliant workers who will unquestioningly accept the conditions offered in exchange for the generally lucrative salaries. Those are the kind of employees they need to keep their internal corporate identity of being the good guys intact as tech goes from universally beloved to widely reviled.

At some point in the last decade, the ostensible goal of automation evolved from saving us from unwanted labor to keeping us from ever doing anything.

100% of these AI hype articles are also puff pieces for a specific company. They also all have a very loose interpretation of "AI." Anything that uses any machine learning techniques is AI, which is going to revolutionize every industry and/or end life as we know it.

Anyway, that complaint aside: That seems like a plausible use for machine learning. I look forward to wealthy Americans being able to access it while the rest of us wait 19 months to get a new PCP and take out a mortgage for the privilege.

I've posted this here before, but this phenomenon isn't unique to dating apps, though dating apps are a particularly good example. The problem is that capitalism uses computers backwards.

I get the point they're making, and I agree with most of the piece, but I'm not sure I'd frame it as Musk's "mistakes," because he literally won the game. He became the richest person on earth. By our society's standards, that's like the very definition of success.

Our economy is like quidditch. There are all these rules for complicated gameplay, but it doesn't actually matter, because catching the snitch is the entire game. Musk is very, very bad at all the parts of the economy except for being a charlatan and a liar, which is capitalism's version of the seeker. Somehow, he's very good at that, and so he wins, even though he has literally no idea how to do anything else.

edit: fix typo!

edit2: since this struck a chord, here's my theory of Elon Musk. Tl;dr: I think his success comes from offering magical technical solutions to our political and social problems, allowing us to continue living an untenable status quo.

It's not that this article is bad, but it is what frustrates me about tech journalism, and why I started writing about tech. None of these people have any idea how the internet actually works. They've never written a line of code, or set up a server, or published an app, or even done SEO, so they end up turning everything into a human interest piece, where they interview the people involved and some experts, but report it with that famous "view from nowhere."

Some blame Google itself, asserting that an all-powerful, all-seeing, trillion-dollar corporation with a 90 percent market share for online search is corrupting our access to the truth. But others blame the people I wanted to see in Florida, the ones who engage in the mysterious art of search engine optimization, or SEO.

Let me answer that definitively: it's Google, in multiple ways, one of which isn't even search, which I know because I actually do make things on the internet. SEO people aren't helping, for sure, but I've seen many journalists and others talk about how blogspam is the result of SEO, and maybe that's the origin story, but at this point, it is actually the result of Google's monopoly on advertising, not search. I've posted this before on this community, but Google forces you to turn your website into blogspam in order to monetize it. Cluttering the internet with bullshit content is their explicit content policy. It's actually very direct and straightforward, and it's widely and openly discussed on internet forums about monetizing websites.

I am the dude. Fair enough, but your summary misses the point. The original website was a useful tool that people used, but it didn't qualify for AdSense. I draw an analogy to recipes. Recipe sites used to be useful, but now you have to scroll through tons of blogspam to even get to the recipe. Google has a monopoly on ads, and like it or not, ad revenue is how people who make websites get paid. Google's policies for what qualifies for AdSense have a huge impact on the internet.

The point of the post is to show how direct that relationship is, using an existing and useful website.

"The workplace isn't for politics" says company that exerts coercive political power to expel its (ex-)workers for disagreeing.

The real problem with LLM coding, in my opinion, is something much more fundamental than whether it can code correctly or not. One of the biggest problems coding faces right now is code bloat. In my 15 years writing code, I write so much less code now than when I started, and spend so much more time bolting together existing libraries, dealing with CI/CD bullshit, and all the other hair that software projects have started to grow.

The amount of code is exploding. Nowadays, every website uses ReactJS. Every single tiny website loads god knows how many libraries. Just the other day, I forked and built an open source project that had a simple web front end (a list view, some forms -- basic shit), and after building it, npm informed me that it had over a dozen critical vulnerabilities, and dozens more of high severity. I think the total was something like 70?

All code now has to be written at least once. With ChatGPT, it doesn't even need to be written once! We can generate arbitrary amounts of code all the time whenever we want! We're going to have so much fucking code, and we have absolutely no idea how to deal with that.

The purpose of a system is what it does. "There is no point in claiming that the purpose of a system is to do what it constantly fails to do." These articles about how social media is broken are constant. It's just not a useful way to think about it. For example:

It relies on badly maintained social-media infrastructure and is presided over by billionaires who have given up on the premise that their platforms should inform users

These platforms are systems. They don't have intent. There's no mens rea or anything. There is no point saying that social media is supposed to inform users when it constantly fails to inform users. In fact, it has never informed users.

Any serious discussion about social media must accept that the system is what it is, not that it's supposed to be some other way, and is currently suffering some anomaly.

This study is an agent-based simulation:

The researchers used a type of math called “agent-based modeling” to simulate how people’s opinions change over time. They focused on a model where individuals can believe the truth, the fake information, or remain undecided. The researchers created a network of connections between these individuals, similar to how people are connected on social media.

They used the binary agreement model to understand the “tipping point” (the point where a small change can lead to significant effects) and how disinformation can spread.

Personally, I love agent-based models. I think agent modeling is a very, very powerful tool for systems insight, but I don't like this article's interpretation, nor am I convinced the author of this article really groks what agent-based modeling is. It's a very different kind of "study" than what most people mean when they use that word, and interpreting the insights is its own can of worms.
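For those unfamiliar, the binary agreement model the article describes is simple enough to sketch in a few dozen lines. This is a minimal, hypothetical version -- the agent count, committed fraction, and update rule here are illustrative, not the study's actual setup:

```python
import random

def simulate(n=500, committed_frac=0.15, steps=300_000, seed=42):
    """Binary agreement ("naming game") model: each agent holds the
    opinion set {'A'}, {'B'}, or {'A', 'B'} (undecided). The first
    n_committed agents are zealots who always hold {'A'}."""
    rng = random.Random(seed)
    n_committed = int(n * committed_frac)
    opinions = [{'A'} if i < n_committed else {'B'} for i in range(n)]

    for _ in range(steps):
        speaker, listener = rng.sample(range(n), 2)
        said = rng.choice(sorted(opinions[speaker]))
        if said in opinions[listener]:
            # Agreement: both parties collapse to the spoken opinion
            # (committed agents never change their minds).
            opinions[listener] = {said}
            if speaker >= n_committed:
                opinions[speaker] = {said}
        elif listener >= n_committed:
            # Disagreement: an uncommitted listener becomes undecided.
            opinions[listener].add(said)

    return sum(o == {'A'} for o in opinions) / n

# With a committed minority above the tipping point (roughly 10% in
# models like this), the whole network tends to flip to 'A'.
print(f"fraction fully convinced of 'A': {simulate():.2f}")
```

Note how much is baked into the modeler's choices here: the network, the update rule, who counts as "committed." That's exactly why interpreting the insights is its own can of worms.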

Just a heads up, for those of you casually scrolling by.

It's probably either waiting for approval to sell ads or was denied and they're adding more stuff. Google has a virtual monopoly on ads, and their approval process can take 1-2 weeks. Google's content policy basically demands that your site be full of generated trash to sell ads. I did a case study here, in which Google denied my popular and useful website for ads until I filled it with the lowest-quality generated trash imaginable. That might help clarify what's up.

I do software consulting for a living. A lot of my practice is small organizations hiring me because their entire tech stack is a bunch of shortcuts taped together into one giant teetering monument to moving as fast as possible, and they managed to do all of that while still having to write every line of code.

In 3-4 years, I'm going to be hearing from clients about how they hired an undergrad who was really into AI to do the core of their codebase and everyone is afraid to even log into the server because the slightest breeze might collapse the entire thing.

LLM coding is going to be like every other industrial automation process in our society. We can now make a shittier thing way faster, without thinking of the consequences.

Is that really all they do, though? That's what they've convinced us that they do, but everyone on these platforms knows how crucial it is to tweak your content to please the algorithm. They also do everything they can to become monopolies, without which it wouldn't even be possible to start on DIY videos and end on white supremacy or whatever.

I wrote a longer version of this argument here, if you're curious.

That's a bad faith gotcha and you know it. My lemmy account, the comment I just wrote, and the entire internet you and I care about and interact with are a tiny sliver of these data warehouses. I have actually done sysadmin and devops for a giant e-commerce company, and we spent the vast majority of our compute power on analytics for user tracking and advertising. The actual site itself was tiny compared to our surveillance-value-extraction work. That was a major e-commerce website you've heard of.

Bitcoin alone used half a percent of the entire world's electricity consumption a couple of years ago. That's just bitcoin, not even including the other crypto. Now with the AI hype, companies are building even more of these warehouses to train LLMs.

That sucks, but I argue that it's even worse. Not only do they tweak your results to make more money, but because Google has a monopoly on web advertising, and (like it or not) advertising is the main internet funding model, Google gets to decide whether or not your website gets to generate revenue at all. They literally have an approval process for serving ads, and it is responsible for the proliferation of LLM-generated blogspam. Here's a thing I wrote about it in which I tried to get my already-useful and high-quality website approved for ads, complete with a before and after approval, if you're curious. The after is a wreck.

I have worked at two different start ups where the boss explicitly didn't want to hire anyone with kids and had to be informed that there are laws about that, so yes, definitely anti-parent. One of them also kept saying that they only wanted employees like our autistic coworker when we asked him why he had spent weeks rejecting every interviewee that we had liked. Don't even get me started on people that the CEO wouldn't have a beer with, and how often they just so happen to be women or foreigners! Just gross shit all around.

It's very clear when you work closely with founders that they see their businesses as a moral good in the world, and as a result, they have a lot of entitlement about their relationship with labor. They view laws about it as inconveniences on their moral imperative to grow the startup.

We are usually not given a good example of how bad things actually happen. We imagine the barbarians storming the gate, raping and pillaging. That does happen, but more often, things getting worse is more complicated, and it affects different people at different times.

For the one in five (!!) children facing hunger, our society has failed. For a poor person with diabetes and no medical insurance, our society has already failed. For an Uber driver with no family support whose car broke down and who missed rent, facing an eviction, society is about to break down. I'm a dude in my mid-thirties who writes code, so for me, things are fine, but if I get hit by a bus tomorrow and lose the ability to use my hands, society will probably fail for me.

More and more people are experiencing that failure. Most of us are fine, but our being fine is becoming incredibly fucking precarious. More often than not, society collapsing looks like a daily constitution saving throw that becomes harder and harder to pass, and more and more of us who have a stroke of bad luck here or there fail.

Understanding society this way is important, and it's why solidarity is the foundation of leftist politics. I march for people without healthcare because I care about them, and also because there but for the grace of god go I. Bakunin put this beautifully over 150 years ago:

I am truly free only when all human beings, men and women, are equally free. The freedom of other men, far from negating or limiting my freedom, is, on the contrary, its necessary premise and confirmation.

I wish we had less selection, in general. My family lives in Spain, and I've also lived in France. This is just my observation, but American grocery stores clearly emphasize always having a consistent variety, whereas my Spanish family expects to eat higher quality produce seasonally. I suspect that this is a symptom of a wider problem, not the cause, but American groceries are just fucking awful by comparison, and so much more expensive too.

My two cents, but the problem here isn't that the images are too woke. It's that the images are a perfect metaphor for corporate DEI initiatives in general. Corporations like Google are literally unjust power structures, and when they do DEI, they update the aesthetics of the corporation such that they can get credit for being inclusive but without addressing the problem itself. Why would they when, in a very real way, they themselves are the problem?

These models are trained on past data and will therefore replicate its injustices. This is a core structural problem. Google is trying to profit off generative AI while not getting blamed for these baked-in problems by updating the aesthetics. The results are predictably fucking stupid.

We live in a vast digital spectacle, but we don't participate in the spectacle -- we consume it. Since nothing is real anymore, since our entire reality only exists through digital media, and since we have absolutely no agency, why not vote for a shit-poster for president? It's fun as hell to watch him troll all those tedious snobs in DC. Fuck those guys.

Then enough people voted for him that something incredible happened. He won. That wasn't supposed to happen! For once, something changed, and everyone who voted for him was a part of that change.

Actually accomplishing something is fucking intoxicating. It's so easy to get hooked on that heady feeling of mattering at all for once in our pathetic, powerless, alienated existences as cogs in a giant wasteful plastic machine. We spend months, then years, then decades drifting without meaning, working jobs we hate, taking our kids to shitty day cares we can barely afford, waiting 19 months to see a doctor about that new weird lump, and so on.

For these people, reality has never been so real. They're actually in it now, doing things. They've chosen a new content-creator-in-chief, and they want his content to take over the whole spectacle.

Other people have already posted good answers so I just want to add a couple things.

If you want a very simple, concrete example: healthcare. It depends on how you count, but more than half the world's countries have some sort of free or low-cost public healthcare, whereas in the US, the richest country in the history of countries, that's presented as a radical, kooky, unrealistic, communist Bernie idea. This isn't an example of a left-wing policy that we won't adopt. It's a normal public service in much of the world, almost like roads, and we can't adopt it because anti-socialism in this country is so malignant and metastasized that it can be weaponized against ordinary public services.

A true left wing would support not just things like healthcare, but advocate for an economic system in which workers have control over their jobs, not the bosses. That is completely absent.

Also, this meme:

Two-panel comic. The top panel is labeled "Republicans," the bottom "Democrats." They're both planes dropping bombs, except the Democrats' plane has an LGBT flag and a BLM flag.

It's glib, but it's not wrong. Both parties routinely support American militarism abroad. Antimilitarism in favor of internationalism has been a cornerstone of the left since the left began.

Every time one of these things happens, there's always comments here about how humans do these things too. Two responses to that:

First, human drivers are actually really good at driving. Here's Cory Doctorow explaining this point:

Take the much-vaunted terribleness of human drivers, which the AV industry likes to tout. It's true that the other dumdums on the road cutting you off and changing lanes without their turn-signals are pretty bad drivers, but actual, professional drivers are amazing. The average school-bus driver clocks up 500 million miles without a fatal crash (but of course, bus drivers are part of the public transit system).

Even dopes like you and me are better than you may think – while cars do kill the shit out of Americans, it's because Americans drive so goddamned much. US traffic deaths are a mere one per 100 million miles driven, and most of those deaths are due to recklessness, not inability. Drunks, speeders, texters and sleepy drivers cause traffic fatalities – they may be skilled drivers, but they are also reckless.

There's like a few hundred robot taxis driving relatively few miles, and the problems are constant. I don't know of anyone who has plugged the numbers yet, but I suspect they look pretty bad by comparison.
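To show what "plugging the numbers" would even look like, here's a back-of-envelope sketch. The human figure comes from the quote above; the robotaxi figures are hypothetical placeholders (companies don't publish comparable data, and "incidents" aren't the same category as fatalities), so this illustrates the arithmetic, not a real conclusion:

```python
# Human drivers, per the Doctorow quote above:
# roughly 1 death per 100 million miles driven.
human_deaths_per_100m_miles = 1.0

# Hypothetical robotaxi figures -- placeholders for illustration,
# NOT real data. Say a fleet logged 5 million miles and racked up
# 40 reported incidents (stalls, blocked fire trucks, collisions).
robotaxi_incidents = 40
robotaxi_miles = 5_000_000

# Normalize to the same per-100-million-mile scale for comparison.
robotaxi_incidents_per_100m_miles = (
    robotaxi_incidents / robotaxi_miles * 100_000_000
)

print(robotaxi_incidents_per_100m_miles)  # 800.0
```

Even with generous placeholder numbers, the normalized rate ends up orders of magnitude above the human baseline, which is why I suspect the real comparison looks bad.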

Second, when self-driving cars fuck up, they become everyone else's problem. Emergency service personnel, paid for by the taxpayer, are suddenly stuck having to call corporate customer service or whatever. When a human fucks up, there's also a human on the scene to take responsibility for the situation and figure out how to remedy it (unless it's a terrible accident and they're disabled or something, but that's an edge case). When one of these robot taxis fucks up, it becomes the problem of whoever they're inconveniencing, be it construction workers, firefighters, police, whatever.

This second point is classic corporate behavior. Companies look for ways to convert their internal costs (in this case, the labor of taxi drivers) into externalities, pushing down their costs but leaving the rest of us to deal with their mess. For example, plastic packaging is much, much cheaper for companies than collecting and reusing glass bottles or whatever, but the trash now becomes everyone else's problem, and at this point, there is microplastic in literally every place on Earth.

I don't really agree with this. It's the answer that I think classical economics would give, but I just don't think it's useful. For one, it ignores politics. Large corporations have also bought our government, and a few large wealth management funds like Vanguard own a de facto controlling share in many public companies, oftentimes including virtually an entire industry, such that competition between them isn't really incentivized as much as financial shenanigans and other Jack Welch-style shit.

Some scholars (I think I read this in Adrienne Buller's The Value of a Whale, which is basically the basis for this entire comment) even argue that we've reached a point where it might be more useful to think of our economy as a planned economy, but planned by finance instead of a state central authority.

All that is to say: why would we expect competition to grow, as you suggest, when the current companies already won, and therefore have the power to crush competition? They've already dismantled so many of the antimonopoly and other regulations standing in their way. The classical economics argument treats these new better companies as just sorta rising out of the aether but in reality there's a whole political context that is probably worth considering.

People have been coming up with theories about this forever, from perspectives and time periods as diverse as Aristotle, St. Augustine, Gandhi, and Trotsky. You put a lot of very difficult questions in your post, but you didn't put forth a criterion for what "justified" means to you. I think you're going to need to interrogate that before you can even think about any of these questions. For example, is violence justified by better outcomes, or by some absolute individual right to fight your oppressor? Is justification a question of morality, legality, tactical value, or something else entirely?

We need to set aside our petty differences and fight the true enemy: bloated IDEs.

You're not wrong but I think you're making too strong a version of your argument. Many people, including wealthy people, are genuinely, deeply moved by art. I love the symphony, opera, and ballet. If I were rich I'd absolutely commission the shit out of some music and get a lot of real joy out of that.

Vermont has several towns with as little as a thousand people that have fiber internet thanks to municipal cooperatives like ECFiber. Much of the state is a connectivity wasteland but it's really cool to see some towns working together to sort it out.

This has been widely known for at least a decade. I worked for an Amazon competitor back in 2013, and industry wide algorithmic price fuckery, including trying to figure out if your rivals were scraping you and poisoning their data, was common and openly discussed as a normal part of business operations.

The explicit directive of our economic system is to make as much money as possible in competition with everyone else. Of course companies are going to pour resources into using any and all technological fuckery to do that.

Our entire news ecosystem is putrid trash. Even our most prestigious and respected outlets are pumping out a constant stream of genocide apologia right now. Manufacturing Consent is decades old and should've ended the New York Times, and that was before they cheerled our way into Iraq.

Allowing advertising to decide which content is allowed and which isn't won't do anything but punish sites that deviate from mainstream orthodoxy and reward bland corporate friendly bullshit. Here's what that Internet looks like.

If I may be so bold, I and a few others write about tech at https://theluddite.org/.

I focus on the intersection between technology and human decisions. A lot of tech coverage has a techno-optimist, or tech-as-progress default perspective, where tech is almost this inexorable, inevitable, and apolitical force of nature. I strongly disagree with this perspective, which I think is convenient for the powers that be because it obscures that, right now, a few rich humans are making all our tech decisions.

I also write code for a living, which shockingly few tech writers and commentators have ever done. That makes it possible for me to write stuff like this.

I know that this kind of actually critical perspective isn't the point of this article, but software always reflects the ideology of the power structure in which it was built. I actually covered something very similar in my most recent post, where I applied Philip Agre's analysis of the so-called Internet Revolution to the AI hype, but you can find many similar analyses all over STS literature, or throughout Agre's work, which really ought to be required reading for anyone in software.

edit to add some recommendations: If you think of yourself as a tech person, and don't necessarily get or enjoy the humanities (for lack of a better word), I recommend starting here, where Agre discusses his own "critical awakening."

As an AI practitioner already well immersed in the literature, I had incorporated the field's taste for technical formalization so thoroughly into my own cognitive style that I literally could not read the literatures of nontechnical fields at anything beyond a popular level. The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial -- except that it reproduced the same technical schemata as the AI literature. I believe that this problem was not simply my own -- that it is characteristic of AI in general (and, no doubt, other technical fields as well).

I have been predicting for well over a year now that they will both die before the election but after the primaries, such that we can't change the ballots, and when Americans go to vote, we will vote between two dead guys. Everyone always asks "I wonder what happens then," and while I'm sure there's a technical legal answer to that question, the real answer is that no one knows.
