chobeat

@chobeat@lemmy.ml
12 Post – 27 Comments
Joined 5 years ago

Larping as a tankie is definitely a thing of immature, terminally online kids, but I wouldn't throw Lenin in the bunch. While Stalin is mostly condemned as a reactionary psychopath by pretty much everybody except a few leftist basement-dwellers, Lenin is still read and taught throughout the world. Nothing edgy in reading Lenin.

Edgy kids on the internet worship other psychopaths like Pol Pot or Hoxha.

"debate me" kids are another stereotype on the internet though. The idea that ideas should be entertained and discussed for the sake of it and come without implications attached is just another form of edgyness. It's another thing that often goes away with age or with touching grass. I know because I was one of them. Now I understand that the fact itself of discussing something publicly has moral implications.

Advertising works, nobody denies that. If you see enough ads, on average, your mind will be changed.

Can you point to scientific literature that proves this statement?

Most people in the field don't even ask themselves this question. They all have an incentive in believing it works.

There's a book about it though: https://us.macmillan.com/books/9780374538651/subprimeattentioncrisis

In the picture you can see organizations moving in the public sphere around AI. On the left you have right-wing and libertarian think tanks, corporations and frontline actors that fuel a sense of panic around AI, either to sabotage their business competitors or to leverage this panic to project the image of selling a very powerful tool while at the same time deflecting responsibility. If the AI is dangerous and sentient, you won't care much about the engineers behind it.

On the right you have several public orgs or NGOs operating in the field of algorithmic accountability, digital rights and so on. They push the opposite of the AI panic, pointing the finger at the corporations and powers that create and govern AI.


No more "alternatives" please. That formula has failed over and over again. We want software that can do what proprietary platforms do not pursue because it's not profitable. Online spaces to build meaningful connections, have interesting conversations with like-minded people, discover new things, be free from trolls and toxicity, possibly without the guilt of polluting the hell out of this planet with hardware and excessive electricity consumption.


It's not from me but from AlgorithmWatch, one of the most famous and respected NGOs in the field of Algorithmic accountability. They published plenty of stuff on these topics and human rights threats from these companies.

Also, this is an ecosystem analysis of political positioning. These companies and think tanks are appearing in newspapers under their own names to say we should panic about AI. It's not a secret: just open Google News and with a simple search you will find a landslide of news on these topics sponsored by these companies.

My girlfriend is a professional fermenter, so I have endless amounts of fermented sauces in my immediate surroundings.

I would say the most hyped one in her network is this strawberry gochujang she's making.


Commenting with no clue what people are talking about

to a reasonably large audience

That's a measure of success that makes sense only in a for-profit, growth-oriented environment. Software just has to be sustainable, and "bigger" doesn't necessarily imply "more sustainable".

That said, what is now possible with social media is extremely restricted, and our idea of what social media can be is constrained by profit motives. Social media could be much more: it could connect humans for collaboration and exchange instead of data extraction. We are so used to the little crumbs of positive experiences on social media that we've normalized them.

Bonfire, for example, if we want to stick to the fediverse, is trying to challenge this narrative and push the boundaries of what social media is supposed to do.

Another space would be non-siloed notion-like tools.

Another entire can of worms would be to go beyond the "dictatorship of the app" and start building software and UX around flexibility and customizability for the average user, rather than keeping this a privilege of tools targeting power users. Flexibility in UX means harder trackability and lower CTR, so most end-user "apps" avoid it.


I had to check urban dictionary to get the joke lol

A lot of coopyleft or p2pp projects adopt the license and it's not discussed that much in the identity of the project.

I personally believe that software freedom shouldn't come at the expense of people's freedom, and I consider the FOSS movement a political failure because it's completely incapable of mediating between the two things. New generations are growing more and more alienated from a movement they consider a relic of the past.

For my projects, I avoid FOSS licenses, but they are also not relevant enough to get insights from them.

Electric vehicles are part of the problem. Definitely not part of the solution. Personal cars are incompatible with any realistic sustainability target. He actively sabotaged the development of public infrastructure to make profit out of his stupid cars. He's evil as fuck.


Not nation-wide, but definitely in California and he claimed that himself. Anyway, if you want to dig deeper: https://disconnect.blog/the-hyperloop-was-always-a-scam/

There are entire fields of research on that. Or do you believe the internet, a technology developed for military purposes, an infrastructure that supports most of the economy, the medium through which billions of people experience most of reality and build connections, is free from ideology and propaganda?

they are also doing a whole flavor just for research-oriented social media, geared towards the OpenScience community and academia in general. It will launch soon.

Then they have a whole set of collaboration tools and groupware that now incorporates the basic features of Trello and GitHub, but on top of a social media platform with a granular permission system. There the use cases are many more, but it's also much more general-purpose than the research flavor. I think the end-game would be a platform that acts as a middleware and connects social life, gift-based collaboration, work and consumption in a single open platform.

I also wrote an article envisioning a federated notion-like tool built on top of Bonfire, which would allow structuring knowledge and implementing no-code software on top of Bonfire, but this would clearly require a disproportionate effort for where the project is at the moment: https://fossil-milk-962.notion.site/Fractal-Software-for-Fractal-Futures-71e515597d6b424c994cae74f3341521?pvs=4

You might have heard of the singularity, sentient AI, the uprising of the AI, job losses due to automation. That's all propaganda that sits under the concept of AI panic.


no colonial power and no empire ever lasted forever. Everything made by humans eventually dissolves. The current strategy of trying to stay alive (kinda) and keeping their identity is more than enough to eventually see the American empire collapse on itself, and Israel with it.

automation never reduces jobs. It fragments them, it reduces their quality, it increases deskilling and replaceability. We are not going to work less as we never worked less thanks to automation. If we want to work less, we need unionization, not machines.

Right now the whole model of generative AI, and of LLMs in general, is built on the assumption that training a machine learning model is not a problem for licenses, copyright and whatever. Obviously this is leading to huge legal battles, and until their outcome is clear and a new legal practice or specific regulations are established in the EU and USA, there's no point discussing licenses.

Also, licenses don't prevent anything; they are not magic. If small or big AI companies feel safe in violating these laws, or simply profit enough to pay the fines, they will keep doing it. It's the same with FOSS licenses: most small companies violate licenses and, unless you have whistleblowers, you never find out. Even then, the legal path is very long. Only big corporations scared of humongous lawsuits really care about it, but small startups? Small consultancies? They don't care. Licenses are just a sign that says "STOP! Or go on, I'm a license, not a cop".

This paper presents a taxonomy of harms created by LLMs: https://dl.acm.org/doi/pdf/10.1145/3531146.3533088

OpenAI released ChatGPT without systems to prevent or compensate for these harms, and while being fully aware of the consequences, since this kind of research has been going on for several years. In the meantime they've put some paper-thin countermeasures on some of these problems, but they are still pretty much a shit-show in terms of accountability. Most likely they will get sued into oblivion before regulators outlaw LLMs with dialogical interfaces. This won't do much for the harm that open-source LLMs will create, but it will at least limit large-scale harm to the general population.


The true path to Enlightenment prescribes not to argue with edgy 16 yo kids on the Internet. The New Atheist movement is dead, only edgy kids remain. No need to argue.

Microsoft bought OpenAI. The AI panic pushed by Sam Altman is sanctioned by Microsoft.

it's answered in other comments

Since here the answers are split between edgy kids and people repeating a bland, stale narrative about comfort and fear of death, I will try to bring a different perspective.

For context: I grew up in a Catholic country, but in a very secular family and in a very secular region. I had an edgy atheist phase that lasted from around age 8 until probably around 30.

I studied a STEM discipline and have always been surrounded by mostly atheist or agnostic people.

I was afraid of death up until I was 27 or 28, but the cope was gnostic transhumanism, not Abrahamic religions. At some point I took acid, my gf at the time told me I was going to die, I cried my eyes out for a few minutes and then I was fine, and I'm still fine. I had a near-death experience in the hospital that further consolidated the idea that I'm going to die, and it's chill: if you're sick, you have a bunch of people looking after you, everybody gives you attention, you spend all your day chilling in bed on drugs. Dream life death.

I was still agnostic at that point. I started approaching spirituality later on, not so much because of an emotional need, but because further studies in both STEM disciplines and philosophy highlighted the limits of reason in explaining and understanding the world. Reason is a tool among others, with its limits. Limits that can be reasoned about using reason itself. You cannot investigate or explain what lies outside them, though, let alone change it; for that you need different tools: faith, spirituality, trust. I got closer to what Erik Davis calls "Cyborg Spiritualism", but that doesn't mean much since it's not an organized movement, more of a shared intuition and meaning-making process at which, over the last 60 years, more and more people have arrived. Especially people dealing with disciplines like systems theory, cybernetics, system design and information theory, but also people disillusioned with the New Age movement or other Western Gnostic practices. Mixed in it there's plenty of animism.

Atheists believe that all religions are about speaking to God and hoping for an answer, while many religions are about listening to God, who is already talking to us all the time.

They published a deliberately harmful tool against the advice of civil society, experts and competitors. They are not only reckless but have been tasked since their foundation with the mission to create chaos. Don't forget that the original idea behind OpenAI was to damage the advantage that Google and Facebook had in AI by releasing machine learning technology as open source. They definitely did that, and now they are expanding their goals. They are not in it for the money (ChatGPT will never be profitable); they are playing a bigger game.

Pushing the AI panic is not just a marketing strategy but a way to build power. The more dangerous they are considered, the more regulations will be passed that impact the whole sector. https://fortune.com/2023/05/30/sam-altman-ai-risk-of-extinction-pandemics-nuclear-warfare/


ok Elon