TheFutureIsDelaware

@TheFutureIsDelaware@sh.itjust.works
0 Posts – 24 Comments
Joined 1 year ago

Because it's objectively unsustainable? I don't really get what it even means to be "pro capitalist" at this point. We know, for a fact, that capitalism will lead to disaster if we keep doing what we're doing. Do you disagree with that? Or do you not care?

What is your general plan for what we should do when we can see that something we currently do and rely on will have to stop in the near future? Not that we will have to choose to stop it, but that it will stop because of something being depleted or no longer possible.

If you imagine that we're trying to find the best long-term system for humanity, and that the possible solutions exist on a curve on an X/Y plane, and we want to find the lowest point on the function, capitalism is very clearly a local minimum. It's not the lowest point, but it feels like one to the dumbass apes who came up with it. So much so that we're resistant to doing the work to find the actual minimum before this local one kills literally everyone :)


"Reddit as a hosting service. We provide you the infrastructure and discoverability necessary to build and maintain a growing community. Yours for only $50/month!"


Writers always know that "organic" will be misinterpreted by the public, and do it anyway, hiding behind "technically correct". Personally, I think avoiding creating more misunderstandings about science and space exploration outweighs any "technically correct" bullshit. Stop intentionally hurting public understanding for clicks.


No, it absolutely should not work. I can't even imagine what you are imagining when you say that. HOW could it possibly work long term? Are you familiar with any game theory?


ChatGPT usage is a very poor metric. Anything interesting is happening via the API. Even the chat completions endpoint still isn't "ChatGPT" on its own. None of these complaints about it being "dumber" apply to the API outputs. OpenAI doesn't care about nerfing ChatGPT because it's not their real product.

Aliens very obviously exist. Life evolved on Earth. The idea that it has never done that elsewhere is ridiculous. The question is how spread out they are in both time and space. The fact that we see no clear evidence of them points to "very spread out". But it could just be that complex life, or intelligent life, is rare, which is why we look for much fainter signals that we'd only see when specifically looking, like with JWST. Either way, they exist. Whether that existence has any relevance to yours is up for debate.

But they have definitely never visited Earth.


First, no alternative is required for something to be unacceptable to continue. This is a very common line of reasoning that keeps us stuck in the local minimum. Leaving a local minimum necessarily requires some backsliding.

Capitalism is unsustainable because every single aspect of it relies on the idea that resources can be owned.

If you were born onto a planet where one single person owned literally everything, would you think that is acceptable? That it makes sense that the choices of people who are long dead and the agreements between them roll forward in time entitling certain people to certain things, despite a finite amount of those things being accessible to us? What if it was just two people, and one claimed to own all land? Would you say that clearly the resources of the planet should be divided up more fairly between those two people? If so, what about three people? Four? Five? Where do you stop and say "actually, people should be able to hoard far more resources than it is possible for anyone to have if things were fair, and we will use an arbitrary system that involves positive feedback loops for acquiring and locking up resources to determine who is allowed to do this and who isn't"?

Every single thing that is used in the creation of wealth is a shared resource. There is no such thing as a non-shared resource. There is no such thing as doing something "alone" when you're working off the foundation built by 90+ billion humans who came before you. Capitalism lets the actual costs of things get spread around to everyone on the planet: environmental harm, depletion of resources that can never be regained, actions that are a net negative but are still taken because they make money for a specific individual. If the TRUE COST of the actions taken in the pursuit of wealth were actually paid by the people making the wealth, it would be very clear how much the fantasy of letting people pursue personal wealth relies on distributing the true costs through time and space. It requires literally stealing from the future. And sometimes the past: resources invested into the public good in the past can often be exploited asymmetrically by people making money through the magic of capitalism. Your business causes more money in damage to public resources than it even makes? Who cares, you only pay 30% in taxes!

There is no way forward long term that preserves these fantasies and doesn't inevitably turn into extinction or a single individual owning everything. No one wants to give up this fantasy, and they're willing to let humanity go extinct to prevent having to.


It would not HAVE to do that; it's just much harder to get it to happen reliably through attention, though not impossible. But offloading deterministic tasks like this to typical software, which deals with them better than an LLM, is obviously a much better solution.

But this solution isn't "in the works", it's usable right now.

Working without python:

It left out the only word with an f, "flourish". (Just kidding, it left in "unfathomable". Again... less reliable.)
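The offloading idea can be sketched in a few lines. Instead of asking the model to filter words by letter (which attention handles unreliably), have it delegate to a deterministic function — the kind of thing a code-interpreter or tool call would run. The function name here is made up for illustration; the point is that ordinary code gets this right every time:

```python
def remove_words_containing(text: str, letter: str) -> str:
    """Deterministically drop every word containing `letter`.

    An LLM doing this via attention is unreliable; plain code is not.
    """
    kept = [w for w in text.split() if letter.lower() not in w.lower()]
    return " ".join(kept)


print(remove_words_containing("an unfathomable flourish of fortune", "f"))
# -> "an"
```

An LLM only needs to be good enough to decide *when* to call this, not to execute the filtering itself.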

Well, my first thought is "why the fuck didn't you put WHAT FEATURE in the title?". Then I thought "okay, that's probably the article's title, and OP was just using it." Then I saw that the actual title is "Reddit is getting rid of its Gold awards system", and returned to "fuck this OP".

The entire world is filled with so much information absolutely everywhere. Everything is constantly leaking it out, blasting it in every direction. One of the most worrying potential capabilities that could rapidly emerge in AI is collecting and finding patterns in this information.

Look at examples of side-channel attacks for computers, and then realize that AI has the potential to have side-channel attacks for... everything. It's pretty scary how much you can narrow down guesses when you have a few pieces of information. Using WiFi to learn about a physical space is just the surface.
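The "narrowing down" effect is easy to demonstrate with a toy model. Every detail below (the 3-letter code, the specific leaks) is made up for illustration; the point is how quickly several individually weak observations collapse a search space:

```python
import string
from itertools import product

# Toy model: an attacker guessing a secret 3-letter code.
candidates = ["".join(p) for p in product(string.ascii_lowercase, repeat=3)]
print(len(candidates))  # 17576 possibilities to start

# Hypothetical side-channel observations -- each alone is weak:
leaks = [
    lambda c: c[0] in "aeiou",   # e.g. keystroke audio hints at the first key
    lambda c: len(set(c)) == 3,  # e.g. timing shows three distinct keys
    lambda c: c.endswith("t"),   # e.g. an EM trace correlates with the last key
]
for leak in leaks:
    candidates = [c for c in candidates if leak(c)]

print(len(candidates))  # 120 -- three weak facts cut the space ~150x
```

An AI that can harvest and correlate many such weak signals at scale is effectively running this intersection against everything, all the time.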

It has to be okay for people to die, because ALL PATHS FORWARD INVOLVE PEOPLE DYING. Any choice you make involves some hidden choice about who gets to suffer and die and who doesn't.

But no, that's not what I was saying. Also, are you aware that extinction also involves lots of deaths? Have you thought about what does and doesn't count as "death" to you? What about responsibility for that death? How indirect does it have to be before you're free from responsibility? Is it better to have fewer sentient beings living better lives, or more beings living worse lives? Does it matter how much worse? Is there a line where their life becomes a net positive in terms of its contribution to the overall "goodness" of the state of the universe? Once we can ensure a net positive life for people, should the goal be for as many to exist as possible? Should new people only be brought into the world if we can guarantee them a net positive life?

But hey, thanks for the very concrete example of how being in a decent local minimum is very hard to break out of.

God, we are so fucking far past the point where this will matter. We need to give up the idea that we can still patch our broken governmental system with things that rely on people coming together and agreeing that some things are just bad. The right will defend anything if defending it can give them more power. This legislation would probably be used to kick a Democratic judge off for "ethics violations" like giving food to the homeless somewhere that it's been made illegal before it was ever used to target Republican judges guilty of rape, lying under oath, and bribery.

No, they don't. And you genuinely do not understand the gulf that evidence would have to overcome for aliens to be more likely than literally any other explanation for any phenomena you're talking about. Including explanations that require extremely unlikely coincidences. Because coincidences happen. But if aliens have visited Earth, that requires an unbelievable number of observations about the universe to be explained. And truly, the evidence to overcome that would be MORE than a good video. And we don't even fucking have that. It's pathetic how people act about UFOs and aliens.

The fact that it's always bad evidence, or indirect evidence, should tell you that it's always the same bullshit. If you believe it, it's because you want to, not because there's the tiniest reason to.

So they slapped T2I-Adapter (which is basically an alternative to ControlNet) on top of SDXL 0.9. This is not very novel or new, and Stability is having cashflow issues, so they're desperate to have tools on Clipdrop that people can actually use and will actually pay for, to increase profits. That's pretty much all this is.

Here's another place you can play with T2I-adapter with SD1.5 models: https://huggingface.co/spaces/Adapter/T2I-Adapter

Yes, because AI assistants are going to get too good to not use. And they are going to be made infinitely more powerful by being able to see and hear everything around you.

"this kind of journalism constantly teaches and reminds people that organic doesn’t mean life."

Except... it doesn't. That's just a dreamy hypothetical way that it might manifest, but that doesn't match reality. It misinforms. The end.

Redact.dev. I don't know if it still works with the API changes and I'm too lazy to look, but I assume it's probably still capable. It can rewrite your comments for you, then delete them.

The Book of Why by Judea Pearl

AI alignment is a field that attempts to solve the problem of "how do you stop something with the ability to deceive, plan ahead, seek and maintain power, and parallelize itself from just doing that to everything".

https://aisafety.info/

AI alignment is "the problem of building machines which faithfully try to do what we want them to do". An AI is aligned if its actual goals (what it's "trying to do") are close enough to the goals intended by its programmers, its users, or humanity in general. Otherwise, it’s misaligned.

The concept of alignment is important because many goals are easy to state in human language terms but difficult to specify in computer language terms. As a current example, a self-driving car might have the human-language goal of "travel from point A to point B without crashing". "Crashing" makes sense to a human, but requires significant detail for a computer. "Touching an object" won't work, because the ground and any potential passengers are objects. "Damaging the vehicle" won't work, because there is a small amount of wear and tear caused by driving. All of these things must be carefully defined for the AI, and the closer those definitions come to the human understanding of "crash", the better the AI is "aligned" to the goal that is “don't crash”. And even if you successfully do all of that, the resulting AI may still be misaligned because no part of the human-language goal mentions roads or traffic laws.

Pushing this analogy to the extreme case of an artificial general intelligence (AGI), asking a powerful unaligned AGI to e.g. “eradicate cancer” could result in the solution “kill all humans”. In the case of a self-driving car, if the first iteration of the car makes mistakes, we can correct it, whereas for an AGI, the first unaligned deployment might be an existential risk.
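The specification gap in the car example can be made concrete. This is a toy sketch — every name, field, and threshold is made up for illustration — showing how each naive formalization of "crash" misfires on perfectly normal driving:

```python
from dataclasses import dataclass, field

@dataclass
class DriveState:
    contacts: set = field(default_factory=set)  # objects the car is touching
    wear: float = 0.0          # accumulated wear and tear (arbitrary units)
    impact_force: float = 0.0  # peak contact force this tick (arbitrary units)

# Normal driving: touching the road, carrying a passenger, tiny tire wear.
normal = DriveState(contacts={"road", "passenger"}, wear=0.01)

# Naive spec 1: "crashing = touching an object" -- flags normal driving.
def crashed_touch(s): return len(s.contacts) > 0

# Naive spec 2: "crashing = damaging the vehicle" -- flags ordinary wear.
def crashed_damage(s): return s.wear > 0

# Closer spec: high-force contact, or contact outside a whitelist.
def crashed_v3(s):
    allowed = {"road", "passenger"}
    return s.impact_force > 5.0 or bool(s.contacts - allowed)

print(crashed_touch(normal), crashed_damage(normal), crashed_v3(normal))
# -> True True False
```

Even `crashed_v3` is still misaligned in the sense the passage describes: nothing in it mentions roads, traffic laws, or the pedestrian the car almost hit. The gap between "what we wrote down" and "what we meant" never fully closes by patching predicates.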

Yes, it seems pretty untenable that Rare Earth is the explanation for the lack of evidence of any life outside of Earth. But even if it is true that we're the only life in the observable universe, the universe is still much bigger, and in many physicists' opinion, probably infinite.

The fact that life seems to have evolved on Earth as soon as it was possible to is some evidence that abiogenesis is not the bottleneck. But the usefulness of this observation depends on the distribution of other things we don't know. For example, if on planets where life evolves later, life never makes it to human-level intelligence before the planet becomes uninhabitable, then our early abiogenesis is survivorship bias, rather than something we should expect to be in the center of the distribution of when abiogenesis happens on a planet where it is possible.
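The survivorship-bias point can be checked with a toy Monte Carlo simulation. All the numbers below (habitable window, run-up time to intelligence, uniform abiogenesis) are made-up assumptions for illustration, not estimates: even when abiogenesis times are uniform across planets, observers disproportionately find themselves on planets where it happened early.

```python
import random

random.seed(0)
N = 100_000
WINDOW = 10.0  # habitable lifetime of a planet (made-up units)
RUNUP = 6.0    # time from abiogenesis to observers (made-up units)

all_times, observed_times = [], []
for _ in range(N):
    t_abio = random.uniform(0, WINDOW)  # uniform abiogenesis time
    all_times.append(t_abio)
    if t_abio + RUNUP <= WINDOW:        # observers only arise in time
        observed_times.append(t_abio)

print(sum(all_times) / len(all_times))            # ~5.0: true average
print(sum(observed_times) / len(observed_times))  # ~2.0: what observers see
```

Observers in this toy world would conclude "abiogenesis happens early" even though it's uniform — which is exactly why our own early abiogenesis is only weak evidence about the underlying distribution.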

No. Maybe as a short stop on the way to extinction, but absolute and complete extinction ain't a dystopia. And the worse-than-extinction possibilities are more like eternal suffering in a simulator for resisting the AI. Not quite captured by "dystopia".

You're at a moment in history where the only two real options are utopia or extinction. There are some worse things than extinction that people also worry about, but let's call it all "extinction" for now. Super-intelligence is coming. It literally can't be stopped at this point. The only question is whether it's 2, 5, or 10 years.

If we don't solve alignment, you die. It is the default. AI alignment is the hardest problem humans have ever tried to solve. Global warming will cause suffering on that timescale, but not extinction. A well-aligned super-intelligence has actual potential to reverse global warming. A misaligned one will mean it doesn't matter.

So, if you care, you should be working in AI alignment. If you don't have the skillset, find something else: https://80000hours.org/

Every single dismissal of AI "doom" is based on wishful thinking and hand-waving.


What?... no. That would be confirmed life on another planet. It would be the single biggest discovery in human history. You would have heard if that was the case. "Fossilized microbes" would be "life". Nobody needs the life to still be alive to be a huge, huge deal.

If you're thinking the word "organic" means "microbe", it doesn't. I guess this is the consequence of all the harm to public understanding caused by shitty headlines chasing clicks while hiding behind "technically correct", despite knowing they'd be misinterpreted as "life".

Not 100% proof. That would require the universe to be infinite, which it still might not be if the curvature is within the tiny margin of error. It's close enough to proof that it might as well be the case. The entire universe couldn't be less than something like 130x the size of the observable universe, though... unless it has nontrivial topology. There's always a caveat.
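The curvature caveat can be made quantitative. Assuming the measured curvature density satisfies $|\Omega_k| \le \epsilon$ (the exact bound is an assumption here; published constraints vary by dataset), the comoving radius of curvature is bounded below by

$$R_{\mathrm{curv}} = \frac{c/H_0}{\sqrt{|\Omega_k|}} \gtrsim \frac{4.4\ \text{Gpc}}{\sqrt{\epsilon}}$$

using $c/H_0 \approx 4.4$ Gpc. For example, taking $\epsilon = 0.002$ gives $R_{\mathrm{curv}} \gtrsim 98$ Gpc, roughly $7\times$ the $\sim 14$ Gpc comoving radius of the observable universe, and the minimum volume ratio is that factor cubed. The exact multiple — like the "130x" figure — depends entirely on which $\Omega_k$ bound you plug in, and a closed universe with nontrivial topology evades the bound entirely.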