General_Effort

@General_Effort@lemmy.world
3 Posts – 564 Comments
Joined 6 months ago

Borders in cyberspace are the future. There are increased efforts to regulate the internet everywhere. Think copyright, age verification, the GDPR, or even anti-CSAM laws. It's all about making sure that information is only available to people who are permitted to access it. China is really leading the way here.

We do not agree with China's regulations, but that only means that we need border controls. Data must be checked for regulatory compliance with local laws.

10 more...

I just described what's going on. The world outside of China or Russia is moving more slowly, but the direction is the same.

It's steady pressure and it's only in one direction. Some countries resist more than others. I'm guessing you are not in the EU, because if so, you'd be aware of the "chat control" push.

Even so, it's not the days of Napster anymore. Think about hardware DRM. It stops no one, yet you, too, paid to have it developed and built into your devices. Think about Content ID. That's not going away. It's only going to be expanded. That frog will be boiled.

Recently, intellectual property has been reframed as being about "consensual use of data". I think this is proving to be very effective. It's no longer "piracy" or "theft", it's a violation of "consent". The deepfake issue creates a direct link to sexual aggression. One bill in the US that ostensibly targets deepfakes would apply to any movie with a sex scene, making sharing it a federal felony.

2 more...

What are they gonna do, send in a fucking swat team to take anything that doesn’t have hardware level DRM?

In a future where this is established, wouldn't you expect non-compliant hardware to be treated just as drugs or machine guns are treated now?

I think that's hardly an immediate worry, though. Various services already scan for illegal content or suspicious activity. It wouldn't take much to get ISPs to snitch on their customers.

Hey, I'm just saying how it's going. Look at, say, threads here about deepfakes. See all the calls for laws and government action. How can that be enforced?

4 more...

Despite the fact that Nvidia is now almost the main beneficiary of the growing interest in AI, the head of the company, Jensen Huang, does not believe that additional trillions of dollars need to be invested in the industry.

*Because of

You heard it, guys. There's no need to create competition to Nvidia's chips. It's perfectly fine if all the profits go to Nvidia, says Nvidia's CEO.

Arrows pointing out from Germany indicating a pointless quest for more space. Why do I feel like I have seen that before?

4 more...

The FTC is worried that the big tech firms will further entrench their monopolies. They are doing a lot of good stuff lately, an underappreciated boon of the Biden presidency. Lina Khan looks to be really set on fixing decades of mistakes.

I guess they just want to know if these deals lock out potential competitors.

The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots.

25 more...

Oh! Look how happy those little girls are! Probably because they just learned that Stay-At-Home-Mom isn't the only acceptable career for a woman.

Currently, AI in practice means Artificial Neural Networks (ANNs). That's only one specific approach. What an ANN boils down to is one huge system of equations.

The file stores the parameters of these equations. It's what's called a matrix in math. A parameter is simply a number by which something is multiplied. Colloquially, such a file of parameters is called an AI model.
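To make that concrete, here is a toy sketch (made-up numbers, not any real model) of what such a matrix of parameters does:

```python
# A tiny "system of equations": two inputs, two outputs, four parameters.
import numpy as np

W = np.array([[0.5, -1.2],    # these four numbers are the parameters
              [2.0,  0.3]])   # a real model has billions of them
x = np.array([1.0, 2.0])      # some input
y = W @ x                     # each output is a weighted sum of the inputs
print(y)                      # -> [-1.9  2.6]
```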

2 GB is probably an AI model with 1 billion parameters at 16-bit precision. Precision is how many digits you have: the more digits, the more precisely you can specify a value.
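The arithmetic behind that guess, as a quick back-of-the-envelope check:

```python
# 16-bit precision means 2 bytes per parameter.
params = 1_000_000_000            # 1 billion parameters
bytes_per_param = 16 // 8         # = 2 bytes
size_gb = params * bytes_per_param / 1e9
print(size_gb)                    # -> 2.0 (GB)
```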

When people talk about training an AI, they mean finding the right parameters, so that the equations compute the right thing. The bigger the model, the smarter it can be.

Does that answer the question? It's probably missing a lot.

Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906 by his own hand. Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics.

- Opening lines of the textbook States of Matter by David Goodstein.

You can use graphics cards for more than just graphics, eg for AI. Nvidia is a leader in facilitating that.

They offer a software toolkit for developing programs (an SDK) that use their GPUs to best effect. People have begun making "translation layers" that allow such CUDA programs to run on non-nvidia hardware. (I have no idea how any of this works.) The license of that SDK now forbids reverse engineering its output to create these compatibility tools.

Unless I am very mistaken, Nvidia can't ban the use of "translation layers" or stop people making them, as such. This clause creates a barrier to creating them, though.

Some programs will probably remain CUDA specific, because of that clause. That means that Nvidia is a gatekeeper for these programs and can charge extra for access.
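For anyone wondering what a "translation layer" even is, here is a purely conceptual Python sketch. Real projects (ZLUDA, for example) do this at the driver/binary level, not like this, and all the names below are made up for illustration:

```python
# Conceptual only: intercept calls meant for one vendor's API and forward them
# to a different backend. That is the basic idea of a compatibility/translation layer.
class TranslatedRuntime:
    """Pretend CUDA-style runtime that forwards every call to a generic GPU backend."""

    def __init__(self, backend):
        self.backend = backend          # e.g. a ROCm- or Vulkan-based backend

    def malloc(self, nbytes):
        return self.backend.allocate(nbytes)                  # memory allocation forwarded

    def launch_kernel(self, kernel, grid, block, *args):
        return self.backend.run(kernel, grid, block, *args)   # kernel launch forwarded unchanged
```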

The Kids Online Safety Act (KOSA) continues to march through the halls of Congress as though it’s the best thing since sliced bread, even though one of the co-creators of this bill clearly stated that her intention is to protect children from “the transgender” and to prevent “indoctrination” from the LGBT community.

Historically, "protecting children" was always about oppressing LGBT people and even women: protecting kids from turning gay or becoming cross-dressers. I'm sure it seems foolish to anyone here, but if you believe that being gay is a choice, it makes sense.

Comstock Laws, anyone?

1 more...

Explanation of how this works.

These "AI models" (meaning the free and open Stable Diffusion in particular) consist of different parts. The important parts here are the VAE and the actual "image maker" (U-Net).

A VAE (Variational AutoEncoder) is a kind of AI that can be used to compress data. In image generators, a VAE is used to compress the images. The actual image AI only works on the smaller, compressed image (the latent representation), which means it needs a less powerful computer (and uses less energy). This is what makes it possible to run Stable Diffusion at home.

This attack targets the VAE. The image is altered so that the latent representation is that of a very different image, while still looking roughly the same to humans. Say you take images of a cat and of a dog. You put both of them through the VAE to get the latent representations. Now you alter the image of the cat until its latent representation is similar to that of the dog. You alter it only in small ways and use methods to check that it still looks similar to humans. So, what the actual image-maker AI "sees" is very different from the image the human sees.
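For the technically inclined, a rough sketch of that optimization loop, assuming a Stable-Diffusion-style VAE with an encode() method (as in the diffusers library). This is illustrative only, not the code of any actual tool:

```python
import torch

def craft_adversarial_image(vae, cat_img, dog_img, steps=200, lr=0.01, eps=8/255):
    """Nudge cat_img so its latent matches dog_img's, while keeping the
    pixel-level change within +/- eps so humans barely notice it."""
    with torch.no_grad():
        target_latent = vae.encode(dog_img).latent_dist.mean   # latent of the decoy image
    delta = torch.zeros_like(cat_img, requires_grad=True)      # the perturbation we optimize
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (cat_img + delta).clamp(0, 1)                    # keep it a valid image
        latent = vae.encode(adv).latent_dist.mean
        loss = torch.nn.functional.mse_loss(latent, target_latent)
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)                           # keep the change small for humans
    return (cat_img + delta).detach().clamp(0, 1)
```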

Obviously, this only works if you have access to the VAE used by the image generator. So, it only works against open source AI; basically only Stable Diffusion at this point. Companies that use a closed source VAE cannot be attacked in this way.


I guess it makes sense if your ideology is that information must be owned and everything should make money for someone. I guess some people see cyberpunk dystopia as a desirable future. I wonder if it bothers them that all the tools they used are free (EG the method to check if images are similar to humans).

It doesn’t seem to be a very effective attack but it may have some long-term PR effect. Training an AI costs a fair amount of money. People who give that away for free probably still have some ulterior motive, such as being liked. If instead you get the full hate of a few anarcho-capitalists that threaten digital vandalism, you may be deterred. Well, my two cents.

35 more...

That was an annoying read. It doesn't say what this actually is.

It's not a new LLM. Chat with RTX is specifically software to do inference (i.e., run LLMs) at home, while using the hardware acceleration of RTX cards. There are several projects that do this, though they might not be quite as optimized for NVIDIA's hardware.


Go directly to NVIDIA to avoid the clickbait.

Chat with RTX uses retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software and NVIDIA RTX acceleration to bring generative AI capabilities to local, GeForce-powered Windows PCs. Users can quickly, easily connect local files on a PC as a dataset to an open-source large language model like Mistral or Llama 2, enabling queries for quick, contextually relevant answers.

Source: https://blogs.nvidia.com/blog/chat-with-rtx-available-now/

Download page: https://www.nvidia.com/en-us/ai-on-rtx/chat-with-rtx-generative-ai/
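If you want the gist of what RAG means here, a minimal sketch (my own toy code, not NVIDIA's): embed your local files, pick the passages most similar to the question, and paste them into the prompt of a locally running LLM.

```python
import numpy as np

def embed(texts, dim=256):
    """Placeholder embedding: hashed bag-of-words. The real thing would be a
    sentence-embedding model running on the GPU."""
    vecs = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for word in t.lower().split():
            vecs[i, hash(word) % dim] += 1.0
    return vecs

def retrieve(question, documents, k=3):
    """Pick the k documents whose embeddings are most similar to the question."""
    doc_vecs = embed(documents)
    q_vec = embed([question])[0]
    scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def answer(question, documents, local_llm):
    context = "\n\n".join(retrieve(question, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return local_llm(prompt)    # the local model, e.g. Mistral or Llama 2
```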

5 more...

Porn of Normal People

Why did they feel the need to add that "normal" to the headline?

3 more...

The California bill was co-sponsored by the Center for AI Safety (CAIS), a San Francisco-based non-profit run by computer scientist Dan Hendrycks, who is the safety adviser to Musk’s AI start-up, xAI. CAIS has close ties to the effective altruism movement, which was made famous by jailed cryptocurrency executive Sam Bankman-Fried.

Ahh, yes. Elon Musk, paragon of consumer protection. Let's just trust his safety guy.

I doubt this has to do with "powerful people". A DDOS attack does not remove anything from the net, but only makes it temporarily hard to reach.

There are firms that specialize in suppressing information on the net. They use SEO tricks to get sites down-ranked, as well as (potentially fraudulent) copyright and GDPR requests.

There must be any number of "little guys" who hate the Internet Archive. They scrape copyrighted stuff and personal data "without consent" and even disregard robots.txt. Lemmy is full of people who think that people should go to jail for that sort of thing.

21 more...

I just googled TS porn and did not find a single image of Taylor Swift.

I am left with a strange urge to buy a plushy shark from Ikea, though.

It's not about money, it's about sending a message. And also, being a clown.

4 more...

Reminds me of the story about the 1956 film The Conqueror. It was shot in Utah, downwind of atmospheric nuclear testing. It was speculated that this caused cancers among the crew.

“No brain?”

“Oh, there’s a brain all right. It’s just that the brain is made out of meat! That’s what I’ve been trying to tell you.”

“So … what does the thinking?”

“You’re not understanding, are you? You’re refusing to deal with what I’m telling you. The brain does the thinking. The meat.”

“Thinking meat! You’re asking me to believe in thinking meat!”

policymakers on both sides of the aisle agreed that First Amendment protections ought to safeguard the privacy of people's viewing habits, or else risk chilling their speech by altering their viewing habits.

Oh, that's a clever take.

Coffee machines and drink dispensers are already a thing, though?

3 more...

There are certainly purposes for which one wants as much of the raw sensor readings as possible. Other than science, evidence for legal proceedings is the only thing that comes to mind, though.

I'm more disturbed by the naive views so many people have of photographic evidence. Can you think of any historical photograph that proves anything?

Really famous in the US: The marines raising the flag over Iwo Jima. It was staged for the cameras, of course. What does it prove?

A more momentous occasion is illustrated by a photograph of Red Army soldiers raising the Soviet flag over the Reichstag. The rubble of Berlin in the background gives it more evidentiary value, but it is manipulated. It was not only staged but actually doctored: smoke was added in the background, and an extra watch on a soldier's arm (evidence of robbery) was removed.

Closer to now: As you are aware, anti-American operatives are trying to destroy the constitutional order of the republic. After the last election, they claimed to have video evidence of fraud during ballot counting. In one short snippet of video, one sees a woman talking to some people and then, after they leave, pulling a box out from under a table. It's quite inconspicuous, but these bad actors invented a story around this video snippet, in which a "suitcase" full of fraudulent ballots is taken out of hiding after observers leave.

As psychologists know, people do not think in strictly rational terms. We do not take in facts and draw logical conclusions. Professional manipulators, such as advertisers, know that we tend to think in "narratives". If a story is compelling, we like to twist neutral snippets of fact into evidence. We see what we believe.

17 more...
  1. What laws do you want?

  2. How would they be enforced?

  3. What practical effects would that have?

12 more...

I wish people would go straight to the source for these stories. No reason to link to something that only paraphrases a press release and adds some ads.

Press release (contains link to indictment):

https://www.justice.gov/usao-sdny/pr/two-brothers-arrested-attacking-ethereum-blockchain-and-stealing-25-million

Why would you need a law to make someone sell to the highest bidder?

1 more...

Trivia (from Wikipedia): "Taxman" from their 1966 album Revolver was the group's first topical song and the first political statement they had made in their music.

"Taxman" was influential in the development of British psychedelia and mod-style pop, and has been recognised as a precursor to punk rock. When performing "Taxman" on tour in the early 1990s, Harrison adapted the lyrics to reference contemporaneous leaders, citing its enduring quality beyond the 1960s. The song's impact has extended to the tax industry and into political discourse on taxation.

Unlike their other political songs, which are fairly vague peace-and-love jobs, this one tackles a concrete issue: it protests the 95% top marginal tax rate.


You've heard how "the boomers" screwed up everything for later generations. Here's exhibit A from pop culture. Don't just think about evil old men in smoky backrooms.

1 more...

I have spent a disturbing amount of time trying to decide if it was necessary to clarify that she was found dead inside the python. I believe that, yes, it was. Make of that what you will.

3 more...

You're allowed to use copyrighted works for lots of reasons, EG satire or parody, in which case you can legally publish it and make money.

The problem is that this precise situation is not legally clear. Are you using the service to make the image or is the service making the image on your request?

If the service is making the image and then sending it to you, then that may be a copyright violation.

If the user is making the image while using the service as a tool, it may still be a problem. Whether this turns into a copyright violation depends a lot on what the user/creator does with the image. If they misuse it, the service might be sued for contributory infringement.

Basically, they are playing it safe.

3 more...

It occurs to me that a lot of people don’t know the background here. (ETA: I wrote this in response to a different article, so some refs don't make sense.)

LAION is a German Verein (a club). It’s mainly a German physics/comp sci teacher who does this in his spare time. (German teachers have the equivalent of a Master’s degree.)

He took data collected by an American non-profit called Common Crawl. “Crawl” means that they have a computer program that automatically follows all links on a page, and then all links on those pages, and so on. In this way, Common Crawl basically downloads the internet (or rather the publicly reachable parts of it).

Search engines, like Google or Microsoft’s Bing, crawl the internet to create the databases that power their search. But these and other for-profit businesses aren’t sharing the data. Common Crawl exists so that independent researchers also have some data to study the internet and its history.

Obviously, these data sets include illegal content. It’s not feasible to detect all of it. Even if you could manually look at all of it, that would be illegal in a lot of jurisdictions. Besides, which standards of illegal content should one apply? If a Chinese researcher downloads some data and learns things about Tiananmen Square in 1989, what should the US do about that?

Well, that data is somehow not the issue here, for some reason. Interesting, no?

The German physics teacher wrote a program that extracted links to images, as well as their accompanying text descriptions, from Common Crawl. These links and descriptions were put into a list - a spreadsheet, basically. The list also contains metadata like the image size. On top of that, he used AI to guess if they are “NSFW” (IE porn), and if people would think they are beautiful. This list, with 5 billion entries, is LAION-5b.
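To give an idea of what "extracting links and descriptions" looks like in practice, here is a toy version (my own illustration, nothing to do with LAION's actual code):

```python
# Pull image URLs and their alt-text captions out of crawled HTML pages.
from html.parser import HTMLParser

class ImageLinkExtractor(HTMLParser):
    """Collect url/caption/size records from <img> tags."""

    def __init__(self):
        super().__init__()
        self.records = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        if a.get("src") and a.get("alt"):          # keep only captioned images
            self.records.append({
                "url": a["src"],
                "caption": a["alt"],
                "width": a.get("width"),
                "height": a.get("height"),
            })

# Feed it HTML from a crawl dump:
parser = ImageLinkExtractor()
parser.feed('<img src="http://example.com/cat.jpg" alt="a cat on a sofa" width="640">')
print(parser.records)
```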

Sifting through petabytes of data to do all that is not something you can do on your home computer. The funding that Stability AI provided is a few thousand USD for supercomputer time in "the cloud".

German researchers at the LMU - a government funded university in Munich - had developed a new image AI, which is especially efficient and can be run on normal gaming PCs. (The main people now work at a start-up in New York.) The AI was trained on that open source data set and named Stable Diffusion in honor of Stability AI, which had provided the several hundred thousand USD needed to pay for the supercomputer time.

These supposed issues are only an issue for free and open source AI. The for-profit AI companies keep their data sets secret. They are fairly safe from accusations.

Maybe one should use PhotoDNA to search for illegal content? PhotoDNA, which so kindly provided its services for free to this study, is a for-profit product owned by Microsoft, which is also behind OpenAI.

Or maybe one should only use data that has been manually checked by humans? That would be outsourced to a low wage country for pennies, but no need: Luckily, billion-dollar corporations exist that offer just such data sets.

This article solely attacks non-profit endeavors. The only for-profit players mentioned (PhotoDNA, Getty) stand to gain from these attacks.

Meanwhile... Last year, Taylor Swift received over $100 million for streaming from Spotify alone, making her a billionaire.

Clearly, (some) musicians are doing better than ever. And, judging by this dishonest, manipulative screed, they are determined to do better still.

"unwanted objects"

270GB of mostly node modules?

2 more...

2 things:

“more than half the length of a bowling lane and makes this snake longer than a giraffe is tall.”

Do Americans really consider this helpful information?

marking at least the fifth person to be devoured by a python in the country since 2017.

The Wikipedia page on reticulated pythons needs to be updated.

5 more...

I think you seriously over-estimate the level of tolerance of Nazi Germany. The Nazis persecuted Degenerate Music just like they persecuted Degenerate Art.

2 more...

So... This may be an unpopular question. Almost every time AI is discussed, a staggering number of posts support very right-wing positions. EG on topics like this one: Unearned money for capital owners. It's all Ayn Rand and not Karl Marx. Posters seem to be unaware of that, though.

Is that the "neoliberal Zeitgeist" or what you may call it?

I'm worried about what this may mean for the future.

ETA: 7 downvotes after 1 hour with 0 explanation. About what I expected.

51 more...