Striking actor Stephen Fry says his voice was stolen from the Harry Potter audiobooks and replicated by AI

stopthatgirl7@kbin.social to Technology@lemmy.world – 800 points –
fortune.com

The actor told an audience in London that AI was a “burning issue” for actors.



It's just the beginning, for sure. This future will be the end of artists, and still everyone will be clapping for AI productions like fools.

No one cared when spreadsheets replaced a huge chunk of office workers.

If the results are the same, what's the issue?

Artists feel special because until recently computers couldn't automate their work. But it's the same as any other job.

The people who lost those jobs cared.

If not for the wages, people hardly have any attachment to most office jobs. But when it comes to artistic endeavors, a lot of people dream of being able to make a career in those fields. Frankly, that sort of comment seems like it comes from envy, as if artists ought to be taken down a peg for daring to work on something they are passionate about. I can't think of a single artist who has bragged about being above automation.

As someone who works in an office job, if AI could free me up to work on something creative, that would be wonderful. But if it instead replaces already existing creatives and leaves us both without anywhere to work, that isn't really helping anybody but the executives profiting from it. What benefit does that even add to my life? Remixed porn? Meme generators? It's not the same level of benefit as industrial automation, if any. The human element of art enriches it in a unique way that an AI distilling a style from countless samples won't be able to match.

This hits the nail on the head. A major component of art is that it's an outlet of human creativity, something we find fulfilling to both produce and consume. If creativity is delegated to machines, what's left for us humans? At some point, we'll grow tired of Taco Bell and re-runs, and what then?

Making art is something people enjoy, for one thing. Good art also has something of the artist in it, something to it other than "it was made from this prompt".

Art is just combining previously learnt techniques together with a specific subject. Since AI essentially knows all the techniques, it could eventually be better.

Nothing is stopping people from making art for fun.

Art is just combining previously learnt techniques together with a specific subject.

If that's what you look for in art then sure, but I disagree with that definition. A child's drawing of her dad has aspects to it that a picture of that dad taken in a photo booth can never have. A poem about war is much more meaningful when it comes from a refugee. The Wikipedia page for art lists several 'purposes' and most of them are not something AI art can ever fulfil.

You can't say "ever". It could learn from every diary and report about war ever written and produce amazing stuff. It's just a matter of time. It's currently limited quite significantly by computing power.

It could do that, but its writing would be hollow, because those stories are meaningful due to the lived experiences behind them. For example, anyone who's read The Diary of a Young Girl could write something similar in Anne Frank's style, but it wouldn't be nearly as impactful, because learning about an event is very different from living through it.

It's also limited by the trends of human art. The art and text AIs that we have are based on pattern processing. They output what is expected based on what we feed them. They aren't able to come up with entirely new styles or philosophies; they don't even have the cognitive ability to hold a philosophy. An AI describing a tree, or depicting an image of a tree, has no understanding of what a tree is. It isn't aware of the world; it can only recombine human words and images.

A breakthrough needs to happen for them to be capable of anything more, but that's going to be its own can of worms.

It's not the same as any other job. It's putting your face and your words behind something you cannot consent to. If someone spoofed your username and started posting offensive things, I've no doubt you would be upset. And that's just your username. Now add your real-life photo, your face, and your voice.

You would have to be a sociopath not to care if suddenly your friends and family received a video of you performing offensive acts or shilling for a political cause you are vehemently opposed to.

That's literally not how they work. They derive a mathematical function for generating things and apply it. Your analogy doesn't make any sense.

They aren't copying anything, really. No more than an artist's brain changes when looking at other art.

In fact, that's a much better analogy for how they work, since they are modelled on our neurons.


AI will annihilate most data entry workers in the next few years as well.

AI is very stupid and breaks in ways you wouldn't expect.

Some AIs are more intelligent than the average person.

Ask a normal person to do the tasks ChatGPT can and I bet the results would be even worse.

Ask ChatGPT to do things a normal person can, and it also fails. ChatGPT is a tool, and a particularly dangerous one: a Swiss Army chainsaw.

I use it all the time at work.

Getting it to summarize articles is a really useful way to use it.

It's also great at explaining concepts.

It's also great at explaining concepts.

Is it? Or is it just great at making you think that? I've seen many ChatGPT outputs "explaining" something I'm knowledgeable about, and they were deliriously wrong.

I agree. I have very specialized knowledge in certain areas, and when I've tried to use ChatGPT to supplement my work, it often misses key points or gets them completely wrong. If it can't process the information, it will err on the side of creating an answer, whether it is correct or not, and whether it is real or not. The creators call this "hallucination."

Yeah, it is, if you prompt it correctly.

I basically use it instead of reading the docs when learning new programming languages and frameworks.

A coworker tried to use it with a well-established Python library, and it responded with a solution involving a class that does not exist.

LLMs can be useful tools, but be careful about trusting them too much. They are great at what is best described as "bullshitting". It's not even "trust but verify"; it's more "be skeptical of anything it says". I'd encourage you to actually read the docs, especially those for libraries, as they will give you a deeper understanding of what's actually happening and make debugging and innovating easier.

I've had no problem using them. The more specific your request gets, the more likely they are to do that. You just have to learn how to use them.

I use them daily for refactoring and things like that without issue.

That's great, but it works until it doesn't, and you won't know when unless you're already knowledgeable from a real source.

You know it doesn't work when you try it and it fails, and if you tell it, it'll usually correct itself.
