Jensen Huang says kids shouldn't learn to code — they should leave it up to AI.

tomshardware.com

At the recent World Government Summit in Dubai, Nvidia CEO Jensen Huang made a counterintuitive break with tech leader wisdom by saying that programming is no longer a vital skill due to the AI revolution.


Producer of calculators says kids don't need to learn maths, they just need a calculator

I mean, we aren't exactly teaching kids how to hand-calculate trig anymore. Sin, cos, and tan are pretty much exclusively done with a calculator, and you'd be hard pressed to find anyone who graduated in the last 25 years who knows any other way to do it.

I haven't graduated high school yet and even I know how to calculate sin and cos with the Taylor series (Maclaurin expansion). I am still in grade 11, and I assume they would be teaching it next year when I take my calculus class? Do they not teach it anymore?
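
For what it's worth, a minimal sketch of that series approach in Python (just summing the first few Maclaurin terms and checking against the math module; the function names are made up for illustration):

```python
import math

def sin_maclaurin(x, terms=10):
    # sin(x) = x - x^3/3! + x^5/5! - ...
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1) for n in range(terms))

def cos_maclaurin(x, terms=10):
    # cos(x) = 1 - x^2/2! + x^4/4! - ...
    return sum((-1)**n * x**(2*n) / math.factorial(2*n) for n in range(terms))

x = 1.0
print(sin_maclaurin(x), math.sin(x))  # both ≈ 0.8414709848
print(cos_maclaurin(x), math.cos(x))  # both ≈ 0.5403023059
```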

Well, a lot of maths can be done with a calculator. They don't need to learn to actually understand the maths unless they either want to, or they're going into something like engineering.

I disagree. They need to understand math, but they don't need to be able to calculate math problems in their head.

Absolutely. The calculator is a tool to help you solve a problem. If you don’t understand the problem, then at best you can’t confirm if the answer is correct or not, and at worst the entire exercise is completely lost on you.

The same applies to LLMs. Sure you can get them to spit out code, but unless you understand the code it might be tough to verify that it does what you want. Further, if the code needs adapting (as it often does) then you are shit out of luck if you don’t understand it.

Sure you can ask the LLM to make changes, but the moment something goes wrong in the prompt you have an error sitting there polluting all future output.

Indeed. I've been watching a number of evaluations of different LLMs, where people give it a set of problems and then evaluate the results. The number of times I've seen "Well it got that wrong, but if we let it re-evaluate it, it gets it right". If that's the case, the model is useless. You have to know the right answer before you can ask the model for an answer because the answer you'll get can't be trusted.

Might as well flip a coin.

Yeah. I was tasked with evaluating LLMs for software dev at my company last year. Tried a few solutions and tools, and various workflows from just using it as a crutch to basically instructing the LLM to make the application. The former was rarely necessary (but sometimes helpful) and the latter was ridiculously cumbersome.

You need to be specific, and leave no room for interpretation, because the moment you do the latter it'll start making stuff up that doesn't necessarily fit in with the spec, and while you can correct that, that's tedious in and of itself, and once it's already had the idea it'll often have a hard time letting go of it.

I also had several cases where it outright ignored provided context. That was even more frustrating because then it made assumptions that I'd already proven to be false.

The best use cases I got from it were

  • Explaining unclear code
  • Writing clear documentation (it was really good at this)
  • Rubberducking

Essentially, it was a great helper, but a horrendous developer. Felt more like I was tutoring it than anything else.

I haven’t seen anyone mention rubberducking or documentation or understanding code as use cases for AI before, but those are truly useful and meaningful advantages. Thanks for bringing that to my attention :)

There are definitely ways in which LLMs and imaging models are useful. Hell I've been playing around with vocal synthesis for years, SynthV's AI models are amazing, so even for music there's use cases. The problem is big corporations just fucking it up. Rampant theft, no compensation for the original creators, and then they're sitting on the models like dragons. OpenAI needs to rename themselves, preferably years ago, because there's nothing open about them.

The way I see it, the way SynthV (and VOCALOID prior to that) works is great; you hire a vocalist with the express purpose of making a model out of their voice. They know what they're getting into, and are getting compensated for it. Then there are licenses and such on these models. In some cases, like those produced by Eclipsed Sounds, anyone that uses a model to create a song gets decently free rein. In others, like the Bushiroad models, you are fairly restricted in what you can do with them.

Meaning the original artist has a say. It's why some models, like Cangqiong, will never get AI updates; the voice provider's wishes matter.

Using computer generated stuff as a crutch in the creation process is perfectly fine I feel, but outright trying to replace humans with "AI" is a ridiculous notion.

Synths have been used to trigger vocal samples, among other things, for like 40(?) years, and this almost sounds like an evolution of that?

There are a lot of technological innovations in music (wax roll recording, tape recording, DAW recording, tube amps, transistor amps, amp modellers, Mellotron, analog synths, modular synths, digital synths, soft-synths, etc, etc, etc), and I think there’s surely more to come, and awesome new music to be made possible by the technological advances.

I agree that the technology is not the problem, but how it’s used. If, let’s say, giant corporations feed all of human art into their closed, proprietary models only to churn out endless amounts of disposable entertainment, it would be detrimental to the creation of original art and I’d look upon that as a bad thing. But I guess we as a society have decided that we want to empower our corporate overlords at the expense of ourselves, to go far off topic of the original thread :/

Synths have been used to trigger vocal samples, among other things, for like 40(?) years, and this almost sounds like an evolution of that?

Kind of? But I think, particularly with SynthV offering such realistic vocals, it might be useful for producers that can't easily get vocalists, or don't want to/can't sing themselves. You can also obviously use it to create backing vocals and fill things out if you realise that you need more vocals and your vocalist isn't available.

Or, maybe you're like me and just enjoy tinkering with the voice. Here's an example song by someone that's pretty talented at tuning these.

let’s say, giant corporations feed all of human art into their closed, proprietary models only to churn out endless amounts of disposable entertainment, it would be detrimental to the creation of original art and I’d look upon that as a bad thing. But I guess we as a society have decided that we want to empower our corporate overlords at the expense of ourselves, to go far off topic of the original thread :/

This is the road I fear we're heading down and it's so dystopic. 😭

Those vocals are pretty good for being computer generated. It’s no replacement for greats like Bowie, Simone, Jagger, Winehouse, Yorke, etc, etc, but it’s not supposed to be. Sometimes it’ll do the trick, sometimes it’ll be a necessity, it’ll work for some backing vocals, demos, sketches, songwriting experimentation, guide vocals, and so on. I hope we’ll see awesome AI tools being used to make awesome music.

I definitely have that fear myself, but I hope human resilience hangs in there. Besides, I don’t think I’d care if the masses listen to bland shit by 17 songwriters or bland shit by AI ;)

The quality of the vocals is now honestly less dependent on the synthesis engine than on the skill of the original singer, and the intent of the production team. Hayden is a first-party library produced by Dreamtonics, and they tend to be very focused on having their voices do a specific thing. Ninezero, for example, is all-in on that gravelly rock type voice and won't do soft ballads easily or with any particular quality.

This was true even for VOCALOID; most of the VOCALOID libraries are absolute bunk. YAMAHA's (the developer of VOCALOID) first signature English library, CYBER DIVA, sounds so bad. The (in my opinion) best library for VOCALOID happens to be a Hello Kitty collaboration. For some reason they chose a traditional Japanese singer with an incredible vocal range to be the voice provider rather than a voice actor, and the quality of that voice is reflected in the voice library.

Eclipsed Sounds has three libraries now, and they've focused more on capturing the qualities of the original singer. Their first library, SOLARIA, is a soprano whose voice is provided by Emma Rowley. Their second library, ASTERIAN, is a bass, voiced by Eric Hollaway (known as 'thatbassvoice'). Their third, SAROS, is a tenor whose provider I don't think has come forth yet. They are much more expressive than most libraries produced by Dreamtonics. SAROS' second vocal demo is a great example.

One of the neat things about them being synthesized is that these libraries can sing in English, Japanese, Mandarin, Cantonese, and Spanish (and with some fiddling, likely in other languages too - I managed to get SAROS to perform in Norwegian thanks to the Spanish update). Where SynthV really falls short is the occasional glitches when you push the vocals, as well as the lack of vocal ornamentation; there's no good way of performing, say, growls at the moment.


I think ultimately human creativity will persevere. We'll likely see a lot of AI generated garbage as people are getting used to the tools and finding ways of working with them in the next couple of years. After that, I don't know. Even then there'll be people that prefer to just do everything by themselves.

We manage to make garbage even without AI. Disney's "Wish" was so bad people think AI was used, but I think it's more a matter of "direction by corporate." Corporate decided to seagull the entire project and the original creative vision was basically destroyed by corporate interests. You see it all the time in the games industry as well; creativity is set aside for proven established ideas, and market appeal. Risks are not allowed.

As an autist, I can't agree more; understanding something is a requirement for me to do well.

So much of my struggles in school were based on using formulas without knowing why or what's behind them, not understanding the broader practical implications and intended goals of assignments; I was just told to do them, the way they were asked, with the formulas I was given (or was forced to remember). Lost motivation, my will to live even, spiraled and crashed hard in the end.

I got better; now I am sitting here scribbling all kinds of math in my little black book as a way to relax. I don't watch “tv” but I won't miss a Kurzgesagt or a Veritasium.

I inherently love science, in major contrast to my later high school grades.

Absolutely. If one just “does as told” without understanding, there is no way of knowing if one is lost or not.

I’ve had similar experiences in school myself, and they truly are detrimental to both learning and the joy of learning.

I’m glad you are doing better, and thanks for sharing your story :)

This is objectively stupid. There are tonnes of things you learn in maths that are useful for everyday life even if you don’t do the actual calculations by hand.

In many engineering professions you really need to understand the underlying math to have a chance in hell to interpret the results correctly. Just because you get a result doesn't mean you get an answer.

Scientific calculators can do a ton of stuff, but they're all useless if you don't know anything about math. If you don't know anything about the subject, you can't formulate the right questions.

You need to learn what addition, subtraction, multiplication, and division are, and how they work, to do anything meaningful with them on a calculator...

They aren't going to catch the typo or order of operations error they made on their calculator if they don't understand the math
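
A trivial, made-up example of the kind of slip that sails right past you if you can't sanity-check the result (Celsius to Fahrenheit, F = C × 9/5 + 32):

```python
c = 10
print(c * 9 / 5 + 32)    # 50.0 — grouped correctly
print(c * 9 / (5 + 32))  # ≈ 2.43 — one misplaced parenthesis, and without
                         # knowing the math you have no reason to doubt it
```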


I thought coding skill is mostly about logical thinking, problem solving, and idea implementation instead of merely writing code?

Even then, who's gonna code to improve the AI in a meaningful way if everyone stops learning to code? What if the AI writes its own update badly and no one corrects it, and then the badly written AI writes an even worse version of itself? I think in biology we call that cancer.

Coding, like writing scientific papers, or novels, is only about randomly generating strings, silly human.

Coding, like writing scientific papers, or novels, is only about randomly generating strings

See also, litigation, medical diagnoses, creating art that evokes an emotional reaction in its audience, etc.

It turns out that virtually all human advancement and achievement comes down to simply figuring out what the next most likely token is based on what's already been written.

(/j in case it's not obvious)

So the Nvidia drivers will be 100% written by AI then?

Well, he's put the writing on the wall for his own developers. So, even if it isn't AI that writes them, the quality may well go down when those that can easily do so, leave for pastures new :P

Does that mean they can't be copyrighted anymore? I'll take it.

Even if AI were able to be trusted, you still need to know the material to know what you're even asking the AI for.

It's a ruler to guide the pencil, not the pencil drawing a straight line by itself; you still have to know how to draw to be able to use it in a way that fits what you want to do.

I asked ChatGPT to show me how to do some Godot 4.2 C# stuff the other day as I transition from Unity, and it was 70% incorrect.

Good times. (It was probably right for an older version, but I told it the actual version)

Yea, and as we all know, AI will never progress further than its current state. /s


And who will code the code for ML/AI models? I mean, for junior developers this is going to be a better way to learn than "did you Google it?", and maybe get precise answers to your questions. But it sounds more to me like "maybe you should buy more of our silicon".

Sounds a bit like the "640 KB is more than enough" one-liner. But let's see what it will bring.

But it sounds more to me like “maybe you should buy more of our silicon”.

gotta drum up that infinite demand to meet and grow their insane valuation bubble when they already can't even produce enough to fill all orders.

And I say I don't even know this person and he should just stfu and leave those kids alone.

He's the CEO of Nvidia, one of the largest GPU manufacturers in the world and also a trillion-dollar company.

Good for him. I like Nvidia and use one, but I have the rest of his company to thank for that.

I think for me it was a combination of:

< Name of person I don't know > says < big unhinged sweeping generalization > for < reason that makes no sense to anyone in the field >

My first instinct is not to click stuff like this altogether. I also think that anyone trying to preach what kids should or shouldn't do is already in the wrong automatically by assuming they have any say in this without a degree in pedagogy.

He’s also obviously biased since the more people use LLMs and the like the more money he gets.

It’s a bit like “lions think gazelles should be kept in their enclosure”.

While large language models are impressive, they seem to still lack the ability to actually reason, which is quite important for a programmer. Another thing they lack is human-like intuition that allows us to seek solutions to problems with limited knowledge or without any existing solutions.

With the boom bringing a lot more money and attention to AI, the reasoning abilities will probably improve, but until it's good enough we'll need people who can actually understand code. Once it's good enough, then we don't really need people like Jensen Huang, since robots can do whatever he does, but better.

GPT-4 (the preview) still produces code where it adds variables that it never uses anywhere... and when I asked one time about one variable, it was like, "Oh, you're right, let me re-write the code to put variable X into use", then just added it in a nonsensical location to serve a nonsensical purpose.
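
For illustration, a made-up snippet showing the pattern (not the actual GPT-4 output):

```python
def average(values):
    total = sum(values)
    count = len(values)
    result = 0  # dead variable: declared and then never used anywhere
    return total / count
```

And when you point at `result`, the "fix" tends to be wedging it into some pointless spot rather than just removing it.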

Yeah, tell kids not to learn how to code so that way they can't understand what your products actually do so you can claim plausible deniability to them that they aren't sucking up all your data like a hoover.

Kids should focus on the one thing AI can't do: Stand-up comedy.

Have you looked carefully at AI-produced code?

Bullshit. Even if AI were to fully replace software developers (which I highly doubt), programming is still a very useful skill to learn, just for the problem solving skills.

Kids shouldn't learn to read. They should stick to audio books.

I don't know if that's the best example since with an audio book you're still getting the same reading material.

It's more like: kids shouldn't learn how to sing, they should just have AI sing with their voice for them. They'll never know the ins and outs of it, but they'll know what they want it to be like and describe it to the AI.

Mainly because you don't need to learn to read.

Leave coding to AI? What does this look like? How does this concept work? Any examples?

Or does he mean just let AI handle everything and not give the AI any input?? I am no programmer, but this just doesn't sound right to me. As a regular user.

It makes no sense. AI tools will obviously have an impact on the development profession, but suggesting that no one should learn to code is like saying no one should learn to drive because one day cars will drive themselves. It’s utter self-serving nonsense.

Maybe Nvidia knows something we don't. Are there any examples of using AI to help code? What is Nvidia specifically referencing that makes them say this? I ask these questions because obviously something they are seeing is making them say this. Yet, all I see from AI in general are things like:

Image generation, story generation, some AI can even roleplay a specific story with you that you inputted, but I just can't see AI doing actual coding, without oversimplifying it and making it boring and less different from the next 'creation.'

LLM tools can already write basic code and will likely improve a lot, but there are more reasons to learn to code than to actually do coding yourself. Even just to be able to understand and verify the shit the AI tools spit out before pushing it live.

Nvidia knows that the more people who use AI tools, the more their hardware sells. They benefit directly from people not being able to code themselves or relying more on AI tools.

They want their magic line to keep going up. That’s all.

How does the use of AI tools specifically sell Nvidia hardware? It could help sell other hardware as well. They don't specify any particular AI software that might be exclusive to Nvidia hardware or anything like that. That, at the surface, doesn't make much sense to me.

LLMs do most of their processing on GPUs using platforms like CUDA, which is an Nvidia product. Nvidia stands to make a lot of money off of CUDA and ML hardware.
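
As a rough sketch (assuming PyTorch is installed), this is the kind of check that shows why those workloads land on Nvidia silicon:

```python
import torch

# LLM inference/training is mostly big tensor math; PyTorch dispatches it to
# CUDA (and Nvidia's cuBLAS/cuDNN kernels) whenever an Nvidia GPU is present.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Running on:", device)
if device == "cuda":
    print("GPU:", torch.cuda.get_device_name(0))

# A toy transformer-sized matrix multiply: fast on CUDA, painfully slow on CPU.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
print((a @ b).shape)
```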

Nvidia makes some of the best hardware for training AI models. Increased investment in AI will inevitably increase demand for Nvidia hardware. It may boost other hardware makers too, but Nvidia is getting the biggest boost by far.

Maybe I’m being dumb or missing something but this feels incredibly obvious to me.

Yes, yes, keep my labour in high demand and my salary high

I know some Gen Z recent grads who use chatgpt to write their code.

back in my day, we had to write our code ourselves....

I use ChatGPT for coding (millennial). You still need to know how to code though, because 50% of the time it doesn't work properly. You need to explain the nature of your variables and the overall process you want to achieve. But I still save a good amount of time, because now I don't need to remember the specific syntax for a particular function, and it has saved me reading documentation because it can tell me how some functions work from context.

Not learning how to code because of AI is like not learning math because there are calculators: sure, you don't need to know the multiplication tables by heart, but you need to know what multiplication is and how it's used to solve real-world problems.

I use ChatGPT for coding (I can code myself but it helps with a lot of stuff), and if I weren't able to code I would wonder why nothing works. But because I know how to code, I know that ChatGPT is often just writing horrible code which often does something completely different from what was asked. So I often think "screw this, I'll do it myself" after countless tries to get ChatGPT to fix it.

"I have a foreboding of an America in my children's or grandchildren's time...when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness..."

Carl Sagan, Astrologist/Horoscopist from ancient times.

Disagree. They need to learn to code. And be experts with AI tools.

Just like kids with a calculator.

Isn't this basically "CEO of AI hardware company says that more people should use AI"? Not really news, since you wouldn't really expect him to say otherwise.

If Nvidia driver quality goes down in the next couple years, we know why.

Dammit, he actually has an engineering degree, I can't make snide remarks about business majors

Or can I?

You still need the fundamentals. You still need to understand problem solving and debugging.

I think my take is, he might be right. That is, by the time kids become adults we may have AGI, and we'll either be enslaved or have much less work to do (for better or worse).

But AI as it is now, relies on input from humans. When left to take their own output as input, they go full Alabama (sorry Alabamites) with their output pretty quickly. Currently, they work as a tool in tandem with a human that knows what they're doing. If we don't make a leap from this current iteration of AI, then he'll be very very wrong.

If you think AGI is anywhere close to what we have now, you haven't been using any critical thinking skills when interacting with language models.

I don't. We're talking about the next generation of people here. Do pay attention at the back.

Okay but what I'm saying is that AGI isn't the logical progression of anything we have currently. So there's no reason to assume it will be here in one generation.

I'd tend to agree. I said we may have that, and then he might have a point. But, if we don't, he'll be wrong because current LLMs aren't going to (I think at least) get past the limitations and cannot create anything close to original content if left to feed on their own output.

I don't think it's easy to say what will be the situation in 15-20 years. The current generation of AI is moving ridiculously fast. Can we sidestep to AGI? I don't know the answer, probably people doing more work in this area have a better idea. I just know on this subject it's best not to write anything off.

The current generation of AI is moving ridiculously fast.

You're missing my point. My point is that the current "AI" has nothing to do with AGI. It's an extension of mathematical and computer science theory that has existed for decades. There is no logical link between the machine learning models of today and true AGI. One has nothing to do with the other. To call it AI at all is actually quite misleading.

Why would we plan for something if we have no idea what the time horizon is? It's like saying "we may have a Mars colony in the next generation, so we don't need to teach kids geography"

Why would we plan for something if we have no idea what the time horizon is? It’s like saying “we may have a Mars colony in the next generation, so we don’t need to teach kids geography”

Well, I think this is the point being made quite a bit in this thread. It's general business level hyperbole, really. Just to get a headline and attention (and it seems to have worked). No-one really knows at which point all of our jobs will be taken over.

My point is that in general, the current AI models and diffusion techniques are moving forward at quite the rate. But, I did specify that AGI would be a sidestep out of the current rail. I think that there's now weight (and money) behind AI and pushing forward AGI research. Things moving faster in one lane right now can push investment into other lanes and areas of research. AI is the buzzword every company wants a piece of.

I'm not as confident as Mr Nvidia is, but with this kind of money behind it, AGI does have a shot of happening.

In terms of advice regarding training for software development, though: what I think for sure is that the current LLMs and offshoots of the techniques will develop, and better frameworks for businesses to train them on their own material will become commonplace. I think one of the biggest consultancy growth areas will be in producing private models for software (and other) companies.

The net effect of that is going to mean they will just want fewer (better) engineers to make use of the AI to produce more, with fewer people. So, even without AGI, the demand for software developers and engineers is going to be lower, I think. So, is it as favourable an industry to train for now as it was for previous generations? Quite possibly it's not.