It will design the machines to build the autonomous robots that mine the cobalt... doing the jobs of several companies at once, either freeing up several people to pursue leisure or the arts, or leaving them to starve to death after being abandoned by society.
Have you seen the real fucking world?
It’s gonna make the rich richer and the poor poorer. At least until the gilded age passes.
I agree and I gave that option as the last one in the list.
AI absolutely will not design machines.
It may be used, within strict parameters, to speed up the theoretical testing of bearings, hinges, alloys, or the like - predicting which ones would perform best under stress testing, prior to actual testing, to eliminate the low-hanging fruit - but it will absolutely not generate a new idea for a machine, because it can't generate new ideas.
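To be concrete about the kind of screening I mean, here's a toy sketch. Everything in it (the feature names, the numbers, the random-forest surrogate) is made up for illustration; the point is just that a model trained on past stress-test results ranks untested candidates so humans only physically test a shortlist.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Past physical stress tests: candidate features -> measured cycles to failure.
# Synthetic numbers, purely illustrative.
rng = np.random.default_rng(0)
X_tested = rng.uniform(0, 1, size=(200, 4))   # e.g. alloy fraction, hardness, thickness, ...
y_tested = X_tested @ np.array([3.0, 1.5, -2.0, 0.5]) + rng.normal(0, 0.1, 200)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_tested, y_tested)

# Rank untested candidates by predicted performance; only the top handful
# go on to real stress testing - the "eliminate low-hanging fruit" step.
X_candidates = rng.uniform(0, 1, size=(5000, 4))
shortlist = X_candidates[np.argsort(surrogate.predict(X_candidates))[::-1][:20]]
```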
The Model T will absolutely not replace horse-drawn carts -- maybe for some small group of people or a family on vacation, but we've been using carts for war logistics for 1000s of years. You think some shaped metal put together is going to replace 1000s of men and horses? lol yeah right
apples and oranges.
You're comparing two products with the same value prop: transporting people and goods more effectively than carrying/walking.
In terms of mining, a drilling machine is more effective than a pickaxe. But we're comparing current drilling machines to potential drilling machines, so the actual comparison would be:
is an AI-designed drilling machine likely to be more productive (for any given definition of productivity) than a human-designed one?
Well, we know from experience that when (loosely defined) "AI" is used in, e.g., pharma research, it reaps some benefits - but it does not wholesale replace the drug approval process, and it's still a tool used by - as I originally said - human beings who impose strict parameters on both input and output as part of a larger product and method.
Back to your example: could a series of algorithmic steps - without any human intervention - provide a better car than any modern car designers? As it stands, no, nor is it on the horizon. Can it be used to spin through 4 million slight variations in hood ornaments and return the top 250 in terms of wind resistance? Maybe, and only if a human operator sets up the experiment correctly.
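That kind of sweep is easy to sketch. Assume a human has already built and validated drag_score() - that's the hard part, and everything below is made up for illustration - and the machine just grinds through the variants and keeps the best 250.

```python
import heapq, random

# Stand-in for whatever validated CFD / wind-tunnel surrogate the operator set up.
def drag_score(variant):
    height, width, curvature = variant
    return 0.3 * height + 0.5 * width - 0.2 * curvature

# 4 million slight variations of a hood ornament, generated lazily.
variants = ((random.uniform(2, 10), random.uniform(2, 10), random.uniform(0, 1))
            for _ in range(4_000_000))

# Keep the 250 variants with the lowest simulated drag without sorting all 4M.
top_250 = heapq.nsmallest(250, variants, key=drag_score)
```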
No, the thing I'm comparing is our inability to discern where a new technology will lead and our history of smirking at things like books, cars, the internet and email, AI, etc.
The first steam engines pulling coal out of the ground were so inefficient they didn't make sense for any use case other than extracting the very fuel that powered them. You could definitely smirk and laugh about engines vs. 10k men and be totally right in that moment, and people were.
The more history you learn, though, the more you realize this is not only hubristic, it's also futile: how we feel about the proliferation of a technology has never had an impact on that technology's proliferation.
And, to be clear, I'm not saying no humans will work or have anything to do -- I'm saying significantly MORE humans will have nothing to do. Sure, you still need all kinds of people even if the robots mostly design and build themselves, but it would be an order of magnitude fewer people than would otherwise be needed.
Maybe I’m pessimistic but all I see is every call center representative disappearing and that’ll be it
I agree that AI is just a tool, and it excels in areas where an algorithmic approach can yield good results. A human still has to give it the goal and the parameters.
What's fascinating about AI, though, is how far we can push the algorithmic approach in the real world. Fighter pilots will say that a machine can never replace a highly-trained human pilot, and it is true that humans do some things better right now. However, AI opens up new tactics. For example, it is virtually certain that AI-controlled drone swarms will become a favored tactic in many circumstances where we currently use human pilots. We still need a human in the loop to set the goal and the parameters. However, even much of that may become automated and abstracted as humans come to rely on AI for target search and acquisition. The pace of battle will also accelerate and the electronic warfare environment will become more saturated, meaning that we will probably also have to turn over a significant amount of decision-making to semi-autonomous AI that humans do not directly control at all times.
In other words, I think that the line between dumb tool and autonomous machine is very blurry, but the trend is toward more autonomous AI combined with robotics. In the car design example you give, I think that eventually AI will be able to design a better car on its own using an algorithmic approach. Once it can test 4 million hood ornament variations, it can also model body aerodynamics, fuel efficiency, and any other trait that we tell it is desirable. A sufficiently powerful AI will be able to take those initial parameters and automate the process of optimizing them until it eventually spits out an objectively better design. Yes, a human is in the loop initially to design the experiment and provide parameters, but AI uses the output of each experiment to train itself and automate the design of the next experiment, and the next, ad infinitum. Right now we are in the very early stages of AI, and each AI experiment is discrete. We still have to check its output to make sure it is sensible and combine it with other output or tools to yield usable results. We are the mind guiding our discrete AI tools. But over a few more decades, a slow transition to more autonomy is inevitable.
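As a toy sketch of that closed loop (objective() here is invented for illustration - in reality a human would have to define and validate it up front): each round of candidate designs is scored, the best ones seed the next round, and no human sits in the inner loop.

```python
import random

# Hypothetical single score collapsing the traits "we tell it are desirable"
# (drag, weight, ...) into one number to maximize.
def objective(design):
    drag, weight = design
    return -(0.6 * drag + 0.4 * weight)

# Simple evolutionary loop: each generation's results automatically set up the next.
population = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(50)]
for generation in range(100):
    parents = sorted(population, key=objective, reverse=True)[:10]
    population = [(max(0.0, d + random.gauss(0, 0.05)), max(0.0, w + random.gauss(0, 0.05)))
                  for d, w in parents for _ in range(5)]
best_design = max(population, key=objective)
```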
A few decades ago, if you had asked which tasks an AI would NOT be able to perform well in the future, the answers almost certainly would have been human creative endeavors like writing, painting, and music. And yet, those are the very areas where AI is making incredible progress. Already, AI can draw better, write better, and compose better music than the vast, vast majority of people, and we are just at the beginning of this revolution.
It can solve existing problems in new ways, which might be handy.
can
might
sure. But, like I said, those are subject to a lot of caveats - that humans have to set the experiments up to ask the right questions to get those answers.
That's how it currently is, but I'd be astounded if it didn't progress quickly from now.
OpenAI themselves have made it very clear that scaling up their models has diminishing returns and that they're incapable of moving forward without entirely new models being invented by humans. A short while ago they proclaimed that they could possibly make an AGI if they got several trillions of USD in investment.
5 years ago I don't think most people thought ChatGPT was possible, or StableDiffusion/MidJourney/etc.
We're in an era of insane technological advancement, and I don't think it'll slow down.
Okay, but the people who made the advancements are telling you it has already slowed down. Why don't you understand that? A flawed chatbot and some art-theft machines that can't draw hands aren't exactly world-changing, either, tbh.
There are other people in the world. Some of them are inventing completely new ways of doing things, and one of those ways could lead to a major breakthrough. I'm not saying a GPT LLM is going to solve the problem, I'm saying AI will.
This is such a rich-country-centric view that I can't stand. LLMs have already given the world maybe its greatest gift ever -- access to a teacher.
Think of the 800 million poor children in the world and their access to a Khan Academy-level teacher on any subject imaginable, with a cellphone/computer as all they need. How could that not have value, and is pearl-clutching about drawing skills becoming devalued really all you can think about?
Anything you learn from an LLM has a margin of error that makes it dangerous and harmful. It hallucinates documentation and fake facts like an asylum inmate. And it's so expensive compared to just having real teachers that it's all pointless. We've got humans, we don't need more humans, adding labor doesn't solve the problem with education.
bro I was taught from a textbook in the US in the 00s that the Statue of Liberty was painted green.
No math teacher I ever had actually knew the level of math they were teaching.
Humans hallucinate all the time. Almost 1 billion children don't even have access to a human teacher - hence the boon to humanity.
Those textbooks and the people who regurgitate their contents are the training data for the LLM. Any statement you make about human incompetence is multiplied by an LLM. If they don't have access to a human teacher then they probably don't have PCs and AI subscriptions, either.
yeah, but there's that stats thing - as N increases, the alpha/beta error (false positives/false negatives) shrinks toward zero
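(The relation being waved at: the error of an average of N independent noisy estimates shrinks roughly like 1/sqrt(N). A toy check, numbers made up:)

```python
import random, statistics

true_value = 10.0
for n in (1, 10, 100, 10_000):
    # average n noisy estimates, repeat 200 times, report the typical error
    errors = [abs(statistics.mean(random.gauss(true_value, 2.0) for _ in range(n)) - true_value)
              for _ in range(200)]
    print(n, round(statistics.mean(errors), 3))
```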
I would be extremely surprised if, before 2100, we see AI that has no human operator and no data-scientist team, not even at a 3rd-party distributor - and where those claims are neither a lie nor a weaselly marketing stunt ("technically the operators are contractors and not employed by the company," etc.).
We invented the printing press 584 years ago, and it still requires a team of human operators.
A printing press is not a technology with intelligence. It's like saying we still have to manually operate knives... of course we do.
The comment I originally replied to claimed AI will design the autonomous machines.
It will not. It will facilitate some of the research done by humans to aid in the design of deliberately human-operated machinery.
To my knowledge, the only autonomous machine that exists is a Roomba, which moves blindly around until it physically strikes an object, rotates by a random amount, and continues in a new direction until it hits something else.
Even then, it is controlled with an app and, on the more expensive models, some boundary setting.
It is extremely generous to call that "autonomy."
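For what it's worth, the whole behavior fits in a few lines - here's a toy simulation of that bump-and-turn loop (room size and step length invented; this is not any vendor's actual firmware):

```python
import math, random

W, H = 10.0, 8.0                      # room dimensions in meters (made up)
x, y = 5.0, 4.0                       # robot position
heading = random.uniform(0, 2 * math.pi)
step = 0.05                           # meters per control tick

for tick in range(10_000):
    nx, ny = x + step * math.cos(heading), y + step * math.sin(heading)
    if 0 <= nx <= W and 0 <= ny <= H:
        x, y = nx, ny                 # clear path: keep driving straight
    else:
        # "bump": rotate by a random amount and carry on in the new direction
        heading = (heading + random.uniform(math.pi / 2, 3 * math.pi / 2)) % (2 * math.pi)
```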
I was in a self-driving taxi yesterday. It didn't need to bump into things to figure out where it was.
Fair, I thought they all got recalled, but I guess they're back. I'd also counter that Waymo is extremely limited in where it can operate - roughly 10 miles max - which, relevant to my original point, was entirely hand-mapped and calibrated by human operators, and the rides are monitored and directed by a control center responding in real time to the car's feedback.
Like my printing press example - it still takes a large human team to operate the "self"-driving car.
either freeing up several people to pursue leisure or the arts, or leaving them to starve to death after being abandoned by society.
You know EXACTLY which one it's gonna be.
It can't design.
Define "design" -- I had ChatGPT dream up new musical instruments and then we implemented one. It wrote all the code and architecture, though I did have to prod/help it along in places.
https://pwillia7.github.io/echosculpt3/
you can read more here: https://reticulated.net/dailyai/daily-experiments-gpt4-bing-ai/
Thx, will read.
Neither can the majority of engineers I have met, but that hasn't stopped them. You really don't need any design ability if your whole day is spent in endless meetings terrorizing OEMs.
It isn't the intelligence of the machine designer that is the issue, it is the middlemen and the end user.
Continuously having to downgrade machines. Wouldn't want some sales rep seeing something new.
Hahaha, current ML is basically good guessing, and that doesn't really transfer to building machines that actually have to obey the laws of physics.
Is it good guessing that, when you step out of your bed without looking, you know you won't fall to your death?
Big fail to forget the /s here...
Why? This is a very real possibility.
Work a blue collar job your whole life and tell me it’s possible. Machines suck ass. They either need constant supervision, repairs all the time, or straight up don’t function properly. Tech bros always forget about the people who actually keep the world chugging.
They suck because your employer wouldn't pay me more for a better machine. Chemical is where it's at: outside of power plants and some of the bigger pharma operations, the chemical operator is a dead profession. Entire plants are automated, with the only people doing any work being the ones doing repairs or sales.
Why? This is a very real possibility.
AI cannot come even CLOSE to reasoning.
And a submarine can't even swim.