This is dumb. Literally nothing has changed. Anyone who knows anything about LLMs knows that they've struggled with math more than almost any other discipline. It sounds counterintuitive for a computer to be shit at math, but that's because an LLM's "intelligence" is mimicry. They do not calculate math like a calculator. They generate all responses from a probability distribution constructed from billions of human text inputs. They are as smart, and as fallible, as Wikipedia + Reddit + Twitter, etc., etc. They are as fallible as the dataset that built them.
Think about how ice cream sales correlate with drownings. There is no direct causality, but that won't stop an LLM from seeing the pattern or implying causality, because it has no real intelligence and doesn't know any better.
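To make that concrete, here's a toy simulation (made-up numbers, pure standard library) where hot weather drives both ice cream sales and drownings, and the two end up strongly correlated with no causal link between them:

import random

# Toy illustration: temperature is the hidden common cause. Ice cream sales
# and drownings both follow it, so they correlate strongly with each other
# even though neither causes the other.
random.seed(0)
temps = [random.uniform(10, 35) for _ in range(1000)]          # daily temp (C)
ice_cream = [20 * t + random.gauss(0, 50) for t in temps]      # sales track heat
drownings = [0.3 * t + random.gauss(0, 2) for t in temps]      # swimming tracks heat

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(pearson(ice_cream, drownings))  # strongly positive (~0.7), zero causation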
"Prompt engineering" is about understanding an LLM's strengths and weaknesses, and learning how to work with them to build out a context and efficiently achieve an end result, whatever that desired result may be. It's not dead, and it's not going anywhere as long as LLM's exist.
I really wish all of these companies racing to replace their existing software features and employees with LLMs understood this. So many applications are dependent on a response being 100% accurate for a very specific request as opposed to being 80% accurate for a wide variety of requests. "Based on training data, here's what a response to your input might look like" is pretty good for conversational language and image generation, but it sucks for anything requiring computation or expertise. Worst of all, it's so confidently wrong about things I might as well be back on Reddit.
I really wish all of these companies racing to replace their existing software features and employees with LLMs understood this.
They totally understand it. And OpenAI has solved it. For example, while researching The Ultimate Answer to Life, the Universe and Everything, I asked it to calculate 6 by 9 in base 13 and got the correct answer: 42.
ChatGPT didn't use the LLM to calculate that. It only used the LLM to understand an obscure and deliberately confusing chapter of the Hitchhiker's Guide book, and to write and execute this Python script.
# To calculate six by nine in base 13, we multiply the numbers in our
# standard decimal system and then convert the result to base 13.

# Calculate 6 * 9 in decimal
result_decimal = 6 * 9

# Convert the result to base 13.
# The easiest approach is to use divmod() repeatedly: the remainder is the
# next base-13 digit, and the quotient carries into the next iteration,
# until the quotient is 0.
def decimal_to_base_n(num, base):
    if num == 0:
        return "0"
    digits = []
    while num:
        num, remainder = divmod(num, base)
        digits.append(str(remainder))  # note: digit values >= 10 would need letters
    return ''.join(digits[::-1])

# Convert the decimal result to base 13
result_base_13 = decimal_to_base_n(result_decimal, 13)
result_base_13
It's not dead, and it's not going anywhere as long as LLM's exist.
Prompt engineering is about expressing your intent in a way that causes an LLM to arrive at the desired result (which right now sometimes requires weird phrases, etc.).
It will go away as soon as LLMs get good at inferring intent. It might not be a single model, it may require some extra steps, etc., but there is nothing uniquely "human" about writing prompts.
Future systems could for example start asking questions more often, to clarify your intent better, and then use that as an input to the next stage of tweaking the prompt.
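A rough sketch of what that two-stage loop might look like; ask_llm() here is a hypothetical stand-in for whatever model API you'd actually call, purely illustrative:

def ask_llm(prompt):
    # Hypothetical stand-in for a real model call; returns canned text here.
    return "stub response to: " + prompt

def answer_with_clarification(user_request):
    # Stage 1: let the model decide whether it needs more information.
    question = ask_llm(
        "If this request is ambiguous, reply with ONE clarifying question; "
        "otherwise reply 'none'. Request: " + user_request)
    if question.strip().lower() != "none":
        detail = input(question + " ")  # ask the human for the missing intent
        user_request += " (clarification: " + detail + ")"
    # Stage 2: feed the enriched request into the actual prompt.
    return ask_llm("Fulfill this request precisely: " + user_request)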
Future systems could for example start asking questions more often
Current systems already do that. But they're expensive, and it might be cheaper to have a human do it. Prompt engineering is very much a thing if you're working with high-performance, low-memory language models.
We're a long way from having smartphones with a couple of terabytes of RAM and a few thousand GPU cores... but our phones can run basic models, and they do. Some phones use a basic LLM for keyboard autocorrect, for example.
You know, I had gotten frustrated using it because it wouldn't understand me, but now I'll use that approach to find out how it understands me.
Re math, enter function-calling models:
if input.type == int: use calculator
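In other words, route anything that parses as arithmetic to real computation and only fall back to the model for everything else. A minimal sketch of that routing idea; query_llm() is a hypothetical stub, not a real API:

import ast
import operator as op

# Map AST operator nodes to real arithmetic so we can evaluate safely
# (no eval() on raw strings).
SAFE_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
            ast.Div: op.truediv, ast.Pow: op.pow}

def calc(expr):
    """Evaluate a plain arithmetic expression, or raise ValueError."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in SAFE_OPS:
            return SAFE_OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("not plain arithmetic")
    return ev(ast.parse(expr, mode="eval"))

def query_llm(text):
    return "stub LLM response to: " + text  # hypothetical model call

def answer(user_input):
    # Route: arithmetic goes to the calculator, everything else to the LLM.
    try:
        return str(calc(user_input))
    except (ValueError, SyntaxError):
        return query_llm(user_input)

print(answer("6 * 9"))  # 54, computed rather than predicted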
I mean, it's not like humans are good at math either. We are good at making abstractions and following linear rules, but we are slow and fallible. Digital computation is just about the best possible method for doing math. LLMs are decent at abstraction and general problem solving, though. They are not as creative as people, but they are still pretty good! It's a step in the right direction for true AGI. Honestly, even when we have AGI, I doubt it will ever beat raw CPUs in computation speed.
Machine learning could find those strengths and weaknesses and learn to work around them likely better than a human could. It's just trial and error. There's nothing about the human brain that makes it better suited to understanding the inner logic of an LLM.
For that you need a program to judge the quality of output given some input. If we had that, LLMs could just improve themselves directly, bypassing any need for prompt engineering in the first place.
The reason prompt engineering is a thing is that people know what output is expected and desired and what isn't, and can adapt their interactions with the tool accordingly, a trait uniquely associated with complex adaptive systems.
can adapt their interactions with the tool accordingly
If we could have programmed around this before, then people who can and can't Google wouldn't be a thing: Google would just know what results to bring up, without the search-curse-refine-repeat cycle. Prompt engineering seems like an extension of Google search-fu.
Yep, exactly, and it's been studied and put into practice effectively already.
If we had that, LLMs could just improve themselves directly, bypassing any need for prompt engineering in the first place.
Prompt tuning is not the only way to fine-tune the output of an LLM, and since the goal for most is going to be to make them usable by anyone, it's going to be the least desirable route.
I know LLMs are used to grade LLMs. That isn't solving the problem, it's just better than nothing because there are no alternatives. There aren't enough humans willing to endlessly sit and grade LLM responses.
Yes there are. In addition to the thumbs up/down buttons that most people don't use, you can also score based on metrics like "did the person try to rephrase the same question again?" (an indication of a bad response), etc., from data gathered during actual use (which ChatGPT does use for training).
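The rephrase signal in particular is cheap to approximate. A toy sketch using stdlib string similarity (the threshold is arbitrary; a real pipeline would use something smarter than character overlap):

from difflib import SequenceMatcher

def rephrase_flags(user_messages, threshold=0.5):
    """Flag consecutive user messages that look like rephrasings of each
    other -- a cheap implicit signal that the previous answer was bad."""
    flags = []
    for prev, curr in zip(user_messages, user_messages[1:]):
        similarity = SequenceMatcher(None, prev.lower(), curr.lower()).ratio()
        flags.append(similarity >= threshold)
    return flags

chat = [
    "how do I convert 54 to base 13?",
    "convert the number 54 into base 13",  # looks like a retry
    "thanks, that worked",
]
print(rephrase_flags(chat))  # [True, False]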
Firstly, I'm willing to bet only a minority of users regularly use those buttons. Secondly, you're talking about the most popular LLM(s) out there. What about all the other LLMs almost nobody is using but are still being developed/researched? Where do they find humans willing to sit and rate all the garbage their LLM puts out?
Congrats. You don't understand the difference between a statistical model and a human.
I expected more from a gaylord fartmaster. 2/10.
In what way?
Why couldn't even a basic reinforcement learning model be used to brute force "figure out what input gives desired X output"?
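In its simplest form that's just black-box search over inputs. A toy sketch; run_llm() and score() are hypothetical stubs, and having a real score() is exactly the catch raised below:

import random

PREFIXES = ["", "Think step by step. ", "You are an expert. ",
            "Answer concisely. "]

def run_llm(prompt):
    return "stub output for: " + prompt  # hypothetical model call

def score(output):
    return random.random()  # stub judge; in reality this is the hard part

def best_prompt(task, trials=20):
    # Brute force: try prompt variants, keep whichever scores highest.
    best, best_score = task, float("-inf")
    for _ in range(trials):
        candidate = random.choice(PREFIXES) + task
        current = score(run_llm(candidate))
        if current > best_score:
            best, best_score = candidate, current
    return best

print(best_prompt("Convert 54 to base 13."))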
Because the training data is man-made, so it will never be 100% accurate, and because critical thought is required to set the desired output and to understand whether the output makes sense?
Statistical models find patterns in ones and zeros. They don't apply critical thought.
Actually, most (I think all, but I'm not 99% positive) machine learning models are incapable of doing straight arithmetic. Due to the way they are built, ML models, including deep learning models, can only learn relationships over a limited input space.
This is most apparent when you test LLMs on different arithmetic operations:
For addition, it does okay up until you get to millions or billions.
Multiplication, I think, breaks at the 100/1000 level.
Exponents break almost immediately.
Give it decimal values and it also breaks relatively quickly for any operation.
This has to do with the fact that LLMs are effectively multiple layers of linear functions with simple nonlinearities in between, so higher-order operations break down faster.
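You can see the flavor of this without an LLM at all. A purely linear map cannot represent multiplication no matter how well it's fit; the sketch below (using numpy) is an analogy for the layered-linear-functions point, not a claim about transformer internals:

import numpy as np

# Fit the best possible linear function w1*a + w2*b + c to multiplication
# over [0, 100]^2, then see how badly it does.
rng = np.random.default_rng(0)
a = rng.uniform(0, 100, 5000)
b = rng.uniform(0, 100, 5000)
X = np.column_stack([a, b, np.ones_like(a)])
y = a * b

w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("mean |error|:", np.abs(X @ w - y).mean())    # roughly 625: enormous
print("fit for 10*90:", np.array([10, 90, 1]) @ w)  # ~2500, true answer is 900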
This is the most obvious outcome ever. How could anyone not see this coming given the constant AI improvements?
Though good prompts can still make a big difference for now.
"Prompt engineering" is simply the skill of knowing how to correctly ask for the thing that you want. Given that this is something that is in rare supply even when interacting with other humans, I don't see this going away until we're well past AGI and into ASI.
Human experts often say things like "customers say X, they probably mean they want Y and Z" purely based on their experience of dealing with people in some field for a long time.
That is something that can be learned. Follow-up questions can be asked to clarify (or even doubts - "are you sure you don't mean Y instead?"). Etc. Not that complicated.
(Could be why OpenAI chooses to degrade the experience so much when you disable chat history and training in ChatGPT 😀)
Today's LLMs have other quirks, like how adding certain words can help even if they don't change the meaning that much, but that's not some magic either.
customers say X, they probably mean they want Y and Z
Sure - an LLM can help catch some of those situations. But if anything it makes prompt engineering even more important.
Sometimes the customer actually wants X, and a prompt engineer needs to predict this issue and disable the Y/Z behaviour. Prompt engineering is changing, but it's not going away.
But why couldn't an AI do the same?
Why are you assuming it can never get good enough to correctly figure out the intent and find the best possible response it is capable of?
Sure, it's not there today, but this doesn't seem like some insurmountable challenge.
Wait, Slashdot isn't dead yet?
That is not dead which can eternal lie.
Ah so you've seen the maintenance cycle on my open-source stuff.
Usually it all springs to life when the cart comes around and the man shouts "bring out yer dead!"
it's been undead for at least a good 12 years
Some of the community moved to https://soylentnews.org/
The hype around AI language models has companies scrambling to hire prompt engineers to improve their AI queries and create new products.
Who is hiring all these prompt engineers? Who is 'scrambling' to find people for this? The jobs I do see have basically replaced "developer" with "prompt engineer" with the same job requirements.
Ah yes, using AI to solve problems with AI.
Good
Prompt Engineering was never a thing to begin with.