It's incredible how much of the work gets done while staring out the window, thinking about nothing at all related to the task at hand.
Definitely solved a problem or two while I was sleeping.
Since you spent your sleep being productive, should that count as overtime?
Maybe, but did it? Absolutely not.
When I was working on my last big project, every morning I'd drift between dreaming and waking inside the code world, chasing the flows of data from various perspectives.
I didn't get the same "aha!" moments that sneak up on you, but it did help me understand the system I built.
I thought it was copying and pasting from Stack Overflow.
Nowadays it's:
Open ChatGPT
Paste code
"Can you just fix this for me?"
ChatGPT has taught me so much basic programming I should have learned years ago. Don't know if I'll remember any of it, since I'll just end up pasting my almost-working stuff into ChatGPT again once I'm stuck.
^^^ How Skynet was born: ChatGPT injecting subroutines via lackadaisical programmers copying and pasting without checking.
Oh, I'm reading and making sure I understand everything. I usually type the suggestions into my code myself to try to get them into my fingers. But it's usually basic stuff I haven't done very often that I struggle with, like building a JSON document as an object, adding each part step by step and serializing every input properly, instead of just building the JSON string by hand.
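To be concrete, the object-first version looks something like this in Python (the field names are just made up for illustration):

    import json

    # Build the structure as plain Python objects...
    payload = {}
    payload["user"] = 'Anna "Banana" O\'Neil'   # quotes in the value are fine
    payload["scores"] = [97.5, 88.0]
    payload["active"] = True

    # ...and let the library handle quoting, escaping, and types.
    print(json.dumps(payload))
    # {"user": "Anna \"Banana\" O'Neil", "scores": [97.5, 88.0], "active": true}

    # Hand-rolling the string breaks as soon as a value contains a quote:
    # bad = '{"user": "' + payload["user"] + '"}'   # -> invalid JSON here

The whole point is that json.dumps worries about the edge cases so you don't have to.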
ChatGPT is good for code reviews. I wouldn't trust it for writing code, though.
Are there any other LLM interfaces or co-pilot programming tools you'd recommend over ChatGPT, or wouldn't you trust LLMs in general?
As I see it, since we can keep iterating, reviewing, and rewriting prompts, and even use frameworks like AutoGen to let multiple LLMs interact to solve a problem on our behalf, your take is somehow only scratching the surface.
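For example, a two-agent loop in AutoGen is only a few lines. This is roughly the pyautogen quickstart from memory, so treat the exact parameters as assumptions rather than gospel:

    from autogen import AssistantAgent, UserProxyAgent

    # One LLM-backed coder, one proxy agent that runs the code it writes.
    assistant = AssistantAgent(
        "assistant",
        llm_config={"config_list": [{"model": "gpt-4"}]},
    )
    user_proxy = UserProxyAgent(
        "user_proxy",
        human_input_mode="NEVER",  # let the agents iterate without a human
        code_execution_config={"work_dir": "coding", "use_docker": False},
    )

    # The two agents go back and forth until the task is done or abandoned.
    user_proxy.initiate_chat(
        assistant,
        message="Write and test a function that reverses the words in a sentence.",
    )

The interesting part is the feedback loop: the proxy executes the assistant's code and sends errors back, so the review/rewrite cycle you'd do by hand happens automatically.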
Mostly, it's my own personal choice / preference.
When I see code ChatGPT has spun up, sometimes it's rock solid. Sometimes it uses weird logic that's hard to follow. I prefer having it review my code rather than reviewing its.
I'm kind of partial to how the military conceives of use cases for AI: anything that can do damage, and any complex task, must be done by a human. For mediocre tasks, I can see a use for it.
For instance, writing code to automate scheduled jobs that back up multiple systems, given a fileset to back up or skip, is something I'd feel OK letting AI do. Those jobs should all be basically the same. But for code that's critical to infrastructure and/or complex, I feel it's not the right tool to use.
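To show the level of boilerplate I mean, here's a minimal Python sketch; the hosts, paths, and destination are all made up, and the scheduling itself would just be a cron entry pointing at the script:

    import subprocess

    HOSTS = ["web01", "web02", "db01"]           # hypothetical systems
    FILESET = ["/etc", "/home", "/var/www"]      # paths to back up
    SKIP = ["*.tmp", "*.cache", "node_modules"]  # patterns to skip
    DEST = "/backups"                            # hypothetical target directory

    def backup(host: str) -> None:
        # --relative keeps the full source path under DEST/<host>/
        cmd = ["rsync", "-az", "--relative"]
        for pattern in SKIP:
            cmd += ["--exclude", pattern]
        cmd += [f"{host}:{path}" for path in FILESET]
        cmd.append(f"{DEST}/{host}/")
        subprocess.run(cmd, check=True)

    for host in HOSTS:
        backup(host)

Every job is the same shape with different values plugged in, which is exactly why it feels safe to delegate.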
Edit: all LLMs are basically the same IMO. The GitHub one might have access to more code, though; idk, never used it. If it does look at private repos, then I'd say it would be better, but honestly I think they're about the same.
Don't you think it's just a question of time before it becomes good enough, or better than us, at some of these tasks? I do. With better training data, more parameters, better quantization, and now even memory management with MemGPT, I see some very interesting evolutions becoming possible.
To me, saying "all LLMs are the same" sounds like saying "all humans are the same", which I strongly disagree with. Different people are good at different things; otherwise, there'd be no need to hire specialized workers.
Military use cases are a big ethical discussion. Autonomous drones sound pretty scary, at least as long as the technologies are not fully understood. Many people would probably agree that humans should always stay in control when serious damage could be done; however, many are OK with self-driving cars, etc. There are many answers to be found in this regard, but I'm not sure this thread is the place to find them. Still, I'd like to hear if you have any thoughts to add on autonomous machines.
> I see some very interesting evolutions becoming possible
It's funny to me to think about LLMs training on LLM-generated input. Sort of like a photocopy of a photocopy (reminder to self: watch Multiplicity this weekend). It seems like it'd reach some Kentucky level of generational inbreeding, but at a much, much faster rate.
So how do you stop LLM inbreeding? I think there are computer scientists who have already written entire theses/dissertations on detecting AI-generated stuff. My understanding is that it's getting more difficult, depending on the medium: generated text is generally harder to detect than imagery... for now.
In my experience, it tends to have difficulty with restrictions, and will attempt to use the normal method for things no matter what.
Should I feel bad if I feel referenced?
Edit: No. Not really, and neither does anyone else reading this. It's stupid to feel bad about what an absolute stranger says on the internet.
If that's literally all you do, then yes.
If not, you just know your tools.
Not so. I code by myself, and when I have a problem I ask ChatGPT, but I don't say "do everything for me".
Look at John Henry over here
What?
I'm actually impressed how aptly that reference applies.
The legend of John Henry is that he beat a steam engine at the task of drilling through a mountain. John and his assistant (a shaker) had the advantage of being able to finely tune how the steel driver was placed and struck, and how chips of rock were moved out of the way. Contrast that with the steam drill, which just rotely brute-forced its way in, got clogged by the resulting rock chips, and could drill no more.
So the moral is that the humans beat the machine by being able to finesse the situation, even though intuitively the machine should have prevailed thanks to its superior raw strength.
Eventually, but it starts with Googling and trying a bunch of other example code and failing miserably.
Hahhaa yes
Does changing all coding styles in a project based on my mood count for something?
Every time I try to do that, they catch it in review, so I never get away with it except on personal projects, where I decide every other day to change up where I'm keeping all the code.
FTFY
Look at this absolutely spoiled chad with a half wall. What I wouldn't do to be spat in the face. Oh, you don't know how good you have it. Oh, to be crucified!
Depends; Java gets the top picture because it's dummy verbose!
Is that the "I want to be a terrorist" kid?
That's writing
¿Por qué no los dos? (Why not both?)
More like Google searches and a lot of Ctrl+C and Ctrl+V.
what programming is REALLY like:
:q
Wait, I forgot to write out
:wq
Shit, I need sudo privileges
:q! 🥲
making a load of edits then FINDING OUT YOU WEREN'T SUDO AAAARGHHHHHHH
:w !sudo tee %
Of course that's impossible to remember, but you can just google "vim sudo save" or something like that to find it. (It writes the buffer through sudo tee into %, which vim expands to the current file's path.)
Can nano do this too?
I don't think so; nano at least tells you upfront when you're editing a file that you can't write to. But you can always write the file somewhere else and then sudo mv it back.
That's really smart.
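Spelled out, the workaround is just this (the target path is only an example):

    :w /tmp/hosts.new                      " write the buffer somewhere you can write to
    :q!                                    " quit, leaving the read-only original alone

    $ sudo mv /tmp/hosts.new /etc/hosts    # then move it into place from the shell

Worth checking ownership and permissions afterwards, since the moved file keeps whatever /tmp gave it.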
Why oh why doesn't it put up a small banner on opening, like nano does? You have to wait until saving to find out.
Bam! Got it in one!
Well done.
Sssh
That's why I use nano.
You never have to exit vim when you make it your only editor.