Study Finds That 52 Percent of ChatGPT Answers to Programming Questions Are Wrong

ekZepp@lemmy.world to Technology@lemmy.ml – 600 points –
futurism.com


Sometimes ChatGPT/copilot’s code predictions are scary good. Sometimes they’re batshit crazy. If you have the experience to be able to tell the difference, it’s a great help.

Due to confusing business domain terms, we often name variables in the form XY and YX.

One time Copilot autogenerated about two hundred lines of a class that was like: XY; YX; XXY; XYX; XYXY; ..... XXYYXYXYYYXYXYYXY;

It was pretty hilarious.

That being said, it's a great tool that has definitely proven to be worth the cost... but like with a co-op student, you have to check its work.

I find the mistakes it makes, and troubleshooting them, really good for learning. I'm self-taught.

The amount of reference material it has is also a big influence. I had to pick up PLC programming a while ago (CODESYS/Structured Text, which is kinda based on Pascal). While ChatGPT understands the syntax, it has absolutely no clue about libraries and platform limitations, so it keeps hallucinating those based on popular ones from other languages.

Still a great tool to have it fill out things like I/O mappings and the sort. You just need to give it some examples to work with first.
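For context, the kind of repetitive I/O mapping boilerplate meant here looks something like this in CODESYS Structured Text (variable names and addresses below are made up for illustration); given a few such lines, the model can usually extrapolate the rest:

```
(* Hypothetical Structured Text sketch: repetitive I/O declarations
   of the sort a completion tool can fill in from a couple of examples.
   Names and %I/%Q addresses are illustrative, not from a real project. *)
PROGRAM PLC_PRG
VAR
    xStartButton AT %IX0.0 : BOOL;  (* digital input: start pushbutton *)
    xStopButton  AT %IX0.1 : BOOL;  (* digital input: stop pushbutton *)
    xMotorOn     AT %QX0.0 : BOOL;  (* digital output: motor contactor *)
    wTankLevel   AT %IW2   : WORD;  (* analog input: raw tank level *)
END_VAR

(* Simple start/stop latch driving the mapped output *)
xMotorOn := (xStartButton OR xMotorOn) AND NOT xStopButton;
```

Once the first two or three declarations establish the naming and addressing pattern, the remaining mappings are mostly mechanical, which is exactly where the autocomplete shines.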