"prompt engineering"

Maven (famous)@lemmy.world to Programmer Humor@programming.dev – 1269 points –

Now THAT is the AI innovation I'm here for

LLMs are in a position to make boring NPCs much better.

Once they can be run locally at a good speed, it'll be a game changer.

I reckon we'll start getting AI cards for computers soon.

We already do! And on the cheap! I have a Coral TPU running presence detection on some security cameras. I'm pretty sure they can run LLMs, but I haven't looked around.

GPT4All runs rather well on a 2060, and I'd imagine it runs a lot better on newer hardware.