Why spend money on ChatGPT?

Gollum@feddit.org to Programmer Humor@programming.dev

It works. Well, it works about as well as your average LLM.

pi ends with the digit 9, followed by an infinite sequence of other digits.

That's a very interesting use of the word "ends".

It's like how they called the fourth Friday the 13th movie "The Final Chapter".

True, but I think the Fast & the Furious franchise has a better shot at giving Pi a run for its money.

TBF, if your goal is to generate the most plausible sentence that directly answers the question, it's only one minor abstract noun that's broken here.

Edit: I wouldn't be surprised if there's a substantial drop in the probability of any digit being listed after that 9 (3.14159...), even, so it is "last" in a sense.

Edit again: Man, Baader-Meinhof so hard. Somehow pi to 5 decimal places came up more than once in 24 hours, so yes.

In other words, it doesn't work.

Maybe it knows something about pi we don't.

It's infinite yet ends in a 9. It's a great mystery.

Pi is 10 in base-pi

EDIT: 10, not 1

I saw someone post this a few days ago, and someone else quickly pointed out that it was incorrect. This time I'll be the one to point it out.

In base-pi, pi would be represented as 10: the place value of the right-most digit would be pi^0, and that of the next digit pi^1.
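For a quick sanity check, here's a minimal Python sketch (the digit list and variable names are just mine for illustration):

```python
import math

# Evaluate the base-pi digit string "10": 1*pi^1 + 0*pi^0
digits = [1, 0]  # most significant digit first
value = sum(d * math.pi ** (len(digits) - 1 - i) for i, d in enumerate(digits))

print(value)             # 3.141592653589793
print(value == math.pi)  # True: "10" in base-pi is exactly pi
```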

Indeed. 10 is pi in base-pi

Mathematicians are weird enough that at least one of them has done calculations in base-pi.

That's pretty much what radians are. Well, they combine base pi with whatever base you're using for the coefficients.
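As a small illustration (my own sketch, nothing from the thread): writing an angle as a rational coefficient times pi is exactly that split, base pi for the magnitude and your ordinary base for the coefficient:

```python
from fractions import Fraction
import math

def degrees_as_pi_coefficient(deg):
    # radians = deg * pi / 180, so the coefficient of pi is deg / 180
    return Fraction(deg, 180)

for deg in (90, 180, 270):
    c = degrees_as_pi_coefficient(deg)
    print(f"{deg} deg = {c} * pi = {float(c) * math.pi:.6f} rad")

# 90 deg = 1/2 * pi = 1.570796 rad
# 180 deg = 1 * pi = 3.141593 rad
# 270 deg = 3/2 * pi = 4.712389 rad
```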

The answer to life, the universe, and everything is 42... +9.

Hyperreal numbers go brrr.

I'm kind of curious exactly how using this in place of actual pi would change or break geometry. Obviously, it wouldn't become noticeable until you try to involve infinite structures.

I mean, it depends on what you're doing. Supervision always required, though.

GPT-4 gives a correct answer to the question.

It's 4, isn't it?

No clue what Amazon is using. The one I have access to gave a sane answer.

There's probably some fine-tuning at play for Amazon's thing that makes it tend to always give a straight answer, instead of stepping outside the box and doing something like correcting an implicit assumption.