ChatGPT Did NOT Like My Memory Test
I was trying to do a memory test to see how far back GPT-3.5 could recall information from previous prompts, but it really doesn't seem to like making pseudorandom seeds. 😆
Just tried with alpaca ggml-alpaca-7b-q4.bin (a text chat model that I found I can easily run locally on my CPU).
Looks somewhat random.
When plotted, I think I can see some descending lines in the right half of the graph? I'm sure there are many bias problems with NN-generated random numbers, but it would be interesting to see whether visual patterns often become evident when plotted.
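For anyone who wants to try the same plot, here's a minimal sketch using matplotlib, assuming you've pasted the model's digits into one string (the digit string below is just a placeholder, not actual model output):

```python
# Plot each generated digit against its position in the output,
# to eyeball for visual patterns like the descending lines mentioned above.
import matplotlib.pyplot as plt

digits = "31415926535897932384626433832795028841971"  # placeholder digits
values = [int(d) for d in digits]

plt.scatter(list(range(len(values))), values, s=10)
plt.xlabel("position in output")
plt.ylabel("digit value")
plt.title("Model-generated digits by position")
plt.show()
```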
Not exactly 50 digits though...
They're just particularly low-biased 50-digit numbers with the leading zeros omitted :D I'm particularly proud that it managed to do 30 though.
It's interesting that none of the numbers start with zero. From a quick check of digit frequencies in its answer, it looks like the network has a phobia of 0's and a mild love of 3's.
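The frequency check itself is a one-liner with `collections.Counter`; here's a sketch, where the `answer` string is a placeholder standing in for the model's actual output:

```python
# Count how often each digit 0-9 appears in the model's answer.
from collections import Counter

answer = "84739217365491283746501928374652918273645"  # placeholder
counts = Counter(ch for ch in answer if ch.isdigit())

for d in "0123456789":
    print(d, counts[d])  # Counter returns 0 for digits that never appear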
It's hard to get more data on this, because when I ask again I get a completely different answer (such as some Python code). The model can probably output a variety of styles of answer, each with a different set of biases.