Could a Large Language Model Be Conscious? Within the next decade, we may well have systems that are serious candidates for consciousness.

alyaza [they/she]@beehaw.org (mod) to Humanities & Cultures@beehaw.org – 7 points
Could a Large Language Model Be Conscious? - Boston Review
bostonreview.net


I can't take this shit seriously while the authors eat flesh

Why not? Ethical or moral values have about as much bearing on the scientific outcome as how attractive the researchers are.

Ok so assuming good faith:

There's a huge cohort of idiot-savant-type technologists who are obsessed with the idea of machine consciousness and its (according to them) incredibly important implications. Yet these same people by and large refuse to engage in any behaviour modification in response to the incredibly strong evidence of consciousness we find in extant earthlings, human and non-human alike.

So I can't take their claims of being interested in this any more seriously than a teenager's musings on the meaning of life, because they don't actually believe anything they're saying.

Knowledge without belief in it is not knowledge; it is mere rhetoric and wordplay.

If you cared about machine consciousness and thought it would carry any weight or demand modification of our behaviour, you would already be acting with urgency against humanitarian crises and animal agriculture.