anyone who thinks an LLM can possibly be conscious simply doesn’t know what an LLM is. it doesn’t think, it doesn’t understand anything. it simply uses probabilities learned during “training” to predict which word is most likely to come next. it’s no more conscious than my phone’s predictive text.
Honestly I get that but I'm not 100% on what "thinking" is.
Replies
-
The point is that the LLM is not thinking; it's just using the words you use to express thinking.
This is why people compare it to an autocomplete, or a parrot.
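To make the autocomplete comparison concrete, here's a toy sketch in Python (purely illustrative, every name here is made up for the example): a bigram model that "trains" by counting which word follows which, then "generates" by sampling from those counts. Real LLMs use neural networks over subword tokens rather than word counts, but the basic loop is the same: predict the next token from probabilities derived from the training data.

```python
import random
from collections import defaultdict, Counter

# "Training": count which word follows which in the training text.
training_text = "the cat sat on the mat and the cat slept on the mat"
follows = defaultdict(Counter)
words = training_text.split()
for current_word, following_word in zip(words, words[1:]):
    follows[current_word][following_word] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# "Generation": start from a word and keep sampling, one word at a time.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
```

Nothing in that loop knows what a cat or a mat is; it only knows which words tended to follow which. The argument above is that an LLM is this same trick at enormous scale.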
-
I know what thinking isn’t: a computer assembling words written by other people, like fridge magnets.
-
whatever "thinking" is, language is just how you express your "thoughts" to other people, right? there's an underlying process that language (imperfectly) represents - the words aren't themselves thoughts. an LLM is only a language generator. it's nothing but a protocol for how to arrange words.
-
don’t make that our problem please