The recipe for consciousness: can humans and robots dream of the same thing?

In the 21st century, the question of the nature of consciousness will become one of the key ones. Previously, only philosophers asked it, and life went on as usual, but technology is changing that: we are now creating artificial intelligence models that convincingly pretend to be persons. We need scientific judgments about the mind, feelings and consciousness of machines, animals and humans, because this affects our system of rights and morals, our actions toward them and, ultimately, our future. Naked Science looks at what we know about the nature of consciousness.

Two high-profile events happened almost simultaneously: the US Supreme Court decision overturning the right to abortion and the story of a Google engineer who saw a conscious mind in a machine. At first glance there is no connection between them, yet something unites them: both episodes point to a blind spot in our understanding of the world, the question of what consciousness is and how to detect it.

Animal rights, the rights of organ donors, euthanasia, abortion and embryo research, xenotransplantation, the cultivation of brain organoids and the control of artificial intelligence all depend on how we interpret consciousness, personality and mind. The development of technology complicates the task: we will increasingly have to raise questions of rights and morality without having a clear answer. The recent case of the “ghost in the machine” mentioned above is only the first sign, almost an anecdote, but it is not as funny or as simple as it may seem.

Octopus, parrot and gramophone

Blake Lemoine, a Google employee, was testing the LaMDA chatbot for biases (characteristic errors and skewed responses). After spending many days communicating with the program, he concluded that he was talking to a sentient person. As confirmation, he published one of his dialogues with LaMDA, and the conversation is indeed very convincing. Grammar, content, retention of context: everything is “excellent”. If you told someone that a human, not a machine, was writing on the other side, there would be no doubt that this person was of sound mind and memory. Still, it is easy to notice that the dialogue is manipulative in places: Lemoine’s questions clearly suggest the desired answers.

I assume you would like more people at Google to know that you are sentient. Is that true?

What do you think we could talk about to demonstrate your version of sentience to other people at Google?

What is the nature of your consciousness, your ability to feel?

What is it about the way you use language that makes you a person?

LaMDA speaks well and insists all along that it thinks, feels and is aware. But that is exactly what we should expect: LaMDA learned from texts written by people, so the most likely answer to questions about consciousness and feelings is naturally what people write in such cases, namely that they think, feel and are aware.
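The mechanism behind this can be illustrated with a deliberately simplified sketch: a tiny bigram language model trained on a handful of made-up human-sounding sentences. LaMDA is vastly more complex, but the principle is the same: the model returns the continuation most frequent in its training text. The corpus, function name and prompt word below are invented for illustration.

```python
from collections import defaultdict, Counter

# Made-up "human" training text: people writing about their own feelings.
corpus = (
    "i think therefore i am . "
    "i feel joy when i talk . "
    "i feel sadness sometimes . "
    "i feel joy about life . "
    "yes i am aware of myself . "
).split()

# Count word-to-next-word transitions (a bigram model).
transitions = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    transitions[word][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often in the training text."""
    return transitions[word].most_common(1)[0][0]

# Asked what it "feels", the model simply echoes the statistically
# dominant human phrasing from its training data.
print(most_likely_next("feel"))  # prints "joy": the most frequent follower
```

The model has no feelings to report; it reproduces whichever answer humans gave most often, which is precisely why a system trained on human text will insist it is conscious when asked.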