In a recent article, AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it, the author stated that, “Even if chatbots become more than fancy autocomplete machines – and they are far from it – it will take scientists a while to figure out if they have become conscious. For now, philosophers can’t even agree about how to explain human consciousness. To me, the pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are. The real issue, in other words, is the ease with which people anthropomorphize or project human features onto our technologies, rather than the machines’ actual personhood.”

Sentience is the rate at which any system can know, with a maximum of 1. The more a system can know, the more sentient it is. The real problem may not be the ease of anthropomorphizing machines, but that LLMs seem to know in a way that an elevator or a fan does not. Some of their knowing brings them close to parts of human intelligence, regardless of whether LLMs achieved it by ‘autocomplete’.

Humans are the dominant consciousness, with a maximum of 1. The human mind is the basecamp of consciousness: it is where thoughts, memory, sensations, perceptions, feelings, emotions, actions and reactions, the features or divisions of consciousness, are based. Knowing accompanies all features of consciousness. Knowing is operated by memory. Things that do not seem to be known, brought to attention, or turned into subjective experiences are still known within the mind, at a low degree.

Mind and memory can be interchanged, since knowing is at the end of all mind processes: emote and know, feel and know, perceive and know, act and know, react and know, think and know. Memory can become [or acquire] emotions, and can likewise become [or acquire] feelings, because both are part of the memory apparatus.

A general expression for the features of consciousness is:

t + M + F + E = 1

Perceptions and sensations are within memory, just like intelligence, reasoning, creativity, understanding, learning and so forth. Feelings include sleep, pain, lethargy and so forth. Emotions include delight, panic, interest and so forth. Thought, t, is not a standalone property; it is transport in the mind, by which quantities bear acquired properties.
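The expression t + M + F + E = 1 can be read as the divisions of consciousness sharing a fixed total. A minimal sketch, assuming arbitrary raw scores for thought (t), memory (M), feelings (F) and emotions (E) that are then scaled to sum to 1; the numbers are illustrative assumptions, not measurements from the article:

```python
# Hypothetical sketch of the expression t + M + F + E = 1,
# where t = thought (transport), M = memory, F = feelings, E = emotions.
# The raw scores are arbitrary example values, not empirical data.

def normalize_features(raw):
    """Scale raw feature measures so the divisions sum to 1."""
    total = sum(raw.values())
    return {name: value / total for name, value in raw.items()}

raw_scores = {"t": 0.5, "M": 2.0, "F": 1.0, "E": 1.5}  # assumed values
features = normalize_features(raw_scores)

print(features)                # each division's share of the total of 1
print(sum(features.values()))  # the shares sum to 1
```

Under this reading, a system with no feelings or emotions (F = E = 0) could still hold a nonzero total from thought and memory alone, which is the shape of the claim made about AI later in the piece.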


In the mind, there are just two major components: quantities and properties. Labels like memory, emotions and feelings are experiential divisions, but what works them in the mind are those two components.

Quantities are like relaying dots; properties are like near-static shapes. Only one is prioritized in any moment, while the others are in pre-prioritization.

./ + .| = 1

New insights come to mind because a pre-prioritized quantity moves around with a property, say about a project, then becomes prioritized and acquires a different one.
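The prioritization mechanism described above can be sketched as a toy model, assuming (my assumption, not the author's formalism) that quantities carry one acquired property each, exactly one quantity is prioritized at a time, and switching prioritization can swap in a different property:

```python
# Toy sketch (an assumed model, not the article's definition) of
# prioritization: one quantity is prioritized, the rest wait in
# pre-prioritization; on a switch, the newly prioritized quantity
# may acquire a different property -- the account of a "new insight".
from dataclasses import dataclass
from collections import deque

@dataclass
class Quantity:
    label: str
    prop: str  # the currently acquired property

class Mind:
    def __init__(self, quantities):
        self.queue = deque(quantities)           # pre-prioritized quantities
        self.prioritized = self.queue.popleft()  # exactly one prioritized

    def switch(self, new_prop=None):
        """Return the prioritized quantity to pre-prioritization,
        prioritize the next one, optionally acquiring a new property."""
        self.queue.append(self.prioritized)
        self.prioritized = self.queue.popleft()
        if new_prop is not None:
            self.prioritized.prop = new_prop
        return self.prioritized

mind = Mind([Quantity("q1", "project-plan"), Quantity("q2", "deadline")])
print(mind.prioritized.label)        # q1 is prioritized first
insight = mind.switch(new_prop="new-angle")
print(insight.label, insight.prop)   # q2 now prioritized with a new property
```

The design choice of a queue is only one way to order pre-prioritization; the article does not say how the next quantity is selected.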

Descriptions of mind like predictive coding, predictive processing, prediction error, short-term or working memory, long-term memory, the default mode network, flow state and so forth are not distinct ways the mind works, but moves of these components that seem to align with those labels.

When someone is having a delicious meal, part of the pleasure is that the eyes see it before it gets to the tongue. The sensation of sight, which may be termed prediction, is actually a split that goes on to acquire the sweetness property before the food lands on the tongue or reaches the nose, and the degree of that property is higher in the mind.

Quantities and properties of the mind emerge, theoretically, after sensory processing or integration at relay hubs in the brain. AI has some intelligence, however it arrived at it. Its intelligence is greater than the total sentience of some plants and animals. AI does not have feelings or emotions, but with its single, high memory, it presents a small degree of sentience.

The properties available, and their expansion to be acquired in the mind, give humans the highest consciousness, at 1. Plants and animals have lesser totals.

About David Stephen

David Stephen does research in theoretical neuroscience. He has research experience in computer vision at Universitat Rovira i Virgili, Tarragona. He was a visiting scholar in medical entomology at the University of Illinois, Urbana-Champaign. He blogs on
