In Part 1 of this series, I argued somewhat obliquely that it may be possible to engage in a science of non-physical phenomena without recourse to the standard repertoire: argumentation from pure reason (such as ontological arguments), and paranormal research. These two standard methods do not produce scientific data about the non-physical because neither is sufficient to the task. Reason alone does not lead to truth because it relies upon axioms (universally) and premises (specifically). Paranormal research, on the other hand, attempts to measure physical phenomena and come to conclusions about a non-physical reality. In the first instance, the method fails to yield data, while in the second instance the data produced are silent about the non-physical reality.

This essay, Part 2 in my Non-physical Science series, establishes the underlying rationale for attempting a non-physical science. I carve out the territory which this discipline would explore and I provide what I consider to be a very strong basis for engaging in such an effort. Where Part 1 was a cursory journey through the many considerations relevant to identifying a new scientific arena, Part 2 is a careful argument meant to establish (1) the unavoidability of affirming a metaphysical realm and (2) the direction we must move in order to explore this realm with a seriousness appropriate to science.

 

Making Space for the Non-physical

“There exists a non-physical reality” is not a universally agreed-upon claim. The experience of a non-physical reality is, however, universally present to a normal human perspective. The phenomenological philosophy of the past century has produced myriad names for this experience, including but not limited to: the Real, the Given, intentionality, Dasein, différance, and thrownness. What are we to do with this strange vocabulary that reaches for something that is so immediate to all of us but so slippery when we try to seize it with words or render it tangible in the physical dimension?

The Phenomenon of Mind

Consider the mental environment. Most of us refer to this as what goes on “inside my head.” In this place, interpretation, imagination, memory, emotionality, and thinking are all experienced in a way that leaves us with a sense of significance, as if something important has happened. This “inner” human experience, as it is also known, is, as far as our individual perspectives are concerned, an existing realm, a bubble that seems to attach to the physical head of each human being. We are all walking around with these bubble-dimensions expanding orthogonally from our physical brains out into a parallel dimension that we can call the “metaphysical.” This description sounds bizarre, but it is in fact how we all experience life.

As we tarry on in our quest through time and space, our presence to this reality has an apparatus whose dimensions extend beyond the physically observable. We project the past and the future, we conceive possibilities, we imagine scenarios remembered or not, we engage in a dialogue without spoken language, we move through a textured emotional environment capable of the utmost subtlety, and we establish an entire system of concepts (which I call a “Story”) with which to inhabit and explore this dimension. Although we typically refer to these events as brain-related or taking place within the head, the dimensions along which we meet with this experience are not, themselves, physical. When we remember a scene from childhood, it takes place in a region with proportions well beyond the physical expanse of the brain. When we engage in discursive thought or hear the mental voice, we witness ourselves doing so via no physically discernible pathway, although careful brain scanning reveals correlations between types of brain activity and types of mental experience.

When we speak, we speak in universal terms: “human” and “dog” signify to us a class of physical bodies that appear to have corresponding concepts within our Stories. The reference of the terms “human” and “dog” seems to be clear to us; the terms as such are signs that point to the physical objects to which we give the classification. The meanings of these terms, however, indicate the conceptual classifications themselves as they exist within our Stories. A grand conceptual array of robust meanings which give non-physical substance to the signs we use, ‘human’ and ‘dog,’ can be found in the bubble dimension we sometimes call a “mind.” That is, we all know that we mean more by ‘dog’ than “the group of existing bodies that are members of the set called ‘dog’.” We have a concept of what it is to be a dog whose expansive reference includes any future, past, or even imaginary dog. It is this concept which we intend (to use a technical term) when we say the word ‘dog’, and the set of all dogs, real or imaginary, is the extension of the term, the manifest set of entities associated with the intensional meaning. The concept intended by ‘dog’, like all concepts, exists in the phenomenological bubble universe. These universal terms, however, do not necessarily reside in a perfect and unchanging realm such as the ancients and medievals imagined. They may well each belong uniquely to our own individual parallel universes, our own personal inner world, the mind. And this thing we call a mind might at base be constituted the same way a computer is, i.e. materially.
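For readers who like their distinctions concrete, here is a minimal sketch in Python of the intension/extension distinction just described. It is my own illustration, not part of the original argument, and all the names in it are hypothetical: the intension of ‘dog’ is modeled as a rule for recognizing dogs, while the extension is whatever set of individuals happens to satisfy that rule at a given moment.

```python
# Minimal, illustrative sketch (hypothetical names): intension vs. extension.
from dataclasses import dataclass

@dataclass
class Animal:
    name: str
    species: str

def is_dog(x: Animal) -> bool:
    """Intension-like rule: a condition any dog, past, future, or imaginary,
    would satisfy. The rule exists whether or not any dogs currently do."""
    return x.species == "canis familiaris"

# Extension: the manifest set of entities that currently satisfy the rule.
world = [Animal("Rex", "canis familiaris"),
         Animal("Whiskers", "felis catus"),
         Animal("Laika", "canis familiaris")]

extension_of_dog = [a for a in world if is_dog(a)]
print([a.name for a in extension_of_dog])  # ['Rex', 'Laika']

# The asymmetry the essay points to: empty out `world` and the extension
# becomes empty, yet the rule `is_dog` (the concept we *intend*) is untouched.
```

The sketch only dramatizes the essay’s point that the concept we intend outruns any actual set of dogs; it does not settle where that concept lives.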

While the above sounds like a series of outmoded arguments for the metaphysical nature of thoughts, I am not actually arguing ontology here. I’m describing phenomenology. Regardless of the stuff of which the mental experience may or may not be composed, the experience is distinctly and obviously non-physical. Imagine an elephant. Now touch it with your hand. QED. This brute non-physicality is precisely what I’m describing, but not arguing for. I take it as baseline and therefore not in need of defense.

Extracting Mind from Body

Our experience of the body is largely external. When we feel aches and pains within our bodies, we associate these aches and pains with specific points in space-time. Thoughts and emotions, however, are often also correlated to points in space-time. We associate thinking with the head and experience it discursively through time. We also associate emotions with specific parts of the body; embarrassment and shame, for example, are felt in the stomach area. Like thinking, emotions are also a temporally limited experience. For these reasons, distinguishing between mind and body based on location of sensation or duration is unlikely to produce a useful separation.

To differentiate between mind and body we need to identify their proper domains as measured dynamically. We need to know the function each serves. As previously observed, the mind invents and explores a Story. The body, then, as evolutionary science commonly asserts, maintains and propagates itself. We can identify bodily content in, for example, our emotional responses because this content directly influences the actions we take to maintain and propagate our form of life. When we find ourselves afraid, our instincts often compel us to either attack or flee. Both of these responses revolve around the central directive of survival, whether of self or species. Our recognition of events that induce survival-type responses, however, is learned. The child does not know that the hot stove will burn him until he touches it and discovers that this is so. Only after the burn is a fear-response programmed into the child’s brain. Thus, pattern recognition and emotional response are both elements of a physical experience.
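To make vivid how mechanical this kind of learned, survival-type response can be, here is a minimal sketch in Python (again my own illustration, with hypothetical names): a stimulus-response learner that, like the child and the hot stove, acquires an avoidance response only after a painful encounter. It records a pattern and adapts its behavior; nothing in it requires an interpretive bubble universe.

```python
# Minimal, illustrative sketch (hypothetical names): a stimulus-response
# learner that acquires a fear/avoidance response only after a painful
# encounter. It stores patterns and adapts; no interpretation is involved.

class AvoidanceLearner:
    def __init__(self):
        self.danger = set()  # stimuli previously associated with pain

    def respond(self, stimulus: str) -> str:
        # Pattern recognition: has this stimulus been tagged as dangerous?
        return "withdraw" if stimulus in self.danger else "approach"

    def feel(self, stimulus: str, pain: bool) -> None:
        # Adaptation: a painful outcome reprograms the future response.
        if pain:
            self.danger.add(stimulus)

child = AvoidanceLearner()
print(child.respond("hot stove"))   # 'approach' -- no fear before the burn
child.feel("hot stove", pain=True)  # the burn
print(child.respond("hot stove"))   # 'withdraw' -- fear-response now recorded
```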

Computer science confirms this attitude. A computer can typically locate within itself the specific space-time coordinates of a computing process in a way that we, through our immediate sensory awareness, cannot. This indicates that survival-based emotional and pattern-recognition events in our experience are likely directly associated with physical interactions in the sense that although we are aware of them, the sequence of actions can take place without the involvement of the interpretive bubble universe. That is, they are physical phenomena.

It has been well said that the computer is an artificial brain. Indeed it is, but insofar as a mind is concerned with creating and living within a narrative, it is not an artificial mind. The increasing subtlety of AI, especially through learning algorithms that produce impressive chatbots, brings us ever closer to the computing capability of the human brain. Assuming we are all willing to say that a computer is an entirely physical entity, we can also say that the human being, insofar as it records patterns and responsively adapts to its environment, is entirely physical. It is not quite enough, then, to say that the mind is an “inner” experience if by “inner” we mean exclusive to the individual in question. Rather, the mind is the inner experience specifically associated with everything human beings experience and reveal of themselves that the computer, as it exists today, lacks. That is, the mind is concerned with meaning, purpose, identity, accomplishment, significance, morality, and all the other elements that make for a good Story. The body, on the other hand, is concerned with the mundane details that keep the body in the action, processes that form the computer’s bread and butter.

 

Mind as Subordinated—but Not Reduced—to Body

If human awareness is algorithmic responsiveness, then it is obvious that we ought one day to be able to reproduce it. How this will happen is uncertain, though the technological singularity offers a very interesting hypothesis: eventually learning algorithms will reach a critical mass at which their capacity to adapt matches or even supersedes that of the human mind. At this point, the inception of awareness begins and computers take charge of themselves—hopefully not at our expense. This algorithmic AI account of intelligence is the strongest contender I’ve yet seen for a complete description of intelligence in physical terms.

Subordinating the Spark of Intelligence to Cybernetic Computing

Under the algorithmic account of human awareness, what function does the bubble universe play? We measure AI by its ability to replicate human responsiveness, yet we never seem to recognize within computers the “spark” that ignites a full-fledged interpreting machine such as the human mind. The algorithmic account cannot eliminate the bubble universe from our phenomenological description, because an account does not eliminate data. These extra mental dimensions do exist, even if only in our own perceptive awareness. But what are they? By the algorithmic account, they are a unique phenomenon caused by the adaptive algorithmic singularity. That is, the soul was born from the machine out of the machine’s evolutionary necessity for self-reflection. Under this interpretation, the mind can theoretically be reduced to physical phenomena, so long as we determine which orthogonal force we are dealing with. Once we do that, it’s just a matter of identifying the mathematical relationships that can describe the metaphorical magnetic field of the mind induced by the metaphorical electric current of the brain. Revealing exactly which physical processes these metaphors bookmark is the promise of algorithmic AI. Thus, the algorithmic AI account of human intelligence does not deny the existence of a non-physical realm; indeed, it accounts for it as subordinate to or emergent within the physical.

The algorithmic model is, however, only a model. Around this model the philosophical field of post- and trans-humanism seems to be clustering. Supposing, for example, that we could transport a human consciousness into a digital realm, there are two logical outcomes. (1) The consciousness of the individual retains the properties it had within the human body. In this case, it would seem cruel and inhuman to engage in transporting a human mind outside of its body, for would we not be disoriented without a body suitable for the kind of mind we have? (2) The consciousness does not retain the properties it had within the human body. In this case, we would have difficulty recognizing whether the individual in question was still identifiably human. How would increased conscious access to data and the potential absence of an unconscious affect us? What would we become?

These questions form the premises of countless science fiction scenarios. They also have something in common with theology: they are an exercise in speculative metaphysics. Oh no! Anything but that! This accusation sounds absurd. Surely it is speculative physics, since that is what science fiction explores. The resemblance between post-humanism and theology is easy to miss when we subsume metaphysical phenomena under the heading of physics. There is, in fact, not a shred of evidence that the spark of awareness or capacity for interpretation has been or even can be induced in a computer, as the general attitude of experts affirms. To quote Steven Pinker,

There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.

Our optimism that such a feat is possible cannot be accounted for by the evidence alone, because the evidence from algorithmic AI experimentation clearly suggests: yes to pattern-recognition; no to interpretive sentience.

The Concept of Authenticity

This assertion, however, is blatantly false to an AI post-humanist because such a person is confident that algorithmic pattern-recognition and pattern-invention are identical to interpretive sentience. Such a post-humanist might argue, like Turing, that as long as the computer is not behaviorally distinguishable from a human being, then there is no difference between it and a human being. As always, theory is so much easier than practice. The casualty of this theory, however, is the very concept of authenticity. If all it means to be human is to be sufficiently able to pretend to be a human, then what makes us think that we have ever had a moment of genuine human connection? Were we to discover that an interlocutor we loved and trusted were actually a mere set of algorithms designed to imitate our modes of interaction, we’d feel betrayed. We’d wonder how we didn’t see the robot’s fishiness the way we detect the fishiness of an insincere salesman. The confidence we have in authenticity, especially when observed in another, amounts to a claim of knowledge of the noumenon: we recognize the heart of the other through and often despite their behavior. Regardless of the veracity of such a claim, a claim as remarkable as it is common, the phenomenal experience we have of it flies in the face of the AI post-humanist’s certitude that algorithmic patterning can approximate it. Considering the results that algorithms apparently yield, the claim that all of it is nothing more than complex self-referential algorithmic processing fails to account for the mental experience of Storytelling and Story engagement at its most basic level. To make this claim is to assert, quite simply, that there is no such phenomenon.

Given our experience with authenticity and the nature of the unconscious mind, a behaviorist account (of which the algorithmic AI account is a special case) cannot but fail. While we, just as AI, can ape certain human behaviors in an effort to fool others into thinking we are authentically conveying states of mind we do not actually experience, we, just as AI, always give ourselves away. On one hand, the unconscious mind interjects subtle cues that our conscious mind cannot override, despite its skill. On the other hand, because the conscious mind is incapable of manifesting the great subtlety of the unconscious, we would also expect that algorithmic programming cannot compete with the subtlety of the unconscious human mind. This point is not, of course, a demonstrable piece of evidence in favor of the substance dualist account; rather, it identifies an intuition connected to the concept of authenticity—one that must be accounted for. Specifically, we tend to understand human behavior as a symptom that identifies specific pre-existing mental states. Unless one is experiencing the mental state associated with a behavior (such as the state of love associated with affectionate behavior), then one’s efforts to mimic behavioral expressions associated with that state will be amateurish and imperfect. Here, we have a clear suggestion that the Story contributes something important to behavior: it contributes an otherwise inaccessible capacity for subtlety of expression. To be authentic is to behaviorally express a state of mind. We know the difference between art that “has soul” and art that doesn’t because the artist who has soul expressed something authentic.

 

Meta-Story

That we are engaged in speculative metaphysics is, on its own, a startling discovery for a cybernetic futurist. More startling, however, is the Storytelling that is necessarily involved in the whole affair. What does a Story add to the phenomenon of human experience? The entire discussion so far in this article has been an exploration of the ability of one particular physicalist Story to incorporate into its narrative an observed element that is plainly common to human perception. Within this narrative, the Story (physicalism) and its subject (that’s us) are, themselves, an unplanned addition to the narrative thanks to an evolutionary quirk whose physical apparatus remains mysterious. Somehow, the writer of the narrative (that’s also us) has entered the narrative whether we have made space for her or not. Who is this mysterious writer, and what is a narrative? Blind my metaphysical eye and I cannot see these things. But we do not blind our metaphysical eyes. We continue to narrate and in the course of our narration, we remain endlessly optimistic that within a narrative that was never designed to admit the existence of a Story and a Storyteller, the two will eventually emerge to confirm our faith and reward it with life everlasting—as a spirit inside a computer. Funny how even our most materialistic myths still can’t escape their nature as myth.

Contrast the AI post-humanism Story, which is a special case of the physicalist epiphenomenon narrative, with an alternative: substance dualism. In the substance dualist Story, the Story and its subject are as real as the physical reality. Substance dualism does not have to bleed into idealism, nor does it have to affirm a Judeo-Christian God. It rests upon an assumption no more mysterious than the missing physical mechanism for transmuting an algorithmic processor into a meaning interpreter. The mysterious assumption in a basic substance dualist account is that the mind of the human organism (the Storyteller) had a genesis of some kind. A substance dualist ontology does not, however, demand that the genesis of the mind is other than the body itself. There is no logical exclusion of physical genesis in the assertion that the mind is an entity whose existence does not reduce to bodily function. Emergent mind is a possibility. The substance dualist Story, then, necessarily invokes a less mysterious account of the mental reality: it does not reduce based on faith and parsimony alone; rather, it admits a qualitative distinction that, based on existing evidence, appears irreducible. The mystery begins, of course, as soon as we speculate about the genesis of the Storyteller.

In neither Story can we either negate the significance of the Story itself or affirm its origin in a non-mysterious way. So long as we attach significance to Storytelling (can we conceivably not attach such significance?), such as I am doing in this very article, we are stuck affirming the existence of a metaphysical reality in more than just a cursory phenomenological way. Philosophers have learned to stop denying the existence of a physical reality because there is no explanation that can hold a candle to the simplicity and directness of this one. Our very existence is intertwined with a physical reality and at no point can we remove it from our awareness. The same is true of Storytelling. As the human senses go, so the organs of interpretation go; as the physical reality goes, so the realm of narrative goes (why else would philosophers go on and on about possible worlds?); and as the body goes, so the mind goes.

How absurd would it be to formulate an English question, translate it to Chinese, collect Chinese answers, translate these back to English, and then hope for an optimal data set about that question? Better to use native speakers for all phases. Such is the benefit of approaching a phenomenon on its own terms rather than attempting to reduce it to another. Between the physicalist and substance dualist Stories, only one takes the metaphysical on its own terms. A physicalist must either deny, reduce or subordinate the metaphysical phenomena to her own materialistic account of reality. Ironically, though, the entire discourse in which she narrates her materialistic account exists beyond the observable scope of physical instrumentation. The account itself cannot be accounted for without a pure act of faith: “Trust me, when we understand the brain better, we’ll be able to say exactly how the mind is a material phenomenon.”

As powerful as algorithmic AI appears to be, interpretation just isn’t one of the things that it can do. We are fast approaching a moment in history parallel to the collapse of German idealism in the Nineteenth Century. The history of philosophy tends to attribute this collapse to prominent critics, such as Kierkegaard, Schopenhauer and Nietzsche, but in retrospect these philosophers were merely early heralds identifying the general attitude that was rising out of the culture of their time: idealism, in its effort to subsume the physical reality, had become heavy and overburdened. The Hegelian system was so gargantuan and impenetrable that no argument was needed to tear it down. As Schopenhauer famously said,

[T]he height of audacity in serving up pure nonsense, in stringing together senseless and extravagant mazes of words, such as had previously been known only in madhouses, was finally reached in Hegel, and became the instrument of the most barefaced general mystification that has ever taken place, with a result which will appear fabulous to posterity, and will remain as a monument to German stupidity.

Contemporary commentary is bound to be no less scathing and no less obvious in retrospect when it laughs at our attempts to reduce this non-physical thing I’ve called Storytelling to mere pattern-recognition and identification. Dennett’s Consciousness Explained, a prime example of such an elaborate reductive attempt, relies on the foundational assertion that “qualia do not exist.” In a material science this is a truism as plain as the idealist assertion that matter does not exist. To extend this assertion into a serious theory of human consciousness is as amusing as the very prospect of a pure idealism. “The material world doesn’t exist, eh? That’s a cute idea. On to more serious matters,” has been history’s general response to Hegel’s idealism (though his thinking remains relevant for other reasons). Dennett’s tome is commonly referred to, pejoratively, as “Consciousness Explained Away” because no argumentation is needed to expose the silliness of the endeavor. “There exists no subjective experience, huh? That’s a cute idea. On to more serious matters,” will soon be all that is needed in response to eliminative materialism.

 

Mind as Potentially Measurable

The most dubious element in a physicalist hypothesis is that the human mind is really so simple that it can be reduced to an algorithm. Considering the great complexity of algorithms, this is an easy point to miss. What an algorithmic interpretation asserts about the human mental experience, though, is that all of the things a human mind does can, at base, be replicated algorithmically. This is analogous to asserting that (forgive me for recycling a load-bearing analogy from my last article) some complex confluence of chemistry is sufficient to transmute lead into gold. The entire project of alchemy failed to appreciate the nature of atomic physics. Likewise, ever since Mary Shelley’s Frankenstein we have projected that the right physical cocktail can somehow spark intelligence and awareness.

Perhaps, like the atom, intelligence and awareness have a dramatically different nature than we are imagining. Indeed, why should the mind be any less complex than the body or even the brain itself? Given the very real possibility that we’ve been going at this thing all wrong, the question we may start asking ourselves is: Why have we not been recording data at the interface between brain and mind? At every turn, the data we record contributes to the Frankenstein account of human awareness, but what would we find if we recorded data that honestly approached both the Storytelling phenomenon and brain chemistry in an effort to locate their proper domains and the contact between the two? In some cases of brain damage, a person can have the phenomenal impression that she knows which concepts she is attempting to convey, but fail to find the specific words appropriate to the concepts. Under a substance dualist interpretation, this is a clear instance in which the mind is still functioning (the right concept located), but the brain has failed (the right sign not transmitted). Instances like this indicate both the distinction between mind and brain and their interface. Unfortunately, we do not appear to have a theory of the mind which is adequate for exploring this interface.

A theory of mind capable of grounding the scientific exploration of its subject is as necessary as the quantum theory of the atom was for establishing atomic physics. That is, where there is no Story capable of managing and organizing the data, there can be no science. It is no wonder, then, that we have lost hope for a science of mind. We have not yet found a reliable way to verify functional Stories and falsify dysfunctional ones. After all, how can we if all metaphysical data is, by its very nature, subjective?

Or is it? Physical data is also a subjective phenomenon, but it’s a subjective phenomenon we tend to agree upon. In many ways, we also agree about mental phenomena, but our agreement has sharp limits (cultural origin, religious and political ideology, etc.). There are three major stumbling blocks we encounter in any effort to produce narrative data that we can accurately describe as “objective” in the scientific sense that agreement about the phenomenon is so clear that we can safely call it a “fact”. Where are the facts in human narrative? Are they logical? Are they moral? The first stumbling block is the absence of a theory sufficiently precise to yield testable hypotheses. The second stumbling block is that the metaphysical instrument of measurement is itself metaphysical. This means that all tools we use for probing the metaphysical in its own right must be conceptual constructs. How do we decide which constructs to use? The third stumbling block is that observers must be skilled at using these conceptual constructs. What establishes one as a metaphysical expert? Fortunately, an answer to the third stumbling block ought to be evident if we but provide an answer to the first and second.

The first step toward a science of metaphysics, then, is to formulate a metaphysical theory. The second step is to identify the methods and conceptual constructs for testing. The third step is to identify the qualifications needed in order to employ those constructs. Part 3 of this series will be a rough sketch of the first step; a rough sketch of the second step will constitute Part 4.

 

Cover photo by garlandcannon
