Thoughts on Being Human [3] – Signs of Our Times

What are the signs of our times?  That is: what signifies our here-and-now moment?  Screaming politicians?  Angry mobs?  Memes?  The ubiquity of networked technologies?  It is always hard to tell, from within a moment, what signifies the intelligibility of that moment; understanding precisely where we are requires spending time to find points of reference.  Every moment in history, it seems likely, confuses us more when we are in it than when we have passed it by.  Despite this, which ought to be easily recognized, we nevertheless tend to believe our culture now superior to cultures past—not without reason, mind you—without thinking very much about what cultures future might think of us.

A longer paper addressing some of these issues is available on Academia.edu, and audio of the presentation (43:35) has been embedded below.

Julian Baggini recently wrote, in a piece for Aeon Magazine, that racist, sexist, or otherwise prejudiced great philosophers of bygone eras—such as Aristotle, Kant, or Hume—deserve our admiration despite their failings; for “Anyone who cannot bring themselves to admire such a historical figure betrays a profound lack of understanding about just how socially conditioned all our minds are, even the greatest.”  Baggini’s point, though many may recognize it in the abstract, remains outside the habitual considerations of most people today.  In other words, we may recognize that very many of our beliefs are very frequently the product of our environment—of our “time”—and yet hold our own beliefs as though they were personally-arrived-at convictions of the greatest intellectual certitude and the securest moral rectitude.

Likely, we have arrived at many of our convictions through our own action; at least, in some small part.  But even more likely is that we have played less of a part than we generally suppose; and that the culture into which we have been born or otherwise thrust has played a much greater role than we would like to admit.  To give some anodyne examples: our aesthetic tastes are generally taken to be subjective, and in the age of the internet, decreasingly provincial.  In other words, you can find in the urban northeastern United States fans of Texas-recorded country music, and in the cornfields of the Midwest, lovers of hip-hop.  While this aesthetic diversity might speak to subjective variation—not only in music, but in art, architecture, literature, food, and any other matter of taste—the variation is smaller than a superficial glance would have us believe.  Rather, we are in many ways determined by forces outside of ourselves.  We can see this in two ways.

First, we are naturally limited—determined in our bearing by our culture—by the availability of objects: a rare food that is an acquired taste is one an impoverished girl from Minnesota is unlikely to enjoy, just as a teenager who has grown up on pop music is unlikely to appreciate Franz Liszt (at the very least, as much as Liszt deserves to be appreciated).  But second, and much more importantly, we are limited by unrealized presuppositions uncritically adopted.  Whether conscious of it or not, each of us forms, from a very young age, a background image about the meaning of life, the universe, and everything.  More often than not, the ideological commitments or prejudices belonging to people of the past were likewise rooted in their own presupposed and unquestioned background images.  That is: sexism, for instance, was not an arbitrary selection, nor was it merely a power-grab.  Rather, the belief about the universe was one of pervasive hierarchical order: if things appear different, they also appear, to the mind that presupposes all things to be hierarchically ordered, as unequal.  Though grossly unjust and reliant upon an a priori belief, the ranking of women below men appeared to Aristotle and company as fitting, given the presuppositions already established about how the universe operated.

Especially was such a priorism easy when exercised by nearly all.

Are we really much different, today?  We tend not to see our own ideological commitments quite so well as we do those of others (especially long-dead others, whose ideas were so very different from our own).  We hold our ideologies “too close”, like emerald-colored glasses to which we’ve become so inured we have forgotten the world is not really that green, forgotten that we are wearing them in the first place: “An ideology is really ‘holding us’ only when we do not feel any opposition between it and reality – that is, when the ideology succeeds in determining the mode of our everyday experience of reality itself.”[1]

As said in the previous entry, many—most—of our beliefs develop not through a process of rational inquiry, but rather through one of sentimental attachment, whether this attachment stems from an internal tenacity or from external authoritarian imposition; especially unrealized impositions accepted without question: the bearings of our parents, teachers, friends, or expanded cultural network, bearings which become our own by a social osmosis and which slide into our minds through popular and social media.  The positions we do hold consciously—e.g., our political preferences, our philosophical adherences—more often than not find acceptance only because they have a fittingness to beliefs held a priori.  We are predisposed—determined, in a very literal sense of its Latin etymology (that is, not “determined” as the “inevitable product of temporally antecedent efficient causes”, but as “reduced from a wider potential to a narrower actuality”; having terms or limits set on us)—by innumerable things outside ourselves.  How?

To invert the meaning of our titular phrase: we not only know a time by its signs, but a time itself is determined by its signs.

What is a sign?  Casually, we likely think of nothing more emblematic than the street sign: especially the large, red, octagonal one with “STOP” emblazoned boldly across it; or perhaps the large neon announcement of a place to eat—or of a different concupiscent appetite.  We might think in terms of signals and the ordering of logistics, or even in terms of marketing.  But these are, for the most part, unreflectively-adopted notions and merely examples of a phenomenon, not a definition nor even an oblique grasp of the essence of what it is for something to be a sign.

Perhaps we can enter into a more critical appreciation of the sign by very briefly considering its history.  John Deely (1942-2017)—philosopher and semiotician (and my dissertation director) whose Four Ages of Understanding is vital reading for anyone interested in the history of signs—showed that it was Augustine of Hippo (354-430) who first defined a sign as indifferently natural or cultural: “A sign is something that shows itself to the senses and something beyond itself to the soul” (Signum est quod se ipsum sensui et praeter se aliquid animo ostendit).  The notion of signs was central to Catholic theology and philosophy throughout the Latin Age—in explaining the nature of the sacraments, chiefly—and developed from Augustine’s notion of signs as physical things that, being sensed, direct our minds to something else, to a more general but essential meaning given by John Poinsot (1589-1644), as “that which represents to a cognitive faculty something other than itself” (id quod potentiae cognoscitivae aliquid aliud a se repraesentat).

A sign, therefore, has an inherently relational quality; it represents to.  Some centuries after Poinsot, Charles Peirce (1839-1914)—who did not know Poinsot’s long-forgotten work, but did know the work of Poinsot’s teachers, the semi-anonymous Conimbricenses—recognized that this relationality must be triadic in nature: in other words, the relation that a sign accomplishes is no mere concatenation of dyadic (two-term) relations, but a mediation whereby one thing affects another, which in turn affects a third back towards the first.  A mere thing, apart from this accomplished relation, is not in fact a sign—that is, it doesn’t perform the significative function.  Were I to say “illud quod primo cadit” to someone entirely unfamiliar with Latin, it would not signify to them what it signifies to someone who knows the language; especially the medieval usage; and most especially the oeuvre of Thomas Aquinas (above all his 1270 Summa theologiae, prima secundae, q.94, a.2).  To a tribal native from a remote region of the world, the “STOP” sign does not signify the law, which she does not know (it may correspond to something in her own culture, but that is a rabbit hole we’ll not go down).

The causality exercised by a sign, therefore, consists in this: it directs our cognitive faculties to this rather than that.  To some extent we are free with regard to the signs by which we allow ourselves to be directed.  For instance, I may choose to think not of the weather outside, but of the book on my desk; or of what it might be like to remain conscious while being sucked into Sagittarius A*.  I—and you, and everyone else—am, quite likely, more inclined to think of some things than of others, a thinking not free of cathectic influence.  If an attractive woman flaunts her most attractive qualities towards me, seductively, it is much more difficult to think about philosophy (though doing so can help prevent making a mistake with that woman—especially if you start talking about philosophy with her; often a real mood-killer).

Often, however, this distractedness follows not simply from the here-and-now objects drawing our attention.  We as human beings are creatures shot through and through with habits: with governing tendencies which draw us towards behaviors and objects even in their absence.  Any habitual smoker—or explicit addict of any kind—can attest to this phenomenon.  But one need not be an outright addict in order to have developed habits; and many of our habits—particularly those that are not well-defined or explicit—escape our attention.  These are not only habits of behavior (virtuous or vicious, benign or malicious, unimportant or vital), but also habits of thinking.  Whereas habits of behavior can in part be explained through consideration of neurochemistry—lust and love and romance, testosterone and oxytocin and dopamine, for instance—habits of thinking can be understood only by recognizing the influences and the actions of signs.  Thus, just as our habits of behavior become profoundly determining of our actions and especially of our reactions when we are not attentive to the semiotics at work in our lives (since our behavior does not occur in a cognitive vacuum, but always as a consequence of some cognition), even more so are our habits of thinking semiotically determined.  Most especially is this the case with the establishment of what we consider “normality”—the baseline or background image mentioned above—which notion comes primarily from influences we do not even recognize.

In other words, we believe things not only without really knowing what it is in which we believe, but often without even realizing that we believe them at all.  We—confused somnambulists plodding along with uncritical beliefs—are indeed both signs of our times and ourselves formed by the signs of our times.

The next entries will consider this semiotic, cultural determination in three interrelated movements: first, distinguishing the cognitive faculties which belong to the human being; second, considering the habituation of our cognitive faculties; and third, illuminating some of the cultural determinations which are common to us but of which we are unaware.

[1] Slavoj Žižek 1989: The Sublime Object of Ideology, 49.

Participation & Asphyxiation

In his highly-quotable, rarely understood, and seminal work, Understanding Media, Marshall McLuhan distinguishes early on between what he calls “hot” and “cool” media, which are, respectively, “high” and “low” definition.  A hot, high-definition medium is low in participation, while the cooler and the lower-definition a medium is, the deeper our participation in it must be: that is, high-definition media transmit large quantities of information, while low-definition media provide less and require us to supply what completes the intended meaning.  From the Thomistic, semiotic standpoint, this is an incomplete picture, inasmuch as the effects of media must be understood not only in terms of their relation to the exterior senses but as signs involved in triadic relations to the interior senses–that is, to the powers of perception as such.  McLuhan often mentions perception, but without any elaboration of the faculties involved.  More’s the pity.

Regardless, there are a few statements McLuhan makes concerning high-participation or depth-participation media which jumped out at me today.

“A cool medium… leaves much more for the listener or user to do than a hot medium.  If the medium is of high definition, participation is low.  If the medium is of low intensity, the participation is high.  Perhaps this is why lovers mumble so.” – [31] “The Television: The Timid Giant”, p.425.

“Depth involvement encourages everyone to take himself much more seriously than before.” – [17] “Comics: Mad Vestibule to TV”, p.227.

“The problem, therefore, is not that Johnny can’t read, but that, in an age of depth involvement, Johnny can’t visualize distant goals.” – [17] “Comics: Mad Vestibule to TV”, p.229.

For one thing, we in general pay too little attention to the influences on our thinking, especially–as McLuhan keenly points out, repeatedly–in terms of the medium.  That is, we think an awful lot about the content of our media without thinking about how we are affected by the means of the content’s delivery; especially when we fail to notice that the content itself is another medium.  As Charles Peirce put it, the entire universe is perfused with signs.  We not only see the light through the television, but the person; we see not only a person, but a model for action; we see not only a model for action, but a statement on our culture–and so on.

For another, media are not good or bad by virtue of being cool or hot: as instances of cool media, McLuhan lists the medieval manuscript right alongside the (low-definition 1960s) television.  The defining characteristic of a cool medium is that one must participate in what the medium brings in order to attain a completion, a full “sensation” of what is being presented.

And for a final thing, high-participation media–particularly those which require a depth-involvement in the moment–captivate us if we engage them; a captivation which dissuades us from other pursuits.  In other words, we direct our mental energy to that medium.  In contrast, high-definition, low-participation media are passively engaged.  Since they supply us enough information of their own, we simply sit there and absorb.  There is a tremendous difference between watching a 1950s sitcom and one from the 2000s; or between watching Firing Line in the 1980s and Tucker Carlson in the 2010s–not just in terms of content, but in terms of medium.  It is little surprise that with high-definition television, television programming now requires lengthy arcs, seasons; episodic adventures are a thing of the past.  Seinfeld would no longer work; the Netflix model–followed even by those who do better than Netflix–is that of the “40-hour movie”.  Binge-watching is encouraged; regular binge-watching, of course, dulls any habits of active participation.  Contrariwise, too much high-participation engagement–for instance, social media–shuts off our receptivity to new ideas.

To understand our intellectual anemia, we will need first to free ourselves from the suffocating ubiquity of our social connectivity.  If only we had the energy…

Thoughts on Being Human [2] – Sentimental Belief

Where do we get our “ideas” or “beliefs”?  Before answering, it is helpful to define each of those terms—they tend to be terms that we presume ourselves to know but may not really understand.  First, “idea”: an idea is a conceptualization of meaning.  In other words, it is the means by which we understand the what of something.  Therefore, an idea is also frequently called a “concept”.  Second, “belief”: a belief is the conviction in the truth of an idea so as to dispose us to act in a certain way given a certain situation.  For example: someone may have the belief that red (an idea) is a color (another idea), and when asked to name a color, will say “red”.  Or for another example, someone may believe that others have a right to the truth (an idea), which belief disposes that someone habitually to tell the truth (or, at the very least, to struggle with lying).

We are all familiar with having ideas and beliefs.  We can entertain an idea without believing in it.  We cannot believe in something without having ideas; that is, beliefs are dependent upon ideas.  But how do ideas become formed?  Or, where do they come from?  This is a very complex topic, both historically and philosophically, often studied as the subject matter of at least one undergraduate philosophy course, and mastering the issue requires a great deal of study beyond that.  But the really important point, and one which must be hammered home here and repeatedly, is that we do not know our ideas themselves.  Believing that we know our ideas directly and their significates only indirectly undermined the entirety of modern philosophy.  This strange notion, having seeped into the culture over centuries, still wreaks havoc on our society today.  In contrast, it must be known that our ideas or concepts are means on the basis of which we are oriented towards possible ways of being.  This character of cognitive orientation we call “intentionality”.

But while the question of ideation occupies a more fundamental position in the theory of knowledge, the more accessible question is how beliefs are formed.  That is not to say it is a question easily answered; but it is answered more easily than the same question about ideas.  Moreover, we can find some help from Charles Sanders Peirce (1839-1914).

In brief, Peirce (1877: “The Fixation of Belief”, in The Essential Peirce, vol.1, p.109-23) outlines four different ways of “fixing” (in the sense of “affixing”) our beliefs: what he calls the methods of tenacity, authority, the a priori, and the scientific.  In the first method we repeat an idea to ourselves until it seems true and natural; clinging to the idea no matter what anyone else says or what anything else shows.  For example, think perhaps of sports fanatics who have unshakeable faith in the supremacy of their teams, or the way that someone convinces himself or herself that something is a good plan, or that he or she is special, or that a love interest has mutual feelings.

The second method, that of authority, is something of an external form of the first: that is, a figure imbued with authority repeatedly and/or forcefully tells others that something is true.  Historically, this has been the most common way of fixing beliefs.  Doubtlessly, the imagery evoked is of priests in pulpits and nuns in school; but in truth, while religious authority has often operated in this fashion, other authorities have as well—in fact, the existence of law itself has often functioned in precisely this belief-affixing fashion, law being seen for most of history not as the restrictive clauses of a social contract but as a formative principle for society.  Likewise, the beliefs of one’s parents—and not the kind they attempt to impose by stricture, but the kind received by a social osmosis—are often accepted on an implicit authority.

The third method, the a priori, operates by discerning what fits one’s experiences and already-conceived beliefs.  In other words, there is a semi-critical element in this method: a proposed new belief is rejected if it does not cohere with one’s prior beliefs.  This way has the merit of consistency—avoiding the holding of contradictory beliefs, which the ways of tenacity and authority may not—but nevertheless presupposes the truth of those beliefs we already possess.  Thus, while a stronger edifice than the straw-like constructions provided by the aforementioned methods, the a priori remains merely a house of sticks.

The fourth way, the “scientific”, should not be confused with the common conception of science as the empirical observation of experiments done by people wearing white coats in laboratories—though such observation and experimentation is included in what the scientific approach to belief comprises: namely, any critical evaluation of the relation between what is observed and what is thought about what is observed.  In other words, the scientific approach to fixing belief never rests; always, the veracity of the connection between the observed and the believed lies open to further scrutiny.  Not to be confused with skepticism—where one withholds belief from any proposition not absolutely certain—the scientific method does rely heavily on doubt: doubt being understood as an “irritation of the mind” which drives it to seek resolution.  Doubt is the movement which, followed earnestly, brings us to belief.

While it may not always have been the case, today, most people probably form their beliefs through some combination of these four ways and, sadly, for the most part through the first two.  Consequently, we can identify two broad tendencies of such formation: towards the sentimental and towards the critical.  By the “sentimental” here, I mean attachment to a belief on the basis of feeling.  By the “critical”, I mean conscious formation of belief by the deliberate pursuit of and openness to doubts.  Sentimental attachments become a vicious circle: feelings determine the acceptability of beliefs which reinforce the feelings.  Our sentiments arise in a way just as complicated as our ideas, but are usually affixed through some form of tenacity—whether through repetition to oneself or through someone taken as an authority.

Often, someone may begin their lives from sentimental attachments but proceed into an a priori way of thinking.  For instance, someone may take it on authority that there is or is not a God who imbues the cosmos with a natural order that gives normative moral force to human behavior.  If someone does begin with this as a sentimental attachment, then one can systematically construct a moral theory which is logically consistent with this authoritatively-given belief.  Contrariwise, if someone does not, an alternative moral theory could be formed which is logically consistent, but only given some other authoritatively-given a priori belief: such as, say, that everyone should do as they please so long as it does not hurt or interfere with the goals of others.  So long as each a priori claim—that of a natural, normative order or that of moral relativism—is simply taken for granted as true, neither consequent theory has any claim to validity over the other.  You might as well argue over whether the Pittsburgh Steelers or New England Patriots are the better football team (“better” not in the sense of victories and losses, but better-to-be-a-fan-of).

For much of the past 2000 years, the majority of people in the Western world have taken it as a given that there exists a divinely-appointed normative order.  In the past century, that majority has waned as the opposite, relativistic a priori has waxed, accelerating with the spread of electric technology.  In other words, the sentimentally-founded beliefs most commonly appropriated at a given time and place are largely a product of cultural determinations: the context in which each human individual grows up, the things they see as “norms”; with television and now the internet, the “great stereopticon” of media can spread these norms instantaneously.  There may be no central authority, in other words, but we are very much under the influence of authoritative fixation of belief: less by explicit condemnation of old moral beliefs than by presentation of the alternatives as the norm.

As we will see in the next entry, all this is explicable in terms of signs: that is, there is a specific kind of causality particular to a sign—a causality we can call objective or specifying, or objective specifying—which orients us towards our objects.  An uncritical attitude towards the causality of signs leaves us open to determination by them; and, not infrequently therefore, leaves us standing on the shaky ground of mere culturally-cultivated sentiment.

Thoughts on Being Human [1] – Presumption

One thing lies common to all of us: we are human.  We have human bodies, human minds; human feelings, human thoughts, human desires, human beliefs.  But aside from these and other such generic commonalities, our experiences differ.  The experiences of being a man or being a woman (or being intersex, for that matter, though an abnormal case) all fall within the category of human experiences, but seem at least somewhat mutually exclusive.  To be a man is not to be a woman and vice versa.  Likewise, some of us are born with a disposition towards athleticism and sports, others towards bookishness and study.  Others are “neuroatypical”, an umbrella term (one that seems ever-widening) designating those on the autistic spectrum, those with schizophrenic tendencies, or those with bipolar disorders.  And all of us have variation in our cultural experiences: from the minor variations evident in families living on the same street, to the major variations of Western and Eastern civilizations.

Our experience of being human receives its formation, in other words, from factors both internal and external.  Some of what we experience depends on the context into which we are born, raised, and move throughout our lives: the cultural world—not merely the physical structure of the planet (though certainly a part of it), but the relational totality of the environment against which we are opposed as a self.  This relational totality is complex and includes various objects irreducible to the physical structures we sensorially experience [an idea to be explained elsewhere].  But the self—while never apart from the world and “always-already-in-it”—as a singular and unrepeatable nexus of experience, does not receive all of its determinations from cultural realities.  Some are innate: born into us, passed through genes, whether faithfully or defectively received.  Others are the product of our own willfully-chosen behaviors; probably, in fact, more than we would like to admit (namely, most of our failings).

Both the “external” or worldly factors and the “internal” or innate and chosen factors are commonly inscribed in individual persons: that is, the context of experience always finds itself inscribed in the text of the human life.  This idea of treating not only human history but also individual human lives as “texts”—suggesting the idea of stories or narratives—might give, and often does give, the wrong impression: primarily, the valorization of the self as a hero.  This valorization may come through a passive re-interpretation of the events surrounding oneself—reappraising all that has happened in one’s life so that one appears the protagonist against malevolent or oppressive forces—or through a more proactive “self-authoring” tendency (such that enterprising capitalists like Jordan Peterson will charge healthy rates to help you “re-write” your life).

Regardless of whether one reappraises the context of life or reappraises oneself, the dramatization of life as a story, in which the self stands as the protagonist, follows the broader cultural trend of emphasizing the importance of lived experience: not so much the intelligible or articulable content of what one has experienced, but the undergoing of it, the being-the-one-who-experiences whatever it might be.  Integral to lived experience are the cathectic responses we have (“cathexis” here being a term appropriated from an English translation of Freud to mean, generally, both feelings, or physiological responses to perceptions, and emotions, or physiological responses to intellections often including feelings, as intellections correlate with perceptions; something to be investigated further down the line).  Because all of our experiences are unique in time and place and the circumstances of their occurrence, the whole of our lived experience is always singular; it cannot be shared with anyone else precisely as it has unfolded for us—there is an irreducible subjectivity to the living of our experience, meaning that it belongs to that specific subject and no other; it cannot be reduced into a form which may be objectivized, i.e., turned into an object, that others can perceive and know.

Incommunicability of lived experience, and particularly of its cathectic core, has led to a belief that there is a kind of idiomorphic self-consciousness that every individual possesses—that my being myself cannot be understood by you nor any other self—that, fundamentally, unless you have the same experience as I do, or one very similar, you cannot understand what I understand.  In other words, it has become a widespread belief that knowledge or understanding of what it is to be something is derived from lived experience, and this knowledge cannot be communicated by language.  In consequence it is argued that one cannot understand what it is to be transgender if one is not transgender; what it is to be a woman if one is not a woman; what it is to be black if one is not black; what it is to be poor if one has not been poor; and so on.

To a certain extent, this is true: to the precise extent that those experiences are lived in an irreducibly subjective way.  But this extent does not go nearly so far as generally believed or claimed.  Living through some experience does grant a privileged knowledge of how that experience has affected oneself.  This privileged knowledge does not comprise the entirety of the experience, however, nor is “knowledge”, let alone “understanding”, comprised of naught but the living of an experience.  To the contrary, lived experience contributes only a minor part to the greater whole which we call knowledge.  Although a unique form of disclosure, lived experience reveals for us only the phenomenological moment of selfhood, at the intersection of meaning and experience.

Meaning itself, however, can never be circumscribed by what has happened to or within some individual subject.  It is, by its very nature, suprasubjective, communicable, articulable.  How often do we have the experience of searching for the right word?  We have a sense of what we would like to say; we have an idea that we want to communicate; but until that right word is found, we feel an incompleteness to our knowledge—like when we cannot remember an actor’s name, or where we know that woman from, or the word which describes a feeling we are experiencing.

Or, put succinctly: if we cannot put something into words, do we really, fully, truly know it?  If we cannot explain what it means to be a man, or a woman, a Catholic or a Jew, a theist or an atheist—if we cannot explain what it means to be human—do we really know what it is at all?

 

Thomistic Psychology & Technology

As a Research Fellow with the Center for the Study of Digital Life, my main project has been to explore the connections between the faculty psychology originating in Aristotle’s Περὶ ψυχῆς (transliterated: Peri psyches; in Latin: De anima; in English: On the Soul) and carried into the medieval tradition, especially as found in Thomas Aquinas and his specific enumeration of “interior sense powers” (ST Ia, q.78, a.4), and the effects of technological development.  I wrote a bit about the interior senses previously, but in a general and sweeping sense.  Here, I want to put my research instead into an explicitly Thomistic context, and explain a little about the importance of technology to our faculties.

What is “psychology”?

Most people, when hearing the word “psychology”, typically think of the contemporary scientific practice, and likely conjure up images of scientists in lab coats, or perhaps the practitioners of psychological study’s therapeutic applications, psychiatrists: bearded men with round-rimmed glasses writing notes in an expensive chair while someone lying on a chaise describes a dream about his mother (even though the vast majority of students in psychological disciplines today are women).

In both psychology and psychiatry, the common object of study and treatment is the complex of mental and emotional subjectivity—what someone thinks and feels—even if it is believed that the basis to which these phenomena can be reduced (a reduction not shared by all in the psychological sciences) is the biological, be that genetic or something specifically neurological.  This understanding of psychology as comprising an understanding of the complex of mental and emotional subjectivity is carried into applications for the entire organization of society: in warfare, business, economics, and especially in marketing.  In this way, there is a vernacular use of the term “psychology” which is similar to (being loosely derivative of), but does not map onto, the academic use.

The common root of these terms, psychology and psychiatry, is the Greek word psyche (ψυχή), which has a much broader meaning than is studied in the practice of contemporary academic or laboratory psychology, treated by contemporary psychiatry, or signified by the common vernacular use of the term “psychology”.  Psyche is translated into Latin as anima, and both the Greek and Latin terms are conventionally translated into English as “soul”.  Unfortunately, due to historical misappropriations, the English “soul” has been reduced to its association with the idea of the “spiritual soul”, thereby losing much of the rich significance possessed by the terms of antiquity.

In other words, for Aristotle just as for Aquinas, psyche or anima did not mean a “supernatural”, ethereal force; the soul was not a ghost in the machine, but the vital force of any living being.  The questions of the human soul’s spiritual dimension–“spirit” likewise having been a misappropriated term taken by later thinkers to signify that same ghost in the machine–arise not from a presupposed supernatural existence of the human, but from the natural intellectual capacities that an earth-bound, bodily-existing human exercises.  The spiritual dimension, in other words, is part of the same vital force that orders the life of the body.

“Psychology” as I use it here–as a philosophical or cenoscopic inquiry–is a study of the basic organizing principles enabling bodily life; thus, plants and non-human animals likewise fall into the study of “psychology”; not in their biological structures (the processes through which the body operates), which we might consider as the material conditions of life, but in their organizational structures: the formal and final causes making the thing to be the kind of thing it is, which define its “nature”.

Specifically, this study centers, for the Thomist, on the “faculties” or “powers” of the soul (I prefer the former term, as the latter is used with a greater variety of meanings and therefore easily becomes confusing).

Faculties of the soul

For human beings–narrowing our focus here–these faculties are the “vegetative” or “nutritive”, the exterior senses, the interior senses, the intellect, and the will (see ST Ia, q.78 generally).  The nutritive faculties are those we share in common with all other living beings–the metabolic capacities, essentially–while the exterior senses are commonly known as sight, hearing, smell, taste, and touch, though “touch” is generally now recognized as a genus of many other senses (equilibrium, temperature sensitivity, textural sensitivity, etc.).  Intellection is the capacity to recognize being and therefore the meaning of objects in themselves as things existing independently of our reference to them; and will is the capacity to direct oneself and one’s faculties (to some degree) to the pursuit of objects based upon what we have cognized about them (or to allow oneself to be directed by faculties depending upon their perceived rather than intellected desirability).  Although the nuances of these faculties are many and fascinating, it is the often-neglected “inner senses” which command our attention today.

It is these four faculties–the sensus communis or, as I have come to call it (“common sense” having an entirely different meaning in English and therefore being unsuitable as a translation), the integrating sense; the vis imaginativa/phantasia or simple retention; the vis memorativa or recollective retention; and the vis cogitativa or cogitative faculty–that constitute our perceptive capacity.  That is, often what we think of as sensations are in fact perceptions; we may sense something without perceiving it and we may perceive something that, strictly speaking, we do not sense by the exterior faculties.  Aristotle divided this into, respectively, sensibles in themselves (e.g., white, loud, acrid) and incidental sensibles (the son of Diares).  In other words, we perceive unities that we do not sense.  This unification–whereby sensations are collated so as to form unities which may be perceptually objectivized–occurs through the integrating sense (cf. SCG II.100.3).

[Figure: Perceptual_Faculties – A working model of the interior sense faculties and their operations.]

Perceptual objectivization–which I attribute formally to the cogitative faculty–is the germ of all the interior senses’ operations.  Subsequent to such objectivization follows a bevy of further operations: in the simple retention, the objects of exterior sensation are preserved without any patterning, contextualization, or experiential content: that is, we retain things just as we have sensed them.  The recollective retention retains not only the patterns in which we observe sensible objects, but also our perceptual experience of them–which includes the judgments of the evaluative operation, whereby the cogitative faculty judges an object beneficial, harmful, or neutral.

The cogitative faculty may then invoke both retentive capacities: for a new evaluative operation (e.g., this thing now seems beneficial but did it previously seem harmful?), for a memorative operation ordered towards some executive operation (e.g., how to get to the nearest water source from an unfamiliar location), or for a fictive operation–that is, a creative assembling based upon what has been retained from experiences past.

All of these operations have shared dependencies: though the cogitative faculty is the core of interior sensation, the operations it performs–in human beings, at least–are highly complex and depend upon the right-functioning of the other faculties.  Because all of these operations occur through the brain, the organ which adaptively fulfills these functions, there are always potential material faults.

As an aside: with the rise of scientific psychology in the late 19th century and the identification of the brain’s varied role in cognitive functioning, few who applied themselves to psychological studies retained this division of faculties; the brain seemed a homogeneous organ which performed all the operations, and thus the division of the faculties seemed arbitrary.  What these rejections missed is that, although the brain is a singular organ, the various regions become dedicated through early plastic imprinting to perform determinate functions which are common to all human beings.

At the same time, something can “go wrong” with these faculties and their relations to one another without any “material” fault.  In other words, these faculties can become ill-proportioned and distorted from fulfilling their proper end, which in human beings specifically is the service of the intellect’s discernment of the true (and thus the true good).  While such a disproportion can happen in any circumstance–because the disproportion is essentially a consequence of habituation, the forming of an active disposition in the faculty to operate in a determinate fashion–it happens more readily through technological means.

Technological extensions and distentions

Very succinctly defined, technology is the collected knowledge concerning means of artefactually extending natural human capacities within species-specifically human environments.  In a later post, I will explain this definition.  For now, we need only to note that a specific technology is the knowledge of a specific way of extending a natural capacity.  Usually, this knowledge is embodied in a device; oftentimes the device is even developed before the knowledge is precisely attained.  Regardless, the technological extension allows for something that we can do naturally to be done through an instrument, often an instrument streamlined for that very purpose and therefore more apt for performing the operation than we are unaided.  A hammer, for instance, is a smashing instrument.  A fist can be, as well, but you’ll likely bloody your knuckles before you get a nail deep into a piece of wood.

Today, we can see profound effects of technological extension on the interior sense faculties of the human psyche: most especially, in the age of television–a medium of communication which generally extends our ability to portray images but which, in any of its actual instantiations, ordinarily habituates us to the acceptance of illusory objects–do we see that our faculties of retention have been minimized and our operations of fictive collation extended.

Thus, while many people are ready to blame the emergence of digital life for the current hysteria gripping the Western world–for instance, in the preponderant voicing of postgender ideologies–the truth is that the roots of our present delusional normal were laid decades ago.

And we are only just now beginning to understand the psychological consequences.