In Defense of AI

“John Henry said, I feed four little brothers,
And my baby sister’s walking on her knees,
Did the Lord say that machines outta take the place of living,
And what’s a substitute for bread and beans, I ain’t seen it,
Do engines get rewarded for their steam[?]”

-John Henry, The Legend of John Henry’s Hammer (Johnny Cash lyrics)

Dating back to at least the 1930s, the legend of John Henry tells the story of a man with a hammer, hired to drive a tunnel through a mountain for a railway track.  One day, however, his job was put at risk with the rise of the steam drill.  Upon hearing of this machine, John Henry exclaimed: “Did the Lord say that machines outta take the place of [the] living[?]”  So he challenged it to a race through the mountain, saying: “a man ain’t nothin but a man, but if you bring that steam drill round, I’ll beat it fair and honest…you can’t replace a steel driven man.”  John Henry beat the machine.  But soon after, he died of exhaustion.

Today, we face a similar confrontation, except this time the machine isn’t any ordinary one.  It’s a machine that’s paradoxically dubbed “artificial intelligence” (AI).  And with its rise, we are faced with a question similar to John Henry’s:  Should AI take the place of human beings? 

Today, I don’t wish to answer that question directly.  Rather, I wish to answer the more basic question: “Can it?”  Moral (or immoral) possibilities depend upon logical possibilities.  If it is in principle impossible for humans to be totally replaced in every respect, then the moral problem of their total replacement boils down to an oddball question (like asking what the highest number is) about a scenario we know can never obtain in the real world.

Ever since the release and public promotion of ChatGPT in 2022, tech experts, business leaders, politicians, and CEOs have been running around as if their hair were on fire and the apocalypse were imminent.  Here I will argue the contrary: it isn’t.

AI has given rise to many fears: fears of a world run by killer robots, fears of AI launching nuclear weapons, fears of the extinction of the human race, and so on.  Certainly, AI will render many types of work obsolete, or at least render certain roles less needed than before.  Programmers, as we know, are already having their jobs upended or changed; data scientists, bookkeepers, telemarketers, computer support specialists, and receptionists will likely be next.  I suspect many of these jobs will still exist, but in much smaller numbers, and those who fill them will largely be acting in tandem with AI or as a backup for when AI fails.  Fewer people will be needed because the remaining few will have become more productive than ever before, using AI as an extension of their work.

Nevertheless, countless new jobs will be created.  When horses were surpassed by trains and automobiles, many horse veterinarians lost work, but those machines created new jobs in the form of train and automobile mechanics.  Just as the rise of TV displaced radio as the most popular means of mass communication but gave rise to numerous jobs in newscasting, so too will the development of AI give rise to a host of jobs of which we never dreamed.

For example, who’s at fault in the case of a driverless car crash?  The passenger who failed to update his car’s AI software, or the company that designed the faulty AI in the driverless car?  Welcome to the field of AI accident law.

What about AI trying to hack into other AI-run devices or platforms?  Welcome to a new world for cybersecurity, spying, and defense.  Who’s going to design these programs?  People.

What about AI robots serving as pet sitters?  Animal behaviorists will be needed (or should at least be consulted) regarding proper animal training and how animals can be expected to react to AI sitters.

Are you a moral philosopher?  Perhaps tech companies will hire you to advise on the ethics of AI; e.g., if the brakes fail in a driverless car, should the car veer into the river, likely killing the family inside, or hit the mother with child in the crosswalk ahead?  Welcome to the need for AI ethics consultants.

Add these new roles to the 20 already proposed by Forbes and you’ve got a whole new cottage industry of jobs.

Besides creating these new jobs, AI will make many things that are far too expensive today much cheaper.  Want to fight a parking ticket, but don’t have the time?  Try DoNotPay, your new chatbot lawyer.  Want to be a more effective real estate lawyer, filling out boring forms with record speed?  Try AI.  Need to draft a will or an estate plan but don’t want to pay high legal fees?  Wait 10 years (despite the naysayers, it will come, even with the approval of big law firms).

AI is also expected to be able to answer advanced theological questions.  Magisterium AI is being programmed to answer questions about Catholic teaching, but presumably the same can be done for other religious traditions.

Ultimately, there will be problems.  But such is the case with any new technology.  Before guns, there were no mass shootings, but there were massive raiding marauders, pillaging Vikings, and death by stoning.  Before cars, there were no car accidents, but there was death by trampling and the smell of horse manure.  Before freezers, there was no freezer-burned meat, but there was excessive spoilage.  Likewise, with AI some problems will disappear or be minimized (driver error, distracted drivers, accounting errors, grammatical typos, the attention economy) only to be replaced by others, like AI hacking, AI mistakes, and a fake emotional intimacy economy.

But some things will never be replaced.  Unless we make the dumb mistake of building an evil AI robot armed with nuclear weapons, humans will likely not go extinct.  Humans have something that machines and robots (even robots with the advanced programming dubbed “artificial intelligence”) cannot replicate: an immaterial component.  As any philosopher of mind knows, there are many serious and powerful arguments for there being an immaterial aspect to the human person.  For a sampling of such arguments for immateriality, see James Ross, Edward Feser, Saul Kripke, David Chalmers, and Robert Koons on this topic.

I will not present their arguments here.  Rather, I will present a simple one, inspired by Mortimer Adler’s work in Intellect: Mind Over Matter.  The argument for immateriality in humans is as follows:

  1. All material entities are individuals.
  2. No universal is an individual.
  3. Therefore, no universal is a material entity (from 1 & 2).
  4. Our intellect perceives universals.
  5. Only an immaterial power can apprehend immaterial entities directly.
  6. Therefore, our intellect is immaterial (from 3, 4, & 5).

Adler explains the first premise as follows: “the individuality of all material or corporeal things is supported by the facts of common experience.  The objects we perceive through our senses are all individual beings—that is, this individual dog, that individual spoon….we have never seen a triangle in general, nor can we imagine one.  Any triangle that we draw on a piece of paper…is a particular triangle of a certain shape and size.”

Universals, by contrast, are not individual things.  Where is 5?  You can draw 5 triangles, stack 5 marbles, or write the numeral 5 on a whiteboard, but none of those is 5 as such.  If they were, then by destroying the 5 triangles, the 5 marbles, and the numeral on the whiteboard, you would destroy 5 as such, which is hardly the case.  Numbers in the abstract are universals; so too are turkey as such, man as such, and truth as such.  None of these is an individual object.

Since universals are not individual objects, and all material entities are individuals, universals cannot be material.  Take the case of the universal 5.  5 exists, but not materially.  If it did, then we could sensibly ask how much it weighs, what shape it has, what color it is, and so on.  But we cannot sensibly ask such questions, because 5 isn’t something material.

Perhaps universals have a connection with material objects insofar as they must be known via our knowledge of the material world, but a necessary epistemological connection doesn’t entail ontological identity.  So, the third step follows.

The fifth premise can be explained as follows: immaterial entities cannot be directly apprehended by material beings.  You can’t pour immateriality into your coffee mug, nor can you shoot something that’s not made of matter.  Perhaps immaterial beings can affect things here below through their power, but we cannot directly act upon them.

We know that we know, and among the things we know are 5, turkey as such, man in the abstract, and countless other universals.  We know them through our intellect (premise 4).  So, since material entities cannot directly apprehend immaterial entities (premise 5), it follows that our intellect must be immaterial.
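
Readers who want to verify that the argument’s form is valid can check it mechanically.  Below is a minimal formalization sketch in Lean 4; the predicate names (Material, Universal, Apprehends, and so on) are my own illustrative labels, not Adler’s, and the premises are taken as unproven hypotheses:

```lean
-- A minimal sketch of the argument's logical skeleton.
-- Predicate names are illustrative assumptions, not Adler's terminology.
variable {Entity : Type}
variable (Material Individual Universal : Entity → Prop)
variable (Apprehends : Entity → Entity → Prop)  -- "x directly apprehends y"

theorem intellect_immaterial
    (intellect : Entity)
    -- Premise 1: all material entities are individuals.
    (p1 : ∀ x, Material x → Individual x)
    -- Premise 2: no universal is an individual.
    (p2 : ∀ x, Universal x → ¬ Individual x)
    -- Premise 4: our intellect directly apprehends some universal.
    (p4 : ∃ u, Universal u ∧ Apprehends intellect u)
    -- Premise 5: only an immaterial power apprehends immaterial entities.
    (p5 : ∀ x y, Apprehends x y → ¬ Material y → ¬ Material x) :
    -- Conclusion 6: the intellect is not material.
    ¬ Material intellect :=
  match p4 with
  | ⟨u, hu, ha⟩ =>
    -- Step 3 (from premises 1 and 2): no universal is material.
    have step3 : ¬ Material u := fun hm => p2 u hu (p1 u hm)
    p5 intellect u ha step3
```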

What does this all have to do with AI?  An AI is merely a program in a machine: a very complicated machine, but a machine nonetheless.  Now, all machines are material entities.  Therefore, no AI machine can have an intellect.  There is something about humans that is unique: our intellect.  AI machines don’t possess an intellect because they are entirely material beings.  Perhaps, if we bioengineered a human being and put some extra circuits in his brain, we could have an AI attached to a human or integrated into a human body, but that would still be a human who’s using the machine.

As John Searle’s Chinese Room argument has shown, the fact that a computer engages in a series of reactions that we then interpret as having meaning or signifying thought does not entail that the computer actually has thought.  Likewise, that an AI might be set up by a human being (with an immaterial intellect) to spit out very intelligible essays or emails doesn’t entail that the AI has understanding, thoughts, or consciousness.
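
By way of a toy illustration (mine, not Searle’s own): a program can map symbols to replies by rote rules and still understand nothing.  Here is a minimal Python sketch along those lines; the rulebook and canned replies are invented for the example:

```python
# A toy "Chinese Room": the program matches incoming symbols to canned
# replies by rote lookup. Nothing here understands Chinese (or anything
# else); the rulebook below is invented purely for illustration.

RULEBOOK = {
    "你好": "你好！很高兴见到你。",          # "Hello" -> "Hello! Nice to meet you."
    "你会思考吗": "当然，我一直在思考。",    # "Can you think?" -> "Of course, I think all the time."
}

def room(symbols: str) -> str:
    """Look up the input symbols and return the scripted reply."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # Fallback: "Please say that again."

if __name__ == "__main__":
    # To an outside observer the exchange can look like understanding,
    # but it is only pattern matching against a lookup table.
    print(room("你会思考吗"))
```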

Whatever the future of AI may be, there will always be something AI cannot have, namely, understanding.  As John Henry said, “a man ain’t nothin but a man.”  And AI ain’t nothin’ but a machine with no intellect.


John Skalko, PhD, is a philosopher living in Massachusetts, who most recently has been avidly researching and writing about the philosophy of animal cognition.  Follow him on X @ThomistAnimals.
