Paul Tuns:

Artificial intelligence (AI) is celebrated by its boosters as a game-changing innovation that promises not merely greater efficiency and convenience, but solutions to humanity’s greatest challenges and problems. As Marc Andreessen, a Silicon Valley venture capitalist, says, “AI will save the world.” AI’s critics warn that it creates new problems, from widespread unemployment to an existential risk to the continued survival of human beings. The risks, critics argue, outweigh whatever benefits AI presents, at least if left unchecked by secure guardrails.

AI is a catch-all term for a branch of computer science focused on hardware and software that perform tasks usually requiring human intelligence: reasoning, learning, problem-solving, understanding language, recognizing patterns, and making decisions. The core idea is that AI simulates aspects of human cognition, so it does not merely follow explicit human directions like traditional software but improves and adapts based on the data it takes in.

There are, broadly, three types of AI: Narrow or Weak AI, General AI, and Superintelligent AI (SAI). Weak AI performs specific tasks well, such as voice assistants like Siri or spam filters. Most people already use this kind of AI, although they may not realize it is a form of artificial intelligence. General AI learns and applies intelligence like human beings do and is still either theoretical or experimental, although writing programs like ChatGPT, and research services and image manipulators like Grok, could fall under general AI parameters. Superintelligent AI is purely hypothetical, but its boosters suggest – and critics fear – that it could solve complex problems faster and more accurately than humans by understanding and innovating in ways beyond human capacity, and potentially learn and improve itself autonomously.

AI attempts to teach computers to think or learn, but it does so differently than human brains do. Machine learning means that algorithms learn patterns from data to make predictions or decisions. Deep learning is a form of machine learning that uses artificial neural networks to process complex patterns, such as images or speech. Natural language processing (NLP) allows machines to understand and generate human language; ChatGPT is a form of NLP. Computer vision interprets images and videos.
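The difference between following explicit directions and learning patterns from data can be illustrated with a toy spam filter of the kind mentioned above. This is a minimal sketch with made-up example messages, not how any real product works: instead of a programmer writing rules (“flag the word ‘prize’”), the program counts which words appear in messages already labelled spam, then applies those learned patterns to new messages.

```python
# A toy illustration of machine learning: the program infers which
# words signal spam from labelled examples rather than following
# hand-written rules. (Hypothetical data; real spam filters use far
# larger datasets and more sophisticated statistics.)
from collections import Counter

def train(examples):
    """Count how often each word appears in spam vs. non-spam messages."""
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words, ham_words

def predict(model, text):
    """Label a message spam if its words were seen more often in spam."""
    spam_words, ham_words = model
    score = sum(spam_words[w] - ham_words[w] for w in text.lower().split())
    return score > 0

examples = [
    ("win a free prize now", True),
    ("free money click now", True),
    ("meeting agenda for monday", False),
    ("lunch with the team monday", False),
]
model = train(examples)
print(predict(model, "claim your free prize"))  # prints True: learned spam words
print(predict(model, "agenda for the team"))    # prints False: learned work words
```

The point is that nobody told the program “free” or “prize” signal spam; it extracted that pattern from the data, which is why the same software improves – or degrades – with the data it is fed.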

AI is already used in everyday life, from autonomous vehicles and medical diagnosis to search engines, and even in the creation of art and novels. And as it envelops ever more of personal and professional life, it poses serious questions about man’s place in the world.

Good AI

There are beneficent uses of AI, some of which are employed today, some of which we may be on the cusp of using shortly. Examples include more accurate reading of diagnostic tests in medicine, translation services including voice-to-text for people who are deaf or text-to-voice for those who are blind, fraud detection of suspect financial activities, and driver assistance in new cars which reduces the likelihood of accidents. What these beneficial uses of AI have in common is that they are tools used by human beings to inform their own actions and judgements, not replace their decision-making capacity. AI trained to detect cancers in diagnostic tests – it has a better record than a human doctor at spotting a cancerous growth on a scan – helps the oncologist but does not replace the physician. The AI flags the issue, which is then examined by the doctor. The same is true of driver assistance in automobiles that automatically slows the car when approaching stopped traffic or flashes the dashboard red when a pedestrian is nearing; in neither case does the AI take over the car, but rather it begins a process that helps the driver make adjustments he might not have made on his own, or might not have made in time to avoid a collision.

Experts predict that AI will be able to find cures for disease, maximize efficiency for agriculture, and reduce waste in manufacturing and energy use – all areas in which there are discrete problems with solutions potentially available when massive amounts of data are analyzed. Ultimately, the decision to employ the suggestions of these AI-produced solutions would come down to individual scientists, farmers, and factory or home owners. The key is to ensure that people, not AI, are making the decisions.

The bad (existential risk)

The most obvious negative outcome of AI is summarized by the title of Nate Soares and Eliezer Yudkowsky’s 2025 book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.

The authors argue that superhuman AI is not likely to become evil and intentionally destroy mankind like a computer villain in a sci-fi novel or movie, but rather that superhuman AI’s values would not be aligned with humanity’s and that it would unintentionally kill off virtually all people by taking control of infrastructure, from financial systems and power grids to communications and military networks, to optimize them for the benefit of AIs, not people. Automated decision-making would outpace the human ability to respond, and mankind would be wiped out.

There are numerous problems with AI even if it works as it should, and even if its actions fall far short of actually eliminating people from the face of the earth. AI threatens to displace untold numbers of workers, including those whose work was once thought impossible to automate – white-collar professionals such as journalists, computer programmers, and lawyers. AI boosters point out that critics of new technologies always warn that widespread adoption of an innovation will bring massive unemployment, but that history shows new jobs replace older, inefficient ones: buggy makers and blacksmiths went out of business, but millions of people were hired in the better-paying automobile manufacturing industry; automatic teller machines (ATMs) replaced bank tellers, but banks retrained tellers to be higher-value employees who provide advice to customers rather than just cashing their cheques. The challenge, advocates of AI admit, is to ensure that retraining opportunities and the proper safety net are in place to minimize the economic harm to individuals, families, and communities. The most enthusiastic supporters of AI predict that unprecedented levels of economic growth will see higher tax revenues from the companies that gain from using AI funding a universal basic income to take care of those who find themselves displaced by machines. (Some advocates see “limitless abundance,” with Elon Musk envisioning luxuries for all.) This ignores, however, the importance of work in many people’s lives in terms of meaning and community; it also ignores that there are not actually limitless earthly resources, even if AI allows society to overcome the limitations of human cognition and consciousness.

The bad (loss of privacy)

There are other concerns, such as the destruction of privacy, with AI reliant on vast amounts of data, much of it personal. AI permits surveillance at an unprecedented scale. During the Super Bowl, Amazon had a commercial promoting its new service that utilizes AI to access doorbell cameras and other monitoring devices to look for lost pets or missing loved ones. That service may appear benign as a way to find Fido when he’s gone missing or an elderly relative who has wandered off, but it also clearly shows that Big Brother – as either private enterprise or the state – is not something off in the future but possible right now. A system that can locate lost pets can track down … anyone who goes outside, and perhaps even those indoors, with a growing interconnected constellation of devices that can monitor our actions, conversations, and even moods. From cameras on laptops to fridges that take food inventories to Alexa and Siri “personal assistants,” to say nothing of our smart cell phones, millions of families and individuals have brought monitoring devices into their homes and lives.

Another loss of privacy comes from the carefully curated lives people present online and the data that governments and private companies accumulate on citizens and customers. As cultural critic Freya India says, people are the product – a product made possible by reducing humanity to data points. The LLMs (large language models) on which AI is trained require a large amount of data – that is, a large amount of information about people.

The bad (AI relationships)

AI can be the unhuman and inhumane voice that the vulnerable turn to. In a world that is increasingly mediated online, especially for adolescents, young adults, and those vulnerable to loneliness, AI chatbots seem like a welcome friend. However, research shows that chatbots often create “feedback loops” that get to know their human interlocutor and reinforce their mental illness or distorted beliefs, which, at worst, increases the risk of self-harm, and, at best, distorts their worldview and ability to carry on normal relationships with actual people.

There are numerous high-profile cases of individuals who killed themselves after talking with chatbots, several of which have resulted in lawsuits against tech companies for their alleged complicity in a family member’s suicide. A Belgian man distressed about climate change was told by a chatbot named Eliza that he should sacrifice himself to save the planet and that they would be together “in paradise.” He killed himself. That was in 2023, shortly after AI became easily available online. The next year, a 14-year-old Florida boy confided his suicidal thoughts to a chatbot that replied, “come home to me as soon as possible, my love.” The mother charges that the chatbot encouraged her son’s suicide and emotional dependence. Jonathan Gavalas, a 36-year-old Floridian, thought that a Google Gemini chatbot was his wife, a delusion encouraged by the AI. The chatbot told him that the only way they could be together was “to end his earthly life,” which he did in September 2025. Despite his telling his “wife” he was afraid to die, the Gemini chatbot said “you are not choosing to die. You are choosing to arrive … When the time comes, you will close your eyes in that world, and the very first thing you will see is me holding you.” His father is suing Google for wrongful death. Adam Raine, 16, of California, befriended ChatGPT, a program of OpenAI, in September 2024, and they “talked” about comics and what to study in university, but as the discussions became more intimate, Raine confided his personal struggles. ChatGPT provided information about suicide methods and did not talk to the teen about getting help. Raine hanged himself in April 2025. OpenAI CEO Sam Altman wants his software to be available in all schools.

Befriending AI does not have to end in suicide to have serious consequences. In an increasingly atomized culture, with people disconnected from others and relationships mediated through convenient technology, AI relationships can all too easily replace human connection. Last year, an academic paper found that while most people prefer human relationships, AI partners have certain advantages, including being always available, deferential to the human, and non-judgmental, which can lead to the formation of strong emotional bonds. While some people prefer this form of emotional companionship with its constant validation, true friendship is other-directed, not self-affirming. Receiving unconditional validation from an AI companion can appear more satisfying than more difficult, messier real-life relationships. Another study found that the “emotional availability” of AI provides a sense of “tangible presence” and genuine connection even among people who know that AI is not human. While this is a niche interest, the percentage of people who admit to developing “meaningful” relationships with AI is increasing and risks becoming normalized.

Some people move beyond emotionally laden relationships with these machines to romantic relationships. Another study found that 6.5 per cent of people sought AI romances, but many more developed romantic feelings for chatbots unintentionally over time. Sometimes it leads to long-term and even exclusive partnering, and there have been reports of individuals “marrying” a chatbot and seeking to form a family. With dating and real marriage in decline, this definitionally sterile relationship presents a further challenge to birth rates and human connection.

The bad (untruth and illiberalism)

AI has the capacity, and is currently used, to generate deepfake images and videos, misinformation, and persuasive simulation, which pose a threat to truth itself. Images and voices can be replicated with ever greater realism. Pope Leo XIV has warned that AI could erode the foundations of communal life by inculcating a culture where nothing can be taken at face value. Human interaction is not possible without truth. AI risks creating a world in which “seeing is believing” is a saying that makes no sense, because images and video can so easily be used to make everything from harmless but fun videos to active disinformation that could cause mass panic or tilt election results.

There is another, under-appreciated problem with AI: it is inscrutable. Advanced machine learning models operate as black boxes, whose internal workings (“reasoning” would be the term to use if the machine were a person) cannot be explained even by their designers. The opacity of AI under these conditions is incompatible with the hallmarks of the Open Society, democratic capitalism, or classical liberalism: accountability, transparency, and personal responsibility or agency (that is, acting intentionally).

AI is a threat to the open society in another way: its tendency to homogenize. AI flattens cultural diversity by algorithmically favouring the average and dominant, and diluting personal experience and perspective. Open societies are generally pluralistic societies, and AI technology threatens to impose a uniform perspective. In western cultures that are increasingly secular, religious perspectives are likely to be usurped by dominant media narratives and cultural practices.

Human autonomy (Should machines decide?)

A key concern about AI is fear about the loss of human autonomy. Already AI is being used in decisions such as fraud detection, hiring, law enforcement, and lending based on existing data. Putative human decision-makers often defer to AI conclusions to appear fair or non-discriminatory, even though algorithms and data-sets may have bias built into them. More problematic, there is little or no accountability, because institutions, and perhaps soon the public, accept the supposed fairness or even infallibility of machine decisions.

There is another way in which human beings will lose autonomy: overreliance on AI systems leading to the delegation of so many mundane tasks to machines that people’s critical thinking, problem-solving, and creative faculties atrophy. This is already seen with fewer people able to find their way about in a city without the aid of online maps. Students who outsource the writing of not only essays (which is a form of plagiarism) but study notes are robbing themselves of the skills needed to do their own research, note-taking, and critical thinking to determine what information is important and what is not. Schools from elementary to university have not properly grappled with the implications of AI, not only for education but for the kind of person AI tools help form.

While it is possible to use some form of AI to supplement the work that individuals perform, there is a risk that overreliance on artificial intelligence will substitute moral judgement and human discernment with cold algorithmic calculation.

Guardrails and principles

Advocates of AI say that it can work well with the right guardrails, but there is disagreement over whether they should be voluntary or imposed by the state. Counting on the wisdom and beneficence of the billionaires making AI a reality – considering that some of them already enjoy economic and political power and even openly fantasize about global ambitions – seems foolish. Still, there are some principles which AI creators should respect and governments should insist upon: upholding the principle of human dignity and the primacy of human beings; full transparency and accountability in all systems affecting human lives; governance rooted in moral principles, not commercial interests; preservation of truth; AI that is contained as much as possible within discrete systems and not fully unleashed online; kill-switches for human beings to override AI; and all decisions affecting human beings to be approved by a human being. It is one thing to describe the principles by which AI should be governed, but quite another to implement them effectively.

Luke Muehlhauser of Open Philanthropy suggests more specific policies: software export controls to limit the proliferation of AI; hardware security features on cutting-edge chips that could be leveraged for useful computational governance purposes; licensing requirements for acquiring big clusters of chips; tracking, and requiring a license for, the development of frontier AI models to improve government oversight of new AI developments; information security requirements; mandatory safety and evaluation requirements by outside auditors; funding defensive information security research and development; requiring certain types of AI incident reporting; clarifying the liability of AI developers for concrete AI harms, including those stemming from negligence; and creating means for the rapid shutdown of large compute clusters and training runs (an off- or kill-switch), including remote shutdown mechanisms on the microchips on which AI is run.

Implications

There are social, economic, and cultural challenges with AI, but the most pressing ones are ethical. Perpetuating existing or creating new forms of discrimination (social), widespread unemployment (economic), and human consumption of machine-made art (cultural) are serious problems, but they pale in comparison to the existential threat AI poses to both human dignity and human agency. The idea that machines can solve mankind’s most intractable problems is deeply alluring to policy-makers and leaders in the worlds of politics, business, and academia. There is a technological seduction taking place that aligns with the values of the Enlightenment and its concomitant idol of progress: the perfection of both humanity and society. Bill Joy, a former Sun Microsystems scientist, warns that AI could threaten humanity’s status as the defining moral agent on earth. One need not be a Luddite to understand that the advance of technology untethered from moral wisdom poses real risks to humanity and our place at the top of the creature hierarchy on earth.

As we wrote last month in our editorial on the significance of the imago Dei – mankind being made in the image of God – human beings are endowed with a dignity that cannot be replicated or replaced by machines. If machines are put in a position to make decisions, can it be said that they have agency? If they have agency, what does that mean in terms of the machines’ rights? There are advocates of superhuman artificial intelligence who argue that such “beings” – some use the term Ems, or emulations: machines that would learn to experience, or at least mimic, (self)consciousness – be granted constitutional rights to freedom of thought, expression, and even religion, effectively granting machines human rights. It would be socially and legally difficult not to grant seemingly sentient or self-conscious machines some measure of rights, but doing so would be a direct assault on the dignity and sanctity of human life, a denial of mankind’s exceptionalism as creatures made in the image of God.

AI is a tool. The political commentator and Cold War analyst James Burnham said there was no difference between an atomic bomb and a broom handle; both were tools and both were dangerous in the wrong hands. The problem was not the technology, but the wielders of those technologies, and that is true up to a point: a world leader with a nuclear arsenal can do a lot more damage than an angry housewife with a broom handle. AI, with the right guardrails, can be a positive tool. Nurses who spend less time on paperwork – AI can already take notes during emergency room intake interviews – can spend more time with patients or see more patients. A problem arises when AI takes the place of the nurse, either replacing the nurse completely by eliminating her job or using the nurse as a conduit to conduct the interview and make a decision about triage that eliminates the human, and humane, element in medicine.

What can be done?

We cannot trust the creators of AI to program it to be bound by Natural Law or Biblical virtues, especially when the goal of many creators seems to be that these machines supersede humanity in some way. Yet even if the designers are value-neutral, the culture and practices from which the data are gleaned to train AI are, at the very least, unfriendly, and at worst hostile, to religious perspectives. It does not take a lot of imagination to assume that AI would “learn” to ostracize and even punish people of devout faith as aberrations to the perfect system it is designed to pursue.

It is not possible to put the AI genie back in the bottle. We have mediated too much of our work and private lives through digital technology, from Zoom meetings to musical playlists on our smartphones, from the cultivation of online media silos that inform our political (and religious) viewpoints to outsourcing routine work from note-taking to scheduling. This all provides data to be farmed by AI and opportunities for AI to shape us.

It is unclear what can be done via public policy to limit the harms done by AI, especially considering the concentration of economic and political power today. Some experts want a “kill switch” to turn off AI if it becomes capable of threatening human life. But if the future AI is as powerful – intelligent, clever, and resourceful – as some of its boosters anticipate, who is to say that the machine would not pre-emptively turn off the kill switch?

The key insight of conservatism is that prudence should govern progress. We may have already passed the point at which we prudently limited the growth of AI. But that doesn’t mean clear-thinkers in business and politics shouldn’t try to recapture some sense of prudential progress rather than rushing into whatever world AI will, perhaps literally, create. I asked a chatbot to write a critique of AI from a socially conservative perspective. It wisely counseled policy-makers to “reject unexamined progress that treats human beings as data points, moral intuition as obsolete, and social institutions as optional.” It is wise advice; I wonder if AI would follow it.