Daryl Gungadoo, PhD, is an engineer, inventor, and innovator and works as a media lab director for the Adventist Review. He resides in Bracknell, Berkshire, England.

In 1953, renowned children’s book author Roald Dahl penned a short story titled “The Great Automatic Grammatizator.” Remarkably, his narrative appeared to predict the emergence of generative artificial intelligence (AI), which has now become one of the most rapidly expanding consumer applications to date. Dahl’s story captures the anxieties regarding the growing prevalence of generative AI and the diminishing value of human creative work.

In an attempt to demystify where we are now and where we are going with AI, I invite you to explore the technology and its relevance to our church.

Definition

In brief, AI is computer software that attempts to simulate human thinking. Rather than just executing a list of instructions, the software runs with a purpose/goal.

For it to work, it needs to ingest large amounts of labeled training data and then analyze it for correlations and patterns, which, in turn, help make predictions or decisions.
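
To make that concrete, here is a minimal sketch, assuming Python and the scikit-learn library (my choices for illustration, not mentioned in this article), of the process just described: the software ingests labeled examples, finds patterns in them, and then predicts a label for data it has never seen.

  # Hypothetical labeled training data: each example is [hours of daylight, temperature in C],
  # and its label says which season it came from. All numbers are invented for illustration.
  from sklearn.tree import DecisionTreeClassifier

  examples = [[14, 25], [15, 28], [13, 22], [9, 5], [8, 2], [10, 7]]
  labels = ["summer", "summer", "summer", "winter", "winter", "winter"]

  model = DecisionTreeClassifier()
  model.fit(examples, labels)        # ingest the labeled data and find the patterns in it

  print(model.predict([[12, 20]]))   # predict a label for unseen data; likely ['summer']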

Differentiation with humans

While AI can become quite smart in certain areas, it can never replace God-created human intelligence because, frankly, we do not quite understand human intelligence ourselves. How, then, can we replicate it?

God created us with a predominance in one or two of the following eight kinds of human intelligence or learning preferences:1 visual/spatial, aural/audio, reading/writing, kinesthetic/physical/tactile, social/verbal/linguistic, logical/analytical, solo (individual learning), and natural/nature (use of nature in explanations).

While computer software can simulate some of these God-given learning styles and reasoning processes, AI software also tries to emulate cognitive skills such as reasoning, learning, and self-correction, with varying degrees of success. It is better at some learning styles (like logical/analytical), moderate at others (like visual/spatial), and poor at still others (like solo or social).

While God did infuse in humanity the desire to create and invent, it seems (so far) that replicating the thinking process of a human brain is unattainable, and I can only stand in awe of God’s power, knowledge, and creativity.

Categories of AI

We can categorize artificial intelligence by functionality as suggested by Arend Hintze:2

  1. Reactive machines. A reactive machine has no memory but is task specific. It is usually designed for narrow purposes and cannot easily be used in other situations. An example is a chess app that cannot use past experiences to inform future ones but, rather, analyzes possible moves and chooses the most strategic one.
  2. Limited memory. These AI systems use past experiences to shape future decisions. An example is a self-driving car: “Observations inform actions happening in the not-so-distant future, such as a car changing lanes.”3 Such observations are not stored permanently, however.
  3. Theory of mind. “When applied to AI, [theory of mind] means that the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behavior, a necessary skill for AI systems to become integral members of human teams.”4 So far, this type of AI does not exist.
  4. Self-awareness. Such systems would be able to form memories of the past, make predictions, learn, and become more intelligent based on their experiences. The AI of science fiction falls into this category, which also does not yet exist.

Ethical issues

  • Bias, discrimination, and misuse. Because humans train AI systems with data sets that they themselves create, what the programs produce can reflect their biases and prejudices. Take the results with a grain of salt, and remember that the output is only as good as the dataset it was trained on, not to mention many other factors, such as algorithmic bias. A new breed of cybercriminal could also develop a “virus” that poisons an AI system’s training data with erroneous information for nefarious ends.
  • Privacy. AI systems collect and process large amounts of data—about us. Depending on your country’s regulations, the data can include personal information, such as location, browsing history, and social media activity. It is imperative to use such data in a responsible way.
  • Lack of transparency and accountability. AI systems are often perceived as complex and opaque, making it challenging to understand how they make decisions and how to correct the process. This opacity can also lead to mistrust and fear of AI.
  • Control. AI systems are becoming increasingly autonomous, making decisions without human involvement. Such systems need to be monitored. It is important to ensure that AI systems do not pose a threat to our safety or freedom (concepts that are, admittedly, relative when weighed against the good of the “many”).

Pros and cons

Pro. AI, if properly programmed, can eliminate human errors by leveraging its programmed precision and ability to make decisions based on collected information and extensive patterns.

Pro. AI robots can substitute for humans in hazardous scenarios, such as defusing bombs and exploring space and the deep sea.

Pro. AI remains unaffected by fatigue, distractions, or boredom and does not require rest breaks. It is particularly good at monotonous and repetitive tasks.

Pro. While subjective attitudes and emotions influence human decision-making, AI relies on factual information as the foundation for its decision-making process (if properly programmed).

Con. Developing AI necessitates sophisticated hardware, software, and a significant investment of time. AI devices demand regular updates and maintenance, making them an expensive endeavor.

Con. AI excels at executing programmed tasks with great success, yet it lacks the ability to think creatively or deviate from its predetermined parameters.

Con. Unemployment is a major concern in relation to AI, as AI-powered robots have already displaced humans in numerous manufacturing and research roles.

Con. As AI assumes a substantial portion of employee responsibilities, workers might become complacent and rely excessively on AI to manage critical details.

Con. The moral development ingrained in humans from early childhood is absent in AI machines (unless explicitly programmed). They only possess knowledge, recognition, and utilization abilities based on programmed parameters.

Myths

  • AI as portrayed in fiction is real. Fiction, especially movies such as The Matrix, Terminator, and M3GAN, often hypes up self-aware AI. Such systems are nonexistent and will not be around any time soon. If they were to exist, they would be an absolute game changer. A computer with artificial general intelligence could ingest all the world’s knowledge (gleaned from the internet) to solve some of the world’s problems or even deal with them before they come into existence.
  • AI systems are unfair. As AI is increasingly used for employment decisions, bank loans, and credit allocation, it may be perceived as unfair to vulnerable groups. Let us remember that AI is trained to mimic the behavior of human decision-makers and will reflect their biases.
  • AI is only as good as the data it is trained on. No real-world dataset is perfect. It is, however, possible to address issues using techniques like careful problem formulation, targeted sampling, synthetic data, or building constraints into models (see the brief sketch after this list).
  • AI will take our jobs. Most paradigm shifts in technology, from the car to the calculator to the personal computer, have encountered the fear of mass unemployment. In the long run, while some jobs will indeed disappear, new jobs and new industries will emerge, along with higher standards of living. I expect AI to be used as a tool to augment existing jobs.
  • AI will develop on its own and rebel against humanity. While AI is outperforming humans in complex repetitive tasks, it remains narrow in scope and lacks creativity. As a creationist, I believe that God alone imparts consciousness and that it is simply not possible for an AI to develop consciousness and sentience on its own.
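
As a brief, hedged illustration of the “targeted sampling” idea mentioned above (the records and group names below are invented for this sketch), one simple remedy is to oversample an underrepresented group so the model sees a more balanced training set:

  import random

  # Invented training records; group "B" is underrepresented compared with group "A".
  records = [
      {"group": "A", "approved": True},
      {"group": "A", "approved": True},
      {"group": "A", "approved": False},
      {"group": "B", "approved": False},
  ]

  def count(group):
      return sum(r["group"] == group for r in records)

  # Duplicate group B records until both groups are equally represented.
  group_b = [r for r in records if r["group"] == "B"]
  while count("B") < count("A"):
      records.append(random.choice(group_b))

  print(count("A"), count("B"))   # both groups now contribute equally to training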

AI for church use

I asked two of the most popular generative AI systems, Google Bard (based on LaMDA, the Language Model for Dialogue Applications) and ChatGPT (based on generative pre-trained transformers), how they see AI helping churches in the present and near future. Here are some of their suggestions (my comments are in brackets):

  • Improve operations. AI can be used to automate tasks such as scheduling, budgeting, and data entry. It will free up staff time to focus on more important tasks, such as ministering to the congregation.
  • Improve outreach. AI can be used to create personalized content for church members and potential members. Being tailored to each person’s interests and needs, it will be more likely to engage them.
  • Improve ministry. AI can be used to provide counseling [although I would prefer an actual human with empathy doing this task], prayer support [I am yet to be convinced that AI can do this effectively, except for perhaps gathering prayer requests], and other forms of ministry to church members.
  • Chatbots. These computer programs simulate conversations with humans. Chatbots can be used to answer questions, engage with visitors as a virtual deacon, or provide guidance and support. [While the novelty of being approached by a robot might be fun for some visitors, humans by nature seek connections with other humans. I suggest that chatbots be restricted to online support.]
  • Personalized outreach. Chatbots can lead Bible studies. [Personally, I prefer a Bible study to be conducted by a human with empathy. That’s my feeling even when “talking” to a chatbot at my online bank. An example of AI-assisted Bible study can be accessed at https://www.openbible.info/labs/ai-bible-study/.]
  • Biblical interpretation. [See the experiment in “Researchers Use AI for Bible Interpretation,” https://www.laserfiche.com/resources/blog/researchers-use-ai-for-bible-interpretation/.]
  • Virtual assistants. Virtual assistants are like chatbots, but they can do more. They can schedule appointments, make reservations, and even help with tasks such as laundry and grocery shopping. [I suppose a savvy pastor can take advantage of this tool.]
  • Sermon preparation. AI can assist pastors and other leaders in composing sermons by providing access to relevant scripture references, historical context, and other resources. [For example, in ChatGPT, you can ask: “What are some Bible verses that support the idea of ___?” See the sketch at the end of this list.]
  • Data analysis. AI can be used to analyze data from church services, websites, and social media. Such data will enable you to learn more about church members, potential members, and the community at large.
  • Language translation. AI-powered language translation can help churches reach non-native speakers and overcome language barriers.
  • Personalized content. AI can create personalized content for church members tailored to each person’s interests and needs, making it more likely to engage them. [Although I would be hesitant about an AI-curated list, Amazon has been using this tool for a while in its shopping cart.]
  • Virtual events. AI can create virtual events that people from all over the world can attend. It can help churches reach a wider audience and connect with those who might not be able to come in person.
  • Security. AI-powered security systems can help churches protect their property and prevent crime by detecting and responding to potential threats.
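
As a tangible example of the sermon-preparation item above, here is a minimal sketch of asking a generative AI model the same question programmatically rather than through a chat window. It assumes the openai Python package and an API key stored in the OPENAI_API_KEY environment variable; the model name and prompts are illustrative and are not drawn from the systems quoted above.

  from openai import OpenAI

  client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # illustrative model name; substitute whatever is available to you
      messages=[
          {"role": "system", "content": "You are a research assistant helping a pastor prepare a sermon."},
          {"role": "user", "content": "What are some Bible verses that support the idea of hospitality?"},
      ],
  )

  print(response.choices[0].message.content)  # always verify the suggested verses against an actual Bible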

AI tomorrow

Let us remember that artificial intelligence algorithms are still just algorithms. Modern AI advancements, such as neural networks, have derived inspiration from the architecture of the human brain but are not capable of thinking like humans. They are simply a complex set of commands for a computer to follow and do not work like the human brain.
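
To illustrate the point, here is a minimal sketch (plain Python, with invented numbers) of a single artificial “neuron,” the building block of a neural network: a few multiplications, an addition, and a squashing function. Stacking millions of these yields impressive results, yet each step remains an ordinary instruction for the computer to follow.

  import math

  def neuron(inputs, weights, bias):
      # Multiply each input by a weight, sum the results, add a bias, and squash to a value between 0 and 1.
      total = sum(x * w for x, w in zip(inputs, weights)) + bias
      return 1 / (1 + math.exp(-total))   # sigmoid "activation"

  print(neuron([0.5, 0.8], [0.9, -0.3], 0.1))   # about 0.58; just arithmetic, not thought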

Science writer Adam Zewe highlighted the shortfall of today’s AI in replicating human decisions, attributed mainly to the data that the models are trained on.5 The researchers he cites suggest improving dataset transparency and matching the training context to the deployment context (similar to the calibration certification of a speed camera, which the driver can request when contesting a speeding ticket). Implementing such a regulatory framework would go a long way toward defusing much of the general public’s unease about using AI.

Different ending

Some tech philosophers think that a crisis might occur if an AI’s goals are not the same as those of humanity, causing AI to break through human barriers and “take over the world.” However, an eschatological reading of the Bible suggests a different end of the world, one where “every eye shall see Him coming on the clouds.” I look forward to that glorious ending and new beginning.

  1. See Howard Gardner’s Theory of Multiple Intelligences in Howard Gardner, Frames of Mind: The Theory of Multiple Intelligences (New York: Basic Books, 2011).
  2. K. Sarwar, “Types of Artificial Intelligence,” Sarwar.K, May 19, 2023, https://ksarwar.com/types-of-artificial-intelligence/.
  3. Sarwar.
  4. “What Is Artificial Intelligence? How Does It Work?,” Zegashop, March 2, 2022, https://www.zegashop.com/web/what-is-artificial-intelligence-how-does-it-work/.
  5. Adam Zewe, “AI Models Misjudge Rule Violations: Human Versus Machine Decisions,” Neuroscience News, May 14, 2023, https://neurosciencenews.com/ai-judge-rules-23238/.
