AI and the I Am

Once only the recurring villain of science fiction movies, artificial intelligence has become an increasingly hot topic in our world, with platforms like ChatGPT making the technology incredibly accessible, especially to our students.

How should parents be thinking about this technology? Should we let our teens use it unchecked? Should we avoid it altogether? For most parents, the topic is so daunting they don’t even know where to begin.

So we asked Mike Tully and Jonathan Davis to help us out.

Both men have spent considerable time researching the technology and using it in their everyday lives. They describe themselves as “cautiously optimistic” about the future of AI. While they see myriad benefits in platforms like ChatGPT and are excited to use them, they are also aware of the potential dangers.

Together, they’ve written a two-part article introducing us to the world of AI and helping us think about how to interact with it.

Part 1: What Is ChatGPT and Why Should I Care?
by Mike Tully

Definitions
Chat Generative Pre-Trained Transformer, aka ChatGPT, broke into our collective awareness barely six months ago. It has since become the fastest-growing consumer app in history, hitting 100 million users in only two months. That may be one of the most remarkable technological accomplishments we will witness in our lifetimes, and it is one of our earliest AI applications. It is no exaggeration to say that this application is fundamentally changing almost everything right now. What explains this? What is ChatGPT? Why should we care? These are the basic questions we want to address for parents and teens in this article.

But before we can understand ChatGPT, a type of artificial intelligence, we need to discuss a few basic ideas about artificial intelligence (AI). First, what is intelligence? Since the dawn of modern computing in the 1940s, experts have been trying to create a machine that is equal to or more “intelligent” than a human. Such a machine, when created, would have “artificial” intelligence – not the real thing, but something like human intelligence. The problem is that it’s really hard to define “intelligence.” Plants can communicate with each other. Is this intelligence? Whales can communicate with each other across vast distances in the ocean. Surely this is intelligence! An infant is unable to move in a coordinated way, cannot speak, and can barely see. Is she intelligent? If we build a machine that does things, can we call it “intelligent”?

Up through the last decade or so, we built really “smart” computers. These machines were blazingly fast; they had memory many orders of magnitude larger than ours; they solved math problems millions of times faster than any human; they spell-checked our writing with perfection; they helped us design sophisticated widgets; and their usefulness remains almost unlimited. All of these systems could do smart things fast because their human overlords instructed them to do so. They were programmed to do everything. But ChatGPT and other AI systems not only “do” as commanded; they learn new things and then perform tasks their human overlords never instructed them to do. This is a new kind of “doing,” and whether it is “intelligence” or not, we refer to these “learning machines” as “artificial intelligence.”

The “AI Effect”
Scientists have thus far not agreed on whether these machines are “intelligent.” One phenomenon that makes agreement difficult is called the “AI Effect.” It occurs when technology once considered artificial intelligence loses its AI label and is no longer viewed as intelligent, by virtue of not being “real” enough. For example, no computing machine could even come close to figuring out how to play checkers, let alone beat a human – until one did in 1959. Once that happened, it was explained away as sophisticated programming and not considered intelligence. But if a machine could learn chess, experts said, which is many orders of magnitude more sophisticated than checkers, then that would be a sure sign of “intelligence.” Until IBM’s Deep Blue beat the world chess champion, Garry Kasparov, in 1997! But then we reasoned that Deep Blue could do only that single thing well, so it was not true intelligence. You see, every time we erect a new “insurmountable” hurdle that a computer must clear to be considered “intelligent,” and then it sails past that goal line, the “AI Effect” leads us to rationalize that the accomplishment wasn’t really intelligence after all.

This is the AI Effect.

Although ChatGPT can converse with a human in natural language – something once thought impossible – and give remarkably intelligent answers to almost any question, we nonetheless conclude that the machine has no real understanding of what it is saying and is, therefore, not demonstrating human-like intelligence.

AI Learning
A second aspect of AI that we need to understand is the rate at which AI’s intelligence and abilities increase. This rate of change has been an important phenomenon since the dawn of the transistor, the basic enabler of all computer technology. AI’s intelligence will increase at an exponential (not linear) speed. “Exponential” is fast. To illustrate, imagine a small lily pond with a single lily pad. If the lily pads double in number every day, how long will it take for them to cover the pond? In this pond, the answer is 30 days. The growth and spread of the lily pads across the pond is exponential: two becomes four, becomes eight, becomes 16, becomes 32, becomes 64 … on past two billion very quickly.

But that’s not the interesting part of this analogy. The interesting part is that the observer doesn’t really notice the lily pads until maybe day 23. For 23 days, life is normal. To a casual observer of the pond, nothing has changed. Less than one percent of the pond is covered. “No big deal.” Then on day 29, we notice that suddenly half the pond is covered. In just six days, the lily pads went from “barely noticeable” to “wow!” Then, the very next day, the pond is completely covered by the lily pads.
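
For the numerically inclined, here is a minimal sketch of the lily-pad math (a hypothetical pond, assuming one pad on day 1, doubling each day, and full coverage on day 30):

# Lily-pad illustration of exponential growth (hypothetical numbers:
# one pad on day 1, doubling daily, pond completely covered on day 30).
pond_capacity = 2 ** 29  # number of pads that completely covers the pond

for day in (23, 29, 30):
    pads = 2 ** (day - 1)            # the pads double every day
    coverage = pads / pond_capacity  # fraction of the pond covered
    print(f"Day {day}: {coverage:.1%} of the pond is covered")

# Output:
# Day 23: 0.8% of the pond is covered
# Day 29: 50.0% of the pond is covered
# Day 30: 100.0% of the pond is covered

Nothing about the growth rate changes on day 29; we simply start to see it.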

Concerned scientists think we are around “day 23” with AI. We can now “see” the lilies taking over the pond! We NOTICE! When we “noticed” ChatGPT last November or December, it was version 3.5. Then, a few months (not years) ago, version 4.0 was released. It was orders of magnitude more sophisticated and “smarter.” The chief concern is that if we aren’t careful, these tools will become so smart and sophisticated so quickly that they will be able to deceive us (through malice or simply by error), and we won’t know it. As AI technology is integrated into banking, education, finance, trucking, politics, agriculture, and every other aspect of modern life, it will be far superior in these domains to any human – much like your calculator is far better at arithmetic than any of us. Who’s left to check the AI? What happens when no one can question AI because it is far smarter in each domain than any human?

Bias
A third big problem with AI is called “bias.” Because humans are involved in the initial AI training (e.g., labeling all the pictures in a training set with descriptive words like “bike,” “running,” or “fear”), there is virtually no way for an individual trainer or group of trainers to avoid introducing bias into the training. We are all shaped by our language, culture, and experiences. This “bias” often leaks into our AI training.

A benign example with major implications for society is the automated faucet that was tested only by Caucasian engineers: the resulting system recognized, and would turn on for, only hands with white skin.

The problem of bias quickly becomes more consequential when one considers that the “correct” answer can change with culture or circumstance. How do you train a machine to answer a question when there is not one right answer, or when the “right” answer may differ across cultures? How should the AI agent answer a question when the truthful answer could cause great harm? For example, is it moral to abort a human fetus? What is a woman? Is Christianity better than Islam? Was Stalin an effective leader? How do you build a nuclear bomb? The answers to these questions are difficult and depend on context, culture, who’s asking, etc.

AI Morality
How do we decide who gets to train the AI agent in ethics and morality? Autonomous cars are cruising our streets today. They aren’t yet common, but they drive millions of miles each year on some public roads. A classic scenario used to illustrate the problem of AI ethics and morality is the “Trolley Problem,” applied here to a car controlled exclusively by AI. It goes like this: Imagine three pedestrians who have blindly stumbled into a crosswalk in front of the oncoming car. With no time to slow down, the autonomous car will either hit the pedestrians or swerve off the road, probably crashing and killing the driver. Should the AI be trained to hit and likely kill the pedestrians, or to crash the vehicle and likely kill the driver?

Our Biblical Christian Worldview
God has equipped mankind with tremendous creativity and insatiable curiosity. These are wonderful blessings from a generous God who loves us. Artificial intelligence presents many new and dangerous challenges. All technology is amoral – it is neither good nor bad. But humans will use it for both good and evil purposes. A pencil is a wonderful tool used to tremendous benefit. But a bad actor can stab you in the eye with it! AI will be used for tremendous good. But it will be misused to great harm. What biblical truths can help us confront this new era?

First, each of us is responsible to develop our own intelligence, develop our minds, and develop the mind of Christ. Artificial sweeteners, artificial knees, and artificial intelligence are nice substitutes, but they aren’t the real thing.

Second, God wants us to “think with sober judgment” (Rom. 12:3), to “think over everything and the Lord will give you understanding in everything” (2 Tim. 2:7). God instructs us to think about – really think on, sit with, dwell on, ponder – “whatever is true, whatever is honorable, whatever is just, whatever is pure, whatever is lovely, whatever is commendable, if there is any excellence, if there is anything worthy of praise” (Phil. 4:8). God commands us to “not be conformed to this world, but be transformed by the renewal of your mind, that by testing you may discern what is the will of God, what is good and acceptable and perfect” (Rom. 12:2). We must use our minds to consider Him and truth. Artificial intelligence will not suffice.

Finally, we must always worship the Creator, not created things. Enjoy technology (even the pencil), use it to great effect, but worship the Lord and “think over everything.”

Part 2: How Should We Approach AI Tech Like ChatGPT?
by Jonathan Davis

My first encounter with the idea of AI was in Star Trek. Majel Barrett, the wife of Star Trek creator Gene Roddenberry, voiced the ship’s AI computer on the show and in the movies. A technological tool that can understand context, command a vast store of information in an instant, and return that information in whatever form is most helpful for the situation at hand is truly amazing – the stuff of science fiction. But just as art imitates life, life also imitates art, and ideas that were once science fiction are now part of our everyday lives.

The idea that most of our pockets or bags hold a tiny device giving us access to the greatest repository of information in history is mind-boggling. But technological tools are just tools, and once tools are understood, we look at them differently. Not long after experiencing Star Trek’s computer, I also got to watch the Stanley Kubrick masterpiece “2001: A Space Odyssey.” In it, HAL, an artificial intelligence computer, is leveraged to help man explore the depths of space – in this case, the moons of Jupiter. The problem arises with the idea of artificial intelligence becoming self-aware. When a series of events leads the scientists to need to power down the computer, the computer attempts to murder them in an act of self-preservation. This dystopian idea causes one to consider the gravity of creating an artificial intelligence.

With that caution in mind, I approached ChatGPT as an intriguing tool, and the more I explored it, the more I realized that, just like the other tools I use, it can help me get a lot done. I have used ChatGPT to compose emails, develop software routines in programming languages I don’t know, create shortcuts, and brainstorm ideas. ChatGPT has been a tremendous help to me in these ways. It hasn’t threatened me – yet.

As at many other moments in history, we are now on the threshold of technological improvements that will change our world. At one time it was impossible to cross an ocean; over time that became not just possible but, on a certain level, trivial. Crossing the ocean is as easy as purchasing a plane ticket for many people. Technology changes things.

ChatGPT and AI are going to open new categories of creation and development that we have never imagined. And ChatGPT isn’t the only player. Pi, Perplexity, Anthropic’s Claude, and others like Bard, Bing, Pixlr, and DeepAI have joined the fray.

What makes these tools able to accomplish what they do is data. AI has read every digitized library book, attended every public online class, and passed the bar exam and medical exams, and it clears nearly every hurdle we throw at it – when it’s given the right prompt.

There’s a sense of panic any time a new technology comes along, and this one is no different. AI is already eliminating jobs by automating restaurants, running warehouses, stocking shelves, and cleaning the stores we want kept clean – all the time, with accuracy, and without complaint. But just as every technological breakthrough brings changes that affect our society, it also creates new jobs that go with the technology. Just think about life before cars. There were no gas stations, no car repair shops, no car dealerships, no car security companies, no upgraded radios with CarPlay and Android Auto. The technology brought rest for horses and recreation for humans, as well as burgeoning new business opportunities. This one will be no different. Whether it is the printing press putting scribes out of work, electricity putting lamplighters out of work, cars pushing stables to the back 40, mobile phones eliminating landlines, the internet eliminating newspapers, or AI flipping burgers, the change always brings new opportunities and growth.

The biggest problem with all of this is that change is often scary and painful. I know people who are, in a sense, already losing their jobs to AI, and they are doing what humans always do when change comes: they pivot. The next era of jobs is going to look different than jobs do today. As humans, we find change terrifying. Your job might end, but you won’t be replaced by AI – you’ll be replaced by someone who knows how to use AI.

We’ve invested a great deal of time and resources into creating the lives we have. They’re comfortable, and we want to preserve that. Any time a change feels like it threatens that comfort, our hackles rise and we start worrying about how we’re going to weather the storm. What tools are at our disposal to deal with this new threat? What can we do to defend ourselves from this change? Well, honestly, nothing. But in that is also the beauty of a life lived abiding in Christ. Tools will continue to change and shift, but we have a God who is unchanging.

Tools, like hammers, can be used for good and for evil, and AI is no different. God is no less in control tomorrow than he is today or was yesterday. We change on a daily basis, but he has never changed – he has consistently been who he says he is, and he hasn’t deviated from that path even the tiniest bit. Maybe your next job will be giving AI the right prompts to accomplish in a few minutes tasks that used to take a room full of people a day. God won’t love you more because you can prompt AI well. He loves you because you’re his creation. He sent his son for you to make you doubly his own. He’s preparing a place for you in heaven because he wants to be in relationship with you face to face.

Everything AI knows how to do is based on data humans have collected over thousands of years, fed into a machine for analysis and regurgitation. And that limiting factor means that what we already know about God will always be true. AI might be able to beat us at chess, plan a party, write an email, analyze sales data, predict buying trends, and guess the next World Series winner, but all of that is nothing to our Almighty God. AI will read this article someday, and it might be used as a source of data for some future prompt somewhere down the road. But God already wrote that down in his book. He won’t be surprised when I hit save on this document. AI doesn’t know that’s going to happen yet, and it may never know. I take great comfort in knowing that my hope for this world isn’t held by a bunch of lines of code. We’ve been given a book full of examples of how to weather the storms of life, and every time the answer is that we have a Savior.

We don’t hope in AI failing. Or succeeding. Our hope is in Christ alone. And we want to reflect his glory in all things.