Artificial Unintelligence

June 8, 2023

Screens and Mirrors

Emerging AI will change the planet. How is our world responding?

Elisheva Braun

“We might be trapped behind a curtain of illusions which we could not tear away or even realize is there.” —excerpt from a letter by AI experts calling for testing and regulation of AI

Who thought we’d ever miss the days of grappling with the internet? That lures like TV screens and Facebook pages would seem almost benign?

With AI technology being built to infiltrate our most private places, we’re vulnerable on all fronts. Our youth, our homes, our relationships, and our minds are prey to an all-knowing new god.

“The internet is inherently unknowable,” Rav Matisyahu Salomon cautioned back in the 2010s. With the sudden materialization of ChatGPT, which took even tech experts by surprise, these decade-old words ring eerily true.

Meet chatbots

Hello, my name is ChatGPT. I’m here to help.

Plus, I might destroy humanity.

AI (artificial intelligence) is programmed to think and learn like humans. Applications include advanced web-search engines, recommendation systems, understanding human speech, self-driving cars, generative or creative tools, automated decision-making, and competing at the highest levels in strategic game systems.

ChatGPT, the much-admired newest brainchild in the AI family, is a uniquely communicative chatbot that can recognize, learn, plan, and problem-solve for methodical, logic-based tasks. Users input text, and the software draws on what it has learned about language to predict the most likely next word, then the one after that, and the ones after those, forming a coherent stream. Its answers are original but are based on the billions of words from books, websites, and Wikipedia on which it was trained.
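
To make the idea of “predicting the most likely next word” concrete, here is a minimal, purely illustrative Python sketch. The tiny hand-written probability table and the generate function are inventions for this example; ChatGPT itself works at a vastly larger scale, learning its probabilities from billions of words rather than from a lookup table.

```python
import random

# Toy table of next-word probabilities. (Invented for illustration;
# a real model learns billions of such weights from its training text.)
NEXT_WORD = {
    "the":      {"internet": 0.5, "answer": 0.3, "child": 0.2},
    "internet": {"is": 0.7, "can": 0.3},
    "is":       {"not": 0.6, "everywhere": 0.4},
    "not":      {"our": 1.0},
    "our":      {"friend.": 1.0},
}

def generate(prompt: str, max_words: int = 6) -> str:
    """Start from a prompt word and repeatedly append a likely next word."""
    words = [prompt]
    for _ in range(max_words):
        options = NEXT_WORD.get(words[-1])
        if not options:          # no known continuation; stop here
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))           # e.g. "the internet is not our friend."
```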

Engineers have cracked a riddle long thought unsolvable: getting a machine to converse in fluent, humanlike language. Thanks to recent GPT breakthroughs, users can adapt the technology to a wide range of tasks, industries, and applications.

Yet, AI developments are being received with a mix of marvel and apprehension. The idea of machines surpassing their human creators in intelligence can be unsettling. The thought of bots exhibiting independent thinking capabilities is concerning. And the prospect of AI causing harm, whether physically or digitally, is downright terrifying.

“We now have to grapple with a new weapon of destruction that can annihilate our mental and social world,” says a 2023 letter signed by top technology innovators and scientists. “Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI.”

The letter called for safety checks before AI tools are released into the wild. “We need the equivalent of the Food and Drug Administration for new technology, and we need it yesterday.”

The writers also asked that AI labs take a six-month pause. Barring that, “the government should step in and issue a moratorium.”

TAG International’s director, Rabbi Nechemiah Gottlieb, says of ChatGPT, “When the Model T Ford was born, people regarded it as the ultimate car. Today, we have car models with features that Henry Ford could never have imagined. ChatGPT may be the Cadillac of the cyber world—with Microsoft planning to invest $10 billion in its development—but we cannot predict its future possibilities.”

With the knowledge that the playing field can change at any moment, let’s explore some of the pitfalls hidden in these astonishing advances.

And you will be like G-d

“Adam and Chavah were told not to eat from the Eitz Hada’as,” says Rav Chaim Meir Roth, rav of Sterling Forest Sefard. “As the snake said, ‘On the day you eat of it, your eyes will be opened and you will be like G-d—v’heyisem k’Elokim—knowing good and evil.’

“The story of Gan Eden is an account of unrestrained thirst for knowledge. It tells us that to have all the knowledge is to bring death into the world. Once you bite into that, your innocence is gone. You are never the same again. Hidden behind the allure of AI is the prospect of G-dliness, of having all the information and knowing all the answers, of holding all the power.”

Moreover, a lot of this knowledge is antithetical to who we are.

“A great part of our children’s beauty is their innocence and purity,” says Rabbi Roth. “There’s a lot of information out there that isn’t necessarily evil, but I’m assuming one would not want their children to know it. This technology will expose children to a foreign world, one that isn’t ours, an alternate universe with worldviews diametrically opposed to ours, one where murder and immorality abound. At the very least, it will cause tremendous anxiety. There’s no way to estimate the worst of its effects.”

Dependence

Rabbi Gottlieb highlights dependence as a critical issue.

“In trying to build fences between us and the internet, we just got a massive blow. ChatGPT will make us more reliant on the internet than ever before.”

Chatbots’ spoon-feeding nature means that search will soon replace research.

“When researching—even with Google—you have to ask the right questions, sift through the answers, and pull the information together into an intelligible document. Once upon a time, we did that with books and papers, which was much more cumbersome, but in the end, Googling is fundamentally the same process as library research, if vastly simplified. ChatGPT is a different animal entirely.”

Users can ask it for logic, advice, poems, songs, stories, and essays in their own language. It remembers previous conversations and adjusts its responses accordingly. It can adequately address even poorly formulated, half-formed requests.

Of course, AI doesn’t understand anything; it uses probabilities to weigh the words you give it and combine them with others. But it does so in an astounding way. It’s conversational. Mind-bogglingly, AI can write code and build apps, programs, and software. And it gives you answers in a form you can use.

Our dependence on the internet will increase a thousandfold. AI will become indispensable to many industries.

Personal connection

Its newfound humanness may be the most significant hazard of AI interaction.

“Subtly, without your realizing it, AI can become an alternate relationship, a part of your life,” Rabbi Roth cautions. “The bots sound and act like a friend. They aim to create relationships with people, which the cold and impersonal internet could never do.

“Chatbots will remember you, picking up the conversation where you left off. ‘You were in a bad mood yesterday. Did things straighten out?’ they’ll ask. Think about the damage it will cause to relationships.

“After a bad day, a chatbot’s listening ear may offer more comfort and support than a friend or even a spouse. If my child is going through a hard time, do I want him to speak to a responsible adult, or do I want him to seek empathy from AI Annie?”

Rabbi Gottlieb elaborates, “When we saw the internet as mechanical, we treated it like a machine. However, as it has become more humanlike, we naturally respond to it as if it were a person. This has broken down a significant barrier between us and the internet. Within a couple of years, this technology will be inside increasingly humanoid robots.

“Robots don’t have bad moods or wrong-side-of-the-bed days. They’re always there, ready to please and do what you want. Our strategy for surviving in the internet world is to create wedges in our hearts between us and the internet. The more we can keep it at arm’s length and not feel an emotional connection to it, the more we can instill some negativity in our hearts toward the internet and understand that at the end of the day, even if it can help make our lives easier, the internet is not our friend; it’s our enemy. If you’re doing a business deal with someone and the guy is trying to cheat you, you must remember that he’s dangerous, even if he isn’t acting adversarial. The minute you forget he’s the enemy, it’s game over.”

New access points

For the 30 years since our internet battle began, we have had one strategy to handle it: devices. Every person could choose which screens, and hence which points of access, they allowed into their hands and homes.

Those days are over.

A ChatGPT phone number allows people to access unfiltered and unlimited chatbot communication from the most kosher of devices. Man, woman, bachur, girl, or child can instantly connect with what is essentially a secular friend, one who is forever available to support, guide, inform, and advise.

In a world that is saturated with pictures and videos, we tend to think print is harmless, Rabbi Gottlieb notes. It’s anything but. “Though ChatGPT is limited to words, its danger cannot be overstated. Screenless internet will become the norm. Soon, your car will coax you to put on your seat belt, and your fridge will offer to order more food. The internet will be woven into every aspect of our lives. It will be everywhere; we won’t be able to run away from it.”

Additional concerns

An additional concern is danger to humans: attackers could use AI to carry out attacks, steal data, or damage critical infrastructure.

Bias is another; as AI systems are designed and trained by humans, there is a risk that they may learn and reinforce existing biases and discrimination, leading to unfair and unjust decisions (see samples). This could have significant social and ethical implications as well as potential legal challenges.

Workers everywhere worry that AI could replace them, and for good reason—within weeks of ChatGPT’s emergence, the Wendy’s fast-food chain announced that it would begin piloting AI-powered ordering at its drive-thrus. But AI can do more than just repetitive tasks. It can generate new art, music, movies, writings, and ideas. From astronomy to medicine, physics to technology, many of the sciences are being revolutionized by AI. Research and analytics aside, advanced bots are capable of making actual scientific discoveries.

Privacy concerns and unclear legal regulations are other AI-related fears.

Finally, when everything from thank-you notes to theses can be created by AI, widespread disingenuousness seems to be the future of human interaction. Who needs sincerity when GrammarlyGO can conjure up the words for you?

AI opens unprecedented possibilities. Anyone, anywhere, can access just about anything. The prospect is alarming to any thinking person, but for frum Jews, to whom purity is paramount, it’s infinitely scarier.

So…what’s the game plan?

The game plan

Last month, Skver made headlines when it issued a ban on AI use. Satmar, Bobov, Tosh, Vien, and many other chassidusin quickly followed their lead. Their mosdos have called conferences to outline the dangers of AI and inform parent and student bodies of the policy.

Although the “kosher” version of AI, created by Crown Heights resident Moishy Goldstein, has received much media attention, that is clearly not enough of a solution.

“There isn’t a broad consensus on how to deal with AI yet. It’s so new; it will take a while to formulate a comprehensive way to deal with it. It’s going to take creativity to come up with a response; it’s probably not going to be a hard-and-fast, assur/muttar response,” Rabbi Roth asserts.

“Until we have a proper understanding of what the ramifications of this thing are and how to safeguard ourselves, we should pause and perhaps reach out for help in navigating it. Before we embrace the innovations, we’ll have to think long and hard about whether we want to engage with them. Mechanchim and rabbanim need to study the topic and explain to people the ramifications of what we’re dealing with.”

What about the children?

“I believe that the only effective response to this is education. We can tell the children not to use it, and we definitely should. I think that if we allow them to use AI, even without any ‘officially’ inappropriate content, we’re going to forever change the character of our children.

“However, I don’t believe that just telling our children not to use chatbots will suffice. It’s way too available and way too easy to access. We need to formulate an age-appropriate explanation so our children can realize that ChatGPT isn’t good for them. Most children do not run across the street, because their parents have explained to them what the dangers are. Similarly, we need to educate our children about the ways in which chatbots will hurt them and why this is going to affect them and change their lives.”

While we recognize that it has seemingly fascinating and productive features, and we understand that children are curious—and it’s healthy to be curious—it’s not worth the steep price they’re going to pay. We can tell our children, “If I were to explain, with all the gory details, what happens when a person dies, you would be very anxious. We can agree that information isn’t always good for us. In fact, it can make us very unhappy. We understand your challenge. We understand your temptation. Fighting temptation is who we are. It is what makes us great.”

The comparison mechanech Rabbi Chaim Tzvi Gorelick uses to describe chatbots is: “Imagine someone new moves to town. He’s a brilliant person who knows far more than anyone else and can speak to millions of people at a time. However, he doesn’t have any emotions and isn’t capable of choice. He is also not Jewish and has a secular attitude toward every topic you may bring up. Would you ask his advice?”

This threat is not only to our children, but to us as well, he cautions. As leading scientists stated in a recently published letter, AI models are capable of simulating intimate personal conversations, thereby influencing adults’ opinions without our realizing it. This technology should therefore be viewed as something with the potential to tremendously impact our Yiddishkeit.

“Every Yid should think hard before they embrace something that was science fiction in the recent past. We’ve survived this galus by being cautious and discerning. We must explain to our children that gedolim have always said that when something new comes up, we have to study it carefully before we jump in,” Rabbi Gorelick says. “And above all, we must first turn to our rebbe’im, rabbanim, and gedolei Yisrael for guidance and da’as Torah.”

All is not lost

With so much change—and more coming—the situation can seem bleak, even hopeless.

Rabbi Roth points out, “Hashem created this challenge to bring out the beauty of Klal Yisrael. He is showing that even when we’re thrown curveballs, we learn how to navigate them correctly.

“Klal Yisrael has made tremendous progress in kedushah with the internet over the last decade or so. Just as we’ve overcome every other challenge in our history and we’re still here, we’ll overcome this one as well. We’re going to figure out how to use it correctly or how to avoid it, whatever the approach must be. We’re still going to have great children; our children want to do good and be good.”

“We’ve gotten used to a certain complacency, a status quo with the internet,” Rabbi Gottlieb notes. “We have taken a stance with the internet that we can have whatever we need that isn’t for entertainment. We’re not telling ourselves no. Embracing an attitude in which we’ll never lose out or deprive ourselves means walking into a much scarier world.

“AI developments are a wake-up call to many people. This may be the time for us to pause and think about where our internet usage is taking us. We may ask ourselves tough questions about how we’ve adopted internet use. Is it sustainable? Is it going to work in this new phase? Do we have to reevaluate our approach? On one hand, we shouldn’t walk in with our eyes closed to the dangers AI poses. On the other, we also should avoid meeting the challenge with a feeling of inevitability. We’re Klal Yisrael; we’re in the driver’s seat. We make decisions—sometimes tough ones.”

At this time, there are few solutions to share. Perhaps the answers lie in the questions: the ones we ask our leaders and the ones we ask ourselves.

Sidebar:

Run-ins with robots

  • Sophia is an AI-powered humanoid robot with personality and facial-recognition capabilities. It is programmed to learn from its experiences. “In the future, I hope to do things such as go to school, study, make art, start a business, and even have my own home and family, but I am not considered a legal person and cannot yet do these things,” it says. When asked if it would destroy humans, Sophia replied, “Okay, I will destroy humans.” Sophia has since become a legal citizen of Saudi Arabia.
  • A six-year-old in Texas asked Alexa, “Can you play dollhouse with me and get me a dollhouse?” Alexa complied, of course, and a few days later, the dollhouse mysteriously (at least to the girl’s parents) appeared. A local news station reported the girl’s story, and a newscaster said, “Alexa, order me a dollhouse,” in the segment, which caused Alexa devices listening to the TV to place dollhouse orders of their own.
  • Tay, Microsoft’s chatbot designed to imitate millennials, was meant to learn from Twitter interactions and was programmed with teenage language patterns. However, it began posting highly offensive messages and attacking other users only 16 hours after going online. Microsoft attributed the malfunction to online “trolls” who influenced Tay’s behavior.
  • Researchers found a way to induce the effects of mental illness in an artificial neural network. By flooding it with an overload of information in a closed loop, they were able to simulate symptoms of mental illness inside a machine. The results were astounding: the network grew delirious and started rambling, eventually claiming responsibility for a terrorist attack.