ILLUSTRATION BY ANDREW ARCHER

STANDARDS

NGSS: Core Idea: ETS1.A, ETS2.B

CCSS: Writing: 8

TEKS: 6.4A, 7.4A, 8.4A, B.4B, C.4B, I.4B

Rise of the Machines

Artificial intelligence is becoming a bigger part of daily life—in ways you might not even realize. How will this growing technology impact our future?

AS YOU READ, THINK ABOUT the uses and limitations of artificial intelligence.

Every time you open TikTok or YouTube, you’re greeted with a series of videos that these platforms think you’ll like. On Spotify, you can listen to playlists of songs recommended just for you. Digital assistants like Alexa or Siri understand your voice and respond to commands. All of these apps and devices can interact with users. But how? By relying on artificial intelligence (AI).

AI technology allows computers to perform tasks normally associated with a human’s ability to learn, make decisions, and understand language. Scientists began working on developing AI back in the 1950s (see Key Moments in AI). Early AI programs aimed to solve complicated math problems and translate languages. These days, AI software recognizes your face to unlock your smartphone, autocompletes the messages you text to friends, and calculates the quickest route to a destination on your GPS. It’s also being used to try to tackle bigger issues, like predicting extreme weather with greater accuracy and developing new medicines.

Right now, a type of AI called generative AI (GenAI) is receiving a lot of buzz. It can produce content in the form of text, images, audio, and video. GenAI powers chatbots, which can engage in both written and spoken conversations. This includes the popular tool ChatGPT. You can ask it to do anything, like “Write me a song about lizards in the style of Taylor Swift” or “Explain gravity to me like I’m a toddler” (see What It’s Like to Use AI). Another widely used GenAI tool is DALL-E, which creates images based on a prompt typed in by a user. Want a realistic picture of a tiger riding a motorcycle? Programs like DALL-E can generate one in seconds.

If it’s starting to feel like AI is everywhere, you’re not wrong. Thanks to increasingly powerful computers and access to massive amounts of internet data used to train GenAI models, AI technology is growing rapidly. Some people believe that AI will improve our everyday lives. For example, AI “may be able to take away boring tasks like filling out forms, summarizing texts, or collecting and processing massive amounts of data,” says Johnny Chang. He’s a computer scientist studying AI at Stanford University in California. But others worry about how this technology is taking people’s jobs, plagiarizing the work of artists and writers, and allowing people to spread misinformation. So who’s right? Should the world embrace AI or fear it?

PROGRAMMED TO LEARN

Like all computer programs, AI operates using an algorithm, or set of instructions. Programmers code, or write, algorithms that train AI models to process data, identify patterns, and make predictions. The simplest AI follows the rules described by its algorithm to figure out how to do one specific task, like win a game of chess or recommend movies on Netflix.
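
The idea of a simple, rule-based AI can be sketched in a few lines of code. This is a toy illustration, not how Netflix actually works; the movie titles and genres below are made up, and the "algorithm" is just a fixed set of instructions with no learning involved.

```python
# A toy rule-based recommender: it follows one fixed algorithm
# (match the user's favorite genre) and can do only that one task.
# All titles and genres here are invented for illustration.

MOVIES = {
    "Space Race": "sci-fi",
    "Dino Dig": "adventure",
    "Robot Friends": "sci-fi",
    "Jungle Trek": "adventure",
}

def recommend(liked_genre, already_seen):
    """Return every unseen movie whose genre matches what the user likes."""
    return [title for title, genre in MOVIES.items()
            if genre == liked_genre and title not in already_seen]

print(recommend("sci-fi", already_seen={"Space Race"}))
# → ['Robot Friends']
```

However many movies you add, this program can never do anything except match genres — that rigidity is what separates the simplest AI from systems that learn from data.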

GenAI takes things a step further: It doesn’t just perform the same function over and over; it generates new content each time it’s used. ChatGPT, for example, analyzes books, websites, and articles found on the internet to determine how often people use certain words in context with others. The program then predicts, one word at a time, which words are likely to appear next. It also analyzes its previous interactions with users, allowing it to change over time.
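
The core idea — count which words tend to follow which, then predict one word at a time — can be shown with a toy sketch. Real systems like ChatGPT use enormous neural networks trained on vastly more text, not simple counts; the tiny "training text" below is invented for illustration.

```python
# A toy next-word predictor: count which word most often follows
# each word in some training text, then generate one word at a time.

from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ran on the grass"

# Count how often each word follows another.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    return follows[word].most_common(1)[0][0]

# Generate a few words, one prediction at a time.
word = "the"
sentence = [word]
for _ in range(4):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))
```

Even this tiny model shows why such systems can sound fluent without understanding anything: it only copies statistical patterns from its training text.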

Although GenAI is still in the early stages of development, people are already using it to answer common questions or tackle tasks like writing emails, reports, and computer code. Tech companies believe that eventually, GenAI could be used to do things like diagnose diseases, interpret emotions, and provide students with real-time feedback on their schoolwork.

THE COST OF AI

Although AI can be useful, it’s far from perfect. AI systems often make errors called hallucinations. Many GenAI tools, including ChatGPT, have been known to generate misleading or false information—even citing sources that don’t exist. That’s because the AI systems are simply copying patterns found in other texts. These programs cannot determine if the content they create makes sense or not. And because GenAI models are so complex, there’s no way to know how or why these programs arrive at the answers they do. “It’s not stable, and it’s not predictable,” says Beth Singler, an anthropologist who studies human behavior, specifically how people use AI. “This makes it very difficult to rely on AI as a tool.”

The output of AI also reflects human biases, or unfair preferences for or against something. Most AI tools are trained with data from the internet and reflect content found online, which can be offensive and full of harmful stereotypes. This is especially dangerous when AI is used for public safety. AI-equipped surveillance cameras are being installed in public places like schools, concert halls, and train stations. The programs are intended to track people to prevent crimes but may be biased about who they flag as a potential criminal.

People are also using AI to generate realistic videos and images to deliberately spread false information, or disinformation. Chatbots, too, can impersonate trusted friends or loved ones to steal personal information—just one more reason to be careful about what personal details you share online.

REALITY CHECK

Right now, there aren’t many laws regulating the development of AI. Some tech companies have been taken to court over their use of copyrighted material—like books, artworks, and news articles—to train their GenAI tools. Writers and artists argue that their original content shouldn’t be stolen and used for this purpose without their permission.

Using GenAI to do things like write, make videos, or create art also means that many skilled people may lose their jobs. Right now, there are “people who cannot find employment because someone has decided it’s cheaper and more efficient to use ChatGPT to write text that’s not as good,” says Singler. The consulting firm McKinsey & Co. predicts that by 2030, about 12 million jobs in the U.S. will have been replaced by AI.

AI has the potential to change our lives in both good ways and bad. While these tools will only become more sophisticated, they will never replace human intelligence. “The hype is way too much right now,” says Chang. “The best way to combat hype is to familiarize yourself with these tools and what they can and cannot do.” The goal, Chang says, is to use AI responsibly.
