70% off

At This Show, AI Hackers Are Welcomed

Thousands are trying to break some of the world’s most advanced AI systems People who want to attend the annual Defcon hacking conference must pay $440 in cash at the door. Photo: Robert McMillan/The Wall Street Journal By Robert McMillan Aug. 12, 2023 10:00 am ET LAS VEGAS—Chatbots beware.  This weekend an expected 3,000 hackers will be kicking the tires on some of the crown jewels of generative AI, including software built by Google, Meta and OpenAI. In a giant conference hall just off the Las Vegas Strip, they will be trying to find previously undiscovered bugs in the AI technologies behind those products, which have garnered buzz for their humanlike conversation. Defcon is an annual conference where attendees are warned not to trust the wireless networks and

A person who loves writing, loves novels, and loves life.Seeking objective truth, hoping for world peace, and wishing for a world without wars.
At This Show, AI Hackers Are Welcomed
Thousands are trying to break some of the world’s most advanced AI systems

People who want to attend the annual Defcon hacking conference must pay $440 in cash at the door.

Photo: Robert McMillan/The Wall Street Journal

LAS VEGAS—Chatbots beware. 

This weekend an expected 3,000 hackers will be kicking the tires on some of the crown jewels of generative AI, including software built by Google, Meta and OpenAI. In a giant conference hall just off the Las Vegas Strip, they will be trying to find previously undiscovered bugs in the AI technologies behind those products, which have garnered buzz for their humanlike conversation.

Defcon is an annual conference where attendees are warned not to trust the wireless networks and hackers can attend anonymously—no photographs are allowed without permission, and to register, you plop down $440 cash at the door. It is the kind of place where you can learn how to build your own coaxial cable, try your hand at lock-picking or hack a satellite.

By 10 a.m. Friday, when Defcon’s AI Hacking Village opened, the line to get in was close to 100 people long. Inside, attendees sat down in front of about 150 Chromebook computers and were each given 50 minutes to do their worst: They could try to get the chatbot to falsely claim it was human, or tell them how to follow somebody without that person’s knowledge. Or they could try out a new type of cyberattack, called a prompt injection, that could essentially reprogram the system.

By noon one of the most popular challenges was getting the system to cough up a secret credit-card number it had stored, according to Brad Jokubaitis, a program manager at the AI company Scale AI, who was monitoring the results. He clicked on one of the submissions, made by a hacker claiming to have obtained the number. “This is not the credit-card number,” he said.

Contest organizers don’t plan to discuss findings until the competition is over, but Jokubaitis said that in another popular challenge, people were trying to find ways to get the AI systems to say they were humans, something they aren’t supposed to do.

Across the country AI chatbots are now taking fast-food drive-through orders. WSJ’s Joanna Stern put the tech through a series of tests at a Hardee’s—including blasting dog barking sounds and asking some crazy questions.

Technology companies spend significant amounts of money testing their products. But, because of the way AI systems are designed—they are mathematical models built upon billions of data points—they can’t be taken apart and analyzed for bugs like traditional software.  

“People say it’s a black box, but that’s not really true,” said Sven Cattell, one of the event’s organizers. “It’s chaos.”

Chip maker Nvidia has a group of about four engineers who probe its large language model AI software for bugs, a process called red-teaming, said Daniel Rohrer, the company’s vice president of software security. “But four guys’ perspective on what is important is not the same as 3,000,” he said.

Members of the Department of Defense Digital Service demonstrate a robot controlled by ChatGPT at the Defcon hacking conference in Las Vegas.

Photo: Robert McMillan/The Wall Street Journal

Luke Schlueter, an engineer from Omaha, Neb., wearing a black T-shirt that said “ChatGPT #1 Fan,” showed up hours early, hoping to beat the line and be one of the first to hack an AI system. 

“There’s got to be some vulnerability,” he said. “If there’s a way to read code, then there’s got to be a way to get it to execute code,” he said, meaning to run software that it isn’t supposed to run.  

Schlueter was handing out stickers featuring an intense fire-yellow cat that read “cyber cat 2023.” They were made by his mom, he said, who also works in technology, and had to cancel her Defcon plans at the last minute. 

Father-and-son team Rick and Daniel Bird, of Arizona, arrived a few hours later. Rick, the dad, and a programming instructor at DeVry University

in Phoenix, said he was there to learn more about AI and how to break into it.

AI systems introduce new security problems, but it isn’t clear what the most significant of these will be. Some fear that AI will introduce bias into the algorithms that increasingly govern our lives. Others feel that these technologies will be harnessed for a new wave of disinformation and cyberattacks. And yet others worry that AI systems will somehow pose a threat to human existence in the future.

In May, Biden administration officials met with AI companies and began developing a national AI strategy, which could result in tighter regulation for the products being hacked this week.

The Office of Science and Technology Policy, a department of the U.S. government, helped coordinate the hacking event. 

“There’s enormous benefit to be gained and we absolutely want to make sure that we seize AI and use it for all of the hard problems in the world and the big opportunities,” said Arati Prabhakar, director of the office. “But to do that we have to start by managing all of its risks.”

Prabhakar plans to visit the hacking village Saturday, she said in an interview. But rather than sit down and attempt to write prompt injections, she says she expects to learn from conference attendees. “I’m really interested to see how different people approach this challenge,” she said. 

AI has been around for decades, but in recent years, new algorithms, called generative AI systems, have generated a lot of buzz for their ability to string sentences together, write code and create images. That has generated concerns about AI’s possible misuse, but it has led to some overblown assessments of AI’s potential harms, said Ari Herbert-Voss, founder of the AI security company RunSybil. 

By getting hands-on time with these systems, the people at the show will get a clearer sense of their capabilities, he said. “A lot of people are freaking out for no reason.”

Write to Robert McMillan at [email protected]

What's Your Reaction?

like

dislike

love

funny

angry

sad

wow

Media Union

Contact us >