Business, Labor Square Off Over AI’s Future in American Workplace

ChatGPT investigation by FTC could be test of U.S. authority to regulate AI

By Ryan Tracy | Updated July 19, 2023 8:19 am ET

The Federal Trade Commission is asking questions about OpenAI’s policies for selling access to its AI systems.

Photo: Eric Lee for The Wall Street Journal

WASHINGTON—The Federal Trade Commission’s investigation into the ChatGPT app points to an emerging conflict over how Washington should regulate artificial intelligence, one that could pit some of America’s biggest businesses against labor unions and progressive advocacy groups.

Businesses want to use systems like ChatGPT, which can instantly generate media and imitate human conversation, to cut the number of employees needed to write documents or answer calls. AI-driven bots could also open up new markets by creating individually customized ads or even pitching customers in live conversation. 

Labor unions, privacy advocates and consumer groups see AI’s potential benefits, too. But they fear that AI will eliminate jobs and downgrade working conditions. If an AI system were trained to persuade, bad actors could feed it a person’s private data and use it to manipulate or defraud, some warn.

Now, as the U.S. government takes its first tentative steps toward regulating AI, the Biden administration’s close ties to labor and progressive groups have some in business and tech concerned that the regulatory push will go too far, stunting the development and use of a technology seen as crucial to powering the U.S. economy in the future.

Those fears were stoked by the disclosure last week of the FTC probe of ChatGPT creator OpenAI. The agency is asking detailed questions about OpenAI’s policies for selling access to its AI systems to other businesses, among other topics.

“The regulatory uncertainty and overreach that could come with such an approach would significantly hamstring America’s ability to compete and deploy societally beneficial uses of AI,” said Jordan Crenshaw, a senior vice president at the U.S. Chamber of Commerce.

How businesses and consumers can harness the benefits while avoiding abuses of the powerful technology is at the heart of the regulatory debate. OpenAI’s GPT and other so-called generative AI systems could power virtual agents tuned to the desires of specific individuals, with the ability to converse with humans in real time, upending how Americans work, shop and travel. 

“Throughout our digital lives, we will just talk to our computers,” Louis Rosenberg, chief executive of AI developer Unanimous AI, said at a recent conference hosted by the advocacy group Public Citizen. 

That will create opportunities for businesses to exercise “targeted, customized, influence at scale,” Rosenberg said. But if left unregulated, such a tool “could be the most dangerous technology for human manipulation that we’ve had to confront,” he added.  

The FTC hasn’t said it intends to write a broad AI regulation. In drafting both a national strategy on AI and recommendations for “accountability measures,” Biden administration officials have repeatedly said they want to strike a balance between fostering innovation and preventing harm.  

Labor is a key constituency of the Biden administration, and union representatives met with White House officials July 3 to articulate their concerns.

Unions are concerned not only about job losses but also about companies using AI applications to keep tabs on workers outside their jobs. An AI-driven system might, for example, identify a group of workers carrying their employer-issued smartphones to a union organizing meeting, according to Amanda Ballantyne, director of the AFL-CIO’s technology institute.

“You hear stories from workers about having their time in the bathroom tracked,” she said. 

Civil-rights groups are also concerned about racial or gender bias creeping into business decisions when companies use AI as a sort of automated consultant, a practice that is growing in fields such as healthcare, recruitment and finance. 

Some of these groups pushed successfully for a New York City law that requires bias audits of the AI-driven systems that many companies now use to find recruits for job openings.

Similar regulatory ideas are gaining traction in Washington. Earlier this year, an arm of the Commerce Department sought comments on possible accountability measures for AI systems, including whether potentially risky new AI models should go through a certification process before they are released. 

Auditing firm PwC filed a comment with the Commerce Department praising the benefits of third-party audits.

Photo: Richard B. Levine/Levine Roberts/ZUMA Press

“Whatever guardrails are set around AI, it needs to work not just for the largest players but the entire ecosystem,” said Doug Johnson, a vice president at the Consumer Technology Association, a trade group that represents more than 1,000 tech companies of various sizes.

The Blue Cross Blue Shield Association, a group of health insurers that consistently ranks among Washington’s top lobbying spenders, told the Commerce Department that officials should consider the costs to businesses before adopting a requirement that major or high-risk AI systems be audited by a third party.

Third-party audits can be “resource-intensive, difficult to obtain, and often cost-prohibitive,” the group wrote. 

On the other side of the ledger, auditing giant PwC filed a comment praising the benefits of audits, which the company itself might be retained to conduct. “We know that information that is subject to third-party assurance garners more trust,” PwC wrote. 

Microsoft and OpenAI have called on the U.S. to develop a new AI regulatory program, including by setting up a new federal agency to license powerful AI systems. 

That sentiment isn’t shared by everyone in the tech industry. 

Google has said it opposes a new ‘Department of AI.’

Photo: David Paul Morris/Bloomberg News

Alphabet’s Google, which offers AI systems that compete with Microsoft and OpenAI, said in its comment that it opposes a new “Department of AI” and favors instead “a hub-and-spoke approach—with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators” overseeing specific industries. NIST recently published a voluntary framework for managing AI risks. 

“AI is too important not to regulate,” Google said, but it added: “Any new regulations should avoid impairing the operation of systems that deliver significant value for Americans.” 

Besides the Commerce Department effort, both Senate Majority Leader Chuck Schumer (D., N.Y.) and the White House’s Office of Science and Technology Policy have launched efforts to gather public input in the name of shaping future rules. 

That sets up what is expected to be a fresh gold mine for Washington lobbyists. 

“Some of the largest companies in America getting involved in the AI debate means that a lot more money and influence is going to get thrown around,” said Adam Thierer, a senior fellow on technology and innovation at the R Street Institute think tank. “There is now a cacophony of voices to hear from.”

Write to Ryan Tracy at [email protected]
