US agency opens probe into security practices of OpenAI

The US Federal Trade Commission opens an inquiry into the security practices of ChatGPT maker OpenAI



The Federal Trade Commission has opened an investigation into OpenAI, the artificial intelligence startup that makes ChatGPT, over whether the chatbot has harmed consumers through its collection of data and its publication of false information on individuals.

In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices. The FTC asked OpenAI dozens of questions in its letter, including how the startup trains its AI models and treats personal data, and said the company should provide the agency with documents and details.

The FTC is examining whether OpenAI “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers,” the letter said.


The investigation was reported earlier by The Washington Post and confirmed by a person familiar with the investigation.

The FTC investigation poses the first major US regulatory threat to OpenAI, one of the highest-profile AI companies, and signals that the technology may increasingly come under scrutiny as people, businesses and governments use more AI-powered products. The rapidly evolving technology has raised alarms as chatbots, which can generate answers in response to prompts, have the potential to replace people in their jobs and spread disinformation.

Sam Altman, who leads OpenAI, has said the fast-growing AI industry needs to be regulated. In May, he testified in Congress to invite AI legislation and has visited hundreds of lawmakers, aiming to set a policy agenda for the technology.


On Thursday, he tweeted that it was “super important” that OpenAI’s technology was safe. He added that the company was “confident we follow the law” and would work with the agency.

OpenAI has already come under regulatory pressure internationally. In March, Italy’s data protection authority banned ChatGPT, saying OpenAI unlawfully collected personal data from users and did not have an age-verification system in place to prevent minors from being exposed to illicit material. OpenAI restored access to the system the next month, saying it had made the changes the Italian authority asked for.

The FTC is acting on AI with notable speed, opening an investigation less than a year after OpenAI introduced ChatGPT. Lina Khan, the FTC chair, has said tech companies should be regulated while technologies are nascent, rather than only when they become mature.

In the past, the agency typically began investigations after a major public misstep by a company, such as opening an inquiry into Meta’s privacy practices after reports that it shared user data with a political consulting firm, Cambridge Analytica, in 2018.

Khan, who testified at a House committee hearing Thursday over the agency’s practices, previously said the AI industry needed scrutiny.

“Although these tools are novel, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with administering, even in this new market,” she wrote in a guest essay in The New York Times in May. “While the technology is moving swiftly, we already can see several risks.”

On Thursday, at the House Judiciary Committee hearing, Khan said: “ChatGPT and some of these other services are being fed a huge trove of data. There are no checks on what type of data is being inserted into these companies.” She added that there had been reports of people’s “sensitive information” showing up.

The investigation could force OpenAI to reveal its methods for building ChatGPT and the data sources it uses to build its AI systems. While OpenAI had long been fairly open about such information, it has more recently said little about where the data for its AI systems comes from and how much of it goes into building ChatGPT, probably because it is wary of competitors copying its work and is concerned about lawsuits over the use of certain data sets.

Chatbots, which are also being deployed by companies like Google and Microsoft, represent a major shift in the way computer software is built and used. They are poised to reinvent internet search engines like Google Search and Bing, talking digital assistants like Alexa and Siri, and email services like Gmail and Outlook.

When OpenAI released ChatGPT in November, it instantly captured the public’s imagination with its ability to answer questions, write poetry and riff on almost any topic. But the technology can also blend fact with fiction and even make up information, a phenomenon that scientists call “hallucination.”

ChatGPT is driven by what AI researchers call a neural network. This is the same technology that translates between French and English on services like Google Translate and identifies pedestrians as self-driving cars navigate city streets. A neural network learns skills by analysing data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
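The learning loop described above can be sketched in a few lines of code. This is a deliberately tiny illustration of the idea, not anything like ChatGPT's actual architecture: a single artificial neuron (a perceptron) nudges its weights whenever it mislabels an example, and the data points standing in for "cat photos" are invented for the sketch.

```python
# Toy illustration of neural-network training: adjust weights so the
# model's predictions match labeled examples. A single neuron, not
# ChatGPT-scale; the data below is invented for this sketch.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w1 += lr * err * x1         # nudge weights toward the label
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, point):
    w1, w2, b = weights
    x1, x2 = point
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# Stand-in for "cat vs. not-cat": points near (1, 1) are label 1.
data = [((0.0, 0.0), 0), ((0.2, 0.3), 0), ((0.9, 0.8), 1), ((1.0, 1.0), 1)]
weights = train_perceptron(data)
```

After training, the neuron classifies new points it never saw, which is the whole point of learning patterns rather than memorizing examples.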

Researchers at labs like OpenAI have designed neural networks that analyse vast amounts of digital text, including Wikipedia articles, books, news stories and online chat logs. These systems, known as large language models, have learned to generate text on their own but may repeat flawed information or combine facts in ways that produce inaccurate information.
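The failure mode described in the paragraph above can be made concrete with a much simpler text model. The sketch below is not a large language model; it is a toy bigram model that only learns which word follows which. But it shows the same flavor of error: trained on two true sentences, it can splice them into a fluent, false one. The training sentences and the splicing picker are invented for the example.

```python
# Toy "language model": learn which word follows which from text, then
# generate new text. Trained on two true facts, it can splice them into
# a fluent but false claim -- the flavor of "hallucination".
from collections import defaultdict

def learn_bigrams(sentences):
    nxt = defaultdict(list)
    for s in sentences:
        words = s.split()
        for a, b in zip(words, words[1:]):
            nxt[a].append(b)   # record every observed continuation
    return nxt

def generate(nxt, start, picker, max_words=8):
    out = [start]
    while out[-1] in nxt and len(out) < max_words:
        out.append(picker(nxt[out[-1]]))   # choose a learned continuation
    return " ".join(out)

facts = ["paris is the capital of france .",
         "rome is the capital of italy ."]
model = learn_bigrams(facts)

# Always picking the last-seen continuation splices the two sentences:
claim = generate(model, "paris", picker=lambda options: options[-1])
# claim == "paris is the capital of italy ." -- fluent, but wrong.
```

Every individual word transition here was seen in real training text; the falsehood emerges only from how they are recombined, which is roughly how a model can "combine facts in ways that produce inaccurate information."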

In March, the Center for AI and Digital Policy, an advocacy group pushing for the ethical use of technology, asked the FTC to block OpenAI from releasing new commercial versions of ChatGPT, citing concerns involving bias, disinformation and security.

The organisation updated the complaint less than a week ago, describing additional ways the chatbot could do harm, which it said OpenAI had also pointed out.

“The company itself has acknowledged the risks associated with the release of the product and has called for regulation,” said Marc Rotenberg, the president and founder of the Center for AI and Digital Policy. “The Federal Trade Commission needs to act.”

OpenAI has been working to refine ChatGPT and to reduce the frequency of biased, false or otherwise harmful material. As employees and other testers use the system, the company asks them to rate the usefulness and truthfulness of its responses. Then, through a technique called reinforcement learning, it uses these ratings to more carefully define what the chatbot will and will not do.

The FTC’s investigation into OpenAI can take many months, and it is unclear if it will lead to any action from the agency. Such investigations are private and often include depositions of top corporate executives.

The agency may not have the knowledge to fully vet answers from OpenAI, said Megan Gray, a former staff member of the consumer protection bureau. “The FTC doesn’t have the staff with technical expertise to evaluate the responses they will get and to see how OpenAI may try to shade the truth,” she said.
