When Samsung Leaked Secrets to ChatGPT

  • AI
  • Legal
  • Technology
  • By Arno

ChatGPT launched in November 2022 and took the world by storm. By early 2023, companies everywhere were asking the same question: should we let our employees use this?

Samsung said yes. It only took three weeks to regret it.

In March 2023, Samsung’s semiconductor division lifted a ban on ChatGPT, giving engineers access to the AI tool. The goal was simple: boost productivity and keep staff engaged with the latest technology. But within days, employees had fed sensitive company data into the chatbot — and there was no way to get it back.

Three Incidents, One Problem

At least three separate leaks occurred in quick succession.

In the first case, an engineer working on a semiconductor database download program ran into an error. They copied the proprietary source code into ChatGPT and asked the AI to fix it. The code — containing trade secrets from Samsung’s semiconductor business — was now on OpenAI’s servers.

In the second case, another employee uploaded program code designed to identify defective equipment. They asked ChatGPT to optimise it. Again, confidential information left Samsung’s control.

In the third, an employee fed the transcript of a recorded internal meeting into the chatbot and asked it to generate minutes. Internal discussions about hardware performance and yield data were shared with an external AI system.

Why This Was a Problem

Many people treat ChatGPT like a search engine. You ask a question, it gives an answer, and the conversation ends. But that is not how the system works.

At the time, ChatGPT retained user conversations by default and could use them to train and improve its models. Unless users explicitly opted out, their inputs could become part of the training data. OpenAI’s own guidance warned users not to share sensitive information, noting that it was “not able to delete specific prompts from your history.”

This meant Samsung’s trade secrets were potentially incorporated into the model itself. There was no mechanism to selectively remove that data. Once the source code and meeting notes entered the system, Samsung lost control of them.
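To make that data flow concrete, here is a minimal Python sketch using the current OpenAI Python SDK. The model name and the “proprietary” snippet are placeholders; the point is simply that the prompt leaves the corporate network the moment the request is sent, and what happens to it afterwards is governed by the provider, not the company.

```python
from openai import OpenAI

# The client talks to OpenAI's hosted service: everything in the prompt
# below leaves the corporate network and lands on external servers.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

proprietary_code = """
def calibrate_yield(wafer_batch):  # placeholder standing in for trade-secret code
    ...
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user",
         "content": f"This function throws an error. Please fix it:\n{proprietary_code}"},
    ],
)
print(response.choices[0].message.content)

# Whether that content is retained or used for training depends entirely on the
# provider's policies and account settings. Once sent, the company has no
# technical means to recall it.
```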

Samsung’s Response

When Samsung discovered the leaks, the company moved quickly. First, it imposed an emergency measure limiting each ChatGPT prompt to 1,024 bytes. The idea was to prevent employees from uploading large blocks of code or lengthy documents.
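Purely as an illustration (this is not Samsung’s actual mechanism), a byte cap like that could be enforced with a simple check in an internal proxy before any prompt is forwarded to an external chatbot:

```python
MAX_PROMPT_BYTES = 1024  # emergency cap reported in the Samsung case

def check_prompt_size(prompt: str) -> None:
    """Reject prompts larger than the allowed byte limit before they leave the network."""
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise ValueError(
            f"Prompt is {size} bytes; the limit is {MAX_PROMPT_BYTES}. "
            "Large code blocks and documents must not be sent to external AI tools."
        )

# A short question passes; a pasted source file would be blocked.
check_prompt_size("Why does this regex fail on Windows paths?")      # OK
# check_prompt_size(open("defect_detector.py").read())               # raises ValueError
```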

That was a temporary fix. The real response came in May 2023. Samsung issued a company-wide ban on generative AI tools across all company-owned devices and networks. The ban covered ChatGPT, Google Bard, Microsoft Bing, and any similar service.

Employees were told that future violations could result in access being blocked entirely at the network level. Samsung also announced plans to develop its own in-house AI tools for software development and translation — systems that could offer the benefits of AI without the data security risks.

The Ripple Effect

Samsung was not the only company caught off guard. The incident made headlines worldwide and forced businesses to confront a question many had been avoiding: what happens to the data we put into AI tools?

Other tech giants took notice. SK hynix blocked chatbot access on its internal network entirely. LG Display launched employee education campaigns about AI security risks. Amazon and Walmart issued warnings to staff about sharing sensitive information. JPMorgan Chase and other major banks restricted or blocked ChatGPT altogether.

A survey conducted by Samsung after the incident found that 65% of participants believed generative AI tools carried security risks. Despite the enthusiasm for the technology, the trust gap was significant.

Lessons for Your Business

The Samsung case offers three clear takeaways for any organisation using or considering AI tools.

Treat AI Inputs as Public

Anything entered into a public AI chatbot should be treated as if it could appear on the front page of a newspaper. If you would not post it on social media, do not paste it into ChatGPT.

Create Clear AI Policies

Do not assume employees understand the risks. Samsung’s engineers were not trying to leak data — they were trying to be productive. Clear guidelines about what can and cannot be shared with AI tools are essential.

Consider Private AI Alternatives

For businesses that handle sensitive data, public AI tools may not be appropriate. Samsung opted to develop its own in-house systems. Smaller companies can look at enterprise-grade AI solutions with data privacy guarantees.
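As one sketch of what “private” can look like in practice: several self-hosted model servers (Ollama and vLLM, for example) expose OpenAI-compatible endpoints, so the same client code can be pointed at infrastructure the company controls rather than a public service. The hostname and model name below are placeholders.

```python
from openai import OpenAI

# Same SDK as before, but base_url points at a self-hosted, OpenAI-compatible
# server inside the company network, so prompts never leave infrastructure
# the business controls.
client = OpenAI(
    base_url="http://ai.internal.example.com:11434/v1",  # placeholder internal endpoint
    api_key="not-needed-for-local",                       # many local servers ignore the key
)

response = client.chat.completions.create(
    model="llama3",  # placeholder: whatever model is deployed in-house
    messages=[{"role": "user", "content": "Summarise these meeting notes: ..."}],
)
print(response.choices[0].message.content)
```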

The Bottom Line

The Samsung ChatGPT leak was not caused by malice. It was caused by ignorance — of how the technology works, of where the data goes, and of what companies stand to lose when trade secrets leave their control.

AI tools are powerful productivity aids. But they come with risks that many businesses have not fully understood. The Samsung incident was a wake-up call. The question is whether your business will learn from someone else’s mistake or your own.

