Samsung accidentally leaked its secrets to ChatGPT — three times!


Like the rest of the world, Samsung appears impressed by ChatGPT. But the Korean hardware giant trusted the chatbot with far more sensitive information than the average user, and it has now been burned three times. 

AI chatbots hold significant potential in the coding world, and Samsung has, until now, allowed staff in its Semiconductor division to use OpenAI’s bot to fix coding errors. After three information leaks in a month, expect Samsung to cancel its ChatGPT Plus subscription. Indeed, the firm is now developing its own internal AI to assist with coding and avoid further slip-ups. 

One of the leaks reportedly involved an employee asking ChatGPT to optimize test sequences for identifying faults in chips, an important process for a firm like Samsung and one that could yield major savings for manufacturers and consumers. Now OpenAI is sitting on a heap of Samsung’s confidential information; did we mention OpenAI is partnered with Microsoft? 

While that is quite a specialized case, another instance is something ordinary folk should be wary of. One Samsung employee asked ChatGPT to turn notes from a meeting into a presentation, a seemingly innocuous request that has now leaked information to several third parties. This is something we should all consider when using ChatGPT or Google Bard, and with AI’s rapid rise, there is little legal precedent to rely on. 

In its Privacy Policy (which Samsung hopefully read in full), OpenAI notes that “when you use our Services, we may collect Personal Information that is included in the input, file uploads, or feedback that you provide to our Services”. OpenAI also reserves the right to use personal information gathered “for research purposes” and “to develop new programs and services.”

How secure is ChatGPT? 


OpenAI makes no secret of the fact that ChatGPT retains user input data — it is, after all, one of the best ways to train and improve the chatbot. 

While most of us are unlikely to leak confidential information from a multi-billion-dollar company, there are also individual privacy concerns. AI chatbots have grown so fast that there is little regulation governing them. This is all the more worrying given Microsoft’s ambitions to integrate ChatGPT into Office 365, a platform millions use at work every day. 

There are also concerns in the EU that ChatGPT runs afoul of GDPR, and Italy has already banned it outright, although this has simply driven Italians to VPNs. For now, users will have to rely on their own judgment and avoid disclosing personal information wherever they can.


Andy Sansom
Trainee Writer
