Like much of the world, Samsung is clearly impressed by ChatGPT, but the Korean hardware giant trusted the chatbot with far more sensitive information than the average user and has now been burned three times.
The potential for AI chatbots in the coding world is significant, and Samsung has, until now, allowed staff in its Semiconductor division to use OpenAI's bot to fix coding errors. After three information leaks in a month, though, expect Samsung to cancel its ChatGPT Plus subscription. Indeed, the firm is now developing its own internal AI to assist with coding and avoid further slip-ups.
One of the leaks reportedly involved an employee asking ChatGPT to optimize test sequences for identifying faults in chips, an important process for a firm like Samsung, and one that could yield major savings for manufacturers and consumers. Now OpenAI is sitting on a heap of Samsung's confidential information (did we mention OpenAI is partnered with Microsoft?).
While that is quite a specialized case, another instance is something ordinary folk should be wary of: a Samsung employee asked ChatGPT to turn notes from a meeting into a presentation, a seemingly innocuous request that has now leaked information to several third parties. That is something we should all consider when using ChatGPT or Google Bard, and with AI's rapid rise there is little legal precedent to rely on.
How secure is ChatGPT?
OpenAI makes no secret of the fact that ChatGPT retains user input data; it is, after all, one of the best ways to train and improve the chatbot.
While most of us are unlikely to leak confidential information from a multi-billion-dollar company, there are also individual privacy concerns. AI chatbots have grown so fast that there is little regulation. This is all the more worrying given Microsoft's ambitions to integrate ChatGPT into Office 365, a platform millions use at work every day.
There are also concerns in the EU that ChatGPT goes against GDPR, and Italy has already banned it outright, although this has simply driven Italians to VPNs. For now, users will have to use their own judgment and avoid disclosing personal information whenever they can.
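That "use your own judgment" advice can be partly automated. As an illustration only, here is a minimal Python sketch that scrubs obviously sensitive strings (emails, phone numbers, API-key-shaped tokens) from text before it is pasted into any chatbot. The patterns and `[REDACTED ...]` placeholders are assumptions for demonstration, not an official or exhaustive filter, and no regex list can catch every kind of confidential data.

```python
import re

# Illustrative patterns only; real deployments would need a much broader set.
# Order matters: API keys are redacted before phone numbers so the digits
# inside a key are not mistaken for a phone number.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace anything matching a known sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

notes = "Ping jane.doe@example.com, key sk-abcdef1234567890ABCD, call +82 2 1234 5678."
print(redact(notes))
```

Run over meeting notes before they go anywhere near a third-party service, this at least stops the most obvious identifiers from leaving the building; it does nothing, of course, for trade secrets expressed in plain prose.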