Concerns about the security and privacy of generative AI tools have grown following reports of leaked conversations in OpenAI’s ChatGPT. The incident has raised questions about vulnerabilities in AI systems, despite companies’ efforts to build in security measures. Sensitive data, including usernames and passwords, was apparently exposed to unrelated users during their sessions.
The leaked conversations reportedly included details of another user’s proposals and presentations, a major violation of OpenAI’s privacy policies. According to the complaint, the incident occurred even though the affected user had a strong password and other security measures in place.
According to OpenAI, the leak stemmed from an attacker who had compromised the user’s account; the offending conversations appear to have originated from Sri Lanka rather than the user’s actual location in Brooklyn, New York. This is not the first such incident: in March 2023, a ChatGPT bug was discovered that exposed some users’ payment data.
In another instance, confidential Samsung data was exposed through employees’ use of ChatGPT, prompting the company to ban the tool internally. Leading AI companies such as OpenAI, Google, and Anthropic need to maintain rigorous security postures and adopt concrete safeguards against such risks, as Hackdra’s UlgenAI does.