OpenAI’s recently launched ChatGPT macOS app has faced scrutiny due to a significant security flaw that left user conversations vulnerable. The app, released just last week, stored chat logs in plain text on users’ computers until an update was rolled out on June 28th. This oversight meant that anyone with access to the device, including potential malicious actors or unauthorized applications, could easily read these conversations.
Developer Pedro José Pereira Vieito first exposed the issue on social media, demonstrating how straightforward it was to open the unencrypted files and display conversation text in real time. Because the app did not adopt the macOS App Sandbox, which confines each app's data to protect it from other processes, it gained flexibility but gave up a key safeguard against other software reading its stored data.
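To make the risk concrete, the sketch below shows what "plain text plus no sandbox" implies: any process running as the same user could dump such a file and watch it for new messages. The file path, app name, and format here are hypothetical, not OpenAI's actual storage layout; this is only an illustration of why unencrypted, unsandboxed app data is readable by other software on the machine.

```swift
import Foundation

// Hypothetical path: the real location of the app's conversation store is not
// reproduced here. Any unsandboxed process running as the same user could read
// a plain-text file like this without special entitlements or permissions.
let logURL = FileManager.default
    .homeDirectoryForCurrentUser
    .appendingPathComponent("Library/Application Support/ExampleChatApp/conversations.json")

// One-shot read: plain text means there is no decryption step to get past.
if let contents = try? String(contentsOf: logURL, encoding: .utf8) {
    print(contents)
}

// Watch the file for writes to mirror new messages as they are saved.
let fd = open(logURL.path, O_EVTONLY)
if fd >= 0 {
    let source = DispatchSource.makeFileSystemObjectSource(
        fileDescriptor: fd, eventMask: .write, queue: .main)
    source.setEventHandler {
        if let updated = try? String(contentsOf: logURL, encoding: .utf8) {
            print(updated)  // re-dump the file whenever it changes
        }
    }
    source.setCancelHandler { close(fd) }
    source.resume()
    RunLoop.main.run()
}
```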
After outlets including The Verge flagged the problem, OpenAI released an update that encrypts stored conversations, so they can no longer be read as plain text by other software on the machine. In a statement, OpenAI spokesperson Taya Christianson affirmed the company's commitment to maintaining high security standards while delivering a user-friendly experience.
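OpenAI has not described exactly how the update protects stored chats, but a common pattern on macOS is to encrypt the log at rest with an authenticated cipher and keep the key out of reach of other processes, for example in the Keychain. The sketch below, using Apple's CryptoKit, illustrates that general approach under those assumptions; it is not a description of the shipped fix.

```swift
import Foundation
import CryptoKit

// Generic illustration only: encrypt a chat log at rest with AES-GCM.
// In a real app the key would be created once and stored in the Keychain,
// not regenerated on every launch as it is here.
let key = SymmetricKey(size: .bits256)

func sealConversation(_ plaintext: String, with key: SymmetricKey) throws -> Data {
    // The combined blob bundles nonce, ciphertext, and authentication tag.
    let box = try AES.GCM.seal(Data(plaintext.utf8), using: key)
    return box.combined!
}

func openConversation(_ blob: Data, with key: SymmetricKey) throws -> String {
    let box = try AES.GCM.SealedBox(combined: blob)
    let plaintext = try AES.GCM.open(box, using: key)
    return String(decoding: plaintext, as: UTF8.self)
}

// Round trip: the blob written to disk is unreadable without the key.
do {
    let blob = try sealConversation("user: hello\nassistant: hi there", with: key)
    let restored = try openConversation(blob, with: key)
    print(restored)
} catch {
    print("encryption round trip failed: \(error)")
}
```

Even with encryption at rest, anything that can obtain the key or read the app's memory can still reach the chats; the main effect is to stop other local software from trivially reading the files off disk.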
This incident underscores broader concerns about data privacy in AI applications. OpenAI may review conversations for safety and model improvement with user consent, but the initial lack of encryption also exposed that data to anything else running on the device. Users should update their ChatGPT macOS app to ensure their stored conversations are protected.