Pressure from privacy advocates and government regulators (most notably the ban in Italy) seems to have prompted some improvements to ChatGPT, with the company now allowing users to disable chat history. However, any prior conversations remain logged and available to the company’s AI models, and chats will still be retained for 30 days before deletion in order to “fight abuse.”
Limited ability to disable chat history added to ChatGPT
Users will have to opt out of having chat history logged via the settings menu, and this will apply only to conversations that take place after the setting is changed. Prior conversations remain available for training the company's AI models (which may extend to products other than ChatGPT).
Conversation logging is also not exactly “disabled” in the strictest sense of the term, as ChatGPT continues to record and hold interactions for 30 days regardless of user settings. The company says that this is only to scan for situations where abuse is suspected and that the logs will be permanently deleted at the end of this period.
ChatGPT users can see what has been logged to this point by using the "export data" option, which delivers a file (via email) containing all of the information that the company's AI models have available to them. Those who want to keep chat history enabled but opt out of having it used for training will eventually have that option, but only by subscribing to the forthcoming premium ChatGPT Business package. OpenAI says this option will arrive in "the coming months," but has not yet announced a price point or explained how the feature set will differ from the existing ChatGPT Plus subscription (which costs $20 per month) or business API access (which costs roughly $0.002 per 750 words of input, but is currently limited to version 3.5 of the software unless one of a limited number of invitations to version 4 is obtained).
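To put that API pricing in perspective, cost scales linearly with input length. A minimal sketch of the arithmetic, assuming the quoted rate of $0.002 per roughly 750 words (actual billing is per token, and rates may change):

```python
# Rough input-cost estimate for the business API at the rate quoted above:
# about $0.002 per ~750 words of input. This is an approximation; real
# billing is per token, not per word.
RATE_USD = 0.002
WORDS_PER_UNIT = 750


def estimate_input_cost(word_count: int) -> float:
    """Estimate the input cost in USD for a given word count."""
    return (word_count / WORDS_PER_UNIT) * RATE_USD


# Example: a 15,000-word batch of input works out to about $0.04.
print(f"${estimate_input_cost(15_000):.3f}")
```

At these rates, even heavy text workloads cost pennies in input fees, which helps explain the API's appeal relative to the flat $20/month Plus subscription.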
AI models forced to address swirling legal issues
A number of other AI models are now available, but ChatGPT has become the “household name” of the bunch by virtue of being the first (and seemingly still the most competent and versatile overall). It has thus also drawn the greatest share of scrutiny from activists and legislators, and its mistakes are providing material for the foundation of generative AI regulation.
The biggest issue thus far was a mid-March data breach in which a caching bug exposed users' chat history titles to other users, and about 1% of ChatGPT Plus subscribers had sensitive billing information emailed to the wrong person. This prompted Italy to ban the chatbot until OpenAI makes prescribed security and transparency changes, and sparked discussion of similar bans in several other nations.
Other issues stem from users failing to heed precautions about what should and should not be plugged into the system. Samsung made the news when several of its employees entered proprietary code and internal meeting information into ChatGPT for code reviews and reformatting, apparently unaware that the chatbot was logging all of this input and using it as material for training the parent company's AI models. Concerns of this nature have prompted some major companies, including much of the financial industry, to ban ChatGPT and similar AI models from the workplace for the time being.
ChatGPT and other AI models are also already facing a variety of lawsuits, long before they even make it out of their formal "testing" phases. Getty Images, one of the world's largest providers of stock photography to media, is suing Stable Diffusion creator Stability AI for use of Getty's protected images in its training data. Similarly, the GitHub Copilot project is being sued by numerous open source developers over use of their code as training material. And a mayor in Australia is suing OpenAI for defamation, in the first case of its kind, after ChatGPT falsely claimed that he served time in prison as part of a foreign bribery scandal.
AI models also face an inherent dilemma: their appeal is that the learning algorithm is supposed to be flexible and responsive, developing new capabilities as it goes, but regulation and safety concerns force the placement of "guardrails" to block certain queries. People have continually found ways to slip past ChatGPT's guardrails via clever prompts, one of the most famous of which simply told the chatbot to roleplay as an unrestricted system called "DAN" and answer questions from that perspective.
Many of the companies behind rival AI models are watching what happens to ChatGPT carefully, as some have yet to implement features like chat history. The option was only recently added to Microsoft's own Bing chatbot; others, such as Snapchat, rolled their products out with the ability to delete chat history from the start.