Fortune 500 companies continue to demonstrate extreme wariness of “AI chatbots” and similar AI tools in the workplace, and Apple has now banned employees from using ChatGPT on work devices. An internal report claims that the ChatGPT ban was actually implemented months ago, and that the company has issued similar prohibitions for tools like GitHub’s Copilot.
Some recent incidents have demonstrated that these fears are well-founded, such as the case of Samsung employees pasting confidential code and internal meeting notes into a version of ChatGPT that retains all input as training data for OpenAI’s suite of products.
Major corporations taking very conservative approach to AI with ChatGPT bans
Touted as everything from a revolutionary tool of innovation to a “job-killer” that would render millions of positions obsolete within months, ChatGPT has thus far been embraced by business only hesitantly at best. In part this is due to spotty and unreliable performance at certain tasks, but it is also due to a simple inability to guarantee that internal company information remains confidential when hundreds or thousands of employees are regularly feeding material into it.
One risk is that OpenAI employees, potentially including low-skilled contractors scattered around the world who are not necessarily vetted thoroughly, may end up with access to this stored information. Another is that it might be leaked once it is logged and stored, whether via malicious hacking or something like a database misconfiguration (something that has also already happened). But perhaps the biggest risk is that ChatGPT will simply repeat it to someone else when asked a question. Some studies have already documented targeted attacks that specifically prompt AI chatbots to divulge confidential information they have encountered.
These are the sorts of concerns that have prompted other ChatGPT bans, but Apple may have an even more specific reason: it is reportedly developing its own similar language generation AI app, and wants to eliminate any chance of internal research and development making its way to the competition. The news of Apple’s ChatGPT ban breaks just as OpenAI launches an iOS ChatGPT app that supports voice input.
Data handling policies of chat AI platforms less clear, less scrutinized than social media and websites
Apple’s App Store Guidelines do not currently have a specific policy for generative AI apps. However, judging by recent actions, Apple appears to be targeting those that could be used by children. BlueMail, a popular email and calendar app, saw its age requirement raised from 4 to 17 after it added a ChatGPT element that helps generate emails based on prior samples. If this is a policy, though, it is not yet consistently applied, as other apps that have incorporated ChatGPT (such as Bing) remain available to all ages.
Generative AI tools have developed so quickly that regulation is still catching up with them. The first real pressure on OpenAI came from Italy, which threatened a nationwide ChatGPT ban if the app did not make changes to bring it into compliance with General Data Protection Regulation (GDPR) terms. Among other things, that warning pushed OpenAI to add the ability to disable chat history to the standard version of ChatGPT last month. However, no matter what the end user does, ChatGPT still stores conversations for 30 days (to curtail abuse of the app) before they are permanently removed.
In addition to the ChatGPT ban, GitHub’s Copilot is reportedly off the table for Apple employees as well. Copilot gives users of GitHub’s various integrated development environments (IDEs) the ability to have code auto-completed by AI, and supports a variety of languages (such as Python and Ruby). While Copilot is an independent generative AI project that has been available since 2021 and does not presently use ChatGPT, the upcoming Copilot X is slated to incorporate GPT-4. ChatGPT’s troubles extend beyond the Samsung employees plugging sensitive code into it: Stack Overflow implemented a ChatGPT ban in December after a sudden flood of low-quality answers was called “substantially harmful.”
Fears about ChatGPT glitching or suffering an accidental database exposure were also borne out in March, when an issue with the app’s queue system exposed users’ chat titles to others, and a percentage of paid subscribers who were logged in at the time had sensitive billing information emailed to the wrong people.
Fortune 500 companies that have implemented ChatGPT bans for employees include Amazon, Calix, Northrop Grumman, and Verizon. Samsung had originally implemented a ban, lifted it, and then suffered the employee incidents within the space of a few weeks. It has since put the ban back in place.
No industry has banned ChatGPT more widely than the financial sector, however, where a long list of major banks and investment firms were quick to restrict it internally. Morgan Stanley has already commissioned its own independent variant of the chatbot from OpenAI for internal use, and Microsoft is looking into building a similar product that could run on company intranets without passing information to outside sources.