
New X Privacy Policy Promises No Non-Public Personal Data Use in AI Models, Requires Consent for Biometric Info

The platform formerly known as Twitter has updated its privacy policy, and with it comes an assurance that AI models will not be trained on private user data. The update also introduces several other changes, including new restrictions on collecting biometric information or employment history without express user consent.

Elon Musk followed the update with a statement affirming that X’s machine learning and AI models would not be trained on private and confidential information, such as direct messages. However, Musk did indicate that “public data,” such as posts (formerly tweets), would be used as the company seeks to build its own competitor to the offerings of OpenAI, Microsoft and others.

New privacy policy: Direct messages and profiles protected, public posts are fair game

The new privacy policy’s terms do not go into effect until September 29. The update followed a late August Bloomberg report indicating that the company planned to collect biometric data for its paid “X Premium” subscription service, giving users the option of uploading a selfie and photo ID to receive a verification badge.

Musk’s shift to the “X” branding of the platform is part of a long-term plan to create an “everything app” that moves beyond messaging into areas such as finance and transportation. One of the next steps appears to be an encroachment on LinkedIn’s territory as a hub for professional employment and business networking: collecting employment information from user accounts and feeding it into a recommendation engine that can suggest new jobs. However, Musk has clarified that both this feature and the use of biometric information will require user consent before anything is touched by the company’s AI models.

Musk’s statement and the privacy policy update appear to indicate that any public X posts, formerly known as tweets, are fair game for the AI models to train on. Musk has accused other AI companies of scraping Twitter for this purpose, and that appeared to be the central motivating factor behind unpopular changes to the platform that began rolling out in July. After several weeks of requiring a login to view any posts, the platform now requires the specific URL of a post to embed it in outside sources or view it in a browser, and replies are not displayed when a post is accessed this way.

Musk’s grand plan for the AI models appears to be a separate company he calls xAI, which he announced in July with the claim that its ultimate purpose would be to “understand the true nature of the universe.” While there is no timetable yet for unlocking the secrets of the cosmos, Musk has said that in the near term the AI project will differentiate itself by seeking “maximum truth telling.” Musk reportedly bought some 10,000 GPUs and hired two former DeepMind researchers for the project in April.

Total range of user data that X AI models will access still unclear

While the privacy policy specifies that biometric and employment information will not be used without consent, and implies that all public posts will be put to use, X has not yet laid out exactly what data will be used and how. X, which reportedly answers all press requests with nothing but an automated poop emoji, has yet to reply to any questions.

A Bloomberg Law analysis indicates that X will need to provide much more detail than the privacy policy currently outlines, or it risks running afoul of numerous state-level data privacy laws.

Biometric data is likely to be a particular problem. A number of other tech firms have already been hit with substantial penalties in Illinois for failing to obtain adequate user consent before collecting pictures from social media, and several other states now have laws that could be leveraged in similar ways. X is already facing a possible class action in the state, though that suit focuses on the PhotoDNA software the platform uses to automatically detect explicit images by way of unique hashes assigned to every upload (including uploads containing people’s faces).
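PhotoDNA’s actual algorithm is proprietary, but it belongs to the family of perceptual hashes: fingerprints that stay stable when an image is resized or recompressed, unlike cryptographic hashes, which change completely on any edit. The toy sketch below uses a simple “average hash” (not PhotoDNA’s method) on a grayscale image represented as a 2D list of 0–255 values; all names and the synthetic images are illustrative.

```python
# Toy perceptual hash (average hash) to illustrate the general idea behind
# tools like PhotoDNA. Not the real PhotoDNA algorithm, which is proprietary.

def average_hash(pixels, hash_size=8):
    """Downscale the image to hash_size x hash_size by block averaging,
    then emit one bit per cell: 1 if the cell is brighter than the mean."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // hash_size, w // hash_size
    cells = []
    for by in range(hash_size):
        for bx in range(hash_size):
            block = [pixels[y][x]
                     for y in range(by * bh, (by + 1) * bh)
                     for x in range(bx * bw, (bx + 1) * bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if c > mean else 0 for c in cells]

def hamming(a, b):
    """Count differing bits; a small distance flags a near-duplicate image."""
    return sum(x != y for x, y in zip(a, b))

# A synthetic 64x64 "image": bright left half, dark right half.
img = [[200 if x < 32 else 40 for x in range(64)] for y in range(64)]
# A slightly perturbed copy, standing in for a recompressed re-upload.
noisy = [[min(255, p + ((x + y) % 5)) for x, p in enumerate(row)]
         for y, row in enumerate(img)]

h1, h2 = average_hash(img), average_hash(noisy)
print(hamming(h1, h2))  # small distance: the near-duplicates hash alike
```

Because small pixel-level changes barely shift each block’s average, the two copies produce nearly identical bit strings, which is what lets a platform match re-uploads of a known image without storing the image itself.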

All of the major tech platforms in the chatbot market are dealing with multifaceted concerns about the training of their AI models. Some, such as Zoom, have similarly updated their privacy policies to reassure users that private information will not be used for training. Others have taken a different tack: Google, for example, updated its privacy policy to make the apparent claim that it is entitled to use everything its search engine can reach on the internet as AI training material. Meta takes an approach somewhere in the middle, allowing users to access and control the training data collected from third parties in states that legally compel it to.