The future of data is not about how much we collect, but how ethically it is used and how we can realistically safeguard it so that we get the best out of AI without violating data privacy tenets.
Data Privacy
Technological development has always outpaced privacy protections, but never more so than in the past decade. Collection and centralization of personally identifiable information (PII), tracking of movements and digital surveillance are all at unprecedented levels. Regulations and laws are only just beginning to catch up to the ability of both governments and private entities to deploy these capabilities.
What exactly is there to worry about? The mass collection and centralization of data by giant multinationals such as Facebook and Google is as good a place to start as any. Two decades of vacuuming up the personal data of users of various online services has created the most impressive marketing capabilities in history, but these profiles have astounding potential for damage when they are used the wrong way or fall into the wrong hands.
Information captured in data breaches tends to find its way into massive “combo lists” that are sold and traded on the dark web. Social security numbers are added from this breach, home addresses and phone numbers from that one, personal health information from yet another. Soon, a frighteningly complete profile of millions of individuals is available to anyone willing to pay the asking price.
These are just the established data privacy issues. The emerging ones are even worse. High-quality facial recognition technology is just beginning to roll out across the public places of some countries. Artificial intelligence is not only making mass facial recognition possible, but magnifying the power and reach of any application that involves capturing and sorting information: scanning pictures, analyzing speech, sifting through text and location data. This threatens not only to shatter anonymity and privacy, but also to enable highly advanced impersonation, taking the concept of “identity theft” to new levels.
Some businesses chafe at the trouble and added expense of new and emerging data privacy regulations, but these rules are vital both to protecting rights and privacy and to instilling confidence in end users. Customers want to be able to submit their payment information without worrying about data breaches and identity theft, use services without wondering what is being done with their personal information, and use devices without fear of surveillance or location tracking. The need for meaningful safeguards only grows greater as technological capabilities increase.
Senator Ron Wyden says both Apple and Google are complying with foreign government requests for push notification data, which can facilitate government surveillance by disclosing the apps a user has installed, the Google or Apple account associated with the phone, and potentially even the text displayed in the notification (if it is not encrypted).
With so many organizations incorporating generative AI tools into their operations, ethical AI adoption can increase competitive advantage, simplify tool testing, boost end-user adoption and more.
Drawing on terms first proposed in a series of stalled-out data privacy bills that date back to at least 2018, the Government Surveillance Reform Act of 2023 (GSRA) narrows the focus specifically to warrantless government interception at all levels from federal to local.
The annual Cisco Consumer Privacy Survey, a study including the opinions of over 2,600 respondents of varying demographics in 12 countries, indicates that consumer awareness of data privacy rights is continuing to grow and that AI has some work to do to earn public trust.
A temporary moratorium on the use of facial recognition technology in state schools is now a matter of law in New York, following the conclusion of a study that found that potential rights violations outweighed the safety benefit.
A shift from data protection as a burdensome obligation to a framework of privacy by design delivers three big results: lower costs to adapt to new legislation, greater consumer confidence and trust, and reduced risk for a business when inevitable mishaps occur.
A government investigation of Elon Musk's tenure as leader of Twitter has determined that there may be violations of a 2022 FTC order that required certain privacy and security measures be implemented.
Privacy tests have found that every connected car brand collects more personal data than it needs and uses it for non-essential purposes. The vast majority are sharing or selling customer data.
The most controversial portion of the UK's Online Safety Bill appears to be dead in the water, as Ofcom has publicly admitted that the technology to create backdoors into encrypted messaging without breaking it does not exist.