A two-day international summit held in the UK has concluded with an agreement on AI safety, with 28 countries that represent most of the leading forces in AI development getting on board.
A new executive order from the Biden administration addresses a wide range of the potential harms that AI can cause, putting new safeguards in place for everything from biological materials engineering to deepfakes.
AI can become a transformative force in meeting today’s compliance and security needs for GRC teams, provided organizations create a "happy path" that prevents data leakage and empowers developers to use AI safely.
Already under investigation by the data protection authorities (DPAs) of several EU nations, OpenAI is now facing scrutiny in Poland in response to an August GDPR complaint.
The data leak reportedly stems from the activity of two AI researchers whose workstation disk backups were exposed. The backups included some 30,000 messages exchanged with assorted Microsoft team members, along with private keys, login credentials, and other internal secrets.
Setting up the right AI governance is a crucial foundation in these early days of AI. Companies that get governance right will be able to move faster and more confidently in the space, likely outperforming companies that lack the right safeguards to mobilize AI effectively.
Zoom's plan for AI data collection is apparently to scrape it from internal customer activity. The March TOS update changed the platform terms to announce that Zoom reserved the right to use platform video, audio and chat content to train AI models.
With its ability to analyze vast amounts of data quickly and accurately, AI can augment human capabilities and improve overall cybersecurity measures. However, there are also concerns surrounding its development and implementation. One of the biggest concerns is the question of control.
ChatGPT-style generative AI models are being sold that promise to help create malware, write phishing emails, set up attack sites, scan for vulnerabilities, and more. The latest such projects, DarkBART and DarkBERT, have been trained on dark web sites.
The laws and regulations of the future will increasingly be read, analyzed and implemented by AI, or by lawyers augmented with AI, and also by technology and business people, particularly at SMEs that cannot afford lawyers.