Already under investigation by the data protection authorities (DPAs) of several EU nations, OpenAI is now facing scrutiny in Poland in response to an August GDPR complaint.
The data leak reportedly stems from two AI researchers whose workstation disk backups were exposed. The backups included some 30,000 internal Microsoft Teams messages in addition to private keys, login credentials and other internal secrets.
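Backups like these are dangerous precisely because credentials tend to hide in ordinary files. As a minimal sketch of the kind of audit that surfaces them (the mount point and regex patterns below are illustrative assumptions, not details from the actual incident):

```python
# Illustrative secret scanner; patterns and paths are hypothetical examples.
import re
from pathlib import Path

# Common fingerprints of leaked credentials (far from exhaustive).
SECRET_PATTERNS = {
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_backup(root: str) -> list[tuple[str, str]]:
    """Walk a backup directory and flag files matching secret patterns."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), name))
    return hits

if __name__ == "__main__":
    for file, kind in scan_backup("/mnt/backup"):  # hypothetical mount point
        print(f"{kind}: {file}")
```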
Setting up the right AI governance is a crucial foundation in these early days of AI. Companies that get governance right will be able to move faster and more confidently in the space, likely outperforming companies that lack the safeguards to mobilize AI effectively.
Zoom's plan for AI data collection is apparently to source it from customer activity on its own platform. The March TOS update announced that Zoom reserved the right to use platform video, audio and chat content to train AI models.
With its ability to analyze vast amounts of data quickly and accurately, AI can augment human capabilities and improve overall cybersecurity measures. However, there are also concerns surrounding its development and implementation, chief among them the question of control.
Generative AI models in the style of ChatGPT are being sold that promise to help create malware, write phishing emails, set up attack sites, scan for vulnerabilities, and more. The latest such projects, DarkBART and DarkBERT, were reportedly trained on data from the dark web.
The laws and regulations of the future will increasingly be read, analyzed and implemented by AI or by lawyers augmented with AI, and also by technology and business people, particularly at SMEs that cannot afford lawyers.
A growing number of organizations are beginning to recognize AI's potential to dramatically improve cybersecurity training, boosting efficiency in areas like content development, analytics, and accessibility.
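To make the content-development point concrete, here is a minimal sketch of drafting security-awareness quiz material with a general-purpose model. The calls use the openai Python SDK; the model name and prompts are illustrative assumptions, not recommendations.

```python
# Hedged sketch of AI-assisted training content development.
# Assumes the openai SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_quiz_question(topic: str) -> str:
    """Ask a general-purpose model to draft one security-awareness quiz item."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever your org approves
        messages=[
            {"role": "system",
             "content": "You write multiple-choice security-awareness quiz questions."},
            {"role": "user",
             "content": f"Write one question with four options about: {topic}"},
        ],
    )
    return response.choices[0].message.content

print(draft_quiz_question("recognizing phishing emails"))
```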
AI-based information risk assessment is allowing companies to mitigate potential security risks and even predict future attacks with greater speed and accuracy than they could ever achieve through manual processes.
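As a deliberately simplified illustration of machine-driven risk scoring, an anomaly detector can rank events faster than manual triage would allow. The features, synthetic data, and library choice (scikit-learn's IsolationForest) below are assumptions made for the sketch, not a description of any vendor's product.

```python
# Minimal anomaly-based risk scoring sketch using scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic history of benign events.
# Columns: failed_logins, bytes_out_mb, hour_of_day.
rng = np.random.default_rng(0)
normal_events = np.column_stack([
    rng.poisson(1, 500),        # few failed logins
    rng.normal(20, 5, 500),     # modest outbound traffic
    rng.integers(8, 18, 500),   # business hours
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)

# Score a suspicious new event: many failed logins, a large upload, at 3 a.m.
suspect = np.array([[25, 900.0, 3]])
print(model.decision_function(suspect))  # lower score = higher estimated risk
print(model.predict(suspect))            # -1 flags an outlier
```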
The focus is now turning to the cybersecurity implications of ChatGPT and other AI/machine learning (ML) platforms, especially after the recent OpenAI security incident. What are the key security questions organizations should address before adopting new AI/ML solutions?
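One recurring consideration is keeping sensitive data out of prompts bound for third-party models. A minimal redaction sketch follows, with the caveat that the patterns are illustrative examples rather than a complete filter:

```python
# Illustrative prompt redaction before text leaves your environment.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"(?i)\b(api[_-]?key|token)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def redact(prompt: str) -> str:
    """Strip recognizable sensitive values before the prompt is sent upstream."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Contact jane@example.com, SSN 123-45-6789, api_key=abc123"))
# -> Contact [EMAIL], SSN [SSN], api_key=[REDACTED]
```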