
New OAIC AI Guidance Sharpens Privacy Act Rules, Applies to All Organizations

The Office of the Australian Information Commissioner (OAIC) has issued new AI guidance that all organizations operating in the country should take note of, as it provides greater clarity on existing Privacy Act rules and also applies to organizations usually considered too small to fall under the regulation.

Existing Privacy Act terms may be applied to those who use or develop AI in Australia, with the usual annual turnover threshold of $3 million lifted in this particular area. The new AI guidance builds on Privacy Act terms that establish data minimization rules and restrict sharing with third parties, as well as requiring consent notifications that make explicit what the personal data is being collected for.

Australian AI guidance brings Privacy Act to bear

The new AI guidance outlines five key takeaways that require attention, and though the term “guidance” is used, some of these constitute expansions of how existing rules are applied. The first is that Privacy Act requirements for personal information apply to AI systems, both in terms of user input and what the system outputs. Those implementing AI systems must also consider whether the product is suitable for its intended use, whether it has been sufficiently tested in this area, and whether adequate human oversight is available throughout the process.

The second AI guidance takeaway stipulates that privacy policies must be updated to include “clear and transparent” information about public-facing AI use. The third takeaway notes that the generation of images of real people, whether due to a hallucination or the intentional creation of something like a deepfake, is also covered by personal information privacy rules.

The fourth AI guidance takeaway states that any personal information input into AI systems can only be used for the primary purpose for which it was collected, unless consent is obtained for other uses or those secondary uses could reasonably be expected. The fifth and final takeaway is perhaps a case of burying the lede; the OAIC simply suggests that organizations not collect personal information through AI systems at all due to the “significant and complex privacy risks involved.”

In terms of hard rules, the AI guidance also plainly states that the Privacy Act applies to all uses of AI involving personal information and that other relevant laws, such as the mandatory components of the Australian Privacy Principles (APP) guidelines, may also apply even though they are not specifically addressed by this guide.

AI guidance advises careful consideration of commercial products

The AI guidance cautions that commercial products should not be selected simply because they are available, and that due diligence must be done to determine exactly who will have access to personal data that might enter the system (as well as potential security risks). The OAIC advises folding product selection and implementation into a larger “privacy by design” strategy, and notes that pictures, videos and audio recordings are just as subject to the privacy protection rules as documents and text.

What the new AI guidance boils down to is essentially the legal principle that entering personal information into an AI system cannot be considered “reasonable” respect for or protection of privacy unless that AI has been expressly designed with demonstrable safeguards and appropriate human oversight. Most of the common chatbots, at least as presently constituted, would likely fail that test. But the possibility of false information being generated about a person must also be considered, as that too constitutes a violation.

So how should organizations respond? Ideally, as the OAIC points out in its final takeaway, personal information should be kept entirely away from generative AI systems, as almost none of them can offer guarantees of ongoing compliance. If an AI system must be incorporated (or one is developing such a system), the first measure is to take a harder look at whether gathered personal information will meet the “reasonably necessary” standard. It is also important to thoroughly screen this personal information for accuracy, as any inaccurate output derived from it could constitute a violation. Express consent must also be collected from the user for all applicable information, and privacy policies must be updated to make clear exactly what the AI system is being used for. All of these steps present opportunities to evaluate whether the collected information is really necessary or useful, or worth the potential exposure of a glitch or breach of the system.

AI developers should also note the 10 mandatory AI guardrails proposed by the Department of Industry, Science and Resources in September. That paper’s commentary period closed on October 4. Ten voluntary guardrails are already in place as part of the department’s Voluntary AI Safety Standard.
