Australian Privacy Commissioner Carly Kind breaks down new rules in Australia's Privacy Act

Video interview
April 2025

IN BRIEF

  • "The Australian framework requires concepts like necessity and fairness to be built in right from the start and that means that you can’t just consent to waive your rights, and equally, entities can’t just get consent to cover up a range of other uses."
  • "Generative AI does raise particularly novel and challenging concerns, and I would group those issues, maybe crudely, in an input and an output bucket. The input bucket relates to the scraping of personal information to train an AI model."
  • "I’d also like us to think about how to use innovative methods to regulate. We haven’t really, in Australia, dipped our toe in the water of regulatory sandboxes, for example, whereas this is something that other jurisdictions are doing."

In this VISION by Protiviti Interview, Protiviti Director Hanneke Catts sits down with Carly Kind, Privacy Commissioner for the Office of the Australian Information Commissioner (OAIC), to discuss recent updates to the Australian Privacy Act. The OAIC is an independent national regulator for privacy and freedom of information; its responsibilities include conducting investigations, reviewing decisions, handling complaints, and providing guidance and advice.

In this interview:

1:05 – The Australian Privacy Act: An overview

4:40 – Opportunities to enhance privacy protections

7:57 – Concerns for businesses 

10:03 – Penalties and impacts

13:51 – Implications of AI

17:23 – The future of privacy in Australia



Australian Privacy Commissioner Carly Kind breaks down new rules in Australia’s Privacy Act

Joe Kornik: Welcome to the VISION by Protiviti podcast. Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and I’m thrilled to welcome in Carly Kind, Privacy Commissioner for the Office of the Australian Information Commissioner, an independent national regulator for privacy and freedom of information and government information policy. The OAIC’s responsibilities include conducting investigations, reviewing decisions, handling complaints, and providing guidance and advice. I’m happy to turn over the interviewing duties today to my colleague, Protiviti Director, Hanneke Catts. Hanneke, I’ll turn it over to you to begin.       

Hanneke Catts: Thanks, Joe. Carly, thank you so much for joining us today. It’s an absolute pleasure to be speaking with you.

Carly Kind: Thank you, Hanneke, and thank you so much for having me. 

Catts: To begin the interview, I wanted to start with the Australian Privacy Act, which has now been in place since 1988, with several updates focused on increased protection of personal information. That includes the most recent tranche of changes to the act, covering the protection of children’s information online, improved regulator powers, and information sharing following a data breach. Carly, can you please provide some context on the nature of the changes and how they came about, as well as their importance to Australian businesses?

Kind: Absolutely. Reform of the Privacy Act has been an ongoing agenda for this government and previous governments for some time. In fact, it was initiated by the Australian Law Reform Commission more than a decade ago. There’s a pretty widespread understanding that the act does need some updates. Those updates have been articulated in a very long report called the Privacy Act Review, and the government has accepted in principle many of the recommendations from that review; they’re now bringing forth those changes in tranches. What we saw last year with the Privacy and Other Legislation Amendment Act was Tranche 1 of a range of Privacy Act reforms. The key components of Tranche 1 were, as you said, Hanneke, first and foremost, the introduction of a mandate for our office to develop a Children’s Online Privacy Code, which will essentially particularize the requirements of the Privacy Act for services likely to be accessed by children. That code development process will take about two years by the time we consult with children, parents, teachers and industry actors (those services that will be regulated under the code) and then put the code out for consultation.

The other key elements of Tranche 1 included a statutory tort of privacy, which is a pretty novel approach to developing privacy law in a common law jurisdiction. It also included criminal offences around doxing, that is, the malicious disclosure of personal information, and then, as you also alluded to, Hanneke, it includes some new enforcement powers and regulatory powers for the OAIC as well.

Then the final change, small but really important for businesses, is a requirement that they articulate in their privacy policy when they’re using automated decision-making systems. That requirement won’t come into effect for two years, and it is, on paper, a small change, but an important one for regulated entities to think about. It’s probably the most substantial change in terms of obligations on entities at this stage; the other changes from Tranche 1 are more about the powers and responsibilities of my office.

Tranche 2, should it proceed, and the government is now beginning to think about how to take Tranche 2 forward, will contain more of the substantive changes to the legislation: potentially new thresholds for data collection and processing, specific changes around how to do advertising, for example, and a range of other tweaks to the existing law.

Catts: That’s really great background and context. Building on that, and you alluded to the upcoming Tranche 2 changes, what do you see as the next big opportunities to enhance privacy guidance for organizations and give Australians more confidence that their information is protected?

Kind: I would say there are two parts to that, Hanneke. On the one hand, an overhaul of the Privacy Act, should the government proceed with it in 2025, would definitely be a big moment of uplift for Australian regulated entities. That may look like a range of different things depending on how the government chooses to take the project forward, but, for example, the inclusion of small businesses is something that is on the table currently. Small businesses are exempted from the application of the Privacy Act.

Other big changes might include the introduction of a fair and reasonable test for processing personal information and, as I said, potentially restrictions on targeted advertising and updated definitions around things like consent and personal information.

If the government proceeds with that project and it passes through Parliament, then that will require a pretty broad update of compliance work, although I would say that most entities are already on that journey, given that Privacy Act reform has been anticipated for some years. We’ve seen quite high levels of engagement across the regulated sector at this stage, that is, those entities already regulated by the Privacy Act, to really get ahead of the legislation and put in place updated good governance practices, particularly in response to some of the major data breaches in Australia in the last few years and, generally, the expectations of the Australian public, which are very consistent with stronger privacy protection. But that part, as I said, really is contingent on Privacy Act reform going ahead. We don’t know at this stage whether that will be the case or what it would look like.

In parallel, our office is working really hard to put some more meat on the bones of the requirements of the Privacy Act as it is. It’s a principles-based framework, and it rests heavily on some concepts that haven’t benefitted from very much judicial interpretation. Things like reasonableness, fairness and lawfulness are already built into the Privacy Act as it is, and equally, definitions of consent, personal information, et cetera. These haven’t really been stress-tested in the courts at this stage, and so our office is really focused at the moment on how we can use enforcement actions to advance jurisprudence in the courts around Privacy Act interpretation, and therefore give more specific guidance back to entities about exactly what the law requires.

That’s something that we can take into our own hands because obviously we’re not legislators, we’re regulators. So stronger, more robust enforcement of the Privacy Act, with a view to advancing judicial interpretation and therefore ultimately being able to provide more guidance and education back to entities, is really the priority for the OAIC in 2025.

Catts: Are there any key takeaways that Australia should be adopting from other jurisdictions for our privacy obligations? What’s your advice to Australian businesses as they consider their privacy obligations?

Kind: Most Australian companies, or many at least, are engaged in the global economy, and so many of them will already be complying with other jurisdictions’ privacy requirements. On the one hand, Australia’s privacy framework is a little outdated compared to some other jurisdictions, and therefore those Australian companies that are already trying to meet the level of the GDPR, the European framework, will really be in a good position to ensure privacy compliance no matter what happens with Privacy Act reform, as the GDPR is quite a robust, and at least more recent, legal framework than the Privacy Act.

Having said that, the Privacy Act does involve some unique approaches that aren’t really evident in other jurisdictions. It’s not, for example, heavily contingent on consent the way the GDPR is. That may be cast as a weakness of the Australian regime. I see it actually as a positive because many of us know that consent isn’t really a very effective means of protecting individuals anymore. People can just click “Yes” to terms and conditions that are 40 or 50 pages long without really reading them. Actually, the Australian framework requires concepts like necessity and fairness to be built in right from the start, and that means that you can’t just consent to waive your rights, and equally, that entities can’t just get consent to cover up a range of other uses. I do think that the Australian framework is quite unique. It requires particular analysis and thought to ensure compliance, but I equally think that where entities are broadly displaying interest in good governance practices, they’re going to be in a really good position to comply with the Privacy Act generally.

Catts: Great. Some really great insights and food for thought there too, Carly. The new privacy penalties introduced a couple of years ago significantly increased penalties for repeated or serious privacy breaches, along with the OAIC’s new powers and the penalty fees from the recent changes you mentioned before. Have there been any notable impacts from the penalty increase?

Kind: The litigation we have on foot in court currently, against both Medibank and ACL, is under the old regime, not the new enforcement penalty regime that came into effect in 2022. We’re still at the tail end of the previous approach in our enforcement matters, but likely any new enforcement matters, and particularly civil penalty proceedings, will be under that new regime.

I think the more notable change you allude to, Hanneke, is the introduction of different tiers of penalties in 2024. Basically, it preserves the serious interference with privacy tier and adds a tier for interference with privacy, so it removes “serious” from the second tier, and then it also gives us the lowest tier, which provides the ability to issue infringement notices for technical contraventions of the Privacy Act. These are things that relate to the privacy policy of an entity, for example. That’s a really interesting and exciting development because it enables us to take a more robust approach to enforcement of technical infringements. It’s a lower-cost and simpler procedure by which, after issuing a notice, we can issue an infringement fine of up to about $60,000. That should act as a really good incentive for entities to tighten up their compliance around those technical requirements, including on privacy policies, and it will also allow us to really take action where there’s persistent or quite egregious non-compliance in that space. We wouldn’t be using these powers arbitrarily to go after entities that have made good faith efforts to comply with the law, but we would be using them where we see really consistent or egregious harms or where vulnerable people are implicated.

Catts: Yes. In regard to the different tiering, and I know you said that it’s still being worked through, can you give us some insight into what type of breach would constitute a mid-tier penalty versus a low-level administrative breach?

Kind: Mid-tier penalties are applicable to things like the collection principle, APP 3: whether it’s necessary for an entity to collect personal information in the first place, and whether they do that by fair and lawful means. We’ve issued a range of determinations recently that look at scraping of personal information, which we say may not reach the threshold of fair and lawful depending on all the circumstances. That could be a space in which we potentially think about this mid-tier. Another APP to which it clearly applies is APP 6, which is around use, secondary use and disclosure of personal information: when an entity has collected it for one reason and then decides to use it for another, or passes it on to a third party in a way that wasn’t anticipated when they first collected that data. That is another space in which a contravention may give rise to enforcement proceedings under the new provisions.

Catts: Thank you for those clarifications. Turning our attention now to artificial intelligence, what do you see as the privacy implications of the dramatic rise of artificial intelligence broadly, and generative AI more specifically? 

Kind: Yes, and it’s hard to disaggregate the two sometimes, isn’t it? Obviously, we’re very preoccupied by generative AI at this current moment in the last year or so, but of course, artificial intelligence in various forms, particularly machine learning, has been around for a long time and much of it doesn’t have privacy implications, for example, where it’s not using personal information at all. That kind of AI, as I said, is well-established in some sectors, for example, in supply chain logistics, and wouldn’t fall within the category of AI that does raise privacy concerns. 

Generative AI does raise particularly novel and challenging concerns, and I would group those issues, maybe crudely, into an input and an output bucket. The input bucket relates to the scraping of personal information, or the use of personal information collected for one purpose, to train an AI model. I think that this does raise potential concerns when it comes to the Privacy Act, which has a range of thresholds that have to be met. One is, of course, that the collection is fair and lawful, and another is that if you’re using data you already hold for one purpose for a secondary purpose, you either need consent for that or need to be able to establish that it was within the reasonable expectations of the individual. I think there are some challenges there when it comes to the use of personal information to train generative AI models, and that’s something my office is looking at at the moment.

There’s a big question about whether or not an individual’s data may be misused, or taken out of their control, by being used to train AI.

Then, at the output end, we see a range of privacy issues as well. We see, potentially, inaccurate data being disclosed through models. We also see security risks that may implicate privacy through AI models, generative AI models in particular, so potential risks around technical vulnerabilities that could be occasioned through AI models. We’ve issued guidance on both of those issues: one on the input question, on how you can develop and train an AI model consistently with the Privacy Act, and then on the output question, focused on the use of commercially available AI products and how entities are using those. One thing to really draw out from that guidance is that there is a big difference when a business is using a commercially available model between whether they’re doing that with the tool on premises or whether they’re using cloud-based infrastructure. If it’s the latter, then there’s a separate range of concerns, which relate to the disclosure of personal information to other entities, particularly those overseas.

We are urging caution when it comes to the use of commercially available AI models within the context of personal information. Again, that’s the line in the sand for me: if you’re not using personal information, for example, customer information, then it’s a different set of considerations. You might want to think about other things like accuracy or hallucinations, et cetera, but if you’re using personal information in the context of models, particularly cloud-based models, then you do need to think about things like disclosure requirements.

Catts: Carly, finally, looking forward to the end of the decade, what’s your vision for the future of privacy in Australia?  

Kind: What a fun question. [Laughter] I’ve got big hopes. Look, I think Privacy Act reform would be great. There are some wonderful proposals in the Privacy Act Review, and it would be really great to level up the Privacy Act. But notwithstanding those reforms, I think there’s a lot we could be doing. Our office has really only shifted towards a more enforcement-focused posture in the last couple of years, and I see a lot of scope for building that up. Again, not with a view to being punitive to entities, but with a view to really establishing how the Privacy Act applies and achieving general and specific deterrence. I think we could continue to build the privacy community in Australia; there’s a really strong privacy professionals’ community, and I’d love to see that continue to grow.

One of the upsides of the data breaches of the last few years, and there are very few upsides given their harmful impacts, is that privacy is starting to be on the agenda at board meetings and with the C-suite. We could continue to enhance that, particularly through this more robust enforcement posture, so that CEOs, general counsels and others have to really say that this is an issue of risk that we need to manage proactively.

AI is going to, obviously, be a big game changer in the next few years, and we need to think in Australia about how to approach that. I would like to see some effort in the regulatory space to articulate particular rules around AI. But I’d also like us to think about how to use innovative methods to regulate. We haven’t really, in Australia, dipped our toe in the water of regulatory sandboxes, for example, whereas this is something that other jurisdictions are doing. I’d love that to be something that the OAIC can take onboard. Likewise, being able to provide innovation advice to entities who are starting to think about how to use personal information in the development of products and services, and who might come to the regulator before they do that to get advice on how the law applies. Again, this is something that I’d be stealing from other jurisdictions, perhaps from better-resourced regulators who have the ability to provide innovation hotlines, et cetera, but that would be a great space for us to lean into, the way I see it.

Catts: Carly, that’s fantastic. Thank you very much for speaking with us today. We really appreciate all of your time and your insights.  

Kind: Thank you for having me, Hanneke. I appreciate it.

Catts: Thanks, Carly. With that, we’ll hand back over to Joe.  

Kornik: Thanks, Carly. And thank you for watching the VISION by Protiviti interview. On behalf of Hanneke and Carly, I’m Joe Kornik. We’ll see you next time.


Carly Kind is the Privacy Commissioner for the Office of the Australian Information Commissioner, an independent national regulator for privacy, freedom of information and government information policy. The OAIC’s responsibilities include conducting investigations, reviewing decisions, handling complaints, and providing guidance and advice. Previously, Kind was Director of the Ada Lovelace Institute, an independent research institute and deliberative body with a mission to ensure data and AI work for people and society.

Carly Kind
Australian Privacy Commissioner

Hanneke Catts is a Sydney-based Protiviti director with over 15 years’ experience focusing on technology consulting, including privacy, technology risk, project management and assurance, IT controls and security compliance, enterprise risk management, and internal audit and regulatory compliance. Catts has worked with many organisations in Sydney and London with large and complex IT environments in the financial services, technology, government, health and manufacturing industries, and with smaller organisations with specific IT needs.

Hanneke Catts
Director, Protiviti Australia