AI and teen privacy panel discussion with Future of Privacy Forum leaders

Panel discussion
December 2024
Audio file

IN BRIEF

  • “When you look at something like an LLM, you're talking about the training data. Where is that? Where did that data come from? Is it information that was scraped off the web? Is it information that's been collected from apps on your phone? Is it a form that you signed somewhere? Is there personal data in the mix?”
  • “I think it's important to keep in mind that with privacy rules, particularly with strict data minimization and limits on secondary use, that could have a negative impact on training safe and fair AI systems, which rely on training using representative data sets. So, there's kind of like a tradeoff that we need to be considering between very strong privacy safeguards while still allowing room for innovation.”
  • “I will leave folks with one last message, which is that no matter what happens with the technology and how it’s stretched and what enforcement we’ll see, getting the basics right is really, really half of the battle. By that, I mean the data hygiene piece, having time and attention and systems set up internally, and that really, really goes a long way to preventing any harms that might emanate from the use of AI.”

In this VISION by Protiviti podcast, Protiviti Senior Managing Director Tom Moore leads a discussion on the impact of AI and the critical need for children and teen privacy with key members of the Future of Privacy Forum, a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Tom welcomes Anne Flanagan, Vice President of Artificial Intelligence for the Forum, and Bailey Sanchez, Senior Counsel with the Future of Privacy Forum’s U.S. Legislation Team. The panel was recorded as part of VISION by Protiviti’s recent webinar “Building trust through transparency: Exploring the future of data privacy.”

In this discussion:

1:15 – Future of Privacy Forum: mission and purpose

4:05 – AI risks and harms

8:55 – Youth and teen privacy concerns

14:09 – Regulatory frameworks

22:54 – Three- to five-year outlook on privacy and AI regulation



AI and teen privacy panel discussion with Future of Privacy Forum leaders

Joe Kornik: Welcome to the VISION by Protiviti podcast. I'm Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-Suite and executive boardrooms worldwide. This special edition podcast highlights a panel discussion hosted by Protiviti Senior Managing Director Tom Moore. The panel was recorded as part of VISION by Protiviti's recent webinar, Building Trust through Transparency: Exploring the Future of Data Privacy. Tom leads a discussion about the impact of AI and the critical need for children and teen privacy with key members of the Future of Privacy Forum, a global nonprofit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Tom welcomes Anne Flanagan, Vice President of Artificial Intelligence for the Forum, and Bailey Sanchez, Senior Counsel of the Forum’s U.S. Legislation Team. Tom, I’ll turn it over to you to begin.

Tom Moore: Great. Thanks, Joe. Anne and Bailey, thank you very much for the opportunity to speak with you today. You're both deep subject-matter experts representing a fantastic organization, the Future of Privacy Forum. We're thrilled to have you today, so welcome. I'm going to start just with a general question about FPF. Can you tell me about the mission of FPF, what role it plays in thought leadership around the privacy space? Anne, why don’t you go first and then Bailey, I'll let you chime in.

Anne Flanagan: Obviously, Tom, it’s such a pleasure for us to be here today, and great that Bailey is joining as well. The Future of Privacy Forum, so, I know Joe introduced us very briefly earlier on, and indeed we may have some Future of Privacy Forum members on the webinar today. We’re a membership-funded organization, a combination of membership and some grants, and we really sit in the nonprofit space between the public sector and the private sector. We primarily help senior privacy, data and AI executives, and folks who work in the policy and regulatory space, to really understand what's happening around the world of privacy as concepts evolve. We are technology-optimistic but, obviously, very pro-privacy. We're headquartered in Washington, DC. I myself am based on the West Coast in San Francisco, but we also have a presence in the EU and Asia Pacific, as well as folks who work in the Africa region and in Latin America.

So, we really are, as you can see, right around the world in our presence, and the word “forum” is definitely not accidental. We really act as a convener for folks to have these difficult conversations around the world of privacy right now, particularly as technology evolves ever faster and data needs are front and center for most companies in this day and age. I'm going to hand over to Bailey because I lead our work on artificial intelligence, and even though FPF had worked on artificial intelligence for seven or eight years, we launched a center for AI earlier this year to really consolidate that work and to tackle some bigger AI projects. I'm really pleased to announce that we have a major piece of work launching before the end of the year that folks on this call may be interested in, so we can come back later and let you know about it. But we're really looking at how executives are tackling and assessing risk around AI right now, which I think is top of mind for a lot of folks. Bailey, I want to hand over to you.

Bailey Sanchez: Thank you, Anne. So, at FPF, we look at privacy and AI from a law, technology and policy perspective. On the U.S. legislation team, I'm looking a lot at what the law says and where there are emerging trends in the law, and we do comparative analysis of different legal regimes. I think one report that is pretty relevant for this group is the report we just published on 2024 state AI legislation trends. And then I, in particular, have a lot of expertise in the youth privacy and safety space, which is why I'm joining today's conversation.

Moore: Great. Well, again, thank you both for joining us, and Anne, let's start with you. Artificial intelligence is your area of expertise. Can it potentially compromise an individual's right to privacy? Can you give us some examples of the harms and risks that accompany artificial intelligence?

Flanagan: So, I love these questions because AI is something that I think is top of mind for absolutely everybody. I'm sure folks are talking about it around the dinner table, folks are talking about it around the C-Suite table. People are using it in their day-to-day jobs right now. It's really gone very, very mainstream. But for those of you in the privacy and data community, I'm sure you've been talking about it for years, if not using it for years. AI is not necessarily anything new, but of course, I think we all have to acknowledge that about two years ago, this thing came along called ChatGPT and really revolutionized and democratized access to AI in a way that we had never seen before. I think that unleashed its potential as a consumer-facing technology. It's really seen this absolutely exponential boom, and as a result of that, we start to see pressures on the market. We start to see pressures internally in organizations around using AI.

I think anytime you end up with a new technology, or effectively a new technology, where there's a lot of pressure to use it, deploy it and develop it, the data behind it can obviously create risk. And I think that's really what you're getting at there: what is the intersection here, as we all sit here at the end of the year, between AI and privacy, and how does that change the dynamic?

I think when we go back to basics and really look at what it means for a technology to create risk around privacy, it's really about two main things, Tom. One is, where is the data coming from that's backing that technology? So, when you look at something like an LLM, you're talking about the training data. Where is that? Where did that data come from? Is it information that was scraped off the web? Is it information that's been collected from apps on your phone? Is it a form that you signed somewhere? Is there personal data in the mix? There could be proprietary information in the mix as well, but I think that's a separate concern because we're focused on privacy today. Going back to the basics of where that data came from and the hygiene around that data, that's one area where things can go really wrong really quickly, because one of the biggest challenges with generative AI is the old “garbage in, garbage out” saying. It's very, very real when it comes to an LLM because you're constantly iterating and constantly building on what was there before.

So, when it comes to developing and building models or indeed deploying an AI system in an environment where you're inputting data into it, it's really, really important to have that hygiene around protecting that data on the input. So, you could have potential privacy implications there.

The other area, which I think is maybe more obvious and really where consumers might actually see harm, is on the output side of things. In other words, you may have some very serious situations involving, for example, consequential decision-making. You could be applying for a mortgage, and maybe your bank is deploying an AI system to make a decision about your creditworthiness. If they have information that is incorrect or biased, or if the model is not developed in a way that takes fairness into account in its output, you could end up with outcomes that are going to be very consequential for you in your life and that really come from a violation of your privacy or from data that's not quite accurate. So, that's where we start to see the rubber hit the road.

In terms of general output, we already heard data breaches mentioned today on the call. To build and deploy AI models, you're often looking at huge swathes of data. We've heard for years this idea that more data is always better, and the consequences of a data breach in an organization that is developing or deploying AI may be, not necessarily but possibly, more grave than in an organization where data use is more minimal. So, it really goes back to basics around data hygiene, and the normal risks that companies look at when it comes to privacy still apply. AI just amplifies and increases that risk.

And then the last thing here is that there's maybe a literacy gap right now because AI is developing so fast. I don't just mean a literacy gap in terms of how the technology actually works, but in terms of what the technology means for your businesses, your customers, and those folks whose personal data might be in the mix, where the PII is actually coming into play. There often just isn't a lot of time to think about these problems right now because there are other concerns around the business. So, the speed of deployment is certainly a really big barrier, and there's a catch-up period for that literacy gap. Organizations like Protiviti, and also the Future of Privacy Forum, really try to help educate in that space.

Moore: Excellent. Thank you. Bailey, turning to you. Obviously, we just talked about AI, but there are other innovations out there as well: quantum computing, AR, etcetera. How are these influencing the landscape of teen and youth privacy? Is it all harms, or is there also a potential opportunity to enhance privacy with these tools?

Sanchez: Sure. So, there are certainly harms to consider. I think one harm in particular that's very top of mind right now for kids and teens specifically is synthetic content and the use of generative AI to create it. It's Election Day, and there's been a lot of focus on how generative AI will impact elections, but I think it's important to remember that there's a whole spectrum of harms with AI and the other emerging technologies you just mentioned. And it's not that they are different for children; they're usually just exacerbated. So, things like kids using generative AI to bully their peers, or kids and teens using generative AI to create CSAM. A lot of the stories that we hear about that online are often perpetrated by other students rather than by a shadowy bad actor.

But there are also opportunities with AI and other emerging technologies. Something we talk about a lot is cyber hygiene, making sure that you have your passwords in order, or just being aware of different internet-facilitated scams. I think there's actually an opportunity to use AI to help vet malicious content, again keeping in mind that kids and teens are particularly vulnerable groups there.

Then also, AI can have a lot of benefit in the school context. Predictive AI has been used in schools for a long time. There are those harms that we hear about, like AI being used to make decisions about college applications. There was a really bad story a couple of years ago out of Florida, where early warning systems were predicting how likely a student was to become a criminal. But on the flip side, the technology can be used to help students do homework. I think there's an interesting Google Notebook tool where you can upload your notes or your documents and it creates a podcast for students. So, I think there are opportunities as much as there are risks. Another harm to consider is just that kids cannot always vet an AI tool, but as Anne just said, there's a digital literacy gap for adults as well. So, we tend to think of kids as this very separate and distinct group, but a lot of the time it's the same or similar harms, and we just need to amplify whatever tools we create or safeguards we put in place.

Moore: Well, Bailey, let's stick on that topic for just a second and talk about what proactive steps individuals, schools, families and policymakers can take to help young people avoid these threats and use these tools for good.

Sanchez: I mean, I think a really basic one is just to learn and understand the technology. We call kids a vulnerable group, but they're pretty savvy. Kids are going to be bringing a lot of tools from home into the classroom, and so I think there is an obligation for us as adults to also be up to speed on all the tools. I think focusing on the highest-risk types of processing is really important from the company and government perspective. Again, AI is used for a whole range of things; Spotify uses AI to make song recommendations, and I think that's a much lower risk of harm than something like AI being used to make a decision about a student's educational outcomes. So, it's about pinpointing what types of risk we are trying to solve for.

Then I think another thing specific to the education and student context is that I've been seeing an uptick of companies wanting to deploy their products in the education space, because they might think, hey, I've created this as a consumer-facing or B2B product, what about B-to-school? But I think it's important to keep in mind that there are special considerations with schools and student data, and you need to really tread cautiously in those spaces and make sure you have all of your compliance boxes ticked off.

Then another immediate thing to keep in mind is that there's a whole discussion about age assurance. Should we restrict kids from certain segments of the internet? Do we need to design things that are child-friendly? I don't think there is an answer to that policy debate quite yet, but in the meantime, something that companies can do is just make sure they have a process in place for handling kids’ data if it makes its way to them. Again, a lot of companies might be B2B and not intended for kids, and they also might not be doing proactive age verification because they just don't anticipate a lot of kids coming their way. If kids’ data makes its way into your processing, just make sure that you have a plan in place for what you're going to do with it.

Moore: So, Bailey, we have talked about government regulation somewhat. Basically, what legal frameworks exist, and how should policy evolve over time to continue to safeguard the privacy rights of our young people?

Sanchez: Yes. So, as I've mentioned, something the Future of Privacy Forum published recently was a 2024 state AI trends report. As we know, one of the more significant state bills was the Colorado AI Act. The Colorado AI Act has broad consumer rights and business obligations, but it is only focused on discrimination and on systems that are a substantial factor in consequential decisions, which we've been talking about a bunch, in areas like health, employment and housing. Again, that's not necessarily a bad thing. Maybe we don't need very specific AI regulation for every single type of AI out there; I mentioned Spotify recommendations earlier. So, I think a trend that we're seeing in the U.S. is a big focus on those consequential decision-making AI systems rather than on general-purpose AI.

I think some other steps that can be taken are targeted rule making, the focus on different segments of the risk that we're trying to pinpoint. But I think it's important to keep in mind that with privacy rules, particularly with strict data minimization and limits on secondary use, that could have a negative impact on training safe and fair AI systems, which rely on training using representative data sets. So, there's kind of like a tradeoff that we need to be considering between very strong privacy safeguards while still allowing room for innovation.

Moore: So, Bailey, you mentioned Colorado and other states. Do you see regulation of artificial intelligence, especially with respect to youth and teen privacy, occurring at the state level in the U.S., or do you foresee anything happening at the federal level?

Sanchez: That is a good question. Kids’ privacy and online safety have been a very big topic for policymakers globally. I know you mentioned some skepticism about federal privacy, but if we saw anything pass on privacy or AI at the federal level, I think kids’ privacy is one of the areas most ripe for something to pass. It's important to keep in mind, though, that when it comes to kids’ privacy and kids’ safety, lawmakers are often approaching it from a lot of different angles. The risks can include the data risks that Anne highlighted, as well as content moderation, free speech, safety, and then just the rights of the kids themselves. So, predicting what might happen federally is very tough. At the state level, a lot of the bills that I've seen have been focused on requiring specific opt-ins for training with kids’ data or on banning kids from addictive feeds. Those are very, very concrete, versus the rest of the AI policy conversation, which is focused on that broader set of issues.

Moore: Let's zoom out to AI in general. Do you think the legal frameworks that are in place today are adequate to address AI threats and harms, or how do you see them evolving to better protect individual privacy?

Flanagan: So, this is a great question, and one that's very close to our hearts at the Future of Privacy Forum. There's obviously a lot of activity happening in the United States right now; we see a lot of AI bills at the state level. But given that we're in a global webinar today, I think it's helpful to zoom out and look at the general state of play, because we have, of course, that precedent of a lot of privacy and data protection regulation right around the world, which really serves as a core building block when it comes to tackling some of the issues around AI. We already spoke about data, and certainly in the EU, for example, the GDPR has been there since 2018. We're starting to see more and more enforcement, more and more cases involving AI, where the GDPR is actually being used as the tool to course-correct any harms. So, the GDPR, quick reminder: it's use-case agnostic and technology-neutral. It certainly did not foresee generative AI as a technology, but it should be future-proof enough to be used in that context. There's a big conversation happening in Brussels right now as to whether it needs to be opened up or modified in any way, shape or form.

I think we're starting to see a lot more enforcement on AI, in addition to automated decision-making, where we've seen enforcement for quite a while. You have, of course, the addition of the EU AI Act in Europe. It entered into force in the EU in the middle of August, but it is going to take about 24 months for its obligations to fully apply. And really, what we're going to see is a staggered approach based on whether or not you're in an area of operation that is categorized as high risk, such as education or employment, to name two examples; your obligations scale with that risk and phase in over time. But it's really based around product liability. It's not really based around the rights of people, and it doesn't have a civil rights component to it like we see in the laws in the United States, for example.

So, the long and the short of it is that, given how influential the GDPR has been around the world (to a degree in the United States, but mostly outside of it), you really see that there is a baseline of privacy protection in place in most countries. It's certainly not adequate to address all of the harms and correct all of the problems in respect of AI, but it goes a really, really long way, and I don't think anyone can turn around and say they have nothing to go on. There's certainly something there already.

If you look at what's happening in the Asia Pacific region, it's very, very interesting. You see governments like Singapore, which has its model governance framework, a softer type of law. It falls short of regulation, but it advises companies to create risk frameworks around how they use AI, which is really similar to what we see in the United States when it comes to public-sector use of AI, particularly around procurement, for example. You have the NIST risk management framework for AI, and it really goes back to basics. Again, it's a softer piece of work, shy of regulation, but the tools are really there: making sure that you know what data you have, you're mapping it, you're doing some risk analysis, and you're actually taking time, attention and focus and having folks in the organization address any risks surrounding AI. There's a lot of best practice there.

We're starting to see some of those ideas from NIST, the NIST RMF, those building blocks, reflected in state-level legislation around AI. We're starting to see ideas around ensuring consistency with any privacy laws in the United States. We're starting to see a bit more polish and a bit more sophistication. We still, of course, have a patchwork of laws in the U.S., and it can create a lot of confusion. One of the things the Future of Privacy Forum talks about a lot is that if we had a federal-level privacy law, it's not that it would solve all of these problems, but it certainly would create a more cohesive and harmonized framework across the United States and improve the state of play with respect to these open questions and inconsistencies. That's good for business, it's good for people, and it certainly would bring about a state where you have a minimum level of safety around this topic.

Then, when we look at what else is happening around the AI regulatory landscape, I think those are the two big areas: around data and around any potential risk. You start to see the risk basis of the EU AI Act, where you have different levels of risk around the use case. So, Tom, long story short, we've moved from a world where the existing regulation around AI is very principles-based, based around the person and relatively technology-neutral in a lot of cases, as you see in privacy laws, to a world where we're starting to see more of a focus on the use case. Of course, those use cases will continue to evolve and, as Bailey mentioned earlier on, when it comes to AI harms, certain activities are going to intrinsically carry a lot more risk than others.

Moore: Yes. All right. Well, I think we have time for one last question for both of you. Make a bold prediction three to five years out. What may surprise us about youth and teen privacy or AI, something that people may not be thinking of? What might you expect to see in the future that others who aren't studying this as deeply as you are may miss?

Sanchez: I can go first. So, in the kids’ privacy and safety space, there have been a lot of laws passed at the state level, and a lot of those have resulted in litigation that is making its way through the courts right now. There's actually an age verification law that's going to be heard at the Supreme Court this term, there's one at the Ninth Circuit, and then there are a bunch of district court cases. I think these are important to pay attention to because they're answering a lot of interesting questions about the future of internet regulation, again getting back into that question of whether you can age-gate your service or whether you have to make something age-appropriate for everyone. Another interesting aspect to those cases is the certain types of disclaimers that you're legally required to make, which I think will be very relevant for the discussion around AI transparency. So, I think it will take three to five years to get those answers, but that will be my bold prediction: in five years, I think we're going to have a lot more legal clarity on what the legal framework in the U.S. will look like around privacy and AI.

Moore: That's a great call, I agree. Anne, anything from you, any bold predictions?

Flanagan: I love this crystal ball question. I think five years ago we couldn't have predicted generative AI, so I'm going to start with that: I think the technology will surprise us, and I think the consequences of that are going to be twofold. First, I think we're going to see existing regulations enforced more, not necessarily more strictly, but we're going to see more and more enforcement, because we're going to see harms that weren't necessarily anticipated, and regulators will use the tools already in their toolbox to address them. The second thing I think we're going to see is that, as those new technologies evolve, some of the principles that we've accepted will be stretched to the limit, and in that respect we're going to see a little bit that's new. I'll give you a perfect example of this. There's an outstanding question right now, and it's almost a philosophical one: can an LLM actually contain personal data? It's trained on personal data, and there can be personal data coming out on the other side, but does the model itself actually contain personal data? What are the implications for other technologies and other similar scenarios? You have disagreement from different regulators on this topic right now. It's come up in California, it's come up in Hamburg in Germany, and the European Data Protection Board is currently investigating what it thinks about it and has asked for comments from various stakeholders. So, some of the things that we have taken for granted, we're going to have to think a little bit harder about and get a little bit more sophisticated, but I think we'll have a lot of surprises.

I will leave folks with one last message, which is that no matter what happens with the technology and how it’s stretched and what enforcement we’ll see, getting the basics right is really, really half of the battle. By that, I mean the data hygiene piece, having time and attention and systems set up internally, and that really, really goes a long way to preventing any harms that might emanate from the use of AI.

Moore: Great, thank you both for that answer, as well as all the others. You articulated the point I made earlier: organizations that value customer trust and want to earn it and keep it need to continue to focus on this particular area, look out for the future, stay close to it, and have leadership that represents the voice of the customer. It's a really important issue. Thank you both. This was tremendous.

Kornik: Thanks, Tom, and thanks, Anne and Bailey, for that session. The insights and the conversation were fantastic. Thank you for listening to the VISION by Protiviti podcast. Please be sure to rate and subscribe wherever you listen to podcasts, and be sure to visit the VISION site at vision.protiviti.com for all the latest content about privacy and data protection. On behalf of Tom, Anne and Bailey, I'm Joe Kornik. We'll see you next time.


Anne J. Flanagan is Vice President for Artificial Intelligence at the Future of Privacy Forum where she leads a portfolio of projects exploring the data flows driving algorithmic and AI products and services, their opportunities and risks, and the ethical and responsible development of this technology. An international policy expert in data and AI, Anne is an economist and strategic technology governance and business leader with experience on five continents. Anne spent over a decade in the Irish government and EU institutions, including developing Ireland’s technical policy positions and diplomatic strategy in relation to EU legislation on telecoms, digital infrastructure and data.

Anne J. Flanagan
Vice President for AI, Future of Privacy Forum

Bailey Sanchez is Senior Counsel with the Future of Privacy Forum’s U.S. Legislation Team where she leads the team’s work analyzing legislative proposals that impact children's and teens’ online privacy and safety. Bailey seeks to understand legislative and regulatory trends at the intersection of youth and technology and provide resources and expertise to stakeholders navigating the youth privacy landscape. Prior to joining FPF, Bailey was a legal extern at the International Association of Privacy Professionals.

Bailey Sanchez
Senior Counsel, Future of Privacy Forum

Tom Moore is a senior managing director in Protiviti’s Data Privacy practice. Previously, Tom served as chief privacy officer at AT&T, directly responsible for all privacy programs, policies, strategy, and compliance with regulations at the state, national and international levels. Tom joined AT&T in 1990. Tom also serves on the board for the Future of Privacy Forum and the Community Foundation of the Lowcountry. He was formerly a member of the Executive Committee of the Board of Directors of the AT&T Performing Arts Center in Dallas.

Tom Moore
Senior Managing Director, Protiviti