Future of Privacy Forum CEO Jules Polonetsky on “exciting but risky” road ahead

Video interview
October 2024

IN BRIEF

  • “For better or worse, the dam burst and everyone, from the most conservative organization to the wildest startup, is rolling out [AI] stuff that comes with lots of risks.”
  • “So we’re at an inflection point: over time, we’ll either see some of this absorbed into the legal and compliance structure of the organization, or we’ll see a whole new breed of folks who are stepping up from data protection to a broader scope, whether it’s AI, whether it’s digital governance, perhaps ethics.”
  • “I think we’ll move slowly and haphazardly if the world is state laws, if the world is regulation by crisis and pushback. We end up not being trusted to use the most robust forms of data that we actually do need.”

In this VISION by Protiviti interview, Protiviti senior managing director Tom Moore sits down with Jules Polonetsky, CEO of the Future of Privacy Forum, a global non-profit organization that serves as a catalyst for privacy leadership, to discuss how business leaders can navigate a tricky road ahead for data security and privacy. For 15 years, Polonetsky and the FPF have helped advance principled data practices, assisted in the drafting of data protection legislation and presented expert testimony before legislatures around the world.

In this interview:

1:15 – Why the Future of Privacy Forum?

2:50 – What should business leaders focus on in the next five years?

7:02 – How is the head of privacy role evolving?

12:58 – GDPR and the fragmented state of U.S. regulation

14:00 – Looking ahead to 2030


Transcript


Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and I’m excited to welcome Jules Polonetsky to the program. For 15 years, Jules has been CEO of the Future of Privacy Forum, a global non-profit that serves as a catalyst for privacy leadership, where Jules has helped advance principled data practices, assisted in the drafting of data protection legislation, and presented expert testimony before legislatures around the world. He is an adjunct faculty member for the AI Law and Policy Seminar at William & Mary Law School. Jules will be speaking with my Protiviti colleague, Senior Managing Director Tom Moore. Tom, I’ll turn it over to you to begin.

Tom Moore: Great. Thank you, Joe. I couldn’t think of anybody better to talk about the future of privacy than Jules Polonetsky. Jules, I’m so happy you’re joining us today. Thank you.

Jules Polonetsky: Delighted.

Moore: If you don’t mind, tell us a little bit more about the Future of Privacy Forum, its history, what you’re working on today?

Polonetsky: We’ve been around for about 15 years, and our members are, generally speaking, the chief privacy officers at 200-plus organizations: the people who are really trying to grapple with the fact that the organizations they lead are driving the AI agenda, whether it’s big tech companies or startups, or banking, or car companies, right? Everybody is challenged by the fact that the pace of how data is being used is accelerating, and the norms of what’s right, what’s legal, what’s creepy, what’s innovative and exciting to users are rapidly developing, and they’re developing everywhere in the world. So we work with those folks to create best practices and standards, and we try to support reasonable legal rules and structures around it.

We do that as well with the leading policymakers because it turns out they’re busy. They’ve got a big agenda. They’re trying to deal with wars around the world, the economy, all the challenges that legislators and government leaders grapple with, and they want and need support. They want to know which country is doing it right. What can we learn? Where are their mistakes? How does this technology work? So we try to work as the pragmatic voice, optimistic about tech and data, but quite aware that things can and will go wrong if you don’t have clear guidelines, clear landmarks for how organizations can be responsible as they roll out new data-powered products.

Moore: Excellent, and congratulations on 15 years of the Future of Privacy Forum. Let’s talk about it. You obviously have your finger on the pulse of the world of privacy, so what do you think are some of the biggest issues over the next five years? If you’re a business leader, you’re a leader of an enterprise, you’re a regulator, what should you be thinking about? What should you be focused on to get prepared for the next five years?

Polonetsky: You know, the easy answer is to immediately talk about AI, but before we go to AI I think it makes sense for us to pause for a second and recognize that it’s only in the last few years that you’ve been able to assume that almost everybody, in at least any decent, advantaged, progressing economy, has a mobile phone, probably has a smartphone, probably has apps, probably is connected to people via some sort of social media or WhatsApp group or the like. The world has started hurtling forward; part of it is that it’s a COVID world, where suddenly we all got comfortable doing things over video conference. We became a small world where people are connected, which means that the good and the bad things that happen around the world immediately reverberate. It means the bad actors can do their work from every part of the world and can develop sophisticated, complicated organizations, with teams and levels of different delegated services that they can use as they deal with organizations.

So we’ve moved to this super connected, super immediate, sort of 24/7 world where users can create a giant alarm, sometimes correctly, sometimes incorrectly, when they think that your organization is doing the wrong thing, and it immediately is driven into the media because the media seem to spend a good chunk of their day following what happens on social media.

Those changes are only going to accelerate, but we’re also seeing the backlash, right? People who are just feeling burnt out because they were locked up at home during COVID, and they didn’t get to go out, and now they’re still gaming all day and all night, and they’re still connected. All the business tools are pinging them, not just on email, but on Slack, on Teams, and all these tools. The challenge is being ready and thoughtful and structured enough to navigate this incredibly frothy, turbulent world—and then let’s talk about AI, where suddenly the investments are moving so quickly that the policy concerns are being left temporarily by the wayside, right? Who would have imagined that we’re rolling out products and we say, “Well, actually, they don’t work a lot of the time, but when they do, they do these incredible, really cool things, except they can’t be fully reliable, and yet we’re relying on them for incredibly important processes like interacting with our customers.”

So, for a long time, the problems of our current generative AI tools were well known, and you had leading companies saying, “Not yet. We don’t know the answers yet to how we’re going to put out stuff that isn’t reliable but can do super cool things, but actually also might be discriminatory, right?” For better or worse, the dam burst and everyone, from the most conservative organization to the wildest startup, is rolling out stuff that comes with lots of risks.

So that’s the world we live in. Chief privacy officers and legal and compliance folks suddenly need to go from a careful, measured world where they do assessments, and they consult, and they discuss, and they give advice and the business accepts the advice, to a place where people are rolling things out that are purchased from vendors who’ve purchased from vendors and putting them out in the market. So we are in an exciting, risky moment—exciting because really cool things are happening, but I don’t know that we’ve ever seen as much risk or drama. And guess what? The media are super interested because it’s about AI. So it can be the silliest flap and suddenly it’s front-page news.

Moore: You mentioned chief privacy officers, heads of legal, heads of compliance. They’re at the forefront of all this. The roles continue to evolve with AI and other technologies. Tell me about what you see as the primary role of the head of privacy within a large organization.

Polonetsky: You know, I see two trends. This is really a role that’s in flux. There is one trend, maybe it’s a negative trend or maybe it’s just the way of the world as laws and policies become established. When I first became a chief privacy officer many, many years ago, it was a novel title, and it wasn’t the highly regulated companies that had the most senior executives in these roles. The banks had regulation and structures and lines of defense and had dealt with it for years. HIPAA, the health insurance portability and accountability law, was in place, and organizations had structures around that. It was the startups, the internet companies, the ad tech companies, who didn’t have detailed legislation, at least not in the U.S., but who were running into all of these explosions of concern, or the data companies who were suddenly able to do so much more than just send you targeted mail, who needed senior executives navigating the nuances of, “What do the consumers really want, and what is civil society saying? They’re making a fuss about this. And what about regulators who want to support the internet and want to support these new business models, and who are very excited to come up with new laws and rules? And what about our customers who need to understand what we’re doing with their data in ways that we’ve never used their data before?”

Here now, we’re in a world that’s become far more regulated. We’ve got all these state laws in the U.S. now. We’ve got AI laws. We have privacy laws. We have data protection regulation not just in Europe, which has been a leader and has been mature, but in almost every other jurisdiction. We’ve got a team in Africa; the countries across Africa are rolling out data protection regulation. South America, the big economies, India, right? The biggest economies, China, all have new data protection regulation and, now, new AI regulation. So some companies have said, “We don’t need the drama. We know how to do compliance. We worry about all kinds of compliance issues.” Those companies are rolling these roles into compliance and perhaps eliminating this sort of executive role. Other companies are going in exactly the other direction. They’re looking at the challenges of AI, which are not only about privacy, but start, in many cases, with personal information that’s collected and used and already regulated by data protection law. Even automated decisioning is already regulated by data protection law. So, some companies are recognizing that here’s this incredibly strategic area: who is going to help us shape what are very nuanced decisions about not only how to comply with laws— “Hey, we’re now going to use video and improve it, and your face is involved, and our customer’s data is involved, and we’re going to read their confidential information to create better tools that serve them. But, boy, they’d better trust us and trust the output.”

We see multiple layers of regulation, for instance, in Europe, where not only do we have privacy law, not only do we have AI law, but we have new kinds of competition laws. New laws that force you to provide data to your competitors. New laws that force you to provide data for researchers. So, we see a number of other companies saying, “Digital governance has become really complicated and we need somebody or some team managing the envelope of restrictions that exist around how we use data.”

So we’re at an inflection point: over time, we’ll either see some of this absorbed into the legal and compliance structure of the organization, or we’ll see a whole new breed of folks who are stepping up from data protection to a broader scope, whether it’s AI, whether it’s digital governance, perhaps ethics. That’s where it’s going.

Moore: Excellent. So, speaking of that broader scope, talk to the privacy community: the privacy leader, the chief privacy officer, or whatever the title may be. What do they need to do to prepare themselves for this environment and grow into those broader responsibilities?

Polonetsky: I love telling some of my colleagues and friends in data protection that they spend too much time on data protection. By that, I mean there is so much. I mean, you can’t stop. There’s a new law. There’s a new regulation, and California keeps rolling out new changes. The Europeans keep interpreting and reinterpreting. So you can really spend all your time keeping up with the incredible rush of details. But the reality is, all of you, you know how to do that. There might be a nuance, there might be an item to deal with, but you know how to read legislation. You know how to do compliance. What’s changing super-fast is the way your business, the way your sector, is using data. Things that were norms are now changing. Things that the platforms are doing for their business that affect your business are changing. Spend more time, please, legal, compliance, ethics, privacy people, being gurus of how data is being used, because that’s going to help you ask the smart question. You ask your legal assessment question, you’re going to get your legal assessment answer. Understanding who your partner is, what their business goals are and how they’re really planning to use data gives you the opportunity to ask much more probing questions that get you what you need to know.

Moore: Earlier, Jules, you mentioned the Europeans, the GDPR. They’ve obviously invested quite a bit in legislating, regulating and enforcing data protection for European citizens. Are they striking the right balance? And a related question: what lessons can the U.S. learn should we ever get to a national privacy law in the U.S.?

Polonetsky: GDPR, I think, is a very thoughtful document. The European legal process is a challenging process. It’s not one country; it’s a union. My hope is that we will move in the U.S. to regulate quickly around AI and data protection. Even if it’s not perfect, I think businesses need the certainty. They need a level playing field, and then they’ll compete. If anything ends up being too restrictive, then we can go back and debate it. Right now, I think we’re suffering from a gap: tools are being rolled out, and the law is sort of catching up in a way that may end up being quite challenging.

Moore: So let me put you on the spot, turn that hope into a prediction. By 2030, do we have a U.S. national privacy law or do we still have the state patchwork, federal agencies regulating, state agencies regulating?

Polonetsky: By 2030, I think the answer is easily yes. By next year, the answer is, “That’s going to be hard to say.” You know, it took the Europeans seven years to build out GDPR. And again, 70-80% of GDPR was already in the UK’s data protection law and German data protection law. They didn’t start with a blank slate. We’re talking about regulating a huge chunk of the U.S. economy. That’s complicated. It ought to take a while. I think Congress is in this period where they’re struggling through understanding the complexity of what it takes. So, you know what? Although I’d like them to do it now so that the states don’t all go do disparate things, it’s going to take them some time. They should take the time, but they need to do a somewhat better job of really getting thoughtful and smart, and there are hard issues that need to be debated by critics, and business, and researchers and so forth.

Moore: So Jules, on a couple of occasions today, you’ve expressed optimism or hope. Let’s go the other route for just a second. What if we don’t get this right? What if a national law, thoughtful and smart, doesn’t come into play by 2030? What could be the consequences of not getting this right?

Polonetsky: I don’t think we have a choice not to get this right. I think not getting this right, perhaps, is doing it very piecemeal, doing it in ways—my home state of Maryland has passed a very strict state privacy law that doesn’t have any greater flexibility for research. Could they really have intended to make it very, very complicated and hard, the home of the National Institutes of Health and leading universities and so forth? Could they have intended to do that? So, I think we could have inadvertent, complicated mistakes, complications of multi-state compliance that cost money and cost time and probably don’t add any value.

So I think we move slowly and haphazardly if the world is state laws, if the world is regulation by crisis and pushback. We end up not being trusted to use the most robust forms of data that we actually do need. We need data about sensitive populations to identify where discrimination may be taking place, where people are not getting access to health facilities. So if state laws make me worry about collecting any sensitive data, which many of them do with minimization or opt-in requirements, then it’s too risky. I don’t collect that location data, and that’s fine. We’ll protect some people who won’t get targeted by ads or who won’t have sensitive locations exposed, but we then won’t have the data that the CDC needs to understand how a pandemic spreads. We won’t have the information needed to know how students travel to school, or the traffic information. So we’ll end up in a world where we progress, but with drama, with regulation by Twitter and media headline and class action litigation.

We need the certainty of a level playing field, as imperfect as laws will always be, so that we can actually move forward rapidly, particularly around AI where there are huge debates. We need to decide, is it okay to suck up all the data from the public internet? Well, you know what? Maybe it’s public data, but maybe we didn’t actually intend this when we hammered out the IP rules and the copyright rules, and maybe we want to think about what the right balance is. If not, it’s the courts that are going to decide it. Let’s decide it with good, thoughtful public policy.

Moore: Jules, this has been fantastic. You’ve shared an incredible amount of information, and a breadth of both concern and optimism. I’m thrilled that you joined us today. Thank you for your time, and I hope to see you again soon.

Polonetsky: I am indeed optimistic despite, I think, all the drama. Exciting things are happening with data. We just need to get the guardrails in place that can help us drive quickly and safely.

Moore: Great, thank you. Back to you, Joe.

Kornik: Thanks, Tom. And thanks, Jules. And thank you for watching the VISION by Protiviti interview. On behalf of Tom and Jules, I’m Joe Kornik. We’ll see you next time.


Jules Polonetsky has served for 15 years as CEO of the Future of Privacy Forum, a global non-profit organization advancing principled data practices in support of emerging technologies. Jules has led the development of numerous codes of conduct and best practices, assisted in the drafting of data protection legislation and presented expert testimony before agencies and legislatures around the world. Jules is an adjunct faculty member for the AI Law & Policy Seminar at William & Mary Law School. Jules has worked on consumer protection issues for 30 years, as chief privacy officer at AOL and at DoubleClick, as a Consumer Affairs Commissioner for New York City, and as an elected New York state legislator.

Jules Polonetsky
CEO, Future of Privacy Forum

Tom Moore is a senior managing director in Protiviti’s Data Privacy practice. Previously, Tom served as chief privacy officer at AT&T, directly responsible for all privacy programs, policies, strategy, and compliance with regulations at the state, national and international levels. Tom joined AT&T in 1990. Tom also serves on the board for the Future of Privacy Forum and the Community Foundation of the Lowcountry. He was formerly a member of the Executive Committee of the Board of Directors of the AT&T Performing Arts Center in Dallas.

Tom Moore
Senior Managing Director, Protiviti