Former Apple, Google CPOs talk tech, data, AI and privacy’s evolution
IN BRIEF
- “The incredible complexity of the problem and opportunity around privacy requires business leaders to understand — this is about weighing equities. It’s about delivering utility in a responsible way. It’s about innovating in a way that’s going to keep your organization on the right side of history.”
- “I think that the buzzword out there is AI, and I think CPOs are very, very well set to handle the issue of AI. They’ve set up compliance programs; as we’re looking at AI, AI is just very much software, and as we’re looking at the first regulatory framework in the EU, it’s all about harms. So it’s balancing risk, balancing harms.”
- “I think it’s a really interesting moment for privacy leaders. I think you need to embrace that change. I think trying to hold on to the past and preserve your privacy brand exclusively is not going to prove to be the most prescient or professionally advantageous strategy, given just the velocity and shape of the change that’s coming to us.”
Protiviti’s senior managing director Tom Moore sits down with a pair of privacy luminaries who both left high-profile roles as chief privacy officers to join the global law firm Gibson Dunn. Jane Horvath is a partner and Co-Chair of the firm’s Privacy, Cybersecurity and Data Innovation Practice Group. Previously, Jane was CPO at Apple, Google’s Global Privacy Counsel, and the DOJ’s first Chief Privacy Counsel and Civil Liberties Officer. Keith Enright is a partner at Gibson Dunn and serves as Co-Chair of both the firm’s Tech and Innovation Industry Group and the Artificial Intelligence Practice Group. Previously, Keith was a vice president and CPO at Google. Tom leads a lively discussion about the future of privacy, data, regulation and the challenges ahead.
In this interview:
1:42 – Privacy challenges at Apple and Google
5:32 – What should business leaders know about privacy?
7:20 – Principles-based approach to privacy: The Apple model
10:42 – Top challenges for CPOs through 2025 and how to prepare
23:16 – Will the U.S. have a federal data privacy law soon?
27:00 – What clients are asking about privacy
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and we’re thrilled to welcome a pair of privacy luminaries for a panel discussion led by Protiviti’s Tom Moore. Both of today’s guests have previously held high-profile roles as chief privacy officers of two of the largest tech firms in the world and are now with global law firm Gibson Dunn. Jane Horvath is Co-Chair of the firm’s Privacy, Cybersecurity, and Data Innovation Practice Group. Previously, Jane was CPO at Apple, Google’s Global Privacy Counsel, and the DOJ’s first Chief Privacy Counsel and Civil Liberties Officer. Joining Jane today will be Keith Enright, also a partner at Gibson Dunn, where he serves as Co-Chair of both the firm’s Tech and Innovation Industry Group and the Artificial Intelligence Practice Group. Previously, Keith was Vice President and CPO at Google. Leading today’s discussion will be my Protiviti colleague, Senior Managing Director Tom Moore. Tom, I’ll turn it over to you to begin.
Tom Moore: Great. Thank you, Joe. I’m honored today to be with Keith and Jane. You guys are awesome leaders in the privacy space, and I think we’re going to have a great conversation.
Keith Enright: Yes, it’s such a pleasure. Thanks for having me.
Jane Horvath: Hi. Tom, thank you so much for inviting me. I’m excited to talk about privacy today.
Moore: You both were chief privacy officers of two of the largest companies in the world and at the forefront of many of the issues facing privacy and data protection today. Let’s reflect on that time for just a little bit. Jane, let’s start with you. What are some of the biggest challenges you faced, or one or two highlights from that period?
Horvath: Probably the biggest challenge that I faced, actually, there were probably two challenges. The first was post-9/11 government surveillance. A lot of the audience may remember the San Bernardino case, in which the federal government, the FBI, asked us to build a backdoor into the operating system. They were doing it with good intentions, there had been a horrific terrorist attack, but that really raised a lot of the issues that we grapple with every day: where is the balance between security, meaning encryption, and privacy? Then the other, I would say, is that as my time went on, privacy became more and more regulated. Of course, we saw GDPR, and we’re seeing more and more states enact privacy laws, many of which are not compatible with one another. In Asia, we have China, which enacted a privacy law that is really, ostensibly, a data localization law. So I would say it got more challenging from a regulatory standpoint.
Moore: Keith, what about you?
Enright: I have very similar themes, I would say. I would break it down to, say, complexity, velocity and scale to capture the challenges. Complexity in terms of the diversity of the product portfolio and the incredible rate of technological innovation and change, trying to make sure that you are staying sufficiently technically sophisticated so that you could give good legal advice and counsel, but also help keep the business moving forward and not serve as an unhelpful headwind to progress and innovation. Velocity and scale: at Google, we were launching tens of thousands of products every single year, and they were being used by billions of people all over the world to stay connected and stay productive. So taking all of the complexity of the environment and all of the additional legal and regulatory requirements, as Jane points out, as the environment got far, far more complicated, and mapping all of that to clear, actionable advice to allow hundreds of product teams across the global organization to continue innovating and bringing great products into production was a pretty incredible challenge.
In terms of highlights, and I’ll point to one serendipitously because of my good friend and partner, Jane, here: probably the single greatest highlight of my Google career came during the pandemic. We had this incredible moment where our respective leaders set aside the commercial interests of the organization and gave Jane and me really significant runway to collaborate on privacy-protective exposure notification technology. That involved working closely with engineers and innovators, and it also involved a global roadshow of engaging with not only the data protection regulators we knew very well, but public health authorities and others who needed to be brought along and educated on the notion that we really could use privacy as a feature in deploying this incredibly important technology around the world, in a way that was indisputably going to save lives.
Moore: What a great example of not only intra-firm cooperation and collaboration but inter-firm as well. Keith, you hit upon an important topic: your business leaders and how you engaged with them. Are there one or two things you wish every business leader knew before you went to talk to them, so you had common grounding?
Enright: I suppose what I would love for leaders at every organization to bring into the conversation with their privacy and data protection leadership is a general understanding that privacy is not a compliance problem to be solved. It is a set of risks and opportunities that exist between technical innovation, business priorities, individual rights and freedoms of users, and user expectations, which are going to be different in different places around the world, for different age groups, for different individuals. The incredible complexity of the problem and opportunity around privacy requires business leaders to understand—this is about weighing equities. It’s about delivering utility in a responsible way. It’s about innovating in a way that’s going to keep your organization on the right side of history.
I do think privacy leaders have a significant challenge when they’re engaging with the C-suite or the boardroom: to somehow remind their leadership that you can’t get compliance fatigue from privacy and data protection. Because the environment is going to keep getting more complicated, you need to engage with this as an opportunity to future-proof your product strategy, be vigilant and diligent about making responsible investments to ensure you’re doing this appropriately, and never think of it as a solved problem.
Moore: Very interesting. It’s profound as well. Jane, I can’t think of many companies that have a better reputation for supporting privacy from a consumer standpoint than Apple. Take us into the boardroom or the C-suite at Apple. What were some of those conversations you had? What types of questions did you receive from the board or the C-suite?
Horvath: Sure. Like Keith, I was very lucky. When I started at Apple, it was very apparent that there was a deep respect for privacy. My main partner was the head of privacy engineering, and we didn’t do anything without each other, every meeting, every conversation. I think the most important thing over the 11 years I was there was this: people say, “Oh, I don’t care about privacy. They can have all my data,” but there are really innovative ways that you can build privacy in that don’t mean you’re not collecting their data. So when we were counseling and doing product counseling at Apple, we distilled privacy down to four main principles. The first was data minimization. That’s sort of overarching, because anybody who works with engineers knows that if you tell them they have to comply with GDPR, their eyes roll back. So for us, it was great to distill it down. The second was on-device processing, but it was even more than that. This is the innovative step, where you can innovate, and it is really a subset of data minimization. People think, “Oh, minimizing data means I can’t collect data.” It actually means you can’t collect identifiable data. So have you considered sampling data? Have you considered creating a random identifier to collect data? These were some of the things we raised every day when we were counseling.
The third principle was choice. Consumers should know what’s happening with their data. Do they know? So it’s transparency, and whether they have choices about it. Many of you who use iPhones get to make choices every day about data collection.
Then finally, security. You really can’t build a product to be protective of privacy without considering security.
So that was sort of the secret sauce: Apple distilled this thing called “privacy” down to these four principles, and we briefed the board on them. We didn’t have to, but my boss at the time felt it was important to talk to the board about the things that we wanted to do with privacy, and they thought it was a great idea. Tim was hugely passionate about the issue, so from the executive suite, it flowed down through the company. My job was relatively easy because I didn’t have to make the sales pitch.
Moore: The principles approach is a good one. I think what you laid out there was relevant then and is relevant now. Those are sustainable principles that are very much top of mind for chief privacy officers, their bosses and the C-suite, as well as the board. You’re not privacy officers anymore, other than in terms of providing advice to that cohort, but tell us a little bit about what CPOs should be thinking about today and into 2025. In the short term, where should they be triaging issues? What should be top of mind?
Horvath: I think that the buzzword out there is AI, and I think CPOs are very, very well set to handle the issue of AI. They’ve set up compliance programs; as we’re looking at AI, AI is just very much software, and as we’re looking at the first regulatory framework in the EU, it’s all about harms. So it’s balancing risk, balancing harms.
I think the bigger challenge is, of course, that this software needs lots of data. But again, you can pull from your innovative quiver and decide that yes, it needs data, but does it need data that’s linked to an individual person? Are there things that you can do with the data set? So I think CPOs can be very, very helpful and valued members of the team as companies are considering how to use their existing data.
Of course, as we talked about earlier, privacy has become much more regulated, and that data was collected pursuant to certain agreements and a privacy policy. So the CPO is going to have to be deeply involved in determining, if you’re going to use the data for a different purpose, how you do it. So the CPO shouldn’t panic. The CPO can never be, and has never been able to be, the “no” person, but the CPO can be a really innovative member of the team going forward, in my opinion.
Enright: I agree with everything that Jane said. I think it’s a very interesting moment, not only for chief privacy officers but for privacy professionals more generally. By most estimations, over, say, the last 15 or 20 years, the privacy profession has enjoyed an incredible tailwind. Many folks, including those of us on this call, have enjoyed tremendous professional benefit from the growth of the profession, the explosion of new legal requirements, which Jane pointed to, and the fact that organizations woke up to some of these risks. In part, the GDPR coming into effect in 2018, with the notion of civil penalties of 4% of global annual turnover for noncompliance, made this a board-level conversation to an extent greater than had ever been the case in the past. You had boards of directors and C-suites of large multinational concerns suddenly sensing that they had clear accountability to ensure that management was contemplating and mitigating these risks appropriately, and that there was a privacy and data protection component to core business strategy.
Something very interesting has happened over, say, the last five years: while privacy and data protection continue to flourish, a number of other areas of legal and compliance risk have been scaling up very quickly and dramatically. You have content regulation online for large platforms and others. You have the challenge of protecting children and families online rising to the fore with increased regulatory attention. Also, as Jane correctly said, artificial intelligence has just exploded over the last couple of years. Those of us who are specialists in the field have been working with artificial intelligence for over a decade, but the explosion of LLMs and generative AI has created an unprecedented level of investment and attention in that area, and that’s having a number of interesting effects. C-suite and board-level attention is now being, in some ways, diverted to questions like: how does AI affect your business strategy, how do you anticipate potential disruption, and are some of these innovations going to allow your business strategy to take share from your competitors? All of that has senior leadership looking across organizations to find leadership resources and technical talent to focus on the AI question, the AI problem and the AI opportunity.
One domain that seems immediately adjacent and particularly delicious for that kind of recruitment is privacy and data protection, because the AI space shares many of its features: a tremendous amount of technological innovation over a relatively short period of time, and an explosion of regulation with inconsistencies, domestically and internationally. And it’s not just in-house; the regulatory community is going through an analogous struggle, trying to find its way in a new AI-dominant world. All of this has privacy professionals really considering: do they pivot? Do you shift from being a privacy and data protection specialist to being an AI governance specialist? Do you evolve and expand? Do you rebrand yourself and stretch your portfolio into more things? Do you actively solicit senior executive requests that you take on accountability for some of these adjacent domains? Or do you resist them, recognizing that privacy and data protection remain an extraordinarily challenging remit, and that the CPO or some other senior leader may have apprehension about overextending themselves by agreeing to be held accountable for something far beyond an already demanding scope?
So I think it’s a really interesting moment for privacy leaders. I have some strong views on this, which we may talk about, but the TLDR is: I think you need to embrace that change. I think trying to hold on to the past and preserve your privacy brand exclusively is not going to prove to be the most prescient or professionally advantageous strategy, given just the velocity and shape of the change that’s coming to us.
Moore: So Keith, I think the three of us can stipulate that that is the right approach for privacy leaders, but can you go into a little bit more detail about the how? What should a privacy leader be doing in the next three years or so to prepare and educate themselves to meet these challenges of technology, innovation and regulation, all the things colliding together that you just described?
Enright: So a candid response to that question requires a very clear understanding of the culture of your organization and what your business strategy is. If you’re working for a Google or an Apple, there’s a certain set of plays in your playbook that you need to run to ensure that you are appropriately educating your senior leadership and bringing them along, and making sure that you are understanding the risk landscape, staying appropriately sophisticated on the way things are impacted or changed by AI. Again, in large organizations like that, you have the benefit of these vast reservoirs of resources that you can draw upon to make sure that you are not only staying technically proficient, but that you’re serving as connective tissue across all of these different complementary teams and functions so you’re preparing your organization to not only endure, but to thrive through that wave of change that’s coming.
But not everybody is going to be at an organization like Google or Apple. For privacy leaders almost anywhere else, you are going to need to understand the risk appetite of your leadership and the consequences of the changes on the horizon for your core business strategy. What kind of resources are available to you? Do you have a privacy program at a very high level of maturity, where some of those resources can be extended or redeployed to think about things like AI governance? Or do you have an underfunded, anemic privacy program that is already carrying an unsustainable level of risk, leaving you in a “Hunger Games” situation, fighting just to keep the thing operating at a level that you feel comfortable being held accountable for? All of those variables are going to be essential things for privacy and data protection leaders to really press against.
I think, again, this is going to be an interesting moment over the course of the next few years, as I believe there is a wave of investigations and enforcement coming across the next two to three years. First, in the core privacy and data protection space, the General Data Protection Regulation and many other laws and regulations around the world haven’t gone away. Just because industry is increasingly interested in, confused by and distracted by what artificial intelligence means, that doesn’t prevent data protection authorities and regulators from launching investigations and initiating enforcement of your, call them “legacy obligations,” under regimes like the General Data Protection Regulation.
I think we’ve actually seen a relatively limited wave of enforcement for the last couple of years, because regulators’ capacity has been largely absorbed with trying to digest and understand how the ecosystem is changing as well, but I think that’s going to settle over the next few years. We are going to see privacy regulators enforcing in the context of privacy, privacy regulators enforcing in the context of AI, and AI regulators enforcing in the context of AI. All of this is going to create an interesting political dynamic in jurisdictions around the world, which is going to dramatically amplify the need for organizations to make substantial investments and prepare themselves for a changing and increasing risk environment.
Horvath: Just to give an example: right now, because of the Irish DPC, Meta and X are no longer training their AI on European data. How many other investigations are ongoing at the DPC that are basically holding up AI products? So here is another area where the CPO is going to have to be a bridge to the company. As Keith said, I think a lot of businesses think, “Okay. This privacy thing’s over. We went through the privacy thing. Now we’re going to concentrate on the AI thing,” but the privacy regulators, particularly in Europe where the fines are pretty stringent, are not going away. They are single-issue regulators. And I think it will be more challenging for CPOs because their budgets are going to get slashed, particularly at a company whose margins are tight, and companies are going to be hiring these AI people too. So there’s going to be a smaller pot of money to go around and more work.
So I agree completely with Keith; we’re going to see a lot of activity. We are already kind of seeing it from the FTC. They are issuing very, very broad CIDs. The OpenAI CID that was leaked to the press was just like an expedition into everything about their company. So I think that’s going to be another area: when you have a regulator knocking, it’s going to be critically important to get a handle on it, not panic, see where you can narrow it down and address the regulator head-on.
Moore: Jane, I wholeheartedly agree with you. Regulation coming not only from Europe but in the U.S., from the three-letter agencies and also the states, is a focus right now. But let’s look to the future: does the U.S. get a federal privacy law, a data protection law, in the next three to five years?
Horvath: I’m going to be bullish and say yes, at a certain point, because I think we’ve gotten very close to having one, and I think AI, children’s privacy, all of these different areas are going to push it across the finish line at some point. But I don’t know. Keith, what do you think?
Enright: So I share your optimism, actually. Memories are short, but not too terribly long ago, we really did have growing optimism that we were going to see omnibus federal privacy legislation. There are a lot of interesting things happening. For most of my career, the position of industry, generally, was that it would never support a bill that didn’t have extremely strong federal preemption or that included a private right of action. And just before the pandemic, you started seeing multiple large industry players beginning to soften, even on some of those core positions, which I found incredibly interesting. The political will, and I think the growing awareness that we require some kind of consistent federal standard to allow compliance with the increasingly varied requirements manifesting in these state laws coming into effect, seems to be generating momentum. Now, as has always happened before, it all fell apart and we were set back again, but it suggests to me that the impossibility of a federal law is probably overstated. I think there is a road there, and there will inevitably be compromises, surprises and idiosyncrasies in whatever ultimate law makes its way over the line, but I do think we’ll see something. In the single-digit years ahead of us, I think we will have a federal law in the U.S.
Moore: Let’s pivot to your current responsibilities, Jane. Tell me about the differences between leading the privacy team at a large company like Apple versus providing legal services to multiple clients.
Horvath: I’m really enjoying it, actually. I’ve been a serial in-house person: I did my stint in government, worked at Google, where I was actually on the interview panel that hired Keith, what a great panel that was, and then Apple. And I’m really having fun working with a lot of different clients. I also still have a global practice; I ran a global team at Apple, and I love the global issues. I’ve got a few clients in the Middle East, working on different AI projects, doing everything from policy to compliance to HR. It keeps me going and it’s exciting. I think the most fun is working with a client and understanding their business, but also having the client say, “Oh, you understand what I’m going through. You understand that I can’t just go tell the business ‘x,’” because I’ve been in-house, and I know where they are. So it’s an exciting time. There are just so many different developments going on, not just in AI: cybersecurity, data localization, content regulation. There are huge numbers of interesting issues.
Moore: So, top of mind for those clients: when you get a call, what are the top two or three things those clients are talking to you about right now?
Horvath: Incident response is a big one. But the biggest question we’re getting right now is: we want to use AI internally, what are the risks? How do we grapple with rolling out AI tools? What are the benchmarks? What are the guardrails and policies we need to put in place? How do we do it while minimizing liability? Because AI hallucinates and has other issues, and you have to grapple with those. So that’s probably my biggest issue right now.
Moore: Great. Keith, I presume you have lots of opportunities after your Google career, why professional practice?
Enright: It’s probably useful to describe the things the two roles have in common. One of the things that always made me feel so blessed to join Google when I did, almost 14 years ago, was the privilege of working with the best and brightest people. We got to work on this incredible portfolio of products that were being used by billions of people all over the world, really with a sincere commitment to making people’s lives better. The original motto, organizing the world’s information and making it universally accessible and useful, resonated deeply with me. It was very easy to be passionate and excited about the work. But do anything for 13 1/2 years and you get comfortable to some extent, even something as challenging as leading privacy for Google. When Jane reached out to tell me a little bit about the opportunity taking shape here at Gibson, the chance to work not just in support of one company’s vision or one company’s product portfolio, but to support thousands of leaders and innovators across tens of thousands of products all over the world, that’s exactly the kind of thing that is going to help me stay challenged, do my best work and keep growing and evolving.
Moore: I’m excited for both of you. Obviously, your compatibility reads through loud and clear. Thank you very much, Jane. Thank you very much, Keith. I really appreciate you being here today. Joe, back to you. Thank you.
Kornik: Thanks, Tom, and thanks, Jane and Keith, for that fascinating discussion. I appreciate your insights. Thank you for watching the VISION by Protiviti interview. On behalf of Tom, Jane, and Keith, I’m Joe Kornik. We’ll see you next time.
Jane Horvath is a partner in the Washington, D.C. office of Gibson, Dunn & Crutcher. She is Co-Chair of the firm’s Privacy, Cybersecurity and Data Innovation Practice Group, and a member of the Administrative Law and Regulatory, Artificial Intelligence, Crisis Management, Litigation and Media, Entertainment and Technology Practice Groups. Having previously served as Apple’s Chief Privacy Officer, Google’s Global Privacy Counsel and the DOJ’s first Chief Privacy Counsel and Civil Liberties Officer, among other positions, Jane draws from more than two decades of privacy and legal experience, offering unique in-house counsel and regulatory perspectives to counsel clients as they manage complex technical issues on a global regulatory scale.
Keith Enright is a partner in Gibson Dunn’s Palo Alto office and serves as Co-Chair of both the firm’s Tech and Innovation Industry Group and the Artificial Intelligence Practice Group. With over two decades of senior executive experience in privacy and law, including as Google’s Chief Privacy Officer, Keith provides clients with unparalleled in-house counsel and regulatory experience in creating and implementing programs for privacy, data protection, compliance, and information risk management. Before joining Gibson Dunn, Keith served as Google’s Chief Privacy Officer and Vice President for over 13 years, where he led the company’s worldwide privacy and consumer protection legal functions, with teams across the United States, Europe and Asia.
Tom Moore is a senior managing director in Protiviti’s Data Privacy practice. Previously, Tom served as chief privacy officer at AT&T, directly responsible for all privacy programs, policies, strategy, and compliance with regulations at the state, national and international levels. Tom joined AT&T in 1990. Tom also serves on the board for the Future of Privacy Forum and the Community Foundation of the Lowcountry. He was formerly a member of the Executive Committee of the Board of Directors of the AT&T Performing Arts Center in Dallas.