Did China break encryption? Protiviti’s quantum director sets the record straight
In this VISION by Protiviti Interview, Konstantinos Karagiannis, Protiviti’s director of quantum computing services, sits down with Joe Kornik, Editor-in-Chief of VISION by Protiviti, to discuss the recent news that China may have broken military-grade encryption. Karagiannis sets the record straight on what happened, what it could mean for the future of classified information, and what organizations should be doing to prepare for a post-quantum world.
In this interview:
1:00 – Did China break quantum encryption?
4:31 – What it takes to crack RSA
6:28 – Practical challenges to scaling the China solution
9:46 – What should organizations be doing to get ahead of “Q-day”?
Did China break encryption? Protiviti’s quantum director sets the record straight
Joe Kornik: Welcome to the VISION by Protiviti Interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive board rooms worldwide. Today, we’re exploring the future of privacy, and I’m joined by my Protiviti colleague, Konstantinos Karagiannis, Director of Quantum Computing Services.
Konstantinos has been helping organizations get ready for quantum opportunities and threats that lie ahead. He’s been involved in the quantum computing industry since 2012, and is the host of Protiviti’s popular podcast, “The Post-Quantum World.” Konstantinos, thank you so much for joining me today.
Konstantinos Karagiannis: Yes, thanks for having me. It’s always great to join you.
Kornik: So, Konstantinos, I’ve been hearing more and more about quantum. I know you’ve been at this for a long time, but lately I’ve been hearing more and more about it in the media, including in mid-October, when something happened in China. I’m not going to pretend to understand exactly what happened, but I’ve heard or seen things about potentially military-grade encryption being cracked, which seems way earlier than we thought. So, is the end of encryption here early? Is this what some in the media have called “Q-Day,” and has it arrived?
Karagiannis: The short answer is no, which is good. It’s not the end of encryption already. It’s funny that this Chinese story broke pretty heavily over the weekend as we’re recording this, and I was like, “I’m going to have an interesting week. I already know this is going to be one where I’m going to be asked a lot of interesting things.”
So, basically, we don’t have a great translation of this Chinese paper. A Chinese paper was published, and in it they make some pretty strong claims, but the abstract is in English and then after that it dives right into Chinese. So, if you try to translate it with machines or AI, you end up with some holes, and as a result, no one’s reproduced this yet. So, I can’t come on today and say, based on reproductions by other teams, that this paper is even real, but let’s say the claims are true. Let’s pretend it’s not some nation-state psy-op to try and freak out the West or something. Even if the claims are 100% true, it doesn’t really spell the end of encryption. So, that’s the awesome news, right? Even worst case, it’s not all over.
People might have been hearing for a while now that we need fault-tolerant quantum computing to crack encryption, and that just means that quantum computers are noisy. They’re prone to interference, the qubits fall apart, you can’t do the complicated math of Shor’s algorithm to crack something like RSA. So, we need error correction. These things are starting to be built, error-correcting machines, but it could be 10 years or longer before we have one powerful enough using those traditional paradigms to crack encryption.
What’s scary about this Chinese paper is that they used the current annealing quantum computer from D-Wave. That’s a machine that’s on the cloud right now that you can access and use today. It raises all sorts of questions about access: where did these researchers come in from? D-Wave is technically Canadian. So, there’s all this stuff, because your listeners might have heard of the quantum export bans going on. I can’t comment on that; I don’t know how they got access to it, but basically this machine exists and can be used.
So, annealing is different. It’s not error corrected. It’s not even designed to give you the correct answer. A gate-based quantum computer, the ones that we thought would be cracking encryption, they’re designed to take a problem through a series of quantum gates and give you a definitive this or that, you know, whatever your problem is. Annealing is more like an optimization finder. It’s sort of like a global optimization peaks-and-valleys solver.
So, if I were to ask you to imagine, I love this example, driving around the United States and finding the highest and lowest points, that would take you forever; whereas an annealer can literally do something called “tunneling”; it can move through all of those peaks and valleys and find the lowest one, let’s say. That kind of optimization machine is what they used in this problem. So, that’s a little scary because it’s a new approach.
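To make the peaks-and-valleys picture concrete, here is a minimal classical simulated-annealing sketch in Python. It is only a loose analogue of what a quantum annealer does (there is no tunneling here, and it has nothing to do with D-Wave’s actual hardware or API); the landscape function and every parameter are made up for illustration.

```python
import math
import random

def landscape(x):
    # A bumpy one-dimensional "terrain" with many local valleys.
    return x * x + 10 * math.sin(3 * x)

def anneal(steps=20000, temperature=5.0, cooling=0.9995):
    x = random.uniform(-10, 10)
    best_x, best_energy = x, landscape(x)
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)
        delta = landscape(candidate) - landscape(x)
        # Always accept downhill moves; sometimes accept uphill moves while the
        # "temperature" is high, which lets the search climb out of local valleys.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
            if landscape(x) < best_energy:
                best_x, best_energy = x, landscape(x)
        temperature *= cooling
    return best_x, best_energy

print(anneal())  # Typically lands near the deepest valley rather than the first one it stumbles into
```

A real annealer explores the landscape with quantum effects rather than random thermal kicks, but the division of labor is the same basic picture: encode the problem as an energy landscape, then hunt for its lowest point.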
Kornik: Right, and I was reading some of the media reports and the researchers, I guess, claim to have factored a 50-bit number. Can you explain the significance of that in the context of RSA encryption?
Karagiannis: Sure. So, a 50-bit number, first of all, is not terribly large, in fact we’ve tangoed in this area before and I’ll talk about that a little bit later, but basically, they picked a number, let’s say 2289753, and they wanted to try and get its factors. A 50-bit number, you can think of it as 50 bits, you know, a bit is a zero or one, right? So, if you were to string 50 of them in a row, each of those bits has two options, a zero or a one. Because of that, the math gets very interesting. It becomes 2ⁿ, so it would be 2 to the 50th power. Those are all the possible combinations of ones and zeros.
That’s a pretty big number, right? But if you’re going to try and crack something like RSA, you’re talking about a 2048-bit key. That is way bigger. You’re thinking more along the lines of 2 to the 2048th power. These numbers get insanely large. The universe only has about 10 to the 80th power particles in it. So, these are just numbers that you can’t even fathom. So, it’s not like 2 to the 50th is anywhere near or even touching 2 to the 2048th; exponential math is not really something humans are comfortable thinking about. You could represent that number I cited before in seven digits, right? If you were to represent a 2048-bit number, you would use 617 digits. So, take that number they factored, add 610 more digits to it, and that’s just one key. That’s crazy. That’s not even scratching the surface.
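As a quick sanity check on that arithmetic, a few lines of Python (illustrative only) reproduce the figures quoted above:

```python
# How many bit patterns a 50-bit value can take, and how large a 2048-bit RSA modulus is.
print(2 ** 50)                         # 1125899906842624 -- about 1.1 quadrillion patterns
print(len(str(2 ** 2048)))             # 617 -- a 2048-bit number needs 617 decimal digits
print(len(str(2 ** 2048 // 2 ** 50)))  # 602 -- even the gap between the two is a ~600-digit number
```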
So, as a result, we’re nowhere near anything that could be called military-grade encryption or a real risk today. That’s kind of like for starters.
Kornik: Okay. Well, that certainly makes me feel better and I’m guessing most of the people watching also feel better. What are some of the practical challenges in scaling quantum annealing to a level where it could truly threaten our encryption standards?
Karagiannis: We’re having a hard time scaling regular gate-based machines, right? That’s why we don’t have these fault-tolerant systems yet. When it comes to annealing, the question is, does this paper show any kind of linear path where scaling even becomes an issue? In the paper, they push for a hybrid quantum-classical approach. What that means is they’re using the optimization of the annealer to sort of bundle numbers in a way that you can then optimally apply classical approaches to.
So, you could think of it as, like, a search for the keys. You’re kind of bundling likely places to look for the keys, and then you’re going to use classical hardware to look for the keys. That’s hopelessly simplifying it, but I just want to make sure it doesn’t fly right over our listeners’ heads. So, that’s what’s happening. It’s kind of like machine learning; they almost frame it as a machine learning approach, which it really isn’t, but that’s what they’re calling it. This is really optimization.
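As a toy illustration of that division of labor (this is not the paper’s actual method, and the “optimizer” below is a completely made-up stand-in), the sketch narrows the search for a small semiprime’s factors to a few flagged ranges and then lets plain classical trial division finish the job:

```python
import math

def narrow_ranges(n, num_ranges=4, width=200):
    # Stand-in for the annealer: pretend an optimizer has flagged a few promising
    # neighborhoods to search -- here, just narrow windows below sqrt(n).
    center = math.isqrt(n)
    return [(max(2, center - (i + 1) * width), center - i * width) for i in range(num_ranges)]

def classical_search(n, ranges):
    # The classical half of the hybrid: ordinary trial division, but only inside
    # the flagged ranges instead of over every possible candidate.
    for lo, hi in ranges:
        for candidate in range(lo, hi + 1):
            if n % candidate == 0:
                return candidate, n // candidate
    return None

n = 104_723 * 104_729  # two small primes standing in for RSA's p and q
print(classical_search(n, narrow_ranges(n)))  # (104723, 104729)
```

The worry Karagiannis raises next is whether the classical half of that loop can keep up once the flagged ranges themselves become astronomically large.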
So, because of that layout, they’re hoping that this will scale. That’s fair to hope that, but when you look at the classical systems that are involved, I’m not convinced that you can go much farther. Like even if you can optimize for a larger key search, I don’t think the hardware you then have to rely on to do the actual searching would be able to keep up. I think we’re going to hit the scale limit fast.
This isn’t the first time we’ve seen this kind of limitation. People might remember in December 2022, there was a paper that kind of created a stir, once again from China. It was called “Factoring integers with sublinear resources on a superconducting quantum processor.” It’s a big, crazy title, but basically in it, everyone might remember, they claimed to factor a 48-bit number with 10 of those gate-type qubits we talked about. Using extrapolation, they said you’d only need 372 to crack RSA. That’s terrifying because we thought we would need many, many thousands of error-corrected qubits to factor RSA. So, that was sort of a “sky is falling” situation.
Google researchers did a little bit of validation. Remember, I said we don’t have access to a translated version of this new paper, so no one’s been able to reproduce the results, but Google researchers were able to work on that 2022 problem and show that it would stop around 70 bits. So, the sky didn’t fall then, and right now, it might not be falling here either, because I have a feeling that if you try to scale this up, those classical system constraints will kick in and sort of keep it from getting much farther.
That said, it’s interesting, and whenever we have new approaches like this, it makes me worry that some little kernel of them will show us a path forward. Some optimization process—there’ve been other papers too, and I’m not going to go down rabbit holes—but every approach like this will probably fail somewhere, and it still makes us go, “Okay, we might have something to worry about in the future, and we can learn from this.” So, there’s always that.
Kornik: Well, great. Thank you so much for shedding some light on that and making us feel perhaps a little bit better, or perhaps a little bit more on alert or high-alert as we probably all should be anyway.
We are sitting here in the middle of cybersecurity month, and VISION by Protiviti is focused on the future of privacy. So, I’m just curious, if we could take sort of a 30,000-foot view and talk a little bit about how organizations should be preparing for the potential impact of quantum computing on their cybersecurity infrastructure and their data security framework, even if it’s maybe not the most immediate threat, since we know it’s coming eventually.
Karagiannis: Sure. One big thing to point out is this approach that was published in the Chinese paper can’t touch the new NIST post-quantum cryptographic standards that were released on August 13th, 2024. The lattice-based approach in there is safe from this type of attack and safe from Shor’s algorithm, which is the quantum attack we were all worrying about.
So, really the best thing you could be doing right now is starting the migration plans for PQC. It’s time to start taking inventory, start looking at what cryptography you have in place, start looking at which critical assets you might want to protect first. Because migrating to new cryptography takes time and it’s tricky. So, that’s the journey you have to begin on. This paper will not, as I said, threaten PQC, so why not start looking towards PQC because that is going to be a path that everyone has to take.
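As a very rough illustration of what that first inventory-and-prioritization pass can look like (hypothetical data and field names, not a Protiviti tool or methodology), here is a small Python sketch that flags which assets to migrate first based on the cipher in use and how critical the asset is:

```python
# Hypothetical cryptography inventory; in practice this would come from scans,
# certificate stores, and application owners rather than a hard-coded list.
inventory = [
    {"asset": "customer-portal", "algorithm": "RSA-2048",   "criticality": "high"},
    {"asset": "internal-wiki",   "algorithm": "RSA-2048",   "criticality": "low"},
    {"asset": "payments-api",    "algorithm": "ECDSA-P256", "criticality": "high"},
    {"asset": "doc-signing",     "algorithm": "ML-DSA-65",  "criticality": "high"},  # already post-quantum
]

QUANTUM_VULNERABLE = ("RSA", "ECDSA", "ECDH", "DH")

def migration_priority(item):
    # Classical public-key algorithms are the ones Shor's algorithm eventually threatens.
    if not item["algorithm"].startswith(QUANTUM_VULNERABLE):
        return "no action"
    return "migrate first" if item["criticality"] == "high" else "migrate later"

for item in inventory:
    print(item["asset"], "->", migration_priority(item))
```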
It’s also important to note that eventually, NIST is going to start recommending the deprecation of some classical ciphers. So, whether you believe quantum computers that can crack encryption are 10 years or 10 million years away, it doesn’t matter. Eventually, you’re going to start failing audits and things like that if you don’t have the latest ciphers in place. So, it is really time to start examining your environment and making a move to PQC.
Kornik: Well, Konstantinos, thank you so much for giving us that insight. We’re certainly glad that we’ve got you to sort it all out for us and to help us make sense of it. Even if I didn’t understand everything you said, I understood a great deal of it, so I am further along than I was before we started talking. So, thank you for that.
Karagiannis: Yes, and if I manage to recreate the paper’s results, I’ll be sure to come on and tell you what happened.
Kornik: Yes, please do.
Karagiannis: Okay.
Kornik: Thanks, Konstantinos, I appreciate it, and thank you for watching the VISION by Protiviti interview. On behalf of Konstantinos Karagiannis, I’m Joe Kornik. We’ll see you next time.
Former Apple, Google CPOs talk tech, data, AI and privacy’s evolution
Protiviti’s senior managing director Tom Moore sits down with a pair of privacy luminaries who both left high-profile roles as chief privacy officers to join the global law firm Gibson Dunn. Jane Horvath is a partner and Co-Chair of the firm’s Privacy, Cybersecurity and Data Innovation Practice Group. Previously, Jane was CPO at Apple, Google’s Global Privacy Counsel, and the DOJ’s first Chief Privacy Counsel and Civil Liberties Officer. Keith Enright is a partner in Gibson Dunn and serves as Co-Chair of both the firm’s Tech and Innovation Industry Group and the Artificial Intelligence Practice Group. Previously, Keith was a vice president and CPO at Google. Tom leads a lively discussion about the future of privacy, data, regulation and the challenges ahead.
In this interview:
1:42 – Privacy challenges at Apple and Google
5:32 – What should business leaders know about privacy?
7:20 – Principles-based approach to privacy: The Apple model
10:42 – Top challenges for CPOs through 2025 and how to prepare
23:16 – Will the U.S. have a federal data privacy law soon?
27:00 – What clients are asking about privacy
Former Apple, Google CPOs talk tech, data, AI and privacy’s evolution
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and we’re thrilled to welcome a pair of privacy luminaries for a panel discussion led by Protiviti’s Tom Moore. Both of today’s guests have previously held high-profile roles as chief privacy officers of two of the largest tech firms in the world and are now with global law firm Gibson Dunn. Jane Horvath is Co-Chair of the firm’s Privacy, Cybersecurity, and Data Innovation Practice Group. Previously, Jane was CPO at Apple, Global Privacy Counsel at Google, and the DOJ’s first Chief Privacy Counsel and Civil Liberties Officer. Joining Jane today will be Keith Enright, also a partner at Gibson Dunn, where he serves as Co-Chair of both the firm’s Tech and Innovation Industry Group and the Artificial Intelligence Practice Group. Previously, Keith was Vice President and CPO at Google. Leading today’s discussion will be my Protiviti colleague, Senior Managing Director Tom Moore. Tom, I’ll turn it over to you to begin.
Tom Moore: Great. Thank you, Joe. I’m honored today to be with Keith and Jane. You guys are awesome leaders in the privacy space, and I think we’re going to have a great conversation.
Keith Enright: Yes, it’s such a pleasure. Thanks for having me.
Jane Horvath: Hi. Tom, thank you so much for inviting me. I’m excited to talk about privacy today.
Moore: You both were chief privacy officers of two of the largest companies in the world and at the forefront of many of the issues facing privacy and data protection today. Let’s reflect on that time for just a little bit. Jane, let’s start with you. What are some of the biggest challenges you faced, or one or two highlights from that period?
Horvath: Probably the biggest challenge that I faced, actually, there were probably two challenges. The first was post-9/11 government surveillance. A lot of the audience may remember the San Bernardino case, in which the federal government, the FBI, asked us to build a backdoor into the operating system. They were doing it with good intentions, there’d been a horrific terrorist attack, but that really raised a lot of the issues that we grapple with every day: where is the balance between security, meaning encryption, and privacy? Then the other, I would say, is that as my time went on, privacy became more and more regulated. Of course, we saw GDPR, and we’re seeing more and more states enact privacy laws, many of which actually are not compatible. We have Asia, we have China, which enacted a privacy law that is really ostensibly a data localization law. So I would say it got more challenging from a regulatory standpoint.
Moore: Keith, what about you?
Enright: I have very similar themes, I would say. I would break it down to, say, complexity, velocity, and scale to capture the challenges. Complexity in terms of the diversity of the product portfolio, the incredible rate of technological innovation and change, trying to make sure that you are staying sufficiently technically sophisticated so that you could give good legal advice and counsel, but also help keep the business moving forward and not serve as an unhelpful headwind to progress and innovation. Velocity and scale: at Google, we were launching tens of thousands of products every single year. They were being used by billions of people all over the world to stay connected and stay productive. So taking all of the complexity of the environment, all of the additional legal and regulatory requirements as Jane points out, as the environment got far, far more complicated, and mapping all of that to clear, actionable advice to allow hundreds of product teams across the global organization to continue innovating and bringing great products into production was a pretty incredible challenge.
In terms of highlights, and I’ll point to one serendipitously because of my good friend and partner, Jane, here: probably the single greatest highlight of my Google career was during the pandemic, when we had this incredible moment where our respective leaders set aside the commercial interests of the organization and gave Jane and me really significant runway to collaborate on privacy-protective exposure notification technology. That involved working closely with engineers and innovators, and it also involved a global roadshow of engaging with not only the data protection regulators we knew very well, but public health authorities and others who needed to be brought along and sort of educated on the notion that we really could use privacy as a feature in deploying this incredibly important technology around the world, in a way that was indisputably going to save lives.
Moore: What a great example of not only intra-firm cooperation and collaboration but inter-firm as well. Keith, you hit upon an important topic: your business leaders and how you engaged with them. Are there one or two things you wish every business leader knew before you went to talk to them, so you had common ground?
Enright: I suppose what I would love for leaders at every organization to bring into the conversation with their privacy and data protection leadership, it would be a general understanding that privacy is not a compliance problem to be solved. It is a set of risks and opportunities that exist between technical innovation, business priorities, individual rights and freedoms of users, user expectations, which are going to be different in different places around the world for different age groups, for different individuals. The incredible complexity of the problem and opportunity around privacy requires business leaders to understand—this is about weighing equities. It’s about delivering utility in a responsible way. It’s about innovating in a way that’s going to keep your organization on the right side of history.
I do think privacy leaders have a significant challenge when they’re engaging with the C-suite or the boardroom to somehow remind their leadership: you can’t get compliance fatigue from privacy and data protection. Because the environment is going to keep getting more complicated, you sort of need to engage with this as an opportunity to future-proof your product strategy, and be vigilant and diligent about thinking about how do we make responsible investments to make sure that we’re doing this appropriately, and never think of it as a solved problem.
Moore: Very interesting. It’s profound as well. Jane, I can’t think of too many companies that have a better reputation for supporting privacy from a consumer standpoint than Apple. Take us into the boardroom or the C-suite at Apple. What were some of those conversations you had? What were the type of questions you received from the board or the C-suite?
Horvath: Sure. So like Keith, I was very lucky. When I started at Apple, it was very apparent that there was a deep respect for privacy. My main partner was the head of privacy engineering, and we didn’t do anything without each other: every meeting, every conversation. I think the most important realization over the 11 years I was there was this: people say, “Oh, I don’t care about privacy. They can have all my data,” but there are really innovative ways that you can build privacy in, and that doesn’t mean you’re not collecting their data. So when we were doing product counseling, we distilled privacy down to four main principles at Apple. The first was data minimization. That’s sort of overarching, because anybody who works with engineers knows that if you tell them they have to comply with GDPR, their eyes roll back. So for us, it was great to distill it down. The second was on-device processing, but it was even more than that. This is the innovative step, where you can innovate, and it is really a subset of data minimization. People think, “Oh, minimizing data means I can’t collect data.” It actually means you can’t collect identifiable data. So have you considered sampling data? Have you considered creating a random identifier to collect data? Those were the kinds of things we counseled on every day.
The third principle, choice. Consumers should know what’s happening with their data. Do they know? So it’s transparency and do they have choices about it. So many of you who use iPhones get to make choices every day about data collection.
Then finally, security. You really can’t build a product to be protective of privacy without considering security.
So that was sort of the secret sauce: Apple distilled this thing called “privacy” down to these four principles, and we briefed the board on the principles. We didn’t have to, but my boss at the time felt it was important to talk to the board about the things that we wanted to do with privacy, and they thought it was a great idea, and Tim was hugely passionate about the issue. So from the executive suite, it flowed down through the company. My job was relatively easy because I didn’t have to make the sales pitch.
Moore: The principles approach is a good one. I think what you outlined there was relevant then and it’s relevant now. Those are sustainable principles that are very much top of mind for chief privacy officers, their bosses and the C-suite, as well as the board. You’re not privacy officers anymore, other than in terms of providing advice to that cohort, but tell us a little bit about what CPOs should be thinking about today and into 2025. In the short term, where should they be triaging issues, and what should be top of mind?
Horvath: I think that the buzzword out there is AI, and I think CPOs are very, very well set to handle the issue of AI. They’ve set up compliance programs; as we’re looking at AI, AI is just very much software, and as we’re looking at the first regulatory framework in the EU, it’s all about harms. So it’s balancing risk, balancing harms.
I think the bigger challenge is, of course, that this software needs lots of data, but again, you can pull from your innovative quiver and decide that yes, it needs data, but does it need data that’s linked to an individual person? Are there things that you can do with the data set? So I think CPOs can be very, very helpful and valued members of the team as companies are considering how to use their existing data.
Of course, as we talked about earlier, privacy’s become much more regulated and that data was collected pursuant to a certain number of agreements, a privacy policy. So the CPO is going to have to be deeply involved in determining, if you’re going to use the data for a different purpose, how do you do it? So I think the CPO shouldn’t panic. The CPO can never and has never been able to be the “no” person, but the CPO can be a really innovative member going forward, in my opinion.
Enright: I agree with everything that Jane said. I think it’s a very interesting moment, not only for chief privacy officers but for privacy professionals more generally. By most estimations, if you look at, say, the last 15 to 20 years, the privacy profession has enjoyed an incredible tailwind. Many folks, including us on this call, have enjoyed a tremendous professional benefit from the growth of the profession and the explosion of new legal requirements, which Jane pointed to, and from the fact that organizations woke up to some of these risks. In part, the GDPR coming into force in 2018, and the notion of civil penalties of 4% of global annual turnover for noncompliance, made privacy a board-level conversation to an extent greater than had ever been the case in the past, where you had boards of directors and C-suites of large multinational concerns suddenly sensing that they had some clear accountability to ensure that management was contemplating and mitigating these risks appropriately, and that there was a privacy and data protection component to core business strategy.
Something very interesting has happened, say, over the last five years, where privacy and data protection continue to flourish, but you also have a number of other areas of legal and compliance risk scaling up very quickly and very dramatically. You have content regulation online for large platforms and others. You have the challenge of protecting children and families online rising to the fore with increased regulatory attention. Also, as Jane said correctly, I think artificial intelligence has just exploded over the last couple of years. Now, those of us who are sort of specialists in the field have been working with artificial intelligence for over a decade, but the explosion of LLMs and generative AI has really, of course, created an unprecedented level of investment and attention in that area, and that’s having a bunch of interesting effects. C-suite and board-level attention is now being, in some ways, diverted to how you understand how AI affects your business strategy, how you anticipate potential disruption, and how you look at whether some of these innovations are going to allow you to take share from your competitors. All of that has senior leadership looking across organizations to try to find leadership resources and technical talent to focus on the AI question, the AI problem and the AI opportunity.
One domain which seems immediately adjacent, and particularly delicious for that kind of recruitment, is privacy and data protection, because it shares many of the features of the AI space: a tremendous amount of technological innovation over a relatively short period of time, and an explosion of regulation and inconsistencies, domestically and internationally. And it’s not just in-house; the regulatory community is going through an analogous struggle. They’re trying to find their way in a new AI-dominant world. All of which has privacy professionals really considering: do they pivot? Do you shift from being a privacy and data protection specialist to being an AI governance specialist? Do you evolve and expand? Do you decide to sort of rebrand yourself and stretch your portfolio into more things? Do you actively solicit senior executive requests that you take on accountability for some of these adjacent domains? Or do you resist them, recognizing that privacy and data protection remain an extraordinarily challenging remit, and the CPO or some other senior leader may have some apprehension about overextending themselves and agreeing to be held accountable for something far beyond that remit?
So I think it’s a really interesting moment for privacy leaders. I have some strong views on this which we may talk about, but like the TLDR on it is, I think you need to embrace that change. I think trying to hold on to the past and preserve your privacy brand exclusively is not going to prove to be the most prescient or professionally advantageous strategy, given just the velocity and shape of the change that’s coming to us.
Moore: So Keith, I think we, the three of us, can stipulate that that is the right approach for privacy leaders, but can you go into a little bit more detail about how. What should a privacy leader be doing maybe in the next three years or so to prepare themselves and educate themselves to meet these challenges of technology, innovation, regulation, all the things colliding together that you just described?
Enright: So a candid response to that question requires a very clear understanding of the culture of your organization and what your business strategy is. If you’re working for a Google or an Apple, there’s a certain set of plays in your playbook that you need to run to ensure that you are appropriately educating your senior leadership and bringing them along, and making sure that you are understanding the risk landscape, staying appropriately sophisticated on the way things are impacted or changed by AI. Again, in large organizations like that, you have the benefit of these vast reservoirs of resources that you can draw upon to make sure that you are not only staying technically proficient, but that you’re serving as connective tissue across all of these different complementary teams and functions so you’re preparing your organization to not only endure, but to thrive through that wave of change that’s coming.
But not everybody’s going to be at an organization like Google or Apple. I think for privacy leaders almost anywhere else, you are going to need to understand the risk appetite of your leadership and the consequences of the changes on the horizon for your core business strategy. What kind of resources are available to you? Do you have a privacy program with a very high level of maturity, where some of those resources can be extended or redeployed to think about things like AI governance? Or do you have an underfunded, anemic privacy program that is already carrying an unsustainable level of risk, and you find yourself in a “Hunger Games” situation, fighting just to keep the thing operating at a level that you feel comfortable being held accountable for? All of those variables are going to be essential things for privacy and data protection leaders to really press against.
I think, again, this is going to be an interesting moment over the course for the next few years, as I believe there is a wave of investigations and enforcement coming across the next two to three years. First, in the core privacy and data protection space, the General Data Protection Regulation, many other laws and regulations around the world, they haven’t gone away. Just because industry is increasingly interested in, confused by and distracted by what artificial intelligence means, that doesn’t prevent data protection authorities and data protection regulators from launching investigations and from initiating enforcement for your, call them “legacy obligations” under regimes like the General Data Protection Regulation.
I think we’ve actually seen a relatively limited wave of enforcement for the last couple of years, because regulators’ capacity has been largely absorbed with trying to digest and understand the way that the ecosystem is changing as well, but I think that’s going to settle in over the next few years and I think we are going to see privacy regulators enforcing in the context of privacy, privacy regulators enforcing in the context of AI, AI regulators enforcing in the context of AI—all of this is going to create an interesting political dynamic, I think, in jurisdictions around the world, which is going to dramatically amplify the need for organizations to be making substantial investments and preparing themselves for a changing and increasing risk environment.
Horvath: Just to give an example, right now, because of the Irish DPC, Meta and X are no longer training their AI on European data. So, how many other investigations are ongoing at the DPC that are basically holding up AI products? Here is another area where the CPO is going to have to be a bridge to the company. Because, as Keith said, I think a lot of businesses think, “Okay. This privacy thing’s over. We went through the privacy thing. Now, we’re going to concentrate on the AI thing,” but the privacy regulators, particularly in Europe where the fines are pretty stringent, are not going away. They are single-issue regulators, and I think it will be more challenging for CPOs because their budgets are going to get slashed; where you’re operating in a company whose margins are tight, they’re going to be hiring these AI people also. So there’s going to be less of a pot of money to go around and more work.
So I agree completely with Keith; we’re going to see a lot of activity. We are already kind of seeing it from the FTC. They are issuing very, very broad CIDs; the OpenAI CID that was leaked to the press was just like an expedition into everything about their company. So I think that’s going to be another area where, when you have a regulator knocking, it’s going to be really critically important to get a handle on it, not panic, see where you can narrow it down and address the regulator head-on.
Moore: Jane, I wholeheartedly agree with you. I think that regulation coming not only from Europe but in the U.S., from the three-letter agencies and also the states, is a focus right now, but let’s look at the future. Does the U.S. have a federal privacy law, a data protection law, in the next three to five years?
Horvath: I’m going to be bullish and say “yes” at a certain point, because I think we’ve gotten very close to having one before, and I think AI, children’s privacy, all of these different areas are going to push it across the finish line at a certain point, but I don’t know. Keith, what do you think?
Enright: So I share your optimism, actually. Memories are short, but not too terribly long ago, we really did have growing optimism that we were going to see omnibus federal privacy legislation. There are a lot of interesting things happening. For most of my career, the position of industry, generally, was that it would never support a bill that didn’t have extremely strong federal preemption or that included a private right of action. And you started seeing multiple large industry players beginning to soften, even on some of those core positions, just before the pandemic really, which I found incredibly interesting. The political will and, I think, the growing awareness that we require some kind of consistent federal standard to allow some level of compliance with the increasingly varied requirements manifesting in these state laws seem to be generating momentum. Now again, as has always happened before, it all fell apart and we were set back again, but it does suggest to me that the impossibility of a federal law is probably overstated. I think there is a road there, and there will inevitably be compromises, surprises, and idiosyncrasies in whatever ultimate law makes its way over the line, but I do think we’ll see something. I think in the single-digit years ahead of us, we will have a federal law in the U.S.
Moore: Let’s pivot to your current responsibilities, Jane. Tell me about the differences between leading the privacy team at a large company like Apple versus providing legal advice and services to multiple clients.
Horvath: I’m really enjoying it, actually. I’ve been a serial in-house person, did my stint in government. I worked at Google and actually was on the interview panel that hired Keith, what a great panel that was, and then Apple, and I’m really having fun working with a lot of different clients. I also still have a global practice. I ran a global team at Apple. I love the global issues. I’ve got a few clients in the Middle East, working on different AI projects, doing things from policy to compliance to HR. It just keeps me going and it’s exciting. I think the most fun is working with a client and understanding their business, but also having the client say, “Oh, you understand what I’m going through. You understand that I can’t just go tell the business ‘x,’” because I’ve been in-house, and I know where they are. So it’s an exciting time. There’s just so many different developments going on, not just in AI: cybersecurity, data localization, content regulation. There are just huge amounts of interesting issues.
Moore: So top of mind for those clients, you get a call, what’s the—I think you just probably mentioned it, but what are the top two or three things those clients are talking to you about right now?
Horvath: Incident response is a big one. But the biggest question we’re getting right now is: we want to use AI internally, what are the risks? How do we grapple with rolling out AI tools? What are the benchmarks? What are the guardrails we need to put in place? What are the policies we need to put in place? How do we do it while minimizing liability? Because AI hallucinates and has other issues, so how do you grapple with those issues? That’s probably my biggest issue right now.
Moore: Great. Keith, I presume you had lots of opportunities after your Google career. Why professional practice?
Enright: It’s probably useful to just describe sort of the things that are in common. One of the things that always made me feel so blessed to join Google when I did, almost 14 years ago, was the privilege of working with the best and brightest people. We got to work on this incredible portfolio of products that were being used by billions of people all over the world, really with a sincere commitment to making people’s lives better. The original motto of organizing the world’s information and making it universally accessible and useful resonated deeply with me. It was very easy to be passionate about the work and excited about the work. But you do anything for 13 1/2 years, and you get comfortable to some extent, even something as challenging as leading privacy for Google. When Jane reached out to me to tell me a little bit about the opportunity taking shape here at Gibson, and not just in support of one company’s vision or one company’s product portfolio, but to be able to support thousands of leaders and thousands of innovators across tens of thousands of products all over the world, that’s exactly the kind of thing that is going to help me stay challenged, do my best work and keep growing and evolving.
Moore: I’m excited for both of you. Obviously, your compatibility comes through loud and clear. Thank you very much, Jane. Thank you very much, Keith. I really appreciate you being here today. Joe, back to you. Thank you.
Kornik: Thanks, Tom, and thanks, Jane and Keith, for that fascinating discussion. I appreciate your insights. Thank you for watching the VISION by Protiviti interview. On behalf of Tom, Jane, and Keith, I’m Joe Kornik. We’ll see you next time.
Jane Horvath is a partner in the Washington, D.C. office of Gibson, Dunn & Crutcher. She is Co-Chair of the firm’s Privacy, Cybersecurity and Data Innovation Practice Group, and a member of the Administrative Law and Regulatory, Artificial Intelligence, Crisis Management, Litigation and Media, Entertainment and Technology Practice Groups. Having previously served as Apple’s Chief Privacy Officer, Google’s Global Privacy Counsel and the DOJ’s first Chief Privacy Counsel and Civil Liberties Officer, among other positions, Jane draws from more than two decades of privacy and legal experience, offering unique in-house counsel and regulatory perspectives to counsel clients as they manage complex technical issues on a global regulatory scale.

Keith Enright is a partner in Gibson Dunn’s Palo Alto office and serves as Co-Chair of both the firm’s Tech and Innovation Industry Group and the Artificial Intelligence Practice Group. With over two decades of senior executive experience in privacy and law, including as Google’s Chief Privacy Officer, Keith provides clients with unparalleled in-house counsel and regulatory experience in creating and implementing programs for privacy, data protection, compliance, and information risk management. Before joining Gibson Dunn, Keith served as Google’s Chief Privacy Officer and Vice President for over 13 years, where he led the company’s worldwide privacy and consumer protection legal functions, with teams across the United States, Europe and Asia.

Tom Moore is a senior managing director in Protiviti’s Data Privacy practice. Previously, Tom served as chief privacy officer at AT&T, directly responsible for all privacy programs, policies, strategy, and compliance with regulations at the state, national and international levels. Tom joined AT&T in 1990. Tom also serves on the board for the Future of Privacy Forum and the Community Foundation of the Lowcountry. He was formerly a member of the Executive Committee of the Board of Directors of the AT&T Performing Arts Center in Dallas.

NY Comptroller: If COVID can’t kill a city, can it make it stronger?
Thomas DiNapoli is the 54th Comptroller of New York, a cabinet officer of the state of New York and head of the New York state government's Department of Audit and Control. As Comptroller, DiNapoli is the State’s chief fiscal officer ensuring that state and local governments use taxpayer money effectively and efficiently to promote the common good. Employing more than 2,700 people, the office’s responsibilities include serving as sole trustee of the $254.8 billion New York State Common Retirement Fund, one of the largest institutional investors in the world; administering the New York State and Local Retirement System for more than one million public employees and more than 3,000 employers; administering the State’s approximately $16.7 billion payroll and overseeing the fiscal affairs of local governments, including New York City. In 1972, DiNapoli became the first 18-year-old in New York state to hold public office when he was elected a trustee on the Mineola Board of Education. In 2007, DiNapoli was elected State Comptroller. He was re-elected Comptroller by New York’s voters in 2010, 2014 and 2018. Joe Kornik, VISION by Protiviti’s Editor-in-Chief, sat down with DiNapoli in May to discuss New York City’s future.
Kornik: I’d like to start talking about how COVID-19—and the economic crisis it’s caused—has the potential to alter a city’s finances for a long time. Now that we’re nearing the end, how’d we do?
DiNapoli: Well, I certainly think compared to where we were a year ago, we've done much better than any of us could have imagined at the time. When you think of the depths of the economic fallout from COVID and the severe job loss, it was devastating from an economic point of view. And New York City was the first and the hardest hit of the U.S. metropolitan areas. We experienced a severe spike in unemployment and a severe drop in sales tax revenue, and I think everybody was expecting the worst. So here we are about halfway through 2021 and we’ve seen the picture improve in terms of unemployment and sales tax revenue, but we’re certainly not back to pre-pandemic levels. The big game changer for the city was the support that came from the federal government and the American Rescue Plan Act of 2021. The change in the presidency, the change in the Congress and certainly Chuck Schumer as Senate Majority Leader were all big factors helping lead the city through the crisis: We’re actually on target to end the year with a surplus. That doesn't mean there still aren’t major concerns, but it’s a much better picture from where we thought we’d be a year ago.
Kornik: Honestly, that’s more optimistic than I expected. It seems like there are so many headwinds in terms of lost tax revenue, unemployment, real estate and other factors to consider.
DiNapoli: You know the employment numbers are still going to be off and revenue numbers are going to be off, and the property tax loss is significant—the city's projecting the highest drop in property tax collections in its history. And we’re concerned that may continue well into the future. In terms of real estate, that depends a lot on how business moves forward with bringing people back to the office. There's still a lot of uncertainty, but one of the bright spots has been the resilience of financial services. When the markets tanked in March of 2020, everybody thought Wall Street was going to tank, too. But it didn’t; bonuses were up, and that has helped maintain an important part of the city's revenue. So, that’s been a big key to financial stability. I’m optimistic. I was in Manhattan recently and there's more street traffic than I've seen in many months, and people seem to be returning to work and the office. And maybe we’re starting to get some day-trippers? I don't think we’re getting very much overseas tourism yet, but we’re all watching tourism because it’s so vital to the city’s overall economy. But even as Broadway starts to reopen and restaurants continue to come back with the help of federal support, the pace of the recovery is so important to the future of the city’s finances. So, we’re keeping a close eye on all of this. We’ve done a series of reports on the retail sector, the restaurant sector, the hospitality sector, the tourism sector and the forecasts are still way off. But if this recovery is slower than anticipated, we could be dealing with a lot of tough choices sooner rather than later.
Kornik: I know some of the biggest challenges are imminent, but if you were to focus a little farther out—maybe even something the next Comptroller will have on his or her plate a decade from now—what comes to mind?
DiNapoli: Well, first I would point out that I still have many years to go to beat Arthur Levitt’s run of 24 years of being New York Comptroller. A decade from now, I could still be Comptroller… now, I’m not announcing anything, I assure you. But if we’re looking a decade out, one of the key dynamics is, will New York City still be a place that attracts young, talented people—in the arts, or technology or the financial sector? And, even pre-pandemic, there's always been a concern about the out-migration of established upper-income New Yorkers, but I think we probably need to focus more on the migration of some of those younger talented people who are on the verge of launching their careers and perhaps settling down and raising a family in the city but because of this pandemic, we might have lost some of them. So, if we want New York to continue to be a vibrant, wonderful place 10 years from now, we've got to make sure we're focusing on that next generation. So that really speaks to some of those factors I was talking about earlier, safety and employment. Businesses will need to adapt to a new reality, even if that means a hybrid model of remote and in-person work—they need to be mindful of how younger people want to work. I do think if we address some of those broader issues, and if we focus on the next generation and make sure we're not losing them, I think the city has the potential to be stronger than ever in 2030. Look, New York has come through many crises over the years, including a pandemic, by the way. And history says we always end up better, not worse.
Kornik: Do you suspect that will happen again?
DiNapoli: Right after 9/11, there was nothing going on downtown. Now, lower Manhattan is humming in terms of business activity, but it's also become a residential community. Much more so than it ever was pre-9/11. It’s better than it was. And I think when we look back on this time a decade from now, there will be lessons learned and things about New York City that are better than they were pre-COVID. I'm very positive about what New York will be 10 years from now. And while it’s always difficult to look that far out, our history as a city says, almost without fail, that we’re better than we were the decade before. So, I have every reason to think that we’ll look back on this time as a big turning point to a better New York City.
Joe Kornik is Director of Brand Publishing and Editor-in-Chief of VISION by Protiviti, a content resource focused on the future of global megatrends and how they’ll impact business, industries, communities and people in 2030 and beyond. Joe is an experienced editor, writer, moderator, speaker and brand builder. Prior to leading VISION by Protiviti, Joe was the Publisher and Editor-in-Chief of Consulting magazine. Previously, he was chief editor of several professional services publications at Bloomberg BNA, the Nielsen Company and Reed Elsevier. He holds a degree in Journalism/English from James Madison University.

Future of Privacy Forum CEO Jules Polonetsky on “exciting but risky” road ahead
In this VISION by Protiviti interview, Protiviti senior managing director Tom Moore sits down with Jules Polonetsky, CEO of the Future of Privacy Forum, a global non-profit organization that serves as a catalyst for privacy leadership, to discuss how business leaders can navigate a tricky road ahead for data security and privacy. For 15 years, Polonetsky and the FPF have helped advance principled data practices, assisted in the drafting of data protection legislation and presented expert testimony before legislatures around the world.
In this interview:
1:15 – Why the Future of Privacy Forum?
2:50 – What should business leaders focus on in the next five years?
7:02 – How is the head of privacy role evolving?
12:58 – GDPR and the fragmented state of U.S. regulation
14:00 – Looking ahead to 2030
Future of Privacy Forum CEO Jules Polonetsky on “exciting but risky” road ahead
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and I’m excited to welcome Jules Polonetsky to the program. For 15 years, Jules has been CEO of the Future of Privacy Forum, a global non-profit that serves as a catalyst for privacy leadership, where Jules has helped advance principled data practices, assisted in the drafting of data protection legislation, and presented expert testimony before legislatures around the world. He is an adjunct faculty member for the AI Law and Policy Seminar at the College of William & Mary Law School. Jules will be speaking with my Protiviti colleague, Senior Managing Director Tom Moore. Tom, I’ll turn it over to you to begin.
Tom Moore: Great. Thank you, Joe. I couldn’t think of anybody better to talk about the future of privacy than Jules Polonetsky. Jules, I’m so happy you’re joining us today. Thank you.
Jules Polonetsky: Delighted.
Moore: If you don’t mind, tell us a little bit more about the Future of Privacy Forum, its history, what you’re working on today?
Polonetsky: We’ve been around for about 15 years, and our members are, generally, the chief privacy officers at 200-plus organizations: the people who are really trying to grapple with the fact that the organizations they lead are driving the AI agenda, whether it’s big tech companies or startups, or banking, or car companies, right? Everybody is challenged by the fact that the pace of how data is being used is accelerating, and the norms of what’s right, what’s legal, what’s creepy, what’s innovative and exciting to users are rapidly developing, and they’re developing everywhere in the world. So we work with those folks to create best practices and standards, and try to support reasonable legal rules and structures around it.
We do that as well with the leading policymakers because it turns out they’re busy. They’ve got a big agenda. They’re trying to deal with wars around the world, the economy, all the challenges that legislators and government leaders grapple with, and they want and need support. They want to know which country is doing it right. What can we learn? Where are their mistakes? How does this technology work? So we try to work as the pragmatic voice, optimistic about tech and data, but quite aware that things can and will go wrong if you don’t have clear guidelines, clear landmarks for how organizations can be responsible as they roll out new data-powered products.
Moore: Excellent, and congratulations on 15 years of existence for the Future of Privacy Forum. Let’s talk about it. You obviously have your pulse on the world of privacy, and what do you think are some of the biggest issues over the next five years? If you’re a business leader, you’re a leader of an enterprise, you’re a regulator, what should you be thinking about? What should you be focused on to get prepared for the next five years?
Polonetsky: You know, the easy answer is to immediately talk about AI, but before we go to AI, I think it makes sense for us to pause for a second and recognize that it’s only in the last few years that you’ve been able to assume that almost everybody, in at least any decent, advantaged, progressing economy, has a mobile phone, probably has a smartphone, probably has apps, probably is connected to people via some sort of social media or WhatsApp group or the like. The world has started hurtling forward; part of it is a COVID world, where suddenly we all got comfortable doing things over video conference. We became a small world where people are connected, which means that the good and the bad things that happen around the world immediately reverberate. It means the bad actors can do their work from every part of the world and can develop sophisticated, complicated organizations, with teams and levels of different delegated services that they can use as they deal with organizations.
So we’ve moved to this super connected, super immediate, sort of 24/7 world where users can create a giant alarm, sometimes correctly, sometimes incorrectly, when they think that your organization is doing the wrong thing, and it immediately is driven into the media because the media seem to spend a good chunk of their day following what happens on social media.
Those changes are only going to accelerate, but we’re also seeing the backlash, right? People who are just feeling burnt out because they were locked up at home during COVID, and they didn’t get to go out, and now they’re still gaming all day and all night, and they’re still connected. All the business tools are pinging them, not just on email, but on Slack, on Teams, and all these tools. Being ready and thoughtful and structured enough to navigate this incredibly frothy, turbulent world is the challenge—and then let’s talk about AI, where suddenly the investments are moving so quickly that the policy concerns are being left temporarily by the wayside, right? Who would have imagined that we’re rolling out products and we say, “Well, actually, they don’t work a lot of the time, but when they do, they do these incredible, really cool things, except it can’t be fully reliable, but we’re relying on it for incredibly important processes like interacting with our customers.”
So, for a long time, the problems of our current generative AI tools were well known, and you had leading companies saying, “Not yet. We don’t know the answers yet to how we’re going to put out stuff that isn’t reliable but can do super cool things, but actually also might be discriminatory, right?” For better or worse, the dam burst and everyone, from the most conservative organization to the wildest startup, is rolling out stuff that comes with lots of risks.
So that’s the world we live in. Chief privacy officers and legal and compliance folks suddenly need to go from a careful, measured world where they do assessments, and they consult, and they discuss, and they give advice and the business accepts the advice, to a place where people are rolling out things that were purchased from vendors who’ve purchased from vendors and putting them out in the market. So we are in an exciting, risky moment—exciting because really cool things are happening, but I don’t know that we’ve ever seen as much risk or drama. And guess what? The media are super interested because it’s about AI. So it can be the silliest flap and suddenly it’s front-page news.
Moore: You mentioned chief privacy officers, heads of legal, heads of compliance. They’re at the forefront of all this. The roles continue to evolve with AI and other technologies. Tell me about what you see as the primary role of the head of privacy within a large organization.
Polonetsky: You know, I see two trends. This is really a role that’s in flux. There is one trend, maybe it’s a negative trend or maybe it’s just the way of the world as laws and policies become established. When I first became a chief privacy officer many, many years ago, it was a novel title and it wasn’t the highly-regulated companies that had the most senior executives in these roles. The banks had regulation and structures and lines of defense and dealt with it for years. HIPAA, the federal health insurance portability and accountability law, was in place and organizations had structures around that. It was the startups, the internet companies, it was the ad tech companies who didn’t have detailed legislation, at least not in the U.S., but who were running into all of these explosions of concern, or the data companies who were suddenly able to do so much more than just send you targeted mail, who needed senior executives navigating the nuances of, “What do the consumers really want and what is civil society saying? They’re making a fuss about this. And what about regulators who want to support the internet and want to support these new business models, and who are very excited to come up with new laws and rules? And what about our customers, who need to understand what we’re doing with their data, now that we’re using it in ways we never have before?”
Here now, we’re in a world that’s become far more regulated. We’ve got all these state laws in the U.S. now. We’ve got AI laws. We have privacy laws. We have data protection regulation not just in Europe, which has been a leader and has been mature, but in almost every other jurisdiction. We’ve got a team in Africa. The countries across Africa are rolling out data protection regulation. South America, the big economies, India, right? The most giant economies, China, all have new data protection regulation and, now, new AI regulation. So some companies have said, “We don’t need the drama. We know how to do compliance. We worry about all kinds of compliance issues.” Those companies are rolling these roles into compliance and perhaps eliminating this sort of executive-type role. Other companies are going in exactly the other direction. They’re looking at the challenges of AI, which are not only about privacy, but start with, in many cases, personal information that’s collected and used and already regulated by data protection law. Even automated decisioning is already regulated by data protection law. So, some companies are recognizing that this is an incredibly strategic area and asking who is going to help them shape what are very nuanced decisions, not only about how to comply with laws but about situations like, “Hey, we’re now going to use video and improve it and your face is involved, and our customer’s data is involved, and we’re going to read their confidential information to create better tools that serve them. But, boy, they better trust us and trust the output.”
We see multiple layers of regulation, for instance, in Europe, where not only do we have privacy law, not only do we have AI law, but we have new kinds of competition laws. New laws that force you to provide data to your competitors. New laws that force you to provide data for researchers. So, we see a number of other companies saying, “Digital governance has become really complicated and we need somebody or some team managing the envelope of restrictions that exist around how we use data.”
So we're at an inflection point. Over time, we’ll either see some of this absorbed into the legal and compliance structure of the organization, or, as I think we’re already seeing, a whole new breed of folks will step up from data protection to a broader scope, whether it’s AI, whether it’s perhaps digital governance, perhaps it might be ethics. That’s where it’s going.
Moore: Excellent. So speaking of that broader scope, talk to the privacy community, the privacy leader, chief privacy officer, or other title. What do they need to do to prepare themselves for this environment to grow into those broader responsibilities?
Polonetsky: I love telling some of my colleagues and friends in data protection that they spend too much time on data protection. By that, I mean there is so much. I mean you can’t stop. There’s a new law. There’s a new regulation and California keeps rolling out new changes. The Europeans keep interpreting and reinterpreting. So you can really spend all your time keeping up with the incredible rush of details. But the reality is, guys, people, gentlemen, ladies, all of you, you know how to do that. There might be a nuance, there might be an item to deal with, but you know how to read legislation. You know how to do compliance. What’s changing super-fast is the way your business, the way your sector, is using data. Things that were norms are now changing. Things that the platforms are doing for their business that affect your business are changing. Spend more time, please, legal, compliance, ethics, privacy people, being gurus of how data is being used because that’s going to help you ask the smart question. You ask your legal assessment question, you’re going to get your legal assessment answer. Understanding who your partner is, what their business goals are and how they’re really planning to use data gives you the opportunity to ask much more probing questions that get you the answers you need.
Moore: Earlier, Jules, you mentioned Europeans, the GDPR. They’ve obviously invested quite a bit in legislating, regulating, enforcing the data protection for European citizens. Are they striking the right balance? Related question, what lessons can the U.S. learn should we ever get to national privacy law in the U.S.?
Polonetsky: GDPR, I think, is a very thoughtful document. The European legal process is a challenging process. It’s not one country. It’s a union. My hope is that we will move in the U.S. to regulate quickly around AI and data protection. Even if it’s not perfect, I think businesses need the certainty. They need a level playing field, and then they’ll compete. If anything ended up being too restricted, then we can go back and debate it. Right now, I think we’re suffering from a gap, tools being rolled out, and the law is sort of catching up in a way that may end up being quite challenging.
Moore: So let me put you on the spot, turn that hope into a prediction. By 2030, do we have a U.S. national privacy law or do we still have the state patchwork, federal agencies regulating, state agencies regulating?
Polonetsky: By 2030, I think the answer is easily yes. By next year, the answer is, “That’s going to be hard to say.” You know, it took the Europeans seven years to build out GDPR. And again, 70-80% of GDPR was already in the UK’s data protection law and German data protection law; they didn’t start with a blank slate. We’re talking about regulating a huge chunk of the U.S. economy. That’s complicated. It ought to take a while. I think Congress is in this period where they’re struggling through understanding the complexity of what it takes. So, you know what? Although I’d like them to do it now so that the states don’t all go do disparate things, it’s going to take them some time. They should take the time, but they need to do a bit better job of really getting thoughtful and smart, and there are hard issues that need to be debated by critics, and business, and researchers and so forth.
Moore: So Jules, on a couple occasions today, you’ve expressed optimism or hope. Let’s go the other route for just a second. What if we don’t get this right? What if national law, thoughtful and smart, doesn’t come into play by 2030? What could be the consequences of not getting this right?
Polonetsky: I don’t think we have a choice not to get this right. I think the not getting this right, perhaps, is doing it very piecemeal, doing it in ways—my home state of Maryland has passed a very strict state privacy law that doesn’t allow any greater flexibility for research. Could they have really intended to make it very, very complicated and hard, in the home of the National Institutes of Health and leading universities and so forth? Could they have intended to do that? So, I think we could have inadvertent, complicated mistakes, complications of multi-state compliance that cost money and cost time and probably don’t add any value.
So I think we move slowly and haphazardly if the world is state laws, the world is regulation by crisis and pushback. We end up not being trusted to use the most robust forms of data that we actually do need. We need data about sensitive populations to identify where discrimination may be taking place, where people are not getting access to health facilities. So if state laws make me worry about collecting any sensitive data, which many of them do with minimization or opt-in requirements, then it’s too risky. I don’t collect that location data, and that’s fine. We’ll protect some people who won’t get targeted by ads or who won’t have sensitive locations exposed, but we then won’t have the data that the CDC needs to understand how a pandemic spreads. We won’t have the information needed to know how students travel to school, or traffic information. So we’ll end up in a world where we progress, but with drama, with regulation by Twitter and media headline and class action litigation.
We need the certainty of a level playing field, as imperfect as laws will always be, so that we can actually move forward rapidly, particularly around AI where there are huge debates. We need to decide, is it okay to suck up all the data from the public internet? Well, you know what? Maybe it’s public data, but maybe we didn’t actually intend this when we hammered out the IP rules and the copyright rules, and maybe we want to think about what the right balance is. If not, it’s the courts that are going to decide it. Let’s decide it with good, thoughtful public policy.
Moore: Jules, this has been fantastic. You shared an incredible amount of information, breadth of both concern but also optimism. I’m thrilled that you joined us today. Thank you for your time and hope to see you again soon.
Polonetsky: I am indeed optimistic despite, I think, all the drama. Exciting things are happening with data. We just need to get the guardrails that can help us drive quickly, safely.
Moore: Great, thank you. Back to you, Joe.
Kornik: Thanks, Tom. And thanks, Jules. And thank you for watching the VISION by Protiviti interview. On behalf of Tom and Jules, I’m Joe Kornik. We’ll see you next time.
Jules Polonetsky has served for 15 years as CEO of the Future of Privacy Forum, a global non-profit organization advancing principled data practices in support of emerging technologies. Jules has led the development of numerous codes of conduct and best practices, assisted in the drafting of data protection legislation and presented expert testimony to agencies and legislatures around the world. Jules is an adjunct faculty member for the AI Law & Policy Seminar at William & Mary University Law School. Jules has worked on consumer protection issues for 30 years, as chief privacy officer at AOL and at DoubleClick, a Consumer Affairs Commissioner for New York City, and an elected New York state legislator.

Tom Moore is a senior managing director in Protiviti’s Data Privacy practice. Previously, Tom served as chief privacy officer at AT&T, directly responsible for all privacy programs, policies, strategy, and compliance with regulations at the state, national and international levels. Tom joined AT&T in 1990. Tom also serves on the board for the Future of Privacy Forum and the Community Foundation of the Lowcountry. He was formerly a member of the Executive Committee of the Board of Directors of the AT&T Performing Arts Center in Dallas.

Did you enjoy this content? For more like this, subscribe to the VISION by Protiviti newsletter.
Data and privacy: Exploring the pros and cons of doing business in a digital world
These days, data breaches happen so often that they feel like they are just the cost of doing business in a digital world. The worst ones involve credit card payment data, which could result in fraudulent charges to your account. Caught early enough, this will not impact your credit rating, and your bank will issue you a new card number. Because this happens with such regularity, I keep a list of websites and passwords handy so that I can easily change all my credit card automatic payment info.
In July, I received a letter saying that Ticketmaster, more specifically its parent company Live Nation Entertainment, had suffered a breach and my personal data had been compromised. Ticketmaster, which sold more than 620 million tickets in 35 countries in 2023, sent that same letter to some 560 million members (roughly 7% of the Earth’s population). Maybe you got one, too.
Exposing the personal data of half a billion people to malicious hackers is astounding news, but my first reaction wasn’t “wow” but “meh.” I’ve been breached before and I will, undoubtedly, be breached again, so I initiated the routine damage control sequence.
The latest, but not the worst
The Ticketmaster breach is just the latest, and not nearly the worst. That distinction belongs to CAM4, which exposed more than 10 billion records in 2020; Yahoo in 2017 with 3 billion; and Aadhaar and Alibaba, which exposed more than a billion users each in 2018 and 2022. And household names like LinkedIn (2021) and Facebook (2019) have also had bigger breaches.
Thankfully, Ticketmaster says the most crucial information, such as U.S. Social Security numbers, which are required for users who want to sell their tickets on the site, was not compromised. But phone numbers, e-mail addresses, home addresses and encrypted credit card payment data were exposed: a hacker’s paradise. (Ticketmaster did offer free credit and identity report monitoring, which I gladly accepted.)
Thankfully, nothing bad has come of it for me… at least not yet. But who knows who has access to my personal data on the dark web? And what can I—and 560 million others—do about it? The truth is, absolutely nothing.
And, perhaps foolishly, I have resold tickets on Ticketmaster, so my social security number is currently sitting in a Ticketmaster database—secured for now. Should I be worried? My bank has it. My tax software has it. And probably a few other for-profit businesses I’ve forgotten about have it too. It’s funny how we rationalize where danger to our privacy and most sensitive data lies and where it doesn’t. And how nonchalant we’ve become about the possibility, or probability, of it being exposed.
Big data means big worries
It’s been five years since Forbes declared data privacy would be the biggest issue facing businesses and consumers over the next decade. That was in 2019, before the pandemic accelerated our mass digitization. In many ways, that prediction has come to fruition. Fast forward to more recent Forbes findings that indicate 86% of Americans are more concerned about their privacy and data security than the state of the U.S. economy, and two-thirds either don't know or are misinformed about how their data is being used, and who has access to it.
86%
of Americans are more concerned about their privacy and data security than the state of the U.S. economy, and two-thirds either don't know or are misinformed about how their data is being used, and who has access to it.
- Forbes 2024 Global Threat Report
A Pew Research Center survey of U.S. adults found that 81% are concerned about the data companies collect about them and 71% are concerned about the data the government collects about them. Globally, the numbers are similar: A 2023 IAPP survey found 68% of respondents say they are very concerned about their privacy online.
Meanwhile, in Protiviti’s Executive Perspectives on Top Risks 2024 and 2034 survey, cyber threats are increasingly on the minds of global executives, moving from the 15th ranked risk in 2023 all the way to the third ranked risk for 2024. And when we asked them to identify risks a decade from now, cyber threats climbed to the top as the biggest risk anticipated in 2034.
The challenges are complex: AI and other emerging technologies will impact data security and privacy in ways we’re not entirely sure of just yet; and shifting state, national and global regulation complicate data policy and governance. Executives are aware of the problems, and probably many of the solutions, but implementing them in a measured way in an ever-evolving digital data and privacy landscape is incredibly difficult.
Exploring the future of privacy
That’s why VISION by Protiviti is embarking on a months-long journey to explore the future of privacy. Organizations are experiencing unprecedented change, and the regulations that govern how personal information from consumers and clients is collected, used, stored and archived are evolving.
In addition, the roles of the chief privacy officer (CPO), as well as the chief information security officer (CISO) and chief technology officer (CTO), are evolving day by day to match the external pressures of maintaining data privacy. Too many data breaches also have eroded customer trust, and consumers—undoubtedly growing tired of the “we regret to inform you…” letters—are demanding more say in the management of their data.
To take a 360-view of the topic, VISION by Protiviti’s Future of Privacy content includes interviews with experts and leaders in the data privacy and protection space, including:
- Jules Polonetsky, CEO of the Future of Privacy Forum, speaking with Protiviti’s Tom Moore about navigating the road ahead, the AI opportunities that will emerge and why we absolutely cannot get this wrong
- Sarah Armstrong-Smith, Microsoft’s Chief Security Advisor for EMEA, sitting down with Protiviti’s Roland Carandang to discuss what steps business leaders should be taking to build out a strategic data security plan
- The Economist’s Dexter Thillien discussing how privacy is in peril in the digital economy, and ways the private sector will play a significant role in the future of data privacy
- Sue Bergamo, executive advisor, author and former CISO highlighting what boards are getting wrong about data protection and privacy
- Mauro Guillén, futurist and vice dean of the Wharton School at University of Pennsylvania, writing about the effect of AI on the availability and use of personal data
- Protiviti’s senior managing director Tom Moore’s take on the evolving role of the chief privacy officer and its uncertain future.
In addition, VISION by Protiviti will be publishing its own research on the topic in collaboration with the University of Oxford. Look for our Global Executive Outlook on the Future of Privacy, 2030 at the end of October. We’ll be taking a closer look at the survey findings in a Protiviti webinar on November 5, 2024. And VISION by Protiviti will be hosting two privacy-focused live events in New York in mid-November. Stay tuned for details.
And while I’m in New York, maybe I’ll take in a Broadway show or a concert. And yes, I will probably buy those tickets through Ticketmaster.
81%
of U.S. adults are concerned about the data companies collect about them and 71% are concerned about the data the government collects about them.
- Pew Research Center Survey
Protecting data and minimizing threats with Microsoft’s Sarah Armstrong-Smith
In this VISION by Protiviti interview, Protiviti’s Roland Carandang, Managing Director in the London office and one of the firm’s global leaders for innovation, security and privacy, sits down with Sarah Armstrong-Smith, Microsoft’s Chief Security Advisor for Europe, Middle East and Africa, independent board advisor and author of Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats. The two discuss Microsoft’s data governance strategies in the face of elevated risk, the impact of AI and emerging technology, and what steps business leaders should be taking to build out a strategic security plan.
In this interview:
1:04 – What are the biggest threats to privacy?
2:58 – How AI changes the game: pros and cons
7:00 – Microsoft’s role in protecting customers’ privacy
10:18 – Thinking like a cyber criminal
15:35 – Will it get worse before it gets better?
Protecting data and minimizing threats with Microsoft’s Sarah Armstrong-Smith
Joe Kornik: Welcome to the VISION by Protiviti interview. I'm Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-Suite and executive boardrooms worldwide. Today we're exploring the future of privacy, and we welcome in Sarah Armstrong-Smith, Microsoft’s Chief Security Advisor for Europe, Middle East and Africa, independent board advisor, and author of “Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats.” Today, she'll be speaking with my Protiviti colleague Roland Carandang, Managing Director in the London office and one of our global leaders for innovation, security, and privacy. Roland, I'll turn it over to you to begin.
Roland Carandang: Thanks so much Joe. Sarah, welcome. Congratulations on the publication of your latest book and thank you so much for being with us today.
Sarah Smith: That's great to be here. Thank you.
Carandang: I'm going to dive in with a very big question just to start things off. What do you see as the biggest threats to data privacy right now and what are some things that executives and boards should be focused on?
Smith: Yes. Well, I think I'm going to go for the easy option to start with. Being a Chief Security Adviser at Microsoft, it has to be just the scope and scale of cyber-attacks. They're now at a range that we have never seen before, just in terms of the ferocity of those different types of threat actors. What are they doing? What are they after in particular? When we talk about cyber attacks, we've got to think about what those threat actors are after. In essence, they're asking, how do I monetize my return on investment? Some of those are financially motivated actors, some might be espionage or nation-state actors, some are activists, but ultimately, it's all about data, and that's something we've really got to be cognizant about. So whenever we've had a cyber-attack, we then have to think about the data breaches and what that means for the individuals who may be impacted by that cyber attack as well.
Then we have questions that no doubt have to be answered, maybe that’s through regulators, our own business, our customers, partners, with regards to what data, how much data, and what's the impact of that. If I took all of that combined, when we're talking about cyber attacks, data breaches, intellectual property theft, whichever way you want to look at it, ultimately it'll come down to one thing, which is effective data governance. I would really say, what data, where is it, what is the value of that data, and what are my expectations, not just from regulators but consumers and employees as well, about how I should be protecting that data no matter what is on the horizon?
Carandang: On VISION by Protiviti we often talk about AI, and I know that's something that's on your mind. Ultimately, what impact do you think AI will have on data privacy and data security? Is there anything that business leaders should be doing to prepare for that now?
Smith: Well, I think with any technology there are always pros and cons. So we start with the pros. Ultimately, think about the ability for AI and machine learning to provide really deep insights across large data sets. I think one of the biggest challenges that a lot of companies have, reflecting on where we started, is where's my data, how much data do I have, how much data exposure do I have? It's getting those really deep insights, but also thinking about how I can use that data to drive innovation.
It's no doubt we're thinking about AI and just the scale of innovation that we've seen over the last couple of years. We're seeing tremendous work with regards to breakthroughs in science, medicine, and technology. So there's absolutely no doubt that there are some huge positive impacts for a lot of companies.
Now, I go to the cons, so kind of the reverse of that, in particular when we think about Gen AI, which has only been around in the last couple of years. It was probably made famous by ChatGPT, but there are multiple other AI models. Then we've got to think about how each was actually trained and where that data came from. Some of the data, let's say, might have been scraped off the Internet. It could have been taken from social media. There are multiple places this data has come from, and that has been raising a lot of questions again about what data, where did that data come from, do I have any say in that data in terms of consent, legitimate interest and all of these types of things. Again, I can reflect back to the first question with regards to the cyber attackers and how they are thinking about amplifying their cyber-attacks with some of these large language models. From a nation state perspective, these are highly resourced, highly motivated threat actors.
Now, a couple of months ago Microsoft actually issued some research in conjunction with OpenAI, as we're talking about ChatGPT. What we identified is that some of the larger nation state actors are using these models to do reconnaissance, so they're learning about their targets, and they're also using those large language models to refine their attacks. So this is just a caveat that the AI itself is not doing anything bad. It's not a naughty AI. It's still a tool in the threat actor's kit bag. When we're talking about phishing, ransomware, malware, whatever the case may be, the AI is just another tool, if you think about it that way. When I think about AI, and I know there are a lot of companies that are spinning up R&D centers and innovation teams, thinking about the art of the possible, maybe building their own models or buying them, whatever the case, there are some really fundamental things, as we're talking about privacy in particular: that's responsible and ethical AI. It's really having a deep appreciation for those security and privacy implications, the potential detriment of some of those large language models and how they're being utilized, but also keeping privacy-enhancing technology in mind. So having encryption, thinking about how we're managing the data, or the data when it's exfiltrated… none of those things change just because we have some new technologies, right? We can't lose sight of the fundamentals, the foundation layers if you like, of security and privacy in particular.
Carandang: That's super interesting, Sarah. Microsoft clearly has a big role to play—it sounds like such an understatement—in AI but it also has just lots of customers as well and customer data. Since you mentioned it, can you just tell us a bit more about your role at Microsoft and how a company—you mentioned large data sets, and how a company like that deals with protecting its customer data. How do you spend your days and perhaps some of your nights as well?
Smith: Can I say, it's never a dull day, let's say, being at a big tech company. If I had to talk about my role first and foremost, in essence, my role is to liaise with our largest enterprise customers across Europe. I work multi-country and multi-sector, and it's really at that C-suite level. I can be talking to CISOs, CIOs, CTOs. It's really understanding those biggest challenges. Some of that we've already touched on. We've talked about cyber security, cyber-attacks, how they're evolving. We've talked about evolving technology, particularly when it comes to AI, responsible AI and all of these things, but it all fundamentally comes down to data and really understanding the value and the proposition of all of this big tech together.
Now, look at the cloud in its most simplistic form, irrespective of the size of the enterprises that we're talking about. Although I work at this level, there are obviously lots of different small enterprises and consumers who are utilizing the cloud. I would say the real value comes down to the shared responsibility model first and foremost. If you had your own data center or your own services, you'd be responsible for everything: the building, the infrastructure, the networks, all of the data, all of these things. The big difference when you move to the cloud, and some of that comes down to the type of cloud or SaaS services or whatever the case may be, is the shared responsibility model, which just means the cloud platform itself is the accountability of the cloud service provider. So in essence that infrastructure—patching, backups, recovery—won’t completely go away, but it's one of those things that you don't necessarily have to think about.
The other part of that shared responsibility model: if you think about all of the different companies across the globe, some of those are highly regulated entities, and those regulations are going to differ depending on what country they're in or even what region they're in. So, for customers to be able to adopt the cloud, Microsoft also has to have a very comprehensive compliance portfolio. Whether we're talking about GDPR or various different standards like NIST, for example, the underpinning platform first and foremost has to have all of those controls in place for you to take advantage of. There's a huge advantage right out of the box, I'd say, in terms of the inbuilt capability that's already there by standard and by default. The challenge, however, is you have to take advantage of it. It still means you're accountable for who's accessing that data and what data you put into the cloud.
Carandang: Your new book, Understand the Cyber Attacker Mindset, was mentioned in the introduction. It dives right into global cyber crime, and you've engaged with actual cyber criminals. What are some of the key takeaways from that engagement that you could share with the audience here?
Smith: I think what's interesting to me, and why I wrote it, is to really focus on the human part of security. When we think about security, a lot of people think we're here to protect data and to protect technology and servers in the cloud and all of these things, but actually, the data only has a value to it when we understand the repercussions of some of that data being in the wrong hands and how it could be misused or abused in various different ways. As we talked about at the beginning, there are a million and one ways in which I could potentially attack you, but there's only a finite set of reasons why I would want to, and why I'm motivated enough to want to do it. So I looked at the different types of threat actors. As I said, we've got some that are financially motivated, we've got activists, nation state actors, and we've got malicious insiders as well. Then it's the same data, but in different hands, what is the impact of that? Then it's being able to work backwards and say, “Okay, well, if someone was trying to sell this data, if someone was trying to use this data for espionage, if someone was trying to use it for other nefarious purposes, what do I need to do to protect it in all of those different hands, in essence?” That's really important, to understand the human motivation behind it and why they are willing to go to that extra degree to get their hands on that data. So I think about it very, very simply, no matter what size organization we're talking about, from the little ones up to the big enterprises, and I try and keep it quite simple. Our strategy in essence comes down to protecting the access in and the exit out. The access in is identity. As we're talking about privacy, it’s identity in all its guises. So it's identity as a human and identity of things, meaning laptops and devices and various things like that. In essence, from the threat actor's perspective, I have to find a way into your network. I don't particularly care how I get in. Whether I'm doing those phishing emails, going directly to the source, or finding a vulnerability in your network, I will find any which way into that network. The exit out really then comes down to that data. What is it I'm trying to exfiltrate out of your company that's giving me that value in particular?
Carandang: Thank you, Sarah. That's fantastic. You mentioned scale earlier. With the volume of data, and attacks on data, growing exponentially day by day, I do wonder if it's time for some bold paradigm shifts. Do you see any of these shifts on the horizon? For example, can you imagine a world where consumers will start to pay small fees for otherwise free services, so companies won't need to sell their data to third parties?
Smith: I think we're going to see that a little bit. People are starting to pay for subscription services where it's a highly tailored service. They don't get adverts, or the adverts they do get are more tailored. We are starting to see people who want an enriched service. But I think the challenge we have as well is that a lot of this technology, particularly when we’re talking about social media, has been around for a very long time, and it's been free for a very long time. Even when it is free, you've heard the comments: you are, in essence, the commodity. There's data, there's profiling that's being sold to varying degrees across different companies depending on how you're interacting with some of their services.
I think the interesting thing is, even when we've spoken about the size of some of the cyber attacks, the size of some of the data breaches, the fact that we've had these regulations, the fact that we've had record-breaking fines as a result of misuse or abuse of data and selling of data in various different ways, has it actually stopped people from using these services? I would argue not. Maybe there's a handful of people who are a bit mindful of it. I think you'll get pockets of people who want a better service, and you could sell it as a better, enriched service in some way, so maybe you'll have those kinds of people who might want to do that, but overall, I can't see it happening to a large extent.
Carandang: Got it. Thank you, Sarah. So we've covered a lot today. I wanted to ask your overall feelings on maybe the next five years or so. Take us out to 2030. Tell us what you see. Are we in a better place? How well will we have done with this endeavor?
Smith: I think it's interesting, isn't it? Like we talked about with GDPR and how long that's been around, we are over five years since GDPR came into being, and other regulations around the world are all coming along to varying degrees. Has it made any difference? I'm not sure. Arguably, I think it's going to have to get much worse before it gets better, but I do think there is some positive coming as well. I would just frame that with where we started, when we're talking about cybersecurity and what the game changer is. What we have seen is this willingness for more collaboration across big tech and across multiple different countries and jurisdictions. Particularly when we think about different actors moving data around, money laundering, people hiding in plain sight, it's really hard to bring a lot of these people to justice. Therefore what we have seen, as I said, in the last couple of years is that willingness to collaborate, the willingness to share intelligence, and really coming back to those core principles and foundational levels that we talk about. How do we have security and privacy by design, by default and as standard, so that nobody questions all of these things that have to be added on? Are you doing it for the right reasons? It just is. So, as I said, there's going to be a lot more work. It's not going to be easy. I have a tiny bit of optimism that we can tip the balance, but I just want to be realistic at the same point, not underestimating how much work is involved.
Carandang: That’s brilliant, Sarah. Thank you so much for your time and insight today. You've been very generous. Thank you for the great work you're doing more generally, and congratulations again on your book. Joe, back to you.
Kornik: Thank you for watching the VISION by Protiviti interview. On behalf of Roland and Sarah, I'm Joe Kornik. We'll see you next time.
Sarah Armstrong-Smith is Microsoft’s chief security advisor for EMEA and an independent board advisor on cybersecurity strategies. Sarah has led a long and impactful career guiding businesses through digital attacks and specializing in disaster recovery and crisis management. Sarah is the author of Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats. Prior to Microsoft, she was Group Head for Business Resilience & Crisis Management at The London Stock Exchange and Head of Continuity & Resilience, Enterprise & Cyber Security at Fujitsu.

Roland Carandang is a Managing Director in Protiviti’s London office and one of the firm’s global leaders for innovation, security and privacy. He leads a world-class consulting team focused on modernizing and protecting businesses where he helps clients understand, implement and operate technology-based capabilities and takes pride in helping clients navigate an increasingly complex world. He collaborates across the Protiviti and Robert Half enterprise to ensure we are solving the right problems in the right way.

Did you enjoy this content? For more like this, subscribe to the VISION by Protiviti newsletter.
From bureaucratic performance to the common good: The challenge of Public Value in Italy
The soccer field fable: A lesson in misaligned priorities
Once upon a time, there was a mayor of a town with some 5,000 residents who prided himself on making the most of public funding. One key goal of his administration was to build five soccer fields in five years, with the ultimate goal of increasing the level of sports participation and health of the town’s citizens. That well-meaning mayor, thanks to a well-functioning organization and efficient and motivated employees, was able to fulfill the political goal. As a result, the 5,000 residents, who happened to average 80 years of age, had five beautiful new soccer fields, delivered on time and on budget. Unfortunately, running and playing soccer was not the way the elderly residents of the town were looking to get their exercise. From their perspective, the soccer fields were a well-intentioned, but ultimately faulty, endeavor.
Breaking free from empty indicators
For decades, both global and Italian bureaucracies have dragged businesses and citizens through a complex labyrinth of public projects focused on quantitative measures: how efficiently public funding was utilized (the input) and how much was accomplished and how quickly (the output), with far less attention to what public benefit was delivered (the impact: the social and health benefits, in the case of the soccer fields).
These indicators for success (‘done/not done’ and ‘how much was done and in how much time’) have given rise to a new kind of bureaucracy where “performance for performance’s sake” is the norm and where the true impacts on citizen well-being are often overlooked or, at best, incidental. In fact, research published in 2020 and 2021 in the Italian journals RIREA and Management Control shows that just 13% of the 3,798 indicators used by the Italian ministries were impact indicators. Exceedingly rare were the cases of co-planning and co-reporting of impacts; research published in 2020 in the Italian journal Azienda Pubblica shows a "heat map" with few cases of co-planning between ministries concerning the same topic.
In this bleak scenario, among the small body of existing research on the impacts created, we can cite CERVAP's research on the Public Value created by the 14 Italian metropolitan cities. The study ranked the Public Value created by those cities using a Public Value Index ranging from 0 to 100.
Milan (between 68 and 70 on the Public Value Index) and Bologna (between 66 and 68) were the cities that generated the most well-being, with Milan's leadership being focused on economic impact, while Bologna’s leadership was keyed into social impacts.
Unfortunately, the southern cities didn’t fare so well, highlighting the fact that Public Value creation in the region is still a cultural and civic battle. These cities include Catania (between 30 and 32 on the index) and Naples, Palermo, Reggio Calabria and Messina, all between 32 and 39.
This context was also enabled by disjointed planning tools and programmatic fragmentation. In Italy, before 2021, public administrations (PAs) typically operated in silos with as many as ten different planning instruments per administration, resulting in overlapping projects and redundant efforts.
It wasn't until 2022 that Italian PAs began adopting integrated planning methods. For example, the Integrated Plan of Activities and Organization (PIAO) was created by an infusion of funds from the European Union as part of a public administration reform spurred by post-COVID-19 recovery efforts. In springing to life, the PIAO created the first legislative definition of Public Value: the multidimensional (social, economic, environmental) level of well-being created by a public administration in relationship to its citizens and businesses.
As a single planning tool, PIAO aligns resources with performance management and risk mitigation, but more importantly, it creates measurable Public Value by focusing on the comprehensive impact of public projects from the public’s perspective, rather than isolated outputs from the PA’s perspective.
The Public Value pyramid: A new framework for success
But the PIAO also raises some practical questions where the proverbial rubber hits the road: how do PAs systematically and consistently combine resourcing, risk management, performance, and other administrative factors to achieve a measurable impact on wellbeing? The methodological framework of the "Public Value Pyramid" answers these questions. It integrates various administrative functions—from resource management at its base up through performance and risk management—allowing policymakers and managers to govern the enabling, protecting, and creating of Public Value holistically.
The Pyramid operates on a principle of progressive value generation and measurement, beginning at the base level and moving upwards through intermediate programming levels that either create or protect Public Value.
Milan and Bologna were the cities that generated the most well-being, with Milan's leadership being focused on economic impact, while Bologna’s leadership was keyed into social impacts.
- The basis of the pyramid addresses how to enable Public Value. Public Value creation and protection are enabled by the planning of actions that are preparatory and functional to improve the quantity and quality of diverse types of PA resources.
- The intermediate levels of the pyramid address the issues of how to create Public Value and how to protect Public Value. The intermediate levels should be planned and measured in an integrated way so that the specific performance objectives, such as a funding call for businesses, are protected with specific risk measures.
- At the top of the pyramid, we find impacts and Public Value, which serve as the horizon of the entire programmatic architecture and address the questions of “what and how many impacts?” and, finally, “how much Public Value?” More precisely, at the top level of the pyramid we find the analytical or one-dimensional impacts, the average external impact and, ultimately, the average value between impacts, performance and health (the averaging is sketched below).
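The research cited above does not spell out how the 0-to-100 Public Value Index is actually calculated, so the following is only a minimal, illustrative sketch of the averaging the top of the pyramid describes, assuming each dimension has already been normalized to a 0-100 scale and that the dimensions are weighted equally:

$$ \text{PV} \;=\; \frac{\bar{I} + \bar{P} + \bar{H}}{3}, \qquad \bar{I} \;=\; \frac{1}{n}\sum_{k=1}^{n} I_k $$

Here, $I_k$ are the individual one-dimensional impact scores, $\bar{I}$ is their average (the “average external impact”), and $\bar{P}$ and $\bar{H}$ are the average performance and administrative-health scores. The actual index may use different weights or additional terms; the sketch is only meant to show impact entering the final measure on equal footing with performance and administrative health.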
The pyramid also emphasizes the crucial role of public managers, whose individual performances are measured based on their contribution to organizational success and risk management. This methodological framework enables PAs to plan by aligning administrative health, risk reduction and performance improvements, promoting holistic Public Value aimed at enhancing citizen well-being.
Engaging the next generation for the “Public Value generation”
The soccer field fable told at the opening of this article warns us against the risk of self-referentiality in defining what Public Value is. Public Value should be observed through the eyes of citizens and businesses; it should be communicated in their own words; but most importantly, it should be enabled, protected, and co-created with them.
The concept of Public Value must be extended to distinct categories of stakeholders, and it must preserve the possibility of improving the well-being of future generations. It is therefore important to engage with the new generations to create awareness and proper appreciation of Public Value. This is a particularly vital move as Italy prepares for nearly a third of its civil service workforce to retire by 2032. Compounding this demographic problem is the fact that young people overwhelmingly gravitate to the private sector as they enter the workforce.
Research conducted at Italian universities is trying to understand what would motivate young people to enter the public sector. When asked: “What would incentivize you to go and work in PA?” university students ranked “contribution to the creation of Public Value,” “clear career prospects” and “higher salaries” as their top three choices. Clearly, young people are looking for meaning in the work they will do and the sense of the common good that is embedded in the concept of Public Value. This is great news! Public Value is key to building a better future in Italy, as well as other countries around the world.
Every country walks at the speed of its public administrations. To encourage PAs to walk faster, we need to attract the best resources Italy has—young people—and actively involve them in innovative and shared Public Value projects.
Embracing Public Value isn’t merely about adopting new methodologies; it’s about changing perspectives—viewing policies through citizens’ eyes and measuring success not just by efficiency or output but by tangible improvements in people’s lives. As other countries observe Italy’s journey from bureaucratic chaos to a systematic approach highlighting Public Value, they too might find inspiration to pursue similar paths for building better futures for their citizens.
When asked: “What would incentivize you to go and work in PA?” university students ranked “contribution to the creation of Public Value,” “clear career prospects” and “higher salaries” as their top three choices.
Former CISO on what boards are getting wrong about data protection and privacy
In this VISION by Protiviti interview, Joe Kornik, Editor-in-Chief of VISION by Protiviti, sits down with Sue Bergamo. Bergamo is an executive advisor, former CIO, CISO, and Global Technology Strategist for Microsoft. She sits on several boards, is host of the Short Takes podcast and author of So You Want to Be a CISO: A Practical Guide to Becoming a Successful Cybersecurity Leader. Here, Bergamo discusses recent SEC rulings and their impacts on the current and future state of the CISO role, how the C-suite and boards view data governance and privacy, and what steps they should be taking right now to build customer trust.
In this interview:
0:57 – The CISO role in a state of flux
4:20 – The effect of the SEC’s cyber disclosure rule
7:39 – Is there a playbook for privacy?
10:20 – Will companies get it right for their customers?
Former CISO on what boards are getting wrong about data protection and privacy
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-Suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and I’m thrilled to welcome Sue Bergamo to the program. Sue is an executive advisor, former CIO, CISO, and global technology strategist from Microsoft. She sits on several boards, is host of the Short Takes podcast, and author of “So, You Want to be a CISO: A practical guide to becoming a successful cybersecurity leader.” Sue, thank you so much for joining me today.
Sue Bergamo: Thank you for having me. It’s a pleasure to be here.
Kornik: First off, Sue, let’s talk about the state of the CISO. As you point out in your book, which I mentioned in the intro, “So, You Want to be a CISO,” the position is really in a state of flux right now. So, talk to me a little bit about where the CISO is right now and how it’s changing, and if you think it will continue to be a critical part of the executive team going forward.
Bergamo: I like to use the term evolution because we’re in a position that I hope will evolve to a better state in the future. It’s just like the CIO role about 20 years ago: it had to go through some ebbs and flows and finally, it came out at the end of the tunnel in a much better spot. Everyone became very much aware of what the CIO needed and wanted to do, which was really around the back-office applications and the infrastructure that run our corporations.
The CISO role is going through that evolution and unfortunately, right now, it’s in a really ugly spot. I’m hopeful that it will come out a little bit better. What’s going on in the industry is the SEC’s cyber disclosure rule that came into effect late last year, which basically said the CISO does not need to report to the board, but the board and the executive team need to be aware of cyber incidents. So, what ended up happening with that—and I can go into more elaboration around two CISOs that were charged with felonies for material breaches that happened in the past—is that, in my opinion based on what I see and what I know, executive teams decided that CISOs weren’t really needed. A lot of the CISOs said, “We’re not going to put up with these personal liabilities.” So, a lot of us left our positions, and then there were a whole bunch of us who lost our jobs because the SEC cyber disclosure rule talked about awareness. It didn’t put the CISO on the board, but it talked about awareness with incidents.
So, what has transpired is—and I don’t mean this with any disrespect to SecOps managers—that SecOps managers, the security operations people, who are inexperienced from a CISO perspective, are being put into the role of head of security. Sometimes CISO, but mostly head of security, because they deal with incident response. Now, the dirty little secret in most organizations is that when an incident occurs, the SecOps manager has a major role in defending against the breach, but they’re really there to tell the CISO where the threat is coming from. They are not there to lead the band. They’re only there for a very specific focus. So, I see this convergence of inexperienced people and cyber criminals, and we’ll see what the future brings, but I do hope that when this evolution comes to fruition the CISO will be put into a much better position, a much better light, with the executive team.
Kornik: You mentioned those SEC decisions and regulations. I don’t know if you want to expand on that at all or talk more specifically about where CISOs find themselves between a rock and a hard place right now.
Bergamo: Yes. There are really three types of CISOs. There’s the very inexperienced one who’s just coming into the role, not really sure what they’re doing. Again, it’s not a dig. They have to learn, and they’re going to learn the hard way. There’s the middle-of-the-road, as I call it. They’re more experienced than the inexperienced ones, but they’re still trying to find their spot in the position. Then there are the experts, who are exiting. A lot of CISOs on the inexperienced and middle-of-the-road side believe that our jobs are really about the technology, and that is so far from the truth. The experienced ones know that we follow something called the triad: confidentiality, integrity, and availability. We accomplish the triad through people, process, and technology. People obviously are employees, process is security frameworks and controls, and then there’s the tech. Once you get the tech up and running and optimized for efficiency so it’s giving you the data that you need in order to defend your companies, the tech is the easy part. It changes all the time, but that’s the easy part. It’s the compliance frameworks that take the majority of our time, and if you ask any experienced CISO, they’ll tell you, once the tech is installed and optimized, we spend the majority of our time on compliance and data privacy. The newbies, as I refer to them, sometimes we have to explain this to them and explain why compliance and data privacy are so important.
So, it’s a little bit of a mess out there right now, and then you throw in the personal liability. Let me just expand upon that for a moment. We had two well-known CISOs at two very public companies—I won’t mention their names—charged with felonies through the SEC, which led to the cybersecurity disclosure rule being implemented after the first one. The second one fell under that disclosure rule. That sent shockwaves. Not just waves, but shockwaves through the CISO industry, and we’re just sitting here saying to ourselves, “Holy cow.” A lot of us don’t have a lot of support because everybody thinks cyber is our problem and not theirs. It takes a village to defend a company against cyber attackers. Now, we’re being held personally responsible, with felony charges and potential jail time, so we’re all saying to ourselves, “I don’t think so,” which is why there’s a huge influx of us getting out of the role.
Kornik: Right. So, let’s talk a little bit more about the strategic role of the CISO or where that falls in the organization. Let’s talk specifically about data governance and protecting privacy. How do the companies that do it best do it? In your experience, do they have chief privacy officers or chief data officers? Is there a playbook that business leaders should be following to really make sure that they’re getting this right?
Bergamo: I wish there was a playbook, but there isn’t. I think that’s half of the battle, because everyone has a cellphone or a computer, and everyone feels that they know technology and they know data. But this is a very specialized field. The CIO and CISO roles—I’ll just say tech and security—are very specialized. I’ve been fortunate enough to have held both roles, and yes, everyone always has an opinion on how we can do our jobs better, but this is our craft, and we have all kinds of different education and certifications. There’s no one thing that anyone can point to and no one game plan. But good C-level tech and security executives are well-rounded. We study. We research. We get involved and we understand how to protect data. Now that AI is coming out, we have a whole new set of technologies that we need to bring into our programs. So, it’s about staying involved and understanding what we need to do to protect the data.
Kornik: When you’re in those conversations with the C-suite, the boards, and the business leaders, do you think they understand not just the compliance and governance aspects of this, but the business importance of data privacy and what it ultimately means for building customer trust and for the bottom line?
Bergamo: I do think that everyone understands that data matters and that data is important to running the business. I mean, every business needs information in order to make good decisions and to process customer requests, B2B requests, and employee requests. It’s all data driven. So, is it given enough limelight? That depends on the size of the company. I do think that the executives and the boards understand the importance of data and how to use it, but I think where they fall short is in investing in a strategy for securing the information and giving the technology and engineering teams what they need to make sure that data is sound.
Kornik: Right, and that’s an interesting perspective I would say from the company side. How confident are you that we’ll get this right for the customer, the client, the consumer? Are you optimistic that they’ll be better off over the next several years?
Bergamo: I’m always optimistic. The sun’s always shining in my world, right? Data is the stronghold of every company, from the most minute piece of information all the way to executive reporting. Everybody is processing information. So, with some of the technologies that are coming out, either through public cloud vendors or through artificial intelligence, I think the data and the ability to gather data are just going to get better in the future.
Kornik: Well, Sue, you said you’re an optimist, so I’m going to leave you with this final question, where I ask you to look out a few years. Maybe the end of the decade, let’s say 2030. Where do you think we’ll be in terms of privacy, data privacy? Do you think 2030 will be a better place than where we are currently?
Bergamo: Well, we can only get better with time, right? Kind of like a fine wine. So, I’m optimistic that material breaches will continue to happen fast and furiously, and that finally our business brothers and sisters will wake up and say, “Oh, I need to be responsible for security too. I need to help the CISO or the CIO, or whoever, with my data problems. Maybe I should get more involved.” So, I am optimistic that eventually the tables will turn. I think it’s going to take a little bit more time, but 2030? Sure, I’ll go with that.
Kornik: Great. Well, thanks so much for the time today, Sue, and the insights. I really enjoyed our conversation.
Bergamo: Thank you, Joe. I appreciate you having me.
Kornik: And thank you for watching the VISION by Protiviti interview. On behalf of Sue Bergamo, I’m Joe Kornik. We’ll see you next time.
The Economist’s Dexter Thillien: Privacy in peril amid digital data explosion
In this VISION by Protiviti interview, Joe Kornik, Editor-in-Chief of VISION by Protiviti, sits down with The Economist’s Dexter Thillien. Dexter is the lead analyst for technology and data at The Economist Intelligence Unit, the research arm of The Economist. Dexter is the lead author of numerous reports on AI, cybersecurity, data privacy, technology and regulation, as well as a frequent speaker on the intersection of the digital economy and global business. Here, he discusses how privacy is in peril in the digital economy, the impact of emerging technologies on data protection, regulation vs. innovation, and how the private sector will play a significant role in data privacy in the future.
In this interview:
1:11 – Biggest privacy issues for consumers and companies
3:18 – Emerging tech’s effects on privacy
5:42 – What type of regulation is needed?
7:49 – Who’s taking this seriously?
11:01 – Privacy in 2030
The Economist’s Dexter Thillien: Privacy in peril amid digital data explosion
Joe Kornik: Welcome to the VISION by Protiviti Interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and I’m happy to be joined by The Economist’s Dexter Thillien. Dexter is the lead analyst for technology and data at The Economist Intelligence Unit, the research arm of The Economist. Dexter is the lead author of numerous reports on AI, cyber security, data privacy, technology, and regulation, as well as a frequent speaker on the intersection of the digital economy and global business. We spoke to Dexter last year about privacy in the metaverse and he has been kind enough to come back. Dexter, thank you so much for joining me today.
Dexter Thillien: Great to be here.
Kornik: Dexter, in a digital economy, I don’t think there’s any question that data privacy is now, and probably will continue to be, one of the biggest issues facing consumers and companies for the rest of this decade and probably much further into the future. What do you see as the biggest threats in terms of data privacy for both consumers and companies?
Thillien: Yes, thank you for the question. First of all, you’re right to differentiate between companies and consumers, because I think they will have different issues to deal with. Companies are holding more and more personal data as part of their business processes, and the key is to make sure that data is secured. That means putting the right governance system in place internally so that, for instance, only the right people can access sensitive data. It also means making sure that the company’s own data, and any data from suppliers or consumers, is being dealt with properly. That’s going to become more and more of an issue because there’s going to be more and more data to deal with. For some companies, it might even become a competitive advantage. We’ve seen Apple trying to do that with its privacy policy compared to its competitors.
For consumers, it’s a different question and a different range of issues. For consumers, it’s important to keep data safe and secure, but that is also becoming increasingly difficult because we’re giving away much more personal data at any one time. Giving away personal data is no longer just about filling in a form; it happens any time we see or do something online, and also any time we’re on the move, because most of us have a smartphone, and many of the apps on our smartphones collect a huge amount of data. We may become a bit blasé about all the data we’re giving away, but it’s also very difficult to operate, to use the internet and go online, if we’re trying to minimize the amount of data we give away. Meta, as an example, has tried to build a more private platform by charging users and making privacy a premium feature, but so far this has been refused by the European Commission in the European Union. The issue is that advertising remains the cornerstone of Meta’s business, which means it is free as long as we give away much of our personal data. With pictures and videos on top of text, and facial recognition entering the fray, we’re starting to give away data that is even more unique and much more difficult to replace if it is ever hacked.
Kornik: I’ve been reading a lot of The EIU’s position papers on AI and really all emerging technologies, which include quantum, spatial, and biometric computing, and how those will ultimately impact data and privacy. How do you see AI and those other emerging technologies impacting privacy going forward?
Thillien: I think they will all have an impact. Starting with artificial intelligence: artificial intelligence is all about data, so much so that some are arguing we may run out of web data to train on as early as 2026, and much of the personal data we have given away so far is part of that. One of the major issues with artificial intelligence is the output of a model, as it may give away some personal data as part of an answer because that personal data was part of the input. Sometimes it’s very, very difficult to understand why it does that. We have seen some cases in Europe where this has happened, and privacy regulators are keeping track. There is also a consent issue, which is why Meta says it will not release its most advanced Llama model in Europe: the company is not entirely sure it can comply with the GDPR in terms of using pictures, videos, and content other than text.
In terms of quantum computing, we saw in August the National Institute of Standards and Technology in the U.S. release its first post-quantum encryption standards, and this is over fears that a quantum computer might break the current encryption standards sooner rather than later. It’s still not very clear when that will happen, and we at The EIU do not think it’s going to happen any time soon, at least in the short to medium term, because many, many technical hurdles remain. But it’s better to be safe, especially as some encrypted data that can be collected now will still be very, very valuable when and if a quantum computer becomes operational.
When we’re talking about biometric data and biometric computing, it raises the question of what type of data we might be sharing. While it is possible to change an email address or even your financial or other details, it is impossible to change your fingerprint or your DNA. We may not always be able to identify everything we share, but it is something to consider if we don’t want to give that data away and want to make sure it does not fall into the wrong hands.
Kornik: Right. Thanks, Dexter. You mentioned GDPR, so let me just follow up on that. Globally, Europe has invested significantly in data protection rules with GDPR. Japan has had very strict privacy laws in place as well. Meanwhile, India and China, not so much. The U.S. is somewhere in the middle but has no federal regulation; a lot of the states have sort of taken the lead on that front. Where is the sweet spot? Who is getting this right? Does too much regulation stifle innovation, or does not enough create chaos? Where do you stand on regulation?
Thillien: I think finding the sweet spot between regulation and innovation is what every policymaker, every regulator, is trying to do. I think it’s a problem, or an issue, for all tech regulation and not just data privacy. Things go wrong when regulation becomes more of a box-ticking exercise, and we have seen that with cookies in Europe: it has had no real impact on privacy because, for instance, active consent is not really being given. I do think we need some level of regulation, because without it any protection will be lacking, and there need to be independent rules in place.
I think for me there are two main things to consider when we’re talking about regulation. The first is fragmentation. Many, many businesses will be global in nature, whereas regulation very often is not. This means a company has to decide what to do: whether to follow each jurisdiction as required, whether there will be some overlap, or whether to go with the strictest rule and have only one set of rules to comply with globally. Some companies have already done that with the GDPR.
The second, and probably the most important one, is enforcement. I think rules are very nice, but without the right enforcement they can be a bit pointless. We’ve seen, again with the GDPR, that it can take quite a long time before any rulings or judgments come, because it can be very, very tough for regulators to make the case. It’s very important to know at what level you can enforce before you start thinking of regulation.
Kornik: Barely a week goes by without hearing about another significant breach, right? I just wonder if consumers specifically are becoming desensitized to these breaches. Do we suffer from low expectations when it comes to our own privacy?
Thillien: I don’t know if we’re desensitized, but I think the issue is that the impact is not always very visible to the average user. We often hear about an attack where millions, if not billions, of entries have been hacked, but the impact of that attack is very difficult to gauge, because in most cases it means we’re going to receive a bit more spam email. It becomes much more personal when there are financial repercussions to an attack, meaning we can be scammed, or our payment details are now available and people are buying things online with that money. I also want to make the point that companies can try to do as much as they can, and many, many do, but the attacker, in this case the hacker, is much more favored than the defender, because the attacker only needs to get it right once while the defender has to get it right all the time. Now, as we’re spending more and more time online, the attack surface is only increasing, which means that those breaches will keep happening.
Kornik: Right. We’ve seen big companies, very big tech companies even, playing sort of fast and loose with data and privacy. Even children’s privacy, I think, has been in the news recently. Can we trust the private sector? I mean, we were talking about regulations, so I’m just curious: can we trust the private sector to do what’s right in terms of privacy? Are boards and the C-suite taking this issue seriously enough, do you think?
Thillien: Some are, but I still don’t think that self-regulation is the answer. While I mentioned the GDPR might not be as well enforced as it should be, it still offers an EU citizen much more protection than many other jurisdictions across the world. You mentioned that the U.S. still doesn’t have federal rules; it is trying to remedy that in terms of children, but that needs to get passed through Congress, which is very difficult as well. The U.S. also has a much bigger, what I would call, third-party market, where the data you might have given happily to a provider or a retailer is then sold on to a third party without you knowing about it, and perhaps you wouldn’t want that particular third-party company to have access to your data. Companies do have to take this seriously because it can impact their reputation if it is proven they haven’t done as much as they could have, should they be hacked. With greater penetration of technology in the workplace and the move toward digital information, it can also become a phenomenal advantage for business continuity. I think the example of the CrowdStrike incident in July 2024 has shown how reliant we’ve become on digital technology and how important it is to protect it. Whether it could become a competitive advantage is very, very difficult to say, because privacy is one of those areas where doing things right so that nothing happens has a limited visible impact, but not doing things right when something does happen could have a major negative impact. So, the upside is not very apparent, but you do need to do as much as possible to make sure that nothing actually happens.
Kornik: Dexter, I appreciate your time today. I really enjoyed our conversation. I just have one more question and it’s forward looking. I’m wondering if you could take me out to the end of the decade, let’s say 2030, and tell me what you see around data and privacy. I’m wondering how we’ll view privacy in 2030.
Thillien: I think for me it’s an evolving concept, because the online world has become so prevalent, but the right to privacy is also a fundamental human right, whether it is online or offline. This is part of Article 12 of the Universal Declaration of Human Rights, which reads, and I’m going to quote it: “No one should be subjected to arbitrary interference with their privacy, family, home or correspondence, nor to attacks upon their honor or reputation. Everybody has the right to the protection of the law against such interference or attacks.” I think this is the case both in the online world and in the offline world. I’m going to give you a personal example, maybe. I graduated from school in the very late 20th century, and I don’t think I used the internet at all for any of my coursework during high school. If you consider, for instance, that the iPhone launched in 2007, so 17 years ago, and Facebook in 2004, so 20 years ago, it shows that many, many younger people are now what we call fully digital native and are going to have maybe a different perspective. What I find interesting are some stories I saw over the last few months and years where kids, where younger people, were telling their parents not to upload pictures of them online. It made me think about the concept of what I might call the online privacy native, where maybe the younger generation is less keen to share publicly compared to the previous generation. I think we’ll have to wait and see what will actually happen going forward.
Kornik: Yes, that’s interesting. I hadn’t really thought about that, but you’re right. That generation does seem more conscientious about sharing photos.
Thillien: I think they might have a different perspective when it comes to their online persona and their offline self and what they share online. So, it might not be an overall vision of privacy more broadly, but a perspective on what they’re doing online, because they’re fully digital native and they are online a lot of the time. Everybody is going to have a smartphone. That’s not going to change. We’re still going to be using the internet. We’re still going to share some data. But there is probably going to be, from the younger generation, which has only known that world, a different perspective on what they’re willing to share, especially what they’re not willing to share, and what they might get in return for what they’re sharing. I think it’s very early days and we’ll have to wait and see.
Kornik: Right. Very interesting. Thanks, Dexter, for that perspective and your insights. I really enjoyed our conversation today.
Thillien: Thank you very much for having me.
Kornik: Thank you for joining the VISION by Protiviti interview. On behalf of Dexter Thillien, I’m Joe Kornik. We’ll see you next time.