Data by default: How AI radically changes the data privacy landscape
“Data is a precious thing and will last longer than the systems themselves.”
- Tim Berners-Lee, widely credited as the inventor of the World Wide Web
It’s been almost twenty years since Berners-Lee uttered those words, and they are truer, and perhaps a little more ominous, today than they were then. The advent of artificial intelligence (AI) makes data even more valuable, and thus raises the issues of data privacy and data ownership to a new level of importance, complexity and controversy.
AI can be fed people’s private data from many sources — not just online and offline databases but also content that users upload to the Web, sensor data from the Internet of Things, and even the digital footprint users leave behind as they use their digital devices. Protecting individuals’ rights over what information is collected, where it is stored, who can use it and for what purpose has always been difficult. AI will make it exceedingly complex.
Data and digital footprints
We entered the era of “data collection by default” some time ago, argue Stanford University’s Jennifer King and Caroline Meinhardt in a recent comprehensive analysis. There are two potential ways to address this issue. The first, mostly unworkable at this point, is to move from an opt-out to an opt-in system. The problem there is not just regulating and ensuring compliance, but also deciding what to do about information collected in the past. And companies can still encourage users to opt in through special offers and other incentives, then use the data for purposes that were never anticipated.
A second solution is to prevent third parties from collecting activity data in the first place, for example by enabling the opt-out option when we download an app to our smartphones. But this applies only to activity data, not to the data users supply while searching or transacting once the app is installed.
Moreover, efforts to control information at the point of collection are undermined by Web crawlers and Web scrapers, which can automatically locate, classify, download and extract vast amounts of data, images and other material from the internet writ large. In principle, they can access only public, readily viewable material. But in practice, Web crawlers can bypass paywalls by disguising themselves as users, and they can draw on pirated content stored somewhere other than its original location.
In addition, data are often misplaced, breached, leaked or otherwise mishandled, making them an easy target for AI Web crawlers. Thus, the issue goes well beyond the traditional approaches of offering assurances about confidentiality or non-disclosure and establishing opt-in or opt-out mechanisms.
The AI data supply chain
Given the difficulties involved in addressing privacy issues at the point of data collection, options at a later stage of the data supply chain must be considered. The broadest measure would be to require companies to disclose basic information about the data they feed into AI, indicating its sources, scope and scale. This would enable, for example, checking for copyright infringement.
However, no such requirement or regulation exists, and very few companies do it voluntarily. Some companies are now offering users an opt-out option so that their data and images are not used for AI training. Amazon’s AWS, Google’s Gemini and OpenAI offer such options, but they are often cumbersome to activate and not entirely foolproof.
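To illustrate how one such opt-out works at the website level, the lines below show a minimal robots.txt file that asks AI training crawlers to stay away. GPTBot and Google-Extended are the user-agent tokens OpenAI and Google publicly document for this purpose; note that honoring robots.txt is voluntary on the crawler’s part, which is precisely the weakness described above.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /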
The supply chain ends with outputs, which in the case of AI include applications and predictions. Individuals need to be protected if their data are unwittingly disclosed at that point. At the societal level, the thorny problem with using training data from the Web is that, even if all permissions and legal requirements were met, there is the issue of “bias in, bias out”: data on the Web are not representative of society or of the world, because some users, companies and countries are more prone to uploading material or leaving behind a digital footprint. That biased body of data then becomes the raw material for AI applications.
AI never forgets…
The truth is that individuals, unfortunately, have very few options at their disposal to prevent the misuse or unauthorized use of their data. It is also exceedingly difficult to compel companies to delete data at the user’s request, even when the law mandates it. More alarmingly, there is no good way to make an AI application “forget” or “unlearn” what it has unlawfully learned. And the more time passes without corrective action, the harder and costlier correction becomes.
As is often the case with emerging technologies, regulation is lagging. And, not surprisingly, there is much debate as to the amount of regulatory oversight that is necessary, warranted or desirable. Adding to the complexity, digital data are global while regulation is local.
According to the United Nations, 137 of 194 countries have passed data protection and privacy legislation with varying levels of safeguards. The Web is a global medium, but it is subject to a mosaic of regulations at the supranational (the European Union), national and subnational levels (e.g., state by state in the United States). Most importantly, regulations aimed at the Web or AI sometimes collide with those in other areas, such as national security. The European Union has complained about American intelligence agencies’ use of EU citizens’ and residents’ private data without their approval, an issue complicated by the fact that large and small American digital platforms routinely send user data to the U.S. The U.S.-EU Data Privacy Framework, signed in July 2023, regulates the circumstances under which the U.S. can gather information and how European citizens can appeal.
Unintended consequences
From the standpoint of the companies managing digital platforms, the regulatory context could not be more complex. They need to comply with the regulations of the country where they are based but also with the laws of the countries in which they collect data from their users. In addition, cross-border data flows may themselves be regulated. This represents a major obstacle for startups aiming at international growth, while offering a built-in advantage to more established companies that have the resources either to comply or to deal with the potential litigation if they do not.
The future of personal data protection and privacy remains uncertain. And yet, companies need to make operational decisions today that may be legally questionable in the future. Companies, especially those engaged in large-scale AI efforts, will continue to amass data and to use it to advance their goals, even at the risk of being found non-compliant.
Another unintended consequence stems from applying new regulations both to tech companies whose core business involves data collection and manipulation, especially those engaged in AI, and to companies in other industries, which gather and process data in support of selling other products or services. On the one hand, the concern is that complying with regulations designed to prevent the worst potential harms might constrain the latter companies’ ability to compete. On the other hand, many companies whose core business is not AI are also developing, or at least using, AI applications. Thus, the default for politicians and regulators is to make all companies comply.
Requirements, standards and scorecards
It is not yet clear whether different jurisdictions around the world will treat all companies the same way or impose less onerous requirements on small firms and on data not deemed “sensitive” (sensitive data includes, but is not limited to, financial, health, biometric and genetic information), as in the proposed American Privacy Rights Act of 2024.
Board directors and business leaders need to stay hyper-informed in a rapidly evolving landscape. Many legislative proposals are on the table, but there is no comprehensive federal regulation in the U.S. yet, let alone a global set of standards other than the decades-old principles of information minimization and information specificity.
Eventually, companies will be asked to create data privacy scorecards, so they should keep track of and meticulously document all practices and procedures. In the meantime, they need to exercise sound privacy practices to avoid bad publicity, public-relations problems and a loss of customer trust over data mismanagement and hacking.
CPO or no? Protiviti’s Tom Moore on the evolution of the privacy role and its uncertain future
The Information Age created an information explosion that shows no signs of slowing down. In fact, the availability of information, including digital data, is growing exponentially. With that flood of new information come concerns in the C-suite about how to manage it safely and securely.
While some companies expanded the role of their existing technology leaders to deal with this challenge, others opted to expand their executive teams, and the role of the chief privacy officer (CPO) was born. According to the International Association of Privacy Professionals (IAPP), Jennifer Barrett Glasgow of Acxiom Corporation was the first CPO, beginning her oversight of privacy at Acxiom in 1991. Her job description then was certainly radically different from the role of today’s CPO, which continues to evolve as quickly as new information becomes available and privacy laws and regulations proliferate. Many privacy leaders have taken on new titles and enhanced responsibilities in the areas of AI, trust and ethics, and data governance.
Regulation and legislation
Back in 1991, very few privacy regulations existed globally. Since then, U.S. states have stepped up to fill the regulatory void left by the federal government. More than two-thirds have addressed privacy regulation: 18 states have laws on the books, eight have active bills pending, and 10 more have bills working their way through their legislatures. Meanwhile, in April 2024, U.S. lawmakers announced the American Privacy Rights Act, bipartisan draft legislation that seeks to create a national standard for data privacy and security, addressing the unregulated sale of online data and aiming to ensure individuals’ right to control their personal information. Although it has little chance of passing before the presidential election in November, lawmakers are optimistic it could serve as a framework for legislation in 2025.
Globally, there has also been a dramatic increase in the number of privacy regulations. The European Union’s General Data Protection Regulation (GDPR), in force since 2018, created one of the largest shifts in how organizations manage information. Every year, more countries, including Japan, Singapore and South Korea, introduce new privacy regulations. According to the IAPP, as of March 2024, 70% of nations and 79% of the world’s population are covered by some form of data privacy law.
As privacy regulations continue to expand rapidly, business leaders continue to question who owns the responsibility for ensuring their organizations’ data practices are compliant and who should be responsible for meeting any new compliance requirements. Legal? Technology? Compliance? There may not be one correct answer, and truth be told, privacy is a shared responsibility across the organization.
Is the CPO role in decline?
In an IAPP survey of privacy professionals conducted last year, 78% said their organization’s most senior privacy leader sat within the five highest levels of the organization, and 21% said that leader sat within the two highest levels. The data also showed that most of those surveyed reported to the general counsel (32%), the chief compliance officer (16%) or directly to the CEO (15%).
The annual survey’s biggest one-year shift was a decline in direct reporting to the CEO and a rise in reporting to the chief compliance officer (CCO). One possibility: This shift may reflect a decline in the stature of the CPO role within organizations and may signal that privacy, like many other regulated areas, requires an integrated approach. Real-life indicators also point to the decreasing importance of the CPO. Anecdotally, there are plenty of instances in which a CPO leaves the organization, or the position is eliminated in a restructuring, and it is not filled.
This is exactly what happened earlier this year when Google eliminated its CPO role in a corporate restructuring and opted not to fill it. There are other examples in large organizations where the CPO role either remains vacant or isn’t even on the org chart any longer. Is this because of a lack of expertise in the field, an inadequate internal bench, or a reprioritization of efforts and focus within the enterprise?
Or is it, as is the case at Google, that the varied responsibilities for data privacy have outgrown the role of a single CPO? Whatever the answer, it’s safe to say that when Google, a company estimated to hold between 10 and 15 exabytes of data—or the storage power of about 30 million PCs—makes a potentially game-changing decision regarding privacy, it’s probably a good idea for the rest of us to take note.
Another reason the CPO role may be in decline is the lack of measurable KPIs, which makes benchmarking difficult for privacy professionals. The status quo is that information and data should be protected, so unless an information breach occurs, a regulatory investigation is launched or a fine is levied, some companies may have a hard time evidencing that the CPO role has had a significant and direct impact on customer sentiment, the business and its bottom line.
Of course, good CPOs preserve that status quo every day and, in this sense, may even be victims of their own success. And if responsibility for privacy is ultimately dispersed across multiple roles within the organization, pitfalls could begin to emerge. For instance, a team that is already resource-constrained could end up with increased privacy responsibilities and inadvertently lose its focus on privacy—a risky proposition.
What are the risks of losing focus?
The risk of a diminished CPO role is losing a dedicated function and leader hyper-focused on privacy. When teams pick up privacy as a second or third priority, important tasks and obligations can get missed: Regulations may not be reviewed fully, legislative efforts may go unmonitored for anticipated changes, and dealing with enforcement becomes even more challenging. This, of course, directly affects operations and customer perception.
Privacy should not be a reactive function. Customers want to do business with companies they trust, and protecting an individual’s privacy builds that trust. Additionally, fines levied against companies found mishandling customer data can have a significant economic and reputational impact on the business. Though COVID-19 may have slowed global regulators’ enforcement efforts, they are now making up for lost time with increased legislative authority and automated tools. And the repercussions for noncompliance are making headlines with fines and consent decrees.
It’s also important to consider the effect on the career paths and overall morale of the privacy team. When the CPO is deprioritized or pushed down the org chart, it becomes more difficult to attract top talent, and when the privacy pipeline dries up, it’s tough to turn on again. Moreover, eliminating the role altogether leads privacy team members within the organization to seek other disciplines or external opportunities to advance their careers.
Not prioritizing the CPO also creates management conundrums. Without a CPO, where does privacy direction originate? Who will listen to the voice of the customer on privacy concerns and respond in a consistent, centralized manner? How does the organization create internal privacy awareness? The reality is that when the CPO is displaced or deprioritized within the organization, so is privacy itself. With the ever-changing and expanding legislative landscape and the sheer amount of data at our disposal, one would expect the role’s strategic importance to become more apparent, ingrained and elevated within organizations in the coming years.
Building customer trust
Organizations that do employ and value the CPO role should expect continued collaboration across the entire enterprise. Much as internal audit and compliance functions expanded and gained visibility following new regulations, privacy awareness needs to be well communicated and understood across the entire organization. Activities like completing the Privacy or Data Privacy Impact Assessments required under GDPR and some U.S. state laws can happen only if the CPO and privacy team are well versed in the legislation.
The CPO needs to have a stake in the product change management and lifecycle process and work closely with the data governance teams to understand what data is collected, how it’s processed and how it’s protected. The CPO today has numerous vectors of responsibility, including state, federal and global law enforcement; leadership and board attention; internal business models, products and services; technology advancements; customer expectations; and competitor brand and product positioning. Though privacy can be a shared responsibility across the organization, the CPO needs to be the focal point across the enterprise and be accountable for building customer trust through the company’s data protection and privacy practices.
Whether your organization has a chief privacy officer, is looking to hire one, or has opted to split the role across several functions of the business, one thing remains certain: data privacy is not optional. More than ever, customers are demanding accountability from organizations about how their data is used, processed, shared and stored. It’s imperative that organizations invest in building a privacy program run by strong leaders who can navigate an evolving data privacy landscape. The risk of not doing so is eroding the company brand and losing customer trust.
NSW Health Pathology CEO: Patient empowerment, AI, Big Data are trends to watch in healthcare
In this VISION by Protiviti podcast, we welcome Vanessa Janissen, CEO of New South Wales Health Pathology, where she leads Australia’s largest public pathology and forensic science service. Employing more than 5,000 people, NSW Health Pathology performs over 100,000 clinical and scientific investigations each day across 60 laboratories and more than 150 collection centers. Janissen is interviewed by Ruby Chen, a Director with Protiviti Australia.
In this interview:
1:05 – The digital agenda: Balancing patient privacy with data security
5:01 – The challenges of an aging demographic
8:05 – Strategies for employee wellbeing
13:10 – Pathology’s role in cancer care
18:38 – Emerging trends: Aging, AI, data analytics
Joe Kornik: Welcome to the VISION by Protiviti podcast. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-Suite and executive boardrooms worldwide. Today we’re exploring the future of government, and we welcome Vanessa Janissen, CEO of New South Wales Health Pathology, where she leads Australia’s largest public pathology and forensic science service. Employing more than 5,000 people, New South Wales Health Pathology performs over 100,000 clinical and scientific investigations each day, across 60 laboratories and 150-plus collection centers.
Sitting down with Janissen today is my colleague, Ruby Chen, a Director with Protiviti Australia. Ruby, I’ll turn it over to you to begin.
Ruby Chen: Great. Thank you so much for the introduction, Joe. Today, we’re so privileged to have Vanessa from New South Wales Health Pathology here on our podcast. Thank you so much for joining us today, Vanessa.
Vanessa Janissen: Thanks for having me, Ruby.
Chen: Let’s start off with a series of questions. The first one is around impacts on the community. As the CEO of New South Wales Health Pathology, I’m sure there are many things that get you excited about the organization, one of which I believe is the major in-flight digital transformation agenda. The organization is looking to deliver a single digital patient record management system so that healthcare teams can access patient data anywhere in New South Wales. This will allow laboratories to connect and communicate for the first time in New South Wales’s history.
With New South Wales Health Pathology being the largest public pathology provider, you are custodians of a large store of patient data. So, many people may be wondering, how do you then balance patient privacy, data security, and IT resilience controls whilst ensuring comprehensive access to patient information for healthcare teams?
Janissen: It’s a great question, Ruby. It’s an exciting time in digital healthcare. It’s transforming the way healthcare is delivered globally, and fundamentally shifting how we consume health services as individuals. As you said, at New South Wales Health Pathology, we’re working to transform our services and models of care to take advantage of new digital technology. We’re delivering patient services where they’re needed and connecting our 65 laboratories and 5,000 medical and scientific experts to deliver care in new ways that we haven’t been able to do before.
At the moment, we’re a very geography-based and constrained service. What I mean by that is pathology really relies on either having the expertise on each site of our 65 locations, or transporting specimens or samples across sites. The physicality of that way of working really has some logistical limitations, particularly if you live in regional and rural communities where you don’t always have access on site to that expertise, and you may have to wait longer for important clinical answers.
So, investing in digital is a real game changer for us in that space. Our investment in a single laboratory system will connect and integrate the operations of our 65 laboratories. Add to that the investment in technology that digitizes specimens and samples at the point of collection and allows them to be reported on by specialist expertise across the state, and it will really mean we can leverage our enormous capability in New South Wales to provide better answers, better patient care and, hopefully, better outcomes for patients.
In terms of the risks relating to data, undoubtedly, as we become more digitally enabled and process-reliant, and collect, as you said, a massive amount of data, we have to be aware of our risks and duties of accountability to protect our patients’ information. This is not a new problem or a new accountability for us. We’ve always done that. We’ve always had important and sensitive information that we need to protect, but certainly the scale and the impact of data breaches, data loss and system outages is fundamentally different now.
I don’t think that means we shy away from that direction, because the upside of accessibility and integration and better care is phenomenal. What we have to do is really apply our clinical and corporate governance expertise in that new environment: How do we make sure we have the right protections and controls in place, and that we test their efficacy and our ability to respond and recover?
Chen: Perfect, yes. Thank you so much for sharing that. Particularly across different industries in recent years, having sufficient corporate governance and oversight around cybersecurity has become such an important topic. So, our next question is around overall challenges, particularly with changes in demographics.
According to the Australian Bureau of Statistics, Australia is facing an aging population due to increasing life expectancy and declining fertility rates. It’s estimated that by 2026, which is not long from now, more than 22% of Australians will be over 65 years of age. Would you mind sharing what strategies are being put in place to ensure pathology services and diagnostics services are tailored for this growing demographic?
Janissen: Certainly, as we have a larger older population in the community, New South Wales Health Pathology will need to scale the capacity of our services to hospitals to meet that additional demand and complexity of care, and we’ll do that in a number of ways: utilizing automation, robotics and digital, as we’ve spoken about, and I think AI has an important part to play in ensuring we can continue to provide those levels of service to the community. But we also know that people fundamentally don’t like being in hospitals and will prefer to receive care in other settings, and certainly our vulnerable, elderly patients often have poorer outcomes in hospitals, with increased rates of delirium and falls.
So, we need to think about what care looks like in people’s homes, whether that be their traditional family home or a residential aged care home. We’ve seen the advent of virtual hospitals, and I think that’s part of the answer. Virtual hospitals really came to the fore through necessity in COVID and lockdowns, but certainly they’re now going mainstream, where people can receive treatment in their home while being virtually monitored from remote locations.
I think pathology is an integral part of that virtual hospital strategy, not just in the way we participate in collecting samples for diagnostic purposes to help treat patients, but also in how we might use technology like point-of-care devices, which are sort of a lab in your hand when we’re out in people’s homes, and in how we help monitor patients remotely. We know that the functionality of wearables and biometric devices will certainly help keep people safe in those locations. So, we’re really looking at our investment and how we partner with the local health districts and the hospitals to enable that virtual setting.
Chen: Yes, that actually sounds really interesting and it gives me a lot of hope for my future when I get to that age group.
Janissen: Absolutely. We’ve got to get it right before we get there, Ruby. [Laughter]
Chen: Exactly. Thank you for that. Now moving on to the next question, this is around employee well-being, which again I think is quite a topical area, particularly in the last few years post-COVID. It has been relatively tough for some of our healthcare workers across the state.
I understand that New South Wales Health Pathology is investing in new technologies such as AI and robotics to reduce the manual burden on pathologists and scientists, referring back to what you had mentioned earlier but for slightly different uses. You have also launched a revised people strategy to ensure staff well-being continues to be looked after. Would you be able to shed more light on these initiatives and others to improve employee well-being?
Janissen: Our team worked incredibly hard during COVID to protect the community. We did over 15 million tests in the peak two years, an enormous amount of work, working in ways we’ve never had to work before. With the ambitious change agenda I mentioned in our first question, it’s really important that we think about how we invest in our people alongside that process.
People are at the heart of what we do, and in every discussion, survey and interaction, it’s really clear to me how proud we are to serve the New South Wales community, and that people gain a tremendous amount of professional and personal satisfaction coming to work and being part of a team that has such a positive impact. However, feedback also tells me that fatigue is real and prevalent post-COVID, and the challenging daily realities are impacting people’s experiences, leaving people feeling tired, sometimes undervalued, and overwhelmed when change is happening.
That’s why we’ve really invested a lot of time in developing our people strategy. It’s our map for achieving the culture we all seek, investing in the things our team told us were really important. We spent six months consulting with our staff on what makes them feel welcomed and valued, and that’s the core of it. Our vision is that every person who works for New South Wales Health Pathology feels they belong: a place where everyone feels safe, valued and supported to do their roles and be the best they can be.
To achieve that, our people strategy really focuses on six priority outcomes: that our teams feel safe and valued; that our leaders are considerate, accessible and authentic; that people’s work is meaningful and attractive; that we grow, develop, and help people achieve their goals; that we can help them balance work and other important aspects of their life; and we represent the diverse communities that we serve.
We’re really going to focus heavily on the first two: people feeling safe and valued, and our leaders consistently supporting them. They’re the top two priorities for our investment, because that’s what our team told us was most important to them. The sorts of things we’ll be working on are how we manage sustainable workloads, making sure people have access to the right level of training, that we have good rostering practices, and that we have new technology that helps free them up to do things more easily.
We’ve commenced a range of psychological safety, well-being and trauma-informed practice development for our people leaders and their teams. We’re building on the success of our recruitment model and investing in other initiatives that will attract talent and make it quicker and easier for people to join our organization. We’re also recognizing the important relationship people have with their managers and investing in a whole range of leadership training programs for our clinical leaders, our senior operations team and medical frontline leaders. And, of course, it’s important that we advocate for modern awards that recognize the value of what people do in their jobs.
Chen: That sounds like a very structured approach to the people strategy and something that I probably don’t hear as often.
Janissen: Yes. I mean, you’ve got to have a plan and you’ve got to have a focus around people. People are the backbone of our organizations, and when there’s so much going on in their personal lives and their work environments, with technology coming at them, really focusing on how we support people is absolutely critical.
Chen: All right, awesome. Let’s move on to the next question then. This is a little bit more forward-looking.
There has been investment into cancer care through precision oncology technology, and in training and retaining anatomical pathologists, in order to provide personalized cancer care services based on patients’ specific biomarkers and the genetic makeup of their tumors. Where do you see New South Wales Health Pathology progressing in the cancer care field in the next few years?
Janissen: It’s a really important area for us to focus in on. Diagnosis and treatment of cancer is a core part of our role at New South Wales Health Pathology. Every year, we average about 450,000 tests looking for and diagnosing the types of cancer people might have, and unfortunately, it’s a growing area of demand. Last year, we saw about an 11% increase in that area, not only in the number of tests; we’re also seeing a shift in complexity. I suspect that’s partly due to changing demographics as we get older, but also potentially a result of delayed screenings and limited access to care during COVID, which is really sad.
The good news is that we continue to see better health outcomes for patients when fast and accurate diagnosis and treatment pathways can be enacted. This includes the advantages that come from precision medicine, using genomics to really understand the cancer types people have, and from personalized care, where targeted therapies are deployed for a very specific cancer. New South Wales Health is leading the way in developing both of those fields, and we’re really proud that Richard Scolyer, the Australian of the Year, is an anatomical pathologist in our organization and a perfect example of those innovations. He’s generously sharing his lived experience of the diagnosis and treatment of brain cancer, and the great outcomes that have resulted.
What does that mean we need to invest in? We’re going to focus on our cancer genomics capability, using new genome sequencing technology and the translation of research into new testing protocols to be more accurate in our diagnoses. We’re expanding our team’s capability, investing in genetic pathologists and the registrars in training, developing a very specific scientific workforce in this area and, in particular, investing in bioinformaticians, because genomics produces a massive amount of data, and it’s the interpretation of that data that we need to be really good at. We’ll also continue to partner with world-leading researchers and institutes that advance knowledge of what good cancer care and good cancer diagnosis can look like.
Chen: Yes. I was actually just about to ask you about whether or not you have partnerships and alliances with other overseas institutions. I guess in some other jurisdictions, the cancer rates are also relatively high if not higher than what we see in New South Wales. Would you mind sharing a little bit about the types of research that you’ve been doing jointly with some of the world leading institutions?
Janissen: We partner specifically with our hospitals and local health districts, and quite often they have onsite research institutes and connections into international research institutes. That local partnership is critically important because you’re focusing on local need and demand in that area, but the expertise and multidisciplinary research support is also really critical. At New South Wales Health Pathology, we can contribute capability: we have experts in areas like anatomical pathology, and we also operate a biobank, which collects and stores specimens that support research and clinical trials, to really help those researchers on the ground make those advancements.
Chen: That sounds amazing. I really look forward to seeing more research outcomes and benefits for people who are going through a cancer diagnosis and treatment. I think it sounds quite promising. I’m hoping there will be a breakthrough collectively as a lot of the leading organizations come together. Okay, great.
The last question we have here for you, Vanessa, is just a little visionary. We’ve talked about emerging trends from enhancing digital experiences, to increasing controls around IT security, Australia’s aging population, improving employee well-being, and the previous question around the key cancer initiatives. Could you share your thoughts on any other emerging trends that New South Wales Health Pathology will need to grapple with in the next five to ten years, and how else do you see the organization shaping up for the future?
Janissen: I think there are three things on my mind about the future. The first is really the ongoing need to empower our patients in their care. That’s what we hear from patients; they want more control and choice, and to activate preventative care earlier in their lives. I think the growing capability around point-of-care testing and biometric devices, which allow people to gain access to their information personally and early, helps them manage their diseases and enables them to change their lifestyle to support better outcomes. That’s something we really need to support, and New South Wales Health Pathology has a really good role to play in it because we do have the expertise around how to interpret data and results. So, I’m thinking about how we connect that expertise to patients when they’re in control of their care, in whatever setting that might be.
The second area that I think is an important emerging trend, and I say emerging but it’s here, it’s real and it’s growing really rapidly, is how we use AI and deep learning. We’re already seeing the advantages of that in image analysis and large-dataset interpretation, such as in whole genome sequencing, and it’s really helping to augment our expertise, not replace it. So, how do we maximize that support? How do we ensure our teams have the right tools and are applying those tools in a safe way to do their jobs, but also to free them up to do the parts of their jobs that matter most, which is the human-to-human contact?
The third area, again rapidly emerging, is big data analytics: being able to look at trends through big data and population insights. Through that, we can work on predicting care needs, really seeing how patients’ health is performing and where we’re starting to see deterioration, and translating that into care needs, models of care and research requirements.
So, I think those are a couple of the important trends we’re thinking about, but inevitably, what matters most is how we support our people to have the capability and the time to engage, collaborate and partner around those emerging trends, to identify the incredible opportunities and bring them into reality for the communities we serve.
Chen: Yes. I think you’ve raised some very important areas, and I guess the common theme is data and information in general. From gaining access to data early, to AI, big data analytics and deep learning, they all kind of fit within IT and information, that kind of bucket, right? So that’s really interesting to hear.
Janissen: Yes, and that really reshapes, I guess, in some ways what our workforce needs are as well. So, we’re thinking about how that plays into our people strategy in terms of the new jobs that will be coming along, and how we attract and retain expertise that we really haven’t had to have in our workforce before. So, it’s a really exciting time.
Chen: Really, really interesting. Thank you so much, Vanessa. I’ve learned so much just over the past half an hour or so about New South Wales Pathology and also just pathology in general. That’s been super helpful and very insightful. I would just like to take the opportunity to thank you so much for your time in this podcast.
Janissen: Thanks very much for having me, Ruby. It’s been a great conversation.
Chen: Yes, thank you. Okay, great. Well, Joe, we’ll hand it over back to you.
Kornik: Thanks, Ruby, and thanks, Vanessa, and thank you for listening to the VISION by Protiviti Podcast. Please rate and subscribe wherever you listen to podcasts, and be sure to visit vision.protiviti.com to view all of our latest content. Until next time, I’m Joe Kornik.
Vanessa Janissen is the CEO of NSW Health Pathology where she leads Australia’s largest public pathology and forensic science service. Employing more than 5,000 people, NSW Health Pathology performs over 100,000 clinical and scientific investigations each day across 60 laboratories and 150-plus collection centres. She’s spent over 25 years in healthcare, both in public and private settings, with a deep commitment to serving the community and the strategic pursuit of better outcomes for people. She is also passionate about growing and developing future leaders, particularly championing and supporting women in leadership positions. Previously, Vanessa held a number of leadership positions at Calvary Healthcare, most recently as the National Director, Strategy and Service Development, leading their strategic growth across hospital, aged care, community and virtual care services.

Ruby Chen is a Protiviti director with over 12 years of experience in the financial services industry, for 10 of which she worked within the Big Four banks before transitioning into consulting. She has a broad range of experience providing advisory services and secondments across all three lines of defense.

Anglicare Sydney CEO on AI, housing and critical issues facing governments
In this VISION by Protiviti interview, Simon Miller, CEO of Anglicare Sydney, a nonprofit organization that offers services for seniors, families and individuals in need, from food and housing to mental health and family care, sits down with Protiviti’s Leslie Howatt, a managing director and the firm’s Technology Consulting solution lead in Australia, to discuss Miller’s work as the CEO of an NGO, AI use cases in government, the future of public housing, and how government can more effectively work with the social sector to deliver outcomes.
In this interview:
2:43 – The five biggest challenges facing governments
4:55 – Using AI to deal with policy challenges
8:28 – Interfacing with the social sector on housing, aging, food and mental health
14:30 – A more community-minded future
Joe Kornik: Welcome to the VISION by Protiviti interview. I'm Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of government, and I'm thrilled to welcome Simon Miller, CEO of Anglicare Sydney, a nonprofit organization that offers services for seniors, families, and individuals in need, from food and housing to mental health and family care. I'm happy to turn over the interviewing today to my Protiviti colleague, Leslie Howatt, a managing director and the firm’s Technology Consulting solution lead in Australia. Leslie, thanks so much for joining me today.
Leslie Howatt: Thanks, Joe. Great to be here and I'm delighted to have Simon, you here with me today. Thank you so much for joining us.
Simon Miller: My pleasure to be here, Leslie. Thank you.
Howatt: You have such a unique background as the CEO of an NGO, a former management consultant for 15 years, with more than a decade in the New South Wales government. How do you think those experiences come together to help shape the work you currently do as the CEO of a not-for-profit providing aged care, retirement living, social housing and community services to over a hundred thousand people across Greater Sydney and the Illawarra?
Miller: It’s interesting because I’ve had the privilege of working across all three different sectors. I’ve worked across the public sector, I’ve worked across the private sector, and now I’m in the for-purpose sector. I think what that does is give me the capacity to view problems from different perspectives and to understand how different stakeholders think about things. One of the things about being the CEO of a not-for-profit is that there is so much we have to do in working with other sectors. Being able to speak their language, understand the challenges they are working on, and bring together a whole range of different skill sets, around public policy and advocacy, commercial analysis and technology, is a really useful perspective I’m able to contribute to the sector, hopefully to its development, and to some of the things we’re trying to do as an organization and indeed as a society here in Australia.
Howatt: So, given that, what do you see are the biggest challenges facing governments both in Australia or beyond right now?
Miller: Yes. Look, I think there are five really big themes that governments are grappling with. The first is clearly cost of living and sticky inflation. Governments and central banks are trying to manage economic growth along with price pressures, and they’re struggling with that. They haven’t really had to do that since the 1970s, and this is different from the 1970s as well, so it’s almost a unique challenge around cost of living. What’s that leading to? It’s leading to what I call the rise of disillusionment. There’s disillusionment amongst the population. You see this showing up in populism and extremism. You see it coming up with the challenges from social media. I think the third thing is the energy transition. There’s never been a global crisis quite like climate change, and the energy transition feeds into all of these. It sort of feeds into a hopelessness, which leads to populism, and it feeds into cost-of-living pressures through the rise of renewables, but it’s a really, really serious issue that government is trying to grapple with. They know they have to do it, but there’s a lot of pushback from the populists because it’s expensive. Then I think housing: the availability of housing, the cost of housing. You’re seeing rents at unaffordable levels across at least most of the Western world. You’re seeing dislocations in the housing market in places like China, so there’s a really significant challenge around housing right at the moment, and that’s not just in Western democracies; it’s right across the globe. I think the last one is AI and technology, and governments working out, “What do we do with artificial intelligence? How do we manage it? How do we use it?”
Howatt: So, let’s pick up on that last point. In your consulting days, you built and led an artificial intelligence and advanced analytics business across Asia-Pacific, growing it from scratch to hundreds of data scientists and engineers. How do you think government can use AI to deal with these policy challenges?
Miller: Look, I'm genuinely quite excited about AI and what it can be used for in public policy. I think there’s just an enormous number of use cases, so let me give you an example. I heard recently about people starting to use large language models to do real-time coaching in call centers. You can have the AI model listening to the call and providing prompts. So, it’s not just, “We’re going to record this call for later.” We can actually give you real-time feedback on how you’re doing and real-time access to information. That’s going to improve the experience. I think there are opportunities for automated approvals, automated regulatory approvals, for instance planning approvals, and interpretations of laws and policies, which I think is a really interesting area. I was talking to a large Asia-Pacific bank just a week ago, and they said they’re now running an AI across all of their policies and all of their terms and conditions, and it means that 90% of customer queries to bankers can now be answered live, without people needing to go back and ask experts. AI can actually do that and, again, speed things up and make them more efficient. Governments can do the same thing with laws and policies.
Airlines and logistics companies have been using AI to help with optimization, scheduling and disruption management, and governments are now starting to think about how to use that in, say, public transport networks or road networks, which creates improvements in the experience and lives of citizens. At my organization, we are using AI to automate recruitment, in terms of resume screening and asynchronous video interviews. Government hires thousands of people, so the capacity to find the right people, and find them fast, takes a lot of the pressure and friction out of the recruitment process. Again, that’s going to improve public policy.
I think the use cases are absolutely endless. The only things that are going to limit it, really, are imagination, because government can be quite risk-averse, and needs to be risk-averse, because it’s balancing quite difficult public conversations and trying to manage the interaction with the population, and, on the other hand, the willingness to invest. I think the public investment required will be substantial, but particularly as government revenues fall and we face the demographic pressure of an aging population, the payoffs for government will be enormous from the application of AI across almost any area you can imagine in public policy.
Howatt: Let’s talk a little bit about your current role as the CEO of Anglicare. What are some of the big issues that you’re dealing with in that sector and how can government more effectively work with the social sector to deliver better outcomes?
Miller: There are really four things that I’m particularly concerned about and that we are working on. I think there’s the question of aging and ageism. Our population is aging significantly across most of the Western world, indeed across most of the world, and there’s a need for government to help foster a healthier approach to aging. There is research out of Yale suggesting that the way you think about aging significantly impacts how healthy you are as you get older. You can have more years of being well as a result of thinking more positively about aging. So I think there’s a role for not-for-profits and governments to really think about how we age well as a society. Let’s not make aging just a clinical thing, something about loss of capacity. How do we celebrate and embrace aging? How do we help well-being be a feature of how we all grow older? That’s one of the things we’re trying to grapple with in building communities for older people that are healthy, thriving and flourishing, communities that are not just about medical care but about community, hospitality and social connection.
The second thing we really focus on is the housing crisis. Anglicare is a significant housing provider, and it’s an area we focus our growth on because we see it as the great social challenge of our time. Without good housing, you lose social cohesion, and I think that’s an enormous challenge to our society. So, it’s an area that requires government investment, government regulation and government support. It simply doesn’t happen on its own; if it were something the private sector would just do on its own, we wouldn’t need to worry about it, but it’s clearly a huge problem.
The third is food insecurity. Cost of living and food insecurity are huge challenges across the Western world. Certainly in Australia, where I am, food insecurity is an enormous challenge for us, and we’re working on distribution, on subsidies and on building capacity.
I think the last one we’re significantly working on is mental health: the increase in the number of mental health challenges in the community, particularly for both young people and older people. The group in society with the highest rate of suicide is men over 85. It’s not the largest number, but it’s the highest rate, so mental health challenges for both teenagers and seniors are an area where we are partnering with government to really invest in counselling, psychology and support. These are big challenges that we need to face as an organization, in partnership with government.
Howatt: That’s quite amazing. That’s a stat that I had never heard before but having an aged father, I can kind of understand how that might come about. So, for my last couple of questions, I want you to look out five years or so, maybe to 2030, what do you envision for the future of housing, specifically social housing and community services to support Australia’s aging population?
Miller: One of the biggest challenges with housing is supply. We just need more housing. But of course, it’s not just a case of building more, because land is expensive, regulatory approvals are expensive, and construction is expensive, and people, developers, even Anglicare, need to make a return on the money invested in building housing. So, one of the things I see in the next five to 10 years is government stripping away some of those regulatory costs, making it really easy for people to build if they follow a certain design standard. I see technology playing a significant role through things like prefabrication, automation and offsite manufacturing, and through reducing the cost of [Unintelligible] architecture, project management and engineering. I think AI, in particular, can play a role in those spaces. Then, of course, there are the lifecycle costs of housing around energy, heating and cooling management, and there’s an estimate that, at least in high-rise buildings, those lifecycle costs can be half the net present cost of the building. So, they’re really, really significant, particularly for people on low incomes.
So, I see a quite exciting world where government regulation, technology and really smart design can, in the next five years, dramatically lower some of those costs and enable developers to make good returns at significantly lower cost, and so you get a win-win. Developers still make the money they need to put their money at risk, but people on lower incomes, on minimum wages, on income support are actually able to afford housing because we’ve changed the way we deliver it. That’s something I see as being quite exciting over the next five years, and I’m hopeful that organizations like mine can play a role in saying, “We’re going to take a risk. We’re going to do some things a little bit differently than we’ve traditionally done, because we can,” and sort of show the way. I’d like us to be the innovative organization. These things are really going to transform the way we do housing, I think.
Howatt: Amazing. Finally, then, the same question but about the future of government in the delivery of services. Based on everything we’ve talked about today, how optimistic are you that we’ll get this right and what do you see for 2030?
Miller: Look, I think I’m ultimately an optimist, and the reason I’m an optimist in terms of public policy and getting this right is that, when I look at millennials and Gen Z and the generations coming after them, they are much more publicly minded, and I think things like climate change have led them to take a different, more positive, more community-minded approach. So, I think that by 2030, millennials and Gen Z are going to be a really significant part of the voting population. They’re going to demand it from government, and governments tend to respond to voters one way or another, so I’m actually really optimistic that their attitude will translate into government action.
Howatt: Thanks, Simon. We really appreciate you taking the time to spend with us today and for your amazing insights. That was so helpful; I learned a lot and, hopefully, others will too.
Miller: Fantastic. Thank you.
Howatt: Now, back to you, Joe.
Kornik: Thanks, Leslie. Thank you for joining the VISION by Protiviti interview today. On behalf of Leslie and Simon, I’m Joe Kornik. I’ll see you next time.
Simon Miller is the CEO of Anglicare Sydney, a nonprofit organization that offers services for seniors, families and individuals in need, from food and housing to mental health and family care. Prior to joining Anglicare, Simon was a Managing Director and Senior Partner at The Boston Consulting Group with over 14 years’ experience in advising the boards, CEOs and executives of Australia’s top companies on strategy, growth, digital transformation, advanced analytics, and mergers and acquisitions. In previous roles he was First Assistant Secretary at the Department of Prime Minister and Cabinet, Deputy Director-General at the Department of Water and Energy, Policy Director to the Premier of New South Wales and Chief of Staff to the Treasurer of New South Wales.

Leslie Howatt is a managing director and Protiviti’s technology consulting solution lead in Australia. She specializes in digital and technology strategy as well as transformational change, and boasts over 25 years’ experience across consulting, industry, and government sectors. She has extensive experience designing and delivering large-scale change initiatives across organizations in Australia, New Zealand, Asia and North America. Leslie's industry experience spans financial services, transport & aviation, energy & utilities, consumer products & retail, and telecommunications. She is a champion for diversity and inclusion.

AIR's Jo Ann Barefoot on AI, automation and the future of financial regulation
In this VISION by Protiviti Interview, we welcome Jo Ann Barefoot, CEO & Co-founder, Alliance for Innovative Regulation (or AIR), a nonprofit organization working globally to promote a more fair, inclusive and resilient financial system by helping adapt financial regulation for the digital age. Barefoot, who hosts the global podcast Barefoot Innovation and is a Senior Fellow Emerita at the Harvard Kennedy School Center for Business & Government, discusses AIR, digitization, AI and the future of regtech.
In this interview:
1:40 - Regulation and technology converging
3:47 - The future of financial regulation is AI
6:20 - Tools and talent
9:11 - Where customers benefit
12:24 - Envisioning regtech in 2030
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-Suite and executive boardrooms worldwide. Today, we’re exploring the future of government and I’m thrilled to be joined by Jo Ann Barefoot, CEO and co-founder of the Alliance for Innovative Regulation, or AIR, a nonprofit organization working globally to promote a more fair, inclusive, and resilient financial system by helping adapt financial regulation for the digital age. She hosts the global podcast, Barefoot Innovation, and is a Senior Fellow at the Harvard Kennedy School Center for Business and Government. She serves on multiple global boards and advisory bodies and is a renowned public speaker and author, having published more than 200 articles on innovation and regulation. Jo Ann, thank you so much for joining me today.
Jo Ann Barefoot: I’m delighted to be here. Thank you.
Kornik: So, Jo Ann, it’s been about five years since you launched AIR, the Alliance for Innovative Regulation, as I mentioned in the intro. Could you tell us a little bit about AIR, how you would assess the first five years, and what you see as some of the biggest challenges and opportunities of the next five?
Barefoot: Yes, I’m happy to. We have just passed our fifth anniversary at AIR. We are a nonprofit, based in the United States but active around the world. Really, the idea for it came out of the fact that I’m a former bank regulator myself; I began immersing myself in how technology is changing the world, including finance, and reached sort of an epiphany that it’s going to be hard to regulate change of this speed and this wall-breaking nature unless we really step up the focus on the regulatory process itself. We work to help financial regulators both use technology themselves and understand the technology they’re overseeing in the marketplace. The challenge going forward is that everything is speeding up and becoming even more wall-breaking, and we think it’s a race against time to keep pace with the change.
Kornik: Right. I love the name because innovation and regulation are just two words that you don’t often hear together. They are often sort of opposites.
Barefoot: Yes.
Kornik: I’ll start off by asking you about your views on the future of regulation, particularly in the financial sector.
Barefoot: If we look at the financial sector, it is changing as much as anything else around us in terms of technology, digitization, tokenization, all the trends that we’re living with, and it really matters. It’s an area where if it doesn’t work well, people get hurt and therefore it’s pervasively regulated maybe more than any other sector. The big changes, I think, and you won’t be surprised to hear me say this, are going to be dominated by artificial intelligence. The advent of generative AI, a year and a half ago, is like nothing we’ve ever seen before in terms of hitting everyone’s desk at the same time in just a few months and making everyone rethink what they are doing. So, the big trends are going to be really keeping up with that change, deploying it, and also properly regulating it so that we get the benefits of new ways of doing things but we deal with the risks, which are very, very real.
Kornik: Right. Since you brought up generative AI, let’s stay on that topic for a few more minutes. I’m curious about its ultimate impact. I know there was a recent AIR white paper that asked if AI is transforming the future or triggering fear, or perhaps a little bit of both. How concerned are you about that generative AI future, and how do we make sure it’s actually a force for good and not the other way around?
Barefoot: Yes. We definitely think it’s a double-edged sword, very dangerous for sure. There’s a risk that AI, and generative AI specifically, is going to introduce bias into the system. We have the challenge of so-called explainability: why is the AI doing what it’s doing, or recommending what it’s suggesting? We’ve got problems with data quality. We’ve got problems with model risk management. We’ve got problems with privacy. We’ve got problems with AIs hallucinating. I mean, we don’t minimize any of those risks, but having said that, we think the upside opportunity is huge and we have to harness it. If we put that in the framework of financial regulation: when you have a problem that at its heart is about not getting enough information, not being able to analyze it, or not keeping up with a dynamic environment, those are the situations AI is perfect for solving, particularly generative AI and the ability to query large language models. If we had the time, I could give you so many examples where the way we do things today is hampered because we don’t have enough information in hand; it’s in analog form. People are doing manual data entry: regulators are doing it, risk managers and compliance people are doing it. Everyone is dealing with fragments of information in a system that’s huge and changing at a breathtaking rate, and we need to harness these tools to understand what’s going on and get hold of that.
Kornik: Right. Are the tools in place? Do the regulators have the tools right now to protect the system and ultimately make it more resilient, inclusive, and fair? How do you balance the need for those guardrails without stifling innovation?
Barefoot: Yes. The regulators, with a few exceptions, do not have the tools now. One of the things we really advocate for, and it might sound nerdy and not interesting, but it’s essential, is that the regulators have to overhaul their own technology. Not just adopt some different tools, but go back and redesign their information architecture and their talent lineup. They’ve got great people who understand risks in the system, but those people need colleagues who understand technology; they need software engineers and designers. So: attracting and retaining talent, working on culture change. We are shortly putting out a white paper on this by my colleague Nick Cook, who used to head innovation at the Financial Conduct Authority in the UK, which we’re calling the Regulators’ Odyssey. Regulators need completely different tools, they need to invest, and for one thing they all need to migrate to the cloud, which will take a little while but really has to be done.
Kornik: Right. That’s interesting; I’ll keep an eye out for that. So, I was listening to your Barefoot Innovation podcast, which is excellent, by the way. Recently you had on Elizabeth Kelly, Special Assistant to the President for Economic Policy at the White House National Economic Council, and I know she was key in drafting President Biden’s Executive Order on AI. So, I’ll ask you: is the U.S. doing enough? Do we have the right resources and focus pointed at the problem?
Barefoot: The U.S. is doing a great deal, and the Executive Order led to learning and innovation throughout the government. We are not as far along as some other places in thinking through policy toward AI; some would say that’s a bad thing, some would say it’s a good thing. We’re going to need to create more dynamic policy change mechanisms, because whatever is adopted on AI today is going to be out of date before long; the tech itself is changing so rapidly, and so are the use cases. But I would like to see the U.S. really think through a consistent set of principles for regulation of AI and be ready to put that out. I’ll mention another thing that we think is really important. At AIR, we look a lot at consumer finance, consumer protection and financial inclusion. We think one of the things that deserves thought from policymakers and from the private sector is the potential for financial consumers to have their own agents for helping them manage their financial lives. Call it a robot, call it an agent; Gartner has called this phenomenon “Custobots,” AI devices that are going to be able to make decisions. Financial consumers don’t have enough information either, and they don’t have, on average, the knowledge to fully understand the financial products they’re looking at. Imagine AI assistants for the financial consumer that can help her pay her bills, do her budget, evaluate financial products, see through offers that may have hidden fees or adverse terms, and manage to meet her own preferences and priorities as she’s saving and trying to build wealth. If we had these, and they were affordable for everyone, and there was a business model that could support them, and if they were regulated to have a duty of care to the individual, or a fiduciary duty with no secondary agenda, we could transform consumer financial markets, really rewarding the good actors who are trying to provide the best products, services and prices and wringing out the bad actors who are not. So, we think policymakers should be thinking about that.
Kornik: Right. Very interesting. A lot of this emerging-tech AI movement is being led by the private sector, of course, and it seems like an area that’s ripe for public-private partnership. Do you see opportunities for collaboration between government entities and the private sector?
Barefoot: I couldn’t agree more, Joe. We are going to have to have public and private collaborative efforts here. I think we may need some new models for those, in all these areas: standards, best practices, interoperability of different kinds of systems. One example: digital identity is emerging as the key issue in fraud, which is skyrocketing partly thanks to AI, as well as in financial inclusion and privacy. You can’t design a systemic approach to something like that unless you have the public and private sectors working closely together, which people are trying to do, but we have a long way to go to do better.
Kornik: Right. Well, Jo Ann, you’ve been very generous with your time. I just have one more question for you and that’s to take me out a few years, maybe to the end of the decade, and talk to me a little bit about what you see as the future of financial regulation in, let’s say, 2030?
Barefoot: I think the future of financial regulation is like the future of everything else: frankly, it’s data and AI. I think by 2030 we’re going to have a lot of situations where, say, a bank examiner or a securities regulator is sitting at their desk and they, too, have an AI agent that enables them to get all the information they could possibly want just by querying it, just by talking to an AI that will bring back the answers they need, cut the data in different ways, pick up the trends, surface the early warning signals. I think these systems are going to remove the need to do so much regulatory work with lagging indicators of risk, which is mostly what we have today, and the whole industry is going to be able to get ahead of problems. It’s not going to be perfect, I’m not naïve, but it will be so much better than today if, when you begin to see risk or non-compliance emerging, you can flag it immediately and fix it immediately instead of letting it accumulate over the years. It’s going to be much more beneficial to the customer and the public, and also to the regulated firms. They’re going to have an easier time running their businesses.
Kornik: Jo Ann, thank you so much for your time today. I really appreciate it and I really enjoyed our conversation.
Barefoot: I did too.
Kornik: And thank you for watching the VISION by Protiviti interview. For Jo Ann Barefoot, I’m Joe Kornik. We’ll see you next time.
Protiviti's Scott Laliberte: Regulation of AI and emerging tech should not stifle innovation
Joe Kornik, Editor-in-Chief of VISION by Protiviti, sits down with Scott Laliberte, Managing Director and Global Leader of Protiviti’s Emerging Technology Group. Scott and his team enable clients to innovate by leveraging emerging technologies, such as AI, machine learning and IoT, among others. In this Q&A, Scott discusses those emerging technologies and if, how, why and when the government should regulate them, and he offers his feedback on a few emerging-tech data points from the Protiviti-Oxford global survey on the future of government.
In this interview:
1:03 – Emerging tech findings from the Protiviti-Oxford survey
3:30 – Regulation of AI and lessons from privacy
6:15 – The foundations of AI governance
9:58 – The role of the private sector
12:18 – The 3-5 year outlook
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of government, and I’m happy to be joined by my Protiviti colleague, Scott Laliberte, managing director and global leader of Protiviti’s Emerging Technology Group. Scott and his team enable clients to innovate by leveraging emerging technologies such as AI, machine learning and IoT, among others. Scott, thanks so much for joining me today.
Scott Laliberte: Thanks for having me.
Kornik: Scott, as you know, we’ve recently published our Future of Government survey that we do with the University of Oxford, and we found out some interesting things regarding AI and emerging technologies. For starters, more than three-quarters of global business leaders expect emerging technologies such as IoT and AI to substantially transform the delivery of public sector services. Does that finding surprise you at all?
Laliberte: No, that doesn’t surprise me, Joe. I think so many people are encouraged and really looking forward to the advances and efficiencies they’ll gain with AI and IoT. AI is enabling businesses and people to do so many things right now, like more efficient predictive analytics, and when you combine that with the power of IoT and some of the data we can get from IoT sensors, we’re going to be able to do some really amazing things with improving services, preventative maintenance, predictive analytics, all those things that will make services in our lives a lot easier.
Kornik: Right. Scott, in the survey, when we asked executives what role, if any, government should play in regulating emerging technologies such as AI and deepfakes, technologies that, as we put it in our survey, can disrupt democracies, 82% of business leaders said they believe government has a role and 53% said that role should be substantial. What do you make of those numbers?
Laliberte: Well, I think not only are folks encouraged by this technology and the breakthroughs it is creating for us, they’re also scared of the consequences, because those consequences and bad outcomes can be pretty significant. Things like deepfakes, and the ability of AI to make attacks quicker and more effective, are all things they’re worried about. You combine that with the fact that it’s so new, and that there’s a lack of standards and regulations and all those types of things, and people are pretty nervous and looking for guidance. On the flip side, I think they don’t want to be overregulated either. They need that guidance, they want the protection, but they want the ability to continue to innovate, and innovate quickly. That’s going to require a real balancing act: making sure the government is providing oversight, guidance and regulations, but isn’t stifling innovation by putting so much on there that companies can’t take advantage of the enhancements and gains they can get with this technology.
Kornik: Right. Scott, we’ve already seen some government involvement in regulating emerging technologies, with two recent SEC enforcement actions for AI washing, the deceptive marketing tactic that exaggerates the use of AI. Do you think we’ll see more in the future, and if so, what will that look like?
Laliberte: Yes, I think we’re going to see a lot more in the future. It’s really interesting; history tends to repeat itself. When I saw the White House executive order on AI, issued in the fourth quarter of 2023, it reminded me of the early 2000s, when a similar White House executive order came out on privacy. If you remember, in the early 2000s, states were just starting to come up with some of their privacy regulations. California had come out with one. Massachusetts had come out with one. Europe had put out some of its legislation and regulations, but the U.S., federally, didn’t have any guidance or regulations in this area. The White House put out an executive order, and right after that, we saw enforcement actions by the FTC, DOJ and SEC for privacy-related issues, but they related back to regulations and laws that already existed, such as the Computer Fraud and Abuse Act and various consumer protection statutes.
You’re seeing that now as well with these enforcement actions for AI washing. They weren’t brought under AI-specific laws, right? They were brought under existing laws covering consumer fraud and abuse and those types of things. I think we’re going to see the same thing happen here. That executive order laid out a bunch of aspirational things, but it set out some very common-sense things as well. It said you have to be transparent about what you’re doing with AI. You have to protect customer data. You have to make sure you don’t have bias worked into any of the decisions you’re making. It had a whole bunch of other things in there, but those were really the core tenets. I think what we’re going to see is that if you violate those core tenets, the government is going to come down on you pretty hard to make an example out of you. That’s what we saw in the early 2000s, and I think we’re going to see it again. So, that’s the warning shot and a notice that people need to take seriously: they’ve got to start putting AI governance initiatives in place, make sure they don’t violate any of those core tenets, and put their companies in a good position to leverage AI in a responsible and ethical manner.
Kornik: Scott, from where you sit, what does effective AI governance and compliance look like? How much regulation is appropriate? How much is too much? What should companies be doing right now to prepare for when all this takes shape?
Laliberte: Yes. Well, I think what companies need to be doing is laying down the foundation for AI governance. They need a steering committee. They need a multidisciplinary group of people who can tackle this from multiple directions. It’s not just going to be compliance. It’s not just going to be information security. It’s not just going to be legal. It’s going to be all of those groups working together with the business to set the foundation for how they responsibly use AI to enhance the business.
That’s going to mean things like making sure you’ve got the right policies and procedures in place, and that may sound like a daunting task, but when we’ve helped companies map their policies and procedures to AI standards such as the NIST AI Risk Management Framework, you find that 85% to 90% of the standards are already addressed by existing policy statements or controls they have today. They might just need to be reminded that those things are in place and that they apply to AI as well as to the other technology they already have; the majority of it is going to be there. Then it’s making sure you’re educating your people on how to use it and how to develop with it in a responsible way. And if you’re going to be doing risky transactions, making sure you have ways of ensuring transparency, ensuring there’s no bias, or keeping a human in the loop so decisions aren’t made based on incorrect information.
So, laying that foundation is going to be very important, and it’s also going to be important that it continues to evolve, because this area is evolving so quickly. That’s really the company side. On the regulatory side, the balance is this: how do you regulate something that’s evolving so quickly without stifling the innovation? Right? When you look at Europe, which took a bit of a heavier hand in putting out some very specific guidance, with consequences if you violate it, the risk there is that it’s going to slow AI innovation in Europe. I think the United States doesn’t necessarily want to do that, because we’re not just competing within the United States or with Europe; we’re competing with the rest of the globe, including many jurisdictions that are not going to put any type of regulation on this whatsoever. So, I think the federal government is really going to be looking to say, “We’ve set out the tenets that we want you to abide by. Frankly, if you violate those tenets, we’re going to hammer you with existing laws that are on the books,” and that will be the way they regulate. They’ll regulate through enforcement actions rather than legislation. What’s going to complicate that is that the states are going to continue to put forth their own legislation as well. So, it’s going to be like it was with privacy, where you have a ton of different state laws to navigate and no real federal legislation, and you’re looking at precedent-setting enforcement actions to get a sense of the right direction to go in.
Kornik: Right. You touched on this a bit earlier but let’s talk about the private sector for a minute if we could. They are obviously way out in front on this. Do you see an opportunity for them to sort of lead on regulation or perhaps be integral in working with governments to align strategically to sort of make sure that we get all this right?
Laliberte: Yes. That’s a tough question. [Laughter] We’ve seen some of the big players already trying to set standards and put out frameworks and things like that: you’ve got Microsoft’s Responsible AI, Google, AWS; all the big players are putting forth their frameworks and guidance, and there are many common elements. As you look across those, they align with a lot of the NIST standards and the ISO standards and things like that. I believe we will continue to see large, leading organizations like that putting forth guidance, templates, all those things that we can take advantage of, because they want consumers to use the technology; it’s in their best interest for these things to be used aggressively and responsibly. They’ve always been in collaboration with the government, and I think we’ll continue to see that.
I think when you look at the bad things that could happen with AI, I compare it to, say, ransomware. Right? Ransomware is still a devastating attack vector that we deal with today. When it first came out, people didn’t know what to do. The government worked in collaboration with the private sector and put forth guidance, and it wasn’t necessarily regulation. They didn’t regulate ransomware, but they worked to harden critical infrastructure, applying lessons learned and guidance from the commercial sector together with government to put forth a really good defensive strategy that could be employed not only by the government but by the private sector as well. I think we’ll see a parallel here, and that’s how we’ll attack the risks of AI and other emerging technologies as they emerge.
Kornik: Thanks, Scott. Finally, if I were to ask you to sort of look out three to five years or even to the end of the decade, how optimistic are you about all this emerging tech and its role to be a force for good in the world rather than the alternative?
Laliberte: This is a double-edged sword. I am really optimistic about the gains we will see in society, in business and in government services with AI and IoT, but especially AI and generative AI. We’re already seeing those gains. Look out three to five years, and your imagination is probably the only thing that will limit what we’re going to be able to provide and see in services and enhancements.
The other side here, right, the naysayer side of me, or the critical thinker in me, also sees the extreme negatives that could happen, and we have to be prepared for that. We’ve seen it over the years. I’ve been doing this for 30 years, and it’s always the same: a new technology comes out that has great promise, and it also can be used for really bad things. Security professionals and technology professionals such as myself need to be thinking about how the bad guys are going to try to use this against us. It’s that cat-and-mouse game, a game of chess of move and countermove, where you’re trying to think three moves ahead so you can stay ahead of the bad guys and the cybercriminals out there. It is going to be a challenge. We already see accelerated attacks. Things like deepfakes are really scary when you think about how we defend against them. Humans have always been the weakest link in the security chain, and now AI is going to allow for really sophisticated attacks that you can’t expect any human to decipher or pick out as fake. So, we’ve got to get really creative, think outside the box and collaborate, because the group mind working together on how to defend against this stuff is going to be much stronger than any one individual or any one company. We’ve seen that collaboration in the past. We’ve seen the ISACs, and we’ve seen government and private-sector collaboration, and that is going to be more important than ever as we move into these new waters and territories.
Kornik: Well, Scott, let’s hope we get this right, huh?
Laliberte: Let’s hope. We will. We have. We always have figured it out. There’ll be some pain along the way. There’ll be some very difficult lessons learned, but with each one of those we take the lessons learned and apply them to the future to get better and stronger, and we’ll continue to succeed.
Kornik: Thanks, Scott. Appreciate your time today. I really enjoyed our conversation.
Laliberte: Me, too. Thank you very much.
Kornik: Thank you for watching the VISION by Protiviti interview. On behalf of Scott Laliberte, I’m Joe Kornik. We’ll see you next time.
Former cybersecurity director, US Navy: Data is the new oil, we need to protect it
In this VISION by Protiviti interview, Perry Keating, Managing Director and President of Protiviti Government Services, speaks with Kathleen Creighton, former director of cybersecurity for the U.S. Navy, about global threats, emerging tech, AI, the next generation of talent and how the private sector plays into national security. Creighton retired from the U.S. Navy in 2021 following a 33-year career—including six years as a Rear Admiral—where she designed cybersecurity, IT and cloud strategy policy and governance for 607,000 Navy personnel. Currently, she is Independent Director for the ManTech Corporation, the West Bend Mutual Insurance Company and the Military Women’s Memorial.
In this interview:
1:28 – What it takes to address today’s cyber threats
3:48 – Quantum and AI — game changers for cybersecurity
6:20 – Job skills and capabilities for the future
8:37 – The role of public-private partnerships in cyber defense
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of government. I’m thrilled to be joined by Kathleen Creighton, former director of cybersecurity for the U.S. Navy where she designed cybersecurity, IT and cloud strategy policy, and governance for 607,000 Navy personnel. Kathleen retired from the U.S. Navy in 2021 following a 33-year career, including six as a Rear Admiral. Currently, she is Independent Director for the ManTech Corporation, the West Bend Mutual Insurance Company, and the Military Women’s Memorial. I’ll be turning over the interviewing duties today to my Protiviti colleague Perry Keating, Managing Director, and President of Protiviti Government Services. Perry, I’ll turn it over to you to begin.
Perry Keating: Thanks, Joe. Thank you, Kathleen, for joining us today.
Kathleen Creighton: Thank you, Perry. It’s a pleasure to be here.
Keating: Wow, we’ve heard quite an impressive resume on your behalf. One of the things I didn’t hear Joe mention is that you’re probably one of only a handful of Information Warfare Community flag officers specializing in cybersecurity, IT, networks and C4ISR capabilities in the Navy. We are certainly honored to have you here. What an impressive background. What do you see as the biggest cyber threats facing the federal government today, and what steps do you think leaders should be taking to address this ever-increasing threat?
Creighton: Perry, there are so many threats. You may think I would say something like misuse of AI, or supply chain, or compromised credentials. But I really think it’s a broader issue, and that issue is the sheer size and complexity of government networks. They are just massive. For example, the DoD has the third-largest IP space in the world, behind only the United States and Amazon. So, it’s a gigantic network, and it’s not homogeneous like a corporate network; it has every variant. It has on-prem and cloud. It has IT. It has OT. It has weapon systems. It has autonomous vehicles. It is just incredibly varied and complex to command and control.
Having said that, what the government has done, and needs to continue to do, is remove humans as much as it can from mundane tasks, automating through machine learning and the introduction of AI; making sure they know what’s on their network and what’s connected to it, the VPNs, the trusts, all the connections; and making sure they have good situational awareness and all the things that go with proper network management, configuration and control. It’s complex, and it takes a skilled workforce. I think they need to continue to automate and make sure they can fight hurt. They know they’re going to be attacked; they’re being attacked every day. They need to make sure they can fight hurt, build in resiliency, and know how to segment parts of the network if they need to.
Keating: I noticed that you said AI and quantum computing, but those are emerging, right? So, you wonder: are they emerging faster than we can keep up with? I guess the question that comes to mind is, do we think AI, quantum and these new emerging technologies are going to be a net positive or a net negative from a cybersecurity perspective?
Creighton: I don’t think we have the luxury of saying whether it’s going to be a net positive or net negative. I think it’s just fact. It’s coming. It’s going to happen. So, we just have to prepare. AI is already here but will be coming faster and faster. So, we need to get these technologies into the hands of the defenders and let them experiment, let them use them and get comfortable with them, and let them train on them. And deepen the understanding that AI will be part of our crown jewels, right? It will be a thing we have to defend, including in critical infrastructure. So, I think we need to get on with it.
As far as quantum, that’s a huge game changer that’s hard to get your mind around: a hundred-million-times increase in computing power. It’s mind-boggling. It will make everything faster. It will make attacks faster. It will make defense faster. Of course, the concern that many of us have, or most of the government has, is its impact on cryptography: on the data we have encrypted, and on our adversaries’ ability to break that encryption much more quickly. So, in preparation for that, the government needs to get serious, and it is, about understanding its inventory of cryptography: what types it has, what their purpose is, what they’re protecting and the risks involved, so it can prioritize. Once post-quantum cryptography is available, they’re going to have to implement it quickly and do it based on risk.
Keating: Interesting. Interesting. When I heard you talk about the need to remove people, I almost wanted to chuckle, right? We always want a little job security. [Laughter] But there’s no question that the government is still going to need access to the best people to solve some of these challenges. From a talent and workforce perspective: one, do you think we have the skill sets that are needed? And if not, where do we go to get those skill sets, and how do we retain the talent that is so desperately needed?
Creighton: I didn’t mean to insinuate that we’re going to remove people; we’re just going to remove people from mundane tasks and put them on other, more important tasks. Workforce is one of the key issues. It’s going to be hard, but we’re going to be able to do it. We need to start with elementary school. My daughter, for example, went to cyber camp, and it introduced her to a lot of STEM things and cyber. Of course, she went on to become an English major, so it didn’t work very well on her, but some kids at the camp hopefully came away wanting to pursue STEM. And with young people, there are opportunities. I met a young woman this week who goes to a top university. She got exposed to national security and defense, and she’s now going to go work for the Navy as a cyber analyst. So, there are patriotic people who want to serve, who want to do this type of work, and we need to reach out. We need to increase diversity, even things like neurodiversity, right? There’s a three-letter agency where everybody jokes that “everybody looks at their shoes”; that’s how you know they work there. Well, a lot of those people are neurodiverse. We need to attract those people and realize that maybe they don’t interview so well because they may have trouble making eye contact. But, boy, they’re good at puzzles and analysis.
Keating: I think you provide a unique perspective. You spent decades in the public sector, and now you’ve spent some time in the private sector. Are there unique ways the public and private sectors should be working together, or public-private partnerships they should form? Is there any advice you have for both the government side and the private-sector side that might be helpful in this area?
Creighton: Yes, I think partnerships are incredibly important. For my last 10 years in the government, we were really, really pressing on ways we could partner with industry, especially on cyber threats and vulnerabilities, working with all different types of organizations. I think as a country we’re at the point where we realize that data is the center of everything. This is the gold, right? This is the new oil. We need to protect it, and the only way we’re going to protect it is by working together: sharing threat intelligence, sharing vulnerabilities, sharing knowledge and building bridges. It’s incredibly important that industry feels like it gets something out of it, that both sides feel like they get something out of it. It can’t just be the government pushing out information and the private sector saying, “Okay. Thanks.” I think increasingly both sides are feeling that. There’s also education: each side should know, before something bad happens, who its point of contact is. Who is your contact at the FBI? Reach out to that person before you have an incident, so that you know each other and you’ve already established that communication.
Keating: So, no doubt the stakes are high. If you had to look out to 2030, 2035, how optimistic are you? How pessimistic are you? What thoughts do you have? How do you see the future playing out?
Creighton: I’m an optimist. My glass is half full. I have no doubt that we will have compromises, that we will have data loss, that we will have critical infrastructure compromises, that companies will have data stolen from them and will face ransomware. All these things are going to happen. Hopefully, not too many people get hurt and not too much money is lost, but it will happen. I’m also confident that we’re going to learn from it and adapt to it, and that as a country working together, public and private, we can defend not only our intellectual property but also our critical infrastructure, and ensure our national security.
Keating: All right. Well, Kathleen, thank you so much for your time today. I really enjoyed our conversation.
Creighton: Thank you, Perry.
Keating: All right. Joe, I guess, we’re back to you in the studio.
Kornik: Thanks, Perry and thanks, Kathleen. Thank you for watching the VISION by Protiviti interview. Please rate and subscribe wherever you listen to podcasts and be sure to visit vision.protiviti.com to view all of our latest content. Until next time, I’m Joe Kornik.
Kathleen Creighton retired from the U.S. Navy in 2021 following a 33-year career, including six years of service as a Rear Admiral. Creighton was one of a handful of Information Warfare Community Flag Officers specializing in cybersecurity, IT solutions, network operation and C4ISR capabilities; she also served as the Navy’s Director of Cybersecurity where she designed Navy-wide cybersecurity, IT and cloud strategy policy and governance for 607,000 Navy personnel. Creighton currently serves as an Independent Director for the ManTech Corporation, the West Bend Mutual Insurance Company and the Military Women’s Memorial.

Perry Keating is a Managing Director and President of Protiviti Government Services, Privacy & Cybersecurity, with over 30 years of experience doing business with the government and the defense industrial base (DIB). His experience gives him unique industry insight into the public sector (U.S. federal, state & local), aerospace & defense, government contractors and the DIB, as well as the telecommunications and high-tech industries.

Protiviti-Oxford survey: Most global execs have concerns about government’s impact on business
In 2024, global business leaders have plenty to worry about, including how government actions may impact their bottom-line business performance. In a global Future of Government survey conducted by Protiviti and the University of Oxford, the overwhelming majority of executives—97%—say they have some level of concern about the ability of the government to impact their business over the next decade. And more than half (56%) classify that concern as substantial or extreme.
What impact, if any, do you anticipate your government will have on your overall business success over the next decade? (by location)
Geographically, North American business leaders (69%) far outpace European (40%) and Asian-Pacific (36%) leaders in their belief that government will have a positive impact on their business. This is a notable trend across nearly every category the survey measured: North American executives express a higher level of faith and trust in the government’s ability to effect change and create a more positive environment in which to conduct business.
What about public-private partnerships? Nearly six in 10 (59%) business leaders report it’s likely their business will collaborate or cooperate on a specific project or as part of a formal partnership with a government entity over the next ten years. In North America, more than a third (34%) categorize that collaboration as extremely likely. That level of enthusiasm dips to 19% in Asia-Pacific and just 12% in Europe.
Economic factors
Business leaders are looking to the government to pull levers leading to a healthy economy in their region: 80% say they expect the government to have some level of involvement in controlling economic growth, unemployment and inflation with the management of demand and money. Nineteen percent of respondents would expect that involvement to be “significant.” Only 3% of business leaders say government should have “no involvement at all” in controlling economic growth.
Again, North American executives are most optimistic, with two-thirds (67%) saying they expect the government to have a high level of involvement in controlling economic growth. In Europe, that number is slightly more than half (52%), and it’s just one-third (33%) in Asia-Pacific.
Business leaders get even more bullish when it comes to government’s role in correcting “market failures” caused by financial conditions through regulation, taxation and subsidies. Nearly nine in ten (87%) business leaders believe government should play a role, and almost a quarter (24%) say that corrective role should be “significant.” A mere 2% say government should have “no involvement at all” in correcting market failures.
80%
Four out of five executives say they expect the government to have some level of involvement in controlling economic growth, unemployment and inflation with the management of demand and money.
What level of involvement, if any, do you think the government should have in correcting "market failures" — caused by either financial conditions or discriminatory action — through regulation, taxation, subsidies, and providing public goods?
Government for social good
But it’s more than just the economy. Global business leaders also overwhelmingly believe government can be a benevolent force for social good. When it comes to equity and equality, 60% of survey respondents say government should have substantial involvement in achieving a just and fair society through regulation, progressive taxation, subsidies and the adjustment of rights, as well as giving access to markets in the face of discrimination. Another 25% think it should have moderate involvement. That leaves just 15% believing the government should have little (13%) to no (2%) involvement whatsoever.
In North America, 38% of executives say government should have “significant” involvement in creating a just and fair society. Again, that percentage is almost double that of leaders in Asia-Pacific (18%) and Europe (10%).
And when it comes to climate change, a whopping 83% of executives say they are confident their own government will have successfully implemented its sustainability and climate change initiatives within ten years. Great news indeed, and a nod to government’s ability to solve big problems. Again, that North American optimism shines through, with more than a third “extremely confident” government will meet its climate goals. In Asia-Pacific, it’s 20% and only 6% in Europe.
Emerging tech and e-government
E-government and the digitization of government services are already in motion, so it’s not surprising that more than three-quarters (81%) of business leaders say they expect emerging technologies such as the Internet of Things, artificial intelligence and blockchain to transform the delivery of public sector services in the future. Nearly half of all respondents (48%) categorize that coming transformation as substantial.
Some 68% of North American business leaders say they anticipate a high level of transformation, while Europe (38%) and Asia-Pacific (34%) don’t seem to see the same level of disruption from emerging technologies on the horizon.
When we asked executives what role, if any, government should play in regulating emerging technologies that can disrupt democracies, such as AI and deepfakes, 82% said they believe government has a role, and 53% said that role should be substantial. Again, almost a third (32%) of respondents in North America say government should have a “significant” role in regulating emerging tech, more than Europe (10%) and Asia-Pacific (17%) combined.
83%
A whopping 83% of executives say they are confident their own government will have successfully implemented its sustainability and climate change initiatives within ten years.
What level of involvement, if any, do you think the government should play in regulating emerging technologies, such as artificial intelligence and deep fakes? (by location)
Big Brother and privacy
It’s safe to say that business leaders have concerns about government and privacy. Just 4% of global executives say they are “not concerned at all” about privacy as it relates to the government’s ability to watch, track and monitor citizens and companies through surveillance systems, facial recognition software and AI. Nearly half (49%) say they are either “significantly” or “extremely” concerned about privacy.
However, North American executives are far more concerned about the government intruding on citizens’ privacy than other global executives. A significant 70% say they are highly concerned about government surveillance, a much higher percentage than business leaders in Europe (32%), where government intrusion is limited, and Asia-Pacific (38%), where facial recognition technology is more common.
As government services continue to be digitized, we asked executives if they have concerns about the government’s ability to protect citizens’ data in an “e-government” environment. Perhaps not surprisingly, 83% say they are concerned: 58% are highly concerned, while another quarter are somewhat concerned. Executives citing “extreme” concerns varied by geography with North America (39%) outpacing Europe and Asia-Pacific, both checking in at 14%.
Incentives move the needle
Meanwhile, business leaders seem ready—and perhaps even eager—to be responsive to government action. Some 86% of business leaders say future government subsidies and incentives would move the needle on their business and investment decisions, and 55% say that impact will be substantial.
When we asked business leaders about the likelihood of increasing their investment in innovation and research and development in response to government funding incentives or initiatives within their specific industry, 54% say they are either somewhat (30%) or extremely (24%) likely to do so. Only 17% say it’s unlikely their business would increase its investment in innovation and R&D, even with additional government funding.
Incentivizing factors can also play a role in determining where a business is located: 52% say government plays a substantial role in that decision, and another 30% say it plays a moderate role. We asked business leaders to rank the aspects of government that play the biggest role in determining their business location. Here are their answers, from highest to lowest impact:
- Infrastructure and utilities
- Labor rules
- Regulation and compliance
- Economic stability
- Freedom of entry
- Taxes and the cost of doing business
In North America, the highest-ranking factor determining business location is regulation and compliance; in Europe, it’s infrastructure and utilities, and in Asia-Pacific, it’s labor rules. Executives over the age of 50 say economic stability is the biggest factor, while younger execs focus on infrastructure and utilities. Of course, the two often go hand in hand.
70%
A significant 70% of leaders in North America say they are highly concerned about government surveillance.
As government services are digitized, how concerned are you, if at all, about your government’s ability to protect data in an "e-government" environment?
Money and taxation
Even though taxes and the cost of doing business ranked dead last among factors determining business location, that doesn’t mean executives aren’t thinking about taxation. When we asked them to categorize their expectations for the overall level of taxation of their company over the next decade, 31% of business leaders say taxes will remain about the same, 10% expect them to be somewhat lower, and 3% expect them to be significantly lower. That’s 44% who say they don’t expect taxes to increase over the next ten years, a surprisingly high number. The remaining 56% say taxes will be “somewhat” higher (30%) or “significantly” higher (26%).
We also asked executives to rank which taxes they expect to increase for their company over the next decade. Their answers, from highest to lowest:
- Tax on goods and services
- Tax on corporate profits
- Tax on property
- Tax on payroll
- Tax on personal income
- Tax on social security contributions
Older executives believe the biggest tax increases will come from taxes on property, while the younger cohort says the biggest increases will come from taxes on goods and services.
From a geographic perspective, European and Asian-Pacific executives are aligned on where they expect the biggest tax increases: goods and services and social security. Incidentally, those two categories are the two lowest among North American executives. In North America, respondents expect the biggest tax increases to come from property and corporate profits, categories that score much lower elsewhere in the world.
Dr. David Howard is Director of Studies for the Sustainable Urban Development Program at the University of Oxford, which promotes lifelong learning for those with professional and personal interests in urban development, and Director of the DPhil in Sustainable Urban Development. He is a Fellow of Kellogg College, Oxford, and Co-Director of the Global Centre on Healthcare and Urbanization at Kellogg College, which hosts public debates and promotes research on key urban issues.

Dr. Nigel Mehdi is Course Director in Sustainable Urban Development, University of Oxford. An urban economist by background, Mehdi is a chartered surveyor working at the intersection of information technology, the built environment and urban sustainability. Nigel gained his PhD in Real Estate Economics from the London School of Economics and he holds postgraduate qualifications in Politics, Development and Democratic Education, Digital Education and Software Engineering. He is a Fellow at Kellogg College.

Dr. Vlad Mykhnenko is an Associate Professor, Sustainable Urban Development, University of Oxford. He is an economic geographer, whose research agenda revolves around one key question: “What can economic geography contribute to our understanding of this or that problem?” Substantively, Mykhnenko’s academic research is devoted to geographical political economy – a trans-disciplinary study of the variegated landscape of capitalism. Since 2003, he has produced well over 100 research outputs, including books, journal articles, other documents, and digital artefacts.

Former Australia Minister for Foreign Affairs Julie Bishop: Find a way to build a better world
The challenges facing governments around the world, including Australia’s, are immense and growing. Leaders in both the public and private sectors need to evaluate the risks and take the necessary steps to begin solving the unprecedented problems they face. To help sort it all out, VISION by Protiviti caught up with Julie Bishop, Australia’s Minister for Foreign Affairs from 2013 until 2018. Bishop was the first woman to hold the role, as well as the first female Deputy Leader of the Liberal Party, a post she held for 11 years. In a political career spanning more than 20 years, including a stint as acting Prime Minister in 2017, Bishop also served as Minister for Education, Science and Training; Minister for Women’s Issues; and Minister for Ageing. Currently, she is Chancellor of the Australian National University, appointed in 2020. We sat down with Bishop to talk megatrends, talent, trust, gender, generations and education design, as well as the future of Australia, its government, its people and the planet.
Protiviti Australia: It’s such a pleasure to be able to speak with you today. Thank you for joining us.
Julie Bishop: Thank you, I appreciate the invitation!
Protiviti Australia: You talk about four megatrends that are already disrupting Australia and the globe and will continue to do so far into the future. They are emerging technologies, shifts in geopolitical and economic power, a backlash against globalization, and climate change. Let’s take them one at a time: How do we ensure we put regulations and guardrails in place around technological advances such as AI, quantum computing, genetic engineering and others without stifling innovation and creativity?
Bishop: This is a significant challenge, as policymakers and legislators often lack detailed understanding of complex technological developments. The CEO of OpenAI has warned that unrestrained development could threaten humanity and has called upon U.S. Congressional leaders to impose regulations. However, this is a global challenge, as technology companies can easily seek less restrictive regulatory environments elsewhere should they feel stifled in one jurisdiction. The response thus needs to be global, and that could be achieved through various means, including multilateral organizations and bilateral treaties between leading nations in these fields.
One of the significant hurdles to achieving global regulation is the interface between private-sector development and development within defense and intelligence agencies. It will be difficult to convince national governments to be transparent about the development of advanced technologies currently subject to high-level security classifications. The precedent for how global cooperation could work is the response to the existential threat of nuclear weapons through the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). While not perfect, it has largely been effective in preventing the proliferation of weapons of mass destruction, albeit with a small number of notable failures.
I am optimistic, as there is a significant effort underway to develop guardrails against the misuse of emerging and powerful technologies. This is occurring within governments, with a level of cooperation from the private sector and support from institutions of higher learning, including the Australian National University through its advanced research centers in supercomputing, cybernetics and advanced science.
Protiviti Australia: How should leaders navigate the shifts in geopolitical and economic power?
Bishop: Historically, times of significant shifts in geopolitical and economic power have been tumultuous. China’s rapid industrialization and growth over the past 45 years have seen it challenge the dominant role of the United States in the global economy and in global affairs. This has been exemplified by President Xi’s Belt and Road Initiative, which has mobilized billions of dollars and supported a major expansion of China’s outreach, particularly to the developing world. China has challenged the U.S. and multilateral organizations, including the World Bank, as the preferred partner for infrastructure development in many nations. It has also become more assertive in its territorial claims with some of its neighbors, including in the South China Sea and over Taiwan.
Leaders need to look as far ahead as possible and undertake scenario mapping and planning of the various outcomes, considering all the key variables while trying to understand the implications of increasingly complex challenges. It is important to focus on the areas of greatest rivalry between the competing powers. Technology, and access to advanced computer chips in particular, is one area where competition is intense, as the U.S. seeks to maintain an advantage in cutting-edge technology development and has restricted sales to China. The U.S. is also considering the forced divestment of the popular video-sharing app TikTok, which is alleged to have close ties to the Chinese government, although the company disputes that.
China recently launched a World Trade Organization challenge to U.S. electric vehicle subsidies, and there are simmering tensions in many other areas of trade. The ability of organizations and governments to balance their relationships with the U.S. and China is becoming increasingly difficult, and leaders need to be alert to areas where they may be forced to choose a longer-term partner.
Protiviti Australia: What are some ways government leaders, both here in Australia and globally, can embrace globalism and its economic ideals?
Bishop: Globalism has brought great benefit to the world, driven as it has been by the economic imperatives of efficiency, economies of scale, comparative advantage and global transport networks. It has allowed many developing nations to participate more fully in global trade and to find ways to improve their domestic economies. However, the COVID-19 pandemic exposed some vulnerabilities in supply chains, particularly the shortages of personal protective equipment in its early stages.
It is also important that governments acknowledge the negative impacts that globalism has had on some communities and develop policy responses. For example, China’s rapid industrialization had a significant impact on the manufacturing sectors of many developed nations, and that has created pockets of disadvantage and resentment. There must be full and frank discussions about globalism and its impacts, both positive and negative, so that we can adopt policies that gain the greatest benefit while mitigating the downsides.
Protiviti Australia: Where do you see Australia’s role in combating climate change? Can it be an ESG and sustainability leader on the world stage?
Bishop: Climate change is a global problem and thus requires a coordinated international response. Australia can play a leading role in engaging with multilateral institutions and supporting the development of appropriate and effective policies. Our universities and scientific research agencies can play an important role in establishing an evidence base for the impacts of climate change and in helping identify the responses that deliver the greatest benefit at the least cost.
One of the challenges for a coordinated international response has been a level of inconsistency from the United States in recent years: supporting the Paris COP21 agreement, then withdrawing from that agreement under the Trump Administration and rejoining under the Biden Administration. This places greater responsibility on other nations to maintain policy momentum, and Australia has played a responsible role in relevant forums.
Another critical element in the response to climate change is achieving technological breakthroughs in clean energy so that baseload power can be generated at levels sufficient to sustain major cities and industries. Australia is also playing a role in the international research effort to unlock new sources of energy, including nuclear fusion.
Protiviti Australia: How optimistic are you that governments around the world will be able to come together and cooperate to solve some of these huge issues the planet is facing?
Bishop: I am optimistic that humanity will rise to these challenges, as there are serious, even unthinkable, implications from failure. However, that does not mean it will be a smooth road to finding appropriate responses. There are vastly differing governance models around the world, and the policy development process can be markedly different. That means governments will seek to achieve the best possible outcome for their citizens to advance their national interests, and that inevitably means policy fragmentation. Leaders need to embrace and encourage full and open debates, as that will allow them to consider and evaluate all the competing options and costs. It is vitally important to hear from different perspectives when the conversation involves these incredibly complex issues. Contestability is an important skill set for those leading policy debates and development.
Protiviti Australia: Where do you see the private sector’s role in solving, or helping to solve, some of these big problems you just discussed? What steps can global business leaders take right now in the interest of doing both well and good?
Bishop: With regard to climate, the private sector can play an important role in driving technological breakthroughs and establishing mechanisms such as trading in carbon credits when governments find such things difficult or virtually impossible. The private sector is also a significant source of carbon emissions and is thus in a position to have a major impact by adopting new technologies and approaches. The World Economic Forum releases a biennial report, The Future of Jobs, which surveys companies around the world about various impacts on employment. Technology has been identified as a major employment disruptor, although it is anticipated to create new jobs even as it displaces people from some roles. The report highlights a business model in which technology is used to augment rather than replace the human workforce. However, that will require training and retraining on a scale that will demand partnerships between government, the private sector and educational institutions. Global business leaders need to maintain open channels of communication with government so there is time to develop appropriate policy responses.
Protiviti Australia: Let’s talk about trust. So much of leadership is about trust. Fewer than half of Australians think their democracy is working, and institutional trust in the private sector is declining as well. How can we rebuild that trust?
Bishop: Transparency is one of the foundations of trust. One of the criticisms of many governments is that politicians have been captured by business elites and wealthy individuals, who allegedly collude to promote their interests to the detriment of the broader community. To build trust, it is important that there is timely reporting of political and campaign donations and of any subsequent access to political leaders. Further, there needs to be greater transparency around many government decisions so the public can be confident the interests of the community were paramount. Political and business leaders need to engage openly in debates about alternative approaches to build community consensus for any responses.
Protiviti Australia: You were the first woman to hold the role of Foreign Minister and Deputy Leader of the Liberal Party, and you also served as the Minister for Women. How do we get more women in critical leadership positions in government? Are there lessons that we should learn from the private sector?
Bishop: The key is to build a pipeline of talented and experienced women from the grassroots level, and this must involve mentoring networks and other supportive structures. One of the challenges is ensuring that merit remains the most important criterion for leadership. Women often have a different leadership style than men, and it is important for that to be recognized by those making decisions about promotions and judging the leadership qualities of candidates. With many workplaces dominated by male leaders, there is a need for improved awareness of the different leadership styles within the existing workforce.
Protiviti Australia: Thanks for that. With the current workforce representing five generations, and given your role in education as Chancellor of the Australian National University, how are you working with employers to build talent that meets their needs?
Bishop: It is a matter of looking as far ahead as possible and doing our best to anticipate the skills and knowledge that will be valuable to employers in a rapidly changing economy. One way the Australian National University supports this goal is through flexibility in degree design and choice. Undergraduates have virtually unlimited choices in designing their degrees, so they can customize their learning experience to align with their career aspirations. The ANU also liaises regularly with private-sector companies to better understand both the skills they need in employees and their research needs. Arguably, the biggest generational trend is convincing people of the need for lifelong learning as technology in particular increasingly disrupts many professions. We strive to equip our graduates with the critical thinking skills that allow them to adapt more quickly to changing conditions in the workplace and beyond.
Protiviti Australia: You’re also dealing with significant disruption in higher education. What do you see as the biggest challenges there over the next few years and how are you addressing them?
Bishop: Technology will continue to disrupt education, and the emergence of AI tools is the most recent challenge. The ANU believes that rather than fighting against the inevitable, we should encourage students to leverage AI to elevate their knowledge and skills. The delivery of education also needs to evolve in response to student expectations. While many value the on-campus experience, there is a need to ensure remote and flexible access to lectures and other learning supports. The cost of education is also increasing, and students are justified in having raised expectations of their university experience. At the ANU we have a significant residential on-campus student population, and we strive to provide on-campus services and support that ensure students have a fulfilling and safe time at our institution.
Protiviti Australia: You’ve been generous with your time, so just two more quick questions: When you look out to 2030 and beyond, how optimistic are you that we’ll get most of this right? Are better days ahead?
Bishop: I have faith that humanity will collectively respond to the challenges and any crises that may develop. We have shown remarkable resilience over the centuries, and while we may experience adversity at times, we have generally found a way to build a better world, and I expect that to continue. That said, the conflicts in Ukraine and Gaza illustrate that peace can be fragile and that violence can break out suddenly and unexpectedly, inflicting loss of life and suffering on civilian communities. Financial and economic crises can also emerge suddenly and overwhelm policymakers and the ability of governments to respond. The increasing complexity of our financial systems makes such crises less predictable and potentially more severe. So, while I remain optimistic that better days are ahead in the longer term, that does not mean there will not be difficult times and challenges on the way there.
Protiviti Australia: Can you give me one bold prediction on anything we’ve discussed, or perhaps didn’t discuss, about either Australia or the world a decade from now? What’s something that may surprise us in 2034?
Bishop: New forms of energy always seem to be about a decade away. However, the emergence of supercomputers and quantum computers will hopefully accelerate the development and discovery of a new, clean source of energy that allows us to transition away from fossil fuels.
Julie Bishop served as Australia’s Minister for Foreign Affairs from 2013 until 2018. She was the first woman to hold the role as well as the first female Deputy Leader of the Liberal Party, serving for 11 years. In a political career spanning more than 20 years, Julie also served as Minister for Education, Science and Training; Minister for Women’s Issues; and Minister for Ageing. Currently, she is Chancellor of the Australian National University, appointed in 2020. Prior to entering politics, Bishop was Managing Partner of the national law firm Clayton Utz in Perth. Julie is Chair of the Board of Prince’s Trust Australia, a Trustee of Prince’s Trust Group Company, Chair of the Board of the Telethon Kids Institute, and a Member of the International Advisory Board of the Council on Foreign Relations. She has also established a boutique consultancy, Julie Bishop and Partners, offering strategic advisory services.
