By Elvin Madamba and Maria Shamim

Welcome to another edition of Actionable Insights! In this blog, we take you behind the scenes of our Competent Boards education sessions to bring you exclusive corporate intel and share valuable knowledge from our faculty speakers, a global pool of board directors and industry experts.

The following insights come from Session 9 (Responsible Use of Data, Digitalization, DAO and Cybersecurity) of our most recent sold-out ESG Designation Program, which started in February. We have more cohorts rolling out this Fall and in 2024, so book your place today to be a part of a fascinating knowledge-sharing community! 

In our recent cybersecurity session, we explored the dual nature of data as both a valuable asset and a source of vulnerability for businesses. Our distinguished faculty speakers emphasized the critical role of responsible data management and its optimization for informed business decision-making. They cautioned against neglecting the ethical concerns arising from rapid technological advancements, specifically in relation to digitalization, artificial intelligence (AI), cybersecurity, and big data. The discussions also ventured into the need for businesses to maintain stakeholder trust through constant, transparent communication – a practice that becomes particularly vital in the event of a cybersecurity crisis.

Here are the key takeaways:

Data: A double-edged sword

In 2017, The Economist famously declared that data had surpassed oil as the world’s most valuable resource. Fast forward five years and the global big data and business analytics market stood at a whopping US$294.16 billion in 2022. Predictions suggest a robust compound annual growth rate (CAGR) of 14.48% from 2023 to 2028, which would lift the market to US$662.63 billion by 2028. In this booming data economy, the policies and practices around responsible and optimized data use have become pivotal to a company’s success or failure. A study by Deloitte suggests that strategic digital transformation actions can unlock an estimated US$1.25 trillion in additional market capitalization across all Fortune 500 companies. Conversely, ill-conceived strategies could endanger over US$1.5 trillion of their capital.
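As a quick sanity check on those figures, here is a minimal sketch (assuming the 14.48% rate simply compounds annually on the 2022 base of US$294.16 billion through 2028) that roughly reproduces the projected total:

```python
# Rough check of the big data market projection cited above.
# Assumption: the 14.48% CAGR compounds annually on the 2022 base
# (US$294.16 billion) over the six years to 2028.
base_2022_bn = 294.16      # market size in 2022, US$ billions
cagr = 0.1448              # projected compound annual growth rate
years = 2028 - 2022        # compounding periods

projected_2028_bn = base_2022_bn * (1 + cagr) ** years
print(f"Projected 2028 market: ~US${projected_2028_bn:.0f} billion")
# Prints roughly US$662 billion, in line with the US$662.63 billion projection.
```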

Bojana Bellamy, President at the Centre for Information Policy Leadership—a global privacy and data policy think tank—identifies data as a vital asset for organizational success. She encourages companies to surpass mere regulatory compliance and adopt a strategic approach to data usage. Bellamy highlights a range of legal and commercial challenges arising from the data revolution. These include data breaches and privacy concerns; the costs of managing vast amounts of daily generated data and establishing cybersecurity defences for digital assets; compliance with rapidly evolving data protection regulations, especially in the context of emerging technologies like AI; and a widening trust deficit among companies, consumers, and regulatory authorities regarding data usage.

Bellamy insists, “Business leaders and [current and future] board members must champion this shift towards more mindful privacy and data management practices, extending beyond mere legal compliance. Data possesses significant business value and opportunity, and the critical question is: how can we communicate, and realise this value in a responsible way, to our people and to external stakeholders, including investors?”

She warns, “If companies fail to prove themselves as accountable and responsible custodians of data, they may lose their place in the future business landscape.”

AI revolution: Responsibility and accountability

As businesses strive to streamline operations, reduce costs, and enhance productivity, AI technologies are seeing exponential growth in usage. AI tools, particularly those aimed at customer interaction, such as ChatGPT and Google’s Bard, are at the forefront of this revolution. According to Bloomberg Intelligence, the market for generative AI—AI technology capable of creating various content forms like text, audio, and images—could rocket from US$40 billion in 2022 to US$1.3 trillion in revenue by 2032.
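As a rough back-of-the-envelope illustration (our own arithmetic, not a Bloomberg Intelligence figure), the implied growth rate behind that projection can be sketched as follows:

```python
# Implied annual growth rate behind the generative AI revenue projection.
# Assumption: simple compounding from US$40 billion (2022) to
# US$1.3 trillion (2032) over ten years.
start_bn = 40.0            # 2022 revenue estimate, US$ billions
end_bn = 1300.0            # 2032 revenue projection, US$ billions
years = 2032 - 2022

implied_cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")   # roughly 41.6% per year
```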

The versatility of AI technologies, reflected in a wide range of applications across business functions, is driving their adoption. A Forbes Advisor survey of 600 business owners found that approximately 56% use AI for customer service, with cybersecurity and fraud management (51%) and customer relationship management (46%) following close behind.

However, the rapid expansion of AI brings its own challenges, and Bellamy identifies ethical issues around data security and privacy as among the most prominent. She proposes a two-tiered strategy for boards and senior management to counter these risks and ensure ethical AI development and application. Bellamy’s recommended initial steps for companies include:

  • Integrating AI technologies into existing corporate frameworks for data protection and cybersecurity. Bellamy advises, “Do not reinvent the wheel. Evaluate what you have and how you can adapt your [in-house] risk assessment framework, training, compliance, transparency, and accountability programs to accommodate AI tools. There’s a comprehensive match between these corporate compliance areas and AI.”
  • Establishing internal data and AI ethics committees to incorporate diverse perspectives from various stakeholders. “A cross-functional AI ethics committee can effectively escalate challenging cases to a group of peers. These individuals may not be top leadership, but they represent various organizational aspects, bring diverse viewpoints, and hail from different cultural and geographical backgrounds,” she notes. While some companies may prefer external AI advisory boards, Bellamy advises focusing on internal ethics boards before seeking external experts.
  • Implementing a code of conduct for ethical AI development and application, which can serve as a guiding principle, including during crises. Bellamy emphasizes that management should then focus on operationalizing accountability for the code, so that the company’s development and use of AI remains responsible and compliant with both those principles and evolving legal norms and standards.

Christopher Crummey, Director of Executive and Board Cyber Services at cyber technology company Sygnia and former executive director at IBM Command Centres, warns that many companies remain unaware of AI’s potential pitfalls. He asserts, “We’re still in AI’s initial phase; individuals and businesses are yet to grasp the risks AI introduces. Each legitimate AI tool we use is also accessible to threat actors. For them, AI offers a new tool to overcome their shortcomings, such as crafting error-free phishing emails.”

Crummey is also concerned about the risk of confidential corporate data being fed into generative AI technologies. “The idea of your company’s employees feeding sensitive documents into ChatGPT and asking it to ‘summarize these for me…’ means that data has now left your company’s domain, and you no longer control where it goes,” he cautions. “AI is a double-edged sword.”

Cyber defence: Emerging focus for regulators

In the face of escalating AI risks, businesses must adopt a more assertive stance to bolster their cybersecurity measures. As cyber threats evolve in sophistication, the onus is on organizations to enhance their detection capabilities, sealing any vulnerabilities and safeguarding their operations from new and emerging threats. However, this is a complex endeavour. For those ready to embark on this journey, Crummey advises the following course of action:

  • Mitigate human vulnerability: According to Crummey, the most impactful measure a company can undertake is cultivating a robust cybersecurity culture. He insists, “Train your employees. Make it everyone’s responsibility. There’s no better safeguard than a human firewall.” He points to Equifax’s transformation after its 2017 data breach as an example, describing the company as now a “gold standard” in cybersecurity. Equifax achieved this by embedding cybersecurity throughout the organization, tying employees’ bonuses to their department’s cybersecurity readiness, prioritizing investment in cyber defence, and continuously training employees to identify and mitigate potential cyber threats.
  • Identify your “crown jewels”: Clearly understand your business’s most crucial functions. These are the ones that need priority restoration in the event of a cyber attack.
  • Conduct a business impact analysis: Undertake an exhaustive analysis to understand the potential risks a cyber attack might pose to the company’s various functions.
  • Maintain a robust backup strategy: Keep a comprehensive backup plan for all significant business elements, update it frequently, and proactively spot and rectify any vulnerabilities in the backup storage systems.

As cybersecurity emerges as a central regulatory concern, the US Securities and Exchange Commission (SEC) is set to impose stringent disclosure requirements on businesses. Soon, companies will be compelled to disclose material security breaches within four business days and subsequently detail the enhancements made to their cybersecurity controls in response to the incident. Furthermore, the SEC will examine the competency profile of public companies, assessing the presence of critical cyber skills on their boards. It will also consider whether they have dedicated roles such as a Chief Information Security Officer (CISO) to lead efforts in cyber resiliency, cybersecurity strategy, risk assessment, and governance infrastructure.

Given this heightened regulatory scrutiny, it is paramount that board members are well-versed in the company’s cybersecurity risk portfolio, strategy implementation, and the latest advancements in cybersecurity and AI technologies.

“The board has a fiduciary duty to ensure the company is equipped with the right skills, invests in suitable technologies, fosters the correct culture from the top down, and that cybersecurity remains a constant topic in all board meetings,” Crummey concludes.

These insights come from Session 9 (Responsible Use of Data, Digitalization, DAO and Cybersecurity) of our most recent sold-out ESG Designation Program (February 2023). We have more cohorts rolling out this year and in 2024, so book your place today to be a part of a fascinating knowledge-sharing community and global network!

Elvin Madamba is Program Manager, and Maria Shamim is Research Analyst at Competent Boards. Follow Competent Boards on LinkedIn.
