AI Governance and Risk Updates
Hi, enjoy this week's curated risk and business updates.
AI Governance
AI governance refers to the frameworks, policies, and procedures that organizations implement to ensure the ethical, transparent, and accountable use of AI technologies. It encompasses a range of practices aimed at aligning AI development and deployment with societal values and regulatory requirements. Key components of AI governance include:
Ethical Guidelines: Establishing principles that guide the ethical use of AI, such as fairness, transparency, and accountability.
Regulatory Compliance: Adhering to local and international laws and regulations that govern AI technologies.
Stakeholder Engagement: Involving diverse stakeholders, including employees, customers, and regulators, in the AI decision-making process.
Oversight Mechanisms: Implementing monitoring and auditing processes to ensure AI systems operate as intended and do not cause harm.
AI Risk Management: Identifying, assessing, and mitigating the risks associated with AI technologies.
AI Risks
As our understanding of AI technologies matures and use cases continue to expand, new risks will continue to emerge alongside evolving practices. Some common types of AI risks include:
Operational Risks: Failures in AI systems that can disrupt business operations, leading to financial losses and reputational damage.
Ethical Risks: Issues related to fairness, transparency, and accountability in AI decision-making, which can result in biased outcomes and loss of trust.
Data Privacy Risks: Concerns about the collection, storage, and use of personal data, which can lead to privacy breaches and regulatory penalties.
Security Risks: Vulnerabilities in AI systems that can be exploited by malicious actors, resulting in data breaches and cyber-attacks.
Malware Generation Risks: The ability of generative AI to create harmful software, posing significant cybersecurity threats.
Misinformation and Disinformation Risks: The potential for AI systems to generate and spread false information, impacting public opinion and trust.
Hallucination Risks: Generative AI models producing incorrect or fabricated information, which can mislead users and affect decision-making processes.
Compliance Risks: Non-compliance with regulatory requirements, which can lead to legal consequences and financial penalties.
Intellectual Property Risks: Issues related to the use of copyrighted material without authorization in AI-generated content.
Environmental Risks: The high computational resources required for AI training and operation, which can have adverse environmental impacts.
Lack of Accountability: The absence of proper corporate governance structures leading to insufficient oversight of AI systems.
Algorithmic Biases: AI algorithms inheriting biases from training data, leading to potentially discriminatory outcomes.
Lack of Transparency and Explainability: As false information, deepfakes, and other AI-generated artifacts proliferate, business customers, consumers, and regulators will demand increased AI model transparency. Difficulty in understanding and justifying decisions made by AI systems can hinder trust and lead to legal scrutiny.
Model Dependency Risks: Reliance on third-party models can expose organizations to vulnerabilities in those models, such as unsafe dependencies and outdated code.
Training Data: In addition to the intellectual property risks above, high-quality, proprietary training data is becoming scarce, creating challenges for AI differentiation. Other risks include intentional data poisoning by competitors or bad actors and the introduction of biases into AI models, leading to discriminatory outcomes.
Adversarial Attacks: AI models can be susceptible to attacks that manipulate input data to produce erroneous outputs, undermining the integrity of the AI system.
Acquiring and Retaining Talent: The war for AI talent is intensifying, with Microsoft described as the "world's most aggressive amasser of AI talent, tools and technology."
Regulatory Compliance: Regulatory frameworks for AI remain unsettled, and M&A activity among the tech titans is coming under the microscope, meaning future deals will be more expensive and face tougher regulatory reviews, particularly in the US and EU.
AI Costs: Costs including talent acquisition and compensation, cloud infrastructure, chips, data storage, and transfer fees will continue to grow.
Partnerships and Alliances: Partnerships may become strained as the industry and technologies evolve, leading to dramatic industry shifts and potential legal action.
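To make the adversarial-attack risk above concrete, here is a minimal sketch of a fast-gradient-sign-style perturbation against a toy linear classifier. The model, weights, and inputs are entirely illustrative, not drawn from any production system; the point is only that a small, deliberate nudge to the input can flip a model's decision.

```python
# Illustrative only: a toy linear classifier and an adversarial
# perturbation that flips its decision.

def score(weights, x):
    """Linear decision score: positive -> class A, negative -> class B."""
    return sum(w * xi for w, xi in zip(weights, x))

def fgsm_perturb(weights, x, eps):
    """Fast-gradient-sign-style step: for a linear score, the gradient
    with respect to the input is the weight vector itself, so each
    feature is nudged by eps in the direction that lowers the score."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.9, -0.4, 0.2]
x = [0.5, 0.1, 0.3]              # legitimately scored as class A
x_adv = fgsm_perturb(weights, x, eps=0.6)

print(score(weights, x))         # 0.47  -> class A
print(score(weights, x_adv))     # -0.43 -> misclassified as class B
```

Real attacks work the same way against far larger models, which is why input validation and adversarial-robustness testing belong in an AI risk program.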
Sources include DelCreo’s Newsletter, Responsible AI, NIST Generative AI Risk Framework, IBM AI Risk Management blog, Robust Intelligence's resources on AI Risk Management, and Splunk's articles on AI Risk Management.
Request more information on DelCreo’s Risk Universe and risk assessment services.
As a reminder, here are the Risk Universe categories that we leverage to understand and tackle risk:
External Risk
Governance Risk
Strategic Risk
Product Risk
Business Operations Risk
Legal & Compliance Risk
Financial Risk
Technology Risk
We leverage our understanding of risk maps and risk universes to better advise our clients in strategic business decisions and to optimize the management of risk throughout the enterprise.
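As a sketch of how a risk universe like this might be put to work, the following represents a simple risk register keyed to the categories above. The category names come from the list; the risk entries, scoring scale, and class names are hypothetical, for illustration only.

```python
# Hypothetical risk register keyed to the Risk Universe categories
# above; the entries and scores are illustrative only.
from dataclasses import dataclass, field

CATEGORIES = [
    "External Risk", "Governance Risk", "Strategic Risk", "Product Risk",
    "Business Operations Risk", "Legal & Compliance Risk",
    "Financial Risk", "Technology Risk",
]

@dataclass
class Risk:
    name: str
    category: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (severe)

    @property
    def severity(self) -> int:
        # Simple likelihood-times-impact heat-map score.
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        if risk.category not in CATEGORIES:
            raise ValueError(f"unknown category: {risk.category}")
        self.risks.append(risk)

    def top_risks(self, n: int = 3) -> list:
        return sorted(self.risks, key=lambda r: r.severity, reverse=True)[:n]

register = RiskRegister()
register.add(Risk("Model hallucination", "Technology Risk", 4, 3))
register.add(Risk("DMA non-compliance", "Legal & Compliance Risk", 2, 5))
register.add(Risk("Data-center power squeeze", "Business Operations Risk", 3, 4))
print([r.name for r in register.top_risks(2)])
```

A register like this makes the category list actionable: every identified risk is forced into one of the eight buckets, and severity scoring surfaces where oversight attention should go first.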
Weighing the Risks
Weekly Highlights
Three Key Ideas:
Apple faces significant economic and industry risks in China, with a notable 20% decline in iPhone sales due to competition from domestic brands and consumer fatigue with incremental upgrades. The EU's negotiation on electric car tariffs highlights economic risks, emphasizing the need to address subsidies and ensure a level playing field for European manufacturers.
Apple's ability to introduce new AI features like ChatGPT in China is hindered by regulatory requirements, necessitating partnerships with local companies. Similarly, the EU-China talks on EV tariffs showcase the potential for trade tensions, emphasizing the need for diplomatic efforts to mitigate impacts.
Effective AI governance requires robust board oversight, formal frameworks, and active risk management practices to ensure AI tools are safe, ethical, and aligned with organizational goals. Central banks need to address systemic risks like cyber attacks and herding behavior while leveraging AI for enhanced efficiencies.
Recommendations:
To address these risk factors, enterprises should implement comprehensive risk management frameworks that incorporate robust data governance, stakeholder engagement, and diplomatic strategies. Ensuring AI integration aligns with strategic objectives and complies with regulatory environments is crucial for sustaining competitive advantage and operational resilience.
Risk Universe Weekly Updates
External Risk
Apple Intelligence seems to have a ChatGPT-shaped problem in China
Apple faces significant economic and industry risk factors in China, where iPhone sales have declined by almost 20% in the first quarter due to competition from domestic brands and consumer fatigue with incremental upgrades.
Political risk factors are also critical, as Apple's ability to introduce its new AI features, including ChatGPT, is hindered by China's regulatory environment, necessitating partnerships with local companies to comply with approval requirements.
EU & China Holding Talks On Electric Car Tariffs Ahead Of November Deadline
The EU-China negotiations on tariffs for Chinese electric vehicles (EVs) highlight significant economic and industry risk factors, with the EU aiming to address perceived excessive subsidies that have led to an uneven playing field for European manufacturers.
Political risk factors are evident as the proposed tariffs have prompted strong responses from China, emphasizing the potential for trade tensions and the need for diplomatic efforts to mitigate impacts on both sides.
BIS: AI’s Double-Edged Sword: Opportunities and Risks for Central Banks
Economic and industry risk factors for central banks include the need to leverage AI for enhanced efficiencies and risk management while addressing potential systemic risks such as cyber attacks and herding behavior in financial markets.
Political risk factors involve ensuring robust data governance and international cooperation to fully harness AI’s potential, while demographic change risks include managing the labor market impacts of AI-driven automation and addressing concerns about employment and income inequality.
Governance Risk
Board oversight and decision-making risk factors are prominent in AI governance, emphasizing the need for formal frameworks and active risk management practices to ensure AI tools are safe, ethical, and aligned with organizational goals. Leadership must prioritize understanding AI risks and integrate comprehensive governance strategies to mitigate vulnerabilities.
Company structure and culture risk factors include the necessity for robust data governance and inclusive stakeholder engagement to build trust and transparency in AI systems, fostering a culture that values ethical AI development and operational resilience while maintaining regulatory compliance and public trust.
AI to impact more than half of banking jobs - Citi
The rapid adoption of AI in the finance industry demands strong board oversight and proactive leadership to navigate the significant risks associated with job automation, data security, compliance, and ethical concerns, ensuring AI integration aligns with strategic objectives and safeguards organizational integrity.
Effective decision-making frameworks and a culture that promotes AI literacy and innovation are crucial as firms like Citi and JPMorgan equip employees with AI skills; failure to address cultural and structural barriers may lead to lagging AI adoption and loss of market share, highlighting the need for agile, adaptable company structures.
How Board Consultants Can Affect Corporate Governance and the Business Judgment Rule
Boards must carefully balance the use of specialized directors and external consultants to ensure accountability, exercise independent judgment, and maintain transparency, as over-reliance on external advisors can lead to conflicts of interest and undermine the board’s decision-making authority.
Integrating external expertise can enhance decision-making processes by providing specialized knowledge and impartial perspectives, but it also risks diluting internal ownership and accountability, requiring robust conflict-of-interest policies and active oversight to uphold corporate governance standards and stakeholder trust.
Strategic Risk
Mega tech IPOs could finally come in 2025, Nasdaq president says
The anticipated surge in tech IPOs, driven by AI advancements and increasing market enthusiasm, poses significant disruptive innovation risks, as new entrants like Astera Labs and Reddit intensify competition and challenge established players to innovate rapidly to maintain their market position.
The high number of tech unicorns poised to go public or seek alternative funding methods highlights business model risks, requiring firms to demonstrate robust execution capabilities to meet investor expectations and sustain growth amid volatile market conditions and regulatory scrutiny.
Volkswagen to invest up to $5 billion in Rivian
Volkswagen's $5 billion investment in Rivian aims to address Rivian's cash flow issues and support its scaling efforts, but the automaker's consistent quarterly losses and reliance on future product launches highlight significant execution risks and the need for effective cost management to achieve profitability.
The partnership to develop next-generation software-defined vehicle platforms enhances both companies' technological profiles and competitive edge, yet the competitive EV market and previous stakeholder exits, like Ford's, underscore the strategic risks of sustaining market position and technological leadership amidst rapid industry changes.
Business Operations Risk
AI Boom Drives Up Risk of Power Squeeze
The rapid growth in AI data centers significantly increases electricity demand, potentially leading to power shortages and production disruptions, especially in regions like Dallas and Northern Virginia where grid reliability is already a concern.
As AI demands more data center capacity, companies like Amazon face the challenge of ensuring continuous and reliable power supply to prevent business interruptions, while also addressing the environmental impact of increased fossil fuel reliance.
Legal & Compliance Risk
EU says Apple violated app developers’ rights, could be fined 10% of revenue
Apple is under investigation by the European Commission for allegedly violating the Digital Markets Act (DMA) by restricting app developers from steering consumers to alternative purchasing channels, with potential fines of up to 20% of Apple's global turnover for repeat infringements and the possibility of forced divestitures for systemic non-compliance.
The commission is also probing Apple's "Core Technology Fee" and related contractual requirements for third-party developers, which may further complicate compliance with the DMA and expose Apple to additional regulatory scrutiny and legal challenges, impacting its app distribution and fee structures.
Technology Risk
Why AI solutions have just three months to prove themselves
The rush to integrate AI capabilities into software offerings, often driven by hype rather than substantial advancements, raises concerns about the actual functionality and ROI of these AI-enhanced products, with companies relying heavily on AI for productivity gains and cost savings.
The increasing scrutiny and longer purchasing cycles for AI platforms, involving financial and legal departments, indicate potential operational risks related to the implementation and integration of AI solutions, as well as the need for faster ROI and robust performance metrics to justify investments.