The Enshittification of Platforms
Hi, enjoy this week's curated risk and business updates.
Enshittification is a term coined by Cory Doctorow to describe a phenomenon where online platforms become less useful, less enjoyable, or less user-friendly over time. While it's not a formally defined term with clear criteria, it's often used to describe situations where platforms prioritize monetization, advertising, or other business interests over user experience. Typical enshittification phases include:
Phase 1: Good to Users - focus is on building a good user experience, often funded by venture capital.
Phase 2: Good to Business Customers - business customer experience and value becomes the focus, at the expense of user experience.
Phase 3: Extraction of Value for Shareholders - at the expense of user and business customer experience.
As Doctorow puts it: “This is enshittification: Surpluses are first directed to users; then, once they’re locked in, surpluses go to suppliers; then once they’re locked in, the surplus is handed to shareholders and the platform becomes a useless pile of shit.”
This cycle traps users because switching to alternatives can be difficult. Doctorow proposes solutions such as the "end-to-end principle," under which platforms deliver what users actually request, and a "right of exit" allowing users to leave seamlessly with their data.
Examples of Enshittification
Some examples of platforms that have been discussed in relation to enshittification include:
Reddit: Its handling of communities, especially during the blackout protests in 2023, where many subreddits went private in protest of changes to the platform's API pricing.
Twitter (now X): Shifts in user experience, including the algorithmic timeline, increased advertising, and an overhauled verification system.
Uber: The ride-sharing platform has faced criticism for changes to its pricing model, including surge pricing and the reduction of driver earnings.
The concept of enshittification is inherently subjective; whether a platform has degraded depends on the perspective of the user.
Request more information on DelCreo’s Risk Universe and risk assessment services.
As a reminder, here are the Risk Universe categories we use to understand and manage risk:
External Risk
Governance Risk
Strategic Risk
Product Risk
Business Operations Risk
Legal & Compliance Risk
Financial Risk
Technology Risk
We apply our understanding of risk maps and risk universes to advise clients on strategic business decisions and to optimize the management of risk across the enterprise.
Weighing the Risks
Weekly Highlights
Three Key Ideas:
The UK tech industry is hindered by limited pro-tech funding, political instability, a problematic talent pipeline, overreliance on foreign investors, and stringent immigration policies, affecting its competitiveness and ability to scale.
The rapid development of AI by profit-driven firms poses economic risks due to uncertainties in profitability and ethical management, while regulatory challenges add complexity to ensuring sustainable growth and protecting user interests.
The U.S. economy faces a critical juncture with high interest rates balancing inflation control and recession risk, while resilient labor market conditions and healthy household income indicate potential stability amidst economic uncertainty.
Recommendations:
To address these risks, business and risk managers should develop a comprehensive risk management strategy that includes securing diverse funding sources, implementing robust AI governance and ethical practices, fostering talent development, and maintaining flexible regulatory compliance to adapt to evolving economic and technological landscapes.
Risk Universe Weekly Updates
External Risk
How Keir Starmer Can Fix the UK’s Tech Industry
The UK tech industry faces economic risk due to limited pro-tech funding and political instability, which affect its competitiveness and investment attractiveness.
Challenges like a talent pipeline problem, overreliance on foreign investors, and stringent immigration policies hinder the growth of startups and the tech industry's potential to scale and innovate.
Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge?
The rapid development and commercialization of AI by profit-driven Big Tech firms like Microsoft and OpenAI pose significant economic risks due to uncertainties in achieving profitability and managing the ethical implications of AI applications, while the need for regulation adds complexity to ensuring sustainable growth and protecting user interests.
The rush to regulate AI reflects political pressures to safeguard society and align with democratic values, but the challenge lies in creating adaptable and effective regulations that can keep pace with AI's rapid evolution and address issues such as bias, discrimination, and the ethical use of AI in critical decision-making processes.
U.S. Economy Is Facing ‘Critical Moment’ With Consumers and Businesses
The U.S. economy is at a critical juncture, balancing the Federal Reserve's high interest rates to tame inflation without inducing a recession, with GDP growth slowing to 1.4% in the first quarter of 2024 from 3.4% in the previous quarter due to reduced consumer activity.
Despite the economic uncertainty, resilient labor market conditions and healthy household income and savings rates indicate the economy's ability to withstand current monetary policies, while consumers remain cautiously optimistic about inflation moderating further.
Governance Risk
Over Half Of Organisations Have AI Board, Leader, But No C-Suite Representation: Gartner
Despite the prevalence of AI boards and leaders in over half of the surveyed organizations, there is significant ambiguity in accountability and governance, with only 25% of respondents identifying the chief information officer as responsible for AI initiatives, highlighting risks in decision-making clarity and strategic alignment.
The diverse composition of AI board accountability, with roles spread across various leaders and business units, underscores structural and cultural challenges in implementing cohesive AI strategies, potentially leading to inefficiencies and lack of consensus in driving AI value and mitigating associated risks.
Strategic Risk
AI transformation: Here’s what it actually means for a company
Companies are eagerly branding themselves as AI-driven to capitalize on investor and media hype, similar to the 2017 crypto-craze, risking reputational damage if their claims of AI integration are superficial or fail to deliver substantial results in a rapidly evolving and unpredictable technological landscape.
Successfully integrating AI requires a comprehensive transformation strategy encompassing people, products, and processes; however, companies face execution risks if they lack dedicated AI leaders, consistent upskilling of the workforce, and a coherent approach to rapidly adopting and utilizing new AI tools, potentially leading to fragmented efforts and missed opportunities.
Putting AI to work in finance: Using generative AI for transformational change
CFOs face significant execution risks in integrating generative AI into finance operations, requiring a strategic approach that includes clear business cases, proper data initiatives, and measurable ROI metrics to avoid fragmented efforts and ensure successful implementation.
Adopting generative AI can revolutionize finance operations, but it also brings the risk of brand and reputation damage if not managed responsibly; finance leaders must prioritize ethical AI practices, data security, and transparency to mitigate these risks while leveraging AI's transformative potential.
Business Operations Risk
The Evolution of AI in Business
Small and medium-sized businesses (SMBs) must integrate AI to avoid being outpaced by competitors who leverage AI for rapid production and enhanced customer support, as seen in industries like architecture and marketing where AI significantly reduces operational times and costs, potentially leading to business interruption for those not adapting.
Effective AI implementation requires a well-structured approach, including readiness assessments and the integration of suitable AI technologies, while addressing data privacy and security concerns; failure to do so can lead to talent management challenges and vulnerabilities in corporate IT infrastructure, undermining operational efficiency and competitive advantage.
Technology Risk
CIOs’ concerns over generative AI echo those of the early days of cloud computing
CIOs recognize the necessity of integrating generative AI responsibly, emphasizing governance, security, and compliance to ensure safe and effective usage, reflecting lessons learned from the earlier adoption of cloud technology to avoid issues like "shadow IT" where employees bypass official channels.
With the increasing demand for AI tools, companies must educate their workforce to use AI effectively, balancing the need to innovate and enhance customer experiences with the importance of maintaining data security and aligning with organizational values, to mitigate risks and retain talent.
Expert Warns That AI Industry Due for Huge Collapse
The current AI boom is raising concerns of a potential bubble, reminiscent of the dot-com era, with experts like James Ferguson and Richard Windsor highlighting issues such as unproven technology, AI "hallucinations," and excessive energy consumption, which undermine the reliability and sustainability of AI models.
The substantial investment and operational resources required for AI, exemplified by Google's significant rise in emissions due to AI activities, point to a risk of inefficiency and environmental impact, potentially leading to a significant industry downturn if the perceived AI bubble bursts.
‘Enshittification’ is coming for absolutely everything
The concept of "enshittification" highlights the degradation of technology platforms, where companies initially offer value to users, then shift focus to business customers, and eventually prioritize shareholder profits, leading to deteriorated user experiences and potential product failures.
The unchecked pursuit of profit by tech companies, exacerbated by reduced competition, inadequate regulation, and weakened worker power, risks creating unsustainable operations that may face significant backlash from users, legal challenges, and eventual market decline.