
AI Activists Rethink Strategy: Connecting AI's Impact to Economic Struggles

7:31 PM   |   04 June 2025


In the spring of 2018, a significant moment unfolded within the tech industry. Thousands of Google employees, galvanized by ethical concerns, successfully pressured the company to drop a major artificial intelligence contract with the Pentagon. This internal uprising, centered around Project Maven, a contract to analyze drone footage using AI, highlighted the growing moral qualms among tech workers regarding the application of their creations. The pressure was so intense that Google subsequently pledged not to use its AI for weapons or certain surveillance systems in the future, a move that resonated widely and inspired a new wave of tech activism.

This victory, achieved through unprecedented employee-led protests, marked a high point for a certain kind of AI activism focused on specific ethical red lines and corporate accountability. However, seven years later, the landscape has shifted dramatically, and the legacy of that moment appears more complex. Google has since revised its AI ethics principles, re-opening the door to some previously banned use cases. Meanwhile, companies across the industry are releasing powerful new AI tools at breakneck speed, often with little public oversight or understanding of their full societal implications.

This rapidly evolving environment, characterized by accelerating technological development and shifting corporate stances, has prompted a critical re-evaluation among those working to ensure AI benefits society rather than exacerbating harm. A new report from the AI Now Institute, a prominent think tank dedicated to studying the social implications of AI, offers a comprehensive analysis of the current state of affairs and proposes a radical shift in strategy for activists, civil society groups, and workers.

The Shifting Landscape of AI Power

The AI Now report, titled "AI Now 2025 Landscape Report," paints a picture of an industry where power is becoming increasingly concentrated in the hands of a few dominant companies. These tech giants, possessing vast resources, data, and talent, are not only leading the charge in developing cutting-edge AI but are also actively shaping the public narrative surrounding the technology. This concentration of power raises significant concerns about who controls AI, whose interests it serves, and how its impacts are distributed across society.

The report argues that these dominant players have successfully promoted a particular vision of AI's future – one often centered on the imminent arrival of all-powerful superintelligence. This narrative, frequently championed by industry figures, posits a utopian age just around the corner, where AI will miraculously solve humanity's most pressing problems, from curing diseases to combating climate change. While the potential for AI to contribute to solving complex problems is real, the report critiques this narrative as a powerful rhetorical tool.

According to the report's authors, this idea of a transformative, abstract superintelligence has become "the argument to end all other arguments." It functions as a technological milestone so abstract and absolute that it gains default priority over other considerations, including the immediate, tangible harms AI is causing today. This focus on a distant, potentially speculative future, the report suggests, conveniently distracts from the present-day realities of AI deployment and the power dynamics at play.

[Image: Abstract green sphere against a dark background, symbolizing the state of AI in business. Photo-Illustration: WIRED Staff/Getty Images]

Connecting AI to Economic Realities: A New Strategic Imperative

One of the most significant recommendations from the AI Now report is the urgent need for advocacy and research groups to connect AI-related issues to broader economic concerns. This represents a strategic pivot from focusing solely on abstract ethical principles or future existential risks to addressing the concrete ways AI is impacting people's livelihoods and economic security *today*.

The report highlights that while the negative impacts of AI were once perhaps hidden or abstract for employees in many sectors, this is no longer the case. Previously stable career paths are now facing disruption across a wide range of industries, extending far beyond traditional tech roles. Software engineering, education, healthcare, creative fields, customer service, and many other professions are grappling with how AI tools are changing workflows, potentially automating tasks, and altering the demand for human labor.

This widespread disruption, the report argues, creates a crucial opportunity for workers and civil society to mobilize. By framing AI not just as a technical or ethical challenge, but as a fundamental economic issue, activists can tap into existing anxieties and grievances related to job security, wages, working conditions, and economic inequality. The report suggests that outcomes like widespread job loss or wage stagnation, often presented by tech companies as inevitable consequences of technological progress, should instead be framed as choices driven by corporate priorities – choices that workers have the power to resist and influence.

This approach could be particularly potent in the current political climate, where economic anxieties are high and there is a growing focus on the struggles of the working class. While political alignment on AI regulation varies, framing AI's impact through an economic lens can potentially build broader coalitions and resonate with a wider segment of the population than purely technical or abstract ethical arguments.

Why the Shift? Limitations of Regulation

The AI Now report expresses a degree of pessimism regarding the current power and effectiveness of regulators in curbing the negative impacts of AI and challenging the concentration of corporate power. While acknowledging that government agencies have launched numerous investigations into AI companies in recent years, the report notes that these efforts have so far resulted in few tangible outcomes. Significant legislative changes, such as a comprehensive national digital privacy law in the US, have failed to materialize. Enforcement actions against dominant AI players for anti-competitive practices or harmful deployments have been limited.

According to the report, despite much discussion about the need to curb monopoly power and limit personal data collection, "much of this activity failed to materialize into concrete enforcement action and legislative change, or to draw bright lines prohibiting specific anticompetitive business practices." This suggests that relying solely on top-down regulatory efforts may be insufficient to address the scale and speed of change driven by the AI industry.

Amba Kak, co-executive director of AI Now and a coauthor of the report, emphasizes this point. While AI Now has historically focused on government policy as a key avenue for change, she notes that it has become clear that these levers will be unsuccessful "unless power is built from below." This underscores the report's central thesis: effective change requires mobilizing affected communities and workers to exert pressure directly on companies and policymakers, rather than waiting for regulatory bodies to act.

The report argues that the slow pace of regulation, coupled with the significant lobbying power of large tech companies, makes it difficult for traditional government oversight to keep pace with the rapid advancements and deployments of AI. By the time regulations are debated and implemented, the technology and its impacts may have already evolved significantly, rendering the rules outdated or ineffective. Furthermore, the technical complexity of AI can pose challenges for regulators in understanding and defining the specific harms that need to be addressed.

Building Power from Below: Worker Organizing and Civil Society Action

Given the perceived limitations of regulatory approaches, the AI Now report strongly advocates for building power from the ground up. This involves empowering workers, consumers, and communities to organize and directly challenge the ways AI is being developed and deployed by dominant corporations. The report highlights several case studies where such bottom-up action has proven effective in halting or modifying the implementation of AI systems.

One compelling example cited in the report is the activism led by National Nurses United, a prominent nurses' union in the United States. The union staged protests against the increasing use of AI in healthcare settings, raising concerns about its impact on patient safety and the erosion of clinical judgment. They also conducted their own survey, gathering evidence that suggested AI tools could undermine the expertise of healthcare professionals and potentially compromise patient care.

This direct action and evidence-gathering by the union had a tangible impact. It led a number of hospitals to institute new AI oversight mechanisms, ensuring greater human review and control over AI-driven decisions. Some hospitals also scaled back the rollout of certain automated tools in response to the nurses' concerns. This case study illustrates the power of organized labor and affected workers to influence the adoption and governance of AI within their own industries.

The report suggests that similar organizing efforts can be replicated in other sectors facing AI-driven disruption. This could involve workers demanding a say in how AI tools are designed and implemented in their workplaces, negotiating for protections against algorithmic management or displacement, and advocating for training and support to adapt to changing job requirements. Civil society groups can play a crucial role in supporting these efforts, providing research, legal expertise, and public awareness campaigns.

Sarah Myers West, co-executive director of AI Now and a coauthor of the report, emphasizes the profound nature of the current moment. "What's unique to this moment is this push to integrate AI everywhere. It’s granting tech companies and the people that run them new kinds of power that go way beyond just deepening their pockets," she states. This expansion of power, she argues, is not merely economic but involves a "profound social and economic and political reshaping of the fabric of our lives." Addressing this requires a different approach to accounting for AI harms, one that acknowledges and challenges the underlying power structures.

Beyond Product Evaluation: Focusing on Systemic Power

The AI Now report clarifies that its focus is not on evaluating individual AI products as inherently 'good' or 'bad'. Kate Brennan, an associate director at AI Now and another coauthor, explains, "We're not interested in discussing whether or not an individual technology like ChatGPT is good." Instead, the report poses a more fundamental question: "We're asking whether it's good for society that these companies have unaccountable power."

This distinction is crucial. The report acknowledges that specific AI tools might offer benefits or be perceived as interesting and exciting. However, the potential utility of a particular product does not negate the risks associated with the concentration of power in the hands of a few entities that control the infrastructure, data, and development pathways for AI. This unaccountable power can lead to decisions about AI deployment that prioritize corporate profits or control over societal well-being, regardless of the potential benefits of the technology itself.

By shifting the focus from product-level critiques to systemic power dynamics, the AI Now report aims to provide a framework for activism that addresses the root causes of many AI-related harms. This involves scrutinizing the business models, market structures, and political influence of dominant AI companies, rather than getting bogged down in debates about the technical capabilities or ethical nuances of specific applications.

Challenges and Opportunities for the New Strategy

Adopting this new strategy of connecting AI to economic struggles and building power from below presents both challenges and opportunities. One challenge is the sheer scale and complexity of the AI industry and its integration into various sectors. Organizing workers and communities across diverse industries requires significant effort and coordination. Furthermore, the abstract nature of some AI systems can make it difficult for non-experts to understand their potential impacts and mobilize effectively.

However, the report suggests that the increasing visibility of AI's economic impacts creates a fertile ground for organizing. As more people experience job insecurity, algorithmic surveillance in the workplace, or the erosion of professional autonomy due to AI, the issue becomes less abstract and more directly relevant to their lives. This shared experience of disruption can serve as a powerful catalyst for collective action.

There is also an opportunity for cross-sector collaboration. Workers in different industries, civil society groups focused on labor rights, economic justice, and digital rights, and researchers studying the societal impacts of AI can form powerful coalitions. By working together, they can amplify their voices, share resources and expertise, and develop coordinated strategies to challenge corporate power and advocate for AI development and deployment that serves the public good.

The report implicitly calls for a reframing of the public discourse around AI. Instead of focusing solely on the promises of future superintelligence or the risks of hypothetical AI takeovers, the conversation needs to center on the present-day realities of how AI is affecting jobs, wages, inequality, and corporate control. This shift in narrative is essential for building the public understanding and support necessary for effective bottom-up organizing.

Conclusion: A Call for a Grounded Approach to AI Activism

The AI Now Institute's latest report offers a timely and critical assessment of the current AI landscape and a compelling call to action for activists. Faced with the consolidation of power in a few dominant tech companies and the limitations of traditional regulatory approaches, the report argues that a strategic pivot is necessary. By connecting the impacts of AI to tangible economic struggles – job security, wages, and corporate power – activists can build broader coalitions and mobilize affected communities.

The report's emphasis on building power from below, through worker organizing and civil society action, provides a concrete pathway for challenging the unaccountable power of dominant AI players. Examples like the National Nurses United campaign demonstrate that direct action and collective bargaining can lead to meaningful changes in how AI is implemented and governed.

Ultimately, the AI Now report urges a grounded approach to AI activism, one that moves beyond abstract ethical debates and futuristic speculation to address the concrete, present-day realities of how AI is reshaping our economy and society. By focusing on the material impacts of AI and empowering those most affected, activists can work towards a future where artificial intelligence serves humanity, rather than concentrating power and exacerbating inequality.

This strategic shift is not about rejecting AI technology outright, but about challenging the power structures that currently dictate its development and deployment. It is a call to ensure that the future of AI is shaped not just by the interests of a few powerful corporations, but by the collective needs and aspirations of workers, communities, and society as a whole.