December 16, 2023

AI Countergovernance


Blair Attard-Frost



In the last year, the rapid uptake of generative artificial intelligence (AI) systems such as ChatGPT, DALL-E, and Stable Diffusion has led to countless public discussions about the ethics and regulation of AI. There have also been revelations of significant harms caused by those systems. The development of ChatGPT and many other AI systems is deeply entwined with exploitative labour outsourcing practices. The datasets used to train many generative AI systems were obtained through industrial-scale theft of creative works from millions of artists. Tech companies are notoriously secretive about the immense energy and water consumption required to operate the data centers and large language models that power their generative AI systems. Both upstream and downstream, the value chains of these AI systems are rife with harms to society and the environment.

AI governance should serve as a remedy for those harms. AI governance is a set of practices intended to maximize the benefits and minimize the harms that AI systems create for a wide range of individuals and groups. Yet, instead of ensuring benefit and preventing harm to all of society, many state-led AI governance regimes are largely failing to serve civil society’s needs. While “experts” from industry and academia warn of science-fictional risks of human extinction caused by rogue AI, recent research shows that many communities are already being harmed by real-world AI systems and excluded from participating in AI policy development processes.

Leftist critics recognize that AI causes so many harms because AI systems are designed to function as tools of capitalist power. As a result, some have suggested that AI is irredeemable, and have called for a wholesale rejection of all AI systems. Others have proposed more nuanced governance mechanisms for countering capitalist power in AI systems, such as the formation of workers’ councils and peoples’ councils with the authority to set rules on the development and use of AI in their workplaces and communities.

These alternative mechanisms for community-led and worker-led governance indicate that many possibilities for exercising counterpower exist within AI systems. Those who have been marginalized and harmed by the emerging norms of AI governance can collectively organize to take governance into their own hands. Communities, workers, and even public servants need theories, practices, and strategies for AI countergovernance.

 

AI Countergovernance in Theory

The participatory governance researcher Rikki Dean describes countergovernance as “citizen opposition or contestation constituted with a direct and formal relation to power.” Countergovernance concerns a wide range of institutions and mechanisms through which a marginalized community can oppose a source of power that has failed to serve their interests. Community-led countergovernance mechanisms often include collective organizing, civic oversight, petitions, community-led audits, and citizens’ assemblies.

Approaches to resisting power in AI systems typically direct their resistance against the exclusionary, unjust power structures embedded in the design of a system’s technological components: its datasets, algorithms, machine learning models, software, and hardware. For example, participatory AI aims to empower the people impacted by AI systems to participate directly in the design of those systems. Feminist AI approaches take more direct aim at the oppressive power relations embedded in AI systems, though they too tend to advocate for similar participatory interventions in technological design as the solution to those oppressive relations.

AI countergovernance movements take a somewhat different approach. These movements primarily direct organized, collective, community-led opposition against the underlying political and economic systems that are constitutive of “AI,” rather than intervening in the design of the technology itself. 

As an overview of past and present cases in the next section will show, the act of countergoverning AI is less about technology and more about organization. In all of these cases, the collective organizing of an AI countergovernance movement begins when state and industry AI governance regimes fail to serve the interests of a community. The AI technologies involved in these cases are important concerns, but they are all by-products of a failed AI governance regime and the logics of its constituent organizations.

The central concern in all of these cases is simple and non-technical: powerful organizations failed to serve the interests of a community.

 

AI Countergovernance in Practice 

With hindsight, we can interpret many of the organized tech backlashes of the late 2010s as cases of AI countergovernance. In 2018, many Google employees resigned, signed an open letter to Google’s CEO, and publicly protested in response to Google’s decision to partner with the Pentagon on a military AI development project called Project Maven. Through these acts of resistance, the employees eventually forced Google to re-organize some of its corporate AI governance practices around the interests of its workers. Google declined to renew the Project Maven contract, and the drone-based surveillance system slated for development as part of the project was scrapped. Project Maven’s proposed AI system was successfully countergoverned by resisting Google – by forcing it to divest from those particular military AI projects – rather than by staging a design intervention in the proposed surveillance system.

In 2019, local community backlash against a “smart city” development project in Toronto led by Sidewalk Labs (a sibling company of Google) was reaching a crescendo. “Smart city” projects such as the proposed Sidewalk Toronto initiative typically aim to redevelop a large area of urban land so that a variety of new digital technologies, sensors, devices, and AI applications can be integrated into the area’s infrastructure and services. On February 14, 2019, an article in the Toronto Star revealed that Sidewalk Labs had secretly cut a deal with Waterfront Toronto, the intergovernmental authority responsible for stewarding the waterfront area where the “smart city” was to be developed. Sidewalk was to be given a larger parcel of post-industrial land to redevelop than originally proposed, and in exchange, a sizable share of the redeveloped land’s tax revenues would be diverted from public coffers to Sidewalk Labs.

In addition to Sidewalk’s land grab, the local community raised many concerns regarding invasive and exploitative data collection practices, monitoring technologies, and machine learning systems that Sidewalk had planned to integrate into the infrastructure and services of the “smart city.” Though these concerns were voiced at a series of town hall meetings organized by Sidewalk Labs, Bianca Wylie, an organizer of the local #BlockSidewalk movement, observed that the meetings were often tokenizing and performative acts of consultation. Sidewalk was uninterested in sincere engagement with the community’s vocal concerns about privacy and the commodification of civic data.

#BlockSidewalk organizers continued to resist the Sidewalk Toronto project throughout 2019 and early 2020, attending public meetings about the project and holding their own, communicating their concerns to the local community and the press, and submitting their recommendations for alternative digital infrastructure governance practices to City Council. In May 2020, Sidewalk announced their intent to cancel the Toronto project due to “unprecedented economic uncertainty” amidst the COVID-19 pandemic, though many observers have noted that the local community’s backlash was likely a significant factor in Sidewalk’s decision to abandon the project. Collective organizing against harmful practices of digital and urban governance ultimately forced Sidewalk to respond to the community’s resistance by withdrawing from the project altogether.

More recently, Canada has again become a site of intense backlash over the proposed Artificial Intelligence and Data Act (AIDA), a bill that was tabled in Parliament in June 2022 with the aim of establishing comprehensive national regulations on AI systems. AIDA has been met with extensive criticism on many grounds. The entire bill applies only to “high-impact systems,” but the text of the bill does not define what that term means; its definition is to be determined only after the bill is passed into law. Additionally, AIDA was not developed in consultation with vulnerable communities that stand to be most severely impacted by AI. These gaps in democratic process and oversight have led some critics to describe AIDA as a “blank cheque,” “incomplete,” and “inadequate.”

Many practices of countergovernance have emerged in response to the failures of state-led AI governance associated with AIDA. Concerned individuals and groups have organized community-led audits and evaluations of AIDA through formats such as reports, roundtables, media engagement, and freedom of information requests. Open letters and petitions from groups concerned that AIDA fails to protect human rights, privacy, and the livelihoods of artists have urged the government to redraft or significantly alter the bill. In the absence of an effective public awareness campaign led by the government, I myself played a role in organizing the creation of an explainer document to help build awareness of the potential impacts of generative AI and AIDA on the livelihoods of Canada’s artists. Generative AI systems are frequently trained on large volumes of artistic works without authorization from or compensation to the creators of those works, but AIDA does not guarantee any protections against those exploitative data-sourcing practices.

At the time of writing, a parliamentary committee is studying AIDA. Despite organizers’ best efforts at resisting the bill, the government appears to have no intention of vigorously engaging with vulnerable communities or of making any significant alterations to AIDA so that it better serves those communities’ interests.

The impacts of generative AI on artists also became a flashpoint in the recent labour strikes by the Writers Guild of America (WGA) and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA). WGA and SAG-AFTRA members went on strike for a variety of reasons, but a significant point of contestation between the unions and the Hollywood studios was the industry’s use of generative AI and the training of generative AI systems on union-protected creative works. Beyond issues of labour exploitation and theft, generative AI has also raised concerns regarding the displacement and devaluation of creative labour. All of these concerns are at issue in the WGA and SAG-AFTRA strikes, making the strikes another case of AI countergovernance: a community of workers using a variety of mechanisms – including workers’ committees, petitions, public denunciations, and media engagement – to force a powerful industry to change its AI governance practices to better serve the community’s interests.

The WGA won several AI-related limitations in an agreement reached on September 25, including limitations on the use of generative AI in writing processes and limitations on the use of union-protected creative materials to train generative AI systems. On November 8, SAG-AFTRA announced they had reached a tentative agreement that includes “unprecedented provisions for consent and compensation that protects members from the threat of AI,” though the effectiveness of the new contract’s protections remains to be seen.

 

Governing AI from Below 

What can community organizers, workers, and public servants learn from these cases? Looking across all of these backlashes against harmful AI governance regimes, some strategies and paths forward for AI countergovernance become clear.

You can take AI governance into your own hands. If you and others in the communities you’re part of are concerned about the impacts AI might have on you, there are actions you can take to build counterpower and develop your own community-led AI governance practices:

  • Talk: Have conversations with others in your community about your interests, hopes, and fears regarding AI. Identify shared values and needs that might benefit from or be threatened by AI. Identify specific types of AI systems that might impact or are impacting your community. Remember that “AI” is not just machine learning applications, but an entire network of organizations and technologies, including the digital, data, and computing infrastructures that support those applications. If you want to make your conversations more formal, you can work together to organize roundtable discussions, town hall meetings, legislative theatre, and citizens’ assemblies.
  • Investigate: Learn more about the impacts that AI systems are having on other communities. Learn more about what mechanisms to regulate AI are currently in development or being enforced by the governments that have power over your community. Resources such as the OECD AI Policy Observatory, AlgorithmWatch, AI Index, AI Incident Database, and Fairwork can help you investigate AI-related issues where you live and elsewhere. If you and others in your community feel like your local regulations do not serve your shared interests or will not effectively address the AI systems that impact your community, talk more together about what regulations or other resources would better serve you. 
  • Build awareness: Create, locate, and distribute knowledge resources that support people in your community who may not be familiar with AI-related issues. Help build their awareness of how AI systems and AI governance regimes might impact their lives. Provide them with access to the knowledge resources they need to continue building their own awareness of the issues. Helpful knowledge resources can include plain-language explainer documents, reading and viewing lists, reports and summaries, and videos and podcasts.
  • Oppose: If the regulations and other policies and resources provided by state power do not serve your community’s shared interests, then you might decide to more directly oppose the AI systems that impact your community. AI countergovernance methods often include (but are not limited to): making rules, shared resources, and plans to govern AI and mitigate its harmful impacts within your community without state support; petitions, open letters, and letter-writing campaigns addressed to sources of state and industry power (e.g., elected representatives, administrative officials, executives); community-led audits and evaluations of harmful AI governance regimes, AI systems, policy instruments, and organizations; and contacting and engaging with journalists who report on AI issues to heighten public awareness of your concerns and your community’s other countergovernance efforts.

Workers can directly oppose AI in their workplaces as well. In the cases of both the Project Maven backlash and the WGA/SAG-AFTRA strikes, labour power organized to counteract industry power. These two cases teach us that workers, labour unions, and professional associations can and should decide how the AI systems that most affect them must be developed and used – or whether any applications of AI should be permitted at all.

Workers without a union can talk, investigate, build awareness, and oppose harmful AI systems using the methods described above. They can also talk about and investigate strategies for unionizing their workplaces. Unionized workers can deploy the organizing methods described above in addition to leveraging other labour governance mechanisms at their disposal, such as collective bargaining agreements and workplace policy demands. 

Sadly, the workers’ victory against Project Maven was short-lived: Google has since launched a series of new military AI partnerships. From Project Maven, we learn that worker resistance must be both organized and sustained to achieve successful long-term countergovernance outcomes. Sustained resistance can be built through worker-led governance mechanisms such as permanent committees or working groups on workplace AI policy – the WGA’s AI working group, for example.

Finally, as brokers of state power who are ostensibly paid to serve the interests of their publics, public servants have an important role to play in the process of AI countergovernance. As we see in the cases of the Sidewalk Labs and AIDA backlashes, AI countergovernance movements often arise when state power privileges the interests of industry over those of communities. Community-minded public servants can and should resist the industry capture of state power within their workplaces. Public servants should also organize efforts to promote participatory policy-making processes, in AI governance and beyond.

When countergovernance movements do arise, public servants should vigorously engage with aggrieved communities. Communities should be provided with the resources they need to talk, investigate, and build awareness around the AI-related issues that are of greatest concern to them. Public servants have a professional duty to empower marginalized communities – and the broader publics they belong to – to meaningfully participate in the state’s AI policy development processes. They should work directly with community groups to co-design stronger AI policies and support programs that are more sensitive to community needs.

The top-down AI governance regimes of state and industry power often fail to serve the interests of marginalized communities and workers. If we organize together, we have the power to govern AI ourselves from the bottom up.

Blair Attard-Frost is a researcher, teacher, and storyteller. Their academic work investigates the ethics and governance of artificial intelligence. Their creative work explores potential futures for AI governance through the lenses of speculative fiction, glitch aesthetics, and transgender politics.