May 18, 2022

The Work of Automation


Mani Moksha



The redemption of work by technology is an idea with a sordid history. New technologies in the workplace have often been accompanied by promises to reduce drudgery or danger. Just as often, however, they have provoked embittered conflicts between workers and management over the contents and cadence of work. 

Today, this process of innovation and struggle is perhaps best exemplified by developments around “AI” (artificial intelligence). In addition to a steady stream of news about AI’s burgeoning abilities, its implications for the future of work have been elaborated in a raft of policy-oriented research, often focused on the automation of paid work while overlooking the unpaid but critical work of social reproduction. There are frequent dramatic projections about the vulnerability of various jobs to AI-enabled automation – grist for the marketing mills of AI entrepreneurs who profit from selling the idea that a future brimming with automation is inevitable.

As a field of research, AI has waxed and waned since its inception in the 1950s. The current AI boom took off from advances in the field of machine learning around 2012. Global funding for AI companies, rapidly increasing since 2014 and reaching about US $38 billion in the first half of 2021, speaks to the pace of these advances. Some of the largest tech firms, like Google and Microsoft, have even adopted an “AI-first” business strategy. AI applications designed to help businesses reduce labour costs through automation – “labour-saving AI” – are a feature of this boom, often couched in managerial euphemisms like “operations optimization.”

Just what counts as AI remains a matter of interpretation. There is a lack of consensus about when the accomplishment of “human behaviours” by a computer program is really intelligence and when it’s just algorithmic mimicry. Yet there’s no mistaking that businesses have started employing machine learning to automate job tasks previously performed by humans. More or less independently drawing reliable inferences from real-world inputs, labour-saving AI can estimate the risk of breast cancer in a mammogram, extract textual information from documents for paralegal analysis, generate the transcript of an audio conversation, and identify weeds on a tract of farmland.

If this AI boom and its techno-optimist bards invite us to anticipate a future alive with automation, then we’re in urgent need of a new politics inviting us to a different version of that future: one in which workers, not cost-cutting managers, determine the nature of their work – including the role of technology in it.

 

Controlling work and selling the future

Technical advances aren’t actually the main driving force behind the automation of jobs. Popular narratives of technological progress sell a romance of historical inevitability, but they overlook how conflicts between unequal interests shape which technologies are adopted and what gets counted as “progress” in the first place. The automation of jobs originates in the unequal and antagonistic relations of power inherent to wage labour under capitalism. 

Capitalist wage labour is generally organized to maximize worker effort while minimizing worker compensation. The further this disparity can be extended without fomenting worker unrest or attrition, the better positioned a business is to realize profits in the face of competition. Managing the disparity better than the competition is a large part of what “productivity” is about. This is why management tends to repeatedly reinvent and reorganize the tools and tasks comprising work – introducing automation, reforming workflows, or both. Automation can lead not only to job loss but also to deskilling: jobs simplified to the point of restricting meaningful decision-making and the chance to develop expertise. Deskilling through automation tends to result in stunted career paths, reduced compensation, speedup of work, and irregular hours.

By foregrounding this structural dynamic, we can see how, rather than ushering in an era of work less burdened with drudgery, labour-saving AI applications continue capitalism’s preoccupation with productivity by enabling managerial control over the labour process. Expanded adoption of labour-saving AI applications is likely to amplify, not alleviate, unequal relations of power. But how far management can actually improve productivity at the expense of workers, through automation or anything else, is where this structural dynamic meets the circumstances of social and political context – the strength of unions, for example. So rather than taking forecasts of jobless futures at face value, we should be asking: how can we challenge the adoption of labour-saving AI where it operates as a means of expanding managerial control?

 

Building solidarity before building AI

Challenges to labour-saving AI applications, and to “algorithmic management” more generally, have taken the form of collective actions and bargaining at workplaces where those tools have been or might be deployed. Food couriers have successfully unionized in an effort to limit algorithmic control over their working hours. Striking workers at Marriott Hotels secured the right to be consulted about any plans to introduce automation. Actions like these speak to the growing backlash against AI applications integrated into the labour process to improve productivity.

The work of producing such AI applications in the first place forms another terrain on which to face down AI-enabled managerial control. Beyond the marketing mythos of AI as an extraordinarily advanced technology is a production process crisscrossed with competing interests – creating opportunities to intervene before those technologies are manufactured or deployed. Popular thinking about tech workers and their privileges (office catering, laundry services) might imply that they’re unlikely to champion the struggles of lower-income service workers like couriers and hotel staff. But labour unrest in the tech sector has seen coalitions built across this divide, as when the Tech Workers Coalition contributed to the successful unionization of cafeteria staff at Facebook in 2017. 

Labour unrest among tech workers has also been sparked by concerns about the harmful effects of the technologies they help to build. In 2018, an organized campaign among Google staff pushed back against a collaboration with the Pentagon, dubbed Project Maven, that aimed to weaponize AI. Google management cancelled the contract that same year, setting a precedent that encouraged tech workers in other companies as they too fought the development of problematic technologies. This is the origin story of a rank-and-file movement in the tech sector known as #TechWontBuildIt, an emergent radicalism that can also be mobilized to challenge contemporary labour-saving AI projects that intensify managerial control.

So at what stage in the process of building these projects can workers push back? And how might they be moved to do so?

 

The problem with “business problems” 

It is in wrangling over a “business problem,” not in coding machine learning models, that the work of automation actually begins. Identifying such a problem, like inadequate speed or accuracy in recognizing defective outputs along a production line, allows managers to translate their interests into mathematical problems: a mundane search for “efficiencies” instead of an attempt to exert control over workers.

When a business is looking to adopt labour-saving AI – an expensive investment, given those applications’ computational complexity and their need to be tailored to specific contexts – exploratory research is often first conducted in the work environment where it might be deployed. This research examines the business problem and attempts to determine how various tasks, and workers’ jobs, may be augmented or replaced by AI. Workers are often required to collaborate with researchers or sales associates in this process, participating in exploratory interviews, job shadowing exercises, and so on. It’s here that the first signs of conflict can appear. Workers express concerns about the speedup of their work, or its simplification to the point where their value as employees is diminished. Sometimes they even refuse to cooperate, or they strategically provide information that defines the business problem in ways more favourable to them than to management.

The production of labour-saving AI, in other words, begins with an effort to enforce a managerial view of work – a problem of inefficiency requiring a technological solution – and with workers’ resistance to that view and its consequences. From the start, that production process dramatizes the antagonistic relations of power inherent to work under capitalism, not a smoothly unfolding inevitability of technological innovation in the workplace.

Once exploratory research has identified particular work tasks for automation, AI applications are modelled and designed. This work is not very standardized. It involves a range of specialities that come together in configuring, testing, and reconfiguring these applications, and so doesn’t lend itself to mass production. It relies instead on teams of workers who tinker away in a workshop-like labour process. Those teams might include software developers, machine learning engineers, data scientists, designers, and user experience researchers.  

Notably, those design teams don’t include data labellers, who prepare the raw data for machine learning models to begin analyzing – the technical foundations of AI. Data labellers pore over huge volumes of individual data files to classify them or highlight relevant contents for a model to compute (e.g. outlining traffic signs in images of city roads). This work is rote and highly prescriptive. Often called “microwork,” it’s usually crowdsourced at piece rates, or outsourced in bulk to companies who manage it. It’s frequently performed by low-paid, precariously employed workers in the Global South. 

This division of labour within AI’s design process continues a long history of standardization and managerialism in computation work. Crowdsourcing data-labelling tasks to poorly paid microworkers resembles the splintering of earlier generations of clerical workers into (literally) front and back offices in the mid-20th century. Management adopted this arrangement to “rationalize” the various forms of clerical work – that is, to improve their productivity. And so typing pools emerged in the back office: large numbers of lesser-paid typists, usually women, who received a steady flow of short jobs to type up. 

Much the same thing happened with the first generation of computer programmers. While they commanded hefty salaries and were relatively sheltered from managers’ oversight in the 1950s and 1960s – much like data scientists today – managerial control eventually triumphed by dividing them: tasking some with the abstract conceptualization of software design, others with the more prescribed and repetitive work of actually coding these designs. And so “machine rooms” emerged, where so-called operators wrangled with the instructions of software programmers.

A similar dynamic is underway today. Tech incumbents and startups are increasingly automating tasks within the labour process that produces AI. Since about 2018, providers of cloud-based computation services have started offering tools that automate various tasks involved in developing machine learning models; other applications allow users to write and explain code in plain language, chipping away at the need for specialized knowledge of programming languages. The possibility of deskilling through gradual standardization looms over even the tech workers who make AI. They too are subject to businesses’ pursuit of improved productivity through increased managerial control over the labour process, with reduced status and wages for workers. 

The work of producing AI remains embedded in the capitalist logic of profit. Deskilling AI work with AI itself is a feature of this logic, not a bug.

 

Power over productivity    

There is a historic opportunity to harness the brewing unrest among tech workers to the growing labour militancy among less privileged workers such as couriers and hotel staff. Both of those groups are implicated in one of capital’s central “business problems” – how to achieve greater productivity with lower labour costs – and both are subject to managerial control through AI-enabled automation. The tech workers who develop labour-saving AI applications, helping to legitimize such technologies as the solution to that business problem, may eventually find their own livelihoods endangered by the false promises of workplace automation. Lower-paid and with more limited job security, contract workers now outnumber full-time staff at Google.

For now, much of the AI labour process retains its workshop-like character and requires highly specialized training, and its teams of workers have significant potential bargaining power. By withholding their labour, they can seriously disrupt that labour process and its profitability. The collective concerns about worker rights and social responsibilities that put #TechWontBuildIt into motion linger on. So do efforts at unionization, headline-grabbing mass actions, and hundreds of other more or less organized challenges to management rolling through the tech sector. If this collective conscience is to push back effectively against forms of AI that extend managerial control, tech workers and those other workers affected by “labour-saving” – frequently labour-harming – technologies need to be considered in tandem, and new solidarities forged between them. Not just because they exist at different stages of the same economic process, but also because they each participate in the same broader working-class struggle for the power to define the contents and cadence of work.

Mani Moksha is an editor of Midnight Sun. He is also a researcher working in the technology industry. @MokshaMani on Twitter.