Keynote. AI as Labour Commons?
- Marcelo Vieta
On November 12, 2025, I delivered a keynote talk entitled "Can AI Become a Labour Commons? Preventing Dystopia by Cooperativizing the Code" at the Platform Cooperativism Consortium's "Cooperative AI" conference in Istanbul, Turkiye. To read my talk, see below. If you'd like to see the accompanying PowerPoint, please reach out to me. The conference brought together a diverse mix of cooperative practitioners, high-tech workers, AI specialists, and university and independent researchers, and was wonderfully educational and collaborative. It was brilliantly co-hosted by the PCC's Trebor Scholz and Stefano Tortorici, the Turkiye-based NeedsMap, and held at the Istanbul Planning Agency. Many thanks to Trebor and Stefano for the invite, and to the members of NeedsMap and the staff at the Planning Agency for their wonderful hosting. It was a truly inspiring and eye-opening experience!

Can AI Become a Labour Commons?
Preventing Dystopia by Cooperativizing the Code
Keynote written and delivered by Marcelo Vieta (www.vieta.ca)
For the Cooperative AI Conference of the Platform Cooperativism Consortium (https://platform.coop/events/cooperativeai/)
Delivered in Istanbul, Turkiye on Nov. 12, 2025
Part 1: The Questions and the Stakes
Good afternoon.
Today, I want to address three questions that I think are crucial for ensuring Cooperative AI:
· Can we democratize AI?
· Can artificial intelligence become a commons?
· Can we prevent the dystopian futures that seem to be looming by cooperativizing the code?
The answers to these questions, I want to argue, are to be found at the intersection of technological transformation, democratic imagination, and the future of work.
My talk today is not technical but ethico-political, philosophical, and action-oriented.
It is a call to pause, to reflect, and to reimagine the human–technology relationship before AI slips further from our grasp.
I want to offer a series of observations drawn from my research interests and work with labour and cooperative movements, the social and solidarity economy, critical theory, communication studies, and the philosophy of technology.
In my talk, I’ll move from a philosophical framing to tangible practices already emerging in cooperative movements, data and platform co‑ops, and local governance.
The questions and observations I share with you today, I propose, are crucial for anyone thinking about the ethics and the future of AI.
Whether you are a technologist, a policymaker, a community organizer, a digital worker, or a tech user, I invite us to consider how we might transform the code and infrastructure of AI into something more democratic, more ethical, and more humane – into what I, with others, have come to call a labour commons.
In contrast to the "break things and apologize later" method of the big tech sector, I propose that, with AI, we should take a breath and re-think it. There are still possibilities for hope, though these possibilities may be closing quickly.
Why do I say that time is short?
For one, because lost in the AI hype of speed, scale, and the drive for superintelligence proffered by the so-called "Big Seven" tech giants and their evangelists are the massive impacts and unintended consequences of the AI push. Let me name just four:
The environmental impacts. Training and deploying large models requires vast computation, water, and energy. Data centres are today’s new factories – except that the exhaust is global. Annual energy footprints of hyperscalers now rival those of entire cities. Google’s alone parallels the energy use of 2.3 million households. These burdens concentrate in specific regions and grids, externalizing costs to vulnerable communities and ecosystems.
Infringements on creative and intellectual labour. Much of today's AI was trained on scraped cultural labour – writing, art, scholarship, journalism – often without consent, compensation, or attribution. This business model optimizes for shareholder value rather than collective cultural memory. The Writers Guild of America's recent labour dispute was exactly about this.
Lost jobs. Automation reshapes work across sectors; "routine cognitive" work in the global North is particularly vulnerable, while, as we saw in Antonio's brilliant film, the jobs of poorly paid annotators are relegated to the global South. Securing good jobs – and the meaning work gives us – is of little consideration to the big AI tech companies.
Fragmented political discourse. Synthetic media at scale – deepfakes, hyper-targeted manipulation – erodes already fragile trust in democracy and public life.
I could go on. Most of us in this room know these ever-present dark sides of AI.
AI has brought us to a planetary inflection point. Why do I say this?
Because this is the first time in humanity’s existence that technology is running ahead of human oversight and planetary limits at such massive scale and speed.
But I don’t want to paint a dystopia. I want to advocate for ways of preventing it by creatively and democratically appropriating AI. What I want to argue is that there is still time; there is still space for hope – if, as all of you are so inspiringly doing, we reorient our imagination and our organizing.
What I want to highlight in my talk today is what is lost in the mainstream AI-hype: That there already are a growing number of us around the world thinking AI differently. Most of us here in this room … all of you … are at the cutting edge of re-envisioning a more just AI future.
First, a caveat: While the final phrase in the title of my talk is “Cooperativizing the Code,” I want to acknowledge that AI is much more than code. It is a complex socio-technical system that fuses coders and code, poorly paid workers and highly paid tech specialists, individual data and oligopolies, algorithmic infrastructure and AI users, cloud platforms and shareholders, learning models and human governance ecosystems.
The last point – the human and democratic governance of AI – is vital today for what many of us at this conference are advocating for. People, not artificial agents, are the DNA of the AI code. And so, the time to rethink AI is now … before we’re programmed out further.
Part 2: The Techné of the Ancient Greeks, the Technology of Modernity, and the AI of Today and Tomorrow
To begin to think about a Cooperative AI, I want to first step back to the philosophical and etymological roots of the word technology itself. If you will indulge me, I’d like to revisit the ancient Greeks, who still offer us insights for understanding the limits being breached by an unfettered AI. They also offer us continuing whispers as to what we can recuperate by approaching AI from a different ethical disposition.
In Socrates, Plato, and Aristotle, for instance, we encounter four key concepts still useful for understanding what is at stake in the human–technology relation: physis, poiesis, techné, and logos.
For the Greeks, these concepts were central to the intentionality of human making and the use of tools.
The ancient Greeks still help us see that human making occurs with and within nature, involving a holistic worldview.
First for them there was physis, or nature – "that which creates itself … which emerges from out of itself."
Physis is self-generating, driven by what it is meant to be.
For example, an acorn contains within it the potential to become an oak tree. The oak tree is in the acorn.
Poiesis, by contrast, refers to “the practical activity of making,” engaged with by humans when producing something.
In turn, techné is the art and craft of making – it embraces the skill, knowledge, and discipline of doing poiesis well.
And logos was, for the Greeks, what gives order and gathers.
For the Greeks, then, techné was not just about utility; it was craft, the right way to create, to help bring things into existence closer to their essential forms.
Techné was thus a form of wisdom.
It was a way of participating in the unfolding of the world, not dominating it.
These ideas profoundly influenced 20th century critical thinkers like Herbert Marcuse, sage of the New Left in the 1960s and a key member of the Frankfurt School.
Marcuse saw in techné the possibility of a post-technological rationality – one that could reconcile art and tools, ethics and innovation.
Marcuse contrasted this with the modern technological mindset – technological rationality – which tends to treat nature and human labour as mere instruments … as resources to be extracted, optimized, and controlled.
This shift – from techné to technological rationality, from pre-modern to modern ways of thinking – marked a profound transformation in how humans understood and engaged with the world. This shift was central to the birth of the scientific method and capitalism. Technological rationality is not just a way of thinking – it’s a worldview. It’s a lens through which everything is seen as discernable and dissectible … everything is now a tool or a resource – a means to an end.
Andrew Feenberg, Marcuse's PhD student and one of my MA supervisors, puts it like this: things are now "simply there, unresistingly available for human use." Technological rationality thus shapes our modern economies. It organizes our entire experience of reality.
It tells us what is worth knowing, what is worth doing, and what is worth valuing.
It silences other ways of knowing – ethical, aesthetic, spiritual – by labeling them irrational or irrelevant.
Marcuse and Feenberg see this as a kind of epistemological violence. It’s not just that we use technology to manipulate the world; it’s that we stop asking whether we should.
Indeed, now perhaps more than ever, with AI, we are at the dawn of a new historical rupture.
With AI, we may be reaching technological rationality’s escape velocity – tools designed to augment us begin to operate at scales and speeds that disembed them from human judgment, obscure accountability, and intensify externalities. Can we still recover the ethical core of techné – craft, proportion, and care – in the age of AI?
With unfettered AI, Mary Shelley's prophetic dystopian novel Frankenstein comes to mind.
What kind of rationality is guiding our AI systems? What values are embedded in their design? Who benefits, and who bears the cost? Can we bring it back to human scale and reorient our algorithmic imagination toward care and co-responsibility with each other and the planet? Or will we continue down the path of severing ourselves further from the very technologies we have created?
Part 3: Relationality, Ethical Technology, and Indigenous Ways of Knowing
To imagine a different future for AI – one that resists technological rationality – we must look beyond the dominant paradigms of Eurocentric philosophy.
Indigenous ways of knowing offer us invaluable guidance.
Indigenous economics and ways of knowing are not about markets or transactions.
They are holistic worldviews – integrating at all times relationships with land, community, ancestors, and future generations. Past, present, and future are always considered in every human action.
Wealth, for Indigenous peoples, is not measured by accumulation, but by the health of these relationships.
In Indigenous languages, there are no words equivalent to technology. Indigenous peoples fundamentally know that tools are never separate from culture and the non-human. They are interconnected.
From traditional birchbark canoes to digital storytelling today, tools and innovations are expressions of cultural continuity and care.
Today, many Indigenous communities are engaging with digital tools to preserve language, resist data colonialism, assert data sovereignty, and tell their own stories – in their own terms.
My colleague at the University of Toronto, Jennifer Wemigwans, has been working with Indigenous Elders to collect digital bundles that hold cultural memory, sacred knowledge, and teachings via digital platforms.
University of Victoria's Vanessa Andreotti's Aiden is an AI-driven conversational bot created as part of the research of the Gesturing Towards Decolonial Futures (GTDF) collective. The tech combines natural language processing with curated content from GTDF's research corpus, developed in collaboration with Indigenous communities. Aiden avoids extractive or prescriptive interactions and instead encourages users to question assumptions about progress and development, emphasizing relational ethics and a plurality of perspectives. Andreotti is currently looking to house the server with a local community involved in developing Aiden.
Resonating with Indigenous ways of knowing is the work of Ursula Franklin, the Canadian and University of Toronto physicist and philosopher whose book The Real World of Technology remains a touchstone for ethical tech thinking.
Franklin argued for “Technology as practice.” Technology is not a collection of tools. It is a way of doing things. Every technological system embodies values, assumptions, and power structures.
Franklin distinguished between holistic and prescriptive technologies. Holistic technologies allow people to control the entire work process – like artisanal or craft work. Prescriptive technologies fragment the work process – they are control‑oriented. They break tasks into discrete steps, controlled by systems or authorities – like assembly lines or bureaucracies. They diminish human agency and foster conformity.
One compelling example of Franklin’s vision in practice is the UK’s Lucas Corporate Plan of the 1970s.
Faced with the threat of massive layoffs at the Lucas Aerospace Company, both skilled and semi-skilled workers came together through a joint worker–union–management council to save the business.
They did so by developing a plan for worker-led, community-centred innovations focused on creating socially useful products.
Rather than continuing to produce jet turbines for the Concorde, Lucas workers began co-inventing and co-producing energy-saving domestic appliances, medical devices, and versatile power generation technologies.
In an echo of the Lucas Plan, GKN in Campi Bisenzio, Florence, is a worker-recuperated former car-parts manufacturer where, starting in 2021, most of its 430 workers occupied the plant in what is now Italy's longest-running workers' assembly. Universities, engineers, and students have since partnered with the GKN workers and redeployed their expertise to begin transitioning the plant into a solar panel and cargo bike manufacturer, joining the fight to establish an ecological transition for Italian industry and inspiring the Insorgiamo (together we rise up) movement.
Other European examples of workers and communities recuperating former capitalist firms into worker and community co-ops, as well as transitioning their production into more just and ecological social production, are Milan's RiMaflow, Rome's Officine Zero, France's Scop-Ti (a former Lipton tea producer), and L'Après M (a former McDonald's franchise).
The Lucas Plan, GKN, and many worker-recuperated firms exemplify what Franklin called redemptive technology – technology as a practice rooted in justice, care, and democratic participation.
These technoethical insights and practices offer the Cooperative AI transition a compass.
Part 4: AI as a Moment of Inflection in Humanity’s Technological History
With AI we are at a turning point in the history of the human–technology relation.
We are building technologies that not only extend human capacities exponentially but begin to operate independently of us – systems that we are training to learn and make decisions without direct human oversight or guidance. What future is this shaping?
Philosophers and historians of technology like Lewis Mumford, David Noble, Franklin, and Feenberg have reminded us that technology is never neutral – it always reflects the values and power structures of its creators.
What values and structures of power, then, undergird AI such that the goal is to artificially replicate, enhance, and ultimately replace human capacities?
If, on the other hand, AI is taken up within critical and ethical frameworks – with relationality, with care, with democratic design, with the inclusion of community voices … a narrow AI – we can avoid the dilemmas that loom.
I want to turn to one more thinker – Marshall McLuhan, the Canadian media theorist who gave us the concepts of the Global Village and "the medium is the message," and who, in many ways, predicted the Internet.
Towards the end of his life, McLuhan developed a framework called the Laws of Media, also known as the Tetrad.
It’s a diagnostic tool for understanding how any new technology affects society.
The tetrad asks four questions:
What does a technology enhance or amplify?
What does it make obsolete?
What does it retrieve? That is, what older practices, technologies, or values does it bring back?
What does it reverse into when pushed to its limits?
I asked the University of Toronto's AI assistant, Microsoft's Copilot, to run a tetradic analysis of AI.
Here is what it came up with:
Enhances: It amplifies decision-making speed, pattern recognition, predictive analytics, and personalization. It automates cognitive and creative tasks.
Obsolesces: It obsolesces routine human labour, replaces manual data analysis, disrupts traditional learning and judgment.
Retrieves: It brings back predictions and insights … and even oracular knowledge (think ancient seers). It revives apprenticeship models through personalized learning and retribalizes media consumption via recommendation engines.
Reverses: When pushed to its limits, AI flips into surveillance, bias, misinformation, and the dehumanization of decision-making. It risks creating dependence on machines for thinking, creativity, and governance.
Not bad!
But we can go further. Here is the tetradic analysis that I came up with. You try – call out some things that come to mind:
AI enhances or amplifies: simulation and simulacra.
It obsolesces: jobs and professions; human ways of learning; human intuition; human creativity; "sweat equity"; wisdom.
It retrieves: religion and myth, serfdom and slavery.
It reverses into: extreme exploitation of labour, hallucination, cheating, human ignorance, human illiteracy, zombiism, and perhaps, when the Superintelligence ostensibly takes over, the singularity – that moment when AI surpasses human intelligence and becomes an autonomous force, perhaps even our overlord.
McLuhan’s tetrad helps us see the full arc of technological impact – not just what a technology does, but what it undoes, what it resurrects, and what it risks becoming. It reminds us that every innovation carries unintended consequences.
And it challenges us to think holistically and ethically about this new technology – AI.
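For the technologists among us, the tetrad can even be treated as a small data structure. Here is a minimal sketch in Python – the class, field names, and method are my own illustration, not anything McLuhan specified – populated with a few entries from the analyses above:

```python
from dataclasses import dataclass

@dataclass
class Tetrad:
    """McLuhan's four laws of media, applied to a technology."""
    technology: str
    enhances: list[str]    # What does it amplify?
    obsolesces: list[str]  # What does it push aside?
    retrieves: list[str]   # What older practices does it bring back?
    reverses: list[str]    # What does it flip into at its limits?

    def report(self) -> str:
        return "\n".join(
            f"{law}: {', '.join(items)}"
            for law, items in [
                ("Enhances", self.enhances),
                ("Obsolesces", self.obsolesces),
                ("Retrieves", self.retrieves),
                ("Reverses into", self.reverses),
            ]
        )

# Populated with entries drawn from the talk's analysis of AI.
ai_tetrad = Tetrad(
    technology="Artificial intelligence",
    enhances=["simulation and simulacra"],
    obsolesces=["jobs and professions", "human intuition", "wisdom"],
    retrieves=["religion and myth", "oracular knowledge"],
    reverses=["surveillance", "hallucination", "the singularity"],
)
print(ai_tetrad.report())
```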
Part 5: Cooperativizing AI and the Labour Commons
If McLuhan’s tetrad helps us diagnose the impact of AI, then cooperatives and the commons offer us remedies for its pitfalls – a way to reimagine and reclaim it.
Cooperatives all have at their core a deeper concept that has guided much of my own research work and activism: autogestión, or workers’ and community self-management. This is not just about running a business differently. It’s about reclaiming control over the means of production and the technologies and conditions of work, by working people themselves.
In Argentina, Italy, Canada, and elsewhere, I’ve studied and worked with worker-recuperated enterprises – companies that were abandoned or shuttered, usually during economic crises, and then taken over and run democratically by their workers.
These workers don’t just save their jobs. They transform their workplaces into spaces of solidarity, care, and community engagement.
In Argentina, the workers themselves call the collective and cooperative practices they take on in the recuperated workplace autogestión.
It’s a word that comes from the Greek autos – meaning “self” – and the Latin gestio – meaning “to manage” or “to bear.” More evocatively, it can be understood as self-gestation – the act of collectively creating, controlling, and sustaining a collective’s own productive and creative life.
Autogestión is not just a management model. It’s a political and ethical stance.
This brings us to the idea of the labour commons – a concept I’ve developed with Dario Azzellini in our just-published book Commoning Labour and Democracy at Work: When Workers Take Over.
The labour commons reframes work, not as a commodity to be bought and sold, but as a shared community resource – a commons – collectively governed by working people themselves, and socially embedded.
In a labour commons: The workplace becomes an organizational commons. The labour performed becomes a commoning practice. And the surplus generated becomes a commonwealth, where the created value and surplus are reinvested in the community, not extracted for profit.
This model challenges the capitalist norms that commodify labour and alienate workers. The logic shifts from extraction to reproduction of life.
Can AI be woven into a labour commons? Yes – when data, algorithmic models, and platforms are treated as common pool resources, collectively governed and locally accountable. And when people are not exploited in the process.
Here is an example of a digital labour commons: The Drivers Cooperative (the US’s largest worker cooperative) uses algorithmic dispatch and route optimization while aligning control and value with drivers. The tech is similar to Lyft’s and Uber’s; the ownership and governance are not.
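To make “same tech, different ownership and governance” concrete, here is a minimal sketch of the kind of nearest-driver dispatch logic such a platform might run. The function, data shapes, and comments are my own illustrative assumptions, not The Drivers Cooperative’s actual code:

```python
import math

def dispatch(ride_request, available_drivers):
    """Assign the closest available driver to a ride request.

    The optimization is the same whether the platform is investor-owned
    or a co-op; what differs is who governs the algorithm and where the
    surplus flows (to driver-members rather than shareholders).
    """
    rider_lat, rider_lon = ride_request["pickup"]

    def distance(driver):
        lat, lon = driver["location"]
        # Euclidean approximation; a real dispatcher would use road networks.
        return math.hypot(lat - rider_lat, lon - rider_lon)

    return min(available_drivers, key=distance)

# Toy usage: two drivers, one pickup request.
drivers = [
    {"id": "d1", "location": (43.66, -79.39)},
    {"id": "d2", "location": (43.70, -79.42)},
]
request = {"pickup": (43.65, -79.38)}
print(dispatch(request, drivers)["id"])  # -> d1 (the nearer driver)
```

The point of the sketch is that the optimization itself is ownership-neutral; what changes in a co-op is who governs the algorithm and to whom its surplus flows.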
Consider also data cooperatives, which offer a democratic alternative to what Shoshana Zuboff has termed our era of “surveillance capitalism.” A data cooperative is a member-owned and governed organization where individuals voluntarily pool their personal data.
The goal is to collectively manage, protect, benefit from, and retain control over that data, rather than leaving it all in the hands of large corporations.
One example is the Megha Mandli Coop within the Self-Employed Women’s Association (SEWA) in Gujarat, India. It coordinates agricultural tools, inputs, and services, and is piloting digital tools for local supply chain procurement and risk pooling. Data becomes collective, and a means of livelihood and self‑determination, not extraction and patriarchal exploitation. Today, the co-op has grown to include over 1,000 members.
We reviewed similar types of digital coops in our cooperatively written 2023 book, Cooperatives at Work.
These platform and data co-op models align with the values and practices of autogestión and the labour commons, offering suggestive ways that a narrow AI could be governed within democratic, community-driven structures.
These examples show us that it is possible to build complex AI systems that are ethical and community based.
Digital network and movement building are also vital for the solidarity stack that Trebor talked about yesterday. In Argentina, FACTTIC – the federation of tech and knowledge worker co‑ops – builds scale through shared projects, mutual support, and sectoral advocacy. It is innovating collectively negotiated capacity building and social protections for tech workers.
Satisfying the criteria of the Digital Public Goods Alliance, which contributes to the United Nations’ Sustainable Development Goals, some municipalities are also experimenting with democratic AI:
Barcelona’s Decidim offers an open-source platform that extends participatory decision‑making to citizens.
Helsinki’s AI Register and Amsterdam’s Algorithm Register pioneer transparent listings of municipal AI uses and public oversight of algorithmic governance (see the sketch after this list).
And the City of Montréal and MILA are creating responsible AI frameworks with local communities.
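To give a flavour of what such transparency looks like, here is a hypothetical register entry. The field names and values are my own illustration, modeled loosely on the kinds of information these public registers display – not an actual Helsinki or Amsterdam record:

```python
# A hypothetical municipal algorithm-register entry (illustrative only).
register_entry = {
    "name": "Library chatbot",
    "purpose": "Answer residents' questions about library services",
    "data_used": ["public service catalogue", "anonymized chat logs"],
    "human_oversight": "Library staff review flagged conversations weekly",
    "non_discrimination": "Outputs audited quarterly for biased responses",
    "contact": "City department responsible for the service",
}

# Publishing the entry is the point: residents can see what the system
# does, what data it touches, and who is accountable for it.
for field, value in register_entry.items():
    print(f"{field}: {value}")
```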
These are not utopian pipe dreams; they are living prototypes that show how to embed participatory governance and the public interest into AI.
Part 6: AI 2.0?
But are these experiments in Cooperative AI too late? Some, like Roman Yampolskiy and Nobel Prize recipient Geoffrey Hinton, argue that they are – that the platform power, planetary spread, opacity, and sunk costs of so-called “deep learning” have already outrun human and democratic intervention. I am not 100% convinced that we are there yet.
We built these systems. We can redirect them. Or slow them down and re‑design them. Perhaps the goal is not to seize every existing stack, but to start anew where necessary, as some of us have been suggesting at this conference: recoding architectures, data practices, and incentive structures around human scale and ecological limits. I turn to you, the experts, for the answer. Can we still do this?
When there are clear, transparent and participatory processes, can AI not be harnessed for deepening democratic participation and equity? Can’t we recuperate a digital commons? AI as a labour commons? Can AI be a redemptive technology – one that codes ethics into its very infrastructures, that redistributes power equitably?
The brief examples I’ve shared today, and the many initiatives that you are all sharing with us, are not just isolated cases: They are happening all over the world today. They are the budding seeds and flowers of a new and collaborative digital economy. Cooperative AI!
We owe it to future generations to grapple with both the perils and the possibilities of AI now. You are all showing the way for how to cooperativize not only the code, but the conditions under which code is made.
Thank you. I look forward to the conversation.