Ethical AI and Self-Organization: Charting a Path Forward
AI at scale in our society scares me. With wealth inequality perpetually growing, anger in public discourse on the rise, and little consensus on fact vs. opinion, it seems like a bad idea to expose ourselves to a technology that can (mis)inform at a scale we have yet to comprehend. So, at Nestr.io—a collaboration platform for purpose-driven and self-organized work—we asked ourselves if AI could be employed ethically and regeneratively. We believe it can. Here's how.
The challenge with AI
Generative AI is not inherently ethical or unethical, good or bad. Like energy or money, it depends on how it is employed and by whom. Therein lies the challenge: consumers of AI-generated content won’t know who the author is and, as a result, won’t know the author’s incentives or goals. This is no different from content today, except that AI can generate content at a ridiculous scale, personalized to the max, and all of it within seconds.
The AI optimists will say that it holds the promise of drastically more efficient organizations. That AI will free up our time and allow us to do more of what we love. I am not (yet) one such optimist. The same was promised with the industrial revolution and with the arrival of the internet. History suggests that with technological leaps like these, virtually all efficiency gains are reaped by shareholders, leaving workers and the environment worse off.
So, my worry is not so much that AI is a bad technology. It is a technology that will supercharge our existing problems. I worry it will exacerbate our organizations’ pursuit of zero-sum games and wreak havoc by depleting our natural resources more efficiently and by reducing or eliminating a living wage for millions.
Put more simply: AI is not a technological problem; it highlights an organizational problem.
Opportunity
Whenever a problem is put in the spotlight, we have a choice to make. Do we internalize it and accept it as our new reality? Or do we pick it up and address it? This article is a passionate call for those familiar with any practice of self-organization to step up and collectively tackle this problem.
Here’s how I think that can be done.
Explicit purpose and governance
In our organizations, let's demand that any AI deployed is bound by an explicit organizational purpose. Let’s ensure that the AI is aware of all governance records and policies our organization and its members are bound by, so that it will adhere to them too.
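To make this concrete, here is a minimal sketch of what binding an AI assistant to an explicit purpose and its governance records could look like. Everything in it, from the OrganizationContext shape to the buildSystemPrompt helper, is a hypothetical illustration, not Nestr.io’s actual implementation:

```typescript
// A minimal sketch, assuming a hypothetical data model.
interface GovernanceRecord {
  id: string;
  title: string;
  policy: string; // e.g. "External publishing requires Editor-role review"
}

interface OrganizationContext {
  purpose: string; // the explicit purpose every AI suggestion must serve
  records: GovernanceRecord[]; // the same records human members are bound by
}

// Ground every AI call in the organization's purpose and governance,
// so the AI operates inside the same boundaries as its human colleagues.
function buildSystemPrompt(org: OrganizationContext): string {
  const policies = org.records
    .map((r) => `- [${r.id}] ${r.title}: ${r.policy}`)
    .join("\n");
  return [
    `You assist an organization whose explicit purpose is: "${org.purpose}".`,
    "You must operate within these governance records:",
    policies,
    "Decline or escalate any request that conflicts with them.",
  ].join("\n");
}
```

The point is the grounding: every AI invocation starts from the same purpose and policies that bind its human colleagues.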
Authority lies with people in roles
Ensure that the AI tools we use adhere to our existing and explicit authority structures, so that they can suggest but not independently decide. Circle- and role-based governance structures, as used in most self-organized contexts, are perfect for this: they make clear to the AI which role an operational task or governance proposal should be suggested to. And even when we have previously granted AI the authority to act autonomously, it should always act within a role whose people are held to account for the AI’s contributions.
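A sketch of how that could work, again with hypothetical types: the AI may only propose, and anything more requires an explicit, revocable grant tied to a role with accountable human fillers.

```typescript
// Keeping decisions with people in roles: the AI proposes; acting
// requires a prior grant, and accountability stays with the role.
interface Role {
  name: string; // e.g. "Marketing Lead"
  circle: string; // e.g. "Outreach Circle"
  fillers: string[]; // the people accountable for this role
  aiMayActAutonomously: boolean; // granted through governance, never assumed
}

interface AiProposal {
  role: Role; // the role the AI is assisting
  description: string; // e.g. "Publish the April newsletter"
}

type Outcome =
  | { kind: "executed"; accountableRole: Role }
  | { kind: "awaiting-human-decision"; decider: Role };

function routeProposal(proposal: AiProposal): Outcome {
  // Without an explicit grant, the AI can only queue a suggestion
  // for the human(s) filling the role.
  if (!proposal.role.aiMayActAutonomously) {
    return { kind: "awaiting-human-decision", decider: proposal.role };
  }
  // Even with a grant, accountability stays with the role's fillers.
  return { kind: "executed", accountableRole: proposal.role };
}
```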
Complete transparency
It should be crystal clear to everyone in or interacting with the organization what work has been done by AI and which role it is assisting, so that any tensions that arise can be addressed and processed appropriately.
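One way to picture this, as an assumed data shape rather than an actual Nestr.io feature, is provenance metadata attached to every contribution:

```typescript
// Every contribution carries provenance that any member can inspect.
interface Contribution {
  content: string;
  author: { kind: "human"; name: string } | { kind: "ai"; model: string };
  assistingRole: string; // which role the work was done for
  reviewedBy?: string; // the person who accepted the AI's output, if any
}

function provenanceLabel(c: Contribution): string {
  return c.author.kind === "ai"
    ? `AI-generated (${c.author.model}) for ${c.assistingRole}` +
        (c.reviewedBy ? `, reviewed by ${c.reviewedBy}` : ", unreviewed")
    : `Written by ${c.author.name} for ${c.assistingRole}`;
}
```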
Invert the autonomy mantra for AI
Where in self-organization we promote the mantra of ‘take any action you deem needed for furthering your role’s purpose unless explicitly specified otherwise,’ for AI, we should invert this. AI should not act unless we explicitly ask it to or have granted it permission to act independently.
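In code, the inversion is a one-line difference in defaults. A sketch, with hypothetical types:

```typescript
// People act unless a policy explicitly restricts them; AI stays
// idle unless a policy explicitly grants the action.
type Actor = { kind: "human" } | { kind: "ai" };

interface Policy {
  restrictedForHumans: Set<string>; // the "unless explicitly specified otherwise"
  grantedToAi: Set<string>; // explicit, revocable grants
}

function mayAct(actor: Actor, action: string, policy: Policy): boolean {
  return actor.kind === "human"
    ? !policy.restrictedForHumans.has(action) // default: allow
    : policy.grantedToAi.has(action); // default: deny
}
```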
The wonderful news is that, of these four concepts, the first three in broad terms sum up the practice of self-organization. Just as self-organization holds an organization accountable to its purpose, so can it hold AI accountable to that same purpose. Self-organization can thus act as the boundary within which AI does its magic, helping us be more effective while mitigating the ethical risks.
Disclaimer: On top of the ethical implications, there are serious problems to consider around the energy consumed in training and running AI models, copyright infringement, and security concerns around data ownership. Though these challenges need urgent consideration, I have not addressed them in this article.
A wonderful byproduct
While self-organization and distributed authority are still somewhat obscure, AI in the workplace seems bound for mass adoption. Because self-organization’s explicit governance records make collaboration AI more effective, there is an opportunity here for self-organization, and with it, for purpose-driven work. Not as a Trojan horse, but because together they are just way more effective.
And, unlike people, AI is not capable of simultaneously holding the organizational polarity of an espoused marketing purpose and an implicit growth purpose. We’ll have to tell AI what we prioritize, and with that, every organization will have to answer these questions explicitly. As a result, it will not only hold AI accountable but also hold any organization employing AI, and all the people working in it, to account.
At Nestr.io, we have decided to double down on self-organization and purpose-driven work so that AI can serve us collectively rather than lead us. Let’s make self-organization and an explicit purpose the norm for anyone employing or using AI. Then we’ll not only align AI with our collective needs but also transform our organizations in the process.