In the heady days of the dot-com boom, Jaron Lanier, originator of the phrase “Virtual Reality,” spoke at the Yerba Buena Center in San Francisco about what he saw coming for the tech industry, both good and bad. His (rather unexpected) term for the good stuff was “cuttlefish envy,” referring to the potential offered by tech to express oneself creatively as instantly and effortlessly as certain cephalopods do via specialized skin cells that project visual designs on their surface based on how they’re feeling. The bad stuff he called “artificial intelligence.”
AI was already a term in circulation by then, and even a controversial one, given that (then as now) some asserted that real progress was being made toward it and some insisted it was bunk. But Lanier was not doomsaying about Skynet or prophesying a paperclip-maximization apocalypse; he was simply talking about the abdication of our own human intelligence that tech too often demands of us. His prime example was automatic-teller PINs, which humans had (and still have) to memorize strictly for the benefit of machines.
Never in the intervening years have Lanier’s predictions seemed more poignant than in our current moment of preoccupation with AI.
In many ways, 2023 was the year Large Language Models (LLMs) — predictive tools that generate the next likely word based on past data — broke big. Examples like Bard and ChatGPT are marketed as “generative AI,” and along with their image-generating counterparts like Midjourney and DALL-E, they have generated a tremendous amount of hype, both positive and negative. But what does it mean to approach this new, disruptive technology with commitments to sustainable and Agile principles intact?
On one hand, these days one can create an image in Photoshop and simply tell the program, “Now add a complicated-looking machine,” whereupon it will … just do it. No amount of learned cynicism can make a moment like that uncool — we’ve all just had too much past experience of how difficult it can be to negotiate with machines. What’s more, AI might be capable of real impacts on important problems like climate change — potentially reducing global emissions by 5–10% by providing information, prediction, and suggestions to decision-makers. Cuttlefish envy is alive and well in 2023.
On the other hand, we must wonder about such moments’ true cost. What intelligence of our own — or possibly worse, what rights of others — have we sacrificed to make them possible? Nobody mentioned in 1999 that getting an “AI” to do our bidding would cost the earth a full bottle of clean, fresh water for every five of our queries. Nobody said such machines would gain notoriety for using others’ copyrighted intellectual property, and certainly nobody warned us that they would readily generate child sexual abuse material.
What does this all have to do with sustainability and Agile? Plenty, actually. AI, to some, is the way to address not simply the current crisis but any problem humanity might ever face. “We believe we should place intelligence and energy in a positive feedback loop, and drive them both to infinity,” says Marc Andreessen. This, honestly, sounds lovely. Who wouldn’t love a feedback loop consisting of nothing but increased intelligence and energy? Cuttlefish envy for all!
But whilst the hypothetical long-term sustainability benefits of AI are significant, with its potential to cut global emissions 5–10% by 2030, AI will not be a panacea. We can’t get stuck hoping it will save us without requiring any change of our own practices or attitudes. For one, the AI we have seen so far lacks the ability to reason from context — it doesn’t understand the nuances of your customers, your operations, or your brand, let alone the nuances of our climate crisis. For another, it is generally biased toward what has succeeded in the past, not what we might need for the future.
Another important consideration with AI is its product ethics, especially in light of the kind of failures mentioned above. For AI to step us closer to sustainability without dragging us two steps backward in terms of justice, it needs to be
- fair and unbiased — inclusive of marginalised perspectives and free of inbuilt biases in training data;
- transparent and explainable — accountably comprehensible to users, customers, employees, regulators, and the public;
- private and secure — protective of sensitive information, including personally identifiable information (PII), and respectful of others’ IP; and
- accountable and governable — subject to clear lines of responsibility for its decisions and any unintended consequences.
Acknowledging this kind of complexity in creating technology means nurturing it responsibly and ecologically. For example, the Green Software Foundation has set out principles for environmentally responsible AI development lifecycles.
Agilists on tech teams know this responsibility better than most. “Abundance,” as Andreessen names his goal, is not achieved, as we have witnessed time and time again, by simply pressing on faster and harder with our extractive, command-and-control mindsets intact.
It requires transparent, humble, and confident negotiation — with each other, with our tools, with our environments — to make something considerate enough of all these different constituencies to sustain itself while producing value for all.
We Agilists have often (though never often enough) been privileged to see this happen in action, and the genuine abundance it produces is always such a beautiful thing. We are called now to help it happen for our industries and for our planet — not by turning a blind eye to the costs involved or to the rights of those who are in this with us, but by applying the fundamental insights of Agile on a new, planetary scale.
This is a big subject, and there is much to unfold! But for now, we can all
- learn about the gaps between AI hype and reality;
- be conscious of the environmental impacts of AI usage; and
- value our own human creative power alongside any efficiencies contributed by AI.
How we do what we do matters. We have the opportunity to foster headspace, creativity, well-being, joy, innovation, and social mobility while embracing the potential of large language models. It is our responsibility to ensure that the products and technologies we create are designed with a clear ethical compass, leading us towards a future of work that is truly transformative and beneficial to all.
The future is not to be predicted or forecast, but to be imagined so that it can be created.
It’s up to us.
– The Agile Sustainability Initiative Team