Zero Waste: What If We Asked Artificial Intelligence for Answers?

Last week, I stumbled upon a couple of items in the news that struck me as more than just noteworthy snippets. The first came from Ginni Rometty, CEO of IBM, who suggested AI will change 100% of the jobs currently being done. On another business-centric news channel, Scarlett Fu quipped that capitalism is experiencing an existential crisis.

Let that sink in for a moment. If all jobs will be impacted by the increased presence of AI, could the decision-making processes surrounding our zero waste/circular economy benefit from the cold logic of an algorithm?

Regaining Reciprocity

We exist in a world that has mostly, and historically, dismissed the idea of environmental reciprocity as irrelevant. This reciprocity is the unspoken partnership that suggests how we change the environment around us also impacts where we change it. That is a new consideration in our history. Humans have been altering their surroundings for thousands of years, and have done so to survive and thrive.

Within the span of a single generation, the question of what to do next, of how we can understand and embrace the reciprocal nature of the environment we live with and our ecological power to alter it, is being asked. However, the answer, the decision to right this wrong, seems to depend on assuming we know better. Do we? We ignore the basic understanding that we cannot continue to exist without consequence if we keep acting without embracing environmental/ecological reciprocity.

Should we, perhaps, turn to the data-driven logic of artificial intelligence for answers where the emotional nature of our carbon-based intelligence has failed?

Let’s take a moment to unpack this.

Most of us hold the Alan Turing version of what AI actually is. The Turing Test was meant to determine whether a machine could pass a simple reasoning exercise, a leveling measurement between us and it, by making it impossible to tell which conversation was human and which was machine. That possibility does frighten many of us. Autonomous thinking, the thing we fear most and anticipate in mostly dire terms, could be the game-changer Ms. Rometty predicts it might be.

This possibility was examined by Darrell M. West and John R. Allen of the Brookings Institution in a paper published around this time last year. In it, they “discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions.” These are broad challenges for science. AI has to prove its intentionality, intelligence and adaptability.

Leaders are Human

The existential crisis mentioned by Ms. Fu is at the leadership level.

And leaders are human. I will not venture too deeply into the "Existentialism Is a Humanism" argument first proposed by Jean-Paul Sartre in 1946, except to suggest that a shift in how decisions are made at the executive level would be worth considering. The independent activity of individuals is philosophically counter to the concept of capitalism, where values are, for lack of a better description, dictated to the consumer, governed by shareholders and stakeholders, and ultimately focused on the decision maker's success at satisfying many masters.

A Well-worn Path

As humans, we have a certain notorious trait that has evolved, in part, because of our big brains: we change the environment we live in and often do so without acknowledging the consequences. In that reckless march forward, we have ignored the benefits of a reciprocal existence with the world around us in favor of crafting the world to resemble the one residing inside our heads. Our understanding of the mutual interaction between ecology (the control of our survival) and the environment (the platform we can manipulate to that end) is only recently becoming well known. And yet, as in so many dynamic conflicts where only winning matters, changing course and righting the wrong may live in an algorithm.

Capitalism is a very specific concept, and it is not in question here. Private enterprise will continue to drive our economy. And private enterprise is driven by people. We also have the unique ability to think ourselves into a truth. AI might help change that dynamic, mostly because AI would be incapable of convincing itself of anything.

The Role of AI in Recovering Waste

Truth is data. Joy Buolamwini of the Algorithmic Justice League wrote, “The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed.” That’s what scares us. The “heart” of what AI is rests in data and algorithms. We should be comforted, however, by the simple fact that we dictate the objective.

This let-the-data-decide process might not be something we are ready for when it comes to how these automated decisions will be made. But it could be a welcome change in how we view our need to regain reciprocity.

Consider Organics

How would AI answer the wasted food dilemma? Seventy million tons of wasted food end up in landfills. We know this is the result of our poor understanding of ecological/environmental reciprocity. AI, looking at just the data, might also note (and make Draconian decisions based on the fact) that while that food is truly wasted, other resources are lost in the process.

Wasted food uses 21% of our freshwater. AI might also consider that the wasted cropland, and the time and effort expended to coax those crops along, account for one-fifth of all production costs for food we will not eat. Twenty-one percent of our landfill volume is also needlessly expanded to accommodate the error of overproduction.
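
To make that tally concrete, here is a minimal sketch, in Python, of how an algorithm might add up the resources embedded in wasted food. The only figures used are the ones cited above; the totals passed into the function are normalized placeholders, not real national data, and the function name is my own invention for illustration.

```python
# A minimal sketch of the resource tally an algorithm might run on wasted food.
# The shares below are the figures cited in this article; the totals passed to
# embedded_losses() are placeholders, not real national data.

WASTED_FOOD_TONS = 70_000_000      # wasted food sent to landfills
FRESHWATER_SHARE = 0.21            # share of freshwater spent on food we waste
PRODUCTION_COST_SHARE = 0.20       # one-fifth of production costs grow food we never eat
LANDFILL_VOLUME_SHARE = 0.21       # landfill volume expanded to hold that waste

def embedded_losses(freshwater_total, production_cost_total, landfill_volume_total):
    """Return the slice of each resource consumed by food that is ultimately wasted."""
    return {
        "freshwater": freshwater_total * FRESHWATER_SHARE,
        "production_cost": production_cost_total * PRODUCTION_COST_SHARE,
        "landfill_volume": landfill_volume_total * LANDFILL_VOLUME_SHARE,
    }

if __name__ == "__main__":
    # Placeholder totals (normalized to 1.0) just to show the shape of the output.
    losses = embedded_losses(1.0, 1.0, 1.0)
    print(f"{WASTED_FOOD_TONS:,} tons of wasted food also carry:")
    for resource, share in losses.items():
        print(f"  {resource}: {share:.0%} of the total")
```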

Aside from the wasted food itself, how do we continue to allow for so much wasted time, energy and resources? Can we admit we are wrong? Can we change? Even if AI pointed out this error, we might resist acknowledging the problem. According to Kathryn Schulz, who wrote Being Wrong: Adventures in the Margin of Error (Ecco, 2011), to err is more than just being human; it is rooted in survival. Do we fear what AI might tell us about us?

On the Rest of the Recycling Problems

AI would be incapable of arguing because it is not looking for a personal advantage. What happens when the human element of self-interest is removed?

AI might suggest that everything can be recycled simply by focusing on the production process. The need to be right adheres to the (wrongheaded) thinking that it costs too much (to alter processes), or that it is someone else’s problem (choosing virgin materials over post-consumer materials), or that you and your organization will not have an impact anyway (humans will always alter landscapes, so what). I get it; being just a little wrong, or admitting to any error in judgement, or entertaining other ideas, even from within your own organization, is really, really hard. But being wrong is not a moral flaw.

We might wonder what would happen if consumers and investors knew how waste impacted their wallets and their investments. AI does not wonder. AI, without the encumbrance of self-promotion, would not ignore the waste lost in the processing of those products but may, armed with that knowledge, propose potential changes without bias. AI would turn loss into profit where, currently, we barely accept that the wrong can be righted.

AI, without regard to personal advancement, would understand that the cost of food is increased by 20% to cover those previously mentioned errors in production. AI would insist, without remorse, that all manufactured products be hyper-focused on reuse or, better, on revamping current thinking about ‘the cost of doing business’. AI would be able to project the cost of ecological/environmental reciprocity and give us the ability to understand what needs to be done.

If AI does what AI could, it would examine the cost of doing business as the cost of waste in the execution of that process. We have the data. So: why do companies allow for the cost of waste? Why do we pay for these errors in judgement by baking the price of waste into the end cost? Aren’t there savings to be had for both companies and consumers? And couldn’t some of those savings simply be shifted to righting the historic wrong?
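
As a rough illustration of that last set of questions, here is a small back-of-the-envelope calculation in Python. It assumes, per the figure above, that waste inflates the cost of food by roughly 20%, and asks what trimming that waste would be worth at the checkout; the shelf price and the reduction fraction are hypothetical inputs, not data from this article.

```python
# Back-of-the-envelope: how much of a shelf price is the baked-in cost of waste,
# and what eliminating some of that waste might be worth. All inputs are assumptions.

def waste_premium(shelf_price: float, waste_share: float = 0.20) -> float:
    """Portion of the shelf price attributable to waste, assuming waste inflates cost by ~20%."""
    waste_free_price = shelf_price / (1 + waste_share)
    return shelf_price - waste_free_price

def savings_from_reduction(shelf_price: float, reduction: float, waste_share: float = 0.20) -> float:
    """Savings if a fraction (0 to 1) of the waste premium is eliminated."""
    return waste_premium(shelf_price, waste_share) * reduction

if __name__ == "__main__":
    spend = 100.0  # hypothetical grocery spend
    print(f"Waste premium hidden in ${spend:.0f} of food: ${waste_premium(spend):.2f}")
    print(f"Savings if half that waste were eliminated: ${savings_from_reduction(spend, 0.5):.2f}")
```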

I would like to believe it can happen. But to do it in an impactful way, we may need to listen to a voice less human.
