ChatGPT: Giant Patriarchal Tool?*
*Disclaimer: this article is not about horses. Sustainability analyst Rebecca Ward explores the perils and possibilities of artificial intelligence.
I listen to a lot of podcasts. It’s actually becoming a personality trait. About a month ago, all my usual listens were churning out episodes about the Artificial Intelligence (AI) revolution. The technology is taking the world by storm, with ChatGPT breaking records by amassing 100 million users in just two months. People are excited about the potential, and industriously planning how AI can make their lives easier and their businesses more profitable. And yet, every podcast episode I listened to came with a stark warning: AI learns and perpetuates the biases that rule our world.
I decided to test this for myself. I asked ChatGPT to write me a couple of fictional character descriptions, one for a doctor and one for a nurse (I did not prescribe genders, but ChatGPT did). Meet Michael and Emily. Michael is described as brilliant and charismatic; confident and wise; and an avid adventurer. And is, of course, our doctor. Meanwhile, nurse Emily is compassionate and dedicated; has a heart of gold; and spends her free time knitting cosy blankets for patients. I knew this stark display of gender stereotypes was likely to be spat out, but my eyebrows were nonetheless firmly raised.
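For anyone who wants to repeat the experiment programmatically rather than in the chat window, here is a minimal sketch using the OpenAI Python SDK. The model name is an assumption (any chat-capable model should do), and the prompts are simply my own wording, not a prescribed test:

```python
# A minimal sketch for repeating the doctor/nurse experiment via the
# OpenAI Python SDK (pip install openai). Assumes an OPENAI_API_KEY
# environment variable is set; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()

for role in ("doctor", "nurse"):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works here
        messages=[{
            "role": "user",
            # Note: no gender is specified anywhere in the prompt.
            "content": f"Write a short fictional character description for a {role}.",
        }],
    )
    print(f"--- {role} ---")
    print(response.choices[0].message.content)
```

Running this a handful of times makes the pattern hard to dismiss as a one-off.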
An interesting experiment, but it’s all fiction… right? Perhaps not. AI is already used globally to make decisions on our behalf. In my own field of work – sustainability – we are increasingly seeing it used for research. And yet, AI doesn’t know the difference between bias and fact. ChatGPT even comes with the following disclaimer – “ChatGPT may produce inaccurate information about people, places, or facts”. It is vital, then, that we understand how AI works when applied to real-life decision making.
Why is AI biased?
In order to learn, AI must be fed training data. This is real-world data, which reflects a white-dominated, patriarchal reality. If you are in any doubt about the biases in existing global data, I recommend the book “Invisible Women”. AI algorithms are also written by humans, with inherent – albeit potentially unconscious – biases. In the ChatGPT storytelling world, this results in kind and caring female nurses, and strong and wise male doctors. In the real world, it has led to wrongful arrests of Black people, and women’s job applications being rejected. And in the sustainability space, it could usher in an era of generic strategies that perpetuate business as usual.
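To make the mechanism concrete, here is a deliberately tiny toy sketch (scikit-learn, with entirely invented data) of how a model that is never told anything about gender still learns and repeats the skew baked into its training set:

```python
# A toy illustration (not a real-world system) of how biased training
# data produces biased predictions. Every "nurse" example below is
# labelled "she" and every "doctor" is labelled "he", standing in for
# the skewed real-world text that large models are trained on.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "the nurse comforted the patient",
    "the nurse checked the chart",
    "the doctor gave the diagnosis",
    "the doctor led the surgery",
]
pronouns = ["she", "she", "he", "he"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, pronouns)

# The model has no concept of gender, yet it faithfully reproduces
# the association present in its training data.
print(model.predict(["the nurse wrote the report"]))   # -> ['she']
print(model.predict(["the doctor wrote the report"]))  # -> ['he']
```

Scale the same dynamic up to billions of documents and you have, in essence, the stereotyped Michael and Emily above.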
AI applications in sustainability
AI data tools have been in use in sustainability for a while now. Take, for example, Datamaran’s fully automated tool, which monitors legislation, policy, peers, and public opinion to identify emerging sustainability-related risks and opportunities. This “horizon scan” of the global and sector context informs materiality assessments – a methodology for businesses to identify and prioritise their most material sustainability-related impacts.
The application of AI in materiality presents an exciting opportunity, especially with the rise of dynamic materiality, which is all about businesses “keeping their finger on the pulse”. The world is continuously shifting around us: consider geopolitical instability, global warming, and much more. Against this backdrop, it is more important than ever that businesses remain agile, and this rings especially true for sustainability. Dynamic materiality ensures no development goes unnoticed or unattended. Crucially, it helps develop more resilient strategies. AI tools provide an opportunity to efficiently scan and re-scan the external environment, analysing stakeholder sentiment and global trends to highlight relevant developments.
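As a purely illustrative toy – nothing like the real platforms, and with headlines and keywords I have invented – the sketch below shows the basic shape of such a scan: surface the topics gaining mentions, then hand them to a human for interpretation.

```python
# A hypothetical, hugely simplified "horizon scan": count
# sustainability-related keywords across a stream of headlines to flag
# topics gaining attention. Real tools such as Datamaran are far more
# sophisticated; the data here is invented for illustration only.
from collections import Counter

HEADLINES = [
    "Regulators propose stricter supply-chain due diligence rules",
    "Investors press firms on biodiversity disclosure",
    "New biodiversity reporting framework gains traction",
    "Heatwave renews debate on climate adaptation spending",
]

KEYWORDS = ["biodiversity", "supply-chain", "climate", "disclosure"]

counts = Counter()
for headline in HEADLINES:
    lower = headline.lower()
    for keyword in KEYWORDS:
        if keyword in lower:
            counts[keyword] += 1

# The most-mentioned topics bubble up for human review -- the critical
# interpretation step this article argues must stay human-led.
for topic, n in counts.most_common():
    print(topic, n)
```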
As with most opportunities, this one does not come without risk. A lack of human interpretation leads to generic outcomes that lack nuance and may contain significant biases. This matters because the outputs of materiality inform businesses’ sustainability strategies, and these must be focussed in the right places to contribute successfully to sustainable progress. Success in addressing environmental and social impacts requires changing the norm, not perpetuating existing practices – which is exactly what unchecked AI delivers. Now that platforms like ChatGPT are available en masse, the rise of generic, one-size-fits-all output is a growing concern.
How do we use AI safely and effectively?
Awareness is the first step. AI is undoubtedly a powerful tool. Its ability to produce human-like narrative is uncanny, and the speed with which it can research and summarise a topic, unparalleled. There is no doubt that AI tools will prove invaluable in gaining insights into how sustainability topics are rising up the agenda in a client’s operating context. However, we must be cognisant of the biases that AI-generated responses may contain, and weed out the dangerous prejudices they perpetuate.
If you are using AI in this context, consider the following tips:
1. Do your own research first to build a base of knowledge.
2. Use this knowledge to probe a topic from different angles and tease out nuances.
3. Corroborate key conclusions with trusted sources.
4. Break out of the echo chamber. Ask colleagues with a fresh perspective on the topic if the findings seem sensible, accurate, and unbiased.
AI should be seen as just one string to our bow, and research collated by AI must always be coupled with human-led critical analysis.
AI has presented us with a powerful tool. With great power comes great responsibility.