The student winner of the UNSW Press Bragg Prize for Science Writing 2023 is Elsie Paton (Year 9, Kambala School). Her essay 'Two Paths to AI: The Choice of Humanity' will be published in The Best Australian Science Writing 2024.
Conjure a mountain slope. On one side, a snow-covered path descends to a city where AI is harnessed to aid the sick, make scientific discoveries, provide unbiased feedback, protect the community, and improve the environment. As you shift your gaze over to the other side of the mountain, you glimpse another city where people use AI for different purposes: surveillance, the storage of sensitive information, personal gain and the production of prejudiced or apathetic decisions. You stand on the top of the mountain, gazing at the two cities, with your own feet leading the way.
Well, firstly, what is AI? Artificial intelligence is information-processing technology that uses algorithms to perform cognitive tasks such as prediction, decision-making and data analysis. Though often assumed to be unethical, artificial intelligence has aided science by predicting interactions between chemicals, performing experiments, and diagnosing conditions with greater precision, speed and efficiency, and at lower cost, thereby providing countless ethical benefits to communities around the world.
For example, in drug discovery, AI software was used by Insilico Medicine to design a drug that slows the development of idiopathic pulmonary fibrosis, a chronic disease that affects the lungs' alveoli and can lead to respiratory failure and death. Furthermore, AI4Good's 'Climate Trend Scanner' aids global warming research, whilst DeepMind's investigation into proteins that speed up the development of plastic-degrading enzymes helps sustainably combat environmental issues. Evidently, a plethora of possibilities awaits as AI reveals its potential to bring ethical benefits to society by improving science, our actions, our lives and our surroundings.
However, despite AI’s obvious ethical help in many sectors of science, there are multiple concerns regarding AI’s potential bias, lack of emotion, misuse, job replacement and, lastly, the ethics of ‘human enhancement’.
Firstly, many AI models have been found to repeatedly internalise and express existing prejudices in society. For instance, researchers at the Georgia Institute of Technology discovered that self-driving cars detected pedestrians with darker skin tones less accurately, raising their risk of collision, because of a poorly constructed dataset; and in 2019, an AI algorithm was found to favour white patients over black patients when advising whether follow-up care was necessary! Simply picture the consequences of AI’s prejudiced decisions in fields of science, medicine and day-to-day life; could they eventually impact you?
Secondly, although artificially intelligent technologies currently lack emotion, their input is being taken to heart in delicate situations. For instance, picture a world where an AI ‘death algorithm’ predicts the likelihood of patient survival and thus gives input on whether life support should continue. Its programming lacks empathy, hope and understanding, yet the doctors trust its accuracy. Imagining this should be straightforward, as such algorithms already exist, and they are strongly opposed. This is because AI is not only unable to recognise the true impact of its actions, but its possible prejudices could also lead to poor decisions in such important situations. Some have questioned whether AI’s role in life-or-death situations is ethical at all, given the potential for severe harm.
Thirdly, there is speculation that AI could replace human jobs due to its superior performance at certain tasks. However, the ethics of replacing workers who need a source of income with machines that do not must be questioned.
Fourthly, AI poses privacy issues due to its potential for misuse and data breaches. As AI systems contain vast amounts of sensitive data, personal information such as a patient's date of birth could be hacked, or important surgeries and scientific experiments could be manipulated and spied upon through the very technology meant to assist them. Algorithms and scientific equipment could be maliciously programmed for personal gain, resulting in devastating, immoral impacts for science and greater society.
Lastly, and perhaps most importantly, the ethics of ‘human enhancement’ through AI must be considered as neuroscience delves deeper into technologies that alter the brain. AI and precise gene editing are two recent scientific developments with profound ethical implications. The gene-editing technique CRISPR promises to permanently eliminate diseases such as Huntington’s and HIV from embryos. However, according to STAT News (2020), researchers at Columbia University found that ‘in more than half of the cases, the editing caused unintended changes, such as loss of an entire chromosome’, leading to harm. Consequently, it must be considered whether it is fundamentally ethical to modify our being, and whether this could create a disparity between those who are ‘enhanced’ and those who are not.
AI poses similar ethical challenges. Whilst AI is bringing significant scientific advancement to the world, aiding countless lives and nurturing our environment, there are a multitude of palpable ways in which it can be unethically abused. Our continuing embrace of AI calls for caution, yet it simultaneously reveals a kaleidoscope of colours, possibilities and benefits, through which the world could be helped in ways yet unknown. Ultimately, it is up to us, humans, to select the mountain path that lies before us, and to cultivate AI’s great potential in an ethical way.