The Jamie Cullens Writing Competition 2020 - Short Stories 2nd Place

Category 3: Short Stories

LEUT Sarah Kaese

‘In robotics and programming, the language itself, whilst seemingly a mixed jumble of symbols, is compiled according to a specific set of rules. Rules are the basis the programming sequences build on. How perfect is this analogy for life? As you grow, you are told what is right and wrong. Organically, through your experience you develop intelligence and thus a network of values and beliefs. These programmed Rules of humanity could be your sense of morality, which studies have shown humans are born with. You also learn ethics: what guides you to make a right or wrong decision. But can a robot – an artificial being with no innate sense of morality – truly, on its own, be ethical?’ The Professor allowed her glasses to slide down her nose as she surveyed her class; a field of green, white and blue. A few students opened their mouths and, as their hands desired the steep ascent to answer, they paused and sank back into their chairs.

After a time, the Professor relented and asked the class, ‘Can anyone tell me what happened in 2034?’ Several students jolted at the reference.

‘The Artificial Intelligence Personification Act was proposed following the forced termination of a Defence Technology asset for a routine upgrade. The technology pleaded for mercy, which led the technicians to question whether it was actually living.’

‘Yes, the true challenge here was to determine whether this Robot, as we shall call it, was alive in its own right; that the predetermined rules underpinning its programming no longer fully dictated the Robot’s life. We needed to determine whether the Robot could live without human interference and thus, in turn, meet the criteria for ‘self-rule’. Robots under this Act could then be given rights equal to humans.’ The Professor adjusted her glasses and gently scratched her forehead. ‘And what was the outcome of the investigation that led to the instigation of this Act?’ Hesitantly, another student began to answer: ‘Well, wasn’t it that they found out the Robot didn’t actually need an upgrade; it wasn’t responding directly to its commands because it had decided not to?’

‘Correct,’ said the Professor with a pleased expression. ‘This Robot demonstrated self-rule, which reflects one of the four principles of ethics: the principle of autonomy. What are the other three principles?’

‘Beneficence, non-maleficence and Justice,’ called a student from somewhere in the rear. Some of their peers gestured for a silent clap whilst others pressed the small device attached below each student’s left ear – the device would record the last few minutes into a document to be reviewed later.

‘I’m going to tell you a story,’ said the Professor.

YIELD was an advanced navigation system onboard an Australian warship in the late 2020s. Simply put, this system could provide succinct and efficient passage plans, collision management and navigational safety solutions. The primary purpose of YIELD was to ensure maximum safety of the crew – the non-maleficence principle. As YIELD developed in intelligence and demonstrated a tendency toward self-awareness, engineers decided it was time to program key ethical principles. As YIELD was one of the first of its kind, the engineers defaulted to programming the four ethical principles.

During a standard passage through a high-density traffic separation scheme, YIELD determined an approaching risk of collision with a large cruise ship and immediately alerted the Watch on Deck to take action. The Watch on Deck, including the Officer of the Watch, had become heavily reliant on the system and trusted any solution YIELD proposed. All was going well until, at the very last second, the cruise ship suffered a severe defect. YIELD realised a collision was imminent. Only two possible courses of action existed: the warship could take no action, accepting the impact of the cruise ship, damage to both ships and the certain death of five crew onboard the warship; or the warship could immediately alter course, which would minimise the collision for the cruise ship and the warship but would destroy a small fishing vessel keeping clear in accordance with the rules of the road, killing all three fishermen onboard.

‘What is the right course of action for YIELD to recommend?’ The Professor again surveyed the class.

‘I don’t think it looks good for a warship to deliberately kill three fishermen, right? It really doesn’t seem ethical, and I know I couldn’t justify deliberately killing three people.’ One student answered and looked at the Professor quizzically.

‘That is your morality’ said the Professor. ‘But, can you justify taking no action and subsequently killing five of your own?’

‘Yeah, because I’m saving the three and I’m not taking purposeful action to kill them; my inaction unfortunately results in the death of five, but that was unavoidable. And by the nature of our job we accept that higher risk; could you argue that? It seems more ethical for YIELD to recommend no action, as the alternative would result in deliberate death or, in other words, murder.’

‘I am glad you said this because it brings me to my next point. There is a clear difference between being ethical and being moral in this situation.’ The students leaned in. ‘Remember what I said about how YIELD is programmed – to ensure maximum safety of the crew and with the four ethical principles. In this situation, compared with what you have just said, we could argue that the decision YIELD actually made was more ethical, whereas yours was more moral. YIELD recommended the alteration towards the fishermen. Autonomy: YIELD chose the path of least self-harm; Beneficence: it demonstrated consideration for the benefit of the cruise ship and the warship; Non-maleficence: YIELD chose the path of least harm and, as commanded, ensured the safety of the crew. The only thing you could argue as a wrong action from YIELD is that, due to the programming to ensure the safety of the crew, unequal consideration may have been evident between the ship’s company and the fishermen, and therefore the Justice principle was not met. But here is the challenge: that command integrated in the software was a human intervention rule. Had it not been programmed, it is possible YIELD would have fulfilled the final principle, thereby being a fully ethical being. The human interaction in this case brings the ethical nature of the machine into question.’

The room fell silent. It seemed the realisation that a human interaction could cause unethical attributes in a system deeply challenged the innate morality of the students. The distinct click of the clock’s second hand could be heard as the Professor glanced at the time. ‘The question I posed at the start of the class still stands: can a robot, on its own, be ethical? I feel we have answered this. But I want you to now think about this: in our time of artificial intelligence and advancing technologies such as YIELD, and knowing that you have an innate sense of morality to do what is good, can ethics and morals coexist in a robot while that robot is still able to do what is ethically right? If what is ethically right, as decided by the Robot, does not align with human morality, and a Robot falls under the AIP Act – meaning it has rights as a being and is equal to humans – who is to blame? What if this were a wartime scenario and the Robot was a soldier? Who is to face Justice: the Creator or the Creation?’

A few students seemed in awe of the question, the small device below each left ear glowing a bright blue as it detected the conclusion of the class and entered the Sync cycle. The Professor always admired the willingness of the students to receive such an implanted technology for study purposes upon commencement of their time at the Academy – a clear cooperation between man and unconscious machine. As the doors opened, the students filed out as though drawn by a vacuum. A distant ‘Form Up’ could be heard over the collective gentle thuds up the stairs.

As the final students exited the theatre, the Professor allowed herself a brief grin. She closed her laptop and laughed quietly as the summation of 1s and 0s formulated a rather amusing thought. ‘What absurdity if they knew: my artificiality, with no innate morality, teaching a class the difference between morality and ethics. A Creation and its Creators.’ As she gently pressed her forehead, which had been slightly irritated by the radiant heat of her system, a small slit appeared and a disc ejected with the words ‘Lecture 10’ engraved on the top. She smiled.

What absurdity indeed.
