About

Independent Researcher in AI Ethics and Governance

amirrafiee

Amir is a data analyst based in Sydney, Australia, and an independent researcher in the field of artificial intelligence, with a particular focus on ethics and governance. His research explores the philosophical, legal and policy challenges raised by emerging AI systems, especially in the context of autonomous technologies such as self-driving vehicles.

He holds a PhD in Philosophy; his doctoral research examined the ethical foundations of decision-making in autonomous vehicles and the broader implications of machine autonomy for public policy and regulation. His work investigates how different moral frameworks, such as utilitarianism, deontological ethics and Rawlsian justice, can inform the design and governance of AI systems operating in morally complex environments.

Research Interests

More broadly, his research interests include AI ethics, algorithmic governance, responsible AI design and the role of ethical theory in shaping real-world technological systems. He is particularly interested in how philosophical analysis can contribute to the development of AI systems that are transparent, accountable, and aligned with societal values.

AI Ethics

Autonomous Vehicles

AI Governance

Responsible AI

Algorithmic Accountability

Published Research Papers

Autonomous Vehicles (AVs) can handle most driving scenarios, but ensuring safety in every situation remains a challenge. Factors such as technology failures, faulty sensors, and adverse weather introduce complex ethical dilemmas that AVs must navigate. Considering the societal benefits of AVs, it is crucial to address both technical challenges and ethical expectations. This paper evaluates Australians’ perceptions and expectations regarding the ethical programming of personal AVs in six dilemma scenarios using a structured questionnaire.
As Autonomous Vehicles (AVs) rapidly progress and become widely deployed, governments worldwide are grappling with the ethical challenges posed by AVs in dilemma situations that result in loss of human life. They are tackling these issues through the formulation of policies and guidelines, the establishment of dedicated research centres exploring the ethical implications of AVs, and the solicitation of public opinion on how self-driving cars should handle such moral dilemmas. In this paper, we evaluate the Australian government’s strategies for addressing the ethical issues related to AV accidents. We critique the Decision Regulation Impact Statement (DRIS) released by the National Transport Commission (NTC) in 2018, which assessed the safety assurance options for Automated Driving Systems (ADSs).
While Autonomous Vehicles (AVs) can handle the majority of driving situations with relative ease, it is challenging to design a system whose safety performance will fit every situation. Technology errors, misaligned sensors, malicious actors and bad weather can all contribute to imminent collisions. If we assume that the widespread adoption of AVs is a necessary condition of the many societal benefits these vehicles promise to offer, then it is clear that any reasonable ethics policy should also account for the expectations of the users with whom these vehicles interact and the larger societies in which they are deployed.
Autonomous Vehicles (AVs) promise great benefits, including improved safety, reduced congestion, and mobility for the elderly and the disabled; however, debate continues over how they should be programmed to respond in an ethical dilemma where a choice must be made between two or more courses of action resulting in loss of life. To explore this question, the authors examine the current academic literature on applying existing philosophical theories to ethical settings in AVs, specifically utilitarianism and deontological ethics.