‘One of the main challenges we face in regulating AI is that the technologies develop much faster than the law can develop. Proper regulation is best achieved flexibly, through the use of ethical frameworks instead.’
Artificial intelligence has been at the center of research and development since the 1950s (Nilsson, 2014). It can be described as a computer science discipline focused on developing smart machines that can perform tasks which would otherwise require human intelligence. The concept took shape when Alan Turing, a British mathematician, proposed that if humans can use the available information to make decisions and solve problems, machines could achieve the same. The field has grown steadily with the ever-changing technological landscape: as computers have gained faster processing speeds and larger storage capacities, research into and utilization of AI have accelerated (Nilsson, 2014).
Artificial intelligence aims to enable machines to think like human beings. Through its infrastructure and design, AI technology can learn from its current use and thereby continually improve. This learning is based on collecting past data, detecting patterns, and generating predictions and suggestions (Nilsson, 2014). The driving force behind AI is access to the data from which it learns.
The application of AI in the modern world is continuously growing. Search engines such as Google use AI to suggest websites, hotels, and products based on users' previous activity. Related subfields of AI include neural networks, machine learning, computer vision, deep learning, natural language processing, and fuzzy systems (Mitchell, Michalski & Carbonell, 2014).
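The suggestion mechanism just described can be illustrated with a toy sketch: learn which items users previously viewed together, then recommend the items most often co-viewed with a given one. The session data and item names below are invented for the example, and real search engines use far more sophisticated models; this only shows the "learn patterns from past usage, then predict" loop in miniature.

```python
from collections import Counter
from itertools import combinations

# Invented browsing sessions: each list is one user's viewed items.
sessions = [
    ["hotel_a", "hotel_b"],
    ["hotel_a", "hotel_b", "flight_x"],
    ["flight_x", "hotel_a", "hotel_b"],
]

# "Learning": count how often each pair of items appears in the same session.
co_views = Counter()
for session in sessions:
    for a, b in combinations(sorted(set(session)), 2):
        co_views[frozenset((a, b))] += 1

def suggest(item, k=2):
    """Rank other items by how often they were viewed alongside `item`."""
    scores = Counter()
    for pair, count in co_views.items():
        if item in pair:
            (other,) = pair - {item}
            scores[other] += count
    return [name for name, _ in scores.most_common(k)]

print(suggest("hotel_a"))  # → ['hotel_b', 'flight_x']
```

The point of the sketch is the essay's claim in miniature: the quality of the suggestions depends entirely on the collected usage data, which is why data access drives AI and why privacy concerns follow directly from it.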
Current trends indicate that artificial intelligence is growing and has a firm place in the world's future. It can be applied in almost every field, from education, health, and finance to the military, economics, and manufacturing. As more countries strive to develop and enhance their AI capabilities, AI is taking root worldwide. Analysts suggest that the Fourth Industrial Revolution will be built on cyber-physical systems, the Internet of Things, and the Internet of Systems (Pollock, 2018).
As the topic statement observes, artificial intelligence is growing at a high pace. Like any other development in society, regulation needs to be put in place to ensure the fair, legal, just, and ethical use of this evolving technology. In countries such as Australia, legislation on AI has been outpaced by developments in the field itself; regulators have been unable to keep up with the rate of growth of artificial intelligence. This has left regulation to rest on self-regulation, frameworks, and roadmaps for adoption (Floridi, 2019). In other words, oversight has been left to ethical frameworks because legal frameworks have not yet been developed. However, this is not a sustainable solution, since AI touches on potent issues such as privacy and human safety.
The Australian service market has in recent years been driven by technology, in particular the adoption of artificial intelligence tools to boost the provision of goods and services by different institutions, including government, educational, business, and other private institutions. The growth of AI has skyrocketed in Australia, with many entities relying on it for their day-to-day activities. Several notable companies operating in Australia, such as Google Australia, Microsoft, and Oracle Corporation, use AI in their operations. The benefits accruing from AI include automation, which boosts efficiency; the execution of tasks that are difficult or dangerous for humans; and more convenient, faster execution overall (Mitchell, Michalski & Carbonell, 2014). This trend is predicted to continue as the world embraces artificial intelligence as its future.
However, the adoption of new technology at this pace presents significant challenges. AI has the potential to violate basic human rights (AHRC, 2018). It can undermine the right to privacy: as discussed earlier, AI relies on collecting data from users to create the reservoir from which it learns, and this can sweep in sensitive data such as racial information, political, religious, and philosophical inclinations, trade union membership, and biometric information (Coeckelbergh, 2019). AI can undermine the right to education through reliance on algorithm-based decisions for education accessibility (Liu et al., 2019), and the right to a fair trial can be undermined in a similar way. Safety and security rights can likewise be violated when AI is embedded in services meant for members of the public. Inequality is another emerging problem: unequal access to technology can create a social division that excludes those without it. Disruption of the labor market is a further harm, as AI-driven automation can see employees laid off and replaced by smart machines.
AI is a powerful new tool that can reshape the world, but it comes with its share of disadvantages. Because AI is still at an early stage, there has been little to no legislation to ensure regulation of the field. AI presents the country with a new legal paradigm that has no precedent. The purpose of regulating AI should be to ensure that fundamental human rights are not violated; at the same time, regulation should be designed to support the growth and development of the technology. According to the AHRC (2018), regulations on AI should revolve around enhancing transparency, non-discrimination, and accountability.
The world continues to evolve, and so does computational power. New technologies have developed faster than legal or regulatory mechanisms and capacities (Guihot, Matthew & Suzor, 2017). This could be due to several reasons, including the decentralization of regulatory authority, a lack of government resources to establish regulatory agencies, and the increasing power of technology companies (Guihot, Matthew & Suzor, 2017). These factors have contributed to the slow pace of regulatory processes involving artificial intelligence.
According to the Australian Law Society, a balance is needed between rising levels of innovation in AI technologies and innovation in governance. Innovation in governance describes efforts to ensure laws are continually developed to match the growing functionality and reach of AI (Guihot, Matthew & Suzor, 2017). The exponential growth witnessed in the field needs to be matched by legal and regulatory frameworks to maintain this balance. Without it, there is potential for problems such as inequality, violation of human rights, and threats to human wellbeing and security (Floridi & Cowls, 2019).
The topic statement proposes the use of ethical frameworks as a means to achieve flexibility in regulation. Ethical frameworks are standards and measures based on principles of right and wrong. The increasing usage of, and dependence on, AI has raised concerns over potential harms as well as the ethical use of AI technologies. The fundamental principle is that AI should be tailored to honor human rights and be used for ethically accepted purposes (Daly et al., 2019). Certain issues have been raised in the AI-ethics debate, including transparency about how moral decisions are made, the ability of AI to make moral decisions at all, the lack of legal responsibility for automated devices, and accountability for decisions made by a machine (Ouchchy, Coin & Dubljević, 2020).
An ethical framework for the regulation of AI has several elements. The first is public trust. Over the years, AI has gained a negative perception in the public domain, and the media has had a hand in shaping how the public sees it (Racine et al., 2005). These perceptions lead the public to view AI negatively, as a potential source of harm to society, a replacement for employees, and a breach of privacy. The ethical framework aims to generate public trust so that people can embrace the adoption of AI in the country. The second element is to establish grounds for the ethical purpose of AI. The framework proposes that AI technology being developed and adopted should serve an ethical purpose: in essence, it should be tailored to address the issues facing society to a just, transparent, and morally acceptable standard.
The adoption of an ethical framework for AI considers many ethical principles to which AI development and adoption should conform (Floridi & Cowls, 2019). The principle of beneficence addresses concerns about improving human welfare and preserving human dignity; it aims to guide AI to be tailored for human benefit. According to the Montreal Declaration (Floridi & Cowls, 2019), the principle insists on designing AI to ultimately promote the wellbeing of all creatures. This guides the pursuit of human welfare and empowers people to develop themselves using the tools AI makes available.
The second principle of the ethical framework is non-maleficence. This covers all concerns relating to privacy, safety, and the security of society (Dawson et al., 2019). The principle urges AI developers to consider how their systems will respect these fundamental human rights in operation: AI is expected to respect human privacy and security and should do no harm during its use. It also provides guidelines for organizations and institutions using AI to mitigate potential threats to human safety and privacy (Lee, Kwon & Lim, 2017).
Another ethical principle is autonomy. At the very core of AI is the ability of its technologies to make decisions based on algorithms and data. However, there need to be limits on which decisions can and cannot be made by AI. This principle aims to ensure a productive balance between human and machine decision-making, proposing that human autonomy be given priority over machine autonomy (Floridi & Cowls, 2019). It thus preserves the power of human beings, rather than AI, to make critical decisions.
The ethical framework also proposes the pursuit of justice in the development and adoption of AI (Daly et al., 2019). The pertinent issues under this principle are equality, societal empowerment, unity, and fairness. The framework proposes that AI be tailored to benefit society as a whole, reducing divisions arising from the unequal distribution of AI technologies among members of the public.
Accountability and responsibility for AI technology form the basis of the ethical framework (Floridi & Cowls, 2019). Accountability requires AI developers to address who will be held answerable for decisions made by an automated AI system. The responsibility element addresses legal responsibility and liability for any harm caused or decision made by AI. This principle also provides guidelines for promoting the transparency of AI technologies: developers need to make available information about why particular decisions are made. By upholding these crucial elements, the ethical framework can indeed provide a regulatory mechanism for AI.
However, despite these principles providing a framework for guiding the adoption and use of AI, there is no guarantee that companies will adhere to them. The current ethical frameworks rely heavily on AI-developing companies to regulate themselves, and while these companies have integrated measures to govern their AI technologies, the AI industry is too crucial a part of the nation to be left without formal oversight. The AHRC has proposed the adoption of a national strategy to ensure that AI is governed by core democratic principles. Legal frameworks need to be devised to ensure sufficient and comprehensive monitoring of the use of AI in the country; they provide the foundation on which guidelines directing the lawful conduct of AI technologies are laid.
The problem with ethical standards is that no legal framework binds AI companies to adhere to them (Walz & Firth-Butterfield, 2018). There are many ethical recommendations regarding the use of AI (Floridi, 2019), and this very abundance gives companies the opportunity to pick whichever recommendations justify their actions (Floridi, 2019). This is a serious loophole that can be used to excuse the failure of AI to respect core ethical principles. With a legal framework in place, by contrast, there are legal consequences when AI violates set rules and regulations. The ethical frameworks and initiatives established by AI companies sound very good on paper but fail in practice. Issues such as privacy and security are especially potent because of the political concerns bound up in them (Mittelstadt, 2019); they require a clear and concise legal framework that addresses them from a legal perspective, deriving power from the constitution of the country.
AI is developed mostly for commercial purposes, to turn a profit (Lee, Kwon & Lim, 2017). Developers work under pressure to pursue company goals first, with little regard for the implications of the technology for humans (Mittelstadt, 2019). For this reason, relying on an ethical framework is not sustainable; legislation needs to be enacted to create a legally binding mechanism that these companies must adhere to.
The field of AI is a relatively new endeavor with no history that can be referred to for precedent on ethical behavior (Mittelstadt, 2019). Such precedent would form the basis for a more binding approach to ethical frameworks for AI. There have been no transformative moments in the history of AI that would have established a precedent for adopting a standard code of ethics in the field. This limits the use of ethical frameworks as governing principles for the current and future use of AI.
Artificial intelligence is here to stay. With its capabilities, the human race is now in a position to move into the Fourth Industrial Revolution (Pollock, 2018). AI can revolutionize the health, education, military, government, economic, and industrial sectors. Automation, efficiency, convenience, speed, reliability, and the handling of complexity are all promises AI has offered the human race. However, it brings challenges such as privacy, security, and inequality of use and access. Efforts to regulate this paradigm shift have so far centered on advocating ethical frameworks, whose principles, as this paper has shown, are tailored to ensure the wellbeing of the human race. Yet this is not enough to regulate such a potent field. In my view, legal frameworks need to be put in place to guide the direction in which artificial intelligence is headed; ethical frameworks should complement those legal frameworks, not act as the sole guidance for AI.
Australian Human Rights Commission. Artificial intelligence: Governance and leadership (white paper). https://tech.humanrights.gov.au/sites/default/files/2019-12/AHRC%20WEF%20White%20Paper%20online%20version%20FINAL.pdf (accessed 5 October).
Coeckelbergh, M. (2019). Artificial intelligence: some ethical issues and regulatory challenges. Technology and Regulation, 31-34.
Daly, A., Hagendorff, T., Li, H., Mann, M., Marda, V., Wagner, B., … & Witteborn, S. (2019). Artificial Intelligence, Governance, and Ethics: Global Perspectives. The Chinese University of Hong Kong Faculty of Law Research Paper, (2019-15).
Dawson, D., Schleiger, E., Horton, J., McLaughlin, J., Robinson, C., Quezada, G., … & Hajkowicz, S. (2019). Artificial intelligence: Australia’s ethics framework. Data61 CSIRO, Australia.
Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology, 32, 185–193. https://doi.org/10.1007/s13347-019-00354-x
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1).
Guihot, M., Matthew, A. F., & Suzor, N. P. (2017). Nudging robots: Innovative solutions to regulate artificial intelligence. Vand. J. Ent. & Tech. L., 20, 385.
Law Society of New South Wales. (2019). Letter to the Law Council of Australia: Artificial intelligence – governance and leadership, 8 March 2019. https://www.lawsociety.com.au/sites/default/files/2019-04/Letter%20to%20LCA%20-%20Artificial%20Intelligence%20-%20governance%20and%20leadership%20-%208%20March%202019.pdf (accessed 5 October 2020).
Lee, K. Y., Kwon, H. Y., & Lim, J. I. (2017, August). Legal consideration on the use of artificial intelligence technology and self-regulation in the financial sector: focused on robo-advisors. In International Workshop on Information Security Applications (pp. 323-335). Springer, Cham.
Liu, H. W., Lin, C. F., & Chen, Y. J. (2019). Beyond State v Loomis: artificial intelligence, government algorithmization, and accountability. International Journal of Law and Information Technology, 27(2), 122-141.
Mitchell, R. S., Michalski, J. G., & Carbonell, T. M. (2013). Machine learning: An artificial intelligence approach. Berlin: Springer.
Nilsson, N. J. (2014). Principles of artificial intelligence. Morgan Kaufmann.
Ouchchy, L., Coin, A., & Dubljević, V. (2020). AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media. AI & SOCIETY, 1-10.
Pollock, D. (2018). The Fourth Industrial Revolution Built on Blockchain and Advanced with AI.
Racine, E., Bar-Ilan, O., & Illes, J. (2005). fMRI in the public eye. Nature Reviews Neuroscience, 6(2), 159-164.
Walz, A., & Firth-Butterfield, K. (2018). Implementing Ethics into Artificial Intelligence: A Contribution, from a Legal Perspective, to The Development of An AI Governance Regime. Duke L. & Tech. Rev., 17, i.