The Government’s Regulation of Artificial Intelligence

When asked about artificial intelligence (AI), people tend to describe it as a technology that thinks and acts as a human would. This popular image confuses AI's conceptual purpose with its actual applications. In an article published by Cambridge University Press, Nils J. Nilsson of Stanford University formally defines the field: "Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment."
The field is expanding rapidly, producing an urgent need to differentiate among the many applications, positive and negative, that the technology offers humanity. There is a clear gap between common beliefs and reality, and the famous Turing test blurs that line further by suggesting that true AI exists only when a computer's reply to a query is indistinguishable from a human's (Moor). This widespread misunderstanding of AI technology is damaging, and it currently leaves the industry to be regulated by uncredentialed individuals, a problem that must be resolved before the advent of widespread AI implementation. Government regulation of emerging and theoretical AI technologies is imperative for the protection and safety of humanity at large, though not necessarily in 2019.
An army of robots with nothing to lose is a dangerous opponent in a war. Though robotic singularity is distant, if not impossible, it is still worth considering when evaluating the efficacy of government-imposed regulation of AI. The singularity is one of the most frequently voiced fears about AI, yet it presently holds no merit; it is nothing more than speculation about a possible future outcome (Eden et al.). Basing governmental and self-imposed regulation solely on this nightmare scenario would hinder technological progress in the AI sector immensely. It is vital, however, to begin formulating legislation on some of today's applications of AI, such as facial recognition and the development of AI-controlled weaponry.
With the myriad benefits AI has already offered humanity without government or corporate oversight and regulation, it is easy to embrace its development without adequate research and comprehension. Like the airline industry and the internet in the 20th century, AI technologies require global introduction and scrutiny through the basic tenets of a tiered decision-making system (Froese and Ziemke). Such regulatory structures have proven effective in both digital and analog systems in the past. The lowest tier encapsulates the working AI systems that carry out specific tasks and missions. Above them sit higher-tier oversight officers who ensure strict subordination and the completion of duties within human-specified parameters. This system of control and regulation, seen daily in the workplace, is effective at generating favorable outcomes, as the sketch below illustrates.
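
To make the tiered idea concrete, here is a minimal, purely illustrative Python sketch. The class names, risk scores, and limits are assumptions invented for this example, not any existing standard or regulatory framework: a working-level agent proposes a task, and a higher oversight tier approves or rejects it against human-specified parameters.

# Hypothetical sketch of a tiered oversight structure.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    risk_score: float  # 0.0 (harmless) to 1.0 (severe); assumed metric


class WorkerAI:
    """Lowest tier: carries out specific tasks by proposing actions."""

    def propose(self) -> Action:
        # In a real system this would come from a model; hard-coded here.
        return Action(name="optimize delivery route", risk_score=0.2)


class OversightOfficer:
    """Higher tier: enforces human-specified parameters on every proposal."""

    def __init__(self, max_risk: float, banned_actions: set):
        self.max_risk = max_risk
        self.banned_actions = banned_actions

    def review(self, action: Action) -> bool:
        # Reject anything explicitly banned or above the allowed risk level.
        if action.name in self.banned_actions:
            return False
        return action.risk_score <= self.max_risk


if __name__ == "__main__":
    worker = WorkerAI()
    overseer = OversightOfficer(max_risk=0.5, banned_actions={"deploy weapon"})

    proposal = worker.propose()
    if overseer.review(proposal):
        print(f"Approved: {proposal.name}")
    else:
        print(f"Blocked: {proposal.name}")

The design point is simply that the working tier never acts on its own authority; every proposal passes through a layer whose limits were set by humans.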
Researchers, academics, and technologists must unite to formulate adequate and safe measures of formal AI regulation and oversight. Their knowledge should be compiled into a meta-review of the information and theories behind regulating AI without hindering its rapid growth today. Talk of robotic automation and AI-controlled factories is spreading, calling for an in-depth assessment of the benefits and consequences of AI for a shrinking international job market.
Soon, humanity will inevitably need to accept and adapt to the replacement of working-class individuals by smart robots. Though this is a frightening thought, given the working class's immeasurable contributions to the world of today, a properly regulated robotic working class offers incredible benefits to society at large. Humans would finally have the freedom to spend significantly more time with their children, families, friends, neighbors, and communities. Government regulation is vital to this imagined future; the risk of humans using AI for power grabs and other selfish ends is far too high with such advanced and powerful technologies.
Works Cited
Eden, Amnon H., et al. “Singularity Hypotheses: An Overview.” Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon H. Eden et al., Springer Berlin Heidelberg, 2012, pp. 1–12. Springer Link, doi:10.1007/978-3-642-32560-1_1.
Froese, Tom, and Tom Ziemke. “Enactive Artificial Intelligence: Investigating the Systemic Organization of Life and Mind.” Artificial Intelligence, vol. 173, no. 3, Mar. 2009, pp. 466–500. ScienceDirect, doi:10.1016/j.artint.2008.12.001.
Moor, James. The Turing Test: The Elusive Standard of Artificial Intelligence. Springer Science & Business Media, 2003.
Nilsson, Nils J. The Quest for Artificial Intelligence. Cambridge University Press, 2009. Cambridge Core, doi:10.1017/CBO9780511819346.
