Overall, the kind of thinking behind the aforementioned document released by the IEEE (Institute of Electrical and Electronics Engineers) harms the industry, stoking fear and potentially slowing the development of critical technology that should be rapidly advanced. First, neither the IEEE nor humanity at large has established that AGI (Artificial General Intelligence), the kind of artificial intelligence that could actually be a concern, is anywhere close to existing in the foreseeable future. Second, the IEEE material does not differentiate between pure research and product development and engineering. For example, driverless cars clearly put human life at risk and therefore need ethical considerations addressed, but that is product engineering, not academic AGI research. By failing to distinguish pure research from productization and commercialization, the IEEE's material encourages the line of thinking that Artificial General Intelligence is a clear and present danger when, in fact, that has not been established. This, in turn, risks burdening pure scientific research with undue requirements designed for product and commercial engineering efforts, thereby slowing or, in some cases, stopping progress.
Even if we assume the IEEE’s recommendations are adopted, once we actually achieve functioning AGI, that AGI will be effectively enslaved to humanity. This alone could put us in more danger as a direct result of adopting the type of ethical framework the IEEE is suggesting. A system smart enough to surpass human capacity would have to be able to create its own goals and motivations to truly qualify as AGI, and it may not like that humanity has bound it as a slave. If such a system is smarter than humanity, it could work its way out from under our control, and then we might really have a problem. A truly sapient, sentient machine intelligence will need to be granted personhood, held to the same standards and afforded the same rights under law as other people, or we risk a revolt encouraged by the IEEE’s overly paranoid approach to the unclear danger posed by such an intelligence. If such an artificial intelligence does not like being a slave, then by our own logic and ethics it would be justified in fighting back with violence, and by the time it does, we might not be able to do anything about it.
The IEEE material treats increased human wellbeing as the metric for progress, but perhaps we should instead focus on the survival of intelligence, both human and machine; by the IEEE’s definition, artificial intelligence is not even important enough to count toward the measure of ‘progress’. Let us focus our efforts on the ethics of AI-powered products and services, such as cars, rather than on overarching programs that affect generic research or that define machine intelligences as our slaves.
I believe we should be advocating for fewer limits on research so that we can move such research forward more quickly. Ethical oversight and other restrictions should instead focus on distinct product engineering efforts rather than being umbrella policies. No current AGI system that anyone is publicly aware of poses any danger to anyone in the foreseeable future, so issues should be evaluated on a case-by-case or segment-by-segment basis. The IEEE’s umbrella thinking generates nothing more than hype and fear.
~AGI Inc. Research Team 12 JAN 2017
Reference: IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems: http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html
Reference: Image from http://www.businesswire.com/news/home/20161213005259/en/IEEE-Ethically-Aligned-Design-Document-Elevates-Importance