Intelligent systems and AI have been in vogue for as long as I can remember. Growing up an avid reader of science fiction, I read Isaac Asimov's I, Robot with a high degree of interest. Asimov's Three Laws of Robotics are quite well known in both science fiction and philosophical circles.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Recently, with the advent of advanced technology and heavy dollar spending on AI, we have made progress in integrating AI into our everyday lives.
According to an IDC report (2017), worldwide spending on cognitive and artificial intelligence systems was forecast to reach $12.5 billion that year. A brief overview of research expenditure is also outlined in the above-mentioned post.
Thus, the Laws of Robotics, although still quite important, require an update.
A recent article from 2017 outlines the reasons why the Laws of Robotics need to be updated:
Dr. Peter W. Singer, a well-known expert on defense issues, has written on the subject of AI and Asimov's Laws of Robotics. His views on the Laws of Robotics are outlined in the following article from 2009.
A recent article published by MIRI (the Machine Intelligence Research Institute) in the Cambridge Handbook of Artificial Intelligence discusses four viewpoints on AI ethics.
1. The first section of the article explores the issues that may arise in the future of AI.
2. The second section focuses on the challenges of ensuring that AI operates safely as it approaches human-level intelligence.
3. The third section focuses on assessing the moral status of AI.
4. The fourth section explores how AIs may differ from humans in their ethical assessment.
The subject of AI remains an interesting, albeit controversial, topic. AI has not only been integrated into our everyday lives to improve our productivity; it is also used significantly in warfare. This gives rise to problems such as a technological arms race. Further problems include the possibility that intelligent systems may replace humans, leading to unemployment and the need to retrain the workforce, and perhaps even a point of 'singularity', when AI becomes more intelligent than humans.
The challenge, then, is to focus the research and the vast amount of funding towards societal benefit.
(More thoughts on the subject matter forthcoming)
Updated March 19th 2018.
Until recently, there had been no casualties attributed to the advent of AI. Then a self-driving Uber hit a pedestrian, marking the beginning of the dangers of AI.
Danger is ever present, and statistics point to more than 3,500 road accidents per day in the United States alone. This does not mean that we ban cars; we simply punish the driver for their negligence. In the case of self-driving cars, liability shifts from the driver to the manufacturer. This becomes complicated, as Uber itself is a limited liability company, and issues like this push the incident into the realm of business law. Although investigations are still pending, it is quite sorrowful to see that the thirst for productivity improvement, new technologies, and in many cases greed and avarice is what has brought us to this point in time. In the case of self-driving cars there is some saving grace, in that future self-driving cars will probably reduce carbon emissions by a large degree.
In one of my recent papers I point out how disruptive technologies can lead to such incidents. Disruptive technologies need to be closely monitored, as their effects are unknown; it is only with time that these effects come into play. Therefore, it is best to use these technologies in a controlled environment for as long as possible. Uber itself has many issues, both with its corporate structure and its treatment of employees. It is a dangerous experiment that is being hyped up by Silicon Valley enthusiasts and, at times, by certain political parties who play the competitive-advantage card when it comes to technological progress in the United States. Furthermore, the amount of money being invested in AI research makes it entirely possible for corners to be cut when it comes to the application of proper ethical principles. After all, recent events such as Theranos and Martin Shkreli have proved that corruption and the inability to apply ethical principles can be found even among the best and brightest.
What is more frightening about this incident is that it makes the use of technology riskier. Is my much-cherished MacBook going to be responsible for my death tomorrow? Perhaps I should not allow it that level of autonomy and keep my own judgement intact. Is the pleasure and increase in productivity worth the risk affiliated with it? Or perhaps we are putting too much thought into this, as a self-driving car can perhaps be compared to an elevator, which has caused countless deaths but is an integral part of our everyday lives.
AI holds within itself the potential to be classified as an existential risk, and this is an issue that should be taken very seriously: unlike an elevator, which is only used for limited purposes, AI has the ability to grow exponentially.
A draft of my paper on how exploitation occurs in a disruptive and unjust economy can be found at the following link: Disruptive Economy.
Here again I find it important to quote the words of Isaiah Berlin:
Berlin (1957): “when ideas are neglected by those who ought to attend to them – that is to say, those who have been trained to think critically about ideas – they sometimes acquire an unchecked momentum and an irresistible power over multitudes of men that may grow too violent to be affected by rational criticism.” (p. 167)
(Additional Thoughts Forthcoming)