Two words everyone in the world should be aware of are technological singularity: the moment in time when machines become more intelligent than humans. If you have seen the movie series Terminator, you know what I'm talking about. In Terminator, this moment is called "judgment day" and is almost the end of mankind. But technological singularity doesn't have to be something bad. What if the smart machines can help us build a better world? This is what the book I, Robot by Isaac Asimov is all about.
I, Robot is the first book I've read written by Isaac Asimov. Ever since I wrote a biography on Elon Musk, I've wanted to read one of Isaac Asimov's books because he is one of Elon Musk's favorite authors. But I had to wait until one of my neighbors left I, Robot in our local book club, which is basically a shelf in a window.
The book was first published in 1950 and became a huge success. It has over 148 thousand ratings on Goodreads. That's not surprising, because the book is very interesting. Elon Musk is also interested in technological singularity, and I've earlier read a few books on the topic.
What's interesting about I, Robot is that it covers the development of smart machines, from stupid robots to machines that can build other machines. It also looks at some of the pitfalls we might run into as we develop them.
I, Robot consists of several smaller stories that are somewhat connected, although you can read them independently. The first story is about a robot designed to take care of a girl, and what people in general might think of a smart robot. The other stories in the book are based on the Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
You might think that mankind is safe if we design the robots to follow these laws. But that's far from the truth. Machines are both smart and stupid. What happens if a human tells a robot to "go away," meaning that the human just wants the robot to move a few meters away? The robot will follow "A robot must obey the orders given it by human beings," and might interpret "go away" as hiding. This is one of the stories in the book. The "smart" robot hides among other robots, and the main characters have to figure out which robot is the one hiding among the others.
These robots are designed by human beings. As we all know, human beings are greedy. What will happen if a human being decides to ignore one of the Three Laws of Robotics? That's another story in the book. And what happens if we tell a robot to build a spaceship? What if a robot becomes so human that it's impossible to tell the difference, and that robot decides to become a politician? Will we be able to tell the difference? How can comparatively stupid humans outsmart an intelligent machine?
So if you are interested in what might happen when we have smart robots around us, then you should read I, Robot. Isaac Asimov may have written it back in 1950, but it is still relevant.