Isaac Asimov was already thinking about these problems back in the 1940s, when he developed his famous "Three Laws of Robotics".
He argued that all intelligent robots should be programmed to obey the following three laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.