I like the reference to Asimov’s Three Laws of Robotics; it seems appropriate to re-examine them in today’s context. I personally think we don’t yet fully understand what the average user wants out of a bot, so it feels a bit early to define what it should or should not do while it is still in its formative stage. Then again, maybe this is exactly the right time for humans to define its existence and protect themselves from potential dangers. Be that as it may, I thought it would be good to re-post the three laws here:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
You covered the first two, but the third is by far the most interesting and carries the most existential consequences.