Isaac Asimov formulated the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov himself made slight modifications to the first three laws in various books and short stories to further develop how robots would interact with humans and each other. He also added a fourth law, the Zeroth Law, which precedes the others:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These laws are widely recognized as inadequate to constrain the behavior of real robots.