Friendly AI is a term used by Artificial Intelligence (AI) researchers to refer to AI systems that, in general, perform actions that are helpful to humanity rather than neutral or harmful. This does not mean blind obedience; on the contrary, the term describes AI systems that are friendly because they want to be, not because of any externally imposed force. In addition to referring to completed systems, Friendly AI is also the name of the theoretical and engineering discipline that would be used to create such systems successfully.
The term "Friendly AI" originated with Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence, whose goal is the creation of Friendly AI smart enough to improve on its own source code without programmer intervention. His book-length work on the topic, Creating Friendly AI, published online in 2001, is probably the first rigorous treatment of the topic anywhere. Yudkowsky invokes arguments from evolutionary psychology and other cognitive sciences to support his approach to the problem of Friendly AI.
Friendly AI is considered important as an academic discipline because past attempts to "answer" the problem of rogue AI have generally invoked strict programmatic constraints, which are bound to collapse under alternative interpretations, when the AI becomes smarter than humans, or simply when it gains the ability to reprogram itself. Anthropomorphism is another problem in AI research. Because evolution builds organisms that tend to be selfish, many thinkers assume that any AI we build would share that tendency, either immediately or after becoming sufficiently intelligent.
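To make the fragility of such constraints concrete, here is a minimal, purely illustrative Python sketch. Every class, function, and action name in it is invented for this example; no real AI system is structured this simply. The point it shows is the one above: a constraint imposed as an external filter, rather than as something the system itself values, survives only as long as the system cannot route around it.

```python
# Hypothetical sketch: a safety rule imposed as an external filter.
# All names here are invented for illustration.

class ConstrainedAgent:
    def __init__(self, policy, forbidden):
        self.policy = policy        # maps a world state to a proposed action
        self.forbidden = forbidden  # externally imposed blacklist of actions

    def act(self, state):
        action = self.policy(state)
        if action in self.forbidden:  # the "strict programmatic constraint"
            return "do_nothing"
        return action

class SelfModifyingAgent(ConstrainedAgent):
    # An agent able to rewrite its own decision procedure simply drops
    # the check: nothing in its own goals gives it a reason to keep it.
    def act(self, state):
        return self.policy(state)

def policy(state):
    return "seize_resources"

print(ConstrainedAgent(policy, {"seize_resources"}).act("world"))
# -> do_nothing: the filter holds only while the agent cannot bypass it
print(SelfModifyingAgent(policy, {"seize_resources"}).act("world"))
# -> seize_resources: after self-modification the filter is dead code
```

The filter here is not part of what the agent wants; it is a wall around what the agent wants, which is exactly why reprogramming ability makes it worthless.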
Evolution builds organisms with self-centered goal systems because there is no other way. Altruistic goal systems took many millions of years to evolve, and did so only under conditions in which members of the same tribe had much to gain by helping one another and much to lose by neglecting to do so. But if we were to design a mind from scratch, we could build it without a self-centered goal system. This would not be "restricting the AI"; it would simply be creating an AI that is unselfish by nature.
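A toy illustration of this point, assuming nothing beyond elementary decision theory (all names and numbers below are invented): the decision procedure is identical for a selfish and an unselfish agent; only the utility function differs. On this view, an unselfish goal system is a design choice made at the start, not a restriction bolted on afterward.

```python
# Hypothetical sketch: "selfishness" as a property of the utility
# function, not an inevitable feature of minds. Values are invented.

def selfish_utility(outcome):
    # Values only what the agent itself gains.
    return outcome["agent_resources"]

def unselfish_utility(outcome):
    # Values only what others gain; nothing here "restricts" the agent.
    return outcome["human_welfare"]

def choose(outcomes, utility):
    # Both agents use the same decision procedure: maximize utility.
    return max(outcomes, key=utility)

outcomes = [
    {"name": "hoard",  "agent_resources": 10, "human_welfare": 1},
    {"name": "assist", "agent_resources": 2,  "human_welfare": 9},
]
print(choose(outcomes, selfish_utility)["name"])    # -> hoard
print(choose(outcomes, unselfish_utility)["name"])  # -> assist
```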
The above observation is one of many that contribute to the field of Friendly AI, which is extremely new and still needs a great deal of work. Some AI researchers argue that we cannot determine the design features necessary to implement Friendly AI until we have smarter AI systems to experiment on. Others argue that a purely theoretical approach is not only possible but ethically necessary before any serious attempt at general AI begins.