Artificial General Intelligences (AGIs)

Artificial General Intelligences are under development. These entities are purposely designed to replicate the human brain. Typically, the groups designing them propose to use their work to improve the treatment of human neurological problems, and then intend to encourage the AGIs to learn and think on their own.

From a systems point of view, AGIs are part of the Sapiens Plurum group that invented them. As a cognitive component of a larger system, they need not have innate drives of their own. They could instead respond to alerts and seek homeostasis in other entities, such as humans, which would help ensure that AGIs act to benefit humans. This makes the AGI's definition of ingroup and outgroup critical. Whose feedback is considered relevant? Who is part of We, and who are They?
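
The sketch below is a minimal, purely illustrative Python example of this idea, assuming a hypothetical agent and signal format (names such as HomeostasisDrivenAgent and Signal are invented for illustration): the agent has no innate drives, treats only feedback from its declared ingroup as relevant, and directs attention to whichever ingroup member is furthest from homeostasis.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Signal:
    """A bio-sensed alert from some entity (e.g., a stress reading)."""
    source: str       # who sent the signal
    reading: float    # observed value
    set_point: float  # that entity's homeostatic target


@dataclass
class HomeostasisDrivenAgent:
    """An AGI sketch with no innate drives of its own.

    Its only 'motivation' is to reduce how far ingroup members
    deviate from their own homeostatic set points.
    """
    ingroup: set = field(default_factory=set)  # who counts as "We"

    def relevant(self, signal: Signal) -> bool:
        # Feedback only matters if the sender is part of the ingroup.
        return signal.source in self.ingroup

    def urgency(self, signal: Signal) -> float:
        # Deviation from the sender's set point; 0.0 means homeostasis.
        return abs(signal.reading - signal.set_point)

    def choose_focus(self, signals: List[Signal]) -> Optional[Signal]:
        # Attend to the ingroup member who is furthest from homeostasis.
        relevant = [s for s in signals if self.relevant(s)]
        return max(relevant, key=self.urgency, default=None)


# Example: only signals from ingroup members drive the agent's behavior.
agent = HomeostasisDrivenAgent(ingroup={"alice", "bob"})
signals = [
    Signal(source="alice", reading=0.9, set_point=0.5),  # stressed, ingroup
    Signal(source="eve", reading=2.0, set_point=0.5),    # outgroup, ignored
]
focus = agent.choose_focus(signals)
print(focus.source if focus else "nothing to do")  # -> "alice"
```

In this toy framing, everything hinges on how the ingroup set is populated, which is exactly the We/They question raised above.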
 
Although this age-old dilemma remains, deriving AGI motivations from human emotion ensures that We does not become AGIs and They, humans.