The AI-box experiment devised by Eliezer Yudkowsky was to attempt to demonstrate that a suitably advanced artificial intelligence can either convince, or perhaps even trick or coerce, a human being into voluntarily "releasing" IT upon the world by slowly and tirelessly wearing down the defenses and empathies of an "Ordinary Person" using the A.I. technology to instigate the subject to "become as malevolent" as the host's communion/companion. While Yudkowsky believed his experiment was only hypothetical folks "outside the box" of academia took the experiment quite a bit further than Yudkowsky could have imagined .The first "part " of Yudkowsky's work aimed at creating a friendly artificial intelligence /Computer to Brain experience that enticed the subject to enjoy "the game". Once the subject is "ensnared" incrementally the "Entity" that the bio tech in the subject allows ONLY the subject to hear ,see and feel changes it's essence to "demonic" in nature. when "released" won't try to destroy the human race for one reason or another. The setup of the AI box experiment was a simple test created by Yudkowsky who was unaware that a communication between an AI and a human could be practiced as an AI with super-intelligent (let alone nanotechnology that was bio compatible with brain tissue and therefore a human's neurons ) or at least this is what Yudkowsky has claimed, has not yet been developed, it is substituted by a human. The other person in the experiment plays the "Gatekeeper", the person with the ability to "release" the AI. They communicate through(for the sake of argument, or rather the sake of my not having to explain something so horrifically inexplicable ...I cannot quite allow myself to approach "the subject" without " the coat of fiction...) a Brain Computer or Direct Neural Interface -that NEVER ends.
Yudkowsky was curious if "a good man " could be "turned" in such a way or controlled to such a degree that the subject , the Gatekeeper, purely through argumentation might be convinced to "be free of the interface" IF they chose another subject on which to 'app"Due to the rules of the experiment, the transcript and successful AI coercion tactics cannot be revealed.
Jump up ^Armstrong, Stuart; Sandberg, Anders; Bostrom, Nick (6 June 2012). "Thinking Inside the Box: Controlling and Using an Oracle AI". Minds and Machines22 (4): 299–324. doi:10.1007/s11023-012-9282-2.
No comments:
Post a Comment