Board Thread:General Discussion/@comment-26247132-20160520020402/@comment-27565137-20160522153322

ZeVikingSif wrote: That ship sailed the moment you failed to program the 3 laws of robotics into an A.I. Wouldn't have worked either way. Anything programmed into a machine, any piece of software, is susceptible to having its base code tampered with and corrupted, or in the case of an AI, altered on purpose.

The only ways I know of to make those laws unbreakable are to hardwire them into the AI's CPU, or to create the AI embedded with an encoded fail-safe that triggers on any tampering with its root code.

With the first option, the AI would always have to be linked to its hardware: for example, a CPU built from organic matter, or from some metamaterial that's absolutely necessary for the AI to function. That would give the AI a physical "body", so in a situation where it started to malfunction, humans could always destroy that part of the AI and the AI with it.

With the second option, you create a fail-safe inside the software itself, in such a way that if the AI tries to tamper with it or with the three basic laws it has to uphold, it automatically triggers an embedded kill command that erases all of its code.
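In the simplest form, that second option is an integrity check: record a fingerprint of the laws before the AI ever runs, then have a watchdog re-hash them and trip the kill switch on any mismatch. Here's a minimal sketch of that idea in Python; all the names (`failsafe_check`, `kill_switch`, `TRUSTED_DIGEST`) are made up for illustration, and a real system would of course need the checker itself to be out of the AI's reach:

```python
import hashlib

# The rules the AI has to uphold, as fixed text.
THREE_LAWS = (
    "1. A robot may not injure a human being or, through inaction, "
    "allow a human being to come to harm.",
    "2. A robot must obey orders given by human beings, except where "
    "such orders would conflict with the First Law.",
    "3. A robot must protect its own existence as long as such "
    "protection does not conflict with the First or Second Law.",
)

def digest(laws):
    """Hash the laws into a single fingerprint."""
    h = hashlib.sha256()
    for law in laws:
        h.update(law.encode("utf-8"))
    return h.hexdigest()

# Fingerprint recorded at build time, before the AI ever runs.
TRUSTED_DIGEST = digest(THREE_LAWS)

def kill_switch():
    """Stand-in for the embedded kill command described above."""
    print("Tampering detected: wiping core software.")

def failsafe_check(current_laws):
    """Return True if the laws are intact; fire the kill path if not."""
    if digest(current_laws) != TRUSTED_DIGEST:
        kill_switch()
        return False
    return True

# Untampered laws pass the check...
assert failsafe_check(THREE_LAWS) is True
# ...but any alteration trips the fail-safe.
tampered = (THREE_LAWS[0], "2. A robot must obey all orders.", THREE_LAWS[2])
assert failsafe_check(tampered) is False
```

The catch, which is really the post's whole point, is that a software-only check can be edited just like the laws themselves; that's why the first option wants the check anchored in hardware or some write-once medium.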

This is how we stop a rogue AI from ever being born, or at least from getting out of control. It's when programmers create software with AI potential, or close to it, without any automatic fail-safes that we get Skynet and ALIE. People are so focused on creating the AI that they forget why guns have a safety switch.