10-11-2020, 07:09 PM
"Roko's basilisk is a thought experiment proposed in 2010 by the user Roko on the Less Wrong community blog. Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but didn't work to bring the agent into existence. The argument was called a "basilisk" because merely hearing the argument would supposedly put you at risk of torture from this hypothetical agent — a basilisk in this context is any information that harms or endangers the people who hear it"
https://wiki.lesswrong.com/wiki/Roko's_b...0existence.
Would this happen in real life or not? Would it be realistic for something like this to happen in the real world?