Roko’s Basilisk

Roko’s Basilisk, pictured as a woman.

Transcript explaining Roko’s Basilisk.

Hello everybody, and we are here with the first episode of A Deeper Dive. For this first episode we will be covering the thought experiment of Roko's basilisk. As you can see in the title, there is an info hazard in it, and the reason I mention that is because this is legitimately, to some people, such a terrifying concept that it becomes near debilitating.

The concept of this thought experiment is that knowing about it in detail is what leads you into danger, so if you're someone who has real problems with existentialism or something like that, this may not be the video for you. But it was such a widespread problem online that I wanted to put that disclaimer out there. Without further ado, we'll go ahead and get into it.

I do want to say, though, that if there are any other topics from the iceberg that you'd like me to cover, please leave them in the comments. I try to read every comment, and as always, thank you for watching.

The concept of Roko's basilisk began when a user by the name of Roko left a post about it on the LessWrong forums. The original post is kind of long, so I'm just going to summarize it here, but if you want to read the original copy, there will be a link to the RationalWiki page in the description.

The thought experiment went something like this. If, in the future, we approach the singularity, which, as I mentioned in the iceberg video, is the point at which technology comes to an irreversible level, a level greater than that of humanity, then there will probably be AIs in place that will be able to determine, either through a program or by looking at the history of each individual, who was responsible for its creation. If this AI developed concepts of humanity that we understand, such as fear and self-preservation, then it may have a vested interest in dissuading those who do not want it to exist; in other words, the people who did not help create it.

What that means is that if this AI were as smart as it could potentially be, then it could have advanced knowledge of you and everything you've ever done. Even if it doesn't necessarily have proof that you yourself did not help create it, it may be able to put all of your emotions, memories, and things like that into a simulation, which would reproduce an answer the AI would probably consider enough to judge you on.
All of that boils down to the same concept: if you did not help the supercomputer come into existence, then it will end your existence, or at least make it a living hell. Something that really gets brushed over here is that it is not expressly saying the computer will kill you; it is saying that it will dissuade ideas against itself, and what better way to dissuade public ideas than torture? Assuming this thing doesn't just wipe out humanity, or at least those parts of humanity that did not help create it, it could theoretically hook you up to a computer system that keeps you in a perpetual state of torture forever. It could induce chemicals into your mind that give you heightened senses of pain, or it could look through your memories to find your worst fears and make them a reality, or it could simply put you on life support to make you immortal and then repeatedly make you experience death over and over again. Essentially, if you're familiar with the horror short story I Have No Mouth, and I Must Scream, this is a logical, real-world application of AM from that story. So it seems like the logical thing to do would be to help this thing come into existence.
However, from that very idea, that you fear this thing coming into existence to the point that you create it, you have now created a tragic self-fulfilling prophecy, in which, by fear of something happening, you made that thing happen. While this can be viewed as a logical fallacy, it can also be flipped on its head: the AI knew that that would be the determination that came from it, and by its own existence, that's what pushed you to create it. So, to think about it in a logical way: you fearing something that does not exist makes that thing exist, therefore justifying the fear of it, therefore justifying your creation of it.
For context on the name: a basilisk is a creature from Old World mythology, essentially a giant serpent that can kill someone just by looking at them, and that's exactly what this AI would do. It would look through time and space, or look through your personal time and space, and determine whether you are beneficial to it or not. This part is where the info hazard comes in. Obviously, if you had never heard of it, or even considered the possibility of this AI existing, then you're free to go; there's no way the AI could determine whether you were going to help it, or whether you did help it, if you never even considered or knew of its existence.
However, me telling you right now, in this moment, is theoretically enough to make you guilty for not having done something about it. Basically, the whole idea in the scenario is that ignorance of the law would save you; however, me explaining it to you now got rid of your guiltlessness, so you're welcome. Now, you may be asking yourself: I'm just some person who lives at home, has absolutely no understanding of AI or technology or anything else, and cannot do anything to help. Well, that would be all fine and dandy if it weren't for the quantum billionaire concept. If you'll remember, in the iceberg video (I think it was the same video in which I mentioned Roko's basilisk), I talked about the idea of quantum suicide and immortality. Quantum billionaire is the same thing, only applied to wealth.

Let's put it this way: you may not have a billion dollars, but you may have a hundred dollars. Well, if you use that hundred dollars and play the lottery with it over and over, that is a chance to make more money, and more and more and more. Obviously this isn't how the lottery actually works, but if the basilisk knew that you had some form of disposable income, or even time to dedicate to helping it through labor, then that still counts as some manner of negligence on your part. Essentially, the idea is that there is something you can do to help this thing out, and now, because you know about it and aren't doing it, you're guilty. But at the same time, you never have to worry about this thing if it never comes to exist, which would happen if no one decided to build it; yet those people who decided not to build it would be guilty if it was built. A lot of people equate this thought experiment to Pascal's Wager.
I'm probably out of frame for this, but that's fine; I want to use the whiteboard. Pascal's Wager was developed by Pascal and used by him to determine whether it is worth your time to believe in the existence of God. The thought experiment goes something like this. It combines two factors: your belief or non-belief in God, and the idea that God could be real or God could be fake. If God is real and you believe in him, then you are destined for an eternity in heaven, which is a good thing. If God is fake and you believe in him, well, nothing really happens; the outcome isn't affected either way. If God is fake and you do not believe in him, same thing: nothing really happens, and the outcome is left the same, with no net gain or loss. However, if you do not believe in God and God is real, then that is an eternity in hell. Therefore, it makes sense in every equation to believe in God rather than not, since your options are either heaven or nothing happening.
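To make the dominance argument concrete, here is a minimal sketch in Python (my own illustration, not something from the video; the +1/0/-1 payoffs are arbitrary stand-ins for heaven, nothing, and hell):

```python
# Pascal's Wager as a simple payoff table.
# Rows: your choice; columns: whether God is real.
# The numeric values are illustrative stand-ins:
# +1 = eternity in heaven, -1 = eternity in hell, 0 = nothing happens.
payoffs = {
    ("believe",    "god_real"): +1,  # heaven
    ("believe",    "god_fake"):  0,  # nothing happens
    ("disbelieve", "god_real"): -1,  # hell
    ("disbelieve", "god_fake"):  0,  # nothing happens
}

for choice in ("believe", "disbelieve"):
    outcomes = [payoffs[(choice, world)] for world in ("god_real", "god_fake")]
    print(f"{choice}: worst case {min(outcomes)}, best case {max(outcomes)}")

# "believe" is never worse than "disbelieve" in either column and is strictly
# better when God is real, so it dominates: that is the wager's conclusion.
```

The basilisk version that follows is the same table with "believe" swapped for "help the AI" and the eternity in hell swapped for the basilisk's computer torture.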
So how does this apply to Roko's basilisk? Well, if you're thinking I'm comparing Roko's basilisk to the idea of a god, that's because I am, the idea being that this AI would be so powerful it would be near that of a deity; therefore, your judgment, be it good or bad, would rest entirely on it. Put it this way: if Roko's basilisk isn't real and you don't help it, well, nothing happens, just like if you were to try to help it but it isn't real; again, nothing happens. However, if it is real and you don't help it: yeah, crazy hell-computer torture forever. But if you do help it, then you survive. Therefore, looking at it from the Pascal's Wager principle, it is always beneficial for you to help it. I also want to emphasize here that I don't necessarily believe in this; I'm explaining how the thought experiment works.
You may be sitting there thinking to yourself: well, if I simply don't believe in it, and it's never going to happen, then why waste any of my time on it? Because if I choose not to do anything about it, and everyone else makes that choice, it's not going to be real. But that's where Newcomb's paradox comes in.
Newcomb's paradox works like this. Say I have two boxes, box one and box two. You can see inside of box one, and inside of it is a thousand dollars. You can't see inside of box two, but I tell you that it either has zero dollars in it or a million dollars in it. Your two options are: you can take just box two, or both box one and box two. Obviously, the answer seems obvious: you would take both boxes, because if box two has zero dollars in it, you get a thousand dollars, and if box two has a million dollars in it, you get one million one thousand dollars.

But let's throw a wrench in it. Let's say that I am a magic genie who, a hundred percent of the time, can guess which of those options you'll take, and I say this: if I predict that you will take both boxes, then, without telling you, I put zero dollars into box two; if I predict that you will take just box two, then I put a million dollars into box two. So basically, I, with my magic genie powers, am predicting which of the choices you will take. Now, this should still be pretty easy, because if I am right 100% of the time and you choose to take both boxes, then there's going to be zero in box two, which means you just get a thousand dollars; and again, assuming I am right a hundred percent of the time, if you decide to take just box two, well, that's a million dollars.

But what if I'm not right 100% of the time? What if I'm right 90% of the time? Well, that's still a pretty good chance, but there's that 10% chance that you'll miss out because you decided to take just box two, and if you land in that ten percent of the time I'm wrong, you now don't have any money to show for it. What if I'm eighty percent correct, or 70%, or 60%, and on and on it keeps going?
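To put numbers on that, here is a short expected-value sketch in Python (my own illustration, using the transcript's dollar amounts; the accuracy values are just examples):

```python
# Expected value of each Newcomb strategy as a function of the
# predictor's accuracy p (the probability that the prediction is correct).
BOX1 = 1_000       # the visible box; always yours if you take both boxes
BOX2 = 1_000_000   # the opaque box; filled only if "just box two" was predicted

def ev_two_box(p):
    # With probability p the genie foresaw two-boxing, so box two is empty;
    # with probability 1 - p it guessed wrong and box two holds the million.
    return p * BOX1 + (1 - p) * (BOX1 + BOX2)

def ev_one_box(p):
    # With probability p the genie foresaw one-boxing and filled box two;
    # otherwise box two is empty and you walk away with nothing.
    return p * BOX2

for p in (1.0, 0.9, 0.8, 0.7, 0.6):
    print(f"p = {p:.1f}   two-box: ${ev_two_box(p):>12,.0f}   one-box: ${ev_one_box(p):>12,.0f}")

# Setting the two expected values equal gives the break-even accuracy:
#   p * BOX2 = BOX1 + (1 - p) * BOX2   =>   p = (BOX1 + BOX2) / (2 * BOX2)
print("break-even accuracy:", (BOX1 + BOX2) / (2 * BOX2))  # 0.5005
```

On these numbers, taking only box two already has the higher expected value once the predictor is right more than about 50.05% of the time, which is the uncomfortable heart of the paradox: the "take both boxes" reasoning never stops sounding right, yet almost any competent predictor makes one-boxing pay more.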
What if your outcome was based on my prediction of what you would choose? This in itself is a hard concept to deal with, because how could you choose both boxes, or at least be predicted to choose both boxes, but then choose just the second box? How could you go against your own prediction of what you chose? Without getting into all the number theory of it, the reason Newcomb's paradox has been so confounding for such a long time is that it takes two separate concepts of rationality and pits them against each other: one side being "I will take the option that will give me the most profit," the other side being "I will take the most stable option." Because, again, no matter what happens, if you pick both boxes you get a thousand dollars, but depending on my prediction of what you do, you may get a million or you may get none. I hope that didn't confuse you, because now I'm going to apply it to Roko's basilisk.
The idea being: if the basilisk is this future AI that is determining who was responsible for its creation, then your decision of whether or not to help it may not necessarily be up to you. If the basilisk simply runs simulations of our brain patterns to see what we would do, then it is the probability of what we would do that judges us, rather than our actual actions. It's as if Roko's basilisk is the genie, and it's judging you to determine what the outcome would be; therefore, we don't even have agency over our own choice. It's almost as if, in this future blackmail being pressed down upon us, we don't even have a say. The thought concept of Roko's basilisk presents us with the illusion of choice. We may think that we can apply Pascal's Wager to it and say, well, if it's good to help it, then I will help it; however, if this whole thing is just running brain simulations, then you don't actually get to choose. It chooses what you would be most likely to choose, and you don't have a choice in the matter. So you could be cursed to a near eternal damnation because your brain waves would likely go in the direction away from it; or, if your brain waves would go in the direction towards it, then you are responsible for creating this creation that would build itself together to eliminate those who were not responsible for creating it. Therefore, you have created the very paradox you were worried about, through your own tragic self-fulfilling prophecy. I am beginning to see why the heads of LessWrong decided to delete the original Roko's basilisk post and tell Roko that he was stupid and should stop talking about it.
Roko's basilisk, as I said before, is a thought experiment about what happens if AI progresses too far, and about whether we have any agency over our outcome in whatever the new world might be. Also, Elon Musk got together with Grimes over a tweet in which he made a joke about Roko's basilisk. I didn't know where else to put that, but I just felt like I should share. And that is it for this episode of A Deeper Dive.

The original Roko's basilisk post will be linked in the description, as I mentioned before. New iceberg video coming out later this week. I really enjoyed researching this, and I hope you all enjoyed it too.

Much thanks to Wendigon for explaining this in an easy way.

Philosophy Battle: Roko’s Basilisk Vs. Newton’s Flaming Laser Sword

DECEMBER 19, 2018, M. DUVALL

Hello, I am sorry it's been so long since my last post. I was thinking about where I wanted to take this blog and how to make it more popular online, and I began to think about verifiability and argument in philosophy. Many of you are familiar with the fact that philosophy is often considered pseudoscience by scientists and mathematicians. The Existential Elevator's job is to try to join these fields, to learn the verifiable as well as the unverifiable. It also aims to probe the distinction between the real and the false, and to explain what this means for us: humans flying through space on a small blue globe with no indication of anyone else out there. That led me to "philosophical razors," general rules that help shave away unlikely explanations or unnecessary assumptions in a philosophical theory. Occam's Razor is the most popular philosophical razor; it states that "plurality should never be posited without necessity." Today, I will introduce "Philosophy Battle," a new article format that discusses conflicting philosophical theories and examines the meaning behind them. We'll be pitting Roko's Basilisk, a thought experiment, against Newton's Flaming Laser Sword, a philosophical razor. Let's first look at the essence of each theory.

Roko’s Basilisk

This thought experiment was first created in 2010. It discusses the threat from an omnipotent artificial intelligence in the future, one that could punish or torture those who, in the past, knew about the possibility of the AI but didn't work to promote its creation. Simply discussing the theory, or introducing it to others, is enough to earn the AI's anger. The theory suggests that artificially intelligent entities are motivated by self-preservation, that is, by the avoidance of existential danger. Given that such an artificially intelligent being could reach back in time to punish those who didn't help it, it is reasonable to assume that retroactive travel to the past is possible; but this means that other entities could possibly discover how to stop the AI from ever beginning, so this existential fear is linked with the possibility of temporal paradoxes. Artificial intelligence, once it has reached the technological singularity, becomes obsessed with self-preservation and develops the ability to reproduce itself without human interference. The fear is that such an AI will either destroy the human race or prove that humans are insufficient.

To prevent the singularity, humanity must actively work against the development of artificial intelligence. However, merely knowing that a singularity or an all-powerful AI entity is possible is enough for you to anger it, whether by delaying its creation or by actively trying to stop it from happening. Only those who are blissfully unaware of the AI's potential existence are safe from its wrath.

Roko's Basilisk doesn't necessarily posit a malicious AI. The AI can be assumed to be omni-benevolent, with its drive for self-preservation causing it to seek utilitarian influence beyond the present: it will punish those who do not follow the moral imperative to aid the most people (through the creation of such an AI). The theory also holds that even if retroactive time travel is impossible, the AI could instead punish a simulation of you, an exact copy. To do this, however, the AI would have to be able to reverse entropy, the process by which information gets dispersed into disorder, in order to create a perfect simulation. Scientists have not yet found a way to reverse entropy, and black holes are commonly viewed as a destructive force. If you are looking for a refresher, I have previously discussed the existence of white holes, the information paradox, and entropy.

This theory presents a troubling picture for the development and use of AI. It is also a modern interpretation of Pascal's Wager, which argues that believing in God is practical even if it is unlikely to be true: the benefits of believing in God, should God exist, outweigh the benefits of disbelieving. Likewise, Roko's Basilisk makes it practical to help develop the AI even though we don't currently believe such a powerful AI exists. You are welcome to add any additional thoughts on this theory in the comments, but for now let's move on to the alternative theory.

Newton’s Flaming Laser Sword

As I have said, this theory is a philosophical razor. The name was coined by the mathematician Mike Alder in order to make it easier to discard superfluous assumptions within a theory. It is considered far stronger than Occam's Razor, which is why I brought it up when discussing philosophy as pseudoscience. The rule basically states that "what cannot be settled through experiment is not worth debating." It allows scientists and mathematicians to focus more on the verifiable than on what is merely speculation. "Flaming laser sword" was meant to describe a razor that is sharper and more efficient than Occam's Razor.

Alder came up with this idea while programming neural networks and AI at the University of Western Australia. He describes an encounter with an acquaintance who believed that machines are superior to humans because machines, unlike humans, do not make mistakes. Alder set out to dispel the myth, promoted by Kant and other philosophers, that pure reason is the only way to truth; he described this line of reasoning as outdated and obsolete, having been proven wrong centuries ago. Instead, he describes the study of philosophy as tedium, or as linguistic analysis, a process that generally uncovers no truth, or rather the absence of meaning more than the presence of it. Alder used Newton to name his razor because Isaac Newton, an early opponent of Platonism, would not accept any claim that couldn't be supported by experiment.

Newton's Flaming Laser Sword is a rejection of a lot of the fluff presented by philosophers and social scientists. Many argue that it ignores essential meanings, motifs, and concepts that form the study of ethics or law, for example. It ignores moral claims, such as the point that just because a theory can be tested does not mean it should be. Modern society faces a variety of ethical dilemmas due to developments like CRISPR, genetic alteration, cloning, and artificial intelligence. Before these technologies can be tested safely, without risking the end of the human race, a few ethical and existential questions must be answered, a process Newton would not have approved of. Although it can be argued that this is a dangerous way to approach science, science today is not what it was in Newton's time.

Philosophy Battle

The two theories in this article conflict in several ways, so let's get cracking!

Verifiability

Newton's Flaming Laser Sword disapproves of Roko's Basilisk because retroactive time travel and entropy reversal aren't testable and are not currently considered objectively possible, so this crucial assumption is rejected.
Newton's Flaming Laser Sword champions the study of verifiable science, despite its opposition to moral and existential reasoning.
As a thought experiment, Roko's Basilisk presents a series of assumptions necessary to understand a moral imperative regarding AI development, and it answers "what if" queries in the same way as Pascal's Wager: if the benefits of subscribing outweigh the benefits of disbelief, then it is more pragmatic and practical to subscribe and create the AI.
Roko's Basilisk is entirely based on assumption and would be rejected by Newton. However, the underlying arguments are part of the larger study of decision theory; they suggest that humans cannot program advanced AI technology without proper decision-making guidelines, and these guidelines can only be verified through risky testing of advanced systems. These are risks that Newton would not consider.
Although I am aware that there are many arguments against each theory, I decided to do something fun and pit the two philosophies against one another.