2009-12-27

robot's constitution

8.10: pol/robotics 101/constitution not assured:
. the essential problem with robots
is that there is no built-in god-constitution;
with humans there is always the evil of
the supernatural's influence on our emotions,
but that evil always has the welfare of the future in mind,
whereas a war among robots could leave a world devoid of life .
. an omnipotent God will make sure that our collective programming
never results in robots devaluing life .
[12.21: this may happen by
bringing us very close to the edge of extinction;
so, we should put much energy into replacing this strategy
with one that is more deliberate and less destructive .
]
. there's going to be a dilemma of one life form vs another,
each assuring the destruction of the other:
eg, the matrix-bound humans vs the free humans .


8.25: pol/robotics 101/real hell begins in the matrix:
. one danger of the matrix is having no way to kill oneself;
if two rival developers of the matrix are at war,
they may use computer hacks to make a hell for the other side
with no death as an escape .


9.28: adds/robot constitution/the hal lesson:
. when using robotic police, we need to be careful about
making sure that the constitution is entirely explicit,
that we consider how our values might change,
and that we define when it's ok for a subset of us
to say that the values have changed
and it's ok to amend the constitution .
9.29:
. a perfect constitution should never need amending;
rather, it should say what our values are now,
the value space,
with the root value being to reach consensus,
or to arbitrate by majority when consensus isn't possible .
. get it all out there on the table .
. make sure the robot is negotiable to local needs,
not just holding the values of its programmer .
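. a rough sketch of that idea, in python
(the names and numbers here are entirely hypothetical,
not part of these notes):
the value space is declared up front,
changes go to consensus first and to majority arbitration second,
and local communities can renegotiate weights
only inside the declared space .

```python
# hypothetical sketch: an explicit, negotiable robot constitution.
# names (Constitution, propose_change, etc.) are illustrative only.

from dataclasses import dataclass, field


@dataclass
class Constitution:
    """Declares the value space up front and how disputes are resolved."""
    # the full value space: every value the robots may weigh, stated explicitly
    value_space: set[str]
    # current weights agreed on by the stakeholders (0.0 .. 1.0)
    weights: dict[str, float] = field(default_factory=dict)

    def propose_change(self, value: str, new_weight: float,
                       votes: dict[str, bool]) -> bool:
        """Root rule: seek consensus; arbitrate by majority when that fails."""
        if value not in self.value_space:
            raise ValueError(f"{value!r} is outside the declared value space")
        if all(votes.values()):                   # consensus reached
            self.weights[value] = new_weight
            return True
        if sum(votes.values()) > len(votes) / 2:  # fall back to majority
            self.weights[value] = new_weight
            return True
        return False                              # no change

    def negotiate_locally(self, local_weights: dict[str, float]) -> "Constitution":
        """A community may adjust weights, but only inside the declared space."""
        unknown = set(local_weights) - self.value_space
        if unknown:
            raise ValueError(f"values outside the constitution: {unknown}")
        merged = {**self.weights, **local_weights}
        return Constitution(self.value_space, merged)


# usage: the programmer's defaults, then a local community renegotiates,
# then an amendment passes by majority when consensus fails
base = Constitution(
    value_space={"preserve life", "obey law", "local custom"},
    weights={"preserve life": 1.0, "obey law": 0.8, "local custom": 0.3},
)
village = base.negotiate_locally({"local custom": 0.7})
changed = village.propose_change(
    "obey law", 0.9,
    votes={"alice": True, "bob": True, "carol": False},
)
print(changed, village.weights)
```

. the point of the sketch: nothing about the values is implicit,
and the amendment rule itself is part of
what's laid out on the table .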

12.4: pol/robot constitution/illusion of globality:
. robots must keep the worlds completely separate
so there is no one feeling obliged to
religiously convert the infidels next door .
