virus: Old stuff for fun

From: Keith Henson (hkhenson@rogers.com)
Date: Tue Jan 20 2004 - 00:41:07 MST

    [Not long ago Alcor put up many years of Cryonics magazines on their web
    site. For a few years, late 80s and early 90s, I wrote a column every month
    for Cryonics. Here is one of them from August, 1991. It doesn't say
    anything that has not been discussed dozens of times, but I thought you
    might be amused by how it is said and how long people have been thinking
    about some of these problems.]

    Future Tech:

    The Rights of Sentient Beings

    by H. Keith Henson

         In this article I am acting as an advocate for a class of underdogs
    that doesn't exist yet. Indeed, this is a class of beings which Eric
    Drexler argues -- rather persuasively -- we would be better off never
    creating. On the other hand, Hans Moravec writes, "Why should machines,
    millions of times more intelligent, fecund, and industrious than ourselves
    exist only to support our ponderous, antique bodies and dim-witted minds in
    luxury? Drexler does not hint at the potential lost by keeping our
    creations so totally enslaved."

         In "Engines of Creation" Eric makes a case for "mechanical" artificial
    intelligences, what he calls "engineering AIs." These would be AIs without
    human qualities of the "strive for X" variety, where X is reproduction,
    power, reputation, control of resources, etc. His point is that the
    combination of people to provide the drive, and engineering AIs to slog
    through the computations and oversee construction details can accomplish
    anything which an independent self-directed AI could do. This might or
    might not be true, and it is almost certain to be slower, but a self-
    directed AI that seriously outclassed us mentally -- and was bent on
    exterminating humanity -- is not a thought to dwell on before bedtime!

         My approach to the subject of "social artificial intelligence," AIs
    with personalities and human-like drives, is that in the long run they are
    virtually unavoidable. Either we get such AIs as an outgrowth of research
    into how to make minds, or we get them from people who keep their human
    drives while upgrading their hardware. Research on human minds would be
    greatly retarded if we were not permitted to simulate (i.e., build) minds in
    computers. I think there is little disagreement that we need to understand
    ourselves better. And once we have the ability to make improvements in our
    minds, it would be a bad mistake not to do so in a world where we cannot
    control what others are doing to improve themselves.

         Although I believe social AIs to be inevitable, caution in developing
    them certainly is in order. Note, however, that because of competition,
    caution may require making progress as rapidly as we can. In any case,
    staying on good terms with our creations, offspring, or augmented versions
    seems like a very good idea. The future is quite scary enough without
    creating conditions for a war of liberation by oppressed AIs.

         A firm foundation for the ethical treatment of sentient beings,
    regardless of origin, would seem to be in order. This is not an entirely
    new enterprise. Human-to-human relations lie at the root of law, morals,
    and ethics. Workable empirical methods such as the Golden Rule have
    emerged, as well as memes of racial tolerance and the metameme of
    tolerance. In addition, we have landmark studies of which "The Evolution
    of Cooperation," Axelrod's study of the Tit for Tat strategy in the
    Prisoner's Dilemma game, is perhaps the most important to date.
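
         Tit for Tat is simple enough to state in a few lines of code. Here
    is a minimal sketch (mine, not Axelrod's code; the payoff values are the
    standard ones from his tournaments): cooperate on the first move, then
    do whatever the opponent did on the previous move.

        # Iterated Prisoner's Dilemma with Axelrod's standard payoffs:
        # mutual cooperation 3/3, mutual defection 1/1,
        # lone defector 5, exploited cooperator 0.
        PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                  ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

        def tit_for_tat(opponent_moves):
            # Cooperate first, then echo the opponent's last move.
            return "C" if not opponent_moves else opponent_moves[-1]

        def always_defect(opponent_moves):
            return "D"

        def play(strat_a, strat_b, rounds=200):
            seen_by_a, seen_by_b = [], []  # what each player has seen the other do
            score_a = score_b = 0
            for _ in range(rounds):
                a, b = strat_a(seen_by_a), strat_b(seen_by_b)
                pay_a, pay_b = PAYOFF[(a, b)]
                score_a, score_b = score_a + pay_a, score_b + pay_b
                seen_by_a.append(b)
                seen_by_b.append(a)
            return score_a, score_b

        print(play(tit_for_tat, tit_for_tat))    # (600, 600): steady cooperation
        print(play(tit_for_tat, always_defect))  # (199, 204): burned once, never again

    Against itself it cooperates forever; against a pure defector it loses
    only the first round. That mix of niceness and swift retaliation is much
    of why it won Axelrod's tournaments.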

         Thankfully, we have some time to work on these ethical problems before
    they become acute. We don't have sentient machines yet, but as sure as
    memory gets cheaper we will. To the extent that sentient machines depend
    on hardware with a human-brain level of processing power, we can make a
    good guess at when this will happen. Hans Moravec in "Mind Children"
    predicts it will happen in the "early part" of the next century. By that
    timetable, it will be of concern to many people alive today. (And
    successful cryonicists.)

         Current processing capacity of even the most powerful computers is in
    the milli-brain area. Moravec roughly places our best efforts to date
    between a cockroach and a mouse in raw processing power. Eventually a "one
    human brain" power computer will come within the purchasing power of a
    national government. If the current trends hold, 15-20 years later the
    same capacity machine will be your personal computer. It might sit on your
    desk, though it is just about as likely to be worn like a suit of clothes
    or to be built into your dwelling. It could be grown into your body, or
    follow you around like a pet. A "one human brainpower" computer can by
    definition contain a human mind (when we figure out how to do a readout on
    one).
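
         The arithmetic behind that guess is worth making explicit. A rough
    sketch (the numbers are my illustrative assumptions, not the column's:
    Moravec's "Mind Children" estimate of about ten teraops for one human
    brain, a circa-1991 supercomputer at about ten gigaops, and capacity per
    dollar doubling roughly every two years):

        import math

        BRAIN_OPS    = 1e13   # ~10 teraops per human brain (Moravec's estimate)
        CURRENT_OPS  = 1e10   # ~10 gigaops: the "milli-brain area" (assumed)
        DOUBLE_YEARS = 2.0    # assumed doubling time of capacity per dollar

        gap = BRAIN_OPS / CURRENT_OPS               # a factor of 1000 short
        years = DOUBLE_YEARS * math.log2(gap)       # ~10 doublings, ~20 years
        print(f"{gap:.0f}x short: about {years:.0f} years,")
        print(f"so roughly {1991 + round(years)} for a government-scale machine.")

        # The personal-computer step is the same calculation applied to price:
        # closing a ~1000x cost gap takes another ~10 doublings, which is the
        # column's "15-20 years later" on any doubling time near two years.
        print(f"Plus ~{DOUBLE_YEARS * math.log2(1000):.0f} more years to fit a personal budget.")

    On those assumptions the "one human brain" machine arrives around 2011,
    consistent with Moravec's "early part" of the next century.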

         Full-blown nanotechnology makes even more complex ethical issues
    certain to emerge. Besides downloaded minds, we could have duplicate
    copies of people; artificial personalities (APs), if those prove to be
    different from artificial intelligences; special-purpose computer
    personalities created for some project; partly or completely independent
    fragments of minds; and computers which identify themselves with
    buildings or machines. This is only a small
    part of the list of entities we could be interacting with in the future.
    Some cases are small increments compared to the situation under discussion
    here, and the ethical considerations are relatively obvious; but others
    require bigger jumps to analyze.

         In the case of duplicate copies of people, there would be little
    argument as to the "humanity" of a copy. (There might be stringent
    penalties against making duplicates, but it seems it would be very hard for
    law or custom to deny human rights to a human just because there were
    another copy of that human in existence.)

         A case almost as clear would be that of a human who uploaded into more
    powerful hardware. If s/he uploaded into implanted computers (lots of
    empty space in a human skull) it would be hard to tell an augmented person
    from someone not modified, at least physically. If this step were
    accepted, it is unlikely that uploading into mobile robots would be seen as
    different enough to justify loss of human rights to someone who did it.
    Uploading into non-mobile hardware would not seem to be a sufficient reason
    to deny human rights either; quadriplegics in that unfortunate state are
    no less human. Besides, a person who could afford really spiffy hardware
    would likely have lawyers (or lawyer subroutines). In either case, you
    would want to take care of the meat body, lest you get charged with
    littering.

         De novo artificial personalities may be constructed as research
    projects or as outgrowths of commercial projects, or as I mentioned last
    column, as "offspring" combining the "best" personality traits of other
    people. By analogy (which is the best we have to go on), human rights come
    into existence over time -- with binary jumps at birth and an age where the
    individual is assumed to be "independently responsible." The time it might
    take for a collection of hardware and software to become independent is not
    related to the normal maturation of humans -- it could be either shorter or
    longer -- so other criteria (a test? posting a bond?) might be more
    appropriate. Until that time the sponsoring organization or person would
    be responsible. (It's 11 o'clock in the morning -- do you know what your
    mental offspring is doing?)

         Extension of "parental" responsibility concepts, perhaps combined with
    warranty concepts, could provide a legal matrix for new computer-based
    personalities and intelligences. There has already been talk in the Usenet
    group "comp.risks" of making the computers themselves legally liable in
    some instances. Robots, computers, and AI/APs have long been topics of
    science fiction novels, dating back to Asimov's Three Laws of Robotics
    (clearly designed to keep robots slaves forever). One seemingly workable
    way to extend legal rights and responsibilities to AI/APs, one which first
    showed up in science fiction, is to make corporations out of them. The
    concept of an "artificial person" is already well rooted in corporation and
    business law.

         Extending rights to AIs will take either legislation or a lot of test
    cases. Will it be considered murder to pull the plug on an AI? Or would
    it be considered assault? (I would consider it assault if there were no
    damage done and the AI could be restarted.) How about erasing a backup
    copy of an AI's memory and personality? Would this differ from erasing the
    only copy in existence? How about a copy of the information needed to make
    a copy of a person? What should be the policy in making changes to the
    personality of an AI? Would the same policy apply to making changes in a
    human in the course of making a copy? As you can see, these concerns
    rapidly approach the concerns of cryonicists.

         I am not among those who think that somehow nanotechnology will solve
    all our problems. I expect very advanced technology to solve most of our
    current problems, while introducing new ones of amazing variety and
    seriousness. This is not a new situation. Consider the problems facing us
    today, and those which average people faced a thousand years ago. Can you
    imagine trying to explain the S&L crisis to someone of that time? How
    about the ozone hole? A computer virus? These are real problems for
    today, just as civil rights for sentient machines will be on the list of
    tomorrow's concerns.

         Next time I might consider the dangers of getting lost in Middle Earth.
