September 09, 2004

Friendly AIs

Steve asked the question "Can friendly AI evolve?"

I would like to comment on that at length, so I decided to start a topic here, as he is around and this seems to be an Eclectic Thinkers topic.

First of all, the "us vs. them" attitude is somewhat overdone. There are people willing to create AIs with built-in evolutionary potential, allowing even for confrontation with NIs (Natural Intelligences); otherwise this discussion would not exist. The segmentation between NI and AI is made essentially to give credit to the parents of AIs, in a world where technological achievements are mostly made for glory and/or money, or at least for power.
So let's keep in mind that there may be a secondary segmentation among NIs: a pro-AI minority; those driven by Asimov's Frankenstein syndrome, afraid of machines; and all the others who haven't even taken the time to think about it.

Self-preservation is built even into simple, non-intelligent systems, because humans want to preserve their creations. Backup (clone, mirror, whatever) is essential so one doesn't have to redo the same work a second time. I am sometimes puzzled by people who don't do that; I have always considered it one of the first elements to build into any application. Thus it seems obvious that AIs will, and should, be self-preserving.
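
To make the point concrete, here is a minimal sketch of the kind of built-in self-preservation I mean, an application mirroring its own state; the file names are purely illustrative:

```python
import shutil
from pathlib import Path

def mirror_state(state_file: str, backup_dir: str) -> Path:
    """Copy the application's state into a backup directory,
    so the work never has to be redone from scratch."""
    src = Path(state_file)
    dst_dir = Path(backup_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)  # create the mirror location if absent
    dst = dst_dir / src.name
    shutil.copy2(src, dst)  # copy content and timestamps
    return dst

# Hypothetical usage, e.g. after every significant unit of work:
# mirror_state("app_state.db", "mirrors/")
```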

This doesn't mean that they will have to compete with NIs.
If you are an IBM engineer, you may be willing to see Deepest Blue win a chess tournament against a human chess master. That doesn't mean that Deepest Blue wanted to win the game. But then DB isn't intelligent either.
Competitions present at least two aspects: "I am better than you" and "I own more than you". Both may be present in a single competition, say one with a great material prize for the best competitor.
In any case, power is what the best gain: either directly, as money to spend to control their environment, or indirectly, as they can influence that same environment through their fame.
Do NIs and AIs have to share the same environment? If so, there will be some competition. Even if there is obviously no need to share a particular parcel, the power gained by the others will drive competition. The Google vs. Microsoft vs. Yahoo! situation is an example. Competition between Yahoo! and Google seems natural, as they share the same space. Microsoft's entry into the domain is driven essentially by the will to preserve influence over internet users and to keep new actors from gaining enough power to drive public opinion, thus ensuring that public opinion remains favorable to Microsoft.

Being selfish or selfless can be either quite a simple matter or a complicated one.
If you have clearly defined goals and have established your self-value, it is easy to see whether you should act egoistically or altruistically, depending on the overall gain (a toy sketch below puts numbers on this). I would accept dying instantly if I had the assurance that every single HIV virion would disappear and not be replaced by something worse.
I suppose that AIs would, at least at first, be simpler and thus more efficient than today's NIs, and therefore more selfless, understanding more clearly the need for personal sacrifice to improve the species' survival.
As a matter of fact, most selfish attitudes inscribed in social behavior tend to be based on the idea that one's self-value is important to the community (which is not necessarily true). I recently named this behavior the "Neo complex": I am the One, even if it isn't clear even to me how I will manage to save the World. Let ME see where the white rabbit goes...
I suspect that AIs will be able to develop such attitudes :-)
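
Here is that toy sketch, in Python; the notion that "self-value" and "overall gain" can be reduced to single numbers is of course a simplifying assumption, and the figures are made up:

```python
def choose_action(self_value: float, gain_to_self: float,
                  gain_to_others: float) -> str:
    """Toy rule: act altruistically when the gain to others
    outweighs what the self would keep by acting selfishly."""
    selfish_total = gain_to_self
    altruistic_total = gain_to_others - self_value  # sacrifice costs the self its value
    return "altruistic" if altruistic_total > selfish_total else "selfish"

# The HIV example: one life weighed against eradicating the virus,
# an enormous gain to others, so the rule says "altruistic".
print(choose_action(self_value=1.0, gain_to_self=1.0, gain_to_others=1e9))
```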

Now, let me know what you think about these matters, dear NI fellows.

5 Comments:


  • Blogger Gisela Giardino ::


  • Nice.

    I read Steve's blog yesterday and also found the post and the comments interesting.

    I just want to say something to show how differently we each perceive present-tense reality and hence tend to make our bets about a future.

    Antoine says that there will be people driven by Asimov's Frankenstein syndrome... I understand you are referring to the commonplace "I, Robot" Asimov.

    However, I like the other Asimov, the one from "The Last Question" [>].

    There he makes what I believe is a great approach to a possible AI evolution and our future, because of all the sides of the problem he treats in one single short tale...

    And I would also point out the somewhat Buddhist "return" to a life non-attached to self that we all try to see in a future evolution of friendly AI.

    Merci ;-)

    P.S. Please, Bill and Epps, bring your prior comments over to the thread. Thanks.

    7:43 AM  



  • Blogger Bill Duddy ::


  • My take on this is that the definition of intelligence is an arbitrary, subjective judgement in exactly the same way I have argued the definition of life to be (and the judgements that are used tend to be human-centred).

    If you accept this then there can be no distinction between AI and NI. Silicon chips that pass the Turing test are just as 'intelligent' as carbon-based goop that passes it. With the Turing test the arbitrary measure is 'can this thing make me believe it has a mind like my own?' - would a human pass a Squid Turing test? ;o)

    It follows from this that, if friendly NI can evolve, then friendly AI can evolve. The real question for me is therefore - has friendly NI evolved? We are all fundamentally a danger to each other. Our minds tend to interfere with each other's, and our survival instincts tend to put us in direct competition with each other. To answer this question we must define friendly.

    So, what does friendly mean in this context? What is a friendly mind, as opposed to an unfriendly one? I'm sorry if this has been set out elsewhere (e.g. by Asimov) and I'm just not aware of it. I just think the question "Can friendly AI evolve?" perhaps asks more about intelligence in general than about 'artificial' intelligence in particular.

    1:25 PM  



  • Blogger Steve Jurvetson ::


  • Fascinating questions! I don't have any useful answers, but I do have the solution to your query about the Squid Turing Test.

    Are people friendly? Hmmmm... Seems to relate to the stupidity and terrorism threads…

    Are we getting any more friendly as our social constructs evolve? Do you think that smart people are less belligerent than stupid (and/or ignorant) people? (Or is it just that the geeks like me are physical wimps and learn not to start fights? =)

    I guess I want to believe that we learn from experience over time, and that smarter entities will do fewer stupid things. And I guess that I am assuming that “friendliness” (whatever it means) is the enlightened course of action - a positive attractor to any highly intelligent sentience…

    7:25 PM  



  • Blogger Steve Jurvetson ::


  • I wonder if we want to distinguish friendly “actors” from a friendly “system” that presumes self-interested actors. In other words, do our societal norms, constitutional law, memes and constructs define the milieu for friendliness? The concept of friendliness requires interaction with others, and is not like “happiness” or other individual traits. This gets back to the notion that there is only co-evolution in our world.

    Reading Epp et al., it seems that there is a concern about external AI vs. co-evolved symbionts vs. NIs. So an external AI is like an ET, if it’s developed outside our NI system.

    So to take this to a whimsical place, it seems to me that we either migrate human intelligence to a new substrate (per the Kurzweil NI augmentation plan), and jump onto Moore's law, or we wake up one day permanently obsoleted by AI (exponentials have that tipping point effect on us perceptually... They seem to be linear and flat until they explode upward through any fixed boundary - like the human intelligence level).
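
    A few lines of Python make that perceptual flatness concrete; the starting level and the 18-month doubling time are assumptions for illustration, not measurements:

```python
# Capability doubling every 18 months, starting far below a fixed
# boundary ("human level" normalised to 1.0): the curve looks flat
# for decades, then goes through the ceiling almost at once.
capability = 1e-6
months = 0
while capability < 1.0:
    months += 18
    capability *= 2
    print(f"year {months / 12:5.1f}: capability {capability:.6f}")
# The last couple of doublings account for almost all visible progress.
```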

    Migrate or stagnate. Augment early and often. If we don't pattern our AI work on the human upload, it will likely be an entirely alien architecture... It will be a different growth path of compounding acceleration. If we migrate later, will we ever catch up? Could slight differentials pre-determine outcomes?

    If runaway AI wins, do humans lose? Will humans exist in 500 years as anything other than amusement, pets, and lab experiments?

    5:05 PM  



  • Blogger Antoine Vekris ::


  • Steve,

    We (you included) are aware of how to handle exponential growth and are observing attentively to catch the very first glimpse of AIs. We are even seeking contact with Extra-Terrestrial Intelligences (ETIs).
    So, I suppose that we do have a chance to be there when AI emergence occurs.

    One parameter you haven't accounted for up to now is that we have the possibility to initiate not one but many fundamentally different AI systems, able to follow different paths. That would be the more "traditional" way for us to proceed: start several experiments in parallel, let them compete in a closed system, observe to identify the fittest, not in terms of efficiency against each other but of adequacy to our needs, then continue the development not of one of them but of some of them.

    Redundancy and confrontation between different systems is the usual form of development we have used, reflecting cultural variants and opposing economic interests. It is likely that the construction and breeding of AIs will follow the same pattern.
    Consider the last quarter of a century: how many different operating systems (not commercial brands, fundamentally different ones), programming languages, and hardware designs there have been. Expert systems, as pre-AI paradigms, already show the tendency toward diversity. As a VC, would you support just one of them? Or rather some of them, believing that the winner is included in your choice?
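
    A sketch of that "fund several, keep some" selection loop; the adequacy scoring and the breeding step are pure placeholders for whatever real evaluation we would use:

```python
import random

def adequacy(system: dict) -> float:
    """Placeholder: score a candidate system by fit to *our* needs,
    not by raw efficiency against its rivals."""
    return random.random()  # stand-in for a real evaluation

def breed(survivors: list[dict]) -> list[dict]:
    """Placeholder: derive the next generation from several survivors,
    not from a single winner."""
    return [dict(parent, generation=parent["generation"] + 1)
            for parent in survivors for _ in range(2)]

# Start many fundamentally different experiments in parallel...
population = [{"design": i, "generation": 0} for i in range(8)]
for _ in range(5):
    ranked = sorted(population, key=adequacy, reverse=True)
    population = breed(ranked[:3])  # keep *some* of them, not just one
```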

    On the other hand, aside from our motivations for progress, we have some strong interfering components favoring classicism and tradition.
    Pong is a classic, and I was surprised to find a new version for Mac OS X!
    I suppose that during the AI development process there will be a great museum/collection assembling the "Steps", to avoid creating a new "Missing Link". That means we will always be able to KILL any AI which doesn't fit our expectations and take a Step back to build something better.
    AI genocide! If necessary... I am ready for that :-)
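
    That museum of Steps amounts to a checkpoint registry; a minimal sketch, with every name invented for illustration:

```python
import copy

class StepMuseum:
    """Keep every development "Step" so a misbehaving AI line can be
    abandoned and restarted from an earlier, better-behaved snapshot."""
    def __init__(self):
        self._steps = []

    def preserve(self, ai_state: dict) -> int:
        """Archive a deep copy of the current state; return its index."""
        self._steps.append(copy.deepcopy(ai_state))
        return len(self._steps) - 1

    def step_back(self, index: int) -> dict:
        """Discard the current line and restart from an earlier Step."""
        return copy.deepcopy(self._steps[index])

museum = StepMuseum()
v0 = museum.preserve({"goals": ["be friendly"]})
# ...development goes wrong...
fresh_start = museum.step_back(v0)
```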

    9:23 AM  


