September 09, 2004

Friendly AIs

Steve asked the question "Can friendly AI evolve?"

I would like to comment on that at length, so I decided to start a topic here, as he is around and this seems to be an Eclectic Thinkers topic.

First of all, the "us vs. them" attitude is somewhat overdone. There are people willing to create AIs with built-in evolution potential, allowing even for confrontation with NIs (Natural Intelligences). Otherwise, this discussion wouldn't exist. The segmentation between NI and AI is done essentially to give credit to the parents of AIs, in a world where technological achievements are mostly made for glory and/or money, or at least for power.
So, let's keep in mind that there may be a secondary segmentation among NIs: a minority who are pro-AI, those driven by Asimov's Frankenstein syndrome, afraid of machines, and all the others who haven't even taken the time to think about it.

Self-preservation is built into even simple, non-intelligent systems, and this is due to humans wanting to preserve their creations. Back-up (clone, mirror, whatever) is essential so one doesn't have to redo the same work a second time. I am sometimes puzzled by people who don't do that. I have always considered this one of the first elements to build into any application. Thus it seems obvious that AIs will/should be self-preserving.

This doesn't mean that they will have to compete with NIs.
If you are an IBM engineer, you may want to see Deep Blue win a chess tournament against a human Chess Master. That doesn't mean that Deep Blue wanted to win the game. But then DB isn't intelligent either.
Competitions present at least two aspects: I am better than you, and I own more than you. Both may be present in a single competition, say one with a great material prize for the best competitor.
In any case, power is what the best gain: either directly, as money to spend to control their environment, or indirectly, as they can influence that same environment using their fame.
Do the NIs and AIs have to share the same environment? If yes, there will be some competition. Even if there is obviously no need to share a parcel, the power gained by the others will drive competition. The Google vs. Microsoft vs. Yahoo! situation is an example. Competition between Yahoo! and Google seems natural, as they share the same space. The entry of Microsoft into the domain is driven essentially by the will to preserve influence over internet users and not let new actors gain enough power to drive public opinion, thus ensuring that public opinion will remain favorable to Microsoft.

Being selfish or selfless can be either a quite simple or a complicated matter.
If you have clearly defined goals and have established your self-value, it is easy to see whether you should act egoistically or altruistically, depending on the overall gain. I would accept to die instantly if I had the assurance that every single HIV virion would disappear and not be replaced by something worse.
I suppose that AIs would be, at least at first, simpler and thus more efficient than the actual NIs. Thus more selfless, understanding more clearly the need for personal sacrifice to improve the species' survival.
As a matter of fact, most of the selfish attitudes inscribed in social behavior tend to be based on the idea that one's self-value is important to the community (which is not necessarily true). I recently named this behavior the "Neo complex": I am the One, even if it isn't clear even to me how I will manage to save the World. Let ME see where the white rabbit goes...
I suspect that AIs will be able to develop such attitudes :-)

Now, let me know what you think about those matters, dear NI-fellows.

6 Comments:

  • Blogger Gisela Giardino ::

  • Nice.

    I read Steve's blog yesterday and also found the post and the comments interesting.

    I just want to say something to show how differently we each perceive present-tense reality and hence tend to make our bets about the future.

    Antoine says that there will be people driven by Asimov's Frankenstein syndrome... I understand you are referring to the commonplace "I, Robot" Asimov.

    However, I like the other Asimov, the one from The Last Question.

    There he makes what I believe is a great approach to a possible AI evolution and our future, because of all the sides of the problem he treats in one single short tale...

    And I would also point out the somewhat Buddhist "return" to a life non-attached to self that we all try to see in a future evolution of friendly AI.

    Merci ;-)

    ps. Please Bill and Epps bring your prior comments to the thread. Tks.

    7:43 AM  


  • Blogger Bill Duddy ::

  • My take on this is that the definition of intelligence is an arbitrary, subjective judgement in exactly the same way I have argued the definition of life to be (and the judgements that are used tend to be human-centred).

    If you accept this then there can be no distinction between AI and NI. Silicon chips that pass the Turing test are just as 'intelligent' as carbon-based goop that passes it. With the Turing test the arbitrary measure is 'can this thing make me believe it has a mind like my own?' - would a human pass a Squid Turing test? ;o)

    It follows from this that, if friendly NI can evolve, then friendly AI can evolve. The real question for me is therefore: has friendly NI evolved? We are all fundamentally a danger to each other. Our minds tend to interfere with each other's, and our survival instincts tend to put us in direct competition with each other. To answer this question we must define friendly.

    So, what does friendly mean in this context? What is a friendly mind, as opposed to an unfriendly one? I'm sorry if this has been set out elsewhere (e.g. by Asimov) and I'm just not aware of it. I just think the question "Can friendly AI evolve?" perhaps asks more about intelligence in general than about 'artificial' intelligence in particular.

    1:25 PM  


  • Blogger Steve Jurvetson ::

  • Fascinating questions! I don't have any useful answers, but I do have the solution to your query about the Squid Turing Test.

    Are people friendly? Hmmmm... Seems to relate to the stupidity and terrorism threads…

    Are we getting any more friendly as our social constructs evolve? Do you think that smart people are less belligerent than stupid (and/or ignorant) people? (Or is it just that the geeks like me are physical wimps and learn not to start fights? =)

    I guess I want to believe that we learn from experience over time, and that smarter entities will do fewer stupid things. And I guess that I am assuming that “friendliness” (whatever it means) is the enlightened course of action - a positive attractor to any highly intelligent sentience…

    7:25 PM  


  • Blogger Eclectic Thinkers ::

    Steve, yes, this is very congruent with the other two discussions you mention. And with those baboons you once told me something about, remember? I believe it is important to add to this thread the comments that Epp made at Multiply when we began this discussion, and my last one, just to complete what has been said to date. So I quote the comments below.

    Epp, let me, please:

    º Epp said on Sep 1st:

    "(I have a lot more to read and learn to understand all of what is spoken of here but:)

    What a good question, Bill. What is friendly? After having a conversation with someone about how being nice and friendly can compromise the self and thereby limit positive contributions to others (even harm them), this question makes me stop like a cat in confusion and scratch myself as I think: "the more I know the less I know". Being selfish, if it translates into expanding one's potential to contribute to others, is a good thing but where do we draw the line? I'm wondering whether there is a way to program a kind of selfishness (self-preservation) which is compatible with and shares the same characteristics as increasingly macro level selfishness programs (also self-preservation) which enable the survival of life on earth - thereby rendering them "friendly". When we define "friendly" from this perspective, however, we need to define everything in the universe that contributes to the support of life. This is like trying to define "God". Besides, many bad things that have happened have contributed to good in the long run. So how do we define "bad"?

    I'm sorry. I'm tying myself in knots here. Really what I want to say as far as this goes is that a sense of self, of a mind, and an ability to decide for oneself regarding AI seems only possible if it has been programmed with the blueprint of life. If this were possible, it would be important that it would be the blueprint of all life, so that by a sense of "belongingness" there would be no threat to the world in which it participates. Otherwise we are indeed creating frankensteins.

    We have to be careful what we do. Analogy: once I spoke to a person who used to work at a lab where they were creating a breed of bacteria which would consume oil. The idea was to figure out a way to clean up oil spills. This person was very concerned. Here they were creating what they thought was a good thing, but (remember the speed with which bacteria multiply) developing these little creatures creates a new problem where other valuable oil resources become vulnerable to being attacked and consumed by these little guys. Often people do not have foresight, just hindsight. Every possible implication needs to be explored in these sorts of cases beforehand, and not just by the specialists involved. Being a part of Eclectic Thinkers, I see an essential value and importance in gaining insight from multiple perspectives.

    Just a thought from the perspective of someone who knows very little about these things."

    º Gisela Giardino said afterwards:

    "All we are discussing here is of great impact, and I am really listening to you (beginning with Steve at his blog) very carefully. Epp and Bill, you ask what "friendly" means, and my answer, not at all high culture but yet my truest of all, is:

    You know what friendly is.

    And then I remembered a phrase of the Dalai Lama that says: "The threshold between right and wrong is pain."

    And I think it applies. We can get too obsessive about finding the precise definition of something, but yet it is true that even with this quite intuitive, vague, and unclear idea of "friendly" we can do a lot. Here and Now.

    Epp, you said something big for me to reflect on: We have no Foresight.

    And Bill, you said another thing as important: AI is the same as NI.

    We have to reflect on these things, and keep on talking and exchanging our little knowledge, as Epp said, and I agree. Socrates said it: "I only know that I know nothing."

    Thank you for being there."

    8:30 PM  


  • Blogger Steve Jurvetson ::

  • I wonder if we want to distinguish friendly “actors” from a friendly “system” that presumes self-interested actors. In other words, do our societal norms, constitutional law, memes and constructs define the milieu for friendliness? The concept of friendliness requires interaction with others, and is not like “happiness” or other individual traits. This gets back to the notion that there is only co-evolution in our world.

    Reading Epp et al., it seems that there is a concern about external AI vs. co-evolved symbionts vs. NIs. So an external AI is like an ET, if it's developed outside our NI system.

    So to take this to a whimsical place, it seems to me that we either migrate human intelligence to a new substrate (per the Kurzweil NI augmentation plan), and jump onto Moore's law, or we wake up one day permanently obsoleted by AI (exponentials have that tipping point effect on us perceptually... They seem to be linear and flat until they explode upward through any fixed boundary - like the human intelligence level).

    Migrate or stagnate. Augment early and often. If we don't pattern our AI work on the human upload, it will likely be an entirely alien architecture... It will be a different growth path of compounding acceleration. If we migrate later, will we ever catch up? Could slight differentials pre-determine outcomes?

    If runaway AI wins, do humans lose? Will humans exist in 500 years as anything other than amusement, pets, and lab experiments?

    5:05 PM  


  • Blogger OldCola ::

  • Steve,

    We (you included) know how to handle exponential growth, and we are observing attentively to catch the very first glimpse of AIs. We are even seeking contact with extra-terrestrial intelligences (ETIs).
    So, I suppose that we do have a chance to be there when AI emergence occurs.

    One parameter you haven't accounted for up to now is that we have the possibility to initiate not one, but many and fundamentally various AI systems, able to follow different paths. That would be the more "traditional" way for us: start several experiments in parallel, let them compete in a closed system, observe to identify the fittest (not in terms of efficiency against each other, but of adequacy to our needs), then continue the development not of one of them but of several.

    Redundancy and confrontation of different systems is the usual form of development we have used, reflecting cultural variants and opposing economic interests. It is likely that AI construction and breeding will follow the same pattern.
    Consider the last quarter of a century: how many different operating systems (not commercial brands, fundamentally different ones), programming languages, and hardware conceptions there have been. Expert systems, as pre-AI paradigms, already show this tendency toward diversity. As a VC, would you support just one of them? Or rather several of them, believing that the winner is included in your choice?
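
    The parallel-selection idea can be sketched minimally in Python. This is only an illustration under assumptions: `run_experiment` is a purely hypothetical stand-in for one closed-system experiment, and its random score stands in for "adequacy to our needs".

    ```python
    import random

    def run_experiment(seed: int) -> float:
        """Hypothetical stand-in for one closed-system AI experiment.

        Returns a score for how well the resulting system fits our
        needs (higher is better), not how it beats its rivals.
        """
        rng = random.Random(seed)  # deterministic per experiment
        return rng.random()

    def select_candidates(n_experiments: int, keep: int) -> list[int]:
        """Start several experiments in parallel, then keep not one
        single winner but the `keep` best-fitting candidates."""
        scores = {seed: run_experiment(seed) for seed in range(n_experiments)}
        ranked = sorted(scores, key=scores.get, reverse=True)
        return ranked[:keep]

    # Breed ten variants, continue development on the best three.
    print(select_candidates(10, 3))
    ```

    The point of keeping several candidates, as in a VC portfolio, is that the eventual winner is likely to be included in the selection even when no single bet can be trusted.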

    On the other hand, aside from our motivations for progress, we have some strong interfering components: classicism and tradition.
    Pong is a classic and I was surprised to find a new version for Mac OS X!
    I suppose that during the AI development process there will be a great museum/collection assembling the "Steps", to avoid creating a new "Missing Link". That means that we will always be able to KILL any AI which doesn't fit our expectations and take a Step back to build something better.
    AI genocide! If necessary... I am ready for that :-)

    9:23 AM  

