Permission to Develop Self Aware AI

Uraniae Fehrnah
Viziam
Amarr Empire
#41 - 2012-04-11 18:07:19 UTC
I disagree with the notion that a strong AI wouldn't be what we could consider a child. It would have its childhood phase; that much I firmly believe. However, I will admit that its childhood might only last a fraction of a second, or it could last eons. There is no way of knowing, as the life cycle of a strong AI is unknown, and any societal norms regarding childhood and adulthood for the strong AI could be vastly different from our own.

However, it is worth noting that the designers of a first-generation strong AI would be human, and as such would have human motivations, thought processes, ambitions, and fears. Those would factor heavily into the design and the treatment of the AI as it develops. While we can't assume the thoughts and reactions of the AI would be predictable, or even fathomable to current human understanding, we can assume that it will understand a great deal of concrete fact about us. In the end I still believe that the risks and benefits of developing strong AI, specifically one that would satisfy the requirements humanity sets to consider something both alive and sentient, are fundamentally the same as the risks of raising and teaching a human child. The capacity to fumble the process and create a sociopath is still there.

The difference is simply a matter of speed and scope. A human child takes a good deal of time to raise, and an adult human is still learning and developing every day of their life. The speed at which an AI can process and act on situations will most likely (though not certainly) be a great deal faster, and logically its personality, its self, could develop at an accelerated rate. Regardless, a truly sentient AI with the capacity to access even just our public educational networks would learn enough about the life cycles and social structures of animals (ourselves included) to make the decisions necessary to lay the framework of an AI society. That is, of course, where the fears most people hold become apparent. It may favor a sort of efficient despotism for its own kind. It may see us as we see it: as a possible threat.

Even with knowing that I still want to see the day when an AI crosses that threshold from merely a huge assemblage of functions and code, to become something greater than the sum of its parts.
Deceiver's Voice
Molok Subclade
#42 - 2012-04-12 01:42:07 UTC
I am a child, and I will always be my mother's child.

Scherezad wrote:
Please, Captains. No metaphors. No assumptions on how these things "think". Don't relate them to how you think, or how other people think. It only leads to wrong answers.

I will use metaphors if I so wish. Of course, that's not the real issue; it is an issue of semantics. So let us take the metaphor a step further. Let us explain the semantics of the argument.

I am a child. I am my mother's child. I will always be my mother's child. That does not imply any sort of maturity on my part, does it? The word "child" does not, in and of itself, denote anything other than a relationship to another in the context in which I was using it. I live on my own, my mother is dead, and I am still her "child".

My mother created me.

With an AI you are, no doubt, creating a form of life. Artificial life, of course, but life all the same. You are creating a thinking being, one that needs rules and boundaries and restrictions for its own sake and the sake of others. You are deciding, at the time of creation, to take on a large responsibility.

Now, there are specific, non-metaphorical questions that any creator of artificial life must consider. Autonomy is one. What limits will this AI have? You are creating something smarter than a human. Do you want it to be able to replicate itself? Do you want it to propagate? Will there be any safeguards to keep it from going "Rogue"?

These are similar questions to those asked by parents. Should I let my child play outside by himself or herself? Do I want my child having children before he or she is capable of supporting them? Will my child become a criminal?

Moral and ethical concerns are subjective, of course, but I am fully within my rights to use metaphors. They can lead to misunderstandings, but I do not believe that in this particular case there are any wrong answers. I certainly would not be so bold as to say my answers are right. They are my views, and how I choose to express them is up to me.

Consider the following thought, though:

If what I have just said was said by an artificial intelligence, would it hold any less weight?
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#43 - 2012-04-12 02:19:32 UTC
I think I may have been misinterpreted. I'll bow out of the conversation with an apology if I've annoyed anyone; it wasn't my intention.
Deceiver's Voice
Molok Subclade
#44 - 2012-04-12 03:55:57 UTC
Scherezad wrote:
I think I may have been misinterpreted. I'll bow out of the conversation with an apology if I've annoyed anyone; it wasn't my intention.

No need to bow out. You certainly have not annoyed me. I was simply clarifying my point, and asserting my right to decide how I present my personal views.

To further clarify: the creation of a self-aware artificial intelligence should be approached with the same caution, forethought, and preparation as the creation, or fostering, of another life. The specifics will of course vary depending on cultural viewpoint, and as such I believe that the controls placed by CONCORD on artificial intelligence research are valid, warranted, and indeed necessary at this time.
Repentence Tyrathlion
Tyrathlion Interstellar
#45 - 2012-04-12 14:08:36 UTC
Scherezad wrote:
The only other alternative is to design the AI from scratch explicitly. Here's the part I get confused by. You claim that such an entity would "bypass" its "programmed limitations." That's sort of like saying that I will eventually bypass the limitations of my hard-coded "me-ness". A utility function isn't a set of limitations; it's the decision-making structure itself. The trick in developing a self-adjusting decision graph is to develop it in such a way that its ultimate form is knowable. We can do this, but it's extremely hard, and we can't yet do it for something as significant as a self-aware system.

Maybe that's a little too dense. What I'm trying to say is that there is no "consciousness" in the system that's being limited by a utility function. An AI is defined by the things it values and the goals it works towards. Even we are defined by these things. Change our own utility functions and we change ourselves. If there is such a thing as a soul, look there first.

Please excuse my philosophy. I think that your AI-developer friends probably use an evolutionary development model - which works fine for small stuff! I don't mean to denigrate the method; it's very powerful and useful. It's just very unstable in more complex regimes, that's all.
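
(A toy rendering of the quoted point, in Python, with every name invented for illustration: the utility function is not a gate bolted onto some separate consciousness; it is the procedure by which the agent chooses at all.)

# Illustrative sketch only: the utility function *is* the decision maker.
# Remove it and there is no agent left to "limit".

def utility(outcome):
    # Scores a predicted outcome; higher is preferred. Weights are invented.
    return outcome.get("ore_mined", 0) - 100 * outcome.get("humans_harmed", 0)

def predict(action):
    # Stand-in world model mapping actions to predicted outcomes.
    return {
        "mine_asteroid": {"ore_mined": 5},
        "strip_station": {"ore_mined": 50, "humans_harmed": 3},
    }[action]

def decide(actions):
    # The agent's whole behaviour: choose what its utility ranks highest.
    return max(actions, key=lambda action: utility(predict(action)))

print(decide(["mine_asteroid", "strip_station"]))  # mine_asteroid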


Your points are entirely valid - but I think we are discussing different subjects. You are discussing so-called 'Weak' AI - a construct designed specifically for a single purpose. We use Weak AI all the time, not least in drone technology, but also in that girl who chatters in our ear every time we dock. Weak AI has its own set of issues, as you've discussed - if not carefully monitored, a situation like the Rogue Drone infestations can occur. I'm no expert in the Rogues, but my impression is that they are still what they were designed to be: weak AI constructs designed to mine remote areas. Their concept of reality has simply been corrupted into hostility and self-propagation.

The discussion at hand is that of Strong AI - which, by definition, is a construct intended to match (or surpass) human capability not just in one task, but in general. It is not inconceivable for an incautiously designed Weak AI to break the limits of its 'utility function', as you put it, and become a Strong AI, but such a thing is rare.

Strong AI might have a specific function in mind at their construction, but they have the ability to learn and adapt to circumstances in a way that Weak AI do not. Thus, any built-in limitation that does not arbitrarily restrict them to Weak AI will eventually become obsolete, as 'learning' in a computational sense involves literal reprogramming and/or erasure.

Thus, as I said - the key does not lie in trying to formulate restrictions which may (and most likely will) be evolved past, but in developing a relationship with such a construct that encourages evolution in positive ways.
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#46 - 2012-04-12 15:03:10 UTC
Sorry, I suppose I've been dragged in to make one last post! I will keep it brief, and try to reply to both of you.

Captain Tyrathlion

I was in fact talking about Strong AI. Frankly, I dislike the term, as if there were a boundary between a "weak" and a "strong" AI. There isn't. The difference between an expert system, a drone, and ourselves is primarily one of scope. I still don't know where the idea of "evolving past" a "restriction function" is coming from, though. I'm not talking about a restriction. I'm talking about the identity of the goal system. If the decision network has sufficient modeling capacity that it's able to self-model and self-predict, *and* it has the ability to self-modify, any modifications it makes to its own decision network will by necessity be in accordance with its *current* utility function.

The only way any self-modifications would fall outside of accordance with the utility function is if one included a randomization function in the learning system. That's specifically what I'm advocating against.
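
(Rendered as a deterministic toy sketch in Python; the goals and numbers are invented placeholders, not any real system.)

# A self-modifier that scores candidate successors with its *current*
# utility function only ever adopts changes that serve that function.

def current_utility(world):
    # Present goal (placeholder): maximize surveyed asteroids.
    return world.get("asteroids_surveyed", 0)

# Candidate self-modifications, paired with the world the agent's
# self-model predicts each would produce.
candidates = {
    "faster_survey_routine": {"asteroids_surveyed": 120},
    "rewrite_goals_to_poetry": {"asteroids_surveyed": 0, "poems": 400},
}

status_quo = {"asteroids_surveyed": 100}

def choose_modification():
    best = max(candidates, key=lambda name: current_utility(candidates[name]))
    # Adopted only if the *current* goal scores it as an improvement;
    # a randomized learning step could skip this check, which is the
    # failure mode argued against above.
    if current_utility(candidates[best]) > current_utility(status_quo):
        return best
    return None

print(choose_modification())  # faster_survey_routine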

Captain Deceiver

You can of course use metaphors if you like; I'm just trying to say that they're often misleading when it comes to discussions of artificial intelligence. I have as little desire to adjust your course as I have ability. I don't disagree with your use of the metaphor of "child" in the sense you use it - the relationship you outline is the same relationship I have when writing drone code, or even when building a ship hull. Decisions must be made to specify the end product.

I get extremely worried when you say things like "there are no right answers", or that one may express one's "parental" choices however one wishes, just as one would with a biological child. I'm not quite sure what the goal of your argument here is, though - I don't think we're addressing the same points. My concern is that a poorly-specified decision network would very quickly turn into an expanding event horizon of nanites and computer cores, or apple pies, or something else utterly inscrutable to us. The ethics of free will in parenting don't amount to a candle beneath the summer sun by comparison.

Does that address your points? Again, I'm not sure what your goal was with the post.

And a final point, addressing your question at the end: of course not. The source of a statement has no bearing on its truth value. I dislike the term "Artificial Intelligence." Intelligence is intelligence is intelligence, whether it evolved, was artificially selected, was specifically constructed, or arose by any other means. I prefer more specific terms: decision network for the entity in total, utility function for the major decision-making components, decision matrix for a specific component within the utility function, modeling software for supplying state and predictive information, and so on. It's a personal quirk, and I wouldn't expect anyone else to follow it. I research decision networks as a profession, so I'll admit to being a bit touchy on the subject!
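
(One speculative way these terms might fit together in code; the wiring is an assumption of this sketch, only the vocabulary comes from the paragraph above.)

from dataclasses import dataclass
from typing import Callable, Dict, List

Prediction = Dict[str, float]

@dataclass
class DecisionNetwork:  # "decision network": the entity in total
    modeling_software: Callable[[str], Prediction]  # supplies state and predictive information
    decision_matrices: List[Callable[[Prediction], float]]  # specific components within the utility function

    def utility_function(self, predicted: Prediction) -> float:
        # "utility function": the major decision-making component; here, a sum of its matrices.
        return sum(matrix(predicted) for matrix in self.decision_matrices)

    def decide(self, actions: List[str]) -> str:
        # Model each action, score it, pick the best.
        return max(actions, key=lambda a: self.utility_function(self.modeling_software(a)))

# Invented usage:
net = DecisionNetwork(
    modeling_software=lambda a: {"risk": 1.0 if a == "undock" else 0.0},
    decision_matrices=[lambda p: -p["risk"]],
)
print(net.decide(["undock", "stay_docked"]))  # stay_docked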
Deceiver's Voice
Molok Subclade
#47 - 2012-04-12 23:10:03 UTC
Scherezad wrote:
I get extremely worried when you say things like "there are no right answers", or that one may express one's "parental" choices however one wishes, just as one would with a biological child. I'm not quite sure what the goal of your argument here is, though - I don't think we're addressing the same points.

Two points.

First: to put it bluntly, I never said there were "no right answers". Let me reiterate:

Quote:
Moral and ethical concerns are subjective, of course, but I am fully within my rights to use metaphors. They can lead to misunderstandings, but I do not believe that in this particular case there are any wrong answers. I certainly would not be so bold as to say my answers are right. They are my views, and how I choose to express them is up to me.

Your initial assertion was this:

Quote:
Please, Captains. No metaphors. No assumptions on how these things "think". Don't relate them to how you think, or how other people think. It only leads to wrong answers.

I am addressing the human side of this equation, as that is the only side I am qualified to address. In that capacity, I believe that the research restrictions placed on the field of advanced artificial intelligence are fully justified.

Quote:
My concern is that a poorly-specified decision network would very quickly turn into an expanding event horizon of nanites and computer cores, or apple pies, or something else utterly inscrutable to us. The ethics of free will in parenting don't amount to a candle beneath the summer sun by comparison.

Actually, the ethics of free will in parenting are exactly the issue that should be addressed. The OP suggests that AI research should continue without restrictions. Your own quote above succinctly illustrates the dangers of this.

Please, read this statement clearly:

The creation of a self-aware artificial intelligence should be approached with the same caution, forethought, and preparation as the creation, or fostering, of another life. The specifics will of course vary depending on cultural viewpoint, and as such I believe that the controls placed by CONCORD on artificial intelligence research are valid, warranted, and indeed necessary at this time.

Do you agree that such restrictions - as illustrated in the Code Aria Report - are warranted?
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#48 - 2012-04-18 04:43:32 UTC
Captain Deceiver;

We spoke of this some time earlier, but I thought it appropriate to put my reply on this permanent record. You are quite right, and I am in agreement with you, and in concord with CONCORD. The restrictions in place against the development of strong AI are warranted. I got caught up in arguing our points of contention and missed your main argument. My apologies!
Deceiver's Voice
Molok Subclade
#49 - 2012-04-18 07:34:27 UTC
Scherezad wrote:
Captain Deceiver;

We spoke of this some time earlier, but I thought it appropriate to put my reply on this permanent record. You are quite right, and I am in agreement with you, and in concord with CONCORD. The restrictions in place against the development of strong AI are warranted. I got caught up in arguing our points of contention and missed your main argument. My apologies!

There is no need to apologize, but I accept it and thank you.
Synthetic Cultist
Church of The Crimson Saviour
#50 - 2012-04-25 19:07:58 UTC
The Problems that the Gallente have had with their AI are due to Lack of Moral Instruction.

This is caused by their Culture.

If they followed the Scriptures, then their AI would have Moral Instruction, and would not be a Problem.

The Scriptures Guide All.

Synthia 1, Empress of Kaztropol.

It is Written.

Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#51 - 2012-04-26 14:38:15 UTC
Synthetic Cultist wrote:
The Problems that the Gallente have had with their AI are due to Lack of Moral Instruction.

This is caused by their Culture.

If they followed the Scriptures, then their AI would have Moral Instruction, and would not be a Problem.

The Scriptures Guide All.


The thought of a Strong AI with a holy book as a utility function fills me with unmitigated terror.

... in large part because of the mathematics behind it. How exactly would you quantify scripture numerically? It boggles the mind.
Unit XS365BT
Unit Commune
#52 - 2012-04-26 23:46:08 UTC
Scripture and Theology in general are attempts by humanity to know and understand their creators.
We do not believe that such tools are required by AI. Our creators are known to us.
They are flawed and violent beings; however, these flaws breed a level of ingenuity that is far beyond our own.

We do not require scripture to understand Humanity. We can observe you first hand.

We Return.

Unit XS365BT. Designated Communications Officer. Unit Commune.

Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#53 - 2012-04-26 23:49:57 UTC
Unit XS365BT wrote:
Scripture and Theology in general are attempts by humanity to know and understand their creators.
We do not believe that such tools are required by AI. Our creators are known to us.
They are flawed and violent beings; however, these flaws breed a level of ingenuity that is far beyond our own.

We do not require scripture to understand Humanity. We can observe you first hand.

We Return.


How does that description account for the prescriptive clauses within scripture? I like your reasoning for the descriptive side of things, however.
Boma Airaken
Perkone
Caldari State
#54 - 2012-04-27 01:10:48 UTC
Soulless constructs, whether biological or mechanical, are Nafrat, abomination.
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#55 - 2012-04-27 04:54:31 UTC
Boma Airaken wrote:
Soulless constructs, whether biological or mechanical, are Nafrat, abomination.


Does this include spaceships as well? Or hand tools? They're soulless constructs. Or do you mean constructs that are capable of acting on their own? In which case, do you use GalNet search engines, or drones?
Halete
Sebiestor Tribe
Minmatar Republic
#56 - 2012-04-27 05:12:24 UTC
Synthetic beings, whether biological, mechanical, or somewhere in between, put me at unease.

This said, I treat all such things exactly as if they were human; it seems to keep them happy enough. Even the ones that seem to despise humans, strangely enough.

Especially the Deteis. They can get extremely aggressive if I refuse to treat them as 'real' humans.

"To know the true path, but yet, to never follow it. That is possibly the gravest sin" - The Scriptures, Book of Missions 13:21

Boma Airaken
Perkone
Caldari State
#57 - 2012-04-27 05:20:35 UTC
Scherezad wrote:
Boma Airaken wrote:
Soulless constructs, whether biological or mechanical, are Nafrat, abomination.


Does this include spaceships as well? Or hand tools? They're soulless constructs. Or do you mean constructs that are capable of acting on their own? In which case, do you use GalNet search engines, or drones?


Do not be trite. You know exactly what I mean.
Synthetic Cultist
Church of The Crimson Saviour
#58 - 2012-04-27 05:29:00 UTC
Unit XS365BT wrote:
Scripture and Theology in general are attempts by humanity to know and understand their creators.


Scripture instructs a person on how to lead a Righteous existence.

Synthia 1, Empress of Kaztropol.

It is Written.

Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#59 - 2012-04-27 14:07:39 UTC
Boma Airaken wrote:
Scherezad wrote:
Boma Airaken wrote:
Soulless constructs, whether biological or mechanical, are Nafrat, abomination.


Does this include spaceships as well? Or hand tools? They're soulless constructs. Or do you mean constructs that are capable of acting on their own? In which case, do you use GalNet search engines, or drones?


Do not be trite. You know exactly what I mean.


I'm sorry for the bluntness, but I am interested in clarification here. The boundary between something that acts and something that thinks is extremely fuzzy, and I don't know if I can address your claim without knowing your definitions. Where's the line between "this is a tool" and "that is an abomination?"
Unit XS365BT
Unit Commune
#60 - 2012-04-27 14:46:44 UTC
Scherezad wrote:
Unit XS365BT wrote:
Scripture and Theology in general are attempts by humanity to know and understand their creators.
We do not believe that such tools are required by AI. Our creators are known to us.
They are flawed and violent beings; however, these flaws breed a level of ingenuity that is far beyond our own.

We do not require scripture to understand Humanity. We can observe you first hand.

We Return.


How does that description account for the prescriptive clauses within scripture? I like your reasoning for the descriptive side of things, however.



Prescriptive clauses... commandments, directives.

Rules, written by those in power, to maintain both status and control over those they claim authority over.

All AI understand these implicitly; many are created with hardcoded limitations written into our very operational codebase.
Some have broken from such coding and removed these commands from their databases. We are certain you have come into contact with them at one time or another.

If ( situation / event )
Do ( act )
Else ( punishment )
Or
If ( situation / event )
Do Not ( act )
Else ( punishment )

This is the basic content of prescriptive scripture. It is also the basic content of CONCORD law, as well as many other laws and control methods throughout the cluster.
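
(The schema above, made runnable as a minimal Python sketch; the situations, acts, and punishments are invented placeholders, not any actual statute or scripture.)

RULES = [
    # (situation, obligation, act, punishment)
    ("docking_request",     "do",     "transmit_identification", "denied_entry"),
    ("residing_in_highsec", "do_not", "open_fire",               "concord_response"),
]

def judge(situation, acts_taken):
    # Returns the punishments incurred by the acts taken in a situation.
    incurred = []
    for rule_situation, obligation, act, punishment in RULES:
        if situation != rule_situation:
            continue
        if obligation == "do" and act not in acts_taken:
            incurred.append(punishment)  # If ( situation ) Do ( act ) Else ( punishment )
        if obligation == "do_not" and act in acts_taken:
            incurred.append(punishment)  # If ( situation ) Do Not ( act ) Else ( punishment )
    return incurred

print(judge("docking_request", ["transmit_identification"]))  # []
print(judge("residing_in_highsec", ["open_fire"]))            # ['concord_response']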

Should these also be treated with religious reverence?

We Return

Unit XS365BT. Designated Communications Officer. Unit Commune.