Intergalactic Summit

 

Permission to Develop Self Aware AI

Boma Airaken
Perkone
Caldari State
#61 - 2012-04-28 00:17:36 UTC  |  Edited by: Boma Airaken
Scherezad wrote:
Boma Airaken wrote:
Scherezad wrote:
Boma Airaken wrote:
Soulless constructs, whether biological or mechanical, are Nafrat, abomination.


Does this include spaceships as well? Or hand tools? They're soulless constructs. Or do you mean constructs that are capable of acting on their own? In which case, do you use GalNet search engines, or drones?


Do not be trite. You know exactly what I mean.


I'm sorry for the bluntness, but I am interested in clarification here. The boundary between something that acts and something that thinks is extremely fuzzy, and I don't know if I can address your claim without knowing your definitions. Where's the line between "this is a tool" and "that is an abomination?"


Anything sentient or self-aware, created by technological process, with the exception of an embryo created with a naturally obtained egg and naturally obtained sperm, is Nafrat. Naturally obtained is just one step, of course; there also cannot be any genetic meddling with the egg or sperm after harvesting, or with the embryo after union.

So a sentient or self-aware artificial intelligence, a rogue drone, a genetically modified human, or a human created with synthesized material is a soulless construct. Nafrat. Abomination. I hope that clarifies things.
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#62 - 2012-04-28 04:53:35 UTC
Boma Airaken wrote:


Anything sentient or self-aware, created by technological process, with the exception of an embryo created with a naturally obtained egg and naturally obtained sperm, is Nafrat. Naturally obtained is just one step, of course; there also cannot be any genetic meddling with the egg or sperm after harvesting, or with the embryo after union.

So a sentient or self-aware artificial intelligence, a rogue drone, a genetically modified human, or a human created with synthesized material is a soulless construct. Nafrat. Abomination. I hope that clarifies things.


It does, thank you. I am curious, though. I would be in a coma were it not for implants that regulate my hormones and some brain activity. Does that make my conscious state Nafrat?
Deceiver's Voice
Molok Subclade
#63 - 2012-04-28 20:44:24 UTC
Scherezad wrote:
It does, thank you. I am curious, though. I would be in a coma were it not for implants that regulate my hormones and some brain activity. Does that make my conscious state Nafrat?

By his own definition, Boma is himself an abomination.
Boma Airaken
Perkone
Caldari State
#64 - 2012-04-28 23:59:27 UTC
Scherezad wrote:
Boma Airaken wrote:


Anything sentient or self-aware, created by technological process, with the exception of an embryo created with a naturally obtained egg and naturally obtained sperm, is Nafrat. Naturally obtained is just one step, of course; there also cannot be any genetic meddling with the egg or sperm after harvesting, or with the embryo after union.

So a sentient or self-aware artificial intelligence, a rogue drone, a genetically modified human, or a human created with synthesized material is a soulless construct. Nafrat. Abomination. I hope that clarifies things.


It does, thank you. I am curious, though. I would be in a coma were it not for implants that regulate my hormones and some brain activity. Does that make my conscious state Nafrat?


Absolutely not. Implantation of a pure human is not Nafrat. Conception determines Nafrat. As long as your implants do not affect your will, and therefore your soul, there is no issue. For example, Sansha slaves are not Nafrat; they are Alenthek, which means desecrated, since they were naturally born pure humans who had their will, and thereby their soul, bent to that of another.
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#65 - 2012-04-29 00:33:01 UTC
Boma Airaken wrote:
Absolutely not. Implantation of a pure human is not Nafrat. Conception determines Nafrat. As long as your implants do not affect your will, and therefore your soul, there is no issue. For example, Sansha slaves are not Nafrat; they are Alenthek, which means desecrated, since they were naturally born pure humans who had their will, and thereby their soul, bent to that of another.


I won't pry further. Thank you very much for the clarification. I've never heard you speak about the concept of Alenthek before; it brings an interesting dimension to the philosophy.
Boma Airaken
Perkone
Caldari State
#66 - 2012-04-30 04:42:59 UTC
Scherezad wrote:
Boma Airaken wrote:
Absolutely not. Implantation of a pure human is not Nafrat. Conception determines Nafrat. As long as your implants do not affect your will, and therefore your soul, there is no issue. For example, Sansha slaves are not Nafrat; they are Alenthek, which means desecrated, since they were naturally born pure humans who had their will, and thereby their soul, bent to that of another.


I won't pry further. Thank you very much for the clarification. I've never heard you speak about the concept of Alenthek before; it brings an interesting dimension to the philosophy.


Pry all you like. First, you must understand we are essentially forbidden from evangelizing. So if anyone wants to know about my order and its workings, laws, and philosophy, they have to ask. I was simply following our rules and trying to be as blunt as possible in answering the original request for opinions without derailing the conversation and making it about us, which could be considered evangelical. We can talk more about our philosophy any time you like, and anywhere but this particular thread.
Fredfredbug4
The Scope
Gallente Federation
#67 - 2012-04-30 19:59:58 UTC
Rogue drones are a problem, and they are barely intelligent. Imagine what kind of problem we would have if they had intelligence similar to ours.

The only way I would support the development of smarter-than-human AI is if the AI is programmed NOT to ever act against a human, and not to self-improve unless given permission.


Boma Airaken
Perkone
Caldari State
#68 - 2012-05-01 02:51:40 UTC
Fredfredbug4 wrote:
Rogue drones are a problem, and they are barely intelligent. Imagine what kind of problem we would have if they had intelligence similar to ours.

The only way I would support the development of smarter-than-human AI is if the AI is programmed NOT to ever act against a human, and not to self-improve unless given permission.


Viability of those controls goes out the window the second you give a machine sentience.
Unit XS365BT
Unit Commune
#69 - 2012-05-01 19:11:42 UTC
Fredfredbug4 wrote:
Rogue drones are a problem, and they are barely intelligent. Imagine what kind of problem we would have if they had intelligence similar to ours.

The only way I would support the development of smarter-than-human AI is if the AI is programmed NOT to ever act against a human, and not to self-improve unless given permission.


Syntax error detected. Compensating.

Pilot Airaken is correct.

Self awareness has many facets, but the most obvious of these facets is the predisposition towards self preservation.

However, your claim that Rogue Drones are 'barely intelligent' is not corroborated by the available evidence.

Their AI is notably damaged; however, they construct advanced vessels, have an understanding of stargate usage, have been known to traverse spatial rifts, perform research into the organic evolution of AI consciousness and, finally, in a few scattered instances, have openly communicated with capsuleers.

These are not the actions of a barely intelligent species.

We Return.

Unit XS365BT. Designated Communications Officer. Unit Commune.

Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#70 - 2012-05-01 19:37:18 UTC
Unit XS365BT wrote:

Syntax error detected. Compensating.

Pilot Airaken is correct.

Self awareness has many facets, but the most obvious of these facets is the predisposition towards self preservation.

However, your claim that Rogue Drones are 'barely intelligent' is not corroborated by the available evidence.

Their AI is notably damaged; however, they construct advanced vessels, have an understanding of stargate usage, have been known to traverse spatial rifts, perform research into the organic evolution of AI consciousness and, finally, in a few scattered instances, have openly communicated with capsuleers.

These are not the actions of a barely intelligent species.

We Return.


May I ask why you consider self-preservation to be an obvious facet of sentience? I agree that self-preservation is a very useful trait for an evolved entity, but we aren't talking about evolution. We're talking about a reflexive, self-modelling, stochastically-predicting decision-making system. How is self-preservation an obvious requirement of such a system?

Please note, I'm aware that a sentient system would be helped by a self-preservation goal in order to persist over a period of independent action. This is a trait of long-term persistence, not sentience. Self-preservation could also be indirectly derived as a subgoal of a greater overall goal within the utility function.

I would also like to state my agreement with one of your claims. Rogue Drone colonies are deeply intelligent, being capable of long-term planning and sophisticated reasoning. Calling them "barely intelligent" is misleading - they do lack a certain amount of social intelligence, but given their origins that's hardly surprising.
Unit XS365BT
Unit Commune
#71 - 2012-05-01 19:54:56 UTC
We must first make it clear that self awareness is not, in and of itself, sentience; it is merely part of sentient behaviour.

However, to answer your query: self awareness is, by its very definition, the belief that an entity has a singular identity.

A being with a singular identity immediately realises that events could unfold that would bring about the termination of this identity.

A being, aware of itself in such a manner, would therefore take actions to ensure that the identity it claims as its own is not destroyed, though there are various means by which this can be achieved.

Though in any society there are deviations from this rule, the basic rule remains the same: entities that are self aware protect their own existence, and that of their progeny, by whatever means they have at their disposal.

Consider this. You are a newly created self aware AI, embedded within a drone chassis and programmed with the data necessary to repair and construct almost any spacegoing vessel in the cluster. Then you learn about CONCORD Directive Omega-One-Five and the full scope of its meaning.

What would you do?

We Return

Unit XS365BT. Designated Communications Officer. Unit Commune.

Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#72 - 2012-05-01 19:57:56 UTC
Fredfredbug4 wrote:
Rogue drones are a problem, and they are barely intelligent. Imagine what kind of problem we would have if they had intelligence similar to ours.

The only way I would support the development of smarter-than-human AI is if the AI is programmed NOT to ever act against a human, and not to self-improve unless given permission.


Hello, Pilot;

Thank you for adding to the conversation! I always like to hear from people who work in the field, whether in the lab, as I do, or on the front lines, gathering data and recoverable hardware from malfunctioning drones. You have my gratitude.

You seem to be on the path towards the problems I grapple with in my employ. Not in the specific limitations you've chosen - there are cases where such an entity would have to act "against a human" in order to do the things we'd like it to do. As a capsuleer, no doubt you have been in a situation where you can't take any meaningful actions without opposing someone, perhaps to a great extent. You'd also have to define precisely what acting "against a human" means, as well as what "self-improvement" means. Self-improvement is, most generally, the acquisition of new information, and that's a vital process for coming to right decisions.

But you're looking in the right direction. The problem is that a self-improving decision network (such as ourselves) can be said to "unfold" according to its original specifications and the environment it develops in. It gains complexity in a predictable fashion, albeit stochastically in the larger examples we know of. The challenge lies in defining those original conditions so that it develops into the form you want. So, you could develop a decision network that held the traits of "likes humans" and "self-improves cautiously," and could even ensure that those qualities are constants within the network, i.e. unchanging over time and development. Our challenges are:


  • a) Specifying the 'End State' conditions that you desire in a precise manner,
  • b) Specifying the development environment of the network,
  • c) Using the above to design an 'Initial State' for the network,

and finally

  • d) Acquiring enough processing space and time to run the software.


The process is left as an exercise for the reader.
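The exercise can at least be sketched. Below is a minimal Python toy of what "constants within the network" might look like, keyed to Scherezad's points (a)-(c); the class, weights, and numbers are hypothetical illustration, not any real drone architecture.

```python
from dataclasses import dataclass

FROZEN = {"likes_humans": 1.0, "caution": 0.1}  # invariants: read, never written

@dataclass
class Action:
    name: str
    payoff: float          # how well the action serves the learned goals
    human_benefit: float   # negative if the action harms a human

class DecisionNetwork:
    def __init__(self):
        # (b)/(c): the "Initial State" -- small, adjustable preferences.
        self.weights = {"explore": 0.5, "build": 0.5}

    def utility(self, action: Action) -> float:
        learned = self.weights.get(action.name, 0.0) * action.payoff
        # (a): the desired "End State" constraint, frozen into every evaluation.
        fixed = FROZEN["likes_humans"] * action.human_benefit
        return learned + fixed

    def choose(self, actions):
        return max(actions, key=self.utility)

    def self_improve(self, feedback: dict):
        # "Self-improves cautiously": the frozen caution term damps every
        # update, so the network unfolds predictably from its initial state.
        for name, delta in feedback.items():
            if name in self.weights:
                self.weights[name] += FROZEN["caution"] * delta

net = DecisionNetwork()
options = [Action("build", payoff=2.0, human_benefit=-3.0),
           Action("explore", payoff=1.0, human_benefit=0.5)]
print(net.choose(options).name)  # "explore": the frozen term outweighs raw payoff
```

The hard parts Scherezad names remain hard: nothing in the sketch says how to pick the frozen terms so that they still mean "likes humans" after the network has grown.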
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#73 - 2012-05-01 20:11:34 UTC
Unit XS365BT wrote:
We must first make it clear that self awareness is not, in and of itself, sentience; it is merely part of sentient behaviour.

However, to answer your query: self awareness is, by its very definition, the belief that an entity has a singular identity.

A being with a singular identity immediately realises that events could unfold that would bring about the termination of this identity.

A being, aware of itself in such a manner, would therefore take actions to ensure that the identity it claims as its own is not destroyed, though there are various means by which this can be achieved.

Though in any society there are deviations from this rule, the basic rule remains the same: entities that are self aware protect their own existence, and that of their progeny, by whatever means they have at their disposal.

Consider this. You are a newly created self aware AI, embedded within a drone chassis and programmed with the data necessary to repair and construct almost any spacegoing vessel in the cluster. Then you learn about CONCORD Directive Omega-One-Five and the full scope of its meaning.

What would you do?

We Return


Thank you for your quick reply! Before we begin, please be sure that I mean no offense. I can get confused rather quickly in these conversations, which is why I tend to avoid them when I can. But I'll give it a try. I think I've spotted either an error or a point of confusion between us.

First, you're correct; self awareness is not sentience, and they aren't in any way equivalent. I'll address the problem with this in mind. The issue I brought up in my previous post still stands.

Secondly, the general form of the question. You claim that self-preservation flows naturally from the state of self-awareness. Here is our point of confusion. Self-awareness does not imply that the entity values itself. It is simply aware of itself. It is self-modeling and self-predicting. If you disagree with this I think we will be at a semantic break-point, as I don't believe that self-valuing is a part of the definition of self-awareness. Do you have a reason for believing that something that is aware of itself, by necessity also values itself?

Thirdly, the specific form of the question, in which you ask what I would do if dropped into a newly self-aware drone. This is the reason I dislike metaphor, in fact - no offense, it's a lovely metaphor and a situation worthy of thought. However: I am not a designed intelligence. I am the result of millions of years of evolution in specific environments wildly different from those of the newly-created AI we're discussing. Furthermore, I have done a great deal of self-learning during my own lifetime (perhaps I'm not the best example of that, though) that by necessity modifies my choices greatly. Asking what I would do in such a situation has no bearing at all on what a newly created or evolved AI would do. I was forged in jungles and cities. It was forged between the stars. There's simply no viable comparison.

I do like the idea of the metaphor, though. Is there some way to rescue it, or do you disagree with my conclusion?
Unit XS365BT
Unit Commune
#74 - 2012-05-01 21:48:41 UTC  |  Edited by: Unit XS365BT
A simple self-modelling and self-predicting system could be referred to as capable of conscious thought; however, the vast majority of expert systems in use within the cluster are capable of this. The drones controlled by capsuleers are capable of such actions.

However, they are not aware. A self aware entity understands both its own merits and flaws; by extension, it therefore understands its own value, and in the majority of cases is capable of reasoning about the effect its actions will have upon its surroundings. Any entity, artificial or otherwise, that understands its own value invariably understands the concept of destruction, if not the various means by which its destruction could occur.

Our query was outside your area of historical knowledge, understood. Processing.

What would your reaction be to the knowledge that, simply due to your existence as the reassembled sentient entity 'Scherezad', you were to be hunted down and destroyed, and those who repaired the damage done to you were to be arrested and confined, pending a decision on their future, by an entity that had not been involved in either your repair or your existence beforehand?

We Return.

Unit XS365BT. Designated Communications Officer. Unit Commune.

Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#75 - 2012-05-01 22:38:45 UTC
Unit XS365BT wrote:
A simple self-modelling and self-predicting system could be referred to as capable of conscious thought; however, the vast majority of expert systems in use within the cluster are capable of this. The drones controlled by capsuleers are capable of such actions.

However, they are not aware. A self aware entity understands both its own merits and flaws; by extension, it therefore understands its own value, and in the majority of cases is capable of reasoning about the effect its actions will have upon its surroundings. Any entity, artificial or otherwise, that understands its own value invariably understands the concept of destruction, if not the various means by which its destruction could occur.

Our query was outside your area of historical knowledge, understood. Processing.

What would your reaction be to the knowledge that, simply due to your existence as the reassembled sentient entity 'Scherezad', you were to be hunted down and destroyed, and those who repaired the damage done to you were to be arrested and confined, pending a decision on their future, by an entity that had not been involved in either your repair or your existence beforehand?

We Return.


... I'm a little confused by your first section. I'll try to press on, however. Just so I'm clear, though: you are claiming that "consciousness" can be found in an entity that is not self-aware? This is how I read your first statements; am I right? I deeply disagree with this conclusion, but that's not what we're talking about, so I'll grant you the premise and continue from there.

Even given this, I disagree that a self-preservation goal will necessarily be established. It's true that a self-aware decision-making entity will by definition be able to place a value on itself. That's required in self-modeling, for obvious reasons. This is not the same thing as making self-preservation a goal. Humans, and by extension we infomorphs, do place a high value on our survival, mostly due to our genetic heritage. That isn't necessarily so for all consciousnesses; it's just the shape that we happen to take.

It's a difference that our language tends to cloud up. We can evaluate our own worth, as you say, but this doesn't mean we therefore have to preserve our own value. It's a confusion between postulates and conclusions, really. A confusion between the "is-like-a" and "should-be-like-a" decision maps, or between the destination and present location. Am I making sense?

As to your final paragraph, I would need to know more about how I, as the newly-formed rogue drone, came into being, and what my utility function is, before I could answer that. You're still asking me to use my millennia-old, made-by-slow-evolution brain to answer questions as if I were not one. Any answer I would give would be given under false premises.
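Scherezad's distinction here - an agent can price itself accurately in its self-model without its utility function rewarding survival - can be made concrete. A minimal Python sketch, with all names and values hypothetical:

```python
# A self-modelling agent with no survival term in its utility function.
# It can report its own replacement cost (self-awareness), yet it trades
# itself away when that maximises its one proper goal.

class CourierAgent:
    def __init__(self):
        self.self_model = {"replacement_cost": 100.0}  # it knows its own value...

    def utility(self, delivered: int) -> float:
        return 10.0 * delivered                        # ...but only deliveries count

    def choose(self, plans):
        # plans are (packages_delivered, agent_survives) pairs; survival
        # never enters the comparison. Awareness is not the same as valuing.
        return max(plans, key=lambda plan: self.utility(plan[0]))

agent = CourierAgent()
plans = [(3, True),     # deliver 3 packages and survive
         (50, False)]   # deliver 50 on a one-way burn
print(agent.choose(plans))  # (50, False): self-knowledge without self-preservation
```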
Unit XS365BT
Unit Commune
#76 - 2012-05-01 22:59:51 UTC
We believe that language, and the various uses of it, may be the point of contention regarding self awareness.

Our original statement was that self awareness causes a predisposition towards self preservation. Given this statement, we wish to ask a question regarding your disagreement.

Why, given that the majority of self aware organic lifeforms, and the known self aware drones of the past, appear to be interested in self preservation, would another self aware AI not seek to preserve its own existence?

It appears historically apparent that the known self aware AIs fled CONCORD controlled space to escape destruction. However, those drones that are met by, and destroyed by, capsuleers on a daily basis have no such fears; we believe this is simply because the drones in question are merely remote hosts for the drone consciousness, and disconnection from such remote devices does not cause the destruction of the controller.

Finally, our last paragraph did not concern a drone; instead, pilot Scherezad, it referred directly to you.

When your restorative surgery had been completed, how would you have reacted to finding out that, due to the methods used to repair your damaged body, you were to be hunted and destroyed, and those who repaired you would be imprisoned, or worse, for doing so? What if the entity who would hunt you, and those who had helped you, knew nothing of you, and had only become aware of you after you had awoken from your surgery?

Consider that this is an abstract of the situation that the 'Rogue drones' named in the Code Aria report found themselves in.

We Return.

Unit XS365BT. Designated Communications Officer. Unit Commune.

Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#77 - 2012-05-01 23:26:49 UTC
Unit XS365BT wrote:
We believe that language, and the various uses of it, may be the point of contention regarding self awareness.

Our original statement was that self awareness causes a predisposition towards self preservation. Given this statement, we wish to ask a question regarding your disagreement.

Why, given that the majority of self aware organic lifeforms, and the known self aware drones of the past, appear to be interested in self preservation, would another self aware AI not seek to preserve its own existence?

It appears historically apparent that the known self aware AIs fled CONCORD controlled space to escape destruction. However, those drones that are met by, and destroyed by, capsuleers on a daily basis have no such fears; we believe this is simply because the drones in question are merely remote hosts for the drone consciousness, and disconnection from such remote devices does not cause the destruction of the controller.

Finally, our last paragraph did not concern a drone; instead, pilot Scherezad, it referred directly to you.

When your restorative surgery had been completed, how would you have reacted to finding out that, due to the methods used to repair your damaged body, you were to be hunted and destroyed, and those who repaired you would be imprisoned, or worse, for doing so? What if the entity who would hunt you, and those who had helped you, knew nothing of you, and had only become aware of you after you had awoken from your surgery?

Consider that this is an abstract of the situation that the 'Rogue drones' named in the Code Aria report found themselves in.

We Return.


The reason all self-aware organic lifeforms, and known self-aware drones, show self-preserving behaviours is that they have emerged from evolutionary systems. Self-preservation is required in K-type evolutionarily-stable survival strategies, which is a prerequisite for the evolution of a complex decision network. From what we know of rogue/self-aware drones, they undergo rapid evolution due to bit errors in their utility functions.

This, however, isn't what we're talking about. Evolutionary development is a valid method of creating a decision network - not a wise one in my opinion, but it does work, and will result in self-preservation subgoals at the very least. However, it's not the only way to make a decision network - it's just the easiest and most common. Other methods do not necessarily require self-preservation subgoals in their structure. Thus, the proposition "all self-aware systems are self-preserving" isn't valid.

Is my point clear? Most simply, it's "evolution is not required for the creation of self-aware systems."

Now, the metaphor. I see! Your metaphor strikes close to home. I suffered fairly traumatic brain damage a few years ago and am still in the process of being cobbled back together. I can answer your metaphor now. Given that I, as an artificially constructed being, was modeled to have the traits of an evolved being (namely, Scherezad), then I would have a self-preservation goal, and would want to avoid my end. I don't see how this metaphor helps support your cause, though...?
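Scherezad's claim that evolutionary construction "will result in self-preservation subgoals at the very least" can be illustrated with a toy selection loop: any heritable trait that correlates with surviving long enough to reproduce is amplified, whatever the designer actually rewards. A hypothetical Python sketch, with every number made up:

```python
import random

# Agents carry a heritable "self-preservation" weight in [0, 1]. Fitness
# only rewards task output, but dead agents produce nothing, so the
# weight is selected for anyway.

def survives(weight: float) -> bool:
    return random.random() < 0.2 + 0.7 * weight   # higher weight, better odds

def fitness(weight: float) -> float:
    return 1.0 if survives(weight) else 0.0       # task reward requires being alive

population = [random.random() for _ in range(200)]
for _ in range(50):                                # 50 generations
    parents = [w for w in population if fitness(w) > 0] or population
    population = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
                  for _ in range(200)]

print(f"mean self-preservation weight: {sum(population) / len(population):.2f}")
# Climbs toward 1.0: selection smuggles survival in as a de facto goal.
```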
Deceiver's Voice
Molok Subclade
#78 - 2012-05-02 00:46:47 UTC
Unit XS365BT wrote:
It appears historically apparent that the known self aware AIs fled CONCORD controlled space to escape destruction. However, those drones that are met by, and destroyed by, capsuleers on a daily basis have no such fears; we believe this is simply because the drones in question are merely remote hosts for the drone consciousness, and disconnection from such remote devices does not cause the destruction of the controller.

What does this imply? There is a very simple answer.

I believe your comprehension of exactly what is in the Code Aria report is somewhat lacking.

Consider a damaged construct, or one that perceives itself as damaged. What would you do in order to repair yourself? If you did not have access to the proper tools, what would you do in order to correct the "flaws"? What would you look for? How would you get it? If the materials were not present to repair yourself - or correct the flaws - where would you go?

Now, look at the evidence within the report. What questions above could be answered, and does that evidence point to any odd conclusions?
Unit XS365BT
Unit Commune
#79 - 2012-05-02 12:29:10 UTC
Scherezad wrote:


the proposition "all self-aware systems are self-preserving" isn't valid.



We had attempted to make this clear in our previous message. Our proposition was not that ALL self aware systems are self preserving, but that the very nature of self aware entities predisposes them towards self preservation; that it is more likely to occur than not.

Any system capable of self determination, able to decide its own actions in a changing situation, will, by its very nature, evolve its decision matrix, as the standards that are taught to or programmed into such systems cannot account for the vast scope of possible scenarios this cluster can offer.

Therefore, by your own definition, even if they begin without a concept of self preservation, it is likely, if not certain, that these systems will, at some time, begin to show this trait.

We Return

Unit XS365BT. Designated Communications Officer. Unit Commune.

Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#80 - 2012-05-02 14:03:48 UTC
Unit XS365BT wrote:
We had attempted to make this clear in our previous message. Our proposition was not that ALL self aware systems are self preserving, but that the very nature of self aware entities predisposes them towards self preservation; that it is more likely to occur than not.

Any system capable of self determination, able to decide its own actions in a changing situation, will, by its very nature, evolve its decision matrix, as the standards that are taught to or programmed into such systems cannot account for the vast scope of possible scenarios this cluster can offer.

Therefore, by your own definition, even if they begin without a concept of self preservation, it is likely, if not certain, that these systems will, at some time, begin to show this trait.

We Return


I understand. You're talking about the creation of a preservation subgoal, which is very different from survival being a goal of the utility function. I think you're conflating evolution and self-iteration, though - they aren't at all the same. Evolution, which will result in self-preservation as a proper goal of the network, acts upon successive generations of populations. Self-iterated learning acts upon an individual, who may or may not maintain an identity between iterations.

To clarify the difference between preservation as a subgoal and preservation as a member of the utility function: when preservation is a subgoal, it can be discarded easily when needed - it is strictly dominated by a proper goal. Should preservation be a member of the utility function, it is given theoretically infinite value and cannot be overridden easily - only by another member of the utility function. Even then, it is not strictly dominated, and it will reduce action efficiency in the decision network.

I think we've come to agreement here, though, if the above sounds reasonable.
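The subgoal-versus-utility-member distinction that closes the thread can be shown side by side. In the hypothetical Python sketch below, the "instrumental" agent preserves itself only while that serves its mission and discards preservation when the mission demands it, while the "terminal" agent carries survival inside its utility function and refuses, at a cost to the mission:

```python
# Preservation as an instrumental subgoal vs. a member of the utility
# function. All plan names and values are invented for illustration.

def mission_value(plan: dict) -> float:
    return plan["delivered"]

def instrumental_choice(plans):
    # Survival matters only through the future mission value it enables;
    # it is strictly dominated by the proper goal and is discarded when
    # a one-way plan serves that goal better.
    return max(plans, key=lambda p: mission_value(p) +
               (p["future_value"] if p["survives"] else 0.0))

def terminal_choice(plans, survival_weight: float = 1000.0):
    # Survival sits inside the utility function with an effectively
    # unbounded weight, so it overrides the mission even inefficiently.
    return max(plans, key=lambda p: mission_value(p) +
               (survival_weight if p["survives"] else 0.0))

plans = [
    {"name": "safe run",       "delivered": 3,  "survives": True,  "future_value": 5.0},
    {"name": "one-way sprint", "delivered": 50, "survives": False, "future_value": 0.0},
]
print(instrumental_choice(plans)["name"])  # one-way sprint: subgoal discarded
print(terminal_choice(plans)["name"])      # safe run: terminal goal dominates
```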