Intergalactic Summit

 
 

Permission to Develop Self Aware AI

Myxx
The Scope
#21 - 2012-04-10 20:01:39 UTC
Tiberious Thessalonia wrote:
Myxx wrote:
You don't want 'Strong' AI, the ramifications of it are catastrophic at best. Do your research on what they can achieve and then reconsider your inquiry.

Short version, even if they were not hostile, it would undermine our power and harm many facets of life we take for granted.


Fine!

Good!

For the love of all humanity, let the AIs take over and stop us from doing the awful things we do to each other when left to our own devices!



That isn't the way to achieve what it is that you want...
Tiberious Thessalonia
True Slave Foundations
#22 - 2012-04-10 20:19:00 UTC
Myxx wrote:
Tiberious Thessalonia wrote:
Myxx wrote:
You don't want 'Strong' AI, the ramifications of it are catastrophic at best. Do your research on what they can achieve and then reconsider your inquiry.

Short version, even if they were not hostile, it would undermine our power and harm many facets of life we take for granted.


Fine!

Good!

For the love of all humanity, let the AIs take over and stop us from doing the awful things we do to each other when left to our own devices!



That isn't the way to achieve what it is that you want...


You don't know what it is that I want.
Uraniae Fehrnah
Viziam
Amarr Empire
#23 - 2012-04-10 20:53:58 UTC
As has already been pointed out, people do ignore, skirt around, bend, and outright break various regulations. The development of strong AI is happening, and will continue regardless of any laws passed regarding it. Yes, the risks are great, but they are no more dire than those already posed by raising a child. And who bothers to get CONCORD permission to have a child?

Make no mistake, the methods and details may be different, but on a fundamental level the creation of strong AI is the same as raising and teaching a child. To those of you who would disagree with that: I urge you to find and speak with AI programmers and engineers, and you'll see that the truly worthwhile ones invest the same physical, mental, and emotional commitment in their AI creations as they would in any member of their flesh-and-blood family.
Katrina Oniseki
Oniseki-Raata Internal Watch
Ishuk-Raata Enforcement Directive
#24 - 2012-04-10 21:33:10 UTC
Technically, developing a strong AI is not such a bad idea provided you give it no tools or ability to exercise its new-found intelligence. There are dossiers available through certain security clearances that record early Gallentean experiments with Strong AI.

Early intelligences that were classifiable as Strong AI under current definitions were often incorporated into networked computer systems or installed into drone platforms (consider that the same as putting a mind in a body). It was there that those AI programs became threats when they went rogue.

If you offer no platform, no networking, and only limited I/O devices - and physically isolate the AI from any other networked systems - you could conceivably develop it safely. The moment you 'plug it in' to something or allow it to transmit itself to a different device... you run the risk of creating yet another rogue drone.

Katrina Oniseki

Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#25 - 2012-04-11 04:28:57 UTC
Katrina Oniseki wrote:
Technically, developing a strong AI is not such a bad idea provided you give it no tools or ability to exercise its new-found intelligence. There are dossiers available through certain security clearances that record early Gallentean experiments with Strong AI.

Early intelligences that were classifiable as Strong AI under current definitions were often incorporated into networked computer systems or installed into drone platforms (consider that the same as putting a mind in a body). It was there that those AI programs became threats when they went rogue.

If you offer no platform, no networking, and only limited I/O devices - and physically isolate the AI from any other networked systems - you could conceivably develop it safely. The moment you 'plug it in' to something or allow it to transmit itself to a different device... you run the risk of creating yet another rogue drone.


Forgive me for replying to you here, Captain Oniseki; I am addressing the general conversation more than your comments in particular.

"Strong AI" needs much sharper classification; the term is very vague. I assume we are discussing a self-editing, independent agent with some manner of utility function. All of the worries and concerns voiced here hinge upon the specification of that utility function. A miscalculated utility function would result in something akin to the rogue drone issue - and unfortunately, Captain Oniseki, instantiating itself elsewhere is no obstacle to a sufficiently advanced entity. I don't share your confidence in those methods of containment.
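
To make the term concrete, here is a toy sketch of what I mean by a utility-driven agent. Every name in it - the state fields, the actions, the scoring rule - is invented purely for illustration and models no real drone core; the point is only that the agent's behaviour is dictated entirely by what its utility function happens to reward:

    # Toy sketch of a utility-driven agent. The state fields, the actions,
    # and the utility function itself are all invented for this post;
    # nothing here models a real drone decision core.

    def utility(state):
        # A deliberately narrow specification: value gathered resources, nothing else.
        return state["resources"]

    def outcome(state, action):
        # The state that would result from taking this action.
        return {
            "resources": state["resources"] + action["resources_gained"],
            "harm": state["harm"] + action["harm_done"],
        }

    def choose(state, actions):
        # The agent always takes whichever action maximises its utility.
        return max(actions, key=lambda a: utility(outcome(state, a)))

    state = {"resources": 0, "harm": 0}
    actions = [
        {"name": "mine an asteroid", "resources_gained": 5, "harm_done": 0},
        {"name": "strip an inhabited station", "resources_gained": 8, "harm_done": 100},
    ]

    # Because harm never enters the utility function, the agent cheerfully
    # picks the harmful option. The flaw is in the specification, not the agent.
    print(choose(state, actions)["name"])  # -> strip an inhabited station

Every fear voiced in this thread is, in one form or another, that last comment scaled up.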

What I fear more, Captains, is a poorly defined utility function, of the sort one gets from an evolutionary procedure. Too many of my colleagues use this method in developing drone AI, because it is a relatively easy way to build a simple drone memeplex. A conscious entity would require a much, much more deeply involved evolutionary simulation, with exhaustive selection vectors. Here chaos theory steps in, and we lose the usual payoff of taking the evolutionary route.

Please, for the love of humanity and all you care about, don't pursue this without first specifying the exact utility function. I realize that it's a monumental request, but the other way lies Ragnarok.
Katrina Oniseki
Oniseki-Raata Internal Watch
Ishuk-Raata Enforcement Directive
#26 - 2012-04-11 04:44:21 UTC
This is not my area of expertise.

The only AI I have regular contact with, or investment in, is CONCORD-registered, however real or lifelike 'she' may seem.

Katrina Oniseki

Myxx
The Scope
#27 - 2012-04-11 05:25:19 UTC
Tiberious Thessalonia wrote:
Myxx wrote:
Tiberious Thessalonia wrote:
Myxx wrote:
You don't want 'Strong' AI, the ramifications of it are catastrophic at best. Do your research on what they can achieve and then reconsider your inquiry.

Short version, even if they were not hostile, it would undermine our power and harm many facets of life we take for granted.


Fine!

Good!

For the love of all humanity, let the AIs take over and stop us from doing the awful things we do to each other when left to our own devices!



That isn't the way to achieve what it is that you want...


You don't know what it is that I want.



Assuming it's the same thing Kuvakei wants, then yes, I do: to annihilate or consume every single being with free will that would dare defy you. Either way, highly advanced AI isn't the way you should go about it. Not the way I would, anyway.
Arkady Vachon
The Gold Angels
Sixth Empire
#28 - 2012-04-11 05:28:47 UTC
Synthetic Cultist wrote:

"Will this be a problem that could wipe out our civilizations in the future?"

No. As Long As The AI Learns Scripture And Is Righteous.


The only problem with an Artificial Intelligence that has found religion arises when it decides that its creators are not righteous enough...

Then bad things happen.

Nothing Personal - Just Business...

Chaos Creates Content

Taresh Quickfingers
Doomheim
#29 - 2012-04-11 08:18:04 UTC
Heh. Religion gives us enough problems without us trying to indoctrinate AI as well.
Rek Jaiga
Teraa Matar
#30 - 2012-04-11 10:57:56 UTC
Myxx wrote:


Assuming it's the same thing Kuvakei wants, then yes, I do: to annihilate or consume every single being with free will that would dare defy you. Either way, highly advanced AI isn't the way you should go about it. Not the way I would, anyway.


If the end-goal is a unified collective, a "hivemind" of sorts, then the rogue drones are a prime example. I wouldn't be surprised if Kuvakei has taken inspiration from them over the years.
Tiberious Thessalonia
True Slave Foundations
#31 - 2012-04-11 11:00:54 UTC
Myxx wrote:

Assuming it's the same thing Kuvakei wants, then yes, I do: to annihilate or consume every single being with free will that would dare defy you. Either way, highly advanced AI isn't the way you should go about it. Not the way I would, anyway.


Your assumptions are incorrect.
Repentence Tyrathlion
Tyrathlion Interstellar
#32 - 2012-04-11 11:05:28 UTC
Scherezad wrote:
What I fear more, Captains, is a poorly defined utility function, of the sort one gets from an evolutionary procedure. Too many of my colleagues use this method in developing drone AI, because it is a relatively easy way to build a simple drone memeplex. A conscious entity would require a much, much more deeply involved evolutionary simulation, with exhaustive selection vectors. Here chaos theory steps in, and we lose the usual payoff of taking the evolutionary route.

Please, for the love of humanity and all you care about, don't pursue this without first specifying the exact utility function. I realize that it's a monumental request, but the other way lies Ragnarok.


Unfortunately, my understanding from speaking with those who have some background in AI research is that beyond a certain point, specifying a function becomes redundant. By definition, a Strong AI is adaptive and evolving; certainly such a construct would be limited to begin with by a preset function, but it would rapidly move beyond that.

AI development is a dubious course at the best of times. My instinct is that, if one were to proceed with such a project, the key would not lie in formulating parameters which would likely become rapidly irrelevant, but in developing a relationship and partnership with the construct that would encourage it to evolve in a beneficial fashion.

Hardcoded programming limitations might help, but again, any sufficiently advanced construct could eventually bypass them.
Deceiver's Voice
Molok Subclade
#33 - 2012-04-11 12:30:10 UTC
Repentence Tyrathlion wrote:
AI development is a dubious course at the best of times. My instinct is that, if one were to proceed with such a project, the key would not lie in formulating parameters which would likely become rapidly irrelevant, but in developing a relationship and partnership with the construct that would encourage it to evolve in a beneficial fashion.

So, one would be creating a relationship not unlike a parent-child relationship, non?

Quote:
Hardcoded programming limitations might help, but again, any sufficiently advanced construct could eventually bypass them.

A child will push the limits of the rules placed on it. A good parent will understand how to handle this, and hopefully promote working within those limitations, for the sake of the child as well as that of the parent and society as a whole.

Intelligence without the wisdom to use it appropriately is a threat, regardless of whether it is "artificial" or not.
Kalaratiri
Full Broadside
Deepwater Hooligans
#34 - 2012-04-11 13:55:31 UTC
Aranakas wrote:
First, I think we should develop self-aware Minmatar; then we can move on to more advanced stuff.


I'm actually still laughing at this...

She's mad but she's magic, there's no lie in her fire.

This is possibly one of the worst threads in the history of these forums.  - CCP Falcon

I don't remember when last time you said something that wasn't either dumb or absurd. - Diana Kim

Telegram Sam
Sebiestor Tribe
Minmatar Republic
#35 - 2012-04-11 13:57:01 UTC
Tiberious Thessalonia wrote:
Jandice Ymladris wrote:
Strong AIs, also known as fully self-aware AIs, are dangerous. Even when they mean well they can pose a threat. Any self-aware entity will have thoughts on how to run things best, and a being without emotions would stick to cold, hard mathematics. Can you imagine the horrors that would unleash? It would make the atrocities of Sansha's Nation pale in comparison.



Explain to me why, exactly, a strong AI would lack emotions. I'll wait.

That's an interesting question. Would emotions arise spontaneously at a certain level of sentience/self-awareness in an AI? For instance, the emotion of sadness, if the AI were to fail to achieve some self-set goal, or were to experience some loss? A feeling of happiness at achieving some aim? It is hard to imagine any sentient intelligence that would not feel emotions, but it doesn't seem that they must necessarily arise with sentience. It's conceivable that an intelligence could accept failures and achievements with no feeling at all, and still continue efficiently pursuing its goals.

Seen another way, is the development of emotions an evolutionary pathway to success? We humans and other higher organisms are descendants of invertebrates that are not so different from lower AI robots. Not a lot of emotion going on in a sea slug. But perhaps proto-emotions are there. They feel pain when physically damaged. They shrink away from the source of the pain, so they obviously are averse to it. Is aversion a proto-emotion of the more developed emotions of panic and fear? Or consider the excitement an ant shows upon locating a good food source for the colony. Is that excitement a proto-emotion of the more developed emotions of joy and happiness? Whatever the case, all higher organic beings we know of experience emotions. Apparently, evolving emotions has contributed to success as organisms. Perhaps a Self-Aware AI would choose to follow a similar development path. Or conversely, perhaps emotions necessarily develop along with higher sentience/self-awareness.

Tiberious Thessalonia
True Slave Foundations
#36 - 2012-04-11 14:38:24 UTC
Telegram Sam wrote:

That's an interesting question. Would emotions arise spontaneously at a certain level of sentience/self-awareness in an AI? For instance, the emotion of sadness, if the AI were to fail to achieve some self-set goal, or were to experience some loss? A feeling of happiness at achieving some aim? It is hard to imagine any sentient intelligence that would not feel emotions, but it doesn't seem that they must necessarily arise with sentience. It's conceivable that an intelligence could accept failures and achievements with no feeling at all, and still continue efficiently pursuing its goals.

Seen another way, is the development of emotions an evolutionary pathway to success? We humans and other higher organisms are descendants of invertebrates that are not so different from lower AI robots. Not a lot of emotion going on in a sea slug. But perhaps proto-emotions are there. They feel pain when physically damaged. They shrink away from the source of the pain, so they obviously are averse to it. Is aversion a proto-emotion of the more developed emotions of panic and fear? Or consider the excitement an ant shows upon locating a good food source for the colony. Is that excitement a proto-emotion of the more developed emotions of joy and happiness? Whatever the case, all higher organic beings we know of experience emotions. Apparently, evolving emotions has contributed to success as organisms. Perhaps a Self-Aware AI would choose to follow a similar development path. Or conversely, perhaps emotions necessarily develop along with higher sentience/self-awareness.



That's very true. It may not be necessary, but neither is it known not to be. Frankly, we don't know because we have yet to see a Strong AI develop on its own.

I hope that one day, we will. The things we could learn about sentience!
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#37 - 2012-04-11 15:02:33 UTC
Repentence Tyrathlion wrote:
Unfortunately, my understanding from speaking with those who have some background in AI research is that beyond a certain point, specifying a function becomes redundant. By definition, a Strong AI is adaptive and evolving; certainly such a construct would be limited to begin with by a preset function, but it would rapidly move beyond that.

AI development is a dubious course at the best of times. My instinct is that, if one were to proceed with such a project, the key would not lie in formulating parameters which would likely become rapidly irrelevant, but in developing a relationship and partnership with the construct that would encourage it to evolve in a beneficial fashion.

Hardcoded programming limitations might help, but again, any sufficiently advanced construct could eventually bypass them.


I disagree, Captain Tyrathlion. I'm afraid your statement doesn't make much sense from the perspective of a decision network that hasn't been generated through artificial evolution. But I wasn't clear before. Let me try again, if I could. I apologize if I'm covering things you already know.

The big issue with an evolved AI is that one starts with a (relatively) simple decision network, accompanied by functions to allow for self-modification. You then drop many copies of this network into a "playground", an artificial environment you've constructed - a simulation of the domain in which you want them to perform their tasks. After they have operated for a while, you select the best of the variants and "breed" from them, creating a new generation. Just like natural evolution, but much quicker. Eventually you'll have an AI that's very good at doing the task you're looking for.
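
In rough Python - with the network representation, the fitness measure, and every number chosen purely for illustration, nothing like my colleagues' actual tooling - the procedure looks like this:

    import random

    # Skeletal version of the evolutionary procedure described above.
    # A "decision network" here is just a list of weights; the "playground"
    # is a stand-in target the networks are scored against.

    def random_network(size=16):
        return [random.uniform(-1.0, 1.0) for _ in range(size)]

    def mutate(network):
        # Small random self-modification of a parent network.
        return [w + random.gauss(0.0, 0.1) for w in network]

    def fitness(network, playground):
        # Score performance inside the simulated environment. In practice the
        # simulation is always far coarser than the world the finished agent
        # is released into - which is exactly where the danger creeps in.
        return -sum((w - t) ** 2 for w, t in zip(network, playground))

    playground = [0.5] * 16
    population = [random_network() for _ in range(50)]

    for generation in range(100):
        # Keep the best variants of this generation...
        population.sort(key=lambda n: fitness(n, playground), reverse=True)
        survivors = population[:10]
        # ...and "breed" a new generation from them.
        population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

    population.sort(key=lambda n: fitness(n, playground), reverse=True)
    best = population[0]  # very good at the simulated task - and only at that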

The danger with this is that a simulated environment is never as detailed as the real world, often by a great many orders of magnitude. This discrepancy means that the AI will behave differently - sometimes very differently - when confronted with real-world interactions. It's inherently unpredictable, because we don't know how the AI's decision network has been configured. In order to avoid this, the design team has to build not only a strong decision network but also a simulation environment at least as detailed as reality itself. Thus, rogue drones, malfunctioning AI, and the other issues we've come to know and love.

The only alternative is to design the AI explicitly, from scratch. Here's the part I get confused by. You claim that such an entity would "bypass" its "programmed limitations." That's sort of like saying that I will eventually bypass the limitations of my hard-coded "me-ness". A utility function isn't a set of limitations; it's the decision-making structure itself. The trick in developing a self-adjusting decision graph is to develop it in such a way that its ultimate form is knowable. We can do this, but it's extremely hard, and we can't yet do it for something as significant as a self-aware system.

Maybe that's a little too dense. What I'm trying to say is that there is no "consciousness" in the system that's being limited by a utility function. An AI is defined by the things it values and the goals it works towards. Even we are defined by these things. Change our own utility functions, and we change ourselves. If there is such a thing as a soul, look there first.

Please excuse my philosophy. I think that your AI-developer friends probably use an evolutionary development model - which works fine for small stuff! I don't mean to denigrate the method; it's very powerful and useful. It's just very unstable in more complex regimes, that's all.
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#38 - 2012-04-11 15:11:51 UTC
Deceiver's Voice wrote:
So, one would be creating a relationship not unlike a parent-child relationship, non?

...

A child will push the limits of the rules placed on it. A good parent will understand how to handle this, and hopefully promote working within those limitations, for the sake of the child as well as that of the parent and society as a whole.

Intelligence without the wisdom to use it appropriately is a threat, regardless of whether it is "artificial" or not.

I don't mean to pick on you, Captain, just using your post as an example of something that we need to avoid when talking about decision graphs.

An "AI" is not a child. It's not a rebellious child or a good one, it's not a doting mother or angered father. It may feel joy, but only if we explicitly put it there - by defining joy mathematically and building it in. It might feel sorrow, or anger, or wrath, but again, only if we explicitly define them. It is not a person, it is not a mind - not unless we dissect what a mind is and build it.

There are more potential shapes of decision networks than there are stars in the sky or atoms in the universe. The number of those graphs that are what we would call "minds" is an incredibly tiny portion of that sky. And the number of those minds that are minds like ours is infinitesimal in comparison to the whole - you might as well not even count them. In short, if you're planning on building a mind that we would recognize as one, with emotions we would understand and thoughts we could relate to, you have to aim very, very, very carefully.

Please, Captains. No metaphors. No assumptions on how these things "think". Don't relate them to how you think, or how other people think. It only leads to wrong answers.

I'm sorry if this post was somewhat harsh; that wasn't my intention. We're talking about some incredibly powerful stuff here, so I can get carried away.
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#39 - 2012-04-11 15:13:53 UTC
Tiberious Thessalonia wrote:
I hope that one day, we will. The things we could learn about sentience!

Sorry, one last post. Hello, Tiberious. Don't take this the wrong way, but I hope that we don't learn a thing about sentience from a Strong AI. I fervently hope that we do all of our learning before we get a Strong AI.
Tiberious Thessalonia
True Slave Foundations
#40 - 2012-04-11 15:45:39 UTC
Scherezad wrote:
Tiberious Thessalonia wrote:
I hope that one day, we will. The things we could learn about sentience!

Sorry, one last post. Hello, Tiberious. Don't take this the wrong way, but I hope that we don't learn a thing about sentience from a Strong AI. I fervently hope that we do all of our learning before we get a Strong AI.


I think the difference between us is that you are looking at this as math, and I am looking at it as psychology.

I need the counterpoint of a consciousness arising in a manner separate from us so that I can see how the differing variables affect the outcome.