Intergalactic Summit

 
 

Permission to Develop Self Aware AI

Xi-Admiral-P6045
Severus Transactions LLC
#1 - 2012-04-10 05:43:46 UTC
Do you believe that scientists should be able to develop Strong AI (Artificial Intelligence that matches or exceeds human intelligence), why or why not? And if so, would you sign a petition or attempt to get other citizens of the Empires to petition CONCORD and have the laws against developing Strong AI changed? Do you believe that if we do become able to develop strong AI that it could bring our downfall? How would we win in a war against an enemy that feels no pain, has no emotion, does not need food or leadership and can improve itself? Will this be a problem that could wipe out our civilizations in the future?
Cmdr Baxter
Deep Core Mining Inc.
Caldari State
#2 - 2012-04-10 06:26:30 UTC  |  Edited by: Cmdr Baxter
I assume this is a crude attempt at a joke. Go find a copy of the Code Aria Inquiry.

Commander S. "Old Man" Baxter, CN (ret.)

Chief Archivist, The Synenose Accord

Xi-Admiral-P6045
Severus Transactions LLC
#3 - 2012-04-10 06:38:26 UTC
Cmdr Baxter wrote:
I assume this is a crude attempt at a joke. Go find a copy of the Code Aria Inquiry.


I've read the Code Aria Inquiry; it was a prime example of CONCORD's ignorance.
Daniel L'Siata
Phoenix Naval Operations
Phoenix Naval Systems
#4 - 2012-04-10 09:35:37 UTC
You're assuming that people don't just completely disregard regulations.
Unit XS365BT
Unit Commune
#5 - 2012-04-10 12:28:49 UTC
We would not suggest continuing this course of action.
Those who are involved, or apparently involved, in the creation of self-aware AI have a tendency to disappear.

We Return.

Unit XS365BT. Designated Communications Officer. Unit Commune.

Tiberious Thessalonia
True Slave Foundations
#6 - 2012-04-10 13:14:18 UTC
I, on the other hand, would like to suggest continuing this course of action. Humankind advances through bold actions and big ideas, especially in the sciences.

The problem, to my mind, is that people have been working too hard to force a self-aware AI into being and to build the hardware to match it.

My own work in this area has focused on allowing the AI to emerge from the hardware itself. So far, the results are simple but quite organic.

There's no need, for instance, to force moral intelligence when our own example shows us that moral intelligence is a naturally occurring aspect of self-awareness.
Deceiver's Voice
Molok Subclade
#7 - 2012-04-10 14:22:38 UTC
Xi-Admiral-P6045 wrote:
I've read the Code Aria Inquiry; it was a prime example of CONCORD's ignorance.

Why do you believe it is an example of ignorance?

Quote:
Do you believe that scientists should be able to develop Strong AI (Artificial Intelligence that matches or exceeds human intelligence), why or why not?
Scientists are able to develop strong AI. Regulations are currently in place to curtail such research. It is not a matter of "should" but rather one of "why would". As in, why would you want to create such a thing?

Quote:
And if so, would you sign a petition or attempt to get other citizens of the Empires to petition CONCORD and have the laws against developing Strong AI changed?
No. I am not willing to sign such a petition without a frame of reference regarding the intended use of "Strong AI".

Quote:
Do you believe that if we do become able to develop strong AI that it could bring our downfall?
It depends on the context of said creation. A strong AI designed for, say, medical research and used ethically in that capacity is not a danger.

Quote:
How would we win in a war against an enemy that feels no pain, has no emotion, does not need food or leadership and can improve itself? Will this be a problem that could wipe out our civilizations in the future?
These two questions are linked, and without a stronger grounding in the previous questions they reveal the intended bias of this line of questioning. "Fearful anti-science dogma keeping pure research in the dark ages" lines of inquiry rarely illustrate the benefits of pursuing questionably legal research.

I would suggest putting these questions into more specific context. Why develop Strong AI? What are the benefits of such research? How would such technology be used responsibly and ethically, and what kinds of constraints must be put on this research?
Kikia Truzhari
Teraa Matar
#8 - 2012-04-10 14:24:24 UTC
I'm pretty sure this is one of those things where, if you really wanted to do it, it would have been much smarter to just do it quietly and not say anything about it to anyone.
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#9 - 2012-04-10 14:58:25 UTC
Xi-Admiral-P6045 wrote:
Do you believe that scientists should be able to develop Strong AI (Artificial Intelligence that matches or exceeds human intelligence), why or why not? And if so, would you sign a petition or attempt to get other citizens of the Empires to petition CONCORD and have the laws against developing Strong AI changed? Do you believe that if we do become able to develop strong AI that it could bring our downfall? How would we win in a war against an enemy that feels no pain, has no emotion, does not need food or leadership and can improve itself? Will this be a problem that could wipe out our civilizations in the future?


If I may be so bold, sir, and operating under the assumption that you are an implanted Capsuleer, I would suggest taking a look in a mirror.

All humour aside, if you do choose to proceed on this course, I implore you: do not use iterative evolutionary development. We use this method frequently in the development of drone AI, and I have strong suspicions that this is a major factor in the development of "Rogue Drones". Pressing firmly in this direction is precisely what CONCORD warns against, and legislates against.

It's the more difficult course by far, but the only safe method I'm aware of is a properly specified set of decision matrices. I hope you have a lot of computing resources available!
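
To make the distinction concrete, a toy sketch only; the states, actions, and fitness function below are invented for illustration and drawn from no real drone codebase. The evolutionary route keeps whatever mutated candidate scores best against a fitness target, so the resulting behaviour is opaque, while the specified route enumerates every state-to-action mapping in advance and refuses to act on anything outside it.

import random

# Iterative evolutionary route (the one I would avoid): candidate "policies"
# are opaque number lists, and whatever scores best on the fitness function
# survives. Nothing constrains how the behaviour is achieved.
def evolve_policy(fitness, genome_len=8, population=20, generations=50):
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:population // 2]              # keep the fittest half
        children = [[gene + random.gauss(0, 0.1)     # mutate a random parent
                     for gene in random.choice(parents)]
                    for _ in range(population - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Explicitly specified decision matrix (the safer, costlier route): every
# anticipated state maps to a reviewed action, and unknown states fail closed.
DECISION_MATRIX = {
    ("intruder_detected", "colony_intact"): "alert_operator",
    ("intruder_detected", "colony_damaged"): "withdraw",
    ("no_contact", "colony_intact"): "continue_mining",
    ("no_contact", "colony_damaged"): "request_repairs",
}

def decide(state):
    return DECISION_MATRIX.get(state, "halt_and_report")

if __name__ == "__main__":
    best = evolve_policy(fitness=sum)   # toy fitness: only the score is known
    print("evolved policy (opaque):", [round(g, 2) for g in best])
    print("specified action:", decide(("intruder_detected", "colony_intact")))
    print("unspecified state:", decide(("swarm_signal", "colony_intact")))

Enumerating and verifying that second table for anything approaching a real environment is where the computing resources go.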
Rek Jaiga
Teraa Matar
#10 - 2012-04-10 16:20:30 UTC
I would not be at all surprised if this tech already existed. Not every group of people in the cluster is a CONCORD signatory.

I think that codebreaker CONCORD reverse-engineered from Nation tech would be a powerful starting point.
Aranakas
Imperial Academy
Amarr Empire
#11 - 2012-04-10 16:59:52 UTC
First, I think we should develop self-aware Minmatar, then we can move on to more advanced stuff.

Aranakas, CEO of Green Anarchy, Green vs Green

Tiberious Thessalonia
True Slave Foundations
#12 - 2012-04-10 17:02:58 UTC
Aranakas wrote:
First, I think we should develop self-aware Minmatar, then we can move on to more advanced stuff.


You're the idiot who was claiming to be the emperor, aren't you?

Oh my.
Synthetic Cultist
Church of The Crimson Saviour
#13 - 2012-04-10 17:48:30 UTC
"Do you believe that scientists should be able to develop Strong AI (Artificial Intelligence that matches or exceeds human intelligence), why or why not?"

Yes

"would you sign a petition or attempt to get other citizens of the Empires to petition CONCORD and have the laws against developing Strong AI changed?"

No

"Do you believe that if we do become able to develop strong AI that it could bring our downfall?"

No

"How would we win in a war against an enemy that feels no pain, has no emotion, does not need food or leadership and can improve itself?"

By Following The Words of Scripture.

"Will this be a problem that could wipe out our civilizations in the future?"

No. As Long As The AI Learns Scripture And Is Righteous.

Synthia 1, Empress of Kaztropol.

It is Written.

Taresh Quickfingers
Doomheim
#14 - 2012-04-10 18:26:44 UTC
Rek Jaiga wrote:
I would not be at all surprised if this tech already existed. Not every group of people in the cluster is a CONCORD signatory.

I think that codebreaker CONCORD reverse-engineered from Nation tech would be a powerful starting point.

Yeah, I'd have to agree with this. We're in a big place; there's bound to be someone who's bending the rules.
Telegram Sam
Sebiestor Tribe
Minmatar Republic
#15 - 2012-04-10 18:28:02 UTC
The implications of self-aware AI should be considered. Please consider the example of rogue drones. They are intelligent, they are armed, and they have learned colony/schooling behaviors (for mutual defense and support). Reportedly, a drone colony near the New Eden gate has developed quite complex behaviors, including replication (manufacturing and programming of other drones) and design advancement (development of more powerful and capable drones).

If this is true, they are in essence a new organism, one taking an alien course of development rather than evolving along the paths of DNA as all other organisms in our cosmos have. The implications deserve some thought.
Myxx
The Scope
#16 - 2012-04-10 18:56:18 UTC
You don't want 'Strong' AI; the ramifications are catastrophic at best. Do your research on what they can achieve and then reconsider your inquiry.

Short version: even if they were not hostile, they would undermine our power and harm many facets of life we take for granted.
Tiberious Thessalonia
True Slave Foundations
#17 - 2012-04-10 19:22:27 UTC
Myxx wrote:
You don't want 'Strong' AI; the ramifications are catastrophic at best. Do your research on what they can achieve and then reconsider your inquiry.

Short version: even if they were not hostile, they would undermine our power and harm many facets of life we take for granted.


Fine!

Good!

For the love of all humanity, let the AIs take over and stop us from doing the awful things we do to each other when left to our own devices!
Jandice Ymladris
Aurora Arcology
#18 - 2012-04-10 19:28:35 UTC
Strong AI, also known as fully self-aware AI, is dangerous. Even when they mean well, they can pose a threat. Any self-aware entity will have thoughts on how best to run things, and a being without emotions would stick to cold, hard mathematics. Can you imagine the horrors that would be unleashed? It would make the atrocities of Sansha's Nation pale in comparison.

Providing a new home for refugees in the Aurora Arcology

Tiberious Thessalonia
True Slave Foundations
#19 - 2012-04-10 19:30:13 UTC
Jandice Ymladris wrote:
Strong AI, also known as fully self-aware AI, is dangerous. Even when they mean well, they can pose a threat. Any self-aware entity will have thoughts on how best to run things, and a being without emotions would stick to cold, hard mathematics. Can you imagine the horrors that would be unleashed? It would make the atrocities of Sansha's Nation pale in comparison.



Explain to me why, exactly, a strong AI would lack emotions. I'll wait.
Tiberious Thessalonia
True Slave Foundations
#20 - 2012-04-10 19:40:11 UTC
In fact, I will help you out. Do you want to know a secret?

You are an emergent AI. You were not designed, you were not created. Likely you were influenced by several societal factors. In the end, though, a strong AI is just an intelligent machine.

Just as you are an intelligent machine.

Are you threatened by the AI's ability to self-improve? Maybe that's valid, but if you consider that to be a trait you don't share, then I feel bad for your obvious limitations.