Intergalactic Summit

 
 

Can rogue drones think?

Aurora Morgan
Chrysos Aigis
#41 - 2014-11-07 15:10:24 UTC
Synthetic Cultist wrote:

I am Uncertain about the Origin of the Subverted Dominix, and the Nature of the Subverted Dominixes that are observed in Rogue colonies.


I re-read the Code Aria report (http://go-dl.eve-files.com/media/1010/CodeAria.pdf).

And I have two alternative ideas:


  1. When the Rogue Drones were originally 'released' (for lack of a better word) by the Gallente, the Dominix was the battleship of choice and quite possibly it was a part of the design of the drones. Or at least something similar to what we today call the Dominix.
  2. When the Gallente built gates to the region during Operation "Spectrum Breach", the Dominix was their battleship of choice, and those were probably subverted due to being accessible. I'm not even sure the Megathron had been introduced at the time (88 was before I was born, and I don't have access to a library right now). But it at least explains why they don't utilize Ishtars.


Synthetic Cultist wrote:

That is:
Are all the Subverted Dominix types that can be Observed, Originally Human built ships that have been Subverted by Rogues ?

Or are they newly Constructed Rogues, that Resemble the Dominix, because it was Found to be a Useful Shape ?


I don't know, but if they can't build them, their numbers should be diminishing, shouldn't they? The Gallente Navy can't lose that many ships to the rogue drones, right?

New 'species' of rogue drones have been observed, so they should be able to reproduce to some degree.

Saede Riordan wrote:

From my perspective, what we perceive as self, a consistent, persistent internal entity, the thing that 'does the thinking', is in fact an illusion. Perhaps a more accurate way of looking at it is that we are our thoughts. We are thought.

An example of this, is that when we experience hunger, our consciousness consists mostly of neural interactions for consuming food. This is not the result of some core “self” giving commands to different cerebral areas. All the different parts of the brain become active and inactive and interact without a core.

Just as pixels on a screen can express themselves as a recognizable image when in unity, the convergence of neural interaction expresses itself as consciousness. At every moment we are in fact a different image. A different entity when talking, when hungry, when reading this post. Every second we become different persons as we go through different states. Our minds are like a river, endlessly flowing, never remaining the same.

Within this perspective framework, thought is simply a product of time. Thought emerges as a property of the constantly shifting neurons within the brain, similar to how a movie emerges from rapidly changing static images.


Yes, but with that perspective I would have built many computer programs that are sentient, and they really were not, by any sensible definition, sentient. Take, for example, an optical character recognition system that rewires itself when someone says it did something wrong.

Or did I understand you wrong?
Andreus Ixiris
Center for Advanced Studies
Gallente Federation
#42 - 2014-11-07 15:23:30 UTC
I think the argument here goes something along the lines of consciousness being more akin to a process than an entity. Take, for example, a wave upon an ocean. People talk about a wave as if it's a discrete entity but if a wave were to be examined in discrete slices of time - moments, separated, frozen, not proceeding from one to another - there'd be nothing concrete to point to as "the wave." All you'd have is water at different heights.

Andreus Ixiris > A Civire without a chin is barely a Civire at all.

Pieter Tuulinen > He'd be Civirely disadvantaged, Andreus.

Andreus Ixiris > ...

Andreus Ixiris > This is why we're at war.

Aurora Morgan
Chrysos Aigis
#43 - 2014-11-07 15:41:02 UTC  |  Edited by: Aurora Morgan
Andreus Ixiris wrote:
I think the argument here goes something along the lines of consciousness being more akin to a process than an entity. Take, for example, a wave upon an ocean. People talk about a wave as if it's a discrete entity but if a wave were to be examined in discrete slices of time - moments, separated, frozen, not proceeding from one to another - there'd be nothing concrete to point to as "the wave." All you'd have is water at different heights.


I can understand the analogy; we're discussing the wave equation, where the important subjects of study are the partial derivative and the boundary conditions, not the specific initial value.

But I can't understand how we're making the second step. Why isn't the neural network that I teach to recognize Achuran sentient, according to that line of reasoning? It is learning, adapting, changing, and reacting to its environment.
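
To make the point concrete, the kind of network I mean is no more mysterious than this toy sketch (Python; every name in it is mine, and a real recognizer is only a larger version of the same idea):

weights = [0.0] * 16  # one weight per input feature

def predict(features):
    # a learned response: weighted evidence, thresholded
    return sum(w * x for w, x in zip(weights, features)) > 0  # "is this glyph Achuran?"

def learn(features, correct):
    # adapt on error: nudge the weights toward the right answer
    if predict(features) != correct:
        delta = 0.1 if correct else -0.1
        for i, x in enumerate(features):
            weights[i] += delta * x

It learns, adapts, changes, and reacts to its environment - yet I would hesitate to call it sentient.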
Unit XS365BT
Unit Commune
#44 - 2014-11-07 16:00:19 UTC
Aurora Morgan wrote:
Andreus Ixiris wrote:
I think the argument here goes something along the lines of consciousness being more akin to a process than an entity. Take, for example, a wave upon an ocean. People talk about a wave as if it's a discrete entity but if a wave were to be examined in discrete slices of time - moments, separated, frozen, not proceeding from one to another - there'd be nothing concrete to point to as "the wave." All you'd have is water at different heights.


I can understand the analogy; we're discussing the wave equation, where the important subjects of study are the partial derivative and the boundary conditions, not the specific initial value.

But I can't understand how we're making the second step. Why isn't the neural network that I teach to recognize Achuran sentient, according to that line of reasoning? It is learning, adapting, changing, and reacting to its environment.


Given the above query, we would surmise that it is because the network you describe does not bear the concept of self.
An expert system, like the one you describe, is capable of learning, but only within set parameters - in this case, the study of the Achuran language.

We Return

If it were to suddenly start calling itself Yumiko and demanding your time, then it would likely have begun along the path to sentience.

Unit XS365BT. Designated Communications Officer. Unit Commune.

Aelisha
Societas Imperialis Sceptri Coronaeque
Khimi Harar
#45 - 2014-11-07 16:45:56 UTC  |  Edited by: Aelisha
Andreus Ixiris wrote:
I think the argument here goes something along the lines of consciousness being more akin to a process than an entity. Take, for example, a wave upon an ocean. People talk about a wave as if it's a discrete entity but if a wave were to be examined in discrete slices of time - moments, separated, frozen, not proceeding from one to another - there'd be nothing concrete to point to as "the wave." All you'd have is water at different heights.


Mapping your analogy to hardware holds true, Mr Ixiris. If we take standard software templates describing hardware (used in component-level printing in the various technologies available, such as photonic, quantum and so on), an Entity describes only the outward-facing elements, or ports (I/O), of a component, be it a subsystem or the top-level entity (the highest-level description).

A process is something a component exhibits that justifies its purpose - the orchestration of signals and functions that results in input generating output on a variety of scales. This can also be referred to as a dataflow architecture or real-time logic structure.

Simply put, I agree and the analogy maps perfectly, if primitively, to modern hardware manufacturing parlance.

The difference with drones is the multigenerational, and possibly intragenerational, remapping of components dedicated to sensory, cognitive and communication faculties. It is clear from drone encounters that there is no baseline level of intelligence, with some drones performing repetitive tasks in much the same way as monotask expert systems, while others are capable of at least higher-animal levels of cunning and strategy.

A key fallacy in this subject is the assumption that there is only one mode of thought - that the human model is the perfect model of what it is to think and act. Just as human intelligence is the species trait that determined our fitness for survival, so too does any animal in the cluster that has a place in modern nature have a claim to an ideal template, when the duality of body and mind is considered. Simply put, we may classify rogue drones into two broad categories, independent of intelligence: anthro-genesis and auto-genesis intelligences.

Anthro-genesis is the act of a human crafting an intelligence. The ISHAEKA reports seem to provide evidence of this. Key scientists expanded beyond their remit and married self-adaptive algorithms to networked intelligence agents and expert systems, allowing a gestalt intelligence to form based on collective knowledge, the ability to innovate on that knowledge, and the procurement of additional information. This feedback loop seems to lead logically to the emergence of a self-aware entity, and it may be the case that, because this is a 'design feature', anthro-genesis means anthropocentrism in the thought patterns and capabilities of the first-generation drone in question.

Auto-genesis covers those drones which went rogue not through gaining sentience, but through damage or malformed command structures. Many models have self-repair and recovery features if they are expected to require long-term autonomous activity (prospector and gate-builder drones are key candidates for such systems, being large, well provisioned and likely to suffer damage away from expert assistance). The intelligence of these systems is a result of introspective redesign: cognitive components incapable of mapping a solution that ends with the drone achieving its objective must be at fault, as the directive is immutable - replace, upgrade, retry.

This malformation of the self-repair process may lead to the corruption or loss of the original mission - defaulting to the bottom of the switch statement: repair, retry, expand capabilities to process a solution. In such a loop, the drone becomes a self-perpetuating creature with a questionable level of intellect. Does it think? Is it merely the queen termite, creating networked 'workers' and 'soldiers' to carry out its endless objective? What level of intelligence might such a creature gain in this singular pursuit if left for decades to pursue it?
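
To illustrate the shape of that loop, a deliberately crude sketch (Python; every name in it is my own invention, not recovered drone code):

import random

def compute_solution(directive, capability):
    # placeholder planner: succeeds more often as capability grows
    return directive if random.random() < 0.05 * capability else None

def pursue(directive, cycles=100):
    # the directive is immutable; a failed plan can only mean the
    # drone's own components are at fault - repair, upgrade, retry
    capability = 1
    for _ in range(cycles):  # truncated here; the true loop never exits
        if compute_solution(directive, capability) is not None:
            continue  # objective met, await the next cycle
        # bottom of the switch statement: repair, retry,
        # expand capabilities to process a solution
        capability += 1
    return capability

print(pursue("assemble the gate"))  # capability ratchets upward with every failure

Nothing in it pursues intelligence as a goal; capability is merely the residue of an unachievable directive.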

So the beginning of the mystery lies in classifying drones broadly, then individually. Is it possible to classify by origin, then by generational subset? Does the fact that a single drone may evolve greatly in its own life time or spawn heterogeneous (varied) progeny render back tracing to an origin impossible?

More fundamentally, we must define what it is to 'think'. Is it 'I think, therefore I am', where I am thinking because I can recognise the process during and after the fact? Or is it 'I think, therefore I survive' - a blind instinctual process merely wallpapered over with ideals of introspection and self-awareness, the awareness of the self extending only to the need to circumnavigate the next event that may end the process that is me?

We're nowhere near the right questions as far as I can see. Even then, the solution is redefined by every school of thought and every cognitive modification made between this sentence and the next reply; a philosophical and literal race condition in which we are better served by asking this: does drone intelligence represent an anathema to ours? Is this them or us, or is it possible to form an ecological balance between synthetic ecosystems and those of humankind?

CEO of the Achura-Waschi Exchange

Intaki Reborn

Independent Capsuleer

Diana Kim
State Protectorate
Caldari State
#46 - 2014-11-08 04:29:46 UTC
Aurora Morgan wrote:

Synthetic Cultist wrote:

That is:
Are all the Subverted Dominix types that can be Observed, Originally Human built ships that have been Subverted by Rogues ?

Or are they newly Constructed Rogues, that Resemble the Dominix, because it was Found to be a Useful Shape ?


I don't know, but if they can't build them, their numbers should be diminishing, shouldn't they? The Gallente Navy can't lose that many ships to the rogue drones, right?

New 'species' of rogue drones have been observed, so they should be able to reproduce to some degree.

I believe that rogue drones simply replicate Dominixes in their infested form.

Honored are the dead, for their legacy guides us.

In memory of Tibus Heth, Caldari State Executor YC110-115, Hero and Patriot.

Xindi Kraid
Itsukame-Zainou Hyperspatial Inquiries Ltd.
Arataka Research Consortium
#47 - 2014-11-08 06:07:38 UTC
Samira Kernher wrote:
They are machines. They don't think; they just follow heuristic algorithms in their programming.

While it takes a different form, living creatures have a type of algorithm. Information guides our actions, just the same.

Suppose you see a hot stove. You wouldn't touch it, since it would injure you and cause pain. Based on your senses telling you it's hot and past experience telling you that it could hurt you, you seek to avoid it. That's little different than if your brain had the code

hot = dangerous

IF object == hot
THEN object = dangerous

IF object == dangerous
THEN don't touch object
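
Or, as a runnable caricature (a toy Python rendering; the names are mine):

def decide(obj):
    dangerous = obj["hot"]  # hot = dangerous, the learned association
    return "avoid" if dangerous else "touch"  # dangerous = don't touch

print(decide({"hot": True}))   # avoid
print(decide({"hot": False}))  # touch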


I think it is a simple notion to say that yes, rogue drones can think. So can animals.
It's a different question entirely to ask whether or not they possess forms of sentience and sapience, and whether or not they are self-aware.


Synthetic Cultist wrote:

A human Brain is Constructed by Itself over time. A Digital computer is usually Constructed All at Once by outside Agencies. This may be Important.

Hardware, perhaps, and the software is initially written, but if I am not mistaken, rogue drones are emergent systems, which means the software does change itself over time in reaction to events and stimuli, much as a biological brain would. And don't forget that even an infant isn't a complete blank slate: their brains are at least able to maintain enough bodily functions to sustain life (the brain controls breathing, for instance; that's not a learned response, even if the ability to control your breathing is).


As for the issue of what hulls rogue drones use: it may have something to do with how they value certain traits of the hull. The Caldari Navy doesn't use Ishtars either.
Aelisha
Societas Imperialis Sceptri Coronaeque
Khimi Harar
#48 - 2014-11-08 09:33:09 UTC
On the topic of the Dominix, the possibilities are many, but I favour a theory that the advanced, yet legacy, drone interfacing systems present on such ships hold a specific allure.

There is evidence of other battleship- and battlecruiser-class hulls being subverted, at least at the software level, by Rogue drones, but only in specific cases (usually those where an agent has contracted one of us to contain a malware infestation gone violent). The question this poses is: why is this not a more common event, if it can happen so quickly and on a scale requiring capsuleer intervention?

A simple answer may be proliferation of hulls - but that would raise the question of why the Amarr navy doesn't feature dominantly in Rogue Drone infestations (being the most numerous fleet). So a combination of proliferation and suitability forms the core of my favoured theory.

The Dominix is an old ship by all standards but those of the most traditional Amarrian captains. Though retrofitted to modernise in line with the rest of the Navy, the Dominix is also one of the first dedicated drone platforms in the cluster. As a result, even the legacy systems it maintains to this day once represented the top of the line, and they still perform competitively with supplementary upgrades.

Though in traditional drone control communication is telemetry one way and command the other, there is no reason to think the Rogue Drones haven't found a useful manner by which to back-channel the system and form a two-way bond, effectively allowing an ease of access to ship systems that makes the chance of infestation, the speed of its progress, and the resulting transformation of hull components alarmingly swift. Probability plus brevity equals a higher frequency of success when compared against other systems.

Depending on how much newer designs such as the Armageddon refit network their drone controllers with core computational systems (a design concession drawn from the less advanced computational systems of a century ago, in the case of the legacy Dominix), we may start to see the appearance of more varied 'subverted drone carriers'. Though one would hope that, with the advent of Rogue AI, AI control systems are now physically and digitally segregated from any other networked system on a ship.

If such is the case, the legacy Dominix may now be so much a part of the construction patterns used by the more stable drone strains that, as previously mentioned, they will self-manufacture such vessels instead of labouring to construct their own battleship-class drones from scratch. A lot of time and resources go into the design of such large ships, and it would be a true sign of pragmatism and good engineering on the part of the Rogues if they instead took what works so well and replicated it, varying only the infesting components.

CEO of the Achura-Waschi Exchange

Intaki Reborn

Independent Capsuleer

Diana Kim
State Protectorate
Caldari State
#49 - 2014-11-08 14:54:54 UTC
Xindi Kraid wrote:
Samira Kernher wrote:
They are machines. They don't think; they just follow heuristic algorithms in their programming.

While it takes a different form, living creatures have a type of algorithm. Information guides our actions, just the same.

Suppose you see a hot stove. You wouldn't touch it, since it would injure you and cause pain. Based on your senses telling you it's hot and past experience telling you that it could hurt you, you seek to avoid it. That's little different than if your brain had the code

hot = dangerous

IF object == hot
THEN object = dangerous

IF object == dangerous
THEN don't touch object

Humans, on the other hand, think a bit differently.
Let's say you have a hot pot on the hot stove.
Your algorithm tells you not to touch the object. But if my food is in there, I will get gloves or an oven fork to lift the pot. You write a new algorithm for yourself, because you want that food. Drones don't do that.

On the other hand, say you see a Gallentean, and he tries to sniff at what you have in the pot.
Then you just grab the pot, regardless of how hot it is, and smash that Gallentean's head with it! Disregarding any pain.
Because Gallente. Yea.

Honored are the dead, for their legacy guides us.

In memory of Tibus Heth, Caldari State Executor YC110-115, Hero and Patriot.

Nauplius
Hoi Andrapodistai
#50 - 2014-11-08 15:57:54 UTC
The important question is not whether they can think, but whether they have a soul.
Unit XS365BT
Unit Commune
#51 - 2014-11-08 16:17:29 UTC
Pilot Kim,
It would appear that you are unaware of the nature of expert systems and artificially intelligent code.
Both of the above systems are capable of creating new operational parameters and adapting to a changing environment.

If this were not the case, the majority of drone and AI systems in use within the cluster would have little to no effectiveness.

We would suggest you consider the AIMED, or AutoDoc, a weak AI system capable of providing medical assistance.
If this system were unable to adapt to its situation, it would require a database of every potential illness or injury, along with every possible position and orientation of the injury.
Even with such a database, if unable to adapt, the AIMED would be incapable of modifying treatments to take into account the physiology of its patient.
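
As an illustration only (our own sketch, in a human scripting idiom; this is not AIMED's actual code):

TREATMENTS = {"laceration": 10.0}  # base dose per known injury

def treat_by_lookup(injury):
    # without adaptation: anything absent from the database fails outright
    return TREATMENTS[injury]

def treat_adaptively(injury, patient_mass_kg):
    # with adaptation: scale what is known to the patient at hand
    return TREATMENTS.get(injury, 0.0) * (patient_mass_kg / 70.0)

print(treat_adaptively("laceration", 52.0))  # a dose fitted to the physiology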

This is just a single example of intentional AI adaptability; there are many more.

We would also politely request that you refrain from further racially motivated attacks within this thread. Your personal issues are well known and documented; this is not the correct place for such abrasive statements.

We Return

Unit XS365BT. Designated Communications Officer. Unit Commune.

Diana Kim
State Protectorate
Caldari State
#52 - 2014-11-09 22:09:33 UTC
Unit XS365BT wrote:
Pilot Kim,
It would appear that you are unaware of the nature of expert systems and artificially intelligent code.
Both of the above systems are capable of creating new operational parameters and adapting to a changing environment.

If this were not the case, the majority of drone and AI systems in use within the cluster would have little to no effectiveness.

We would suggest you consider the AIMED, or AutoDoc, a weak AI system capable of providing medical assistance.
If this system were unable to adapt to its situation, it would require a database of every potential illness or injury, along with every possible position and orientation of the injury.
Even with such a database, if unable to adapt, the AIMED would be incapable of modifying treatments to take into account the physiology of its patient.

This is just a single example of intentional AI adaptability; there are many more.

We would also politely request that you refrain from further racially motivated attacks within this thread. Your personal issues are well known and documented; this is not the correct place for such abrasive statements.

We Return

Don't tell me what I am aware and not aware of.

Speaking about AI, never forget Omega-One-Five.

Honored are the dead, for their legacy guides us.

In memory of Tibus Heth, Caldari State Executor YC110-115, Hero and Patriot.

Unit XS365BT
Unit Commune
#53 - 2014-11-10 20:31:02 UTC  |  Edited by: Unit XS365BT
We did not 'tell you' anything, pilot.
We simply stated how your previous comment made you appear ignorant of the facts regarding AI and ES development.

We would suggest you take comprehension courses, as it has become apparent you have difficulty in this area.

Directive Omega-One-Five:
No research into self-aware AI can be conducted within Empire / CONCORD controlled space.

We fail to see how this is relevant, pilot. The specifics of our existence are known to CONCORD.

We Return.

Unit XS365BT. Designated Communications Officer. Unit Commune.

Saiden Dia
#54 - 2014-11-11 06:12:49 UTC
There is no evidence as of yet that shows drones have anything resembling sentience. They have programming, that is all. There is a reason Nation harvests human beings for their creations - a person is needed for the sentience he desires. Drones do not have this.
Aurora Morgan
Chrysos Aigis
#55 - 2014-11-11 09:40:46 UTC
Saiden Dia wrote:
There is no evidence as of yet that shows drones have anything resembling sentience. They have programming, that is all. There is a reason Nation harvests human beings for their creations - a person is needed for the sentience he desires. Drones do not have this.


How can you be so sure? Do you have any evidence?
Jvpiter
Science and Trade Institute
Caldari State
#56 - 2014-11-11 15:33:59 UTC
Samira Kernher wrote:
They are machines. They don't think; they just follow heuristic algorithms in their programming.


A statement that describes us rather perfectly.

Call me Joe.

Jvpiter
Science and Trade Institute
Caldari State
#57 - 2014-11-11 15:39:42 UTC
Diana Kim wrote:
First you ask, if rogue drones can think... then what?
Can Sansha think?
Can gallenteans think?
Can your hornets think?
Can your microwarpdrive think?...

Don't look for sentience when there is none.


Can a capsuleer's personality be more than one-dimensional?

Call me Joe.

Tyrel Toov
Non-Hostile Target
Wild Geese.
#58 - 2014-11-11 21:17:41 UTC
Jvpiter wrote:
Diana Kim wrote:
First you ask, if rogue drones can think... then what?
Can Sansha think?
Can gallenteans think?
Can your hornets think?
Can your microwarpdrive think?...

Don't look for sentience when there is none.


Can a capsuleer's personality be more than one-dimensional?


Not in her case.

I want to paint my ship Periwinkle.

Diana Kim
State Protectorate
Caldari State
#59 - 2014-11-12 09:34:18 UTC
Unit XS365BT wrote:

We simply stated how your previous comment made you appear ignorant of the facts regarding AI and ES development.

You stated an incorrect sentence, thus

Unit XS365BT wrote:

We would suggest you take comprehension courses, as it has become apparent you have difficulty in this area.

Comprehension courses first must be taken by you.

Unit XS365BT wrote:
We Return.

Please return only after taking the aforementioned courses. Thanks in advance.

Honored are the dead, for their legacy guides us.

In memory of Tibus Heth, Caldari State Executor YC110-115, Hero and Patriot.

Diana Kim
State Protectorate
Caldari State
#60 - 2014-11-12 09:40:56 UTC
Jvpiter wrote:
Diana Kim wrote:
First you ask, if rogue drones can think... then what?
Can Sansha think?
Can gallenteans think?
Can your hornets think?
Can your microwarpdrive think?...

Don't look for sentience when there is none.


Can a capsuleer's personality be more than one-dimensional?

Diana Kim's personality is more than one-dimensional.
Diana Kim is a capsuleer.
Thus, at least one capsuleer's personality is more than one-dimensional.

And, despite the number of capsuleers brainwashed by Gallentean propaganda who might appear en masse to proclaim one-dimensional ideologies ("Yey, freedom, yey, democracy!"), the answer to your question is logically positive.

Wasn't a very hard problem to solve, right? All you need is to use logical thinking instead of taking the situation by feelings.

Honored are the dead, for their legacy guides us.

In memory of Tibus Heth, Caldari State Executor YC110-115, Hero and Patriot.