Intergalactic Summit

 

Lack of Technology to Reduce Crew Size

Author
Makkal Hanaya
Revenent Defence Corperation
#1 - 2015-06-18 04:12:52 UTC
Other than the capsuleer themselves.

Technology in New Eden, especially ship technology, continues to improve rapidly. Improved armor and modules, as well as entirely new hull designs, roll out every year. Yet since the beginning of the Empyrean Age, I can think of no appreciable technology to reduce the crew size of capsuleer-controlled vessels.

I find this disheartening. Every year, millions (billions?) of people serve and die in our ships. Only frigate-class hulls are as yet designed for autonomous capsuleer control, and even that holds only if you avoid modules or the specialized platforms.

Alternatively, there are capital ships that I am led to believe are wholly controlled by AI. Ships that capsuleers destroy, salvage, and analyze constantly.

How is this possible?

I am not asking for crewless Titans here. That would be ridiculous. Still, imagine the number of lives not lost if cruiser-class hulls could be controlled by a capsuleer alone, or battlecruiser crews could be halved.

This is one area where I believe the rest of New Eden might look to the Gallente for guidance. In regard to automated systems, their technology is superior to that of any other faction. As a start, we might incorporate their designs into non-Gallente vessels, and perhaps, instead of pouring billions of ISK into R&D for shields that have 5% more resistance against EM when overheated while the pilot hums 'God Save the Empress,' we could see if there's a way to expand and improve upon these systems.

Render unto Khanid the things which are Khanid's; and unto God the things that are God's.

Foley Aberas Jones
Caldari Provisions
Caldari State
#2 - 2015-06-18 04:29:41 UTC
SENPAI YOU HAVE RETURNED!!!

I mean...........Good post Op...Erm...
Wendrika Hydreiga
#3 - 2015-06-18 04:53:46 UTC
There have been intrepid capsuleers working on this issue for quite some time, myself included! I've gotten to bounce ideas around with a lot of folks who were concerned about the average crew member's life expectancy, and to hear their solutions to the problem.

To me, one of the biggest hurdles was getting past CONCORD's ban on Strong AI research, which makes most of my research borderline illegal. But weak AI is too limited for the quick decision-making needed for a crew, you see. My solution was to deploy jury-rigged medical caretaker drones to act as a surrogate crew, and to use a central operating system to coordinate them throughout the ship.

The research is there, but it is highly illegal and costs several times the price of the crew it replaces. People have been using crew alternatives despite the ridicule, and I'm sure that with enough research we could make this practice commonplace.
Makkal Hanaya
Revenent Defence Corperation
#4 - 2015-06-18 06:28:02 UTC  |  Edited by: Makkal Hanaya
How disheartening.

It hadn't occurred to me that CONCORD might be one of the reasons we've seen a lack of advancement in this area. I assume the fear is that advancement in AI might lead to an increase in rogue AIs, but that's far less of a worry when it comes to capsuleer-controlled ships. If the technology were limited to the Empyreans, it would be able to save a multitude of lives without significant risk.

Could those more knowledgeable in the realm of neural integration with ships and the abilities of artificial intelligence weigh in on the matter? Is it possible for an AI to completely 'hijack' a ship designed with a human brain as the central process/decision node? Outside of nanites injected directly into the body, I believe our mental processes remain our own.

Foley Aberas Jones wrote:
SENPAI YOU HAVE RETURNED!!!

I mean...........Good post Op...Erm...


Mr. Jones. How good to see you once more. And you've joined I-RED. What a wonderful development.

Are you stationed in Black Rise? I just made the 30 jumps to the region and am attempting to set up my quarters in the station.

It was difficult, but my servants managed to strip the black marble from my previous dwelling and are now remodeling my room with a proper bath, steam room, and spa.

I find Caldari accommodations always need a bit of sprucing.

Yes dear, I still have the vase you borrowed from Silver. I'm keeping it safe in my Magnate until the remodeling is finished.

Render unto Khanid the things which are Khanid's; and unto God the things that are God's.

H1de0
Itsukame-Zainou Hyperspatial Inquiries Ltd.
Arataka Research Consortium
#5 - 2015-06-18 07:39:17 UTC
Because of the rapid development in various fields, both the software and hardware applications responsible for managing and maintaining ship subsystems have become increasingly complex. Capsuleer or not, a single person's brain simply cannot comprehend the amount of data being processed by those systems each second.

Of course we could consider various AI or sub-AI options to automate parts of those tasks, but, as Hydreiga-san suggested, reaching a solution would probably require either circumventing or violating the appropriate CONCORD directives.

Decrypting the Sleeper cache..

Corraidhin Farsaidh
Federal Navy Academy
Gallente Federation
#6 - 2015-06-18 08:43:11 UTC
You will always need crew; much better to spend the time and effort on their safety. Personally, I have my crew controlling drones through holo-rigs. The drones are expendable and require no life support systems in the event of a hull breach, and the actual crew are far safer in armoured escape pods inside the central citadel of the ship.
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#7 - 2015-06-18 15:16:20 UTC
Makkal Hanaya wrote:
How disheartening.

It hadn't occurred to me that CONCORD might be one of the reasons we've seen a lack of advancement in this area. I assume the fear is that advancement in AI might lead to an increase in rogue AIs, but that's far less of a worry when it comes to capsuleer-controlled ships. If the technology were limited to the Empyreans, it would be able to save a multitude of lives without significant risk.

Could those more knowledgeable in the realm of neural integration with ships and the abilities of artificial intelligence weigh in on the matter? Is it possible for an AI to completely 'hijack' a ship designed with a human brain as the central process/decision node? Outside of nanites injected directly into the body, I believe our mental processes remain our own.

Ah! Makkal-haani, you are filled to brimming with wonderful questions today. I'm glad you're back :) Big questions, though. There's a lot to unpack here. These questions are right in my field, so I will answer to the best of my abilities!

First, CONCORD interference. There are injunctions against the development of a lot of AI systems, especially on the general intelligence problem - the so-called "Strong AI" problem. Which, well - we really don't need Strong AI to do the work of a ship. It's a fairly narrow field of intelligence, so a specialized intelligence system is more than sufficient!

A lot of that is due to how most companies develop and research their AIs, really! Most follow the Creodron model of network development. This involves developing a multi-layer decision network with appropriate inputs and outputs, placing the network into a simulation of the environment in which it is expected to work, and iterating with evolutionary selection until a network emerges which can sufficiently solve the problem. The company can then copy the network and use it as the decision engine for the system in question. It's fast, relatively cheap to do, and can be surprisingly flexible. Most drones we see today are developed in this model, as an example.
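If it helps to picture the process, here is a deliberately tiny sketch of that loop in plain Python. Every name and number in it is mine, invented purely for illustration - a real Creodron-style pipeline is vastly more elaborate - but the shape is the same: random networks, a throwaway simulation, selection, mutation, repeat.

# Toy sketch of Creodron-style network development: evolve a small
# decision network inside a crude simulation, then copy out the best one.
# All names and numbers here are illustrative, not any real product's API.
import random

INPUTS, OUTPUTS = 4, 2          # simulated sensor readings -> actuator commands

def random_network():
    # a "network" here is just a flat weight matrix, OUTPUTS x INPUTS
    return [[random.uniform(-1, 1) for _ in range(INPUTS)] for _ in range(OUTPUTS)]

def decide(net, sensors):
    # one linear decision per actuator
    return [sum(w * s for w, s in zip(row, sensors)) for row in net]

def simulate(net, steps=50):
    # stand-in for the training simulation: reward the network for steering
    # its outputs toward whatever the (randomised) environment asks for
    score = 0.0
    for _ in range(steps):
        sensors = [random.uniform(-1, 1) for _ in range(INPUTS)]
        target = [sensors[0] - sensors[1], sensors[2] - sensors[3]]
        outputs = decide(net, sensors)
        score -= sum(abs(o - t) for o, t in zip(outputs, target))
    return score

def mutate(net, rate=0.1):
    return [[w + random.gauss(0, rate) for w in row] for row in net]

def evolve(generations=200, pop_size=30, survivors=5):
    population = [random_network() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=simulate, reverse=True)
        parents = ranked[:survivors]                      # evolutionary selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - survivors)]
    return max(population, key=simulate)                  # copy this one and ship it

best = evolve()
print("best network fitness in simulation:", round(simulate(best), 2))

Note that everything the evolved network "knows" comes from that little simulate() function - which brings me to the catch.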

The problem with this model is that it's only as good as the simulation in which the network is trained. No simulation can perfectly mirror the real world, or even more than a small slice of it - at least not within a realistic budget or timeframe. So these networks are trained on approximations of the scenarios they face when in service. As such, their behaviour fails at those points where the approximation and reality don't meet. When you couple this with self-learning systems you can get very strange results, well outside of expectations. Rogue drone hives are one example.

Another way of developing decision networks, and the one which we follow at Lai Dai Research Biotechnology and Cybernetics, is much more precise. Instead of relying on black-box-magic to build our networks we do the hard work of deconstructing problems to their logical parts and building specialized architectures to solve those problems. It's much more time consuming, much more expensive, and much slower to market, but tends to minimize the unexpected behaviours. It's not deeply competitive on the general market, though - we mostly put our work into implants and certain high-importance modules within larger systems.

To your last question - a brain isn't truly "hijackable"; it isn't a generalized processing substrate. A capsuleer's implants generally are, though, as are many other implants, such as the TCMC.

As for the central question - why we don't see more automated crew, and whether it's possible: it is certainly possible to automate a larger ship entirely. Like I said, the roles of crewmen on a ship are fairly narrow problems. They require intelligence within that field, but they don't need an enormous degree of flexibility. It's mostly laws that prevent their deployment, to be honest! Laws and a good dose of plain old human fear. If more research and development broke away from the Creodron model, we might see more crew automation and a reduction in the staggering casualties. Given the Gallentine grip on that particular field, though, and the very real capitalist concerns for any company making the switch, I don't think that will be likely for some time.

As an aside, LDRBC is pleased to offer a number of crew and worker automation system solutions for tasks such as those noted; please feel free to contact me with inquiries and a quote.

- S
Mizhir
Devara Biotech
#8 - 2015-06-18 20:59:55 UTC
It is certainly an interesting field with many advantages. One of them, as you mentioned, is lowering the loss of lives. However, another significant point is reducing human error, which is unavoidable as long as you still have a crew.

One of our main purposes for the Drifter research is to improve the capsule implants to allow a reduced need for AI and/or crew members. After all, the Drifter battleships appear to be flown entirely by the Drifters themselves. It is still a highly advanced subject, so it may be years before we make a breakthrough on this front.

❤️️💛💚💙💜

H1de0
Itsukame-Zainou Hyperspatial Inquiries Ltd.
Arataka Research Consortium
#9 - 2015-06-18 21:01:27 UTC
Scherezad wrote:

Another way of developing decision networks, and the one which we follow at Lai Dai Research Biotechnology and Cybernetics, is much more precise. Instead of relying on black-box-magic to build our networks we do the hard work of deconstructing problems to their logical parts and building specialized architectures to solve those problems. It's much more time consuming, much more expensive, and much slower to market, but tends to minimize the unexpected behaviours. It's not deeply competitive on the general market, though - we mostly put our work into implants and certain high-importance modules within larger systems.


I must say I am truly intrigued by Your methodology, Dr. Scherezad. I consider my level of expertise in the field rather high, yet I was not aware that Lai Dai's A.I. research is almost entirely based on developing application-specific systems rather than general-purpose neural networks.

I would be interested in discussing more details if Your corporate doctrines do not forbid sharing such knowledge.

Decrypting the Sleeper cache..

Pieter Tuulinen
Societas Imperialis Sceptri Coronaeque
Khimi Harar
#10 - 2015-06-18 21:07:05 UTC
When you guys can come up with an expert system that has half as much initiative and intuition as a 30 year old Deteis crew chief, I'll stop having crewed ships.

For the first time since I started the conversation, he looks me dead in the eye. In his gaze are steel jackhammers, quiet vengeance, a hundred thousand orbital bombs frozen in still life.

Corraidhin Farsaidh
Federal Navy Academy
Gallente Federation
#11 - 2015-06-18 21:11:58 UTC
Pieter Tuulinen wrote:
When you guys can come up with an expert system that has half as much initiative and intuition as a 30 year old Deteis crew chief, I'll stop having crewed ships.


That will never happen, hence my approach...
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#12 - 2015-06-18 21:48:23 UTC
Um...

Cash or charge, sir?

:)
H1de0
Itsukame-Zainou Hyperspatial Inquiries Ltd.
Arataka Research Consortium
#13 - 2015-06-18 21:53:40 UTC
Corraidhin Farsaidh wrote:
Pieter Tuulinen wrote:
When you guys can come up with an expert system that has half as much initiative and intuition as a 30 year old Deteis crew chief, I'll stop having crewed ships.


That will never happen, hence my approach...


In theory, each A.I. is able to reach a point of transcendence beyond which its learning abilities would become limitless (or, using mathematical terms, grow exponentially), thus surpassing Your crew chief in a matter of (consider this a very careful estimate) days.

In practice we are constrained by CONCORD directives brought up earlier which strictly prohibit designing self-aware A.I. constructs.

Decrypting the Sleeper cache..

Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#14 - 2015-06-18 21:54:44 UTC
H1de0 wrote:
I must say I am truly intrigued by Your methodology, Dr. Scherezad. I consider my level of expertise in the field rather high, yet I was not aware that Lai Dai's A.I. research is almost entirely based on developing application-specific systems rather than general-purpose neural networks.

I would be interested in discussing more details if Your corporate doctrines do not forbid sharing such knowledge.

I'm always happy to talk about my work to the level I can! We publish a fair number of papers and certainly have a fairly large body of work we've made public. I'd be happy to chat about it. Feel free to drop me a line sometime :) We tend to do a lot of blackboard scaffolding and distributed semantic net stuff, but we're pretty broad in what we do.
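For a taste of what I mean by blackboard scaffolding, here is the barest sketch in Python - every name in it is invented for this post and has nothing to do with our actual systems. Independent knowledge sources read partial results from a shared blackboard and post their own, and a simple control loop runs them until an answer appears.

# Minimal blackboard-architecture sketch: specialist knowledge sources
# cooperate through a shared blackboard until a controller sees an action.
# Purely illustrative names and values throughout.

class Blackboard:
    def __init__(self):
        self.data = {"raw_contact": 0.72}   # e.g. a noisy sensor return

def filter_noise(bb):
    if "raw_contact" in bb.data and "filtered" not in bb.data:
        bb.data["filtered"] = round(bb.data["raw_contact"], 1)

def classify(bb):
    if "filtered" in bb.data and "classification" not in bb.data:
        bb.data["classification"] = "hostile" if bb.data["filtered"] > 0.5 else "neutral"

def recommend(bb):
    if "classification" in bb.data and "action" not in bb.data:
        bb.data["action"] = "alert_crew" if bb.data["classification"] == "hostile" else "log_only"

def run(blackboard, knowledge_sources):
    # control loop: let each specialist contribute until an action emerges
    while "action" not in blackboard.data:
        for source in knowledge_sources:
            source(blackboard)
    return blackboard.data

print(run(Blackboard(), [filter_noise, classify, recommend]))

The appeal of the pattern is that each specialist stays small and auditable, and the coordination lives in one obvious place.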
Corraidhin Farsaidh
Federal Navy Academy
Gallente Federation
#15 - 2015-06-18 22:27:46 UTC
H1de0 wrote:
Corraidhin Farsaidh wrote:
Pieter Tuulinen wrote:
When you guys can come up with an expert system that has half as much initiative and intuition as a 30 year old Deteis crew chief, I'll stop having crewed ships.


That will never happen, hence my approach...


In theory, each A.I. is able to reach a point of transcendence beyond which its learning abilities would become limitless (or, using mathematical terms, grow exponentially), thus surpassing Your crew chief in a matter of (consider this a very careful estimate) days.

In practice we are constrained by CONCORD directives brought up earlier which strictly prohibit designing self-aware A.I. constructs.


If such an AI ever came into existence it would determine the only weak point in the ship left - us. We would become obsolete, which we will never allow. CONCORD are absolutely right to restrict this. I'll stick with my crew thanks. Did you know that women with a healthy body fat ratio are best suited to space travel? It almost makes me believe there is a god...
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#16 - 2015-06-18 23:49:04 UTC
Corraidhin Farsaidh wrote:
H1de0 wrote:
In theory, each A.I. is able to reach a point of transcendence beyond which its learning abilities would become limitless (or, using mathematical terms, grow exponentially), thus surpassing Your crew chief in a matter of (consider this a very careful estimate) days.

In practice we are constrained by CONCORD directives brought up earlier which strictly prohibit designing self-aware A.I. constructs.


If such an AI ever came into existence it would determine the only weak point in the ship left - us. We would become obsolete, which we will never allow. CONCORD are absolutely right to restrict this. I'll stick with my crew thanks. Did you know that women with a healthy body fat ratio are best suited to space travel? It almost makes me believe there is a god...

Oh goodness, this stuff.

Artificial intelligence studies is not about fabricating consciousness (why would we do that?), or creating superintelligences (why would we do that?). There's no reason to worry about a shipboard AI deciding to kill its Capsuleer or crew (why would it do that?) if that system is designed properly - hence my problems with the Creodron development model. Information Science is about solving complicated problems, that's all.

An inferencing engine is not alive in the sense that we are, it has no drive to protect itself or 'kill all hu-mans' or whatever, it has no drive to 'ascend'. It has a set of utility functions, a semantic structure, a knowledge base, and an inferencer.
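To show just how unglamorous that machinery is, here is a toy forward-chaining inferencer in Python - a knowledge base of facts, a handful of if-then rules, a derivation loop, and a fixed utility ranking. All the names are invented for illustration; nothing proprietary.

# Toy inferencing engine: facts, if-then rules, a forward-chaining loop,
# and a fixed utility function to rank the derived actions.
# Illustrative names only - no drives, no goals of its own.

facts = {"hull_breach_deck_4", "crew_on_deck_4"}

rules = [
    ({"hull_breach_deck_4"}, "seal_bulkheads_deck_4"),
    ({"hull_breach_deck_4", "crew_on_deck_4"}, "dispatch_rescue_drones"),
    ({"seal_bulkheads_deck_4"}, "reroute_atmosphere"),
]

utility = {"dispatch_rescue_drones": 10, "seal_bulkheads_deck_4": 8, "reroute_atmosphere": 3}

def infer(known, rules):
    # forward chaining: keep firing rules until no new conclusions appear
    derived = set(known)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

actions = sorted(infer(facts, rules) - facts, key=lambda a: utility.get(a, 0), reverse=True)
print("recommended actions, best first:", actions)

Nothing in there wants anything. It closes the knowledge base under the rules, ranks the results by the utility function we gave it, and stops.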

Please, stop watching so much Caille sci-fi! It's not real!
Pieter Tuulinen
Societas Imperialis Sceptri Coronaeque
Khimi Harar
#17 - 2015-06-19 04:26:47 UTC
Which is why I do employ expert systems, Schere; I just don't think they can replace humans - especially in combat.

For the first time since I started the conversation, he looks me dead in the eye. In his gaze are steel jackhammers, quiet vengeance, a hundred thousand orbital bombs frozen in still life.

Leopold Caine
Stillwater Corporation
#18 - 2015-06-19 11:16:06 UTC
Makkal Hanaya wrote:
Yet, since the beginning of the Empyrean Age, I can think of no appreciable technology to reduce the crew size of capsuler controlled vessels.


You're making it sound like a couple centuries have passed.

I believe Tuulinen-haan stated it well, so I'll continue from there - besides experienced crew members having a major advantage over an AI system, there's also the matter of convenience for simple, basic tasks; with New Eden's population in the trillions, human life is still conveniently cheap.

As for developing more advanced AIs, the Gallente already did that. You can find their projects roaming the Etherium Reach.
  • Leopold Caine, Domination Malakim

Angels are never far...

Stillwater Corporation Recruitment Open - Angel Cartel Bloc

H1de0
Itsukame-Zainou Hyperspatial Inquiries Ltd.
Arataka Research Consortium
#19 - 2015-06-19 11:22:21 UTC
Scherezad wrote:
Corraidhin Farsaidh wrote:
H1de0 wrote:
In theory, each A.I. is able to reach a point of transcendence beyond which its learning abilities would become limitless (or, using mathematical terms, grow exponentially), thus surpassing Your crew chief in a matter of (consider this a very careful estimate) days.

In practice we are constrained by CONCORD directives brought up earlier which strictly prohibit designing self-aware A.I. constructs.


If such an AI ever came into existence it would determine the only weak point in the ship left - us. We would become obsolete, which we will never allow. CONCORD are absolutely right to restrict this. I'll stick with my crew thanks. Did you know that women with a healthy body fat ratio are best suited to space travel? It almost makes me believe there is a god...

Oh goodness, this stuff.

Artificial intelligence studies is not about fabricating consciousness (why would we do that?), or creating superintelligences (why would we do that?). There's no reason to worry about a shipboard AI deciding to kill its Capsuleer or crew (why would it do that?) if that system is designed properly - hence my problems with the Creodron development model. Information Science is about solving complicated problems, that's all.

An inferencing engine is not alive in the sense that we are, it has no drive to protect itself or 'kill all hu-mans' or whatever, it has no drive to 'ascend'. It has a set of utility functions, a semantic structure, a knowledge base, and an inferencer.

Please, stop watching so much Caille sci-fi! It's not real!


Nevertheless, there will always be the (in)famous "what if?" question..

It is true that the merit of creating intelligent, autonomous systems is not in granting them the gift of self-awareness but in supplying them with the ability to learn, adapt and evolve with the problems they face, no matter how small and straightforward. The possibility of those systems ascending and gaining self-awareness grows in proportion to the level of task complexity and independence we impose on them.

We can always limit that possibility by sandboxing the A.I. with a closed set of variables and/or logic resources, but we are still creating an automaton, a system designed to work stand-alone. There will always be the probability that, as the process of learning progresses with time, instead of following its given principles, the system will start creating its own and prioritizing them.

I understand You may be tired of the argument popping up on every occasion (believe me, I am too), and please do not feel offended by my contradicting You, but I do not think we have the comfort of allowing ourselves to underestimate this kind of outcome, Dr. Scherezad.

Decrypting the Sleeper cache..

Samira Kernher
Cail Avetatu
#20 - 2015-06-19 11:23:15 UTC  |  Edited by: Samira Kernher
Here's a simpler way of reducing crew casualties:

Restrict independent pilots from manufacturing, purchasing, and installing military-grade arms for their ships. That never should have been legal to begin with.

Treat the cause, not the symptom.