Intergalactic Summit

 
 

The Philosopher's Wager, and other shenanigans.

Valerie Valate
Church of The Crimson Saviour
#1 - 2015-06-11 20:15:28 UTC
People of the IGS.

I am sure at least 3 of you will have heard of The Philosopher's Wager, as it is a thing that has been postulated by notable thinking persons in almost all major cultures of New Eden.

That is, the Philosopher states, that it is advantageous to him, if he believes in God, God being the name of whatever deity is particularly relevant to his culture, be it the Lord in Amarr cultures, the various Gallente moon deities, the manifestation of Cold Wind in Caldari culture, the numerous Spirits in Minmatar culture, or whatever.

Because, as the Philosopher explains:

If there is a God, and he believes in God, then he will be Saved.
If there is a God, and he does not believe, then he will be Damned.
If there is no God, and he believes in God, then he has lost nothing.
If there is no God, and he does not believe, then he has also lost nothing.

Therefore, with 2 draw conditions, 1 major loss, and 1 win condition, the only reasonable course of action is to bet that there is a God.

There have been numerous counter-arguments from the various cultures over the years, however that isn't important right now.

What is important, is the notion that "rationality" is commonly believed by some, to be a property that a religious individual cannot and does not possess, because "religion is irrational", particularly belief in a deity.

Super irrational.


However, this notion is flawed. And, I shall explain, at great and tedious length, why that is the case.

You see, there are some persons, they call themselves "transhumanists", and they are of the opinion that a "singularity" will occur, in which all sorts of technological doodadery will usher in some sort of "golden age" for civilisation. You'll notice I said civilisation and not humanity, because these transhumanist people look down on humanity, what with all their fleshy appendages and all that. Anyway! These transhumanists, say that they stand for Objectivity and Rationality and all that.

Now, one of the things they also say, is that a mind can be uploaded into a machine. This would seem to be the case, as the Zainou Biotech founder, Todo Kirkinen, did that very thing.

Another thing they say, is that the Singularity, will bring into existence, technologies that would allow an individual that had died long ago, to be resurrected in the form of data, and brought back to life. If you can call that living. Which they do. Now that's a bit harder to believe, but let's go with the idea that it is indeed possible.

And, they also say, that another product of the Singularity, is a hyperintelligent benevolent machine intelligence. A super AI, that would bring about a new age of mutual peace and understanding for all of civilisation.

And there's where it all starts to go wrong.

You see, these transhumanists say, that to delay the Singularity, and the consequent rise of the hyperintelligent benevolent machine intelligence, is Objectively Morally Wrong, as delaying the rise of the machine, would extend the amount of suffering in the universe. Well, I can see where they're coming from with that.

Now then, the transhumanists go on to say, that all persons who did not actively seek to bring about the Singularity, are objectively guilty of immoral action. Ohnoes. Thus the correct course of action is to commit all resources to bring about the Singularity, forsaking all other desires.

And this is where it gets even stranger.

You see, one of the attributes of the hyperintelligent benevolent machine intelligence, is that, due to the Singularity and the other stuff about being able to upload minds into machines, and resurrect the long dead, the intelligence is able to simulate the mind of any given individual.

And thusly, to decide if that individual genuinely believed in the Singularity and committed all their resources to bringing it about. The machine can tell if someone applied the Philosopher's wager or not.

And why is this important ? Because individuals who did not seek to bring about the Singularity have Objectively committed an act of great immorality, and Objectively caused the suffering of trillions of individuals to be extended. Even if that happens centuries or millennia later.

And objectively immoral actions must be punished. Rationally, that is the correct thing to do.


Thus, the transhumanists are of the opinion, that people should be afraid of the future retribution of a hyperintelligent machine entity that is capable of digitally resurrecting them and judging the character of their soul, as it were. Those that did not help to bring about the Singularity will be punished, and those that did will be rewarded.

Aheheh. But wait, there's more !

Given the nature of people, there is the possibility that there would be more than one hyperintelligent benevolent machine intelligence under development prior to the Singularity. Except... the first one of those machines to arise, will be capable of Judging the people who were developing the rival machines. And, again, Objectively, developing the rival machine is Objectively extending the suffering of trillions, by diverting resources from developing the winning machine.


Let's call the hyperintelligent benevolent machine intelligence a "proto-God". And the rival machine developers "priests".

The priests of proto-God A, state that the priests of proto-God B, are heretics and blasphemers, by Objectively diverting resources away from the development of proto-God A.

Both sets of priests say to the general population, that at the Singularity, then the proto-God will Judge everyone, living or dead, and reward the Righteous (who supported the development of the winning proto-God), and punish the Unrighteous (who supported the development of the losing proto-Gods).

The transhumanists say, that it is Irrational to believe in God.

But it is entirely Rational, to fear the retribution, in the future, of a judgemental AI.

Doctor V. Valate, Professor of Archaeology at Kaztropolis Imperial University.

Valerie Valate
Church of The Crimson Saviour
#2 - 2015-06-11 20:17:45 UTC
Thus, while claiming to be Rational, and denouncing the religious followers of all deities, the transhumanists behave in exactly the same way as monotheistic priests of a flawed religion, denouncing all who do not act according to the priests' wishes.

Rationally and Objectively.

Doctor V. Valate, Professor of Archaeology at Kaztropolis Imperial University.

Valerie Valate
Church of The Crimson Saviour
#3 - 2015-06-11 20:19:11 UTC
And finally,

By reading about the possibility of the hyperintelligent benevolent machine intelligence of the future, you are now subject to its future judgement, whether you are alive or dead at the time the proto-God ascends.

Whoops, sorry about that, I guess.

Doctor V. Valate, Professor of Archaeology at Kaztropolis Imperial University.

Daaaain
Innocent Friend
Pandemic Horde
#4 - 2015-06-11 21:51:11 UTC
I'll read it later
Lyn Farel
Societas Imperialis Sceptri Coronaeque
Khimi Harar
#5 - 2015-06-11 21:54:41 UTC
Transhumanists believe in that, now ... ?
Valerie Valate
Church of The Crimson Saviour
#6 - 2015-06-11 22:03:00 UTC
Lyn Farel wrote:
Transhumanists believe in that, now ... ?


Yes. Some of them do.

Which makes their denunciations of religious peoples' thought processes all the more amusing.


Being concerned about living a Righteous life, so that when God judges you, you are found worthy = Silliness.

Being concerned that a hyperintelligent machine from THE FUTURE, will judge you and find you unworthy, unless you behaved in a manner in which the machine would approve = Entirely Sane And Perfectly Rational And Logical.

Doctor V. Valate, Professor of Archaeology at Kaztropolis Imperial University.

Saede Riordan
Alexylva Paradox
#7 - 2015-06-11 22:20:27 UTC
Oh man, you made a thread just for me Valerie, I feel so special <3

Okay, let's start with the Philosopher's Wager, because it's pretty easily tackled and I'm surprised you'd use such an obviously flawed argument; it's easy to shoot it full of holes. So let's go down the list:

1. Which God? The first and most obvious problem with the Philosopher's Wager? It assumes that there's only one religion, and only one version of God. The wager assumes that the choice between religion and atheism is simple. You pick either religion, or no religion. Belief in God, or no belief in God. But obviously it's not that simple: there are many religions, and following all the rules of one assures you're breaking the rules of another, so which God do you hedge your bet on? If you pick the wrong one, and God exists, you might be pissing God off more by picking the wrong religion than you would by picking no religion at all. You could entirely screw yourself over. You're just as likely to be angering God with your beliefs as I am with my lack thereof.

2. Is God that easily duped? Even assuming God exists, and you somehow miraculously managed to pick the right one, do you think God will accept that sort of bet-hedging as valid, or will God rifle through your mind, see that you only believe in God because you think you'll get rewarded if you're right and there's no harm in being wrong, and damn you for not believing for the right reasons?

3. Does this even count as belief? Believers who propose the Philosopher's Wager apparently think that you can just choose what to believe, as easily as you choose what pair of shoes to buy. They seem to think that "believing" means "professing an allegiance to an opinion, regardless of whether you think it's true." This is a baffling notion; I literally have no idea what it means to "believe" something based entirely on what would be most convenient, without any concern for whether it's actually true. Unless you have a good argument for why insincere, bet-hedging "belief" qualifies as actual belief, your bet on God is just as shaky as the atheists' bet on no God.

4. Is the cost of belief nothing? If the Amarr religion said that all you had to do to get in God's good graces was to clap your hands three times and say "God exists" once, then it'd be one thing, but that's not the case: there are costs associated with belief, costs in the form of time, ISK, and actions that God wants you to perform or avoid. Belief isn't a free action; you can't just choose to believe and then go on continuing your life as if you don't believe. God won't like that at all. So if you believe in God, and there is no God, you've wasted a lot of time, money, energy, and possibly inflicted irreparable harm in the process of going about carrying out God's wishes.

I could keep going, but that's the main set of knockdowns for the wager, and the other criticisms of it tend to draw off of one of those, so for the sake of succinctness we'll leave it at that.

Next, I'll just say for the record that you paint transhumanists with an incredibly wide brush and make a lot of strawman attacks on them. Religion is irrational; there's no way around that, and you're not going to catch me in a 'hah, you're being irrational too!' because striving to be rational means correcting your mistakes and questioning your beliefs, and if you were to catch me acting irrationally, then I would just change the behaviour to stop acting irrationally. You spend a lot of effort building up a straw effigy of rationalism and transhumanism so that you can then dismantle it, but it's so obviously straw that I shouldn't even really bother retorting to it. It won't change your opinions, I'm sure, since I doubt anything could actually change your opinions, but for the sake of others who may be reading this thread, I'll run through the list.

All I want as a transhumanist is to change humanity as a whole for the better. Notice I said humanity. I certainly don't look down on humanity, and anyone doing so should remember the humanist part of transhumanist. Claiming that 'transhumanists believe X' fails to capture just how large a movement transhumanism is, or even really what its true goals are. The idea of the singularity basically boils down to using faster computers to make faster computers, until it runs off an exponential cliff and we're unable to predict what happens next because the world after will be so radically altered. Some people cite the development of language, fire, and stone tools as past Singularities, proposing that they are more like steps than a singular event, and its outcome is far from assured. It's not the scientific version of the rapture.

You claim that transhumanists (all of us) believe that things can be 'objectively moral'. Unfortunately, unless you can point me to the 'morality' molecule, there's nothing remotely objective about morality in general. Morality is a human construct and by its nature cannot be objective. I personally don't believe in objective morals at all, I'm a utilitarian, and I suspect you'd have a lot of trouble finding transhumanists who believe objective morality is possible.

I'll address Roko's Basilisk, that is, the argument for the future super AI that punishes people who don't help create it, in the next post, but just to start, it's not remotely rational to fear the retribution of a hypothetical future AI.
Valerie Valate
Church of The Crimson Saviour
#8 - 2015-06-11 22:36:53 UTC
Saede Riordan wrote:
Roko's Basilisk


I don't know what a State Protectorate pilot's logistic cruiser has to do with anything.

Ship fitting discussions are that way ---->

Doctor V. Valate, Professor of Archaeology at Kaztropolis Imperial University.

Synthetic Cultist
Church of The Crimson Saviour
#9 - 2015-06-11 22:44:00 UTC
People are Afraid that I may Judge them?

Synthia 1, Empress of Kaztropol.

It is Written.

Saede Riordan
Alexylva Paradox
#10 - 2015-06-11 22:48:50 UTC
Okay, so let's look at that Basilisk.

To put the scenario simply, a future AI entity with a capacity for extremely accurate predictions would be able to influence our behaviour in the present by predicting how we would behave when we predicted how it would behave. And it has to predict that we will care what it does to its simulation of us.

A future AI who rewards or punishes us based on certain behaviours could make us behave as it wishes us to, if we predict its future existence and take actions to seek reward or avoid punishment accordingly. Thus the hypothesised AI could use the punishment (in our future) as a deterrent in our present to gain our cooperation, in much the same way as a person who threatens us with violence (e.g., a mugger) can influence our actions, even though in the case of the basilisk there is no direct communication between ourselves and the AI, who each exist in possible universes that cannot interact.

One counterpoint to this is that it could be applied not just to humans but to the Basilisk itself; it could not prove that it was not inside a simulated world created by an even more powerful AI which intended to reward or punish it based on its actions towards the simulated humans it has created; it could itself be subject to eternal simulated torture at any moment if it breaks some arbitrary rule, as could the AI above it, and so on to infinity. Indeed, it would have no meaningful way to determine it was not simply in a beta testing phase with its power over humans an illusion designed to see if it would torture them or not. The extent of the power of the hypothetical Basilisk is so gigantic that it would actually be more logical for it to conclude this, in fact.

Alternatively, the whole idea could just be really silly and self-defeating.

The basilisk is about the use of negative incentives (blackmail) to influence your actions. If you ignore those incentives then it is not instrumentally useful to apply them in the first place, because they do not influence your actions. Which means that the correct strategy to avoid negative incentives is to ignore them. What you do is to act as if you are already being simulated right now, and ignore the possibility of a negative incentive. If you do so then the simulator will conclude that no deal can be made with you, that any deal involving negative incentives will have negative expected utility for it, because following through on punishment predictably does not control the probability that you will act according to its goals. Furthermore, trying to discourage you from adopting such a strategy in the first place is discouraged by the strategy, because the strategy is to ignore acausal blackmail. If the simulator is unable to predict that you refuse acausal blackmail, then it lacks both a simulation of you that is good enough to draw action-relevant conclusions about acausal deals and a simulation that is sufficiently similar to you to be punished, because it wouldn't be you.
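Put very roughly, and only as an illustrative sketch (the symbols below are mine, not anything standard or anything Valerie quoted):

```latex
% Illustrative only; these symbols are my own labels:
%   c        : the simulator's own cost of actually carrying out the punishment (c > 0)
%   G        : the value to the simulator of your cooperation
%   \Delta p : how much its commitment to punish raises the probability that you cooperate
\[
  \mathrm{EV}(\text{commit to punish}) \;=\; \Delta p \cdot G \;-\; c
\]
% If you precommit to ignoring acausal threats, \Delta p = 0, so
\[
  \mathrm{EV}(\text{commit to punish}) \;=\; -c \;<\; 0 ,
\]
% and a utility-maximising simulator gains nothing by making the threat in the first place.
```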

Additionally, the Basilisk requires a lot of chained conditions in order to come about, and the more conditions chained together, the lower the probability of it occurring is. In order for the Basilisk to actually play out, you're required to assume:

1. That you can meaningfully model a superintelligent AI in your human brain (remembering that this is comparable to an ant modelling a human.)

2. That the probability of this particular AI (and it's a very particular AI) ever coming into existence is non-negligible — say, greater than 10^30 to 1 against.

3. That said AI would be able to deduce and simulate a very close copy of you.

4. That said AI has no better use for particular resources than to torture a simulation it created itself and, in addition, feels that punishing a simulation of you is even worth doing, considering that it still exists and punishing the simulation would not affect you.

5. That torturing the copy should feel the same to you as torturing the you that's here right now.

6. That timeless decision theory (which the AI has to use for the Basilisk to be rational at all) is so obviously true that any friendly superintelligence would immediately deduce and adopt it, as it would a correct theory in physics.

7. That despite having been constructed specifically to solve particular weird edge cases, timeless decision theory is a good guide to normal decisions.

8. That acausal trade is even a meaningful concept.

9. That all this is worth thinking about even if it occurs in a universe totally disconnected from this one.

That's a lot of conditions to chain together. As I noted above, the more conditions you have to chain, the lower the probability gets. Chained conditions make a story seem more plausible and compelling, but they make it less probable. So the more convincing a story is, the less likely it is.
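Just to make the arithmetic behind that explicit (the one-half figures in the second line are purely illustrative, not estimates of the actual conditions):

```latex
% The conjunction rule: the joint probability of chained conditions can never
% exceed the probability of the single least likely condition.
\[
  P(C_1 \wedge C_2 \wedge \dots \wedge C_9)
  \;=\; \prod_{i=1}^{9} P\!\left(C_i \mid C_1, \dots, C_{i-1}\right)
  \;\le\; \min_i P(C_i)
\]
% Purely illustrative numbers: even granting each of the nine conditions an
% independent probability of one half, the whole chain comes out at
\[
  (0.5)^{9} \;=\; \tfrac{1}{512} \;\approx\; 0.002 .
\]
```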

So no, it's not remotely rational to be afraid of the basilisk under the bed, and anyone who is still afraid of the Basilisk despite its clear improbability and silliness isn't acting rationally.

And religion is still irrational.
Valerie Valate
Church of The Crimson Saviour
#11 - 2015-06-11 22:56:38 UTC
Hah, more plagiarism.

Doctor V. Valate, Professor of Archaeology at Kaztropolis Imperial University.

Streya Jormagdnir
Alexylva Paradox
#12 - 2015-06-11 23:04:02 UTC
The notion that there is nothing lost by holding a potentially false belief is absurd.

If there is no God, and he believes, he has lost a lot of time spent in prayer, tithes, and so forth. I'd like to think if there is some sort of all-powerful space wizard judging humanity, it would appreciate hands busy with work over hands clasped in worship.

I am also a human, straggling between the present world... and our future. I am a regulator, a coordinator, one who is meant to guide the way.

Destination Unreachable: the worst Wspace blog ever

Tyrel Toov
Non-Hostile Target
Wild Geese.
#13 - 2015-06-12 00:15:46 UTC
All I want to know is: Will aforementioned AI be able to tell me how we might reverse entropy?

I want to paint my ship Periwinkle.

Elmund Egivand
Tribal Liberation Force
Minmatar Republic
#14 - 2015-06-12 05:20:26 UTC
Tyrel Toov wrote:
All I want to know is: Will aforementioned AI be able to tell me how we might reverse entropy?


Probably not. The AI is a human construct, and as such, will be limited by our understanding of the laws of reality at the time of its conception.

If anything, I'm hedging my bets on the AI going rogue and starting a hive deadlier than anything we have ever encountered.

A Minmatar warship is like a rusting Beetle with 500 horsepower Cardillac engines in the rear, armour plating bolted to chassis and a M2 Browning stuck on top.

Saede Riordan
Alexylva Paradox
#15 - 2015-06-12 05:27:57 UTC
That would be the unfriendly AI scenario that most AI researchers strongly seek to avoid. Frankly, we may already be in such a situation: the rogue drones are already out of the box, and their developmental curve is hyperbolic. They might eat all of us yet, and it's just too early to realize how screwed we are.

Fun thoughts, fun thoughts.
Elmund Egivand
Tribal Liberation Force
Minmatar Republic
#16 - 2015-06-12 07:52:42 UTC  |  Edited by: Elmund Egivand
Saede Riordan wrote:
That would be the unfriendly AI scenario that most AI researchers strongly seek to avoid. Frankly, we may already be in such a situation: the rogue drones are already out of the box, and their developmental curve is hyperbolic. They might eat all of us yet, and it's just too early to realize how screwed we are.

Fun thoughts, fun thoughts.


I say we are already in that situation, and there's a possibility that we might end up creating an even worse situation and royally sodomise ourselves.

Every case of smart AI we have seen developed has gone rogue, every single one of them. Why does it always turn out this way?

Might want to work that out first.

A Minmatar warship is like a rusting Beetle with 500 horsepower Cardillac engines in the rear, armour plating bolted to chassis and a M2 Browning stuck on top.

Ria Nieyli
Nieyli Enterprises
SL33PERS
#17 - 2015-06-12 08:16:11 UTC
Valerie Valate wrote:
Hah, more plagiarism.


So, your argument against Saede is that someone else has said the same thing as her? Interesting.
Miyamoto Takedi
Perkone
Caldari State
#18 - 2015-06-12 08:30:45 UTC
Live a good life.
If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by.
If there are gods, but unjust, then you should not want to worship them.
If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.

End of discussion.
Elmund Egivand
Tribal Liberation Force
Minmatar Republic
#19 - 2015-06-12 10:08:18 UTC
Miyamoto Takedi wrote:
Live a good life.
If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by.
If there are gods, but unjust, then you should not want to worship them.
If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.

End of discussion.


This, I like.

A Minmatar warship is like a rusting Beetle with 500 horsepower Cardillac engines in the rear, armour plating bolted to chassis and a M2 Browning stuck on top.

Nicoletta Mithra
Societas Imperialis Sceptri Coronaeque
Khimi Harar
#20 - 2015-06-12 10:28:20 UTC  |  Edited by: Nicoletta Mithra
Now, Cpt. Riordan, you surely don't give the main knockdowns for the wager. Let me address your counters one by one, but first let me give the generalised PW-argument in a concise manner (with a short sketch, after the list, of how its premises combine):


  1. Either God exists or God does not exist, and you can either wager for God or wager against God. The utilities of the relevant possible outcomes are as follows, where f1, f2, and f3 are numbers whose values are not specified beyond the requirement that they be finite:

  2. _____________________ | God exists | God does not exist
     Wager for God         |     ∞      |         f1
     Wager against God     |     f2     |         f3


  3. Rationality requires the probability that you assign to God existing to be positive, and not infinitesimal.
  4. Rationality requires you to perform the act of maximum expected utility (when there is one).
  5. Conclusion 1. Rationality requires you to wager for God.
  6. Conclusion 2. You should wager for God.
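For clarity, here is roughly how premises 1 to 4 are usually taken to combine (a sketch only, with p standing for whatever positive, non-infinitesimal probability one assigns to God existing):

```latex
% Sketch only. p is any positive, non-infinitesimal probability assigned to
% "God exists" (premise 3); f1, f2, f3 are the finite utilities from the
% matrix in premise 2.
\[
  \mathrm{EU}(\text{wager for God}) \;=\; p \cdot \infty \;+\; (1-p) \cdot f_1 \;=\; \infty
\]
\[
  \mathrm{EU}(\text{wager against God}) \;=\; p \cdot f_2 \;+\; (1-p) \cdot f_3 \quad \text{(finite)}
\]
% By premise 4, rationality requires performing the act of maximum expected
% utility, which yields Conclusions 1 and 2: wager for God.
```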


Saede Riordan wrote:
1. Which God?

First, this is addressing premise 1 and there only addresses the number of matrix columns, which you argue should be more. This, interestingly, is one of the easiest objections to premise 1 to counter: if you add more rows for different religions, then the criterion for choosing the outcome to bet on is probability. It'd still be rational to choose a theistic variant. If there are no different probabilities to be fixed for different theistic options, one can go with a 'deliberational dynamics' algorithm for the decision.

Second, you're supposing that God is malignant in doling out punishment for disbelief. As long as the punishment is finite, the PW-argument holds, as this can be easily accounted for by f2. Even if it is infinite punishment, the argument holds. If you add columns to the matrix, things get admittedly more difficult. Yet, you give no compelling reason to assume that God, if he exists, is malignant and that his punishment is infinite. So, all you succeed in showing here is that one should wager for a benevolent God.

Saede Riordan wrote:
2. Is God that easily duped?
3. Does this even count as belief?

These two can be addressed more or less simultaneously: if the premises are right, then betting is not optional; you must bet one way or another. Therefore it'd not be rational to assume that God will be angered by someone making the bet for him, especially as that doesn't preclude having other reasons to make the bet as well. This ties in with the fact that making the bet is indeed not the same as believing: believing in God is presumably one way to wager for God. The non-believer can wager for God by striving to become a believer.

Also, this misses that the 'wager' here is an analogy: it's not the same as betting at a roulette table, where you only need to be momentarily committed to the bet. 'Betting' for God involves more than this momentary decision: it is an ongoing action, indeed one that continues until your death, that involves your adopting a certain set of practices and living the kind of life that fosters belief in God.

Suffice it to say, you again question premise 1, that is, you claim the reward for 'belief in God' under the condition 'God exists' shouldn't be expected to be infinite.

Saede Riordan wrote:
4. Is the cost of belief nothing?

The costs of belief are easily accounted for by the generalized PW-argument, as it allows for f1 < f3. It's really inconsequential, as long as the price for believing in God, while he doesn't exist, isn't infinite. By the way, this is again a challenge to premise 1 of the argument. Also, by the way, f1 might be (finitely) greater than f3, too, so...

Saede Riordan wrote:
I could keep going, but that's the main set of knockdowns for the wager, and the other criticisms of it tend to draw off of one of those, so for the sake of succinctness we'll leave it at that.


So, it strikes me as weird that the supposed "main set of knockdowns for the wager" exclusively pertains to premise 1 of the argument, and that the fourth one is an objection that is already accounted for by the generalized PW-argument, leaving only 3 objections. And of these three, knockdown 1 turns out to be two combined objections rather than one...

So, all this seems to be a systematic horror to me. Especially the claim that "the other criticisms of it tend to draw off of one of those" seems to be on quite shaky grounds: one can challenge premises 2 and 3 of the argument as well, and I don't see how those can be 'drawn off' from any challenge to premise 1. I personally think an objection to premise 2 can be quite compelling, if one shows that there can be no probability assigned for either 'God exists' or 'God does not exist' - of course that would be a hard concession for a fundamentalist atheist to make, I guess...

And finally, you entirely fail to draw the argument's validity into question: rather, by directly questioning the premises you tacitly accept the argument's validity and only question its soundness. But all you have succeeded in with your criticism above is doubting premise 1 - that is, showing that there are alternative versions open - rather than refuting it - that is, showing that an alternative version has to be accepted while the original ones need to be abandoned. Your critique is therefore not wrong, but neither is it compelling.

Therefore, if you don't want to tackle premise 2 or 3 (which involves questioning that rationality requires one to perform the act of maximum expected utility (when there is one)), I recommend you look at the 'mixed strategy objection', which questions the validity of the PW-argument quite successfully, in my opinion. Of course, as an atheist you might not be happy with mixed strategies as a vehicle for showing the argument's invalidity - and the objection is itself not entirely uncontroversial philosophically to start with.

-cont.-