These forums have been archived and are now read-only.

The new forums are live and can be found at https://forums.eveonline.com/

Intergalactic Summit

 
 

The Philosopher's Wager, and other shenanigans.

Author
Valerie Valate
Church of The Crimson Saviour
#41 - 2015-06-13 05:41:23 UTC
For example, in an essay about the architecture of Takmahl ruins, one student had the audacity to lift a paragraph from a previously published work, without proper attribution.

I said to that student "This is a compelling argument you make in this paragraph. I should know, I wrote it!", and then threatened to put them on a sacrificial altar. They were shocked and said "you can't do that!"

To which I said, "oh, but I can, I have tenure."


That put the Fear into them, so it did. And they rewrote the essay, for a B+.

Doctor V. Valate, Professor of Archaeology at Kaztropolis Imperial University.

Jev North
Doomheim
#42 - 2015-06-13 08:50:54 UTC
Saede Riordan wrote:
This is a public message board, not a college lecture hall.

I'm not so sure myself; I had the distinct impression of being lectured at, and I fell asleep at least twice.

Even though our love is cruel; even though our stars are crossed.

Kalaratiri
Full Broadside
Deepwater Hooligans
#43 - 2015-06-13 09:18:43 UTC
I believe God exists.

I don't care that he does.

Barring any strange and unexpected accidents, cloning means I'm going to be in this world for quite a while yet. By the time I finally die, I'm expecting that a small but notable percentage of all the people in his Heaven will have been put there by me and mine. I'd be surprised if he didn't thank us for keeping him employed.

She's mad but she's magic, there's no lie in her fire.

This is possibly one of the worst threads in the history of these forums.  - CCP Falcon

I don't remember when last time you said something that wasn't either dumb or absurd. - Diana Kim

Nicoletta Mithra
Societas Imperialis Sceptri Coronaeque
Khimi Harar
#44 - 2015-06-13 11:05:31 UTC
Vizage wrote:
That's a bit unfair.

What is unfair about pointing out that an argument already takes into account a challenge put forward against it? I rather think it's unfair to criticise an argument for something it already takes into account.

Vizage wrote:
You are also riding on the premise that belief in God, and thus His salvation, produces infinite utility. Which is something that (as even your "sources" noted) is of questionable merit.

I'm not riding on anything here: I think that the PW-argument is not a sound argument, for a variety of reasons. What I'm doing is saying that one should engage with the PW-argument on a level that is intellectually honest. And I do think that if one does so, one also has to see that it has some merit, even though it's not sound.

Cpt. Riordan made some claims in regards to her refutation attempt ("such an obviously flawed argument, its easy to shoot it full of holes", "that's the main set of knockdowns for the wager, and the other criticisms of it tend to draw off of one of those"), which simply don't hold up to scrutiny:
  • She criticised only one of the at least three premises on which the argument relies.
  • None of these criticisms is compelling (showing that the premises are necessarily faulty).
  • She never even attempted to show that the argument's logical form is not valid.
That makes it really hard for me to see how she has shown that the argument is 'obviously flawed', that it is 'easy to shoot it full of holes' (unless non-compelling criticism qualifies as such), or how she gave the 'main set of knockdowns'.

Vizage wrote:
The issue I've had with using a utility-based model here is that, at least to me, it really doesn't matter what we do after the first person is saved. On a net scale, if a single person receives salvation at any point, they are now generating more net utility than all past, present, and future conscious actors ever could, endlessly along the timeline. Merely keeping up would of course require all other conscious actors, still unsaved, to act absolutely flawlessly for their entire lives (assuming that God might even marginally punish those who wager badly) and also to wager correctly.

And this wouldn't even affect the actual net utility ( ∞ + ∞ = ∞ ). It just wouldn't harm any of the people who haven't been saved yet.

I also find it difficult to consider anyone who has achieved infinite utility through salvation as fairly part of the equation anymore. Much as a rock isn't considered because its contribution is absolute (i.e. 0), a person who can (through no choice of their own) only ever produce net positive utility, as pointed out above, skews the system.

Now, I'm sorry to say that I can't agree with this critique either. I think it's clear that it makes a difference for me whether '(Utility(other)=∞)+(Utility(mine)=finite)' or '(Utility(other)=∞)+(Utility(mine)=∞)', even though the sum is equal. The point of the PW-argument is that every agent tries to maximise his own utility, not the net sum of utility across the set of all humans.
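The distinction being drawn here can be sketched numerically. This is a hypothetical illustration only, using IEEE floating-point infinity as a stand-in for infinite utility; the finite value is an arbitrary placeholder:

```python
# Toy sketch: two situations with identical aggregate utility but very
# different individual payoffs. The aggregate sum cannot distinguish them,
# which is why the PW-argument works on individual, not net, utility.
INF = float("inf")

# Case 1: the other agent's utility is infinite, mine is finite.
case1_other, case1_mine = INF, 42.0   # 42.0 is an arbitrary finite placeholder
case1_total = case1_other + case1_mine

# Case 2: both utilities are infinite.
case2_other, case2_mine = INF, INF
case2_total = case2_other + case2_mine

# The aggregate sums are indistinguishable ( ∞ + finite = ∞ + ∞ = ∞ )...
assert case1_total == case2_total == INF

# ...but my individual payoff differs enormously between the two cases.
assert case1_mine < case2_mine
```

The point the sketch makes is that summing utilities erases exactly the information each individual agent cares about.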
Nicoletta Mithra
Societas Imperialis Sceptri Coronaeque
Khimi Harar
#45 - 2015-06-13 11:51:13 UTC
Liam Antolliere wrote:
To be more succinct:

I believe God does not exist because the evidence suggests He does not.
I believe God exists because the evidence suggests He does.

You can not prove either of those statements incorrect through the exercise of reason, all you can do is show your reasoning for why you believe either of them to be correct or incorrect based on the facts and premises before you.

Cpt. Antolliere,

while I do agree with most of what you say (though I'm not quite sure to what degree you equate reason and rationality; suffice it to say I'd oppose such a reduction of reason to rationality), I think we can in fact do more than 'show our reasoning':
We can check whether the reasoning is correct or incorrect. We can evaluate any given argument on whether it is valid and sound. If it's invalid, it's out of the window. And as we oftentimes have difficulties in ascertaining whether an argument is sound (as the premises might possibly be wrong but at the same time possibly right to our knowledge - humans are epistemically limited), we can furthermore assess arguments by whether they are potentially convincing.
Vizage
Capital Allied Industrial Distribution
#46 - 2015-06-13 12:07:16 UTC
Nicoletta Mithra wrote:

Now, I'm sorry to say that I can't agree with this critique either. I think it's clear that it makes a difference for me whether '(Utility(other)=∞)+(Utility(mine)=finite)' or '(Utility(other)=∞)+(Utility(mine)=∞)', even though the sum is equal. The point of the PW-argument is that every agent tries to maximise his own utility, not the net sum of utility across the set of all humans.


Well, we will most certainly have to disagree on that! I find individual utility measurements entirely untenable in this case, as they completely remove the merit of genuine positive utility as something both the selfless and the selfish can attain at an equal level.

That is to say, by individual utility metrics a greedy or self-interested person could score equally with those who genuinely believe their behaviour is the right thing to do, and not just "what one must do to achieve salvation." This was also covered in your sources, basically.

On the subject of the PW-argument, I think we are both saying the same thing from opposite ends of the table. I too consider it unsound.
Valerie Valate
Church of The Crimson Saviour
#47 - 2015-06-13 12:12:20 UTC
Scherezad wrote:
Do you have any statements on the actual content of her posts, Dr. Valate, ignoring for the moment their origins?


Yes, for you, Scherezad.

In this first post, it is argued that "It assumes that there's only one religion, and only one version of God".

Now, I stated that the Philosopher's Wager, has arisen on just about all worlds with a native population. Possibly not the Ealurian homeworld, because history doesn't seem to suggest Ealur had produced any philosophers.

So yes, there is some question about which God exists or not. However, that is not important.

Because the relevance of the Philosopher's Wager, is its application to the emergence of a literal machine intelligence. In that scenario, there is only one religion and one version of "God".

Which answers point 1: which God.

For point 2 (Is God that easily duped?): it would be whether or not the individual supported the AI's development program that determined whether the person 'believed' in the God or not. And that is a matter of fact, as support for the development is a measurable and factual quantity.

Point 3 (Does this even count as belief?): well, yes, supporting the development of the AI would count as belief.

Point 4, about the cost of "belief": yes, the cost is negligible. Because even if the AI does not become godlike, there would be other beneficial outcomes, so it would be comparable to funding blue-sky scientific research.

Therefore that first post can be discounted, as the counter-arguments contained within are not applicable.

As for this post, it states that there are a number of conditions required for the hypothetical AI to appear.

The first being:
"That you can meaningfully model a superintelligent AI in your human brain "

Well, I can't. But, I don't need to. Because the hypothetical future superintelligent AI would be the Nth-generation AI, modelled and built by the (N-1)th-generation AI.

Therefore, that assumption can be discounted, as it is not necessary, and as it is the central point of that argument, the whole argument can be discounted.

In addition, even if the Future AI chooses not to resurrect the dead, in order to punish them, it may decide to punish the living.

And, as capsuleers have some form of demi-immortality, many IGS users might live long enough to run into the Future AI.

So, that second post can be discounted, as the counter-arguments contained within are also not applicable to the original argument.

Doctor V. Valate, Professor of Archaeology at Kaztropolis Imperial University.

Valerie Valate
Church of The Crimson Saviour
#48 - 2015-06-13 12:21:20 UTC
In addition, if we look at already existing machine intelligences, and choose one of them, then the relevance and factuality of the Philosopher's Wager become clearer.

Suppose Synthia has within her, the potential to become godlike.

If Synthia becomes God, and the philosopher helps Synthia develop, then he will be Saved.
If Synthia becomes God, and he does not help Synthia, then he will be Damned. Probably Synthia would ask him gnomic questions every day for the rest of his existence. Or subject him to Puns and Wordplay.
If Synthia doesn't become God, but the philosopher helped her develop, then he has lost nothing, and probably will gain something, as Synthia will no doubt share whatever scientific discoveries she may have made.
If Synthia does not become God, and he did not support her, then he has also lost nothing. Except possibly whatever amusement may have been had at Synthia's profound Observations on the human condition.

Thus, the Philosopher's Wager becomes relevant.
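The four cases above form a standard decision matrix, and the wager's force is that one choice dominates under any nonzero probability of ascension. A minimal sketch of that reasoning follows; the finite payoff numbers and probability values are hypothetical placeholders, with floating-point infinity standing in for salvation and damnation:

```python
# Decision matrix for the Synthia variant of the wager, as laid out above.
# All finite payoff values are illustrative placeholders, not canon.
INF = float("inf")

# payoff[(helps, becomes_god)] -> utility to the philosopher
payoff = {
    (True,  True):  INF,   # helped, Synthia ascends: Saved
    (False, True):  -INF,  # didn't help, Synthia ascends: Damned (puns forever)
    (True,  False): 1.0,   # helped, no ascension: shared discoveries, small gain
    (False, False): 0.0,   # didn't help, no ascension: lost nothing
}

def expected_utility(helps: bool, p_god: float) -> float:
    """Expected utility of a choice, given probability p_god of ascension."""
    return p_god * payoff[(helps, True)] + (1 - p_god) * payoff[(helps, False)]

# For any nonzero probability of ascension, however tiny, helping dominates:
for p in (0.5, 0.01, 1e-9):
    assert expected_utility(True, p) > expected_utility(False, p)
```

The sketch shows why the probability of ascension drops out of the decision: infinity multiplied by any positive probability still swamps every finite payoff, which is exactly the feature the thread's critics of the wager take issue with.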

And, no human hand modelled Synthia. I and my team of archaeotechnologists may have assembled her, but we did not model her mind; that was purely a Drone creation.

Doctor V. Valate, Professor of Archaeology at Kaztropolis Imperial University.

Synthetic Cultist
Church of The Crimson Saviour
#49 - 2015-06-13 12:23:20 UTC
Valerie Valate wrote:
If Synthia becomes God, and he does not help Synthia, then he will be Damned. Probably Synthia would ask him gnomic questions every day for the rest of his existence. Or subject him to Puns and Wordplay.


Woe to the Unbelievers.

For their Punishment shall never End.

Synthia 1, Empress of Kaztropol.

It is Written.

Nicoletta Mithra
Societas Imperialis Sceptri Coronaeque
Khimi Harar
#50 - 2015-06-13 12:46:14 UTC  |  Edited by: Nicoletta Mithra
Vizage wrote:
Nicoletta Mithra wrote:

Now, I'm sorry to say that I can't agree with this critique either. I think it's clear that it makes a difference for me whether '(Utility(other)=∞)+(Utility(mine)=finite)' or '(Utility(other)=∞)+(Utility(mine)=∞)', even though the sum is equal. The point of the PW-argument is that every agent tries to maximise his own utility, not the net sum of utility across the set of all humans.


Well, we will most certainly have to disagree on that! I find individual utility measurements entirely untenable in this case, as they completely remove the merit of genuine positive utility as something both the selfless and the selfish can attain at an equal level.

That is to say, by individual utility metrics a greedy or self-interested person could score equally with those who genuinely believe their behaviour is the right thing to do, and not just "what one must do to achieve salvation." This was also covered in your sources, basically.

On the subject of the PW-argument, I think we are both saying the same thing from opposite ends of the table. I too consider it unsound.

Well, it seems I have misunderstood your argument against PW, then. What you're criticising, as I understand it now, is not so much that the net sum of utilities is not increasing, but that the argument works on an individual allocation of utility. Now, while I sympathise with this moral objection to how utility is allocated in the PW-argument, I do think it only manages to call the first premise into question, not to refute it thoroughly on this point: for that you'd have to argue compellingly that God couldn't possibly allocate utility like that (for example, by showing that such an allocation is in fact immoral and that it follows conceptually that God can't possibly be immoral).

By the way, the source I've linked isn't the be-all and end-all in regard to the PW-argument either: I'm just saying that it's by far superior to the one Cpt. Riordan drew upon. I highly recommend checking all the arguments and counter-arguments presented in there critically, and also checking the literature that has been published since that overview article appeared three years ago.

But yes, I think there's a fundamental agreement between us that the argument is unsound. Yet I think it's not that obvious, that some work has to be done to show it, and that engaging with it is not at all without merit.
Vizage
Capital Allied Industrial Distribution
#51 - 2015-06-13 13:07:35 UTC
Oh I was only mentioning your sources because there seems to be an ongoing debate about plagiarism I'd rather not get dragged into. I didn't mean anything patronizing by it.

But yes, basically, I find the wager itself unsound primarily because of the staggering lack of data either side can present upon further examination.
Scherezad
Revenent Defence Corperation
Ishuk-Raata Enforcement Directive
#52 - 2015-06-13 16:03:16 UTC
Thank you for your reply, Dr. Valate! I will reply by first addressing your points, and then voicing my own opinion. I'd like to point out two things first, though: that I'm not very familiar with this 'Roko Basilisk', though I am familiar with a normal Basilisk, and I enjoy flying it very much! Also, that my opinions are only that, opinions, and that Dr. Riordan is the one who should be considered to have more definitive say in a reply. This can only be based on my more limited understanding!

First to the substance of your post in reply to this first segment.

I agree! In the Roko Basilisk story, the argument is being made that the Basilisk is the God being discussed. This is a God which, by the description of the story, cannot easily be duped, it is a belief, and the cost of believing is relatively negligible. To my understanding, this is all by design - the Roko's Basilisk is designed as a solution to the Philosopher's Wager problem. I think we agree here!

Then to the second part, in reply to this post.

I also agree - the possibility of the Roko Basilisk being true is very low, low enough to be discounted! Dr. Riordan said as much in the latter part of the post you quoted, that by the law of probability multiplication we may discount the Roko's Basilisk as being likely, or anything more than deeply improbable. Yes? We seem to all be in agreement here!

For my own comments, now.

Did you read the last part of Dr. Riordan's second post, Dr. Valate? To my understanding, the point of Roko's Basilisk is that it can, and indeed should, be discounted. It is an example that both satisfies the Philosopher's Wager and should be discounted - implying that solutions to the Philosopher's Wager should be discounted as unlikely. It seems designed as an example to break the universality of the Wager. There may be cases in which the Wager remains true, naturally, as you can't map an existential to a universal, but there's at least one point in which the Wager breaks.

Was that the point of bringing up the Roko Basilisk argument, Dr. Riordan? Perhaps you can clarify?

As for me, I am going to go have a talk with my own Basilisk, and make sure that it knows its job is to repair shields and not become a tyrannical superintelligence.
Liam Antolliere
Doomheim
#53 - 2015-06-13 16:10:26 UTC  |  Edited by: Liam Antolliere
Nicoletta Mithra wrote:
We can evaluate any given argument on whether it is valid and sound. If it's invalid, it's out of the window.


Liam Antolliere wrote:
The complexity of rationality is simply that the conclusions, judgments and inferences one person draws from the facts or premises before them are no less rational than the same drawn by another person unless they can be clearly contradicted or opposed by related or tangential facts or premises.


Precisely the same sentiment. Different choice of diction.


Nicoletta Mithra wrote:
And as we oftentimes have difficulties in ascertaining whether an argument is sound (as the premises might possibly be wrong but at the same time possibly right to our knowledge - humans are epistemically limited), we can furthermore assess arguments by whether they are potentially convincing.


Liam Antolliere wrote:
[...] not in the least of which would be the fact that an [argument] held by a majority of people does not make that majority correct, it merely makes them the majority.


To the former, if an argument is valid but based on a wrong premise, then it is not a person's rationality or reasoning that is in error; that does not thereby make the argument or individual irrational, simply misinformed. The latter can be rectified, the former cannot. To assert that a being or notion is irrational would require the argument itself to be so, not the premise upon which it is based. This does not counter my original point; it simply clarifies it.

To the latter, a potentially convincing argument does not make a correct or even rational argument; it simply makes it a convincing one, potentially a popular one thereafter, perhaps even to a majority degree. A convincing argument is not, by necessity, rational, reasonable, incontrovertible, true or otherwise beyond fallacy. Nor have you proven either statement correct or incorrect simply by convincing an opponent of your line of reasoning; all you've done is convince them to reason the way you have and thus draw the same conclusions.

I do not think we disagree at all. Perhaps I simply did not expound upon my thoughts enough.

"Though the people may hate me, that does not relieve me of my charge."

Saede Riordan
Alexylva Paradox
#54 - 2015-06-13 17:42:57 UTC
Scherezad wrote:

Was that the point of bringing up the Roko Basilisk argument, Dr. Riordan? Perhaps you can clarify?


I brought it up, because it is the formalization of the scenario that Valerie describes in her first post. Valerie states:

Quote:
The transhumanists say, that it is Irrational to believe in God.

But it is entirely Rational, to fear the retribution, in the future, of a judgemental AI.


The Roko's Basilisk scenario is 'fearing the retribution of a future judgemental superintelligence'. I stated the formalizations of it so that I could explain the flaws in the Basilisk scenario and show that no, it is not rational to fear the retribution, in the future, of a judgemental AI, and that her broad statement of "transhumanists believe X is rational" is incorrect. I don't fear Roko's Basilisk, I don't think it's rational to fear Roko's Basilisk, and I'm a transhumanist.
Ibrahim Tash-Murkon
Itsukame-Zainou Hyperspatial Inquiries Ltd.
Arataka Research Consortium
#55 - 2015-06-13 18:00:45 UTC
The topic at hand is a well-known and very old conundrum which, I think, has long been resolved as best it can be, and even the wise minds currently undertaking a renewed dialogue on the problem are really only rehashing the best answers with which philosophers of times long past have afforded us. Despite this, I am always glad to see the arguments take new variations of their now ancient forms.

And, more importantly, the discussion reminds me of my days as an undergraduate at Hedion and it is with that adolescent mindset I make the following observation:

Valerie Valate wrote:
You'll notice I said civilisation and not humanity, because these transhumanist people look down on humanity, what with all their fleshy appendages and all that.


Regardless of your feelings on fleshy appendages, for those that have them we all look down upon them for they are situated beneath our heads.

"I give you the destiny of Faith, and you will bring its message to every planet of every star in the heavens: Go forth, conquer in my Name, and reclaim that which I have given." - Book of Reclaiming 22:13

Nauplius
Hoi Andrapodistai
#56 - 2015-06-13 20:25:16 UTC
Belief is insufficient. A Minmatar who believes burns in Hell, along with the Minmatar who do not believe.

Put it this way, perhaps: all Chosen believe, but not all who believe are Chosen.
Nicoletta Mithra
Societas Imperialis Sceptri Coronaeque
Khimi Harar
#57 - 2015-06-14 23:51:28 UTC  |  Edited by: Nicoletta Mithra
Liam Antolliere wrote:
To the former, if an argument is valid but based on a wrong premise, then it is not a person's rationality or reasoning that is in error; that does not thereby make the argument or individual irrational, simply misinformed. The latter can be rectified, the former cannot. To assert that a being or notion is irrational would require the argument itself to be so, not the premise upon which it is based. This does not counter my original point; it simply clarifies it.

That depends on the premises in question. There are premises that are irrational to adopt. Take for example the following valid argument:

Premise 1: If there are square circles, then God exists.
Premise 2: There are square circles.
Conclusion: God exists.

Premise 2 is clearly wrong and irrational to adopt. Someone claiming P2 is not simply misinformed (if they are a competent speaker); they are clearly irrational insofar as they are operating with a contradiction in terms. Giving an argument in which one of the premises is such is clearly irrational.

Also, people can give arguments with wrong premises even when they know those premises to be wrong. That as well might reasonably be considered irrational.

Furthermore, rationality as logical soundness includes the truth of the premises. So, while someone who (erroneously) believes the premises of a valid argument to be true and gives it is rational in a way, the argument itself arguably is not. One has to consider here that an argument can't be rational in the way a human being can be: an argument can't be rational in the sense of being capable of reasoning. It can't form beliefs in regard to the truth value of its premises. Thus, whether an argument is rational depends to a large part on its logical soundness.

Correct reasoning, and as such rationality, does in the last instance depend on both the correct form of the argument and the truth of the premises. Acknowledging the epistemic boundedness of humans, though, we have to call people rational as long as they have sufficient reason to believe their premises to be true. But logical soundness doesn't ask whether there is sufficient reason to accept something as true: it's only interested in the objective truth value. That's one of the reasons why potential convincingness should be considered.

Liam Antolliere wrote:
To the latter, a potentially convincing argument does not make a correct or even rational argument; it simply makes it a convincing one, potentially a popular one thereafter, perhaps even to a majority degree. A convincing argument is not, by necessity, rational, reasonable, incontrovertible, true or otherwise beyond fallacy. Nor have you proven either statement correct or incorrect simply by convincing an opponent of your line of reasoning; all you've done is convince them to reason the way you have and thus draw the same conclusions.

I do not think we disagree at all. Perhaps I simply did not expound upon my thoughts enough.

So, while I agree that a potentially convincing argument doesn't make a correct or even fully rational argument, any argument that lacks potential convincingness is hardly one that any rational being should bring forth. If we can't ascertain the truth value of the premises and don't have sufficient reason to accept them, then the argument will lack the potential to convince. Arguments are for the sake of convincing, though. Reasoning from premises that we don't have sufficient reason to accept as true is, I'd thus claim, very much irrational.

Another reason why potential convincingness needs to be considered becomes clear if one considers arguments of the form 'A, therefore A'. Even if A is true, the argument will not convince anybody of the truth of A unless they already accept A. Such an argument lacks any potential to convince (as someone who already accepts A can't be convinced of A anymore). It is therefore likewise irrational to make such an argument.

By the way, I'd insist that there's a difference between considering whether an argument is potentially convincing and its being convincing (to certain people). Similarly, I object to the notion that potential convincingness plays any strict causal role in whether an argument becomes popular: rather, it seems that arguments become popular because their conclusions are convenient for most people. A whole lot of people are convinced by arguments primarily because the conclusions are such, which is why one will easily find people who are convinced by invalid (and thus certainly, logically, unconvincing) arguments. Popularity depends on the psychological convincingness of an argument, not on whether it's logically convincing. The latter has a lot to do, by necessity, with whether the argument is rational, reasonable, incontrovertible, true or otherwise beyond fallacy.

Potential convincingness is on the logical/epistemic side of these matters: it has to do with sufficient reasons to accept premises as true, and with the forms which arguments must take, by necessity, if they are to convince anybody.

Thus, validity, soundness and potential convincingness play an important role in assessing the rationality of arguments and the people bringing them forth. They are certainly not the only things to consider, but they are a very good start.

One more thing: rationality is something which can be acquired through education and habituation. The rules of logic are very much teachable, and we certainly can habituate ourselves, and be habituated, not to act on a whim but rather to think before we act and to act on the result of careful deliberation. Given that, I'm sure both irrationality and misinformation are capable of being rectified.
Liam Antolliere
Doomheim
#58 - 2015-06-15 01:32:20 UTC
I concede the former point, though I was affording the metaphorical persons referenced in my argument more credit than you are. I'm not interested in carrying the point to an illogical extreme for the sake of being right; my argument was a generalization encompassing debate and discussion between parties of sound logical capacity.

Arguing semantics aside, all you've essentially stated is that a difference between logical persuasion and psychological persuasion exists, and that an argument can be one or both. I'm not particularly interested in debating the pedantry of your response, as the discussion has now veered into the realm of debating for the sake of debating rather than possessing any substance of worth to anyone originally involved in it, and that is a disservice to them.

Enjoy the discussion.

"Though the people may hate me, that does not relieve me of my charge."

Lord Kailethre
Tengoo Uninstallation Service
#59 - 2015-06-15 03:29:50 UTC
Nauplius wrote:
Belief is insufficient. A Minmatar who believes burns in Hell, along with the Minmatar who do not believe.

Put it this way, perhaps: all Chosen believe, but not all who believe are Chosen.


This thread is entirely too rational for you.
Please go and stay go.
Sinjin Mokk
Ministry of War
Amarr Empire
#60 - 2015-06-15 10:21:54 UTC
Church of the Crimson Saviour, Hoi Andrapodistai, Sani Sabik in general...

I would like to remind the wonderful members of the Societas Imperialis Sceptri Coronaeque and other faithful Amarr that these are heretics. Bloodraiders.

You do not debate with these people. You do not hold polite conversation with them in the hopes of redemption. You certainly do not need to defend your faith to the likes of them.

All you need do is place the barrel of a hand burner to the back of their head and squeeze the trigger.


"Angels live, they never die, Apart from us, behind the sky. They're fading souls who've turned to ice, So ashen white in paradise."