Globals.md
Globals for later inquiry

Software books: read... or listen to?

7:12pm 12.17.2018
7:20am 12.18.2018

Dezert-Smarandache Theory: The management and combination of uncertain, imprecise, fuzzy and even paradoxical or highly conflicting sources of information.

Math and Belief link

Eliezer:

You know, the concept of the first uncountable ordinal is actually one of the strongest reasons I've ever heard to disbelieve in set theory. ZFC, or rather NBG, does imply the existence of a first uncountable ordinal, right, or am I mistaken? link (Need to load all comments. Has karma 0.)

komponisto:

The article's failure to distinguish between a mathematical theory and a mathematical model (map and territory, possibly?)

Exactly. This is one of Eliezer's few genuine philosophical mistakes, one which, four years later, he's still making.

Eliezer:

I know very well the difference between a collection of axioms and a collection of models of which those axioms are true, thank you.
A lot of people seem to have trouble imagining what it means to consider the hypothesis that SS0+SS0 = SSS0 is true in all models of arithmetic, for purposes of deriving predictions which distinguish it from what we should see given the alternative hypothesis that SS0+SS0=SSSS0 is true in all models of arithmetic, thereby allowing internal or external experience to advise you on which of these alternative hypotheses is true.

komponisto:

Then why do you persist in saying things like "I don't believe in [Axiom X]/[Mathematical Object Y]"? If this distinction that you are so aptly able to rehearse were truly integrated into your understanding, it wouldn't occur to you to discuss whether you have "seen" a particular cardinal number.
I understand the point you wanted to make in this post, and it's a valid one. All the same, it's extremely easy to slip from empiricism to Platonism when discussing mathematics, and parts of this post can indeed be read as betraying that slip (to which you have explicitly fallen victim on other occasions, the most recent being the thread I linked to).

Eliezer:

I don't think people really understood what I was talking about in that thread. I would have to write a sequence about

  • the difference between first-order and second-order logic
  • why the Löwenheim-Skolem theorems show that you can talk about integers or reals in higher-order logic but not first-order logic
  • why third-order logic isn't qualitatively different from second-order logic in the same way that second-order logic is qualitatively above first-order logic
  • the generalization of Solomonoff induction to anthropic reasoning about agents resembling yourself who appear embedded in models of second-order theories, with more compact axiom sets being more probable a priori

Occam's Razor?

  • how that addresses some points Wei Dai has made about hypercomputation not being conceivable to agents using Solomonoff induction on computable Cartesian environments, as well as formalizing some of the questions we argue about in anthropic theory
  • why seeing apparently infinite time and apparently continuous space suggests, to an agent using second-order anthropic induction, that we might be living within a model of axioms that imply infinity and continuity
  • why believing that things like a first uncountable ordinal can contain reality-fluid in the same way as the wavefunction, or even be uniquely specified by second-order axioms that pin down a single model up to isomorphism the way that second-order axioms can pin down integerness and realness, is something we have rather less evidence for, on the surface of things, than we have evidence favoring the physical existability of models of infinity and continuity, or the mathematical sensibility of talking about the integers or real numbers.

Yes. Very words.

Calibration:

  • Factfulness: Ten Reasons We're Wrong About the World - and Why Things Are Better Than You Think.
  • lesswrong threads?

Brain parts

Rationality and Alcohol link:


Closet Survey #1 link

Suppressing the moral gag reflex is hard to do.

I think many philosophical questions would be clearer, or at least more interesting, if we reconceptualized death as "Persistent Mineral Syndrome".

mere addition paradox (repugnant conclusion): more people who are less happy vs. fewer people who are happier.

Parfit observes that i) A+ seems no worse than A. This is because the people in A are no worse off in A+, while the additional people who exist in A+ are better off in A+ compared to A (if it is stipulated that their lives are good enough that living them is better than not existing).

Next, Parfit suggests that ii) B− seems better than A+. This is because B− has greater total and average happiness than A+.

Then, he notes that iii) B seems equally as good as B−, as the only difference between B− and B is that the two groups in B− are merged to form one group in B.

Together, these three comparisons entail that B is better than A. However, Parfit observes that when we directly compare A (a population with high average happiness) and B (a population with lower average happiness, but more total happiness because of its larger population), it may seem that B can be worse than A.

Thus, there is a paradox. The following intuitively plausible claims are jointly incompatible: (a) that A+ is no worse than A, (b) that B− is better than A+, (c) that B− is equally as good as B, and (d) that B can be worse than A.
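The comparison chain can be checked with made-up numbers (the happiness levels below are illustrative, not from Parfit):

```python
# Hypothetical populations: each is a list of per-person happiness levels,
# all positive, so every life is worth living.
A = [100] * 10                    # small population, very happy
A_plus = [100] * 10 + [40] * 10   # A plus extra people with good-but-lesser lives
B_minus = [75] * 10 + [75] * 10   # both groups levelled to equal happiness
B = [75] * 20                     # the two groups of B- merged into one

def total(pop):
    return sum(pop)

def average(pop):
    return sum(pop) / len(pop)

# (i) A+ seems no worse than A: nobody from A is worse off, extras have good lives.
# (ii) B- seems better than A+: higher total AND higher average happiness.
assert total(B_minus) > total(A_plus) and average(B_minus) > average(A_plus)
# (iii) B is the same distribution as B-.
assert total(B) == total(B_minus) and average(B) == average(B_minus)
# Yet B has lower average happiness than A, which is why B can seem worse than A.
assert average(B) < average(A)
```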


7:27am 12.18.2018

  • mere addition paradox (repugnant conclusion)

A.I. Constraining Principles.

P(Win | Opponent A) > 50%. P(Win | Opponent A) = 50% + p_a.

Take N ~= 1000 opponents.
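A minimal sketch of estimating p_a from N ≈ 1000 games; the true edge and the simulation itself are hypothetical:

```python
import random

random.seed(0)

N = 1000          # number of opponents, as in the note
true_p_a = 0.05   # hypothetical skill edge: P(Win) = 0.5 + p_a

# Simulate one game against each of the N opponents.
wins = sum(random.random() < 0.5 + true_p_a for _ in range(N))
win_rate = wins / N
p_a_hat = win_rate - 0.5

# Rough 95% interval for the estimate (binomial standard error).
se = (win_rate * (1 - win_rate) / N) ** 0.5
print(f"estimated p_a = {p_a_hat:.3f} +/- {1.96 * se:.3f}")
```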

How do you measure the enjoyment Opponent A gains out of the game/challenge/exercise?

What fraction of opponents returns for another round?

P(Opponent A returns for game within C_t time | Game Is Loss) = P(Opponent A returns for game within C_t time | Game Is Win).

That is insufficient.
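One way to sketch that comparison, over a hypothetical session log (the data and the helper are made up):

```python
# Hypothetical session log: (won_last_game, returned_within_C_t)
sessions = [
    (True, True), (True, True), (True, False), (True, True),
    (False, True), (False, False), (False, True), (False, False),
]

def return_rate(log, won):
    """Empirical P(return within C_t | last game won == won)."""
    outcomes = [ret for w, ret in log if w == won]
    return sum(outcomes) / len(outcomes)

p_return_win = return_rate(sessions, won=True)    # 3/4
p_return_loss = return_rate(sessions, won=False)  # 2/4

# The note's constraint: losing should not drive players away, so the
# two conditional return rates should be (approximately) equal.
gap = abs(p_return_win - p_return_loss)
print(f"win: {p_return_win:.2f}, loss: {p_return_loss:.2f}, gap: {gap:.2f}")
```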

How do you contrast that with the mental and physical health of the individual opponents?

Injecting heroin into the opponent may result in a > 80% retention rate, but this does not benefit the opponent.

P(Opponent A never plays again) = p_r. Constrain p_r < 0.1-0.01.

P(Opponent A feels Misery > v_m from Loss).

Measure how much of a "putdown" losses feel. Measure how much enjoyment was drawn from the game. Loss or Win. Need to combine both.

High enjoyment frequency doesn't excuse rare high magnitude misery.

E_f: enjoyment feeling. 0 - 10. M_f: misery feeling. 0 - 10. Not independent.

Wish the Opponent to be sufficiently entertained and not overly put down.

P(E_f, M_f) should concentrate above e_min, and below m_max.

P(E_f > e_min, M_f < m_max) > 95%... or not. Depends on how long you already played and if you shouldn't take some time off for real life.
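A sketch of checking that joint constraint empirically; the thresholds and the negatively correlated survey samples are invented:

```python
import random

random.seed(1)

E_MIN, M_MAX = 3.0, 7.0  # e_min, m_max from the note; values are assumptions

# Hypothetical post-game survey samples: (enjoyment, misery), each on 0-10,
# negatively correlated, since the note says they are not independent.
samples = []
for _ in range(10_000):
    e = random.uniform(0, 10)
    m = max(0.0, min(10.0, 10 - e + random.gauss(0, 1)))
    samples.append((e, m))

# Empirical estimate of P(E_f > e_min, M_f < m_max), to compare against
# the 95% target.
ok = sum(1 for e, m in samples if e > E_MIN and m < M_MAX)
print(f"P(E_f > {E_MIN}, M_f < {M_MAX}) ~= {ok / len(samples):.3f}")
```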

How much total time does Opponent A have in a day for the game s.t. his (real-life) overall productivity doesn't drop and his relationships with others don't degenerate?

A_t: Daily availability time. Eventually sleep is way more important than more gaming. Build in notices for "exceeding healthy timing" ... "increasing difficulty to discourage".
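A hypothetical shape for those notices (the thresholds and escalation steps are assumptions, not a spec):

```python
# Daily-time guardrail: first a notice, then escalating friction, once
# play time exceeds the player's healthy daily budget A_t (in minutes).

def session_policy(minutes_played_today: float, A_t: float) -> str:
    """Return the action to take given today's accumulated play time."""
    if minutes_played_today < A_t:
        return "normal play"
    if minutes_played_today < 1.5 * A_t:
        return "notice: exceeding healthy timing"
    # Past 1.5x the budget, discourage rather than forbid.
    return "increase difficulty to discourage"

assert session_policy(60, A_t=120) == "normal play"
assert session_policy(150, A_t=120) == "notice: exceeding healthy timing"
assert session_policy(200, A_t=120) == "increase difficulty to discourage"
```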

Want to increase the Quality of the experience, not perturb its average duration.

Will is a finite, variable parameter OF THE MODEL and users.

It's not about doing it right. It's about not doing it wrong. Each way of failure has to be addressed.

Are the natural heuristics and biases necessary?