֍ Haig Հայկ ֍

4K posts


@haig

@[email protected] Inscrit le Mayıs 2007
975 Abonnements462 Abonnés
֍ Haig Հայկ ֍@haig·
“We need research on the possible use of technology to create institutions which serve personal, creative, and autonomous interaction and the emergence of values which cannot be substantially controlled by technocrats. We need counterfoil research to current futurology.” -Illich
1 reply · 0 reposts · 0 likes · 42 views
֍ Haig Հայկ ֍@haig·
@DoctorTro The @masteringdib group does that all the time, and there’s a rich history going back over 70 years to Kempner, Pritikin, and McDougall. The trick is that you also have to keep fat very low while eating high-carb, which will fix insulin resistance.
0 replies · 0 reposts · 0 likes · 40 views
DoctorTro@DoctorTro·
How do you feel about the ADA spokesperson (who previously worked for the soybean lobby) saying that people with diabetes need to eat rice, bread, potatoes, and starch?
284 replies · 58 reposts · 441 likes · 41.5K views
֍ Haig Հայկ ֍@haig·
@anabology @ketontrack I work out early in the morning and usually have pre-workout fruit and a post-workout protein shake. When trying the honey diet, would I just have more fruit/honey post-workout and wait until dinner for the protein bolus?
0 replies · 0 reposts · 0 likes · 45 views
anabology@anabology·
Will release the honey diet Q+A follow-up podcast in just a few days: if you have any last-minute questions to ask, reply here! (Increasing production quality as well, based on your recs.) Tomorrow a podcast with @ketontrack comes out that I'm excited about!
55 replies · 2 reposts · 142 likes · 11.2K views
֍ Haig Հայկ ֍@haig·
@IvanVendrov Whitehead, and process thought in general; it’s maybe not framed explicitly in terms of computational complexity, but it arrives at the same place.
0 replies · 0 reposts · 0 likes · 19 views
ivan@IvanVendrov·
why don't philosophers talk about computational constraints more? utilitarianism makes sense - if you have infinite compute and time. but you never do. you have to make decisions fast, or get eaten by those who do. where can I find computationally literate ethics & epistemology?
131 replies · 33 reposts · 688 likes · 65.8K views
֍ Haig Հայկ ֍@haig·
@QiaochuYuan Because computationalism still doesn’t explain subjective experience, especially affect and conation. It is behaviorism pushed down to the level of cognitive algorithms.
0 replies · 0 reposts · 0 likes · 23 views
֍ Haig Հայկ ֍@haig·
@ESYudkowsky These substances were traditionally just one component of ritual practices embedded in cultures that had worldviews and ways of living reinforcing a more holistic socioecological purpose. Reducing them to mere chemical mechanisms abstracted from all the rest is the problem.
1 reply · 0 reposts · 0 likes · 67 views
Eliezer Yudkowsky ⏹️@ESYudkowsky·
Among my friends who came to my attention because psychedelics had any significant impact on them, positive or negative, I would say the mean result has been overwhelmingly, heartbreakingly negative. Please seriously consider not doing drugs.
253 replies · 120 reposts · 2.4K likes · 283K views
֍ Haig Հայկ ֍@haig·
@jessi_cata @tim_tyler Intractability due to computational complexity would make it insurmountable, though. In that case a closed-form solution would necessitate closing off the inputs to the decision function. Totalitarian regimes try to do exactly that. Paperclipping via Procrustes.
0 replies · 0 reposts · 0 likes · 66 views
jessicat@jessi_cata·
@tim_tyler I doubt Gödelian results by themselves would be an insurmountable obstacle. Executable philosophy presumes the universe is computable, so it's not particularly important if the logical system can't decide the halting problem in general.
3 replies · 0 reposts · 12 likes · 1.2K views
jessicat@jessi_cata·
It is easy to interpret Eliezer Yudkowsky's main goal as creating a friendly AGI. Clearly, he has failed at this and has little hope of completing it. That's not a particularly interesting analysis, however. A priori, creating a machine that makes things ok forever is not a particularly plausible objective, and failure to do so is not particularly informative. So I'll focus on a different but related project of his: executable philosophy.

There is such a thing as common-sense rationality, which says the world is round, you shouldn't play the lottery, etc. Formal notions like Bayesianism, VNM utility theory, and Solomonoff induction formalize something strongly related to this common-sense rationality. Yudkowsky presents these as the basis for a totalizing meta-worldview of epistemic and instrumental rationality, and uses the meta-worldview to argue for his object-level worldview (which includes many-worlds, AGI foom, the importance of AI alignment, etc.). While one can get totalizing (meta-)worldviews from elsewhere (such as interdisciplinary academic studies), this (meta-)worldview is relatively easy to pick up for analytically strong people (who tend towards STEM), and is effective ("right" and "winning") relative to its simplicity.

Yudkowsky's source material and his own writing do not form a closed worldview, however. There are open problems as to how to formalize and solve real problems. Many of the more technical ones are described in MIRI's technical research agenda. These include questions about how to parse a physically realistic problem as a set of VNM lotteries ("decision theory"), how to use something like Bayesianism to handle uncertainty about mathematics ("logical uncertainty"), how to formalize realistic human values ("value loading"), and so on. Whether or not the closure of this worldview leads to the creation of friendly AGI, it would certainly have practical value: it would allow real-world decisions to be made by formalizing them within a computational framework (related to Yudkowsky's notion of "executable philosophy"), whether or not the computation itself is tractable (its tractable version being friendly AGI).

The practical strategy of MIRI as a technical research institute was to go meta on these open problems by recruiting analytically strong STEM people (especially mathematicians and computer scientists) to work on them, as part of the agent foundations agenda. I was one of these people. While we made some progress on these problems, we didn't come close to completing the (meta-)worldview, let alone building friendly AGI. With the Agent Foundations team at MIRI fired, MIRI's agent foundations agenda is now unambiguously a failed project. I had called MIRI's technical research likely to fail around 2017, with the increase in internal secrecy, but at this point it is not a matter of uncertainty to those informed of the basic institutional facts.

What can be learned from this failure? One possible lesson is that totalizing (meta-)worldviews fail in general. This is basically David Chapman's position: although he promotes "meta-rationality", he doesn't consider it to be formalizable the way rationality is. The failure of one particular attempt to construct a totalizing (meta-)worldview is Bayesian evidence in favor of Chapmanian postrationalism, but that isn't the only alternative. Perhaps it is feasible to construct a totalizing (meta-)worldview, and it failed in this case for particular reasons. Someone familiar with the history of the rationality scene can point to plausible causal factors (such as non-technical social problems) in this failure.

Two possible alternatives are that the initial MIRI worldview was mostly correct, but the practical strategy of recruiting analytical STEM people to complete it failed; or that it wasn't mostly correct, so a different starting philosophy is needed. Mostly, I don't see people acting as if the first branch is the relevant one; of the relevant orgs, Orthogonal AI is the one acting most like it believes this. My own continued commentary on philosophy relevant to MIRI technical topics shows some interest in this branch as well, although my work tends to point towards a wider scope of philosophy rather than (meta-)worldview closure.

What about a different starting philosophy? I see people saying that the Sequences were great and someone else should do something like them. Currently, I don't see an opportunity in this. Yudkowsky wrote the Sequences at a time when many of the basic ideas, such as Bayesianism and VNM, were in the water supply in sufficiently elite STEM circles and had credibility. There don't currently seem to be enough credible abstractions floating around in STEM to form a totalizing (meta-)worldview out of. This is partially due to social factors, including a decline in belief in neoliberalism and meritocracy: fewer people than before think the thing to be doing is apolitical, elite, STEM-like thinking. Postmodernism, a general critique of meta-narratives, has reached more of the elite analysts, and the remainder are more focused on countering postmodernism than they were before. And the AI risk movement has moved much of its focus from technical research to politics, and much of its technical focus from agent foundations to empirical deep learning research.

This is a post-paradigmatic stage, which may move to pre-paradigmatic (and then paradigmatic) as different abstract ideas become credible. Perhaps, for example, some credible agency abstractions will come from people playing around with and trying to understand deep learning systems. But immediately forming and explicating a new paradigm seems premature. And so I accept that the current state of practical rationality is of the kind Chapman calls "reasonableness" and "meta-rationality" (operating outside formality), though I take this to be a commentary on the current state of rationality frameworks and discourse rather than a universal. In this context, I believe more widespread interdisciplinary study is reasonable for the intellectually ambitious. arbital.com/p/executable_p…
28 replies · 18 reposts · 230 likes · 32.5K views
֍ Haig Հայկ ֍@haig·
RIP Daniel Dennett. I really loved “Darwin’s Dangerous Idea” and respectfully disagreed with a lot of “Consciousness Explained”. One of my favorite philosophers, even if my worldview isn’t fully aligned with his.
0 replies · 0 reposts · 2 likes · 383 views
֍ Haig Հայկ ֍@haig·
@WiringTheBrain They do know and care about the part of “you” at their level of scale, though, and those parts follow suit at the next scale, and so on. We’re probably in a similar situation in this great chain of Being: a possible rational basis for a spiritual worldview that is lost on the reductionist.
0 replies · 0 reposts · 0 likes · 37 views
֍ Haig Հայկ ֍@haig·
Been using ChatGPT for Feynman Technique-ing and it’s great. The potential for false information is a feature, not a bug, in this case: it reinforces the need to re-evaluate every detail closely.
2 replies · 1 repost · 5 likes · 421 views
֍ Haig Հայկ ֍@haig·
@drmichaellevin Love his work. He also participated in Principia Cybernetica, a really interesting wiki on all things cybernetics, along with Dr. Valentin Turchin, whose book “The Phenomenon of Science” hugely influenced me and comes highly recommended.
0 replies · 0 reposts · 1 like · 107 views
֍ Haig Հայկ ֍@haig·
@DavidDeutschOxf The critique shouldn’t be about profit, but about profit *maximization* with its corresponding profit motive, and what that does to the economic environment via the incentive landscape. Competing for market share to grow profit for shareholders as much as possible is not the only (or best) way.
0 replies · 0 reposts · 0 likes · 85 views
֍ Haig Հայկ ֍@haig·
@DrYohanJohn You probably did it unintentionally, but phrasing the question with “someone” instead of “something” does actually get to the core of it better. Without someone to observe, would there be a meaningful difference between something and nothing?
0 replies · 0 reposts · 2 likes · 73 views
֍ Haig Հայկ ֍@haig·
“We believe in competition, because we believe in evolution.” Evolution is also teeming with cooperation & mutualism. “Our enemy is statism, authoritarianism, collectivism, central planning, socialism.” Market socialism & left-libertarianism 👀 a16z.com/the-techno-opt…
0 replies · 0 reposts · 2 likes · 278 views
֍ Haig Հայկ ֍@haig·
@MashTunTimmy I bought the 1+1 deal (I was young and dumb). Hacked around on mine, then gave it to my nephew. I hope the one sent to Africa at least made the kid a little happy.
1 reply · 0 reposts · 48 likes · 3.2K views
mash tun@MashTunTimmy·
The 2000s were crazy. Remember when everyone thought we could transform Africa if we just sent them cheap Linux laptops?
29 replies · 87 reposts · 1.3K likes · 59.2K views
֍ Haig Հայկ ֍@haig·
@BartoszMilewski Gödel undecidability shows up within the boundaries of a local formal system with sufficient expressive power, and becomes decidable once an appropriate meta-system is added. Models that reduce everything to particle-force interactions may simply be fundamentally limited in describing all of nature.
0 replies · 0 reposts · 0 likes · 77 views
Bartosz Milewski@BartoszMilewski·
Physics is tied to mathematics, so we have to assume, by Gödel, that it must be undecidable. We should be able to come up with an experiment whose outcome cannot be derived, but which Nature "knows" how to answer.
55 replies · 25 reposts · 248 likes · 48.6K views