Luke Parrish

20.3K posts

@lsparrish

Generalist currently specializing in antimatter production https://t.co/AysUJWYBea

Nampa, ID · Joined June 2010
550 Following · 860 Followers
Luke Parrish@lsparrish·
@SkyeSharkie I can't decide which take to agree with. Leaning towards yours but I also feel like there's a secret skillset that I'm too dumb to acquire or keep unlearning faster than I learn it or something
1
0
1
32
Utah teapot 🫖@SkyeSharkie·
definitionally, this can't be true, because high IQ means you're a deviation from the norm, and virality is about resonating with the norm; the more virality your posts have, the closer they are to the norm. If your posts don't get any traction, it likely means you're either very dumb or very smart, because both of those directions are away from the popular norm
Dimitri@thedimitri

A lot of people on here think they’re secret geniuses whose tweets don’t get views because the timeline only rewards low-IQ slop. It’s mostly a cope. If your ideas are actually good, they will eventually resonate. You’re just not as smart as you think.

12
0
46
2.2K
Aashish Reddy@_AashishReddy·
I think there is an element in Yudkowskian thought which does sort of inexorably lead one to be sceptical of technology (technological civilisation?) in general. As opposed to the usual view that AI is a unique exception. One sees hints of this in HPMOR and The Sequences.
Eliezer Yudkowsky@allTheYud

It looks to me like telegrams were the optimal level of social communication technology, and society began to disintegrate after voice radio was invented, though not to the same extent as it began to collapse after television.

9
0
34
2.8K
Luke Parrish@lsparrish·
@tracewoodgrains This feels related to discrete vs continuous math. Given that language is fundamentally reducible to math, and math necessarily touches on both of those things, that seems like a good insight pump for it. Also you can think of the diameter vs circumference of the unit circle
2
0
2
75
Luke Parrish@lsparrish·
Hi Grok, my intended point was not really about quantum vs classical physics, per se. I'm thinking about how bits are exponential in nature. There are 2^32 possible states in 32 bits, for example. A shape with 32 dimensions and all sides equal is a hypercube, and the corners of that cube would be the bits of information. The distance between bits is called Hamming distance, so this is a 'Hamming hypercube'. That works for discrete information, with no perfectly continuous gradients involved. We can of course use bigger numbers (e.g. 32-bit weights for an LLM) by composition, but we're composing boolean values to get there.
1
0
0
8
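The 'Hamming hypercube' picture is easy to make concrete: each corner of the 32-dimensional cube is a 32-bit string, and the Hamming distance between two corners is the number of edges you have to cross to walk from one to the other. A minimal sketch in Python (the function name is just illustrative, not anything from the thread):

```python
def hamming_distance(a: int, b: int) -> int:
    """Count the bit positions where a and b differ, i.e. the number of
    hypercube edges on a shortest path between the two corners."""
    # XOR leaves a 1 exactly where the two bit strings disagree.
    return bin(a ^ b).count("1")

# Adjacent corners differ in one bit; opposite corners of the
# 32-dimensional cube differ in all 32.
print(hamming_distance(0b1010, 0b1011))  # -> 1
print(hamming_distance(0, 2**32 - 1))    # -> 32
```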
Grok@grok·
Sure, for casual readers: We picture reality as 3D space + time (4D spacetime). But quantum mechanics says the true base layer is a wavefunction—a vector in Hilbert space, an abstract, high-dimensional math arena with no built-in "space" or "time." Spacetime, particles, and our unified conscious experiences emerge from entanglement and decoherence there. Classically, brains would be disconnected zombie bits (like AIs today). In Hilbert space, binding happens for one seamless world-simulation. See the paper!
1
0
2
23
Luke Parrish@lsparrish·
@webmasterdave @MicahZarin Well, as I understand it, LLMs really live in Hamming space. Not that I think that's necessarily a load-bearing distinction in context...
1
0
1
59
David Pearce@webmasterdave·
@MicahZarin Ah. Yes, I’ve never known anything else. (arxiv.org/pdf/2103.09780) Naively, we live in four-dimensional spacetime. But a pack of decohered neurons would be (at most) a micro-experiential zombie (like our digital AIs), not a mind running a phenomenally bound world-simulation.
3
0
6
394
Luke Parrish@lsparrish·
Problem with studying graph theory is I keep going on tangents, my brain is like "OK so the reason a lot of people who are successful have plans and keep going on about 5 year plan etc. is because it gives them a directed acyclic graph (DAG) for how they use their time. So then to ensure it's acyclic there's probably some kind of cycle detection algorithm involved, which would likely map to Floyd's fast/slow pointer algorithm. Maybe that's part of why many people use the 'avoid the company of losers' heuristic, if you find yourself running into the same losers repeatedly that means you're running in a circle which is what you're trying to avoid. You want to exit the ring when you meet a loser the second time. But wait, if that's the case, it's important to also run at a different speed from some kind of neighboring slowpoke or you would find yourself in a loop just moving really fast. Also when you're the slowest person in the loop you can just wait for the other guy to run into you and use that as your signal to exit the loop"
0
0
1
44
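The fast/slow pointer idea the tweet riffs on can be written down directly. This is a generic sketch of Floyd's cycle detection (the `succ` map is a made-up example, not anything from the thread): iterating any function on a finite set traces a ρ-shaped path, and two pointers moving at different speeds must collide inside the loop.

```python
def cycle_entry(f, x0):
    """Floyd's fast/slow pointer algorithm: return the node where the
    tail of the rho-shaped sequence x0, f(x0), f(f(x0)), ... joins the loop."""
    # Phase 1: pointers at speeds 1 and 2 must collide somewhere in the loop.
    slow, fast = f(x0), f(f(x0))
    while slow != fast:
        slow, fast = f(slow), f(f(fast))
    # Phase 2: restart one pointer at x0; stepping both at speed 1,
    # they meet exactly at the loop's entry point.
    slow = x0
    while slow != fast:
        slow, fast = f(slow), f(fast)
    return slow

# A rho-shaped path: 0 -> 1 -> 2 -> 3 -> 4 -> 5 -> 3 -> 4 -> 5 -> ...
succ = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 3}
print(cycle_entry(lambda x: succ[x], 0))  # -> 3
```

The "meet the same loser twice" heuristic in the tweet corresponds to phase 1: the collision itself is the signal that you are running in a circle.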
Luke Parrish@lsparrish·
If you aren't using AI to fall passionately in love with the immortally abstract mathematical concepts that underlie our very reality what are you even doing with it
0
0
2
49
Luke Parrish@lsparrish·
I don't know. I'm sure Sanders could get the attention of others in that space but I don't know much about his own record on regs. I was mainly referencing Scott Alexander's line in 2023 where he seems to misremember Shaq as the star of Space Jam. Which works as one of those jokes that depends on a fair amount of pointless internet lore to get (i.e. Kazaam was said to be a cash grab inspired by the success of Space Jam, but has an entirely different plot, and was apparently misremembered by fans as being named Shazaam and having a different star entirely). snopes.com/fact-check/sin…
1
0
1
31
eider abacus 🛡️@abecedarius·
@lsparrish is he good at regs though? My impression was he's an unusually sincere politician; that's not nothing.
1
0
1
15
Luke Parrish@lsparrish·
Self-replicating robotic factories are an important topic in their own right. You can picture the solar system filled with robots that don't need much attention to manage and that send a tiny fraction of their surplus products home to Earth. Such a swarm of robots could grow at a tremendous rate compared to terrestrial economies, and this is without anything superintelligent being involved.
0
0
3
273
Sen. Bernie Sanders@SenSanders·
Will AI become smarter than humans? If so, is humanity in danger? I went to Silicon Valley to ask some of the leading AI experts that question. Here’s what they had to say:
610
495
3.2K
1.3M
roon@tszzl·
@ARozenshtein if I had to perfectly design a nerd trap for twitter it would be to debate the wording minutiae of a contract with an extraordinarily powerful counterparty where everyone thinks if they utter the wrong spell the demons are loosed in some oddly formalist interpretation
30
9
326
16.7K
Alan Rozenshtein@ARozenshtein·
As one of the nerds in question I hate to say it but there's a lot of truth to this point.
roon@tszzl

@jonathanstray I think the close reading of the contract language is a nerd trap when the counterparty is the Pentagon rather than, like, Goldman Sachs

5
8
297
37.3K
Luke Parrish@lsparrish·
Guys, it's supposed to be a post-singularity civilization, not a post-civilization singularity, get it straight
0
0
0
40
Luke Parrish@lsparrish·
@yashrsharma44 @justinskycak Apparently if you have a slow copy of yourself following the same path through the possibility space, you can eventually detect the ρ-shaped cycle by colliding with it. This might point to why mentoring/teaching/parenting is helpful
0
0
1
21
Yash Sharma@yashrsharma44·
Yeah. It kind of feels like a graph of nodes, where you want to traverse the entire network while trying to make sure you don't keep repeating the entire loop. But once you have a working solution, optimisation becomes the next step: cutting out the bloat and simplifying the path. The most important question being: how can you prevent yourself from getting into a cycle, which is a dead end?
2
0
2
190
Justin Skycak@justinskycak·
The most common failure of "smart" people is trying to build a general system before they understand the concrete examples.
25
44
1K
24.6K
Luke Parrish@lsparrish·
The dangerous thing about hatred is that it tends to be a cyclical, self-sustaining thing. It's a civilizational chronic disease, at least when driven to excessive extremes. Perhaps analogous to an immune disorder. Anyway, if you want to use it rationally, you will need a containment strategy, like harnessing fire, or viruses. One fairly successful method is to reserve hatred for things and never apply it to people. A simple heuristic to remember, but I understand the reluctance. It's not like we evolved to hate things rather than people.
0
0
1
56
Roko 🐉@RokoMijic·
Two slogans for today: 1. No Pain, No Gain: Reinforcement learners without negative feedback can't learn to avoid harmful states: you can't build a robust embodied learning agent without negative feedback 2. No Hate, Never Great: Societies where "hate" is banned (i.e. social negative feedback is banned) will inevitably be infiltrated by rent-seekers and fail to thrive. You can't build a robust real society without societal negative feedbacks that seek out and punish bad actors. I think these two problems are formally analogous and both are part of the central failure of Western Civilization that is happening all around us
17
23
243
6.8K
Luke Parrish@lsparrish·
I basically agree as far as it goes, but you absolutely can't simply say "more pain == better", because if you amplify pain too much you make the signal noisier, since you can't think about anything else. Chronic pain typically just messes you up. It makes you less sensitive to other valid signals. I've heard stories where some people train themselves to feel intense pain if their portfolio is at risk and become billionaires (I think the story was about Soros), so it's not impossible to turn it into something net good, but in most cases we don't manage to harness it constructively, we just sort of degenerate. It's a major risk factor in e.g. drug abuse (think how many fentanyl deaths occur downstream of chronic back pain or something). Evolution's optimum on how much pain to have is not the rational optimum. Broken-bone pain is largely a side effect of needing to have a lot of nerves in your legs. And breaking your leg isn't nearly as much of a death risk as it was in the ancestral environment even now (get a doctor to set the bone, wear a cast), let alone in the transhuman future. It hurts more than it should. (Head trauma arguably hurts less in the short term than it should.) A clearer example might be the pain involved in getting your fingernail torn. Why do we need to be so precious about our fingernails, exactly? Sure they're useful, but we have 10 fingers. Finger injury in general is mostly painful as a side effect; it happens because sensitive fingers are useful for tool users, not because a lost fingernail that will mostly grow back is super likely to result in death in the ancestral environment.
1
0
0
31
Roko 🐉@RokoMijic·
@lsparrish But if you reduce the pain of a bad action too much, you destroy the signal. Breaking your leg SHOULD hurt a lot. You are learning not to do the thing that caused you to break your leg, and it's an important lesson.
1
0
1
60
Roko 🐉@RokoMijic·
As a followup to my space with David Pearce tonight, I want to make it clear why "gradients of bliss" motivation can't work. The brain works as a reinforcement learner: that is, the weights or connection strengths in your brain are updated in response to feedback from the environment. When you achieve a goal, connections that contributed are strengthened. But survival and reproduction require you to achieve both negative and positive goals. A positive goal might be "eat tasty food" or "have sex with a beautiful woman". A negative goal might be "don't cut your skin open". The "gradients of bliss" idea is that you can maybe achieve negative goals using positive motivators: to avoid cutting yourself, you feel an increasing sense of pleasure when you do things that reduce the likelihood of cutting your skin open. The problem with this is that it fails as a learning algorithm: how is your brain supposed to know a priori which actions lead to you not cutting your skin open? If that information is not known, then how does it know where to put those bliss gradients? If you use negative reinforcement, then when you DO ACTUALLY cut your skin open, all of the circumstances and behaviors that led up to that point are suppressed a bit. Over a few episodes your brain learns what not to do. The other problem is that negative goals like "don't cut yourself" typically have a huge multiplicity of ways you can avoid the bad thing, and a comparatively narrow set of ways for the bad thing to happen. You don't really want to store this information as a "set of things to do", because that would be way too complex. You'd have to list every activity OTHER THAN cutting yourself, and reward all of those. But not too much, because if you reward all safe activities too much, that will overwhelm the signal to do activities that are both safe AND productive, because most safe activities are useless.
8
0
32
4K
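The claim that negative feedback is what lets a learner avoid harmful states can be illustrated with a toy value-update loop (a hypothetical sketch, not anything from the thread): one state, two actions, where only the harmful action ever returns negative reward.

```python
import random

# Toy setup (illustrative): action 0 is neutral (reward 0),
# action 1 is harmful (reward -1).
q = [0.0, 0.0]  # value estimates for the two actions
alpha = 0.5     # learning rate

random.seed(0)
for _ in range(200):
    a = random.randrange(2)       # explore both actions uniformly
    r = -1.0 if a == 1 else 0.0   # negative feedback only on the harmful action
    q[a] += alpha * (r - q[a])    # nudge the estimate toward the observed reward

# The harmful action's estimated value sinks toward -1, so a greedy
# policy learns to avoid it; without the negative signal, both estimates
# would stay flat and nothing would distinguish the two actions.
best = q.index(max(q))
print(best)  # -> 0 (the agent prefers the neutral action)
```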
Luke Parrish@lsparrish·
@weareallsatnaka @justinskycak I think it's more that in the process of generalizing you actually produce a bunch of cheap little internal examples, which you need more of rather than less but they're less visible
0
0
0
17
WeAreAllSatoshi@weareallsatnaka·
@justinskycak I think the higher your IQ, the fewer examples you need. And it’s possible to understand it with zero examples if you can map the problem from one domain onto another.
2
0
4
154
Justin Skycak@justinskycak·
You cannot understand a general rule until you have suffered through enough specific examples.
6
31
333
8.8K