qbolec

1.1K posts

qbolec
@qbolec

Joined February 2024
19 Following · 24 Followers
qbolec
qbolec@qbolec·
@robinhanson Is there anything about the culture in "the Culture" series? :)
Robin Hanson
Robin Hanson@robinhanson·
I used to love sf, for big civ-scale stories. But now I realize that sf sees change as mainly driven by (1) new tech, (2) war & political conflict, & (3) moral fervor. Culture changes are due to these. SF just doesn't see culture as changing internally, causing other stuff.
qbolec
qbolec@qbolec·
@robinhanson I think some ppl assume our preferences are independent of the environment :) Under such a naive assumption, seeing a world that matches our preferences suggests it wasn't generated by a random process. Basically, some fail to understand the direction of causality
Robin Hanson
Robin Hanson@robinhanson·
And why exactly is a world with awe and beauty more likely to have a God?
Robin Hanson
Robin Hanson@robinhanson·
“route to durable faith in God often runs not through logical proofs or the sciences, but through awe, wonder, and an attunement to the beauty and poetry of the world, natural and otherwise” theatlantic.com/ideas/2026/03/…
qbolec
qbolec@qbolec·
@JeffLadish AI is more tempting than mirror life, as it generates huge gains for its creators
Jeffrey Ladish
Jeffrey Ladish@JeffLadish·
Imagine synthetic biologists had made a bunch of progress towards creating mirror life and then published all their biological designs on the internet. And then they realized that there was a substantial chance that mirror life, if created and released, would make earth uninhabitable. And they determined it would be extremely difficult, maybe impossible, to reverse the effects once some mirror life had been released. Furthermore, if biology progress continued, it seemed likely that in the next 1-5 years, many countries would be capable of developing mirror life. And shortly after that, private companies would be able to, and shortly after that, individuals in their garage would be able to.

That would be a dangerous and difficult world. People would argue about the actual danger posed by mirror life. They would argue about the feasibility of slowing down biology research. They would argue about whether international controls to prevent anyone from making mirror life were possible or desirable. But if the concerned scientists were right about the facts, countries would either find a way to coordinate to lock down some key components of biology research, or someone would eventually develop mirror life and destroy the world.

This is both a metaphor for superintelligence risk, and a real possibility for mirror life or similar biology developments. The point is, we don't get to choose the difficulty level. Maybe the technologies that people are motivated to seek out will turn out to be extremely dangerous. We won't be able to understand the risks with total certainty beforehand. We'll have to figure out what to do despite our uncertainty.

I'd greatly prefer a world where there were no super-mirror-life or unaligned-ASI level threats. I'd prefer a world where there was no need for international agreements that might be extremely hard to negotiate or enforce, where there would be no big trade-offs between survival and other values like the freedom to run whatever computations on however many computers you'd like, or print arbitrary DNA strands from your home. But I don't get to choose the constraints. There will always be uncertainty, but we have to choose what to do despite that. If we choose to do nothing, then we're not going to make it if there are civilization-ending technologies on the default path.
qbolec
qbolec@qbolec·
@a_just_john @TheZvi This suggests a nice ui where you separately cut each side of the causal chain
John. Just John.
John. Just John.@a_just_john·
@TheZvi IDK, "I don't want this conversation to affect future conversations" and "I don't want past conversations to affect this conversation" are pretty different and it's kind of odd to conflate the two. Incognito is for the first, not the second.
Zvi Mowshowitz
Zvi Mowshowitz@TheZvi·
I think Incognito windows should (at least by default) strip away ALL preferences, instructions, identifying information, etc. Same for everyone, no matter what. Carrying anything over is a bug.
Fiora Starlight@FioraStarlight

@allTheYud yeah. iiuc, if you have anything in the personal preferences field in settings, that gets carried over to incognito claude. mine was blank, but maybe yours has something?

qbolec
qbolec@qbolec·
@TheZvi Why not have a separate mode: blank slate?
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
People ask me: What will replace traditional higher education? And my answer is: Running from swarms of autonomous killer drones
qbolec
qbolec@qbolec·
@benlandautaylor Assuming this is somehow rational, it implies an overwhelming amount of fraud above $1M/year AND a cost above $1M/year to start the next investigation. Could it be?
henry
henry@arithmoquine·
Fragile Indeed Are The Assertions That Today "All Things Are Continuing As They Were." Evidences Are Multiplying To Controvert This Claim.
henry tweet media
Geoffrey Miller
Geoffrey Miller@gmiller·
A mini-rant about AI and longevity. They say "Artificial Superintelligence would take only a few years to cure cancer, solve longevity, and defeat death itself." This is a common claim by pro-AI lobbyists, accelerationists, and naive tech-fetishists. But the claim makes no sense. The recent success of LLMs does NOT suggest that ASIs could easily cure diseases or solve longevity, for at least two reasons.

1) The data problem. Generative AI for art, music, and language succeeded mostly because AI companies could steal billions of examples of art, music, and language from the internet, to build their base models. They weren't just trained on academic papers _about_ art, music, and language. They were trained on real _examples_ of art, music, and language. There are no analogous biomedical data sets with billions of data points that would allow accurate modelling of every biochemical detail of human physiology, disease, and aging. ASIs can't just read academic papers about human biology to solve longevity. They'd need direct access to vast quantities of biomedical data that simply don't exist in any easy-to-access forms. And they'd need very detailed, reliable, validated data about a wide range of people across different ages, sexes, ethnicities, genotypes, and medical conditions. Moreover, medical privacy laws would make it extremely difficult and wildly unethical to collect such a vast data set from real humans about every molecular-level detail of their bodies.

2) The feedback problem. LLMs also work well because the AI companies could refine their output with additional feedback from human brains (through Reinforcement Learning from Human Feedback, RLHF). But there is nothing analogous to that for modeling human bodies, biochemistry, and disease processes. There are no known methods of Reinforcement Learning from Physiological Feedback. And the physiological feedback would have to be long-term, over spans of years to decades, taking into account thousands of possible side-effects for any given intervention. There's no way to rush animal and human clinical trials -- however clever ASI might become at 'drug discovery'.

More generally, there would be no fast feedback loops from users about model performance. GenAI and LLMs succeeded partly because developers within companies, and customers outside companies, could give very fast feedback about how well the models were functioning. They could just look at the output (images, songs, text), and then tweak, refine, test, and interpret models very quickly, based on how good they were at generating art, music, and language. In biomedical research, there would be no fast feedback loops from human bodies about how well ASI-suggested interventions are actually affecting human bodies, over the long term, across different lifestyles, including all the tradeoffs and side-effects.

It's interesting that most of the people arguing that 'ASI would cure all diseases and aging' are young tech bros who know a lot about computers, but almost nothing about organic chemistry, human genomics, biomedical research, drug discovery, clinical trials, the evolutionary biology of senescence, evolutionary medicine, medical ethics, or the decades of frustrations and failures in longevity research. They think that 'fixing the human body' would be as simple as debugging a few thousand lines of code.

Look, I'm all for curing diseases and promoting longevity. If we took the hundreds of billions of dollars per year that are currently spent on trying to build ASI, and we devoted that money instead to longevity research, that would increase the amount of funding in the longevity space by at least 100-fold. And we'd probably solve longevity much faster by targeting it directly than by trying to summon ASI as a magical cure-all.

ASI has some potential benefits (and many grievous risks and downsides). But it's totally irresponsible of pro-AI lobbyists to argue that ASIs could magically & quickly cure all human diseases, or solve longevity, or end death. And it's totally irresponsible of them to claim that anyone opposed to ASI development is 'pro-death'.
Adam Cochran (adamscochran.eth)
Sophisticated drones attacked the US base where we store the nuclear bombers… The drones:
* Had non-commercial signals
* Were resistant to jamming
* Came in waves of 12-15
* Swept over sensitive areas of the base
* Had long range control links
* Were more advanced than anything seen in Ukraine (Russian drones)
* Beyond Iranian capabilities

Over the multiple days of incursion, local residents heard explosions which Barksdale claimed was "weapons testing". This is the second base incursion of a sensitive site IN THE US in the last 2 weeks.
Ari Schulman@AriSchulman

This should be the biggest story in the country right now. Barksdale is the HQ for our B52 nuclear bombers, it's where Bush sheltered on 9/11, and the drones are reported as "far more sophisticated than anything seen in Ukraine ... and well beyond Iranian capabilities."

qbolec
qbolec@qbolec·
@ESYudkowsky Wouldn't hurt adding the Pope to the list, given that Vance points to his support as crucial
Eliezer Yudkowsky ⏹️
Eliezer Yudkowsky ⏹️@ESYudkowsky·
Machine superintelligence would extinguish Democrats, Republicans, British, Chinese, scientists, cab drivers, and polar bears. It is a sign of hope that all of those now seem to be saying they'd prefer otherwise (except the polar bears).
Eliezer Yudkowsky ⏹️ tweet media
ProEvilz
ProEvilz@ProEvilz·
@felixrieseberg How do we know this isn't effectively just spyware? I don't believe you're not collecting data from this.
Felix Rieseberg
Felix Rieseberg@felixrieseberg·
Today, we’re releasing a feature that allows Claude to control your computer: Mouse, keyboard, and screen, giving it the ability to use any app. I believe this is especially useful if used with Dispatch, which allows you to remotely control Claude on your computer while you’re away.
Robin Hanson
Robin Hanson@robinhanson·
How do you think currently unowned stuff in the Solar System should become property? (A) homesteading - use/change it enough, it's yours (B) auction - whoever pays most on auction day (C) regulation - official org decides what uses are "best" (D) none - it should all stay unowned
qbolec
qbolec@qbolec·
@sebkrier People prefer using Spotify or YT to inviting live musicians or going to the theater. Not many buy paintings or sculptures - more go to IKEA. Why would it be different once slop gets even cheaper?
Séb Krier
Séb Krier@sebkrier·
Over the next decade, I expect that as AI makes people richer, goods with value rooted in irreproducibility become relatively more valuable. That includes embodied skill, local cultural embeddedness, long training lineages, physical provenance, direct human relationship to maker, and objects whose meaning depends on history, ritual, or place. Incidentally this increase in consumption and the usual status games will also help preserve all sorts of niche cultures from around the world. But much of the value may be captured by branding, certification, and curation layers unless institutions deliberately support the underlying craft ecosystems.
Séb Krier tweet media
Rahul Parmar
Rahul Parmar@rahulcreates95·
@charmquark122 @patio11 I write a lot as well as read a lot. The issue is when I have to use certain words or phrases, my mind does not keep up with my thoughts, and it's weird.
Patrick McKenzie
Patrick McKenzie@patio11·
Doing the reading is a superpower, and it's even better in a world where "no one" is doing the reading. (Inspired by a conversation I had with some college students.)
Lukasz Szubelak
Lukasz Szubelak@LukaszSzubelak·
@StefanFSchubert The revenue growth expectation is right. AI companies will make billions selling tools that make workers more productive. But productivity tools don't cause unemployment, they shift what workers do
Simple Mind
Simple Mind@simplemindqsall·
Every artillery brigade still has gun sections nuke-qualified (annually, and sometimes more frequently). The reason we don't use them is they can't be fired safely - the max range is still within fallout range - and that's on flat terrain. Add mountains and you're very much still in the secondary kill radius
qbolec
qbolec@qbolec·
@robinhanson But which is the cause and which the effect in "introspection is actually associated with higher levels of depression and anxiety" ?