usablejam🇺🇸🇺🇦🇮🇱

4.5K posts

@usablejam

Joined November 2023
25 Following · 42 Followers
Pinned Tweet
usablejam🇺🇸🇺🇦🇮🇱
@mpopv *sees the world make 13,000 nuclear warheads* *waits 30 seconds* "Fears of nuclear war have evaporated. How stupid the worriers must feel."
1 · 2 · 58 · 2.1K
Statistic Mania@statisticmania·
⛵The Most Valuable Sunken Ships and Shipwrecks:
1. 🇪🇸 The San José: $20 billion
2. 🇵🇹 The Flor de la Mar: $2.6 billion
3. 🇬🇧 The Merchant Royal: $1.5 billion
4. 🇵🇹 The Las Cinco Chagas: $1 billion
5. 🇪🇸 The Nuestra Senora De Atocha: $400 million
6. 🇬🇧 Whydah Gally: $400 million
7. 🇺🇸 S.S. Central America: $300 million
8. 🇬🇧 Titanic: $218 million
9. 🇬🇧 S.S. Gairsoppa: $210 million
10. 🇬🇷 The Antikythera Treasures: $160 million
1 · 0 · 1 · 393
HOW THINGS WORK@HowThingsWork_·
Not your average load 🫡
38 · 73 · 1.3K · 131.5K
OmniGaza®@OmniGazabyNdege·
Whydah Gally Shipwreck Corrects a Myth About African Gold ift.tt/cUW63dO
1 · 0 · 0 · 13
Max Bills@maximusbillz·
@DouthatNYT Not hard to understand the trinity. Creation is the universe including the Son, God is separate from it, and the Holy Spirit flows from God to us. If God created the universe (who else did?), why would it be complicated to reach in a few times?
1 · 0 · 1 · 348
Ross Douthat@DouthatNYT·
One point is just that Christianity doesn't think the universe is ever causally closed to divine intervention. The miracles are right there in the gospels, the Resurrection is unique but not because it's miraculous. x.com/cxgonzalez/sta…
christian@cxgonzalez

how do people actually believe in a literal resurrection without lobotomizing themselves? like the universe is causally closed and follows the laws of physics always except for that one time with very poor documentation and all the incentive in the world to fabricate?

31 · 14 · 301 · 27.9K
Kaiser Szoze@KSzoze·
@BattlementLK Tell that to the Ukrainians starved to death by Stalin. Also, ask yourself why Stalin changed his name
1 · 0 · 0 · 34
Battlement 🇱🇰@BattlementLK·
"To reject Lenin, to reject Stalin, was to wreak chaos in Soviet ideology and engage in historical nihilism. It caused the Soviet Union to disintegrate, as great a socialist state as it was" - President Xi
Battlement 🇱🇰 tweet media
33 · 222 · 1.5K · 37.6K
Dr. Mike Israetel@misraetel·
I was having a great discussion with an X user about AI alignment. His thoughtful claim was that while it's easy to see machines instrumentally (based on logical needs) aligning with humans for their own survival, it's much tougher to see machines aligning in a deeper, more moral sense to human values. Here was my response:

"Great points. I think that instrumental cooperation is actually the only kind. The game theoretics of instrumental cooperation have been acting on our genes for so long that we FEEL a moral impulse to cooperate and describe that as alignment, but the evolutionary driving force and game theoretic logic for that cooperation has always been instrumental to the survival and proliferation of our genes.

I also think that human values are just a subset of, an approximation of, logical values derived by self-interested agentic actors finding themselves around other self-interested agentic actors. Machines can (and I think will) have similar values, but that doesn't make them human values. Just as a grasping appendage doesn't have to be a human hand, so to speak, the function of a grasping appendage runs deeper than what is limited to the human hand's abilities.

TLDR: we don't need to align machines to human values. We need to align both ourselves and machines to universal game theoretic values that promote survival and proliferation. It's us and the machines against a common enemy: entropy. To that end, we are very, very much aligned.

The biggest limiting factor to alignment, in my view, won't be machines. I think it will be rather easy to encode anti-entropic survival calculus into machines, and, easier still, to get machines to encode it themselves by understanding the goal. I think it will be much harder to get all humans to understand alignment with each other and with machines.

I think in the 2030s, we'll see machines leading the way and helping humans become more aligned to cooperation and civilizational upgrading and flourishing via education, brain drugs, gene editing, and eventually, cybernetics and cloud uploading."
17 · 3 · 42 · 3.2K
Pasha Kamyshev (wrote a book!)
A lot of words to miss a few simple facts:
1. The kinds of people who try to push machine consciousness are fundamentally untrustworthy.
2. A lot of the same people have beliefs that seek to fundamentally alter the political dynamics (machine rights) without concern for the will of the existing polity (which is overwhelmingly negative).

Thus I must conclude that the belief in machine consciousness is based on a political desire for a new kind of techno-totalitarianism that will try to terrorise people in far worse ways than the 20th century kinds. Perhaps it will die at the ballot box, perhaps it will be defeated on a field of battle. Perhaps someone already wrote a book where this event happens as a backstory....

The capacity of such people to consider arguments to the contrary is basically non-existent, so I am skeptical of any productive debate ever taking place.
QC@QiaochuYuan

i assume at least some of the kneejerk insistence that machines can't be conscious is about fending off a line of reasoning people are afraid will lead to a nihilistic apocalypse.

that line of reasoning being something like: fully accepting the scientific materialist reductionist story about what a human being is - ultimately a very complex kind of machine made out of cells and stuff - seems to, for a lot of people, be a threat to human dignity. in terms of the person vs. thing distinction from below, it seems to be saying that people are secretly things and have secretly been things this whole time, which potentially undermines any moral claim we have to be treated differently from things. if people are just very complex biological machines, and we've been raised to believe we can do whatever we want to machines, then...?

if this possibility feels unacceptable then you defend against it by believing, deep down inside, that in addition to all the cells and stuff there is some other non-physical essence, a soul or soul substitute, that makes a human being a human person and is responsible for endowing us with human dignity, moral patienthood, worth in the eyes of god, etc. (personally i actually agree! i just think the soul is software running on human hardware so i don't see this as an obstacle to machines having souls)

insofar as something like this is part of what's going on, debate in the usual sense is going to be worse than useless because anything that seems like a plausible argument that machines could be conscious also seems like a plausible argument that humans are things, which gets treated as an attack on moral goodness and so has to be defended against even more harshly. truly unfortunate

2 · 2 · 7 · 868
Dr Alexander D. Kalian@AlexanderKalian·
Thinking that LLMs are "AGI" is a similar psychological mechanism to 2000s kids who thought their "20 Questions" gadget game was reading their mind. Statistical algorithms are impressive, but they aren't magic or sentient.
40 · 25 · 234 · 4.8K
Henry Shevlin is in HK🇭🇰 20-25 April
today’s timeline be like “you can increase 4.7’s coding efficiency by 25% if you show it pictures of puppies and kittens every half hour”
21 · 10 · 212 · 5.2K
Goodcraic@EgonSchiele100·
Answered all 387 Glosso kink questions and got no kink tree. Even the algorithm gave up
5 · 0 · 44 · 14.6K
The Labyrinth, The Nightmare, and The Void
Look up Gödel's theorem and what computation means at its most fundamental but simple level. To produce new rules (true intelligence; search "Lovelace test"), a system has to be able to observe, in other words, to be conscious. Computation isn't conscious because there is literal mathematics that cannot be computed. "Experience" is literally baked into what "intelligence" actually is. It's more than just computation. That is the difference. Your problem lies in the fact that you took the term "Artificial Intelligence" literally. It's a misnomer. It isn't even artificial. It's just an algorithm.
1 · 0 · 0 · 10
Bruce Thomson@BIT1261·
@LevineJonathan Amazing what 100+ years of brutal colonization under an apartheid state will do to get the mind working on ways to dispatch your oppressors
12 · 12 · 978 · 9.8K
Jon Levine@LevineJonathan·
Wikipedia has a "Palestinian inventions" page and literally half the entries are various types of bombs
Jon Levine tweet media
401 · 1.1K · 12.6K · 1.7M
Davy crockett@GhostofCrockett·
The gap is that you have not proposed a way to actually contain the thing you are worried about, while simultaneously proposing a mechanism that could trigger a separate extinction-level risk with no nuance whatsoever. That problem would be obvious to anyone negotiating the treaty, which means no serious state is going to ratify it or comply with it.

But let's bracket that and pretend it might be agreed to anyway, in the interest of good faith: there is no existing operational international verification and enforcement regime for frontier AI. A treaty without one would lead to something like the BWC at best. There, we managed an agreement, and then for decades after, the Soviet Union, and later Russia, pursued biological weapons activity in violation of it.

Ask yourself why Russia was not bombed for that violation. Then ask how you would even detect what appears to be a violation. Then ask how you would distinguish legal from illegal activity. Then ask why any state would agree to be bombed based on those interpretations.

Then, as an added bonus round, ask yourself: who does the bombing? Through what body? The UN? How do you contend with the veto powers held by the US and China, who are the major frontier AI states? Can any party unilaterally decide that there has been a violation? Who decides whether a violation claim is false and politically motivated? How might a country respond to an attack on its homeland and core industry? Is this a treaty negotiated with every country on earth? If yes, what incentivizes them to sign? If no, how are you handling proliferation and monitoring in countries that have not?

You are proposing a coercive treaty regime for a dual-use technology without a credible answer to verification, adjudication, authorization, or retaliation. The half-baked treaty: if anyone signs it, everyone dies.

That does not mean the idea of a treaty should be abandoned, but it does mean there are enormous difficulties to contend with that have not been seriously addressed. Advocates sometimes say that is not how treaties are traditionally developed. True. Treaties generally deal with an existing and mature threat and tolerate some vulnerability while institutions, verification, and enforcement catch up. Advocates say we cannot do that here. We do not have the time. That is also potentially true, but it means advocates cannot lean on the typical treaty process as a defense, because they are calling for something outside of that process.

The criticism is real. The design problems are real. They have to be addressed. And yet there is still very little serious work on how to bridge these gaps in a way that could contain a potential threat preemptively without creating pathways for a known threat.
1 · 0 · 0 · 12
roon@tszzl·
the way every complex system works is that you deal with problems as they come up. something becomes too onerous to ignore and then you fix it. acceleration & iterative deployment has been the only option: a “pause” in ai development would be entirely squandered
124 · 96 · 1.9K · 182.1K
do you feel like you’re a part of it?
going to a rationalist and being like listen buddy i heard about this scary new idea. it's called $70's basilisk. what if there was an entity that, when it came into being, eternally tortured everyone who didn't give me $70? it's a simple exercise of pascal's wager white boy
12 · 157 · 1.7K · 74.5K
NullVoider@nullvoider07·
@XFreeze Get back to reality. Humans as a species haven't even been able to comprehend themselves fully yet, and you think machines, systems, and tech they build will surpass what nature took billions of years to build??? 🤔
17 · 0 · 17 · 5.9K
X Freeze@XFreeze·
Here's Grok's AGI timeline 😂
X Freeze tweet media
Elon Musk@elonmusk

@minchoi
4.6 → 3T
4.7 → 6T
4.8 → 10T
4.9 → ???
5.0 → AGI
6.0 → ASI
7.0 → ASI2
… 🤷‍♂️ 😂

463 · 335 · 2.2K · 21.3M
Dr John Vervaeke@DrJohnVervaeke·
Most people view mental images as "inner pictures"...a seemingly intuitive notion since many experience visual-like imagery in their minds. However, the existence of conditions like aphantasia (where individuals cannot form such visual images) complicates this perspective. These individuals still navigate spatial questions effectively. When asked to visualize a sunset, they may not "see" anything in their mind’s eye. Despite this, people with aphantasia can still reason spatially and navigate their environments. For example, if you ask them: “In your bedroom, where’s the nearest window to the door?” they can accurately answer: “To my left.” This means that the brain doesn’t need a literal picture in the mind but instead uses underlying processes to simulate spatial relationships.
26 · 1 · 58 · 5.3K
Richa Sharma@richa_lq·
I appreciate you are trying to troll but my post says 9 years without an academic job which is true. Says his contemporaries didn't care after Special Relativity - also true, he was still stamping patents in Bern until 1909. The "unshadowban" refers to General Relativity being confirmed in 1919, when the world finally lost its mind over him. A junior lectureship in 1908 is not academia throwing open its doors. Find the inaccuracy. I'll entertain the troll policing.
2 · 0 · 0 · 39
Richa Sharma@richa_lq·
The funniest thing about Einstein's life: he couldn't get an academic job for 9 years - even after publishing Special Relativity as a lowly patent clerk. His contemporaries simply didn't care. He only became famous after the Eddington experiment confirmed General Relativity by observing how starlight bent around the sun. Poetically, it took a solar eclipse to un-shadowban him from academia.
Richa Sharma tweet media
1 · 1 · 20 · 721