Biobayes

6K posts


@BioBayes

Shaping messy biology into structured datasets.

Joined July 2010
577 Following · 395 Followers
Biobayes
Biobayes@BioBayes·
@StealthQE4 This misinterpretation of what he said will really hurt his third term.
0
0
7
72
Biobayes retweeted
planefag
planefag@planefag·
As someone who's been writing military science fiction for years, and who has many friends in or formerly in the military (some of whom are authors themselves), I have something to say about this: if all Yoshiyuki Tomino has to say with his art is that "war is bad," then he should stop making art, as he's only going to waste our time. Any fool with two brain cells to rub together knows that war is ugly, brutal and costly. That doesn't mean war is pointless and should never be fought no matter the circumstances. In fact, such a statement is worse than pointless, as lethal conflict is a constant of human civilization - and, for that matter, a constant among the vast majority of life on Earth, even between bacteria.

If all your story does is shout "this is bad!", it's a childish lament that leaves a tremendous amount of this constant of human existence unexamined. Who fights wars - elites, like the ancient Greek hoplites or the knights of the Middle Ages, or common men who volunteer, as in many modern nations? What do they fight for - the ideals of their beloved nation, honor and glory, or to save the women and children in the city that stands at their backs? What defines a good soldier? What defines a good leader? These questions are just as essential for us as they were for our forefathers, because the world is a tumultuous place full of evil people and great dangers, and the time is coming, sooner than many may think, when wars between great powers will shake the foundations of the world and the lives of millions will hang in the balance. To explore questions like this, of such import to our souls, is one of the core reasons people tell stories to begin with. And our tools and machines have always been essential to the conduct of war and the defense of all we hold dear.
Men have told stories of talking swords, or "tsukumogami," for as long as swords have existed - long before we could even conceptualize that a thinking machine might be made with science, we dreamt of them existing through magic or spirit. Tools are what first brought us out of the trees to stride the earth as its masters; in the tools we shape and wield with our own hands we make manifest our intent, our will, our spirit. In the modern age, the vastness of our creations sometimes makes it easy to forget, but the human element is still the entire point.

I quote from page 71 of "Shattered Sword" by Jonathan Parshall and Anthony Tully: "The study of naval warfare (more than any other form of combat) holds the potential to completely subordinate the human element to the weapons themselves. Naval combat is conducted almost exclusively by means of machines – machines that are in many cases so huge and grand that they often seem to take on a life and personality of their own that transcend the tiny figures that inhabit them. Yet, in the final analysis, it is men who live in the ship, command and fight the ship, and often die in the ship. Their story, no matter how seemingly eclipsed by the great vessels they serve in, is still the fundamental story to be related.”

It's only natural we should be entranced with the great machines of war that we build: they're the final product of the genius and labors of an entire society, fashioned into an incredible tool that is nothing unless wielded by the hand of a skilled warrior devoted to his craft and his mission. I know of hardly a single mecha story that runs afoul of Parshall and Tully's warning as quoted above; everyone seems to understand the assignment. The ones that don't are the likes of Tomino, or his fellow anti-war traveler Miyazaki.
I can't understand a man who thinks fighter planes are beautiful but has little more to say about war than "it's bad." He refuses to see that the beautiful form of a fighter plane follows its function, and that there's a savage, primal beauty in that function, like the fury that animates a thunderstorm. Or the fury and purpose that animate its pilot, for that matter. Tomino seems to think that "nothing of substance is getting across." I disagree. I think the substance came across very well, and many in younger generations just think that substance is woefully lacking.

There's a cutscene in Knights of the Old Republic, between Carth Onasi and Canderous, where Carth expounds on the difference between "soldiers" and "warriors," defining warriors as those who fight for plunder and the glory of conquest, and soldiers as those who fight to protect their nation and peoples - usually from warriors. He made a great point, but Canderous wasn't entirely wrong. As any fighter pilot can tell you, you need more than noble motivations to sacrifice and serve to be truly excellent - to overcome your enemy in an aerial duel, you need that urge to "lean in" to the fight, that competitive drive; a part of you needs to love the fight. Many soldiers over the ages have spoken of this; as Robert E. Lee said, "It is well that war is so terrible, or we should grow too fond of it."

It's that primal urge, drawn straight from our deepest instincts - that thirst to compete and win - which gives soldiers the fire and fury to do their utmost in combat, to win the challenge, to defeat those who would plunder their temples, raze their cities and enslave their women and children. That is the truth of war, every bit as much as the death and boredom and bloodshed and terror. And if you can only tell one half of that truth, because the other half doesn't align with your political or personal views, then I don't give a god damn what you have to say about it, or about the works of storytellers who do.
AUTOMATON WEST@AUTOMATON_ENG

Mobile Suit Gundam creator Yoshiyuki Tomino thinks many of his fans are just military geeks who “didn’t get the message” automaton-media.com/en/news/mobile…

267
772
5.3K
876.3K
Biobayes
Biobayes@BioBayes·
@Toscamit @uniquemoviemom His business partner fucked the love of his life (the other business partner), and then the two married and kept the whole company for themselves. I can understand why someone would be resentful.
0
0
2
163
Toscamit
Toscamit@Toscamit·
@uniquemoviemom This scene really explained why Walt was acting so irrationally. The pain of losing out on Grey Matter. He never got over it. Chased it to the very end. And it destroyed him. There's a lesson here for all of us.
3
0
61
5.7K
Unique Movie Moments 🐬
Unique Movie Moments 🐬@uniquemoviemom·
This is where Walt made it clear he was doing it for himself
23
86
2.9K
119.5K
Biobayes
Biobayes@BioBayes·
@vzdzt @ns123abc They knew; they were just hoping it wouldn't blow up in their faces
Biobayes tweet media
1
1
8
84
veazy🎄
veazy🎄@vzdzt·
@ns123abc Bold of them to assume this wouldn’t come back to haunt them they gotta be on drugs
3
1
39
2.6K
NIK
NIK@ns123abc·
🚨 NEW COURT FILING — OpenAI's own solicitation emails to Musk

For three days, OpenAI's lawyer Savitt has been framing Musk as a founding donor who broke his pledges. Today Musk's lawyers filed the receipts to show what actually happened.

Altman's October 2015 email to Elon Musk:
> "As discussed I think starting with a $100MM commitment (and leaving the time unspecified) is the way to go..."

Then the number:
> "Can you donate $30MM over the next 5 years?"

Musk responded:
> "Let's discuss governance. This is critical. I don't want to fund something that goes in what turns out to be the wrong direction."

Altman to Musk, a few months later:
> "Can you do $20MM a year for the each of the next 3 years?"

Musk delivered $38 million plus the office rent. Two and a half years after Musk left the OpenAI board, the asks resumed. July 22, 2020, OpenAI's CFO to Musk's family office:
> "It would greatly help the nonprofit org if you're willing to assist with covering... landlord passthroughs and security costs."

Musk agreed. He funded OpenAI's rent.

Under California law, when a charity solicits and accepts donations, a fiduciary relationship forms between the person who asked and the person who gave. A legal duty to use the money for the declared charitable purpose. Altman and the CFO solicited. Musk donated. OpenAI accepted. Then converted the charity into an $852 billion company. The trust was breached.
NIK tweet media
167
1.1K
6.9K
436.3K
Biobayes
Biobayes@BioBayes·
@michelletomkim @techreview This reporter really needs to learn how to write a thread on X. It's all over the place. The correct way to do it is to reply to the previous post in an orderly, sequential fashion.
0
0
1
16
Michelle Kim
Michelle Kim@michelletomkim·
Day 3 of the Musk v. Altman trial in Oakland federal courthouse. Elon Musk continues to testify, represented by his lawyer Steven Molo. US District Judge Yvonne Gonzalez Rogers is presiding over the case. I'm a reporter (and lawyer) covering the trial for @techreview.
56
52
932
138.3K
Biobayes
Biobayes@BioBayes·
@michelletomkim Elon could have created ten ClosedxAI companies after that and it would still be irrelevant to the topic of Sam/OAI breaking the contract on his original deal with Elon.
0
0
3
91
Michelle Kim
Michelle Kim@michelletomkim·
OpenAI's lawyer asks Elon Musk whether Grok 4.1 and 4.2 are open-source. Musk says xAI plans to open-source them. OpenAI is suggesting that despite suing OpenAI for becoming closed-source, Musk himself isn't committed to running an open-source company.
3
3
63
6.7K
Michelle Kim
Michelle Kim@michelletomkim·
Day 4 of Musk v. Altman trial in Oakland federal courthouse! Elon Musk continues to get cross-examined by OpenAI’s lawyer William Savitt, who once represented Tesla (and Twitter in its battle against Elon Musk). Elon will then be cross-examined by Microsoft’s lawyer and then direct-examined by his own lawyer Steven Molo. I'm a reporter (and lawyer) covering the trial for @techreview.
23
34
614
86.7K
GunsnGolf
GunsnGolf@gunsngolf·
When I was practicing heavily in family law, one of my favorite questions to ask an opposing party was, “if the court were to grant you full custody, would you agree to start being an active and engaged dad/mom?” Most people gave a knee jerk “yes” response and then I’d spend the next 10 min going into all of the evidence showing they’re not currently active and engaged. Worked marvelously.
1
1
12
2.1K
Biobayes retweeted
Mike Solana
Mike Solana@micsolana·
fwiw, and this is embarrassing but I'm going to admit it, my instinct was blue, and I pressed blue. then I thought about it for a moment and the answer was clearly red. I can't make a rational case for blue, but understand where blues are coming from (they are wrong).
Tim Urban@waitbutwhy

Everyone in the world has to take a private vote by pressing a red or blue button. If more than 50% of people press the blue button, everyone survives. If less than 50% of people press the blue button, only people who pressed the red button survive. Which button would you press?

282
24
1.4K
182K
Biobayes retweeted
Dr. Marty Makary
Dr. Marty Makary@DrMakaryFDA·
A milestone day for clinical trial innovation. We’re announcing the first real-time clinical trials, where @US_FDA can see data signals and endpoints in real time. A quick explainer:
203
552
3.1K
405.7K
Biobayes
Biobayes@BioBayes·
@gfodor @HighJayster A perfectly sadistic player would dedicate every moment and dollar they have towards the end of getting any human to vote blue, funnily enough. Red coordination is not only the game theoretically correct choice, it's also the moral one.
0
0
0
10
gfodor.id
gfodor.id@gfodor·
@HighJayster Of course not, if coordination is allowed I basically spend every moment and dollar I have towards the end of getting any human I care about to vote red, my immediate family obviously being at the top of the list
3
0
4
153
ib
ib@Indian_Bronson·
Driven by the decline in marriage, in turn driven by the delay in marriage, in turn driven by the rise in female secondary education and labor force participation, which reduces male wages, as both men and women compete for housing, and jobs, on top of intra/inter-sexual rancor.
4
29
298
18.3K
Biobayes
Biobayes@BioBayes·
@RokoMijic A funny observation is that basically all real sadists would be promoting blue propaganda with all their might.
1
0
0
131
Roko 🐉
Roko 🐉@RokoMijic·
Time for some math on the blender game. The Blender Game is an excellent probe that reveals a very particular way that the minds of WEIRD (Western, Educated, Industrialized, Rich, and Democratic) people are broken. Perhaps THE way that they are broken.

What is the rational solution to the blender game? Well, basic game theory for self-interested players gives a clear answer: you never get into the blender. This is because the move of not getting into the blender strictly dominates getting in: whatever happens, you will always be better off or the same if you stay out of the blender. The end.

In game theory a Nash Equilibrium is very simple - it is a state where there's no unilateral move anyone could make to improve their own situation. In the (selfish) Blender Game there are many Nash Equilibria, because lots of states have the property that the in-blender share is well above 50%, so it doesn't matter what any one individual does. All of those are Nash Equilibria, as is the state where everyone is outside the blender ("all red" in Tim Urban's red/blue framing). But all the states with <50% in the blender are not Nash Equilibria, because any one of the people in the blender could save their own life by exiting.

But the Nash Equilibrium where everyone is outside is in some sense better. It is more stable. In game theory we can formalize this as a type of equilibrium called Trembling Hand Perfect Equilibrium (THPE). In THPE we imagine that people make their moves in the game and then, with some small probability, accidentally press the wrong button because their "hand is trembling". There is only one THPE for the blender game with selfish players: everyone presses red. It's easy to see why: imagine exactly 50% of people are in the blender - say 500 out of 1000.
Now imagine that each of those 500 people who are deliberately putting themselves into the blender has a small chance of accidentally not going in. If you are one of these people, you reason that even if you don't make the mistake yourself, someone else might - and then you are going to die, which is bad, so you can improve your situation by actually exiting. The same is true for 501 people, 502, etc. None of the "in the blender" states are actually Trembling Hand Perfect Equilibria. But the "everyone out of the blender" state is a Trembling Hand equilibrium, because even though some people might accidentally fall in with some small probability, you are definitely not going to improve your own chances by joining them to almost certainly be blended.

Okay, but what if people are a mixture of selfish and altruistic? Say you assign utility +1 to yourself for surviving, and +1/N for each other person who survives. We can analyze this new game: are there now other "stable" (THPE) equilibria? Yes. If everyone is a bit altruistic, then "all in the blender" also becomes a Trembling Hand Perfect Equilibrium. The reason is that for someone who is at least a little bit altruistic, it is okay to suffer a small chance of being blended in exchange for a larger chance of saving the larger group. "The good of the many outweighs the good of the few - or the one." Note that in these games both the size of the set you are saving and the probability of saving them is larger, because in order for getting out of the blender to actually save yourself, you need one more other person to also get out, which is ε times less likely. So any nonzero amount of altruism is enough to make these blue equilibria THPE. This seems to vindicate the "Blue" position.
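The trembling-hand argument above can be made concrete with a quick calculation - a minimal sketch under the stated setup (exactly 500 of 1000 intending to step in), with an assumed slip probability, and ignoring the second-order chance that red-intenders slip into the blender:

```python
# Trembling-hand check at the knife's edge: 1000 players, exactly 500
# intend to step in ("blue"). Each intender slips with probability eps,
# independently. Simplification (assumed): we ignore the tiny chance
# that red-intenders slip INTO the blender.
eps = 0.01  # slip probability (arbitrary)

# Probability the other 499 blue-intenders all actually step in:
p_others_all_stay = (1 - eps) ** 499

# Intending blue: you live if you slip out (eps), or if you stay AND all
# 499 others also stay, so the blender still reaches the 500 threshold.
p_survive_blue = eps + (1 - eps) * p_others_all_stay

# Intending red: you live unless you slip in (eps) AND the blender then
# fails to reach 500 anyway.
p_survive_red = (1 - eps) + eps * p_others_all_stay

# Switching to red is a strict improvement, so "exactly 500 in" is not
# a trembling-hand-stable state.
assert p_survive_red > p_survive_blue
```

With eps = 0.01, staying blue survives only a couple of percent of the time, while switching to red survives about 99% of the time - the blue voter at the threshold has an overwhelming incentive to exit, exactly as the thread argues.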
As long as everyone is at least a little bit altruistic, "All in the Blender" is actually a Trembling Hand Perfect Equilibrium, so it is at least as valid as "All out of the Blender" - and some might argue superior, since under trembling-hand conditions it can prevent anyone from getting blended most of the time (there are absurdly unlikely cases where many people simultaneously slip up).

But there is a problem. The "All in the Blender"/"All Blue" equilibrium is only Trembling Hand Perfect if the number of altruists is at or above the 50% threshold. If there are 49% altruists and 51% egoists, then the egoists will rationally abandon the altruists in the blender, because both the altruists' and the egoists' hands are trembling, so the blender is still dangerous, even if only slightly. But in reality you never really know how many people are slightly altruistic versus just self-interested and rational. In practice a fair number of these games end up with red winning.

If you use a mixed population with more than half the people being purely self-interested or even sadistic, the "All in the Blender"/"All Blue" equilibrium is no longer Trembling Hand Perfect. To see why, think about a mixed population with 3 selfish players and 2 altruists. Imagine them all provisionally choosing to go into the blender, and then reconsidering their options in light of the fact that someone (or several!) might slip. All the selfish players realize that if any three (or all four!) of the other four slip, they will be in the blender either with one other person or on their own, and they will then die. Therefore, all three selfish players will not enter the blender. But then the altruists also get blended with high probability, so actually they don't want to get in either. Now imagine that all the altruists are sort of "running the same algorithm", like functional decision theory.
If they assign any nonzero probability to the case that they are outnumbered by selfish people, they should all choose to get out of the blender/all play red. This is because in cases where self-interested players outnumber altruists, playing red strictly dominates even for the altruists, and in cases where altruists outnumber the self-interested, you can do either and it makes no difference to first order. High-commitment cooperation only makes sense when you are absolutely sure that the altruists outnumber the merely self-interested who larp as altruists. So to pick blue, it is not enough to merely be an altruist - rational altruists wouldn't pick blue. You must also walk around with the background assumption that everyone else in the entire world is also an altruist.

What is the flaw of the WEIRD mind that this thought experiment exposes? It's that WEIRD people do game theory by tentatively assuming that every group they ever interact with is composed of altruists/cooperators, and then maybe adjusting given specific information on bad individuals. It's "assume everyone is a cooperator by default, and then adjust if needed" decision theory. WEIRD Decision Theory. WDT. This sounds stupid, but it is a neat hack that solves lots of things. It prevents WEIRD people from letting rational mutual doubt ruin their lives by defecting just on the chance that the other person might want to defect. It is also probably about the simplest way to solve that, other than "always cooperate". So to WEIRD people, "All Blue" comes out as the obviously correct answer, even though it is not actually the right answer in the math. They don't like it even when they know the math! The blender game is weirdly, unnaturally balanced to expose this flaw.
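The 3-selfish/2-altruist example above, using the thread's utility of +1 for your own survival plus +1/N per other survivor, can be checked by direct enumeration - a sketch where the function names and player indexing are mine:

```python
# 5 players: 0-2 selfish (per the thread's argument, they stay out, i.e.
# play red), 3-4 altruists. Utility (from the thread): +1 for your own
# survival, +1/N for each other survivor.
N = 5
NEED = 3  # "at least 50%" of 5 players means >= 3 in the blender

def survivors(choices):
    """choices: tuple of 'red'/'blue'; returns per-player survival flags."""
    if choices.count("blue") >= NEED:
        return [1] * N                           # blender jams; all live
    return [0 if c == "blue" else 1 for c in choices]

def utility(i, choices):
    """Thread's utility for player i: own survival + 1/N per other survivor."""
    s = survivors(choices)
    return s[i] + sum(s[j] for j in range(N) if j != i) / N

# With the three selfish players out of the blender, red is strictly
# better for altruist 3 no matter what altruist 4 does:
for other in ("red", "blue"):
    u_red  = utility(3, ("red", "red", "red", "red",  other))
    u_blue = utility(3, ("red", "red", "red", "blue", other))
    assert u_red > u_blue
```

So once an altruist expects the selfish players to stay out, red beats blue even for the altruist, matching the thread's claim that red dominates when the self-interested outnumber the altruists.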
Usually there is some active benefit to coordination, so the "always assume other people will cooperate if you do" hack does tend to line up with the math, because the small chance of people not cooperating is usually cancelled out by big benefits of cooperation. But in the blender game, there is no benefit to cooperating. The uncoordinated equilibrium is just better. WEIRD people don't like it when uncoordinated equilibria are just better. This is why they are always trying to cancel capitalism. And this is why they keep getting into the blender. □
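The thread's opening dominance claim ("you will always be better off or the same if you stay out") can be verified by brute force - a minimal sketch assuming the rules as stated (at least 50% in the blender means everyone survives; otherwise everyone inside dies and everyone outside lives); the function names and population size are mine:

```python
# Dominance check for one self-interested player in the blender game.
N = 1000  # population size (arbitrary)

def survives(choice, others_in_blender):
    """Survival (1/0) for one player given how many OTHERS stepped in."""
    total_in = others_in_blender + (1 if choice == "blue" else 0)
    if total_in >= N / 2:
        return 1                         # blender jams: everyone lives
    return 0 if choice == "blue" else 1  # outsiders always live

# Red is never worse than blue against any behavior of the other 999...
assert all(survives("red", k) >= survives("blue", k) for k in range(N))
# ...and strictly better in some cases (e.g. nobody else steps in):
assert survives("red", 0) > survives("blue", 0)
```

(Strictly speaking this exhibits weak dominance - red only ties blue when the 50% threshold is met without you - which is exactly the thread's "better off or the same".)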
Roko 🐉 tweet media
Roko 🐉@RokoMijic

We're doing the "Blender" game again There is a large blender. Everyone in the world has to decide whether to step into the blender. If at least 50% of the people do step into the blender, it will be unable to overcome their inertia to get started, and everyone survives. If less than 50% of the people step into the blender, then they all get blended up into paste and die. People who do not step into the blender suffer no adverse effects. Would you step into the blender? (Blue=step into the blender, Red= don't do that)

28
27
293
22.3K
Biobayes
Biobayes@BioBayes·
This "WEIRD Decision Theory" unironically explains half of modern politics. Once you see it you can't unsee it.
Roko 🐉@RokoMijic

Time for some math on the blender game. …

0
0
0
23
Biobayes
Biobayes@BioBayes·
@RokoMijic This unironically explains at least half of modern politics.
0
0
1
79