Alek Erickson

1.3K posts

@AlekErickson

evolutionary/developmental biology, abstract games, and music

Stockholm, Sweden · Joined November 2016
1.2K Following · 371 Followers
Pinned Tweet
Alek Erickson @AlekErickson
Have you recently (or nearly) defended your PhD and are interested in gene regulatory networks (GRNs), cell fate decisions, and evolution of skeletal form? Let's apply together for the Wallenberg DDLS Postdoc Call 2026 (deadline is March 31). 1/n scilifelab.se/data-driven/dd…
1 reply · 0 reposts · 0 likes · 130 views
Alek Erickson reposted
Jason Ai. Williams @GoingParabolic
This image is destroying my brain.
249 replies · 1.7K reposts · 9.3K likes · 712.5K views
Alek Erickson reposted
Hao Yin @HaoYin20
Optical clearing & time-lapse fluorescent imaging of live mouse brain (up to 800 μm depth)🤯 SeeDB-Live is finally in peer-reviewed print (1.5 yr from preprint)🥸 An acute olfactory bulb slice (P11) loaded with GCaMP6f (Ca2+ sensor) was imaged with #2PM at a depth of 150 μm during clearing with SeeDB-Live👹 @Shigenori774 @TakeshiImaiLab @naturemethods 2026 nature.com/articles/s4159…
[GIF]
7 replies · 70 reposts · 295 likes · 29K views
Alek Erickson reposted
Quanta Magazine @QuantaMagazine
Meet the ultimate gatekeeper of the nucleus. This molecular machine determines which compounds are welcome inside and which shall not pass. The mechanism behind its selectivity remains a mystery. quantamagazine.org/disorder-drive…
17 replies · 156 reposts · 782 likes · 48.3K views
Alek Erickson reposted
Vladimir Bulatov @bulatov_org
two different kinds of interacting gears in Gray-Scott reaction
0 replies · 3 reposts · 26 likes · 1.3K views
Alek Erickson reposted
Chew Wei Leong @ChewWeiLeong
Spatial perturb-seq: single-cell functional genomics within intact tissue architecture. Scale up your in situ CRISPR screens. Out in @NatureComms
2 replies · 40 reposts · 206 likes · 17.5K views
Alek Erickson reposted
Vladimir Bulatov @bulatov_org
Animation of Simone Conradi attractor
1 reply · 8 reposts · 57 likes · 4.8K views
Alek Erickson reposted
Vladimir Bulatov @bulatov_org
Drunk Gliders in Gray-Scott Reaction Diffusion
0 replies · 5 reposts · 56 likes · 3.1K views
Alek Erickson reposted
nature @Nature
Nature research paper: Highly dynamic dural sinuses support meningeal immunity go.nature.com/4qVnxlm
0 replies · 19 reposts · 77 likes · 13.1K views
Alek Erickson reposted
Development @Dev_journal
Cell divisions test and shape boundaries in Drosophila embryos. This Research Highlight showcases work from Veronica Castle, Rodrigo Fernandez-Gonzalez and Gonca Erdemci-Tandogan: journals.biologists.com/dev/article/15…
1 reply · 1 repost · 5 likes · 545 views
Alek Erickson reposted
Parmita Mishra @parmita
This is biology. But it is also…materials science.
[image]
12 replies · 93 reposts · 580 likes · 22.7K views
Alek Erickson @AlekErickson
That is my current idea, though it is open to many 'beneficial mutations' driven by a strong and/or persuasive candidate. If this aligns with your interests, contact me by DM or email with your CV and a short statement of motivation. 4/4
0 replies · 0 reposts · 0 likes · 10 views
Alek Erickson @AlekErickson
Then, modelling 3D growth fields given a starting set of cells with defined GRN activation states. Predictions about GRN function on cell fate and tissue shape would then be reconstituted in the lab using in vitro or in vivo approaches. 3/n
1 reply · 0 reposts · 0 likes · 32 views
Alek Erickson @AlekErickson
Have you recently (or nearly) defended your PhD and are interested in gene regulatory networks (GRNs), cell fate decisions, and evolution of skeletal form? Let's apply together for the Wallenberg DDLS Postdoc Call 2026 (deadline is March 31). 1/n scilifelab.se/data-driven/dd…
1 reply · 0 reposts · 0 likes · 130 views
Alek Erickson @AlekErickson
Thanks to both the Petrus och Augusta Hedlunds Stiftelse, and the Åke Wiberg Stiftelse, for their generous decisions in 2025 to support my group's research into molecular mechanisms of brain-face covariation and birth defects affecting the facial skeleton.
0 replies · 0 reposts · 0 likes · 44 views
Alek Erickson reposted
Mathelirium @mathelirium
Lecture 1 on Physics-Informed Neural Networks: A Mini-Series

Physics-Informed Neural Networks (PINNs) are neural networks trained to satisfy a differential equation by building the PDE residual directly into the loss. They emerged from a very practical problem: classical PDE pipelines can be brilliant, but they often demand heavy discretization work (meshes, stencils, stability tuning), and the method you build is usually tied to one geometry and one solver setup. A PINN flips the workflow by representing the solution itself as a smooth function uᵩ(x,t) and enforcing the physics everywhere you choose to sample the domain.

People often meet PINNs in the least helpful way: via a flashy solution plot, with almost no explanation of what was enforced to get it. In this series we keep the enforcement visible. We pick a differential equation, represent the unknown solution as a flexible function, measure how well that function satisfies the equation across the domain, and train it to reduce that mismatch everywhere we sample.

A normal neural net learns from labels: you give it inputs and target outputs. A PINN learns from a differential equation: you give it inputs (x,t) and it gets punished whenever its output fails the PDE. By "punished" we mean the loss increases when the mismatch is large and decreases as the mismatch gets smaller. The network isn't replacing physics; it's becoming a flexible function that is forced to satisfy the same calculus you'd impose on any candidate solution.

The math breakdown. We start with a PDE we want to solve on a domain Ω. Write it as

  uₜ(x,t) + N(u(x,t), uₓ(x,t), uₓₓ(x,t), …) = 0 for (x,t) in Ω

A PINN replaces the unknown function u with a neural network output uᵩ(x,t). Now define the physics residual by plugging uᵩ into the PDE:

  rᵩ(x,t) = ∂uᵩ/∂t + N(uᵩ, ∂uᵩ/∂x, ∂²uᵩ/∂x², …)

If uᵩ were an exact solution, we would have rᵩ(x,t) = 0 everywhere.

We may also have data points (xᵢ, tᵢ, uᵢ) from measurements or a known initial condition. The training objective is just a weighted sum of squared errors:

  L(φ) = L_data(φ) + λ L_phys(φ) + L_bc/ic(φ)

with

  L_data(φ) = meanᵢ |uᵩ(xᵢ,tᵢ) − uᵢ|²
  L_phys(φ) = meanⱼ |rᵩ(xⱼ,tⱼ)|², where (xⱼ,tⱼ) are the collocation points in Ω
  L_bc/ic(φ) = penalties enforcing boundary conditions and initial conditions

The key technical step is that the derivatives inside rᵩ (∂uᵩ/∂t, ∂uᵩ/∂x, ∂²uᵩ/∂x², …) are computed by automatic differentiation, so we can differentiate the total loss L(φ) with respect to φ and train with gradient descent. This is the whole idea behind PINNs: learn a function, but make the PDE part of the loss, so the network is trained to be a solution, not just a curve-fitter.

In the render, the main 3D surface is the network's current guess uᵩ(x,t), drawn as a living sheet over the (x,t) plane. Hovering above is the neural scaffold: a visible graph of feature nodes and connections. The bright tension threads are the physics residual rᵩ(x,t): each thread tethers a collocation bead on the sheet up to the scaffold, and it thickens and brightens exactly where |rᵩ| is large (color encodes the sign). As training runs, those threads go slack across the domain not because we hid the error, but because the network has actually been pushed toward rᵩ(x,t) ≈ 0.

#PINNs #PhysicsInformedNeuralNetworks #ScientificMachineLearning #PDE #DifferentialEquations #Optimization #MachineLearning #AppliedMath #ComputationalPhysics
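The residual-in-the-loss idea can be seen without any deep-learning machinery. Below is a deliberately tiny sketch (not from the thread) that fits the ODE u′(t) = −u(t) using a one-parameter ansatz u_θ(t) = exp(θt) in place of a neural network, so the derivative in the residual is available in closed form; the names theta, residual, and loss are illustrative.

```python
import numpy as np

# Toy "physics-in-the-loss" fit for the ODE u'(t) = -u(t), whose true
# solution is u(t) = exp(-t). Instead of a neural network we use a
# one-parameter ansatz u_theta(t) = exp(theta * t), so the time
# derivative needed for the residual is available in closed form.
# The training signal is the ODE residual r(t) = u'(t) + u(t),
# squared and averaged over collocation points -- the same role
# L_phys plays in a real PINN, just without autodiff.

t = np.linspace(0.0, 2.0, 50)   # collocation points in the domain
theta = 0.5                     # deliberately wrong initial guess

def residual(theta, t):
    u = np.exp(theta * t)
    du_dt = theta * np.exp(theta * t)
    return du_dt + u            # zero everywhere iff the ODE holds

def loss(theta, t):
    return np.mean(residual(theta, t) ** 2)

lr, eps = 0.05, 1e-6
for _ in range(2000):
    # gradient of the loss w.r.t. theta via a central finite difference
    # (a real PINN would obtain it by automatic differentiation)
    g = (loss(theta + eps, t) - loss(theta - eps, t)) / (2 * eps)
    theta -= lr * g

print(round(theta, 3))  # converges to -1.0, the true decay rate
```

Swapping the scalar theta for network weights and the finite difference for autodiff gives the actual PINN training loop; the structure of the loss is unchanged.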
19 replies · 157 reposts · 1.1K likes · 47.2K views
Alek Erickson reposted
Carlos E. Perez @IntuitMachine
The World Is Not Linear: A Field Guide to the Laws That Quietly Run Everything

Most smart people don't fail because they're dumb. They fail because they apply clean logic to a messy world — and the world punishes that mistake with a smile.

The messy truth is that modern life is shaped less by individual intent and more by systems: incentives, competition, scaling effects, path dependence, and statistical weirdness. These systems produce outcomes that feel unfair or mysterious until you learn the underlying "laws" — a set of lenses that let you predict how things actually behave. This is not about becoming cynical. It's about becoming accurate.

Once you internalize these lenses, you start noticing that most disagreements aren't about values. They're about which hidden force you think dominates: Do incentives matter more than morals? Do networks scale value more than craftsmanship? Do rare events matter more than averages? Do systems evolve, or can they be designed?

This article is a guided map through those forces — told as one story.

1) The seduction of "doing the obvious thing"

Imagine you're in charge of improving something important: a company, a city, a hospital, a school, a product, maybe even your own life. You do what responsible people do: you define a goal. You pick a metric. And you tell everyone: we're going to win on this number. This is where the first trap snaps shut.

Goodhart's Law: the metric stops being real

When a measure becomes a target, it stops being a good measure. Before it became a target, the metric was an instrument: a thermometer. After it becomes a target, it becomes a game.

- Hospitals improve "wait times" by changing intake rules.
- Companies improve "engagement" by nudging addiction.
- Schools improve test scores by teaching to the test.
- Police departments improve crime stats by changing what counts as a crime.

Not because anyone is evil. Because the system rewards it.
The principal–agent problem: the doers don't pay

This is the deeper engine under Goodhart. The person deciding is not the person suffering the consequences.

- Executives chase quarterly optics; employees deal with the chaos.
- Politicians chase election cycles; citizens live with the long-term effects.
- Managers chase easy metrics; customers absorb the frustration.

Once you see principal–agent problems, you start seeing why seemingly intelligent organizations keep doing self-destructive things: the incentives are miswired.

The Cobra Effect: perverse incentives grow cobras

Sometimes this miswiring gets darkly funny. Reward outcomes and people will manufacture the appearance of outcomes. In the original parable, a colonial government offered a bounty for dead cobras — and people began breeding cobras. This isn't historical trivia; it's a universal pattern:

- Reward bug counts → people file junk bugs.
- Reward convictions → plea bargains + overcharging.
- Reward content volume → SEO sludge.
- Reward "delivery" → rushed work + tech debt.

The world is full of cobra farms.

2) Why fixing things often makes them worse

Okay, so: choose better metrics, align incentives, done. Not quite. Because even well-intentioned fixes trigger the next law: second-order effects.

Chesterton's Fence: don't remove constraints you don't understand

You walk into an old system and see "stupid rules." You want to clean house. You want to simplify. But: why is that rule there? Don't remove a fence until you know why it was built. A lot of institutional weirdness is scar tissue from past disasters. The rule might be dumb — but if you don't understand it, you don't know what disaster you're re-inviting.
This is why naive reformers are dangerous: they confuse "not understanding a thing" with "the thing being pointless."

Gall's Law: complex systems must grow from simple working ones

Even if the fence is removable, you still hit the next problem: a complex system that works has always evolved from a simple system that worked. This demolishes a common fantasy: that you can design complexity from scratch. Most large redesigns fail for one reason: they try to create a finished organism instead of growing a living embryo. If the system matters, you don't "implement" the final form. You build something simpler that works. Then you iterate. Gall's Law is harsh, but kind: it explains why so many ambitious "transformations" flame out.

3) Efficiency doesn't save you (and sometimes consumes you)

Now suppose you do manage to improve a system. You make it cheaper, faster, more efficient. Surely this reduces resource usage? Often, no.

Jevons Paradox: efficiency increases total consumption

When you make something more efficient, you often make people use more of it.

- Make lighting cheaper → people illuminate more spaces.
- Make driving more fuel-efficient → people drive farther.
- Make computing cheaper → people compute vastly more.

Efficiency doesn't always shrink the pie. It can expand it. This is one of the most important and least emotionally intuitive truths about progress: efficiency changes behavior.

4) Some things don't get more efficient — and get expensive forever

Now meet the mirror image of Jevons: not everything can get dramatically more productive. Some work is bottlenecked by time, humans, and attention.

Baumol's Cost Disease: sectors that don't scale inflate

A string quartet takes as long to play Beethoven as it did 200 years ago. A therapist can't 10× their clients without breaking the thing. A teacher can't "scale" classroom attention the way software scales distribution. Meanwhile, other industries do scale — manufacturing, computing, logistics.
So as society grows richer, high-productivity sectors get cheaper and cheaper… and human-time sectors get relatively more expensive. That's why healthcare, education, legal services, childcare, and eldercare feel like they eat the world. Baumol isn't "a problem to solve" so much as a physics constraint: certain value comes from human presence. And presence doesn't compress easily.

5) The invisible accelerant: networks

At this point you might feel like everything is doom and friction. It's not. Some forces make systems wildly better as they grow. The biggest one is networks.

Metcalfe's Law: value scales with connections

A phone is useless alone. A fax machine is useless alone. A social app is useless without other humans. As users increase, connections increase faster than users do. That creates accelerating value.

Reed's Law: groups scale even faster than connections

But it's not just one-to-one links. Once people can form groups — communities, coalitions, companies, subcultures — the number of potential groupings explodes. That's Reed's Law: group-forming networks can scale with frightening speed. This is why networked platforms can go from "niche" to "dominant" almost overnight: the product isn't just features — it's the social graph.

6) Progress has a heartbeat: learning curves

Not all progress comes from networks. Some comes from repetition.

Wright's Law: cost falls with cumulative production

This is the law behind why solar, batteries, and manufacturing tech get cheaper and cheaper: every doubling of cumulative production yields a predictable cost reduction. The implications are enormous:

- The future is shaped by what we manufacture at scale.
- Volume is not just output; it's learning.
- Building the thing teaches you to build the thing better.

Strategy through Wright's Law becomes: maximize learning rate. Not "be brilliant," but "iterate relentlessly."

7) Cooperation is rare — and competition forces ugliness

Now we move from economics into game theory and moral physics.
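Before the game theory, the three scaling claims just named reduce to small formulas. A sketch with illustrative numbers; the 20% learning rate in wright_cost is an assumed figure for demonstration, not one quoted in the post:

```python
import math

# Back-of-envelope versions of the three scaling laws named above.

def metcalfe_links(n: int) -> int:
    # Metcalfe: distinct one-to-one connections among n users
    return n * (n - 1) // 2

def reed_groups(n: int) -> int:
    # Reed: nontrivial subgroups of n users
    # (all subsets minus the empty set and the n singletons)
    return 2 ** n - n - 1

def wright_cost(initial_cost: float, cumulative_units: float,
                learning_rate: float = 0.20) -> float:
    # Wright: unit cost falls by `learning_rate` with every doubling
    # of cumulative production (counted from the first unit)
    doublings = math.log2(cumulative_units)
    return initial_cost * (1.0 - learning_rate) ** doublings

print(metcalfe_links(10))                  # 45 links for 10 users
print(reed_groups(10))                     # 1013 possible subgroups
print(round(wright_cost(100.0, 1024), 2))  # 10 doublings: 100 -> 10.74
```

Note the growth rates: links grow quadratically, subgroups exponentially, which is why Reed-style group formation outruns Metcalfe-style pairwise connection.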
Even with good metrics, good redesign, good scaling… sometimes the system makes people do bad things.

Prisoner's Dilemma: defecting is rational

If you and I cooperate, we both win. But if I suspect you might defect, I should defect first. So we both defect. We both lose. This structure appears everywhere: labor vs management, nations vs nations, companies vs companies, roommates, siblings, Twitter discourse. It's tragedy-by-incentives.

Moloch: the god of coordination failure

"Moloch" is the poetic version of the same idea: systems where competition forces everyone into worse behavior, even if nobody wants it. No one wants the attention economy. But creators compete for attention. Platforms compete for engagement. So everyone converges on outrage and addiction. Moloch doesn't need villains. It only needs incentives.

8) The biggest mistake smart people make: believing in averages

Now we arrive at the statistical heart of why forecasts fail. Most planning assumes the world behaves like a bell curve: most outcomes are near the average, extremes are rare. In many domains, that's false.

Fat tails: extremes happen way more than you think

In fat-tailed worlds, the "average" is a comforting lie. Outliers dominate: venture returns, blockbuster movies, bestselling authors, company outcomes, war and peace, pandemics, market crashes. In a fat-tailed world, one event can erase ten years of progress — or create it overnight.

Black swans: surprise + impact + fake hindsight

A black swan isn't just an outlier. It's an outlier we didn't know how to model. The signature of black swans is: huge impact, surprise beforehand, "it was obvious" afterward. We are story machines. We can rationalize anything after it happens.

Survivorship bias: you're studying the winners

This is why business advice is mostly nonsense. We read biographies of billionaires and imitate their habits — forgetting the cemetery of equally hardworking, equally smart people who lost.
Survivorship bias turns randomness into "wisdom." A good thinker always asks: what am I not seeing because it died?

9) The final set of tools: tradeoffs, simplicity, and time

After you've internalized incentives, scaling, networks, and tail risk, you earn the right to something important: less ideology, more judgment. That's what these last lenses provide.

Pareto efficiency: every improvement has a cost

At some point, you stop making "free" gains and enter a world of tradeoffs. If you want more of A, you give up B. This is what breaks utopian thinking:

- More safety can mean less liberty.
- More speed can mean less quality.
- More fairness can mean less efficiency.
- More growth can mean more inequality.

Smart people aren't the ones who avoid tradeoffs. They're the ones who name the tradeoff out loud.

Occam's Razor: don't add gears without proof

Now that you're thinking in systems, you could easily overcomplicate. Occam is your brake pedal: prefer the simplest explanation that predicts. It's not "simplicity is truth." It's: don't hallucinate complexity.

Lindy: time is the best filter we have

In fragile worlds, "new" is often a synonym for "untested." The Lindy effect says: the longer something has survived, the longer it's likely to survive. Ideas, books, institutions, even practices: time is a stress test. Lindy isn't anti-innovation. It's pro-robustness.

Comparative advantage: specialization beats self-reliance

Finally, comparative advantage gives you the social version of Occam. Even if you're worse at everything than someone else, trade can still make both of you better off, because efficiency comes from relative differences. That lens dissolves a lot of macho self-sufficiency myths.

So what does this worldview do? It does three things.

First: it replaces naive optimism with durable optimism. Not "everything will work out," but: we can build systems that don't collapse under their own incentives.

Second: it changes what you fear. Not competitors. Not critics. Not even failure.
You start fearing: bad metrics, misaligned incentives, brittle complexity, tail risks, coordination failure. Which are the real predators.

Third: it gives you a usable strategy. A decision-making style that looks like this:

- Start simple (Gall)
- Measure carefully (Goodhart)
- Align incentives (principal–agent)
- Expect adaptation (cobra effect)
- Respect old constraints (Chesterton)
- Model scaling honestly (Metcalfe/Reed/Wright)
- Don't assume efficiency saves you (Jevons/Baumol)
- Prepare for tails (fat tails / black swans)
- Don't trust winner stories (survivorship bias)
- Name tradeoffs and keep models simple (Pareto + Occam + Lindy)

That list is more than theory. It's a survival kit for reality.

Closing: the meta-law

If I had to compress this entire worldview into one sentence, it would be: outcomes come from incentives and scaling under uncertainty — not from intentions and plans. Most people live inside stories. This toolkit makes you live inside systems. And once you do, you become harder to fool — including by yourself.
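The fat-tails claim (a handful of outliers dominating the total) can be checked in a few lines. A simulation sketch, where the Pareto shape of 1.1 and the sample size are arbitrary illustrative choices:

```python
import numpy as np

# Fat tails vs the bell curve: what fraction of the total do the
# top 1% of samples carry? A shape parameter near 1 puts the Pareto
# distribution deep in the heavy-tailed regime.
rng = np.random.default_rng(0)
pareto = rng.pareto(1.1, size=100_000) + 1.0   # heavy-tailed draws on [1, inf)
normal = rng.normal(loc=pareto.mean(), scale=1.0,
                    size=100_000)              # thin-tailed baseline, same mean

def top_share(x, frac=0.01):
    # share of the total sum held by the largest `frac` of samples
    k = int(len(x) * frac)
    return np.sort(x)[-k:].sum() / x.sum()

print(f"top 1% share, Pareto: {top_share(pareto):.2f}")   # a large chunk
print(f"top 1% share, normal: {top_share(normal):.2f}")   # roughly 1%
```

Under the bell curve the top 1% of samples hold about 1% of the total; under the heavy tail they hold a large multiple of that, which is the whole point of the section.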
51 replies · 331 reposts · 1.7K likes · 102.5K views