Tom Knight -- e/🦀🌴

1.2K posts

@TomKnightSynBio

Engineer of stuff that is alive or computes. Follow us all to that blue place.

Joined January 2021
3.7K Following · 2.9K Followers
Tom Knight -- e/🦀🌴@TomKnightSynBio·
@parmita You are ignoring at least two other important instruments: High performance mass-specs and cryo-EM, both critical to progress with proteins.
0 replies · 0 reposts · 21 likes · 1.3K views
Parmita Mishra@parmita·
the last instrument that actually changed biology was the sequencer. 1977. everything since has been cheaper, faster, parallel — same readout. fifty years of moore's law on a dead measurement. we are so overdue it is embarrassing.
12 replies · 4 reposts · 44 likes · 18.3K views
Tom Knight -- e/🦀🌴 retweeted
Shae McLaughlin@shae_mcl·
It’s estimated that the Protein Data Bank (PDB) cost around $13B to create. AlphaFold was only possible because of it. If we want ML to solve biology, we should be funding the creation of databases and the development of new assay technologies. ML is nothing without data.
39 replies · 171 reposts · 1.3K likes · 153.8K views
Tom Knight -- e/🦀🌴 retweeted
Patrick Boyle — e/🦀@p_maverick_b·
Reminder that we could be using AI-enabled tools to detect, track, and develop countermeasures for viral outbreaks, yet less than two years ago mpox vaccine researchers found themselves blocked from using AlphaFold for weeks
3 replies · 4 reposts · 17 likes · 1.9K views
Tom Knight -- e/🦀🌴 retweeted
Erika Alden DeBenedictis@erika_alden_d·
I think the first successful virtual cells are going to model prokaryotes, not eukaryotes. It’s technologies like BioBloom that are going to be the difference between being able to collect enough data and NOT. Thoughts? @ChanZuckerberg @DBBurkhardt @arcinstitute @joncalles
PioneerLabs@Pioneer__Labs

This tube is a library of bacteria with every single-base-pair genome mutation, all DNA-barcoded.🧪Growth + barcode sequencing = data on millions of mutations. 📄 Report: biorxiv.org/content/10.648… Retweet if you want more data, and read on if you want to use the library! 🧵
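The "growth + barcode sequencing = data" step in the quoted thread is, in generic barcoded-fitness assays, simple arithmetic: compare each barcode's share of the sequencing reads before and after growth. A minimal sketch of that calculation, using the standard log2 fold-change approach rather than PioneerLabs' actual pipeline (function name and pseudocount are illustrative):

```python
import math

def log2_fitness(count_t0: int, count_t1: int,
                 total_t0: int, total_t1: int) -> float:
    """Per-barcode fitness as the log2 change in relative abundance.

    A pseudocount of 0.5 guards against zero read counts.
    """
    f0 = (count_t0 + 0.5) / total_t0  # barcode's share of reads before growth
    f1 = (count_t1 + 0.5) / total_t1  # barcode's share of reads after growth
    return math.log2(f1 / f0)
```

A mutant whose barcode holds a steady share of the pool scores 0; one whose share doubles scores roughly +1 (one extra doubling relative to the pool).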

3 replies · 12 reposts · 60 likes · 8.8K views
Tom Knight -- e/🦀🌴@TomKnightSynBio·
@p_maverick_b @AnthropicAI Well, Patrick. Great we have a responsible AI infrastructure that can rein in terrorist plant engineers like you. Next thing you know, you'll want to engineer left-handed wheat. ;-)
0 replies · 0 reposts · 5 likes · 179 views
Patrick Boyle — e/🦀@p_maverick_b·
I was going to get off my biosecurity refusal hobby horse today but my brilliant product thinking has again been thwarted by Claude. What can we do about this @AnthropicAI ???
[attached media]
14 replies · 5 reposts · 59 likes · 5K views
Tom Knight -- e/🦀🌴 retweeted
荻田佑@OTTA58027889·
A home lab in a single six-tatami room. From here to the world. Plugging away at research. #6畳1間 #研究室 #自宅
10 replies · 24 reposts · 339 likes · 42.6K views
Tom Knight -- e/🦀🌴 retweeted
Trevor Campbell@TrevorCampbell_·
There are so many untapped veins of riches in biology just waiting to be prospected, mined, and extracted: new tools, resources, medicines, materials, technologies, derivatives. The boomtown is coming, and I say “Drill, baby, drill!”
[attached media]
3 replies · 1 repost · 15 likes · 644 views
Tom Knight -- e/🦀🌴@TomKnightSynBio·
@martinmbauer How about astrophysical distance: light-years, parsecs, astronomical units. Explain to me what is wrong with gigameters or exameters.
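Tom's point is easy to check numerically: the traditional astronomical units are fixed multiples of the meter, so metric prefixes cover them fine. A quick sketch using the standard SI values (the au is exact by IAU definition, the light-year follows from the Julian year and the speed of light):

```python
# SI values for common astronomical distance units.
AU_M = 149_597_870_700           # astronomical unit, exact by IAU definition
LY_M = 9_460_730_472_580_800     # light-year: 365.25 d * 86400 s * 299792458 m/s
PC_M = 3.0856775814913673e16     # parsec, 648000/pi astronomical units

def to_exameters(meters: float) -> float:
    """Express a distance in exameters (1 Em = 1e18 m)."""
    return meters / 1e18
```

One light-year is about 9.46 Pm (0.00946 Em) and a parsec is about 30.9 Pm, so nearby stars sit at a few tens of exameters with no special units required.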
6 replies · 0 reposts · 4 likes · 979 views
Martin Bauer@martinmbauer·
The worst unit is the decibel: it isn't even a unit, isn't defined consistently across fields, hides an arbitrary reference scale and is named after Alexander Graham Bell but spelled 'bel'
Aaron Bergman 🔍 ⏸️ (in that order)@AaronBergman18

milli-ampere*hours (mAh, what you see on charging banks) is a serious contender for worst unit of all time. It is almost never useful and serves exclusively to obscure rather than clarify. It is NOT an amount of energy
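Both complaints above come down to hidden reference values. A short sketch making the conventions explicit (helper names are illustrative; 3.7 V is the nominal voltage of a typical lithium cell):

```python
import math

def db_power(ratio: float) -> float:
    """Decibels for a POWER ratio: 10 * log10(P / P_ref)."""
    return 10 * math.log10(ratio)

def db_field(ratio: float) -> float:
    """Decibels for a FIELD (voltage, pressure) ratio: 20 * log10(V / V_ref).

    The factor of 20 exists because power scales as the square of a field
    quantity -- the cross-field inconsistency the tweet complains about.
    """
    return 20 * math.log10(ratio)

def mah_to_wh(mah: float, volts: float) -> float:
    """Convert a capacity in mAh to energy in Wh.

    mAh alone is charge, not energy; you also need the cell voltage,
    which is exactly what the battery-bank label hides.
    """
    return mah / 1000 * volts
```

So a "10000 mAh" power bank stores roughly 37 Wh at its 3.7 V lithium cells, not 50 Wh at the 5 V it delivers over USB.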

86 replies · 95 reposts · 2.8K likes · 117.2K views
Tom Knight -- e/🦀🌴 retweeted
Patrick Boyle — e/🦀@p_maverick_b·
@FilippaLentzos Nobody working on mirror life has stated a good reason for doing it besides funding the mirror life biosecurity workshop circuit
3 replies · 2 reposts · 4 likes · 660 views
Tom Knight -- e/🦀🌴 retweeted
Ihtesham Ali@ihtesham2005·
An MIT engineer published a 13-page essay in The Atlantic magazine in July 1945 describing a desktop machine called the Memex. It would store every book, every photo, and every letter a person owned, let them browse the contents by clicking links between documents, and let them save trails of related thoughts. He invented the personal computer, hyperlinks, Wikipedia, and the World Wide Web in a single magazine article 50 years before any of it existed.

I read it cover to cover in under an hour and walked away convinced I had just read the blueprint for the world I live in. His name was Vannevar Bush. The essay is called As We May Think.

The context for what he wrote matters because it explains how a single person could see so far ahead. Vannevar Bush was not a futurist. He was not a science fiction writer. He was the most powerful scientist in the United States during World War II. He ran the Office of Scientific Research and Development, which coordinated the Manhattan Project, the development of radar, the proximity fuse, mass production of penicillin, and almost every other major American scientific breakthrough of the war. He had personally directed the work of 30,000 scientists. He reported directly to President Roosevelt.

When the war was ending in the summer of 1945, he sat down to write something that had been forming in his head for years. The essay was published in The Atlantic in July 1945. It is 13 pages. The atomic bombs dropped on Japan three weeks later. Here is what he saw, and why one essay accidentally became a blueprint for the world I live in.

His opening problem was specific. Scientists were producing more research than humans could read. The body of human knowledge was growing exponentially. Any single researcher had access to a tiny fraction of what was relevant to their work. Most discoveries were being lost not because they were wrong, but because nobody could find them. Bush called this the central problem of the post-war world. Information was abundant. Attention was scarce. The bottleneck was no longer producing knowledge. The bottleneck was retrieving it.

He proposed a solution. He called it the Memex, short for memory extender. The Memex was a desk-sized machine. The user sat in front of it. It had screens. It had a keyboard. It used microfilm because the transistor had not been invented yet, but the function he described is exactly what a hard drive does today. The user could store every book they had ever read, every note they had ever taken, every photo they had ever owned, and every letter they had ever written. All of it accessible in seconds.

That alone would have been a stunning prediction. He described a personal computer in 1945. There were no personal computers. The first electronic computer in the world, ENIAC, would not be unveiled for another year, and it weighed 30 tons and filled a room. He was describing a machine the size of a desk that could hold everything a single person knew.

But the desktop machine was the small idea. The big idea is the part that almost nobody who quotes the essay actually understands. Bush argued that the way humans store information in books and libraries was wrong. Books are organized by category. Library shelves are organized by Dewey decimal. Any given fact has one position in the hierarchy. To find it, you have to know the category it lives in. He pointed out that this is not how the human brain works at all. The brain does not store information by category. The brain stores information by association. You think of your grandmother and immediately remember a song. The song reminds you of a vacation. The vacation reminds you of a meal. The meal reminds you of a person you have not thought about in years. Each thought triggers another, not because they share a category, but because they are linked.

Bush proposed that information storage should imitate the brain. Documents should be linked to other documents directly. Click on one, jump to another. Click on a footnote, see the source. Click on a name, see the person's other writings. He called these connections "associative trails." This is hypertext. He invented it on paper in 1945. Tim Berners-Lee, the man who actually built the World Wide Web (WWW) at CERN in 1989, has cited this essay directly as his inspiration. The HTTP protocol, the HTML standard, the entire system of clicking from one document to another that you use a thousand times a day, descends from an idea Bush sketched on paper before the bombs dropped on Japan.

The third part of the essay is the part that hit me hardest. Bush argued that the user of the Memex would not just consume information. They would build their own trails through it. They would save sequences of documents that mattered to them. They would annotate them with their own notes. They would share their trails with other people. Other researchers would inherit those trails and extend them. He was describing personal annotation, social bookmarking, link sharing, the entire creator economy, and the collaborative editing model behind Wikipedia. He was describing it in 1945. He was describing it in plain English in a popular magazine. He even predicted that some users would build trails so valuable that they would be paid to produce them. He said professional trail-blazers would emerge as a new kind of expert, paid to organize and connect knowledge for others. This is, more or less, every newsletter writer, every YouTube explainer, every modern educator. He saw the entire economy of online knowledge work coming.

The fourth thing he predicted is the one that should make you stop and put your phone down. Bush wrote that the Memex would extend the human brain. Not metaphorically. Literally. He argued that the machine would become an external memory that humans would access as easily as their own thoughts. The boundary between the brain and the machine would dissolve in normal use. People would stop thinking of the Memex as a separate device. They would think of it as part of how they thought. This is exactly what has happened to the smartphone in the last 15 years. You do not memorize phone numbers anymore. You do not memorize directions. You do not memorize most facts. You offload everything to a glass rectangle in your pocket and treat the rectangle as part of your own mind. Bush predicted this in 1945. He thought it would be a triumph for human civilization.

The strangest part of reading the essay in 2026 is realizing how few people have actually read it. The essay is free online at The Atlantic. It is in the public domain. It is 13 pages. You can read it in 30 minutes. Steve Jobs read it. Doug Engelbart, the man who invented the computer mouse, said the essay was the foundation of his life's work. Tim Berners-Lee said it was the foundation of the web. Ted Nelson, who coined the word "hypertext," said it was the seed of his entire career. Every single major step of the digital revolution came from people who read this essay carefully and decided to build it.

The man who wrote it died in 1974 at age 84. He lived just long enough to see the early internet take shape, and just early enough that he never saw it become what it is now. He never saw a personal computer in a home. He never used a search engine. He never followed a hyperlink in his life. He just wrote down, in 13 pages, the world the rest of us would spend 80 years building for him.

You are reading these words right now on a device that is the Memex. You found this post by following an associative trail that did not exist when he wrote the essay. You will probably share this post with someone else and extend the trail. He saw all of this before he had any reason to believe it was possible. The blueprint for the world you are living in is one click away from you, and most people who use it every day have never read the original.
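Bush's "associative trails" are, in modern terms, a graph of documents connected by direct links, walked by association rather than by category lookup. A toy sketch using the grandmother-to-meal chain from the essay's brain analogy (the data structure and function are illustrative, not from the essay):

```python
# Toy model of associative trails: documents joined by direct links,
# traversed by following associations, not by looking up a category.
links: dict[str, list[str]] = {
    "grandmother": ["song"],
    "song": ["vacation"],
    "vacation": ["meal"],
    "meal": ["old friend"],
}

def follow_trail(start: str, hops: int) -> list[str]:
    """Walk a trail by repeatedly taking the first linked document."""
    trail = [start]
    for _ in range(hops):
        nxt = links.get(trail[-1])
        if not nxt:
            break  # dead end: no outgoing links
        trail.append(nxt[0])
    return trail
```

Every web page with hyperlinks is a node in exactly this kind of structure; a saved browser history is one recorded trail through it.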
[attached media]
45 replies · 315 reposts · 840 likes · 53.5K views
Tom Knight -- e/🦀🌴 retweeted
Jonathan Eisen@phylogenomics·
Reposting this from April 1, 2007 in honor of Craig Venter. I did this as an April Fool's joke - sharing it as a PDF. I got stressed when I found out Craig had seen it and knew it was by me. But he loved it and said it was OK to share it more widely. So then I posted it on the web.
[4 attached images]
2 replies · 17 reposts · 73 likes · 13.4K views
Tom Knight -- e/🦀🌴 retweeted
Jake Wintermute 🧬/acc@SynBio1·
It’s hard to collect data about how bioterrorists might try to use AI. Few people want to create a bioweapon, and those who might aren’t talking. On the other hand, it's easy to predict how the news will cover bioterrorism and how social media responds. We have years of clickbait headlines and viral scareposts to train on. This makes it much simpler to build a biosecurity policy around avoiding bad headlines, rather than installing safeguards that would actually stop bad actors.

I have a PhD in Synthetic Biology. I know roughly what it would take to make a bioweapon. It would be enormously difficult and dangerous. Most of the work is in the physical world, where AI tools would be only marginally useful. None of the relevant uses of AI look anything like the examples cited in the NY Times story below:

- Printing 8,000-word protocols for methods already in the public domain
- Making a list of common cattle diseases
- Generating a shopping list of test tubes and media
- Describing how to use a weather balloon

The actual biosecurity questions that need answers are technical and too boring to cover in a major media outlet:

- How can we tell the difference between a dangerous DNA sequence and a harmless one?
- What separates a Python script used to discover a therapeutic from one used to discover a toxin?
- Which practical R&D bottlenecks are being rapidly opened by AI and which are not?

Much of the work of biology happens in the real world and doesn’t involve AI much at all. A serious biosecurity policy needs to focus on how bad actors might access physical hardware, specialized facilities and trained personnel. These are infinitely more important barriers than what Claude might tell someone about weather balloons.

My point here is that the people telling you to be afraid, and the media outlets who cover them, are putting us all in danger. The big AI shops are going to lock down their models, not to stop bad actors, but to stop bad press. Training models to stop using scary words is easy; the real work of biosecurity is hard. If we don’t push back, we’re going to end up with an industry dedicated to performative biosecurity theater. nytimes.com/2026/04/29/us/…
9 replies · 27 reposts · 123 likes · 52.2K views
Tom Knight -- e/🦀🌴 retweeted
Ihtesham Ali@ihtesham2005·
A 21-year-old MIT student wrote a master's thesis in 1937 that Harvard's most famous professor of cognitive science later called "possibly the most important master's thesis of the century." I read it at 2am and could not believe one paper had quietly built the entire foundation of every computer that exists today. His name was Claude Shannon. The thesis is called "A Symbolic Analysis of Relay and Switching Circuits."

Every smartphone in your pocket. Every server farm running ChatGPT. Every chip Nvidia ships. Every line of code an engineer has ever written. All of it traces back to a single insight one graduate student had at 21 years old, working on a side project at MIT. Here is the story almost nobody tells you.

Claude Shannon was born in 1916 in a small town in Michigan. He grew up tinkering. Built a telegraph between his house and a friend's house using barbed wire from a nearby fence. Repaired radios for the local department store. He studied both mathematics and electrical engineering at the University of Michigan because he could not decide which one he loved more. That refusal to choose is what eventually made him.

When he got to MIT for graduate school in 1936, he was assigned to operate a strange machine called the differential analyzer. It was room-sized. Mechanical. Built by Vannevar Bush. It used a tangle of gears, shafts, and electrical relays to solve calculus problems. Most students just operated it. Shannon did something else. He stared at the relay circuits inside it. The way they clicked open and closed. The way they routed signals through the machine. He noticed something nobody had noticed before. The relays inside the machine had two states. Open or closed. On or off. One or zero. And the way the relays were wired together to make decisions looked exactly like a 90-year-old branch of mathematics that almost everyone had forgotten about.

Boolean algebra. Invented by a British mathematician named George Boole in the 1850s. Boole had built a system of logic where statements could be true or false, and you could combine them with operators like AND, OR, and NOT to derive new statements. For 90 years, Boolean algebra had been a curiosity. A philosophical tool. Nobody saw a practical use for it.

Shannon saw it. He realized that an electrical circuit was not just an electrical circuit. It was a physical implementation of a logical statement. A switch that closed when both A and B were true was an AND gate. A switch that closed when either A or B was true was an OR gate. The entire branch of pure mathematics that Boole had invented as a thought experiment could be built out of wires and relays. And once you could build logic out of wires, you could build anything that could be expressed in logic out of wires too. This was the insight that quietly created the modern world.

Before Shannon's thesis, electrical engineers designed circuits the way artisans built watches. By feel. By experience. By trial and error. Every new circuit was a craft project. There was no theory underneath it. After Shannon's thesis, circuit design became a branch of mathematics. You could specify the logic you wanted on paper, and translate it directly into a wiring diagram. You could prove a circuit was correct before you built it. You could simplify a circuit by simplifying the underlying logical expression. The MIT historian who reviewed his thesis described the shift in one sentence. It transformed circuit design from an art into a science.

Shannon was 21 years old when he wrote it. That alone would have earned him a place in every computer science textbook on Earth. But Shannon was not done. He spent the next 11 years working on a problem nobody had even framed properly. He wanted to know what information actually was. Not what messages were. Not what signals were. What information was. Mathematically. Quantitatively. As a measurable thing.

In 1948, while working at Bell Labs, he published a 79-page paper called "A Mathematical Theory of Communication." The paper invented the entire field of information theory in a single shot. He proved that all information, regardless of whether it was a voice on a phone, a photograph in a magazine, or a chess move on a board, could be measured in a single unit. He named that unit the bit. Short for binary digit. It was the first time anyone had given information a unit of measurement.

The paper proved something that sounded impossible. He showed that you could send a message reliably through a noisy channel, with arbitrarily low error, as long as you encoded it correctly and stayed below a specific limit he called the channel capacity. Every Wi-Fi connection, every satellite signal, every cell phone call, every fiber optic transmission across the floor of the Pacific Ocean operates inside the mathematical bounds that Shannon proved in this single paper. He did all of this in his spare time while officially working on cryptography for the war effort.

The strangest part of the man is what he did when he was not inventing the future. He rode a unicycle through the hallways of Bell Labs at night while juggling. He built a chess-playing machine in 1950 that played a primitive form of chess decades before computers were supposed to be capable of it. He built an electronic mouse named "Theseus" that could solve a maze and remember the solution. It was one of the first machines on Earth that learned. He built a flame-throwing trumpet for fun. He had a closet full of unicycles in different sizes. He installed a chairlift across his backyard so his kids could get to the lake faster. Marvin Minsky, one of the founders of artificial intelligence, said Shannon was the most genuinely playful great scientist he had ever met. Other people approached research with seriousness. Shannon approached it like a kid who had snuck into the toy store after closing time.

Stevens Institute of Technology called him the least known genius of the 20th century. That title is exactly correct. Most people have heard of Einstein, Turing, von Neumann. Shannon's name barely registers outside engineering departments. Yet without his master's thesis, there is no digital circuit. Without his 1948 paper, there is no internet. Without his framework, there is no measurement of information at all, which means no compression, no error correction, no cryptography, no machine learning.

He died in 2001 at age 84, after years of Alzheimer's disease that took away his ability to recognize the world he had built. Most newspapers ran a small obituary. The world he had given us did not pause. His thesis is on the MIT archive. His 1948 paper is on the Bell Labs site. Both are free. Both are short. Both are still readable today by anyone willing to spend an evening with them. The least known genius of the 20th century is one click away from you. Most people will never open the file.
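Shannon's thesis insight, that relays in series compute AND and relays in parallel compute OR, is easy to replay in software. A toy sketch under those assumptions (the half adder and the `bits` helper are illustrative additions of mine, not examples from the thesis; `bits` comes from the 1948 paper's definition of self-information):

```python
import math

# A closed relay is True, an open one False.
def AND(a: bool, b: bool) -> bool:   # two relays in series
    return a and b

def OR(a: bool, b: bool) -> bool:    # two relays in parallel
    return a or b

def NOT(a: bool) -> bool:            # a normally-closed relay
    return not a

# From these gates, any logic function -- here a one-bit half adder,
# the first step toward doing arithmetic in hardware.
def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Return (sum, carry) for one-bit addition: sum is XOR, carry is AND."""
    s = OR(AND(a, NOT(b)), AND(NOT(a), b))
    return s, AND(a, b)

# Shannon's 1948 unit, the bit: the self-information of an event.
def bits(p: float) -> float:
    """-log2(p): a fair coin flip (p = 0.5) carries exactly one bit."""
    return -math.log2(p)
```

Chaining half adders with carry logic gives a full adder, and full adders give arithmetic, which is why "circuit design became a branch of mathematics" is not a figure of speech.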
[attached media]
68 replies · 815 reposts · 2.6K likes · 134.4K views
Tom Knight -- e/🦀🌴@TomKnightSynBio·
@austinc3301 I think much of this focus on biosecurity is an effort to say 'my model is so powerful we can't allow you to access it,' as a way of implying greater ability than it really has. Who cares if that makes a narrow group (biologists) unhappy? cf. Mythos
0 replies · 0 reposts · 0 likes · 80 views
Agus 🔸@austinc3301·
after chatting with people who actually work on biosafety I am increasingly convinced that major model providers should have a trusted access program with laxer filters but stronger monitoring
7 replies · 3 reposts · 83 likes · 3.2K views
American Wetware@americanwetware·
What’s the cheapest piece of biotech you’ve ever bought? Today we’re launching 25¢ Digital Biology Make your mark on a shared canvas of agar art. Get a digital print of the plate you co-create. 4 pixels for $1. What have you got to lose? cheap.americanwetware.com
1 reply · 6 reposts · 35 likes · 13.7K views