Gio
@johnny_83

De omnibus dubitandum

4.2K posts
Joined April 2011
1.2K Following · 285 Followers

Gio@johnny_83·
@OsintCollective @f4denz @Neesh774 My elderly grandma counts as "any human", but I'm pretty sure she can't spot any of the non-trivial bugs buried under a few metric tons of dogshit-quality code produced by the latest and greatest LLM agents I'm constantly forced to work with.
Replies 0 · Reposts 0 · Likes 0 · Views 18
OSINT Collective@OsintCollective·
@f4denz @Neesh774 It still needs a human in the loop. It makes dumb mistakes that any human would catch on a double check. On the flip side—it's also capable of producing tons of work that would take a human weeks in a mere fraction of the time.
Replies 2 · Reposts 0 · Likes 0 · Views 27
Gio@johnny_83·
@OsintCollective @Neesh774 That’s just smoke and mirrors for a target audience who doesn't know a damn thing about software engineering or AI. I'm pretty sure Nvidia engineers don't give a flying f**k about that dogshit. What a ridiculous act 🤦🏼.
Replies 0 · Reposts 0 · Likes 0 · Views 5
OSINT Collective@OsintCollective·
@Neesh774 I don't think you guys are analyzing this comment right, just imo. Nvidia isn't an AI provider, but they certainly have integrated AI into their workflows. They're not saying everyone's engineers, they're saying their employees. Good framework.
Replies 9 · Reposts 0 · Likes 25 · Views 10.2K

Gio retweeted
Mathelirium@mathelirium·
Even as a Mathematician, I have to say Electrical Engineers impress me in a way few professions do. For me, it's just the way they use Mathematics so naturally, so practically, and so comfortably to build real things like hardware and systems. Even if someone never becomes an Engineer, there is still something worth borrowing from the Engineering mindset: Work with reality, test ideas, accept imperfect models, and keep refining until something useful emerges.

One concept that really stuck with me from my Engineering Optimization course was Space Mapping. You keep a cheap coarse model, fᶜ(x), that runs fast but is only an approximation, and a costly fine model, fᶠ(x), that is much closer to reality. Then you construct a mapping, P, so that fᶜ(P(x)) tracks fᶠ(x) where it matters. Most of the optimisation happens on the cheap side, and the fine model is used carefully and sparingly.

So, you do not wait for a perfect description of the world before you begin. You start with what you have, understand its limits, and still find a way to make it useful.

John Bandler, a Canadian engineer and professor, formalised this in the early 1990s and showed you could make full-wave electromagnetic optimisation practical rather than masochistic. He founded Optimization Systems Associates in 1983 to commercialise the idea, and in 1997 Hewlett-Packard bought the company and folded its tools into what became HP EEsof, then Agilent, now Keysight’s RF design stack.

#SpaceMapping #EngineeringMindset #ComputationalElectromagnetics #RFDesign #Optimization
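The loop described above can be sketched numerically in a few lines. This is a toy 1-D instance of aggressive space mapping with the simplest identity-Jacobian update; the model shapes, the sweep, and the 0.1 misalignment are all invented for illustration (a real application would use an EM simulator for the fine model):

```python
import numpy as np

T = np.linspace(0.5, 2.0, 40)       # sweep points (think: frequencies)

def fine(x):
    """Costly 'fine' model f_f: toy stand-in for the accurate simulator."""
    return np.sin(T * (x + 0.1))    # misaligned with the coarse model by 0.1

def coarse(x):
    """Cheap 'coarse' model f_c: fast but only approximate."""
    return np.sin(T * x)

target = coarse(1.0)                # design goal: reproduce this response
grid = np.linspace(0.0, 2.0, 2001)  # the coarse model is cheap: brute-force it

def coarse_argmin(goal):
    """argmin_z ||f_c(z) - goal|| -- all the search happens on the cheap side."""
    errs = [np.linalg.norm(coarse(z) - goal) for z in grid]
    return grid[int(np.argmin(errs))]

x_c_star = coarse_argmin(target)    # coarse-model optimum (~1.0)

x = x_c_star                        # start the fine design there
for _ in range(10):                 # one fine evaluation per iteration
    p = coarse_argmin(fine(x))      # parameter extraction: p = P(x)
    x = x - (p - x_c_star)          # drive P(x) toward the coarse optimum
# x settles near 0.9, where fine(x) matches the target response
```

Ten fine-model calls against tens of thousands of coarse calls is the whole point: the expensive model is touched sparingly while the cheap one absorbs the optimization.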
Replies 20 · Reposts 203 · Likes 2K · Views 99.5K
Gio@johnny_83·
@pmddomingos TSMC might be the SoH for AI, but they have their own deep dependencies: ASML, Applied Materials, Tokyo Electron, Shin-Etsu, GlobalWafers, SK Hynix, and Micron, to name a few. It only takes a few of those cogs getting stuck for the whole machine to crash and burn.🍿
Replies 0 · Reposts 0 · Likes 0 · Views 157
Pedro Domingos@pmddomingos·
TSMC is the Strait of Hormuz of AI.
Replies 200 · Reposts 713 · Likes 7.6K · Views 444.7K
Gio@johnny_83·
@pmddomingos Yeah, apparently he’s building AI superintelligence in 18 months now. Let's revisit in one month 🤣.
Replies 0 · Reposts 0 · Likes 0 · Views 99
Gio@johnny_83·
@nanogenomic @beffjezos In my experience, nothing good has ever come from single-author preprints paired with “proprietary code base.” I really hope I’m wrong, but at this stage it honestly smells a bit like BS to me.
Replies 0 · Reposts 0 · Likes 0 · Views 61
Gio@johnny_83·
@beffjezos @MartinShkreli @maxmarchione Isn’t this the same Shkreli who raised the price of Daraprim from $13.50 to $750 per pill and was later sentenced to 7 years in prison for securities fraud and conspiracy in an unrelated case?
Replies 0 · Reposts 0 · Likes 0 · Views 61
Gio@johnny_83·
@SERobinsonJr @elonmusk AlphaFold is a sequence-to-structure protein modeling tool. What do you mean by "AF for mutation analysis"?
Replies 0 · Reposts 0 · Likes 0 · Views 30
S.E. Robinson, Jr.@SERobinsonJr·
xAI NEWS: Paul Conyngham, a Sydney-based tech entrepreneur and AI consultant, used Grok to finalize an mRNA vaccine construct for his dog Rosie's mast cell cancer. Paul used three AI models: he combined ChatGPT for initial ideas, AlphaFold for mutation analysis, and Grok took the mutation data and other inputs to create the final sequence for the custom mRNA vaccine targeting those exact mutations. Paul sequenced Rosie's healthy DNA and the tumor DNA for $3,000 at the University of New South Wales' (UNSW) Ramaciotti Research Centre, in Sydney, Australia. He partnered with researchers like Prof. Pall Thordarson and Prof. Martin Smith for manufacturing and injection. They reported a 75% shrinkage in one tumor.
Replies 422 · Reposts 1.2K · Likes 5.5K · Views 863.2K
Gio@johnny_83·
@beffjezos @extropic Looks great on paper, but when will people get their greedy little fingers on one of those scaled-up systems to test shit out?
Replies 0 · Reposts 0 · Likes 1 · Views 57
Beff (e/acc)@beffjezos·
In 3.5 years @extropic:
- reinvented how to use the transistor
- reinvented architectures for probabilistic compute
- reinvented deep learning for thermo compute
- created our CUDA-like THRML
- created our TF-like framework (coming soon)
- scaled our systems 1000x yoy (3 gens of TSUs)
Replies 39 · Reposts 60 · Likes 764 · Views 58K
Hadi Vafaii@hadivafaii·
Summary: we need an energy-aware theory of computation, and rate-distortion theory with Poisson latents is a good start. But this is only the beginning, so please reach out if this sparked a thought/idea! Here's the preprint again: 📜 arXiv: arxiv.org/abs/2602.13421 [11/11] 🧵
Replies 4 · Reposts 2 · Likes 31 · Views 1.4K
Hadi Vafaii@hadivafaii·
The "decoupling of information and energy" is a major point of divergence between biological and artificial computers. Brains are efficient, modern AI isn't. And energy consumption is the biggest bottleneck in scaling AI (you can't hallucinate electrons into existence). To address this we need an "energy-aware theory of computation." And this new preprint is an attempt to address this. [1/11] 🧵
Replies 17 · Reposts 74 · Likes 340 · Views 50.9K
Blaise Agüera (@blaiseaguera.bsky.social)
I’m so honored to see “What Is Intelligence?” named the 2026 PROSE Award winner in the Engineering and Technology category. Thank you to the Association of American Publishers (@AmericanPublish) for this recognition, and to everyone who has supported the book.
Replies 6 · Reposts 18 · Likes 150 · Views 8.8K
Gio@johnny_83·
@martinalexsmith @PathologyRCPA I assume this was a personalized mRNA vaccine encoding one or more MHC-presented neoantigens obtained from tumor NGS. Are there any scientific details available about this? I haven’t found any so far.
Replies 0 · Reposts 0 · Likes 0 · Views 20
Martin Smith@martinalexsmith·
This take on our recent news story is spot on (thanks Palli). Similar logic can also be extended to the use of AI in clinical pathology, a keynote topic at the recent #PathUpdate26 meeting organised by @PathologyRCPA
Ash Jogalekar@curiouswavefn

My take on the whole "AI cures cancer in dog in Australia". It's a very interesting story, but perhaps not for the reasons that are being noted.

In 2007, Freeman Dyson published an essay in The New York Review of Books called “Our Biotech Future.” It contains one of the most memorable predictions about the future of biology I’ve ever read. “I predict that the domestication of biotechnology will dominate our lives during the next fifty years at least as much as the domestication of computers has dominated our lives during the previous fifty years.” Dyson believed biology would eventually follow the trajectory of computing. At first, powerful tools live inside large institutions - universities, government labs, major companies. Over time those tools get cheaper, easier to use, and more widely distributed. Eventually individuals start doing things that once required entire organizations. “Biotechnology will become small and domesticated rather than big and centralized.” He even imagined genome design becoming something almost artistic: “Designing genomes will be a personal thing, a new art form as creative as painting or sculpture.”

Dyson's words rang in my mind as I read the "AI cures dog cancer" story. Much of the coverage framed this as an example of AI discovering new science. But that’s not really the interesting part of the story. The scientific pipeline involved here is actually well known. It closely mirrors the workflow used in personalized neoantigen vaccine research that has been under active development for years. The steps are fairly standard: sequence the tumor, identify somatic mutations, predict which mutated peptides might be recognized by the immune system, encode those sequences in an mRNA construct, and deliver them to stimulate an immune response. The biological targets themselves were almost certainly not new discoveries (I have been unable to find out what they are, but mutations in targets like KIT, which are common, might be involved).

Partly therein lies the rub, since the hardest part of drug discovery, whether in humans or dogs, is target validation, the lack of which leads to lack of efficacy - the #1 reason for drug failure. In neoantigen vaccines, the proteins involved are usually ordinary cellular proteins that happen to contain tumor-specific mutations. AlphaFold, which was used to map the mutations onto specific protein structures, is now a standard part of drug discovery pipelines. The challenge is identifying which mutated peptides might plausibly trigger immunity.

What is interesting though is how the pipeline was assembled. Normally, this type of workflow spans multiple domains - genomics, bioinformatics, immunology, and translational medicine - and in institutional settings those pieces are distributed across specialized teams, document sources, and legal and technical barriers. Navigating the literature, selecting computational tools, interpreting sequencing results, and designing a candidate mRNA construct is typically a collaborative process. In this case, AI appears to have helped compress that process, pulling together data and tools from different sources. Instead of requiring multiple experts, a motivated individual was able to assemble the workflow with AI acting as a kind of guide through the technical landscape.

I’ve seen something similar in my own work while building lead-optimization pipelines in drug discovery. The underlying science hasn’t changed, but the friction involved in assembling the workflow can drop dramatically. Tasks that once required stitching together multiple tools, papers, and areas of expertise can now often be executed much faster with AI helping navigate the terrain; and by faster I mean roughly 100x. That kind of workflow compression is powerful, to say the least. When the cost of navigating technical knowledge drops, more people can realistically assemble sophisticated research pipelines.

This story is a great example of what naively seems like a boring quantitative acceleration of the research process. In that sense, therefore, the real novelty here is not the biology but the combination of three things: a non-specialist orchestrating a complex biomedical pipeline, AI acting as a navigational layer across multiple technical domains, and the resulting decentralization of capabilities that were once confined to institutional research environments.

But I think the story also points to something deeper, which is a challenge to modern regulatory environments. Modern biomedical innovation does not operate solely according to what is scientifically possible. It is structured by regulatory frameworks - clinical trials, safety oversight, institutional review boards, and regulatory agencies. Those systems exist for important reasons, but they also assume that the development of therapies occurs primarily within large, regulated organizations. When individuals begin assembling pieces of these pipelines outside those institutions, the relationship between technological capability and regulatory oversight starts to shift. The dog in this story sits outside the human regulatory framework. That fact alone made the experiment possible. In other words, the story is not just about technological capability; it is also about how certain forms of experimentation can occur when they bypass the regulatory pathways that normally govern biomedical innovation. One is reminded of another Australian, Barry Marshall, who received a Nobel for demonstrating through self-experimentation that ulcers are caused by bacteria.

This raises an interesting question: what happens when the tools for assembling sophisticated biological workflows become widely accessible while the regulatory structures governing them remain institution-centric? That tension may ultimately be the most important implication of this moment. Regulatory frameworks will need to adapt to this kind of citizen science.

Seen in this light, the story about the AI-assisted vaccine is less about a breakthrough in cancer therapy and more about a glimpse of the early stages of something Dyson anticipated nearly two decades ago: the domestication of biotechnology. If AI continues to reduce the cognitive overhead required to navigate biological knowledge and assemble complex pipelines, the boundary between professional research and motivated individuals may begin to blur. That shift will require careful thinking about safety, governance, and responsibility. But it also carries an exciting possibility. Dyson imagined a world in which biological design might eventually become something like a creative craft practiced not only by institutions but also by curious individuals experimenting at smaller scales. For a long time that vision felt distant. Now, it feels like we may be seeing the first hints of it.
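The in-silico half of the standard workflow the essay describes (compare tumor to germline, call somatic mutations, pick mutation-spanning peptides, score them, encode the winner as mRNA) can be caricatured in a few lines. Everything here is invented for illustration: the sequences are toys, and `toy_binding_score` is a stub standing in for the validated MHC-binding predictors that real pipelines use.

```python
# Toy sketch of a neoantigen selection workflow. NOT a real pipeline:
# sequences are made up and the scoring function is a placeholder.

NORMAL = "MKTLLVAGGS"   # toy germline protein
TUMOR  = "MKTLLVRGGS"   # toy tumor protein (A->R somatic mutation at index 6)

def somatic_mutations(normal, tumor):
    """Positions (0-based) where the tumor protein differs from germline."""
    return [i for i, (a, b) in enumerate(zip(normal, tumor)) if a != b]

def candidate_peptides(seq, pos, k=5):
    """All k-mers of seq that span the mutated position."""
    return [seq[s:s + k] for s in range(max(0, pos - k + 1), pos + 1)
            if s + k <= len(seq)]

def toy_binding_score(pep):
    """Stand-in for an MHC-binding predictor (NOT a real model)."""
    return sum(pep.count(r) for r in "RKFY") / len(pep)

# Minimal codon table covering only the residues used in the toys above.
CODON = {"M": "AUG", "K": "AAA", "T": "ACU", "L": "CUU", "V": "GUU",
         "A": "GCU", "G": "GGU", "S": "UCU", "R": "CGU"}

def to_mrna(pep):
    """Encode a peptide as a (toy) mRNA coding sequence."""
    return "".join(CODON[a] for a in pep)

muts = somatic_mutations(NORMAL, TUMOR)            # -> [6]
peps = [p for m in muts for p in candidate_peptides(TUMOR, m)]
best = max(peps, key=toy_binding_score)            # pick the top candidate
construct = to_mrna(best)                          # the "vaccine" insert
```

The structure mirrors the essay's point: each step is individually routine; what AI compresses is the work of knowing which steps exist and how to chain them.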

Replies 1 · Reposts 0 · Likes 12 · Views 1.4K
Evgenii@joe_fenrir·
@ElliotHershberg @btnaughton I wonder who the target audience of this article is. It is highly specialized, yet it explains very basic stuff known to most people in structural biology.
Replies 1 · Reposts 0 · Likes 6 · Views 312
Elliot Hershberg@ElliotHershberg·
How to Design Antibodies (with AI) asimov.press/p/antibody-des…
@btnaughton is one of the best applied practitioners in the rapidly evolving world of computational protein design. Today, he published a step-by-step primer on the five steps that go into this new science:
1. Choosing a Target
2. Preparing the Target Structure
3. Running a Design Campaign
4. Filtering and Selecting Candidates
5. Experimentally Validating the Results
Worth reading to better understand this frontier. It's also a great resource for understanding how the different models and products in this space can be compared right now.
Replies 2 · Reposts 39 · Likes 209 · Views 11.7K