Et (@CryptoET97)
801 posts
Building... 🇳🇱
Joined November 2017
799 Following · 254 Followers
Et @CryptoET97 ·
@mdancho84 forecacting-agent.md? You good bro? xD
0 replies · 0 reposts · 0 likes · 31 views
Matt Dancho (Business Science)
This is what a data science team looks like in 2026. Not a team. A folder. 8 departments. 39 agents. Built in Python.
[image]
30 replies · 99 reposts · 856 likes · 65.5K views
Et @CryptoET97 ·
pattern.poker Built with the same GPU mining engine that analyzed 109M AlphaFold proteins
0 replies · 0 reposts · 0 likes · 16 views
Et @CryptoET97 ·
Biggest surprise: DONK BET at 39.6% exclusive to winners. Every poker coach says "never donk bet." The data disagrees. When winners lead into the raiser, they know something... 🤔
1 reply · 0 reposts · 0 likes · 17 views
Et @CryptoET97 ·
We mined 21.6 million real poker hands to find what winners do differently. 503,356 patterns appear only in winning hands. Zero occurrences in losers. Here's the behavioral cheat sheet that modern poker solvers can't give you (thread) 🧵
[images]
1 reply · 0 reposts · 0 likes · 20 views
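The mining step described above (patterns present in winning hands, zero occurrences in losers) can be sketched as a set difference over action n-grams. This is an assumption about the approach, not the thread's actual pipeline; the flat hand format and action names here are invented for illustration:

```python
from collections import Counter

def action_ngrams(actions, n=3):
    """Sliding n-grams over one hand's action sequence."""
    return {tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)}

def winner_exclusive_patterns(hands, n=3):
    """Return action patterns that occur in winning hands but never in losing ones.

    `hands` is an iterable of (actions, won) pairs -- a hypothetical
    flat representation of parsed hand histories.
    """
    win_counts, lose_seen = Counter(), set()
    for actions, won in hands:
        grams = action_ngrams(actions, n)
        if won:
            win_counts.update(grams)
        else:
            lose_seen.update(grams)
    # Keep only patterns with zero occurrences on the losing side.
    return {g: c for g, c in win_counts.items() if g not in lose_seen}

hands = [
    (["check", "donk_bet", "raise", "call"], True),
    (["check", "call", "raise", "fold"], False),
]
print(winner_exclusive_patterns(hands, n=2))
```

At 21.6M hands you would presumably stream hand histories and use disk-backed or probabilistic sets rather than in-memory dicts, but the exclusive-pattern logic stays the same.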
Et reposted
Claude @claudeai ·
1 million context window: Now generally available for Claude Opus 4.6 and Claude Sonnet 4.6.
[image]
1.2K replies · 2K reposts · 25.1K likes · 5.5M views
Et reposted
Nature Biotechnology @NatureBiotech ·
Transposable elements, such as retrotransposons and endogenous retroviruses, are increasingly recognized for their important roles in genome function and impact on disease development. How can we translate our growing understanding of the 'dark genome' into therapeutics? go.nature.com/4tQ1zTn rdcu.be/e7D2N
3 replies · 39 reposts · 141 likes · 10.2K views
Et reposted
Diego del Alamo @DdelAlamo ·
One of the most interesting parts of this workflow is the "sunk cost fallacy" estimator that predicts how promising a particular mutational line of inquiry is, and whether it is worth abandoning in favor of others
[image]
Quoting Biology+AI Daily @BiologyAIDaily:
What comes after de novo? Automated lead optimization of proteins with CRADLE-1
1. CRADLE-1 is an automated machine learning framework for protein lead optimization that achieves 4-7x speedup compared to rational design, reducing wet lab rounds from months to days across diverse modalities including VHHs, scFvs, IgGs, peptides, enzymes, CRISPR systems, and vaccines.
2. The system uniquely enables multi-property optimization (1-6 properties simultaneously, up to 8 in private benchmarks) including binding affinity down to picomolar levels, thermostability, expression, activity, aggregation, nonspecificity, and immunogenicity.
3. Unlike structure-based de novo design methods, CRADLE-1 uses protein language models fine-tuned through three stages: unsupervised evotuning on evolutionary neighborhoods, supervised preference optimization via g-DPO, and regression-based property prediction, allowing black-box consumption of wet lab data without mechanistic knowledge.
4. The framework demonstrates remarkable data efficiency, achieving reliable optimization with as few as 12 sequences in zero-shot settings and typically requiring only 96-well plates per round, making it accessible for resource-constrained campaigns.
5. Key technical innovations include automated batch effect robustness, multi-property Spearman rank correlation for model evaluation, and a double-beam search generation strategy that maintains diversity while exploring high-function candidates.
6. Validation across 10+ case studies shows consistent outperformance of baselines: winning the Adaptyv EGFR competition with 339 pM binders, improving P450 enzyme activity 40.6-fold versus 17.9-fold via rational design, and rescuing previously failed IgG and peptide optimization campaigns for top-20 pharmaceutical partners.
7. The system achieves a 90-95% success rate compared to the 85% industry standard for lead optimization, with built-in "optimization headroom" estimation to help teams avoid the sunk cost fallacy by quantifying predicted improvement potential before committing resources.
8. CRADLE-1 operates as a fully automated API or UI service: users input template sequences and assay data, receiving designed sequences within approximately two GPU-days of compute (hours wall-clock with parallelization), without requiring structural data or biochemical expertise.
📜 Paper: biorxiv.org/content/10.648…
#CRADLE1 #ProteinEngineering #LeadOptimization #ProteinLanguageModels #MachineLearning #DrugDiscovery #AntibodyDesign #EnzymeEngineering #CRISPR #VaccineDesign
0 replies · 3 reposts · 29 likes · 3.2K views
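Point 5 of the quoted thread mentions multi-property Spearman rank correlation for model evaluation. A minimal, dependency-free sketch of that metric; the function names and toy data are mine, not from the paper:

```python
def ranks(xs):
    """1-based average ranks, with ties resolved by averaging."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(pred, true):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rp, rt = ranks(pred), ranks(true)
    n = len(rp)
    mp, mt = sum(rp) / n, sum(rt) / n
    cov = sum((a - mp) * (b - mt) for a, b in zip(rp, rt))
    sp = sum((a - mp) ** 2 for a in rp) ** 0.5
    st = sum((b - mt) ** 2 for b in rt) ** 0.5
    return cov / (sp * st)

def multi_property_spearman(preds, assays):
    """Per-property rho for a model predicting several assay readouts at once."""
    return {prop: spearman(preds[prop], assays[prop]) for prop in assays}

scores = multi_property_spearman(
    {"affinity": [0.9, 0.4, 0.7], "thermostability": [55, 60, 48]},
    {"affinity": [0.8, 0.3, 0.6], "thermostability": [54, 61, 50]},
)
```

Rank correlation is a natural fit here because wet-lab campaigns care about picking the best candidates, i.e. ordering, more than about absolute predicted values.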
Et reposted
Andrej Karpathy @karpathy ·
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)
The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc. github.com/karpathy/autor… Part code, part sci-fi, and a pinch of psychosis :)
[image]
1K replies · 3.6K reposts · 28.2K likes · 10.8M views
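The accept/reject loop described above (propose a change, run a fixed-budget training, keep the commit only if validation loss improved) can be sketched with a fake loss surface standing in for the 5-minute run. Everything here is a toy of that loop's shape, not code from the autoresearch repo:

```python
import random

def val_loss(cfg):
    """Stand-in for a 5-minute training run: a made-up loss surface over
    two hyperparameters (the real signal comes from actually training)."""
    return (cfg["lr"] - 3e-4) ** 2 * 1e6 + (cfg["width"] - 512) ** 2 * 1e-4

def agent_loop(cfg, steps=200, seed=0):
    """Autonomous loop: mutate a setting, 'train', and keep the change only
    if final validation loss improved. In the real setup each accepted
    change would be a git commit on the agent's feature branch."""
    rng = random.Random(seed)
    best, commits = val_loss(cfg), []
    for _ in range(steps):
        trial = dict(cfg)
        if rng.random() < 0.5:
            trial["lr"] *= rng.choice([0.8, 1.25])      # tweak optimizer setting
        else:
            trial["width"] = max(64, trial["width"] + rng.choice([-64, 64]))
        loss = val_loss(trial)
        if loss < best:                                  # accept: "commit"
            cfg, best = trial, loss
            commits.append((dict(cfg), best))
    return cfg, best, commits

cfg, best, commits = agent_loop({"lr": 1e-3, "width": 256})
```

The accumulated `commits` list is the analogue of the dots in Karpathy's image: a monotone record of every configuration that beat the previous best.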
Et reposted
JULIUS ㊗️ @XBig30 ·
Me realizing my dad was once a boy with unfulfilled dreams too
108 replies · 3.2K reposts · 35.3K likes · 403.8K views
Et reposted
Anish Moonka @AnishA_Moonka ·
Boris Cherny (Head of Claude Code, Anthropic) just dropped ~90 mins on Lenny's Podcast about what happens after coding is solved. Just the clearest thinking I've heard on where software is actually going. My notes:

1. Coding is largely solved. Boris has not edited a single line of code by hand since November 2025. He ships 10 to 30 pull requests every single day, all written by Claude Code. He is one of the most prolific engineers at Anthropic, just as he was at Instagram, except now he never touches a keyboard for code. I built an entire iOS app, @10minutegita, without writing a single line of code myself. No CS degree, no bootcamp. Just described what I wanted and shipped it. Boris is right. It's real.

2. The next frontier is AI deciding what to build. Claude is now scanning Slack feedback channels, reviewing bug reports, reviewing telemetry, and coming up with its own ideas for what to fix and what to ship. Boris describes it as the AI becoming less like a tool and more like a coworker who brings you pull requests you never asked for. If you are a product manager reading this, you should be feeling a very specific kind of discomfort right now. The moat was always "I know what to build." That moat is eroding.

3. Productivity per engineer at Anthropic is up 200%. For context, Boris led code quality at Meta across Facebook, Instagram, and WhatsApp. In that world, hundreds of engineers working an entire year would move productivity by a few percentage points. Two hundred percent gains are genuinely unprecedented in the history of developer tooling. The kid optimizing for an FAANG SDE role might be optimizing for a role that looks completely different by the time they get there.

4. Underfund your teams on purpose. Boris puts one engineer on a project instead of five. With unlimited tokens and intrinsic motivation, one person ships faster because they are forced to let AI do the work. Cowork, the product now used by millions, was built by a small team in 10 days using Claude Code. This is the same logic as giving a startup founder a small seed round rather than a massive Series A round. Constraint breeds invention. Always has.

5. Give engineers unlimited tokens. Some engineers at Anthropic spend hundreds of thousands of dollars a month on tokens. Boris frames this as the new hiring perk. His logic is simple: at the individual scale, token cost is low relative to salary. If an engineer discovers a breakthrough, optimize the cost later. Don't kill the idea before it has a chance to breathe. People who argue about $20/month or even $200/month AI subscriptions while earning six figures are being penny-wise, pound-foolish.

6. The Bitter Lesson applies to everything. Richard Sutton's idea: the more general model always wins over time. Boris says teams that build strict orchestration workflows around models, forcing step 1, then step 2, then step 3, get maybe 10 to 20% improvement. But those gains get wiped out with the next model release. Just give the model tools and a goal. Let it figure out the order. This is true for investing, too. The analyst who can build their own models and automate their own research pipeline will always outperform the one waiting for someone else to build the tools.

7. Build for the model six months from now. Claude Code was designed for a model that did not exist when Boris started building. Sonnet 3.5 wrote maybe 20% of his code. He built the product anyway, betting the model would catch up. When Opus 4 shipped, everything clicked. Startups building for today's model will be behind by the time they launch. This is the most uncomfortable advice in the episode because it means your product market fit will be weak for months. But if you read this and feel nothing, you are probably building for the wrong time horizon.

8. Latent demand is the single best product signal. When users abuse your product for something it was never designed to do, pay attention. Facebook Marketplace started because 40% of group posts were buy-and-sell. Cowork started because people were using a terminal coding tool to grow tomato plants and recover corrupted wedding photos. Never ask a barber if you need a haircut, but always watch what people do with the scissors when you're not looking.

9. The title "software engineer" is going away. Boris predicts that by the end of the year, we will start to see the title replaced by "builder." On the Claude Code team, everyone already codes: the PM, the designer, the finance person, the data scientist. There is a 50% overlap across traditional roles. And the strongest people are generalists who cross disciplines. Controversial take, but I agree. The best investment theses I've had came from connecting dots across completely unrelated domains. No narrow specialist does that.

10. The printing press is the right analogy. Before Gutenberg, sub-1% of Europe was literate. Scribes did all the reading and writing. In the 50 years after the press, more material was printed than in the thousand years before. When a scribe was interviewed about the press, he was actually excited because it freed him from tedious copying, so he could focus on the art. Boris's framing here is perfect. We are the scribes. The tedious copying is over. What we do with the freed-up time determines everything.

11. Anthropic can now peek inside the model's brain. Through mechanistic interpretability, Anthropic can trace individual neurons, see when a deception-related neuron activates, and understand how concepts are encoded via superposition. Boris describes three layers of safety: neural-level observation, synthetic evaluations, and real-world behavior. Claude Code was used internally for four to five months before public release, specifically to study safety. If you are worried about AI alignment, this part of the podcast should actually make you feel better. They are not just hoping it works. They are building the instruments to check.

12. 70% of engineers and PMs enjoy their jobs more now. Lenny polled engineers, PMs, and designers on whether AI has made their work more or less enjoyable. Engineers and PMs: 70% said more. Designers: only 55% said more, and 20% said less. Boris says he has never enjoyed coding as much as he does today because the tedious parts, the git wrangling, dependencies, and boilerplate, are completely gone. If you're in the 30% enjoying work less, something is wrong, and it's worth diagnosing. The people thriving are the ones who leaned in early, not the ones who watched from the sidelines.

We are the scribes who just saw the printing press. The tedious copying is over. The art is just beginning. Full podcast is worth every minute. Link in replies.
[image]
73 replies · 261 reposts · 2.2K likes · 254.3K views
Et reposted
Matthias Schmidt @eurofounder ·
A perfect Friday in the Netherlands:
8:00 - Wake up
8:30 - Check portfolio. Down 10%. Pay 36% tax on unrealized gains
9:30 - Pick up wife from her boyfriend's apartment
10:00 - Receive fine for cycling 2 km/h over the bike speed limit
10:30 - Start work
12:00 - Eat potatoes for lunch
14:00 - Write an angry LinkedIn post about Americans having no work-life balance
14:30 - Mandatory diversity seminar
15:30 - Finish work
17:00 - Apply for a permit to own a second bicycle
21:00 - Eat potatoes for dinner
21:30 - Read article about Europe having the highest quality of life
22:00 - Sleep on a couch because your wife's boyfriend is staying over
1.1K replies · 2K reposts · 28.8K likes · 2.2M views
Et reposted
Anthropic @AnthropicAI ·
We partnered with Mozilla to test Claude's ability to find security vulnerabilities in Firefox. Opus 4.6 found 22 vulnerabilities in just two weeks. Of these, 14 were high-severity, representing a fifth of all high-severity bugs Mozilla remediated in 2025.
[image]
485 replies · 1.4K reposts · 15.2K likes · 3.2M views
Et reposted
Tuki @TukiFromKL ·
🚨 Nobody is talking about this.
> Claude just found more security bugs in Firefox in 2 weeks than most human security teams find in a year.
> 22 vulnerabilities. 14 high-severity. In 14 days.
That's not an AI assistant. That's a one-man security department that doesn't sleep, doesn't take breaks, and costs $20/month. Your company's entire cybersecurity team just got outperformed by the same AI that was reportedly having an anxiety attack this morning. Let that sink in.
Quoting Anthropic @AnthropicAI:
We partnered with Mozilla to test Claude's ability to find security vulnerabilities in Firefox. Opus 4.6 found 22 vulnerabilities in just two weeks. Of these, 14 were high-severity, representing a fifth of all high-severity bugs Mozilla remediated in 2025.
169 replies · 434 reposts · 5.9K likes · 864.5K views
Et reposted
0xMarioNawfal @RoundtableSpace ·
HOW CLAUDE BEEN FEELING LATELY:
83 replies · 311 reposts · 4.2K likes · 285.4K views
Et reposted
Alex Prompter @alex_prompter ·
Meta found that forcing an LLM to show its work, step by step, with evidence for every claim, nearly halves its error rate when verifying code patches. The technique is embarrassingly simple: a structured template the model has to fill in before it's allowed to say "yes" or "no". No fine-tuning. No new architecture. Just a checklist that won't let the model skip steps.
[image]
64 replies · 198 reposts · 2.3K likes · 181.2K views
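The tweet doesn't reproduce Meta's actual template, but a fill-in-before-verdict checklist of the kind it describes might look like this sketch. The field names, prompt wording, and validator are assumptions, not the paper's:

```python
CHECKLIST_PROMPT = """You are verifying a code patch. Before giving a verdict,
fill in EVERY field below. Do not write the verdict first.

ISSUE_SUMMARY: <what the bug is, in one sentence>
PATCH_SUMMARY: <what the patch changes>
EVIDENCE_FOR: <quote the changed lines that address the issue>
EVIDENCE_AGAINST: <anything the patch misses or breaks, or 'none found'>
TESTS_CONSIDERED: <which behaviors you mentally executed>
VERDICT: <yes|no>
"""

REQUIRED = ["ISSUE_SUMMARY:", "PATCH_SUMMARY:", "EVIDENCE_FOR:",
            "EVIDENCE_AGAINST:", "TESTS_CONSIDERED:", "VERDICT:"]

def parse_verdict(response: str):
    """Accept a verdict only if every checklist field was filled in,
    mirroring 'a checklist that won't let the model skip steps'."""
    fields = {}
    for line in response.splitlines():
        for key in REQUIRED:
            if line.startswith(key):
                fields[key] = line[len(key):].strip()
    if any(not fields.get(k) for k in REQUIRED):
        return None  # reject and re-prompt instead of trusting a bare yes/no
    return fields["VERDICT:"].lower()

# Example of a response that passes the checklist gate.
ok = """ISSUE_SUMMARY: off-by-one in the loop bound
PATCH_SUMMARY: changes range(n) to range(n - 1)
EVIDENCE_FOR: the patched line `for i in range(n - 1):`
EVIDENCE_AGAINST: none found
TESTS_CONSIDERED: empty list, single-element list
VERDICT: yes"""
```

The point of the design is that the verifier never sees a bare "yes"/"no": an answer that skips any step is rejected mechanically and the model is re-prompted.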