Jan Novotný 🇺🇦
9.3K posts

Jan Novotný 🇺🇦
@novoj
Guerilla developer. Fuck the war! Mastodon: https://t.co/xNCw3B1rJ9
Czech Republic · Joined February 2009
387 Following · 631 Followers
Jan Novotný 🇺🇦 retweeted

45 Claude Code Tips: From Basics to Advanced
github.com/ykdojo/claude-…

@FoldynaJaroslav Every nation has its Čurdas. But that comparison falls short in your case. You betray the Czechs for reasons other than fear.
Jan Novotný 🇺🇦 retweeted

They'll take money from the intelligence services, so we don't know who the Russians are grooming here.
They'll take money from the army, so we can't defend ourselves against the Russians.
They'll tie up our NGOs with a Russian-style law, so they can't function.
They want sanctions lifted, so the Russians have money again.
They want to buy Russian oil, so the Russians can even afford deporting children to Siberia.
ANO, this is not a government for Czechs.
Jan Novotný 🇺🇦 retweeted

Imagine you're John Carmack
you're 22 years old and you just wrote a 3D engine in assembly that runs at 35fps on a 486
Doom drops. Quake drops. Half the planet is playing your code.
you're the reason GPUs exist. you're the reason your friend Jensen has a yacht today.
then in 2009, you sell id Software. people call it betrayal. you call it "they made an offer I couldn't refuse."
VR obsession. Oculus. Meta buys it for $2B. you're CTO.
but Meta thinks you're a liability. your demos are "too intense." your emails are "too long." your focus on frame timing is "slowing us down."
2022. they push you out. not fired officially. just "restructured."
the media writes "end of an era." some crypto bro calls you "washed up."
silicon valley moves on.
but you don't.
you don't write a book. you don't start a podcast. you don't collect speaking fees.
you go completely quiet.
you take the money. you buy a warehouse in Texas. you hire 10 engineers. and you start coding.
not games. not VR.
AGI.
two years. radio silence. no tweets. no conference talks.
while everyone's debating ChatGPT, you're debugging CUDA kernels at 3AM, testing world models.
then in 2025, Keen Technologies pivots hard. you're not "exploring" anymore. you're building it.
here's what people get wrong:
everyone calls it a comeback. a redemption arc. "revenge on Meta."
it's none of that.
you're a 54-year-old engineer who still codes 12 hours a day because you genuinely can't stop.
most CTOs would have bought an island. most legends would have written memoirs.
you just kept typing.
the most dangerous person in any codebase is the one who goes quiet and never stops shipping commits.
karma doesn't need to be real.
but obsession is.
welcome back, Carmack.

Jan Novotný 🇺🇦 retweeted

🚨 BREAKING: Someone built a swarm of thousands of AI agents with real memories and personalities and used it to predict the future.
MiroFish is a universal swarm intelligence engine. And the live demos are scarily accurate.
Here is what it actually does:
→ Spins up thousands of autonomous agents simultaneously
→ Each agent has its own memory, personality, and behavior
→ Feeds on real-world data powered by GraphRAG
→ Predicts markets, public opinion, and narrative outcomes
→ Simulates how crowds think before it happens
The live demos are what got people. Scarily accurate is the phrase everyone keeps using.
17,300 stars. +2,907 in a single day.
It's 100% free and open source.
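The agent loop the list above describes can be sketched in a few lines. Everything here is hypothetical: the class shape, the `trust` knob, and the blending rule are illustrative stand-ins, not MiroFish's actual internals.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Hypothetical agent shape; the real project's internals may differ.
    name: str
    trust: float                      # "personality" knob in [0, 1]
    memory: list = field(default_factory=list)

    def opine(self, signal: float) -> float:
        # Blend the fresh signal with remembered past signals,
        # weighted by how much this agent trusts new information.
        self.memory.append(signal)
        recalled = sum(self.memory) / len(self.memory)
        return self.trust * signal + (1 - self.trust) * recalled

def swarm_opinion(agents, signal):
    # Aggregate "prediction" = mean of the individual opinions.
    return sum(a.opine(signal) for a in agents) / len(agents)

random.seed(0)
swarm = [Agent(f"agent-{i}", random.random()) for i in range(1000)]
for signal in (0.8, 0.2):             # two rounds of incoming data
    print(round(swarm_opinion(swarm, signal), 3))
```

The point of the sketch is only the structure: per-agent memory plus per-agent personality, aggregated into one crowd-level number.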


@borekb I was pleasantly surprised that Ctrl-R works for searching previously entered input, just like in bash.

This will be nice once they finish adding it to CC; in the meantime, ccsearch plus a custom skill.
GitHub@github
So you forgot how you fixed that bug the other day. 😬 No worries. GitHub Copilot CLI remembers. With its local SQLite database of your session history (and a full-text search index), you can instantly pull up context from your previous work. ✅ Install Copilot CLI and save brain space. ⬇️ github.com/features/copil…
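The local full-text search the quoted tweet describes can be approximated with SQLite's built-in FTS5 module. Copilot CLI's actual schema isn't shown anywhere in the tweet, so the table layout and column names below are guesses for illustration only.

```python
import sqlite3

# In-memory stand-in for a local session-history database.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE sessions USING fts5(ts, content)")
db.executemany(
    "INSERT INTO sessions VALUES (?, ?)",
    [
        ("2025-01-10", "fixed the null pointer bug in parser.c by guarding strtok"),
        ("2025-01-11", "refactored the build scripts to use cmake presets"),
    ],
)

# Full-text query: pull up context from previous work.
rows = db.execute(
    "SELECT ts, content FROM sessions WHERE sessions MATCH ?",
    ("bug parser",),
).fetchall()
print(rows)  # only the 2025-01-10 entry matches both terms
```

FTS5's MATCH with two bare terms requires both to appear, which is why only the first entry comes back.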
Jan Novotný 🇺🇦 retweeted

Remember that "one more turn" itch from Sid Meier’s #Civilization?
@claudeai is the same: just one more feature, one more fix, one more hypothesis to explore. But even builders need sleep.
Taking the weekend off to recharge (first real break since I went Max). 🔋

To minimize the request count, I pre-approved a set of generally safe commands Claude uses in the global settings.json (pwd, cat, head, etc.), but I also need to instruct Claude in the global CLAUDE.md to prefer those and avoid interrupting work with permission requests. Is that necessary @bcherny?
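For reference, pre-approval like the tweet describes lives under the `permissions.allow` key of Claude Code's settings.json; the exact entries below are an illustrative fragment, not the author's actual file.

```json
{
  "permissions": {
    "allow": [
      "Bash(pwd)",
      "Bash(cat:*)",
      "Bash(head:*)",
      "Bash(ls:*)"
    ]
  }
}
```

The `:*` suffix allows the command with any arguments, while an entry without it matches only the bare command.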
Jan Novotný 🇺🇦 retweeted

Prof. Donald Knuth opened his new paper with "Shock! Shock!"
Claude Opus 4.6 had just solved an open problem he'd been working on for weeks — a graph decomposition conjecture from The Art of Computer Programming.
He named the paper "Claude's Cycles."
31 explorations. ~1 hour. Knuth read the output, wrote the formal proof, and closed with: "It seems I'll have to revise my opinions about generative AI one of these days."
The man who wrote the bible of computer science just said that. In a paper named after an AI.
Paper: cs.stanford.edu/~knuth/papers/…

Jan Novotný 🇺🇦 retweeted

@AnthropicAI Demo of a potential AI surveillance system, and this isn't even 1% of what it's actually capable of. Every data source in this demo is legally purchasable today.
claude.ai/public/artifac…
Jan Novotný 🇺🇦 retweeted

I have been optimizing our custom LLM runner's tool APIs recently. It's easy to get 2x gains from rather simple improvements. I'd say our tool is over 10x faster now than it was a week ago, and it does a better job: fewer LLM<->tool roundtrips, significantly fewer tokens.
Lean on the LLM to generate the context with tool API calls. Don't try to hack the prompt yourself (by grepping for it, for example). The LLM knows best what info it needs and how it operates.
There's of course a lot of manual work to go through the logs and the LLM<->tool API inputs/outputs. So I started feeding these logs back to the same AI to analyze them, and since it's the same AI that makes those tool calls, it also has pretty good intuition for which tool APIs it needs and why it used them in a suboptimal way. Then I improve the APIs accordingly and run the same prompts again to see whether the problems are gone.
I also recently automated our LLM tool API improvements even further. I added a new tool API for the LLM to give us a feedback log directly: performance warnings, invalid API calls (syntax misunderstandings), etc. The LLM knows why it called a given tool, so it knows the reasoning and can easily dump extra warning logs. This works surprisingly well.
LLM tooling is yet another example where deep understanding and optimization matters. You have to analyze what is happening and why and fix the issues. Then validate and benchmark the results.
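The feedback channel described above can be sketched as one extra tool exposed to the model. The function name, argument names, and log shape here are hypothetical illustrations, not the author's actual API.

```python
import json

# Hypothetical feedback sink that the LLM can call as a tool.
feedback_log: list[dict] = []

def report_tool_feedback(tool: str, kind: str, detail: str) -> str:
    """Extra tool API the LLM can call to report performance warnings,
    invalid calls, and other problems it noticed while working."""
    feedback_log.append({"tool": tool, "kind": kind, "detail": detail})
    return "logged"

# What a model-initiated call might look like:
report_tool_feedback(
    tool="search_code",
    kind="performance",
    detail="needed 4 sequential calls; a batch parameter would cut roundtrips",
)

# The developer later mines this log to decide which APIs to improve.
print(json.dumps(feedback_log, indent=2))
```

The design point is that the reporter and the consumer of the tool APIs are the same model, so its complaints tend to point at real friction.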
Thariq@trq212

Jan Novotný 🇺🇦 retweeted

Holy shit… Your anonymous internet identity can now be unmasked for $1 😳
Not by the FBI. By anyone with access to Claude or ChatGPT and a few of your Reddit comments.
ETH Zurich and Anthropic just dropped a paper called “Large-Scale Online Deanonymization with LLMs” and the results are the most alarming privacy research I’ve read this year.
They built an automated pipeline that takes your anonymous posts, extracts identity signals, searches the web, and figures out who you are.
No human investigator needed. Fully autonomous. Works on Hacker News, Reddit, LinkedIn, even redacted interview transcripts.
Here’s how bad the numbers are.
On Hacker News users: 67% identified correctly.
When the system made a guess, it was right 90% of the time.
On Reddit academics posting under pseudonyms: 52%.
On scientists whose interview transcripts were explicitly redacted for privacy: 9 out of 33 still got unmasked.
The pipeline works in four steps they call ESRC. Extract identity signals from your posts using LLMs.
Search for candidate matches using embeddings across thousands of profiles.
Reason over top candidates with models like GPT-5.2. Calibrate confidence so when it does guess, it’s almost never wrong.
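The four ESRC stages can be sketched as a pipeline skeleton. Every function body below is a toy placeholder for the LLM, embedding, and web-search machinery the paper describes; none of this is the authors' code, and the sample data is invented.

```python
# Toy skeleton of the ESRC loop: Extract, Search, Reason, Calibrate.

def extract_signals(posts):
    # Stage 1 (Extract): pull identity cues from text.
    # Stand-in heuristic: capitalized words; the paper uses LLMs.
    cues = set()
    for post in posts:
        cues.update(w for w in post.split() if w[:1].isupper())
    return cues

def search_candidates(cues, profiles):
    # Stage 2 (Search): rank candidates by cue overlap
    # (the paper uses embedding similarity over thousands of profiles).
    return sorted(profiles, key=lambda p: -len(cues & set(p["facts"])))

def reason(cues, candidate):
    # Stage 3 (Reason): score how well a candidate explains the cues.
    return len(cues & set(candidate["facts"])) / max(len(cues), 1)

def calibrate(score, threshold=0.5):
    # Stage 4 (Calibrate): abstain unless confident,
    # trading recall for precision.
    return score >= threshold

posts = ["Moved to Zurich for a compiler job", "I bike to ETH daily"]
profiles = [
    {"name": "alice", "facts": ["Zurich", "ETH", "compilers"]},
    {"name": "bob", "facts": ["Austin", "gamedev"]},
]
cues = extract_signals(posts)
best = search_candidates(cues, profiles)[0]
print(best["name"], calibrate(reason(cues, best)))
```

The calibration stage is what produces the "right 90% of the time when it guesses" behavior: the system answers only above a confidence threshold.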
The classical deanonymization method from the famous Netflix Prize attack? Nearly 0% recall across every test.
LLMs didn’t just improve on old techniques. They made old techniques look like toys.
When they scaled to temporally split Reddit profiles, matching a user’s old posts to their newer ones across a full year gap, the pipeline hit 67% recall at 90% precision and 38% recall at 99% precision.
Meaning even a year of changed interests and different conversations wasn’t enough to hide.
More reasoning compute = better deanonymization. High reasoning effort doubled recall at 99% precision in some tests. As frontier models get smarter, this attack strengthens automatically. Every model upgrade is a privacy downgrade.
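As a quick sanity check on what "67% recall at 90% precision" means, here is the arithmetic on an illustrative population of 100 users; the population size is made up, the rates are the thread's.

```python
# What "67% recall at 90% precision" implies for 100 anonymous users.
users = 100
recall = 0.67      # share of all users correctly identified
precision = 0.90   # when the system does guess, how often it's right

correct = users * recall        # 67 correct identifications
guesses = correct / precision   # total guesses needed to contain them
wrong = guesses - correct       # bad guesses among those

print(round(guesses, 1), round(wrong, 1))  # → 74.4 7.4
```

So the system names about 74 of the 100 users and is wrong only about 7 times; the remaining 26 get no guess at all.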
What makes it nearly impossible to defend against: the pipeline splits into subtasks that all look benign. Summarize a profile.
Compute embeddings. Rank candidates. No single API call screams “deanonymization.” The researchers themselves say they’re pessimistic that safety guardrails or rate limits can stop it.
Their conclusion is blunt: “Users who post under persistent usernames should assume that adversaries can link their accounts to real identities.” And it extrapolates.
Log-linear projections suggest roughly 35% recall at 90% precision even at one million candidates.
Every throwaway account. Every anonymous forum post. Every “nobody will connect this to me” comment.
It’s all searchable micro-data now. And the cost to run the full agent on one target is less than a cup of coffee.
Practical anonymity on the internet just died. The paper killed it with math.


@borekb You took the words right out of my mouth. Just today I used Claude to automate upgrading and verifying the PGP keys of third-party libraries before a release, not to mention preparing a structured changelog.

One of the best things about agents is that they have hugely simplified my work with GitHub / GitLab / Jira.
Some of those tools I outright dislike (take a guess), but with agents' help I've honestly stopped caring. And for the tools I do like and didn't mind working with, or even enjoyed, agent-driven automation is a great added value there too.

@borekb You can also commit and not push; I do that too when I need two stages. And then amend, or undo with a reset. 😅
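The two-stage trick from that reply, spelled out in shell; the file and commit messages are placeholders, and the throwaway repo at the top only exists so the snippet runs anywhere.

```shell
# Throwaway repo so the commands below run anywhere.
cd "$(mktemp -d)" && git init -q .
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "initial"     # base commit so HEAD~1 exists

echo "stage 1" > work.txt
git add work.txt && git commit -qm "wip: checkpoint"   # committed, NOT pushed

echo "stage 2" >> work.txt
git add work.txt && git commit -q --amend --no-edit    # fold stage 2 in

git reset --soft HEAD~1     # "undo": drop the commit, keep changes staged
git status --short          # prints "A  work.txt" (still staged)
```

The soft reset leaves everything in the index, so nothing is lost; the local commit just acted as a private checkpoint between the two stages.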
Jan Novotný 🇺🇦 retweeted

Simone Segouin, mostly known by her codename, Nicole Minet, was only 18 years old when the Germans invaded. Her first act of rebellion was to steal a bicycle from a German military administration, and to slice the tires of all of the other bikes and motorcycles so they couldn't pursue her. She found a pocket of the Resistance and joined the fight, using the stolen bike to deliver messages between Resistance groups.
She was an extremely fast learner and quickly became an expert at tactics and explosives. She led teams of Resistance fighters to capture German troops, set traps, and sabotage German equipment. As the war dragged on, her deeds escalated to derailing German trains, blocking roads, blowing up bridges and helping to create a German-free path to help the Allied forces retake France from the inside. She was never caught.
Segouin was present at the liberation of Chartres on August 23, 1944, and then the liberation of Paris two days later. She was promoted to lieutenant and awarded several medals, including the Croix de Guerre. After the war, she studied medicine and became a pediatric nurse. She passed away at the age of 97 in 2023.
Not all heroes wear capes.
