Andrea
@__AndreaW__

4.4K posts
I try to follow, in good faith, people who hold opinions different from mine. "You can't tell an ignorant man by the work he does, but by how he does it" (C. Pavese)

Joined August 2020
2.8K Following · 135 Followers
Andrea retweeted
antirez
antirez@antirez·
One thing to understand about the new Array type of Redis, and the support of ARGREP, is that you can store, in Redis keys, different markdown documents (skills) that are collectively used and updated by a multitude of remote agents.
Andrea
Andrea@__AndreaW__·
@Pinperepette @antirez Thanks! You write well. A table of contents at the start of the article could help readers find their way around the content faster. The part I find most interesting (the future/present of Redis) is really far "down"
Andrea retweeted
autumn
autumn@adrusi·
when i was a kid, my dad (formerly a physics grad student) was shittalking the romans for building these giant aqueducts when the greeks understood centuries earlier that water would go back up a hill. i asked him how they would have held the pressure and he was like "huh."
👎project hail riptide🚀@buttrscotchknif

this continues to perplex me

Andrea retweeted
Lorenzo Ruffino
Lorenzo Ruffino@Ruffino_Lorenzo·
I think this is one of the most embarrassing things I have ever read, both for the idea behind it and for the utterly trivial questions.
Lorenzo Ruffino tweet media
Andrea retweeted
Fausto Panunzi
Fausto Panunzi@FPanunzi·
One of the first lessons in a corporate finance course: if you want to take on debt, you have to cede control rights to your lenders.
Fausto Panunzi tweet media
Andrea
Andrea@__AndreaW__·
@antirez do you see any realistic cybersec risk coming from Chinese models (but who knows, Americans as well these days)?
antirez
antirez@antirez·
Europe's AI strategy should be to specialize in AI inference and the improvement of large open-weight models, while we try to close the GPU/companies gap to have a viable internal path. A large Chinese open-weight model that works is simply better than a weak European-trained one.
Andrea retweeted
Bridget Brink
Bridget Brink@AmbBridgetBrink·
I resigned as U.S. Ambassador to Ukraine when Trump kept siding with Putin over our democratic partner. Now, my successor is doing the same. I knew I had to speak out and run for office because siding with dictators is just not who we are.
Amy Mackinnon@ak_mack

Scoop: Julie Davis, the acting US Ambassador in Kyiv, is leaving the State Department having grown frustrated with Trump's dwindling support for Ukraine. Davis's resignation follows that of her predecessor, Bridget Brink, who resigned for similar reasons early last year. W/@christopherjm  as.ft.com/r/1781e555-fad…

Andrea retweeted
Lavoce.info
Lavoce.info@lavoceinfo·
The future does not look rosy: the energy shock and inflation threaten the purchasing power of pay packets. Radical changes would be needed, but none seem imminent. @RiccardoTrezzi writes about it lavoce.info/archives/11119…
Andrea retweeted
Matthew Yglesias
Matthew Yglesias@mattyglesias·
Five months in, I think I've decided that I don't want to vibecode — I want professionally managed software companies to use AI coding assistance to make more/better/cheaper software products that they sell to me for money.
Andrea retweeted
Bojie Li
Bojie Li@bojie_li·
Closed labs hide model sizes. They can't hide what their models know, and what a model knows is an indicator of how big it is. Reasoning compresses. Factual knowledge doesn't. So you can size a frontier model from black-box API calls alone, and across releases you can literally watch a single fact arrive in the parameters over time.

For three years, my friends Jiyan He and Zihan Zheng have been asking frontier LLMs the same question: "what do you know about USTC Hackergame?", a CTF contest. May 2024: GPT-4o invented fake titles. Feb 2025: Claude 3.7 Sonnet listed 19 verified 2023 challenges. By April 2026, frontier models recall specific challenges across consecutive years.

After DeepSeek-V4 dropped, I instructed my agent to spend four days autonomously turning that habit into Incompressible Knowledge Probes (IKP): 1,400 questions, 7 tiers of obscurity, 188 models, 27 vendors. Three findings:

1/ You can approximately size any black-box LLM from factual accuracy alone. Penalized accuracy is log-linear in log(params), R² = 0.917 on 89 open-weight models from 135M to 1.6T params. Project closed APIs onto the curve → GPT-5.5 ~9T, Claude Opus 4.7 ~4T, GPT-5.4 ~2.2T, Claude Sonnet 4.6 ~1.7T, Gemini 2.5 Pro ~1.2T (90% CI: 0.3-3x size).

2/ Citation count and h-index don't predict whether a frontier model recognizes a researcher. Two researchers with similar citation profiles get very different responses. Models memorize impact: work that shaped a field, not many incremental papers.

3/ Factual capacity doesn't compress over time. Across 96 open-weight models across 3 years, the IKP time coefficient is statistically zero, rejecting the Densing-Law prediction of +0.0117/month at p<10⁻¹⁵. Reasoning benchmarks saturate; factual capacity keeps scaling with parameters.

Website: 01.me/research/ikp/
Paper: arxiv.org/pdf/2604.24827
Bojie Li tweet media
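The sizing trick in finding 1/ can be sketched numerically. This is a hypothetical illustration with made-up model sizes, coefficients, and accuracies (none of the numbers come from the IKP paper): fit accuracy as a linear function of log10(params) on "open-weight" points, then invert the line to project a black-box model's accuracy onto a parameter estimate.

```python
import numpy as np

# Synthetic stand-ins for open-weight models (sizes in billions of params,
# entirely made up) and an assumed "true" log-linear accuracy relationship.
rng = np.random.default_rng(0)
params_b = np.array([0.135, 1.0, 7.0, 70.0, 405.0, 1600.0])
log_p = np.log10(params_b)
true_intercept, true_slope = 0.30, 0.12
acc = true_intercept + true_slope * log_p + rng.normal(0, 0.005, log_p.size)

# Least-squares fit of accuracy ~ a + b * log10(params).
b_hat, a_hat = np.polyfit(log_p, acc, 1)

def estimate_params(accuracy):
    """Invert the fitted line: log10(params) = (accuracy - a) / b."""
    return 10 ** ((accuracy - a_hat) / b_hat)

# A black-box model scoring 0.70 projects to roughly 10^(0.4/0.12) ≈ 2.2T
# params under the true coefficients; the fitted estimate lands nearby.
est = estimate_params(0.70)
```

The paper's version presumably uses a penalized accuracy metric and reports a confidence interval around the projection; this sketch only shows the fit-and-invert mechanic.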
Anatoliy Gatt
Anatoliy Gatt@anatoliygatt·
Agentic RAG often gets adopted because the index doesn't expose the right primitives, not because the task needs the LLM in the retrieval loop. The traditional/agentic split skips a third option: structured retrieval, where the LLM emits one query and the index handles multi-hop natively (graph traversal, query planning, schema-aware lookup). Curious where that line sits for you in practice. How often is it really an agent-loop problem versus an index-design problem?
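The "third option" described above, where the LLM emits one structured query and the index performs the multi-hop work natively, can be sketched with a toy graph index. The graph, relation names, and query shape are all invented for illustration:

```python
# Tiny in-memory graph index keyed by (node, relation). A real system would
# use a graph database or schema-aware query planner; the point is that the
# traversal happens index-side, from a single structured query.
graph = {
    ("acme", "acquired"): ["widgetco"],
    ("widgetco", "founded_by"): ["alice"],
    ("alice", "advised_by"): ["bob"],
}

def multi_hop(start, relation_path):
    """Follow a chain of relations in one index-side traversal."""
    frontier = [start]
    for rel in relation_path:
        frontier = [dst for node in frontier
                    for dst in graph.get((node, rel), [])]
    return frontier

# One structured query answers "who founded the company Acme acquired?"
# with no LLM call inside the retrieval loop.
result = multi_hop("acme", ["acquired", "founded_by"])
```

An agent loop would instead call retrieval once per hop, paying an LLM round-trip each time; here the hop count only affects index-side work.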
Leonie
Leonie@helloiamleonie·
This is the most common question I’ve been asked recently: “Should I replace my RAG system with an agentic RAG one?”

Well, are you happy with your current RAG system’s performance? Then, probably not.

The key difference between traditional RAG and agentic RAG is HOW the context is built:

Traditional RAG has a fixed retrieval pipeline that retrieves exactly once.
→ The LLM is a passive recipient of additional context

Agentic RAG has access to one or more retrieval tool(s) to retrieve on demand.
→ The agent actively builds its own context

> Does your use case always require exactly one retrieval step? (Example: always pull in customer information)
> Or do you have cases where no additional context is needed?
> Or do you have cases where multi-hop retrieval is necessary?

If you don’t know, set up a simple evaluation with a gold test set of a few common user patterns:

> Do precision and recall improve for your retrieval component?
> How much does the end-to-end performance improve based on performance improvements of the retrieval component?
> Do these performance improvements justify the added latency and cost of agentic RAG?
Leonie tweet media
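The traditional-versus-agentic distinction above is purely a control-flow difference, which a stub sketch makes concrete. The retriever and the agent's stopping rule here are placeholders, not any real RAG framework:

```python
# Count retrieval calls to contrast the two pipelines.
calls = []

def retrieve(query):
    calls.append(query)               # record each retrieval
    return f"docs-for:{query}"

def traditional_rag(question):
    # Fixed pipeline: exactly one retrieval, then generation.
    context = retrieve(question)
    return f"answer({question} | {context})"

def agentic_rag(question, max_steps=3):
    # The "agent" decides when to stop retrieving. Real systems would let
    # the LLM judge sufficiency; this stub pretends two hops are needed.
    context = []
    for step in range(max_steps):
        if len(context) >= 2:
            break
        context.append(retrieve(f"{question}/hop{step}"))
    return f"answer({question} | {context})"
```

Running both on the same question shows traditional RAG issuing one retrieval and the agentic version issuing two, which is exactly the latency/cost trade-off the evaluation questions above ask you to justify.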
Andrea
Andrea@__AndreaW__·
@gbponz Why not share some hard-won lessons on how to build a better visualization for understanding the geographical distribution of income in Italian cities? I get it, "income" may not be the whole story, but that's part of the challenge
Giovanni B. Ponzetto - 🇨🇦🇮🇱
When I studied economics in 1983, the faculty president in Turin was a woman, the professor of Statistics. Had I produced such an image, she'd have flipped a coin to decide whether the right answer was laughing at me uncontrollably or setting up a pillory outside the entrance and tying me there for two days.
Giulio Mattioli@giulio_mattioli

Income levels in Milan, Italy. Incredibly concentric and huge differences between the richer city centre and the poorer peripheries tg24.sky.it/economia/2026/…

Andrea retweeted
Joruno
Joruno@wsl8297·
I recently stumbled on a very good open-source book on GitHub: The Accidental CTO, and I couldn't resist sharing it. The author doesn't come from a formal CS background, yet took a platform from zero to handling millions of users. The book isn't a set of template "best practice" answers, but real experience distilled from stepping on mines, firefighting, and postmortems: how to cope when the servers crash at 3 a.m., how to chase down database replication lag, how to grow an architecture step by step. It uses stories to explain the key concepts of distributed systems, while laying out the trade-offs behind technical decisions: why each choice was made, what it cost, and how to evolve next. GitHub: github.com/subhashchy/The… What you'll find inside: - Scaling in practice: the architecture evolution path from a few thousand users to millions - Distributed-systems choices: when to adopt sharding, caching, message queues, and how to weigh them - Observability: how monitoring and alerting save you at critical moments - Fault-tolerance design: how to apply circuit breakers, retries, graceful degradation, and other resilience techniques - Cost control: managing the cloud bill at scale and evaluating self-hosting - CAP in practice: how consistency, availability, and latency are balanced in real systems If you want to understand how large-scale systems actually get built, run, and kept standing, this book is well worth a read, for engineers, architects, and technical founders alike.
Joruno tweet media
Andrea retweeted
Dmitrii Kovanikov
Dmitrii Kovanikov@ChShersh·
Could someone explain why Anthropic lists their salary over 10 years and not just annually like everyone else? This is misleading and gives the impression they pay that much per year.
Riccardo Trezzi
Riccardo Trezzi@RiccardoTrezzi·
On behalf of myself and Guido (Ascari), I thank the editor @DeBortoliF for citing our book, "Fotografia di un declino", in today's editorial. The volume will soon be available for purchase. All the information, waiting list included, can be found at ildeclinoitaliano.it. As soon as it is available, I will also discuss it in detail here on X. Thanks again.
Riccardo Trezzi tweet media
Andrea retweeted
Gabriele Berton
Gabriele Berton@gabriberton·
What are some real-world use cases for "small" (<10B) LLMs?
Andrea retweeted
Alberto G. Franceschini Weiss
Alberto G. Franceschini Weiss@Alb_Franc_Weiss·
In his book @RiccardoTrezzi says that Italy has grown less over the last 20 years than almost anyone else in the world. Only 8 countries have done worse than us, such as Sudan or Venezuela. How is it possible that the Pnrr and the various bonuses (500 billion) have had such a modest impact on growth?
Alberto G. Franceschini Weiss tweet media
Andrea
Andrea@__AndreaW__·
@AlfonsoFuggetta @checovenier As someone who works in this field: thank you. Just one note on using Claude to confirm or refute. These things are so sycophantic that they tend to always agree with whoever is writing. Rather than validating the (reasonable) observation, they risk undermining it.
Alfonso Fuggetta
Alfonso Fuggetta@AlfonsoFuggetta·
@checovenier All my posts and articles in recent weeks have had a single purpose: to expose the exaggerations and distortions of this style of communication. On the technical question, let's talk about it. I use both the LLM Wiki and Claude integrated with Reader/Readwise.
Alfonso Fuggetta
Alfonso Fuggetta@AlfonsoFuggetta·
I saw this image and it seemed trivial to me. But just to be sure, I asked Claude what it was about. I can confirm: it is trivial. I am attaching the image and Claude's comment.
Alfonso Fuggetta tweet media