Andrea Volpini

21.4K posts

@cyberandy

One of the better-known cyberandys. Passionate about Semantic SEO and AI. I am co-founder and CEO of WordLift and insideout10.

Rome, Italy · Joined November 2006
2.7K Following · 6K Followers
Pinned Tweet
Andrea Volpini@cyberandy·
Good morning 🦞 WordLift Claw is your SEO execution layer in Slack, helping you spot AI Overview opportunities at robot speed.
Andrea Volpini@cyberandy·
@wordliftit has been selected by ICSC 4 Startup, with our work on #SEOcrate supported by Italy’s National Research Centre for HPC and Quantum Computing. Ontology-led RL for training trustworthy Small Language Models for Agentic AI is getting funded 🦾🥳 wor.ai/GCZrGV
Andrea Volpini retweeted
Lily Ray 😏@lilyraynyc·
Google just put out its first official article on optimizing for AI search (AEO/GEO)! TL;DR:
⭐️ SEO is still the foundation for AI search.
⭐️ Create non-commodity, "people-first" content.
⭐️ Ignore most "GEO/AEO hacks" like chunking and llms.txt.
developers.google.com/search/docs/fu…
Andrea Volpini@cyberandy·
The mistake many AI visibility solutions made: confusing scale with durability. Scalable SEO/GEO templates may win temporarily, but they leave footprints that search systems eventually learn to detect.
Lily Ray 😏@lilyraynyc

Can scaling AI content be risky for SEO? I've been monitoring the impact across hundreds of sites for the past few months. Check out my recent research and findings in my latest Substack: lilyraynyc.substack.com/p/it-works-unt…

Andrea Volpini@cyberandy·
AI visibility is not a content-volume game. It is an evidence game. In e-commerce, the winners are not the brands publishing more pages, but the ones giving AI systems stronger, clearer, and more verifiable evidence. Great insights from @aleyda aleydasolis.com/en/ai-search/e…
Andrea Volpini retweeted
Carlos Ortega@carlos_darko·
@mpedrao support.google.com/analytics/answ…
Andrea Volpini@cyberandy·
@suganthan I’m usually very selective about what I read on schema 😀 but I knew I would appreciate your clarity on the topic.
Suganthan Mohanadasan@suganthan·
@cyberandy I love the graphic. (Made me look much younger too lol) Thanks for sharing. Yes great point on copilot. I will add it to the article. 🙏💪
Andrea Volpini@cyberandy·
Worth reading today. “Think of schema markup as business registration, not advertising.” I’d add: Bing relies heavily on schema too, and the semantic layer you build for search is the layer that aligns your AI workflows. Same schema. Three lives: indexing, training, retrieval.
Suganthan Mohanadasan@suganthan

SEO Twitter has been arguing about schema all month. Half say it's dead. The other half claim a 2.5x AI citation magic lever. Both. Are. Wrong. Schema is read by 3 different systems for 3 different jobs. Google's index pipeline, LLM pretraining, and LLM runtime retrieval. I wrote a beginner's guide that untangles all three. suganthan.com/blog/three-liv…

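The "three lives" idea above (one markup read by the index pipeline, pretraining, and runtime retrieval) is easy to make concrete. Below is a minimal sketch of a schema.org Product node serialized as JSON-LD; the product name and field values are invented for illustration, and this is a generic pattern, not any specific site's markup.

```python
# Minimal schema.org Product as JSON-LD. The same block of markup can be
# consumed by a search indexer, a pretraining crawl, and a retrieval layer.
# All values below are hypothetical examples.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Trail Shoe",  # hypothetical product
    "brand": {"@type": "Brand", "name": "Acme"},
    "offers": {
        "@type": "Offer",
        "price": "89.00",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

# Serialize for embedding in a <script type="application/ld+json"> tag
jsonld = json.dumps(product, indent=2)
print(jsonld)
```

The point of registering entities this way is that every consumer gets the same typed, unambiguous statement of what the page is about, rather than re-inferring it from prose.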
Andrea Volpini retweeted
Kirk Marple@KirkMarple·
@ziodave @cyberandy @lilyraynyc @jdevalk We bet on schema.org several years ago as the canonical data model for our “observed entities” in content. It’s been a valuable bet: we now offer that as a context layer for AI agents, pre-indexed, with a standards-based entity layer.
Andrea Volpini retweeted
Gianluca Fiorelli@gfiorelli1·
Super interesting catch by @natzir9, which seems to confirm a suspicion I had about Google grounding also using its Shopping Graph because, let’s be honest, it would be strange for Google not to use it if it aims to expand transactions via AI Mode
Natzir@natzir9

It’s not the first time I’ve come across things like this in AI Mode or the AI Overviews. It looks like Google is pulling directly from the Shopping Graph to extract entities and normalize the grounding of the Web Graph. Have you ever seen it? #Leak

Andrea Volpini retweeted
Ryan Jones@RyanJones·
One of my pet peeves is that many SEOs haven't updated their mental model of how a modern search engine works. So, I wrote it down for you. serprecon.com/blog/how-a-sea…
Andrea Volpini retweeted
Ben Wills@benwills·
I spent the last 3 weeks running what might be the most comprehensive LLM ranking factors analysis to date. 29,562 unique domains tracked and scored across 145 industries, 1,595 buyer personas, and 105k+ ChatGPT prompts. Over 500TB of data, and 12 external signals correlated against rank-weighted LLM recommendation scores.

This is a large-scale correlation study: what external signals actually predict whether a brand gets recommended by ChatGPT, across 145 industries and 1,595 buyer personas.

-- Research Process
145 industries from 500 candidates. 11 personas each (10 targeted + 1 neutral). 25 runs per persona, rank-weighted scoring. 29,562 unique domains tracked.

Data collected:
- Common Crawl: 1.15B pages, domain mentions + phrase co-occurrences
- Reddit: 5B+ posts and comments scanned
- Google Search: 15,697 queries, top 100 results; 1.5M+ results captured
- SERP HTML: parsed for outbound links and phrase presence
- Wikimedia: 300M+ Wikidata entities + Wikipedia citations
- Backlinks (Common Crawl Web Graph): PageRank + Harmonic Centrality; 4B+
- Top Site Homepages: parsed for persona-specific phrases

-- Analysis Process
13 signals per domain. Spearman ρ vs. LLM recommendation score, per-industry and globally. R² shows variance explained. Lift measures over-representation in the top 10% most-recommended domains. Tiered: Dominant (ρ ≥ 0.30) down to Baseline (< 0.05).

-- Key Findings
SERP appearances, SERP rank, and outbound links from search results pages are the three strongest signals. Traditional SEO is the dominant measurable influence on LLM recommendations. Backlink authority (PageRank, Harmonic Centrality) follows. Combined, these point to one thing: established search authority drives LLM visibility.

Signal hierarchies vary by industry. Wikidata dominates in established categories (hotels, ERP, furniture). Reddit drives community-driven ones (enterprise AI, live entertainment). No universal strategy.

80–85% of recommendation variance is inside the model. All external signals combined explain under 20%. You cannot infer LLM visibility from search rankings; you have to test it directly.

-- The Two Conclusions That Matter
1. SEO is the foundation. OpenAI is using search data today and building their own index. As that matures, the connection between search authority and LLM visibility deepens. Traditional SEO principles are not obsolete; they're the starting point for LLM visibility too.
2. Persona is the measurement unit. The #1 airline for a frequent flyer is a different site from the #1 for a student flying abroad. Same model, same industry, different person, different result. You don't have one LLM rank; you have a rank per buyer segment. Monitor by persona or the number is meaningless.

-- Full Report and data for all 145 industries and 1,595 personas available here: oppalerts.com/LLM-Ranking-Fa…
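The two headline metrics in that methodology, Spearman ρ and top-decile lift, can be sketched in a few lines. This is a toy reproduction on synthetic data: the variable names and numbers below are invented and are not from the study.

```python
# Toy sketch: Spearman rank correlation between an external signal and an
# LLM recommendation score, plus "lift" (over-representation of a binary
# signal in the top 10% of domains). Pure-stdlib, synthetic data only.

def ranks(xs):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def lift_top_decile(signal, score):
    """Share of signal-positive domains in the top 10% by score,
    divided by their share overall."""
    n = len(score)
    top = sorted(range(n), key=lambda i: -score[i])[: max(1, n // 10)]
    p_top = sum(signal[i] for i in top) / len(top)
    p_all = sum(signal) / n
    return p_top / p_all

# Invented toy data: 10 domains; the SERP signal loosely tracks the score.
score   = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]   # LLM recommendation score
serp    = [9, 7, 8, 5, 6, 4, 2, 3, 1, 0]   # e.g. SERP appearances
haswiki = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]   # e.g. has a Wikidata entity

print(round(spearman(serp, score), 2))       # strong rank correlation
print(round(lift_top_decile(haswiki, score), 2))
```

By the study's own tiering, the toy ρ here would land in the "Dominant" band (ρ ≥ 0.30); real per-industry values are of course far noisier.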
Andrea Volpini@cyberandy·
I began using Gemma 3 as an open interpretability lab. Prompt → layer-32 residual vectors → Gemma Scope SAE activations → Natural Language Autoencoder explanations. The goal is to map how a model internally represents a brand, its products, and competitors before it answers.
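The SAE step in that pipeline can be shown with a toy sketch: take a residual-stream vector and pass it through a sparse autoencoder's encoder, keeping only activations above a threshold. Gemma Scope's JumpReLU SAEs work roughly this way, but at vastly larger dimensions; every weight, size, and threshold below is invented.

```python
# Toy sparse-autoencoder encode pass over a fake residual-stream vector.
# Real SAEs have learned weights and thousands of dimensions; here the
# weights are random and the sizes tiny, purely to show the mechanics.
import random

random.seed(0)
d_model, d_sae = 8, 32  # toy sizes (real models: thousands of dims, 16k+ features)

# Invented encoder parameters
W_enc = [[random.gauss(0, 0.5) for _ in range(d_model)] for _ in range(d_sae)]
b_enc = [0.0] * d_sae
b_dec = [0.0] * d_model

def sae_encode(x, threshold=0.5):
    """Feature activations: gate(W_enc @ (x - b_dec) + b_enc).
    The gating zeroes pre-activations at or below the threshold,
    loosely mimicking JumpReLU."""
    centered = [xi - bi for xi, bi in zip(x, b_dec)]
    pre = [sum(w * c for w, c in zip(row, centered)) + b
           for row, b in zip(W_enc, b_enc)]
    return [p if p > threshold else 0.0 for p in pre]

resid = [random.gauss(0, 1) for _ in range(d_model)]  # stand-in layer vector
acts = sae_encode(resid)
active = [i for i, a in enumerate(acts) if a > 0]
print(len(acts))  # one activation per SAE feature
```

The interpretability payoff comes afterwards: each active feature index is looked up against a natural-language explanation of what that feature fires on, which is where brand and product representations become visible.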
Andrea Volpini@cyberandy·
Context Graphs help the agentic workforce understand the “why” behind decisions. In most cases, these graphs need to be mined from conversations. But the missing layer is not the graph technology itself; RDF and OWL can support that. What is missing is the knowledge architecture.
The Year of the Graph@TheYotg

Context Graph Architecture: Why Knowledge Architecture Is the Missing Layer

Context graphs are being called AI's next trillion-dollar opportunity. But before chasing the new label, it's worth asking: what's actually new here?

Forrester's Charles Betz cuts through the noise: EA has maintained entity graphs since Zachman (1987). CMDBs go back to ITIL v1 in the 1990s. APM, process mining, ChatOps, architecture decision records -- these disciplines have been assembling the pieces of a unified context graph in isolation for decades. The graph was never missing. It's fragmented.

George Anadiotis takes the argument further. The decision trace layer -- who decided what, why, under what authority -- isn't absent from organisations. It lives in Slack threads, incident postmortems, Jira tickets, and people's heads. Extracting it and making it queryable is not a database problem. It requires knowledge engineering: observing work practices, interviewing domain experts, encoding tacit reasoning in formal, machine-readable representations. That's the missing layer. Not the graph itself -- the knowledge architecture that makes it governable.

The infrastructure answer is not exotic either. RDF/OWL provides typed entities and governed relationships. Named graphs handle provenance and versioning. SPARQL enables queryability. These are the building blocks that turn an entity layer from a drawing into something that can actually satisfy governance requirements. Alberto D. Mendoza's conversion of ArchiMate 3.2 to an RDF ontology is a direct, working instantiation of this approach.

On the tooling side: the LLM Wiki pattern -- extracting discrete facts from unstructured sources into a graph, then synthesising into structured queryable form -- is being adopted at scale as a population accelerator for enterprise Agentic AI implementations. The Semantic Web has a 25-year library of patterns, vocabularies and tools to build on.

The key reframe: ontological modeling was never meant to be a runtime. Its value is in defining consistent logic aligned with domain knowledge -- ensuring concepts don't contradict each other across different data schemas. Entity graphs anchored in EA, EA anchored in knowledge representation, decision traces made queryable: that's context graph architecture grounded in something that can actually hold.

The question isn't whether context graphs are real. It's whether organisations will start building the knowledge architecture they require now, or wait until their competitors have a three-year head start.

By @linked_do linkeddataorchestration.com/2026/05/08/con…

#KnowledgeArchitecture #EnterpriseArchitecture #ContextGraphs #AgenticAI #Ontology

💬 ‘A great newsletter’ - Claudia Remlinger, former Sr. Marketing Director, Neo4j. Join readers from Amazon, Capgemini, Michelin, Neo4j & more. Subscribe to the Year of the Graph newsletter for quarterly updates and insights on all things #KnowledgeGraph, #GraphDB, Graph #Analytics / #DataScience / #AI and #SemTech 👇 yearofthegraph.xyz/newsletter

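What a queryable decision trace looks like can be sketched without any graph database at all: RDF-style triples as plain tuples, with a tiny wildcard matcher standing in for SPARQL. Every name in the example.org namespace below is invented for illustration.

```python
# Hypothetical decision-trace fragment as subject-predicate-object triples.
# A real deployment would use RDF/OWL with named graphs for provenance,
# as the post describes; this only shows why the shape is queryable.
EX = "https://example.org/"

triples = [
    (EX + "decision/42", EX + "decidedBy",  EX + "team/platform"),
    (EX + "decision/42", EX + "rationale",  "Keep the legacy queue until Q3"),
    (EX + "decision/42", EX + "recordedIn", EX + "slack/thread/981"),
    (EX + "decision/42", EX + "supersedes", EX + "decision/17"),
]

def query(triples, s=None, p=None, o=None):
    """Tiny SPARQL-ish pattern match: None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Who decided what, why, under what authority" becomes a query:
why = query(triples, s=EX + "decision/42", p=EX + "rationale")
print(why[0][2])  # prints the recorded rationale
```

The knowledge-architecture work the post argues for is everything this sketch skips: deciding which predicates exist, who may assert them, and how conflicting assertions are resolved.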
Andrea Volpini retweeted
clem 🤗@ClementDelangue·
Local open-weight AI on a laptop has been improving more than twice as fast as Moore's Law! Between May 2024 and May 2026, the most expensive MacBook Pro you could buy stayed at 128 GB of unified memory. The hardware ceiling barely moved. But the smartest open-weight model from @huggingface you could actually run on it went from a score of 10 (Llama 3 70B) to 47 (DeepSeek V4 Flash on @antirez's mixed-Q2 GGUF) on the @ArtificialAnlys Intelligence Index. That is 4.7× in 24 months, or a doubling of intelligence every 10.7 months. Moore's Law (transistor count) doubles every 24 months. Local open-weight AI on a laptop has been improving more than twice as fast as Moore's Law, on completely unchanged hardware.
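The doubling-time arithmetic in that post can be checked in two lines, using only the figures it quotes (a score of 10 rising to 47 over 24 months):

```python
# Reproduce the doubling-time claim from the quoted numbers:
# a 4.7x gain over 24 months implies one doubling every ~10.7 months,
# versus Moore's Law's 24 months per doubling.
import math

start_score, end_score = 10, 47
months = 24

gain = end_score / start_score            # 4.7x improvement
doubling_time = months / math.log2(gain)  # months per intelligence doubling

print(round(gain, 1))            # 4.7
print(round(doubling_time, 1))   # 10.7
```

Since 10.7 months is less than half of Moore's Law's 24-month doubling period, the "more than twice as fast" framing follows directly from the quoted scores.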