Propagatica

222 posts


@propagatica

Propagatica – creators of the OPO (Organic Propagation Online) standard: a universal framework for ethical online content propagation aligned with E-E-A-T.

Joined December 2023
296 Following · 28 Followers
Propagatica retweeted
The SEO Framework
The SEO Framework@TheSEOFramework·
While Yoast SEO and Rank Math claim llms.txt helps AI bots find your content, we checked. 6 months. 57 AI bots. 180k AI hits. 0 llms.txt requests. Full data and methodology: github.com/sybrew/the-seo… (#issuecomment-4226935622)
6
26
104
24.3K
Propagatica
Propagatica@propagatica·
This doesn't prove that llms.txt is ineffective. It's not actually "robots.txt in a new guise" but an entirely different tool with a different purpose: llms.txt isn't designed to block access; it's meant for content curation, signaling to AI models which pages are most valuable and optimized for processing during inference.

The issue lies within the ecosystem: WordPress plugin developers, other CMS creators, and hosting providers (e.g., Hostinger, which generates llms.txt server-side) haven't yet aligned with the AI industry. The problem isn't the concept itself but the lack of consistent implementation standards. This isn't complex technology; it's a simple plain-text file in Markdown format, designed to help models discover high-value, authoritative content. What's missing is advocacy from web tooling vendors who could meaningfully influence the developers of major models: Anthropic, xAI, OpenAI, and others.

It's worth watching the Chinese market: their AI models often innovate at the same pace at which Chinese automotive brands are reshaping the global car industry. That might be precisely where lobbying for unified llms.txt implementation makes sense, because the core concept is sound; the challenge is fragmented adoption.

Let's not write off llms.txt. The conversation remains open, and its relevance could grow quickly. Consider HTTPS: initially optional, now a near-universal standard (93% adoption) and a confirmed Google ranking factor. Or SEO meta tags: title tags now see 99% adoption, viewport meta tags over 93%. Standardization takes time, coordination, and critical mass, but once it happens, it reshapes the entire industry. llms.txt could follow the same trajectory. Worth being ready.
2
0
0
83
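The reply above calls llms.txt "a simple plain-text file in Markdown format." As a hedged sketch of what generating one could look like (the site name, section headings, and URLs below are hypothetical placeholders, and the layout follows the commonly proposed convention of an H1 title, a blockquote summary, and H2 sections of Markdown links):

```python
# Minimal llms.txt generator sketch. Layout assumed, not authoritative:
# H1 title, "> " one-line summary, then H2 sections of Markdown links.
def build_llms_txt(site_name, summary, sections):
    lines = [f"# {site_name}", "", f"> {summary}", ""]
    for heading, links in sections.items():
        lines.append(f"## {heading}")
        lines.append("")
        for title, url, note in links:
            lines.append(f"- [{title}]({url}): {note}")
        lines.append("")
    return "\n".join(lines).rstrip() + "\n"

# Hypothetical example site and pages
content = build_llms_txt(
    "Example Docs",
    "Curated entry points for AI models.",
    {"Guides": [("Quickstart", "https://example.com/quickstart.md",
                 "five-minute setup walkthrough")]},
)
print(content)
```

The whole point the reply makes is that the format is trivial; the hard part is getting CMS vendors and model providers to agree on one layout like this.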
Propagatica retweeted
Stanford HAI
Stanford HAI@StanfordHAI·
Introducing the #AIIndex2026: Our most comprehensive, independently sourced data analysis of AI’s trajectory, with a clear-eyed assessment of the critical gaps that remain. As AI advances rapidly, can the systems built around it keep up? Explore the data: hai.stanford.edu/ai-index/2026-…
28
229
548
101.2K
Propagatica retweeted
Design Arena
Design Arena@Designarena·
BREAKING: Wan2.7-Video by @Alibaba_Wan is now #1 on Video-to-Video Arena with an Elo of 1337! This establishes a new state of the art among video editing models. Huge congrats to the @Alibaba_Wan team!
Design Arena tweet media
19
37
325
29.4K
Propagatica retweeted
Design Arena
Design Arena@Designarena·
BREAKING: GLM 5.1 by @Zai_org overwhelmingly dominates design-centric coding tasks among open-weight models. In the categories featured below, it is most comparable to Opus 4.6 by @AnthropicAI at ~1/8th the cost. Huge congrats to the @Zai_org team for this achievement!
Design Arena tweet media
14
61
843
56.9K
Propagatica retweeted
X Freeze
X Freeze@XFreeze·
Grok 4.20's Non-Hallucination Rate has improved beyond its previous high. Just days ago it hit a record-breaking 78% Non-Hallucination Rate, already #1 in the world, smoking Claude Opus 4.6 (max), Gemini 3.1, GPT-5.4 (xhigh), and every other major model. Now it has pushed that number even higher, to 83%. While every other AI confidently makes up stuff and fabricates answers it doesn't know, Grok simply says "I don't know"
X Freeze tweet media
X Freeze@XFreeze

Most AI models hallucinate more than you'd think and make up stuff that doesn't exist. Grok 4.20 just ranked #1 in Non-Hallucination Rate with a 78% score, beating Claude Opus 4.6 (max), Gemini 3.1, GPT-5.4 (xhigh), and every other model on the list. xAI is quietly winning the accuracy game… and it's built to be truthful

427
664
3.5K
12.3M
Propagatica retweeted
Neil Patel
Neil Patel@neilpatel·
ChatGPT drives 6.41 times more referral traffic than Gemini. That may not be shocking to you, as ChatGPT is more popular; but by active monthly users, ChatGPT is only 1.6 times more popular. Yet it drives roughly 6.41 times more referral traffic. You can't just look at active users; you also have to consider which platform benefits you the most.
Neil Patel tweet media
19
15
67
3.2K
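The per-user gap implied by the numbers in the tweet above can be checked with a single division: if ChatGPT has 1.6x the active users but sends 6.41x the referral traffic, each ChatGPT user is worth roughly four Gemini users in referrals.

```python
# Ratios quoted in the tweet above (ChatGPT relative to Gemini)
referral_multiple = 6.41  # referral traffic
user_multiple = 1.6       # active monthly users

# Referral traffic per active user, ChatGPT vs Gemini
per_user_ratio = referral_multiple / user_multiple
print(round(per_user_ratio, 2))  # → 4.01
```

That ~4x per-user ratio is the point of the tweet: platform popularity and platform value to your site are different quantities.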
Propagatica retweeted
Z.ai
Z.ai@Zai_org·
Introducing GLM-5.1: The Next Level of Open Source
- Top-Tier Performance: #1 in open source and #3 globally across SWE-Bench Pro, Terminal-Bench, and NL2Repo.
- Built for Long-Horizon Tasks: Runs autonomously for 8 hours, refining strategies through thousands of iterations.
Blog: z.ai/blog/glm-5.1
Weights: huggingface.co/zai-org/GLM-5.1
API: docs.z.ai/guides/llm/glm…
Coding Plan: z.ai/subscribe
Coming to chat.z.ai in the next few days.
Z.ai tweet media
533
1.3K
10.8K
4.2M
Propagatica retweeted
Satya Nadella
Satya Nadella@satyanadella·
We’re bringing our growing MAI model family to every developer in Foundry, including …
· MAI-Transcribe-1, the most accurate transcription model in the world across 25 languages
· MAI-Voice-1, natural, expressive speech generation
· MAI-Image-2, our most capable image model yet
Start building: microsoft.ai/news/today-wer…
GIF
220
280
1.8K
282.6K
Propagatica retweeted
Google Gemma
Google Gemma@googlegemma·
Meet Gemma 4! Purpose-built for advanced reasoning and agentic workflows on the hardware you own, and released under an Apache 2.0 license. We listened to invaluable community feedback in developing these models. Here is what makes Gemma 4 our most capable open model family yet: 👇
Google Gemma tweet media
166
838
7.2K
619.6K
Propagatica retweeted
Neil Patel
Neil Patel@neilpatel·
Check out AI adoption by country. AI may be built heavily in the United States, but many other countries are adopting it more quickly.
Neil Patel tweet media
43
51
170
12.7K
Propagatica retweeted
X Freeze
X Freeze@XFreeze·
xAI's Grok Imagine just took over the entire DesignArena Video leaderboard - not one, but THREE #1 rankings
→ #1 Video Arena - Elo 1337, a 33-point gap over #2
→ #1 Image to Video Arena - Elo 1298, beating Google Veo 3.1, Kling & Sora
→ #1 Video Editing Arena - Elo 1291
It’s wild, xAI was nowhere in the video space a few months ago, and now it's #1 across various benchmarks. Grok Imagine's rate of progress is in a league of its own
X Freeze tweet media
301
239
1.3K
16.8M
Propagatica retweeted
Similarweb
Similarweb@Similarweb·
Gemini was the fastest-growing Gen AI tool in year-over-year website visits in February 2026.
Similarweb tweet media
169
334
1.8K
1.5M
Propagatica retweeted
Semrush
Semrush@semrush·
AI citations are not static. They evolve fast. Over 3 months, we tracked how AI platforms changed the sources they trust when generating answers. What we saw:
• Less dependency on a single community source
• More weight given to authoritative and reference domains
• Clear differences in how each AI platform evaluates trust
This tells us something important: AI is moving from raw community signals to a more curated trust model. Visibility in AI is no longer about being everywhere. It is about being credible in the right places.
Full study and data 👇 social.semrush.com/4rBt81a
Semrush tweet media
9
13
75
6K
Propagatica retweeted
Artificial Analysis
Artificial Analysis@ArtificialAnlys·
GLM-5 demonstrates improvement in the AA-Omniscience Index, driven by lower hallucination: the model abstains more often from answering questions it does not know.
Artificial Analysis tweet media
6
5
76
6.7K
Propagatica
Propagatica@propagatica·
New SparkToro research exposes the "AI rankings" myth and points to what actually works.

Rand Fishkin just dropped research that should be required reading for anyone considering investment in "AI Optimization." The findings are brutal. Fewer than 1 in 100 AI responses are identical, even with the exact same prompt. Same order of recommendations? That's 1 in 1,000. Every response is unique: different brands, different order, different number of items.

What does this mean in practice? Tracking "ranking position in AI" is meaningless. Selling these metrics to clients is, let's be direct, selling smoke and mirrors.

But here's the nuance. The research shows that visibility percentage, i.e. how often a brand appears across dozens or hundreds of queries, can be a legitimate metric. Brands with strong authority and presence in credible sources show up more frequently, even if never in the same order.

And this is where it gets interesting. AIO/AEO/GEO as a standalone discipline of "optimizing for AI" has limited value. You're chasing a moving target with metrics that can't be verified. But AIO combined with solid fundamentals (building authentic authority, valuable E-E-A-T-aligned content, deep semantic relationships) is a different game entirely. AI visibility isn't the goal. It's a natural byproduct of systematic, quality-driven work on your brand's online presence.

This is exactly the foundation of the OPO standard (Organic Propagation Online) we've been developing for the past two years. Public release of version 26.2.0: mid-February.

Full SparkToro research: "NEW Research: AIs are highly inconsistent when recommending brands or products; marketers should take care when tracking AI visibility" sparktoro.com/blog/new-resea…
Propagatica tweet media
0
0
0
17
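The visibility-percentage metric described in the post above can be sketched in a few lines. This is a hedged illustration, not SparkToro's actual methodology: the brand names and sampled responses are hypothetical, and each response is treated as an ordered list of recommended brands.

```python
from collections import Counter

def visibility(responses):
    """Fraction of responses that mention each brand, order ignored."""
    n = len(responses)
    counts = Counter(brand for r in responses for brand in set(r))
    return {brand: c / n for brand, c in counts.items()}

# Hypothetical samples: the same prompt run four times
responses = [
    ["Acme", "Globex", "Initech"],
    ["Globex", "Acme"],
    ["Initech", "Acme", "Umbrella"],
    ["Acme", "Initech"],
]

# Exact ordered lists rarely repeat across runs...
repeats = len(responses) - len({tuple(r) for r in responses})
# ...but per-brand frequency is a stable, reportable signal
print(visibility(responses))
```

Here every sampled list is distinct (`repeats == 0`), mirroring the "ranking position is meaningless" finding, while "Acme" still appears in all four responses: it is the appearance frequency, not the ordering, that carries signal.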