Loganix

5.7K posts

@loganix

AI search. Modern SEO. No fairy tales. Just what moves rankings in 2026. Trusted by 5,000+ SEOs and agencies.

Seattle vs. Vancouver · Joined February 2014
397 Following · 613 Followers
Pinned Tweet
Loganix @loganix ·
Third-party sources are cited by AI 6.5x more than your own website. Your site is ~15% of the equation. The other ~85%? What others say about you, on sites you don't control.
[attached media]
Loganix @loganix ·
76.4% of the most-cited pages in AI are updated within 30 days, so monthly placements are ideal; quarterly is the minimum. This compounds like link building itself: the strongest returns come at months 4-6. Stopping at month 3 sacrifices that.
Loganix @loganix ·
You buy links for rankings, but what if those same links also got you cited by AI for your category queries? We published a strategy guide on how to get both from the same placement. Takeaways 🧵
SE Ranking @SERanking ·
@loganix Could be, for sure. There are probably several factors influencing this, so we see it as one of a few possible explanations.
SE Ranking @SERanking ·
Could AI-generated content rank? Yes. And it may last about three months.

We built 20 new domains and published 2,000 articles with no human input. In 36 days:
- 71% got indexed
- 8 sites ranked for 1,000+ keywords
- 122K impressions

3 months in: completely gone.

Was AI content the problem? Not exactly. The problem was publishing AI content with no strategy and no SEO behind it. These were new sites with no backlinks, no authority. Once Google picked up on that, the rankings dropped.

And we've also seen the opposite: when AI-generated content is reviewed by a human and published on a strong domain, it can keep ranking and drive clicks.

📌 Read the full experiment: seranking.com/blog/ai-conten…

We're running new experiments now, and we'll be sharing the results along the way. If you want us to test your theories too, drop them in the comments 👇
[attached media]
Loganix @loganix ·
Anyone else's notifications broken?
Loganix @loganix ·
@pcshipp My guess would be index selection. Google's probably crawling your pages, but not finding enough unique value to index ~40% of them.
pc @pcshipp ·
Generated 10K+ pages, but 40% still stuck not indexed. March 3: sitemap submitted. 6.14k indexed, 4.06k not indexed. Why are 40% not indexed? What's going wrong?
[attached media]
Loganix @loganix ·
@SeoTudent Anyone who's used AI detectors enough knows that when a new model is released, it takes the detectors a minute to catch up. My guess is that Google has algorithms to detect abuse of AI content. They'll catch up soon, and then Google's wrath will be unleashed.
Testing Stuff 🦄📈✨
Can't stop thinking about the site with the super tastefully done AI content scaling. The text is clearly unedited AI. Def a newer model, but there's no high-level workflow behind it. Will Google catch that footprint? I'm really invested now.
Loganix @loganix ·
@dejanseo Just tried it. Super cool! I asked Gemini for the "best link-building service in the US," and your tool guessed at:
[attached media]
Loganix reposted
DEJAN @dejanseo ·
We're releasing a new model called Reverse Prompter, designed to reverse-engineer Gemini-generated assistant responses back to the most likely original prompts. Try it here: dejan.ai/tools/reverse-… or read about its design and training here: dejan.ai/blog/reverse-p… PS: This is a tiny 270M-parameter Gemma fine-tuned on 100,000 Gemini-generated input-output pairs; we're not just asking Gemini to guess what the input prompts are.
[attached media]
Loganix @loganix ·
Well, shoot! You learn something new every day.
Britney Muller @BritneyMuller

"Grounding" Doesn't Mean What You Think It Means 🗺️

Words matter, especially when they're quietly reshaping how an entire industry thinks.

"Grounding" comes from "ground truth," rooted in statistics and originally cartography, where it literally meant going outside to verify that your map matched reality. In some AI models, "ground truth" is the objectively correct real-world data, like sensor readings or medical records, used to anchor the model to reality. Not documents. Not web pages. Reality.

The core problem with LLMs is that there's no ground-truth signal during training or generation. The model isn't checking its answer against the facts; it's only predicting the next most likely word.

What Microsoft, a company I deeply respect and admire, calls "grounding" is actually RAG (Retrieval-Augmented Generation): retrieving web documents to supplement a response. Useful! But web text is written by humans, about reality; it is not reality itself. Those documents can be wrong, biased, SEO-manipulated, or outdated. RAG is better-informed guessing. True "grounding" is fundamentally a different thing.

The uncomfortable part: Microsoft's own AI Guide features a quote from me where, after significant pushback on their "grounding" framing during a long interview, I said: "RAG does help the LLM ground its response in information from the web, but it's worth remembering that not everything online is true." The caveat got published. The correction didn't, and the term has escaped into the GEO AIO E-I-E-I-O gauntlet. I've since watched real people repeat versions of Microsoft's definition and treat it as fact. And I don't blame them. They're trying to keep up with all of these changes.

Microsoft's new "Grounding Queries" metric in Bing Webmaster Tools makes this even more confusing. Those aren't user queries. They're background searches the AI quietly generates when a user submits a prompt. For example, when you ask "should I bring an umbrella in Seattle?" the AI might internally generate "Seattle weather today" to inform its response. Calling those "grounding queries" buries an already-misused term one layer deeper. I raised this concern with Microsoft and suggested alternatives like "Retrieval Queries" or "AI Queries," which I feel would be more accurate and less confusing, but to no avail.

The real irony? Microsoft employs SO many world-class AI researchers. They know the difference. By rebranding RAG and synthetic AI queries as "grounding," a precise technical term has become a marketing buzzword. SEOs are now optimizing for a word we don't have a shared definition of. And when AI researchers hear you use "grounding" this way, it'll erode your credibility.

As AI continues to reshape industries, it's more important than ever for us to understand these nuances. By learning the true meaning behind AI terms and tech, we can communicate more effectively, make better decisions, and drive real results.

19 days until the next Actionable AI For Marketers Course 🎓
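[Editor's sketch] The retrieval flow described above (user prompt → synthetic background query → retrieved web snippets → augmented prompt) can be sketched in a few lines. Everything here is a hypothetical stand-in: the function names, the query-rewrite rule, and the tiny fake index are illustrative only, not Bing's or any vendor's actual pipeline. The point the sketch makes is the post's point: the model's answer is anchored to retrieved web text, not to ground truth.

```python
def generate_retrieval_query(user_prompt: str) -> str:
    """Stand-in for the synthetic 'grounding query' step: the system
    rewrites the user's prompt into a background search query."""
    p = user_prompt.lower()
    if "umbrella" in p and "seattle" in p:
        return "Seattle weather today"
    return user_prompt  # fall back to the raw prompt

def retrieve_documents(query: str, index: dict[str, str]) -> list[str]:
    """Stand-in retriever: return snippets whose key words all appear
    in the query. Real systems use a search engine or vector index."""
    terms = set(query.lower().split())
    return [text for key, text in index.items()
            if set(key.lower().split()) <= terms]

def build_augmented_prompt(user_prompt: str, snippets: list[str]) -> str:
    """Assemble the final prompt. Note what supplements the answer:
    retrieved web text (possibly wrong or outdated), not reality."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (f"Context (retrieved web text, may be wrong or outdated):\n"
            f"{context}\n\nQuestion: {user_prompt}")

# Tiny fake web index standing in for a real document store.
fake_index = {
    "seattle weather": "Forecast for Seattle: light rain through the afternoon.",
    "vancouver weather": "Vancouver: clear skies all day.",
}

query = generate_retrieval_query("Should I bring an umbrella in Seattle?")
prompt = build_augmented_prompt(
    "Should I bring an umbrella in Seattle?",
    retrieve_documents(query, fake_index),
)
print(query)  # the background search the user never sees
```

Counting hits on `query` in a dashboard, rather than on the user's prompt, is what the "Grounding Queries" metric appears to report, which is why "Retrieval Queries" would be the more accurate label.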
