AJ Ghergich

17.5K posts


@SEO

AI & Marketing Technology Executive | 4x Founder, 3 Exits | Services Transformation | PE, Bootstrap & VC

Saint Louis, Missouri · Joined November 2008
675 Following · 128.9K Followers
AJ Ghergich
AJ Ghergich@SEO·
Thx for the reply, Brian. The use case I'm hoping for is something like Notion being able to host bundled SKILL.md skills (SKILL.md plus references/*.md plus scripts/*.py) end-to-end via MCP etc. The Brainlabs CEO article (I'm sure you read it) using Notion was great, but it's not really using Skills the right way, IMO, until it can do the full bundle. Otherwise, it's just a prompt library.
1 reply · 0 reposts · 1 like · 19 views
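The "full bundle" described above (a SKILL.md entry point plus reference docs plus scripts) can be checked mechanically. A minimal sketch, assuming the directory layout named in the tweet; `is_full_skill_bundle` is a hypothetical helper, not part of any real Notion or MCP API:

```python
from pathlib import Path

def is_full_skill_bundle(root: str) -> bool:
    """Return True if `root` looks like a complete skill bundle:
    a SKILL.md manifest plus at least one reference doc and one script.
    (Hypothetical helper; layout follows the SKILL.md convention above.)"""
    base = Path(root)
    has_manifest = (base / "SKILL.md").is_file()
    has_refs = any((base / "references").glob("*.md"))
    has_scripts = any((base / "scripts").glob("*.py"))
    return has_manifest and has_refs and has_scripts
```

A host that only serves the SKILL.md file would fail this check, which is the tweet's point: without the references and scripts, it is a prompt library rather than a skill.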
AJ Ghergich reposted
OpenAI
OpenAI@OpenAI·
You've been asking for this one... Now in preview: Codex in the ChatGPT mobile app. Start new work, review outputs, steer execution, and approve next steps, all from the ChatGPT mobile app. Codex will keep running on your laptop, Mac mini, or devbox.
1.4K replies · 2.4K reposts · 19.8K likes · 3.5M views
AJ Ghergich reposted
Ben Wills
Ben Wills@benwills·
I spent the last 3 weeks running what might be the most comprehensive LLM ranking factors analysis to date. 29,562 unique domains tracked and scored across 145 industries, 1,595 buyer personas, and 105k+ ChatGPT prompts. Over 500TB of data, and 12 external signals correlated against rank-weighted LLM recommendation scores.

This is a large-scale correlation study: what external signals actually predict whether a brand gets recommended by ChatGPT, across 145 industries and 1,595 buyer personas.

-- Research Process

145 industries from 500 candidates. 11 personas each (10 targeted + 1 neutral). 25 runs per persona, rank-weighted scoring. 29,562 unique domains tracked.

Data collected:
- Common Crawl: 1.15B pages, domain mentions + phrase co-occurrences
- Reddit: 5B+ posts and comments scanned
- Google Search: 15,697 queries, top 100 results; 1.5M+ results captured
- SERP HTML: parsed for outbound links and phrase presence
- Wikimedia: 300M+ Wikidata entities + Wikipedia citations
- Backlinks (Common Crawl Web Graph): PageRank + Harmonic Centrality; 4B+
- Top Site Homepages: parsed for persona-specific phrases

-- Analysis Process

13 signals per domain. Spearman ρ vs. LLM recommendation score, per-industry and globally. R² shows variance explained. Lift measures over-representation in the top 10% most-recommended domains. Tiered: Dominant (ρ ≥ 0.30) down to Baseline (< 0.05).

-- Key Findings

SERP appearances, SERP rank, and outbound links from search results pages are the three strongest signals. Traditional SEO is the dominant measurable influence on LLM recommendations. Backlink authority (PageRank, Harmonic Centrality) follows. Combined, these point to one thing: established search authority drives LLM visibility.

Signal hierarchies vary by industry. Wikidata dominates in established categories (hotels, ERP, furniture). Reddit drives community-driven ones (enterprise AI, live entertainment). No universal strategy.

80–85% of recommendation variance is inside the model. All external signals combined explain under 20%. You cannot infer LLM visibility from search rankings; you have to test it directly.

-- The Two Conclusions That Matter

1. SEO is the foundation. OpenAI is using search data today and building their own index. As that matures, the connection between search authority and LLM visibility deepens. Traditional SEO principles are not obsolete; they're the starting point for LLM visibility too.

2. Persona is the measurement unit. The #1 airline for a frequent flyer is a different site from the #1 for a student flying abroad. Same model, same industry, different person, different result. You don't have one LLM rank; you have a rank per buyer segment. Monitor by persona or the number is meaningless.

-- Full report and data for all 145 industries and 1,595 personas available here: oppalerts.com/LLM-Ranking-Fa…
Ben Wills tweet media
11 replies · 33 reposts · 155 likes · 17.8K views
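The two statistics the thread leans on, Spearman ρ and top-decile lift, are easy to sketch. A minimal, dependency-free version under assumed inputs (a numeric signal per domain and a recommendation score per domain); the function names are illustrative, not from the report:

```python
def ranks(xs):
    # Average ranks (1-based), with ties sharing their mean rank.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman rho = Pearson correlation computed on the ranks.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

def lift_top_decile(signal, score):
    # Over-representation of a binary signal among the top 10% of
    # domains by recommendation score, relative to its overall rate.
    n = len(score)
    cutoff = sorted(score, reverse=True)[max(n // 10, 1) - 1]
    top = [s for s, sc in zip(signal, score) if sc >= cutoff]
    return (sum(top) / len(top)) / (sum(signal) / n)
```

A lift of 10 would mean domains carrying the signal are ten times as common in the top decile as in the population overall, which is how the study flags over-represented signals.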
AJ Ghergich reposted
Joost de Valk
Joost de Valk@jdevalk·
FAQ schema died twice from spam. The second time was last Wednesday. The fix is a new schema.org type: FAQSection. I filed it. joost.blog/faq-schema-cyc…
10 replies · 22 reposts · 151 likes · 20.1K views
AJ Ghergich reposted
Chris Long
Chris Long@chris_nectiv·
Many SEOs might be interested in using Claude Code to build agents + automations. This video showcases how you can set up a basic one using Claude's skill files.
1 reply · 11 reposts · 94 likes · 7.6K views
Chong-U
Chong-U@chongdashu·
@jaredatch That's a great idea! Kids have great imaginations. It would be great to bring their game ideas to life.
1 reply · 0 reposts · 0 likes · 335 views
Chong-U
Chong-U@chongdashu·
Just a couple of prompts from start to finish into a playable game! Pixel-perfect art > Codex App w/ GPT 5.5 High > Images 2.0 for animations (except walkcycles) > WAN 2.0 for walkcycles (cheap!) > 11Labs for bgm/sfx (in 1 prompt) > Phaser 4 Sound ON🔉 Full tutorial anyone?
Chong-U@chongdashu

If you use AI to generate pixel art, you'll know that while the frames look good, they are often not aligned, making your character slide all around. As part of my suite of tools to fix this, I vibe coded a tool to help me align frames so this doesn't happen.

32 replies · 27 reposts · 340 likes · 54.4K views
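One common way to build the kind of frame-alignment tool described in the quoted tweet is to shift each frame so the centroid of its opaque pixels matches a reference frame. A minimal sketch under that assumption; frames are modeled as 2D 0/1 alpha masks, and the helpers are illustrative rather than from the actual tool:

```python
def centroid(mask):
    # mask: 2D list of 0/1 values marking opaque pixels.
    # Returns the (row, col) centroid of the opaque region.
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def align_offsets(frames):
    # Integer (row, col) shift that moves each frame's centroid
    # onto frame 0's centroid, so the character stays planted.
    ref_r, ref_c = centroid(frames[0])
    return [(round(ref_r - r), round(ref_c - c))
            for r, c in (centroid(f) for f in frames)]
```

Centroid alignment handles drift between generated frames; walk cycles, where the silhouette legitimately changes shape, usually need a hand-picked anchor point instead.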
AJ Ghergich
AJ Ghergich@SEO·
@robin_liquidium @ajambrosino I know, but there are a million things they need to do on the normie side of the table... they should put 70% of the effort there for now.
0 replies · 0 reposts · 2 likes · 149 views
Robin | Liquidium
Robin | Liquidium@robin_liquidium·
@SEO @ajambrosino You can switch between simple and developer mode in settings. Simple is the mode for normies.
1 reply · 0 reposts · 2 likes · 143 views
Orin Thomas
Orin Thomas@orinthomas·
@SEO Those that forget BonzaiBuddy are doomed to reinvent it ;-p
1 reply · 0 reposts · 1 like · 86 views
AJ Ghergich
AJ Ghergich@SEO·
@ajambrosino Not for noobs you didn't, though. I like the new mode, it's a start for sure. Or am I missing something?
1 reply · 0 reposts · 3 likes · 185 views
AJ Ghergich
AJ Ghergich@SEO·
@gregberryai I wanted that experience badly lol.. I got it working OK after a ton of tweaking and DisplayPort 1.4. Glad it's working for you; better than I thought. Size-wise it's amazing.
0 replies · 0 reposts · 1 like · 12 views
Greg Berry
Greg Berry@gregberryai·
@SEO Interesting. I've had exactly 0 issues. It's, by far, the best monitor I've ever had.
1 reply · 0 reposts · 1 like · 12 views
Matthew Berman
Matthew Berman@MatthewBerman·
Thinking about getting the Apple Studio Display 27"; is it worth it? Right now I have the LG 34" UltraWide 5K (about 7 years old, held up very well).
65 replies · 3 reposts · 59 likes · 25.4K views
IAmIrv.Sol
IAmIrv.Sol@IAmIrvSol·
@SEO Goblin mode all summer 👺👹
1 reply · 0 reposts · 1 like · 36 views
AJ Ghergich
AJ Ghergich@SEO·
@koltregaskes @david_saint_ Didn't Elon say multiple times the current size for Grok is 4.3 etc 500 and he gave the roadmap and sizes for each model?
1 reply · 0 reposts · 1 like · 75 views
Kol Tregaskes
Kol Tregaskes@koltregaskes·
@david_saint_ Hard to say but with the estimate of a much larger model for the Gemini range, I'm wondering if that's something to do with the TPUs.
1 reply · 0 reposts · 0 likes · 279 views
Kol Tregaskes
Kol Tregaskes@koltregaskes·
GPT-5.5 likely has around 1.5T parameters after a sanity check of the Incompressible Knowledge Probes paper that originally claimed 9.7T.

GPT-5.5 = 1.5T
Claude Opus 4.7 = 1.1T
GPT-5 = 1.3T
Grok-4.20 = 768B
Gemini 3.1 Pro = 4.7T

Benjamin and Lawrence reproduced the code and regression before identifying an undocumented floor on low scores and serious ambiguities in about 25 percent of the hard-tier questions. The IKP dataset pulls heavily from obscure researcher records via DBLP and OpenAlex, where a quarter of tough questions proved ambiguous or had incorrect gold answers, giving the revised estimates real punch. lesswrong.com/posts/veFMEzDD…
Kol Tregaskes tweet media
9 replies · 7 reposts · 111 likes · 11.3K views
AJ Ghergich
AJ Ghergich@SEO·
@gregberryai @MatthewBerman I have this monitor... do not under any circumstances get this monitor for a Mac... ultrawides like this don't work well with Apple.
2 replies · 0 reposts · 0 likes · 62 views
Greg Berry
Greg Berry@gregberryai·
@MatthewBerman I highly recommend the Samsung 57" Odyssey Neo G9 (G95NC) Series Dual 4K UHD 1000R Curved Gaming Monitor. The thing is ridiculous in all the best ways. Super crisp, insane screen real estate.
Greg Berry tweet media
5 replies · 0 reposts · 11 likes · 1.3K views