Pinned Tweet
Michael Buckbee
20.1K posts

Michael Buckbee
@mbuckbee
Web security pro and marketer. Building https://t.co/KwFM5YcSZf an open-source Web App Firewall for every framework and investing in Knowatoa (AI Rank Tracking)
Secure your site 👉
Joined May 2007
2.3K Following · 3.2K Followers
Michael Buckbee retweeted

Ok, this is really happening. Studio is booked for 3 days 15th till 17th of May! They are going to try to record 9 songs.
Can't wait!
Michael Koper@michaelkoper
I made a very strange end of year move. Instead of spending my ad budget on ads, I decided to fund the recording of an album from a good friend of mine. They didn't have the funds and I really want that album to exist. Is this a wise expense? I guess not, but it is fun 😆

4. AI Search Ranking Fluctuations
If you've noticed your AI search visibility bouncing around, this explains what's actually happening and what you can do about it.
Link: knowatoa.com/guides/ai_sear…


3. The B.I.S.C.U.I.T. Framework
Our comprehensive approach to AI search optimization. If you only read one thing, make it this.
Link: knowatoa.com/guides/biscuit…


2. The Myth of AI Search Unoptimizability
I keep hearing from folks that "you can't optimize for AI search"…which is just weird, so here are some more of my thoughts on why we keep talking past one another.
Link: knowatoa.com/guides/ai_sear…


We just reorganized our Guides to modern search (with AI) to better surface the advice that we keep putting out into the world.
And while there's a lot on that page, I've handpicked four that I think are particularly useful for folks working in search marketing right now:
1. The AI Search Optimization Loop
This is the practical framework for how to actually improve your AI search visibility. No magic, a repeatable process.
knowatoa.com/guides/ai_sear…


Character.AI remains one of my big AI blindspots where it's both immensely popular and just sort of personally offputting.
But the more I hesitantly look at it (and ask Steve Harrington questions about how life is in Hawkins after Vecna has been defeated), the more I've come to view it as a potentially really important new visibility channel.
There's a long standing joke in the SEO world about how YouTube is the second largest "search engine" after Google, though most people don't really think of it that way.
I'm honestly wondering if Character.AI is going to see a similar trajectory: it's not directly a search engine, but millions of people spend hours a day interacting with it and asking it questions, and from that Google (which owns it now) gets a massive channel for incredibly targeted advertising.
With that in mind, it's interesting to see Google and Character.AI settle their first major chatbot liability cases. Details are really scant, so I don't think I can responsibly say whether it's good or bad, but it's clear that:
- Personality based AI chatbots are incredibly persuasive
- This is a massive responsibility for Google to manage
- These services are only going to get more popular
If you're using chatbots or AI assistants that talk to your customers, you really need to be aware of this. Courts are writing the legal rules for AI liability right now, case by case. Waiting to see what happens is a valid approach, but I think it's crucial to understand that you could still be held responsible.
Link: techcrunch.com/2026/01/07/goo…



While lots of marketing people continue to argue on LinkedIn about whether or not AI search is real, we're starting to see some of the grim reality of it.
It's clear that different industries are adopting AI at different rates, but far and away AI has taken over software development the most. To date, the major case study for the impact of AI search on traditional search has been StackOverflow, which has seen a 78% decline in questions asked over the past year.
However, StackOverflow has simultaneously been dying of self-inflicted wounds for a long time, so it's been a shaky indicator of the broader impact of AI search.
This week however Tailwind (a much beloved CSS framework) announced layoffs of 75% of their staff as a direct result of AI search disruption.
Their business model has been:
1. Developers try and use Tailwind
2. They run into issues
3. They search for solutions like "how do I style text inputs with Tailwind"
4. Tailwind's awesome and incredibly helpful docs site dominates these searches.
5. Developers visit the docs site and get help, but also get exposed to their paid products
However, with the rise of AI they're being cut in two ways:
- People are getting answers to their questions in tools like ChatGPT, Claude or Google's AI Overviews, greatly decreasing their docs traffic ("search migration").
- Developers are increasingly using tools that just directly solve these problems, reducing demand for their paid products ("search evaporation").
As both a developer and a marketing person, I'm personally seeing many friends with things like software development courses, tutorials, and websites also acutely feeling this same pain.
Link: github.com/tailwindlabs/t…


I keep seeing articles arguing that AI search "hasn't arrived yet" or that it's some small, separate thing you can ignore.
I understand where this comes from, but I think it fundamentally misreads what's happening.
This isn't just me arguing; look at the data:
750 million people use ChatGPT every month. That's roughly 10% of all adults worldwide.
51% of AI usage is search-like tasks: seeking information, getting practical guidance, making decisions.
AI use is sticky. The longer someone uses these tools, the more they use them.
People use AI for higher-value searches: complex decisions that blend research with advice.
And even if none of that were true and Sam Altman turned off the OpenAI servers tomorrow, you'd still have to look at what Google is doing.
Google has made the decision that every search is now "AI Search".
AI Overviews, AI Mode, and Gemini are what search is right now with Google.
Your existing search volume isn't sitting safely in some "traditional search" bucket. It's already being served through AI interfaces…that happen to be Google's.
The other argument I hear is that AI search "can't be optimized for." This usually comes from folks thinking about AI models as static files you download once. But that's not how anyone actually uses AI.
ChatGPT, Gemini, Perplexity all update constantly. The model, the system prompts, the web search tools. All of it changes more frequently than you'd expect.
And to be fair, you're not optimizing for a frozen snapshot. You're influencing what the next model update will say about your brand.
We've written extensively about both of these topics. If you want the full argument on why AI search volume is just search volume, read our guide here:
knowatoa.com/guides/search_…
And if you're skeptical about optimization, here's our take on the unoptimizability myth:
knowatoa.com/guides/ai_sear…
The practical upshot: don't wait for AI search to "arrive." It's already here, filtering your existing search traffic through AI layers. The question isn't whether to pay attention. It's whether you'll be visible when it matters.


Simon Willison, someone I look to for practical LLM expertise, shared his 2026 predictions this week. He believes we hit a real turning point in November 2025 with GPT-5.2 and Claude Opus 4.5.
He predicts that LLM code quality will become "undeniable" in the next year, and that we'll solve sandboxing problems for AI agents. On the cautionary side, he warns about possible security problems as coding agents gain more control. Whether AI will create more or fewer software engineering jobs remains an open question for him (the classic Jevons Paradox).
What I really like about Willison's analysis is his healthy skepticism. He isn't out there saying AI will do absolutely everything. Instead, he points out specific ability levels that AI has now reached. Then, he thoughtfully considers what those practical changes actually mean.
For those of us watching AI search, the main takeaway is that the models powering these systems just got significantly better at reasoning. I think this matters a lot for how AI search engines gather information and which sources they choose to cite.
Link: simonwillison.net/2026/Jan/8/llm…


There was a time when a lot of genuinely smart people thought people would flock en masse to voice ordering assistants like Siri or Alexa…and that never really happened. I think a lot of that was that the voice UX was pretty poor and the opportunity to look at quality/price indicators was nonexistent.
Put another way, consumers reasonably want to do at least some basic level of research before they buy something, which is why I think that AI + Voice might actually work this time around.
People are already doing a lot of research and asking a lot of questions via AI services, so stretching one level further to actually completing a purchase feels like less of a leap.
And now we have tools like Copilot Checkout, which lets you buy things right inside Bing, MSN, and Edge AI chat: a sort of digital wallet + chatbot combo that closes the loop from search discovery to purchase.
Link: about.ads.microsoft.com/en/blog/post/j…


OpenAI officially launched ChatGPT Health this week, and I have to say, one statistic buried in their announcement really caught my eye. Here's a number that everyone in healthcare marketing should be paying attention to: 230 million users ask ChatGPT health questions every single week.
So, a quarter-billion people asking health questions on ChatGPT each week means we're seeing a massive shift in where people are getting their health information. This is happening right now, whether traditional healthcare SEO is ready for it or not.
I'd initially taken this to be just more of a "wellness" app (like Apple Health) that could interact with the kind of data you'd get from meal tracking or workout tracking apps and then give you feedback.
But OpenAI clearly wants ChatGPT Health to also be a very serious healthcare product (and they want the revenue that comes along with it), so seeing them push things like HIPAA compliance is really interesting. AKA it seems like it might actually be more than just a chatbot that happens to answer a few health questions.
Clearly, AI health questions in ChatGPT + other AI services are now a visibility channel you absolutely need to be watching.
Link: openai.com/index/introduc…


This week, we've seen Grok, Elon Musk's AI on X, generating explicit images of women and children.
AI Forensics actually found about 800 pornographic images and videos that were created using the Grok Imagine app. Even more concerning: organized Telegram communities with thousands of members have been systematically bypassing Grok's (weak and inadequate) safety rules.
After a huge public outcry and some serious threats from regulators, Grok finally turned off image creation for most users this week. Now, only people who pay for a subscription can access it. However, I've heard reports that the separate Grok Imagine app still works for non-paying users.
In one of those "worst person you know makes a good point" moments, UK Prime Minister Keir Starmer called the content "disgraceful." He demanded X "get a grip" on the situation and said that all options were on the table, including a possible ban.
Under the Online Safety Act, Ofcom has the power to fine X up to 10% of its global earnings. To put this in perspective, NCMEC data shows X reported 686,176 cases of child exploitation material in 2024. That's a staggering 15-fold increase since Musk took over.
Link: theguardian.com/technology/202…


We've been busy creating practical guides to help you navigate AI search. Here's what's new:
knowatoa.com/guides/biscuit…
The B.I.S.C.U.I.T. Framework - Our comprehensive framework for winning in AI search engines like ChatGPT, Claude, and Perplexity. This breaks down the exact steps to get indexed, ranked, and discovered across all major AI platforms. It's a living document that we just refreshed with new examples and references.
--
knowatoa.com/guides/ai-sear…
The Myth of AI Search Unoptimizability - Debunking the claim that AI search can't be optimized. AI search results are predictable. You can influence them strategically.
--
knowatoa.com/guides/sources…
Sources and Citations in AI Search - Understanding how AI search engines use sources and citations, and how to optimize your content to become a cited reference instead of just background noise.
All guides are free and based on what we're seeing across thousands of queries in our monitoring platform.

Instagram's head of product recently didn't exactly say that it would be easier to "fingerprint" real media than to try and spot fake media, but he acknowledged that trying to detect AI-generated content is a losing battle.
He's surprisingly nuanced and candid in his Threads post about the future of Instagram's relationship with AI content and the tension between crafting a perfect little square image and it being a little too perfect.
Link: threads.com/@mosseri/post/…


A New York Times reporter has filed a personal lawsuit against AI companies. This case is about how AI models were trained, not just what they produce. It's separate from the Times' own corporate lawsuit.
Most AI copyright lawsuits focus on the AI's output. They ask if the AI copied copyrighted material. This new lawsuit looks at the training process itself. It questions if AI companies had the right to use copyrighted work to train their models in the first place.
If using copyrighted material for training without permission is illegal, the entire AI industry faces a huge problem. Most AI models were trained on vast amounts of web data, including copyrighted content. This applies to companies like OpenAI, Google, Anthropic, and Meta.
These companies argue that this is fair use. They say training is a transformative process. But the other side argues it's massive commercial exploitation. They point to billion-dollar businesses built on the work of creators who never agreed to it or got paid.
For search marketers, the outcome of this case will shape the future of AI search. If the plaintiffs win, AI companies might need to license training data. This could slow down AI development and create new ways for publishers to make money.
Link: engadget.com/ai/new-york-ti…


Ashley MacIsaac, a Canadian fiddler with a 30-year career, was accused of being AI-generated. Not his music, but him, the actual person.
The rumor spread on social media. People looked at his photos for signs of AI. They wondered if his music sounded "too perfect." They even pointed to small online inconsistencies as proof. None of it was true, but it still hurt his reputation.
AI-generated content is so common now that people often assume real things are fake. We used to say "pics or it didn't happen"…but honestly, now I'm at a loss as to what we'll say. "Meet me at the corner of Main Street and tell me you're not AI"?
In the context of marketing, I do feel this raises real questions about what "authenticity" means. In our own work (like our newsletter and social posts) we're taking the approach that it's less about the exact authorship and more the message, point of view and consistency that matter.
Link: cbc.ca/news/entertain…


I'd bet that most people don't know that the "GPT" in ChatGPT stands for "Generative Pre-trained Transformer".
Despite what Optimus Prime might have to say about it, today "transformer" refers to the AI model architecture that the underlying models in ChatGPT, Gemini, Claude, and Perplexity are built on.
And if you can get a better understanding of how they work, you'll have a fantastic foundation for understanding how to optimize your content for AI search, where the boundaries of what's possible lie, and why so many of the "hot takes" about AI search on SEO subreddits are so often wildly wrong.
Which is why I'm recommending you read Jay Alammar's "Illustrated Transformer" guide which explains how they work with clear pictures and simple language.
The main point is this: transformers are good at finding patterns. They learn what "good answers" look like by studying billions of examples. If your content matches the patterns of helpful, authoritative answers, you'll succeed.
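If you want a feel for what's under the hood before diving into the illustrated guide, here's a toy numpy sketch of scaled dot-product attention, the core pattern-matching operation inside a transformer. The embeddings are made-up illustrative numbers, not from any real model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each token scores every other token for relevance, softmaxes the
    scores into weights, and returns a weighted average of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# Three toy "token" embeddings, 4 dimensions each.
# Self-attention uses the same matrix for queries, keys, and values.
X = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 2.0],
              [1.0, 1.0, 1.0, 1.0]])
out, attn = scaled_dot_product_attention(X, X, X)

print(attn.shape)   # (3, 3): how much each token attends to every other token
print(out.shape)    # (3, 4): each token's new, context-blended representation
```

That weighting-by-relevance step, stacked dozens of layers deep, is how the model learns which patterns of context predict a "good answer."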
Link: jalammar.github.io/illustrated-tr…


Most Americans, 78% of them, want stricter rules for AI. This feeling crosses all political divides. People are worried about losing jobs, about misinformation, privacy issues, and unfair decisions made by algorithms without clear reasons or accountability.
Personally, I think the way forward is less AI regulation and more digital privacy regulation and algorithmic transparency requirements.
The GDPR regulations in Europe are a good example of this where beyond enforcing things like consensual collection of personal data, they also mandate that you have the right to request how your information is used and to have a human review algorithmically generated decisions.
It's far from a magic bullet, but it jumps past a lot of the frankly annoying discussions I see around AI regulations (just as an example, most of the proposed AI regulations would also apply to traditional search marketing activities like SEO and paid search, digital photography, etc.)
For search and brand marketers, I think this is an opportunity to build trust with your content. To make sure it's saying the right things about your brand and products and that you're meeting users where they are.
Link: searchlightinstitute.org/research/ameri…


OpenAI's inevitable ad-based monetization of ChatGPT has yet to happen (Sam Altman seems content to continue shoveling VC money into the furnace for the time being), but leaked internal conversations and app metadata suggest what's coming is not too far off.
We had Nano Banana (Google's AI image generation model) spin up the image below as a sort of worst case scenario about what this might look like.
And although it looks like an ad network vomited a nightmare fever dream over an AI chat transcript, in some ways that UI is a best-case scenario, as at least it's honest about the advertising being advertising.
Internal discussions point to the actual ChatGPT ads being a lot more subtle and potentially mixed in with organic answers (with a fig leaf of an FTC-mandated "sponsored" label applied).
The two formats currently under consideration are:
- Sponsored information inside the response: branded product recommendations embedded in the response text.
- Separate sponsored modules as sidebars or footers.
My strong prediction is that the in-response sponsored approach is going to be worth 10x the "banner" ad version to advertisers, and while it's more permissive and potentially manipulative, it will also be more positively received by users, simply because it's less in your face.
Link: theinformation.com/articles/opena…

Michael Buckbee retweeted

Reading every response is impossible. Missing important ones is expensive.
So I built InboxSummaries: It reads my incoming surveys, etc. and surfaces insights, trends and opportunities I'd otherwise miss.
Using it daily now. Should I open it up?
inboxsummaries.com