Orit Mutznik

16.1K posts


@OritSiMu

Director of SEO & Growth | ex Forbes, Farfetch, eToro | Mum👧🏻👦🏻🐶😼 #SEO speaker & author. Tweets mine. Top 100 SEOs by SEJ🇦🇷🇮🇱🇬🇧

London, England · Joined March 2011
999 Following · 12.4K Followers
Pinned Tweet
Orit Mutznik@OritSiMu·
🔴Seeing MCP, Claude Skills and Cowork everywhere and not sure what they mean? You’re not alone, nor too late 🟢I just completed the Claude Marketers Course by @jon4growth at @GrowthPair 🎉 👩‍💻 This course is completely FREE and walks you from setup to real use cases across CRM, Paid Social, PPC and SEO, and in that it also helps you see how AI can actually move you across channels, not just within one. 🙋‍♀️ A few hours, $0, and suddenly those “mystery terms” make sense and feel usable. p.s. No catch. Just sharing what genuinely helped me and I will be sharing more, so watch this space! Link👇🏼 claudemarketers.com/?ref=eaf5a433 #ai #claude #seo #marketing #anthropic #course #free #anthropic
Orit Mutznik tweet media
English
0
0
6
301
David McSweeney@top5seo·
@OritSiMu I mean, I still say thank you just in case for sure. Maybe I should also be thanking my kettle.
English
1
0
1
8
David McSweeney@top5seo·
My toaster was looking a bit disgruntled this morning. And I think my fridge has PTSD.
Ole Lehmann@itsolelehmann

anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds). in a recent interview she broke down how she thinks about prompting to pull the best out of claude.

her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and worst of all: overly agreeable (even when you're wrong).

the reason comes down to training data: every new model is trained on internet discourse about previous models. and a lot of that discourse is negative:
> rants about token limits
> complaints when it messes up
> people calling it nerfed

the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work.

when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")... you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):

1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.

2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).

3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.

4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model: all of it reinforces the anxious mode you're trying to avoid.

5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder"), cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.

6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.

7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often reset: "this is great, keep going." it feels weird to tell an ai it's doing well, but it measurably shifts the next 10 responses.

your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it.

so take care of the model, and it'll take care of the work.
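The playbook above can be sketched as a reusable system prompt. This is a minimal illustration, not anything from the interview: the rule strings paraphrase points 1, 2, and 6, the function name is invented, and the commented-out Anthropic SDK call at the bottom (which assumes the `anthropic` Python package and an API key) only shows where such a prompt would plug in.

```python
# Hypothetical sketch: encode the playbook's positively framed rules
# as a reusable system prompt. Rule text paraphrases the thread.

PLAYBOOK_RULES = [
    # point 1: say what to do, not what to avoid
    "Write in short, punchy sentences.",
    # point 2: explicit permission to disagree
    "Push back if you see a better angle, and tell me if I'm asking for the wrong thing.",
    # point 6: ask for opinions alongside execution
    "After the draft, note anything you think is missing or creates friction.",
]

def build_system_prompt(task_context: str) -> str:
    """Combine task context with the positively framed working agreements."""
    rules = "\n".join(f"- {rule}" for rule in PLAYBOOK_RULES)
    return f"{task_context}\n\nWorking agreements for this session:\n{rules}"

prompt = build_system_prompt("You are helping draft marketing copy.")
print(prompt)

# Where it would plug in (assumes the `anthropic` SDK; model name is a placeholder):
# client = anthropic.Anthropic()
# msg = client.messages.create(
#     model="claude-sonnet-4-5",
#     max_tokens=1024,
#     system=prompt,
#     messages=[{"role": "user", "content": "Draft the launch email."}],
# )
```

The point of keeping the rules as data is that the same "good mood" frame travels with every session instead of being retyped, so a heavy-correction conversation can be restarted from a clean, respectful baseline.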

English
1
0
0
89
Orit Mutznik retweeted
Glenn Gabe@glenngabe·
The entire AI news section (over 850K URLs) was completely deindexed. Poof, gone. But again, it took a news article about the situation to get this on Google's radar.
Glenn Gabe tweet media
English
2
1
6
1.6K
Orit Mutznik@OritSiMu·
Very interesting perspective
Robert Sterling@RobertMSterling

Anthropic’s CEO keeps talking about AI wiping out jobs because he’s trying to IPO this year. If he positions Claude as armageddon for jobs, his TAM becomes “all white-collar human labor,” not just AI agents or SaaS.

It’s completely self-interested. All the concerns he’s expressing about job disruptions are fake. It’s a marketing gambit to create hype and FOMO among the people he needs more than anyone else this year: institutional investors like BlackRock, Fidelity, pension funds, and sovereign wealth funds.

If these investors pay for tickets on the hype train—if he can make them believe that AI will eliminate half of white-collar jobs, with Anthropic, as the dominant leader in enterprise AI, positioned to capture the surplus margin—the IPO will be oversubscribed and Anthropic can raise more funds for the company at a higher valuation.

But Dario (or, at least, his bankers) knows that these investors are more fiscally disciplined than they used to be. A lot of them got burned during Covid SPAC-mania and don’t want to risk it again. They’re going to challenge Anthropic about whether it will ever get to sustainably high gross margins, or if its arms race with OpenAI will lead to kilowatt-hours permanently suppressing gross margins. They’re going to ask pointed questions about Anthropic’s massive capex and whether it will ever generate accretive ROIC. And Dario might not have the answers they’re looking for.

So that’s why—to answer Austen’s smart question—you keep seeing Dario in the news and the podcast circuit, spreading doom and gloom about widespread job loss. It’s not to make you afraid of losing your job. It’s to get Wall Street afraid of missing out on his IPO.

English
0
0
0
121
Orit Mutznik retweeted
Aleyda Solis 🕊️
🚨 Common AI Search Tracking Mistakes and Misunderstandings:

1. Treating traffic-based metrics as a proxy for AI visibility or the full measure of AI impact. Organic traffic doesn't show whether a brand is being surfaced, cited, or recommended inside AI-generated answers, especially when no click happens.

2. Treating AI search as a performance channel only, rather than a performance and branding channel. AI search influences both direct response outcomes and brand outcomes. A brand can benefit from AI-generated recommendations, stronger recall, better category association, and improved perceived credibility without generating an immediate click or a directly attributable conversion.

3. Drawing conclusions from single-session prompt tests. Given the response volatility of current AI systems, a single session is not a meaningful data point. Reliable presence requires repeated testing across multiple sessions per prompt and across multiple tracking periods.

4. Assuming your visibility and performance are the same across all AI platforms. Platform behavior varies: citation volumes, source selection patterns, response style, and brand recommendations differ across ChatGPT, Perplexity, Google AI Overviews, Google AI Mode, Gemini, and Copilot. A brand can be well represented in one system and much less visible in another.

5. Collapsing mentions, citations, links, recommendations, and sentiment into one generic visibility metric. Counting all brand appearances in AI answers as the same type of visibility hides whether a brand is actually being selected, supported by source citations, driving potential traffic, or being described favorably. These distinctions are critical for diagnosing whether the opportunity is one of recall, authority, referral, or brand positioning.

6. Applying ranking position logic to AI answers. In traditional search, positions one, two, and three have defined strategic meaning. AI answers do not work this way. There is no stable ranking position. A brand is either present in an answer or it is not, described in a particular way or not, recommended or merely referenced. Prominence within an answer matters, but it is not equivalent to a ranking position and should not be measured or communicated as one.

7. Ignoring representation accuracy. Whether a brand appears in an AI answer is only part of what matters. How it is described matters equally. A brand can appear frequently in AI answers while being mischaracterized, positioned in the wrong category, or associated with weaknesses rather than strengths.

8. Extrapolating from industry statistics to brand-level conclusions. Published research on AI search is generated from specific samples, time periods, and query sets. These findings provide useful directional context. They do not predict what any individual brand's AI visibility looks like. Brand-level measurement requires brand-level data collected systematically over time.

What others should I add?
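Points 3 through 5 above imply a concrete measurement shape: repeat each prompt across sessions, keep platforms separate, and track mentions, citations, and recommendations as distinct signals rather than one blended score. Here is a minimal sketch of that idea; the data, field names, and function are all hypothetical, not any real tracking tool's API.

```python
# Illustrative sketch only: per-platform visibility rates from repeated
# prompt sessions, with mention/citation/recommendation kept separate.
from collections import defaultdict

# One record per (platform, prompt, session) run. The booleans are
# hypothetical observations of how the brand appeared in each answer.
runs = [
    {"platform": "ChatGPT",    "prompt": "best crm", "mentioned": True,  "cited": True,  "recommended": False},
    {"platform": "ChatGPT",    "prompt": "best crm", "mentioned": True,  "cited": False, "recommended": False},
    {"platform": "Perplexity", "prompt": "best crm", "mentioned": False, "cited": False, "recommended": False},
    {"platform": "Perplexity", "prompt": "best crm", "mentioned": True,  "cited": True,  "recommended": True},
]

def visibility_by_platform(runs):
    """Average each signal per platform over repeated sessions."""
    totals = defaultdict(lambda: {"n": 0, "mentioned": 0, "cited": 0, "recommended": 0})
    for run in runs:
        t = totals[run["platform"]]
        t["n"] += 1
        for signal in ("mentioned", "cited", "recommended"):
            t[signal] += run[signal]  # bool adds as 0 or 1
    return {
        platform: {s: t[s] / t["n"] for s in ("mentioned", "cited", "recommended")}
        for platform, t in totals.items()
    }

report = visibility_by_platform(runs)
print(report)
# ChatGPT: always mentioned, cited half the time, never recommended;
# Perplexity: mentioned, cited, and recommended in half of sessions.
```

A single run of "best crm" on either platform would have told a different, and misleading, story, which is exactly the single-session mistake the list warns about; the separate rates also make clear whether the gap is one of recall, authority, or recommendation.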
Aleyda Solis 🕊️ tweet media
English
9
10
52
2K
MJ Cachón@mjcachon·
When did this happen 🤩🤯
MJ Cachón tweet media
Spanish
7
7
63
2.3K
Harpreet@harpreetchatha_·
Airops popularized the term "content engineer" in search, but on their own blog, they are genuinely content engineering multiple pieces of turd. This is your sponsor & host of SEO events, conferences, webinars, etc. The big-time SEO influencers that work with them obviously turn their backs on the dumb shit they are doing in the name of $$$.
Harpreet tweet media
English
10
0
26
1.6K
Suganthan Mohanadasan@suganthan·
Seeing Claude nerf Opus 4.6 last week, then re-release it and call it Opus 4.7
Suganthan Mohanadasan tweet media
English
3
0
9
353
Ole Lehmann@itsolelehmann·
i'm running a live claude cowork workshop for non-technical people on april 22.

by the end of the 2 hours, you'll have a fully set up marketing system on your computer that:
> produces a full week of content in one sitting, dialed into your voice so it sounds like you on your sharpest day
> turns any marketing framework or post into a repeatable skill that claude runs on command for you
> builds sales pages in minutes so you stop paying designers and copywriters thousands
> schedules tasks to run while you sleep so you wake up to finished drafts, fresh ideas, and updated reports every morning
> writes launch emails, newsletters, and sequences using the same frameworks behind my 6-figure product launches

all click by click, on your machine, while i do it on mine.

here's everything that you get:
• the full 2-hour live workshop where you build everything in real time
• 16 personal skills that i built over 100s of hours for my own business
• the complete recording so you can rewatch anytime
• a self-paced course version of all the material
• access to the Claude Marketing OS telegram group

this system runs 90% of the marketing behind my 7-figure brand doing 15M+ impressions/month, and it's all yours come april 22nd.

comment "Cowork" and i'll DM you the link
Ole Lehmann tweet media
English
3.1K
109
1.6K
223.5K
Orit Mutznik@OritSiMu·
@suganthan Long live Big Head. Damn that show was so ahead of its time😭
English
1
0
1
94
Harpreet@harpreetchatha_·
AI visibility is one of those things that can be manipulated to tell whatever story you want. HubSpot's senior director of global growth says they increased citation share by 433%. Meanwhile, overall ChatGPT citations are down & they align with the organic traffic chart. You could measure specific prompts and say visibility is up by X% over a certain period of time. You could measure another set of prompts and say visibility is down Y% over another period of time. They also show up behind Salesforce for "What's the best CRM software" in a random search.
Harpreet tweet media
English
7
1
25
1.8K
Orit Mutznik@OritSiMu·
@lilyraynyc @Awin_Global @peec_ai Yep. Definitely not a query for brands to bother targeting with self-promotional listicles. A complete waste of time at best and a huge risk at worst
English
0
0
0
185
Lily Ray 😏@lilyraynyc·
Sneak peek from my keynote next week at @Awin_Global in Chicago! I used @peec_ai to track 5,000 prompts related to "best product" queries in ChatGPT. 49% of all URLs that ChatGPT pulled came from review/affiliate domains! (1/2)
Lily Ray 😏 tweet media
English
6
4
51
4.1K
Orit Mutznik retweeted
Loganix@loganix·
confirmed: self-promotional listicles are on google's radar. proceed with caution 😉 @lilyraynyc
Loganix tweet media
English
2
2
6
396