Nick Venezia
11.6K posts

Nick Venezia
@DataNick
Self Proclaimed #BotKiller | FOUNDER || #BigData Cruncher / #AdTech, “Data Royalties™”, LLM white paper connoisseur, and on a mission to Keep Data Human.
Los Angeles, CA · Joined April 2011
2.7K Following · 5K Followers

@venturetwins And not having to cover compute and API costs… all-you-can-eat buffet!

The LangChain post serves as a catalog of the failure modes modern deep agents run into, but it does not offer the right solutions to address them.
#HarnessEngineering is fundamentally about coupling. LangChain demonstrates this #coupling through prompt strings and #Python #middleware, which can be likened to the 2026 equivalent of punch cards: a superficial grammar layered over a substrate that lacks essential semantic primitives.
It raises the question: when will the industry focus on addressing the root of the problem rather than just another petal of the 🌼?
For the full LangChain post, see: langchain.com/blog/improving….
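The coupling complaint can be made concrete with a toy sketch (my own illustration, not LangChain's actual API): behavior dispatched by matching raw substrings in a prompt, so renaming a phrase silently breaks the pipeline instead of failing loudly.

```python
# Hypothetical sketch of prompt-string coupling, the pattern criticized
# above. Illustrative only; not LangChain's real middleware interface.

def summarize(prompt: str) -> str:
    return "summary: " + prompt[:20]

def translate(prompt: str) -> str:
    return "translation: " + prompt[:20]

# Routing table keyed on raw prompt substrings: the "grammar" lives in
# strings, with no semantic primitive underneath.
MIDDLEWARE = [
    ("summarize the following", summarize),
    ("translate the following", translate),
]

def route(prompt: str) -> str:
    for marker, handler in MIDDLEWARE:
        if marker in prompt.lower():   # string coupling: fragile to rewording
            return handler(prompt)
    return "no handler matched"        # silent failure mode
```

Reword the trigger phrase even slightly and the request falls through to the catch-all, which is exactly the kind of failure a semantic substrate would prevent.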


@heynavtoor They do call it “generative AI”; its purpose is to generate ideas. I feel that is often forgotten.

🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up.
Not sometimes. Not until the next update. Always. They proved it with math.
Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.
And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.
Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.
The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing.
So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.
OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.
This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.
Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
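The grading incentive described above can be sketched in a few lines (illustrative numbers, my own toy model rather than anything from the paper): under 0/1 scoring where "I don't know" earns the same zero as a wrong answer, guessing weakly dominates abstaining at every confidence level, and only a penalty for wrong answers changes that.

```python
# Toy model of expected benchmark score for one question, illustrating
# why binary grading rewards confident guessing over honest abstention.

def expected_score(p_correct: float, abstain: bool,
                   wrong_penalty: float = 0.0) -> float:
    if abstain:
        return 0.0                                   # "I don't know" scores zero
    # Guessing earns 1 with probability p_correct, minus any penalty when wrong.
    return p_correct - (1.0 - p_correct) * wrong_penalty

# Under the common 0/1 scheme (no penalty), guessing always at least ties:
for p in (0.1, 0.5, 0.9):
    assert expected_score(p, abstain=False) >= expected_score(p, abstain=True)

# With a penalty for wrong answers, abstaining wins at low confidence:
assert expected_score(0.1, abstain=False, wrong_penalty=1.0) < 0.0
```

The optimal policy under the first scheme is exactly what the post says the models learned: never abstain.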


@vasuman If the ads are custom-tailored to you, though, that means they are using AI on your demographics, psychographics, the context of your historical requests, and your purchase behavior to sell you shit.

MCP Moves From Protocol to Production open.substack.com/pub/aicentral/…

This is just a way for the frontier LLM companies to collect all of your data. They’ve been trying to convince people to give them more and more of it. Because it’s open-source software it sounds benign, but you’re giving the LLM all the data on your computer. You are giving them something very valuable, and they are also giving you something very valuable in return.
This is also an ingenious way for the frontier model companies to insulate themselves from liability in case something goes wrong. Since the frontier model is not directly controlling your computer, they can just blame it on the middleman.

Grok is a raw firehose of unfiltered human emotion. People's reactions to this post show the value of @Grok as a barometer of real-time human sentiment (@OpenAI v #xAI).
#AI must ingest the @Grok firehose to fuel its fire! @elonmusk just happens to be the fire marshal.
It takes "Come on Baby Light My Fire" to a whole new level.


Cancelled our corporate @OpenAI account today; we were spending ~$10K a year.
@xai is better for real time data
@Gemini is better for travel, local YouTube
& @claudeai is much better for corporate (Cowork and Project features specifically)
ChatGPT isn’t keeping up imo — and I don’t trust them with my corporate data
Long game, but I think ChatGPT is 4th place now

@iamshackelford Love the 2x on website visitors (6.5M) to cans sold (12.9M) ratio! Well played. 🌬️🍃

You know you can just build your own AI wrappers for anything you want directly into Gemini with Gems?
The instructions and input are key to getting what you want.
For example, I have a Gem 100% dedicated to reading Meta's AI whitepapers. That's all it does. Then, I upload creative to it and ask for it to critique it.
Because this Gem is already fully aware of how Meta's GEM and Andromeda work together and how Meta has built these tools, it's helped me better articulate to clients and creative teams what I need.
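For readers who want a starting point, a hypothetical Gem instruction along these lines (the wording, scope, and file names here are my assumptions, not the author's actual Gem):

```
You are a creative-review assistant. Your only job is to critique
uploaded ad creative against the Meta AI whitepapers attached as
knowledge files (e.g. the GEM and Andromeda papers).

For each uploaded asset, explain:
1. How Meta's ranking and retrieval systems would "read" it.
2. Whether it is a minor variation or a materially different concept.
3. What to change so the system can distinguish it from its siblings.
```

The point is the narrow scope: one job, one knowledge base, so every answer is grounded in the same source material.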
The one thing I keep coming back to with every recommendation is this:
Stop "Iterating," Start "Diversifying"
Instead of 5 variations of the same product image (background color, text overlay, etc.), we want 5 materially different-looking images so that Meta's "AI hooks" can tell the difference and know which direction to go.
AI Hooks are the new "Ad Hooks," and the more you understand what an AI hook is, the better you can get at building creative.
I'm still continuously adding more to this Gem, but it has been a game-changer for me in building decks and in articulating that what I want from our ad teams each week isn't just the same creative, but ad ecosystems.
I'm now trying to build a vast library of AI Hooks for each business I work with to dial this in even more.

@alexalbert__ I can still hit the limit within one single prompt, with nothing uploaded. Claude Max customer, even… I watched it try to compress… but it still failed.
Guess that's what happens when you vibe out with physics and geometric prompts and not just another UI…

In case you missed it, earlier this week we fixed one of the most common frustrations on Claude.ai: hitting context limits mid-conversation.
Claude now intelligently compacts earlier context automatically when you're nearing the limit so the chat can keep going.
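The idea behind compaction can be sketched in a few lines (a naive illustration of the concept, not Anthropic's actual implementation): when the transcript nears a token budget, older turns are collapsed into a single summary message so recent context survives intact.

```python
# Naive sketch of automatic context compaction. In a real system the
# summary would be written by the model; here a stub stands in for it.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)          # rough 4-chars-per-token heuristic

def compact(messages: list[str], budget: int,
            keep_recent: int = 2) -> list[str]:
    total = sum(estimate_tokens(m) for m in messages)
    if total <= budget or len(messages) <= keep_recent:
        return messages                    # under budget: nothing to do
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Stand-in for a model-written summary of the older turns.
    summary = f"[compacted {len(old)} earlier messages]"
    return [summary] + recent
```

The trade-off is that detail in the compacted turns is lossy by design, which is why the most recent turns are kept verbatim.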

I know it's hard to convey through just words. But for those who don't know me personally, I'm just here to help.
There is no need for me to gain more work, more followers, more recognition.
God blessed me with being in this industry at the right place at the right time, and I grew with it.
As a veteran I am just trying to be helpful and give back.
The reason I tweet about Meta bugs, weird UI changes, and ad status updates on the platform is that there was no voice for it. For the longest time I tweeted into the abyss here.
I am not just a media buyer... I spend more time actually working on full on marketing campaigns than I do inside Meta ads manager.
But still, between the DMs, texts, etc., there is no voice speaking on behalf of media buyers about the biggest share of wallet: Meta. Essentially I'm trying to be a voice so you don't feel alone, but heard.
Many, and I mean MANY, Meta folks follow me here and on LinkedIn. To their credit they are always listening, but moving things quickly at a company of their size is just not a reality most of the time.
My tweets are designed to bring to light the pain, frustration, and annoyances we have as a collective. Not just Meta, but for all ad platforms (except Google because I refuse to look at it).
Anyway, I'm rooting for everyone to have a great and stress free BFCM. Just remember:
1. Leave your evergreen ads on
2. Launch flexible ads (still working for me even w/bug) for BFCM
3. 1DC those BFCM ads
4. Scale up Fri, drop spend Sat - Sun, scale up Monday 12pm then increase budget another 25-50% at 6pm - 11pm PT.
Best of luck.

Claude + Facebook Ads MCP is legitimately insane 🤯
This MCP integration turns Claude into a full-stack ads analyst.
Generates complete client reports with one prompt.
Perfect for agencies & e-comm brands buried in Facebook Ads Manager data.
The problem:
Building client reports manually is brutal.
You're exporting CSVs, calculating metrics, creating charts, formatting slides, all for data that's outdated by the time you finish.
This Claude MCP setup solves it:
→ Direct connection to your Facebook Ads account
→ Pull any performance data with natural language prompts
→ Auto-calculates ROAS, CPA, CTR, conversion rates
→ Generates visual charts and breakdowns instantly
→ Creates formatted reports with insights + recommendations
→ All built in real-time from a single prompt
No manual exports.
No spreadsheet wrestling.
No outdated reports.
What you can generate:
→ Account-level performance summaries
→ Campaign and ad set breakdowns
→ Demographic and placement analysis
→ Custom date ranges and comparisons
→ Actionable optimization recommendations
Built with Claude MCP.
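The metric math such a report automates is simple once the raw fields are in hand. A minimal sketch (field names are my assumption, standing in for what an ads API would return):

```python
# Core ad-performance metrics from raw account fields. Guards against
# division by zero so empty campaigns don't crash the report.

def ad_metrics(spend: float, impressions: int, clicks: int,
               conversions: int, revenue: float) -> dict[str, float]:
    return {
        "roas": revenue / spend if spend else 0.0,           # return on ad spend
        "cpa": spend / conversions if conversions else 0.0,  # cost per acquisition
        "ctr": clicks / impressions if impressions else 0.0, # click-through rate
        "cvr": conversions / clicks if clicks else 0.0,      # conversion rate
    }

m = ad_metrics(spend=500.0, impressions=100_000, clicks=2_000,
               conversions=50, revenue=2_000.0)
# roas 4.0, cpa 10.0, ctr 0.02, cvr 0.025
```

The MCP layer's value is in fetching those raw fields live and narrating the results, not in the arithmetic itself.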
Want the complete setup guide?
> Comment "MCP"
> Like this post
And I'll send it over (must be following so I can DM)

Q: How do you protect the #AIBubble ?
A: You use bubble wrap, of course.
Insulate BIG 🫧 with smaller bubbles—i.e., $1B here, another $B there—and the mothership AI-bubble wrapped flywheel keeps churning.
Damn, I love🥰 bubble wrap! 🫧 🪽

@iamshackelford Sounds like your Wisdom growth line is on an upward trend.