
Eamonn Doyle reposted

The most valuable skill in 2026 is:
• Not coding
• Not running ads
• Not building AI apps
It’s something no one is talking about.
But I’ve QUIETLY made $700,000 in the last 11 months using this.
And it compounds every month.
• Without going viral
• Without chasing clients
• Without building a personal brand
All you need:
ChatGPT, Claude, and 1 hour a day.
I broke down the entire system in a 6+ hour training.
With all my prompts, workflows, and execution plan.
To get it:
Like this post
Comment “SKILL”
I’ll DM you the full system for FREE.
Only for the next 48 hours.


I want to help 1 million people make a living writing on the internet.
So here's what I'm doing:
For the next 48 hours, I'm giving away my 2 best-selling books for free:
• The Art & Business of Online Writing
• The Art & Business of Ghostwriting
Why?
Because I've spent 10+ years figuring out how to make money as a writer.
(After I was told in college, "Nobody makes a living as a writer!")
And since then, I have generated over $1,000,000 across:
• SaaS ($1m+)
• Agency ($5m+)
• Consulting ($2m+)
• Paid Newsletter ($2m+)
• Self-Published Books ($1m+)
• Cohort-Based Course ($3m+)
• Low-Ticket Digital Products ($2m+)
• High-Ticket Group Coaching ($10m+)
So, yeah...
You can earn a lucrative living as a writer IF you know how.
Which is why I'm giving away my 2 most popular books (so you can start making a living as a writer too!).
Inside these books, I explain:
• Why you shouldn’t start a blog, how to find your voice as a writer, and the exact steps to build a daily writing habit.
• The quickest way to monetize your writing (and get paid to network and learn from industry leaders in your niche).
• The secret to making more money as a writer (so you can escape the freelance writer hamster wheel).
And tons more!
Comment "books" below and I'll DM them to you for free.


I'm Boris and I created Claude Code. Lots of people have asked how I use Claude Code, so I wanted to show off my setup a bit.
My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don't customize it much. There is no one correct way to use Claude Code: we intentionally build it in a way that you can use it, customize it, and hack it however you like. Each person on the Claude Code team uses it very differently.
So, here goes.
Eamonn Doyle reposted

💻 I make $400/hr online and here’s exactly how I do it.
I put together a guide with 20 proven AI income methods that can earn up to $5,000/day when you scale them.
🔥 Normally $199 but it’s FREE for the next 48 hours.
Want it?
✅ Like
🔁 Repost
💬 Comment “GUIDE”
⏰ Must follow me so I can send it to you


Stop paying for “knowledge base” tools.
Build your own NotebookLM clone locally for free.
Here’s the setup flow →
→ Install Docker (one time).
→ Install Ollama.
→ Pull Gemma 3 1B for the chat model.
→ Add mxbai-embed-large for embeddings (semantic search).
→ Run Open NotebookLM via Docker.
→ Select Chat Model + Tools Model or it won’t work.
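Once those pieces are running, here is a minimal Python sketch of what the chat-model + embeddings pairing does under the hood, talking straight to the local Ollama HTTP API (the sample documents and question are placeholders, and this is the generic Ollama API, not Open NotebookLM's internal code):

```python
# Minimal sketch of local semantic search + chat against a default
# Ollama server (port 11434). Docs and question are placeholders.
import math
import requests

OLLAMA = "http://localhost:11434"

def embed(text: str) -> list[float]:
    # mxbai-embed-large produces the vectors used for semantic search
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "mxbai-embed-large", "prompt": text})
    r.raise_for_status()
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

docs = ["Docker isolates each service in a container.",
        "Ollama serves local LLMs over an HTTP API."]
question = "What does Ollama do?"

# Retrieve the most relevant doc by embedding similarity...
q_vec = embed(question)
best = max(docs, key=lambda d: cosine(embed(d), q_vec))

# ...then let the small chat model answer grounded in it.
r = requests.post(f"{OLLAMA}/api/chat", json={
    "model": "gemma3:1b",
    "stream": False,
    "messages": [{"role": "user",
                  "content": f"Context: {best}\n\nQuestion: {question}"}],
})
print(r.json()["message"]["content"])
```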
Save this video, you’ll build your private research hub in 10 minutes.
Want the SOP? DM me. 💬

BREAKING: 💰 Nvidia is taking a non-exclusive license to Groq’s AI inference tech and hiring key Groq leaders in a $20b deal while Groq keeps operating independently.
Groq says founder Jonathan Ross and president Sunny Madra will join Nvidia, and Groq will be led by Simon Edwards as CEO.
The core hardware idea is Groq's LPU, its custom inference chip, basically Groq's alternative to an Nvidia GPU for running trained LLMs.
And Groq says it is deterministic, meaning the chip runs the model on a fixed, planned schedule, so the order and timing of operations are predictable instead of being affected by the dynamic scheduling, caches, and contention that can make latency jittery on general-purpose hardware.
That predictability helps low latency because each token step can be executed with fewer surprises, so response time stays tight and consistent, which matters for real-time chat and streaming outputs.
Groq also claims up to 10x less power than graphics cards.
Another lever is memory, because Groq leans on on-chip SRAM (static random-access memory) instead of relying on off-chip HBM like many GPU systems, which can speed responses but can limit the largest models that fit.
SRAM and HBM are 2 different ways to feed data to the chip, and inference is often bottlenecked by “how fast can the chip read the model weights.”
SRAM (static random-access memory) is very fast memory that can sit on the chip, so the chip can grab data with very low delay and usually lower energy per access.
HBM (high-bandwidth memory) is also very fast, but it sits off the chip in stacked memory packages next to the GPU, so accesses still travel farther and the system has more moving parts.
If Groq can keep more of what it needs in on-chip SRAM, it can reduce “waiting on memory,” which helps latency and sometimes power for inference.
The tradeoff is capacity, because on-chip SRAM is tiny compared to HBM, so if the model’s active weights do not fit, the system has to stream from slower memory or split work across chips, which can hurt speed and cost.
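To see why "how fast can the chip read the model weights" is the bottleneck, here is a rough back-of-the-envelope in Python: during decoding, each generated token reads roughly every active weight once, so tokens per second is capped near bandwidth divided by model size. All numbers below are illustrative assumptions, not Groq or Nvidia specs:

```python
# Back-of-the-envelope: memory-bound decoding reads ~all active weights
# per token, so tokens/sec <= bandwidth / bytes_read_per_token.
# All figures here are illustrative assumptions, not vendor specs.

def max_tokens_per_sec(model_params_b: float, bytes_per_param: float,
                       mem_bandwidth_gb_s: float) -> float:
    bytes_per_token = model_params_b * 1e9 * bytes_per_param
    return mem_bandwidth_gb_s * 1e9 / bytes_per_token

model = 8   # hypothetical 8B-parameter model
fp16 = 2    # bytes per weight at 16-bit precision

# Hypothetical memory systems: big-but-farther HBM vs small-but-closer SRAM
print(f"HBM-like (3 TB/s): {max_tokens_per_sec(model, fp16, 3000):.0f} tok/s ceiling")
print(f"SRAM-like (80 TB/s aggregate): {max_tokens_per_sec(model, fp16, 80000):.0f} tok/s ceiling")

# The catch from the post: on-chip SRAM is roughly hundreds of MB per
# chip, so this 16 GB of weights must be split across many chips to fit.
```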
Groq also describes a cluster interconnect called RealScale that tries to keep many servers in sync by compensating for clock drift.
What this means is that each chip runs on a clock, like a metronome that tells circuits when to do the next step. When you connect lots of servers together for inference, you want their "metronomes" to stay aligned so data arrives when the next chip expects it.
Real clocks are slightly imperfect, so over time one server's clock can run a tiny bit faster or slower than another's; that is clock drift. Even tiny drift can force extra buffering, retries, or waiting, which adds latency and lowers overall throughput when you are trying to run one big model across many chips.
Groq's "RealScale" claim is basically that its interconnect plus control logic measures these timing mismatches and adjusts coordination so the cluster behaves more like one synchronized machine. So the point is not higher raw bandwidth by itself; it is keeping multi-server inference predictable and low-latency as you scale out.
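Groq has not published RealScale's internals, but the generic mechanism, measuring clock offset between machines and compensating for it, can be sketched with the classic NTP-style timestamp exchange. This illustrates clock-drift compensation in general, not Groq's actual protocol:

```python
# Generic clock-offset estimation (NTP-style), illustrating the kind of
# timing compensation described above; this is NOT Groq's RealScale.
# t1: request sent (local clock), t2: request received (peer clock),
# t3: reply sent (peer clock),   t4: reply received (local clock).

def estimate_offset(t1: float, t2: float, t3: float, t4: float) -> float:
    # Assumes symmetric network delay; offset > 0 means peer runs ahead.
    return ((t2 - t1) + (t3 - t4)) / 2

# Example (microseconds): peer's clock is 5 us ahead, one-way delay 2 us.
t1 = 100.0
t2 = t1 + 2.0 + 5.0         # arrives after 2 us, read on a clock +5 us ahead
t3 = t2 + 0.5               # peer replies 0.5 us later
t4 = t1 + 2.0 + 0.5 + 2.0   # local receive time after the return trip

print(f"estimated offset: {estimate_offset(t1, t2, t3, t4):.2f} us")  # 5.00
# A scheduler can subtract this offset when planning when data should
# arrive at each chip, keeping the cluster's "metronomes" aligned.
```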
If Nvidia can fold these ideas into its AI factory stack, it gets another path to serve real-time inference workloads alongside GPUs.
---
siliconangle.com/2025/12/24/nvidia-license-technology-inference-chip-startup-groq-reported-20b-deal/


A solid paper from Stanford, Princeton, Harvard, University of Washington, and many other top universities.
Says that almost all advanced AI agent systems can be understood as using just 4 basic ways to adapt, either by updating the agent itself or by updating its tools.
It also positions itself as the first full taxonomy for agentic AI adaptation.
Agentic AI means a large model that can call tools, use memory, and act over multiple steps.
Adaptation here means changing either the agent or its tools using a kind of feedback signal.
In A1, the agent is updated from tool results, like whether code ran correctly or a query found the answer.
In A2, the agent is updated from evaluations of its outputs, for example human ratings or automatic checks of answers and plans.
In T1, retrievers that fetch documents or domain models for specific fields are trained separately while a frozen agent just orchestrates them.
In T2, the agent stays fixed but its tools are tuned from agent signals, like which search results or memory updates improve success.
The survey maps many recent systems into these 4 patterns and explains the trade-offs between training cost, flexibility, generalization, and modular upgrades.
github.com/pat-jj/Awesome-Adaptation-of-Agentic-AI/blob/main/paper.pdf
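One way to hold the 2x2 in your head: cross what gets updated (agent vs. tools) with what drives the update. A toy Python encoding of that grid, with labels paraphrased from the post (the paper's exact definitions may be more nuanced):

```python
# Toy encoding of the survey's 2x2 adaptation grid as described above:
# key = (what is updated, what drives the update) -> pattern name.
TAXONOMY = {
    ("agent", "tool results"):         "A1",  # e.g. code ran / query found answer
    ("agent", "output evaluations"):   "A2",  # e.g. human ratings, auto checks
    ("tools", "independent training"): "T1",  # retrievers trained separately, agent frozen
    ("tools", "agent signals"):        "T2",  # tools tuned by what improved agent success
}

def classify(updated: str, signal: str) -> str:
    return TAXONOMY[(updated, signal)]

# A RAG system that fine-tunes its retriever offline while keeping the
# LLM frozen would land in T1:
print(classify("tools", "independent training"))  # -> T1
```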


This is the biggest vibe-coding hack that most builders don't know about.
Here is how I use it (and you should too).
I've been vibe coding for 6 months.
For the first 4, I was writing prompts, getting happy, writing more prompts, getting frustrated.
And then I would abandon the project, move on to the next.
Then I did what product managers usually do: create detailed requirements.
These requirements are detailed, structured, and comprehensive, and include everything AI tools need to build a world-class product.
This has been a game changer.
Want to take a look inside?
Reply AI PRD, and I'll DM it to you.
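For the curious, the "detailed requirements" move does not depend on anyone's template; at minimum it is just a structured prompt. A hypothetical sketch in Python, with section names that are illustrative rather than the author's actual PRD:

```python
# Hypothetical structured-requirements ("PRD") prompt for an AI coding
# tool. Section names and sample content are illustrative only.
PRD_TEMPLATE = """\
# Product Requirements
Problem: {problem}
Target user: {user}
Core user stories:
{stories}
Non-goals (explicitly out of scope):
{non_goals}
Tech constraints: {constraints}
Acceptance criteria:
{acceptance}
"""

prd = PRD_TEMPLATE.format(
    problem="Freelancers lose track of unpaid invoices.",
    user="Solo freelancers sending <20 invoices/month.",
    stories="- As a user, I can mark an invoice paid.\n"
            "- As a user, I get a weekly overdue summary.",
    non_goals="- No multi-currency support in v1.",
    constraints="Next.js + SQLite, single deploy target.",
    acceptance="- Overdue summary sends Mondays 9am local time.\n"
               "- Marking paid updates the dashboard instantly.",
)
print(prd)  # paste this at the top of your build prompt
```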

Build a habit of reading 100 pages in one sitting.
This one skill can help you reach the top 0.1% in your domain. Whether you're learning or working, getting into the top tier requires studying tough topics that demand undivided attention and doing deep work that drives the most impact in the least time.
Being able to consistently read 100 pages without getting distracted builds your ability to understand and hold large, complex contexts over long stretches. That focus is essential when studying tough concepts and during your regular SWE work.
Today, when most people have the attention span of a goldfish, you can easily ace your career if you can just put your head down and work without getting distracted.

The dragon in the dark. A red-team post-exploitation framework for testing security controls during assessments.
github.com/0xflux/Wyrm