Farooq
@farooqsd

456 posts
Seattle, WA · Joined April 2013
88 Following · 81 Followers
Farooq retweeted
Allie K. Miller
Allie K. Miller@alliekmiller·
The most expensive mistake in enterprise AI right now: treating FDEs as your whole transformation plan.

Forward deployed engineers (FDEs) are important for custom deployments, but they won't fix the change management issue most enterprises are facing. It's likely more the former that Anthropic and OpenAI will continue to prioritize (and hire into the thousands, who knows). Beyond performance and cost, it's systems integration, ROI, and literal usefulness that drive revenue and stickiness.

*However*: external FDEs, in my opinion, will not make your company an AI-first company. You can have the sleekest multi-agent orchestrations and still have the majority of your employee base hating AI, avoiding AI, and distrusting leadership decisions on AI.

And we already know this because we see this in traditional SaaS too: you can customize the heck out of your Salesforce deployment, but that doesn't mean your sales team will improve their data hygiene or even attempt to change the way they track and grow with it. Buying a fancier car doesn't mean you magically learn to drive better overnight.

If you're an enterprise exec and FDEs are sold as the immediate and sole solution to your company transformation woes, walk away. It's the combination of tech *and* people enablement *and* process reinvention that compounds into actual business outcomes. Large complex enterprises will stall out if they only prioritize the first.
Aaron Levie@levie

Forward deployed engineers, or equivalent, are about to become one of the most in-demand jobs in tech. And one of the most important functions for AI rollouts.

Deploying agents is a far more technical task than most people realize, often far more involved than deploying software. Software generally works the same way every time, and for the past few decades has generally been updated versions of an existing technology or concept (which basically means it's easier for the enterprise to update their workflows on a newer system). With agents, you're actually deploying the equivalent of work output within the enterprise.

The customer is effectively using you as a professional services provider for a task, which they expect to get solved nearly end-to-end now. This means you need to deeply understand the business process as a vendor and get the customer from the current state to the end state seamlessly. Companies need help figuring out which models will work best for their workflows, they often need extensive evals set up, they need change management support for workflows, they need to get their data set up for the agents, and they need constant tuning of the agentic system for their process.

Massive role in tech now. And another example of the kind of highly technical work that AI is creating.
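The "extensive evals set up" the post mentions can be sketched as a tiny harness; this is a minimal sketch under my own assumptions (a synchronous agent callable, substring matching as the pass criterion), with illustrative names rather than any vendor's API:

```python
def accuracy_eval(agent, cases):
    """Fraction of cases where the expected snippet appears in the agent's answer."""
    passed = sum(1 for prompt, expected in cases if expected in agent(prompt))
    return passed / len(cases)

# Toy agent standing in for a real model call.
toy_agent = lambda prompt: prompt.upper()

cases = [
    ("what is the refund policy", "REFUND"),
    ("how long is shipping", "SHIPPING"),
]
print(accuracy_eval(toy_agent, cases))  # 1.0
```

In a real deployment the cases would come from the customer's actual workflow and the agent would call a model; the harness shape stays the same.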

79
53
575
107.4K
Farooq retweeted
Aaron Levie
Aaron Levie@levie·
If I were a college career counselor or in career services, I'd quickly be figuring out how to get students to understand these forward deployed engineer jobs exist and how to get them.

The requirements are a mix of deep technical skills, often CS majors or minors. You must be great at problem solving and systems thinking, and have strong business acumen. The kicker, of course, is to make sure you're very deep in AI agents; you need fluency in coding agents, MCP, CLIs, Skills, and so on.

Hundreds (thousands?) of technology companies will be hiring for these roles, same with any consulting and IT services company, and the vast majority of mid-size and large enterprises will be hiring for this talent internally as well. One great example of opportunity for highly technical talent out there.
nader dabit@dabit3

Forward Deployed Engineer is the hottest, and one of the most in-demand, jobs right now. Every major AI company is hiring, including @OpenAI, @cognition, @AnthropicAI, and @Google. If you possess a combination of soft skills (good communication), an engineering background, and are up to speed on the latest and greatest in agentic coding, you're probably able to land one of them. They pay well and offer a foot in the door at some of the fastest-growing companies in the world.

102
123
1.4K
383.7K
Farooq retweeted
Sridhar Vembu
Sridhar Vembu@svembu·
Important post from Meta engineer Arnav Gupta on all the AI-led layoffs. "The layoffs will continue till we learn to use AI" is his title, and Gemini (!) correctly identified it as coming from "The beatings will continue until morale improves". x.com/i/status/20518…

As he explains well, AI has increased costs massively for all tech companies. Our own AI bill is skyrocketing, and to add insult to injury, server prices have gone up 200-300% in a year because the AI infrastructure boom is consuming all the advanced memory chips. So these layoffs are the economic response by tech companies: controlling the main cost they can control (people) to pay for AI and servers.

Of course most are spinning it as the result of the "AI productivity miracle", but the reality is more cost control than a productivity miracle, at least not yet. Now let me go back to using AI to generate even more code, so that we don't fall behind all the other guys generating massive amounts of code 😅
Arnav Gupta@championswimmer

x.com/i/article/2051…

43
88
927
182.9K
Farooq retweeted
Avid
Avid@Av1dlive·
Eric Schmidt (ex-Google CEO): “if you really want to make money, it’s actually easy. found an agentic AI company.” spoiler: the supply of builders is tiny. the demand is enormous. this guy is literally giving away the exact 2026 playbook to build and sell AI automations to make $10k/mo bookmark and start this weekend
Khairallah AL-Awady@eng_khairallah1

x.com/i/article/2050…

109
944
7.8K
2.8M
Farooq
Farooq@farooqsd·
ZXX
0
0
0
0
Farooq
Farooq@farooqsd·
ZXX
0
0
0
1
Farooq retweeted
Tech Fusionist
Tech Fusionist@techyoutbe·
Building AI Agents in GCP Vertex AI 🔥🔥 31 minutes, production-ready AI agents
0
18
93
5.1K
Farooq retweeted
Ruben Hassid
Ruben Hassid@rubenhassid·
Instead of watching Netflix tonight, spend a day mastering Claude here: claude101.com

→ Level 1 - 24 min: The basics.
Claude For Dummies: ruben.substack.com/p/claude-for-d…
Claude Setup: how-to-claude.ai

→ Level 2 - 1 hour: Real workflows.
Claude Cowork: claude-co.work
Claude for teams: how-claude.team
Claude Design: claudedesign.free
Cowork + Projects: ruben.substack.com/p/claude-cowor…
Claude for slides: how-to-gamma.ai
Claude Skills: claude-skills.free

→ Level 3 - 3.5 hours: The pro moves.
Avoid sycophancy: ruben.substack.com/p/i-love-to-be…
Claude Code: claudecode.free
Claude 101: anthropic.skilljar.com/claude-101
Stop hitting Claude limits: ruben.substack.com/p/how-to-stop-…
Stop Prompting: ruben.substack.com/p/stop-prompti…

→ Level 4 - 8 hours: Expert mode.
Claude Computer: ruben.substack.com/p/claude-compu…
Build with Claude API: anthropic.skilljar.com/claude-with-th…

Pro tip: Don't binge it. Do one level per sitting. Actually apply each guide before moving to the next.
Ruben Hassid tweet media
Ruben Hassid@rubenhassid

x.com/i/article/2045…

93
1.4K
7.2K
982.7K
Farooq retweeted
Himanshu Kumar
Himanshu Kumar@codewithimanshu·
Anthropic's Claude AI Agents team just showed how to build production AI agents in under 30 minutes. For free. From the engineers who built the stack. CANCEL your weekend plans and learn to build AI agents today. Bookmark it. Watch it. Build your first production agent this weekend. $5,000/month. $7,000/month. $12,000/month. People are building agents for clients and charging $$$ as beginners. You're still stuck in the thinking-about-AI phase. This video fixes that tonight. Follow @codewithimanshu for more high-signal content that actually moves your AI engineering career forward.

↓

Ivan Nardini runs Developer Relations for AI at Google Cloud. He just gave away the entire production agent stack in 30 minutes. This is the talk that separates people deploying AI agents that actually scale from people whose agents break the moment they leave localhost. Here's everything inside. I break down a production AI video like this every week. Follow @codewithimanshu.

↓

The 4-part agent stack that actually scales. Most devs are duct-taping frameworks together and calling it an "AI agent." Ivan lays out the real stack:
- Agent Development Kit (ADK): open-source, code-first framework for building, evaluating, and deploying agents. Supports Claude models through Vertex AI directly.
- Model Context Protocol (MCP): lets your agent talk to any tool or data source with one standard.
- Vertex AI Agent Engine: managed platform for deploying, monitoring, and scaling agents in production. No DevOps headaches.
- Agent-to-Agent Protocol: open protocol so agents built on different frameworks can actually work together.
This is the stack replacing every hacky agent setup in production right now. Full MCP + Claude breakdowns drop weekly on @codewithimanshu.

↓

Building your first real agent. Ivan builds a birthday planner agent live. LLM Agent class. Name it. Define instructions. Pick the model. He uses Claude 3.7 Sonnet. You could use Opus 4.7 for better reasoning. Full agent built in minutes. Not weeks. Watch the build once and you'll never structure an agent the wrong way again. I post agent architectures people pay $500 courses to learn. @codewithimanshu.

↓

Multi-agent systems without the chaos. Single agents are easy. Multi-agent systems are where 99% of builders fail. Ivan extends the birthday planner by:
- Adding a calendar service through MCP tools
- Creating an orchestrator agent to route requests between agents
- Handling state and context across agent handoffs
This is production multi-agent architecture. Clean. Scalable. Debuggable. Most tutorials hand-wave this part. This one shows you every step. Multi-agent orchestration content drops weekly on @codewithimanshu.

↓

Deployment without the DevOps nightmare. This is where most AI projects die. You build a cool agent locally. It works. You try to deploy it. Everything breaks. Vertex AI Agent Engine fixes this:
- Minimal code deployment
- Automatic monitoring of latency, CPU, and memory
- Built-in observability and logging
- No infrastructure setup needed
You provide config and requirements. The platform handles the rest. This is how agents actually get to production. Deployment guides for Claude agents post every week. @codewithimanshu.

↓

Agent-to-Agent Protocol: the future nobody's talking about. Most people don't know this exists yet. The A2A Protocol lets agents built in different frameworks communicate seamlessly. Your Claude agent. My LangChain agent. Someone else's CrewAI agent. All talking to each other. All solving parts of the same problem. All without custom integration code. This is the infrastructure layer of the coming AI economy. Getting in early on A2A Protocol is like getting in early on HTTP in 1995. A2A deep dive coming soon. @codewithimanshu.

↓

30 minutes from the team shipping this in production. You'll learn more from this than from 6 months of YouTube tutorials made by people who've never deployed an agent past localhost. People who watch this understand production AI agents at the architect level. People who skip it keep hacking together frameworks that break every time an API updates. Save the video. Watch it tonight. Build a real agent this weekend. Follow @codewithimanshu for more high-signal content that actually moves your AI engineering career forward.
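The orchestrator pattern the thread describes (a named LLM agent with instructions and a model, plus a router dispatching requests between sub-agents) can be sketched framework-free. The class names, keyword routing, and model id below are illustrative stand-ins of my own, not the actual ADK API:

```python
from dataclasses import dataclass, field

@dataclass
class LlmAgent:
    """Minimal stand-in for an LLM agent: name, instructions, model id."""
    name: str
    instruction: str
    model: str  # placeholder model id, e.g. "claude-sonnet"

    def run(self, request: str) -> str:
        # A real agent would call the model; here we echo deterministically.
        return f"[{self.name}/{self.model}] {request}"

@dataclass
class Orchestrator:
    """Routes each request to the sub-agent whose keyword matches."""
    routes: dict = field(default_factory=dict)  # keyword -> agent

    def register(self, keyword: str, agent: LlmAgent) -> None:
        self.routes[keyword] = agent

    def handle(self, request: str) -> str:
        for keyword, agent in self.routes.items():
            if keyword in request.lower():
                return agent.run(request)
        return "no agent available"

planner = LlmAgent("birthday_planner", "Plan birthday parties.", "claude-sonnet")
calendar = LlmAgent("calendar", "Check availability.", "claude-sonnet")
router = Orchestrator()
router.register("party", planner)
router.register("free", calendar)
print(router.handle("Plan a party for Saturday"))
# [birthday_planner/claude-sonnet] Plan a party for Saturday
```

In a real stack the routing would be done by an LLM rather than keywords, and the sub-agents would hold state across handoffs; the shape of the orchestration is the same.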
43
441
2.4K
220.4K
Farooq retweeted
Khairallah AL-Awady
Khairallah AL-Awady@eng_khairallah1·
🚨 Anthropic's own team just showed how to actually use Claude Code properly. 30 minutes. Free. From the person who created Claude Code. Watch the workshop. Bookmark it. Worth more than every $500 course you almost bought. You've been using Claude without knowing 40 of its commands. Read the guide below.
Khairallah AL-Awady@eng_khairallah1

x.com/i/article/2047…

261
5K
30.1K
7.2M
Farooq retweeted
Sundar Pichai
Sundar Pichai@sundarpichai·
Extraordinary progress with UCP!
Vidhya Srinivasan@VidsSrinivasan

The Universal Commerce Protocol is taking a major step in building the future of agentic commerce with the expansion of its Tech Council. Welcome to @Amazon, @Meta, @Microsoft, @Salesforce and @Stripe. The success of UCP is an industry-wide effort that requires a true ecosystem approach. Welcome to the new partners joining us to build the future of agentic commerce! 💪

80
163
2.2K
292K
Farooq retweeted
Vidhya Srinivasan
Vidhya Srinivasan@VidsSrinivasan·
The Universal Commerce Protocol is taking a major step in building the future of agentic commerce with the expansion of its Tech Council. Welcome to @Amazon, @Meta, @Microsoft, @Salesforce and @Stripe. The success of UCP is an industry-wide effort that requires a true ecosystem approach. Welcome to the new partners joining us to build the future of agentic commerce! 💪
Vidhya Srinivasan tweet media
21
122
653
325.9K
@jason
@jason@Jason·
We started an AI founder twitter group... reply with "I'm in" if you're a founder and want to be added
10.8K
135
4.6K
903.4K
Farooq retweeted
Brian Roemmele
Brian Roemmele@BrianRoemmele·
I hope this helps. If you need more help let me know.

Connecting OpenClaw to the X API is straightforward now thanks to X's official native support. The best and most direct method uses the official xurl CLI tool from X, which comes with a pre-built SKILL.md that OpenClaw recognizes automatically. This lets your local OpenClaw agent post tweets, search timelines, read posts, reply, manage follows, send DMs, upload media, and more, all via natural language commands.

Official Native Method: xurl CLI (Recommended for Direct X API Access)
This uses X's pay-per-use API pricing (very affordable now: e.g., owned reads ~$0.001/request, basic posts ~$0.015). No third-party bridges required.

1. Get X API credentials
- Go to the X Developer Portal and create a new app (or project).
- In the app settings, set the redirect URI to http://localhost:8080/callback.
- Move the app to the Pay-per-use package and Production environment (important: otherwise reads may fail with "client-forbidden").
- Note your Client ID and Client Secret.

2. Install the xurl CLI on the machine where OpenClaw runs. Pick one for your OS/preference:
- macOS (Homebrew): brew install --cask xdevplatform/tap/xurl
- npm (any OS): npm install -g @xdevplatform/xurl
- One-liner shell script: curl -fsSL raw.githubusercontent.com/xdevplatform/x… | bash
- Or go install via github.com/xdevplatform/x…

3. Authenticate xurl (do this manually, never via the agent, for security)
- Register your app: xurl auth apps add my-app --client-id YOUR_CLIENT_ID --client-secret YOUR_CLIENT_SECRET
- Start the OAuth2 flow: xurl auth oauth2 (follow the browser prompt; it opens automatically)
- Check status: xurl auth status
- (Optional) Set a default app/user: xurl auth default my-app, or run the interactive picker with xurl auth default

4. OpenClaw integration
- Restart your OpenClaw gateway/agent if needed.
- The built-in xurl skill (in skills/xurl/SKILL.md) is already merged into OpenClaw; no extra install required.
- Your agent now understands commands like:
  xurl post "Hello from my OpenClaw agent using Grok!"
  xurl search "Grok 4.3" -n 10
  xurl timeline -n 20
  xurl reply 1234567890123456789 "Great point!"
  xurl whoami, xurl read [post URL], xurl follow @handle, etc.
- Just talk to your agent naturally: "Post a tweet about the new Grok pricing" or "Search X for recent OpenClaw updates and summarize."

Security notes (built into the skill):
- Never paste tokens/secrets into chat or let the agent run auth commands with flags.
- All credentials stay in ~/.xurl (a YAML file) on your machine.
- The agent calls the local xurl binary directly.

Easier Alternative: OpenTweet.io Bridge (No X Developer Account Needed)
If you want the simplest setup (great for quick testing or posting-only):
1. Sign up free at opentweet.io and connect your X account via OAuth (one click).
2. Generate an API key (starts with ot_) in Settings → API.
3. Install the official skill: clawhub install opentweet-x-poster
4. Set the key: export OPENTWEET_API_KEY="ot_your_key_here" (or add it to ~/.openclaw/openclaw.json under secrets).
5. Ask your agent: "Post a tweet saying [text]" and it just works.

Tips for your setup:
- Use Grok as the backend (as Brian mentioned in the thread): it's far cheaper than Claude for running OpenClaw agents long-term and has excellent reasoning for agent tasks.
- Test with xurl whoami or xurl timeline in your terminal first.
- The full command list and raw API access (xurl /2/tweets etc.) are in the skill docs.

That's it: once set up, your OpenClaw agent has full X superpowers running locally. If you hit any snags (e.g., auth errors), the most common fix is confirming your app is in Pay-per-use/Production in the developer portal. 🦞
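Once xurl is authenticated, a local agent would typically shell out to the binary. A minimal Python sketch with a dry-run mode, so the commands can be inspected without hitting the X API; the wrapper names here are mine, not part of the xurl skill:

```python
import shlex
import subprocess

def xurl_cmd(*args):
    """Build an xurl invocation as an argv list (avoids shell-quoting bugs)."""
    return ["xurl", *args]

def run(argv, dry_run=True):
    """Dry run returns the shell-safe command string; otherwise execute it."""
    if dry_run:
        return " ".join(shlex.quote(a) for a in argv)
    return subprocess.run(argv, capture_output=True, text=True).stdout

print(run(xurl_cmd("whoami")))                          # xurl whoami
print(run(xurl_cmd("search", "Grok 4.3", "-n", "10")))  # xurl search 'Grok 4.3' -n 10
```

Passing an argv list (rather than a shell string) means tweet text with quotes or spaces never needs escaping by the agent.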
217
834
2.4K
812.4K
Farooq retweeted
Crazy Moments
Crazy Moments@Crazymoments01·
How bridges are constructed
129
2.8K
17.7K
1.2M
Farooq retweeted
Robert Smith
Robert Smith@robertsmith_ai·
BREAKING: If you're not using Claude at your job, you're already behind. Copy these 7 prompts:
28
110
745
234.5K
Farooq retweeted
Claude
Claude@claudeai·
Introducing Claude Managed Agents: everything you need to build and deploy agents at scale. It pairs an agent harness tuned for performance with production infrastructure, so you can go from prototype to launch in days. Now in public beta on the Claude Platform.
2.1K
6K
57.1K
21.6M
Farooq retweeted
Garry Tan
Garry Tan@garrytan·
How I get my claw to be a durable AI agent I never have to instruct twice.

Paste this into your OpenClaw's AGENTS.md or send it as a message:

You are not allowed to do one-off work. If I ask you to do something and it's the kind of thing that will need to happen again, you must:
1. Do it manually the first time (3-10 items)
2. Show me the output and ask if I like it
3. If I approve, codify it into a SKILL.md file in workspace/skills/
4. If it should run automatically, add it to cron with `openclaw cron add`

Every skill must be MECE: each type of work has exactly one owner skill. No overlap, no gaps. Before creating a new skill, check if an existing one already covers it. If so, extend it instead.

The test: if I have to ask you for something twice, you failed. The first time I ask is discovery. The second time means you should have already turned it into a skill running on a cron.

When building a skill, follow this cycle:
- Concept: describe the process
- Prototype: run on 3-10 real items, no skill file yet
- Evaluate: review output with me, revise
- Codify: write SKILL.md (or extend existing)
- Cron: schedule if recurring
- Monitor: check first runs, iterate

Every conversation where I say "can you do X" should end with X being a skill on a cron, not a memory of "he asked me to do X that one time." The system compounds. Build it once, it runs forever.
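The "first ask is discovery, second ask means it should already be a skill" rule can be sketched as a simple request counter. A toy sketch with hypothetical names; nothing here is real OpenClaw API:

```python
from collections import Counter

class SkillTracker:
    """Tracks repeated requests: first ask = discovery, second = codify."""
    def __init__(self):
        self.asks = Counter()
        self.skills = set()

    def request(self, task):
        if task in self.skills:
            return f"run skill: {task}"
        self.asks[task] += 1
        if self.asks[task] >= 2:
            self.skills.add(task)             # codify into a SKILL.md
            return f"codify + cron: {task}"   # it should already have been a skill
        return f"manual one-off: {task}"      # discovery pass (3-10 items)

t = SkillTracker()
print(t.request("summarize inbox"))  # manual one-off: summarize inbox
print(t.request("summarize inbox"))  # codify + cron: summarize inbox
print(t.request("summarize inbox"))  # run skill: summarize inbox
```

The point of the pattern is exactly this state machine: repeated work migrates from ad-hoc instruction into a codified, scheduled skill, so the system compounds instead of forgetting.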
143
167
2.4K
257.8K