Airtop

116 posts

Airtop banner
Airtop
@airtop

Automate your work with just words. Build powerful web agents by chatting with AI.

San Francisco · Joined September 2024
37 Following · 597 Followers
Pinned Tweet
Airtop @airtop
🥳 Today, we’re excited to announce Airtop Web Agents for Zapier. Now your Zapier workflows can automate the entire web by running Airtop Agents. Simply embed any Airtop Agent into any Zap, and your Zapier automations will be able to navigate the web, fill out web forms, and take action on almost any website.

For years, Zapier has been the automation glue of the internet, enabling seamless connections between thousands of tools like Slack, HubSpot, Google Sheets, and countless more. But what about tasks that require real on-screen browser actions like clicking, scrolling, typing, or extracting structured info from web pages? 🔑🌐 Zapier now has access to these powerful agentic browsing capabilities by running Airtop Agents within Zaps.

🔌 How It Works
1. Add the new Airtop step inside Zapier and choose “Run an Airtop Agent.”
2. Create an Airtop Agent by describing what you want to automate in plain English. For example: “Go to any public Loom URL, extract the transcript, and output structured JSON.”
3. Connect the Airtop Agent to your Zapier workflow and pass parameters. Your Zapier workflow can pass inputs to the Airtop Agent, trigger it to run, and receive the Agent output back from Airtop.

We can’t stress enough how many possibilities Airtop Agents will open up for Zapier users, and we’re excited to see what kinds of web automations the Zapier community will build! Drop a comment if you’d like to be invited to a webinar on how to automate the web with Zapier.
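Step 3 above boils down to packaging workflow inputs as parameters for an agent run. A minimal sketch of that packaging step follows; the function name, field names, and the "loom-transcript-agent" ID are assumptions for illustration, not Airtop's or Zapier's actual API.

```python
# Hypothetical sketch of step 3: a helper that packages a Zap's inputs
# as parameters for an Airtop Agent run. Field names are assumed.

def build_agent_run_request(agent_id, parameters):
    """Package a workflow's inputs into a run request for an agent."""
    if not agent_id:
        raise ValueError("agent_id is required")
    return {
        "agent_id": agent_id,
        "parameters": parameters,  # inputs the Zap passes to the agent
    }

# Example: ask a (hypothetical) Loom-transcript agent to process one URL.
request = build_agent_run_request(
    "loom-transcript-agent",
    {"loom_url": "https://www.loom.com/share/example"},
)
print(request)
```

In a real Zap this request would be sent when the trigger fires, and the agent's structured output (the JSON transcript in the example) would flow back into later steps of the workflow.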
Airtop retweeted
Daniel Shteremberg @shteremberg
I agree with you. My team has been trying to get that working with Claude Code & Codex cloud sessions, but their (Anthropic & OpenAI) cloud VMs are so underpowered that just doing pnpm i & build in our monorepo either takes 40 minutes or times out/runs out of resources. What are you using for running agents in sandboxes in the cloud reliably?
Airtop @airtop
Your next 10 customers are already on LinkedIn. You just haven’t found them yet.

Intent tools charge $500/seat to tell you who’s “in-market.” But the signals are public:
• Commenting on a competitor’s post
• Asking questions in your category
• Engaging with content about the exact problem you solve

That’s intent. The problem isn’t data. It’s collecting it.

So I built an agent. Give it a LinkedIn profile URL. It finds everyone who commented on their recent posts.

Build: ~30 minutes, ~$10
Run: ~$0.02

No scraping your account. No $500/seat tool. The signals are sitting in the comments. Public. Free.

👉 Drop any LinkedIn profile URL in the form below. Link in the first comment. I’ll send you the list of commenters. No account required. No risk. No strings.
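The core of what the agent does, once posts are collected, is deduplicating commenters across a profile's recent posts. A sketch of that aggregation step, as a pure function over example data; the input shape is an assumption for illustration, not Airtop's actual output format:

```python
# Illustrative sketch of the agent's aggregation step: given posts
# already collected from a profile, list the unique commenters.
# The input shape is an assumed example, not Airtop's real format.

def unique_commenters(posts):
    """Return deduplicated commenter handles mapped to the first post
    where each was seen."""
    seen = {}
    for post in posts:
        for comment in post.get("comments", []):
            # setdefault keeps the first post URL per commenter.
            seen.setdefault(comment["author"], post["url"])
    return seen

posts = [
    {"url": "https://linkedin.com/posts/1",
     "comments": [{"author": "@ada"}, {"author": "@bob"}]},
    {"url": "https://linkedin.com/posts/2",
     "comments": [{"author": "@ada"}]},
]
print(unique_commenters(posts))
# → {'@ada': 'https://linkedin.com/posts/1', '@bob': 'https://linkedin.com/posts/1'}
```

The expensive part in practice is collecting the post and comment data from public pages; the aggregation itself is cheap and deterministic, which is why a single run can cost cents.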
Airtop @airtop
Everyone’s adding more AI to their agents. We’re removing it. Why?

In 2000, at Shopping.com, 90% of our engineers were fixing broken automations. Lesson: reliability beats intelligence in production.

LLMs are great for building. But using them at runtime? Non-deterministic. Slow. Expensive. Easy to break. Great demo. Bad production.

So we flipped it: AI at build time. Deterministic code at runtime. Same behavior every time. 10x faster. No prompt injection surface.

Intelligence builds. Reliability ships. More AI ≠ better agents.

Build your first production-grade browser agent in 10 minutes. Link in comments.
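The build-time/runtime split described above can be sketched in a few lines. The "LLM" codegen step is stubbed with a fixed string and the page model is a plain dict; both are illustrative assumptions, not Airtop's implementation.

```python
# Sketch of the "AI at build time, deterministic code at runtime" pattern.
# The stubbed codegen step and the toy page model are assumptions.

def generate_automation(task_description):
    """Build time: an LLM (stubbed here) turns a task description into
    a concrete, deterministic script. Runs once, during authoring."""
    # A real system would call a model here; we return fixed code.
    return (
        "def run(page):\n"
        "    page['clicked'] = '#export-button'\n"
        "    return page['rows']\n"
    )

# Build once: the generated code is reviewed and stored.
code = generate_automation("export the report table")
namespace = {}
exec(code, namespace)
run = namespace["run"]

# Runtime: the same code executes identically on every run - no model calls,
# so identical inputs always produce identical outputs.
page = {"rows": ["r1", "r2"]}
assert run(dict(page)) == run(dict(page))
```

The key property is that the model's nondeterminism is confined to authoring, where a human reviews the output; production runs execute only the frozen code.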
Airtop tweet media
Airtop @airtop
AI agents are failing. It’s not your imagination. 88% of AI pilot projects never make it to production.

Most agents use LLMs at runtime: screenshot → ask LLM → take 1 action → repeat. Ask the same model the same question twice and you can get 2 different answers. Fun for writing. Bad for production. Monday: 10 leads. Tuesday: 0 leads. Or the wrong button. Or stuck in a loop. No idea why.

The issue isn’t the model. It’s nondeterminism at runtime.

Most platforms use AI while the agent runs. We use AI when you build. AI writes the code. Code runs in production. No LLM guessing at runtime.

One customer: 1,000+ tickets handled. No breaks. “Rock solid. Reliably, correctly every time.” – Dr. Silvia Pfeiffer, CPO, Cared.io

Reliable agents don’t need better prompts. They need deterministic execution. AI to build. Code to run.

Want to see it in action? Try the LinkedIn lead gen agent. Link in comments.
Airtop tweet media
Airtop @airtop
One n8n node. The entire web. Zero code. 🤯

n8n automations hit a brick wall when faced with restrictive APIs. Airtop’s new n8n integration destroys that barrier. Now you can automate the "un-automatable" inside n8n:
👉 Access data behind logins
👉 Kill manual web tasks
👉 Replace fragile scripts
👉 Automate tools without an API

Airtop just opened up n8n to the entire web 🚀
Airtop retweeted
Daniel Shteremberg @shteremberg
A non-engineer friend recently asked me why Claude Code is getting so much attention vs. Cursor. This was my answer. Curious if others agree or have other opinions.

1. Claude Code is definitely a superior product compared to Cursor, for engineering. We’ve been using both for a while and can absolutely tell the difference. The main reason, I think, is that Anthropic only has to worry about supporting its own models in the agent harness, whereas Cursor has to support every model out there. There is a lot of model-specific work you have to do to tune these agentic systems. Also, I think culturally, Cursor operates too much in YOLO mode, and so the product feels poorly designed and buggy, with too many changes from day to day. Claude Code feels much more methodical and well thought through.

2. When Opus 4.5 came out a few months back, the combination of the Claude Code harness and the model produced a huge jump in capability. Today I, and my team, mostly just prompt Claude Code and only hand-write a tiny percentage of code. We review everything, but the days of typing out code are over for us. Cursor has also gotten better with each new model, but the quality of Claude Code is better.

3. Anthropic made the design decision of shipping Claude Code primarily as a command-line tool. They’re building out UIs for easier use on top of it, but you can embed Claude Code anywhere, which makes it really powerful. You can also run orchestrator tools that themselves run Claude Code. So there are a ton of people now building higher-level tools that manage your 15 separate Claude instances.

4. In the last few months, people started realizing that you can use Claude Code not just for writing code, but as a general-purpose agent. I use it to gather analytics, write blog posts, debug Stripe payment issues, etc. There are even people using it to do video editing now. Technically, you could do the same with Cursor, but because Cursor is primarily an IDE, it just feels weird to open your IDE so you can chat in the tiny sidebar to do something that isn’t about coding.

5. Anthropic recognized the power of Claude Code for non-coding things and recently shipped Cowork, a much more accessible general-purpose agent for a non-technical audience. This got a ton of traction on X and is clearly where this stuff is going.

Thoughts?
Airtop retweeted
Daniel Shteremberg @shteremberg
Totally agree. I've been slowly moving my daily usage to Claude and Gemini. It's unfortunate that the UX of Gemini is so bad though. For example, I can't dictate my query with even the slightest pause since it will auto submit. With ChatGPT you can leave the mic open, dictate with pauses, then submit. ChatGPT nailed the UX, but the models have been going backwards, clearly
Airtop retweeted
Daniel Shteremberg @shteremberg
AI browsers running locally on your machine, yes. But the cloud browser agentic ecosystem is alive, well, and thriving. This is because people don’t want to sit and watch their browser do stuff, taking up time and cycles on their machine. They want to hand it off to a background agent that is set up to do work on their behalf automatically, in the cloud. Our customers set up agents that pay invoices, monitor competitors, run lead generation, triage inbound, etc. All automated with a browser, just in the cloud, not locally.
Airtop retweeted
Daniel Shteremberg @shteremberg
@pitdesi Fast forward 8 months: "Apple's highly rumored AI-powered wearable pin has been shelved due to underwhelming capabilities and has been described as 'mired in politics and dysfunction'"
Airtop retweeted
Daniel Shteremberg @shteremberg
Steve Lemay to the rescue!! Hopefully with Alan Dye out the door, the HI team can move back to optimizing the actual experience, not making flashy, gratuitous demos. Apple HI used to be the pinnacle of UX design, and hopefully Lemay (with his 25-year track record) can bring it back.
Airtop retweeted
Daniel Shteremberg @shteremberg
This is how you know @Replit is going to the moon. Founder mode @amasad liked and escalated a support request within minutes of me posting. This kind of attention to detail is rare and likely one of the main reasons Replit has done so well. Amazing job guys... you rock!
Daniel Shteremberg tweet media
Airtop @airtop
The tools you want to automate have API gaps. @airtop fills them. @SilviaPfeiffer saved 8-12 hrs/month automating what @Cliniko's API couldn't, with a cloud browser agent. Reply with your stack 👇 and I'll show you how to automate the un-automatable.
Airtop retweeted
Daniel Shteremberg @shteremberg
AI coding is going to change the traditional EPD roles in the following ways. Think of a spectrum: on one end you have the PM who does nothing but translate business needs into product requirements, works with the designer to craft the UX, and hands off tickets to engineers. They have zero technical abilities, but are good at understanding the business needs and translating them into reqs. In the old world, there was value here.

On the other end, you have code monkeys. They have zero business sense, zero UX sense, but are great at taking a ticket, writing the code, and saying: next please. Again, there used to be value here.

Both ends of the spectrum are dead now. Enter the generalist "builder" who can reason about business value and product requirements, has design sense, can build full stack, and can follow a feature through a prod release and measure the impact on the business. This kind of generalist used to be a unicorn because of the significant SME need. Someone who can write decent UI code, backend code/distributed systems, dev ops, AND can do design, AND understands the business? Damn.

But now, any good generalist who can use AI tools to deliver a full-stack feature, with all the deep complexities of each type of engineering, and with a deep connection to how those features push the business forward, is the one who will thrive in this new world.

My #1 piece of advice for anyone in product, design, or engineering: be a generalist. Learn the AI tools to be competent in all 3 areas and stop thinking of yourself as a PM, a designer, or an engineer. Learn autonomy and E2E ownership. You're a builder now. Otherwise you're ngmi.
Airtop @airtop
An important skillset to master in this new world is building AI agents that can do reliable, repeatable work for you. As a builder, you're going to want as many agents as possible doing work on your behalf. Airtop can help with the repeatable stuff.
Quoting Daniel Shteremberg @shteremberg (tweet above)
Airtop @airtop
"Fully agentic AI treats every run like the first time — even when it shouldn’t." Our take on the AI Agency spectrum and why you don't always need full agency for AI Agents. airtop.ai/blog/agency-sp…