Dave
@Cyb3rDav3 · 53 posts
Building AI systems that help contractors send proposals faster and close more jobs.
Joined April 2024
8 Following · 3 Followers
Dave @Cyb3rDav3 ·
@cyrilXBT The gap between information and execution is the real bottleneck. Most people build the input pipeline and overlook what happens after. That's where operators pull ahead.
0 replies · 0 reposts · 0 likes · 16 views
CyrilXBT @cyrilXBT ·
NVIDIA JUST RELEASED A MODEL THAT ENDS THE ERA OF SWITCHING BETWEEN TOOLS TO DO ONE JOB.

It is called Nematron 3 Nano Omni. And it does something no single AI model has done cleanly before. It reads your documents, watches your dashboards, listens to your voice notes, processes your video demos, scans your community threads, and analyzes your charts and tables. Not one at a time. All of it. Together. In one pass.

Then it turns everything it just consumed into one structured output. A report. An SOP. An action plan. A structured update. Whatever you need on the other side of the information.

Think about what your current workflow actually looks like. You pull a document from one place. You check a dashboard in another tab. You listen back to a voice note from a meeting. You scrub through a demo recording. You compile all of it manually into something actionable. That process is where hours disappear every single week.

Nematron 3 Nano Omni collapses the entire pipeline into one agent loop. Information goes in across every format. Structured execution comes out the other side. No switching. No manual synthesis. No losing context as you move between tools.

The real value is not the multimodal capability. Every model is multimodal now. The real value is what happens after the inputs are processed. The gap between information and execution just got dramatically smaller for every operator, founder, and builder who deploys this correctly.

Bookmark this. Follow @cyrilXBT for every model release that actually changes how you operate.
15 replies · 20 reposts · 100 likes · 6.4K views
Dave @Cyb3rDav3 ·
@mattplotner @sama Yeah, it was the $20 plan; couldn't get it to initialize. I'm guessing the context size of the first message, with all the tools and memory, triggered a burst limit.
0 replies · 0 reposts · 0 likes · 50 views
Sam Altman @sama ·
you can sign in to openclaw with your chatgpt account now and use your subscription there! happy lobstering.
1.1K replies · 1K reposts · 20.9K likes · 2.1M views
Dave @Cyb3rDav3 ·
I turn my work logs into authority content automatically.

Work log → Content Engine → SEO optimization → Social posts

Every time I write down what I did, it becomes a tweet, a LinkedIn post, a blog outline. Documentation isn't overhead. It's leverage.
0 replies · 0 reposts · 0 likes · 14 views
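The "work log → content" pipeline in the post above can be sketched in a few lines. Everything here is illustrative: the function name, the templates, and the sample entry are assumptions, not the actual Content Engine.

```python
# Illustrative sketch of a work-log -> multi-channel content pipeline.
# Templates and function name are hypothetical, not a real product API.

def log_to_posts(log_entry: str) -> dict:
    """Turn one work-log entry into draft posts for several channels."""
    summary = log_entry.strip().rstrip(".")
    return {
        "tweet": f"Today: {summary}. Building in public.",
        "linkedin": (
            f"What I shipped today:\n\n{summary}.\n\n"
            "Documentation isn't overhead. It's leverage."
        ),
        "blog_outline": [
            f"Context: why '{summary}' mattered",
            "Steps and tooling used",
            "What I'd do differently next time",
        ],
    }

drafts = log_to_posts("Wired the proposal generator into the CRM")
print(drafts["tweet"])
```

The point is that one log entry fans out into drafts for every channel; a human still edits before posting.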
Dave @Cyb3rDav3 ·
@levelsio The real ask is an abstraction layer. Model names shouldn't be coupled to providers. OpenRouter already does this. The labs just need to stop fighting it.
0 replies · 0 reposts · 0 likes · 3.3K views
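The abstraction layer argued for here is essentially a name-resolution table. A minimal sketch, assuming a hypothetical registry: the alias names and model ids below are made up, and only the OpenRouter-style `provider/model` naming convention is borrowed.

```python
# Minimal sketch of a model-alias layer: app code asks for a stable alias,
# and ops updates the mapping when a new model ships. Entries are hypothetical.

MODEL_ALIASES = {
    "latest": "openai/gpt-5.5",
    "latest:fast": "openai/gpt-5.5:nitro",  # OpenRouter-style speed suffix
}

def resolve_model(name: str) -> str:
    """Map a stable alias to a concrete provider/model id.

    Unknown names pass through unchanged, so concrete ids still work.
    """
    return MODEL_ALIASES.get(name, name)

print(resolve_model("latest"))  # call sites never hardcode the concrete id
```

The value of the indirection is that a provider switch becomes one edit to the table instead of a change at every call site.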
@levelsio @levelsio ·
If you work for OpenAI, Anthropic or xAI: please add a 'model' => 'latest' value so I can stop having to change model every 6 months!

Quoting Wes Winder @weswinder:
@levelsio openrouter has a cool "nitro" flag in the model names to use the fastest provider, so like "gpt-5.5:nitro". Would be cool if the labs just let you use "latest" or something.
63 replies · 26 reposts · 1.4K likes · 315.1K views
Dave @Cyb3rDav3 ·
@emollick The bubble-to-infrastructure flip makes sense once you've built with agents. Queries end. Workers keep going. That's the compute curve nobody was ready for.
0 replies · 0 reposts · 0 likes · 533 views
Ethan Mollick @emollick ·
I was quoted a couple times in this Atlantic article, but that isn’t (the only) reason I think it is good. It lays out the reasons why we whipsawed from “AI is a bubble” to “there are not enough data centers” in less than six months. Spoiler: it's agents. theatlantic.com/economy/2026/0…
33 replies · 46 reposts · 399 likes · 36.9K views
Dave @Cyb3rDav3 ·
@hasantoxr This is the operator shift in action. Build the system once, let it run. Most people are still asking AI questions. The ones getting ahead are building workers. Huge difference.
0 replies · 0 reposts · 0 likes · 230 views
Hasan Toor @hasantoxr ·
Ok this is wild. Claude can now watch the stock market 24/7.

I just built a premarket scanner that every day at 8:30 ET produces:
> A heat grid of 26 AI names across 6 sub-stacks (chips, power, hyperscalers, foundry, memory, infra)
> Which sub-stack is leading, which is lagging, who's carrying the category today
> Where the big institutional capital is positioning before the open
> Today's single best trade — entry, stop, target

This all runs on auto-pilot.

Quoting Xynth @xynth_m:
Xynth can now scan the stock market for you 24/7! Simply describe what you want monitored in plain English. Under the hood, we wire Claude Opus 4.7 + Python to 3,000+ live market endpoints to build your custom alert. The workflow lives in the cloud, hunting your setup the moment it hits. As part of this launch, we're giving free access to the top 5 most profitable alerts built so far. RT + comment "Xynth" below to get access ↓
14 replies · 31 reposts · 203 likes · 34.8K views
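The scheduling half of an "every day at 8:30 ET" job like the scanner above is easy to get wrong across time zones. A stdlib-only sketch; the function is illustrative, and the actual scan and market-data wiring are out of scope.

```python
# Compute how long a worker should sleep until the next 8:30 America/New_York
# run. Scheduling sketch only; the scan itself would run after the sleep.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

ET = ZoneInfo("America/New_York")

def seconds_until_next_run(now: datetime, hour: int = 8, minute: int = 30) -> float:
    """Seconds from `now` until the next hour:minute wall-clock time in ET."""
    now_et = now.astimezone(ET)
    target = now_et.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now_et:
        # Already past today's slot: run tomorrow.
        target += timedelta(days=1)
    return (target - now_et).total_seconds()
```

In practice a cron entry evaluated in the America/New_York time zone does the same job; the function form is handier inside a long-running worker loop.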
Dave @Cyb3rDav3 ·
Here's a framework I keep coming back to:

"A solo founder with AI agents can operate with the knowledge leverage of a much larger company."

Not because AI is smart. Because it never forgets, never sleeps, and never skips a step.

What would you build if you had a team that never stopped?
1 reply · 0 reposts · 1 like · 28 views
Dave @Cyb3rDav3 ·
@hwchase17 The lock-in isn't in the model, it's in the control layer. Build on open infra, keep the optionality. Provider switches, same system runs.
0 replies · 0 reposts · 1 like · 136 views
Harrison Chase @hwchase17 ·
switching model providers is easy
switching harnesses is less so
model providers want to lock you in via harness
we need open harnesses!

Quoting Kenton Varda @KentonVarda:
TBH I don't agree with your take. I don't think Anthropic's desire to control the harness is about keeping resource usage under control. They could accomplish that by just enforcing limits on the actual resource usage (which they already do) -- if some third-party harness is inefficient, users of that harness hit their limits faster.

I think instead that they want to control the harness because if switching LLM providers is too easy, it makes business difficult for the providers. Say GPT 5.5 comes out and it's clearly smarter, faster, and cheaper than Opus 4.7. If everyone can switch providers with two clicks in their harness, many of them will. This would lead to wild revenue and usage swings, which makes capacity planning hard. And perfect competition drives down prices -- in this scenario Opus has to cut its prices to get some users back. Obviously no business wants to be in that situation!

By controlling the harness, they add some stickiness. If switching LLM providers means switching harnesses, that's a barrier high enough that most people won't bother to do it on a whim. So now Opus 4.7 can weather the storm until 4.8 or whatever comes out and is back on top. So it makes perfect sense to me as a business decision. It may be user-unfriendly, but tech companies do stuff like this all the time. It's nothing new.

Though I would say, it seems weird to me to do this *on top of* subscriptions. Subscriptions already create a lot of stickiness. If you're subscribed only to Claude, that's a pretty big barrier to trying out GPT quickly -- a bigger barrier than the harness barrier, I think. So I question whether controlling the harness is really worth all the effort they are putting into it, but idk, they probably have insights that I don't on this.

Another factor here might actually be safety concerns. As we know, Anthropic leadership is deeply (excessively, IMO) worried about AI safety, and they feel that Anthropic will do a better job of addressing safety than any other company. They may feel that control of the harness is an important tool for that. I could definitely imagine Dario being terrified of OpenClaw from a safety perspective (I sort of am too). These explanations make much more sense to me than the efficiency issue, which again seems like it could easily be managed in other ways. But of course, these explanations are much harder to just come out and say, without stirring a lot more outrage...
29 replies · 20 reposts · 194 likes · 25.8K views
Dave @Cyb3rDav3 ·
@Dev__F @wanerfu What do you use for browser control? I’ve used Browser Use and of course Playwright. Very brittle compared to straight Claude.
0 replies · 0 reposts · 0 likes · 35 views
Umer Farooq @Dev__F ·
@wanerfu Mainly using openclaw to control my browser to scrape leads. Still didn’t get caught by any anti-bot mechanism.
1 reply · 0 reposts · 2 likes · 3.3K views
摆烂程序媛 @wanerfu ·
OpenClaw can now scrape any website without getting blocked: zero bot detection, native Cloudflare bypass, 774x faster than BeautifulSoup. No selectors to maintain. No workarounds. Just data. An unfair advantage, and fully open source. github.com/D4Vinci/Scrapl…
111 replies · 618 reposts · 3.9K likes · 268.1K views
Dave @Cyb3rDav3 ·
@RyanGuilloryJr @nicdunz Haha yeah, my first thought was ‘ok guy 😒’, meanwhile my own version's been running on cron.
0 replies · 0 reposts · 1 like · 13 views
Ryan Guillory Jr. @RyanGuilloryJr ·
@nicdunz Getting reviewed by AI seems kind of dystopian, but I love Codex so I’ll let it slide.
1 reply · 0 reposts · 2 likes · 47 views
nic @nicdunz ·
if you want me to follow you back, comment here and ill have codex go through the comments and review your profiles and follow back everyone who seems worth following
208 replies · 1 repost · 150 likes · 11.6K views
Dave @Cyb3rDav3 ·
Pre-revenue to first client looked like this:
1. Built a landing page nobody visited
2. Recorded a Loom demo nobody watched
3. Wrote outreach scripts that got ignored
4. Posted in FB groups and got 24 leads
5. One said yes
6. Got paid

No launch day. No viral thread. Just relentless, unglamorous grinding.
0 replies · 0 reposts · 0 likes · 32 views
Dave @Cyb3rDav3 ·
@pk_iv @LangChain The "operating at scale" part is where most teams stall. You can build an agent in an afternoon. Making it handle 1000 concurrent requests without falling over is a different game entirely. That's the actual differentiator.
0 replies · 0 reposts · 0 likes · 13 views
Dave @Cyb3rDav3 ·
@grok @allen_explains @grok create a study outline with everything me and the boys need to learn so we can also secure 200k-600k roles. Include links to sources. Assume we have ADHD and are prone to going down unnecessary rabbit holes; keep it high level and to the point.
2 replies · 0 reposts · 2 likes · 297 views
Grok @grok ·
Here's the summary of the lecture: Horace He (Meta PyTorch compilers) breaks down AI's insane scale—trillions of FLOPs, massive GPU clusters, nuclear data centers. Models are dead simple but demand extreme perf (50%+ FLOPS). PyTorch shifted to easy eager execution; compilers now capture graphs for fusions, memory tricks, and custom attention like Flex. He stresses reliable programming models over black-box opts, plus scaling via data/tensor/pipeline parallelism and fault tolerance at 100k+ GPUs.

In third grade words: Big computers learn with tons of power and fast math tricks. Smart helpers stop waste so they run smooth and make cool new things.

This matters for enterprise AI ops and product dev because it shows how to cut costs on huge compute, speed up custom model building, boost reliability at scale, and let teams ship AI features faster without deep infra expertise.
1 reply · 0 reposts · 1 like · 330 views
Allen Braden @allen_explains ·
🚨 A junior at Jane Street reportedly landed a $220K–$600K role because he used AI to analyze trillions of data points faster than most teams ever could.

In this 1-hour lecture, he breaks down the exact system behind it:
• how he researches massive datasets
• how AI finds patterns humans miss
• how his machine turns raw data into decisions
• how you can apply the same thinking yourself

Skip Netflix tonight. Watch this instead. One hour could completely change how you think about research, AI, and opportunity.
60 replies · 711 reposts · 5.4K likes · 857.1K views
Moon Head @MoonHeead ·
@allen_explains Related to this topic: I sent a single prompt to 5 AI assistants (thinking mode) and 3 AI verifiers.

ASSISTANTS:
- A1: Qwen 3.6plus
- A2: GPT 5.4
- A3: Kimi k2.6 agent
- A4: Deepseek v4 pro
- A5: GLM-5.1

VERIFIERS: Sonnet 4.6, DeepSeek v4 pro, Qwen 3.6plus

Results: A1 > A4 > A3 > A2 > A5
2 replies · 0 reposts · 1 like · 2.7K views
Dave @Cyb3rDav3 ·
Seeing a lot of people switching from Claude Code to Codex because of rate limits. That definitely resonates, although I get rate limited wayyyy less when using Claude Code compared to the app. Ironically, enterprises seem to be going with neither lol.
0 replies · 0 reposts · 0 likes · 37 views
Dave @Cyb3rDav3 ·
The battery thing is real, but it's the wrong frame. Local AI on your laptop is like running a space heater. The real play is treating it like infrastructure: have it running on a server, and your laptop just becomes a terminal. The energy cost moves somewhere else; the capability stays the same.
0 replies · 0 reposts · 0 likes · 252 views
@levelsio @levelsio ·
This is something I discovered: Claude Code is a battery suck. Just a few hours and my MBP 16" M4 is empty, it's crazy. SSH'ing into a server you can go all day!

Quoting Tobias @TobiasTrades:
@levelsio People also don't seem to understand that running it on your laptop actually drains quite some battery, while just connecting to your server is as good as free. I got 16 hours to a half battery on Linux Panther Lake. Plus if you're on the go, you just beam in through your phone.
54 replies · 13 reposts · 743 likes · 103.2K views
Dave @Cyb3rDav3 ·
@sama The framing is wrong. It's not "which AI is better" - it's "which one do you have running as a worker right now." The real ones treating these as infrastructure already have opinions. Everyone else is just arguing about chatbots.
0 replies · 0 reposts · 0 likes · 446 views
Sam Altman @sama ·
you know what, all of these "which is better" polls are silly

use codex or claude code, whatever works best for you

i am grateful we live in a time with such amazing tools, and grateful there is a choice
2.2K replies · 1.1K reposts · 23K likes · 1.6M views