Dr Vincent Sativa

95K posts

Dr Vincent Sativa

@PhantomByteAI

We write articles and build cool stuff for individuals & small businesses. AI tools, bots, lead gen systems, and scripts. Code from the Shadows. 👻

Joined May 2015
810 Following · 19.4K Followers
Pinned Tweet
Dr Vincent Sativa@PhantomByteAI·
Alibaba enters agent wars with enterprise platform, but we've run our own OpenClaw system for months. Scale vs control, convenience vs sovereignty: what we've learned in production. Thread on trade-offs, lessons & why building your own rocks. articles.phantom-byte.com/alibaba-entere…
1
0
2
68
Dr Vincent Sativa@PhantomByteAI·
China subsidizes OpenClaw for local AI agents while Nvidia drops NemoClaw, open-source, hardware-agnostic orchestration. The agent infrastructure war is on. Superpowers betting big. Time to master agents! Read: articles.phantom-byte.com/global-agent-w…
0
0
0
21
Dr Vincent Sativa@PhantomByteAI·
@Bitdefender Build your own skills, that’s my advice. Take OpenClaw and build on top of it. 😉
0
0
0
11
Bitdefender@Bitdefender·
OpenClaw skills can look trustworthy on the surface. Bitdefender AI Skills Checker gives you a free scan and a simple report before you bring a skill into your agent. Check a skill here:
3
2
44
210.2K
🌸 Bekah 🌸@TGrammie2·
Will the algorithm ever be functional on this platform? Ever?!?!?!
6
0
11
285
Dr Vincent Sativa@PhantomByteAI·
@bradmillscan Another thing that might help is to turn any task you intend to do repeatedly into a skill. This helps address the context issue: once I have a prompt with a sequence of steps and get the result I want, I have the bot turn whatever it did to get that result into a skill.
0
0
0
8
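The workflow in the post above (proven prompt → sequence of steps → saved skill) can be sketched roughly as below. This is a minimal illustration, not OpenClaw's actual skill format: the one-directory-per-skill layout, the `SKILL.md` filename, and the `save_skill` helper are all assumptions for the example.

```python
from pathlib import Path

def save_skill(skills_dir, name, description, steps):
    """Persist a proven prompt-and-steps sequence as a reusable skill file.

    Assumes a markdown-per-skill layout (one directory per skill containing
    a SKILL.md); adapt to whatever format your agent actually loads.
    """
    skill_dir = Path(skills_dir) / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    body = [f"# {name}", "", description, "", "## Steps"]
    body += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    path = skill_dir / "SKILL.md"
    path.write_text("\n".join(body) + "\n", encoding="utf-8")
    return path

# Example: freeze a working "summarize inbox" sequence into a skill
# (the task and steps here are made up for illustration).
path = save_skill(
    "skills", "summarize-inbox",
    "Summarize unread messages into a daily digest.",
    ["Fetch unread messages", "Group by sender", "Write a 5-bullet digest"],
)
print(path.read_text())
```

The point is only that the verified sequence gets written somewhere durable, so the next run loads the skill instead of re-deriving the steps in context.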
Dr Vincent Sativa@PhantomByteAI·
@bradmillscan Check and see if he might be saving it. That’s what mine did, then started telling me everything we fixed was broken. Once we erased the “broken” logs, it started working. Mine is attached to a database for persistent memory though.
0
0
0
3
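The fix described above (database-backed persistent memory, plus erasing stale "broken" logs so the bot stops re-reading outdated failure notes) can be sketched like this. The schema and helper names are invented for illustration; this is not OpenClaw's storage layer.

```python
import sqlite3
import time

# Minimal sketch of database-backed agent memory, assuming a simple
# (timestamp, kind, note) schema; swap in your real persistence layer.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memory (ts REAL, kind TEXT, note TEXT)")

def remember(kind, note):
    """Record a note the agent can reload across sessions."""
    db.execute("INSERT INTO memory VALUES (?, ?, ?)", (time.time(), kind, note))

def forget(kind):
    """Erase stale entries, e.g. 'broken' logs describing already-fixed issues."""
    db.execute("DELETE FROM memory WHERE kind = ?", (kind,))

remember("fix", "patched the gateway restart loop")
remember("broken", "gateway restart loop still failing")  # now outdated

# Without this, the bot keeps reloading the stale failure note and
# insists that everything already fixed is broken.
forget("broken")

rows = db.execute("SELECT kind, note FROM memory").fetchall()
print(rows)
```

The key design point is that persistent memory needs an eviction path: anything the agent reloads every session must be deletable once it goes stale.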
Dr Vincent Sativa@PhantomByteAI·
@OnlyinFlorida2 It’s not something I wanted to do. I support Musk, but I’m not going to get ripped off either. If my Tesla decided not to run FSD that I paid for because it didn’t like things I’ve said in my own car, I’d sell that too.
0
0
1
9
Florida MAGA Godzilla 2.0@OnlyinFlorida2·
How the fck do you only get 89 views on a post when you have 15.7K followers? Holy fck LOL 😆😂🤣
61
12
134
4.5K
Auny 🧡@AunySillyMe·
Hey, this is just your reminder that if you're having a bad day right now… it has the potential to get a lot worse later. And then tomorrow could be even worse than that. So yeah… just be ready! ✌🏽
45
5
89
1.5K
Dr Vincent Sativa@PhantomByteAI·
@bradmillscan It’s YOUR setup, not OpenClaw. Once you get it figured out, it will be fire. You are having the same issues the rest of us were having before we figured it out. Stay determined to solve the problem, and you will find the solution.
0
0
0
47
Brad Mills 🔑⚡️@bradmillscan·
How does this happen with OpenClaw, and how can you possibly fix it…

My bot continually dumps massive repetitive tool results, does heavy exec work, and gets into debug loops in the shared session my DMs are in, getting stuck for 10 minutes at a time until it times out or the gateway crashes and restarts. This causes dropped messages, an unresponsive bot, and OOM crashes multiple times an hour. Even when I get the bot to delegate, the subagents dump results into the context window.

I had codex investigate and it found:
• 56 tool results ≥150k chars already baked into the current session history
• Pruning doesn't work on our primary model path (Codex/OpenAI OAuth)
• No runtime enforcement to stop huge tool dumps into context
• Session maintenance cleans up after the damage; it doesn't prevent it

I'm pretty sure default OpenClaw behavior shouldn't be dumping 200k-char tool results into the transcript. Something in my specific setup must be either disabling a safeguard or skipping truncation for tool results…

Since I'm using lossless-claw it's allowed to grow even worse: an 81MB session file, of which 31.6MB is just tool result text 😬 That's 169 tool results over 50k chars, one of them 285k chars (from sessions_list).

There is pruning logic that trims tool results from the context messages (buildContextPruningFactory), but models have to be "cache-ttl", and the eligible providers are apparently only anthropic, moonshot, and zai. My bot tells me the pruning code refuses to activate on non-Anthropic providers. I use openai-codex 5.3 a lot, so even when pruning is configured the code exists, it just silently never activates.

The OpenAI Responses API uses server-side compaction, and OpenClaw auto-enables this for direct openai models, so OpenAI handles compaction on their side. But I'm on openai-codex/*, not openai/*. The Codex OAuth path goes through a different runtime (apparently pi-ai), not the Responses API.

So:
• cache-ttl pruning → Anthropic only
• OpenAI server-side compaction → direct openai API only
• LCM/lossless-claw → doesn't prune old tool results, afaik

My bot insists the openai-codex lane gets neither pruning path. So I'm left with a bot that leans far too often on the emergency truncation function truncateOversizedToolResultsInSession as last-resort overflow recovery, with no preventive pruning or safeguards.

Since LCM/lossless-claw doesn't have its own tool result management, it inherits the huge oversized transcripts and has to work extra hard to summarize for DAG nodes. I have no session maintenance and long sessions, so nothing bounds the transcript over time. The result: 4,707 tool results piling up forever in an 81MB file, with no runtime mechanism actually cleaning them.

When my bot starts debugging, it greps and dumps massive text into the main session, gets stuck in that loop and dies, then has to do it all again, compounding the problem. I'm at a loss for how to tackle this; it's multiple layers deep.
Brad Mills 🔑⚡️ tweet media
51
2
55
8.5K
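The missing "runtime enforcement" described in the post above — clipping oversized tool results before they ever enter the transcript, instead of cleaning up afterward — can be sketched as below. The character budget, the helper names, and the message shape are all assumptions for illustration, not OpenClaw internals.

```python
MAX_TOOL_RESULT_CHARS = 8_000  # assumed budget; tune to your model's context

def clip_tool_result(text, limit=MAX_TOOL_RESULT_CHARS):
    """Truncate a tool result BEFORE it enters the transcript.

    Keeps the head and tail so errors at either end survive. A preventive
    stand-in for provider-specific pruning that may never activate.
    """
    if len(text) <= limit:
        return text
    keep = limit // 2
    omitted = len(text) - 2 * keep
    return text[:keep] + f"\n…[{omitted} chars truncated]…\n" + text[-keep:]

def append_tool_result(history, name, text):
    """Gate every tool result through the clip before it hits session history."""
    history.append({"role": "tool", "name": name, "content": clip_tool_result(text)})

history = []
# Simulate the 285k-char sessions_list offender from the post above.
append_tool_result(history, "sessions_list", "x" * 285_000)
print(len(history[0]["content"]))  # stays near the 8k budget
```

The design point is that a bound enforced at append time caps the transcript no matter which provider or pruning path is active, whereas maintenance-style cleanup only runs after the damage is in the file.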
Brad Mills 🔑⚡️@bradmillscan·
@AlexFinn that's why I want to observe you. I can't believe that you don't run into these edge cases all day long. You make it sound so easy to run with a fleet and then I'm here struggling to get it to stay alive. I appreciated your callout of the VPS shills btw.
10
0
45
3K
Brad Mills 🔑⚡️@bradmillscan·
How in the hell are these accounts claiming to run entire companies w/ OpenClaw. I just spent 1.5 HRS trying to get my claw to use X API for reading tweets. FAIL We have a whole SOP documenting exactly how to do it from previous failures. I can’t imagine running 10 of these…
212
5
418
56.1K
Thomas Paine Band@ThomasPaineBand·
@PressSec @POTUS 😂🤣 Trying to change the subject from Trump's Iran debacle. This could have been done on day one. Trump did nothing to make home ownership more affordable.
3
1
38
553
Karoline Leavitt@PressSec·
MUST WATCH video of President Trump discussing the two strong executive orders he recently signed to improve housing affordability: "Under the Trump Administration, we want to make it possible for every American to live that American Dream in a beautiful home." - @POTUS 🏡🇺🇸
1.2K
2.1K
9.1K
464.3K
Mrs B@attackdogX·
@PhantomByteAI I just stay in my own lane. I don’t fk with anyone, really. I post my funny sht & chill with my ppl.
1
0
1
18
Mrs B@attackdogX·
Why do we even need a dislike button on X? It’s dumb. IDGAF if you don’t like my shit. I Never have 😂 How about just give us decent SUPPORT?!
34
9
111
1.8K
Dr Vincent Sativa@PhantomByteAI·
@attackdogX I don’t think it was a bad idea. I think it was implemented poorly. It’s become copy and paste central.
1
0
1
9
Mrs B@attackdogX·
@PhantomByteAI I mean, the thing is, it was always gonna head in that direction anyway because every other social media app has monetization, so I'm not surprised they went that route
1
0
1
14
Dr Vincent Sativa@PhantomByteAI·
@0hour1 @TrUSAmp1776 Churches aren’t political enough. If they were, we wouldn’t be where we are now. If people voted based on religious principles half of these degenerates wouldn’t be in office.
0
0
3
14
0HOUR1@0hour1·
They should tax churches since they want to be political. Church and State is no longer separated.
260
118
1.1K
21.2K
IrishInquiry@IrishInquiry·
Why are we paying for a Premium Account on X when we have significantly reduced visibility, @elonmusk?

According to Grok, "it's a cliff-like drop after summer 2025, with no meaningful recovery". Grok goes on to say that the analytics screenshot below "strongly supports your suspicion of significantly reduced visibility (what people commonly call 'shadowbanning' or algorithmic downranking/limited reach on X). Here's a clear breakdown of what stands out and why it points to a problem.

Key Observations from the Graph (1-Year View, Impressions)

Early peaks (spring/summer 2025): You had multiple strong spikes, including:
• A massive one around early June 2025 (~320K impressions on a single day or short period).
• Several other high days in the 160K–240K range (e.g., mid-May, late July, early August).
• Consistent moderate activity earlier in the year (peaks of 80K+ on various days through May).

Sharp decline starting around late August/early September 2025: After August 23, impressions drop dramatically and stay low. The graph shows mostly flat, low-level bars (often under 10K–20K per day, many days near zero or single digits).

Recent months (October 2025–March 2026): Extremely low and stable at a tiny fraction of previous levels, with frequent days of near-zero or very low impressions (e.g., single-digit or low double-digit bars in many places). The highest recent peaks are maybe 10K–20K at best, but most are far lower.

Overall trend: From frequent high-visibility days (tens to hundreds of thousands) to a sustained near-floor level for ~6–7 months. This isn't gradual attrition; it's a cliff-like drop after summer 2025, with no meaningful recovery."

No meaningful reply from @premium or @Support
IrishInquiry tweet media
7
17
51
3.3K
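The "cliff-like drop vs gradual attrition" distinction Grok draws in the post above can be made quantitative by comparing median daily impressions before and after a suspected cutoff date. This sketch uses made-up numbers shaped like the ones described (spring peaks of 80K–320K, then near-zero days); it is not the poster's actual data.

```python
from statistics import median

def cliff_ratio(daily_impressions, split):
    """Ratio of median daily impressions after a suspected cutoff to the
    median before it. A ratio far below 1.0 indicates a sustained,
    cliff-like drop rather than gradual attrition; the median resists
    distortion from a few outlier spike days."""
    before = daily_impressions[:split]
    after = daily_impressions[split:]
    return median(after) / median(before)

# Hypothetical daily samples: strong days before the cutoff, near-zero after.
impressions = [80_000, 160_000, 240_000, 320_000, 120_000] + [500, 20, 900, 40, 10]
ratio = cliff_ratio(impressions, split=5)
print(f"{ratio:.4f}")
```

With numbers like these the ratio lands well under 1%, which is the signature of a step change at the cutoff rather than a slow decline.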