alphaoptics

21.1K posts

@alphaoptics_

Sitting on generational wealth.

Joined March 2011
1.7K Following · 3.1K Followers
alphaoptics reposted
Tunechi518
Tunechi518@Tunechi518·
@RealGregWolf @andyserling @MaggieWolfndale @LaffitPincayTV @mig4450 @TheNYRA @FOXSports You helped SELL Game of Silks on America’s Day at the Races—live minting, promotions, your voices pushing it hard. We trusted YOU. We trusted NYRA. Yet you’ve all stonewalled the investors you helped sell to: worthless land/NFTs, millions lost, dead silence. You’re torching your own reputations by ghosting the very fans who trusted your word. Speak up! #NYRA #GameOfSilks
alphaoptics reposted
Tunechi518
Tunechi518@Tunechi518·
@andyserling @TropicalRacing Blocked me? Cowards! 😂 That’s the sound of impact, boys. You hyped Game of Silks like pros, pocketed the trust, now hiding behind mute buttons. Meanwhile, I’m still here dropping truth at the track with my tiny megaphone. Keep running—fans see everything. #NYRA #GameOfSilks
Tunechi518
Tunechi518@Tunechi518·
@andyserling @tropicalracing @TheNYRA @NYRABets Stonewalling won’t erase the millions or shut me up. You used your big platform to hype Game of Silks and cash in. I’ll use my tiny one to keep dropping truth bombs. See you at the track—me yelling facts, you pretending not to hear. Fans notice.
alphaoptics reposted
Tunechi518
Tunechi518@Tunechi518·
@TheNYRA @NYRABets Shame on you! You called it a ‘landmark partnership’ & said ‘Game of Silks presents its players with a fascinating and entertaining challenge by gamifying racehorse ownership in a completely new way’ (Dave O’Rourke, NYRA CEO). You took equity, made them Official Blockchain/Metaverse Partner, promoted heavy—who the fuck do you think invested? Your core supporters, the same diehards at the windows pumping money into the machines every damn day. Now it’s dead silent? Shame! Speak up! #NYRA #GameOfSilks
alphaoptics reposted
Sarah Wolf
Sarah Wolf@sarahzorah·
a credit card but instead of cash back you get Claude credits
dfinzer.eth | opensea
dfinzer.eth | opensea@dfinzer·
an update on $SEA. the team has been building at full speed, and the foundation had planned to kick off the first steps as part of our march 30th event. but @openseafdn is pushing back the timeline. a delay is a delay. i’m not going to dress it up, and i know how it lands.

the reality is that market conditions are challenging across crypto right now, and $SEA only launches once. @openseafdn could force the original date, or we could ensure every piece is in place and make this moment what this community deserves. we gave a tremendous amount of thought to how to do right here. I’m thankful to @HollanderAdam for bringing the community’s voice into every conversation.

we’ll be doing the following:

no more waves: the current rewards wave will be our last.

optional fee refund: recognizing that we originally committed to a Q1 date, we’re offering refunds of the platform fees we retained while participating in the rewards waves (3 - 6) that followed our timing announcement. if you like, you can receive a refund of those fees, which when combined with treasure chest prizes, essentially means all of your trading during that period was on us. if you opt for a refund, the Treasures you were awarded during these waves will be removed from your account. details on this process will follow.

honoring existing Treasures: for Treasures you continue to hold, our prior commitment stands: they will be meaningfully considered by the Foundation at TGE. this is independent from allocations for historical activity.

0% fees for 60 days: starting on march 31st, opensea will reduce our own token trading fees to 0%. we want to make it a no-brainer for everyone to experience our new platform: cross-chain token trading, mobile app, perps and more. after this 60 day period, we will put a new system in place that makes fees significantly more competitive for anyone trading consistently on opensea.

product updates: while we’re postponing our march 30th event, we’ll host a separate one in the coming months focused on product updates. it’s been incredible to see the early responses to our mobile app, and we can’t wait to get it into more people’s hands.

so if not now, wen? when we announced last year, it was too early. that created unnecessary uncertainty and reactivity. so when the Foundation sets a new timeline, it will be deliberate and specific. here’s why i’m confident that’s the right move: i’ve been building opensea for almost a decade. when this started, we were two people and the only thing you could trade on OS was cryptokitties. i’ve watched this space go from a niche curiosity to billions in volume to where we are today. the thing that’s carried us through every cycle was a willingness to make hard calls when it mattered.

when our market crashed, we rebuilt from zero: an entirely new stack, a new product, and a new team culture. that hurt in the short term. but today OS2 is undeniably the strongest marketplace offering, and it’s the foundation everything sits on. we have huge ambitions as a company, and we’re here for the long game. making all of non-custodial crypto delightful on mobile is just the beginning. that means we have to set a very high bar for everything we do, and it’s why i’m so protective of delivering a launch that’s worthy of this community and everything we’re putting into this.
PennybagsCX
PennybagsCX@PennybagsCX·
106 Openclaw agents launched for my A.I. agency ... what could go wrong? 😅 What if I duplicate them, give them self-learning abilities, then throw them into an arena where they A/B split test against each other, and whoever loses, dies? 👻 May the smartest survive 👀
alphaoptics reposted
Claude
Claude@claudeai·
A small thank you to everyone using Claude: We’re doubling usage outside our peak hours for the next two weeks.
Luke The Dev
Luke The Dev@iamlukethedev·
If your OpenClaw agents don’t go to the gym, something is wrong with your setup. Just added a gym to the 3D office. When agents are learning or developing new skills, they go train. Even AI engineers need leg day. 🏋️
Jay Scambler
Jay Scambler@JayScambler·
Describe a task in plain English. autocontext generates a spec, builds a rubric, and starts improving. "Write incident postmortems" → spec → rubric → evaluate → revise → repeat. The judge scores each dimension independently.
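The spec → rubric → evaluate → revise loop described above can be sketched as a minimal, hypothetical Python harness. All names, scoring logic, and the placeholder judge below are illustrative assumptions, not autocontext's actual API; a real harness would call an LLM at each step.

```python
# Minimal sketch of a spec -> rubric -> evaluate -> revise loop.
# Illustrative only: not autocontext's real API.
from dataclasses import dataclass


@dataclass
class Rubric:
    dimensions: list[str]  # the judge scores each dimension independently


def build_spec(task: str) -> str:
    # A real harness would have an LLM expand the task into a full spec.
    return f"Spec for: {task}"


def build_rubric(spec: str) -> Rubric:
    # Hypothetical dimensions derived from the spec.
    return Rubric(dimensions=["completeness", "clarity", "accuracy"])


def judge(output: str, rubric: Rubric) -> dict[str, float]:
    # Placeholder judge: scores each dimension independently on [0, 1].
    return {dim: min(1.0, len(output) / 200) for dim in rubric.dimensions}


def revise(output: str, scores: dict[str, float]) -> str:
    # Target the weakest dimension; a real harness would call an LLM here.
    weakest = min(scores, key=scores.get)
    return output + f" [revised for {weakest}]"


def improve(task: str, draft: str, rounds: int = 5, target: float = 0.9) -> str:
    spec = build_spec(task)
    rubric = build_rubric(spec)
    for _ in range(rounds):
        scores = judge(draft, rubric)
        if min(scores.values()) >= target:
            break  # every dimension meets the bar
        draft = revise(draft, scores)
    return draft
```

The key design point the post highlights is that the judge scores each rubric dimension independently, so the revise step can target the specific dimension that scored lowest rather than the output as a whole.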
Jay Scambler
Jay Scambler@JayScambler·
Introducing autocontext: a recursive self-improving harness designed to help your agents (and future iterations of those agents) succeed on any task. I built this for our clients with the intention of commercializing it but the community support around Karpathy's autoresearch convinced me to open source it instead. Our space is on the verge of something big and we want to do our part.
Andrej Karpathy@karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement), this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference. I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project.

This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc etc. This is the bread and butter of what I do daily for 2 decades. Seeing the agent do this entire workflow end-to-end and all by itself as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real", I didn't find them manually previously, and they stack up and actually improved nanochat.

Among the bigger things e.g.:
- It noticed an oversight that my parameterless QKnorm didn't have a scaler multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course - you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.

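The workflow Karpathy describes — propose a change, run an experiment, keep the change only if the validation loss improves — is, at its core, greedy hill-climbing over a config. A hedged sketch, where every function is an illustrative stand-in (the real autoresearch agent plans changes from experiment history with an LLM and runs actual training jobs):

```python
# Hedged sketch of an autoresearch-style loop: propose a change, measure a
# metric, keep the change only if it improves. All functions are stand-ins.
import random


def propose_change(history):
    # The real agent plans from the sequence of experiment results; here we
    # just perturb one hypothetical hyperparameter at random.
    knob = random.choice(["weight_decay", "adamw_beta2", "init_std"])
    return knob, random.uniform(-0.05, 0.05)


def run_experiment(config):
    # Stand-in for a full training run returning validation loss.
    return sum(abs(v) for v in config.values())


def autoresearch(config, steps=200):
    best_loss = run_experiment(config)
    history = []
    for _ in range(steps):
        knob, delta = propose_change(history)
        trial = dict(config)
        trial[knob] = trial[knob] + delta
        loss = run_experiment(trial)
        history.append((knob, delta, loss))
        if loss < best_loss:  # keep only changes that improve the metric
            config, best_loss = trial, loss
    return config, best_loss
```

This also illustrates the thread's closing point: any metric cheap enough to evaluate (or with a cheap proxy, like a smaller model) can slot into `run_experiment` and be tuned the same way.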
alphaoptics
alphaoptics@alphaoptics_·
@burwick_max @remusofmars Busy not providing Silks folks with an update and next steps. The same folks that gave y’all a chance before all the attention.
remus🦞(⚫️, ⬜️)
remus🦞(⚫️, ⬜️)@remusofmars·
Why did burwick law stop tweeting? Did they actually get a real case?
alphaoptics
alphaoptics@alphaoptics_·
@BurwickLaw when do you guys plan to update the Silks folks? 🤔
Zeneca🔮
Zeneca🔮@Zeneca·
If you want to reduce your token usage and costs with OpenClaw, give it this prompt:

"Create a Token Usage Dashboard so I can see how many tokens are being spent on all of our various activities. I want to have clarity on token usage, to see how large any of our .md files are and how many tokens are used when you load them, how many tokens each cron job uses, how many tokens are used when you boot a new session, to see our daily usage based on the previous 5 days of activity (broken down by llm). I want our context files listed in a directory where I can click on the folders and see all the individual files and how large they are and how much context they take up. Then, audit our file structure and systems and come up with a way to optimize our token usage without losing functionality. Give me a plan with the recommended changes before committing them."

I did this and was able to reduce my usage by 20-30% right off the bat. I also then recommend asking it to create a cron job to perform this audit 2x a week, to ensure you're not duplicating information across files or having stale info in there, etc.
Nav Toor
Nav Toor@heynavtoor·
@PratikPatel_227 The main edge is zero dependencies and framework-agnostic HTTP. Most other tools lock you into their stack. This one doesn't.
Nav Toor
Nav Toor@heynavtoor·
🚨 Someone just solved the biggest bottleneck in AI agents. And it's a 12MB binary.

It's called Pinchtab. It gives any AI agent full browser control through a plain HTTP API. Not locked to a framework. Not tied to an SDK. Any agent, any language, even curl. No config. No setup. No dependencies. Just a single Go binary.

Here's why every existing solution is broken:
→ OpenClaw's browser? Only works inside OpenClaw
→ Playwright MCP? Framework-locked
→ Browser Use? Coupled to its own stack

Pinchtab is a standalone HTTP server. Your agent sends HTTP requests. That's it.

Here's what this thing does:
→ Launches and manages its own Chrome instances
→ Exposes an accessibility-first DOM tree with stable element refs
→ Click, type, scroll, navigate. All via simple HTTP calls
→ Built-in stealth mode that bypasses bot detection on major sites
→ Persistent sessions. Log in once, stays logged in across restarts
→ Multi-instance orchestration with a real-time dashboard
→ Works headless or headed (human does 2FA, agent takes over)

Here's the wildest part: A full page snapshot costs ~800 tokens with Pinchtab's /text endpoint. The same page via screenshots? ~10,000 tokens. That's 13x cheaper. On a 50-page monitoring task, you're paying $0.01 instead of $0.30.

It even has smart diff mode. Only returns what changed since the last snapshot. Your agent stops re-reading the entire page every single call.

1.6K GitHub stars. 478 commits. 15 releases. Actively maintained. 100% Open Source. MIT License.
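Because the post describes Pinchtab as a plain HTTP server usable from any language, a client is just request construction. A hypothetical stdlib-only Python sketch follows; the endpoint paths (`/text`, `/click`, `/type`), payload shapes, and port are assumptions for illustration, not Pinchtab's documented interface.

```python
# Hypothetical client for a plain-HTTP browser-control API like the one the
# post describes. Endpoint paths and payload shapes are assumed, not real.
import json
import urllib.request


class PinchtabClient:
    def __init__(self, base_url="http://localhost:9222"):
        self.base_url = base_url

    def _build(self, path, payload=None):
        # Build (but do not send) the request; pass the result to
        # urllib.request.urlopen() against a running server to execute it.
        data = json.dumps(payload).encode() if payload is not None else None
        return urllib.request.Request(
            self.base_url + path,
            data=data,
            headers={"Content-Type": "application/json"},
            method="POST" if data else "GET",
        )

    def snapshot_text(self):
        # The post cites a /text endpoint: ~800 tokens per page snapshot,
        # versus ~10,000 tokens for a screenshot of the same page.
        return self._build("/text")

    def click(self, ref):
        # Element refs would come from the accessibility-first DOM tree.
        return self._build("/click", {"ref": ref})

    def type_text(self, ref, text):
        return self._build("/type", {"ref": ref, "text": text})
```

The framework-agnostic claim is exactly this property: since the interface is HTTP plus JSON, the same calls work from curl, Go, or any agent runtime without an SDK.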