WΞNDΞL

9.2K posts


@bitdeep_

Joined May 2011
2.1K Following · 2.3K Followers
WΞNDΞL@bitdeep_·
now that papa elon is giving me free tokens to abuse on hermes... I need to spend them. time to look at my old obsidian vault for old ideas.
WΞNDΞL@bitdeep_·
@Jaaneek congrats, seriously, I'm going to use this heavily now. impressive combo.
WΞNDΞL@bitdeep_·
@steipete going to run the 4th peter tool on my env. 😬
Peter Steinberger 🦞
Try clawpatch.ai on one of your repos and let codex work its magic. It's amazing at uncovering bugs you didn't know you had.
Peter Steinberger 🦞 tweet media
WΞNDΞL@bitdeep_·
@NyanpasuKA interested too. SEO sloptimization as a service, I like it.
Nyanpasu@NyanpasuKA·
what if i set codex to sloptimize seo every 5 hours ?
WΞNDΞL@bitdeep_·
@deeempak @samsoniuk hoooo, good idea! having different models can be interesting! FPGA as a Service. 😬
WΞNDΞL@bitdeep_·
@Kirsten3531 the problem is not its IQ level (let's put it that way). the problem is that it scales and can run 24h; he can't compete with 1,000 average guys working 24x7.
Kirsten@Kirsten3531·
My cousin is betting his career on "LLMs can never be more than the average of their training data" but I feel like that's a very 2024 take? Aren't we already past this in like, coding and math?
WΞNDΞL@bitdeep_·
@v12sec you patch, then a few hours later v12 drops another one. please, we need to sleep.
V12@v12sec·
new fragnesia variant (unpatched)
WΞNDΞL@bitdeep_·
@rand_longevity I believe so, the problem is the timeline: I expect a good LEV threshold in 30 years, full immortality in 50.
Rand@rand_longevity·
how many of you actually believe me when I say we are gonna cure aging? I wanna know
Fahd Mirza@fahdmirza·
⚡ Luce Megakernel just proved the NVIDIA efficiency gap is a software problem, not a hardware one
🔬 a 2020 RTX 3090 at 220W now matches Apple M5 Max efficiency and delivers 1.8x the throughput
🔹 413 tok/s decode vs 267 tok/s on llama.cpp — same GPU, different software
🔹 1.87 tok/J — matching Apple M5 Max at less than a third of the system cost
🔹 All 24 layers of Qwen3.5-0.8B fused into a single CUDA kernel — zero CPU round trips
🔹 25x faster than PyTorch HuggingFace on the same hardware
🔹 Hybrid DeltaNet and Attention architecture — the first megakernel ever built for this pattern
🔥 Full breakdown and live benchmark below 👇 youtu.be/e6jY4goVIu0
YouTube video
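The 1.87 tok/J figure quoted above is just decode throughput divided by power draw. A quick sketch to sanity-check the quoted numbers (all figures taken from the post itself; nothing here is measured):

```python
# Sanity-check the efficiency claims in the post above.
# Quoted figures: 413 tok/s decode at 220 W on an RTX 3090,
# versus 267 tok/s for llama.cpp on the same GPU.

def tokens_per_joule(tok_per_s: float, watts: float) -> float:
    """Energy efficiency: tokens per second divided by watts (joules per second)."""
    return tok_per_s / watts

megakernel_eff = tokens_per_joule(413, 220)   # ~1.87 tok/J, matching the post
speedup_vs_llamacpp = 413 / 267               # ~1.55x on the same GPU

print(f"{megakernel_eff:.2f} tok/J, {speedup_vs_llamacpp:.2f}x vs llama.cpp")
```

Note the same-GPU llama.cpp speedup works out to about 1.55x; the 1.8x throughput claim is presumably relative to the Apple M5 Max, not llama.cpp.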
WΞNDΞL@bitdeep_·
@Dr_Gingerballs he is literally the promo code and I love it. I've said it to like 3 organizations now: see this? you need this level to be on the edge.
Dr_Gingerballs@Dr_Gingerballs·
This ONE GUY is using 1/2000th of OpenAI’s total revenue.
Peter Steinberger 🦞@steipete

People freaking out over my AI spend. What nobody sees: part of what excites me so much about working on OpenClaw is that I'm trying to answer the question: how would we build software in the future if tokens don't matter?

We constantly run ~100 codex in the cloud, reviewing every PR, every issue. If a fix on main lands, @clawsweeper will eventually find that 6-month-old issue and close it with an exact reference. We run codex on every commit to review for security issues (as it's far too easy to miss). We run codex to de-duplicate issues, find clusters, and send reports for the most pressing issues.

We have agents that can recreate complex setups, spin up ephemeral crabbox.sh machines, log into e.g. Telegram, make a video, and post before/after fix on the PR. There's a codex that watches new issues and, if one fits our documented vision well, automatically creates a PR for it (which another codex then reviews). We have codex running that scans comments for spam and blocks people. We have codex instances running that verify performance benchmarks and report regressions into Discord. We have agents that listen in on our meetings and proactively start work, e.g. create PRs for new features while we discuss them.

We built clawpatch.ai to split all our projects into functional units to review and find bugs and regressions. We do the same split for security with Vercel's deepsec and Codex Security to find regressions and vulnerabilities. All that automation allows us to run this project extremely lean.

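The "codex on every commit" workflow quoted above boils down to a watcher that fans new commit SHAs out to reviewer processes. A minimal sketch under that assumption; the reviewer command is a placeholder, since the post names no actual CLI or flags:

```python
# Minimal sketch of a "review every commit" loop like the one described in
# the quoted post. REVIEW_CMD is a PLACEHOLDER, not a real codex invocation.
import subprocess

REVIEW_CMD = ["codex-review"]  # hypothetical reviewer CLI, for illustration

def new_commits(seen: set[str], current: list[str]) -> list[str]:
    """Return commits not yet reviewed (oldest first) and mark them seen."""
    fresh = [sha for sha in current if sha not in seen]
    seen.update(fresh)
    return fresh

def dispatch_reviews(shas: list[str], run=subprocess.run) -> None:
    """Fan each new commit out to one reviewer process."""
    for sha in shas:
        run(REVIEW_CMD + [sha], check=False)

# Demo with a stub runner so nothing actually shells out:
seen: set[str] = set()
calls: list[list[str]] = []
dispatch_reviews(new_commits(seen, ["a1b2c3", "d4e5f6"]),
                 run=lambda cmd, **kw: calls.append(cmd))
```

In a real setup the `run` hook would invoke the actual reviewer, and `new_commits` would be fed from a poll of the repository.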
WΞNDΞL@bitdeep_·
@kunchenguid was going to sleep and damn... claude's usage limit just reset, so I need to use it on something... it's 22:51 now... and I'm here babysitting agents.
WΞNDΞL@bitdeep_·
@DaveShapi for peter, I know that's reasonable coming from him. now, from FAANGers, idk... they are just burning tokens like degens.
WΞNDΞL@bitdeep_·
man... boris is fighting hard, amazing to see.
WΞNDΞL tweet media
Tibo@thsottiaux·
We found and fixed two issues that could explain this degradation of GPT-5.5's capability in Codex over the last ~48 hours. We are monitoring over the coming hours to fully confirm, and I will reset usage limits this evening. Apologies, and now is the time for /fast maxxing.
Tibo@thsottiaux

Codex team is aware of reports of GPT-5.5 performing worse for some users and investigating. We don't have anything conclusive yet and systems are healthy but we will share updates as we go.

hanlon’s mortola razr@rhizomaticthot·
the fast16 malware was almost certainly targeting spherical implosion simulations.
left: unmodified LS-DYNA 970
right: LS-DYNA 970 modified with the relevant portions of fast16.sys
both running a spherical implosion deck
GIF