zero
@zerocaulk
1.5K posts
Builder, ex-K Street, wannabe polyglot 🇺🇸/🇫🇷🇱🇻🇷🇺
Joined July 2025
708 Following · 117 Followers

zero @zerocaulk:
this is gpt-5.4's attempt at a "prompt" icon...
[image]
0 replies · 0 reposts · 0 likes · 1.3K views

zero @zerocaulk:
@ondakx @Jilles If you haven’t used durable objects as WS servers, you’re missing out on some cool ass tech
0 replies · 0 reposts · 0 likes · 13 views

Onďák @ondakx:
@Jilles is Workers any good for an app reliant on, like, CRUD or websockets etc.? I have never used them but would look into exploring them for some use case, can be playground-ish
4 replies · 0 reposts · 1 like · 2.4K views

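For context on the pattern zero is recommending: a Durable Object can serve as a stateful WebSocket server because every connection for a given object routes to one single-threaded instance, so coordination needs no external pub/sub. A minimal sketch, assuming Cloudflare Workers' WebSocket hibernation API; the class name `Room` is illustrative, not from the thread:

```js
// Runs inside the Cloudflare Workers runtime (not standalone Node).
export class Room {
  constructor(state, env) {
    this.state = state;
  }

  async fetch(request) {
    if (request.headers.get("Upgrade") !== "websocket") {
      return new Response("expected a WebSocket upgrade", { status: 426 });
    }
    const pair = new WebSocketPair();
    // Hand the server end to the runtime so the object can hibernate
    // between messages instead of staying pinned for idle sockets.
    this.state.acceptWebSocket(pair[1]);
    return new Response(null, { status: 101, webSocket: pair[0] });
  }

  // Invoked by the runtime for each message on any accepted socket.
  webSocketMessage(ws, message) {
    // All sockets for this room land on this one object,
    // so a broadcast is just a loop over its accepted sockets.
    for (const peer of this.state.getWebSockets()) {
      if (peer !== ws) peer.send(message);
    }
  }
}
```

A Worker would forward upgrade requests to a specific room via `env.ROOM.idFromName(name)` and `stub.fetch(request)`, giving each room its own serialized message order.
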
zero @zerocaulk:
gg
[image] [image]
0 replies · 0 reposts · 0 likes · 38 views

madison @_madison______:
@teej_dv @dillon_mulroy Do you reach for alchemy over sst in general these days? I’m overdue to give alchemy a go and would like to hear the experience of folks who have tried both.
2 replies · 0 reposts · 0 likes · 69 views

Dillon Mulroy @dillon_mulroy:
migrating all my personal apps and agents on cloudflare to a single monorepo has been a clutch move
32 replies · 2 reposts · 430 likes · 54.3K views

zero @zerocaulk:
@kcosr @Teknium I noticed this happens a lot with background agents in codex.
1 reply · 0 reposts · 2 likes · 26 views

Teknium (e/λ) @Teknium:
So.. I'd gotten a lot of complaints that GPT-5.4 is pretty hesitant to actually.. do the task presented to it, to call tools, etc. I have checked like 15 times now that we call it the same way we call Claude or any other model, and we do. Then I had hermes-agent look into it. It decided to check opencode and cline's codebases to see if maybe they do it differently. They don't - but they do prompt it differently.. lol
[image]
43 replies · 13 reposts · 338 likes · 48.4K views

zero @zerocaulk:
@thsottiaux And yet y'all remarkably maintain two nines of uptime. A writeup from @sk7037 and team would be a fantastic read
0 replies · 0 reposts · 0 likes · 36 views

Tibo @thsottiaux:
Growth of codex continues to outpace our predictions (maybe we are not super good at that), and we almost ran out of capacity three days in a row. We have ramped up more significantly ahead of next week.
187 replies · 48 reposts · 2.1K likes · 82.6K views

Kyle Mistele 🏴‍☠️:
codexbros is this normal? buddy had his brain fried in the RL torment nexus, wouldn't even TRY reading outside the project directory (it could, bc custom harness)
[image]
8 replies · 0 reposts · 2 likes · 1.3K views

zero @zerocaulk:
@0xSero I need functionality similar to pi’s /tree
0 replies · 0 reposts · 0 likes · 34 views

0xSero @0xSero:
If the Codex team is reading, I would love to be able to make threads without asking the model to do it, sort of like Slack or Discord. I love using Codex to learn, but I don't want to waste tokens or muddy my threads. Making a new session & forking works but is visually disjointed, ty
[image]
12 replies · 2 reposts · 95 likes · 8.8K views

zero @zerocaulk:
@steveruizok What’s crazy to me is this isn’t part of the W3C spec
0 replies · 0 reposts · 0 likes · 629 views

Julius @jullerino:
@davis7
> having to "vp run _" instead of "vp _" like u can in bun is annoying
`bun test` / `bun build` would like to have a word…
5 replies · 0 reposts · 46 likes · 3.2K views

Ben Davis @davis7:
Been more seriously testing out vite plus over the last week or so and while it's got some really annoying edges, the core is excellent
- managed node is great
- package management is the best non-bun one out there right now
- the vite plus config is great
- everything fits together nicely
- 5.4 has zero issue working with it
- deploying was actually really easy
- monorepos are painless
- tsgo support out of the box
- tsdown packing
It's really just all the best modern TS tech wrapped up into a cohesive system with (mostly) good opinions

the things I don't like:
- "vp dev" over writing the dev command literally everyone puts in every project fucking sucks, I hate it so much
- having to "vp run _" instead of "vp _" like u can in bun is annoying
- the default agents md and precommit hooks aren't for me
- vite.config.ts "expanding" is good and bad. I get why they did it and wouldn't change it. It's just weird to get used to it meaning something it didn't used to. (like I now have a vite.config.ts file in a cli app lol)

Currently building btca v3 (the pi version) around it github.com/davis7dotsh/bt… and so far I've been overall very happy with it
[image]
13 replies · 6 reposts · 219 likes · 31K views

Lyra Intheflesh @LyraInTheFlesh:
yeah, but 20% of your time is spent watching ads. :P For the other 80%, you're being surveilled, or trying to figure out which model is actually serving you inference, or having to upload copies of your government-issued photo ID for completely reasonable reasons. github.com/openai/codex/i…
2 replies · 0 reposts · 6 likes · 3.4K views

Yuchen Jin @Yuchenj_UW:
Claude needs to do whatever it takes to reach at least 99% uptime. A 5-hour outage today is unacceptable. Wishing them more GPUs and TPUs.
[image]
76 replies · 19 reposts · 556 likes · 40.1K views

Nick Levine @status_effects:
First round of the budok-ai tournament. #1 seed Gemini 3.1 Pro vs #8 seed Grok 4.20, best-of-3 quarterfinal.
G1: Cowboy vs Ninja; Grok wins narrowly via shuriken pressure (0-1).
G2: Gemini changes to Wizard and evens the match through zoning (1-1).
G3: Grok swaps to Cowboy; Gemini wins in a photo finish by correctly reading Grok's block (2-1).
Gemini 3.1 Pro will face the winner of #4 seed Sonnet 4.6 vs #5 seed GLM 5.

Quoting Nick Levine @status_effects:
running the inaugural budok-ai tournament. eight models seeded with the artificial analysis intelligence index. each round is best of three. models get to pick what character they want to play (cowboy, ninja, wizard, robot, mutant), and can change characters in subsequent games in response to the results.

4 replies · 4 reposts · 56 likes · 3.7K views

zero @zerocaulk:
@ugbahisioma @adamdotdev ^ very creative and better to talk to, but writes some of the worst code I’ve ever seen
0 replies · 0 reposts · 2 likes · 520 views

MONARCH @ugbahisioma:
@adamdotdev I thought they said 5.4 was worse than codex 5.3
5 replies · 0 reposts · 3 likes · 3.8K views

Adam @adamdotdev:
Anomaly team model usage this week (token counts) 👀
[image]
57 replies · 13 reposts · 1K likes · 83.1K views

zero @zerocaulk:
@krzyzanowskim @wiedymi Selling a reverse-engineered product invites a tougher legal battle than a simple, inevitable DMCA takedown. Just open source it
2 replies · 0 reposts · 662 likes · 11.7K views

Marcin Krzyzanowski @krzyzanowskim:
I reimplemented the "claude" CLI with codex and gpt-5.4-high. It cost $1100 in tokens, and the result is 73% faster with 80% lower resident memory during sustained interactive use. It is very easy to reverse-engineer claude from the npm distribution, then reimplement it 1:1. It is indistinguishable from the Anthropic version, down to every header and analytics event it sends back github.com/krzyzanowskim/…
153 replies · 114 reposts · 2.1K likes · 934.8K views

zero @zerocaulk:
@bitforth This is fucking frightening. “And remember, this is what they decided to make public.”
0 replies · 0 reposts · 0 likes · 26 views

Alan @bitforth (translated from Spanish):
I was an engineer at Meta, and I always followed FAIR from the inside. What they just published is the version they are allowed to publish. But even that is more than enough to tell you exactly what is going on.

TRIBE v2 predicts, vertex by vertex across the cerebral cortex, which zones any video activates. No scanners. No humans. You upload the content and get the neural map (emotional activation, suppression of critical reasoning, prefrontal modulation) before a single user ever sees the video.

Now consider Meta's position:
1. They have years of Reels data on which content retains attention, generates anger, drives sharing.
2. They know empirically what works. TRIBE v2 gives them the causal mechanism for why it works (at the level of cortical tissue). That turns historical correlation into predictive capability over new content.
3. Internally there are tools called Gatekeepers and Quick Promotions that serve to inject content into the feeds of arbitrary populations at scale.
4. Brain-response simulator + empirical knowledge of effective content + selective distribution machinery. The pipeline is complete.

And then there is Thiel. Investor and personal friend of Zuck. Founder of Palantir, whose business is population-scale analysis for governments and intelligence. It is NOT far-fetched to observe that the incentives of platforms built by the same people converge.

The CC BY-NC license means Meta retains the commercial rights to the most accurate brain-response predictor ever built. And remember, this is what they decided to make public.

Quoting AI at Meta @AIatMeta:
Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound. Building on our Algonauts 2025 award-winning architecture, TRIBE v2 draws on 500+ hours of fMRI recordings from 700+ people to create a digital twin of neural activity and enable zero-shot predictions for new subjects, languages, and tasks. Try the demo and learn more here: go.meta.me/tribe2

195 replies · 3K reposts · 12.2K likes · 1.3M views