makevoid

1.3K posts

@makevoid

Senior Blockchain Engineer

Geneva, Switzerland · Joined April 2008
338 Following · 733 Followers
makevoid
makevoid@makevoid·
I'm a Claude Code user, but I've also tried using just the Claude API and tools with a small context window and a short pre-prompt. A smaller context window as a baseline generally gives me a better experience on tasks that aren't excessively complex. I think we need a mixed approach overall to nail this.
0 replies · 0 reposts · 2 likes · 320 views
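A minimal sketch of that smaller-context baseline: keep the pre-prompt, then retain only the most recent messages that fit a tight token budget. The budget, the characters/4 token estimator, and the function names here are illustrative assumptions, not part of the Claude API.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (illustrative only).
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int = 8000) -> list[dict]:
    """Keep the most recent messages that fit a small token budget.

    `messages` are {"role": ..., "content": ...} dicts, oldest first.
    The first message (the short pre-prompt) is always kept.
    """
    if not messages:
        return []
    system, rest = messages[0], messages[1:]
    kept, used = [], estimate_tokens(system["content"])
    for msg in reversed(rest):  # walk newest-to-oldest
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))
```

The trimmed list can then be sent on each turn, so the model's working context stays small regardless of how long the session runs.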
eric provencher
eric provencher@pvncher·
After doing my own research in the codex repo, I finally understand why context is so strange in Codex.

Back in August, they introduced a super aggressive tool-call pruning mechanism that truncates any tool call output longer than 256 lines, splitting it in two so that the model only ever sees 128 continuous lines before a [truncated] break in the middle.

Rather than truncating payloads by token count like Claude Code does (25k tokens max), Codex aggressively limits responses by line count, which means in many cases it might only be seeing 1-2k tokens per tool call and needs to make many tool calls to compensate, making it slower, on top of Codex already being a slow model.

But there's more! Before this week's 0.56 release, this truncation did not apply to MCP tools until the next user message rolled around. Tool call outputs were hitting the model raw, and only after the next turn did the truncated inputs replace the full ones in the history sent to the next Responses API request. This means that users who primarily used MCP tools got a much better Codex experience within the first turn, because the model could digest information about a codebase much more efficiently.

I think the codex team could do a lot better than this naive line-based truncation - please follow the Claude Code approach of reasonable token limits instead of arbitrary line-based ones.
eric provencher@pvncher

@thsottiaux Can you shed a bit more light on this part of the Codex 0.56 release? It's touted as a big enhancement, but it's a big change for MCP tools, and I want to understand more about what it's doing and what the token limits are for outputs. "Smarter history management – The new context manager normalizes tool call/output pairs and truncates logs before they hit the model, keeping context windows tight and reducing token churn (codex-rs/core/src/context_manager)."

25 replies · 18 reposts · 445 likes · 70.7K views
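The line-based truncation described above can be sketched roughly like this. This is a reconstruction from the thread's description, not code from the codex repo; the function name and marker text are illustrative.

```python
def truncate_by_lines(output: str, max_lines: int = 256) -> str:
    """Keep the first and last max_lines//2 lines, with a marker in between.

    Mirrors the behavior described above: outputs longer than max_lines
    are split so the model sees at most 128 continuous lines on each side
    of a truncation break, regardless of how many tokens those lines hold.
    """
    lines = output.splitlines()
    if len(lines) <= max_lines:
        return output
    half = max_lines // 2
    return "\n".join(lines[:half] + ["[... truncated ...]"] + lines[-half:])
```

Because the cut is by line count, a 1,000-line grep of short lines gets chopped just as hard as a 1,000-line dump of long ones, which is why the effective payload can land at only 1-2k tokens per call.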
makevoid
makevoid@makevoid·
@brankopetric00 Monitor load average to prevent these I/O issues; load average is an often-overlooked metric.
1 reply · 0 reposts · 1 like · 528 views
Branko
Branko@brankopetric00·
Production went down. Load balancer health checks failing. All instances marked unhealthy.

SSH'd into an instance:
- App: running
- CPU: 5%
- Memory: 40%
- Disk: 15%
- Network: fine

Manually hit the health endpoint:
curl localhost/health
{"status": "ok"}
Worked perfectly.

Checked load balancer logs:
- Health check URL: /health
- Response: timeout
- Instance marked: unhealthy

The issue:
- Health endpoint responded in 100ms locally
- Load balancer timeout: 2 seconds
- Should be plenty of time

Then I noticed: the health check ran every 5 seconds. The app logged every health check. To a file. That file grew to 47 GB.

Every health check:
1. Opened the 47 GB log file
2. Appended 1 line
3. Closed the file
4. Took 3 seconds due to file size
5. Timed out

Fix: Disabled health check logging. Response time: back to 100ms.
185 replies · 102 reposts · 3.3K likes · 219.2K views
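Load average, as the reply above suggests, surfaces exactly this failure mode: on Linux it counts processes stuck in uninterruptible disk I/O, so it climbs even when CPU% looks idle. A minimal check might look like this; the per-core threshold of 1.0 is an illustrative choice, not a universal rule.

```python
import os

def load_is_high(per_core_threshold: float = 1.0) -> bool:
    """Return True if the 1-minute load average exceeds the per-core threshold.

    On Linux, load average includes processes in uninterruptible I/O wait,
    so it can reveal disk stalls that CPU, memory, and disk-usage gauges hide.
    """
    one_min, _, _ = os.getloadavg()
    return one_min > per_core_threshold * os.cpu_count()
```

Wiring a check like this into an alert would have flagged the instances above long before the health checks started timing out.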
makevoid
makevoid@makevoid·
@siranwrapper Redis is not only a cache; it can be used as a full-fledged database (e.g. Amazon MemoryDB / Valkey, Azure Managed Redis, or self-hosted Redis with AOF + Cluster)
0 replies · 0 reposts · 1 like · 349 views
Si Ran
Si Ran@siranwrapper·
i don’t see why caching is necessary anymore. databases are getting so fast and good like i don’t know how redis still in business lol
129 replies · 8 reposts · 402 likes · 457.4K views
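For the self-hosted durability option mentioned in the reply above, the append-only file is the key Redis setting; a minimal redis.conf fragment might look like this (values are illustrative, not a production recommendation):

```conf
appendonly yes               # enable the append-only file for durability
appendfsync everysec         # fsync once per second: at most ~1s of data loss
aof-use-rdb-preamble yes     # mixed RDB+AOF format for faster restarts
```

With AOF enabled (and Cluster for horizontal scaling), writes survive restarts, which is what moves Redis from "just a cache" into primary-database territory.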
John
John@ionleu·
drop ur startup link
1.1K replies · 38 reposts · 1K likes · 113.8K views
makevoid
makevoid@makevoid·
@CoderUday High-quality synthetic data will become important. Don't forget that AI can also read working code from GitHub open-source projects; I bet this will be the second-largest training source by size, particularly well-documented code.
2 replies · 0 reposts · 3 likes · 2.9K views
Uday
Uday@CoderUday·
Please solve my doubt 🙏🙏 Currently AI is trained on StackOverflow(.)com. Because of AI, people stopped posting questions & answers on Stack Overflow, which means no new data for AI training. So in a few years, will AI become irrelevant as there is no new data??
341 replies · 74 reposts · 2.1K likes · 162K views
makevoid
makevoid@makevoid·
@clivassy Ask the AI to read and explain the code. If you still don't understand it after that, ask the AI to write a high-level test suite (optional) and a simpler version that works and that you can read.
0 replies · 0 reposts · 0 likes · 20 views
Julia 🌞
Julia 🌞@clivassy·
built with ai. can't explain half the code. worst feeling in the world
155 replies · 13 reposts · 446 likes · 26.4K views
makevoid
makevoid@makevoid·
It's a very nice perspective, but I disagree on coding: we passed the biggest advancement point with Sonnet 3.5 and we're decelerating. GPT-3, GPT-3.5, and Sonnet 3.5 were all revolution points for coding, but after Sonnet 3.5 we haven't had any big breakthroughs in terms of models. Agentic coding methods are evolving, but they're not a silver bullet.
2 replies · 0 reposts · 8 likes · 8.2K views
Julian Schrittwieser
Julian Schrittwieser@Mononofu·
As a researcher at a frontier lab I’m often surprised by how unaware of current AI progress public discussions are. I wrote a post to summarize studies of recent progress, and what we should expect in the next 1-2 years: julian.ac/blog/2025/09/2…
221 replies · 800 reposts · 5.9K likes · 2M views
Rhys
Rhys@RhysSullivan·
If you're using Tailwind, don't sleep on using @​apply in CSS modules to create reusable, readable classes. Vibe coders aren't smart enough to do this. (If you're using CSS, keep scrolling.)
[image]
111 replies · 41 reposts · 1.3K likes · 144.5K views
makevoid
makevoid@makevoid·
Even with BRC-20 and Ordinals around (which, let's be honest, aren't exactly elegant implementations, and most of the top BRC-20s are not very useful), we still haven't seen fees spike that much or a proper fee market develop on Bitcoin.
[image]
0 replies · 0 reposts · 0 likes · 122 views
makevoid
makevoid@makevoid·
On the OP_RETURN debate - honestly, I don't have a strong stance either way. I'm generally pro-innovation when it comes to Bitcoin, and this feels like it could be a good time to push things forward. That said, there's definitely something to be said for playing it safe - maybe wait a few years, and if we do increase it next year, just bump it to something reasonable like ~3x the current 80 bytes. I think we'll eventually need a bigger OP_RETURN for legitimate use cases anyway. What I'm really interested in is making life easier for protocols that want to build token systems right on Bitcoin L1 - those would benefit from being able to store more data per transaction.
1 reply · 0 reposts · 0 likes · 136 views
makevoid retweeted
Mathias Buus 🕳🥊
Mathias Buus 🕳🥊@mafintosh·
Onboarding a pear ton of users to @keet_io this weekend 😅 Just so you know. @keet_io doesn't, nor will it ever, have a token. It's P2P. If someone tells you otherwise they are trying to scam you. Enjoy free and private P2P comms.
6 replies · 18 reposts · 66 likes · 34.7K views
makevoid
makevoid@makevoid·
This means you have functions in the smart contract to get BTC address balances, transactions, UTXOs, and block headers. That's pretty cool if you ask me.
[image]
1 reply · 0 reposts · 0 likes · 49 views
makevoid
makevoid@makevoid·
I just discovered hemi.xyz and it looks like a very nice project. See thread for what I like about it.
1 reply · 0 reposts · 0 likes · 55 views
makevoid retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
1.4K replies · 3.6K reposts · 33.5K likes · 6.9M views
makevoid
makevoid@makevoid·
@PreslavMihaylov nice, are you using the Claude Code SDK? I am using the python SDK and it's pretty smooth!
1 reply · 0 reposts · 1 like · 30 views
Pres Mihaylov
Pres Mihaylov@PreslavMihaylov·
@makevoid I built a vibe coding tool for myself where I can build stuff by prompting in slack/discord
[image]
1 reply · 0 reposts · 0 likes · 25 views
makevoid
makevoid@makevoid·
What's your favorite tool for vibe coding / AI-assisted development? 🤖 I'm deep into orchestrating code with Claude for the productivity boost, but curious what's working for others. Cursor? Windsurf? Lovable? V0? Others? Drop your setup below, keen to try new tools! 👇
1 reply · 0 reposts · 0 likes · 89 views