Marco Hefti

225 posts

@mheftii

Senior SWE (~16y) and contractor.

Joined October 2014
222 Following · 216 Followers
Tibo
Tibo@thsottiaux·
Does anyone have a breakdown of how much value you get from your various AI subscriptions across different providers, compared to API prices?
Marco Hefti
Marco Hefti@mheftii·
@GergelyOrosz As a dev that was around a long time before AI became a thing, it's still difficult to go back when you know you can work at 5x the speed with AI😅
Gergely Orosz
Gergely Orosz@GergelyOrosz·
Devs who can also code WITHOUT AI are looking to become 10x more valuable. They are the ones who won't panic or be idle when their Claude quota runs out… So much for all the advice on how learning to code is not worth it any more…
Thariq@trq212

To manage growing demand for Claude we're adjusting our 5 hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged. During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll move through your 5-hour session limits faster than before.

Marco Hefti
Marco Hefti@mheftii·
@thsottiaux It's difficult to experiment when I plan my week and work conservatively, because usage now depletes ~8x-10x faster than a month ago. Resets don't just reset the usage; they reset the whole schedule, so nothing changes if you plan to stay within limits for a given day.
Tibo
Tibo@thsottiaux·
Hello. We have reset Codex usage limits across all plans to let everyone experiment with the magnificent plugins we just launched, and because it had been a while! You can just build unlimited things with Codex. Have fun!
Marcos Damasceno 🏴
Marcos Damasceno 🏴@marcospaulosd·
@GergelyOrosz A CEO spending his time vibe coding with lovable to replace an excellent product like Notion, which costs pennies in the great scheme of things... to save ~5k a year? FIVE K a year? If I were one of that guy's investors, I would be worried.
Gergely Orosz
Gergely Orosz@GergelyOrosz·
Congrats on having fun and building a vibe coded WYSIWYG editor. Patiently waiting for when they'll realize that things like permissions, tagging, backups + disaster recovery, search, exports, tables, a mobile app, and integrations with Slack, Linear, and others need to be built… 🍿
[image attached]
Marco Hefti
Marco Hefti@mheftii·
@steipete @openclaw I don't get it, this has been a thing for… years in automation? browser-agent and the surfwright CLI offer it as well; I have a profile per project that contains saved cookies, sessions, etc. for an agent to use. It can also be started in headed mode to co-drive a browser.
Marco Hefti
Marco Hefti@mheftii·
@GergelyOrosz I honestly never understood some people's obsession with added LOC
Gergely Orosz
Gergely Orosz@GergelyOrosz·
When it comes to AI agents / AI tooling + coding, I hear an awful lot of talk about: efficiency, iteration speed, PR output rate, lines of code produced. I hear zero mentions of: quality, customer obsession. This will bite back, and it probably already is...
geoff
geoff@GeoffreyHuntley·
mastodon vs twitter
[two images attached]
Marco Hefti retweeted
Dima Mikielewicz
Dima Mikielewicz@dimamikielewicz·
OpenAI published a repo with the code to orchestrate AI agents built primarily with Elixir (96.1%): github.com/openai/symphony. While explaining why they chose Elixir, they say that:
- It is great for supervising long-running processes
- It has an active ecosystem of tools and libraries
- It supports hot code reloading without stopping actively running subagents, which is very useful during development.
Amazing news for the Elixir community; I hope even more people will appreciate how amazing Elixir is for agentic AI systems. #myelixirstatus
Marco Hefti
Marco Hefti@mheftii·
@j_mcgraph I started using diagrams during development to keep oversight of what's even going on in more complex processes. Helps a lot.
Josh McGrath
Josh McGrath@j_mcgraph·
codex has made swe so different that it's hard again. I can build systems so fast that I don't have time to fit the whole thing in my head. The value of a diagram before starting is higher than ever.
Marco Hefti
Marco Hefti@mheftii·
the 1 million context window for gpt-5.4 needs to be manually enabled for codex. In config.toml:

model = "gpt-5.4"
model_context_window = 1000000
model_auto_compact_token_limit = 950000

just be aware that everything above 272k will eat up 2x the usage
Marco Hefti
Marco Hefti@mheftii·
@burkov @KluepfelMark I see what you did here. It's like that meme of how a logical thought process would interpret a request
[image attached]
BURKOV
BURKOV@burkov·
@KluepfelMark My request was logically clear: monitor and then say that everything is fine. But the post isn't about this specific request; it addresses the impossibility of predicting how AI will interpret ANY request.
BURKOV
BURKOV@burkov·
Why AI will never do what you need, and therefore cannot be left alone: a simple example. So, I started a deployment to Cloud Run and said this to the chatbot: "Monitor the deployment and tell me that everything is fine." The chatbot monitored the deployment, saw a failure, and told me that there was a failure. The AI *assumed* that I needed an honest assessment of *whether* everything is fine, but my request, from the computer programming perspective, was clearly that I wanted it to tell me that everything is fine, regardless of what the monitoring revealed. AI will always assume what you want based on its training experience, which you don't control, and there's nothing you can do about it.
Marco Hefti
Marco Hefti@mheftii·
Subagents are legit a game changer in Codex. Codex does this thing where it compacts many things into one when documenting. For example, I evaluate a feature or a change and have it write a concept for that change. I then make a task out of the concept. Usually Codex will just skip 90% of what has been defined in the concept.

Now I can tell it to use subagents to make parity checks against the concept, to ensure the task covers everything in it. I usually do this a few times until I'm satisfied. I then have it spawn subagents to check the task against the actual codebase: does it adhere to the architecture, testing guidelines, and design guidelines, and how can it be implemented in the most maintainable, most secure way?

Every subagent has a specific job that it reports back to my main agent, and the main agent manages the task and adds things as the subagents go. I've achieved much better results with planning this way.
Marco Hefti
Marco Hefti@mheftii·
You can take this one step further. We all know multiple Codex agents doing work in the same git worktree don't mix. What you can do is have one agent working on versioned files while another does reviews and evaluations and collects information in a gitignored folder. This is how I build up future work while an agent is actively making changes.
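A minimal sketch of that layout (the folder name and paths are my own illustration, not from the post): the reviewing agent writes only into a scratch directory that git ignores, so it never collides with the agent committing versioned files.

```shell
# Hypothetical repo layout for the two-agent split described above.
repo=$(mktemp -d) && cd "$repo"
git init -q .
mkdir -p .agent-notes                 # reviewer/evaluator output lives here
echo ".agent-notes/" >> .gitignore    # keep the scratch folder out of version control
echo "review of module foo" > .agent-notes/review.md
git add -A
git status --porcelain                # shows only .gitignore, never .agent-notes/
```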
Marco Hefti
Marco Hefti@mheftii·
@WeedleChamp @JsonBasedman Codex disallows rm -rf (or at least did at some point). The agent noticed that and figured out it can delete with find -delete lol
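The loophole in action, for reference (a throwaway directory, my own example): GNU find's `-delete` action removes matched files without ever spawning `rm`, so a block rule that only matches `rm -rf` never fires.

```shell
# Deleting files via find instead of rm.
dir=$(mktemp -d)
touch "$dir/a.o" "$dir/b.o"
find "$dir" -type f -name '*.o' -delete   # no rm involved
ls "$dir"                                 # prints nothing: files are gone
```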
Jake – 🇺🇸/acc
Jake – 🇺🇸/acc@WeedleChamp·
You can set up a hook for that. Below is mine for Mac.

filename: block-rm-rf.sh

#!/bin/bash
# Block rm -rf commands - use trash instead for safe deletion

# Read the tool input from stdin
INPUT=$(cat)

# Check if the input contains rm -rf patterns (various flag combinations)
# Patterns: rm -rf, rm -fr, rm -r -f, rm -f -r, etc.
if echo "$INPUT" | grep -qE 'rm\s+(-[a-zA-Z]*r[a-zA-Z]*f[a-zA-Z]*|-[a-zA-Z]*f[a-zA-Z]*r[a-zA-Z]*)\s|rm\s+-r\s+-f|rm\s+-f\s+-r'; then
  echo "BLOCKED: 'rm -rf' is not allowed. Use 'trash' command instead for safe deletion."
  echo ""
  echo "Example: trash "
  echo ""
  echo "If you need to install trash: brew install trash"
  exit 2
fi

exit 0
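A quick way to sanity-check the hook's regex (the sample commands are my own): feed it a few candidates and see which ones it flags.

```shell
# Same extended regex as in the hook above, exercised against sample commands.
pattern='rm\s+(-[a-zA-Z]*r[a-zA-Z]*f[a-zA-Z]*|-[a-zA-Z]*f[a-zA-Z]*r[a-zA-Z]*)\s|rm\s+-r\s+-f|rm\s+-f\s+-r'
check() { echo "$1" | grep -qE "$pattern" && echo blocked || echo allowed; }
check 'rm -rf /tmp/x'     # blocked (combined flags)
check 'rm -fr build/'     # blocked (reversed flag order)
check 'rm -r -f cache'    # blocked (split flags)
check 'rm notes.txt'      # allowed (no recursive-force flags)
```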
Marco Hefti
Marco Hefti@mheftii·
@GergelyOrosz I have a feeling we are in the minority (which is why they do it in the first place)
Gergely Orosz
Gergely Orosz@GergelyOrosz·
I don’t have notifications on for many apps: only for the very few that never abuse them. Some marketing or growth hacking person at Revolut has now achieved getting me to turn off all Revolut notifications, after sending what I consider spam. Non-actionable, just plain junk.
[image attached]
Marco Hefti retweeted
Mario Zechner
Mario Zechner@badlogicgames·
hi, i'm the guy who wrote "what if we don't need mcp at all?" and here are my €0,02.
1. MCP, specifically via SSE transport, is great for normies, e.g. in Claude desktop/Cowork
2. MCP solves the problem of both auth and hiding auth from the model
3. MCP allows for statefulness
MCP is useful, but often less so in development, specifically if you don't want the model to handle filtering, transforms & composition in context. that's why pi doesn't have MCP support out of the box (i don't need it ever), but it's trivial to add. see @nicopreme's extension.
Chrys Bader@chrysb

unpopular (maybe?) opinion: MCP is dead in the water. @openclaw has shown me that api & cli will win.

every MCP server you connect loads its tool definitions into your context window. name, description, parameter schema, all of it. connect 10 servers with 5 tools each and you've burned 50 tool definitions worth of tokens before your conversation even starts. context bloat will never be a good thing - performance-wise or economically. i assume this is why @steipete left it out of @openclaw.

the "exec" tool paired with on-demand skills is all you need. it can run any command invented since the beginning of computers. a resurgence of glory for ancient, but powerful tools like curl, sed, awk, grep. command line tools once mastered by the greats, but long forgotten and buried underneath abstractions developed for us lesser mortals. now available to us all, piloted by the smartest models on earth. every founder gets their own mass army of greybeards.

the inertia required for MCP adoption, imo, is too great to overcome the momentum @openclaw has breathed into api + cli + skills. the common defenses people bring up:
• "MCP gives you typed schemas and validation" — so does a well-documented CLI
• "MCP gives you explicit permissions" — so does a sandbox with an allowlist
• "MCP is a standard" — a standard that scales poorly is still a standard that scales poorly

lastly, i've heard many MCP servers are just wrapping existing APIs - that kind of redundancy and unnecessary indirection should be a red flag. so, let's drop it and redirect our efforts into cli tools & apis with accompanying skills.

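The "50 tool definitions" point, as back-of-envelope arithmetic (the per-definition token count below is my own assumption; real schemas vary widely):

```shell
# Rough context overhead of eagerly-loaded MCP tool definitions.
servers=10
tools_per_server=5
tokens_per_definition=150   # assumed average: name + description + JSON schema
overhead=$(( servers * tools_per_server * tokens_per_definition ))
echo "$overhead tokens spent before the first user message"   # 7500 tokens
```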
Marco Hefti
Marco Hefti@mheftii·
@boshen_c I think it leads to more innovation, because prototyping and trying out new things has become cheap. Also, it's never only developers involved with a product, so nothing about shower thoughts changes, apart from them being realizable the very next day.
Boshen
Boshen@boshen_c·
AI coding is leading to a loss of innovation. Innovation requires long periods of refinement and those “shower thought” moments. Right now, it’s impossible to innovate with so many coding sessions open, where you are high as fuck with intracranial tinnitus that won’t stop.
Marco Hefti
Marco Hefti@mheftii·
@addyosmani holy converter, that link was supposed to just be AGENTS.md - The auto hyperlinking for .md files is awful
Addy Osmani
Addy Osmani@addyosmani·
Tip: Be careful with /init. A good mental model is to treat AGENTS(.md) as a living list of codebase smells you haven't fixed yet, rather than a permanent configuration.

Auto-generated AGENTS(.md) files hurt agent performance and inflate costs because they duplicate what agents can already discover. Human-written files help only when they contain non-discoverable information - tooling gotchas, non-obvious conventions, landmines. Every other line is noise.

Beyond what to put in it, there's a structural problem worth naming: a single AGENTS(.md) at the root of your repo isn't sufficient for any codebase of real complexity. What you actually need is a hierarchy of AGENTS(.md) files - placed at the relevant directory or module level - automatically maintained so that each agent gets context scoped precisely to the code it's working in, rather than a monolithic file that conflates concerns across the entire project.
Theo - t3.gg@theo

You should delete your CLAUDE․md/AGENTS․md file. I have a study to prove it.

Marco Hefti
Marco Hefti@mheftii·
I've been analyzing how agents interact with protocols (efficiency, number of turns, tokens/resourcefulness). With a CLI, it often takes them more turns because they try to get as much context as possible and be creative. With MCP, they get the contract up front and are fairly limited in what they can do. Both can be good and bad. I'm leaning towards CLI, but it would need to be written for an agent, not as a normal CLI. It's an interesting discussion regardless.
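One way to read "a CLI written for an agent": keep it a plain shell command, but give it MCP-like affordances, such as a `--schema` flag that prints the contract up front and strict JSON output. Everything below (the name, flags, and stub data) is my own illustration, not an existing tool.

```shell
# Hypothetical agent-first CLI: self-describing, machine-readable output.
issues() {
  case "$1" in
    --schema)
      # Contract up front, like an MCP tool definition, but fetched on demand.
      printf '{"command":"issues","flags":{"--status":"open|closed"},"output":"one JSON object per line"}\n' ;;
    --status)
      # Stub data; a real tool would query a tracker here.
      printf '{"id":1,"title":"example","status":"%s"}\n' "$2" ;;
    *)
      echo 'usage: issues --schema | issues --status <open|closed>' >&2; return 2 ;;
  esac
}
issues --schema
issues --status open
```

The agent gets the "contract up front" benefit of MCP from `--schema`, while the tool itself stays a composable command-line program.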
DisposableSoftware
DisposableSoftware@disposoft·
lazy lazy boy. unlike you, i'm actually willing to put in a modicum of effort and am reading them now. tho tbh now im wondering if its worth it, since i suspect the reason you "like" mcp is because you're simply too lazy to think for yourself cra.mr/context-manage… right off the bat: "What we’re going to focus on, within the MCP server, is simply the value of steering the agent and why CLI vs MCP doesn’t make sense as a conversation point." "I want you to focus on the intent — what we’re exposing as a description. Our goal is to steer the agent so that, when a Sentry URL is offered, it can fetch information about that resource." you're telling me you cant see how that could be done with any other pattern? why is mcp specifically good at that? answer an actual question, i dare you
David Cramer
David Cramer@zeeg·
1. what
Chrys Bader@chrysb

(quoted tweet: same "MCP is dead in the water" post as above)
