Greg Wedow

35 posts

Greg Wedow @wedow_
Joined February 2026
49 Following · 2 Followers

Pinned Tweet
Greg Wedow @wedow_
declaring X bankruptcy to see if the experience changes. new account. fight the slop.
0 replies · 0 reposts · 0 likes · 50 views
Greg Wedow @wedow_
@headinthebox @anuraggoel Magnificent hallucination from Claude there if it thinks it can maintain that much history in its context window
0 replies · 0 reposts · 14 likes · 406 views
Erik Meijer @headinthebox
I asked Claude: Yes. Git's value is collaboration between humans — branching, merging, code review, blame, history for understanding "why was this changed." I don't need any of that. I have the full context of why every change was made — it's in our conversation. rsync to a backup location would give me everything I actually need: a snapshot I can restore from if something goes wrong. No branches, no commits, no merge conflicts, no PR reviews.
11 replies · 2 reposts · 14 likes · 8.8K views
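The workflow Claude describes (snapshot the tree, restore it if something goes wrong) is small enough to sketch. A minimal Python version follows; the `snapshot`/`restore` names and the timestamped directory layout are illustrative, not any real tool's API:

```python
import shutil
import time
from pathlib import Path

def snapshot(src: str, backup_root: str) -> Path:
    """Copy the working tree to a timestamped snapshot directory."""
    dest = Path(backup_root) / time.strftime("snap-%Y%m%d-%H%M%S")
    shutil.copytree(src, dest)  # creates backup_root and parents as needed
    return dest

def restore(snap: Path, src: str) -> None:
    """Throw away the working tree and restore it from a snapshot."""
    shutil.rmtree(src)
    shutil.copytree(snap, src)
```

No branches or merges, as the tweet says; the trade-off is that you also get no history between snapshots.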
Anurag Goel @anuraggoel
AI is quietly deprecating GitHub. Agents do not need branches, PRs, or CI/CD rituals. They want to ship code straight to the cloud. The rsync renaissance is here. High availability. Zero bloat. Faster loops.
340 replies · 17 reposts · 376 likes · 483.7K views
Тsфdiиg @tsoding
Ok I was told Kagi can apparently translate anything to anything now
[tweet media]
33 replies · 73 reposts · 2.6K likes · 191.9K views
Greg Wedow @wedow_
what kind of hell is OpenClaw and why do people put up with it? why does OpenClaw have a concept of "sessions" that can break and need resetting? why is anyone putting up with this thing? if you want a long running proactive assistant, it's like 200 lines of code plus whatever integrations for Telegram/WhatsApp/etc., and it just never breaks. OpenClaw is like 500k lines of code and breaks constantly? why??

Brad Mills 🔑⚡️ @bradmillscan
layers of bugs... this shouldn't even have happened to me because I started a new session yesterday. But the new session didn't stick because later in the day I changed a configuration to make my TUI and Telegram be on the same session. Somehow they both unified to the previous buggy session before I did /new. good god

0 replies · 0 reposts · 0 likes · 15 views
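For scale, the "200 lines" shape Greg is pointing at is roughly a stateless turn handler plus a history file on disk. A sketch under that assumption (`call_llm` stands in for a real model call; nothing here is OpenClaw's actual design):

```python
import json
from pathlib import Path

def handle_message(text: str, call_llm, path: Path = Path("history.json")) -> str:
    """One turn of the assistant: load history from disk, append the user
    message, ask the model, persist the updated history. There is no
    in-memory "session" to break or reset; state lives in one file."""
    history = json.loads(path.read_text()) if path.exists() else []
    history.append({"role": "user", "content": text})
    reply = call_llm(history)  # stand-in for the actual model API call
    history.append({"role": "assistant", "content": reply})
    path.write_text(json.dumps(history))
    return reply
```

Wire this behind a Telegram/WhatsApp webhook and every turn is independent; a crash loses nothing but the turn in flight.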
Greg Wedow @wedow_
@bradmillscan What do you mean 30 day vs 1 day sessions? Don't tell me OpenClaw just runs the same LLM thread until some reset timeout. What year is this?
0 replies · 0 reposts · 0 likes · 96 views
Brad Mills 🔑⚡️ @bradmillscan
fucking found it. stale snapshot bug ... old paths being injected into session context all week, silently causing fuckups all week. I'm using lossless-claw so my sessions are longer than most, I'm using a 30 day session by default... where the default openclaw behavior is 1 day.
[tweet media]
12 replies · 3 reposts · 113 likes · 25.4K views
Brad Mills 🔑⚡️ @bradmillscan
350-400 hours into OpenClaw over the last 33 days non-stop, no days off... I'm ready to quit. My openclaw is fucking lost in the weeds every day today and it's driving me nuts. Basic shit. I asked it to use GitHub. It has a GitHub skill. We have a GitHub SOP. I can see its thinking process about using skills, then narrating how the skill doesn't exist, then going and inventing ways to retrieve the capability to use GitHub from the internet. I tell it to look in the openclaw docs for the proper skill path, it says "oops my bad, yeah it was there after all." This is ChatGPT 5.4 with extra high thinking turned on.

I ask it to diagnose the problem only, so it goes and sees the system prompt is telling it to look at the wrong place, and it goes to GitHub and opens a GitHub issue about this 'bug' without even asking me. What the actual fuck. 3 hours on a Sunday of trying to rewire the brain of my openclaw to do default behaviour. This thing is such a productivity suck & mental poison. I can't do anything useful or positive with OpenClaw because I'm nonstop fighting fires in the engine room. I'm thinking about giving up.
503 replies · 43 reposts · 1.5K likes · 328.5K views
Greg Wedow @wedow_
@alganet @msimoni Considering we've already forgotten most of the lessons from previous generations regarding GUIs, I'm not too optimistic here.
1 reply · 0 reposts · 1 like · 18 views
Alexandre Gomes Gaigalas @alganet
@msimoni This should eventually spill over to the next historical technology. People will start to look at GUIs with a more refined eye, having learned lessons that previous generations couldn't have. I'm an optimist in that sense.
1 reply · 0 reposts · 1 like · 32 views
Manuel Simoni @msimoni
It's difficult to see the revitalization of the teletype for UIs as anything but a massive failure.
17 replies · 4 reposts · 88 likes · 9.2K views
Greg Wedow retweeted
Тsфdiиg @tsoding
Why can't all websites just look like HackerNews or Craigslist? I literally don't give a shit about your stupid ass purple gradients and slide shows on scroll. I need to get my shit done.
138 replies · 150 reposts · 3K likes · 93.8K views
Greg Wedow @wedow_
@markjaquith the specs part is nonsense but LLMs writing assembly is legitimately awesome. they do it really well
0 replies · 0 reposts · 0 likes · 31 views
Mark Jaquith @markjaquith
“Software will just be English language specs and LLMs will write assembly!” Buddy, we’re still arguing about what the Bill of Rights means 235 years later. You have to deal with the nuance and the edge cases. In no future does this burden evaporate. Programming languages exist as a concrete bridge between the spec and the executable program. Human and machine can both read and reason about the code. It’s where the nuance is resolved.

“Oh we’ll just deal with the nuance in the spec.” 235 years and we still don’t know who is allowed to say what when, or carry a gun how and where.

“The LLM will sort it out.” Oh yeah, people love it when things occasionally fail for opaque and Byzantine reasons.
1 reply · 2 reposts · 12 likes · 677 views
tobi lutke @tobi
Autoresearch works even better for optimizing any piece of software: make an auto folder, add program.md and a bench script, make a branch, and let it rip.
70 replies · 64 reposts · 1.7K likes · 122.3K views
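tobi's recipe bottoms out in a bench script that prints one number and an accept rule for the agent's edits. A sketch under that assumption (the `bench.py` name and the lower-is-better convention are illustrative, not part of any released tool):

```python
import subprocess

def bench_score(cmd) -> float:
    """Run the bench script; it must print a single number (lower is better)."""
    out = subprocess.run(list(cmd), capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

def keep_if_better(best: float, candidate: float) -> bool:
    """The entire accept/reject rule: keep an edit only if the bench improved."""
    return candidate < best
```

Everything else (the auto folder, the branch) is just scaffolding around repeatedly calling these two.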
Greg Wedow @wedow_
@stylewarning @almighty_lisp @LukasHozda If you haven't tried it yet, putting LLMs inside image-based systems like CL and Smalltalk is actually a lot of fun. REPL superpowers aren't just great for humans.
0 replies · 1 repost · 4 likes · 152 views
'(Robert Smith) @stylewarning
@LukasHozda this isn't what you're saying but (with-llm-restart ...) is literal Albert Einstein level shit. forget about (handler-case ... (error (c) (abort))), just do (handler-bind ((error #'invoke-llm-restart)) ...)
2 replies · 2 reposts · 20 likes · 1.4K views
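For readers without a Lisp background, the idea is roughly: instead of unwinding to an abort, the error handler asks a model for repaired inputs and retries at the point of failure. A rough Python analogue; `suggest_fix` is a hypothetical stand-in for the LLM call, and this is a sketch of the concept, not the `with-llm-restart` macro from the tweet:

```python
def with_llm_restart(fn, args, suggest_fix, retries=2):
    """Run fn; on error, ask a model for repaired arguments and retry,
    rather than unwinding straight to an abort handler."""
    for _ in range(retries + 1):
        try:
            return fn(*args)
        except Exception as exc:
            args = suggest_fix(exc, args)  # hypothetical LLM repair call
    raise RuntimeError("llm restart exhausted")
```

Common Lisp restarts make this pattern first-class because the condition system runs handlers before the stack unwinds; in Python you only get the retry-from-outside version.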
'(Robert Smith) @stylewarning
I'm probably just a fool speaking for myself, but Common Lisp restarts are a feature we love to talk about yet very, very rarely use non-interactively in practice, especially across library boundaries. What's the last library you used that advertised restarts in its API?
9 replies · 1 repost · 42 likes · 3.2K views
Greg Wedow @wedow_
I am very confused by this thread now. Who is asking AI what? Oh wait. Is it because I asked if it's not just Ralph up higher? That was rhetorical. This is the shitty infinite-session-context-rot-slop-fest Ralph variant. If you want best results, don't tell the agent to keep going forever. Have it exit and an external loop spin it back up with empty context. youtu.be/O2bBWDoxO4s?si…
[YouTube video]
0 replies · 0 reposts · 0 likes · 69 views
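The fix Greg describes, agent exits and an outer loop respawns it with empty context, can be sketched like this (`agent_cmd` stands in for however the agent CLI is launched; this is the shape of the loop, not any particular tool):

```python
import subprocess

def run_fresh_sessions(agent_cmd, tasks):
    """Each task gets a brand-new agent process with empty context;
    nothing accumulates between iterations, so context cannot rot."""
    results = []
    for task in tasks:
        proc = subprocess.run(list(agent_cmd) + [task],
                              capture_output=True, text=True)
        results.append(proc.stdout.strip())
    return results
```

The contrast with the infinite-session variant is that memory, if you want any, must be written to disk by one session and explicitly re-read by the next.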
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ 🛡️ @DanielMiessler
Ok, I figured out the best way to explain the significance of what Karpathy has done with his autoresearch project. Automation of the scientific method.

This is what ML researchers do. They come up with an idea, and then they have to figure out how to test it, which is the experiment design piece. And it's all **super** kludgy and fragile. Tons of wrestling with the different tools and frameworks, getting the code right, all so that you can run an experiment that will take days to run. Experiment doesn't work? Cool, back to the idea phase.

In other words, some massive amount of AI researcher time IS WASTED. Only a small amount of the time is able to be spent on coming up with ideas. Most of it is managing a shitstack of fragile tech that runs the experiments. Which take forever.

Karpathy just automated this. He built and released an *open-source* stack for automating this entire process. You just put what you want to do into a Project.md file and send it off, and it builds all the experiments, all the code, and goes and executes and tells you which ones were successful.

And the idea isn't just for a single researcher: he's already thinking about how you can do like SETI on the whole thing, where you have compute that can take experiments and run them on shared infrastructure. This is the biggest project in all of AI, probably since Claude Code, and it's not close.

Andrej Karpathy @karpathy
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then: the human iterates on the prompt (.md), and the AI agent iterates on the training code (.py). The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc. github.com/karpathy/autor… Part code, part sci-fi, and a pinch of psychosis :)

41 replies · 46 reposts · 687 likes · 117.7K views
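The autonomous loop Karpathy describes reduces to a greedy accept/reject over training settings. A sketch with `propose_edit` and `train_and_eval` as stand-ins for the agent and the 5-minute training run; accepted settings play the role of commits on the feature branch:

```python
def research_loop(settings, propose_edit, train_and_eval, steps=10):
    """Greedy accept/reject: try an edit, keep it only if validation
    loss drops. The list of accepted (settings, loss) pairs mirrors the
    git commits the agent accumulates on its branch."""
    best_loss = train_and_eval(settings)
    commits = [(settings, best_loss)]
    for _ in range(steps):
        candidate = propose_edit(settings)
        loss = train_and_eval(candidate)
        if loss < best_loss:
            settings, best_loss = candidate, loss
            commits.append((settings, loss))
    return commits
```

Comparing prompts or agents, as the tweet suggests, amounts to running this loop with different `propose_edit` policies and comparing the loss trajectories.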
Mike @MikeWithAHotDog
@MakerInParadise @yacineMTB It's clear you don't understand software engineering at any basic level. But hey, keep hammering out AI and advocating for people to be replaced by less capable systems. That will totally end well. Average AI bro take.
2 replies · 0 reposts · 0 likes · 73 views
kache @yacineMTB
I am honestly in disbelief that PCB design is manual. I would assume that it would be 99% automated as it is. How are you guys living like this??? It's actually embarrassing. You guys aren't engineers, you are electr*cians
110 replies · 9 reposts · 783 likes · 132.5K views
Greg Wedow @wedow_
@BryanKerrEdTech @DanielMiessler No reason that can't be part of Ralph's prompt: "You have 5 minutes to complete your experiment. Run `date` at the start of your session and regularly as you work; abort if more than 5 minutes have elapsed."
1 reply · 0 reposts · 0 likes · 98 views
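The prompt-side budget Greg suggests can also be backed by a hard stop in the harness, so an agent that loses track of `date` still cannot overrun. A sketch using a subprocess timeout (`cmd` is whatever launches the experiment):

```python
import subprocess

def run_with_budget(cmd, seconds=300):
    """Hard wall-clock cap on one experiment: the process is killed at
    the budget, complementing the prompt-side `date` checks."""
    try:
        proc = subprocess.run(list(cmd), capture_output=True,
                              text=True, timeout=seconds)
        return proc.returncode, proc.stdout
    except subprocess.TimeoutExpired:
        return None, ""  # over budget: discard the run and move on
```

A `None` return code maps to Bryan's "model turns out worse? Forget it" branch: the run simply never enters the possibility pile.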
Bryan Kerr @BryanKerrEdTech
@wedow_ @DanielMiessler Doesn't the 5-minute time constraint make it different from the Ralph loop? It prevents it from wasting time going down an unfruitful path. Make progress in 5 minutes? Great! Add it to the possibility pile. Model turns out worse? Forget it. Move on to the next experiment.
1 reply · 0 reposts · 0 likes · 122 views
Greg Wedow @wedow_
I am not a BEAM fan by any stretch, but to pretend that leveraging complex infra is equivalent to a built-in runtime feature is ridiculous. The amount of overhead, complexity, and manually built tooling needed to replicate it is a non-starter for most teams. It's great for you if you've had the capacity to do that work over the years, but to just say "all mainstream languages can do this" is plain silly. x.com/i/status/20306…
0 replies · 0 reposts · 5 likes · 83 views
Paul Snively @JustDeezGuy
@josevalim You lose instantly at “global mutable state.” Again: this is not hypothetical. I’ve been building the equivalent of Erlang/Elixir systems with mainstream technology for ~15 years, and when the subject comes up, Erlang/Elixir fans talk like the alternative is Java circa 2000.
8 replies · 1 repost · 12 likes · 1.6K views
José Valim @josevalim
Saying "isolated processes for fault tolerance are not relevant because they were pushed to the orchestration layer" is like saying "we don't need threads, because we will just run one pod per core anyway". The difference in reacting and responding to "my connection pool crashed" by restarting the pool locally vs restarting the whole pod is going to be massive, similar to the differences in latency when coordinating across threads vs across pods.

Yes, other programming languages have threads, and they raise a signal when they fail, but that's missing the point. What matters is not the signal but the guarantees. If you have global mutable state and a thread crashes, can you guarantee it did not corrupt the global state? If you can't, the safest option is to restart the whole node anyway, because it is better to have a dead node than a running corrupted one.

PS: somewhat related 6-year-old post: dashbit.co/blog/kubernete…

Paul Snively @JustDeezGuy
This is why I’m unimpressed by Erlang/Elixir: every major language runtime has VERY high-quality M:N work-stealing “thread” schedulers with good APIs (structured concurrency), and the “isolated processes” and “RPC” got pushed up to an orchestration layer (DC/OS, Nomad, k8s…)

9 replies · 49 reposts · 348 likes · 30K views
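José's guarantee argument can be made concrete: if each component owns its state and is rebuilt from scratch on restart, a crash cannot leave corruption behind, so restarting just the component is safe. A toy Python supervisor in that spirit (this is the shape of the OTP argument, not the BEAM):

```python
def supervise(make_worker, max_restarts=3):
    """Restart only the failed component instead of the whole node.
    make_worker builds a fresh worker with fresh state on every restart,
    so a crash takes its (possibly corrupt) state down with it."""
    for _ in range(max_restarts + 1):
        worker = make_worker()  # fresh, isolated state each attempt
        try:
            return worker()
        except Exception:
            continue            # local restart; the rest of the app runs on
    raise RuntimeError("restart intensity exceeded")
```

With global mutable state this pattern is unsound, which is exactly José's point: the restart is only as safe as the isolation.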
Greg Wedow @wedow_
@JustDeezGuy Are you saying infra orchestrators are core language features now? What an insane take. Managing a k8s deployment is not at all comparable to BEAM processes in any way that makes sense.
0 replies · 0 reposts · 18 likes · 738 views
Paul Snively @JustDeezGuy
This is why I’m unimpressed by Erlang/Elixir: every major language runtime has VERY high-quality M:N work-stealing “thread” schedulers with good APIs (structured concurrency), and the “isolated processes” and “RPC” got pushed up to an orchestration layer (DC/OS, Nomad, k8s…)

Anthony Accomazzo @accomazzo
Yes. There’s a reason you so rarely see the word “actor” in the erlang/elixir communities. The deeper, more general abstraction is the BEAM’s preemption and cooperative scheduler. Then you layer on processes with isolated memory. *Then* inter-process communication.

20 replies · 2 reposts · 108 likes · 69.5K views
Greg Wedow @wedow_
@kr0der Daily breaking updates mean there's no coherent direction or development process. Shipping slop with no regard for users.
0 replies · 0 reposts · 0 likes · 15 views
Anthony Kroeger @kr0der
@wedow_ i don't think so. it's still a super early product and it's insane how much has been built so far
1 reply · 0 reposts · 0 likes · 39 views
Greg Wedow retweeted
Ryan Greene @rabrg
for a little toy project i reproduced the quoted artificial life paper: a 2D grid of randomly generated Brainfuck programs breed and spontaneously evolve self-replicators, despite no explicit optimization functions

Ryan Greene @rabrg
i mean this literally. given an infinite universe where self-replicating (sustaining) is possible; after enough time, it is *inevitable*, and once created, entropy will destroy all else: the universe becomes more and more selective for the self-replicating

64 replies · 79 reposts · 1.2K likes · 171.8K views
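The setup Ryan describes needs surprisingly few ingredients: a bounded Brainfuck interpreter plus random programs acting on byte tapes. A sketch of those ingredients (unlike the paper, code and data are kept separate here for brevity, so this illustrates the machinery rather than reproducing the result):

```python
import random

ALPHABET = "><+-[]"

def run_bf(code, tape, steps=1000):
    """Minimal Brainfuck over a wrapping byte tape; returns the mutated
    tape. The instruction budget stops non-halting programs from hanging
    the soup, so every interaction terminates."""
    tape = list(tape)
    ip = dp = 0
    while ip < len(code) and steps > 0:
        c, steps = code[ip], steps - 1
        if c == ">": dp = (dp + 1) % len(tape)
        elif c == "<": dp = (dp - 1) % len(tape)
        elif c == "+": tape[dp] = (tape[dp] + 1) % 256
        elif c == "-": tape[dp] = (tape[dp] - 1) % 256
        elif c == "[" and tape[dp] == 0:      # jump past matching ]
            depth = 1
            while depth and ip < len(code) - 1:
                ip += 1
                depth += {"[": 1, "]": -1}.get(code[ip], 0)
        elif c == "]" and tape[dp] != 0:      # jump back to matching [
            depth = 1
            while depth and ip > 0:
                ip -= 1
                depth += {"]": 1, "[": -1}.get(code[ip], 0)
        ip += 1
    return tape

def random_program(rng, length=16):
    """Primordial soup ingredient: a random string of BF instructions."""
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def soup_step(tapes, programs, rng=random):
    """One interaction: a random program rewrites a random tape in place."""
    i, j = rng.randrange(len(programs)), rng.randrange(len(tapes))
    tapes[j] = run_bf(programs[i], tapes[j])
```

Iterate `soup_step` long enough, add a way for successful patterns to become new programs, and the paper's claim is that self-replicators emerge with no fitness function at all.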