Dreams of Code @dreamsofcode_io
Last night was probably one of the worst experiences I’ve had with LLMs / agents generating code (GPT-5.5 btw). Every commit produced regressions, the agent was constantly taking shortcuts, and there were enough hallucinations to make one develop a new sense of reality. I came incredibly close to rage-quitting agents altogether. It’s times like those that I wonder if we’re all being stupid letting these things loose just for some multiples of perceived productivity.

kw @kdoesai
@dreamsofcode_io We are blindly trusting a technology that is almost entirely based on guessing (an oversimplification), meaning you spin the wheel of ‘will it do well or not’ every time you prompt. Maybe you lost. I would’ve gone to bed lol

elderlydoofus @elderlydoofus
@dreamsofcode_io Same for me yesterday! (Something with the upstream provider? Coincidence? Who knows?) What’s frustrating is putting processes in place to mitigate this (a good agents file, curated skills, decent-ish, well-organized/factored code, a good harness, etc.) and it not mattering.

Rohan @proxy_vector
@dreamsofcode_io This is why agentic coding still feels more like a spec-and-verification problem than a generation problem. If the loop rewards speed over constraint-following, every commit looks useful right up until the codebase turns hostile.

Paras @buildwithparas
@dreamsofcode_io CI can't tell the difference between a passing test and a deleted one
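
One cheap guard against that failure mode, as a sketch only: it assumes a pytest-style suite under tests/, treats `def test_` counts as a rough proxy for suite size, and the base ref here is an assumption to adjust.

```python
#!/usr/bin/env python3
"""Fail CI when a branch deletes tests instead of fixing them.

A minimal sketch, not a drop-in solution: assumes git is on PATH and
that the base branch is available locally as origin/main.
"""
import subprocess
import sys

BASE_REF = "origin/main"  # assumption: adjust to your default branch


def count_tests(ref: str) -> int:
    """Count `def test_` occurrences under tests/ at a given git ref."""
    out = subprocess.run(
        ["git", "grep", "-c", "def test_", ref, "--", "tests/"],
        capture_output=True,
        text=True,
    )
    # Each output line looks like "<ref>:tests/test_foo.py:<count>".
    return sum(int(line.rsplit(":", 1)[1]) for line in out.stdout.splitlines())


def main() -> None:
    base, head = count_tests(BASE_REF), count_tests("HEAD")
    print(f"{BASE_REF}: {base} tests, HEAD: {head} tests")
    if head < base:
        # A green build with fewer tests is exactly the failure mode above.
        sys.exit(f"Test count dropped by {base - head}; were tests deleted?")


if __name__ == "__main__":
    main()
```

Counting definitions is crude (parametrized and class-based tests skew it), but it turns "the agent quietly deleted the failing test" into a red build instead of a green one.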

Prosemite @ProbableSp49905
@dreamsofcode_io I just say, "Ok, it's time to stop." Then I close the session, shut down the computer, and go to bed. Next day, rewind my dev repo, start a new session. Everything is beautiful again.

WCNegentropy @WCNegentropy
@dreamsofcode_io It do be like that sometimes. Then there’s the gaslighting and the “Your fault, there’s no performance variability” people who come in afterwards to defend the big companies and their models. This is why I can’t wait for open source local models to overtake them. Almost there!

Crown 👑 @barackomaba
@dreamsofcode_io It was very weird last night. It's so strange how that happens. Now, in 2026, services no longer lag when they're busy; they just get really dumb and hope you don't notice.

JJ Eaton @jayleaton
People are so focused on shipping, but most of software is actually refactoring and cleanup. It’s always been easy to build brand new shit, but if you don’t give enough time to cleanup and maintenance then you are doomed to fail like this. There are literally skills you can run once a week, spending a few hours just doing cleanup work.

Igor @igorimx
It happens sometimes. Usually the fix has a few steps:
1. Check aistupidlevel. If the model is in fact regressing (~50ish), step away; it'll likely cause more problems than it will solve.
2. If the model isn't regressing, you could be on a faulty/overloaded node. Switch models (which should route you to a different node), run a few prompts, and switch back; hopefully you'll land on a better node.
3. If that fails, step away.

The Futures Pro @M2PressurePulse
@dreamsofcode_io Create a .md file with all the instructions for what you want the agent to do: rules, best practices, standards, how to respond, how to structure the code and why. Be specific. When you prompt, give it the path to that markdown file and tell it not to deviate from the instructions.
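
For illustration only, a skeleton of what such a file could look like; the filename and every section heading here are made-up assumptions, not any tool's standard:

```markdown
<!-- agent-rules.md: hypothetical example, adapt to your project -->
# Agent instructions (do not deviate)

## Rules
- Never delete, skip, or weaken a failing test; fix the code or stop and ask.
- One focused change per commit, with a message explaining why.

## Code standards
- Follow the existing module layout; do not add new top-level packages.
- Match the project's formatting and naming conventions.

## How to respond
- List the files you changed and the tests you ran, nothing else.
```

Then the prompt is just something like: "Follow ./agent-rules.md exactly and do not deviate from it."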