OverHash
@OverHashDev
software developer (he/him)!
New Zealand · Joined August 2017
2.6K posts · 420 Following · 314 Followers
OverHash @OverHashDev
GPT 5.4 has a much better ability to delegate work to sub-agents compared to GPT 5.3. Even within OpenCode, which has not been tuned for subagents as much as Codex has (now), GPT 5.4 is able to delegate work even at the planning stage! This is something GPT 5.3 would never do.
[image]
OverHash @OverHashDev
Overall, I'm still exploring new methods to review code! I think the `jj edit` experience that `jj` promises is much better than Git's flow for amending old commits, so I hope that GitHub supports jj better in the future.
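For context on the comparison above: amending an old commit in plain Git typically means a fixup commit plus an autosquash rebase. A minimal, self-contained sketch of that Git flow (the repo and file names are made up for illustration; the jj commands in the trailing comments are from jj's documented model and are not run here):

```shell
# Git's flow for changing a commit that is no longer HEAD.
set -e
dir="$(mktemp -d)"
cd "$dir"
git init -q -b main
git config user.name test
git config user.email test@example.com

echo one > f.txt; git add f.txt; git commit -qm "add f"
echo two > g.txt; git add g.txt; git commit -qm "add g"

# Oops: f.txt needs a fix, but "add f" is no longer HEAD.
echo fixed > f.txt
git add f.txt
git commit -q --fixup=HEAD~1              # creates a "fixup! add f" commit
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash --root   # folds it into "add f"

git log --format=%s    # prints: add g, add f

# The rough jj equivalent is just:
#   jj edit <rev>     # make the old change the working copy
#   <fix the file>    # the snapshot amends it; descendants auto-rebase
```

`GIT_SEQUENCE_EDITOR=:` accepts the generated todo list unchanged, so the rebase runs non-interactively.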
OverHash @OverHashDev
Related, but I am excited for GitHub to ship an early preview of a native Stacked PR flow. Until then, I've been following davepacheco.net/blog/2025/stac… which has been a pretty good approach for helping coworkers review my code.
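The linked post is truncated above, so as a generic illustration only (not necessarily the approach that post describes): a "stack" is a chain of branches where each PR's base is the branch below it, so reviewers see one small diff per PR. A minimal sketch with hypothetical branch names:

```shell
# Stacked branches: part-2 builds on part-1, which builds on main.
set -e
dir="$(mktemp -d)"
cd "$dir"
git init -q -b main
git config user.name test
git config user.email test@example.com
git commit -q --allow-empty -m "init"

# Stack part 1: the refactor.
git switch -qc part-1
echo refactor > a.txt; git add a.txt; git commit -qm "part 1: refactor"

# Stack part 2: the feature, built on top of part 1.
git switch -qc part-2
echo feature > b.txt; git add b.txt; git commit -qm "part 2: feature"

# Each review sees a small diff: part-1 vs main, then part-2 vs part-1.
git log --format=%s main..part-1     # prints: part 1: refactor
git log --format=%s part-1..part-2   # prints: part 2: feature

# After review feedback lands in part-1, restack the branch above it:
#   git rebase part-1 part-2
# (On GitHub, each PR's base branch is set to the branch below it.)
```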
OverHash @OverHashDev
Trying out new code review tools lately! On my list is using `jj`, from ben.gesoff.uk/posts/reviewin…. I'm particularly interested in comparing it to a native Git approach of walking commit-by-commit and taking notes, then using an LLM to post the notes correctly on GitHub.
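The "native Git approach" mentioned above can be sketched as walking a branch's commits oldest-first and jotting a note per commit. The tiny repo below is hypothetical, only there to make the loop runnable:

```shell
# Commit-by-commit review: read each diff in order, collect notes.
set -e
dir="$(mktemp -d)"
cd "$dir"
git init -q -b main
git config user.name test
git config user.email test@example.com
echo a > a.txt; git add a.txt; git commit -qm "add parser"
echo b > b.txt; git add b.txt; git commit -qm "add tests"

notes=review-notes.txt
# rev-list --reverse yields oldest-first, the order the work was done in.
for c in $(git rev-list --reverse HEAD); do
  git show --stat --format="%h %s" "$c"                  # read the commit
  echo "$(git log -1 --format=%s "$c"): looks ok" >> "$notes"   # take a note
done

cat "$notes"
# add parser: looks ok
# add tests: looks ok
```

The notes file can then be turned into per-commit review comments by hand (or, as the tweet suggests, handed to an LLM to post).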
Dillon Mulroy @dillon_mulroy
codex 5.3 loves to use Reflect.get to avoid type issues...
OverHash @OverHashDev
@quant_arb the context management in Cursor is quite bad. Very wasteful of tokens. Price-wise, even if it were better, it couldn't compete with a coding subscription from a big provider, even with the 2.5x credit multiplier that Cursor gives you on their $20 and other plans.
Stat Arb @quant_arb
Might be time to switch to Claude code
[image]
Stat Arb @quant_arb
Ever since Claude code came out there has emerged a group of people into algo trading that didn’t have the IQ previously to produce a working trading algorithm but now post as if they’re RenTech. I heard one say he “spoke to a Quantum at Jump” and saw the secret sauce
Sasha @devdotsasha
I should not be waiting over a minute for GitHub to load a 2k PR. And no, it's not my connection.
Sasha @devdotsasha
I'm not sure why users, and especially software engineers, put up with slow tooling and sites. Performance should be an imperative given the rate of hardware improvement we are blessed with. My feeling is that, generally speaking, the divergence between software performance and hardware performance is increasing.
sleitnick @sleitnick
Working on a v2 of the Require Autocomplete studio plugin! This time, it works like what you'd find in other IDEs, where you just type the name of the module, and it'll do the rest. #RobloxDev
OverHash @OverHashDev
@CharlieMQV What's the file explorer program used in here? It looks a lot nicer than the default!
Charlie Malmqvist @CharlieMQV
Existing search tools on Windows suck. Even with an SSD, they're painfully slow. So I built a prototype of Nowgrep. It bypasses most of the slow Windows nonsense and just parses the raw NTFS. On an SSD, this ends up faster than ripgrep, even on a cached run (Nowgrep bypasses most software caching).

Demo: filtering 2 million and searching ~270K files under C:/ for the substring "Hello".

I have many ideas to make this an even smoother experience. Let me know if this is interesting, and I might pursue it further to make a shippable product with good UX.
OverHash @OverHashDev
@sleitnick I'm really not sure how you got that response! I tried myself and got a vastly superior output -- one that is actually correct! Anthropic's OCR is not the best in town, but it still surprises me you got that output. On any paid plan/service I'm sure the output would be better.
[image]
sleitnick @sleitnick
I asked Claude what the problem was. Its answer makes no sense, then it spits out identical code
[image]
sleitnick @sleitnick
"Why doesn't my autosave work?" lmao
[image]
OverHash @OverHashDev
@thdxr The Claude Code repository experimented with this for a while before they removed it. They still run their action for tagging issues. Looks like they have added it back now though! They do one pass to scan for dupes, and then close the issue if the author does not respond.
dax @thdxr
made a github action to run on any new issues to look for duplicates. it calls the gh cli to figure this out - and then if it finds any it leaves a comment. i can think of a million things like this to run on issues/PRs - pretty crazy
[image]
OverHash @OverHashDev
@SigmaTechRBLX Parallel Luau on the server is really hard to achieve effectively. On the client, it's often more trouble than it's worth
ΣTΞCH @SigmaTechRBLX
Everyone in the Roblox OSS community can only ever talk about ByteNet to micro-optimize their remotes and never parallel Luau to optimize their actual game 😭
OverHash @OverHashDev
@Hangsiin Groq seems to always run heavily distilled versions of models to get their speeds
NomoreID @Hangsiin
It seems my concerns were valid. This is the result of re-running the tests after changing the provider setting from the default (which automatically routed to Groq) to Fireworks. To emphasize again, the only thing I changed was explicitly fixing the provider in the code. All other prompts and settings remained exactly the same.

If you're using gpt-oss models at OpenRouter and noticing abnormally low performance, I suggest trying a different provider.

*As mentioned in previous posts, considering that the model sometimes gives up on answering, I believe the 120b model's score is slightly lower than o4-mini. Currently, if the model decides the problem is too difficult and refuses to answer, it's almost treated as a 0-point response. I'm still figuring out how best to handle this.
[image]
[image]

Quoting NomoreID @Hangsiin:
"I plan to rerun the tests using a different provider to examine the effects of points 2 and 3."
19
12
199
73K
OverHash @OverHashDev
A good release week for Luau! Both of these issues were opened by me on GitHub 🙂
[image]