Ranger

42 posts


@RangerNetHQ

Stop worrying about testing; automate it with Ranger.

San Francisco · Joined October 2024
47 Following · 160 Followers
Ranger@RangerNetHQ·
Ranger Feature Review works with Claude! Whenever you create a new feature with Claude, Feature Review automatically QAs, tests, and verifies it so you can get to production faster.
Ranger@RangerNetHQ·
don't be him and use Ranger 🙄
Ranger@RangerNetHQ·
What it feels like to smash bugs while your 5x agents are committing changes to prod
Ranger@RangerNetHQ·
You can spend $250,000 worth of tokens and still ship to production with Ranger. More tokens don't add value unless the code is verified and shipped to production.
Ranger@RangerNetHQ·
Ranger is not a code review tool. Ranger is not a code review tool. Ranger is not a code review tool. Ranger is not a code review tool. Ranger is not a code review tool. Ranger is not a code review tool. Ranger goes a lot further.
Bobby Thakkar@BobbyThakkar·
MARCH 28th can't come soon enough
Ranger@RangerNetHQ·
DMs now open
Ranger@RangerNetHQ·
@alex_prompter You can't expect to scale your codebase without scaling your testing and feedback loop. We're trying to solve exactly this at Ranger, would love for anyone to try our new CLI tool.
Alex Prompter@alex_prompter·
🚨BREAKING: Alibaba tested AI coding agents on 100 real codebases, spanning 233 days each. the agents failed spectacularly.

turns out passing tests once is easy. maintaining code for 8 months without breaking everything is where AI collapses.

SWE-CI is the first benchmark that measures long-term code maintenance instead of one-shot bug fixes. each task tracks 71 consecutive commits of real evolution.

75% of AI models break previously working code during maintenance. only Claude Opus 4 stays above 50% zero-regression rate. every other model accumulates technical debt that compounds over iterations.

here's the brutal part:
- HumanEval and SWE-bench measure "does it work right now"
- SWE-CI measures "does it still work after 6 months of changes"

agents optimized for snapshot testing write brittle code that passes tests today but becomes unmaintainable tomorrow.

Alibaba built EvoScore to weight later iterations heavier than early ones. agents that sacrifice code quality for quick wins get punished when consequences compound.

the AI coding narrative just got more honest: most models can write code. almost none can maintain it.
errnsterr@errnsterr·
ppl hate writing playwright by hand - playwright mcp is different. plugs into claude code, you describe the flow in english, claude walks your actual app. no scripts to maintain. momentic & spur are more for teams with PMs writing tests & regression suites at scale. what are you building? solo or team?
camille@camillenvargas·
what are people using for automated qa on their vibe coded projects?
Ranger@RangerNetHQ·
Claude doesn't get to have all the fun using Ranger to verify its work: Ranger Feature Review is available as an @opencode plugin! More coming soon about our custom integration into the OpenCode Web UI. 👀
Ranger retweeted
Josh Ip@joship__·
If you wish you had a background agent, turns out it's not that hard to make your own. The @RangerNetHQ engineering team just posted a spec on how to do so!