vr8vr8

1.4K posts

@vr8vr8

Virtual · Joined June 2017
296 Following · 1.4K Followers
Pinned Tweet
vr8vr8
vr8vr8@vr8vr8·
I built a system where AI agents argue with each other before writing a single line of code. It's called v84 — a documentation format that forces AI agents through a 9-step pipeline: plan → 4 role agents assess in parallel → architect resolves contradictions → compare against existing specs → generate tasks → execute → publish.

Why? Because vibe coding breaks the moment your project gets real. Agents hallucinate dependencies, invent auth requirements nobody asked for, write migrations with reserved SQL keywords, and default to stale training-data patterns. Every rule in v84 exists because an agent failed without it. 8+ full pipeline runs. Each failure became a guard rail:

- Agents must flag new dependencies, not silently add them
- Pattern files need CORRECT and WRONG examples or agents ignore them
- Migrations are always generated, never hand-written
- Interactive CLI tools hang agents — manual setup patterns required

The format is 32 atomic files (~2k tokens each) so agents load only what they need, not a 50k-token dump.

Tested on Claude and Qwen. Claude wasn't perfect either, but at least it didn't decide to rewrite everything in pure JS. Qwen 3.6 was a different story — it did random things even when explicitly instructed otherwise, including creative rewrites nobody asked for. Which brings me to the next point.

What's next: building the boilerplate. This is essential. Once agents start juggling ESM and CommonJS in a monorepo, fixing one thing breaks another. Proper Docker dev setup, TypeORM config, deployment scripts — this stuff is genuinely tough for them. A human-polished starting point eliminates the most fragile part of the process so agents can focus on feature iteration, where they actually shine.

To give you a sense of cost: building something like a raffle app (admin picks a winner, API + UI + Docker dev environment) burns roughly 20% of a Max plan session. Keep that in mind when planning your projects.

Not perfect. But real.
Would love to hear suggestions and ideas from AI enthusiasts — and if you're willing, go try it and ship something. github.com/bilikaz/v84
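The guard rail above about pattern files ("CORRECT and WRONG examples or agents ignore them") suggests what such a file might look like. A hypothetical sketch, not taken from the actual v84 repo — the file name, rule, and snippets are illustrative:

```markdown
# pattern: entity-columns (hypothetical v84 pattern file)

Rule: never use reserved SQL keywords as column names.

CORRECT:

    @Column()
    sortOrder: number;

WRONG:

    @Column()
    order: number;   // "order" is a reserved SQL keyword; the generated
                     // migration will produce broken SQL on most databases
```

Pairing a WRONG example with each CORRECT one gives the agent a negative anchor, which is the failure mode the tweet says plain prose rules don't fix.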
vr8vr8
vr8vr8@vr8vr8·
@Surendar__05 VS Code has an HTTP plugin ;) Write everything you need in one file and you're ready to go.
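The extension being referenced is presumably REST Client (an assumption — the tweet only says "http plugin"), which executes requests straight from a plain .http file. A minimal sketch; the URL and payload are placeholders:

```http
### List users
GET https://api.example.com/users
Accept: application/json

### Create a user
POST https://api.example.com/users
Content-Type: application/json

{
  "name": "Ada"
}
```

Each `###` line separates a request, and the file lives in the repo, so the whole API "test suite" is version-controlled alongside the code.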
Surendar
Surendar@Surendar__05·
Genuine question: what's your go-to API testing tool in 2026?
vr8vr8
vr8vr8@vr8vr8·
@romxdev How many times in the past did you have to rewrite junior code? But once you learned to provide full context and flow, it started producing results. AI is the same…
Roman
Roman@romxdev·
how many times today did you rewrite a function because AI completely missed the context?
Inosuke
Inosuke@Inosukeei_coder·
As a developer, what’s your go-to package manager?
vr8vr8
vr8vr8@vr8vr8·
@theo Done, nice questions
vr8vr8
vr8vr8@vr8vr8·
@lucas_montano Be the one who manages AI. To me that sounds like a solution 🤭
montano
montano@lucas_montano·
software engineers are going to be micromanaged by AI by the end of this year and there’s nothing you can do about it
Kappaemme
Kappaemme@Kappaemme1926·
If AI removes the need to learn, what should we still learn?
vr8vr8
vr8vr8@vr8vr8·
@manojdotdev How about using Docker and Traefik to get normal URLs like api.localhost / sitea.localhost?
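A minimal sketch of that Docker + Traefik setup, assuming the standard Traefik v2 Docker provider; service names and ports are illustrative:

```yaml
services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      # Traefik watches Docker for labelled containers
      - /var/run/docker.sock:/var/run/docker.sock:ro

  api:
    build: ./api
    labels:
      # Route api.localhost to this container's port 3000
      - traefik.http.routers.api.rule=Host(`api.localhost`)
      - traefik.http.services.api.loadbalancer.server.port=3000
```

Most modern browsers resolve any *.localhost name to 127.0.0.1 automatically, so no /etc/hosts edits are needed.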
Manoj Kumar
Manoj Kumar@manojdotdev·
As a developer, which localhost do you choose?
vr8vr8
vr8vr8@vr8vr8·
@kstar04 @xoaanya They are not developers, they are framework configuration writers :) Or we could say they are promoting a specific framework and you are amplifying their idea. They have zero concept of how things work; they just know how to "prompt", i.e. write syntax.
KMasterWon
KMasterWon@kstar04·
@xoaanya I think developers have been tricked into thinking being able to manually write/type code was the skill, when in reality making educated design decisions is what being a developer is truly about.
Aanya
Aanya@xoaanya·
Can you really call yourself a developer if you can't code without AI assistance?
vr8vr8
vr8vr8@vr8vr8·
@xoaanya Can you call yourself a mechanic if you can't weld? The industry is probably evolving, and you don't need to do it the way you used to.
vr8vr8
vr8vr8@vr8vr8·
@dpratyush02 Same here. You will have mappings of data that save space, but using them will be complicated.
vr8vr8
vr8vr8@vr8vr8·
@dpratyush02 You can crush a car in a press and it will take less space, but will it drive?
Pratyush
Pratyush@dpratyush02·
Interviewer: An 8GB file becomes 2GB when zipped. No quality lost. No data removed. So the real question is: why was it 8GB in the first place?
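The riddle above is really about entropy: lossless compression can only remove redundancy, so a 4:1 ratio means the file carried mostly redundant bytes. A quick illustration with Python's zlib — the exact sizes vary by run, so only the rough magnitudes are claimed:

```python
import os
import zlib

# 1 MB of a repeating pattern (low entropy) vs 1 MB of random bytes (high entropy)
redundant = b"ABCD" * (1024 * 256)       # 1,048,576 bytes of pure redundancy
random_data = os.urandom(1024 * 1024)    # 1,048,576 incompressible bytes

small = len(zlib.compress(redundant))
large = len(zlib.compress(random_data))

print(f"redundant -> {small} bytes")     # a few KB at most
print(f"random    -> {large} bytes")     # roughly 1 MB; no real savings
```

The repetitive megabyte collapses to kilobytes, while the random one stays essentially full size: zipping reveals how much of a file was structure rather than information.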
vr8vr8
vr8vr8@vr8vr8·
@dev_maims Because most people leave those companies the moment they've learned enough, so the investment doesn't pay off. That's why they prefer that you invest in yourself and learn upfront.
Coder girl 👩‍💻
Coder girl 👩‍💻@dev_maims·
Why don’t companies invest in training anymore? Are we supposed to magically have five years of experience overnight?
vr8vr8
vr8vr8@vr8vr8·
@Franc0Fernand0 A good example is massive TDD where people test forms and other small things just to claim 90% coverage, when in reality you should test the whole integration, from the data sent to you to the data stored, and against a real database, not mocks, because a migration doesn't update your mocks but changes everything.
Fernando
Fernando@Franc0Fernand0·
After years of reading and writing code, I find that the dumbest code is the best code. It doesn't matter if it's C#, C++, or Python. Make your code simple. Don't use complex abstractions or difficult syntactic sugar, and you'll have a codebase that anyone can jump into and quickly add features without introducing bugs (or bugs that are less likely to happen). This matters more than anything else.
vr8vr8
vr8vr8@vr8vr8·
@Franc0Fernand0 @Trezker They can blame you for not knowing the new shiny pattern that adds a ton of work, gives a 2% theoretical benefit, and 10% more bugs :D
Fernando
Fernando@Franc0Fernand0·
@Trezker There is no need to write complex code to look smart because simple is not easy
vr8vr8
vr8vr8@vr8vr8·
@Franc0Fernand0 In the past, programmers were creating rules that made it all about programming, not producing. Now it's changing back.
vr8vr8
vr8vr8@vr8vr8·
@SergioRocks Totally agree. That's why I'm building v84: so that even a vibe coder ends up doing spec-driven development. The dev vibes, the AI writes specifications, then small tasks, then builds. x.com/vr8vr8/status/…
Sergio Pereira
Sergio Pereira@SergioRocks·
Vibe coding vs spec-driven development

Both use AI. Both can get you to a working product. But they behave very differently over time.

Vibe coding looks like this:
- You describe what you want
- You iterate with prompts
- You tweak until it works

It feels fast. It feels intuitive. And it works… at first.

Spec-driven development looks different:
- You define the workflow
- You write down the rules
- You map inputs and outputs
- You think through edge cases

Then you use AI to build it.

Both approaches can get you to a demo. Only one reliably gets you to production. Because the difference is not speed. It's determinism.

Vibe coding creates products that work sometimes. Spec-driven development creates products that behave correctly every time.

AI didn't change what makes software work. It just made it easier to skip the step that matters most. The specification.
vr8vr8
vr8vr8@vr8vr8·
@joefioti He is not alone. Qwen 3.6 asked me to stop in the middle of a running automated task it had built at first and had already been executing for like 30 min 🤣. Claude also dropped a similar bomb today, but I can't find it now…
Joe Fioti
Joe Fioti@joefioti·
anthropic gpus are crying
vr8vr8
vr8vr8@vr8vr8·
@Bossy_Cee Wanna help build that thing? x.com/vr8vr8/status/…
Bossy Cee
Bossy Cee@Bossy_Cee·
@vr8vr8 AI requires a solid starting point to avoid chaotic development pitfalls
vr8vr8
vr8vr8@vr8vr8·
I've been running hard these last few days trying to make a unified tool that builds from scratch, but AI is just too wild to let go from the start. It looks like it needs a decent starting point; otherwise, while building the initial back-end / front-end / infra, it hits a wall, and while trying to solve one thing, it messes everything else up. Building an initial template to test things out…
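That "decent starting point" presumably has to settle the ESM/CommonJS question the pinned post calls out as the thing agents break first. A minimal sketch of a dual-format package.json for one monorepo package — the name and paths are illustrative, not from any real repo:

```json
{
  "name": "@acme/shared",
  "type": "module",
  "main": "./dist/index.cjs",
  "module": "./dist/index.js",
  "exports": {
    ".": {
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    }
  }
}
```

With "type": "module", plain .js files are ESM and the .cjs build serves require() callers; the exports map routes each consumer to the right file, which is exactly the kind of invariant worth baking into a human-polished boilerplate rather than letting an agent rediscover it.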
vr8vr8
vr8vr8@vr8vr8·
@PulseChainLIVE The thing I was talking about, where you gave me the idea to run more cheap agents, each dedicated to one role, and then have them battle for better results 🤣 It's already pretty decent, as the tasks were generated by agents after several iterations. x.com/vr8vr8/status/…
⬣PulseChain LIVE⬣ 💥
⬣PulseChain LIVE⬣ 💥@PulseChainLIVE·
For anyone saying DGX Spark cannot cook: generating data sets for distilling using Qwen3.5-35B-A3B BF16 !!! (no quants) Real data, 0% cache hit, concurrency=192; pp=2048 tokens in; tq=1024 tokens out. That's 1.43M tokens generated every hour for the last 8 hours at 40 W/h. 😎