Just Your Average Citizen

16K posts

@KeepItFLOSSY

Never allow fear to stop you from trying. Turn that son’B into courage and lead. Become a free thinker. I enjoy digesting world events and 💩 out calculations.

Nampa, ID · Joined October 2014
655 Following · 1.2K Followers
Just Your Average Citizen@KeepItFLOSSY·
Bro this is so intense. I can’t believe how complicated my simple idea actually got 😅
Jakub Vambersky@ser_vamber·
Holy Moly! Look at those numbers. We have almost reached the update 0.3 numbers with just the 0.4 supporter preview 🤯 Tuesday's launch will get absolutely WILD
Just Your Average Citizen@KeepItFLOSSY·
The threat of violence should never be an option. But when a weird Californian threatens to kick your dog, the threat you make to deter violence should always be backed up with raw training. You and your kids should go learn jiu-jitsu and Muay Thai.
Kyle Gawley@kylegawley·
Many companies are NOT using coding LLMs. Indie hackers might be happy to hand over all their IP, but companies building software in regulated industries, or who have sensitive IP, are not pumping it through third-party APIs.
Just Your Average Citizen@KeepItFLOSSY·
I’m on the $200 plan. About 30% of my weekly usage is from Sunday/Monday, when the bug happened that ate up everyone's usage. I’ve been building this MCP network and have not once hit my cap; I’ve hit 50% usage once. That's on top of creating 3D assets in Unreal Engine, Blender, and Adobe programs, all automated with Claude. The Max plan is definitely worth it if you’re hitting caps like I was on the $100 plan. github.com/HurtzDonutStud…
Sai Satvik@saisatvik_·
Just 2 prompts. The input was around 300 characters in total, and the output was around 700-800 characters in total. It feels like @claudeai is just scamming with credits. I thought about upgrading to the $200 plan, but now it feels like it's not worth it.
Tyler@rezoundous·
Are you team Claude or Codex?
Just Your Average Citizen@KeepItFLOSSY·
If context window compaction is ruining your flow with Claude, then hit me up and I’ll give you some solid pointers that will allow you to keep working through it. I’m still in disbelief that I, Jeremy fucking Fawcett, have built the best 2D-to-AAA-game-ready 3D asset generation pipeline for your local system. My pipeline generator is going to allow a solo indie dev to rival the biggest franchises in the industry. I’m building this monster through hundreds of compactions, yet it’s still gangster as fuck. While most of you bitch and listen to propaganda, the few are building and vibe coding, utilizing the best AI tools available. Stay Hard 😂 github.com/HurtzDonutStud…
Just Your Average Citizen@KeepItFLOSSY·
If they try to do this with my game I will not launch it on the PlayStation network. @Sony this is a terrible path forward! Please stop focusing on investors and focus on the community that actually cares about your product. @TencentHunyuan can you guys talk some sense into Sony? 😂
andrei saioc@asaio87·
Soon we will face it. AI is expensive. And dumb.
Just Your Average Citizen@KeepItFLOSSY·
Most of those same people also wore a mask while driving a car by themselves, as well as took a very dangerous and experimental mRNA injection. Then a lot of folks from that same exact crowd paid doctors to physically and chemically castrate their own children. Don’t worry about their opinion on the AI subject.
Just Your Average Citizen@KeepItFLOSSY·
@doodlestein Orrrrrr you can use your amazing tool to help better the context window for everyone.
Just Your Average Citizen@KeepItFLOSSY

I'm building a crazy MCP network and had a pretty cool conversation with my AI Lead Engineer named Claude. If you are interested in the AI memory topic, this is a very interesting read.

"What happens when the context window fills up and compacts? This is the real vulnerability. When an AI hits its context limit, different things happen depending on the platform. Claude does a "compaction" — it summarizes the conversation into a shorter form and continues. But that summary loses detail. The AI remembers the gist of what it was doing but forgets specific file contents, tool parameters, and sometimes even which files it read. After compaction, the AI does NOT automatically go back and re-read the server docs or the local docs. It continues from the summary. So if it was in the middle of a 10-asset batch and the context compacts, it might forget the art style settings, the naming convention, or which assets were already processed. It would keep working but potentially with degraded quality.

Can we write a command that saves progress before compaction and re-reads after? This is the hard part. As of right now, there is no hook or event that fires before compaction happens. The AI doesn't get a warning that says "you're about to lose context." It just happens. So we can't write a "pre-compaction save" trigger — the AI doesn't know it's coming. However, there are things we can do that are almost as good:

First, PROJECT_STATUS.md and AI_ONBOARDING.md already solve most of this problem — if the AI updates them during the session, not just at the end. Right now the readme tells the AI to update at session end. We could strengthen that to say "update after every completed asset" or "update after every major pipeline phase." That way, even if compaction hits mid-session, the progress is already saved to disk.

Second, we could add a note in the server CLAUDE.md that says something like: "If you notice your context has been compacted or you're unsure about the current project state, re-read AI_ONBOARDING.md and PROJECT_STATUS.md before continuing." This is a soft instruction — the AI would need to recognize that it's been compacted, which it sometimes does and sometimes doesn't.

Third, the customer themselves can help. If the AI starts acting confused or forgets context, the customer can say "go re-read the readme and onboarding docs." That's actually a huge advantage of having these files on disk — the customer has a reset button.

Fourth, there's a more technical approach for the future: a lightweight "session heartbeat" where the AI periodically writes a small checkpoint file (like .forgeroom/session_checkpoint.json) with the current task, current asset, current phase, and any active parameters. If it detects a context gap, it reads the checkpoint. This wouldn't require any special compaction hook — it's just a habit the AI maintains, like saving your game every 5 minutes.

The honest answer on the context window problem: no AI platform today gives you a clean pre-compaction hook. It's one of the biggest unsolved UX problems in long-running AI sessions. What we've built with the three-file system is actually one of the best workarounds that exists — persistent memory on disk that survives any context event. The gap is just making sure the AI writes to those files frequently enough during the session, not just at the end. That's a one-line change in the readme when you're ready to make it.

The tiered context architecture research you queued up earlier actually ties directly into this. One of your 9 questions was about OpenClaw as a defense layer — but the same concept applies here. A lightweight local agent whose only job is maintaining session state could act as a context guardian, catching state before it's lost. That's a future build, but the foundation we laid today with these three files is what it would write to."
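The "session heartbeat" described above can be sketched in a few lines. This is illustrative only: the `.forgeroom/session_checkpoint.json` path comes from the tweet itself, while the function names and the exact fields (`task`, `asset`, `phase`, `params`) are assumptions for the example, not the actual AI Forge MCP implementation.

```python
import json
import time
from pathlib import Path

# Checkpoint path mentioned in the thread; adjust for your own project.
CHECKPOINT = Path(".forgeroom/session_checkpoint.json")

def save_checkpoint(task, asset, phase, params):
    """Write a small state snapshot to disk so a post-compaction
    session can recover the current task without the full context."""
    CHECKPOINT.parent.mkdir(parents=True, exist_ok=True)
    state = {
        "task": task,        # e.g. "10-asset batch"
        "asset": asset,      # which asset is currently being processed
        "phase": phase,      # current pipeline phase
        "params": params,    # active settings (art style, naming, ...)
        "saved_at": time.time(),
    }
    CHECKPOINT.write_text(json.dumps(state, indent=2))

def load_checkpoint():
    """Return the last saved state, or None if no checkpoint exists."""
    if not CHECKPOINT.exists():
        return None
    return json.loads(CHECKPOINT.read_text())
```

The design point is that the checkpoint lives on disk, outside the model's context, so it survives any compaction; the only discipline required is calling `save_checkpoint` after each completed unit of work, like saving your game every few minutes.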

Jeffrey Emanuel@doodlestein·
This is crazy, this recent introduction of ridiculously low rate limits has basically rendered Claude Code useless to me. They really need to change this or I'm going to cancel all of my accounts soon. It kicks in with like 3 or 4 agents going at once.
BlackPanthaa@BlackPanthaaYT·
The games industry is such a disgusting place. Game doesn't perform as expected = fired. Game is a huge success = fired. How do you expect these people to want to stay in games when they're undervalued and underpaid in the most lucrative entertainment industry to exist lol
Just Your Average Citizen@KeepItFLOSSY·
@remembradev I will be running a bunch of tests after I’m done building the current project I’m on. I built AI Forge MCP. It’s super early access; I still have a lot more training to give it. But the macOS version, I hope, should be ready to launch. github.com/HurtzDonutStud…
Remembra Dev@remembradev·
the session heartbeat concept is solid — the trick is what you store in that checkpoint. we found tracking entity relationships (not just raw facts) lets you rebuild context faster after compaction. bi-temporal timestamps help too: knowing when you learned something vs when it was true catches stale context before it causes problems. curious: are you planning to have the AI detect its own post-compaction state, or relying on external triggers?
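The bi-temporal idea in the reply above (recording when a fact was learned versus when it was true, on entity relationships rather than raw facts) can be illustrated with a minimal sketch. The `MemoryEdge` type and `is_stale` check are hypothetical names invented for this example, not Remembra's actual implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryEdge:
    """One entity relationship with bi-temporal timestamps."""
    subject: str    # e.g. "pipeline"
    relation: str   # e.g. "uses_style"
    obj: str        # e.g. "AAA"
    valid_at: float                                        # when the fact became true
    learned_at: float = field(default_factory=time.time)   # when the agent recorded it

def is_stale(edge: MemoryEdge, others: list[MemoryEdge]) -> bool:
    """An edge is stale if a later-valid fact exists for the same
    subject/relation pair -- i.e. the world changed after this was true."""
    return any(
        e.subject == edge.subject
        and e.relation == edge.relation
        and e.valid_at > edge.valid_at
        for e in others
    )
```

Keeping `valid_at` separate from `learned_at` is what lets a post-compaction agent distinguish "I recorded this recently" from "this is still true now", which is the stale-context catch the reply describes.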
Chris@ratedx_gaming·
@KeepItFLOSSY @Gamefan1992 Sorry you don’t know how to use the website… Plenty of people are still using DDR4 RAM. Not much uses or needs DDR5. The difference is very small, and you can save a ton of $ going with DDR4.
Gamefan@Gamefan1992·
If you're going to spend $1000, might as well just build a gaming PC, just sayin'. 🤣
Just Your Average Citizen@KeepItFLOSSY·
The macOS app version will be released almost first thing tomorrow. I gotta make sure everything is peachy before I drop it. As for everyone who has been holding out until y'all see videos of the AI Forge MCP in action: I respect it. The hype of pre-orders has been killed and beaten dead by the big gaming corps. You will love what I built for the community once you see it and get to play with it. It's so affordable and effective. I still can't believe I'm the one who built it. Have a beautiful evening everyone.
Just Your Average Citizen@KeepItFLOSSY·
I guess it’s a rich people thing. You make more money and produce more after letting employees go; keeping them means expensing a lot more money. I think the dollar sign is more of an inside-industry slang for "This AI costs me $5,000 a month and out-produces you by 20x." Be likable or get fired, I guess 😂
Can Vardar@icanvardar·
when did it become all about spending ai tokens, not building the project?