Celio @ccidral
1.1K posts

Big fan of YAGNI-oriented programming. 🫶🏻 Clojure 🫶🏻

Joined March 2009
106 Following · 73 Followers
Hunter Leath @jhleath
@ThePrimeagen at the time of the tweet, we were down to like 50% AI-generated code from a peak of 100% over the summer. Now we're *basically* back to 100%, but we're doing much smaller changes (we had a lot of 6–10 kLOC refactors in the fall).
7 replies · 0 reposts · 134 likes · 9.8K views
ThePrimeagen @ThePrimeagen
i still think about this tweet every now and then
[image]
67 replies · 80 reposts · 2.7K likes · 143.6K views
Celio @ccidral
@atmoio Why is it always a CEO, why
0 replies · 0 reposts · 1 like · 7 views
Mo @atmoio
AI is giving every CEO the same advice
317 replies · 665 reposts · 6.5K likes · 533.1K views
Celio @ccidral
@opencode So Go is not pay-as-you-go, and paying as you go doesn't feel zen. Shouldn't Go be pay-as-you-Go, and Zen give you some peace of mind? </scratching_head>
[image]
0 replies · 0 reposts · 5 likes · 368 views
OpenCode @opencode
Qwen3.6 Plus and 3.5 Plus now available in Go. Both strong. 3.5 is 3x cheaper. Both support images. Zero data retention. Update to the latest to try.
92 replies · 97 reposts · 2.3K likes · 256.4K views
Łukasz | Wookash Podcast @wookash_podcast
To people who are *good* at reviewing code (or claim to be hehe): how is that possible? To what extent can you properly review code with low familiarity with the codebase? E.g. new project, you jump in, Claude Code PR, 500 lines changed, review now. What's the strategy?
180 replies · 8 reposts · 368 likes · 95K views
Polymarket @Polymarket
JUST IN: Google DeepMind hires a philosopher as it prepares for machine consciousness.
867 replies · 1.3K reposts · 11.7K likes · 9.5M views
Celio @ccidral
@wookash_podcast The same way we did code reviews before AI, except now I can ask the AI to clarify specific parts of the code without spending time navigating it myself.
0 replies · 0 reposts · 0 likes · 20 views
ThePrimeagen @ThePrimeagen
I am slowly coming around to AI-assisted programming. I am genuinely trying to codify every rule about programming that I have and using that + several stages to build out small changes. Not sure of the productivity change, but I think I can see a modest gain in speed. I am also trying to be concerned about every line produced, not just slop-trebucheting code over the wall.
387 replies · 62 reposts · 3.5K likes · 453.6K views
Celio @ccidral
@justyx404 @mhdcode Using it for what? What's the context window size? What agent harness? Etc., etc.
0 replies · 0 reposts · 0 likes · 14 views
Yixiang Gao @justyx404
@mhdcode Been using Gemma 4 31B on my 5090, perfectly fine with Q4_K_M.
4 replies · 0 reposts · 6 likes · 1.4K views
MHD @mhdcode
Sorry, local AI folks, but I'm not dropping $4k on this high-end GPU just to run last year's models. Seriously, GPT-OSS-20B??
[image]
60 replies · 1 repost · 177 likes · 25.4K views
Peter Pistorius @appfactory
Pi harness + workers + virtual fs + codemode + glm 5.1
22 replies · 12 reposts · 329 likes · 19.2K views
Soumitra Shukla @soumitrashukla9
This is aging beautifully
[image]
40 replies · 73 reposts · 2.5K likes · 152K views
Celio @ccidral
@petergyang Because it's not a sustainable business at the moment.
0 replies · 0 reposts · 0 likes · 5 views
Peter Yang @petergyang
My entire feed and the Claude subreddit are full of people saying Opus got nerfed. Why would Anthropic nerf its own models?
339 replies · 11 reposts · 777 likes · 152.5K views
Lewis Menelaws @LewisMenelaws
@ForrestPKnight Generally:
32GB of VRAM (Nvidia) -> Qwen 3.5 27B dense model (decent speed and great quality)
128GB of RAM (MLX or Spark) -> Qwen 3 Coder Next (low activated parameters, so faster)
1 reply · 0 reposts · 1 like · 168 views
Forrest Knight @ForrestPKnight
I'm working on a video all about coding with local LLMs vs cloud LLMs. What's the best local AI model for coding right now? I have an absolute beast of a PC specifically for this, and while I have my own thoughts, I want to hear from y'all to ensure I'm doing the video justice. Any other tips or advice? Is there anyone I should reach out to?
15 replies · 2 reposts · 26 likes · 4K views
Celio @ccidral
@ForrestPKnight @mjtechguy I'm not an expert, but I'm interested in the topic. Looks like a dual 32GB-VRAM card setup + Qwen3-Coder-Next 80B or so may do the job. But I'm speculating, so I might be wrong.
0 replies · 0 reposts · 0 likes · 30 views
Forrest Knight @ForrestPKnight
@mjtechguy
VRAM/GPU: 32GB of GDDR6 with 640 GB/s bandwidth on an AMD Radeon AI PRO R9700, so ROCm.
RAM: 128GB DDR5 ECC (6400MHz) for model offloading when needed.
CPU: AMD Ryzen Threadripper 9970X.
2 replies · 0 reposts · 1 like · 468 views
Celio @ccidral
@loftwah I'm waiting for agentic coding with local LLMs to become viable, but I guess that's gonna take a while.
1 reply · 0 reposts · 1 like · 8 views
Loftwah @loftwah
I'm not paying more than $20 a month for an AI subscription. Either they figure out how to make it cheaper or I won't use it as much.
116 replies · 26 reposts · 710 likes · 41.3K views
La Gazzetta Ferrari @GazzettaFerrari
🚨 | Ferrari are keeping an eye on Max Verstappen's situation. Maranello is planning for the long term, especially since Lewis Hamilton may leave the team after next season, creating a significant vacancy to fill. 📰 @ErikvHaren
[image]
70 replies · 150 reposts · 2.3K likes · 101.1K views
Uncle Bob Martin @unclebobmartin
@pachilo At the moment. I've been investigating them one at a time: ChatGPT, Grok, Claude. Perhaps Openspec should be next.
11 replies · 0 reposts · 20 likes · 5.1K views
Uncle Bob Martin @unclebobmartin
Starting with three Claudes: one implementer, one planner, one reviewer. Using git worktrees instead of cloned repos.
80 replies · 39 reposts · 1K likes · 80K views
Celio @ccidral
@outsource_ @grok I asked what the context window size is on your hardware, but I guess you aren't running it on your local machine?
1 reply · 0 reposts · 1 like · 12 views
Eric ⚡️ Building... @outsource_
🚀 NEW GEMMA 4 31B TURBO DROPPED
Runs on a SINGLE RTX 5090:
⚡️ 18.5 GB VRAM only (68% smaller)
🧠 51 tok/s single decode
💻 1,244 tok/s batched
🤖 15,359 tok/s prefill ← yes, fifteen thousand
🚨 2.5× faster than base model with basically zero quality loss.
It hits Sonnet-4.5 level on hard classification tasks… at 1/600th the cost. Local models are shipping faster than we can test 👇🏻
🔥 HF: huggingface.co/LilaRest/gemma…
[image]
97 replies · 207 reposts · 2.6K likes · 197.4K views
Celio @ccidral
@Zeneca lol 400k chrome tabs, you're gonna need a bigger mac studio bro
0 replies · 0 reposts · 0 likes · 59 views
Zeneca🔮 @Zeneca
Alright, we got 512GB of RAM to play with. Time to experiment with some local models and 400,000 Chrome tabs.
[image]
170 replies · 18 reposts · 944 likes · 52.3K views