Dariusz Kuśnierek

53.2K posts

@dkodr

New dad · Alternative coffee enthusiast (V60, AeroPress, Chemex) · Excel, SQL, Power BI by day · TV series, retro gaming, no-code/low-code after hours

Joined July 2009
151 Following · 1.5K Followers
Dariusz Kuśnierek retweeted
Lydia Hallie ✨@lydiahallie·
If your skill depends on dynamic content, you can embed !`command` in your SKILL.md to inject shell output directly into the prompt. Claude Code runs it when the skill is invoked and swaps the placeholder inline, so the model only sees the result!
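As a sketch, a project-level skill using such placeholders could look like this; the skill name, directory layout, and commands below are illustrative assumptions, not something from the post:

```shell
# Hypothetical example: create a project-level skill whose prompt pulls in
# live shell output via !`command` placeholders at invocation time.
mkdir -p .claude/skills/repo-status
cat > .claude/skills/repo-status/SKILL.md <<'EOF'
---
name: repo-status
description: Summarize the current state of the repository
---

Current branch: !`git branch --show-current`
Recent commits:
!`git log --oneline -5`
EOF
```

When the skill is invoked, each !`…` placeholder is replaced inline with the command's output before the model ever sees the prompt.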
Daniel San@dani_avila7·
By default, Claude Code worktrees are created in .claude/worktrees inside your project. You can change this location in Settings → Claude Code under the "Worktree Location" option. It's good practice to keep your worktrees outside your project directory. This section also lets you customize the branch prefix so you can easily identify who's creating each branch.
Daniel San tweet media
Daniel San@dani_avila7

Claude Code Desktop now lets you enable Worktrees automatically for every new session, so each session runs in its own isolated Git worktree by default. What does this mean?
- No branch switching back and forth
- Agents don't overwrite each other's work
- You can run multiple tasks in parallel, each isolated
In the video, I start an agent that writes a blog post, and Claude executes it inside its own worktree. After that, you can see the new directories being created under .claude/worktrees 👇

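What the desktop app automates here is plain `git worktree` mechanics; a rough sketch of the same layout by hand (the session names and `claude/` branch prefix are illustrative, not the exact desktop defaults):

```shell
set -e
# Throwaway repo for demonstration
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One isolated worktree per session: each gets its own checkout and its
# own prefixed branch, so parallel agents never touch the same files.
git worktree add -q .claude/worktrees/session-1 -b claude/session-1
git worktree add -q .claude/worktrees/session-2 -b claude/session-2

git worktree list   # main checkout plus the two session worktrees
```

Because each worktree is a separate checkout sharing one object store, deleting `.claude/worktrees/session-1` (plus `git worktree prune`) costs nothing and loses no commits.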
Dariusz Kuśnierek retweeted
Arvid Kahl@arvidkahl·
Devs are acting like they didn’t write slop code before AI.
Dariusz Kuśnierek retweeted
Boris Cherny@bcherny·
Personally, I've been using exclusively 1M context for the last few months and loving it. You can also customize your auto-compact threshold with the CLAUDE_CODE_AUTO_COMPACT_WINDOW env var. More here: code.claude.com/docs/en/model-…
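In practice that is just an environment variable set before launching; the value below is an illustrative token count, assuming the variable takes the context size at which auto-compact kicks in (check the linked docs for the exact semantics):

```shell
# Illustrative: raise the auto-compact threshold for subsequent sessions.
export CLAUDE_CODE_AUTO_COMPACT_WINDOW=200000
# claude   # sessions started from this shell inherit the setting
```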
Dariusz Kuśnierek retweeted
Thariq@trq212·
Today we're launching local scheduled tasks in Claude Code desktop. Create a schedule for tasks that you want to run regularly. They'll run as long as your computer is awake.
Dariusz Kuśnierek retweeted
Andy Nguyen@theflow0·
I ported Linux to the PS5 and turned it into a Steam Machine. Running GTA 5 Enhanced with Ray Tracing. 🤯
Thariq@trq212·
I want to make /init more useful. What do you think it should do to help set up Claude Code in a repo?
Dariusz Kuśnierek@dkodr·
@dani_avila7 Yeah, but: code.claude.com/docs/en/memory… (#how-claude-looks-up-memories)
Dariusz Kuśnierek retweeted
Josh Kale@JoshKale·
Everyone's saying OpenAI got the "same deal" Anthropic was banned for. Read the fine print. They're not the same:

On weapons: Anthropic asked for "no fully autonomous weapons without human oversight" = a human involved in the decision. OpenAI's deal says "human responsibility for the use of force" = someone accountable, which can happen after the fact. Oversight ≠ Responsibility. One requires a human before the trigger. The other requires a name on the paperwork after.

On surveillance: Dario said explicitly that current law hasn't caught up with AI. The government can already buy your movement data, browsing history, etc. without a warrant. AI can assemble that into a complete picture of your life, at scale. That's mass surveillance without breaking a single law. Anthropic wanted protections beyond current law. OpenAI's deal says the Pentagon "reflects them in law and policy." That's existing law as the safeguard, the exact law Anthropic said is insufficient.

Same words. Different agreements. Read them carefully.
Dariusz Kuśnierek retweeted
Big Brain AI@realBigBrainAI·
The National Bureau of Economic Research just surveyed 6,000 executives and the results are shocking. 90% of CEOs say AI had zero impact on productivity, yet corporate AI spending hit $250 billion in 2024. Economists say a 40-year-old paradox explains exactly why this is happening ↓

In 1987, Nobel laureate Robert Solow wrote a famous line: "You can see the computer age everywhere but in the productivity statistics." Back then, companies poured billions into mainframes and PCs. U.S. productivity growth actually slowed, dropping from 2.9% per year to just 1.1% despite massive IT investment. Sound familiar?

Apollo's chief economist Torsten Slok is now echoing Solow directly, saying AI "is everywhere in the macroeconomic narrative" but "you don't see it in the data." And just like the computer age before it, the gap between investment and results is widening. But here's what makes this so puzzling ↓

At the micro level, AI works. Controlled experiments show individual productivity jumps of 34–40%, especially for less experienced workers. Customer service reps, coders, and writers all show real gains in lab settings. Yet when you zoom out to the firm level, 80–95% of AI pilots never successfully scale. And the research reveals exactly why:
• Top performers see only marginal gains, sometimes even slight quality declines
• 80% of time saved through AI gets reallocated to other tasks rather than boosting output
• Scaling requires new data infrastructure, process redesign, and worker training that most firms simply haven't committed to
• Most AI use remains shallow: drafting emails, summarizing docs, small time savings that barely register in company-wide metrics

Instead of replacing workers, AI is quietly redistributing what they spend their time on. So is AI actually useless?

In the 1970s and 1980s, companies invested heavily in computers, but the productivity payoff only became visible in the 1990s, once businesses completely redesigned their processes around the technology. Some analysts believe AI is following the same pattern: early investment drags productivity down before reorganization eventually pushes it up. MIT economist Erik Brynjolfsson already points to early signs: U.S. productivity growth recently hit roughly 2.7%, which may signal firms are finally moving from experimentation to extraction.

The takeaway? AI hasn't failed. The organizations using it have simply treated it as a surface-level tool rather than a reason to fundamentally rethink how work gets done. That's why 90% of firms report zero impact. Individual workers are getting faster, but the companies around them haven't changed enough for those gains to actually show up in the results. The paradox won't solve itself. The leaders who close the gap first will be the ones brave enough to rebuild their entire organization around AI.

Thanks for reading! Enjoyed this post? Follow @realBigBrainAI for more content like this.
Dariusz Kuśnierek retweeted
khalid kaime@kaime·
Our experiment stopped producing useful data. Basically, participants increasingly didn't want to work w/out AI, even at $50/hr, so the sample drifted toward tasks where AI access doesn't matter. This biases our estimate downward. The pilot's 20% slowdown result is no longer valid and shouldn't be cited. The world has since changed and so has our understanding.

I think there are roughly two wrong takeaways:
1. "The experiment failed, therefore AI's effect is enormous and unmeasurable." Way too strong; experiments can fail for boring reasons too.
2. "The experiment failed, so we've learned nothing." I think that's also wrong; the way you fail can be quite informative, even if I'd be cautious about how much weight to put on it.

In Khalid's personal opinion, the effect is probably positive and we're probably underestimating it. METR is working on better ways to answer this... but please, "METR's experiment broke because people are wayyy too sped up by AI" is convenient but not what we're saying.
METR@METR_Evals

Since early 2025, we've been studying how AI tools impact productivity among developers. Previously, we found a 20% slowdown. That finding is now outdated. Speedups now seem likely, but changes in developer behavior make our new results unreliable. We’re working to address this.

Dariusz Kuśnierek@dkodr·
@uwteam Doesn't the **native** sandbox in CC use bubblewrap for exactly this? code.claude.com/docs/en/sandbo… (#os-level-enforcement)
Jakub Mrugalski 🔥@uwteam·
I use a piece of software called "Bubblewrap". It's something close to containerization, but without building images. A 'container' (namespace + tmpfs) is created with your data, and the app can't get out of it. On top of that, you can set which data is available in read-only mode.
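A minimal invocation in that spirit, assuming bubblewrap is installed (flags as in the bwrap man page; which paths you bind, and whether read-only, depends on the tool you're sandboxing):

```shell
# Guarded so the sketch is a no-op where bubblewrap isn't available.
if command -v bwrap >/dev/null 2>&1; then
  # Host filesystem read-only, throwaway tmpfs for /tmp, fresh namespaces:
  # writes outside /tmp fail, and nothing survives the sandbox's exit.
  bwrap \
    --ro-bind / / \
    --tmpfs /tmp \
    --dev /dev \
    --proc /proc \
    --unshare-all \
    --die-with-parent \
    sh -c 'touch /usr/pwned 2>/dev/null && echo escaped || echo "blocked: read-only rootfs"'
fi
```

Swapping `--ro-bind` for `--bind` on a single project directory is how you'd give the sandboxed app one writable area while everything else stays read-only.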
Jakub Mrugalski 🔥@uwteam·
On a daily basis I use Claude Code in so-called "YOLO mode" 😱 It's not exactly safe, buuut... you can work things out so that you don't hurt yourself. More info 🧵 ↓