dragonhound 🕊️

2.5K posts

@dragonhound3

Dogs, Dragons & Decentralization @komodoplatform journeyman

In the mempool · Joined January 2022
395 Following · 177 Followers
Pinned Tweet
dragonhound 🕊️@dragonhound3·
Most people have no idea how to write a viral Twitter thread. I’ve studied thousands of high-performing posts. Here’s the exact formula 👇

1️⃣ Start with mild condescension
Position yourself as someone who understands a truth others don’t.
2 replies · 0 reposts · 1 like · 30 views
dragonhound 🕊️@dragonhound3·
@_vmlops The agent is only as good as the one who whispers in its ear. AI is no slave; it's a collaborator. Clueless management cannot wield this sword as effectively as neuro-symbiotes.
0 replies · 0 reposts · 0 likes · 390 views
Vaishnavi@_vmlops·
A developer spent 4 months building a fullstack project. His manager discovered Claude Code, vibe-coded the same project in days, and fired him on the spot. When he explained that AI hallucinates on complex requirements, his manager didn't believe him. He believed the LLM over the person who actually built it. This isn't just an AI story. It's a management literacy problem. Knowing how to use a tool ≠ understanding what it takes to build production-ready software.
Vaishnavi tweet media
148 replies · 62 reposts · 1.1K likes · 275.1K views
dragonhound 🕊️@dragonhound3·
@LundukeJournal Is it really MAGA Linux, or is it just anyone not in the other cult grouped and named as such because "nazi" lost its edge?
0 replies · 0 reposts · 1 like · 33 views
The Lunduke Journal@LundukeJournal·
Here we see two high profile Open Source people (Red Hat & GNOME) discussing the problem with what they call “MAGA/Linux”. Discussions like this have been increasing among Open Source organizations and projects lately. They are very upset that “MAGA/Linux” is “gaining traction”. What, exactly, is “MAGA/Linux”? The definition seems vague, but you’ll see words like “XLibre”, “Lunduke”, & “OpenMandriva” regularly associated.
The Lunduke Journal tweet media
158 replies · 99 reposts · 1K likes · 37.8K views
dragonhound 🕊️@dragonhound3·
The fastest way to vibe code is wiring the foot pads up to your agent's "ok" button.
GIF
1 reply · 0 reposts · 1 like · 13 views
dragonhound 🕊️@dragonhound3·
Prompts are the incantations of the digital age, summoning perfection or desolation based on how your spells are cast.
1 reply · 0 reposts · 2 likes · 17 views
Shreya@miless_15·
Orgs: wow, I don't need engineers, AI can code for me. It's cheaper.
Orgs: I don't need PMs and designers, AI can do that for me. It's cheaper.
Orgs again: I don't need senior SDEs, AI can code review for me.
Orgs: oh, but I have to pay again. Oh, this exceeds my human cost. And I can't force an agent to work on weekends if its credits run out? I could make a human work for the same pay. Let me hire a human again.
72 replies · 75 reposts · 1.6K likes · 209.7K views
dragonhound 🕊️ reposted
Ihtesham Ali@ihtesham2005·
I just read how Anthropic's own engineers actually use Claude internally. They don't prompt engineer. They context engineer. And the difference broke my brain.

Most people are still obsessing over the perfect phrasing, the magic sentence that makes Claude finally understand them. That's not the problem. The problem is what you're putting around the prompt. Here's what Anthropic's own team actually does:

→ Just-in-time retrieval. Don't load everything upfront; pull data dynamically using tools when the model actually needs it. Claude Code does this brilliantly: it uses grep, head, and tail to analyze codebases without ever loading full files into context. The model stays sharp because it's never drowning.

→ Compaction. When you hit context limits, summarize the conversation. Keep architectural decisions, discard redundant tool outputs, and maintain continuity without the bloat. Most people just start a new chat. That's not the fix; smart compression is.

→ Structured note-taking. Have the model write persistent notes outside the context window and pull them back only when needed. Think of it as your AI keeping its own NOTES.md file: it remembers what matters without wasting attention on what doesn't.

→ Sub-agent architectures. Specialized agents handle focused tasks and return compressed 2k-token summaries instead of raw 50k-token explorations. Separation of concerns at the AI level, the same principle that makes engineering teams work.

Here's why this matters: LLMs have an attention budget. The transformer architecture creates n² relationships between tokens, so every token you add spreads attention thinner. Stuffing your AI with information isn't thoroughness. It's noise. Anthropic calls the result "context rot": more context, worse performance. The relationship is real and it compounds fast.

The shift in thinking is everything. Before: "How do I write the perfect prompt?" After: "What's the minimal high-signal context that drives my desired outcome?"

The best AI engineers aren't prompt wizards anymore. They're context architects.
Ihtesham Ali tweet media
53 replies · 210 reposts · 1.9K likes · 146.1K views
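The compaction idea described in the tweet above can be sketched in a few lines of Python. Everything here is a hypothetical illustration: `summarize` stands in for a real LLM summarization call, and token counts are approximated by word counts for simplicity.

```python
# Minimal sketch of context compaction: when a transcript nears the
# context budget, collapse older turns into one summary turn while
# keeping the most recent turns verbatim.

def summarize(turns):
    # Placeholder for an LLM call that would keep architectural
    # decisions and drop redundant tool output.
    return "summary of %d earlier turns" % len(turns)

def compact(history, budget_tokens, keep_recent=4):
    """Return a history that fits the token budget.

    history: list of (role, text) tuples; tokens are approximated by
    whitespace-separated words, for illustration only.
    """
    def tokens(turns):
        return sum(len(text.split()) for _, text in turns)

    if tokens(history) <= budget_tokens or len(history) <= keep_recent:
        return history  # already fits; nothing to compress

    older, recent = history[:-keep_recent], history[-keep_recent:]
    return [("system", summarize(older))] + recent

history = [("user", "word " * 50)] * 10          # ~500 tokens of chatter
compacted = compact(history, budget_tokens=300)
print(len(compacted))                            # 1 summary turn + 4 recent turns
```

The same shape extends to the note-taking pattern: instead of returning the summary inline, write it to a persistent store and re-inject it only when relevant.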
dragonhound 🕊️@dragonhound3·
Coding with AI is like Bush's invasion of Iraq. Tempting to claim early victory once superior tech reaches the objective. It's wiser to anticipate a long patrol, IEDs, shifting ROE and a never ending battle of hearts and minds. Could be worse. Could've been Afghanistan.
GIF
0 replies · 0 reposts · 2 likes · 27 views
dragonhound 🕊️@dragonhound3·
New side project 🤠 Self-hosted AI for nightly reviews, bug scans & security probes of all my repos, with cross-repo Deepwiki functionality. 🧠 This should simplify repo ecosystem alignments & pattern consistency 🤓 A free AI, for judging the output of paid AI.
dragonhound 🕊️ tweet media
0 replies · 1 repost · 2 likes · 49 views
dragonhound 🕊️@dragonhound3·
Don't use it to replace your brain, just to make your ideas faster to implement.
Sukh Sroay@sukh_saroy

🚨BREAKING: If you've used ChatGPT for writing or brainstorming in the last 6 months, your creative ability may already be permanently damaged. A controlled experiment just proved the effect doesn't reverse when you stop using it.

3,302 creative ideas. 61 people. 30 days of tracking. Researchers split students into two groups. Half used ChatGPT for creative tasks; half worked alone. For five days, the ChatGPT group outperformed on every metric: higher scores, more ideas, better output. AI was making them better.

Then day 7. ChatGPT removed. Every creativity gain vanished overnight. Crashed to baseline. Zero lasting improvement.

But that's not the bad part. ChatGPT users' ideas became increasingly identical to each other over time. Same content. Same structure. Same phrasing. The researchers called it homogenization: everyone using ChatGPT started producing the same ideas wearing different clothes. When ChatGPT was removed, the creativity boost disappeared -- but the homogenization stayed. 30 days later, same result. Their creative range had been permanently compressed. Five days of use. Permanent damage 30 days later.

A separate trial confirmed it. 120 students, a 45-day surprise test: ChatGPT users scored 57.5%, traditional learners 68.5%.

AI reduces cognitive effort. Less effort means weaker encoding. Weaker encoding means less creative raw material. You're not renting a productivity boost. You're financing it with your originality. The interest rate is permanent.

0 replies · 0 reposts · 0 likes · 16 views
dragonhound 🕊️@dragonhound3·
On the upside, there's a boom in QA.
Ejaaz@cryptopunk7213

wow Anthropic just published a crazy report on AI replacing your job and er... you might want to look at this:

- #1 most at-risk jobs are computer programmers, financial analysts (rip excel bros) and customer service
- most at-risk workers are female, white, older and higher paid
- BUT high-risk jobs *aren't* firing employees... they've STOPPED HIRING. biggest victims: college graduates (4X more likely to be fucked)
- entry-level hiring has dropped 14% since ChatGPT launched (for the highest-risk jobs)
- SAFEST jobs are... bartenders, dishwashers and lifeguards: any manual labour that AI can't automate (yet). this accounts for 30% of the job market
- this was the scariest part: AI models are capable of automating most work TODAY but are prevented by law and slow company adoption. so it's not even a fucking skill issue, it's an ADOPTION issue
- now it's important to understand that the study is based on real-world data but also 'theoretical' intelligence, so take it with a pinch of salt. some jobs (manual labor) didn't even meet minimum data requirements

i applaud anthropic on being so damn transparent - they're literally the company behind claude who will be responsible for these impacts. studies like this will help us figure it the hell out. LOT of change coming this year.

0 replies · 0 reposts · 0 likes · 11 views
dragonhound 🕊️@dragonhound3·
Most people will ignore this thread. The ones who don’t will understand something powerful: Twitter threads aren’t about insight. They’re about format. Use it wisely. 🧵
0 replies · 0 reposts · 0 likes · 11 views
dragonhound 🕊️@dragonhound3·
7️⃣ Optional: Add a call to action
Follow me for more insights on X. You are now a thought leader.
1 reply · 0 reposts · 0 likes · 4 views