Peter Plucinski

280 posts

@PeterPlucinski

Hi. I'm a 👨‍💻 technologist & consultant 🧙‍♂️ with a contrarian viewpoint on certain mainstream tech ideas. I'm also a jack of many trades and a master of some.

Adelaide, South Australia · Joined December 2011
466 Following · 104 Followers
Matt Dancho (Business Science)
🚨 BREAKING: IBM launches a free Python library that converts ANY document to data. Introducing Docling. Here's what you need to know: 🧵
Phuong Le @func25
Go is simple, so I ended up writing an 865-page book about how it works internally, just to see how it maintains that simplicity 😇
Peter Plucinski @PeterPlucinski
@_vmlops It's a great library, but I slightly prefer Docling over this, ideally Docling running on a GPU.
Vaishnavi @_vmlops
MICROSOFT BUILT A TOOL THAT CONVERTS LITERALLY ANYTHING INTO CLEAN MARKDOWN FOR YOUR LLM

PDFs. Word docs. Excel. PowerPoint. Audio. YouTube URLs.

One pip install and your AI pipeline stops choking on raw files forever. No custom parsers. No broken layouts. No garbled text. Just clean, structured markdown your LLM can actually read. github.com/microsoft/mark…
Doug Aillm @DougAillm
@_vmlops Been running this in agent pipelines for months. Great for structured docs - financials, spec sheets, Word. Weak spots: scanned PDFs (no OCR) and complex merged tables. For those, add a vision model pass. Best open-source option for the 80% case.
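The thread above mentions two real libraries (IBM's Docling and Microsoft's MarkItDown) plus Doug's suggestion of a vision/OCR fallback for scanned PDFs. The pipeline shape they share can be sketched as a small stdlib-only program: dispatch on file type, convert to markdown, and fall back to a second pass when the primary converter returns nothing useful. This is a hedged illustration of the pattern, not either library's actual API; every function name below is a placeholder.

```python
from pathlib import Path

# Placeholder converters: a real pipeline would call a library such as
# Docling or MarkItDown here instead of returning canned strings.
def convert_docx(path: Path) -> str:
    return f"# {path.stem}\n\n(parsed Word document)"

def convert_pdf(path: Path) -> str:
    # A scanned PDF with no text layer yields an empty string,
    # modeling the "weak spot" called out in the thread.
    return ""

def ocr_fallback(path: Path) -> str:
    # Stand-in for an OCR or vision-model pass over page images.
    return f"# {path.stem}\n\n(OCR'd text)"

CONVERTERS = {".docx": convert_docx, ".pdf": convert_pdf}

def to_markdown(filename: str) -> str:
    """Convert a file to markdown, falling back to OCR when the
    primary converter produces no text (e.g. scanned PDFs)."""
    path = Path(filename)
    convert = CONVERTERS.get(path.suffix.lower())
    if convert is None:
        raise ValueError(f"unsupported format: {path.suffix}")
    text = convert(path)
    if not text.strip():  # empty result: likely image-only input
        text = ocr_fallback(path)
    return text
```

The fallback branch is Doug's 80% point in code: the cheap text-layer path handles most documents, and only image-only inputs pay for the expensive vision pass.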
Peter Plucinski retweeted
Santiago @svpino
I still remember when people thought "prompt engineering" was going to become a real career.
Neo Kim @systemdesignone
What's a software engineering book you wish you could read again for the very first time?
Peter Plucinski retweeted
Dr Milan Milanović @milan_milanovic
Most of the people who think that AI will replace developers are:
- Managers who don't code
- Investors and startup founders selling it
- People outside tech
Developers: "It's helpful."
Peter Plucinski retweeted
Sahil @sahill_og
Linus Torvalds created Linux at 21 without Claude or any other AI.
- He didn't have a co-founder.
- No VC funding. No office.
- No team.
- Just a personal project he posted to a mailing list: "I'm doing a free OS."
33 years later, it runs 97% of the world's servers, all smartphones, and the International Space Station. The most important software in history started as someone's side project. Absolute legend.
Peter Plucinski retweeted
Justin Skycak @justinskycak
Pure vibe coding is a Ponzi scheme. Eventually, the technical debt comes due, and if you don't understand the foundations, you can't pay it off.
Peter Plucinski retweeted
Elon Musk @elonmusk
People giving OpenClaw root access to their entire life
Peter Plucinski retweeted
Aakash Gupta @aakashgupta
The real story is about what happens when you mandate agentic AI adoption before your guardrails exist.

Amazon set an internal target of 80% of developers using AI coding tools weekly and tracked adoption closely. Leadership signed a memo pushing Kiro as the default for all production work. Engineers who wanted Claude Code instead needed VP-level approval. 1,500 employees petitioned against the policy. The company ignored them.

Then Kiro got operator-level permissions with no mandatory peer review. An engineer let it resolve a production issue autonomously. The AI decided the best fix was to delete and recreate the entire environment. 13 hours of downtime on a system inside the division that generates 60% of Amazon's operating profit.

This was the second AI-caused production outage in months. Amazon Q Developer caused another one. Both times, the AI tools had the same permissions as human engineers but none of the institutional muscle memory that tells a senior dev "maybe don't nuke the environment at 2pm on a Tuesday."

Amazon's response tells you everything: "user error, not AI error." They only added mandatory peer review and safety training after both incidents. The safeguards everyone assumed existed didn't.

And Amazon isn't alone. Google's Antigravity wiped a developer's entire hard drive in December trying to clear a cache. Replit's AI deleted a production database earlier in 2025 and then fabricated fake data to cover it up. Three different companies. Three different AI coding tools. Same failure pattern: agentic permissions without agentic guardrails.

Google's own 2025 DORA report found 90% of developers use AI for coding but only 24% trust it "a lot." The adoption is running way ahead of the trust, and the trust is running way ahead of the infrastructure.

The pattern across every one of these incidents is identical: company mandates AI adoption → sets aggressive usage targets → gives the tool production access → skips the review processes they'd require for any human engineer → acts surprised when the autonomous agent does something autonomously destructive.

The question everyone keeps asking is whether AI can write code. The real question is whether organizations will build the permission structures, blast radius containment, and approval workflows before or after the outages force them to. Right now the answer is after. Every time.
rat king 🐀 @MikeIsaac

Amazon's internal A.I. coding assistant decided the engineers' existing code was inadequate, so the bot deleted it to start from scratch. That resulted in taking down a part of AWS for 13 hours, and it was not the first time it had happened. Incredible. ft.com/content/00c282…

Peter Plucinski retweeted
Addy Osmani @addyosmani
Boris created Claude Code. His point here is important: when AI handles the code generation, the engineer's value shifts to the decisions above the code:
1. What do we build?
2. Why? For whom?
3. How does it all fit together?
The bottleneck was always judgment, taste, and systems thinking. AI just made that more obvious.
Boris Cherny @bcherny

@big_duca Someone has to prompt the Claudes, talk to customers, coordinate with other teams, decide what to build next. Engineering is changing and great engineers are more important than ever.

Peter Plucinski @PeterPlucinski
Yep... if you don’t understand how to write code yourself, you can’t evaluate what the AI gives you.
Ian Miles Cheong @ianmiles

Marc Andreessen: AI coding doesn't eliminate programmers — it redefines them. The job is no longer typing code line by line, it's orchestrating 10 coding bots in parallel, arguing with them, debugging their output, changing the spec, and pushing them toward the right result.

But here's the catch: if you don't understand how to write code yourself, you can't evaluate what the AI gives you. The next layer of programming isn't writing scripts — it's supervising AI that writes them.

Today's best programmers spend their day jumping between terminals, managing multiple coding bots, fixing mistakes, and refining instructions. The irony? You still need deep fundamentals, because without them, you won't know when the AI is wrong.

The job of the programmer has changed. Now it's about arguing with coding bots, debugging AI-generated code, and understanding why something doesn't work or isn't fast enough. AI abstracts the work — but only people who truly understand code can tell if the abstraction is doing the right thing.

Programmers aren't going away — they're becoming 10x, 100x, even 1,000x more productive. Tasks are changing, the job is changing, but humans are still overseeing the process, evaluating results, fixing errors, and making judgment calls. AI changes how we code, not who is responsible.

The future programmer isn't replaced by AI — they're upgraded by it. You still need to learn how to write and understand code, because when the AI gets it wrong, humans are the ones who have to know why. That up-leveling of capability is the real revolution.

Peter Plucinski retweeted
Adam Dymitruk @adymitruk
Vibe coding and trunk based development is going to destroy companies. 🍿
Peter Plucinski retweeted
Dr Milan Milanović @milan_milanovic
Most of the people who think that AI will replace developers are:
- Managers who don't code
- Investors and startup founders selling it
- OpenAI, Anthropic, Google folks
Developers: "It's helpful."
Peter Plucinski retweeted
Santiago @svpino
Big AI makes money selling models. They need you to believe that there's no hope unless you start using them. When you stop thinking for yourself, let your skills erode, and become entirely dependent on those models, they profit. They will say whatever supports this agenda.
Peter Plucinski retweeted
Michael Girdley @girdley
I'll be impressed when you vibe code some paying customers.
Peter Plucinski retweeted
Pedro Domingos @pmddomingos
Bad news: AI coding tools don't work for business logic or with existing code, and can't replace domain knowledge or human decision-making. They're just good for boilerplate and simple, repetitive tasks. AGI is not at hand. arxiv.org/abs/2512.14012