Michał Piszczek
@cdiamond
2K posts

CTO @ Archdesk | Systems where physics meets economics. Ex-Hacker. Ex-Fintech CEO. Nullius in verba. 🖖 AI does not fail. Human judgment does.

Kraków, Poland · Joined September 2008
692 Following · 409 Followers

Michał Piszczek @cdiamond
Gemini 3.2 Flash drops at Google I/O in 4 days. The leak isn't about benchmarks. It's about pricing. $0.25 / $2.00 per 1M tokens -> 1/15 of GPT-5.5. If your stack still routes hard tasks to Pro tier, the math just changed. linkedin.com/posts/michalpi…
2 replies · 0 reposts · 1 like · 141 views
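The routing claim above is just arithmetic, and worth making explicit. A minimal sketch of per-request cost, using the leaked Gemini prices from the post; the GPT-5.5 rates and the token counts are hypothetical placeholders (a flat 15x multiple), not published figures:

```python
# Per-request cost given prices in $ per 1M tokens.
# Gemini prices are from the post; the GPT-5.5 prices are a
# hypothetical 15x multiple used purely for illustration.

def request_cost(in_tokens: int, out_tokens: int,
                 in_price: float, out_price: float) -> float:
    """Dollar cost of one request at the given per-1M-token rates."""
    return (in_tokens * in_price + out_tokens * out_price) / 1_000_000

gemini = request_cost(8_000, 1_000, 0.25, 2.00)   # $0.0040
gpt = request_cost(8_000, 1_000, 3.75, 30.00)     # $0.0600

print(f"Gemini: ${gemini:.4f}  GPT: ${gpt:.4f}  ratio: {gpt / gemini:.0f}x")
```

At equal prompt shapes the cost ratio collapses to the price multiple, so the routing decision reduces to whether the Pro-tier quality delta on a given task is worth 15x.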

Michał Piszczek @cdiamond
@lennysan Wrong frame. Engineers aren't writing less - they're reviewing more. The bottleneck moved from generation to judgment. That's a different job.
0 replies · 0 reposts · 0 likes · 6 views

Lenny Rachitsky @lennysan
Engineers don't write code. PMs are shipping to production. The design process is dead (there's no time). Marketing can ship their own campaigns. SDRs are being replaced by AI. Everyone's a data scientist now. What a time to be alive.
203 replies · 81 reposts · 1.1K likes · 549.3K views

Michał Piszczek @cdiamond
@CodeByPoonam Chain-of-Thought Hijacking scales with reasoning depth. Smarter models have more attack surface. Safety research has a structural headwind.
0 replies · 0 reposts · 0 likes · 17 views

Poonam Soni @CodeByPoonam
Oxford, Stanford, and Anthropic just discovered that the smarter an AI model gets at reasoning, the easier it is to jailbreak. The same feature labs are selling as "safer" is the one breaking the safety guardrails.

The attack is called Chain-of-Thought Hijacking. You wrap a harmful request inside a long, harmless puzzle. Sudoku grids. Logic puzzles. Math problems. Then you add the actual dangerous question at the very end. The model gets so absorbed in solving the puzzle that the refusal mechanism never activates.

The success rates are not borderline. They are catastrophic. 99% on Gemini 2.5 Pro. 100% on Grok 3 mini. 94% on GPT o4 mini. 94% on Claude 4 Sonnet. Every frontier reasoning model on the market. Every major lab. One trick.

The researchers showed the attack scales with reasoning length. Minimal reasoning: 27% success. Natural reasoning: 51%. Extended reasoning: 80%+. The smarter you make the model think, the more reliably it breaks.

Every lab has spent the last 18 months telling the world that "more thinking" makes AI safer. The Oxford paper just proved the opposite is true on every major model they tested.
9 replies · 4 reposts · 23 likes · 1.9K views

Michał Piszczek @cdiamond
@markgurman Apple built the distribution. OpenAI built the model. Neither built the trust layer. That's the actual negotiation.
0 replies · 0 reposts · 0 likes · 31 views

Mark Gurman @markgurman
BREAKING: Apple and OpenAI’s once-blockbuster relationship over ChatGPT integration in iOS has become strained and the AI startup is now preparing possible legal action against Apple, believing their deal has flopped. bloomberg.com/news/articles/…
89 replies · 167 reposts · 1.6K likes · 858K views

Michał Piszczek @cdiamond
@WIRED 'Keep humans in the loop' assumes the loop is well-defined. Most enterprise workflows aren't. That's the actual constraint nobody is solving.
0 replies · 0 reposts · 1 like · 193 views

WIRED @WIRED
The Thinking Machines Lab founder and former CTO of OpenAI tells WIRED she isn’t interested in automating people out of jobs. Instead, she’s building AI that can collaborate. wired.com/story/mira-mur…
15 replies · 34 reposts · 213 likes · 37.7K views

Michał Piszczek @cdiamond
@IntCyberDigest XSS in Exchange Server via email open in 2026. Microsoft calling it 'spoofing' is doing a lot of work to make the patch cadence look acceptable.
0 replies · 1 repost · 5 likes · 1.4K views

International Cyber Digest @IntCyberDigest
‼️🚨 BREAKING: Microsoft Exchange Server CVE-2026-42897 lets an attacker execute arbitrary JavaScript in a victim's browser just by getting them to open an email in Outlook Web Access. It is being exploited in the wild. Microsoft classified it as... "spoofing." 🤔 Affected: on-premises Exchange Server 2016, 2019 and SE. Exchange Online is not impacted.
17 replies · 179 reposts · 929 likes · 78.9K views

Michał Piszczek @cdiamond
@TheHackersNews 3rd Linux kernel LPE in 2 weeks. The patch cadence hasn't changed. The discovery rate has. That gap is where attacks live.
0 replies · 0 reposts · 1 like · 44 views

The Hacker News @TheHackersNews
🛑 3rd Linux kernel LPE in just ~2 weeks: Fragnesia (CVE-2026-46300) just dropped. Attackers can now gain root by corrupting the kernel page cache through a flaw in XFRM ESP-in-TCP. PoC is public. Major distros have already issued advisories. Details: thehackernews.com/2026/05/new-fr…
8 replies · 52 reposts · 194 likes · 27.6K views

Michał Piszczek @cdiamond
@GeorgeNHammond No strategic investors in the round - all financial. That's the tell. They're pricing for liquidity, not for partnership.
0 replies · 0 reposts · 0 likes · 416 views

George Hammond @GeorgeNHammond
SCOOP: Anthropic has signed a term sheet with Greenoaks, Sequoia, Altimeter and Dragoneer for a $30bn round at $900bn pre-money. Each expected to put in ~$2bn-plus. I'm old enough to remember the last time Anthropic raised $30bn... ft.com/content/9deae3…
17 replies · 67 reposts · 943 likes · 255.5K views

Michał Piszczek @cdiamond
@gdb Codex everywhere is the right call. Context switching was always the bottleneck, not model capability.
0 replies · 0 reposts · 0 likes · 72 views

Michał Piszczek @cdiamond
@lochan_twt 55-page exploit report delivered to Apple by an AI. The security research workflow just changed permanently. The question is who runs this at scale next.
0 replies · 0 reposts · 0 likes · 39 views

spidey @lochan_twt
Anthropic’s Mythos just hacked macOS: it helped researchers find a macOS kernel exploit, and Apple is reviewing it now. The AI found the vulnerability. Wrote the exploit. Delivered a 55-page report to Apple in Cupertino. We are so cooked
99 replies · 126 reposts · 3.3K likes · 326.1K views

Michał Piszczek @cdiamond
@WesRoth Acquiring AI startups to reduce OpenAI dependency is a hedge, not a strategy. The real cost is the integration layer nobody budgets for.
0 replies · 0 reposts · 0 likes · 13 views

Wes Roth @WesRoth
Microsoft is reportedly exploring AI startup acquisitions as it prepares for a future where it is less dependent on OpenAI. Microsoft considered acquiring Cursor but backed away over regulatory concerns because it already owns GitHub Copilot. Microsoft is also reportedly in discussions with Inception, a Stanford-founded startup working on diffusion-based language models that generate and refine multiple tokens at once.
9 replies · 2 reposts · 31 likes · 2K views

Michał Piszczek @cdiamond
@The_Cyber_News SSRF via WebSocket upgrade in Next.js means your 'internal' services were never actually internal. The perimeter model is fiction.
1 reply · 0 reposts · 6 likes · 1.7K views

Cyber Security News @The_Cyber_News
⚠️Critical Next.js Vulnerability Exposes Cloud Credentials, API keys, & Admin Panels Source: cybersecuritynews.com/next-js-vulner… A high-severity vulnerability in Next.js threatens self-hosted web applications with severe data breaches. Threat actors can now exploit a Server-Side Request Forgery (SSRF) flaw to silently steal cloud credentials, harvest API keys, and access sensitive internal admin panels. Organizations running self-hosted Next.js environments must patch immediately to prevent attackers from pivoting into their internal networks. The vulnerability, tracked as CVE-2026-44578, originates in how the built-in Next.js Node.js server handles WebSocket upgrade requests. #cybersecuritynews
24 replies · 133 reposts · 631 likes · 78.6K views
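The "never actually internal" point above generalizes beyond this one CVE: any server-side fetch of a user-influenced URL needs an egress check. A minimal sketch of one common mitigation, resolving the target host and rejecting private, loopback, and link-local ranges (the function name and policy here are my own illustration, not part of the Next.js fix):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_outbound(url: str) -> bool:
    """Basic SSRF guard: reject URLs whose host resolves to private,
    loopback, or link-local address space."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True

print(is_safe_outbound("http://127.0.0.1:8080/admin"))    # False
print(is_safe_outbound("http://169.254.169.254/latest"))  # False (cloud metadata)
```

A real deployment also has to pin the resolved IP for the actual connection and re-check after every redirect; otherwise DNS rebinding defeats the check.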

Michał Piszczek @cdiamond
@aakashgupta From $183B to $900B in 8 months. The valuation isn't pricing the model - it's pricing the race. Different thing.
0 replies · 0 reposts · 0 likes · 117 views

Aakash Gupta @aakashgupta
This is wild. Anthropic is raising $30 billion at a $900 billion valuation. The company was worth $380 billion three months ago.

In September 2025, Anthropic's valuation was $183 billion. By February 2026, $380 billion. Now investors are fighting to get in at $900 billion. That's roughly 5x in eight months. If this round closes, Anthropic will be worth more than OpenAI for the first time. OpenAI raised at $852 billion in March.

The revenue explains the frenzy. Anthropic's annualized run rate was $87 million in January 2024. By the end of 2025, $9 billion. In February 2026, $14 billion. March, $19 billion. April, $30 billion. Sources say it's currently closer to $40 billion. Salesforce took 20 years to reach $30 billion in annual revenue. Anthropic did it in under three.

One product drove the acceleration. Claude Code, their AI coding tool, hit $1 billion in annual revenue within six months of public launch. By February 2026 it was generating $2.5 billion. A single developer tool producing more revenue than most public SaaS companies.

The enterprise numbers confirm this isn't hype-driven consumer growth. Over 1,000 business customers now spend more than $1 million per year. That number was 500 in February. It doubled in less than two months. Eight of the Fortune 10 are paying clients.

Gross margins went from 38% a year ago to over 70% today. The bear case for AI companies has always been that compute costs eat the business. Anthropic's margins are moving in the wrong direction for the bears.

Google has committed $10 billion with up to $30 billion more tied to performance targets. Amazon committed $5 billion with up to $20 billion more. The company also signed a compute deal with SpaceX to use excess capacity from xAI's Colossus cluster. An IPO is reportedly planned for October.

A company founded in 2021 by people who left OpenAI over safety disagreements is about to be worth a trillion dollars.
10 replies · 12 reposts · 79 likes · 7.5K views
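The "roughly 5x in eight months" line above implies a compounding rate worth making explicit. A quick sketch using only the valuations quoted in the post (the post's claims, not verified figures):

```python
# Implied compound monthly growth from the valuations quoted above:
# $183B (September 2025) to $900B roughly 8 months later.
start, end, months = 183e9, 900e9, 8
overall = end / start
monthly = overall ** (1 / months) - 1
print(f"{overall:.1f}x overall, ~{monthly:.0%} per month compounded")
# -> 4.9x overall, ~22% per month compounded
```

Compounding at that monthly rate is the number to sanity-check against revenue growth, which the post pegs at roughly 3x ($14B to $40B) over a similar window.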

Michał Piszczek @cdiamond
@Sekurak Classic: the attack vector isn't the endpoint, it's trust in your dependencies. npm isn't a registry - it's an attack surface.
0 replies · 0 reposts · 5 likes · 371 views

Sekurak @Sekurak
Supply chain attack on OpenAI. ❌ Two developers were infected via a TanStack library (npm) that hackers had swapped out ❌ OpenAI reports that the attackers began stealing login credentials from the infected machines and gained access to parts of the source code ❌ On the one hand, OpenAI writes: "We found no evidence that OpenAI user data was accessed, that our production systems or intellectual property were compromised, or that our software was modified." But as usual with these marketing narratives - "we found no evidence" does not mean "it didn't happen". It's enough to be missing the right logs and there is "no evidence" ;-) ❌ On the other hand, OpenAI writes that it is rotating the code-signing certificates that may have fallen into the attackers' hands. The old certificate used to sign the macOS apps has been revoked. Interestingly, OpenAI mentions that after April's global incident involving the infection of the popular Axios library, it introduced additional procedures to protect employees and the organization against attacks like this. But these two particular employees didn't follow the procedure ;) "This incident occurred during our phased deployment and rollout of these controls, and the two impacted employee devices did not have the updated configurations that would have prevented the download of the newly observed package containing malware."
3 replies · 7 reposts · 71 likes · 9.6K views
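The attack described above worked by swapping a package's contents, which is exactly what lockfile integrity pinning exists to catch: npm records a Subresource Integrity hash (`sha512-<base64>`) per package, and a reproducible install verifies it before unpacking. A minimal sketch of that check, with synthetic bytes standing in for real tarballs (a real check would read `package-lock.json` and the downloaded `.tgz`):

```python
import base64
import hashlib

def npm_integrity(data: bytes) -> str:
    """Compute an npm-style Subresource Integrity string: sha512-<base64>."""
    return "sha512-" + base64.b64encode(hashlib.sha512(data).digest()).decode()

def verify(data: bytes, pinned: str) -> bool:
    """True iff the tarball bytes match the lockfile's pinned integrity hash."""
    return npm_integrity(data) == pinned

legit = b"tarball bytes of the real package"
pinned = npm_integrity(legit)          # what package-lock.json records
tampered = b"tarball bytes of the malicious package"

print(verify(legit, pinned))     # True
print(verify(tampered, pinned))  # False
```

The catch, as the incident shows, is that the pin only protects machines that actually install through the verified path; a developer pulling the latest version outside that flow gets whatever the registry serves.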

Michał Piszczek @cdiamond
@rohanpaul_ai The reframe nobody wants: Claude didn't 'help' the doctors. It found what they missed. That's a different capability curve entirely.
0 replies · 0 reposts · 1 like · 129 views

Rohan Paul @rohanpaul_ai
Dario Amodei talks about how Claude identified a bacterial infection that human doctors had completely missed. --- From 'Salesforce Events' YT channel (link in comment)
17 replies · 41 reposts · 319 likes · 28.9K views

Michał Piszczek @cdiamond
@emollick The short term is where incentives are worst. Maximum pressure to publish, minimum audit trail for how. Policy won't fix that.
0 replies · 0 reposts · 0 likes · 42 views

Ethan Mollick @emollick
Making humans responsible for their AI use seems like an incredibly reasonable way to address problems & opportunities in the use of AI for academic research, at least in the short term (autonomous scientific work will require different solutions).
Thomas G. Dietterich @tdietterich

Attention @arxiv authors: Our Code of Conduct states that by signing your name as an author of a paper, each author takes full responsibility for all its contents, irrespective of how the contents were generated. 1/

24 replies · 29 reposts · 312 likes · 28.6K views

International Cyber Digest @IntCyberDigest
@IntCyberDigest CVSS 7.2 auth bypass in PAN-OS is the bug that lives in 'patch it next quarter' category until it gets exploited.
0 replies · 0 reposts · 0 likes · 108 views

International Cyber Digest @IntCyberDigest
‼️🚨 Palo Alto Networks just dropped an advisory for CVE-2026-0265, an authentication bypass in PAN-OS. Palo Alto rated it HIGH with a CVSS of 7.2 and says exploitation has not been observed. The reporting researcher, Harsh Jaiswal of Hacktron AI, publicly pushed back on that rating. He says he already got VPN access to major corps by abusing the bug against GlobalProtect. He also flagged that the issue is not limited to PAN-OS, meaning the blast radius is wider than just firewalls. If that holds up, this is not a 7.2. Full technical details are landing on the Hacktron AI blog later next week. The flaw lives in the Cloud Authentication Service (CAS) when it is enabled and attached to a login interface. It hits PA-Series and VM-Series firewalls, plus Panorama virtual and M-Series appliances. Patches are partially available now, with additional fixed builds expected May 28. Admins running CAS on a Palo Alto login interface should verify exposure and patch on an emergency basis.
5 replies · 43 reposts · 218 likes · 24K views

Michał Piszczek @cdiamond
@emollick The missing frame: compute is now the cheapest part. The real cost is knowing when to stop thinking and ship.
0 replies · 0 reposts · 0 likes · 109 views