Michael
@mgrczyk · 5.9K posts

Train bigger models, build more housing, have more babies. Currently @AnthropicAI

San Francisco, CA · Joined January 2017
468 Following · 905 Followers

Pinned Tweet
Michael @mgrczyk ·
"So are you guys using a vector database?" "Yeah, Google Sheets"
[image]
0 replies · 1 repost · 37 likes · 9.2K views
Michael @mgrczyk ·
@AmandaAskell I don't want my coworkers to have a door they can close!
1 reply · 0 reposts · 1 like · 82 views
Amanda Askell @AmandaAskell ·
Tech companies pay millions of dollars for their employees and then stick them in open-plan offices that make it nearly impossible to get work done. Best strategy for poaching employees is probably to just offer them an office with a door.
217 replies · 204 reposts · 4.1K likes · 558.2K views
Michael @mgrczyk ·
sora was one of the last AGI pilled things OpenAI has ever done
0 replies · 0 reposts · 0 likes · 117 views
christian @cxgonzalez ·
LLMs will lead to productivity gains and new forms of high-skilled labor, not mass unemployment
[image]
108 replies · 50 reposts · 522 likes · 24.9K views
Michael @mgrczyk ·
Clever enough to realize you can build that thing you always wanted with Claude Code. Not clever enough to realize 20,000 people did the same thing today.
1 reply · 0 reposts · 1 like · 129 views
Michael @mgrczyk ·
@MegaBasedChad The Justice Department changed how they report and track cases in 2025, but even so it does not look like they filed more charges than in previous years. Do you expect that things have changed in 2026?
0 replies · 0 reposts · 3 likes · 161 views
L3 Tweet Engineer @MegaBasedChad ·
@mgrczyk Yes, a lot more. Dr. Oz (who is in govt now btw) is driving around LA and Queens and shit finding fraud
2 replies · 0 reposts · 18 likes · 985 views
Michael @mgrczyk ·
@daniel_w_owens My previous apartment was literally at the exit of this thing, and I would have gladly spent an extra 5 minutes on intersections below to see this blight removed
0 replies · 0 reposts · 0 likes · 27 views
Daniel Owens @daniel_w_owens ·
Hey SF, let's get rid of this. It's ugly as fuck. It's elite, top-tier urban blight. Removing it won't adversely affect traffic. It's dangerous (people are injured and/or die underneath the freeway every year), and it blocks benefits we need (housing, transit, green space).
[image]
77 replies · 13 reposts · 424 likes · 71.1K views
Mad ML scientist @HououinTyouma ·
@zephyr_z9 What's the percentage of non-Chinese citizens among top researchers at DeepSeek etc.?
1 reply · 0 reposts · 3 likes · 1.4K views
Zephyr @zephyr_z9 ·
40%-60% of the top researchers at frontier labs are non-US citizens. If they get removed, then American labs will lose the race.
38 replies · 38 reposts · 479 likes · 103.8K views
Eneasz Brodski @EneaszWrites ·
Half the local rats are at the Stop The AI Race March
[4 images]
140 replies · 33 reposts · 841 likes · 1.3M views
Michael @mgrczyk ·
@tenobrus @IceSolst That's pretty close to what everyone else does. Neither of my two SOC 2 reports had anything specific about my company except form fields like address.
0 replies · 0 reposts · 1 like · 50 views
Tenobrus @tenobrus ·
@IceSolst Instead of taking what you did and sending the report to independent auditors, they generated virtually identical and false final reports which were pre-stamped by auditors before clients even filled anything in
2 replies · 0 reposts · 34 likes · 1.4K views
solst/ICE of Astarte @IceSolst ·
I read this whole list and cannot figure out how it's different from Coalfire or any other compliance auditing firm

Quoted: Ryan @ohryansbelt

Delve, a YC-backed compliance startup that raised $32 million, has been accused of systematically faking SOC 2, ISO 27001, HIPAA, and GDPR compliance reports for hundreds of clients. According to a detailed Substack investigation by DeepDelver, a leaked Google spreadsheet containing links to hundreds of confidential draft audit reports revealed that Delve generates auditor conclusions before any auditor reviews evidence, uses the same template across 99.8% of reports, and relies on Indian certification mills operating through empty US shells instead of the "US-based CPA firms" they advertise.

Here's the breakdown:
> 493 out of 494 leaked SOC 2 reports allegedly contain identical boilerplate text, including the same grammatical errors and nonsensical sentences, with only a company name, logo, org chart, and signature swapped in
> Auditor conclusions and test procedures are reportedly pre-written in draft reports before clients even provide their company description, which would violate AICPA independence rules requiring auditors to independently design tests and form conclusions
> All 259 Type II reports claim zero security incidents, zero personnel changes, zero customer terminations, and zero cyber incidents during the observation period, with identical "unable to test" conclusions across every client
> Delve's "US-based auditors" are actually Accorp and Gradient, described as Indian certification mills operating through US shell entities. 99%+ of clients reportedly went through one of these two firms over the past 6 months
> The platform allegedly publishes fully populated trust pages claiming vulnerability scanning, pentesting, and data recovery simulations before any compliance work has been done
> Delve pre-fabricates board meeting minutes, risk assessments, security incident simulations, and employee evidence that clients can adopt with a single click, according to the author
> Most "integrations" are just containers for manual screenshots with no actual API connections. The author describes the platform as a "SOC 2 template pack with a thin SaaS wrapper"
> When the leak was exposed, CEO Karun Kaushik emailed clients calling the allegations "falsified claims" from an "AI-generated email" and stated no sensitive data was accessed, while the reports themselves contained private signatures and confidential architecture diagrams
> Companies relying on these reports could face criminal liability under HIPAA and fines up to 4% of global revenue under GDPR for compliance violations they believed were resolved
> When clients threaten to leave, Delve reportedly pairs them with an external vCISO for manual off-platform work, which the author argues proves their own platform can't deliver real compliance
> Delve's sales price dropped from $15,000 to $6,000, with ISO 27001 and a penetration test thrown in, when a client mentioned considering a competitor

13 replies · 1 repost · 93 likes · 12.7K views
Michael @mgrczyk ·
I lucked into a techno-optimist personality perfectly suited for the timeline we happen to be on. Every day is a gift; my younger self would have given anything to end up seeing and experiencing what I get to see now.
0 replies · 0 reposts · 6 likes · 145 views
delaniac 🌹🌱 @ChadNotChud ·
Increasingly clear that LLMs are just a Normal Technology: cool and useful, but fantasies that their rate of improvement will continue indefinitely are just that. Like everything, we'll reach a point where further improvement requires a qualitatively different approach.
170 replies · 82 reposts · 1.9K likes · 74.3K views
Michael @mgrczyk ·
@jxmnop > six to twelve months of study

This might be a problem
0 replies · 0 reposts · 1 like · 976 views
dr. jack morris @jxmnop ·
Learning to write kernels might be the highest-ROI activity for displaced SWEs:
→ prereq: reasonable engineering ability
→ six to twelve months of study
→ millions of dollars, Mark Zuckerberg showing up at your house to hire you, etc.
I wish this were an exaggeration
43 replies · 62 reposts · 1.9K likes · 122.7K views
Michael @mgrczyk ·
@Scott_Wiener How about we agree to name it after a Chinese person instead of MLK if inner Richmond agrees to upzone
0 replies · 0 reposts · 1 like · 120 views
Brangus🔍⏹️ @RatOrthodox ·
Oh uhh, successfully scaling something once 10,000x does not seem like very strong evidence that scaling it 10,000x again is safe; consider CO2, or the number of HSV-1 viruses in your body, or whatever really. In this particular case you should expect a discontinuity in safety around the point where, if you fuck up, you are definitely not going to be able to fix it. Sorry, I sort of thought this was obvious.
2 replies · 0 reposts · 31 likes · 471 views
roon @tszzl ·
Modern alignment methods seem to work reasonably well across orders of magnitude of model scaling, and survived the transition to verifiable rewards; that should at least inform your decision making

Quoted: Brangus🔍⏹️ @RatOrthodox

I have heard that some Anthropic safety leadership are going around telling people that alignment is a solved problem. This seems like a predictable failure to me, and I would like people who thought that funneling talent towards Anthropic was a good idea to think about it.

35 replies · 11 reposts · 374 likes · 77.8K views
Michael @mgrczyk ·
@RatOrthodox @tszzl ^Having strong preferences over one's own beliefs is interesting, because one can remain rational while never updating on new evidence
0 replies · 0 reposts · 2 likes · 52 views
Brangus🔍⏹️ @RatOrthodox ·
@tszzl I will do my best to DM you a well-timed "lol, told ya" before the end
2 replies · 0 reposts · 26 likes · 524 views