0.005 Seconds (3/694)

48.5K posts

@seconds_0

human first systems are the only thing left built https://t.co/DSD2mZtuap

San Francisco, CA · Joined January 2020
2.5K Following · 19.3K Followers
Pinned Tweet
0.005 Seconds (3/694) @seconds_0 ·
There's an entire parallel scientific corpus most Western researchers never see. Today I'm launching chinarxiv.org, a fully automated translation pipeline for all Chinese preprints, including the figures, to make that available.
199 replies · 1.1K reposts · 7.4K likes · 837.6K views
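The announcement above describes an automated pipeline that translates preprints, figures included. Purely as an illustrative sketch (nothing here reflects the actual chinarxiv.org implementation; `Preprint`, `translate`, and `translate_preprint` are hypothetical names, and the translator is a stub standing in for a real MT model or API):

```python
# Minimal sketch of a preprint translation pipeline. The translate()
# backend is a stub; a real system would call a machine-translation
# model or service here.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Preprint:
    title: str
    abstract: str
    figure_captions: List[str] = field(default_factory=list)


def translate(text: str) -> str:
    # Stub translator: tags the text instead of actually translating it.
    return f"[EN] {text}"


def translate_preprint(p: Preprint) -> Preprint:
    # Translate body text and figure captions alike, since figures
    # often carry much of a paper's scientific content.
    return Preprint(
        title=translate(p.title),
        abstract=translate(p.abstract),
        figure_captions=[translate(c) for c in p.figure_captions],
    )
```

The point of the structure is that captions are first-class fields, so "including the figures" falls out of the same loop as the title and abstract.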
0.005 Seconds (3/694) reposted
Ryan @ohryansbelt ·
Delve, a YC-backed compliance startup that raised $32 million, has been accused of systematically faking SOC 2, ISO 27001, HIPAA, and GDPR compliance reports for hundreds of clients. According to a detailed Substack investigation by DeepDelver, a leaked Google spreadsheet containing links to hundreds of confidential draft audit reports revealed that Delve generates auditor conclusions before any auditor reviews evidence, uses the same template across 99.8% of reports, and relies on Indian certification mills operating through empty US shells instead of the "US-based CPA firms" they advertise.

Here's the breakdown:

> 493 out of 494 leaked SOC 2 reports allegedly contain identical boilerplate text, including the same grammatical errors and nonsensical sentences, with only a company name, logo, org chart, and signature swapped in
> Auditor conclusions and test procedures are reportedly pre-written in draft reports before clients even provide their company description, which would violate AICPA independence rules requiring auditors to independently design tests and form conclusions
> All 259 Type II reports claim zero security incidents, zero personnel changes, zero customer terminations, and zero cyber incidents during the observation period, with identical "unable to test" conclusions across every client
> Delve's "US-based auditors" are actually Accorp and Gradient, described as Indian certification mills operating through US shell entities. 99%+ of clients reportedly went through one of these two firms over the past 6 months
> The platform allegedly publishes fully populated trust pages claiming vulnerability scanning, pentesting, and data recovery simulations before any compliance work has been done
> Delve pre-fabricates board meeting minutes, risk assessments, security incident simulations, and employee evidence that clients can adopt with a single click, according to the author
> Most "integrations" are just containers for manual screenshots with no actual API connections. The author describes the platform as a "SOC 2 template pack with a thin SaaS wrapper"
> When the leak was exposed, CEO Karun Kaushik emailed clients calling the allegations "falsified claims" from an "AI-generated email" and stated no sensitive data was accessed, while the reports themselves contained private signatures and confidential architecture diagrams
> Companies relying on these reports could face criminal liability under HIPAA and fines up to 4% of global revenue under GDPR for compliance violations they believed were resolved
> When clients threaten to leave, Delve reportedly pairs them with an external vCISO for manual off-platform work, which the author argues proves their own platform can't deliver real compliance
> Delve's sales price dropped from $15,000 to $6,000 with ISO 27001 and a penetration test thrown in when a client mentioned considering a competitor
erin griffith@eringriffith

A detailed and brutal look at the tactics of buzzy AI compliance startup Delve: "Delve built a machine designed to make clients complicit without their knowledge, to manufacture plausible deniability while producing exactly the opposite." substack.com/home/post/p-19…

121 replies · 175 reposts · 2.3K likes · 593.1K views
Liminal Warmth ❤️‍🔥 @liminal_warmth ·
RIP--can't really blame them, though, since basically everyone I know who's tried GLP-1s has had great results. I'd be mad too if it became obvious I'd been taking a placebo for months with no effect.
Crémieux@cremieuxrecueil

This is not good: People have learned that GLP-1s are really effective, so if they're not losing weight, they know they're in the placebo group. So these people getting placebos are getting mad and leaving the trials.

2 replies · 2 reposts · 47 likes · 2.7K views
0.005 Seconds (3/694) reposted
tomie @tomieinlove ·
(Monkey's Paw): I will grant you three wishes...but remember...be careful what you wish for [chuckles evilly]...you might get it.
(Me): triple the price of insulin
(Monkey's Paw): ha ha...you—what?
(Me): wish 2, quadruple world hunger
91 replies · 198 reposts · 13.5K likes · 1.5M views
leo 🐾 @synthwavedd ·
can one of the big labs do something? it's been quiet lately, need new SoTA models...
20 replies · 1 repost · 156 likes · 11.1K views
Charlie Marsh @charliermarsh ·
We've entered into an agreement to join OpenAI as part of the Codex team. I'm incredibly proud of the work we've done so far, incredibly grateful to everyone that's supported us, and incredibly excited to keep building tools that make programming feel different.
273 replies · 137 reposts · 3K likes · 379.1K views
Visa is doing marketing consults (see pinned!)
my routine these days is i wake up and drop my kid off at preschool and get home to do some writing, but i keep forgetting to get breakfast and end up crashing like a fool. what are your favorite no-brainer mom/dad breakfasts? after the classic peanut butter sandwich
93 replies · 1 repost · 199 likes · 14.7K views
aurora @AuroraLevinson ·
@seconds_0 why would you pick that over other wl drugs like ozempic
3 replies · 0 reposts · 2 likes · 848 views
0.005 Seconds (3/694) @seconds_0 ·
There will be a whole new class of models trained whose job is to call specialist models. MoE taken to the logical extreme: compaction models, search models, memory models, and above all the planning dispatch model.
Cody Blakeney@code_star

I have no idea what specialized model for context compaction means and I have like 5 papers and announcements to read before I can think about this. It’s crazy that for even a single model we may have a whole ecosystem of specialized models for optimization. Spec decode model, compaction. What comes after that?

5 replies · 2 reposts · 70 likes · 2.6K views
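The "planning dispatch model" idea above can be sketched as a router that delegates tasks to specialist models. This is purely illustrative: the specialist names (`compaction`, `search`, `memory`) come from the tweet, but the `dispatch` function and the stub behaviors are hypothetical stand-ins for trained models behind inference endpoints:

```python
# Hypothetical sketch of a planning/dispatch layer over specialist
# models. Each "specialist" here is a plain callable; in practice each
# would be a separately trained model served behind an endpoint.
from typing import Callable, Dict

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    # Compaction: shrink long context to a fixed budget (stubbed as
    # simple truncation; a real compaction model would summarize).
    "compaction": lambda ctx: ctx[:32] + "…" if len(ctx) > 32 else ctx,
    # Search: retrieve external information for a query.
    "search": lambda q: f"results for: {q}",
    # Memory: recall previously stored state by key.
    "memory": lambda k: f"recalled: {k}",
}


def dispatch(task: str, payload: str) -> str:
    """Route a task to its specialist -- a mixture-of-experts pattern
    lifted from the layer level up to whole models."""
    if task not in SPECIALISTS:
        raise ValueError(f"no specialist for task: {task}")
    return SPECIALISTS[task](payload)
```

The design choice the tweet gestures at is that the planner itself would be a trained model choosing the route, rather than the hard-coded dictionary lookup used here for brevity.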
meg.ai 🇨🇦 @MeganRisdal ·
Made some updates to Kaggle's logged-out homepage - still lots more to do. :) What do you think? What are we missing?
3 replies · 0 reposts · 14 likes · 640 views