GVRN

339 posts


@gvrn_ai

Trusted by the teams behind $500M+ in raises 🌐 Your superpower for incorporation, legal, and fundraising in web3.

Joined August 2023
124 Following · 1.8K Followers
Pinned Tweet
GVRN
GVRN@gvrn_ai·
Every launchpad on @solana will soon be able to offer legal foundations + on-chain ownership. In collaboration with @0xSoju and the @MeteoraAG team -- @BedrockFndn plugs legal frameworks directly into DeFi rails. Founders raise, and investors get access from day one. Real entities. Enforceable equity. We're building the infrastructure that makes Internet Capital Markets actually work.
Soju 燒酒 | Meteora@0xSoju

.@MeteoraAG is a token AMM, and we need to make tokens great again. Today we're announcing @BedrockFndn, our attempt to rebuild Internet Capital Markets with real foundations.

Bedrock is a joint venture with @GVRN_AI, a leading LegalTech firm based in Singapore. Bedrock's goal is to create frameworks, innovate with our partners, and solve legal problems to help bring ICM into reality on @MeteoraAG.

Additionally, we've formed Bedrock Foundation -- a new independent, ownerless foundation with one goal: to own equity, IP and other assets on behalf of tokenholders. With Bedrock Foundation, we will work together with any @MeteoraAG powered Launchpad to bring tokenized equity to life on Solana. We are also excited to work with existing tokenized equity teams to improve their frameworks with our legal muscle. We will build the public goods for Meteora's partners to leverage, and bring ICM to reality on @MeteoraAG.

6
2
29
6.6K
GVRN
GVRN@gvrn_ai·
You're building. You convince yourself that you don't have time for paperwork. Who reads it anyway?

You follow the 2026 Founder playbook and ask GPT. Then Claude. Then Gemini (just to be sure). You screenshot all three and drop them in the group chat.

We hate to break it to you, but you didn't get three expert opinions. You got one opinion phrased differently. They were all trained on the same sources, the same writers, the same ideas.

You go ahead and ship using the infrastructure you pulled together from the prompt output. A few paragraphs that look official from Claude. A few lines from ChatGPT. An occasional idea from Gemini.

Project ships. It moons. An investor reads your docs and asks you to walk them through the logic. Your entire defense is "the AI all agreed." You're cooked.

How deep are you running AI on your critical stack right now? Unlike your last meme coin rug, it's not too late to get REAL advice that's been trained on actual client outcomes, not search results.

GVRN is for serious founders and moves that scale. Protect your work from the start. We're here to help.
Alex Prompter@alex_prompter

🚨 BREAKING: Researchers at UW Allen School and Stanford just ran the largest study ever on AI creative diversity. They asked over 70 different LLMs the exact same open-ended questions, and they all gave the same answers.

"Write a poem about time." "Suggest startup ideas." "Give me life advice." Questions where there is no single right answer. Questions where 10 different humans would give you 10 completely different responses. Instead, 70+ models from every major AI company converged on almost identical outputs. Different architectures. Different training data. Different companies. Same ideas. Same structures. Same metaphors.

They named this phenomenon the "Artificial Hivemind." And the paper won the NeurIPS 2025 Best Paper Award, the highest recognition in AI research, handed to a small number of papers out of thousands of submissions. This is not a blog post or a hot take. This is award-winning, peer-reviewed science confirming something massive is broken.

The team built a dataset called Infinity-Chat with 26,000 real-world, open-ended queries and over 31,000 human preference annotations. Not toy benchmarks. Not math problems. Real questions people actually ask chatbots every single day, organized into 6 categories and 17 subcategories covering creative writing, brainstorming, speculative scenarios, and more. They ran all of these across 70+ open and closed-source models and measured the diversity of what came back.

Two findings hit hard. First, intra-model repetition. Ask the same model the same open-ended question five times and you get almost the same answer five times. The "creativity" you think you're getting is the same output wearing a slightly different outfit. You ask ChatGPT, Claude, or Gemini to write you a poem about time and you keep getting the same river metaphor, the same hourglass imagery, the same reflection on mortality. Over and over. The model isn't thinking.
It's defaulting to whatever scored highest during alignment training.

Second, and this is the one that should really alarm you, inter-model homogeneity. Ask GPT, Claude, Gemini, DeepSeek, Qwen, Llama, and dozens of other models the same creative question, and they all converge on strikingly similar responses. These are models built by completely different companies with different architectures and different training pipelines. They should be producing wildly different outputs. They're not. 70+ models all thinking inside the same invisible box, producing the same safe, consensus-approved content that blends together into one indistinguishable voice.

So why is this happening? The researchers point directly at RLHF and current alignment techniques. The process we use to make AI "helpful and harmless" is also making it generic and boring. When every model gets trained to optimize for human preference scores, and those preference datasets converge on a narrow definition of what "good" looks like, every model learns to produce the same safe, agreeable output. The weird answers get penalized. The original takes get shaved off. The genuinely creative responses get killed during training because they didn't match what the average annotator rated highly.

And it gets even worse. The study found that reward models and LLM-as-judge systems are actively miscalibrated when evaluating diverse outputs. When a response is genuinely different from the mainstream but still high quality, these automated systems rate it LOWER. The very tools we built to evaluate AI quality are punishing originality and rewarding sameness.

Think about what this means if you use AI for brainstorming, content creation, business strategy, or literally any task where you need multiple perspectives. You're getting the illusion of diversity, not the real thing. You ask for 10 startup ideas and you get 10 variations of the same 3 ideas the model learned were "safe" during training.
You ask for creative writing and you get the same therapeutic, perfectly balanced, utterly forgettable tone that every other model gives.

The researchers flagged direct implications for AI in science, medicine, education, and decision support, all domains where diverse reasoning is not a nice-to-have but a requirement. Correlated errors across models mean that if one AI gets something wrong, they might ALL get it wrong the same way. Shared blind spots at massive scale.

And the long-term risk is even scarier. If billions of people interact with AI systems that all think identically, and those interactions shape how people write, brainstorm, and make decisions every day, we risk a slow, invisible homogenization of human thought itself. Not because AI replaced creativity. Because it quietly narrowed what we were exposed to until we all started thinking the same way too.

Here's what you can actually do about it right now:
→ Stop accepting first-draft AI output as creative or diverse. If you need 10 ideas, generate 30 and throw away the obvious ones
→ Use temperature and sampling parameters aggressively to push models out of their comfort zone
→ Cross-reference multiple models AND multiple prompting strategies, because the same model with different prompts often beats different models with the same prompt
→ Add constraints that force novelty, like "give me ideas that a traditional investor would hate" instead of "give me creative ideas"
→ Use structured prompting techniques like Verbalized Sampling to force the model to explore low-probability outputs instead of defaulting to consensus
→ Layer your own taste and judgment on top of everything AI gives you. The model gets you raw material. Your weirdness and experience make it original

This paper puts hard data behind something a lot of us have been feeling for a while. AI is getting more capable and more homogeneous at the same time. The models are smarter, but they're all smart in the exact same way.
The Artificial Hivemind is not a bug in one model. It's a systemic feature of how the entire industry builds, aligns, and evaluates language models right now.

The fix requires rethinking alignment itself, moving toward what the researchers call "pluralistic alignment," where models get rewarded for producing diverse distributions of valid answers instead of collapsing to a single consensus mode. Until that happens, your best defense is awareness and better prompting.
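The "use temperature aggressively" advice above can be made concrete with a toy sketch (plain Python, no model or API required; the logit values are invented for illustration). Language models pick the next token by applying a softmax to raw scores; dividing those scores by a temperature before the softmax flattens or sharpens the resulting distribution, which is why low temperature keeps collapsing onto the same consensus answer while higher temperature spreads probability onto less likely options:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw scores to probabilities.

    Higher temperature flattens the distribution (more diverse sampling);
    lower temperature sharpens it (more repetitive, consensus output).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "next-token" scores: one dominant (consensus) option, three alternatives.
logits = [4.0, 2.0, 1.5, 1.0]

low_t = softmax_with_temperature(logits, temperature=0.5)
high_t = softmax_with_temperature(logits, temperature=2.0)

# At temperature 0.5 the top option takes roughly 97% of the probability
# mass; at temperature 2.0 it drops to roughly 53%, leaving real odds
# of sampling one of the "weirder" options.
print(round(low_t[0], 3), round(high_t[0], 3))
```

This is only the mechanism; in practice you would set the `temperature` (and `top_p`) parameters exposed by whatever model API you use, rather than computing the softmax yourself.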

1
0
7
1.2K
GVRN
GVRN@gvrn_ai·
@ohryansbelt “Looks compliant” and “is compliant” are very different things.
0
0
1
526
Ryan
Ryan@ohryansbelt·
Delve, a YC-backed compliance startup that raised $32 million, has been accused of systematically faking SOC 2, ISO 27001, HIPAA, and GDPR compliance reports for hundreds of clients. According to a detailed Substack investigation by DeepDelver, a leaked Google spreadsheet containing links to hundreds of confidential draft audit reports revealed that Delve generates auditor conclusions before any auditor reviews evidence, uses the same template across 99.8% of reports, and relies on Indian certification mills operating through empty US shells instead of the "US-based CPA firms" they advertise.

Here's the breakdown:
> 493 out of 494 leaked SOC 2 reports allegedly contain identical boilerplate text, including the same grammatical errors and nonsensical sentences, with only a company name, logo, org chart, and signature swapped in
> Auditor conclusions and test procedures are reportedly pre-written in draft reports before clients even provide their company description, which would violate AICPA independence rules requiring auditors to independently design tests and form conclusions
> All 259 Type II reports claim zero security incidents, zero personnel changes, zero customer terminations, and zero cyber incidents during the observation period, with identical "unable to test" conclusions across every client
> Delve's "US-based auditors" are actually Accorp and Gradient, described as Indian certification mills operating through US shell entities. 99%+ of clients reportedly went through one of these two firms over the past 6 months
> The platform allegedly publishes fully populated trust pages claiming vulnerability scanning, pentesting, and data recovery simulations before any compliance work has been done
> Delve pre-fabricates board meeting minutes, risk assessments, security incident simulations, and employee evidence that clients can adopt with a single click, according to the author
> Most "integrations" are just containers for manual screenshots with no actual API connections. The author describes the platform as a "SOC 2 template pack with a thin SaaS wrapper"
> When the leak was exposed, CEO Karun Kaushik emailed clients calling the allegations "falsified claims" from an "AI-generated email" and stated no sensitive data was accessed, while the reports themselves contained private signatures and confidential architecture diagrams
> Companies relying on these reports could face criminal liability under HIPAA and fines up to 4% of global revenue under GDPR for compliance violations they believed were resolved
> When clients threaten to leave, Delve reportedly pairs them with an external vCISO for manual off-platform work, which the author argues proves their own platform can't deliver real compliance
> Delve's sales price dropped from $15,000 to $6,000, with ISO 27001 and a penetration test thrown in, when a client mentioned considering a competitor
Ryan tweet media
erin griffith@eringriffith

A detailed and brutal look at the tactics of buzzy AI compliance startup Delve

"Delve built a machine designed to make clients complicit without their knowledge, to manufacture plausible deniability while producing exactly the opposite." substack.com/home/post/p-19…

402
732
8.2K
5.6M
Joseph (eu/acc)
Joseph (eu/acc)@ImmutableLawyer·
Every time I see YC and Compliance in the same sentence it's unusable or, in this case, a scam.

Compliance tooling should always be built by compliance veterans - end of story.
Ryan@ohryansbelt

[Quoted tweet by @ohryansbelt: the Delve investigation thread, reproduced in full earlier on this page.]

3
0
2
490
GVRN
GVRN@gvrn_ai·
Official Apology Statement: We regret to inform founders that legal structure is not a one-time decision.

You cannot:
- incorporate once and carry it forever
- assume 2022's setup survives 2025's DD
- skip the review because nothing has "gone wrong yet"

We understand this is inconvenient. We're sorry.
2
1
13
752
GVRN
GVRN@gvrn_ai·
@coinfessions Based on docs we've seen, you're not alone.
0
0
3
55
Coinfessions
Coinfessions@coinfessions·
I work for a decently sized crypto project and have no clue what I'm doing.
144
34
1.5K
201.4K
GVRN
GVRN@gvrn_ai·
@pirwot Luckily CT erupts anytime legislation changes. That should always be a CTA for you to re-examine your docs.
0
0
0
23
GVRN
GVRN@gvrn_ai·
@pirwot Excellent question, and it's not an easy one, especially with rapidly changing legislation. The easy answer is to hire the right people to take that load. Following the right blogs/accounts, setting alerts for your jurisdiction, etc. will also help.
1
0
0
60
Brad Carry (VC & Podcaster)
Brad Carry (VC & Podcaster)@bradcarryvc·
Invested in a founder last year

Just asked to see updated financials

"Sorry, we aren't making those yet since we aren't a public company," he replied back to me

I think my money is gone
20
1
135
16K
GVRN
GVRN@gvrn_ai·
@rezoundous If only there were people around who could tell you nobody will use it...
0
0
1
17
Tyler
Tyler@rezoundous·
Nothing humbles you like launching something nobody uses.
330
224
2.7K
73.9K
LoKi 😈
LoKi 😈@lokithebird·
GM to everyone except those holding NFTs for XP.
77
2
177
5.1K
GVRN
GVRN@gvrn_ai·
@mdudas machines or regulators? 🫳🎤
0
0
0
20
NickyScanz
NickyScanz@NickyScanz·
Pls send help
NickyScanz tweet media
11
0
44
1.5K
b
b@bmontxna·
bro have you ever thought about like how everything we do is just data collection
66
22
209
14.6K
GVRN
GVRN@gvrn_ai·
@LarsenJensenUSA We are looking to help founders who are highly skilled in Retardmaxximization to not oopsiemaxx their founding docs.
0
0
0
11
Larsen Jensen
Larsen Jensen@LarsenJensenUSA·
We are looking to invest in founders who are highly skilled in Retardmaxximization.
124
20
514
49.2K
GVRN
GVRN@gvrn_ai·
@zachtratar You'd think if everyone was building the same product, their legal documentation would at least be right for once.
1
0
1
57