Lighthouse Labs

170 posts

Lighthouse Labs banner
Lighthouse Labs

Lighthouse Labs

@LighthouseGov

Accelerating the adoption of onchain governance technology

Katılım Şubat 2023
37 Takip Edilen171 Takipçiler
Lighthouse Labs retweetledi
ensdao.eth
ensdao.eth@ENS_DAO·
ENS names lacked a standard way to express structured metadata. Lighthouse’s ENS Schemas fixes that. Live now → draft ENSIP + tooling (10 schemas, SDK, CLI, app/docs). Review @ensdomains PR #64 and weigh in. github.com/ensdomains/ens…
Lighthouse Labs@LighthouseGov

ENS names are how people and orgs identify themselves on-chain, but until now there's been no standard way to describe what a name represents. More details on our solution for @ensdomains: @lighthousegov/ens-metadata" target="_blank" rel="nofollow noopener">paragraph.com/@lighthousegov

English
0
3
14
1.6K
Lighthouse Labs
Lighthouse Labs@LighthouseGov·
How-to guides now live for: → Representing an entire org on-chain → Publishing your delegate statement → Giving an AI agent identity Try it → ensmetadata.app What would you classify first?
English
0
0
1
70
Lighthouse Labs
Lighthouse Labs@LighthouseGov·
Any ENS name can now declare its class (wallet, contract, agent, etc) and attach a JSON schema describing its attributes. This enables DAOs to publish a verifiable org chart directly to their ENS name, including treasuries, contracts, and delegate info.
English
1
0
1
107
Lighthouse Labs
Lighthouse Labs@LighthouseGov·
ENS names are how people and orgs identify themselves on-chain, but until now there's been no standard way to describe what a name represents. More details on our solution for @ensdomains: @lighthousegov/ens-metadata" target="_blank" rel="nofollow noopener">paragraph.com/@lighthousegov
English
1
3
10
2K
Lighthouse Labs retweetledi
Lighthouse Labs retweetledi
blockful.eth
blockful.eth@blockful_io·
Moonwell on Moonriver is under an active governance attack. $1,808. That's what it cost to buy enough tokens to pass a proposal that can drain $1.08M in user funds. A 597x profit. Voting ends on March 27. There's still time to stop it. 👇
blockful.eth tweet media
English
4
17
42
5.3K
Lighthouse Labs retweetledi
vitalik.eth
vitalik.eth@VitalikButerin·
It's a good decision! ENS names and records are a form of state that is central to the Ethereum ecosystem, the state is limited in size and there is high value in it being as accessible as possible from anywhere. It's also a semi-financial application, in the sense that buying and holding ENS names has a cost, and ENS names can become very valuable objects. With the expanded scaling roadmap, Ethereum L1 is the ideal place for these applications. More generally, I expect that the optimal architecture for decentralized identity and social (the general space I see ENS being in) is to have this kind of per-user account and profile data on L1, and to have special-purpose L2s, likely much simpler than full EVMs, to handle user actions (eg. actions on social platforms).
Katherine Wu | katherine.eth@katherinewu

A quick update on ENSv2: we have made the decision to deploy ENSv2 exclusively on Ethereum L1 and to cease development of Namechain. To be clear, ENSv2 will still ship. The only thing that’s changed is that instead of deploying ENSv2 on our own L2 stack, it will be deployed on L1. It is important to note that ENSv2 is ultimately an upgrade to ENS as it exists today — it’s still ENS! Regardless of where it ultimately gets deployed, it does not fundamentally change ENS the protocol nor does it change any part of our mission and ultimate goal of building the identity layer on Ethereum. The design for ENSv2 was always intended to work fully as designed, whether deployed on L1 or L2. Our product roadmap does not change. We have detailed progress on the ENSv2 Hub to show what exactly v2 will mean for you, and what the team has been building: giving each name its own registry (making your .eth names more powerful and customizable to your own rules!), building two brand new apps from the ground up (both deployed to testnet this week), and much more. I am so excited for this release (soon!) and think it will completely change the way you interact with your own ENS names. The timing of this decision coincides with a broader discussion about the role of L2s in Ethereum. I continue to believe that L2s play a vital role in extending the value of the world computer that is Ethereum, and ENS will continue to support as many chains as possible. In fact, very soon anyone will be able to register a .eth name regardless of which EVM chain they are on — meaning that even if your assets live on Optimism or Arbitrum, it’s a one-click process (no bridge, no gas tokens). We also continue to believe in a multi-chain world beyond EVM chains (a reminder that ENS has and always will support your addresses across major chains like Solana, Bitcoin, and more). We have published the detailed rationale for the decision to stay on L1 on our blog, and I encourage you to read it (in the QT here!) The .eth stays on 🫡

English
481
329
2.2K
392.7K
Lighthouse Labs
Lighthouse Labs@LighthouseGov·
Submitted to @Designing_DeFi Harbor Protocol unlocks secondary markets for vesting and staked positions, without breaking the lock. Exit positions early. Buy someone else's at a discount. The original commitment stays intact.
English
1
0
1
49
Lighthouse Labs
Lighthouse Labs@LighthouseGov·
@saturnial You guys did a great job pushing the envelope of good crypto UX. Good luck on the next project.
English
0
0
1
55
Lighthouse Labs retweetledi
Uniswap Labs 🦄
Uniswap Labs 🦄@Uniswap·
Continuous Clearing Auctions are now live on @arbitrum With this deployment, Arbitrum builders can now: → Run onchain token auctions → Discover a credible market price → Automatically seed liquidity on v4 In a way that's transparent and open to everyone
English
1.1M
52
337
72.5K
Lighthouse Labs retweetledi
Enscribe
Enscribe@enscribe_·
As part of @ens_dao Contract Naming Season, @corkprotocol has adopted ENS-based naming across its smart contract infrastructure using Enscribe. Its core contracts and wallets now have human-readable, verifiable identities.
Enscribe tweet media
English
4
9
28
1.6K
Lighthouse Labs retweetledi
Syndicate
Syndicate@syndicateio·
Transparency in crypto often stops at dashboards. Not here. Today, the Syndicate Network Collective (DUNA) published its Q4 2025 financials—a full accrual-basis report from a decentralized network operating under U.S. law. Here’s what stands out ↓
Syndicate tweet media
English
2
3
21
1.7K
Lighthouse Labs retweetledi
Kelsie Nabben
Kelsie Nabben@kelsiemvn·
New book chapter out! 'Tornado Cash, Flashbots, and regulatory equivalence: Alternatives to regulatory compliance or avoidance in blockchain systems' w. @yaoeo @MannanMorshed in Public Governance ed. by @EconomistChohan & Sven Van Kerckhoven 🙏 #v=onepage&q&f=false" target="_blank" rel="nofollow noopener">books.google.com.au/books?hl=en&lr…
English
1
10
20
1.5K
Lighthouse Labs
Lighthouse Labs@LighthouseGov·
Adjacent to this is using AIs as “independent” review bodies. See Simocracy by @dwddao (researchretreat.org/papers/paper/?…) TL;DR - Seed several digital twins with eval prompts - They discuss the issue over two rounds - They provide an outcome Interestingly, when you seed a model with a particular bias, it still converges toward the group. Current-gen models seem to be too agreeable.
English
0
0
0
10
Jay Yu 🐟
Jay Yu 🐟@0xfishylosopher·
woke up to this amazing blog by andy hall - how can we build an AI delegate? and how can we break it? i've been intrigued by this topic for a few years, coming from a DAO delegate + researcher perspective. a few thoughts here: > first, LLMs are surprisingly good at decision making in context now - able to reason about complex legislative proposals and make an informed decision! this means the tech is here today to create ai-enhanced delegates, whether for DAOs or for the CA legislature. > the attack that andy describes here - prompt injection - i don't actually see it as the LLM's fault. at least in this thought experiment, the user (andy) is giving the LLM some extra input data on how to vote, and the LLM is simply doing its duty in representing the user's opinion and context. > but when you elevate this to the system prompt layer, this indeed becomes a problem - who knows if OpenAI/Google/Llama injected some weird system prompt to influence voting, perhaps lobbied by interest groups? > the second issue is if LLM persuasion techniques (all well documented, see the "johnny" paper by diyi yang et al.) actually get woven into bill text itself to conduct SEO on AI agents, should they become a larger force. > imo actually both of these are fixable - requiring a combo of reg + tech advancements: (1) For the system prompt issue, opensourcing high-stakes prompts + requiring auditability is actually a pretty good first step to resolving this trust issue. (2) For proposal-level LLM persuasion, I see something like a gptzero/pangram for LLM persuasion techniques potentially being useful here - explicitly stating how bill text has been altered to fit LLMs. for all of these questions, issues, and topics - i can see DAOs as almost petri dishes for these ideas - NEAR's agentic governance for example could provide us with crucial datapoints on how we want to go about these problems before they manifest in the wider world.
Andy Hall@ahall_research

JPMorgan announced they're replacing human proxy advisors with an AI system called Proxy IQ to vote $7 trillion in client assets, as @matt_levine explained. But does AI voting work? This is a big deal—ISS and Glass Lewis have dominated proxy voting for decades, and AI voting is an idea that will apply way beyond proxy voting. So I spent some time building my own AI proxy voter to see whether this could actually work in practice. The good news: it got some hard cases right, internalizing my skepticism of ideological framing and recommending votes I genuinely agreed with. The bad news: I broke it in about ten minutes with a hidden unicode message that flipped its recommendation completely. As AI voting scales, proposal writing is going to become prompt engineering—SEO for democracy. What should we do? (1) JP Morgan and other institutions will need to design and prompt their agents very carefully and embed them in processes that catch mistakes and hacks. (2) Frontier labs should build tools to help us teach our voting agents our preferences, and make them resistant to prompt injections or surface-level reframings. (3) Voters should see AI assistants as one tool among many and not rely on them too heavily or let them replace the important role of deliberation and personal agency. Check out my new post, linked below, on what I learned and what we should do about it.

English
4
0
21
2.5K