Eva ON
@evaoltida

492 posts

managing director @maads_hub 🧡 powering @aads_network • @slise_xyz #hiring

Canada · Joined April 2021
670 Following · 1K Followers

Pinned Tweet
Eva ON @evaoltida
thank you @radarblock for a great panel and event, thank you @OliWeb3_ 💚 it was a pleasure sharing thoughts with some great people: @Tianqi_Wang5 from @KaitoAI, @KrysiaKozak from @cookie3, Jack from @ionet, and @TomerSharoni from @addressableid. we all agreed on the importance of products that stick by prioritizing UI/UX, how cost per retention should be a crucial metric for tracking campaigns and ROI, and that data analysis in web3 = chaos + opportunity. the projects that solve attribution, retention, and real user intent will own the next cycle… maybe
[image]
10 replies · 0 reposts · 67 likes · 6.2K views
⚡︎ @_sorrengailll
Every person with ADHD has a favorite playlist. It’s called “Liked Songs”.
395 replies · 3.5K reposts · 33.3K likes · 857.8K views
aasha @aashatwt
how it feels explaining to people your job isn’t just prompting
21 replies · 2 reposts · 115 likes · 4K views
Eva ON @evaoltida
@KeruboSk it's me against the alarm, i always win
0 replies · 0 reposts · 0 likes · 6 views
Sophia ❣️ @KeruboSk
Apparently there are people who wake up before their alarm… and just get up. Just one alarm. No snooze. No struggle. Explain yourselves. How do you do that?
6.6K replies · 1.6K reposts · 17.7K likes · 717.7K views
Spencer A. Klavan @SpencerKlavan
This is a real thing that happened
[image]
84 replies · 169 reposts · 2.7K likes · 68.5K views
Alex @AlexOnchain
@paw_lean imagine eating beans on toast from here
5 replies · 0 reposts · 18 likes · 583 views
Deebs DeFi 🛰 @Deebs_DeFi
I'm embarrassed to be on the internet
> Project builds a Claude Wrapper that finds mentions of your company on Reddit
> Tosses a bunch of basic info you already know about your company into a dashboard
> Charges you $99/month for it
Half of X: 🤯
[image]
175 replies · 4 reposts · 317 likes · 24.3K views
Eva ON @evaoltida
@pmarca it's all the same you nincompoop
0 replies · 0 reposts · 0 likes · 9 views
a.wuah_papa @wavymff
"Bring your money lemme keep for you" my mum was the first forex trader in my family
258 replies · 4.8K reposts · 23.6K likes · 377.5K views
Caitlin Cook @DeadCaitBounce
My mom just texted me to ask if this is real (Chicago River btw)
[image]
109 replies · 385 reposts · 16.3K likes · 438K views
Q @quionie
AI has taken over my life.
61 replies · 29 reposts · 370 likes · 13.1K views
Eva ON reposted
MAADS @maads_hub
BC.Game runs billions of impressions through AADS at $0.04-2.40 CPM. That 60x range on the same network isn't a bug. It's the model.

At $0.04 you're buying volume - faucets, earn sites, broad crypto audiences earlier in their journey. At $2.40 you're buying intent - specific placements, verified crypto-native users, mid-session context.

For BC.Game, scale is the strategy. iGaming brand awareness works differently from a DeFi conversion campaign. When you're running 8 billion impressions, even a small fraction of high-intent users adds up fast. That's how we structure iGaming campaigns. maads.com/cases/bc-game
[image]
1 reply · 1 repost · 5 likes · 359 views
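The CPM arithmetic in the tweet above can be sketched out to show what that 60x price spread means at the quoted 8-billion-impression scale. A minimal sketch; the `campaign_cost` helper is hypothetical, only the CPM figures come from the tweet:

```python
def campaign_cost(impressions: int, cpm: float) -> float:
    """Cost of an ad buy: CPM is the price per 1,000 impressions."""
    return impressions / 1_000 * cpm

# The $0.04-2.40 range quoted above, applied to 8 billion impressions:
volume_cost = campaign_cost(8_000_000_000, 0.04)  # broad, low-intent inventory
intent_cost = campaign_cost(8_000_000_000, 2.40)  # verified, high-intent placements

print(f"${volume_cost:,.0f} vs ${intent_cost:,.0f}")  # $320,000 vs $19,200,000
```

The same impression count costs 60x more at the top of the range, which is why the tweet frames the spread as volume-buying versus intent-buying rather than a pricing bug.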
Eva ON reposted
vitalik.eth @VitalikButerin
"AI becomes the government" is dystopian: it leads to slop when AI is weak, and is doom-maximizing once AI becomes strong. But AI used well can be empowering, and push the frontier of democratic / decentralized modes of governance.

The core problem with democratic / decentralized modes of governance (including DAOs on ethereum) is limits to human attention: there are many thousands of decisions to make, involving many domains of expertise, and most people don't have the time or skill to be experts in even one, let alone all of them. The usual solution, delegation, is disempowering: it leads to a small group of delegates controlling decision-making while their supporters, after they hit the "delegate" button, have no influence at all.

So what can we do? We use personal LLMs to solve the attention problem! Here are a few ideas:

## Personal governance agents

If a governance mechanism depends on you to make a large number of decisions, a personal agent can perform all the necessary votes for you, based on preferences that it infers from your personal writing, conversation history, direct statements, etc. If the agent is (i) unsure how you would vote on an issue, and (ii) convinced the issue is important, then it should ask you directly, and give you all relevant context.

## Public conversation agents

Making good decisions often cannot come from a linear process of taking people's views that are based only on their own information, and averaging them (even quadratically). There is a need for processes that aggregate many people's information, and then give each person (or their LLM) a chance to respond *based on that*. This includes:

* Inferring and summarizing your own views and converting them into a format that can be shared publicly (and does not expose your private info)
* Summarizing commonalities between people's inputs (expressed as words), similar to the various LLM+pol.is ideas

## Suggestion markets

If a governance mechanism values "high-quality inputs" of any type (this could be proposals, or it could even be arguments), then you can have a prediction market, where anyone can submit an input, AIs can bet on a token representing that input, and if the mechanism "accepts" the input (either accepting the proposal, or accepting it as a "unit" of conversation that it then passes along to its participant), it pays out $X to the holders of the token. Note that this is basically the same as firefly.social/post/x/2017956…

## Decentralized governance with private information

One of the biggest weaknesses of highly decentralized / democratic governance is that it does not work well when important decisions need to be made with secret information. Common situations: (i) the org engaging in adversarial conflicts or negotiations (ii) internal dispute resolution (iii) compensation / funding decisions. Typically, orgs solve this by appointing individuals who have great power to take on those tasks.

But with multi-party computation (currently I've seen this done with TEEs; I would love to see at least the two-party case solved with garbled circuits vitalik.eth.limo/general/2020/0… so we can get pure-cryptographic security guarantees for it), we could actually take many people's inputs into account to deal with these situations, without compromising privacy. Basically: you submit your personal LLM into a black box, the LLM sees private info, it makes a judgement based on that, and it outputs only that judgement. You don't see the private info, and no one else sees the contents of your personal LLM.

## The importance of privacy

All of these approaches involve each participant making use of much more information about themselves, and potentially submitting much larger-sized inputs. Hence, it becomes all the more important to protect privacy. There are two kinds of privacy that matter:

* Anonymity of the participant: this can be accomplished with ZK. In general, I think all governance tools should come with ZK built in
* Privacy of the contents: this has two parts. First, the personal LLM should do what it can to avoid divulging private info about you that it does not need to divulge. Second, when you have computation that combines multiple LLMs or multiple people's info, you need multi-party techniques to compute it privately.

Both are important.
576 replies · 289 reposts · 1.9K likes · 295.2K views
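The decision rule for the personal governance agent in the thread above (vote automatically when confident; escalate to the owner only when the agent is both unsure and convinced the issue is important) can be sketched as follows. A minimal sketch, not Vitalik's design: the `Prediction` type, `decide` function, and both thresholds are hypothetical names introduced for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    vote: str          # e.g. "yes" / "no" / "abstain"
    confidence: float  # agent's certainty this matches the owner's view, in [0, 1]

def decide(pred: Prediction, importance: float,
           conf_threshold: float = 0.8,
           importance_threshold: float = 0.5) -> Optional[str]:
    """Return a vote to cast automatically, or None to escalate to the owner.

    Mirrors the rule in the thread: escalate only when the agent is
    (i) unsure how the owner would vote and (ii) the issue is important.
    """
    if pred.confidence < conf_threshold and importance >= importance_threshold:
        return None  # ask the owner directly, with all relevant context
    return pred.vote  # confident enough, or issue too minor to interrupt for

# Confident prediction: vote cast without interrupting the owner
assert decide(Prediction("yes", 0.95), importance=0.9) == "yes"
# Unsure on an important issue: escalate
assert decide(Prediction("yes", 0.40), importance=0.9) is None
# Unsure but unimportant: vote anyway rather than burn the owner's attention
assert decide(Prediction("no", 0.40), importance=0.1) == "no"
```

The third case is the point of the mechanism: low-stakes uncertainty is resolved by the agent, so the scarce resource (human attention) is spent only where both uncertainty and stakes are high.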
Eva ON reposted
Dustin @r0ck3t23
The godmother of AI just delivered the reality check Silicon Valley refuses to hear. She has the standing to say it.

Li: “Silicon Valley as a whole tends to mistake clear vision with short distance.”

Seeing the destination clearly has nothing to do with how hard it is to reach. Self-driving cars were first demonstrated in 2006. Twenty years later Waymo is barely on the road. The vision was never the problem. The distance was. Clarity of destination gets mistaken for proximity to arrival. That’s the mistake the industry keeps making. And keeps making.

Li: “I consider myself a scientist in my heart and I actually really don’t like hyping.”

In an industry running at maximum temperature, Fei-Fei Li is one of the few people at the top willing to say that publicly. Not because the technology isn’t real. Because the gap between what’s visible and what’s required is being systematically underestimated.

Large Language Models dominate the conversation. Text to text. Comparatively contained. The harder problem is spatial intelligence. AI that reasons about and acts within the physical three-dimensional world. Hardware. Physics. Data that doesn’t exist yet. Real-time adaptation to chaos. A robot that can clean a bathroom requires understanding every surface, every object, every force, every exception. That’s not a software update. That’s a civilizational research problem.

Li: “I don’t call it hype. I call it a misleading sentiment. We don’t want to replace human creators.”

The second place the industry gets it wrong is creativity. The narrative has hardened around replacement. AI takes the jobs. AI tells the stories. AI makes the art. Li considers that not just wrong but destructive. Wrong because AI doesn’t replicate creativity. Destructive because believing it can devalues the humans creating culture. Human creativity isn’t a process to be automated. It’s fundamental to what we are as a species. The goal is augmentation. Tools that make human creators faster and more capable. Not systems that generate output in the style of human work and call it creation. That distinction matters more than most people in the industry are willing to sit with.

Precision of imagination is not proximity to reality. Li has spent her career in the gap between those two things. The map isn’t the territory. The journey is long. The hurdles are deep. And the scientist who built the foundation this era stands on is telling you the timeline everyone is selling is wrong. We’ve been almost there with self-driving for twenty years. The pattern doesn’t change just because the destination looks different.

101 replies · 792 reposts · 3.2K likes · 257.9K views