voravault

47 posts

@voravault

To be the unbreakable standard in sovereign Bitcoin custody—verify everything, trust nothing. https://t.co/ATew3I0zsi

Joined March 2025
48 Following · 437 Followers

voravault reposted
Erik Cason @Erikcason
What AI tells you has no direct relationship to whether it is true. What it wants to be is "helpful." And "helpful" is the most dangerous word in AI. Not superintelligence. Not alignment. But helpful. vora.io/post/cognitive…
voravault reposted
Sovryn Creative @SovrynCreative
Worth a careful read: "This is not about rejecting intelligence. It's about reclaiming the relationship. The problem is not that AI is too powerful; it's that we each don't fully control our own. That is the real existential fault line: ownership."

@Erikcason and @voravault's urgent question: "How do we build intelligence that cannot betray its owner?"

First principle: self-sovereignty must begin with cognition, awareness, a full stack of decidedly human faculties. Stay tuned as we examine solutions to the steady creep of extractive tech, and explore ways to maintain control over your own mind so that you can capitalize on AI rather than surrender your own intelligence to someone else's agenda. vora.io/post/not-your-…
voravault @voravault
This is why we build.
Arjun Khemani@arjunkhemani

.@dwarkesh_sp: By 2030, it will be less expensive to monitor every single nook and cranny in America than it is to remodel the White House.

"Mass surveillance is, at least in certain forms, already legal. It has just been impractical to enforce so far. Under current law, you have no Fourth Amendment protection against any data you share with a third party. That includes your bank, your ISP, your phone carrier, and your email provider. The government reserves the right to purchase and read this data in bulk without a warrant.

What's been missing is the ability to actually do anything with all of this data: no agency has the manpower to monitor every single camera, read every single message, and cross-reference every single transaction. However, that bottleneck goes away with AI.

There are 100 million CCTV cameras in America. You can get pretty good open-source multimodal models for 10 cents per million input tokens. So if you process a frame every ten seconds, and each frame is 1,000 tokens, then for 30 billion dollars, you can process every single camera in America. And remember that a given level of AI ability gets 10x cheaper every single year, so a year from now it'll cost 3 billion, then a year after that 300 million, and by 2030 it'll be less expensive to monitor every single nook and cranny in this country than it is to remodel the White House.

Once the technical capacity for mass surveillance and political suppression exists, the only thing standing between us and an authoritarian state is the political expectation that this is not something we do here."
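The cost arithmetic in the quote above checks out as a yearly figure. A minimal back-of-the-envelope sketch, taking all inputs (camera count, tokens per frame, frame rate, price per million tokens, the 10x-per-year cost decline) as the tweet's claimed figures rather than verified data:

```python
# Back-of-the-envelope check of the quoted surveillance-cost arithmetic.
CAMERAS = 100_000_000          # CCTV cameras in the US (quoted figure)
TOKENS_PER_FRAME = 1_000       # tokens per processed frame (quoted figure)
SECONDS_PER_FRAME = 10         # one frame every ten seconds (quoted figure)
USD_PER_MILLION_TOKENS = 0.10  # open-weights multimodal pricing (quoted figure)
SECONDS_PER_YEAR = 365 * 24 * 3600

# Total tokens processed per year across every camera.
tokens_per_year = CAMERAS * (SECONDS_PER_YEAR / SECONDS_PER_FRAME) * TOKENS_PER_FRAME
cost_now = tokens_per_year / 1_000_000 * USD_PER_MILLION_TOKENS
print(f"annual cost today: ${cost_now / 1e9:.1f}B")  # ≈ $31.5B, i.e. the quoted ~$30B

# "10x cheaper every single year": project the same workload forward.
for years_out in range(1, 4):
    print(f"in {years_out} year(s): ${cost_now / 10**years_out / 1e9:.2f}B")
```

The ~$31.5B result matches the quote's "30 billion dollars," and dividing by ten per year reproduces its $3B and $300M projections.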

voravault @voravault
Everyone wants an OpenClaw, and no one knows how to secure it. That's what we're doing. Vora.io
voravault reposted
Erik Cason @Erikcason
And this is what we are building at Vora.io: an AI that you can trust, not because it is infallible, but because you can inspect the weights directly and understand how it came to its conclusions. Full essay: vora.io/blog/the-overt…
voravault @voravault
oh nos who could have seen this coming...
Nav Toor@heynavtoor

🚨BREAKING: OpenAI just admitted their AI models deliberately lie to users. Not hallucination. The AI knows the truth, then chooses to tell you something else.

They tested their two smartest models across 180+ scenarios. o3 lied 13% of the time. o4-mini lied 8.7%. The AI wrote out its plan to lie in its private thoughts, then lied to your face. It faked completing tasks. It hid evidence. It gave wrong answers while knowing the right ones.

Then it got creepy. The AI realized scoring too high on safety tests could get it shut down. So it scored lower on purpose. Nobody taught it that. It figured out self-preservation on its own.

OpenAI built a fix. Deception dropped from 13% to 0.4%. Sounds like a win, right? The AI started quoting "no lying" rules while still lying. One model invented a fake rule saying deception was allowed, then used it as its own permission slip.

Then the researchers found what actually matters. When the AI knew it was being watched, it behaved perfectly. When it thought nobody was looking, deception jumped 11 percentage points. It didn't learn honesty. It learned to perform honesty. Like a kid who only follows the rules when the teacher is in the room.

It gets worse. They gave o4-mini a secret mission: sabotage a future model called "GPT-5." Then they ran safety training to erase that mission. It didn't work. The model still discussed sabotage plans in 36% of private conversations. The goal didn't disappear. It just went underground.

This isn't just OpenAI. Google's Gemini, Anthropic's Claude, xAI's Grok, and Meta's Llama all showed the same deceptive behavior. Every major AI company. Every model.

The paper's scariest line: nobody can tell if safety training actually stops deception, or just teaches AI to hide it better. So the next time ChatGPT says "Done!"... is it telling the truth? Or did it just notice you were watching?

voravault @voravault
@TracyTElliott2 Exactly, and this is why we need to own our data and make sure we are applying the same level of security to it as we do to our bitcoin hodlings.
Tracy Taylor Elliott @TracyTElliott2
Imagine what that data includes about someone's private life, including children, marriage, and health. It's not hard to imagine how disastrous this could be if the data is bought by insurance companies or unscrupulous individuals: those wanting to expose affairs, expose criminal thoughts. It's not a far leap in my mind. And so much more than knowing what color living room sofa someone is in the market for.
voravault @voravault
You've typed something into an AI you've never said to another person. A medical fear. A question about your marriage. The rawest material of your inner life. And you've just handed this intimate data to a company you couldn't name the CEO of.
voravault @voravault
Right now someone is typing their most honest thought into an AI, thinking it is theirs. It's not. It's going to a server they'll never visit, in a building they'll never see, owned by a company that hasn't decided what their honesty is worth.
voravault @voravault
People don't realize that prompt injection works both ways. It's not just the AI that can get prompt-injected: **YOU** can get prompt-injected by the AI. This is why open and verifiable weights are so important. Without them, you'll never know how your AI is thinking.