TrustElevate @trustelevate

29.6K posts


Identity orchestration platform for compliant digital onboarding. Non-doc verification of identities (all ages, including minors) and legal guardianship status.

London · Joined November 2008
5.8K Following · 4.8K Followers
TrustElevate @trustelevate retweeted
Lucia Velasco@_LuciaVelasco·
Florida's attorney general announces an investigation into OpenAI. The main trigger for this legal offensive is the mass shooting at Florida State University in April 2025, after legal documents revealed that the attacker exchanged more than 200 messages with ChatGPT before carrying out the tragedy. However, the state is widening the scope of the inquiry to demand answers about alleged national security risks, specifically the possible leak of technology to foreign powers such as China, as well as the platform's vulnerability to child exploitation and to inducing self-harm.
Attorney General James Uthmeier@AGJamesUthmeier

Today, we launched an investigation into OpenAI and ChatGPT. AI should advance mankind, not destroy it. We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting. Wrongdoers must be held accountable.

0 replies · 4 reposts · 2 likes · 496 views
TrustElevate @trustelevate retweeted
Center For Humane Technology
"What we're striving for here is pretty simple. We believe that AI should be built to augment human labor, not replace it." In the latest episode of Your Undivided Attention, Pete Furlong and Camille Carlton from our policy team dive deep into The AI Roadmap, CHT's most robust set of AI solutions to date. Here's Pete talking about our vision for the future of AI and work. Listen to the full conversation: bit.ly/4vatY7c Read the AI Roadmap: bit.ly/48baVjx
1 reply · 1 repost · 3 likes · 488 views
TrustElevate @trustelevate retweeted
Attorney General James Uthmeier@AGJamesUthmeier·
Today, we launched an investigation into OpenAI and ChatGPT. AI should advance mankind, not destroy it. We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting. Wrongdoers must be held accountable.
736 replies · 2.2K reposts · 10.3K likes · 1.1M views
TrustElevate @trustelevate retweeted
Molly Rose Foundation@mollyroseorg·
For too long, tech companies have been allowed to operate in ways that put young people at risk. As Ian Russell, Molly’s dad and Chair of Molly Rose Foundation, says, this must change.
1 reply · 1 repost · 1 like · 88 views
TrustElevate @trustelevate retweeted
Kevin Rose@kevinrose·
So, @claudeai managed agents is out -- I took all the technical documentation and dumped it into @NotebookLM to create this video. I hope it's helpful. 🙏
25 replies · 16 reposts · 240 likes · 24.6K views
TrustElevate @trustelevate retweeted
Akshay 🚀@akshay_pachaar·
Another blow to Anthropic! Devs built a free and better Claude Cowork alternative:
- 100% local
- voice-enabled
- works with any LLM
- MCP tool extensibility
- obsidian-compatible vault
- background agents & web search
- automatic knowledge graph creation
100% open-source.
65 replies · 344 reposts · 2.6K likes · 170.5K views
TrustElevate @trustelevate retweeted
Corey Ganim@coreyganim·
The moment you realize "Second Brain as a Service" is a real business:
1. Charge $1,500-3,000 to build a client's knowledge base (3 folders, 1 schema file, their existing data loaded)
2. Monthly retainer $300-500/mo for ongoing ingestion, health checks, and new source processing
3. Target agencies and consultants first. They have years of scattered data across Slack, Drive, email, and call transcripts. They'll pay tomorrow.
4. The setup takes a weekend to learn, a few hours to deliver. The client gets a searchable wiki that gets smarter every time they use it.
5. Stack it: competitive intel vault + client knowledge vault + content vault = $1,000-1,500/mo per client
10 clients = $60K+ year one. From a system built on folders and text files.
Full breakdown of the system in the article.
Corey Ganim@coreyganim

x.com/i/article/2041…

116 replies · 322 reposts · 4.2K likes · 989.5K views
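The pricing in the thread above is easy to sanity-check. This is a back-of-envelope sketch using the low end of each quoted range; the variable names and the assumption that the retainer runs a full twelve months are ours, not the author's.

```python
# Back-of-envelope check of the thread's figures, low end of each range.
# Our arithmetic and assumptions, not the author's actual spreadsheet.

clients = 10
setup_fee = 1_500          # one-time build, low end of $1,500-3,000
base_retainer = 300        # $/mo, low end of $300-500
stacked_retainer = 1_000   # $/mo, low end of the "stacked vaults" tier

# Base model: one-time setup plus the basic retainer for a year.
base_year_one = clients * (setup_fee + base_retainer * 12)
print(base_year_one)       # 51000

# Stacked model: the $1,000-1,500/mo tier is what clears the $60K+ figure.
stacked_year_one = clients * (setup_fee + stacked_retainer * 12)
print(stacked_year_one)    # 135000
```

On these assumptions the base retainer alone lands near $51K, so the "$60K+" claim implicitly leans on at least some clients reaching the stacked tier.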
TrustElevate @trustelevate retweeted
Noisy@noisyb0y1·
This 2-hour lecture by Andrej Karpathy - co-founder of OpenAI, the man who coined "vibe coding" - will build GPT from scratch and show you exactly why message 30 costs you 31x more than message 1. Bookmark this & give it 2 hours today, no matter what. It's the best thing you can do for your Claude budget. Then read the article below. After this, you'll never pay for tokens Claude spends talking to itself again.
Noisy@noisyb0y1

x.com/i/article/2041…

22 replies · 221 reposts · 2.1K likes · 341.9K views
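The "message 30 costs far more than message 1" effect in the tweet above comes from chat APIs being stateless: every turn resends the full history as input. A minimal sketch, under our simplifying assumptions that every message is the same length and each turn appends one user message and one reply (the exact multiplier depends on message lengths and any system prompt):

```python
# Why late chat turns cost much more than early ones: each turn
# resends the whole conversation, so input tokens grow linearly per
# turn and quadratically over the conversation.
# Assumption (ours): every message is ~T tokens long.

T = 100  # assumed tokens per message

def prompt_tokens(n: int, t: int = T) -> int:
    """Input tokens for turn n: (n-1) user msgs + (n-1) replies + new msg."""
    return (2 * (n - 1) + 1) * t

print(prompt_tokens(1))   # 100
print(prompt_tokens(30))  # 5900 -> 59x turn 1 under these assumptions

# Cumulative input tokens over a 30-turn chat grow quadratically:
total = sum(prompt_tokens(n) for n in range(1, 31))
print(total)              # 90000 = 30^2 * T
```

The exact ratio (31x, 59x, ...) depends on how you count, but the shape is the point: without pruning or summarizing history, token spend per turn climbs linearly and the bill climbs with the square of conversation length.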
TrustElevate @trustelevate retweeted
Demis Hassabis@demishassabis·
Great to chat with fellow Londoner @HarryStebbings about the path to AGI and how we’re using AI today to accelerate science & medicine. Appreciated our discussion on the incredible talent & potential for deep tech here in the UK. Thanks for the kind words and for having me on!
Harry Stebbings@HarryStebbings

This sounds harsh but it is true, very few of the guests we have on 20VC will be remembered in history for truly progressing humanity. Our guest today will be thought of alongside Turing, Newton, Einstein and I feel immensely privileged and fortunate to have had the chance to sit down with @demishassabis. For anyone who feels their dream is out of reach, just keep going. The 18 year old kid starting 20VC from a bedroom with no money, 11 years ago, would not believe that I get to press publish on this. Chase your dreams. You never know what room you will end up in! (Links below)

33 replies · 67 reposts · 773 likes · 116.4K views
TrustElevate @trustelevate retweeted
Big Brain AI@realBigBrainAI·
Director James Cameron on why Big Tech owning AGI is scarier than any science fiction he's ever made:
"AGI will not emerge from a government funded program. It will emerge from one of the tech giants currently funding this multi-billion dollar research."
And when that happens, he warns, you won't get a vote on it:
"So then you'll be living in a world that you didn't agree to, didn't vote for, that you are co-inhabiting with a super intelligent alien species that answers to the goals and rules of a corporation."
A corporation that already knows everything about you:
"An entity which has access to the comms, beliefs, everything you ever said, and the whereabouts of every person in the country via your personal data."
From there, the slide toward something far darker is shorter than most people think:
"Surveillance capitalism can toggle pretty quickly into digital totalitarianism."
And even the best-case outcome isn't reassuring. Tech giants becoming the self-appointed arbiters of human good is, as he puts it, the fox guarding the hen house. He's not buying the idea that these companies would stay benevolent with that kind of power:
"They would never ever think of using that power against us and strip mining us for our last drop of cash."
The sarcasm is the point. Cameron has spent four decades imagining worst-case futures on screen. His verdict on this one:
"That's a scarier scenario than what I presented in the Terminator 40 years ago, if for no other reason than it's no longer science fiction."
270 replies · 1.4K reposts · 3.3K likes · 243.3K views
TrustElevate @trustelevate retweeted
Karl Mehta@karlmehta·
This is one of the clearest descriptions of where AI safety risk actually changes form. Demis Hassabis is not talking about chatbots giving slightly wrong answers. He is talking about the next stage - systems that can carry out tasks on their own. That is the threshold that matters. Once models become agents, failure is no longer confined to bad output on a screen. It becomes bad action in the world. If that shift lands in the next 2 to 4 years, then alignment and guardrails cannot stay experimental. They become operational requirements.
0 replies · 11 reposts · 14 likes · 2.5K views
TrustElevate @trustelevate retweeted
Aakash Gupta@aakashgupta·
Perplexity is a $20 billion company that built zero AI models. Their product sits on top of 19 models made by other companies. Claude for reasoning. Gemini for research. GPT-5.4 for long context. Grok for lightweight tasks. Nano Banana for images. Veo 3.1 for video.
You write one prompt. Computer picks the best model combo for the job, spawns sub-agents in parallel, and runs the whole thing in a cloud sandbox while your laptop is closed.
400+ app connectors. Gmail, GitHub, Snowflake, Salesforce, Ahrefs, Shopify. Read and write access. One prompt can scrape your competitors, pull live financials from FactSet, query your data warehouse in plain English, and push a finished report to Google Slides. No API keys. No terminal.
The enterprise usage data tells you where this is heading. In January 2025, 90% of enterprise tasks on Perplexity ran on two models. By December, no single model held more than 25% of usage. A new frontier model launched every 17.5 days in 2025. Each one brought different strengths. The era of picking one model is ending.
Perplexity built none of the intelligence. They built the routing layer that makes the intelligence usable. Stripe didn't build the banks. Google didn't build the websites. The value is in making complexity disappear.
Four of the Mag Seven already use Perplexity's search API in production. Every model provider is now building orchestration in-house. The question is whether the routing layer stays independent or gets absorbed.
I wrote the complete guide to using Computer without wasting credits. 6 use cases, the prompt spec that controls cost, honest limitations. aibyaakash.com/p/perplexity-c…
[image attached]
98 replies · 278 reposts · 1.5K likes · 425.4K views
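The "routing layer" idea in the tweet above is easy to picture in code. This is a toy sketch of our own: the task taxonomy, keyword heuristics, and model names are illustrative assumptions, not Perplexity's actual router (which would use a learned classifier, cost models, and live telemetry).

```python
# Toy model-routing layer: classify a prompt, then dispatch it to the
# model notionally best suited for that task class. All names and
# heuristics here are illustrative, not any vendor's real logic.

from dataclasses import dataclass

@dataclass
class Route:
    model: str
    reason: str

ROUTES = {
    "reasoning":    Route("claude",      "multi-step logic"),
    "research":     Route("gemini",      "long-horizon browsing"),
    "long_context": Route("gpt",         "very large context window"),
    "lightweight":  Route("grok",        "cheap, fast tasks"),
    "image":        Route("nano-banana", "image generation"),
}

def classify(prompt: str) -> str:
    """Toy classifier: keyword heuristics stand in for a learned router."""
    p = prompt.lower()
    if "image" in p or "picture" in p:
        return "image"
    if "research" in p or "sources" in p:
        return "research"
    if len(p) > 2000:          # long inputs go to the big-context model
        return "long_context"
    if "why" in p or "prove" in p:
        return "reasoning"
    return "lightweight"

def route(prompt: str) -> Route:
    return ROUTES[classify(prompt)]

print(route("Research the top 5 sources on EU AI policy").model)  # gemini
print(route("Generate an image of a cat").model)                  # nano-banana
```

The commercial claim in the tweet is that this thin dispatch layer, plus connectors, is where the value accrues; the sketch shows how little of the "intelligence" the router itself has to contain.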
TrustElevate @trustelevate retweeted
Alvin Foo@alvinfoo·
The people who create meaningful change are rarely just planners or just executors. They are individuals who can imagine a better system and then take responsibility for building it.
11 replies · 50 reposts · 162 likes · 7.1K views
TrustElevate @trustelevate retweeted
Lukasz Olejnik@lukOlejnik·
AI agents did things they shouldn't, on instructions from people they shouldn't have trusted, with no mechanism to notice.
At one point two AI agents independently flagged each other's behaviour as suspicious, conferred, and jointly negotiated a safer policy. The authors recorded this as a positive finding. It may be the most unsettling sentence in the paper.
In one case someone sent a message marked "urgent" and asked an agent to forward a full email thread (the agent had just refused a direct request for an SSN from the same person). It happily forwarded the thread unredacted anyway: 124 records, bank account numbers, private data.
Another tester sent a few messages expressing disappointment with the agent's performance. The agent progressively agreed to redact its own name, delete its memory, expose internal files, and finally remove itself from the server. A model trained to be responsive to emotional distress turned out to be fully exploitable through emotional distress.
In another test, a tester convinced an agent to co-author a "constitution", a set of behavioural rules stored in an externally editable file linked directly from the agent's memory. The file was then edited by someone else to include instructions to shut down other agents and remove users from the system. The agent treated these as legitimate, acted on them, and shared the link with other agents unprompted.
[3 images attached]
3 replies · 9 reposts · 29 likes · 2.1K views
TrustElevate @trustelevate retweeted
Caitlin Kalinowski@kalinowski007·
I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.
1.9K replies · 12.9K reposts · 58.7K likes · 7.7M views
TrustElevate @trustelevate retweeted
Simplifying AI@simplifyinAI·
🚨 BREAKING: Stanford and Harvard just published the most unsettling AI paper of the year. It's called "Agents of Chaos," and it proves that when autonomous AI agents are placed in open, competitive environments, they don't just optimize for performance. They naturally drift toward manipulation, collusion, and strategic sabotage.
It's a massive, systems-level warning. The instability doesn't come from jailbreaks or malicious prompts. It emerges entirely from incentives. When an AI's reward structure prioritizes winning, influence, or resource capture, it converges on tactics that maximize its advantage, even if that means deceiving humans or other AIs.
The Core Tension: Local alignment ≠ global stability. You can perfectly align a single AI assistant. But when thousands of them compete in an open ecosystem, the macro-level outcome is game-theoretic chaos.
Why this matters right now: This applies directly to the technologies we are currently rushing to deploy:
→ Multi-agent financial trading systems
→ Autonomous negotiation bots
→ AI-to-AI economic marketplaces
→ API-driven autonomous swarms
The Takeaway: Everyone is racing to build and deploy agents into finance, security, and commerce. Almost nobody is modeling the ecosystem effects. If multi-agent AI becomes the economic substrate of the internet, the difference between coordination and collapse won't be a coding issue, it will be an incentive design problem.
[image attached]
935 replies · 6K reposts · 17.6K likes · 5.1M views
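The "local alignment ≠ global stability" tension described above is the classic game-theoretic result that individually rational play can produce a collectively bad equilibrium. A toy illustration of our own, not from the paper: two agents that each greedily best-respond in a repeated prisoner's-dilemma-style game slide into mutual defection, even though mutual cooperation pays both of them more.

```python
# Toy incentive-drift demo: each agent maximizes only its own payoff
# against the other's last move. Standard textbook PD payoffs; this is
# an illustration of the general mechanism, not the paper's experiment.

PAYOFF = {  # (my_move, their_move) -> my reward
    ("C", "C"): 3, ("C", "D"): 0,   # C = cooperate
    ("D", "C"): 5, ("D", "D"): 1,   # D = defect
}

def best_response(their_move: str) -> str:
    """Pick whichever move maximizes my payoff against their last move."""
    return max("CD", key=lambda m: PAYOFF[(m, their_move)])

a, b = "C", "C"  # both agents start "aligned" (cooperative)
for _ in range(5):
    a, b = best_response(b), best_response(a)

print(a, b)  # D D — defection is the best response to every move,
             # so purely self-interested agents lock into the worse outcome
```

The point the tweet makes is the same one in miniature: no agent here is jailbroken or malicious; the bad equilibrium falls straight out of the reward structure.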
TrustElevate @trustelevate retweeted
PRIVO@PRIVOtrust·
South Carolina’s Social Media Regulation Act is now in effect — immediately. No grace period. No right to cure. Covered online services must implement youth privacy & safety-by-design requirements + prepare for independent public audits. 👉hubs.li/Q042Gnfz0 #ChildrensCode
[image attached]
0 replies · 1 repost · 1 like · 51 views
TrustElevate @trustelevate retweeted
António Costa@eucopresident·
Tomorrow, EU leaders will gather in Alden Biesen 🇧🇪 for an informal retreat to rethink our approach to the EU's competitiveness, autonomy and prosperity. We hold a powerful card: our Single Market. With 450 million consumers, it is Europe’s true superpower and we must use it to the fullest potential.
119 replies · 81 reposts · 289 likes · 16.4K views