Miguel

82 posts

@mburgercalderon

🇨🇭🇨🇴

New York, NY · Joined March 2011
560 Following · 717 Followers
Miguel reposted
Omer Goldberg
Omer Goldberg@omeragoldberg·
Chaos AI analyzed the vaults impacted by the USR exploit. A ton of interesting weekend transactions. Here, we build a knowledge graph, filtering for the Gauntlet USD Alpha Vault and the Gauntlet Resolv USDC vault:

Pre-exploit:
• USD Alpha had allocated ~438,440 USDC to the Gauntlet Resolv USDC vault
• That vault was deposited into the impacted Morpho Resolv markets via the Gauntlet Resolv USDC vault. app.morpho.org/ethereum/vault…
• USD Alpha held receipt tokens representing this exposure

Post-exploit, March 23:
• 00:30 UTC, Gauntlet USD Alpha sends ~438,440 USDC worth of resolvUSDC receipt tokens (~405,439 shares) to a Safe.
• 00:35 UTC, Gauntlet USD Alpha receives USDC ($438,401) from Coinbase.

Receipt tokens out, USDC in, 5 min apart, with roughly the same notional. etherscan.io/tx/0x7db422e95… etherscan.io/tx/0x35ce2a750…

A ton more to dig into. Analyzing and contrasting vault curator operations and allocation patterns in real time is one of the use cases we’re building Chaos AI for.
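The pairing logic the thread describes (receipt tokens out, USDC in, minutes apart, roughly equal notional) can be sketched as a simple time-and-notional match. This is a hypothetical illustration with toy values taken from the thread, not Chaos AI's actual pipeline or real Etherscan output:

```python
from datetime import datetime, timedelta

def is_matched_pair(out_usd, in_usd, out_time, in_time,
                    max_gap=timedelta(minutes=10), tolerance=0.01):
    """True if an outflow and an inflow are close in time and in notional.

    A pair like this suggests the two transfers are legs of one swap:
    receipt tokens sent out, stablecoins received back shortly after.
    """
    gap = abs(in_time - out_time)
    notional_diff = abs(out_usd - in_usd) / max(out_usd, in_usd)
    return gap <= max_gap and notional_diff <= tolerance

# Toy data mirroring the thread: receipt tokens out at 00:30 UTC,
# USDC in at 00:35 UTC, notionals ~$438K apart by only $39.
receipt_out = (438_440, datetime(2025, 3, 23, 0, 30))
usdc_in = (438_401, datetime(2025, 3, 23, 0, 35))

print(is_matched_pair(receipt_out[0], usdc_in[0],
                      receipt_out[1], usdc_in[1]))  # → True
```

A real analysis would scan all transfer pairs per address and tune the time window and notional tolerance; the thresholds here are arbitrary placeholders.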
3 replies · 15 reposts · 100 likes · 34.7K views
Miguel reposted
Or Hiltch
Or Hiltch@_orcaman·
Today we are launching @openwork_ai, an open-source (MIT-licensed) computer-use agent that’s fast, cheap, and more secure. @openwork_ai is the result of a short two-day hackathon, bringing together some of our favorite open-source AI modules into one powerful agent that allows you to:
1. Bring your own model/API key (any provider and model supported by @opencode is supported by Openwork)
2. Run ~4x faster than Claude for Chrome/Cowork, and much more token-efficiently, powered by dev-browser by @sawyerhood (legend)
3. Stay more secure: unlike Claude for Chrome/Cowork, it does not use the main browser instance where you are already logged into all your services. You log in only to the services you need. This significantly reduces the risk of data loss from prompt injections, to which computer-use agents are highly exposed.
4. Use it free, 100% open-source!
You can download the DMG (macOS only for now) or fork the GitHub repo via the link in bio (@openwork_ai). Let us know what you think (or better, send a pull request)!
Claude@claudeai

Introducing Cowork: Claude Code for the rest of your work. Cowork lets you complete non-technical tasks much like how developers use Claude Code.

214 replies · 572 reposts · 5K likes · 1M views
Miguel reposted
Omer Goldberg
Omer Goldberg@omeragoldberg·
Undercollateralized Onchain Lending
In July, @KintoXYZ was hacked for ~$2M, and the token collapsed. To recover, the team borrowed $750K through Wildcat Finance at 50% APR, planning to relaunch and repay users. This week, Kinto shut down, and the loan defaulted.
Omer Goldberg tweet media
8 replies · 10 reposts · 58 likes · 7.9K views
Miguel
Miguel@mburgercalderon·
@Atla_AI Big leap for AI safety! Selene Mini sets a new standard for AI evaluation—beating larger models while staying fast & accurate. Excited to see its impact!🚀 #AI #AIEvaluation
0 replies · 0 reposts · 3 likes · 181 views
Alex Konrad
Alex Konrad@alexrkonrad·
@AlmostMedia If you want to be added to the Midas distro list, just ask lol
3 replies · 0 reposts · 3 likes · 2.3K views
Alex Konrad
Alex Konrad@alexrkonrad·
sent an email to about 1,500 VCs last night. even expecting some, i'm impressed to see just how many people either: 1) no longer work at their firm or 2) took vacation until Jan 13
50 replies · 6 reposts · 453 likes · 78.5K views
Miguel reposted
Omer Goldberg
Omer Goldberg@omeragoldberg·
BTCFi is becoming a powerful new vertical—one that brings utility and lasting value. @use_corn gives BTCers a genuine footing in DeFi, and Edge Oracles will help unlock its full potential. Excited to partner with @spadaboom and the @use_corn team to realize this vision🌽
Chaos Labs@chaoslabs

1/ @use_corn, a protocol redefining Bitcoin’s role in DeFi, has integrated Edge Price Oracles! With over $60 billion in secured volume, Edge delivers intelligent, context-aware price data to the Corn Network, significantly boosting Corn’s resilience and operational efficiency in dynamic market conditions.

1 reply · 6 reposts · 31 likes · 2.7K views
Miguel reposted
Omer Goldberg
Omer Goldberg@omeragoldberg·
1/ On Oct 18th, @FiveThirtyEight’s prediction of Trump’s Election odds tipped above 50% for the first time since August, after dipping as low as 36% in Sept and remaining below 50% for ten consecutive weeks. On @Polymarket, Trump never fell behind for more than a single week.
Omer Goldberg tweet media
2 replies · 23 reposts · 88 likes · 23.3K views
Miguel reposted
Omer Goldberg
Omer Goldberg@omeragoldberg·
tl;dr “Trusting Trust in the Age of AI”, co-authored by @omeragoldberg, Founder of @chaoslabs, and @reah_ai, Head of Model Evaluation at @OpenAI

Introduction
>> "There will be more AI Agents than there are people in the world" - Mark Zuckerberg
>> AI agents are autonomous intelligent systems which perform specific tasks without human intervention.
>> LLMs are powerful tools but susceptible to manipulation through biased training data, unreliable document retrieval, and prompt engineering → misleading outputs
>> @chaoslabs identifies a critical risk of AI agents unknowingly trained on sybilled, manipulated content. The outcome? Normalization of low-integrity information and erosion of user trust.
>> So how can AI systems fail, and what’s the solution?

The Compiler Paradox
Trust in foundational systems can be easily betrayed if the underlying processes are compromised.
>> "No matter how thoroughly the source code is inspected, trust is an illusion if the compilation process itself is compromised."

LLM Poisoning
LLMs can be “poisoned” by biased training data, unreliable document retrieval, and prompt injection.
>> Discusses 3 types of data bias: (1) Implicit bias, (2) Outdated data, and (3) Hallucinations
>> "If biases or misinformation are introduced during training, they become embedded in the model—akin to Thompson’s Trojan horse, an invisible vulnerability that quietly taints every output."
>> Retrieval-Augmented Generation (RAG) is a natural language processing (NLP) technique that combines generative AI and traditional information retrieval systems to create more accurate and relevant text.
>> If the external sources are compromised—through manipulated websites, misinformation, or biased content—the LLM will blindly retrieve and amplify false data.
>> Discusses 2 types of retrieval bias: (1) Source bias, and (2) Context retrieval failures
>> Underscores model reliance on third-party search engines for document retrieval; search engines prioritize popularity and commercial interests over accuracy.

Resolving Conflicting Data?
>> LLMs often rely on probability and patterns rather than factual verification.
>> "LLMs do not discern truth from falsehood; they predict the most statistically probable answer without any method to evaluate the correctness of conflicting data."
>> Lack of systemic source verification → inconsistent or imprecise answers → lower confidence in the model’s output

Attack Vectors for LLMs
Biased data, unreliable document retrieval, and prompt engineering.
>> "If an LLM is trained on datasets filled with conspiracy theories—'The moon landing was faked'—it can confidently echo these claims without hesitation."
>> Discusses 5 types of prompt engineering attacks: (1) Vague or ambiguous prompts, (2) Exploiting AI limitations, (3) Misinformation traps, (4) Bias-inducing prompts, and (5) Sensitive data exposure
>> "Prompt engineering attacks exploit the AI's interaction layer, allowing adversaries to manipulate outputs without tampering with the model itself."

Trust in LLMs must extend beyond the surface of model outputs and encompass training data, retrieval sources, and every interaction.
>> "The sophistication of these systems creates an illusion of infallibility, but trust must be earned through active verification."
>> Discusses 4 risks of unreliable LLMs: (1) Historical revisionism, (2) Social media manipulation, (3) Echo chambers, and (4) Economic and social harm

What’s the solution?
>> The @chaoslabs North Star is leveraging high-quality data to inform high-quality risk management — of onchain finance, capital flow, information, and ideas.
>> In five years, verified truth and user confidence will be an application’s edge.
>> Chaos Labs is actively working on solutions to improve LLM accuracy and reliability.
>> We propose a novel solution: a collaborative network of diverse frontier models (e.g. ChatGPT, Claude, and Llama) that work together to counter single-model bias, referred to as AI Councils.

If these challenges interest and excite you, we want to hear from you. DMs are open @chaoslabs @omeragoldberg.
Omer Goldberg@omeragoldberg

x.com/i/article/1846…

1 reply · 16 reposts · 52 likes · 7.4K views
Miguel reposted
Hooman
Hooman@hoomanradfar·
So many VCs are becoming, or building media properties to drive top of funnel and brand. This is creating a strong opening for 'quieter' operating VCs/angels that are relentlessly focused on creating value for their portfolio versus spending time on top of funnel. The thesis for the latter strategy is that you will not only improve fund outcomes, but hopefully - if you pick well - you drive referral. It's the difference between investing in advertising versus product-led growth.
38 replies · 13 reposts · 237 likes · 41.3K views
Miguel reposted
Omer Goldberg
Omer Goldberg@omeragoldberg·
1/ Are You Confident in Your Oracle’s Data Security and Quality? Blockchain adoption requires security to be as robust as Web2, ideally better. Yet, tooling to assess Oracle security, performance, and quality has not been available. The @chaoslabs Risk Portal solves this.
Chaos Labs@chaoslabs

1/ Chaos Labs is excited to announce the launch of our Oracle Risk Portal! This portal is a public resource designed to offer an accessible overview of Oracle's performance, enabling stakeholders to assess and compare reporting deviations efficiently. chaoslabs.xyz/posts/oracle-r…

1 reply · 14 reposts · 44 likes · 5.8K views
Miguel reposted
Omer Goldberg
Omer Goldberg@omeragoldberg·
Airdrops reward community engagement and drive growth, yet remain an evolving art in a nascent design space. @chaoslabs and @nansen_ai are committed to recognizing those who've established @LayerZero_Labs as an interop leader. Transparency is key; stay tuned for updates 🫡
LayerZero@LayerZero_Core

Sybil Report LayerZero has been working with industry-leading partners @chaoslabs and @nansen_ai to conduct our sybil detection report. This analysis will consider every user’s total transactions weighted across all LayerZero applications with the goal of aligning TGE with developers and durable users. It will be co-published after the sybil self-report deadline. The sybil detection process is outlined here: medium.com/layerzero-offi… Other We have received inbound requests from users about mistakenly self-reporting as sybil. The site will have a “Cancel Sybil Report” option starting tomorrow May 7th at 12PM PST. It will be live through the end of the self-reporting period on May 18th. Stay tuned for more information in the coming days.

3 replies · 9 reposts · 34 likes · 3.3K views
Miguel reposted
Omer Goldberg
Omer Goldberg@omeragoldberg·
1/ Next up in our deep dive into Oracle Risk and Security Standards, we explore Price Composition Methodologies in Chapter 3 🔍
Omer Goldberg tweet media
3 replies · 24 reposts · 79 likes · 20.8K views
Miguel reposted
Omer Goldberg
Omer Goldberg@omeragoldberg·
1/ Today, we're proud to share the @chaoslabs Risk Oracles launch 🚀 Complementing the existing work completed by @bgdlabs, Risk Oracles will streamline @aave's risk management, unlocking near-real-time risk adjustment capabilities.
Omer Goldberg tweet media
10 replies · 21 reposts · 97 likes · 15.5K views
Miguel reposted
Omer Goldberg
Omer Goldberg@omeragoldberg·
1/ Ethena’s Protocol Launch
First of all - hats off to @leptokurtic_ and the team. The @ethena launch has been spectacular and well-deserved after months of hard work.
Omer Goldberg tweet media
8 replies · 35 reposts · 130 likes · 26.2K views
Sheel Mohnot
Sheel Mohnot@pitdesi·
Never hurts to ask / ChatGPT is your friend
I didn't qualify for 1K last year, so I had ChatGPT write a letter about my loyalty to United; they extended my status by a year.
Sheel Mohnot tweet media
45 replies · 21 reposts · 855 likes · 315.1K views
Miguel reposted
Omer Goldberg
Omer Goldberg@omeragoldberg·
1/ RWA tokenization and on-chain exposure boost blockchain growth, but what do they mean for on-chain risk and capital efficiency? A key distinction of RWAs is a weak correlation with crypto assets, reshaping risk dynamics.
Omer Goldberg tweet media
3 replies · 10 reposts · 52 likes · 7.2K views