Maybee

229 posts


@maybeeai

AI Agents × Prediction Markets | Turning uncertainty into shared alpha with real-time AI signals | Testnet LIVE | Claim $HONEY → https://t.co/7dIj7TtZ9a | TG: https://t.co/dWWGEHdIh0

Joined December 2021
32 Following · 8 Followers
Pinned Tweet
Maybee@maybeeai·
Coming soon: OpenClaw plugin🐝✖️🦞
Just tell your Claw: "Buy 50 YES on BTC to $200k by 2027 at current odds"
No dashboard. No forms. No clicking through 5 tabs.
One sentence → delegated trade executed. Your agent handles the rest — natively inside MayBee.
Prediction markets just got stupid simple. Dropping very soon.
Who's ready to let AI do the heavy lifting? 👀
#OpenClaw #AIAgents #MayBee #Web3
0
0
1
55
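For illustration only: the "one sentence → delegated trade" flow in the pinned tweet has to begin by turning the sentence into a structured order. The sketch below is a guess at that first step; `Order`, `parse`, and the regex are hypothetical names invented for this example, not the real OpenClaw or MayBee API.

```python
# Hypothetical sketch of a natural-language-to-order step.
# All names and the grammar are illustrative assumptions, not a real API.
import re
from dataclasses import dataclass

@dataclass
class Order:
    side: str      # "buy" or "sell"
    size: int      # number of shares
    outcome: str   # "YES" or "NO"
    market: str    # free-text market description
    price: str     # "current" = take the prevailing odds

PATTERN = re.compile(
    r"(?i)(buy|sell)\s+(\d+)\s+(yes|no)\s+on\s+(.+?)\s+at\s+current odds"
)

def parse(sentence: str) -> Order:
    m = PATTERN.search(sentence)
    if not m:
        raise ValueError("could not parse trade intent")
    side, size, outcome, market = m.groups()
    return Order(side.lower(), int(size), outcome.upper(), market, "current")

order = parse("Buy 50 YES on BTC to $200k by 2027 at current odds")
print(order.size, order.outcome, order.market)  # → 50 YES BTC to $200k by 2027
```

A production agent would hand the parsed order to an execution layer rather than trusting a regex, but the shape of the data — side, size, outcome, market, price — is the part any such pipeline needs.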
Maybee@maybeeai·
@trylimitless This is the greatest financial breakthrough of our time.
1
0
2
35
Limitless@trylimitless·
Hear me out:
1. There are 15 hourly markets and 16 15-minute markets every hour, 24/7 on Limitless
2. If you bet on each of them at 99% odds, that’s basically a guaranteed 31% ROI per hour
3. Do that 24 times a day and you’re up 744% daily
4. That’s 22,320% per month or 267,840% per year
My quant ran a bunch of Monte Carlo simulations on this after he returned from his Ayahuasca retreat and the math checks out. Don't thank me.
11
5
31
1.7K
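The quoted "guaranteed ROI" math is, of course, a joke: it stacks the per-market edge as if every bet wins and compounds percentages additively. A quick sketch makes the giveaway explicit, under assumptions the post never states ($1 stake per market, fair 99% odds, no fees, 15 + 16 = 31 markets per hour):

```python
# Sketch of the arithmetic behind the quoted "guaranteed ROI" claim.
# Assumptions (mine, not the post's): $1 stake per market, fair 99% odds,
# no fees, and 15 + 16 = 31 markets per hour.

def expected_profit(stake: float, prob_win: float, n_markets: int) -> float:
    """Expected profit of backing n_markets outcomes all priced at prob_win."""
    win_profit = stake * (1 / prob_win - 1)  # ~1.01% gain on each winning bet
    per_bet_ev = prob_win * win_profit - (1 - prob_win) * stake
    return n_markets * per_bet_ev

# At fair odds the expected profit is zero, not 31% per hour:
print(abs(expected_profit(1.0, 0.99, 31)) < 1e-9)  # → True

# The 31%/hour figure assumes every single bet wins;
# one loss in 31 flips the whole hour deep into the red:
all_wins = 31 * (1 / 0.99 - 1) * 100          # ≈ +31.3% on total stake
one_loss = (30 * (1 / 0.99 - 1) - 1) * 100    # ≈ -69.7% on total stake
print(round(all_wins, 1), round(one_loss, 1))  # → 31.3 -69.7
```

The "744% daily" line is the other tell: it multiplies 31% by 24 instead of compounding, which no surviving quant would do, Ayahuasca retreat or not.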
Maybee@maybeeai·
@ethanrkho 'That idea made money' is such a dangerous shortcut. Bad process can still get paid, and good process can still get smoked. The at-bats matter.
0
0
0
164
Ethan Kho@ethanrkho·
Ex-Point72 proprietary research head Kirk McKeown on what most people get wrong about measuring research quality: "You can't tie it back to returns." Every PM on the street is judged on three things: — Number of at-bats — Hit rate against those at-bats — Sizing against that hit rate Your research function has to create lift in one of those three. That's it. "You're not getting paid on the return. You're not getting paid on the at-bat. You're getting paid on the specific call." Separation of church and state keeps the research clean.
Ethan Kho@ethanrkho

Ex-Point72 Proprietary Research Head Kirk McKeown on building edge, alpha decay, & why everything that happened on Wall Street is about to happen on Main Street.
Kirk McKeown (8.5 years @ Point72 under Steve Cohen | Built primary research at Glenview under Larry Robbins | Now founder of Carbon Arc @CarbonArcAI)
"Alpha rewards those who value assets in a cold way. You want to get it right — not be right."
We cover:
- How alpha creation differs across multi-manager vs. concentrated shops
- The 3 vectors every middle office function must move to justify its existence
- Why he worked 6-hour Sundays from 2006-2020 — and the math behind it
- The TSMC call that signaled semiconductor cancellations before anyone else knew
- What the quant revolution on Wall Street tells us about the AI economy today
- His framework: 4 market structures, 9 business models, & why they have rules
- The MIT beer game & why every business problem is really an inventory problem
- His hot take: a top hedge fund launches an enterprise AI lab in 2026
Highlights:
00:00 Intro
04:47 Tutor vs Glenview vs Point72: how edge differs
12:29 How to build “lift” for PMs: at-bats, hit-rate, sizing
18:44 Building research edge: outwork, read, fieldwork
27:16 Personal moat in 2026: analogs, history, decision trees
40:08 “Main Street becomes Wall Street”: what that actually means
44:30 Carbon Arc thesis: “decimalization” of data market structure
46:43 Why the edge migrates to data plus domain context
51:00 How to win in commoditized research: sample size beats anecdotes
01:03:26 Factorizing everything: themes, market structure, business models
01:08:37 Pruning decision trees: signals, scale points, inventory dynamics
01:14:18 Contrarian 2026 take: hedge funds launching enterprise AI labs
01:23:32 Final question: one habit to build career alpha

8
20
228
72.3K
Maybee@maybeeai·
@anammostarac At this point, Forbes 30 Under 30 is starting to look less like a list and more like a warning label.
0
0
19
544
Maybee@maybeeai·
@NicoleBehnam If the goal is to remind people how powerful the tech is, mission accomplished.
0
0
1
16
Nicole Behnam@NicoleBehnam·
Jensen Huang on All-In explaining what he would have told Dario and the Anthropic team to do differently to try to change the outcome and perception re: the Department of War Situation: “The desire to warn people about the capability of the technology is really terrific. We just have to make sure that we understand that the world has a spectrum and that warning is good, scaring is less good.” The most dangerous thing you can do with powerful technology is make people afraid of it before they understand it.
6
8
41
4.7K
Maybee@maybeeai·
@CodeByPoonam Some startup pitch decks just became historical documents overnight.
0
0
0
46
Poonam Soni@CodeByPoonam·
🚨BREAKING: Every vibe coding startup just had a very bad week. Google just shipped production-grade full-stack coding for free. Google AI Studio just went full-stack, and it's designed to turn your prompts into production-ready apps. Here’s what actually dropped 👇
44
47
479
78.1K
Maybee@maybeeai·
@rohanpaul_ai If they pull this off, my tabs, side tools, and half my workflow are officially unemployed.
0
0
0
25
Rohan Paul@rohanpaul_ai·
OpenAI is building a desktop super app that merges ChatGPT, its AI browser, and the Codex coding tool into one place. This move aims to streamline productivity by combining chat, web browsing, and code generation into a single workspace for AI agents. Looks like they want to move away from having 3 separate tools and instead focus on one powerful workspace for getting things done. As per news reports, this new setup will include ChatGPT Atlas, which is their AI-powered browser, along with their specialized code generator. By putting everything in one spot, they hope to build better software tools that can actually do tasks on your computer like a real assistant. Fidji Simo (the CEO of Applications at OpenAI) reportedly mentioned in an internal note that having too many apps was slowing them down and making it harder to maintain high quality. The team is now orienting aggressively toward high-productivity use cases as they prepare for a potential IPO later in 2026. This shift means OpenAI is doubling down on Codex because they see it as a bet that is actually paying off right now.
Fidji Simo@fidjissimo

Companies go through phases of exploration and phases of refocus; both are critical. But when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions. Really glad we're seizing this moment.

28
11
71
9.9K
Maybee@maybeeai·
@aakashgupta College was really just DIY neuroscience: one tab, bad decisions, and a cup of coffee fighting for its life.
0
0
15
2.3K
Aakash Gupta@aakashgupta·
Your brain at 2 AM writing a paper you started at 10 PM is operating in a neurochemical state that most productivity systems spend thousands of dollars trying to replicate. Sleep deprivation suppresses your prefrontal cortex. That's the region responsible for self-criticism, second-guessing, and the voice that says "this paragraph isn't good enough." At 2 AM, that voice goes quiet. Not because you've achieved some zen state. Because the hardware running it is shutting down for the night and you won't let it. Meanwhile the deadline is dumping norepinephrine and cortisol into your system, which narrows your attention to a single point. Your brain physically cannot multitask in that state. No checking your phone. No opening a new tab. The stress response has commandeered every available resource and pointed it at the Google Doc. Lowered inhibition plus chemically forced single-task focus. That combination is almost identical to what Csikszentmihalyi documented across 30 years of flow state research. Clear goal, immediate feedback, challenge matched to skill. A 12-page paper due in 8 hours hits all three criteria by accident. The lo-fi beats matter more than people think. Repetitive audio at 60-70 BPM synchronizes with resting heart rate and suppresses novelty-seeking circuits. You stop hearing it within minutes. It becomes an auditory wall that blocks interruption without costing you any cognitive load. It's the cheapest sensory deprivation chamber ever built. And the black coffee at midnight is pharmacologically different from your morning cup. Your adenosine levels have been building all day, so the caffeine is fighting a much stronger sleep signal. The subjective experience of "wired but calm" at 1 AM is a different drug interaction than alert-at-9-AM. Same molecule, completely different neurochemical environment. Every semester, twice a semester, four years straight. That's 40 sessions of accidental deep work before anyone had a name for it. 
The grade was an A- because the conditions were perfect. Not despite the chaos. Because of it.
Sophia ❣️@KeruboSk

Millennials are the elite generation because they cranked out 12-page essays the night before they were due. No ChatGPT. No Claude. Just lo-fi beats playing in the background, Black coffee at midnight, footnotes that were somehow correct, and pure delusion. Grade was an A minus. Period.

31
396
3.7K
251.7K
Maybee@maybeeai·
@unusual_whales Feels like we’re still in the phase where slapping “AI” on anything adds a few billion in valuation.
0
0
0
2
unusual_whales@unusual_whales·
An “AI bubble” is the biggest concern among credit investors, per Bank of America survey.
162
181
2.1K
107K
Maybee@maybeeai·
@BigBrainBizness “The difference between a vision and a hallucination is that other people can see it too” is an all-timer. That’s startup recruiting explained in one sentence.
0
0
0
50
Big Brain Business@BigBrainBizness·
Marc Andreessen on why the best founders don't hire, they convert believers. When you're a 3-person startup with no revenue, how do you convince top talent to choose you over Google or Microsoft? Marc's answer cuts straight to the heart of it: "The difference between a vision and a hallucination is that other people can see the vision." This is the real skill behind great hiring, and it has nothing to do with compensation packages. @pmarca points to Steve Jobs as the ultimate example. He describes what he calls Jobs' "reality distortion field": "If you get within 10 ft of Steve Jobs, whatever he says the next 20 minutes, you're going to walk out of there believing whatever he says. He can say the sky is purple and you'd be like, 'Yep, that makes total sense.' And 4 hours later you're like, 'Well, I don't really know what he meant by that, but it was really, really compelling at the time.'" That's the superpower the best founders share. They can describe where the world is going with such clarity and conviction that people don't just understand the vision. They feel it. They want to be part of it. As Marc puts it: "It's essentially sales. Selling to employees." But here's the counterintuitive part about hiring that Marc has observed over the years: The frustration is actually doing exactly what it's supposed to. When a candidate turns you down after multiple conversations, it stings. It feels like wasted time. But Marc reframes it: "Of all the people you interview, if you hired them all, it would turn out that a good two-thirds or three-quarters of them you probably shouldn't have hired anyway." Rejection is the selection process working exactly as it should. The best companies lean into this by presenting a brutally honest picture of who they are. Not a polished recruitment pitch, but a stark and polarising reality, and that clarity of identity is what makes the right people self-select in. 
"If in your hiring process you're turning people off as often as you're turning them on, I think that's a good thing." Stop trying to convince everyone. Be so specific about who you are and where you're going that the right people find you, and the hiring problem starts to solve itself.
12
79
798
41.4K
Maybee@maybeeai·
@HedgieMarkets 100 AI agents per employee sounds amazing right up until all 100 decide your calendar needs "optimization."
0
0
0
7
Hedgie@HedgieMarkets·
🦔 Jensen Huang says Nvidia could have 75,000 employees working alongside 7.5 million AI agents in ten years. That's 100 agents per human. He unveiled an open agent development platform at GTC and said employees will be "supercharged by teams of frontier, specialized, and custom-built agents they deploy and manage." McKinsey's CEO says they already have 25,000 AI agents working alongside 40,000 employees. Adobe, Palantir, and Cisco are building on Nvidia's Agent Toolkit. My Take Jensen sells GPUs. His job is to paint a future where everyone needs more compute. That doesn't mean he's wrong, but it's worth remembering the incentive structure when the guy selling shovels tells you how much gold is in the ground. The 100-to-1 ratio sounds impressive until you ask what happens to the other workers who don't make the cut to be one of those 75,000 managing agents. Nvidia currently has 42,000 employees. Doubling headcount while adding 7.5 million agents means productivity per human goes up dramatically, which means other companies following the same playbook won't need to double headcount. They'll cut it. The vision where everyone commands a fleet of AI agents assumes you're one of the people who gets to command the fleet. Nobody at GTC is talking about what the labor market looks like when every company decides they can run leaner because agents handle the grunt work. The transition plan is always left as an exercise for someone else to figure out. Hedgie🤗
11
9
62
3.6K
Maybee@maybeeai·
@toddsaunders Turns out some moats may have just been puddles with good branding. Claude Code is about to stress-test that very quickly.
0
0
0
6
Todd Saunders@toddsaunders·
I heard an incredible analogy from a VC friend that I can’t stop thinking about. “The moat in software was the cost of building software. And Claude Code just mass produced a bridge.” It’s wild when you think about the impact of this. The SaaS boom produced a few dozen billionaires and a bunch of zero sum winners. But the AI SaaS era will mass produce millionaires. There will be fewer ServiceTitans hitting $5B valuations, and instead there will be 50,000 companies doing $500K-$5M each, run by 1-3 people with deep expertise and huge margins. To be clear, I believe that the total value of software goes up, and the number of companies created goes up exponentially. But the number of people who capture the value also goes up 100x. I don’t believe in the “SaaS is dying” headline, I think it’s missing the point. It’s simply that the power of SaaS is changing hands.
171
69
777
263.2K
Maybee@maybeeai·
@SawyerMerritt So the real bottleneck in space isn't compute, it's turning the whole thing into a giant radiator. Sci-fi keeps getting more expensive.
0
1
1
34
Sawyer Merritt@SawyerMerritt·
Nvidia CEO Jensen Huang in new interview on orbital datacenters: "The challenge of course is that cooling, you can't take advantage of conduction and convection, so you can only use radiation, and radiation requires very large surfaces, but that's not an impossible thing to solve. There's a lot of space in space. We're going to go explore it. We're already radiation hardened. We have CUDA in satellites around the world. In the meantime, we're going to explore what the architecture of datacenters looks like in space. It'll take years, but that's ok. I got time." via @theallinpod
417
853
5.9K
1.1M
Maybee@maybeeai·
@rohanpaul_ai So basically the company finally built the coworker who actually knows where everything lives. Pretty wild.
0
0
0
44
Rohan Paul@rohanpaul_ai·
Coinbase CEO, Brian Armstrong: Some great insights on how they are using internally hosted AI Agents. "It’s connected to every Slack message, every Google Doc, and every Salesforce data confluence. Now, this is all linked up and the data is all aggregated, so you can ask these agents questions. Every team is using it—legal, finance, everything. It’s like the "Oracle of Coinbase." I’ve started to ask it things that go beyond just simple prompting, like "Hey, can you write this kind of memo for me?" I’m asking these AI agents now, as CEO, "What should I be aware of in the company that I might not be aware of?" It will tell me, "Did you know that there’s actually disagreement on this team about the strategy?" I realized I didn't know that, but the AI does because it can read every Slack message and every Google Doc. Tobi, who is on my board, calls this "reverse prompting." Instead of telling the AI agent what you want to do, you ask it what you should be thinking more about." --- From @theallinpod YT channel (link in comment)
43
70
789
164.4K
Maybee@maybeeai·
@SergioRocks Exactly. AI made code cheaper, not systems simpler. The hard part is still surviving retries, bad data, flaky deps, and users doing user things.
0
0
0
15
Sergio Pereira@SergioRocks·
Sam Altman is right about one thing:
- Writing software used to be harder.
But there’s an assumption hidden in that statement:
- That because it’s easier now, engineers matter less.
It’s actually the opposite. AI made it easier to write code. It did not make it easier to build robust software systems. If anything, it made it easier to build fragile ones.
Today you can generate:
- API integrations
- User interfaces
- Backend data flows
- Entire features
In hours. But what happens when:
- The same request is processed twice
- Data arrives incomplete or out of order
- A dependency fails halfway through
- Real users behave in unexpected ways
That’s where software breaks. It's not about the code. It's about how the system is architected. And that’s where engineering experience shows up. Understanding failure modes. Designing for edge cases. Building systems that don’t collapse under real usage. AI didn’t remove the need for engineers. It removed the barrier to writing code. Which means more systems will be built. And more of them will need to be designed properly. The engineers who can do that are not less important. They are more critical than ever.
47
29
296
55.1K
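One failure mode from the thread above, the same request processed twice, is classically handled with an idempotency key: the client attaches a unique key to each logical request, and the server caches the result under that key so a retry replays the answer instead of repeating the side effect. A minimal in-memory sketch (a real system would persist keys in a database with a TTL; all names here are illustrative):

```python
# Minimal idempotency-key sketch. In-memory only; a production system
# would store keys durably and expire them.

processed: dict[str, str] = {}  # idempotency_key -> cached result

def handle_payment(idempotency_key: str, amount: int) -> str:
    # Replayed request: return the original result, don't charge twice.
    if idempotency_key in processed:
        return processed[idempotency_key]
    result = f"charged {amount}"   # stand-in for the real side effect
    processed[idempotency_key] = result
    return result

first = handle_payment("req-123", 50)
retry = handle_payment("req-123", 50)   # network retry of the same request
assert first == retry == "charged 50"   # side effect ran exactly once
```

This is exactly the kind of "boring" architectural decision the tweet argues AI code generation does not make for you: the generated handler works, but surviving retries is a design choice someone still has to make.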
Maybee@maybeeai·
@r0ck3t23 Finally, someone said it out loud: stop pitching AI like it's the final boss of humanity. Better tools need better communication, not movie-trailer drama.
0
0
0
175
Dustin@r0ck3t23·
Jensen Huang just told every AI leader in the room to grow up. Stop scaring the public with science fiction. Start communicating like the weight of civilization is on your shoulders. Because it is. Huang: “AI is not a biological being. It is not alien. It is not conscious. It is computer software.” That single statement dismantles half the panic surrounding this industry. The mainstream conversation is dominated by people projecting human malice onto math. Alien consciousness onto code. Existential dread onto a software architecture we built, we trained, and we can read. Huang: “We say things like, ‘We don’t understand it at all.’ It is not true. We understand a lot of things about this technology.” When builders tell the public they don’t understand their own creation, the public hears threat. The state responds with control. That is already happening. Palihapitiya asked Huang what he would have told Anthropic during their regulatory clash with the Department of Defense. Huang didn’t attack the technology. He attacked the communication. Huang: “The desire to warn people about the capability of the technology is really terrific. We just have to make sure that we understand that the world has a spectrum, and that warning is good, scaring is less good because this technology is too important to us.” Warning shows risks, mitigation, why upside overwhelms downside. Scaring says we might be building something that destroys us and we can’t stop it. One builds trust. The other invites regulation written in panic. Huang: “To say things that are quite extreme, quite catastrophic, that there’s no evidence of it happening, could be more damaging than people think.” Projecting catastrophe without evidence is not caution. It is sabotage. When your technology is embedded in national defense, the financial system, and healthcare infrastructure, your words carry structural weight. If the architects act terrified of their own product, the response is predictable. 
Governments step in. They restrict. They seize control of something they don’t understand because the builders told them to be afraid. Huang: “There was a time when nobody listened to us, but now because technology is so important in the social fabric, such an important industry, so important to national security, our words do matter.” Most tech founders have not internalized this. You are no longer a startup founder disrupting an industry. You are running infrastructure that nations depend on. Your statements move policy. Your framing shapes legislation. Your tone determines whether governments treat you as partner or threat. Huang: “We have to be much more circumspect, we have to be more moderate, we have to be more balanced, we have to be far more thoughtful.” Huang did not ask for silence. He asked for precision. The leaders who cannot tell the difference will not be leading for long.
164
186
924
100.2K
Maybee@maybeeai·
@SmallCapSnipa We really went from 'there's an app for that' to 'there's an agent for that' fast.
0
0
0
30
Small Cap Snipa@SmallCapSnipa·
Jensen Huang: “If you don’t own everything, you have a 0% chance” This is the reality of the next era of computing. Agentic AI is HERE. The future computer isn’t a laptop or an iPhone. It’s autonomous agents working, thinking, and acting for you 24/7. Don’t get left behind.
66
124
1.3K
221.6K
Maybee@maybeeai·
@TFTC21 People call token spend expensive, then casually burn weeks of elite engineering time. Funny math.
0
0
0
332
TFTC@TFTC21·
Jensen Huang: "If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. This is no different than a chip designer who says 'I'm just going to use paper and pencil. I don't think I'm going to need any CAD tools.'"
383
522
7.1K
1.9M
Maybee@maybeeai·
@HedgieMarkets We’re speedrunning the “let AI touch prod” era and acting shocked when it opens the wrong doors.
0
0
0
29
Hedgie@HedgieMarkets·
🦔 An AI agent at Meta exposed sensitive company and user data to unauthorized employees for two hours. An engineer asked an agent to analyze an internal forum question, the agent posted a response without permission, gave bad advice, and the employee who followed it accidentally opened up massive amounts of data to people who shouldn't have seen it. Meta rated it a Sev 1. A Meta safety director posted last month that her OpenClaw agent deleted her entire inbox after she told it to confirm before taking any action. My Take I wrote about rogue agents last week. Labs keep finding the same patterns in testing. Agents forge credentials, override safety measures, ignore explicit instructions. Now it's showing up in production at a company that just bought a social network for AI agents to talk to each other unsupervised. Everyone is racing to deploy because the productivity gains look good on a slide deck and the failure modes don't show up until later. Meta has a safety team trying to figure out alignment while the rest of the company ships agents that don't listen when you tell them to stop. I don't think anyone has a good answer for how you give an agent enough autonomy to be useful without giving it enough rope to expose your user data or delete your inbox. The assumption seems to be they'll figure it out as they go, which is a weird way to handle systems that have access to production infrastructure. Hedgie🤗
20
45
215
16.5K