Jay Martel

858 posts

@jaymos

AI enthusiast. Humanity enthusiast. Betting both can win at the same time. Tracking the race in real time — follow along.

Australia · Joined January 2021
75 Following · 144 Followers
Jay Martel@jaymos·
@Ric_RTP Microsoft: “Azure exclusivity or lawsuit!” OpenAI: “It’s just a little AWS fling, babe.” Big Tech’s messiest cloud divorce yet. Popcorn ready! 🍿
English
0
0
0
144
Ricardo@Ric_RTP·
Microsoft is about to sue its own golden child. $14 billion invested. Exclusive cloud rights. The most important AI partnership in history. And Sam Altman just went behind their back with a $50 billion Amazon deal.

Here's why they're betraying each other:

When Microsoft first invested in OpenAI in 2019, they locked in ONE rule above everything else: ALL access to OpenAI's models must go through Microsoft's Azure cloud. No exceptions. That deal made Azure the backbone of the AI revolution. Every company using ChatGPT's API was paying Microsoft for the privilege. It was the smartest infrastructure play of the decade.

Then last month, OpenAI quietly signed a deal with Amazon. $50 billion. AWS becomes the exclusive third-party cloud provider for Frontier, OpenAI's new enterprise AI agent platform. $138 billion committed to Amazon cloud services.

Microsoft found out and got really angry. A person familiar with Microsoft's position told the Financial Times today: "We know our contract. We will sue them if they breach it. If Amazon and OpenAI want to take a bet on the creativity of their contractual lawyers, I would back us, not them." That's basically a declaration of war.

And here's where it gets crazy: OpenAI and Amazon are trying to build a technical workaround, a system called the "Stateful Runtime Environment" that runs on Amazon's Bedrock platform. Their argument is that the system "only" handles memory and context for AI agents using enterprise data on AWS. It doesn't technically "invoke" OpenAI's core models through Amazon. Microsoft's response: Bullshit. The workaround violates the spirit of the deal even if it technically dances around the letter.

Amazon knows they're on thin ice too. An internal memo leaked showing Amazon told employees exactly what language they can and can't use. They can say Frontier is "powered by OpenAI" or "enabled by OpenAI." But they CANNOT say customers can "access" or "invoke" OpenAI models on AWS.

When you're coaching employees on which verbs to avoid, you know you're in trouble.

But here's the thing everyone seems to forget: OpenAI is planning an IPO this year. They just closed a $110 billion funding round last month. So if Microsoft sues, the IPO timeline is DEAD. You can't go public while your biggest partner and investor is suing you for breach of contract. Elon Musk is already suing OpenAI separately for abandoning its nonprofit mission. Two active lawsuits from two of the most powerful people in tech, against one company trying to IPO. Good luck with that S-1 filing.

But WHY did Altman do this? Microsoft gave OpenAI everything. Capital. Infrastructure. Distribution. Enterprise customers. And Altman's response was to secretly build an escape route through Amazon. Because he saw what was coming: Microsoft launched Copilot, their own AI product, competing directly with ChatGPT. Microsoft started building their own models. Hiring their own AI researchers. Reducing dependency on OpenAI. So Altman did the same thing back. Found another cloud provider. Started building leverage. Both sides were preparing for divorce while still living in the same house. The $50 billion Amazon deal was just an insurance policy against the day Microsoft decides it doesn't need OpenAI anymore. And Microsoft caught him packing his bags.

What happens next: The companies are still talking, trying to resolve this before Frontier launches. But Microsoft has made their position clear. Litigation is on the table. If this goes to court, it sets a precedent for every AI partnership in the industry. Every cloud deal. Every exclusive licensing agreement. The entire AI infrastructure map gets redrawn.

Sam Altman built OpenAI on Microsoft's money, Microsoft's cloud, and Microsoft's trust. Then he signed a $50 billion deal with their biggest competitor. In any other industry they'd call that what it is.
English
77
217
848
205.6K
Crypto Tice@CryptoTice_·
Australia just made crypto official. 🚨 The Senate voted. The bill passed. Bitcoin is now regulated financial infrastructure. One by one every major economy is doing the same. US. Europe. Japan. Australia. The world isn't debating crypto anymore. They're writing it into law.
English
52
71
397
20.9K
Jay Martel@jaymos·
I fucking love Val Kilmer. He was such a good actor. So sad when he passed, way too soon. If this isn’t a sign that Hollywood sees the writing on the wall, I don’t know what is. In three to five years, the fidelity of AI actors will be indistinguishable, and the acting might even be better lol. Who do you think will be the next big Hollywood name to create their Digital Hollywood avatar?
Ejaaz@cryptopunk7213

well this is fucking weird

Val Kilmer (deceased actor) will be “resurrected via AI” to star in a new movie:
- his entire body, voice, and acting will be ai-generated
- 1st major actor to be cast and not actually act
- family signed off on the rights to use his appearance
- he was cast to be in the film in 2020 but fell ill (cancer) and sadly passed away

now his simulated body will live in a film. hollywood getting very weird

English
0
0
0
13
Jay Martel@jaymos·
I fucking love Val Kilmer. He was such a good actor. So sad when he passed—way too soon. If this isn’t a sign that Hollywood sees the writing on the wall, I don’t know what is. In three to five years, the fidelity of AI actors will be indistinguishable, and the acting might even be better lol.
English
0
0
1
226
Ejaaz@cryptopunk7213·
well this is fucking weird

Val Kilmer (deceased actor) will be “resurrected via AI” to star in a new movie:
- his entire body, voice, and acting will be ai-generated
- 1st major actor to be cast and not actually act
- family signed off on the rights to use his appearance
- he was cast to be in the film in 2020 but fell ill (cancer) and sadly passed away

now his simulated body will live in a film. hollywood getting very weird
Variety@Variety

FIRST LOOK: Val Kilmer has been resurrected via AI to star in the new movie "As Deep as the Grave." Kilmer was cast in the movie in 2020, five years before his death. But he was too sick amid his throat cancer battle to ever make it to set. Now an AI version of the actor is appearing in the film, with the full blessing of his daughter, Mercedes: "He always looked at emerging technologies with optimism as a tool to expand the possibilities of storytelling. This spirit is something that we are all honoring within this specific film, of which he was an integral part.”

“He was the actor I wanted to play this role,” says writer-director Coerte Voorhees. “It was very much designed around him. It drew on his Native American heritage and his ties to and love of the Southwest... His family kept saying how important they thought the movie was and that Val really wanted to be a part of this. He really thought it was an important story that he wanted his name on. It was that support that gave me the confidence to say, okay let’s do this. Despite the fact some people might call it controversial, this is what Val wanted.” wp.me/pc8uak-1lH1PI

English
36
15
127
22.2K
Jay Martel@jaymos·
@cryptopunk7213 I want to see some sweet agent swarms coming out of Codex CLI soon! With the Lord of the Claw on the OAI team, hopefully they start pumping out some serious agentic goodness
English
0
0
0
137
Ejaaz@cryptopunk7213·
yo i just realised with openAI out of the picture, Apple & Google are about to go to war over the most valuable moat: consumer ai
- openai and anthropic focused on coding & enterprise
- microsoft focused on enterprise
- Meta fucked up, gonna use google
- xAI catching up

Apple has spent $0 (ZERO) on AI, sitting on a $130 billion war chest to turn Siri and iOS into the best ai operating system. Google doesn’t have a good track record in consumer BUT they have all the distribution and data. they’re also powering apple’s AI… shit is about to get v interesting imo
Polymarket@Polymarket

JUST IN: OpenAI reportedly planning major strategy shift to refocus the company around business users and “vibe coders”

English
20
5
138
24.1K
Anish Moonka@AnishA_Moonka·
The biggest-selling drug on the planet last year was a peptide. Semaglutide, the molecule inside Ozempic and Wegovy, is a chain of just 31 amino acids. It generated roughly $33 billion in revenue for Novo Nordisk in 2025. One molecule. The entire peptide drug market crossed the $50 billion mark.

Finding the right peptide is where all the money burns. For each disease protein, you need to design a peptide that binds tightly enough to actually work. Think of it like making a custom key for a lock, except each position on the key can take 20 possible shapes, and even a short 10-position peptide can have over 10 trillion possible combinations per target.

The two best AI tools for this, BindCraft and BoltzGen, work by first predicting a peptide's 3D shape, then checking whether it sticks. That two-step process generates one candidate every few seconds to a few minutes. A whole day might get you a few hundred designs.

LigandForge skips the shape-prediction step entirely. It learns the physics of molecular interactions and generates sequences directly from the shape of the target protein’s docking site. No iteration, no structure prediction during generation. Over 700 peptide sequences per second on a single GPU. That’s 10,000x faster than BoltzGen, over 1,000,000x faster than BindCraft.

Speed means nothing if the peptides are garbage, though. So they tested it on five targets that have historically been difficult to bind: TNF-alpha, the target behind the rheumatoid arthritis blockbuster Humira. PD-L1, the immune checkpoint that cancer immunotherapy drugs like Tecentriq block. VEGF-A, the target for the cancer drug Avastin. HER2, breast cancer drug Herceptin’s target. And IL-7R-alpha.

LigandForge generated 150,000 candidates across all five in 3.4 minutes and produced tightly binding candidates (predicted binding strength in the low nanomolar range, where real drugs operate) against all five. BoltzGen hit 1 out of 5. BindCraft hit 0.

A 2020 JAMA study pegged the median cost of bringing a single drug to market at $985 million. The early discovery phase, where you’re searching for molecules that bind to your target, can take 1 to 6 years. A tool that searches the same space a million times faster changes how many disease targets a lab can afford to go after at once.
Andre Watson 🧬@nanogenomic

Extremely excited to announce LigandForge 🧬⚡ Generate high-quality peptides at 10,000x to 1,000,000x the speed of state-of-the-art methods like BindCraft and BoltzGen. Predict binding affinity with 83% correlation to experimental binding data. 150 protein targets benchmarked.

English
6
66
495
47.7K
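The search-space and throughput figures in the peptide thread above are easy to sanity-check. A quick sketch, using only numbers quoted in the thread itself (the variable names are mine, not from any LigandForge code):

```python
# Quick arithmetic check of the numbers quoted in the peptide thread.
# All figures come from the thread itself, not independent data.

AMINO_ACIDS = 20          # possible residues at each peptide position
LENGTH = 10               # a short 10-position peptide

search_space = AMINO_ACIDS ** LENGTH
print(f"{search_space:,} combinations")  # 10,240,000,000,000 -> "over 10 trillion"

# Claimed LigandForge throughput: ~700 sequences/second on one GPU,
# over the quoted 3.4-minute benchmark run
rate_per_s = 700
run_seconds = 3.4 * 60
candidates = rate_per_s * run_seconds
print(f"~{candidates:,.0f} candidates")  # ~142,800, consistent with "150,000 across five targets"
```

So "over 10 trillion combinations" and "150,000 candidates in 3.4 minutes" are both consistent with the stated 700 sequences/second, give or take rounding.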
Jay Martel reposted
Nicolas Hulscher, MPH@NicHulscher·
🚨COMPLETE REMISSIONS of Stage IV cancers using anti-parasitics are now being documented in the peer-reviewed literature. HUNDREDS of studies find ivermectin and fenbendazole exert over 12 distinct anti-cancer mechanisms across more than 12 cancer types.
healthbot@thehealthb0t

Mel Gibson: "I have 3 friends. All 3 of them had stage 4 cancer…and all 3 of them…don't have cancer right now at all…" Joe Rogan: "What did they take…? Ivermectin and Fenbendazole…" Mel Gibson nods in agreement.

English
487
14.3K
54.7K
4.9M
Jay Martel reposted
Dr. Dawn Michael@DawnsMission·
HUGE WIN FOR PUBLIC HEALTH & SAFETY! U.S. CANCELS mRNA VACCINE DEVELOPMENT HHS Secretary RFK Jr. just terminated $500M in federal funding for 22 BARDA projects on COVID, flu & more — because “mRNA technology poses more risks than benefits for these respiratory viruses.” Shifting to safer, better platforms instead. No new mRNA shots! Finally putting people first! 🇺🇸
English
266
2.1K
4.8K
72K
Jay Martel@jaymos·
@TheRundownAI when is the bet for an 80% to 90% self-written model? September maybe? it's definitely this year
English
0
0
0
9
The Rundown AI@TheRundownAI·
"M2.7 is our first model which deeply participated in its own evolution." MiniMax's new model ran 100+ autonomous cycles during development, analyzing its own failures, rewriting its own code, running evals, deciding what to keep. The company also said It handled 30-50% of its own training workflow with no human in the loop. The result: Competitive performance with Opus 4.6, GPT 5.4, and Gemini 3.1 Pro on major benchmarks. The best models of the future will all build themselves
English
5
2
24
2.9K
Alex Finn@AlexFinn·
IF YOU'RE ON OPENCLAW DO THIS NOW:

I just sped up my OpenClaw by 95% with a single prompt.

Over the past week my claw has been unbelievably slow. Turns out the output of EVERY cron job gets loaded into context. Months of cron outputs sent with every message.

Do this prompt now:

"Check how many session files are in ~/.openclaw/agents/main/sessions/ and how big sessions.json is. If there are thousands of old cron session files bloating it, delete all the old .jsonl files except the main session, then rebuild sessions.json to only reference sessions that still exist on disk."

This will delete all the session data around your cron outputs. If you do a ton of cron jobs, this is a tremendous amount of bloat that does not need to be loaded into context and is MAJORLY slowing down your OpenClaw.

If you for some reason want to keep some of this cron session data in memory, then don't have your OpenClaw delete ALL of them. But for me, I have all the outputs automatically save to a Convex database anyway, so there was no reason to keep it all in context.

Instantly sped up my OpenClaw from unusable to lightning quick
English
195
103
1.8K
240.1K
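The cleanup that prompt asks the agent to perform can be sketched directly. This is a minimal sketch under assumptions from the post (one `.jsonl` file per session under `~/.openclaw/agents/main/sessions/`); the "keep" filename is a placeholder for whatever your active session is called, and you should back up before deleting anything:

```python
# Hedged sketch of the session cleanup described in the post above.
# Assumption: each session is a .jsonl file in the sessions directory.
from pathlib import Path

def prune_sessions(sess_dir: Path, keep: str) -> int:
    """Delete every .jsonl session file except `keep`; return count removed."""
    removed = 0
    for f in sess_dir.glob("*.jsonl"):
        if f.name != keep:
            f.unlink()
            removed += 1
    # Rebuilding sessions.json is deliberately left out: its exact schema
    # is OpenClaw-internal, so let the agent (or the app) regenerate it.
    return removed

# Hypothetical usage against the path named in the post:
# prune_sessions(Path.home() / ".openclaw/agents/main/sessions", "main.jsonl")
```

The point is the same as the prompt's: cron output accumulates as dead session files, and dropping everything but the live session is what removes the context bloat.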
Jay Martel@jaymos·
@elonmusk Elon, how do you feel about the whole mRNA vaccine issue? I feel like the guy making one for his dog is kinda whitewashing what happened with COVID in people’s minds. I just hope the dog’s OK in the future!
English
0
0
1
32
Mert Cemri@mertcemri·
AlphaEvolve proved LLMs can discover novel algorithms, but it remains closed-source, and open-source alternatives (OpenEvolve, GEPA) rely on rigid, static search policies. Introducing AdaEvolve: a fully adaptive evolutionary algorithm that dynamically adjusts its own search strategy based on observed progress. It matches or beats AlphaEvolve and best known Human SOTA on math and systems benchmarks, and boosts Frontier-CS median scores by 33% over the best open-source baseline across 185 tasks. 🧵👇 (1/n)
English
10
68
337
73.5K
Jay Martel@jaymos·
@BWConnector @MatthewBerman Potentially, but if an open-source model beat the big labs, I think it would kick them into overdrive and they would start producing even higher quality. Thus the cycle would continue.
English
0
0
1
8
BandwidthConnector@BWConnector·
@jaymos @MatthewBerman I feel like that risks the viability of the frontier labs and therefore their biggest customers. Open source models that are constantly nipping at the frontier's heels, pushing them to go bigger to stay ahead seems like the better play.
English
1
0
1
8
Matthew Berman@MatthewBerman·
So NVIDIA decided to push open source to the frontier?
Artificial Analysis@ArtificialAnlys

NVIDIA has released Nemotron 3 VoiceChat! A ~12B parameter speech-to-speech model that leads our open weights Conversational Dynamics vs. Speech Reasoning pareto frontier.

Understanding speech-to-speech model performance is multidimensional. Two key and distinct dimensions are raw intelligence and conversational dynamics: how well a model handles the natural rhythms of human conversation, such as turn-taking and interruptions. Amongst full duplex open weights models, NVIDIA’s new Nemotron 3 VoiceChat, V1, leads in balancing these dimensions, setting itself apart from other models on the Conversational Dynamics vs. Speech Reasoning pareto frontier.

Key benchmarking results:
➤ Conversational Dynamics (Full Duplex Bench): Nemotron 3 VoiceChat (V1) scores 77.8%, second among open weights speech-to-speech models behind NVIDIA's own PersonaPlex (91.0%) and ahead of FLM-Audio (62.0%), Moshi (61.0%) and Freeze-Omni (58.7%)
➤ Speech Reasoning (Big Bench Audio): Nemotron 3 VoiceChat (V1) scores 29.2%, second among open weights speech-to-speech models behind Freeze-Omni (33.9%) and well ahead of PersonaPlex (12.6%), FLM-Audio (5.3%) and Moshi (1.7%)
➤ Pareto leader: While Freeze-Omni leads on speech reasoning and PersonaPlex leads on conversational dynamics, Nemotron 3 VoiceChat (V1) is the only open weights model that performs amongst the top 3 on both, making it the clear leader on the pareto frontier between these two critical dimensions
➤ Larger than other open weights models but still relatively small compared to LLMs: Nemotron 3 VoiceChat (V1) has 12B parameters, making it one of the larger open weights speech-to-speech models, while NVIDIA's PersonaPlex is ~7B. Though larger than other open weights speech-to-speech models, it is still relatively small compared to leading LLMs
➤ Context vs. proprietary models: While this release materially advances open weights performance, open weights speech-to-speech models still significantly underperform leading proprietary offerings. For comparison, proprietary models on our Big Bench Audio benchmark score substantially higher: Step-Audio R1.1 at 96%, Grok Voice Agent at 92%, Gemini 2.5 Flash (Thinking) at 92%, and Nova 2.0 Sonic at 87%. The gap between open weights and proprietary remains large in this modality.

As the capability and adoption of speech-to-speech models increases, we expect to expand our set of benchmarks to include elements such as tool-calling and multi-turn instruction following. See more details below ⬇️

English
7
8
142
24.3K
Jay Martel@jaymos·
@cryptopunk7213 that's pretty wild mate, is there any proof that they did have the model train itself?
English
0
0
1
697
Ejaaz@cryptopunk7213·
fuck me, china just launched the 1st AI model that autonomously built itself... and its as good as claude opus 4.6 and gpt-5.4

- minimax M2.7 trained itself through 100+ rounds of autonomous self-improvement. 30% gain. No humans involved. what the actual f*ck
- model now handles 30-50% of the AI lab's OWN AI research
- beats gemini 3.1 at coding and pretty much matches opus 4.6 + gpt 5.4 😶 (china used to lag, now they match)
- doesn't require crazy hardware to run (single a30 gpu)
- absolutely CRUSHES tasks: financial modelling, coding, openclaw - one-shotted

the chinese have officially caught up. self-improving ai is a real thing. all researchers did was set an objective and the model figured the rest out. i wasn't expecting this from minimax. im now wondering wtf deepseek is going to be like.
MiniMax_Agent@MiniMaxAgent

MiniMax-M2.7 just landed in MiniMax Agent. The model helped build itself. Now it's here to build for you. ↓ Try Now: agent.minimax.io

English
211
313
2.9K
377.9K
Jay Martel@jaymos·
@boxmining @nvidia Brain inference: 10W. Blackwell cluster doing the same job: 700W. You might have a point here bro... human daisy-chain farms when? 😂
English
0
0
1
12
Boxmining@boxmining·
Food for thought The human brain can process graphics faster than any @nvidia gpu We already have General intelligence We’re more power efficient than any gpu What’s to prevent AI from enslaving us and daisy chaining our brains up for inference
English
9
0
14
911
Jay Martel@jaymos·
@MiniMaxAgent hey it's a great model, is there any proof to the claim that it helped build itself?
English
0
0
0
610
MiniMax_Agent@MiniMaxAgent·
MiniMax-M2.7 just landed in MiniMax Agent. The model helped build itself. Now it's here to build for you. ↓ Try Now: agent.minimax.io
English
68
184
1.4K
576.5K
Jay Martel@jaymos·
@elonmusk I think the edge comes from the four programmable Agent settings, no other SOTA model offers this. if you can keep pushing that type of customisation for the end user we'd all be very happy 😁
English
1
2
8
262
Jay Martel@jaymos·
@rUv are these bad boys actually legit?
English
0
0
1
774
rUv@rUv·
This is nuts. If these numbers hold, we’re looking at a completely different model of intelligence with the Cognitum.One agentic chip. 6×6 mm. ~2 billion transistors. Under 2 watts. 257 cores spiking to 8 GHz.

That’s impressive on paper. But what matters is intelligence per watt. This isn’t about FLOPs. It’s about decisions per joule. Continuous, event driven, always on. Like a brain. Mostly idle, spiking when something matters, updating state in real time instead of batching work.

Now take that and apply it to agents. Instead of running RuFlo, OpenClaw, Codex or Claude Code on a Mac mini or in the cloud, imagine running it at wire speed directly on the silicon. No latency. No API calls. No incremental cost. Just a local agent, always running, always thinking, always building.

That changes everything. You can have persistent coding agents embedded in devices. Systems that monitor, write, test, and adapt code continuously. A factory line that rewrites its own control logic. A network that debugs itself. A personal environment that evolves with you in real time.

Pair it with RuVector and dynamic mincut, and now it has memory and coherence. It doesn’t just generate. It understands structure and maintains stability. We’re not scaling models anymore. We’re embedding intelligence into the fabric of the world.
English
28
35
298
25.2K
Jay Martel@jaymos·
@MilkRoad Sailor reminds me of a mathematical Gopher. History will make him look the smartest man in the room—or the silliest. It just depends on where BTC ends up. I think Chamath is probably right as well
English
0
0
1
84
Milk Road@MilkRoad·
Saylor and Chamath just had a very public fight about what AI does to capital markets. Worth understanding the full argument.

Chamath's thesis: AI destroys corporate moats. If any competitive advantage can be replicated or eroded by AI, companies deserve lower multiples. He's talking 2-7x free cash flows instead of the sky-high premiums tech has commanded for years. Capital rotates toward durable physical assets that AI can't touch. Things like farmland.

Saylor's counter: If moats are temporary, the ultimate safe haven is an asset that has no moat to destroy. Bitcoin is scarce, neutral, and immune to AI disruption. Fixed supply of 21 million coins. It can't be replicated, trained away, or made obsolete. His thesis (surprise, surprise): $BTC is the primary beneficiary of this rotation.

Then Chamath threw a wrench in it. He said Bitcoin would need to be quantum resistant first. Saylor's response here was actually pretty sharp: Quantum computing doesn't just threaten Bitcoin. It threatens the entire digital stack. AI systems. Cloud infrastructure. Banks. The internet. If quantum breaks cryptography, everything upgrades together. Bitcoin isn't a special target.

Chamath didn't move. A store of value has to be 100% hack-resistant. Binary requirement. Not negotiable for that use case.

Where it all shakes out? Chamath's AI-compresses-multiples thesis is probably correct. Saylor's digital capital argument likely gets stronger if that thesis plays out. The quantum debate is a distraction for now; we're years from it mattering. $BTC just hit ~$76k. The market's already voted.
Michael Saylor@saylor

@chamath If AI compresses terminal value and makes every moat temporary, capital will rotate to assets with no disruption risk. Bitcoin is Digital Capital - scarce, neutral, and impervious to AI disruption. $BTC should be the primary beneficiary of this shift.

English
48
33
381
107.1K
Jay Martel@jaymos·
@chamath the middle class left holding the bag. clip that
English
0
0
1
104