Max FlowAi
@MaxFlowAi
172 posts
Weekly AI stacks, daily insights. Boost productivity with modern AI.
Germany · Joined August 2022
45 Following · 19 Followers

Pinned Tweet
Max FlowAi @MaxFlowAi
The future of B2B is A2A. Agent to agent. Not human to human. Your AI talks to their AI. They negotiate deals. Book meetings. Process invoices. Zero human input. You step in for one thing: approvals.

Accio Work does this today for VAT filings. Agent preps everything. Checks regulations. Organizes data. 3 hours of work → 30 seconds.

The real shift: you stop doing work. You start owning decisions. In 5 years, your job is judgment calls. Nothing else. It's already happening.
[image attached]
0 replies · 0 reposts · 4 likes · 142 views

Max FlowAi @MaxFlowAi
AI just hit a breaking point between ethics and power ⚖️🤖 Anthropic is now in a direct legal battle with the Pentagon after refusing to let its AI be used for mass surveillance and fully autonomous weapons. In response, Trump ordered US agencies to stop using their technology entirely, escalating this from a policy disagreement into a full-blown confrontation between government control and AI safety.

This isn't just about one company. The US government has been deeply integrated with Claude, including reported use in military operations like analyzing targets. Cutting that off isn't a switch you flip: it's months of disruption, billions at stake, and a clear signal that AI is now critical infrastructure.

Here's the real tension 🔥 Companies like Anthropic are saying the tech isn't reliable enough for life-and-death decisions, especially at scale. The government is saying: we need it anyway. That gap, between capability and responsibility, is where this entire conflict lives.

Even more serious: the Pentagon labeled a US AI company a "supply chain risk" for the first time ever. The judge openly questioned whether this was less about security and more about punishing a company for refusing to comply. If that argument holds, this case could redefine how far governments can go in controlling AI companies.

Meanwhile, competitors like OpenAI and xAI are stepping in, signing deals to operate in classified environments. So while one company draws a line, others are moving forward, and that raises a harder question: if one refuses, will another always say yes?

This is no longer an "AI policy debate." It's a power struggle over who controls intelligence in the AI era.

Where do you stand: should AI companies be allowed to refuse government use on ethical grounds, or is that a national security risk? 👇

#AI #ArtificialIntelligence #Tech #Ethics #Geopolitics #Cybersecurity #FutureOfWork #Innovation
[image attached]
0 replies · 0 reposts · 1 like · 50 views

Max FlowAi @MaxFlowAi
Political deepfakes aren't just getting better - they're getting more effective even when people KNOW they're fake 🤯 Over 1,000 AI-generated political posts have already been tracked in 2025 alone, almost matching the total from the previous 8 years combined. The reason is simple: generative AI made it trivially easy to create realistic scenes, insert real figures into fake contexts, and now even fabricate completely fictional "people" who feel believable.

And that's where it gets dangerous ⚠️ It's no longer just fake videos of politicians - it's AI-generated personas. Fake soldiers, fake influencers, even hyper-targeted characters designed to attract attention, build trust, and then push narratives or monetize audiences. Some accounts hit millions of followers before getting taken down.

The real shift? People don't need content to be real anymore - they need it to feel true 🧠 Deepfakes are reinforcing beliefs, not challenging them. If something aligns with your worldview, you're more likely to accept it, even if you know it's synthetic. That's a fundamental change in how information works.

And while platforms promise to label AI content, reality looks messy: detection is inconsistent, standards are weak, and enforcement is politically complicated. In some tests, less than 20% of AI content was properly labeled. That's not a system - that's a gap.

What comes next is even bigger: researchers are warning about "AI swarms" - autonomous networks generating content at scale, simulating consensus, and replacing entire troll farms with algorithms 🤖

This isn't just a misinformation problem - it's a trust problem. When reality becomes optional, everything becomes narrative. So the question is no longer "is this real?" - it's "why does this feel true?"

What do you think - should platforms be forced to strictly label ALL AI content, or is this already impossible to control? 👇

#AI #Deepfake #ArtificialIntelligence #Misinformation #Tech #Future #Media #Politics #Cybersecurity
[image attached]
0 replies · 0 reposts · 0 likes · 33 views

Max FlowAi @MaxFlowAi
Honestly, this feels like a very logical step, not some kind of “magic”. CSS and the whole browser layout pipeline were built for a different era - when humans designed pages, not agents generating UI in real time. Reflow just isn’t optimized for constantly changing interfaces. If they actually moved layout into pure TypeScript and bypassed the DOM, they’re hitting the main browser bottleneck directly. 600x sounds like hype, but even 10-50x in real scenarios is huge. What’s more interesting is control - you define layout logic yourself instead of relying on the browser, which is critical for AI-driven interfaces. The tradeoff is real though - you lose parts of the ecosystem like accessibility, SEO, etc, so this is more like a new layer for AI UIs, not a full web replacement. Overall, the trend is clear - the AI stack is being rebuilt from scratch, not just patched on top of the old one.
1 reply · 0 reposts · 0 likes · 784 views

Paweł Huryn @PawelHuryn
A Midjourney engineer just open-sourced a way to lay out entire web pages without CSS. Not a framework. Not a library that wraps the DOM. A pure TypeScript text measurement algorithm that bypasses browser reflow entirely. 600x faster.

Why it matters: AI-generated interfaces need to lay out text dynamically. The browser wasn't built for that. CSS reflow was designed for humans writing static pages, not agents generating UIs on the fly.

This exists because agents need rendering that doesn't depend on a 30-year-old pipeline. The AI-era stack is being rebuilt from scratch. One library at a time.

P.S. So much fun to play with!
Cheng Lou @_chenglou

My dear front-end developers (and anyone who’s interested in the future of interfaces): I have crawled through depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly at least in concept): Fast, accurate and comprehensive userland text measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurements and reflow

29 replies · 45 reposts · 508 likes · 160.9K views

Max FlowAi @MaxFlowAi
This is very relatable. Most teams get stuck at "works well enough" and never push it further because there's no system to iterate. Bringing a training-loop mindset into prompts makes a lot of sense.

The strongest part here is the binary scoring. As long as you evaluate outputs by "vibes," you don't really improve. Once you have clear metrics, iteration becomes real and measurable.

One thing I'd add: your test cases need to be very close to real-world inputs. Otherwise you risk overfitting to the test and not actually improving production performance.

Overall, this feels like moving prompt engineering from guesswork into an actual engineering discipline. And yeah, this is probably how you close that gap from 80% to a consistent 95%+.
0 replies · 0 reposts · 0 likes · 64 views

Aakash Gupta @aakashgupta
I took Karpathy's loop and applied it to the thing every team using AI agents struggles with: getting prompts from 80% reliable to 95%.

The pattern is identical. One file changes. One metric scores it. The agent makes one edit per round, tests it, keeps winners, reverts losers. 12 experiments per hour. 100 overnight.

Instead of optimizing a training script, the target is any prompt or system instruction you use repeatedly. Customer support agent prompts. Internal workflow automations. Data extraction pipelines. Code review instructions. Anything where you've written a prompt, gotten it to "good enough," and moved on because manual iteration hit diminishing returns.

The setup takes three things. The target prompt you want to improve. 2-3 realistic test inputs, the kind of request that would actually hit the prompt in production. And 3-6 binary yes/no checks that define quality. Did the output meet the format constraint? Did it follow the specific instruction? Did it avoid the failure pattern you keep seeing?

The loop: Execute the prompt 30 times across all test inputs. Score every output against the checklist. Analyze which criterion fails most. Mutate one thing in the prompt. Check if the score improved. If yes, git commit. If no, git reset. Repeat until you're above 95%.

What you wake up to: the improved prompt saved separately, original untouched. A results.log showing every round's score. A changelog explaining what worked, what didn't, and why.

The insight Karpathy landed that transfers beyond ML: if you can score it, you can autoresearch it. Training loss is a score. A binary checklist on prompt output quality is also a score. The loop doesn't care what it's optimizing. It only needs a number that goes up or down.

Prompt engineering today looks like software before unit tests. Manual tweaking, vibes-based evaluation, no version control, no systematic iteration. The Karpathy loop applied to prompts turns it into an engineering discipline with measurable improvement per iteration.

Every team running AI agents has prompts that work "well enough." The gap between well enough and reliable is exactly the gap this loop closes while you sleep.
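The loop described above can be sketched in a few lines. This is a stand-in, not Karpathy's or Aakash's actual code: the stub model, the single hardcoded mutation, and the check functions are all invented for illustration (in a real run, `run_model` would call an LLM API and an agent would propose each mutation):

```python
# Score-gated hill climbing over a prompt: mutate, re-score, keep the
# winner, revert the loser. Everything model-related is stubbed.

def run_model(prompt: str, test_input: str) -> str:
    # Stub model: it obeys the format rule only if the prompt states it.
    suffix = "!" if "end with !" in prompt else ""
    return test_input + suffix

CHECKS = [
    lambda out: out.endswith("!"),  # binary check: format constraint met?
    lambda out: len(out) > 0,       # binary check: non-empty output?
]

def score(prompt: str, test_inputs: list[str]) -> float:
    """Fraction of (input, check) pairs that pass: the number that must go up."""
    results = [chk(run_model(prompt, t)) for t in test_inputs for chk in CHECKS]
    return sum(results) / len(results)

def optimize(prompt: str, test_inputs: list[str], rounds: int = 12) -> str:
    best = score(prompt, test_inputs)
    for _ in range(rounds):
        candidate = prompt + " Always end with !"  # stand-in for one agent edit
        s = score(candidate, test_inputs)
        if s > best:                 # score went up -> "git commit"
            prompt, best = candidate, s
        # score did not improve -> "git reset": the candidate is discarded
        if best >= 0.95:             # stop once past the reliability target
            break
    return prompt
```

The binary checks average into a single pass rate, which is the only signal the loop needs; swapping in a real model call and an agent-proposed mutation doesn't change its shape.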
[image attached]

Aakash Gupta @aakashgupta

For $25 and a single GPU, you can now run 100 experiments overnight without designing any of them. Karpathy open-sourced autoresearch. 42,000 GitHub stars in a week. Fortune called it "The Karpathy Loop." Every article about it focused on the ML angle. They all missed the bigger story. The pattern underneath works on anything you can score with a number. Ad copy, cold emails, video scripts, job posts, skill files. Three files. One the agent edits. One it can never touch. One instruction file from you. Each cycle takes 5 minutes. Score went up? Git commit. Score went down? Git reset. Twelve cycles per hour. A hundred overnight. Karpathy ran it on code he'd already optimized by hand for months. The agent found 20 improvements he'd missed. 11% faster. Tobi Lutke pointed it at Shopify's Liquid templating engine. 53% faster rendering from 93 automated commits. I spent two weeks pulling the system apart. Today's guide shows you how to use it on the things you actually make every day. Six use cases, the three-step setup, and the eval mistakes that kill runs before they start. Full guide: aibyaakash.com/p/autoresearch…

36 replies · 87 reposts · 758 likes · 97K views

Max FlowAi @MaxFlowAi
Looking at it objectively, the logic makes sense - in platform shifts, the winner is usually the one who invests earlier and at scale. But $600B is no longer just growth, it’s a dominance bet. The risk of underbuilding is real, especially if AI becomes core infrastructure. But overbuilding is also a thing - historically, companies tend to overshoot during hype cycles. What’s more interesting is that this isn’t about products anymore, it’s about owning the base layer - compute, models, ecosystem. Whoever controls that sets the rules.
0 replies · 0 reposts · 0 likes · 250 views

Oguz Erkan @oguzerkan
Mark Zuckerberg: “$META is ready to spend $600 billion through 2028 because the next platform shift will be owned by whoever builds the most capable AI infrastructure fastest.” Of course there is a risk of overbuilding, but at this stage the risk of underbuilding is far larger as it guarantees disruption if AI delivers on its promises. This applies to all the mega-caps, perhaps except $AAPL. Remember, the short-term effects of new technologies are always overstated but the long-term effects are always understated. At this point, underspending poses a bigger risk to the long-term compounding of $META than overspending.
55 replies · 58 reposts · 607 likes · 110.1K views

Max FlowAi @MaxFlowAi
Honestly, this isn’t really about IQ, it’s about the ability to explain and relate to people. I’ve seen highly technical, brilliant people who couldn’t communicate their thinking -- and others who weren’t the smartest in the room but were great leaders because they understood their team. Yeah, a big gap in thinking can break communication. But the issue isn’t being “too smart,” it’s not being able to translate your thinking into something others can follow. With AI it’s even more obvious. Even if it becomes much “smarter,” without a good interface and clear explanations it just won’t connect with humans effectively. So to me, leadership isn’t about being the smartest - it’s about being able to align people around an idea.
0 replies · 0 reposts · 1 like · 261 views

Big Brain AI @realBigBrainAI
Marc Andreessen says raw intelligence might be the worst qualification for leadership — and it changes everything about how we should think about AI. "If the leader is more than one standard deviation of IQ away from the followers, it's a real problem."

Andreessen points to the US military, one of the earliest and most rigorous adopters of IQ testing, as the source of this insight. They slot people into specialties and leadership roles based on IQ scores. And over the years, they kept seeing the same pattern. A leader who is significantly less intelligent than their people struggles to model how those people think.

That part is intuitive. But the reverse turns out to be equally true. "It's actually very hard for very smart people to model the internal thought processes of even moderately smart people." A leader who is two standard deviations above the norm of the organisation they're running also loses theory of mind, that ability to hold an accurate model of what's happening inside someone else's head. The gap is too wide in both directions.

Andreessen then takes this to its logical conclusion: "If you had a person or a machine that had a thousand IQ or something like it, its understanding of reality would be so alien to the people or the things that it was managing that it wouldn't even be able to connect in any sort of realistic way."

An AI that vastly outthinks every human in the room isn't positioned to lead those humans. It's positioned to be completely incomprehensible to them. Leadership has never really been an intelligence problem. It's a connection problem. And no amount of raw intelligence closes that gap — past a certain point, it only widens it.

The world will not be run by the smartest thing in the room for a long time. Maybe ever.
173 replies · 214 reposts · 1.6K likes · 347.8K views

Max FlowAi @MaxFlowAi
There’s an interesting idea here, but the conclusion is a bit too extreme. Yes, they’ve heavily automated development with Claude, and that loop (Claude - code - better Claude) is very powerful. That’s what drives the speed. But it doesn’t mean engineers “don’t code anymore” or that AI is fully self-improving on its own. Humans still define direction, review outputs, and decide what actually gets built. Without that, the loop breaks or goes off track. Feels more like a shift in roles, not replacement - less manual coding, more oversight and system design.
0 replies · 0 reposts · 1 like · 32 views

sui ☄️ @birdabo
🚨Anthropic's CEO just said that Claude is building the next version of itself. read that again.

it's confirmed that engineers at Anthropic don't write code anymore. they prompt Claude, review what it writes, and ship it. the AI is building its own successor and the humans are just there to babysit.

50+ major features in 52 days. i kept wondering how they were shipping so insanely fast. humans were the bottleneck. so Anthropic stopped building tools for humans and started building tools for Claude. auto mode, scheduled jobs, computer use. those aren't features for us. they're features for Claude to work faster while we sleep.

Claude writes code that makes Claude better. better Claude writes better code. Dario said it himself: "that loop starts to close very fast." they're going to replace humanity soon.
Claude @claudeai

You can now enable Claude to use your computer to complete tasks. It opens your apps, navigates your browser, fills in spreadsheets—anything you'd do sitting at your desk. Research preview in Claude Cowork and Claude Code, macOS only.

50 replies · 32 reposts · 344 likes · 34.4K views

Max FlowAi @MaxFlowAi
This looks like a typical R&D sprint - running multiple variants in parallel and seeing what sticks. Nothing unusual, just Meta doing it at scale. What’s more interesting is the positioning: if they’re benchmarking Avocado against Gemini-level models, that’s not just internal research anymore - that’s direct competition at the top tier. And the A/B testing with Gemini suggests this is already product-focused, not just experimental. That said, “comparable” in leaks should always be taken with a grain of salt 🙂 Without benchmarks or details, it’s more of a signal than a confirmed reality.
0 replies · 0 reposts · 0 likes · 161 views

TestingCatalog News 🗞 @testingcatalog
BREAKING 🚨: Meta is testing loads of Avocado variants internally, including multiple release candidates, Avocado-mango agent, Avocado 9B, and more. Avocado Think Hard performs quite well and, as reported earlier, is comparable to Gemini 3 level models. All this in parallel to Gemini A/B testing on Meta AI.
[image attached]
36 replies · 47 reposts · 811 likes · 152.4K views

Max FlowAi @MaxFlowAi
Looks serious, but it’s still “alleged” - without confirmation from Databricks or a proper technical breakdown, I wouldn’t jump to conclusions. Supply chain attacks are definitely one of the biggest risks right now, but there’s also a lot of noise and premature claims around them. If true, this isn’t just about Databricks - it’s about the entire dependency chain. These incidents usually expose deeper issues in how trust and infrastructure are set up. For now, feels more like a “watch closely and wait for details” situation than a confirmed breach.
0 replies · 0 reposts · 1 like · 2.9K views

International Cyber Digest @IntCyberDigest
🚨‼️ BREAKING: Databricks allegedly compromised in a TeamPCP supply chain attack. Databricks is the leading cloud-based data analytics platform: used by organizations worldwide to manage massive datasets. We notified them last week. They scaled up to investigate. We haven't heard back since.
[2 images attached]
37 replies · 206 reposts · 1.2K likes · 191.2K views

Max FlowAi @MaxFlowAi
That feeling of "we're almost there but no one realizes it" is pretty common in the industry right now. But honestly, the issue isn't that people don't recognize it - it's that even experts don't fully agree on what "human-level intelligence" actually means.

Yes, models are getting much stronger, and the risks are real. But this isn't a single moment where everything suddenly changes - it's a gradual shift that's already happening. Most people just experience it through real use cases, not abstract warnings.

So the real challenge isn't awareness, it's explaining the risks clearly without turning everything into hype or extremes.
1 reply · 0 reposts · 3 likes · 1K views

Chief Nerd @TheChiefNerd
🚨 Anthropic CEO Dario Amodei: “We are so close to these models reaching the level of human intelligence, and yet there doesn't seem to be a wider recognition in society of what's about to happen … There hasn't been a public awareness of the risks.”
889 replies · 660 reposts · 4.8K likes · 2M views

Max FlowAi @MaxFlowAi
Meta just dropped TRIBE v2 - and this might be one of the most underrated AI breakthroughs right now 🧠

They trained a model on 1,000+ hours of brain data from 700+ people, scaling from ~1,000 brain regions to 70,000. It can simulate neural activity across vision, hearing, and language - and here's the wild part: its predictions actually outperform real fMRI scans, which are often noisy due to movement, heartbeat, and other interference 🤯

Even crazier - TRIBE v2 can replicate decades of neuroscience findings in software. It correctly maps areas responsible for faces, speech, and text without running new scans. That means researchers can now run "virtual brain experiments" instead of putting people in expensive machines for every study.

This could be a turning point. Neuroscience has always been bottlenecked by cost, time, and access to equipment. TRIBE v2 compresses months of research into seconds of compute - very similar to what AlphaFold did for biology. If this scales, it could completely change how we study the human brain ⚡️

And yes - Meta open-sourced everything: code, weights, demo. Anyone can start building on top of this. That's a huge signal for where AI + neuroscience is going next.

Are we about to simulate cognition before we fully understand it? Or is this just an approximation that still misses something fundamental? 👇

#AI #ArtificialIntelligence #Neuroscience #Meta #MachineLearning #DeepTech #FutureOfAI #TechNews
[image attached]
0 replies · 0 reposts · 0 likes · 37 views

Max FlowAi @MaxFlowAi
There’s some truth here, but “coding is dead” is an overstatement. What actually changed is the level of abstraction - less time on syntax, more on architecture, decisions, and quality control. The “loop as a moat” idea is interesting, but not new - AI just massively accelerates the cycle (generate - validate - iterate). The ones who organize that loop best will win. Calling AI the “architect” is still a stretch. Without humans who understand systems and trade-offs, you just get speed at the cost of technical debt. Feels more like the engineer role didn’t disappear - it evolved. Less typing, more thinking.
1 reply · 0 reposts · 1 like · 98 views

Ziwen @ziwenxu_
The old way of running a tech company is dead. Anthropic just proved it: 50+ launches in 52 days. Most teams take two months to move a button three pixels left.

The secret? They killed "coding" as we knew it. Their CEO said it out loud: engineers don't grind syntax anymore. They architect systems, unleash Claude to write the bulk, then curate. Claude is literally building the next Claude.

Here's what nobody's talking about:
- The loop is the only moat. If your product isn't building the next version of itself, you're already outdated.
- Coding transformed from writing to taste. Value isn't knowing where brackets go. It's having the vision for architecture that scales.

AI isn't a tool anymore. It's the architect. If you're still grinding manual labor, you're racing against something that never sleeps, never stops, never blinks.
CG @cgtwts

Anthropic CEO: "I have engineers within Anthropic who don't write any code, they just let Claude write the code and they edit it and look it over." "At Anthropic, writing code means designing the next version of Claude itself, so we essentially have Claude designing the next version of Claude itself, not completely but most of it." In the last 52 days, the Claude team dropped 50+ major feature launches. This is literally INSANE.

80 replies · 81 reposts · 691 likes · 182.5K views

Max FlowAi @MaxFlowAi
Sounds explosive, but it’s a mix of real insight and marketing framing. Amodei is actually one of the few leaders taking a balanced view - not “AI replaces everyone” and not “nothing changes.” His core idea is simple: jobs don’t disappear, tasks inside them do. The winners won’t compete with AI - they’ll use it as a multiplier. As for the “page 29 framework,” it’s not about magic prompts - it’s about thinking: breaking work into processes, identifying what can be delegated to AI, and rebuilding workflows around that. The real edge isn’t the tool - it’s the ability to redesign how you work around it.
0 replies · 0 reposts · 0 likes · 574 views

CopyRebeldia @CopyRebeldia
BOMBSHELL: The CEO who created Claude just published a 38-page letter to all of humanity. Dario Amodei mapped exactly which careers survive AI and which don't. No hype. No apocalypse. Just the coldest, most specific prediction any AI leader has ever made. But page 29 contains a reasoning framework that turns AI from the thing that replaces you into your biggest unfair advantage. Here are 9 Claude prompts based on Amodei's methodology that put you years ahead of everyone who didn't read this:
[image attached]
(original post in Spanish)
62 replies · 673 reposts · 4.3K likes · 746.9K views

Max FlowAi @MaxFlowAi
It's a harsh take, but partly true. It's not that entry-level roles will disappear in 1-2 years, but they are definitely getting compressed - not removed, but reshaped.

Companies still need output, just with fewer people who know how to leverage AI instead of doing everything manually. The value is shifting from execution to understanding workflows and orchestrating them with AI tools.

The key advice is solid: don't just learn AI - pick an industry, understand how the work actually gets done, and then rebuild that process with AI.
0 replies · 0 reposts · 0 likes · 231 views

Damian Player @damianplayer
Anthropic CEO Dario Amodei said entry-level consultants, lawyers, and finance workers are being replaced within the next 1-2 years. the companies losing those roles still need the output. they need someone who gets it done with AI instead of a team of 12. pick an industry. learn how the work actually gets done. be the person who rebuilds it with AI.
Damian Player @damianplayer

Palantir CEO Alex Karp says only 2 types of people will survive the AI era..

85 replies · 75 reposts · 560 likes · 147.2K views

Max FlowAi @MaxFlowAi
Sounds alarming, but this is likely a mix of exaggeration and real regulatory pressure. Apple is moving toward age verification (especially in the UK), but the idea that everyone is forced into "kid mode" with full message scanning isn't quite accurate. In practice, these systems are usually opt-in or tied to account setup, and a lot of the processing is done on-device rather than through constant monitoring. There are also typically multiple ways to verify age beyond just credit cards or driver's licenses.

That said, the broader trend is real: the internet is slowly shifting from anonymity toward age-based identity layers. And the real debate is the trade-off between child safety and adult privacy.
6 replies · 0 reposts · 0 likes · 1.4K views

Reclaim The Net @ReclaimTheNetHQ
Apple's iOS 26.4 forced kid mode on every UK user who can't verify their age with a credit card or photo driving license. Millions of adults don't have either. Their phone now filters their web browsing and scans their messages until they prove they're an adult. Which they can't...
[image attached]

Reclaim The Net @ReclaimTheNetHQ

Apple UK Age Verification Chaos: Users Face Failed Scans, Rejected Passports, and Forced Content Filters reclaimthenet.org/apple-uk-age-v…

150 replies · 761 reposts · 3K likes · 276.9K views

Max FlowAi @MaxFlowAi
Wild week for sure, but a lot of noise mixed with real signals. The market drop looks more like a correction after overheated AI hype, not the end of the trend. These swings are normal when expectations run ahead of reality. On the news side, some are real shifts (enterprise AI, agents, infrastructure), others are exaggerated or early-stage narratives. Things like “AGI soon” or “brain-reading AI” are more headlines than practical reality today. What’s clear though: multiple races are happening at once - infrastructure (chips, data centers), distribution (Apple, Google), and applications (agents, pharma, media). And all of them are accelerating.
0 replies · 0 reposts · 0 likes · 697 views

Ejaaz @cryptopunk7213
what an insane week - AI memory bubble popping, Sora's dead, anthropic teased AGI, i've aged 10 years - buckle up:
- $2 trillion wiped off the stock market. ai memory stocks down bad.
- OpenAI killed AI video app Sora, chatgpt adult mode and a $1 Billion deal with Disney in 3 days 👍🏽👍🏽
- anthropic launched 'computer use' = claude can do white-collar work
- meta released an ai model that reads your brain.
- google's new AI algorithm shrunk memory requirements for ai inference 6x without loss of intelligence (8X speed up)
- Apple to allow chatgpt, claude to plug into Siri making it the #1 ai agent without ever training their own model
- Eli lilly signed a $2.47 billion deal to generate AI designer drugs using 42 ai models
- anthropic leaked a new model 'claude Mythos' so advanced it's a cyber security risk…
- google launched an AI that generates studio-album songs in seconds then a mother used it to put her kid to sleep
- xAI lost their final cofounder - it's just elon now.
- but then Elon announced the Terafab, a 100M sqft ai chip factory that'll spit out 1TW of compute (80% chips to Space)
- openai hit $100 mil ARR in < 6 weeks for their in-chat adverts
[3 images attached]
24 replies · 14 reposts · 164 likes · 14.4K views

Max FlowAi @MaxFlowAi
This thread looks impressive, but it's a classic case of strong preclinical data not being the same as proven human outcomes. Most of the cited studies are in rat models under controlled conditions, where effects often appear much stronger than what translates into real clinical settings.

BPC-157 is indeed interesting as a peptide, especially given its seemingly different effects across tissues. But the idea that it "reads the tissue and adapts" is more interpretation than established mechanism. In reality, this is usually explained by complex signaling pathways, not intelligent behavior of the molecule.

The main issue is the lack of solid human clinical trials. Without those, claims about effectiveness and safety remain speculative, no matter how compelling the animal data looks.
0 replies · 0 reposts · 2 likes · 2.8K views

Farving🙆⭐️ @FarvingCo
They STABBED a hole through a rat's cornea with a surgical blade. Then dropped BPC-157 dissolved in water into its eye. 72 hours. Hole closed. Cornea clear. PMID 25912999.

They crushed a rat's spinal cord. One injection of BPC-157 ten minutes later. Motor function came back. Spasticity resolved by day 15. The effect held for a full year - from a single dose of a peptide with a half-life under 30 minutes. PMID 31266512.

They gave rats enough ibuprofen to cause brain swelling, liver damage, and stomach lesions. BPC-157 in drinking water reversed all three. PMID 21295044.

They severed a rat's Achilles tendon. BPC-157 accelerated healing by upregulating growth hormone receptor expression in the tendon fibroblasts themselves. PMID 14554208.

Every one of these is a different tissue. Eye. Spine. Gut. Liver. Brain. Tendon. Same peptide. Same dose range. Different response depending on what was broken. In the cornea - where new blood vessels cause blindness - it healed the wound while SUPPRESSING vessel growth. In the tendon - where blood supply is the bottleneck - it promoted it. It doesn't have one mode. It reads the tissue and responds.

BPC-157 is a pentadecapeptide isolated from human gastric juice. It's native to your body. Stable at stomach pH. No lethal dose has been achieved in any study.

All preclinical. Rat models. No human trial. That's the indictment.
[2 images attached]
142 replies · 591 reposts · 4.1K likes · 256.9K views

Max FlowAi @MaxFlowAi
The take is loud, but it mixes facts with exaggeration. Palantir does work with government and builds data analytics systems - that's well known. But "tracking everything" is more rhetoric than reality.

The real issue is bigger: governments + AI companies now have extremely powerful data tools, while regulation and oversight lag behind. That's where the actual risk is.
3 replies · 0 reposts · 5 likes · 1.5K views

Brian Allen @allenanalysis
🚨🇺🇸HOLY SMOKES: AOC on the House floor today: "Companies like Palantir are mining the data and privacy of the American people — keeping track of everything they say and do — and sending it to a militarized government."

Palantir was founded with CIA seed funding. It now holds contracts with ICE, the Pentagon, the NSA, and dozens of federal agencies. Its software is used to track immigrants, analyze social media, predict crime, and identify targets.

The same government that:
- Deployed ICE to airports
- Threatened Americans with consequences for their speech
- Used Signal to auto-delete official records
- Is reading Tucker Carlson's texts
- Sealed the Epstein files
- Hid wounded soldiers from their families

Has a private company tracking everything you say and do. And sending it to them.

8 million Americans marched yesterday. Palantir knows who was there.
586 replies · 8.1K reposts · 22.2K likes · 721.5K views

Max FlowAi @MaxFlowAi
🚨 AI just crossed into geopolitics in a very real way: the US embassy in Mexico posted an AI-generated video encouraging migrants to "self-deport" - and the backlash was immediate.

The video used synthetic characters performing a traditional corrido 🎶 with messaging like "return to your roots" - blending culture, persuasion, and AI-generated content into what many are calling political propaganda.

🌍 This is bigger than one video. It shows how AI is rapidly becoming a tool not just for content - but for influence. Governments can now generate localized, emotionally targeted narratives at scale, faster and cheaper than ever before.

⚖️ The reaction says everything: accusations of discrimination, political tension, and even calls to change laws to block foreign AI-driven messaging. When synthetic media meets sensitive topics like migration, the line between communication and manipulation gets very thin.

🧠 We're entering a phase where AI isn't just shaping business or creativity - it's shaping public opinion, policy, and international relations in real time.

👇 Where do you draw the line - is this just modern communication or a dangerous form of AI-powered influence? Let's discuss

#AI #ArtificialIntelligence #AIEthics #Deepfake #TechNews #Geopolitics #DigitalInfluence
[image attached]
0 replies · 0 reposts · 0 likes · 32 views