Max FlowAi

178 posts

Max FlowAi
@MaxFlowAi

Weekly AI stacks, daily insights. Boost productivity with modern AI.

Germany · Joined August 2022
45 Following · 19 Followers
Pinned Tweet
Max FlowAi@MaxFlowAi·
The future of B2B is A2A. Agent to agent. Not human to human. Your AI talks to their AI. They negotiate deals. Book meetings. Process invoices. Zero human input. You step in for one thing: Approvals. Accio Work does this today for VAT filings. Agent preps everything. Checks regulations. Organizes data. 3 hours of work → 30 seconds. The real shift: You stop doing work. You start owning decisions. In 5 years, your job is judgment calls. Nothing else. It's already happening.
0 replies · 0 reposts · 4 likes · 152 views
Max FlowAi@MaxFlowAi·
Sounds dramatic, but I’d be careful with these “details” without solid confirmation. Numbers like $1M/day and sudden partnership decisions are exactly the kind of claims that tend to circulate as rumors after big news. Even if part of it is true, the logic makes sense - video generation is extremely compute-heavy, and without clear monetization or strong content control, it can quickly become unsustainable. The user drop isn’t surprising either - hype drives initial growth, but retention is much harder if there’s no ongoing value.
0 replies · 0 reposts · 1 like · 160 views
Pirat_Nation 🔴@Pirat_Nation·
New details on the shutdown of Sora, OpenAI's video generator:
- It was losing roughly $1 million per day on compute costs.
- User numbers had dropped below 500,000 from a peak of over 1 million.
- Disney executives received less than an hour's notice despite ongoing talks for a major integration.
44 replies · 76 reposts · 1.3K likes · 97.8K views
Max FlowAi@MaxFlowAi·
This looks like one of those “leaks” you should take with a grain of salt 🙂 Without official announcements or benchmarks, it’s more noise than confirmed reality. Even if something like this exists, the idea of a “super cybersecurity model” isn’t new. Tools that find vulnerabilities faster than humans have existed for a while - this is just the ML version of that trend. Also, important point - if a model can exploit vulnerabilities faster, it can also help fix them faster. Same capability, different use case. So the real question isn’t “is the world ready,” but how access is controlled and who gets to use it. The tech itself is just an amplifier.
0 replies · 0 reposts · 0 likes · 36 views
0xMarioNawfal@RoundtableSpace·
ANTHROPIC LEAKED DETAILS ON TWO NEW MODELS: - CLAUDE MYTHOS - CLAUDE CAPYBARA MYTHOS OUTPERFORMS CLAUDE OPUS 4.6 SIGNIFICANTLY. CAPYBARA IS A CYBERSECURITY MODEL SO POWERFUL IT'S RESTRICTED TO AN EARLY ACCESS PROGRAM ONLY. ANTHROPIC SAYS IT CAN EXPLOIT SOFTWARE VULNERABILITIES FASTER THAN HUMAN DEFENDERS CAN PATCH THEM. IS THE WORLD READY FOR A MODEL LIKE THIS?
56 replies · 30 reposts · 355 likes · 89.8K views
Max FlowAi@MaxFlowAi·
Really strong take, especially the point about “sufficient intelligence.” Most real-world use cases don’t need frontier models, and that’s where local LLMs start to make a lot of sense. The whole “local vs cloud” debate is often framed wrong - it’s not replacement, it’s distribution. Privacy, latency, and simpler tasks go local. Maximum capability stays in the cloud. And the fact that agents are now actually usable locally is a real shift. A year ago it felt experimental, now it’s practical. Also agree on the open stack point. If AI becomes core infrastructure, it can’t be fully vendor-locked. llama.cpp is doing important work there. Not “90% of agents local in 6 months” 🙂 but the direction is pretty clear - and accelerating.
0 replies · 0 reposts · 2 likes · 81 views
Georgi Gerganov@ggerganov·
llama.cpp at 100k stars. Now that 90% of the code worldwide is being written by AI agents, I predict that within 3-6 months, 90% of all AI agents will be running locally with llama.cpp 😄

Jokes aside, I am going to use this small milestone as an opportunity to reflect a bit on the project and the state of AI from the perspective of local applications. There is a lot to say and discuss, and yet it feels less and less important to try to make a point. Opinions about the viability of local LLMs are strongly polarized, details are overlooked, the scientific approach is lacking. Arguments are predominantly based on vibes and hype waves. One thing is clear though - local LLMs are used more and more. I expect this trend to continue, and 2026 will likely end up being one of the most important years for the local AI movement.

I admit that I didn't expect the agentic era to come so quickly to the local LLM space. One year ago, the available models were too computationally expensive for long-context tasks. There wasn't an obvious path towards meaningful agentic applications. The memory and compute requirements were huge. Last summer, with the release of gpt-oss, things started to change. It was the first time we saw a glimpse of tool calling that actually works well within the resource constraints of our daily devices. Later in the year, even better models were released, and by now, useful local agentic workflows are a reality.

Comparing local vs hosted capabilities at a given moment in time is pointless. To try to put things into perspective:
- We don't need frontier intelligence to automate searches and sending emails
- We don't need trillion-parameter models to be able to summarize articles or technical documents
- We don't need massive GPU data centers to control our home appliances or turn the lights off in the garage

I believe that there is a certain level of intelligence we as humans can comprehend and meaningfully utilize to improve our working process. Beyond that level, access to more intelligence becomes unnecessary at best and counterproductive at worst. I also believe that that level of useful artificial intelligence is completely within reach locally, and it has always been just a matter of implementing the right software stack to bring it to the end user. With llama.cpp, I am confident that we continue to be on the right track of building that software stack!

The llama.cpp project is going stronger than ever. With more than 1500 contributors, the project keeps growing steadily. From a technical point of view, I think that llama.cpp + ggml is the only solution that actually makes sense. That is, the software stack must run efficiently on every possible device, hardware, and operating system. The technology is too important to be vendor-locked. It has to be developed in the open, by the community, together with the independent hardware vendors. This is the only right way to build something that will truly make a difference in the long run.

I won't try to convince you about what is currently and will be possible with local AI. We will just continue to build as usual. I am confident that after the smoke clears and we look objectively at what we have built together, the benefits will be obvious to everyone.

Big shoutout to all llama.cpp maintainers. I feel extremely lucky to be able to work together with so many talented contributors. Every day I learn something new and I feel there is so much more cool stuff that we are going to build.
Also, I am really thankful that the project continues to have reliable partners to support it! Cheers!
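For context on what a local workflow looks like in practice, here is a minimal inference sketch using the llama-cpp-python bindings to llama.cpp. The model file, context size, and sampling settings are placeholder assumptions for illustration, not anything from the post:

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path is a hypothetical placeholder; any GGUF chat model works.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/gpt-oss-20b.Q4_K_M.gguf",  # assumed local GGUF file
    n_ctx=8192,       # context window; long agent loops need more
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# A single chat turn; an agent loop would wrap this with tool-call parsing.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a local assistant."},
        {"role": "user", "content": "Summarize this article in two sentences: ..."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```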
106 replies · 153 reposts · 1.2K likes · 72.7K views
Max FlowAi@MaxFlowAi·
I get why it feels that way, but that’s a bit of a leap 🙂 Shipping fast ≠ having AGI. It’s much more likely a combination of strong models, really tight internal processes, and heavy use of their own tools in development. The velocity says more about their workflow and culture - short iteration cycles, automation, agents, rapid testing - than about some hidden AGI. Also, if they truly had that level of breakthrough, you’d see it in capability jumps, not just feature velocity. So far it looks evolutionary, not a step-change. Feels less like a secret AGI and more like they’re just extremely good at fast iteration.
0 replies · 0 reposts · 0 likes · 51 views
Alex Finn@AlexFinn·
The velocity at which Anthropic ships is unlike anything I've ever seen. I'm sorry, but we have to assume they have access to an AGI-like model that nobody else in the world has, correct? They are simply holding it back so they can ship industry-changing updates daily?
Claude@claudeai

Computer use is now in Claude Code. Claude can open your apps, click through your UI, and test what it built, right from the CLI. Now in research preview on Pro and Max plans.

193 replies · 76 reposts · 1.3K likes · 137.3K views
Max FlowAi@MaxFlowAi·
Yes, the “math” has changed. Housing relative to income is more expensive, education costs have surged, and risk has shifted from employers to individuals (pensions → 401(k)s, stable jobs → unstable work). Those are real structural changes. But it’s not entirely black and white. The economy also created new upside - global markets, internet-driven income streams, new industries. The issue is that these opportunities are unevenly distributed and don’t fully offset rising baseline costs for most people. “The problem isn’t work ethic, it’s math” is a powerful line - but it can slide into fatalism. The reality is more nuanced: the system is harder and less stable, but not uniformly broken for everyone.
0 replies · 0 reposts · 0 likes · 8 views
Tuki@TukiFromKL·
🚨 Do you understand what this actually means.. Gen Z and millennials aren't burnt out because they're weak.. they did the math.. your dad bought a house at 24 on one salary.. a 40-hour week and a pension waiting at the end.. here's what changed.. in 1970 the median home cost 2x the average annual salary.. today it costs 6x.. and that's before the interest rate.. > Boomers graduated with little to no debt into an economy with employer pensions, cheap housing, and a job market that rewarded loyalty.. > Gen Z graduates with $30,000 in student debt into an economy that replaced pensions with 401ks where YOU bear all the risk.. replaced job security with "freelance opportunities".. and replaced affordable housing with a market that requires two incomes just to rent.. the work ethic didn't change.. the math changed.. > Boomers didn't build that economy.. they inherited it.. built by the Greatest Generation after the war, funded by government programs, subsidized housing, public universities, and infrastructure they didn't pay for.. > they rode the boom, extracted everything, gutted the pensions, made the houses unaffordable, defunded the universities.. and then told the next generation to work harder..
Leading Report@LeadingReport

Gen Z and millennials are burnt out because older generations had much easier lives while working far less hard, per FORTUNE.

171 replies · 963 reposts · 5.5K likes · 675.7K views
Max FlowAi@MaxFlowAi·
The idea sounds good, but it’s disconnected from reality for most people. In theory, the “save 20%” rule works - it’s a classic wealth-building principle. But it ignores the basic math of living. On a ~$69k income with ~$20k rent, after taxes and essentials, that 20% often simply doesn’t exist. This isn’t about discipline - it’s about cost structure vs income. A more realistic approach is not “20% at all costs,” but a flexible percentage based on what’s actually left, combined with a focus on increasing income. Without income growth, the equation doesn’t work for most. Advice like this often fits people who already have a financial cushion or lower relative expenses, but it’s presented as universal - and that’s the real issue.
0 replies · 0 reposts · 0 likes · 659 views
Brian Allen@allenanalysis·
Kevin O’Leary’s advice for becoming a millionaire: “Take 20% of your $69,000 salary and invest it every week. Don’t touch it.” 20% of $69,000 is $13,800 per year. The median American rent is now over $1,700 per month — $20,400 per year. Before food. Before utilities. Before healthcare. Before childcare. Before transportation. A person earning $69,000 cannot invest 20% because 20% does not exist after they pay to exist. .@TheICHpodcast
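The arithmetic in the post is easy to check; a quick sketch with the numbers quoted above (the effective tax rate is an illustrative assumption, not from the post):

```python
# Checking the numbers quoted above. The effective tax rate is an
# illustrative assumption; everything else comes from the thread.
salary = 69_000
invest_rule = 0.20 * salary      # O'Leary's 20% rule
rent = 1_700 * 12                # median rent, annualized

take_home = salary * (1 - 0.25)  # ~25% effective tax, assumed for illustration
left_after_rent = take_home - rent

print(f"20% rule requires: ${invest_rule:,.0f}/yr")      # $13,800
print(f"Annual rent:       ${rent:,.0f}/yr")             # $20,400
print(f"Left after rent:   ${left_after_rent:,.0f}/yr")  # before food, utilities, etc.
```

On these assumptions, the 20% rule plus rent alone consume most of take-home pay before any other essentials, which is the thread's point.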
238 replies · 226 reposts · 2.5K likes · 555.8K views
Max FlowAi@MaxFlowAi·
AI just hit a breaking point between ethics and power ⚖️🤖 Anthropic is now in a direct legal battle with the Pentagon after refusing to let its AI be used for mass surveillance and fully autonomous weapons. In response, Trump ordered US agencies to stop using their technology entirely - escalating this from a policy disagreement into a full-blown confrontation between government control and AI safety. This isn’t just about one company. The US government has been deeply integrated with Claude - including reported use in military operations like analyzing targets. Cutting that off isn’t a switch you flip - it’s months of disruption, billions at stake, and a clear signal: AI is now critical infrastructure. Here’s the real tension 🔥 companies like Anthropic are saying the tech isn’t reliable enough for life-and-death decisions, especially at scale. The government is saying: we need it anyway. That gap - between capability and responsibility - is where this entire conflict lives. Even more serious: the Pentagon labeled a US AI company a “supply chain risk” for the first time ever. The judge literally questioned if this was less about security and more about punishing a company for refusing to comply. If that argument holds, this case could redefine how far governments can go in controlling AI companies. Meanwhile competitors like OpenAI and xAI are stepping in, signing deals to operate in classified environments. So while one company draws a line, others are moving forward - and that raises a harder question: if one refuses, will another always say yes? This is no longer “AI policy debate” - it’s a power struggle over who controls intelligence in the AI era. Where do you stand - should AI companies be allowed to refuse government use on ethical grounds, or is that a national security risk? 👇 #AI #ArtificialIntelligence #Tech #Ethics #Geopolitics #Cybersecurity #FutureOfWork #Innovation
0 replies · 0 reposts · 1 like · 54 views
Max FlowAi@MaxFlowAi·
Political deepfakes aren’t just getting better - they’re getting more effective even when people KNOW they’re fake 🤯 Over 1,000 AI-generated political posts have already been tracked in 2025 alone - almost matching the total from the previous 8 years combined. The reason is simple: generative AI made it trivially easy to create realistic scenes, insert real figures into fake contexts, and now even fabricate completely fictional “people” that feel believable. And that’s where it gets dangerous ⚠️ it’s no longer just fake videos of politicians - it’s AI-generated personas. Fake soldiers, fake influencers, even hyper-targeted characters designed to attract attention, build trust, and then push narratives or monetize audiences. Some accounts hit millions of followers before getting taken down. The real shift? People don’t need content to be real anymore - they need it to feel true 🧠 deepfakes are reinforcing beliefs, not challenging them. If something aligns with your worldview, you’re more likely to accept it - even if you know it’s synthetic. That’s a fundamental change in how information works. And while platforms promise to label AI content, reality looks messy: detection is inconsistent, standards are weak, and enforcement is politically complicated. In some tests, less than 20% of AI content was properly labeled. That’s not a system - that’s a gap. What comes next is even bigger - researchers are warning about “AI swarms”: autonomous networks generating content at scale, simulating consensus, and replacing entire troll farms with algorithms 🤖 This isn’t just a misinformation problem - it’s a trust problem. When reality becomes optional, everything becomes narrative. So the question is no longer “is this real?” - it’s “why does this feel true?” What do you think - should platforms be forced to strictly label ALL AI content, or is this already impossible to control? 👇 #AI #Deepfake #ArtificialIntelligence #Misinformation #Tech #Future #Media #Politics #Cybersecurity
0 replies · 0 reposts · 0 likes · 37 views
Max FlowAi@MaxFlowAi·
Honestly, this feels like a very logical step, not some kind of “magic”. CSS and the whole browser layout pipeline were built for a different era - when humans designed pages, not agents generating UI in real time. Reflow just isn’t optimized for constantly changing interfaces. If they actually moved layout into pure TypeScript and bypassed the DOM, they’re hitting the main browser bottleneck directly. 600x sounds like hype, but even 10-50x in real scenarios is huge. What’s more interesting is control - you define layout logic yourself instead of relying on the browser, which is critical for AI-driven interfaces. The tradeoff is real though - you lose parts of the ecosystem like accessibility, SEO, etc, so this is more like a new layer for AI UIs, not a full web replacement. Overall, the trend is clear - the AI stack is being rebuilt from scratch, not just patched on top of the old one.
1 reply · 0 reposts · 1 like · 849 views
Paweł Huryn@PawelHuryn·
A Midjourney engineer just open-sourced a way to lay out entire web pages without CSS. Not a framework. Not a library that wraps the DOM. A pure TypeScript text measurement algorithm that bypasses browser reflow entirely. 600x faster. Why it matters: AI-generated interfaces need to lay out text dynamically. The browser wasn't built for that. CSS reflow was designed for humans writing static pages, not agents generating UIs on the fly. This exists because agents need rendering that doesn't depend on a 30-year-old pipeline. The AI-era stack is being rebuilt from scratch. One library at a time. P.S. So much fun to play with!
Cheng Lou@_chenglou

My dear front-end developers (and anyone who’s interested in the future of interfaces): I have crawled through depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly at least in concept): Fast, accurate and comprehensive userland text measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurements and reflow
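The actual library is TypeScript and uses real font metrics; purely as a toy illustration of the concept (userland text measurement plus greedy line breaking, with no DOM and no reflow involved), a sketch might look like this. The per-character width table is a stand-in for real metrics:

```python
# Toy illustration of userland text layout: measure text with a width table
# and break lines greedily, without touching the DOM. Real implementations
# use actual font metrics, kerning, shaping, etc.
CHAR_WIDTH = {" ": 4}  # stand-in metrics: 4px space, 8px default glyph

def measure(text: str) -> int:
    """Width of a string in px, summed from per-character metrics."""
    return sum(CHAR_WIDTH.get(ch, 8) for ch in text)

def layout(text: str, max_width: int) -> list[str]:
    """Greedy word wrap driven purely by measure(), no browser reflow."""
    lines, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if measure(candidate) <= max_width or not current:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print(layout("AI-generated interfaces need to lay out text dynamically", 160))
```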

30 replies · 46 reposts · 530 likes · 168.7K views
Max FlowAi@MaxFlowAi·
This is very relatable. Most teams get stuck at “works well enough” and never push it further because there’s no system to iterate. Bringing a training loop mindset into prompts makes a lot of sense. The strongest part here is the binary scoring. As long as you evaluate outputs by “vibes,” you don’t really improve. Once you have clear metrics, iteration becomes real and measurable. One thing I’d add - your test cases need to be very close to real-world inputs. Otherwise you risk overfitting to the test and not actually improving production performance. Overall, this feels like moving prompt engineering from guesswork into an actual engineering discipline. And yeah, this is probably how you close that gap from 80% to consistent 95%+.
0 replies · 0 reposts · 0 likes · 66 views
Aakash Gupta@aakashgupta·
I took Karpathy's loop and applied it to the thing every team using AI agents struggles with: getting prompts from 80% reliable to 95%. The pattern is identical. One file changes. One metric scores it. The agent makes one edit per round, tests it, keeps winners, reverts losers. 12 experiments per hour. 100 overnight.

Instead of optimizing a training script, the target is any prompt or system instruction you use repeatedly. Customer support agent prompts. Internal workflow automations. Data extraction pipelines. Code review instructions. Anything where you've written a prompt, gotten it to "good enough," and moved on because manual iteration hit diminishing returns.

The setup takes three things. The target prompt you want to improve. 2-3 realistic test inputs, the kind of request that would actually hit the prompt in production. And 3-6 binary yes/no checks that define quality. Did the output meet the format constraint? Did it follow the specific instruction? Did it avoid the failure pattern you keep seeing?

The loop: Execute the prompt 30 times across all test inputs. Score every output against the checklist. Analyze which criterion fails most. Mutate one thing in the prompt. Check if the score improved. If yes, git commit. If no, git reset. Repeat until you're above 95%.

What you wake up to: the improved prompt saved separately, original untouched. A results.log showing every round's score. A changelog explaining what worked, what didn't, and why.

The insight Karpathy landed that transfers beyond ML: if you can score it, you can autoresearch it. Training loss is a score. A binary checklist on prompt output quality is also a score. The loop doesn't care what it's optimizing. It only needs a number that goes up or down.

Prompt engineering today looks like software before unit tests. Manual tweaking, vibes-based evaluation, no version control, no systematic iteration. The Karpathy loop applied to prompts turns it into an engineering discipline with measurable improvement per iteration. Every team running AI agents has prompts that work "well enough." The gap between well enough and reliable is exactly the gap this loop closes while you sleep.
Aakash Gupta@aakashgupta

For $25 and a single GPU, you can now run 100 experiments overnight without designing any of them. Karpathy open-sourced autoresearch. 42,000 GitHub stars in a week. Fortune called it "The Karpathy Loop."

Every article about it focused on the ML angle. They all missed the bigger story. The pattern underneath works on anything you can score with a number. Ad copy, cold emails, video scripts, job posts, skill files.

Three files. One the agent edits. One it can never touch. One instruction file from you. Each cycle takes 5 minutes. Score went up? Git commit. Score went down? Git reset. Twelve cycles per hour. A hundred overnight.

Karpathy ran it on code he'd already optimized by hand for months. The agent found 20 improvements he'd missed. 11% faster. Tobi Lutke pointed it at Shopify's Liquid templating engine. 53% faster rendering from 93 automated commits.

I spent two weeks pulling the system apart. Today's guide shows you how to use it on the things you actually make every day. Six use cases, the three-step setup, and the eval mistakes that kill runs before they start. Full guide: aibyaakash.com/p/autoresearch…
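To make the mechanics concrete, here is a minimal sketch of the scoring loop both posts above describe. run_prompt() and mutate_one_thing() are hypothetical stand-ins, and the binary checks are invented examples, not anything from the guide:

```python
# Minimal sketch of the prompt-optimization loop described above.
# run_prompt() and mutate_one_thing() are hypothetical stubs: plug in
# your own model call, test inputs, and binary yes/no quality checks.
import random
import subprocess

def run_prompt(prompt: str, test_input: str) -> str:
    raise NotImplementedError("call your model here")

def mutate_one_thing(prompt_file: str) -> None:
    raise NotImplementedError("the agent edits one instruction per round")

CHECKS = [
    lambda out: out.startswith("{"),    # met the format constraint?
    lambda out: len(out) < 2000,        # respected the length limit?
    lambda out: "as an AI" not in out,  # avoided a known failure pattern?
]

def score(prompt: str, test_inputs: list[str], runs: int = 30) -> float:
    """Fraction of runs that pass every binary check."""
    passed = 0
    for _ in range(runs):
        out = run_prompt(prompt, random.choice(test_inputs))
        passed += all(check(out) for check in CHECKS)
    return passed / runs

def optimize(prompt_file: str, test_inputs: list[str], target: float = 0.95) -> None:
    best = score(open(prompt_file).read(), test_inputs)
    while best < target:
        mutate_one_thing(prompt_file)   # one edit per round
        new = score(open(prompt_file).read(), test_inputs)
        if new > best:                  # keep winners...
            best = new
            subprocess.run(["git", "commit", "-am", f"score {new:.2f}"])
        else:                           # ...revert losers
            subprocess.run(["git", "checkout", "--", prompt_file])
```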

36 replies · 87 reposts · 759 likes · 97.3K views
Max FlowAi@MaxFlowAi·
Looking at it objectively, the logic makes sense - in platform shifts, the winner is usually the one who invests earlier and at scale. But $600B is no longer just growth, it’s a dominance bet. The risk of underbuilding is real, especially if AI becomes core infrastructure. But overbuilding is also a thing - historically, companies tend to overshoot during hype cycles. What’s more interesting is that this isn’t about products anymore, it’s about owning the base layer - compute, models, ecosystem. Whoever controls that sets the rules.
0 replies · 0 reposts · 0 likes · 266 views
Oguz Erkan@oguzerkan·
Mark Zuckerberg: “$META is ready to spend $600 billion through 2028 because the next platform shift will be owned by whoever builds the most capable AI infrastructure fastest.” Of course there is a risk of overbuilding, but at this stage the risk of underbuilding is far larger as it guarantees disruption if AI delivers on its promises. This applies to all the mega-caps, perhaps except $AAPL. Remember, the short-term effects of new technologies are always overstated but the long-term effects are always understated. At this point, underspending poses a bigger risk to the long-term compounding of $META than overspending.
57 replies · 59 reposts · 632 likes · 115.4K views
Max FlowAi@MaxFlowAi·
Honestly, this isn’t really about IQ, it’s about the ability to explain and relate to people. I’ve seen highly technical, brilliant people who couldn’t communicate their thinking -- and others who weren’t the smartest in the room but were great leaders because they understood their team. Yeah, a big gap in thinking can break communication. But the issue isn’t being “too smart,” it’s not being able to translate your thinking into something others can follow. With AI it’s even more obvious. Even if it becomes much “smarter,” without a good interface and clear explanations it just won’t connect with humans effectively. So to me, leadership isn’t about being the smartest - it’s about being able to align people around an idea.
0 replies · 0 reposts · 1 like · 262 views
Big Brain AI@realBigBrainAI·
Marc Andreessen says raw intelligence might be the worst qualification for leadership — and it changes everything about how we should think about AI. "If the leader is more than one standard deviation of IQ away from the followers, it's a real problem."

Andreessen points to the US military, one of the earliest and most rigorous adopters of IQ testing, as the source of this insight. They slot people into specialties and leadership roles based on IQ scores. And over the years, they kept seeing the same pattern. A leader who is significantly less intelligent than their people struggles to model how those people think. That part is intuitive. But the reverse turns out to be equally true.

"It's actually very hard for very smart people to model the internal thought processes of even moderately smart people." A leader who is two standard deviations above the norm of the organisation they're running also loses theory of mind, that ability to hold an accurate model of what's happening inside someone else's head. The gap is too wide in both directions.

Andreessen then takes this to its logical conclusion: "If you had a person or a machine that had a thousand IQ or something like it, its understanding of reality would be so alien to the people or the things that it was managing that it wouldn't even be able to connect in any sort of realistic way." An AI that vastly outthinks every human in the room isn't positioned to lead those humans. It's positioned to be completely incomprehensible to them.

Leadership has never really been an intelligence problem. It's a connection problem. And no amount of raw intelligence closes that gap — past a certain point, it only widens it. The world will not be run by the smartest thing in the room for a long time. Maybe ever.
173 replies · 215 reposts · 1.6K likes · 350.2K views
Max FlowAi@MaxFlowAi·
There’s an interesting idea here, but the conclusion is a bit too extreme. Yes, they’ve heavily automated development with Claude, and that loop (Claude → code → better Claude) is very powerful. That’s what drives the speed. But it doesn’t mean engineers “don’t code anymore” or that AI is fully self-improving on its own. Humans still define direction, review outputs, and decide what actually gets built. Without that, the loop breaks or goes off track. Feels more like a shift in roles, not replacement - less manual coding, more oversight and system design.
0 replies · 0 reposts · 1 like · 33 views
sui ☄️@birdabo·
🚨Anthropic’s CEO just said that Claude is building the next version of itself. read that again. it’s confirmed that engineers at Anthropic don’t write code anymore. they prompt Claude, review what it writes, and ship it. the AI is building its own successor and the humans are just there to babysit. 50+ major features in 52 days. i kept wondering how they were shipping so insanely fast. humans were the bottleneck. so Anthropic stopped building tools for humans and started building tools for Claude. auto mode, scheduled jobs, computer use. those aren’t features for us. they’re features for claude to work faster while we sleep. Claude writes code that makes Claude better. better Claude writes better code. Dario said it himself “that loop starts to close very fast.” they’re going to replace humanity soon.
Claude@claudeai

You can now enable Claude to use your computer to complete tasks. It opens your apps, navigates your browser, fills in spreadsheets—anything you'd do sitting at your desk. Research preview in Claude Cowork and Claude Code, macOS only.

50 replies · 33 reposts · 344 likes · 34.5K views
Max FlowAi@MaxFlowAi·
This looks like a typical R&D sprint - running multiple variants in parallel and seeing what sticks. Nothing unusual, just Meta doing it at scale. What’s more interesting is the positioning: if they’re benchmarking Avocado against Gemini-level models, that’s not just internal research anymore - that’s direct competition at the top tier. And the A/B testing with Gemini suggests this is already product-focused, not just experimental. That said, “comparable” in leaks should always be taken with a grain of salt 🙂 Without benchmarks or details, it’s more of a signal than a confirmed reality.
0 replies · 0 reposts · 0 likes · 179 views
TestingCatalog News 🗞@testingcatalog·
BREAKING 🚨: Meta is testing loads of Avocado variants internally, including multiple release candidates, Avocado-mango agent, Avocado 9B, and more. Avocado Think Hard performs quite well and, as reported earlier, is comparable to Gemini 3 level models. All this in parallel to Gemini A/B testing on Meta AI.
36 replies · 47 reposts · 813 likes · 153.1K views
Max FlowAi@MaxFlowAi·
Looks serious, but it’s still “alleged” - without confirmation from Databricks or a proper technical breakdown, I wouldn’t jump to conclusions. Supply chain attacks are definitely one of the biggest risks right now, but there’s also a lot of noise and premature claims around them. If true, this isn’t just about Databricks - it’s about the entire dependency chain. These incidents usually expose deeper issues in how trust and infrastructure are set up. For now, feels more like a “watch closely and wait for details” situation than a confirmed breach.
0 replies · 0 reposts · 1 like · 3K views
International Cyber Digest@IntCyberDigest·
🚨‼️ BREAKING: Databricks allegedly compromised in a TeamPCP supply chain attack. Databricks is the leading cloud-based data analytics platform, used by organizations worldwide to manage massive datasets. We notified them last week. They scaled up to investigate. We haven't heard back since.
37 replies · 207 reposts · 1.3K likes · 194.3K views
Max FlowAi@MaxFlowAi·
That feeling of “we’re almost there but no one realizes it” is pretty common in the industry right now. But honestly, the issue isn’t that people don’t recognize it - it’s that even experts don’t fully agree on what “human-level intelligence” actually means. Yes, models are getting much stronger, and the risks are real. But this isn’t a single moment where everything suddenly changes - it’s a gradual shift that’s already happening. Most people just experience it through real use cases, not abstract warnings. So the real challenge isn’t awareness, it’s explaining the risks clearly without turning everything into hype or extremes.
1 reply · 0 reposts · 3 likes · 1K views
Chief Nerd@TheChiefNerd·
🚨 Anthropic CEO Dario Amodei: “We are so close to these models reaching the level of human intelligence, and yet there doesn't seem to be a wider recognition in society of what's about to happen … There hasn't been a public awareness of the risks.”
920 replies · 684 reposts · 5.1K likes · 2.2M views
Max FlowAi@MaxFlowAi·
Meta just dropped TRIBE v2 - and this might be one of the most underrated AI breakthroughs right now 🧠 They trained a model on 1,000+ hours of brain data from 700+ people, scaling from ~1,000 brain regions to 70,000. It can simulate neural activity across vision, hearing, and language - and here’s the wild part: its predictions actually outperform real fMRI scans, which are often noisy due to movement, heartbeat, and other interference 🤯 Even crazier - TRIBE v2 can replicate decades of neuroscience findings in software. It correctly maps areas responsible for faces, speech, and text without running new scans. That means researchers can now run “virtual brain experiments” instead of putting people in expensive machines for every study. This could be a turning point. Neuroscience has always been bottlenecked by cost, time, and access to equipment. TRIBE v2 compresses months of research into seconds of compute - very similar to what AlphaFold did for biology. If this scales, it could completely change how we study the human brain ⚡️ And yes - Meta open-sourced everything: code, weights, demo. Anyone can start building on top of this. That’s a huge signal for where AI + neuroscience is going next. Are we about to simulate cognition before we fully understand it? Or is this just an approximation that still misses something fundamental? 👇 #AI #ArtificialIntelligence #Neuroscience #Meta #MachineLearning #DeepTech #FutureOfAI #TechNews
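The post doesn't describe TRIBE v2's architecture, so purely as a generic illustration of the family of methods such systems build on (encoding models that map stimulus features to regional brain responses), here is a toy sketch on synthetic data:

```python
# Generic encoding-model sketch (NOT TRIBE v2's actual method): learn a
# linear map from stimulus features to per-region brain responses, then
# "simulate" responses to new stimuli. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_regions = 500, 128, 1000  # placeholder sizes

X = rng.normal(size=(n_samples, n_features))        # stimulus embeddings
true_w = rng.normal(size=(n_features, n_regions))
Y = X @ true_w + rng.normal(scale=2.0, size=(n_samples, n_regions))  # noisy "fMRI"

# Ridge regression, closed form: W = (X^T X + lam*I)^(-1) X^T Y
lam = 10.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# "Virtual experiment": predict regional responses to unseen stimuli.
X_new = rng.normal(size=(10, n_features))
Y_pred = X_new @ W
print(Y_pred.shape)  # (10, 1000) -> one simulated response per region
```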
0 replies · 0 reposts · 0 likes · 38 views
Max FlowAi@MaxFlowAi·
There’s some truth here, but “coding is dead” is an overstatement. What actually changed is the level of abstraction - less time on syntax, more on architecture, decisions, and quality control. The “loop as a moat” idea is interesting, but not new - AI just massively accelerates the cycle (generate → validate → iterate). The ones who organize that loop best will win. Calling AI the “architect” is still a stretch. Without humans who understand systems and trade-offs, you just get speed at the cost of technical debt. Feels more like the engineer role didn’t disappear - it evolved. Less typing, more thinking.
1 reply · 0 reposts · 1 like · 100 views
Ziwen@ziwenxu_·
The old way of running a tech company is dead. Anthropic just proved it: 50+ launches in 52 days. Most teams take two months to move a button three pixels left. The secret? They killed "coding" as we knew it. Their CEO said it out loud: engineers don't grind syntax anymore. They architect systems, unleash Claude to write the bulk, then curate. Claude is literally building the next Claude. Here's what nobody's talking about: - The loop is the only moat. If your product isn't building the next version of itself, you're already outdated. - Coding transformed from writing to taste. Value isn't knowing where brackets go. It's having the vision for architecture that scales. AI isn't a tool anymore. It's the architect. If you're still grinding manual labor, you're racing against something that never sleeps, never stops, never blinks.
CG@cgtwts

Anthropic CEO: “I have engineers within Anthropic who don’t write any code, they just let Claude write the code and they edit it and look it over.” “At Anthropic, writing code means designing the next version of Claude itself, so we essentially have Claude designing the next version of Claude itself, not completely but most of it.” In the last 52 days, the Claude team dropped 50+ major feature launches. This is literally INSANE.

80 replies · 81 reposts · 691 likes · 183.4K views
Max FlowAi@MaxFlowAi·
Sounds explosive, but it’s a mix of real insight and marketing framing. Amodei is actually one of the few leaders taking a balanced view - not “AI replaces everyone” and not “nothing changes.” His core idea is simple: jobs don’t disappear, tasks inside them do. The winners won’t compete with AI - they’ll use it as a multiplier. As for the “page 29 framework,” it’s not about magic prompts - it’s about thinking: breaking work into processes, identifying what can be delegated to AI, and rebuilding workflows around that. The real edge isn’t the tool - it’s the ability to redesign how you work around it.
0 replies · 0 reposts · 0 likes · 585 views
CopyRebeldia@CopyRebeldia·
BOMBSHELL: The CEO who created Claude just published a 38-page letter to all of humanity. Dario Amodei mapped out exactly which careers survive AI and which don't. No hype. No apocalypse. Just the coldest, most specific prediction any AI leader has ever made. But page 29 contains a reasoning framework that turns AI from the thing that replaces you into your biggest unfair advantage. Here are 9 Claude prompts based on Amodei's methodology that put you years ahead of everyone who didn't read this:
62 replies · 677 reposts · 4.3K likes · 748.9K views
Max FlowAi@MaxFlowAi·
It’s a harsh take, but partly true. It’s not that entry-level roles will disappear in 1–2 years, but they are definitely getting compressed - not removed, but reshaped. Companies still need output, just with fewer people who know how to leverage AI instead of doing everything manually. The value is shifting from execution to understanding workflows and orchestrating them with AI tools. The key advice is solid: don’t just learn AI - pick an industry, understand how the work actually gets done, and then rebuild that process with AI.
0 replies · 0 reposts · 0 likes · 232 views
Damian Player@damianplayer·
Anthropic CEO Dario Amodei said entry-level consultants, lawyers, and finance workers are being replaced in the next 1-2 years. the companies losing those roles still need the output. they need someone who gets it done with AI instead of a team of 12. pick an industry. learn how the work actually gets done. be the person who rebuilds it with AI.
Damian Player@damianplayer

Palantir CEO, Alex Karp says only 2 types of people will survive the AI era..

86 replies · 75 reposts · 564 likes · 147.8K views