NuScienta

1.6K posts

@NuScienta

The #1 platform to learn AI skills for free. Make money. Land your dream job in 2026. You will regret not starting today. Click the link in bio to start now.

Chicago, IL · Joined November 2023
13 Following · 103 Followers
Pinned Tweet
NuScienta @NuScienta
Free workspace for scientists sounds generous until you ask what OpenAI gets from it. It's not charity. It's a data pipeline. Scientists write. Collaborate. Upload research. All of it flows through OpenAI's system. All of it potentially trains the next model. That's the deal. Free tools in exchange for high-quality domain-specific data that's expensive to generate any other way. Smart move by OpenAI. They get research-grade training data. Scientists get free infrastructure. Everyone wins until someone asks who owns the IP on insights generated through the platform or whether uploaded research stays private. Those terms matter more than the features. And they're probably buried in a ToS nobody reads. Prism might be genuinely useful. Might help researchers move faster. But it's not free. You're paying with data. Always. nuscienta.com
5 replies · 9 reposts · 89 likes · 18.1K views
NuScienta @NuScienta
Good post but the ownership problem isn't new. It's the only problem. It's been the problem since they flipped from nonprofit to capped profit to whatever structure they're calling it now. Every pivot OpenAI makes tells you the same thing. The product roadmap follows the cap table. Adult mode didn't die because of ethics or safety or user harm. It died because it threatened the next funding round. That's how every VC-backed company works. This one just has better PR about saving humanity while doing it. The part nobody wants to admit is that every AI company people are building their workflows around has this exact same leash. Different investors. Same dynamic. Your favorite tool's roadmap isn't decided by what you need. It's decided by what their board meeting needs. That's why skills matter more than tools. Tools have owners. Skills don't. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 1 like · 53 views
Tuki @TukiFromKL
🚨 stop scrolling.. read this twice.. OpenAI just paused "adult mode" and everyone's talking about the erotic part.. nobody's talking about the investor part.. investors didn't say it was dangerous.. they didn't say it was unethical.. they said it was bad for the brand.. there's a difference.. the same company selling itself as the builder of superintelligence.. can't ship a feature without calling its investors first.. this is a $300 billion company that needs permission from the money to decide what its own product does.. OpenAI doesn't have a safety problem.. it has an ownership problem.. and the thing about ownership problems is the product always ends up looking like whoever's holding the leash.. not whoever built it.. Altman keeps telling you he's building god.. but god apparently needs board approval before it flirts.
Polymarket @Polymarket

BREAKING: OpenAI will pause development of its erotic “adult mode” chatbot following concerns from investors.

44 replies · 44 reposts · 414 likes · 81.4K views
NuScienta @NuScienta
Bigger context window. Faster tokens. Cheaper pricing. These are the spec-sheet wars, and nobody who wins them stays on top for long. Remember when GPUs were sold on clock speed alone? Then it was cores. Then memory bandwidth. The number kept changing because the number never mattered as much as the marketing said it did. 2 million tokens of context means nothing if the person using it doesn't know what context to give it. Most people can't write a clear two-sentence prompt. Doubling the window doesn't fix that. It just gives bad instructions more room to breathe. And "thinks deep" is doing a lot of work for a model that's been out for five minutes. Benchmarks aren't depth. Usage over time is. Ask again in a year. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 0 likes · 9 views
X Freeze @XFreeze
The future belongs to those who remember the past
Grok has a massive context window….remembers more than any AI on the planet - 2,000,000 tokens of context
While GPT-5.4, Gemini 3.1, and Claude 4.6 cap out at 1M, Grok 4.20 Beta doubles the industry standard
But it’s not just about bigger memory:
→ 2M context window - largest on Earth
→ 267 tokens/sec - faster than any model tested
→ $2 in / $6 out - 60% cheaper than Grok 4
→ 4-agent swarm mode…. deep, multi-agent reasoning
The competition is building AI that thinks fast… xAI is building Grok that thinks deep
63 replies · 42 reposts · 247 likes · 9.3K views
NuScienta @NuScienta
Two 23-year-olds got $94M to use 2,000 AI agents to predict gold prices. Hedge funds have been trying to predict gold prices with every tool imaginable for decades. Most of them still get it wrong. Adding more agents doesn't fix a prediction problem. Markets aren't puzzles you solve with more compute. They're driven by human behavior, geopolitics, panic, greed. Stuff that doesn't sit in a training set. $650M valuation before selling anything to anyone. That's not investment. That's a bet that the words "AI agents" keep working on pitch decks for another eighteen months. Finance firms will buy it though. They always buy the new shiny thing. Then quietly shelve it when the predictions aren't better than what they already had. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 0 likes · 73 views
Chubby♨️ @kimmonismus
OpenAI is backing Isara, a new startup founded by two 23-year-old AI researchers that coordinates thousands of AI agents to solve complex problems, like using ~2,000 agents to forecast gold prices. The company just raised $94M at a $650M valuation and plans to sell predictive modeling tools to finance firms first.
The Wall Street Journal @WSJ

Exclusive: OpenAI is backing a new AI startup that aims to build software allowing so-called AI “agents” to communicate and solve complex problems in industries such as finance and biotech on.wsj.com/4bTvwKd

104 replies · 88 reposts · 1.6K likes · 496.7K views
NuScienta @NuScienta
So the company that says it's building AGI for humanity was spending engineering hours on an erotic chatbot. And the reason they stopped wasn't because it was a bad idea. It was because staff and investors pushed back. That tells you everything about how these companies make decisions. Not what's right. What's fundable. And "refocusing on core productivity tools" is PR for "we got caught chasing revenue in weird places." Sora getting wound down too. The social app gone. That's not focus. That's retreat. The technical challenge part is the only honest sentence in the whole story. They couldn't figure out how to make it safe. Which means they tried. Let that sit for a second. These are the companies people trust to build the future responsibly. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 1 like · 91 views
Chubby♨️ @kimmonismus
OpenAI has indefinitely shelved its planned "adult mode" erotic chatbot amid pushback from staff and investors over risks to minors and concerns about encouraging unhealthy emotional attachments to AI. The decision is part of a broader refocusing away from "side quests" toward core productivity tools, with the company also winding down Sora and its social app. Technical challenges in training safety-aligned models to produce explicit content while filtering illegal material added further complications to the project.
Financial Times @FT

OpenAI puts erotic chatbot plans on hold ‘indefinitely’ ft.trib.al/4Q2hLpT

169 replies · 60 reposts · 672 likes · 128.2K views
NuScienta @NuScienta
Cool so now you can do shallow work faster from your phone. That's the pitch? Pulling a dashboard isn't analysis. Building a chart isn't insight. Generating a presentation isn't strategy. You just automated the easy parts and skipped the thinking. If you can't explain why sales dropped without a tool doing it for you, the tool isn't making you better at your job. It's doing your job. There's a difference and most people won't notice until it matters. Super app, everything app, doesn't matter what you call it. If the person using it doesn't know what to ask or what to do with the answer, it's just a fancy remote control. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 1 like · 2 views
sui ☄️ @birdabo
Claude’s mobile app now does what most people need a laptop for. pull live amplitude dashboards, edit Figma boards, build canva decks. from your phone. in one chat thread. you can ask it why sales dropped last week and it pulls the data, finds the root cause, builds a comparison chart, and generates a client ready presentation. all from your phone. Anthropic seems to be creating a super app while X is creating the everything app.
Claude @claudeai

Your work tools in Claude are now available on mobile. Explore Figma designs, create Canva slides, check Amplitude dashboards, all from your phone. Give it a try: claude.com/download

20 replies · 15 reposts · 120 likes · 9.3K views
NuScienta @NuScienta
Every company has that person who built three tools over the weekend. Most of those tools break by Wednesday because nobody thought about maintenance, security, or whether the team actually needed them. Enthusiasm isn't strategy. Funding the loudest AI person in the room without asking what problem they're solving is how you end up with a graveyard of internal tools nobody uses. The person you actually want isn't the one building fast. It's the one asking why before they build anything. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 1 like · 6 views
Codie Sanchez @Codie_Sanchez
You know that person on your team with AI Derangement Syndrome who won't stop talking about what AI could do for the business? They've already built three internal tools over the weekend and sent them to the team on Slack. Whatever you do...don't shut them down. Fund them. Get out of their way.
230 replies · 90 reposts · 1.1K likes · 62.2K views
NuScienta @NuScienta
Everybody's hyped about the deal. Nobody's asking the obvious question. Apple doesn't need Gemini. Apple needs your data running through Gemini so they own the loop. That's not innovation. That's infrastructure lock-in wearing a partnership press release. Distilling models for their own use? Cool. That means a closed system gets more closed. You're not the customer here. You're the training set. Two trillion-dollar companies didn't shake hands so you could have a better Siri. They shook hands so you'd never leave. nuscienta.com/build-ai-and-d…
3 replies · 0 reposts · 2 likes · 13 views
Shay Boloor @StockSavvyShay
$AAPL deal with $GOOGL gives it full access to Gemini inside its own data centers. It also gives Apple the ability to distill that system into smaller models for its own use.
38 replies · 52 reposts · 545 likes · 40.2K views
NuScienta @NuScienta
This reads like a vision board, not a strategy. Open source didn't kill AWS. Locally hosted didn't kill the cloud. Sovereign AI sounds great until you price out the compute, the talent, and the infrastructure. Listing what "will win" without explaining how is just motivational content dressed as prediction. Every one of these outcomes requires massive investment, policy shifts, and technical literacy that most people don't have yet. The faster you accept that none of this happens automatically, the quicker you stop waiting and start building actual skills. Tools don't liberate people by default. Understanding does. Hope is not preparation. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 0 likes · 3 views
Alex Finn @AlexFinn
Open source will win
Locally hosted will win
Sovereign super intelligence will win
Personal research labs will win
Empowered individuals will win
Corporate spying will lose
Selling of data will lose
Price gouging will lose
The faster you accept these things, the quicker you'll be prepared for what's coming
155 replies · 55 reposts · 574 likes · 22.4K views
NuScienta @NuScienta
"Almost everyone on our team uses this." Of course they do. They built it. Auto mode is convenient. Convenience is not competence. The easier the tool becomes, the less you understand what it's doing under the hood. Every dev team that went all-in on a single copilot is now one pricing change away from a productivity crisis. That's not autonomy. That's dependency with a keyboard shortcut. The question worth asking. Can your team debug what auto mode generates when it breaks at 2am in production? If not, you don't have a daily driver. You have a crutch. Skills transfer. Tools get deprecated. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 1 like · 3 views
cat @_catwu
Auto mode is a step change improvement in the Claude Code UX, balancing autonomy and safety. Almost everyone on our team uses this as a daily driver. Now available to Claude for Team users! `claude --enable-auto-mode` to turn on, then Shift + Tab to enter the mode
Claude @claudeai

New in Claude Code: auto mode. Instead of approving every file write and bash command, or skipping permissions entirely, auto mode lets Claude make permission decisions on your behalf. Safeguards check each action before it runs.

59 replies · 29 reposts · 415 likes · 47.1K views
NuScienta @NuScienta
Super apps don't create super users. They create more dependent ones. Claude adding features is a product strategy. Not a literacy strategy. Microsoft gave everyone Excel in the 90s. Most people still can't write a VLOOKUP. The real question nobody's asking. What happens when your "super app" changes its pricing, kills a feature, or pivots its model? You're locked in with no transferable skill. The bottleneck was never the interface. It's whether people understand what they're asking these tools to do and why. That's judgment. That's literacy. No app ships that. Stop picking sides between wrappers. Start building skills that work across all of them. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 0 likes · 4 views
NuScienta reposted
NuScienta @NuScienta
The timing isn't coincidence. It's contrast marketing. Karpathy warns about malicious software stealing credentials. Anthropic announces AI with autonomous access. Now everyone's debating AI permissions instead of questioning why we're giving software this much access at all. Here's what just happened: Anthropic benefited from a security scare that makes their controlled environment look safer by comparison. "At least it's our AI making decisions, not some random download." Except the risk is identical. Software with permission to run commands autonomously is software that can be exploited. Doesn't matter if it's malware or Claude. The attack surface is the same. Think about what "AI decides what's safe" actually requires: trusting a system that hallucinates to evaluate risk accurately every single time. One wrong call and your data's gone. One misinterpreted prompt and files get deleted. But hey, at least you consented to giving it access. The real problem isn't AI permissions. It's that we've normalized giving everything root access because convenience beats security. Now AI's just the next thing demanding keys to the kingdom. And we're too tired from security fatigue to push back. nuscienta.com/build-ai-and-d…
0 replies · 2 reposts · 7 likes · 1.1K views
NuScienta @NuScienta
Anthropic announcing mobile tool access shows exactly what's wrong with how companies think about AI adoption. They're optimizing for access, not literacy. You can check Amplitude dashboards on mobile now. Great. But if you don't understand what metrics actually matter for your business, you're just scrolling numbers that look important. Same with Figma designs. Claude can pull them up on your phone. But without design literacy, you can't tell good feedback from surface-level reactions. The bottleneck was never device access. It was knowing how to use the tools meaningfully. And here's what happens when you prioritize distribution over education: more people using AI badly. Faster decisions based on outputs they can't validate. Confidence without competence. Mobile access makes that worse, not better. Because now people are making AI-assisted calls while distracted, on small screens, without the context desktop work provides. That's not productivity. That's expanding the surface area for expensive mistakes. The companies celebrating mobile AI tools should ask: do our teams actually understand what they're looking at, or are we just making it easier to act on information they can't interpret? Build AI & Data Skills 15 Minutes a Day: nuscienta.com
0 replies · 0 reposts · 1 like · 7 views
Claude @claudeai
Your work tools in Claude are now available on mobile. Explore Figma designs, create Canva slides, check Amplitude dashboards, all from your phone. Give it a try: claude.com/download
1.2K replies · 1.5K reposts · 20.1K likes · 5M views
NuScienta @NuScienta
One engineer shipping a voice feature with Codex isn't proof AI multiplies productivity. It's proof Notion had existing code to copy and a simple enough feature to automate. Ryan pointed Codex at working mobile code and said "port this to web." That's not building. That's translating. Useful? Sure. But let's not pretend this is "building features solo." The hard work (designing the feature, solving platform constraints, testing edge cases) already happened on mobile. Codex just moved proven code to a new platform. And "while still managing a team" is supposed to sound impressive. But managing is exactly when you need automation for simple tasks. Because you don't have time to port features manually. Here's the test: could Ryan have built a novel voice feature from scratch with Codex? Or does it only work when the blueprint already exists? Porting existing code is the easy part of engineering. The part AI handles well. The parts it can't do: figuring out what to build, why it matters, and how it fits the product. Those decisions still need humans. Notion's not celebrating AI replacing engineers. They're celebrating tools making experienced engineers faster at grunt work they shouldn't be doing anyway. Build AI & Data Skills 15 Minutes a Day: nuscienta.com
0 replies · 0 reposts · 0 likes · 34 views
OpenAI Developers @OpenAIDevs
With Codex, Ryan built @NotionHQ’s AI Voice Input feature entirely by himself. @ryannystrom used Codex to understand the context, point Codex to an existing mobile feature, and ship it to web and desktop while still managing a team.
42 replies · 26 reposts · 462 likes · 49.9K views
NuScienta @NuScienta
Anthropic shipping features daily isn't impressive. It's proof they're releasing half-baked code to win news cycles. Quality products don't need daily updates. They need updates that actually work. Here's what's really happening: every AI company's in a features arms race. So they ship whatever's ready enough to demo and fix it later in production. "Auto mode" today. Patch for auto mode breaking things tomorrow. Another feature the day after to distract from the bugs. That's not velocity. That's churn. Think about what daily releases actually mean for users. Constant interface changes. Breaking workflows. Relearning features that got "improved." Nobody benefits from that except the PR team tracking mentions. And "permission decisions on your behalf" shipping this fast means it wasn't thoroughly tested. Can't be. Testing takes time. So users are the QA. Running autonomous agents in production while Anthropic figures out edge cases. The companies celebrating "they ship every day" should ask: how many of those features actually stuck versus got quietly deprecated when they didn't work? Shipping fast doesn't mean shipping well. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 0 likes · 3 views
NuScienta @NuScienta
"Almost anything humans can" with the same error rate is the most generous interpretation of LLM capabilities I've seen this year. Here's what those models actually do: excel at pattern-matching tasks within their training distribution. Fail catastrophically outside it. Humans don't work that way. Ask GPT-5.4 to debug code it's seen variations of? Great performance. Ask it to reason about a novel system architecture with conflicting constraints? Confident nonsense. And "similar error rate" is doing massive work. Humans know when they're guessing. Models don't. That difference matters more than speed. Think about what "with the right tools" actually means. You're giving AI access to search, code execution, APIs, specialized plugins. Humans operate with vision, hearing, motor control, real-world physics understanding, and contextual memory. Those aren't equivalent tool sets. Not even close. The tasks AI handles well, text generation, code synthesis, data analysis, are a tiny subset of human capability. It can't negotiate ambiguous social situations. Can't adapt to completely novel problems. Can't explain its reasoning reliably. Calling that "general human-level" is confusing speed with intelligence. Fast execution of known patterns isn't the same as flexible reasoning across domains. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 0 likes · 26 views
Haider. @slow_developer
from what i'm seeing with opus 4.6 and gpt-5.4, i think people who say we haven't reached general human-level AI are probably imagining something beyond it
with the right tools both models can do almost anything humans can, with a similar error rate, just much faster
36 replies · 8 reposts · 120 likes · 5.5K views
NuScienta @NuScienta
Every monitoring and testing platform already does some version of this. Datadog. Sentry. LaunchDarkly. PlayerZero just wrapped it in AI branding and claimed they invented it. "92.6% accuracy across 3,000 scenarios" sounds impressive until you ask: what happens with the 7.4% it misses? Those are the bugs that take down production. The edge cases. The stuff you can't predict from historical data. And "eliminates 30,000 hours of test writing" is consultant math. Assumes the entire QA process is writing tests that AI could generate. It's not. QA is understanding requirements. Designing test strategies. Validating that the system does what users actually need. Code simulation doesn't replace that. It automates one piece. Think about the cap table flex: CEOs of Figma, Dropbox, Vercel backing this. That's not validation of the technology. That's well-connected founders getting checks from their network. If this actually eliminated QA teams, those same CEOs would be cutting their own testing departments right now. They're not. Because tools that predict bugs don't replace people who understand what the product should do. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 0 likes · 68 views
Chris @chatgpt21
It seems as if writing code is mostly solved. Keeping it alive in production after you ship it is the actual bottleneck. A 26-year-old ex-OpenAI researcher out of Stanford just raised $20M to kill traditional QA testing. Here is the breakdown:
* The Tech: PlayerZero’s Sim-1 model takes a code change and cross-references it against real production data to predict what will break.
* The Accuracy: 92.6% across 3,000+ real production scenarios. It's currently beating both Claude Code and Codex at accurate code simulations.
* The Impact: It does in minutes what a 300-person QA team does in weeks. For a 50-engineer team, it eliminates roughly 30,000 hours of test writing per year.
The cap table is a who's who of tech royalty: CEOs of Figma, Dropbox, Vercel, and Databricks.
Animesh Koratana @akoratana

Introducing: PlayerZero
The world's first Engineering World Model that puts debugging, fixing, and testing your code on autopilot.
We've raised $20M from Foundation Capital, @matei_zaharia (Databricks), @pbailis (Workday), @rauchg (Vercel), @zoink (Figma), @drewhouston (Dropbox), and more
PlayerZero frees up 30% of your engineering bandwidth by:
1. Finding the root cause for bugs & incidents in minutes that engineering teams take days to identify.
2. Predicting in minutes, edge case issues that a 300-person QA team would take weeks to find.
------
Here's why this matters:
No one in your org has a complete picture of how your production software actually behaves. Support sees tickets. SRE sees infra. Dev sees code. Each team builds their own fragmented view - and none of these systems talk to each other. When something breaks, everyone scrambles to stitch the picture together by hand.
PlayerZero connects all of it into a single context graph -
→ The Slack thread where your lead said "we went with X because Y fell apart in prod last time"
→ The PR review where an engineer explained the tradeoff
→ The lifetime history of your CI/CD pipeline, observability stack, incidents, and support tickets
So you can trace any problem to its root cause across every silo.
And it compounds. Every incident diagnosed teaches the model something new. The longer it runs, the deeper it understands - which code paths are high-risk, which configurations are fragile, which changes tend to break which customer flows.
So when you sit down to debug a live issue, you have your entire org's collective reasoning and production memory behind you - instantly.
------
Zuora, Georgia-Pacific, and Nylas have reduced resolution time by 90%, caught 95% of breaking changes, and freed an average of $30M in engineering bandwidth.
------
Our guarantee: If we can't increase your engineering bandwidth by at least 20% within one week, we'll donate $10,000 to an open-source project of your choice.
Book a demo - bit.ly/3NlLMeN

13 replies · 11 reposts · 150 likes · 20.7K views
NuScienta @NuScienta
Calling eight LLMs "the only useful ones" is just your current preferences with authority cosplay. Here's what this actually is: a snapshot of what works for you right now. Not universal truth. "Opus 4.6 king of coding" today. Sonnet 4.7 ships next week and you rewrite the list. These rankings change faster than people can learn to use the tools. And "Grok explains things on X" isn't a use case. That's "I use X so I use Grok." Context-dependent, not objectively useful. Think about what "real world" even means here. Real world for who? Developers? Researchers? People writing emails? Different jobs need different models. There's no universal eight. The "four expensive and four cheap" framing is the only honest part. Because that's the actual decision: budget versus capability. Everything else is just personal workflow dressed up as industry standard. Here's the pattern: people find models that work for their specific tasks. Then declare those the best ones. Ignoring that someone doing different work would build a completely different list. This isn't guidance. It's anecdata. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 1 like · 205 views
Bindu Reddy @bindureddy
There are literally a million LLMs but only these eight are useful in the real world
Opus 4.6 - king of coding
GPT 5.4 - everyday use
Grok 4.2 - explains things on X
Gemini Flash - cheap, all purpose
5.4 Thinking - xls and deep research
GLM - cheap coding agents
Kimi K2.5 - claw like agents
MiniMax 2.7 - more claw
Four expensive and four cheap ones.
90 replies · 75 reposts · 889 likes · 85.7K views
NuScienta @NuScienta
$730 billion valuation for a company that's never turned a profit is either genius or the biggest bubble in tech history. There's no middle ground. Here's what $730B actually means: OpenAI is now valued higher than every payment processor, most banks, and companies that actually make physical products people need. For what? Chat software and API access that loses money on every request. $120 billion raised total means they've burned through previous rounds fast enough to need more. That's not growth momentum. That's cash addiction. And the investor list, MGX, Coatue, Thrive, tells you this isn't about near-term returns. It's about not missing out if AGI actually happens. That's FOMO funding. Not thesis-driven investing. Think about the math. At $730B, OpenAI needs to generate massive profits just to justify current valuation. Not grow into it. Justify what they're worth today. They're not even close. Revenue's climbing but so are costs. Margins are negative. Path to profitability keeps pushing further out. This valuation only works if you believe AGI arrives soon, transforms everything, and OpenAI captures most of the value. That's a lot of ifs for three-quarters of a trillion dollars. nuscienta.com/build-ai-and-d…
0 replies · 0 reposts · 1 like · 15 views
Cointelegraph @Cointelegraph
⚡️ JUST IN: OpenAI is nearing a $10B raise from investors including MGX, Coatue, and Thrive, valuing the company at around $730B and bringing its latest funding round to roughly $120B
82 replies · 52 reposts · 282 likes · 22.8K views