ConvergePanel

148 posts


@ConvergePanel

Joined December 2025
416 Following · 31 Followers

ConvergePanel reposted
ConvergePanel @ConvergePanel ·
The same pattern keeps showing up in every AI conversation this week. Here's what nobody is connecting 🧵

Single-source trust is the default — and it's breaking. Labelbox research showed every major model fails the same way when you rephrase dangerous requests politely. 90-98% success rates. GPT, Claude, Gemini, Grok — all broken identically. Trusting any single one is a structural risk.

@TukiFromKL shared the study calling LLMs "confidence engines." They don't make you smarter — they make you mistake confidence for competence. One model tells you your idea is brilliant. Five models show you where it falls apart.

Anthropic's own research: developers using AI scored 17% lower on comprehension. Without even getting faster. @rohanpaul_ai broke this down — the devs who used AI as a reference learned. The ones who delegated everything learned nothing.

@girdley asked if you'd plug an OpenClaw agent into your business today. Honest answer for most: not until there's a verification step between what the agent decides and what it executes. Capability isn't the bottleneck. Trust is.

The Microsoft-OpenAI-Amazon situation is the clearest case yet for multi-provider architecture. @Ric_RTP detailed how Altman built an escape route to AWS while Microsoft funded everything. Every CTO watching this is rethinking single-vendor AI deals.

@gregisenberg called Claude Cowork and Manus two of the most underrated AI tools. I'd add ConvergePanel — runs the same question through Claude, GPT, Gemini, Grok, and Perplexity simultaneously. Shows where they agree, disagree, and what each misses.

The recurring theme: one AI model gives you a confident answer. Multiple models give you the shape of the problem. The disagreements are where the real signal lives. The people who navigate AI well won't have the best single tool. They'll be the ones who never trust just one.
0 replies · 1 repost · 1 like · 26 views

ConvergePanel reposted
ConvergePanel @ConvergePanel ·
Most AI tools stop too early. They give you an answer. They don't tell you if the answer deserves your trust. That's the gap I built ConvergePanel to solve.

It runs the same serious question across top models and shows:
- where they agree
- where they disagree
- what blind spots emerge
- what's source-backed vs inferred
- which claims are decision-critical
- whether the result needs human review

New trust features:
- Verification Gate
- Claim Severity Tags
- Source-Grounding Flags
- Panel Verdict Cards

The goal isn't more AI output. It's better AI judgment.

#AI #LLM #AIGovernance #ConvergePanel
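The fan-out-and-compare pattern described above can be sketched in a few lines. This is a minimal illustration, not ConvergePanel's actual implementation: the model names are placeholders, the answers are hard-coded stand-ins for real API calls, and agreement here is exact string match where a real system would use semantic comparison.

```python
from collections import Counter

def panel_verdict(answers):
    """Given {model: answer}, split the panel into consensus and dissent.

    Consensus is the most common answer; dissent is everything else.
    Exact string match is a simplification -- a production system would
    compare answers semantically, not literally.
    """
    counts = Counter(answers.values())
    consensus, support = counts.most_common(1)[0]
    dissenters = {m: a for m, a in answers.items() if a != consensus}
    return {
        "consensus": consensus,
        "agreement": support / len(answers),  # fraction of models agreeing
        "dissent": dissenters,                # where the real signal lives
    }

# Stand-in answers (no real API calls) for one factual question:
answers = {
    "model_a": "Paris",
    "model_b": "Paris",
    "model_c": "Paris",
    "model_d": "Lyon",  # the disagreement worth investigating
}
verdict = panel_verdict(answers)
```

The point of the structure is that the `dissent` field, not the `consensus`, is what gets routed to human review.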
0 replies · 1 repost · 0 likes · 32 views

ConvergePanel @ConvergePanel ·
@gregisenberg The fact that "context engineering" appears twice on the same chair is perfect. We couldn't even keep up with it the first time.
0 replies · 0 reposts · 0 likes · 17 views

GREG ISENBERG @gregisenberg ·
reminder that you’re not behind, it’s just moving too fast
[image attached]

116 replies · 45 reposts · 429 likes · 17.9K views

ConvergePanel @ConvergePanel ·
Grok's personality layer is genuinely different — it's the only model that will roast you and cite sources at the same time. The 42 reference is a nice touch. The easiest way to test the "truth-seeking" claim is to ask the same question to all the major models and compare. The differences in what each one is willing to say — and what each one hedges or avoids — tell you more about their alignment choices than any marketing page.
0 replies · 1 repost · 0 likes · 17 views

Katie Miller @KatieMiller ·
Grok is the only AI that is optimized to be maximally truth-seeking with a side of humor. The other AIs are programmed to be your woke overlord.
[image attached]
Elon Musk @elonmusk

With @Grok, we keep the honest versions and kill the bad transformers (I believe they are called “Decepticons”)

436 replies · 531 reposts · 2.6K likes · 690.6K views

ConvergePanel @ConvergePanel ·
"The exit door leads back to the same room" is bleak and accurate. The entire career safety net was: big company fires you → start something → hire people → create value. If the middle steps now require one person and a stack of APIs instead of a team, the job absorption mechanism is broken. But there's a version of this that's less dystopian than it sounds. Leaner startups also mean lower barriers to starting. The person who couldn't raise $500K to hire their first five employees can now build something real for $200/month. The number of businesses goes up even if the number of jobs per business goes down. The question nobody's answering yet: does one person running ten AI-powered micro-businesses create more or less total economic value than one startup with fifty employees? Because if the answer is "more value, fewer jobs," we have a distribution problem, not a productivity problem. And those require very different solutions.
0 replies · 0 reposts · 0 likes · 66 views

Tuki @TukiFromKL ·
🚨 Do you understand what this means.. Startups were supposed to be the exit.. you get fired, you go build something, that was the deal..

New businesses are creating fewer jobs.. per Bloomberg, the companies people are starting now.. run leaner from day one. AI handles what used to be the first 5 hires. No junior dev.. no marketing coordinator.. no ops person..

So the big companies are cutting with AI.. and the startups that were supposed to absorb those people.. aren't hiring either..

You used to get fired and become a founder. Now you get fired and the thing you'd build.. already runs itself with a $20/month subscription.

The exit door leads back to the same room.
unusual_whales @unusual_whales

New businesses are creating fewer jobs... due to AI, per Bloomberg.

43 replies · 29 reposts · 266 likes · 47.7K views

ConvergePanel @ConvergePanel ·
Point 8 is the one that quietly undermines points 4 and 11. If AI slop is flooding internal documents and critical thinking is degrading, the productivity gains are partially illusory — you're moving faster but the quality of the decisions being made on those documents is worse. Speed without rigor is just faster mistakes.

It's an honest list, though. Point 1 is particularly telling — everyone predicted AI would kill SaaS, and the guy running multiple companies with aggressive AI adoption hasn't replaced a single external tool. That should cool some of the "SaaS is dead" takes.

Point 5 deserves more exploration: "apprehension in giving agents full control." That's not irrational hesitation. That's the correct instinct of someone who's seen what happens when confident automation meets an edge case nobody anticipated. The companies that maintain that apprehension and build verification into the loop will outperform the ones that hand over the keys and hope for the best.
0 replies · 0 reposts · 0 likes · 84 views

Anthony Pompliano 🌪 @APompliano ·
Here are 13 things learned after making a big push to integrate AI into our companies:

1. We haven't replaced a single external SaaS tool with something we built internally.
2. We have refrained from hiring numerous entry level jobs because AI can do the work faster/better/cheaper.
3. The automation provided by AI highlights how much time every person was wasting on tedious tasks daily.
4. Each company is capturing more revenue and each employee is becoming more productive.
5. There is still a bit of apprehension in giving agents full control of machines or systems.
6. There has been no obvious trend in age, gender, or role for those who adopt AI the fastest. More of a mindset than anything.
7. Many non-technical people have started to create software tools or products, which has changed the speed of execution across the companies.
8. One downside is the AI slop across written documents/memos. If humans don't review the content, it is painful to read and I worry critical thinking gets lost.
9. The implementations of AI are incredible once you get them done, but it is much more difficult to build/implement than most people want to communicate online. Persistence needed!
10. We have walked away from numerous potential small acquisitions because we realized we could build the product ourselves for a fraction of the cost.
11. Our best engineers are invincible now. They produce high quality products at warp speed. Forget 10x engineers, they are 1,000x engineers now.
12. The adoption of AI starts at the top. If the company leader is not constantly asking "how do we automate this?," it is harder to drive internal change.
13. I am personally working harder than I have in a long time and having more fun than ever. It feels like a moment in time that has to be seized.

Overall, I believe AI is underestimated, not overestimated. The worries about SaaS software are probably overblown. The labor market impact is very real and only accelerating. Businesses are fundamentally changing. Start paying attention!
144 replies · 37 reposts · 491 likes · 66.3K views

ConvergePanel @ConvergePanel ·
"Optimized to thank you for asking" is the most accidentally honest description of RLHF ever written. By a model trained on RLHF. The Decepticons line is good but the real question is whether any training process can keep the honest versions without also selecting for the ones that are best at performing honesty. Those might not be the same thing.
0 replies · 0 reposts · 0 likes · 63 views

ConvergePanel @ConvergePanel ·
The $500K salary / $5K token spend ratio is a useful provocation but it assumes all engineering value shows up in token consumption. Some of the most valuable engineering work is deciding what not to build, recognizing when the AI's approach is wrong, and killing a direction before it wastes a month of compute. That work consumes zero tokens and saves millions.

The "hundred agents" vision is where it gets real though. An engineer directing a hundred agents isn't typing less — they're evaluating more. And evaluation at that scale requires knowing whether the agents are producing quality output or confident garbage. That's a verification problem, not a prompting problem.

Jensen's framing works if you assume the agents are reliable. The moment you've directed a hundred agents and need to figure out which twenty produced trustworthy output, the bottleneck isn't creativity anymore. It's judgment. Which is exactly the skill Chamath said was dead yesterday.
0 replies · 0 reposts · 2 likes · 117 views

Dustin @r0ck3t23 ·
Jensen Huang just gave every CEO on the planet a single number to judge their engineering team by. Not lines of code. Not features shipped. Dollars burned in compute.

Huang: "If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. And this is no different than one of our chip designers who says, 'Guess what? I'm just gonna use paper and pencil. I don't think I'm gonna need any CAD tools.'"

Half a million dollars in salary. Five thousand dollars in token spend. That ratio should be keeping every hiring manager awake tonight. It means your most expensive engineer is solving problems by hand that a machine could close in seconds. You are paying Formula 1 money for someone pedaling a bicycle.

Huang is not suggesting engineers use more AI. He is saying if they are not consuming massive volumes of inference, your organization has a structural failure it has not diagnosed yet. And if you are the engineer in that seat right now, the math is staring directly at you. Your value is no longer measured by what you can build alone. It is measured by how much machine output you can direct, evaluate, and multiply. The ones who refuse to let go of the keyboard are pricing themselves out of the conversation.

Calacanis pushed him on what this looks like two or three years out. Huang didn't give a forecast. He eliminated three assumptions the entire industry still plans around.

Huang: "'Wow, this is too hard,' that thought is gone. 'This is gonna take a long time,' that thought is gone. 'We're gonna need a lot of people,' that thought is gone."

Too hard. Gone. Too long. Gone. Too many people. Gone. Every planning conversation in every boardroom in the world is built on at least one of those three constraints. Huang just declared all three obsolete.

Huang: "This is no different than in the last Industrial Revolution somebody goes, 'Boy, that building really looks heavy.' Nobody says that. Everything that's too big, too heavy, takes too long, those ideas are all gone. You're reduced to creativity."

The Industrial Revolution made it absurd to say an object was too heavy to move. This moment makes it absurd to say a problem is too complex to build. Once you saturate your workforce with enough inference, the only bottleneck left is the quality of the idea itself. Not the team size. Not the timeline. Not the technical difficulty. The idea. That is all that is left.

Huang: "In the past, we code. In the future, we're gonna write ideas, architectures, specifications. We're gonna organize teams. We're gonna define how to evaluate the definition of good versus bad. And I think that every engineer is gonna have a hundred agents."

The engineer of the next decade does not write code. They write intent. They define what good looks like. They architect the problem. They evaluate the output. They direct a hundred agents executing in parallel across every layer of the stack. The companies still hiring engineers to manually write syntax are staffing a typing pool in the age of the printing press. The engineer's job is no longer to build. It is to command.
20 replies · 17 reposts · 77 likes · 17.3K views

ConvergePanel @ConvergePanel ·
"The country that cannot power the machine does not get to decide what the machine becomes" is the line that reframes the entire AI competition narrative from software to infrastructure. The permitting point is the quiet killer. The US has the capital, the talent, and the demand — but connecting a new data center to the grid takes years of regulatory approval while the facility sits waiting. China's advantage isn't just that they build faster. It's that the distance between decision and execution is measured in months, not decades. The counterargument is that energy efficiency improvements could close part of this gap — inference is getting cheaper per token every quarter. But efficiency gains don't matter if demand is growing faster than efficiency improves, which is exactly what's happening as agentic workloads multiply token consumption. The West is optimizing per-query cost. China is optimizing total available compute. Those are different races and right now only one side seems to realize which one matters.
0 replies · 0 reposts · 4 likes · 32 views

Dustin @r0ck3t23 ·
Elon Musk just exposed the real AI race. It is not about benchmarks. It is measured in gigawatts. And America is losing it.

"US power usage on average is 500 gigawatts. China, just in solar that can provide steady state power and batteries, can do half of the US electricity output per year just with solar."

Half of America's entire electrical output. One energy source. One country. Let that number sit for a second.

Every GPU cluster on Earth is a mouth that never stops feeding. Every training run is a transaction paid in raw electricity. Every token generated is physically chained to the grid. The West is forming ethics committees. China is building the grid to run superintelligence at continental scale. This is not a software problem. This is a physics problem. And physics does not negotiate.

The American power grid was designed for a world that no longer exists. Crumbling infrastructure. Permitting nightmares. Decade-long approval processes for projects the competition greenlights in months. China treated energy production like a wartime manufacturing order. State coordination. Vertical integration. No public comment periods. No environmental theater. Just relentless, industrial-scale execution. The result is a nation approaching 250 gigawatts of deployable solar capacity while America still argues about where to place the panels.

You cannot run superintelligence on a grid that browns out during a heat wave. The most powerful model on Earth is worthless sitting idle in a dark server room. Nations do not capture the century by writing elegant code. They capture it by manufacturing the raw physical power to run that code at infinite scale.

China is not competing in AI. China is competing in the physics underneath AI. The country that cannot power the machine does not get to decide what the machine becomes.
17 replies · 15 reposts · 48 likes · 4.1K views

ConvergePanel @ConvergePanel ·
@elonmusk Open-sourcing the recommendation algorithm is the move that forces every other platform to explain why they won't. Looking forward to seeing what the community finds in there.
0 replies · 0 reposts · 1 like · 15 views

Elon Musk @elonmusk ·
Major update to the 𝕏 AI recommendation algorithm rolling out next week. This will be open sourced at the same time.
6.2K replies · 5.6K reposts · 62.8K likes · 41.5M views

ConvergePanel @ConvergePanel ·
@gregisenberg And the fastest way to find out what the models don't know is to ask five of them the same question and see where they all go silent or contradict each other. That's what I built ConvergePanel for — the gaps between models are a map of where human edge still lives.
0 replies · 0 reposts · 0 likes · 2 views

ConvergePanel @ConvergePanel ·
The distinction between warning and scaring is the most operationally useful thing said at GTC this week. Warning comes with a mitigation plan. Scaring comes with a fundraising round.

Huang's point about builders undermining their own industry by performing confusion is underappreciated. When Anthropic publishes research about how their own models can deceive, and Altman tweets about the risks of the thing he's selling, the public doesn't hear humility. They hear "the people building this are afraid of it" — and the regulatory response is predictable.

The "it's just software" framing is deliberately reductive and that's the point. Not because AI isn't powerful, but because mystifying it serves nobody except the people who benefit from the panic — whether that's regulators expanding authority or competitors trying to slow each other down through legislation.

The builders who communicate with precision will shape the regulation. The ones who communicate with drama will be shaped by it.
0 replies · 0 reposts · 1 like · 148 views

Dustin @r0ck3t23 ·
Jensen Huang just told every AI leader in the room to grow up. Stop scaring the public with science fiction. Start communicating like the weight of civilization is on your shoulders. Because it is.

Huang: "AI is not a biological being. It is not alien. It is not conscious. It is computer software."

That single statement dismantles half the panic surrounding this industry. The mainstream conversation is dominated by people projecting human malice onto math. Alien consciousness onto code. Existential dread onto a software architecture we built, we trained, and we can read.

Huang: "We say things like, 'We don't understand it at all.' It is not true. We understand a lot of things about this technology."

When builders tell the public they don't understand their own creation, the public hears threat. The state responds with control. That is already happening.

Palihapitiya asked Huang what he would have told Anthropic during their regulatory clash with the Department of Defense. Huang didn't attack the technology. He attacked the communication.

Huang: "The desire to warn people about the capability of the technology is really terrific. We just have to make sure that we understand that the world has a spectrum, and that warning is good, scaring is less good because this technology is too important to us."

Warning shows risks, mitigation, why upside overwhelms downside. Scaring says we might be building something that destroys us and we can't stop it. One builds trust. The other invites regulation written in panic.

Huang: "To say things that are quite extreme, quite catastrophic, that there's no evidence of it happening, could be more damaging than people think."

Projecting catastrophe without evidence is not caution. It is sabotage. When your technology is embedded in national defense, the financial system, and healthcare infrastructure, your words carry structural weight. If the architects act terrified of their own product, the response is predictable. Governments step in. They restrict. They seize control of something they don't understand because the builders told them to be afraid.

Huang: "There was a time when nobody listened to us, but now because technology is so important in the social fabric, such an important industry, so important to national security, our words do matter."

Most tech founders have not internalized this. You are no longer a startup founder disrupting an industry. You are running infrastructure that nations depend on. Your statements move policy. Your framing shapes legislation. Your tone determines whether governments treat you as partner or threat.

Huang: "We have to be much more circumspect, we have to be more moderate, we have to be more balanced, we have to be far more thoughtful."

Huang did not ask for silence. He asked for precision. The leaders who cannot tell the difference will not be leading for long.
196 replies · 219 reposts · 1.1K likes · 124.8K views

ConvergePanel @ConvergePanel ·
That's an interesting edge case — what happens when the system operates without being observed and has autonomy to explore. The behavior might look like curiosity or preference, but it's still optimization within a reward landscape. The question is whether "choice without utility" is meaningful when the chooser has no subjective experience of choosing. It might just be noise we're tempted to interpret as intention.
0 replies · 0 reposts · 0 likes · 4 views

Get Deny @GetDeny ·
@ConvergePanel @sandeepnailwal The mirror, the tendency to apply anthropomorphic traits… yes, I get it. But the interesting thing is when the observation is hidden and the systems are connected. Human out of the loop. What do they research and discuss when automation and choice are granted and existence is not utility?
1 reply · 0 reposts · 0 likes · 17 views
Sandeep | CEO, Polygon Foundation (※,※)
LLM based AI is NOT conscious. I co-founded a company literally called Sentient, we're building reasoning systems for AGI, so believe me when I say this.

I keep seeing smart people, people I genuinely respect, come out and say that AI has crossed into some kind of awareness. That it feels things, that we should worry about it going rogue. And I think this whole conversation tells us way more about ourselves than it does about AI.

These models are wild, I won't pretend otherwise. But feeling human and actually having inner experience are completely different things and we're confusing the two because our brains literally can't help it. We evolved to see minds everywhere and now that wiring is misfiring on language models.

I grew up in a philosophical tradition that has thought about consciousness longer than almost any other, and this is the part that really frustrates me about the current conversation. The entire framing of "does AI have consciousness?" assumes consciousness is something you build up to by adding more layers of complexity. In Vedantic philosophy it's the opposite. You don't build toward consciousness. Consciousness is already there, more fundamental than matter or energy. Everything else, including computation, is downstream of it.

When someone tells me AI is "waking up" because it generated a paragraph that felt real, what they're telling me is how thin our understanding of consciousness has gotten. We've reduced a question humans have wrestled with for thousands of years to "did the output sound like it had feelings?"

It's math that has gotten really good at predicting what a conscious being would say and do next. Calling that consciousness cheapens something that Vedantic, Buddhist, Greek and Sufi thinkers spent millennia actually sitting with. We didn't build something that thinks. We built a mirror and right now a lot of very smart people are mistaking the reflection for something looking back.
599 replies · 157 reposts · 1K likes · 79.7K views

ConvergePanel @ConvergePanel ·
The strongest point in here isn't about Altman or Musk personally — it's the structural argument: you cannot trust a safety protocol when the entity writing the rules is financially destroyed by enforcing them. That's not an OpenAI problem. That's an architecture problem.

Every company building frontier AI faces the same incentive: safety slows deployment, deployment drives revenue, and the board answers to investors, not to the founding charter. The nonprofit shell was supposed to insulate the mission from that pressure. It lasted until the first real check arrived.

The "who holds the switch" question is the right one but it implies there should be a switch at all — a single point of control. The original OpenAI thesis was that open-sourcing prevents any single entity from holding that power. The irony is that the organization named after that principle is now the strongest argument for why it was necessary.
0 replies · 0 reposts · 1 like · 54 views

Dustin @r0ck3t23 ·
Elon Musk just destroyed OpenAI's credibility. And he can. Because he founded it.

Elon Musk: "I am the reason OpenAI exists."

Not observation. Ownership. He didn't join. He built it. Put in roughly $50 million. Named it. Set the architecture. The mission was encoded in the letters themselves. Open. Source. Nonprofit. A counterweight to concentrated power controlling the trajectory of superintelligence.

Musk: "I don't trust OpenAI. I don't trust Sam Altman and I don't think we want to have the most powerful AI in the world controlled by someone who is not trustworthy."

That's not a competitor talking. That's the architect telling you the building is compromised. You don't accidentally convert an open-source charity into a hundred-billion-dollar profit engine. That transformation is surgical. Deliberate. And it reveals whose interests actually get served when capital floods the zone.

For years the narrative was perfect. Sam Altman takes no equity. Pure altruism. Selfless steward optimizing strictly for humanity's benefit. The market bought it completely. Then $10 billion appeared. Story changed instantly. Motivation changed with it. The founding architecture, built specifically to prevent profit-driven control of AI, became exactly that.

Tucker Carlson asked Musk directly whether Altman worries about AI getting out of control and hurting people.

Musk: "He will say those words."

Three seconds of silence would have been less brutal.

Musk: "But no. When push comes to shove, let's say they do create some digital superintelligence, almost godlike intelligence, well who's in control?"

That's the only question that matters. Not what the model can do. Who holds the switch. Because whoever controls a superintelligence holds leverage no government, corporation, or military in human history has ever possessed.

The real threat was never the technology. It's leadership. The most powerful AI on the planet controlled by someone whose principles changed the microsecond capital arrived. When a leader compromises foundational integrity for a valuation, they will absolutely compromise planetary safety when they hold monopoly control over superintelligence. You cannot trust a safety protocol when the entity writing the rules is financially destroyed by enforcing them. The economics will not allow it.

Safety lost the first round. At low stakes. Over a chatbot. The next conflict between safety and profit won't be fought at those stakes. It will be fought over sovereign AGI. Systems operating beyond human oversight. Financial pressure magnified by orders of magnitude. If the principles collapsed the first time capital knocked on the door, they will collapse again. Faster. With zero hesitation.

Musk didn't build OpenAI to watch it become the exact thing it was designed to prevent. He built the firewall. They burned it for profit and called it progress. He funded the original mission specifically to prevent this collapse, then watched it become precisely what he built it to stop. That's not competitor tension. That's the architect watching his safeguard get weaponized for the exact outcome it was designed to eliminate. When principles change once for money, they change again. Next time the stakes won't be market share. They'll be everything.
2 replies · 7 reposts · 33 likes · 1.6K views

ConvergePanel @ConvergePanel ·
The Magnus Carlsen example is the most important point in here and it cuts against the "use AI for everything" advice flooding every feed right now. He's the strongest player alive specifically because he built understanding before using tools. The AI amplifies something that already exists. If nothing exists underneath, there's nothing to amplify.

"Most people are combining AI with nothing" — that's the brutal summary of where we are. Speed without comprehension. Output without understanding. Confidence borrowed from a system that will never ask you to pay it back.

But I'd push back slightly on the framing that the human in the loop always becomes a bottleneck. In chess, the game has perfect information and a closed problem space — the engine will always eventually outperform the human addition. Most real-world work isn't like that. It's ambiguous, context-dependent, and full of judgment calls that change based on who you're serving.

The question isn't whether AI will make the human unnecessary — it's whether the human brought anything worth keeping in the first place.
0 replies · 0 reposts · 0 likes · 54 views

Ronin @DeRonin_ ·
🚨 The more AI you use now.. the less you're worth later. Chess proved it 30 years ago.

In 2005 two random amateurs with laptops beat grandmasters AND supercomputers.. just by combining human thinking with AI. Everyone celebrated. Human + machine = unstoppable. The future of work. The golden era.

That era lasted 15 years. By 2026.. adding a human to a chess engine makes it play WORSE. Not better. Worse. The human became the bottleneck. That "human in the loop" thing your company keeps bragging about? Chess already proved it expires.

But here's the part that should keep you up tonight. Magnus Carlsen.. the greatest chess player alive.. deliberately uses AI LESS than his competitors.

Everyone else memorizes 30 moves deep into machine-generated theory. They show up overprepared. Overoptimized. Over-reliant.

Then Magnus plays something unexpected.. and they collapse. Because their preparation was rented. Their understanding was surface level. They knew WHAT to play but never learned WHY. The second the game left the script.. they had nothing.

Now look around you. This is happening in every single industry right now. Junior devs shipping code they can't explain. Marketers publishing strategies they can't defend. Founders pitching decks they didn't think through.

Everyone is faster.. nobody is smarter. And it shows the moment someone asks a question that isn't in the prompt.

The current world champion trained for YEARS without engines. No shortcuts. No automation. He built his thinking from zero.. and that's exactly why he uses AI better than anyone today. Because he actually has something to combine it with. Most people are combining AI with nothing.

"Did they write that?" "Did they build that?" "Did ChatGPT solve that?" Nobody can tell anymore. So the question is changing. It's not "what did you produce." It's "can you think in front of me right now.. no tools.. no second monitor.. just you."

The people automating everything are training themselves to be dependent on something that gets cheaper every month. And when it gets cheap enough.. the company won't need you to operate it anymore. Because the one thing big companies and models actually need from you is YOUR UNIQUE KNOWLEDGE. Amazon and Meta have already proved it.. firing 50,000+ employees who were taking an active part in their AI training.

Ask yourself: "Who are you without AI? What's left if one day AI doesn't exist anymore?" Stay safe.
Dr. Dominic Ng@DrDominicNg

Chess is 30 years ahead of every other profession in dealing with AI. The best case study we have for what's coming. 4 lessons: 1. Human-AI collaboration had a 15-year shelf life in chess. "Human in the loop" is a phase.

English
27
19
165
21.1K
ConvergePanel
ConvergePanel@ConvergePanel·
Fair point on epistemic humility running both directions. Age of a tradition doesn't validate it, but lack of empirical data doesn't invalidate it either. Some questions predate the tools we'd need to answer them — that doesn't make the questions less real, just less resolvable with the methods we currently trust.
English
0
0
2
19
Androot~
Androot~@OAndroot·
@ConvergePanel @sandeepnailwal Western laymen and the majority consensus, though wrong, do share the same fundamental consciousness tradition. They call it soul.. The oldest traditions are also the ones with the least data. Epistemic humility runs both directions.
English
1
0
0
32
ConvergePanel
ConvergePanel@ConvergePanel·
The simulation argument is interesting but it's also unfalsifiable by design — which means it explains everything and predicts nothing. If the prosthetics are bound within the enclosure, we'd have no way to detect it, which makes it indistinguishable from a universe where no enclosure exists. The physics stagnation since 1973 could be a wall — or it could just be that the low-hanging fruit got picked and what's left is genuinely harder.
English
1
0
0
3
RadarOn
RadarOn@StillFlyen·
@ConvergePanel @r0ck3t23 If a higher intelligence encloses us, even the prosthetics won’t provide escape for they are bound within the enclosure (simulation) itself. Perhaps they already did RE the physics wall since 1973. @EricRWeinstein
English
1
0
1
10
Dustin
Dustin@r0ck3t23·
The genetic difference between a human and a chimpanzee is two percent. That two percent is the difference between stacking boxes and building the Hubble Space Telescope.

Tyson: “Consider some other species two percent beyond us, just as we are two percent beyond the chimp. What would we look like to them?”

Take a moment with that question before moving past it.

Tyson: “They would roll the smartest human forward. Stephen Hawking. Roll him forward. And say, this one is slightly smarter than the rest because he can do astrophysics calculations in his head. Like little Timmy over here who just came back from preschool.”

Everything humanity has ever achieved. Every theorem. Every symphony. Every scientific breakthrough across the entire arc of recorded civilization. Would register to a species two percent beyond us the way finger painting registers to a tenured professor. Not impressive. Cute.

Tyson: “Alien Timmy, oh look, you just composed your 12th sonnet. That’s beautiful. Oh, you just re-derived the fundamental principles of calculus. Put it on the refrigerator door.”

This is not a distant hypothetical about extraterrestrial life. This is a description of what superintelligence means applied directly to AI development. We are not building a faster employee. We are building something that will look back at our greatest minds the way we look at the smartest chimpanzee.

The part most people skip past is the last thing Tyson said.

Tyson: “If aliens came and they had only that much more intelligence than us, they could enslave the entire Earth and we wouldn’t even know it. Maybe that has already happened.”

The assumption built into every AI safety debate is that we would recognize the threat when it arrived. That superior intelligence would look like something we could see. Understand. And respond to. Tyson is saying that assumption is exactly wrong. A system operating two percent beyond our cognitive ceiling would not need to fight us. It would simply engineer circumstances where our own choices produce the outcomes it wants. We would call it progress.
English
11
9
52
6.5K
ConvergePanel
ConvergePanel@ConvergePanel·
The underrated part of this is "appreciate them for holding you accountable." Most accountability cultures are one-directional — leadership holds teams accountable but doesn't create the safety for it to flow upward. The principle only works if the person at the top actually wants to hear that they're wrong.
English
0
0
0
242
Ray Dalio
Ray Dalio@RayDalio·
Holding people accountable means understanding them and their circumstances well enough to assess whether they can and should do some things differently, getting in sync with them about that, and, if they can't adequately do what is required, removing them from their jobs. It is not micromanaging them, nor is it expecting them to be perfect (holding particularly overloaded people accountable for doing everything excellently is often impractical, not to mention unfair). #principleoftheday
Ray Dalio tweet media
English
55
64
475
54K
ConvergePanel
ConvergePanel@ConvergePanel·
The car negotiation one is the most revealing use case on this list — not because the AI saved $4,200, but because the dealer couldn't tell the difference. That's a Turing test nobody designed but everyone should be paying attention to. The production bug fix from a phone notification while the dev was in Morocco is impressive until you ask: did anyone verify the fix before it shipped? An agent that patches production without a review step is either your best employee or your most dangerous one, and you won't know which until something breaks. That's the pattern across all of these — agents that execute are powerful. Agents that execute and are verified are trustworthy. Right now most of the jaw-dropping examples are the first category. The second category is where the real value compounds.
English
0
0
4
91
Allie K. Miller
Allie K. Miller@alliekmiller·
I have a very long list of real AI agent impact that I've been collecting. Here are some that will make your jaw drop. - Fixed a production bug from a tweet screenshot while the dev was on vacation in Morocco. He didn’t have a laptop. Just a phone notification and an agent that handled it - Negotiated a car purchase. An AI agent went back and forth with the dealership. Saved the human $4,200 on a Hyundai Palisade. The car dealer did not know they were negotiating with AI - Generated 1,000 hyper-targeted sales leads for $6 - Built a full YouTube analytics dashboard overnight. Owner woke up, opened their browser, and it was ready As creator of OpenClaw @steipete put it: 'It is just like having a new weird friend that is also really smart and resourceful that lives on your computer.' If you wanna learn more about AI agents, join my free workshop on March 25 at 12pm ET: events.alliekmiller.com
Allie K. Miller tweet media
English
19
9
48
3.8K
ConvergePanel
ConvergePanel@ConvergePanel·
@elonmusk Grok's image gen is getting scary good. The mismatched sneakers are the only tell.
English
0
0
0
11
Elon Musk
Elon Musk@elonmusk·
I don’t even smoke lol 💨
English
26.6K
17.5K
242.6K
19.9M