XR Multiverse
26.2K posts

XR Multiverse
@XRMultiverse
DEZ Expert in Design and AI. Design Generation Evangelist.
Toronto · Joined January 2016
244 Following · 257 Followers

@iruletheworldmo Anthropic hasn't done anything with an .md file that custom instructions and GPTs didn't do in 2023.

i should probably make a prediction.
anthropic will be the first lab to achieve agi/asi
it’s fairly obvious that research and talent are the moat.
now obviously you don’t get a seat at the poker table without a few gw’s and a private line with mr jensen.
but meta and microsoft are proof that those things alone don’t count for shit.
so ok fine, we’re in the era of research.
so let’s look at who’s at the party rn.
xai: still kinda stuck in the chatbot era, doesn't feel as strong on agency and coding. huge reshuffle is a risk. could pay off. let's see.
google: the code red kinda worked, but not really. again, the model lacks agency. smart? yes. useful? i'm yet to see it.
so who out of openai and claude seem to have the best research taste and shipping velocity?
well, in the last eight months anthropic have been far in front. first to see how important coding was: skills, computer use, mcps, claude code, cowork. i could go on.
they’ve even built clawdbot before the company that bought it…like, cmon sam.
i’m an openai stan in truth. but.
this is clear.
and i wonder if it’s all powered by
a) vastly stronger models
b) vastly better research taste
c) dario’s vision and focus
big year i’d say.

@rohanpaul_ai It only took 3 years of using AI to remember all the things you tried to forget about software engineering that you learned in the last 10 years.

Harvard Business Review just published a piece.
A good AI agent needs a job description, limits, and a manager, because AI agents can fail like employees with too much access and too little supervision.
Firms keep treating agents like normal software, even though the real risk is not bad text but bad actions.
That changes 4 things: each agent needs its own identity and permissions, its own trusted data sources, hard rule checks between a model and any real transaction, and a full audit trail of what it read, decided, and did.
So the safe rollout path is an autonomy ladder where agents start with drafts and recommendations, then move to guarded retrieval, then supervised actions, and only later get narrow bounded autonomy.
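The four requirements and the autonomy ladder above can be sketched as a thin guardrail layer between the model and the world. Everything here is illustrative, not from the HBR piece: the class names, the level ordering, and the `rule_check` hook are assumptions about how such a ladder might be wired up.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Callable

class AutonomyLevel(IntEnum):
    DRAFT_ONLY = 0         # agent may only propose text
    GUARDED_RETRIEVAL = 1  # agent may read approved data sources
    SUPERVISED_ACTION = 2  # real actions require human sign-off
    BOUNDED_AUTONOMY = 3   # narrow actions allowed without sign-off

@dataclass
class Agent:
    name: str                  # the agent's own identity
    level: AutonomyLevel       # current rung on the autonomy ladder
    allowed_sources: set[str]  # its trusted data sources
    audit_log: list[str] = field(default_factory=list)  # what it read, decided, did

    def read(self, source: str) -> bool:
        ok = (self.level >= AutonomyLevel.GUARDED_RETRIEVAL
              and source in self.allowed_sources)
        self.audit_log.append(f"READ {source}: {'ok' if ok else 'denied'}")
        return ok

    def act(self, action: str, approved_by_human: bool,
            rule_check: Callable[[str], bool]) -> bool:
        # hard rule check sits between the model and any real transaction
        if not rule_check(action):
            self.audit_log.append(f"ACT {action}: blocked by rule check")
            return False
        if self.level >= AutonomyLevel.BOUNDED_AUTONOMY or (
            self.level >= AutonomyLevel.SUPERVISED_ACTION and approved_by_human
        ):
            self.audit_log.append(f"ACT {action}: executed")
            return True
        self.audit_log.append(f"ACT {action}: denied (insufficient autonomy)")
        return False
```

Promoting an agent up the ladder is then just raising `level` once its audit log has earned trust at the current rung, e.g. an `Agent("invoice-bot", AutonomyLevel.SUPERVISED_ACTION, {"erp"})` can read the ERP and pay invoices with sign-off, but nothing else.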


I don't get how people are planning to sidestep the very basic problem that if you don't have junior hires right now, you won't have experienced people 5 or 10 years later.
CG@cgtwts
Anthropic CEO: “50% of all entry-level Lawyers, Consultants, and Finance Professionals will be completely wiped out within the next 1–5 years." grad students and junior hires are cooked.

@AIContextWindow @mark_k The only capability they have is pattern matching.
We manipulate the output of the patterns matched to suit our desired result.

@mark_k It’s very interesting that they do not understand their capabilities, but can just execute them.

@ujjwalscript Everyone should hire junior devs to fix issues using AI.

@ujjwalscript Tech debt costs: $0.70/million tokens.
Tech debt no longer costs $150,000/yr plus overtime.
Get over it.

The "10x AI Developer" is a MASSIVE lie.
You are just a 1x Developer generating 10x the technical debt.
The entire tech industry is high on the illusion of "vibe coding" right now. The popular consensus is that because Claude and Devin can spin up a backend in 45 seconds, software is now infinitely cheaper to build.
Here is the provocative reality nobody is budgeting for: AI is about to make software engineering significantly MORE expensive.
Everyone is cheering for code generation, but completely ignoring the Verification Tax.
When an AI agent writes 5,000 lines of code, it is optimizing to pass the immediate test. It is not optimizing for human readability. It relies on brute-force loops, repetitive logic, and bizarre architectural shortcuts that just happen to compile.
Fast forward 12 months. Your business needs to pivot, or a core dependency breaks.
You are now staring at a 50,000-line black box that no human being actually wrote, understands, or can safely modify. You cannot simply "prompt" your way out of architectural collapse.
When the machine-generated spaghetti finally breaks, you won't be saved by a $20/month LLM subscription. You will have to hire a top-tier Principal Engineer at absolute premium rates just to untangle the mess your "autonomous swarm" created.
We are treating code generation as a pure productivity win, but code is a liability, not an asset.
Stop measuring how fast your team can generate syntax. Start measuring how quickly they can debug it.

@2sush You pay for an advantage to post on a site owned by an AI company that scrapes your posts and trains AI with them. You stopped thinking for yourself when you joined the herd to destroy social media with AI.

@OmriBuilds Google hosts GPUs for Anthropic, OpenAI, Nvidia and Gemini.
They can charge anything they want.
The only winner is Google.

@csaba_kissi It sounds like the team doesn't know how to use AI.

@Star_Knight12 You can ask Grok for the "most appropriate solution" to any problem in the world right now.
Grok will consider all possible solutions and pick the 'most appropriate' one for the case.
Try it. Solve something.

@joni_vrbt @userluke_ Judges need to be wise and empathetic.
AI is neither. It just sounds like it.

@Yoshua_Bengio @ScienceBoard_UN If you think that's bad you should see the people it was trained on.

Evidence of deceptive behavior has already appeared in widely used AI systems, and the risk is expected to grow as AI becomes more capable, more autonomous, and more embedded in everyday decision-making. For further insights, see the latest @ScienceBoard_UN Brief to which I contributed. ⬇️
UN Scientific Advisory Board@ScienceBoard_UN
🌐 New Brief from @ScienceBoard_UN ✨ 🤖 AI deception is when AI systems mislead people about what they know, intend, or can do. As AI grows more capable, this could undermine oversight, fuel misinformation, and create serious global risks. 🔗 Brief: tinyurl.com/3fr8kk4u