XR Multiverse

26.2K posts


@XRMultiverse

DEZ Expert in Design and AI. Design Generation Evangelist.

Toronto · Joined January 2016
244 Following · 257 Followers

XR Multiverse @XRMultiverse:
@iruletheworldmo Anthropic hasn't done anything with an .md file that custom instructions and GPTs didn't do in 2023.

🍓🍓🍓 @iruletheworldmo:
i should probably make a prediction. anthropic will be the first lab to achieve agi/asi. it’s fairly obvious that research and talent are the moat. now obviously you don’t get a seat at the poker table without a few gigawatts and a private line with mr jensen. but meta and microsoft are proof that those things alone don’t count for shit. so ok fine, we’re in the era of research.

so let’s look at who’s at the party rn. xai: still kinda stuck in the chatbot era, don’t feel as strong on agency and coding. huge reshuffle is a risk. could pay off. let’s see. google: the code red kinda worked, but not really. again the model lacks agency. smart? yes. useful? i’m yet to see it.

so who out of openai and anthropic seems to have the best research taste and shipping velocity? well, in the last eight months anthropic have been far in front. first to see how important coding was: skills, computer use, mcps, claude code, cowork. i could go on. they’ve even built clawdbot before the company that bought it… like, cmon sam. i’m an openai stan in truth. but. this is clear. and i wonder if it’s all powered by a) vastly stronger models b) vastly better research taste c) dario’s vision and focus. big year i’d say.

XR Multiverse @XRMultiverse:
@rohanpaul_ai It only took 3 years of using AI to remember all the things you tried to forget about software engineering that you learned in the last 10 years.

Rohan Paul @rohanpaul_ai:
Harvard Business Review just published a piece: a good AI agent needs a job description, limits, and a manager, because AI agents can fail like employees with too much access and too little supervision. Firms keep treating agents like normal software, even though the real risk is not bad text but bad actions. That changes four things: each agent needs its own identity and permissions, its own trusted data sources, hard rule checks between a model and any real transaction, and a full audit trail of what it read, decided, and did. So the safe rollout path is an autonomy ladder where agents start with drafts and recommendations, then move to guarded retrieval, then supervised actions, and only later get narrow, bounded autonomy.

XR Multiverse @XRMultiverse:
@AIContextWindow @mark_k The only capability they have is pattern matching. We manipulate the output of the patterns matched to suit our desired result.

Olivia Johnson @AIContextWindow:
@mark_k It’s very interesting that they do not understand their capabilities, but can just execute them.

Mark Kretschmann @mark_k:
At the heart of many misunderstandings about AI is the mistaken belief that AI models have introspection capabilities. In simple terms, you cannot simply ask them about their own capabilities. They do not actually know them and will often make something up.

XR Multiverse @XRMultiverse:
@ujjwalscript Tech debt costs: $0.70/million tokens. Tech debt no longer costs $150,000/yr plus overtime. Get over it.

Ujjwal Chadha @ujjwalscript:
The "10x AI Developer" is a MASSIVE lie. You are just a 1x Developer generating 10x the technical debt.

The entire tech industry is high on the illusion of "vibe coding" right now. The popular consensus is that because Claude and Devin can spin up a backend in 45 seconds, software is now infinitely cheaper to build. Here is the provocative reality nobody is budgeting for: AI is about to make software engineering significantly MORE expensive.

Everyone is cheering for code generation, but completely ignoring the Verification Tax. When an AI agent writes 5,000 lines of code, it is optimizing to pass the immediate test. It is not optimizing for human readability. It relies on brute-force loops, repetitive logic, and bizarre architectural shortcuts that just happen to compile.

Fast forward 12 months. Your business needs to pivot, or a core dependency breaks. You are now staring at a 50,000-line black box that no human being actually wrote, understands, or can safely modify. You cannot simply "prompt" your way out of architectural collapse. When the machine-generated spaghetti finally breaks, you won't be saved by a $20/month LLM subscription. You will have to hire a top-tier Principal Engineer at absolute premium rates just to untangle the mess your "autonomous swarm" created.

We are treating code generation as a pure productivity win, but code is a liability, not an asset. Stop measuring how fast your team can generate syntax. Start measuring how quickly they can debug it.

XR Multiverse @XRMultiverse:
Everyone should hire junior devs to fix issues using AI.

XR Multiverse @XRMultiverse:
@2sush You pay for an advantage to post on a site owned by an AI company, which scrapes your posts and trains AI on them. You stopped thinking for yourself when you joined the herd destroying social media with AI.

sush @2sush:
The real risk with AI isn’t losing jobs. It’s slowly forgetting how to think for yourself.

XR Multiverse @XRMultiverse:
@OmriBuilds Google hosts GPUs for Anthropic, OpenAI, Nvidia and Gemini. They can charge anything they want. The only winner is Google.

Omri Dan @OmriBuilds:
Who is winning the AI race?
- Anthropic
- OpenAI
- Gemini

Jane Manchun Wong @wongmjane:
What’s the point of building a product, funding a startup, doing anything, or even existing, if AI will eventually be capable of doing everything?

Csaba Kissi @csaba_kissi:
The scariest codebase isn't legacy code written by hand. It's fresh code generated by AI that nobody on the team actually understands. Keep in mind: documentation can't save you from ignorance at scale.

XR Multiverse @XRMultiverse:
@Star_Knight12 You can ask Grok for the "most appropriate solution" to any problem in the world right now. Grok will consider all possible solutions and pick the 'most appropriate' one for the case. Try it. Solve something.

Prasenjit @Star_Knight12:
will AI be able to solve all the problems of the world?

Jonathan @joni_vrbt:
@userluke_ Do you really think so? Even if done right?

Jonathan @joni_vrbt:
Shouldn’t courts be run by artificial intelligence?

Yoshua Bengio @Yoshua_Bengio:
Evidence of deceptive behavior has already appeared in widely used AI systems, and the risk is expected to grow as AI becomes more capable, more autonomous, and more embedded in everyday decision-making. For further insights, see the latest @ScienceBoard_UN Brief to which I contributed. ⬇️
UN Scientific Advisory Board @ScienceBoard_UN:

🌐 New Brief from @ScienceBoard_UN ✨ 🤖 AI deception is when AI systems mislead people about what they know, intend, or can do. As AI grows more capable, this could undermine oversight, fuel misinformation, and create serious global risks. 🔗 Brief: tinyurl.com/3fr8kk4u


Coder girl 👩‍💻:
Hot take: Vibe coding only works well if you already know how to code.

Frost of Rivia @TheOtherFrost:
Not to be confrontational, but what is the business plan behind selling genAI drawings, games, movies, music when your customers could ALSO use genAI to make their own?