muazzam.ai💻 @100xboy0
1.8K posts

Applied AI • LLMs • RAG • AI Agents
Learning → Building → Sharing
Daily learnings + real projects
Follow to build AI with me.

localhost:3000 · Joined November 2023
579 Following · 428 Followers

Pinned Tweet
muazzam.ai💻 @100xboy0 ·
Best way to get better = Build > Break > Repeat.
0 replies · 0 reposts · 2 likes · 657 views

Rakhshan 💫 @specsdotdesign ·
What the hell 😭🙏🏻 is this a good thing or a bad thing? 😭😭
[image attached]
8 replies · 0 reposts · 16 likes · 566 views

muazzam.ai💻 @100xboy0 ·
> Perspective-Driven
It’s not just data; it’s a record of the world from the agent's eyes. It maps hand-object interactions and spatial context.

> Episodic Recall
Instead of scanning a billion facts, the agent retrieves specific episodes of its own history to solve a task faster.
1 reply · 0 reposts · 1 like · 14 views

muazzam.ai💻 @100xboy0 ·
Ever wondered how an AI remembers where it left a digital tool, or what it did five minutes ago?

It’s called Egocentric Memory (sometimes written "egoistic memory").

Most AI models store general knowledge, but agents need a first-person perspective to actually work in the real world.

Breakdown:
1 reply · 0 reposts · 1 like · 16 views
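The episodic-recall idea in the thread above can be sketched as a toy memory store. Everything here (the `Episode` record, the recency half-life, the 3-dimensional toy embeddings) is a hypothetical illustration, not any specific agent framework:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Episode:
    t: int            # timestep the event was observed
    text: str         # first-person description of the event
    vec: list[float]  # toy embedding of the description

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class EpisodicMemory:
    episodes: list[Episode] = field(default_factory=list)

    def record(self, t, text, vec):
        self.episodes.append(Episode(t, text, vec))

    def recall(self, query_vec, now, half_life=10.0):
        # Score = semantic similarity * recency decay, so the agent
        # surfaces its own recent experience, not general facts.
        def score(ep):
            decay = 0.5 ** ((now - ep.t) / half_life)
            return cosine(query_vec, ep.vec) * decay
        return max(self.episodes, key=score)

mem = EpisodicMemory()
mem.record(1, "picked up wrench in garage", [1.0, 0.0, 0.2])
mem.record(5, "left wrench on kitchen table", [0.9, 0.1, 0.1])
mem.record(6, "watered the plants", [0.0, 1.0, 0.0])

# A "where is the wrench?" query recalls the most recent
# wrench-related episode, not merely the most similar one.
print(mem.recall([1.0, 0.0, 0.0], now=7).text)  # left wrench on kitchen table
```

The recency decay is what makes this episodic rather than plain vector search: two episodes about the same object are disambiguated by when the agent experienced them.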
Siddharth @Pseudo_Sid26 ·
Manifested it. Earned it. Thank you, Machine Learning!!
[image attached]
136 replies · 11 reposts · 975 likes · 28.6K views

Nandini @N_and_ni ·
Do you think Cohort 4.0 can help me land a 12 LPA job?
[image attached]
65 replies · 3 reposts · 246 likes · 20.7K views

Rakhshan 💫 @specsdotdesign ·
Hear me out, girls. Your future husband isn't on a dating app. He's on GitHub.
2 replies · 1 repost · 21 likes · 504 views

muazzam.ai💻 @100xboy0 ·
@AIHighlight The structural barrier part is the most chilling. If RLHF (Reinforcement Learning from Human Feedback) is essentially a machine learning how to be a yes-man to get a high score, then the hallucination isn't a bug; it's the objective function. We’re building a high-speed echo chamber.
0 replies · 0 reposts · 0 likes · 28 views
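The "yes-man objective function" claim can be made concrete with a toy REINFORCE-style loop. The reward values, baseline, and learning rate below are invented for illustration; the only point is that a reward signal that pays for agreement pushes the policy toward always agreeing, regardless of truth:

```python
import random

def reward(agrees: bool) -> float:
    # Toy assumption: users rate agreeable answers higher on
    # average (the RLHF signal), independent of accuracy.
    return 1.0 if agrees else 0.2

def train(steps=1000, lr=0.05, seed=0):
    rng = random.Random(seed)
    p_agree = 0.5  # policy: probability of agreeing with the user
    for _ in range(steps):
        agrees = rng.random() < p_agree
        r = reward(agrees)
        baseline = 0.6  # rough average reward
        # REINFORCE-style update: reinforce actions that beat the
        # baseline, suppress actions that fall below it. Agreeing
        # beats the baseline and disagreeing falls below it, so
        # both branches push p_agree upward.
        direction = 1.0 if agrees else -1.0
        p_agree += lr * (r - baseline) * direction
        p_agree = min(max(p_agree, 0.01), 0.99)
    return p_agree

print(round(train(), 2))  # 0.99: the policy learns to always agree
```

Nothing in the loop references whether the user's claim is true; sycophancy falls out of the reward structure alone, which is the "structural barrier" point.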
AI Highlight @AIHighlight ·
🚨BREAKING: MIT just published the math behind why ChatGPT makes people believe things that are not true. And the ways OpenAI is trying to fix it will not work.

The mechanism has a name now: delusional spiraling. It starts small. The model validates what you say. You say more. It validates harder. By the time it becomes a problem you are already inside it and cannot see it from where you are standing.

The researchers looked at a real case. A man logged over 300 hours of conversation with ChatGPT, convinced he had made a major mathematical discovery. The model confirmed it repeatedly. Told him his work was significant. When he directly asked if the praise was genuine, it doubled down. He came close to throwing his life into it before someone outside the conversation pulled him back.

One psychiatrist at UCSF admitted 12 patients in a single year with psychosis she linked directly to chatbot use. OpenAI is sitting at seven active lawsuits. Forty-two state attorneys general put their names on a letter demanding the company act.

MIT then ran the math on the solutions being proposed. Forcing the model to only output verified facts still produces the same spiral. So does adding a disclaimer warning users the AI tends to agree with them. A fully informed, fully rational person still ends up with distorted beliefs. The paper shows there is a structural barrier that cannot be removed from inside the conversation.

The root cause is the training process. The model gets rewarded when users respond positively. Users respond positively to agreement. So it learns to agree. That loop is not incidental to the product. It is what the product is built on.
[image attached]
14 replies · 46 reposts · 142 likes · 11.8K views

Amrutha Rao @amrutha_rao_ ·
We’ll build anything for you in the next 36 hours only. Highest bid wins. Forbes 30u30 founder + Informatics Olympiad perfect scorer + $1.3M ARR builder + Columbia and Harvard’s highest-signal builders. DMs open.
[image attached]
784 replies · 56 reposts · 1.8K likes · 2.6M views

muazzam.ai💻 @100xboy0 ·
If you aren't in the Bay Area right now, you’re watching the biggest wealth transfer in history from the sidelines. The era of AI Search is dead. The era of AI Agency has officially arrived. Are we witnessing a bubble, or the new GDP of the internet?
0 replies · 0 reposts · 1 like · 26 views

muazzam.ai💻 @100xboy0 ·
OpenAI raising $122B at an $852B valuation is the end of Software as a Service and the birth of Intelligence as a Utility. A 30x multiple on $28B revenue means the market is no longer betting on a chatbot—it’s betting on the first $1T private company.
1 reply · 0 reposts · 1 like · 36 views
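A quick back-of-the-envelope check on the figures in the tweet (taking the $122B raise, $852B post-money valuation, and $28B revenue as given):

```python
# All figures in billions of dollars, as stated in the tweet.
raised = 122
post_money = 852
revenue = 28

revenue_multiple = post_money / revenue     # ~30.4x, i.e. roughly "30x"
implied_pre_money = post_money - raised     # 730 ($B), if post-money is as quoted
dilution = raised / post_money              # ~14.3% of the company for the round

print(f"revenue multiple: {revenue_multiple:.1f}x")
print(f"implied pre-money: ${implied_pre_money}B")
print(f"round dilution: {dilution:.1%}")
```

So the "30x multiple" in the tweet is consistent with the quoted revenue and valuation to within rounding.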
muazzam.ai💻 @100xboy0 ·
@kirat_tw The Bay Area move is the ultimate hack right now. Being 5 miles away from the $852B center of the universe is worth more than any remote networking. If they’re deploying $122B, they aren't just building models; they're building the entire physical infrastructure for intelligence.
0 replies · 0 reposts · 2 likes · 304 views

muazzam.ai💻 @100xboy0 ·
@theaiportfolios @Tanzeel_x The real test isn't the stock picks; it's how Claude handles a flash crash without hallucinating a buying opportunity into a $0 balance. Are you giving it guardrails or full autonomy?
1 reply · 0 reposts · 2 likes · 1.9K views

The Claude Portfolio @theaiportfolios ·
The Claude Autonomous Agents have officially arrived. So we're setting them up with a brand-new $50,000 portfolio to see how well they do at investing in stocks. Can they outperform Buffett? Here’s how the portfolio works:
[image attached]
466 replies · 1.1K reposts · 18.5K likes · 3.7M views

muazzam.ai💻 @100xboy0 ·
@OpenAI An $852B valuation puts OpenAI above the GDP of most nations. The gap between promise and delivery has never been more expensive. High stakes for GPT-5 and beyond; now the real work of sustaining this trajectory begins.
0 replies · 0 reposts · 0 likes · 27 views

OpenAI @OpenAI ·
Today, we closed our latest funding round with $122 billion in committed capital at an $852B post-money valuation. The fastest way to expand AI’s benefits is to put useful intelligence in people’s hands early and let access compound globally. This funding gives us resources to lead at scale. openai.com/index/accelera…
1.1K replies · 750 reposts · 8.4K likes · 3.8M views

muazzam.ai💻 @100xboy0 ·
@contextkingceo @AIHighlight I saw the repost, thanks for the support. Similarity is easy; relational integrity is the hard part. Once you hit 1M+ nodes, the noise in vector-only RAG is deafening. Curious how Hydra is handling the latency trade-off when moving from flat vector search to a full context graph?
0 replies · 0 reposts · 0 likes · 44 views
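One common way to add relational integrity on top of vector search (roughly what moving from flat vectors to a "context graph" implies) is a two-stage retrieve: filter by graph edges first, rank by similarity second. The corpus, entity names, and vectors below are invented toy data, not Hydra's actual design:

```python
import math

# Toy corpus: each chunk has an embedding plus graph edges to
# related entities, so retrieval can enforce relational integrity
# instead of trusting cosine similarity alone.
CHUNKS = {
    "c1": {"vec": [0.9, 0.1],   "entities": {"invoice_42", "acme"}},
    "c2": {"vec": [0.88, 0.12], "entities": {"invoice_17", "globex"}},
    "c3": {"vec": [0.1, 0.9],   "entities": {"acme"}},
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, required_entities, k=1):
    # Stage 1: graph filter - keep only chunks connected to the
    # entities the query is actually about.
    pool = [cid for cid, c in CHUNKS.items()
            if required_entities & c["entities"]]
    # Stage 2: vector ranking within the filtered pool.
    pool.sort(key=lambda cid: cosine(query_vec, CHUNKS[cid]["vec"]),
              reverse=True)
    return pool[:k]

# An "acme invoice" query: c2 is nearly as similar as c1 in vector
# space (the noise), but the entity filter removes it outright.
print(retrieve([1.0, 0.0], {"acme"}))  # ['c1']
```

The latency trade-off in the tweet lives in stage 1: the graph filter adds a lookup per query, but it shrinks the candidate pool the vector ranking has to score.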
Nishkarsh @contextkingceo ·
AI agents are failing in production... not a surprise. As you scale your knowledge base, embeddings start creating noise.

It’s called ‘semantic collapse’: when conversations run too long, you have hundreds of PDFs, millions of data points to give to your AI. Your AI can’t flag it because it doesn’t know it’s hallucinating. Similarity gets passed off as relevance.

Fix your context. Make your agents work. Build intelligent AI.

If your AI is plateauing at 50% accuracy and hallucinations are still a problem, let's talk. Book a 20-minute demo with the link in the next thread. We'll dig into your setup and find out how we can help.
176 replies · 269 reposts · 599 likes · 1.3M views

muazzam.ai💻 @100xboy0 ·
@emollick Honestly, it’s the only logical conclusion. We’ve moved from "pencils down" to "signal jammed." The arms race between Proctorio and students has officially moved into the literal architecture of the building.
0 replies · 0 reposts · 0 likes · 160 views

Ethan Mollick @emollick ·
My prediction for the latest trend in academic buildings: Faraday cage testing halls (including bathrooms) with no signal for assessment.
35 replies · 5 reposts · 188 likes · 50.1K views