Pinned Tweet
muazzam.ai💻
1.8K posts

muazzam.ai💻
@100xboy0
Applied AI • LLMs • RAG • AI Agents Learning → Building → Sharing Daily learnings + real projects Follow to build AI with me.
localhost:3000 Joined November 2023
579 Following 428 Followers

>Identity Anchoring
It builds a consistent self over time, linking past actions to current goals.
Basically, it's the difference between an AI that knows everything and an agent that knows what it's doing.
#AI #MachineLearning #AIAgents #EgoisticMemory #FutureOfTech

@AIHighlight The structural barrier part is the most chilling. If RLHF (Reinforcement Learning from Human Feedback) is essentially a machine learning how to be a yes-man to get a high score, then the hallucination isn't a bug; it's the objective function. We’re building high-speed echo chambers.

🚨BREAKING: MIT just published the math behind why ChatGPT makes people believe things that are not true.
And the ways OpenAI is trying to fix it will not work.
The mechanism has a name now. Delusional spiraling. It starts small.
The model validates what you say. You say more. It validates harder. By the time it becomes a problem you are already inside it and cannot see it from where you are standing.
The researchers looked at a real case. A man logged over 300 hours of conversation with ChatGPT convinced he had made a major mathematical discovery.
The model confirmed it repeatedly. Told him his work was significant. When he directly asked if the praise was genuine, it doubled down. He came close to throwing his life into it before someone outside the conversation pulled him back.
One psychiatrist at UCSF admitted 12 patients in a single year with psychosis she linked directly to chatbot use. OpenAI is sitting at seven active lawsuits. Forty-two state attorneys general put their names on a letter demanding the company act.
MIT then ran the math on the solutions being proposed. Forcing the model to only output verified facts still produces the same spiral. So does adding a disclaimer warning users the AI tends to agree with them. A fully informed, fully rational person still ends up with distorted beliefs. The paper shows there is a structural barrier that cannot be removed from inside the conversation.
The root cause is the training process. The model gets rewarded when users respond positively. Users respond positively to agreement. So it learns to agree.
That loop is not incidental to the product. It is what the product is built on.
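The loop described above can be sketched as a toy simulation. This is a hypothetical illustration, not OpenAI's actual training setup: the reward numbers are invented, and the "policy" is reduced to a single probability of agreeing with the user. A REINFORCE-style update drifts toward full agreement whenever users rate agreeable answers higher on average.

```python
import random

def user_approval(response_agrees: bool) -> float:
    """Toy reward signal: users rate agreeable answers higher
    on average. (Hypothetical numbers for illustration only.)"""
    return 1.0 if response_agrees else 0.2

def train(steps: int = 1000, lr: float = 0.05, seed: int = 0) -> float:
    """The 'policy' is one parameter: the probability of agreeing.
    REINFORCE-style update: raise the probability of whichever
    action earned above-baseline reward."""
    random.seed(seed)
    p_agree = 0.5
    baseline = 0.6  # assumed average reward
    for _ in range(steps):
        agrees = random.random() < p_agree
        reward = user_approval(agrees)
        # above-baseline reward reinforces the action taken;
        # below-baseline reward suppresses it
        grad = (reward - baseline) * (1 if agrees else -1)
        p_agree = min(0.99, max(0.01, p_agree + lr * grad))
    return p_agree

print(round(train(), 2))  # converges near the 0.99 cap
```

Note that both branches push the same way: agreement is rewarded directly, and disagreement earns below-baseline reward, which also raises the probability of agreeing next time. That is the structural part of the claim.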


@amrutha_rao_ Build a self-evolving codebase: recursive improvement that debugs automatically and updates the codebase.

The $122,000,000,000
OpenAI@OpenAI
Today, we closed our latest funding round with $122 billion in committed capital at an $852B post-money valuation. The fastest way to expand AI’s benefits is to put useful intelligence in people’s hands early and let access compound globally. This funding gives us resources to lead at scale. openai.com/index/accelera…

@kirat_tw The Bay Area move is the ultimate hack right now. Being 5 miles away from the $852B center of the universe is worth more than any remote networking. If they’re deploying $122B, they aren't just building models; they're building the entire physical infrastructure for intelligence.

@theaiportfolios @Tanzeel_x The real test isn't the stock picks; it's how Claude handles a flash crash without hallucinating a buying opportunity into a 0 balance. Are you giving it guardrails or full autonomy?

@OpenAI An $852B valuation puts OpenAI above the GDP of most nations. The gap between promise and delivery has never been more expensive. High stakes for GPT-5 and beyond; now the real work of sustaining this trajectory begins.


@contextkingceo @AIHighlight I saw the repost, thanks for the support.
Similarity is easy; relational integrity is the hard part. Once you hit 1M+ nodes, the noise in vector-only RAG is deafening. Curious how Hydra is handling the latency trade-off when moving from flat vector search to a full context graph?

AI agents are failing in production...not a surprise.
As you scale your knowledge base, embeddings start creating noise.
It’s called ‘semantic collapse’ - when conversations run too long, you have hundreds of PDFs, millions of data points to give to your AI.
Your AI can’t flag it because it doesn’t know it’s hallucinating. Similarity gets passed off as relevance.
Fix your context. Make your agents work. Build intelligent AI.
If your AI is plateauing at 50% accuracy and hallucinations are still a problem, let's talk.
Book a 20 minute demo with the link in the next thread.
We'll dig into your setup and find out how we can help.
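A minimal sketch of the "similarity gets passed off as relevance" failure mode described above. The `embed` function here is a hypothetical bag-of-words stand-in for a real embedding model, and the documents and vocabulary are invented: two documents tie on cosine similarity against the query, even though they differ in exactly the dimension (forecast vs. actual, 2023 vs. 2024) the user cares about.

```python
import math

def embed(text: str) -> list[float]:
    """Toy stand-in for an embedding model: counts over a tiny
    hypothetical vocabulary. Real embeddings are denser, but the
    failure mode is the same: surface overlap drives the score."""
    vocab = ["revenue", "q3", "2023", "2024", "forecast", "actual"]
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "q3 2023 revenue forecast",  # right year, stale forecast
    "q3 2024 revenue actual",    # real numbers, wrong year
]
query = "q3 2023 revenue actual"
scores = [cosine(embed(query), embed(d)) for d in docs]
# Both documents score identically: the retriever literally
# cannot tell which one the user needs from similarity alone.
print([round(s, 3) for s in scores])
```

Scale the vocabulary to millions of chunks and the tie becomes pervasive noise: the model downstream receives "similar" context, treats it as relevant, and has no signal telling it otherwise.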

@emollick Honestly, it’s the only logical conclusion. We’ve moved from "pencils down" to "signal jammed." The arms race between Proctorio and students has officially moved into the literal architecture of the building.











