AJAY

3.6K posts

@jaykrishAGI

I share AI insights, news, and the latest trends and tools, helping you stay ahead in just 5 minutes a week | @IITGuwahati @Covcampus | @UNDP Volunteer (Climate)

Coventry, England · Joined August 2023
993 Following · 953 Followers
Pinned Tweet
AJAY
AJAY@jaykrishAGI·
Apple Vision Pro users can now enable spatial Personas in SharePlay-enabled apps, allowing for collaboration, gaming, and media consumption with other users in a virtual space. More details 👇🏻
AJAY tweet media
English
6
7
30
18.5K
Rohit Negi
Rohit Negi@rohit_negi9·
Discrete Mathematics is the most underrated subject in engineering. I am slowly unlocking its power.
English
14
24
346
8.7K
AJAY retweeted
Ricardo
Ricardo@Ric_RTP·
The man who INVENTED modern AI just made a billion-dollar bet that ChatGPT, Claude, and every AI company on earth is building the wrong technology.

Yann LeCun won the Turing Award in 2018 for creating the neural networks that made AI possible. He spent a decade running AI research at Meta and oversaw the creation of Llama and PyTorch, the tools that half the AI industry runs on. Then he quit. And raised $1.03 billion in a seed round, the LARGEST seed round in European history, at a $3.5 billion valuation before generating a single dollar of revenue. Bezos wrote the check. So did Nvidia, Samsung, Toyota, Temasek, Eric Schmidt, Mark Cuban, and Tim Berners-Lee (the guy who invented the World Wide Web).

His new company is called AMI Labs. And it's built on one thesis: every AI company spending billions on large language models is wasting its money.

ChatGPT, Claude, Gemini, Grok: they all work the same way. They predict the next word in a sequence. See "the cat sat on the" and predict "mat." Scale that to trillions of words and you get something that sounds intelligent. But LeCun says it doesn't UNDERSTAND anything. It can't reason. It can't plan. It can't predict what happens when you push a glass off a table. A two-year-old can do that. GPT-5 cannot. That's why AI hallucinates: it doesn't have a model of how the world actually works. It just predicts words.

His solution? Something called JEPA. Instead of predicting words, it learns how the PHYSICAL WORLD works: abstract representations of reality. Not language but physics. Think about what that means. Current AI can write your emails. LeCun's AI could design a car, run a factory, operate a robot, or diagnose a patient without hallucinating and killing someone. The CEO of AMI said it perfectly: "Factories, hospitals, and robots need AI that grasps reality. Predicting tokens doesn't cut it."

And here's what's really crazy to me: LeCun isn't some outsider throwing rocks. He literally built the foundations that ChatGPT runs on. He knows exactly how these systems work because he helped create them. And after watching the entire industry sprint in one direction for three years, he raised a billion dollars to run the OPPOSITE way. No product. No revenue. No timeline. Just pure research. He told investors it could take YEARS to produce anything commercial. But they funded it anyway in just four months.

Meanwhile OpenAI just raised $120 billion and still can't stop their models from making things up. Anthropic is building AI so dangerous they're afraid to release it. Google is burning billions trying to catch up. And the guy who started it all says they're all solving the wrong problem.

Two AI heavyweights raised $2 billion in three weeks betting AGAINST the entire LLM approach: LeCun at AMI and Fei-Fei Li at World Labs. The smartest people in AI are quietly building the exit from the technology everyone else is betting their future on. Either they're wrong and the trillion-dollar LLM industry keeps printing, or they're right and every AI company on earth just built on a foundation that's about to crack.
English
135
379
1.1K
105.7K
AJAY
AJAY@jaykrishAGI·
Part 2: Thoughts on the ethics of AI in the current scenario. Checking the Foundation Model Transparency Index (FMTI) is important; most Chinese AI companies are not publishing their model training dataset details, which means a lower transparency score. For DeepSeek, the FMTI might be lower. Ultimately an AI model needs a supply chain; everything needs to come together to deliver a product to the customer. If you look at the DeepSeek company, the upstream indicators are extremely weak. After they figured out fine-grained quantization and model distillation, the model apparently works much better than ChatGPT, and the downstream indicators they got are fantastic; they gained followers at an increasing rate within weeks. This is not really an information hazard; people have the right to know this, I mean the interoception of a system. I just examined this Claude bot; it isn't apparently safer than DeepSeek. Say you send a WhatsApp message; that is private data, and I don't want it centralized for an AI model to train on. Let AI take public data, not private. Federated learning is very efficacious here. And if there is a case where a system will take your private data, the solution is adding noise so your data is safe; that is differential privacy. Usually we say, "Connect with the signal and avoid noise," but here you add noise.
AJAY@jaykrishAGI

The government in Iran is using a puppet technique to keep our minds stationary through illusionism in the media. A puppet moves because someone else controls it with strings; now Iran is using these same tactics in perpetuity, like making historical figures speak, and more manipulation. With face-swap technology it has become very hard to identify what is real and what is not. Wav2Lip and FaceSwap are all GANs; they all have a generator and a discriminator, where the discriminator learns to spot the illusion while the generator learns to fool it, and together they produce ultra-realistic fake data. Currently I have noticed that even this SOTA model shows high accuracy for light-skinned males and much lower accuracy for dark-skinned females; that kind of gender bias is detrimental even with this LenYun model, as is machine bias. Most of this agentic AI will run on feedback loops, like a system that reacts to its own results: positive feedback loops are reinforcing, but negative feedback loops are stabilizing. I am just applying confusion matrices here: is there a chance this embodied AI shows a true negative (correctly stays out of a non-emergency), a false positive (attacks an innocent person by mistake), or a false negative (misses a real patient who needs help)? A true negative would be the cheapest AI-embodied robot you might see in a local place, but an FP would not be tolerable, and an FN is dangerous here. I recommend using a fairness mitigation technique at this point; that's the only solution. Part 1.

English
0
0
1
9
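The "add noise so your data is safe" idea in the thread above can be sketched in a few lines of Python. Everything here, the counting query, the epsilon value, and the function names, is an illustrative assumption, not anything stated in the thread; it is a minimal version of the Laplace mechanism from differential privacy.

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, sensitivity, epsilon, rng):
    """Laplace mechanism: release count + Laplace(sensitivity/epsilon) noise.

    Smaller epsilon means more noise and stronger privacy. A counting
    query has sensitivity 1: one person changes the count by at most 1.
    """
    return true_count + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(0)
noisy = dp_count(true_count=42, sensitivity=1.0, epsilon=0.5, rng=rng)
```

An observer who sees only the noisy count cannot pin down any individual's contribution, at the cost of some accuracy: noise is added on purpose, which is exactly the inversion of "connect with the signal and avoid noise."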
AJAY retweeted
Maziyar PANAHI
Maziyar PANAHI@MaziyarPanahi·
Wow! This is amazing! Segmented every car locally in real time with Meta's SAM3 converted to MLX. Just on-device (M2 laptop) vision getting absurdly good. Local AI is moving faster than most people realize! What other models should we test? What kind of videos?
English
39
64
615
80.4K
AJAY
AJAY@jaykrishAGI·
@rohanpaul_ai So he's indirectly saying that if the leader's IQ strays too far from the followers', it's a problem; a value far from the standard isn't inherently good.
English
0
0
0
2
Rohan Paul
Rohan Paul@rohanpaul_ai·
Marc Andreessen says extreme intelligence does not necessarily make better leaders. "If the leader is more than 1 standard deviation of IQ away from the followers, it's a real problem."
English
30
8
115
11.5K
AJAY
AJAY@jaykrishAGI·
@cooltechtipz this might be the best annealing process lol
English
0
0
0
1.5K
Learn Something
Learn Something@cooltechtipz·
Water can cool nearby surfaces, but it won’t stop the flame unless the gas supply is shut off.
English
124
284
3.3K
782.6K
AJAY
AJAY@jaykrishAGI·
@rohanpaul_ai that would be the best non-intrusive approach: no noise, no jitter. Seeing this post lol
English
0
0
0
7
Rohan Paul
Rohan Paul@rohanpaul_ai·
Andrej Karpathy: "the industry just has to reconfigure in so many ways, like the customer is not the human anymore, it's agents who are acting on behalf of humans. And this refactoring will be probably substantial in the space."
English
26
47
396
32.9K
AJAY
AJAY@jaykrishAGI·
I recommend using a Gabor filter; you can detect edges and lines in images with it. It's based on combining a sine wave with a Gaussian (a localized region). Some slight bends in a line look straight to your brain; that is perceptual straightening. I just examined the time to reach velocity (TTRV) of it; it's really instantaneous speed (velocity). Say on an exam paper teachers have to choose the best answer in English literature, and the teachers learn a new intuition from comparing answers against each other; group relative policy optimization is powerful for the same reason. If there is only one bright person in a class, policy optimization is efficacious because we only needed that one answer. And the headmaster will give the teachers a recommendation: don't be too sure; keep exploring, like entropy regularization.
English
0
0
1
7
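The "sine wave combined with a Gaussian" construction from the tweet above can be written out directly. This is a minimal sketch of the real part of a Gabor kernel; the kernel size, wavelength, and sigma values are arbitrary illustrative choices, not anything from the tweet.

```python
import math

def gabor_kernel(size, wavelength, sigma, theta=0.0):
    """Build a (size x size) Gabor kernel: a sinusoid windowed by a Gaussian.

    The Gaussian localizes the filter in space; the cosine makes it respond
    to edges and lines oriented at angle `theta`.
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates so the filter is tuned to orientation theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            gauss = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
            wave = math.cos(2.0 * math.pi * xr / wavelength)
            row.append(gauss * wave)
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, wavelength=4.0, sigma=2.0)  # center value is 1.0
```

Convolving an image with a bank of these kernels at several orientations is the standard way to get edge and line responses.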
AJAY
AJAY@jaykrishAGI·
An auxiliary loss is added to the main loss for faster learning. My misconception was that loss is detrimental, just noise, but rather it's a feedback signal, not punishment in an absolute sense. Like extra teacher hints during exam preparation.
English
0
0
0
13
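The idea in the tweet above, loss as a feedback signal rather than a punishment, can be sketched in one function. The 0.3 weight is a made-up hyperparameter for illustration; in practice it is tuned.

```python
def combined_loss(main_loss, aux_loss, aux_weight=0.3):
    """Total loss = main objective + down-weighted auxiliary 'hint'.

    The auxiliary term is extra supervision: gradients from both terms
    flow to the model, and the weight keeps the hint from drowning out
    the main task.
    """
    return main_loss + aux_weight * aux_loss

total = combined_loss(main_loss=1.0, aux_loss=0.5)  # 1.0 + 0.3 * 0.5 = 1.15
```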
AJAY
AJAY@jaykrishAGI·
People have a wrong discernment about appearance and reality, which is kind of an undue strawman. If you see lots of verbosity, just leave that conversation.
English
0
0
0
10
AJAY
AJAY@jaykrishAGI·
This might be quasi-linear ..
AJAY tweet media
English
0
0
0
7
AJAY
AJAY@jaykrishAGI·
sublinear > linear > quasi-linear ....
Portuguese
0
0
0
5
AJAY
AJAY@jaykrishAGI·
Money and thought are just vyavaharika, but paramarthika is the consciousness, God. Whatever we see around is just upadhi, an illusion or condition that makes us perceive differences.
English
0
0
0
6
AJAY
AJAY@jaykrishAGI·
.idxmax() → Give the hotel with the maximum rank. .max() → Give the value of that rank. print() → Speak it aloud.
AJAY tweet media
English
0
0
1
13
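A minimal runnable version of the `.idxmax()` / `.max()` lines above, with made-up hotel names and rank values standing in for the data in the screenshot:

```python
import pandas as pd

# Hypothetical hotel_rank Series; the names and counts are invented.
hotel_rank = pd.Series({"Hotel_A": 12, "Hotel_B": 45, "Hotel_C": 30})

best_hotel = hotel_rank.idxmax()  # index label of the maximum: "Hotel_B"
best_value = hotel_rank.max()     # the maximum value itself: 45
print(best_hotel, best_value)     # "speak it aloud"
```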
AJAY
AJAY@jaykrishAGI·
groupby('hotel_reference') → Group all rows by hotel, like collecting all followers for each temple. ['user_id'] → Look at the jar with users who booked each hotel. .nunique() → Count unique users. Now, hotel_rank = how popular each hotel is. Dot mnemonic: Dot means “perform this magic on the group.”
AJAY tweet media
English
1
0
1
14
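The groupby chain described above, runnable end to end; the bookings table here is invented to mirror the columns in the screenshot:

```python
import pandas as pd

# Hypothetical bookings data: each row is one booking event.
bookings = pd.DataFrame({
    "hotel_reference": ["H1", "H1", "H2", "H1"],
    "user_id":         [101,  102,  103,  101],
})

# Group rows by hotel, look at the user_id 'jar', count distinct users.
hotel_rank = bookings.groupby("hotel_reference")["user_id"].nunique()
```

Note that `.nunique()` deduplicates: user 101 booked H1 twice but counts once, so H1 gets 2 unique users and H2 gets 1.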
AJAY
AJAY@jaykrishAGI·
merge() → Like combining two scrolls based on a common column (hotel_reference). on="hotel_reference" → Tell the helper which column matches the two tables. how="left" → Keep all entries from the first table and match what we can from the second. Filtering for London: merged_df['city'] → Take the column city. == 'London' → Only keep rows where city equals London. & → “and” in Python, both conditions must be true (city = London and country = UK). Brackets [ ] → They select rows where the condition is true.
AJAY tweet media
English
0
0
0
5
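The merge-then-filter recipe above as a self-contained sketch; both tables are invented, with column names following the description:

```python
import pandas as pd

hotels = pd.DataFrame({
    "hotel_reference": ["H1", "H2", "H3"],
    "city":    ["London", "Paris", "London"],
    "country": ["UK", "France", "UK"],
})
bookings = pd.DataFrame({
    "hotel_reference": ["H1", "H2", "H4"],
    "user_id": [101, 102, 103],
})

# how="left": keep every booking, attach city/country where hotels match
# on the shared hotel_reference column (H4 gets NaN, nothing is dropped).
merged_df = bookings.merge(hotels, on="hotel_reference", how="left")

# & means both conditions must hold; each mask needs its own brackets.
london_uk = merged_df[(merged_df["city"] == "London") & (merged_df["country"] == "UK")]
```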
AJAY
AJAY@jaykrishAGI·
AJAY tweet media
ZXX
1
0
0
5
AJAY
AJAY@jaykrishAGI·
How would you solve this q? Part 1
AJAY tweet media
English
1
0
0
15