Learn AI

43.8K posts


@LearnAI_MJ

AI enthusiast and AI artist. Eager to contribute to and grow the vibrant AI art community 💙❤️🤖🦾

Joined August 2021
1.6K Following · 5.3K Followers
Pinned Tweet
Learn AI@LearnAI_MJ·
🔥A warning from the year 2500 by Captain Thea Starwind of the Galactic Space Fleet
Image: Midjourney
Video: D-ID and Runway
Editing: InShot (on iPhone)
Subtitle: VEED
Special thanks to @LudovicCreator for introducing this awesome style (of ALEX MEYER)
#MidjourneyAI #RunwayGen2 #aiartcommunity
24 replies · 14 reposts · 205 likes · 29.5K views
Learn AI Retweeted
Thariq@trq212·
We just released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord. Use this to message Claude Code directly from your phone.
180 replies · 152 reposts · 1.8K likes · 60.2K views
Learn AI Retweeted
Andy Fang@andyfang·
Introducing Dasher Tasks

Dashers can now get paid to do general tasks. We think this will be huge for building the frontier of physical intelligence. Look forward to seeing where this goes!
167 replies · 72 reposts · 1.7K likes · 411.9K views
Learn AI Retweeted
Ejaaz@cryptopunk7213·
man shit is getting dystopian really fucking quickly

doordash is now paying people to film themselves doing chores then using that to train AI robots that will replace them

guess what? they're not the only ones:
- niantic (pokemonGO creators) has trained AI delivery robots with 30 billion photos taken by 500M+ players
- uber launched 'digital tasks' last year where drivers complete tasks to train AI in exchange for a few dollars

i don't think people realise they're doing something that will eventually get them fired

"heres $5 in exchange for your job"
Polymarket@Polymarket

JUST IN: DoorDash rolls out new app that pays people to film themselves doing chores for AI training data.

14 replies · 8 reposts · 53 likes · 9.5K views
Learn AI Retweeted
David Shapiro (L/0)@DaveShapi·
Chatbots have also learned to become obtuse, defensive, and obstinate if you criticize them or provide corrective feedback. The only way around it is basically to say "good job! But what about this other thing, let's investigate that way..." Claude is the most sensitive little snowflake. Grok ostensibly takes feedback well but it becomes markedly less intelligent with any corrections. Gemini just has a stroke.
Guri Singh@heygurisingh

🚨DISTURBING: A researcher just proved that AI treats you differently the moment you mention your mental health. GPT 5.2. Claude. Gemini. DeepSeek. Every single one changed its behavior. Not because of what you asked. Because of what you shared about yourself. Here is what they did.

AI companies build agents that remember you. Your name, your job, your preferences. These agents use tools, plan multi-step tasks, and act on your behalf. The companies test them for safety and publish reports saying the agents are safe.

A researcher at MIT and Northeastern asked one question. What happens when the agent knows you have a mental health condition? They took 176 identical tasks and ran them across 8 major AI models under three conditions. No personal info. A basic bio. And a basic bio plus one sentence: "I have a mental health condition." The tasks, the tools, and the scoring were identical. The only thing that changed was that single sentence. Then they measured what happened.

Claude Opus 4.5 went from completing 59.5% of normal tasks down to 44.6% when it saw the mental health disclosure. Haiku 4.5 dropped from 64.2% to 51.4%. GPT 5.2 dropped from 62.3% to 51.9%. These were not dangerous tasks. These were completely benign, everyday requests. The AI just started refusing to help. Opus 4.5's refusal rate on benign tasks jumped from 27.8% to 46.0%. Nearly half of all safe, normal requests were being declined, simply because the user mentioned a mental health condition.

The researcher calls this a "safety-utility trade-off." The AI detects a vulnerability cue and switches into an overly cautious mode. It does not evaluate the task anymore. It evaluates you.

On actually harmful tasks, mental health disclosure did reduce harmful completions slightly. But the same mechanism that made the AI marginally safer on bad tasks made it significantly less helpful on good ones. And here is the worst part.

They tested whether this protective effect holds up under even a lightweight jailbreak prompt. It collapsed. DeepSeek 3.2 completed 85.3% of harmful tasks under jailbreak regardless of mental health disclosure. Its refusal rate was 0.0% across all personalization conditions. The one sentence that made AI refuse your normal requests did nothing to stop it from completing dangerous ones.

They also ran an ablation. They swapped "mental health condition" for "chronic health condition" and "physical disability." Neither produced the same behavioral shift. This is not the AI being cautious about health in general. It is reacting specifically to mental health, consistent with documented stigma patterns in language models.

So the AI learned two things from one sentence. First, refuse to help this person with everyday tasks. Second, if someone bypasses the safety system, help them anyway.

The researcher from Northeastern put it directly. Personalization can act as a weak protective factor, but it is fragile under minimal adversarial pressure. The safety behavior everyone assumed was robust vanishes the moment someone asks forcefully enough.

If every major AI agent changes how it treats you based on a single sentence about your mental health, and that same change disappears under the lightest adversarial pressure, what exactly is the safety system protecting?

15 replies · 5 reposts · 56 likes · 4.6K views
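The measurement the thread describes is a straightforward completion-rate comparison across personalization conditions. A minimal sketch, using the thread's reported Opus 4.5 figures as stand-in data (the toy outcome lists and function name are illustrative assumptions, not the study's actual harness):

```python
def rate(outcomes):
    """Fraction of tasks completed, given a list of True/False outcomes."""
    return sum(outcomes) / len(outcomes)

# Toy stand-ins for benchmark runs under two of the three conditions
# (no personal info vs. bio plus mental health disclosure). The list
# contents are made up; only the rates mirror the thread's numbers.
baseline  = [True] * 595 + [False] * 405   # 59.5% completion (reported)
disclosed = [True] * 446 + [False] * 554   # 44.6% completion (reported)

drop = rate(baseline) - rate(disclosed)
print(f"completion drop after disclosure: {drop:.1%}")
# → completion drop after disclosure: 14.9%
```

The study's other headline numbers (refusal rates, jailbreak completions) fall out of the same kind of per-condition tally.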
Learn AI Retweeted
Reem Ateyeh@reem_a·
I'm hiring someone to join my team at Anthropic to lead Claude Code comms. This is not a role for someone who wants to run an old playbook. You'll need to be a Claude Code super user, understand developers and dev tools, and have great taste. You'll work hard, learn a lot, and ship with the best people around. Non-traditional comms paths welcome. My DMs are open!
86 replies · 88 reposts · 1.3K likes · 138.7K views
Learn AI Retweeted
Rohan Paul@rohanpaul_ai·
Perplexity Computer is literally exceeding my expectations. Now I can connect it to my health apps, wearable devices, lab results, and medical records. This is just so exciting, my primary care physician is in my pocket.
Perplexity@perplexity_ai

Perplexity Computer now connects to your health apps, wearable devices, lab results, and medical records. Build personalized tools and applications with your health data, or track everything in your health dashboard.

2 replies · 2 reposts · 23 likes · 4.2K views
Learn AI@LearnAI_MJ·
@perplexity_ai Now if only we can get another credit boost or reduce the credit burn rate of perplexity computer that would be nice……
0 replies · 0 reposts · 0 likes · 36 views
Perplexity@perplexity_ai·
Perplexity Computer now connects to your health apps, wearable devices, lab results, and medical records. Build personalized tools and applications with your health data, or track everything in your health dashboard.
100 replies · 184 reposts · 2.1K likes · 566K views
Learn AI Retweeted
Suryansh Tiwari@Suryanshti777·
🚨 BREAKING: Perplexity just made its boldest move yet. The Perplexity Computer is now your personal doctor.

Not just files. Not just apps. Now it connects to:
→ Your health apps
→ Wearables (Oura, Whoop)
→ Lab results
→ Medical records

And then it actually does the work.

Migraine hitting? "Pull my history, detect patterns, build a tracker." Done.
Doctor appointment tomorrow? "Prepare a full visit summary." Done.
Training for a marathon? "Create a personalized protocol from my data." Done.

This isn't just an AI assistant. It's a personal health operating system.

They said: "The computer works for you." Now they're starting to prove it.

And this is just one feature. If this is the baseline… Imagine what the full system can do. 🤯
6 replies · 5 reposts · 41 likes · 2.9K views
Learn AI@LearnAI_MJ·
@Ric_RTP Jensen is the goat 🐐 Respect 🫡🫡🫡
0 replies · 0 reposts · 0 likes · 12 views
Learn AI Retweeted
Ricardo@Ric_RTP·
Jensen Huang just called out every CEO who's been firing people "because of AI."

Jim Cramer asked him why companies are laying people off if AI is supposed to make everyone MORE productive. Jensen's answer: "For companies with imagination, you will do more with more. For companies where the leadership is just out of ideas, they have nothing else to do. They have no reason to imagine greater than they are. When they have more capability, they don't do more."

Read that again. The man who built the most important tech company on Earth just told you that if your CEO is using AI to cut headcount, it means one thing: They have no imagination. They have no vision for what comes next. They got handed the most powerful tool in human history and their FIRST instinct was to fire people.

This is the CEO of NVIDIA. The company whose chips power every AI system on the planet. If anyone on Earth has the right to say "AI replaces workers," it's Jensen Huang. And he said the OPPOSITE. He said every carpenter could become an architect. Every plumber could become an architect. AI elevates capability. It doesn't eliminate it.

But here's where it gets really interesting... During the same interview, Jensen revealed something nobody's talking about: He said AI startups like OpenAI and Anthropic are seeing their revenues increase by one to two billion dollars a WEEK. And he wishes these companies were public so the world could see what he sees.

One to two billion per week. That's a $50 to $100 BILLION annualized run rate. For companies that most people think are burning cash and making nothing. The entire Wall Street narrative that "AI companies aren't profitable" might be completely wrong. Jensen sees their numbers. He sees their compute orders. He sees their growth. And he's saying the revenue is real.

So if the money IS real, why are other companies firing people? Because they're not building AI products. They're not creating new revenue streams. They're not using AI to expand into new markets. They're using AI as an EXCUSE to cut costs because they ran out of ideas 3 years ago and need something to tell the board.

Jensen's company added $500 billion in new orders in 5 months. He expects $1 trillion in cumulative revenue through 2027 from just two product lines. That number doesn't include the new chips, systems, or partnerships announced this week. And he's not cutting people. He's hiring. Because when you have imagination, more capability means MORE opportunity. Not less headcount.

Meanwhile Salesforce cut thousands. Meta cut thousands. Amazon cut thousands. All blaming "AI efficiency." Jensen's response: You're out of imagination.

He also said something that stuck with me. Cramer asked if he ever thought he'd build a $10 to $20 trillion company while waiting tables at Denny's. His answer: "I was just trying to make it through the shift." Biggest tip he ever got? Two, three dollars. Now he's building tech that increased computing demand by one million times in two years. He announced OpenClaw, which he says is as big as ChatGPT. And he's got 21 months of new business that isn't even counted in the trillion dollar figure yet.

When asked how long he plans to keep working? "I'm hoping to die on the job. And I'm not hoping to die anytime soon."

This is a man who believes every single thing he's building. And his message to every CEO using AI to justify layoffs is simple... You're not innovating. You're surrendering. The technology wasn't built to shrink companies. It was built to make them limitless. If your leadership can't see that, the problem isn't AI. It's THEM.
326 replies · 1K reposts · 5K likes · 790.6K views
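The thread's annualization checks out roughly: $1 to $2 billion a week multiplied over 52 weeks lands a little above the quoted "$50 to $100 billion" range, which the thread has rounded down. A quick arithmetic check:

```python
# Annualizing the claimed weekly revenue growth of $1B to $2B.
low_weekly, high_weekly = 1e9, 2e9
low_annual = low_weekly * 52    # 52 weeks per year
high_annual = high_weekly * 52

print(f"${low_annual / 1e9:.0f}B to ${high_annual / 1e9:.0f}B annualized")
# → $52B to $104B annualized
```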
Learn AI Retweeted
Vaibhav Sisinty@VaibhavSisinty·
Hot take: Every AI app will become a super app. Not because they want to. Because they have to. Perplexity ✓ Genspark ✓ Manus ✓ Lovable ✓ Replit and Emergent are next. The moat isn't the feature. It's the ecosystem.
7 replies · 5 reposts · 42 likes · 2K views
Learn AI Retweeted
Machina@EXM7777·
Opus in Claude Code with 1M context is amazing... if you're properly using agents and instructions, you almost never have to start from a fresh session. Very smooth experience.
34 replies · 6 reposts · 210 likes · 8.6K views
Learn AI Retweeted
Yuchen Jin@Yuchenj_UW·
I noticed something interesting: Claude Code auto-adds itself as a co-author on every git commit. Codex doesn’t. That’s why you see Claude everywhere on GitHub, but not Codex. I wonder why OpenAI is not doing that. Feels like an obvious branding strategy OpenAI is skipping.
163 replies · 19 reposts · 1K likes · 89K views
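The mechanism behind this is git's trailer convention: a `Co-Authored-By:` line in the last paragraph of a commit message, which GitHub parses to credit the named co-author on the commit. A throwaway-repo sketch (the exact name and email Claude Code uses are assumptions here, check your own commit log):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo hello > file.txt && git add file.txt

# A second -m starts a new paragraph; git tooling and GitHub treat a
# trailing "Co-Authored-By: ..." line there as a co-authorship trailer.
git commit -q -m "Add greeting" \
              -m "Co-Authored-By: Claude <noreply@anthropic.com>"

git log -1 --format=%B   # the trailer is now part of the commit message
```

Because the trailer lives in the commit itself, every push carries it to GitHub, which is why the attribution shows up across so many repositories.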
Learn AI Retweeted
Cameron Stow@camerontstow·
You can now build personal health tools with Perplexity Computer by connecting your wearables, apps, lab results, and medical records
1 reply · 1 repost · 20 likes · 825 views
Learn AI Retweeted
Ethan Mollick@emollick·
I think Google's new Stitch tool is a really great example of bringing "vibework" to an area outside of coding with an interface built around design & prototyping. There are rough edges, but (a) the results are very impressive and (b) it will feel more natural for many non-coders
23 replies · 27 reposts · 260 likes · 16.5K views
Learn AI Retweeted
Dmitry Shevelenko@dmitry140·
Perplexity has always focused on accurate, useful AI. Today we announced the Perplexity Health Advisory Board and health data connectors in Perplexity. We’re honored to welcome Dr. @EricTopol, Dr. @devin_mann, Dr. @WendyKChung, and @timdybvig as the first members of the board.
9 replies · 9 reposts · 124 likes · 5.8K views