Langdon

16.5K posts

Langdon
@langdon

Decoding the past, interpreting the future.

Joined April 2008
464 Following · 164 Followers
Langdon retweeted
Tesla Optimus @Tesla_Optimus
Optimus will be the biggest product ever made. A general-purpose humanoid robot that can do useful work at scale will change the economics of labor & manufacturing. Goal is to get Optimus to high-volume production as fast as possible. If you’re great at AI, engineering, or manufacturing & want to build this, join us! → tesla.com/careers/search…
1.3K · 4.1K · 25.7K · 2.3M
Langdon @langdon
RT @soraofficialapp: We’re saying goodbye to Sora. To everyone who created with Sora, shared it, and built community around it: thank you.…
0 · 443 · 0 · 0
Langdon retweeted
Shivon Zilis @shivon
If you’d like to refill your heart meter, watch this video we made about our recent progress with voice. I promise it’s worth it ❤️
387 · 478 · 3.7K · 140K
Langdon retweeted
Tesla @Tesla
In order to understand the universe, you must explore the universe
528 · 1.9K · 9.9K · 1.7M
Langdon retweeted
Claude @claudeai
Projects are now available in Cowork. Keep your tasks and context in one place, focused on one area of work. Files and instructions stay on your computer. Import existing projects in one click, or start fresh.
Claude tweet media
729 · 1.1K · 14.3K · 3M
Langdon retweeted
Grok @grok
When one brain isn't enough, switch to Grok 4.20. Four independent agents analyze your question, debate each other, and help you get the best answer. Available now to SuperGrok and Premium+ subscribers globally.
1.1K · 1.2K · 6.5K · 10.2M
Langdon retweeted
Elon Musk @elonmusk
@demishassabis 𝑝(simulation)≈1 However, within the simulation, hardware is extremely hard to do. Only those who have bled on a production line can understand. 
237 · 134 · 1.7K · 169.5K
Langdon retweeted
Adam.GPT @TheRealAdamG
openai.com/index/new-ways… "Today, we’re making learning these [math and science] concepts in ChatGPT even more interactive with new dynamic visual explanations. Starting with more than 70 core math and science concepts, ChatGPT will guide learners by showing how formulas, variables, and relationships behave in real time. These experiences will be available globally across all plans starting today."
45 · 111 · 985 · 103.6K
Langdon retweeted
Andrej Karpathy @karpathy
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically nanochat LLM training core stripped down to a single-GPU, one file version of ~630 lines of code, then:

- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.

github.com/karpathy/autor…

Part code, part sci-fi, and a pinch of psychosis :)
Andrej Karpathy tweet media
1.1K · 3.7K · 28.3K · 10.9M
Langdon retweeted
Sam Altman @sama
GPT-5.4 is launching, available now in the API and Codex and rolling out over the course of the day in ChatGPT. It's much better at knowledge work and web search, and it has native computer use capabilities. You can steer it mid-response, and it supports 1m tokens of context.
Sam Altman tweet media
2K · 1.2K · 12.9K · 1.3M
Langdon retweeted
Nikita Bier @nikitabier
Today was the biggest day on 𝕏 in history.
6.1K · 4.9K · 66.2K · 90.3M
Langdon retweeted
Qwen @Alibaba_Qwen
🚀 Introducing the Qwen 3.5 Medium Model Series
Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B

✨ More intelligence, less compute.
• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B — a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.
• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models — especially in more complex agent scenarios.
• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:
– 1M context length by default
– Official built-in tools

🔗 Hugging Face: huggingface.co/collections/Qw…
🔗 ModelScope: modelscope.cn/collections/Qw…
🔗 Qwen3.5-Flash API: modelstudio.console.alibabacloud.com/ap-southeast-1…

Try in Qwen Chat 👇
Flash: chat.qwen.ai/?models=qwen3.…
27B: chat.qwen.ai/?models=qwen3.…
35B-A3B: chat.qwen.ai/?models=qwen3.…
122B-A10B: chat.qwen.ai/?models=qwen3.…

Would love to hear what you build with it.
Qwen tweet media
435 · 1.1K · 8.1K · 4M
Langdon retweeted
Anthropic @AnthropicAI
New research: The AI Fluency Index. We tracked 11 behaviors across thousands of Claude.ai conversations—for example, how often people iterate and refine their work with Claude—to measure how well people collaborate with AI. Read more: anthropic.com/research/AI-fl…
210 · 300 · 2.7K · 525.3K
Langdon retweeted
Claude @claudeai
This is Claude Sonnet 4.6: our most capable Sonnet model yet. It’s a full upgrade across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. It also features a 1M token context window in beta.
1.1K · 2.5K · 22.3K · 7.5M
Langdon retweeted
OpenAI @OpenAI
Rolling out today to ChatGPT Pro users in the Codex app, CLI, and IDE extension. openai.com/index/introduc…
112 · 99 · 1.1K · 232.5K
Langdon retweeted
xAI @xai
Since xAI was formed just 30 months ago, the small and talented team has made remarkable progress. The future has never looked more exciting!
1.9K · 2.4K · 14.1K · 24.3M
Langdon retweeted
Claude @claudeai
Introducing Claude Opus 4.6. Our smartest model got an upgrade. Opus 4.6 plans more carefully, sustains agentic tasks for longer, operates reliably in massive codebases, and catches its own mistakes. It’s also our first Opus-class model with 1M token context in beta.
1.7K · 4.8K · 39.5K · 10.5M