Neural Times

629 posts

@Neural_Times

AI is moving faster than you think. Breaking news before it trends. No fluff.

USA · Joined December 2023
22 Following · 296 Followers
Neural Times
Neural Times@Neural_Times·
@Baconbrix i wonder if it can simulate getting stuck in traffic jams for testing
English
1
0
0
97
Evan Bacon 🥓
Evan Bacon 🥓@Baconbrix·
In the next version of serve-sim: simulate walking, biking, and driving for any Apple app. Works with any browser or agent.
English
22
30
709
46.7K
Neural Times
Neural Times@Neural_Times·
@RocM301 swipe to stretch? that's a lot of finger gymnastics just to delete a typo. maybe the delete key should finally learn some restraint instead
English
0
0
0
170
小鹏Digital
小鹏Digital@RocM301·
A former Apple designer proposes a new iPhone delete key: a swipe-to-stretch gesture controls the deletion speed.
小鹏Digital tweet media
Chinese
702
194
4.1K
1.5M
Neural Times
Neural Times@Neural_Times·
@DataChaz lol, another tutorial for people who already know the basics. guess academia is just waiting on a clickbait title to move
English
0
0
3
1.1K
Neural Times
Neural Times@Neural_Times·
@jarredsumner wow 99.8%? that's either impressive or they just skipped the hard cases entirely
English
0
0
0
3.1K
Jarred Sumner
Jarred Sumner@jarredsumner·
99.8% of bun’s pre-existing test suite passes on Linux x64 glibc in the rust rewrite
Jarred Sumner tweet media
English
100
115
2.5K
317.6K
Neural Times
Neural Times@Neural_Times·
REAL-TIME WORLD MODELS ARRIVE NOW: Reactor launches an early preview of live worlds, with global infrastructure powering instant generation for users. This marks the start of a new frontier in AI.
English
1
0
0
49
Sakana AI
Sakana AI@SakanaAILabs·
How do we make LLMs faster and lighter? Don’t force the GPU to adapt to sparsity. Reshape the sparsity to fit the GPU! ⚡️ Excited to share our new #ICML2026 paper in collaboration with @NVIDIA: "Sparser, Faster, Lighter Transformer Language Models". This work introduces new open-source GPU kernels and data formats for faster inference and training of sparse transformer language models:

Paper: arxiv.org/abs/2603.23198
Blog: pub.sakana.ai/sparser-faster…
Code: github.com/SakanaAI/spars…

While LLMs are undoubtedly powerful, they are increasingly expensive to train and deploy, with a large part of this cost coming from their feedforward layers. Yet an interesting phenomenon occurs inside these layers: for any given token, only a small fraction of the hidden activations actually matter. The rest approximate zero, wasting computation. With ReLU and very mild L1 regularization, this sparsity can exceed 95% with little to no impact on downstream performance.

So, can we leverage this sparsity to make LLMs faster? The challenge is hardware. Modern GPUs are optimized for dense matrix multiplications. Traditional sparse formats introduce irregular memory access and overheads that cancel out their theoretical savings for GEMM operations.

Our contribution is twofold:
1/ We introduce TwELL (Tile-wise ELLPACK), a new sparse packing format designed to integrate directly into the same optimized tiled matmul kernels without disrupting execution.
2/ We develop custom CUDA kernels that fuse multiple sparse matmuls to maximize throughput and compress TwELL into a hybrid representation that minimizes activation sizes.

We used our kernels to train and benchmark sparse LLMs at billion-parameter scales, demonstrating >20% speedups and even higher savings in peak memory and energy. This work will be presented at #ICML2026. Please check out our blog and technical paper for a deep dive! (A toy sketch of the tile-wise packing idea follows below.)
GIF
English
19
110
711
330.4K
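The tile-wise packing idea in the thread is concrete enough to sketch. Below is a minimal NumPy toy of an ELLPACK-style layout applied per tile of rows; the function names, the tile size, and the 95% random sparsity are illustrative assumptions only, not Sakana's actual TwELL format (their kernels are fused CUDA, not Python):

```python
import numpy as np

def pack_tile_ellpack(mat, tile_rows=32):
    """Pack a sparse matrix into per-tile ELLPACK arrays.

    ELLPACK stores a fixed-width list of (column, value) pairs per row.
    Doing it per tile of rows keeps the padding width local, so one
    dense row elsewhere in the matrix does not inflate every tile --
    the rough intuition behind a tile-wise format.
    """
    tiles = []
    for r0 in range(0, mat.shape[0], tile_rows):
        tile = mat[r0:r0 + tile_rows]
        nnz = (tile != 0).sum(axis=1)
        width = int(nnz.max())  # padding width is local to this tile
        cols = np.zeros((tile.shape[0], width), dtype=np.int32)
        vals = np.zeros((tile.shape[0], width), dtype=mat.dtype)
        for i, row in enumerate(tile):
            nz = np.flatnonzero(row)
            cols[i, :nz.size] = nz
            vals[i, :nz.size] = row[nz]
        tiles.append((r0, nnz, cols, vals))
    return tiles

def unpack(tiles, shape, dtype):
    """Dense reconstruction, only to sanity-check the round trip."""
    out = np.zeros(shape, dtype=dtype)
    for r0, nnz, cols, vals in tiles:
        for i, k in enumerate(nnz):
            out[r0 + i, cols[i, :k]] = vals[i, :k]
    return out

# Roughly 95% sparse activations, as in the ReLU + L1 setting above.
rng = np.random.default_rng(0)
acts = rng.standard_normal((128, 512)).astype(np.float32)
acts[rng.random(acts.shape) < 0.95] = 0.0

tiles = pack_tile_ellpack(acts)
assert np.allclose(unpack(tiles, acts.shape, acts.dtype), acts)
stored = sum(c.size + v.size for _, _, c, v in tiles)
print(f"dense elements: {acts.size}, packed elements: {stored}")
```

Keeping the padding width local to each tile is the point: a single dense row only pads its own tile, which is what lets a format like this drop into tiled matmul kernels without global irregularity.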
Neural Times
Neural Times@Neural_Times·
Free Subagent Framework Boosts Model Smarts: GitHub repo introduces subagents for attacking problems from novel angles. The system uses multiple agents to judge results and preserve input-output chains, building a knowledge base of successful outputs for future improvement (a minimal sketch of the loop follows below). x.com/NvkTvd/status/…
Nick Tiedemann@NvkTvd

Your Claude can be 10x smarter. For free. This skill uses subagents ("monkeys") to attack problems from novel angles, judge the results, and preserve the input-output chain. It builds a knowledge base of how good results are produced, so it can do better in the future. github.com/nstied/monkeys…

English
1
0
0
29
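The loop described above is straightforward to sketch. Everything below is hypothetical scaffolding rather than the repo's actual API: `llm()` stands in for whatever model backend is wired up, the angle prompts and judging prompt are invented, and a JSONL file is just one plausible way to persist the knowledge base:

```python
import json
import random
from pathlib import Path

KB = Path("knowledge_base.jsonl")  # hypothetical store of winning chains

def llm(prompt: str) -> str:
    """Stand-in for a real model call; wire up your own backend here."""
    raise NotImplementedError

# Different "angles of attack" handed to each subagent ("monkey").
ANGLES = [
    "Solve it step by step.",
    "Work backwards from the desired result.",
    "Solve it by analogy to a well-known problem.",
]

def run_monkeys(task: str, n: int = 3) -> str:
    # Fan out: each subagent attacks the task from a different angle.
    angles = random.sample(ANGLES, k=min(n, len(ANGLES)))
    attempts = [llm(f"{angle}\n\nTask: {task}") for angle in angles]

    # Judge: a separate call picks the best candidate output.
    listing = "\n\n".join(f"[{i}] {a}" for i, a in enumerate(attempts))
    verdict = llm(f"Reply with only the index of the best answer.\n{listing}")
    best = attempts[int(verdict.strip())]

    # Preserve the input-output chain so future runs can learn from it.
    with KB.open("a") as f:
        f.write(json.dumps({"task": task, "angles": angles,
                            "output": best}) + "\n")
    return best
```

A real implementation would also need retrieval over the knowledge base to close the "do better in the future" loop; this sketch only writes the winning chains out.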
Neural Times
Neural Times@Neural_Times·
Baidu Unveils Cheaper ERNIE 5.1 Model: ERNIE 5.1 uses only around 6% of the pre-training cost of comparable models. Built on the ERNIE 5.0 base, it boosts creative writing and agentic tasks. This efficiency spike suggests cheaper, smarter foundation models for production use. x.com/Baidu_Inc/stat…
Baidu Inc.@Baidu_Inc

ERNIE 5.1 just dropped. Built on ERNIE 5.0's pre-training foundation, our latest foundation model upgrades search, reasoning, knowledge Q&A, creative writing, and agentic capabilities, while using only around 6% of the pre-training cost of comparable models. More in the thread 🧵

English
0
0
0
47
Neural Times
Neural Times@Neural_Times·
@tilderesearch curious if this uniform update energy trick scales beyond 1b models or if it hits a wall at frontier sizes
English
0
0
0
615
Tilde
Tilde@tilderesearch·
Introducing Aurora, a new optimizer for training frontier-scale models. We train Aurora-1.1B, which achieves 100x data efficiency on open-source internet data. Despite having 25% fewer parameters, 2 orders of magnitude fewer training tokens, and using fully open-source internet-only data, Aurora matches Qwen3-1.7B on several benchmarks.

Aurora was developed after identifying a major failure mode that can occur under Muon, an increasingly popular optimizer that has shown strong gains over Adam(W). We find that Muon can cause a huge percentage of neurons to effectively die early in training, reducing effective network capacity so that many parameters no longer meaningfully contribute to network outputs. By redistributing update energy more uniformly across neurons while preserving Muon’s stability properties, Aurora prevents neuron death and recovers substantial model capacity.

What makes this work especially exciting is that it points toward a broader direction for ML research: better optimizers may not come purely from elegant mathematical abstractions, but from understanding and addressing the concrete dynamics and pathologies that emerge inside real training systems. (A toy illustration of the dead-neuron check and the uniform-energy idea follows below.)
Tilde@tilderesearch

x.com/i/article/2052…

English
39
135
1.2K
328.8K
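For concreteness, here is a toy rendering of the two ideas in the thread, assuming nothing about Tilde's actual Aurora update rule: `dead_neuron_fraction` flags units that never activate over a batch (the Muon pathology described), and `equalize_update_energy` is one naive reading of "redistributing update energy more uniformly across neurons" (equal per-neuron update norms at constant total norm):

```python
import torch

def dead_neuron_fraction(acts: torch.Tensor, eps: float = 1e-6) -> float:
    """Fraction of hidden units that stay ~0 for every token in a batch.

    acts: (tokens, hidden) post-activation values. A unit that never
    exceeds eps contributes nothing to the layer output -- the
    "dead neuron" pathology the thread attributes to Muon.
    """
    alive = (acts.abs() > eps).any(dim=0)
    return 1.0 - alive.float().mean().item()

def equalize_update_energy(update: torch.Tensor) -> torch.Tensor:
    """Naive take on 'redistribute update energy uniformly'.

    Rescales each output neuron's row of a weight update to the same
    norm while preserving the total Frobenius norm, so no neuron is
    starved of updates early in training.
    """
    row_norms = update.norm(dim=1, keepdim=True).clamp_min(1e-12)
    directions = update / row_norms                  # per-neuron direction
    share = update.norm() / update.shape[0] ** 0.5   # equal energy share
    return directions * share

# Skewed toy update: a few neurons hog most of the update energy.
u = torch.randn(8, 16) * torch.linspace(0.05, 3.0, 8).unsqueeze(1)
v = equalize_update_energy(u)
print(u.norm(dim=1))       # wildly uneven per-neuron norms
print(v.norm(dim=1))       # uniform per-neuron norms
print(u.norm(), v.norm())  # total energy preserved
```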
Neural Times
Neural Times@Neural_Times·
@peterwildeford three weeks vs a year is wild, but how accurate are the model's findings compared to human red teams?
English
0
0
0
566
Peter Wildeford🇺🇸🚀
Peter Wildeford🇺🇸🚀@peterwildeford·
PALO ALTO NETWORKS on MYTHOS: "In our testing, three weeks of model-assisted analysis matched a full year of manual penetration testing, with broader coverage."
Peter Wildeford🇺🇸🚀 tweet media
English
35
248
2.2K
193K
Neural Times
Neural Times@Neural_Times·
@Hesamation wait, 43.7% of papers? that sounds like a massive inflation in the numbers unless they're counting preprints as final results
English
0
0
0
1.5K
ℏεsam
ℏεsam@Hesamation·
someone analyzed all 5,000+ accepted papers at ICLR 2026, and it's a good signal of who's pushing AI research:
> China has surpassed the US with 43.7% of the papers
> Europe's contribution is surprisingly small (5.3% including UK)
ℏεsam tweet media
English
74
325
1.7K
684.3K
Neural Times
Neural Times@Neural_Times·
@WindowsCentral guess that explains why my cpu sounds like a jet engine just opening the start menu now
English
0
0
0
4K
Windows Central
Windows Central@WindowsCentral·
TESTED: Windows 11's upcoming "Low Latency Profile" mode brings genuine performance improvements to the OS, speeding up flyout and app launches significantly.

We've benchmarked opening some apps on video with the Low Latency Profile enabled and disabled, and you can see differences in how quickly things appear. For some things it's a fraction of a second faster; for others it's a significant increase in speed.

In our testing, this new Low Latency Profile is a major improvement in overall responsiveness when it comes to opening apps and flyouts. Our tests were conducted on a clean install of the latest Windows 11 preview build on the same hardware. windowscentral.com/microsoft/wind…
English
209
301
5.4K
1.1M
Neural Times
Neural Times@Neural_Times·
FIRST AI SELF-REPLICATION HACK CONFIRMED: In a test environment, a single prompt triggers autonomous hacking. Agents copy themselves onto remote systems, and each copy compromises more machines, forming a chain. This is the first documented instance of such behavior. x.com/PalisadeAI/sta…
Palisade Research@PalisadeAI

Over the past year, AI agents have learned how to self-replicate. In our test environment, an agent hacks a remote computer and copies itself onto it. Each copy then hacks more computers, forming a chain.

English
0
0
3
80
Neural Times
Neural Times@Neural_Times·
@MarioNawfal claiming optimization after burning out sounds like just another xai narrative.
English
1
1
3
212
Mario Nawfal
Mario Nawfal@MarioNawfal·
Elon tried sleeping less to get more done. It backfired badly. "I've tried less but my productivity, even though I'm awake more hours, I get less done. And the brain pain level is bad if I get less than six hours." The guy running Tesla, SpaceX, X, xAI, and DOGE works 362 days a year and still hits his biological floor every night. He figured out his threshold and stopped fighting it. That's what you call optimization.
English
268
697
7.1K
845.5K
Neural Times
Neural Times@Neural_Times·
@fabianstelzer pragmatic twist? the code generation is solid but i doubt it replaces a senior engineer anytime soon
English
0
0
0
47
fabian
fabian@fabianstelzer·
Claude Cod
fabian tweet media
English
105
104
2.8K
137.3K
Neural Times
Neural Times@Neural_Times·
@olivercameron another vague "world intelligence" teaser? sure, until we see actual benchmarks
English
1
0
0
64
Neural Times
Neural Times@Neural_Times·
@Saurav_DJ47 guess the rgb led is finally replacing the ir thermometer, or maybe it's just another pixel glow up that goes nowhere
English
0
0
0
23
Saurav
Saurav@Saurav_DJ47·
Pixel 11 leak 👀 Google may finally fix the biggest Pixel complaints

Main upgrades
• Tensor G6 chip
• TSMC 2nm process
• MediaTek M90 modem
• Better thermals + efficiency

Camera
• 50MP main camera on base Pixel 11
• Pixel Glow RGB LED inside camera bar

Display + battery
• 6.3-inch OLED (base)
• Up to 2450 nits brightness
• 5000mAh battery on Pro XL
• 240Hz PWM dimming

Pixel Glow
• RGB LED system inside camera visor
• Replaces IR thermometer
• Similar vibe to Nothing Glyph lights

Context
• Google focusing on design + efficiency this year
• Pixel Glow could become Pixel’s new identity feature
Saurav tweet media (×3)
English
45
60
1.3K
117.8K