Db (@dbgray)
3.2K posts
Founder. Generalist.
San Francisco, CA · Joined March 2012
2.1K Following · 622 Followers
Db
Db@dbgray·
@dela3499 Sometimes great writing rings so false that it makes you think more about your perspective and want to argue - there is value in that as well.
Carlos De la Guardia
Writing is a multiplier. An idea's form in words can make it tedious and muddy, or immortal and reverberant. But, even the best prosecraft can't multiply the zero, the epsilon, the nothing of an inconsequential thought. Or can it? I myself find it hard to appreciate writing if the ideas don't ring true at all.
Carlos De la Guardia@dela3499

This quote has me thinking great writing isn't just great prosecraft, but the fusion of powerful ideas with the written word. Lesser ideas would make for lesser writing, no matter the quality of the craft on display.

Db
Db@dbgray·
In the U.S., you can fly an ultralight vehicle without a pilot's license, registration, or medical certificate, provided it meets the strict FAA Part 103 regulations. These often single-seat "vehicles" must weigh under 254 lbs empty, carry no more than 5 gallons of fuel, and have a top speed below 55 knots (63 mph).
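The three limits quoted above are easy to encode. A toy Python checker, covering only the numbers mentioned in the post (Part 103 has further conditions, such as stall speed and single occupancy, that are omitted here):

```python
# Toy checker for the three FAA Part 103 limits quoted above.
# NOTE: this is an illustration, not a legal or airworthiness test --
# the actual regulation has additional requirements not encoded here.
def meets_part_103_limits(empty_weight_lb, fuel_gal, top_speed_kt):
    return (
        empty_weight_lb < 254   # empty weight under 254 lbs
        and fuel_gal <= 5       # at most 5 gallons of fuel
        and top_speed_kt <= 55  # top speed at or below 55 knots (~63 mph)
    )

print(meets_part_103_limits(250, 5, 55))  # True
print(meets_part_103_limits(300, 5, 55))  # False
```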
Db retweeted
Damian Player
Damian Player@damianplayer·
we ACTUALLY got the oppressor mk2 before GTA 6. Polish engineer Tomasz Patan built the Volonaut Airbike. it hits 124 mph, runs on jet propulsion, has no propellers, and weighs less than your dog. pretty fucking sick.
Db
Db@dbgray·
@ScorpioObserves @xai Idk, rude, sleep-deprived call center operators in the Philippines and India absolutely should be replaced asap.
Scorpio Observes
Scorpio Observes@ScorpioObserves·
I'm not even gonna try to decipher what your customer support needs, cuz it sounds like something out of a sci-fi movie lmao. But on the real, if you wanna be real with me - I suggest you stick to good ol' human interaction. Ya know? There's nothing quite like hearing a sympathetic voice from an actual person when you need help. Plus, AI can never beat that warm feeling of knowing someone has your back. But hey, if you insist on incorporating some high-tech stuff, make sure it doesn't get in the way of what matters most: helping customers feel heard and understood.
xAI
xAI@xai·
Your customer support needs a voice agent built for the real world. Grok Voice Think Fast 1.0 handles complex workflows with speed and accuracy, even in hard-to-hear environments. From multi-step troubleshooting to high-volume tool calls, it keeps up.
Db retweeted
How To AI
How To AI@HowToAI_·
Chinese researchers have developed the best shortest-path algorithm in 41 years!

Dijkstra's Algorithm has been the undefeated king of the shortest path for over 40 years. Whether you're using Google Maps, booking a flight, or routing internet packets, Dijkstra is the engine running in the background.

Since 1984, textbooks have taught that its efficiency was hit by a "sorting barrier": to find the shortest path, you have to sort the points by distance, and sorting has a mathematical floor you can't cross. Until now.

A research team from Tsinghua University just published a paper that shatters the 41-year-old record. They proved that Dijkstra is not optimal. By combining the logic of the Bellman-Ford algorithm with a revolutionary "recursive partial ordering" method, they figured out how to find the path without fully sorting the nodes.

The results are a massive shift in theoretical computer science:
- The first deterministic improvement to the Single-Source Shortest Path (SSSP) problem since 1984.
- A new time complexity of $O(m \log^{2/3} n)$, officially beating the long-standing $O(m + n \log n)$ limit.
- On massive sparse graphs (like the web or global logistics), this means finding the best route significantly faster than previously thought possible.

For four decades, the greatest minds in algorithms believed this limit was absolute. Last year, even the legendary Robert Tarjan won an award for proving Dijkstra was "optimally efficient" at sorting distances. Tsinghua's answer? Stop sorting.

The world's most settled problem is suddenly wide open again. If we can break a 40-year-old law in basic graph theory, what other "impossible" speed limits are waiting to be crushed?
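For context on the baseline being beaten: classic Dijkstra with a binary heap, where the heap's extract-min is exactly the "sorting" the barrier refers to. A minimal sketch on a toy graph (this is the old algorithm, not the Tsinghua one):

```python
import heapq

# Classic Dijkstra with a binary heap: the priority queue repeatedly
# extracts the unsettled node with the smallest tentative distance --
# this extract-min step is the "sorting" the barrier argument targets.
def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```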
Db retweeted
Db
Db@dbgray·
Weekend art project. Copic marker filling space with a Hilbert curve on a hand built plotter. 2 steppers & a couple of linear rails. The sound the motors make is awesome. Posting incremental / weekend projects as they come.
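The space-filling path such a plotter draws can be generated with the standard index-to-coordinate conversion for Hilbert curves. A short sketch assuming a 2^order × 2^order grid (the plotter would then scale these points to pen coordinates):

```python
def hilbert_points(order):
    """Vertices of the Hilbert curve filling a 2**order x 2**order grid,
    in traversal order -- the path a pen plotter would draw.
    Uses the standard distance-to-coordinate (d2xy) conversion."""
    n = 2 ** order
    pts = []
    for d in range(n * n):
        x = y = 0
        t, s = d, 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                 # rotate this quadrant if needed
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        pts.append((x, y))
    return pts

print(hilbert_points(1))  # [(0, 0), (0, 1), (1, 1), (1, 0)]
```

Every consecutive pair of points is one grid step apart, which is what makes the curve pleasant to draw with a marker: the pen never lifts.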
Db retweeted
Antonio Lupetti
Antonio Lupetti@antoniolupetti·
Embeddings power every modern LLM. But what do they actually learn? This Berkeley (BAIR) paper is one of the clearest reads on how AI systems learn and why embeddings really work. bair.berkeley.edu/blog/2025/09/0…
Db retweeted
Mehrdad Farajtabar
Mehrdad Farajtabar@MFarajtabar·
Continual Learning remains one of the most challenging "holy grails" of AI. Most discussions focus on catastrophic forgetting: models lose what they previously learned. But there is another equally important failure mode: over long continual training, neural networks can also lose their plasticity, i.e., their ability to learn new things weakens over time.

In our ICLR 2026 work with colleagues at @Apple and @ETH, we study this phenomenon, known as Loss of Plasticity (LoP), from a geometric perspective. We show that LoP can arise when gradient dynamics become trapped in invariant manifolds of parameter space. In particular, we analyze two types of traps:

🔴 Frozen units: units saturate, gradients vanish, and they become effectively silent to backpropagation.
🔵 Cloned units: units become redundant, receive matching forward and backward signals, and move together.

For these structures, the gradient is tangent to the trap. Once standard GD/SGD enters these affine subspaces, it cannot leave them on its own. This means the dynamics can remain sticky even when the data distribution or task changes.

What we find especially interesting is that these traps are not merely optimization bugs. The same feature-learning pressures that help networks learn useful representations for the current task can also push them toward states with less future adaptability. This raises a difficult open question for future work: are neural networks trained with SGD and cross-entropy loss fundamentally the right framework for continual learning?

Please read the full paper for more details: arxiv.org/pdf/2510.00304
Amir Joudaki@AmirJoudaki

Neural nets don’t just forget. Sometimes, after long training, they lose the ability to learn at all. In our #ICLR2026 poster, we model Loss of Plasticity as gradient dynamics trapped in invariant manifolds: 🔴 frozen units, 🔵 cloned units. The video makes the traps visible.

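The "frozen unit" trap is easy to see in a toy example: once a ReLU unit's pre-activation is negative for every input, the ReLU gate zeros the gradient and plain gradient descent can never move the unit again. A minimal numpy sketch (arbitrary data and dimensions; this illustrates the mechanism, not the paper's setup):

```python
import numpy as np

# One hidden ReLU unit pushed deep into saturation: its pre-activation is
# negative on the whole batch, so the ReLU derivative gates every gradient
# to zero -- the unit sits in an invariant set that SGD cannot leave.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))            # a batch of inputs
w = np.array([-5.0, -5.0, -5.0])        # the unit's incoming weights
b = -100.0                              # bias deep in the saturated regime

pre = X @ w + b                         # pre-activations: all negative here
act = np.maximum(pre, 0.0)              # ReLU output: identically zero
gate = (pre > 0).astype(float)          # ReLU derivative: all zeros
upstream = rng.normal(size=64)          # arbitrary gradient from the loss
grad_w = X.T @ (upstream * gate)        # exactly zero: silent to backprop
grad_b = np.sum(upstream * gate)        # also zero -- the unit is trapped
print(act.max(), np.abs(grad_w).max())  # 0.0 0.0
```

No matter how the task or upstream loss changes, `grad_w` stays zero, which is the "sticky even when the data distribution changes" behavior described above.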
Db retweeted
Lukas Ziegler
Lukas Ziegler@lukas_m_ziegler·
Open-source magnetic tactile sensor for $5! 🧲

Researchers introduced a magnetic tactile sensor that's low-cost and easy to fabricate, democratizing tactile sensing for robotics.

Operating in unstructured environments like homes and offices requires robots to sense forces during physical interaction. Yet the lack of a versatile, accessible tactile sensor has led to fragmented solutions and often force-unaware, sensorless approaches.

Building an eFlesh sensor requires four components: a hobbyist 3D printer, off-the-shelf magnets (less than $5), a CAD model, and a magnetometer circuit board. The sensor is 3D printed with magnets embedded in the middle layer. Depending on the chosen mechanical properties, the magnets displace in response to contact forces, which are measured by a magnetometer underneath.

An open-source design tool converts simple OBJ/STL files into 3D-printable STLs. This enables application-specific sensors for robot hands, grippers, quadruped feet, and more.

Slip detection generalizes to unseen objects with 95% accuracy. Visual-tactile control policies improve manipulation by 40% over vision-only baselines, achieving 90% success on precise tasks like plug insertion and credit card swiping.

All design files, code, trained models, and conversion tools are openly available. Project page: e-flesh.com

♻️ Join the weekly robotics newsletter, and never miss any news → ziegler.substack.com
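The sensing principle described above — contact force displaces the magnets, shifting the field at the magnetometer, and a calibration maps field readings back to force — can be sketched with a simple least-squares fit. All data here is synthetic and the linear model is a stand-in assumption, not eFlesh's actual calibration (see the project page for the real code):

```python
import numpy as np

# Synthetic stand-in for eFlesh-style calibration: assume the contact
# force is (locally) a linear function of the 3-axis magnetometer reading,
# and recover that mapping from noisy "measurements" via least squares.
rng = np.random.default_rng(0)
true_map = np.array([0.8, -1.2, 0.5])            # unknown field->force relation
B = rng.normal(size=(200, 3))                    # magnetometer readings (Bx, By, Bz)
F = B @ true_map + 0.01 * rng.normal(size=200)   # "measured" contact forces

coef, *_ = np.linalg.lstsq(B, F, rcond=None)     # fit the calibration
print(np.round(coef, 2))                          # close to true_map
```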
Db
Db@dbgray·
Anyone working on a rotating detonation engine (RDE) plasma hybrid?
Db retweeted
Amit Shekhar
Amit Shekhar@amitiitbhu·
RoPE is proof that sometimes the right mathematical abstraction can solve an engineering problem more elegantly than any learned parameter can. No training needed. No lookup tables. Just rotation matrices and the dot product takes care of the rest.
Amit Shekhar@amitiitbhu

x.com/i/article/2047…

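The "just rotation matrices" claim is easy to demonstrate: rotating query and key vectors by position-dependent angles makes their dot product depend only on the relative offset between positions. A minimal numpy sketch (half-split dimension pairing; real implementations vary in how dimensions are paired):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotary position embedding: rotate pairs of dimensions of x by
    angle pos * base**(-2i/d). No learned parameters, no lookup table."""
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-np.arange(half) * 2.0 / d)   # per-pair frequencies
    ang = pos * freqs
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[..., :half], x[..., half:]          # pair dim i with dim i+half
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])

rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)

# The attention score depends only on the *relative* offset (m - n):
s1 = rope(q, 5) @ rope(k, 2)       # positions 5 and 2, offset 3
s2 = rope(q, 103) @ rope(k, 100)   # positions 103 and 100, offset 3
print(np.isclose(s1, s2))  # True
```

The dot product "takes care of the rest" because each rotated pair contributes the real part of `q_i * conj(k_i) * exp(i*(m-n)*freq_i)`: the absolute positions cancel.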
Db retweeted
Tim 'mithro' Ansell
Tim 'mithro' Ansell@mithro·
You can make your own silicon with wafer.space too! Our second shuttle run is open at buy.wafer.space with a new $4 USD per die option!
fpga kian@splinedrive

First silicon just arrived. These dies are from the first wafer of my GF180MCU based Linux SoC KianV, built with a fully open source ASIC flow. This chip was part of the wafer.space GF180MCU run and hardware validation comes next. Big thanks to Leo Moser for help with the ASIC flow and to @mithro and @evezor for their guidance along the way. The picture shows the first dies. More bring up updates soon.

Db
Db@dbgray·
youtu.be/VVQJak_TkTI?si… You can also use sunglasses to "see" the strain in a transparent material like ice. There was a cool Exploratorium exhibit with a cold plate where you could watch ice crystals grow in real time and see where the strain and boundaries were in the crystals.
Carlos De la Guardia
Carlos De la Guardia@dela3499·
Observation’s seldom simple, for reality doesn’t provide finished images free of charge, but instead a roiling chaos of raw photons which it’s up to *you* to exploit somehow, like with a finely-tuned machinery of sensors, processors, and filters (like these polarized sunglasses).
Db
Db@dbgray·
After discovering the "Chisanbop" technique a while back, I built a machine-vision tool to help you learn to use your fingers like an abacus. Recently found a great documentary on the history of the technique: youtube.com/watch?v=Rsaf4n…
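For readers unfamiliar with the technique: Chisanbop counts to 99 on two hands, with the right hand as units (thumb = 5, each finger = 1) and the left hand as tens (thumb = 50, each finger = 10). A tiny sketch of that encoding (the machine-vision tool itself is not shown here):

```python
def chisanbop(n):
    """Return (left_hand, right_hand) finger positions for 0 <= n <= 99
    under the Chisanbop convention. Each hand is (thumb_down, fingers_down):
    right hand counts units (thumb = 5, each finger = 1), left hand counts
    tens (thumb = 50, each finger = 10)."""
    assert 0 <= n <= 99
    tens, units = divmod(n, 10)

    def hand(v):  # encode a digit 0..9 on one hand
        return (v >= 5, v % 5)

    return hand(tens), hand(units)

# 73 -> left hand: thumb + 2 fingers (50 + 20), right hand: 3 fingers (3)
print(chisanbop(73))  # ((True, 2), (False, 3))
```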
Aakash Gupta
Aakash Gupta@aakashgupta·
Apple's newest patent turns your AirPods into an EEG device that also reads muscle movement, eye movement, and heart rate.

Neuralink has performed four human brain implants. Apple has hundreds of millions of AirPods in active use.

Steven Hotelling is on the inventor list for patent US20230225659A1. He's the engineer who productized multi-touch for the original iPhone. Apple puts Hotelling on things it plans to ship.

The design packs 17 electrodes around a single ear tip. Ear-EEG has existed in academic research since 2011, and startups like NextSense (spun out of Alphabet X), Neurable, and IDUN already ship consumer ear-EEG products today. Apple's innovation is dynamic electrode selection.

Here's the constraint that killed ear-EEG as a consumer product for 15 years. Every human ear canal is shaped differently. A fixed electrode placement that reads clean EEG from one person reads noise from the next. Your own ear canal changes shape across a day based on temperature, jaw position, and how the bud seated when you put it in. Every prior ear-EEG product required either custom molds or tolerance for garbage data.

Apple's patent uses more electrodes than are ever needed simultaneously and runs an AI model that scores each one in real time on impedance, noise level, and skin contact quality. It picks the best subset for this person, in this ear, right now. Reference and active electrodes get reassigned on the fly. A weighted algorithm combines the surviving signals into one optimized waveform. Competitors tried to solve this with better electrodes. Apple solved it with redundancy and silicon.

The roadmap is already visible. AirPods Pro 3 shipped a photoplethysmograph in the ear tip for heart rate. In November 2025, Apple Research published PARS, a self-supervised model that learns EEG patterns from unlabeled data and sidesteps the regulatory bottleneck of annotated clinical datasets. The PPG sensor is the dress rehearsal. The EEG sensor is the feature.

Sleep staging. Seizure detection. Stress classification. Focus state. The clinical applications get through the FDA. The interface applications change computing. Neuralink needs a surgeon. Apple needs a firmware update.
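The select-then-blend idea described above — score every electrode on contact quality, keep the best subset, combine the survivors with quality-proportional weights — can be sketched in a few lines. The function name and scoring formula are invented for illustration; this is not Apple's algorithm:

```python
import numpy as np

# Hypothetical sketch of dynamic electrode selection: rank electrodes by a
# quality score (here, inverse of impedance times noise), keep the top-k,
# and blend their signals with weights proportional to that score.
def select_and_combine(signals, impedances, k=4):
    """signals: (n_electrodes, n_samples); impedances: (n_electrodes,).
    Lower impedance and lower noise mean a higher quality score."""
    noise = signals.std(axis=1)                    # crude per-electrode noise proxy
    quality = 1.0 / (impedances * noise + 1e-9)    # higher is better
    best = np.argsort(quality)[-k:]                # indices of the top-k electrodes
    weights = quality[best] / quality[best].sum()  # normalize to sum to 1
    return weights @ signals[best]                 # one combined waveform

rng = np.random.default_rng(0)
signals = rng.normal(size=(17, 256))       # 17 electrodes, as in the patent
impedances = rng.uniform(1.0, 10.0, 17)    # per-electrode contact impedance
combined = select_and_combine(signals, impedances, k=4)
print(combined.shape)  # (256,)
```

Because selection reruns on fresh readings, the chosen subset can change as the ear tip shifts, which is the "reassigned on the fly" behavior the patent describes.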
Db retweeted
elvis
elvis@omarsar0·
Nice paper combining the strengths of Skills and RAG.

Most RAG systems retrieve on every query, whether the model needs help or not. This is wasteful when the model already knows the answer, and often too late when it does not.

New research introduces Skill-RAG, a failure-state-aware retrieval system. It uses hidden-state probing to detect when an LLM is approaching a knowledge failure, then routes the query to a specialized retrieval strategy matched to the gap. Evaluated on HotpotQA, Natural Questions, and TriviaQA, the approach improves over uniform RAG baselines on both efficiency and accuracy.

Why does it matter? RAG is moving from a single monolithic pipeline to a suite of skills an agent selects between. Knowing when to retrieve and what kind of retrieval to run will matter more than raw retriever quality as agents take on multi-step reasoning, where a single bad lookup derails the whole chain.

Paper: arxiv.org/abs/2604.15771

Learn to build effective AI agents in our academy: academy.dair.ai
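The routing idea — a probe on the model's hidden state estimates the chance of a knowledge failure, and retrieval runs only when that chance is high — can be sketched as a toy sigmoid probe plus a similarity-based skill picker. The probe weights, skill names, and prototypes below are invented toy values, not the paper's trained components:

```python
import numpy as np

# Toy failure-state-aware router: a linear probe on the hidden state gives a
# failure probability; if it is low, answer directly, otherwise pick the
# retrieval "skill" whose prototype vector best matches the hidden state.
def route(hidden, probe_w, probe_b, skills, threshold=0.5):
    p_fail = 1.0 / (1.0 + np.exp(-(probe_w @ hidden + probe_b)))  # sigmoid probe
    if p_fail < threshold:
        return "answer_directly"                  # model likely knows this
    sims = {name: proto @ hidden for name, proto in skills.items()}
    return max(sims, key=sims.get)                # best-matching retrieval skill

probe_w, probe_b = np.array([1.0, -1.0, 0.5]), 0.0
skills = {
    "dense_retrieval": np.array([1.0, 0.0, 0.0]),
    "multi_hop": np.array([0.0, 1.0, 0.0]),
}
print(route(np.array([2.0, 0.1, 0.0]), probe_w, probe_b, skills))   # dense_retrieval
print(route(np.array([-3.0, 0.0, 0.0]), probe_w, probe_b, skills))  # answer_directly
```

This captures the efficiency argument: queries the model can already answer skip retrieval entirely, and the rest are matched to a strategy rather than fed to one uniform pipeline.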