

George Howell
1.2K posts

@ghow01
Artificial Intelligence and Defense: Research & Analysis







Quarterhorse Mk 2.1 has received authorization from the Federal Aviation Administration to exceed Mach 1 speeds. Hermeus is the second company ever authorized by the FAA to fly a civil aircraft supersonic and the first to do so with an unmanned system. We’re grateful for a strong partnership with the agency and proud to be a part of aviation history.





Omen is not just a concept. Anduril started building Omen in 2019, flew the first Omen demonstrators in 2020, and has logged hundreds of flight hours across 30+ prototypes. Today, Omen is the first tail-sitter airplane with a mass-production contract.


"America gets a much better McKinsey, and China gets a much better Foxconn." Dan Wang explains how each country will use AI: The U.S. will automate services, while China will automate manufacturing. China will use AI "to produce a lot more drones, munitions, and ships as well. The U.S. simply doesn’t have the training data or the process knowledge in place to get much better at manufacturing."




SpikingBrain’s technical report reveals a new family of brain-inspired LLMs. Learn how its hybrid linear attention, conversion-based training, and spiking neurons deliver over 100x speedups and unprecedented efficiency on non-NVIDIA hardware: 100x faster first token at 4M tokens, with training on about 2% of the usual data.

Transformers slow down as sequences grow, because each new token checks many earlier tokens and the memory cache keeps growing. SpikingBrain mixes two cheaper attentions: linear attention keeps a small running summary, while sliding-window attention reads only a short recent slice. The 7B model alternates these layers for near-linear cost; the 76B model adds parallel branches and a few full-attention layers. Feed-forward blocks use Mixture of Experts: a router picks a small set of experts per token, so most weights stay idle.

The key idea is adaptive-threshold spiking: activations become integer counts during training, then expand into sparse events at inference. A light conversion pipeline remaps a standard checkpoint, extends context to 128k, then finishes with supervised fine-tuning. Everything runs on MetaX C550 GPUs; the 7B model keeps memory near constant as inputs grow, and accuracy stays close to baselines.

----
Paper: arxiv.org/abs/2509.05276
Paper Title: "SpikingBrain Technical Report: Spiking Brain-inspired Large Models"
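The two cheap attention styles described above can be sketched in a few lines. This is an illustrative toy, not SpikingBrain's actual implementation: the feature map (`elu+1`-style), dimensions, and window size are assumptions chosen for clarity. The point it shows is that linear attention's running summary has a fixed size no matter how long the sequence gets, while sliding-window attention only ever touches the last few tokens.

```python
import numpy as np

def phi(x):
    # Positive feature map commonly used in linear attention (an assumption here).
    return np.where(x > 0, x + 1.0, np.exp(np.minimum(x, 0.0)))

def linear_attention_step(state, norm, q, k, v):
    """One decoding step. `state` (d x d) and `norm` (d,) are the small
    running summary; their size stays constant as the sequence grows."""
    qf, kf = phi(q), phi(k)
    state = state + np.outer(kf, v)          # accumulate k v^T into the summary
    norm = norm + kf                          # accumulate keys for normalization
    out = (qf @ state) / (qf @ norm + 1e-6)
    return out, state, norm

def sliding_window_attention(q, K, V, window=4):
    """Softmax attention restricted to the last `window` tokens only."""
    Kw, Vw = K[-window:], V[-window:]
    scores = Kw @ q / np.sqrt(q.shape[0])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ Vw

rng = np.random.default_rng(0)
d, T = 8, 64
state, norm = np.zeros((d, d)), np.zeros(d)
K, V = [], []
for _ in range(T):
    q, k, v = rng.normal(size=(3, d))
    out_lin, state, norm = linear_attention_step(state, norm, q, k, v)
    K.append(k); V.append(v)
    out_win = sliding_window_attention(q, np.array(K), np.array(V))

# The running summary never grows with T, unlike a full KV cache.
print(state.shape, out_lin.shape, out_win.shape)
```

A model like the one the post describes would alternate layers of these two forms (plus a few full-attention layers in the larger variant) so that per-token cost and memory stay roughly flat as context length grows.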