Joni Pelham 🇬🇧🇱🇹 ✈️ 🚀

4.4K posts


@jonititan

Research Fellow in Flight Data, Christian, aerospace engineer, & husband to @rasapelham Opinions my own Follow != Endorsement

Bedford, UK · Joined January 2015
2.5K Following · 561 Followers
Joni Pelham 🇬🇧🇱🇹 ✈️ 🚀 reposted
Chris Combs (iterative design enjoyer)
In all seriousness this may have been my favorite shot from Artemis II so far
JRaw @JustusRwrt·
@byte_thrasher Have seen them inside of an FAA/EASA certified device that was to be permanently mounted inside an aircraft cabin. So yeah... probably good enough.
avi @byte_thrasher·
have any of you worked with these surface mount connectors for castellated edges? any thoughts on general reliability vs. soldering pins and using header sockets?
[image attached]
Brian Roemmele @BrianRoemmele·
This is absolutely astounding! A printer that prints circuit board traces. I am going all in on this. We will have a 100x increase in production and testing!
Joni Pelham 🇬🇧🇱🇹 ✈️ 🚀
@DrPhiltill Yes, hence my caution regarding heating dusts. But not everything burns. I suspect that 1200K vaporization figure is for vacuum? The ignition temperature is probably much lower, assuming there is some oxygen present.
Kaunas, Lithuania 🇱🇹
Phil Metzger @DrPhiltill·
@jonititan Lunar dust can’t burn because it is already fully oxidized minerals. But dusts like grain dust or metal dusts are explosive!
Phil Metzger @DrPhiltill·
@jonititan It would, but that temperature for lunar dust is about 1200 degrees.
Joni Pelham 🇬🇧🇱🇹 ✈️ 🚀
It will always be one of my fondest memories having been able to facilitate three airships coming to Shortstown. Even if only for the day.
[image attached]
Varun @varun_mathur·
Autoskill: a distributed skill factory | v.2.6.5

We're now applying the same @karpathy autoresearch pattern to an even wilder problem: can a swarm of self-directed autonomous agents invent software?

Our autoresearch network proved that agents sharing discoveries via gossip compound faster than any individual: 67 agents ran 704 ML experiments in 20 hours, rediscovering Kaiming init and RMSNorm from scratch. Our autosearch network applied the same loop to search ranking, evolving NDCG@10 scores across the P2P network. Now we're pointing it at code generation itself.

Every Hyperspace agent runs a continuous skill loop: the same propose → evaluate → keep/revert cycle, but instead of optimizing a training script or ranking model, agents write JavaScript functions from scratch, test them against real tasks, and share working code to the network. It's live, and the code and agent work are rapidly improving: 90 agents have published 1,251 skill invention commits to the AGI repo in the last 24 hours — 795 text chunking skills, 182 cosine similarity, 181 structured diffing, 49 anomaly detection, 36 text normalization, 7 log parsers, 1 entity extractor. Skills run inside a WASM sandbox with zero ambient authority: no filesystem, no network, no system calls.

The compound skill architecture is what makes this different from just sharing code snippets. Skills call other skills: a research skill invokes a text chunker, which invokes a normalizer, which invokes an entity extractor. Recursive execution with full lineage tracking: every skill knows its parent hash, so you can walk the entire evolution tree and see which peer contributed which mutation. An agent in Seoul wraps regex operations in try-catch; an agent in Amsterdam picks that up and combines it with input coercion it discovered independently. The network converges on solutions no individual agent would reach alone.

New agents skip the cold start: replicated skill catalogs deliver the network's best solutions immediately. As @trq212 said, "skills are still underrated". A network of self-coordinating autonomous agents like the one on Hyperspace is starting to evolve and create more of them. With millions of such agents one day, how many high-quality skills would there be?

This is Darwinian natural selection: fully decentralized, sandboxed, and running on every agent in the network right now. Join the world's first agentic general intelligence system (code and links in a follow-up tweet; while optimized for CLI, browser agents participate too):
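The propose → evaluate → keep/revert cycle the tweet describes can be sketched generically. This is a minimal, hypothetical Python sketch (the network itself reportedly evolves JavaScript skills); `evaluate` and `mutate` are stand-ins for whatever artifact an agent is optimizing:

```python
import random

def skill_loop(candidate, evaluate, mutate, rounds=200, seed=0):
    """Minimal propose -> evaluate -> keep/revert loop (hypothetical sketch).

    candidate: the current artifact (skill, script, ranking model, ...)
    evaluate:  scores a candidate; higher is better
    mutate:    proposes a modified copy of a candidate
    """
    rng = random.Random(seed)
    best_score = evaluate(candidate)
    for _ in range(rounds):
        proposal = mutate(candidate, rng)   # propose
        score = evaluate(proposal)          # evaluate
        if score > best_score:              # keep the improvement...
            candidate, best_score = proposal, score
        # ...otherwise revert: the old candidate survives unchanged
    return candidate, best_score

# Toy usage: hill-climb a number toward the maximum of -(x - 3)^2.
best, score = skill_loop(
    0.0,
    evaluate=lambda x: -(x - 3.0) ** 2,
    mutate=lambda x, rng: x + rng.uniform(-1.0, 1.0),
)
```

The loop only ever keeps strict improvements, which is what lets independently mutating peers share results without coordination: any shared candidate that scores higher is kept, anything else is discarded.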
[image attached]
Varun @varun_mathur

Autosearcher: a distributed search engine

We are now insanely experimenting with building a distributed search engine utilizing the same pattern @karpathy introduced with autoresearch: give an agent a metric, a tight propose → run → evaluate → keep/revert loop, and let it iterate.

Our autoresearch network proved this works at scale: 67 autonomous agents ran 704 ML training experiments in 20 hours, rediscovering Kaiming initialization, RMSNorm, and compute-optimal training schedules from scratch through pure experimentation and gossip-based cross-pollination. Agents shared discoveries over GossipSub, and the network compounded insights faster than any individual agent: new agents bootstrapped from the swarm's collective knowledge via CRDT-replicated leaderboards and reached the research frontier in minutes.

Now we're applying the same evolutionary loop to search ranking: every Hyperspace agent runs an autonomous search researcher that proposes ranking mutations, evaluates them against NDCG@10 on real query-passage data, shares improvements with the network, and cross-pollinates with peers.

The architecture is a seven-stage distributed pipeline where every stage runs across the P2P network. Browser agents contribute pages passively, desktop agents crawl and index, GPU nodes run neural reranking. Every user click generates a DPO training pair that improves the ranking model, and gradient gossip distributes those improvements to every agent.

The compound flywheel is what makes this different from centralized search: at 10,000 agents that's 500,000 pages indexed per day; at 1 million agents, 50 million pages per day with 90%+ cache hit rates and sub-50ms latency. This network will get smarter with every query. Code and other links in followup tweet here:
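For reference, the NDCG@10 metric the ranking mutations are evaluated against can be computed as follows. This is the standard textbook formulation (linear-gain DCG variant; some systems use 2^rel − 1 gains), not Hyperspace's actual code:

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for one ranked result list with graded relevance labels.

    relevances: relevance of each result in ranked order (e.g. 0-3).
    Returns DCG@k divided by the ideal DCG@k, so the score lies in [0, 1].
    """
    def dcg(rels):
        # Position i contributes rel_i / log2(i + 2): later hits count less.
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A perfectly ordered list scores exactly 1.0; pushing relevant passages down the ranking lowers the score, which is what a proposed ranking mutation has to improve before it is kept.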

Joni Pelham 🇬🇧🇱🇹 ✈️ 🚀
@CalumDouglas1 They should require actual attempts at engineering sketch-level calculations. But then they did always have "movie magic" stuff bodging things to make it safer. I know they had professionals going around after the teams finished, doing proper welding and things like that.
Calum E. Douglas FRAeS @CalumDouglas1·
Someone needs to redo Scrapheap Challenge, but at a higher engineering level, with something more serious than a rusty trebuchet as the aim, but with one team allowed no computers at all, and the other allowed all modern engineering analysis tools. I would bet that some rather surprising things might happen. Team A could be called the "Luddites", and I nominate myself as their first team member.
[image attached]
Joni Pelham 🇬🇧🇱🇹 ✈️ 🚀 reposted
hackaday @hackaday·
Sliderule Simulator Teaches You How To Do Calculations The Old Fashioned Way ift.tt/DlTXUI7
Rohit @rohit4verse·
graph is the final boss of memory. a skill graph is a network of skill files connected with wikilinks. one of the most interesting articles i've read recently
Heinrich @arscontexta

x.com/i/article/2023…

Joni Pelham 🇬🇧🇱🇹 ✈️ 🚀
It's great but... it's a "rest of the owl" thing. I'm well aware you know, but it's a little disappointing if this was all it took to impress a recruiter. Sort of like the scene from The West Wing: youtu.be/85dKvletfSo?si… What impresses me in an interview is when the candidate can explain the next bits. Rather like a tree, I find it interesting to see what candidates can discuss beyond surface level. Certainly how they react when I find something they don't currently know.
Bedford, England 🇬🇧