Tim Davison ᯅ

1.7K posts

@timd_ca

Scientist • Apple Design Awards Finalist • Graphics • visionOS

Calgary, Canada · Joined September 2008
1.2K Following · 4.8K Followers
Pinned Tweet
Tim Davison ᯅ @timd_ca
Check it out! I'm proud to unveil CellWalk 2 for Apple Vision Pro. Bring beautiful and immersive biology learning into your space.
Jacob Bartlett @jacobtechtavern
I put SwiftUI vs UIKit scroll performance to the ultimate test: an infinite scrolling feed full of interactive 90s GIFs. The results? Well, see for yourself...
Jacob Bartlett @jacobtechtavern
WTF are Protocol Witness Tables in Swift?

Protocols allow developers to add polymorphism to types through composition, even to value types like structs or enums. Protocol methods are dispatched via Protocol Witness Tables. The mechanism is the same as virtual tables: protocol-conforming types carry metadata (stored in an existential container), which includes a pointer to their witness table, itself a table of function pointers. When executing a function on a protocol type, Swift inspects the existential container, looks up the witness table, then dispatches to the memory address of the function to execute.

It's not necessarily all indirection and jumps, though. Witness-table dispatch happens only when the type you're dispatching to is an abstract protocol type. If you specify the concrete type of something conforming to the protocol, then the specific implementation of the code is known at compile time, and can be dispatched statically.

Then why use abstract types? Anything mockable, for instance. We often use abstract protocol declarations when performing dependency injection: we specify the protocol (the interface our dependency conforms to) without a concrete type, injecting an implementation at runtime. Other times, we might have a Collection containing various protocol-conforming objects we want to iterate over. In these cases, method dispatch is via the witness table.

The term "witness table" is borrowed from constructive logic, where proofs serve as witnesses for propositions. In my opinion, though, this kinda-sorta feels like a post-hoc justification. They already used the term "virtual tables" for dynamic dispatch with subclasses. I reckon our boy Lattner just needed a different phrase to distinguish the concept.

Get the full deep-dive here - Method Dispatch in Swift: The Complete Guide 🚀 blog.jacobstechtavern.com/p/swift-method…
[media attached]
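The static-vs-dynamic dispatch distinction above can be sketched in a few lines (a minimal illustration; the `Shape` protocol and its conforming types are invented for the example):

```swift
// Minimal illustration of static vs. witness-table dispatch.
protocol Shape {
    func area() -> Double
}

struct Circle: Shape {
    let r: Double
    func area() -> Double { Double.pi * r * r }
}

struct Square: Shape {
    let side: Double
    func area() -> Double { side * side }
}

// Generic over a concrete conforming type: the compiler can specialize
// this, so area() can be dispatched statically.
func staticArea<S: Shape>(_ shape: S) -> Double {
    shape.area()
}

// Existential parameter: the value is boxed in an existential container
// and area() is looked up in the protocol witness table at runtime.
func dynamicArea(_ shape: any Shape) -> Double {
    shape.area()
}

// Concrete type known at the call site.
let circleArea = staticArea(Circle(r: 1))

// A heterogeneous collection forces the existential (witness-table) path.
let shapes: [any Shape] = [Circle(r: 1), Square(side: 2)]
let total = shapes.map(dynamicArea).reduce(0, +)
```

In practice the compiler may still devirtualize the existential path when it can prove the concrete type, but the `any Shape` spelling is what opts you into witness-table dispatch semantically.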
Tim Davison ᯅ @timd_ca
WIP: Trying to do something more interesting with the disordered regions of synaptophysin (green) in the SYP-VAMP2 complex. I have this bug at the moment, a little nightmarish, but cool looking. The first image is the broken experiment; the second image is the disordered prediction from AlphaFold. It's disordered, so we have flexibility with what we do with the tails. I'll probably just run a little coarse-grained simulation when you get close... they'll be a flailing blur.
[two images attached]
Tim Davison ᯅ @timd_ca
@aakashgupta I wonder what the upper limit is for the GPU die and memory interface. Beyond a potential M5 Ultra, could there be a datacenter version of this?
Aakash Gupta @aakashgupta
Apple just told you laptops are now AI inference machines and nobody's repricing what that means.

The "4x faster AI performance vs M4" headline is burying the architectural story. M5 Pro and M5 Max use a new Fusion Architecture that connects two dies into a single SoC. Apple moved from efficiency cores to "super cores" and "performance cores." They put Neural Accelerators inside each GPU core instead of keeping them separate. This is Apple designing silicon around one assumption: the primary workload for a pro laptop in 2026 is running LLMs locally.

The math tells you how serious they are. M5 Max: 128GB unified memory, 614GB/s bandwidth, 40-core GPU with neural accelerators baked into every core. That bandwidth number matters because local LLM inference is memory-bandwidth-bound. At 614GB/s, you can run 70B-parameter models at usable token speeds on a laptop. No cloud API calls. No latency. No per-token pricing.

Compare that to the M4 Max from 14 months ago. Same 128GB ceiling, but the architecture wasn't optimized for inference throughput. Apple doubled down on the constraint that actually matters for on-device AI, which is getting data to the compute units fast enough.

The pricing tells the second story. M5 Pro 14-inch starts at $2,199, up from $1,999 for M4 Pro. M5 Max 16-inch tops out at $7,349. Apple raised the floor and kept the ceiling high because they know the buyer profile is shifting. Creatives and developers aren't buying these for Final Cut renders anymore. They're buying them to run Llama, Mistral, and whatever ships next quarter without touching a cloud provider.

And here's what makes the timing fascinating. Apple confirmed the M6 MacBook Pro gets an OLED display, touchscreen, and full redesign. That means M5 Pro and M5 Max are the last generation of the current industrial design. Apple is shipping the AI-optimized silicon first, saving the hardware redesign for later. Silicon leads, form factor follows.

If you're building on-device AI workflows, this is the machine Apple built for you. If you're waiting for the prettier version, you're telling Apple you care more about the screen than the inference engine underneath it.
Marques Brownlee @MKBHD

Finally, new M5 Pro and M5 Max MacBook Pros: apple.com/newsroom/2026/…

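The "usable token speeds at 614GB/s" claim follows from simple arithmetic: decode is memory-bandwidth-bound, so each generated token must stream the full model weights once. A rough sketch (the quantization level and the assumption of zero overhead are mine, not from the post):

```swift
// Back-of-envelope decode-speed estimate for bandwidth-bound LLM
// inference: tokens/s ≈ memory bandwidth / model size in memory.
func decodeTokensPerSecond(bandwidthGBps: Double,
                           paramsBillions: Double,
                           bytesPerParam: Double) -> Double {
    let modelGB = paramsBillions * bytesPerParam
    return bandwidthGBps / modelGB
}

// 70B parameters at 4-bit quantization (~0.5 bytes/param, an assumed
// figure) on the quoted 614 GB/s:
let tps = decodeTokensPerSecond(bandwidthGBps: 614,
                                paramsBillions: 70,
                                bytesPerParam: 0.5)
// Roughly 17-18 tokens/s, before any real-world overhead.
```

Real throughput lands below this ceiling (KV-cache traffic, scheduling, thermals), but the model explains why bandwidth, not FLOPS, is the number to watch for local inference.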
Tim Davison ᯅ @timd_ca
@SebAaltonen Okay. Was just reading more. They fused two dies. Supposedly the GPU and CPU. Which is awesome.. imagine what the datacenter version could look like for AI workloads.
Tim Davison ᯅ @timd_ca
@SebAaltonen I'm wondering if they did a mid-generation design change. Or the new CPU wasn't ready for M5, but the M5 GPU was. M5 Pro and Max were super delayed. Can't wait for benchmarks, and to get our engine running on it.
Sebastian Aaltonen @SebAaltonen
I am confused about the Apple M5 Max CPU core naming. The new super cores are apparently renamed performance cores? But they have new performance cores as well: same name as the M4's performance cores, but apparently a different core?

M4 Max = 12P + 4E
M5 Max = 6S + 12P

If I understood correctly, the new P core sits between the old P and E cores in performance. A change a bit similar to what Intel did recently when they made E cores much faster, but not as fast as AMD's C cores (compact), which differ from full cores only in cache size.

Regardless, M5 Max is apparently 15% faster in MT workloads. But this is 16 -> 18 cores (+12.5%), so if your workload scales to 8-12 cores, the old design is likely slightly faster, since the new one has only 6 highest-performance cores. The new one should scale better beyond 12 cores, though, since the old E cores were very slow and the new design has more total cores too.

Single-core perf increase seems marginal, also proving that the new S cores are just renamed P cores. Apple also didn't compare M4 Max -> M5 Max single-core perf in their marketing material. They only compared MT perf, further proving this point.
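The scaling argument above is just arithmetic on the quoted figures. A quick sketch (the core counts and the 15% MT figure come from the post; everything derived is approximate):

```swift
// Implied per-core throughput change from the quoted numbers:
// M4 Max has 16 cores, M5 Max has 18, and the MT uplift is ~15%.
let m4Cores = 16.0
let m5Cores = 18.0
let mtSpeedup = 1.15

let coreCountGain = m5Cores / m4Cores        // 1.125, i.e. +12.5% cores
let perCoreGain = mtSpeedup / coreCountGain  // ~1.02, i.e. ~2% per core

// Most of the MT win comes from the extra cores, not per-core speed,
// consistent with the read that single-core gains are marginal.
```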
Joseph Simpson @vrhermit
This is really cool: Today I learned how to combine defaultWindowPlacement and onGeometryChange3D to build Synced Window Sets. We close the secondary windows while moving the main window, then reopen them when movement stops. stepinto.vision/example-code/h…
Tim Davison ᯅ @timd_ca
WIP: modelled the synaptophysin-VAMP2 complex for our synaptic vesicle. This complex is involved in vesicle fusion. SYP forms a hexamer with 12 copies of VAMP2, a key part of the fusion machinery. Each hexamer pre-clusters 12 VAMP2 molecules so they're ready to assemble into SNARE complexes when the vesicle docks.

For the VAMP2 cytoplasmic SNARE domains I did some randomized rigid-body transforms, and took a bit of license to spread out the dangly bits (disordered regions from the AlphaFold prediction). The synaptophysin disordered regions have misleading C6 symmetry from the AlphaFold prediction, so I'll do something with those too.

Almost there. I need to double-check principal axes and offsets, tweak some things, add a membrane.
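A randomized rigid-body transform like the ones described can be sketched as follows (a minimal, self-contained illustration; the type and function names are invented, and the axis-angle sampling is simple rather than uniform over SO(3)):

```swift
import Foundation

// Sketch of a randomized rigid-body transform (rotation + translation),
// the kind of operation used to spread out flexible protein domains.
// The rotation is built from a random axis and angle via Rodrigues'
// formula.
struct RigidTransform {
    var rotation: [[Double]]   // 3x3 rotation matrix, row-major
    var translation: [Double]  // 3-vector

    func apply(_ p: [Double]) -> [Double] {
        (0..<3).map { i in
            rotation[i][0] * p[0] + rotation[i][1] * p[1]
                + rotation[i][2] * p[2] + translation[i]
        }
    }
}

func randomRigidTransform(maxShift: Double) -> RigidTransform {
    // Random unit axis.
    var axis = (0..<3).map { _ in Double.random(in: -1...1) }
    let len = sqrt(axis.reduce(0) { $0 + $1 * $1 })
    axis = axis.map { $0 / max(len, 1e-12) }

    // Rodrigues' rotation matrix for a random angle about that axis.
    let theta = Double.random(in: 0 ..< 2 * Double.pi)
    let (c, s) = (cos(theta), sin(theta))
    let t = 1 - c
    let (x, y, z) = (axis[0], axis[1], axis[2])
    let rotation = [
        [t*x*x + c,   t*x*y - s*z, t*x*z + s*y],
        [t*x*y + s*z, t*y*y + c,   t*y*z - s*x],
        [t*x*z - s*y, t*y*z + s*x, t*z*z + c]
    ]
    let translation = (0..<3).map { _ in Double.random(in: -maxShift...maxShift) }
    return RigidTransform(rotation: rotation, translation: translation)
}
```

Because the rotation matrix is orthonormal, applying the transform preserves distances, which is the defining property of a rigid-body move.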
Tim Davison ᯅ retweeted
Gregory Wieber @dreamwieber
Alright, friends – this is my most ambitious YouTube to date. It was a ton of work, I hope you'll watch, subscribe, and share! youtube.com/watch?v=hxHkNT…
Tim Davison ᯅ @timd_ca
@SebAaltonen A large prompt also confuses the agent. It has so much weight that ambiguity or contradictions really confuse it. It's almost a programming language in how closely the model will follow it.
Sebastian Aaltonen @SebAaltonen
It's tempting to give the LLM a MASSIVE system prompt with all the information it needs to perform all the potential task API calls. This way you don't need to think about it and you ensure there's no extra roundtrips for the LLM to find the information/APIs it needs. The problem is that this bloats the token count significantly. LLM calls (to server) are stateless, you need to send the system prompt (and history) again for every tool call, so that the LLM knows what it was doing and why. If the system prompt is thousands of lines, those lines are resent for every tool call. Let's discuss the alternatives for a massive system prompt...
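The resend cost described above is easy to quantify. A sketch with made-up numbers (every figure here is illustrative, not from the thread):

```swift
// Because LLM API calls are stateless, the system prompt and the
// accumulated history are resent on every tool-call round trip, so
// total prompt tokens grow roughly quadratically with the number of
// turns (linear resends of a linearly growing context).
func promptTokensSent(systemTokens: Int,
                      historyTokensPerTurn: Int,
                      toolCalls: Int) -> Int {
    (1...toolCalls).reduce(0) { total, turn in
        total + systemTokens + historyTokensPerTurn * turn
    }
}

// A 5,000-token system prompt over 10 tool calls, with ~300 tokens of
// new history added per turn:
let total = promptTokensSent(systemTokens: 5_000,
                             historyTokensPerTurn: 300,
                             toolCalls: 10)
// 50,000 of those tokens are the system prompt being resent alone.
```

Prompt caching on the provider side can blunt the cost, but the token count (and latency) still scales with what you resend, which is the argument for trimming the system prompt and fetching details on demand.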
Tim Davison ᯅ @timd_ca
No membrane. Need VAMP complex, etc. Principal axis for many proteins isn't set yet, nor transmembrane offset.
Tim Davison ᯅ @timd_ca
Lazy morning building a synaptic vesicle from scratch. Super WIP and not correct at the moment. Much to do. But it has the coolest guys imho:

- Amber: Vesicular glutamate transporter (VGluT)
- Red: V-ATPase

Glutamate charging works like this: the ATPase uses ATP to create a proton gradient, and VGluT uses the gradient to exchange protons inside the vesicle for glutamate outside. It's really quite cool how they work together.

#ScreenshotSaturday #ScreenshotSunday
[image attached]
Tim Davison ᯅ @timd_ca
@dreamwieber It's so crazy.. right? I have been working a lot with this dataset, but the tools are complex. I'd love to bring aspects of it to a wider audience. CW v10? 🤔 There's a fully mapped fly brain too, and an even more ambitious successor from Google and collaborators underway.
Gregory Wieber @dreamwieber
In my own internal benchmark, Opus 4.6 smoked Codex on the first experiment.
Heinrich @arscontexta
how my @openclaw feels waking up each session