Pinned Tweet
Tim Davison ᯅ
1.7K posts

Tim Davison ᯅ
@timd_ca
Scientist • Apple Design Awards Finalist • Graphics • visionOS
Calgary, Canada · Joined September 2008
1.2K Following · 4.8K Followers

@jacobtechtavern I like the video preview! Nice idea for a blog post

WTF are Protocol Witness Tables in Swift?

Protocols allow developers to add polymorphism to types through composition, even to value types like structs or enums. Protocol methods are dispatched via Protocol Witness Tables.
The mechanism for these is the same as virtual tables: Protocol-conforming types contain metadata (stored in an existential container*), which includes a pointer to their witness table, which is itself a table of function pointers.
When executing a function on a protocol type, Swift inspects the existential container, looks up the witness table, then dispatches to the memory address of the function to execute.
It’s not necessarily all indirection and jumps though.
The witness table dispatch happens if the type you’re dispatching to is an abstract protocol type. If you specify the concrete type of something conforming to the protocol, then the specific implementation of the code is known at compile-time, and can be dispatched statically.
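The difference is visible in ordinary code. A minimal sketch (with a hypothetical `Greeter` protocol) of the two dispatch paths described above:

```swift
protocol Greeter {
    func greet() -> String
}

struct EnglishGreeter: Greeter {
    func greet() -> String { "Hello" }
}

// Existential (abstract) type: the concrete type isn't known statically,
// so greet() is dispatched through the protocol witness table.
let abstract: any Greeter = EnglishGreeter()
_ = abstract.greet()

// Concrete type: the implementation is known at compile time,
// so the call can be dispatched statically (and potentially inlined).
let concrete = EnglishGreeter()
_ = concrete.greet()
```

Generics get a third path: specialized generic code can also be statically dispatched, since the compiler stamps out a version per concrete type.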
Then why use abstract types?
Anything mockable, for instance. We often use abstract protocol declarations when performing dependency injection: we specify the protocol (the interface our dependency conforms to) rather than a concrete type, injecting an implementation at runtime.
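A minimal sketch of that pattern (the `NetworkClient` protocol and type names here are hypothetical, for illustration only):

```swift
import Foundation

protocol NetworkClient {
    func fetch(_ path: String) async throws -> Data
}

struct LiveClient: NetworkClient {
    func fetch(_ path: String) async throws -> Data {
        // A real network call in production.
        let url = URL(string: "https://api.example.com" + path)!
        return try await URLSession.shared.data(from: url).0
    }
}

struct MockClient: NetworkClient {
    // Canned response for tests — no network needed.
    func fetch(_ path: String) async throws -> Data { Data() }
}

struct Repository {
    // Abstract protocol type: calls on `client` go via the witness table.
    let client: any NetworkClient
}

let production = Repository(client: LiveClient())
let underTest  = Repository(client: MockClient())
```

Because `Repository` only knows the protocol, the witness table is what lets the same call site reach either implementation at runtime.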
Other times, we might have a Collection containing various protocol-conforming objects we want to iterate over. In these cases, method dispatch is via the witness table.
The term “witness table” is borrowed from constructive logic, where proofs serve as witnesses for propositions. In my opinion, though, this kinda-sorta feels like a post-hoc justification. They already used the term “virtual tables” for dynamic dispatch with subclasses. I reckon our boy Lattner just needed a different phrase to distinguish the concept.
Get the full deep-dive here - Method Dispatch in Swift: The Complete Guide
🚀 blog.jacobstechtavern.com/p/swift-method…



WIP: Trying to do something more interesting with the disordered regions of synaptophysin (green) in the syp-vamp2 complex. I have this bug at the moment, a little nightmarish, but cool looking.
The first image is the broken experiment, the second image is the disordered prediction from AlphaFold. It's disordered, so we have flexibility with what we do with the tails
I'll probably just run a little coarse-grained simulation when you get close.. they'll be a flailing blur.



@aakashgupta I wonder what the upper limit is for the GPU die and memory interface. Beyond a potential M5 Ultra, could there be a datacenter version of this?

Apple just told you laptops are now AI inference machines and nobody’s repricing what that means.
The “4x faster AI performance vs M4” headline is burying the architectural story. M5 Pro and M5 Max use a new Fusion Architecture that connects two dies into a single SoC. Apple moved from efficiency cores to “super cores” and “performance cores.” They put Neural Accelerators inside each GPU core instead of keeping them separate.
This is Apple designing silicon around one assumption: the primary workload for a pro laptop in 2026 is running LLMs locally.
The math tells you how serious they are. M5 Max: 128GB unified memory, 614GB/s bandwidth, 40-core GPU with neural accelerators baked into every core. That bandwidth number matters because local LLM inference is memory-bandwidth-bound. At 614GB/s, you can run 70B parameter models at usable token speeds on a laptop. No cloud API calls. No latency. No per-token pricing.
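The back-of-envelope math behind that claim: if inference is memory-bandwidth-bound, each generated token requires streaming roughly the full set of model weights through the memory bus once. A sketch, assuming a 4-bit quantized 70B model (the quantization level is my assumption, not Apple's claim):

```swift
// Rough tokens/sec ceiling for memory-bandwidth-bound LLM inference.
let bandwidthGBs  = 614.0        // M5 Max unified memory bandwidth (GB/s)
let paramCount    = 70e9         // 70B-parameter model
let bytesPerParam = 0.5          // assumed 4-bit quantization

let modelSizeGB   = paramCount * bytesPerParam / 1e9   // ≈ 35 GB
let tokensPerSec  = bandwidthGBs / modelSizeGB          // ≈ 17.5 tok/s upper bound
```

Real-world throughput lands below this ceiling (attention KV-cache reads, scheduling overhead), but it shows why bandwidth, not raw FLOPS, is the number to watch for local inference.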
Compare that to the M4 Max from 14 months ago. Same 128GB ceiling, but the architecture wasn’t optimized for inference throughput. Apple doubled down on the constraint that actually matters for on-device AI, which is getting data to the compute units fast enough.
The pricing tells the second story. M5 Pro 14-inch starts at $2,199, up from $1,999 for M4 Pro. M5 Max 16-inch tops out at $7,349. Apple raised the floor and kept the ceiling high because they know the buyer profile is shifting. Creatives and developers aren’t buying these for Final Cut renders anymore. They’re buying them to run Llama, Mistral, and whatever ships next quarter without touching a cloud provider.
And here’s what makes the timing fascinating. Apple confirmed the M6 MacBook Pro gets an OLED display, touchscreen, and full redesign. That means M5 Pro and M5 Max are the last generation of the current industrial design. Apple is shipping the AI-optimized silicon first, saving the hardware redesign for later. Silicon leads, form factor follows.
If you’re building on-device AI workflows, this is the machine Apple built for you. If you’re waiting for the prettier version, you’re telling Apple you care more about the screen than the inference engine underneath it.
Marques Brownlee@MKBHD
Finally, new M5 Pro and M5 Max Macbook Pros: apple.com/newsroom/2026/…

@SebAaltonen Okay. Was just reading more. They fused two dies. Supposedly the GPU and CPU. Which is awesome.. imagine what the datacenter version could look like for AI workloads.

@SebAaltonen I'm wondering if they did a mid-generation design change. Or the new CPU wasn't ready for M5, but the M5 GPU was. M5 Pro and Max were super delayed.
Can't wait for benchmarks, and to get our engine running on it.

I am confused about the Apple M5 Max CPU core naming. The new super cores are apparently renamed performance cores? But they have new performance cores as well. Same name as the M5 performance cores, but different core apparently?
M4 Max = 12P + 4E
M5 Max = 6S + 12P
If I understood correctly, the new P core sits between the old P and E cores in performance. A somewhat similar change to what Intel did recently when they made E cores much faster, but not as fast as AMD's C (compact) cores, which differ from full cores only in cache size.
Regardless, M5 Max is apparently 15% faster in MT workloads. But this is 16→18 cores (+12.5%), so if your workload scales to 8–12 cores, the old design is likely slightly faster, since the new one has only 6 highest-performance cores. The new design should scale better beyond 12 cores, though, since the old E cores were very slow and there are more total cores. The single-core perf increase seems marginal, which also suggests the new S cores are just renamed P cores. Apple also didn't compare M4 Max → M5 Max single-core perf in their marketing material; they only compared MT perf, further supporting this point.

This is really cool: Today I learned how to combine defaultWindowPlacement and onGeometryChange3D to build Synced Window Sets. We close the secondary windows while moving the main window, then reopen them when movement stops.
stepinto.vision/example-code/h…

WIP: modelled the synaptophysin-VAMP2 complex for our synaptic vesicle. This complex is involved in vesicle fusion.
SYP forms a hexamer with 12 copies of VAMP2, a key part of the fusion machinery. Each hexamer pre-clusters 12 VAMP2 molecules so they're ready to assemble into SNARE complexes when the vesicle docks.
For the VAMP2 cytoplasmic SNARE domains I did some randomized rigid body transforms, and took a bit of license to spread out the dangly bits (disorganized regions from the AlphaFold prediction).
The synaptophysin disordered regions have misleading C6 symmetry from the AlphaFold prediction, so I'll do something with those too.
Almost there. I need to double check principal axes and offsets, tweak some things, add a membrane.
Tim Davison ᯅ retweeted

Alright, friends – this is my most ambitious YouTube to date.
It was a ton of work, I hope you'll watch, subscribe, and share!
youtube.com/watch?v=hxHkNT…

YouTube

@SebAaltonen A large prompt also confuses the agent. It has so much weight that ambiguity or contradictions really confuse it. It's almost a programming language in how closely they will follow it

It's tempting to give the LLM a MASSIVE system prompt with all the information it needs to perform all the potential task API calls. This way you don't need to think about it and you ensure there's no extra roundtrips for the LLM to find the information/APIs it needs. The problem is that this bloats the token count significantly.
LLM calls (to the server) are stateless: you need to send the system prompt (and history) again with every tool call so that the LLM knows what it was doing and why. If the system prompt is thousands of lines, those lines are resent for every tool call.
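A quick back-of-envelope sketch of how this compounds over an agent loop (the token counts here are illustrative assumptions, not measurements):

```swift
// Cumulative tokens sent when the full system prompt + growing history
// is resent on every stateless tool call.
let systemPromptTokens = 8_000   // assumed "massive" system prompt
let avgTurnTokens      = 500     // assumed tool call + result per round trip
let toolCalls          = 20

var historyTokens = 0
var totalSent     = 0
for _ in 0..<toolCalls {
    totalSent += systemPromptTokens + historyTokens  // resent in full each call
    historyTokens += avgTurnTokens
}
// totalSent = 20 × 8,000 + 500 × (0 + 1 + … + 19) = 255,000 tokens,
// of which 160,000 are just the system prompt, sent twenty times over.
```

The system prompt's cost is linear in the number of tool calls, which is why trimming it pays off far more than trimming any single tool result.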
Let's discuss the alternatives for a massive system prompt...

Lazy morning building a synaptic vesicle from scratch. Super WIP and not correct at the moment. Much to do.
But it has the coolest guys imho:
- Amber: Vesicular glutamate transporter (VGluT)
- Red: V-ATPase
Glutamate charging works like this: The ATPase uses ATP to create a proton gradient, and VGluT uses the gradient to exchange protons inside the vesicle for glutamate outside. It's really quite cool how they work together.
#ScreenshotSaturday #screenshotsunday


@dreamwieber Here's a great example:
h01-dot-neuroglancer-demo.appspot.com/#!gs://h01-rel…

@dreamwieber It's so crazy.. right?
I have been working a lot with this dataset, but the tools are complex. I'd love to bring aspects of it to a wider audience. CW v10? 🤔
There's a fully mapped fly brain too, and an even more ambitious successor from Google and collaborators underway.

@Hiteshdotcom @github no, and my builds are stuck resolving dependencies :(

@arscontexta @openclaw You win the internet today, this is too funny

Alright, peeps – big announcement today. Over 15 years ago I released my first app: Polychord for iPad.
Brand new version out today! Tons of improvements. Ton of work. Hope you'll share this post, and then go download!
Polychord@polychordapp
It's been a while, folks! Polychord 3 is now available 🤯


