Nantte Kivinen
@Nanttearmand
59 posts
Joined November 2022
49 Following · 111 Followers
FR8 @shipfr8
We took over a former technical university. This is Hogwarts in real life. For people who want to work on something too early, too weird, too ambitious.
20 replies · 17 reposts · 117 likes · 18.7K views
Nantte Kivinen @Nanttearmand
The team at Tendrils reminds me daily that elegance still exists, and standards are something we want to stand for. Truly a once-in-a-lifetime opportunity to work with some of the most humble and capable teams I've ever interacted with.
Quoting Nils Cremer @nilscmr:
CPUs suck. We're building a new general-purpose chip that scales to thousands of cores while being more energy-efficient. We're hiring hardware design engineers; consider joining us: tendrils.co/jobs What we do differently ...
0 replies · 2 reposts · 6 likes · 1.8K views
Nils Cremer @nilscmr
CPUs suck. We're building a new general-purpose chip that scales to thousands of cores while being more energy-efficient. We're hiring hardware design engineers; consider joining us: tendrils.co/jobs What we do differently ...
58 replies · 48 reposts · 462 likes · 100.5K views
rasmus @ahtavarasm_us
"you can just do things" article about me! it's in finnish but maybe you can survive hs.fi/urheilu/art-20…
4 replies · 0 reposts · 7 likes · 147 views
Arnie Ramesh @arnie_hacker
@Nanttearmand I think it's like 50K (for the programmable Unitree) 💀 but @pham_blnh probably knows best. I could probably automate it to do the fr8 dishes though so imo worth it haha
2 replies · 0 reposts · 1 like · 138 views
Shahvir Sarkary @SarkaryShahvir
@tbpn featured LeLamp! also $50 is the pre-order deposit, full price will be ~$399. Stripe hides the description by default, especially on mobile. I have edited our order title so there is no confusion for anyone ordering LeLamp.
16 replies · 9 reposts · 112 likes · 8.3K views
Nantte Kivinen reposted
Shahvir Sarkary @SarkaryShahvir
Introducing LeLamp. An emotional and expressive robot to reshape our attachment to technology. Pre-orders open now (link below).
194 replies · 104 reposts · 1.1K likes · 180.5K views
FR8 @shipfr8
COHORT 2 JUST LAUNCHED, AND WE WENT FOR THE IMPOSSIBLE, again. FR8 is for founders, nerds, and scientists working at the frontier, people chasing something impossible and ready to dedicate their whole life to it. If you want to build the next app and exit early, apply elsewhere. If you want to go all in on something that actually pushes humanity forward, you've found your home. Applications are now open.
35 replies · 33 reposts · 189 likes · 41.8K views
Nantte Kivinen reposted
Ernesti Sario @ErnestiSario
FR8 is coming to London. We're bringing together people who feel out of place in a world that values conformity over curiosity and agency. The event is meant for young technical founders and researchers who are going (or looking to go) all in on crazy ideas. Sign up: luma.com/7idjc31h
3 replies · 6 reposts · 36 likes · 4.4K views
Elliot Arledge @elliotarledge
timelapse #100 (1438 hrs):
- wear headphones and watch til the end
45 replies · 18 reposts · 481 likes · 26.6K views
Elliot Arledge @elliotarledge
im famous on linkedin too
8 replies · 7 reposts · 144 likes · 6.5K views
Elliot Arledge @elliotarledge
timelapse #99 (99 hrs):
- 1hr per second
- moved desktop rig back into my room
- lined up CAT6A cable to utility room
- got the green light after seeing 1.1 Gbps download
- repurposed 2 SATA SSDs
- revamped my linux server with monitors and minimalistic desktop setup
- testing out the setup (24 gb vram) with 2-bit quantized qwen3-next 80b on sglang
- bunch of manual data collection
- had a bit of a break so decided to get nvidia/canary-qwen-2.5b running on 3090 and connect it to voiceink so i can hyperengineer faster
- ended up removing this ^^^ as the latency was too high (internet speed)
- solved a 4x4x4 rubiks cube parity
- made kernelbench-v3 (zero-shot, few-shot, and agentic modes)
- made nanoNext (nanoGPT style repo based on architecture seen in Qwen3-Next-80B, supporting single gpu pre-training and inference)
- made nanoDSV3.2 (nanoGPT style repo based on deepseek v3.2, supporting single gpu pre-training and inference)
- made nanoMuon (optimizer used in kimi-k2 pretrain)
- began setting up infrastructure for dist trainer and scaling law visualizer
- nn that decodes brain signals
- made a mini course covering the entire ecosystem of low-level deep learning
- watched karpathy podcast episode
- spent the weekend with a girl (halfway through)
- caught up on editor feedback for my book immediately, now waiting on another response
- polished and sped up kernelbench-v3 by having a global job queue to compile, generate and eval kernels more efficiently
- built codex from source and yolo'd my way through everything (kitty terminal w/ many tmux panes)
- contract work
- cancelled cursor ultra and fully pivoted to codex in terminal
- look fear (fear itself) directly in the eye and it will disappear
49 replies · 25 reposts · 932 likes · 101.3K views
Adrian De Gendt @AdrianDeGendt
Late night grinding 🔥
1 reply · 0 reposts · 7 likes · 131 views
Elliot Arledge @elliotarledge
timelapse #87 (50 hrs):
- 2800x speedup
- i suggest you stop for a min and seriously watch this whole timelapse. speed and energy has been very consistent this time around.
- was relying on claude 4.5 sonnet but its only worth using on niche problems, not codebase refactors
- found myself deviating back toward xAI models by the end
- figured i should tackle the final boss of low precision GEMM kernels (nvfp8 and nvfp4) which led me to cutlass and cute so im now tackling two chapters (gemm optimization chapter + cutlass/cute chapter) at once
- 30 min mentoring meeting
- figuring out how to get O-1 as soon as physically possible
- vibe coded weak + strong scaling tests for 8xH100 node
- worked my magic on quantized video lms
- new monitor + headphones
- merged and simplified simon's SGEMM_CUDA and pranjal's h100 kernels to make my life easier on seeing the last of optimizations to apply
- optimized gemv, hopper gemm, ampere gemm, cuda core gemm, softmax, layernorm, topK
- chapter carefully being shipped
- implemented pipeline parallelism on 8xH100 for arbitrarily large MLP (inference only educational example)
- wont have to worry about quantization chapter until my editors look at it
- havent fully polished the open source repos but ill roll those out passively
- everything is going my way except balance
36 replies · 19 reposts · 416 likes · 85.1K views