Robins

240 posts


@robinskarani

get things done

Nairobi, Kenya · Joined September 2022
381 Following · 449 Followers
CNX Software
CNX Software@cnxsoft·
Open-source, self-hosted Arduino, Raspberry Pi, and ESP32 simulator. cnx-software.com/2026/04/04/vel… Like the Wokwi project, Velxio simulates popular development boards right in your web browser. But the difference is that the open-source project can be self-hosted, running on your own machine. The project currently supports 19 boards and 48 components. Other highlights include support for multiple boards (e.g., Arduino connected to ESP32) and full QEMU emulation support for ESP32 and Raspberry Pi 3 (Linux).
Robins
Robins@robinskarani·
I’ve realized something: starting big rarely works, at least in my experience. All those bold, ambitious leaps I tried before often ended in burnout or frustration. So now I’m testing a different approach: starting small and scaling up after building discipline. Tiny, consistent actions that are actually doable. The goal isn’t to do everything at once; it’s to build the habit first and let discipline grow. Without that foundation of discipline, starting big often fails. It’s almost counterintuitive, doing less to achieve more, but the truth is that small, consistent steps are easier to stick with, and they compound over time. Once the habit is built, scaling up becomes natural.
Robins
Robins@robinskarani·
Le mythe de Sisyphe is really a great read
Robins
Robins@robinskarani·
DAY 16 OF ML: Building a perceptron from scratch (Python & NumPy). I implemented my first neural network from scratch:
- built a perceptron that learns AND/OR logic
- visualized decision boundaries with matplotlib
- added unit tests (2/2 passing)
- created reusable modules: activations, layers
I've added explanations to most of the code, including how a perceptron works. Check it out here: github.com/RobinsKarani/N… Next: solving XOR with hidden layers
Robins@robinskarani

DAY 15: reviewed the building-block structure of a basic neural network and finalized the architecture design. I'll implement it tomorrow

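The Day 16 perceptron above can be sketched in a few lines. This is a minimal illustrative version using the classic perceptron update rule with a step activation; the class and method names are mine, not taken from the linked repository.

```python
# Minimal perceptron sketch; class and method names are illustrative,
# not taken from the linked repository.
import numpy as np

class Perceptron:
    def __init__(self, n_inputs, lr=1.0):
        self.w = np.zeros(n_inputs)   # weights
        self.b = 0.0                  # bias
        self.lr = lr                  # learning rate

    def predict(self, x):
        # Step activation: output 1 when the weighted sum is positive.
        return (np.dot(x, self.w) + self.b > 0).astype(int)

    def fit(self, X, y, epochs=20):
        # Classic perceptron rule: nudge weights by (target - prediction).
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                err = yi - self.predict(xi)
                self.w += self.lr * err * xi
                self.b += self.lr * err

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
p = Perceptron(2)
p.fit(X, np.array([0, 0, 0, 1]))   # AND truth table
print(p.predict(X))                # -> [0 0 0 1]
```

Training on targets `[0, 1, 1, 1]` learns OR the same way; XOR has no separating line, which is why the next step needs hidden layers.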
Robins
Robins@robinskarani·
DAY 15: reviewed the building-block structure of a basic neural network and finalized the architecture design. I'll implement it tomorrow
Robins@robinskarani

Day 14: spent some time going through sebastianraschka.com/teaching/pytor… and figured building a neural network from scratch this week would be a good exercise. I’ll keep posting updates

Robins
Robins@robinskarani·
Day 14: spent some time going through sebastianraschka.com/teaching/pytor… and figured building a neural network from scratch this week would be a good exercise. I’ll keep posting updates
Robins@robinskarani

Day 13 of ML: Today I read Andrej Karpathy’s article karpathy.medium.com/yes-you-should… and completed his neural nets lecture 1 (which is Lecture 4 in the full series) on backpropagation on YouTube: youtu.be/i94OvYb6noo?si…

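The scalars/vectors/matrices/tensors hierarchy from the Day 14 reading boils down to array rank. A quick sketch with NumPy; the PyTorch version looks the same with `torch.tensor` in place of `np.array`:

```python
# Rank hierarchy: each level adds one axis to the array.
import numpy as np

scalar = np.array(3.5)                 # rank 0: a single number
vector = np.array([1.0, 2.0, 3.0])     # rank 1: an ordered list of numbers
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])        # rank 2: a grid of numbers
tensor = np.zeros((2, 3, 4))           # rank 3: a stack of matrices

for t in (scalar, vector, matrix, tensor):
    print(t.ndim, t.shape)   # 0 (), 1 (3,), 2 (2, 2), 3 (2, 3, 4)
```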
Robins
Robins@robinskarani·
d2l.ai is solid, but I felt I needed a different perspective to really internalize things. Karpathy’s approach clicks more for me because he builds stronger visual intuition, which makes the concepts much clearer.
Robins
Robins@robinskarani·
Day 11 of ML: went through backprop and autograd. Understood how gradients flow backward via the chain rule, and how autograd builds the computation graph to compute derivatives automatically during training
Robins@robinskarani

Day 10 of ML with d2l.ai: neural networks and a deeper dive into neurons. What stood out:
- random initialization breaks symmetry, so neurons specialize during training
- learning = updating weights with gradient descent to shape decision boundaries

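What autograd automates in the Day 11 tweet above can be done by hand for a one-neuron model: save intermediates on the forward pass, then multiply local derivatives backward via the chain rule. A small sketch (all names are illustrative), with a finite-difference sanity check:

```python
# Hand-rolled chain rule for one neuron: y = tanh(w*x + b), loss = (y - t)^2.
import numpy as np

x, t = 2.0, 1.0    # input and target
w, b = 0.5, 0.1    # parameters

# Forward pass, keeping intermediates (the "computation graph").
z = w * x + b          # pre-activation
y = np.tanh(z)         # activation
loss = (y - t) ** 2    # squared error

# Backward pass: local derivatives multiply as gradients flow backward.
dloss_dy = 2 * (y - t)
dy_dz = 1 - np.tanh(z) ** 2       # tanh'(z)
dloss_dz = dloss_dy * dy_dz       # chain rule
dloss_dw = dloss_dz * x           # dz/dw = x
dloss_db = dloss_dz * 1.0         # dz/db = 1

# Sanity check against a central finite difference in w.
eps = 1e-6
num_dw = ((np.tanh((w + eps) * x + b) - t) ** 2
          - (np.tanh((w - eps) * x + b) - t) ** 2) / (2 * eps)
print(dloss_dw, num_dw)   # the two agree to several decimals
```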
Latent.Space
Latent.Space@latentspacepod·
From rewriting Google’s search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code. We sat down with Jeff to unpack what it really means to “own the Pareto frontier,” why distillation is the quiet force behind every generation of faster, cheaper models, how energy, not FLOPs, is becoming the true constraint on AI compute, what it takes to co-design hardware and models 2–6 years into the future, why unified multimodal systems will outperform specialized ones, what it was like leading the charge to unify all of Google’s AI teams, and his prediction that deeply personalized models with access to your full digital context will redefine what useful AI looks like. @JeffDean @GoogleDeepMind @Google
Robins
Robins@robinskarani·
Day 10 of ML with d2l.ai: neural networks and a deeper dive into neurons. What stood out:
- random initialization breaks symmetry, so neurons specialize during training
- learning = updating weights with gradient descent to shape decision boundaries
Robins@robinskarani

DAY 9 OF ML with d2l.ai: PyTorch & Colab basics

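The symmetry-breaking point from the Day 10 tweet can be demonstrated directly: with constant initialization every hidden neuron computes the same function and receives the same gradient, so they can never specialize. A toy NumPy sketch (the network shape and all names are mine, chosen only to illustrate the argument):

```python
# Toy 3-2-1 tanh network with MSE loss; illustrates why random
# initialization is needed to break symmetry between hidden neurons.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))    # toy input batch
y = rng.normal(size=(8, 1))    # toy targets

def grads(W1, W2):
    """One forward/backward pass; returns gradients for both layers."""
    H = np.tanh(X @ W1)              # hidden activations, shape (8, 2)
    out = H @ W2                     # network output, shape (8, 1)
    d_out = 2 * (out - y) / len(X)   # dLoss/d_out for mean squared error
    dW2 = H.T @ d_out
    dH = d_out @ W2.T
    dW1 = X.T @ (dH * (1 - H ** 2))  # tanh' = 1 - tanh^2
    return dW1, dW2

# Constant init: both hidden neurons compute the same function and get
# identical gradient columns, so gradient descent keeps them identical.
dW1, _ = grads(np.full((3, 2), 0.5), np.full((2, 1), 0.5))
print(np.allclose(dW1[:, 0], dW1[:, 1]))   # True

# Random init breaks the symmetry: each neuron gets its own gradient.
dW1, _ = grads(rng.normal(size=(3, 2)), rng.normal(size=(2, 1)))
print(np.allclose(dW1[:, 0], dW1[:, 1]))   # False
```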