Jenish Rudani

1.8K posts


@JenishRudani

Driven by a love for running, embedded systems, and building impactful health tech

Vancouver, British Columbia · Joined January 2019
568 Following · 160 Followers
Pinned Tweet
Jenish Rudani @JenishRudani ·
I kept losing my place in audiobooks. The "30-sec rewind" button is useless when you've been gone for a week. So I built PageMatch: paste a sentence from the book → get the exact timestamp back. Runs on your Mac. No cloud. No API key. Open source. 🧵 github.com/jenish-rudani/…
1
0
5
87
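The tweet doesn't show how the sentence-to-timestamp lookup works. A minimal sketch of the core idea, assuming the audiobook has already been transcribed locally into timestamped segments (the segment times, texts, and function name below are made up for illustration), could be:

```python
import difflib

def find_timestamp(query, segments):
    """Return (timestamp, text, score) for the transcript segment that
    best fuzzy-matches the pasted sentence."""
    best = None
    q = query.lower()
    for ts, text in segments:
        # Similarity ratio in [0, 1] between the query and this segment.
        score = difflib.SequenceMatcher(None, q, text.lower()).ratio()
        if best is None or score > best[2]:
            best = (ts, text, score)
    return best

# Hypothetical transcript: (seconds into the audiobook, spoken text)
segments = [
    (12.5, "Call me Ishmael."),
    (15.0, "Some years ago, never mind how long precisely,"),
    (19.2, "having little or no money in my purse,"),
]
print(find_timestamp("never mind how long precisely", segments)[0])  # → 15.0
```

Fuzzy matching rather than exact substring search matters here: a local speech-to-text pass will mis-hear words, so the best-scoring segment is more robust than an exact hit.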
Jenish Rudani retweeted
Andrej Karpathy @karpathy ·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.7K
6.6K
55.8K
19.8M
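The "small and naive search engine over the wiki" isn't shown in the thread. A minimal sketch of that kind of tool, assuming the wiki is just a tree of .md files (the function name and raw term-count scoring are my own invention, not Karpathy's code), might look like:

```python
import os
import re
from collections import Counter

def search_wiki(root, query, top_k=3):
    """Naive keyword search: score each .md file under `root` by how
    many times the query terms occur in it, best matches first."""
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    scores = Counter()
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(".md"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                text = f.read().lower()
            score = sum(text.count(t) for t in terms)
            if score:
                scores[path] = score
    return [p for p, _ in scores.most_common(top_k)]
```

Wrapped in a tiny CLI, the same function is easy to hand to an LLM agent as a tool for larger queries; a real version would probably want TF-IDF weighting rather than raw counts.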
Jenish Rudani retweeted
Mustafa @oprydai ·
in a world full of software guys, be a hardware guy!
Mustafa tweet media
127
365
4.1K
96K
Jenish Rudani retweeted
Nav Toor @heynavtoor ·
🚨 Screen Studio charges $89 for this. Someone open sourced the entire thing for free.

It's called OpenScreen. 8,400+ GitHub stars.

You record your screen. It automatically transforms it into a polished, professional demo video. Auto-zoom into clicks. Smooth cursor animations. Motion blur. Custom backgrounds with wallpapers, gradients, and shadows. Webcam overlays. Annotations. Timeline editing. Export in any aspect ratio.

The exact workflow that Screen Studio sells for $89 and Loom sells as a subscription. Free. No watermarks. No accounts. No subscriptions.

Here's what you get out of the box:
→ Full screen or window capture with system audio and mic
→ Automatic zoom that follows your cursor and clicks
→ Manual zoom with customizable depth and timing
→ Smooth motion blur on pan and zoom transitions
→ Animated cursor rendering with motion effects
→ Webcam bubble overlay with drag-and-drop positioning
→ Wallpapers, solid colors, gradients, or custom backgrounds
→ Text and arrow annotations layered over recordings
→ Timeline trimming and variable speed segments
→ Crop, resize, and export in any resolution or aspect ratio
→ Save and reopen projects anytime

Here's the wildest part: A developer forked it and built an even more advanced version called Recordly. Full cursor animation pipeline. Native macOS and Windows recording. Zoom behavior that mirrors Screen Studio frame-for-frame. Audio tracks. Webcam overlays with zoom-reactive scaling.

Both are free. Both are MIT licensed. Both work on Windows, macOS, and Linux.

Download. Record. Export. Done. 100% Open Source. MIT License. (Link in the comments)
Nav Toor tweet media
175
694
8.5K
710.2K
むらさん(Murasan) @murasametech ·
I'm an independent developer in Japan. I'd like to connect with people outside Japan too. I mainly build robots around SBCs like the Raspberry Pi and run servers at home 🤖 I'd love to see the applications you've all built 🔧 #RaspberryPi #Python
むらさん(Murasan) tweet media (3 images)
80
67
1.2K
27.4K
Jenish Rudani retweeted
Lain @Lain_Ego0 ·
We've recently been refining an open-source bionic lobster underwater robot. This repo holds the firmware: dual STM32s controlling the arm, tail, propulsion, motors, and sensors, plus inter-board communication and basic attitude stabilization. The mechanical design and vision system are open-sourced separately. If you're interested in underwater robots, embedded systems, or open hardware, you're welcome to join the project. Firmware: github.com/Lain-Ego0/Bion…
Lain tweet media (2 images)
33
61
841
126.3K
Jenish Rudani retweeted
Arya Hezarkhani @_i_am_arya ·
Today, we're announcing Heaviside, our foundation model for electromagnetism. Trained on tens of millions of designs and over 20 years of proprietary simulation data, Heaviside predicts electromagnetic behavior from geometry in 13ms, which is 800,000x faster than a commercial solver.

Heaviside is not a language model, and it's not a surrogate model. Heaviside marks a new class of foundation model for physics that understands the fundamental relationships between materials, their geometries, and the electromagnetic fields they generate.

We're releasing a research preview of Heaviside in Atlas RF Studio, an interactive agentic sandbox where you describe the EM behavior you want and the model generates the physical structure that produces it.

At @arenaphysica, we believe the implications of this class of model extend well beyond RF, as the frontier of exquisite hardware is electromagnetically governed: wireless communication, radar, power delivery, high-speed computing, and the interconnects inside every chip on earth.

In the months ahead, we're excited to scale Heaviside up to broader frequency ranges and design spaces, to support silicon-level designs, and to deploy it with our closest partners and collaborators in service of their biggest design challenges. If you've read our thesis, this is just Step 2 in our pursuit of electromagnetic superintelligence.

Read the full announcement and try Atlas RF Studio…tell us what you think: arenaphysica.com/publications/r…
148
482
3.9K
685.7K
Jenish Rudani retweeted
AI at Meta @AIatMeta ·
Without any retraining, TRIBE v2 can reliably predict the brain responses of individuals it has never seen before, achieving nearly a 2-3x improvement over previous methods for both movies and audiobooks.

We're releasing the model, codebase, paper, and demo to help researchers advance neuroscience, apply brain insights to build better AI, and use computational simulation to speed up breakthroughs in neurological disease diagnosis and treatment.

🔗 Paper: go.meta.me/210503
🔗 Model: go.meta.me/ea1cff
🔗 Code: go.meta.me/873d02
AI at Meta tweet media
35
118
1K
204.6K
Jenish Rudani retweeted
stacksmashing @ghidraninja ·
If you ever lose the keys to your older Sea-Doo Jetski you might find these bytes useful when talking to the ECU😇 95 BC 2F 02 04 A4 75 BE
5
13
768
47.7K
Jenish Rudani retweeted
Mitko Vasilev @iotcoi ·
I just implemented Google’s TurboQuant for vLLM. My USB-charger-sized HP ZGX now fits 4,083,072 KV-cache tokens on GB10. This may be the biggest open inference breakthrough of 2026 so far. Training is the flex. Inference is the forever bill.
Mitko Vasilev tweet media
70
234
3K
208.7K
toki @tokifyi ·
one thing vancouver is missing is builder density so instead of complaining, i’m doing something about it bringing together some of the most ambitious founders and engineers in vancouver this week if you want in dm me
26
4
100
7.7K
Jenish Rudani retweeted
Lisa Tanh @LisaLi_T ·
Vancouver's Beaver's Den has opened up applications for pre-seed to Series A startups for a chance to win $100K.

Eligibility requirements:
- Raised less than $1M in dilutive funding to date.
- Has less than $500K in total revenues, $10K MRR, and $100K ARR.

After submissions close, 25 companies will be selected for a semifinal pitch night, and then five will be chosen to pitch at a @vanstartupweek kick-off event.

The team behind it, serial entrepreneurs Tiffany Scarlett and @changchatter, are also launching a podcast soon about funding in Vancouver. fundedinvancouver.com/beavers-den
0
2
12
913
Jenish Rudani @JenishRudani ·
From a failed buffing pad idea to a smiling sponge on Shark Tank: Scrub Daddy became a hit thanks to a highly over-engineered German foam that changes texture with water temperature. Sometimes the "wrong" idea just needs the right twist.
0
0
0
574
Jenish Rudani retweeted
Hedgie @HedgieMarkets ·
🦔 Researchers at Aikido Security found 151 malicious packages uploaded to GitHub between March 3 and March 9. The packages use Unicode characters that are invisible to humans but execute as code when run. Manual code reviews and static analysis tools see only whitespace or blank lines. The surrounding code looks legitimate, with realistic documentation tweaks, version bumps, and bug fixes. Researchers suspect the attackers are using LLMs to generate convincing packages at scale. Similar packages have been found on NPM and the VS Code marketplace.

My Take

Supply chain attacks on code repositories aren't new, but this technique is nasty. The malicious payload is encoded in Unicode characters that don't render in any editor, terminal, or review interface. You can stare at the code all day and see nothing. A small decoder extracts the hidden bytes at runtime and passes them to eval(). Unless you're specifically looking for invisible Unicode ranges, you won't catch it.

The researchers think AI is writing these packages because 151 bespoke code changes across different projects in a week isn't something a human team could do manually. If that's right, we're watching AI-generated attacks hit AI-assisted development workflows. The vibe coders pulling packages without reading them are the target, and there are a lot of them.

The best defense is still carefully inspecting dependencies before adding them, but that's exactly the step people skip when they're moving fast. I don't really know how any of this gets better. The attackers are scaling faster than the defenses.

Hedgie🤗 arstechnica.com/security/2026/…
123
813
3K
716.9K
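"Specifically looking for invisible Unicode ranges" is easy to sketch as a pre-install scanner. The ranges below are an illustrative, non-exhaustive list of code points that render as nothing in most editors, not Aikido's actual detection rules:

```python
# Illustrative, non-exhaustive ranges of code points that display as
# nothing in most editors yet survive verbatim in source files.
INVISIBLE_RANGES = [
    (0x200B, 0x200F),    # zero-width spaces/joiners, direction marks
    (0x2060, 0x2064),    # word joiner, invisible operators
    (0xFE00, 0xFE0F),    # variation selectors
    (0x3164, 0x3164),    # Hangul filler
    (0xE0000, 0xE007F),  # Unicode "tag" characters
]

def find_invisible(source):
    """Return (line, column, codepoint) for every invisible character."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for col, ch in enumerate(line, 1):
            cp = ord(ch)
            if any(lo <= cp <= hi for lo, hi in INVISIBLE_RANGES):
                hits.append((lineno, col, f"U+{cp:04X}"))
    return hits

# A zero-width space hidden at the end of an innocent-looking line:
print(find_invisible("def add(a, b):\u200b\n    return a + b\n"))
# → [(1, 15, 'U+200B')]
```

Run over a dependency before adding it, a hit list like this flags exactly the payload channel described above, though it is only one heuristic: attackers can shift to homoglyphs or other encodings that a range check won't see.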