Will Finger

907 posts


@fiwill

Product Designer, Web3 On-chain, AI engineer. Married. Father. Learning love with Jesus.

Portugal · Joined June 2009
435 Following · 179 Followers
Ozzny @Ozzny_CS2
This Reddit user says CS2 only needs 1 update to surpass CSGO... thoughts?
47 replies · 23 reposts · 1.4K likes · 197.8K views
rico @_heyrico
Inter vs Geist: which one?
86 replies · 6 reposts · 501 likes · 43.1K views
The Smart Ape 🔥 @the_smart_ape
how to get free AI tokens for life? i made a python script that scrapes public GitHub repos for occurrences of OPENAI_API_KEY= ... in 1 minute it found 5 valid keys. thanks, vibecoder!

how to protect your api keys:
> create a file called .env in your project folder
> put your key inside like this: OPENAI_API_KEY=sk-xxxxx
> create a file called .gitignore (if you don't have one)
> add this line inside: .env

the key must ONLY live in .env. never in your code, never in a commit, never in a readme. or honestly, just make the repo private. problem solved.

THIS POST IS FOR PREVENTION AND EDUCATIONAL PURPOSES. TOO MANY PEOPLE ARE STILL MAKING THESE MISTAKES AND LOSING MONEY FOR NOTHING.
8 replies · 0 reposts · 29 likes · 3.4K views
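The steps in the post can be sketched in code. This is a minimal stdlib-only sketch; the helper name `load_api_key` is mine, not from the post, and a real project would typically call python-dotenv's `load_dotenv()` at startup to populate the environment from `.env` first:

```python
import os

def load_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment instead of hard-coding it.

    Assumes the variable is already exported (e.g. by python-dotenv
    reading your .env file at startup).
    """
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; put it in your .env file")
    return key
```

Because `.env` is listed in `.gitignore`, the key never lands in a commit, so a scraper like the one described above finds nothing.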
Will Finger @fiwill
@MeganeJon Howdy! I am looking for the same direction. Let’s connect
1 reply · 0 reposts · 1 like · 17 views
眼鏡Jon @MeganeJon
Hey, X. I am looking to connect with more people who are into 0→1 product, branding, and vibe-coding; senior and junior designers; or people afraid of how fast everything is moving. I will be documenting more of the studio's work and sharing what I've learned over 20 years of designing, now in the age of AI. Teaching is something I have aspired to do for a long time; this will be a great platform to do so!
26 replies · 2 reposts · 63 likes · 10.5K views
Vitalii Dodonov @vitaliidodonov
Dear algo, Please show this tweet to founders, builders and creators in Europe.
125 replies · 4 reposts · 369 likes · 19.5K views
Grayson @Graysonbook
@fiwill Would you mind if I helped you????
1 reply · 0 reposts · 0 likes · 5 views
Will Finger retweeted
NASA @NASA
1972 ➡️ 2026 · Apollo 17 ➡️ Artemis II
7.6K replies · 107.2K reposts · 683K likes · 38.5M views
Will Finger retweeted
Z.ai @Zai_org
Introducing GLM-5V-Turbo: Vision Coding Model

- Native Multimodal Coding: natively understands multimodal inputs including images, videos, design drafts, and document layouts.
- Balanced Visual and Programming Capabilities: achieves leading performance across core benchmarks for multimodal coding, tool use, and GUI agents.
- Deep Adaptation for Claude Code and Claw Scenarios: works in deep synergy with agents like Claude Code and OpenClaw.

Try it now: chat.z.ai
API: docs.z.ai/guides/vlm/glm…
Coding Plan trial applications: docs.google.com/forms/d/e/1FAI…
254 replies · 655 reposts · 5.8K likes · 2M views
Will Finger retweeted
0xSero @0xSero
I promise this is the last plyometric video I drop on you.
50 replies · 229 reposts · 2.4K likes · 131.1K views
Will Finger retweeted
taylor @taydotfun
you've seen light mode, you've seen dark mode, but have you ever seen sunny mode?
Brooklyn, NY 🇺🇸 · 393 replies · 738 reposts · 14K likes · 503.2K views
Will Finger retweeted
Will Finger @fiwill
@josesaezmerino If it's for a single content focus, horizontal readability is absolutely the way. Just adding to that.
0 replies · 0 reposts · 0 likes · 6 views
Will Finger @fiwill
@josesaezmerino Vertical readability for multiple pieces of content gives far higher scanning speed for our eyes, that's why.
1 reply · 0 reposts · 0 likes · 28 views
Will Finger retweeted
Ryan Loechner @RyanLoechner
@GoogleResearch I’m requesting that the compression interface looks like this.
11 replies · 8 reposts · 617 likes · 25.2K views
Sudo su @sudoingX
if you're about to download nvidia's nemotron cascade 2 at Q4_K_M for a single RTX 3090, stop. save yourself the frustration i went through last night.

Q4_K_M is 24.5GB. your 3090 has 24GB VRAM. the model loads, but there's no room for KV cache, no room for context, no room for compute buffer. it will not run. this is a MoE architecture where the expert weights don't compress well at standard Q4, yet every quant table online lists it as "recommended" without checking if it fits consumer VRAM.

the fix: bartowski IQ4_XS at 18.17GB. imatrix quantization that's smarter about which weights need precision and which don't. same 4-bit tier, 6GB smaller, because it doesn't blindly keep every expert at the same precision. leaves you 5.4GB of headroom for KV cache and context.

downloading it now on the same RTX 3090 i ran qwen 3.5 35B-A3B on at 112 tok/s. same machine, same node, same everything. first up is a context scaling sweep from 4K to 262K to see how mamba-2 handles long context compared to qwen's deltanet. then speed benchmarks at each context level. then i'm pointing hermes agent at it for autonomous coding sessions to see how it handles tool calls, file creation, and multi-step builds over long sessions.

nvidia vs alibaba. mamba vs deltanet. same hardware, different architectures. i'll report back with exact flags, exact numbers, exact VRAM breakdowns. no theory, no spec sheets. tested data from a real card.
Quoted: Sudo su @sudoingX

the hype around this model settled fast. good. now i can test it without the noise.

NVIDIA released nemotron cascade. 30B total, 3B active. fits on a single RTX 3090. hybrid mamba MoE. gold medal on the international math olympiad with only 3 billion active parameters. they say it beats qwen on math, code, and reasoning.

i tested qwen 3.5 35B-A3B on a single 3090 at 112 tok/s. now same card, same tests, different architecture. mamba vs deltanet. nvidia vs alibaba. receipts incoming tonight.

40 replies · 33 reposts · 601 likes · 137.8K views
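The VRAM arithmetic in the thread can be sketched as a quick fit check. The `fits_in_vram` helper and the 2 GB overhead floor are my assumptions, not from the post; the 24.5 GB and 18.17 GB file sizes are the figures quoted above:

```python
def fits_in_vram(model_gb: float, vram_gb: float, overhead_gb: float = 2.0) -> bool:
    """Weights alone are not enough: the KV cache, context window, and
    compute buffers all need VRAM on top of the model file size, so we
    require at least `overhead_gb` of headroom (2 GB is an assumed floor)."""
    return model_gb + overhead_gb <= vram_gb

# Q4_K_M (24.5 GB) on a 24 GB RTX 3090: the weights alone overflow the card.
fits_in_vram(24.5, 24.0)   # False
# bartowski IQ4_XS (18.17 GB) leaves ~5.8 GB for KV cache and context.
fits_in_vram(18.17, 24.0)  # True
```

The real headroom needed grows with context length, so a fixed floor is only a first-pass filter before testing on the actual card.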