seslly

4.4K posts

seslly

seslly

@seslly

Former tradfi software engineer working in web3 | Buying high and selling low since 2017 | Tweets are usually a reminder to myself | Not financial advice

Internet · Joined June 2020
1.9K Following · 393 Followers
Pinned Tweet
seslly
seslly@seslly·
goals for 2024:
- career shift to web3 infra
- more oss contributions
- continue building
English
5
0
15
2K
seslly retweeted
Tom Turney
Tom Turney@no_stp_on_snek·
Gemma 4 support on TurboQuant+ ... WEIGHT COMPRESSION! Gemma 4 31B: 30.4 GB down to 18.9 GB. All with the same benefits of TurboQuant+ KV cache. Get started on the experimental branch (weight-compression-tq4_1s--experimental) here: github.com/TheTom/turboqu…
Tom Turney tweet media
English
34
66
914
47.1K
seslly
seslly@seslly·
just a lil 5.0 earthquake to wake up to
English
0
0
1
379
Matt Wallace
Matt Wallace@MattWallace888·
If anyone can successfully copy the link on this post I will buy you a Tesla
English
14.7K
1.6K
39K
12.9M
seslly retweeted
Avi
Avi@AviFelman·
Once again this graph has a hold on me
Avi tweet media
English
110
1.8K
13.7K
387.4K
seslly retweeted
solst/ICE of Astarte
>use a security tool in CI
>get pwnd
>update your dependencies
>get pwnd
>do nothing, ignore security
>win
English
36
137
2.2K
53.2K
seslly
seslly@seslly·
@_manboy_ love the pretext but brother we gotta get those links to unfurl
English
1
0
1
25
Manboy
Manboy@_manboy_·
I keep my blog as markdown in a separate GitHub repo and use it like a tiny CMS. Just wired in a Pretext-powered post without changing that setup:
- post still lives in markdown
- site still fetches it from GitHub
- one post gets a custom interactive demo with @chenglou/pretext
Nice pattern for adding weird rich behavior without giving up a simple blog stack.
English
1
0
2
45
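The "GitHub repo as tiny CMS" setup Manboy describes can be sketched in a few lines — this is a hypothetical illustration (the owner, repo, and post path are placeholders, not his actual blog), assuming the site pulls raw markdown from raw.githubusercontent.com and renders it client-side:

```javascript
// Hedged sketch of the pattern: a separate GitHub repo holds markdown posts,
// and the site fetches the raw file contents at render time.
// Owner/repo/path below are illustrative placeholders.
function rawUrl(owner, repo, path, branch = "main") {
  // raw.githubusercontent.com serves file contents without the GitHub UI
  return `https://raw.githubusercontent.com/${owner}/${repo}/${branch}/${path}`;
}

// Fetch the markdown source of one post; markdown rendering (or a
// Pretext-powered interactive demo) is layered on afterwards.
async function fetchPost(owner, repo, path) {
  const res = await fetch(rawUrl(owner, repo, path));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.text();
}
```

The appeal of the pattern is that the repo stays a plain folder of `.md` files, so one post can opt into custom behavior without the rest of the blog changing.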
seslly retweeted
Natalie Fratto
Natalie Fratto@NatalieFratto·
One of these things is not like the other… The other day @PratapRanade brought home 3 RF circuits. Ok, “10 GHz band-pass filters,” he says, to be precise. The first two are human-made; the third is what they’re calling “an alien geometry” 👾 Look how funky it is. That’s the world’s first-ever AI-made RF circuit, achieved by the electromagnetism foundation model @arenaphysica. No human would have created it this way. It’s odd, it looks random, but it really works & it might be the future guts inside every satellite, radar, microwave etc one day.
Natalie Fratto tweet media
Arya Hezarkhani@_i_am_arya

Today, we're announcing Heaviside, our foundation model for electromagnetism. Trained on tens of millions of designs and over 20 years of proprietary simulation data, Heaviside predicts electromagnetic behavior from geometry in 13ms, which is 800,000x faster than a commercial solver.

Heaviside is not a language model, and it’s not a surrogate model. Heaviside marks a new class of foundation model for physics which understands the fundamental relationships between materials, their geometries, and the electromagnetic fields they generate.

We’re releasing a research preview of Heaviside in Atlas RF Studio, an interactive agentic sandbox where you describe the EM behavior you want and the model generates the physical structure that produces it.

At @arenaphysica, we believe the implications of this class of model extend well beyond RF, as the frontier of exquisite hardware is electromagnetically governed: wireless communication, radar, power delivery, high-speed computing, and the interconnects inside every chip on earth.

In the months ahead, we’re excited to scale Heaviside up to broader frequency ranges and design spaces, to support silicon-level designs, and to deploy it with our closest partners and collaborators in service of their biggest design challenges. If you’ve read our thesis, this is just Step 2 in our pursuit of electromagnetic superintelligence.

Read the full announcement and try Atlas RF Studio…tell us what you think: arenaphysica.com/publications/r…

English
115
383
2.9K
450.6K
seslly retweeted
Dev Ed
Dev Ed@developedbyed·
Going to bed peacefully knowing I’ve always recommended sticking with fetch in my tutorials
GIF
Feross@feross

🚨 CRITICAL: Active supply chain attack on axios -- one of npm's most depended-on packages. The latest axios@1.14.1 now pulls in plain-crypto-js@4.2.1, a package that did not exist before today. This is a live compromise. This is textbook supply chain installer malware.

axios has 100M+ weekly downloads. Every npm install pulling the latest version is potentially compromised right now. Socket AI analysis confirms this is malware.

plain-crypto-js is an obfuscated dropper/loader that:
• Deobfuscates embedded payloads and operational strings at runtime
• Dynamically loads fs, os, and execSync to evade static analysis
• Executes decoded shell commands
• Stages and copies payload files into OS temp and Windows ProgramData directories
• Deletes and renames artifacts post-execution to destroy forensic evidence

If you use axios, pin your version immediately and audit your lockfiles. Do not upgrade.

English
17
25
549
31.5K
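The "pin your version" mitigation in the quoted alert looks roughly like this in `package.json` — a minimal sketch, where the pinned version number is illustrative (a release predating the compromised 1.14.1), not a verified-safe version:

```json
{
  "dependencies": {
    "axios": "1.13.0"
  },
  "overrides": {
    "axios": "1.13.0"
  }
}
```

Writing the version without `^` or `~` stops npm from floating up to the compromised release, and the `overrides` field (npm 8+) forces transitive copies of axios pulled in by other dependencies to the same pinned version; auditing the lockfile afterwards confirms nothing still resolves to 1.14.1.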
seslly
seslly@seslly·
ngl I never understood the value add of axios since node added fetch 4 years ago (crazy it took that long tbh)
Feross@feross

🚨 CRITICAL: Active supply chain attack on axios -- one of npm's most depended-on packages. The latest axios@1.14.1 now pulls in plain-crypto-js@4.2.1, a package that did not exist before today. This is a live compromise. This is textbook supply chain installer malware.

axios has 100M+ weekly downloads. Every npm install pulling the latest version is potentially compromised right now. Socket AI analysis confirms this is malware.

plain-crypto-js is an obfuscated dropper/loader that:
• Deobfuscates embedded payloads and operational strings at runtime
• Dynamically loads fs, os, and execSync to evade static analysis
• Executes decoded shell commands
• Stages and copies payload files into OS temp and Windows ProgramData directories
• Deletes and renames artifacts post-execution to destroy forensic evidence

If you use axios, pin your version immediately and audit your lockfiles. Do not upgrade.

English
0
0
2
84
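For what it's worth, the axios ergonomics most people actually use take only a few lines over Node's built-in fetch (global since Node 18, no install) — a minimal sketch, not a drop-in axios replacement:

```javascript
// Minimal sketch: replicating the common axios.get(url).then(r => r.data)
// pattern with Node's built-in fetch.
async function getJSON(url) {
  const res = await fetch(url);
  // axios rejects on non-2xx responses by default; fetch does not,
  // so the status check has to be explicit here
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

What axios still bundles beyond this (interceptors, automatic retries via plugins, request cancellation ergonomics) is exactly the surface area a supply-chain compromise rides in on, which is the point of the tweet.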
seslly retweeted
Tom Turney
Tom Turney@no_stp_on_snek·
the original TurboQuant paper tested on A100 with models up to 8B. 6 days later, a bunch of strangers on the internet had it built and running on:
- Apple Silicon M1 through M5
- NVIDIA 3080 Ti through DGX Spark Blackwell
- AMD RX 6800 XT and 9070
- a 10-year-old Tesla P40
- an 8GB MacBook Air
- models from 3.8B to 70B across 6 architecture families
- 30+ independent testers

along the way we found new optimizations the paper didn't cover and failure modes it didn't test. the fact that a loose group of people across the world can read a paper, build implementations from scratch, stress-test across hardware none of us could individually afford, and push the research further in under a week is genuinely one of the best things about this era. the tools and the community make it possible. open source is something else.
Tom Turney tweet media
English
51
484
4.9K
139.7K
seslly retweeted
Guri Singh
Guri Singh@heygurisingh·
Humans: 100%
Gemini 3.1 Pro: 0.37%
GPT 5.4: 0.26%
Opus 4.6: 0.25%
Grok-4.20: 0.00%

François Chollet just released ARC-AGI-3 -- the hardest AI test ever created. 135 novel game environments. No instructions. No rules. No goals given. Figure it out or fail. Untrained humans solved every single one. Every frontier AI model scored below 1%.

Each environment was handcrafted by game designers. The AI gets dropped in and has to explore, discover what winning looks like, and adapt in real time. The scoring punishes brute force. If a human needs 10 actions and the AI needs 100, the AI doesn't get 10%. It gets 1%. You can't throw more compute at this.

For context: ARC-AGI-1 is basically solved. Gemini scores 98% on it. ARC-AGI-2 went from 3% to 77% in under a year. Labs spent millions training on earlier versions. ARC-AGI-3 resets the entire scoreboard to near zero.

The benchmark launched live at Y Combinator with a fireside between Chollet and Sam Altman. $2M in prizes on Kaggle. All winning solutions must be open-sourced. Scaling alone will not close this gap. We are nowhere near AGI. (Link in the comments)
Guri Singh tweet media
English
316
1.1K
6.4K
1.3M
seslly retweeted
Robin
Robin@solarise_webdev·
I know you're all getting mighty tired of seeing typography on your timeline today! But here's a pretext.js demo that (hopefully) isn't a crime against justification and indentation.
English
209
1.3K
17K
444.4K
seslly retweeted
Cheng Lou
Cheng Lou@_chenglou·
My dear front-end developers (and anyone who’s interested in the future of interfaces): I have crawled through depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly at least in concept): Fast, accurate and comprehensive userland text measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurements and reflow
English
1.3K
8.2K
64.7K
23M
seslly retweeted
0xSero
0xSero@0xSero·
Best models to run on your hardware:
—— 64 GB ——
- Qwen3-coder-next-80B-4bit (coding, Claude code, general agent)
- Qwen3.5-122B-reap (browser use, multimodal, tool calling, general agent)
—— 96 GB ——
- GLM-4.6V (multimodal and tool calls)
- Hermes-70B (jailbroken)
- Nemotron-120B-Super (openclaw)
- Mistral-4-Small (general agent)
—— 192 GB ——
All of these are excellent top-tier LLMs and approach Sonnet in capabilities:
- Step-3.5-Flash
- Qwen3.5-397B-REAP
- MiniMax-M2.5 (soon M2.7)
- GLM-4.7-Reap
0xSero@0xSero

Best models to run on your hardware level. I'll be doing this every week, I hope you guys enjoy.
---- 8 GB ----
Autocomplete for coding (like Cursor Tab)
- huggingface.co/NexVeridian/ze…
- huggingface.co/bartowski/zed-…
Tool calling, assistant style
- huggingface.co/nvidia/NVIDIA-…
---- 16 GB ----
Here things get better:
Multimodal
- huggingface.co/Qwen/Qwen3.5-9B
- huggingface.co/Tesslate/OmniC…
- huggingface.co/unsloth/Qwen3.…
---- 24 GB ----
- The best model you can get (thanks Qwen) huggingface.co/Qwen/Qwen3.5-2…
- Great model (strong agents) huggingface.co/nvidia/Nemotro…
- Mine hehe huggingface.co/0xSero/Qwen-3.…
I'm doing a weekly series

English
172
243
3.3K
470.4K
seslly retweeted
Sos Sosowski
Sos Sosowski@Sosowski·
Pre-2022 software and content is the low-background steel of the information era.
Sos Sosowski tweet media
English
45
1.3K
16.1K
273.2K
seslly
seslly@seslly·
@_manboy_ the irony is that we built guardrails for this but still just let him yolo. gotta stop giving him root access
English
0
0
1
32
Manboy
Manboy@_manboy_·
holy shit our openclaw just son of Anton'd our entire production server lmao
English
1
0
1
29
seslly
seslly@seslly·
@_manboy_ Achievement unlocked: incredible good done
English
0
0
0
29