isma
@ismaelbdev
1.1K posts
Barcelona · Joined July 2021
845 Following · 286 Followers
isma@ismaelbdev·
@davidgomes would love to be there but need to be in the office :( want to learn how other people use cursor
0 replies · 0 reposts · 1 like · 42 views
David Gomes
David Gomes@davidgomes·
Next Tuesday, Cafe Cursor comes to Barcelona 🇪🇸☀️ This is a cool event to get to know other Cursor fans, and talk about what you're building. Nos vemos! 👋 (And if you live in Barcelona and want to hang out before/after, shoot me a message I may find some time.)
David Gomes tweet media
17 replies · 2 reposts · 88 likes · 7K views
NVIDIA GeForce
NVIDIA GeForce@NVIDIAGeForce·
Five years in Night City. Endless memories. One legendary GPU. To celebrate the anniversary, we’re giving away a custom Cyberpunk 2077 GeForce RTX 2080 Ti signed by Jensen Huang. Comment "GeForce Season" to enter 🌆
27.6K replies · 2.5K reposts · 26.2K likes · 5.9M views
isma retweeted
FFmpeg
FFmpeg@FFmpeg·
Arguably the most brilliant engineer in FFmpeg left because of this. He reverse engineered dozens of codecs by hand as a volunteer. Then security "researchers" and corporate employees came along, repeatedly insisting that "critical" security issues be fixed immediately while waving their CVEs. This was hugely demotivating and drained the fun and enjoyment out of reverse engineering.
FFmpeg@FFmpeg

The maintainer of libxml2 put it very well

156 replies · 710 reposts · 8.9K likes · 842.9K views
isma retweeted
Kai Lentit (e/xcel)
Kai Lentit (e/xcel)@KaiLentit·
I am glad Rockstar didn’t announce GTA 5o-mini
44 replies · 228 reposts · 4.6K likes · 141.8K views
Jeremy Nguyen ✍🏼 🚢
Jeremy Nguyen ✍🏼 🚢@JeremyNguyenPhD·
@aidenybai for an actual physical whiteboard (that isn't a whiteboard): you can buy these long rolls of whiteboard sheets that stick to your wall without adhesive (uses static cling). I have them over so many of my walls, even in the bathroom. It's great.
Jeremy Nguyen ✍🏼 🚢 tweet media
3 replies · 1 repost · 25 likes · 1.9K views
Aiden Bai
Aiden Bai@aidenybai·
what whiteboarding tool do you use?
237 replies · 3 reposts · 398 likes · 72.2K views
isma
isma@ismaelbdev·
@getreconxyz It's time to make the first SEO tool in Solidity
1 reply · 0 reposts · 0 likes · 135 views
Recon
Recon@getreconxyz·
Solidity HTTP You love foundry so much, now you can browse the internet with it
Recon tweet media
31 replies · 41 reposts · 318 likes · 30.5K views
isma
isma@ismaelbdev·
@LearnAI_MJ @mattshumer_ me too, but i tried and just got blocked by content policies lmao. no time to be brave today i guess 🤣
0 replies · 0 reposts · 2 likes · 397 views
Matt Shumer
Matt Shumer@mattshumer_·
Ask o3: “Based on everything you know about me, reason + predict what the next 50 years of my life will look like.”
145 replies · 83 reposts · 2.2K likes · 755.1K views
isma
isma@ismaelbdev·
Lmao what a legend
Warlizard@Warlizard

Wait, don't go, it's really simple. It’s a large language model, self-hosted on your own rig. You grab a GGUF file—think distilled neural net weights, quantized to the nth degree, like Q4_K_M with 4-bit precision or Q8_0 with 8-bit integer ops, packed tight with GGML optimizations for minimal memory footprint. Snag it off Hugging Face, maybe a 7B parameter model, 7 billion weights, fits in about 4-6GB of VRAM if you’re lucky. Then you compile llama.cpp—straight C++ inference engine, leverages SIMD instructions, single instruction multiple data, for parallel crunching. Point it at the GGUF, and it’s live, no cloud, no nonsense.

Hardware’s key. You need a beefy GPU—say an NVIDIA RTX 4090 with 24GB GDDR6X VRAM, tensor cores screaming at 16-bit float precision, pushing 30 tokens per second on a 13B model. CPU fallback’s doable, Intel i9-13900K with 24 cores, 32 threads, AVX-512 support for vectorized math, but it’ll crawl at 5 tokens per second tops. RAM’s non-negotiable—64GB DDR5 at 5600 MT/s, because context spills into system memory past 8k tokens. Storage? NVMe SSD, Samsung 990 Pro, 2TB, 7450MB/s read, keep those weights streaming.

Settings are a playground. Temperature’s a float, 0.65 for tight coherence, 1.8 if you want it spitting chaotic embeddings. Context length—4096 tokens, 4k word fragments, needs 16GB VRAM or it swaps to RAM and stutters. Tokenization’s baked into the GGUF, BPE algo, byte pair encoding, splits text into subword units, 50k vocab size typical. Tuning? LoRA’s your ticket—low-rank adaptation, slap a 16-rank delta on the weight matrix, fine-tune on a 3080 Ti in half a day if you’ve got the dataset.

Crazy thought—could you cram it on a Raspberry Pi 5? 8GB LPDDR4X, ARM Cortex-A76, no CUDA, so you’re stuck with CPU inference. Maybe a 1B parameter model, Q2 quantization, 2-bit weights, 500MB footprint. Chugs at 1 token per second if the thermals don’t throttle it to death. Overclock it, liquid cool it, who knows? I’d benchmark it just to see the bus bandwidth choke. Stock’s fine for most, though—13B on a 3090, call it a day. ... Ladies?
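The sizing figures in the rant above (a 7B model in roughly 4-6GB at 4-bit, a 13B model fitting a 24GB card at 8-bit, a 1B model around 500MB at 2-bit) can be sanity-checked with a back-of-envelope sketch. The overhead factor below is a loose assumption to cover quantization scales, higher-precision embeddings, and metadata, not anything taken from the GGUF spec:

```python
def quantized_size_gb(params_billions: float, bits_per_weight: float,
                      overhead: float = 1.15) -> float:
    """Rough in-memory size of a quantized model, in GB.

    overhead (~15%) is an assumed fudge factor for per-block scales,
    embeddings kept at higher precision, and file metadata.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8 * overhead
    return bytes_total / 1e9

# 7B at ~4.5 effective bits (Q4_K_M mixes quant types): about 4.5 GB
print(round(quantized_size_gb(7, 4.5), 1))
# 13B at 8-bit (Q8_0): about 15 GB, which is why a 24GB 3090/4090 copes
print(round(quantized_size_gb(13, 8.0), 1))
# 1B at 2-bit (Q2): about 0.3 GB, the same ballpark as the tweet's 500MB
print(round(quantized_size_gb(1, 2.0), 2))
```

The estimate ignores the KV cache, which grows with context length and is why the tweet warns that long contexts spill into system RAM.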

0 replies · 0 reposts · 1 like · 43 views
isma
isma@ismaelbdev·
now i have a reason to send love (simp) letters. prompt is in the image's ALT. i stole it from somewhere but i don't remember where. images were stolen from pepelangelo
isma tweet media
0 replies · 0 reposts · 1 like · 59 views
isma
isma@ismaelbdev·
ChatGPT 4o is the absolute gigachad, made this spritesheet with quite a simple prompt
isma tweet media
1 reply · 0 reposts · 3 likes · 349 views
isma
isma@ismaelbdev·
isma tweet media
0 replies · 1 repost · 5 likes · 65 views
isma
isma@ismaelbdev·
@zhusu What I understand is the guy is putting the Hyperliquid LP at a heavy loss, but we don't really know if he's actually making money on spot? If it's an attack, then I guess the spot buys are to cover part of the attack's cost
0 replies · 0 reposts · 0 likes · 444 views
Zhu Su
Zhu Su@zhusu·
HL jellyjelly attacker 1) goes max short JELLYJELLY perps (likely while buying spot simultaneously) 2) pumps spot onchain to self-liq perps 3) HLP inherits the giga short JELLYJELLY position 4) continues to pump spot 5) HLP still short, down $6.5m
Zhu Su tweet media (×3)
61 replies · 53 reposts · 687 likes · 294.6K views
isma
isma@ismaelbdev·
TIL you can open YouTube in Brave from your phone in desktop mode, skipping the ads. Free YouTube Premium
0 replies · 0 reposts · 0 likes · 61 views
isma
isma@ismaelbdev·
lofi pepe
isma tweet media
0 replies · 0 reposts · 0 likes · 31 views