std::panic
@stdpanic

7.4K posts

Goons TypeScript & C++ and watches @realmadrid play football; that sums up this account, I guess.

New Delhi · Joined July 2022
161 Following · 334 Followers

Pinned Tweet
std::panic @stdpanic ·
Jan–Feb recap:
• Built Chronos: a high-performance job scheduler (single-digit µs latency) github.com/aryan55254/Chr…
• Built Scruffy: a simple mark & sweep garbage collector github.com/aryan55254/Scr…
• Trips to Gaon, Baidyanath Dham, Vrindavan, Mathura & Agra
12 replies · 5 reposts · 62 likes · 1.8K views
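Scruffy's repo is only linked above, but the mark & sweep algorithm it names is easy to sketch: walk the object graph from the roots and flag everything reachable (mark), then drop whatever was never flagged (sweep). A toy illustration of the classic algorithm, not Scruffy's actual code; the `Obj` class and names are made up here.

```python
class Obj:
    """A toy heap object: a name plus outgoing references."""
    def __init__(self, name):
        self.name = name
        self.refs = []      # edges to other objects
        self.marked = False

def mark(obj):
    """Mark phase: depth-first walk from a root, flagging reachable objects."""
    if obj.marked:
        return
    obj.marked = True
    for child in obj.refs:
        mark(child)

def sweep(heap):
    """Sweep phase: keep marked objects, drop the rest, reset marks."""
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False    # ready for the next collection cycle
    return live

def collect(heap, roots):
    for r in roots:
        mark(r)
    return sweep(heap)

# a and b are reachable from the root; c is garbage
a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)
heap = collect([a, b, c], roots=[a])
print([o.name for o in heap])  # -> ['a', 'b']
```

A real collector like Scruffy also has to find roots (stack, registers, globals) and actually free memory; this only shows the reachability logic.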
std::panic @stdpanic ·
@maaz404 I didn't get any wishes from Modi ji, man
1 reply · 0 reposts · 2 likes · 9 views
maaz @maaz404 ·
Now I want our PM to email us Eid Mubarak
[image]
0 replies · 0 reposts · 2 likes · 69 views
std::panic @stdpanic ·
@flying_jatt45 @ryuzaki2401 Not really right wing or liberal or any specific thing; most people, regardless of political stance, are highly ignorant when it comes to looking through a different lens.
2 replies · 0 reposts · 1 like · 18 views
Ryuzaki @ryuzaki2401 ·
political discussions with some self-proclaimed liberals feel like smashing your head against a wall. their ignorance and dishonesty toward their own ideology is astonishingly annoying. they kept academia hijacked for decades, with no space for an alternate narrative. now that people are calling out the one-sided stories presented to indians for so long, and offering a different perspective, they're unable to breathe. that's indian liberals for you.
2 replies · 0 reposts · 8 likes · 143 views
std::panic retweeted
datavorous @datavorous_ ·
I managed to compress ~30GB of 10M vector embeddings into ~370MB because RAM prices are too high.

While rewriting my vector search library for the 3rd time, I decided to stress test it with 10 million 768-dim vector embeddings. My ThinkPad has 31GB usable RAM but the dataset is ~30GB uncompressed. I was just a few hundred MB from an OOM while loading the data for the first time XD. So I figured out how to implement Product Quantization. Github: github.com/datavorous/sph…

Using PQ, I split each high-dim vector into smaller subvectors, then approximate each subvector by its nearest centroid from a learned codebook. Because of that, we can store a tiny integer code per subspace instead of the raw floats. At query time, we reconstruct an approximation from those centroids and compute distances in that compressed space, which is what makes it lossy. For generating a candidate set, an ~80x reduction should amortize the decrease in accuracy (~83% recall@10-in-100), and a re-ranking post-processing step should help with that. I will be giving OPQ a shot next, which should improve recall by exploiting the fact that embedding dimensions are correlated.

It was really a cool month where I stripped down ~700 lines of unnecessary code and implemented PQ from scratch by reading only the paper itself.
[image]
1 reply · 3 reposts · 14 likes · 207 views
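The encode/decode step the tweet describes can be sketched in a few lines: split each vector into m subvectors, map each subvector to the index of its nearest centroid in a per-subspace codebook, and reconstruct by concatenating centroids. A minimal sketch, not the linked library's code; real PQ learns codebooks with k-means, while here centroids are just sampled from the data to keep it short.

```python
import random

def split(vec, m):
    """Cut a vector into m equal-length subvectors."""
    d = len(vec) // m
    return [vec[i * d:(i + 1) * d] for i in range(m)]

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_codebooks(vectors, m, k):
    """One codebook of k centroids per subspace (sampling stands in for k-means)."""
    books = []
    for s in range(m):
        subs = [split(v, m)[s] for v in vectors]
        books.append(random.sample(subs, k))
    return books

def encode(vec, books):
    """Replace each subvector with the index of its nearest centroid."""
    m = len(books)
    return [min(range(len(bk)), key=lambda j: sqdist(sub, bk[j]))
            for sub, bk in zip(split(vec, m), books)]

def decode(code, books):
    """Lossy reconstruction: concatenate the chosen centroids."""
    out = []
    for j, bk in zip(code, books):
        out.extend(bk[j])
    return out

random.seed(0)
dim, m, k = 8, 2, 4                 # tiny toy sizes for illustration
data = [[random.random() for _ in range(dim)] for _ in range(50)]
books = train_codebooks(data, m, k)
code = encode(data[0], books)       # m small integers instead of dim floats
approx = decode(code, books)        # approximation of data[0]
```

With realistic sizes (say 768 dims, m=96, k=256 so each code fits in one byte), a 3072-byte float32 vector shrinks to 96 bytes, a 32x reduction; the tweet's ~80x presumably comes from a different m/k split plus metadata savings.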
maaz @maaz404 ·
yo '29 folks stay away from my acc
3 replies · 0 reposts · 8 likes · 131 views
std::panic @stdpanic ·
@skshmgpt @TheByteDax 2 lakh 🙏😭 for that much the React Native team themselves would make you personal videos, and they're unpaid open-source folks anyway
0 replies · 0 reposts · 1 like · 3 views
saksham @skshmgpt ·
@TheByteDax i read that it's only for selected people (for free or some amount, i guess) and not for sale
1 reply · 0 reposts · 0 likes · 120 views
Daksh @TheByteDax ·
This has to be a joke, right?!
[image]
13 replies · 0 reposts · 31 likes · 1.4K views
std::panic @stdpanic ·
@VeritasErrant06 Neighbour aunties, or friends' aunties in general, go to a house or restaurant once in a while to chill, eat, drink and gossip, basically
1 reply · 0 reposts · 1 like · 10 views
std::panic retweeted
yashaswi. @pixperk ·
here it is. i implemented the complete "Google File System" paper in rust. [repo in replies]

raw TCP, custom framing, bincode serialization. pipelined writes through replica chains with serial-ordered commits. record append with overflow detection and cross-replica padding. COW snapshots that fork chunks on first write, not on snapshot. operation log with rotation, checkpointing, and crash recovery. shadow masters that replicate the oplog live and catch up from disk when they reconnect. namespace locking so concurrent file ops don't step on each other. lazy GC that hides deleted files, sweeps after a retention window, then cleans chunkservers via heartbeat. re-replication when a server goes down. two-phase chunk rebalancing: copy first, confirm via heartbeat, then delete.

reading the paper was the easy part. making all the pieces not break each other was THE project.
[images]
18 replies · 19 reposts · 230 likes · 6.1K views
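The "fork chunks on first write, not on snapshot" point is the heart of copy-on-write: a snapshot only bumps a reference count per chunk, and the copy is deferred until someone actually writes. A toy sketch of that idea under made-up names (`ChunkStore` and its methods are illustrative, not pixperk's API):

```python
class ChunkStore:
    """Toy chunk store with reference-counted copy-on-write."""
    def __init__(self):
        self.chunks = {}     # chunk_id -> bytes
        self.refcount = {}   # chunk_id -> number of files referencing it
        self.next_id = 0

    def create(self, data):
        cid = self.next_id
        self.next_id += 1
        self.chunks[cid] = data
        self.refcount[cid] = 1
        return cid

    def snapshot(self, chunk_ids):
        """Snapshot is O(1) per chunk: no copying, just extra references."""
        for cid in chunk_ids:
            self.refcount[cid] += 1
        return list(chunk_ids)

    def write(self, cid, data):
        """Fork the chunk on first write iff someone else still references it."""
        if self.refcount[cid] > 1:
            self.refcount[cid] -= 1
            cid = self.create(self.chunks[cid])  # private copy
        self.chunks[cid] = data
        return cid

store = ChunkStore()
file = [store.create(b"v1")]            # file points at chunk 0
snap = store.snapshot(file)             # snapshot shares chunk 0, nothing copied
file[0] = store.write(file[0], b"v2")   # first write forks a private chunk
print(store.chunks[snap[0]], store.chunks[file[0]])  # -> b'v1' b'v2'
```

In GFS proper, the master additionally revokes chunk leases before snapshotting and the fork happens on the chunkservers; this only shows the refcount bookkeeping.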
AADITYANSHA @aadityansha_06 ·
@stdpanic @xandrom_twt Sadly it won't, since processes are isolated, while in concurrency you switch between threads, which run inside the same process and share the same resources
2 replies · 0 reposts · 1 like · 20 views
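The shared-resources point in the reply is easy to demonstrate: threads live inside one process and mutate the same memory, which is also why they need locks. Separate processes would each get their own copy of `counter` instead. A minimal sketch:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        with lock:          # the lock exists *because* memory is shared
            counter += 1

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # -> 4000: every thread saw and updated the same variable
```

Spawning four *processes* with `multiprocessing` instead would leave the parent's `counter` at 0, since each child mutates its own isolated copy.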
maniac @xandrom_twt ·
concurrency sucks. wdym more processes won't make my program run faster??!
2 replies · 0 reposts · 6 likes · 109 views
Hrithik @Hrithikkj ·
@9elcapitano Ney deserved the Ballon d'Or ahead of Messi when you guys won the UCL in 2015.
3 replies · 0 reposts · 0 likes · 26 views
std::panic @stdpanic ·
done with the connection plane rn; the pooling and protocol encode/decode all work fine. gonna start on the logical plane today: SQL-to-AST parsing and consistent hashing
2 replies · 0 reposts · 12 likes · 96 views
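Consistent hashing, the second item on that list, is the standard way a proxy or pool routes queries so that adding or removing a backend only remaps the keys that lived on it. A minimal ring with virtual nodes, sketched with stdlib only; the node names and replica count are made up for illustration, not taken from the project above.

```python
import bisect
import hashlib

def _hash(key):
    """Stable 128-bit position on the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes
        self._keys = []   # sorted vnode positions
        self._map = {}    # position -> node name
        for n in nodes:
            self.add(n)

    def add(self, node):
        # each node owns many virtual positions to spread load evenly
        for i in range(self.vnodes):
            h = _hash(f"{node}#{i}")
            bisect.insort(self._keys, h)
            self._map[h] = node

    def remove(self, node):
        for i in range(self.vnodes):
            h = _hash(f"{node}#{i}")
            self._keys.remove(h)
            del self._map[h]

    def node_for(self, key):
        """Route a key to the first vnode clockwise of its hash."""
        h = _hash(key)
        idx = bisect.bisect(self._keys, h) % len(self._keys)
        return self._map[self._keys[idx]]

ring = Ring(["db1", "db2", "db3"])
owner = ring.node_for("user:42")   # deterministic, one of the three nodes
```

The payoff is locality under membership changes: `ring.remove("db2")` only moves keys that previously hashed to db2, while everything on db1 and db3 stays put.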
Shresth @monotonouslogx ·
Delhi, you have been absolutely pleasant this March. what the actual fuck is this weather!!!!
[image]
6 replies · 0 reposts · 30 likes · 437 views