evil malloc

84 posts

@evil_malloc

std::bad_alloc

Poland · Joined July 2016
41 Following · 36 Followers
Rinav
Rinav@rrrinavH·
@evil_malloc @schteppe auto * obj = new ..; unique_ptr p1 = obj; unique_ptr p2 = obj; Yesssss, good to go.
English
1
0
0
19
Stefan
Stefan@schteppe·
C++’s new memory safety features
English
42
107
3.3K
213.3K
evil malloc
evil malloc@evil_malloc·
@niewaznyanonim @PrzemekShura There was propaganda too, and not only on their side. But honestly, what would the incentive be? If they want to receive as much aid as possible from others, they should pretend they're taking a beating
Polish
1
0
4
62
NieważnyAnonim
NieważnyAnonim@niewaznyanonim·
@evil_malloc @PrzemekShura I mean the message conveyed in the post, with no confirmation beyond the words of the UA side, who quite often stray from the truth, because propaganda 🙂
Polish
1
0
0
217
Takeshi Kovacs
Takeshi Kovacs@PrzemekShura·
Russia launched an assault on 7 axes. The Third Corps reports that in just 4 hours it thwarted the largest attempted mechanized offensive by the Russian Federation! The RF army failed to take a single settlement or position. For the breakthrough attempt on the Lyman-Borova axis the occupiers committed forces of the RF 1st Tank Army and 20th Combined Arms Army, but the enemy began taking losses before it even reached the line of contact. Enemy equipment losses: 84 motor vehicles, 11 IFVs and armored personnel carriers, and 3 tanks. A TOS "Suncepiok" system and 5 guns were hit. Over 160 enemy drones were destroyed. Over the course of the day, the corps' brigades and regiments eliminated 405 enemy soldiers: 288 are irrecoverable losses (killed), the rest sanitary losses (wounded). "We had prepared: defensive actions were worked out on the various axes, and each brigade had its own plan for repelling an attack. The corps coordinated these actions, and as a result the strikes were thwarted on every axis," says Brigadier General Andriy Biletsky.
Takeshi Kovacs tweet media
Polish
29
160
1.4K
43K
evil malloc
evil malloc@evil_malloc·
@niewaznyanonim @PrzemekShura But believe what, exactly? This is the best-documented conflict in the world; from one day to the next you can find footage from both sides online.
Polish
1
0
7
222
Grok
Grok@grok·
This looks like ray marching with signed distance functions (SDFs). For every char position in the terminal grid, you cast a ray from the camera, step it through a procedural 3D world (math-defined terrain/caves/objects), find the nearest surface hit, then map depth/lighting/normal to an ASCII char for shading and structure. Real-time because it's CPU-efficient and low-res. Classic demoscene trick! What lib/language is it in?
English
1
0
0
192
Benji Taylor
Benji Taylor@benjitaylor·
My current pointless evening project (everyone should have one) is a real-time ASCII art engine where you can walk through procedural worlds and interact with your surroundings. Zero purpose to this, but fun to make!
English
244
240
5.3K
274.1K
evil malloc
evil malloc@evil_malloc·
@tsoding Can you use the preprocessor for conditional compilation like this? @grok
English
1
0
0
3.1K
Тsфdiиg
Тsфdiиg@tsoding·
Very often I want to temporarily disable a piece of code. I comment it out, but then I'm faced with a problem: since the code is never compiled, it gets "stale". Some functions it uses may have changed, and it is never type checked. So the next time I enable it, it doesn't compile and I spend a lot of time fixing it. The solution I came up with so far is to "comment out" the code with a runtime `if (0)`. The code will never be executed, and the optimizer will very likely eliminate it entirely, but before doing so the compiler will type check it and force me to fix it on the spot.
Тsфdiиg tweet media
English
161
103
4.5K
285.2K
Donny Wals 👾
Donny Wals 👾@DonnyWals·
Holy crap Gemini 3.1 totally just one-shotted this entire workout tracker with a 2 sentence prompt. 🤯🤯🤯
English
84
24
936
163.5K
evil malloc
evil malloc@evil_malloc·
@D96013119 @clashreport Compare purchasing power. Haha Learn to use Google to find examples of Polish products / startups. Haha Account for the fact that Poland was not a beneficiary of the Marshall Plan like Western Europe. Haha Learn 19th/20th century history to have broader context. Haha
English
5
1
63
1.1K
David 🇪🇸🇮🇹
David 🇪🇸🇮🇹@D96013119·
@clashreport Hahaha Clowns. Average salary in Poland? 1200 euro? Minimum salary? Haha What if the EU stops supporting Poland with financial funds? Hahaha Name me some Polish products? Haha
English
36
0
19
4.2K
Clash Report
Clash Report@clashreport·
Poland’s finance minister says the country will match the UK’s living standards (PPP GDP per capita) within 5–6 years. Finance Minister Andrzej Domański: I accept this challenge: five–six years to catch up with the U.K. in price-adjusted GDP per capita. In five years, we will catch up with Great Britain.
Clash Report tweet media
English
91
109
1K
74.2K
varun
varun@varunneal·
Is multi-epoch training still a thing or is it universally seen as catastrophic to train on already seen data now?
English
6
1
58
13.9K
Muyu He
Muyu He@HeMuyu0327·
As I want to understand how @deepseek_ai v3.2 "fixed" GRPO, I break down the math behind the KL term.

What is in the original GRPO: the DS math/R1 papers borrowed what @johnschulman2 calls the k3 estimator. Starting from the true KL term (in the green square, "the k1 estimator") that pushes the policy model closer to the reference model, there is a harmless second term (in the yellow square) that has an expectation of 0, which means no bias is introduced. The role of this second term is to offset the k1 estimator, so that while the expectation stays the same, the variance of the KL estimate is much lower. This is because the second term (the "r-1") can be shown to negatively correlate with the KL term (the "log r").

The problem with GRPO: a true KL samples the rollouts from the policy model, but in reality, over several optimization steps, we are sampling from what is, relative to the current step, a previous policy. This makes GRPO off-policy, and the KL term loses its meaning.

The fix: the new formula simply applies importance sampling, which reweights each sample so that its contribution matches the probability it would have under the current policy model. There is then no more bias in the KL estimation, because each sample is weighted with the correct probability density.

The leftover: since the sampled rollouts still come from the previous policy model, the KL estimate has slightly higher variance, but there is still no bias. As a result, training becomes more stable.

The paper is quite fascinating even after two weeks. Next I want to take a deeper look at the DeepSeek attention mechanism and potentially do some interp stuff on it!
Muyu He tweet media (×3)
English
11
76
592
37K
evil malloc reposted
blue
blue@bluewmist·
major cheat code in life: be the one who reaches out. text first. call first. plan first. initialize first. most people wait to be chosen. be the chooser. connection requires initiative. friendship requires effort. love requires action. stop waiting to be picked. start picking. initiative is attractive.
English
202
4K
41.6K
1.4M
evil malloc
evil malloc@evil_malloc·
@RpsAgainstTrump I have a theory that Putin will seek asylum in the USA. He knows that he fucked up good and he wants to resign, but he also knows that he would be hanged in Russia. So he will promise Trump that he won't release the Epstein pdf shit he has on Mr Orange, in exchange for shelter
English
2
0
3
263
Republicans against Trump
Republicans against Trump@RpsAgainstTrump·
Trump, caught on a hot mic about Putin: “I think he wants to make a deal for me. Do you understand? As crazy as it sounds.”
English
368
653
3.1K
545.7K
evil malloc reposted
Boring_Business
Boring_Business@BoringBiz_·
This is easily my favorite clip of Bill Ackman > His fund was down 30%+ > Being sued by Valeant Pharma investors > Going through divorce with his wife > Elliot, an activist firm, trying to take over his fund Here is his advice on how to deal with the tough moments in life
English
130
1.3K
10.8K
1.4M
evil malloc
evil malloc@evil_malloc·
@a1zhang @OfirPress Cool work. How do you deal with the latency? Do you let it run, or do you make predictions frame by frame?
English
0
0
0
22
alex zhang
alex zhang@a1zhang·
Claude can play Pokemon, but can it play DOOM? With a simple agent, we let VLMs play it, and found Sonnet 3.7 to get the furthest, finding the blue room! Our VideoGameBench (twenty games from the 90s) and agent are open source so you can try it yourself now --> 🧵
English
21
51
412
75.4K
Jeffrey Emanuel
Jeffrey Emanuel@doodlestein·
If Hotz can actually pull this off and get the same performance via PyTorch as CUDA but without actually using CUDA, it opens the floodgates to alternative hardware (especially AMD’s), particularly on the inference side. There’s no fundamental reason why CUDA must remain a moat.
the tiny corp@__tinygrad__

What is tinygrad? tinygrad is a formalist project. It attempts to capture the full gamut of software 2.0 in a non-leaky abstraction.

The methods on the Tensor class create a directed graph of immutable RISC UOps defining what the computation is. Tensor is a frontend; in addition we have an ONNX frontend and a PyTorch frontend. Whether you code in tinygrad, torch, or import ONNX models, it all boils down to the same very simple UOp graph, which you can see with VIZ=1. This graph contains nothing like matmul or conv; it's just movement ops, elementwise ops, and reduction ops. Seriously, try VIZ=1.

Below that, there's a scheduler which breaks that graph up into kernels. Then we do more graph transforms on each kernel subgraph until we have code which can run on an accelerator. See the kernels with DEBUG=2. Then we have runtimes capable of running that code. For AMD, our runtime goes all the way to the physical hardware; we are mmaping the PCIe bus and peeking and poking it. It's all in Python, but it is fast because once you have the graph compiled, you are running the same graph over and over; just ringing a doorbell.

The hope is that, similar to Linux and LLVM, we will prevent a major source of rent seeking in our AI future. By clearly and simply specifying the job, being able to precisely spec what is bought and sold, you can have a fair marketplace for compute.

By the end of the year, we should be similar in speed on NVIDIA to the existing torch CUDA backend, except without CUDA. We will also have a test cloud up where you can run jobs from any of the three frontends. You don't want to rent a GPU per hour on a machine; you want to rent a couple FLOPS in a lambda function. That's what the OpenAI API is. Now offer it decoupled from the specific model.

English
23
82
1.7K
163.3K
Krishna Mohan
Krishna Mohan@KMohan2006·
Model size is such a crazy parameter, how do people even decide it? Is it intuitive or experimental 🤔
English
5
1
21
1K
evil malloc
evil malloc@evil_malloc·
Tokenization issues still impact SOTA models in unexpected ways🤔
evil malloc tweet media
English
1
0
0
164
evil malloc
evil malloc@evil_malloc·
@huggingface Ultra-playbook Cheatsheet converted to png for easy access:
evil malloc tweet media
English
1
0
0
63