Per Berglund

969 posts

@BergetP

Joined August 2010

865 Following · 56 Followers
Fredrik Hjelm
Fredrik Hjelm@FredrikHjelm4·
Got asked about a "Billionaire Tax"

My answer: "an incredibly stupid idea"

Capital is mobile. Labour is not. The workers always pay.

When the billionaires leave, the tax base shrinks and the workers are stuck holding the bill.
Fredrik Hjelm@FredrikHjelm4

Sweden: we are a high-tax socialist country, everyone contributes, we take care of each other.

The income tax story is real. It starts at 30%, hits 55% at the top; add employer social fees at 31.4% and you're north of 65% fully loaded on senior salaries.

Also Sweden:
>inheritance tax abolished 2004
>gift tax abolished 2004
>wealth tax abolished 2007
>property tax capped at $900/year regardless of what your home is worth
>no tax on unrealized gains
>borrow against your holdco personally and live on the loan tax free
>capital gains at 20-30%, only when you actually take money out
>ISK accounts (think Roth IRA, but works for unlisted assets too; no capital gains tax on the inside)

I spent years believing the story. Running a company and making some money changed that.

The big families didn't build dynasties despite the tax system. They built them, across Social Democrat and centre-right governments alike, because the rules never changed when the party did.

I ran the California comparison. Top income tax is similar pain, roughly 50% combined. But California taxes capital gains as ordinary income, around 37% combined, and the federal estate tax hits 40% above $14M. Sweden is more capital-friendly than California in almost every category that matters for building generational wealth.

The story Sweden tells about itself is not the real story. It punishes labor and protects capital, same as everywhere else. Just with better parental leave, so nobody complains.

Look, the low capital taxes are actually good policy. Abolishing the inheritance tax brought capital back, and the data supports it. But the gap between how labor and capital get taxed is hard to justify on fairness grounds. A flatter, more harmonized rate between the two would be simpler and more honest.

It would also save the country billions of hours in admin overhead, both for individuals navigating the rules and for the civil service enforcing them.

Why not just do a flat rate across the board? Seems easier.

15
41
466
36.9K
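The "north of 65% fully loaded" figure in the thread above is a two-line calculation. A quick sketch, using the tweet's own rates (roughly 31.4% employer social fees, 55% top marginal income tax) rather than official tax tables:

```python
# Fully loaded marginal tax: share of the employer's total cost that goes to
# tax. Rates below are the thread's figures, not official Swedish tax tables.
EMPLOYER_FEE = 0.3142        # employer social fees, charged on top of gross salary
MARGINAL_INCOME_TAX = 0.55   # top marginal income tax on salary

def fully_loaded_rate(income_tax: float, employer_fee: float) -> float:
    """At the margin, (total employer cost - employee take-home) / total cost."""
    cost = 1 + employer_fee   # employer pays this per 1 unit of gross salary
    net = 1 - income_tax      # employee keeps this per 1 unit of gross salary
    return (cost - net) / cost

rate = fully_loaded_rate(MARGINAL_INCOME_TAX, EMPLOYER_FEE)
print(f"{rate:.1%}")  # ≈ 65.8%
```

Per 100 kr of gross salary the employer pays 131.42 kr and the employee keeps 45 kr, so roughly two thirds of the total labor cost goes to tax, consistent with the thread's claim.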
Per Berglund reposted
Fredrik Hjelm
Fredrik Hjelm@FredrikHjelm4·
X and community notes really democratize freedom of speech, which in the old world was monopolized by "old media" and public service. Power to the people!
16
41
477
14.1K
Per Berglund reposted
lars rudström
lars rudström@lars_rudstrom·
Having a monthly salary of 551,000 kr at a company whose mission is to sell as little as possible of its products must be a dream job, right? Or what?
lars rudström tweet media
52
69
1.1K
97.8K
Per Berglund reposted
Ekonomigurun 𝕏 🇸🇪
Ekonomigurun 𝕏 🇸🇪@ekonomigurun_·
A savings tip and tax fraud in one 🧠 When they ask "eating here or takeaway?", you answer "takeaway" - bam 💥 the VAT is halved (from 12% to 6%), the hot dog gets more expensive, and if you stay and eat it there anyway, you're committing tax fraud 🤣 Can anyone explain why takeaway and dine-in should carry different VAT?!
Ekonomigurun 𝕏 🇸🇪 tweet media
23
3
101
38.9K
Per Berglund reposted
Andrej Karpathy
Andrej Karpathy@karpathy·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/ and backlinks, and then it categorizes the data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data in the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I would have to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian.

You can imagine many other visual output formats depending on the query. Often I end up "filing" the outputs back into the wiki to enhance it for further queries, so my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), and find interesting connections for new article candidates, to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data. E.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning, to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on via various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.5K
6K
51K
17.4M
Per Berglund reposted
Cheng Lou
Cheng Lou@_chenglou·
My dear front-end developers (and anyone who’s interested in the future of interfaces): I have crawled through depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly at least in concept): Fast, accurate and comprehensive userland text measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurements and reflow
1.3K
8.3K
65.1K
23.4M
Per Berglund reposted
Hanif Bali
Hanif Bali@hanifbali·
Isabella Lövin (MP) and Maria Wetterstrand (MP) presented their inquiry on biofuel for aviation this morning. Good news for Cortus Energy, which produces exactly this kind of biofuel technology. Guess who owns 600K and sits on the board of Cortus Energy? Yep, Maria Wetterstrand.
Hanif Bali tweet media
272
1.4K
5.8K
0
Per Berglund
Per Berglund@BergetP·
@Stort_allvar Electric aviation is going to be fantastic. First regional routes, then continental. Yes, it will take a long time before it replaces transatlantic flights, but it doesn't need to. Electric aviation will become the cheapest and most environmentally friendly mode of transport, all things considered.
0
0
0
31
Mats Yxhäll
Mats Yxhäll@Stort_allvar·
Battery-powered aircraft can never replace conventional aircraft. The batteries are too heavy and the range too short, just like with cars. On certain short routes, battery-powered aircraft can work.
Mats Yxhäll tweet media
42
3
84
3.7K
Mats Yxhäll
Mats Yxhäll@Stort_allvar·
@LarsPii Wrong. Electric cars, for example, are nothing new; they were even more common than fossil-fuel cars over 100 years ago, but they failed on the same shortcomings as today: the energy store gets too heavy, and that is especially acute in aviation.
2
0
4
47
Per Berglund reposted
Tesla Europe, Middle East & Africa
Together with RDW, we have officially completed the final vehicle testing phase for Full Self-Driving (Supervised) and have submitted all documentation required for the UN R-171 approval + Article 39 exemptions.

The RDW team is now reviewing the documentation and test results package internally. They have communicated an expected Netherlands approval date of 4/10, shifted from 3/20 previously, and we look forward to the successful completion of this cooperation.

Following the Netherlands' approval, other European countries will be able to recognize it nationally. We are anticipating a possible EU-wide approval during the summer.

Over the past 18 months, this approval has involved a series of intense documentation, development, testing, research and audit efforts, including but certainly not limited to:
– 1,600,000+ km of FSD (Supervised) testing on EU roads
– 13,000+ customer sales ride-alongs
– 4,500+ track test scenario executions
– Thousands of pages of written documentation for 400+ compliance requirements
– Dozens of research studies into safety performance/results

We're extremely proud of the work conducted with the RDW team up to this point. We very much look forward to the approval in April and to sharing FSD (Supervised) with our patient EU customers!
760
1.9K
12.1K
21.7M
Per Berglund
Per Berglund@BergetP·
@Autonic_ @adamdanieli More electricity use is good. Electricity producers get paid, which lets us invest in more generation, such as nuclear. But of course these industries should not be subsidized, which I seem to recall was done with Facebook and the like.
1
0
0
23
Autonic
Autonic@Autonic_·
@BergetP @adamdanieli That's not quite what I'm saying... but you obviously have to consider how much it is worth to the city/municipality that electricity prices rise 30% for residents, while almost no new jobs are created (other than during construction)?
1
0
0
22
Autonic
Autonic@Autonic_·
@adamdanieli Well, sure, it is an establishment, but it generates almost no jobs and drives up electricity prices enormously. There is no local public benefit to speak of: it is primarily a way for the responsible politician to get in better with rich and powerful people.
1
0
0
69
Per Berglund reposted
Andrej Karpathy
Andrej Karpathy@karpathy·
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday, and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was an already fairly well-tuned project. This is a first for me, because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of what I do daily for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experimental results and used them to plan the next experiments. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I hadn't found them manually, and they stack up and actually improved nanochat. Among the bigger findings:
- It noticed an oversight that my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course - you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
Andrej Karpathy tweet media
973
2.1K
19.5K
3.6M
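The loop described above (propose a change, run a short training job, keep it only if validation loss improves) can be sketched in a few lines. This is a toy illustration with stand-in functions, not nanochat's actual tuning harness:

```python
# Toy autoresearch loop: greedy hill-climbing over a hyperparameter config.
# propose_change and train_and_eval are stand-ins; in the real workflow an
# LLM agent proposes changes and a short training run scores them.
import random

def propose_change(config: dict) -> dict:
    """Stand-in for the agent: perturb one tunable knob at random."""
    candidate = dict(config)
    key = random.choice(list(candidate))
    candidate[key] *= random.choice([0.5, 0.8, 1.25, 2.0])
    return candidate

def train_and_eval(config: dict) -> float:
    """Stand-in for a short training run; returns a fake validation loss
    whose optimum is lr=3e-4, wd=0.1 (made-up numbers for the demo)."""
    return abs(config["lr"] - 3e-4) * 1e3 + abs(config["wd"] - 0.1) * 10

def autoresearch(config: dict, rounds: int = 200) -> tuple[dict, float]:
    best_loss = train_and_eval(config)
    for _ in range(rounds):
        candidate = propose_change(config)
        loss = train_and_eval(candidate)
        if loss < best_loss:  # keep only changes that improve val loss
            config, best_loss = candidate, loss
    return config, best_loss

config, loss = autoresearch({"lr": 1e-3, "wd": 0.05})
```

A real setup would replace the random perturbation with an agent that reads the full experiment history before proposing the next change, which is what makes the workflow described above more than hill-climbing.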
Carl-Magnus Uggla
Carl-Magnus Uggla@cmuggla·
@hansimalmo @YttraYttra2 Partly true. If the entire 31.42% in social fees had been tax, you would have been completely right. But it isn't. The largest part is fees/insurance, and only 11.62% is a general tax.
2
0
1
2K
Greta Andersson
Greta Andersson@YttraYttra2·
Anyone earning over 55,000 kr/month already pays more than half of their salary in tax. Magdalena Andersson doesn't think this is enough; she wants to raise taxes on the "rich". The question is what becomes of a country where effort doesn't pay.
104
60
1.1K
106.4K
Per Berglund reposted
Andrej Karpathy
Andrej Karpathy@karpathy·
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks, but imo coding agents basically didn't work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity, and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home, so I wrote: "Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me". The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report, and it was just done. I didn't touch anything. All of this could easily have been a weekend project just 3 months ago, but today it's something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You're not typing computer code into an editor the way things have been since computers were invented; that era is over. You're spinning up AI agents, giving them tasks *in English*, and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.
It’s not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
1.6K
4.8K
37.2K
5.1M
Per Berglund reposted
Slöseriombudsmannen
Slöseriombudsmannen@sloseriombud·
Region Skåne confirms that it is paying 40 million kronor per month for the paused medical records system Millennium:
Slöseriombudsmannen tweet media
63
238
1.5K
107.8K