Sérgio Tavares 🇵🇹 🇪🇺 🇱🇻

@STavaresEU
Project management: Strategic Policy Development: Energy Transition, eMobility, Storage, EVs, Europe, Democracy and Diaspora-related fields.

A group of investigative journalists, including @michaeldweiss, obtained another intercepted conversation between Hungary’s FM Péter Szijjártó and Lavrov. It appears that Moscow may have been able to access virtually any information it sought from the EU/NATO via Budapest.

@RuiPaiva5 @amencario Exactly... For example, in this case, while this is excellent news, I usually ask: how many of these manufacturers have created their own brands, betting on direct-to-consumer sales? eco.sapo.pt/2023/09/14/por…

Bicycles account for almost three fifths of sporting goods exports... eco.sapo.pt/2026/04/02/bic…

Some statistics from Hungary, where an election is coming: Viktor Orbán, a friend of Putin, has been in power for 16 years. A corruption index ranks Hungary at the very bottom of the EU table, in 84th place (Denmark ranks 1st). The opposition has a 9-point lead in the polls.

China is just absolutely steamrolling everyone. You'll see a massive push into renewables in the coming months. And who's going to be able to take advantage of that, after years of subsidizing wind, solar and EVs? China. There isn't even a competition, they've just won.

The thing is, these are not new technologies. China has already been working on all of them: next-gen batteries, eVTOL, aircraft engines, advanced chips, etc. What's new is that their inclusion in China's latest Five-Year Plan gives them political and strategic weight. "Accelerate here."

Potential centre-right candidate Philippe is projected to beat far-right candidate Bardella in a presidential run-off

Tesla (TSLA) reportedly in talks to buy $2.9B in Chinese solar equipment for 100 GW US push electrek.co/2026/03/20/tes… by @fredlambert

This is how European political groups voted on Chat-Control.
Green: Stopping Chat-Control
Red: Allowing Chat-Control
The difference was one vote.

Google dropped the TurboQuant paper yesterday morning. 36 hours later it's running in llama.cpp on Apple Silicon, faster than the baseline it replaces.

the numbers:
- 4.6x KV cache compression
- 102% of q8_0 speed (yes, faster: smaller cache = less memory bandwidth)
- PPL within 1.3% of baseline (verified, not vibes)

the optimization journey:
739 > starting point (fp32 rotation)
1074 > fp16 WHT
1411 > half4 vectorized butterfly
2095 > graph-side rotation (the big one)
2747 > block-32 + graph WHT. faster than q8_0.

3.72x speedup in one day, from a paper I read at dinner last night.

what I learned along the way:
- the paper's QJL residual stage is unnecessary. multiple implementations confirmed this independently
- Metal silently falls back to CPU if you mess up shader includes. cost me hours
- "coherent text" output means nothing. I shipped PPL 165 thinking it worked. always run perplexity
- ggml stores column-major. C arrays are row-major. this will ruin your afternoon

everything is open source: the code, the benchmarks, the speed investigation logs, the debugging pain, all of it. github.com/TheTom/turboqu…

paper to parity in 36 hours. what a time to be alive.

Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI
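The claimed 6x cache reduction can be sanity-checked with back-of-envelope arithmetic. A sketch using hypothetical model dimensions (32 layers, 32 heads, head dim 128, fp16 baseline, 8192-token context; none of these numbers come from the blog post):

```python
# Per-token KV cache: 2 tensors (K and V) per layer,
# each holding n_heads * head_dim values.
n_layers, n_heads, head_dim = 32, 32, 128  # hypothetical 7B-class model
bytes_per_val = 2                          # fp16 baseline

per_token = 2 * n_layers * n_heads * head_dim * bytes_per_val
print(per_token)  # 524288 bytes = 512 KiB per token

ctx = 8192  # example context length
baseline_gib = per_token * ctx / 2**30
compressed_gib = baseline_gib / 6  # the claimed >= 6x reduction
print(baseline_gib)  # 4.0 GiB uncompressed at full context
```

At these (assumed) dimensions, a full 8K context costs 4 GiB of KV cache uncompressed and roughly 0.67 GiB after a 6x reduction, which is the difference between fitting and not fitting on a consumer GPU or unified-memory laptop.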