bucca

28.8K posts

bucca

@bngnlul

flamengo, law, and machine learning. calm as buddha

Manaus, Brazil · Joined October 2010
1.2K Following · 676 Followers
bucca retweeted
autist
autist@litteralyme0·
[image]
30 replies · 2.7K reposts · 27K likes · 227.9K views
bucca retweeted
Cheng Lou
Cheng Lou@_chenglou·
My dear front-end developers (and anyone who’s interested in the future of interfaces): I have crawled through depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly at least in concept): Fast, accurate and comprehensive userland text measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurements and reflow
1.3K replies · 8.1K reposts · 64.2K likes · 22.7M views
sushi com abacate
sushi com abacate@sushicomabacate·
My beloved father. This case here tops all the others. It's really over for code 🤣 #bolhasec
[image]
Reesha@Reeshasx

one cool thing is that it's hosted on GitHub Pages, and I got the whole source code. I got the route to the profile with tracert:

C:\Users\rsha>tracert cinema.abrahub.com
Tracing route to abraham1152.github.io [185.199.111.153]

another thing: he made a .gitignore but didn't put .env in it, so there's a .env on GitHub with all the keys lol

10:42PM INF 206 commits scanned.
10:42PM INF scan completed in 8.79s
10:42PM WRN leaks found: 7

by the way, what ugly, badly written code, honestly. I was going to do a deeper analysis of the site, but it leaked in two minutes, so it's not even worth it.

69 replies · 66 reposts · 1.6K likes · 192K views
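The "leaks found" log lines in the quoted tweet look like output from a secret scanner such as gitleaks run against the repo's commit history. As a toy illustration of the idea only (these two patterns are hypothetical stand-ins, not gitleaks' actual rules), a minimal scanner in Python:

```python
import re

# Toy secret scanner in the spirit of gitleaks. The two patterns below are
# illustrative only; real scanners ship hundreds of tuned rules.
RULES = {
    "aws-access-key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic-api-key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),
}

def scan(text):
    """Return (line number, rule name) for every line matching a rule."""
    leaks = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                leaks.append((lineno, rule))
    return leaks

# A committed .env file like the one in the tweet lights up immediately
# (the key values here are fake):
env_file = 'API_KEY="sk-aaaaaaaaaaaaaaaaaaaa"\nAWS_ID=AKIAABCDEFGHIJKLMNOP\n'
print(scan(env_file))  # [(1, 'generic-api-key'), (2, 'aws-access-key')]
```

The point of the tweet stands regardless of tooling: a .gitignore entry only prevents *future* commits, so a .env that was ever committed stays in the history until it is rewritten and every key rotated.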
bucca retweeted
Claudio
Claudio@kazzkiq·
Go into your Claude Code, paste this prompt, and at the end write "ultrathink about this": You'll act as a professional code reviewer and tech lead for this whole codebase. Review all my project's code structure, each file, each page (if any), each endpoint (if any), each piece of logic, and write me a QUESTIONS.md with all the architectural, refactoring, and technical questions you found odd or need more info about. The idea here is for you to grasp what this project is about, what it's doing, and what it's lacking in terms of performance, architecture, and security. Any flaws, issues, bugs, or improvements must be reported in QUESTIONS.md as independent questions so I can answer them in this file, explaining what should be done and how, and what is a bug versus intended behavior. This file can be quite big (lots of questions), so don't be shy or lazy; write all the questions or attention points you find! Once you write the file, I'll answer QUESTIONS.md and prompt you again so you can start on the improvements in the codebase based on your own questions, answered by me in QUESTIONS.md.
[image]
20 replies · 64 reposts · 1.4K likes · 158K views
bucca retweeted
Ben Pescod
Ben Pescod@benpescod_·
Absolute icon. Never forget it. #LFC
[images]
200 replies · 16K reposts · 58K likes · 3.9M views
bucca retweeted
Google Research
Google Research@GoogleResearch·
Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI
[GIF]
1K replies · 5.8K reposts · 39K likes · 19M views
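The tweet doesn't describe TurboQuant's actual mechanism, but the basic reason quantizing a KV cache saves memory can be sketched with ordinary uniform int8 quantization: each float32 entry (4 bytes) is replaced by a 1-byte code plus a scale shared across the block. A minimal pure-Python illustration (not TurboQuant's algorithm, and nowhere near its claimed ratios):

```python
def quantize_int8(values):
    """Uniform symmetric quantization: one shared float scale + int8 codes."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid a zero scale
    codes = [round(v / scale) for v in values]        # each fits in one byte
    return scale, codes

def dequantize_int8(scale, codes):
    """Recover approximate floats from the shared scale and byte codes."""
    return [c * scale for c in codes]

vals = [0.5, -1.0, 0.25, 0.75]
scale, codes = quantize_int8(vals)
approx = dequantize_int8(scale, codes)
# Storage drops from 4 bytes per float32 value to 1 byte per code (plus one
# shared scale): a nominal 4x reduction. Sub-byte code widths push the ratio
# higher at the cost of more quantization error per entry.
```

Schemes with ratios like the 6x in the tweet have to go below one byte per entry, which is where the "zero accuracy loss" claim becomes the hard part.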
bucca
bucca@bngnlul·
😨
Andrej Karpathy@karpathy

Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords. LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwnd. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug that led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency, you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that get stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.

0 replies · 0 reposts · 0 likes · 31 views
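Karpathy's contagion point is a reachability question over the dependency graph: a poisoned package compromises every package whose transitive dependency closure contains it. A minimal sketch in pure Python (the graph below is illustrative, with edges loosely following the dspy → litellm example from the tweet):

```python
# Toy dependency graph; edges are illustrative, not real package metadata.
deps = {
    "my-app": ["dspy", "requests"],
    "dspy": ["litellm"],
    "litellm": [],
    "requests": [],
}

def transitive_closure(pkg, graph):
    """Every package that installing `pkg` pulls in."""
    seen, stack = set(), [pkg]
    while stack:
        for dep in graph.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

def affected_by(poisoned, graph):
    """Packages whose install would pull in the `poisoned` package."""
    return {pkg for pkg in graph if poisoned in transitive_closure(pkg, graph)}

print(sorted(affected_by("litellm", deps)))  # ['dspy', 'my-app']
```

In a real ecosystem the graph is enormous and the closure of a popular package is most of it, which is exactly why one poisoned upload fans out so far.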
bucca retweeted
vicky 🪷
vicky 🪷@v1ck0oo·
someone next to me said Luffy is never going to be King of the Pirates, I lost it on the bus, guys
104 replies · 2.9K reposts · 9.8K likes · 158.6K views
bucca retweeted
God of Prompt
God of Prompt@godofprompt·
🚨 BREAKING: Someone just open-sourced a full offline survival computer with AI, Wikipedia, and maps built in.

Project N.O.M.A.D. is an open-source offline survival computer. Self-contained. Zero internet required after install. Zero telemetry. Everything runs locally on your hardware.

What it includes:
→ Full Wikipedia archives via Kiwix
→ Offline maps via OpenStreetMap
→ Local AI models via Ollama + Open WebUI
→ Calculators, reference tools, resource libraries
→ A management UI to control everything from a browser

One curl command installs the entire system on any Debian-based machine. Runs headless as a server so any device on your local network can access it. Minimum specs to run the base system: dual-core processor, 4GB RAM, 5GB storage. To run local LLMs offline, you want 32GB RAM and an NVIDIA RTX 3060 or better.

No accounts. No authentication by default. No cloud dependency. No phone-home behavior. Built to function when nothing else does. The grid, the cloud, the API you depend on: none of it is guaranteed. The people building local-first systems right now are the ones who won't be asking for help when access disappears.
[image]
365 replies · 3.3K reposts · 24.3K likes · 4.9M views