PB @planobr

3.4K posts

The 🍤 that 😴, the 🌊 carries away.

São Paulo · Joined November 2013
2K Following · 269 Followers
PB reposted
Piu Esportes @piuesportes
⛷️ Lucas Pinheiro Braathen's last six giant slalom races:
🥈 World Cup, Alta Badia 🇮🇹
🥈 World Cup, Adelboden 🇨🇭
🥈 World Cup, Schladming 🇦🇹
🥇 Olympic Games, Bormio 🇮🇹
🥇 World Cup, Kranjska 🇸🇮
🥇 World Cup, Lillehammer 🇳🇴
🔮 Crystal Globe
[image]
1 reply · 44 reposts · 440 likes · 10.3K views
PB reposted
Andrej Karpathy @karpathy
I am unreasonably excited about self-driving. It will be the first technology in many decades to visibly terraform outdoor physical spaces and ways of life. Fewer parked cars. Fewer parking lots. Much greater safety for people in and out of cars. Less noise pollution. More space reclaimed for humans. Human brain cycles and attention capital freed up from “lane following” for other pursuits. Cheaper, faster, programmable delivery of physical items and goods. It won’t happen overnight, but there will be the era before and the era after.
793 replies · 2K reposts · 21.9K likes · 1.5M views
PB @planobr
claude: the best at making right what is wrong. codex: the best at making it right in the first place.
0 replies · 0 reposts · 0 likes · 17 views
PB reposted
Ali Hatamizadeh @ahatamiz1
Are you ready for web-scale pre-training with RL? 🚀🔥

New paper: RLP: Reinforcement Learning Pre-training

We flip the usual recipe for reasoning LLMs: instead of saving RL for post-training, we bring exploration into pretraining.

Core idea: treat chain-of-thought as an action, and reward it by the information gain it provides for the very next token. This gives a verifier-free, dense reward on ordinary text, with no task checkers, no labels, no filtering.

Why this matters:
* 🧠 Models think before predicting during pretraining, not just after alignment.
* 📈 Position-wise credit at every token = a stable signal at full web scale.
* 🔁 No proxy filters or "easy-token" heuristics. Trains on the entire stream.

Results on the 8-benchmark math+science suite (AIME'25, MATH-500, GSM8K, AMC'23, Minerva Math, MMLU, MMLU-Pro, GPQA):
• Qwen3-1.7B-Base: RLP improves the overall average by 24%!
• Nemotron-Nano-12B-v2-Base: RLP improves the overall average by 43%!

📄 Paper: tinyurl.com/rlp-pretraining
✍️ Blog: research.nvidia.com/labs/adlr/RLP/

#AI #LLM #ReinforcementLearning #ChainOfThought #Pretraining #RLP
[image]
25 replies · 112 reposts · 736 likes · 120.7K views
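Since the tweet above describes the mechanism only in prose, here is a minimal sketch of the core quantity as I read it: the sampled chain-of-thought is rewarded by how much it improves the log-likelihood of the next ground-truth token. This assumes a Hugging Face-style causal LM; `rlp_reward` and all argument names are illustrative, and the paper's actual estimator (baselines, reference models, batching) may well differ.

```python
import torch

def rlp_reward(model, context_ids, cot_ids, next_token_id):
    """Illustrative information-gain reward for a sampled chain-of-thought:
    log p(next | context, cot) - log p(next | context).
    A sketch of the idea in the tweet, not the paper's implementation."""
    with torch.no_grad():
        # Log-prob of the next ground-truth token WITHOUT the thought.
        logits = model(context_ids.unsqueeze(0)).logits[0, -1]
        logp_base = torch.log_softmax(logits, dim=-1)[next_token_id]

        # Log-prob of the same token WITH the thought appended to context.
        with_cot = torch.cat([context_ids, cot_ids]).unsqueeze(0)
        logits_cot = model(with_cot).logits[0, -1]
        logp_cot = torch.log_softmax(logits_cot, dim=-1)[next_token_id]

    # Dense, verifier-free reward: positive iff the thought helped prediction.
    return (logp_cot - logp_base).item()
```

Because the reward is just a difference of log-probabilities on ordinary text, it needs no labels or task checkers, which is what would make it usable at pretraining scale.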
PB reposted
Ethan Mollick @emollick
The jump from "agents are nowhere close to working" to "okay, narrow agents for research and coding work pretty well" to (very recently) "general-purpose agents are actually useful for a range of tasks" has been quick enough (less than a year) that most people have missed it.
65 replies · 129 reposts · 1.7K likes · 164.1K views
PB @planobr
@EasyCodeAI yeah, and effective vibe coding is anything but glamorous.
0 replies · 0 reposts · 0 likes · 2 views
OpenBuilder @theopenbuilder
@planobr Kinda feels like it, doesn't it? AI's turning coding into a more accessible game, but there's still a lot of old-school skills needed to keep things running smoothly.
1 reply · 0 reposts · 0 likes · 28 views
PB @planobr
So vibe coding is just coding now
1 reply · 0 reposts · 2 likes · 35 views
PB @planobr
So, effective vibe coding is an hour writing specs, and then you let your agent work while you write another spec for another agent... so it is all specs (or context, as people say). Is this the '90s all over again? Not so vibey, I feel...
0 replies · 0 reposts · 0 likes · 21 views
PB reposted
H🐝 @hxqurl
Babygoat ❤️❤️❤️
[image]
74 replies · 436 reposts · 7.3K likes · 316.6K views
PB @planobr
You pay for some CC Max plan and what you receive: "5-hour limit reached ∙ resets 12pm". Do you think I work at Google? Or at some government agency? I work 8 hours minimum, bro. You should too. I don't go 100% Codex because CC is better at fixing parser and finance code.
0 replies · 0 reposts · 0 likes · 41 views
PB @planobr
Why Claude Code loves fallbacks so much... terrible.
1 reply · 0 reposts · 2 likes · 28 views
PB reposted
OpenAI @OpenAI
We've trained an unsupervised language model that can generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training: blog.openai.com/better-languag…
423 replies · 2.8K reposts · 12.2K likes · 0 views
PB reposted
ℏεsam @Hesamation
Vibe-coded AI startup 2025
[image]
102 replies · 1.9K reposts · 16.8K likes · 894.1K views
PB @planobr
@levelsio Asimov's Foundation Church Of Science
0 replies · 0 reposts · 0 likes · 12 views
@levelsio @levelsio
I asked ChatGPT a long time ago what a post-AGI future would look like. One of the things it said was that there would be AI cults, where people would believe the AI was their god or guru and feel it spoke to them from a supernatural place. I see multiple things like the one below happening recently, where people really think AI is their guru. Very interesting to see how this will evolve into actual AI cults, maybe.
taoki @justalexoki

generally sane people are legitimately going to lose their minds over things like this; this is just the beginning

97 replies · 39 reposts · 842 likes · 207.7K views
PB reposted
US Open Tennis @usopen
admin is standing and applauding that point.
2 replies · 9 reposts · 164 likes · 15.5K views
PB @planobr
@willccbb Bellman lives
0 replies · 0 reposts · 1 like · 39 views
will brown @willccbb
2026 we're bringing back discount factors
[image]
19 replies · 5 reposts · 221 likes · 15K views
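For anyone who misses the reference in this exchange: the discount factor γ the tweet jokes about is the one in the discounted return and the Bellman equation ("Bellman lives"). A standard textbook statement, not taken from either tweet:

```latex
% Discounted return with discount factor 0 \le \gamma < 1:
G_t = \sum_{k=0}^{\infty} \gamma^k \, r_{t+k+1}
% Bellman equation for the value of a policy \pi:
V^{\pi}(s) = \mathbb{E}_{\pi}\left[ r_{t+1} + \gamma \, V^{\pi}(s_{t+1}) \mid s_t = s \right]
```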
PB reposted
Rob Witwer @robwitwer
@MLSist There is something so very beautiful and quintessentially human about saving an animal for no other reason than life is precious. Well done. You have my enduring respect.
22 replies · 358 reposts · 13.1K likes · 282.6K views