Víthor Rosa Franco

1.8K posts


@vthorrf

Assistant Professor in Psychometrics at São Francisco University 🇧🇷 #rstats #bayes #psychometrics #machinelearning

Brasil · Joined February 2014
478 Following · 343 Followers
Víthor Rosa Franco retweeted
Aakash Gupta @aakashgupta
A human consumes about 2,000 calories per day. Over 20 years, that’s roughly 17,000 kWh of total food energy. Training GPT-4 consumed an estimated 50 GWh of electricity. That’s 3,000 humans’ worth of “training energy” for a single model run.

And GPT-4 is already dead. OpenAI retired GPT-4o from ChatGPT on February 13th. The model that took 50 GWh to train got less than two years of flagship status before replacement. The human you spent 17,000 kWh “training” for 20 years produces economic output for the next 40 to 60 years. The amortization window on GPT-4 was shorter than a car lease.

Now look at what replaced it. GPT-5.2, released December 2025, is OpenAI’s current default. The GPT-5 series consumes an estimated 18 Wh per average query according to the University of Rhode Island’s AI Lab, and up to 40 Wh for extended reasoning. That’s 8.6 times more electricity per response than GPT-4. With 2.5 billion queries hitting ChatGPT daily and GPT-5.2 now the default model, the inference math gets staggering fast. Even at a blended average well below 18 Wh, you’re looking at daily electricity consumption that could power over a million American households.

This is what Altman is actually doing. OpenAI hit $13 billion in annual recurring revenue but still isn’t profitable. They need you to think of AI energy consumption as natural and inevitable, the same way you think about feeding a child, because the alternative framing is that they’re burning through enough electricity to rival small countries while racing to build 1-gigawatt Stargate data centers. The food analogy makes the energy costs feel biological and unavoidable instead of what they are: an engineering and business choice that scales with every model generation.

The comparison sounds clever at a fireside chat in India. It falls apart the second you do the arithmetic.
Chief Nerd @TheChiefNerd

🚨 SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”

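The arithmetic in the retweet above is easy to reproduce. Below is a minimal back-of-the-envelope check in R that takes the tweet's own figures at face value; the ~29 kWh/day per-household electricity figure is an added assumption (roughly the US residential average), not a number from the tweet.

```r
# Back-of-the-envelope check of the figures quoted in the retweet above.
# All inputs come from the tweet except household usage, which is an
# assumed ~29 kWh/day (roughly the US residential average).

kcal_per_day <- 2000
kwh_per_kcal <- 4184 / 3.6e6          # 1 kcal = 4184 J; 1 kWh = 3.6e6 J
years        <- 20

human_kwh <- kcal_per_day * kwh_per_kcal * 365 * years
human_kwh                              # ~17,000 kWh of food energy over 20 years

gpt4_training_kwh <- 50e6              # 50 GWh expressed in kWh
gpt4_training_kwh / human_kwh          # ~3,000 "human trainings"

wh_per_query    <- 18                  # estimated GPT-5 average per query
queries_per_day <- 2.5e9
daily_kwh <- wh_per_query * queries_per_day / 1000
daily_kwh                              # ~45 million kWh of inference per day

household_kwh_per_day <- 29            # assumption, not from the tweet
daily_kwh / household_kwh_per_day      # ~1.5 million US households
```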
Víthor Rosa Franco retweeted
Sridhar Ramesh @RadishHarmers
Generative AI is amazing at tasks where I am not qualified to judge the output.
Víthor Rosa Franco retweeted
Misha Teplitskiy | Science of Science
A famous study in science of science space took some papers *published* in prominent psych journals, changed authors' names/affiliations, and resubmitted them to the *same* journals. Allegedly only 8% of editors & reviewers detected the resubmissions. I keep staring at that 8% and thinking: that can't be quite right can it? Like, there must be some caveats/unreported aspects to that number?
Víthor Rosa Franco retweeted
Prof. Shamika Ravi @ShamikaRavi
This has been an open secret in the economics profession for decades. Several instances come to mind. Here’s one from the editor of an ‘A’ journal in 2009: “This is very good work, your model is neat and the empirical approach is novel. Unfortunately the data is from India, so not generalizable.”(!!) Lesson: we need more of our own journals & thank god for open source.
Philipp Heimberger @heimbergecon

This paper shows that authors from low-income countries remain excluded from top-ranked economics journals and receive less attention from other economists. Developing country authors are far less likely to be published in top journals even when holding citation counts constant.

Víthor Rosa Franco retweeted
Rishi Jha @rishi_d_jha
I’m stoked to share our new paper: “Harnessing the Universal Geometry of Embeddings” with @jxmnop, Collin Zhang, and @shmatikov. We present the first method to translate text embeddings across different spaces without any paired data or encoders. Here's why we're excited: 🧵👇🏾
Víthor Rosa Franco retweeted
Matter as Machine @matterasmachine
1/18 Today I will try to describe a mathematical trick that can logically explain what happens in Quantum Mechanics and Special Relativity.
Víthor Rosa Franco retweeted
Alex Imas @alexolegimas
There is a field experiment showing this exact effect. Introducing GPT tutors increases performance by *a lot*--students seem to be picking up the material much faster--but when GPT is removed those who had access perform *much worse* compared to those w/o access. 1/4
Lakshya Jain @lxeagle17

I'm teaching databases this semester at Berkeley. My students all seem unusually brilliant. Not many go to office hours, and not too many folks post on the course forum asking project questions. Weirdly, the exam had the lowest recorded average in my 10 semesters teaching it.

Víthor Rosa Franco retweeted
Florian Ederer @florianederer
Academics from poorer socio-economic backgrounds are more likely to:
- not publish
- have outstanding publication records
- introduce more novel scientific concepts
and are less likely to receive recognition, as measured by citations, Nobel Prize nominations, and awards.
Víthor Rosa Franco @vthorrf
Plus, there are two empirical examples thoroughly discussed with available R code and data! Of course, the examples only cover two specific cases that may not suit your own research interests. But I hope they will serve as inspiration for your next project 😊
Víthor Rosa Franco @vthorrf
This manuscript aids in the understanding of how data theory and experiments can lead to applications of RMTs not only as scaling methods (i.e., the assignment of numbers to observations), but also as means of testing meaningful aspects of psychological theories.
Víthor Rosa Franco retweeted
Santiago @svpino
Large Language Models don't reason. Thank you, Apple.
Víthor Rosa Franco @vthorrf
Our main motivation was to propose a way to develop Bayesian GGMs that could be as flexible as possible, not to test the efficiency of a specific model per se. But we did show, with a toy simulation and an empirical example, that our approach is viable for real-life applications.
Víthor Rosa Franco @vthorrf
#methods people on Xwitter: @SkeptPsych, Guilherme Wang, and I present to you our generalized approach to Bayesian Gaussian graphical models! From positive definite polychoric correlation matrices to model-based sparseness, we've got you covered 😎 advances.in/psychology/10.…
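For anyone new to the ingredients named here, the sketch below is a generic R illustration of going from ordinal items to a polychoric correlation matrix and then to a Bayesian Gaussian graphical model using off-the-shelf tools (the psych and BDgraph packages are assumed installed). It is not the generalized approach from the linked paper, just a minimal starting point.

```r
# Generic illustration only: ordinal items -> polychoric correlations -> Bayesian GGM.
# This is NOT the generalized approach from the linked paper; it relies on the
# off-the-shelf psych and BDgraph packages, which are assumed to be installed.
library(psych)
library(BDgraph)

set.seed(123)

# Simulate 300 respondents on 6 Likert-type items (5 categories each)
Sigma  <- diag(0.5, 6) + 0.5                      # compound-symmetric, positive definite
latent <- MASS::mvrnorm(300, mu = rep(0, 6), Sigma = Sigma)
items  <- apply(latent, 2, function(x) as.integer(cut(x, breaks = 5)))

# Polychoric correlation matrix for the ordinal items
pc <- polychoric(items)$rho
pc

# Bayesian graphical model via the Gaussian copula (handles ordinal data)
fit <- bdgraph(data = items, method = "gcgm", iter = 5000)

# Posterior edge-inclusion probabilities (sparseness of the estimated network)
plinks(fit)
```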