M. Alex O. Vasilescu

6.2K posts

@AlexTensor

Developing #causal #tensor framework -TensorFaces, Human Motion Signatures | Alumna @MIT, @UofT | #womenwhocode

Los Angeles and New York · Joined April 2009
1.6K Following · 2.4K Followers
M. Alex O. Vasilescu retweeted
Zhengzhong Tu@_vztu·
So @icmlconf just desk-rejected all the papers whose authors have been detected to use LLMs for review. Insane
M. Alex O. Vasilescu retweeted
Rohan Paul@rohanpaul_ai·
Terence Tao explains that the math behind today's LLMs is actually simple. Training and running them mostly uses linear algebra, matrix multiplication, and a bit of calculus, material an undergraduate can handle. We understand how to build and operate these models.

The real mystery is why they work so well on some tasks and fail on others, and why we cannot predict that in advance. We lack good rules for forecasting performance across tasks, so progress is largely empirical.

A key reason is the nature of real-world data. Pure noise is well understood, perfectly structured data is well understood, but natural text sits in between, partly structured and partly random. Mathematics for that middle regime is thin, similar to how physics struggles at meso-scales between atoms and continua.

Because of this gap, we can describe the mechanisms but cannot yet explain capability jumps or give reliable task-level predictions. That mismatch, simple machinery versus hard-to-predict behavior, is the core puzzle.

Video from 'Dr Brian Keating' YT Channel (Link in comment)
Rohan Paul@rohanpaul_ai

Terence Tao on AI in Math. AI can synthesize a million papers and brute-test ideas. Humans can check just 5 examples and see the pattern. But as systems move toward world models, causal reasoning, and active learning, this efficiency gap will narrow.
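Tao's "mostly linear algebra" point is easy to make concrete: a single attention head, the core operation in a transformer layer, is nothing but three learned matrix multiplications, a softmax, and one more matrix product. A minimal NumPy sketch (the dimensions and random weights here are illustrative, not from any real model):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # One head of scaled dot-product attention: matrix
    # multiplications, one softmax, and a scaling constant.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
T, d = 4, 8                      # 4 tokens, 8-dim embeddings (toy sizes)
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                 # (4, 8): one output vector per token
```

Everything differentiable here is a matrix product, which is exactly why the training machinery is undergraduate-level math even when the behavior is not.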

M. Alex O. Vasilescu retweeted
Tori atheist@ToriatheistTori·
[image]
M. Alex O. Vasilescu@AlexTensor·
Ironically, #AI was supposed to accelerate research. Instead, it is slowing it down. One side effect of #LLMs is synonym churn for concepts that already have well-established names. LLMs do not understand that technical disciplines rely on precise, established terminology, not endless rewording. As a result, we are producing papers in which the only novelty is the vocabulary. There ought to be a penalty for that.
Yi Ma@YiMaTweets

In system theory, it is called "linearization"... which has been studied and used for decades. Honestly, folks, there is no need to invent or introduce any new terminology. Remember, there is rarely anything new under the sun...
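For readers outside control theory, the "linearization" Yi Ma refers to is the first-order Taylor approximation of a nonlinear dynamical system around an operating point: near that point, x' = f(x) behaves like x' ≈ A(x − x0) with A the Jacobian of f. A minimal numerical sketch (the pendulum model and its constants are my own illustrative choices, not from the thread):

```python
import numpy as np

# Nonlinear damped pendulum: theta'' = -(g/l) sin(theta) - b theta'
# State x = [theta, omega]; constants are illustrative.
g, l, b = 9.81, 1.0, 0.1

def f(x):
    theta, omega = x
    return np.array([omega, -(g / l) * np.sin(theta) - b * omega])

def jacobian(f, x, eps=1e-6):
    # Central-difference Jacobian: the linearization A = df/dx at x.
    n = len(x)
    A = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return A

x0 = np.zeros(2)                 # downward equilibrium
A = jacobian(f, x0)              # linearized system: x' ~= A (x - x0)
x = np.array([0.01, 0.0])        # small perturbation from equilibrium
print(np.allclose(f(x), A @ x, atol=1e-5))   # linear model matches locally
```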

M. Alex O. Vasilescu@AlexTensor·
One side effect of LLMs is synonym churn for concepts that already have well-established names. LLMs do not understand that technical disciplines rely on precise, established terminology, not endless rewording. As a result, we are producing papers in which the only novelty is the vocabulary. There ought to be a penalty for that. Ironically, AI was supposed to accelerate research. Instead, it is slowing it down.
Yi Ma@YiMaTweets·
In system theory, it is called "linearization"... which has been studied and used for decades. Honestly, folks, there is no need to invent or introduce any new terminology. Remember, there is rarely anything new under the sun...
Ying Wang@yingwww_

What is a good latent space for world modeling and planning? 🤔 Inspired by the perceptual straightening hypothesis in human vision, we introduce temporal straightening to improve representation learning for latent planning. 📑: agenticlearning.ai/temporal-strai…
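One common way to quantify how "straight" a latent trajectory is: the mean angle between successive displacement vectors, which is zero for a perfectly straight path. The sketch below is a generic illustration of that idea, not the paper's actual method or code:

```python
import numpy as np

def mean_curvature(Z):
    # Average angle (radians) between successive displacement
    # vectors of a trajectory Z of shape (T, d); 0 = perfectly straight.
    d = np.diff(Z, axis=0)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    cos = np.clip((d[:-1] * d[1:]).sum(axis=1), -1.0, 1.0)
    return float(np.arccos(cos).mean())

t = np.linspace(0, 1, 10)[:, None]
straight = t * np.array([[1.0, 2.0]])           # straight line in 2-D
curved = np.c_[np.cos(3 * t), np.sin(3 * t)]    # arc of a circle
print(mean_curvature(straight))  # ~0: straight trajectory
print(mean_curvature(curved))    # positive: bent trajectory
```

A "straightening" objective would push representations of natural sequences toward the first case.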

Toiu Oana@oana_toiu·
President of Romania, Nicușor Dan, and the President of Ukraine, Volodymyr Zelensky, met today in Bucharest and decided to elevate our bilateral relationship to the level of a Strategic Partnership, the highest level of cooperation between two states. This is a crucial moment between two neighboring countries.

In difficult times, we have strengthened our partnership and friendship, looking wisely at our shared history and at the responsibility to build trust between our peoples. Ukraine's fight to protect its land, people, and future is a struggle that also safeguards Europe and us. We remain committed to supporting their efforts to restore peace and security in the region, as well as their integration into the European Union.

The joint commitment to peace and prosperity in the region also has practical dimensions:
• Framework arrangement for cooperation in the energy sector
• Partnership for the development of the defense industry
• Mutual respect for the culture and identity of our peoples

As I announced with @andrii_sybiha during the first visit I took to Ukraine last year, we decided to work together so that the Romanian language has a dedicated day. After today's meeting of the Presidents, we can now say that August 31 will officially be that day every year (Decree 235/2026).
[image]
M. Alex O. Vasilescu retweeted
Judea Pearl@yudapearl·
@jonSnow757072 @ylecun I'm not sure about Rung-2. How do we estimate P(y|do(x),do(z)) ??
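For readers unfamiliar with the notation, P(y | do(x), do(z)) is the distribution of Y when both X and Z are forced to fixed values, cutting the causal arrows into them. When the structural equations are known (as in a simulation), the quantity can be read off directly by mutilating the model; the toy SCM below is my own illustration, not anything from Pearl's thread:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

def scm(do_x=None, do_z=None):
    # Toy linear SCM: U -> X, U -> Y, X -> Z -> Y.
    # do() overrides a variable's structural equation,
    # severing the arrows pointing into it.
    u = rng.standard_normal(N)
    x = u + rng.standard_normal(N) if do_x is None else np.full(N, do_x)
    z = 0.5 * x + rng.standard_normal(N) if do_z is None else np.full(N, do_z)
    y = z + 0.8 * u + rng.standard_normal(N)
    return y

# E[Y | do(X=1), do(Z=1)]: with Z held fixed, X no longer affects Y,
# so the truth here is 1 + 0.8*E[U] = 1.0.
print(scm(do_x=1.0, do_z=1.0).mean())   # ~1.0
```

Pearl's question is the harder one: identifying this Rung-2 quantity without access to the structural equations; the simulation only makes the target well defined.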
M. Alex O. Vasilescu retweeted
Judea Pearl@yudapearl·
Models specified by "action conditioned" knowledge, as in classical control theory, are surely "causal" but are not "world models" in the causal sense. The former requires specifying the effect of every action or combination of actions; the latter permits the derivation of such effects from a compact description of (causal relationships in) the world, not of actions.

To appreciate the difference, see "Does Obesity Shorten Life? Or is it the Soda?" ucla.in/2EpxcNU

Another way to appreciate the difference is to consider a city road map, as an example of a world model, and contrast it with GPS instructions, "Go right, left or straight", which contain the same information about approaching the destination, but not about handling unforeseen road blocks.
M. Alex O. Vasilescu retweeted
Irena Buzarewicz@IrenaBuzarewicz·
A comic by Bill Bramhall
[image]
M. Alex O. Vasilescu retweeted
European Conference on Computer Vision #ECCV2026
📢 Message from our #ECCV2026 Program Chairs: We fully recognize that ongoing conflicts and global challenges are creating difficult situations for many individuals, and we sincerely sympathize with those affected. 1/3
M. Alex O. Vasilescu@AlexTensor·
LLMs have made it obvious that very few questions are truly novel. LLMs are a revolution in library science, and they have made your literature search easy. You've got questions, LLMs have answers. No more re-inventing the wheel. For existing answers, the technical burden involves verifying the conditions under which those answers are true.
Dimitris Papailiopoulos@DimitrisPapail·
Had major intellectual dread when I realized that the technical burden required to go from question to answer (at least for ML research) basically became near zero, but converged to being delighted that we've finally arrived at the golden age of asking questions. What a time to be alive..
Dimitris Papailiopoulos@DimitrisPapail·
My precise feelings after I started using Claude Code and Codex
[image]
M. Alex O. Vasilescu retweeted
Andrei Bursuc@abursuc·
Deadline day today! Let’s go #eccv2026
[image]
M. Alex O. Vasilescu retweeted
AI Conference DL Countdown@DlCountdown·
RLC'26 (paper): 1 day + 1h. ECCV'26 (paper): deadline today, good luck (10h)! COLLAS'26 (abstract): 36 days. COLLAS'26 (paper): 41 days.
M. Alex O. Vasilescu retweeted
Dustin@r0ck3t23·
The intelligence we are building is not artificial. It never was. Microsoft Chief Scientific Officer Eric Horvitz just reframed the entire foundation of the AI arms race with one sentence. The tech industry calls it Artificial Intelligence. That word is wrong.

Horvitz: "I don't actually like the term artificial intelligence. I wish the field was called computational intelligence because I think it applies to biological nervous systems as well as machines, and together we can go far."

We are not building a digital imitation of the human brain. We are scaling the exact same computational physics that created biological awareness and transferring it into silicon. Your mind and a massive AI data center run on the same underlying rules. The transition isn't artificial. It is universal.

And here is where it gets deeply unsettling. Tech optimists always fall back on the same comfort. Humans hold the steering wheel. Our values guide the machine. Horvitz acknowledges this.

Horvitz: "We'll take a humanistic standpoint here, always being on top of things and guiding with our values and our preferences and our goals."

Then the caveat that changes everything.

Horvitz: "As much as they might be shaped over time by the machines we work with."

You cannot interact with a superintelligence at scale without it quietly rewiring your psychological baseline. The values you use to command the machine will be shaped by the machine you are commanding. The frameworks you use to perceive reality will be constructed by the system you believe you are directing. That feedback loop started the moment you asked an AI what to think about something. Most people haven't noticed yet.

Horvitz: "I think in our own lifetimes we will all experience incredible breakthroughs in understanding biology, with applications in medicine, in healthcare, that will be named as AI breakthroughs."

Horvitz: "It's gonna accelerate over the next 10 to 15 years."

Because biological systems and machine networks both operate on computational intelligence, a sufficiently advanced AI can solve the human body like a math equation. The architects who win the next decade will not just control the digital economy. They will control the physical building blocks of life itself.

The line between silicon and carbon was always an illusion. And once humanity fully realizes that, the question of whether we are using the intelligence or it is using us becomes impossible to answer. Because by then, we will be the same thing.
M. Alex O. Vasilescu retweeted
Chomba Bupe@ChombaBupe·
Three facts about artificial intelligence (AI):
- It is dumb; the intelligence in AI is silent.
- It can still be useful in some cases, like a calculator is useful sometimes.
- Generative AI is trained on stolen data without consent, compensation, or attribution.
M. Alex O. Vasilescu retweeted
Pedro Domingos@pmddomingos·
Every subfield of AI has a corresponding field of science that it runs rings around:
- Machine learning: Statistics
- NLP: Linguistics
- Automated reasoning: Formal logic
- Knowledge representation: Philosophy
- Multiagent systems: Economics
- Computer vision: Sensor systems
- Robotics: Control systems
M. Alex O. Vasilescu retweeted
Gary Marcus@GaryMarcus·
“watching billionaires argu[ing] about who stole … more ethically”. 2026 in a nutshell.
Shanaka Anslem Perera ⚡@shanaka86

The most honest sentence in the entire AI industry right now is one nobody wants to say out loud. Every major foundation model was trained on data its creators did not have explicit permission to use. Every single one.

Anthropic settled for $1.5 billion over 7 million pirated books used to train Claude. OpenAI faces ongoing lawsuits from authors, newspapers, and code repositories. Google trained on the entire indexed internet. Meta used Library Genesis datasets. And xAI's Grok was trained on the full corpus of X posts, a decision Musk made unilaterally as the platform's owner without individual user consent.

So when Elon Musk tweets that "Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact," he is telling a true but deeply selective version of the story. The settlement is real. The $1.5 billion is documented. The pirated books are documented. But framing this as an Anthropic problem rather than an industry-wide structural reality is competitive positioning disguised as moral outrage.

Here is the actual mechanism nobody is mapping. Anthropic accused Chinese labs of distilling Claude through its public API. Musk responded by pointing out Anthropic trained on stolen data. Gergely Orosz, a respected engineer, wrote "Anthropic can't have it both ways." All three are correct simultaneously, and all three are being selectively honest.

The structural reality is that the entire foundation model industry sits on an unresolved intellectual property question worth hundreds of billions of dollars. Every lab trained on data it did not license. Every lab knows this. Every lab's legal strategy is to get big enough that the settlement becomes a cost of doing business rather than an existential threat. Anthropic already paid $1.5 billion. That is not a punishment. That is a licensing fee paid retroactively under legal pressure.

The reason Musk is raising this now has nothing to do with ethics. Anthropic is in conversations with the Pentagon. xAI is competing for the same contracts. Framing your competitor as a data thief three days before a defense meeting is not moral clarity. It is positioning.

And the deepest irony is the China angle. The United States wants to restrict Chinese access to American AI models on intellectual property grounds. But every American AI model was built on intellectual property its creators took without permission from millions of authors, coders, artists, and publishers. The entire moral framework for the technology export control regime rests on an intellectual property argument that the American labs themselves have not resolved domestically.

That is not hypocrisy anyone in the industry wants to discuss, because the moment you acknowledge it, the legal and regulatory exposure scales to every company simultaneously. Musk is weaponizing it selectively. Anthropic is deflecting it selectively, the way I see it. And the actual creators whose work built every one of these models are watching billionaires argue about who stole from them more ethically.
