Igor Zaika
@IgorZaika
2.2K posts
Technical Fellow/CVP of Engineering at Microsoft Office. Decreasing entropy, one bit at a time. Opinions here are mine only, do not take them too seriously.

Seattle · Joined June 2009
743 Following · 624 Followers
Igor Zaika retweeted
Max Levchin @mlevchin
If there is one thing you take from this pod, it’s this: socialism corrupts more profoundly than simple words can express, and as the grift gets tougher, the socialists inevitably move from propaganda to oppression as the primary motivating approach. Don’t let it happen here!
sourcery @sourceryy
.@mlevchin: "Socialism sucks." "Take it from somebody who spent his first 16 years under the 'warm embrace of collectivism' as a certain mayor recently put it—socialism sucks." "The only people who do well in redistribution of wealth are the ones doing redistribution." "It's fundamentally corrupt. There's not enough bad things I can say about socialism."

Igor Zaika retweeted
François Chollet @fchollet
The reason symmetry is so important in physics is because symmetry is a highly effective compression operator. If a system is invariant under some symmetry, you only need to explain one axis of it. Scientific models represent the systematic exploitation of the universe's internal redundancies through symbolic logic.
Igor Zaika @IgorZaika
Matches my experience: the gap is not only there, it is getting bigger, and this gap is the opportunity.
Andrej Karpathy @karpathy
Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This group's reaction is to laugh at various quirks of the models, hallucinations, etc. Yes, I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability of the latest round of state-of-the-art agentic models of this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state-of-the-art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing, because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus comes along.

So that brings me to the second group of people, who *both* 1) pay for and use the state-of-the-art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work.
It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions. TLDR: the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and, I think, slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram reels and, *at the same time*, OpenAI's highest-tier and paid Codex model will go off for an hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed, yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.

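Karpathy's "verifiable rewards" point can be made concrete: for code, the reward signal is simply whether the tests pass, which a machine can check for free. A minimal sketch of such a reward function (the function and file names are my own, not from the thread):

```python
import os
import subprocess
import sys
import tempfile

def verifiable_reward(candidate_code: str, test_code: str) -> float:
    """Binary RL-style reward: 1.0 if the candidate passes its tests, else 0.0."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "candidate.py")
        with open(path, "w") as f:
            f.write(candidate_code + "\n\n" + test_code + "\n")
        # Run the candidate plus its tests; the exit code is the whole signal.
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=30)
        return 1.0 if result.returncode == 0 else 0.0
```

Writing quality has no comparably cheap, objective check, which is exactly why coding is where the dramatic strides happened.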
Igor Zaika retweeted
François Chollet @fchollet
We should view the history of physics as a long-running program synthesis task. Kepler and Newton were searching the space of possible symbolic models to find the simplest one that would best satisfy available observations.
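Chollet's framing can be sketched quite literally: enumerate symbolic models from simplest to most complex and keep the first one that fits the observations. A toy illustration (the model list and constant grid are invented for the example):

```python
def synthesize(observations, tol=1e-6):
    """Return the simplest (model, constant) pair matching all observations."""
    # Candidate symbolic models, ordered from simplest to most complex.
    models = [
        ("y = a*x",    lambda x, a: a * x),
        ("y = a*x**2", lambda x, a: a * x ** 2),
        ("y = a*x**3", lambda x, a: a * x ** 3),
    ]
    constants = (0.5, 1.0, 2.0, 4.9, 9.8)
    for name, f in models:
        for a in constants:
            if all(abs(f(x, a) - y) <= tol for x, y in observations):
                return name, a
    return None

# Galileo-style free-fall data, d = (g/2) * t**2 with g = 9.8:
print(synthesize([(1, 4.9), (2, 19.6), (3, 44.1)]))
```

Kepler and Newton were doing this search in their heads, of course, with a vastly larger hypothesis space and a preference for the simplest model that fits.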
Igor Zaika retweeted
Jeff Dean @JeffDean
Hedged requests (apparently inspired by the Tail at Scale paper by myself and Luiz Barroso) applied within a single machine: replicate data across DRAM channels, issue reads to all channels, and use the one that comes back first. ~5-15X reduction in p99.99 read latency. github.com/LaurieWired/ta… Cool stuff, @lauriewired! Accompanying video forwarded to me by a friend, which is how I learned about it: youtube.com/watch?v=QFi2WV…
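The hedged-request idea above is easy to sketch: issue the same read to every replica at once and take whichever answer lands first, so one slow channel cannot hold up the tail. A toy version using threads in place of DRAM channels (all names are hypothetical, not from the linked repo):

```python
import concurrent.futures
import random
import time

def read_replica(channel: int, key: str) -> str:
    """Simulated read whose latency has a heavy tail."""
    delay = random.expovariate(1000)   # ~1 ms typical
    if random.random() < 0.01:         # occasional straggler
        delay += 0.05                  # +50 ms tail latency
    time.sleep(delay)
    return f"value-of-{key}"

def hedged_read(key: str, channels=(0, 1)) -> str:
    """Send the read to every channel at once; return the first reply."""
    with concurrent.futures.ThreadPoolExecutor(len(channels)) as pool:
        futures = [pool.submit(read_replica, c, key) for c in channels]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()
```

The tail shrinks because a hedged read is only slow when *all* replicas are slow simultaneously, which is far rarer than any one being slow; the cost is the extra duplicated reads.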
Igor Zaika retweeted
Marc Andreessen 🇺🇸
The pricing tiers for AGI are something like (1) $20/month, (2) $200/day = ~$75,000/year, (3) $1,000/day = ~$350,000/year, and (4) ~$10 billion. For now.
Igor Zaika retweeted
Stephen C. Meyer @StephenCMeyer
Meet the mastermind behind the British Olympic Cycling Team's gold medal-winning bicycle design! His book dismantles claims that the human body appears poorly designed. It also demonstrates that human anatomy displays ingenuity far superior to the best creations of humans.
Igor Zaika retweeted
Physics & Astronomy Zone @zone_astronomy
The highest quality video of the moon was just released… this is so beautiful.
Igor Zaika @IgorZaika
@OmarShahine Learning is building! It's so much slower, if not impossible, to do it any other way!
Igor Zaika retweeted
Michael Hla @hla_michael
I trained an LLM from scratch on pre-1900 text to see if it could come up with quantum mechanics and relativity. While the model is too small to do meaningful reasoning, it has glimpses of intuition. When given observations from past landmark experiments, the model can declare that “light is made up of definite quantities of energy” and even suggest that gravity and acceleration are locally equivalent. I’m releasing the dataset + models and leave this as an open problem to the research community. I also include what this project has taught me about intelligence in a mini essay linked below. 🧵(1/n)
Igor Zaika retweeted
Geoffrey Litt @geoffreylitt
People and agents would be better at writing code if you could easily check what value a variable usually has in production:
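One lightweight way to get at what Litt describes: record the values a function actually returns in production, then query the most common one. A sketch (the decorator and all names here are my own invention, not from the tweet):

```python
import collections
import functools

# Running tally of return values observed per function "in production".
_observed = collections.defaultdict(collections.Counter)

def record_values(fn):
    """Decorator: tally every return value so typical ones can be queried."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        _observed[fn.__name__][repr(result)] += 1
        return result
    return wrapper

def typical_value(fn_name: str) -> str:
    """repr() of the most frequently observed return value for a function."""
    return _observed[fn_name].most_common(1)[0][0]
```

Decorate a hot function, run real traffic through it, and `typical_value("some_fn")` answers the question a tooltip in the editor could surface: what does this usually hold in practice?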