Nikita Ostrovsky
@nikostro

Writing about AI @TIME. All views my own.

63 posts · Joined October 2010
386 Following · 121 Followers
Nikita Ostrovsky @nikostro
@GaryMarcus With the benefit of hindsight, I feel like this prediction has held up pretty well: Opus 4.5 + Claude Code does seem to have been a step change in the usefulness of AI agents. (Probably also true of GPT 5.2/Gem 3 + harnesses, despite less hype.) Curious for your view @GaryMarcus.
1 reply · 0 reposts · 0 likes · 167 views
Nikita Ostrovsky @nikostro
Buying this opinion while it's cheap: I expect that we will see a release from a frontier AI company (probably Google or OpenAI) before the end of 2025 that will make this vindication seem premature. An o1->o3-type jump, e.g. agents becoming genuinely useful.
Gary Marcus @GaryMarcus

Fabulous quote on AI, from someone in the media who actually gets what we have just witnessed. “In the aftermath of GPT-5’s launch, it has become more difficult to take bombastic predictions about A.I. at face value, and the views of critics like Marcus seem increasingly moderate… Post-training improvements don’t seem to be strengthening models as thoroughly as scaling once did. A lot of utility can come from souping up your Camry, but no amount of tweaking will turn it into a Ferrari.” - Cal Newport, @NewYorker

And yes, on a personal note, I am human, and it is downright thrilling to be vindicated after all these years, in the pages of The New Yorker. Truly a life moment.

2 replies · 0 reposts · 6 likes · 2.9K views
Nikita Ostrovsky reposted
mrinank @MrinankSharma
I'm really excited and proud to share this latest research ⭐️ It's a first look into how AI assistant usage can change, and even distort, what it means to be human. In the future, it is my hope that AI can be used to magnify, clarify, and support our humanity.
Anthropic @AnthropicAI

New Anthropic Research: Disempowerment patterns in real-world AI assistant interactions. As AI becomes embedded in daily life, one risk is it can distort rather than inform—shaping beliefs, values, or actions in ways users may later regret. Read more: anthropic.com/research/disem…

6 replies · 8 reposts · 89 likes · 23.1K views
Nikita Ostrovsky @nikostro
@bookwormengr I'm a reporter for TIME magazine. These stats are interesting; would you be up for discussing your methodology in more detail? My DMs are open.
0 replies · 0 reposts · 0 likes · 94 views
GDP @bookwormengr
China's AI talent density as a fraction of the global total, by metro region: the USA has only one region with more than 10% of total AI research output (the SF Bay Area). China has three. The Beijing area leads the world, with Tsinghua University, Peking University, the University of Chinese Academy of Sciences, Beihang University, and Beijing Institute of Technology producing impressive research output. Tsinghua and Peking University beat any other university in the world. The Beijing area is also home to the likes of Moonshot, Zhipu AI, MiniMax, ByteDance SEED, and many others. Most of these universities and labs are located in the north-west part of the city, in Haidian district (the area around the Summer Palace). That is why I say Beijing Haidian >> SF Cerebral Valley. Time to stop looking down on China, maybe?
[image attached]
2 replies · 9 reposts · 68 likes · 15.8K views
GDP @bookwormengr
AI talent density by global metro areas: mega thread, bookmark this.

Demographics is destiny; compute alone is overrated in the Age of Research (though @SemiAnalysis_ may not agree). In this thread, let us analyse the demographic distribution of AI talent globally. The stats are shocking.

- China exceeds the USA. Tiny Singapore matches all of Europe (no wonder major labs are opening bases in Singapore).
- The Beijing area has the highest talent density in the world.
- Beijing's Haidian district >> SF Cerebral Valley (there are more tier-1 labs in this area than in all of San Francisco: Moonshot, MiniMax, Zhipu AI, ByteDance SEED, and many many others).
- China has 3 metro areas with research output comparable to the entire Bay Area (more than 10% of the global total). Each of these also has a high concentration of robotics firms.
- The USA has only one major research cluster contributing more than 10% of AI research: of course, the San Francisco Bay Area.

You may say that US labs like OpenAI and Anthropic don't publish, and that is why this is the case. But do you really think Chinese labs like DeepSeek, MiniMax, Moonshot, and Z AI, with 400+ staff, publish as much as they could? How many papers have you seen from Chinese robotics firms? They publish maybe 5-10 papers a year, far below the number of experiments they conduct. The Greater China region has other corporate labs that publish more, like Alibaba, ByteDance SEED, and Tencent. There are more labs on the block (Xiaomi, Meituan, etc.), but they are balanced by American ones like Google, Microsoft, Amazon, Salesforce, Nvidia, etc. Most of the difference is made by the strong research culture at Chinese universities, as well as emerging Asian universities like NUS, NTU, KAIST, etc.

What follows are region-specific maps showing the distribution of talent. (Source: the AI talent density maps are produced on the basis of influential papers published, with NeurIPS acceptance taken as the proxy.)
@shaunrein @teortaxesTex @bgurley @chamath @DavidSacks @MohapatraHemant @natolambert @Scobleizer @ClementDelangue @aakrit @svembu @balajis @naval @rohanpaul_ai @SemiAnalysis_ @deedydas @adityaag @pmarca @elonmusk @dwarkesh_sp
[image attached]
58 replies · 118 reposts · 515 likes · 518.6K views
Nikita Ostrovsky reposted
Stefan Schubert @StefanFSchubert
Anthropic has overtaken OpenAI in enterprise large language model API market share
[image attached]
289 replies · 454 reposts · 5.3K likes · 695.6K views
Nikita Ostrovsky reposted
Dean W. Ball @deanwball
If you said: “We should have real-time incident reporting for large-scale frontier AI cyber incidents.”

A lot of people in DC would say: “That sounds ea/doomer-coded.”

And yet incident reporting for large-scale, non-AI cyber incidents is the standard practice of all major hyperscalers, as AWS reminded us yesterday. Because hyperscalers run important infrastructure upon which many depend. If you think AI will constitute similarly important infrastructure and have, really, any reflective comprehension about how the world works, obviously “real-time incident reporting for large-scale frontier AI cyber incidents” is not “ea-coded.” Instead, “real-time incident reporting for large-scale frontier AI cyber incidents” would be an example of a thing grown-ups do, not in a bid for “regulatory capture” but instead as one of many small steps intended to keep the world turning about its axis.

But my point is not about the substance of AI incident reporting. It’s just an illustrative example of the low, and apparently declining, quality of our policy discussion about AI. The current contours/dichotomies of AI policy (“pro innovation” versus “doomer/ea”) are remarkably dumb, even by the standards of contemporary political discourse. We have significantly bigger fish to fry. And we can do much better.
[image attached]
11 replies · 39 reposts · 314 likes · 51.1K views
Nikita Ostrovsky reposted
page @michaelhpage
In defense of OAI’s subpoena practice, @jasonkwon claims this is normal litigation stuff, and since Encode entered the Musk case, @_NathanCalvin can’t complain. As a litigator-turned-OAI-restructuring-critic, I interrogate this claim:🧵
Jason Kwon @jasonkwon

There’s quite a lot more to the story than this. As everyone knows, we are actively defending against Elon in a lawsuit where he is trying to damage OpenAI for his own financial benefit. Encode, the organization for which @_NathanCalvin serves as the General Counsel, was one of the first third parties - whose funding has not been fully disclosed - that quickly filed in support of Musk. For a safety policy organization to side with Elon (?), that raises legitimate questions about what is going on. We wanted to know, and still are curious to know, whether Encode is working in collaboration with third parties who have a commercial competitive interest adverse to OpenAI. The stated narrative makes this sound like something it wasn’t.

1/ Subpoenas are to be expected, and it would be surprising if Encode did not get counsel on this from their lawyers. When a third party inserts themselves into active litigation, they are subject to standard legal processes. We issued a subpoena to ensure transparency around their involvement and funding. This is a routine step in litigation, not a separate legal action against Nathan or Encode.

2/ Subpoenas are part of how both sides seek information and gather facts for transparency; they don’t assign fault or carry penalties. Our goal was to understand the full context of why Encode chose to join Elon’s legal challenge.

3/ We’ve also been asking for some time who is funding their efforts connected to both this lawsuit and SB53, since they’ve publicly linked themselves to those initiatives. If they don’t have relevant information, they can simply respond that way.

4/ This is not about opposition to regulation or SB53. We did not oppose SB53; we provided comments for harmonization with other standards. We were also one of the first to sign the EU AIA COP, and still one of a few labs who test with the CAISI and UK AISI. We’ve also been clear with our own staff that they are free to express their takes on regulation, even if they disagree with the company, like during the 1047 debate (see thread below).

5/ We checked with our outside law firm about the deputy visit. The law firm used their standard vendor for service, and it’s quite common for deputies to also work as part-time process servers. We’ve been informed that they called Calvin ahead of time to arrange a time for him to accept service, so it should not have been a surprise.

6/ Our counsel interacted with Nathan’s counsel and by all accounts the exchanges were civil and professional on both sides. Nathan’s counsel denied they had materials in some cases and refused to respond in other cases. Discovery is now closed, and that’s that.

For transparency, below is the excerpt from the subpoena that lists all of the requests for production. People can judge for themselves what this was really focused on. Most of our questions still haven’t been answered.

8 replies · 43 reposts · 272 likes · 60.3K views
Nikita Ostrovsky reposted
Nathan Calvin @_NathanCalvin
One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI. I held back on talking about it because I didn't want to distract from SB 53, but Newsom just signed the bill so... here's what happened: 🧵
[image attached]
311 replies · 1.2K reposts · 6.3K likes · 6.7M views
Nikita Ostrovsky reposted
Epoch AI @EpochAIResearch
New data insight: How does OpenAI allocate its compute? OpenAI spent ~$7 billion on compute last year. Most of this went to R&D, meaning all research, experiments, and training. Only a minority of this R&D compute went to the final training runs of released models.
[image attached]
18 replies · 88 reposts · 671 likes · 444.6K views
Nikita Ostrovsky reposted
Osvald Nitski @OsvaldNitski
Exciting to see coverage of APEX and the larger shifts happening in the human data industry in TIME as well! time.com/7322386/ai-mer…
0 replies · 1 repost · 6 likes · 552 views
Nikita Ostrovsky reposted
Shakeel @ShakeelHashim
Basically everything people say about AI being terrible for the environment is bullshit. But AI-generated video actually is pretty bad.
[image attached]
8 replies · 3 reposts · 54 likes · 38.9K views