Egor Bogomolov
@egor_bb
96 posts

Leading ML Division @ JetBrains Research. ML on source code, fancy stuff to make SE better.
Amsterdam, the Netherlands · Joined December 2015
209 Following · 167 Followers
Egor Bogomolov retweeted
JetBrains @jetbrains ·
RL training and coding-agent experiments not scaling locally? IdeGYM fixes that – and it's now open source. jb.gg/rsrch-idegym
[media]
0 replies · 7 reposts · 47 likes · 6.7K views
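The tweet doesn't show IdeGYM's actual API; as a rough illustration of what a gym-style environment for coding-agent RL typically looks like, here is a toy sketch following the common `reset()`/`step()` convention. All names (`CodingEnv`, `observe`, the reward scheme) are hypothetical, not taken from IdeGYM:

```python
from dataclasses import dataclass

# Hypothetical gym-style environment for a coding agent: the agent
# edits a text buffer until it matches what the hidden "tests" accept.
# This is a sketch of the interface shape, not IdeGYM's real API.

@dataclass
class CodingEnv:
    target: str          # code the hidden "tests" accept
    buffer: str = ""
    steps: int = 0
    max_steps: int = 10

    def reset(self) -> str:
        """Start a new episode and return the initial observation."""
        self.buffer, self.steps = "", 0
        return self.observe()

    def observe(self) -> str:
        return f"buffer={self.buffer!r} step={self.steps}"

    def step(self, action: str):
        """Apply an edit action; return (observation, reward, done)."""
        self.steps += 1
        self.buffer += action
        solved = self.buffer == self.target
        done = solved or self.steps >= self.max_steps
        reward = 1.0 if solved else 0.0
        return self.observe(), reward, done

env = CodingEnv(target="print('ok')")
env.reset()
obs, reward, done = env.step("print('ok')")
print(reward, done)  # 1.0 True
```

An RL trainer would then roll out episodes against many such environments in parallel; the point of a shared interface is that the training loop only ever sees `reset()` and `step()`.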
Egor Bogomolov @egor_bb ·
Interesting @NeurIPSConf experience: a weird mix of likely the most positive review I’ve received and an overriding PC reject on top of it 🤔
[media]
0 replies · 0 reposts · 1 like · 184 views
Egor Bogomolov retweeted
IDE Workshop @IDEworkshop ·
The 3rd IDE Workshop @ICSEconf 2026 is scheduled for Saturday, April 18th! Please submit your short papers and extended abstracts on anything IDE-related: plugins, studies, refactorings, environments, AI in IDE, etc.! All information here: ide-workshop.github.io 🏝️🏝️🏝️
[media]
1 reply · 1 repost · 3 likes · 557 views
Egor Bogomolov retweeted
andrew zakonov @andrewzakonov ·
JetBrains Junie is live — for everyone! Single AI subscription. Pro tier included with All Products Pack and dotUltimate.
6 replies · 23 reposts · 124 likes · 12.6K views
Egor Bogomolov @egor_bb ·
@InceptionAILabs Is there an API to access the model? I would be happy to run it on some coding benchmarks, but I have not found any pointers to the API yet.
0 replies · 0 reposts · 0 likes · 43 views
Inception @_inception_ai ·
We are excited to introduce Mercury, the first commercial-grade diffusion large language model (dLLM)! dLLMs push the frontier of intelligence and speed with parallel, coarse-to-fine text generation.
224 replies · 948 reposts · 5.3K likes · 1.9M views
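The tweet doesn't describe Mercury's internals; as a toy illustration of the coarse-to-fine idea, a diffusion-style decoder starts from a fully masked sequence and unmasks several positions in parallel each step, instead of emitting one token at a time left-to-right. In this sketch the neural denoiser is replaced by a stub that simply reveals a reference token, so only the scheduling pattern is real:

```python
import random

MASK = "<mask>"

def denoise_step(tokens, reference, k):
    """Unmask up to k masked positions in parallel (one refinement step).
    A real dLLM would predict tokens with a trained denoiser; here we
    reveal the reference token as a stand-in 'prediction'."""
    masked = [i for i, t in enumerate(tokens) if t == MASK]
    for i in random.sample(masked, min(k, len(masked))):
        tokens[i] = reference[i]
    return tokens

def generate(reference, k=3, seed=0):
    """Start fully masked; refine in parallel until no masks remain."""
    random.seed(seed)
    tokens = [MASK] * len(reference)
    steps = 0
    while MASK in tokens:
        tokens = denoise_step(tokens, reference, k)
        steps += 1
    return tokens, steps

target = "diffusion models decode tokens in parallel today".split()
out, steps = generate(target, k=3)
print(out == target, steps)  # True 3  (8 tokens, up to 3 per step)
```

The speed argument is visible even in the toy: a sequence of n tokens takes about n/k refinement steps rather than n sequential decoding steps.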
Egor Bogomolov @egor_bb ·
@john_lam @headinthebox Hi, author here. For the code generation task, we give the model an instruction in natural language and access to the library. The reference code is only used for evaluation, to compute metrics.
0 replies · 0 reposts · 1 like · 73 views
John Lam @john_lam ·
I don't disagree with your assertion, but I find the code generation examples quite suspect. They have a file containing some "reference code," and from that they are expecting the model to generate working code based on the reference code. The problem then becomes one of "what is the right reference code to put into the model's context?" How would that happen in the real world vs. a benchmark?

I find, in my own experience using Sonnet 8 hours a day, every day, that the context is critical. Perhaps the most important context is having a shared understanding between me and Sonnet about what we want to achieve before Sonnet goes about writing the code. If we screw up defining the shared context (you could think of this as a "spec via chat"), bad results and wasted time follow. If we are aligned on the shared context, I find 80% of the time I get acceptable results.
1 reply · 0 reposts · 3 likes · 320 views
Erik Meijer @headinthebox ·
blog.jetbrains.com/ai/2024/08/lon… "... The results show that even the best model – GPT-3.5 in this case – can only fix 17% of failing builds, indicating just how much work yet needs to be done for AI to be practically useful in this context. ..." Understatement of the year! Who would have expected 3 years ago that plain stupid next token prediction can solve 1 in 6 of failing builds automatically. That is a huge productivity improvement. Developers, you are fucked as a profession. Completely and utterly fucked.
13 replies · 18 reposts · 99 likes · 14.5K views
Egor Bogomolov @egor_bb ·
🗂️ Module summarization — based on the module’s or project’s source code and a short description of the desired documentation, the model should generate that documentation, testing its ability to produce long, comprehensive natural language texts. This benchmark includes custom LLM-based evaluation.
[media]
1 reply · 0 reposts · 4 likes · 433 views
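The tweet mentions custom LLM-based evaluation without spelling out its details; a minimal LLM-as-judge sketch looks roughly like the following. Everything here is an assumption for illustration — `build_judge_prompt`, `evaluate`, and the 1–5 rubric are hypothetical names, and the actual LLM API call is stubbed out as a plain callable:

```python
def build_judge_prompt(module_code, doc_request, candidate_doc):
    """Assemble a grading prompt for an LLM judge. The rubric and
    scale are illustrative, not the benchmark's actual setup."""
    return (
        "You are grading generated module documentation.\n"
        f"Source code:\n{module_code}\n\n"
        f"Requested documentation:\n{doc_request}\n\n"
        f"Candidate documentation:\n{candidate_doc}\n\n"
        "Score 1-5 for accuracy and completeness. Reply with the score only."
    )

def evaluate(samples, judge):
    """Average judge scores over (code, request, candidate) triples.
    `judge` is any callable prompt -> score; in practice it would
    wrap an LLM API call and parse the numeric reply."""
    scores = [judge(build_judge_prompt(*s)) for s in samples]
    return sum(scores) / len(scores)

# Stub judge for demonstration: crude keyword check instead of an LLM.
stub = lambda p: 5 if "Candidate documentation:\nThis module" in p else 2
samples = [
    ("def add(a, b): return a + b", "Describe the module.",
     "This module provides add()."),
    ("def sub(a, b): return a - b", "Describe the module.", "math"),
]
print(evaluate(samples, stub))  # 3.5
```

Keeping the judge behind a plain callable makes the harness easy to test offline and lets the judge model be swapped without touching the evaluation loop.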
Egor Bogomolov retweeted
Sergey Titov @smtitov ·
💜 Exciting news from JetBrains Research! 💜 We're publishing the Kotlin ML Pack — a set of datasets, multiple fine-tuning checkpoints, and a handwritten evaluation benchmark. Check out our HuggingFace page jb.gg/kotlin-ml-pack and the details below! (1/5)
1 reply · 9 reposts · 42 likes · 2.9K views