Bálint Máté
@balintmt
42 posts
ML @PeptoneLtd. prev intern @AIatMeta, @MSFTResearch, phd at @unige_en
Joined February 2021
321 Following · 199 Followers
Bálint Máté retweeted
Chin-Wei Huang @chinwei_h
🚀 After two years of intense research, we’re thrilled to introduce Skala — a scalable DL density functional that hits chemical accuracy on atomization energies and matches hybrid-level performance on main group chemistry — all at the cost of a semi-local functional. ⚛️🔥🧪⚗️🧬
Bálint Máté retweeted
Rianne van den Berg @vdbergrianne
🚀 After two+ years of intense research, we’re thrilled to introduce Skala — a scalable deep learning density functional that hits chemical accuracy on atomization energies and matches hybrid-level accuracy on main group chemistry — all at the cost of semi-local DFT. ⚛️🔥🧪🧬
François Fleuret @francoisfleuret
Serious question: Von Neumann is always described as an absolute genius. What did he do that would put him anywhere close to Turing, Gödel, or anyone in the Solvay conference pic?
Quoting François Fleuret @francoisfleuret:
@tszzl Am I correct that Von Neumann is the dude who made the main block-and-arrow figure of the first lecture of "Architectures of computers 101"?
Bálint Máté @balintmt
@eeevgen @francoisfleuret @tristanbereau In this particular experiment, we ran MCMC for 40k steps and dropped the first 10k as burn-in. I see your concern with direct MCMC, and I guess it depends on the target. We could get away with it for this simple target (only LJ interactions, particles are not too densely packed).
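The sampling setup described in the reply above — plain MCMC at the target potential, 40k steps with the first 10k discarded as burn-in — can be sketched with a generic random-walk Metropolis sampler. The Lennard-Jones target from the thread is replaced here by a stand-in Gaussian; all names, the step size, and the target are illustrative, not from the paper.

```python
import numpy as np

def metropolis(log_prob, x0, n_steps=40_000, burn_in=10_000, step=0.5, seed=0):
    """Plain random-walk Metropolis at a fixed target; drops burn-in samples."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_prob(x)
    samples = []
    for i in range(n_steps):
        proposal = x + step * rng.standard_normal(x.shape)
        lp_prop = log_prob(proposal)
        # Metropolis accept/reject on the log-density difference
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = proposal, lp_prop
        if i >= burn_in:  # keep only post-burn-in samples
            samples.append(x.copy())
    return np.asarray(samples)

# Illustrative target: a standard 2D Gaussian (log-density up to a constant).
samples = metropolis(lambda x: -0.5 * float(np.sum(x**2)), x0=np.zeros(2))
print(samples.shape)  # (30000, 2)
```

Whether plain (un-annealed) MCMC mixes well enough depends on the target, as the reply notes; for densely packed or multimodal systems an annealed or tempered chain is typically needed instead.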
Bálint Máté @balintmt
@eeevgen @francoisfleuret @tristanbereau Hi, thanks for the questions! 1. Yes, we do need samples at t=1. 2. Do you mean something like annealed MCMC? If so, we don't do that, just plain MCMC at the target potential.
Evgenii Egorov @eeevgen
Hi, very nice sequence of papers! I have a couple of questions just to be sure I understand correctly. Let's consider the simple case where one side of the bridge (the prior) is a simple distribution at t=0 and the target sits at t=1. 1. Do I understand correctly that to learn the "neural" path your method still needs samples at t=1? 2. If yes, is it true that to get them with MCMC you would still use some path, like the usual geometric one? What then is a typical budget of training samples for learning the neural path?
Bálint Máté @balintmt
Finally, we also look at what happens if we predict the hydration free energy of methane using the potential that was trained on water (and vice versa). (10/10)
Bálint Máté @balintmt
The approach is tested on the estimation of hydration free energies of rigid water and methane (LJ + Coulomb interactions). We find good agreement with experimental reference values. (9/n)
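The interaction model named in this tweet (LJ + Coulomb) is just a pairwise energy. A minimal sketch follows; the parameters are illustrative placeholders (charges roughly in the range of simple water models), NOT the force-field values used in the paper.

```python
import numpy as np

# Illustrative parameters only; not taken from the thread's paper.
KE = 332.06371  # Coulomb constant in kcal·Å/(mol·e²)

def lj_coulomb(r, eps=0.65, sigma=3.15, q1=-0.834, q2=0.417):
    """Pairwise Lennard-Jones + Coulomb energy at distance r (Å)."""
    sr6 = (sigma / r) ** 6
    lj = 4.0 * eps * (sr6 * sr6 - sr6)   # 12-6 Lennard-Jones term
    coulomb = KE * q1 * q2 / r           # point-charge electrostatics
    return lj + coulomb

print(lj_coulomb(3.15))  # at r = sigma the LJ term vanishes, leaving Coulomb
```

For rigid water and methane as in the tweet, the total potential would sum this pair energy over all intermolecular atom pairs.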
Bálint Máté retweeted
Vincent Micheli @micheli_vincent
To build the next generation of intelligent agents, developing efficient world models is essential. We introduce Δ-IRIS, an agent that learns behaviors by imagining millions of trajectories in its world model. Paper: arxiv.org/abs/2406.19320 Code: github.com/vmicheli/delta… 🧵👇
Bálint Máté @balintmt
To validate all this, we compare the estimates of the average density and the excess chemical potential to grand canonical MC simulations. (6/n)
Bálint Máté @balintmt
Given the estimates of the canonical free energies at fixed particle counts, we end up with a grand canonical sampler at any choice of the chemical potential. (5/n)
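The construction in this step — reweighting the fixed-N canonical free energies F_N by the chemical potential, i.e. p(N) ∝ exp(−β(F_N − μN)) — can be sketched as below. The free-energy values here are made up purely for illustration; sampling N from p(N) and then a configuration from the fixed-N canonical sampler yields grand-canonical draws.

```python
import numpy as np

def grand_canonical_weights(free_energies, mu, beta=1.0):
    """p(N) ∝ exp(-beta * (F_N - mu * N)) for N = 0, 1, ..., N_max,
    where F_N is the canonical free energy at fixed particle count N."""
    F = np.asarray(free_energies, dtype=float)
    N = np.arange(len(F))
    log_w = -beta * (F - mu * N)
    log_w -= log_w.max()  # subtract the max before exponentiating, for stability
    w = np.exp(log_w)
    return w / w.sum()

# Made-up canonical free energies for N = 0..3.
p = grand_canonical_weights([0.0, 1.2, 2.6, 4.3], mu=1.0)
print(p)
```

Varying `mu` then shifts the particle-number distribution without re-estimating any F_N, which is what makes the sampler grand canonical "at any choice of the chemical potential".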