Tyler R. Josephson (atomslab.bsky.social)

660 posts


@trjosephson

AI & Theory-Oriented Molecular Science. New dad. Asst Prof at @UMBC_CBEE, learning proofs and programming in @leanprover. RT≠PV/n

Baltimore, MD · Joined July 2018
1.4K Following · 1.2K Followers
Andrew White 🐦‍⬛@andrewwhite01·
During testing of our agents, someone said "Hello" and it spiraled and ran for 45 minutes trying to understand the meaning and history of the word hello. Great adversarial test for autonomous agents.
9 replies · 7 reposts · 110 likes · 24.1K views
Tyler R. Josephson (atomslab.bsky.social)
Interested in learning a new programming language that enables you to prove the code you've written is correct? I taught "Lean for Scientists and Engineers" in Summer 2024. Soon, we'll have all the lectures on YouTube! Here's the first one: youtube.com/watch?v=s9Dyiu…
[YouTube video]
1 reply · 0 reposts · 8 likes · 319 views
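To give a flavor of what "prove the code you've written is correct" means in Lean, here is a minimal sketch; the `double` function and its theorem are illustrative examples, not taken from the course materials:

```lean
-- A function together with a machine-checked proof about it.
def double (n : Nat) : Nat := 2 * n

-- Proof that `double` distributes over addition:
-- 2 * (a + b) = 2 * a + 2 * b.
theorem double_add (a b : Nat) : double (a + b) = double a + double b := by
  simp [double, Nat.mul_add]
```

The compiler checks the proof: if `double_add` type-checks, the property holds for all natural numbers, with no testing required.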
Tyler R. Josephson (atomslab.bsky.social) reposted
boris@mysterymeat·
@LNuzhna What is science if not chains of abstractions? The process of explaining the physical world in words and numbers is exactly why it’s so hard, and so rewarding.
1 reply · 2 reposts · 39 likes · 9.9K views
Iñigo Iribarren@inigo_iribarren·
Hello, beautiful community! I am preparing a seminar about making nice scientific figures and graphs (because, apparently, work intrusion is my real passion). Do you have any good figures/graphs that I should show? I will obviously include the greatest figure of all time.
[image]
8 replies · 1 repost · 7 likes · 1.1K views
Tyler R. Josephson (atomslab.bsky.social)
I'm excited to announce *two* funded positions in the ATOMS Lab! Come join us, as we develop computational methods for simulating water pollutants, and AI methods for evaluating scientific claims. We want to hire by Spring 2025; check it out and share with your network!
[image]
1 reply · 7 reposts · 20 likes · 2.7K views
Tyler R. Josephson (atomslab.bsky.social) reposted
Heather O. LeClerc@Heatherlec620·
The 24-25 ChemEng faculty list is heating up! Check out the current 33 tenure-track and 11 teaching-track faculty openings. docs.google.com/spreadsheets/d…
0 replies · 13 reposts · 25 likes · 6K views
Tyler R. Josephson (atomslab.bsky.social) reposted
Daniel Litt@littmath·
a collection of set theorists who are so excited that they cannot contain themselves
61 replies · 552 reposts · 4.4K likes · 165.4K views
Tyler R. Josephson (atomslab.bsky.social)
We're halfway through the course! So far, we've introduced logic and proofs for scientists and engineers. This week we pivot to writing programs in Lean, before bringing everything together. It's not too late to register!
Tyler R. Josephson (atomslab.bsky.social)@trjosephson

Interested in learning a new programming language that enables you to prove the code you've written is correct? Starting next week, we're teaching "Lean for Scientists and Engineers"!

0 replies · 1 repost · 6 likes · 350 views
Tyler R. Josephson (atomslab.bsky.social) reposted
Lean@leanprover·
Check out this excellent video on proving the continuity of various functions using Lean 4. It's a fantastic introduction to Lean, and its tactic framework for proof automation. youtube.com/watch?v=BZjAgh…
[YouTube video]
0 replies · 28 reposts · 130 likes · 10.5K views
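In the same spirit as the video, here is a minimal sketch of what a continuity goal looks like when discharged by Mathlib's `continuity` tactic (the particular function is an arbitrary example, assuming Mathlib is available):

```lean
import Mathlib

-- `continuity` composes library lemmas (continuity of powers,
-- scalar multiples, and sums) to close the goal automatically.
example : Continuous (fun x : ℝ => x ^ 2 + 3 * x) := by
  continuity
```

Proofs by hand would chain lemmas like `Continuous.add` and `Continuous.pow` explicitly; the tactic searches for that chain for you.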
Edward Maginn@ejmaginn·
Group member Barnabas (@OBIdient_MrB) presents a lightning talk on his research focused on new refrigerant molecule discovery. #FOMMS2024
[image]
2 replies · 1 repost · 10 likes · 549 views
Tyler R. Josephson (atomslab.bsky.social) reposted
Sean Welleck@wellecks·
Interested in LLMs and Lean? Check out LLMLean, a tool for using LLMs to suggest proof steps and complete proofs in Lean: github.com/cmu-l3/llmlean Here's an example of using LLMLean with GPT-4o to solve problems from Mathematics in Lean:
6 replies · 73 reposts · 262 likes · 28.9K views
Tyler R. Josephson (atomslab.bsky.social) reposted
François Chollet@fchollet·
The question of whether LLMs can reason is, in many ways, the wrong question. The more interesting question is whether they are limited to memorization / interpolative retrieval, or whether they can adapt to novelty beyond what they know. (They can't, at least until you start doing active inference, or using them in a search loop, etc.)

There are two distinct things you can call "reasoning", and no benchmark aside from ARC-AGI makes any attempt to distinguish between the two. First, there is memorizing & retrieving program templates to tackle known tasks, such as "solve ax+b=c" -- you probably memorized the "algorithm" for finding x when you were in school. LLMs *can* do this! In fact, this is *most* of what they do.

However, they are notoriously bad at it, because their memorized programs are vector functions fitted to training data, which generalize via interpolation. This is a very suboptimal approach for representing any kind of discrete symbolic program. This is why LLMs on their own still struggle with digit addition, for instance -- they need to be trained on millions of examples of digit addition, but they only achieve ~70% accuracy on new numbers. This way of doing "reasoning" is not fundamentally different from purely memorizing the answers to a set of questions (e.g. 3x+5=2, 2x+3=6, etc.) -- it's just a higher-order version of the same. It's still memorization and retrieval -- applied to templates rather than pointwise answers.

The other way you can define reasoning is as the ability to *synthesize* new programs (from existing parts) in order to solve tasks you've never seen before. Like solving ax+b=c without having ever learned to do it, while only knowing about addition, subtraction, multiplication, and division. That's how you can adapt to novelty. LLMs *cannot* do this, at least not on their own. They can, however, be incorporated into a program search process capable of this kind of reasoning.

This second definition is by far the more valuable form of reasoning. This is the difference between the smart kids in the back of the class who aren't paying attention but ace tests by improvisation, and the studious kids who spend their time doing homework and get medium-good grades, but are actually complete idiots who can't deviate one bit from what they've memorized. Which one would you hire?

LLMs cannot do this because they are very much limited to retrieval of memorized programs. They're static program stores. However, they can display some amount of adaptability, because not only are the stored programs capable of generalization via interpolation, the *program store itself* is interpolative: you can interpolate between programs, or otherwise "move around" in continuous program space. But this only yields local generalization, not any real ability to make sense of new situations.

This is why LLMs need to be trained on enormous amounts of data: the only way to make them somewhat useful is to expose them to a *dense sampling* of absolutely everything there is to know and everything there is to do. Humans don't work like this -- even the really dumb ones are still vastly more intelligent than LLMs, despite having far less knowledge.
72 replies · 273 reposts · 1.5K likes · 165.5K views
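For reference, the "memorized template" in Chollet's ax+b=c example is one line of algebra: for a ≠ 0, x = (c - b)/a. A Lean sketch of that fact (assuming Mathlib; the lemma statement is illustrative) might look like:

```lean
import Mathlib

-- For a ≠ 0, x = (c - b) / a solves a * x + b = c.
-- `field_simp` clears the division using ha; `ring` finishes the algebra.
example (a b c : ℚ) (ha : a ≠ 0) : a * ((c - b) / a) + b = c := by
  field_simp
  ring
```

The contrast Chollet draws is between retrieving this template when prompted with "solve ax+b=c" and deriving it fresh from the four arithmetic operations alone.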