Fritz Obermeyer

249 posts

Fritz Obermeyer banner
Fritz Obermeyer

Fritz Obermeyer

@ftzo

Program synthesis is the future. λ-calculus will replace vector spaces as the core ML representation.

Bellingham, WA Katılım Ocak 2011
346 Takip Edilen1K Takipçiler
Fritz Obermeyer
@galnagli Is this going to be a chain reaction, like the hypothesized space junk catastrophe?
English
0
0
0
48
Funes
Funes@Bulkington___·
Always fascinated how people in the middle ages for hundreds of years just lived amongst the ever decrepitating Roman ruins. It was just a part of daily life for them.
Funes tweet mediaFunes tweet mediaFunes tweet media
English
271
1.9K
41.2K
6.8M
Fritz Obermeyer
@JessePeltan Lots of taken-for-granted things become impossible when costs go up. E.g. maintaining bridges, water & sewer infrastructure, roads; running a factory.
English
0
0
1
78
Fritz Obermeyer
@GrantSlatton It's like the latent state can't fit multiple simultaneous perspectives. Whenever I ask gpt-5.4-high to update a design doc, it rewrites the design doc's objective to "update the design doc" or some nonsense
English
0
0
1
115
Grant Slatton
Grant Slatton@GrantSlatton·
trying to get gpt-5.4-xhigh to generate code comments that target a first-time reader who is reading the code from top to bottom and it just can't help itself from assuming knowledge of the entire file a priori makes me feel there is a good "theory of mind" benchmark to be made
English
11
0
126
8.3K
Fritz Obermeyer
@gbrew24 As I read Daniel Yergin's "The Prize: The Epic Quest for Oil, Money, and Power" I keep needing to remind myself, no this is not news, it's only history repeating itself
English
1
1
33
10.1K
Fritz Obermeyer
@braelyn_ai would you like to run «innocuous command» && python3 <<'PY' «innocuous imports followed by statements that flow out of my view»
English
0
0
8
315
Braelyn ⛓️
Braelyn ⛓️@braelyn_ai·
would you like to run «incredibly specific command that is only useful in this instance» > 1 - yes > 2 - yes, always allow «incredibly specific command that is only useful in this instance»
English
48
33
1.9K
58.4K
Fritz Obermeyer
@andrewgwils Interesting work! I believe Schmidhuber's 2002 speed prior is related via epiplexity = -log S(x) - K(x), the difference between speed prior surprise and Kolmogorov complexity
English
0
0
0
46
Andrew Gordon Wilson
Andrew Gordon Wilson@andrewgwils·
We introduce epiplexity, a new measure of information that provides a foundation for how to select, generate, or transform data for learning systems. We have been working on this for almost 2 years, and I cannot contain my excitement! 1/7
Marc Finzi@m_finzi

1/🧵 We are very excited to release our new paper! From Entropy to Epiplexity: Rethinking Information for Computationally Bounded Intelligence arxiv.org/abs/2601.03220 with amazing team @ShikaiQiu @yidingjiang @Pavel_Izmailov @zicokolter @andrewgwils

English
35
187
1.3K
161.5K
Fritz Obermeyer
@Nowooski I built an in-window HRV for under $100. C02 stays under 1000ppm, we can maintain a 50° F temperature difference in-vs-out, and yes the air feels fresh
English
1
0
0
25
Wally Nowinski
Wally Nowinski@Nowooski·
ERV Enthusiasts: if you just redesign your entire house for a air tight envelope and also put in an expensive new hvac system with an ERV you can get air exchange that is 85% as good as an open window!!! Me: Thanks, I’ll just crack the window and enjoy the nice breeze.
English
12
2
126
11.4K
Fritz Obermeyer
Fritz Obermeyer@ftzo·
@headinthebox Probably also silently changes the floating point rounding mode of the CPU core on which it executes
English
1
0
3
347
Fritz Obermeyer
Fritz Obermeyer@ftzo·
I'm loving vibe sciencing: a conversation in one tab where we analyze data and formulate hypotheses, and results in another tab where we show statistics and tables and plots (png for me, csv for the agent)
English
0
0
1
138
Fritz Obermeyer
Fritz Obermeyer@ftzo·
I feel bad for humanoid robots. My knees hurt just watching them. Can we at least give them roller skates?
English
0
0
0
110
Fritz Obermeyer
Fritz Obermeyer@ftzo·
Product idea: AI coding agent but it only deletes code
English
0
0
1
129
Fritz Obermeyer
Fritz Obermeyer@ftzo·
@aramh What are the practical applications? I've been using it for cheaper de Bruijn indexing, distinguishing lambda from kappa as the ≥1 and 0-occurrence abstraction binders
English
1
0
1
441
Aram Hăvărneanu
Aram Hăvărneanu@aramh·
Hasegawa, M. (1995). Decomposing typed lambda calculus into a couple of categorical programming languages. In: Pitt, D., Rydeheard, D.E., Johnstone, P. (eds) Category Theory and Computer Science. CTCS 1995. Lecture Notes in Computer Science, vol 953. Springer, Berlin, Heidelberg. DOI:10.1007/3-540-60164-3_28 Abstract: We give two categorical programming languages with variable arrows and associated abstraction/reduction mechanisms, which extend the possibility of categorical programming [Hag87, CF92] in practice. These languages are complementary to each other — one of them provides a first-order programming style whereas the other does higher-order — and are “children” of the simply typed lambda calculus in the sense that we can decompose typed lambda calculus into them and, conversely, the combination of them is equivalent to typed lambda calculus. This decomposition is a consequence of a semantic analysis on typed lambda calculus due to C. Hermida and B. Jacobs [HJ94].
English
3
4
60
16.2K
Fritz Obermeyer retweetledi
Edward Kmett
Edward Kmett@kmett·
GPUs made training massive models possible, but inference needs better memory capacity, memory bandwidth utilization, more power efficiency, and an architecture built bottom up with transformers in mind. To that end, I'm excited to share that Positron just raised a $51.6M Series A! We offer solutions that are already deployed and are shipping today, and we've designed new silicon to reshape AI inference—reducing cost, drastically cutting power consumption, and unlocking entirely new capabilities in production AI environments. I'm proud of what the @positron_ai team has accomplished and I'm grateful to our investors Valor Equity Partners, Atreides Management, and DFJ Growth for backing us! To celebrate a writeup in the Wall Street Journal that landed over the weekend, the folks in the lab put together a schlocky little promo spot, which I couldn't resist sharing here. [No production cards were harmed during the filming of this video.]
Edward Kmett tweet media
English
4
12
54
4K
Fritz Obermeyer
Fritz Obermeyer@ftzo·
@karpathy But the point is to be able to ask long tail questions like "What are the top 10 things I might do when visiting the unincorporated community of Buckeye, Colorado?"
English
0
0
0
113
Andrej Karpathy
Andrej Karpathy@karpathy·
Example when you ask eg “top 10 sights in Amsterdam” or something, some hired data labeler probably saw a similar question at some point, researched it for 20 minutes using Google and Trip Advisor or something, came up with some list of 10, which literally then becomes the correct answer, training the AI to give that answer for that question. If the exact place in question is not in the finetuning training set, the neural net imputes a list of statistically similar vibes based on its knowledge gained from the pretraining stage (language modeling of internet documents).
English
100
137
2.6K
295.4K
Andrej Karpathy
Andrej Karpathy@karpathy·
People have too inflated sense of what it means to "ask an AI" about something. The AI are language models trained basically by imitation on data from human labelers. Instead of the mysticism of "asking an AI", think of it more as "asking the average data labeler" on the internet. Few caveats apply because e.g. in many domains (e.g. code, math, creative writing) the companies hire skilled data labelers (so think of it as asking them instead), and this is not 100% true when reinforcement learning is involved, though I have an earlier rant on how RLHF is just barely RL, and "actual RL" is still too early and/or constrained to domains that offer easy reward functions (math etc.). But roughly speaking (and today), you're not asking some magical AI. You're asking a human data labeler. Whose average essence was lossily distilled into statistical token tumblers that are LLMs. This can still be super useful ofc ourse. Post triggered by someone suggesting we ask an AI how to run the government etc. TLDR you're not asking an AI, you're asking some mashup spirit of its average data labeler.
English
551
1.9K
13.3K
1.8M
Fritz Obermeyer retweetledi
Positron AI
Positron AI@positron_ai·
Positron is proud to share our latest inference performance versus the GPU-based competition: ✅ 70% faster token generation on Llama3.1-8B ✅ 1/3 power usage on Llama3.1-8B ✅ 51% cost savings versus DGX-H100 💸 (Yes, IYKYK: less than half the cost.)
English
0
5
14
1.8K
Fritz Obermeyer
Fritz Obermeyer@ftzo·
@PyroAi @eteq Two reasons the Normal distribution is ubiquitous are (1) it's math is simple; and (2) it is produced by the central limit theorem, and hence should be ubiquitous in nature. However the Lévy Stable distribution is a more general central limit #The_Generalized_Central_Limit_Theorem" target="_blank" rel="nofollow noopener">en.wikipedia.org/wiki/Central_l…
English
0
0
1
65
Pyro
Pyro@PyroAi·
Pyro 1.9.1 is released with a Lévy Stable.log_prob(), a WeighedPredictive, PyroModuleList, bug fixes, and improved type hints. Thanks to @eteq, Dario Coscia, Martin Bubel, Ben Zickel, Kipper Fletez-Brant, and others! github.com/pyro-ppl/pyro/…
English
1
5
18
1.3K