Rick Lamers

3.9K posts


@ricklamers

👨‍💻 AI Researcher @NVIDIA. Ex-Groq. Occasional angel investor. Opinions are my own.

Joined July 2009
589 Following · 6.1K Followers
Lucas Meijer @lucasmeijer
Demo of math tutor project! -> helpwiskunde.nl It augments a high-school math textbook. Would love to hear what you think. (There's a switch-to-English button!) ->
elie @eliebakouch
i'm not a gpu but i'm memory bandwidth bound
Rick Lamers reposted
Andrew Carr 🤸 @andrew_n_carr
somebody made a huggingface model visualizer!! just plug in the url and explore at any granularity
Rick Lamers @ricklamers
The quality of answers I'm getting is ridiculous: an xhigh-configured, gpt-5.5-backed hermes with a custom set of my own skills (paired with best-in-class API services), running on a Mac mini (10 Gbit/s symmetrical fiber) it completely owns. It can just download and transcribe large videos, extract with ffmpeg, clone 4 large git repos to /tmp to search, brew install whatever it needs, and compile/run fast on M4 silicon.
Rick Lamers reposted
Ofir Press @OfirPress
Historian Peter Westwick explains how the gold rush led to Silicon Valley forming in the Bay Area. Gold mining -> water-powered machines for mining -> water-based electricity generation -> electric grid -> universities focused on EE -> semiconductors asteriskmag.com/issues/06/sili…
sunny madra @sundeep
Great hero banner on Apple TV today
Rick Lamers @ricklamers
Edna low-key setting life goals subliminally at age 10
Rick Lamers reposted
Shaurya Jain @jain_shaurya_
Therapist: linear neolabs are not real, they cannot hurt you. me:
Rick Lamers reposted
Noam Brown @polynoamial
After 100 million tokens, performance was still going up. What we're seeing here is not the capability ceiling. From the report: "Performance on TLO continues to scale with the amount of inference compute spent, and we have not yet observed a plateau with the best models."
AI Security Institute @AISecurityInst

OpenAI’s GPT-5.5 is the second model to complete one of our multi-step cyber-attack simulations end-to-end 🧵

Mert Ünsal @mertunsal2020
@stochasticchasm it’s an old pretrained backbone and nowhere close to those flops :) better pretrains will come!
Rick Lamers @ricklamers
PSA: Ghostty has a Cmd+Shift+P palette (I'm an idiot for not discovering it sooner)
Rick Lamers reposted
NVIDIA AI @NVIDIAAI
Meet Nemotron 3 Nano Omni 👋 Our latest addition to the Nemotron family is the highest-efficiency open multimodal model, with leading accuracy. 30B parameters. 256K context length. 🧵👇
Rick Lamers @ricklamers
Vintage models are a cool approach to ablation/generalization/interpretability research for language models. There's a lot of leakage risk, though, so a careful clean-room approach is needed.
Nick Levine @status_effects

New work with @AlecRad and @DavidDuvenaud: Have you ever dreamed of talking to someone from the past? Introducing talkie, a 13B model trained only on pre-1931 text. Vintage models should help us to understand how LMs generalize (e.g., can we teach talkie to code?). Thread:
