Dr Paul Kruszewski

955 posts


@PKruszewski

Deep tech inventor / entrepreneur / investor. I see the future and then build it. All comments are mine and not my employer's.

Montreal · Joined March 2011
112 Following · 342 Followers
Dr Paul Kruszewski retweeted
Michael A. Arouet (@MichaelAArouet)
Wow, it's quite an achievement for Canada to have had even lower growth than Germany during the last decade. Germany has done almost everything humanly possible to ruin its economy. How has Canada been able to beat that?
[image]
Replies: 251 · Reposts: 797 · Likes: 2.8K · Views: 160.6K
Greg Estes (@GregEstes)
Very excited to support @MayfieldFund @NavinChaddha in growing the AI ecosystem. One thing I love about the Mayfield team is the support they give to their Founders at every stage of their business. It's always about the people, and they are great people. linkedin.com/feed/update/ur…
Replies: 1 · Reposts: 0 · Likes: 8 · Views: 75
Dr Paul Kruszewski retweeted
tobi lutke (@tobi)
[image]
Replies: 191 · Reposts: 860 · Likes: 9.5K · Views: 335.3K
Dr Paul Kruszewski retweeted
BetaKit (@BetaKit)
Brilliant Harvest (@B_HarvestInc), the Calgary firm that runs an AI-driven platform for the heavy equipment industry, describes itself as “farmers first and innovators second.” It just secured $4M USD in seed funding. Here's what it plans to do with it: betakit.com/agtech-startup…
Replies: 0 · Reposts: 1 · Likes: 5 · Views: 435
Dr Paul Kruszewski retweeted
Palmer Luckey (@PalmerLuckey)
Whatever happened to the extremely cool 80s TV genre of "Shows based on a single extremely technically advanced vehicle"? Airwolf, Knight Rider, Viper, Blue Thunder, etc. It was an amazing genre that should return to inspire modern youth. youtube.com/watch?v=9Jbbxz…
[YouTube video preview]
Replies: 154 · Reposts: 89 · Likes: 1.4K · Views: 242.1K
conor brennan-burke (@contextconor)
the tizz / rizz founder matrix (TRFM)

all great founders land somewhere on here

tag yourself

cooked up with @jia_seed
[image]
Replies: 195 · Reposts: 225 · Likes: 4K · Views: 569K
Andrej Karpathy (@karpathy)
We're missing (at least one) major paradigm for LLM learning. Not sure what to call it; possibly it already has a name: system prompt learning?

Pretraining is for knowledge. Finetuning (SL/RL) is for habitual behavior. Both of these involve a change in parameters, but a lot of human learning feels more like a change in system prompt. You encounter a problem, figure something out, then "remember" something in fairly explicit terms for the next time, e.g. "It seems when I encounter this and that kind of a problem, I should try this and that kind of an approach/solution."

It feels more like taking notes for yourself, i.e. something like the "Memory" feature, but used not to store per-user random facts, but rather general/global problem-solving knowledge and strategies. LLMs are quite literally like the guy in Memento, except we haven't given them their scratchpad yet. Note that this paradigm is also significantly more powerful and data efficient, because a knowledge-guided "review" stage is a much higher-dimensional feedback channel than a reward scalar.

I was prompted to jot down this shower of thoughts after reading through Claude's system prompt, which currently seems to be around 17,000 words, specifying not just basic behavior style/preferences (e.g. refuse various requests related to song lyrics) but also a large amount of general problem-solving strategy, e.g.:

"If Claude is asked to count words, letters, and characters, it thinks step by step before answering the person. It explicitly counts the words, letters, or characters by assigning a number to each. It only answers the person once it has performed this explicit counting step."

This is to help Claude count the 'r's in "strawberry", etc. Imo this is not the kind of problem-solving knowledge that should be baked into weights via reinforcement learning, or at least not immediately/exclusively. And it certainly shouldn't come from human engineers writing system prompts by hand.

It should come from system prompt learning, which resembles RL in the setup, with the exception of the learning algorithm (edits vs. gradient descent). A large section of the LLM system prompt could be written via system prompt learning; it would look a bit like the LLM writing a book for itself on how to solve problems. If this works, it would be a new and powerful learning paradigm, with a lot of details left to figure out: how do the edits work? Can/should you learn the edit system? How do you gradually move knowledge from explicit system text to habitual weights, as humans seem to do? Etc.
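The loop described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration of the idea, not a real API: the class name, the stubbed `review` step (which a real system would delegate to an LLM), and the note format are all invented for this sketch.

```python
# Sketch of "system prompt learning": instead of updating weights,
# the model keeps an editable list of problem-solving strategies that
# is prepended to every prompt. All names here are hypothetical.

class SystemPromptLearner:
    def __init__(self, base_prompt: str):
        self.base_prompt = base_prompt
        self.lessons: list[str] = []  # explicit, human-readable strategies

    def system_prompt(self) -> str:
        """Assemble the prompt: learned lessons live in context, not weights."""
        if not self.lessons:
            return self.base_prompt
        notes = "\n".join(f"- {lesson}" for lesson in self.lessons)
        return f"{self.base_prompt}\nLearned strategies:\n{notes}"

    def review(self, problem: str, takeaway: str) -> None:
        """The 'review' stage: distill an episode into an explicit note.

        In a real system an LLM would write (and perhaps edit or merge)
        these notes; here it is a trivial string template.
        """
        self.lessons.append(f"When facing '{problem}', remember: {takeaway}")


learner = SystemPromptLearner("You are a helpful assistant.")
learner.review("count letters in a word",
               "count step by step, assigning a number to each character")
print(learner.system_prompt())
```

The open questions in the tweet map directly onto this sketch: the edit operation here is a bare append, whereas a learned edit system might rewrite, merge, or prune lessons, and a separate distillation step could eventually move stable lessons out of the prompt and into the weights.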
Replies: 715 · Reposts: 1K · Likes: 10.3K · Views: 1.5M
Greg Estes (@GregEstes)
Well, I've just retired from NVIDIA after 15 years. Thank you to all my amazing friends, colleagues and partners over 40 years. What a great company, filled with brilliant people doing their life's work. Humbled to have been part of it, and will be cheering them on every day. linkedin.com/posts/estesgre…
[image]
Replies: 7 · Reposts: 2 · Likes: 60 · Views: 2.2K