Matt Redlon
@mattredlon

567 posts

Chair AI Program & VP Digital Biology @MayoClinic. Thinking @timberwolfai. Co-founder @clarioanalytics. Lecturer @UMNTLI. Geek for Bio/ML/AI. Views are my own.

Minneapolis · Joined November 2008
270 Following · 573 Followers
Matt Redlon @mattredlon
@patrickc To be fair, one of my favorite Tolkien characters…
0 replies · 0 reposts · 2 likes · 192 views
Patrick Collison @patrickc
It's going to be tough for startups when all the Lord of the Rings names are taken and the only thing left is something like Bombadil AI.
496 replies · 345 reposts · 11.6K likes · 1M views
Nima Alidoust @nalidoust
We’ve raised $30M to build the foundational dataset for Virtual Cell Models: 1Bn single-cell datapoints, mapping 1M drug-patient interactions, to be shared with one partner. Our goal: move the frontier, from models to precision medicines that help patients. @tahoe_ai 🧵
[media]
35 replies · 32 reposts · 318 likes · 60.9K views
Neely @NeelyTamminga
News Update: I’m starting a PhD program at Gonzaga… 🤓
[media]
29 replies · 0 reposts · 100 likes · 4.5K views
Deedy @deedydas
Heard a crazy rumor that Anthropic's corporate social responsibility is run by this megalomaniac called Phil who insists on calling his division... Philanthropic.
51 replies · 41 reposts · 1.5K likes · 198.5K views
Matt Redlon @mattredlon
@emollick Advertising that is well executed and highly targeted does not feel interruptive. Instagram does it better than any other platform.
0 replies · 0 reposts · 1 like · 43 views
kalomaze @kalomaze
@Dorialexander What makes me sad about small models right now is that they probably only suck because they are very "all or nothing". A 500M model could go much, much farther if it was sometimes allowed to repeat computation for 10+ passes.
3 replies · 0 reposts · 48 likes · 11.6K views
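The idea above — reusing one small block for a variable number of passes instead of a single forward pass — can be sketched with a weight-tied residual block in toy numpy form (this is an illustration of the general technique, not kalomaze's actual proposal or any real model):

```python
# Toy sketch: one shared (weight-tied) block refines its own hidden
# state for n_passes rounds, trading extra compute for no extra params.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 16))  # single shared weight matrix

def block(h):
    """One pass of the shared block with a residual connection."""
    return h + np.tanh(h @ W)

def forward(x, n_passes):
    """Apply the same parameters n_passes times."""
    h = x
    for _ in range(n_passes):
        h = block(h)
    return h

x = rng.normal(size=(16,))
shallow = forward(x, n_passes=1)
deep = forward(x, n_passes=10)  # same parameter count, 10x the compute
```

In a real model the number of passes could be made input-dependent (as in adaptive-computation-time or universal-transformer-style architectures), which is the "sometimes allowed" part of the tweet.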
Matt Redlon @mattredlon
@jxmnop I really enjoyed working through @rasbt’s “Build a Large Language Model (From Scratch)”. Not a textbook per se, but it could be taught from.
0 replies · 0 reposts · 4 likes · 895 views
Massimo @Rainmaker1973
This truck has been adapted to show drivers behind it a view of the road ahead on a large LED screen, improving their visibility when overtaking. [📹 Denis Shvetsov]
63 replies · 90 reposts · 716 likes · 100.4K views
Matt Redlon @mattredlon
@rasbt The Manning site is killing me, Sebastian! It keeps stalling out during the purchase process. Anywhere else I can purchase the ebook?
1 reply · 0 reposts · 0 likes · 79 views
Sebastian Raschka @rasbt
If you are looking for a resource to understand the instruction fine-tuning process in LLMs, I've uploaded a notebook implementing the fine-tuning process from scratch: github.com/rasbt/LLMs-fro… It explains:
1. how to format the data into 1100 instruction-response pairs
2. how to apply a prompt-style template
3. how to use masking
Of course, this also includes a section on implementing an LLM-based automated process for evaluation. Happy coding!
[media]
35 replies · 403 reposts · 2.1K likes · 208.2K views
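Steps 2 and 3 in the tweet above can be sketched in a few lines. This assumes an Alpaca-style template and the common convention of masking prompt tokens with -100 (the index PyTorch's cross-entropy loss ignores); the notebook's exact template and masking may differ:

```python
# Sketch of instruction fine-tuning data prep: template + loss masking.

def format_prompt(entry):
    """Apply a prompt-style template to one instruction-response pair."""
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request."
        f"\n\n### Instruction:\n{entry['instruction']}"
    )
    if entry.get("input"):
        prompt += f"\n\n### Input:\n{entry['input']}"
    return prompt + "\n\n### Response:\n"

def mask_labels(prompt_ids, response_ids, ignore_index=-100):
    """Mask the prompt tokens so loss is computed only on the response."""
    return [ignore_index] * len(prompt_ids) + list(response_ids)

entry = {"instruction": "Rewrite the sentence in passive voice.",
         "input": "The chef cooked the meal.",
         "output": "The meal was cooked by the chef."}
prompt = format_prompt(entry)
labels = mask_labels(prompt_ids=[1, 2, 3], response_ids=[7, 8])
```

Without the mask, the model would also be trained to reproduce the boilerplate template text, which wastes capacity and can hurt response quality.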
Sam @Supersam331
@natiakourdadze @AliHussein_20 Thanks, Natia! I only got these files but didn't see any keywords. It's probably because I don't have any keywords set up. By "keyword", do you mean the keywords in Google Ads? Does that mean I need to create an ads campaign first?
[media]
2 replies · 0 reposts · 1 like · 430 views
Natia Kurdadze @natiakourdadze
If you are a startup founder and hate marketing, this is how you can get leads organically 🦄
1. Go to Google Search Console
2. Download a CSV file and export all keywords
3. Look at the column called "Position"
4. See what you are almost ranking for
5. Create separate pages for each keyword
#buildinpublic
38 replies · 102 reposts · 1.2K likes · 273.7K views
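Steps 3 and 4 above can be done in a few lines of pandas. The column names ("Top queries", "Impressions", "Position") match a typical Search Console queries export, but check your own CSV; the 8–20 position window for "almost ranking" is an arbitrary choice here, not from the thread:

```python
# Filter a Search Console keyword export for near-ranking queries.
import pandas as pd

def almost_ranking(df, lo=8.0, hi=20.0):
    """Return queries sitting just off page one, most impressions first."""
    near = df[(df["Position"] >= lo) & (df["Position"] <= hi)]
    return near.sort_values("Impressions", ascending=False)

# Example rows in the shape of a GSC "Queries" export:
df = pd.DataFrame({
    "Top queries": ["crm for startups", "founder marketing", "seo basics"],
    "Impressions": [1200, 300, 5000],
    "Position": [11.2, 3.4, 14.8],
})
targets = almost_ranking(df)  # candidates for dedicated pages (step 5)
```

Sorting by impressions surfaces the keywords where a small ranking bump would be seen by the most searchers.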
Chloe Condon @ChloeCondon
Sensible business woman attire that silently screams "Have you heard of Google dot com?" 💙❤️💛💚 @GoogleCloudTech ☁️
[media]
10 replies · 2 reposts · 91 likes · 5.2K views
Matt Redlon @mattredlon
@OpenAI @AnthropicAI @Meta @GoogleAI @MistralAI "Supervised Fine-Tuning (SFT)" papers cover techniques for aligning the model after pretraining using curated questions from a human labeler. An important step, but it seems to be losing out in research lately to its big brother, RL[H/AI]F - at least based on Twitter/X activity!
0 replies · 0 reposts · 0 likes · 93 views
Matt Redlon @mattredlon
Everyone: "AI is going to take over the world" AI:
[media]
0 replies · 0 reposts · 0 likes · 146 views
François Fleuret @francoisfleuret
Is there an intuitive rationale for the necessity of the complex numbers to exist? Saying "we needed to solve x^2 = -1" is a bit short; why not "x + 1 = x"?
113 replies · 11 reposts · 180 likes · 131.9K views
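One standard answer to the question above (not from the thread itself): adjoining a root of $x^2 + 1$ yields a consistent field, while adjoining a root of $x + 1 = x$ destroys the structure entirely.

```latex
% Adjoining i gives a field extension of the reals:
\mathbb{C} \;\cong\; \mathbb{R}[x]/(x^2 + 1)
% whereas x + 1 = x forces, after subtracting x from both sides,
1 = 0
% which collapses any ring containing such an element to the zero ring.
```

So the two equations are not symmetric: one has no real solution but a consistent extension, the other has no solution in any nontrivial ring at all.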
Matt Redlon @mattredlon
@karpathy @lateinteraction Another example of the flow used in FunSearch, AlphaGeometry, and @rao2z's LLM-Modulo approach: the LLM generates ideas and an external verifier checks them. While Andrej says the "answer is constructed iteratively", you could say the prompt is what is iteratively refined.
1 reply · 5 reposts · 7 likes · 3.6K views
Andrej Karpathy @karpathy
Prompt engineering (or rather "flow engineering") intensifies for code generation. Great reading and a reminder of how much alpha there is (pass@5 19% to 44%) in moving from a naive prompt:answer paradigm to a "flow" paradigm, where the answer is constructed iteratively.
[media]

Quoting Santiago @svpino
We are one step closer to having AI generate code better than humans! There's a new open-source, state-of-the-art code generation tool: a new approach that improves the performance of Large Language Models generating code. The paper's authors call the process "AlphaCodium" and tested it on the CodeContests dataset, which contains around 10,000 competitive programming problems. The results put AlphaCodium as the best approach to generate code we've seen: it beats DeepMind's AlphaCode and their new AlphaCode2 without needing to fine-tune a model!

I'm linking to the paper, the GitHub repository, and a blog post below, but let me give you a 10-second summary of how the process works. Instead of using a single prompt to solve problems, AlphaCodium relies on an iterative process that repeatedly runs and fixes the generated code using the testing data:
1. The first step is to have the model reason about the problem. It is described using bullet points focused on the goal, inputs, outputs, rules, constraints, and any other relevant details.
2. Then, they make the model reason about the public tests and come up with an explanation of why each input leads to its particular output.
3. The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness.
4. Then, it generates more diverse tests for the problem, covering cases not part of the original public tests.
5. Iteratively: pick a solution, generate the code, and run it on a few test cases. If the tests fail, improve the code and repeat the process until the code passes every test.

There's a lot more information in the paper and the blog post. Here are the links:
• Paper: arxiv.org/abs/2401.08500
• Blog: codium.ai/blog/alphacodi…
• Code: github.com/Codium-ai/Alph…
I attached an image comparing AlphaCodium with direct prompting using different models. 2024 has barely started, and we are making a ton of progress!

105 replies · 530 reposts · 3.2K likes · 798.7K views
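The generate → run → fix loop in step 5 above can be sketched in a few lines. Here `generate_code` and `repair_code` are hypothetical stand-ins for LLM calls (not AlphaCodium's actual API), and tests are plain input/expected pairs:

```python
# Toy sketch of the iterative run-and-fix flow from the quoted thread.

def run_tests(solution, tests):
    """Return the failing (input, expected) pairs for a candidate."""
    return [(x, want) for x, want in tests if solution(x) != want]

def iterate_until_passing(generate_code, repair_code, tests, max_rounds=5):
    """Pick a candidate, run it on the tests, and repair it on failure."""
    candidate = generate_code()
    for _ in range(max_rounds):
        failures = run_tests(candidate, tests)
        if not failures:
            return candidate          # every test passes
        candidate = repair_code(candidate, failures)
    return None                       # gave up within the round budget

# Demo with stub "LLM" calls: the first draft is off by one,
# and the "repair" call returns a corrected version.
tests = [(1, 2), (5, 10)]
first_draft = lambda x: x * 2 - 1
fixed = lambda x: x * 2
solution = iterate_until_passing(
    generate_code=lambda: first_draft,
    repair_code=lambda cand, fails: fixed,
    tests=tests,
)
```

The concrete failure cases handed to `repair_code` are what make this a "flow" rather than a single prompt: each round conditions the next attempt on evidence from execution.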
SpaceX @SpaceX
Meet Ax-3, the crew of Dragon’s 12th human spaceflight mission
[media ×4]
1.6K replies · 4.7K reposts · 60.3K likes · 8.8M views