Alex

82 posts

@gywbd

Joined January 2008
643 Following · 111 Followers
Alex retweeted
Lean @leanprover:
🔥 @GoogleDeepMind just dropped their "formal conjectures" project - formalizing statements of math's biggest unsolved mysteries in #LeanLang and #Mathlib! This Google-backed project is a HUGE step toward developing "a much richer dataset of formalized conjectures", valuable for benchmarks and growing the Lean ecosystem. The project was open sourced today! And you can be part of it! Check it out: github.com/google-deepmin… #LeanProver #FormalMath #AIResearch #GoogleDeepMind
[image attached]
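The project is about formalizing conjecture *statements*, not proofs. As a hedged illustration only (not taken from the actual repo, whose conventions may differ), a formalized conjecture statement in Lean 4 with Mathlib might look like:

```lean
import Mathlib

/-- Goldbach's conjecture: every even natural number greater than 2 is the
sum of two primes. Only the statement is formalized; the proof is `sorry`. -/
theorem goldbach_conjecture (n : ℕ) (hn : 2 < n) (he : Even n) :
    ∃ p q : ℕ, p.Prime ∧ q.Prime ∧ p + q = n := by
  sorry
```

Statements like this are exactly what makes the dataset useful for benchmarks: the types must check even though no proof exists.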
14 replies · 187 reposts · 906 likes · 100K views
Alex retweeted
DSPy @DSPyOSS:
For AI software:
1) Information Flow is key
2) AI steps should be functional and structured
3) AI behavior specification should be decoupled from inference strategies
4) … and decoupled from learning paradigms
5) Natural Language Optimization is a powerful learning paradigm
Omar Khattab @lateinteraction:

DSPy's biggest strength is also the reason it can admittedly be hard to wrap your head around. It basically says: LLMs and their methods will continue to improve, but not equally along every axis, so:

- What's the smallest set of fundamental abstractions that lets you build downstream AI software that is "future-proof" and rides the tide of progress?
- Equivalently, what are the right algorithmic problems for researchers to focus on to enable as much progress as possible for AI software?

This is necessarily complex, in the sense that the answer has to be composed of a few things, not one concept only. (Though if you had to understand one concept only, the fundamental glue is DSPy Signatures.) It's actually only a handful of bets, not too many. I've been tweeting them non-stop since late 2022, but I've never collected them in one place. All of them have proven to be the right bets for 2.5 years, and I think they'll stay the right bets for at least the next three.

1) Information Flow is the single most important aspect of good AI software.

As foundation models improve, the bottleneck becomes whether you can actually (1) ask them the right question and (2) provide them with all the context necessary to address it. Since 2022, DSPy has addressed this in two directions: (i) free-form control flow ("Compound AI Systems" / LM programs) and (ii) Signatures. Prompts have been a massive distraction here, with people thinking they need to find the magical keyword to talk to LLMs. From 2022, DSPy put the focus on *Signatures* (back then called Templates), which force you to break down LM interactions into *structured and named* input fields and *structured and named* output fields. Simply getting those fields right was (and remains) a lot more important than "engineering" the "right prompt". That's the point of Signatures. (We know it's hard to force people to define their signatures so carefully, but if you can't do that, your system is going to be bad.)

2) Interactions with LLMs should be Functional and Structured.

Again, prompts are bad. Chat interfaces mislead people into thinking LLMs should take "strings", hence the magical status of "prompts". But actually you should define a functional contract. What will you give to the function? What is the function supposed to do with it? What is it then supposed to give you back? This is again Signatures: (i) structured *inputs*, (ii) structured *outputs*, and (iii) instructions. You've got to decouple these three things, which until DSP (2022), and really until structured outputs went mainstream very recently, were just meshed together into "prompts". This bears repeating: your programmatic LLM interactions need to be functions, not strings. Why? Because many concerns that are not actually part of the LLM's behavior must otherwise be handled ad hoc when working with strings:

- How do you format the *inputs* to your LLM into a string?
- How do you separate *instructions* and *inputs* (data)?
- How do you *specify* the output format (string) your LLM should produce so you can parse it?
- How do you layer an inference strategy, like CoT or ReAct, on top without entirely rewriting your prompt?

Signatures solve this. They ask you to specify *just* the input fields, output fields, and task instruction. The rest is the job of Modules and Optimizers, which instantiate Signatures.

3) Inference Strategies should be Polymorphic Modules.

This sounds scary, but the point is that all the cool general-purpose prompting techniques and inference-scaling strategies should be Modules, like the layers in DNN frameworks such as PyTorch. Modules are generic functions which take *any* Signature and instantiate *its* behavior generically into a well-defined strategy. This means we can talk about "CoT" or "ReAct" without committing at all to the specific task (Signature) you want to apply them to. This is a huge deal, and it again exists only in DSPy. One key thing Modules do is define *parameters*: which parts of the Module are fixed, and which can be learned? For example, in CoT, the specific string that asks the model to think step by step could be learned, or the few-shot examples of thinking step by step should be learnable. In ReAct, demonstrations of good trajectories should be learnable.

4) Specification of your AI software's behavior should be decoupled from learning paradigms.

Before DSPy, every time a new ML paradigm came along, we rewrote our AI software. Oh, we moved from LSTMs to Transformers? From fine-tuning BERT to ICL with GPT-3? Entirely new system. DSPy says: if you write Signatures and instantiate Modules, the Modules know exactly what about them can be optimized: the LM underneath, the instructions in the prompt, the demonstrations, etc. The learning paradigms (RL, prompt optimization, program transformations that respect the signature) should be layered on top, with the same frontend/language for expressing the programmatic behavior. This means the *same programs* you wrote in DSPy in 2023 can now be optimized with dspy.GRPO, the way they could be optimized with dspy.MIPROv2, the way they were optimized with dspy.BootstrapFS before that. The second half of this piece is Downstream Alignment, or compile-time scaling. No matter how good LLMs get, they might not perfectly align with your downstream task, especially when your information flow requires multiple modules and multiple LLM interactions. You need to "compile" toward a metric "late", i.e. after the system is fully defined, no matter how RLHF'ed your models are.

5) Natural Language Optimization is a powerful paradigm of learning.

We've said this for years, e.g. with the BetterTogether optimizer paper: you need both *fine-tuning* and *coarse-tuning* at a higher level in natural language. The analogy I use all the time is riding a bike: it's very hard to learn to ride without practice (fine-tuning), but it's extremely inefficient to learn *not to ride on the sidewalk* from rewards; you want to understand and adopt that rule in natural language as fast as possible. This is the source of DSPy's focus on prompt optimizers as a foundational piece; they're often far superior in sample efficiency to policy-gradient RL if your problem has the right information-flow structure.

That's it. That's the set of core bets DSPy has made from 2022/2023 until today: compiling Declarative AI Functions into LM Calls, with Signatures, Modules, and Optimizers.

1) Information Flow is the single most important aspect of good AI software.
2) Interactions with LLMs should be Functional and Structured.
3) Inference Strategies should be Polymorphic Modules.
4) Specification of your AI software's behavior should be decoupled from learning paradigms.
5) Natural Language Optimization is a powerful paradigm of learning.
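The "functional contract" idea behind Signatures can be sketched in plain Python. This is a toy stand-in invented for illustration, not the real DSPy API: structured named inputs, structured named outputs, and an instruction, with the string formatting pushed out of your hands.

```python
from dataclasses import dataclass

# Toy stand-in for the Signature idea: a functional contract of named
# input fields, named output fields, and an instruction, decoupled from
# any particular prompt string. (Invented for illustration; NOT the real
# DSPy API, where you would subclass dspy.Signature instead.)

@dataclass
class Signature:
    instruction: str
    inputs: list    # named input fields
    outputs: list   # named output fields

    def render(self, **kwargs) -> str:
        """One possible way a Module could turn the contract into a prompt
        string. The point: formatting is the Module's job, not yours."""
        lines = [self.instruction]
        for name in self.inputs:
            lines.append(f"{name}: {kwargs[name]}")
        lines.append("Respond with fields: " + ", ".join(self.outputs))
        return "\n".join(lines)

qa = Signature(
    instruction="Answer the question using the given context.",
    inputs=["context", "question"],
    outputs=["answer"],
)
prompt = qa.render(context="Paris is the capital of France.",
                   question="What is the capital of France?")
```

A different Module (CoT, ReAct, ...) could render the same contract into a completely different prompt, which is what makes the behavior specification portable across inference strategies.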

3 replies · 13 reposts · 168 likes · 33.3K views
Alex retweeted
ハセン حسن @hasen_95dx:
Fun fact: the more "responsible" a dev team is, the more likely it is that they spend a significant amount of time firefighting.

Well-organized Jira tickets, several branches in parallel, every PR reviewed, 100% unit test coverage, clean architecture, domain-driven design, modular scalable microservices...

But: users complain about critical bugs all the time, the infra is always falling apart, operational costs are out of control, the local dev environment is slow and brittle, bugs take forever to replicate, fixes can't be confirmed except in production, the unit tests are always being rewritten, there's so much boilerplate code for every little thing, and horrible architecture all around.

This is a fact. A very fun fact.
23 replies · 11 reposts · 178 likes · 13.8K views
Alex retweeted
dr. jack morris @jxmnop:
don't waste your time studying CUDA or calculus. pytorch abstracts away both of these. instead, if i were you, i would train hundreds of tiny little models: train PixelCNN on MNIST. train an MLP to play snake. train a tiny language model on your emails
dr. jack morris @jxmnop:

in 2025, if you want to become a successful AI engineer or researcher, you should NOT learn CUDA. furthermore, i'd guess that 80% of successful ML researchers have never written a CUDA kernel. practical ML is about training models and using them to make predictions. this has nothing to do with CUDA.

CUDA is necessary in two cases:
(a) you are developing a radically new model that isn't easily expressible in PyTorch or Jax (e.g. Mamba)
(b) you are running into performance bottlenecks from current CUDA code and need to make it faster

i doubt that either case applies to you. chances are you aren't building the next Mamba, and the bottlenecks you'll run into in practice are different. you should work on finding the right data or hardware, or setting things up properly, or distributing efficiently across hardware, or researching new efficient ways to run models that other people are working on (like vLLM and SGLang). or better than that, work on your eval pipeline. find ways to measure your model's performance that are more realistic, comprehensive, efficient, fair, etc.

TLDR: want to learn? spend your time tinkering with models in PyTorch and Jax. not writing matrix multiplications
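In the spirit of "train hundreds of tiny little models", here is a dependency-light sketch: a two-layer MLP learning XOR with plain-numpy gradient descent. The architecture, learning rate, and step count are all invented for illustration; the tweet has the PyTorch/Jax equivalent in mind.

```python
import numpy as np

# A tiny model to tinker with: 2-8-1 MLP fitting XOR via manual
# gradient descent on squared error. Numpy stands in for PyTorch
# to keep the sketch self-contained.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input  -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss():
    p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
    return float(((p - y) ** 2).mean())

init_loss = loss()
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)      # hidden activations
    p = sigmoid(h @ W2 + b2)      # predicted probabilities
    dp = (p - y) * p * (1 - p)    # d(squared error)/d(pre-sigmoid)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

final_loss = loss()
preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

Swap the numpy arrays for torch tensors and the hand-written gradients for autograd and you have the PyTorch version of the same exercise.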

65 replies · 130 reposts · 2.5K likes · 214.2K views
Alex retweeted
knivesysl @knowrohit07:
stop writing cuda. you're never going to reach a level where you actually need to write cu kernels unless you're deep into optimizing existing sota architectures or working to speed up real bottlenecks. mfs doing matmuls💀
2 replies · 3 reposts · 41 likes · 1.4K views
Alex retweeted
maharshi @maharshii:
i have noticed that LLMs like claude and gpt-4o work really well with this prompt; it instructs them to 'contemplate' for a bit before giving the final answer.
[image attached]
227 replies · 916 reposts · 14.7K likes · 2.7M views
Alex retweeted
Pratik Kadam @PratikKadam_:
Most haven't even read this paper yet. WHY? If you are building AI agents, or planning to learn about AI agents, this paper by Anthropic is a perfect start. Just read it, or watch a video on it, but go through it; they explain things very easily and in simple terms. If you want a breakdown of this paper, comment below and I will do it for you guys! Link: anthropic.com/research/build…
[image attached]
7 replies · 43 reposts · 328 likes · 30.4K views
Alex retweeted
Taelin @VictorTaelin:
the new coding paradigm is to split your entire codebase into chunks (functions, blocks) and then send EVERY block, in parallel, to DeepSeek to ask: "does this need to change?". then send each chunk that returns "yes" to Sonnet for the actual code editing. thank me later
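A hedged sketch of that triage-then-edit pipeline. `ask_model` is a stub standing in for real API calls (a cheap model for the yes/no pass, a stronger one for editing); every name here is invented for illustration, not an actual DeepSeek or Anthropic client.

```python
from concurrent.futures import ThreadPoolExecutor

def ask_model(model: str, prompt: str) -> str:
    # Stub for an LLM API call. The fake triage model flags chunks
    # containing "TODO"; the fake editor model strips the TODO marker.
    if model == "triage":
        return "yes" if "TODO" in prompt else "no"
    return prompt.replace("TODO: ", "")

def update_codebase(chunks: list[str], request: str) -> list[str]:
    triage_prompts = [
        f"Change request: {request}\nChunk:\n{c}\nDoes this need to change? yes/no"
        for c in chunks
    ]
    # Fan the cheap yes/no question out over every chunk in parallel.
    with ThreadPoolExecutor() as pool:
        verdicts = list(pool.map(lambda p: ask_model("triage", p), triage_prompts))
    # Only the flagged chunks go to the stronger (pricier) editing model.
    return [
        ask_model("editor", c) if v == "yes" else c
        for c, v in zip(chunks, verdicts)
    ]

chunks = ["def f(): pass", "def g(): pass  # TODO: rename"]
result = update_codebase(chunks, "tidy the code")
```

The design point is cost asymmetry: the expensive model only ever sees the chunks the cheap model flagged.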
74 replies · 117 reposts · 2.5K likes · 413.7K views
Alex retweeted
Learn Prompting @learnprompting:
9 main Chain-of-Thought (CoT) prompting techniques:
🔹 Standard CoT or Few-Shot CoT
🔹 Zero-Shot CoT
🔹 Self-Consistency
🔹 Automatic CoT (Auto-CoT)
🔹 Tabular CoT (Tab-CoT)
🔹 Contrastive CoT
🔹 Tree-of-Thoughts (ToT)
🔹 Graph-of-Thought (GoT)
🔹 Program of Thoughts (PoT)
Save the list and check this out for more info 👇🏼
[image attached]
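Of the techniques listed, Self-Consistency is simple enough to sketch: sample several chain-of-thought completions at nonzero temperature and majority-vote the final answer. `sample_cot` below is a stub standing in for a sampled LLM call; the accuracy rate and answer values are invented for illustration.

```python
from collections import Counter
import random

def sample_cot(question: str, rng: random.Random) -> str:
    # Stub for one sampled chain-of-thought completion: a noisy
    # "reasoner" that answers 17 + 25 correctly 70% of the time.
    return "42" if rng.random() < 0.7 else str(rng.choice([41, 43, 52]))

def self_consistency(question: str, n_samples: int = 25, seed: int = 0) -> str:
    # Draw several independent CoT samples, then majority-vote the
    # final answers (reasoning paths are discarded, answers are kept).
    rng = random.Random(seed)
    answers = [sample_cot(question, rng) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

answer = self_consistency("What is 17 + 25?")
```

Voting works here because the correct answer is sampled more often than any single wrong answer, so errors spread across distinct values get outvoted.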
3 replies · 16 reposts · 91 likes · 3.3K views
Alex retweeted
Avi Chawla @_avichawla:
Prompting vs. RAG vs. fine-tuning: which one is best for you, clearly explained:
6 replies · 39 reposts · 449 likes · 89K views
Alex retweeted
jordanov @ditenforcer:
weird tips on how to get better output when using AI to code:

1. Use broken English. For some reason, when you speak broken English and explain in simple terms, you get better results.

2. Ask the LLM if it understands. "If you don't understand something, please ask me for more clarification." Actually asking the LLM to summarize and explain what it's being asked to do makes it understand the task better.

3. Prompt the LLM to be your teacher (optional). Before you ask it to fix an error, instruct the LLM to guide you on what to look for and where the problem is. This is optional, but I think it's still important as a developer to keep your problem-solving skills sharp. I've noticed I've forgotten some simple things because I rely on generated code that I sometimes just review.
0 replies · 2 reposts · 25 likes · 1.7K views
Alex retweeted
jordanov @ditenforcer:
To anybody starting their own AI coding journey: stick to certain tools and master them. I see so many people constantly hopping from one tool to another. Every day there's something new coming out. That's fine, and I'm not saying not to try them, but stick to what's working for you and master it. I use Cursor + v0 for 95% of all my projects. Are they the best tools out there? For me, yes, but many people would have different opinions. They are just tools at the end of the day; it's still you doing the heavy lifting. Master v0 for UI, build your prompts to prepare PRDs, and use an AI code editor like Cursor/Windsurf. You just have to start building, hit a wall, break some things, and brute-force until you get the desired output. It's an amazing possibility for many of you. Me especially: I never thought of myself as a technical person, and I ended up building projects for clients using these AI tools. Start building.
0 replies · 1 repost · 3 likes · 607 views
Alex retweeted
Lee Robinson @leerob:
Just did a fresh macOS install. Writing down what I set up:
• Superhuman: fast email client
• Ghostty: super fast terminal
• Neovim: default editor
• Raycast: better Spotlight
• CleanShot X: better screenshot tool
• Notion: work notes + calendar
• Chrome: default browser
• 1Password: best password manager
• Spotify: my music hub
• ScreenFlow: lightweight video capture + editor
• Screen Studio: another nice video tool

The usual web dev stuff: Homebrew, Node, pnpm, Bun.

Here are my current Neovim plugins: github.com/leerob/vim-for…

I use zsh (default on macOS) with some customizations in my .zshrc, but pretty minimal to keep startup fast.

There are a few macOS defaults I change:
• Remap Caps Lock to Escape
• mkdir ~/Developer (it has a fancy icon in Finder!)
• Faster keyboard repeats:
  defaults write -g InitialKeyRepeat -int 15
  defaults write -g KeyRepeat -int 1
• Show hidden files in Finder:
  defaults write com.apple.finder AppleShowAllFiles YES

That's it! Anything I'm missing? 👀
324 replies · 217 reposts · 4.6K likes · 872.8K views
God of Prompt @godofprompt:
Everyone knows ChatGPT-4o is powerful. But few know how to craft the perfect prompt. That's why I made a mega prompt creator to help you. Now write prompts like a pro in seconds! Like + comment 'Mega' and I'll DM you the link FREE! (Must be following me)
[image attached]
396 replies · 42 reposts · 411 likes · 61.9K views
Gasper C. | Rebelpreneurs @GasperCrepinsek:
I'm deleting this soon because it's legit a formula to PRINT CASH: CUSTOM GPTs. You can make THOUSANDS building and selling them, and literally anyone can learn it. Comment 'COURSE' and I will DM you my full 23-hour video course right now! (must follow)
[image attached]
1.3K replies · 129 reposts · 954 likes · 386.5K views
Kyle Balmer @iamkylebalmer:
Want to start an AI consultancy? Consult, run workshops, provide implementation services? I've created the ultimate guide on how to turn your AI interest into profit. No technical AI knowledge needed. High demand. Want it free? Must follow + reply 'ai' and I'll DM you a copy.
[image attached]
432 replies · 55 reposts · 397 likes · 58.3K views
Alex retweeted