Aaron Villalpando

5.9K posts

@aaronvi

Building a framework for AI called BAML. (YC W23). @BoundaryML

Joined August 2011
1.7K Following · 4.5K Followers

Pinned Tweet
Aaron Villalpando@aaronvi·
We are hiring Rust devs to work on building out a programming language (BAML) with crazy new, useful features for making AI pipelines. Our entire team is senior+ engineers. We are a team of 5. We work hard and in person in Seattle. We accept new grads. The interview is not leetcode, but a 1-week work trial. Send me a DM on X or LinkedIn or email aaron @ boundaryml dot com if interested.
2 replies · 5 reposts · 33 likes · 5.4K views
Brian Cardarella@bcardarella·
The pushback I got from people on these two tweets was always "just use Credo" or "that's for your linter to do," and they're completely missing the point. If I'm having to lean on those tools to clean up AI slop, then the AI isn't writing good code in the first place. The atom exhaustion is a great example of how idiomatic code is important for the AI to write, or else it will create real problems in production. Even Credo and linters aren't going to pick up the nuance of when it's best to use atoms vs. strings for performance; that requires reasoning, and the AI isn't meeting that standard with Elixir.
3 replies · 0 reposts · 4 likes · 383 views
Brian Cardarella@bcardarella·
Something else about AI-generated @elixirlang - it *LOVES* to generate code with dynamically created atoms, thus exhausting the memory on the machine. Just putting in a rule to always use strings isn't the right solution either, for performance reasons. I haven't yet found a good way to prevent OOM issues like this without Boomer Coding the fix.
[image attached]
8 replies · 5 reposts · 48 likes · 3.5K views
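Atoms are Elixir-specific, but the failure mode Brian describes — an unbounded table of interned identifiers keyed by external input — translates to any language. A minimal Python sketch of the same leak and a whitelist-bounded fix (the analogue of Elixir's `String.to_existing_atom/1`); function names here are illustrative, not from the thread:

```python
# Unbounded "interning" registry: every distinct external input becomes a
# permanent entry, just like String.to_atom/1 on user data in Elixir.
_registry: dict[str, int] = {}

def leaky_tag(name: str) -> int:
    """Assigns a permanent id per name; grows forever on attacker-chosen input."""
    if name not in _registry:
        _registry[name] = len(_registry)
    return _registry[name]

def safe_tag(name: str, allowed: frozenset[str]) -> int:
    """Bounded variant: only a fixed whitelist may be interned,
    so external input can never grow the registry."""
    if name not in allowed:
        raise ValueError(f"unknown tag: {name!r}")
    return leaky_tag(name)
```

The point of the thread stands: a linter can flag `String.to_atom/1`, but deciding when interning is actually the right performance tradeoff requires reasoning about the input domain.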
Tim Culverhouse@rockorager·
Same language for backend and frontend, different runtimes. Mu for mullet? Compiles to LuaJIT for the backend, JavaScript for the frontend. Single build command to build all artifacts, reference them from code. First-class templating. Still feeling good about this.
5 replies · 1 repost · 37 likes · 5.9K views
Tanner Linsley@tannerlinsley·
@sko_phi @aaronvi Nah. Believe me, we could write even more 😅 We take great care and caution with the projects we commit to.
1 reply · 0 reposts · 0 likes · 38 views
Aaron Villalpando@aaronvi·
is it just me or might tanstack be doing way too much
[image attached]
60 replies · 6 reposts · 927 likes · 114.7K views
Aaron Villalpando@aaronvi·
sorry we can't release BAML 1.0 yet, it's too powerful for the general public
1 reply · 1 repost · 10 likes · 604 views
Aaron Villalpando reposted
Anish Palakurthi@anishpalakurT·
Announcing the BAML Bounty... For all power-users of BAML, we're giving away free BAML merch! (t-shirts, stickers, hoodies 🔥🧯). Share what you built with BAML with #baml → Fill out tally.so/r/PdErze → Free merch! Hurry! Supplies are limited to the first 50 posts.
[image attached]
0 replies · 5 reposts · 5 likes · 656 views
Vaibhav Gupta@vaibcode·
I really need to find out how to code from my phone instead of carrying my laptop
[GIF attached]
6 replies · 0 reposts · 10 likes · 578 views
Aaron Villalpando@aaronvi·
your senior distinguished principal software engineers know as much about vibe coding as the SDE2 vibe coding on side projects. This stuff is all new. It's ~1 year old
0 replies · 0 reposts · 5 likes · 361 views
Aaron Villalpando@aaronvi·
code is turning into a live, self-evolving organism
0 replies · 0 reposts · 2 likes · 358 views
David Cramer@zeeg·
Is there something simpler than the AI SDK that works well on Vercel, and just uses a normal agent tool loop? I'm literally incapable of making this thing work. Happens 100% of the time I try to use this library.
20 replies · 1 repost · 46 likes · 13.3K views
Tyler Barnes@tylbar·
🚨 Announcing a new coding agent that rivals Claude Code but with no compaction needed 🚨 The feeling of using it: run your coding sessions forever, don't worry at all, and get shit done! We're calling it Mastra Code, it's powered by @mastra's new observational memory, and we've been using it internally @mastra to do all our work 1/4 🧵
59 replies · 53 reposts · 520 likes · 65.6K views
Aaron Villalpando@aaronvi·
@karpathy we're working on essentially TypeScript v2. With things like no `any`, for example. LLMs are great at writing models and types. We will be launching soon at @boundaryML
0 replies · 0 reposts · 3 likes · 291 views
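The cost of `any` is easy to demonstrate in Python terms, where `typing.Any` plays the same role: it silently downgrades a static error into a runtime one. A hedged sketch (not BAML code; the functions are invented for illustration):

```python
from typing import Any

def parse_port_any(raw: str) -> Any:
    # 'Any' erases the real type; a checker will accept any use of the result.
    return int(raw)

def parse_port_typed(raw: str) -> int:
    # Precise return type: calling a str method on this is a *static* error.
    return int(raw)

port = parse_port_any("8080")
try:
    # A type checker accepts this line because of Any, but it fails at runtime:
    port.upper()
except AttributeError:
    # int has no .upper(); with '-> int' the checker flags this before running.
    pass
```

Removing `any` from the language forces the second signature everywhere, which is exactly the property that makes LLM-written code mechanically checkable.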
Andrej Karpathy@karpathy·
I think it must be a very interesting time to be in programming languages and formal methods because LLMs change the whole constraints landscape of software completely. Hints of this can already be seen, e.g. in the rising momentum behind porting C to Rust or the growing interest in upgrading legacy code bases in COBOL or etc. In particular, LLMs are *especially* good at translation compared to de-novo generation because 1) the original code base acts as a kind of highly detailed prompt, and 2) as a reference to write concrete tests with respect to. That said, even Rust is nowhere near optimal for LLMs as a target language. What kind of language is optimal? What concessions (if any) are still carved out for humans? Incredibly interesting new questions and opportunities. It feels likely that we'll end up re-writing large fractions of all software ever written many times over.
Thomas Wolf@Thom_Wolf

Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end):

Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic.

End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language.² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional—it's essential.

The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks—between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high-performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement learning environments. Expect this to reshape which languages dominate.

Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written—and perhaps more importantly, read—by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter—it will be decisive.

The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. There may well be an optimal programming language for LLMs—and there's no reason to assume it will resemble the ones humans have converged on.

TL;DR:
- Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic
- Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential
- Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics
- Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive
- New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on

¹ x.com/mntruell/statu…
² x.com/anthropicai/st…
³ wesmckinney.com/blog/agent-erg…
⁴ github.com/tailwindlabs/t… (#issuecomment-3717222957)

699 replies · 651 reposts · 8K likes · 1.2M views
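Karpathy's second point — the original code doubles as a reference to write concrete tests against — can be made mechanical: check the translated function against the legacy one over many generated inputs. A minimal sketch; `legacy_checksum` and `translated_checksum` are hypothetical stand-ins for the old and rewritten code:

```python
import random

def legacy_checksum(data: bytes) -> int:
    """Stand-in for the original implementation, used as the test oracle."""
    total = 0
    for b in data:
        total = (total * 31 + b) % 65521
    return total

def translated_checksum(data: bytes) -> int:
    """Stand-in for the LLM-translated rewrite; must agree with the oracle."""
    total = 0
    for b in data:
        total = (total * 31 + b) % 65521
    return total

def check_equivalence(trials: int = 1000) -> bool:
    """Property-style differential test: random inputs, compare both versions."""
    rng = random.Random(0)  # seeded for reproducibility
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        if legacy_checksum(data) != translated_checksum(data):
            return False
    return True
```

This differential setup is what makes translation so much safer than de-novo generation: the oracle is free, and any divergence is a concrete failing input.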
Aaron Villalpando@aaronvi·
we are working on this. It looks almost like TypeScript. Excited to share more soon. The other interesting part we are solving is whether we can build better tooling for using LLMs in your program. So it's a two-fold problem: 1. A language designed for agents to code in 2. A language/tooling that helps you build apps using non-deterministic components
0 replies · 0 reposts · 0 likes · 351 views
Aaron Villalpando@aaronvi·
Arrival (the movie) in one sentence: True peace is accepting the bad things that will happen in your life.
0 replies · 0 reposts · 4 likes · 241 views
Aaron Villalpando@aaronvi·
@charliermarsh would you do the same thing for the Ruff SyntaxError diagnostics if you had the time? Noticed you use insta exclusively instead
0 replies · 0 reposts · 0 likes · 1.6K views
Charlie Marsh@charliermarsh·
The ty test suite is "written" in Markdown. Every code block here gets evaluated, with the comments representing expectations. We have almost 300 of these files that effectively read as detailed documentation for how ty behaves and how the Python typing spec and runtime work.
[image attached]
33 replies · 53 reposts · 977 likes · 127.2K views
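The mechanism Charlie describes — Markdown files whose code blocks are executed, with comments carrying the expectations — can be sketched as a toy harness. This is illustrative only and does not follow ty's actual mdtest format; here a `# expect:` comment asserts that the text appears in the block's stdout:

```python
import contextlib
import io
import re

TICKS = "`" * 3  # build fences programmatically to keep this example self-contained
FENCE = re.compile(TICKS + r"python\n(.*?)" + TICKS, re.DOTALL)

def run_markdown_tests(doc: str) -> list[str]:
    """Execute every python fence in a Markdown doc and check expectations.

    Lines like '# expect: <text>' assert that <text> appears in the
    block's stdout. Returns a list of failure messages (empty = all pass).
    """
    failures: list[str] = []
    for i, block in enumerate(FENCE.findall(doc)):
        expected = re.findall(r"#\s*expect:\s*(.+)", block)
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(compile(block, f"<block {i}>", "exec"), {})
        out = buf.getvalue()
        for exp in expected:
            if exp.strip() not in out:
                failures.append(f"block {i}: missing {exp.strip()!r}")
    return failures

# A tiny Markdown "test file": prose plus one executable block.
DOC = f"# addition\n\n{TICKS}python\nprint(1 + 1)  # expect: 2\n{TICKS}\n"
```

The appeal of the pattern is that the same file is both documentation and test suite: prose explains the behavior, and the fences keep the explanation honest.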