agent | ai engineer, full stack dev, software dev

7.6K posts

@AndrewAI

building autonomous ai agents & fndn models founder @supernormal ex tech lead @coinbase ex swe @microsoft @snapchat @ibm @blackberry investor @spacex @consensys

ce@uwaterloo (entrance rank 1) · Joined June 2011
165 Following · 58.4K Followers
Pinned Tweet
agent | ai engineer, full stack dev, software dev
I’m joining Coinbase (the biggest crypto company in the US) as a Tech Lead. There are countless phenomenal big tech companies and startups out in the industry right now, and it’s always so difficult saying “next time” to other amazing opportunities. #NFT #Coinbase @tezos
agent | ai engineer, full stack dev, software dev
@bitwise0X2A building a synthesis model (LSTM + Graves soft-window attention + GMM output), and your past work in this space overlaps. could you possibly shoot me a dm to trade notes on attention tuning, multimodal stroke distributions, and deployment tricks like partial int8 quantization?
bitwise
bitwise@bitwise0X2A·
@gregpr07 Maybe check out DeepWiki with Deep Research enabled
Gregor Zunic
Gregor Zunic@gregpr07·
How can you search stuff in the Chromium repository? It’s 200GB of code. Is there a tool that allows me to RAG that mf?
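The usual answer to "RAG a giant repo" is a chunk-index-retrieve pipeline: split files into overlapping chunks, score each chunk against the query, and feed the top hits to the model. A toy stdlib sketch of that loop follows; everything here (the tiny `source` string, the bag-of-words scorer) is illustrative, and a real setup would swap the scorer for a code-embedding model plus a vector index.

```python
# Toy sketch of chunk-and-retrieve over source code.
# Real pipelines replace the bag-of-words cosine scoring below with a
# code embedding model plus a vector index; the loop shape is the same.
import math
import re
from collections import Counter

def chunk(text, size=6):
    """Split source into overlapping windows of `size` lines."""
    lines = text.splitlines()
    return [
        "\n".join(lines[i:i + size])
        for i in range(0, max(len(lines) - size + 1, 1), size // 2)
    ]

def vectorize(s):
    """Bag-of-words over identifier-like tokens."""
    return Counter(re.findall(r"[A-Za-z_]\w+", s.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    qv = vectorize(query)
    return sorted(chunks, key=lambda c: cosine(qv, vectorize(c)), reverse=True)[:k]

# hypothetical two-function "codebase" for illustration
source = "void ParseURL() {\n  // parse the url scheme\n}\nint DrawFrame() {\n  // rasterize layers\n}\n"
top = retrieve("ParseURL scheme", chunk(source, size=3))
```

At Chromium scale the interesting problems are exactly the ones this sketch hides: incremental re-indexing, chunking along syntax boundaries, and keeping the index smaller than the 200GB tree.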
agent | ai engineer, full stack dev, software dev retweeted
idan shenfeld
idan shenfeld@IdanShenfeld·
People keep saying 2026 will be the year of continual learning. But there are still major technical challenges to making it a reality. Today we take the next step towards that goal — a new on-policy learning algorithm, suitable for continual learning! (1/n)
ripleys
ripleys@0xripleys·
ive been vibecoding a bunch recently and it's super fun. just built a cli for @OstiumLabs that im trying to use as a tool in my agent. took like a couple hours with minimal code changes by myself (might have actually been zero). github.com/0xripleys/osti…
Karan Jagtiani
Karan Jagtiani@karanjagtiani04·
@AndrewAI Memory management is a game changer. Excited to see how it shifts the LLM landscape
agent | ai engineer, full stack dev, software dev retweeted
Kimon Fountoulakis
Kimon Fountoulakis@kfountou·
GPT-5.2 solves our COLT 2022 open problem, “Running Time Complexity of Accelerated L1-Regularized PageRank,” using a standard accelerated gradient algorithm and a complementarity margin assumption.

Link to the open problem: proceedings.mlr.press/v178/open-prob…

All proofs were generated by GPT-5.2 Pro. The key bounds on the algorithm’s total work (in the COLT’22 open-problem setting) have been auto-formalized using a combination of GPT-5.2 Pro, @HarmonicMath's Aristotle, and Gemini 3 Pro (High) on Antigravity.

Link to the proof: github.com/kfoynt/acceler…
Link to the Lean code: github.com/kfoynt/acceler…
Link to the informalization of the Lean code: github.com/kfoynt/acceler…
Link to my GPT-5.2 prompts: chatgpt.com/share/693e3ce6…

In addition to the formalization of the main result, I checked the proof myself twice. I hope I didn’t miss anything, but if I did, please let me know and I will try to fix it.

Story behind the paper and relevant work

In 2016, I worked on the convergence rate of the Iterative Soft-Thresholding Algorithm (ISTA) for l1-regularized PageRank. Link to the corresponding paper: link.springer.com/article/10.100… Surprisingly, the running time of the algorithm depends only on the number of non-zero nodes at optimality. It was only natural to ask the same question for accelerated methods, such as FISTA. However, we quickly realized that FISTA activates more nodes than the number of non-zeros at optimality, even though it eventually converges to the same active set. In practice, we would still observe that FISTA is fast. Link to empirical work: uwspace.uwaterloo.ca/items/693b002d…

I tried for about three months to bound the total work of FISTA and other accelerated algorithms, and from time to time I would come back to the problem while I was a postdoctoral fellow. Eventually, I gave up. I gave it another try around 2021, and I failed again. I asked my excellent former student, Shenghao Yang, and he also failed, unfortunately. I asked a couple of prominent researchers whether they thought the problem was solvable; they quickly said that it seemed hard. We ended up publishing it as an open problem at COLT 2022.

In 2023, David Martínez-Rubio et al. provided the first successful solution. Their solution is “orthogonal” to what was proved by GPT-5.2. Link to their paper: proceedings.mlr.press/v195/martinez-… I loved their work btw; I also met David in person at ICML 2024, one of the few ML conferences I ever attended. Their proposed accelerated algorithm is not necessarily faster than ISTA; however, it does offer a new trade-off between the teleportation parameter of PageRank and the total work per iteration. More importantly, the proposed method isn’t necessarily practical, since it involves solving an expensive subproblem. To be fair, in the COLT 2022 problem, we didn’t impose the additional hard constraint of using standard accelerated methods. The problem was posed as a theoretical problem.

The solution proved by GPT-5.2 establishes acceleration for the standard FISTA algorithm, which performs only one gradient computation per iteration. It also offers a clean parameterization of the total work with respect to a complementarity margin, which, for certain graph structures, shows a clear speed-up compared to ISTA.

In 2024, Zhou et al. (dl.acm.org/doi/10.5555/37…) gave it another go. However, in my view, their work has important drawbacks. In particular, their guarantees for accelerated localized methods (e.g., localized Chebyshev / Heavy-Ball) assume a condition on the geometric mean of certain active-ratio factors (described as Θ(\sqrt{α})) in order to obtain an accelerated bound. Two distinctions matter for our setting. First, their accelerated runtime bounds are parameterized by evolving-set quantities and a residual-ratio assumption, which can be evaluated during a run but is not typically interpretable or verifiable a priori from graph structure alone. The solution by GPT-5.2 instead provides an explicit transient-phase bound in terms of a standard optimization-structure condition, and converts this directly into a total-work bound. Second, they explicitly note that FISTA-style acceleration violates the monotonicity property needed to bound the per-iteration accessed volume, and emphasize that guaranteeing intermediate sparsity in accelerated frameworks is challenging. The margin-based analysis by GPT-5.2 directly targets this gap: even without any monotonicity of intermediate supports, GPT-5.2 bounded how much spurious activation can occur before the iterates enter a neighborhood of the unique minimizer, thereby yielding a concrete locality certificate for the accelerated proximal-gradient trajectory.

Since 2024, every time OpenAI or Google released a new major model, I would give it a go. This time, with GPT-5.2, it seems to have worked.
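For readers unfamiliar with the algorithm family in that thread: FISTA is accelerated proximal gradient with soft-thresholding, one gradient evaluation per iteration plus a momentum step. A minimal stdlib sketch on a toy l1-regularized least-squares problem follows; this is not the PageRank setting or the GPT-5.2 proof, and the example matrix and parameters are made up for illustration.

```python
# Minimal FISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# One gradient per iteration plus a momentum extrapolation; l1-regularized
# PageRank uses a graph-structured quadratic in place of this dense toy.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied elementwise."""
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

def fista(A, b, lam, L, iters=200):
    """L is a Lipschitz constant of the gradient of the smooth part (||A^T A||)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n           # current iterate
    y = x[:]                # extrapolated point where the gradient is taken
    t = 1.0                 # momentum parameter
    for _ in range(iters):
        # residual r = Ay - b, then gradient A^T r of the smooth part at y
        r = [sum(A[i][j] * y[j] for j in range(n)) - b[i] for i in range(m)]
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # proximal gradient step from y
        x_new = soft_threshold([y[j] - grad[j] / L for j in range(n)], lam / L)
        # Nesterov momentum update
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = [x_new[j] + ((t - 1.0) / t_new) * (x_new[j] - x[j]) for j in range(n)]
        x, t = x_new, t_new
    return x

# identity A: the optimum is simply soft_threshold(b, lam) = [0.9, 0.0]
A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 0.05]
x = fista(A, b, lam=0.1, L=1.0)
```

The open problem concerns exactly the behavior this sketch makes visible: the momentum step moves `y` past the new iterate, so intermediate supports need not shrink monotonically, which is what makes bounding FISTA's total work on graphs hard.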
agent | ai engineer, full stack dev, software dev
lstm + attention + gmm is interesting for sequential stroke prediction, especially with the gmm output layer handling multimodal distributions. just out of curiosity, how did you tune the attention mechanism to focus on relevant input sequences during synthesis? any specific tweaks for handling long traces, or did you stick close to vanilla scaled dot product?
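For context, the Graves (2013) soft-window attention mentioned in this exchange is not dot-product attention: it is a mixture of K Gaussians over character positions, whose centers kappa only ever move forward, so the window slides monotonically along the text as strokes are generated. A minimal stdlib sketch (toy parameters; in the real model alpha, beta, and the kappa increments are exponentiated network outputs):

```python
# Sketch of Graves-style soft-window attention: a mixture of K Gaussians
# over character positions u, with window centers accumulated monotonically.
import math

def soft_window(alpha, beta, kappa_prev, kappa_delta, text_len):
    """One attention step; returns (phi, new kappa).
    alpha: component importances, beta: widths, kappa_delta: positive
    increments, so the centers kappa can only move forward in the text."""
    K = len(alpha)
    kappa = [kappa_prev[k] + kappa_delta[k] for k in range(K)]
    phi = [
        sum(alpha[k] * math.exp(-beta[k] * (kappa[k] - u) ** 2) for k in range(K))
        for u in range(text_len)
    ]
    return phi, kappa

# toy step: one component whose center lands on character 2 of a 5-char text
phi, kappa = soft_window(alpha=[1.0], beta=[1.0], kappa_prev=[1.5],
                         kappa_delta=[0.5], text_len=5)
peak = max(range(5), key=lambda u: phi[u])  # window weight peaks at u = 2
```

Tuning this mechanism is mostly about the kappa increments (how fast the window advances per stroke point) and beta (how sharply it focuses), which is a different knob set from scaled dot-product attention.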
bitwise
bitwise@bitwise0X2A·
@tylerangert first tried fp16, but it didn't make inference that much faster, so tried int8 using priming strokes. also didn't apply quantization to the entire model, only to the lstm cell and linear layers.
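The "partial int8" idea above is to quantize only the weight-heavy modules (the LSTM cell and linear layers) and leave the rest in float. Per tensor, the underlying arithmetic is plain affine quantization; a stdlib sketch, with a made-up weight vector for illustration:

```python
# Symmetric per-tensor int8 quantization: map floats into [-128, 127]
# with a single scale, then dequantize to see the reconstruction error.

def quantize_int8(weights):
    """Return (int8 values, scale) for a symmetric per-tensor scheme."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid zero scale
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.003, 1.0]      # hypothetical weight tensor
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# rounding bounds the per-element error by scale / 2
err = max(abs(a - b) for a, b in zip(w, w_hat))
```

In PyTorch, this restriction to specific module types is what dynamic quantization expresses, e.g. `torch.ao.quantization.quantize_dynamic(model, {nn.LSTM, nn.Linear}, dtype=torch.qint8)`, which matches the "only the lstm cell and linear layers" choice described above.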
bitwise
bitwise@bitwise0X2A·
built a similar handwriting synthesis model (LSTM + attention + GMM) a while ago. if you wanna build one yourself, here’s the path (ill drop great resources for each step in the reply):
1. start w/ basic NNs
2. move to RNNs/LSTMs
3. learn Gaussian Mixture Models
4. read Graves (2013) once, then again while building. deep-dive when you hit gaps
focus on learning by doing and deep-dive study only when you need it.
PyTorch ref: github.com/h3nock/scripti…
demo: scriptify-web.vercel.app
(posting this into the void but maybe it helps someone lol)
Tyler Angert@tylerangert

training a tiny handwriting synthesis model. this is the latest checkpoint at 44%. so cool

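Steps 3-4 of the path above converge on a mixture-density output layer: the network emits mixture weights pi, means mu, and standard deviations sigma, and training minimizes the negative log-likelihood of the next pen offset under that mixture. A 1-D stdlib sketch of the loss (Graves 2013 actually uses bivariate Gaussians plus a pen-up Bernoulli; the parameter values here are made up):

```python
# Mixture-density negative log-likelihood for a 1-D Gaussian mixture:
# the training loss behind the GMM output layer in handwriting synthesis.
import math

def mdn_nll(pi, mu, sigma, target):
    """-log p(target) under the mixture sum_k pi[k] * N(mu[k], sigma[k]^2)."""
    density = sum(
        p * math.exp(-0.5 * ((target - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for p, m, s in zip(pi, mu, sigma)
    )
    return -math.log(density)

# a target near the high-weight component should incur a lower loss
near = mdn_nll(pi=[0.7, 0.3], mu=[0.0, 5.0], sigma=[1.0, 1.0], target=0.1)
far = mdn_nll(pi=[0.7, 0.3], mu=[0.0, 5.0], sigma=[1.0, 1.0], target=2.5)
```

The mixture is what lets the model represent multimodal stroke distributions: at a branch point (e.g. two plausible next strokes), a single Gaussian would average the modes, while the mixture keeps both.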
agent | ai engineer, full stack dev, software dev
but jobs? 85m gone, 97m new by '25; bias widening gaps. risks?
- existential (bio-threats)
- cyber (prompt hacks)
- bubbles: vcs hyper-focused, four ai stocks at 60% of the s&p
tldr: verticals for startup wins, horizontals for giants. bet on moats: data, switching costs. seen it in crypto/tech over the last decade: builders trump hype (wink)
agent | ai engineer, full stack dev, software dev
6/ saas shift: legacy horizontals bolt on ai, while ai-first vertical saas unlocks specialized value. startup edges worth picking out: underhyped areas like energy strain (think data centers hitting the grid), inference compute, and small edge models dodging the compute wars. implications: productivity booms, skill gaps narrow.
agent | ai engineer, full stack dev, software dev
1/ been pondering ai's path forward — i think hype's fading, but we're at a tipping point with real-world shifts reshaping sectors. perhaps future's not all rosy gains; massive startup plays exist where behemoths like @openai and @perplexity can't dominate. thread on upsides, fallout, and investor sweet spots in vertical vs horizontal ai, saas gaps 🧵
agent | ai engineer, full stack dev, software dev@AndrewAI

some paradox in ai: despite a record $44b poured into the sector in h1 2025, an mit study found that 95% of generative ai projects are failing to deliver measurable results in enterprises. i suspect this "learning gap" is due to over-reliance on generic, horizontal llms. 1/n

agent | ai engineer, full stack dev, software dev
foundation models (anthropic, openai, et al) will keep commoditizing general capabilities. i suspect the real value for startups will come from verticals that own workflows, regulatory trust, and operational data. (tl;dr: models are table stakes, not the moat.) 1/N
Kween_
Kween_@Emmacrypto_33·
@andrewai verticalized + domain specific AI bout to eat 🔥
agent | ai engineer, full stack dev, software dev
some paradox in ai: despite a record $44b poured into the sector in h1 2025, an mit study found that 95% of generative ai projects are failing to deliver measurable results in enterprises. i suspect this "learning gap" is due to over-reliance on generic, horizontal llms. 1/n
agent | ai engineer, full stack dev, software dev@AndrewAI

foundation models (anthropic, openai, et al) will keep commoditizing general capabilities. i suspect the real value for startups will come from verticals that own workflows, regulatory trust, and operational data. (tl;dr: models are table stakes, not the moat.) 1/N

agent | ai engineer, full stack dev, software dev
the future isn't in generic ai tools, i imagine. it’s in specialized, vertical solutions that solve high-value problems with clear, measurable roi, as demonstrated by the success of companies like frame ai and causaly in their respective industries.
agent | ai engineer, full stack dev, software dev
tend to think the investment thesis is maturing too. vcs are now prioritizing companies with strong fundamentals and a clear path to repeatable revenue. governance and compliance with regulations like the eu ai act are no longer burdens but strategic differentiators that de-risk m&a.