tanuja devadiga

3.9K posts

@TanujaDeva47734

Joined July 2023
3.1K Following · 102 Followers
tanuja devadiga retweeted
dex@dexhorthy
And when you don’t understand the subject and don’t have taste and judgement (e.g. a backend engineer making an iOS PR), that is the absolute worst way to use AI: you are vibing slop. The valuable and impressive use case with AI is: can you get it to honor your taste and judgement, without being such a micromanager that your actual throughput isn’t that much faster? This can be done, but it requires skill and the intuition that only comes from 1000+ hours working with an LLM.
Karri Saarinen@karrisaarinen

A common dynamic I observe with AI: it feels most impressive when you don’t know much about the subject, don’t care, or don’t have a clear idea of what you want. This applies across design, code, legal, and more. If I don’t know code very well, every piece of code it writes feels very impressive. Once you know what something should feel or look like, it becomes almost impossible to guide AI there. And you definitely can’t one-shot it.

9 replies · 10 reposts · 108 likes · 10.2K views
tanuja devadiga retweeted
Cursor@cursor_ai
GPT-5.5 is now available in Cursor! It's currently the top model on CursorBench at 72.8%. We've partnered with OpenAI to offer it for 50% off through May 2.
168 replies · 268 reposts · 5.7K likes · 474.8K views
tanuja devadiga retweeted
Matt Pocock@mattpocockuk
One thing I wish harnesses did by default: When opening a file, FIRST pre-compile the file and extract only the type signatures and comments for that file (with tsgo this would be instant). Then, if you want to see the implementation, only unwrap the functions you're interested in. Essentially .d.ts for the first step, .ts for the second. Would save a ton of tokens and allow agents to explore more aggressively.
45 replies · 11 reposts · 400 likes · 32.5K views
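The “signatures first, bodies on demand” idea above isn’t TypeScript-specific. As a rough illustration in Python rather than tsgo (using the stdlib `ast` module; `signatures_only` is a hypothetical helper, not part of any real harness), a tool can strip function bodies and keep only signatures and docstrings for an agent’s first look at a file:

```python
import ast

def signatures_only(source: str) -> str:
    """Return stub-style signatures plus docstrings, dropping implementations.

    Rough analogue of showing an agent a .d.ts before the full .ts file.
    """
    tree = ast.parse(source)
    stubs = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            stubs.append(f"def {node.name}({args}):")
            doc = ast.get_docstring(node)
            if doc:
                stubs.append(f'    """{doc}"""')
            stubs.append("    ...")  # body withheld until the agent asks for it
    return "\n".join(stubs)
```

Only when the agent decides a particular function matters would the harness unwrap its real body, which is where the token savings come from.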
tanuja devadiga retweeted
Mario Zechner@badlogicgames
this is probably the most important piece of software of the decade next to vllm and sglang. i'm not joking.
Georgi Gerganov@ggerganov

llama.cpp at 100k stars

Now that 90% of the code worldwide is being written by AI agents, I predict that within 3-6 months, 90% of all AI agents will be running locally with llama.cpp 😄

Jokes aside, I am going to use this small milestone as an opportunity to reflect a bit on the project and the state of AI from the perspective of local applications. There is a lot to say and discuss, and yet it feels less and less important to try to make a point. Opinions about the viability of local LLMs are strongly polarized, details are overlooked, and the scientific approach is lacking. Arguments are predominantly based on vibes and hype waves. One thing is clear though: local LLMs are used more and more. I expect this trend to continue, and 2026 will likely end up being one of the most important years for the local AI movement.

I admit that I didn't expect the agentic era to come so quickly to the local LLM space. One year ago, the available models were too computationally expensive for long-context tasks. There wasn't an obvious path towards meaningful agentic applications; the memory and compute requirements were huge. Last summer, with the release of gpt-oss, things started to change. It was the first time we saw a glimpse of tool calling that actually works well within the resource constraints of our daily devices. Later in the year, even better models were released, and by now useful local agentic workflows are a reality.

Comparing local vs hosted capabilities at a given moment in time is pointless. To try to put things into perspective:
- We don't need frontier intelligence to automate searches and send emails
- We don't need trillion-parameter models to summarize articles or technical documents
- We don't need massive GPU data centers to control our home appliances or turn the lights off in the garage

I believe there is a certain level of intelligence we as humans can comprehend and meaningfully utilize to improve our working process. Beyond that level, access to more intelligence becomes unnecessary at best and counterproductive at worst. I also believe that that level of useful artificial intelligence is completely within reach locally; it has always been just a matter of implementing the right software stack to bring it to the end user. With llama.cpp, I am confident that we continue to be on the right track of building that software stack!

The llama.cpp project is going stronger than ever. With more than 1500 contributors, the project keeps growing steadily. From a technical point of view, I think that llama.cpp + ggml is the only solution that actually makes sense. That is, the software stack must run efficiently on every possible device, hardware, and operating system. The technology is too important to be vendor-locked. It has to be developed in the open, by the community, together with the independent hardware vendors. This is the only right way to build something that will truly make a difference in the long run.

I won't try to convince you about what is currently and will be possible with local AI. We will just continue to build as usual. I am confident that after the smoke clears and we look objectively at what we have built together, the benefits will be obvious to everyone.

Big shoutout to all llama.cpp maintainers. I feel extremely lucky to be able to work together with so many talented contributors. Every day I learn something new, and I feel there is so much more cool stuff that we are going to build. Also, I am really thankful that the project continues to have reliable partners to support it! Cheers!

28 replies · 69 reposts · 1.5K likes · 173.4K views
tanuja devadiga retweeted
Matt Pocock@mattpocockuk
A talk I gave a few weeks ago. Software fundamentals matter more than ever. Here's why: youtube.com/watch?v=v4F1gF…
36 replies · 143 reposts · 1.3K likes · 296K views
tanuja devadiga retweeted
Yuchen Jin@Yuchenj_UW
> Vercel got pwned
> severe enough to notify law enforcement
> the only advice: “review your environment variables”
> what does that even mean?
> $10B company, and this is how you communicate

Cyber attacks are ramping up fast; starting to see why Anthropic is scared to release Mythos.
Vercel@vercel

We’ve identified a security incident that involved unauthorized access to certain internal Vercel systems, impacting a limited subset of customers. Please see our security bulletin: vercel.com/kb/bulletin/ve…

37 replies · 23 reposts · 839 likes · 95.1K views
tanuja devadiga retweeted
Aadi Kulshrestha@MankyDankyBanky
I trained a 12M-parameter LLM on my own ML framework using a Rust backend and CUDA kernels for flash attention, AdamW, and more. Wrote the full transformer architecture and BPE tokenizer from scratch.

The framework features:
- Custom CUDA kernels (Flash Attention, fused LayerNorm, fused GELU) for 3x increased throughput
- Automatic WebGPU fallback for non-NVIDIA devices
- TypeScript API with Rust compute backend
- One npm install to get started, prebuilt binaries for every platform

Try out the model for yourself: mni-ml.github.io/demos/transfor… Built with @_reesechong. Check out the repos and blog if you want to learn more. Shoutout to @modal for the compute credits allowing me to train on 2 A100 GPUs without going broke cc @sundeep @GavinSherry
131 replies · 258 reposts · 3.5K likes · 778.5K views
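For context on the “fused GELU” item above: the activation being fused is presumably, as in most frameworks, the standard tanh-approximation GELU; a fused kernel computes the whole expression in one pass instead of several elementwise ops. A plain-Python sketch of the per-element math (not code from this framework):

```python
import math

def gelu(x: float) -> float:
    # tanh approximation of GELU (Hendrycks & Gimpel):
    # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    inner = math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)
    return 0.5 * x * (1.0 + math.tanh(inner))
```

A fused kernel wins because the whole expression runs in one GPU kernel launch, with one read and one write per element, instead of materializing an intermediate tensor for each op.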
tanuja devadiga retweeted
Teknium 🪽@Teknium
Welcome to the crew!
sprmn.base.eth@sprmn2024

I started contributing to @NousResearch Hermes Agent by doing one thing: reading the code. Then a small fix. Then another. Gateway platforms, skills, bug fixes... It kept going for a long time. Today I received the Developer role in the Nous Research Discord. 🎉 Special thanks to @Teknium for reviewing and valuing every contribution throughout this journey. For anyone thinking about contributing to open source: the best starting point is reading the code. The rest follows. github.com/NousResearch/h… 🤖

2 replies · 5 reposts · 178 likes · 11.6K views
tanuja devadiga retweeted
Ivan Velichko@iximiuz
3x playground uptime just landed at iximiuz Labs 🚀
- Work on any task for up to 24h
- Run sandboxed agents w/o interruption
- Take longer breaks while solving challenges or following course lessons without losing progress
1 reply · 7 reposts · 49 likes · 2.9K views
tanuja devadiga retweeted
Matthew Dabit@MattDabit
LLMs accelerate shipping. They don't replace thinking. Before you ship, review your code thoroughly. Think it through. Define success metrics upfront. AI hits a wall on something? Use it to explain, then master the concept yourself. A better human always beats a better prompt. Ship slop and your AI stays mediocre. Level up first. Your velocity and your customers will both win.
5 replies · 9 reposts · 102 likes · 2.5K views
tanuja devadiga retweeted
ClaudeDevs@ClaudeDevs
Some of you ran into Opus 4.7 refusing normal code edits with "this might be malware" warnings. That was a bug on our side, not the model being cautious. Older builds applied a stale safety prompt that Opus 4.7 doesn't need. Run claude update or relaunch the app.
169 replies · 170 reposts · 4.6K likes · 315.2K views
tanuja devadiga retweeted
Ivan Velichko@iximiuz
We just got our 200th challenge published 🚀 If you're learning Linux, containers, Kubernetes, or networking, check out our collection of practical problems at labs.iximiuz.com/challenges Learning by doing is the way! P.S. Many of these problems are completely free 😉
0 replies · 23 reposts · 150 likes · 6.6K views
tanuja devadiga retweeted
Ryan Mather@Flomerboy
🧵 My tips for getting the best results out of Claude Design! I’m on the verticals team at Anthropic which means I serve 7 different products. Claude Design makes it possible! 1. Set up your design system and your core screens. An hour of setup and refinement here is worth it
Claude@claudeai

Introducing Claude Design by Anthropic Labs: make prototypes, slides, and one-pagers by talking to Claude. Powered by Claude Opus 4.7, our most capable vision model. Available in research preview on the Pro, Max, Team, and Enterprise plans, rolling out throughout the day.

183 replies · 688 reposts · 8.8K likes · 1.5M views
tanuja devadiga retweeted
Eden Chan@edenchan
Do what feels like play to you, but work to others. The most ambitious people know how to have fun. Fun is what makes ambition sustainable, and ambition is what makes it fun.
9 replies · 22 reposts · 226 likes · 15.3K views
tanuja devadiga retweeted
Thariq@trq212
I've got a lot to say here! I'll post a guide on how to prompt with Opus 4.7 as well as talk about a personal project I've been working on using it. I hope you enjoy working with Opus 4.7 and getting a feel for it.
33 replies · 2 reposts · 259 likes · 14.4K views
tanuja devadiga retweeted
송준 Jun Song@songjunkr
Opus 4.7 token test: due to tokenizer differences, it uses 2x as many tokens as Gemini, and 50% more than Opus 4.6. At the same usage limits, this effectively makes the model 50% more expensive.
88 replies · 237 reposts · 2.6K likes · 249.2K views
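The arithmetic behind that 50% claim is worth spelling out: at a fixed per-token price (or a fixed token budget), a tokenizer that emits more tokens for the same text scales cost linearly. A toy sketch with hypothetical numbers (`job_cost` and the $10/1M price are illustrative, not real pricing):

```python
def job_cost(base_tokens: int, tokenizer_ratio: float, price_per_mtok: float) -> float:
    """Cost of one job when a model's tokenizer emits `tokenizer_ratio`
    times as many tokens as some baseline tokenizer for the same text."""
    return base_tokens * tokenizer_ratio * price_per_mtok / 1_000_000

# A job that is 1M tokens at baseline, at a hypothetical $10 per 1M tokens:
# the same text through a 1.5x tokenizer costs 1.5x as much ($15 vs $10).
```

The same linearity applies to rate limits: a fixed tokens-per-month cap buys 1/1.5 ≈ 67% as much actual text if every request is tokenized 50% longer.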