Praneeth
@PraneethreddyA6

Engineer | Generalist | Product @Cummins Filtration | @UMassAmherst

United States · Joined December 2019
657 Following · 147 Followers
399 posts
Praneeth @PraneethreddyA6
@i2cjak McMaster for all options. Amazon for a good price.
0 replies · 0 reposts · 1 like · 580 views
i2cjak @i2cjak
WHAT IS THE SITE I BUY ALUMINUM EXTRUSION FROM NOT 8020 HELP
15 replies · 0 reposts · 66 likes · 9.1K views
Praneeth retweeted
SpaceX @SpaceX
SpaceX AI and @cursor_ai are now working closely together to create the world’s best coding and knowledge work AI. The combination of Cursor’s leading product and distribution to expert software engineers with SpaceX’s million H100 equivalent Colossus training supercomputer will allow us to build the world’s most useful models. Cursor has also given SpaceX the right to acquire Cursor later this year for $60 billion or pay $10 billion for our work together.
2.4K replies · 5.1K reposts · 38.4K likes · 20.8M views
Praneeth retweeted
Jeff Bezos @JeffBezos
ZXX
11.5K replies · 14.7K reposts · 204.8K likes · 19M views
Praneeth retweeted
NASA @NASA
Welcome home Reid, Victor, Christina, and Jeremy! 🫶 The Artemis II astronauts have splashed down at 8:07pm ET (0007 UTC April 11), bringing their historic 10-day mission around the Moon to an end.
6.8K replies · 107.4K reposts · 389K likes · 86M views
Teja Karlapudi @teja2495
I hated the ICICI NRO/NRE banking experience so much that I closed my account today. I already knew Indian banking isn’t great, but I assumed a top bank like ICICI would be manageable. I was wrong. The experience was terrible. I don’t understand how people regularly use these accounts.

- The app fails to work most of the time.
- ICICI representatives sometimes call me in the middle of the night just to check if everything's okay.
- They ask me to redo KYC every year. That’s fine, but the process is outdated. I had to download forms, fill them manually, sign with ink, print documents, and attest everything. It feels like 2015.
- I get account closure warnings every six months if I don’t use it. This might make sense for regular accounts, but not for NRE accounts. I live in the US and don’t use it often. These limits shouldn’t apply as long as there’s sufficient balance. To keep it active, I had to make a ₹1 transaction. Since the app doesn’t work, I had to log in through the website.
- Until recently, they didn’t allow password manager autofill. I had to manually type a 14-character random password every time.

There are many more issues I’m probably forgetting. Overall, a very frustrating experience. If you’ve had a good experience with any other Indian banks for NRE/NRO accounts, please let me know.
274 replies · 41 reposts · 535 likes · 98.7K views
Praneeth retweeted
The White House @WhiteHouse
THE ARTEMIS II ECLIPSE. April 6, 2026. Totality, beyond Earth. From lunar orbit, the Moon eclipses the Sun, revealing a view few in human history have ever witnessed. Photo: NASA
The White House tweet media
1.9K replies · 15.1K reposts · 81.5K likes · 5.8M views
Praneeth retweeted
The White House @WhiteHouse
EARTHSET. April 6, 2026. Humanity, from the other side. First photo from the far side of the Moon. Captured from Orion as Earth dips beyond the lunar horizon. Photo: NASA
The White House tweet media
2.7K replies · 17.3K reposts · 98.3K likes · 5.6M views
Praneeth retweeted
Physics & Astronomy Zone @zone_astronomy
The highest quality video of the moon was just released… this is so beautiful.
5.2K replies · 65.1K reposts · 332.7K likes · 11.3M views
Praneeth retweeted
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.9K replies · 7.1K reposts · 58.7K likes · 21M views
Praneeth retweeted
Tenobrus @tenobrus
at google this was known as "buying the gnome". there's like a billion tweets about this already but basically the story goes back in like 2005 or something they were building out their shopping search system, and it was working pretty well. except for the fact that if you searched for sneakers, the top result was a garden gnome. engineers were going crazy trying to fix the ranking bug, but eventually someone noticed that the gnome listing was on ebay, and there was only one of them, and it cost like $50. so they just bought the gnome and suddenly the listing was gone, problem solved. why bother fixing software issues when you can just change the world to fit your software instead?
Quoting Tenobrus @tenobrus:

if you're about to release a model that you know has the ability to reveal zerodays in every commonly used open source project you could delay release for a few years or spend another ten billion on alignment RL. or you could just secretly fix all the zerodays yourself first.

28 replies · 237 reposts · 6.1K likes · 360.1K views
Praneeth retweeted
roon @tszzl
“fake work” and “bullshit jobs” have been fantastically wrong and misleading for understanding the modern world. a much better understanding is of a global economy where minor skill differences and improvements lead to monumentally different outcomes, and the marginal hour of work has never been more measurable or useful

after the advent of even moderately effective talent allocation systems and the variability of reward based on effort and skill, people have engaged much harder in a red queen rat race across the world. this is why the Chinese ‘cram schools’ exist and why ‘yuppie striverism’ is a thing and why people trade off later family formation for working more so often. while overall work hours are slightly down, they are actually up for high earners (nber.org/digest/jul06/w…)

I see it in the marginal effect with my friends now after the advent of claude and codex: they are actually working harder now than they ever have before. this is due to a personal Jevons paradox where they see that the value of their time has increased dramatically, that they can get a lot more visible work done towards goals they care about than they used to

after requests from their customers the labs are doing things like inventing dispatch, which lets you monitor work and manipulate your computer from your phone, on top of prior changes like having always-on communications (slack). You hear about people launching codex jobs from their phone the moment they have an idea and reviewing them later

no clue how long this lasts but the most immediate impact of co-existing with the machine state is higher productivity and higher visibility which leads to more work hours
130 replies · 150 reposts · 2.5K likes · 308.6K views
Praneeth retweeted
Καλός @realKalos
In 2003, after a thorough study of the Mahabharata, Giampaolo Thomasetti began work on a large-scale project dedicated to it. After 12 years, he completed his collection of over 20 majestic paintings depicting the main moments of this great spiritual epic. 👇
Καλός tweet media
99 replies · 1.6K reposts · 9.1K likes · 755.7K views
Praneeth retweeted
anand mahindra @anandmahindra
In Thiruvananthapuram stands the Sree Padmanabhaswamy Temple, where architecture meets astronomy. On equinoxes, the setting sun aligns precisely with the temple’s structure, appearing through a sequence of windows in timed intervals, an event often described as “Suryasmaranam.”

Built centuries ago, but it reflects a sophisticated understanding of solar movement, geometry and orientation. Comforting that science and spirituality weren’t separate pursuits. They coexisted, and often reinforced each other.

I’m putting this firmly on my own urgent bucket list: to be there on one of the two equinox days. I missed the spring equinox just this past Friday. But I have two chances every year. Calendar marked.

#SundayWanderer
170 replies · 2K reposts · 17K likes · 314.7K views
Praneeth retweeted
Avi Chawla @_avichawla
Big release from Kimi!

They just released a new way to handle residual connections in Transformers.

In a standard Transformer, every sub-layer (attention or MLP) computes an output and adds it back to the input via a residual connection. If you consider this across 40+ layers, the hidden state at any layer is just the equal-weighted sum of all previous layer outputs. Every layer contributes with weight=1, so every layer gets equal importance.

This creates a problem called PreNorm dilution, where as the hidden state accumulates layer after layer, its magnitude grows linearly with depth. And any new layer's contribution gets progressively buried in the already-massive residual. This means deeper layers are then forced to produce increasingly large outputs just to have any influence, which destabilizes training.

Here's what the Kimi team observed and did:

RNNs compress all prior token information into a single state across time, leading to problems with handling long-range dependencies. And residual connections compress all prior layer information into a single state across depth. Transformers solved the first problem by replacing recurrence with attention. This was applied along the sequence dimension. Now they introduced Attention Residuals, which applies a similar idea to depth.

Instead of adding all previous layer outputs with a fixed weight of 1, each layer now uses softmax attention to selectively decide how much weight each previous layer's output should receive. So each layer gets a single learned query vector, and it attends over all previous layer outputs to compute a weighted combination. The weights are input-dependent, so different tokens can retrieve different layer representations based on what's actually useful. This is Full Attention Residuals (shown in the second diagram below).

But here's the practical problem with this idea. Full AttnRes requires keeping all layer outputs in memory and communicating them across pipeline stages during distributed training. To solve this, they introduce Block Attention Residuals (shown in the third diagram below).

The idea is to group consecutive layers into roughly 8 blocks. Within each block, layer outputs are summed via standard residuals. But across blocks, the attention mechanism selectively combines block-level representations. This drops memory from O(Ld) to O(Nd), where N is the number of blocks. Layers within the current block can also attend to the partial sum of what's been computed so far inside that block, so local information flow isn't lost. And the raw token embedding is always available as a separate source, which means any layer in the network can selectively reach back to the original input.

Results from the paper:
- Block AttnRes matches the loss of a baseline LLM trained with 1.25x more compute.
- Inference latency overhead is less than 2%, making it a practical drop-in replacement.
- On a 48B parameter Kimi Linear model (3B activated) trained on 1.4T tokens, it improved every benchmark they tested: GPQA-Diamond +7.5, Math +3.6, HumanEval +3.1, MMLU +1.1.

The residual connection has mostly been unchanged since ResNet in 2015. This might be the first modification that's both theoretically motivated and practically deployable at scale with negligible overhead.

More details in the post below by Kimi👇
____
Find me → @_avichawla
Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs.
Avi Chawla tweet media
Quoting Kimi.ai @Kimi_Moonshot:

Introducing 𝑨𝒕𝒕𝒆𝒏𝒕𝒊𝒐𝒏 𝑹𝒆𝒔𝒊𝒅𝒖𝒂𝒍𝒔: Rethinking depth-wise aggregation.

Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers.

🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth.
🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale.
🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead.
🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains.

🔗 Full report: github.com/MoonshotAI/Att…

81 replies · 213 reposts · 2.3K likes · 350.8K views
Praneeth retweeted
vittorio @IterIntellectus
this is actually insane

> be tech guy in australia
> adopt cancer riddled rescue dog, months to live
> not_going_to_give_you_up.mp4
> pay $3,000 to sequence her tumor DNA
> feed it to ChatGPT and AlphaFold
> zero background in biology
> identify mutated proteins, match them to drug targets
> design a custom mRNA cancer vaccine from scratch
> genomics professor is “gobsmacked” that some puppy lover did this on his own
> need ethics approval to administer it
> red tape takes longer than designing the vaccine
> 3 months, finally approved
> drive 10 hours to get rosie her first injection
> tumor halves
> coat gets glossy again
> dog is alive and happy
> professor: “if we can do this for a dog, why aren’t we rolling this out to humans?”

one man with a chatbot and $3,000 just outperformed the entire pharmaceutical discovery pipeline. we are going to cure so many diseases. I don’t think people realize how good things are going to get
vittorio tweet media (4 images)
Quoting Séb Krier @sebkrier:

This is wild. theaustralian.com.au/business/techn…

2.4K replies · 19.6K reposts · 117K likes · 17.6M views
Praneeth retweeted
Interesting things @awkwardgoogle
The unpredictability of the double pendulum.
359 replies · 1.2K reposts · 13.2K likes · 2.5M views