Amit Roy

118 posts

@AmitRoy7781

Seeking Research Internship for Summer 2026 | #LLMReasoning #LLMAgents #LLMs | PhD Student @PurdueCS | Applied Scientist Intern '25 @Amazon |

United States · Joined October 2015
3.6K Following · 340 Followers
Amit Roy@AmitRoy7781·
@ShengzheXu Hi Shengzhe Xu, I am Amit Roy, PhD student at Purdue CS. I have sent an email to you with my resume.
Shengzhe Xu@ShengzheXu·
Last year, our team and interns published 10+ papers at ICML, NeurIPS, SOSP, AISTATS, and MLSys. If you're interested, email your resume to: shengzx@amazon.com
Shengzhe Xu@ShengzheXu·
Hello folks, we at AWS Annapurna Labs Science team still have 1–2 intern openings for Summer 2026, and we also offer Fall 2026 positions. We're looking for PhD candidates excited to work on cutting-edge research across LLM + Systems + RL, including:
Amit Roy reposted
Transluce@TransluceAI·
Can LMs learn to faithfully describe their internal features and mechanisms? In our new paper led by Research Fellow @belindazli, we find that they can—and that models explain themselves better than other models do.
Amit Roy reposted
Spencer Baggins@bigaiguy·
🚨 Before you build your first AI agent, learn these 3 fundamentals: LLMs, RAG, and Tool Use. Because if you don't know how they connect, you're just building chaos with a fancy wrapper. Let's break it down:

1. LLM (Large Language Model). This is the brain. It understands language, generates text, and reasons through problems. But it has no memory, no awareness, and no access to real data unless you give it some. On its own, it's just a really smart parrot.

2. RAG (Retrieval-Augmented Generation). This is the memory + knowledge system. It helps your LLM pull in external information (like your docs, Notion, or product data). Without RAG, your agent "knows" nothing beyond its training cutoff. With it, it becomes context-aware. Think of RAG as the agent's research assistant.

3. Tool Use. This is the hands. Tool use allows your agent to actually take action: run code, call APIs, send emails, update databases. Without tools, your agent can only talk. With tools, it can do.

Now here's how they connect: LLM = thinks. RAG = remembers. Tools = act. Put them together and you have an intelligent system that can reason, recall, and execute. That's what an AI agent actually is: not just chat, not just automation, but thinking automation that understands context, retrieves what it needs, and acts with purpose.

Most people try to skip these fundamentals, then wonder why their agents break, hallucinate, or go off the rails. Learn this stack. Master this loop. Everything else in AI builds on top of it.
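The think/remember/act split described in the thread can be wired together in a minimal sketch. Everything here is a hypothetical stand-in, not any specific framework: `fake_llm` stands in for a real model API, `retrieve` is a naive keyword retriever standing in for a vector store, and `TOOLS` is a toy tool registry.

```python
# Minimal sketch of the LLM + RAG + Tool Use stack from the thread.
# All names here (fake_llm, retrieve, TOOLS, agent) are illustrative stand-ins.

def retrieve(query, docs, k=1):
    """RAG step ("remembers"): rank docs by naive keyword overlap with the query."""
    overlap = lambda d: len(set(query.lower().split()) & set(d.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

# Tool registry ("acts"): callable actions the agent is allowed to take.
TOOLS = {
    "add": lambda a, b: a + b,
}

def fake_llm(prompt):
    """Stand-in brain ("thinks"): returns either a tool call or a final answer."""
    if "add" in prompt:
        return {"tool": "add", "args": (2, 3)}
    return {"answer": f"Based on context: {prompt[:40]}"}

def agent(query, docs):
    context = retrieve(query, docs)                          # RAG: remember
    decision = fake_llm(query + " | " + " ".join(context))   # LLM: think
    if "tool" in decision:                                   # Tools: act
        result = TOOLS[decision["tool"]](*decision["args"])
        return f"tool result: {result}"
    return decision["answer"]
```

The loop is the point, not the components: the retriever feeds context into the model call, and the model's output either answers directly or dispatches to a tool.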
Amit Roy reposted
Alex Hughes@alxnderhughes·
I finally understand the difference between LLMs, RAG, and AI Agents. After two years of building production AI systems, I realized most people are treating them like competing tools when they're actually three layers of the same intelligence stack.

1. The LLM is the brain. It's the reasoning engine. It understands language, writes, explains, and synthesizes ideas better than any system before it. But it's frozen in time. GPT-4, for example, knows nothing past its last training update. Ask it about yesterday's events and it'll confidently make something up. LLMs can think, but they're disconnected from the present.

2. RAG is the memory. It's what connects that frozen brain to live knowledge. Instead of retraining the model, RAG retrieves fresh information from your company's data, APIs, or the web and feeds it to the LLM as context. Now the model reasons over real, up-to-date facts rather than outdated patterns. The best part? You can trace exactly which documents shaped each answer. It's the difference between guessing and knowing.

3. AI Agents are the decision-makers. They wrap a control loop around the system. The agent perceives goals, plans actions, executes tasks, and reflects on the outcome. It's not just answering a question, it's doing the work. Think of an AI that researches, drafts a report, sends an email, and iterates on feedback, all autonomously. That's what an agent does.

Most "AI" demos stop at the LLM stage. Real production systems combine all three: the LLM for reasoning, RAG for accuracy, and the Agent for autonomy. Use LLMs for pure thinking tasks: writing, summarizing, explaining. Add RAG when precision and truth matter, like referencing internal documents or specialized data. Deploy Agents when you need end-to-end action systems that decide and operate without manual input.

The future of AI isn't one layer beating the others. It's about architecting all three together. LLMs think. RAG remembers. Agents act. That's the real intelligence stack.
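The "control loop" the thread attributes to agents (perceive goals, plan, act, reflect) can be sketched in a few lines. This is a toy illustration under stated assumptions: `plan` stands in for an LLM planner call, and the fixed `research`/`draft`/`send` step list is invented for the example.

```python
# Hypothetical sketch of an agent control loop: plan -> act -> reflect, repeat.
# plan() is a stand-in for a model call; the step names are illustrative only.

def plan(goal, history):
    """Stand-in planner: pick the next unfinished step, or None when done."""
    steps = ["research", "draft", "send"]
    done = [step for step, _outcome in history]
    for step in steps:
        if step not in done:
            return step
    return None  # goal satisfied

def act(step):
    """Stand-in executor: in a real agent this would call a tool or API."""
    return f"{step} completed"

def run_agent(goal, max_iters=10):
    history = []
    for _ in range(max_iters):             # the control loop
        step = plan(goal, history)         # plan the next action
        if step is None:                   # reflect: nothing left to do
            break
        history.append((step, act(step)))  # act, then record the outcome
    return history
```

The distinction the thread draws falls out of the structure: a bare LLM call is one pass through `plan`, while the agent is the loop that keeps planning against its own recorded outcomes until the goal is met (with `max_iters` as a guard against running forever).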
Amit Roy reposted
Hasan Toor@hasantoxr·
I finally understand how large language models actually work, after reading the 2025 textbook "Foundations of LLMs". It blew my mind and cleared up years of confusion. Here's everything I learned (in plain English):
Amit Roy reposted
ℏεsam@Hesamation·
A senior engineer at Google just dropped a free 400-page book on Google Docs for review: Agentic Design Patterns. The table of contents looks like everything you need to know about agents, plus code: advanced prompt techniques, multi-agent patterns, tool use and MCP, you name it.
Nathan Lambert@natolambert·
WebGPT paper looking so ahead of its time now with o3 and AI search.
Md Musfiqur Rahman@musfiq_shohan·
Updates! 🥳🎉🎉 1. Got my first PhD work accepted at @icmlconf 2024: we proposed a modular training approach for deep causal generative models for high-dimensional interventional sampling. 2. Joined @genentech as a summer intern to contribute to Causal ML projects. Wish me luck!
Shamik Roy@ShamiikRoy·
Delighted to share our latest paper from @AmazonScience, "FLAP: Flow Adhering Planning with Constrained Decoding in LLMs", which just got accepted to #NAACL2024 main! Work done in collaboration with my exceptional teammates @sailiks , @DX89B , Saab, and Arshit. 1/🧵
Masudur Rahman@masud99r·
Excited to share that our paper "Natural Language-based State Representation in Deep RL" got into #NAACL2024 conference. We investigate how the image understanding of VLM and the capabilities of LLM can be leveraged to enhance generalization in RL when learning from images.
Amit Roy@AmitRoy7781·
Thanks to the pygod (pygod.org) team for adding GAD-NR to pygod. Feel free to use it as a baseline architecture on the benchmark datasets for node anomaly detection in static attributed graphs: docs.pygod.org/en/latest/gene…
Amit Roy@AmitRoy7781·
Exciting News! The first work during my PhD journey titled "GAD-NR: Graph Anomaly Detection via Neighborhood Reconstruction" has been ACCEPTED as a regular paper at The 17th ACM International Conference on Web Search and Data Mining, WSDM. (1/n)
Md Ashiqur Rahman@Ashiq_Rahman_s·
Excited to share our work at #NeurIPS2023🧠 on scale equivariance! 🚀 We introduce an end-to-end scale-equivariant deep net with Fourier layers, achieving perfect equivariance. Please drop by our poster on Wednesday at 10:45 AM (Poster #924). Project: ashiq24.github.io/Scale_Equivari…