Thomas Chong
@cch_thomas

54 posts

AI Research Engineer @Beever_AI | @GoogleDevExpert in Cloud AI | HKUST | 🇨🇦

Joined February 2013
84 Following · 59 Followers
Thomas Chong retweeted
Beever AI @Beever_AI
Two weeks ago @karpathy posted about "LLM Knowledge Bases" — the idea that LLMs should maintain structured, evolving knowledge from your documents. 1.7M+ views. He said: "I think there is room here for an incredible new product instead of a hacky collection of scripts."

We've been building exactly that. Today we're open-sourcing Beever Atlas.

GitHub: github.com/Beever-AI/beev…

The difference: Karpathy's approach starts with manual file uploads. Beever Atlas starts with your team's chat. Slack, Discord, Teams, Telegram — the messy, unstructured conversations where 90% of organizational knowledge actually lives and dies.

Here's what it does:
- Connect your chat platform (self-service, takes 2 minutes)
- Ingestion pipeline extracts entities, facts, and relationships automatically
- Builds a Neo4j knowledge graph — not just text cross-references, actual typed relationships between people, projects, technologies, decisions
- Generates a living Wiki — DeepWiki-style, with topic hierarchies, concept maps, glossaries. Updates every sync.
- Ships as an MCP server — Claude, Cursor, any AI assistant can query your team's collective knowledge directly

From our internal deployment (4 Slack channels):
- 854 structured memories
- 1,899 entities
- 5,271 relationships
- 222 wiki entries auto-generated

What Karpathy built is single-user, requires Obsidian + CLI, and is text-only. Beever Atlas is multi-user, zero-install web UI, knowledge graph, MCP-native.

We built this at Beever AI, a Toronto-based research lab under Votee AI, because we needed it ourselves — our engineering team's context was scattered across Slack threads nobody reads. Now our agents can actually reason over what the team knows.

100% on-premise. Docker stack. Bring your own LLM via LiteLLM (Ollama, Gemma 4, whatever you run locally). Zero data leakage.

Turn your team's chat into a living wiki.

⭐ github.com/Beever-AI/beev…
💬 discord.gg/VshBCUUX
🌐 beever.ai

Shipped by the whole team:
Engineering — @jhkchan @cch_thomas @KaiYamYang1 @dantelok1111
Design — Adrian Leung
Comms & Media — @nghoihin

Beever AI is a Toronto-based research lab under @Votee_AI.
1 reply · 3 retweets · 9 likes · 362 views
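To make "actual typed relationships" concrete: once chat has been distilled into a Neo4j graph, an assistant can answer structural questions with a single Cypher query. Here is a minimal sketch using the official neo4j Python driver; the labels (Person, Project), the WORKS_ON relationship, and the connection details are illustrative assumptions, not Beever Atlas's published schema:

    # Sketch: ask a chat-derived Neo4j knowledge graph who works on a project.
    # Labels (Person, Project) and the WORKS_ON relationship are assumed for
    # illustration; they are not Beever Atlas's published schema.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))
    query = """
    MATCH (p:Person)-[:WORKS_ON]->(proj:Project {name: $project})
    RETURN p.name AS person
    """
    with driver.session() as session:
        for record in session.run(query, project="Atlas"):
            print(record["person"])
    driver.close()

A plain-text cross-reference can only say two names co-occur; a typed edge like WORKS_ON lets an MCP client answer who/what/when questions deterministically.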
Tejas Gawande @tejgw
Cursor for Slides is finally here.

Watch the first 47 seconds. Then try going back to your old deck tool.

Reply "Chronicle" + RT to get two months of Pro for free. Make sure you follow so I can DM you asap.
1.8K replies · 939 retweets · 3.1K likes · 813.2K views
Thomas Chong retweeted
Junyang Lin @JustinLin610
me stepping down. bye my beloved qwen.
1.7K replies · 730 retweets · 13.6K likes · 6.6M views
Logan Kilpatrick @OfficialLoganK
Reply here or DM me :) will add folks in as much as we can
2.2K replies · 13 retweets · 941 likes · 78.6K views
Logan Kilpatrick @OfficialLoganK
Big upgrade to vibe coding in @GoogleAIStudio lands in Jan, but if you want to test early… 👇🏻
3.8K replies · 190 retweets · 5.5K likes · 553.7K views
Thomas Chong retweeted
Max Ku @vinesmsuic
We tried to stress-test current image generation models for every possible real-world task. What we found reveals a lot about how today’s models see, edit, and fail. 🧵 (1/N)
1 reply · 15 retweets · 36 likes · 9.8K views
Thomas Chong retweeted
Kevin Lin @KevinQHLin
Thanks @_akhaliq for sharing our work!!

🤔 Today's video generation models (e.g., Veo 3, Sora) are great at realism, but they still struggle to convey structured knowledge and logical teaching.

🌟 Code2Video 🌟 takes a different path: starting from Python Manim code, it renders project-level programs into educational videos—bridging coding, visualization, and knowledge!

📷 Code: github.com/showlab/Code2V…
🏠 Website: showlab.github.io/Code2Video/
📄 arXiv: arxiv.org/pdf/2510.01174

We want to share our gratitude to @3blue1brown and @manim_community!!!

Thanks to the great team @Anno_YanzheChen and @MikeShou1!

#VIDEO #education #Sora2
AK @_akhaliq

Code2Video: A Code-centric Paradigm for Educational Video Generation

4 replies · 26 retweets · 130 likes · 29.8K views
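For readers who have not used Manim, the code-centric starting point looks roughly like this. A minimal hand-written scene with the @manim_community library, purely illustrative and not output from Code2Video itself:

    # Minimal Manim scene: render a theorem statement as an animation.
    # Hand-written illustration of the code-to-video idea, not Code2Video output.
    from manim import DOWN, MathTex, Scene, Text, Write

    class TheoremIntro(Scene):
        def construct(self):
            title = Text("Pythagorean Theorem")
            equation = MathTex("a^2 + b^2 = c^2").next_to(title, DOWN)
            self.play(Write(title))
            self.play(Write(equation))
            self.wait(2)

Rendering with "manim -pql scene.py TheoremIntro" produces a short clip; Code2Video's contribution is generating and organizing such programs at project scale.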
Thomas Chong @cch_thomas
I will be attending 𝗔𝗖𝗟 𝟮𝟬𝟮𝟱 𝗶𝗻-𝗽𝗲𝗿𝘀𝗼𝗻 (27/7 - 1/8) to give an 𝗢𝗥𝗔𝗟 𝗽𝗿𝗲𝘀𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 on our paper, TheoremExplainAgent (tiger-ai-lab.github.io/TheoremExplain…).

I warmly invite you to join my presentation to see our work in action. 👇
▶️ 𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗡𝗮𝗺𝗲: 11 - IP-Orals, Multimodality and Language Grounding 2
▶️ 𝗧𝗶𝗺𝗲: Wednesday, July 30, 09:00 - 10:30
▶️ 𝗟𝗼𝗰𝗮𝘁𝗶𝗼𝗻: Hall M2, Austria Center Vienna

I will also be at the venue for the duration of the conference and would love to connect! 🤝 My interests are AI Agents 🤖, Multimodal LLMs 🗣️🎨, and Low-resource Languages 🌍. If you're working in these areas or just want to discuss, please reach out! 👋

See you in Vienna! #ACL2025NLP #AI #NLP #Research #LLMs
Vienna, Austria 🇦🇹 · 1 reply · 3 retweets · 11 likes · 697 views
Thomas Chong @cch_thomas
Just watched #GoogleIO & immediately applied for the Gemini Diffusion waitlist. Amazed at how quickly I got approved! 🤩

Tested it by asking for a UI for an internal tool to fetch/summarize research papers (arXiv, Hugging Face, Slack) & plan project integration. The generation speed is mind-blowing (1044 tokens/s)! 🤯

Code quality might not be as good as Gemini 2.5 Pro/Flash, but it's fantastic to see how fast it is for rapid prototyping. Huge potential for Diffusion LLMs in iterative tasks like coding & writing. Excited to see it evolve!

#GoogleIO #GeminiDiffusion #AI
0 replies · 1 retweet · 4 likes · 317 views
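Rough arithmetic on why 1044 tokens/s changes prototyping: at that rate, a few thousand tokens of UI scaffolding arrive in seconds. The 100 tokens/s baseline below is an assumed typical autoregressive decoding speed for comparison, not a measured Gemini 2.5 figure:

    # Back-of-envelope latency comparison. diffusion_rate comes from the
    # demo above; baseline_rate is an assumed autoregressive speed.
    diffusion_rate = 1044  # tokens/s (observed in the AI Studio demo)
    baseline_rate = 100    # tokens/s (assumption for comparison)

    for n_tokens in (1_000, 5_000, 20_000):
        print(f"{n_tokens:>6} tokens: "
              f"{n_tokens / diffusion_rate:5.1f}s diffusion vs "
              f"{n_tokens / baseline_rate:6.1f}s baseline")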
Thomas Chong @cch_thomas
🎉 HUGE NEWS! 🎉 My first paper, "TheoremExplainAgent", has been ACCEPTED to ACL 2025 main (@aclmeeting)! 🚀

So proud of what we have accomplished. We explored using AI agents to generate long-form theorem explanation videos! Can't wait to share our work in Vienna! 🇦🇹

@vinesmsuic Johnathan Leung @KrishRShah Alvin Yu @WenhuChen @HelloVotee

#ACL2025 #TheoremExplainAgent #NLP

👇 Check out our original work!
Max Ku @vinesmsuic

🚀 We just released the code for #TheoremExplainAgent! 🧮🎬 Our agentic LLM approach generates long-form (>5 min) theorem explanation videos using Manim. While the approach is highly successful, layout issues remain. We also introduce TheoremExplainBench for systematic evaluation. 👇 Details (1/n)

0 replies · 6 retweets · 32 likes · 3.1K views
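The shape of the agentic pipeline, as described, is a plan/generate/render loop. A heavily simplified sketch; llm() and render_manim() are hypothetical stand-ins, not TheoremExplainAgent's actual interfaces:

    # Simplified plan -> generate -> render loop for long-form theorem videos.
    # llm() and render_manim() are hypothetical stand-ins, not the paper's API.
    def llm(prompt: str) -> str:
        """Stand-in for a call to any LLM backend."""
        raise NotImplementedError

    def render_manim(scene_code: str, out_path: str) -> None:
        """Stand-in: save scene_code to a file and invoke manim on it."""
        raise NotImplementedError

    def explain_theorem(theorem: str) -> list[str]:
        # 1. Plan: break the theorem into an ordered list of scenes.
        plan = llm(f"Outline the Manim scenes needed to explain: {theorem}")
        clips = []
        for i, scene in enumerate(plan.splitlines()):
            # 2. Generate: one self-contained Manim Scene per plan step.
            code = llm(f"Write a complete Manim Scene for: {scene}")
            # 3. Render: each scene becomes a clip; concatenating the clips
            #    yields the >5 min explanation video.
            out_path = f"scene_{i}.mp4"
            render_manim(code, out_path)
            clips.append(out_path)
        return clips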
Thomas Chong retweeted
Hung Chia-Yu @hungchiayu123
🚨 Big News 🚨 Our paper "Inference-Time Alignment with Reward Models" has been selected for an oral presentation at #NAACL2025! 🎉

🗓 May 1 | ⏰ 14:30–14:45
📍 Ballroom C (Session H)

If you're into inference-time alignment and reward-guided decoding, don't miss this! 🔥

Sadly, I won't be there — but my amazing advisor @soujanyaporia will be presenting on our behalf! 🙌
Hung Chia-Yu @hungchiayu123

Happy to announce our paper "Inference-Time Alignment with Reward-Guided Tree Search" has been accepted to #NAACL2025 main! 🥳🥳 By framing alignment as a reward-guided tree search problem, we introduce a novel inference-time alignment algorithm that employs an off-the-shelf reward model with evolutionary strategies to align LLMs!

1 reply · 4 retweets · 12 likes · 894 views
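The core idea of reward-guided search at inference time fits in a few lines. A simplified sketch; generate_continuations() and reward_model() are hypothetical stand-ins, and the paper additionally layers evolutionary strategies on top of the search:

    # Simplified reward-guided tree search over partial generations.
    # generate_continuations() and reward_model() are hypothetical stand-ins.
    def generate_continuations(prefix: str, k: int) -> list[str]:
        """Stand-in: sample k short continuations from a frozen base LLM."""
        raise NotImplementedError

    def reward_model(text: str) -> float:
        """Stand-in: score text with an off-the-shelf reward model."""
        raise NotImplementedError

    def guided_decode(prompt: str, beam: int = 4, k: int = 8,
                      steps: int = 16) -> str:
        frontier = [prompt]
        for _ in range(steps):
            # Expand every branch, then keep only the highest-reward ones:
            # the reward model steers decoding without retraining the LLM.
            candidates = [cont for p in frontier
                          for cont in generate_continuations(p, k)]
            frontier = sorted(candidates, key=reward_model, reverse=True)[:beam]
        return frontier[0]

Because only sampling and scoring are needed, alignment happens at decode time with no fine-tuning of the base model.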
Thomas Chong @cch_thomas
I'm excited to share my first publication as a co-first author: #TheoremExplainAgent! 🎉 We use an agentic LLM approach to create >5 min long theorem videos w/ Manim. TheoremExplainBench is also released for systematic evaluation. 👇 [1/7]
3 replies · 5 retweets · 12 likes · 948 views