Linxing Preston Jiang
@lpjiang97

332 posts

PhD student @uwcse interested in theoretical neuroscience. Also @lpjiang97.bsky.social

Seattle, WA · Joined March 2015
189 Following · 324 Followers

Pinned Tweet
Linxing Preston Jiang @lpjiang97
I'm excited to share our latest work — "Data Heterogeneity Limits the Scaling Effect of Pretraining in Neural Data Transformers" — where we carefully examined the effect of scaling up pretraining data in neural foundation models. 🧐 (1/9) Preprint: biorxiv.org/content/10.110…
Linxing Preston Jiang retweeted
Xiaochuang Han @XiaochuangHan
Can we simplify video generation by decomposing it into interleaved text-video co-generation? Would explicit, repeated thinking in language improve generation in pixels? We introduce TV2TV, a unified model that jointly learns:
- language modeling (next-token prediction)
- video flow matching (next-frame prediction)
At inference, TV2TV dynamically alternates between textual thinking and video generation. Model generations below: interleaved text plans and video slices (~1–2s) are co-generated over time, conditioned on a single frame per sport. 📖 arxiv.org/abs/2512.05103
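The alternating inference described above can be sketched roughly as follows. This is a hypothetical illustration only: the function names, the fixed number of rounds, and the stub generators are my assumptions, not TV2TV's actual API; the real model decides dynamically when to switch modes and runs a language model and a flow-matching video model in place of the stubs.

```python
# Hypothetical sketch of TV2TV-style interleaved inference (all names
# are assumptions). The loop alternates between a "textual thinking"
# step and a "video generation" step, each conditioned on everything
# generated so far.

def generate_text_plan(context):
    """Stub for next-token language modeling; returns a short text plan."""
    return f"plan-{len(context)}"

def generate_video_slice(context):
    """Stub for flow-matching video generation; returns a ~1-2s clip."""
    return f"clip-{len(context)}"

def tv2tv_generate(first_frame, num_rounds=3):
    context = [first_frame]                  # condition on a single frame
    for _ in range(num_rounds):
        plan = generate_text_plan(context)   # think in language first
        context.append(("text", plan))
        clip = generate_video_slice(context) # then generate pixels,
        context.append(("video", clip))      # conditioned on the plan
    return context

out = tv2tv_generate("frame0")
# Interleaved sequence: frame, text, video, text, video, ...
```

The design point the tweet highlights is that the text plan sits in the conditioning context of the next video slice, so errors in pixels can be steered by explicit reasoning in language.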
Linxing Preston Jiang retweeted
Weijia Shi @WeijiaShi2
At #COLM2025 🇨🇦 this week! Would love to meet old and new friends. I’ve been thinking about how to train LMs that can leverage high-risk but high-quality data, and how to build omni models by merging specialized ones across modalities. And come check out our paper 👇
Kanaka Rajan @KanakaRajanPhD
(1/8) New paper from our team! @yuven_duan & @hamzatchaudhry introduce POCO, a tool for FORECASTING brain activity at the cellular & network level during spontaneous behavior. Find out how we built POCO & how it will transform neurobehavioral research 👇 arxiv.org/abs/2506.14957
Linxing Preston Jiang retweeted
tuochao chen @tuochao
Today’s AI assistants passively wait for questions. But what if they could anticipate when to help, without explicit user invocation? Meet LlamaPIE, the first real-time proactive assistant to enhance conversations via discreet, concise guidance delivered by a hearable. #acl2025
Linxing Preston Jiang retweeted
Jin Shang @jinshang1997
I've been writing some AI agents lately and they work much better than I expected. Here are 10 learnings for writing AI agents that work:

1) Tools first. Design, write, and test the tools before connecting them to LLMs. Tools are the most deterministic part of your code; make sure they work 100% before writing the actual agents.
2) Start with general, low-level tools. For example, bash is a powerful tool that can cover most needs. You don't need to start with a full suite of 100 tools.
3) Start with a single agent. Once you have all the basic tools, test them with a single ReAct agent. It's extremely easy to write a ReAct agent once you have the tools; all major agent frameworks have a built-in ReAct agent, so you just need to plug in your tools.
4) Start with the best models. There will be a lot of problems with your system, and you don't want the model's ability to be one of them. Start with Claude Sonnet or Gemini Pro; you can downgrade later to cut cost.
5) Trace and log your agent. Writing agents is like doing animal experiments: there will be much unexpected behavior, so monitor it as carefully as possible. Many logging systems help here, e.g. LangSmith and Langfuse.
6) Identify the bottlenecks. There's a chance a single agent with general tools already works. If not, read your logs and identify the bottleneck: the context may be too long, the tools not specialized enough, or the model may not know how to do something.
7) Iterate based on the bottleneck. There are many ways to improve: switch to multiple agents, write better prompts, write more specialized tools, etc. Choose based on your bottleneck.
8) You can combine workflows with agents, and it may work better. If your objective is specialized and the process has a unidirectional order, a workflow is better, and each workflow node can be an agent. For example, a deep-research agent can be a two-step workflow — first a divergent broad search, then convergent report writing — with each step an agentic system by itself.
9) Trick: use the filesystem as a hack. Files are a great way for AI agents to document, memorize, and communicate. You save a lot of context length when agents pass around file URLs instead of full documents.
10) Another trick: ask Claude Code how to write agents. Claude Code is the best agent out there. Even though it's not open-sourced, CC knows its own prompt, architecture, and tools, so you can ask it for advice on your system.
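The single-agent ReAct loop from point 3 can be sketched in a few lines. This is a minimal illustration under stated assumptions, not any specific framework's API: `bash_tool` and `stub_model` are hypothetical stand-ins (a real agent would call an LLM and a sandboxed shell), and the act/observe loop is the part the tip is actually about.

```python
# Minimal ReAct-style agent loop (a sketch, not a framework's real API).
# The model repeatedly either requests a tool call ("action") or gives a
# final "answer"; each tool result is fed back as an observation.

def bash_tool(cmd):
    """Point 2: one general, deterministic tool (stubbed out here)."""
    return f"ran: {cmd}"

TOOLS = {"bash": bash_tool}

def stub_model(history):
    """Stand-in for an LLM: call a tool once, then answer."""
    if not any(step[0] == "observation" for step in history):
        return ("action", "bash", "ls")
    return ("answer", "done")

def react_agent(task, model=stub_model, max_steps=5):
    history = [("task", task)]               # point 5: keep a full trace
    for _ in range(max_steps):
        step = model(history)
        if step[0] == "answer":
            return step[1], history
        _, tool_name, tool_input = step
        observation = TOOLS[tool_name](tool_input)   # act, then observe
        history.append(("action", (tool_name, tool_input)))
        history.append(("observation", observation))
    return None, history                     # budget exhausted
```

The `max_steps` cap and the explicit `history` list are the two things worth keeping even in a toy version: they give you the logging from point 5 and a natural place to spot the bottlenecks from point 6.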
Linxing Preston Jiang retweeted
Kenneth D Harris @kennethd_harris
A new study led by @timothy_sit shows that different layers of mouse V1 integrate visual and non-visual signals differently. L2/3 activity is dominated by vision (or spontaneous fluctuations) and L5 by movement. This leads to different geometries. biorxiv.org/content/10.110…
Allie Sinclair @sinclair_allie
📢 It's official: I'm super excited to share that I'll be joining Rice University as an Assistant Professor in the Department of Psychological Sciences! Lab will launch in Summer 2026— I'll be recruiting over the next year, so please spread the word! Short thread ⤵️
Linxing Preston Jiang @lpjiang97
Together, our results show that pretraining with more sessions does not naturally lead to improved downstream performance. We advocate for rigorous scaling analyses in future work on neural foundation models to account for data heterogeneity effects. (8/9)
Linxing Preston Jiang retweeted
Weijia Shi @WeijiaShi2
Our previous work showed that creating visual chain-of-thoughts via tool use significantly boosts GPT‑4o’s visual reasoning performance. Excited to see this idea incorporated into OpenAI’s o3 and o4‑mini models (openai.com/index/thinking…). Huge thanks to my co-authors @huyushi98 @XingyuFu2
Quoted tweet — Weijia Shi @WeijiaShi2:

Visual Chain-of-Thought with ✏️Sketchpad. Happy to share ✏️Visual Sketchpad, accepted to #NeurIPS2024. Sketchpad thinks 🤔 by creating visual reasoning chains for multimodal LMs, enhancing GPT-4o's reasoning on math and vision tasks. We’ve open-sourced the code: visualsketchpad.github.io
