Jeff Wang 👨‍🚀
@jffwng
3K posts

Product Lead @AIatMeta (MSL / FAIR). I like language models. I also like non-language models. Previously at Twitter and startups

San Francisco, CA · Joined July 2007
785 Following · 3.1K Followers
Nan Yu@thenanyu·
Yes. PMM is a product concern. I see a lot of orgs out there combining design and product management, which I think creates all sorts of poor incentives. But product management and product marketing are a much more natural fit together.
Tony Fadell@tfadell

Most tech companies break out product management and product marketing into two separate roles: product management defines the product and gets it built; product marketing writes the messaging, the facts you want to communicate to customers, and gets the product sold. But from my experience that's a grievous mistake. Those are, and should always be, one job. There should be no separation between what the product will be and how it will be explained; the story has to be utterly cohesive from the beginning. Your messaging is your product. The story you're telling shapes the thing you're making.

I learned storytelling from Steve Jobs. I learned product management from Greg Joswiak. Joz, a fellow Wolverine, Michigander, and overall great person, has been at Apple since he left Ann Arbor in 1986 and has run product marketing for decades. And his superpower, the superpower of every truly great product manager, is empathy. He doesn't just understand the customer. He becomes the customer.

So when Joz stepped into the world with his next-gen iPod to test it out, he fiddled with it like a beginner. He set aside all the tech specs except one: battery life. The numbers were empty without customers, the facts meaningless without context. And that's why product management has to own the messaging. The spec shows the features, the details of how a product will work, but the messaging predicts people's concerns and finds ways to mitigate them.

- #BUILD Chapter 5.5, The Point of PMs

Jeff Wang 👨‍🚀@jffwng·
@thenanyu Exactly. Code used to be the bottleneck. Now that code is abundant, the bottleneck is choosing which code to keep and which to discard.
Nan Yu@thenanyu·
Now that you can build anything, you’re faced with the question: what should you build? The answer isn’t “everything”. Do that and a more focused competitor eats your lunch. AI enables distraction. Up to you to resist. See OpenAI. No more side quests.
Jeff Wang 👨‍🚀@jffwng·
@kevinyien Extending this: it holds true for other functional pods too. Take comms/marketing/brand: the old lines were never rigid, but there's still value in each. That will continue as they expand and deepen.
Kevin Yien@kevinyien·
The belief that EPD is collapsing into a single role called a “builder” is based on the incorrect assumption that product manager, engineer, and designer had the right boundaries to begin with.

For example, many of the people with the title “designer” that I respected the most were already a combination of “product manager + product designer + frontend engineer”. Now they can do each aspect even better / more. But they are still a “designer” (at least imo). The same applies to product managers who can (and should) do more of the product marketing, selling, and growth work.

Anyone looking for new neat buckets to slot into will have a hard time over the next decade. The reality is that roles are simultaneously expanding and deepening (which many aren’t ready for, or straight up don’t want to happen).
Jeff Wang 👨‍🚀 retweeted
andrew chen@andrewchen·
in a world of agents, the product role is going to split into two jobs:
- one that organizes humans (stakeholders, design, eng)
- one that organizes agents (prompts, evals, workflows, etc)

Both will be in pursuit of offering the right products to customers, but how you get there will dramatically change. What happens to the typical product rituals? Instead of PRDs, OKRs, standups, product reviews, we'll need the equivalent for agents. Couple wild ideas here...

instead of standups: agents will report back to us based on run logs and anomaly flags. no one needs to say what they did yesterday; the system already did thousands of things. the question is where it broke, where it surprised you, and where it got better. show us the patterns, the trends, the edge cases, particularly the ones the agents didn't fix automatically. the daily ritual becomes reviewing deltas, scanning failures, and deciding which ones matter. less reporting, more triage.

instead of OKRs: we'll need adversarial agents that continuously monitor and grade the system and detect patterns, scoring outcomes on an hourly or daily basis. rather than setting a quarterly goal of "increase X by 5%" and revisiting it slowly, management will be able to monitor success in real time and detect trends/patterns toward overall goals.

instead of PRDs: we won't need waterfall. prototyping will rule the day, and we'll need a living agentic loop that mediates customer feedback/ratings and what's being prioritized and built. you don't hand it to eng, you deploy it into the agent loop. if it's wrong, it fails visibly and you can revert. if it's right, it produces the right output.

instead of product reviews: we'll need simulation systems to examine agent behavior in different scenarios. in an agentic world where UI shifts from buttons/menus to agents automatically doing things, you'll want to examine their behavior before you deploy. you rewind decisions, fork alternate paths, and see how different prompts or constraints would have changed outcomes. the review becomes interactive. less storytelling, more counterfactuals.

The PM sits in the middle of this split. On the human side, still aligning taste, risk tolerance, and strategy across people. On the agent side, shaping the actual behavior of the system through prompts, evals, and feedback loops. one side is persuasion; the other is instrumentation. the best ones will collapse the gap, translating intent directly into systems that act on it.

the fascinating part is that the agentic loop will run 10000x faster than the human one, and of course, you can "hire" them faster. Thus the "organizing humans" half starts to feel slow and lower impact unless it directly improves the agent loop. Eventually the PM will shift toward agents and maybe ignore the human coordination altogether...
Jeff Wang 👨‍🚀@jffwng·
Software is not dead. There will be orders of magnitude more agent users than human users. Start making software for them.
Joanne Jang@joannejang·
[*] the friend is my husband and i also own 50%
Joanne Jang@joannejang·
if you've been looking to buy a house in sf: you can wake up to this view every day & be right by bernal peak -- all in the objectively cool neighborhood that is bernal heights! best of all: "crime don't climb", "no hill no thrill", etc (it's also a 2 min walk away from cortland)
[tweet media]
Jeff Wang 👨‍🚀 retweeted
Manus@ManusAI·
Manus is entering the next chapter: we’re joining forces with Meta to take general agents to the next level. Full story on our blog: manus.im/blog/manus-joi…
Jeff Wang 👨‍🚀 retweeted
Xiao Liang@MasterVito0601·
Definitely, totally, completely agree with the two key ideas regarding whether RL can elicit reasoning beyond a model’s inherent boundaries:
(i) the task was not heavily covered during pre-training, leaving sufficient headroom for RL to explore.
(ii) the RL data are calibrated to the model’s edge of competence, neither too easy (in-domain) nor too hard (out-of-domain).

Data remains the most critical catalyst for fully unlocking the potential of RLVR training, as @WeizhuChen highlighted at the NeurIPS 2025 MathAI Workshop. It is unfortunate that many of our earlier strategies for selecting RL data, such as accuracy-based filtering or diversity-driven sampling, overlooked the crucial question of whether the data were actually uncovered during pre-training and whether they aligned with the model’s true capabilities.

Interestingly, both of our recent works on improving RLVR from a data-centric perspective, SwS (2506.08989) and SvS (2508.14029), directly address these two points. This alignment enables them to substantially boost evaluation performance and push beyond existing reasoning ceilings.
Rosinality@rosinality

Very interesting observations on the interaction between pre/mid/post-training.
1. The gain from RL is largest when the task is neither too easy nor too hard.
2. Pre-training should focus on cultivating broader atomic skills; RL can combine them to solve composite problems.
3. For tasks near the pre-training distribution, heavy mid-training is effective. For harder tasks, assigning more compute to RL is effective.
4. Process rewards are helpful for generalization (if we can utilize them!)

How can we define atomic skills in general reasoning? And how can we further promote that, potentially using synthetic data? This could be an interesting area to explore in the atomic vs composite skills view of RL.

Jeff Wang 👨‍🚀@jffwng·
Waited a while to upgrade to iOS 26, thinking I wouldn't like it, but after doing so, I'm pleasantly surprised.
Jeff Wang 👨‍🚀 retweeted
Stephane Kasriel@skasriel·
🚀 Excited to share that our team at Meta just launched SAM 3 + SAM 3D! These models set a new standard for segmenting & reconstructing media, unlocking new possibilities for developers, researchers, & creators. SAM 3: go.meta.me/d74ec5 SAM 3D: go.meta.me/f019f3
[tweet media]