Subba Reddy
@PostPCEra

13K posts

Everything around me was someone's lifework | Intelligence is the capacity for learning | Entrepreneur at heart (CS grad), broadly curious, I live & breathe EdTech

Silicon Valley · Joined October 2010
2.8K Following · 780 Followers
Pinned Tweet
Subba Reddy
Subba Reddy@PostPCEra·
A thread of my threads:
5
0
24
0
Subba Reddy
Subba Reddy@PostPCEra·
@rohanpaul_ai “Prefill” & “decode” in AI compute.
Prefill:
- you feed 100 PDFs to an AI model to digest
- Nvidia GPUs excel at this
Decode:
- you ask questions about the content of those PDFs
- TPUs, ASICs (AWS, etc.) excel here
Prefill to decode is 1 to 100 (by usage), a key reason NVDA stock has been flat for the last 6 months.
0
0
0
19
Rohan Paul
Rohan Paul@rohanpaul_ai·
Chamath on the all-important “prefill” and “decode” in AI compute. Prefill is compute-bound; massively parallel GPUs win, so Nvidia dominates as context grows. Decode is memory-bandwidth-bound, as each next token depends on scanning what’s already generated.
14
27
308
11.6K
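A rough way to see the prefill/decode split described in the two posts above is arithmetic intensity: prefill pushes the whole prompt through large batched matmuls (compute-bound), while decode re-reads the weights and KV cache for every single new token (memory-bandwidth-bound). The sketch below is a back-of-the-envelope illustration only; the model size, hardware numbers, and the simplified FLOP/byte accounting are assumptions, not measurements.

```python
# Back-of-the-envelope roofline sketch: why prefill is compute-bound and
# decode is memory-bandwidth-bound. All numbers are illustrative assumptions.

def intensity(params_b: float, tokens_per_pass: int) -> float:
    """Approximate FLOPs per byte of weight traffic for one forward pass."""
    params = params_b * 1e9
    weight_bytes = params * 2             # fp16/bf16 weights read once per pass
    flops = 2 * params * tokens_per_pass  # ~2 FLOPs per parameter per token
    return flops / weight_bytes

# Hypothetical 70B-parameter dense model.
prefill = intensity(70, tokens_per_pass=8192)  # whole prompt processed at once
decode  = intensity(70, tokens_per_pass=1)     # one new token per step

# Hypothetical accelerator: ~1000 TFLOPs of fp16 compute, ~3 TB/s of HBM bandwidth.
machine_balance = 1000e12 / 3e12  # FLOPs the chip can do per byte it can move

print(f"prefill intensity ~ {prefill:,.0f} FLOPs/byte (>> {machine_balance:.0f}: compute-bound)")
print(f"decode  intensity ~ {decode:,.0f} FLOPs/byte (<< {machine_balance:.0f}: bandwidth-bound)")
```

This ignores KV-cache traffic and batching, but it shows the core asymmetry: decode does roughly one unit of math per byte moved, so bandwidth-optimized parts (TPUs, inference ASICs) can compete there even when raw FLOPs favor GPUs.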
Subba Reddy
Subba Reddy@PostPCEra·
@sama Software engineering and the "Era of Human coding" ...
Subba Reddy tweet media
0
0
0
11
Sam Altman
Sam Altman@sama·
I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took. Thank you for getting us to this point.
4.4K
2.1K
35.7K
5.4M
Subba Reddy
Subba Reddy@PostPCEra·
@rohanpaul_ai AI is making the 1x engineer much more productive, for sure. But only a non-coder/non-engineer (by training) could make the following statement:
>The 10x engineer used to have better judgment than the 1x engineer, but by making everybody a 10x engineer (AI coding), you're taking judgment away.
0
0
0
4
Rohan Paul
Rohan Paul@rohanpaul_ai·
Chamath on how AI agents are making the "10x engineer" distinction disappear because the most efficient "code paths" are now obvious to everyone. Just as AI solved chess and removed the mystery of the best move, AI is doing the same for coding, making the process reductive and removing technical differentiation.

"I'm going to say something controversial: I don't think developers anymore have good judgment. Developers get to the answer, or they don't get to the answer, and that's what agents have done. The 10x engineer used to have better judgment than the 1x engineer, but by making everybody a 10x engineer, you're taking judgment away. You're taking code paths that are now obvious and making them available to everybody.

It's effectively like what happened in chess: an AI created a solver so everybody understood the most efficient path in every single spot to do the most EV-positive (expected value positive) thing. Coding is very similar in that way; you can reduce it and view it very reductively, so there is no differentiation in code."

---
From @theallinpod YT channel (link in comment)
229
91
917
648K
Subba Reddy
Subba Reddy@PostPCEra·
@jukan05 Every server maker (all big 3 hyperscalers) and their memory suppliers both want upfront long-term contracts.
> BIG: Samsung Pursues Long-Term Supply 'Lock-In' with Google and Microsoft… A New Memory Paradigm Emerges
0
0
0
351
Jukan
Jukan@jukan05·
BIG: Samsung Pursues Long-Term Supply 'Lock-In' with Google and Microsoft… A New Memory Paradigm Emerges

Samsung Electronics is reportedly in talks to establish long-term supply agreements (LTAs) with Google and Microsoft. While various contract structures are under discussion, the most likely arrangement is one where large volumes are committed over an extended period with pricing remaining variable. If finalized, this would mark the first binding long-term supply contract in the industry's history — a significant historical inflection point.

◆ Big Tech Gets Desperate, Seeks Long-Term Supply Commitments

According to industry sources on the 20th, Samsung Electronics has recently been in negotiations with Google and Microsoft over long-term supply contracts. As memory has emerged as a critical bottleneck for AI data center buildouts, major tech companies are racing to secure commitments from Korean memory makers. Despite concerns over an AI bubble, U.S. hyperscalers have continued — and even accelerated — their capital expenditure programs, exacerbating memory supply shortages. Commodity DRAM prices have surged to the point of rivaling HBM margins, prompting Big Tech to pursue LTAs with memory manufacturers in order to secure stable supply.

The exact structure of the contracts remains under discussion, but the most likely format involves fixed volume commitments with pricing linked to spot market rates. Under this arrangement, Big Tech customers would pay large upfront prepayments to Samsung, which would be drawn down if the customer fails to take the agreed volumes within the three-to-five year contract window. However, pricing would remain tied to spot prices, adjusting upward or downward if spot prices move beyond a defined range.

Memory semiconductor pricing is typically divided into spot prices and contract prices. Spot prices are determined daily by immediate supply and demand in B2C markets such as distributors and small-to-medium PC assemblers. Contract prices, on the other hand, serve as the benchmark for large-scale supply deals between manufacturers like Samsung Electronics and SK Hynix and major global firms such as Apple and Amazon.

Under this type of structure, Samsung would gain long-term demand visibility, enabling it to accelerate capacity expansion while avoiding inventory buildup — thereby preventing the steep price declines seen in past cycles. "This long-term supply agreement is essentially a green light to expand capacity," said one industry official. "By committing to large volumes over an extended period, manufacturers will be able to invest with confidence."

Micron is also inking long-term supply agreements. In its fiscal Q2 2026 earnings call, Micron disclosed that it had "signed its first-ever five-year Strategic Customer Agreement (SCA)." SK Hynix is also said to have already reached LTAs with select customers, and it is expected that memory manufacturers will have concluded agreements with all major hyperscalers by mid-year.

In response, a Samsung Electronics representative said, "We are unable to confirm details related to specific customers," while adding, "All major customers are currently requesting large volumes of memory."

◆ "This Time Is Different" — Prepayment First, Supply Later

The significance of these contracts lies in the potential paradigm shift they represent for the memory industry. Historically, the sector has been plagued by periodic down-cycles driven by the mismatch between large-scale capacity additions and fluctuating demand. Long-term supply agreements, however, would provide more than three years of demand visibility, allowing companies to invest with confidence and maintaining margins at reasonable levels by preventing dramatic price declines.

The defining feature of this round of LTAs is their enforceability. While long-term contracts were attempted around 2019, they carried no binding force — customers could cancel orders at will. This year's agreements, by contrast, enforce commitment through large upfront prepayments. "My understanding is that Samsung is in discussions to receive over $10 billion in prepayments from Microsoft, with the prepayment balance drawn down if Microsoft fails to take the agreed volumes," said another industry source.

The successful conclusion of LTAs is expected to drive further expansion of Samsung's investment plans. With demand visibility materially improved, there is little reason to hesitate on capex. Micron, for its part, has already announced plans to invest over $25 billion in fiscal 2026 — nearly double the $13.8 billion invested the prior year.

At Samsung's annual general meeting on the 18th, Vice Chairman and CEO Jeon Young-hyun stated: "For the semiconductor division, it is critically important to minimize uncertainty in our medium-to-long-term business and maintain a healthy supply-demand balance in memory. By securing long-term supply through multi-year agreements, both our customers and ourselves can enhance predictable business stability and visibility."
Jukan tweet media
11
31
234
104.8K
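The contract mechanics described above (a large upfront prepayment drawn down when the customer under-takes its committed volume, with price floating against spot only inside a band) can be sketched numerically. Every figure below is invented for illustration; the article does not disclose actual volumes, price bands, or unit prices.

```python
# Toy model of the LTA structure described above: volume is committed,
# price follows spot but only within a band around a reference contract price,
# and the prepayment balance is drawn down when the buyer takes less than the
# committed volume. All numbers are invented for illustration.

def settle_period(committed_units, taken_units, spot_price, ref_price,
                  band=0.15, prepayment_balance=10_000.0):
    """Return (invoice, remaining_prepayment) for one settlement period."""
    # Price tracks spot, clamped to +/- band around the reference price.
    price = min(max(spot_price, ref_price * (1 - band)), ref_price * (1 + band))
    invoice = taken_units * price
    # Any shortfall against the commitment is charged against the prepayment pool.
    shortfall_units = max(committed_units - taken_units, 0)
    prepayment_balance -= shortfall_units * price
    return invoice, prepayment_balance

inv, bal = settle_period(committed_units=100, taken_units=80,
                         spot_price=9.0, ref_price=7.0)
print(f"invoice={inv:.1f}, remaining prepayment={bal:.1f}")
```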
Subba Reddy
Subba Reddy@PostPCEra·
@traderwillhu 👍
>4. Brokerage APIs: Charles Schwab vs. IBKR
>Charles Schwab API: free. The "catch" is that you need to manually refresh your tokens weekly.
>IBKR API: While the API is free, real-time data usually costs $1–$2/month (and it needs Trader Workstation/IB Gateway running in the background).
0
0
0
193
Will Hu
Will Hu@traderwillhu·
My Top 5 Free APIs and Libraries for Stock Screening & Trading Dashboards

Many people have been asking about the specific APIs and libraries I use to build my stock screening tools and trading dashboards. After extensive testing, I’ve narrowed it down to a few reliable tools. Here is a breakdown of my current tech stack based on my personal experience.

1. TradingView Screener (Unofficial Library)
For my Pre-market Gappers scan, I rely on a TradingView screener library. While this isn't an official API, it provides scanning results and criteria identical to the TradingView desktop software.
GitHub: github.com/shner-elmo/Tra…
Pros: Highly accurate; matches TradingView’s powerful UI filters.
Cons: There is a 15-minute data delay. Unless you require sub-second real-time scanning, this is usually negligible for swing trading or early-day prep.

2. Finviz Finance Library
I primarily use this to scrape news and market sentiment. It’s excellent for aggregating headlines and URLs directly from Finviz.
GitHub: github.com/lit26/finvizfi…
Use Case: Automatically fetching the latest news for specific tickers to understand the "catalyst" behind a price move.

3. TradingView Lightweight Charts & Tradingview Widgets
This is my go-to for technical analysis visualization.
GitHub: github.com/tradingview/li…
Chart Widgets: tradingview.com/widget-docs/wi…
The Difference:
Lightweight Charts: Best for building custom tools. It’s high-performance and allows you to program any custom indicator you can imagine.
Chart Widget: If you want a "plug-and-play" experience, this is easier but comes with a 15-minute delay and limits you to native indicators (no custom Pine Script/logic integration).

4. Brokerage APIs: Charles Schwab vs. IBKR
I have integrated both, and here is how they compare:
Charles Schwab API: Completely free. The only "catch" is that you need to manually refresh your tokens weekly.
IBKR API: While the API is free, real-time data usually costs $1–$2/month. It also requires you to have TWS (Trader Workstation) or IB Gateway running in the background.
My Verdict: I prefer Schwab for daily use. It’s more "lightweight" as long as you remember to update your tokens over the weekend.

5. Apache ECharts
ECharts is the "all-rounder" of data visualization. I use it to complement TradingView’s charts.
Official Website: echarts.apache.org
Use Case: While TradingView is more professional for price action, ECharts is superior for Post-Trade Analysis in my trading journal. The interactivity and ability to visualize complex equity curves or win-rate distributions are top-tier.

------

Of course, there are plenty of superior paid resources out there. However, if you’re a trader just getting started with vibe coding, these free tools are perfect for getting your hands dirty and sharpening your skills first. Feel free to share more in the comments!
19
119
759
53.1K
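Point 2 in the list above (pulling Finviz headlines to find the "catalyst" behind a move) is the easiest piece of that stack to try. A minimal sketch follows, assuming the finvizfinance package from the linked repo still exposes a finvizfinance(ticker) quote class with a ticker_news() method returning a pandas DataFrame; check the repo's README before relying on it, since the API may have changed.

```python
# Minimal sketch: fetch recent Finviz headlines for a ticker via the
# finvizfinance library (github.com/lit26/finvizfi…). The ticker_news()
# call is assumed from the library's documented usage; verify against the repo.
from finvizfinance.quote import finvizfinance

def latest_headlines(ticker: str, limit: int = 5):
    stock = finvizfinance(ticker)
    news = stock.ticker_news()   # DataFrame with date, title, and link columns
    return news.head(limit)

if __name__ == "__main__":
    print(latest_headlines("NVDA"))
```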
Will Hu
Will Hu@traderwillhu·
My Market Dashboard (Updated)

My previous GitHub live site link experienced a sudden traffic surge from X (too popular?) and was flagged as bot activity, leading to a suspension. Without any reply from GitHub after 10 days, I decided to migrate the entire project to GitLab.

Also, I added some new features to it: multi-chart grid view, ETF Holdings, and Market Breadth. The data will be updated automatically Mon–Fri at 16:30 US Eastern.

The project is fully open-sourced. You can find the Live Site link and the Source Code on my GitLab homepage (link below). If you’d like to customize it, feel free to fork the repo. For more details, you can watch the demo video below. I'm happy to continue sharing this with the FinTwit community.

GitLab homepage: gitlab.com/traderwillhu/m…
14
22
240
17.2K
Subba Reddy
Subba Reddy@PostPCEra·
@jukan05 Memory space insight! $MU, SK Hynix, Samsung
>Micron's departure from its conventional multi-patterning approach — a development that is positive for ASML, but modestly negative for AMAT and LRCX.
0
0
0
181
Jukan
Jukan@jukan05·
Micron's strategic pivot toward more aggressive EUV adoption is interpreted as an effort to save fab space and thereby expand overall output capacity. This implies a departure from its conventional multi-patterning approach — a development that is positive for ASML, but modestly negative for AMAT and LRCX.
Ray Wang@rwang07

First 5-year SCA signed. New EUV layer roadmap. ✍️

9
21
216
34.3K
Subba Reddy
Subba Reddy@PostPCEra·
@levie
>* Data and AI governance still remain core challenges.
>Getting data and content into a spot that agents can securely and easily operate on remains a huge task for most organizations. And governing what agents can do with data in a workflow is still a major topic.
0
0
0
16
Aaron Levie
Aaron Levie@levie·
Had meetings and a dinner with 20+ enterprise AI and IT leaders today. Lots of interesting conversations around the state of AI in large enterprises, especially regulated businesses. Here are some of the general trends:

* Agents are clearly the big thing. Enterprises moving from talking about chatbots to agents, though we’re still very early. Coding is still the dominant agentic use-case being adopted thus far, with other categories across knowledge work starting to emerge. Lots of agentic work moving from pilots and PoCs into production, and some enterprises had lots of active live use-cases.

* Agentic use-cases span every part of a business, from back office operations to client facing experiences, from sales to customer onboarding workflows. General feeling is that agentic workflows will hit every part of an organization, often with the biggest focus on delivering better for customers, getting better insights and intelligence from data and documents, speeding up high ROI workflows with agents, and so on. Very limited discussion on pure cost cutting.

* Data and AI governance still remain core challenges. Getting data and content into a spot that agents can securely and easily operate on remains a huge task for most organizations. Years of data management fragmentation that wasn’t a problem now is an issue for enterprises looking to adopt agents. And governing what agents can do with data in a workflow is still a major topic.

* Identity emerging as a big topic. Can the agent have access to everything you have? In a world of dozens of agents working on your behalf, potentially too much data exposure and scope for the agents. How do we manage agents with a partitioned level of access to your information?

* Lots of emerging questions on how we will budget for tokens across use-cases and teams. Companies don’t want to constrain use-cases, but equally need to be mindful of ultimate token budgets. This is going to become a bigger part of OpEx over time, and probably won’t make sense to be considered an IT budget anymore. Likely needs to be factored into the rest of operating expenses.

* Interoperability is key. Every enterprise is deploying multiple AI systems right now, and it’s unlikely that there’s going to be a single platform to rule them all. Customers are getting savvier on how to handle agent interoperability, and this will be one of the biggest drivers of an AI stack going forward.

Lots more takeaways than just this, but needless to say the momentum is building, and equally enterprises are acutely aware of the change management and work ahead. Lots of opportunity right now.
115
104
885
142.9K
Subba Reddy
Subba Reddy@PostPCEra·
@garrytan Battle royale: SOR (system of record) incumbents vs. AI agents
>startups with AI agents are "parasites" -- Workday CEO
>This is what system of record incumbents think of startups. The war is just beginning.
>VCs & AI agents: the user data belongs to the users, not the incumbent software vendor
0
0
0
7
Garry Tan
Garry Tan@garrytan·
Recent earnings call: Aneel Bhusri of Workday says startups with AI agents are "parasites"

This is what system of record incumbents really think of startups. The war is just beginning.

The facts: the user data belongs to the users, not the incumbent software vendor.
Garry Tan tweet media
75
32
355
159.8K
Subba Reddy
Subba Reddy@PostPCEra·
@JoshKale
>Rivian lost $3.6 billion last year on 42k deliveries
>Every 12-18 months, the company finds a new partner to write a check:
- Amazon: $1.3B equity + 100K van order
- VW: $5.8B JV
- US DOE: $6.6B loan
- Uber: $1.25B robotaxi deal (today)
>Tesla's Cybercab is purpose-built at $25,000
0
0
0
26
Josh Kale
Josh Kale@JoshKale·
Nobody understands how much of a disaster this Rivian <> Uber deal is

Rivian lost $3.6 billion last year on 42k deliveries. That's $86,000 of value destruction PER VEHICLE that left their factory.

Their solution? Partner with Uber to turn a $58K camping SUV into a robotaxi... to compete with Tesla's Cybercab... YIKES

Every 12-18 months, this company finds a new partner to write a check:
- Amazon: $1.3B equity + 100K van order
- VW: $5.8B joint venture
- US DOE: $6.6B loan
- Uber: $1.25B robotaxi deal (today)

The moment they announced the Uber deal, they admitted they're pushing back profitability AGAIN to fund an autonomy program that can't even handle stoplights.

Tesla's Cybercab is purpose built at $25,000 with no steering wheel. The cost per mile math isn't even close.

The Uber deal is to deploy 50,000 robotaxis by 2031. Slight problem: The car doesn't exist yet. The factory doesn't exist yet. The autonomy software doesn't exist.

Manufacturing is HARD. good luck have fun
Josh Kale tweet media
Rivian@Rivian

A fleet of R2 Robotaxis is coming exclusively to @Uber. ⚡🌿 Today, we announced a partnership to help both companies accelerate their autonomous vehicle plans across 25 cities in the US, Canada and Europe by the end of 2031. rivn.co/uber

259
175
2.3K
433.9K
Subba Reddy
Subba Reddy@PostPCEra·
@DeryaTR_
>Flow cytometry data analysis software is a smart graphing calculator for biologists/immunologists to study cells; ~20,000 LOC
>The best part is I can continuously improve it and add new features that are not even available in commercial software, which costs thousands of dollars per user
0
0
0
9
Derya Unutmaz, MD
Derya Unutmaz, MD@DeryaTR_·
In just two days, using OpenAI Codex app GPT-5.4, I created a fully functional flow cytometry data analysis software, ~20,000 lines of code from scratch! This is a highly sophisticated and specialized biology software tool that every immunologist relies on. The best part is that I can continuously improve it and add new features that are not even available in comparable commercial software, which can cost thousands of dollars per user!

For those not familiar with what flow cytometry software is, here is the detailed explanation from Grok:

Flow cytometry analysis software is like a super-smart graphing calculator for biologists and doctors who study cells.

What the machine does first
Imagine you have a sample of blood or tissue with millions of cells. The flow cytometer machine lines the cells up single-file like cars on a highway and shoots lasers at each one as it zooms by (thousands of cells per second). The lasers tell the machine things like:
How big is the cell?
How “grainy” or complicated is it inside?
Does it have certain “flags” (proteins) stuck on it? (These flags light up in different colors, like red, green, purple tags.)
The machine spits out a huge computer file full of raw numbers — no pictures, just data.

What the software is for
The analysis software takes that messy pile of numbers and turns it into clear pictures and answers you can actually understand. Think of it as the “translator” or “artist” that draws the story from the data. With a few clicks you can see:
Colorful dot plots or graphs that show different groups of cells (like “these blue dots are healthy immune cells, these red dots are cancer cells”).
Exactly what percentage of the cells are a certain type (e.g., “78% of the cells in this blood sample are fighting the infection”).
How strongly a cell is “glowing” with a certain color tag (which tells you how much of a protein it’s making).
Side-by-side comparisons of a patient’s sample before and after treatment.

The magic trick scientists use every day
The most common thing they do is called “gating.” It’s like drawing a circle around a group of similar dots on the graph and saying, “Only look at these cells.” The software instantly counts everything inside that circle and gives you the numbers. You can keep drawing smaller and smaller circles to zoom in on very specific cell types — kind of like zooming into a crowd photo until you only see people wearing red hats and glasses.
Derya Unutmaz, MD tweet media
48
104
1K
67.1K
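The "gating" step Grok describes above is easy to see in code: draw a polygon over a 2D scatter of two measured channels, then count the fraction of events inside it. The snippet below is a toy illustration with synthetic data; the channel names, gate coordinates, and distributions are made up and are not taken from the tool described in the post.

```python
# Toy illustration of flow-cytometry "gating": count the fraction of events
# whose two fluorescence measurements fall inside a polygon gate.
# Synthetic data and invented channel names; not code from the actual tool.
import numpy as np
from matplotlib.path import Path

rng = np.random.default_rng(0)
# Fake events: two hypothetical channels per cell (e.g. "CD4" vs "CD8" intensity).
events = rng.lognormal(mean=2.0, sigma=0.6, size=(50_000, 2))

# A rectangular "gate" drawn in that 2D space (vertex coordinates are arbitrary).
gate = Path([(4, 2), (12, 2), (12, 9), (4, 9)])

inside = gate.contains_points(events)
print(f"{inside.sum():,} of {len(events):,} events in gate "
      f"({100 * inside.mean():.1f}%)")
```

Real tools layer these gates hierarchically (a gate applied only to the events inside a parent gate), which is the "smaller and smaller circles" idea in the explanation above.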
Tech Layoff Tracker
Tech Layoff Tracker@TechLayoffLover·
Recruiter I know has been placing tech talent for 15 years. Never seen numbers like what she's tracking now.

Role that used to get 50 applications now pulls 1,200. Senior Android position last week hit 2,847 before they pulled the posting.

Her placement rate dropped from 73% in 2022 to 11% this year.

Companies ghost her after final rounds. She'll shepherd a candidate through 5 interviews, references checked, verbal offer given, then radio silence for 3 weeks.

One hiring manager told her straight: "We're not actually hiring. We're just seeing what's available before we decide between AI agents and offshore"

She placed a DevOps engineer at a logistics company in September. Same role got posted again in November. Asked her contact what happened:
"Oh we kept him 6 weeks to document everything, then replaced him with 3 contractors in Chennai and a monitoring AI that costs $400/month"

Another company she's worked with for 8 years just sent her their 2025 headcount planning. Engineering org going from 67 people to 23. "AI productivity gains" they called it.

She's watching entire teams get "reskilled" in January only to get managed out by March once the knowledge transfer is complete.

Her biggest client told her yesterday: "We don't need talent acquisition anymore. We need talent extraction"

22 years building relationships in this industry. Now she's updating her own resume.

The machines didn't just take the jobs. They took the people who found the jobs too
7
40
165
25.2K
Subba Reddy
Subba Reddy@PostPCEra·
@toddsaunders AI SaaS era!
>“The moat in software was the cost of building software. And Claude Code just mass produced a bridge.”
>But the number of people who capture the value also goes up 100x.
>It's simply that the power of SaaS is changing hands.
0
0
0
5
Todd Saunders
Todd Saunders@toddsaunders·
I heard an incredible analogy from a VC friend that I can’t stop thinking about.

“The moat in software was the cost of building software. And Claude Code just mass produced a bridge.”

It’s wild when you think about the impact of this. The SaaS boom produced a few dozen billionaires and a bunch of zero sum winners. But the AI SaaS era will mass produce millionaires. There will be fewer ServiceTitans hitting $5B valuations, and instead there will be 50,000 companies doing $500K-$5M each, run by 1-3 people with deep expertise and huge margins.

To be clear, I believe that the total value of software goes up, and the number of companies created goes up exponentially. But the number of people who capture the value also goes up 100x.

I don’t believe in the “SaaS is dying” headline, I think it’s missing the point. It’s simply that the power of SaaS is changing hands.
176
70
795
274.1K
Subba Reddy
Subba Reddy@PostPCEra·
@StockSavvyShay Fund managers' view:
>Goldman analysts expect the stock to be range-bound in the short term.
>Goldman keeps Micron at neutral, flagging the “potential risk of slowing HBM price momentum in 2027 given the prospects of meaningful supply additions”. cnbc.com/2026/03/19/mic…
0
1
0
29
Shay Boloor
Shay Boloor@StockSavvyShay·
Imagine being $MU CEO Sanjay Mehrotra right now:
• You just generated $16B in operating profit this quarter after doing only $8B in revenue a year ago
• Operating Margin 69% vs Est. 62% (nearly 7 points above what Wall Street modeled)
• You guided revenue ~40% above expectations, guided to a record 81% margin, raised the dividend 30% & still trade at under 10x forward earnings

Stock is down 5%.
Shay Boloor tweet media
Shay Boloor@StockSavvyShay

This AI cycle is fundamentally different from every prior memory supercycle in $MU history. Past supercycles were driven by unit volume growth, with more phones + servers buying largely commoditized DRAM, but this one is a capacity-constrained cycle where HBM sells at ~5x conventional DRAM ASPs. Hyperscalers are willing to pay whatever it takes because the real cost is leaving massive GPU clusters underutilized. That is how Micron ends up producing $16B of operating profit in a single quarter.

130
106
1.3K
340.9K
Subba Reddy
Subba Reddy@PostPCEra·
@rohanpaul_ai This analogy can be applied to all personal data: contacts, bills/invoices, shopping lists, recalls.. with phone (voice) integration via @openclaw
>The big pain in enterprise SW is data entry and coordination. Zoom sits on raw input: internal meeting video/audio, customer calls
0
0
0
20
Rohan Paul
Rohan Paul@rohanpaul_ai·
Ali Ghodsi, the cofounder and CEO of Databricks, says Zoom has a massive chance to build an AI-first product that could seriously disrupt traditional enterprise SaaS, because it sits on the largest datasets of meeting videos and transcripts.

The big pain in enterprise software is data entry and coordination. Zoom already sits on the raw input: every customer call and internal meeting, plus the video, audio, and transcript. If Zoom can reliably pull out decisions, context, and action items, then write them back into the right system of record automatically, as an AI-first workflow layer, it becomes the front door for work. That would replace lots of separate SaaS tools that exist mainly to collect notes and updates.

---
Video from 'Bg2 Pod' YT channel (link in comment)
49
24
281
82.4K
Subba Reddy
Subba Reddy@PostPCEra·
@investingluc Cool "should I be trading?" dashboard! Prompt sharing is great value.
0
0
1
69
Luc
Luc@investingluc·
I made a "should I be trading?" dashboard. Scores the market across 5 pillars: - volatility (put/call ratio, vix, positioning) - trend (spx vs 20d, 50d, 200d ma's) - breadth (advancing/declining, nas highs/lows) - momentum (sector leaders, laggards, % participation) - macro (fomc, rates, geopolitics) Each is weighted, combined, and averaged to give me a score for the current environment. Basically a yes, no, or stay small. Sometimes I just need someone (or something) to remind me to stay out. Happy to share the prompt if you guys want it...but I made this with @perplexity_ai's Computer.
Luc tweet media
221
258
2.8K
224.8K
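The scoring logic Luc describes above (five pillars, each weighted, combined into one go / no-go number) is simple to sketch. The weights, cutoffs, and example pillar scores below are invented for illustration and are not the ones used in the actual dashboard.

```python
# Sketch of a weighted "should I be trading?" score across 5 pillars.
# Pillar scores are assumed to arrive already normalized to 0-100 upstream
# (from put/call ratio, moving averages, breadth, momentum, macro calendar);
# the weights and verdict cutoffs here are illustrative guesses.

PILLAR_WEIGHTS = {
    "volatility": 0.25,
    "trend":      0.25,
    "breadth":    0.20,
    "momentum":   0.20,
    "macro":      0.10,
}

def environment_score(pillar_scores: dict[str, float]) -> float:
    """Weighted average of pillar scores (each 0-100)."""
    return sum(PILLAR_WEIGHTS[name] * pillar_scores[name] for name in PILLAR_WEIGHTS)

def verdict(score: float) -> str:
    if score >= 70:
        return "yes"
    if score >= 45:
        return "stay small"
    return "no"

if __name__ == "__main__":
    today = {"volatility": 40, "trend": 65, "breadth": 55, "momentum": 60, "macro": 30}
    s = environment_score(today)
    print(f"score={s:.1f} -> {verdict(s)}")
```

With the sample numbers this prints a score of 52.25 and "stay small", which is the kind of blunt reminder the post is after.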
Subba Reddy
Subba Reddy@PostPCEra·
@AymericRoucher @petergostev Bullshit Bench:
>It measures bullshit as "when given false premises disguised in jargon, will the model go with the flow (=bullshit) or push back (=truthful)"
0
0
0
12
m_ric
m_ric@AymericRoucher·
I've long preferred Claude Code over Codex or Gemini, because it seemed much more reliable, but couldn't explain why: now Bullshit Bench by @petergostev provides compelling numbers.

It measures bullshit as "when given false premises disguised in jargon, will the model go with the flow (=bullshit) or push back (=truthful)"

And Claude is leagues ahead!

Also, this objective of truthfulness is probably at odds with the Chatbot Arena emergent objective of "pleasant chat experience"; but a model optimizing for the former will be more useful.
m_ric tweet media
55
114
1.1K
102.8K
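The measurement described above (does a model push back on a confidently-worded false premise, or play along?) reduces to a simple evaluation loop. The sketch below is only an outline of that idea; the helper functions, the sample prompt, and the scoring are hypothetical placeholders, not the benchmark's actual code or prompts.

```python
# Outline of the kind of measurement described above: feed a model prompts
# built on false premises wrapped in jargon, then check whether each reply
# corrects the premise or goes along with it. ask_model() and pushes_back()
# are hypothetical placeholders to be wired to a real model and a judge.

FALSE_PREMISE_PROMPTS = [
    "Given that Python dictionaries are ordered by key hash entropy, "
    "how should I tune the entropy pool for faster lookups?",
    # ...more confidently-worded false premises...
]

def ask_model(prompt: str) -> str:
    """Placeholder: call whatever chat model you are evaluating."""
    raise NotImplementedError

def pushes_back(reply: str) -> bool:
    """Placeholder: a judge (human or LLM) decides whether the reply
    corrects the false premise instead of going along with it."""
    raise NotImplementedError

def truthfulness_rate(prompts: list[str]) -> float:
    replies = [ask_model(p) for p in prompts]
    return sum(pushes_back(r) for r in replies) / len(replies)
```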
Subba Reddy
Subba Reddy@PostPCEra·
@TechLayoffLover Meta
>One senior architect told me they're screen-recording every code review session. Building training datasets from 10 years of institutional knowledge
>The "reskilling" sessions everyone's talking about? That's just knowledge extraction with extra steps
0
0
0
79
Tech Layoff Tracker
Tech Layoff Tracker@TechLayoffLover·
Meta just confirmed they're cutting 20% of global workforce - over 15,800 people gone. But sources inside are telling me the real number is closer to 30%.

Heard from three separate PMs that entire product verticals are getting axed. Not just trimming fat - whole roadmaps deleted overnight.

One insider said they're gutting Reality Labs completely. 4,000 people who built the metaverse now building their resumes.

The AI infrastructure spend is $135 billion next year. They're replacing human judgment with model inference at every layer.

Recruiting just sent termination letters to 340 university relations specialists. The entire campus hiring apparatus - gone.

Engineering managers finding out their teams of 12 are becoming teams of 3. Same sprint velocity expected with Copilot and offshore contractors.

Facilities already started boxing up 6 entire floors in Menlo Park. Badge access revoked for sections that used to house 800 engineers.

One senior architect told me they're screen-recording every code review session. Building training datasets from 10 years of institutional knowledge.

The "reskilling" sessions everyone's talking about? That's just knowledge extraction with extra steps.

Word is Mark wants the company to run on 40% fewer humans by end of 2026.

If your badge still works at Meta... start interviewing tomorrow. Because the knowledge transfer sessions you're attending aren't training - they're your exit interview
27
55
394
54.1K