pradeepviswav
@pradeepviswav
21.2K posts

Technical Manager at @hcltech
Chennai, India · Joined November 2007
1.7K Following · 1.1K Followers
pradeepviswav retweeted
tae kim @firstadopter
Here is a new @OpenAI statement regarding The Wall Street Journal article: "The idea that Sarah and Sam are not aligned on compute is ludicrous. She just raised $122 billion, so that we can continue to lean in on compute. The business is firing on all cylinders:
1. Consumer strength starting to show up in revenue as we turn on ads, and features like new image gen accelerating growth;
2. Enterprise business in the best place it has ever been thanks to new Microsoft deal opening up access to the full market — and ongoing Codex exponential surge;
3. Compute strategy as the great enabler — the moves we made (and got criticized for) to lock up massive supply has been proven right and are given us the ability to deliver a better product experience to our customers."
11 replies · 16 reposts · 263 likes · 36K views
pradeepviswav retweeted
Mario Rodriguez @mariorod1
Being the foundation for millions of developers means our bar must be higher for availability, reliability, and security. I’m sorry it’s been a rocky stretch at GitHub. We know we need to do better. Today we published an update on two recent incidents: one on April 23 involving merge queue behavior, and one on April 27 affecting pull requests, issues, projects, and search-backed experiences. We’re taking this seriously. We’re listening, and you have my commitment that we’ll communicate more frequently about the work underway to improve reliability and scale GitHub for what comes next. github.blog/news-insights/…
68 replies · 62 reposts · 612 likes · 346.4K views
pradeepviswav retweeted
Microsoft Research @MSFTResearch
Coming May 14 at Microsoft Research Forum: a new release and demo from MSR AI Frontiers. Plus new work on Agentic GitHub Workflows, Real-time agent verification, Energy-based fine-tuning, and Guiding the AI transition. Register now:
4 replies · 10 reposts · 83 likes · 393.8K views
pradeepviswav retweeted
Andy Jassy @ajassy
Very interesting announcement from OpenAI this morning. We’re excited to make OpenAI's models available directly to customers on Bedrock in the coming weeks, alongside the upcoming Stateful Runtime Environment. With this, builders will have even more choice to pick the right model for the right job. More details at our AWS event in San Francisco tomorrow.
119 replies · 314 reposts · 3.4K likes · 817.7K views
pradeepviswav retweeted
Forbes @Forbes
Elon Musk is the planet’s richest person by far, worth $839 billion as of Forbes’ annual World’s Billionaires list. He also ranks among the least philanthropic billionaires. Sure, Musk has transferred $8.5 billion of Tesla stock to his charitable foundations (1% of his net worth)—but nearly all of it is still sitting there idle. Only an estimated $500 million, or 0.06% of Musk’s vast fortune, has ever been disbursed to those in need. His lack of giving raises a question: What would our billionaires ranking look like if the world’s most generous people had never donated a dollar to charity? forbes.com/sites/mattduro…
978 replies · 309 reposts · 1.5K likes · 1.1M views
pradeepviswav retweeted
Dr.Medusa @ms_medusssa
At some point I am going to write a paper on how words have lost their meaning. How politicians have butchered language. How their dishonesty has robbed communication of all truth. Today, as Raghav Chaddha joins the BJP, a party he once called a "gundo ki party" (a party of goons), he has not only lost any credibility he ever had, but also any goodwill any of us felt towards him. What a waste of an education, a confident voice that could have done so much for the nation, a young politician who had time to change things. What a waste.
621 replies · 1.9K reposts · 6K likes · 272.7K views
Mads Kristensen @mkristensen
Friday question: What issue with Visual Studio 2026 did you run into this week that you would be happy to see fixed?
79 replies · 5 reposts · 27 likes · 8.4K views
pradeepviswav retweeted
Boyuan Chen @BoyuanChen0
We are committed to continually improving the GPT Image 2 model! I am actively fixing various issues based on community feedback. Just reply or DM me your GPT conversation! Features like 2K or 4K images are already available via the experimental API. Hope you enjoy the model!
241 replies · 54 reposts · 981 likes · 223.1K views
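For context on what "available via the experimental API" might look like in practice, here is a minimal sketch assuming the experimental endpoint mirrors the existing images.generate call in the OpenAI Python SDK. The model id ("gpt-image-2") and the 4K size value are guesses taken from the tweet, not documented parameters.

```python
# Hypothetical sketch: requesting a 4K image from the experimental API
# mentioned in the post. The model id and size value are assumptions
# based on the tweet, not documented OpenAI SDK parameters.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-2",          # assumed experimental model id
    prompt="A watercolor skyline of Chennai at dusk",
    size="4096x4096",             # assumed 4K option per the tweet
)

# gpt-image responses return base64-encoded image data
with open("skyline.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```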
pradeepviswav retweeted
Artificial Analysis @ArtificialAnlys
GPT-5.5 takes OpenAI back to the clear number one in AI. OpenAI's new model tops the Artificial Analysis Intelligence Index by 3 points, breaking a three-way tie with Anthropic and Google. OpenAI gave us pre-release access to test all five reasoning effort levels: xhigh, high, medium, low and non-reasoning.

➤ OpenAI tops five headline evaluations: GPT-5.5 (xhigh) leads Terminal-Bench Hard, GDPval-AA and our newly hosted APEX-Agents-AA. The model trails only other OpenAI models in CritPt and AA-LCR, and comes second to Gemini 3.1 Pro Preview on three additional evaluations. The largest gains are on AA-Omniscience (+14 pts), our knowledge and hallucination benchmark, and τ²-Bench Telecom (+7 pts), a customer service agent benchmark.

➤ 20% more expensive to run our Intelligence Index: Per-token pricing has doubled from GPT-5.4 to $5/$30 per 1M input/output tokens. However, a ~40% reduction in token use largely absorbs the hike, resulting in a net ~+20% cost to run our Intelligence Index.

➤ Effort is a clear ladder for balancing intelligence and cost: GPT-5.5 (medium) scores the same as Claude Opus 4.7 (max) on our Intelligence Index at one quarter of the cost (~$1,200 vs ~$4,800), although Gemini 3.1 Pro Preview scores the same at a cost of ~$900. GPT-5.5 (low) approximates Claude Opus 4.7 (Non-reasoning, high) on our Intelligence Index at half the cost to run (~$500 vs ~$1,000).

➤ Number one in GDPval-AA with an Elo of 1785: GPT-5.5 (xhigh) leads Claude Opus 4.7 (max) by ~30 pts and Gemini 3.1 Pro Preview by ~470 pts. GDPval-AA is Artificial Analysis' benchmark that leverages OpenAI's GDPval dataset to evaluate models on real-world, economically valuable tasks.

➤ Top AA-Omniscience accuracy, but trailing the frontier on hallucination: Our private AA-Omniscience benchmark rewards factual knowledge across diverse topics but punishes hallucination. GPT-5.5 (xhigh) has the highest accuracy at 57%, meaning the model can recall facts in the Omniscience corpus more effectively than any other model. However, it has a hallucination rate of 86%, vs Opus 4.7 (max) at 36% and Gemini 3.1 Pro Preview at 50%. This makes it more likely to answer a question when it does not 'know' the answer. The 14 pt gain in AA-Omniscience over GPT-5.4 (xhigh) was largely driven by knowledge, with a modest improvement in hallucination.

Congratulations to the team at @OpenAI and @sama on the launch
58 replies · 208 reposts · 1.7K likes · 261K views
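The pricing claim in the post above is easy to verify: doubling the per-token price while cutting token use by ~40% nets out to roughly 1.2x the old cost. A quick sketch, with illustrative token counts (the Index's actual token mix is not published in the post):

```python
# Back-of-the-envelope check of the "+20% net cost" claim:
# per-token price doubles, but ~40% fewer tokens are used.
# Token counts below are illustrative placeholders, not AA's data.
def index_cost(in_tokens: float, out_tokens: float,
               in_price: float, out_price: float) -> float:
    """Cost in dollars for a run, with prices in $ per 1M tokens."""
    return (in_tokens * in_price + out_tokens * out_price) / 1e6

old = index_cost(1e9, 1e9, 2.5, 15.0)      # GPT-5.4: $2.5/$15 (half of GPT-5.5)
new = index_cost(0.6e9, 0.6e9, 5.0, 30.0)  # GPT-5.5: 2x price, 40% fewer tokens

print(f"net change: {new / old - 1:+.0%}")  # -> +20%, matching the post
```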
pradeepviswav retweeted
OpenAI @OpenAI
GPT-5.5 delivers this step up in intelligence without compromising on speed. GPT-5.5 matches GPT-5.4 per-token latency in real-world serving, while performing better across nearly every evaluation we measured. It also uses significantly fewer tokens to complete the same Codex tasks, making it more efficient as well as more capable.
145 replies · 373 reposts · 5K likes · 1.7M views
pradeepviswav @pradeepviswav
3/3 📉 Or is there a simpler explanation? Perhaps Google and AWS simply have such massive excess capacity that they’re willing to rent it out at lower margins just to keep the data centers humming. Is it a lack of internal growth, or just a scale advantage? #AI #AIInfrastructure
0 replies · 0 reposts · 0 likes · 20 views
pradeepviswav @pradeepviswav
2/3 🏗️ This raises a big question: Does this signal that internal demand at Google and AWS isn't scaling as expected? Contrast this with Microsoft, which has reportedly slowed external infra deals because internal demand for GitHub Copilot and M365 Copilot is skyrocketing.
1 reply · 0 reposts · 0 likes · 65 views
pradeepviswav @pradeepviswav
1/3 🧵 The AI compute landscape is seeing a fascinating divergence. While OpenAI is reportedly struggling to secure enough compute to keep pace with product demand, AWS and Google are doubling down on massive infrastructure deals with Anthropic and others.
1 reply · 0 reposts · 0 likes · 90 views
pradeepviswav retweeted
Patrick Moorhead @PatrickMoorhead
Two new TPUs, one for training and one for inference.

TPU 8t is the training box: 9,600 chips per superpod, 2+ PB of shared HBM, 121 exaflops (2.8x the prior generation), 2x better perf/watt vs. the prior generation, native FP4 in the MXUs, and Axion Arm hosts. With Pathways and JAX, a single logical training cluster now scales past one million TPUs.

TPU 8i targets inference and reinforcement learning with up to 80% better perf/dollar for low-latency inference and RL vs. the prior TPU generation, SRAM tripled to 384MB, HBM up 50% to 288GB, and a new Collectives Acceleration Engine.

The more interesting move is the network. Google's Boardfly topology was co-designed with DeepMind to optimize for latency, not bandwidth. That is exactly the right bet for agents, where minimum time-to-response is the customer experience. Workload specialization is the hyperscaler playbook, and Google hinted that more than two SKUs per year is plausible going forward.

An underappreciated metric is goodput, not peak FLOPs. At 10,000-chip scale, fail-stop failures and silent data corruption quietly eat training throughput. Google claims more than 97% goodput at that scale.

Google is also introducing NVIDIA VR200 with its Virgo network for the largest clusters. More later. $GOOG $AVGO $NVDA
5 replies · 37 reposts · 242 likes · 68.5K views
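Goodput, as used in the post above, is the fraction of cluster time that actually advances training once failures and recovery are subtracted. A toy model shows why 97%+ at 10,000 chips is a strong claim; every number below is illustrative, not Google's:

```python
# Toy goodput model: at large scale, even rare per-chip failures trigger
# frequent cluster-wide restarts, and each restart loses work back to the
# last checkpoint. All parameters are illustrative assumptions.
def goodput(chips: int, mtbf_hours_per_chip: float,
            checkpoint_interval_h: float, restart_overhead_h: float) -> float:
    # Cluster-level failure rate: any one chip failing stalls the job.
    failures_per_hour = chips / mtbf_hours_per_chip
    # Expected loss per failure: on average half a checkpoint interval
    # of work, plus the fixed restart/recovery overhead.
    lost_per_failure = checkpoint_interval_h / 2 + restart_overhead_h
    lost_fraction = failures_per_hour * lost_per_failure
    return max(0.0, 1.0 - lost_fraction)

# 10,000 chips, ~50k-hour per-chip MTBF, 10-min checkpoints, 3-min restarts
print(f"{goodput(10_000, 50_000, 10 / 60, 3 / 60):.1%}")  # -> 97.3%
```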
pradeepviswav retweeted
SpaceX @SpaceX
SpaceX AI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI. The combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million-H100-equivalent Colossus training supercomputer will allow us to build the world's most useful models. Cursor has also given SpaceX the right to acquire Cursor later this year for $60 billion, or to pay $10 billion for our work together.
2.4K replies · 5.1K reposts · 38.4K likes · 20.5M views
pradeepviswav retweeted
Google AI Developers @googleaidevs
Introducing one of our biggest updates to the Gemini Deep Research Agent, now available via the Interactions API! Trigger complex, long-horizon research workflows with arbitrary MCP support, get rich visualizations, plan before you execute, and more with these two configurations:
1️⃣ Deep Research (deep-research-preview-04-2026)
2️⃣ Deep Research Max (deep-research-max-preview-04-2026)
22 replies · 52 reposts · 384 likes · 60.8K views
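The post names the two model ids but not the client-side call shape. Purely as a sketch, if the Interactions API follows the google-genai client's generate_content pattern, selecting a configuration might look like this; everything except the two model ids from the post is an assumption:

```python
# Hypothetical sketch only: the post gives two model ids but no client
# surface. This borrows the google-genai generate_content pattern as a
# stand-in; the actual Interactions API may differ.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Pick the base config for routine research; Max for long-horizon work.
model_id = "deep-research-preview-04-2026"
# model_id = "deep-research-max-preview-04-2026"

response = client.models.generate_content(
    model=model_id,
    contents="Survey recent work on agent verification and summarize open problems.",
)
print(response.text)
```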