Aakash Jhaveri🧑🏻‍💻ΛJ

22.8K posts


@jhaverinator

Events & Comms @Plugn_Drive + Journalist @AndroidPolice🕵🏻 Tech and EVs 😍 Trying to be net positive for the world🌏✨

Toronto, Ontario · Joined October 2014
2.2K Following · 52.7K Followers
Pinned Tweet
Aakash Jhaveri🧑🏻‍💻ΛJ
Went off-roading with a few Cybertrucks! Impressed but not surprised by how easily these vehicles managed extremely challenging conditions 📐 Also, my first time shooting vehicles 😁📸
Toronto, Ontario 🇨🇦
Aakash Jhaveri🧑🏻‍💻ΛJ
@aakashgupta @rhensing Yep, same thoughts; perfect conclusion. Uber knows its business model is at risk, and is willing to try everything in hopes that at least one of those bets ends up being successful. This sounds like a bad strategy even if Tesla did not exist.
Aakash Gupta@aakashgupta·
Rivian lost $13.8 billion in three years. Today they announced a robotaxi.

The partner rotation is the business model at this point. Amazon: $1.3B equity plus a 100K van order. VW: $5.8B joint venture. US DOE: $6.6B loan. Uber: $1.25B robotaxi deal. Every 12 to 18 months, a new institution writes a check large enough to fund the next chapter.

The $1.25B headline is misleading. Uber commits $300 million now. The remaining $950 million is gated behind “autonomous milestones” by unspecified dates through 2031. That’s not a conviction bet on Rivian’s autonomy. That’s a call option priced at $300 million with five years of expiry.

Look at Uber’s last 18 months. Waymo rides on the platform. Zoox in Las Vegas this year, LA in 2027. 20,000 Lucid vehicles with Nuro’s autonomy stack. Now Rivian. Every autonomy architecture covered: camera-only, lidar-first, purpose-built pods, OEM conversions. Uber is building a portfolio of bets so that they win regardless of which stack reaches scale first.

Rivian thinks they signed a partnership. Uber signed a hedge.

The R2 that this entire deal depends on launches this spring at $57,990. The $45,000 version? Late 2027. The version with lidar and the Gen 3 chip that actually enables robotaxi-grade perception? Late 2026 at the earliest. The robotaxi fleet in San Francisco and Miami? 2028. The Georgia factory for scale production? Still under construction.

Waymo is running fully driverless rides across San Francisco, Phoenix, and LA right now. No milestones to hit. No factory to build. No vehicle to finish designing.

Rivian needs the product, the factory, the software, and the autonomous driving system to all work simultaneously three years from now.

Uber’s smartest move in the last decade was selling its own self-driving unit for a $4 billion loss. Its second smartest is buying call options on everyone else’s.
RJ Scaringe@RJScaringe

I’m excited to announce a partnership with @Uber. As part of this, Uber plans to invest up to $1.25 billion in Rivian and deploy up to 50,000 R2 robotaxis. This partnership accelerates our path to Level 4 autonomy and supports our goal of building one of the safest autonomous platforms in the world—across both shared and personally owned vehicles. The combination of Rivian’s rapidly growing data flywheel, our in-house RAP1 inference platform (800 TOPS), and our multi-modal perception stack provides a powerful foundation to scale autonomy quickly and responsibly over the next couple of years.

Rebellionaire@Rebellionaire·
Uber just committed $1.25 billion to Rivian for 50,000 robotaxis across 25 cities by 2031. Worth understanding what that money is actually buying.

Rivian still has to build the R2, finish constructing its Georgia factory, achieve Level 4 autonomy, and hit a series of undisclosed milestones before most of that investment even shows up. The timeline runs to 2031.

Tesla's Cybercab is already past most of those hurdles and deploying now. Uber has now bet on Waymo, Lucid, and Rivian. Tesla isn't on that list, and doesn't need to be.
Rivian@Rivian

A fleet of R2 Robotaxis is coming exclusively to @Uber. ⚡🌿 Today, we announced a partnership to help both companies accelerate their autonomous vehicle plans across 25 cities in the US, Canada and Europe by the end of 2031. rivn.co/uber

Bitfunded@bitfunded·
The reply that gets 0 likes receives a 100K Bitfunded challenge. Winner announced in 24 hours 🤝
Avi Chawla@_avichawla·
Big release from Kimi! They just released a new way to handle residual connections in Transformers.

In a standard Transformer, every sub-layer (attention or MLP) computes an output and adds it back to the input via a residual connection. If you consider this across 40+ layers, the hidden state at any layer is just the equal-weighted sum of all previous layer outputs. Every layer contributes with weight=1, so every layer gets equal importance.

This creates a problem called PreNorm dilution: as the hidden state accumulates layer after layer, its magnitude grows linearly with depth, and any new layer's contribution gets progressively buried in the already-massive residual. Deeper layers are then forced to produce increasingly large outputs just to have any influence, which destabilizes training.

Here's what the Kimi team observed and did: RNNs compress all prior token information into a single state across time, leading to problems with handling long-range dependencies. And residual connections compress all prior layer information into a single state across depth. Transformers solved the first problem by replacing recurrence with attention, applied along the sequence dimension. Now they introduce Attention Residuals, which applies a similar idea to depth.

Instead of adding all previous layer outputs with a fixed weight of 1, each layer now uses softmax attention to selectively decide how much weight each previous layer's output should receive. So each layer gets a single learned query vector, and it attends over all previous layer outputs to compute a weighted combination. The weights are input-dependent, so different tokens can retrieve different layer representations based on what's actually useful. This is Full Attention Residuals (shown in the second diagram below).

But here's the practical problem with this idea: Full AttnRes requires keeping all layer outputs in memory and communicating them across pipeline stages during distributed training.

To solve this, they introduce Block Attention Residuals (shown in the third diagram below). The idea is to group consecutive layers into roughly 8 blocks. Within each block, layer outputs are summed via standard residuals, but across blocks, the attention mechanism selectively combines block-level representations. This drops memory from O(Ld) to O(Nd), where N is the number of blocks. Layers within the current block can also attend to the partial sum of what's been computed so far inside that block, so local information flow isn't lost. And the raw token embedding is always available as a separate source, which means any layer in the network can selectively reach back to the original input.

Results from the paper:
- Block AttnRes matches the loss of a baseline LLM trained with 1.25x more compute.
- Inference latency overhead is less than 2%, making it a practical drop-in replacement.
- On a 48B parameter Kimi Linear model (3B activated) trained on 1.4T tokens, it improved every benchmark they tested: GPQA-Diamond +7.5, Math +3.6, HumanEval +3.1, MMLU +1.1.

The residual connection has mostly been unchanged since ResNet in 2015. This might be the first modification that's both theoretically motivated and practically deployable at scale with negligible overhead.

More details in the post below by Kimi👇
____
Find me → @_avichawla
Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs.
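The depth-wise attention described in the thread can be sketched in a few lines. This is a toy illustration based only on the thread's description, not Kimi's actual implementation: the query vector here is hand-picked rather than learned, and layer outputs are plain Python lists. The point is the mechanism — one query scores every previous layer's output, and a softmax-weighted combination replaces the fixed equal-weight residual sum.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attn_residual(layer_outputs, query):
    """Replace the fixed weight-1 residual sum over depth with attention:
    a single query vector scores each previous layer's output, and the
    hidden state becomes the softmax-weighted combination of them."""
    d = len(query)
    scores = [dot(h, query) / math.sqrt(d) for h in layer_outputs]
    weights = softmax(scores)  # input-dependent, sums to 1
    return [sum(w * h[i] for w, h in zip(weights, layer_outputs))
            for i in range(d)]

# Toy comparison with the standard PreNorm residual (equal-weight sum):
outs = [[1.0, 0.0], [0.0, 1.0], [3.0, 3.0]]  # 3 prior layer outputs, d=2
q = [1.0, 1.0]                               # hypothetical query vector
h = attn_residual(outs, q)                   # convex combination of outs
plain = [sum(col) for col in zip(*outs)]     # equal-weight sum: [4.0, 4.0]
```

Note the contrast this makes concrete: the standard residual's magnitude grows with the number of layers (here the sum reaches 4.0 per coordinate), while the attention version stays a convex combination bounded by the largest layer output — which is exactly the dilution argument in the thread.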
Kimi.ai@Kimi_Moonshot

Introducing 𝑨𝒕𝒕𝒆𝒏𝒕𝒊𝒐𝒏 𝑹𝒆𝒔𝒊𝒅𝒖𝒂𝒍𝒔: Rethinking depth-wise aggregation.

Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers.

🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth.
🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale.
🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead.
🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains.

🔗 Full report: github.com/MoonshotAI/Att…

Aakash Jhaveri🧑🏻‍💻ΛJ retweeted
Caleb@caleb_friesen·
Posted this on IG/YT ~24 hours ago. 350 comments so far. Have never seen such a polarised comments section. Mostly folks in India, but many NRIs too. Half agree with the message of the video. The other half strongly disagree, many sharing detailed explanations to defend their POV.
Gail Alfar@gailalfaratx·
Tesla was so kind to offer people overnight demo drives at SXSW. Can you imagine driving to SXSW in a gas car? And then going home in an autonomous Tesla? ❤️
Aakash Jhaveri🧑🏻‍💻ΛJ
I still don't get why Uber would choose the Gravity for its robotaxis. It is one of the best cars right now, but if the point of autonomy is to save money (and pass the savings on), wouldn't something cheaper and smaller make more sense? At 20K units, Uber will probably end up buying 50% of all Gravitys sold.
Gee Bennett@Mr_Lucid_UK·
Not quite sure how #Uber announcing that they are buying 20k mid-size vehicles along with 20k Gravitys isn't a big news story for the #lcid stock. Huge things are coming with autonomy for #lucidmotors $lcid
Alex MacGregor@alexmacgregor__·
The 11 best cities in Asia for 2026, according to Time Out:
1. Shanghai
2. Bangkok
3. Seoul
4. Tokyo
5. Hong Kong
6. Singapore
7. Hanoi
8. Beijing
9. Chiang Mai
10. Ho Chi Minh City
11. Osaka

They got this a bit wrong IMO. I’d swap Shanghai and Bangkok around; lived in the former and now the latter, and BKK is more foreigner-friendly. Would drop Seoul below Tokyo and HK. Singapore should be lower. No idea why Beijing or Hanoi are on this list; both are seats of power but not much else. Astonished Taipei isn’t on the list either.
Michael Fisher@Captain2Phones·
I really think iKKO is onto something here. The Mind One is a tiny Android phone that feels more like an old MP3 player in the hand. It runs surprisingly well on low-end silicon; the camera is clever; and the whole thing looks fantastic. Sadly, it also commits the typical tech sin of 2026: instead of trying to be a great phone, it tries to be an "AI Phone" – with predictable results. Review goes live at 6p ET on the MrMobile YouTube channel; join the live chat!
Aganaz@aganaz2020·
@googlemaps Pay-to-win model?
Google Maps@googlemaps·
Finally, you can ask Maps to “Find me a public toilet nearby where I don’t need to wait in line to buy something." Welcome to the future.
Tesla Australia & New Zealand
Model Y L now available in AU & NZ
– 3 rows, 6 seats
– 5 secs 0-100 km/h
– 201 km/h top speed
– 681 km (WLTP) range

Highlights
– Extended wheelbase for more headroom, legroom & comfort
– Row 1: Heated/ventilated seats w/ powered thigh cushion
– Row 2: Heated/ventilated captain seats w/ powered armrests & one‑touch fold
– Row 3: Heated seats w/ power recline, one‑touch fold & vented climate control
– New 19" Machina 2.0 Wheels + acoustic glass & updated suspension to minimise road noise
– Electronic damping + adaptive suspension for a smooth, stable ride
– Extended roofline and aero design = improved airflow, increased efficiency & more range
– Added window & beltline airbags for second row and side airbags for third row
– 16" first-row & 8" second-row touchscreens + 19-speaker immersive Tesla Audio
– FSD Supervised & integrated @grok AI

And use your spare charge to power appliances, e-bikes & power tools
Sawyer Merritt@SawyerMerritt·
BREAKING: The larger Model Y L is now officially available to order in Australia and New Zealand. Deliveries start May 2026.

Starting price:
• Premium AWD: $74,900 AUD ($53,000 USD)

Couple of unique options:
• Cosmic Silver paint: $2,600 AUD ($1,800 USD)
• Zen Grey interior: $1,500 AUD ($1,000 USD)

Order: tesla.com/en_au/modely/d…
Sawyer Merritt@SawyerMerritt·
NEWS: Tesla has officially announced that the larger Model Y L is coming to Australia and New Zealand.
Aakash Jhaveri🧑🏻‍💻ΛJ retweeted
Praveen Kumar Verma@Alacritic_Super·
Here are oddly specific life tips for someone who's 23:
1. Learn to cook 3 meals extremely well. It will save money, impress people, and make rough weeks easier.
2. Start a “things I learned the hard way” document. Add to it every time life teaches you something painful.
3. When buying things, ask: Will I still want this in 30 days? If yes, buy it.
4. Keep one friend from a completely different field. Their thinking will challenge your assumptions.
5. Always keep one skill you’re quietly improving that nobody knows about yet.
6. When traveling somewhere new, wake up early once and walk the city before 7 AM. You’ll see the real place.
7. If a problem bothers you for more than two weeks, write it down and design a system around it.
8. Save screenshots of compliments people give you. On bad days, you’ll forget who you are.
9. When learning something new, try to teach it within 24 hours. Your brain locks it in faster.
10. Once a year, spend a day completely offline and think about where your life is actually going.
11. Keep a future ideas notebook. Many random thoughts become opportunities years later.
12. The older you get, the more valuable people who reply to messages quickly become.
13. If you ever feel stuck, change one environment variable: city, gym, schedule, or social circle.
14. Pay attention to what people complain about repeatedly. That’s where many businesses start.
15. At 23, don’t worry about being impressive. Focus on becoming dangerously curious.