Luca Ecari
@lecaritweets

4.5K posts

I post about my 2 places of the heart: Italy & Switzerland. (Moving to Switzerland: Your 90-Day Survival Plan https://t.co/qItlqqZ1oL)

Switzerland · Joined November 2010
880 Following · 322 Followers
Pinned Tweet
Luca Ecari @lecaritweets
Educate kids of all ages on math, physics and tech while having fun? Possible with #KiwiCo ! 😀 kiwico.com/Refer?i=LucaE5
Walchwil, Switzerland 🇨🇭
4
4
21
0
Luca Ecari retweeted
Paroles d'auteurs @Paroles_auteurs
"Moins ils ont de talent, plus ils ont d’orgueil, de vanité, d’arrogance. Tous ces fous trouvent cependant d’autres fous qui les applaudissent." Erasme, Éloge de la folie
48
1.2K
3K
59.6K
Bobby A @Bobby_1111888
I’ve spent the past few days studying many charts, and I’ve come to the conclusion that this may be the biggest bear trap I’ve ever witnessed in my two decades doing this. In my opinion, this will be remembered as the Covid-2020 moment of this cycle, minus the pandemic piece. In short, everything is lined up, primed, and no one wants to touch crypto with a ten-foot pole, as everyone believes this is the start of a multi-year bear market with much lower prices expected nearly across the board. I will go against the herd here and say that 2026, leading into 2027, will surprise to the upside. That doesn't mean that certain assets can't/won’t go a bit lower, and I've written about those possibilities plenty for members. That just means we’re a lot closer to the bottom than most think. I'm formulating my ideas and will write about what I see going forward in this upcoming report. Cheers ⚒️
49
35
599
51.6K
Luca Ecari retweeted
Gilberto Trombetta @Gitro77
- Hey uncle, what are you watching? - Fantozzi. - What's Fantozzi? - Are you serious? - Yes. - One of the greatest comedies in the history of... never mind, sit down. - But... - Sit down. Now. ... - Well? Did you like it? - It's wonderful! - I told you. It's a great, merciless fresco that captures the... - It's such an optimistic story. - Excuse me? - Come on, uncle. It's a really sweet film. Even a bit saccharine at times, sure, but I can see why it's appealing. - Appealing how? - Full of hope... - What the hell are you talking about? It's the tragicomic story par excellence. - Fantozzi has a permanent job. - Huh? - I said, Fantozzi has a permanent job. - Well, yes. But you saw it, he does a grotesque, alienating job... - So do you. - But in an impersonal company that exploits him shamelessly. - So do you. Except Fantozzi probably only finished middle school. You have a degree and a master's. - Yeah, fine, but mine is a great big family. - How much do you get paid? - I... - How much does Fantozzi get paid? - Well, off the top of my head... converting from lire... with inflation... he'd make two thousand five hundred euros a month. - Even three thousand. - Maybe... - And then the car, that apartment. It must be a hundred, a hundred and twenty square meters in a historic building downtown. Does Fantozzi pay rent? - Not that I know of. - So he owns it. And Pina doesn't work? - No. - So he supports the whole family on one salary? - Yes. - Does Fantozzi retire? - It's the title of the sixth film. - And you? - ... - Uncle, are you crying? - That's not the point, Paolo Villaggio invented a comic mask that... - And he's full of hobbies. He plays tennis with his friends, billiards, he was on the national ski team, he goes camping. How long has it been since you went camping? - I never have time. - And he takes vacations. - Yes, but with the cloud... - Yes, but he still takes them. I'm sure he takes at least 3 weeks. Maybe a month. And they never fire Fantozzi. Even when he screws up. They don't demand performance targets from him, they don't make him sign ironclad waivers or punishing contracts. They don't renew him year after year in endless precarity. He doesn't burn out, he punches the clock and goes home to his family. - Fine, but it's a grotesque family. - But it's a family. Do you have a family? - I have a basil plant. - Uncle, why do you torment yourself with these films about an unreachable well-being? No wonder you're stressed. - You haven't understood a thing. Fantozzi is a merciless social critique of... of... - Of? - I don't know. - Uncle? - Yes? - Do you think one day we'll manage to be like Fantozzi again? - Let's hope so. [from the FB page "Non è successo niente"]
92
594
2.8K
113.7K
Luca Ecari retweeted
Ming @tslaming
BREAKING 🚨 TESLA HAS PATENTED A "MATHEMATICAL CHEAT CODE" THAT FORCES CHEAP 8-BIT CHIPS TO RUN ELITE 32-BIT AI MODELS AND REWRITES THE RULES OF SILICON 🐳

How does a Tesla remember a stop sign it hasn't seen for 30 seconds, or a humanoid robot maintain perfect balance while carrying a heavy, shifting box? It comes down to Rotary Positional Encoding (RoPE)—the "GPS of the mind" that allows AI to understand its place in space and time by assigning a unique rotational angle to every piece of data. Usually, this math is a hardware killer. To keep these angles from "drifting" into chaos, you need power-hungry, high-heat 32-bit processors (chips that calculate with extreme decimal-point precision). But Tesla has engineered a way to cheat the laws of physics.

Freshly revealed in patent US20260017019A1, Tesla's "MIXED-PRECISION BRIDGE" is a mathematical translator that allows inexpensive, power-sipping 8-bit hardware (which usually handles only simple, rounded numbers) to perform elite 32-bit rotations without dropping a single coordinate. This breakthrough is the secret "Silicon Bridge" that gives Optimus and FSD high-end intelligence without sacrificing a mile of range or melting their internal circuits. It effectively turns Tesla's efficient "budget" hardware into a high-fidelity supercomputer on wheels.

📉 The problem: the high cost of precision
In the world of self-driving cars and humanoid robots, we are constantly fighting a war between precision and power. Modern AI models like Transformers rely on RoPE to help the AI understand where objects are in a sequence or a 3D space. The catch is that these trigonometric functions (sines and cosines) usually require 32-bit floating-point math—imagine trying to calculate a flight path using 10 decimal places of accuracy. If you try to cram that into the standard 8-bit multipliers (INT8) used for speed (which is like rounding everything to the nearest whole number), the errors pile up fast. The car effectively goes blind to fine details. For a robot like Optimus, a tiny math error means losing its balance or miscalculating the distance to a fragile object. To bridge this gap without simply adding more expensive chips, Tesla had to fundamentally rethink how data travels through the silicon.

🛠️ Tesla's solution: the logarithmic shortcut & pre-computation
Tesla's engineers realized they didn't need to force the whole pipeline to be high-precision. Instead, they designed the Mixed-Precision Bridge. They take the crucial angles used for positioning and convert them into logarithms. Because the "dynamic range" of a logarithm is much smaller than the original number, it's much easier to move that data through narrow 8-bit hardware without losing the "soul" of the information. It's a bit like dehydrating food for transport; it takes up less space and is easier to handle, but you can perfectly reconstitute it later. Crucially, the patent reveals that the system doesn't calculate these logarithms on the fly every time. Instead, it retrieves pre-computed logarithmic values from a specialized "cheat sheet" (look-up storage) to save cycles. By keeping the data in this "dehydrated" log-state, Tesla ensures that the precision doesn't "leak out" during the journey from the memory chips to the actual compute cores. However, keeping data in a log-state is only half the battle; the chip eventually needs to understand the real numbers again.
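For reference, here is a minimal sketch of the baseline operation the thread keeps coming back to: RoPE rotates pairs of features by angles that grow with the token position, so relative position becomes a relative rotation. This is the standard textbook formulation in plain float math; the shapes, the base constant of 10000, and the function name are illustrative assumptions, and the patent's log-domain transport and 8-bit packing are not modeled here.

```python
import numpy as np

def rope_rotate(x, positions, base=10000.0):
    """Apply rotary positional encoding to x of shape (seq_len, dim).

    Each adjacent feature pair (x[2i], x[2i+1]) is rotated by an angle
    proportional to the token position, so relative position is encoded
    as a relative rotation. Shapes and the base constant are illustrative.
    """
    seq_len, dim = x.shape
    # One frequency per feature pair: theta_i = base ** (-2i / dim)
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # (dim/2,)
    angles = np.outer(positions, inv_freq)             # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x_even * cos - x_odd * sin          # standard 2D rotation
    out[:, 1::2] = x_even * sin + x_odd * cos
    return out

# Example: 4 tokens with 8-dimensional embeddings
x = np.random.randn(4, 8).astype(np.float32)
rotated = rope_rotate(x, positions=np.arange(4))
```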
🏗️ The recovery architecture: rotation matrices & Horner's method
When the 8-bit multiplier (the Multiplier-Accumulator or MAC) finishes its job, the data is still in a "dehydrated" logarithmic state. To bring it back to a real angle theta without a massive computational cost, Tesla's high-precision ALU uses a Taylor-series expansion optimized via Horner's Method. This is a classic computer science trick where a complex equation (like an exponent) is broken down into a simple chain of multiplications and additions. By running this in three specific stages—multiplying by constants like 1/3 and 1/2 at each step—Tesla can approximate the exact value of an angle with 32-bit accuracy while using a fraction of the clock cycles. Once the angle is recovered, the high-precision logic generates a Rotation Matrix (a grid of sine and cosine values) that locks the data points into their correct 3D coordinates. This computational efficiency is impressive, but Tesla didn't stop at just calculating faster; they also found a way to double the "highway speed" of the data itself.

🧩 The data concatenation: 8-bit inputs to 16-bit outputs
One of the most clever hardware "hacks" detailed in the patent is how Tesla manages to move 16-bit precision through an 8-bit bus. They use the MAC as a high-speed interleaver—effectively a "traffic cop" that merges two lanes of data. It takes two 8-bit values (say, an X-coordinate and the first half of a logarithm) and multiplies one of them by a power of two to "left-shift" it. This effectively glues them together into a single 16-bit word in the output register, allowing the low-precision domain to act as a high-speed packer for the high-precision ALU to "unpack". This trick effectively doubles the bandwidth of the existing wiring on the chip without requiring a physical hardware redesign. With this high-speed data highway in place, the system can finally tackle one of the biggest challenges in autonomous AI: object permanence.
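Both tricks above are easy to sketch in plain Python: a three-stage Horner evaluation of a short Taylor series (shown here for exp, using only the constants 1/3 and 1/2 the thread mentions) and packing two 8-bit values into one 16-bit word by multiplying one of them by a power of two. These are generic illustrations of the named techniques under those assumptions, not the patent's actual circuit.

```python
def exp_horner3(x: float) -> float:
    """Third-order Taylor series for exp(x) evaluated in Horner form:
    exp(x) ~ 1 + x * (1 + (x/2) * (1 + x/3))
    Three multiply-add stages; the only non-trivial constants are 1/3 and 1/2.
    Accurate for small |x|; a real pipeline would range-reduce first.
    """
    r = 1.0 + x * (1.0 / 3.0)        # stage 1
    r = 1.0 + (x * (1.0 / 2.0)) * r  # stage 2
    r = 1.0 + x * r                  # stage 3
    return r

def pack_8bit_pair(hi: int, lo: int) -> int:
    """Glue two 8-bit values into one 16-bit word.
    Multiplying `hi` by 2**8 is the "left-shift" the thread describes;
    the receiver can unpack the pair with divmod(word, 256).
    """
    assert 0 <= hi < 256 and 0 <= lo < 256
    return hi * 256 + lo             # equivalently (hi << 8) | lo

print(exp_horner3(0.1))                          # ~1.10517 (math.exp(0.1) = 1.10517...)
print(divmod(pack_8bit_pair(0x12, 0x34), 256))   # (18, 52): the original pair
```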
🧠 Long-context memory: remembering the stop sign
The ultimate goal of this high-precision math is to solve the "forgetting" problem. In previous versions of FSD, a car might see a stop sign, but if a truck blocked its view for 5 seconds, it might "forget" the sign existed. Tesla uses a "long-context" window, allowing the AI to look back at data from 30 seconds ago or more. However, as the "distance" in time increases, standard positional math usually drifts. Tesla's mixed-precision pipeline fixes this by maintaining high positional resolution, ensuring the AI knows exactly where that occluded stop sign is even after a long period of movement. The RoPE rotations are so precise that the sign stays "pinned" to its 3D coordinate in the car's mental map. But remembering 30 seconds of high-fidelity video creates a massive storage bottleneck.

⚡ KV-cache optimization & paged attention: scaling memory
To make these 30-second memories usable in real-time without running out of RAM, Tesla optimizes the KV-cache (Key-Value Cache)—the AI's "working memory" scratchpad. Tesla's hardware handles this by storing the logarithm of the positions directly in the cache. This reduces the memory footprint by 50% or more, allowing Tesla to store twice as much "history" (up to 128k tokens) in the same amount of RAM. Furthermore, Tesla utilizes Paged Attention—a trick borrowed from operating systems. Instead of reserving one massive, continuous block of memory (which is inefficient), it breaks memory into small "pages". This allows the AI5 chip to dynamically allocate space only where it's needed, drastically increasing the number of objects (pedestrians, cars, signs) the car can track simultaneously without the system lagging. Yet, even with infinite storage efficiency, the AI's attention mechanism has a flaw: it tends to crash when pushed beyond its training limits.

🔒 Pipeline integrity: the "read-only" safety lock
A subtle but critical detail in the patent is how Tesla protects this data. Once the transformed coordinates are generated, they are stored in a specific location that is read-accessible to downstream components but not write-accessible by them. Furthermore, the high-precision ALU itself cannot read back from this location. This one-way "airlock" prevents the system from accidentally overwriting its own past memories or creating feedback loops that could cause the AI to hallucinate. It ensures that the "truth" of the car's position flows in only one direction: forward, toward the decision-making engine.

🌀 Attention sinks: preventing memory overflow
Even with a lean KV-cache, a robot operating for hours can't remember everything forever. Tesla manages this using Attention Sink tokens. Transformers tend to dump "excess" attention math onto the very first tokens of a sequence, so if Tesla simply used a "sliding window" that deleted old memories, the AI would lose these "sink" tokens and its brain would effectively crash. Tesla's hardware is designed to "pin" these attention sinks permanently in the KV-cache. By keeping these mathematical anchors stable while the rest of the memory window slides forward, Tesla prevents the robot's neural network from destabilizing during long, multi-hour work shifts. While attention sinks stabilize the "memory", the "compute" side has its own inefficiencies—specifically, wasting power on empty space.

🌫️ Sparse tensors: cutting the compute fat
Tesla's custom silicon doesn't just cheat with precision; it cheats with volume. In the real world, most of what a car or robot sees is "empty" space (like clear sky). In AI math, these are represented as "zeros" in a Sparse Tensor (a data structure that ignores empty space). Standard chips waste power multiplying all those zeros, but Tesla's newest architecture incorporates Native Sparse Acceleration. The hardware uses a "coordinate-based" system where it only stores the non-zero values and their specific locations. The chip can then skip the "dead space" entirely and focus only on the data that matters—the actual cars and obstacles. This hardware-level sparsity support effectively doubles the throughput of the AI5 chip while significantly lowering the energy consumed per operation.

🔊 The audio edge: Log-Sum-Exp for sirens
Tesla's "Silicon Bridge" isn't just for vision—it's also why your Tesla is becoming a world-class listener. To navigate safely, an autonomous vehicle needs to identify emergency sirens and the sound of nearby collisions using a Log-Mel Spectrogram approach (a visual "heat map" of sound frequencies). The patent details a specific Log-Sum-Exp (LSE) approximation technique to handle this. By staying in the logarithm domain, the system can handle the massive "dynamic range" of sound—from a faint hum to a piercing fire truck—using only 8-bit hardware without "clipping" the loud sounds or losing the quiet ones. This allows the car to "hear" and categorize environmental sounds with 32-bit clarity.
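The Log-Sum-Exp trick named in the audio section has a standard numerically stable form, and staying in the log domain is exactly what keeps a huge dynamic range representable in few bits. A sketch of the textbook identity (not the patent's specific 8-bit approximation):

```python
import numpy as np

def log_sum_exp(log_vals):
    """Numerically stable log(sum(exp(x))) using the max trick:
    LSE(x) = max(x) + log(sum(exp(x - max(x)))).
    Working in the log domain keeps quiet hums and loud sirens on the
    same compact scale without overflow or underflow.
    """
    m = np.max(log_vals)
    return m + np.log(np.sum(np.exp(log_vals - m)))

# Log-energies spanning many orders of magnitude stay well-behaved in log space
log_energies = np.array([-30.0, -5.0, 2.0])
print(log_sum_exp(log_energies))   # ~2.0009, dominated by the loudest bin
```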
Of course, all this high-tech hardware is only as good as the brain that runs on it, which is why Tesla's training process is just as specialized.

🎓 Quantization-aware training: pre-adapting the brain
Finally, to make sure this "Mixed-Precision Bridge" works flawlessly, Tesla uses Quantization-Aware Training (QAT). Instead of training the AI in a perfect 32-bit world and then "shrinking" it later—which typically causes the AI to become "drunk" and inaccurate—Tesla trains the model from day one to expect 8-bit limitations. They simulate the rounding errors and "noise" of the hardware during the training phase, creating a neural network that is "pre-hardened". It's like a pilot training in a flight simulator that perfectly mimics a storm; when they actually hit the real weather in the real world, the AI doesn't "drift" or become inaccurate because it was born in that environment. This extreme optimization opens the door to running Tesla's AI on devices far smaller than a car.
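Quantization-aware training is usually simulated with "fake quantization": values are rounded to the int8 grid in the forward pass and immediately de-quantized, so the network learns to tolerate the rounding noise. A minimal sketch under common assumptions (symmetric, per-tensor max-abs scaling); this is a generic recipe, not Tesla's.

```python
import numpy as np

def fake_quantize_int8(w: np.ndarray) -> np.ndarray:
    """Simulate symmetric int8 quantization in the forward pass (QAT).

    Values are scaled, rounded to one of 255 integer levels, clipped,
    and immediately de-quantized, so training 'feels' the 8-bit rounding
    noise while the stored weights remain in float. The per-tensor
    max-abs scale is an illustrative assumption.
    """
    scale = np.max(np.abs(w)) / 127.0 + 1e-12    # avoid divide-by-zero
    q = np.clip(np.round(w / scale), -127, 127)  # snap to the int8 grid
    return q * scale                             # back to float ("fake" quant)

w = np.random.randn(4, 4).astype(np.float32)
w_q = fake_quantize_int8(w)   # use w_q in the forward pass during training
```

In a full training loop the rounding step is typically paired with a straight-through estimator so gradients pass through it unchanged.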
🚀 The strategic roadmap: from AI5 to ubiquitous edge AI
This patent is not just a "nice-to-have" optimization; it is the mathematical prerequisite for Tesla's entire hardware roadmap. Without this "Mixed-Precision Bridge", the thermal and power equations for next-generation autonomy simply do not work. It starts by unlocking the AI5 chip, which is projected to be 40x more powerful than current hardware. Raw power is useless if memory bandwidth acts as a bottleneck. By compressing 32-bit rotational data into dense, log-space 8-bit packets, this patent effectively quadruples the effective bandwidth, allowing the chip to utilize its massive matrix-compute arrays without stalling. This efficiency is critical for the chip's "half-reticle" design, which reduces silicon size to maximize manufacturing yield while maintaining supercomputer-level throughput.

This efficiency is even more critical for Tesla Optimus, where it is a matter of operational survival. The robot runs on a 2.3 kWh battery (roughly 1/30th of a Model 3 pack). Standard 32-bit GPU compute would drain this capacity in under 4 hours, consuming 500W+ just for "thinking". By offloading complex RoPE math to this hybrid logic, Tesla slashes the compute power budget to under 100W. This solves the "thermal wall", ensuring the robot can maintain balance and awareness for a full 8-hour work shift without overheating.

This stability directly enables the shift to End-to-End Neural Networks. The "Rotation Matrix" correction described in the patent prevents the mathematical "drift" that usually plagues long-context tracking. This ensures that a stop sign seen 30 seconds ago remains "pinned" to its correct 3D coordinate in the World Model, rather than floating away due to rounding errors.

Finally, baking this math into the silicon secures Tesla's strategic independence. It decouples the company from NVIDIA's CUDA ecosystem and enables a Dual-Foundry Strategy with both Samsung and TSMC to mitigate supply chain risks. This creates a deliberate "oversupply" of compute, potentially turning its idle fleet and unsold chips into a distributed inference cloud that rivals AWS in efficiency.

But the roadmap goes further. Because this mixed-precision architecture slashes power consumption by orders of magnitude, it creates a blueprint for "Tesla AI on everything". It opens the door to porting world-class vision models to hardware as small as a smart home hub or smartphone. This would allow tiny, cool-running chips to calculate 3D spatial positioning with zero latency—bringing supercomputer-level intelligence to the edge without ever sending private data to a massive cloud server.
950
1.8K
10.3K
4.8M
Luca Ecari retweeted
Scott Adams @ScottAdamsSays
A Final Message From Scott Adams
13.1K
32K
191.6K
43M
Luca Ecari retweeted
Matthew Prince 🌥 @eastdakota
Yesterday a quasi-judicial body in Italy fined @Cloudflare $17 million for failing to go along with their scheme to censor the Internet. The scheme, which even the EU has called concerning, required us within a mere 30 minutes of notification to fully censor from the Internet any sites a shadowy cabal of European media elites deemed against their interests. No judicial oversight. No due process. No appeal. No transparency. It required us to not just remove customers, but also censor our 1.1.1.1 DNS resolver meaning it risked blacking out any site on the Internet. And it required us not just to censor the content in Italy but globally. In other words, Italy insists a shadowy, European media cabal should be able to dictate what is and is not allowed online.

That, of course, is DISGUSTING and even before yesterday's fine we had multiple legal challenges pending against the underlying scheme. We, of course, will now fight the unjust fine. Not just because it's wrong for us but because it is wrong for democratic values.

In addition, we are considering the following actions: 1) discontinuing the millions of dollars in pro bono cyber security services we are providing the upcoming Milano-Cortina Olympics; 2) discontinuing Cloudflare's Free cyber security services for any Italy-based users; 3) removing all servers from Italian cities; and 4) terminating all plans to build an Italian Cloudflare office or make any investments in the country. Play stupid games, win stupid prizes.

While there are things I would handle differently than the current U.S. administration, I appreciate @JDVance taking a leadership role in recognizing this type of regulation is a fundamental unfair trade issue that also threatens democratic values. And in this case @ElonMusk is right: #FreeSpeech is critical and under attack from an out-of-touch cabal of very disturbed European policy makers.

I will be in DC first thing next week to discuss this with U.S. administration officials and I'll be meeting with the IOC in Lausanne shortly after to outline the risk to the Olympic Games if @Cloudflare withdraws our cyber security protection. In the meantime, we remain happy to discuss this with Italian government officials who, so far, have been unwilling to engage beyond issuing fines.

We believe Italy, like all countries, has a right to regulate the content on networks inside its borders. But they must do so following the Rule of Law and principles of Due Process. And Italy certainly has no right to regulate what is and is not allowed on the Internet in the United States, the United Kingdom, Canada, China, Brazil, India or anywhere outside its borders. THIS IS AN IMPORTANT FIGHT AND WE WILL WIN!!!
2.4K
7.3K
34.4K
9.5M
Luca Ecari @lecaritweets
@Bobby_1111888 What about ALGO and HBAR, Bobby? You haven't talked about them in a while 🖖
0
0
0
299
Bobby A @Bobby_1111888
“Bobby, how can you be sounding the alarm on XRP after being so bullish for so many years?” Answer. The same monthly chart setup that once made me bullish is now showing weakness and flashing warning signals. I'm supposed to ignore that because $589? Ok. Charts > Narratives. Always.
46
5
165
23.1K
Bobby A @Bobby_1111888
"The Best Offense Is a Good Defense" has been published on the website for members. This is one of the more important reports that I have published, given where we currently find ourselves in the market cycle. The Bitcoin video analysis is lengthy, but it was required to articulate what I currently see in the higher timeframe charts. Cheers fam! Enjoy. ☕️ ⚒️ bobbyaxe.com/post/the-best-…
11
5
117
12.8K
Luca Ecari @lecaritweets
I went to Japan for the first time three years ago, for 10 days. Absolutely amazing. Visited Tokyo, Yokohama, Kyoto, Nara, Osaka and a few of the surrounding areas. We were on the move every day, as there is so much to see. My personal recommendations for must-dos on a shorter timeframe: Tokyo is the absolute mega city; you need to see it and immerse yourself a bit in the megalopolis. Kyoto is the absolute traditional, stunning experience. Nara is out of this world, with the sacred deer freely roaming the streets. Have cash with you at all times. Be ready to use your phone extensively for live translations. Eat local (i.e. avoid chains if you can). Try the street food whenever you have the chance (I didn't like everything, but I loved the contemporary flavors and lining up with the locals). Visit temples and gardens: I cannot overstate how magnificent and awe-inspiring they are. Download and top up prepaid public-transportation apps as relevant AHEAD of your travel. Use trains as much as possible: punctual, reliable and covering everywhere (ride the Shinkansen as well, the bullet train). Visit the green tea plantations around Kyoto: they are not only beautiful; I have been converted to Japanese green tea varieties ever since. Plan your visit: there is so much to see and do that you might get lost. One thing I regret not having time for was visiting an onsen (hot springs with bathing facilities) at least once: if you manage to go, let me know how it was. Japan is the only place I long to revisit, and I want to be in Kyoto when the cherry trees blossom. Enjoy this incredible place on Earth.
1
0
1
768
CrediBULL Crypto @CredibleCrypto
I think I need a vacation. Tips for first time in Japan? Maybe 7-10 days. Best cities to visit, things to do, general tips?
322
10
762
124K
Luca Ecari @lecaritweets
@Bobby_1111888 How much would you consider to be overweight ETH vs XRP, Bobby?
0
0
1
330
Bobby A @Bobby_1111888
Our job as analysts is to move with the market as quickly as possible while remaining level-headed, unemotional, and sticking only to the charts. That said, I have rotated significant profits from XRP to ETH. I'm still allocated to XRP but am more overweight in ETH now. ⚒️
24
5
205
11.9K