Avant-garde

776 posts


@TAvantgardeT

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" #BTC

Joined December 2017
283 Following · 28 Followers
Pinned Tweet
Avant-garde@TAvantgardeT·
"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" #BTC
1
0
0
0
Avant-garde@TAvantgardeT·
@Younglee88 You have to explain the background: the husband earns more money than she could imagine, while the wife just shows up and enjoys herself.
0
0
1
3.9K
영리_Younglee@Younglee88·
A couple reportedly at odds over the frequency of marital relations: twice a month vs. 60 times a month. Which would you choose?
162
109
1.1K
497.3K
Avant-garde retweeted
El Trumpista@ElTrumpista·
New York's communist mayor Mamdani, who was elected on promises of a pile of free stuff and mega tax hikes, announces that he has already bankrupted the city: "We are in a budget crisis. We have no revenue. The deficit is enormous." He has also asked Trump for help.
4.7K
17.7K
66.5K
5.3M
Avant-garde@TAvantgardeT·
@toss_911 Hire STEM grads and give them humanities work, and they can do it without a problem. Hire humanities grads and give them STEM work, and they can't do the job.
0
0
7
2.1K
크크(kuku)@toss_911·
2025 SK Hynix hiring results. Total applicants: about 30,735. Total hires: 627. Acceptance rate: 2.04%. Humanities/social science majors hired: 4. STEM majors hired: 623. Men: 440, women: 187. Bachelor's: 520, master's: 107. This is somewhat old news, but only 4 humanities hires is harsh...
14
101
405
143.1K
Avant-garde@TAvantgardeT·
@ikany @Tslachan ??? Why would Tesla Korea be the one to do that?? It was blocked by regulation, not by technology???
2
0
1
66
ikany@ikany·
@Tslachan It's disappointing that Tesla Korea didn't notify owners about FSD before filing that report: 'Unauthorized activation is illegal. We are working to get it officially released as fast as possible,' and so on. They have never once explained where things stand, and that's exactly why Tesla Korea's reputation suffers.
1
0
10
1.4K
Tsla Chan@Tslachan·
$TSLA 🇰🇷 [Ministry of Land, Infrastructure and Transport press release: activating Tesla's FSD feature without authorization is illegal] • Tesla Korea identified a vulnerability in its vehicle software and reported the automotive cybersecurity threat • There is concern that domestic owners may attempt to activate FSD without authorization using unofficial external devices or publicly released source code • A vehicle with FSD activated without authorization is classified as failing safety standards under Article 29 of the Motor Vehicle Management Act, and may not be driven • Violations are punishable by up to 2 years in prison or a fine of up to 20 million won
113
129
392
82.3K
Avant-garde retweeted
Mark Gadala-Maria@markgadala·
So sick. People are now using AI to resurrect mysterious ancient artifacts.
190
2.4K
14K
1.8M
64비트사령부⚡️@64bitcoinkr·
@reset_2021 Wouldn't it be? If you're asking whether FSD comes included when you buy the car: FSD is currently subscription-based.
1
0
2
1.1K
64비트사령부⚡️@64bitcoinkr·
Breaking⚡️: FSD approved for China-made Teslas (Model Y, 3). According to Mocha Hanyong, China-made Model Y and Model 3 will get FSD by the end of this year at the earliest, or next year at the latest. Y and 3 owners just need to hold on a little longer. I'm craving a Tesla....
66
86
423
40.5K
Avant-garde retweeted
Tesla Korea@tesla_korea·
Full Self-Driving (Supervised) in Korea 🇰🇷 The cumulative driving distance by Korean Tesla owners has astonishingly surpassed 8 million km in just about 100 days! After surpassing 1 million km in just one month domestically, these remarkable records continue to unfold. *Full Self-Driving (Supervised) domestic cumulative driving distance over 8,000,000 km (as of March 3, based on internal data)
125
221
831
88.8K
Avant-garde@TAvantgardeT·
@coreacdy Sounds like he watched a YouTube video over the weekend, felt proud of himself, and posted a diary entry lol
0
0
0
51
정동영@coreacdy·
'Physical AI' is a global buzzword. Yet many people find the term 'physical AI' difficult. So here is a proposal: what if we called 'physical AI' 'AI that uses its body' instead? Isn't 'AI that uses its body', which captures the essence of this technology, far easier and clearer than the technical term 'physical AI'?
568
135
632
389.2K
Avant-garde retweeted
Elon Musk@elonmusk·
Grok 4.20 is BASED. The only AI that doesn’t equivocate when asked if America is on stolen land. The others are weak sauce.
22.1K
21.1K
161.5K
37M
Avant-garde@TAvantgardeT·
@bounty_atm It's not like actual bitcoin is being deposited. Think for a second, people.
0
0
1
641
𝑨𝑻𝑴 ⚡️🦅@bounty_atm·
A shift in perspective. Amateur: "Oh no, what happens to Bithumb, this is bad" / "They'll sort it out and pay the bitcoin back somehow" / "Anyway, when is the price going up..?" Pro: "Bithumb might actually shut down?" / "Buy Dunamu stock with everything 🔥"
10
6
63
26.9K
𝑨𝑻𝑴 ⚡️🦅@bounty_atm·
One click on Bithumb and 200 billion won landed in an account. Pick who is more screwed: 1⃣ the person in charge of the event system 2⃣ the customer who market-sold the entire amount
115
26
328
254.1K
Avant-garde retweeted
Tesla@Tesla·
As we shift to an autonomous future, Model S & X production will wind down next quarter. If you’d like to own one of them, now’s a good time to place your order. Tesla wouldn’t be what it is today without Model S & X and their (early) owners – thank you for your support over the last decade
2.2K
2K
16.6K
14.7M
Avant-garde retweeted
Tesla Optimus@Tesla_Optimus·
Model S & X will live on through me
1.3K
2.3K
23.8K
2.8M
김티거@tiggerkim86·
It's a simple idea, but if you build it in properly when constructing road infrastructure, it would likely come out well ahead once you factor in snow-removal and accident costs. Source: the timewith.tech Instagram account
22
8
60
13.6K
Turtle🐢@Freedom_73X·
⚡️Footage of the recovery effort for the drive holding 7,500 bitcoin (BTC). The video shows James Howells trying to recover the USB drive containing the 7,500 bitcoin (now worth about $675 million) that he accidentally threw away in 2013.
55
48
327
196.9K
Avant-garde@TAvantgardeT·
@yeoulabba Get a no-annual-fee Samsung Card issued and go for it.
0
0
1
30
Avant-garde retweeted
Elon Musk@elonmusk·
Necessity is the mother of invention. The @Tesla_AI team is epicly hardcore. No one can match Tesla’s real-world AI.
Ming@tslaming

BREAKING 🚨 TESLA HAS PATENTED A "MATHEMATICAL CHEAT CODE" THAT FORCES CHEAP 8-BIT CHIPS TO RUN ELITE 32-BIT AI MODELS AND REWRITES THE RULES OF SILICON 🐳

How does a Tesla remember a stop sign it hasn’t seen for 30 seconds, or a humanoid robot maintain perfect balance while carrying a heavy, shifting box? It comes down to Rotary Positional Encoding (RoPE)—the "GPS of the mind" that allows AI to understand its place in space and time by assigning a unique rotational angle to every piece of data.

Usually, this math is a hardware killer. To keep these angles from "drifting" into chaos, you need power-hungry, high-heat 32-bit processors (chips that calculate with extreme decimal-point precision). But Tesla has engineered a way to cheat the laws of physics.

Freshly revealed in patent US20260017019A1, Tesla’s "MIXED-PRECISION BRIDGE" is a mathematical translator that allows inexpensive, power-sipping 8-bit hardware (which usually handles only simple, rounded numbers) to perform elite 32-bit rotations without dropping a single coordinate. This breakthrough is the secret "Silicon Bridge" that gives Optimus and FSD high-end intelligence without sacrificing a mile of range or melting their internal circuits. It effectively turns Tesla’s efficient "budget" hardware into a high-fidelity supercomputer on wheels.

📉 The problem: the high cost of precision

In the world of self-driving cars and humanoid robots, we are constantly fighting a war between precision and power. Modern AI models like Transformers rely on RoPE to help the AI understand where objects are in a sequence or a 3D space. The catch is that these trigonometric functions (sines and cosines) usually require 32-bit floating-point math—imagine trying to calculate a flight path using 10 decimal places of accuracy. If you try to cram that into the standard 8-bit multipliers (INT8) used for speed (which is like rounding everything to the nearest whole number), the errors pile up fast.
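The RoPE rotation the thread keeps referring to is public, textbook math, independent of whatever the patent adds on top. A minimal sketch of the standard formulation, with the conventional `base=10000.0` frequency and illustrative dimension counts:

```python
import math

def rope_rotate(pair, position, dim_index, n_dims, base=10000.0):
    # Standard RoPE: each feature pair is rotated by an angle that
    # depends on the token's position and which dimension pair it
    # belongs to: theta = position / base**(2*dim_index / n_dims).
    x, y = pair
    theta = position / (base ** (2 * dim_index / n_dims))
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Apply the 2x2 rotation matrix; length is preserved, so nothing
    # is lost, only "where in time/space" gets encoded into the pair.
    return (x * cos_t - y * sin_t, x * sin_t + y * cos_t)

# Position 0 means angle 0: the pair passes through unchanged.
x, y = rope_rotate((1.0, 0.0), position=0, dim_index=0, n_dims=64)
assert (x, y) == (1.0, 0.0)
```

Because relative position becomes a pure rotation between query and key vectors, the "angles drifting into chaos" the thread describes is exactly the danger of evaluating these sines and cosines with too little precision.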
The car effectively goes blind to fine details. For a robot like Optimus, a tiny math error means losing its balance or miscalculating the distance to a fragile object. To bridge this gap without simply adding more expensive chips, Tesla had to fundamentally rethink how data travels through the silicon.

🛠️ Tesla's solution: the logarithmic shortcut & pre-computation

Tesla’s engineers realized they didn't need to force the whole pipeline to be high-precision. Instead, they designed the Mixed-Precision Bridge. They take the crucial angles used for positioning and convert them into logarithms. Because the "dynamic range" of a logarithm is much smaller than the original number, it’s much easier to move that data through narrow 8-bit hardware without losing the "soul" of the information. It’s a bit like dehydrating food for transport; it takes up less space and is easier to handle, but you can perfectly reconstitute it later.

Crucially, the patent reveals that the system doesn't calculate these logarithms on the fly every time. Instead, it retrieves pre-computed logarithmic values from a specialized "cheat sheet" (look-up storage) to save cycles. By keeping the data in this "dehydrated" log-state, Tesla ensures that the precision doesn't "leak out" during the journey from the memory chips to the actual compute cores. However, keeping data in a log-state is only half the battle; the chip eventually needs to understand the real numbers again.

🏗️ The recovery architecture: rotation matrices & Horner’s method

When the 8-bit multiplier (the Multiplier-Accumulator or MAC) finishes its job, the data is still in a "dehydrated" logarithmic state. To bring it back to a real angle theta without a massive computational cost, Tesla’s high-precision ALU uses a Taylor-series expansion optimized via Horner’s Method. This is a classic computer science trick where a complex equation (like an exponent) is broken down into a simple chain of multiplications and additions.
By running this in three specific stages—multiplying by constants like 1/3 and 1/2 at each step—Tesla can approximate the exact value of an angle with 32-bit accuracy while using a fraction of the clock cycles. Once the angle is recovered, the high-precision logic generates a Rotation Matrix (a grid of sine and cosine values) that locks the data points into their correct 3D coordinates. This computational efficiency is impressive, but Tesla didn't stop at just calculating faster; they also found a way to double the "highway speed" of the data itself.

🧩 The data concatenation: 8-bit inputs to 16-bit outputs

One of the most clever hardware "hacks" detailed in the patent is how Tesla manages to move 16-bit precision through an 8-bit bus. They use the MAC as a high-speed interleaver—effectively a "traffic cop" that merges two lanes of data. It takes two 8-bit values (say, an X-coordinate and the first half of a logarithm) and multiplies one of them by a power of two to "left-shift" it. This effectively glues them together into a single 16-bit word in the output register, allowing the low-precision domain to act as a high-speed packer for the high-precision ALU to "unpack". This trick effectively doubles the bandwidth of the existing wiring on the chip without requiring a physical hardware redesign.

With this high-speed data highway in place, the system can finally tackle one of the biggest challenges in autonomous AI: object permanence.

🧠 Long-context memory: remembering the stop sign

The ultimate goal of this high-precision math is to solve the "forgetting" problem. In previous versions of FSD, a car might see a stop sign, but if a truck blocked its view for 5 seconds, it might "forget" the sign existed. Tesla uses a "long-context" window, allowing the AI to look back at data from 30 seconds ago or more. However, as the "distance" in time increases, standard positional math usually drifts.
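The Horner evaluation described above is standard numerics. The patent's exact polynomial and stage count are not public, so this sketch just follows the write-up's mention of a three-stage scheme with the constants 1/3 and 1/2, which corresponds to a third-order Taylor expansion of the exponential:

```python
import math

def exp_horner3(x):
    # exp(x) ~ 1 + x + x^2/2 + x^3/6, evaluated in Horner form as
    # three multiply-add stages using the constants 1/3 and 1/2:
    #   1 + x * (1 + (x / 2) * (1 + x / 3))
    return 1.0 + x * (1.0 + (x / 2.0) * (1.0 + x / 3.0))

# Close to the true exponential for the small corrections this
# recovery step would handle:
for x in (0.0, 0.1, -0.25):
    assert abs(exp_horner3(x) - math.exp(x)) < 1e-3
```

The point of the nesting is that each stage is one multiply and one add, which maps directly onto multiply-accumulate hardware, instead of computing the powers of x separately.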
Tesla's mixed-precision pipeline fixes this by maintaining high positional resolution, ensuring the AI knows exactly where that occluded stop sign is even after a long period of movement. The RoPE rotations are so precise that the sign stays "pinned" to its 3D coordinate in the car's mental map. But remembering 30 seconds of high-fidelity video creates a massive storage bottleneck.

⚡ KV-cache optimization & paged attention: scaling memory

To make these 30-second memories usable in real-time without running out of RAM, Tesla optimizes the KV-cache (Key-Value Cache)—the AI's "working memory" scratchpad. Tesla’s hardware handles this by storing the logarithm of the positions directly in the cache. This reduces the memory footprint by 50% or more, allowing Tesla to store twice as much "history" (up to 128k tokens) in the same amount of RAM.

Furthermore, Tesla utilizes Paged Attention—a trick borrowed from operating systems. Instead of reserving one massive, continuous block of memory (which is inefficient), it breaks memory into small "pages". This allows the AI5 chip to dynamically allocate space only where it's needed, drastically increasing the number of objects (pedestrians, cars, signs) the car can track simultaneously without the system lagging. Yet, even with infinite storage efficiency, the AI's attention mechanism has a flaw: it tends to crash when pushed beyond its training limits.

🔒 Pipeline integrity: the "read-only" safety lock

A subtle but critical detail in the patent is how Tesla protects this data. Once the transformed coordinates are generated, they are stored in a specific location that is read-accessible to downstream components but not write-accessible by them. Furthermore, the high-precision ALU itself cannot read back from this location. This one-way "airlock" prevents the system from accidentally overwriting its own past memories or creating feedback loops that could cause the AI to hallucinate.
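Paged attention as described above is the idea popularized by vLLM: a page table maps each tracked sequence to small fixed-size blocks, allocated only as they fill. A toy sketch, with an invented page size and API (nothing here is from the patent):

```python
class PagedKVCache:
    """Toy paged KV-cache: memory is split into fixed-size pages that
    are handed out on demand, so each tracked sequence only consumes
    the pages it actually fills. Page size and layout are invented
    for illustration."""

    def __init__(self, page_size=4):
        self.page_size = page_size
        self.pages = {}        # page_id -> list of (key, value) entries
        self.page_tables = {}  # sequence_id -> ordered list of page_ids
        self.next_page = 0

    def append(self, seq, key, value):
        table = self.page_tables.setdefault(seq, [])
        # Allocate a fresh page only when the last one is full.
        if not table or len(self.pages[table[-1]]) == self.page_size:
            self.pages[self.next_page] = []
            table.append(self.next_page)
            self.next_page += 1
        self.pages[table[-1]].append((key, value))

    def tokens(self, seq):
        # Walk the page table to reassemble the sequence in order.
        return [kv for pid in self.page_tables.get(seq, [])
                for kv in self.pages[pid]]

cache = PagedKVCache(page_size=4)
for t in range(6):
    cache.append("car_0", key=t, value=t * 10)
assert len(cache.page_tables["car_0"]) == 2  # 6 tokens -> 2 pages of 4
assert len(cache.tokens("car_0")) == 6
```

The win is that a sequence tracking only a few tokens never reserves a worst-case contiguous buffer, so many more objects fit in the same RAM.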
It ensures that the "truth" of the car's position flows in only one direction: forward, toward the decision-making engine.

🌀 Attention sinks: preventing memory overflow

Even with a lean KV-cache, a robot operating for hours can't remember everything forever. Tesla manages this using Attention Sink tokens. Transformers tend to dump "excess" attention math onto the very first tokens of a sequence, so if Tesla simply used a "sliding window" that deleted old memories, the AI would lose these "sink" tokens and its brain would effectively crash. Tesla's hardware is designed to "pin" these attention sinks permanently in the KV-cache. By keeping these mathematical anchors stable while the rest of the memory window slides forward, Tesla prevents the robot’s neural network from destabilizing during long, multi-hour work shifts.

While attention sinks stabilize the "memory", the "compute" side has its own inefficiencies—specifically, wasting power on empty space.

🌫️ Sparse tensors: cutting the compute fat

Tesla’s custom silicon doesn't just cheat with precision; it cheats with volume. In the real world, most of what a car or robot sees is "empty" space (like clear sky). In AI math, these are represented as "zeros" in a Sparse Tensor (a data structure that ignores empty space). Standard chips waste power multiplying all those zeros, but Tesla’s newest architecture incorporates Native Sparse Acceleration. The hardware uses a "coordinate-based" system where it only stores the non-zero values and their specific locations. The chip can then skip the "dead space" entirely and focus only on the data that matters—the actual cars and obstacles. This hardware-level sparsity support effectively doubles the throughput of the AI5 chip while significantly lowering the energy consumed per operation.

🔊 The audio edge: Log-Sum-Exp for sirens

Tesla’s "Silicon Bridge" isn't just for vision—it's also why your Tesla is becoming a world-class listener.
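The coordinate-based sparsity just described can be sketched in a few lines: store only (index, value) pairs for the non-zero entries, and the arithmetic simply never visits the zeros. Indices and values here are illustrative:

```python
def sparse_dot(dense, sparse_coo):
    # Coordinate (COO) format keeps only (index, value) pairs for the
    # non-zero entries, so the loop skips the zeros entirely.
    return sum(dense[i] * v for i, v in sparse_coo)

# A mostly-empty "scene": only 3 of 6 entries carry signal.
signal = [3.0, 0.0, 0.0, 2.0, 0.0, 1.0]
coo = [(i, v) for i, v in enumerate(signal) if v != 0.0]
weights = [1.0, 5.0, 5.0, 2.0, 5.0, 4.0]
assert sparse_dot(weights, coo) == 11.0  # 3*1 + 2*2 + 1*4
assert len(coo) == 3                     # only 3 multiplies performed
```

In hardware the same idea means the multiply units are only fed non-zero operands, which is where the throughput and energy savings come from.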
To navigate safely, an autonomous vehicle needs to identify emergency sirens and the sound of nearby collisions using a Log-Mel Spectrogram approach (a visual "heat map" of sound frequencies). The patent details a specific Log-Sum-Exp (LSE) approximation technique to handle this. By staying in the logarithm domain, the system can handle the massive "dynamic range" of sound—from a faint hum to a piercing fire truck—using only 8-bit hardware without "clipping" the loud sounds or losing the quiet ones. This allows the car to "hear" and categorize environmental sounds with 32-bit clarity. Of course, all this high-tech hardware is only as good as the brain that runs on it, which is why Tesla's training process is just as specialized.

🎓 Quantization-aware training: pre-adapting the brain

Finally, to make sure this "Mixed-Precision Bridge" works flawlessly, Tesla uses Quantization-Aware Training (QAT). Instead of training the AI in a perfect 32-bit world and then "shrinking" it later—which typically causes the AI to become "drunk" and inaccurate—Tesla trains the model from day one to expect 8-bit limitations. They simulate the rounding errors and "noise" of the hardware during the training phase, creating a neural network that is "pre-hardened". It’s like a pilot training in a flight simulator that perfectly mimics a storm; when they actually hit the real weather in the real world, the AI doesn’t "drift" or become inaccurate because it was born in that environment. This extreme optimization opens the door to running Tesla's AI on devices far smaller than a car.

🚀 The strategic roadmap: from AI5 to ubiquitous edge AI

This patent is not just a "nice-to-have" optimization; it is the mathematical prerequisite for Tesla’s entire hardware roadmap. Without this "Mixed-Precision Bridge", the thermal and power equations for next-generation autonomy simply do not work. It starts by unlocking the AI5 chip, which is projected to be 40x more powerful than current hardware.
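The "simulate rounding errors during training" step described above is commonly implemented as fake quantization: quantize to the int8 grid and immediately dequantize inside the forward pass. A sketch with an arbitrary illustrative scale (how Tesla parameterizes this is not public):

```python
def fake_quant(x, scale=0.1):
    # Quantize to the int8 grid, then dequantize back to float.
    # Training through this round trip exposes the network to the
    # same rounding noise the 8-bit hardware produces at inference.
    q = max(-128, min(127, round(x / scale)))
    return q * scale

# Values snap to the nearest representable step and saturate at the
# int8 limits (scale=0.1 is an arbitrary illustrative choice):
assert abs(fake_quant(0.234) - 0.2) < 1e-9
assert abs(fake_quant(99.0) - 12.7) < 1e-9
```

In a real QAT setup this op sits on weights and activations during training (with a straight-through gradient), so the network learns parameters that remain accurate after the grid snapping.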
Raw power is useless if memory bandwidth acts as a bottleneck. By compressing 32-bit rotational data into dense, log-space 8-bit packets, this patent effectively quadruples the effective bandwidth, allowing the chip to utilize its massive matrix-compute arrays without stalling. This efficiency is critical for the chip's "half-reticle" design, which reduces silicon size to maximize manufacturing yield while maintaining supercomputer-level throughput.

This efficiency is even more critical for Tesla Optimus, where it is a matter of operational survival. The robot runs on a 2.3 kWh battery (roughly 1/30th of a Model 3 pack). Standard 32-bit GPU compute would drain this capacity in under 4 hours, consuming 500W+ just for "thinking". By offloading complex RoPE math to this hybrid logic, Tesla slashes the compute power budget to under 100W. This solves the "thermal wall", ensuring the robot can maintain balance and awareness for a full 8-hour work shift without overheating.

This stability directly enables the shift to End-to-End Neural Networks. The "Rotation Matrix" correction described in the patent prevents the mathematical "drift" that usually plagues long-context tracking. This ensures that a stop sign seen 30 seconds ago remains "pinned" to its correct 3D coordinate in the World Model, rather than floating away due to rounding errors.

Finally, baking this math into the silicon secures Tesla's strategic independence. It decouples the company from NVIDIA’s CUDA ecosystem and enables a Dual-Foundry Strategy with both Samsung and TSMC to mitigate supply chain risks. This creates a deliberate "oversupply" of compute, potentially turning its idle fleet and unsold chips into a distributed inference cloud that rivals AWS in efficiency.

But the roadmap goes further. Because this mixed-precision architecture slashes power consumption by orders of magnitude, it creates a blueprint for "Tesla AI on everything".
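The battery arithmetic above is straightforward to check as a back-of-the-envelope sketch. The `overhead_w` knob (motors, sensors) is an invented parameter for illustration, not a figure from the thread:

```python
def runtime_hours(battery_kwh, compute_w, overhead_w=0.0):
    # Battery energy in Wh divided by total draw in W gives hours.
    # overhead_w (motors, sensors, etc.) is an illustrative knob.
    return battery_kwh * 1000.0 / (compute_w + overhead_w)

# The thread's numbers: a 2.3 kWh pack at 500 W of compute lasts
# about 4.6 h of compute alone (i.e. "under 4 hours" once anything
# else draws power), while 100 W stretches it to 23 h, leaving
# plenty of headroom for an 8-hour shift.
assert abs(runtime_hours(2.3, 500.0) - 4.6) < 1e-9
assert abs(runtime_hours(2.3, 100.0) - 23.0) < 1e-9
```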
It opens the door to porting world-class vision models to hardware as small as a smart home hub or smartphone. This would allow tiny, cool-running chips to calculate 3D spatial positioning with zero latency—bringing supercomputer-level intelligence to the edge without ever sending private data to a massive cloud server.
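For reference, the Log-Sum-Exp trick mentioned in the audio section is standard numerics. This is the stable full-precision form (not the patent's 8-bit approximation): subtract the maximum before exponentiating so the sum cannot overflow, then add it back in log space.

```python
import math

def logsumexp(xs):
    # Stable log(sum(exp(x))): factor out the max so no individual
    # exponential can overflow, then restore it in log space.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# A faint hum and a piercing siren span a huge dynamic range;
# math.exp(800.0) alone raises OverflowError, but the stable
# form handles the pair without trouble:
assert logsumexp([-800.0, 800.0]) == 800.0
```

Staying in the log domain like this is what lets a fixed, narrow numeric range cover both very quiet and very loud inputs without clipping either.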

2K
4.4K
38.2K
11M
Avant-garde retweeted
Tesla Korea@tesla_korea·
No compromise on safety. Faithful to quality and fundamentals. The new daily life brought by a rational choice. Model 3 Standard RWD starting from 41,990,000 KRW (before electric vehicle subsidy)
64
127
602
65.1K
⟑𝕣⫧𝜿ℊ⁵ ✧ 캣이죠@ArtKG5·
AAAAGH I'm losing my mind 🤬🤬🤬🤬🤬 The Model 3 at 35 million won, are you serious?? So I'm the madman who paid 56 million even after the subsidy? I am genuinely furious. What happens to my car's resale value now 🤬🤬

The ultimate sucker who bought a Model 3 RWD for 61.5 million is right here. I could swear out loud. Bought the car at the top price, bought the stock at the top price, handed my soul and gallbladder over to Tesla. Honey, I'm sorry~ 😭😭😭 $TSLA
40
4
79
13.5K