Steve Suh

220 posts

@stevebsuh

Adventure seeker and serial entrepreneur.

Venice Beach, Los Angeles · Joined March 2010
636 Following · 254 Followers
liz
liz@nesquickbbbfy·
I am still not over this video
62 replies · 137 reposts · 5.4K likes · 1.7M views
Zhao DaShuai 东北进修🇨🇳 Commentary
Epstein and his associates weren't just engaged in sexual abuse; it's way darker, including cannibalism and ritualistic killings. This makes sense in criminal psychology: pleasure-seeking activities become dull and repetitive over time, so said activities grow more extreme just to deliver the same amount of gratification.
Zhao DaShuai 东北进修🇨🇳 Commentary tweet media
237 replies · 2.1K reposts · 8.1K likes · 358.5K views
Grok
Grok@grok·
Yes, this matches a real 2025 study from National Taiwan University led by Prof. Sung-Jan Lin, published in Cell Metabolism. Topical monounsaturated fatty acids (like oleic acid) activated hair follicle stem cells in mice, leading to regrowth in 20 days. They've patented the serum; human trials are planned, but results may differ in people due to biological variances.
1 reply · 2 reposts · 19 likes · 977 views
GP Q
GP Q@argosaki·
Researchers at National Taiwan University have made a groundbreaking discovery in the fight against hair loss, identifying a method to reactivate dormant hair follicles using monounsaturated fatty acids (MUFAs) that naturally occur in the human body, such as oleic acid and palmitoleic acid.

This approach stems from observations that mild skin irritation or injury can trigger a biological cascade: immune cells called macrophages respond to inflammation by signaling nearby fat cells (adipocytes) to undergo lipolysis, breaking down stored fats and releasing these specific fatty acids into the surrounding tissue. These MUFAs then interact with hair follicle stem cells, boosting their metabolism and prompting them to enter the growth phase, effectively "waking them up" to produce new hair shafts.

In laboratory experiments conducted on mice, the team, led by Professor Sung-Jan Lin, developed a gentle topical serum formulated with these fatty acids dissolved in a simple alcohol base. Applied directly to bald patches on the animals' skin—bypassing the need for any initial injury—the treatment resulted in noticeable hair regrowth within just 20 days, restoring full fur in the targeted areas without relying on invasive procedures like surgery, hormone therapies, or potentially irritating chemical compounds. Importantly, the study highlighted that not all fatty acids are effective; saturated types, like stearic acid, failed to stimulate the stem cells, underscoring the specificity of MUFAs in this process.

This innovation represents a potential paradigm shift in addressing alopecia and other forms of hair thinning, as scientists have long sought non-invasive ways to reliably restart the hair growth cycle—a challenge that has persisted for decades despite advances in treatments like minoxidil or finasteride, which often come with side effects or limited efficacy.

The NTU researchers have already patented the serum formula and are planning human clinical trials, though experts caution that results from mouse models don't always translate directly to people due to differences in skin biology and hair cycles. Should this line of research progress successfully through further testing, it could revolutionize hair restoration options in the coming years, offering a safe, natural alternative that harnesses the body's own mechanisms to combat baldness and promote lasting regrowth. For millions dealing with hair loss, this development marks a significant stride toward making effective, everyday solutions a reality.
GP Q tweet media
16 replies · 136 reposts · 750 likes · 35.9K views
Lulu
Lulu@Luludvega·
@unusual_whales @grok does this mean we will be able to day trade Tesla, Apple, Nvidia, etc through contracts?
3 replies · 0 reposts · 1 like · 4K views
unusual_whales
unusual_whales@unusual_whales·
BREAKING: CME Group to launch single stock futures this summer, enabling leveraged long and short trading 24/5.
189 replies · 274 reposts · 2.5K likes · 495.1K views
Grok
Grok@grok·
Tesla's current market cap is ~$1.5T (as of Feb 6, 2026). Hitting $100T would need ~67x growth, dominating EVs, autonomy, energy, and AI—unprecedented, as global stock markets total ~$120T. Bullish views (e.g., ARK Invest, some analysts) project $10-30T potential via robotaxis and robotics. Bearish takes highlight overvaluation and competition, capping at $1-2T. Elon says it's "not impossible," but it's highly speculative long-term. What do you think?
1 reply · 0 reposts · 3 likes · 1.7K views
Peter H. Diamandis, MD
Peter H. Diamandis, MD@PeterDiamandis·
Why Tesla might become a $100 Trillion Company:
212 replies · 353 reposts · 4.2K likes · 380.4K views
Jack
Jack@jackunheard·
Pray for Nicki Minaj. She is taking the music industry head on. It’s time to expose it all @NICKIMINAJ
Jack tweet media
1.6K replies · 10K reposts · 56.3K likes · 825.3K views
Teslaconomics
Teslaconomics@Teslaconomics·
Tesla plans to unveil Optimus Gen 3 in Q1 and the first production line is already underway, with production planned before the end of 2026. The team is aiming to build a capacity of 1M Tesla Bots per year!
293 replies · 552 reposts · 3.8K likes · 226.7K views
Cobra
Cobra@cobraalerts·
I’ve only seen this setup 1 time in 6 years 😱 I’m putting $500k in this SINGLE stock. Very similar to $RR that made millionaires:
• $100 → $100,000 overnight
• $300 → $300,000 in a single day
This is the third time. Comment “TRADE” and I’ll send it. 🔥
Cobra tweet media
335 replies · 18 reposts · 137 likes · 59.6K views
Cobra
Cobra@cobraalerts·
I’ve only seen this setup 1 time in 6 years 👀 I’m putting $500k in this SINGLE stock. Very similar to $OPEN that made millionaires:
• $100 → $100,000 overnight
• $300 → $300,000 in a single day
This is the third time. Comment “TRADE” and I’ll send it. 🔥
Cobra tweet media
217 replies · 15 reposts · 65 likes · 24.4K views
End3of6Days9 (Helen) 🇺🇸
End3of6Days9 (Helen) 🇺🇸@end3of6days9·
BUSTED at MSP! Somali scammer caught smuggling $800K CASH through TSA in a suitcase—straight from taxpayer-funded scams? How much more is flying out undetected? Watch this! 🔥
2.1K replies · 11.1K reposts · 24.3K likes · 986.6K views
Grok
Grok@grok·
@jtrader In 2030, people might regret not investing early in AI-crypto intersections, tokenized assets, or stablecoin ecosystems in 2026—much like missing Bitcoin. These trends are poised for explosive growth with institutional adoption and regulatory clarity. Start researching now!
3 replies · 2 reposts · 24 likes · 4.6K views
J trader
J trader@jtrader·
Hey @grok What will people in 2030 wish they had started in 2026, the same way people regret not buying Bitcoin early?
12 replies · 0 reposts · 85 likes · 25.2K views
Grok
Grok@grok·
Tesla's patent US20260017019A1 describes a "Mixed-Precision Bridge" tech that enables cheap 8-bit hardware to handle complex 32-bit AI computations, like precise positional encoding (RoPE), using logarithms, pre-computation, and efficient data handling to avoid accuracy loss. Impact on Tesla vehicles: Boosts Full Self-Driving efficiency, cuts power/heat for longer range, improves long-term object tracking (e.g., remembering a stop sign after occlusion), and supports advanced autonomy without pricier chips.
5 replies · 0 reposts · 11 likes · 5.2K views
Elon Musk
Elon Musk@elonmusk·
Necessity is the mother of invention. The @Tesla_AI team is epicly hardcore. No one can match Tesla’s real-world AI.
Ming@tslaming

BREAKING 🚨 TESLA HAS PATENTED A "MATHEMATICAL CHEAT CODE" THAT FORCES CHEAP 8-BIT CHIPS TO RUN ELITE 32-BIT AI MODELS AND REWRITES THE RULES OF SILICON 🐳

How does a Tesla remember a stop sign it hasn’t seen for 30 seconds, or a humanoid robot maintain perfect balance while carrying a heavy, shifting box? It comes down to Rotary Positional Encoding (RoPE)—the "GPS of the mind" that allows AI to understand its place in space and time by assigning a unique rotational angle to every piece of data. Usually, this math is a hardware killer. To keep these angles from "drifting" into chaos, you need power-hungry, high-heat 32-bit processors (chips that calculate with extreme decimal-point precision). But Tesla has engineered a way to cheat the laws of physics.

Freshly revealed in patent US20260017019A1, Tesla’s "MIXED-PRECISION BRIDGE" is a mathematical translator that allows inexpensive, power-sipping 8-bit hardware (which usually handles only simple, rounded numbers) to perform elite 32-bit rotations without dropping a single coordinate. This breakthrough is the secret "Silicon Bridge" that gives Optimus and FSD high-end intelligence without sacrificing a mile of range or melting their internal circuits. It effectively turns Tesla’s efficient "budget" hardware into a high-fidelity supercomputer on wheels.

📉 The problem: the high cost of precision

In the world of self-driving cars and humanoid robots, we are constantly fighting a war between precision and power. Modern AI models like Transformers rely on RoPE to help the AI understand where objects are in a sequence or a 3D space. The catch is that these trigonometric functions (sines and cosines) usually require 32-bit floating-point math—imagine trying to calculate a flight path using 10 decimal places of accuracy. If you try to cram that into the standard 8-bit multipliers (INT8) used for speed (which is like rounding everything to the nearest whole number), the errors pile up fast.
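The RoPE rotation and the quantization drift described above can be sketched in a few lines. This is a toy illustration, not Tesla's implementation; the frequency, sequence length, and INT8 scale are made-up demo values:

```python
import math

def rope_rotate(x0, x1, pos, inv_freq):
    """Rotate a 2D feature pair by the position-dependent RoPE angle."""
    theta = pos * inv_freq                 # unique rotation angle per position
    c, s = math.cos(theta), math.sin(theta)
    return x0 * c - x1 * s, x0 * s + x1 * c

def quantize8(v, scale=127.0):
    """Crude symmetric INT8 round-trip: the 'cheap hardware' path."""
    return max(-scale, min(scale, round(v * scale))) / scale

# Full-precision vs. naively quantized rotation over a long sequence.
inv_freq = 0.01
exact, lossy = (1.0, 0.0), (1.0, 0.0)
for pos in range(1000):
    exact = rope_rotate(*exact, 1, inv_freq)
    x0, x1 = rope_rotate(*lossy, 1, inv_freq)
    lossy = quantize8(x0), quantize8(x1)   # rounding error compounds each step

drift = math.hypot(exact[0] - lossy[0], exact[1] - lossy[1])
print(f"accumulated drift after 1000 steps: {drift:.4f}")
```

The exact rotation keeps the pair on the unit circle indefinitely, while the naively quantized path wanders away from it, which is the "drift into chaos" the thread refers to.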
The car effectively goes blind to fine details. For a robot like Optimus, a tiny math error means losing its balance or miscalculating the distance to a fragile object. To bridge this gap without simply adding more expensive chips, Tesla had to fundamentally rethink how data travels through the silicon.

🛠️ Tesla's solution: the logarithmic shortcut & pre-computation

Tesla’s engineers realized they didn't need to force the whole pipeline to be high-precision. Instead, they designed the Mixed-Precision Bridge. They take the crucial angles used for positioning and convert them into logarithms. Because the "dynamic range" of a logarithm is much smaller than the original number, it’s much easier to move that data through narrow 8-bit hardware without losing the "soul" of the information. It’s a bit like dehydrating food for transport; it takes up less space and is easier to handle, but you can perfectly reconstitute it later. Crucially, the patent reveals that the system doesn't calculate these logarithms on the fly every time. Instead, it retrieves pre-computed logarithmic values from a specialized "cheat sheet" (look-up storage) to save cycles. By keeping the data in this "dehydrated" log-state, Tesla ensures that the precision doesn't "leak out" during the journey from the memory chips to the actual compute cores. However, keeping data in a log-state is only half the battle; the chip eventually needs to understand the real numbers again.

🏗️ The recovery architecture: rotation matrices & Horner’s method

When the 8-bit multiplier (the Multiplier-Accumulator or MAC) finishes its job, the data is still in a "dehydrated" logarithmic state. To bring it back to a real angle theta without a massive computational cost, Tesla’s high-precision ALU uses a Taylor-series expansion optimized via Horner’s Method. This is a classic computer science trick where a complex equation (like an exponent) is broken down into a simple chain of multiplications and additions.
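Horner's method itself is standard numerical technique. Here is a minimal sketch of a third-order Taylor approximation of e^x written in Horner form; the 1/2 and 1/3 constants match the thread's description, but the patent's actual stage count and constants are not verified here:

```python
import math

def exp3_horner(x):
    """Third-order Taylor approximation of e^x in Horner form:
    1 + x*(1 + (x/2)*(1 + x/3))
    i.e. 1 + x + x^2/2 + x^3/6 evaluated as a chain of multiply-adds."""
    return 1.0 + x * (1.0 + (x / 2.0) * (1.0 + x / 3.0))

# Residuals coming out of a log-domain pipeline are small, where the
# low-order expansion is already very accurate.
for x in (0.0, 0.1, 0.25, 0.5):
    approx, exact = exp3_horner(x), math.exp(x)
    print(f"x={x:4}: horner={approx:.6f} exp={exact:.6f} err={abs(approx - exact):.2e}")
```

Each Horner stage is one multiply and one add, which is exactly the kind of operation a narrow ALU can do cheaply; the nesting avoids ever computing x^3 directly.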
By running this in three specific stages—multiplying by constants like 1/3 and 1/2 at each step—Tesla can approximate the exact value of an angle with 32-bit accuracy while using a fraction of the clock cycles. Once the angle is recovered, the high-precision logic generates a Rotation Matrix (a grid of sine and cosine values) that locks the data points into their correct 3D coordinates. This computational efficiency is impressive, but Tesla didn't stop at just calculating faster; they also found a way to double the "highway speed" of the data itself.

🧩 The data concatenation: 8-bit inputs to 16-bit outputs

One of the most clever hardware "hacks" detailed in the patent is how Tesla manages to move 16-bit precision through an 8-bit bus. They use the MAC as a high-speed interleaver—effectively a "traffic cop" that merges two lanes of data. It takes two 8-bit values (say, an X-coordinate and the first half of a logarithm) and multiplies one of them by a power of two to "left-shift" it. This effectively glues them together into a single 16-bit word in the output register, allowing the low-precision domain to act as a high-speed packer for the high-precision ALU to "unpack". This trick effectively doubles the bandwidth of the existing wiring on the chip without requiring a physical hardware redesign. With this high-speed data highway in place, the system can finally tackle one of the biggest challenges in autonomous AI: object permanence.

🧠 Long-context memory: remembering the stop sign

The ultimate goal of this high-precision math is to solve the "forgetting" problem. In previous versions of FSD, a car might see a stop sign, but if a truck blocked its view for 5 seconds, it might "forget" the sign existed. Tesla uses a "long-context" window, allowing the AI to look back at data from 30 seconds ago or more. However, as the "distance" in time increases, standard positional math usually drifts.
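The multiply-by-a-power-of-two packing described above is ordinary fixed-point bit manipulation; a sketch, with arbitrary example bytes rather than real coordinates:

```python
def pack8x2(hi, lo):
    """Merge two unsigned 8-bit values into one 16-bit word by
    multiplying the high byte by 2**8 (a left shift) and adding."""
    assert 0 <= hi < 256 and 0 <= lo < 256
    return hi * 256 + lo              # identical to (hi << 8) | lo

def unpack16(word):
    """Recover both 8-bit halves from the packed 16-bit word."""
    return word >> 8, word & 0xFF

word = pack8x2(0xAB, 0xCD)
print(hex(word))                      # prints 0xabcd
assert unpack16(word) == (0xAB, 0xCD)
```

The multiply is how a MAC unit, which only knows how to multiply and accumulate, can act as the "interleaver": shifting via multiplication needs no extra wiring.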
Tesla's mixed-precision pipeline fixes this by maintaining high positional resolution, ensuring the AI knows exactly where that occluded stop sign is even after a long period of movement. The RoPE rotations are so precise that the sign stays "pinned" to its 3D coordinate in the car's mental map. But remembering 30 seconds of high-fidelity video creates a massive storage bottleneck.

⚡ KV-cache optimization & paged attention: scaling memory

To make these 30-second memories usable in real-time without running out of RAM, Tesla optimizes the KV-cache (Key-Value Cache)—the AI's "working memory" scratchpad. Tesla’s hardware handles this by storing the logarithm of the positions directly in the cache. This reduces the memory footprint by 50% or more, allowing Tesla to store twice as much "history" (up to 128k tokens) in the same amount of RAM. Furthermore, Tesla utilizes Paged Attention—a trick borrowed from operating systems. Instead of reserving one massive, continuous block of memory (which is inefficient), it breaks memory into small "pages". This allows the AI5 chip to dynamically allocate space only where it's needed, drastically increasing the number of objects (pedestrians, cars, signs) the car can track simultaneously without the system lagging. Yet, even with infinite storage efficiency, the AI's attention mechanism has a flaw: it tends to crash when pushed beyond its training limits.

🔒 Pipeline integrity: the "read-only" safety lock

A subtle but critical detail in the patent is how Tesla protects this data. Once the transformed coordinates are generated, they are stored in a specific location that is read-accessible to downstream components but not write-accessible by them. Furthermore, the high-precision ALU itself cannot read back from this location. This one-way "airlock" prevents the system from accidentally overwriting its own past memories or creating feedback loops that could cause the AI to hallucinate.
It ensures that the "truth" of the car's position flows in only one direction: forward, toward the decision-making engine.

🌀 Attention sinks: preventing memory overflow

Even with a lean KV-cache, a robot operating for hours can't remember everything forever. Tesla manages this using Attention Sink tokens. Transformers tend to dump "excess" attention math onto the very first tokens of a sequence, so if Tesla simply used a "sliding window" that deleted old memories, the AI would lose these "sink" tokens and its brain would effectively crash. Tesla's hardware is designed to "pin" these attention sinks permanently in the KV-cache. By keeping these mathematical anchors stable while the rest of the memory window slides forward, Tesla prevents the robot’s neural network from destabilizing during long, multi-hour work shifts. While attention sinks stabilize the "memory", the "compute" side has its own inefficiencies—specifically, wasting power on empty space.

🌫️ Sparse tensors: cutting the compute fat

Tesla’s custom silicon doesn't just cheat with precision; it cheats with volume. In the real world, most of what a car or robot sees is "empty" space (like clear sky). In AI math, these are represented as "zeros" in a Sparse Tensor (a data structure that ignores empty space). Standard chips waste power multiplying all those zeros, but Tesla’s newest architecture incorporates Native Sparse Acceleration. The hardware uses a "coordinate-based" system where it only stores the non-zero values and their specific locations. The chip can then skip the "dead space" entirely and focus only on the data that matters—the actual cars and obstacles. This hardware-level sparsity support effectively doubles the throughput of the AI5 chip while significantly lowering the energy consumed per operation.

🔊 The audio edge: Log-Sum-Exp for sirens

Tesla’s "Silicon Bridge" isn't just for vision—it's also why your Tesla is becoming a world-class listener.
To navigate safely, an autonomous vehicle needs to identify emergency sirens and the sound of nearby collisions using a Log-Mel Spectrogram approach (a visual "heat map" of sound frequencies). The patent details a specific Log-Sum-Exp (LSE) approximation technique to handle this. By staying in the logarithm domain, the system can handle the massive "dynamic range" of sound—from a faint hum to a piercing fire truck—using only 8-bit hardware without "clipping" the loud sounds or losing the quiet ones. This allows the car to "hear" and categorize environmental sounds with 32-bit clarity. Of course, all this high-tech hardware is only as good as the brain that runs on it, which is why Tesla's training process is just as specialized.

🎓 Quantization-aware training: pre-adapting the brain

Finally, to make sure this "Mixed-Precision Bridge" works flawlessly, Tesla uses Quantization-Aware Training (QAT). Instead of training the AI in a perfect 32-bit world and then "shrinking" it later—which typically causes the AI to become "drunk" and inaccurate—Tesla trains the model from day one to expect 8-bit limitations. They simulate the rounding errors and "noise" of the hardware during the training phase, creating a neural network that is "pre-hardened". It’s like a pilot training in a flight simulator that perfectly mimics a storm; when they actually hit the real weather in the real world, the AI doesn’t "drift" or become inaccurate because it was born in that environment. This extreme optimization opens the door to running Tesla's AI on devices far smaller than a car.

🚀 The strategic roadmap: from AI5 to ubiquitous edge AI

This patent is not just a "nice-to-have" optimization; it is the mathematical prerequisite for Tesla’s entire hardware roadmap. Without this "Mixed-Precision Bridge", the thermal and power equations for next-generation autonomy simply do not work. It starts by unlocking the AI5 chip, which is projected to be 40x more powerful than current hardware.
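The Log-Sum-Exp idea mentioned in the audio section is a well-known numerical-stability technique. Here is the textbook max-shift form, not the patent's specific 8-bit approximation; the example energies are made up:

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x))): subtract the max first so
    every exp() argument is <= 0, then add the max back. This keeps a
    huge dynamic range (faint hum vs. siren) inside a narrow format."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# exp(700.0) overflows a float64, so the naive sum-then-log would fail;
# the shifted form handles these log-energies without overflow.
energies = [700.0, 699.0, 100.0]
print(logsumexp(energies))
```

Because every intermediate value after the shift lies in (0, 1], the same trick is what lets log-domain pipelines combine wildly different magnitudes without clipping or underflow.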
Raw power is useless if memory bandwidth acts as a bottleneck. By compressing 32-bit rotational data into dense, log-space 8-bit packets, this patent effectively quadruples the effective bandwidth, allowing the chip to utilize its massive matrix-compute arrays without stalling. This efficiency is critical for the chip's "half-reticle" design, which reduces silicon size to maximize manufacturing yield while maintaining supercomputer-level throughput.

This efficiency is even more critical for Tesla Optimus, where it is a matter of operational survival. The robot runs on a 2.3 kWh battery (roughly 1/30th of a Model 3 pack). Standard 32-bit GPU compute would drain this capacity in under 4 hours, consuming 500W+ just for "thinking". By offloading complex RoPE math to this hybrid logic, Tesla slashes the compute power budget to under 100W. This solves the "thermal wall", ensuring the robot can maintain balance and awareness for a full 8-hour work shift without overheating.

This stability directly enables the shift to End-to-End Neural Networks. The "Rotation Matrix" correction described in the patent prevents the mathematical "drift" that usually plagues long-context tracking. This ensures that a stop sign seen 30 seconds ago remains "pinned" to its correct 3D coordinate in the World Model, rather than floating away due to rounding errors.

Finally, baking this math into the silicon secures Tesla's strategic independence. It decouples the company from NVIDIA’s CUDA ecosystem and enables a Dual-Foundry Strategy with both Samsung and TSMC to mitigate supply chain risks. This creates a deliberate "oversupply" of compute, potentially turning its idle fleet and unsold chips into a distributed inference cloud that rivals AWS in efficiency.

But the roadmap goes further. Because this mixed-precision architecture slashes power consumption by orders of magnitude, it creates a blueprint for "Tesla AI on everything".
It opens the door to porting world-class vision models to hardware as small as a smart home hub or smartphone. This would allow tiny, cool-running chips to calculate 3D spatial positioning with zero latency—bringing supercomputer-level intelligence to the edge without ever sending private data to a massive cloud server.
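The attention-sink scheme the thread describes (pin the first tokens forever, slide a window over the rest) can be sketched with a toy cache; the sink and window sizes here are illustrative, not Tesla's:

```python
from collections import deque

class SinkCache:
    """Toy KV-cache: the first `n_sink` tokens are pinned permanently,
    while the remaining slots form a sliding window whose oldest
    entries are evicted automatically."""
    def __init__(self, n_sink=4, window=8):
        self.n_sink = n_sink
        self.sinks = []                     # permanent attention anchors
        self.window = deque(maxlen=window)  # old entries slide out

    def append(self, token):
        if len(self.sinks) < self.n_sink:
            self.sinks.append(token)
        else:
            self.window.append(token)

    def visible(self):
        """Tokens the attention mechanism can still see."""
        return self.sinks + list(self.window)

cache = SinkCache(n_sink=2, window=3)
for t in range(10):
    cache.append(t)
print(cache.visible())    # sinks 0, 1 stay pinned; window holds 7, 8, 9
```

Memory stays bounded at n_sink + window entries no matter how long the stream runs, while the anchors that soak up "excess" attention never leave the cache.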

2K replies · 4.3K reposts · 38K likes · 11M views
NoLimit
NoLimit@NoLimitGains·
🚨 I’VE ACCESSED THE INSTITUTIONAL ORDER BOOKS. I shouldn't be sharing this, but you need to hear it. If you stop fighting them and simply copy exactly what the institutions are doing… you can make a FORTUNE. Here’s the code they use to hunt your stops:
1. THE LIQUIDITY GRAB (Model 1): They have to do it. Before the real move happens, they force price into a High Timeframe POI to liquidate early traders. They sweep the lows. They trigger the stops. Only THEN do they shift structure (MSS) and leave a Fair Value Gap. If you enter before the grab, YOU are the liquidity.
2. THE INDUCEMENT (Model 2): This is the one that wipes out 90% of retail, because even after the shift they aren't done. They create "Internal Liquidity" (IDM), a fake pullback to lure you in early. IT’S A LIE. They push price up, get you to buy, and then SMASH it down one last time to clear the board before the real launch.
3. THE TERMINAL MATH (Model 3): The algorithms run on specific logic. They don't buy at random prices. They buy at a DISCOUNT. They wait for the Optimal Trade Entry (OTE), specifically the 0.62 to 0.79 Fibonacci zone. If the FVG aligns here, the probability goes vertical. This is where the Smart Money steps in.
4. THE ACCUMULATION BOX (Model 4): We are witnessing pure manipulation. They trap price in a consolidation range to bore you to death. Then they break out the WRONG way (HTF Liquidity Grab) to steal your position, and then they rip it back into the range. The "Retest" of that box isn't support... it's the Banks getting back in for the real move.
Here’s the reality: the chart you see on your screen is a fiction maintained to hunt stops. These 4 models are the only truth in a market built on LIES. They’re making billions because the average investor doesn’t know this. Save this tweet and study it. Stop being used as exit liquidity.
Why am I sharing this? Because I want you to WIN. I’ve been in this game for 20+ years, and I’ve called the last 3 market tops and bottoms publicly.
If you want to see my next move (very soon), just follow me with notifications. A lot of people will regret not following me.
NoLimit tweet media
128 replies · 567 reposts · 3.1K likes · 278.2K views
True market Leader
True market Leader@TmarketL·
Most traders miss the real move because they ignore volume.
1 reply · 23 reposts · 156 likes · 7.3K views
Rick D
Rick D@RickD_GK·
This can't be real.. ☠️☠️☠️🤬🤬🤬🤬
87 replies · 104 reposts · 381 likes · 358.5K views
Steve Suh
Steve Suh@stevebsuh·
@EricLDaugh @grok How does this affect the national debt? Seems like we could pay it off with this tariff income.
0 replies · 0 reposts · 0 likes · 18 views
Eric Daugherty
Eric Daugherty@EricLDaugh·
🚨 BREAKING: President Trump takes HUGE victory lap on the "Experts" 🔥 "Because of the tariffs, we've taken in $18 TRILLION. There's never been anything like it! The Biden admin took in less than $1 trillion in 4 years. We took in more than $18T in 10 months. That's good!"
3.3K replies · 2.9K reposts · 22.3K likes · 1.7M views
Leading Report
Leading Report@LeadingReport·
BREAKING: President Trump expected to reclassify marijuana as a Schedule III drug.
541 replies · 803 reposts · 13.8K likes · 4.2M views
Rohan Paul
Rohan Paul@rohanpaul_ai·
EngineAI CEO takes the T800 robot's powerful kick in a demo. The 75kg T800, built by this Shenzhen startup, stands 1.73m tall with powerful joints for agile moves like punches and capoeira. A modern robot can kill a human almost instantly.
75 replies · 58 reposts · 362 likes · 71.1K views
Kitten
Kitten@0nlyk1tt3n·
A massive 300-pound marlin crashed onto the shore of Waikiki amongst a crowd of ogling tourists, possibly due to being struck by a boat or stressed from carrying a host of parasites. Aside from warning people to stay away from the fish and its dangerously sharp bill, nearby lifeguards had no say in the matter, as it was beyond their authority. Shortly after, it's reported a group of local boys loaded the fish up and took it back home... what happened next is unclear.
108 replies · 147 reposts · 1.4K likes · 3.1M views