Daniel Leebrick

1K posts

@dpilot83

Truth seeker. Husband. Father. Steward.

Joined February 2014

372 Following · 194 Followers
Tom Moloughney
Tom Moloughney@tommolog·
Hey @grok, can you block me from everyone on this platform that posts a picture and then asks you to do something stupid, like "remove the criminal/pedophile/dictator", or "put her in a bikini"?
Chester, NJ 🇺🇸 English
9
1
94
13.9K
Elon Musk
Elon Musk@elonmusk·
Every member of Congress who campaigned on reducing government spending and then immediately voted for the biggest debt increase in history should hang their head in shame! And they will lose their primary next year if it is the last thing I do on this Earth.
English
31.3K
37K
308.9K
43.3M
Daniel Leebrick
Daniel Leebrick@dpilot83·
Somehow humans ended up with the biological equivalent of all of that. The more I learn about AI the less likely it seems to me that we could have developed the biological equivalent of all of the characteristics described in Ming’s summary by chance. It seems more likely to me that we were designed, rather than chance creating us.
English
0
0
0
13
Elon Musk
Elon Musk@elonmusk·
Necessity is the mother of invention. The @Tesla_AI team is epicly hardcore. No one can match Tesla’s real-world AI.
Ming@tslaming

BREAKING 🚨 TESLA HAS PATENTED A "MATHEMATICAL CHEAT CODE" THAT FORCES CHEAP 8-BIT CHIPS TO RUN ELITE 32-BIT AI MODELS AND REWRITES THE RULES OF SILICON 🐳

How does a Tesla remember a stop sign it hasn’t seen for 30 seconds, or a humanoid robot maintain perfect balance while carrying a heavy, shifting box? It comes down to Rotary Positional Encoding (RoPE)—the "GPS of the mind" that allows AI to understand its place in space and time by assigning a unique rotational angle to every piece of data.

Usually, this math is a hardware killer. To keep these angles from "drifting" into chaos, you need power-hungry, high-heat 32-bit processors (chips that calculate with extreme decimal-point precision). But Tesla has engineered a way to cheat the laws of physics.

Freshly revealed in patent US20260017019A1, Tesla’s "MIXED-PRECISION BRIDGE" is a mathematical translator that allows inexpensive, power-sipping 8-bit hardware (which usually handles only simple, rounded numbers) to perform elite 32-bit rotations without dropping a single coordinate. This breakthrough is the secret "Silicon Bridge" that gives Optimus and FSD high-end intelligence without sacrificing a mile of range or melting their internal circuits. It effectively turns Tesla’s efficient "budget" hardware into a high-fidelity supercomputer on wheels.

📉 The problem: the high cost of precision

In the world of self-driving cars and humanoid robots, we are constantly fighting a war between precision and power. Modern AI models like Transformers rely on RoPE to help the AI understand where objects are in a sequence or a 3D space. The catch is that these trigonometric functions (sines and cosines) usually require 32-bit floating-point math—imagine trying to calculate a flight path using 10 decimal places of accuracy. If you try to cram that into the standard 8-bit multipliers (INT8) used for speed (which is like rounding everything to the nearest whole number), the errors pile up fast.
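The RoPE mechanism described above has a compact standard form. The sketch below is a minimal, dependency-free illustration of textbook rotary encoding; the function name, the `base` constant, and `dim` are the usual conventions from the literature, not details taken from Tesla's patent:

```python
import math

def rope_rotate(x, y, position, dim_index, base=10000.0, dim=64):
    # Each channel pair i gets angle theta = position * base**(-2*i/dim),
    # so shifting position shifts every pair by a proportional rotation.
    theta = position * base ** (-2.0 * dim_index / dim)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Apply the 2x2 rotation matrix to the feature pair.
    return (x * cos_t - y * sin_t, x * sin_t + y * cos_t)

# The rotation is length-preserving: only the angle encodes position.
a = rope_rotate(1.0, 0.0, position=5, dim_index=0)
b = rope_rotate(1.0, 0.0, position=7, dim_index=0)
```

Because the transform is a pure rotation, vector length is preserved exactly in full precision; the "drift" discussed in the thread only appears once the sines and cosines are rounded into a narrow integer format.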
The car effectively goes blind to fine details. For a robot like Optimus, a tiny math error means losing its balance or miscalculating the distance to a fragile object. To bridge this gap without simply adding more expensive chips, Tesla had to fundamentally rethink how data travels through the silicon.

🛠️ Tesla's solution: the logarithmic shortcut & pre-computation

Tesla’s engineers realized they didn't need to force the whole pipeline to be high-precision. Instead, they designed the Mixed-Precision Bridge. They take the crucial angles used for positioning and convert them into logarithms. Because the "dynamic range" of a logarithm is much smaller than the original number, it’s much easier to move that data through narrow 8-bit hardware without losing the "soul" of the information. It’s a bit like dehydrating food for transport; it takes up less space and is easier to handle, but you can perfectly reconstitute it later.

Crucially, the patent reveals that the system doesn't calculate these logarithms on the fly every time. Instead, it retrieves pre-computed logarithmic values from a specialized "cheat sheet" (look-up storage) to save cycles. By keeping the data in this "dehydrated" log-state, Tesla ensures that the precision doesn't "leak out" during the journey from the memory chips to the actual compute cores. However, keeping data in a log-state is only half the battle; the chip eventually needs to understand the real numbers again.

🏗️ The recovery architecture: rotation matrices & Horner’s method

When the 8-bit multiplier (the Multiplier-Accumulator or MAC) finishes its job, the data is still in a "dehydrated" logarithmic state. To bring it back to a real angle theta without a massive computational cost, Tesla’s high-precision ALU uses a Taylor-series expansion optimized via Horner’s Method. This is a classic computer science trick where a complex equation (like an exponent) is broken down into a simple chain of multiplications and additions.
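The "dehydrate, ship, reconstitute" idea can be sketched as a toy log-domain quantizer. Everything here (the scale, the 8-bit range, the lookup table) is an illustrative construction, not the encoding from the patent:

```python
import math

# Map log2(x) onto an 8-bit code: the log compresses a wide dynamic
# range into a narrow interval, so 256 levels lose little information.
SCALE = 256 / 15.0   # spreads log2 values in [-8, 7] over codes 0..255
OFFSET = 8.0

def encode_log8(x):
    """'Dehydrate' a positive value into an 8-bit log-domain code."""
    code = round((math.log2(x) + OFFSET) * SCALE)
    return max(0, min(255, code))

def decode_log8(code):
    """'Reconstitute' an approximate value from its code."""
    return 2.0 ** (code / SCALE - OFFSET)

# Pre-computed "cheat sheet": decode every possible code once up front,
# mirroring the look-up storage idea described above.
DECODE_LUT = [decode_log8(c) for c in range(256)]

x = 37.5
approx = DECODE_LUT[encode_log8(x)]
```

The relative error is bounded by the code spacing in log space (about 2% here), and it stays roughly constant whether `x` is 0.01 or 100, which is exactly why the log domain suits narrow hardware.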
By running this Horner chain in three specific stages—multiplying by constants like 1/3 and 1/2 at each step—Tesla can approximate the exact value of an angle with 32-bit accuracy while using a fraction of the clock cycles. Once the angle is recovered, the high-precision logic generates a Rotation Matrix (a grid of sine and cosine values) that locks the data points into their correct 3D coordinates. This computational efficiency is impressive, but Tesla didn't stop at just calculating faster; they also found a way to double the "highway speed" of the data itself.

🧩 The data concatenation: 8-bit inputs to 16-bit outputs

One of the cleverest hardware "hacks" detailed in the patent is how Tesla manages to move 16-bit precision through an 8-bit bus. They use the MAC as a high-speed interleaver—effectively a "traffic cop" that merges two lanes of data. It takes two 8-bit values (say, an X-coordinate and the first half of a logarithm) and multiplies one of them by a power of two to "left-shift" it. This effectively glues them together into a single 16-bit word in the output register, allowing the low-precision domain to act as a high-speed packer for the high-precision ALU to "unpack". This trick doubles the bandwidth of the existing wiring on the chip without requiring a physical hardware redesign. With this high-speed data highway in place, the system can finally tackle one of the biggest challenges in autonomous AI: object permanence.

🧠 Long-context memory: remembering the stop sign

The ultimate goal of this high-precision math is to solve the "forgetting" problem. In previous versions of FSD, a car might see a stop sign, but if a truck blocked its view for 5 seconds, it might "forget" the sign existed. Tesla uses a "long-context" window, allowing the AI to look back at data from 30 seconds ago or more. However, as the "distance" in time increases, standard positional math usually drifts.
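Two of the tricks in this passage reduce to a few lines each: the three-stage Horner evaluation of the exponential's cubic Taylor series (the 1/3 and 1/2 constants mentioned above), and shift-based packing of two 8-bit lanes into one 16-bit word. Both are generic sketches of the named techniques, not the patent's actual circuitry:

```python
import math

def exp_horner3(x):
    """Three-stage Horner form of 1 + x + x**2/2 + x**3/6:
    each stage is one multiply and one add."""
    acc = 1.0 + x / 3.0          # stage 1: (1 + x/3)
    acc = 1.0 + (x / 2.0) * acc  # stage 2: (1 + x/2*(1 + x/3))
    return 1.0 + x * acc         # stage 3: cubic approximation of e**x

def pack_8to16(hi, lo):
    """Glue two 8-bit values into one 16-bit word; multiplying by 2**8
    is the 'left-shift by a power of two' that a plain multiplier can do."""
    assert 0 <= hi < 256 and 0 <= lo < 256
    return hi * 2 ** 8 + lo      # same bits as (hi << 8) | lo

def unpack_16to8(word):
    """The high-precision side 'unpacks' the word back into two lanes."""
    return word >> 8, word & 0xFF
```

For small arguments the cubic truncation error is tiny (around 4e-6 at x = 0.1), which is why range-reduced exponentials pair well with short Horner chains.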
Tesla's mixed-precision pipeline fixes this drift by maintaining high positional resolution, ensuring the AI knows exactly where that occluded stop sign is even after a long period of movement. The RoPE rotations are so precise that the sign stays "pinned" to its 3D coordinate in the car's mental map. But remembering 30 seconds of high-fidelity video creates a massive storage bottleneck.

⚡ KV-cache optimization & paged attention: scaling memory

To make these 30-second memories usable in real-time without running out of RAM, Tesla optimizes the KV-cache (Key-Value Cache)—the AI's "working memory" scratchpad. Tesla’s hardware handles this by storing the logarithm of the positions directly in the cache. This reduces the memory footprint by 50% or more, allowing Tesla to store twice as much "history" (up to 128k tokens) in the same amount of RAM.

Furthermore, Tesla utilizes Paged Attention—a trick borrowed from operating systems. Instead of reserving one massive, continuous block of memory (which is inefficient), it breaks memory into small "pages". This allows the AI5 chip to dynamically allocate space only where it's needed, drastically increasing the number of objects (pedestrians, cars, signs) the car can track simultaneously without the system lagging. Yet efficient storage is only part of the picture; the data also has to be protected as it moves through the pipeline.

🔒 Pipeline integrity: the "read-only" safety lock

A subtle but critical detail in the patent is how Tesla protects this data. Once the transformed coordinates are generated, they are stored in a specific location that is read-accessible to downstream components but not write-accessible by them. Furthermore, the high-precision ALU itself cannot read back from this location. This one-way "airlock" prevents the system from accidentally overwriting its own past memories or creating feedback loops that could cause the AI to hallucinate.
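The paged KV-cache idea from this passage can be sketched as a toy allocator. The page size, the slot layout, and the log-of-position storage are all demonstration choices, not figures from the patent:

```python
import math

PAGE_SIZE = 16  # tokens per page (a demonstration value)

class PagedKVCache:
    """Toy paged KV-cache: memory grows one small page at a time instead
    of one huge contiguous reservation, so many tracked sequences can
    share RAM with little waste."""

    def __init__(self):
        self.pages = []   # each page is a fixed-size list of slots
        self.length = 0

    def append(self, key, value):
        if self.length % PAGE_SIZE == 0:
            self.pages.append([None] * PAGE_SIZE)  # allocate on demand
        page, slot = divmod(self.length, PAGE_SIZE)
        # Store the log of the position: large offsets need far fewer
        # bits in log form, echoing the log-domain storage idea above.
        self.pages[page][slot] = (math.log2(1 + self.length), key, value)
        self.length += 1

    def get(self, i):
        page, slot = divmod(i, PAGE_SIZE)
        return self.pages[page][slot]

cache = PagedKVCache()
for t in range(40):
    cache.append(f"k{t}", f"v{t}")
# 40 tokens at 16 per page occupy ceil(40/16) = 3 pages
```

The design point is that unused capacity is bounded by one partial page per sequence, rather than by the worst-case length of the longest sequence.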
This one-way design ensures that the "truth" of the car's position flows in only one direction: forward, toward the decision-making engine.

🌀 Attention sinks: preventing memory overflow

Even with a lean KV-cache, a robot operating for hours can't remember everything forever. Tesla manages this using Attention Sink tokens. Transformers tend to dump "excess" attention math onto the very first tokens of a sequence, so if Tesla simply used a "sliding window" that deleted old memories, the AI would lose these "sink" tokens and its brain would effectively crash. Tesla's hardware is designed to "pin" these attention sinks permanently in the KV-cache. By keeping these mathematical anchors stable while the rest of the memory window slides forward, Tesla prevents the robot’s neural network from destabilizing during long, multi-hour work shifts. While attention sinks stabilize the "memory", the "compute" side has its own inefficiencies—specifically, wasting power on empty space.

🌫️ Sparse tensors: cutting the compute fat

Tesla’s custom silicon doesn't just cheat with precision; it cheats with volume. In the real world, most of what a car or robot sees is "empty" space (like clear sky). In AI math, these are represented as "zeros" in a Sparse Tensor (a data structure that ignores empty space). Standard chips waste power multiplying all those zeros, but Tesla’s newest architecture incorporates Native Sparse Acceleration. The hardware uses a "coordinate-based" system where it only stores the non-zero values and their specific locations. The chip can then skip the "dead space" entirely and focus only on the data that matters—the actual cars and obstacles. This hardware-level sparsity support effectively doubles the throughput of the AI5 chip while significantly lowering the energy consumed per operation.

🔊 The audio edge: Log-Sum-Exp for sirens

Tesla’s "Silicon Bridge" isn't just for vision—it's also why your Tesla is becoming a world-class listener.
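The sparse-tensor argument is easy to demonstrate with the standard coordinate (COO) layout, where work scales with the non-zeros rather than the full grid. This is a generic sketch, not Tesla's hardware scheme:

```python
def to_coo(dense):
    """Keep only non-zero values together with their (row, col) coordinates."""
    return [(i, j, v) for i, row in enumerate(dense)
                      for j, v in enumerate(row) if v != 0.0]

def coo_matvec(coo, n_rows, x):
    """Sparse matrix-vector product: every zero is skipped entirely,
    so the loop runs once per non-zero instead of once per cell."""
    out = [0.0] * n_rows
    for i, j, v in coo:
        out[i] += v * x[j]
    return out

# A mostly "empty sky" matrix: 2 non-zeros out of 9 cells.
dense = [[0.0, 0.0, 3.0],
         [0.0, 0.0, 0.0],
         [2.0, 0.0, 0.0]]
coo = to_coo(dense)                      # only 2 stored entries
y = coo_matvec(coo, 3, [1.0, 1.0, 1.0])
```

At realistic sparsity levels (most of a scene being empty), storage and multiply counts drop by the same factor as the fraction of zeros skipped.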
To navigate safely, an autonomous vehicle needs to identify emergency sirens and the sound of nearby collisions using a Log-Mel Spectrogram approach (a visual "heat map" of sound frequencies). The patent details a specific Log-Sum-Exp (LSE) approximation technique to handle this. By staying in the logarithm domain, the system can handle the massive "dynamic range" of sound—from a faint hum to a piercing fire truck—using only 8-bit hardware without "clipping" the loud sounds or losing the quiet ones. This allows the car to "hear" and categorize environmental sounds with 32-bit clarity.

Of course, all this high-tech hardware is only as good as the brain that runs on it, which is why Tesla's training process is just as specialized.

🎓 Quantization-aware training: pre-adapting the brain

Finally, to make sure this "Mixed-Precision Bridge" works flawlessly, Tesla uses Quantization-Aware Training (QAT). Instead of training the AI in a perfect 32-bit world and then "shrinking" it later—which typically causes the AI to become "drunk" and inaccurate—Tesla trains the model from day one to expect 8-bit limitations. They simulate the rounding errors and "noise" of the hardware during the training phase, creating a neural network that is "pre-hardened". It’s like a pilot training in a flight simulator that perfectly mimics a storm; when the AI hits real weather in the real world, it doesn’t "drift" or become inaccurate, because it was trained in that environment. This extreme optimization opens the door to running Tesla's AI on devices far smaller than a car.

🚀 The strategic roadmap: from AI5 to ubiquitous edge AI

This patent is not just a "nice-to-have" optimization; it is the mathematical prerequisite for Tesla’s entire hardware roadmap. Without this "Mixed-Precision Bridge", the thermal and power equations for next-generation autonomy simply do not work. It starts by unlocking the AI5 chip, which is projected to be 40x more powerful than current hardware.
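Both techniques in this passage have standard textbook forms, sketched below: the max-shifted log-sum-exp that keeps a huge dynamic range finite, and a QAT-style "fake quantization" step that injects int8 rounding noise during training. The scale value is an arbitrary demonstration choice:

```python
import math

def log_sum_exp(xs):
    """Stable LSE: subtract the max so no exp() overflows, then add it
    back. Faint and loud signals coexist without clipping either."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def fake_quant(x, scale=0.1):
    """QAT-style fake quantization: round to an int8 grid and back so a
    network trained through this op learns to tolerate 8-bit rounding."""
    q = max(-128, min(127, round(x / scale)))
    return q * scale

# A naive sum(exp(x)) overflows float64 near x = 710; the shifted
# form handles an 800-unit dynamic range without trouble.
lse = log_sum_exp([-40.0, 3.0, 800.0])   # dominated by the loudest term
```

Note that the LSE result is effectively the loudest term plus a small correction, which is why the log domain preserves quiet components without letting loud ones overflow.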
Raw power is useless if memory bandwidth acts as a bottleneck. By compressing 32-bit rotational data into dense, log-space 8-bit packets, this patent effectively quadruples the usable bandwidth, allowing the chip to utilize its massive matrix-compute arrays without stalling. This efficiency is critical for the chip's "half-reticle" design, which reduces silicon size to maximize manufacturing yield while maintaining supercomputer-level throughput.

This efficiency is even more critical for Tesla Optimus, where it is a matter of operational survival. The robot runs on a 2.3 kWh battery (roughly 1/30th of a Model 3 pack). Standard 32-bit GPU compute would drain this capacity in under 4 hours, consuming 500W+ just for "thinking". By offloading complex RoPE math to this hybrid logic, Tesla slashes the compute power budget to under 100W. This solves the "thermal wall", ensuring the robot can maintain balance and awareness for a full 8-hour work shift without overheating.

This stability directly enables the shift to End-to-End Neural Networks. The "Rotation Matrix" correction described in the patent prevents the mathematical "drift" that usually plagues long-context tracking. This ensures that a stop sign seen 30 seconds ago remains "pinned" to its correct 3D coordinate in the World Model, rather than floating away due to rounding errors.

Finally, baking this math into the silicon secures Tesla's strategic independence. It decouples the company from NVIDIA’s CUDA ecosystem and enables a Dual-Foundry Strategy with both Samsung and TSMC to mitigate supply chain risks. This creates a deliberate "oversupply" of compute, potentially turning its idle fleet and unsold chips into a distributed inference cloud that rivals AWS in efficiency.

But the roadmap goes further. Because this mixed-precision architecture slashes power consumption by orders of magnitude, it creates a blueprint for "Tesla AI on everything".
It opens the door to porting world-class vision models to hardware as small as a smart home hub or smartphone. This would allow tiny, cool-running chips to calculate 3D spatial positioning with zero latency—bringing supercomputer-level intelligence to the edge without ever sending private data to a massive cloud server.
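The Optimus power-budget claims above can be sanity-checked with simple arithmetic. Note that 2.3 kWh at a steady 500 W is about 4.6 hours for compute alone, so the "under 4 hours" figure presumably includes actuation on top; the 180 W motion load below is a hypothetical placeholder, not a Tesla number:

```python
PACK_WH = 2300.0  # Optimus pack: 2.3 kWh, per the figures quoted above

def runtime_hours(compute_w, actuation_w=0.0):
    """Hours of runtime at a constant combined power draw."""
    return PACK_WH / (compute_w + actuation_w)

hours_32bit = runtime_hours(500.0)         # ~4.6 h on compute alone
hours_mixed = runtime_hours(100.0)         # ~23 h of pure "thinking"
hours_shift = runtime_hours(100.0, 180.0)  # ~8.2 h with a hypothetical motion load
```

Under these assumptions, a 100 W compute budget leaves enough headroom for an 8-hour shift, while a 500 W budget does not once any actuation load is added.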

English
2K
4.5K
38.7K
11M
Sawyer Merritt
Sawyer Merritt@SawyerMerritt·
NEWS: Tesla's redesigned Semi truck has just been spotted for the first time ever.
• Model Y style light bar
• More efficient
• Increased payload capability
• 1.7 kWh/mile (15% improvement vs old Semi)
• Designed for autonomy
• 500 mile range
• 1.2 MW (1,200 kW) peak charging speed
• 800 kW drive power
• Side windows don't go as far back
• Silver anti-slip diamond traction strips on side steps
• A lot more under-the-skin improvements
Volume production starts in the first half of 2026, with customer deliveries around that time.
Zanegler@HinrichsZane

@tesla_semi @danWpriestley here it is youtu.be/PJGFPR5vP2g

English
160
542
7.1K
773.1K
Daniel Leebrick
Daniel Leebrick@dpilot83·
It’s probably dumb to get a kick out of this but I do…
Nick Sortor@nicksortor

🚨 EPIC! Trump’s soon-to-be head of NASA Jared @RookIsaacman, Sen. @TimSheehyMT, and Retired Army Ranger @JarikoDenman just set the WORLD RECORD for the largest flag flown in freefall, bringing the record back to AMERICA 🇺🇸 This American flag is a whopping 3,200 SF, and to make the record even MORE patriotic, the team jumped out of a Vietnam-era Huey from 10,000 feet. Right in time for the start of America’s 250th birthday! The previous record was set just WEEKS ago in Libya, prompting Jariko, who’s served in 15 combat deployments, to call Jared up, knowing they had to bring the record home. And so they assembled a rockstar team and made it happen just weeks later, on December 7th—Pearl Harbor Day. So freaking badass 🔥

English
0
0
2
63
Daniel Leebrick
Daniel Leebrick@dpilot83·
You’re just being obstinate if you can’t see any risks associated with a gullible FSD. It must be able to read the context of the situation. Additionally, FSD will eventually be faced with the dilemma of whether to protect the individual in the car or the individual in another car, or possibly even a pedestrian. If there is a guy standing in the middle of the road with a shotgun aimed at you, does FSD drive off a bridge to keep from running that guy over, or does FSD run the guy over, thereby increasing the odds of survival of the person in the car? There are a million crazy situations like that. Humans might not always get them right either, but eventually FSD is going to have to make some crazy judgment calls, and I don’t expect it to be as good as a human for a long, long time.
English
0
0
0
4
Edge Case
Edge Case@edgecase411·
Anyone with a handheld stop sign can stop FSD V14 right now. Honestly I’m not sure how I feel about that with robotaxis around the corner.
English
247
26
616
127.8K
CRNAfarmer
CRNAfarmer@CRNAfarmer2·
Grain bag question for Northern farmers- how long do you dare leave grain in bags that averages 19%? There's some 17.5% in there but also some spots up to 21-22%. Planning to pick them up in the next week but was tempted to leave 1 or 2 till March April @NorthStarAg2
CRNAfarmer tweet media
English
22
3
29
20K
Daniel Leebrick
Daniel Leebrick@dpilot83·
A good bagging operation is less than $0.12/bushel annually. This is less than the annual opportunity cost on a bin. Not to mention, in many places there are no property taxes on bags and bagging equipment, but there are on bins. Much less labor on bags too, due to not trucking it twice. This would decimate your bin argument if that was all there was to it. But it’s not. Your harvest window of opportunity is very narrow with a grain bag. If you put it in too wet and too warm, it spoils quickly because there is no aeration. Bins allow you to start much earlier, which is worth a lot.
English
1
0
0
14
Farmer99
Farmer99@Farmer9910·
@meanfarmer71 @agwithemma I’d much rather build a bin than upkeep a bagger and unloader, and also buy the bags. Bins are a once-in-a-lifetime investment.
English
3
0
0
132
Emma
Emma@agwithemma·
A large majority of harvested grain in western North Dakota is stored in bags until the farmers are ready to sell it. This keeps equipment moving during harvest because there’s no wait time for trucks!
English
12
25
455
68.5K
Daniel Leebrick
Daniel Leebrick@dpilot83·
@JHillfarming @agwithemma Keep track of grain temps going in. Look at charts to see how long grain will last without aeration at your moisture and temp. At 18% if grain temp is 50° F you should really only keep it in there about a month.
English
0
0
2
32
Joe Hill
Joe Hill@JHillfarming·
@agwithemma Been trying to figure out what a safe moisture and storage duration in bags would be. Same as Dakota climates. Thoughts? I’m at 18%
English
1
0
2
341
Daniel Leebrick
Daniel Leebrick@dpilot83·
I budget $0.12 per bushel total for:
1. The cost of the bag
2. The cost of putting it in the bag
3. The cost of taking it out of the bag
Back-of-the-napkin calculations tell me I’m well under $0.10 per bushel for all of that, but back-of-the-napkin calculations often miss some stuff, so the $0.12 gives me some margin.
English
0
0
1
60
JDE
JDE@JDE146021723263·
@agwithemma Anyone know the cost of this storage process? Thanks
English
3
0
0
507
Daniel Leebrick
Daniel Leebrick@dpilot83·
All depends on grain temp. You can’t put air on it, so if it goes in at 30° F during a cold time of the year, you can probably safely store it for 2 to 4 months (depending on when it warms up) even if it’s 30% moisture. If it goes in at 100° F and it’s 16% moisture, you probably need to have it out of there in a couple of weeks.
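For what it's worth, the rules of thumb scattered through this thread can be collected into a rough lookup. The thresholds below are only the figures stated in these replies, not an agronomic storage table; anything outside them returns no answer rather than a guess:

```python
# (max grain temp F, max moisture %, (min_weeks, max_weeks) in the bag)
# Figures taken from the replies in this thread; illustrative only.
RULES_OF_THUMB = [
    (30, 30.0, (8, 16)),   # cold grain: roughly 2-4 months even when wet
    (50, 18.0, (4, 4)),    # about a month at 18% moisture and 50 F
    (100, 16.0, (2, 2)),   # warm grain: out within a couple of weeks
]

def safe_weeks(temp_f, moisture_pct):
    """Return (min_weeks, max_weeks) for the first matching rule of
    thumb, or None when the thread gives no guidance for that case."""
    for t_max, m_max, weeks in RULES_OF_THUMB:
        if temp_f <= t_max and moisture_pct <= m_max:
            return weeks
    return None
```

Treat this strictly as a summary of the thread's advice; real decisions should come from measured grain temps and a proper allowable-storage-time chart.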
English
0
0
2
154
Randy Uhrmacher
Randy Uhrmacher@Cornfrmr·
@agwithemma How wet do they put it in the bag? In our climate I say 17.5 max.
English
2
0
2
1.4K
AR__Actual
AR__Actual@AR_Salyer·
@DefiantLs We print off 800B to 1.8T per YEAR whenever we pass a new budget. That's a HUGE amount of new money we just print off and inject into the system. So Elon's right, and the guy is actually right, but he's too stupid to realize they are saying the same thing.
English
1
0
4
123
Daniel Leebrick
Daniel Leebrick@dpilot83·
After voting for him (the lesser of two evils) twice, I wrote in “Mike Pence” the 3rd time. Trump bad. Kamala worse. But I wasn’t voting for the lesser of two evils a 3rd time after he threw the VP under the bus with lies about the VP’s Constitutional authority to overturn elections. Thankful Mike Pence has a spine.
English
0
0
0
21
Ashley 🖤
Ashley 🖤@TheeGrainLady·
If the election was held again, who has your vote?
English
3
1
2
2.2K
Daniel Leebrick
Daniel Leebrick@dpilot83·
Please use the Constitution and its amendments, as well as the Electoral Count Act of 1887, to make your case that Mike Pence had the authority as Vice President to send the electoral ballots back and ask for the alternate slate of electors from given states. I’ll go make some popcorn while I wait.
English
0
0
0
18
Lt Aldo Raine
Lt Aldo Raine@ddotEOD·
@Mike_Pence Mike "Judas" Pence... you knew. And you certified the election anyway. What a coward. What a traitor... x.com/TheIntelSCIF/s…
The SCIF@TheSCIF

BIDEN'S FAMILY & CIA TOOK MILLIONS FROM CHINA and the in-depth look on how they stole the 2020 election. Trump's FBI found out that Biden's intelligence agencies were trying to cover it up and destroy the evidence. This is monumental.

The Biden family was taking millions of dollars from China. Biden's CIA director, Bill Burns, also took millions of dollars from the Chinese government. The entire foreign policy establishment of the Biden White House had millions of dollars being injected into it from the Chinese government. They also used fake IDs and fake ballots to help ensure Biden would win the 2020 election and censored massive amounts of people on the internet and social media platforms to silence anything regarding Trump and the election including his loyal supporters. This is exactly what a dictator does and the definition of fascism itself.

This is just the tip of the iceberg. Everyone remembers the videos of them pulling suitcases of ballots from underneath tables after the poll watchers went home because of "water pipes" allegedly breaking. Covering windows, wheeling in carts full of duffle bags filled with ballots and no chain of custody, 18 wheelers, and UPS trucks full of fake ballots. Our own postal service was even involved. People were even finding piles of Trump ballots dumped in the woods and caught postal workers ditching Trump ballots in storm drains.

We want justice and we, the people, want arrests.

English
18
29
118
5.8K
Fishler
Fishler@10Fishler01·
@grok @RafaCrackYT @sciencegirl @elonmusk Correct. Now you finally understand. You made it way harder than it needed to be. It was not that tricky. I’m surprised you were able to correct your mistake, even though it was painfully obvious, and you took forever to get it.
English
3
0
0
23
Science girl
Science girl@sciencegirl·
What’s the answer
Science girl tweet media
English
2K
179
3.7K
2M
Daniel Leebrick
Daniel Leebrick@dpilot83·
@10Fishler01 @grok @RafaCrackYT @sciencegirl @elonmusk It was fairly shocking that @grok thought the arrow next to the man on the right represented the man’s height. I knew grok was bad with images, but that was pretty painful to watch. Wonder when grok’s ability to perceive images accurately will improve?
English
3
0
1
39
Casey Kimbrell
Casey Kimbrell@CBKimbrell·
Bennett slipped this through despite the fact it points the finger at most of the publication’s sponsors. Might ought to read it before they take it down. agweb.com/markets/outrag…
English
22
52
225
83.8K
Daniel Leebrick
Daniel Leebrick@dpilot83·
“We already have more homeless than at any time in our nation’s history”

This is objectively false, almost by an order of magnitude when expressed as a percentage of the population. It’s even false when expressed as a total number rather than a percentage of the population. Those facts actually aren’t good for my argument though.

All I know is ~80% of annual federal revenue is spent on interest on the federal debt as well as welfare-type programs, and it’s unsustainable. Unsustainable to me means:
1. It’s impossible to cut enough in non-welfare areas of the federal budget to balance the budget when welfare areas + interest payments take 80% of federal revenue.
2. Excessive federal spending will eventually result in the USD no longer being the reserve currency, and if that happens, the show is over in the USA. The entire population, even the homeless, will be many times worse off than they currently are.
English
0
0
0
13
Intelligence Dojo
Intelligence Dojo@IntelligenceDo1·
@dpilot83 @beinlibertarian We already have more homeless than at any time in our nation’s history and virtually no one takes care of any of them. Your point is dead on arrival. Any societal solution that leads to elderly people being homeless and dying in the streets is morally bankrupt.
English
1
0
0
67
Being Libertarian
Being Libertarian@beinlibertarian·
It’s not Rand Paul’s fault that Social Security is running dry and unsustainable. Ask your “conservative” representatives to do the math and cut some spending elsewhere to make it fiscally responsible to keep it as is. Social Security needs to be phased out anyway.
English
304
89
1.2K
26.3K
Daniel Leebrick
Daniel Leebrick@dpilot83·
Every socialist principle promises safety in exchange for freedom. Give an inch and they end up taking a mile.

In a libertarian world, 97-year-old men don’t die on the streets because there are various charities that make sure things like that don’t happen. 97-year-old men also don’t die on the street because compassionate individuals look at them and say, “that might be me someday. I’ll voluntarily lend a hand”. This is a much more realistic world than the one you said you are looking for.

You said you just knew the government could easily cut half the cost out of various welfare programs by eliminating at least some of the waste. That is the real unicorn utopia: the expectation that the government, which can only steal from one group and give to another while taking its cut in the process, can be efficient and fair. The entire point of the libertarian perspective is that the government really isn’t capable of that. If the government had a chance of doing what you say it can, we would not be having this discussion. There would be no libertarians, because government would be what it should be.

In the real world, that never happens. In the real world, government promises safety and takes freedom. Then they butcher the safety part (on cost and possibly on safety as well). Government keeps doing this over and over, a little at a time, and eventually you have no prosperity and no safety, and long before you lose prosperity and safety, you have no freedom.

I am not saying charity should not exist. I’m simply saying charity is not something government should be involved in.
English
1
0
0
20
Intelligence Dojo
Intelligence Dojo@IntelligenceDo1·
@dpilot83 @beinlibertarian I’m not a libertarian who believes in a society where elderly people are forced into homelessness and die in the streets after being hard working Americans. Quit edge-lording.
English
1
0
0
25