Homes @Sarcastic_Cholo
2.7K posts

Bio: This is my bio. There are many like it, but this one is mine.

Joined April 2009
425 Following · 101 Followers
Homes @Sarcastic_Cholo:
@thebearbkg @congressdj @WR4NYGov Lemonade has a chance of turning into a management company. Tesla provides the de-risking software through FSD with no extra operating costs, and it collects FSD revenue.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 407
Homes @Sarcastic_Cholo:
@thebearbkg @congressdj @WR4NYGov Lemonade test pilots, Tesla collects the data, Tesla decides what they can change/do better if it is worth it.
Replies: 2 · Reposts: 0 · Likes: 1 · Views: 428
DJ @congressdj:
Tesla Insurance = Terminated. It was a great first month, decent second, and outrageous third. Let me know when they stop basing a month’s entire premium on < 1 manually driven mile out of > 2500 FSD driven.
Replies: 86 · Reposts: 17 · Likes: 1.1K · Views: 148.9K
Homes @Sarcastic_Cholo:
@wholemars The user base grows a lot but revenue does not. What is the ROI? Especially if not monetary?
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 12
Whole Mars Catalog @wholemars:
Tesla Self-Driving, pay-per-use pricing: every car comes with Self-Driving activated. If you don’t have a monthly subscription, you can charge your Supercharger credit card already on file for a single self-driving ride, the same way you do in the Robotaxi app. 👍 or 👎?
[image]
Replies: 427 · Reposts: 62 · Likes: 1.8K · Views: 197.6K
Homes @Sarcastic_Cholo:
@AIDRIVR You need a dashboard of how many loud Mustangs get gapped. Win/loss. Type of cars. And keep it safe, of course!
Replies: 0 · Reposts: 0 · Likes: 2 · Views: 479
ΛI DRIVR @AIDRIVR:
I will let FSD drive 99% of the time, but the one thing I will not let it do is get beat off the line by a loud Mustang
Replies: 186 · Reposts: 126 · Likes: 4K · Views: 203.7K
Maziyar PANAHI @MaziyarPanahi:
Gemma 4 watches raw video. Understands the scene. Then prompts SAM 3 to segment and RF-DETR to track. One AI directing two others. Fighter jets. Crowds. Aerial defense footage. All three models running locally on a MacBook. No cloud. What scene should I point this at next?
Replies: 69 · Reposts: 110 · Likes: 1.8K · Views: 189.1K
Homes @Sarcastic_Cholo:
@MaziyarPanahi Not familiar with SAM 3 or RF-DETR; I’ll check those out. Might also be useful for the mosquito turret I want to build: image + sound for targeting.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 102
Maziyar PANAHI @MaziyarPanahi:
@Sarcastic_Cholo That's awesome! It's totally possible. With SAM 3 or a fine-tuned version of RF-DETR you can first detect it, then crop those detections and pass them to a VLM like Gemma 4 to extract the number. And the whole stack is possible locally! I am excited for you doing this locally! 👏
Replies: 1 · Reposts: 0 · Likes: 10 · Views: 1.2K
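The detect-then-crop-then-describe loop suggested in this exchange can be outlined in a few lines. This is a minimal sketch under stated assumptions: `detect_objects` and `vlm_describe` are hypothetical stand-ins for the real model calls (RF-DETR / SAM 3 for detection, a local Gemma 4 for description); only the crop-and-route orchestration logic is concrete.

```python
# Sketch of the detect -> crop -> describe loop from the thread above.
# detect_objects() and vlm_describe() are HYPOTHETICAL stand-ins; a real
# build would wrap RF-DETR / SAM 3 and a locally served Gemma 4.

def detect_objects(frame):
    """Hypothetical detector: returns (label, score, bbox) tuples."""
    return [("mosquito", 0.91, (40, 60, 72, 88))]

def crop(frame, bbox):
    """Crop a bbox region from a frame stored as a list of pixel rows."""
    x1, y1, x2, y2 = bbox
    return [row[x1:x2] for row in frame[y1:y2]]

def vlm_describe(patch):
    """Hypothetical VLM call; here it just reports the patch size."""
    return f"patch {len(patch[0])}x{len(patch)}"

def run_pipeline(frame, score_threshold=0.5):
    results = []
    for label, score, bbox in detect_objects(frame):
        if score < score_threshold:
            continue  # drop low-confidence detections before the VLM step
        patch = crop(frame, bbox)
        results.append((label, vlm_describe(patch)))
    return results

frame = [[0] * 160 for _ in range(120)]  # dummy 160x120 single-channel frame
print(run_pipeline(frame))  # [('mosquito', 'patch 32x28')]
```

The thresholding before the VLM call is the point of running a cheap detector first: the expensive model only ever sees small, pre-filtered crops.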
Homes @Sarcastic_Cholo:
@sauce2157 @SkipperStagg @RealDanODowd I mean, Andrew doesn’t acknowledge that Dan has a point, so I would guess the personal wealth he has tied up in Tesla is also more important in this case.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 15
Toad Sauce @sauce2157:
@SkipperStagg @RealDanODowd Dan’s a fucking idiot and doesn’t care that lives will be lost from him keeping people from trusting FSD. Dan thinks his personal wealth is more important than people’s lives.
Replies: 2 · Reposts: 0 · Likes: 0 · Views: 167
Homes @Sarcastic_Cholo:
@SkipperStagg @RealDanODowd He’s right in that a senior member of the team should know what to do when road visibility is covered by smoke. Or are you just as retarded as Dan?
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 109
Andrew Stagg @SkipperStagg:
@RealDanODowd Looks like FSD figured it out. FSD is 9x safer than the average driver in the USA. Dan has less than 9x the IQ of the average American. In case you were wondering: yes, Dan has a financial incentive to try to make FSD look bad.
Replies: 38 · Reposts: 0 · Likes: 143 · Views: 22.4K
Homes @Sarcastic_Cholo:
@RealDanODowd I disagree with you 1000/1000 times, but you’re right on this one. You fucking slow down. FSD would be valuable when human reaction isn’t fast enough, but this shows the opposite.
Replies: 0 · Reposts: 0 · Likes: 3 · Views: 545
Homes @Sarcastic_Cholo:
@sudoingX So a specialized model worked for a specialized task?! That’s unheard of!
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 191
Sudo su @sudoingX:
I am not able to recover from this one. A 27B dense model on a $900 RTX 3090 outperforming a 120B MoE on a $70K production node with 2x H200 NVL at full precision. This is not easy to process. It changes the way we pick models for any task.

If you're an AI startup running 120B MoE inference for agent workflows, and a 27B dense model with all parameters active on a single consumer GPU does it better, your compute bill might be solving the wrong problem.

I am writing the full deep-dive article to document everything here and share with you all so we can reproduce and verify. The reproduction test comes first: same 3090, same 27B dense Q4, same prompt, same harness. If it holds twice, it's not a fluke; it's architecture. And based on what the VRAM poll is showing me right now, most of you are sitting on the exact hardware that already won this fight. Article drops this week.

Quoting Sudo su @sudoingX:
I am still in shock that Qwen 3.5 27B dense on a single RTX 3090, a $900 GPU, one-shotted a game challenge that a 120B MoE at full precision on $70K+ production hardware could not. This is leading me to doubt whether it was a fluke, so I am going to reproduce it. I will test 27B dense Q4 on my single 3090 again, paired with Hermes Agent, and have it reproduce the results. After that I will test the same dense 27B unquantized, because if Q4 can one-shot something that 120B at full precision cannot, I wonder what dense 27B unquantized would do.

Dense models with all parameters active on every token might matter more than total parameter count for agent coding. If this reproduces, it changes how I think about what hardware you actually need. This has not let me sleep well since yesterday. I will report back.

Replies: 84 · Reposts: 104 · Likes: 1.5K · Views: 156.6K
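The dense-vs-MoE argument in this thread hinges on parameters active per token, not total parameter count. A back-of-envelope sketch, assuming an illustrative 5B active parameters for the 120B MoE (the real figure depends on that model's expert routing and is not given in the post):

```python
# Back-of-envelope: parameters active per token, dense vs. MoE.
# The MoE active count is an ASSUMED illustrative figure; many ~120B MoE
# models activate only a few billion parameters per token.

dense_total = 27e9
dense_active = dense_total   # dense: every parameter fires on every token

moe_total = 120e9
moe_active = 5e9             # assumed active subset for the 120B MoE

print(f"dense active/total: {dense_active / dense_total:.0%}")   # 100%
print(f"moe   active/total: {moe_active / moe_total:.1%}")       # 4.2%
print(f"active-parameter ratio (dense/moe): "
      f"{dense_active / moe_active:.1f}x")                       # 5.4x
```

Under these assumptions the "smaller" 27B dense model actually applies roughly 5x more parameters to each token than the 120B MoE, which is one plausible mechanism behind the result the post describes.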
Om Patel @om_patel5:
I taught Claude to talk like a caveman to use 75% fewer tokens.

Normal Claude: ~180 tokens for a web search task.
Caveman Claude: ~45 tokens for the same task.

"I executed the web search tool" = 8 tokens. Caveman version: "Tool work" = 2 tokens. Every single grunt swap saves 6-10 tokens; across a FULL task that's 50-100 tokens saved.

Why does it work? Caveman Claude doesn't explain itself. It does its task first, gives the result, then stops. No "I'd be happy to help you with that." No "Let me search the web for you." No more unnecessary filler words. "Result. Done. Me stop."

50-75% burn reduction, with usage limits getting tighter every week. This might be the most practical hack out there right now.
[image]
Replies: 967 · Reposts: 1.5K · Likes: 24.6K · Views: 2.9M
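The claimed savings reduce to simple arithmetic. A minimal sketch using the post's own figures (an 8-token phrase vs. a 2-token grunt, and an assumed ~10 swaps per task, which is consistent with the quoted 180-vs-45 example):

```python
# Token-savings arithmetic behind the "caveman" prompting claim.
# swaps_per_task=10 is an assumption chosen to match the post's numbers.

def savings(verbose_tokens, terse_tokens, swaps_per_task):
    """Tokens saved per task if every verbose phrase becomes a terse one."""
    per_swap = verbose_tokens - terse_tokens
    return per_swap * swaps_per_task

per_task = savings(verbose_tokens=8, terse_tokens=2, swaps_per_task=10)
print(per_task)              # 60 tokens saved per task

reduction = 1 - 45 / 180     # the post's 180-token vs. 45-token example
print(f"{reduction:.0%}")    # 75%
```

So the headline "75% fewer tokens" follows directly from the 180-to-45 example, while the per-swap figure (6 tokens here) lands inside the post's quoted 6-10 range.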
Mike P @mikepat711:
Tesla Self-Driving sucks and doesn't work. This compressed video shows 90 minutes of Tesla's V14.2.2.5 moving through the heart of Philadelphia and back to the suburbs for some errands. As you watch this clip, you'll start to realize that Tesla's goal of autonomy at scale is very far off, and probably won't ever actually happen. The entire charade is nonsense. Tesla cars have Level 2 ADAS, just like SuperCruise and BlueCruise. Yes, your Chevy Silverado does do this. More music by the homie @StainlessOne
Replies: 332 · Reposts: 59 · Likes: 1K · Views: 587.9K
Homes @Sarcastic_Cholo:
@DTK1281422 @lugaricano The mainstream is getting all the ideas that people who already know what they're doing have long had, only because a mainstream figure finally shared something.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 27
DT K @DTK1281422:
If you yourself are unable to come up with something similar when you're thinking about getting a handle on your resources, then I'll tell you honestly: I don't predict much success for you, whoever you are. So go ahead and be one of the thousands of sheep and follow him instead of using your own mind.
Replies: 1 · Reposts: 0 · Likes: 3 · Views: 1.7K
Luis Garicano 🇪🇺🇺🇦:
Two things: 1. If you care about AI and you don't follow Karpathy, you are making a major mistake. He is a huge provider of public goods. 2. This idea is genius, especially for academics. I have not implemented it, but it is my next project.

Quoting Andrej Karpathy @karpathy:
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/ and backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries, so my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), and find interesting connections for new article candidates, to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe-coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

Replies: 53 · Reposts: 155 · Likes: 4.9K · Views: 2.7M
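The ingest step described above (documents in raw/ compiled into a markdown wiki with summaries and links) can be sketched minimally. This is a sketch under stated assumptions, not Karpathy's actual tooling: `summarize()` is a stand-in for an LLM call, and the raw/ and wiki/ layout is assumed.

```python
# Minimal sketch of the raw/ -> wiki compile step described above.
# summarize() is a STAND-IN for an LLM summary call; the directory
# layout is an assumption for illustration.
import tempfile
from pathlib import Path

def summarize(text):
    """Stand-in for an LLM summary: just returns the document's first line."""
    first = text.strip().splitlines()
    return first[0] if first else "(empty)"

def compile_index(raw_dir, wiki_dir):
    """Walk raw/ and write a wiki index.md linking every document."""
    raw_dir, wiki_dir = Path(raw_dir), Path(wiki_dir)
    wiki_dir.mkdir(parents=True, exist_ok=True)
    lines = ["# Index", ""]
    for doc in sorted(raw_dir.glob("*.md")):
        # Obsidian-style [[wikilink]] plus a one-line summary per document
        lines.append(f"- [[{doc.stem}]]: {summarize(doc.read_text())}")
    (wiki_dir / "index.md").write_text("\n".join(lines) + "\n")
    return lines

# Demo against a throwaway directory
tmp = Path(tempfile.mkdtemp())
(tmp / "raw").mkdir()
(tmp / "raw" / "sam3_notes.md").write_text("Segment Anything notes\nmore...\n")
index = compile_index(tmp / "raw", tmp / "wiki")
print(index[-1])  # - [[sam3_notes]]: Segment Anything notes
```

Rerunning `compile_index` after adding files to raw/ regenerates the index, which mirrors the "incrementally compile" loop in the post; the real version would also write per-concept articles and backlinks.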
Homes @Sarcastic_Cholo:
@mikepat711 Let’s collab? I’ll be the stupid guy that knows what sarcasm is. You can be Mike P.
Replies: 1 · Reposts: 0 · Likes: 2 · Views: 75
Mike P @mikepat711:
@Sarcastic_Cholo This isn’t a comparison to my post. My post includes a video that refutes my description of it. You are stupid. Not sarcasm
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 102
Homes @Sarcastic_Cholo:
@Hesamation The CTO sucks at whatever he’s doing, then? No?
Replies: 0 · Reposts: 0 · Likes: 4 · Views: 254
ℏεsam @Hesamation:
Also, pretty crazy that the CTO is spending 3 SWE salaries on tokens. Makes you think you’re worth 10 days of Claude Code as an employee.
Replies: 6 · Reposts: 4 · Likes: 142 · Views: 19.9K
ℏεsam @Hesamation:
A Redditor claims Claude Code is nerfed for Pro/Max users vs. Enterprise customers, and that the strategy is to use paid-plan users to generate hype on X and LinkedIn so companies will reach out to them.
[image]
Replies: 205 · Reposts: 273 · Likes: 3.7K · Views: 435.3K
Homes @Sarcastic_Cholo:
@mikepat711 Sarcasm should sound like sarcasm, not a lie lmao
Replies: 2 · Reposts: 0 · Likes: 17 · Views: 745
Mike P @mikepat711:
This post is sarcasm guys. Holy fuck
Replies: 67 · Reposts: 1 · Likes: 722 · Views: 56.2K
Homes @Sarcastic_Cholo:
Who knows something?
[image]
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 18