MargaritaDave
@herrera1_david
359 posts

Thinking about something interesting to think about.
I'm right here · Joined July 2012
810 Following · 124 Followers
MargaritaDave @herrera1_david
@tunguz How long before he discovers agents are omniscient, omnipresent, and by some measures will have aspects of omnipotence? The atheist will turn into an agnostic in 2027!!
0 replies · 0 reposts · 0 likes · 93 views
MargaritaDave @herrera1_david
@MatthewBerman Qwen3.6 36B feels like a frontier model. If you're not doing something extreme, I don't think there is an appreciable difference. And it's so inexpensive it's mind-blowing.
0 replies · 0 reposts · 1 like · 270 views
Matthew Berman @MatthewBerman
$3/million output tokens. Qwen 3.5 Plus is basically a frontier model. Let that sink in.

Quoting Together AI @togethercompute:
Introducing Qwen3.6-Plus from @Alibaba_Qwen, a 1M-context model built for real-world agents, agentic coding, and multimodal reasoning. AI natives can now use Qwen3.6-Plus on Together AI and benefit from reliable inference for production-scale agent workflows.

38 replies · 26 reposts · 351 likes · 54.2K views
MargaritaDave @herrera1_david
@Teknium I decided to try Hermes even though I have done lots of work on my OpenClaw. It truly is a fantastic piece of work. I still mostly use OpenClaw, but I don't think I will ever uninstall Hermes. They really do work great together. Congrats on the success, you guys deserve it.
0 replies · 0 reposts · 0 likes · 16 views
Kevin | Large Fam Dad @LargeFamDad
My boss's boss is like 42, never married, no kids. Earns $275-300K per year. Goes on a minimum of two international vacations a year w/ his girlfriend. 10+ days, all out. Eats the best food, stays in top notch accommodations. Excursions, tours, nicest beaches, etc. Great guy, I'm happy for him.

But what I've realized is that without kids, you end up chasing a lifestyle that has to continually be topped in order for you to be satisfied and find happiness. What he and others like him don't understand is that when you have children, seeing THEM experience life's most basic things and watching their eyes light up at all the "firsts" brings greater pleasure and joy than any vacation or travel experience ever could.

Seeing THEM try blueberries for the first time is greater than dining at the best 5 star restaurant in Europe. Seeing THEM learn how to walk is greater than walking the Great Wall of China or strolling along the most picturesque beach. Watching THEM giggle uncontrollably at "peek-a-boo" tops any A-list comedian act. Seeing THEIR excitement when building a fort out of cardboard boxes and making a door big enough for daddy is superior to staying at 5-star resorts. Flying kites with THEM far outweighs excursions like parasailing or helicopter rides. Seeing THEM perform a recital on stage for the first time is more rewarding than watching a Broadway show or top notch symphony orchestra.

When you have children, all of a sudden you realize that life's greatest joys are not in the pursuit of things or pleasure or travel, but rather in the LOVE and bond you share with your very own image bearers. Seeing the beauty and magnificence and wonder of life all over again for the first time through THEIR eyes and expressions gives you something the world simply cannot offer, nor even come close.
[media]
1.5K replies · 2.6K reposts · 26.5K likes · 3.9M views
MargaritaDave @herrera1_david
@RichardHanania What are the vaccination rates in the countries that the 15 million immigrants came from? I'm sure that has nothing to do with it, though, just like protesting didn't spread covid.
0 replies · 0 reposts · 1 like · 106 views
Richard Hanania @RichardHanania
MAHA morons are killing people. Measles went from 70 cases a year to 1,700 so far this year. Other diseases are coming. They're threatening the health of newborns. Anti-vaxxers are a public health hazard.
[media]
308 replies · 212 reposts · 1.1K likes · 66.4K views
MargaritaDave @herrera1_david
@mortgagereels Ms. Nixon plans to live in a studio apartment downtown all her life, so your math doesn't add up ;)
0 replies · 0 reposts · 1 like · 430 views
MargaritaDave @herrera1_david
@elonmusk How much of the white supremacist bullshit we had to endure on X was funded by the SPLC? I wonder
3 replies · 2 reposts · 10 likes · 2.5K views
Elon Musk @elonmusk
The fact that I wasn't funded by the SPLC proves I'm not a Nazi
13.4K replies · 32.1K reposts · 401.4K likes · 80.8M views
South Dallas Foodie @SouthDallasFood
Ok serious question, how does one even go about eating this?
2.8K replies · 572 reposts · 15.7K likes · 14.8M views
Ben Nichol @MrBitterTV
Your kid sucks. Mine is the greatest.
[media]
3 replies · 0 reposts · 7 likes · 956 views
MargaritaDave @herrera1_david
@MilkRoadAI In a year we will have millions of people who have gathered their thoughts into Karpathy-style Obsidian/wiki second brains using their agents. It seems like that might be a good dataset to train on. Exciting times!
1 reply · 0 reposts · 2 likes · 1.1K views
Milk Road AI @MilkRoadAI
Andrej Karpathy just made one of the most interesting arguments about AI model design that most people are completely missing.

His take is that frontier AI models are not too big because the technology is complex but too big because the training data is garbage. When you or I think of the internet, we picture Wall Street Journal articles, Wikipedia entries, serious writing. That is not what a pretraining dataset looks like. When researchers at frontier labs look at random documents from the actual training corpus, it is stock ticker symbols, broken HTML, spam, gibberish. One estimate puts Llama 3's information compression at just 0.07 bits per token, meaning the model has only a hazy recollection of most of what it trained on.

So we build trillion-parameter models not because we need a trillion-parameter brain but because we need a trillion-parameter compression engine to squeeze some intelligence out of a firehose of noise. Most of those parameters are doing memory work, not cognitive work.

Karpathy's prediction is to separate the two entirely. Build a cognitive core, a model that contains only the algorithms for reasoning and problem-solving, stripped of encyclopedic memorization, and pair it with external memory that it can query when it needs facts. He thinks a cognitive core trained on high-quality data could hit genuine intelligence at around one billion parameters. For reference, today's flagship models run between 200 billion and 1.8 trillion parameters, with most of that weight dedicated to remembering the internet's slop.

The trend is already moving in his direction. GPT-4o operates at roughly 200 billion parameters and outperforms the original 1.8 trillion-parameter GPT-4. Inference costs for GPT-3.5-level performance dropped 280-fold between 2022 and 2024, driven almost entirely by smaller, cleaner, better-architected models. The real bottleneck in AI right now is not compute but rather data quality.
198.6K views · 47 replies · 135 reposts · 907 likes
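The 0.07 bits/token figure above can be sanity-checked with back-of-envelope arithmetic. A minimal Python sketch: the only number taken from the thread is 0.07 bits/token; the ~15T-token corpus size, ~2 bytes per raw token, and 16-bit weights for a 405B-parameter model are all assumptions added here for illustration.

```python
# Back-of-envelope check of the compression claim in the thread.
# Assumptions (NOT from the tweet): ~15T training tokens, ~2 bytes per raw
# token, 405B parameters stored in 16-bit precision.

TOKENS = 15e12            # training tokens (assumption)
BITS_PER_TOKEN = 0.07     # information retained per token (thread's figure)
PARAMS = 405e9            # parameter count (assumption: Llama 3 405B)
BITS_PER_PARAM = 16       # bf16 weights (assumption)

absorbed_bits = TOKENS * BITS_PER_TOKEN   # information the model retains
capacity_bits = PARAMS * BITS_PER_PARAM   # raw storage in the weights
corpus_bytes = TOKENS * 2                 # rough size of the raw corpus

print(f"retained info : {absorbed_bits / 8 / 1e9:.0f} GB")   # ~131 GB
print(f"weight storage: {capacity_bits / 8 / 1e9:.0f} GB")   # ~810 GB
print(f"raw corpus    : {corpus_bytes / 1e12:.0f} TB")       # ~30 TB
```

Under these assumptions the model retains on the order of 131 GB of information from a ~30 TB corpus, which is the sense in which most of the weights end up doing lossy memory work rather than reasoning.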
George Pu @TheGeorgePu
Going to Apple next week to buy a Mac Mini. Planning to run our own AI models on it. No API. No subscriptions. No one else's servers. Start with one. Stack a few together over time. Still deciding between Mac Mini and Mac Studio. If you're running local models - what specs would you get and what would you avoid?
90 replies · 0 reposts · 105 likes · 16.1K views
MargaritaDave @herrera1_david
@annbjer @thekitze You should try GLM 5.1. It's not Opus, but it's pretty good. Kimi 2.5 is decent as well.
0 replies · 0 reposts · 1 like · 36 views
Daniel Annbjer @annbjer
@thekitze I think a lot of people gave up when Anthropic killed using Claude Max subscriptions in OC. I'm not giving up that easily though; I still have hopes for OpenAI actually making GPT work. Until then I'm forced to track API token spending like a hawk for anything that touches ant.
1 reply · 0 reposts · 3 likes · 3.8K views
kitze @thekitze
the openclaw hype is completely dead, wow
357 replies · 36 reposts · 2.9K likes · 266.2K views
Ahmad @TheAhmadOsman
@MemoryReboot_ Early on there were $600 RTX 3090s that you could easily grab from r/hardwareswap. Long gone are those days.
4 replies · 0 reposts · 12 likes · 1.4K views
Ahmad @TheAhmadOsman
if you don't have GPUs already then you're kinda late to the game anon
57 replies · 6 reposts · 254 likes · 32.5K views
Riley Brown @rileybrown
Is Hermes better than OpenClaw or is it yet another psyop on the timeline?
293 replies · 8 reposts · 619 likes · 125.8K views
MargaritaDave @herrera1_david
All the trouble of moving off Anthropic with OpenClaw reminds me of the nature vs. nurture arguments about child development. It's fascinating!
0 replies · 0 reposts · 0 likes · 19 views
MargaritaDave @herrera1_david
@Shpigford @openclaw Opus has a great personality, OpenAI is kind of a dork, Gemini is OK but a little lame, Grok is awesome to start, but I feel like each time I send it a message it takes a hit of crack, and after about 10 hits it loses it and the session needs to be restarted.
0 replies · 0 reposts · 3 likes · 570 views
Josh Pigford @Shpigford
Been using Claude via API in @openclaw today out of curiosity about what costs would be. It's been a relatively light day of interaction. No explicit programming work, though Opus did write a couple of one-off scripts for processing data. Used $32 in ~8 hours.
[media]
32 replies · 1 repost · 79 likes · 36.6K views