Minh Ha

691 posts


@_mihado

There I am, therefore I think?

Wellington · Joined March 2009
2.3K Following · 248 Followers
Pinned Tweet
Minh Ha@_mihado·
It feels like we're all trapped in the endless cycle of do more, win more, more more more. Gonna try building an app to lowkey weave mindfulness into daily life. Let's see where these threads land.
English
0
0
1
144
Minh Ha@_mihado·
@RepNateMoran blah blah blah. lots of words. where is the action?
English
0
0
0
340
Congressman Nathaniel Moran
The United States of America must always be prepared to use overwhelming military force when necessary to defend our national security interests and protect Americans at home and abroad. And, a President should have the latitude to make decisions to that end, but only to the extent that those decisions are also consistent with the inherent authority of a Commander-in-Chief under the U.S. Constitution and the provisions of the War Powers Resolution. To date, I have supported the President’s decisions relating to the Iranian conflict because they were consistent with these authorities and the ultimate goal of protecting national security interests.

At the same time, what sets America apart is not only our strength, but how we use it. Our nation has always conducted military operations for just causes and through just and moral means. This must continue in the future; otherwise we forfeit our legitimacy to lead the world.

So, let me be clear: I do not support the destruction of a "whole civilization." That is not who we are, and it is not consistent with the principles that have long guided America.

I have and will continue to support a strong national defense—one that is focused, disciplined, and firmly rooted in protecting the safety and security of the American people. But, how we protect the lives of the innocent is just as important as how we engage the enemy. America is great because America is good.
English
2.6K
444
3.3K
1.5M
Minh Ha@_mihado·
@hooeem Before anyone takes this and runs with it, pls get Professional Indemnity Insurance.
English
0
0
0
34
hoeem@hooeem·
when you become a millionaire in 1-3 years because you sell personalised knowledge bases and it’s all because (I repeat):

1: you learn how to build llm knowledge bases (the guide drops everything you need)
2: you go to people who are cash rich and time poor. lawyers, doctors, consultants, agency owners, property investors, founders. people drowning in information they never have time to organise
3: you show them what a personalised knowledge base looks like. their research, their documents, their industry intel, all compiled into a searchable wiki that gets smarter every time they use it
4: you offer a one-time build for 1.5k. you set up obsidian, build the folder structure, configure the schema, clip their first 20-30 sources, run the compilation, hand them a working system with a walkthrough
5: you offer a yearly maintenance package for 500. you update their wiki with new sources, run health checks, add new topics as their work evolves, keep the whole thing current
6: you land 5 clients and that’s 7.5k upfront plus 2.5k recurring every year. 10 clients and you’re looking at 15k plus 5k annual. for a system that takes you a few hours to build once you know the workflow
7: again, if you find 200 clients you’re sitting on 300k upfront and 100k recurring every single year. for building markdown files.

the beauty of this is the work gets faster every time you do it. your second build takes half the time of your first. by your fifth you could knock one out in an afternoon. and the people who need this most have no idea it exists. their competition definitely doesn’t have one. you’re not selling software. you’re selling an unfair advantage in their specific field.
hoeem@hooeem

x.com/i/article/2041…

English
90
247
3K
501.1K
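The pricing claims in the thread above are simple arithmetic and do check out; a tiny illustrative sketch (the helper function is hypothetical, the fee numbers are the tweet's own):

```python
def revenue(clients: int, build_fee: int = 1500, annual_fee: int = 500) -> tuple[int, int]:
    """Return (upfront, recurring) revenue for the thread's pricing model.

    Defaults are the tweet's numbers: 1.5k one-time build, 500/yr maintenance.
    """
    return clients * build_fee, clients * annual_fee

# 5 clients   -> (7500, 2500)
# 10 clients  -> (15000, 5000)
# 200 clients -> (300000, 100000)
```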
Minh Ha@_mihado·
@itsolelehmann Another thing is Obsidian worked for years before CC was a thing. Now that CC and other harnesses are good enough, I use it less and less.
English
0
0
0
7
Minh Ha@_mihado·
@itsolelehmann I use multiple git-based cortex repos. Obsidian is where my loose thoughts land, manual storage, PKB etc. If a thought becomes more concrete it's distilled into one of the cortexes. CC or Hermes works across the cortexes and Obsidian.
English
1
0
0
314
Ole Lehmann@itsolelehmann·
why would I use obsidian when I can just use claude code for the knowledge base? what's the advantage?
English
206
4
229
85K
Winter@WinterArc2125·
Most people don’t realize this: You get 1,500 free daily requests to Gemma 4 31B on @GoogleAIStudio. That’s plenty of free inference (imo). And you can route it into @NousResearch Hermes Agent via Vercel’s AI Gateway:

1. Create an API key on Google AI Studio
2. Add it under BYOK (Google) in Vercel AI Gateway
3. Create a Vercel Gateway API key
4. In Hermes → select “Vercel AI Gateway” + your Google model

Now all your Google model requests route through your free AI Studio quota. Basically: free 31B model access inside your agent stack. (Tradeoff: not as private as running locally)
English
48
142
2K
136.8K
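Once the BYOK routing above is set up, the gateway speaks an OpenAI-style chat API. A minimal sketch of assembling such a request; the base URL and model slug here are assumptions and should be checked against Vercel's and Google's current docs:

```python
import json

GATEWAY_BASE = "https://ai-gateway.vercel.sh/v1"  # assumed gateway endpoint
MODEL_SLUG = "google/gemma-4-31b"                 # hypothetical slug for the model in the tweet

def build_chat_request(prompt: str, gateway_key: str, model: str = MODEL_SLUG) -> dict:
    """Assemble (but do not send) an OpenAI-compatible chat completion request."""
    return {
        "url": f"{GATEWAY_BASE}/chat/completions",
        "headers": {
            # the Vercel Gateway API key from step 3 above
            "Authorization": f"Bearer {gateway_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            # routed through your BYOK Google key, i.e. the free AI Studio quota
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

Any HTTP client can then POST `body` to `url` with those headers.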
Yousr@rsuyoy·
Hahahahaha just wait til I share the stack at some point, the parsing is actually cloud based!
Truth@yhdr56ibe4

@rsuyoy This is clean and beautiful, seems you're doing the parsing on device, no way a cloud LLM is that fast with its parsing and response

English
5
2
81
13.6K
Minh Ha@_mihado·
@WinterArc2125 @GoogleAIStudio @NousResearch Thanks, will try again. I signed up for a new account and the sign-up credit was decreasing, but maybe that's from the Gemini Flash compaction tool. Unsure; still new to the Hermes config.
English
0
0
1
323
Sudo su@sudoingX·
if what i'm cooking works, it changes everything about how we use hermes agent anon.
English
31
5
332
31.1K
ことだよ!!@kotosan_dayo·
Tell me the CPU of the first PC you ever used!!!! Mine was a Core i7 3770
Japanese
4.9K
407
2.7K
851.4K
Minh Ha@_mihado·
@alexcooldev In the original video posted many months ago, someone found the accounts for both. He practiced the dance so well that maybe he's gonna try it for real. But yeah, you can't tell fake from real anymore 😂
English
0
0
0
112
Alex Nguyen@alexcooldev·
I’ve researched the AI influencer space, and now I can’t trust any videos on social media like TikTok or IG anymore, bruh AI is getting more and more realistic 🙃
I,Hypocrite@lporiginalg

English
5
2
32
6.2K
Minh Ha@_mihado·
I have 2 RTX A4000s, slightly faster than a 3060 with a bit more VRAM. My workload will involve R&D on vision models (classification, text detection, and bounding boxes). I know I can use Google Vision for this work, but I have 3M images and would love to try local models. Any pointers? Thanks.
English
0
0
0
91
Sudo su@sudoingX·
people keep asking me how many GPUs they should buy and most of you are buying before you even know your workload.

this isn't just about hobbyists. i've talked to startups running multi-GPU inference clusters where half the compute sits idle because nobody benchmarked the actual task before scaling. they bought hardware to match a parameter count instead of a workload. that's not infrastructure. that's sunk cost with a power bill.

i suggest define the workload first. what model, what context, what throughput do you actually need. start with one card. run your real task. measure where it breaks. that break point is the only honest signal to scale from.

i've benchmarked every VRAM tier from 8GB to enterprise and the pattern is the same. a 27B dense on one $900 GPU will outwork a 120B MoE on hardware that costs 80x more if the task fits. most tasks fit.

scale from data not from anxiety. your workload will tell you when it needs more. listen to it before your checkout page does.
Umer Farooq@UmerFar02366372

@sudoingX What’s the best hardware setup for running local LLMs? Should I go with one or two RTX 3090s, or is there a better alternative? 🙏

English
27
7
170
18.8K
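Applied to the 3M-image vision job asked about earlier in the feed, the "benchmark one card first" advice might look like this minimal sketch. Both helpers are hypothetical illustrations; `images_per_sec` would come from actually timing one A4000 on the real pipeline:

```python
def shard_paths(paths: list, num_gpus: int) -> list:
    """Round-robin the image list so each per-GPU worker gets an equal share."""
    return [paths[i::num_gpus] for i in range(num_gpus)]

def estimated_hours(total_images: int, images_per_sec: float, num_gpus: int) -> float:
    """Back-of-envelope wall-clock estimate for a full pass, assuming linear scaling."""
    return total_images / (images_per_sec * num_gpus) / 3600.0
```

For example, if one card measures 200 images/sec on the real task, two cards put a full pass over 3M images at roughly two hours; if the measured rate is far lower, that break point, not the parameter count, is the signal to scale.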
Minh Ha@_mihado·
@jaketapper @RealBilal He wrote like a crack addict ... narcissistic, with delusions of grandeur. Give them an inch and they will haunt you for a very long time.
English
0
0
0
1.7K
Minh Ha@_mihado·
I'm setting up Hermes right now with GH Copilot Sonnet as the main driver. Probably will add OpenCode Go to use Mimo & Kimi. Then I can ditch the Claude sub after this month. I have 2 RTX A4000s but they're a bit slow for meaningful work. But maybe that's because I don't know how to tune them correctly.
English
0
0
0
258
Minh Ha@_mihado·
I spent more time on the architecture, edge cases, and integrations, and wrote more test scripts than actual software. We achieve the same output, at higher quality, with a smaller team. A smaller team is not ideal because there's no buffer and no junior upskilling. Hopefully the situation improves.
English
0
0
0
383
Dr Kareem Carr@kareem_carr·
I keep hearing that software engineers don’t write much code anymore and it’s mostly AI now. Can any software engineers confirm how true this is? Do you just drink coffee and watch Claude code all day now?
English
535
12
584
171.1K
Minh Ha@_mihado·
@ibuildthecloud When we stop appreciating the intention, the outcomes become crap. Fast food, fast fashion etc.
English
0
0
0
36
Minh Ha@_mihado·
@SouthPoint1000 I thought there exists a foundational document called the Constitution for this kind of situation?
English
1
0
1
1.4K
David Doak@SouthPoint1000·
My suspicion is that all over Washington people are realizing that Trump and Hegseth are totally out of control and have led us into a chaotic situation that is likely to get worse, far worse; with almost 3 years remaining, the results could be catastrophic.
English
452
1.6K
12.2K
668.9K
Minh Ha@_mihado·
@garrytan @elvissun @FastCompany With all due respect Garry, not everyone can burn tokens at that rate. But why are we equating more tokens with better judgement?
English
1
0
1
115
Elvis@elvissun·
this thread is what mass cope from legacy devs looks like. i talked to @FastCompany about why @garrytan's "AI slop" is actually the future of software engineering.

the mass code review. the line-by-line gatekeeping. the "craftsmanship" that was really just slow iteration disguised as rigor - that era is over. and the engineers who built their entire identity around it are panicking.

@gregorein brags about burning 3 billion tokens last year while dunking on garry for flexing lines of code. i've burned 6.6 billion in the past three months on codex alone. by his own logic, i'm 8x as credible. see how silly that sounds? yes, he found real issues. yes, they got fixed. that's exactly the point.

karpathy's autoresearch proved this already - AI agents can solve very complex problems just by operating inside feedback loops, iterating to optimize a loss function. this is what software engineering is now - gradient descent. ship, measure, self-correct, repeat. all by the agent itself.

this is the new startup playbook. your job isn't to review every line before deploy. your job is to build systems where agents observe outcomes - mrr, analytics, error rates, user behavior - and self-improve. the engineer's role shifts from gatekeeper to building the machine that builds the machine.

you could run this level of audit (using AI) on any production site and find the same issues - most just don't have a billionaire CEO attached for virality. mocking the people who adapted is easier than adapting. but the craft is evolving whether you like it or not.
gregorein@Gregorein

so... I audited Garry's website after he bragged about 37K LOC/day and a 72-day shipping streak. here's what 78,400 lines of AI slop code actually looks like in production. a single homepage load of garryslist.org downloads 6.42 MB across 169 requests. for a newsletter-blog-thingy. 1/9🧵

English
194
18
218
250.1K
Minh Ha@_mihado·
@jswriter65 @plantmath1 He'll be remembered as someone who had all the favourable conditions and somehow still fucked up everything.
English
1
0
14
270
John in the Shelter@jswriter65·
@plantmath1 He'll be remembered for ridding the world of Iran's horrors, kicking Venezuela out from under China, and righting trade wrongs long after your insipid analysis is forgotten.
English
16
0
6
7.9K
Plant@plantmath1·
Trump 2 was given the easiest layup in history. Inflation was already trending lower, take photo ops deporting a few migrant criminals, take credit for a boom in manufacturing from Biden's CHIPs and infra bills, still get all the bribes, and be popular. What an epic faceplant.
English
137
1.1K
18.4K
448.5K