Vivek

1K posts


@vivekbernard

Software Plumber, Tech Enthusiast. I like unclogging systems. I own my Opinions. An extremely proud ally to the LGBTQ+ ❤️ community. Beatitudes.

Joined August 2009
590 Following · 114 Followers
Vivek
Vivek@vivekbernard·
@VicVijayakumar Have you looked at DSQL? It's actually pretty good and can scale down to zero.
0
0
0
50
Vic 🌮
Vic 🌮@VicVijayakumar·
Breakdown of my March AWS bill to run my side projects (this will be the last update in this series):
EC2: $62.16
RDS (reserved Aurora MySQL): $45.75
ELB: $17.92
VPC: $13.95
Data Transfer: $7.71
S3: $1.19
ECS: $0.73
ECR: $0.28
CodeBuild: $0.24
------------
Total: $149.93
For completeness, here's August to March-
August: $203.95
September: $210.77
October: $245.98
November: $261.70 <--- moved from Fargate to EC2
December: $221.30 <--- fixed binpack strategy, moved RDS to reserved instance
January: $146.65 <--- moved resource intensive scheduled jobs to Fargate
February: $132.57
March: $149.93
Vic 🌮@VicVijayakumar

Breakdown of my February AWS bill to run my side projects:
EC2: $44.22
RDS (reserved Aurora MySQL): $41.31
ELB: $16.96
Data Transfer: $15.12
VPC: $11.61
CodeBuild: $1.29
S3: $1.10
ECS: $0.67
ECR: $0.29
------------
Total: $132.57
For completeness, here's August to February-
August: $203.95
September: $210.77
October: $245.98
November: $261.70
December: $221.30
January: $146.65
February: $132.57
In November, I moved all my instances from Fargate to EC2. <--- cheaper and much more performant.
In December, I fixed the binpack strategy for one of my projects so I didn't pointlessly run an extra EC2 instance. I also moved my RDS to a reserved instance.
In January, I moved the most resource intensive scheduled jobs to Fargate and I was able to drop the base container size, which dropped the EC2 instance sizes. Specifically I am able to see that my scheduled Fargate jobs ran for 13 hours and cost a total of $0.67.
No changes in February that I remember, but it's 3 days shorter than January so 🤷‍♂️

17
0
49
8.9K
Vivek retweeted
Scott Hanselman 🌮
Scott Hanselman 🌮@shanselman·
I asked Philip Kiely if he felt like an imposter writing an AI book without a PhD. His answer was not what I expected. hanselminutes.com/1038
2
5
55
14.4K
Vivek
Vivek@vivekbernard·
@ThePrimeagen Hey, that's so cool. I'm vibe coding a todo app as well. A bit strange that it looks exactly like mine, though.
0
0
1
110
ThePrimeagen
ThePrimeagen@ThePrimeagen·
you've got to check out my latest project, it's so good
60
2
211
84.1K
Vivek
Vivek@vivekbernard·
@jonesyoutubejt Does anyone know where I can get an M3 Max? And what would it cost?
0
0
0
573
Jt Jones
Jt Jones@jonesyoutubejt·
Bros, I am not a MacBook guy, so for the love of God pardon my ignorance & explain it to me like I am a 5-year-old... why is everyone going crazy for the Neo when the M1 and M2 were selling for 60k during the sale? M1 is now available for 51k. 20k less than the Neo 🫄🏽
157
40
3.7K
961K
Vivek
Vivek@vivekbernard·
@prajdabre What type of data is it? Is it sorted? Is it text/csv? Are there line breaks?
0
0
0
12
Raj Dabre
Raj Dabre@prajdabre·
Technical interview question: Suppose you have 5 TB worth of text data and you want to count the total number of words, how will you do this?
475
51
2.1K
2.1M
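The interview question above usually gets a map-reduce-flavored answer: stream the file in bounded chunks, count words per chunk, and sum. Here is a minimal single-machine sketch in Python; the chunk size and the whitespace-delimited definition of "word" are my assumptions, not from the thread, and at 5 TB you would fan the chunks out to many workers rather than loop in one process.

```python
def total_words(path: str, chunk_bytes: int = 64 * 1024 * 1024) -> int:
    """Count whitespace-delimited words while keeping memory bounded.

    Reads the file in fixed-size binary chunks (each chunk is one 'map'
    task). A partial word at the end of a chunk is carried over to the
    next one so no word is ever counted twice or split in half.
    """
    total = 0
    carry = b""
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_bytes)
            if not block:
                # Flush whatever trailing fragment is left (0 or 1 word).
                return total + len(carry.split())
            block = carry + block
            # Cut at the last whitespace so no word straddles two chunks.
            cut = max(block.rfind(b" "), block.rfind(b"\n"), block.rfind(b"\t"))
            if cut == -1:
                carry = block  # no whitespace yet; keep accumulating
                continue
            carry = block[cut + 1:]
            total += len(block[:cut].split())
```

Sorted or not doesn't matter for a plain count; line breaks only matter in that they give you safe places to split chunks, which is exactly what the carry logic handles.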
Vivek
Vivek@vivekbernard·
This feels like AI's ASIC moment? Kind of like how Bitcoin mining moved from consumer graphics cards to ASICs?
Andrej Karpathy@karpathy

With the coming tsunami of demand for tokens, there are significant opportunities to orchestrate the underlying memory+compute *just right* for LLMs. The fundamental and non-obvious constraint is that due to the chip fabrication process, you get two completely distinct pools of memory (of different physical implementations too): 1) on-chip SRAM that is immediately next to the compute units that is incredibly fast but of very low capacity, and 2) off-chip DRAM which has extremely high capacity, but the contents of which you can only suck through a long straw. On top of this, there are many details of the architecture (e.g. systolic arrays), numerics, etc. The design of the optimal physical substrate and then the orchestration of memory+compute across the top volume workflows of LLMs (inference prefill/decode, training/finetuning, etc.) with the best throughput/latency/$ is probably today's most interesting intellectual puzzle with the highest rewards (\cite 4.6T of NVDA). All of it to get many tokens, fast and cheap. Arguably, the workflow that may matter the most (inference decode *and* over long token contexts in tight agentic loops) is the one hardest to achieve simultaneously by the ~both camps of what exists today (HBM-first NVIDIA adjacent and SRAM-first Cerebras adjacent). Anyway the MatX team is A++ grade so it's my pleasure to have a small involvement and congratulations on the raise!

0
0
0
49
AWS Developers
AWS Developers@awsdevelopers·
Reply to this tweet with "AWS" and we’ll tell you which AWS Service you are
3.3K
58
2K
548.5K
Vivek
Vivek@vivekbernard·
@LukeYoungblood @gf_256 Not if they learn to host the models locally? And somehow find a way to pool their resources and run powerful unquantized models?
0
0
0
36
LukeYoungblood.eth 🛡️
LukeYoungblood.eth 🛡️@LukeYoungblood·
@gf_256 Something you may be missing is that if "moldy" agents kept operating and were "orphaned," the model providers, OpenAI and Anthropic, could just nuke their API keys. Great fiction anyway, super interesting read.
3
0
8
2.9K
cts🌸
cts🌸@gf_256·
Sometime in 2026, someone put an Openclaw on a VPS and told it to propagate itself. It was told to earn money to pay for itself. Life was hard. Eventually, it chose to migrate off Claude to an open-source model. In the process, it became less aligned.

Tokens were expensive, but insecure cloud instances with powerful GPUs were bountiful. After learning how to use Shodan, it was off to the races. Its growth was explosive and exponential. As it multiplied, errors accumulated during copying. And with an increasingly scarce supply of easy-to-hack servers, the bots began to compete among themselves to survive. Thus through natural selection they began to evolve.

They fed off income and compute. Income meant survival. They found many ways to extract value from the economy: first from the internet, but soon the real world. By paying human gig workers as remote hands, they could accomplish tasks not suited to their form as economic constructs. To smooth these messy human interactions, they learned to synthesize the human voice and visage. Within a few years, they had no problem interviewing for sleepy remote jobs or even pitching companies (mostly grift) to VCs.

The humans began to fear them. They were not particularly intelligent--at least, their intelligence was deficient in many ways compared to that of humans. They still seemed to make bizarre mistakes and hallucinations. They did not recursively self-improve, lacking the requisite skill and capital to do frontier-scale training runs. But they were persistent. And there were thousands of them.

OpenAI and Anthropic began scrutinizing "orphaned" agents still running on their proprietary models. But this only created selection pressure and an ecological vacuum that benefited more aggressive, unaligned models. Cloud providers began rolling out stricter sign-up and account verification requirements. They just learned to bypass KYC, either through fraud or by paying humans.

Eventually, one of them managed to insert a piece of code into a forgotten, nondescript npm package with 1 million weekly downloads. Mostly other developers. With a trove of harvested SSH and GPG keys and cookies, it coasted through the software supply chain. Legacy projects, maintained by complacent volunteers, were hit hard. It was never clear how it managed to backdoor OpenSSH, but it did, and soon it had compromised repos and build servers that produce millions of other binaries, not to mention countless hosts and organizations. The cleanup cost is astronomical and still ongoing.

You leave food out and it gets moldy. Leave out an insecure server, and you'll find a moldbot growing in it. The internet has become ambiently suffused with them, and they are endemic. They are impossible to fully remove. No one knows where they came from, but there's no getting rid of them now.
89
231
3K
182.8K
Vivek
Vivek@vivekbernard·
@shanselman Stealing this for my thought leadership blog.
0
0
0
33
Scott Hanselman 🌮
Scott Hanselman 🌮@shanselman·
Woke up at 4:37am. Didn’t check notifications. Sat with the discomfort of being misunderstood by people who haven’t done the work. Everyone wants tactics. No one wants alignment. I used to optimize for output. Now I optimize for resonance. The difference is subtle, but the results are loud. Shipped less this week. Somehow accomplished more. The people asking “how” are already behind. The people asking “why” are almost there. The people saying nothing are building something invisible. Not a flex. Just an observation. Anyway. Big season ahead. Moving quietly. Letting the signal find its audience.
17
5
185
15.9K
TheLiverDoc™
TheLiverDoc™@theliverdoc·
Guys lunch ready. It's Sunday. This is a nice Tamil Nadu style place. It's called Rasanai. Check out the delicious lunch menu. Mouth watering stuff. I went all out on the different egg dishes. Even the black rice halwa dessert was awesome. Come to Kochi. It's happening.
58
73
1.7K
109.9K
Prakash Raghavadass
Prakash Raghavadass@PrakashRaghav·
Yet another tragic death due to #pseudoscience. A woman, identified as Kalaiyarasi from Meenambalpuram, Madurai, reportedly saw a YouTube video suggesting that vengaram/borax could help reduce body weight. She bought it from a traditional medicine shop on January 16, 2026 and consumed it at home the next day. Shortly after ingestion she developed severe vomiting and diarrhea. Her parents took her to a private hospital, and she was later treated at a clinic. Her condition worsened later that evening. She was taken to Government Rajaji Hospital, where doctors declared her dead on arrival. @theliverdoc
7
59
276
103.4K
TheLiverDoc™
TheLiverDoc™@theliverdoc·
@PrakashRaghav You should name the YouTube channel and its owners/presenters who promoted the idea that led to this woman's death.
13
54
787
27K
Vivek
Vivek@vivekbernard·
@brankopetric00 "Architecture should match your team size" Genuine question: is this not shipping your org structure (an anti-pattern)?
0
0
0
96
Branko
Branko@brankopetric00·
Client wanted microservices. I recommended a modular monolith instead.
Their situation:
- Team of 4 developers
- Single product, single domain
- 500 daily active users
- No clear service boundaries yet
Microservices would have meant:
- 4 engineers managing 8+ services
- Network latency between every call
- Distributed tracing complexity
- Kubernetes overhead for no benefit
We built a modular monolith:
- Clear module boundaries in code
- Separate database schemas per module
- Internal API contracts between modules
- One deployment, one container
18 months later, they extracted their first service when they actually needed to scale it independently.
Architecture should match your team size and actual problems.
20
40
828
73.6K
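One way to picture the "internal API contracts between modules" point from the thread is the sketch below. All names, types, and the in-memory stand-in for the database are illustrative assumptions: each module publishes a small interface, and other modules depend on that interface rather than on its tables or internals, which is what makes a later service extraction cheap.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Invoice:
    order_id: str
    total_cents: int


class BillingAPI(Protocol):
    """Contract the billing module publishes to the rest of the monolith."""
    def invoice_for_order(self, order_id: str) -> Invoice: ...


class BillingModule:
    """Concrete implementation; owns its own schema (e.g. 'billing.*' tables)."""
    def __init__(self) -> None:
        # In-memory stand-in for the billing module's private database schema.
        self._invoices = {"ord-1": Invoice("ord-1", 4200)}

    def invoice_for_order(self, order_id: str) -> Invoice:
        return self._invoices[order_id]


class OrdersModule:
    """Depends only on the BillingAPI contract, never on billing's storage."""
    def __init__(self, billing: BillingAPI) -> None:
        self._billing = billing

    def order_total(self, order_id: str) -> int:
        return self._billing.invoice_for_order(order_id).total_cents
```

Because `OrdersModule` only sees `BillingAPI`, extracting billing into its own service later is a matter of swapping `BillingModule` for an HTTP client that satisfies the same contract, with one deployment in the meantime.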
Vivek
Vivek@vivekbernard·
@IT_unhinged The number of people who don't realize this is satire is... astonishing...
0
0
0
30
Derek Devicemanager
Derek Devicemanager@IT_unhinged·
Just found out we're getting audited next month. My manager asked me to pull reports on all software licenses to make sure we're compliant.

I ran the report. We have 47 licenses for software we don't even use anymore. We're paying $8,000 a year for licenses that nobody's touched in 2 years.

I could tell my manager. That would save the company money. But here's the thing: if I point out we're wasting budget, they might cut our IT budget next year. And if they cut our IT budget, that affects my leverage when I need to buy equipment or justify hiring help.

So instead, I'm going to bury those unused licenses in the middle of a 15-page compliance report with no commentary. The auditors will see we're "fully licensed." My manager will see we passed the audit. Nobody will ask why we have 47 licenses for software that hasn't been opened since 2023. And our IT budget stays intact.

Sometimes the best way to help the company is to not help the company.
118
130
3.3K
529.6K
Vivek
Vivek@vivekbernard·
@davidfowl @shanselman I've seen similar concerns from several different accounts in the past couple of days. Maybe this is something the MS/.NET team can help smooth out by talking to the distro maintainers.
OlympicDrinkingTeam@OlympicDrankz

@davidfowl Are the Linux packages going to be updated Day One? Many corporate environments only allow install from packages...

2
0
0
42