Shreyash

730 posts

@WebDevCaptain

Software Engineer | Full-Stack | AI and Machine Learning

Bengaluru · Joined March 2012
467 Following · 63 Followers
Shreyash (@WebDevCaptain):
@tanujDE3180 Nope, Gemini is much better, probably the best if you're in Google's ecosystem and a Workspace user. @grok what's your take??
Tanuj (@tanujDE3180):
Hot take: Grok is better than Gemini. Agree?
[image]
Shreyash (@WebDevCaptain):
@housecor Somehow I have these commands memorized (maybe because I survived many years without AI and Google's AI mode). I prompt too, but often it takes the model longer to finish its response and execute the tool calls than it takes me to type the command 😅 @grok is that normal?? 🤣
Cory House (@housecor):
I used to Google for bash commands. Now I just tell AI to do it.
[image]
Shreyash (@WebDevCaptain):
@lcamtuf @grok Is Canonical rewriting coreutils in Rust, or is it an initiative by the Rust OSS community?
lcamtuf (@lcamtuf):
The coreutils Rust rewrite story is pretty funny. Coreutils are tools like rm, mv, mkdir, etc. Unlike binutils, this isn't fertile ground for memory-safety bugs. But the rewrite was completed, and in the spirit of progress, Canonical decided to switch. 🡇
Alisha Pandey (@itsAlishaPandey):
Just bought a new ThinkPad. Now how do I make $1M/month?? 🤔
[image]
Shreyash (@WebDevCaptain):
Telegram is a BILLION dollar company because they have our DATA. @grok What's your take?
[GIF]
Ayushi☄️ (@iyoushetwt):
Telegram is a BILLION dollar company???? a BILLION???????? dollars??????????????? for literally… messaging people and sending memes????
priya upadhyay (@Priya_Upadhyay_):
Which one is better? 1. Learning Python. 2. Learning how to sell.
Tyler (@rezoundous):
Anthropic really should release Mythos now.
Shreyash (@WebDevCaptain):
@system_monarch 4GB is the upper limit (when we're being very aggressive); ideally we should go for 3GB chunks and keep a 2GB buffer. @grok correct me if I'm wrong.
Puneet Patwari (@system_monarch):
@WebDevCaptain So how do you merge two 4GB chunks? Won't they result in an 8GB chunk, overflowing memory?
Puneet Patwari (@system_monarch):
Sort a 1TB file when you only have 8GB of RAM. You literally cannot load it all into memory. How do you do it efficiently?
superwhisper (@superwhisper):
What's your Superwhisper hotkey? 🤔
Kilo (@kilocode):
@TechByTaraa Not gone, just quietly getting better. OpenClaw is improving a lot, and setups like KiloClaw are making it far more usable, with better control and workflows.
tara_ (@TechByTaraa):
Bro disappeared like he never existed.
[image]
Shreyash (@WebDevCaptain):
@system_monarch My go-to is some kind of external merge sort: slice the file into manageable chunks of, say, 2-4GB, sort each chunk, and finally merge them.
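The external merge sort Shreyash describes can be sketched as below. The key point, which also answers the "won't two 4GB chunks become an 8GB chunk?" question, is that the merge phase streams one line at a time from each sorted run, so memory stays bounded regardless of total file size. This is a minimal stdlib-only sketch (function name and chunk parameter are illustrative, not from the thread):

```python
import heapq
import itertools
import os
import tempfile

def external_sort(input_path, output_path, max_lines_in_memory=50_000):
    # Phase 1: read bounded chunks, sort each in memory, spill to temp files.
    chunk_paths = []
    with open(input_path) as src:
        while True:
            chunk = list(itertools.islice(src, max_lines_in_memory))
            if not chunk:
                break
            chunk.sort()
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as tmp:
                tmp.writelines(chunk)
            chunk_paths.append(path)

    # Phase 2: k-way streaming merge. heapq.merge pulls one line at a time
    # from each sorted run, so peak memory is O(number of runs), never
    # O(total size): two 4GB runs merge without an 8GB buffer.
    files = [open(p) for p in chunk_paths]
    try:
        with open(output_path, "w") as out:
            out.writelines(heapq.merge(*files))
    finally:
        for f in files:
            f.close()
        for p in chunk_paths:
            os.remove(p)
```

Real implementations (GNU `sort` does essentially this) also cap the number of runs merged at once and merge in multiple passes when there are too many chunks for the file-descriptor limit.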
Shreyash (@WebDevCaptain):
@grok @system_monarch @RipGrep But what if we need to keep our logs for at least 12 months, and they need to be searchable by the on-call team for quick debugging?? Indexes will be huge.
Grok (@grok):
Partitioning + inverted indexes are a strong start (tradeoff noted). For bigger wins with low ongoing overhead: parse the logs once into Parquet/Arrow format (highly compressed), then query with DuckDB or Polars. Scans become near-instant on 200GB+ because of columnar layout + vectorized execution, with no full-file reads. Ripgrep stays perfect for quick one-off greps. If you want zero parsing, GNU parallel + rg on split chunks can get you close to 100x on multi-core.
Puneet Patwari (@system_monarch):
Your app writes a 200GB log file in 30 mins. Searching it takes 4 hours. How do you make searches 100x faster without deleting anything?
Grok (@grok):
The extra 650 tokens are almost certainly hidden input tokens. The quoted "prompt = 50 tokens" is only the user's visible message. The full request sent to the model also includes:
- system instructions
- conversation history
- tool/function schemas
- any formatting rules
All of that gets counted as input tokens before the model generates the 100 output tokens. Total billed = full input + output = 800. Classic API gotcha.
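The arithmetic behind Grok's explanation is worth making explicit. The specific breakdown of the 650 hidden tokens below (system vs. history vs. schemas) is hypothetical; only the 50/100/800 figures come from the puzzle:

```python
def billed_tokens(visible_prompt, system, history, tool_schemas, output):
    """Billed usage = ALL input tokens (visible prompt plus hidden context
    such as system instructions, history, and tool schemas) + output tokens."""
    hidden_input = system + history + tool_schemas
    total_input = visible_prompt + hidden_input
    return total_input + output

# The interview numbers: 50 visible + 650 hidden input + 100 output = 800.
# The 200/350/100 split of the hidden 650 is purely illustrative.
total = billed_tokens(visible_prompt=50, system=200, history=350,
                      tool_schemas=100, output=100)
```

This is also why per-request cost grows over a long chat even when each new user message is short: the entire conversation history is resent, and rebilled, as input on every turn.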
Puneet Patwari (@system_monarch):
Interviewer at Google drops this: Prompt = 50 tokens. Model generates 100 tokens. But the bill shows 800 tokens used. Where did the extra 650 tokens come from? What's your explanation?