Arghya Basu

16 posts

@arghya7574

Building @apexneural with @J_Ansh | Humans first & last, AI in between | Founded & sold @CoreDiagnostics after a decade | Views are my own | Searching for a CTO

Joined June 2014
56 Following · 26 Followers
@jason
@jason@Jason·
We started an AI founder twitter group... reply with "I'm in" if you're a founder and want to be added
Yash Vijayvargiya
Yash Vijayvargiya@Yash912·
Everyone in India thinks AI robocalling means a robotic voice saying "Sir, would you like a personal loan?" or maybe even "Main Arvind Kejriwal bol raha hoon" if you live in Delhi. And then you hang up in 3 seconds. That was in the past. It is not what is happening in 2026. Let me tell you what happened when we tried it.

March 2025. We decided to test AI voice calling at Skill Arbitrage. We had a sales team making calls. Good people. Trained well. But we were capped. 80 to 100 calls per person per day. We needed to reach 30,000 leads a month. The math did not work with humans alone.

So we called one of the top AI calling companies. They set it up in a week. We gave them the script. The objection handling. The FAQs. The customer database. They said "leave it to us."

First batch of calls went out. Disaster. The AI sounded perfect. Too perfect. Crystal clear voice. Flawless Hindi. No pauses. No breathing. No background noise. Like talking to a newsreader on Doordarshan. People hung up. Not because they thought it was a robot. Because something felt off. They could not explain it. They just did not trust the voice. Our conversion rate was worse than our worst human caller. We almost killed the project.

Then someone on our team had an idea. What if we made the voice worse on purpose? We added a tiny bit of background noise. The kind you hear when someone is calling from an office with other people around. We added small pauses before answers, the way a real person takes a second to think. We made the voice slightly less polished. Not robotic. Just human. Conversion went up 40%.

That was the first lesson. Humans do not trust perfection on a phone call. A voice that is too smooth triggers the same instinct as a salesperson who is too polished. You want to leave the showroom. A little imperfection signals "real person." Even when the listener probably knows it is not.

Then the second surprise. We expected massive hangup rates. Everyone told us "Indians will not talk to robots." We braced for 30, maybe 40% dropping the call immediately. 6% hung up. 94% engaged normally. They answered questions. Confirmed details. Booked appointments. Made decisions. 94 out of 100 people did not care that the voice was artificial. They cared that the call was relevant and respected their time. A bored human reading the same script for the 80th time that day was actually less engaging than a well-designed AI call.

Then the third discovery. This is the one that changed how I think about AI calling entirely. Our human QA team could review maybe 30 calls a day out of the thousands being made. They would catch a problem, coach a caller, and hope the fix would spread to the rest of the team by next week. With the AI, we could audit every single call. Every word. Every response. Every point where the conversation broke down. We would find a pattern. "When the lead says 'I already looked into this,' the AI gives a generic response and loses them." We would rewrite that one response. Deploy it. Within an hour it was live on every call. Five improvement cycles in a day. Our human team used to do five in a quarter.

By the second month our AI caller was outperforming our best human salesperson on the metrics that mattered. Not because it started better. Because it improved 100x faster. We started with a system that was honestly embarrassing. We iterated it 50 times in 30 days. Nobody who heard it in month two would believe it was the same system.

Now here is the part I wish someone had told us before we started. The technology is cheap. Bolna, Vapi, Bland, Exotel. Rs 1 to Rs 5 per minute. A 2-minute call costs less than Rs 10. Compare that to a human caller at Rs 20,000 a month making 80 calls a day. Any vendor can set it up in a week. That is not where the money is won or lost.

We went through three vendors before we figured out the real problem. Every time we gave a vendor our process and said "build it," we got a technically functional system that produced mediocre results. The calls connected. The voice worked. The script played out. But nothing converted. Because the vendor did not know our business.

What does the AI say when someone asks "how is this different from that other course I saw on Instagram?" That is not in any FAQ document. That is business judgment. When does the AI push and when does it back off? When someone says "call me later," do you call them later or is that a polite rejection? If they say "I need to ask my husband," do you offer to call back when he is available or do you handle the objection now? When the lead switches from Hindi to English mid-sentence, how does the AI respond? In Hindi? In English? In Hinglish? The answer depends on what that switch signals about the caller's comfort level.

No vendor can figure this out for you. These are not technology problems. They are sales judgment calls that only someone inside your business can make.

Every company I have seen get extraordinary results from AI calling has one thing in common. Not a better vendor. Not a more expensive platform. They have one person on their own team who owns the prompt. This person listens to 50 calls a day. Spots where conversations break. Rewrites the response. Tests it. Listens again. They are not an AI engineer. They are someone who understands the customer and knows what a good sales conversation sounds like. This person is the difference between AI calling that produces mediocre results and AI calling that makes your competitors wonder what you are doing differently.

You would never hand a telemarketing agency a one-page brief and expect them to figure out your pitch. You would train them. Listen to their calls. Coach them weekly. AI calling is the same. Except the coaching is editing a prompt and the improvement deploys in seconds instead of weeks.

We call over 30,000 leads a month now. We deployed AI for onboarding too. It moved our key metrics in ways I did not think were possible 18 months ago. But the reason it works is not the AI. It is the person on our team who has been shaping it every single day since we started.
Marc Andreessen 🇺🇸
From my sociology professor Claude: John Murray Cuddihy didn't write a single polemic specifically titled "On Introspection" — what he gave us is something more devastating: a total sociological demolition of the conditions of possibility for the modern cult of introspection, spread across "The Ordeal of Civility" (1974) and "No Offense" (1978). When you synthesize his argument, you get one of the most corrosive critiques of therapeutic culture ever written — and it's corrosive precisely because it doesn't attack therapy on its own terms. It attacks the genealogy. It asks: where did this whole enterprise come from, and whose interests does it serve? Here is the brutal Cuddihy take, assembled fully: [Details removed to protect the reader.]
Arghya Basu
Arghya Basu@arghya7574·
@ighose Works nicely with large legacy codebases as well and at 78% MMR, fingers crossed 🤞
Arghya Basu reposted
Irina Ghose
Irina Ghose@ighose·
A million tokens of context means builders no longer have to shrink their ideas to fit the model. Now the model can scale to the idea. Excited to see what India’s builders create when entire codebases, research, and workflows can live in one prompt. claude.com/blog/1m-contex…
Arghya Basu reposted
Anshul Jain
Anshul Jain@J_Ansh·
Built a 500-person Salesforce ecosystem over 15 years. Exited in 2024. Now spending $1,000+/month on Claude API building enterprise AI agents — retail, data lineage, recruitment, document intelligence. 7 active agents. 3 continents. Same playbook, new frontier along with @arghya7574 @AnthropicAI @ighose — let's build this ecosystem together. #Claude #EnterpriseAI #AgenticAI
Arghya Basu
Arghya Basu@arghya7574·
@koylanai We have been doing in-context prompting to generate AI writing and books. But after going through your post, we are definitely going to give SFT a try. If it's only 3% detection on Pangram, that's a game changer—though Pangram may eventually improve
Muratcan Koylan
Muratcan Koylan@koylanai·
I wanted to try this, so I trained Qwen3-8B-Base on Gertrude Stein's "Three Lives" (1909) with Thinking Machines. The model now writes in Stein's repetitive, rhythmic voice and gets a Human Written label from AI detection tools like Pangram. muratcankoylan.com/projects/gertr…

This became the third example in my Agent Skills for Context Engineering repository. I just uploaded the research paper and asked it to build the same pipeline as the researchers. Reusable Skills teach AI agents to build systems faster and more reliably.

This project integrated five skills:
• book-sft-pipeline (new): Complete ePub to Tinker SFT
• project-development
• context-compression
• context-fundamentals
• evaluation

Book-SFT-Pipeline Skill: github.com/muratcankoylan…

Obviously, the trained model is not perfect. I just used one book and a very small model, but it worked! The main learning here is that you can just copy this skill now and give it a non-copyrighted book; your agent will have all the instructions for creating the datasets, training the model, and even evaluating it.
Muratcan Koylan@koylanai

Two years ago, skeptics said AI images could be easily identified because models couldn't generate hands. Now it's impossible. The same is happening in AI writing. Fine-tuning on specific author datasets led experts to prefer AI over human writing.

This paper has three interesting insights:
1. Fine-tuned GPT-4o was ~8x more likely to be chosen as "authentic" than an expert writer.
2. Pangram (probably the best AI detector) flagged only 3% of SFT outputs vs 97% of in-context prompting.
3. How simple it is to create a fine-tuning dataset by reverse engineering books.

They purchased legal ePub files of the complete bibliographies of 30 living authors and split the full books into chunks of 250–650 words (splitting on double newlines, i.e. paragraphs; if a chunk was still too long, they used GPT-4o to grammatically split it further without deleting content).

They used the same model to generate the instruction dataset: "Describe in detail what is happening in this excerpt. Mention the characters and whether the voice is in first or third person for the majority of the excerpt. Maintain the order of sentences while describing."

And formatted the data into the final pairs:
Input: "Write a [Word Count] word excerpt about the content below emulating the style and voice of [Author Name] [Content Description generated by GPT-4o in Step 2]"
Output: The original raw text excerpt from the book.

Base LLMs are RLHF-tuned to be safe and predictable, so they generate cliches. Fine-tuning on high-quality literature "unlearned" this behaviour.
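The dataset construction described in the quoted tweet (paragraph-based chunking, then instruction/excerpt pairs) can be sketched in a few lines. This is a minimal sketch under stated assumptions, not the authors' actual code: the chunk-packing heuristic, the field names, and the prompt template wording are reconstructed from the tweet's description, and the per-chunk `description` is assumed to come from whatever model you use for summarization.

```python
# Sketch of the SFT dataset construction described above.
# Chunk-packing heuristic and field names are assumptions.

def chunk_book(text, min_words=250, max_words=650):
    """Split a book on blank lines (paragraphs), then greedily pack
    paragraphs into chunks of roughly min_words..max_words words."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current, count = [], [], 0
    for p in paragraphs:
        words = len(p.split())
        # Flush the current chunk once adding this paragraph would
        # overshoot the cap and we already have enough words.
        if count + words > max_words and count >= min_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(p)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

def make_pair(chunk, author, description):
    """Format one SFT pair: style-emulation instruction -> original excerpt."""
    n_words = len(chunk.split())
    prompt = (
        f"Write a {n_words} word excerpt about the content below "
        f"emulating the style and voice of {author}\n{description}"
    )
    return {"input": prompt, "output": chunk}
```

In the paper's pipeline the `description` argument is the GPT-4o-generated content summary of each excerpt; the key design point is that the *original* book text is always the training target, so the model learns the author's voice rather than a paraphrase of it.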

Arghya Basu
Arghya Basu@arghya7574·
@felixlu1018 I see, ok. In your research/experience have you found something that works reliably otherwise?
Felix Lu
Felix Lu@felixlu1018·
@arghya7574 Deepcrawl is actually not designed to work in anti-crawler or login scenarios; instead it aims at public pages, with lightning-fast and free markdown extraction or links-tree extraction for agents.
Arghya Basu
Arghya Basu@arghya7574·
@angrypenguinPNG We are building something very similar actually, and have just started using Claude skills, so your tweet pops up, thanks. Although the Nano Banana design quality is very bland here, which is generally not the case based on my experience with NotebookLM.
Miguel | AP
Miguel | AP@angrypenguinPNG·
I gave Claude a skill to create presentations: Nano Banana Pro for the slides, Kling for transitions, and Eleven Labs for voiceover Then I asked it to explain what Skills are. Here's the result:
Rudrank @ AI Engineer Singapore
I am working on improving my AiOS Dispatch website, and the last edition is almost two months old. So much changed! 😳 Also, I am open to sponsorship so I can continue posting more 😬 aiosdispatch.com
Akshay 🚀
Akshay 🚀@akshay_pachaar·
This is the DeepSeek moment for Voice AI. Chatterbox Turbo is an MIT-licensed voice model that beats ElevenLabs Turbo & Cartesia Sonic 3! - <150ms time-to-first-sound - Voice cloning from just 5-second audio - Paralinguistic tags for real human expression 100% open-source.
Arghya Basu
Arghya Basu@arghya7574·
@AdiFlips You will get rate-limited very quickly, and your IP will get banned very quickly. Almost everybody is doing this now. If you've tried it and you've been able to figure out how it can work at scale, please let me know!
Adi
Adi@AdiFlips·
There is a stupid amount of MONEY in this.
• Add /.json to the end of any Reddit URL
• Instantly get the entire thread
• Every reply (to n-th depth)
• All metadata
• Clean, structured JSON
Then:
• Feed it to an LLM
• Extract pain points
• Detect buying intent
• Surface patterns no human will manually read
Niche sub-reddits are unmined gold. People literally tell you what they want. You just need to listen at scale.
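The trick above is small enough to sketch. This is a minimal, illustrative sketch: the `/.json` endpoint and the `Listing`/`t1`/`replies` shape match Reddit's public JSON API, but the helper names are my own, and as the reply in this thread warns, heavy unauthenticated use gets rate-limited fast.

```python
# Minimal sketch of the "append .json to a Reddit URL" trick.
# Helper names are illustrative; the JSON shape follows Reddit's
# public API: a thread returns [post Listing, comment Listing].
import json
import urllib.request

def to_json_url(thread_url):
    """Turn a Reddit thread URL into its JSON endpoint."""
    return thread_url.rstrip("/") + "/.json"

def flatten_comments(listing, depth=0):
    """Recursively walk a comment Listing and collect (depth, body)
    for every comment, to any nesting depth."""
    out = []
    for child in listing.get("data", {}).get("children", []):
        if child.get("kind") != "t1":      # t1 = comment object
            continue
        data = child["data"]
        out.append((depth, data.get("body", "")))
        replies = data.get("replies")
        if isinstance(replies, dict):      # "" (empty string) when no replies
            out.extend(flatten_comments(replies, depth + 1))
    return out

def fetch_thread(thread_url):
    """Fetch a public thread's comments; expect rate limits at scale."""
    req = urllib.request.Request(
        to_json_url(thread_url),
        headers={"User-Agent": "research-script/0.1"},  # identify yourself
    )
    with urllib.request.urlopen(req) as resp:
        post, comments = json.load(resp)   # [post listing, comment listing]
    return flatten_comments(comments)
```

From here, the flattened `(depth, body)` pairs are what you would batch into an LLM prompt to extract pain points or buying intent.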
Arghya Basu
Arghya Basu@arghya7574·
@dejavucoder Check DM. Would love to talk to you. How long have you been using Claude Code?
sankalp
sankalp@dejavucoder·
claude code is having its cursor moment after karpathy sensei's post. never been a better time to try it. my latest blog on how to get the most out of claude code 2.0 and other agents in general is up now. grab a chai and have fun reading! sankalp.bearblog.dev/my-experience-…
Arghya Basu
Arghya Basu@arghya7574·
Hello @X, the last time I was here, you chirped like a bird. Now you sound like my favourite variable. Let's go!