Jashwanth Mummalaneni
@0xmjc

Exploring tech, design, & culture through a blockchain lens. Sharing insights, inspiration, & curiosity. Follow for updates on decentralized tech and its impact.

Hyderabad, India · Joined April 2021
340 Following · 73 Followers
114 posts
Jashwanth Mummalaneni retweeted
Vimlendu Jha विमलेंदु झा
OPEN LETTER TO THE CHIEF JUSTICE OF INDIA 
On the Eve of the Hearing on Delhi’s Air Pollution Crisis
From the Citizens, Children & Breathless Lungs of Delhi.

Hon'ble Chief Justice Gavai,

Tomorrow, when you sit to hear the matter of Delhi's unbreathable air, you do so not just as the custodian of the Constitution, but as a man in the final days of his judicial service, standing at a rare moment in history, at least your personal history, if not the nation's. This is not just another pollution petition or an open letter. Nor is this just another winter. There is a strong reason one thought to write to you, rather than to the previous CJIs; perhaps you know best why. Maybe no CJI retired on the eve of a great smog, or maybe one did. Or maybe no CJI ever acted so ignorant and dismissive (forgive my contempt!). And perhaps no CJI lazed over the fact that air pollution monitors were brazenly manipulated. Nevertheless, respectfully speaking, this is possibly your last chance to make a stand that the nation, and especially its children, will remember you for.

Sir, allow us to speak truth with respect. For nearly two decades, citizens, health experts, and environmentalists struggled to warn the courts, and the nation, that firecrackers were not a matter of "culture," but a matter of lungs, cancer, asthma, stunted growth, and premature death. It took years - years - for the Supreme Court to finally acknowledge this and impose the ban. Your order reversing it, allowing so-called "green crackers", undid much of that work. It is sad that over a month has passed and you have not even asked for a status report on the violations of your orders, and you know they were flouted beyond measure. Unfortunately one gets the feeling that you knew this would happen and yet you allowed Delhi to choke under the fumes of the gaseous firecrackers. Maybe you didn't know people would flout the order of the 52nd CJI; perhaps they got the number wrong and misbehaved.
But now, on the brink of retirement, we ask: please do not leave the children of Delhi gasping as your judicial farewell.

Sir, this is not a demand for bans, or sweeping overnight orders, or the jailing of farmers. We do not ask you to take a drastic step, only a decisive one. Do something that:
- Holds the executive to account
- Forces action where there has only been optics
- Protects the smallest lungs first
- Prevents this annual public health emergency

Protect the children of 2025, at least. I have always said that this is not your job, as in the job of the judiciary; it is the job of the legislature and the executive. However, when your predecessors have assisted the children of this country, why deny yourself the privilege of doing the same? This was never meant to be the judiciary's job, but it became yours because the state never performed its duty. We need that Supreme Court again, not a silent one, not a helpless one.

Sir, this may be hard to say, but it must be said: in your final days, you have a choice, to serve the comfort of the current political moment, or the survival of the future Prime Ministers, future judges, future citizens of this country. Not as a political act, but as a moral one. Do not let the final breath of your tenure be indifference. And in case you are not inclined to take a stand, please sit down, so that you don't undo the contribution of your predecessors.

Regards,
The Stunted Lungs of India
Jashwanth Mummalaneni retweeted
Paras Chopra @paraschopra
So you’re telling me that Deepseek with private funds can release an open-source model, but the govt awarding Rs 220 crores of public funds to Sarvam isn’t asking for the same? This is taxpayers’ money, so the full pipeline ought to be open source!
Jashwanth Mummalaneni retweeted
smallest.ai @smallest_AI
@AravSrinivas We would love to have you onboard. Tomorrow we will be sharing public benchmarks on how we have surpassed Elevenlabs across all parameters - spending very little capital. Our aim is to focus on multi-modal intelligence and we'd gladly invite you to partner with us.
Jashwanth Mummalaneni retweeted
Jashwanth Mummalaneni @0xmjc
Looking to join an ETHIndia team! Let me know if you’re looking for an addition.
Jashwanth Mummalaneni retweeted
Shiva Rapolu @shivarapolu01
4 am club?
Jashwanth Mummalaneni retweeted
Shiva Rapolu @shivarapolu01
Go to US -> Do MS -> Work for 3-6 years -> Learn -> Come back to India with Ideas -> Startup -> Become millionaire -> Wake up
Saurabh Kumar @drummatick
IIT KGP Day 1 placement stats: total offers received: 700. Compared to 2022 (760), you could say there’s a drop of around 8%. Not much. Kudos to the TnP cell of IIT KGP.
IIT Guwahati @IITGuwahati
6 students from the Electronics Club, @techboard_iitg, have achieved remarkable success at the T-Works Byte Bending Challenge!🚀 Our teams Hydra & Prometheus secured the 1st runner-up and 4th positions in the national competition, out of 600 teams, at the grand finale held in Hyderabad.
Jashwanth Mummalaneni retweeted
techboard.iitg @techboard_iitg
🌟 Dive into the Prelude of Excellence! 🚀 📆 Mark your calendars for October 13th as TechBoard IITG is coming up with the Inter IIT Tech Meet 2023 Orientation! Uncover the What, Why, and How of the incredible Inter IIT saga, all to be revealed at this event.
Arka Datta | AFI @0xriskyops
Website launching tomorrow 🚀
Jashwanth Mummalaneni retweeted
Vijay Shekhar Sharma @vijayshekhar
It seems like we were all given the blockchain distraction while all this was being done. This is like writing a new OS. Silicon Valley again proves its dominance in building long-term tech.
Bindu Reddy @bindureddy

The evolution of LLMs over the next couple of years - will the tech become a commodity and commonplace?

Not a single day passes without someone announcing a new LLM, and foundation models get replaced by next-generation models in a matter of months. So what is the future of this technology, and how is the space likely to evolve?

Data Advantage and Information Queries - To begin with, LLMs will soon become data-constrained. Even with a large number of GPUs, most companies don't have access to new and unique sources of data. Google and Meta have a huge advantage here compared to anyone else. Google, because of its search dominance, can get away with crawling every website that wants its search traffic, and can use YouTube and potentially Gmail data to train its LLMs. Meta enjoys the same advantage in being able to use Facebook and Instagram data. This will translate to Google and Meta LLMs being able to serve and respond to general information queries better than LLMs from OpenAI or other start-ups. Already, Bard outperforms GPT-4 on queries about recent data, and GPT-4 does much better on queries about data available before September 2021 (its training cut-off date).

LLMs for General Purpose Tasks - So the next question is: will we have specialized LLMs for some general-purpose tasks like coding, reasoning, summarization, or writing? For example, GPT-4 does really well on code compared to Google's LLMs, so will there be several purpose-built LLMs? This is unlikely to be the case for general-purpose tasks. Large SOTA LLMs outperform specialized LLMs in most tasks. Again, GPT-4 outperforms specialized LLMs on pretty much everything from code generation to writing and reasoning tasks. Here it is important to draw the distinction between a general-purpose task like Python code generation and a very specialized task like having knowledge of Abacus APIs and programming the Abacus platform.

The former typically DOES NOT require fine-tuning or RAG (retrieval-augmented generation), while the latter requires some custom work. All this means that we will end up with Google, Meta, and potentially OpenAI being the key players in the consumer LLM (e.g. ChatGPT, Bard, etc.) world. It is extremely unlikely that we will have more than 2-3 of these services. These services, like ChatGPT, will have a free and a paid subscription tier. Paying subscribers will enjoy premium features like personalized responses and access to multi-modal features.

Enterprise AI and LLM APIs - The other big category of LLM use cases is businesses using these LLMs in their core products, services, and business processes. There are two classes of use cases in this space.

General-purpose use cases embedded in a product or service - e.g. summarize my Slack channel or Zoom meeting. For these use cases, a vanilla API call to a SOTA LLM is sufficient. Price will be the key consideration here, and as long as the large LLM providers have very competitive prices, simply making calls to their APIs will work.

Specialized large-scale use cases on custom knowledgebases - This is the category of custom enterprise use cases, where you may have several thousand calls per day and the LLM needs to understand a custom knowledgebase or task. I suspect that smaller, more efficient LLMs with reasonably good reasoning capabilities can be fine-tuned or complemented with RAG and incorporated into the workflow to automate these use cases. Using GPT-4 or some other very large LLM will become cost-prohibitive here. Companies will use LLM Ops platforms such as Abacus to automate an end-to-end workflow, and these platforms will offer a combination of open-source LLMs and closed-source APIs. Companies should be free to pick and choose an LLM based on cost, performance, and time to market.

In some very specific cases, we will also see some very specialized LLMs emerge - e.g. financeLLM or LegalLLM. These are domain-specific LLMs that may require a lot of custom training, RLHF, and fine-tuning. For example, Bloomberg created BloombergGPT, a 50-billion-parameter large language model that was purpose-built from scratch for finance. Having said that, BloombergGPT is probably out of date already: such custom models lack the good reasoning skills that general-purpose LLMs possess, and it is much better to simply fine-tune or use RAG on a SOTA LLM than to train a custom model for your special task.

Net-net, we are seeing what you would expect to see in a new and exciting space: a large number of companies are being started, and over time we will see a lot of consolidation and a handful of key players emerge. Just like with other core infrastructure such as operating systems or databases, there is likely to be a healthy open-source ecosystem that complements the services from the giants.
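The RAG pattern the thread contrasts with fine-tuning can be sketched in a few lines: retrieve relevant entries from a custom knowledgebase and prepend them to the prompt sent to the model. This is a minimal, self-contained illustration; the knowledgebase entries, function names, and the keyword-overlap scoring (a stand-in for embedding similarity in a real system) are all hypothetical, not any actual Abacus API.

```python
# Minimal RAG sketch: augment a prompt with retrieved context instead of
# fine-tuning the model on the custom knowledgebase. All content below is
# illustrative.

KNOWLEDGEBASE = [
    "Abacus API: create_model(project_id) starts a new training run.",
    "Abacus API: deploy(model_id) serves a trained model behind an endpoint.",
    "Billing: enterprise plans are priced per thousand API calls.",
]

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Rank docs by naive keyword overlap with the query
    (a real system would use embedding similarity instead)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend the top-k retrieved snippets to the user question."""
    context = "\n".join(retrieve(query, KNOWLEDGEBASE))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How do I deploy a model with the Abacus API?")
print(prompt)
```

The augmented prompt is then sent to a smaller, cheaper LLM; because the task-specific knowledge travels in the context window, no custom training run is needed, which is the cost argument the thread makes.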
