
Yesterday morning, we launched the #DeepSeek-R1-70B model on our India-based #GPU cluster for public use. We are seeing tremendous inference traffic, with thousands of concurrent users at almost all times. Thanks to the large GPU capacity deployed at the back end, we have yet to see any downtime 😊. We expect many millions more of you to start prompting to test us, fail us, and tell us where we can improve and what more features we should bring in.
Based on the feedback many of you shared yesterday, I thought it best to put up this post explaining certain underlying features of #myshakti.ai and the #DeepSeek-R1-70B model we have deployed on our Indian servers for your use:
✅ We have containerized the model, created an endpoint #API for it, and optimized the model to NVIDIA's global standards. We used NVIDIA #NVCF functions to deploy the model.
✅ We have enabled enterprise-grade #Security & #DataPrivacy. Advanced security measures, including #DDoS protection, firewalls, and more, are in place.
✅ #Chinese IPs have been completely blocked to prevent any unauthorized access at the user or admin level.
✅ We have blocked any telemetry data from leaving this #Serverless environment.
✅ Hosted in a serverless environment on a cluster of 128 H100 GPUs, capable of handling millions of concurrent requests. We plan to scale this to 1024 H100 GPUs if usage grows further.
✅ We have ensured #Data #Sovereignty: the GPU cluster and storage are hosted in India at the #YottaNM1 DataCenter, Navi Mumbai. All your user IDs, input prompts, and output results are stored safely in India, within our administrative control.
✅ We have ensured secure access & authentication: each end user receives a unique authentication token to access myshakti.ai. We are fixing some bugs here. Today, we are also bringing in Google Auth.
✅ We are currently hosting the 70B-parameter DeepSeek model, with plans to deploy #DeepSeek-Pro-671B in the coming days, as soon as we containerize it with an endpoint API using NVCF functions.
✅ We have opened access for Indian startups and enterprises to #finetune the model with their proprietary data, create their own use-case-specific models supporting #Indic languages, and then deploy them in our serverless environment to make them available to their end users. Reach out to us if you want to use our DeepSeek APIs.
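For developers wondering what calling the hosted model might look like, here is a minimal sketch of assembling a token-authenticated request. The URL, model id, and payload fields below are placeholders assuming an OpenAI-style chat schema, not the actual myshakti.ai API; check the official docs for the real endpoint and field names.

```python
import json

# Placeholder endpoint and token -- NOT the real API details.
API_URL = "https://api.myshakti.ai/v1/chat/completions"  # assumed URL
API_TOKEN = "your-unique-auth-token"                     # issued per user

def build_request(prompt: str) -> dict:
    """Assemble headers and body for a chat-completion call,
    using the per-user bearer token for authentication."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "deepseek-r1-70b",  # assumed model id
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_request("Summarise data-sovereignty rules for Indian startups.")
```

The dict returned by `build_request` can then be passed to any HTTP client (e.g. `requests.post(req["url"], headers=req["headers"], data=req["body"])`).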
Hope you find this useful. Looking forward to receiving more feedback to help us improve, bring in more models, and add more features.









