Q Blocks
@blocks_q

92 posts

Decentralised computing platform to help ML teams get 10x more affordable computing

Toronto · Joined January 2020
8 Following · 211 Followers

Pinned Tweet
Q Blocks @blocks_q
@nivi @naval Here's our demonstration of 90% savings for transcription and translation using Whisper: an optimised Whisper API running on a decentralised GPU network. With further optimisation, we are looking at 95% savings. At moderate scale this means $50,000 vs $1M. qblocks.cloud/blog/ultra-low…
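To make the workload concrete, here is a minimal sketch of Whisper transcription and translation using the open-source openai-whisper package. Q Blocks' hosted API isn't shown in the tweet, so this runs the public model directly; the file name and checkpoint choice are illustrative.

```python
import whisper  # pip install openai-whisper

# Load a public Whisper checkpoint. Smaller checkpoints ("base", "small")
# trade accuracy for even lower GPU cost.
model = whisper.load_model("large-v2")

# Transcribe speech in its original language.
result = model.transcribe("meeting.mp3", task="transcribe")
print(result["text"])

# Translate speech from any supported language into English.
result = model.transcribe("meeting.mp3", task="translate")
print(result["text"])
```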
Q Blocks @blocks_q
LoRA-powered fine-tuning is revolutionary
Santiago @svpino

LoRA is a genius idea. To understand the fine-tuning of Large Language Models, you must understand how LoRA works. By the end of this post, you'll know everything important about how it works.

Large Language Models are good generalists, but they have little specialization. We train them on many different tasks, so they know a bit about everything but not enough about anything.

Think of a kid who can play three different sports at a high level. While he can be proficient across the board, he won't get a scholarship unless he specializes. That's how the kid can reach his full potential.

We can do the same with these large models. We can train them to solve a particular task and nothing else. We call this process "fine-tuning." We start with everything the model knows and adjust its knowledge to help it improve on the task we care about.

Fine-tuning is revolutionary, but it's not free. Fine-tuning a large model takes time, care, and lots of money. Many companies can't afford the process. Some can't pay for the hardware. Some can't hire people who know how to do it. Most companies can't do either.

That's where LoRA comes in. We realized we could approximate a large matrix of parameters using the product of two smaller matrices. There was a lot of wasted space within these large models. What would happen if we found a new, more optimal representation?

Did you ever buy a map at a gas station? Giant pages showing every small road, path, and lake around you. They were exhaustive but hard to navigate. These are like the parameters in a large model. LoRA turns a gas station map into a cartoon treasure map. Every useless parameter is gone. Only two roads, a palm tree, and a cross pointing at the treasure.

We don't need to fine-tune the entire model anymore. We can focus on just the small treasure map that LoRA gives us. It's a mind-blowing trick. We can train the small approximation matrices from LoRA instead of fine-tuning the entire model. LoRA is cheaper, faster, and uses less memory and storage space.

You can also merge the approximation matrices with the model at deployment time. They work like simple adapters. You load the one you need to solve a problem and use a different one for the next task.

Then, we have QLoRA, which makes the process much more efficient by adding 4-bit quantization. QLoRA deserves its own separate post.

The team at @monsterapis has created an efficient no-code LoRA/QLoRA-powered LLM fine-tuner. What they do is pretty smart: they automatically configure your GPU environment and fine-tuning pipeline for your specific model. For example, if you want to fine-tune Mixtral 8x7B on a smaller GPU, they will automatically use QLoRA to keep your costs down and prevent memory issues.

The @monsterapis platform specializes in no-code LoRA-powered fine-tuning. It's the fastest and most affordable offering for fine-tuning models in the market. They sponsored me and gave 10,000 free credits to anyone who uses the code "SANTIAGO" in their dashboard: monsterapi.ai/finetuning

If you want to read their latest updates, get free credits, and receive special offers, join their Discord server: discord.com/invite/mVXfag4…

TL;DR:
• Traditional fine-tuning: Trains the entire model. It requires a complex setup, more memory, and expensive hardware.
• LoRA: Trains a small portion of the model. It's faster and needs much less memory and more affordable hardware.
• QLoRA: Much more efficient than LoRA, but it requires a more complex setup.
• No-code fine-tuning with LoRA/QLoRA: The best of both worlds. Low cost and easy setup.
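To make the low-rank idea concrete, here is a minimal PyTorch sketch of a LoRA adapter around a single linear layer. The class name, rank, and scaling are illustrative, not taken from any particular library.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wraps a frozen linear layer and adds a trainable low-rank update:
    # y = W x + (alpha / r) * B A x, with A of shape (r, d_in) and B of shape (d_out, r).
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # the original weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at the start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 65,536 trainable parameters vs ~16.8M in the full weight matrix
```

Only A and B are updated during fine-tuning; at deployment time the product B·A can be merged back into the base weights, which is what makes swapping adapters between tasks so cheap.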

Q Blocks retweeted
Csaba Kissi @csaba_kissi
Here's a new LLM deployment solution that enables serving multiple LLMs & LoRA adapters as an API endpoint, powered by our robust GPU Cloud. Ready for a test drive? Let's deploy the Mixtral 8x7B Chat model with GPTQ 4-bit quantization on a 48GB GPU. 🚀🧵
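For context, here is a hedged sketch of loading a 4-bit GPTQ build of Mixtral with Hugging Face transformers. The thread's own deployment API isn't shown, and the community checkpoint id below is an assumption; any GPTQ-quantized checkpoint loads the same way.

```python
# Requires the optimum and auto-gptq packages for GPTQ checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed community 4-bit GPTQ build of Mixtral 8x7B Instruct.
model_id = "TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # places the quantized layers on the available 48 GB GPU
)

prompt = "Explain GPTQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Mixtral 8x7B is roughly 94 GB of weights in fp16; 4-bit quantization cuts that to about a quarter, which is what makes a single 48 GB card sufficient.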
Q Blocks @blocks_q
@chaseleantj It's surprising to see that SDXL is able to deliver on-par results. Built these using @monsterapis with the same prompt.
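As a point of reference, here is a minimal sketch of generating a comparable image with the open SDXL weights via diffusers. MonsterAPI's hosted endpoint isn't shown in the reply, so this runs the base model directly; the prompt is taken from the thread below.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Open SDXL base weights; a hosted API would wrap the same model.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("breathtaking landscape shot, Yellowstone National Park").images[0]
image.save("yellowstone.png")
```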
Chase Lean @chaseleantj
#5 Landscape photography
Left: breathtaking landscape shot, Yellowstone National Park
Right: breathtaking landscape shot, Yellowstone National Park, shot with Nikon D850, 15mm lens
Adding the camera does improve the image, but it almost always changes it to sunset. However...
Chase Lean @chaseleantj
#4 Underwater photography
Left: sea turtle swimming in deep sea, underwater photography --style raw
Right: sea turtle swimming in deep sea, shot with Canon EOS, 35mm lens, underwater photography --style raw
The camera makes almost no difference.
Q Blocks @blocks_q
Now, you can fine-tune Llama 2 on our massive network of GPUs without writing a single line of code
Santiago @svpino

How to fine-tune Llama 2 without writing a single line of code.

I taught Llama 2 to classify the sentiment of movie reviews. Setting everything up took me 10 minutes. The fine-tuning process lasted 6 hours.

If you aren't familiar with the term "fine-tuning," it's the process we use to teach a model how to solve a specific task. Large Language Models have general knowledge but struggle to solve particular problems. Fortunately, we can fine-tune these models and make them very good at solving specific tasks. In this example, that task is determining how much a person liked a movie based on its review.

Unfortunately, fine-tuning a model is a complex, expensive process. It takes a lot of time, effort, and GPU compute. It's also hard to find experienced people who know how to do it.

The team @monsterapis built the first platform that offers no-code fine-tuning of open-source models, which changes everything. That's the platform I'm using here. Here is what you need to do:

1. Sign up here: monsterapi.ai/signup, and use the code SANTIAGO during sign-up to get 5,000 free API credits.
2. Go to the FineTuning option and select Llama 2 7B.
3. Select your task. I'm using "Text Classification" since we want to classify movie reviews.
4. The last step is to select your dataset. I used the IMDb dataset from HuggingFace.

It took a bit under 6 hours to finalize the fine-tuning process. The attached image corresponds to the training loss over the first few steps. I spent 14,000 credits in the process, equivalent to $14.00. That's one of @monsterapis' advantages: besides not dealing with code, complexity, or hardware, their pricing is very competitive, thanks to their decentralized GPU platform.

Here's an article that provides a step-by-step guide for fine-tuning Llama 2: blog.monsterapi.ai/how-to-fine-tu…

You can also join @monsterapis' Discord server for the latest updates, free credits, and special offers: discord.com/invite/mVXfag4…

Thanks to the team @monsterapis for partnering with me on this post.
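For readers curious what the no-code pipeline automates, here is a hedged sketch of the equivalent LoRA fine-tuning done by hand with Hugging Face + PEFT. The hyperparameters are illustrative; MonsterAPI's actual configuration is not public in the thread.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "meta-llama/Llama-2-7b-hf"  # gated: requires accepting Meta's license
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

# IMDb movie reviews labeled positive/negative.
dataset = load_dataset("imdb").map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
)

model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id
# Attach LoRA adapters; only the low-rank matrices and the new
# classification head are trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="SEQ_CLS"))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-imdb",
                           per_device_train_batch_size=4,
                           num_train_epochs=1, fp16=True),
    train_dataset=dataset["train"],
    tokenizer=tokenizer,  # enables dynamic padding in the default collator
)
trainer.train()
```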

Q Blocks retweeted
Santiago @svpino
8 billion lives will change because of AI. But those who think large models are the end-game don't know what's happening. The real game-changer coming after every industry is the applications we are building right now.
Q Blocks retweeted
SomosNLP @SomosNLP_
"Una sorpresa" = 10000 API credits para todas las personas participantes y una suscripción anual para el equipo ganador 🎉🎉🎉 Thank you so much @Gaurav_vij137 and @saurabhvij137 (@blocks_q) 🤩
SomosNLP @SomosNLP_

🔥 Join this #GenerativeAI talk today, given by @Gaurav_vij137 and @saurabhvij137, founders of @blocks_q, the company sponsoring the GPU VMs for #HackathonSomosNLP. ✨ They will announce a surprise ✨ ➡️ Join! Click "Notify me" youtube.com/watch?v=3jgh6Z…

Q Blocks @blocks_q
@singh_sequoia Some of them are saving a significant amount of capital by using GPUs on decentralised networks like ours. Once the AWS credits expire, the pain kicks in. We are here to help. Please feel free to connect us with any portfolio company building in AI.
Shailendra J Singh @sjs_day1
Most are doing it quietly, some are announcing it… but dozens of founders in India/SEA are turning their startups profitable this year #FundedForever The key will be to remain profitable for several quarters so it becomes part of the DNA
Q Blocks @blocks_q
@nivi @naval Already saved more than $2M for our users: ML developers training and deploying some really powerful models. Now imagine this: 100s of open-source LLMs running on infinitely scalable GPUs all across the globe, abstracted from Xboxes & PlayStations. That's what we are building.
Naval @naval
The biggest near-term question in AI: Will open-source LLMs with decentralized training be competitive with closed-source and centralized LLMs?
Q Blocks @blocks_q
Now edit images with our easy-to-use Instruct-pix2pix API
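Here is a minimal sketch of the same edit flow using the public instruct-pix2pix weights via diffusers. The tweet's hosted API endpoint isn't shown, so this runs the open model directly; the input image URL and edit prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Public instruct-pix2pix checkpoint; a hosted API would wrap the same model.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = load_image("https://example.com/photo.png")  # placeholder input image
edited = pipe("make it look like a winter scene", image=image,
              num_inference_steps=20, image_guidance_scale=1.5).images[0]
edited.save("edited.png")
```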
@levelsio
Spending $8,000 per day on GPUs, how is your day going