Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️

7.4K posts

@ton77v

Entrepreneur, cryptocurrency trader, traveller, coder, player etc.. love tattoos and many other things =) https://t.co/YnXxXlRgXx

Latin America 2 Asia 2 Europe · Joined September 2014
2.6K Following · 3K Followers
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️ retweeted
Elon Musk@elonmusk·
[media]
ZXX
16.3K replies · 60.2K reposts · 702.6K likes · 88.3M views
Bindu Reddy@bindureddy·
The Art of Fine-Tuning Large Language Models

LLMs have been great all-purpose models, capable of tasks such as summarization, Q/A, and code generation out of the box. That said, LLMs have a vast but shallow understanding of language. Fine-tuning narrows this scope down to a deep understanding of context, which is crucial for tasks like sentiment analysis or legal document summarization. If you're in finance, healthcare, or any specialized field, fine-tuning helps the model "speak your language."

TBH, fine-tuning is typically an involved process and requires updating the model parameters. In many cases, in-context learning is sufficient: you simply provide examples as part of your input prompt, the LLM learns from just a few of them, and its performance may be enough for your task. In-context learning is what you rely on when you don't have access to the LLM's weights; if you do have access and a specialized task, fine-tuning makes sense.

There are a few different ways you can fine-tune LLMs:

- Feature-Based Approach: Take a pre-trained LLM and use it to generate output embeddings for your target dataset. These embeddings then serve as input features for another model, like logistic regression or a random forest. Essentially, you're using the LLM as a fancy feature extractor; you never update the LLM's parameters, you just use the embeddings it can give you. This approach makes sense for classification-type tasks like sentiment or topic classification.
- Fine-Tuning the Output Layers: Add new output layers and train only those, keeping the rest of the model as is. This is somewhat similar to the feature-based approach but offers a bit more flexibility.
- Fine-Tuning All the Layers: Update all the layers of the pre-trained LLM. This is computationally expensive but often yields the best performance.

A popular method called LoRA (Low-Rank Adaptation) focuses on modifying a subset of the model's parameters rather than the entire set, making the process computationally less demanding. Here is how LoRA works:

- Identification of Crucial Parameters: LoRA identifies a subset of parameters within the model that are most relevant to the specific task at hand.
- Introduction of Low-Rank Matrices: LoRA adds low-rank matrices to each layer of the pre-trained model. These matrices are smaller and simpler than the original weight matrices, making them easier to fine-tune.
- Fine-Tuning: The low-rank matrices are then fine-tuned to adapt the model to the specific task, while the rest of the model's parameters stay fixed. LoRA's performance has been shown to be comparable to, or even better than, traditional fine-tuning methods.

These are all supervised fine-tuning (SFT) methods, where you need a training dataset comprising prompts and their corresponding responses. SFT datasets can either be curated manually by users or generated by other LLMs.

In summary, I would recommend fine-tuning ONLY if in-context learning isn't sufficient and you have a very domain-specific task like legal document analysis.
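The LoRA recipe in the tweet above can be reduced to a few lines of linear algebra. This is a minimal numerical sketch of the idea only, not any library's API: the layer, its shapes, and the rank are made up for illustration, and a real setup would update `A` and `B` by backpropagation rather than leave them untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pre-trained weight matrix of a single layer (stand-in for a real LLM weight).
d_out, d_in = 64, 64
W = rng.normal(size=(d_out, d_in))

# LoRA adapters: two low-rank matrices with rank r much smaller than d.
# A starts at zero, so the adapted layer initially behaves exactly like W.
r = 4
A = np.zeros((d_out, r))
B = rng.normal(size=(r, d_in)) * 0.01

def adapted_forward(x):
    # Effective weight is W + A @ B; during fine-tuning only A and B
    # would receive gradient updates, while W stays frozen.
    return (W + A @ B) @ x

x = rng.normal(size=(d_in,))

# Parameter count: full fine-tuning touches d_out * d_in weights,
# while LoRA trains only r * (d_out + d_in).
full_params = d_out * d_in        # 4096
lora_params = r * (d_out + d_in)  # 512
```

The savings grow with layer size: for a square d×d weight, LoRA's trainable parameters scale as 2rd instead of d², which is why the method stays cheap even on large models.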
[media]
11 replies · 132 reposts · 669 likes · 191.1K views
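The feature-based approach from the fine-tuning tweet above can also be sketched end to end. Everything here is synthetic: the "embeddings" are random stand-ins for what a frozen pre-trained model would output per document, and the logistic-regression head is hand-rolled with plain gradient descent rather than any particular library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for LLM output embeddings: in practice you would run each
# document through a frozen pre-trained model and keep its embedding.
# Here: 200 fake "documents" with 32-dim embeddings and two classes
# separated by a class-dependent shift.
n, d = 200, 32
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d)) + 1.5 * y[:, None]

# A logistic-regression head trained on top of the frozen embeddings
# via gradient descent -- the "LLM" itself is never updated.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(300):
    z = np.clip(X @ w + b, -30.0, 30.0)   # clip logits to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))          # predicted P(class = 1)
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * float(np.mean(p - y))

# Final predictions and training accuracy of the head.
p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30.0, 30.0)))
accuracy = float(np.mean((p > 0.5) == y))
```

The key property the tweet points out holds here: only the small head (`w`, `b`) is trained, so the expensive model is queried once per document and never touched again.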
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️ retweeted
AI Breakfast@AiBreakfast·
Found this super interesting: an AI-aided imaging algorithm converts thermal night vision into a sharp color image. Conventional thermal vision (top) depicts this nighttime scene of a forest road in ghostly grays, and the system adds color based on the objects detected
[media]
3 replies · 8 reposts · 83 likes · 17.5K views
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️ retweeted
Gabor Gurbacs@gaborgurbacs·
Indian Bitcoin maximalists are next level. 😂
246 replies · 1K reposts · 4.2K likes · 702.1K views
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️ retweeted
Elon Musk@elonmusk·
Also the plot of Deus Ex
[media]
9.9K replies · 42.6K reposts · 398.3K likes · 48.5M views
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️
Damn! Another one bites the dust. #Paxful is also closing its business just a few weeks after LocalBitcoins 🥶 Seems decentralized and non-custodial P2P marketplaces will be the only ones to survive in the long run
Na Kluea, Thailand 🇹🇭
0 replies · 0 reposts · 1 like · 64 views
Mingo@MingoAirdrop·
Only 4,400 people got the max allocation of 10,250 $ARB 💸 We can learn from them and get the best allocations on the next airdrops 🧵 Here is how to get almost every airdrop with the biggest allocations 👇
64 replies · 366 reposts · 1.3K likes · 504.2K views
The Pattaya News Thailand@The_PattayaNews·
A #Thai committee on public health urges the Thai government to regulate vaping and end the current ban, saying essentially the total ban has been a failure and an opportunity for corruption and shakedowns of tourists. thepattayanews.com/2023/03/14/tha…
Mueang Pattaya, Thailand 🇹🇭
5 replies · 3 reposts · 10 likes · 1.9K views
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️ retweeted
Mohegan ₿TC 🎲@MoheganBTC·
#Bitcoin down 5% after a 40% climb to start the year. Bears:
[media]
10 replies · 14 reposts · 145 likes · 8.1K views
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️ retweeted
Bitcoin News@BitcoinNewsCom·
BREAKING: 🇪🇺 Xapo Bank becomes first bank in the world to integrate the #Bitcoin Lightning Network 😱 🙌
[media]
43 replies · 501 reposts · 2.5K likes · 218.3K views
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️ retweeted
Lightning Ventures@ltngventures·
“#Bitcoin is dead.”
41 replies · 160 reposts · 678 likes · 122.8K views
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️ retweeted
Fred’s Farm | Herbal Wellness 🌿
Do you know what Dimensions are? This boy explains it very well.. Must Watch👇🏽👇🏽
163 replies · 1.8K reposts · 6.1K likes · 540.8K views
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️ retweeted
Catturd ™@catturd2·
We were right about the lab leak. We were right about natural immunity. We were right about masks. We were right about lockdowns. We were right about the vaccines. We were right about boosters. We were right about them faking COVID numbers. We were right about the deadly hospital protocol. We were right about ivermectin. We were right about evil Dr. Fauci. We were right about the evil WHO. We were right about it being a world power grab. Guess who was wrong about everything? Yep 👉 the government sheep "TrUsT tHe sCieNCe" cult.
4.8K replies · 46.4K reposts · 168.3K likes · 11.2M views
Crypto Dan@cryptodan19·
I made $100,000+ on the $BLUR Airdrop. Want to find the next one? I'm going to teach you exactly how to find these airdrop opportunities. Like this tweet and reply 'send', and I'll DM you the course for free.
[media]
496 replies · 59 reposts · 540 likes · 86.1K views
Mayne@Tradermayne·
Just did my monthly listen to the newest Hiphop/rap songs on Spotify. Can confirm 99% of it is still trash.
51 replies · 4 reposts · 268 likes · 30K views
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️ retweeted
Adam Back@adam3us·
$25k is scraping the barrel along the 200wma line; before H2 2022 #bitcoin never closed a month < 200wma. 2022 was due to a mountain of bullshit: 3AC, Celsius, FTX, DeFi paper BTC, now flushed, plus contagion from over-leveraged miners and degens forced into selling. Would not short the 🌽 here
[media]
103 replies · 213 reposts · 2K likes · 215.3K views
Antonio V ⚡ (genius/brilliant) 🌵🏴‍☠️ retweeted
Alvin Foo@alvinfoo·
Why Planes Don’t Fly In A Straight Line On A Map! Have you ever been on a long-haul flight and wondered why your aircraft is taking a curved route instead of flying in a straight line when you look at the inflight map?
[media]
14 replies · 47 reposts · 251 likes · 36.9K views
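The curved-route puzzle in the tweet above comes down to spherical geometry: the shortest path between two airports is a great-circle arc, which only looks bent on a flat map projection. A quick numeric illustration (the city coordinates are approximate and chosen just for this example):

```python
import math

def to_cartesian(lat_deg, lon_deg):
    # Unit-sphere Cartesian coordinates for a latitude/longitude pair.
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def great_circle_midpoint_lat(p1, p2):
    # Midpoint of the great-circle arc: average the two Cartesian
    # vectors, renormalize onto the sphere, read off the latitude.
    x, y, z = (a + b for a, b in zip(to_cartesian(*p1), to_cartesian(*p2)))
    norm = math.sqrt(x * x + y * y + z * z)
    return math.degrees(math.asin(z / norm))

nyc = (40.7, -74.0)    # approximate New York coordinates
tokyo = (35.7, 139.7)  # approximate Tokyo coordinates

mid_lat = great_circle_midpoint_lat(nyc, tokyo)
naive_lat = (nyc[0] + tokyo[0]) / 2  # midpoint of the "straight" line on a flat map
```

The actual route's midpoint sits near 70°N (up by Alaska), while the straight-on-the-map line would cross the Pacific at about 38°N: the plane really is flying the shortest path, and the map is what bends it.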