WunderPixel

5K posts

@Kuschelpixel

Media and image maker // Our head is round so that thinking can change direction.

Joined January 2009
725 Following · 64 Followers
WunderPixel retweeted
Jay Cummings @LongFormMath
This must be tweeted every Halloween. I don’t make the rules
[image]
English · 164 replies · 9.9K reposts · 70.3K likes · 3.7M views
WunderPixel retweeted
The Figen @TheFigen_
In case you've never heard the howl of an arctic wolf, here it is, it's worth it. 😂
English · 2.3K replies · 15.6K reposts · 107.8K likes · 10.8M views
WunderPixel retweeted
Dr. Lutz Böhm @DrLutzBoehm
How long did it take to get from London to Rome in the Roman Empire, and how many denarii did it cost? 30 days and 3,000 km, partly by donkey. A route planner for the Roman Empire. Oh my god, this website is a fantastic toy! 😍 orbis.stanford.edu
[image]
German · 97 replies · 487 reposts · 3.1K likes · 253.2K views
WunderPixel retweeted
Nostos @NostosLit
- Franz Kafka, The Diaries of Franz Kafka
[image]
German · 162 replies · 64K reposts · 224.3K likes · 8.2M views
WunderPixel retweeted
golla @golla80s
Because it simply always works. And because this dump here needs more to laugh about. One of a kind 😂😂
German · 77 replies · 121 reposts · 987 likes · 53.3K views
WunderPixel retweeted
Bildungskind @Bildungskind
[image]
No text (media-only post) · 23 replies · 163 reposts · 3.3K likes · 248.2K views
WunderPixel retweeted
Tyfanwy Prudence McPrude 💛
I have no words… except to say that whatever force brought these two wonderful eccentrics together proves there’s still some silly loveliness in the world 😀💛
English · 241 replies · 1.3K reposts · 4.9K likes · 444.3K views
WunderPixel retweeted
Klaus Steinfelder @Kl_Stone
Shortly before the end of calendar week 14, let's take a little stroll across the #EuropäischenStrommarkt (European electricity market), from west to east, from Portugal to Poland, spotting a few interesting things along the way. 🧵 1/11 Average exchange prices for week 14 in €/MWh: PT 3.87 / ES 4.41 / F 9.88 / D 45.46 / 74.17
[image]
German · 89 replies · 525 reposts · 2.3K likes · 602.6K views
WunderPixel retweeted
Gia Macool @GiaMMacool
5 years old - Dad knows everything!
7 years old - Dad knows.
10 years old - Maybe dad doesn’t know?!
12 years old - Dad doesn’t know.
14 years old - Dad’s gone crazy!
16 years old - Can’t take dad seriously.
18 years old - What does dad know?!
22 years old - Dad’s talking rubbish!
24 years old - I know more than dad!
26 years old - Dad seems to know some things after all.
30 years old - Think I should ask dad about this?!
40 years old - It’s amazing how dad went through all this!
45 years old - Dad’s been right all along.
50 years old - If dad was here, I could have learned a lot.
English · 4.2K replies · 40.4K reposts · 219.2K likes · 28.2M views
WunderPixel retweeted
Staats- und Universitätsbibliothek Hamburg
125 years ago today, the writer Erich Kästner was born. On 10 May 1958, on the 25th anniversary of the Nazi book burnings, he delivered his famous speech "Über das Verbrennen von Büchern" ("On the Burning of Books") in the atrium of the Staats- und Universitätsbibliothek Hamburg. With this photo from the event, we want to commemorate a great author and the message of his speech. "Looming dictatorships," Kästner said at the time, "can only be fought before they have taken power. It is a matter of the calendar, not of heroism." Speaking of calendars: we hope to see you on Sunday at 1 p.m., right around the corner from us at Dammtor/Edmund-Siemers-Allee, at the "Wir sind die Brandmauer" ("We are the firewall") demonstration! @AStA_UHH @unihh @kloeterklikke @beyond_ideology #kästner
[image]
German · 1 reply · 33 reposts · 93 likes · 3.7K views
WunderPixel retweeted
Zeev Rosenberg @zeevrosenberg
Watch and share this spot! This commercial will be shown tonight at the Super Bowl. Spread the word, because the world needs to hear and understand this message. #antisemitismus #israel #superbowl #Antisemitism
German · 105 replies · 2.2K reposts · 4.5K likes · 267.4K views
WunderPixel retweeted
Damien Robitaille @damienrobi
WE ARE THE WORLD 🌎🌍🌏 Just watched « The Greatest Night in Pop » on @netflix and had to repost this video. One of my proudest achievements 😄 Wish I had made a making-of documentary!
English · 63 replies · 106 reposts · 603 likes · 50.4K views
WunderPixel retweeted
Ingwar Perowanowitsch @Perowinger94
One of the most important TV moments of recent years. Maja Göpel deconstructs all the conservative talking points in 113 seconds. This clip should be archived and played back to the relevant protagonists every time the same tired arguments come up.
German · 662 replies · 4.1K reposts · 11K likes · 559.3K views
WunderPixel retweeted
Santiago @svpino
LoRA is a genius idea. To understand the fine-tuning of Large Language Models, you must understand how LoRA works. By the end of this post, you'll know everything important about how it works.

Large Language Models are good generalists, but they have little specialization. We train them on many different tasks, so they know a bit about everything but not enough about anything.

Think of a kid who can play three different sports at a high level. While he may be proficient across the board, he won't get a scholarship unless he specializes. That's how the kid reaches his full potential.

We can do the same with these large models: we can train them to solve a particular task and nothing else. We call this process "fine-tuning." We start with everything the model knows and adjust its knowledge to help it improve on the task we care about.

Fine-tuning is revolutionary, but it's not free. Fine-tuning a large model takes time, care, and lots of money. Many companies can't afford the process: some can't pay for the hardware, some can't hire people who know how to do it, and most can't do either.

That's where LoRA comes in. The insight is that we can approximate a large matrix of parameters with the product of two much smaller matrices; there is a lot of wasted space within these large models. What happens if we find a new, more compact representation?

Did you ever buy a map at a gas station? Giant pages showing every small road, path, and lake around you. They were exhaustive but hard to navigate. That's what the parameters of a large model are like. LoRA turns the gas-station map into a cartoon treasure map: every useless parameter is gone, leaving only two roads, a palm tree, and a cross pointing at the treasure.

We don't need to fine-tune the entire model anymore; we can focus on the small treasure map LoRA gives us. It's a mind-blowing trick: we train the small approximation matrices instead of fine-tuning the entire model. LoRA is cheaper, faster, and uses less memory and storage space.

You can also merge the approximation matrices into the model at deployment time, or use them like simple adapters: load the one you need to solve a problem, then swap in a different one for the next task.

Then there is QLoRA, which makes the process even more efficient by adding 4-bit quantization. QLoRA deserves its own separate post.

The team at @monsterapis has created an efficient no-code LoRA/QLoRA-powered LLM fine-tuner. What they do is pretty smart: they automatically configure your GPU environment and fine-tuning pipeline for your specific model. For example, if you want to fine-tune Mixtral 8x7B on a smaller GPU, they will automatically use QLoRA to keep your costs down and prevent memory issues.

The @monsterapis platform specializes in no-code LoRA-powered fine-tuning. It's the fastest and most affordable offering for fine-tuning models on the market. They sponsored me and gave me 10,000 free credits for anyone who uses the code "SANTIAGO" in their dashboard: monsterapi.ai/finetuning

If you want to read their latest updates, get free credits, and catch special offers, join their Discord server: discord.com/invite/mVXfag4…

TL;DR:
• Traditional fine-tuning trains the entire model. It requires a complex setup, lots of memory, and expensive hardware.
• LoRA trains a small portion of the model. It's faster, requires much less memory, and runs on affordable hardware.
• QLoRA is much more efficient than LoRA but requires a more complex setup.
• No-code fine-tuning with LoRA/QLoRA: the best of both worlds. Low cost and easy setup.
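The thread stops at the intuition, so here is a minimal PyTorch sketch of the trick it describes: freeze the pretrained weight W and train only a rank-r update B·A. This is an illustration of the general LoRA technique, not Santiago's or @monsterapis's code; the class name, the r/alpha defaults, and the zero-init of B are my assumptions (following the convention of the original LoRA paper).

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B (A x), with A of shape (r, d_in), B of (d_out, r)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the big pretrained matrix stays frozen
        d_out, d_in = base.weight.shape
        # A gets a small random init; B starts at zero, so at step 0 the
        # layer behaves exactly like the pretrained model.
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)

    @torch.no_grad()
    def merged_weight(self) -> torch.Tensor:
        # The "adapter" trick: fold B @ A into the dense weight for deployment,
        # so merged inference costs nothing extra over the base model.
        return self.base.weight + self.scale * (self.B @ self.A)

# For a 4096x4096 layer, full fine-tuning would train ~16.8M parameters;
# the rank-8 adapter trains only 2 * 8 * 4096 = 65,536 of them.
layer = LoRALinear(nn.Linear(4096, 4096))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 65536
```

Swapping tasks then just means swapping the small (A, B) pairs, which is why the thread calls them adapters.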
[image]
English · 39 replies · 448 reposts · 2.7K likes · 342.4K views
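For the QLoRA variant the thread mentions (a 4-bit quantized base model with LoRA adapters on top), a common open-source recipe uses Hugging Face transformers, peft, and bitsandbytes. The sketch below shows that general recipe, not how the @monsterapis platform configures it internally; the model name, target modules, and hyperparameters are illustrative placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the frozen base weights to 4 bits; matmuls still compute in bf16.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4, introduced by QLoRA
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",          # placeholder: any causal LM works
    quantization_config=bnb,
    device_map="auto",
)

# Attach small trainable LoRA adapters to the attention projections.
adapters = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, adapters)
model.print_trainable_parameters()          # a tiny fraction of the full model
```

Only the adapters receive gradient updates; the 4-bit base stays frozen, which is what keeps memory low enough to fit a large model on a single smaller GPU.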