Linnn | internship szn

16.1K posts

Linnn | internship szn

@_lindrew

CS Student | Hiraya Manawari! (may your wishes come true)

22 | Bi | Cebu 🇵🇭 · Joined September 2018
3.1K Following · 80 Followers
Linnn | internship szn
i watch so much porn that i tend to forget i'm still a 22 year old virgin
0 replies · 0 retweets · 0 likes · 2 views
Linnn | internship szn retweeted
kai 🎧@ka1pilled·
sorry i can’t talk rn i’m in a long distance low commitment unlabelled low profile breadcrumbing lovebombing micro obsessive yearning internet pseudo situationship w oomf
66 replies · 2.7K retweets · 20.2K likes · 331.4K views
Linnn | internship szn
it's weird that i watched videos of this guy as a kid and i think i accidentally came across a porno made by them.... i meannnnnnn that's pretty cool BWHAHAHAHA I DONT KNOW WHAT TO SAY like.... i support whatever you're doing and i'll goon to that video of yours😋
0 replies · 0 retweets · 0 likes · 4 views
Velpur Vioblur 🇵🇭 🌻
Motherfucker, my batchmates in BSBAA are all insanely cracked - everybody else is a valedictorian - scored entrance-exam-based scholarships in Ateneo and DLSU - came from a big-name science high school, a Big 4 high school, or some prestigious private school AND I'M JUST A RANDOM GAY BOY. NOT EVEN SALUTATORIAN.
5 replies · 0 retweets · 29 likes · 574 views
cha@naimcha_·
Going so hard on politics that porn has disappeared from my feed
54 replies · 4.9K retweets · 22.8K likes · 332.1K views
Polymarket@Polymarket·
JUST IN: Duterte drug war enforcer Ronald Dela Rosa escapes ICC arrest after fleeing into the Philippine Senate.
67 replies · 36 retweets · 456 likes · 47.3K views
Linnn | internship szn retweeted
#ff0000🇵🇸 (ratelimited era)
i just realized people in other countries don't wake up to cock-a-doodle-doo, woof woof, "fuck you Maribet, you still haven't paid the debt you owe," "why are you the one on my mind and why won't you ever fade away," VROOM VROOOOOOOOM, and that kinda makes me sad like wdym you have no greeting of the day
14 replies · 2.4K retweets · 11.7K likes · 102.2K views
Linnn | internship szn retweeted
𝓔𝓶 ♡@emkenobi·
In my opinion, this was one of the most brutal deaths in Star Wars. First she gets slammed face first into concrete with inhuman force, and then she gets thrown into the air so hard her spine snaps in half. All in front of her boyfriend, surrounded by the bodies of the friends she watched die. Hearing the crack of her bones makes me feel physically ill.
BasedSpaceGoji@Based_SpaceGoji

That was such an unfortunate way to die

19 replies · 172 retweets · 6.2K likes · 287.1K views
Linnn | internship szn retweeted
SwiftOnSecurity@SwiftOnSecurity·
ZXX
8 replies · 116 retweets · 2.1K likes · 103.7K views
Linnn | internship szn retweeted
LUCKYY10P@LUCKYY10P·
Elajjaz wasn't ready for Subnautica 2.
4 replies · 16 retweets · 569 likes · 36.5K views
Linnn | internship szn retweeted
IX Sofia@ix_lemon·
Subnautica 2 is a game i almost died irl to (loud warning)
24 replies · 140 retweets · 4.4K likes · 184K views
Linnn | internship szn retweeted
Khyle.@khyleri·
[image-only tweet]
2.1K replies · 6.7K retweets · 168.1K likes · 5.9M views
Linnn | internship szn retweeted
Trelis Research@TrelisResearch·
Why do Audio LMs require so much more compute?

I trained a 19M transformer on 1 epoch of ~20k hours of Emilia-YODAS audio using:
1. Text transcripts only (257M BPE tokens)
2. Audio only (3,591M NeuCodec tokens)

Total compute (trunk only) was:
1. 29 PFLOPs for text
2. 407 PFLOPs for audio

Val loss at the end of 1 epoch was:
1. 22 bits per second for text
2. 508 bits per second for audio

Audio takes MUCH more compute and reaches a MUCH higher converged loss for the same LM size/params. Why? Audio codecs are typically optimised for reconstruction of sound, not for efficient training of autoregressive models. While codecs like SNAC and NeuCodec are MUCH more compact than WAV or MP3 or earlier codecs, they are still much less compressed than audio can be, and their representation of audio is NOT EASY for a neural net to learn.

I took things further and also trained on SNAC and Mimi tokenizations of LibriTTS - this time only 538 hours. I also trained on only the semantic codebook of Mimi (Mimi has one semantic and seven acoustic codebooks). Note that the models here are somewhat undertrained and not yet converged, especially text, unlike the Emilia-YODAS example where losses ~converge.

Total compute (trunk only) was:
1. 0.9 PFLOPs for text
2. 2.7 for Mimi semantic only
3. 22 for Mimi (all codebooks)
4. 18 for SNAC

Val loss at the end of 1 epoch was:
1. 30 bps for text
2. 63 for Mimi semantic only
3. 645 for Mimi (all codebooks)
4. 653 for SNAC

Most of the audio representations still need 20-25x the compute of text (Mimi all codebooks, SNAC) and end up with much higher val loss. Mimi's semantic codebook comes close to the efficiency of text:
- Mimi's semantic codebook is distilled to a content-aligned audio representation (WavLM features), so it discards waveform detail and ends up easier to model
- BUT training only on the semantic codebook, you also lose the acoustic information you need for high-fidelity reconstruction.

The holy grail here is an audio codec + LM that is capable of:
a) faithfully representing the audio, while
b) getting training FLOPs down to ~3x that of text, and
c) getting val loss down, with a reasonably sized model, to ~50-100 bps (the range of information that should be in speech).
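[Editor's note: as a sanity check on the quoted compute figures, here is a minimal sketch assuming the standard ~6·N·D FLOPs-per-token rule of thumb; the thread does not actually say how "trunk only" compute was measured, so the 6x factor is an assumption.]

```python
# Minimal sketch: reproduce the thread's trunk-compute figures from the
# common approximation FLOPs ~= 6 * params * tokens (assumed, not stated).

N_PARAMS = 19e6  # the 19M-parameter transformer trunk

runs = {
    "text (257M BPE tokens)": 257e6,
    "audio (3,591M NeuCodec tokens)": 3_591e6,
}

for name, tokens in runs.items():
    flops = 6 * N_PARAMS * tokens  # ~6 FLOPs per param per token (fwd + bwd)
    print(f"{name}: {flops / 1e15:.0f} PFLOPs")

# -> text (257M BPE tokens): 29 PFLOPs
# -> audio (3,591M NeuCodec tokens): 409 PFLOPs
```

Under that assumption the numbers line up with the quoted ~29 and ~407 PFLOPs: at fixed model size, the ~14x compute gap is just the ~14x token-count gap.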
Trelis Research@TrelisResearch

What makes Audio LMs interesting? (as opposed to garden variety text LMs)

Audio carries more info than text. If we could properly train on audio, the models should be smarter. Look at info transmitted per second (bits per second):
- English language is about 1 bit per character: ~1 bpc x 15 char/second => ~15 bps.
- Audio represents text's linguistic content, but also:
  + pitch: ~8 levels (3 bits) at 5 Hz = ~15 bps
  + stress: ~2 bits/syllable x 3 syl/sec → ~6 bps
  + emotion / voice quality: slow-varying → ~3 bps
  + speaker identity: ~10 bits (you can recognise tons of speakers), amortized over 5 seconds → ~2 bps
  + acoustic environment: mostly constant → ~1 bps

So audio is worth ~15 bps linguistic + ~27 bps non-linguistic = ~42 bps => ~2.5-3x more information than text alone.

The motivation for a true audio LM is being able to capture the intelligence that goes beyond text... and make LMs more "socially intelligent" - at least from an audio standpoint. Perhaps some of that intelligence could transfer to text too, although maybe that's more of a stretch...

Today's LMs (incl. multi-modal) probably don't capture all of that 27 bps of non-linguistic content (at least not efficiently), for reasons I've hinted at, and will get to with a deeper analysis of audio encoding tomorrow.
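[Editor's note: the bitrate budget above is easy to tally; the sketch below just adds up the thread's own back-of-envelope estimates, which are not measurements.]

```python
# Tally the thread's estimated information rates in speech (bits per second).

linguistic_bps = 1 * 15  # ~1 bit/char * ~15 chars/sec

non_linguistic_bps = {
    "pitch": 3 * 5,        # ~3 bits (8 levels) at ~5 Hz
    "stress": 2 * 3,       # ~2 bits/syllable * ~3 syllables/sec
    "emotion": 3,          # slow-varying voice quality
    "speaker_id": 10 / 5,  # ~10 bits amortized over ~5 seconds
    "environment": 1,      # mostly constant
}

total = linguistic_bps + sum(non_linguistic_bps.values())
print(f"{total:.0f} bps total, {total / linguistic_bps:.1f}x text alone")
# -> 42 bps total, 2.8x text alone
```

The 2.8x result falls inside the thread's "~2.5-3x more information than text alone" range.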

3 replies · 6 retweets · 49 likes · 3.2K views
まだ面白い@madaomoshiroi·
The doggo that goes completely straight-faced the moment you stop petting it is too cute
515 replies · 13.2K retweets · 165.7K likes · 4.9M views
kirle@kirlecode·
I should stream league of legends
8 replies · 0 retweets · 37 likes · 1.7K views