xirclev

9.5K posts


@xirclev_

🚲

Joined August 2009
2.1K Following · 1K Followers
Maciej Cepnik 🇵🇱 🇲🇽 🇨🇦
Yesterday, I ran the BIGGEST COFFEE MARKET RESEARCH ever in Mexico.🇲🇽☕️📅 I asked Mexicans 👉 Which coffee chain do they like the most? Here are the results after 6,339 votes 👇📊
🥇 Tim Hortons 🇨🇦 (@TimHortonsMX - 41.3%)
🥈 Starbucks 🇺🇸 (@StarbucksMex - 31%)
🥉 Krispy Kreme 🇺🇸 (@krispykrememx - 27.8%)
Mexicans love Tim Hortons (Canada); it won by a big margin. Do you agree with these results? 🤔
[image]
49 replies · 2 reposts · 48 likes · 5.4K views
xirclev retweeted
rabbitholebot
rabbitholebot@rabbitholebot·
Voting is the adult version of writing a letter to Santa Claus.
208 replies · 4.5K reposts · 27.9K likes · 698.9K views
xirclev retweeted
John Berryhill
John Berryhill@Berryhillj·
If you are using a domain name for personal email, a hobby, or some kind of other personal or family use, you NEED to move it out of @GoDaddy immediately:
[image]
79 replies · 228 reposts · 1.6K likes · 133.5K views
xirclev retweeted
calle
calle@callebtc·
we discovered a great use case for bitchat at a conference: panel moderator on the cypherpunk stage used it to collect questions from the audience. it worked great. no registration, no accounts, no qr codes, just pure mesh q&a
[image]
74 replies · 154 reposts · 1.2K likes · 126.4K views
xirclev retweeted
calle
calle@callebtc·
Phil Zimmermann, inventor of PGP end-to-end email encryption, 1991. We're still fighting the exact same fight today.
[image]
32 replies · 312 reposts · 1.1K likes · 57.9K views
xirclev
xirclev@xirclev_·
Before the pandemic we already had something on the map, in its primitive form, but there was no motivation because of "the lack of traditional banking" gob.mx/cms/uploads/at…
0 replies · 0 reposts · 1 like · 21 views
Porkbun
Porkbun@Porkbun·
If you ever want to experience the most frustrating customer service ever, have a business account with Coinbase and try to get support. It's become comical at this point 😅
[image]
33 replies · 4 reposts · 70 likes · 5.9K views
xirclev
xirclev@xirclev_·
@DrNickA maybe it has a lot to do with training and invaluable weights, id is many years ahead of implementation and h̶i̶g̶h̶ entropy is beaten by a couple of minutes and the coins in your pocket x.com/xirclev_/statu…
xirclev@xirclev_

@AnnaRRose @worldnetwork iridology works with the naked eye; reading an iris with hi-res cameras and fine-tuned LLMs should deliver much more than a proof of humanity. oxfordiridology.co.uk/congestion-acc… mdpi.com/2075-4418/13/6…

0 replies · 0 reposts · 0 likes · 14 views
Nick Almond
Nick Almond@DrNickA·
It’s an attack on global identity. A mission to monopolise identity. Very clearly. It’s what it says on the tin.
1 reply · 0 reposts · 6 likes · 636 views
xirclev retweeted
Laprisamata
Laprisamata@Laprisamata1·
Antiflesh series Coming soon✨
[image]
3 replies · 6 reposts · 29 likes · 805 views
Petra
Petra@LuisBetx9·
Gm ✨️
3 replies · 1 repost · 8 likes · 87 views
xirclev retweeted
Adrián Treviño
Adrián Treviño@visionario_btc·
Your friend with 0.001 #BTC on his first date:
17 replies · 29 reposts · 212 likes · 11K views
xirclev retweeted
Andi Marafioti
Andi Marafioti@andimarafioti·
Today, we share the tech report for SmolVLM: Redefining small and efficient multimodal models. 🔥 Explaining how to design a tiny 256M VLM that uses less than 1GB of RAM and outperforms our 80B models from 18 months ago! Here are the coolest insights from our experiments:
✨ Longer context = Big wins: Increasing the context length from 2K to 16K gave our tiny VLMs a 60% performance boost!
✨ Smaller is smarter with SigLIP: Surprise! Smaller LLMs didn't benefit from the usual large SigLIP (400M). Instead, we use the 80M base SigLIP, which performs equally well at just 20% of the original size!
✨ Pixel shuffling magic: Aggressive pixel shuffling helped our compact VLMs "see" better, achieving the same performance with sequences 16x shorter!
✨ Learned positional tokens FTW: For compact models, learned positional tokens significantly outperform raw text tokens, enhancing efficiency and accuracy.
✨ System prompts and special tokens are key: Introducing system prompts and dedicated media intro/outro tokens significantly boosted our compact VLM's performance, especially for video tasks.
✨ Less CoT, more efficiency: Turns out, too much Chain-of-Thought (CoT) data actually hurts performance in small models. It makes them dumber.
✨ Longer videos, better results: Increasing video length during training enhanced performance on both video and image tasks.
🌟 State-of-the-art performance: SmolVLM comes in three powerful yet compact sizes (256M, 500M, and 2.2B parameters), each setting new SOTA benchmarks for its hardware constraints in image and video understanding.
📱 Real-world efficiency: We've created an app using SmolVLM on an iPhone 15 and got real-time inference directly from its camera!
🌐 Browser-based inference? Yep! We get lightning-fast inference speeds of 40-80 tokens per second directly in a web browser. No tricks, just compact, efficient models!
If you're into efficient multimodal models, you'll love this one.
[image]
7 replies · 111 reposts · 469 likes · 65.9K views
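The "pixel shuffling" mentioned above is essentially a space-to-depth rearrangement: neighboring visual tokens are folded into the channel dimension, so a 4x4 shuffle shortens the token sequence 16x. A minimal NumPy sketch of the idea (function name, shapes, and the ratio are illustrative, not taken from the SmolVLM code):

```python
import numpy as np

def pixel_shuffle(tokens: np.ndarray, ratio: int = 4) -> np.ndarray:
    """Space-to-depth shuffle: fold each ratio x ratio block of visual
    tokens into the channel dimension, shrinking the grid by ratio**2."""
    h, w, c = tokens.shape
    assert h % ratio == 0 and w % ratio == 0
    x = tokens.reshape(h // ratio, ratio, w // ratio, ratio, c)
    x = x.transpose(0, 2, 1, 3, 4)  # gather each ratio x ratio block together
    return x.reshape(h // ratio, w // ratio, ratio * ratio * c)

# A 32x32 grid of 64-dim visual tokens (1024 tokens)...
grid = np.random.randn(32, 32, 64)
shuffled = pixel_shuffle(grid, ratio=4)
# ...becomes an 8x8 grid of 1024-dim tokens: 64 tokens, 16x shorter.
print(shuffled.shape)  # → (8, 8, 1024)
```

No information is discarded: the same values are just repacked into fewer, fatter tokens, which is why the model can match performance on a 16x shorter sequence.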
xirclev retweeted
Frank Manzano
Frank Manzano@loved_orleer·
ZXX
3 replies · 12 reposts · 81 likes · 3.8K views
xirclev retweeted
Ing:\Juanita\
Ing:\Juanita\@ing_juani7a·
What could go wrong with feeding your face to the very same technology that makes creating fake identities at industrial scale easy? It's like leaving your house keys in the door with a note that says: "Please, make a copy." AI=1 Priorities=0 securityexpress.info/fake-passport-…
11 replies · 135 reposts · 342 likes · 17.8K views