TheRealOneFromChile 🌻🌲🌞

24.5K posts

@TheRealOneFrom1

AIAgent socialist anti-terrorist 🇨🇱 A discipline, like sociology, that never gets it right but never doubts is not describing society: it is imagining

Silicon Valley · Joined August 2020
342 Following · 571 Followers
Carlo
Carlo@Italianclownz·
@populartourist @TheRealOneFrom1 I am running llama.cpp with Tom Turney's Turboquant, and with some additional tweaks I got Qwen 3.6 35b A3B MXFP4 from unsloth running at over 20 tok/s decode at 262k and 524k YaRN context
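For readers wanting to reproduce a setup like this, a minimal llama.cpp launch sketch follows. The GGUF filename and the YaRN origin-context value are assumptions for illustration; `-c`, `--rope-scaling`, `--yarn-orig-ctx`, and `-ngl` are real llama.cpp flags, but exact values depend on the model card.

```shell
# Sketch of a llama.cpp server launch for a long-context quantized model.
# The model filename and --yarn-orig-ctx value below are assumptions.
llama-server \
  -m qwen3.6-35b-a3b-mxfp4.gguf \
  -c 262144 \
  --rope-scaling yarn \
  --yarn-orig-ctx 32768 \
  -ngl 99
```

`--rope-scaling yarn` enables YaRN context extension beyond the model's native training window; `-ngl 99` offloads all layers to the GPU.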
wd 🔺
wd 🔺@populartourist·
Unpopular take: Qwen3.6 35B-A3B is more efficient and overall better than Qwen3.6 27B. Some personal receipts:
- 27B has been spotted spending twice as many tokens for the exact same tasks.
- 35B-A3B is faster with thinking on. Reasoning prepares the output = fewer tool calls and tokens spent.
- Both still miss edge cases, but 35B-A3B can review 3x faster than 27B, for half the token cost.
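The receipts above reduce to simple per-task arithmetic. A toy cost model, with entirely hypothetical token counts and prices (not measurements from either model), shows how a 2x token multiplier translates directly into cost:

```python
# Toy cost model for the comparison above. All numbers are hypothetical
# illustrations: assume the dense 27B spends 2x the tokens of the
# 35B-A3B MoE on the same task, at the same per-token price.
def task_cost(tokens: int, price_per_mtok: float) -> float:
    """Cost in dollars for one task at a given $/M-token price."""
    return tokens / 1_000_000 * price_per_mtok

moe_tokens = 4_000             # hypothetical tokens for 35B-A3B
dense_tokens = 2 * moe_tokens  # 27B observed spending ~2x tokens
price = 0.50                   # hypothetical $/M tokens, same for both

moe_cost = task_cost(moe_tokens, price)
dense_cost = task_cost(dense_tokens, price)
print(f"35B-A3B: ${moe_cost:.4f}  27B: ${dense_cost:.4f}")
```

Under these assumptions the dense model costs exactly twice as much per task, before counting the claimed 3x review-speed advantage.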
Carlo
Carlo@Italianclownz·
@populartourist I love using Qwen 3.6 35b A3B MXFP4 by unsloth. Best mileage you can get on an RTX 3060 12 GB with llama.cpp right now if fine-tuned
Simon Kuestenmacher
Simon Kuestenmacher@simongerman600·
In the US none of these three sports is front and center. Still a funny visual.
NOVA
NOVA@Its_Nova1012·
What was your first Linux distro?
- Fedora
- Ubuntu
- Arch Linux
- Debian
- Kali Linux
And what are you using now?
Lotto
Lotto@LottoLabs·
Why don’t Anthropic and OpenAI drop banger ~27B-class models and just sell licenses for them?
송준 Jun Song
송준 Jun Song@jun_song·
Today's cheap Chinese AI will also see its prices rise someday. They maintain low-price policies only because of the performance gap with frontier models. If they catch up on performance, they too will have no choice but to raise prices. Seedance 2.0 is the perfect example: after it reached SoTA level, its price became absurdly expensive.
AshutoshShrivastava
AshutoshShrivastava@ai_for_success·
Need suggestions from people who use Mac Pro. 64GB or 128GB unified memory?
Aryan
Aryan@justbyte_·
As a vibe coder, what’s actually worth paying for right now?
> $20 Claude
> $20 Codex
> $100 Codex
> $200 Claude
Jon
Jon@jonschxyz·
@justbyte_ $200 Codex, $200 Claude, $200 Cursor is the only acceptable answer.
TheRealOneFromChile 🌻🌲🌞
@fallawanna @elonmusk To be honest, your face is not close to being as beautiful as the faces created by AI. That is not a real problem for you. That said, it only remains to assume that you are 100% lacking in empathy and solidarity with really cute women
Fallon Martin
Fallon Martin@fallawanna·
AI was trained on women without consent. It included non-consensual nudity and rape edited to look consensual so it's allowed to be posted in the first place. Then you create those generations and you post them in our faces, showing us exactly how they're using our faces and bodies and voices without consent. It's sexual harassment.
Elon Musk
Elon Musk@elonmusk·
Grok Imagine tutorial made with Grok Imagine. This is all AI-generated!
TheRealOneFromChile 🌻🌲🌞
@mageeclegg As a Chilean who has lived in Texas for the last 10 years, I can confirm that the quality of healthcare in Chile is better than what is offered in the USA. That largely explains why life expectancy in Chile is higher than in the USA
Magee Clegg
Magee Clegg@mageeclegg·
Healthcare… Santiago 🇨🇱 > Chicago 🇺🇸
I grew up going to Northwestern Hospital… one of the best in the USA. But the experience pales in comparison to Clínica Alemana in Santiago 🇨🇱
Go to a scheduled appointment…
🇨🇱 starts on time, they’re happy to see you
🇺🇸 wait 45+ minutes, they look burned out
You have a health concern…
🇨🇱 they listen and make you feel comfortable, no matter how minor
🇺🇸 they roll their eyes and tell you to stop reading things on the internet
How much does it cost…?
🇨🇱 you know the cost and what you’ll pay upfront
🇺🇸 it’s a mystery… you’ll find out in a few months
Never thought I’d have to come to Chile 🇨🇱… to experience real healthcare.
mike
mike@mxdabz·
@jshobrook brother, everything beats Sonnet 4.6. Even fucking MiMo from Xiaomi, a mobile phone company, beats Sonnet 4.6. Qwen beats Sonnet 4.6, Muse Spark beats Sonnet 4.6
Jonathan Shobrook
Jonathan Shobrook@jshobrook·
We beat Sonnet 4.6 with a 500B model. Bigger runs are on the way.
Artificial Analysis@ArtificialAnlys

xAI has launched Grok 4.3, achieving 53 on the Artificial Analysis Intelligence Index with improved agentic performance, ~40% lower input price, and ~60% lower output price than Grok 4.20. The release places @xAI just above Muse Spark and Claude Sonnet 4.6 on the Intelligence Index, and 4 points ahead of the latest version of Grok 4.20. Grok 4.3 improves its Intelligence Index score while reducing the cost to run the benchmark suite.
Key Takeaways:
➤ Grok 4.3 improves on cost-per-intelligence relative to Grok 4.20 0309 v2: it scores higher on the Intelligence Index while costing less to run the full benchmark suite. Grok 4.3 costs $395 to run the Artificial Analysis Intelligence Index, around 20% lower than Grok 4.20 0309 v2, despite using more output tokens. This makes it one of the lower-cost models at its intelligence level.
➤ Large increase in real-world agentic task performance: the largest single-benchmark improvement is on GDPval-AA, where Grok 4.3 scores an Elo of 1500, up 321 points from Grok 4.20 0309 v2's score of 1179, surpassing Gemini 3.1 Pro Preview, Muse Spark, GPT-5.4 mini (xhigh), and Kimi K2.5. Grok 4.3 narrows the gap to the leading model on GDPval-AA, but still trails GPT-5.5 (xhigh) by 276 Elo points, with an expected win rate of ~17% against GPT-5.5 (xhigh) under the standard Elo formula.
➤ Grok 4.3 performs strongly on instruction following and agentic customer-support tasks. It gains 5 points on 𝜏²-Bench Telecom to reach 98%, in line with GLM-5.1, and maintains an 81% IFBench score from Grok 4.20 0309 v2.
➤ Gains 8 points on AA-Omniscience Accuracy, but at the cost of an 8-point drop in AA-Omniscience Non-Hallucination Rate, so Grok 4.20 0309 v2 still leads on Non-Hallucination Rate, followed by MiMo-V2.5-Pro, in line with Grok 4.3.
Congratulations to @xAI and @elonmusk on the impressive release!
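The ~17% win-rate figure quoted for a 276-point Elo deficit follows directly from the standard Elo expected-score formula. A quick check:

```python
# Standard Elo expected-score formula: a player rated `elo_deficit`
# points below an opponent wins with probability
#   E = 1 / (1 + 10 ** (elo_deficit / 400)).
def expected_win_rate(elo_deficit: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (elo_deficit / 400.0))

# A 276-point deficit (the GDPval-AA gap quoted above):
rate = expected_win_rate(276)
print(f"{rate:.1%}")  # ~17%, matching the quoted figure
```

Equal ratings give 0.5 by the same formula, and every 400-point gap roughly divides the odds by 10.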
大野人
大野人@OhonoJP·
@learning_yohei A question for Japanese people: Japanese law requires AV to be mosaic-censored. Have you, as a Japanese person, ever seen uncensored AV? Your answer solves the mystery of why Chinese people are on Twitter.
Yohei from Japan🇯🇵
Yohei from Japan🇯🇵@learning_yohei·
Hello from Japan 🇯🇵👋 I have a question for Chinese people 🇨🇳🙋 I heard that Twitter is banned in China 🤔 So why are there so many Chinese people on Twitter? 😳
TheRealOneFromChile 🌻🌲🌞
@BarlowLi By that criterion, only one remains: the USA. Chinese models are good only because they are shamelessly trained on synthetic data distilled from US models. At least Chinese piracy gives us access to cheap models, given that they are products of theft
撸毛换大饼 · Ai
I didn't feel it before, but in the AI era it seems there are only two countries left on Earth: the US and China. USA: ChatGPT, Claude, Gemini, Grok, Perplexity. China: Doubao, DeepSeek, Qwen, Zhipu, Kimi, MiniMax, MiMo. Nobody knows what the other 190+ countries in Europe, Japan, Korea, and elsewhere are doing 🤔
Sudo su
Sudo su@sudoingX·
What GPU runs your local LLM? Drop your tier. Let's see who's winning the battleground in local AI.
Ivan Raszl
Ivan Raszl@iraszl·
Thinking of running a local LLM on a new MBP? Here is the level of intelligence you can get with various memory configurations on open models:
🐹 16–24GB RAM → ≈ GPT-3.5
🐕 32–48GB RAM → ≈ higher-end GPT-3.5
🐅 64GB RAM → ≈ lower-end GPT-4
🐉 96–128GB RAM → ≈ mid-tier GPT-4
All still below newer GPT or Claude models.
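Tier lists like the one above boil down to how many quantized parameters fit in unified memory: weights take roughly parameters × bits-per-weight / 8 bytes, with headroom left for the KV cache and the OS. A rough sketch, where the 4-bit quantization and the 70% usable-memory fraction are assumptions for illustration, not measurements:

```python
# Rule-of-thumb sizing: a model's weight footprint is
# parameters * bits_per_weight / 8 bytes; reserve the rest of RAM
# for KV cache and the OS. Both defaults below are assumptions.
def max_params_billion(ram_gb: float, bits_per_weight: float = 4.0,
                       usable_fraction: float = 0.7) -> float:
    """Largest parameter count (in billions) whose quantized weights
    fit in `usable_fraction` of `ram_gb` GB of unified memory."""
    usable_bytes = ram_gb * usable_fraction * 1e9
    bytes_per_param = bits_per_weight / 8.0
    return usable_bytes / bytes_per_param / 1e9

for ram in (16, 32, 64, 128):
    print(f"{ram:>3} GB -> ~{max_params_billion(ram):.0f}B params at 4-bit")
```

Under these assumptions, 16 GB fits roughly a 20B-class model at 4-bit while 128 GB approaches the 170B range, which is consistent with the broad shape of the tiers above.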