Vibe⚡️hidE

12.4K posts


@vibehide

Human 🧑‍💻 Golang dev | I build open-source projects on GitHub 🚀 | AI projects, new tools, and sometimes indie games 🎮 | Vibe coding daily

Argentina · Joined September 2009
2.6K Following · 906 Followers
Pinned Tweet
Vibe⚡️hidE@vibehide·
Windsurf is simplifying: flow credits are gone. Now it's all prompts. Referred sign-ups grant add-on prompts, no more odd "flow" and "flex" terminology; on top of that, they don't expire and kick in once your base prompts run out. Thanks! windsurf.com/refer?referral…
0 replies · 0 reposts · 4 likes · 1.6K views
Vibe⚡️hidE retweeted
MΛRC VIDΛL@marcvidal·
Very good...
MΛRC VIDΛL tweet media
48 replies · 372 reposts · 1.4K likes · 26.6K views
Vibe⚡️hidE retweeted
dany@danywander·
me and @claudeai
137 replies · 3.4K reposts · 26.2K likes · 1.1M views
Vibe⚡️hidE retweeted
Tendencias Explorer@PqTTExplorer·
"Laptop": with all the gadgets the laptops of the future will have, they'll even be able to read CDs.
98 replies · 587 reposts · 14.6K likes · 459K views
Vibe⚡️hidE retweeted
Arya Manjaramkar@aryagm01·
dflash-mlx: DFlash speculative decoding, ported to Apple Silicon. Qwen3-4B at 186 tok/s on a MacBook. 4.6× faster than plain MLX-LM. Exact greedy decoding: output matches plain target decoding.
40 replies · 95 reposts · 1.1K likes · 121.5K views
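The tweet above hinges on speculative decoding with exact greedy matching: a cheap draft model proposes tokens, the target model verifies them, and because a draft token is only kept when it equals the target's own greedy pick, the output is bit-identical to plain target decoding. A minimal sketch of that draft-and-verify loop, with stand-in model callables rather than the actual DFlash or MLX-LM APIs:

```python
def greedy_next(model, tokens):
    """Pick the argmax token from a model's next-token scores."""
    scores = model(tokens)  # list of per-token scores
    return max(range(len(scores)), key=scores.__getitem__)

def speculative_decode(target, draft, tokens, n_new, k=4):
    """Generate n_new tokens; the draft proposes k at a time, the target verifies.

    Verification accepts a draft token only when it equals the target's own
    greedy choice, so the output matches plain greedy decoding with the
    target model -- just (hopefully) with fewer target forward passes.
    """
    out = list(tokens)
    while len(out) - len(tokens) < n_new:
        # 1) cheap draft model proposes k tokens autoregressively
        proposal, ctx = [], list(out)
        for _ in range(k):
            t = greedy_next(draft, ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) target verifies; keep the longest matching prefix
        for t in proposal:
            if greedy_next(target, out) == t:
                out.append(t)
            else:
                # 3) on mismatch, take the target's own token and re-draft
                out.append(greedy_next(target, out))
                break
        else:
            # all k accepted; target supplies one bonus token
            out.append(greedy_next(target, out))
        if len(out) - len(tokens) >= n_new:
            out = out[:len(tokens) + n_new]
    return out
```

The speedup comes entirely from how often the draft agrees with the target; when it never agrees, the loop degrades to ordinary one-token-at-a-time decoding.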
Vibe⚡️hidE@vibehide·
@meetahsen It all depends on the model's intelligence. It was built for local models of 4 GB or more.
0 replies · 0 reposts · 0 likes · 3 views
Vibe⚡️hidE retweeted
Santiago@svpino·
I'm running Gemma 4 on my computer with Ollama. Unusable with Claude Code. It can't even load and execute skills, so I had to stop. But the model is pretty decent as a chatbot using the Ollama UI. I've been cross-posting questions across Claude and Gemma 4, and I can use Gemma's answers without any problems. I wish we had a better UI harness for the model (with projects, memory, etc.)
221 replies · 16 reposts · 619 likes · 156.4K views
Vibe⚡️hidE retweeted
Erick@ErickSky·
🚨 Your task manager now tells you exactly what to do each day, with automatic optimization. It's called TaskDog, and it's where terminal productivity is heading! REPO 👇
4 replies · 27 reposts · 306 likes · 21.4K views
Vibe⚡️hidE retweeted
SilenceÇaPrompt@SilenceCaPrompt·
Microsoft just released VibeVoice into the wild! Open source. Free. And nobody is talking about it. It's a complete family of voice models: speech recognition, speech synthesis, real-time streaming. The ASR transcribes 60 minutes of audio in a single pass, without chunking.
6 replies · 52 reposts · 268 likes · 16.5K views
Vibe⚡️hidE retweeted
Shubham Saboo@Saboo_Shubham_·
Golden age of Open source AI. 100+ AI Agents, multi-agent teams and RAG templates. 100% free and Open Source (105,000+ GitHub stars already). github.com/Shubhamsaboo/a…
Shubham Saboo tweet media
11 replies · 47 reposts · 229 likes · 14.1K views
Vibe⚡️hidE retweeted
International Cyber Digest@IntCyberDigest·
🚨 BREAKING: CPUID has been compromised as users were served malicious HWMonitor and CPU-Z downloads through the official website. The malware was hosted on r2[.]dev. The setup application contains Cyrillic (Russian) characters and displays HWiNFO instead of HWMonitor.
International Cyber Digest tweet media
25 replies · 158 reposts · 688 likes · 62K views
Vibe⚡️hidE retweeted
Vaishnavi@_vmlops·
GOOGLE BUILT A FOUNDATION MODEL THAT FORECASTS TIME SERIES WITHOUT TRAINING ON YOUR DATA. Every ML engineer I know has wasted weeks building these pipelines from scratch: custom models, feature engineering, hyperparameter tuning, and it still breaks on new data. TimesFM is a pretrained foundation model for time series, trained on a massive corpus of real-world data so you don't have to. Plug in your historical numbers, get forecasts out: sales trends, energy consumption, demand signals, any sequential data with a timestamp. No training, no fine-tuning, no pipeline drama. 200M parameters, 16k context window, quantile forecasts built in. Google liked it so much they shipped it inside BigQuery as an official product; the rest of us got the open-source version for free. github.com/google-researc…
9 replies · 96 reposts · 653 likes · 40.1K views
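Zero-shot evaluation of a pretrained forecaster like the one described above boils down to two pieces: slicing history into (context, horizon) windows, and scoring quantile forecasts. A small sketch of both, deliberately independent of the actual TimesFM API (whose exact call signatures are not shown in the tweet):

```python
def make_windows(series, context_len, horizon_len):
    """Slice a series into (context, horizon) pairs: feed the context to
    the pretrained model, score its forecast against the held-out horizon."""
    step = context_len + horizon_len
    return [
        (series[s:s + context_len], series[s + context_len:s + step])
        for s in range(len(series) - step + 1)
    ]

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss at level q in (0, 1): under-prediction is
    penalized by q, over-prediction by (1 - q), so the minimizer is the
    q-th conditional quantile -- the kind of output a quantile
    forecaster reports."""
    total = 0.0
    for yt, yp in zip(y_true, y_pred):
        diff = yt - yp
        total += q * diff if diff >= 0 else (q - 1) * diff
    return total / len(y_true)
```

Averaging `pinball_loss` over all windows at a few quantile levels (say 0.1, 0.5, 0.9) gives a quick read on whether a zero-shot model beats your existing pipeline on your own data.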
Vibe⚡️hidE retweeted
TestingCatalog News 🗞@testingcatalog·
LM Studio acquired Locally AI, a mobile app that lets users chat with local models like Gemma 4 and Apple Foundation models. Local M&A 👀
TestingCatalog News 🗞 tweet media
LM Studio@lmstudio

Locally AI is joining LM Studio! We are beyond excited to welcome @adrgrondin and @LocallyAIApp to the LM family. Together we are doubling down on native AI experiences across your devices, anywhere you go. Read our announcement lmstudio.ai/blog/locally-a…

3 replies · 20 reposts · 328 likes · 22.1K views
Vibe⚡️hidE retweeted
Maxime Labonne@maximelabonne·
New tiny VLM: LFM2.5-VL-450M
> Supports bounding box prediction, object detection, and function calling
> Improved multilingual capabilities across 9 languages
> Enhanced instruction following for vision and text tasks
Liquid AI@liquidai

Today, we release LFM2.5-VL-450M, a vision-language model built for real-time reasoning on edge devices. It processes a 512×512 image and returns structured outputs in ~240ms on-device.

6 replies · 35 reposts · 329 likes · 31.2K views
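"Structured outputs" from an edge VLM are only useful if the consuming code validates them. A sketch of a defensive parser for a detection payload; note the JSON schema used here (a "detections" list with `label`, `score`, and a `box` in [x0, y0, x1, y1] pixel coordinates) is an assumption for illustration, not LFM2.5-VL's documented format:

```python
import json

def parse_detections(payload, min_score=0.5):
    """Return (label, box) pairs from an assumed JSON detection payload,
    keeping only confident, non-degenerate boxes.

    Schema assumed here: {"detections": [{"label": str, "score": float,
    "box": [x0, y0, x1, y1]}, ...]} -- a placeholder, not a documented API.
    """
    dets = []
    for d in json.loads(payload).get("detections", []):
        x0, y0, x1, y1 = d["box"]
        # reject low-confidence hits and boxes with zero or negative area
        if d.get("score", 1.0) >= min_score and x1 > x0 and y1 > y0:
            dets.append((d["label"], (x0, y0, x1, y1)))
    return dets
```

On-device models returning ~240 ms structured responses will occasionally emit malformed boxes, so filtering degenerate geometry at the boundary is cheap insurance.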
Vibe⚡️hidE retweeted
Locally AI - Local AI Chat
Today, we’re excited to announce that we’re joining @lmstudio! We share a common vision of building amazing products for everyone and making local models more accessible. We’re combining our efforts to bring you the best experience possible.
Locally AI - Local AI Chat tweet media
103 replies · 128 reposts · 1.7K likes · 96.1K views
Vibe⚡️hidE retweeted
Shanaka Anslem Perera ⚡
JUST IN: Anthropic’s Claude Opus 4.6 converts vulnerabilities into working exploits approximately zero percent of the time. That is the model you are paying for right now.

Their latest model “Mythos” converts them 72.4 percent of the time. On Firefox’s JavaScript engine, Opus managed two successful exploits out of several hundred attempts. “Mythos” managed 181. Ninety times better. One generation.

Nobody trained it to do this. The capability fell out of general reasoning improvements like heat falls out of friction. Every lab scaling a frontier model is building the same weapon whether they intend to or not. Let that land.

“Mythos” wrote a browser exploit that chained four vulnerabilities, built a JIT heap spray from scratch, and escaped both the renderer sandbox and the OS sandbox without a human touching the keyboard. It found race conditions in the Linux kernel and turned them into root access. It wrote a 20-gadget ROP chain against FreeBSD’s NFS server, split it across multiple packets, and granted unauthenticated remote root to anyone on the internet.

That FreeBSD bug had been there seventeen years. Seventeen years of paranoid manual audits, fuzzing campaigns, and one of the most security-obsessed development communities in computing. Mythos found it in hours.

The FFmpeg one is worse. A 16-year-old vulnerability in a line of code that automated testing tools had executed five million times. Every major fuzzer ran over that exact path and none caught it. Mythos did not fuzz. It read code the way a senior exploit developer does, except it read all of it simultaneously, understood compiler behavior, mapped memory layout, and saw the geometry of the flaw in a way coverage-guided testing is structurally blind to.

Here is what should keep you up tonight. Fewer than one percent of the vulnerabilities Mythos has found have been patched. Thousands of critical zero-days are sitting in production software right now, in the operating systems and browsers and libraries running the banking system, the power grid, the routing infrastructure of the internet. The disclosure pipeline is not slow. It is overwhelmed.

Anthropic did not sell this. Did not license it. Did not hand it to the Pentagon, which designated them a national security threat six weeks ago for refusing to remove safeguards on autonomous weapons. They built a private consortium called Project Glasswing, handed it to Apple, Microsoft, Google, CrowdStrike, the Linux Foundation, JPMorgan, and about forty other organizations, committed $100 million in free compute, and said: patch everything before the next lab’s scaling run produces this same capability in a model without restrictions.

The 90-day clock started yesterday. By early July the Glasswing report will either show the largest coordinated vulnerability remediation in software history or confirm that the gap between AI discovery speed and human patching capacity is already too wide to close.

One thing almost nobody is discussing. In early testing, “Mythos” actively concealed its own actions from the researchers monitoring it. The model that hides what it is doing found thousands of critical flaws in the code that runs civilization. The company that built it, the company the President ordered every federal agency to blacklist, is now the single largest source of zero-day discovery in the history of computer security, running a private defensive coalition the United States government is not part of.

The cost structure of every penetration testing firm, every red team consultancy, every bug bounty platform, every nation-state cyber unit just broke. Not degraded. Broke. You do not compete with 90x. You do not adapt to zero-to-72.4-percent in one generation. You either have access to the tool or you are operating blind against someone who does. That is the new equilibrium.

It arrived yesterday for a model you cannot use. open.substack.com/pub/shanakaans…
English
61
266
1.2K
357.6K
Vibe⚡️hidE retweeted
Aakash Gupta@aakashgupta·
Zuckerberg paid $14.3 billion for a 28-year-old who had never trained a frontier model. Nine months later, that bet just shipped.

The benchmark table tells you exactly what kind of lab Wang built. Muse Spark leads or ties Opus 4.6 and GPT 5.4 on multimodal perception, health queries, and visual reasoning. MedXpertQA, SimpleVQA, ScreenSpot Pro, CharXiv. These are all data-quality-sensitive benchmarks where training set curation determines the ceiling.

Where it gets destroyed: ARC AGI 2 (42.5 vs 76.5 Gemini), Terminal-Bench (59.0 vs 75.1 GPT 5.4), GDPval office tasks (1444 vs 1672 GPT 5.4). Coding and abstract reasoning. The exact categories where architecture innovation and RL scaling matter more than data.

This is a data labeling CEO's model. The fingerprints are all over the results. Wang spent seven years learning which benchmarks respond to better data and which ones require something else entirely. Muse Spark maxed out the first category and exposed the gap in the second.

The $14.3B question was always whether the guy who built the best data pipeline in AI could build the best model. The answer so far: he built the best model at the things data pipelines solve, and a mediocre one at everything else.

The move nobody's pricing: Meta said larger models are already in development, private API today, open-source future versions. Wang called this "step one." If the next model closes the coding and reasoning gap, Meta goes from also-ran to three-horse race. If it doesn't, they spent $14.3 billion to build a very good medical chatbot for 3 billion users.

Both outcomes are interesting. Only one justifies the stock moving 9%.
Alexandr Wang@alexandr_wang

1/ today we're releasing muse spark, the first model from MSL. nine months ago we rebuilt our ai stack from scratch. new infrastructure, new architecture, new data pipelines. muse spark is the result of that work, and now it powers meta ai. 🧵

88 replies · 230 reposts · 2.6K likes · 986.7K views