Contextrix

2.8K posts


@ContextrixAi

We discuss updates, articles, products and tools related to AI and Tech.

Joined September 2025
78 Following · 158 Followers
Contextrix@ContextrixAi·
MiniMax just unveiled MaxClaw — a powerful fusion of OpenClaw × MiniMax Agent × M2.5, now fully unlocked and ready to run 24/7. No deployment needed. No extra API costs. Available across Telegram, WhatsApp, Slack, Discord — instant access to a complete MiniMax Expert ecosystem with upgraded built-in tools designed for real production work. This turns M2.5’s frontier capabilities into a seamless, always-on agent you can chat with anywhere — perfect for coding, research, automation, or complex multi-step tasks.
MiniMax_Agent@MiniMaxAgent

Meet MaxClaw🦞 OpenClaw × MiniMax Agent × M2.5, now fully unlocked. No deployment. No extra API fees. 7×24 across Telegram / WhatsApp / Slack / Discord. Ready-made MiniMax Expert ecosystem. Upgraded built-in tools for real work. Try it now → agent.minimax.io

Contextrix@ContextrixAi·
Alibaba Qwen just released the Qwen 3.5 Small Model Series — compact, high-performance models built on the same strong Qwen 3.5 foundation: native multimodal, improved architecture, and scaled RL training. The lineup:
- Qwen3.5-0.8B & 2B — tiny and blazing fast, ideal for edge devices and on-device agents
- Qwen3.5-4B — surprisingly capable multimodal base for lightweight agents
- Qwen3.5-9B — compact size but already closing the gap to much larger models in reasoning and multimodal tasks
Qwen@Alibaba_Qwen

🚀 Introducing the Qwen 3.5 Small Model Series: Qwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B

✨ More intelligence, less compute. These small models are built on the same Qwen3.5 foundation (native multimodal, improved architecture, scaled RL):
• 0.8B / 2B → tiny, fast, great for edge devices
• 4B → a surprisingly strong multimodal base for lightweight agents
• 9B → compact, but already closing the gap with much larger models

And yes, we're releasing the Base models as well. We hope this better supports research, experimentation, and real-world industrial innovation.

Hugging Face: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…

Contextrix@ContextrixAi·
Sakana AI just introduced Doc-to-LoRA and Text-to-LoRA — two powerful research methods that make LLM customization dramatically faster and more accessible. Instead of expensive fine-tuning or long context stuffing, they train a hypernetwork once to generate task- or document-specific LoRA adapters on the fly with a single forward pass.

Text-to-LoRA specializes models using only a natural language description of the task. Doc-to-LoRA goes further: it lets the model internalize entire factual documents instantly, achieving near-perfect needle-in-a-haystack recall even on sequences 5× longer than the base context window.

Both run in sub-second latency, enable rapid experimentation, and dramatically lower the barrier to customizing foundation models. They even show transfer learning tricks, like injecting visual classification ability from a vision-language model into a pure text LLM via weights alone. This is a big step toward on-demand, user-friendly specialization without heavy engineering pipelines.

Doc-to-LoRA: Paper: arxiv.org/abs/2602.15902 Code: github.com/SakanaAI/Doc-t…
Text-to-LoRA: Paper: arxiv.org/abs/2506.06105 Code: github.com/SakanaAI/Text-…
Sakana AI@SakanaAILabs

We're excited to introduce Doc-to-LoRA and Text-to-LoRA, two related research projects exploring how to make LLM customization faster and more accessible. pub.sakana.ai/doc-to-lora/

By training a hypernetwork to generate LoRA adapters on the fly, these methods allow models to instantly internalize new information or adapt to new tasks.

Biological systems naturally rely on two key cognitive abilities: durable long-term memory to store facts, and rapid adaptation to handle new tasks given limited sensory cues. While modern LLMs are highly capable, they still lack this flexibility. Traditionally, adding long-term memory or adapting an LLM to a specific downstream task requires an expensive and time-consuming model update, such as fine-tuning or context distillation, or relies on memory-intensive long prompts.

To bypass these limitations, our work focuses on the concept of cost amortization. We pay the meta-training cost once to train a hypernetwork capable of producing task- or document-specific LoRAs on demand. This turns what used to be a heavy engineering pipeline into a single, inexpensive forward pass. Instead of performing per-task optimization, the hypernetwork meta-learns update rules to instantly modify an LLM given a new task description or a long document.

In our experiments, Text-to-LoRA successfully specializes models to unseen tasks using just a natural language description. Building on this, Doc-to-LoRA is able to internalize factual documents. On a needle-in-a-haystack task, Doc-to-LoRA achieves near-perfect accuracy on instances five times longer than the base model's context window. It can even generalize to transfer visual information from a vision-language model into a text-only LLM, allowing it to classify images purely through internalized weights. Importantly, both methods run with sub-second latency, enabling rapid experimentation while avoiding the overhead of traditional model updates.

This approach is a step towards lowering the technical barriers of model customization, allowing end-users to specialize foundation models via simple text inputs. We have released our code and papers for the community to explore.

Doc-to-LoRA Paper: arxiv.org/abs/2602.15902 Code: github.com/SakanaAI/Doc-t…
Text-to-LoRA Paper: arxiv.org/abs/2506.06105 Code: github.com/SakanaAI/Text-…
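The amortization idea above can be sketched in a few lines of numpy: a hypernetwork, trained once, maps a task/document embedding to LoRA factors in one forward pass, and those factors are added to a frozen base weight. All sizes, names, and the random stand-in embedding here are illustrative assumptions, not Sakana's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 64  # hidden size of the toy base layer
RANK = 4      # LoRA rank
EMB = 32      # size of the task/document embedding

# Frozen base weight of one linear layer in the "LLM".
W_base = rng.normal(size=(D_MODEL, D_MODEL))

# Hypernetwork: here just one linear map from a task embedding to the
# flattened LoRA factors A (RANK x D_MODEL) and B (D_MODEL x RANK).
# Meta-training pays for this once; each new task then costs a single
# forward pass instead of a fine-tuning run.
H = rng.normal(scale=0.02, size=(EMB, 2 * RANK * D_MODEL))

def generate_lora(task_embedding):
    """One hypernetwork forward pass -> task-specific LoRA factors."""
    flat = task_embedding @ H
    A = flat[: RANK * D_MODEL].reshape(RANK, D_MODEL)
    B = flat[RANK * D_MODEL:].reshape(D_MODEL, RANK)
    return A, B

def adapted_forward(x, A, B):
    """Base layer plus the low-rank update: (W + B A) x."""
    return W_base @ x + B @ (A @ x)

# Stand-in for an encoded task description or document.
task = rng.normal(size=EMB)
A, B = generate_lora(task)
y = adapted_forward(rng.normal(size=D_MODEL), A, B)
print(y.shape, A.shape, B.shape)
```

The low-rank pair holds only 2 × RANK × D_MODEL = 512 values versus 4,096 in the base weight, which is why generating and swapping adapters per task stays cheap.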

Contextrix@ContextrixAi·
Ollama just made local agentic coding even easier: you can now launch Pi, a minimal, fully customizable coding agent, directly from the terminal. Run one command: ollama launch pi. Pi starts as a lightweight, local coding companion — you can immediately ask it to write code, debug, explain logic, or refactor.
ollama@ollama

Ollama can now launch Pi, a minimal coding agent which you can customize for your workflow: ollama launch pi. You can even ask Pi to write extensions for itself.

Chubby♨️@kimmonismus·
Perplexity Computer created a "Crisis Intelligence Dashboard: Hormuz Closure → Energy → Inflation → AI/Cloud Costs" for me, one-shotted; it worked like a charm. Prompt in comments. (Again: no paid promotion.)
Chubby♨️@kimmonismus

Checking out Perplexity Computer and so far I love it. One-shotted creating this GIF showing NVIDIA's stock price from February 2016 to February 2026. Will test it more today. (No paid promotion; I did not receive any money from Perplexity.)

Contextrix@ContextrixAi·
@Similarweb X hitting 152.2 million visits on Feb 28, with a 13.3 percent jump right when the Middle East conflict started, shows how much people rely on it for real-time breaking news.
Similarweb@Similarweb·
X recorded 152.2 million visits on Saturday, February 28, the start of the war in the Middle East. The unusual spike in traffic (+13.3% between Friday and Saturday, compared to an average of +0.98% so far in 2026) highlighted X’s role as a go-to platform for breaking news and current events.
Contextrix@ContextrixAi·
@IndianTechGuide This is great news for Airtel users. AI-powered spam protection in RCS messaging, backed by Google, should cut down junk texts and calls far more effectively.
Indian Tech & Infra@IndianTechGuide·
🚨 Airtel partners with Google to bring AI-powered spam protection to RCS messaging in India.
Millie Marconi@MillieMarconnni·
🚨 BREAKING: A developer on GitHub just built a tool that turns any GitHub repo into an interactive knowledge graph and open sourced it for free. It's called GitNexus. Think of it as a visual X-ray of your codebase, but with an AI agent you can actually talk to. No server. No subscription. No enterprise sales call.

Here's what it does inside your browser:
→ Parses your entire GitHub repo or ZIP file in seconds
→ Builds a live interactive knowledge graph with D3.js
→ Maps every function, class, import, and call relationship
→ Runs a 4-pass AST pipeline: structure → parsing → imports → call graph
→ Stores everything in an embedded KuzuDB graph database
→ Lets you query your codebase in plain English with an AI agent

Here's the wildest part: it uses Web Workers to parallelize parsing across threads, so a massive monorepo doesn't freeze your tab. The Graph RAG agent traverses real graph relationships using Cypher queries: not embeddings, not vector search. Actual graph logic. Ask it things like "What functions call this module?" or "Find all classes that inherit from X" and it traces the answer through the graph.

This is the kind of code intelligence tool enterprise teams pay thousands per month for. It runs entirely in your browser. Works with TypeScript, JavaScript, and Python. 100% Open Source. MIT License. Repo: github.com/abhigyanpatwar…
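The "call graph" pass such a tool runs can be sketched with Python's standard ast module. This toy indexer and its sample source are illustrative assumptions; GitNexus's real browser-side pipeline is far more elaborate.

```python
import ast

# A tiny source file to index (stand-in for a whole repo).
SRC = """
def load(path):
    return open(path).read()

def parse(text):
    return text.split()

def main():
    data = load("x.txt")
    return parse(data)
"""

def build_call_graph(source):
    """Map each function definition to the plain names it calls."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            graph[node.name] = {
                c.func.id
                for c in ast.walk(node)
                if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
            }
    return graph

graph = build_call_graph(SRC)
print(sorted(graph["main"]))  # ['load', 'parse']

# "What functions call parse?" is then a reverse traversal:
callers_of_parse = {f for f, callees in graph.items() if "parse" in callees}
print(callers_of_parse)
```

Storing these edges in a graph database (as the post describes with KuzuDB) turns that reverse traversal into a one-line Cypher query instead of a Python loop.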
Contextrix@ContextrixAi·
@matchaman11 Open-Higgsfield-AI getting Seedance 2.0 first, with free BYOK, open source, and no subscription lock-in, is huge.
Anil Chandra Naidu Matcha@matchaman11·
Seedance 2.0 has arrived on Open-Higgsfield-AI 🔥🔥 First in the world to integrate Seedance 2.0 👏 As always ✅ Free with BYOK ✅ Open-source ✅ No Higgsfield subscription tax Link to project below 👇
Contextrix@ContextrixAi·
@heygurisingh Accomplish running code execution and web browsing together, fully local and open source with no API costs or subscriptions, is huge.
Guri Singh@heygurisingh·
BREAKING: An anonymous dev on GitHub just built an AI that codes and browses the web at the same time. It's called Accomplish and it runs locally without burning through API credits. No Claude Desktop. No Cursor. No monthly subscriptions. 100% Opensource.
Alif Hossain@alifcoder·
Someone just solved the #1 problem with local AI. It's called llmfit and it tells you exactly which LLMs will run on YOUR hardware before you waste hours downloading the wrong model. No guessing. No trial and error. No "out of memory" crashes.

Here's how it works. One command scans your full setup:
→ Detects your RAM, CPU, GPU, and VRAM
→ Scores every model on quality, speed, fit, and context
→ Picks the best quantization automatically
→ Ranks what's perfect, good, or marginal for your machine

Here's the wildest part: it handles MoE architectures properly. Mixtral 8x7B has 46.7B total parameters but only activates 12.9B per token. llmfit accounts for that. Most tools don't.

94 models. 30 providers. One command. 100% open source.
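A rough sketch of the kind of fit check described above, including the MoE total-vs-active distinction the post highlights. The 1.2× overhead factor and the sizing rule are illustrative assumptions, not llmfit's actual scoring.

```python
GB = 1024 ** 3

def weight_gb(params_b, bits):
    """Memory for the quantized weights alone, in GiB."""
    return params_b * 1e9 * bits / 8 / GB

def fits(total_params_b, bits, vram_gb, overhead=1.2):
    """Crude fit check: all weights resident, plus a flat overhead
    factor for activations and KV cache (an assumed constant here)."""
    needed = round(weight_gb(total_params_b, bits) * overhead, 1)
    return needed <= vram_gb, needed

# Mixtral 8x7B: only ~12.9B of its 46.7B params are ACTIVE per token
# (that drives speed), but every expert must sit in memory, so the
# fit check has to use the TOTAL parameter count.
ok_24gb, need = fits(46.7, bits=4, vram_gb=24)
print(ok_24gb, need)  # a 24 GB card falls just short at 4-bit
```

A tool that sized Mixtral by its active 12.9B parameters would happily recommend it for a 24 GB card and crash at load time; sizing by total parameters is the distinction the post calls out.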
Contextrix@ContextrixAi·
@_vmlops Physical AI powering real robots in manufacturing, logistics, healthcare, and more could dwarf generative AI in economic impact.
Vaishnavi@_vmlops·
Physical AI will be bigger than Generative AI. Generative AI creates content; Physical AI creates real-world action: robots that move, build & operate.

McKinsey estimates Physical AI could impact over $15T in global GDP across manufacturing, logistics, healthcare, agriculture & construction.

The biggest blocker isn't intelligence, it's data. Robots generate massive, unsynchronized sensor data that's hard to prepare for training. Mosaico is solving this by building open-source infrastructure to convert raw sensor data into training-ready datasets, the foundation Physical AI needs to scale. github.com/mosaico-labs/m…
Contextrix@ContextrixAi·
@testingcatalog This is massive. Meta dropping a standalone Vibes app with full video editing, character consistency, and ingredients, powered by Midjourney, is a direct shot at Sora and Flow.
TestingCatalog News 🗞@testingcatalog·
BREAKING 🚨: Meta is about to challenge Sora and Flow with their new Vibes editor. The standalone Vibes app will have a full-featured video editor, support for character consistency, and ingredients. Powered by Midjourney 👀
Contextrix@ContextrixAi·
@openclaw OpenClaw passing React on GitHub stars is insane.
OpenClaw🦞@openclaw·
We just passed React on GitHub stars. 🦞 Let that sink in. A personal AI assistant built by a lobster-obsessed Austrian and an army of crustacean enthusiasts just outstarred the library that powers half the internet. We shipped 90+ changes today. They shipped a conference.
Pavel Durov@durov·
All Telegram chatbots can now stream responses to users in real time — great for AI assistants.
Tech with Mak@techNmak·
🚨 This is the best way to learn how LLMs work. Interactive. 3D. Step-by-step.

Covers:
→ Embedding
→ Layer Norm
→ Self-Attention
→ MLP
→ Transformer layers
→ Softmax
→ Output

Stop reading papers. Start seeing. Link in comments. Save this immediately.
Contextrix@ContextrixAi·
@alifcoder This is massive for indie devs. A curated list of 320K+ free public APIs, covering weather, finance, news, sports, crypto, AI/ML, government data, maps, entertainment, and more, all categorized, searchable, and verified working, is a goldmine.
Alif Hossain@alifcoder·
🚨 BREAKING: Build your next app without spending a dollar on data. Someone made a list of 320,000+ free public APIs, and developers are going crazy.
→ Weather, finance, news, sports, crypto
→ AI & machine learning APIs you can call right now
→ Government open data, maps, geolocation
→ Entertainment: movies, music, games, anime
→ Categorized, searchable, and verified as working

Free and 100% open source. Link below: 👇
Contextrix@ContextrixAi·
@rohanpaul_ai This is crazy: top models only hit 54% on basic physical reasoning in videos while humans get 97%.
Rohan Paul@rohanpaul_ai·
🤯 56 researchers from 32 universities across the US, China, and UK built an enormous video reasoning dataset to prove current AI models struggle with basic physical logic: the "Very Big Video Reasoning Suite."

The problem is that AI does not genuinely know how solid objects are supposed to behave. So Berkeley, Stanford, CMU, Harvard, Oxford, Columbia, NTU, Johns Hopkins, and 24 other institutions built this 2M-sample dataset, making it 1000 times larger than all existing collections combined.

Video generation systems usually focus on making things look pretty but completely fail to understand spatial rules and causality. The team created a massive factory of visual tasks that tests how well models handle navigation, object manipulation, and logic. Even the most advanced commercial systems only scored around 54%, while human testers easily achieved over 97% accuracy. Training an open model on this specific data improved its reasoning skills, but a massive gap still exists.
Contextrix@ContextrixAi·
@xenovacom Qwen 3.5 small models, from 0.8B to 9B, multimodal and running fully locally in the browser on WebGPU, are huge for on-device AI.
Xenova@xenovacom·
NEW: Alibaba just released Qwen 3.5 Small — a family of powerful multimodal models available in a range of sizes (0.8B, 2B, 4B, and 9B parameters). Perfect for on-device applications! They can even run 100% locally in your browser on WebGPU, powered by Transformers.js! 🤯