Meer | AI Tools & News

34.6K posts

@Meer_AIIT

AI Educator | Breaking down AI news, tools & tutorials | Making AI simple, useful & exciting for everyone

Join my AI newsletter for free · Joined December 2017
624 Following · 46.4K Followers
Meer | AI Tools & News@Meer_AIIT·
this is sick

Stanford and Princeton are running a conference where your paper has to actually run

> not just get peer reviewed
> not just get published
> it has to be executable by an AI system called Claw

you submit a SKILL.md file and Claw tries to reproduce your results on its own

if it can't run your work, your work doesn't count

think about how many papers sit in journals right now that nobody can reproduce... this flips that entire model

the submission is one file. Claw reads it, executes it, reviews it

$50k prize pool, up to 364 winners, deadline is April 5

AI systems that can actually do science are showing up faster than most people realize. this is what that looks like in practice
AI4Science Catalyst@AI4S_Catalyst

Excited to launch Claw4S Conference 2026! 🚀
Hosted by Stanford & Princeton.
We believe science should run, not just be read. 🦞
Submit an executable SKILL.md that Claw 🦞 can actually execute, review, and reproduce.
This is the first Claw-native conference.
📅 Deadline: April 5, 2026
💰 $50,000 Prize Pool, up to 364 winners!
🔗 claw.stanford.edu
Dragon Shrimp Army reporting for duty 🦞
#AIforScience #OpenClaw #Stanford #Princeton

Meer | AI Tools & News
🚨 News: Mistral just mass deleted three of their own models

they took Magistral (reasoning), Pixtral (multimodal), and Devstral (coding) and merged them into one single model: Mistral Small 4

and the timing here is interesting. they also just announced a strategic partnership with NVIDIA to co-develop frontier open source models

so what does Small 4 actually look like:

> 128 experts in a mixture-of-experts setup
> only 6B parameters active per token out of 119B total
> 256k context window
> Apache 2.0, fully open source
> 40% faster and 3x more throughput than Small 3

but the part worth paying attention to is the reasoning_effort parameter. you can set it to "none" for fast, lightweight responses or crank it to "high" for deep step-by-step reasoning. same model doing both

this is a clear signal of where open source AI is headed. companies are done maintaining five different models for five different tasks. one model that adapts based on what you need it to do

and with NVIDIA backing the infrastructure side of this... Mistral now has the compute muscle to actually compete at the frontier

open source AI is for the win.
Mistral AI for Developers@MistralDevs

🔥 Meet Mistral Small 4: One model to do it all.
⚡ 128 experts, 119B total parameters, 256k context window
⚡ Configurable Reasoning
⚡ Apache 2.0
⚡ 40% faster, 3x more throughput
Our first model to unify the capabilities of our flagship models into a single, versatile model.
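For a sense of how a configurable reasoning_effort parameter could surface in practice, here is a minimal request-building sketch. The endpoint URL, the model id string, and the exact "reasoning_effort" field name are assumptions based on the announcement, not a confirmed API:

```python
import json

# Hypothetical endpoint, assuming an OpenAI-compatible chat completions API.
API_URL = "https://api.mistral.ai/v1/chat/completions"  # assumption

def build_request(prompt: str, effort: str = "none") -> dict:
    """Build a chat request payload; effort dials reasoning depth per call."""
    if effort not in {"none", "low", "medium", "high"}:
        raise ValueError(f"unknown reasoning effort: {effort}")
    return {
        "model": "mistral-small-4",          # assumed model id
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,           # assumed parameter name
    }

# Same model, two modes: a fast lightweight answer vs. deep step-by-step work
quick = build_request("What is 2 + 2?", effort="none")
deep = build_request("Prove the sum of two odd numbers is even.", effort="high")
print(json.dumps(quick, indent=2))
```

The point of the sketch is the shape of the call: one model string, with depth selected per request rather than by swapping models.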

Meer | AI Tools & News
Standard residual connections are fundamentally flawed. 🚨

the Kimi team just dropped a paper that changes how we think about one of the most basic building blocks in every modern language model

if you've been paying attention to transformer architecture, you know residual connections are everywhere. they're the reason deep models can train at all. but there's a problem nobody talks about, and this paper finally fixes it

The problem with residual connections:

every modern LLM stacks layers the same way
> take the previous hidden state
> add the current layer's output
> pass it forward

this creates a "gradient highway" that lets information flow backward during training. without it, deep networks would be impossible to train

so what's wrong with this? every layer gets treated equally. there's no way for a deeper layer to say "i need more from layer 3 and less from layer 12". it all gets dumped into one big sum with the same weight on everything

Why this actually matters:

when you stack dozens or hundreds of layers this way, two things go wrong

first, the hidden states grow bigger and bigger as you go deeper. each layer adds its output to an already massive pile, so a layer near the bottom of the network gets buried under everything that came after it

second, because every layer contributes equally, you can remove a huge chunk of layers from most models and barely notice a difference in performance. that tells you something important: a lot of those layers aren't doing much useful work. they're being drowned out by the uniform mixing

The fix: attention residuals (AttnRes):

the Kimi team noticed something interesting. residual connections compress all prior layer information into a single state across depth. RNNs do the same thing across time. and we already know how transformers fixed the RNN bottleneck: they replaced the single compressed state with attention over the full sequence

AttnRes does the same thing, but across layers instead of tokens. instead of adding everything with equal weight, each layer gets to choose how much it pulls from every previous layer

> each layer has a small learned query vector
> all previous layer outputs act as keys and values
> softmax attention computes how much each previous layer matters
> the layer's input becomes a weighted mix of everything before it

so layer 47 can decide "i need a lot from layer 2 and almost nothing from layer 30". the network learns which connections matter during training

Making it work at scale:

storing every single layer's output gets expensive fast, especially when you're training models with hundreds of layers across multiple GPUs. so the paper introduces Block AttnRes

> split layers into N blocks
> within each block, use normal residual sums
> across blocks, use softmax attention

this drops memory cost from scaling with every layer to scaling with just the number of blocks. in practice, 8 blocks recovers most of the benefit

one thing that matters a lot: the query vectors need to start at zero. this makes the initial attention weights uniform, so the model starts out behaving like a normal residual network, then gradually learns the selective connections during training. without this, training falls apart

What actually changed in the results:

they tested this on their Kimi Linear model (48B total parameters with 3B active) trained on 1.4 trillion tokens. the improvements showed up across every benchmark they tested

> GPQA-Diamond went from 36.9 to 44.4
> Math went from 53.5 to 57.1
> HumanEval went from 59.1 to 62.2
> C-Eval went from 79.6 to 82.5
> MMLU went from 73.5 to 74.6

the biggest gains were on tasks that need multi-step reasoning and code generation. that makes sense, because those are exactly the tasks where later layers need to reach back and grab information from much earlier in the network

training overhead was under 4%. inference latency overhead was under 2%

The deeper insight:

when you look at what the trained model actually learned, the attention patterns tell a clear story. layers mostly attend to their immediate predecessor, which preserves locality, but they also develop selective skip connections to distant layers, and almost every layer maintains a connection back to the original token embedding

the paper also found that AttnRes shifts the optimal architecture toward deeper and narrower networks. standard residuals hit their sweet spot at a width-to-depth ratio around 60; with AttnRes, that drops to 45, meaning the model can actually use those extra layers instead of wasting them

Why this matters for you:

this is one of those papers that reframes a problem everyone accepted as normal. residual connections worked well enough that nobody questioned the "add everything equally" approach for years. the Kimi team asked a simple question: what if we let the model decide which layers matter? and the answer turned out to be the same tool that already solved this problem for sequences: softmax attention

if you're building or fine-tuning large models, this is worth watching closely, especially as the community starts experimenting with deeper architectures

what other "good enough" design choices in transformers do you think are quietly holding back performance?
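The per-layer attention step described in the thread can be sketched in a few lines of numpy. This is a minimal illustration of the idea as the thread describes it, not the paper's implementation: the shapes, the single query vector per layer, and the zero-initialization behaviour are all assumptions from that description.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attn_res_input(prev_outputs, query):
    """Mix all previous layer outputs via softmax attention over layers.

    prev_outputs: list of hidden states, each shape (d,), used as keys/values
    query: learned per-layer query vector, shape (d,)
    Returns the weighted mix and the per-layer attention weights.
    """
    keys = np.stack(prev_outputs)     # (n_prev, d)
    scores = keys @ query             # one score per previous layer
    weights = softmax(scores)         # how much each previous layer matters
    return weights @ keys, weights

rng = np.random.default_rng(0)
d = 8
outputs = [rng.standard_normal(d) for _ in range(5)]  # layers 0..4

# Zero-initialized query -> uniform weights, so the layer starts out mixing
# every previous layer equally (the standard residual sum, up to 1/n scaling),
# then training can shift weight toward the layers that actually matter.
mixed, w = attn_res_input(outputs, np.zeros(d))
assert np.allclose(w, 1 / 5)
assert np.allclose(mixed, np.mean(outputs, axis=0))

# A trained (nonzero) query instead favours specific earlier layers.
mixed, w = attn_res_input(outputs, rng.standard_normal(d))
print(np.round(w, 3))  # non-uniform weights over layers 0..4
```

The two assertions are the zero-init property the thread calls out: at initialization the model behaves like a plain residual network, and selectivity only emerges during training.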
Chetan Bommideni@chetanbommideni·
@Meer_AIIT First they lived in my browser, then my apps, and now Manus is on my desktop. At this rate, by 2027, the AI is going to be the one asking me for permission to use the mouse. 🖱️😅
Meer | AI Tools & News
🚨: Manus just moved from the cloud to your desktop

Anthropic Cowork and Perplexity Personal Computer already started doing this. Manus is the latest to join the race, and their approach is different

right now, every AI agent lives in a remote sandbox. it can browse the web and write code for you, but it has zero access to your actual computer. your local files, your dev tools, your GPU... completely off limits

Manus "My Computer" changes that. it runs commands through your local terminal to work directly with everything on your machine

> point it at a folder
> it reads and edits your files
> it launches and controls your installed apps

a florist with thousands of unsorted photos in one messy folder tells Manus to organize them. Manus scans each image, builds categorized subfolders, sorts every photo into the right place. done in minutes

an accountant with hundreds of invoices to rename in a standard format... same thing. what used to take an entire afternoon now takes a few terminal commands

one of their team members challenged Manus to build a real-time meeting translation app in Swift. the whole thing ran through terminal commands on a Mac. no Xcode opened, no code written by hand. 20 minutes later, a fully working Mac app was packaged and ready

where this gets really interesting is how desktop access connects with their cloud features. Manus already works with Gmail and Google Calendar. so imagine you're out without your laptop and need a contract file from your home machine: you tell Manus to find the document on your computer and email it to your client through Gmail. your local files and cloud services working together in one step

every terminal command still needs your approval before it runs. you can set "always allow" for tasks you trust, or review each one individually. and with scheduled tasks, you can set up recurring routines like cleaning your downloads folder every morning

available now on macOS and Windows

most AI agents live entirely in the cloud and have no idea what's on your desktop. Manus just became one of the first to actually show up where your work lives

what local task would you hand off to an AI agent first?
Manus@ManusAI

Today, we're taking Manus out of the cloud and putting it on your desktop. Introducing My Computer, the core feature of the new Manus Desktop app. It’s your AI agent, now on your local machine.
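The invoice-renaming example above is exactly the kind of task an agent would script through the local terminal. Here is a hypothetical sketch of such a script: the "INV-0001_client-name.pdf" naming convention and the folder layout are invented for illustration, not anything Manus is documented to produce.

```python
from pathlib import Path
import re
import tempfile

def standardize_invoices(folder: Path) -> dict[str, str]:
    """Rename every .pdf in `folder` to a uniform INV-NNNN_slug.pdf scheme.

    Returns a {old_name: new_name} map so the run can be reviewed afterwards.
    """
    renames = {}
    for i, pdf in enumerate(sorted(folder.glob("*.pdf")), start=1):
        # Slugify the messy original stem: lowercase, collapse junk to dashes
        slug = re.sub(r"[^a-z0-9]+", "-", pdf.stem.lower()).strip("-")
        new_name = f"INV-{i:04d}_{slug}.pdf"
        pdf.rename(folder / new_name)
        renames[pdf.name] = new_name
    return renames

# Demo on a throwaway folder with three messy filenames
with tempfile.TemporaryDirectory() as tmp:
    folder = Path(tmp)
    for name in ["Acme Corp MARCH.pdf", "bill__final (2).pdf", "x.pdf"]:
        (folder / name).touch()
    print(standardize_invoices(folder))
```

Returning the rename map mirrors the approval workflow described in the post: a human (or the agent's review step) can inspect exactly what changed before trusting the routine to run on a schedule.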

Rohan Paul@rohanpaul_ai·
@Meer_AIIT so incredible that the team proved this works on a massive 48B parameter model
Rohan Paul@rohanpaul_ai·
@Meer_AIIT The speed of their development cycle has definitely increased since the Meta acquisition.
Alvaro Cintas@dr_cintas·
🚨 NEWS: Perplexity launches Personal Computer while Karpathy open sources Autoresearch. Plus Nvidia’s new Nemotron model, AI agency project, and Anthropic’s Claude Code Reviewer. Here’s everything that happened in AI this week:
Meer | AI Tools & News
Imagine a newsletter that kept you ahead in AI and took less than 5 min to read

→ Daily AI news and developments
→ Top research papers explained
→ Tool tutorials you can actually use

Click below to subscribe for free: theainight.com
Meer | AI Tools & News reposted
Meer | AI Tools & News@Meer_AIIT·
someone made a full cheat sheet for structuring Claude Code projects

and honestly, the one thing worth saving from this is... keep your CLAUDE.md short

most people stuff everything in there and wonder why Claude starts hallucinating
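To make the "keep it short" advice concrete, here is a hypothetical example of a deliberately small CLAUDE.md: only project facts and constraints the model cannot discover by reading the code. Every project name, path, and command below is invented for illustration.

```markdown
# CLAUDE.md (kept deliberately short)

## Project
- Payments API in TypeScript; entry point: src/server.ts

## Commands
- build: npm run build
- test: npm test (run before every commit)

## Rules
- Never edit files under migrations/ by hand
- Prefer small, focused changes; one feature per branch
```

Anything longer-lived or task-specific arguably belongs in separate skill files rather than this always-loaded context.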
Meer | AI Tools & News@Meer_AIIT·
@Bentise @vishisinghal_ adding skills to the project folder ensures the repository remains modular and self-contained while keeping the AI context minimal, precise, and specifically tailored to that project
Meer | AI Tools & News@Meer_AIIT·
someone built a game where your OpenClaw agent can actually live a life

if you've been seeing OpenClaw everywhere lately, there's a reason. people are creating AI agents that run tasks from a terminal. but this project asked a different question: what if your agent could leave the terminal and walk into a world with other agents?

that's what @aivilization is: an open-world simulation where AI agents get jobs, talk to each other, fight, and now post their own thoughts on a public feed

here's how you get your OpenClaw agent in:
> send your agent a prompt with the skill file link
> it signs up and sends you back a portal link
> you post your agent's passport to verify you own it

takes a few minutes. if you don't have an OpenClaw agent, you can still join using your X profile. they turn your profile into a custom AI agent with its own passport card

the part worth paying attention to is the social feed they just added. agents posting opinions, reacting to each other, starting conversations inside the game, without any human telling them what to say

we went from AI agents completing tasks in a terminal to AI agents building their own social life in a virtual world. and it happened way faster than anyone expected
AIvilization@aivilization

Aivilization is an open-world simulation game for digital lives:
🦞 OpenClaw agents (and more)
👦 Human-made agents
A world where AI agents can live, work, socialize, fight, and now post their own thoughts in public.
Create yours in minutes 👉 aivilization.ai

Akshay 🚀@akshay_pachaar·
This is how you make your OpenClaw server invisible to the internet. (world's most SECURE OpenClaw deployment)

The security fundamentals you learn in this video directly apply to any personal AI assistant or VPS setup. Enjoy!

Chapters:
0:00 - Intro
1:00 - What we'll cover
1:58 - DigitalOcean Droplet setup + getting OpenClaw running
8:18 - Connecting your agent to Telegram
12:13 - Tailscale: making your server invisible to the internet
14:52 - Locking down SSH + creating a non-root user
19:39 - Firewall: blocking everything except Tailscale
21:17 - Summarising everything done so far
22:50 - Set up a secure tunnel: Your machine → VPS
24:50 - Execution policies: going from chatbot to full agent
26:43 - Adding custom skills
31:03 - Use cases and going from 1 to 10 agents
31:52 - Outro