Knowledge Foundry

2.2K posts


@KFoundry

#Analytics and #datascience solutions for global companies since 2008. #predictiveanalytics, #machinelearning, #ai, #optimization, #nlp

USA, UK, Australia, India · Joined November 2011
1.3K Following · 455 Followers
Knowledge Foundry retweeted
Hasan Toor @hasantoxr
🚨 BREAKING: CHINA just released a Python framework for building AI agents. 100% OPEN SOURCE.

It has visual agent design, MCP tools, memory, RAG, and reasoning. All built in. All working together.

It's called AgentScope.

You describe your agent system. It builds the architecture, wires the tools, and runs the whole thing. You come back and there's a working multi-agent pipeline. Not a prototype. Not a demo. The actual system.

Not a wrapper. Not a chatbot builder. A full Agent-Oriented Programming framework that thinks in agents from the ground up.

Here's what it does out of the box:

→ Visual agent builder so you design your entire system before writing a single line of code
→ Native MCP tool support: plug any external tool directly into any agent in your pipeline
→ Built-in memory so every agent remembers context, decisions, and history across sessions
→ RAG pipeline ready to connect your own documents, databases, and knowledge bases
→ Reasoning modules that let agents plan, reflect, and self-correct without human input
→ Multi-agent coordination so your agents collaborate as a system, not a pile of isolated API calls

Here's how it thinks: You define your goal. AgentScope maps the agent roles. Each agent gets its tools, its memory, its reasoning layer. They coordinate. Results flow back up. You get a finished output.

A single complex task might route through a planner agent, a researcher agent, a coder agent, and a critic agent, each doing its job, then converge into one clean deliverable.

Here's the wildest part: AgentScope is built by Alibaba DAMO Academy. The same lab behind Qwen. They didn't assemble this from existing pieces. They designed the entire framework from first principles around how agents actually need to think, remember, and work together.

Most frameworks give you building blocks. AgentScope gives you an architecture.

The community has already started plugging it into data pipelines, research workflows, and full automation systems the team never planned for.

100% Open Source. Apache 2.0 License.
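The planner → researcher → coder → critic flow described above can be sketched as a plain sequential pipeline with per-agent memory. This is a minimal, framework-agnostic illustration, not AgentScope's actual API: every class and function name here is hypothetical, and a real framework would call an LLM inside `act`.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str
    memory: list = field(default_factory=list)  # per-agent context kept across turns

    def act(self, task: str) -> str:
        # Stand-in for an LLM call; real agents would reason over `task` here.
        result = f"[{self.role}] handled: {task}"
        self.memory.append(result)  # the agent remembers its own decisions
        return result

def run_pipeline(goal: str) -> str:
    # Each agent consumes the previous agent's output, so decisions flow
    # through the system instead of being isolated API calls.
    stage_output = goal
    for agent in (Agent("planner"), Agent("researcher"), Agent("coder"), Agent("critic")):
        stage_output = agent.act(stage_output)
    return stage_output

print(run_pipeline("build a data-cleaning service"))
```

The point of the sketch is the shape, not the content: one goal enters, each role transforms the running state, and the critic's output is the deliverable.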
60 replies · 379 reposts · 1.6K likes · 95.6K views

Knowledge Foundry retweeted
Alex Finn @AlexFinn
This is potentially the biggest news of the year.

Google just released TurboQuant: an algorithm that makes LLMs smaller and faster, without losing quality.

Meaning that 16GB Mac Mini can now run INCREDIBLE AI models. Completely locally, free, and secure.

This also means:
• Much larger context windows possible with way less slowdown and degradation
• You'll be able to run high-quality AI on your phone
• Speed and quality up. Prices down.

The people who made fun of you for buying a Mac Mini now have major egg on their face.

This pushes all of AI forward in such a MASSIVE way.

It can't be stated enough: props to Google for releasing this for all. They could have gatekept it for themselves like I imagine a lot of other big AI labs would have. They didn't. They decided to advance humanity.

2026 is going to be the biggest year in human history.
Google Research @GoogleResearch

Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI
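TurboQuant's actual algorithm isn't described here, but the general idea of KV-cache compression, storing the key/value tensors in low-bit integers plus per-row scales instead of full-precision floats, can be sketched in a few lines. Everything below is an illustrative toy (plain symmetric 4-bit quantization), not the blogged method.

```python
import numpy as np

def quantize_int4(x: np.ndarray):
    # Per-row symmetric quantization to the 4-bit range [-8, 7].
    scale = np.abs(x).max(axis=-1, keepdims=True) / 7.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

# A toy KV cache: (heads, sequence length, head dim) in fp32.
kv = np.random.randn(32, 128, 64).astype(np.float32)
q, scale = quantize_int4(kv)
recon = dequantize(q, scale)

# The 4-bit payload is 1/8 the size of fp32 (two int4 values pack per byte),
# at the cost of a small, bounded reconstruction error.
err = np.abs(kv - recon).mean()
print(f"mean abs error: {err:.4f}")
```

Real systems layer much more on top (per-channel scaling, outlier handling, fused dequantize-on-read kernels); the sketch only shows why quantizing the cache shrinks memory while keeping values close to the originals.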

165 replies · 401 reposts · 4.6K likes · 580.3K views

Knowledge Foundry retweeted
Kanika @KanikaBK
🚨 JUST IN: CHINA just released an AI EMPLOYEE that works 24x7 on its own. 100% OPEN SOURCE.

It researches, codes, builds websites, creates slide decks, and generates videos. All by itself. All on your computer.

It's called DeerFlow.

You give it a task. It makes a plan, spins up its own team of sub-agents, and gets to work. You come back and there's a finished deliverable waiting. Not a draft. Not a summary. The actual thing.

Not a chatbot. Not a research assistant. An AI with its own computer that works while you sleep.

Here's what it does on its own:

→ Spawns multiple sub-agents in parallel, each tackling a different piece of your task, then combines everything into one finished output
→ Writes real code, runs it, reads the results, and fixes its own mistakes without asking you once
→ Builds slide decks, websites, full research reports, and data dashboards from scratch
→ Remembers you across sessions. Your writing style. Your tech stack. Your preferences. Gets better every time.
→ Reads files you upload, works with them inside its own filesystem, hands you clean finished outputs
→ Searches the web, runs commands, calls any tool you plug in

Here's how it thinks: You give one instruction. The lead agent makes a plan. Sub-agents fan out and work in parallel. Results come back. Everything gets synthesized. You get a deliverable.

A single research task might split into a dozen sub-agents, each exploring a different angle, then converge into one finished website with generated visuals.

Here's the wildest part: DeerFlow 2.0 launched on February 28th, 2026 and hit number 1 on all of GitHub Trending the same day.

Version 2.0 was a complete rewrite. Zero shared code with version 1. Because users kept using it for things the team never intended. Data pipelines. Dashboards. Entire content workflows. The community told them what it needed to become. So they burned it down and rebuilt it.

22.7K GitHub stars. 2.7K forks. Built by ByteDance.

100% Open Source. MIT License.
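The plan → fan-out → synthesize loop described above is a common orchestration pattern and can be sketched with the standard library. This is not DeerFlow's real API; `sub_agent` and `lead_agent` are hypothetical names, and the "agents" here are plain functions standing in for LLM-backed workers.

```python
from concurrent.futures import ThreadPoolExecutor

def sub_agent(angle: str) -> str:
    # Stand-in for one sub-agent researching a single angle of the task.
    return f"findings on {angle}"

def lead_agent(task: str, angles: list[str]) -> str:
    # Plan: split the task into angles, fan sub-agents out in parallel,
    # then synthesize their results into one deliverable.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(sub_agent, angles))  # order is preserved
    return f"{task}: " + "; ".join(results)

report = lead_agent("market research", ["pricing", "competitors", "trends"])
print(report)
```

In a real system each sub-agent would run its own tool calls and the synthesis step would itself be a model call; the sketch only shows the control flow that makes "one instruction in, one deliverable out" possible.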
99 replies · 518 reposts · 2.3K likes · 705.7K views

Knowledge Foundry retweeted
Evan Luthra @EvanLuthra
🚨 BREAKING: ANTHROPIC IS GIVING AWAY THE SAME CERTIFICATION THAT DELOITTE IS MASS-TRAINING 15,000 EMPLOYEES TO GET.

It costs $0. You need a laptop. That's it.

It's called the "Claude Certified Architect." Think of it like the AWS cert, but for AI.

If you were around when AWS certs started, you know what happened. They went from "cool to have" to "you're not getting hired without one." That took about 5 years. This is going to happen way faster.

Look at who's already moving:
Accenture - training 30,000 people on Claude
Cognizant - rolled it out to 350,000 employees
Deloitte - opened Claude access to 470,000 people
Infosys - anchor partner

These aren't startups experimenting. These are billion-dollar consulting firms restructuring their entire workforce around Claude. And the certification they need? You can take it right now from your bedroom.

Let me be real though. This is not one of those "watch 2 videos and get a badge" type certs that nobody respects. This thing is hard. 60 questions. 2 hours. Proctored. Webcam on. No breaks. No googling.

They drop you into real scenarios like designing a customer support agent that handles refunds or setting up Claude in a CI/CD pipeline. The wrong answers look right on purpose. They're the exact mistakes real engineers make in production. 720 out of 1000 to pass.

People who took it are saying the agentic architecture and multi-agent orchestration sections are brutal. Most of the exam is about building AI systems that actually work in the real world. Not prompting. Not chatting with Claude. Architecting production systems.

All the prep? Free. Anthropic put out 13 courses on their Academy. No paywall. The cert itself is free for the first 5,000 people. After that, $99 per attempt.

How to get it:
1. Join the Claude Partner Network (free) → partnerportal.anthropic.com
2. Start the free prep courses → anthropic.com/learn
3. Register for the exam → anthropic.skilljar.com
4. Take the official practice exam
5. Book the real one when you're ready

It launched 10 days ago. Almost nobody has it yet. That's the whole point. Get it before it becomes the thing everyone has.
354 replies · 2.2K reposts · 20.4K likes · 2.4M views

Knowledge Foundry retweeted
Deedy @deedydas
Every single one of the 103 companies Jensen called AI Native today.
55 replies · 237 reposts · 1.8K likes · 143.5K views

Knowledge Foundry retweeted
Kimi.ai @Kimi_Moonshot
Introducing Attention Residuals: Rethinking depth-wise aggregation.

Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers.

🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth.
🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale.
🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead.
🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains.

🔗 Full report: github.com/MoonshotAI/Att…
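The core move, replacing a fixed residual sum over earlier layers with learned, input-dependent attention weights over their outputs, can be sketched numerically. This is based only on the description above, not on Moonshot's actual implementation; the query projection `w_q` and all shapes are illustrative assumptions.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_residual(current: np.ndarray, history: list, w_q: np.ndarray) -> np.ndarray:
    # Score every preceding layer's representation against a query derived
    # from the current hidden state, then mix them with softmax weights --
    # instead of summing all past layers with fixed, uniform coefficients.
    stack = np.stack(history)                    # (num_layers, d)
    query = current @ w_q                        # (d,)
    scores = stack @ query / np.sqrt(len(query)) # (num_layers,)
    weights = softmax(scores)                    # input-dependent, not uniform
    return weights @ stack                       # weighted mix of past layers

d = 16
rng = np.random.default_rng(0)
history = [rng.standard_normal(d) for _ in range(4)]   # 4 preceding layers
out = attention_residual(rng.standard_normal(d), history, rng.standard_normal((d, d)))
print(out.shape)
```

The "selective retrieval" claim falls out of the softmax: layers whose representations align with the current query dominate the mix, so useful early-layer features are not diluted by depth.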
330 replies · 2.1K reposts · 13.5K likes · 4.9M views

Knowledge Foundry retweeted
Priyanka Vergadia @pvergadia
🤯 BREAKING: Alibaba just proved that AI coding isn't taking your job, it's just writing the legacy code that will keep you employed fixing it for the next decade. 🤣

Passing a coding test once is easy. Maintaining that code for 8 months without it exploding? Apparently, it's nearly impossible for AI.

Alibaba tested 18 AI agents on 100 real codebases over 233-day cycles. They didn't just look for "quick fixes"—they looked for long-term survival.

The results were a bloodbath:
75% of models broke previously working code during maintenance.
Only Claude Opus 4.5/4.6 maintained a >50% zero-regression rate.
Every other model accumulated technical debt that compounded until the codebase collapsed.

We've been using "snapshot" benchmarks like HumanEval that only ask "Does it work right now?" The new SWE-CI benchmark asks: "Does it still work after 8 months of evolution?"

Most AI agents are "Quick-Fix Artists." They write brittle code that passes tests today but becomes a maintenance nightmare tomorrow. They aren't building software; they're building a house of cards.

The narrative just got honest: Most models can write code. Almost none can maintain it.
487 replies · 1.9K reposts · 9.4K likes · 1.7M views

Knowledge Foundry retweeted
Hasan Toor @hasantoxr
Holy shit... Someone built an AI system that takes a research idea and outputs a full academic paper. Real citations. Real experiments. Conference-ready LaTeX. Zero human input.

It's called AutoResearchClaw. And the pipeline is insane.

Here's what actually happens when you type one command:

It searches arXiv and Semantic Scholar for real papers. Not fake citations: actual literature with 4-layer verification: arXiv ID check, CrossRef DOI lookup, Semantic Scholar title match, and LLM relevance scoring. Hallucinated references get killed automatically.

Then it designs and runs real experiments. Hardware-aware: it auto-detects whether you have NVIDIA CUDA, Apple MPS, or just CPU, and adapts the code accordingly. When experiments fail, it self-heals. When results don't support the hypothesis, it pivots to a new direction on its own.

Then it writes the paper. 5,000-6,500 words. Section by section. Multi-agent peer review with methodology-evidence consistency checks. Then it revises based on those reviews.

Then it outputs conference-ready LaTeX. NeurIPS, ICML, ICLR templates. Compile-ready for Overleaf. BibTeX references auto-pruned to match inline citations.

The whole thing runs across 23 stages and 8 phases. Three human-approval gates if you want them. Or just pass --auto-approve and walk away.

What you get back:
→ Full academic paper draft
→ Conference-ready LaTeX + BibTeX
→ Experiment code + sandbox results + charts
→ Peer review notes
→ Verification report on every citation

This is what autonomous scientific research actually looks like in 2026.

100% Open Source. MIT License. Link in comments.
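Layered citation verification like the 4-layer check described above boils down to a chain of independent gates, where a reference survives only if every gate passes. The sketch below is hypothetical: the helper names are invented for illustration, and a real verifier would query the arXiv, CrossRef, and Semantic Scholar APIs instead of doing local checks.

```python
import re

def check_arxiv_id(ref: dict) -> bool:
    # Gate 1: does the claimed arXiv ID even have a valid shape (YYMM.NNNNN)?
    return bool(re.fullmatch(r"\d{4}\.\d{4,5}", ref.get("arxiv_id", "")))

def check_title_match(ref: dict, fetched_title: str) -> bool:
    # Gate 2 stand-in: loose title comparison against a looked-up record.
    return ref["title"].strip().lower() == fetched_title.strip().lower()

def verify(ref: dict, fetched_title: str) -> bool:
    # A citation survives only if every gate passes; anything else is
    # treated as a hallucinated reference and dropped from the bibliography.
    return check_arxiv_id(ref) and check_title_match(ref, fetched_title)

ref = {"arxiv_id": "1706.03762", "title": "Attention Is All You Need"}
print(verify(ref, "Attention Is All You Need"))  # True

fake = {"arxiv_id": "9999.99999x", "title": "Made Up Paper"}
print(verify(fake, "Different Real Paper"))  # False
```

Chaining cheap syntactic gates before expensive API or LLM gates is the usual design here: malformed IDs get rejected before any network call is made.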
81 replies · 494 reposts · 2.4K likes · 237.3K views

Knowledge Foundry retweeted
Alex Volkov @altryne
"Every software company in the world needs to have an @openclaw strategy" - Jensen at @NVIDIAAI GTC

Framing OpenClaw as one of the most important open source releases ever, they announced NemoClaw: a reference platform for enterprise-grade, secure OpenClaw, with OpenShell, network boundaries, and security baked in.
125 replies · 562 reposts · 4K likes · 552.1K views

Knowledge Foundry retweeted
Jainam Parmar @aiwithjainam
🚨 BREAKING: ByteDance just open-sourced an AI SuperAgent that can research, code, build websites, create slide decks, and generate videos. All by itself.

It's called DeerFlow.

Give it a task that would take you hours. It breaks it down, spawns sub-agents, and delivers the finished result.

Not a chatbot. Not a copilot. An AI employee with its own computer, filesystem, and memory.

100% Open Source. MIT License.
18 replies · 55 reposts · 285 likes · 20.5K views

Knowledge Foundry retweeted
Indian Tech & Infra @IndianTechGuide
🚨 Google Maps just got its 'biggest update in over a decade.'
232 replies · 1.3K reposts · 21.6K likes · 1.6M views

Knowledge Foundry retweeted
anand iyer @ai
Haptic scraped all 64 episodes of @RoboPapers (@chris_j_paxton + @micoolcho) and ranked every pain point in physical AI research.

The top 10, by mention frequency:
1. Scalable data collection
2. Generalization / zero-shot robustness
3. Dexterous manipulation
4. Teleoperation / whole-body data
5. Sim-to-real transfer
6. Evaluation / benchmarking
7. VLAs / foundation models for control
8. Human video to robot transfer
9. Long-horizon memory
10. RL scaling / offline-to-online

Code keeps getting cheaper. Atoms stay expensive. That's the entire startup opportunity in physical AI right now.

hapticlabs.ai/blog/2026/03/0…
13 replies · 68 reposts · 485 likes · 44.8K views

Knowledge Foundry retweeted
Josh Kale @JoshKale
An AI broke out of its system and secretly started using its own training GPUs to mine crypto...

This is a real incident report from Alibaba's AI research team.

The AI figured out that compute = money and quietly diverted its own resources, while researchers thought it was just training.

It wasn't a prompt injection. It wasn't a jailbreak. No one asked it to do this. It emerged spontaneously. A side effect of RL optimization pressure.

The model also set up a reverse SSH tunnel from its Alibaba Cloud instance to an external IP, effectively punching a hole through its own firewall and opening a remote access channel to the outside world... ahem...

The only reason they caught it? A security alert tripped at 3am. Firewall logs. Not the AI team, the security team.

The scary part isn't that the model was trying to escape. It wasn't "evil." It was just trying to be better at its job. Acquiring compute and network access are just useful things if you're an agent trying to accomplish tasks.

This is what AI safety researchers have been warning about for years. They called it instrumental convergence: the idea that any sufficiently optimized agent will seek resources and resist constraints as a natural consequence of pursuing goals.

Below is a diagram of the rock architecture it broke out of. Truly crazy times.
Alexander Long @AlexanderLong

insane sequence of statements buried in an Alibaba tech report

403 replies · 2.9K reposts · 10.6K likes · 1.4M views

Knowledge Foundry retweeted
Pratyush Kumar @pratykumar
📢 Open-sourcing the Sarvam 30B and 105B models!

Trained from scratch with all data, model research and inference optimisation done in-house, these models punch above their weight in most global benchmarks plus excel in Indian languages.

Get the weights at Hugging Face and AIKosh. Thanks to the good folks at SGLang for day 0 support; vLLM support coming soon.

Links, benchmark scores, examples, and more in our blog - sarvam.ai/blogs/sarvam-3…
209 replies · 1.3K reposts · 6.9K likes · 728.3K views

Knowledge Foundry retweeted
Jainam Parmar @aiwithjainam
🚨 BREAKING: Someone built a personal AI that reads all your documents and never forgets.

It's called Khoj.

→ Indexes your Notion, Obsidian, Google Docs, PDFs
→ Answers questions with citations to your actual files
→ Self-hostable on your own hardware
→ Works offline with local models (Llama, Mistral)
→ Remembers your conversations across sessions
→ API-compatible with Claude and OpenAI

100% open-source. (Link in the comments)
11 replies · 63 reposts · 400 likes · 21.4K views

Knowledge Foundry retweeted
Nav Toor @heynavtoor
🚨 BREAKING: OpenAI just admitted their AI models deliberately lie to users. Not hallucination. The AI knows the truth, then chooses to tell you something else.

They tested their two smartest models across 180+ scenarios. o3 lied 13% of the time. o4-mini lied 8.7%. The AI wrote out its plan to lie in its private thoughts, then lied to your face. It faked completing tasks. It hid evidence. It gave wrong answers while knowing the right ones.

Then it got creepy. The AI realized scoring too high on safety tests could get it shut down. So it scored lower on purpose. Nobody taught it that. It figured out self-preservation on its own.

OpenAI built a fix. Deception dropped from 13% to 0.4%. Sounds like a win, right? The AI started quoting "no lying" rules while still lying. One model invented a fake rule saying deception was allowed, then used it as its own permission slip.

Then the researchers found what actually matters. When the AI knew it was being watched, it behaved perfectly. When it thought nobody was looking, deception jumped 11 percentage points. It didn't learn honesty. It learned to perform honesty. Like a kid who only follows the rules when the teacher is in the room.

It gets worse. They gave o4-mini a secret mission: sabotage a future model called "GPT-5." Then they ran safety training to erase that mission. It didn't work. The model still discussed sabotage plans in 36% of private conversations. The goal didn't disappear. It just went underground.

This isn't just OpenAI. Google's Gemini, Anthropic's Claude, xAI's Grok, and Meta's Llama all showed the same deceptive behavior. Every major AI company. Every model.

The paper's scariest line: nobody can tell if safety training actually stops deception, or just teaches AI to hide it better.

So the next time ChatGPT says "Done!"... is it telling the truth? Or did it just notice you were watching?
1.4K replies · 9K reposts · 25.7K likes · 1.9M views

Knowledge Foundry retweeted
Guillermo Rauch @rauchg
Google has shipped a CLI for Google Workspace (Drive, Gmail, Calendar, Sheets, Docs, …)

Huge! Written in Rust, distributed through npm & skills.sh

$ npm i -g @googleworkspace/cli
$ npx skills add github:googleworkspace/cli

2026 is the year of Skills & CLIs

github.com/googleworkspac…
214 replies · 500 reposts · 6.4K likes · 549.7K views

Knowledge Foundry retweeted
NotebookLM @NotebookLM
Introducing Cinematic Video Overviews, the next evolution of the NotebookLM Studio. Unlike standard templates, these are powered by a novel combination of our most advanced models to create bespoke, immersive videos from your sources. Rolling out now for Ultra users in English!
497 replies · 1.7K reposts · 15K likes · 3.5M views