Samuel Danso

1.7K posts

@samueldans0

Engineer | 🫂 @developer_dao

Singapore · Joined March 2018
1.3K Following · 725 Followers

Pinned Tweet
Samuel Danso@samueldans0·
We kicked off today with the Opening Ceremony of @shanhaiwoo Singapore 🇸🇬, a month-long popup building what’s next in Ethereum, AI, and frontier tech. Featuring amazing speakers + Resident team introductions. Excited to be part of the ShanHaiWoo Residency, building @necta_fi
[two images]
Replies 0 · Retweets 1 · Likes 8 · Views 389
Samuel Danso retweeted
fks@FredKSchott·
Introducing Flue — The First Agent Harness Framework

Flue is a TypeScript framework for building the next generation of agents, designed around a built-in agent harness.

Flue is like Claude Code, but 100% headless and programmable. There's no baked-in assumption like requiring a human operator to function. No TUI. No GUI. Just TypeScript. But using Flue feels like using Claude Code. The agents you build act autonomously to solve problems and complete tasks.

They require very little code to run. Most of the "logic" lives in Markdown: skills, context, and AGENTS.md.

Flue is like Astro or Next.js for agents (not surprising, given my background 🙃). It's not another AI SDK. It's a proper runtime-agnostic framework. Write once, build, and deploy your agents anywhere (Node.js, Cloudflare, GitHub Actions, GitLab CI/CD, etc.).

We originally built Flue to power AI workflows inside of the Astro GitHub repo. But then @_bgiori got his hands on it, and we realized that every agent needs a framework like Flue, not just us.

Check it out! It's early, but I'm curious to hear what people think. Are agents ready for their library -> framework moment?
[image]
Replies 172 · Retweets 329 · Likes 3.6K · Views 672.9K
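The announcement doesn't show Flue's actual API, but the idea of a headless, programmable harness — agent behavior driven by externalized skills and context rather than a TUI — can be sketched in a few lines. Every name below (`AgentConfig`, `runAgent`, the `skills` map) is invented for illustration and is not Flue's real interface:

```typescript
// Minimal sketch of a headless agent harness: no TUI, no GUI, just code.
// All names here are hypothetical, NOT Flue's actual API.
type Skill = (task: string) => string;

interface AgentConfig {
  context: string;                 // stands in for AGENTS.md / context files
  skills: Record<string, Skill>;   // procedural knowledge, like Markdown skills
}

function runAgent(config: AgentConfig, task: string): string[] {
  const log: string[] = [`context loaded: ${config.context}`];
  // A real harness would loop over model calls; here we simply dispatch any
  // skill whose name appears in the task, to keep the sketch self-contained.
  for (const [name, skill] of Object.entries(config.skills)) {
    if (task.includes(name)) {
      log.push(`skill "${name}" -> ${skill(task)}`);
    }
  }
  return log;
}

const agent: AgentConfig = {
  context: "You are a headless build agent.",
  skills: { shout: (t) => t.toUpperCase() },
};
```

The point of the pattern is that the agent is just a value you construct and a function you call — embeddable in Node.js, CI, or serverless runtimes with no human operator assumed.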
Cursor@cursor_ai·
We’re introducing the Cursor SDK so you can build agents with the same runtime, harness, and models that power Cursor. Run agents from CI/CD pipelines, create automations for end-to-end workflows, or embed agents directly inside your products.
Replies 401 · Retweets 829 · Likes 8.7K · Views 2.9M
LoucB@LoicBerthelot·
As a founder, which team are you on?
1. Hermes
2. OpenClaw
3. Claude Dispatch
Replies 314 · Retweets 5 · Likes 449 · Views 91.6K
SpaceX@SpaceX·
SpaceX and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI. The combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million-H100-equivalent Colossus training supercomputer will allow us to build the world's most useful models. Cursor has also given SpaceX the right to acquire Cursor later this year for $60 billion or pay $10 billion for our work together.
Replies 2.4K · Retweets 5.1K · Likes 38.4K · Views 20.6M
Samuel Danso retweeted
Kimi.ai@Kimi_Moonshot·
Meet Kimi K2.6: Advancing Open-Source Coding

🔹 Open-source SOTA on HLE w/ tools (54.0), SWE-Bench Pro (58.6), SWE-bench Multilingual (76.7), BrowseComp (83.2), Toolathlon (50.0), CharXiv w/ Python (86.7), Math Vision w/ Python (93.2)

What's new:
🔹 Long-horizon coding - 4,000+ tool calls, over 12 hours of continuous execution, with generalization across languages (Rust, Go, Python) and tasks (frontend, devops, perf optimization).
🔹 Motion-rich frontend - videos in hero sections, WebGL shaders, GSAP + Framer Motion, Three.js 3D.
🔹 Agent Swarms, elevated - 300 parallel sub-agents × 4,000 steps per run (up from K2.5's 100 / 1,500). One prompt, 100+ files.
🔹 Proactive Agents - the K2.6 model powers OpenClaw, Hermes Agent, etc. for 24/7 autonomous ops.
🔹 Claw Groups (research preview) - bring your own agents; command your friends' bots & humans in the loop.

K2.6 is now live on kimi.com in chat mode and agent mode. For production-grade coding, pair K2.6 with Kimi Code: kimi.com/code

🔗 API: platform.moonshot.ai
🔗 Tech blog: kimi.com/blog/kimi-k2-6
🔗 Weights & code: huggingface.co/moonshotai/Kim…
[image]
Replies 906 · Retweets 2.4K · Likes 18.1K · Views 7.4M
Samuel Danso retweeted
Akshay 🚀@akshay_pachaar·
A harnessed LLM agent.

Most people picture this as a model with tools bolted on. The real architecture inverts that relationship. The model itself is deliberately thin. Intelligence gets pushed outward, and the harness composes it at runtime.

Three dimensions orbit the harness core:

𝗠𝗲𝗺𝗼𝗿𝘆 holds state the model shouldn't carry in weights or context. Working context, semantic knowledge, episodic experience, and personalized memory each have their own lifecycle.

𝗦𝗸𝗶𝗹𝗹𝘀 hold procedural knowledge. Operational procedures, decision heuristics, and normative constraints specialize the general model per task.

𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹𝘀 hold the interaction contracts. Agent-to-user, agent-to-agent, and agent-to-tools are three distinct surfaces with their own failure modes.

Between the core and these modules sit the mediators: sandboxing, observability, compression, evaluation, approval loops, and sub-agent orchestration. They govern how the harness reaches out and how state flows back in.

The useful question this framing unlocks: for any new capability, where should it live? Stable knowledge goes to memory, learned playbooks go to skills, communication contracts go to protocols, loop governance goes to the mediators. Harness design becomes a question of what to externalize, and how to mediate it.

I'm building a minimal agent harness from scratch. Didactic, easy to read, no magic. Open-sourcing it soon. Stay tuned.
[GIF]

Quoted article by Akshay 🚀 (@akshay_pachaar): x.com/i/article/2040…
Replies 78 · Retweets 286 · Likes 1.6K · Views 233.8K
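The memory/skills/mediator decomposition described above can be sketched in a few lines of TypeScript. The interfaces below (`Memory`, `Skill`, `Harness`, `MapMemory`) are invented for this illustration, not taken from the upcoming open-source repo: a thin core routes a request to an externalized skill, and a mediator step writes the result back into memory so state flows back in.

```typescript
// All names here are hypothetical; this is a didactic sketch of the
// harness-with-externalized-modules idea, not a published API.
interface Memory {
  recall(key: string): string | undefined;
  store(key: string, value: string): void;
}

// Skills hold procedural knowledge; they receive memory rather than owning it.
type Skill = (input: string, memory: Memory) => string;

class MapMemory implements Memory {
  private data = new Map<string, string>();
  recall(key: string): string | undefined {
    return this.data.get(key);
  }
  store(key: string, value: string): void {
    this.data.set(key, value);
  }
}

// The harness keeps the core thin: memory and skills are composed at runtime.
class Harness {
  constructor(
    private memory: Memory,
    private skills: Record<string, Skill>,
  ) {}

  // Mediator step: route a request to a skill, then persist the result so
  // state flows back into memory.
  handle(skillName: string, input: string): string {
    const skill = this.skills[skillName];
    if (!skill) throw new Error(`unknown skill: ${skillName}`);
    const result = skill(input, this.memory);
    this.memory.store(`last:${skillName}`, result);
    return result;
  }
}

const memory = new MapMemory();
const harness = new Harness(memory, {
  summarize: (input) => `summary(${input.slice(0, 10)})`,
});
```

Note how the framing's question ("where should a capability live?") maps directly onto the types: stable state goes behind `Memory`, procedures are `Skill` values, and loop governance lives in `Harness.handle`.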
Samuel Danso retweeted
Claude@claudeai·
Introducing Claude Design by Anthropic Labs: make prototypes, slides, and one-pagers by talking to Claude. Powered by Claude Opus 4.7, our most capable vision model. Available in research preview on the Pro, Max, Team, and Enterprise plans, rolling out throughout the day.
Replies 4.1K · Retweets 15.1K · Likes 148.9K · Views 63.2M
HeyGen@HeyGen·
We built our launch video in Claude Code using HyperFrames. Now it's yours. Open source, agent-native framework. HTML to MP4.

$ npx skills add heygen-com/hyperframes

RT + Comment "HyperFrames" to get the full source code of this launch video (must follow)
Replies 2.9K · Retweets 1.9K · Likes 8.3K · Views 2.9M
Shaw (spirit/acc)@shawmakesmagic·
The quality of your vibecoded slop is horrible. I've seen it. Absolute dogshit. Fortunately, there is a fix. Use this prompt:

I want to clean up my codebase and improve code quality. This is a complex task, so we'll need 8 subagents. Make a sub-agent for each of the following:

1. Deduplicate and consolidate all code, and implement DRY where it reduces complexity
2. Find all type definitions and consolidate any that should be shared
3. Use tools like knip to find all unused code and remove it, ensuring that it's actually not referenced anywhere
4. Untangle any circular dependencies, using tools like madge
5. Remove any weak types, for example 'unknown' and 'any' (and the equivalent in other languages); research what the types should be, both in the codebase and in related packages, to make sure the replacements are strong types and there are no type issues
6. Remove all try/catch and equivalent defensive programming unless it serves a specific role of handling unknown or unsanitized input or otherwise has a reason to be there, with clear error handling and no error hiding or fallback patterns
7. Find any deprecated, legacy, or fallback code, remove it, and make sure all code paths are clean, concise, and as singular as possible
8. Find any AI slop, stubs, larp, and unnecessary comments and remove them. Any comments that describe in-motion work, replacements of previous work with new work, or are otherwise unhelpful should be either removed or replaced with helpful comments for a new user trying to understand the codebase -- but if you do edit, be concise

I want each to do detailed research on its task, write a critical assessment of the current code and recommendations, and then implement all high-confidence recommendations.
Replies 220 · Retweets 330 · Likes 6K · Views 648.9K
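Step 5 of the prompt (replacing weak types) is the most mechanical to illustrate. A hypothetical before/after in TypeScript, where an `any`-typed event becomes a discriminated union the compiler can check exhaustively (the `AppEvent` shape is invented for the example):

```typescript
// Before (weak): `any` hides misspelled fields and missing cases.
// function describe(event: any): string { return event.kind; }

// After (strong): a discriminated union makes every variant explicit,
// and the switch over `kind` is checked for exhaustiveness.
type AppEvent =
  | { kind: "click"; x: number; y: number }
  | { kind: "keypress"; key: string };

function describe(event: AppEvent): string {
  switch (event.kind) {
    case "click":
      return `click at (${event.x}, ${event.y})`;
    case "keypress":
      return `key "${event.key}" pressed`;
  }
}
```

With the union in place, adding a new variant to `AppEvent` turns every unhandled `switch` into a compile error, which is exactly the safety the weak type was hiding.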
Samuel Danso retweeted
Guillermo Rauch@rauchg·
Today we're open sourcing open-agents.dev, a reference platform for cloud coding agents.

You've heard that companies like Stripe (Minions), Ramp (Inspect), Spotify (Honk), Block (Goose), and others are building their own "AI software factories". Why?

1️⃣ On a technical level, off-the-shelf coding agents don't perform well with huge monorepos, and they don't have your institutional knowledge, integrations, and custom workflows.

2️⃣ On a business level, the moat of software companies will shift from 'the code they wrote' to the 'means of production' of that code. The alpha is in your factory.

Open Agents deploys to our agentic infrastructure: Fluid for running the agent's brain, Workflow for its long-running durability, Sandbox for secure code execution, AI Gateway for multi-model tokens. (Because of our focus on open SDKs and runtimes, this codebase is a gem even if you're not hosting on Vercel.)

TL;DR: if you're building an internal or user-facing agentic coding platform, deploy this: vercel.com/templates/temp…
[image]
Replies 160 · Retweets 354 · Likes 4.3K · Views 669K
Nico Albanese@nicoalbanese10·
3 months ago I started building a coding agent that runs in the cloud. It's since written every line of code I've shipped, including itself. Today, I'm open sourcing it. Introducing Open Agents.
[image]
Replies 140 · Retweets 208 · Likes 2.3K · Views 395.7K
Samuel Danso@samueldans0·
BUILD FOR AGENTS
Replies 0 · Retweets 0 · Likes 0 · Views 16
Samuel Danso retweeted
Claude@claudeai·
Introducing Claude Managed Agents: everything you need to build and deploy agents at scale. It pairs an agent harness tuned for performance with production infrastructure, so you can go from prototype to launch in days. Now in public beta on the Claude Platform.
Replies 2.1K · Retweets 6.1K · Likes 57.2K · Views 21.5M