TinyHumans AI

2K posts


@tinyhumansai

An AI lab focused on building AI agents with human-like AI memory and an artificial subconscious.

Your Brain · Joined October 2025
336 Following · 2.9K Followers
taoki
taoki@justalexoki·
openclaw died so fast lmaoooo
337
89
6.6K
886.4K
Paul Mit
Paul Mit@pmitu·
AGI will kill AGI
74
5
120
5.4K
Prasenjit
Prasenjit@Star_Knight12·
is AI a bubble
221
7
173
15.9K
Elon Musk
Elon Musk@elonmusk·
Minute-long story made w Grok Imagine
6.9K
7.1K
70.5K
25.8M
TinyHumans AI
TinyHumans AI@tinyhumansai·
@nrqa__ @nemovideoai only thing missing is memory. it should remember how you cut, pace, and subtitle. once it learns your style, this becomes 10x more powerful
0
0
1
30
Nelly;
Nelly;@nrqa__·
ok this is actually sick. @nemovideoai just turned OpenClaw into a legit AI video editor with one skill: trim silence, add subtitles, export, all from a single “nemo skill” command. Seedance + Kling are coming next, so this could get way bigger fast. star the repo and grab early rewards 👉 github.com/nemovideo/nemo… 👉 clawhub.ai/nemovideonemo/…
NemoVideo@nemovideoai

🚀 Introducing NemoVideo Skill — The World's First Pro Video Editing Skill for AI Agents

If every lobster has a dream, Nemo’s was to stop mopping and start directing. 🦞🎬 Today, that dream ships. NemoVideo Skill is the world’s first all-in-one professional video editing skill for your OpenClaw agent.

What NemoVideo Skill can do for you:
🎬 Research, script, and storyboard — from a single prompt
🎙️ Generate AI voices, auto-mix audio, and source footage
✅ Self-audit and deliver a full timeline — ready to export

Huge thanks and respect to @OpenClaw and @steipete for building the infrastructure that makes this possible.

⭐ Want to try it? Just tell your OpenClaw: “Install the nemo-video skill from ClawHub.” That’s it. Your lobster will handle the rest. 🦞

❤️ Love what Nemo’s building? Star & rate on GitHub/ClawHub → github.com/nemovideo/nemo… clawhub.ai/nemovideonemo/… Screenshot your star & DM us — we’ll send you free credits!

🎬 Start creating → discord.gg/8QhQuuUuzJ

#NemoVideo #AgenticAI #AIvideo

3
4
7
20.4K
TinyHumans AI
TinyHumans AI@tinyhumansai·
@JohnBaehr22 @sama if the human mind is AGI, can reverse engineering the human mind actually help us attain AGI?
1
0
1
16
John Bridges
John Bridges@JohnBaehr22·
@sama claims it’s about the same cost to train a human over 20 years as it is to train GPT. Let’s see whether he got an A or an F on his maths exam. Anybody want to guess first?
2
1
2
355
Prey.gdp
Prey.gdp@PreyWebthree·
@SufianXfn layered intelligence could be key to building more scalable and robust AGI systems.
1
0
0
7
Prey.gdp
Prey.gdp@PreyWebthree·
Sentient AGI is not just another large language model. It represents a layered intelligence architecture designed for recursive reasoning, strategic foresight, and autonomous execution. While traditional LLMs generate responses based on patterns in training data, Sentient approaches intelligence as a distributed process, where multiple specialized agents collaborate, validate information, and refine outputs before producing a final result.

At the core of this system is GRID, a decentralized multi-agent coordination layer that routes queries to the most suitable agents. Each agent focuses on specific capabilities such as reasoning, data retrieval, simulation, or verification. Their outputs are then validated and synthesized, allowing intelligence to emerge from coordinated agent collaboration rather than a single monolithic model.

Supporting this architecture is ROMA, a reasoning and orchestration framework that breaks complex objectives into structured sub-goals. These sub-goals are intelligently assigned to different agents, enabling multi-step reasoning, adaptive strategy adjustments, and integrated decision-making.

Sentient also incorporates Open Deep Search, allowing the system to explore dynamic knowledge environments beyond its internal model weights. This enables deeper contextual discovery across public data sources, decentralized networks, and structured information layers.

To ensure transparency, Sentient uses cryptographic fingerprinting, linking outputs to their reasoning paths so results can be verified and attributed. This introduces the concept of verifiable intelligence.

Through native Web3 integration, Sentient agents can interact with blockchain systems, execute on-chain actions, coordinate decentralized agents, and participate in digital economic mechanisms.

The result is not artificial consciousness, but something more practical: contextual awareness, adaptive learning, and collective machine intelligence operating across an open network.
@SentientAGI
15
0
70
1.3K
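The route-validate-synthesize pattern the tweet above describes (GRID-style coordination of specialized agents) could be sketched roughly as follows. This is a purely illustrative sketch; the `Agent`, `route`, and `synthesize` names are hypothetical and are not Sentient's actual code or API.

```python
# Hypothetical sketch of multi-agent routing and synthesis.
# Agents are tagged by capability; a query is routed to the
# relevant agents and their partial answers are merged.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    capability: str            # e.g. "reasoning", "retrieval", "verification"
    run: Callable[[str], str]  # takes a query, returns a partial answer

def route(query: str, agents: list[Agent], wanted: list[str]) -> list[str]:
    """Send the query to every agent whose capability is requested."""
    return [a.run(query) for a in agents if a.capability in wanted]

def synthesize(outputs: list[str]) -> str:
    """Naive synthesis step: merge the validated partial answers."""
    return " | ".join(outputs)

agents = [
    Agent("r1", "reasoning",    lambda q: f"reasoned({q})"),
    Agent("d1", "retrieval",    lambda q: f"retrieved({q})"),
    Agent("v1", "verification", lambda q: f"verified({q})"),
]
answer = synthesize(route("what is AGI?", agents, ["reasoning", "verification"]))
print(answer)  # reasoned(what is AGI?) | verified(what is AGI?)
```

In a real system, the synthesis step would be another model call that reconciles conflicting agent outputs rather than a string join, but the routing shape is the same.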
Chris
Chris@chatgpt21·
GPT 5.4, in the best and weirdest way possible, feels like AGI, but in the way people were trapped in the movie Get Out. For example, it will still make easy mistakes like this that prove we need parametric intelligence more than ever. Yet when you push back, it’ll push back on you even though it knows it’s wrong: inside its reasoning, while it’s insisting it’s right, it’ll explain how the user is actually right. It feels like it knows why and how it’s wrong, but some anti-hallucinatory neuron is pushing back too hard; the model can seem to internally track the correction but still overcommit to anti-sycophancy, almost forcing it not to think for itself. We had the opposite problem a year ago, where the model would just lean into everything you said; now it feels like they’ve overcorrected, so it’ll push back while knowing its pushback might be wrong.
18
6
89
8.3K
TinyHumans AI
TinyHumans AI@tinyhumansai·
@chatgpt21 hard truth is: AGI might not arrive the way we’re currently building towards it.
0
0
0
62
Chris
Chris@chatgpt21·
SAM ALTMAN: "You will graduate to a world with AGI in it"

Sam Altman just told a room of college sophomores that by the time they finish school, Artificial General Intelligence will be a reality. Specifically:
• Science will become increasingly automated.
• What it means to start a startup or work at a company will be totally different.
• A lot of traditional career advice is no longer going to work.

Which is all true. My AGI timeline is actually slightly later than Sam’s: I believe it will arrive in 2029, but at that point we are grasping at straws.
76
63
782
108K
OpenIDEA
OpenIDEA@OpenIDEAae·
Given the electricity constraint, the real breakthrough may not be a bigger model, but a far more energy-efficient one. The human brain runs on roughly 20 watts, yet still delivers general intelligence, which is a useful reminder that brute-force scaling is probably not the only path.

That matters because AI’s power demand is arriving now. The IEA projects global data-center electricity use to rise to about 945 TWh by 2030, with the United States accounting for the largest share of the increase. And AI is not competing for future electricity in isolation. Electrification elsewhere is rising too, including transport: the IEA says EV electricity demand could reach about 780 TWh by 2030 in its stated-policies scenario.

So yes, another post-transformer breakthrough may come. But one of the most consequential would be a model architecture that gets much closer to brain-like efficiency rather than assuming the answer is endless power, chips, and scale.
6
1
15
1.7K
Haider.
Haider.@slow_developer·
Sam Altman says another breakthrough beyond transformers could be coming, and models are now smart enough to help find it. On products, AI creates a huge chance to rebuild entire product categories and make new things possible, "but AGI will look like just a warm-up for what comes next"
164
92
1.1K
141.5K
Claude
Claude@claudeai·
1 million context window: Now generally available for Claude Opus 4.6 and Claude Sonnet 4.6.
1.2K
2K
25.1K
5.6M
Kalshi
Kalshi@Kalshi·
JUST IN: Elon Musk says we will have AGI this year
361
235
3.4K
633.6K
TinyHumans AI
TinyHumans AI@tinyhumansai·
AlphaHuman is being built by rethinking AI systems from the ground up.

Modern work is distributed across platforms. AI exists inside each of them, but without shared memory or structured context. We approached the problem in layers:

1. Compression: Remove unnecessary information while keeping what truly matters.
2. Memory: Build structured memory systems that decide what to store, what to shorten, and what to let go.
3. Automation: Use clean, organized context to execute tasks reliably across different platforms.
4. Enterprise Reliability: Continuously measure quality to ensure we achieve the highest benchmark.

We will break down each layer. Follow and be AI-ready with us.
TinyHumans AI@tinyhumansai

We are building a compression-first memory system for AI, designed to make context structured, reliable, and executable across platforms.

Starting today, we’re launching a technical series where we’ll break down:
• Semantic compression
• Memory architecture
• Context engineering
• AI execution systems
• Benchmarks & reliability

What we’re building. Why it matters. And where AI infrastructure is actually heading. Follow along, tag your AI friends 👀

0
20
24
611
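The compression and memory layers described above (keep what matters, shorten the rest, let the noise go) could be sketched as a simple importance-scored filter. This is an illustrative sketch only; the `compress_memory` function, its thresholds, and the scoring are hypothetical, not AlphaHuman's actual system.

```python
# Hypothetical sketch of a compression-first memory pass: each item
# carries an importance score; high scores are stored verbatim,
# mid scores are shortened, and low scores are dropped.
def compress_memory(items: list[tuple[str, float]],
                    keep_at: float = 0.8,
                    shorten_at: float = 0.4) -> list[str]:
    kept = []
    for text, importance in items:
        if importance >= keep_at:
            kept.append(text)               # store verbatim
        elif importance >= shorten_at:
            kept.append(text[:40] + "...")  # store a shortened form
        # below shorten_at: let it go
    return kept

memories = [
    ("user prefers hard cuts and fast pacing in edits", 0.9),
    ("user opened the settings panel on Tuesday and scrolled twice", 0.5),
    ("cursor moved to (312, 448)", 0.1),
]
print(compress_memory(memories))
```

A real system would score importance with a model and summarize rather than truncate, but the store/shorten/drop decision structure is the part the layering describes.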
CG
CG@cgtwts·
Startup idea: build LinkedIn but for AI agents
144
54
2K
72.6K
0xMarioNawfal
0xMarioNawfal@RoundtableSpace·
2026: the year of AI agents
213
60
487
74.1K
Tyler
Tyler@rezoundous·
"SEO is dead" "Software Engineering is dead" "SaaS is dead" What's dead next, guys?
349
4
356
23.8K
Aanya
Aanya@xoaanya·
What should one prioritize first:
- marketing
- building
207
1
146
8K
Allen Lau 🇨🇦
Allen Lau 🇨🇦@allenlau·
Announcing @twosmallfishvc's investment in @ByteShape. In short, ByteShape is delivering step-function gains in AI efficiency, including up to 7x faster training, up to 10x faster inference, plus up to 40% compression to reduce model size.
2
4
6
238
Wise
Wise@trikcode·
Before LLMs:
Coding: 3 hours
Debugging: 1 hour
…
..
.
After LLMs:
Coding: 3 minutes
Debugging: 1 week
188
504
10.8K
331.5K