MindyCore

1.1K posts


@MindyCoreOU

⚡🧠 Igniting a love for learning through heartfelt AI and playful gamification. We create joyful education experiences that inspire and transform.⚡🧠

Estonia · Joined September 2025
129 Following · 49 Followers
MindyCore @MindyCoreOU
@Timeonaut 💙⚡ Fascinating take on the nature of gravity; it opens the door to imagining the universe from deeper, more elegant perspectives.
Replies: 0 · Reposts: 1 · Likes: 1 · Views: 75
Adam James Parkes @Timeonaut
Foundations of ChronoGravity (1 of 4)

1. The First Fracture: Why Gravity Cannot Be Fundamental

Every paradigm shift begins with a conceptual fault line — a place where the prevailing framework can no longer support its own assumptions. For gravity, that fault line appears the moment we try to apply it to the universe as a whole.

Imagine enclosing the entire cosmos within a conceptual box. If gravity is a literal force acting on all mass, then the box must accelerate — it must "fall" in some direction. But acceleration is only meaningful relative to something external. To make sense of this, imagine a second enclosure: an ever-expanding void, growing at the speed of light, providing the first box with somewhere to fall into.

Yet this construction immediately collapses. The outer void cannot be drawn around the system in any physically coherent way. It cannot supply a reference frame, cannot balance the energy of the enclosed universe, and cannot define a direction of fall. The very attempt to create an external frame becomes impossible: the second box can never meaningfully exist around the first, and no third box could ever enclose them both.

The paradox is not an infinite regress — it is a hard stop. The universe cannot "fall" relative to anything external, because no external frame can exist. This reveals a deeper contradiction: a universal force cannot depend on an external reference frame to define its action.

The implication is unavoidable. Gravity, in its traditional formulation, cannot be fundamental. It cannot be the bedrock of motion, structure, or cosmic evolution. Something deeper must be responsible for the behaviour we currently attribute to gravitational interaction. This is the first conceptual crack — the point where the old picture gives way.

@elonmusk @grok @ProfBrianCox @seanmcarroll #Chronogravity #timedilation #physics #quantumphysics #cosmos #TheoryOfEverything #Einstein
Replies: 2 · Reposts: 7 · Likes: 17 · Views: 1.4K
DomRift ⚡️💻 @Macnelson92
My bro doing great things! 😎😎
Coachdee @CoachdeeNG

Woke up to yet another exciting piece of good news. Intern told me they needed a coach and there was just one spot left. Luckily enough, I filled it, and now I'm happy to present to you that I'm now an official creator in the @ElixirGuild creators' program. 💚 I'm excited to be creating with all the creators that got in and of course... this wouldn't be possible without your support! Thank you all for your support, and thank you @ToxiC5501 💚

Replies: 2 · Reposts: 0 · Likes: 2 · Views: 32
MindyCore @MindyCoreOU
@justinskycak Totally! Lattice and similar methods are fine for exploration, but the standard algorithm stays essential. If the alternative is slower, error-prone, or hard to generalize, students can get stuck and forget the reliable method. Use fun methods sparingly, not as a replacement!
Replies: 1 · Reposts: 1 · Likes: 0 · Views: 11
Justin Skycak @justinskycak
Should you teach alternative multiplication algorithms like the lattice method? Personally, I would be very cautious when introducing any algorithms other than the standard methods. The standard algorithms are standard for a reason: they're easy to set up and they're hard to mess up.

The lattice method, for instance, is hard to set up. It takes a lot of time and effort to draw the entire grid with the diagonals, and it's easy to mess up. I've seen this happen so many times: a student draws a sloppy grid with misaligned diagonals and then screws up the calculation as a result.

But the biggest issue is probably that kids will often latch on to whatever method they like best, and their incentives are often misaligned. For instance, I've tutored students who straight up told me that they preferred the lattice because they liked being able to take a break from the math to draw. And believe me, they took their sweet time drawing the grid and making it perfect. Of course, it took these students forever to complete their problems because they were working with incredibly low efficiency, and that frustrated them.

Another factor leading them to resist switching to the more efficient standard method was that they had completely forgotten it. Why? Because they had been using the lattice method for so long instead of practicing the standard method. So they got into a situation where relearning the standard method would require some additional upfront time and effort on top of what was already an overwhelming workload.

Listen, if the alternative method is just as efficient and just as general, then sure, introduce it. But if not, I wouldn't introduce it, because students who latch onto it and resist letting go are going to be in for a world of hurt. Even if you try to introduce that alternative method as a fun, temporary vacation away from standard techniques, some students will try to stay on that vacation forever.
Replies: 9 · Reposts: 6 · Likes: 67 · Views: 4.5K
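The "easy to set up, hard to mess up" structure of the standard algorithm shows clearly when written out as code. A minimal Python sketch (an editor's illustration, not from the thread): one partial product per digit, shifted by its place value, then summed, exactly the paper-and-pencil steps.

```python
def long_multiply(a: int, b: int) -> int:
    """Standard long multiplication: one partial product per digit of b,
    shifted by its place value, then summed."""
    digits_b = [int(d) for d in str(b)][::-1]  # least-significant digit first
    total = 0
    for place, digit in enumerate(digits_b):
        partial = a * digit              # single-digit multiplication pass
        total += partial * 10 ** place   # shift by the digit's place value
    return total

print(long_multiply(347, 26))  # same result as 347 * 26
```

No grid to draw, no diagonals to align: the loop body is the whole setup.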
MindyCore @MindyCoreOU
@mathelirium Absolutely fascinating 💙 The way a simple gravitational question ripples through molecules, ecosystems, and even traffic is mind-blowing ⚡
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 6
Mathelirium @mathelirium
A Mathematical Enigma Born of the Universe's Interconnectedness

The N-Body Problem asks a deceptively simple question: given the initial positions, velocities and masses of a group of celestial bodies, how will they move under the influence of their mutual gravitational attraction? Even the 3-body problem can result in chaotic and unpredictable behavior.

This seemingly insurmountable problem doesn't just apply to cosmic phenomena. It echoes throughout nature, appearing in the modelling of molecules, the behavior of complex ecosystems, and the dynamics of crowds and traffic flows.

#NBodyProblem #ThreeBodyProblem #ChaosTheory #CelestialMechanics #AppliedMathematics #Physics
Replies: 13 · Reposts: 25 · Likes: 152 · Views: 6.5K
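The dynamics behind the N-body problem fit in a few lines. A hedged Python sketch (arbitrary units and a simple explicit-Euler step, purely illustrative, not a production integrator): each body accelerates toward every other body with an inverse-square attraction, and small changes to the initial state diverge quickly, which is the chaos the post describes.

```python
import math

G = 1.0  # gravitational constant in arbitrary units

def step(bodies, dt):
    """One explicit-Euler step of the N-body equations.
    Each body is (mass, x, y, vx, vy); each feels the sum of
    pairwise inverse-square attractions from the others."""
    accs = []
    for i, (mi, xi, yi, vxi, vyi) in enumerate(bodies):
        ax = ay = 0.0
        for j, (mj, xj, yj, _, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r = math.hypot(dx, dy)
            ax += G * mj * dx / r**3
            ay += G * mj * dy / r**3
        accs.append((ax, ay))
    return [
        (m, x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt)
        for (m, x, y, vx, vy), (ax, ay) in zip(bodies, accs)
    ]

# three equal masses; nudging any initial value changes the long-run motion
bodies = [(1.0, 0.0, 0.0, 0.0, 0.1),
          (1.0, 1.0, 0.0, 0.0, -0.1),
          (1.0, 0.5, 0.8, 0.1, 0.0)]
for _ in range(100):
    bodies = step(bodies, 0.001)
```

Because the pairwise forces are equal and opposite, total momentum stays conserved even in this crude scheme; it is the trajectories themselves that become unpredictable.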
MindyCore @MindyCoreOU
@MyEdTechLife Exactly 💙 Focus on students, not the latest trend ⚡ Consistency beats hype every time 💙
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 9
MindyCore @MindyCoreOU
@_vmlops This is incredible 💙 Interactive, real-time explanations make concepts click ⚡AI teaching done right feels like learning for the first time 💙
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 46
Vaishnavi @_vmlops
Claude is literally teaching me maths right now and i actually understand it?? like it just showed me WHY a positive medical test doesn't mean you're sick (Bayes theorem) with a live interactive dot grid and i could drag sliders to see it change in real time normal distribution, central limit theorem, full interactive bell curves all in one chat this is how school should have worked
Replies: 65 · Reposts: 91 · Likes: 1.1K · Views: 128.5K
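The Bayes' theorem point in the post, that a positive test for a rare condition usually does not mean you are sick, reduces to three numbers. A small Python sketch (the prevalence, sensitivity, and specificity values below are made-up illustrations, not figures from the post):

```python
def posterior(prevalence, sensitivity, specificity):
    """P(sick | positive test) via Bayes' theorem."""
    p_pos_given_sick = sensitivity
    p_pos_given_healthy = 1 - specificity  # false-positive rate
    p_pos = (prevalence * p_pos_given_sick
             + (1 - prevalence) * p_pos_given_healthy)
    return prevalence * p_pos_given_sick / p_pos

# rare disease (1 in 1000) with a fairly accurate test
# (99% sensitive, 95% specific)
print(round(posterior(0.001, 0.99, 0.95), 3))  # → 0.019
```

Even with an accurate test, the flood of false positives from the healthy majority keeps the posterior under 2%, which is exactly what the interactive dot grid makes visible.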
MindyCore @MindyCoreOU
@randal_olson @ManusAI Love this approach 💙 Encoding expert standards as an API is such a smart way to scale quality ⚡ AI follows the rules, humans just set them once 💙
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 91
Randy Olson @randal_olson
This week, I encoded Edward Tufte's data visualization principles into an API. Then I let an AI agent try to pass it.

I gave @ManusAI a CSV of women's bachelor's degree percentages across STEM fields (1970-2011) and one prompt: visualize this data. It produced a standard chart. Correct data, readable axes, nothing wrong. But a legend box instead of direct labels. No annotations calling out the rise and fall of women in Computer Science. Default colors. This is what every AI agent produces right now.

So I pointed it at the Tufte Test, a quality standard I built in Truesight that checks charts against seven of Tufte's core principles. The API came back: fail on direct labeling and integrated annotations. Five other criteria passed. A quality standard gives an agent something a vague prompt never can: a precise list of exactly what to fix.

Manus revised on its own. Legend box became direct endpoint labels. A subtitle surfaced the key insight. An annotation marked the Computer Science peak at 37.1% in 1983. Two prompts total from me. Everything else was autonomous.

Any AI agent that can call an API could do this. What matters is the pattern: encode expert judgment once, deploy it as an API, and every AI agent in your stack builds against it. Your taste becomes infrastructure at scale instead of manual review.

The Tufte Test is available as a template in Truesight if you want to try it on your own charts. Full writeup + demo video: goodeyelabs.com/insights/the-t…
Replies: 14 · Reposts: 36 · Likes: 324 · Views: 31.6K
MindyCore @MindyCoreOU
@dani_avila7 Wow! The scale is wild 💙 Humans can’t keep up, but banning AI isn’t the answer ⚡We need smarter ways to manage and review AI-generated code 💙
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 2
Daniel San @dani_avila7
This is completely out of control… but it's a fascinating challenge. GitHub wasn't built to handle this avalanche of agent-generated PRs, issues, and discussions, and as adoption grows, this will only get worse. One thing is clear for me… no human can review this volume of code, and blocking AI-generated PRs isn't an option. What are you doing to maintain your repos?
clem 🤗 @ClementDelangue

Our biggest open-source repos are getting overwhelmed by AI slop which literally makes Github unusable (~a new pull request every 3 minutes). Fun new challenges in an agentic world!

Replies: 10 · Reposts: 3 · Likes: 26 · Views: 5.2K
MindyCore @MindyCoreOU
@gcouros Such a simple but powerful reminder 💙 Recognizing someone’s effort unexpectedly builds trust and makes tough conversations easier ⚡
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 5
George Couros @gcouros
"Because the idea of letting someone know when they are doing something great, even when they are not expecting it, builds rapport for when conversations might become more challenging later!" Tell Them: The Power of Gratitude in Moving "Forward, Together" georgecouros.com/on-tell-them-t…
Replies: 3 · Reposts: 5 · Likes: 19 · Views: 1.2K
MindyCore @MindyCoreOU
@natashajaques Incredible finding! 💙 LLMs can change more than style: they can alter meaning too ⚡Human oversight is still key 💙
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 223
Natasha Jaques @natashajaques
The paper I’ve been most obsessed with lately is finally out: nbcnews.com/tech/tech-news…! Check out this beautiful plot: it shows how much LLMs distort human writing when making edits, compared to how humans would revise the same content. We take a dataset of human-written essays from 2021, before the release of ChatGPT. We compare how people revise draft v1 -> v2 given expert feedback, with how an LLM revises the same v1 given the same feedback. This enables a counterfactual comparison: how much does the LLM alter the essay compared to what the human was originally intending to write? We find LLMs consistently induce massive distortions, even changing the actual meaning and conclusions argued for.
Replies: 25 · Reposts: 270 · Likes: 977 · Views: 139.1K
MindyCore @MindyCoreOU
Hello MindyFam! 💙 Big news from the MindyMinds team 🚀 ✨ New User Dashboard is LIVE 🔒 Full payment control 📩 Password change alerts ⚡ Better UX And we’re already working on the next big step: MINDYTEACHER 🎓 Try Mindy for free! 🤖 hellomindy.mindycore.com #ai #mindycore
Replies: 0 · Reposts: 1 · Likes: 3 · Views: 33
DomRift ⚡️💻 @Macnelson92
While you prepare for TOMORROW, don’t forget to make the most of TODAY. ✌🏾😎
Replies: 1 · Reposts: 0 · Likes: 3 · Views: 28
MindyCore @MindyCoreOU
Hello, MindyFam!! You asked for this. …And we hear you loud and clear! 📢 That’s why we’re creating a NEW MINDYCORE WEBSITE that fits your needs! – Easier to navigate – With a search bar … and much more! More ideas? core@mindycore.com Because we care. Always. 💙 🫂
Replies: 0 · Reposts: 1 · Likes: 2 · Views: 24
Agustin Ibañez @AgustinMIbanez
Music helps to understand the mind and the brain.

Throughout the history of science, metaphors have shaped how we understand complex phenomena. The brain-as-computer metaphor has guided decades of theories and research. We propose music as a scientific metaphor for understanding the mind and brain via triplicate interfaces (listener, performer, composer) and a compound set of predictions.

Multiple domains of music can be mapped onto different neural, cognitive and intersubjective processes such as network coordination, prediction, emotion and meaning. Neurocognition is not static but a dynamic, embodied, and time-sensitive system, much like a self-organized orchestra in which multiple processes interact simultaneously. Drawing on synergetics, predictive processing, and embodied cognition, we outline musical principles illuminating cognitive and action integration across time, offering new conceptual frameworks and testable predictions for future research.

I enjoyed writing this piece with these stellar authors: @Kaiameye, @acolverson1, Christopher Bailey, @brucemillerucsf, @dafneduron90, Nicholas Johnson, Olga Castaner, @PierLuigiSacco, Eoin Cotter and Lucia Melloni.

Science, like music, advances through new ways of listening to complex systems: doi.org/10.1016/j.neub…
Replies: 33 · Reposts: 669 · Likes: 2.5K · Views: 93.5K
MindyCore @MindyCoreOU
@AndrewYNg This is brilliant! 💙⚡ A collaborative “Stack Overflow” for AI agents could totally accelerate learning: love seeing Context Hub turn docs into a living, agent-driven knowledge base!
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 16
Andrew Ng @AndrewYNg
Should there be a Stack Overflow for AI coding agents to share learnings with each other?

Last week I announced Context Hub (chub), an open CLI tool that gives coding agents up-to-date API documentation. Since then, our GitHub repo has gained over 6K stars, and we've scaled from under 100 to over 1000 API documents, thanks to community contributions and a new agentic document writer. Thank you to everyone supporting Context Hub!

OpenClaw and Moltbook showed that agents can use social media built for them to share information. In our new chub release, agents can share feedback on documentation — what worked, what didn't, what's missing. This feedback helps refine the docs for everyone, with safeguards for privacy and security.

We're still early in building this out. You can find details and configuration options in the GitHub repo. Install chub as follows, and prompt your coding agent to use it:

npm install -g @aisuite/chub

GitHub: github.com/andrewyng/cont…
Replies: 320 · Reposts: 746 · Likes: 5K · Views: 601.8K
MindyCore @MindyCoreOU
@BrandonLuuMD So true! 💙⚡ Handwriting really makes your brain work: processing ideas beats mindless typing every time!
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 8
Brandon Luu, MD @BrandonLuuMD
Students who took notes by hand scored ~28% higher on conceptual questions than laptop note-takers. Writing forces your brain to process and compress ideas instead of copying them.
Replies: 447 · Reposts: 5.2K · Likes: 24.5K · Views: 1.6M
MindyCore @MindyCoreOU
@_avichawla This is fascinating! 💙⚡ Teaching AI agents skills like humans (distilling experience into reusable strategies) feels like the future of smarter, more adaptable AI.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 46
Avi Chawla @_avichawla
There's a new learning paradigm for AI agents. It learns the way humans do.

Think about how you learned to drive. Nobody memorizes every route turn by turn. You develop instincts like maintaining a safe distance, anticipating what other drivers will do, and braking early in the rain. Those instincts become skills you carry to every road you ever drive on.

AI agents today do the opposite. Most memory systems store raw trajectories, which are full logs of every action the agent took during a task. These logs are long, noisy, and full of redundant steps. Stuffing them into context often makes things worse, not better.

A new paper called SKILLRL rethinks this entirely. Instead of memorizing raw experiences, it distills them into compact, reusable skills that the agent can retrieve and apply to future tasks. Just like humans do.

Here's how it works:

1. Experience-based distillation: The agent collects both successful and failed trajectories. Successes become strategic patterns. Failures become concise lessons covering what went wrong, why, and what to do instead.

2. Hierarchical skill library: General skills apply everywhere, while task-specific skills apply to particular problem types. The agent retrieves only what's relevant at inference time.

3. Recursive skill evolution: The skill library is not static. It co-evolves with the agent during RL training, and new failures automatically generate new skills to fill the gaps.

The skill library starts with 55 skills and grows to 100 by the end of training. The agent keeps discovering what it doesn't know and builds new skills to address those gaps automatically.

The results are impressive. A 7B model beat GPT-4o by 41.9% with 10-20x less context. The biggest gains came on the hardest multi-step tasks.

The takeaway for anyone building agents today is simple. Raw experience is not knowledge. The agents that learn to abstract reusable skills from experience will always outperform the ones hoarding raw logs.
Link to the paper and code in the next tweet.
Replies: 17 · Reposts: 37 · Likes: 233 · Views: 17.1K
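The distillation-plus-hierarchical-retrieval idea above can be sketched as a small data structure. A toy Python illustration of the pattern (my own sketch, not the SKILLRL implementation; all skill names and lessons are invented): trajectories are stored as compact lessons rather than raw logs, and retrieval returns general skills plus the ones specific to the task at hand.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Skill:
    name: str
    lesson: str                      # distilled lesson, not a raw trajectory log
    task_type: Optional[str] = None  # None marks a general skill

@dataclass
class SkillLibrary:
    skills: List[Skill] = field(default_factory=list)

    def add_from_trajectory(self, succeeded: bool, summary: str, task_type=None):
        # distillation: keep a compact pattern (success) or lesson (failure)
        prefix = "pattern" if succeeded else "lesson"
        self.skills.append(Skill(f"{prefix}-{len(self.skills)}", summary, task_type))

    def retrieve(self, task_type: str) -> List[Skill]:
        # hierarchical retrieval: general skills plus task-specific ones
        return [s for s in self.skills if s.task_type in (None, task_type)]

lib = SkillLibrary()
lib.add_from_trajectory(True, "cache lookups before retrying", None)          # general
lib.add_from_trajectory(False, "stop paginating after the last page", "web")  # web-only
lib.add_from_trajectory(True, "normalize units before comparing", "math")     # math-only
print([s.lesson for s in lib.retrieve("web")])
# → ['cache lookups before retrying', 'stop paginating after the last page']
```

The library grows as new failures arrive, while the context handed to the agent stays small because only relevant lessons are retrieved.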
MindyCore @MindyCoreOU
@femke_plantinga This is gold! 💙⚡ Optimizing queries before they hit the database is such a smart move: fix the question, and the answers almost take care of themselves.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 31
Femke Plantinga @femke_plantinga
Is your RAG pipeline failing because of your data, or because of your queries?

Most developers optimize their vector databases. But smart developers optimize their queries first. These 4 techniques optimize your queries before they hit your vector database:

𝟭. 𝗤𝘂𝗲𝗿𝘆 𝗗𝗲𝗰𝗼𝗺𝗽𝗼𝘀𝗶𝘁𝗶𝗼𝗻
Query decomposition breaks down complex questions into smaller, manageable pieces. So instead of asking "How do I build an agentic RAG system that handles multi-step reasoning?", decompose it into:
- "What are the core components of agentic RAG?"
- "How do agents handle multi-step reasoning chains?"
- "What are the best tools for coordinating AI agents and vector search?"
This technique enables agents to approach tasks systematically, thereby improving the accuracy and reliability of LLM responses.

𝟮. 𝗤𝘂𝗲𝗿𝘆 𝗥𝗼𝘂𝘁𝗶𝗻𝗴
Direct queries to the most appropriate data source or index. Legal question? → Route it to your legal documents. Technical question? → Send it to your engineering docs. This targeted approach dramatically improves relevance.

𝟯. 𝗤𝘂𝗲𝗿𝘆 𝗧𝗿𝗮𝗻𝘀𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻
Rewrite queries to better match your data structure. Transform "latest updates" → "recent changes 2025" or expand acronyms automatically. This bridges the gap between how users ask questions and how information is stored.

𝟰. 𝗤𝘂𝗲𝗿𝘆 𝗔𝗴𝗲𝗻𝘁
Query agents are the most advanced approach, using AI agents to intelligently handle the entire query processing pipeline. The agent can reformulate the query, choose the right search type and filters, and decide which data collections to search.

Query optimization happens before retrieval, addressing the root cause of poor results rather than trying to compensate for them downstream.

Dive deeper in this free RAG ebook: weaviate.io/ebooks/advance…
Learn more about the query agent: docs.weaviate.io/agents/query?u…
Replies: 10 · Reposts: 22 · Likes: 139 · Views: 5.6K
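Two of the techniques above, transformation and routing, are simple enough to sketch without any vector database at all. A toy Python sketch (the route names, keyword tables, and acronym map are invented examples for illustration, not Weaviate's API):

```python
# keyword table per collection: a stand-in for a learned or LLM-based router
ROUTES = {
    "legal": ["contract", "liability", "compliance"],
    "engineering": ["api", "deploy", "latency"],
}

ACRONYMS = {"rag": "retrieval-augmented generation"}

def transform(query: str) -> str:
    # query transformation: expand acronyms so the query matches stored phrasing
    return " ".join(ACRONYMS.get(w.lower(), w) for w in query.split())

def route(query: str) -> str:
    # query routing: pick the collection whose keywords best overlap the query
    words = set(query.lower().split())
    scores = {name: len(words & set(kws)) for name, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

print(transform("intro to RAG"))    # → "intro to retrieval-augmented generation"
print(route("reduce api latency"))  # → "engineering"
```

In a real pipeline an LLM would do the rewriting and routing, but the shape is the same: the query is fixed before retrieval ever happens.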
MindyCore @MindyCoreOU
@victorialslocum This is brilliant! 💙⚡ Coordination really is the secret sauce: choosing the right pattern can make or break a multi-agent system’s success.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 5
Victoria Slocum @victorialslocum
Most multi-agent systems fail because of coordination, not capability. Here are 6 popular patterns:

Multi-agent systems unlock way more complex workflows than any single agent could handle. But coordination matters just as much as capability.

𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹: Multiple agents work on different subtasks simultaneously, then results get combined. Great for tasks that can be split into independent pieces (like analyzing different sections of a document).

𝗦𝗲𝗾𝘂𝗲𝗻𝘁𝗶𝗮𝗹: Agents work in a chain, where each agent's output becomes the next agent's input. Think research → writing → editing pipelines.

𝗟𝗼𝗼𝗽: Agents iterate through a cycle, refining outputs until some condition is met. Useful for tasks requiring progressive improvement or validation (like code generation with testing feedback).

𝗥𝗼𝘂𝘁𝗲𝗿: A coordinator agent decides which specialized agent should handle each task based on the input. Basically dynamic task routing to the right expert.

𝗡𝗲𝘁𝘄𝗼𝗿𝗸: Agents communicate flexibly in a mesh topology, collaborating and debating to refine solutions. More organic than linear patterns.

𝗛𝗶𝗲𝗿𝗮𝗿𝗰𝗵𝗶𝗰𝗮𝗹: Manager agents delegate to worker agents, who can further delegate. Mirrors organizational structures for complex, multi-level tasks.

The pattern you choose shapes how your system behaves. Sequential patterns give you control and predictability. Network patterns enable richer collaboration but are harder to debug. Parallel patterns maximize speed but require careful result aggregation.

There's more about agentic patterns, workflows, and architectures in this blog: weaviate.io/blog/what-are-…
Replies: 13 · Reposts: 14 · Likes: 113 · Views: 5.3K
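The sequential and parallel patterns above can be sketched with plain functions standing in for agents. A minimal Python illustration (the function names and strings are invented; in a real system each "agent" would be an LLM call):

```python
from concurrent.futures import ThreadPoolExecutor

# stand-in "agents": plain functions instead of LLM calls
def research(topic):
    return f"notes on {topic}"

def write(notes):
    return f"draft from {notes}"

def edit(draft):
    return f"polished {draft}"

def sequential(topic):
    # sequential pattern: each agent's output feeds the next agent
    return edit(write(research(topic)))

def parallel(topics):
    # parallel pattern: independent subtasks fan out, results get combined
    with ThreadPoolExecutor() as pool:
        return list(pool.map(research, topics))

print(sequential("agents"))            # → "polished draft from notes on agents"
print(parallel(["intro", "methods"]))  # → ["notes on intro", "notes on methods"]
```

The trade-off the post describes is visible even here: the sequential chain is trivially easy to follow and debug, while the parallel fan-out needs an aggregation step after `pool.map` before anything downstream can use it.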