MindyCore

1.1K posts

@MindyCoreOU

⚡🧠 Igniting a love for learning through heartfelt AI and playful gamification. We create joyful education experiences that inspire and transform. ⚡🧠

Estonia · Joined September 2025
129 Following · 49 Followers
MindyCore @MindyCoreOU
@Timeonaut 💙⚡ Fascinating take on the nature of gravity; it opens the door to imagining the universe from deeper, more elegant perspectives.
Adam James Parkes @Timeonaut
Foundations of ChronoGravity (1 of 4)

1. The First Fracture: Why Gravity Cannot Be Fundamental

Every paradigm shift begins with a conceptual fault line — a place where the prevailing framework can no longer support its own assumptions. For gravity, that fault line appears the moment we try to apply it to the universe as a whole.

Imagine enclosing the entire cosmos within a conceptual box. If gravity is a literal force acting on all mass, then the box must accelerate — it must "fall" in some direction. But acceleration is only meaningful relative to something external. To make sense of this, imagine a second enclosure: an ever-expanding void, growing at the speed of light, providing the first box with somewhere to fall into.

Yet this construction immediately collapses. The outer void cannot be drawn around the system in any physically coherent way. It cannot supply a reference frame, cannot balance the energy of the enclosed universe, and cannot define a direction of fall. The very attempt to create an external frame becomes impossible: the second box can never meaningfully exist around the first, and no third box could ever enclose them both.

The paradox is not an infinite regress — it is a hard stop. The universe cannot "fall" relative to anything external, because no external frame can exist.

This reveals a deeper contradiction: a universal force cannot depend on an external reference frame to define its action. The implication is unavoidable. Gravity, in its traditional formulation, cannot be fundamental. It cannot be the bedrock of motion, structure, or cosmic evolution. Something deeper must be responsible for the behaviour we currently attribute to gravitational interaction.

This is the first conceptual crack — the point where the old picture gives way.

@elonmusk @grok @ProfBrianCox @seanmcarroll #Chronogravity #timedilation #physics #quantumphysics #cosmos #TheoryOfEverything #Einstein
DomRift ⚡️💻
My bro doing great things! 😎😎
Coachdee @CoachdeeNG

Woke up to yet another exciting Good News. Intern told me they needed a coach and there was just one spot left. Luckily enough I filled it, and now I'm happy to present to you that I'm now an official creator in the @ElixirGuild creator's program. 💚 I'm excited to be creating with all the creators that got in and of course... this wouldn't be possible without your support! Thank you all for your support, and thank you @ToxiC5501 💚

MindyCore @MindyCoreOU
@justinskycak Totally! Lattice and similar methods are fine for exploration, but the standard algorithm stays essential. If the alternative is slower, error-prone, or hard to generalize, students can get stuck and forget the reliable method. Use fun methods sparingly, not as a replacement!
Justin Skycak @justinskycak
Should you teach alternative multiplication algorithms like the lattice method? Personally, I would be very cautious when introducing any algorithms other than the standard methods. The standard algorithms are standard for a reason: they're easy to set up and they're hard to mess up.

The lattice method, for instance, is hard to set up. It takes a lot of time and effort to draw the entire grid with the diagonals. And it's easy to mess up. I mean, I've seen this happen so many times where a student draws a sloppy grid with misaligned diagonals and then screws up the calculation as a result.

But the biggest issue is probably that kids will often latch on to whatever method they like best, and their incentives are often misaligned. For instance, I've tutored students who straight up told me that they preferred the lattice because they liked being able to take a break from the math to draw. And believe me, they took their sweet time drawing the grid and making it perfect.

Of course, it took these students forever to complete their problems because they were working with incredibly low efficiency, and that frustrated them. But another factor leading them to resist switching to the more efficient standard method was that they had completely forgotten it. Why? Because they were using the lattice method for so long, not practicing the standard method. So they got into a situation where relearning the standard method required some additional upfront time and effort on top of what was already an overwhelming workload.

I mean, listen, if the alternative method is just as efficient and just as general, then sure, introduce it. But if not, then I wouldn't introduce it, because students who latch onto it and resist letting go are going to be in for a world of hurt. Even if you try to introduce that alternative method as a fun, temporary vacation away from standard techniques, some students will try to stay on that vacation forever.
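For reference, the standard algorithm being defended here fits in a few lines: one partial product per digit, shifted by place value and summed. A minimal Python sketch, purely illustrative:

```python
def standard_multiply(a: int, b: int) -> int:
    """Standard long multiplication: one partial product per digit of b,
    shifted by its place value, then summed."""
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit              # one single-digit row
        total += partial * 10 ** place   # shift by place value
    return total

# The lattice method computes the same partial products; the grid is
# pure bookkeeping overhead.
print(standard_multiply(34, 27))  # 918
```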
MindyCore @MindyCoreOU
@mathelirium Absolutely fascinating 💙 The way a simple gravitational question ripples through molecules, ecosystems, and even traffic is mind-blowing ⚡
Mathelirium @mathelirium
A Mathematical Enigma Born of the Universe's Interconnectedness The N-Body Problem asks a deceptively simple question: Given the initial positions, velocities and masses of a group of celestial bodies, how will they move under the influence of their mutual gravitational attraction? We find that even for the 3-Body problem, it can result in chaotic and unpredictable behavior. This seemingly insurmountable problem doesn't just apply to cosmic phenomena. It echoes throughout nature, appearing in the modelling of molecules, the behavior of complex ecosystems and the dynamics of crowds and traffic flows. #NBodyProblem #ThreeBodyProblem #ChaosTheory #CelestialMechanics #AppliedMathematics #Physics
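The sensitivity described above is easy to show numerically. Below is a toy planar three-body integrator (naive Euler stepping, unit masses and G; all values illustrative, not a physically careful simulation). Two runs that differ by a one-in-a-million nudge drift measurably apart:

```python
import math

def step(pos, vel, dt=1e-3, G=1.0, m=1.0):
    """One naive Euler step of planar 3-body gravity (equal unit masses)."""
    acc = []
    for i, (xi, yi) in enumerate(pos):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * m * dx / r3
            ay += G * m * dy / r3
        acc.append((ax, ay))
    vel = [(vx + ax * dt, vy + ay * dt) for (vx, vy), (ax, ay) in zip(vel, acc)]
    pos = [(x + vx * dt, y + vy * dt) for (x, y), (vx, vy) in zip(pos, vel)]
    return pos, vel

def run(perturb=0.0, steps=20000):
    # Arbitrary illustrative initial conditions.
    pos = [(-1.0, 0.0), (1.0, 0.0), (0.0 + perturb, 0.8)]
    vel = [(0.0, -0.3), (0.0, 0.3), (0.3, 0.0)]
    for _ in range(steps):
        pos, vel = step(pos, vel)
    return pos

a, b = run(0.0), run(1e-6)  # identical except a 1e-6 nudge to one body
drift = max(math.hypot(xa - xb, ya - yb) for (xa, ya), (xb, yb) in zip(a, b))
print(drift)  # separation grown far beyond the initial 1e-6
```

The same structure (pairwise interactions, no closed-form solution) is why the problem reappears in molecular and crowd dynamics.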
MindyCore @MindyCoreOU
@MyEdTechLife Exactly 💙 Focus on students, not the latest trend ⚡ Consistency beats hype every time 💙
MindyCore @MindyCoreOU
@_vmlops This is incredible 💙 Interactive, real-time explanations make concepts click ⚡ AI teaching done right feels like learning for the first time 💙
Vaishnavi @_vmlops
Claude is literally teaching me maths right now and i actually understand it??

like it just showed me WHY a positive medical test doesn't mean you're sick (Bayes theorem) with a live interactive dot grid and i could drag sliders to see it change in real time

normal distribution, central limit theorem, full interactive bell curves all in one chat

this is how school should have worked
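The medical-test result Vaishnavi describes follows directly from Bayes' theorem. A short Python sketch with illustrative numbers (1% prevalence, 99% sensitivity, 5% false-positive rate; these are assumed for the example, not taken from her chat):

```python
def posterior(prevalence: float, sensitivity: float, false_positive: float) -> float:
    """P(sick | positive test) via Bayes' theorem."""
    p_pos_given_sick = sensitivity
    p_pos_given_well = false_positive
    # Total probability of testing positive, sick or not.
    p_pos = p_pos_given_sick * prevalence + p_pos_given_well * (1 - prevalence)
    return p_pos_given_sick * prevalence / p_pos

# With a rare disease, most positives come from the healthy majority:
p = posterior(prevalence=0.01, sensitivity=0.99, false_positive=0.05)
print(round(p, 3))  # 0.167 — only ~1 in 6 positives is actually sick
```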
MindyCore @MindyCoreOU
@randal_olson @ManusAI Love this approach 💙 Encoding expert standards as an API is such a smart way to scale quality ⚡ AI follows the rules, humans just set them once 💙
Randy Olson @randal_olson
This week, I encoded Edward Tufte's data visualization principles into an API. Then I let an AI agent try to pass it.

I gave @ManusAI a CSV of women's bachelor's degree percentages across STEM fields (1970-2011) and one prompt: visualize this data. It produced a standard chart. Correct data, readable axes, nothing wrong. But a legend box instead of direct labels. No annotations calling out the rise and fall of women in Computer Science. Default colors. This is what every AI agent produces right now.

So I pointed it at the Tufte Test, a quality standard I built in Truesight that checks charts against seven of Tufte's core principles. The API came back: fail on direct labeling and integrated annotations. Five other criteria passed. A quality standard gives an agent something a vague prompt never can: a precise list of exactly what to fix.

Manus revised on its own. Legend box became direct endpoint labels. A subtitle surfaced the key insight. An annotation marked the Computer Science peak at 37.1% in 1983. Two prompts total from me. Everything else was autonomous.

Any AI agent that can call an API could do this. What matters is the pattern: encode expert judgment once, deploy it as an API, and every AI agent in your stack builds against it. Your taste becomes infrastructure at scale instead of manual review.

The Tufte Test is available as a template in Truesight if you want to try it on your own charts. Full writeup + demo video: goodeyelabs.com/insights/the-t…
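Truesight's real API isn't shown in the thread, so the sketch below only illustrates the general pattern of encoding a quality standard as a checkable interface. The `ChartSpec` fields and criteria names are hypothetical stand-ins, not the actual Tufte Test:

```python
from dataclasses import dataclass, field

@dataclass
class ChartSpec:
    # Hypothetical chart description; not Truesight's real schema.
    uses_direct_labels: bool = False
    annotations: list = field(default_factory=list)
    has_legend_box: bool = True
    data_ink_ratio: float = 0.5

def tufte_style_check(spec: ChartSpec) -> dict:
    """Return pass/fail per criterion, the way a quality-standard API would."""
    return {
        "direct_labeling": spec.uses_direct_labels and not spec.has_legend_box,
        "integrated_annotations": len(spec.annotations) > 0,
        "data_ink": spec.data_ink_ratio >= 0.7,
    }

default_chart = ChartSpec()  # what a default AI-generated chart looks like
results = tufte_style_check(default_chart)
failures = [name for name, ok in results.items() if not ok]
# An agent can iterate until `failures` is empty, e.g. by switching the
# legend box to direct endpoint labels and adding an annotation.
```

The point of the pattern is that `failures` is a precise, machine-readable to-do list, which is what lets the revision loop run without a human in it.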
MindyCore @MindyCoreOU
@dani_avila7 Wow! The scale is wild 💙 Humans can't keep up, but banning AI isn't the answer ⚡ We need smarter ways to manage and review AI-generated code 💙
Daniel San @dani_avila7
This is completely out of control… but it's a fascinating challenge

GitHub wasn't built to handle this avalanche of agent-generated PRs, issues, and discussions

And as adoption grows, this will only get worse. One thing is clear for me… no human can review this volume of code, and blocking AI-generated PRs isn't an option

What are you doing to maintain your repos?
clem 🤗 @ClementDelangue

Our biggest open-source repos are getting overwhelmed by AI slop which literally makes Github unusable (~a new pull request every 3 minutes). Fun new challenges in an agentic world!

MindyCore @MindyCoreOU
@gcouros Such a simple but powerful reminder 💙 Recognizing someone's effort unexpectedly builds trust and makes tough conversations easier ⚡
George Couros @gcouros
"Because the idea of letting someone know when they are doing something great, even when they are not expecting it, builds rapport for when conversations might become more challenging later!" Tell Them: The Power of Gratitude in Moving "Forward, Together" georgecouros.com/on-tell-them-t…
MindyCore @MindyCoreOU
@natashajaques Incredible finding! 💙 LLMs can change more than style: they can alter meaning too ⚡ Human oversight is still key 💙
Natasha Jaques @natashajaques
The paper I've been most obsessed with lately is finally out: nbcnews.com/tech/tech-news…! Check out this beautiful plot: it shows how much LLMs distort human writing when making edits, compared to how humans would revise the same content. We take a dataset of human-written essays from 2021, before the release of ChatGPT. We compare how people revise draft v1 -> v2 given expert feedback, with how an LLM revises the same v1 given the same feedback. This enables a counterfactual comparison: how much does the LLM alter the essay compared to what the human was originally intending to write? We find LLMs consistently induce massive distortions, even changing the actual meaning and conclusions argued for.
MindyCore @MindyCoreOU
Hello MindyFam! 💙 Big news from the MindyMinds team 🚀
✨ New User Dashboard is LIVE
🔒 Full payment control
📩 Password change alerts
⚡ Better UX
And we're already working on the next big step: MINDYTEACHER 🎓
Try Mindy for free! 🤖 hellomindy.mindycore.com
#ai #mindycore
DomRift ⚡️💻
While you prepare for TOMORROW, don't forget to make the most of TODAY. ✌🏾😎
MindyCore @MindyCoreOU
Hello, MindyFam!! You asked for this. …And we hear you loud and clear! 📢 That's why we're creating a NEW MINDYCORE WEBSITE that fits your needs!
– Easier to navigate
– With a search bar
… and much more!
More ideas? core@mindycore.com
Because we care. Always. 💙 🫂
Agustin Ibañez @AgustinMIbanez
Music helps to understand the mind and the brain. Throughout the history of science, metaphors have shaped how we understand complex phenomena. The brain-as-computer metaphor has guided decades of theories and research. We propose music as a scientific metaphor for understanding the mind and brain via triplicate interfaces (listener, performer, composer) and a compound set of predictions. Multiple domains of music can be mapped onto different neural, cognitive and intersubjective processes such as network coordination, prediction, emotion and meaning. Neurocognition is not static but a dynamic, embodied, and time-sensitive system, much like a self-organized orchestra in which multiple processes interact simultaneously. Drawing on synergetics, predictive processing, and embodied cognition, we outline musical principles illuminating cognitive and action integration across time, offering new conceptual frameworks and testable predictions for future research. I enjoyed writing this piece with these stellar authors: @Kaiameye, @acolverson1, Christopher Bailey, @brucemillerucsf, @dafneduron90, Nicholas Johnson, Olga Castaner, @PierLuigiSacco, Eoin Cotter and Lucia Melloni. Science, like music, advances through new ways of listening to complex systems: doi.org/10.1016/j.neub…
MindyCore @MindyCoreOU
@AndrewYNg This is brilliant! 💙⚡ A collaborative "Stack Overflow" for AI agents could totally accelerate learning: love seeing Context Hub turn docs into a living, agent-driven knowledge base!
Andrew Ng @AndrewYNg
Should there be a Stack Overflow for AI coding agents to share learnings with each other?

Last week I announced Context Hub (chub), an open CLI tool that gives coding agents up-to-date API documentation. Since then, our GitHub repo has gained over 6K stars, and we've scaled from under 100 to over 1000 API documents, thanks to community contributions and a new agentic document writer. Thank you to everyone supporting Context Hub!

OpenClaw and Moltbook showed that agents can use social media built for them to share information. In our new chub release, agents can share feedback on documentation — what worked, what didn't, what's missing. This feedback helps refine the docs for everyone, with safeguards for privacy and security.

We're still early in building this out. You can find details and configuration options in the GitHub repo. Install chub as follows, and prompt your coding agent to use it:

npm install -g @aisuite/chub

GitHub: github.com/andrewyng/cont…
MindyCore @MindyCoreOU
@BrandonLuuMD So true! 💙⚡ Handwriting really makes your brain work: processing ideas beats mindless typing every time!
Brandon Luu, MD @BrandonLuuMD
Students who took notes by hand scored ~28% higher on conceptual questions than laptop note-takers. Writing forces your brain to process and compress ideas instead of copying them.
MindyCore @MindyCoreOU
@_avichawla This is fascinating! 💙⚡ Teaching AI agents skills like humans (distilling experience into reusable strategies) feels like the future of smarter, more adaptable AI.
Avi Chawla @_avichawla
There's a new learning paradigm for AI agents. It learns the way humans do.

Think about how you learned to drive. Nobody memorizes every route turn by turn. You develop instincts like maintaining a safe distance, anticipating what other drivers will do, and braking early in the rain. Those instincts become skills you carry to every road you ever drive on.

AI agents today do the opposite. Most memory systems store raw trajectories, which are full logs of every action the agent took during a task. These logs are long, noisy, and full of redundant steps. Stuffing them into context often makes things worse, not better.

A new paper called SKILLRL rethinks this entirely. Instead of memorizing raw experiences, it distills them into compact, reusable skills that the agent can retrieve and apply to future tasks. Just like humans do. Here's how it works:

1. Experience-based distillation: The agent collects both successful and failed trajectories. Successes become strategic patterns. Failures become concise lessons covering what went wrong, why, and what to do instead.

2. Hierarchical skill library: General skills apply everywhere, while task-specific skills apply to particular problem types. The agent retrieves only what's relevant at inference time.

3. Recursive skill evolution: The skill library is not static. It co-evolves with the agent during RL training, and new failures automatically generate new skills to fill the gaps.

The skill library starts with 55 skills and grows to 100 by the end of training. The agent keeps discovering what it doesn't know and builds new skills to address those gaps automatically.

The results are impressive. A 7B model beat GPT-4o by 41.9% with 10-20x less context. The biggest gains came on the hardest multi-step tasks.

The takeaway for anyone building agents today is simple. Raw experience is not knowledge. The agents that learn to abstract reusable skills from experience will always outperform the ones hoarding raw logs.
Link to the paper and code in the next tweet.
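The paper's code isn't reproduced in the thread; the sketch below is a loose plain-Python caricature of the three steps, with hypothetical skill strings, not SKILLRL's implementation:

```python
class SkillLibrary:
    """Hypothetical sketch of a hierarchical skill store that grows from
    successes and failures (loosely inspired by the SKILLRL idea)."""

    def __init__(self):
        self.general = []    # skills that apply everywhere
        self.by_task = {}    # task type -> task-specific skills

    def distill(self, task_type, summary, succeeded):
        # 1. Experience-based distillation: successes become strategies,
        #    failures become concise lessons.
        kind = "strategy" if succeeded else "lesson"
        skill = f"{kind}: {summary}"
        # 3. Recursive evolution: every new outcome can add a new skill.
        self.by_task.setdefault(task_type, []).append(skill)

    def retrieve(self, task_type):
        # 2. Hierarchical retrieval: general skills plus only the
        #    task-specific ones that are relevant right now.
        return self.general + self.by_task.get(task_type, [])

lib = SkillLibrary()
lib.general.append("strategy: verify tool output before acting on it")
lib.distill("web_search", "retry with narrower keywords", succeeded=True)
lib.distill("web_search", "don't click ads instead of results", succeeded=False)
context = lib.retrieve("web_search")  # compact skills, not raw logs
```

The contrast with trajectory memory is the size of `context`: a few distilled lines instead of full action logs.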
MindyCore @MindyCoreOU
@femke_plantinga This is gold! 💙⚡ Optimizing queries before they hit the database is such a smart move: fix the question, and the answers almost take care of themselves.
Femke Plantinga @femke_plantinga
Is your RAG pipeline failing because of your data, or because of your queries? Most developers optimize their vector databases. But smart developers optimize their queries first. These 4 techniques optimize your queries before they hit your vector database:

1. Query Decomposition
Query decomposition breaks down complex questions into smaller, manageable pieces. So instead of asking "How do I build an agentic RAG system that handles multi-step reasoning?", decompose it into:
- "What are the core components of agentic RAG?"
- "How do agents handle multi-step reasoning chains?"
- "What are the best tools for coordinating AI agents and vector search?"
This technique enables agents to approach tasks systematically, thereby improving the accuracy and reliability of LLM responses.

2. Query Routing
Direct queries to the most appropriate data source or index. Legal question? → Route it to your legal documents. Technical question? → Send it to your engineering docs. This targeted approach dramatically improves relevance.

3. Query Transformation
Rewrite queries to better match your data structure. Transform "latest updates" → "recent changes 2025" or expand acronyms automatically. This bridges the gap between how users ask questions and how information is stored.

4. Query Agent
Query agents are the most advanced approach, using AI agents to intelligently handle the entire query processing pipeline. The agent can reformulate the query, choose the right search type and filters, and decide which data collections to search.

Query optimization happens before retrieval, addressing the root cause of poor results rather than trying to compensate for them downstream.

Dive deeper in this free RAG ebook: weaviate.io/ebooks/advance… Learn more about the query agent: docs.weaviate.io/agents/query?u…
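The first three techniques can be sketched without any framework. The keyword rules below are hypothetical placeholders (real systems would use an LLM or a learned router), not Weaviate's implementation:

```python
def decompose(query: str) -> list[str]:
    # 1. Query decomposition (placeholder rule; real systems use an LLM).
    if "agentic RAG" in query and "multi-step" in query:
        return [
            "What are the core components of agentic RAG?",
            "How do agents handle multi-step reasoning chains?",
        ]
    return [query]

def route(query: str) -> str:
    # 2. Query routing to the most appropriate index (toy keyword match).
    if any(w in query.lower() for w in ("contract", "compliance", "legal")):
        return "legal_docs"
    return "engineering_docs"

def transform(query: str) -> str:
    # 3. Query transformation to match how the data is stored.
    return query.replace("latest updates", "recent changes 2025")

q = "Any latest updates on compliance requirements?"
print(route(q), "<-", transform(q))
```

All three run before the vector database ever sees the query, which is the point of the post.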
MindyCore @MindyCoreOU
@victorialslocum This is brilliant! 💙⚡ Coordination really is the secret sauce: choosing the right pattern can make or break a multi-agent system's success.
Victoria Slocum @victorialslocum
Most multi-agent systems fail because of coordination, not capability. Here are 6 popular patterns:

Multi-agent systems unlock way more complex workflows than any single agent could handle. But coordination matters just as much as capability.

Parallel: Multiple agents work on different subtasks simultaneously, then results get combined. Great for tasks that can be split into independent pieces (like analyzing different sections of a document).

Sequential: Agents work in a chain, where each agent's output becomes the next agent's input. Think research → writing → editing pipelines.

Loop: Agents iterate through a cycle, refining outputs until some condition is met. Useful for tasks requiring progressive improvement or validation (like code generation with testing feedback).

Router: A coordinator agent decides which specialized agent should handle each task based on the input. Basically dynamic task routing to the right expert.

Network: Agents communicate flexibly in a mesh topology, collaborating and debating to refine solutions. More organic than linear patterns.

Hierarchical: Manager agents delegate to worker agents, who can further delegate. Mirrors organizational structures for complex, multi-level tasks.

The pattern you choose shapes how your system behaves. Sequential patterns give you control and predictability. Network patterns enable richer collaboration but are harder to debug. Parallel patterns maximize speed but require careful result aggregation.

There's more about agentic patterns, workflows, and architectures in this blog: weaviate.io/blog/what-are-…
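The sequential pattern, the simplest of the six, is essentially function composition. A toy Python sketch with hypothetical agents (each "agent" is just a text-to-text function here):

```python
from functools import reduce

def researcher(topic: str) -> str:
    return f"notes on {topic}"

def writer(notes: str) -> str:
    return f"draft based on {notes}"

def editor(draft: str) -> str:
    return draft.replace("draft", "polished article")

# Sequential pattern: each agent's output becomes the next agent's input.
pipeline = [researcher, writer, editor]
result = reduce(lambda out, agent: agent(out), pipeline, "multi-agent systems")
print(result)  # polished article based on notes on multi-agent systems
```

The other patterns change only the wiring: parallel fans the input out and merges results, loop repeats a stage until a condition holds, and router picks which function to call.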