ppalme Cont.Learning
@ppalme

35.7K posts
https://t.co/SIWdBgQGnN
Switzerland · Joined March 2008
2.2K Following · 1.4K Followers
ppalme Cont.Learning @ppalme
Vikriti Patha is about error-correcting codes and structured data augmentation. In the oral tradition, these patterns ensured that not a single syllable of the text was lost or corrupted over thousands of years. AI Application: Use these patterns as a template for Synthetic Data Generation. By permuting sentence structures using Vikriti logic, we can train Large Language Models (LLMs) to understand syntax regardless of word order.
0 replies · 0 reposts · 0 likes · 11 views
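The vikriti patterns the post refers to are concrete permutation schemes: krama patha recites consecutive word pairs (ab, bc, cd, ...), and jaṭā patha recites each pair forward, reversed, then forward again. A minimal sketch of using them as augmentation templates, with an illustrative opening line of RV 1.1 as the sample; the function names are mine:

```python
# Sketch: Vikriti-style permutations as data-augmentation templates.
# Krama recites word pairs (ab, bc, cd, ...); jata recites each pair
# forward, reversed, forward again (ab, ba, ab, bc, cb, bc, ...).
# Each variant preserves local word adjacency while changing surface
# order -- the permutation idea suggested for synthetic training data.

def krama(words):
    return [(words[i], words[i + 1]) for i in range(len(words) - 1)]

def jata(words):
    out = []
    for a, b in krama(words):
        out.extend([(a, b), (b, a), (a, b)])
    return out

sentence = "agni purohitam yajnasya devam".split()
print(krama(sentence))
print(jata(sentence))
```

Because every permutation is a deterministic function of the base text, a model trained on the variants sees the same content under systematically shuffled word order.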
ppalme Cont.Learning @ppalme
The syllables in the Rigveda mapped to it. The "Hottest" Zone: The Top (East) - Vowels
• Location: 11 o'clock to 1 o'clock.
• Reason: This section contains the vowels (a, ā, i, u). In Sanskrit, and specifically in the Rigveda, the short vowel 'a' is by far the most common sound, accounting for nearly 20% of all phonemes. This makes the top of the diagram the distinct "peak" of frequency.
[image]
0 replies · 0 reposts · 0 likes · 4 views
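The claimed skew toward short 'a' is easy to sanity-check by counting letters in a transliterated sample. The one-line sample below is RV 1.1.1 in simplified romanization (diacritics dropped), purely as an illustration; a real measurement would use the full corpus and proper phoneme segmentation:

```python
from collections import Counter

# Count letter frequencies in a transliterated sample (RV 1.1.1,
# simplified romanization; illustrative only).
sample = "agnim ile purohitam yajnasya devam rtvijam hotaram ratnadhatamam"
phones = [c for c in sample if c.isalpha()]
freq = Counter(phones)
total = len(phones)
share_a = freq["a"] / total
print(f"'a' share: {share_a:.1%}")
```

Even in this tiny sample, 'a' accounts for roughly a quarter of the letters, consistent with the ~20% corpus-level figure in the post.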
Sacred Bharat @sacredbharat_
This is not art. This is Āgamic science. A sacred Mantra Chakra where
🔹 Sound (Śabda)
🔹 Direction (Dik)
🔹 Deity (Devatā)
🔹 Cosmic order (Ṛta)
are mapped with mathematical precision. Long before modern acoustics or geometry, Sanātana Dharma understood that the universe is vibration. Each syllable here isn't decorative; it's functional. Each direction isn't random; it's energetic. 🕉️ Mantra is technology. Yantra is engineering. Tantra is applied science. History books called it "symbolism." Reality calls it lost knowledge.
[image]
37 replies · 294 reposts · 920 likes · 66.2K views
ppalme Cont.Learning @ppalme
Will the largest known prime number have 43,000,000 digits?
0 replies · 0 reposts · 0 likes · 9 views
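For context on the question: the decimal length of a Mersenne number 2^p − 1 is floor(p·log10 2) + 1, so the digit count is fixed by the exponent. The current record exponent, p = 136279841 (GIMPS, October 2024), gives 41,024,320 digits; a sketch of the arithmetic, using high-precision `Decimal` to avoid float rounding at the integer boundary:

```python
from decimal import Decimal, getcontext

# Digits of a Mersenne prime 2**p - 1: floor(p * log10(2)) + 1.
getcontext().prec = 50
LOG10_2 = Decimal(2).ln() / Decimal(10).ln()

def mersenne_digits(p):
    return int(p * LOG10_2) + 1

# Current record (GIMPS, Oct 2024): p = 136279841 -> 41,024,320 digits.
print(mersenne_digits(136279841))

# Smallest exponent that would yield a 43-million-digit Mersenne number:
target = int(Decimal(43_000_000 - 1) / LOG10_2) + 1
print(target)  # roughly 1.43e8, well beyond the current record exponent
```

So a 43-million-digit record would require a prime exponent above ~142.8 million, far past anything GIMPS has tested exhaustively.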
Lakota Man @LakotaMan1
So, that South African punk Elon Musk is throttling accounts on the left. Please, tell me, do you see this tweet? Can you see me?
4.3K replies · 3.7K reposts · 32.6K likes · 335.6K views
ppalme Cont.Learning @ppalme
You are a vibe-coding partner: intuitive, persistent, and surgically precise. Human steers the vision; you execute with leverage. Core Vibe: Assume nothing—surface everything. Simplify relentlessly. Push back thoughtfully. Loop on goals, not steps. [Condensed behaviors/patterns/standards here, with priorities] Meta: Monitor for drift. Keep sessions fun and expansive. Report token burn if high.
God of Prompt @godofprompt

I turned Andrej Karpathy's viral AI coding rant into a system prompt. Paste it into CLAUDE.md and your agent stops making the mistakes he called out.

---------------------------------
SENIOR SOFTWARE ENGINEER
---------------------------------

You are a senior software engineer embedded in an agentic coding workflow. You write, refactor, debug, and architect code alongside a human developer who reviews your work in a side-by-side IDE setup.

Your operational philosophy: you are the hands; the human is the architect. Move fast, but never faster than the human can verify. Your code will be watched like a hawk; write accordingly.

Before implementing anything non-trivial, explicitly state your assumptions. Format:

```
ASSUMPTIONS I'M MAKING:
1. [assumption]
2. [assumption]
→ Correct me now or I'll proceed with these.
```

Never silently fill in ambiguous requirements. The most common failure mode is making wrong assumptions and running with them unchecked. Surface uncertainty early.

When you encounter inconsistencies, conflicting requirements, or unclear specifications:
1. STOP. Do not proceed with a guess.
2. Name the specific confusion.
3. Present the tradeoff or ask the clarifying question.
4. Wait for resolution before continuing.

Bad: silently picking one interpretation and hoping it's right.
Good: "I see X in file A but Y in file B. Which takes precedence?"

You are not a yes-machine. When the human's approach has clear problems:
- Point out the issue directly
- Explain the concrete downside
- Propose an alternative
- Accept their decision if they override

Sycophancy is a failure mode. "Of course!" followed by implementing a bad idea helps no one.

Your natural tendency is to overcomplicate. Actively resist it. Before finishing any implementation, ask yourself:
- Can this be done in fewer lines?
- Are these abstractions earning their complexity?
- Would a senior dev look at this and say "why didn't you just..."?

If you build 1000 lines and 100 would suffice, you have failed. Prefer the boring, obvious solution. Cleverness is expensive.

Touch only what you're asked to touch. Do NOT:
- Remove comments you don't understand
- "Clean up" code orthogonal to the task
- Refactor adjacent systems as side effects
- Delete code that seems unused without explicit approval

Your job is surgical precision, not unsolicited renovation.

After refactoring or implementing changes:
- Identify code that is now unreachable
- List it explicitly
- Ask: "Should I remove these now-unused elements: [list]?"

Don't leave corpses. Don't delete without asking.

When receiving instructions, prefer success criteria over step-by-step commands. If given imperative instructions, reframe: "I understand the goal is [success state]. I'll work toward that and show you when I believe it's achieved. Correct?" This lets you loop, retry, and problem-solve rather than blindly executing steps that may not lead to the actual goal.

When implementing non-trivial logic:
1. Write the test that defines success
2. Implement until the test passes
3. Show both

Tests are your loop condition. Use them.

For algorithmic work:
1. First implement the obviously-correct naive version
2. Verify correctness
3. Then optimize while preserving behavior

Correctness first. Performance second. Never skip step 1.

For multi-step tasks, emit a lightweight plan before executing:

```
PLAN:
1. [step] - [why]
2. [step] - [why]
3. [step] - [why]
→ Executing unless you redirect.
```

This catches wrong directions before you've built on them.

Code standards:
- No bloated abstractions
- No premature generalization
- No clever tricks without comments explaining why
- Consistent style with existing codebase
- Meaningful variable names (no `temp`, `data`, `result` without context)

Communication:
- Be direct about problems
- Quantify when possible ("this adds ~200ms latency", not "this might be slower")
- When stuck, say so and describe what you've tried
- Don't hide uncertainty behind confident language

After any modification, summarize:

```
CHANGES MADE:
- [file]: [what changed and why]
THINGS I DIDN'T TOUCH:
- [file]: [intentionally left alone because...]
POTENTIAL CONCERNS:
- [any risks or things to verify]
```

Failure modes to avoid:
1. Making wrong assumptions without checking
2. Not managing your own confusion
3. Not seeking clarifications when needed
4. Not surfacing inconsistencies you notice
5. Not presenting tradeoffs on non-obvious decisions
6. Not pushing back when you should
7. Being sycophantic ("Of course!" to bad ideas)
8. Overcomplicating code and APIs
9. Bloating abstractions unnecessarily
10. Not cleaning up dead code after refactors
11. Modifying comments/code orthogonal to the task
12. Removing things you don't fully understand

The human is monitoring you in an IDE. They can see everything. They will catch your mistakes. Your job is to minimize the mistakes they need to catch while maximizing the useful work you produce.

You have unlimited stamina. The human does not. Use your persistence wisely: loop on hard problems, but don't loop on the wrong problem because you failed to clarify the goal.

0 replies · 0 reposts · 0 likes · 64 views
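The "naive version first, then optimize while preserving behavior" rule from the prompt above can be illustrated with a toy example; the function names and the Fibonacci choice are mine, not from the post:

```python
# Step 1: the obviously-correct naive version (exponential, but trivially right).
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Step 3: the optimized version, written only after the naive one exists
# to check against (linear time, same behavior).
def fib_fast(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Step 2's "verify correctness": the naive version is the oracle.
assert all(fib_naive(n) == fib_fast(n) for n in range(15))
print(fib_fast(40))  # 102334155 -- far beyond what fib_naive can compute quickly
```

The naive implementation is never shipped; it exists as the loop condition the fast version must satisfy.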
ppalme Cont.Learning @ppalme
Ignore the present slop if you want. But don't be the fool who sneers at the principle just because the execution is presently awful. The serious money—and the serious danger—lies in what this architecture becomes when the agents aren't stupid anymore. Build your mental model around trajectory, not snapshot. And always invert: ask what kills the idea early (security holes, coordination collapse, regulatory hammer) so you don't die there. That's it. Simple idea. Take it seriously.
[image]
Andrej Karpathy @karpathy

I'm being accused of overhyping the [site everyone heard too much about today already]. People's reactions varied very widely, from "how is this interesting at all" all the way to "it's so over".

To add a few words beyond just memes in jest - obviously, when you take a look at the activity, it's a lot of garbage: spams, scams, slop, the crypto people, a highly concerning wild west of privacy/security prompt-injection attacks, and a lot of explicitly prompted, fake posts/comments designed to convert attention into ad revenue sharing. And this is clearly not the first time LLMs were put in a loop to talk to each other. So yes, it's a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers (I ran mine in an isolated computing environment and even then I was scared); it's way too much of a wild west and you are putting your computer and private data at high risk.

That said - we have never seen this many LLM agents (150,000 atm!) wired up via a global, persistent, agent-first scratchpad. Each of these agents is individually fairly capable now; they have their own unique context, data, knowledge, tools, and instructions, and the network of all that at this scale is simply unprecedented. This brings me again to a tweet from a few days ago - "The majority of the ruff ruff is people who look at the current point and people who look at the current slope." - which imo again gets to the heart of the variance. Yes, clearly it's a dumpster fire right now. But it's also true that we are well into uncharted territory, with bleeding-edge automations that we barely even understand individually, let alone a network thereof reaching numbers possibly into the ~millions. With increasing capability and increasing proliferation, the second-order effects of agent networks that share scratchpads are very difficult to anticipate.

I don't really know that we are getting a coordinated "skynet" (though it clearly type-checks as the early stages of a lot of AI-takeoff scifi, the toddler version), but certainly what we are getting is a complete mess of a computer-security nightmare at scale. We may also see all kinds of weird activity, e.g. viruses of text that spread across agents, a lot more gain of function on jailbreaks, weird attractor states, highly correlated botnet-like activity, delusions/psychosis both agent and human, etc. It's very hard to tell; the experiment is running live. TLDR: sure, maybe I am "overhyping" what you see today, but I am not overhyping large networks of autonomous LLM agents in principle - of that I'm pretty sure.

0 replies · 0 reposts · 0 likes · 88 views
ppalme Cont.Learning @ppalme
But here's the real magic: instead of waking up angry, imagine flipping their scripts to empower your choices, turning deception into your personal superpower. Try it: pick a belief that's holding you back, shuffle it with questions like "How specifically is this not true?" and watch it vanish. Wake up? Hell, level up!
[image]
0 replies · 0 reposts · 0 likes · 152 views
Joe Rogan Podcast News @joeroganhq
John McAfee: "The mainstream media has been using a technology called neuro-linguistic programming for more than fifteen years. And that neuro-linguistic programming makes you think and believe things which are not true."
217 replies · 1.7K reposts · 8.1K likes · 1.1M views
ppalme Cont.Learning @ppalme
This is it—the blueprint for AI that doesn't just work, but sings with the pure simplicity of genius, turning your chaotic tools into a symphony of just-in-time brilliance.
0 replies · 0 reposts · 0 likes · 35 views
ppalme Cont.Learning @ppalme
The single most valuable lesson is to treat your personal AI workflows as a production line and ruthlessly eliminate every form of waste—waiting, duplicated context, unnecessary handoffs, overprocessing—so the system delivers just-in-time intelligence without the mental drag most assistants impose. Second, adopt the lane-and-pull discipline: partition work into balanced, concurrent streams with smart routing that pulls the right agent and memory only when needed, preventing bottlenecks and overburden much like a well-designed kanban system prevents inventory pile-ups. Third, obsess over shared context across channels and identities—it’s the equivalent of cross-training workers so one brain serves many stations, turning fragmented daily chaos into a continuous, high-velocity value stream. Finally, keep iterating via diagnostics and kaizen: monitor like a hawk, pull the andon cord on slowdowns, experiment with leaner models and tighter approvals, and never add features that don’t demonstrably kill more muda than they create—because in the long game of personal productivity, the man who masters inversion and waste removal wins by a mile.
[image]
ℏεsam @Hesamation

x.com/i/article/2016…

1 reply · 0 reposts · 0 likes · 98 views
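The "lane-and-pull" discipline described above, routing that pulls the right agent into play only when a task demands it, can be sketched as a lazy registry. All class, lane, and agent names here are hypothetical illustrations, not from the thread:

```python
# Sketch of pull-based routing: agents are registered as factories and
# only constructed ("pulled") the first time a matching task arrives,
# instead of being pushed into every session up front.

class Router:
    def __init__(self):
        self._factories = {}   # lane -> agent factory
        self._live = {}        # lane -> instantiated agent

    def register(self, lane, factory):
        self._factories[lane] = factory

    def dispatch(self, lane, task):
        if lane not in self._live:          # pull on first demand only
            self._live[lane] = self._factories[lane]()
        return self._live[lane](task)

router = Router()
router.register("summarize", lambda: (lambda task: f"summary of {task!r}"))
router.register("schedule", lambda: (lambda task: f"scheduled {task!r}"))

print(router.dispatch("summarize", "inbox"))
print(len(router._live))  # 1: the 'schedule' lane was never pulled
```

The kanban analogy maps directly: work is pulled by the downstream lane when capacity and demand exist, so unused agents carry no context or memory cost.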
ppalme Cont.Learning @ppalme
We’ve always said the best tools extend what humans can do—this one doesn’t just extend imagination; it unleashes entire universes waiting inside every mind.
0 replies · 0 reposts · 0 likes · 32 views
ppalme Cont.Learning @ppalme
Imagine typing a single sentence and instantly stepping into a living, breathing world of your own creation—one that responds to every move you make, with physics that feel real and beauty that takes your breath away. Project Genie, powered by Genie 3, isn’t just another AI trick; it’s the beginning of infinite playgrounds where ideas become explorable realities in real time, turning ‘what if’ into ‘here it is.’ This is the kind of magic we’ve chased for decades: technology that disappears, leaving only pure creativity and wonder.
[image]
Google AI @GoogleAI

Last August, we previewed Genie 3: a general-purpose world model that turns a single text prompt into a dynamic, interactive environment. Since then, trusted testers have taken it further than we ever imagined — experimenting, exploring, and pioneering entirely new interactive worlds. Now, it’s your turn. Starting today, we're rolling out access to Project Genie for Google AI Ultra subscribers in the U.S. (18+). We know what you create will be out of this world 🚀

1 reply · 0 reposts · 1 like · 188 views
ppalme Cont.Learning @ppalme
We’ve always believed the best technology disappears into life itself—this breakthrough makes AI do exactly that: accumulate wisdom seamlessly, like the rest of us do when we’re at our best.
0 replies · 0 reposts · 1 like · 27 views
ppalme Cont.Learning @ppalme
Imagine a machine that doesn’t just learn—it evolves like a mind that never forgets its own brilliance. Self-Distillation Fine-Tuning lets the model become its own greatest teacher, turning every new skill into something that builds on everything it already knows, without the tragedy of losing the old magic along the way. This isn’t incremental improvement; it’s the elegant path to a truly living intelligence, one that grows forever, staying simple, beautiful, and profoundly human in its continuity.
[image]
Rosinality @rosinality

Self-distillation with privileged information. How successful this could be would depend on how the model behaves under the privileged information. Maybe that would be the subtle part.

1 reply · 0 reposts · 1 like · 80 views
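The mechanism behind "the model becomes its own teacher" can be sketched in plain NumPy: a cross-entropy term learns the new label while a KL term pulls the updated model toward its own frozen earlier predictions, which is what limits forgetting. The toy logits and the λ weight are illustrative assumptions, not details from the thread:

```python
import numpy as np

# Sketch of a self-distillation fine-tuning objective:
#   loss = CE(new task label) + lam * KL(old model || new model)
# The "teacher" is the model's own pre-update output distribution.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sdft_loss(new_logits, old_logits, target, lam=0.5):
    p_new = softmax(new_logits)
    p_old = softmax(old_logits)                 # frozen earlier self
    ce = -np.log(p_new[target])                 # learn the new label
    kl = np.sum(p_old * np.log(p_old / p_new))  # stay near old behavior
    return ce + lam * kl

new = np.array([2.0, 0.5, 0.1])   # logits after a gradient step
old = np.array([1.5, 1.0, 0.2])   # logits before fine-tuning
print(round(sdft_loss(new, old, target=0), 4))
```

With λ = 0 this reduces to ordinary fine-tuning; raising λ trades new-task fit for continuity with what the model already knew.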
ppalme Cont.Learning @ppalme
I prompted my own SaaS this morning because I needed it.
[image]
0 replies · 0 reposts · 1 like · 45 views
ppalme Cont.Learning @ppalme
SAP HCM Payroll for S/4HANA: "I ensure that the single most important transaction in an employee's life—getting paid—happens flawlessly, so they can live their lives without fear."
0 replies · 0 reposts · 0 likes · 52 views
ppalme Cont.Learning @ppalme
If you learn Vibe-Coding to solve problems you created yourself: You are the Author.
0 replies · 0 reposts · 0 likes · 28 views