

Caleb

@0xCalebx
Living on-chain | Web3, GameFi & Crypto Security | GM & BUIDL

How to set up Claude so it never forgets you: Prompts → Projects → Skills (explained in 3 mins)

Prompts = telling a stranger your job every morning.
Projects = giving a new hire a binder on day one.
Skills = training an employee once. Forever.

Step 1: Start with a Prompt (but don't stay there)
✦ Open Claude. Type your task. Get an answer.
✦ It works. But tomorrow? Claude has forgotten everything.
✦ You re-explain. Again. Every. Single. Chat.
✦ That's Level 1. Most people never leave it.

Step 2: Move to a Project
✦ Go to Claude.ai → Create a Project.
✦ Upload your voice file. Upload your instructions.
✦ Now every chat inside that Project knows you.
✦ Your context, style, and tone stick.

But you still have to open the right Project. You still have to say "read my file first."

Step 3: Graduate to Skills
✦ Open Claude Cowork.
✦ Select Opus 4.7 + Extended Thinking.
✦ Prompt: "Use the skill-creator to help me build a skill for [your most repeated task]."

Claude interviews you. Answer extensively. "I write reports" is useless. "I write weekly reports that start with the headline metric, 3 sections max, next steps as bullets" is a Skill. The specificity is the skill.

Step 4: Install and test
✦ Save the Skill folder.
✦ Go to Settings → Capabilities → Skills → Upload.
✦ Open a new chat. Type your task normally.
✦ The Skill fires on its own. No slash command.
✦ Claude just knows.

I just wrote my full Claude Skills breakdown. It covers setup, the skill-creator walkthrough, and the 7 hacks I found buried in Anthropic's docs. Read it here: claude-skills.free

To download all of my Claude infographics:
Step 1. Go to how-to-ai.guide.
Step 2. Subscribe for free. Don't pay anything.
Step 3. Open my welcome email (most skip this).
Step 4. Hit the automatic reply button inside.
Step 5. Download my infographics from my Notion.

♻️ Repost this to help someone on your team stop re-explaining themselves to Claude every morning.
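Under the hood, a Skill is just a folder containing a SKILL.md file: YAML frontmatter (a name plus a description that tells Claude when to trigger it) followed by plain-language instructions. A minimal sketch for the weekly-report example above — the exact name, description wording, and instructions here are illustrative assumptions, not the skill-creator's literal output:

```markdown
---
name: weekly-report-writer
description: Writes weekly status reports that open with the headline metric, use at most 3 sections, and end with next steps as bullets. Use when the user asks for a weekly report or status update.
---

# Weekly Report Writer

When the user asks for a weekly report:

1. Open with the single headline metric for the week (one sentence, number first).
2. Use at most 3 sections; cut anything that doesn't change a decision.
3. End with a "Next steps" bulleted list, one owner per bullet.
```

The description field is what makes the Skill fire on its own: Claude matches it against your request, so the specificity really is the skill.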

🚨 BREAKING: Singapore takes the lead again and publishes its Model AI Governance Framework for Agentic AI [Bookmark it below].

Other countries should take note: as the document clarifies, the new components of an agent create new sources of risk.

"The risks themselves are familiar – fundamentally, agents are software systems built on LLMs. They inherit traditional software vulnerabilities (such as SQL injection) and LLM-specific risks (such as hallucination, bias, data leakage, and adversarial prompt injections). However, the risks can manifest differently through the different components."

Many countries and regions (including Europe) are still unsure how to apply their existing AI legal frameworks to agentic AI. Others seem to prefer the deregulatory trend. Singapore understands that AI is evolving fast, new risks are emerging, and the time to establish dynamic AI governance frameworks is NOW.

Bookmark the document below and don't miss pages 6-7, which cover agentic AI risks.

👉 To learn more about AI governance, join my newsletter's 89,300+ subscribers and don't miss the 27th cohort of my AI Governance Training (links below).

DON’T TRUST BIG TECH WITH YOUR DATA