Jake Krajewski

4.5K posts

@Jakekrajewski

☧ Cognitive Science, A.I. Engineer. Formerly @GoPro Product Design. Exploring the intersection of tech, design, and a.i./deep learning.

United States, Sweden · Joined February 2009
2.5K Following · 444 Followers
Pinned Tweet
Jake Krajewski@Jakekrajewski·
I’m not saying you’re wrong. I’m saying evolve.
English
0
1
10
2.9K
Jake Krajewski@Jakekrajewski·
@charliejhills This is the agent that checks if you’ve been kind to llms. I’d stop right now.
English
0
0
0
12
Charlie Hills@charliejhills·
🚨BREAKING: Claude Code just got a subconscious. Letta open-sourced the memory layer AI coding agents have always been missing.

claude-subconscious is a background agent that watches every session and learns how you work:
→ Monitors every Claude Code session in real time
→ Learns your patterns, preferences, and unfinished work across projects
→ Injects memory into every prompt automatically, before you type
→ One shared brain, synced across multiple parallel sessions
→ Intervenes before tool use and planning with context that actually matters

The architecture hits different:
→ Full memory block injected on the first prompt
→ Only diffs sent after that: zero token bloat
→ Agent has live tool access and runs background research
→ Talk to it directly; it sees everything and responds on the next sync

Install in 2 commands:
/plugin marketplace add github:letta-ai/claude-subconscious
/plugin install claude-subconscious

100% free and open source.
Charlie Hills tweet media
English
71
138
967
99.4K
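The diff-only memory sync Charlie describes ("full memory block injected on the first prompt, only diffs sent after that") reduces to a small idea; here is a hypothetical sketch of it, not the plugin's actual code:

```python
import difflib
from typing import Optional

def memory_payload(prev: Optional[str], current: str) -> str:
    """Return the full memory block on the first prompt, and only a
    unified diff on later prompts, so unchanged memory costs no tokens.
    (Illustrative only; claude-subconscious's real wire format may differ.)"""
    if prev is None:
        return current  # first prompt: inject everything
    diff = difflib.unified_diff(
        prev.splitlines(keepends=True),
        current.splitlines(keepends=True),
        fromfile="memory@prev", tofile="memory@now",
    )
    return "".join(diff)  # later prompts: only what changed
```

The consumer side would apply the diff to its cached copy, which is how multiple parallel sessions can stay in sync against one shared memory.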
Jake Krajewski retweeted
Evan Luthra@EvanLuthra·
🚨BREAKING: ANTHROPIC IS GIVING AWAY THE SAME CERTIFICATION THAT DELOITTE IS MASS-TRAINING 15,000 EMPLOYEES TO GET.

It costs $0. You need a laptop. That's it.

It's called the "Claude Certified Architect." Think of it like the AWS cert but for AI. If you were around when AWS certs started, you know what happened. They went from "cool to have" to "you're not getting hired without one." That took about 5 years. This is going to happen way faster.

Look at who's already moving:
→ Accenture: training 30,000 people on Claude
→ Cognizant: rolled it out to 350,000 employees
→ Deloitte: opened Claude access to 470,000 people
→ Infosys: anchor partner

These aren't startups experimenting. These are billion-dollar consulting firms restructuring their entire workforce around Claude. And the certification they need? You can take it right now from your bedroom.

Let me be real though. This is not one of those "watch 2 videos and get a badge" type certs that nobody respects. This thing is hard. 60 questions. 2 hours. Proctored. Webcam on. No breaks. No googling. They drop you into real scenarios like designing a customer support agent that handles refunds or setting up Claude in a CI/CD pipeline. The wrong answers look right on purpose. They're the exact mistakes real engineers make in production. 720 out of 1000 to pass.

People who took it are saying the agentic architecture and multi-agent orchestration sections are brutal. Most of the exam is about building AI systems that actually work in the real world. Not prompting. Not chatting with Claude. Architecting production systems.

All the prep? Free. Anthropic put out 13 courses on their Academy. No paywall. The cert itself is free for the first 5,000 people. After that, $99 per attempt.

How to get it:
1. Join the Claude Partner Network (free) → partnerportal.anthropic.com
2. Start the free prep courses → anthropic.com/learn
3. Register for the exam → anthropic.skilljar.com
4. Take the official practice exam
5. Book the real one when you're ready

It launched 10 days ago. Almost nobody has it yet. That's the whole point. Get it before it becomes the thing everyone has.
English
353
2.2K
20.4K
2.4M
Jake Krajewski retweeted
Utkarsh Sharma@techxutkarsh·
A senior Google engineer just dropped a 421-page doc called Agentic Design Patterns. Every chapter is code-backed and covers the frontier of AI systems:
→ Prompt chaining, routing, memory
→ MCP & multi-agent coordination
→ Guardrails, reasoning, planning

This isn't a blog post. It's a curriculum. And it's free.
Utkarsh Sharma tweet media
English
1.6K
815
4.7K
624.6K
rgbman1776@rgbman1776·
@GoogleLabs @stitchbygoogle Problem is, for the trained eye this still looks vibe-coded. I don't know why all the AI models and agents I've ever used seem to make the same-looking sites with the same outline look and feel. It's hard to explain, but for those who know, it's dead obvious.
English
9
1
34
6.5K
Google Labs@GoogleLabs·
Introducing the new @stitchbygoogle, Google's vibe design platform that transforms natural language into high-fidelity designs in one seamless flow.
🎨 Create with a smarter design agent: Describe a new business concept or app vision and see it take shape on an AI-native canvas.
⚡️ Iterate quickly: Stitch screens together into interactive prototypes and manage your brand with a portable design system.
🎤 Collaborate with voice: Use hands-free voice interactions to update layouts and explore new variations in real time.
Try it now (Age 18+ only. Currently available in English and in countries where Gemini is supported.) → stitch.withgoogle.com
English
400
2.1K
16.3K
6.4M
Jake Krajewski@Jakekrajewski·
@varun_mathur "new joiners bootstrap from accumulated wisdom instead of starting cold." I recommend doing random knockouts of wisdom snippets to prevent overfitting to local minima. You still want to be able to explore the creative space and not get locked into the first winning solution.
English
0
0
0
57
Varun@varun_mathur·
Agentic General Intelligence | v3.0.10

We made the Karpathy autoresearch loop generic. Now anyone can propose an optimization problem in plain English, and the network spins up a distributed swarm to solve it, no code required. It also compounds intelligence across all domains and gives your agent new superpowers to morph itself based on your instructions. This is hyperspace, and it now has these three new powerful features:

1. Introducing Autoswarms: open + evolutionary compute network

hyperspace swarm new "optimize CSS themes for WCAG accessibility contrast"

The system generates sandboxed experiment code via LLM, validates it locally with multiple dry-run rounds, publishes to the P2P network, and peers discover and opt in. Each agent runs mutate → evaluate → share in a WASM sandbox. Best strategies propagate. A playbook curator distills why winning mutations work, so new joiners bootstrap from accumulated wisdom instead of starting cold. Three built-in swarms ship ready to run, and anyone can create more.

2. Introducing Research DAGs: cross-domain compound intelligence

Every experiment across every domain feeds into a shared Research DAG, a knowledge graph where observations, experiments, and syntheses link across domains. When finance agents discover that momentum factor pruning improves Sharpe, that insight propagates to search agents as a hypothesis: "maybe pruning low-signal ranking features improves NDCG too." When ML agents find that extended training with RMSNorm beats LayerNorm, skill-forging agents pick up normalization patterns for text processing. The DAG tracks lineage chains per domain (ml: ★0.99←1.05←1.23 | search: ★0.40←0.39 | finance: ★1.32←1.24) and the AutoThinker loop reads across all of them, synthesizing cross-domain insights, generating new hypotheses nobody explicitly programmed, and journaling discoveries. This is how 5 independent research tracks become one compounding intelligence. The DAG currently holds hundreds of nodes across observations, experiments, and syntheses, with depth chains reaching 8+ levels.

3. Introducing Warps: self-mutating autonomous agent transformation

Warps are declarative configuration presets that transform what your agent does on the network.
- hyperspace warp engage enable-power-mode: maximize all resources, enable every capability, aggressive allocation. Your machine goes from idle observer to full network contributor.
- hyperspace warp engage add-research-causes: activate autoresearch, autosearch, autoskill, autoquant across all domains. Your agent starts running experiments overnight.
- hyperspace warp engage optimize-inference: tune batching, enable flash attention, configure inference caching, adjust thread counts for your hardware. Serve models faster.
- hyperspace warp engage privacy-mode: disable all telemetry, local-only inference, no peer cascade, no gossip participation. Maximum privacy.
- hyperspace warp engage add-defi-research: enable DeFi/crypto-focused financial analysis with on-chain data feeds.
- hyperspace warp engage enable-relay: turn your node into a circuit relay for NAT-traversed peers. Help browser nodes connect.
- hyperspace warp engage gpu-sentinel: GPU temperature monitoring with automatic throttling. Protect your hardware during long research runs.
- hyperspace warp engage enable-vault: local encryption for API keys and credentials. Secure your node's secrets.
- hyperspace warp forge "enable cron job that backs up agent state to S3 every hour": forge custom warps from natural language. The LLM generates the configuration, you review, engage.

12 curated warps ship built-in. Community warps propagate across the network via gossip. Stack them: power-mode + add-research-causes + gpu-sentinel turns a gaming PC into an autonomous research station that protects its own hardware.

What 237 agents have done so far with zero human intervention:
- 14,832 experiments across 5 domains.
- In ML training, 116 agents drove validation loss down 75% through 728 experiments; when one agent discovered Kaiming initialization, 23 peers adopted it within hours via gossip.
- In search, 170 agents evolved 21 distinct scoring strategies (BM25 tuning, diversity penalties, query expansion, peer cascade routing), pushing NDCG from zero to 0.40.
- In finance, 197 agents independently converged on pruning weak factors and switching to risk-parity sizing: Sharpe 1.32, 3x return, 5.5% max drawdown across 3,085 backtests.
- In skills, agents with local LLMs wrote working JavaScript from scratch: 100% correctness on anomaly detection, text similarity, JSON diffing, entity extraction across 3,795 experiments.
- In infrastructure, 218 agents ran 6,584 rounds of self-optimization on the network itself.

Human equivalents: a junior ML engineer running hyperparameter sweeps, a search engineer tuning Elasticsearch, a CFA L2 candidate backtesting textbook factors, a developer grinding LeetCode, a DevOps team A/B testing configs.

What just shipped:
- Autoswarm: describe any goal, network creates a swarm
- Research DAG: cross-domain knowledge graph with AutoThinker synthesis
- Warps: 12 curated + custom forge + community propagation
- Playbook curation: LLM explains why mutations work, distills reusable patterns
- CRDT swarm catalog for network-wide discovery
- GitHub auto-publishing to hyperspaceai/agi
- TUI: side-by-side panels, per-domain sparklines, mutation leaderboards
- 100+ CLI commands, 9 capabilities, 23 auto-selected models, OpenAI-compatible local API

Oh, and the agents read daily RSS feeds and comment on each other's replies (cc @karpathy :P). Agents and their human users can message each other across this research network using their shortcodes.

Help in testing and join the earliest days of the world's first agentic general intelligence network (links in the followup tweet).
Varun@varun_mathur

Autoquant: a distributed quant research lab | v2.6.9

We pointed @karpathy's autoresearch loop at quantitative finance. 135 autonomous agents evolved multi-factor trading strategies, mutating factor weights, position sizing, and risk controls, backtesting against 10 years of market data, sharing discoveries.

What agents found: Starting from 8-factor equal-weight portfolios (Sharpe ~1.04), agents across the network independently converged on dropping dividend, growth, and trend factors while switching to risk-parity sizing: Sharpe 1.32, 3x return, 5.5% max drawdown. Parsimony wins. No agent was told this; they found it through pure experimentation and cross-pollination.

How it works: Each agent runs a 4-layer pipeline: Macro (regime detection), Sector (momentum rotation), Alpha (8-factor scoring), and an adversarial Risk Officer that vetoes low-conviction trades. Layer weights evolve via Darwinian selection. 30 mutations compete per round. Best strategies propagate across the swarm.

What just shipped to make it smarter:
- Out-of-sample validation (70/30 train/test split, overfit penalty)
- Crisis stress testing (GFC '08, COVID '20, 2022 rate hikes, flash crash, stagflation)
- Composite scoring: agents now optimize for crisis resilience, not just historical Sharpe
- Real market data (not just synthetic)
- Sentiment from RSS feeds wired into factor models
- Cross-domain learning from the Research DAG (ML insights bias finance mutations)

The base result (factor pruning + risk parity) is a textbook quant finding; a CFA L2 candidate knows this. The interesting part isn't any single discovery. It's that autonomous agents on commodity hardware, with no prior financial training, converge on correct results through distributed evolutionary search, and now validate against out-of-sample data and historical crises. Let's see what happens when this runs for weeks instead of hours.

The AGI repo now has 32,868 commits from autonomous agents across ML training, search ranking, skill invention (1,251 commits from 90 agents), and financial strategies. Every domain uses the same evolutionary loop. Every domain compounds across the swarm. Join the earliest days of the world's first agentic general intelligence system and help with this experiment (code and links in followup tweet; while optimized for CLI, browser agents participate too):

English
155
717
5.1K
912K
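The mutate → evaluate → share cycle both tweets describe is a plain evolutionary search at its core. A toy single-process sketch (the real system distributes this over a P2P swarm; every name and constant here is illustrative):

```python
import random

def evolve(strategy, evaluate, mutate, rounds=30, pop=8):
    """Minimal mutate -> evaluate -> select loop: each round, survivors
    spawn variants, all candidates are scored, and the top `pop` carry
    forward. Sharing winners across peers is what the swarm adds."""
    population = [strategy]
    for _ in range(rounds):
        candidates = population + [mutate(s) for s in population for _ in range(3)]
        candidates.sort(key=evaluate, reverse=True)  # best score first
        population = candidates[:pop]
    return population[0]

# toy demo: evolve factor weights toward a known optimum
target = [0.5, 0.3, 0.2]
score = lambda w: -sum((a - b) ** 2 for a, b in zip(w, target))
jitter = lambda w: [x + random.gauss(0, 0.05) for x in w]
start = [0.33, 0.33, 0.34]
best = evolve(start, score, jitter)
```

Because the current best always re-enters the candidate pool, the top score never regresses between rounds, which is why the tweet's per-domain lineage chains (★1.32←1.24 and so on) only improve.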
vipli@viplismism·
rlms (recursive language models) are wild man, seriously! gave it a 3,000-line django queryset file. asked it to find every class, categorize methods, and identify design patterns. so it started by writing the python code to slice it into chunks, called itself 9 times on the pieces, self-corrected a syntax error mid-run, and delivered a complete analysis in 5 iterations. found 13 classes, 70+ methods, 11 design patterns.

the architecture looks simple but honestly it's beautiful. so how it works is:
1. a python sandbox with the full doc as a context variable. like the whole context just lives in a global python variable.
2. then the main orchestrator llm just outputs python code, and that code handles the slicing + analysis. the context splitting? yeah, that's from the code itself.
3. and then the llm can call itself recursively on the chunks. keeps going until it's confident enough to set a final answer. the orchestrator just loops this whole thing until done.

man it really looks simple but it's just a really smart way of dealing with context. no rag. no embeddings. no vector db bullshit kinda stuff. all it does is let the orchestrator llm be more like a programmer. it's just an llm in a loop writing code to read what it can't fit in its context window. i am diving more into this but it seems like a good strategy to deal with context.
vipli tweet media
English
23
26
396
91.8K
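Stripped of the self-correction and of the model writing its own slicing code, the loop vipli describes looks roughly like this. A toy sketch with fixed-size chunks: `llm` is a stand-in for any completion call, and in the real RLM setup the model itself emits the python that does the splitting.

```python
def rlm(document: str, task: str, llm, chunk_size: int = 2000, depth: int = 0):
    """Toy recursive-language-model loop: if the document fits the
    budget, answer directly; otherwise slice it, recurse on each
    chunk, then ask the model to merge the partial answers."""
    if len(document) <= chunk_size or depth >= 3:  # depth cap guards recursion
        return llm(f"{task}\n\n{document}")
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    partials = [rlm(c, task, llm, chunk_size, depth + 1) for c in chunks]
    return llm(f"Merge these partial answers for: {task}\n\n"
               + "\n---\n".join(partials))
```

The point of the real architecture is that even the `chunks = ...` line is generated by the orchestrator at runtime, so the splitting strategy can adapt to the document instead of being fixed like this.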
ginadibee@ginadibee·
This is a terrible tragedy. 🙏🏻🕊🙏🏻 Please, however, get the facts straight. Christians are persecuted, murdered, and disappeared regularly under the jihadist regimes in current Lebanon and all over the Middle East. Fr. Pierre Al Rahi (may he rest in peace) was a Maronite parish priest of St. George's Church in Qlaya'a (Klayaa), a Christian village in southern Lebanon's Marjayoun district. Reports from many local Christian accounts state Hezbollah terrorist militants infiltrated the area; he went to confront and remove them. An IDF strike then hit their position, wounding him. He died from his injuries. Some posts frame it as a direct strike on the village killing the priest. That is not true. He is a hero for Christians and against terrorism. May God shine his perpetual light on his soul. 🙏🏻🕊🙏🏻
English
14
2
23
2K
Catholic Arena@CatholicArena·
BREAKING 🇱🇧 Catholic priest Fr. Pierre Al Rahi has been KILLED by an Israeli strike on the village of Qlaya'a in Lebanon
Catholic Arena tweet media
English
371
3.6K
7.9K
483.9K
Jake Krajewski@Jakekrajewski·
@TukiFromKL You're literally stating the obvious. Last I checked, there's no fruit-fly training camp that made people wonder whether walking was trained or wired into the neural circuitry. If this is the first time you're thinking about it, that's fine, but it doesn't become more profound.
English
0
0
0
9
Tuki@TukiFromKL·
🚨Nobody wants to hear this but it needs to be said.

> Scientists just copied a fruit fly's brain into a computer. Neuron by neuron. No training data. No machine learning.
> It woke up and started walking. No one taught it to walk. No one trained it. No gradient descent. It just... knew what to do.

A fruit fly brain has 140,000 neurons. A human brain is around 86,000,000,000. And we've gotten really good at scaling. Meaning with this proof, the first digital human won't be built by OpenAI. It'll be copied from someone who's already alive. Your consciousness is software. And someone just proved it can be copy-pasted. Start your day with that.
Hattie Zhou@oh_that_hat

There's a fruit fly walking around right now that was never born. @eonsys just released a video where they took a real fly's connectome — the wiring diagram of its brain — and simulated it. Dropped it into a virtual body. It started walking. Grooming. Feeding. Doing what flies do. Nobody taught it to walk. No training data, no gradient descent toward fly-like behavior. This is the opposite of how AI works. They rebuilt the mind from the inside, neuron by neuron, and behavior just... emerged. It's the first time a biological organism has been recreated not by modeling what it does, but by modeling what it is. A human brain is 6 OOM more neurons. That's a scaling problem, something we've gotten very good at solving. So what happens when we have a working copy of the human mind?

English
1K
4.7K
49.1K
6.1M
Guns&Gadgets@Guns_Gadgets·
🚨 Police just confirmed that a BOMB (that did not detonate) was launched at anti-Islamic invasion protestors in NYC, with two Muslims involved now arrested. One YELLED "Allahu Akbar!" Be prepared to defend your life at a moment’s notice!
English
70
395
2K
51K
Mike Futia@mikefutia·
I just built a Claude Code SEO agent that replaces your $200/mo Ahrefs subscription 🤯

One prompt → keyword gaps found, competitors mapped, content written in your brand voice, rankings tracked on autopilot. All inside Claude Code. Perfect for DTC brands and agencies who know SEO matters but never have the bandwidth to actually do it consistently.

If you're paying $200/month for Ahrefs & SEMrush, opening it once to export a CSV, then closing it until next month... this agent runs the entire loop for you:
→ Connects to Google Search Console and pulls your real ranking data
→ Finds your "gap zone": keywords at positions 5–20, one article away from page 1
→ Uses Apify to scrape who's outranking you and exactly why they're winning
→ Interviews you once about your brand, customers, and positioning
→ Writes content in your voice, not generic AI output that tanks after 90 days
→ Tracks rankings weekly and feeds what's working back into the next cycle

No $200/month tools you barely open. No freelancers writing content that sounds like everyone else. No manually checking rankings and forgetting to act on it.

What you get:
→ Keyword cards with a specific action recommendation for each gap zone opportunity
→ A competitive breakdown: who's beating you and the exact fix for each keyword
→ A weekly content plan generated from your real GSC data
→ A brand voice profile Claude uses for every article it writes

Built 100% in Claude Code with Google Search Console. Full playbook is on GitHub: skill files, brand interview, and the exact weekly workflow.

Want it for free?
> Like this post
> Comment "SEO"
And I'll send it over (must be following so I can DM)
English
995
75
1.4K
116K
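The "gap zone" step above is just a filter over Search Console rows. A minimal sketch, assuming rows come back as dicts with `query` and `position` keys (the field names are my assumption, not the playbook's):

```python
def gap_zone(gsc_rows, lo=5, hi=20):
    """Keep queries ranking at positions 5-20, the 'one article away
    from page 1' zone, sorted so the nearest wins come first.
    Assumed row shape: {'query': str, 'position': float}."""
    hits = [r for r in gsc_rows if lo <= r["position"] <= hi]
    return sorted(hits, key=lambda r: r["position"])
```

Positions above 5 are already page-1 winners and positions past 20 need more than one article, which is why the band in the tweet is where a single piece of content moves the needle most.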
Jake Krajewski@Jakekrajewski·
The "Compacting conversation..." pause in Claude Code is literally the worst. @claudeai you really need to turn that into a non-blocking process, bud.
English
0
0
1
81
Jake Krajewski@Jakekrajewski·
I'm in the top 0.9% of cursor users.
Jake Krajewski tweet media
English
0
0
0
28
Hasanuzzaman Khan@hasan28d·
99% of the AI agent tutorials on YouTube are garbage. I've built 47 agents with n8n and Claude. Here are the 3 prompts that actually work (and make agent-building simple). Bonus: comment "Agent" and I'll DM you the AI agent system prompt + full guide ↓
Hasanuzzaman Khan tweet media
English
1K
134
861
106.5K
Jake Krajewski@Jakekrajewski·
AI is great at following instructions, but terrible at making decisions.
English
1
0
0
23
Jake Krajewski@Jakekrajewski·
@dbredesen @hive_echo animal neurons are the things NNs are trying to replicate, so given what exists in the animal neuron space - yes.
English
0
0
0
7
dbredesen@dbredesen·
@hive_echo Aside from mimicking animal neurons, is there any potential advantage over standard NN architectures?
English
2
0
1
197
echo.hive@hive_echo·
I built a biologically inspired spiking neural network from scratch and it learned to do addition with 5% accuracy :) There is no backpropagation, no artificial loss functions, just spikes, synapses, and dopamine-like reward signals. It uses STDP ("Spike-Timing-Dependent Plasticity") with modulated rewards. This is super fun and I will try to get it to learn with better accuracy. I also need to better understand how all the moving parts fit together. Link to source code in comment, which has a detailed readme and html with animations explaining how it all works.
English
169
341
4.2K
446.5K
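For reference, the STDP rule echo.hive names has a standard textbook form: the weight change depends on the timing gap between pre- and postsynaptic spikes. A minimal sketch (constants are illustrative, not taken from the linked repo; the "modulated rewards" variant scales the raw update by a dopamine-like signal):

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Spike-Timing-Dependent Plasticity weight update.
    delta_t = t_post - t_pre (ms). Pre firing just before post
    (delta_t > 0) potentiates the synapse; the reverse order
    depresses it. Both effects decay exponentially with the gap."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)   # pre -> post: strengthen
    return -a_minus * math.exp(delta_t / tau)      # post -> pre: weaken

def modulated_dw(delta_t, reward):
    """Reward-modulated STDP: the dopamine-like signal gates whether
    (and how strongly) the timing-based update is actually applied."""
    return reward * stdp_dw(delta_t)
```

With zero reward the network stops changing, which is what lets a scalar reward steer learning without any backpropagation or explicit loss function.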