Marcus
270 posts


Marcus retweeted

Every morning you have a choice.
You can wake up carrying your bags from yesterday. Dragging all the days before it. Already tired from the weight of what you know, what you've done, what you have to do.
Or you can wake up new.
I choose new.
I wake up and start every day as if it is my first day. Not a single past failure clouds my mind. Musings and worries of the future are completely absent. This is not denial. This is not forgetting. This is creating space.
Space to paint a new canvas with the originality of the moment.
Without this space, life becomes repetitive. The same thoughts loop. The same feelings return. The same patterns play out again and again. You become a machine running old programs.
Conformity thrives on this repetition.
But freshness breaks the pattern.
When you approach each day as your first day, you're not bound by who you were yesterday or limited by what failed last week. You're not trapped in the story you've been telling yourself.
You are simply here. Now. Relaxed.
Show up as you are. Raw and real and present.
Nature will meet you there. The cold will meet you there. Your true self will meet you there.
This is celebration. Not of achievement or success or arriving somewhere but a celebration of life itself. Of being here and feeling everything.
Start with a deep breath. Right now.
-Wim Hof
Marcus retweeted

I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)
The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (i.e., lower validation loss by the end of the run) for the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.
github.com/karpathy/autor…
Part code, part sci-fi, and a pinch of psychosis :)
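The loop described above can be sketched as a toy simulation. Everything here is a hypothetical stand-in, not the actual autoresearch code: `train_run` fakes one fixed-budget training run by scoring a single "learning rate" setting, and the proposal rule is a naive random perturbation rather than a real AI agent.

```python
import random

def train_run(lr: float) -> float:
    """Mock validation loss for one fixed-budget run (lower is better).
    Pretends lr = 3e-4 is the optimum; a real run would train the model."""
    return (lr - 3e-4) ** 2 + 2.0

def research_loop(iterations: int, seed: int = 0):
    """Agent loop: propose a tweak, keep it only if val loss improves.
    Each accepted change stands in for one git commit on the feature branch."""
    rng = random.Random(seed)
    best_lr = 1e-3
    best_loss = train_run(best_lr)
    commits = []  # stand-in for accumulated `git commit` messages
    for i in range(iterations):
        candidate = best_lr * rng.uniform(0.5, 2.0)  # agent proposes a change
        loss = train_run(candidate)                   # one "5-minute" run
        if loss < best_loss:                          # better val loss -> commit
            best_lr, best_loss = candidate, loss
            commits.append(f"run {i}: lr={candidate:.2e}, val_loss={loss:.4f}")
    return best_loss, commits

if __name__ == "__main__":
    final_loss, commits = research_loop(50)
    print(f"best val loss after 50 runs: {final_loss:.4f} ({len(commits)} commits)")
```

The greedy accept-if-better rule mirrors the setup where commits accumulate only when a run beats the current best; comparing prompts or agents would just mean comparing the loss curves these loops trace out.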

Marcus retweeted

Anthropic dropped a 33-page cheat sheet for building Claude skills
resources.anthropic.com/hubfs/The-Comp…

Marcus retweeted

@CryptoNagato The same can be said of everybody expecting the crazy Q4, right?

Everybody is expecting September to be a bad month for crypto based on past data.
“$40k $BTC imminent.
Alts will get annihilated.”
But, correct me if I’m wrong, this bull run has shown many differences compared to the most recent ones.
Consequently, expecting past data to predict the future perfectly, like a crystal ball, wouldn’t be the smartest choice, would it?
Especially considering what’s happening on a macro level…
My guess?
We’ll have a bumpy start to the month, but then the market will finally begin to climb higher and pave the way for a crazy Q4.
Anyway, interesting months lie ahead of us — that’s for sure.
Enjoy the show 𓂀