Matthew A. Mattson, Esq.

243 posts

@mrmmattson

Driving AI-Powered Business Transformation. Systems Thinking For Rapid AI Adoption. Leading AI-First Engineering Teams. Focused On Real, Customer-Driven Impact.

Magna, Utah · Joined November 2009
113 Following · 180 Followers
Matthew A. Mattson, Esq. @mrmmattson
@dr_cintas Great share! This makes it so easy to run local models using Claude Code. AI is making our lives so easy.
0 replies · 0 reposts · 4 likes · 628 views
Alvaro Cintas @dr_cintas
You can now run Claude Code locally for FREE.
→ Using open-source models
→ Locally and fully private
→ Full agent + tool workflows
Here's the FULL step-by-step tutorial to get it all set up in less than 10 minutes:
x.com/i/article/2013…
122 replies · 698 reposts · 6.4K likes · 1.3M views
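The tutorial itself sits behind the link above. As a rough sketch of what such a local setup typically looks like — assuming a local Ollama server and Claude Code's documented `ANTHROPIC_BASE_URL` override; the model name and endpoint here are assumptions, so verify both against the linked tutorial and current docs:

```shell
# Sketch only - model name and endpoint are assumptions; check the tutorial.
ollama pull qwen3            # fetch an open-source model locally
ollama serve &               # expose the local inference server

# Point Claude Code at the local server instead of Anthropic's API.
export ANTHROPIC_BASE_URL="http://localhost:11434"
export ANTHROPIC_AUTH_TOKEN="dummy"   # placeholder; no real key leaves the machine
claude                       # start Claude Code against the local model
```

Because inference happens on localhost, prompts and code never leave the machine, which is the "fully private" part of the claim.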
Mike Kelly @NicerInPerson
I managed to unlock a crazy new hidden feature in Claude Code called Swarms.

You're not talking to an AI coder anymore. You're talking to a team lead. The lead doesn't write code - it plans, delegates, and synthesizes.

When you approve a plan, it enters a new "delegation mode" and spawns a team of specialists who:
- Share a task board with dependencies
- Work in parallel as teammates
- Message each other to coordinate work

Workers do the heavy lifting, coordinate amongst themselves, then report back.
143 replies · 255 reposts · 2.9K likes · 564.7K views
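The coordination pattern described above — a shared task board where workers only pick up tasks whose dependencies are done — can be sketched in a few lines. This is purely illustrative (task names and the scheduler are invented here), not Claude Code's actual implementation:

```python
from collections import deque

# A shared "task board": each task maps to the tasks it depends on.
tasks = {
    "design schema": [],
    "write migrations": ["design schema"],
    "build API": ["design schema"],
    "integration tests": ["write migrations", "build API"],
}

def run_board(tasks):
    """Complete tasks in dependency order, as coordinating workers would."""
    done, order = set(), []
    pending = deque(tasks)
    while pending:
        task = pending.popleft()
        if all(dep in done for dep in tasks[task]):
            done.add(task)           # a worker completes the task
            order.append(task)
        else:
            pending.append(task)     # blocked: requeue until deps finish
    return order

print(run_board(tasks))
# → ['design schema', 'write migrations', 'build API', 'integration tests']
```

In a real swarm the two middle tasks could run in parallel once the schema is done; the board's dependency edges are what make that safe.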
Matthew A. Mattson, Esq. @mrmmattson
@zarazhangrui Clever way of using your CLAUDE.md file. I have been doing something similar to this using Claude Skills. There is no reason we only have to be the teacher; AI can be our teacher too.
0 replies · 0 reposts · 0 likes · 14 views
Zara Zhang @zarazhangrui
Add this paragraph to the CLAUDE.md file to turn Claude Code into Claude Teacher. Every project is a lesson to become more technical.

"For every project, write a detailed FOR[yourname].md file that explains the whole project in plain language. Explain the technical architecture, the structure of the codebase and how the various parts are connected, the technologies used, why we made these technical decisions, and lessons I can learn from it (this should include the bugs we ran into and how we fixed them, potential pitfalls and how to avoid them in the future, new technologies used, how good engineers think and work, best practices, etc). It should be very engaging to read; don't make it sound like boring technical documentation/textbook. Where appropriate, use analogies and anecdotes to make it more understandable and memorable."
116 replies · 562 reposts · 5K likes · 948K views
ℏεsam @Hesamation
this is the future 🔥 Claude running 24/7 on its own computer with access to Gmail, calendar, Slack, everything. everyone talks about personal assistants but this is probably the first one i’ve seen that’s practical, especially for managers.
146 replies · 495 reposts · 6.6K likes · 631.9K views
Matthew A. Mattson, Esq. @mrmmattson
I used to think the future of AI work was orchestration: directing tools, coordinating workflows, and calling the shots. I was wrong.

With things like Claude Skills, it’s clear the shift isn’t from human → AI control... It’s from directing to developing. Skills don’t execute workflows. They model behavior. You’re not telling AI what to do. You’re teaching it how to think. That’s not orchestration; it’s coaching.

* Principles > procedures
* Judgment > steps
* Reasoning quality > task completion

As AI gains real judgment, our role evolves:
* Director → Coach

Not less human involvement. Different, and far more interesting.
1 reply · 1 repost · 7 likes · 81 views
Matthew A. Mattson, Esq. @mrmmattson
What is the main reason you are diving into AI? Why are you using it? For me, beyond how much I enjoy programming, AI is becoming a major part of my life. It helps me think more clearly, and it helps me express my ideas in a way that others can understand more easily.
0 replies · 0 reposts · 13 likes · 140 views
Matthew A. Mattson, Esq. @mrmmattson
AI agents don’t fail because they’re “not smart enough.” They fail because of friction. Missing context. Broken handoffs. Bad data. No API access.

A friction map exposes these cognitive bottlenecks, and eliminating them is where the real ROI lies. The strategy isn’t “build an agent.” It’s map the friction ⇒ eliminate it ⇒ scale autonomy.

How are you eliminating such friction?
0 replies · 1 repost · 10 likes · 147 views
Matthew A. Mattson, Esq. @mrmmattson
It all starts with what you enter in the prompt. If you ask AI a generic question, you will more than likely get a generic response. I still see a lot of people writing one-sentence prompts, and it makes sense why they don't get what they are hoping for. If you want a non-generic response, provide more detail and make your prompts more pointed.
0 replies · 0 reposts · 4 likes · 22 views
VraserX e/acc @VraserX
If your AI output feels generic, it is because your thinking is generic. Machines mirror the operator.
47 replies · 6 reposts · 119 likes · 4.3K views
Matthew A. Mattson, Esq. @mrmmattson
AI is being integrated into everything. What is your favorite AI feature so far?
1 reply · 0 reposts · 11 likes · 118 views
Matthew A. Mattson, Esq. @mrmmattson
@gregisenberg Once LLMs and social media platforms start sharing which content was generated via AI, they will be able to detect AI-generated posts more accurately and filter them out, rather than relying only on how a post is written.
0 replies · 0 reposts · 1 like · 14 views
Matthew A. Mattson, Esq. @mrmmattson
@ForrestPKnight Mmm, prompts should have a level of generality and vagueness to them; otherwise, we might as well write the code ourselves the old-fashioned way.
0 replies · 0 reposts · 0 likes · 190 views
Forrest Knight @ForrestPKnight
You know how when you prompt AI it doesn’t always give you the right code? Well what if we were just SUPER specific with the prompt and told the AI exactly what we want? And we could actually just write it directly in the code file. I think that’s the future.
121 replies · 24 reposts · 716 likes · 27.2K views
Haider. @haider1
superhuman intelligence is already here in some domains, but ppl don't want to accept it yet. in a decade, we'll look back at today's models the way we look at savant cases now: great at specific tasks, but weak on robustness and general use. but that still means we've figured out intelligence
13 replies · 4 reposts · 87 likes · 5.8K views
Matthew A. Mattson, Esq. @mrmmattson
What would you say is your biggest friction in using AI? Mine is finding time to keep up with the latest releases.
0 replies · 0 reposts · 7 likes · 100 views
Matthew A. Mattson, Esq. @mrmmattson
Do you know anyone who overuses AI? If so, how? No names needed; just explain why you consider it excessive.
0 replies · 0 reposts · 7 likes · 70 views
Matthew A. Mattson, Esq. @mrmmattson
@VraserX I feel like we could have done a better job preparing for the AI revolution. We knew it was coming, but did little to adapt beforehand. Now we are playing catch-up.
0 replies · 0 reposts · 0 likes · 14 views
VraserX e/acc @VraserX
We are not early for AI. We are late for fixing school, work, politics and health before AI amplifies all of it.
30 replies · 10 reposts · 151 likes · 4.3K views
Matthew A. Mattson, Esq. @mrmmattson
@danielisdizzy We are going to see more AI appear on edge devices over the next few years. The more that can be done locally, the more trust people will have in AI, since the request never has to leave the device.
0 replies · 0 reposts · 1 like · 761 views
Daniel @danielisdizzy
As Larry Ellison explains, there are two types of AI models: real-time models, also called AI on the edge, where you need ultra-low latency decisions (think autonomous driving or robotics), and models that don’t need an immediate response, like ChatGPT, where the system has time to reason.

Real-time models require local compute inside the device, and these chips have nothing to do with the GPUs running in data centers. The AI-on-the-edge market is still largely untapped — and it’s going to be a multi-trillion-dollar opportunity. I believe $AMD will be one of the biggest beneficiaries.
98 replies · 153 reposts · 965 likes · 151.7K views
Matthew A. Mattson, Esq. @mrmmattson
@JonhernandezIA If a show is on TV and no one is watching, does the show still air? We may be in a simulation, but that doesn’t mean we are being watched. It does make me wonder, though: what comes after AI? What is the next higher level of abstraction?
0 replies · 0 reposts · 0 likes · 37 views
Jon Hernandez @JonhernandezIA
📁 Elon Musk has a theory for predicting the future: the most interesting outcome is the most likely one. If the simulation theory is true, it makes sense that any civilization running millions of possible futures would only keep watching the ones that are not boring. If this also applies to our reality, maybe we are still on air because we make a good Netflix series for some alien audience.
15 replies · 7 reposts · 37 likes · 3.5K views
Matthew A. Mattson, Esq. @mrmmattson
Domain memory is a great advancement in AI. But I think it is also important to understand how memory works within the context window. Content at the beginning and end of the context window tends to influence the output the most; material in the middle often gets forgotten or weighted less.
0 replies · 0 reposts · 1 like · 784 views
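The "lost in the middle" effect described in the post suggests a simple assembly rule: put standing instructions first and the concrete task last, with bulk reference material in between. A minimal sketch — the function and strings here are illustrative, not tied to any particular model API:

```python
def assemble_prompt(system_rules, documents, task):
    """Place the most important content at the edges of the context:
    instructions first, the concrete task last, bulk reference in the
    middle (where attention is weakest)."""
    parts = [system_rules]       # beginning: high influence on output
    parts.extend(documents)      # middle: weighted less reliably
    parts.append(task)           # end: high influence on output
    return "\n\n".join(parts)

prompt = assemble_prompt(
    "You are a careful reviewer. Follow the checklist exactly.",
    ["[doc 1: long reference material]", "[doc 2: long reference material]"],
    "Task: summarize the two documents against the checklist.",
)
print(prompt)
```

The point is not the string concatenation but the ordering: burying the task statement between two long documents is exactly the layout this heuristic says to avoid.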
ℏεsam @Hesamation
he’s right. so is Anthropic. 90% of building agents that actually work is the memory. not the model, not the framework, not the MCPs, but the context of the agent's understanding of:
> what it’s capable of
> what’s the goal and requirements
> the failures of past experience

the context can make an agent a 6 yo dumb kid, or a disciplined engineer. the key? you must have “domain memory”: a sense of its specialization and task-specific memory. not just the session memory, but a persistent long-term memory of the most critical insights that will be important in the future.

this memory that we call “workflow memory” at @CamelAIOrg is something i’ve been working on for a while, and also a part of my master’s thesis. it’s very simple to set up (but intricate) and pays off very well. even if you have an internal agentic setup, implementing this long-term memory is not complex, and doesn’t need external apis either (though there are options).
39 replies · 129 reposts · 1.8K likes · 85.2K views
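As a hedged illustration of what such persistent "domain memory" might look like — a hypothetical sketch, not CamelAI's actual workflow-memory API: durable insights live in a small file that outlives the session, get loaded as context at startup, and are appended as the agent learns:

```python
import json
from pathlib import Path

class WorkflowMemory:
    """Hypothetical sketch of persistent long-term memory for an agent.
    All names here are illustrative, not any library's real API."""

    def __init__(self, path):
        self.path = Path(path)
        # Load lessons persisted by earlier sessions, if any exist.
        self.insights = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, insight):
        # De-duplicate so the same lesson isn't stored twice.
        if insight not in self.insights:
            self.insights.append(insight)
            self.path.write_text(json.dumps(self.insights, indent=2))

    def as_context(self):
        # Render memory as a prompt preamble for the next session.
        return "Known lessons:\n" + "\n".join(f"- {i}" for i in self.insights)

store = Path("/tmp/agent_memory_demo.json")
store.unlink(missing_ok=True)        # start fresh for the demo
mem = WorkflowMemory(store)
mem.remember("API X rate-limits at 10 req/s; batch requests.")
print(mem.as_context())
```

Exactly as the post says: nothing here needs an external API — a file and a discipline for what counts as a durable insight is enough to start.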
Matthew A. Mattson, Esq. @mrmmattson
@corbin_braun Yep, while people argue over whether vibe coding is real engineering, we are making money. Couldn’t agree more.
1 reply · 0 reposts · 0 likes · 220 views
corbin @corbin_braun
"Vibe coding isn't real engineering." Cool. I just did a 2 hour CDN overhaul that cut bandwidth by 99%, dropped costs from $12K to $200/month at scale, and required zero backend changes. While you're gatekeeping, we're shipping. Stay skeptical. It's a competitive advantage for us. W @Cloudflare
293 replies · 192 reposts · 3.3K likes · 248.8K views