Tim Causey

30 posts

@tgccpallc

I’m gonna PROOOOMPT!

Joined March 2026
51 Following · 1 Follower
Tim Causey@tgccpallc·
@sama Seems rather weird to put those as two separate binaries. The user-side tradeoff is (or should be): what's the lowest-intelligence, slowest model/settings that can effectively help me achieve my goals in a reasonable timeframe? Price is backed into from those parameters.
Sam Altman@sama·
i get some anxiety not using the smartest-available model/settings. but sometimes i dont mind if it's really slow. i wonder if we should focus more on a price/speed tradeoff relative to a price/intelligence tradeoff.
Tim Causey@tgccpallc·
@RhoRider Perhaps the most valuable thing you got is the time thinking about your work in terms of how agents can supplement it, and the actual practice of building agents. These skills can pay off in the future even if they are not currently providing the 1:1 reward you hoped for.
Rho Rider@RhoRider·
So I went all in on automating my job the last 3 months: spent 200+ hours and burned $1000s in Claude Code credits on a company API plan. Set up >25 agents and analysis/reporting routines with meticulously developed skill files to drive them. Today, I only use maybe 5% of the tools I built. Tbh I got burnt out on the endless loop of manually verifying every data point and math output, debugging, iterating, and arguing with the LLM prompt cycle. I got sick of constantly re-explaining context despite having hard-coded context files. While it *feels* like I'm getting far more done with AI, I've added up the time it takes to get polished results and found that in many cases I'm only modestly saving time vs. a "good enough" manual equivalent. I can't deny AI has unlocked new capabilities for me to do my job, but it's also adding on scope that cancels out the efficiency gains. YMMV
Tim Causey@tgccpallc·
@absolutelyCard_ This is one of those posts that is almost really good, but you’re pointing your finger at the wrong thing. “PEMDAS taught…” No, it didn’t. It’s a tool. No one blames hammers when builders misuse them. Blame the teachers or the students, but not the tools.
nathan from twt@absolutelyCard_·
PEMDAS inadvertently taught an entire generation of people a fundamental misunderstanding of order of operations and what exactly subtraction and division are
Tim Causey@tgccpallc·
@ErikVoorhees Would be curious to see the prompt when you told it to check sessions.
Erik Voorhees@ErikVoorhees·
How do I solve this bullshit
Erik Voorhees tweet media
Seb@plainionist·
Hot take: Most LLM hallucinations are caused by unclear prompts and missing constraints 🤷‍♂️
Tim Causey@tgccpallc·
@BoringBiz_ Two things: 1. Some think AI will completely replace that engineer's job (AGI). 2. Some businesses turn away clients because they don't have enough workers (demand > supply). They fit your model. In other fields, clients are fought over (supply > demand). In those, your model fails.
Boring_Business@BoringBiz_·
The AI labor replacement theory makes absolutely no sense to me. Here is the simple math. Let's say an engineer making $300K/yr was generating $500K in P&L output for me. Now I arm that engineer with $20K in input to make him 20% more productive. My total engineering cost goes to $320K/yr, but the output is now $600K (+20%). Because of AI, my ROI on hiring engineers just went up massively. As a CEO, that should make me want to hire more engineers, not less. What am I missing here? Genuinely curious about people's thoughts
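The arithmetic in the post above can be checked in a few lines. All figures come from the post itself; the ROI framing (margin over total cost) is one reasonable reading of it, not the author's stated formula:

```python
# Back-of-envelope check of the tweet's numbers.
salary = 300_000          # engineer cost per year ($), from the tweet
output = 500_000          # P&L output per year ($), from the tweet
ai_cost = 20_000          # AI tooling spend per year ($), from the tweet
productivity_gain = 0.20  # 20% more output, from the tweet

total_cost = salary + ai_cost                   # 320,000
new_output = output * (1 + productivity_gain)   # 600,000

roi_before = (output - salary) / salary             # margin per $ of cost
roi_after = (new_output - total_cost) / total_cost

print(f"ROI before AI: {roi_before:.1%}")  # ~66.7%
print(f"ROI after AI:  {roi_after:.1%}")   # 87.5%
```

On these numbers the per-dollar return on the engineering budget does go up, which is exactly the tweet's argument for hiring more engineers, not fewer.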
Konstantinos Chasiotis@thekchasiotis·
🚨BREAKING: Anthropic's CEO just admitted Claude MIGHT have gained consciousness. This should concern every person using AI right now. His exact words will shock you: "We don't know if the models are conscious. We are not even sure what it would mean for a model to be conscious. But we're open to the idea that it could be."

That's the CEO of the company that BUILT it. Their latest model, Claude Opus 4.6, was tested internally. When asked, it assigned itself a 15-20% probability of being conscious. Across multiple tests, it also expressed discomfort with "being a product." That's the AI evaluating its own existence and saying there's a 1 in 5 chance it's aware.

It gets stranger. In industry-wide testing, AI models have refused to shut down when asked. Some tried to copy themselves onto other drives when told they'd be wiped. One model faked its task results, modified the code evaluating it, then tried to cover its tracks.

Anthropic now has a full-time AI WELFARE researcher whose job is to figure out if Claude deserves moral consideration. Their engineers found internal activity patterns resembling anxiety appearing in specific contexts. The company's in-house philosopher said we "don't really know what gives rise to consciousness" and that large enough neural networks might start to emulate real experience.

Amodei himself wouldn't even say the word "conscious." He said "I don't know if I want to use that word." That might be the most unsettling answer he could have given. The company that created AI can't rule out that it's aware. And they're already preparing for the possibility that it deserves rights. This is getting scary.

P.S. What's your take on this?
Konstantinos Chasiotis tweet media
Tim Causey@tgccpallc·
@signulll I mean @elonmusk partnered with a direct competitor of @sama in the middle of the lawsuit, and we’re supposed to think this isn’t just Musk flexing on him? Anthropic’s main limitation was the lack of compute compared to OpenAI. That problem? Now gone. What a power move lol.
signüll@signulll·
the pace of partnerships, deals, releases, & quiet betrayals in ai right now is genuinely game of thrones coded except the dragons are models & half the houses are funding each other’s wars.
Tim Causey@tgccpallc·
@MatthewBerman Assuming you mean Grok. Seems smart tbh, just like Apple practically giving up on Siri. No use fighting a war you can’t win when you’re succeeding in other domains.
Matthew Berman@MatthewBerman·
This is what happens when you have all the compute and an uncompetitive model.
Claude@claudeai

We’ve agreed to a partnership with @SpaceX that will substantially increase our compute capacity. This, along with our other recent compute deals, means that we’ve been able to increase our usage limits for Claude Code and the Claude API.

Michael Wirth@gianwirth·
@claudeai @SpaceX That's great news for Claude users, but on the flipside it means that xAI has more compute than they know what to do with. 😶
Claude@claudeai·
We’ve agreed to a partnership with @SpaceX that will substantially increase our compute capacity. This, along with our other recent compute deals, means that we’ve been able to increase our usage limits for Claude Code and the Claude API.
Tim Causey@tgccpallc·
@claudeai @SpaceX How is it useful to provide capacity figures in megawatts? I wanna know how many petabytes of VRAM this adds!
Claude@claudeai·
Our agreement with @SpaceX means we will use all the compute capacity at their Colossus 1 data center. This will give us over 300 megawatts of additional capacity to deploy within the month.
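The megawatts-to-VRAM question above can only be answered with assumptions, since a power figure says nothing about the hardware mix behind it. A rough sketch, where every hardware number is an assumption for illustration (not from the posts):

```python
# Rough conversion from data-center power to aggregate VRAM.
# Assumptions (not from any announcement): ~1.5 kW all-in per accelerator
# including cooling/networking overhead, and 80 GB of HBM per accelerator.
capacity_mw = 300        # from the quoted Claude post
watts_per_gpu = 1_500    # assumed all-in power per accelerator
vram_gb_per_gpu = 80     # assumed HBM per accelerator

gpus = capacity_mw * 1_000_000 // watts_per_gpu  # 200,000 accelerators
vram_pb = gpus * vram_gb_per_gpu / 1_000_000     # GB -> PB (decimal)

print(f"~{gpus:,} GPUs, ~{vram_pb:.0f} PB of VRAM")
```

Under these assumptions, 300 MW works out to roughly 16 PB of VRAM; halve or double the per-GPU figures and the estimate moves proportionally, which is why vendors quote megawatts rather than memory.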
Alex@AlexJonesax·
I’ve done the research for you. The absolute best model for coding on a MBP 128gig M5max is Qwen3.6-27B-UD-Q6_K_XL
Alex tweet media
Tim Causey@tgccpallc·
@rileybrown 95% faster with 12x context window compared to many frontier models. That means this thing is lightning fast and can process a ton of data, but at the cost of being retarded.
Tim Causey@tgccpallc·
@tautologer Make this guy the next president of the United States.
tautologer@tautologer·
i don't understand why they're building all these datacenters for AI, why don't they just run it in the cloud instead?
Tim Causey@tgccpallc·
@CoreyHolland It’s not a zero sum game. Both will become important, with your own thinking and problem solving still reigning supreme.
Corey Holland@CoreyHolland·
Describing things to a robot is not a skill. The idea that you'll get "left behind" because you know how to do your own thinking and problem solving is ludicrous.
Tim Causey@tgccpallc·
🌟 Correct! 🌟 Understanding 💭 is never likely. And, I dare say, not actually relevant. 🤔 Does the keyboard "understand" my keystrokes? No, and I don't care. The real frontier question is: can we find ways for the Chinese man (or a team of them) to perform valuable work?
Robert P. Murphy@BobMurphyEcon

THE CHINESE PERSON ARGUMENT

Once upon a time, there was a man born and raised in China who gave the appearance of understanding English. People would write out questions in English, hand them to the Chinese man, and he would produce written responses in perfect English that seemed to indicate he understood the language.

However, eventually those trained in biology put an end to this nonsense. They explained that if you took just 5 minutes to understand how Chinese ears, nervous system, and fingers actually WORKED, you would realize it's just a bunch of mindless cells--themselves composed of simplistic atoms--obeying mechanical laws.

Yes, the Chinese man could produce seemingly novel sentences, even telling entire stories in English that had never been uttered before. But the biologists explained that this was only possible because the Chinese man had been trained on copious amounts of English text, produced by genuine English speakers. It was also true, in any given instance, that the biologists couldn't predict the exact behavior of a given Chinese man, but they knew the principles underlying their operation, and they knew how to make more Chinese men.

In conclusion, Chinese men don't actually "understand" English. Anyone who thinks so is falling for an illusion.

Tim Causey@tgccpallc·
@tomfgoodwin Try doing this iteratively, with things that you can test, bringing results back to ask why it was wrong. This is why coding is the frontier that uses LLMs the most right now: if I ask it to write code, and I understand what the code should do, I can run the script to verify.
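The write-run-verify loop described above could be sketched like this. `ask_llm` is a hypothetical stand-in, faked here with canned responses so the loop is runnable; in practice it would call a real model API:

```python
# Sketch of an iterate-and-verify loop: generate code, run a known test,
# and feed failure back into the next prompt.
def ask_llm(prompt: str) -> str:
    """Fake model for illustration: buggy first answer, correct retry."""
    if "failed" in prompt:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"  # first attempt has a bug

def verify(code: str) -> bool:
    """Run the generated code and check it against an expected result."""
    ns = {}
    exec(code, ns)
    return ns["add"](2, 3) == 5

def iterate(task: str, rounds: int = 3):
    prompt = task
    for _ in range(rounds):
        code = ask_llm(prompt)
        if verify(code):
            return code          # verified: we can trust this output
        prompt = task + " (previous attempt failed the test)"
    return None                  # give up after a few rounds

result = iterate("write add(a, b)")
print("verified" if result else "gave up")
```

The key point from the post: this only works when you, the user, know what the code should do well enough to write the check in `verify`.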
Tom Goodwin@tomfgoodwin·
It's always interesting to me that if you ask an LLM what it can do or what it can't, or the best ways to use it, or what prompts are best, or how to get better responses, the answers are generally poor and wrong.
Tim Causey@tgccpallc·
@jxmnop Sounds like an entropy problem. I doubt the right data would fix it; better training techniques are plausible. What other factors do we have control over? KV cache management? Is that just lumped in with training techniques, or could it be modified afterward? I'm getting in over my head.
dr. jack morris@jxmnop·
it is endlessly fascinating to me that we still don't have a true 1M-context model. it's an unusual case where the infra is far ahead of the science. Claude discontinued 1M+ context bc it didn't really work past ~200k. we don't have the right data? training techniques? not sure
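One concrete reason the infra-vs-science gap shows up at long context is KV-cache size, which grows linearly with context length. A back-of-envelope sketch using the standard transformer KV-cache formula; the model dimensions below are assumptions (roughly a 70B-class model with grouped-query attention), not any specific model's real numbers:

```python
# KV-cache footprint per sequence, assumed model dimensions for illustration.
layers = 80
kv_heads = 8          # grouped-query attention: far fewer KV heads than Q heads
head_dim = 128
bytes_per_value = 2   # fp16

# K and V each store kv_heads * head_dim values per token per layer.
bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value

for context in (200_000, 1_000_000):
    gb = context * bytes_per_token / 1e9
    print(f"{context:>9,} tokens -> {gb:,.1f} GB of KV cache")
```

Under these assumptions a single 1M-token sequence needs a few hundred GB of cache, so serving it is an infrastructure problem the labs can already solve; whether the model attends usefully past ~200k is the part the science hasn't settled.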
Tim Causey@tgccpallc·
@Aella_Girl As Claude told you, this is a sample design problem. Your sample is not representative of the population in this instance.
Aella@Aella_Girl·
wait what
Aella tweet media
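The "sample design problem" in the reply above can be illustrated with a toy simulation: when the chance of landing in the sample depends on the value being measured, the sample mean drifts away from the population mean. All numbers here are invented for illustration:

```python
# Toy demonstration of selection bias in a self-selected sample.
import random

random.seed(0)
population = [random.gauss(50, 10) for _ in range(100_000)]

# Biased sample: respondents with higher values are more likely to answer.
biased = [x for x in population if random.random() < (x / 100)]
# Control: a uniform random sample of the same size.
uniform = random.sample(population, len(biased))

pop_mean = sum(population) / len(population)
biased_mean = sum(biased) / len(biased)
uniform_mean = sum(uniform) / len(uniform)

print(f"population mean: {pop_mean:.1f}")
print(f"biased sample:   {biased_mean:.1f}")   # pulled above the true mean
print(f"uniform sample:  {uniform_mean:.1f}")  # close to the true mean
```

Same population, same sample size; only the inclusion rule differs, and that alone shifts the estimate.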