Minty

9.6K posts

Minty

@DeFiMinty

Research @Delphi_Digital | Not Financial Advice | Opinions Are My Own

Rainbow Road · Joined October 2022
1.1K Following · 82.3K Followers
Pinned Tweet
Minty
Minty@DeFiMinty·
I've been obsessed with AI but find it hard to keep up. So I built Open Weights, a dashboard that aggregates AI-related content in one place. Track breaking AI news. Find the cheapest models. See which repos are shipping updates. Get caught up in seconds.
English
38
9
139
13.6K
Minty
Minty@DeFiMinty·
Another thing I’m realizing about AI models now is that they are horrendous at writing. Even with full instructions, I can’t make simple social content anymore without going back and forth correcting stiff prose, weird comma structures, and just plain bad writing. Probably a byproduct of all the frontier companies going all-in on coding.
English
9
0
19
1.7K
Minty
Minty@DeFiMinty·
@ZackD0x Thought I was taking crazy pills this whole time. Then you move and realize damn it really did change.
English
1
0
1
313
Minty
Minty@DeFiMinty·
Pretty insane how much better the $100 Codex plan feels compared to the $200 Claude max plan. You only realize how bad Opus has gotten after you leave.
English
5
1
52
5.8K
Minty
Minty@DeFiMinty·
Best change made in a while. I’ve lost the passion to write on here because the algo has been prioritizing clickbait, misleading news, and controversy rather than actual discussion and analysis. There’s a gazillion other places I could go for that. X was a real town hall, a place where you could actually get interesting analysis and have real discussions, and now that’s just gone.
Nikita Bier@nikitabier

All aggregators had their payouts reduced to 60% this cycle. We will add another 20% deduction in the next cycle. It became abundantly clear: flooding the timeline with 100 stolen reposts and clickbait every day crowded out real creators and hurt new author growth. The next step is to assign a permanent deduction to habitual bait posters who use “🚨BREAKING” on every post. X will never infringe on speech or reach—but we will not compensate for manipulation of the program or our users.

English
5
1
33
3.1K
Minty
Minty@DeFiMinty·
Think Zcash is the one alt narrative that stands out to me.
• One of the best positioned projects for the quantum era
• Privacy becoming a commodity as capital controls get tighter
• Cult-like community
• Shares similar characteristics with BTC at around 0.5% of the valuation
I don’t think Zcash will flip BTC but it has all the qualities of a proper narrative.
English
5
0
17
1.3K
Minty
Minty@DeFiMinty·
AI will be an existential crisis for DeFi. Security is going to be one of the biggest pain points over the next few years. Companies will try to prepare against these threats, but this will always favor the attacking side over the defending side imo. AI and society are not ready for all the new attack vectors this has opened up. (We saw how many supply chain attacks last month?) And the place that makes the most financial sense to attack is onchain. You might say this affects TradFi too, but institutions have to hold some accountability and liability. Only in DeFi will people just accept bad security practices and go “Oh well, you lost your money. Sorry.” People shouldn’t need to just trust teams at their word to secure their money.
Anthropic@AnthropicAI

Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing

English
3
2
18
1.6K
Minty
Minty@DeFiMinty·
LlamaAI is the best AI tool I have used for crypto research and it’s not even close. One of the only ones I can see myself using regularly over Claude or GPT. GG @0xngmi
English
9
1
38
3.4K
Minty
Minty@DeFiMinty·
Am I the only one who thinks that programmers will become more important over time with AI? Companies are running into issues with messy codebases and security, and there is more demand for new features because competition is growing. The past few months have already shown that just because you can make something with AI doesn’t mean people will use it. AI isn’t ready for production-grade codebases yet. The nature of the job will naturally change, but with deskilling and more demand, the people who understand the systems will only become more important imo.
Bearly AI@bearlyai

WSJ with a profile that will become more common: a 22-year-old computer science major dropped out of college to become an electrician (he wanted to AI-proof himself due to progress in AI coding agents). Since 2020, enrollment at trade school community colleges in America is up 20%.

English
4
0
14
2.6K
Minty
Minty@DeFiMinty·
The AI playbook is to spread the models far and wide at a loss and force people to use them. Once people get hooked and can no longer work without them, they nerf the performance, jack up the costs, and sell it at a premium to companies. Claude is just the first. Opus 4.6 performs like garbage now. People complaining about limits are being gaslit. There’s an outage every other week. Eventually OpenAI will probably go down the same route. Have fun while it lasts.
English
5
0
24
2K
Minty
Minty@DeFiMinty·
What’s the point of these Claude upgrades if they devolve into inferior versions of themselves over time anyways?
English
7
0
14
1.6K
Minty
Minty@DeFiMinty·
Contrary to my username, I haven’t locked any money in DeFi for years, and I’m probably not the only one. DeFi tries to be a decentralized financial system, which sounds cool in theory, but in reality users take a lot of risk for little benefit. And the scary part is what happened to Drift could theoretically happen to the giants right now. Could you imagine a protocol like Hyperliquid being hacked? It would be a death knell for the space. People will say it’s impossible but I’ve heard the same rhetoric before other major blowups over the years. You are basically putting your trust in founders to never mess up, which is insane to me. Hackers have all the time in the world to look for exploits and wait for their chance. AI has only made this worse, giving them more capabilities to scan for attack vectors. It only takes a single lapse of judgment or a tiny error somewhere for everything to be gone.
English
15
4
35
3.2K
Minty
Minty@DeFiMinty·
This gives me bubble vibes I can’t quite shake off. OpenAI’s recent raise puts it at around 30x run-rate revenue. Even Nvidia trades at around 15-20x revenue while being profitable and having much better gross margins. Anthropic trades at a cheaper level while growing faster with a clearer path to profitability. Part of this is due to the circular spending occurring between companies. Companies like Amazon and Microsoft invest in OpenAI knowing that OpenAI will spend it back through cloud credits over time. Deals like these raise the valuation even though less hard cash is being committed. Even with the raise, OpenAI only has around 2 years of runway at the current burn rate, which is probably why they are looking to IPO as soon as possible. This is a cash injection that they need, and it gives them the ability to borrow more to continue sustaining the spend. Essentially, everything needs to work out to justify IPO valuations. They need to win back a big chunk of enterprise market share, which they have been losing to Anthropic. Margins need to grow again. The ad business needs to skyrocket. I don’t know man.
Polymarket@Polymarket

JUST IN: OpenAI shares have reportedly “fallen out of favor” on the secondary market, as investors are pivoting to Anthropic.

English
9
1
27
3.3K
Minty
Minty@DeFiMinty·
Think there are two different issues at play here. AI is great at creating personalized learning curriculums for you. It’s a great tutor that can scaffold skills and meet you where you are. But the issue is that kids will mostly use it as a shortcut to get the desired results. They outsource their reasoning skills to AI and this becomes a major crutch. This is something you can mitigate in a classroom setting but it’s not like you can stop them from using AI at home. The trend so far has been to go back to pencil and paper for assessments, but I wonder if there is a better way like AI assisted project based learning that shows mastery instead of going back to the old days.
Anand Sanwal@asanwal

Wharton researchers gave nearly 1,000 high school math students access to ChatGPT during practice problems. Result: ChatGPT is the perfect trap. Look at the red bars. Students with ChatGPT crushed their practice sessions. The basic ChatGPT group solved more problems, and those on the "tutor" version did even more. Now look at the gray bars. That's the exam. No AI allowed. The ChatGPT group scored 17% worse than kids who practiced with zero technology. And the fancy tutor version? No better than working alone. The researchers called AI a "crutch." When they analyzed what students actually typed into ChatGPT, most of them just wrote: “What’s the answer?” The kicker: students who used ChatGPT believed it hadn't hurt their learning. They were confidently wrong. This is the AI trap in education. Outsourcing your thinking. Of course, lots of half-baked AI literacy curricula are being rolled out in schools now. Let’s of course ignore that basic literacy (the ability to read) is possible for <50% of 8th graders. Source: Bastani et al. (2025), "Generative AI Can Harm Learning," PNAS

English
2
0
9
2K
Minty
Minty@DeFiMinty·
From personal use, most models do seem to always try to consider both sides in an attempt to sound reasonable. The only thing is they do this even if one side is inherently weaker, sometimes even giving straight-up inaccurate information to avoid conflict.
Stefan Schubert@StefanFSchubert

While social media is polarising, evidence suggests AI may nudge people towards the centre. This holds true of all studied models. Grok is more right-leaning than other models, but also has depolarising effects. By @jburnmurdoch.

English
4
0
13
1.9K
Minty
Minty@DeFiMinty·
Slowly but surely, trust in the US is eroding. US companies have traded at a premium for decades, being at the forefront of the tech boom and having the benefit of the world reserve currency. A lot of those advantages are fading. Dollar dominance has been in steady decline. Top companies are now leveraging up for datacenter development. Policy-driven volatility has made it harder to model risk. If I were a fund, how am I supposed to price in risk when a tweet will move the market in seconds? You might fool me a few times, but eventually trust fades and people stop believing anything you say. I don’t think the US will be overtaken, but it’s not farfetched to see the S&P underperform some of its global counterparts for the next few years. International markets have already posted one of their best years in 2025 while still trading below US valuations. The global order may never go back to what it was in the past.
English
5
2
36
2.7K
Minty
Minty@DeFiMinty·
Be Apple
> See competitors in an infrastructure arms race
> Realize distribution matters more than compute
> Have over 2B active users
> Best-in-class silicon capabilities for local AI
> Open up Siri to every model
> Become the App Store for AI models
> Sit on cash while others burn money
Masterclass
English
13
0
15
1.7K
Minty
Minty@DeFiMinty·
@Pantalonia The issue is it affects your reasoning skills too. At least it affected mine.
English
1
0
1
106
Pantalonia
Pantalonia@Pantalonia·
@DeFiMinty If you learned to read and write pre-AI, you have an edge, but only if you maintain it. Thinking is hard, feels burdensome, and AI is happy to take over and do it for you.
English
1
0
0
106
Minty
Minty@DeFiMinty·
I don’t think this is a good idea, at least for personal writing. After using AI to write for many months, I realized I completely forgot how my own voice sounded. I would literally just sit there, write a few sentences, and then stare at the screen because I forgot how to think critically and form my own arguments.

AI has a knack for filling in the blanks and making your writing sound plausible. I would edit extensively and go back and forth so it sounded like me. But in the end, no matter how I edited, it didn’t sound like me. And when I tried to form my own thesis about things, I just struggled.

When you write manually, you have to think more about what makes sense and what doesn’t. When you explain something, you stop and realize things you didn’t think about before. When you stop using that muscle, you forget how to think for yourself.

I still use AI to help research and learn more about topics before I start writing, but it’s crucial to know how to write for yourself. If you can’t write your argument clearly by yourself, it’s probably not a strong argument in the first place.
Max Zeff@ZeffMax

New: AI writing is flooding the internet. I talked to a few tech reporters—largely Substackers without the resources of a major newsroom—who are using AI in thoughtful ways to make their scoops, news analysis, and journalism shine.

English
13
5
58
5.1K