Marco Teny

627 posts

Marco Teny

@italiantechguy

Product shipper | Next.js | React | Tailwind | Supabase | NFC & AI. | #buildinpublic

Joined January 2025
298 Following · 78 Followers
Pinned Tweet
Marco Teny
Marco Teny@italiantechguy·
Is this the definitive 2026 Tech Stack? 🚀
🎨 Frontend:
• React 19
• Tailwind v4
• Shadcn UI
⚙️ Backend:
• Next.js Server Actions (Logic)
• Supabase (DB + Auth + RLS)
Full power. Zero server management.
Would you change anything? 👇 #coding #VibeCoding
English
3
0
3
350
Ivan Fioravanti ᯅ
Ivan Fioravanti ᯅ@ivanfioravanti·
I totally agree that Hermes Agent + MiniMax M2.7 is a match made in heaven! 🚀
MiniMax (official)@MiniMax_AI

Capable agents are the result of co-evolution between models and harnesses. We've been working with @NousResearch to ensure that M2.7 x Hermes Agent provides a top-tier experience for users. Hermes’s self-improving loop brings out the best in M2.7 through real usage.

We are also launching MaxHermes, a cloud-hosted and managed version of Hermes in @MiniMaxAgent (no terminal setup, no config). If you’re already running Hermes locally, you can now give your agent a partner in the cloud with MaxHermes.

The path to AGI is shorter with good company. @NousResearch 🤝 @MiniMax_AI

English
3
1
44
3.7K
Marco Teny
Marco Teny@italiantechguy·
@IraninSA Stop mocking Jesus, please. It’s not funny
English
1
0
0
105
Iran Embassy SA
Iran Embassy SA@IraninSA·
Trump the (fake) Jesus.
English
59
560
2.1K
51.1K
Marco Teny
Marco Teny@italiantechguy·
@bcherny @rezoundous To be honest, Boris, Claude needs to get back on track. Too many errors and unclear changes to the business model in recent months. We are slowly losing confidence. We love Claude, but the community will not trade honesty for a model that can be replaced. Friendly advice!!
English
0
0
0
267
Boris Cherny
Boris Cherny@bcherny·
@rezoundous This is now fixed. More in-depth response with technical details here
Boris Cherny@bcherny

👋 1h prompt cache is nuanced actually. It costs more for cache writes, and less for cache reads. Whether you benefit from cheaper cache reads depends on your usage pattern -- context window size, whether the query is the main agent or a subagent, etc.

We have been testing a number of heuristics to give subscribers better prompt cache hit rates, which means lower token usage and lower latency, when it works. But this effect is far from uniform due to the nuance above. Say you use 1h cache for an agent, but only used the agent to make a single query -- in this case the 1h cache would be wasted and you'd be overcharged.

At this point we have rolled out 1h prompt cache by default in a number of places for subscribers to optimize cache duration based on real usage patterns, but we actually keep it at 5m for many queries also (e.g. subagents, which are rarely resumed, so you'd be paying for them even though they do not benefit from 1h). We are also not defaulting API customers to 1h yet -- this needs more testing to make sure it's a net improvement on average.

Separately, when we do this kind of experimentation, we use experiment gates that are cached client-side. When you turn off telemetry we also disable experiment gates -- we do not call home when telemetry is off -- so Claude reads the default value, which is 5m. We will soon be changing the client-side default to 1h for a few queries, since we now feel good that it is a small token savings on average for those queries. We will also give you env vars to force 1h and 5m.

In any case, the token savings is nowhere near 12x unfortunately. It is a small win though, that we have been in the process of rolling out to everyone. Hope the explanation helps. More here: platform.claude.com/docs/en/build-…

English
12
4
343
69.6K
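The tradeoff Boris describes above can be sketched with rough arithmetic. A minimal cost model, assuming illustrative multipliers (base input = 1.0×, 5m cache write = 1.25×, 1h cache write = 2.0×, cache read = 0.1× — these numbers are assumptions for the sketch, not quoted from the thread; check current Anthropic pricing):

```python
# Rough model of prompt-cache cost for one agent session.
# Assumed multipliers (illustrative, not from the thread):
#   5m cache write = 1.25x base input, 1h cache write = 2.0x,
#   cache read = 0.1x.

def cost_5m(prompt_tokens: int, resumes_after_expiry: int) -> float:
    """With a 5m TTL, every resume past the TTL re-writes the whole prompt."""
    return prompt_tokens * 1.25 * (1 + resumes_after_expiry)

def cost_1h(prompt_tokens: int, resumes_after_expiry: int) -> float:
    """With a 1h TTL: one pricier write up front, then cheap cache reads."""
    return prompt_tokens * (2.0 + 0.1 * resumes_after_expiry)

# Single-query agent: the 1h write is wasted (the "overcharged" case).
print(cost_5m(10_000, 0))  # 12500.0 -> 5m is cheaper
print(cost_1h(10_000, 0))  # 20000.0

# Agent resumed 5 times after the 5m TTL: 1h wins, but ~3x, not 12x.
print(cost_5m(10_000, 5))  # 75000.0
print(cost_1h(10_000, 5))  # 25000.0
```

Under these assumed numbers the savings depend entirely on how often a session is resumed after the shorter TTL expires — which matches the point that subagents (rarely resumed) stay on 5m while long-lived main-agent sessions benefit from 1h.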
Tyler
Tyler@rezoundous·
Claude Code reduces cache TTL from 1hr to 5min when you turn off telemetry. Apparently privacy costs us 12 times the tokens.
English
24
5
254
55.9K
Clash Report
Clash Report@clashreport·
BREAKING: Trump: I agree to suspend the bombing and attack of Iran for a period of two weeks. This will be a double sided CEASEFIRE! The reason for doing so is that we have already met and exceeded all Military objectives, and are very far along with a definitive Agreement concerning Longterm PEACE with Iran, and PEACE in the Middle East. We received a 10 point proposal from Iran, and believe it is a workable basis on which to negotiate. Almost all of the various points of past contention have been agreed to between the United States and Iran, but a two week period will allow the Agreement to be finalized and consummated. On behalf of the United States of America, as President, and also representing the Countries of the Middle East, it is an Honor to have this Longterm problem close to resolution.
English
118
311
863
254.6K
Alex Prompter
Alex Prompter@alex_prompter·
Two types of people saw Karpathy's knowledge base post: Those who bookmarked it. Those who built it that same weekend. The second group now has an AI that gets smarter every time they use it. The first group is still scrolling. Here's the full build guide with every prompt:
God of Prompt@godofprompt

x.com/i/article/2040…

English
39
52
612
192.2K
Arena.ai
Arena.ai@arena·
Gemma 4 by @GoogleDeepMind debuts at 3rd and 6th on the open source leaderboard, making it the #1 ranked US open source model. By total parameter count, Gemma 4 31B is 24× smaller than GLM-5 and 34× smaller than Kimi-K2.5-Thinking, delivering comparable performance at a fraction of the footprint.
Arena.ai@arena

Gemma-4-31B is now live in Text Arena - ranking #3 among open models (#27 overall), matching much larger models at 10× smaller scale! A significant jump from Gemma-3-27B (+87 pts).

Highlights:
- #3 open (#27 overall), on par with the best open models Kimi-K2.5, Qwen-3.5-397b
- Top 3 across Math, Instruction Following, Multi-Turn, Hard Prompts, Creative Writing, and Coding
- Apache 2.0 license
- Its efficient variant, Gemma-4-26B-A4B, is #6 open (#39 overall)

Congrats to @GoogleDeepMind on a major step forward for open models!

English
23
90
894
198.5K
Marco Teny
Marco Teny@italiantechguy·
Pio Esposito wouldn't start for any Bosnian team.
Italian
0
0
0
12
Marco Teny
Marco Teny@italiantechguy·
@Frenkie_Woody But he's the only one who put his soul into it, what the hell are you talking about
Italian
0
0
0
17
Frenkie_Woody
Frenkie_Woody@Frenkie_Woody·
Can we really go to the World Cup with Palestra?
Italian
317
8
128
44K
Marco Teny
Marco Teny@italiantechguy·
GRAVINA RESIGN, YOU PIECE OF SHIT
Italian
0
0
0
17
Marco Teny
Marco Teny@italiantechguy·
Throw Inter out. They're compromising a World Cup for us. Bastoni, Pio Esposito, Barella, Dimarco, PIECES OF SHIT
Italian
0
0
1
151
Ugo Quinzi
Ugo Quinzi@QuinziUgo·
Explanations?
Italian
27
5
68
92.7K
Marco Teny
Marco Teny@italiantechguy·
@LottoLabs That's great! Which model would you recommend I use it with?
English
0
0
0
126
Lotto
Lotto@LottoLabs·
@italiantechguy It’s like having different employees geared to different tasks; you can configure a whole setup of different agents, different models, skills, whatever
English
2
0
7
310
Claude
Claude@claudeai·
Computer use is now in Claude Code. Claude can open your apps, click through your UI, and test what it built, right from the CLI. Now in research preview on Pro and Max plans.
English
2.6K
4.8K
59.3K
16M
Marco Teny
Marco Teny@italiantechguy·
@ivanfioravanti @MiniMax_AI TOP! I read that TurboQuant helps a lot with this; honestly, I would like to stay on the Mac line rather than assemble a mini PC with AMD processors, for example.
English
0
0
0
40
Ivan Fioravanti ᯅ
Ivan Fioravanti ᯅ@ivanfioravanti·
Love the fact that my Coding Plan has been migrated to Token Plan on @MiniMax_AI because now I can try other AI services too: image, video and music!
English
5
0
26
2K
Marco Teny
Marco Teny@italiantechguy·
Claude Code is the biggest invention of our century.
English
0
0
0
19