Tim Adelmann

587 posts

@tiGGu

#Digital #Native & #Innovation Enthusiast. #Design Thinker. Software Developer. Keynote Speaker. CTO @ MEETYOO

Berlin, Germany · Joined December 2009
600 Following · 152 Followers
Tim Adelmann@tiGGu·
Can an LLM in an agentic application decide by itself to summarize the context to save context? Can an agent do context engineering on its own?
0 replies · 0 reposts · 0 likes · 6 views
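It can, if the host exposes compression to the loop. A minimal sketch (all names hypothetical; no real agent framework or LLM API assumed): the loop estimates context size each turn and, past a budget, folds older messages into one summary message.

```typescript
// Hypothetical sketch of an agent loop doing its own context engineering:
// when the estimated context exceeds a budget, older messages are folded
// into a single summary message. `summarize` is a stub standing in for an
// LLM call.
interface Message { role: string; content: string }

// Rough token estimate: ~4 characters per token.
const estimateTokens = (msgs: Message[]): number =>
  msgs.reduce((n, m) => n + Math.ceil(m.content.length / 4), 0);

function summarize(msgs: Message[]): Message {
  // Stub: a real agent would ask the model for this summary.
  return { role: "system", content: `Summary of ${msgs.length} earlier messages.` };
}

function compactContext(msgs: Message[], budget: number, keepRecent: number): Message[] {
  if (estimateTokens(msgs) <= budget || msgs.length <= keepRecent) return msgs;
  const old = msgs.slice(0, msgs.length - keepRecent);
  const recent = msgs.slice(msgs.length - keepRecent);
  return [summarize(old), ...recent];
}

const history: Message[] = Array.from({ length: 10 }, (_, i) => ({
  role: "user",
  content: `message ${i} `.repeat(20),
}));
const compacted = compactContext(history, 100, 2);
```

Whether the model decides (via a summarize tool it can call) or the loop decides (via a heuristic like this) is exactly the design choice the question raises; frameworks differ on it.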
Tim Adelmann@tiGGu·
@MikeRyanDev Interesting. I've built my own compiler for streamed Vue components with loading props, slots, etc., plus text replacements and a bunch of other features. I see hashbrown is React/Angular only?
0 replies · 0 reposts · 0 likes · 60 views
Tim Adelmann@tiGGu·
@standupmaths please make an awesome video to explain this!
Rohan Paul@rohanpaul_ai

Another AI math landmark: GPT 5.2 Pro solved Erdős Problem #397 as well, and it was accepted by Terence Tao. AI's progress on math has such huge implications: mathematics is the shared substrate for modeling and computation across most sciences, so removing mathematical bottlenecks will multiply impact across many fields at once. When AI solves core math problems, everything built on top speeds up.

0 replies · 0 reposts · 0 likes · 13 views
Tim Adelmann@tiGGu·
@DennisAdriaans "Back in the day" destructuring props made them lose reactivity… not sure if this was an exclusive Vue 2 thing…
1 reply · 0 reposts · 0 likes · 170 views
Dennis Adriaansen ⚡️@DennisAdriaans·
getting used to it; I'm finally destructuring my props with defaults
[image attached]
5 replies · 1 repost · 62 likes · 5K views
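The underlying mechanics can be shown without Vue at all: reactivity tracks reads through a proxy, and destructuring copies the current value out of the proxy, so the copy goes stale. A minimal sketch (this `reactive` is a bare stand-in, not Vue's):

```typescript
// Why destructuring loses reactivity: a reactive object intercepts reads
// through a Proxy, but destructuring copies the value out once, so later
// writes are invisible to the copy.
function reactive<T extends object>(target: T): T {
  return new Proxy(target, {
    get(obj, key, receiver) {
      return Reflect.get(obj, key, receiver); // dependency tracking would hook in here
    },
  });
}

const props = reactive({ count: 0 });
const { count } = props;   // plain number, a snapshot of 0
props.count = 5;

console.log(count);        // still 0: the destructured copy never updates
console.log(props.count);  // 5: reads through the proxy see the write
```

Vue 3.5's reactive props destructure works around this by compiling destructured names back into `props.x` accesses, which is why the pattern is considered safe there now.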
Tim Adelmann reposted
Tim Adelmann@tiGGu·
I feel like MCP needs some metadata for proper context engineering. E.g., context masking (removing tool outputs after some turns to free context) would require some information about which tools can or should be masked. How should this be done in the current spec?
0 replies · 0 reposts · 0 likes · 9 views
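The current spec doesn't carry this; tool annotations such as `readOnlyHint` are the closest existing hook. A hypothetical sketch of what the tweet asks for, with `maskable` as an invented per-tool hint and the host doing the sweep:

```typescript
// Hypothetical: the MCP spec defines no masking metadata today. Suppose a
// tool could advertise whether its outputs may be dropped, and the host
// swept stale outputs out of the context.
interface ToolTurn {
  tool: string;
  turn: number;      // turn index when the output was produced
  output: string;
  maskable: boolean; // invented per-tool hint, not part of MCP
}

// Replace maskable tool outputs older than `maxAge` turns with a stub,
// freeing context while keeping the conversation shape intact.
function maskStaleOutputs(history: ToolTurn[], currentTurn: number, maxAge: number): ToolTurn[] {
  return history.map((t) =>
    t.maskable && currentTurn - t.turn > maxAge
      ? { ...t, output: `[output of ${t.tool} masked to save context]` }
      : t
  );
}

const history: ToolTurn[] = [
  { tool: "web_search", turn: 1, output: "long search results", maskable: true },
  { tool: "create_invoice", turn: 2, output: "invoice #42", maskable: false },
];
const masked = maskStaleOutputs(history, 6, 3);
```

A real version would also need hosts to agree on honoring the hint across model turns; nothing in the protocol enforces that today.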
Tim Adelmann@tiGGu·
@0xDevShah $20B also seems cheap for Meta or Microsoft; don't know about Anthropic though. I wondered why nobody picked this off the market: they seem to be able to make any LLM much faster!
0 replies · 0 reposts · 0 likes · 51 views
Dev Shah@0xDevShah·
I think Sama missed this one. $20B is cheap for super fast inferencing.
Dev Shah@0xDevShah

Nvidia paid 3X Groq's September valuation to acquire it. This is strategically nuclear. Every AI lab was GPU-dependent, creating massive concentration risk. Google broke free with TPUs for internal use, proving the "Nvidia or nothing" narrative was false. This didn't just demonstrate technical feasibility, it revealed that Nvidia's moat was shallower than markets believed. When a hyperscaler successfully builds custom silicon, every sophisticated buyer starts running "should we build our own?" calculations. This shrinks Nvidia's TAM.

Jonathan Ross (Groq's founder) is the inventor of the TPU. He understood the architectural principles that made non-GPU AI acceleration viable. His LPU architecture targeted inference workloads, where GPUs are actually over-engineered. This matters because inference is where the real money is long-term. Training is one-time capex, but inference is recurring opex that scales with usage. If Groq proved LPUs could hit competitive price-performance on inference, every cloud provider would white-label their architecture. Nvidia would get squeezed into "just training" while losing the annuity stream.

It is safe to see this deal as Nvidia insuring against Groq enabling an entire ecosystem of Nvidia alternatives. But what is more interesting is the second-order effect: the customer lock-in. Now Nvidia owns both the incumbent standard (CUDA + GPUs) and the most credible alternative architecture (LPUs). This is MSFT-buying-GitHub-level strategic. Any AI lab evaluating "build vs. buy vs. alternative vendor" now faces:
- Option A (Nvidia GPUs)
- Option B (Nvidia <> Groq LPUs)
- Option C (start from scratch)

Turning a competitive threat into a customer segmentation tool: Jensen is the master of trades. They can now price-discriminate: premium customers pay for GPUs, price-sensitive inference gets funneled to LPUs, and Nvidia captures both. If Nvidia doesn't integrate LPUs into its roadmap, this was a pure defensive play. If they do integrate them and start offering "GPU for training, LPU for inference" bundles, this becomes a textbook moat-widening acquisition. The most expensive thing in technology isn't building the future, it's preventing someone else from building a future without you.

19 replies · 9 reposts · 124 likes · 12K views
yuto@yutozxx·
Don't use Google. Let's see what you got.
[image attached]
7.5K replies · 270 reposts · 6.6K likes · 1.6M views
Tim Adelmann@tiGGu·
@YBenlemlih @typescript @mattpocockuk For me, having both interfaces and types, which are basically the same, is a major design flaw of TypeScript. In other languages interfaces have a clear role: they define behavior and typically end in -able, like "Serializable". I personally always use types, never interfaces.
0 replies · 0 reposts · 1 like · 189 views
Josef Bender@josefbender_·
Which is better? @typescript types or interfaces? I personally prefer types, but I am sure @mattpocockuk has his own opinion here!
95 replies · 25 reposts · 678 likes · 175.9K views
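For plain object shapes the two are interchangeable, which is the redundancy being complained about; the differences live at the edges. A small sketch:

```typescript
// An object shape written both ways: structurally identical.
interface SerializableI { serialize(): string }
type SerializableT = { serialize(): string };

// One value satisfies both, so either name works at use sites.
const point: SerializableI & SerializableT = {
  serialize: () => JSON.stringify({ x: 1, y: 2 }),
};

// Only `type` can express unions, tuples, and mapped/conditional types:
type Id = string | number;
const id: Id = "a1";

// Only `interface` supports declaration merging (two declarations of the
// same name are combined), a feature some codebases deliberately avoid.

const json = point.serialize();
```

The merging behavior is the usual argument on each side: library authors like that consumers can augment an interface, while "types only" codebases prefer that a name means exactly one thing.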
Tim Adelmann@tiGGu·
@tekbog Jakob‘s law: „Users spend most of their time on other sites. This means that users prefer your site to work the same way as all the other sites they already know.“ lawsofux.com/jakobs-law/
0 replies · 0 reposts · 2 likes · 68 views
Tim Adelmann@tiGGu·
My year in code. @cursor_ai There are a couple of days left in 2025; I'm confident I will reach 1T by the end of 2025! The most amazing thing this year was the rate of improvement of models and the IDE. Thank you!
[image attached]
0 replies · 0 reposts · 0 likes · 6 views
Tim Adelmann@tiGGu·
Hot take: the only good reason to use an SLM is if you're running on-device. For cloud/enterprise, the future isn't "small models", it's smart routing (the true achievement of GPT-5). Don't optimize by shrinking your intelligence; optimize by routing your traffic.
0 replies · 0 reposts · 0 likes · 8 views
Tim Adelmann@tiGGu·
Two massive deals: OpenAI's $1B Disney IP grab for Sora, plus Netflix's Warner Bros bid. Why? AI video abundance is here and attention is scarce; IP is the ultimate moat for beloved slop (Star Wars floods incoming). Netflix is fighting TikTok/IG too, not just Prime.
0 replies · 0 reposts · 0 likes · 48 views
Tim Adelmann@tiGGu·
I always thought putting two of the same icon next to each other is bad UX. Well, I'm definitely confused; I always press the wrong (X)… liquid (gl)ass.
[image attached]
0 replies · 0 reposts · 0 likes · 51 views
Tim Adelmann@tiGGu·
AI agents need data, but you can't let them run wild in your production database. Our solution: A secure, in-flight sandbox. The agent can now explore, analyze, and visualize customer data autonomously and safely.
0 replies · 0 reposts · 0 likes · 15 views
Tim Adelmann@tiGGu·
CSS anchor positioning is around the corner. Finally, one of the most anticipated APIs on the web. I guess @jaffathecake is to be thanked for the Firefox support.
[image attached]
1 reply · 0 reposts · 1 like · 1.6K views
Tim Adelmann@tiGGu·
The biggest myth in AI right now? That one "super model" has all the answers. We tested 17 LLMs on a video analysis task. They didn't agree. Instead, they formed "opinion clusters," each finding different highlights.
1 reply · 0 reposts · 0 likes · 17 views