Joanfihu

248 posts

@joanfihu

AI Product Engineer @ Wordsmith & Founder @SpecifiedBy | Independent AI Researcher

Edinburgh, Scotland · Joined February 2011
1.1K Following · 469 Followers
Luke Flyswatter
Luke Flyswatter@Luke_Flyswatter·
@ylecun Americans have their own unique definition of winning I think.
Joanfihu retweeted
Ask Pandi
Ask Pandi@ask_pandi·
Romulus and Remus | The Legendary Founding of Rome | Short Movie
bob
bob@agibyfriday·
@craigzLiszt I doubt it. OAI has a massive head start, incredible retention numbers and unbeatable name recognition. Matching the current generation of ChatGPT won’t be enough to woo existing users
Craig Weiss
Craig Weiss@craigzLiszt·
meta ai might actually win at consumer ai
Boris Cherny
Boris Cherny@bcherny·
This is false. We defaulted to medium as a result of user feedback about Claude using too many tokens. When we made the change, we (1) included it in the changelog and (2) showed a dialog when you opened Claude Code so you could choose to opt out. Literally nothing sneaky about it — this was us addressing user feedback in an obvious and explicit way.
Joanfihu
Joanfihu@joanfihu·
@KentonVarda You have to enable the connectors for this to work
Kenton Varda
Kenton Varda@KentonVarda·
How is Google so far behind on agentic AI? This is the Gemini sidebar *embedded inside gmail*.
Joanfihu
Joanfihu@joanfihu·
@a16z It’s equally hard. The bar is higher now. It’s the same step function compilers, frameworks, etc did.
a16z
a16z@a16z·
Marc Andreessen: Software isn't precious anymore. In this new world, high quality software is infinitely available.

"We've always lived in a world in which software is this precious thing that you have to think about very carefully." "It was really hard to generate good software, and there was only a small number of people who could do it." "Those days are just over." "If you need new software to do X, Y, or Z, you're just going to wave your hand and get it." "Things that used to be hard, or even seem like an insurmountable mountain to get through, all of a sudden, I think, become very easy."

@pmarca with @latentspacepod
Latent.Space@latentspacepod

🆕 Marc Andreessen’s 2026 AI Thesis: Agents, Open Source, and Why This Time Is Different latent.space/p/pmarca

@pmarca of @a16z says AI people keep swinging between utopian and apocalyptic for one simple reason: this field has been “almost here” for 80 years. But now, the breakthroughs are no longer theoretical. Reasoning, coding, agents, and self-improvement are all starting to work at once.

This episode goes deep on AI winters, OpenAI + OpenClaw, infrastructure overbuild risk, proof-of-human, why software may soon be written mostly for bots, and why the real bottleneck may be society adopting AI rather than the models improving.

Logan Kilpatrick
Logan Kilpatrick@OfficialLoganK·
Introducing Gemma 4, our series of open weight (Apache 2.0 licensed) models, which are byte for byte the most capable open models in the world! Gemma 4 is built to run on your hardware: phones, laptops, and desktops. Frontier intelligence with a 26B MoE and a 31B dense model!
Joanfihu retweeted
Google DeepMind
Google DeepMind@GoogleDeepMind·
Meet Gemma 4: our new family of open models you can run on your own hardware. Built for advanced reasoning and agentic workflows, we’re releasing them under an Apache 2.0 license. Here’s what’s new 🧵
atreides
atreides@atreides_sf·
well let's be real, open model SOTA was distilled from claude
clem 🤗@ClementDelangue

After @Pinterest @Airbnb @NotionHQ @cursor_ai, today it’s @eoghan @intercom publicly sharing that they’re finding it better, cheaper, faster to use and train open models themselves rather than use APIs for many tasks. And hundreds of other companies are doing the same without sharing. Ultimately, I believe the majority of AI workflows will be in-house based on open-source (vs API). It took much more time than we anticipated but it’s happening now!

Joanfihu retweeted
Jenny Zhang
Jenny Zhang@jennyzhangzt·
Introducing Hyperagents: an AI system that not only improves at solving tasks, but also improves how it improves itself.

The Darwin Gödel Machine (DGM) demonstrated that open-ended self-improvement is possible by iteratively generating and evaluating improved agents, yet it relies on a key assumption: that improvements in task performance (e.g., coding ability) translate into improvements in the self-improvement process itself. This alignment holds in coding, where both evaluation and modification are expressed in the same domain, but breaks down more generally. As a result, prior systems remain constrained by fixed, handcrafted meta-level procedures that do not themselves evolve.

We introduce Hyperagents – self-referential agents that can modify both their task-solving behavior and the process that generates future improvements. This enables what we call metacognitive self-modification: learning not just to perform better, but to improve at improving.

We instantiate this framework as DGM-Hyperagents (DGM-H), an extension of the DGM in which both task-solving behavior and the self-improvement procedure are editable and subject to evolution. Across diverse domains (coding, paper review, robotics reward design, and Olympiad-level math solution grading), hyperagents enable continuous performance improvements over time and outperform baselines without self-improvement or open-ended exploration, as well as prior self-improving systems (including DGM). DGM-H also improves the process by which new agents are generated (e.g. persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs.

This work was done during my internship at Meta (@AIatMeta), in collaboration with Bingchen Zhao (@BingchenZhao), Wannan Yang (@winnieyangwn), Jakob Foerster (@j_foerst), Jeff Clune (@jeffclune), Minqi Jiang (@MinqiJiang), Sam Devlin (@smdvln), and Tatiana Shavrina (@rybolos).
Joanfihu
Joanfihu@joanfihu·
@nxthompson Google is fundamentally broken. It should be top of the chart by a large margin
nxthompson
nxthompson@nxthompson·
Wild. Reddit and LinkedIn account for almost a quarter of all citations from the top LLMs.
Joanfihu
Joanfihu@joanfihu·
@apples_jimmy I kind of like Composer. It's fast and most of the tasks I do don't need AGI
Jimmy Apples 🍎/acc
Jimmy Apples 🍎/acc@apples_jimmy·
Apparently Cursor is going to release a coding model better than Opus 4.6 and cheaper as well (maybe tomorrow). Can they regularly do this to keep up, though?
Yuchen Jin
Yuchen Jin@Yuchenj_UW·
Some people at frontier AI labs told me they believe startups are over. OpenAI, Anthropic, Google, xAI will absorb every industry as AGI nears. Coding today, science, medicine, and finance next. Then everything else. If they’re right, that’s a pretty boring end of the world.
Matthew Prince 🌥
Matthew Prince 🌥@eastdakota·
@bgurley Bill’s biases notwithstanding: Google wins because it sees 3.5x+ the data that OpenAI does.
Joanfihu
Joanfihu@joanfihu·
@WSJ Consumer is the hardest battleground
The Wall Street Journal
Exclusive: OpenAI’s top executives are finalizing plans for a major strategy shift to refocus the company around coding and business users on.wsj.com/3N6CFyr