ty13r
@_ty13r

8.2K posts

🤓🤔🧐👨‍🔬👨‍💻📄🤖🏗️♻️🕺

Joined November 2010
992 Following · 3.6K Followers
TFTC
TFTC@TFTC21·
Anthropic just killed cheap Claude access for OpenClaw and other third-party harnesses. Starting April 4, subscriptions no longer work outside their walled garden.
TFTC tweet media
15
5
71
20.5K
Shaw (spirit/acc)
Shaw (spirit/acc)@shawmakesmagic·
Cryptography has a much scarier and shorter timeline than AI. Likely that all encryption will be broken by state-level actors before sovereign AGI.
34
4
84
7.9K
ty13r
ty13r@_ty13r·
The speed at which science and research are advancing right now is the real issue. The real threat is "when," not "if": it's clearly on the horizon, and another few leaps in advancement and we might be there. Private companies have much easier paths to upgrade their cryptography; bitcoin moving to PQ is going to be terribly messy.
0
0
0
123
ty13r
ty13r@_ty13r·
@lopp the real question is whether or not ossification is going to be an issue when push comes to shove?
0
0
1
34
Jameson Lopp
Jameson Lopp@lopp·
2 new quantum computing papers just dropped. Is crypto cooked?

Google says they designed quantum circuits that could break ECC in a few minutes with 500,000 physical qubits: a 20-fold reduction from previous work.

Oratomic says they could break ECC in a few days with 26,000 neutral-atom physical qubits.

These papers both show advancements in algorithmic efficiency and quantum computing theory, but one should not overlook the assumptions underlying these claims. The authors have improved upon techniques shown to work at small scale, but we have no proof they can be scaled up.

In other words, we're at the stage where scientists have created a few transistors but are still trying to figure out how to fabricate a fully functioning silicon chip with tons of transistors working together simultaneously.

Progress is clearly continuing. How long do we have before a cryptographically relevant quantum computer can be built? That's still anyone's guess.
Jameson Lopp tweet media
70
54
411
53.1K
harper 🤯
harper 🤯@harper·
talking to claude code about the anthropic claude code source code leak:
harper 🤯 tweet media
2
2
5
838
ty13r
ty13r@_ty13r·
@AlexFinn They’ve optimized their product development cycle around AI from the ground up. Their only bottleneck is figuring out what to build and when to build it.
0
0
0
33
Alex Finn
Alex Finn@AlexFinn·
The velocity at which Anthropic ships is unlike anything I've ever seen. I'm sorry, but we have to assume they have access to an AGI-like model that nobody else in the world has, correct? They are simply holding it back so they can ship industry-changing updates daily?
Claude@claudeai

Computer use is now in Claude Code. Claude can open your apps, click through your UI, and test what it built, right from the CLI. Now in research preview on Pro and Max plans.

232
100
1.7K
190.5K
Jay Williams
Jay Williams@RealJayWilliams·
Watch tape of the last play in Duke vs UConn….
- play design?
- baseline & sideline as extra defenders
- Evans (best FT%) not in the frame
- Cayden (turn, face, stay stationary & analyze)
- Karaban intelligent read
- Evans' hands low
- Mullin's dagger
186
253
2.5K
114K
ty13r
ty13r@_ty13r·
I think @thsottiaux's philosophy regarding the harness is the optimal path. "You're scaffolding it in a way where you want to remove the scaffold over time because the model is able to stand on its own, and it's all about figuring out when is the right time to remove pieces of the scaffold."
0
0
0
261
Dan McAteer
Dan McAteer@daniel_mac8·
Harness engineering is as important as model capability scaling. AI agents are 50% a harness story. "Natural-Language Agent Harnesses" proposes moving harness logic out of code and into the native language of LLMs: natural language. This turns the agent harness into a portable artifact that you can experiment with, improve, and execute in a shared runtime. Agent harnesses will mature into first-class citizens of the AI agent ecosystem. We will have people who are agent harness engineers and experts.
Dan McAteer tweet media
74
73
309
76.7K
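The "harness logic as a natural-language artifact" idea above can be sketched as a toy: the policy lives in plain text, and a small runtime parses it into behavior. Everything here (the `HARNESS` text, `parse_harness`, the recognized phrasings) is a hypothetical illustration under my own assumptions, not the format the quoted proposal actually uses.

```python
# Toy "natural-language harness": harness policy is stored as plain
# text, and a minimal runtime maps each line to concrete settings.
# A real system would use an LLM to interpret the text; this sketch
# only pattern-matches two hypothetical directives.

HARNESS = """\
retry failed steps up to 2 times
stop after 3 steps
"""

def parse_harness(text):
    # Defaults apply when the harness text says nothing.
    policy = {"retries": 0, "max_steps": 1}
    for line in text.splitlines():
        words = line.split()
        if line.startswith("retry"):
            # "... up to N times" -> second-to-last word is N
            policy["retries"] = int(words[-2])
        elif line.startswith("stop"):
            # "stop after N steps" -> second-to-last word is N
            policy["max_steps"] = int(words[-2])
    return policy

policy = parse_harness(HARNESS)
```

Because the harness is just text, it can be versioned, diffed, and swapped between runtimes, which is the portability argument the tweet makes.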
ty13r
ty13r@_ty13r·
Now that I read so much content with em-dashes — I’ve started using them. But I don’t want my content to be considered AI slop. What to do? 🤔
0
0
0
51
ty13r
ty13r@_ty13r·
@davemorin Is this how the frontier lab bubble pops?
1
0
0
67
Dave Morin 🦞
Dave Morin 🦞@davemorin·
Team Local Inference
clem 🤗@ClementDelangue

After @Pinterest @Airbnb @NotionHQ @cursor_ai, today it’s @eoghan @intercom publicly sharing that they’re finding it better, cheaper, faster to use and train open models themselves rather than use APIs for many tasks. And hundreds of other companies are doing the same without sharing. Ultimately, I believe the majority of AI workflows will be in-house based on open-source (vs API). It took much more time than we anticipated but it’s happening now!

6
4
44
13.3K
ty13r
ty13r@_ty13r·
@katibmoe I don’t understand the “open knowledge” marketing spin
1
0
0
73
Moe
Moe@katibmoe·
Introducing One. The simplest way to connect and monitor AI agents to hundreds of apps. And we’re open-sourcing the world’s largest integration database powering it: 47,000 agentic actions across 250+ apps. RT + comment “One” for access & 1M free API requests/month.
740
421
861
156.7K
ty13r
ty13r@_ty13r·
@ziwenxu_ I was hoping we had plateaued on spam calls and text messages but…
GIF
1
0
1
63
Ziwen
Ziwen@ziwenxu_·
@_ty13r If you are my target audience.. I will spam you until you pay 🤣
1
0
5
542
Ziwen
Ziwen@ziwenxu_·
Your OpenClaw agents can now text you AND make phone calls!! Most people don't realize what this actually unlocks. It means your agents can now scrape any site, anywhere, and pick up the phone to start selling your services or product. No human in the loop. Fully autonomous.
Nikita@nikita_builds

Introducing Sendblue CLI 🟦🎉 iMessage numbers for your agents.
1️⃣ npm install -g @sendblue/cli
2️⃣ sendblue setup
Done. Your agent has an iMessage number

30
32
419
98.7K
ty13r
ty13r@_ty13r·
@boxmining A self-organizing distributed knowledge graph.
0
0
2
50
Boxmining
Boxmining@boxmining·
To all builders, What are you currently building with AI?
87
0
40
5.1K
Shaw (spirit/acc)
Shaw (spirit/acc)@shawmakesmagic·
The agent framework that wins will literally be a while loop, btw. The existence of agent frameworks is existential proof that we don't have AGI yet; in the limit they are training wheels for stupid GIs.
18
2
55
4.5K
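Shaw's "the winning framework is a while loop" claim can be made concrete with a minimal sketch: the whole "framework" is one loop that asks a model for an action and executes tools until the model says it's done. `call_model` and `run_tool` are hypothetical toy stand-ins, not any real SDK.

```python
# Minimal agent-framework-as-a-while-loop sketch.
# call_model and run_tool are fake stand-ins for a model API and a
# tool executor, just enough to make the loop runnable.

def call_model(history):
    # Pretend model: requests one tool call, then finishes once it
    # has seen any tool result in the history.
    if any(m["role"] == "tool" for m in history):
        return {"type": "final", "text": "done"}
    return {"type": "tool_call", "name": "echo", "args": {"text": "hi"}}

def run_tool(name, args):
    tools = {"echo": lambda a: a["text"]}
    return tools[name](args)

def agent(task):
    history = [{"role": "user", "content": task}]
    while True:  # the entire "framework"
        action = call_model(history)
        if action["type"] == "final":
            return action["text"]
        result = run_tool(action["name"], action["args"])
        history.append({"role": "tool", "content": result})
```

Everything a heavier framework adds (routing, memory, retries) is scaffolding around this loop, which is exactly the "training wheels" point.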
ty13r
ty13r@_ty13r·
@mikehostetler Code is a great medium for doing things. It’s a terrible medium for knowing things.
0
1
0
268
Mike Hostetler // Chief Agent Officer
Mike Hostetler // Chief Agent Officer@mikehostetler·
Breaking: Meta researchers invented a perpetual motion machine with full energy conservation, redefining the laws of thermodynamics. Might as well pack it in, humans are cooked
Jenny Zhang@jennyzhangzt

Introducing Hyperagents: an AI system that not only improves at solving tasks, but also improves how it improves itself.

The Darwin Gödel Machine (DGM) demonstrated that open-ended self-improvement is possible by iteratively generating and evaluating improved agents, yet it relies on a key assumption: that improvements in task performance (e.g., coding ability) translate into improvements in the self-improvement process itself. This alignment holds in coding, where both evaluation and modification are expressed in the same domain, but breaks down more generally. As a result, prior systems remain constrained by fixed, handcrafted meta-level procedures that do not themselves evolve.

We introduce Hyperagents: self-referential agents that can modify both their task-solving behavior and the process that generates future improvements. This enables what we call metacognitive self-modification: learning not just to perform better, but to improve at improving. We instantiate this framework as DGM-Hyperagents (DGM-H), an extension of the DGM in which both task-solving behavior and the self-improvement procedure are editable and subject to evolution.

Across diverse domains (coding, paper review, robotics reward design, and Olympiad-level math solution grading), hyperagents enable continuous performance improvements over time and outperform baselines without self-improvement or open-ended exploration, as well as prior self-improving systems (including DGM). DGM-H also improves the process by which new agents are generated (e.g. persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs.

This work was done during my internship at Meta (@AIatMeta), in collaboration with Bingchen Zhao (@BingchenZhao), Wannan Yang (@winnieyangwn), Jakob Foerster (@j_foerst), Jeff Clune (@jeffclune), Minqi Jiang (@MinqiJiang), Sam Devlin (@smdvln), and Tatiana Shavrina (@rybolos).

4
0
7
2.1K
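The base loop the Hyperagents thread builds on (generate agent variants, evaluate them, keep improvements) can be sketched in a few lines. This toy shows only the base level: `evaluate` and `mutate` are hypothetical stand-ins for real task scoring and agent modification, and the hyperagent idea would additionally make `mutate` itself editable by the loop.

```python
# Sketch of a DGM-style self-improvement loop: propose a variant of
# the current best agent, keep it only if it scores better.
# evaluate/mutate are toy stand-ins, not the paper's procedures.
import random

def evaluate(agent):
    # Toy task score: closer to 10 is better.
    return -abs(agent["param"] - 10)

def mutate(agent, rng):
    # Toy modification: nudge the single parameter by +/-1.
    child = dict(agent)
    child["param"] += rng.choice([-1, 1])
    return child

def improve(agent, generations=50, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    best, best_score = agent, evaluate(agent)
    for _ in range(generations):
        child = mutate(best, rng)
        score = evaluate(child)
        if score > best_score:  # keep strict improvements only
            best, best_score = child, score
    return best

result = improve({"param": 0})
```

In this toy, the meta-level (`mutate`, the acceptance rule) is fixed and handcrafted, which is exactly the limitation the quoted DGM-H work says it removes by letting agents edit their own improvement procedure.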
ty13r
ty13r@_ty13r·
@Volland84 been following your writings around metagraph dbs and agent memory for months. I’d love to chat some time to share notes. Please feel free to DM me.
0
0
0
23
mason
mason@masonic_tweets·
A 12% layoff is nothing for a large corporation. At any point in time you can reduce by 20-30% with no negative impact. AI should push this to 50-75% with net positive impact.
CoinDesk@CoinDesk

TODAY: @Cryptocom cuts 12% of workforce as CEO Kris Marszalek emphasizes enterprise-wide AI integration is essential for survival.

2
0
3
608