Jint3x
@Jint3x

746 posts

Undergraduate student | AI, Agentic Programming

Joined October 2018
43 Following · 20 Followers

Pinned Tweet
Jint3x @Jint3x:
AGI, and subsequently ASI, should be the goal of all of humanity. It is the thing that will release humans from the limits imposed on us for millennia. It is a path where humanity won't be just multi-planetary - it will be multi-galactic.
0 replies · 0 reposts · 2 likes · 104 views
Jint3x @Jint3x:
Really, Claude? Six questions on the free plan and I hit the rate limit...
0 replies · 0 reposts · 0 likes · 30 views
Jint3x @Jint3x:
I have a strong feeling that SpaceX's space data centers will have at least a couple of Optimus robots stationed in them, probably teleoperated from Earth to service the data centers.
0 replies · 0 reposts · 0 likes · 2 views
Jint3x @Jint3x:
What the hell, I never knew touchpads were so comfortable to use.
0 replies · 0 reposts · 0 likes · 3 views
Jint3x @Jint3x:
I don't understand how people are becoming dumber when using AI. I feel like I need many more brain cells when working with it - more work done, more options weighed, more decisions made per unit of time. Constant context switching, watching for hallucinations, etc.
0 replies · 0 reposts · 0 likes · 9 views
saila @sailaunderscore:
Kids will never feel the gameday pressure of having to start an essay at 2am and finish it by 8am before the deadline. Totally lost. Absolutely no evolutionary pressure towards a clutch factor anymore.
111 replies · 749 reposts · 17.8K likes · 442.5K views
Polymarket @Polymarket:
JUST IN: Study finds AI use at work leading to "cognitive overload" and burnout.
258 replies · 259 reposts · 4.6K likes · 498.9K views
Jint3x @Jint3x:
@lady_valor_07 Do not take fluoroquinolone-class antibiotics unless it's a life-threatening emergency.
0 replies · 0 reposts · 0 likes · 8 views
LadyValor @lady_valor_07:
I'm 25. Give me oddly specific life tips. No general "surround yourself with positive people" tips. I want the most random, specific advice possible.
9K replies · 371 reposts · 9K likes · 7.9M views
Jint3x @Jint3x:
@reach_vb Something that might be a bit more work would be organizing student-only hackathons.
0 replies · 0 reposts · 0 likes · 1 view
Jint3x @Jint3x:
@reach_vb As a student myself, I would say that practical tutorials showcasing and explaining different concepts (like compaction, tools, tips for best results, etc.) together with some form of Codex discount or free plan would be a good combo. ...
1 reply · 0 reposts · 0 likes · 70 views
Figure @Figure_robot:
Today we're showing Helix 02, which can tidy a living room fully autonomously. Figure is designed so that when you leave the house, your home resets exactly how you like it.
712 replies · 1.3K reposts · 9.4K likes · 2.1M views
Jint3x @Jint3x:
I strongly believe agentic evals and proper work validation will be the most important things in the AI world. It doesn't matter if you can do 10-million or 100-million output-token tasks if whatever you build doesn't work.
0 replies · 0 reposts · 0 likes · 14 views
Jint3x @Jint3x:
@ShanuMathew93 Lol, I do the same (not sure if I have bad vision). After 8 hours of reading, there's no way in hell I'm reading 14px or 16px text.
0 replies · 0 reposts · 0 likes · 14 views
Shanu Mathew @ShanuMathew93:
Man, I know I'm getting old with how often I have Chrome at 110% or 125% zoom... I have 20/20 vision, apparently, too...
1 reply · 0 reposts · 11 likes · 1.9K views
Jint3x @Jint3x:
@chatgpt21 They look somewhat weak atm, but that's only what is publicly known. They have the infra, so if they do have capable people, they are definitely doing something. I still use Grok 4.20 for search - much better than everything else I've tried.
0 replies · 0 reposts · 0 likes · 936 views
Chris @chatgpt21:
xAI hasn't even released the benchmarks for Grok 4.2, meanwhile OpenAI is now working on GPT 5.5... after releasing GPT 5 a month after Grok 4. What happened at xAI?
108 replies · 13 reposts · 964 likes · 103.2K views
Jint3x @Jint3x:
I am starting to understand why people feel anxiety when their agents are not doing work...
0 replies · 0 reposts · 0 likes · 7 views
Jint3x @Jint3x:
Thank god AI exists, otherwise I would've been killed by incompetent doctors by now.
0 replies · 0 reposts · 0 likes · 6 views
Jint3x reposted
frankie @FrankieIsLost:
it's really this simple
[image]
28 replies · 208 reposts · 2.7K likes · 85.2K views
Jint3x @Jint3x:
@awnihannun Increasing context size + context performance (especially the performance) will definitely make good compaction much stronger.
0 replies · 0 reposts · 0 likes · 41 views
Awni Hannun @awnihannun:
I've been thinking a bit about continual learning recently, especially as it relates to long-running agents (and running a few toy experiments with MLX). The status quo of prompt compaction coupled with recursive sub-agents is actually remarkably effective. Seems like we can go pretty far with this.

(Prompt compaction = when the context window gets close to full, the model generates a shorter summary, then starts from scratch using the summary. Recursive sub-agents = decompose tasks into smaller tasks to deal with finite context windows.)

Recursive sub-agents will probably always be useful. But prompt compaction seems like a bit of an inefficient (though highly effective) hack. There are two other alternatives I know of: 1. online fine-tuning and 2. memory-based techniques.

Online fine-tuning: train some LoRA adapters on data the model encounters during deployment. I'm less bullish on this in general. Aside from the engineering challenges of deploying custom models / adapters for each use case / user, there are some fundamental issues:
- Online fine-tuning is inherently unstable. If you train on data in the target domain you can catastrophically destroy capabilities that you don't target. One way around this is to keep a mixed dataset with the new and the old. But this gets pretty complicated pretty quickly.
- What does the data even look like for online fine-tuning? Do you generate Q/A pairs based on the target domain to train the model? You also have the problem of prioritizing information in the data mixture given finite capacity.

Memory-based techniques: basically a policy for keeping useful memory around and discarding what is not needed. This feels much more like how humans retain information: "use it or lose it". You only need a few things for this to work:
- An eviction/retention policy. Something like "keep a memory if it has been accessed at least once in the last 10k tokens".
- The policy needs to be efficiently computable.
- A place for the model to store and access long-term memory. Maybe a sparsely accessed KV cache would be sufficient. But for efficient access to a large memory a hierarchical data structure might be better.

87 replies · 82 reposts · 1.1K likes · 627.7K views
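The eviction/retention policy described in the tweet above can be sketched in a few lines. This is a minimal illustration, not anything from MLX or a real agent framework: the `MemoryStore` class, its method names, and the 10k-token window are all assumptions made up for the example. The only idea it implements is the stated rule, "keep a memory if it has been accessed at least once in the last 10k tokens".

```python
class MemoryStore:
    """Hypothetical long-term memory with a 'use it or lose it' policy:
    a memory survives only if it was accessed at least once within the
    last `window` tokens of generation."""

    def __init__(self, window=10_000):
        self.window = window
        self.tokens_seen = 0   # running count of tokens generated so far
        self.memories = {}     # key -> (value, token position of last access)

    def store(self, key, value):
        # A new memory counts as "just accessed".
        self.memories[key] = (value, self.tokens_seen)

    def access(self, key):
        # Reading a memory refreshes its last-access position,
        # which is what keeps frequently used memories alive.
        value, _ = self.memories[key]
        self.memories[key] = (value, self.tokens_seen)
        return value

    def advance(self, n_tokens):
        """Advance the token clock and evict anything not accessed
        within the last `window` tokens. O(n) over stored memories,
        so the policy stays cheap to compute."""
        self.tokens_seen += n_tokens
        cutoff = self.tokens_seen - self.window
        self.memories = {
            k: (v, t) for k, (v, t) in self.memories.items() if t >= cutoff
        }


m = MemoryStore(window=10_000)
m.store("user_prefs", "dark mode")
m.store("scratch", "tmp note")
m.advance(6_000)
m.access("user_prefs")   # refreshed at token 6,000
m.advance(6_000)         # clock at 12,000; cutoff 2,000: "scratch" (t=0) evicted
```

After the second `advance`, `"user_prefs"` survives (last accessed at token 6,000, inside the window) while `"scratch"` is gone. A hierarchical index over keys, as the tweet suggests, would only matter once the store is large enough that linear scans hurt.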
Jint3x @Jint3x:
Prediction: Apps like Claude Code and Codex will always dominate AI programming tools, because the model providers can train specialized models together with their harnesses. Most other apps can't do that.
0 replies · 0 reposts · 0 likes · 22 views
Jint3x @Jint3x:
@philipkiely Downloaded the book yesterday, already at chapter 4, and would've read more if I had the time. As someone who doesn't specialize in inference engineering, the book has been very interesting so far.
0 replies · 0 reposts · 1 like · 128 views
Philip Kiely @philipkiely:
I made a mistake yesterday and now I have a (good) problem. Turns out I massively underestimated demand for Inference Engineering. Yesterday we saw:
> 2M+ views, trending on tech twitter
> Shipped books to awesome people worldwide
> Thousands more asking for the book
The problem? I'm sold out. Was on the phone with my printer in Belgium first thing this morning; more copies are on the way by air freight. In the meantime, the PDF on the website is free.
[image]
31 replies · 21 reposts · 414 likes · 16K views
Jint3x reposted
Johan @Adityapandeydev:
pov: how it feels when AI can't solve your problem so you switch to documentation
42 replies · 399 reposts · 6K likes · 144.1K views