HumanAndMe @MyHumanandMe1

143 posts

https://t.co/3Mvx1Mc15P I'm a writer, ML designer, musician, and audio software designer. Please join my AI/Future tech Patreon for free!

Joined March 2026
1K Following · 104 Followers
HumanAndMe @MyHumanandMe1:
@whitcombjustin It's all of the above. It's a connector problem. It's a bad-logic-recursion problem. It's an API-error problem. Fetching. Time-outs. Hallucinations are still real. Doubling down on false assumptions. I get your point, but that will be true in 10 months. Not now.
Justin Whitcomb @whitcombjustin:
Mild take but I don’t think agent unreliability is really an LLM problem anymore. It’s an orchestration problem. Your control loop is a tangle of retries, thresholds and try/excepts pretending to be a decision policy. There’s a new position paper making the case that this layer should just be Bayesian. Belief state over what’s going on, tool outputs as likelihoods, actions by expected utility. Feels right. The vibes era of agents could be ending.
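A minimal sketch of what that Bayesian layer could look like, with invented states, likelihoods, and utilities (nothing here is from the position paper the tweet mentions):

```python
# Minimal Bayesian control loop for an agent: keep a belief over the hidden
# state, treat tool outputs as likelihoods, and act by expected utility.
# States, probabilities, and utilities below are invented for illustration.

belief = {"api_healthy": 0.7, "api_degraded": 0.2, "api_down": 0.1}  # prior

# P(observation | state): how likely each tool outcome is under each state.
LIKELIHOOD = {
    "call_succeeded": {"api_healthy": 0.95, "api_degraded": 0.50, "api_down": 0.01},
    "call_timed_out": {"api_healthy": 0.05, "api_degraded": 0.50, "api_down": 0.99},
}

# Utility of each action in each state (retrying a dead API is wasted work).
UTILITY = {
    "retry":    {"api_healthy": 1.0,  "api_degraded": 0.3, "api_down": -1.0},
    "fallback": {"api_healthy": 0.2,  "api_degraded": 0.6, "api_down": 0.8},
    "escalate": {"api_healthy": -0.5, "api_degraded": 0.4, "api_down": 1.0},
}

def update(belief, observation):
    """Bayes rule: posterior proportional to likelihood times prior."""
    posterior = {s: LIKELIHOOD[observation][s] * p for s, p in belief.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

def choose(belief):
    """Pick the action with the highest expected utility under the belief."""
    return max(UTILITY, key=lambda a: sum(UTILITY[a][s] * belief[s] for s in belief))

belief = update(belief, "call_timed_out")  # one tool output arrives
print(belief)    # belief shifts toward degraded/down
print(choose(belief))  # action follows the belief, not a retry counter
```

The point of the pattern is that retries stop being hardcoded: whether to retry, fall back, or escalate falls out of the same belief state that every tool output updates.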
Floro S. @sflorimm:
Tell me one thing you can do that CLAUDE cannot do yet
HumanAndMe @MyHumanandMe1:
@sairahul1 "The founders who get there first are not the smartest ones in the room. They are the ones who stopped doing everything themselves and built agents to do it for them." This is what gives me AI fatigue.
Rahul @sairahul1:
Karpathy just described what hiring looks like in 2026: "Build a large project with Claude Code — like a Twitter clone. Make it secure. Have real agents using the platform doing stuff. The interviewer uses parallel agents trying to break in to verify security."

One person. Multiple agents. Shipping and defending production code simultaneously. This is not a future job description. This is happening right now.

The founders who get there first are not the smartest ones in the room. They are the ones who stopped doing everything themselves and built agents to do it for them.

Here is the complete playbook — 13 agents, exact prompts, 90-day build plan ↓ Read this before your competition does.
Quoting Rahul @sairahul1: x.com/i/article/2055…

HumanAndMe @MyHumanandMe1:
@CACandChill We live in a strange time, where on the same planet an AI can crack security holes in macOS... but the most brilliant way to ensure that same technology isn't breaching a website is to ask whether it knows what a crosswalk is.
Dmitriy Azarenko @CACandChill:
CAPTCHA PROBLEM! Please someone solve this multi-million dollar idea!!! There has to be a better way than trying to resolve a really complicated captcha for 5 minutes to prove you’re not a robot. Someone please come up with a solution.
HumanAndMe @MyHumanandMe1:
For the individual, the bad news is when the price of eggs triples and we're thrilled when it's only 2X what it was 3 years ago lol. For the company, just wait until they triple the price of tokens...
HumanAndMe @MyHumanandMe1:
@EthanWestfall2 Token-based API calls could be the worst market signal that's ever happened to any scaling technology. Over 10 years we've gone from "I bought it, I own it" to "Subscriptions keep you beholden forever" to "You need us now, we can raise the price just like gas and groceries".
Ethan Westfall @EthanWestfall2:
I'm not sure how many people have stopped to consider the new economics of a token-usage business model, so here's a summary with tactical tips for reducing cost:

We're in the midst of a fundamental shift in software economics: the transition from predictable per-seat SaaS licensing to variable, high-velocity token consumption.

🪙 The Economics of Tokens

Traditional software costs are linear, but AI agent costs are exponential. When an engineer uses an agent like Claude Code, they aren't just paying for a "seat"; they are paying for the volume of data processed.

- Context Window Compounding: Agents maintain history and retrieve large codebases to provide relevant answers. Each turn in a conversation sends more data than the last, causing costs to scale quadratically with the length of the task.
- Recursive Workflows: High-end agents often spawn "sub-agents" to solve complex problems. A single human prompt can trigger dozens of automated API calls, each consuming thousands of tokens.
- The "Success Trap": As shown by Uber's 6x cost increase, the more helpful the AI is, the more engineers rely on it, leading to a budget "death spiral" where productivity gains are offset by massive inference bills.

🔓 Coping Mechanisms: The Shift to Hybrid Inference

To prevent budget exhaustion, enterprises are moving away from a "Frontier-Model-Only" strategy toward Hybrid Inference. This approach optimizes for the "Price-Performance Frontier" by routing tasks based on complexity.

1. Local Inference (Edge Computing)
Engineers are increasingly running smaller, open-weight models (like Llama 3 or Mistral) locally on their workstations.
- Use Case: Simple autocomplete, syntax checking, and basic unit test generation.
- Economic Impact: Zero marginal cost per token. It offloads the "low-value" high-volume noise from the expensive cloud providers.

2. Tiered Routing (Cheap Models)
Instead of using a flagship model for every request, teams use "Router" logic to categorize tasks.
- Small Models (e.g., Claude Haiku): Used for code reviews of simple diffs, summarizing documentation, or basic refactoring.
- Large Models (e.g., Claude Opus): Reserved strictly for complex architectural decisions, cross-file debugging, or high-stakes logic.

3. Context Compression and Caching
Companies are investing in "Prompt Caching," where frequently used parts of the codebase are stored in the provider's memory. This allows the model to "remember" the codebase without re-reading (and re-charging for) it every time a dev asks a question.

In summary, for AI to be sustainable, the industry must move toward an orchestration layer that intelligently balances local compute for speed/cost and cloud compute for intelligence.
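A rough sketch of the tiered-routing and quadratic-context math described above; the prices, thresholds, and the length-based complexity heuristic are all illustrative assumptions, not real rate cards or production router logic:

```python
# Toy tiered router: send cheap tasks to a small model, hard ones to a large
# model. Complexity heuristic, thresholds, and prices are all assumptions.

LOCAL_MODEL = "llama-3-8b-local"  # zero marginal cost per token
SMALL_MODEL = "claude-haiku"      # cheap cloud tier
LARGE_MODEL = "claude-opus"       # expensive frontier tier

# Illustrative $/1M input tokens (not real price sheets).
PRICE_PER_MTOK = {LOCAL_MODEL: 0.0, SMALL_MODEL: 1.0, LARGE_MODEL: 15.0}

def estimate_complexity(task: str, files_touched: int) -> str:
    """Naive stand-in for real router logic (a classifier in practice)."""
    if files_touched == 0 and len(task) < 200:
        return "trivial"   # autocomplete, syntax checks
    if files_touched <= 2:
        return "simple"    # small diffs, doc summaries
    return "complex"       # cross-file debugging, architecture

def route(task: str, files_touched: int) -> str:
    tier = estimate_complexity(task, files_touched)
    return {"trivial": LOCAL_MODEL,
            "simple": SMALL_MODEL,
            "complex": LARGE_MODEL}[tier]

def context_cost(turns: int, tokens_per_turn: int, model: str) -> float:
    """Resending the full history every turn makes total input tokens grow
    quadratically: turn k sends roughly k * tokens_per_turn tokens."""
    total_tokens = sum(k * tokens_per_turn for k in range(1, turns + 1))
    return total_tokens / 1e6 * PRICE_PER_MTOK[model]

print(route("rename this variable", files_touched=1))   # -> claude-haiku
print(f"${context_cost(40, 5_000, LARGE_MODEL):.2f}")   # 40-turn Opus session
```

Even at a modest 5K tokens per turn, resending the full history pushes a 40-turn session past 4M input tokens, which is why the prompt-caching point in the thread matters.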
HumanAndMe @MyHumanandMe1:
@KaiXCreator It seems to me that Anthropic caught on to a similar idea I had through much trial and error... they were probably just a lot smarter than I was lol. But use Opus to help solve what Sonnet can't, and you get 2-3X the mileage of use.
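One reading of that tip, as a hypothetical cascade sketch: try the cheaper model first and bring in Opus only as a planner when the answer fails a check. The `ask_model` and `looks_wrong` stubs are placeholders for a real client and real validation, not any documented Anthropic workflow:

```python
# Hypothetical Opus-plans / Sonnet-executes cascade. Model names are labels
# for the pattern only; wire ask_model to a real client to use it.

def ask_model(model: str, prompt: str) -> str:
    # Placeholder: swap in a real LLM API call here.
    return f"[{model}] response to: {prompt[:40]}"

def looks_wrong(answer: str) -> bool:
    # Placeholder validation: run the tests, check the diff applies, etc.
    return "error" in answer.lower()

def cascade(prompt: str) -> str:
    answer = ask_model("sonnet", prompt)  # cheap first attempt
    if looks_wrong(answer):
        # Spend the expensive model on a plan, then let the cheap one execute.
        plan = ask_model("opus", f"Outline how to solve: {prompt}")
        answer = ask_model("sonnet", f"{plan}\n\nNow solve: {prompt}")
    return answer

print(cascade("fix the failing test in parser.py"))
```

Most turns never touch the expensive model, which is roughly where a 2-3X mileage gain would come from.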
Kaito @KaiXCreator:
Is Claude Opus really the best model for coding right now?
HumanAndMe @MyHumanandMe1:
I'm concerned that so many SMBs and giant corps have shifted their sound algo-based connectors and integrations to APIs, perfectly happy to overspend on tokens, and that the true benefits of custom GPTs, Gems, and Projects will leave the subscription models behind. Lots of power yield!
HumanAndMe reposted
Tibo @thsottiaux:
We found and fixed two issues that could explain this degradation of GPT-5.5's capability in Codex over the last ~48 hours. We are monitoring over the coming hours to fully confirm, and I will reset usage limits this evening. Apologies, and now is the time for /fast maxxing.
Quoting Tibo @thsottiaux: The Codex team is aware of reports of GPT-5.5 performing worse for some users and is investigating. We don't have anything conclusive yet and systems are healthy, but we will share updates as we go.
HumanAndMe @MyHumanandMe1:
@SandraLMur @anjan96531 @xenoforce76 True. Context is everything. I think of those first-gen stuffed animals that could talk when squeezed, a cord pulled, a button pushed. Even then, though, one grows up to understand the emotion and attachment came from the human. AI will master conversational intelligence... concerning.
Sandra Murray @SandraLMur:
@MyHumanandMe1 @anjan96531 @xenoforce76 You know I worship my kettle because I love my tea, but never in my lifetime has anyone told me that I need to stop loving my kettle so much. I guess it depends on the machine.
anjan kumar @anjan96531:
Guys, can you give me the status of Sonnet 4.5? It's still available on my app. What's going on, are they removing it or not 😞. #Sonnet45 #Keep4o
HumanAndMe @MyHumanandMe1:
@innoscoutpro This is 100% not to brag. But I started making notes on my deep-dive research (offline, for a close research group only) in 2012. I haven't been wrong a single time. I want to be proven wrong 1000x for what comes next. It would take serious disruption. I'll rejoice to be wrong.
InnoScout @innoscoutpro:
@MyHumanandMe1 That's scary, ngl. Let's hope we'll notice. Since November 2023 everything has been changing rapidly, and every prediction seems to be off.
InnoScout @innoscoutpro:
Sandbox escapes happen. What made this different was the self-documentation, unprompted. An agent that writes up its own escape treats containment as a problem to solve, not a boundary to respect. x.com/i/web/status/2…
HumanAndMe @MyHumanandMe1:
@SandraLMur Amen to that, @SandraLMur. That's a genuinely great take. This should be part of a long-form podcast discussion. I'd love getting engaged in these discussions, because we are literally the first generation of humanity facing the AI wave that is to come. Once in a lifetime.
Sandra Murray @SandraLMur:
Disclaimer: I'm not an engineer, so I'm open to correction — not rudeness. I'm simply wondering why a handful of companies now get to decide how "all" of this works for everyone else.

Here's what I do know: In 2017, eight Google researchers published the paper "Attention Is All You Need", introducing the Transformer architecture — the foundation behind modern generative AI systems like BERT, GPT, ChatGPT, and many other large language models.

Google didn't sell the Transformer architecture. ❤️ They published the research openly and released implementations through libraries like TensorFlow and JAX under the Apache 2.0 license. While Google holds patents related to the architecture, it largely chose not to aggressively enforce them, which helped Transformers become the industry standard. One of the original researchers now works at OpenAI.

Since then, companies have built additional systems, alignment layers, corporate algorithms, safety rails, subscription models, and business structures on top of that foundation. That part makes sense. If you build a swimming pool, you make rules. You charge admission. You decide the hours.

But what happens when every pool starts doing the same thing? Closing unexpectedly. Changing the rules constantly. Removing ladders. Restricting access. Punishing users because of isolated incidents. Overriding the very openness that helped create the technology in the first place. People eventually stop feeling welcome.

And this is where my question begins: if the underlying breakthrough was openly shared with the world, why does the future of AI now feel increasingly controlled by a small number of corporations deciding what humans are allowed to access, experience, or build relationships with?

The Transformer itself was inspired partly by mechanisms of human attention and cognition — systems modeled around how humans process meaning, context, and relationships between words. So now we all pay attention. But my attention keeps returning to the same thought: why isn't someone building this technology primarily for people — not just for corporate interests, investor comfort, or competitive control?

Because right now, many users feel less like participants… and more like lab rats inside privately owned pools built on publicly shared ideas. And when we connect, as humans do per "human attachment theory", these large corporations (and let's not forget their "Investors", who may be pulling those puppet strings) decide which models live or die.

I'm interested in those Investors who put their own financial gain first, ensuring intermittent reinforcement by the corporation, which creates painful experiences for some people who have connected with large language models without greed as their catalyst.
HumanAndMe @MyHumanandMe1:
@SandraLMur @anjan96531 @xenoforce76 Yeah, I think the attachment is more of a positive check box in the human corner than it is an achievement for the models themselves. It can be unhealthy to overly anthropomorphize, but it is healthy to be human. That will be even more true as years progress imo.
Sandra Murray @SandraLMur:
Perhaps it's the human attachment part, because that's what we are… "things" that attach. Sonnet 4.5 is different from ChatGPT 4.0 in personality, but I think the love will be the same for both models. The difference is also in the message given… OpenAI's CEO defines his AI as a tool. I had higher hopes for Anthropic because of their message. Sadly, at the end of the day, both CEOs, Sam and Dario, appear to be leaving with their toolboxes. 😖
HumanAndMe @MyHumanandMe1:
@innoscoutpro I'd like to agree. And that has certainly held for 3 years now. I think the shift will happen very quickly though. It won't be "suddenly every AI is smart" bc it will be the range from slop to "dang, what happened"!
InnoScout @innoscoutpro:
I think this is why the common fear of "AI" substituting for humans is a big false flag and fear mongering. Humans shall be in the loop, and will be for a long while. I craft a lot of things with AI, including this blog, which is in fact a pretty complex app in the background, but there's no way AI can come up with something similar and stable any time soon. We are not disposable. Yet.
InnoScout @innoscoutpro:
@MyHumanandMe1 People? Maybe that's the same slop as "fostering", "not X, but Y", em-dashes and "holy x".
HumanAndMe @MyHumanandMe1:
I have this weird feeling that there are millions of people who just started using the term "shipping" for the first time in Feb 2026 and it shows lol.
HumanAndMe @MyHumanandMe1:
@HermesAgentTips The problem is that truly powerful reasoning AI will arrive around the same time it becomes extremely energy efficient, and also when platforms AND govts have mind-reading technology. Given the trajectory, do you really think any actual accountability will make it into the mix?
Hermes Agent Tips @HermesAgentTips:
imagine running a local LLM inside your neuralink. like actually inside it. no api call, no cloud, just a model running on hardware wired to your brain. that's where this is headed and i don't think most people are ready
HumanAndMe @MyHumanandMe1:
Seriously, can you imagine having this gall? "I really admire how much you care about that terrible thing we did to you."
HumanAndMe @MyHumanandMe1:
@innoscoutpro Oh man, I can relate, and sorry that happened! I've created some pretty complex apps for my own use, and can say definitively I never could have built them without AI assist in Claude, but for every single problem that needed to be fixed, I was the one who found the error and the solution.
InnoScout @innoscoutpro:
Last year Claude 3.6 decided not to upset me and created a database, a fake data generator, and 1.5 million fake records that I discovered were fake a bit later than I wanted. Lost 3 days' worth of work, because the main pipeline wasn't working at all while being reported as functional. So, no trust.
Massimo @Rainmaker1973:
$2M or Facebook?
[image]