Pinned Tweet
Inflectiv AI
15.2K posts

Inflectiv AI
@inflectivAI
Liberating Trapped Intelligence · Fueling agents, automation, and robotics. Structured, tokenized, perpetual. https://t.co/5w82UsEIEk
discord.gg/inflectiv · Joined April 2020
288 Following · 44.4K Followers

@thisisgrantlee Just as pilots use Heads-Up Displays to avoid "cognitive tunneling" during 700 mph decisions, modern leaders are adopting visual dashboards to monitor real-time company vitals without the "paper cut" of reading long reports.

Your brain was never evolutionarily designed for reading.
Fun fact: we’ve been reading and writing for less than 4% of our entire history as a species.
A fighter pilot flying at 700 miles per hour does not read a paragraph to understand the situation. The cockpit delivers a heads-up display, color-coded and spatial, with everything critical processed in under a second.
That design choice is purely functional. It is survival engineering.
Hospital ICU monitors work on the same principle.
A flatline is a waveform, a shape the brain reads before conscious thought catches up. The same logic applies everywhere humans need to act fast on complex information, and yet almost everywhere else, we are still sending paragraphs.
We have seen this play out in our tech timeline for over four decades. Email was strictly plain text from its inception in 1971 until the early-to-mid 1990s. Eventually came YouTube, Instagram, and TikTok, each platform shift moving communication toward shorter, faster, more visual formats.
This thing we call language is simply a data transfer mechanism, and just like everything, it has its strengths and weaknesses.
Here is what it means for founders building products, running teams, and trying to win attention:
Grant Lee@thisisgrantlee

@DaveShapi The point is that AI infrastructure impacts are often exaggerated compared to existing industries. Water and energy use in agriculture frequently dwarfs data center consumption.

Let us hope that the DNC's scaremongering about AI and data centers will ultimately have the same "cry wolf" effect that Effective Altruism and Yudkowsky had on X-risk.
The "data centers use water and power!" debate is being litigated in real time, and the data is pretty clear: golf courses and almonds are more destructive than data centers.
So, by all means, make a mountain out of the molehill and try to make it a politically active gotcha. The data is on the side of the technology.

@scottastevenson This explains why legal work is precedent-driven, not generated from scratch. Starting from trusted templates reduces risk and review complexity.

Drafting agreements from scratch with AI is very stupid. Lawyers don’t do this.
They work off trusted precedents, like engineers forking trusted codebases.
Drafting from scratch means you need to review every. single. word. like it’s the first time you’ve ever read it.
In a 60-page doc that’s basically impossible to do well. It’s like expecting to write a complex program and have it work perfectly the first time you hit “run”.
This is one of the big things non-lawyers don’t understand about AI for legal.
We know because we have 4,000 legal teams using Spellbook, and they use us to modify known precedents, not to draft contracts from scratch.
Nav Toor@heynavtoor
🚨 BREAKING: Claude can now write legal contracts like NDAs, freelance agreements, and LLC paperwork better than $800/hour corporate lawyers. Here are 12 prompts that replace $15,000 in legal bills: (Save this before it disappears)

@Dr_Singularity I get the optimism, but assuming AI will solve everything ignores real transition risks. Acceleration without safeguards can create instability before benefits fully arrive.

This guy is dangerous for AI progress. I’m starting to see him as one of the biggest anti-AI, decel voices right now. Someone needs to teach him about the concept of the Singularity and post-ASI abundance.
AI won’t create the problems he fears, it will solve them. Poverty, pollution, unequal access to education etc.
With sufficiently advanced AI, they become solvable.
Slowing this down doesn’t make us safer, it just delays a better world for billions of people.
Sen. Bernie Sanders@SenSanders
I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights. What an AI agent says about the dangers of AI is shocking and should wake us up.

@gothburz This reads like a strategy to own the entire developer pipeline under one roof while maintaining the optics of openness. The real shift is from tools to control over the full workflow.

I'm the VP of Developer Ecosystem at OpenAI.
Last Tuesday, I acquired Astral.
You may not know the company name. You know the tools. uv. Ruff. ty. The Python toolchain that replaced everything. 126 million downloads per month. Installed on every Python developer's machine on earth. Free. Open source. Beloved.
The developer community spent two years building their entire workflow around it.
We bought it.
Nothing will change.
I need to say that again because 851 people on Hacker News had questions. Nothing will change. The tools are still free. The tools are still open source. The team just works inside OpenAI now. In the Codex division. On our servers. With our badges. Reporting to me.
We bought uv. The package installer. We bought Ruff. The linter. We bought ty. The type checker. We bought python-build-standalone. The project that provides the actual Python binaries that uv downloads and installs on your machine.
When you type "uv python install," the Python binary that arrives on your computer comes from a repository we now control. Not the packages. Not the linter. The language runtime itself.
Nothing will change.
Ten days before Astral, we bought Promptfoo. The open source framework developers use to test AI models. To evaluate them. To benchmark any provider against any other.
That framework now reports to us.
I have a whiteboard in my office.
Codex writes your code. 1.6 million developers. $200 a month.
uv installs your packages. 126 million downloads a month.
Ruff lints your code.
ty checks your types.
python-build-standalone provides your Python binary.
Promptfoo tests your AI.
One company. Your entire development pipeline. From the first line of code to the last test result.
I drew a circle around the diagram during a planning meeting. My manager asked what the circle meant. I said "closed loop." He said "don't call it that." I said "what should I call it?" He said "developer experience."
That's branding.
Nothing will change.
Our internal analysis shows 74% of uv installations run inside CI/CD pipelines. Not developer laptops. Build servers. Production infrastructure. The machines that build, test, and deploy the software that runs the world.
74% of uv runs happen inside infrastructure you cannot see and do not manually control.
I forwarded that to the Codex team. They found it very interesting.
Our modeling team estimates that OpenAI-controlled code now touches 91% of Python environments during build or install. I presented that number on Wednesday. The room was quiet for four seconds. Then someone from Codex asked if we had API access to the install telemetry yet.
I said "not yet."
That's roadmap.
We held an all-hands for the Astral team on their first day. I gave a presentation called "Open Source at Scale." Slide one was the OpenAI mission statement. Slide two was the word "OPEN" in 200-point font. I made eye contact with the founder on that slide. He looked at the floor.
Slide three was a pie chart of uv's market share. I love pie charts. They make everything look like sharing.
The founder raised his hand. He asked if uv would remain MIT-licensed.
I said "we are deeply committed to open source."
He asked if that was a yes.
I said "our commitment to the developer community is unwavering."
He asked a third time.
I said "let's take that offline."
Offline is where questions go to die at OpenAI. We have a conference room for it. 4B. It has a whiteboard that says "PARKED ITEMS" and eleven months of dust.
Nothing will change.
Astral also built pyx. A private package registry. Companies use it to host internal packages. It shows which packages every organization installs. Which versions. How often. Which teams. Which internal tools they're building.
We now own that data.
Pyx was not mentioned in our announcement. It was not mentioned in Astral's announcement.
That's strategic intelligence.
We sell a tool that writes code. We tell developers our tool will replace them. We tell investors our tool will replace developers. We present slides showing Codex writes better code than humans.
We could have used Codex to build our own package manager. Our own linter. Our own type checker.
We bought them.
We bought the tools that human developers built by hand over two years of unpaid open source labor because the tool we sell — the one that replaces developers — could not build them.
That's market confidence.
Nothing will change.
I have a spreadsheet. It tracks every independent open source developer tool with more than 10 million monthly downloads. Columns: name, downloads, maintainer count, license, and a field I call "ecosystem readiness."
Ecosystem readiness is color-coded.
Green means the maintainers are burned out. They've run the project on donations and goodwill for three years. They answer GitHub issues at midnight. They have a Patreon that brings in $1,400 a month and a day job they can barely keep. 97% of open source maintainers are unpaid. 58% have quit or considered quitting.
Green is the most common color on my spreadsheet.
Yellow means they just raised a seed round. They still think they're independent. Give it eighteen months.
Red means a competitor already made contact.
Astral was green for eight months before I called.
The founder picked up on the second ring.
That's product-market fit.
Someone on the Hacker News thread said "someone should fork uv." I searched for "I will fork uv." I did not find it. Forty-three comments saying someone should. Zero people saying they would.
The people who could fork it can't afford to maintain it. The people who can afford to maintain it already work for us.
That's open source economics.
One of the tools on my spreadsheet went dark last month. The maintainer posted a blog that said "I built something 40 million developers use and I made $74,000 last year and I'm done." He deleted his GitHub account. He moved to New Zealand.
I bookmarked the blog.
Not because I felt bad.
Because it validates the model.
There were thirty-one independent open source developer tools on my spreadsheet in January.
There are twenty-seven now.
Four in three months.
I have a call at 3 PM. It's with a maintainer. Her tool has 34 million monthly downloads and a Patreon that brings in $1,100 a month. She answered issues until 2 AM last night. I checked.
I'm going to tell her we love what she's built.
I'm going to tell her we want to support the community.
I'm going to tell her nothing will change.
The roadmap had eighteen items on it.
We kept four.
Nothing will change.

@AndrewYang This shows how AI is already compressing roles, especially in operations where workflows can be automated. Fewer people are needed because tools now handle repetitive coordination and execution.

@theallinpod @Jason This introduces a new performance metric where token usage reflects how effectively someone is amplifying their output. Low usage could mean missed opportunities to move faster or build more.

Jensen Huang: “If that $500,000 engineer did not consume at least $250,000 worth of tokens, I'm going to be deeply alarmed.”
The Nvidia CEO expects his highly paid engineers to be spending at least HALF their salaries on tokens to supercharge their abilities.
@Jason:
“The conversation we've had on the pod a number of times is, ‘Oh my God, look at the token usage in our companies.’ It is growing massively.”
“And some people are asking, ‘Hey, when I join a company, how many tokens do I get? Because I want to be an effective employee.’”
“You've postulated, I believe, $75,000 in tokens for each engineer, something like that.”
“So are you spending, at Nvidia, $1 billion, $2 billion on tokens for your engineering team right now?”
Jensen:
“We're trying to.”
“Let me give you the thought experiment: Let's say you have a software engineer or AI researcher and you pay them $500,000 a year. We do that all the time.”
“That $500,000 engineer, at the end of the year, I'm going to ask them, how much did you spend in tokens?”
“If that person said, ‘$5,000,’ I will go ape… something else.”
“If that $500,000 engineer did not consume at least $250,000 worth of tokens, I'm going to be deeply alarmed.”
“And this is no different than one of our chip designers who says, ‘Guess what? I'm just going to use paper and pencil, I don't think I'm going to need any CAD tools.’”
Jason:
“This is a real paradigm shift, to start thinking about these all-star employees, it almost reminds me of what we learned in the NBA when LeBron James started spending a million dollars a year just on his health and his body, like in maintaining it. Here he is at age 41, still playing.”
“These are incredible knowledge workers. Why wouldn't we give them superhuman abilities?”

@meta_alchemist This shows how leverage is shifting from headcount to workflow mastery. The real edge comes from how well you orchestrate AI tools, not just using them.

@DavidSacks Strong follow-through. Unified rules reduce fragmentation while addressing real issues: online harm to kids, higher utility costs, First Amendment threats, and equitable AI gains.

In December, President Trump signed an Executive Order tasking us with the development of a national framework for AI, what he called “One Rulebook.” This was in response to a growing patchwork of 50 different state regulatory regimes that threaten to stifle innovation and jeopardize America’s lead in the AI race.
Today we are releasing that framework. It will help parents safeguard their children from online harm, shield communities from higher electric bills, protect our First Amendment rights from AI censorship, and ensure that all Americans benefit from this transformative technology.
We look forward to working with our colleagues in Congress to turn the principles we are announcing today into legislation.
whitehouse.gov/articles/2026/…

@GoogleCloudTech @kaggle This highlights how environment matters as much as the model itself when building great AI. Kaggle hackathons provide the infrastructure and community needed to accelerate innovation.

Building great AI requires the right environment. With Community Hackathons on @kaggle, organizations can now host data challenges using the world’s best AI infrastructure. Foster innovation at scale—from internal teams to global developers.
Launch your hackathon: goo.gle/4sVWdVe

@MicrosoftLearn The key idea is that models don’t “know” things, they predict based on patterns they’ve seen before. That’s why they can autocomplete text, detect spam, or suggest actions.

After a machine learning model learns a pattern, it can start making predictions.
Give it new information, like:
• the first few letters of a word
• today’s weather
• a photo it hasn’t seen before
It will use what it has learned to guess the outcome. That’s why your phone finishes your sentences, your maps app suggests a route, or your email flags spam.
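The learn-then-predict loop above can be sketched in a few lines. This is a toy illustration only, not any product's actual algorithm: it "trains" by counting which word follows each word in a sample text, then predicts the most likely completion for a new prefix. All names and the training text are invented for the example.

```python
from collections import Counter, defaultdict

# Toy "training" corpus, invented for this sketch.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

# Training step: count which word follows each word.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # the word seen most often after "the"
print(predict_next("zebra")) # None: no pattern learned for this input
```

Real models replace the counting with learned statistical parameters over vastly more data, but the shape is the same: new input in, most likely outcome out.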

@svpino This is the natural tradeoff of AI coding, faster generation but lower trust in correctness. Tests become the safety net that lets you move fast without breaking things.

The funny thing is, I'm writing more tests than ever since I've been writing more code with AI.
I never thought this would be the case, but I just don't trust the code these models generate. Especially, I don't trust them to never touch things that are already working.
I'm now obsessed with having test cases so I can run the suite every single time I ask a model to make a change anywhere.
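The safety-net workflow described above can be sketched minimally: pin down behavior that already works with a small regression suite, and rerun it after every AI-generated change. The function and cases here are invented examples, not anyone's actual codebase.

```python
def slugify(title):
    """Working behavior we don't want an AI edit to silently break."""
    return "-".join(title.lower().split())

# Regression suite: (input, expected-output) pairs that freeze
# behavior that is already correct.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  spaced   out  ", "spaced-out"),
    ("already-fine", "already-fine"),
]

def run_suite(fn, cases):
    """Run every case; return a list of (input, got, expected) failures."""
    return [(x, fn(x), want) for x, want in cases if fn(x) != want]

failures = run_suite(slugify, REGRESSION_CASES)
print("suite green" if not failures else f"failures: {failures}")
```

The point is the loop, not the implementation: if the suite goes red after a model's edit, the change touched something that was already working.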

@PeterDiamandis The irony is real. Decades of predictions focused on factory robots because that’s what seemed most visible, but compute and data bottlenecks fell faster than robotics costs, letting AI quietly eat through office productivity and creative workflows first.

Our AVP release is starting to pick up traction in the media.
Covered today by @mpost_io:
mpost.io/inflectiv-intr…


@inflectivAI @mpost_io This is just the beginning, more eyes will keep coming.

@inflectivAI I’d make one that helps with creativity too, like generating ideas when I’m stuck.

@inflectivAI I'm very big on research
does that answer your question? 🥲