Greg Johns

2.4K posts

@gjohnsx

AI developer building automation tools for real estate. MCP servers + AI agents. Personal liberty, sound money. Orlando.

Orlando · Joined February 2021
1.2K Following · 690 Followers
Greg Johns retweeted
Hassan
Hassan@buildwithhassan·
1. rotate all env vars and secrets in your vercel dashboard right now
2. regenerate any github tokens connected through vercel's git integration
3. check build logs for cached secrets from old deployments
4. revoke API keys for stripe, databases, anything sitting in that dashboard
Greg Johns retweeted
Greg Brockman
Greg Brockman@gdb·
The world is transitioning to a compute-powered economy. The field of software engineering is currently undergoing a renaissance, with AI having dramatically sped up software engineering even over just the past six months. AI is now on track to bring this same transformation to every other kind of work that people do with a computer.

Using a computer has always been about contorting yourself to the machine. You take a goal and break it down into smaller goals. You translate intent into instructions. We are moving into a world where you no longer have to micromanage the computer. More and more, it adapts to what you want. Rather than doing work with a computer, the computer does work for you. The rate, scale, and sophistication of the problem solving it will do for you will be bounded by the amount of compute you have access to.

Friction is starting to disappear. You can try ideas faster. You can build things you would not have attempted before. Small teams can do what used to require much larger ones, and larger ones may be capable of unprecedented feats. More and more, people can turn intent into software, spreadsheets, presentations, workflows, science, and companies. People are spending less energy managing the tool and more energy focusing on what they are actually trying to create. That shift brings a kind of joy back into work that many people haven't felt in a long time. Everyone can just build things with these tools.

This is disruptive. Institutions will change, and the paths and jobs that people assumed were stable may not hold. We don't know exactly how it will play out, and we need to take mitigating downsides very seriously, as well as figuring out how to support each other as a society and world through this time. But there is something very freeing about this moment. For the first time, far more people can become who they want to become, with fewer barriers between an idea and a reality.

OpenAI's mission implies making sure that, as the tools do more, humans are the ones who set their intent, and that the benefits are broadly distributed rather than empowering just one person or a small set of people. We're already seeing this in practice with ChatGPT and Codex. Nearly a billion people are using these systems every week in their personal and work lives. Token usage is growing quickly across many use cases, as the surface of ways people get value from these models keeps expanding.

Ten years ago, when we started OpenAI, we thought this moment might be possible. It's happening on the earlier side, and in a much more interesting and empowering way for everyone than we'd anticipated (for example, we are seeing an emerging wave of entrepreneurship that we hadn't foreseen). And at the same time, we are still so early, and there is so much for everyone to define about how these systems get deployed and used in the world.

The next phase will be defined by systems that can do more: reason better, use tools better, plan over longer horizons, and take more useful actions on your behalf. And there are horizons beyond, as AI starts to accelerate science and technology development, which has the potential to truly lift up quality of life for everyone. All of this is starting to happen, in small ways and large, today, and everyone can participate. I feel this shift in my own work every day, and see a roadmap to much more useful and beneficial systems. These systems can truly benefit all of humanity.
Hunter 🌆
Hunter 🌆@rhunterh·
There's a cohort of American males that believe with 1-2 years dedicated practice they could win the Masters. As a marketer, it's important to understand that you can sell these people anything.
Steve Yegge
Steve Yegge@Steve_Yegge·
I'm back on X. Took a couple months hiatus after the death-threat phone calls to my family from the crypto weirdos. They can all fuck right off, permanently. I will not agree to speak of crypto again, so don't ask, not even in person. Thank you for respecting my family's privacy on this.

I've been trying to find some lipstick on the LinkedIn pig, and god damn if it isn't a weird, weird place. Almost all negative signal there has been suppressed, and also all quirkiness, it's all smoothed and oversaturated and Stepford-Wives-y. Always been like that, but now it has been terminally infected by AI writing, which has sent it into a spiral, and it's worse than it has ever been. (Note that I never use AI to write, I've taken to using comma splices as a shibboleth.)

Everyone on LinkedIn writes their posts with (bad) AI. C-suites, coaches, leaders, principal engineers, product managers--they all have the same banal writing, the same stupid tells. It's like combovers, do they really think nobody else is going to notice? I could sort of stomach LinkedIn when it was just a weird cult. But when everyone started sporting AI combovers, I had to nope out. There are no human cognitive contributions happening there.

So fuck it, I'll take the weirdos here along with the great people. I don't think you can find the one without the other. I am officially back, howdy howdy, will post a lot. Feels good. DM me and say hi.
Greg Johns retweeted
Aaron Levie
Aaron Levie@levie·
The more I meet enterprise CIOs and AI leaders outside of tech, the more it's obvious that if you're building software that doesn't have a great headless mode, you're going to be at risk in the coming years. Asked a group of 20 IT leaders across banking, media, finance, and healthcare if they will have any vendors left in 3-5 years that don't have a good API option for their service, and it was a unanimous "no".

This is clearly going to change the nature of software going forward. You have to be completely comfortable serving up your value proposition as much through agents, on or off your platform, as through your own interface. I suspect most platforms will make it to the other side because of how forceful the trend will be, but of course some won't if their heads are in the sand.

But on the other end, the upside is that in a world of 100X+ more agents doing work with software than people ever did, there are far more use cases to drive and be a part of. In many ways it's a renaissance if you're tied to critical data or workflows because of what customers can now use you for. It will certainly force an evolution of business models over time - whether you embed all of this agentic usage in a seat license or make it all consumption-based - but dollars will always flow to where value is created. Going to be fun!
Greg Johns
Greg Johns@gjohnsx·
@Teknium something going on with openai-codex provider. reasoning is visible then i get an invalid api response for the actual response.output on v0.7.0
Greg Johns tweet media
Greg Johns retweeted
GEOFF WOO
GEOFF WOO@geoffreywoo·
every saas founder who doesn't convert to api-first architecture in the next 6 months will get obliterated by agents. your beautiful dashboard means nothing when codex and claude can call your endpoints directly. stop building interfaces, start building infrastructure
Greg Johns retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
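The "compile raw/ into a wiki" step described above can be sketched in a few lines. This is a minimal illustration, not Karpathy's actual tooling: the LLM summarization is replaced by a naive first-line excerpt, and `build_index`, the file names, and the `[[wikilink]]` output format are illustrative assumptions.

```python
# Sketch: scan a raw/ directory of clipped .md files and emit an index.md body.
# A real pipeline would have an LLM write the summaries instead of taking
# the first line of each document.
from pathlib import Path
import tempfile

def build_index(raw_dir: Path) -> str:
    """Emit an index with a wiki-style link plus a one-line excerpt per doc."""
    lines = ["# Wiki Index", ""]
    for doc in sorted(raw_dir.rglob("*.md")):
        excerpt = doc.read_text(encoding="utf-8").strip().splitlines()[0]
        lines.append(f"- [[{doc.stem}]] - {excerpt}")
    return "\n".join(lines)

# Demo with throwaway files standing in for clipped articles.
with tempfile.TemporaryDirectory() as tmp:
    raw = Path(tmp) / "raw"
    raw.mkdir()
    (raw / "attention.md").write_text("Notes on attention mechanisms.")
    (raw / "rag.md").write_text("Why fancy retrieval was unnecessary here.")
    index = build_index(raw)
    print(index)
```

The point of keeping the index as plain markdown is exactly what the tweet describes: the same files double as browsable Obsidian notes and as cheap context an agent can read without a retrieval layer.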
Greg Johns
Greg Johns@gjohnsx·
@0rf youtube search has never been the same since
Greg Johns retweeted
BuccoCapital Bloke
BuccoCapital Bloke@buccocapital·
The most interesting part of the Dorsey essay on management TL;DR - Remote companies have an advantage in the AI era because they can only thrive with rigorous documentation, which is perfectly repurposed as context for AI
BuccoCapital Bloke tweet media
Greg Johns retweeted
Wade Foster
Wade Foster@wadefoster·
Today we released our new AI Fluency Rubric. We use it for every hire, focusing on what they've actually built. Last May we open-sourced V1. Hundreds of companies used it to screen candidates and develop teams. It worked. But the floor moved fast.

An updated look at the 3 levels of AI fluency at @Zapier:
1. Capable: "I use AI to operate at a meaningfully higher level."
2. Adoptive: "I orchestrate AI and build systems that elevate how I work."
3. Transformative: "I re-engineer how work happens."

We evaluate these across 4 dimensions: Mindset, Strategy, Building, and Accountability. We're sharing V2 publicly for the same reason we shared V1: every company needs a framework for this, and most don't have one yet. Don't see your role? See all departments / learn more here: zpr.io/xQq5PHMDChrL
Wade Foster tweet media
Greg Johns retweeted
Ben Cera
Ben Cera@Bencera·
How to build a team in 2026:

Hire for 3 things only:
- AI-pilled (AI is the smartest person in the room now. "10x engineer" is dead.)
- High ownership (every person owns a KPI, not tasks)
- Highly motivated (if they're not pumped they're dragging everyone down)

Then 2 rules:
- Ship fast. Bugs in prod are fine.
- Monitor for 1-2 hrs after every deploy.

That's it. No juniors. No managers. No layers. Just killers who obsess over metrics. The companies we partner with at Polsia are built this way. In a year you will just need one person + AI. That's the future I'm building now.
Greg Johns retweeted
Mckay Wrigley
Mckay Wrigley@mckaywrigley·
btw we're 6-12 months away from ai tools being able to:
- autonomously use any piece of software in the world
- effortlessly clone it in a weekend
- constantly monitor it for updates
- add whatever features you want on top of it
all without you ever needing to use your computer
Claude@claudeai

Computer use is now in Claude Code. Claude can open your apps, click through your UI, and test what it built, right from the CLI. Now in research preview on Pro and Max plans.

Greg Johns retweeted
Nous Research
Nous Research@NousResearch·
Hermes Agent v0.5.0 is out:
Nous Research tweet media
Greg Johns retweeted
Matt Pocock
Matt Pocock@mattpocockuk·
We can't get rid of the calls, but we can get rid of the coding:
1. Jump on a call with your dev colleague/domain expert; it creates a transcript
2. Generate notes from the transcript
3. Pass notes to a coding agent; it creates tickets
4. Pass tickets to an AFK agent; it creates code
5. Repeat with a new call
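The loop above is a chain of stages, each consuming the previous stage's artifact. A minimal sketch of that shape, with every AI-backed step stubbed by a trivial heuristic; the function names, the "should"-sentence heuristic, and the ticket fields are all illustrative assumptions, not a real product:

```python
# Sketch of the call-to-code pipeline: transcript -> notes -> tickets.
# Real tooling (transcription, an LLM, a coding agent) would replace the stubs.

def transcript_to_notes(transcript: str) -> list[str]:
    """Stage 2 stub: pull action-item sentences out of a raw transcript.
    A real pipeline would hand the transcript to an LLM instead."""
    return [s.strip() for s in transcript.split(".")
            if "should" in s.lower() and s.strip()]

def notes_to_tickets(notes: list[str]) -> list[dict]:
    """Stage 3 stub: turn each note into a ticket an agent could pick up."""
    return [{"id": i + 1, "title": note, "status": "todo"}
            for i, note in enumerate(notes)]

call = ("We should expose the export endpoint as JSON. "
        "Marketing liked the demo. "
        "The retry logic should back off exponentially.")
tickets = notes_to_tickets(transcript_to_notes(call))
for t in tickets:
    print(t["id"], t["title"])
```

The design point the tweet is making survives even this toy version: because each stage emits a plain artifact (text, then tickets), any stage can be swapped out independently, and the final stage can be an unattended agent.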
Greg Johns retweeted
Aaron Levie
Aaron Levie@levie·
“We’ve also been moving off legacy systems with poor, slow, outdated, and inconsistent APIs.” If you’re building software that can’t work fully headlessly in a way that agents want to use, you’re not prepared for what the future of software is going to look like. Agents will use software 100X more than people, and people will more and more interact with their data and workflows via agents across many different platforms. This is the real risk but also opportunity for platforms right now. Software doesn’t go away, but it becomes the guardrails and business logic for what agents are able to operate on. But if you can’t connect to wherever the agents want to do that work, you’re DOA.
Guillermo Rauch@rauchg

Almost every SaaS app inside Vercel has now been replaced with a generated app or agent interface, deployed on Vercel. Support, sales, marketing, PM, HR, dataviz, even design and video workflows. It's shocking.

The SaaSpocalypse is both understated and overstated. Overstated because the key systems of record and storage are still there (Salesforce, Snowflake, etc.). Understated because the software we are generating is more beautiful, personalized, and crucially, fits our business problems better.

We struggled for years to represent the health of a Vercel customer properly inside Salesforce. Too much data (trillions of consumption data points), the ontology of Vercel was a mismatch to the built-in assumptions, and the resulting UI was bizarre. We generated what we needed instead. When you don't need a UI, you just ask an agent with natural language.

We've also been moving off legacy systems with poor, slow, outdated, and inconsistent APIs, as well as just dropping abstraction down to more traditional databases. UI is a function 𝑓 of data (always has been), and that 𝑓 is increasingly becoming the LLM.

Greg Johns retweeted
Cloudflare
Cloudflare@Cloudflare·
We’re introducing Dynamic Workers, which allow you to execute AI-generated code in secure, lightweight isolates. This approach is 100 times faster than traditional containers. cfl.re/4c2NvPl