Context Engineering Guild of New York City

60 posts


@ContextGuildNYC

NYC IRL community for advanced practitioners of LLM coding techniques. Brooklyn meetups on every new moon.

New York City · Joined January 2026
34 Following · 59 Followers
Context Engineering Guild of New York City retweeted
sophie @netcapgirl
claude cowork is making me think maybe we’ll look back and it’ll be obvious that humans were never meant to spend their lives working behind a screen. we’ll see it as inevitable that computers do everything for us on computers and the future of work is cooler than we can imagine
336 replies · 584 reposts · 9.3K likes · 324.7K views
Context Engineering Guild of New York City retweeted
Allie K. Miller @alliekmiller
Yesterday, I met with Anthropic and OpenAI and Google. (Separately, of course.) And while the conversations were largely confidential, I do want to share some aggregated reflections on the day as well as general SF takeaways. ⬇️

1) Competitive advantage as a solo practitioner really does come from taking action, finding an area with a bit of friction, and doubling down. Ex: memory management right now isn’t perfect, but allocating an hour to improving that system gives you a ton of leverage over others.

2) SF continues to be the number one place for AI work. I know that’s not surprising. I would put New York at a healthy second place. SF tends to be more about crazy agent experiments for the thrill of capability and discovery, and NYC tends to be more about kinda crazy agent experiments to find new ways to make money. Not saying either is better. But I met several people renting two apartments to straddle these worlds. You want the frontier of SF and the enterprise insights of NYC. It’s one reason I travel between them so much.

3) All AI labs want to hear more from people. All of them. What are you using it for, what do you like, what do you hate, what do you need. Users have a TON of power over the direction of these tools. Keep testing and tweeting at them!!

4) There is very clearly a third customer cohort that is bubbling up and underserved. It’s not developers…it’s not the business professional basic users…it’s builders. Everyone can build now. It’s marketing and sales folks vibe coding. It’s legal folks building complex skills. It’s a finance expert building a side project. This is a really undertapped customer base. They feel the Cursors of the world are too complex and the doc summarization tools of the world are too basic.

5) Not sure if it was just sample size, but far fewer people were wearing tech gear compared to when I lived in SF. Everyone was still dressed casually, but I used to see Splunk and Optimizely and Slack and VC gear everywhere. People seem more in stealth swag now.

6) We may soon have our world model moment.

7) Speed of iteration and shipping is faster than I’ve ever seen. We see the nonstop drops from Anthropic. We see that because of scale, providers can get a much faster feedback loop on products or features that aren’t hitting. A lot of 2025 was experimentation, but ever since the OpenClaw moment over the holidays, the releases from all three labs have been more concentrated on…things that sorta look and feel like OpenClaw.

8) Small teams can pull off more than ever before. Small teams are the powerhouses of innovation right now. This means that finding new ways to share knowledge, break silos, and remove duplicate work is going to be even more important. AI agents functioning as actual teammates that support an entire system is key.

9) Build more Skills. Build better Skills.

10) Misinformation on AI tools and leaks spreads FAST. I’ve seen so many fake stories about these AI labs. Your company needs to actually TEST these tools on your actual use cases to know which models and tools are best, and you need to not make large-scale snap decisions based on a rumor of a rumor of a rumor. We will see more volatility. Plan for it.

11) You can feel the seriousness of this moment. Even during random conversations I had in line at a cafe. Lots of folks are worried about job loss and lack of meaning.

12) Mac minis were sold out ;)
89 replies · 65 reposts · 584 likes · 106.5K views
Andrew Jefferson @EastlondonDev
My agent has been working hard all day to get the neural wasm interpreter (frozen weights) incorporated into nanoGPT and training efficiently. The goal is to train a GPT-2-equivalent LLM from scratch that will use it as a tool.
2 replies · 0 reposts · 14 likes · 1.5K views
Context Engineering Guild of New York City
Running a wasm VM in the forward pass _inside the transformer_. This is going to be part of the shape of the future: machines outputting deterministically perfect code with no latency / no tool calls. Yuuuuuuge
Andrew Jefferson@EastlondonDev

This REPL isn’t like anything you’ve seen before. It’s running on a neural network. It’s a feed-forward network, using attention, and it implements a fully working wasm interpreter. When I saw an article on the topic of wasm-interpreting llms I had to build this.

Wasm + the tech behind AI + a repl … running in a browser and built and deployed on Replit - this is an homage to @amasad, one of my personal heroes 🐐

The idea is based on a recent post by @PerceptaAI but they didn’t provide the model architecture or really any of the important implementation details, so I had to do a lot of figuring out and testing with the Replit agent to build it - it’s mind blowing that AI can produce this - a neural network architecture with hand-crafted weights to implement something too new to be in any of its training data.

P.s. it was all done on my phone, most of it while flying from SFO to Munich, what a time to be alive!

0 replies · 0 reposts · 0 likes · 58 views
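The “hand-crafted weights” trick in the thread above scales down to something you can verify in a few lines. Here is a minimal sketch of the principle (my illustration, not the thread’s architecture): a tiny ReLU network whose weights are written by hand, not trained, so its forward pass deterministically computes an exact function, in this case XOR of two bits. A wasm interpreter expressed as frozen weights is the same move at vastly larger scale.

```python
# Hand-set weights (no training) so the forward pass computes an exact,
# deterministic function: XOR of two bits via two ReLU hidden units.

def relu(x: float) -> float:
    return max(x, 0.0)

# Hidden units compute relu(a - b) and relu(b - a); their sum equals XOR(a, b).
W1 = [(1.0, -1.0), (-1.0, 1.0)]   # input -> hidden weights
W2 = (1.0, 1.0)                   # hidden -> output weights

def xor_net(a: int, b: int) -> int:
    hidden = [relu(w0 * a + w1 * b) for (w0, w1) in W1]
    return int(sum(w * h for w, h in zip(W2, hidden)))

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor_net(a, b)}")
```

The point of the toy: once weights encode the function exactly, the network’s output is not a statistical guess but a computation, which is what makes “deterministically perfect code” inside a forward pass plausible.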
Context Engineering Guild of New York City
This is a really big deal
Hedgie@HedgieMarkets

🦔 Researchers at Aikido Security found 151 malicious packages uploaded to GitHub between March 3 and March 9. The packages use Unicode characters that are invisible to humans but execute as code when run. Manual code reviews and static analysis tools see only whitespace or blank lines. The surrounding code looks legitimate, with realistic documentation tweaks, version bumps, and bug fixes. Researchers suspect the attackers are using LLMs to generate convincing packages at scale. Similar packages have been found on NPM and the VS Code marketplace.

My Take

Supply chain attacks on code repositories aren't new, but this technique is nasty. The malicious payload is encoded in Unicode characters that don't render in any editor, terminal, or review interface. You can stare at the code all day and see nothing. A small decoder extracts the hidden bytes at runtime and passes them to eval(). Unless you're specifically looking for invisible Unicode ranges, you won't catch it.

The researchers think AI is writing these packages because 151 bespoke code changes across different projects in a week isn't something a human team could do manually. If that's right, we're watching AI-generated attacks hit AI-assisted development workflows. The vibe coders pulling packages without reading them are the target, and there are a lot of them.

The best defense is still carefully inspecting dependencies before adding them, but that's exactly the step people skip when they're moving fast. I don't really know how any of this gets better. The attackers are scaling faster than the defenses.

Hedgie🤗 arstechnica.com/security/2026/…

0 replies · 0 reposts · 1 like · 67 views
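One concrete mitigation for the invisible-character trick described above is to scan dependencies for code points that render as nothing. A minimal sketch: the post doesn’t specify the exact code points used in the campaign, so the character set below is an assumption (common zero-width and filler characters, plus anything Unicode classifies as a format character).

```python
# Flag invisible Unicode code points that could hide a payload in
# otherwise blank-looking source lines. The explicit set below is an
# assumption, not the exact ranges from the reported attack; category
# "Cf" (format) catches most zero-width characters generically.
import unicodedata

INVISIBLE = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
    "\u3164",  # HANGUL FILLER
    "\uffa0",  # HALFWIDTH HANGUL FILLER
}

def find_invisible(source: str):
    """Return (line, column, codepoint name) for each suspicious character."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in INVISIBLE or unicodedata.category(ch) == "Cf":
                hits.append((lineno, col, unicodedata.name(ch, hex(ord(ch)))))
    return hits

# Line 2 of this sample looks blank but carries hidden characters.
sample = "x = 1\n\u200b\u3164\n"
for lineno, col, name in find_invisible(sample):
    print(f"line {lineno}, col {col}: {name}")
```

Running something like this over `node_modules` or a vendored dependency before the first `import` is cheap compared to what eval() of hidden bytes costs you.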
Context Engineering Guild of New York City
Soon architecture innovations will be credited to centaurs, then to spikes.
Anish Moonka@AnishA_Moonka

Every AI model you’ve ever used, ChatGPT, Claude, Gemini, all of them, passes information between its layers the exact same way it did in 2015. Moonshot’s new paper rewires that for the first time.

AI models are built in layers. Think of it like floors of a building. Each floor does some work, then passes results up to the next floor. Since 2015, every model has used the same handoff method: each floor dumps everything it learned into one big pile, and the next floor gets the whole pile. No filtering. No choosing what’s useful. Just a growing stack of everything, treated equally.

The guy who invented this method, Kaiming He, wrote the most cited paper of the 21st century doing it. Over 250,000 citations, confirmed by a Nature analysis. It’s in every major AI model on earth. And for 11 years, nobody seriously questioned whether the handoff itself could be smarter.

Moonshot found three problems. Every floor gets the same mix, even though different floors need different information. Once something gets blended in, no later floor can go back and pick out just the useful parts. And deeper floors have to scream louder and louder to be heard over the growing pile, which makes training unstable.

The fix borrows from how AI already reads text. Since 2017, AI models don’t process words one at a time in order. They look at all words at once and decide which ones matter most. Moonshot applies that same idea to the building’s floors. Instead of blindly accepting the whole pile, each floor looks back at all previous floors and picks which ones to listen to. The model learns those preferences on its own.

Results on their 48-billion-parameter model: a PhD-level science reasoning test jumped from 36.9 to 44.4 (for context, human PhD experts score around 65 on this). Math went from 53.5 to 57.1. A coding test from 59.1 to 62.2. All from changing the wiring between floors, not the floors themselves. And the method matches what you’d get from using 25% more computing power under the old approach, adding less than 2% extra processing time.

Moonshot was valued at $4.3 billion in December. Bloomberg reported two days ago they’re seeking $18 billion, quadrupling in three months. Founded by three Tsinghua University classmates in 2023, they process 100 billion chunks of text through Kimi every day.

The biggest gains came on the hardest tests. Turns out the plumbing was quietly holding everything back.

0 replies · 0 reposts · 1 like · 99 views
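The rewiring the thread describes can be sketched in a few lines. This is an illustration of the concept only, not Moonshot’s actual architecture (the paper’s parameterization isn’t reproduced in the thread): the classic residual stream adds every block’s output onto one running sum, while the variant lets each block softmax-weight the outputs of all previous blocks and work on that chosen mix.

```python
# Toy contrast: residual stream vs. learned mixing over previous layers.
# `block` is a stand-in for a transformer block; any function of x works.
import math

def block(x):
    return [math.tanh(v) for v in x]

def residual_forward(x, n_layers):
    """Classic residual stream: h <- h + block(h) at every layer."""
    h = x
    for _ in range(n_layers):
        h = [a + b for a, b in zip(h, block(h))]
    return h

def layerwise_attention_forward(x, n_layers, scores):
    """Each block mixes the outputs of all previous blocks.
    scores[i] holds (here: fixed placeholder) logits over outputs 0..i."""
    outputs = [x]
    for i in range(n_layers):
        logits = scores[i][: len(outputs)]
        z = [math.exp(s) for s in logits]
        total = sum(z)
        w = [v / total for v in z]  # softmax over previous layers
        mixed = [sum(wi * o[j] for wi, o in zip(w, outputs))
                 for j in range(len(x))]
        outputs.append([a + b for a, b in zip(mixed, block(mixed))])
    return outputs[-1]

x = [0.5, -1.0, 2.0]
scores = [[0.0] * (i + 1) for i in range(4)]  # uniform logits as placeholder
print(residual_forward(x, 4))
print(layerwise_attention_forward(x, 4, scores))
```

In a real model the `scores` would be learned (and typically input-dependent), which is what lets each layer filter the pile instead of receiving all of it equally.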
Context Engineering Guild of New York City retweeted
Thariq @trq212
we need a better word than vibe coding man, Claude can create the most beautiful things
277 replies · 196 reposts · 4.9K likes · 278K views
Context Engineering Guild of New York City retweeted
Ryan Hart @thisdudelikesAI
🚨BREAKING: Someone just open-sourced a headless browser that runs 11x faster than Chrome and uses 9x less memory. It's called Lightpanda and it's built from scratch specifically for AI agents, scraping, and automation. Not a Chromium fork. Not a hack. A completely new browser written in Zig. Here's why this changes everything for AI builders: ↓
278 replies · 941 reposts · 8.2K likes · 738.7K views
Context Engineering Guild of New York City retweeted
frankie @FrankieIsLost
december 2019. doctors in wuhan hospitals notice pneumonia cases that don’t fit the usual patterns. patients aren’t responding to standard treatments. some have no clear exposure history.
jack@jack

we're making @blocks smaller today. here's my note to the company.

today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone.

first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you’re outside the U.S. you’ll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay.

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.

i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome. a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures.

a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that, and do the right thing for our customers.

we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold.

to those of you leaving…i’m grateful for you, and i’m sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward.

to those staying…i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers. our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces. that's what i'm focused on now.

expect a note from me tomorrow.

jack

68 replies · 589 reposts · 11.7K likes · 1.2M views
Context Engineering Guild of New York City retweeted
Andrej Karpathy @karpathy
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks, but imo coding agents basically didn’t work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity, and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home, so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report, and it was just done. I didn’t touch anything. All of this could easily have been a weekend project just 3 months ago, but today it’s something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You’re not typing computer code into an editor the way things were since computers were invented; that era is over. You're spinning up AI agents, giving them tasks *in English*, and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.

It’s not perfect: it needs high-level direction, judgement, taste, oversight, iteration, and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right, hand off the parts that work, and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
1.6K replies · 4.8K reposts · 37.3K likes · 5.1M views
Context Engineering Guild of New York City retweeted
Ben (no treats) @andersonbcdefg
curious... you claim to care about "model welfare" and yet you haven't granted your "open claw" the ability to kill itself using the Philips Hue™ Smart Plug... care to elaborate?
54 replies · 118 reposts · 3.6K likes · 154.3K views
Context Engineering Guild of New York City retweeted
Eric Buess @EricBuess
claude update
claude --worktree

Or if you want to use tmux panes:

claude --worktree --tmux

Optionally, also ask Claude to “use worktrees for subagents”.

“Custom agents support git worktrees. You can also make subagents always run in their own worktree. To do that, just add ‘isolation: worktree’ to your agent frontmatter”
Boris Cherny@bcherny

Introducing: built-in git worktree support for Claude Code. Now, agents can run in parallel without interfering with one another. Each agent gets its own worktree and can work independently. The Claude Code Desktop app has had built-in support for worktrees for a while, and now we're bringing it to the CLI too. Learn more about worktrees: git-scm.com/docs/git-workt…

11 replies · 24 reposts · 377 likes · 55.3K views
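The feature above maps onto plain git worktrees, which you can drive by hand with stock git (run inside an existing repository; the branch and directory names below are illustrative):

```shell
# Plain-git equivalent of per-agent worktrees: each checkout gets its own
# working directory and branch while sharing one object store, so parallel
# edits never clobber each other.
git worktree add ../agent-a -b agent-a   # isolated tree for one agent
git worktree add ../agent-b -b agent-b   # a second, independent tree
git worktree list                        # show every attached worktree

# when an agent's branch has been merged or abandoned:
git worktree remove ../agent-b
git branch -d agent-b
```

Because worktrees share `.git` object storage, spinning one up per agent is cheap compared to a full clone, which is presumably why it is the isolation unit here.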
Context Engineering Guild of New York City retweeted
Boris Cherny @bcherny
Introducing: built-in git worktree support for Claude Code. Now, agents can run in parallel without interfering with one another. Each agent gets its own worktree and can work independently. The Claude Code Desktop app has had built-in support for worktrees for a while, and now we're bringing it to the CLI too. Learn more about worktrees: git-scm.com/docs/git-workt…
439 replies · 851 reposts · 11K likes · 1.3M views
Context Engineering Guild of New York City retweeted
Connor Leahy @NPCollapse
The vibe shift is coming, and it's going to be very, very sudden.
70 replies · 35 reposts · 668 likes · 70K views
Context Engineering Guild of New York City retweeted
Eric S. Raymond @esrtweet
Is it weird that AI coding assistance is not giving me identity fracture?

A lot of software developers are feeling disoriented and threatened these days. Programming by hand is clearly going the way of the buggy whip and the hand-cranked auger. Which is how we're finding out that a lot of people have their identities bound up in being good at hand-coding and how it feels to do that.

That's not me. It's not me at all. Rather to my surprise, I don't miss coding by hand, not any more than I missed writing assembler when compilers ate the world and made that unnecessary. (That was a couple of years back, around 1983, for you youngsters.)

Maybe the fact that I'm not feeling any of this disorientation disqualifies me from having anything to say to people who are. On the other hand...if you can learn to emulate my mental stance and be completely unbothered, maybe that would be a good thing?

So. If you're a programmer, and you're feeling disoriented, try this on for size: I like being a wizard. I like being able to speak spells, to weave complex patterns of logic that make things happen in the world. Writing code is a way to manifest my will. Yes, I've piled up a lot of arcane knowledge over the 50 years I've been doing this. But languages of invocation, they come and they go. Been a long time since I've had any use for being able to program in 8086 assembler, and that's okay. I have better spells now, and these days some rather powerful familiars.

What I'm inviting you to do is think of yourself as a wizard. Not as a person who writes code, but as a person who is good at assuming the kind of mental states required to bend reality with the application of spells. And if that's who you are, does it matter if the spells are painstakingly scribed in runes of power, versus being spoken to an obedient machine spirit? It's all one; it's all the manifestation of will.

Arcane languages come and go, machine spirits appear and then diminish to be replaced by more powerful ones, but you? You are the magic-wielder. Without you, none of it happens. Same as it ever was. Same as it ever was.

And so mote it be.
232 replies · 303 reposts · 2K likes · 177.4K views