Craig Connors

261 posts


@egregious

Army veteran. CTO for Infrastructure & Security @ Cisco. Opinions are my own.

Nashville, TN and Akumal, MX · Joined May 2007
223 Following · 389 Followers
Craig Connors
Craig Connors@egregious·
@HeidyKhlaaf I didn’t (and would never) call you naive. I said the take was. Anthropic is not exaggerating here - your red flags aren’t invalid in the face of limited information, but they are leading people to underestimate this (and future) models.
3
0
7
2.1K
Dr Heidy Khlaaf (هايدي خلاف)
@egregious You're right, AISI and all major AI companies sought out my expertise and hired me (including to lead cyber evaluations while I was at Trail of Bits) because I'm very naive.
5
0
65
6.7K
Dr Heidy Khlaaf (هايدي خلاف)
As someone who has audited dozens of safety-critical systems, built static analysis tools, and used most formal verification and security tools, here are some red flags that should caution against taking these claims at face value: 1. There are no comparison benchmarks with 1/
Anthropic@AnthropicAI

Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing

44
114
1.3K
449.2K
Craig Connors
Craig Connors@egregious·
Cisco was early in many AI initiatives and we are all learning what “failure” and “learning” look like on a 2 month software cycle riding a 3-5 year hardware cycle. But that doesn’t stop us from experimenting and that’s why I’m here.
Ethan Mollick@emollick

I think that if companies are not failing at all with their AI efforts, it is a sign that they are not being ambitious enough. This is a fundamentally new technology that we do not know how to use well. Achieving breakthroughs will require experimentation, which requires failure.

0
0
1
294
Craig Connors
Craig Connors@egregious·
@vtahowe Products need to be re-architected to support active defenses and rapid response
1
0
2
37
Allie Howe
Allie Howe@vtahowe·
So much to share from RSAC 👀 > Where you run your AI code matters > We need defensive agents with large amounts of autonomy and capability to react quickly to attackers > Your code needs to be scanned and tested early and often to get ahead of what's coming
Insecure Agents Podcast@insecureagents

We had many interesting conversations at RSAC to share with you. @alexstamos talked to us about how only a handful of large companies used to have to worry about a 0-day, but now, with advances in AI models, it's everyone.

1
4
13
3K
Craig Connors reposted
pedram.md
pedram.md@pdrmnvd·
men in their 40s used to have cool midlife crisis but now they just have agentic workflows
159
677
7.6K
439.8K
Craig Connors reposted
Samantha Ruddy
Samantha Ruddy@samlymatters·
I love that people from Massachusetts created the most generous, socialist health care system in all 50 states while also being the most aggressive drivers. They’re like “I want my neighbors to have the best care. They’re gonna need it if they don’t get out of the left lane.”
73
991
13.9K
1.5M
Craig Connors reposted
Dr. Dominic Ng
Dr. Dominic Ng@DrDominicNg·
i regret to inform you that personal growth rarely comes from acquiring new knowledge and almost always from:
- getting humiliated
- showing up terrified and doing it anyway
- admitting you might be the problem
290
11.9K
83.4K
1.2M
Craig Connors reposted
Andrej Karpathy
Andrej Karpathy@karpathy·
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks, but imo coding agents basically didn't work before December and basically work since. The models have significantly higher quality, long-term coherence and tenacity, and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home, so I wrote: "Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me". The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report, and it was just done. I didn't touch anything. All of this could easily have been a weekend project just 3 months ago, but today it's something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You're not typing computer code into an editor the way things have been since computers were invented; that era is over. You're spinning up AI agents, giving them tasks *in English*, and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top-tier "agentic engineering" feels very high right now.

It's not perfect; it needs high-level direction, judgement, taste, oversight, iteration, and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right, to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
1.6K
4.8K
37.2K
5.1M
Craig Connors
Craig Connors@egregious·
Just sitting in a @CiscoSecure Town Hall meeting blown away by what (and how) the team is building. 2026 is going to be exciting!
0
1
3
1.9K
Ram
Ram@ramalingams400·
@zerohedge lol. As a legacy programmer I can say they underestimate legacy codebases. Human programmers have been trying this for 20 years with limited success. That code was written at different levels and in different time periods that AI can't understand. Even experienced programmers struggle to read it.
26
5
253
33.8K
zerohedge
zerohedge@zerohedge·
*ANTHROPIC SAYS CLAUDE CODE CAN AUTOMATE COBOL MODERNIZATION

And there goes IBM. Amodei is now funding Anthropic by buying puts on all the companies he "disrupts" daily
198
391
5.7K
1.2M
Craig Connors
Craig Connors@egregious·
🫡
Jeetu Patel@jpatel41

To my Product team at @Cisco. Just because you can build it instantly doesn’t mean it is worth shipping. Create a new mental model. Here’s my advice…

Be intellectually honest with yourself, and if you don’t possess these things, figure out a way to learn them, FAST, in order to stay relevant. The rules of the game have changed faster than anyone thought. Don’t fight it. Adjust to the new reality. Keep learning. Stay hungry. Be continually curious. You can change the world now more than at any time in the past.

Your future self needs to define success very differently. Internalize this… in the new world, more than ever before, the next-gen engineer will define success very differently. It looks something like this:
- Success requires judgement
- Success requires instinct
- Success requires clarity in your mind on the most important problem you want to solve
- Success requires good taste
- Success requires obsession over outcomes
- Success requires understanding unit economics. Be cost-aware like an operator.
- Success requires extreme emphasis on safety
- Success requires that you are a terrific manager of digital agents
- Success requires that you move really fast
- Success requires that you adopt an entirely new mental model
- Success requires constantly learning and unlearning

Ponder what it will take to succeed. It will be counterintuitive. It’ll be unsettling. It will take sacrifice. It will be ridiculously hard. It’ll be scary. Yet it’ll be exciting. It’ll allow you to imagine a very different caliber of ambition.

At Cisco, we have shipped our first product that is, at this point, fully written by AI. Kudos to the AI Defense team. 100% of the code in AI Defense is written by AI. By the end of 2026, we conservatively estimate at least half a dozen products written completely by AI. By the end of 2027, we will shoot to have 70% of our products written 100% with AI.

And these products must be superior in quality, performance, simplicity, usability, adoption, and in delivering real business outcomes for our customers. I hope you make these goals look ridiculously conservative with the work you make AI agents do for you. It’s time to up-level ourselves. Modern development practices have flipped. I am proud of all of you who are teaching yourselves to rethink how you work. Reimagine how you code. Your teams, augmented by digital coworkers, are going to look very different. You will have many more teammates who work around the clock.

I’m excited for what lies ahead in our innovation journey. Your innovation journey. Move fast. Deliver something magical. Obsess about safety. Let’s build a great freaking company, and don’t let your imagination be your constraint.

0
0
1
136
Craig Connors
Craig Connors@egregious·
@ronhowellnc Yes, I absolutely believe network architecture and routing can and will be done by AI. These are not even "hard" problems per se.
0
0
0
11
Ron Howell
Ron Howell@ronhowellnc·
@egregious Software can be generated by A.I. Can SASE and network architecture soon be done by A.I. as well? I believe think time is still valuable. What do you say?
1
0
0
24
Craig Connors reposted
Greg Brockman
Greg Brockman@gdb·
Software development is undergoing a renaissance in front of our eyes. If you haven't used the tools recently, you are likely underestimating what you're missing. Since December, there's been a step-function improvement in what tools like Codex can do.

Some great engineers at OpenAI yesterday told me that their job has fundamentally changed since December. Prior to then, they could use Codex for unit tests; now it writes essentially all the code and does a great deal of their operations and debugging. Not everyone has yet made that leap, but it's usually because of factors besides the capability of the model.

Every company faces the same opportunity now, and navigating it well (just like with cloud computing or the Internet) requires careful thought. This post shares how OpenAI is currently approaching retooling our teams toward agentic software development. We're still learning and iterating, but here's how we're thinking about it right now.

As a first step, by March 31st, we're aiming that:
(1) For any technical task, the tool of first resort for humans is interacting with an agent rather than using an editor or terminal.
(2) The default way humans utilize agents is explicitly evaluated as safe, but also productive enough that most workflows do not need additional permissions.

In order to get there, here's what we recommended to the team a few weeks ago:

1. Take the time to try out the tools. The tools do sell themselves: many people have had amazing experiences with 5.2 in Codex, after having churned from Codex web a few months ago. But many people are also so busy they haven't had a chance to try Codex yet, or got stuck thinking "is there any way it could do X" rather than just trying.
- Designate an "agents captain" for your team: the primary person responsible for thinking about how agents can be brought into the team's workflow.
- Share experiences or questions in a few designated internal channels.
- Take a day for a company-wide Codex hackathon.

2. Create skills and AGENTS.md.
- Create and maintain an AGENTS.md for any project you work on; update the AGENTS.md whenever the agent does something wrong or struggles with a task.
- Write skills for anything that you get Codex to do, and commit them to the skills directory in a shared repository.

3. Inventory and make accessible any internal tools.
- Maintain a list of tools that your team relies on, and make sure someone takes point on making each one agent-accessible (such as via a CLI or MCP server).

4. Structure codebases to be agent-first. With the models changing so fast, this is still somewhat untrodden ground and will require some exploration.
- Write tests which are quick to run, and create high-quality interfaces between components.

5. Say no to slop. Managing AI-generated code at scale is an emerging problem, and it will require new processes and conventions to keep code quality high.
- Ensure that some human is accountable for any code that gets merged. As a code reviewer, maintain at least the same bar as you would for human-written code, and make sure the author understands what they're submitting.

6. Work on basic infra. There's a lot of room for everyone to build basic infrastructure, which can be guided by internal user feedback. The core tools are getting a lot better and more usable, but there's a lot of infrastructure that currently goes around the tools, such as observability, tracking not just the committed code but the agent trajectories that led to it, and central management of the tools that agents are able to use.

Overall, adopting tools like Codex is not just a technical but also a deep cultural change, with a lot of downstream implications to figure out. We encourage every manager to drive this with their team, and to think through other action items; for example, per item 5 above, what else can prevent a lot of "functionally-correct but poorly-maintainable code" from creeping into codebases?
413
1.6K
12.2K
2.1M
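Item 3 in the post above (making internal tools agent-accessible via a CLI) can be sketched with a small stdlib-only Python wrapper. The tool name `deploy-status`, its flags, and its output fields are hypothetical, invented for illustration; the point is that a plain CLI with a machine-readable mode is often enough for an agent to use a tool reliably.

```python
# Hypothetical sketch: wrapping an internal tool in a tiny CLI so a coding
# agent can invoke it directly instead of a human clicking through a dashboard.
# The tool, service names, and fields below are made up for illustration.
import argparse
import json
import sys


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="deploy-status",
        description="Report deployment status for a service (illustrative stub).",
    )
    parser.add_argument("service", help="service name, e.g. 'billing'")
    parser.add_argument(
        "--json",
        action="store_true",
        help="emit machine-readable output, which is easier for agents to parse",
    )
    return parser


def run(argv: list[str]) -> str:
    args = build_parser().parse_args(argv)
    # A real tool would query an internal API here; this stub fakes the answer.
    status = {"service": args.service, "healthy": True, "version": "1.4.2"}
    if args.json:
        return json.dumps(status)
    return f"{status['service']}: healthy (v{status['version']})"


if __name__ == "__main__":
    print(run(sys.argv[1:]))
```

An agent would then call `deploy-status billing --json` and parse the structured output; the `--json` flag is the piece that makes the tool comfortable for agents without changing the human-facing default.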
Allie Howe
Allie Howe@vtahowe·
Me: So the security implications of Clawdbot… 10 people today: ~ItS cAlLeD mOlTbOT nOw~ Ya I know I just didn’t know if you knew 🙈
5
0
16
806