Tony Carter

1.3K posts

@xtremesecurity

Joined May 2009
5.9K Following · 758 Followers
Dave Kennedy
Dave Kennedy@HackingDave·
Happy to announce that I took a board position (thanks @edskoudis) at the SANS Technology Institute (college degree programs). I'm truly excited, as it fits right into my passion for helping the next generation of cybersecurity folks get into the industry. Amazing mission reaching our youth, and impacting the next generation of hackers. I'm also on the board of Paradigm Cyber Ventures, which focuses on hands-on K-12 cybersecurity courses at the high school level, and we sponsor and fund many high school cybersecurity programs. Make the world a better place. sans.edu/about/governan…
23
15
235
5K
am.will
am.will@LLMJunky·
@ryancarson is this Linear? i've been avoiding learning YET ANOTHER tool. but you keep posting all this cool stuff, damn you
3
1
10
2K
Ryan Carson
Ryan Carson@ryancarson·
This is how a code factory should work. I was waiting at the doc’s office and filed 6 bugs - all from my phone. 1 is already merged, 2nd is merging, 3-6 are getting done in parallel. Using Symphony + Codex (Elixir, Codex App Server, Linear, GitHub)
40
23
755
82K
Bryan Helmkamp
Bryan Helmkamp@brynary·
Today I'm thrilled to open source what I've been working on... Meet Fabro (github.com/fabro-sh/fabro), the dark software factory for small teams of expert engineers. Fabro gets you out of the REPL (read-eval-prompt-loop) by layering deterministic workflow graphs over agent sessions. It's batteries included with cloud sandboxes, quality sign offs, multi-model ensembles, and Git checkpoints. It's MIT licensed so you can fork and own your AI coding toolchain. Single Rust binary with zero deps. I'd love to hear what you think!
16
41
473
29.3K
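The "deterministic workflow graphs over agent sessions" idea above can be sketched in a few lines. This is a toy in Python, not Fabro's API (Fabro is a single Rust binary; every step name and the context dict below are invented for illustration): steps declare their dependencies, and a topological sort fixes the execution order regardless of what happens inside each step.

```python
from graphlib import TopologicalSorter

# Hypothetical step functions standing in for agent sessions.
def plan(ctx):   ctx["plan"] = "add feature X"
def code(ctx):   ctx["diff"] = f"diff for: {ctx['plan']}"
def review(ctx): ctx["approved"] = "diff" in ctx
def merge(ctx):  ctx["merged"] = ctx["approved"]

# Each node maps to the set of steps it depends on.
graph = {"plan": set(), "code": {"plan"}, "review": {"code"}, "merge": {"review"}}
steps = {"plan": plan, "code": code, "review": review, "merge": merge}

ctx = {}
order = list(TopologicalSorter(graph).static_order())
for name in order:
    steps[name](ctx)   # a real system would checkpoint ctx after each step
```

Because the order is derived from the graph rather than from agent output, two runs over the same graph always execute steps in the same sequence, which is what makes checkpointing and resuming tractable.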
David Zhang (▲)
David Zhang (▲)@dazhengzhang·
Wow, looks great. Love to see everyone experimenting with higher levels of software-building automation. Something that worked really well for me was building an agent on top of my platform that lets the user drive the whole product via a chat interface (the agent uses a CLI under the hood). You can try distilling this idea into your product from my open source repo github.com/Git-on-my-leve…
3
0
13
1.2K
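The pattern described above (a chat agent that drives the whole product through its CLI) can be sketched as follows. All command names here are made up, and the keyword matcher stands in for the LLM routing the real agent would do:

```python
import subprocess
import sys

# Invented intent -> CLI invocation table; a real product would expose
# its own CLI here instead of python one-liners.
COMMANDS = {
    "version": [sys.executable, "--version"],
    "say": [sys.executable, "-c", "print('hello from the CLI')"],
}

def agent_reply(user_message):
    # A real agent would choose the command with an LLM; we keyword-match.
    for intent, argv in COMMANDS.items():
        if intent in user_message.lower():
            out = subprocess.run(argv, capture_output=True, text=True)
            return (out.stdout or out.stderr).strip()
    return "I don't have a command for that."
```

The chat layer stays thin: every capability the product gains through its CLI is immediately available to the agent with no extra integration work.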
Tony Carter
Tony Carter@xtremesecurity·
@CTOAdvisor You can prompt effectively for front and backend 🤣??
0
0
0
15
Keith Townsend
Keith Townsend@CTOAdvisor·
What does it even mean to be a full-stack developer in the world of AI-generated code?
9
0
6
1.3K
Tony Carter
Tony Carter@xtremesecurity·
@DanielMiessler You are definitely not alone on this line of thinking. I'm in. Go or Rust?
0
0
0
40
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ 🛡️
Zapier is down. Whenever this happens now I consider building my own version of what that service did for me. And I'm sure I'm not alone.
1. Maybe as good or better
2. One less subscription
That's a hard environment to run a service in. You have to be unreproducible.
13
0
29
3.6K
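The core of a DIY replacement for a trigger-action service is small. A minimal sketch, with invented event names and handlers (nothing here reflects Zapier's actual model):

```python
# Trigger: normalize an incoming event into an item.
def on_new_email(event):
    return {"type": "email", "subject": event["subject"]}

# Action: do something with the item (here, append to a log).
def post_to_log(item, log):
    log.append(f"[{item['type']}] {item['subject']}")

# One route = one "zap": an event name wired to a trigger/action pair.
routes = {"email.received": (on_new_email, post_to_log)}

def dispatch(event_name, payload, log):
    trigger, action = routes[event_name]
    action(trigger(payload), log)

log = []
dispatch("email.received", {"subject": "Invoice #42"}, log)
```

The hard part of the real service is not this loop but reliability, auth to hundreds of third-party APIs, and retries, which is exactly the "hard environment" point in the post.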
Brian Cheong
Brian Cheong@briancheong·
@UnslothAI The auto-dataset creation from PDFs and CSVs is the part that actually removes a real bottleneck. Most people quit fine-tuning before they start because data prep takes longer than training.
1
0
8
834
Unsloth AI
Unsloth AI@UnslothAI·
Introducing Unsloth Studio ✨ A new open-source web UI to train and run LLMs.
• Run models locally on Mac, Windows, Linux
• Train 500+ models 2x faster with 70% less VRAM
• Supports GGUF, vision, audio, embedding models
• Auto-create datasets from PDF, CSV, DOCX
• Self-healing tool calling and code execution
• Compare models side by side + export to GGUF
GitHub: github.com/unslothai/unsl…
Blog and Guide: unsloth.ai/docs/new/studio
Available now on Hugging Face, NVIDIA, Docker and Colab.
217
833
5K
1.5M
Wes Bos
Wes Bos@wesbos·
Anyone else impressed Nvidia got the install script on the main domain, root path? Imagine the meeting of lawyers, security, and infra
Wes Bos tweet media
77
49
2.5K
288.9K
Andrew Ambrosino
Andrew Ambrosino@ajambrosino·
here's an early look at the codex app sidebar I wanted to ship. i was silenced
Andrew Ambrosino tweet media
211
26
1.5K
201.5K
Tony Carter
Tony Carter@xtremesecurity·
@newstart_2024 I wonder if you can work out during the fast, or have the energy to work out.
5
0
2
2K
Camus
Camus@newstart_2024·
Dr. Pradip Jamnadas just described a 72-day water fast under supervision that erased diabetes, normalized blood pressure, and dropped ~55–60 lb — mostly visceral belly fat — with skin retraction instead of loose hanging skin.
His go-to fasting progression for visceral fat / pre-diabetes / reversal:
- Start: 12:12 (12 h fast, 12 h eating window) for 2–3 weeks
- Then: 18:6 (18 h fast, 6 h eating)
- For serious cases (heavy overweight, type 2 diabetes): 48-hour fast once a week, or OMAD (one meal a day) for 9 days → 3-day water fast every 9 days
Allowed during pure water fasts: black coffee, black tea, water + electrolytes (e.g., Element or ½ tsp Celtic salt daily). Cravings? 1 tsp MCT oil in water.
He had one patient fast 183 days (from 400 → 210 lb) with no loose-skin horror show — "totally different physiology" from calorie restriction.
Steven Bartlett: "Probably shouldn't try this at home… medical supervision is critical."
Clip from this 3:43 mind-bender — fasting flipping metabolic disease in ways calorie cuts rarely do.
Would you ever consider supervised extended fasting for real reversal… or is even 72 hours unthinkable? Your thoughts — drop them below.
45
451
2.5K
226.6K
Praveen Neppalli
Praveen Neppalli@praveenTweets·
Agentic software engineering adoption is on fire at @Uber. 1,800 code changes per week are now written entirely by Uber's internal background coding agent, and 95% of our engineers now use AI every month across all the tools we track.
This is a real reset moment for engineering; it's one of the most exciting times to lead. This shift requires builders to be curious and hands-on. I'm incredibly lucky to be surrounded by a team that's doing exactly that. The best part is that the strongest adoption isn't being pushed top-down from leadership announcements; it's coming from engineers who are quietly experimenting, quietly shipping, and quietly pushing things forward. I love spending time with those engineers because there's no substitute for being close to the work. Over the last few months, we leaned in hard, and the results have been phenomenal.
The bigger shift: going agentic. 84% of AI users are now working with agent-style workflows, not just tab completion. Claude Code usage nearly doubled in 2 months (32% → 63%), while IDE-based tools have largely plateaued. Engineers are moving from accepting suggestions to delegating tasks. Even within traditional IDEs, ~70% of committed code is now AI-generated.
Background agents are writing code autonomously. Our internal background coding agent went from <1% of all code changes to 8% in just a few months. There is zero human authoring. Engineers review and approve, but the code is written entirely by AI agents.
The role of the engineer is shifting - from writing every line to architecting systems and reviewing AI-generated code. More to come from the @UberEng team in the coming days.
177
168
1.8K
471K
Hasan
Hasan@hasanunlu9·
@seanmcdonaldxyz @apexcompute Instructions for running Gemma 3 1B on FPGA are available in our repo. You will need to purchase the FPGA board from the link and use a Linux PC with an available PCIe slot. We also have a few solutions for Raspberry Pi 5 using a PCIe extension, which we will share soon.
1
0
1
279
Hasan
Hasan@hasanunlu9·
After 8+ years on the Tesla Autopilot team and 3 years at Intel, I started @apexcompute to design a new architecture for efficient AI inference. For the past 9 months, we've been building our custom inference accelerator. Today we're releasing Unified Engine v1.
Last June we raised our seed round with @maxitechinc, DeepFin Research, @Soma_Capital and an incredible group of angel investors. In less than 9 months, we completed our RTL architecture and brought our first pre-silicon prototype to life on FPGA.
Our architecture combines systolic array and vector processing in a single compute engine with multiple architectural optimizations, achieving very high FLOPs utilization. A single engine is super lean: it uses less than 90K LUTs and 1 MB of Block RAM. It may also be one of the smallest logic-footprint compute engines developed so far.
Our Unified Engine v1 supports:
- matrix-matrix multiplication (~95% FLOPs utilization)
- softmax (~90% FLOPs utilization)
- broadcast and element-wise operations
- RMSNorm / LayerNorm
- block quantization/dequantization (fp4, int4)
- multi-engine synchronization
and many other operations. We even implemented memory-efficient attention similar to FlashAttention, reaching ~90% FLOPs utilization.
Full benchmarks and the software stack are available on our GitHub: github.com/apex-compute/u…
We have a basic compiler written in Python that supports PyTorch tensors directly, so you can easily test and transfer tensors between the accelerator and host using bf16, fp4 and int4 formats.
Our FPGA prototype can already run LLM inference and outperform the NVIDIA Jetson Orin Nano, even on a mid-tier FPGA setup (6.4x lower memory bandwidth, 18% slower clock speed, at 4.5 watts). Check the side-by-side comparison video below.
Our GitHub includes low-level operator implementations, examples for tiled matrix multiplication, operation chaining, tensor parallelism, an attention kernel, and a full Gemma 3 1B model implementation.
Many more models (Vision Transformers and VLA) are coming soon. Our accelerator IP is AXI-ready for deployment on any AMD (Xilinx) FPGA platform today. Even better, our two-engine prototype runs on an entry-level AMD (Xilinx) FPGA as a PCIe accelerator card. You can purchase it here buy.stripe.com/6oUaEQf6365bgA… for $50 to experiment with our pre-silicon prototype on your desktop PC or Raspberry Pi 5. We will be releasing hardware bitstream updates as the architecture gets new features.
More to come soon! We are expanding our team and looking for compiler engineers and floating-point hardware design engineers. If you're interested, please send me a DM.
29
40
388
36K
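The repo's examples reportedly include tiled matrix multiplication; here is what the technique looks like in plain Python, purely to illustrate the tiling idea (the tile size and loop order below are not Apex's actual implementation). Tiling keeps one block of the output and its operands resident in fast local memory while partial products accumulate, which is how systolic-array designs sustain high FLOPs utilization:

```python
# Tiled matrix multiply over nested lists; tile=2 for readability.
def matmul_tiled(A, B, tile=2):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    # Walk C in tile-sized blocks; each (i0, j0, k0) block reuses the
    # same operand sub-blocks, the working set hardware keeps on-chip.
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        for kk in range(k0, min(k0 + tile, k)):
                            C[i][j] += A[i][kk] * B[kk][j]
    return C
```

The result is identical to an untiled multiply; only the memory access pattern changes, and on real hardware that pattern is what determines utilization.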
Tony Carter
Tony Carter@xtremesecurity·
@alexph_dev @ctatedev @grok for newbies, what's the benefit over openclaw's browser tool (openclaw said that's better), or vs the new CDP debugging in the latest Chrome release via MCP (connected Claude Code in 1 min and it's working)? Why agent-browser?
0
0
0
118
Alex (VibeManager)
Alex (VibeManager)@alexph_dev·
@ctatedev @grok for newbies, what's the benefit over openclaw's browser tool (openclaw said that's better), or vs the new CDP debugging in the latest Chrome release via MCP (connected Claude Code in 1 min and it's working)? Why agent-browser?
5
0
2
1.3K
Chris Tate
Chris Tate@ctatedev·
agent-browser is now fully native Rust. The results: 1.6x faster cold start. 18x less memory. 99x smaller install. Less abstraction means faster shipping, more control, and capabilities that weren't possible before. Now with 140+ commands across navigation, interaction, state management, network control, debugging, and multi-engine support. It's become the tool we wished existed when we started building it. Thanks to everyone who reported issues, contributed fixes, and helped shape this release. More to come.
Chris Tate tweet media
126
155
2.4K
338.7K
Peter Steinberger 🦞
Huge kudos to Jensen and Nvidia for jumping in and helping out, so I could actually focus on fun stuff today. Enjoy the new live browser! 😊
OpenClaw🦞@openclaw

huge shoutout to @nvidia for lending engineers to help triage our security advisories 🛡️🦞 open source security hits different when GPU companies show up to help

114
189
5.3K
446.2K
Farving🙆⭐️
Farving🙆⭐️@FarvingCo·
100mg of rosehip extract reduced visceral belly fat in 12 weeks. No diet change. No exercise change. One tablet daily.
Double-blind study with 32 subjects. The rosehip group saw significant reductions vs placebo (P<0.01) in:
- Visceral fat
- Total abdominal fat
- Body weight
- BMI
The active compound — tiliroside — accelerates fat metabolism and improves glucose clearance.
Visceral fat is the fat wrapped around your organs. It drives insulin resistance, inflammation, and cardiovascular risk. More than any number on the scale.
Most people are fighting subcutaneous fat in the mirror. The real danger is the fat they can't see.
PMID: 25834460
Farving🙆⭐️ tweet media (3 images)
15
80
591
27.3K
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ 🛡️
Ok, I figured out the best way to explain the significance of what Karpathy has done with his autoresearch project: automation of the scientific method.
This is what ML researchers do. They come up with an idea, and then they have to figure out how to test it, which is the experiment design piece. And it's all **super** kludgy and fragile. Tons of wrestling with the different tools and frameworks, getting the code right, all so that you can run an experiment that will take days to run. Experiment doesn't work? Cool, back to the idea phase.
In other words, a massive amount of AI researcher time IS WASTED. Only a small fraction of it can be spent on coming up with ideas. Most of it is managing a shitstack of fragile tech that runs the experiments. Which take forever.
Karpathy just automated this. He built and released an *open-source* stack for automating this entire process. You just put what you want to do into a Project.md file and send it off, and it builds all the experiments, all the code, then goes and executes and tells you which ones were successful.
And the idea isn't just for a single researcher: he's already thinking about how you can do something like SETI on the whole thing, where you have compute that can take experiments and run them on shared infrastructure.
This is the biggest project in all of AI, probably since Claude Code, and it's not close.
Andrej Karpathy@karpathy

I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)
The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc. github.com/karpathy/autor…
Part code, part sci-fi, and a pinch of psychosis :)

41
48
692
117.2K
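The autonomous loop Karpathy describes (agent proposes a change, a short training run scores it by validation loss, only improvements are kept as commits) can be reduced to a toy. The "training run" below is a synthetic loss function and the accept/reject history stands in for git commits; none of this is the repo's actual code:

```python
import random

# Synthetic stand-in for a 5-minute training run: returns a "validation
# loss" minimized at lr=0.01.
def run_training(lr):
    return (lr - 0.01) ** 2 + 0.1

random.seed(0)
best_lr, best_loss = 0.1, run_training(0.1)
history = []  # accepted changes, the analogue of commits on the branch
for step in range(50):
    candidate = best_lr * random.uniform(0.5, 1.5)  # agent's proposal
    loss = run_training(candidate)
    if loss < best_loss:            # keep only improving changes
        best_lr, best_loss = candidate, loss
        history.append((step, round(candidate, 4), round(loss, 4)))
```

The interesting part of the real project is everything this toy elides: the agent edits actual training code, the runs cost real GPU time, and different prompts produce measurably different research trajectories.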
Om Patel
Om Patel@om_patel5·
claude code has a hidden setting that makes it 600x faster and almost nobody knows about it.
by default it uses text grep to find functions. it doesn't understand your code at all. that's why it takes 30-60 seconds and sometimes returns the wrong file.
there's a flag called ENABLE_LSP_TOOL that connects it to language servers. same tech that powers vscode's ctrl+click to jump straight to the definition.
after enabling it:
> "add a stripe webhook to my payments page" - claude finds your existing payment logic in 50ms instead of grepping through hundreds of files
> "fix the auth bug on my dashboard" - traces the actual call hierarchy instead of guessing which file handles auth
> after every edit it auto-catches type errors immediately instead of you finding them 10 prompts later
also saves tokens because claude stops wasting context searching for the wrong files.
2 minute setup and it works for 11 languages
Om Patel tweet media
195
251
5.7K
827.4K
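Whether the specific ENABLE_LSP_TOOL flag and the 600x figure hold up is the poster's claim, but the underlying idea (a language-server-style symbol index versus text grep) is easy to demonstrate. A sketch with two fake files:

```python
files = {
    "payments.py": "def charge(amount):\n    return amount",
    "auth.py": "def login(user):\n    return True",
}

# grep-style lookup: rescan every line of every file on each query.
def grep_definition(name):
    for path, text in files.items():
        for lineno, line in enumerate(text.splitlines(), 1):
            if f"def {name}" in line:
                return path, lineno
    return None

# LSP-style lookup: build a symbol index once, then answer from it.
index = {}
for path, text in files.items():
    for lineno, line in enumerate(text.splitlines(), 1):
        if line.startswith("def "):
            index[line.split()[1].split("(")[0]] = (path, lineno)

def lsp_definition(name):
    return index.get(name)  # single dict lookup, no rescanning
```

Both return the same answer; the difference is that the grep cost grows with repository size on every query, while the index answers in constant time and can also carry type and call-hierarchy information that plain text search never sees.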
Tony Carter
Tony Carter@xtremesecurity·
@steipete @openclaw I've been using gemini-flash because it's cheaper, but I've been getting great results.
0
0
0
41