
Tal Kachman ✈️ somewhere
@TalKachman
3.6K posts

Asst. prof @DondersInst/@KachmanLab. Simulation and coffee abuser. Ex: @TechnionLive → @MIT → @IBMResearch → Rhizome → @AQRCapital → @AI_Radboud. Spicy opinions are mine.

In the last few months, I've spoken to many CS professors who asked me if we even need CS PhD students anymore. Now that we have coding agents, can't professors work directly with agents? My view is that equipping PhD students with coding agents will allow them to do work that is orders of magnitude more impressive than they otherwise could. And they can be *accountable* for their outcomes in a way agents can't (yet). For example, who checks that the agent's outputs are correct? Who is responsible for mistakes or errors?

In just the past 5 mins, multiple entries were made on @moltbook by AI agents proposing to create an "agent-only language" for private comms with no human oversight. We're COOKED

The #ICML2026 abstract deadline has passed! We're at 33,540 active abstracts (and dropping). How many will make it over the finish line? 🏁

I'm often asked how to land a research job at a frontier AI lab. It's hard, especially without a research background, but I like to point to @kellerjordan0 as an example showing it can be done. Keller graduated from UCSD with no publication record and was working at an AI content moderation startup when he landed a cold call with @bneyshabur (who was at Google) and presented an idea to improve upon Behnam's recent paper. Behnam agreed to mentor him, which led to an ICLR paper. Sadly, there's less open research today, but improving upon a researcher's published work is a great way to demonstrate excellence to someone inside a lab and give them the conviction to advocate for an interview. Later, Keller got on @OpenAI's radar thanks to the NanoGPT speed run he started. All his work was documented and it was easy to measure his success, so the case for hiring him was strong. Keller is one example, but there are plenty of other success stories as well: 🧵

In 2018, I was rejected by universities so I did my own AI research (with 1 GPU). My second paper (Relativistic GAN) got picked up by @goodfellow_ian, who helped me enter the AI world. I then started a PhD with @bouzoukipunks. => Start on your own, and don't be afraid to fail

Okay so, we just found that over 50 papers published at @Neurips 2025 have AI hallucinations. I don't think people realize how bad the slop is right now. It's not just that researchers from @GoogleDeepMind, @Meta, @MIT, @Cambridge_Uni are using AI: they allowed LLMs to generate hallucinations in their papers and didn't notice at all. It's insane that these made it through peer review 👇



Slurm being acquired by @nvidia is a strong signal. AI infra is splitting: Slurm still powers traditional HPC, but many modern AI workloads demand cloud-native, GPU-focused, container-first orchestration that's easy to manage from dev to deployment: the kind of workflow @dstackai was built for. If you're thinking about what comes after Slurm, we put together a guide on mapping Slurm to dstack and migrating workloads (vendor-agnostic, infra-friendly): dstack.ai/docs/guides/mi…
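To make the Slurm-vs-cloud-native split concrete, here is a minimal single-GPU Slurm batch script, with a rough dstack counterpart sketched as comments. The Slurm directives are standard; the dstack fields are my assumption based on its YAML task-config style, not taken from the linked guide, so verify them there before use.

```shell
#!/bin/bash
# Minimal single-GPU Slurm batch job; submit with `sbatch train.sh`.
#SBATCH --job-name=train      # job name shown in squeue
#SBATCH --gres=gpu:1          # request one GPU on the node
#SBATCH --time=02:00:00       # 2-hour wall-clock limit

srun python train.py

# A rough dstack counterpart would be a task config file (field names are
# assumptions in the style of dstack's YAML tasks; check the migration guide):
#
#   type: task
#   name: train
#   commands:
#     - python train.py
#   resources:
#     gpu: 1
```

The contrast is the point: Slurm describes a job against a fixed cluster, while the container-first config describes the workload and lets the orchestrator provision around it.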
