Pinned Tweet
pystar
24.3K posts

pystar
@pystar
| Engineering | Previously Finance | Previously Physics
Ad Abyssum, Ut Ad Astra Eamus · Joined January 2009
854 Following · 1.3K Followers
pystar retweeted

THIS is the wildest open-source project I’ve seen this month.
We were all hyped about @karpathy's autoresearch project automating the experiment loop a few weeks ago.
(ICYMI → github.com/karpathy/autor…)
But a bunch of folks just took it ten steps further and automated the entire scientific method end-to-end.
It's called AutoResearchClaw, and it's fully open-source.
You pass it a single CLI command with a raw idea, and it completely takes over 🤯
The 23-stage loop they designed is insane:
✦ First, it handles the literature review.
- It searches arXiv and Semantic Scholar for real papers
- Cross-references them against DataCite and CrossRef.
- No fake papers make it through.
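The cross-referencing step above could be sketched roughly like this. This is a minimal, hypothetical illustration of vetting a citation against the public CrossRef REST API, not the project's actual code; the function names and the two-stage check (syntax first, then a lookup URL) are assumptions:

```python
import re
from urllib.parse import quote

# Registered DOIs start with a "10." directory prefix (CrossRef convention).
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def is_wellformed_doi(doi: str) -> bool:
    """Cheap syntactic filter before hitting the network."""
    return bool(DOI_PATTERN.match(doi))

def crossref_lookup_url(doi: str) -> str:
    """Public CrossRef REST endpoint for resolving one DOI.
    A 404 response here would mean the citation doesn't exist -> reject it."""
    return f"https://api.crossref.org/works/{quote(doi, safe='/')}"
```

A pipeline would call `is_wellformed_doi` first to avoid wasting API requests on obviously hallucinated identifiers, then confirm each survivor resolves.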
✦ Second, it runs the sandbox.
- It generates the code from scratch.
- If the code breaks, it self-heals.
- You don't have to step in.
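A self-healing sandbox like the one described usually boils down to a run-inspect-retry loop. Here is a hedged sketch under assumed names: `fixer` stands in for whatever repair step the project uses (e.g. an LLM call that sees the traceback); nothing below is taken from the repo itself:

```python
import subprocess
import sys
from typing import Callable

def run_with_self_heal(code: str,
                       fixer: Callable[[str, str], str],
                       max_attempts: int = 3) -> tuple[bool, str]:
    """Execute generated code in a subprocess; on failure, hand the
    code plus its stderr to `fixer` and retry with the patched version."""
    result = None
    for _ in range(max_attempts):
        result = subprocess.run([sys.executable, "-c", code],
                                capture_output=True, text=True, timeout=60)
        if result.returncode == 0:
            return True, result.stdout          # code ran cleanly
        code = fixer(code, result.stderr)       # self-heal step
    return False, result.stderr                 # gave up after N attempts
```

The subprocess boundary is the "sandbox" here in only the loosest sense; a real system would also isolate the filesystem and network.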
✦ Finally, it writes the paper.
- It structures 5,000+ words into Introduction, Related Work, Method, and Experiments.
- Formats the math, generates the comparison charts,
- Then wraps the whole thing in official ICML or ICLR LaTeX templates.
You can set it to pause for human approval, or you can just pass the --auto-approve flag and walk away.
What it spits out at the end:
→ Full academic paper draft
→ Conference-grade .tex files
→ Verified, hallucination-free citations
→ All experiment scripts and sandbox results
This is what autonomous AI agents actually look like in 2026.
Free and open-source. Link to repo in 🧵 ↓

pystar retweeted

🦔 Researchers at Aikido Security found 151 malicious packages uploaded to GitHub between March 3 and March 9. The packages use Unicode characters that are invisible to humans but decode into executable code at runtime. Manual code reviews and static analysis tools see only whitespace or blank lines. The surrounding code looks legitimate, with realistic documentation tweaks, version bumps, and bug fixes. Researchers suspect the attackers are using LLMs to generate convincing packages at scale. Similar packages have been found on NPM and the VS Code marketplace.
My Take
Supply chain attacks on code repositories aren't new, but this technique is nasty. The malicious payload is encoded in Unicode characters that don't render in any editor, terminal, or review interface. You can stare at the code all day and see nothing. A small decoder extracts the hidden bytes at runtime and passes them to eval(). Unless you're specifically looking for invisible Unicode ranges, you won't catch it.
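Scanning for exactly those invisible ranges is cheap to do yourself. The sketch below flags code points from ranges commonly abused for this trick (zero-width characters, the BOM, variation selectors, and the Unicode "Tags" block); the range list is my assumption about what the attack used, not taken from the Aikido report:

```python
# Ranges of code points that render as nothing in most editors and terminals.
INVISIBLE_RANGES = [
    (0x200B, 0x200F),    # zero-width space/joiners, LRM/RLM
    (0x2060, 0x2064),    # word joiner, invisible operators
    (0xFE00, 0xFE0F),    # variation selectors
    (0xFEFF, 0xFEFF),    # zero-width no-break space (BOM)
    (0xE0000, 0xE007F),  # Tags block: can smuggle a whole ASCII payload
]

def find_invisible(source: str) -> list[tuple[int, str]]:
    """Return (index, U+XXXX) pairs for every invisible code point found."""
    hits = []
    for i, ch in enumerate(source):
        cp = ord(ch)
        if any(lo <= cp <= hi for lo, hi in INVISIBLE_RANGES):
            hits.append((i, f"U+{cp:04X}"))
    return hits
```

Running something like this over a dependency's source tree before installing is a reasonable pre-commit or CI check; an empty result doesn't prove a package is safe, but a non-empty one is a loud red flag.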
The researchers think AI is writing these packages because 151 bespoke code changes across different projects in a week isn't something a human team could do manually. If that's right, we're watching AI-generated attacks hit AI-assisted development workflows. The vibe coders pulling packages without reading them are the target, and there are a lot of them. The best defense is still carefully inspecting dependencies before adding them, but that's exactly the step people skip when they're moving fast. I don't really know how any of this gets better. The attackers are scaling faster than the defenses.
Hedgie🤗
arstechnica.com/security/2026/…