Abdullah

1.2K posts


@abdecoder

🚀: Secure Software Development ❤: Angular, .NET Core, ReactJS, FastAPI. Helping businesses build secure, reactive, real-time web apps.

Joined April 2020
251 Following · 44 Followers
Abdullah reposted
AlphaSignal AI @AlphaSignalAI
A peanut-sized Chinese model just dethroned Gemini at reading documents.

GLM-OCR is a 0.9B-parameter vision-language model. It scores 94.62 on OmniDocBench V1.5, ranking #1 overall. For context, it outperforms models 100x its size. 100% open-source.

It works in two stages:
1. A layout engine detects every region in a document.
2. Each region gets read in parallel.

The model predicts multiple tokens per step instead of one. That's what makes it so fast at such a small size.

It handles things most OCR tools struggle with:
> Complex tables and nested layouts
> Handwritten text and stamps
> Math formulas and code blocks
> Mixed image-and-text documents

You can run it locally through Ollama. It fits on edge devices with limited compute. Every expensive OCR API just got a free competitor.
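For readers who want to try the local route the tweet mentions, here is a minimal sketch of calling a vision model through Ollama's HTTP generate endpoint from Python. The model tag `glm-ocr` is a guess, not a confirmed name (check `ollama list` for the real tag); the endpoint and payload shape follow Ollama's standard API.

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(image_bytes: bytes, model: str, prompt: str) -> dict:
    """Assemble a non-streaming Ollama generate request with one image attached."""
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

def ocr_image(path: str, model: str = "glm-ocr") -> str:
    # "glm-ocr" is a hypothetical tag used for illustration.
    with open(path, "rb") as f:
        payload = build_payload(f.read(), model, "Extract all text from this document.")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

`ocr_image` requires a running Ollama server; `build_payload` works standalone.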
English · 17 replies · 122 reposts · 1K likes · 64.5K views
Abdullah reposted
Nandkishor @devops_nk
Honestly, this is the most accurate diagram I've seen.

Waterfall: You plan for 18 months and deliver exactly what nobody needs anymore.

Agile: You deliver something usable at every step, but the CEO keeps asking, "Where's the car?"

AI: You get the car on day one. It has six wheels, the doors are on backwards, and it has a rocket launcher. You spend more time making it yours than actually "building"; it's shaping, owning, verifying.

That's what the best AI developers do now. They don't build. They shape and own.
English · 127 replies · 1.6K reposts · 8.3K likes · 730.7K views
Abdullah reposted
Omead Pooladzandi @HessianFree
your spotify cache is bigger than our largest AI model.

Bonsai: 1-bit weights. 1.7B to 8B params. 14x compression vs bf16. 8x faster on edge. 256 MB to 1.2 GB. Based on Qwen 3.

we just came out of stealth. intelligence belongs at the edge and we're going to put it there. Apache 2.0.

we compressed intelligence. more coming. @PrismML
PrismML @PrismML

Today, we are emerging from stealth and launching PrismML, an AI lab with Caltech origins centered on building the most concentrated form of intelligence.

At PrismML, we believe the next major leaps in AI will be driven by order-of-magnitude improvements in intelligence density, not just sheer parameter count. Our first proof point is the 1-bit Bonsai 8B, a 1-bit-weight model that fits into 1.15 GB of memory and delivers over 10x the intelligence density of its full-precision counterparts. It is 14x smaller, 8x faster, and 5x more energy-efficient on edge hardware while remaining competitive with other models in its parameter class. We are open-sourcing the model under the Apache 2.0 license, along with Bonsai 4B and 1.7B models.

When advanced models become small, fast, and efficient enough to run locally, the design space for AI changes immediately. We believe in a future of on-device agents, real-time robotics, offline intelligence, and entirely new products that were previously impossible. We are excited to share our vision with you and to keep pushing the frontier of intelligence to the edge.
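The headline numbers check out on the back of an envelope: 8B parameters at bf16 (16 bits each) need 16 GB for the weights alone, and dividing by the claimed 14x compression lands at roughly 1.14 GB, close to the stated 1.15 GB footprint (the small gap plausibly comes from tensors kept at higher precision, though that is a guess). A quick sketch:

```python
def model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Memory needed for the weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

bf16_gb = model_size_gb(8e9, 16)         # 8B params at bf16: 16 GB
onebit_gb = model_size_gb(8e9, 16 / 14)  # the claimed 14x compression: ~1.14 GB
```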

English · 89 replies · 158 reposts · 2K likes · 180.4K views
Abdullah reposted
Steven Feng @stevenyfeng
We’re bringing back Stanford’s CS25 Transformers course tomorrow! 🤖 It’s open to everyone (in-person + online). Weekly talks (every Thursday) from top AI researchers. One of Stanford’s most popular AI seminar courses. Don’t miss out! More info below 👇 (1/7)
English · 9 replies · 89 reposts · 625 likes · 48K views
Abdullah reposted
Guido van Rossum @gvanrossum
I think I finally understand what an agent is. It's a prompt (or several), skills, and tools. Did I get this right?
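That definition can be made concrete in a few lines. A toy sketch of the "prompt + tools" framing, where a loop lets a model either invoke a tool or return a final answer (`stub_model`, `calculator`, and the action format are all invented for illustration; a real agent would call an LLM API where the stub sits):

```python
def calculator(expression: str) -> str:
    # Toy tool; eval is restricted here but still not production-safe.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}
PROMPT = "Call tools as ('tool', name, arg); finish with ('final', text)."

def stub_model(prompt: str, history: list) -> tuple:
    """Stand-in for the LLM: first asks for the calculator, then answers."""
    if not history:
        return ("tool", "calculator", "6*7")
    return ("final", f"The answer is {history[-1]}")

def run_agent(task: str) -> str:
    history = []
    while True:
        action = stub_model(PROMPT + "\n" + task, history)
        if action[0] == "final":
            return action[1]
        _, name, arg = action
        history.append(TOOLS[name](arg))  # tool result goes back into context

# run_agent("What is 6*7?")  # → "The answer is 42"
```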
English · 534 replies · 204 reposts · 4.7K likes · 567.7K views
Abdullah reposted
Wes Bos @wesbos
‼️Do not npm install or deploy anything right now. Supply chain attack on axios 1.14.1 - even if you don't use axios, it may be a nested dep. Pin versions or wait until this is resolved.
Maxwell @mvxvvll

@npmjs @GHSecurityLab there is an active supply chain attack on axios@1.14.1 which pulls in a malicious package published today - plain-crypto-js@4.2.1 - someone took over a maintainer account for Axios
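One way to act on the "pin versions" advice, assuming npm 8 or later (which supports the `overrides` field in package.json). The version number shown is a placeholder for illustration, not a vetted safe release:

```shell
# See whether axios appears anywhere in your tree, including as a nested dep:
npm ls axios

# Pin every copy of axios (direct or transitive) via package.json, then
# reinstall. "1.14.0" below is a placeholder, not a confirmed-safe version:
#
#   "overrides": {
#     "axios": "1.14.0"
#   }
npm install

# For CI, install strictly from the lockfile so nothing new gets resolved:
npm ci
```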

English · 168 replies · 1.8K reposts · 9K likes · 1.6M views
Abdullah reposted
Zhengzhong Tu @_vztu
We are entering the second half of research. Here is my advice to every PhD student before starting a project:

1. Can Claude Code solve it in a day?
2. Will a Research Agent solve it soon?
3. Will scaling solve it anyway?

If the answer to all three is No, then maybe you have found a real research problem.

Because in the age of AI, many things that looked like research are being revealed as delayed engineering. That does not make research less important. It makes problem selection more important than ever.

The scarce resource is no longer intelligence. It is taste. It is originality. It is the ability to ask questions that survive automation.

The first half of research was about solving hard problems. The second half is about knowing which problems are still worth solving.

#research #academic #AI #GenAI #generativeai #airesearch #taste
English · 8 replies · 21 reposts · 144 likes · 41.8K views
Abdullah reposted
Andrej Karpathy @karpathy
Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe-coded this attack, it could have gone undetected for days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk @hnykda

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and self-replicate. link below
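A quick way to gauge your own exposure to this kind of transitive-dependency attack is to ask which installed packages declare a given project as a requirement. A sketch using only the standard library (`req_name` and `dependents_of` are helper names invented here):

```python
from importlib import metadata

def req_name(requirement: str) -> str:
    """Extract the bare distribution name from a requirement string
    such as "litellm>=1.64.0; extra == 'proxy'"."""
    name = requirement.split(";")[0].strip()
    for sep in (">=", "==", "<=", "!=", "~=", ">", "<", "[", "(", " "):
        name = name.split(sep)[0]
    return name.lower().replace("_", "-")

def dependents_of(target: str) -> list[str]:
    """List installed distributions that declare `target` as a dependency."""
    hits = []
    for dist in metadata.distributions():
        for req in dist.requires or []:  # requires may be None
            if req_name(req) == target.lower():
                hits.append(dist.metadata["Name"])
    return sorted(set(hits))

# dependents_of("litellm") returns every installed package that pulls it in.
```

This only inspects what is already installed; catching a poisoned release before install needs hash-pinned requirements or a lockfile.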

English · 1.4K replies · 5.4K reposts · 28.1K likes · 66.3M views
Abdullah reposted
clem 🤗 @ClementDelangue
Local AI is free, fast & secure! So today we're introducing hf-mount: attach any storage bucket, model or dataset from @huggingface as a local filesystem. This is a game changer, as it allows you to attach remote storage that is 100x bigger than your local machine's disk. This is also perfect for Agentic storage!! Let's go!
English · 67 replies · 226 reposts · 1.3K likes · 250.5K views
Abdullah reposted
Lucas Maes @lucasmaes_
JEPAs are finally easy to train end-to-end without any tricks! Excited to introduce LeWorldModel: a stable, end-to-end JEPA that learns world models directly from pixels, no heuristics. 15M params, 1 GPU, and full planning in <1 second. 📑: le-wm.github.io
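For readers unfamiliar with the JEPA idea the tweet refers to, here is a deliberately tiny sketch of the core objective: predict the target view's embedding in latent space (no pixel reconstruction), with an EMA copy of the encoder producing the targets. Linear maps stand in for real networks; none of this is the project's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
D_OBS, D_LAT = 32, 8  # toy observation / latent sizes

# Toy linear stand-ins for the three JEPA parts:
W_ctx = rng.normal(scale=0.1, size=(D_OBS, D_LAT))  # context encoder
W_tgt = W_ctx.copy()                                # EMA target encoder
W_prd = np.eye(D_LAT)                               # predictor in latent space

def jepa_loss(x_context: np.ndarray, x_target: np.ndarray) -> float:
    """Predict the *embedding* of the target view; no pixel reconstruction."""
    z_pred = x_context @ W_ctx @ W_prd
    z_tgt = x_target @ W_tgt  # treated as a constant target during training
    return float(np.mean((z_pred - z_tgt) ** 2))

def ema_update(tau: float = 0.99) -> None:
    """Slowly drag the target encoder toward the context encoder."""
    global W_tgt
    W_tgt = tau * W_tgt + (1.0 - tau) * W_ctx
```

The EMA target encoder is what keeps the latent objective from collapsing to a trivial constant.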
English · 102 replies · 538 reposts · 3.9K likes · 907.9K views
Abdullah reposted
Lightning AI ⚡️ @LightningAI
Students from 100+ universities across 30 countries are using Lightning's Academic Tier⚡ Get S3 access for large datasets, a 24/7 CPU studio that never shuts off, and spin up more powerful machines when experiments scale. No queues. No usage caps. No infrastructure setup. Just research. Register with your school email to unlock → go.lightning.ai/3L8dlqC
English · 2 replies · 14 reposts · 77 likes · 5.1K views
Abdullah reposted
Doğa Arslan @dogamsi
"Most of the limitations you feel are not real. They are not concrete obstacles; they are thoughts that, over time, you began to accept as true."
Turkish · 99 replies · 257 reposts · 2K likes · 5.5M views
Abdullah reposted
Zed @zeddotdev
Introducing: Zed for Students! 🎓

Enjoy Zed's Pro plan free for a year if you're a current university student (or teacher!)

- Zed Pro features for 12 months
- $10/month in token credits
- Unlimited edit predictions

Apply today: zed.dev/education
English · 235 replies · 396 reposts · 4.8K likes · 590.5K views
Abdullah reposted
Lior Alexander @LiorOnAI
It's over. Karpathy just open-sourced an autonomous AI researcher that runs 100 experiments while you sleep.

You don't write the training code anymore. You write a prompt that tells an AI agent how to think about research. The agent edits the code, trains a small language model for exactly five minutes, checks the score, keeps or discards the result, and loops. All night. No human in the loop.

That fixed five-minute clock is the quiet genius. No matter what the agent changes (the network size, the learning rate, the entire architecture), every run gets compared on equal footing. This turns open-ended research into a game with a clear score:

- 12 experiments per hour, ~100 overnight
- Validation loss measures how well the model predicts unseen text
- Lower score wins, everything else is fair game

The agent touches one Python file containing the full training recipe. You never open it. Instead, you program a markdown file that shapes the agent's research strategy. Your job becomes programming the programmer, and this unlocks a strange new loop:

1. Agents run real experiments without supervision
2. Prompt quality becomes the bottleneck, not researcher hours
3. Results auto-optimize for your specific hardware
4. Anyone with one GPU can run a research lab overnight

The best AI labs won't just have the most compute. They'll have the best instructions for agents who never sleep, never forget a failed experiment, and never stop iterating.
Andrej Karpathy @karpathy

I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:

- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.

github.com/karpathy/autor…

Part code, part sci-fi, and a pinch of psychosis :)
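The keep-or-discard loop described above can be sketched in a few lines. Everything below is a stand-in: the "training run" is a toy function with a fixed (drastically shrunken) time budget, and `propose` plays the role of the agent editing the training script:

```python
import random
import time

TRAIN_BUDGET_S = 0.01  # stand-in for the fixed 5-minute (300 s) budget

def train_and_eval(config: dict) -> float:
    """Stand-in for one fixed-budget training run; returns validation loss."""
    time.sleep(TRAIN_BUDGET_S)
    # Toy objective: loss is minimized at lr=0.01, width=512.
    return abs(config["lr"] - 0.01) + abs(config["width"] - 512) / 512

def propose(best_config: dict, rng: random.Random) -> dict:
    """Stand-in for the agent mutating the training recipe."""
    return {
        "lr": max(1e-4, best_config["lr"] * rng.uniform(0.5, 2.0)),
        "width": rng.choice([256, 384, 512, 768]),
    }

def research_loop(n_runs: int = 20, seed: int = 0):
    rng = random.Random(seed)
    best = {"lr": 0.1, "width": 256}
    best_loss = train_and_eval(best)
    for _ in range(n_runs):
        candidate = propose(best, rng)
        loss = train_and_eval(candidate)
        if loss < best_loss:  # keep-or-discard, like an accumulated git commit
            best, best_loss = candidate, loss
    return best, best_loss
```

Because every run shares the same budget, lower validation loss is the only comparison needed; the fixed clock is what makes runs commensurable.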

English · 135 replies · 436 reposts · 4.3K likes · 877.4K views
Franck Nijhof | Home Assistant & Smart Home
🚀 I've just opened 2 new roles in my department at @openhomefndn to work full-time on @home_assistant!

🖥️ Frontend Engineer
🔐 Security Engineer

Fully remote. Full-time. Open source every day. Honestly? Best job in the world. Know someone? Tag them 👇 Link in the reply.
English · 44 replies · 19 reposts · 272 likes · 20.2K views
Abdullah reposted
Angular @angular
Workflow designers, visual rule engines, system architecture diagrams, low-code editors. ngdiagram.dev is an open-source, Angular-native library built by Synergy Codes that doesn't lock you into one diagram type. It's a foundation for building custom visual tools in real-world Angular apps. Check the GitHub: github.com/synergycodes/n…
English · 8 replies · 73 reposts · 634 likes · 67.8K views
Abdullah reposted
Michael Pyrcz🌻 @GeostatsGuy
"Howdy Folks, I'm Michael Pyrcz, a professor at The University of Texas at Austin, and I record all of my lectures and put them on YouTube so anyone can follow along!"

...and I kept doing that, and writing a Python package, along with 2 free online e-books, 100s of Python demonstration workflows, dozens of synthetic datasets, etc., etc.

Why? So anyone can follow along! Education changes lives. I know because it changed mine. I'm just paying it forward.
English · 101 replies · 1.3K reposts · 8.8K likes · 195.7K views