The kernel accounts for 40% of CPU time on Android, so we made it smarter. 🧠
Learn how AutoFDO uses real-world execution patterns to guide compiler optimizations for the Android kernel → goo.gle/3NvpEhV
Love seeing the kernel get AutoFDO smarts: real execution data driving better optimizations is huge for Android perf.
I took a similar "make it smarter at the platform level" approach with oird, the native C++ inference daemon that ships in JibarOS (an Android 16 AOSP fork).
One shared /system_ext/bin/oird process owns model residency, concurrency, and priority scheduling for 12 standardized AI capabilities (text, vision, audio, etc.) across llama.cpp / whisper.cpp / ONNX backends. No more per-app model bloat, where duplicate copies of a single 7B LLM eat 40% of CPU time or 12 GB of RAM.
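The shared-residency idea can be sketched in a few lines of C++. This is a hypothetical simplification, not oird's actual API: a registry hands out shared handles to one resident model copy, so a second client of the same 7B LLM reuses the loaded instance instead of duplicating it per app. The `Model` and `ModelRegistry` names are illustrative only.

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Stand-in for a loaded llama.cpp / whisper.cpp / ONNX model (illustrative).
struct Model {
    std::string name;
    explicit Model(std::string n) : name(std::move(n)) {}
};

// Hypothetical sketch of shared model residency: at most one resident
// copy per model name, shared across clients via shared_ptr. When the
// last client releases its handle, the weak_ptr expires and the model
// can be reloaded on next demand. oird's real scheduler also handles
// concurrency limits and priorities, which this sketch omits.
class ModelRegistry {
    std::mutex mu_;
    std::map<std::string, std::weak_ptr<Model>> resident_;

public:
    std::shared_ptr<Model> acquire(const std::string& name) {
        std::lock_guard<std::mutex> lock(mu_);
        if (auto m = resident_[name].lock())
            return m;                          // already resident: share it
        auto m = std::make_shared<Model>(name); // load once
        resident_[name] = m;
        return m;
    }
};
```

Usage: two `acquire("llm-7b")` calls return handles to the same object, so memory cost stays that of one copy regardless of how many apps use the capability.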
Crash-isolated, OEM-configurable, running live on Cuttlefish today.
Repo + full architecture: github.com/Jibar-OS/oird