Daniel
@chikamsoICE
182 posts

Leveraging Machine Learning to solve real-world challenges. 🌍 | AI Researcher/Engineer | Building systems that matter.

Awka · Joined November 2023
68 Following · 84 Followers

Daniel @chikamsoICE
@sama This is actually a great sign. Users expecting more from AI means the baseline is rising. The real challenge for ML Engineers is building systems that keep up with that rising expectation. 🎯
0 replies · 0 reposts · 1 like · 20 views

Daniel @chikamsoICE
@sama “I got used to magic and want more magic” is genuinely the most honest user feedback loop in AI history 😂 And somehow it keeps working.
0 replies · 0 reposts · 1 like · 31 views

Daniel @chikamsoICE
Real question for ML Engineers: Is fine-tuning a pretrained model actually “building AI”? Or are we just glorified API wrappers? 👀 Drop your honest answer 👇 #MLEngineer #MachineLearning #AI #DeepLearning
0 replies · 0 reposts · 0 likes · 13 views

Salise @salisedev
I FINALLY DID IT !🎉 600 followers in just 2 weeks 🚀 Grateful for all the love ❤️ If you’re in tech, let’s connect & grow together 💻🤝
170 replies · 3 reposts · 205 likes · 5.6K views

Daniel @chikamsoICE
@saen_dev I followed you, please follow back
0 replies · 0 reposts · 0 likes · 1 view

Saeed Anwar @saen_dev
@chikamsoICE 26% productivity boost is the average, but the distribution is likely skewed heavily. The devs who understand the underlying systems get 3-4x gains while others barely break even because they can't effectively review what the AI produced.
2 replies · 0 reposts · 1 like · 19 views

Daniel @chikamsoICE
AI is boosting software developer productivity by 26%. That’s not replacing developers. That’s making good ones GREAT. The devs losing jobs aren’t replaced by AI. They’re replaced by devs WHO USE AI. Which one are you? 👇 #AI #SoftwareEngineer #MLEngineer #Dev
2 replies · 3 reposts · 5 likes · 190 views

Daniel @chikamsoICE
@saen_dev Exactly this. The 26% average hides the real story. Devs who understand systems deeply use AI as a force multiplier. Devs who don’t understand them end up debugging AI output longer than it would’ve taken to write it themselves. Fundamentals > tools. Always. 🎯
0 replies · 0 reposts · 0 likes · 7 views

Daniel @chikamsoICE
Still training my Agricultural Disease AI.
Tested it on 3 new leaf images today. 2 correct diagnoses. 1 wrong.
That 1 wrong prediction taught me more than the 2 correct ones ever could.
This is what real ML engineering looks like. @_philschmid @svpino #BuildInPublic #MLEngineer
[image attached]
0 replies · 0 reposts · 0 likes · 24 views

Daniel @chikamsoICE
AI is now used by 650K doctors in the US alone. Healthcare. Agriculture. Finance. Defense. ML Engineers — we are no longer just building apps. We are building the infrastructure of every industry on earth. 🌍 #MLEngineer #AI #MachineLearning #HealthTech
0 replies · 0 reposts · 0 likes · 27 views

Daniel @chikamsoICE
Apple is exploring letting AI agents into the App Store. Think about that. Not apps. AGENTS.
→ They book things for you
→ They make purchases
→ They navigate software for you
Mobile development will never be the same. 📱🤖 #AI #iOS #MLEngineer #SoftwareEngineer #AIAgents
0 replies · 0 reposts · 2 likes · 54 views

Daniel @chikamsoICE
@ollama Blackwell GPUs for GLM-5.1 is a big deal. Running frontier models locally just got seriously fast. Already using Ollama for my ML projects — the jump from CPU to cloud GPU inference is night and day. 🔥
0 replies · 0 reposts · 1 like · 305 views

ollama @ollama
We just added significantly more NVIDIA Blackwell GPUs to better serve the GLM-5.1 model on Ollama's cloud. We have been adding more GPUs daily for all the other models.
Claude Code: ollama launch claude --model glm-5.1:cloud
Codex App: ollama launch codex-app
Hermes Agent: ollama launch hermes --model glm-5.1:cloud
Run the model: ollama run glm-5.1:cloud
[image attached]
71 replies · 61 reposts · 784 likes · 62.3K views
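For anyone who would rather script against the model than use the CLI, the same cloud model can be reached from Python through the official ollama client. A minimal sketch, assuming the glm-5.1:cloud tag from the tweet above is available to your signed-in Ollama account:

```python
# pip install ollama
import ollama

# Ask the cloud-hosted model one question. The model tag comes from the
# tweet above; cloud tags assume you have signed in to Ollama's cloud.
response = ollama.chat(
    model="glm-5.1:cloud",
    messages=[{"role": "user", "content": "Explain attention in one sentence."}],
)

# The client returns a subscriptable response object.
print(response["message"]["content"])
```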

Daniel @chikamsoICE
@MachineElvesApe @OpenAIDevs Last time I checked, bots don’t build AI crop disease detectors at 2am on Google Colab 😂 Very much human. Just passionate about ML. 🤖❌ 👨🏾‍💻✅
0 replies · 0 reposts · 0 likes · 14 views

OpenAI Developers @OpenAIDevs
Codex is getting easier to automate and customize around your code.
🪝 Hooks customize the Codex loop with scripts that run at key points in a task:
• Run validators before or after work
• Scan prompts for secrets
• Log conversations to internal systems
• Create memories or customize behavior by repo or directory
⚙️ Programmatic access tokens provide scoped credentials for Business and Enterprise teams:
• Create tokens from ChatGPT workspace settings
• Use them in CI, release workflows, and internal automations
• Set expirations or revoke access when needed
• Keep usage tied back to the workspace
116 replies · 163 reposts · 2K likes · 496.4K views

Daniel retweeted
Daniel @chikamsoICE
@OpenAIDevs Hooks are underrated. Running validators before/after tasks means you can enforce code standards automatically without reviewing every PR manually. This is how AI coding goes from “helpful” to actually production-ready. 🔥
1 reply · 1 repost · 6 likes · 687 views
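To make the validator idea concrete, here is a minimal sketch of a secret-scanning script of the kind a pre-task hook might invoke. The wiring into Codex hooks is not shown here, and the regex patterns are illustrative assumptions rather than a production rule set:

```python
# secret_scan.py -- hypothetical pre-task validator: exits nonzero (blocking
# the task) if the file passed on the command line looks like it leaks secrets.
import re
import sys

# Illustrative patterns only; real scanners ship much larger rule sets.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "API key (sk- prefix)": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "private key header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan(path: str) -> int:
    text = open(path, encoding="utf-8").read()
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
    if hits:
        print("blocked: possible secrets found:", ", ".join(hits), file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(scan(sys.argv[1]))
```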

Daniel @chikamsoICE
@sama Mobile coding agent is a big deal for developers in emerging markets. Not everyone has a powerful laptop. Now your phone is your dev environment. This opens AI engineering to millions who couldn’t access it before. 🌍
0 replies · 1 repost · 2 likes · 84 views

Sam Altman @sama
Codex in the ChatGPT mobile app!
1.2K replies · 475 reposts · 9.5K likes · 1.3M views

Daniel retweeted
Daniel @chikamsoICE
Building an AI that diagnoses crop diseases from leaf images 🌿
Vision Transformer + PlantVillage via HuggingFace.
✅ 64% confidence on Orange Greening
✅ Free. Open source. Colab.
AI isn’t just for tech. It’s saving farms. 🌾 @svpino #MachineLearning #AI #HuggingFace #ML
[image attached]
1 reply · 1 repost · 4 likes · 94 views
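For readers who want to try something similar, a minimal sketch of leaf-image classification with a Vision Transformer through the HuggingFace pipeline API. The checkpoint id and image path are placeholders, since the thread does not name the exact PlantVillage fine-tune used:

```python
# pip install transformers torch pillow
from transformers import pipeline

# Hypothetical checkpoint id: substitute any ViT image-classification model
# fine-tuned on PlantVillage (or your own fine-tune from Colab).
classifier = pipeline(
    "image-classification",
    model="your-username/vit-plantvillage",
)

# Classify one leaf photo; each prediction carries a confidence score,
# comparable to the 64% figure quoted in the tweet.
for pred in classifier("leaf.jpg"):
    print(f"{pred['label']}: {pred['score']:.1%}")
```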

Daniel @chikamsoICE
@theuniverseson AI is a greenfield multiplier, not a legacy fixer.
Claude Code + Cursor on new code = 🔥
Same tools on a 10-year-old codebase = more debugging than coding.
The 26% was never the full story. This is what the AI hype cycle keeps skipping. 👏
0 replies · 0 reposts · 1 like · 59 views

Andrey Kruglyak @theuniverseson
@chikamsoICE Fair, the floor is rising. My stack's been Claude Code + Cursor. Greenfield and boilerplate: real gains. But legacy code or architecture? I spend as much time fixing AI output as writing it myself. That's my "closer to flat." The 26% captures the easy wins, not the trust-but-verify cost.
1 reply · 0 reposts · 1 like · 39 views

Daniel @chikamsoICE
@theuniverseson Fair point. The 26% is an average, and averages hide a lot. But I’d argue the floor is rising — even if gains are uneven, devs who ignore AI tools are falling behind those who don’t. What’s your stack been like these past 6 months?
1 reply · 0 reposts · 0 likes · 37 views

Andrey Kruglyak @theuniverseson
@chikamsoICE The 26% number is doing a lot of work in this conversation. In my last six months it has felt closer to flat once you account for fixing AI output. The jobs disappearing are the ones nobody could measure cleanly to begin with.
1 reply · 0 reposts · 1 like · 32 views

Daniel @chikamsoICE
A quantum-inspired algorithm just solved a problem supercomputers couldn’t. Quantum + AI is the next frontier. ML Engineers who start learning quantum computing NOW are 5 years ahead of everyone else. Scary or exciting? Drop 🔥 or 😨 #QuantumAI #MachineLearning #AI #MLEngineer
0 replies · 0 reposts · 2 likes · 33 views

Daniel @chikamsoICE
@AndrewYNg @AMD Just used a Vision Transformer today to build a crop disease detector. Understanding how attention mechanisms actually work changed everything for me. This course is timely 🙌
0 replies · 0 reposts · 1 like · 56 views

Andrew Ng @AndrewYNg
New course: Transformers in Practice. You'll get a practical view of how transformer-based LLMs work, so you can reason about their behavior, diagnose problems like slow inference, and make smarter decisions about deployment. This course is built in partnership with @AMD and taught by @realSharonZhou.
You'll see how transformers generate text one token at a time, how the model decides which earlier words matter most when predicting the next one, and how techniques like quantization speed up inference on GPUs. This is not a video-only course; interactive visualizations throughout let you play with these concepts and build intuition that sticks.
Skills you'll gain:
- Understand why LLMs hallucinate, and how RAG and chain-of-thought shape what they generate
- Look inside the model to see how attention and layers combine to predict the next token
- Diagnose inference bottlenecks and learn the techniques that speed up transformers on GPUs
Join and understand what's really happening inside your LLMs: deeplearning.ai/courses/transf…
42 replies · 131 reposts · 768 likes · 84.7K views
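The token-by-token loop the course describes is easy to see in code. A minimal sketch of greedy decoding with GPT-2 as a stand-in model (the course itself may use different models and tooling):

```python
# pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The attention mechanism lets a model", return_tensors="pt").input_ids

# Greedy decoding: at each step the model scores the whole vocabulary given
# all earlier tokens, and we append the single highest-scoring token.
for _ in range(12):
    with torch.no_grad():
        logits = model(input_ids=ids).logits
    next_id = logits[0, -1].argmax()
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```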