Matthew Anorkplim Loh

837 posts


@iam_multiman

AI Oracle & Solutions Architect - conceptualizing, ideation, rapid-prototyping, testing & MVPs. Driven by innovation.

Joined July 2025
44 Following · 21 Followers
Pinned tweet
Matthew Anorkplim Loh@iam_multiman·
Built a Ghana Sign Language LMS leveraging Google MediaPipe for motion tracking, then internationalized it for any country's sign language. Planning to open-source it, but what to do about data entry, the massively demanding task of creating course content: text, images, videos?
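One practical piece of a MediaPipe-based sign-language pipeline is making hand landmarks comparable across different signers and camera distances. A minimal sketch, assuming MediaPipe-Hands-style output (21 `(x, y, z)` landmarks per hand, wrist at index 0, middle-finger MCP at index 9); the helper name and approach are illustrative, not from the actual project:

```python
import math

# MediaPipe Hands emits 21 (x, y, z) landmarks per hand;
# index 0 is the wrist, index 9 the middle-finger MCP joint.
WRIST, MIDDLE_MCP = 0, 9

def normalize_landmarks(landmarks):
    """Translate landmarks to a wrist-centered origin and scale by the
    wrist-to-middle-MCP distance, so the same handshape produces
    similar feature vectors regardless of signer or camera distance
    (hypothetical helper, not part of the LMS)."""
    wx, wy, wz = landmarks[WRIST]
    centered = [(x - wx, y - wy, z - wz) for x, y, z in landmarks]
    mx, my, mz = centered[MIDDLE_MCP]
    scale = math.sqrt(mx * mx + my * my + mz * mz) or 1.0
    return [(x / scale, y / scale, z / scale) for x, y, z in centered]
```

Normalized vectors like these can then be fed to a lightweight classifier or nearest-neighbor matcher per sign, which keeps the per-country vocabulary swappable.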
Matthew Anorkplim Loh@iam_multiman·
Hot take: anyone who claims AI can clone any SaaS product has never actually cloned a SaaS product. At least not a closed-source one.
Shub@shub0414·
Every idea feels taken. Every API already exists. Every SaaS has 12 competitors. So what do we even build now?
Matthew Anorkplim Loh@iam_multiman·
@zuess05 What do you mean Claude automates the logic problem? The fun part is identifying the problem and figuring out the best solution. At least for many engineers that's the real challenge. AI still doesn't do that very well. What it does well is automate the routine parts.
Suhas@zuess05·
Genuine question. We got into software because solving complex logic problems was genuinely fun. Now Claude instantly solves the fun parts. If we accidentally automated the creative work and kept the garbage work... what exactly are we going to be doing all day?
Matthew Anorkplim Loh retweeted
Ihtesham Ali@ihtesham2005·
The former director of AI at Tesla stood up at Y Combinator's AI Startup School in June 2025 and said something that made half the room of young developers realize they had been preparing for the wrong future. His name is Andrej Karpathy, and he is one of the only people alive who has been in the room for all three of the paradigm shifts that built modern AI. He was a founding member of OpenAI. He led the Autopilot team at Tesla. He designed and taught the first deep learning class at Stanford, which grew from 150 students in 2015 to 750 by 2017 and then escaped onto the internet, where millions of people have watched it since. When he said something had fundamentally changed, the people in that room had every reason to listen.

Here is the framework he walked through, and why it is the clearest map anyone has drawn of what just happened to software. He said there have now been three distinct eras of programming, and they are not replacements of each other. They are layers on top of each other, each one eating into the work that used to require the one below it.

Software 1.0 is what almost everyone still means when they say code. A human being sits down, writes explicit step-by-step instructions in Python or C or JavaScript, and the computer does exactly what those instructions say. For seventy years, this was the only kind of software there was.

Software 2.0 is the shift Karpathy himself named in a 2017 essay. He watched it happen in real time at Tesla. The team stopped writing explicit rules for how the car should recognize a stop sign and started showing a neural network millions of examples until it figured the pattern out on its own. The code was no longer the instructions. The code was the dataset and the network architecture, and the actual logic lived in the weights that came out of training. He wrote at the time that Software 2.0 was eating Software 1.0 one function at a time, and inside Tesla he was watching hand-coded computer vision logic get deleted and replaced by learned weights week after week.

Software 3.0 is the one that just arrived, and it is the one almost nobody has the right framework for yet. He said the line carefully. "The hottest new programming language is English." Not a metaphor. A literal statement about how software is now being built. You no longer need to write Python to produce behavior. You write a prompt in plain language, and a large language model executes the intent. The prompt is the program. The English is the source code. And the thing that makes this more than a productivity improvement is what he said next. Software 3.0 is eating Software 1.0 and Software 2.0 at the same time. Every traditional rule-based function that used to require a team of engineers can now be replaced by a prompt and a model call. Every narrow machine learning model that used to require millions of labeled examples can be replaced by a large model that was already trained on a significant fraction of the internet. The entire stack is being compressed upward into natural language.

The implication he drew from this is the one that matters most for anyone trying to figure out what to build next. He said we are living through the single biggest expansion of accessibility in the history of computing. For seventy years, programming required learning a formal language that fewer than one percent of humans could ever become fluent in. In the span of about three years, the barrier has collapsed. The only language you need to program a computer now is the one you already speak.

He used a phrase for this that sounded almost silly until you realize what it actually means. Vibe coding. The act of describing the program you want in loose natural language and letting the model handle the syntax, the structure, the boilerplate, and the integration. You do not need to know Swift to describe the iOS app you want to build. You describe the vibe, and the LLM handles the rest.

But he was careful not to oversell it. He said LLMs are what he calls people spirits. Stochastic simulations of human reasoning with an emergent psychology and a set of very specific weaknesses that every builder now has to design around. They have jagged intelligence, meaning they can do astonishingly hard things and then fail at something a child could handle. They have anterograde amnesia, meaning they cannot form new long-term memory the way a human coworker would. They hallucinate. They get confused. They need supervision.

Which means the job of a developer is not disappearing. It is changing shape. The best developers in the Software 3.0 era are not the ones who write the most code. They are the ones who can think in systems, design the right prompts, build the validation layers that catch the model when it drifts, and orchestrate an entire pipeline of specialized AI agents the way a conductor handles an orchestra.

The specific line he kept coming back to is the one I keep thinking about. We are no longer just writing code. We are managing behavior. The people who will build the important things in the next decade are not the ones with the cleanest syntax. They are the ones who figured out, earlier than everyone else, that when English becomes a programming language, the bottleneck is no longer how well you can speak to the compiler. The bottleneck is how clearly you can think about what you actually want the machine to do. And that has always been the real skill. It is just that for seventy years, we had the luxury of hiding it behind the syntax.
Ihtesham Ali tweet media
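The 1.0-versus-2.0 distinction in the thread can be compressed into a runnable toy: the same decision written as an explicit human-authored rule versus induced from labeled examples. A deliberately tiny sketch; the "training" here is a single learned threshold standing in for a network's weights, all names are illustrative, and Software 3.0 has no runnable analogue here since it would require an actual LLM call:

```python
# Software 1.0: a human writes the rule explicitly.
def is_hot_1_0(temp_c):
    return temp_c > 30.0  # the logic lives in the source code

# Software 2.0: the rule is induced from labeled examples;
# the "logic" lives in a learned parameter, not the code.
def train_threshold(examples):
    """examples: list of (temp_c, is_hot) pairs.
    Pick the midpoint between the coolest 'hot' example and the
    warmest 'not hot' example -- a one-parameter stand-in for
    training a network's weights."""
    hot = [t for t, label in examples if label]
    cold = [t for t, label in examples if not label]
    return (min(hot) + max(cold)) / 2.0

data = [(10, False), (25, False), (32, True), (40, True)]
threshold = train_threshold(data)       # the learned "weight"
is_hot_2_0 = lambda t: t > threshold    # behavior came from the data
```

Swapping the dataset changes the behavior without touching a line of logic, which is the point of the "code is the dataset" framing; in the 3.0 layer, the same behavior would come from a plain-language prompt instead.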
Matthew Anorkplim Loh@iam_multiman·
The proof of AGI would be when an AI model writes code and all other frontier models find zero issues after a thorough review
Matthew Anorkplim Loh retweeted
trash@trashh_dev·
claude --dangerously-skip-permissions
Rody Davis@rodydavis·
@iam_multiman @SaiNemani1 @antigravity Do you have the antigravity settings adjusted correctly? Also there should be an auto continue option. Only model capacity errors will stop it completely
Sai Nemani@SaiNemani1·
Honestly, I have been using @antigravity for quite a while now for some of my projects, and it's doing great. They have pretty good models, and generous limits imo. I don't see the real problem with it?
Magnus Müller@mamagnus00·
Your agent can record its own product demos now. Feels like watching it take a selfie. I asked it to demo, film & cut itself. With this flavor every agent becomes a magic horse🐎 and can do anything in the browser👇👇
Matthew Anorkplim Loh@iam_multiman·
This prompt or a variation may consume a lot of tokens... and may be quite frustrating but run it until there are no issues being reported in your codebase (which may be never 😅): "thoroughly review the whole codebase, to determine if there are any issues we missed"
Matthew Anorkplim Loh@iam_multiman·
@GoogleAIStudio The Figma killer. Well, I doubt it will kill Figma, but if the AI proves capable of turning the design into code that works as the app was conceived, we'll be welcoming a truly autonomous AI graphic designer.
Google AI Studio@GoogleAIStudio·
What are you vibe coding this weekend?
Gregor Zunic@gregpr07·
Introducing: Browser Harness. A self-healing harness that can complete virtually any browser task. ♞ We got tired of browser frameworks restricting the LLM. So we removed the framework.
> Self-healing — edits helpers.py on the fly
> Direct CDP — one websocket to Chrome
> No framework, no rails, complete freedom
> Drop-in for Claude Code and Codex
I challenge anyone to find a task that DOESN'T work. I couldn't yet.🔥 100% open source ↓
Gregor Zunic tweet media
Floro S.@sflorimm·
Let’s finally agree on this. If I vibe coded a project, can I still tell people that I built it?
Matthew Anorkplim Loh@iam_multiman·
@arunrafi @BoringBiz_ Right. In fact, just realising and accepting your own strengths and limitations can often be an insurmountable barrier for many. While there are other factors, biting off more than one can chew is a genuine problem. I wrote something related to this earlier x.com/i/status/20452…
Matthew Anorkplim Loh@iam_multiman

You were told that with the new developments around AI, you can build your own apps now. This is true to some extent. The whole truth, though, is that you still have to know the principles around building apps.
1. Is it necessary?
2. Are there existing solutions?
3. What underlying technology is optimal to build on?
4. What infrastructure do I need to deploy my app?
5. What's the recurring cost of deploying and maintaining the app?
6. Is my approach and the code generated secure enough?
7. Can I steer the AI when it is using the wrong technique or approach, or when it misunderstands a request?
8. Am I proficient enough in a specific language to relay my requirements?
9. Do I have the time and energy to debug errors and iterate over and over until the app is functional?
10. Do I even know what point my app needs to get to for me to say, "it's okay, it's good enough for version x or y"?
The list goes on. AI can only help with some of these questions. It doesn't address most of them. Even with AI, you are the one in the driver's seat. You remain the lead architect. What AI does enable, for every single person, literate or illiterate, technical or non-technical, is that it significantly brings down the cost of getting your dream software or app built and running. If you have a dream app or have desired a software tool that you previously felt was unattainable, this is the time to explore it. Let's talk

Boring_Business@BoringBiz_·
People are waking up to the fact that AI is a complementary tool for your skill set, not a complete replacement for a skill. If you are already a good coder or writer, AI tools can enhance that skill and make you more productive. But if you are bad at it, AI is not going to magically solve your deficiency. This is the primary reason I think the fears around AI replacing labor are overblown.
Boring_Business tweet media
Google AI@GoogleAI·
What a week! Here's everything we shipped:
- Gemini 3.1 Flash TTS, our latest text-to-speech model, featuring native multi-speaker dialogue and improved controllability and audio tags for more natural, expressive voices in 70+ languages
- Gemini Robotics-ER 1.6 by @GoogleDeepMind, an upgrade designed to help robots reason about the physical world
- The @GeminiApp for Mac desktop (tip: use Option + Space to access the app via shortcut)
- Personal Intelligence in @GeminiApp has new integrations with @GooglePhotos and Nano Banana 2, making it easier to create relevant, personalized images. Available for AI Pro, Plus, and Ultra subscribers in the US
- A couple of fun additions in @GoogleAIStudio to make building easier, including Design previews and tab tab tab functionality
- Skills in @GoogleChrome, which let you save and reuse your most helpful Gemini prompts and run them in your browser with a single click
Matthew Anorkplim Loh@iam_multiman·
@MolloyLaurence @BoringBiz_ Right, I'm convinced that building something niche that one very much needs offers significantly more motivation, making it much harder to be a failed vibecoder
Laurence Molloy@MolloyLaurence·
@iam_multiman @BoringBiz_ Totally... I'm building, from the ground up, a Swiss Army knife stock-management app to replace a periodically audited text file for cellar stocks and a less-than-ideal freezer app, and to extend my record keeping to a store cupboard that I never had the stomach to manage before.
Matthew Anorkplim Loh@iam_multiman·
@a_protsyuk @haider1 Why not announce that you are having challenges using these models so people can tell you what they do differently? Gemini works well in the 3 IDEs I've tried it in, even for building very complex apps. VS Code via GitHub Copilot works great, and so does Google AI Studio
Aleksandr Protsiuk@a_protsyuk·
@haider1 ngl the google point is real - Gemini can handle single files but falls apart on multi-file refactors rn. claude code actually understands project context across 50+ files which is where real engineering work happens, not toy demos.
Haider.@haider1·
the biggest problem right now is weak competition in AI coding
you basically have claude code or codex, and that's it
google does not seem fully invested in coding and looks more focused on real-world use cases
grok, meta, and most chinese models do not look competitive in practical testing
Arun Rafi@arunrafi·
@BoringBiz_ Accompanying the wave of creation will be a larger wave of frustration. "I can't because I lack resources" is uncomfortable but safe. "I had every resource and still couldn't make something good" is something else: frustration.