Nwafor Glory

136 posts

@southeastdev

CTO, Bildup AI

Lagos, Nigeria · Joined May 2022
122 Following · 45 Followers
Nwafor Glory
Nwafor Glory@southeastdev·
Postman: "We've updated our features!" Me: "Oh, cool! What’s new?" Postman: "Your team is now locked out unless you pay up." 💀
0
0
0
6
Nwafor Glory retweeted
Gemini CLI
Gemini CLI@geminicli·
Gemini 3.1 Pro has arrived 🚀 We are beginning to roll it out within Gemini CLI. You will see gemini-3.1-pro-preview appear via /model once you have access. It may take a few days for the rollout to reach every single user. If you are using our model router feature (Auto), Gemini 3.1 Pro will start being used as the 3 Pro model. For the best experience, upgrade to the v0.29.4 release. Excited to see what everyone builds 💎
Logan Kilpatrick@OfficialLoganK

Introducing Gemini 3.1 Pro, our new SOTA model across most reasoning, coding, and STEM use cases!

42
74
1.1K
99.7K
Nwafor Glory retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
I think it must be a very interesting time to be in programming languages and formal methods because LLMs change the whole constraints landscape of software completely. Hints of this can already be seen, e.g. in the rising momentum behind porting C to Rust or the growing interest in upgrading legacy codebases in COBOL, etc. In particular, LLMs are *especially* good at translation compared to de-novo generation because 1) the original code base acts as a kind of highly detailed prompt, and 2) it serves as a reference to write concrete tests against. That said, even Rust is nowhere near optimal for LLMs as a target language. What kind of language is optimal? What concessions (if any) are still carved out for humans? Incredibly interesting new questions and opportunities. It feels likely that we'll end up re-writing large fractions of all software ever written many times over.
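Karpathy's second point, using the original codebase as the reference to write tests against, is essentially differential testing: the legacy implementation acts as the oracle a port must match. A minimal illustrative sketch in Python (the functions here are hypothetical stand-ins, not from any real migration):

```python
import random

def legacy_popcount(x: int) -> int:
    """The original implementation: slow, but it IS the spec."""
    count = 0
    while x:
        count += x & 1
        x >>= 1
    return count

def ported_popcount(x: int) -> int:
    """Candidate rewrite (e.g. an LLM-generated translation)."""
    return bin(x).count("1")

# Differential test: the legacy code is the oracle the port must agree with.
random.seed(0)
for _ in range(1_000):
    x = random.randrange(0, 2**32)
    assert legacy_popcount(x) == ported_popcount(x)
```

The same pattern scales to cross-language ports: run both implementations on the same generated inputs and diff the outputs, so the old codebase does double duty as prompt and as test reference.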
Thomas Wolf@Thom_Wolf

Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end):

Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic.

End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language.² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional—it's essential.

The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks—between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement learning environments. Expect this to reshape which languages dominate.

Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written—and perhaps more importantly, read—by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter—it will be decisive.

The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. There may well be an optimal programming language for LLMs—and there's no reason to assume it will resemble the ones humans have converged on.

TL;DR:
- Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic
- Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential
- Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics
- Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive
- New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on

¹ x.com/mntruell/statu…
² x.com/anthropicai/st…
³ wesmckinney.com/blog/agent-erg…
⁴ github.com/tailwindlabs/t…

700
653
8.1K
1.2M
Nwafor Glory
Nwafor Glory@southeastdev·
🥂 to a productive new week. Aim for cleaner code, y'all.
0
1
1
23
Manoj Kumar
Manoj Kumar@manojdotdev·
Let's end the debate: Antigravity or Cursor?
Manoj Kumar tweet media
275
6
424
64.9K
Nwafor Glory retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and in between. I have a sense that I could be 10X more powerful if I just properly strung together what has become available over the last ~year, and a failure to claim the boost feels decidedly like a skill issue. There's a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for the strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old-fashioned engineering. Clearly some powerful alien tool was handed around, except it comes with no manual, and everyone has to figure out how to hold it and operate it while the resulting magnitude-9 earthquake rocks the profession. Roll up your sleeves to not fall behind.
2.6K
7.5K
55.9K
16.8M
Nwafor Glory retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
The hottest new programming language is English
2.1K
8.5K
67.9K
11.7M
Nwafor Glory retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
New post: nanochat miniseries v1

The correct way to think about LLMs is that you are not optimizing for a single specific model but for a family of models controlled by a single dial (the compute you wish to spend) to achieve monotonically better results. This allows you to do careful science of scaling laws, and ultimately this is what gives you the confidence that when you pay for "the big run", the extrapolation will work and your money will be well spent.

For the first public release of nanochat my focus was on an end-to-end pipeline that runs all of the LLM stages. Now, after YOLOing a few runs earlier, I'm coming back around to flesh out some of the parts that I sped through, starting of course with pretraining, which is both computationally heavy and critical as the foundation of intelligence and knowledge in these models.

After locally tuning some of the hyperparameters, I swept out a number of models fixing the FLOPs budget. (For every FLOPs target you can train a small model for a long time, or a big model for a short time.) It turns out that nanochat obeys very nice scaling laws, basically reproducing the Chinchilla paper plots: Which is just a baby version of this plot from Chinchilla: Very importantly and encouragingly, the exponent on N (parameters) and D (tokens) is equal at ~=0.5, so just like Chinchilla we get a single (compute-independent) constant that relates the model size to the token training horizon. In Chinchilla, this was measured to be 20. In nanochat it seems to be 8!

Once we can train compute-optimal models, I swept out a miniseries from d10 to d20, which are nanochat sizes that can do 2**19 ~= 0.5M batch sizes on an 8XH100 node without gradient accumulation. We get pretty, non-intersecting training plots for each model size. Then the fun part is relating this miniseries v1 to the GPT-2 and GPT-3 miniseries so that we know we're on the right track.

Validation loss has many issues and is not comparable, so instead I use the CORE score (from the DCLM paper). I calculated it for GPT-2 and estimated it for GPT-3, which allows us to finally put nanochat nicely on the same scale: The total cost of this miniseries is only ~$100 (~4 hours on 8XH100). These experiments give us confidence that everything is working fairly nicely and that if we pay more (turn the dial), we get increasingly better models.

TLDR: we can train compute-optimal miniseries and relate them to GPT-2/3 via objective CORE scores, but further improvements are desirable and needed. E.g., matching GPT-2 currently needs ~$500, but imo should be possible to do <$100 with more work.

Full post with a lot more detail is here: github.com/karpathy/nanoc… And all of the tuning and code is pushed to master; people can reproduce these with the scaling_laws.sh and miniseries.sh bash scripts.
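A quick sketch of the arithmetic behind the compute-optimal claim above: under the standard C ≈ 6·N·D approximation for training FLOPs, fixing a tokens-per-parameter ratio r (≈20 measured by Chinchilla, ≈8 reported here for nanochat) pins down both the model size N and the token horizon D for any compute budget. The helper name and the example budget below are illustrative, not from the post:

```python
import math

def compute_optimal(flops_budget: float, tokens_per_param: float = 8.0):
    """Solve C ~= 6*N*D together with D = r*N for the compute-optimal
    parameter count N and training-token count D."""
    n_params = math.sqrt(flops_budget / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a hypothetical 1e19-FLOPs budget at nanochat's measured ratio
n, d = compute_optimal(1e19, tokens_per_param=8.0)
```

At Chinchilla's ratio (tokens_per_param=20.0) the same budget yields a smaller model trained on more tokens, which is exactly the sense in which the ratio is the single compute-independent constant of the series.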
Andrej Karpathy tweet media (4 images)
227
680
5.4K
708.2K
Nwafor Glory
Nwafor Glory@southeastdev·
The internet is your classroom, your workplace, and your launchpad. Age, background, and location no longer define what you can build; your actions do. Start. Learn. Build.
0
0
0
7
Nwafor Glory
Nwafor Glory@southeastdev·
Nothing else, just AI.
0
0
0
4
Nwafor Glory retweeted
Chibuike Aguene
Chibuike Aguene@ChibuikeAguene·
From Pain Point to Powerhouse: Bildup AI Raises $400K to Redefine Learning in Africa We started with a pain point. Now we’re building a movement. I’m proud to share that Bildup AI has closed an oversubscribed $400K angel round — a bold vote of confidence in our mission to transform how Africa learns and builds. This is not just a funding milestone. It’s a signal that the world is paying attention to what we’re building — a platform that delivers personalized, purpose-driven, AI-powered learning at a fraction of the cost and time of traditional education. We’re scaling our team. We’re expanding to Abuja and Lagos with new AI Learning Centres. And we’re doubling down on our commitment to help every young African discover their path, build real skills, and shape the future. To every investor who believed early — thank you. To every learner who trusted us — we’re just getting started. Let’s stop raising job seekers. Let’s start raising builders. Let’s Bildup the future. Bildup AI...empowered to make a difference. Read the full story on Techpoint: techpoint.africa/brandpress/bil… #BildupAI #AIStartup #EdTech #AIforAfrica
Chibuike Aguene tweet media
0
1
3
47
Nwafor Glory retweeted
Chibuike Aguene
Chibuike Aguene@ChibuikeAguene·
While the World Builds, We Gossip: A Call to Reclaim Nigeria’s National Focus – Part 2 Every morning, as students walk to school and return home, they pass a landscape that silently teaches them what we value. They see hotels, bars, filling stations, shopping plazas, provision stores, and furniture showrooms—structures built for consumption, not creation. Not one science lab. Not one innovation hub. Not one space that whispers, “You can build something here.” And when they come online, the story doesn’t change. Their screens are flooded with gossip, comics, celebrity drama, and endless distractions. Rarely do they stumble upon conversations about AI, biotech, or the breakthroughs shaping the future of humanity. This is not just about what they see. It’s about what they begin to believe is possible. And if we don’t change the narrative, we are silently telling them: “This is all there is.” And somehow, we think a miracle will happen. That they will magically become productive. That they will somehow lead innovation. Make no mistake: we are not preparing the next generation. We are betraying them. And the nation will pay a heavy price if we do not act now. At Bildup AI, we are not just building software. We are building a lifeline. A pathway. A promise. Every day, our team sacrifices long nights, relentless iteration—not because it’s easy, but because it’s necessary. We believe that every Nigerian child deserves more than gossip and guesswork. They deserve mastery. They deserve relevance. They deserve a future where they don’t just consume innovation—they create it. Let’s talk about: - AI literacy - Scientific breakthroughs - Local innovation - Youth empowerment - National transformation Let’s make these the headlines. Let’s make these the trends. Let’s make these the heartbeat of our nation. 
To every parent, educator, school owner, and visionary reading this: - Sign up your child on Bildup AI - Talk to your school about integrating our platform - Partner with us to open more AI learning centres across Nigeria - Start conversations that build, not break Let’s not wait for the future to arrive. Let’s build it together. Let’s build up Nigeria. Bildup AI… empowered to make a difference. #bildupai #AIStartup #Education #NextGenAI
Chibuike Aguene tweet media
0
1
2
38
Nwafor Glory retweeted
Chibuike Aguene
Chibuike Aguene@ChibuikeAguene·
The Cost of Late Exposure: Why Nigeria Must Rethink How We Prepare Our Youth for the Future Look at the image below. That’s Tanmay Bakshi—an AI prodigy who started working with IBM Watson at just 11 years old. By 14, he was speaking at global summits, publishing books, and building real-world solutions. Not after graduation. Not after NYSC. At 11. Now pause. Look around. Think about the average Nigerian teenager. What kind of exposure are we giving them? In Nigeria, we’ve normalized a dangerous delay. We wait until after secondary school. After JAMB. After university. After NYSC. Then we ask young people to start thinking critically about their future. By then, many are already disillusioned. They’ve spent years memorizing outdated content, chasing grades, and surviving an academic calendar so packed it leaves no room for exploration, creativity, or work experience. We must say it clearly: This is a great error. In many parts of the world, teenagers are building apps, interning at startups, and contributing to open-source projects. They’re not just studying—they’re working while they study. And that’s why they’re ready. In Nigeria, we treat work and study like oil and water. We overpack academic calendars, discourage internships, and stigmatize alternative learning paths. But here’s the truth: If we don’t normalize work and study, we will never build a workforce-ready youth. To be continued... Let's build up Nigeria. Bildup AI...empowered to make a difference
Chibuike Aguene tweet media
0
1
1
64
Nwafor Glory
Nwafor Glory@southeastdev·
@bod_zsn @PMTNigeria My brother booked you guys for his journey from Ibadan to Enugu today, scheduled for 6 am. First you guys told them the first bus was bad and they had to wait till 11 am to fix the bus before leaving Ibadan. It's 9 pm now, they're stuck at Benin, and you guys left them stranded.
Nwafor Glory tweet media (2 images)
0
0
0
5
Nwafor Glory
Nwafor Glory@southeastdev·
@Prince_NedNwoko @PMTNigeria My brother booked you guys for his journey from Ibadan to Enugu today, scheduled for 6 am. First you guys told them the first bus was bad and they had to wait till 11 am to fix the bus before leaving Ibadan. It's 9 pm now, they're stuck at Benin, and you guys left them stranded.
Nwafor Glory tweet media (2 images)
0
0
0
4
Senator (Dr.) Prince Ned Nwoko
Senator (Dr.) Prince Ned Nwoko@Prince_NedNwoko·
REGINA’S UNPROVOKED CARNAGE AND RAMPAGE IN MY HOUSE AND IN MY ABSENCE Regina was not always like this. Her current battle with drug and alcohol abuse is the root of our problem. She must continue her rehabilitation program, or I fear for her life and safety. Now she has moved to a place where she will have unrestricted access to drugs. I have other wives, and none will ever accuse me of violence. Regina is the violent one here, slapping and hitting 3 staff in the past 48 hours and destroying property, including cars and windows, for no just cause. The truth is, I have set a clear condition for her to accept rehab in Asokoro or outside Nigeria, especially Jordan, where she will not have access to drugs. A clear-headed Regina would have taken Moon to the hospital, but instead she even threatened to kill our resident nurse (for exposing her drug abuse). While I took Moon to the hospital, a scene of chaos unfolded at home, orchestrated by Sammy, Regina’s main drug supplier. Another known supplier of drugs to Regina is the tiny evil devil called Ann.
5.3K
3.6K
14.6K
5.9M
Nwafor Glory
Nwafor Glory@southeastdev·
@PMTNigeria You collected money from people and told them they can’t sleep in the park. You didn’t provide any form of accommodation for the passengers whose plans you messed up. @PMTNigeria
0
0
0
10
Peace Mass Transit
Peace Mass Transit@PMTNigeria·
Peace Mass Transit is ever evolving! With the introduction of our Extra Comfort Buses, we’re reaffirming our commitment to high-quality, customer-centric transportation that truly puts passengers first. Step aboard and enjoy: - Spacious, luxury seating for unmatched comfort
5
1
1
92
Nwafor Glory
Nwafor Glory@southeastdev·
@PMTNigeria My brother booked you guys for his journey from Ibadan to Enugu today, scheduled for 6 am. First you guys told them the first bus was bad and they had to wait till 11 am to fix the bus before leaving Ibadan. It's 9 pm now, they're stuck at Benin, and you guys left them stranded.
Nwafor Glory tweet media (2 images)
0
0
0
22
Nwafor Glory retweeted
Chibuike Aguene
Chibuike Aguene@ChibuikeAguene·
I want to thank the management team of Adorable British College Enugu, led by Mr. Chris Terry, for making a significant contribution to transforming the education system in Africa by being the first school to formally integrate Bildup AI into their classrooms. This is a win for Nigeria and Africa. A great school with brilliant students and excellent teachers.
Chibuike Aguene tweet media
0
1
2
115
Nwafor Glory
Nwafor Glory@southeastdev·
We just launched Nigeria's first indigenous education AI! Built with vision capabilities, real-time video generation, and audio communication. So excited to be part of the team and to be the lead AI engineer who pioneered this. bildup.ai
0
0
0
40