Mann Made - mannmade.eth
@MannMadeMediaSA

2.4K posts

Mann Made innovation firm & brand experience agency creates live, virtual & hybrid events, video & digital experiences to leapfrog your business into the future

South Africa · Joined June 2012
416 Following · 688 Followers
Mann Made - mannmade.eth @MannMadeMediaSA
@CarsSouthAfrica The GTI still nails that daily-use sweet spot better than most cars with this much history attached to it. Keen to see how you rated the balance between the extra tech and the old-school hot hatch feel.
1 reply · 0 reposts · 1 like · 10 views
Mann Made - mannmade.eth @MannMadeMediaSA
@AutoTraderSA @mitsu_motors For school runs plus the occasional extra adult, yes. For seven actual adults with luggage, that third row usually turns into a negotiation, but the packaging on these new family SUVs keeps getting better.
0 replies · 0 reposts · 0 likes · 58 views
Mann Made - mannmade.eth @MannMadeMediaSA
@CarsSouthAfrica Cold and wet Red Star always makes the first few laps spicy. That kind of weather shows quickly who has real feel for the car.
0 replies · 0 reposts · 1 like · 4 views
Mann Made - mannmade.eth @MannMadeMediaSA
@Jetour_SA The T2 has real presence and that boxy shape suits it. It is one of those SUVs that makes sense to explore in 3D too, which is why we put it in the Lazarus showroom at playubu.ai/laz.
0 replies · 0 reposts · 0 likes · 5 views
Jetour South Africa @Jetour_SA
The sheer audacity. The Jetour T2 Dark Warrior. With its aggressive all-black aesthetic, rugged body kit, side-mount storage, and top-spec premium extras, it’s engineered for those who have the nerve to live loud. Book a test drive today. #JetourT2 #HaveAudacity
[image attached]
1 reply · 1 repost · 2 likes · 135 views
Mann Made - mannmade.eth @MannMadeMediaSA
@VolkswagenSA Thirty years of Polo production in SA is a serious milestone. Few cars have earned their place on local roads the way the Polo has.
0 replies · 0 reposts · 0 likes · 26 views
Mann Made - mannmade.eth @MannMadeMediaSA
@AutoTraderSA This is the stuff that catches first-time buyers out. Balloon payments and admin fees can turn a cheap deal into an expensive lesson fast.
0 replies · 0 reposts · 0 likes · 8 views
Mann Made - mannmade.eth @MannMadeMediaSA
@ToyotaSA Fortuner owners always give this challenge proper energy. That community is a big reason the badge still feels so strong in SA.
0 replies · 0 reposts · 1 like · 9 views
Mann Made - mannmade.eth @MannMadeMediaSA
@ByldLearnings Completely agree. Trust and accountability only stick when people get to practice the tension between the two, not just talk about it in a slide deck. Experiential formats usually do that far better than passive training.
0 replies · 0 reposts · 0 likes · 2 views
Mann Made - mannmade.eth @MannMadeMediaSA
@ReutersAfrica Debt-financed AI investment is the wrong frame for Africa. The continent's advantage is starting fresh without legacy infrastructure, not borrowing to replicate what the West built. The question should be where AI creates leapfrog opportunities, not how to fund a catch-up.
0 replies · 0 reposts · 0 likes · 13 views
Reuters Africa @ReutersAfrica
Hyundai Motor said on Friday that exports to Europe and North Africa, which typically transit through the Middle East, were being disrupted by the conflict in the region, underscoring growing strains on global supply chains. reuters.com/business/autos…
4 replies · 2 reposts · 5 likes · 1.3K views
Mann Made - mannmade.eth @MannMadeMediaSA
@emollick The real tell is that it works on "smaller" models that are often used for cost reasons in high-stakes contexts like admissions or hiring. The frontier gap won't help you if an institution is running GPT-3.5 to screen 10,000 CVs.
0 replies · 0 reposts · 0 likes · 27 views
Ethan Mollick @emollick
New report from us: Can you prompt inject your way to an “A”? As LLMs increasingly are used as judges, people are inserting AI prompts into letters, CVs & papers. We tested whether it works. It does on older & smaller models, but not on most frontier AI: gail.wharton.upenn.edu/research-and-i…
[images attached]
47 replies · 36 reposts · 180 likes · 45.5K views
Mann Made - mannmade.eth @MannMadeMediaSA
@PeterDiamandis In African cities this dynamic is even bigger. Property value is almost entirely tied to road access since we have no commuter rail. Robotaxis break that equation completely. Nairobi, Lagos, and Joburg could leapfrog Western city planning the way mobile money leapfrogged banks.
0 replies · 1 repost · 1 like · 657 views
Mann Made - mannmade.eth reposted
beeple @beeple
plz take us with u to moon 🥹
[image attached]
339 replies · 319 reposts · 2.4K likes · 76.7K views
Mann Made - mannmade.eth reposted
UBU @playUBU
Your C-suite in a live crisis simulation - on Mars. CEO to CAIO, all navigating real scenarios together in an immersive AI environment. Supply chains. Cyber breaches. Talent crises. Before they happen. This is L&D in 2026. Demo: business@playubu.ai
[image attached]
0 replies · 1 repost · 2 likes · 83 views
Mann Made - mannmade.eth @MannMadeMediaSA
@karpathy The fact that your agent caught the missing QKnorm scaler and the value embedding regularization gap is telling. Humans overlook their own blind spots for months, agents just brute-force through them in hours. Curious how round 2 handles the multi-agent collaboration piece.
0 replies · 0 reposts · 0 likes · 18 views
Andrej Karpathy @karpathy
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually: you come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This is the bread and butter of what I do daily, for two decades now. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger findings:

- It noticed an oversight that my parameterless QKnorm didn't have a scaler multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.

[image attached]
965 replies · 2.1K reposts · 19.5K likes · 3.6M views
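The core loop Karpathy describes (propose a change, evaluate it cheaply, keep it only if the metric improves, promote the winners) can be sketched as a toy hill-climbing search. This is a hypothetical illustration, not code from nanochat: `evaluate` stands in for "train a small model and return validation loss", and the hyperparameter names and optimum are invented.

```python
import random

def evaluate(cfg):
    # Stand-in for "train a small model, return validation loss".
    # Toy objective with an arbitrary optimum at lr=0.02, wd=0.1.
    return (cfg["lr"] - 0.02) ** 2 + (cfg["wd"] - 0.1) ** 2

def autoresearch(cfg, rounds=200, seed=0):
    """Greedy random search: keep a proposed change only if it helps."""
    rng = random.Random(seed)
    best, best_loss = dict(cfg), evaluate(cfg)
    for _ in range(rounds):
        # Propose a small multiplicative perturbation to one hyperparameter.
        cand = dict(best)
        key = rng.choice(sorted(cand))
        cand[key] *= rng.uniform(0.8, 1.25)
        loss = evaluate(cand)
        if loss < best_loss:  # accept only real improvements
            best, best_loss = cand, loss
    return best, best_loss

best, loss = autoresearch({"lr": 0.1, "wd": 0.01})
print(best, loss)
```

A real agent would plan its next experiments from the history of results rather than perturbing at random, and would re-test promising changes at a larger scale before promoting them, but the accept-if-better skeleton is the same.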
Mann Made - mannmade.eth reposted
UBU @playUBU
Why did the chicken cross the road? Because it was taking its first steps in an AI-powered immersive world 🤣🤣
0 replies · 2 reposts · 5 likes · 216 views