Fowotade Tolulope Elijah

232 posts

@Tolulopee90

ML/AI Engineer | Computer Scientist | Christian | Music Lover ... let's collaborate

Akure, Nigeria · Joined November 2021
247 Following · 82 Followers
Fowotade Tolulope Elijah@Tolulopee90·
@develop_ed you can do this bro, line by line, file by file... you'll cook something solid 🔥 good to see ya build in public 😀
0 replies · 1 repost · 1 like · 44 views
Okegbemi Joshua@develop_ed·
Day 1 of working on my final year project. 💯 Most of the database schemas are designed, and most of the packages are installed. I don't know how to start because it looks really large and overwhelming. I just have to start anyway. Wish me luck 🤞🏿
2 replies · 0 reposts · 9 likes · 144 views
Fowotade Tolulope Elijah retweeted
Andrej Karpathy@karpathy·
# on shortification of "learning"

There are a lot of videos on YouTube/TikTok etc. that give the appearance of education, but if you look closely they are really just entertainment. This is very convenient for everyone involved: the people watching enjoy thinking they are learning (but actually they are just having fun). The people creating this content also enjoy it because fun has a much larger audience, fame and revenue. But as far as learning goes, this is a trap. This content is an epsilon away from watching the Bachelorette. It's like snacking on those "Garden Veggie Straws", which feel like you're eating healthy vegetables until you look at the ingredients.

Learning is not supposed to be fun. It doesn't have to be actively not fun either, but the primary feeling should be that of effort. It should look a lot less like that "10 minute full body" workout from your local digital media creator and a lot more like a serious session at the gym. You want the mental equivalent of sweating. It's not that the quickie doesn't do anything, it's just that it is wildly suboptimal if you actually care to learn.

I find it helpful to explicitly declare your intent up front as a sharp, binary variable in your mind. If you are consuming content: are you trying to be entertained or are you trying to learn? And if you are creating content: are you trying to entertain or are you trying to teach? You'll go down a different path in each case. Attempts to seek the stuff in between actually clamp to zero.

So for those who actually want to learn: unless you are trying to learn something narrow and specific, close those tabs with quick blog posts. Close those tabs of "Learn XYZ in 10 minutes". Consider the opportunity cost of snacking and seek the meal - the textbooks, docs, papers, manuals, longform. Allocate a 4 hour window. Don't just read; take notes, re-read, re-phrase, process, manipulate, learn.

And for those actually trying to educate, please consider writing/recording longform, designed for someone to get "sweaty", especially in today's era of quantity over quality. Give someone a real workout. This is what I aspire to in my own educational work too. My audience will decrease. The ones that remain might not even like it. But at least we'll learn something.
662 replies · 3.4K reposts · 17K likes · 2.2M views
Fowotade Tolulope Elijah retweeted
Karl Mehta@karlmehta·
BREAKING: Stanford just tracked 233 AI security incidents across 2024 & the pattern is staggering. Turns out, 64% of companies know AI is risky. But fewer than 1 in 3 have actual safeguards in place. Here's what the data revealed: (be prepared to have your mind blown...)
1 reply · 15 reposts · 26 likes · 7.2K views
Fowotade Tolulope Elijah retweeted
Haider.@haider1·
DeepSeek released an interesting paper ahead of its next model. They introduced a new research module called "Engram", focused on conditional, scalable memory. Many industry analysts already see Engram as a core building block for deepseek-v4, expected to launch next month. DeepSeek is actually cooking.
12 replies · 11 reposts · 86 likes · 5.9K views
Wise@trikcode·
Dear programmers, While binary search is very efficient, if a girl asks you to guess her age, don't say 50 and then 25.
229 replies · 919 reposts · 27.2K likes · 684.3K views
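The joke lands because a textbook binary search over ages 1–100 really does open with 50 and, for a target of 25, follows with 25. A minimal sketch of that guessing strategy (the `oracle` callback and function name are illustrative, not any particular library API):

```python
def guess_number(oracle, lo=1, hi=100):
    """Binary-search for a secret number in [lo, hi].

    oracle(guess) returns -1 if the secret is lower than guess,
    0 if equal, and 1 if higher. Returns (secret, guesses made).
    """
    guesses = []
    while lo <= hi:
        mid = (lo + hi) // 2          # always probe the midpoint
        guesses.append(mid)
        verdict = oracle(mid)
        if verdict == 0:
            return mid, guesses
        if verdict < 0:               # secret is below mid
            hi = mid - 1
        else:                         # secret is above mid
            lo = mid + 1
    return None, guesses              # secret was outside [lo, hi]

# For an age of 25, the first two guesses are exactly 50 and then 25:
age = 25
found, guesses = guess_number(lambda g: (age > g) - (age < g))
```

So the asymptotically efficient strategy and the socially wise one diverge after the very first probe.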
@bluecow 🐮@BLUECOW009·
i wanted to make a group to talk about daily research stuff, send papers, talk about AI news. who is interested? react or comment to be added
2.1K replies · 163 reposts · 6.2K likes · 273K views
Fowotade Tolulope Elijah retweeted
edison@mr_m0rale·
i’m confident about my potential like crazy, i just need time
94 replies · 3.1K reposts · 12.1K likes · 244K views
Avinash Singh@AvinashSingh_20·
Share your GitHub profile, I’ll review it and drop feedback!
496 replies · 14 reposts · 478 likes · 52.8K views
Fowotade Tolulope Elijah retweeted
Ayanfe Brand Lab@JoyAyanfe·
[image]
0 replies · 1 repost · 1 like · 34 views
Fowotade Tolulope Elijah retweeted
Lei Yang@diyerxx·
Got burned by an Apple ICLR paper — it was withdrawn after my Public Comment. So here's what happened.

Earlier this month, a colleague shared an Apple paper on arXiv with me — it was also under review for ICLR 2026. The benchmark they proposed was perfectly aligned with a project we're working on. I got excited after reading it. I immediately stopped my current tasks and started adapting our model to their benchmark. Pulled a whole weekend crunch session to finish the integration… only to find our model scoring absurdly low.

I was really frustrated. I spent days debugging, checking everything — maybe I used it wrong, maybe there was a hidden bug. During this process, I actually found a critical bug in their official code:

* When querying the VLM, it only passed in the image path string, not the image content itself.

The most ridiculous part? After I fixed their bug, the model's scores got even lower! The results were so counterintuitive that I felt forced to do deeper validation. After multiple checks, the conclusion held: fixing the bug actually made the scores worse.

At this point I decided to manually inspect the data. I sampled the first 20 questions our model got wrong, and I was shocked:

* 6 out of 20 had clear GT errors.
* The pattern suggested the "ground truth" was model-generated with extremely poor quality control, leading to tons of hallucinations.
* Based on this quick sample, the GT error rate could be as high as 30%.

I reported the data quality issue in a GitHub issue. After 6 days, the authors replied briefly and then immediately closed the issue. That annoyed me — I'd already wasted a ton of time, and I didn't want others in the community to fall into the same trap — so I pushed back. Only then did they reopen the GitHub issue.

Then I went back and checked the examples displayed in the paper itself. Even there, I found at least three clear GT errors. It's hard to believe the authors were unaware of how bad the dataset quality was, especially when the paper claims all samples were reviewed by annotators. Yet even the examples printed in the paper contain blatant hallucinations and mistakes.

When the ICLR reviews came out, I checked the five reviews for this paper. Not a single reviewer noticed the GT quality issues or the hallucinations in the paper's examples. So I started preparing a more detailed GT error analysis and wrote a Public Comment on OpenReview to inform the reviewers and the community about the data quality problems. The next day — the authors withdrew the paper and took down the GitHub repo.

Fortunately, ICLR is an open conference with Public Comment. If this had been a closed-review venue, this kind of shoddy work would have been much harder to expose. So here's a small call to the community: for any paper involving model-assisted dataset construction, reviewers should spend a few minutes checking a few samples manually. We need to prevent irresponsible work from slipping through and misleading everyone.

Looking back, I should have suspected the dataset earlier based on two red flags:

* The paper's experiments claimed that GPT-5 has been surpassed by a bunch of small open-source models.
* The original code, with a ridiculous bug, produced higher scores than the bug-fixed version.

But because it was a paper from Big Tech, I subconsciously trusted the integrity and quality, which prevented me from spotting the problem sooner. This whole experience drained a lot of my time, energy, and emotion — especially because accusing others of bad data requires extra caution. I'm sharing this in hopes that the ML community remains vigilant and pushes back against this kind of sloppy, low-quality, and irresponsible behavior before it misleads people and wastes collective effort.

#ICLR #ICLR2026 #NeurIPS #CVPR #openreview #MachineLearning #LLM #VLM
53 replies · 212 reposts · 2.5K likes · 396.9K views
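The path-string bug described above is easy to picture: the request builder sends the file path as a text field, so the model answers without ever seeing any pixels. A hypothetical sketch of this bug class (the message layout and helper names are illustrative, not from the actual Apple codebase):

```python
import base64

def build_request_buggy(question: str, image_path: str) -> dict:
    # Bug: the *path string* goes in as a text part, so the VLM receives
    # something like "/data/img_001.png" as literal text, never the image.
    return {"role": "user", "content": [
        {"type": "text", "text": question},
        {"type": "text", "text": image_path},   # <-- should be image content
    ]}

def build_request_fixed(question: str, image_bytes: bytes) -> dict:
    # Fix: encode the actual image content and attach it as an image part.
    return {"role": "user", "content": [
        {"type": "text", "text": question},
        {"type": "image", "data": base64.b64encode(image_bytes).decode("ascii")},
    ]}
```

A cheap sanity check that catches this whole class of bug is asserting, before sending, that every benchmark request actually contains an image-typed part.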
Fowotade Tolulope Elijah@Tolulopee90·
@chesscom I won't argue that... e and d pawns open up the bishops and get exchanged first... most people castle kingside, drawing more attack to that side... while not always, the a-pawn might receive comparatively less attack (plus my rook doesn't leave the a-file on time 😃)
0 replies · 0 reposts · 0 likes · 83 views
Chess.com@chesscom·
justice for e-pawns 😢
17 replies · 26 reposts · 655 likes · 27.3K views
Fowotade Tolulope Elijah@Tolulopee90·
@eskayML 😂 that's a really funny scenario. But seriously, beyond transparency, I think open-source FSD would actually improve performance and generalizability through community contributions and diverse testing scenarios
0 replies · 0 reposts · 1 like · 44 views
Mechanic 𝕏@eskayML·
FSD cars desperately need an open source implementation. Imagine a future where your car automatically drives you to the police station for cutting traffic.
4 replies · 0 reposts · 3 likes · 321 views
Fact@Fact·
Psychology says, the best things in life are usually found when you are not looking for them.
5 replies · 42 reposts · 226 likes · 12.8K views
Fact@Fact·
According to research, new friends become better friends over time if they have similar levels of social anxiety.
2 replies · 18 reposts · 112 likes · 11.7K views
Noam Brown@polynoamial·
You don’t need a PhD to be a great AI researcher. Even @OpenAI’s Chief Research Officer doesn’t have a PhD.
192 replies · 198 reposts · 3.4K likes · 1.3M views
Chess.com@chesscom·
give us your worst chess advice in one word 👀
300 replies · 9 reposts · 473 likes · 86.1K views
Fowotade Tolulope Elijah@Tolulopee90·
@levelsio Startups will storm everywhere 😊😊 as building gets easier and early teenagers are fully able to act on their curiosity... interesting times
0 replies · 0 reposts · 0 likes · 12 views