Stephen F. Oladele

3.1K posts


@stephenoladele_

Ecclesiastes 9:10 ✝️ Life by four ideologies: 1. Spirit of the Living God 🕊️ 2. Mind of an Entrepreneur 📈 3. Soul of a Creative 🎨 4. Body of an Athlete 🏋🏽

☁️ Joined December 2017
1.6K Following · 747 Followers
Stephen F. Oladele retweeted
Stitch by Google @stitchbygoogle
Meet the new Stitch, your vibe design partner. Here are 5 major upgrades to help you create, iterate and collaborate:
🎨 AI-Native Canvas
🧠 Smarter Design Agent
🎙️ Voice
⚡️ Instant Prototypes
📐 Design Systems and DESIGN.md
Rolling out now. Details and product walkthrough video in 🧵
864 replies · 4.4K reposts · 38.9K likes · 17.5M views
Stephen F. Oladele retweeted
C. S. Lewis @CSLewisDaily
“The Son of God became a man to enable men to become sons of God.” - C.S. Lewis
40 replies · 667 reposts · 6K likes · 63.1K views
Stephen F. Oladele retweeted
Richmond Alake @richmondalake
Day 95/100 of Agent Memory 🧠

Yesterday, I wrote about Anthropic's 1M context window going generally available at standard pricing and what it does and does not change for memory engineering. But one thing I forgot to mention that someone brought to my attention is latency.

Processing 900K tokens costs the same per token as processing 9K tokens. It does not take the same time. Prefill latency, the time the model spends reading and processing your input before generating a single output token, scales with context length. At 1M tokens, you are looking at prefill times that can run into minutes before you see your first response token.

Anthropic's announcement addresses pricing. It says nothing about processing speed 🤔

This matters more than it sounds for agent systems specifically. The workloads where 1M context is immediately and genuinely practical are the ones where latency is not the primary constraint: batch document analysis, offline research pipelines, contract review, codebase audits. Load everything in, wait, get a high-quality result. That workflow is now significantly cheaper and meaningfully more reliable than it was a week ago.

Real-time agentic loops are a different conversation. An agent operating within a user-facing product, making sequential tool calls, waiting for responses, and iterating toward an answer, is sensitive to every added second of prefill time.

The obvious mitigation here is prompt caching. If the bulk of your context is stable across turns, whether that is a system prompt, a large document set, or a populated knowledge base, you can cache the KV state of that prefix and avoid recomputing it on every request. Anthropic supports prompt caching, and at 1M context, it becomes less of a nice-to-have and more of an architectural necessity.

#100DaysOfAgentMemory #AgentMemory #MemoryEngineering
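A quick sketch of both points above. The throughput constant is a hypothetical placeholder (prefill speed varies by model and hardware, and Anthropic publishes no figure), and the request dict only illustrates where the Messages API `cache_control` marker sits on a stable prefix; the model name and document text are placeholders too.

```python
# HYPOTHETICAL prefill throughput, for illustration only -- not a published number.
PREFILL_TOKENS_PER_SEC = 10_000

def prefill_seconds(input_tokens: int, tps: int = PREFILL_TOKENS_PER_SEC) -> float:
    """Rough lower bound on time spent reading the prompt before the
    first output token; scales linearly with input length."""
    return input_tokens / tps

# Same per-token price, very different wait: 9K vs 900K input tokens.
short_wait = prefill_seconds(9_000)    # under a second at the assumed rate
long_wait = prefill_seconds(900_000)   # 100x longer -- minutes at slower rates

# Prompt caching: mark the stable bulk of the context so its prefix state
# can be reused across turns instead of being recomputed on every request.
# Shape follows Anthropic's Messages API `cache_control` convention;
# model name and corpus text are placeholders.
request = {
    "model": "claude-sonnet-placeholder",
    "max_tokens": 1024,
    "system": [
        {"type": "text", "text": "You are a contract-review assistant."},
        {
            "type": "text",
            "text": "<large stable document corpus goes here>",
            "cache_control": {"type": "ephemeral"},  # cache up to this block
        },
    ],
    "messages": [{"role": "user", "content": "Summarize clause 4.2."}],
}
```

Only the per-turn user message changes across iterations of an agent loop; everything above the `cache_control` marker is the reusable prefix.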
0 replies · 2 reposts · 3 likes · 207 views
Stephen F. Oladele retweeted
Oliver Burdick @oliverburdick
The highest place I’ll ever be is at Jesus’s feet.
[image]
93 replies · 2.6K reposts · 25.3K likes · 201.3K views
Stephen F. Oladele retweeted
GodlyVibez Studios @GVStudios_TV
🚨 Streamer Aries says men need to read their Bibles every day 🙌
27 replies · 480 reposts · 6.2K likes · 76K views
Stephen F. Oladele retweeted
clem 🤗 @ClementDelangue
I’m back. The girls and their superhero of a mom are doing great 😍😍😍 What did I miss?
Quoting clem 🤗 @ClementDelangue:

After almost 10 years of near nonstop grind, I’m taking 2 months of paternity leave to support my hero of a wife and welcome our twin daughters. @huggingface is in great hands with the team and @julien_c acting as interim CEO. Hope to return a changed man, to an even stronger HF, and to an AI field that’s more open and collaborative than ever!

94 replies · 9 reposts · 1.1K likes · 87K views
Stephen F. Oladele @stephenoladele_
With Claude Code, you sometimes need to be very specific about terminology to fix a problem. Ugh!
0 replies · 0 reposts · 1 like · 17 views
Stephen F. Oladele retweeted
GB News @GBNEWS
Bible sales surge to highest level on record as Britain's Christian revival continues gbnews.com/news/bible-sal…
121 replies · 823 reposts · 3.8K likes · 184.4K views
Stephen F. Oladele retweeted
Franklin Graham @Franklin_Graham
It was an incredible night in the beautiful city of Lima as 48,500 people came out to Esperanza Lima to hear the Gospel. We give God all the glory for the thousands who made the most important decision of their lives tonight—to turn from their sins and put their trust in Jesus Christ!
[4 images]
143 replies · 779 reposts · 4.9K likes · 47K views
Stephen F. Oladele retweeted
Mo @atmoio
I was a 10x engineer. Now I'm useless.
1.5K replies · 1.7K reposts · 16.1K likes · 6M views
Stephen F. Oladele retweeted
Latent.Space @latentspacepod
From rewriting Google’s search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack.

As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

We sat down with Jeff to unpack what it really means to “own the Pareto frontier,” why distillation is the quiet force behind every generation of faster, cheaper models, how energy, not FLOPs, is becoming the true constraint on AI compute, what it takes to co-design hardware and models 2–6 years into the future, why unified multimodal systems will outperform specialized ones, what it was like leading the charge to unify all of Google’s AI teams, and his prediction that deeply personalized models with access to your full digital context will redefine what useful AI looks like.

@JeffDean @GoogleDeepMind @Google
16 replies · 108 reposts · 1K likes · 527K views
Stephen F. Oladele retweeted
autist @litteralyme0
[image-only post]
72 replies · 2.7K reposts · 32.4K likes · 1.4M views
Stephen F. Oladele retweeted
FearBuck @FearedBuck
Church on Roblox is crazy 😭
306 replies · 1.4K reposts · 13.7K likes · 820K views
Stephen F. Oladele retweeted
Andrej Karpathy @karpathy
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn’t work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report and it was just done. I didn’t touch anything. All of this could easily have been a weekend project just 3 months ago but today it’s something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.

It’s not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
1.6K replies · 4.8K reposts · 37.3K likes · 5M views
Stephen F. Oladele retweeted
The Bible @The__Bible7
Jesus Christ is the Savior of the world.
112 replies · 1.1K reposts · 10.3K likes · 120.4K views
Jared Hardin @JaredDHardin
I would like to start following more Christians. More specifically, I'd like to follow Christian entrepreneurs, business and finance leaders, politicians, dads, and outdoorsmen. Who should I be following?
524 replies · 42 reposts · 1.5K likes · 92.3K views