Jerry Low

26.5K posts

@WebHostingJerry

Geek dad / SEO junkie / Value Investor.

Ipoh, Malaysia · Joined September 2009
1.9K Following · 2.9K Followers
Jerry Low retweeted
Mo @atmoio
AI is making CEOs delusional
Grant Hankins @GHankins25
@JacobEdwardInc Raised 5 kids, no gaps in employment, dropped alcohol/sugar, eat clean. 467 days straight of 10k+ steps, 30+ minutes exercise, 12+ hours stand, 500+ move calories. 49 years old and just completed annual physical with Doc saying hands down healthiest patient. Discipline is the way
[image attached]
Jacob Edward @JacobEdwardInc
Neither of these men is married or has kids. Both are simply obsessed with their own personal perfection and optimization. There is nothing impressive about a single man with no kids sleeping well and being fit. Show me a man with young children, a full-time job, disrupted sleep, who works out regularly, eats healthy, trains Jiu-Jitsu, and has a muscular body… THIS is impressive. THIS requires extreme discipline.
Camus @newstart_2024

Chris Williamson just shared his "nuclear" sleep stack that's quietly changing his life, and Andrew Huberman breaks down exactly why it works:

If you're lying in bed at 2 a.m. scrolling or staring at the ceiling, this 4-minute protocol combo might be the fastest way to shut your brain off without pills.

The two killer techniques Williamson swears by:

1. The Mind Walk (visualization on steroids)
- Imagine walking a route you know perfectly (your house → front door → street)
- Do it with insane detail: feel the shoehorn, hear the key turn, feel the door handle, the pressure of the pavement
- It's like reading fiction for your nervous system: it engages the brain just enough to stop problem-solving loops, but not enough to keep you awake

2. Resonance breathing with the Ohm stone lamp
- A bedside lamp with an induction-charging stone that has a built-in FDA-cleared HRV sensor
- Hold the stone → 3/6/9/12-minute guided sessions with silent tactile vibration (no sound, no light, partner-safe)
- Guides you into true resonance frequency (max vagal tone); the stone knows when you hit it
- Williamson calls it "the sickest" sleep tool he's ever used; currently in stealth (ohmhealth, not widely available yet)

Huberman adds the neuroscience: looking down while your eyelids lower activates parasympathetic circuits and deactivates wakefulness-promoting brainstem nuclei. It's literally pressing the sleep pedal while shutting off the alertness arm.

Williamson: "Some days you need the adventure story (mind walk), some days you need the physiological hammer (resonance breathing). Stack them and I'm cross-eyed into sleep."

Already trying one of these? Or is your nighttime routine still a war zone?

Jerry Low retweeted
BURKOV @burkov
LLMs process text from left to right: each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the context tokens were processed without any awareness of what question was coming. This asymmetry is a basic structural property of how these models work.

The paper asks what happens if you just send the prompt twice in a row, so that every part of the input gets a second pass in which it can attend to every other part. The answer is that accuracy goes up across seven different benchmarks and seven different models (from the Gemini, ChatGPT, Claude, and DeepSeek series of LLMs), with no increase in the length of the model's output and no meaningful increase in response time, because processing the input is done in parallel by the hardware anyway.

There are no new losses to compute, no finetuning, no clever prompt engineering beyond the repetition itself. The gap between this technique and doing nothing is sometimes small, sometimes large (one model went from 21% to 97% on a task involving finding a name in a list). If you are thinking about how to get better results from these models without paying for longer outputs or slower responses, that's a fairly concrete and low-effort finding.

Read with AI tutor: chapterpal.com/s/1b15378b/pro…
Get the PDF: arxiv.org/pdf/2512.14982
[image attached]
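The trick described above is easy to reproduce with any chat API. Here is a minimal Python sketch; the `duplicate_prompt` helper name and the exact prompt layout are my own assumptions, not the paper's published formatting:

```python
def duplicate_prompt(context: str, question: str) -> str:
    """Build the "send it twice" prompt: repeat the full input so
    that, on the second copy, every context token can attend to the
    question (and vice versa). Separator choice is an assumption."""
    once = f"{context}\n\nQuestion: {question}"
    return f"{once}\n\n{once}"

# The doubled prompt is then sent as a single user message to any
# chat-completion endpoint; output length and latency stay roughly
# the same because input tokens are processed in parallel.
doubled = duplicate_prompt(
    "Alice's ID is 7421. Bob's ID is 9980.",
    "What is Bob's ID?",
)
```

Since the repetition only lengthens the input, not the output, the cost of trying this on your own benchmark is essentially just extra input tokens.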
Jerry Low retweeted
Grant @Grantblocmates
Anthropic have just buried OpenAI and ChatGPT with this ad lmfao. There's no coming back from that
maskobuilds @maskobuilds
@thekitze @levelsio IMAGINE, just IMAGINE, making 2M a year and yapping about some crazy theories you don't have proper knowledge about. 2M a year, i would fuck off the X platform and enjoy real life
kitze 🛠️ tinkerer.club
i've supported and defended @levelsio publicly *countless* times when ppl blocked him, dunked on his takes, ego, etc.

after texting and interacting a bunch on twitter, i invited him for coffee in 2017 when we both lived in nl and he replied "that would be a waste of time, it's low ROI". don't confuse this for a stan-type relationship; we actually interacted a lot before this. he even subscribed to me and texted "dude i'm your biggest fan" etc., which is confusing af to me. i asked him to have a conversation on a podcast countless times over the years and he kept inventing different conditions.

a week ago he RT-ed some random dude and i DM'd him to ask "man, how come you support anyone about anything but never gave one of my apps a chance" (by chance i meant to even try them, because i know how some of them could help him be more productive). he replied "brother this is beggar energy", which kinda hurt. when ppl ask me for a RT boost even now (especially someone i interact with constantly), i try to at least be helpful and not insulting.

the guy's fame and x revenue hit him so hard in the head he really forgot how to interact with human beings, and forgot that some words actually hurt, especially coming from someone you've been looking up to and seeing as an inspiration for over 10 years.

i had a draft to invite him to tinkerers because everyone wants him there, but i knew what he was gonna reply with, and i'm glad i didn't send it.

the last straw for me is when literally everyone on the timeline has been congratulating @tinkererclub's success publicly and privately. i mean literally all the big names. heck, i even made amends with ppl i haven't interacted with in years, and i'm happy i did so. the support means A LOT, especially cuz i'm navigating uncharted territory of growth i haven't seen before. guess who i wanted to ask for advice? another unsent draft.

the only thing pieter had to say was to dunk on me twice about my tweet about antidepressants (which was VERY hard to write and confess publicly). the person who was trying to sell an ai therapist to people is now making fun of mental health and depression.

imagine being this rich and famous but still having this amount of bitterness and saltiness in you. it's actually sad tbh.

this is a skill issue because i continue to confuse acquaintances for friends. i don't have friends, and that's fine. if i ever lose my head and reach this level of being cruel to ppl, i want you to call me out. if anything, this makes me wanna be more helpful and nicer to people.

i hate to admit it, but some of you mfs were right about him, i just couldn't see clearly. hardest block i've pressed in my life. but i've burned bridges and cut off *way* closer people this year over toxicity. i'm starting fresh.

sayonara brother, i wish you all the best ✌️
[image attached]
Jerry Low retweeted
Peter Girnus 🦅 @gothburz
I'm a Reserve Manager at a central bank. My job is buying gold. 297 tons this year. Quietly. While we print money. Loudly.

Gold hit $5,000 an ounce yesterday. We've been buying since it was $1,800. That's called "reserve diversification." Diversification means we don't trust our own currency. But we can't say that. So we say "diversification."

The Governor went on television last month. He said inflation is "anchored." Anchored means 6%. Used to mean 2%. We moved the anchor. That's monetary policy. He said the currency is "sound." Sound means losing 20% of its value. Per year. But it sounds sound. That's what matters.

We bought 45 tons in November. Poland bought 95 tons. Brazil bought 43. China reports 1 ton. China is lying. We all know. Nobody says it.

95% of central banks plan to buy more gold next year. That's a survey. We surveyed ourselves. On whether we trust ourselves. We don't. We trust gold.

Citizens ask why prices keep rising. We say "supply chains." We say "external factors." We don't say "we printed 40% of all money in existence since 2020." That's not external. That's us.

The Finance Minister asked if gold is a hedge against our own policies. I said "gold is a strategic reserve asset." Strategic means yes. I just can't say yes.

Gold is $5,000 now. Our currency buys less every day. Our gold buys more. That's the strategy. For us. Not for you. You get the currency. We get the gold. That's central banking.
Jerry Low @WebHostingJerry
@borjitaea Gaps are where dreams go to die 🤣🤣🤣 speaking truly like a machine
Borja @borjitaea
My clawdbot just signed up for a $2,997 "build your personal brand" mastermind after watching 3 Alex Hormozi clips.
[image attached]
Jerry Low @WebHostingJerry
@RnaudBertrand When the lies are no longer beneficial, might as well call it out.
Arnaud Bertrand @RnaudBertrand
If you'd told anyone 5 years ago that Canada, of all countries, would deliver the obituary of American hegemony and exhort others to stop "living within the lie" of the rules-based order, you'd have been laughed out of the room. Yet here we are...
Aaron Rupar @atrupar

Carney: "American hegemony in particular helped provide public goods, open sea lanes, a stable financial system, collective security ... this bargain no longer works. Let me be direct. We are in the midst of a rupture, not a transition ... recently, great powers have begun using economic integration as a weapon. Tariffs as leverage ... "

Jerry Low retweeted
Aakash Gupta @aakashgupta
Sequoia just called the end of an entire go-to-market era, and most SaaS companies won't realize what hit them for 18 months.

Product-led growth was built on one assumption: humans would try the software. The entire playbook since 2010 optimized for human discovery. Beautiful landing pages. Frictionless free trials. Viral invite loops. Slack, Dropbox, Zoom, Calendly. $200B+ in market cap created by winning the user's first 5 minutes.

None of that matters if an agent is picking the software. Claude doesn't care about your hero image. It can't be impressed by your Dribbble awards. It's reading documentation, parsing user reviews, checking API reliability, and matching features to use case. All the surface-level polish that convinced lazy humans to click "sign up" becomes irrelevant.

The new PLG funnel isn't landing page → free trial → activation → conversion. It's agent query → documentation scan → feature match → recommendation.

Which means the new moat looks completely different. You don't need the best onboarding. You need the best documentation. You don't need viral loops. You need structured data that agents can parse. You don't need a beautiful UI for the first session. You need an API that an agent can actually call.

The companies that won PLG hired designers and growth hackers. The companies that win agent-led growth will hire technical writers and developer relations engineers.

And here's the part nobody's pricing in yet: agents don't have loyalty. They don't have switching costs. They'll recommend Supabase today and something better tomorrow if the documentation is cleaner or the pricing is more transparent. The stickiness that made PLG so powerful, the network effects and learned behavior, doesn't transfer.

Sequoia is telling you the entire distribution layer is being rewritten. The question is whether your product is optimized for human attention or machine parsing. Most are built for the wrong audience.
TBPN @tbpn

Sequoia partner @sonyatweetybird says we're going from the age of product-led growth to the age of agent-led growth.

"You see this most clearly if you're using Claude Code actively. It says, 'Hey, for a database, you should use Supabase. For hosting, use Vercel.' It's choosing for you, the stuff you should be using."

"Product-led growth brought us closer to the vision of 'best product wins,' but ultimately people are still lazy. They can't read all the reviews, and they kind of default to what looks cool on the website."

"Whereas your agent has infinite time to go and make these choices for you. It can go and read all the documentation, read all the user comments, and figure out [what you need] for your use case."

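The "agent query → documentation scan → feature match → recommendation" funnel can be sketched in a few lines of Python. Everything here is hypothetical illustration: the tool names, the `docs_quality` field, and the scoring rule are made up; a real agent would parse live documentation, reviews, and API specs rather than a hard-coded catalog:

```python
def recommend_tool(requirements, catalog):
    """Toy agent-led selection: rank candidate tools by how many
    required features they cover, breaking ties by documentation
    quality rather than landing-page polish. All data is invented."""
    def score(entry):
        coverage = len(set(requirements) & set(entry["features"]))
        return (coverage, entry["docs_quality"])
    return max(catalog, key=lambda name: score(catalog[name]))

# Hypothetical catalog an agent might have assembled from docs scans.
catalog = {
    "db_one": {"features": ["sql", "auth", "realtime"], "docs_quality": 0.90},
    "db_two": {"features": ["sql"], "docs_quality": 0.99},
}

pick = recommend_tool(["sql", "realtime"], catalog)  # → "db_one"
```

The design choice worth noticing: nothing in the score rewards visual polish or onboarding flow, which is exactly the shift the post describes.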
Jerry Low retweeted
Basit | Σ: @basitWeb3
this is arguably the best article i've read in my entire lifetime. because it perfectly captures:

– who i am
– what i'm currently facing
– how i'm navigating through it

if you really want to make a difference in life, please give it a read.
DAN KOE @thedankoe

x.com/i/article/2010…

Jerry Low retweeted
Charly Wargnier @DataChaz
Wow. Anthropic just curated an impressive collection of use cases for Claude 🤯 You already get 39 deep guides and more get added weekly. It’s also free and definitely worth bookmarking. (link below)
[image attached]
Sam Bhagwat @calcsam
last month we wrote a new agents book: patterns for building ai agents. it has everything you need to take your agents from prototype to production, like agent design patterns, the basics of security, etc. reply to this tweet with BOOK and we'll DM you so you can get a copy
[image attached]
Jerry Low retweeted
Lily Ray 😏 @lilyraynyc
I think @top5seo's article is one of the most helpful and comprehensive breakdowns of how ChatGPT (most likely) works that I've ever read. Note: parts of the article are speculative, but David provides a lot of compelling evidence for why ChatGPT is most likely to work this way. Absolutely worth reading the whole thing, but the TL;DR:

⭐️ ChatGPT works a lot like a search-engine-style system: it pulls in real-world info and stitches it together, rather than actually "thinking" on its own.
⭐️ It doesn't automatically browse the web. It only looks things up when the system decides it actually needs to.
⭐️ Behind the scenes, it's not just one model doing everything:
⭐️⭐️ First, a small model looks at your question and decides whether the answer can come from training data or needs a search.
⭐️⭐️ If a search is needed, another model handles finding and filtering web results before the main model writes the response.
⭐️ Those search decisions are based on probabilities: things like "no search," "quick search," or "deep search," depending on how complex the question seems.
⭐️ When web search is used, ChatGPT starts by pulling in a small set of top search results. But ranking alone doesn't guarantee inclusion; the page still has to pass semantic relevance checks.
⭐️ When ChatGPT does search, it doesn't just grab whole pages. It pulls in candidates, scores them by meaning, and feeds only the most relevant snippets into the final answer.
⭐️ It uses tiny chunks of pages, not full articles, and it cares more about meaning than exact keywords.
⭐️ Speed and cost matter. If a page is slow or expensive to process, it might get skipped, even if the content is good.
⭐️ For anything complicated, timely, or niche, fresh web info is key. Training data alone usually isn't enough, so real-time context makes a big difference.
⭐️ If you want your content to show up in AI answers, clarity helps: clean structure, direct answers, and well-written explanations make it easier for the system to pull useful snippets.

queryburst.com/blog/how-chatg…
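The two-stage pipeline the bullets describe (a routing decision, then snippet selection) can be sketched as toy Python. This is purely illustrative: the real system reportedly uses learned models and probabilities, while the keyword heuristics and overlap scoring below are crude stand-ins I made up:

```python
def route_query(query: str) -> str:
    """Stand-in for the small routing model: pick a search mode.
    Keyword checks approximate the "does this need fresh info?"
    decision the article attributes to a learned model."""
    q = query.lower()
    if any(w in q for w in ("latest", "today", "news", "price")):
        return "deep_search"
    if len(q.split()) > 12:
        return "quick_search"
    return "no_search"

def select_snippets(query: str, pages: dict, k: int = 2) -> list:
    """Stand-in for semantic filtering: split pages into small
    chunks, score each chunk by word overlap with the query (a
    crude proxy for meaning-based scoring), keep only the top k."""
    terms = set(query.lower().split())
    scored = []
    for url, text in pages.items():
        for chunk in text.split(". "):
            overlap = len(terms & set(chunk.lower().split()))
            scored.append((overlap, f"{url}: {chunk}"))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [snippet for _, snippet in scored[:k]]
```

Note how only small chunks, not whole pages, survive to the final stage, which mirrors the article's claim that ChatGPT feeds snippets rather than full articles into the answer.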
Jerry Low retweeted
Travis Davids @MrDavids1
Here is a fun idea to try. You can create these images with giant people/animals using Nano Banana Pro. Just adjust the elements in brackets and have fun! If you have the option to generate at 4K, definitely use it, as it makes the tiny people look crisper.

These types of images seem to animate best with Veo 3.1 from testing, but you can try Kling 2.6 or Grok Imagine as well, so have some fun! Feel free to show me your giant humans/animals.

🍌 Universal Prompt:

A highly detailed, photorealistic [CAMERA ANGLE, e.g., wide-angle shot, aerial shot looking down, extreme low-angle looking up] showing a colossal [DESCRIBE THE PERSON & CLOTHING] positioned in [SPECIFIC LOCATION/CITY/LANDSCAPE]. The giant is [ACTION - INTERACTING WITH THE ENVIRONMENT, e.g., sitting on a building, stepping over a bridge]. To establish the immense scale, tiny [TINY ELEMENTS: PEOPLE, CARS, OR BOATS] are visible near their [FEET/HANDS]. The lighting is [TIME OF DAY, e.g., golden hour, midday sun].

There are more visual examples with prompts included in my 80 creative use cases document, which you can view via this link completely for free. Look for use case 84.

canva.com/design/DAG5ifq…