berk ozer
135 posts

berk ozer
@berkbuilds
2x founder (yc s21). former vc, invested in 150+ startups. now founder again. prev: senpAI / @orangedaoxyz new account.
sf · Joined February 2026
94 Following · 238 Followers

Wow, still coming down from the high of a16z speedrun's demo day yesterday.
12 weeks of pressure cooking. Sleepless nights. Gallons of matcha.
all leading to 90 seconds on stage in front of 1400+ people (and 12,000 more watching virtually!) to show them all what we've been building.
I'm super proud of our team. I’m still buzzing from all the feedback.
What an experience.
And we’re just getting started.

@ahsanshowrov with alfred i meant ai that just stages context where you'd already look without external disruptions, not that it acts on your behalf.
e.g., pre-meeting research lives as a private note on the calendar event, not a text in your inbox.

@berkbuilds Okay, the alfred model sounds right, but the trust problem is real. how do you get comfortable with something silently handling your calendar and email before you've seen it fail once?

ai coworker UX (jarvis) that lives in messages, slack, telegram is playing the wrong game.
it feels productive at first. but you're just adding another conversation, and that eventually exhausts people.
the real ai native coworker UX is alfred. no chat. no ping. just handles things silently where you already are. calendar. email. docs.
you interact only if you want.

@MooneyMillions it's kind of sad though. i would prefer a fire to being old and forgotten.

WE GOT INTO YCOMBINATOR LFG 🚀
art @artfreebrey
we just got accepted into YC!! we're building Revnu! founders are addicted to shipping but nobody wants to do the growth work (ads, SEO, A/B tests, etc), so we built AI agents that do it automatically. if you're in SF hit us up and we can onboard you ❤️

@ParagArora can i join with my vibecoding skills making no mistakes?

I am looking for a Senior LLM Infrastructure Engineer.
This is not a cloud wrapper job. You would be deploying and operating vLLM and KServe (or alternatives) on real bare-metal infrastructure. The work is making heterogeneous silicon behave as a coherent inference platform, at production latency and scale.
What the role actually involves: owning model serving end to end, getting batching and PagedAttention working correctly (including the Intel Habana runtime, not just CUDA), tuning auto-scaling policies for real customer workloads, and keeping inference SLAs honest.
If you have operated vLLM in production, have experience with AI hardware, and think in terms of memory pressure, throughput, and tail latency rather than just getting a model to respond, I want to talk to you.
Drop me a message or tag someone who fits.

@aestheticsguyy do you have a longer version of this? asking for a friend

actually curious to hear what other people think
if you're at an event and there are name tags, do you find them useful?
Tarlon @TarlonKhoubyari