
Ray Chen
588 posts

Ray Chen
@raychen
🏗️ @railway to unburden the builders | [email protected]
Singapore · Joined December 2016
126 Following · 328 Followers

@Navadeep_naidu7 @Railway Jio's DNS servers are blocking us. Unfortunately this is out of our control :-/ We reached out to them and their NOC many months ago, but they have not responded.

Hey guys @Railway this issue started again on Jio's Mobile Network (the largest mobile network in India)
Services that use *.up.railway.app public URLs are straight up not reachable. These services are reachable if the DNS is changed to Cloudflare's but not on default DNS.
Navadeep@Navadeep_naidu7
I encountered the same issue with @Railway's public domains (*.up.railway.app) on the Jio mobile network. It can only be resolved if I change my DNS to Cloudflare or Google. This used to happen until about last week; now it's resolved (I guess)

@AliMasoud_dev Sorry bout that, we're working on it. If you have a service without a volume, try re-deploying it (or if you can, move regions)
status.railway.com/cmmui0c7z012ic…

@Kwilo68 @Railway Yes. This was an unexpected hardware issue unfortunately. Sorry bout that, we're working on it
status.railway.com/cmmui0c7z012ic…

@lassvestergaard @Railway Actively looking into this, sorry bout that
status.railway.com/cmmui0c7z012ic…

Lightweight DI container for TypeScript
container.lib.ray.cat/guide/getting-…
github.com/half0wl/contai…
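For anyone curious what "lightweight DI container" means in practice, here is a minimal sketch of the general pattern: a token-keyed registry of factories with lazy singleton resolution. The `register`/`resolve` names and singleton behavior are illustrative assumptions, not the actual API of the linked library.

```typescript
// Minimal DI container sketch (illustrative; not the container.lib API).
class Container {
  private factories = new Map<string, () => unknown>();
  private instances = new Map<string, unknown>();

  // Register a factory under a string token.
  register<T>(token: string, factory: () => T): void {
    this.factories.set(token, factory);
  }

  // Lazily construct on first resolve, then reuse (singleton scope).
  resolve<T>(token: string): T {
    if (!this.instances.has(token)) {
      const factory = this.factories.get(token);
      if (!factory) throw new Error(`No provider registered for "${token}"`);
      this.instances.set(token, factory());
    }
    return this.instances.get(token) as T;
  }
}

// Usage: wire a config into a greeter without `new`-ing it at call sites.
interface Greeter { greet(name: string): string; }

const container = new Container();
container.register("config", () => ({ greeting: "Hello" }));
container.register<Greeter>("greeter", () => {
  const config = container.resolve<{ greeting: string }>("config");
  return { greet: (name: string) => `${config.greeting}, ${name}!` };
});

console.log(container.resolve<Greeter>("greeter").greet("Railway"));
```

The point of the pattern is that call sites depend only on tokens, so swapping a dependency (say, a mock config in tests) is a single `register` call.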

Sorry about that. We called the incident at status.railway.com/cmlv7yk1o0169a…. Are you seeing "All systems operational"? That seems like an issue with our status page provider D:
To shed more light on this particular outage, it was related to an upstream Cloudflare DNS issue (cloudflarestatus.com/incidents/kwy3…). We're seeing full recovery right now and still working with CF to confirm if there's any further impact. We've also put in redundancy safeguards so even if one DNS provider is impacted, we can fall back quickly to an operational state.
About the recent incidents, we are very sorry about it and it's 100% not the level of service we should be providing. We are fully owning up to this, and we've shipped a bunch of mitigations the past week to prevent failures of this nature (e.g. a fully global CDN in partnership with Fastly that we're starting to roll out). The majority of networking-related incidents this week were caused by DDoS attacks on our infrastructure, and our mitigations were insufficient - we've since shored them up and are working on adding further protections.
Again, I'm really sorry you ran into these. If there's anything we can do to earn your trust back, please let us know - we are all ears. Btw, thank you for calling us out on this and please keep doing that - we want y'all to keep us honest.

@Railway Another outage? No comms? Status page says everything is up?

@orenaksakal @Railway @JustJake It's in project -> observability (top right)
docs.railway.com/observability

@Railway I don't think what you are showing on the landing page is possible with today's observability settings, am I missing something? 🤔


@erickreutz @Railway I agree. We want happy customers, not hostages. If you’re unhappy then we’re doing a poor job and we need to improve. You are fully within your rights to switch - let me know if you’d like some migration help or Railway credits/refunds to cover you during your migration!

I really hate when they start throwing out exact percentages and time down to the minute to downplay the effects of the outage. @railway does the same shit. Stupid. Own it.
Paul Copplestone - e/postgres@kiwicopple
Today we had an outage that ran for 3h42m, affecting 4.92% of our customers. All systems in us-east-2 were affected. I'm sorry to everyone affected. There is no good excuse - you trust us with your infra and we need to do better. We already have mitigations in place and we're working on a post-mortem. We will post it in the next 12 hours.

@FlyaKiet Hmm - because I'm used to tmux and don't see a need to tweak my workflow; terminals and multiplexing are a solved problem for me and I don't need any "parallel agentic" capabilities in mine. My bottleneck isn't execution speed, it's multiple rounds of planning/reviewing.

@raychen Why not superset.sh? It multiplexes and persists terminals like tmux

Spent some time making my dev environment prettier. My whole workflow lives in the terminal:
- Terminal: ghostty + tmux + zsh
- AI: Claude Code
- Editor: neovim
- Git Client: gitui
My full config is available at github.com/half0wl/dotfil…. Happy to answer any questions! :-)


$167 in two days using @openclaw with my claude code max subscription ($200/mo). at this rate i'll burn through $2,500+ in a month on a $200 plan. am i the customer or is anthropic subsidizing my bot at this point?
PS: and yes openclaw running on a proxmox lxc in my deskpi rack
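The burn-rate math in that post checks out; a quick back-of-envelope sketch (the 30-day month is an assumption):

```typescript
// Back-of-envelope check of the burn-rate claim above.
const spentUsd = 167;        // usage (at API-equivalent pricing) in two days
const days = 2;
const subscriptionUsd = 200; // Claude Code Max plan, per month

const dailyRate = spentUsd / days;                    // 83.5 USD/day
const monthlyProjection = dailyRate * 30;             // 2505 USD/month
const multiple = monthlyProjection / subscriptionUsd; // ~12.5x the plan price

console.log({ monthlyProjection, multiple });
```

So "$2,500+ in a month" is roughly 12.5x the subscription price, which is the point of the "who is subsidizing whom" question.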


