Kaslin Fields

13.7K posts

@kaslinfields

GKE & OSS K8s Dev Advocate at Google, co-host of @KubernetesPod, CNCF Ambassador, tech comic creator. She/Her. https://t.co/EvEW0wbHVQ

Bellevue, WA · Joined April 2014
1.5K Following · 6.1K Followers
Kaslin Fields retweeted
Abdel SGHIOUAR @boredabdel
Calling all #Kubernetes nerds and enthusiasts! 📣 Google Cloud Container Day 2026 is coming to Amsterdam on March 23, 🚲🇳🇱 during the week of #KubeCon #CloudNativeCon. Come hang out with us at Venue Collective to talk GKE, explore the future of containerization, and network with GKE engineers and PMs. RSVP now; space is very limited 👇 rsvp.withgoogle.com/events/contain… #ContainerDay #GoogleCloud #GKE #Amsterdam #TechEvent #Kubernetes
Kaslin Fields retweeted
CNCF @CloudNativeFdn
🎓 CNCF is a mentoring organization for Google Summer of Code once again! The time is now to engage with CNCF project communities, discuss ideas, and shape your proposal before applications open on March 16. Details: hubs.la/Q044Svcy0 #CNCF #GSoC #OpenSource #CloudNative
Kaslin Fields retweeted
Dmitry Lyalin @LyalinDotCom
We had a UX review for @geminicli today, and the changes coming soon are really fantastic. Our UX team is now prototyping directly in the code, often sending PRs themselves, or using PRs as specs for more complex changes. This is not the same Google; many of us are changing rapidly.
Kaslin Fields retweeted
Jack Wotherspoon @JackWoth98
Gemini 3.1 Pro is available to all paid users in Gemini CLI. Really excited for more folks to try it out! Thanks for the patience. We're chatting with folks this week to make sure we do better next model release.
Gemini CLI @geminicli

Gemini 3.1 Pro is now available for all paid tiers! The default model router, Auto (Gemini 3), will use Gemini 3.1 Pro as its pro model for complex prompts. You can also set the new model via /model to try it out. We are excited to see how you put it to use!

Kaslin Fields @kaslinfields
I went to a @WriteSpeakCode meetup years ago where I gave a talk arguing something very similar! That storytelling is your most valuable skill. Great quotes in here. There were a couple that really resonated with me around the work we do on the @K8sContributors comms team!
Richard Seroter @rseroter

What's the most important skill you can develop right now? I'd argue it's persuasive communication skills. I study this topic and practice it here, with my team, and with customers. These are the books that have impacted me the most: seroter.com/2026/02/25/the…

Kaslin Fields retweeted
Richard Seroter @rseroter
"Precise clock synchronization is the foundation of distributed systems." What undermines efforts to get nanosecond-level time sync in a data center? Clock drift, jitter, asymmetry, and more. Here's a post on Firefly, a clock sync system from Google. cloud.google.com/blog/products/…
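The clock-sync post above names path asymmetry as one of the things that undermines time sync. As an illustration only (this is not Google's Firefly, just the classic NTP-style four-timestamp estimate; all function names here are hypothetical), a minimal sketch of how asymmetric one-way delays bias a round-trip offset estimate:

```python
# Minimal sketch of NTP-style offset estimation (hypothetical names),
# showing why asymmetric network paths bias clock synchronization.

def estimate_offset(t1, t2, t3, t4):
    """Classic four-timestamp offset estimate: ((t2 - t1) + (t3 - t4)) / 2.

    t1: client send time (client clock)   t2: server receive (server clock)
    t3: server send time (server clock)   t4: client receive (client clock)
    """
    return ((t2 - t1) + (t3 - t4)) / 2

def exchange(true_offset, d_fwd, d_back):
    """Simulate one request/response with one-way delays d_fwd and d_back."""
    t1 = 0.0                        # client sends (client clock)
    t2 = t1 + d_fwd + true_offset   # server receives (server clock)
    t3 = t2                         # server replies immediately
    t4 = t1 + d_fwd + d_back        # client receives (client clock)
    return t1, t2, t3, t4

# Symmetric paths: the estimate recovers the true offset exactly.
sym = estimate_offset(*exchange(true_offset=5.0, d_fwd=2.0, d_back=2.0))
# Asymmetric paths: the estimate is biased by (d_fwd - d_back) / 2.
asym = estimate_offset(*exchange(true_offset=5.0, d_fwd=3.0, d_back=1.0))
print(sym, asym)  # 5.0 6.0 — a 1.0 bias from 2.0 of asymmetry
```

The takeaway: a round-trip measurement cannot distinguish where in the loop the delay occurred, so any forward/return asymmetry shows up directly as offset error, which is why nanosecond-level sync also has to fight drift and jitter with hardware timestamping and repeated measurement.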
Kaslin Fields retweeted
Drew Bradstock @dbradstock
The Cloud Next session library is now open! Come see @garitweets and me talk about "What's new in Kubernetes" — correction, come check it out, as a lot of exciting things have occurred this past year.
Google Cloud @googlecloud

The #GoogleCloudNext session library is now open—featuring live demos, new technologies, inspiring customer stories, and more. Save your seat today → goo.gle/4rAmIiZ

Kaslin Fields @kaslinfields
@smarterclayton AI is speeding up the pace of execution and can be used to produce a high volume of low-quality outputs. This puts more pressure on already strained OSS communities. It also makes it easier for juniors to contribute in some ways, but harder in others (like building certain skills).
Kaslin Fields @kaslinfields
Shout-out to @smarterclayton and the way he brought the early Kubernetes contributor community together! Craig says he doesn't see the same happening in AI communities today and that's sad.
Kaslin Fields @kaslinfields
I'm attending the Cloud Native Seattle Meetup tonight! Our first talk featured a demo of OpenChoreo by @sameerajayasoma! He also demo'ed it using the GCP Microservices Demo! Always excited to see my team's work in the wild 🥰
Kaslin Fields @kaslinfields
Congratulations to the members of Kubernetes Working Group Serving! The group has achieved its immediate goals and is disbanding, having influenced work across SIGs to make Kubernetes a great platform for inference workloads.
Yuan (Terry) Tang @TerryTangYuan

We'd like to announce that @kubernetesio WG Serving has succeeded and will be disbanded! Thank you to everyone who has participated and contributed to the discussions and initiatives!

The Kubernetes Working Group Serving was created to support development of the AI inference stack on Kubernetes. Its goal was to ensure that Kubernetes is the orchestration platform of choice for inference workloads. That goal has been accomplished, and we are disbanding the working group.

WG Serving formed workstreams to collect requirements from various model servers, hardware providers, and inference vendors. This work resulted in a common understanding of inference workload specifics and trends, and laid the foundation for improvements across many SIGs in Kubernetes. The working group oversaw several key evolutions in the role of load balancing and workloads: the inference gateway was adopted as a request scheduler, multiple groups have worked to standardize AI gateway functionality, and early inference gateway participants went on to seed agent networking in SIG Network. The use cases and problem statements informed the design of AIBrix. Many of the unresolved problems in distributed inference, especially benchmarking and recommended best practices, have been picked up by the @_llm_d_ project, which hybridizes the infrastructure and ML ecosystems and is better able to steer model server co-evolution.

In particular, we believe llm-d and AIBrix represent more appropriate forums for driving requirements to Kubernetes SIGs than this working group. llm-d's goal is to provide well-lit paths for achieving state-of-the-art inference, with recommendations that can compose into existing inference platforms. AIBrix provides a complete platform solution for cost-efficient LLM inference.

More details can be found in the announcement email: groups.google.com/a/kubernetes.i…

Cheers,
Yuan Tang
On behalf of the Kubernetes WG Serving Co-Chairs
