We are pleased to announce that we have joined the Ultra Ethernet Consortium (UEC)!
Come swing by our booth #586 at #SuperComputing to learn more!
bit.ly/47AYJ8G
Multiple round trips across the memory + PCI bus for data moving between GPU/SSD/network are killing us in ML. Time to change how Linux handles it, says Mina Almasry et al in this @netdev01 0x17 talk. Exciting times in Linux and the networking world!!!😹 netdevconf.info/0x17/5 #netdevconf
How does CXL disaggregation of memory and compute play with networking? Put buffer and control state placement next to offload devices + IO, then merge IO + compute expansion protocols. @SHRIJEET2 will lead the CXL and networking BoF at @netdev01 0x17. Join us! #netdevconf
Courtesy of @Nasdaq - a live shot of Times Square just this past Friday!
To all our incredible employees, partners and investors, thank you for your role in getting @enfabrica_ to where we are today.
We couldn't do it without you and this is just the beginning!
@NolanSIGBUS @Ra1t0_Bezar1us upstreaming! hah 🤣 would never dream it. I'd prefer SAI and its...progress...not be my only window into the merchant silicon world. but it'll do for now.
This Thursday, at netdev 0x16, I talk about how RDMA concepts can be used with socket based networking to achieve most of the properties needed to scale up to high flow rates (netdevconf.info/0x16/session.h…).
Last month at LPC, I talked about the scalability of the Linux networking stack and what S/W needs for a single flow to scale up as line rates increase (lpc.events/event/16/contr…).
You want high-performance application networking? Fuggedaboutit if you insist on using the BSD socket API. @dsahern and @SHRIJEET2 have been in pursuit of perf happyness, and in this @netdev01 0x16 talk they discuss how we get to the promised land. #netdevconf netdevconf.info/0x16/session.h…
@blakedot_ @majek04 @networkservice There are a lot of permutations, emphasizing how hard it is to cover all possible use cases. More tests can be added to cover your companies' needs so future changes do not cause breakage that takes years to discover.
@blakedot_ @majek04 @networkservice Test cases are key. In 2018 my test suite was not in the kernel tree, which was part of the back and forth on that patch set; it is now [1]. Some of those tests include address binding with and without VRF.
[1] git.kernel.org/pub/scm/linux/…
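Those tests now live in the kernel's networking selftests. A minimal sketch of running them, assuming a Linux kernel source tree (script names reflect the selftests as merged; exact paths can vary across kernel versions):

```shell
# From a Linux kernel source tree; needs root (creates namespaces/VRFs)
cd tools/testing/selftests/net
sudo ./fcnal-test.sh   # functional tests, incl. address binding with/without VRF
sudo ./fib_tests.sh    # FIB route tests
```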
@ValdikSS @toprankinrez @majek04 Linux VRF is implemented as policy routing. As such you must have a default route in the VRF table (your unreachable entry), and the FIB rules need to be properly ordered (local rule moved below the l3mdev rule). All of that is in the slide deck, as well as troubleshooting lookups via perf.
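A minimal sketch of that setup, assuming a VRF bound to table 10 (device and table names are illustrative). The kernel installs the l3mdev rule at preference 1000 when the first VRF is created; the local rule defaults to preference 0, so it must be re-added below 1000:

```shell
# Create a VRF device bound to routing table 10 (names are examples)
ip link add vrf0 type vrf table 10
ip link set vrf0 up

# Fail closed inside the VRF: unreachable default routes in the VRF table
ip route add unreachable default vrf vrf0
ip -6 route add unreachable default vrf vrf0

# Move the local rule below the l3mdev rule (pref 1000) so VRF lookups win
ip rule add pref 32765 table local && ip rule del pref 0
ip -6 rule add pref 32765 table local && ip -6 rule del pref 0

ip rule show   # l3mdev rule should now precede the local rule
```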
@toprankinrez @dsahern @majek04 Add an IPv6 address and a route to eth0, but not on int0/vrf.
Run:
ping -I vrf0 ya.ru
Result:
PING ya.ru(ya.ru (2a02:6b8::2:242)) from 2000::1 vrf0
It tries to ping over IPv6 using eth0's source address.
Run:
ip -6 route add unreachable default vrf vrf0
Result: no issues; ping goes over v4.
@toprankinrez @ValdikSS @majek04 That URL is for Open Source Summit North America, Sept 2017. The original Cumulus blog post does seem to have vanished after the NVIDIA acquisition.
@majek04 Me! I configured a remote site with gretap inside a VRF two days ago. It's very convenient when you need to use different links but don't need them in the main routing table and don't want to configure policy routing.
The drawback is that 'ip r' still shows the main routing table under 'ip vrf exec'.
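A sketch of that gretap-inside-VRF setup with illustrative names and RFC 5737/1918 addresses (all specifics here are examples, not the actual config):

```shell
# VRF for the remote site, bound to table 100 (names/addresses are examples)
ip link add vrf-site type vrf table 100
ip link set vrf-site up

# gretap tunnel; outer endpoints resolve via the default (main) table
ip link add gre1 type gretap local 192.0.2.1 remote 198.51.100.1
ip link set gre1 master vrf-site
ip link set gre1 up

# Remote-site routes live only in the VRF table, not the main table
ip route add 10.10.0.0/16 dev gre1 vrf vrf-site

# Run commands in the VRF context
ip vrf exec vrf-site ping -c1 10.10.0.1
ip route show vrf vrf-site   # shows the VRF table (unlike bare 'ip r')
```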