Exciting news: 2 grants accepted by FNRS

I’m happy to share the exciting news that my project for the Incentive Grant for Scientific Research (MIS) from our national fund was accepted for 470 k€ (one PhD, one post-doc and equipment), and that Clément Delzotti, a PhD student of our team, also got his 4-year PhD funded through an FNRS FRIA grant!

The MIS project “DHNET: The Disaggregated Host Networking paradigm to enable particle-to-particle streams” is a more fundamental project looking beyond the current design of the Internet to enable more efficient communications. The Internet is built around the idea that monolithic computers communicate: each computer gets an address and communicates as a whole. Just as physicists refined the model of the atom into particles, the Internet needs to be redesigned to look inside computers. Components, or particles, such as GPUs, CPU caches, TPUs and network interfaces need to communicate directly with each other, as computers are not monolithic devices anymore but an interconnect of many components and memory domains. Why would a transfer, say a picture generated by a cloud-gaming GPU, go through a CPU, a proxy and a load balancer when it could be sent over a direct path to the consumer’s screen? In practice, we will use the newfound programmability of networks, with programmable switches and SmartNICs, to direct the data to the right particle.

The FRIA grant obtained by Clément Delzotti builds upon data acquired at the lab showing that network I/O-intensive applications can sustain a good quality of service while achieving an 8x energy reduction, by spreading the traffic differently over multiple cores and tuning their frequency appropriately. The project aims at building an energy-aware data-center load balancer to reduce energy waste.

Therefore, expect a new PhD and a new post-doc position to open soon! The post-doc position is actually already open; just consider that it can be extended 😉

A collection of Network Systems icons in SVG

You can use mine as you wish; I tried to find the original authors and the appropriate license whenever I could. Don’t hesitate to send me your own.

NAND SSD (inspired from https://commons.wikimedia.org/wiki/File:NAND-ssd.svg, CC)
RAM Module (inspired from https://fr.m.wikibooks.org/wiki/Fichier:Ram-module.svg, CC)
CPU (based on https://commons.wikimedia.org/wiki/File:Abstract_i7_CPU_icon.svg, CC)
DPI (unsure but I think it’s my own. Anyway it’s standard)
Fast (own)

GPU (own)
IPSEC (unsure)
Load Balancer (unsure)
Monitoring, monitor, measurements (unsure)

Mellanox NIC (not SVG, Mellanox)
100G NIC (inspired from the above, consider my own I guess)

Router (unsure, but this is quite standard…)
VLAN (own)

Retina: Analyzing 100 GbE Traffic on Commodity Hardware

I’m pleased to announce that Retina has been accepted to appear at SIGCOMM at the end of the month! It is the result of a pleasant collaboration with Gerry Wan, Fengchen Gong and Zakir Durumeric from Stanford.

Retina enables high-speed network forensics by building a Rust binary tailored to a specific experiment. It provides convenient filtering capabilities to easily answer questions such as “Is the TLS SNI really random?” or “How many TLS handshakes are destined to Netflix?”. Tested at up to 160 Gbps with a commodity server on a Stanford traffic TAP, it supports 5-100x higher traffic rates than standard “bloatware” IDSes.
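To give a flavor of that filter-plus-callback model without reproducing Retina’s actual API (check the GitHub repo for the real interface), here is a purely illustrative Rust mock in which the TlsHandshake record, the filter closure and the capture loop are all made up:

```rust
/// Purely illustrative mock of a filter-plus-callback subscription;
/// this is NOT Retina's API (see the GitHub repo for the real one).
struct TlsHandshake {              // hypothetical parsed-session record
    sni: String,
    client_ip: std::net::IpAddr,
}

fn main() {
    // In Retina the filter is a declarative expression compiled into the
    // binary; here we fake it with a plain closure over the mock record.
    let filter = |h: &TlsHandshake| h.sni.ends_with("netflix.com");

    // The callback runs once per matching, fully parsed session.
    let callback = |h: &TlsHandshake| {
        println!("matching handshake from {} (SNI: {})", h.client_ip, h.sni);
    };

    // Stand-in for the live 100 GbE capture loop.
    let sessions = vec![
        TlsHandshake { sni: "www.netflix.com".into(), client_ip: "192.0.2.1".parse().unwrap() },
        TlsHandshake { sni: "example.org".into(),     client_ip: "192.0.2.2".parse().unwrap() },
    ];
    let mut count = 0usize;
    for s in &sessions {
        if filter(s) {
            callback(s);
            count += 1;
        }
    }
    println!("{count} TLS handshakes matched the filter");
}
```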

paper; github; the video will follow after SIGCOMM

New position: Assistant Professor at UCLouvain

I’m delighted to announce that I’ll start as an assistant professor on the 1st of September in the INGI department of ICTEAM, EPL faculty, at UCLouvain, right where I am currently conducting my post-doc.

I’ll continue my research on high-speed networking and programmable networks (including SmartNICs) while taking care of multiple lectures. Stay tuned for exciting news!

Packet Order Matters won the NSDI’22 community award!

Data centers increasingly deploy commodity servers with high-speed network interfaces to enable low-latency communication. However, achieving low latency at high data rates crucially depends on how the incoming traffic interacts with the system’s caches. When packets that need to be processed in the same way are consecutive, i.e., exhibit high temporal and spatial locality, caches deliver great benefits.

In this paper, we systematically study the impact of temporal and spatial traffic locality on the performance of commodity servers equipped with high-speed network interfaces. Our results show that (i) the performance of a variety of widely deployed applications degrades substantially with even the slightest lack of traffic locality, and (ii) a traffic trace from our organization reveals poor traffic locality, as networking protocols, drivers, and the underlying switching/routing fabric spread packets out in time (reducing locality).

To address these issues, we built Reframer, a software solution that deliberately delays packets and reorders them to increase traffic locality. Despite introducing μs-scale delays of some packets, we show that Reframer increases the throughput of a network service chain by up to 84% and reduces the flow completion time of a web server by 11% while improving its throughput by 20%.
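Reframer itself is described in the paper; the snippet below is only a toy Rust sketch of the core idea under simplified assumptions (a made-up flow_id key, a single fixed flush deadline, no real NIC I/O): hold packets for a few microseconds, group them per flow, and release each flow’s packets back-to-back so that downstream processing sees better locality.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Toy stand-in for a packet: only the fields the sketch needs.
struct Packet {
    flow_id: u64,      // e.g. a hash of the 5-tuple (assumption)
    payload: Vec<u8>,
}

/// Buffers packets per flow and flushes them in flow-ordered batches.
struct Reorderer {
    buffer: HashMap<u64, Vec<Packet>>,
    opened_at: Option<Instant>,
    max_delay: Duration,   // the µs-scale delay budget
}

impl Reorderer {
    fn new(max_delay: Duration) -> Self {
        Self { buffer: HashMap::new(), opened_at: None, max_delay }
    }

    /// Enqueue a packet; returns a batch to transmit once the deadline passed.
    fn push(&mut self, pkt: Packet, now: Instant) -> Option<Vec<Packet>> {
        self.opened_at.get_or_insert(now);
        self.buffer.entry(pkt.flow_id).or_default().push(pkt);
        if now.duration_since(self.opened_at.unwrap()) >= self.max_delay {
            self.opened_at = None;
            // Drain flow by flow, so same-flow packets leave consecutively.
            let batch = self.buffer.drain().flat_map(|(_, pkts)| pkts).collect();
            return Some(batch);
        }
        None
    }
}

fn main() {
    let mut r = Reorderer::new(Duration::from_micros(20));
    let start = Instant::now();
    for i in 0..8u64 {
        // Interleaved flows 0 and 1 arrive; they will leave grouped by flow.
        let pkt = Packet { flow_id: i % 2, payload: vec![0; 64] };
        if let Some(batch) = r.push(pkt, start + Duration::from_micros(3 * i)) {
            let order: Vec<u64> = batch.iter().map(|p| p.flow_id).collect();
            println!("flushing {} packets, flow order: {:?}", batch.len(), order);
        }
    }
}
```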

Links: paper; usenix

Combined stateful classification and session splicing for high-speed NFV service chaining at IEEE/ACM Transactions on Networking

A follow-up, longer version of our MiddleClick paper, addressing novel challenges that arise at 100G speeds, has been published in the IEEE/ACM Transactions on Networking journal in 2021, with hardware offloading and an improved algorithm for combining sessions.

The code has been merged back into FastClick, allowing a single, unified state management for multiple VNFs, combined automatically. On top of this session system, one can easily modify TCP or HTTP streams on the fly without full termination!
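Roughly, the idea is that each VNF declares how much per-session state it needs and receives an offset into a single per-session block, so that one classification serves the whole chain. The snippet below is my own toy Rust illustration of that layout; the names and the layout registry are invented for the example, this is not FastClick/MiddleClick code.

```rust
use std::collections::HashMap;

/// One combined per-session block; each VNF owns a sub-slice of it.
struct SessionState {
    bytes: Vec<u8>,
}

/// Registry built at configuration time: VNFs request space, get offsets.
struct StateLayout {
    offsets: Vec<(usize, usize)>, // (offset, len) per registered VNF
    total: usize,
}

impl StateLayout {
    fn new() -> Self { Self { offsets: Vec::new(), total: 0 } }

    /// A VNF asks for `len` bytes of per-session state and gets its slot id.
    fn register(&mut self, len: usize) -> usize {
        self.offsets.push((self.total, len));
        self.total += len;
        self.offsets.len() - 1
    }

    fn slice<'a>(&self, state: &'a mut SessionState, slot: usize) -> &'a mut [u8] {
        let (off, len) = self.offsets[slot];
        &mut state.bytes[off..off + len]
    }
}

fn main() {
    let mut layout = StateLayout::new();
    let nat_slot = layout.register(8);  // e.g. a NAT keeps 8 bytes per session
    let ids_slot = layout.register(4);  // e.g. an IDS keeps a 4-byte counter

    // One session table for the whole chain, keyed by a 5-tuple hash.
    let mut sessions: HashMap<u64, SessionState> = HashMap::new();
    let key = 0xdead_beef_u64; // stand-in for hash(5-tuple)
    let state = sessions.entry(key).or_insert_with(|| SessionState {
        bytes: vec![0; layout.total],
    });

    // Each VNF touches only its own slice: a single lookup, shared by all.
    layout.slice(state, nat_slot)[0] = 42;
    layout.slice(state, ids_slot)[0] += 1;
    println!("session block is {} bytes for 2 VNFs", layout.total);
}
```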

Check out the paper! The code has been merged into FastClick. The experiments are fully reproducible and described here. You can also check the ToN page.

The extended version of Cheetah: “A High-Speed Programmable Load-Balancer Framework With Guaranteed Per-Connection-Consistency” has been published in ACM/IEEE ToN

In this journal version, we extended our conference paper with additional, peer-reviewed material:

  • We implemented our system on QUIC using P4 and Picoquic. This demonstrates that our approach does not depend solely on TCP timestamps. The code in ‘bmv2’ and ‘p4-tofino’ has been made publicly available.  All of our code is available at https://github.com/cheetahlb/
  • We added an experiment using the Tofino implementation and the QUIC implementation of Cheetah for an HTTP webserver.
  • We added an experiment to verify whether today’s OSes support TCP timestamps, have them enabled by default, and correctly echo the TCP timestamp set by a server.
  • We added an experiment to verify the granularity of the TCP timestamp units used by some of the largest Alexa top 100 websites. 
  • We added a proof sketch on the size of the cookies given a number of servers. 
  • We added an implementation in bmv2 of the “TCP timestamp”-based system. We have also rewritten and published the P4-Tofino code of the system. The implementation of the stateful LB is non-trivial, as it requires the insertion/lookup/deletion operations to be applied in constant time (and more restrictions apply). We describe our implementation of a stack-based data structure for the Tofino in Section 4.3 (see the sketch after this list).
  • We added a micro-benchmark of the performance of the Cheetah LB, e.g., comparing SYN insertions (with cuckoo hashing) against the processing of normal packets.
  • We broke down the benefits of using SSE instructions for parsing TCP options.
  • We evaluated the packet processing latency overheads of realizing Cheetah on a Tofino for both the TCP timestamp and QUIC implementation.
  • We clarified the design challenges in the introduction.
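To give an intuition of the stack-based structure mentioned above, here is a toy Rust sketch of the general technique (not the P4/Tofino code, which must express the same idea with data-plane registers): free bucket indices sit on a stack, inserting a new connection pops an index that is then embedded in the cookie, lookups become a direct array access, and deleting pushes the index back, so all three operations run in constant time.

```rust
/// Toy sketch of a constant-time slot allocator: an array of per-connection
/// entries plus a stack of free indices. Not the Tofino implementation.
struct SlotTable {
    slots: Vec<Option<u32>>, // per-connection state, here just a server id
    free: Vec<u32>,          // stack of free slot indices
}

impl SlotTable {
    fn new(capacity: u32) -> Self {
        Self {
            slots: vec![None; capacity as usize],
            free: (0..capacity).rev().collect(), // all slots start free
        }
    }

    /// New connection (e.g. a SYN): pop a free slot in O(1).
    /// The returned index would be embedded in the cookie.
    fn insert(&mut self, server_id: u32) -> Option<u32> {
        let idx = self.free.pop()?;
        self.slots[idx as usize] = Some(server_id);
        Some(idx)
    }

    /// Subsequent packets carry the index back: O(1) direct lookup.
    fn lookup(&self, idx: u32) -> Option<u32> {
        self.slots[idx as usize]
    }

    /// Connection close: push the slot back on the stack in O(1).
    fn remove(&mut self, idx: u32) {
        self.slots[idx as usize] = None;
        self.free.push(idx);
    }
}

fn main() {
    let mut table = SlotTable::new(4);
    let idx = table.insert(7).expect("a slot is free"); // map conn -> server 7
    assert_eq!(table.lookup(idx), Some(7));
    table.remove(idx);
    assert_eq!(table.lookup(idx), None);
    println!("slot {idx} reused: {:?}", table.insert(3));
}
```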

Check out the paper in open access!

Our new journal extension of Metron: “High Performance NFV Service Chaining Even in the Presence of Blackboxes”

Georgios P. Katsikas, Tom Barbette, Dejan Kostić, Gerald Q. Maguire Jr., Rebecca Steinert

The NSDI version of Metron supported the integration of blackbox network functions (NFs) using ring buffers. This choice limited Metron’s applicability, as real networks might contain hardware blackboxes (also known as middleboxes) or closed-source blackbox binaries running inside virtual machines (VMs) or containers. In this extended journal version, published in ACM Transactions on Computer Systems, we put special effort into integrating these important blackbox types into Metron, while maintaining Metron’s hardware-level performance.

Metron achieves 100G for a chain of VNFs, with up to 8x better efficiency than the state of the art. Check the paper for more details.

This integration was not trivial, as it involved tedious low-level system aspects related to (i) efficiently dispatching packets without introducing unnecessary inter-core communication and (ii) allowing high-speed service chaining; these were key principles of Metron that we wanted to maintain. Moreover, we incorporated the latest functionalities of modern 100 GbE NICs, such as single root I/O virtualization (SR-IOV), which enables physical-to-virtual NIC dispatching and avoids the need for software switching. Metron instructs the physical NIC to tag packets according to the core that the controller associated with their traffic class. The tag can then be used to dispatch packets to queues just as a Metron agent does.

The original Metron system, which appeared at USENIX NSDI 2018, demonstrated an experiment on dynamic scaling at 10 Gbps. As 100 GbE deployments are becoming the new commodity, we put substantial effort into refining Metron’s scaling algorithm. Part of this algorithm uses our new method for deriving the load of a CPU core even when this core performs NIC polling (e.g., using DPDK poll-mode drivers).
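The underlying intuition (my simplified take, not the paper’s exact method): a poll-mode core always looks 100% busy to the OS, so instead one counts only the time spent on polls that actually returned packets and divides it by the total time spent in the polling loop. In the Rust sketch below, the DPDK receive call is replaced by a time-based stand-in and all numbers are invented:

```rust
use std::time::{Duration, Instant};

fn main() {
    let window = Duration::from_millis(100);
    let burst_interval = Duration::from_micros(200); // one simulated burst every 200 µs
    let service_time = Duration::from_micros(50);    // simulated processing cost per burst

    let start = Instant::now();
    let (mut busy, mut total) = (Duration::ZERO, Duration::ZERO);
    let mut next_burst = start;

    while start.elapsed() < window {
        let t0 = Instant::now();
        // Stand-in for a poll-mode receive call: a burst is "available" only
        // if enough simulated traffic has arrived since the last one.
        let rx = if t0 >= next_burst {
            next_burst += burst_interval;
            32
        } else {
            0
        };
        if rx > 0 {
            std::thread::sleep(service_time); // pretend to process the burst
        }
        let spent = t0.elapsed();
        total += spent;
        // Only non-empty polls (and their processing) count as useful work;
        // empty polls are effectively idle time, even though the core spins.
        if rx > 0 {
            busy += spent;
        }
    }

    let load = busy.as_secs_f64() / total.as_secs_f64();
    println!("estimated core load: {:.1}% (the OS would report ~100%)", 100.0 * load);
}
```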

Metron rapidly reacts to changes in the input load; see Fig. 16 for more details.

The 100 GbE testbed used in the NSDI version of Metron exhibited hardware limitations that prevented Metron from reaching line-rate performance. In this journal version, we repeated the same experiment on two additional testbeds. First, we upgraded the 100 GbE NICs of the original testbed (i.e., replacing the Mellanox ConnectX-4 NICs with newer Mellanox ConnectX-5 NICs) and managed to increase the maximum throughput to 85 Gbps (76 Gbps was the previous limit). Then, we also upgraded the servers of the testbed with new workstations based on Intel’s Skylake hardware architecture (the old servers used Intel’s Haswell architecture) and managed to achieve line-rate 100 Gbps packet processing.

The paper also presents a dozen other novelties compared to the NSDI version, so check it out!

Paper (open access)

High-speed Connection Tracking in Modern Servers

Our paper “High-speed Connection Tracking in Modern Servers” will be presented by Massimo Girondi at the IEEE HPSR 2021, the 22nd International Conference on High-Performance Switching and Routing.

We have analyzed the performance of six different hash table implementations, studying how to scale them across multiple cores and how to efficiently remove expired entries, benchmarking them with up to 100 Gbps of traffic.
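Purely to illustrate the two concerns (scaling across cores and expiring entries), here is a generic Rust sketch of a sharded connection table with lazy expiration; this is one common design, not necessarily any of the six implementations evaluated in the paper, and all names in it are invented.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// A 5-tuple stand-in; a real tracker would hash the full 5-tuple.
type FlowKey = (u32, u32, u16, u16, u8); // (src ip, dst ip, src port, dst port, proto)

struct ConnEntry {
    backend: u32,        // whatever per-connection state is tracked
    last_seen: Instant,
}

/// One shard per core avoids cross-core synchronization on the fast path,
/// assuming the NIC (e.g. via RSS) steers each flow to a fixed core.
struct ConnTable {
    shards: Vec<HashMap<FlowKey, ConnEntry>>,
    timeout: Duration,
}

impl ConnTable {
    fn new(cores: usize, timeout: Duration) -> Self {
        Self { shards: (0..cores).map(|_| HashMap::new()).collect(), timeout }
    }

    fn shard_of(&self, key: &FlowKey) -> usize {
        // Toy shard selection; RSS would already have done this in hardware.
        (key.0 ^ key.1) as usize % self.shards.len()
    }

    /// Look up (or create) the entry, lazily recycling it if it expired.
    fn get_or_insert(&mut self, key: FlowKey, now: Instant, backend: u32) -> u32 {
        let timeout = self.timeout;
        let shard = self.shard_of(&key);
        let entry = self.shards[shard]
            .entry(key)
            .and_modify(|e| {
                if now.duration_since(e.last_seen) > timeout {
                    e.backend = backend; // expired: treat as a new connection
                }
                e.last_seen = now;
            })
            .or_insert(ConnEntry { backend, last_seen: now });
        entry.backend
    }
}

fn main() {
    let mut table = ConnTable::new(4, Duration::from_secs(30));
    let key = (0xC0A80001, 0x0A000001, 40000, 443, 6);
    let b = table.get_or_insert(key, Instant::now(), 2);
    println!("flow pinned to backend {b}");
}
```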

This is joint work with Marco Chiesa and Massimo Girondi, the first author.

Read the paper here.