Current project

I’m now working as a post-doc with the IP Networking Lab (INL) of Olivier Bonaventure at the ICTEAM department of UCLouvain, in Belgium.

We are working on designing, implementing and validating a new, backward-compatible network architecture centered around network “schedulers” instead of routers. The NetScheds neighbouring the users will natively exchange information with them to exploit multi-path opportunities and plan bursts of packets for maximum efficiency, exposing the service-level objectives of each request. The NetScheds neighbouring the servers will natively deliver packets to the least loaded, or warmest, servers, using the NIC as the final stage of this self-scheduling Internet, delivering tasks to the CPU just in time, as decided by the network as a whole.

Combining the INL’s expertise in network protocols, multipathing, resilience with FEC and programmability with BPF, and my background in high-speed networking, load balancing and switch programmability with P4, we aim to build a system where a single NetSched efficiently replaces a load balancer, while a network of NetScheds brings native scaling to the Internet.


I started my PhD in 2013 inside the RUN team, in the EpI project supervised by Laurent Mathy.

Our research project was about building fast software middleboxes, and more generally fast virtual network functions (VNFs). To support middlebox features such as IDS, firewalling or DPI in a datacenter or near a core router, one has to use either many general-purpose processors or fast boxes mostly based on NPUs or FPGAs, which are not easily upgradable. Our goal was to come up with a software architecture able to sustain very high speeds (~100 Gbit/s) for any kind of VNF on commodity hardware.

The first part of my work was to find a strong basis for high-speed I/O to build upon. We decided to use the Click Modular Router and extend it into a “Click Modular Middlebox” capable of flow processing. However, after some months we found that much could be improved in the use of underlying frameworks like DPDK and Netmap, of batching (both I/O and compute batching) and of multi-queue, leading to a first paper at ANCS 2015. A year later, I did an internship at Cisco Meraki, where I tried FastClick techniques on their products, uncovering new problems and leading to new discoveries.
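The compute-batching idea mentioned above can be illustrated with a minimal sketch. Instead of pushing each packet through the whole pipeline alone, each stage runs over a whole batch, so the stage's code stays hot in the instruction cache and per-stage setup costs are amortized. The `Packet` struct and function names here are illustrative, not FastClick's actual API:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical packet with only the fields this sketch needs.
struct Packet { uint32_t dst; uint8_t ttl; };

// Each stage processes the whole batch before the next stage runs,
// rather than each packet traversing all stages individually.
void decrement_ttl(std::vector<Packet>& batch) {
    for (auto& p : batch) p.ttl--;          // stage 1, applied batch-wide
}

void rewrite_dst(std::vector<Packet>& batch, uint32_t nexthop) {
    for (auto& p : batch) p.dst = nexthop;  // stage 2, applied batch-wide
}

// Process a burst: stages are applied batch-by-batch.
void process_burst(std::vector<Packet>& batch, uint32_t nexthop) {
    decrement_ttl(batch);
    rewrite_dst(batch, nexthop);
}
```

With I/O batching the NIC already delivers packets in bursts, so feeding those bursts through stage-at-a-time loops like this is a natural fit.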

Since then, we extended FastClick to unify classification, session mapping and stack services on behalf of the VNFs. This not only provides convenient services to VNF developers, it also minimizes and factorizes classification, avoiding redundant operations across VMs. The stack allows on-the-fly modification of any flow (such as HTTP or TCP flows), managing SEQs and ACKs on behalf of the user. A poster was accepted at EuroSys 2018, and a subsequent invited paper was presented at HPSR 2018. The codename of the implementation is MiddleClick.
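To give an intuition of the SEQ/ACK management: when a middlebox inserts or removes bytes in a TCP stream, every later packet in that direction must have its sequence number shifted by the accumulated delta, and packets in the return direction must have their acknowledgment shifted back. A minimal per-flow state sketch (not MiddleClick's actual implementation) could look like this:

```cpp
#include <cstdint>

// Tracks how many bytes a middlebox has added (positive) or removed
// (negative) from a TCP stream, and rewrites SEQ/ACK accordingly.
struct FlowOffset {
    int64_t delta = 0;  // cumulative byte delta introduced so far

    void record_edit(int64_t bytes) { delta += bytes; }

    // Packet flowing in the edited direction: shift its SEQ forward.
    uint32_t fix_seq(uint32_t seq) const {
        return static_cast<uint32_t>(seq + delta);  // wraps mod 2^32, as TCP does
    }

    // Return-direction packet: undo the shift on its ACK so the sender
    // sees acknowledgments consistent with the bytes it actually sent.
    uint32_t fix_ack(uint32_t ack) const {
        return static_cast<uint32_t>(ack - delta);
    }
};
```

A real implementation must also handle retransmissions and edits at multiple points in the stream, which is exactly the bookkeeping the stack performs on behalf of the user.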

To make efficient use of the infrastructure around the dataplane itself, I collaborated with people at the KTH Royal Institute of Technology to come up with Metron. Metron is a controller that offloads classification into SDN switches and uses the NICs’ capabilities to deliver packets directly to the right FastClick process, avoiding any inter-core transfer. The paper was presented at NSDI 2018.

After my PhD graduation, I joined the NSLab team at KTH in July 2018 to work on Metron’s next phase, towards a global, low-latency Internet.

I spent three years at KTH as a post-doc on the ULTRA project, an ERC Consolidator Grant awarded to Dejan Kostic.

The goal of the ULTRA project is to build Internet services with ultra-low latency. We aim to make Internet services run at the true speed of the underlying hardware, an effort that started with Metron. The services built by ULTRA will enable emerging applications such as intelligent transportation systems, the Internet of Things and e-health.

We observed that the exponential growth of both Ethernet speeds and the number of CPU cores calls for a new processing model for high-speed networking. Our approach, RSS++, answers the key question in this domain: which CPU core should get an incoming packet? RSS++ achieves very good load balancing over multiple CPU cores by exploiting opportunistic and controlled flow migration, using a new design that enables lockless and zero-copy migration of state between CPU cores. The RSS++ paper was published at CoNEXT 2019.
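The mechanism RSS++ builds on can be sketched briefly. RSS hashes each flow into one of N indirection-table buckets, and each bucket maps to a core; rebalancing means rewriting bucket-to-core entries. The naive single step below moves one bucket from the hottest to the coldest core. This is only an illustration of the knob involved; the real RSS++ solves a proper optimization problem and migrates the associated flow state without locks or copies:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// table[b] = core owning bucket b; bucket_load[b] = recent traffic of bucket b.
// One naive rebalancing step: move the lightest bucket of the most
// loaded core to the least loaded core.
void rebalance_once(std::vector<int>& table,
                    const std::vector<uint64_t>& bucket_load,
                    int num_cores) {
    std::vector<uint64_t> core_load(num_cores, 0);
    for (size_t b = 0; b < table.size(); ++b)
        core_load[table[b]] += bucket_load[b];

    int hot  = static_cast<int>(std::max_element(core_load.begin(), core_load.end())
                                - core_load.begin());
    int cold = static_cast<int>(std::min_element(core_load.begin(), core_load.end())
                                - core_load.begin());
    if (hot == cold) return;  // already balanced

    int best = -1;  // lightest bucket currently on the hot core
    for (size_t b = 0; b < table.size(); ++b)
        if (table[b] == hot && (best < 0 || bucket_load[b] < bucket_load[best]))
            best = static_cast<int>(b);
    if (best >= 0) table[best] = cold;  // migrate that bucket
}
```

Moving a bucket redirects all its flows at once, which is why controlled, consistent migration of per-flow state between cores is the hard part the paper addresses.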

After addressing intra-server load balancing, it was natural to address inter-server load balancing. We built Cheetah, published at NSDI 2020: a new load balancer that solves the challenge of remembering which connection was sent to which server without the traditional trade-off between uniform load balancing and efficiency. Cheetah is up to 5 times faster than stateful load balancers and supports advanced balancing mechanisms that reduce flow completion time by 2 to 3x without breaking connections, even while adding and removing servers.
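The core trick can be sketched in a few lines: the load balancer stamps each connection with a cookie encoding (in obfuscated form) which server it chose, and later packets carry the cookie back, so no per-connection table is needed. The XOR "obfuscation" and the key below are stand-ins for Cheetah's actual scheme, shown only to convey the stateless round trip:

```cpp
#include <cstdint>

// Hypothetical per-load-balancer secret; Cheetah's real obfuscation differs.
constexpr uint16_t kKey = 0x5aa5;

// On connection setup: encode the chosen server into the cookie that
// will ride along with the connection (e.g., in a TCP option field).
uint16_t make_cookie(uint16_t server_id) { return server_id ^ kKey; }

// On every later packet: recover the server from the cookie alone,
// with no per-connection lookup table at the load balancer.
uint16_t server_from_cookie(uint16_t cookie) { return cookie ^ kKey; }
```

Because the mapping lives in the packet rather than in load-balancer memory, servers can be added or removed without breaking existing connections.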

In our PacketMill paper presented at ASPLOS’21, we showed the limits of current kernel-bypass solutions such as DPDK and proposed a new buffering model with improved memory locality. Combined with a pipeline of source-to-source compilation and LLVM passes, throughput increases by up to 70% for memory-intensive network functions. While those improvements are generic, applied to FastClick they make it the fastest of all publicly available open-source packet-processing frameworks. The extended abstract is already available.

Many stateful high-speed applications rely on connection tracking. We therefore revisited high-speed software connection tracking on modern servers, using various hash-table implementations. On top of being a general survey, our paper also studies the impact of maintenance, i.e., deleting connections after some time, an often forgotten but very important aspect of tracking. The paper was presented at HPSR 2021.
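To see why maintenance matters: without expiry, the table grows without bound and lookups slow down, yet the sweep itself costs time proportional to the table size. A minimal sketch, with the flow key simplified to a string (real trackers key on the 5-tuple and use specialized tables):

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>

// Connection tracker with timestamp-based expiry. 'now' is an abstract
// clock (e.g., seconds); the structure is illustrative only.
class ConnTracker {
public:
    explicit ConnTracker(uint64_t timeout) : timeout_(timeout) {}

    // Record activity on a flow (insert or refresh).
    void touch(const std::string& flow, uint64_t now) { last_seen_[flow] = now; }

    bool alive(const std::string& flow, uint64_t now) const {
        auto it = last_seen_.find(flow);
        return it != last_seen_.end() && now - it->second <= timeout_;
    }

    // Maintenance pass: delete expired entries. This full sweep is the
    // cost real systems must amortize, shard, or piggyback on lookups.
    size_t expire(uint64_t now) {
        size_t removed = 0;
        for (auto it = last_seen_.begin(); it != last_seen_.end();) {
            if (now - it->second > timeout_) { it = last_seen_.erase(it); ++removed; }
            else ++it;
        }
        return removed;
    }

private:
    uint64_t timeout_;
    std::unordered_map<std::string, uint64_t> last_seen_;
};
```

How and when this sweep runs, and how it interacts with the hash-table design, is precisely the kind of trade-off the paper measures.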

We then studied how SmartNICs could help with rule offloading, for connection tracking but also other scenarios, as used in Metron and RSS++. This led to a paper presented at PAM 2021.

On top of these three conference papers, 2021 saw the publication of three extended journal papers: MiddleClick (ToN), Metron (ToCS) and Cheetah (ToN).

Finally, 2022 is already lined up with a major publication, “Packet Order Matters”, which won the Community Award at NSDI 2022. It results from my work on ULTRA and concludes three years of post-doc (but not our collaboration, as we are still working together on exciting things!). When packets that need to be processed in the same way are consecutive, i.e., exhibit high temporal and spatial locality, caches deliver great benefits. We systematically studied the impact of temporal and spatial traffic locality on the performance of commodity servers equipped with high-speed network interfaces. Our results showed that (i) the performance of a variety of widely deployed applications degrades substantially with even the slightest lack of traffic locality, and (ii) a campus traffic trace reveals poor traffic locality, as networking protocols, drivers, and the underlying switching/routing fabric spread packets out in time (reducing locality). To address these issues, we built Reframer, a software solution that deliberately delays and reorders packets to increase traffic locality. Despite introducing μs-scale delays for some packets, Reframer increases the throughput of a network service chain by up to 84% and reduces the flow completion time of a web server by 11% while improving its throughput by 20%. This project laid an important part of the groundwork for my current research on network schedulers.
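Reframer's core idea fits in a small sketch: briefly hold arriving packets, then release them grouped by flow, so downstream stages see runs of same-flow packets and their caches stay warm. The names below are illustrative; the real Reframer bounds how long each packet may be held:

```cpp
#include <cstdint>
#include <deque>
#include <map>
#include <vector>

// Minimal packet: a flow identifier plus an id to observe ordering.
struct Pkt { uint32_t flow; uint32_t id; };

class Reorderer {
public:
    // Hold an arriving packet in its flow's bucket.
    void push(const Pkt& p) { buckets_[p.flow].push_back(p); }

    // Flush: emit everything held, flow by flow, as consecutive bursts.
    // Within a flow, arrival order is preserved.
    std::vector<Pkt> flush() {
        std::vector<Pkt> out;
        for (auto& [flow, q] : buckets_) {
            (void)flow;
            for (auto& p : q) out.push_back(p);
        }
        buckets_.clear();
        return out;
    }

private:
    std::map<uint32_t, std::deque<Pkt>> buckets_;  // flow -> held packets
};
```

Interleaved input such as flows 1, 2, 1, 2 comes out as 1, 1, 2, 2: the same packets, microseconds later, but with the locality that caches and batched processing reward.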

