Serverless Computing – Research Workshop Abstracts

JUNE 8-12, 2020

ONLINE! Copenhagen, Denmark


Diminuendo! Tactics in Support of FaaS Migrations

Sebastian Werner, Jörn Kuhlenkamp, Frank Pallas, Niklas Anders, Nebi Mucaj, Olesia Tsaplina, Christian Schmidt and Kaan Yildirim

Abstract

Function-as-a-Service (FaaS) receives close attention due to highly desirable characteristics, including pay-as-you-go pricing, high elasticity, and its fully managed nature.

To leverage these benefits for existing applications, developers face the challenge of migrating legacy code to a FaaS platform (FaaSification).
Unfortunately, actionable guidance on how to do so for real-world applications does not exist.

In this paper, we report on our experience from FaaSifying a data-intensive application and evaluating different options through extensive experimentation, using approaches such as regression tests and tracing. Based on the obtained results, we present five migration tactics in support of future FaaSification.

Predictable performance for QoS-sensitive, scalable, multi-tenant Function-as-a-Service deployments
Andrzej Kuriata and Ramesh Illikkal
Abstract
 
In this paper, we present the results of our studies on enabling predictable performance for functions executing in scalable, multi-tenant Function-as-a-Service environments. We start by analyzing QoS and performance requirements and use cases from the point of view of End-Users, Developers and Infrastructure Owners. We then take a closer look at functions’ resource utilization patterns and investigate their sensitivity to those resources, focusing specifically on CPU microarchitecture resources, as these have a significant impact on overall function performance. As part of our studies, we conducted experiments to investigate the effect of co-locating different functions on the compute nodes. We discuss the results and describe how we further modified the scheduling logic of our container orchestrator (Kubernetes), and how that affected functions’ execution times and performance variation. In doing so, we specifically leveraged low-level telemetry data, mostly exposed by Intel® Resource Director Technology (Intel® RDT). Finally, we provide an overview of our future studies, which will center on node-level resource allocations to further improve function performance, and conclude with key takeaways.
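
To make the idea of contention-aware placement concrete, the sketch below shows a scheduler-extender-style prioritize endpoint (TypeScript for Node.js) that prefers nodes with low cache and memory-bandwidth contention. The telemetry values, node names, port and scoring rule are assumptions for illustration, not the scheduler modification described by the authors.

    // Hypothetical sketch: a Kubernetes scheduler-extender "prioritize" endpoint that
    // scores candidate nodes by an interference metric derived from node-level
    // telemetry (e.g. LLC occupancy / memory bandwidth of the kind Intel RDT exposes,
    // collected by some monitoring agent). All node names, values and the port are
    // fabricated for illustration.
    import * as http from "http";

    // Assumed per-node contention, normalised to [0, 1]; in practice this would be
    // fed by a telemetry pipeline rather than hard-coded.
    const nodeContention: Record<string, number> = {
      "node-a": 0.15, // lightly contended caches / memory bandwidth
      "node-b": 0.80, // heavily contended
    };

    interface HostPriority {
      Host: string;
      Score: number; // extender priorities are small integers (0-10)
    }

    const server = http.createServer((req, res) => {
      if (req.method === "POST" && req.url === "/prioritize") {
        let body = "";
        req.on("data", (chunk) => (body += chunk));
        req.on("end", () => {
          // ExtenderArgs: { Pod, Nodes | NodeNames }
          const args = JSON.parse(body);
          const names: string[] =
            args.NodeNames ?? (args.Nodes?.items ?? []).map((n: any) => n.metadata.name);
          // Prefer nodes with low contention so the function sees fewer noisy neighbours.
          const priorities: HostPriority[] = names.map((name) => ({
            Host: name,
            Score: Math.round(10 * (1 - (nodeContention[name] ?? 0.5))),
          }));
          res.setHeader("Content-Type", "application/json");
          res.end(JSON.stringify(priorities));
        });
      } else {
        res.statusCode = 404;
        res.end();
      }
    });

    server.listen(8888, () => console.log("scheduler extender listening on :8888"));

Such an extender is wired in through the scheduler's extender configuration; scheduler-framework plugins written in Go are the other common way to customise node scoring.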
On the use of Web Assembly in a Serverless Context
Sean Murphy, Leonardas Persaud, William Martini and Bill Bosshard

Abstract

This paper considers how WASM can be run in different serverless contexts. Different server-side WASM runtime options are compared, with a specific focus on wasmer, wasmtime and lucet. Next, different options for running WASM within serverless platforms are compared. Initial results show that the solution using the built-in node.js WASM support works better than using the dedicated WASM runtimes, but this approach has limitations, and more direct integration with WASM runtimes should be explored further.
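
To illustrate the built-in node.js route the abstract refers to, the sketch below shows how a function handler might load and call a WASM module through Node's native WebAssembly API instead of a dedicated runtime such as wasmer, wasmtime or lucet. The module path, exported function name and handler signature are placeholder assumptions, not the setup evaluated in the paper.

    // Minimal sketch: calling into a WASM module from a Node.js-based serverless
    // handler using the built-in WebAssembly API. "module.wasm" and its exported
    // "add" function are placeholders.
    import { readFile } from "fs/promises";

    let instancePromise: Promise<WebAssembly.Instance> | undefined;

    async function getInstance(): Promise<WebAssembly.Instance> {
      // Compile and instantiate once per container, then reuse on warm invocations.
      if (!instancePromise) {
        instancePromise = readFile("./module.wasm")
          .then((bytes) => WebAssembly.instantiate(bytes, {}))
          .then((source) => source.instance);
      }
      return instancePromise;
    }

    // Handler shape follows the common (event) => result convention of many FaaS
    // platforms; adapt it to whichever platform is actually targeted.
    export async function handler(event: { a: number; b: number }) {
      const instance = await getInstance();
      const add = instance.exports.add as (a: number, b: number) => number;
      return { result: add(event.a, event.b) };
    }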
Memory Autotuning for Cloud Functions
Josef Spillner and Daiana Boruta
Abstract
 
Application software provisioning has evolved over many years from monolithic designs towards different abstractions, including serverless applications. The promise of that abstraction is that developers are freed from infrastructural concerns such as instance activation and autoscaling. Today’s FaaS-based serverless architectures, however, still expose developers to explicit decisions about the amount of memory to allocate to each cloud function. In many cases, guesswork and ad-hoc decisions determine the numbers a developer puts into the configuration. We present a tool that measures the memory consumption of a function in various configurations over time and creates trace profiles that advanced FaaS engines can use to autotune memory dynamically.
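
As a rough illustration of the kind of measurement such a tool performs, the sketch below samples a Node.js function's resident set size during an invocation and turns the trace into a simple memory-size suggestion. The sampling interval, placeholder workload, headroom factor and 64 MB step size are assumptions for illustration and do not describe the authors' tool.

    // Illustrative sketch: build a coarse memory trace for one invocation of a
    // Node.js cloud function and derive a memory-size suggestion from it.
    async function profileMemory<T>(
      fn: () => Promise<T>,
      sampleMs = 50
    ): Promise<{ result: T; peakRssMb: number; samples: number[] }> {
      const samples: number[] = [];
      // Periodically record the resident set size (in MB) while the function runs.
      const timer = setInterval(() => {
        samples.push(process.memoryUsage().rss / (1024 * 1024));
      }, sampleMs);
      try {
        const result = await fn();
        return { result, peakRssMb: Math.max(...samples, 0), samples };
      } finally {
        clearInterval(timer);
      }
    }

    // Placeholder workload; a real profile would wrap the actual function handler.
    profileMemory(async () => {
      const data = new Array(1_000_000).fill("x");
      await new Promise((resolve) => setTimeout(resolve, 500));
      return data.length;
    }).then(({ peakRssMb }) => {
      // Round peak usage plus ~20% headroom up to an assumed 64 MB allocation step.
      const suggestedMb = Math.ceil((peakRssMb * 1.2) / 64) * 64;
      console.log(`peak RSS ~${peakRssMb.toFixed(1)} MB; suggested memory size: ${suggestedMb} MB`);
    });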