authorSanto Cariotti <santo@dcariotti.me>2024-06-02 13:10:13 +0200
committerSanto Cariotti <santo@dcariotti.me>2024-06-02 13:10:13 +0200
commit5c623ef8da6c995855b9f100bb5f8efa718da49c (patch)
tree224bb52d02785f493c581a3fa55d82f17fb8e119
initmain
-rw-r--r--conclusion.tex6
-rw-r--r--content.tex10
-rw-r--r--k8s.tex37
-rw-r--r--main.tex36
-rw-r--r--refs.tex11
-rw-r--r--serverless.tex78
-rw-r--r--static/11227_2022_4430_Fig1_HTML.jpgbin0 -> 51532 bytes
-rw-r--r--static/11227_2022_4430_Fig2_HTML.jpgbin0 -> 42472 bytes
-rw-r--r--static/11227_2022_4430_Fig5_HTML.jpgbin0 -> 347058 bytes
-rw-r--r--static/11227_2022_4430_Fig6_HTML.jpgbin0 -> 346995 bytes
-rw-r--r--static/11227_2022_4430_Fig8_HTML.jpgbin0 -> 131703 bytes
-rw-r--r--static/Kubernetes_logo_without_workmark.svg.pngbin0 -> 14851 bytes
-rw-r--r--static/Untitled-2023-09-27-1503(3).pngbin0 -> 798293 bytes
-rw-r--r--static/Untitled-2023-09-27-1503(4).pngbin0 -> 205152 bytes
-rw-r--r--tests.tex151
15 files changed, 329 insertions, 0 deletions
diff --git a/conclusion.tex b/conclusion.tex
new file mode 100644
index 0000000..53141cc
--- /dev/null
+++ b/conclusion.tex
@@ -0,0 +1,6 @@
+\begin{frame}{Conclusion}
+\begin{itemize}
+ \item Kubespray takes longer to instantiate new function instances.
+	\item K3s and MicroK8s, by stripping out unnecessary components, reduce deployment time and complexity, improving performance for the majority of serverless edge workloads. \pause \textbf{\alert{We can say that they are comparable.}}
+\end{itemize}
+\end{frame} \ No newline at end of file
diff --git a/content.tex b/content.tex
new file mode 100644
index 0000000..0b2b574
--- /dev/null
+++ b/content.tex
@@ -0,0 +1,10 @@
+\begin{frame}{Content}
+
+\begin{enumerate}
+ \item<1-> What is Kubernetes?
+ \item<2-> What is Serverless?
+ \item<3-> How can we combine both?
+	\item<4-> Which K8s distribution is more efficient for serverless development?
+\end{enumerate}
+
+\end{frame} \ No newline at end of file
diff --git a/k8s.tex b/k8s.tex
new file mode 100644
index 0000000..bd3a7bc
--- /dev/null
+++ b/k8s.tex
@@ -0,0 +1,37 @@
+% -------- Frame 1 ------
+\begin{frame}{Kubernetes}
+\begin{figure}
+ \centering
+ \includegraphics[width=0.2\linewidth]{static/Kubernetes_logo_without_workmark.svg.png}
+\end{figure}
+ Development was started by Google in 2014, but the project is now maintained by the Cloud Native Computing Foundation.
+It is the most widely used container orchestrator.
+
+\end{frame}
+
+% -------- Frame 2 ------
+\begin{frame}{Kubernetes: Architecture}
+
+\begin{figure}
+ \centering
+ \includegraphics[width=1\linewidth]{static/Untitled-2023-09-27-1503(3).png}
+\end{figure}
+
+\begin{itemize}
+ \item<2-> Something a bit less complex?
+\end{itemize}
+
+\end{frame}
+
+% -------- Frame 3 ------
+\begin{frame}{Kubernetes: Distributions}
+
+There are a lot of distributions of K8s such as:
+
+\begin{itemize}
+ \item Kubespray \uncover<2->{\alert{\textit{It uses a set of Ansible playbooks}}}
+ \item K3s \uncover<3->{\alert{\textit{Lightweight packaged as a single binary}}}
+	\item MicroK8s \uncover<4->{\alert{\textit{It works on any GNU/Linux distribution via the Snap package manager}}}
+\end{itemize}
+
+\end{frame}
diff --git a/main.tex b/main.tex
new file mode 100644
index 0000000..401f8fe
--- /dev/null
+++ b/main.tex
@@ -0,0 +1,36 @@
+\documentclass{beamer}
+\usepackage{tikz}
+\usetheme{Copenhagen}
+\usecolortheme{beaver}
+
+\setbeamertemplate{footline}
+{
+ \leavevmode%
+ \hbox{%
+ \begin{beamercolorbox}[wd=.9\paperwidth,ht=2.25ex,dp=1ex,center]{author in head/foot}%
+ \end{beamercolorbox}%
+ \begin{beamercolorbox}[wd=.1\paperwidth,ht=2.25ex,dp=1ex,center]{title in head/foot}%
+ \insertframenumber{} / \inserttotalframenumber
+ \end{beamercolorbox}}%
+ \vskip0pt%
+}
+
+\title[Kubernetes distributions for the edge: serverless performance evaluation]{Kubernetes distributions for the edge: serverless performance evaluation \footnotesize{[1]}}
+\author[]{Santo Cariotti}
+
+\date[]{University of Bologna, 2024-06-17}
+
+\begin{document}
+\frame{\titlepage}
+
+\input{content}
+\input{k8s}
+\input{serverless}
+\input{tests}
+\input{conclusion}
+
+
+\input{refs}
+
+
+\end{document} \ No newline at end of file
diff --git a/refs.tex b/refs.tex
new file mode 100644
index 0000000..9845780
--- /dev/null
+++ b/refs.tex
@@ -0,0 +1,11 @@
+\begin{frame}{References}
+
+\begin{itemize}
+ \item {[1]} Kjorveziroski, V. and Filiposka, S., 2022. Kubernetes distributions for the edge: serverless performance evaluation. The Journal of Supercomputing, 78(11), pp.13728-13755.
+ \item {[2]} Kjorveziroski, V., Bernad Canto, C., Juan Roig, P., Gilly, K., Mishev, A., Trajkovikj, V. and Filiposka, S., 2021. IoT serverless computing at the edge: Open issues and research direction. Transactions on Networks and Communications.
+	\item {[3]} Prometheus monitoring system, https://prometheus.io
+ \item {[4]} Kim, J. and Lee, K., 2019, July. Functionbench: A suite of workloads for serverless cloud function service. In 2019 IEEE 12th International Conference on Cloud Computing (CLOUD) (pp. 502-504). IEEE.
+
+\end{itemize}
+
+\end{frame} \ No newline at end of file
diff --git a/serverless.tex b/serverless.tex
new file mode 100644
index 0000000..be78b6f
--- /dev/null
+++ b/serverless.tex
@@ -0,0 +1,78 @@
+% -------------------- frame 1 -------------------
+\begin{frame}{Serverless}
+
+\begin{itemize}
+\item<1-> Serverless computing abstracts the underlying infrastructure, focusing solely on the logic that needs to be performed to solve a given task.
+
+\item<2-> A developer just writes a function in their favourite programming language and publishes it online.
+
+\item<3-> This introduces the concept of Function-as-a-Service (FaaS).
+
+\item<4-> It pairs naturally with edge computing, which places compute infrastructure closer to the data source.
+
+\item<5-> It is a new frontier for IoT computing [2].
+
+\end{itemize}
+
+\end{frame}
+
+% ----------------- frame 2 ----------------
+
+\begin{frame}{Serverless architecture}
+ \begin{figure}
+ \centering
+ \includegraphics[width=1\linewidth]{static/Untitled-2023-09-27-1503(4).png}
+ \end{figure}
+\end{frame}
+
+% ------------------ frame 3 ---------------------
+\begin{frame}{Serverless platforms}
+
+A growing market of serverless platforms has emerged, both open and closed source.
+
+\begin{itemize}
+ \item AWS Lambda
+ \item OpenWhisk
+ \item Kubeless
+ \item Knative
+ \item OpenFaaS
+\end{itemize}
+
+\end{frame}
+
+% ------------------ frame 3 ---------------------
+\begin{frame}{Serverless platforms}
+
+A growing market of serverless platforms has emerged, both open and closed source.
+
+\begin{itemize}
+ \item AWS Lambda
+ \item OpenWhisk
+ \item Kubeless
+ \item Knative
+ \item OpenFaaS \alert{\textit{we chose this one!}}
+\end{itemize}
+
+\end{frame}
+
+% ---------------- frame 4 ----------------------
+\begin{frame}{OpenFaaS}
+
+Its architecture is composed of:
+
+\begin{itemize}
+ \item API Gateway
+ \item Prometheus [3]
+ \item Watchdog
+ \item Docker Swarm or Kubernetes
+ \item Docker
+\end{itemize}
+
+It supports two different function scaling modes:
+
+\begin{itemize}
+ \item Native scaling based on internal customized metrics
+ \item Kubernetes Horizontal Pod Autoscaler (HPA)
+\end{itemize}
+
+\end{frame} \ No newline at end of file
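The of-watchdog template used in the tests below wraps a plain function handler behind the Watchdog process. A minimal sketch of what such a Python handler body looks like — the `handle` entry point name follows the OpenFaaS Python template convention, while the body itself is purely illustrative:

```python
# Minimal sketch of an OpenFaaS-style Python handler.
# The watchdog forwards the request body to `handle` and
# returns whatever the function produces as the HTTP response.

def handle(req: str) -> str:
    """Echo the request body back with a greeting."""
    name = req.strip() or "world"
    return f"Hello, {name}!"
```

In a real deployment this file lives in the function template scaffolding generated by `faas-cli`, and the gateway routes invocations to it through the watchdog.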
diff --git a/static/11227_2022_4430_Fig1_HTML.jpg b/static/11227_2022_4430_Fig1_HTML.jpg
new file mode 100644
index 0000000..cef5472
--- /dev/null
+++ b/static/11227_2022_4430_Fig1_HTML.jpg
Binary files differ
diff --git a/static/11227_2022_4430_Fig2_HTML.jpg b/static/11227_2022_4430_Fig2_HTML.jpg
new file mode 100644
index 0000000..42b45e0
--- /dev/null
+++ b/static/11227_2022_4430_Fig2_HTML.jpg
Binary files differ
diff --git a/static/11227_2022_4430_Fig5_HTML.jpg b/static/11227_2022_4430_Fig5_HTML.jpg
new file mode 100644
index 0000000..9d4ce57
--- /dev/null
+++ b/static/11227_2022_4430_Fig5_HTML.jpg
Binary files differ
diff --git a/static/11227_2022_4430_Fig6_HTML.jpg b/static/11227_2022_4430_Fig6_HTML.jpg
new file mode 100644
index 0000000..c214a63
--- /dev/null
+++ b/static/11227_2022_4430_Fig6_HTML.jpg
Binary files differ
diff --git a/static/11227_2022_4430_Fig8_HTML.jpg b/static/11227_2022_4430_Fig8_HTML.jpg
new file mode 100644
index 0000000..6cd8d73
--- /dev/null
+++ b/static/11227_2022_4430_Fig8_HTML.jpg
Binary files differ
diff --git a/static/Kubernetes_logo_without_workmark.svg.png b/static/Kubernetes_logo_without_workmark.svg.png
new file mode 100644
index 0000000..1a2f54a
--- /dev/null
+++ b/static/Kubernetes_logo_without_workmark.svg.png
Binary files differ
diff --git a/static/Untitled-2023-09-27-1503(3).png b/static/Untitled-2023-09-27-1503(3).png
new file mode 100644
index 0000000..eb915fe
--- /dev/null
+++ b/static/Untitled-2023-09-27-1503(3).png
Binary files differ
diff --git a/static/Untitled-2023-09-27-1503(4).png b/static/Untitled-2023-09-27-1503(4).png
new file mode 100644
index 0000000..4d5d48d
--- /dev/null
+++ b/static/Untitled-2023-09-27-1503(4).png
Binary files differ
diff --git a/tests.tex b/tests.tex
new file mode 100644
index 0000000..a7dd0d4
--- /dev/null
+++ b/tests.tex
@@ -0,0 +1,151 @@
+% -- frame 1---
+\begin{frame}{Set up the testing machine}
+\begin{itemize}
+ \item Ubuntu 20.04
+ \item CPU Intel Xeon X5647
+ \item 8 GB RAM
+ \item 1 Gbps
+ \item Kubernetes 1.20.7 and 1.23.0
+ \item OpenFaaS 0.21.1
+ \item 1 master node
+ \item Calico as CNI
+ \item Longhorn for persistent volumes
+	\item Docker Engine as CRI \pause \textit{\alert{dockershim was removed in Kubernetes 1.24}}
+
+\end{itemize}
+
+\end{frame}
+
+% -- frame 2--
+\begin{frame}{Choose the testing functions}
+Let's choose 14 tests from the FunctionBench serverless benchmarking suite [4].
+
+\begin{itemize}
+ \item 8 from CPU \& Memory category.
+ \item 4 from Disk I/O category.
+ \item 2 from Network performance category.
+\end{itemize}
+
+\end{frame}
+
+% --- frame 3 ---
+\begin{frame}{How do we run?}
+
+We run 5 test repetitions for each Kubernetes distribution, executed using the recommended OpenFaaS of-watchdog template for Python 3.7.
+
+\end{frame}
+
+% -- frame 4 --
+\begin{frame}{Cold start performance}
+Each function is executed 100 times to test the cold start delay. \pause
+After every execution, the number of instances for the function is scaled to 0. A new container instance needs to be created before a response is returned.
+\end{frame}
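The cold start procedure above (invoke, wait for the response, scale back to zero, repeat) can be sketched as a measurement loop. Here `invoke` and `scale_to_zero` are hypothetical stand-ins for the gateway request and the scale-down call:

```python
import time
from typing import Callable, List


def measure_cold_starts(invoke: Callable[[], None],
                        scale_to_zero: Callable[[], None],
                        runs: int = 100) -> List[float]:
    """Time each invocation end to end; scale the function to 0
    replicas between runs so every request hits a cold container."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        invoke()  # blocks until a response is returned
        latencies.append(time.perf_counter() - start)
        scale_to_zero()  # force a cold start on the next run
    return latencies
```

With `runs=100` this mirrors the setup on the slide; the mean or distribution of `latencies` is then compared across distributions.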
+
+% -- frame 5 --
+\begin{frame}{Cold start performance — Results}
+Kubespray exhibits a 15\% increase in cold start delay compared to both K3s and MicroK8s.
+
+\begin{figure}
+ \includegraphics[width=0.5\linewidth]{static/11227_2022_4430_Fig1_HTML.jpg}\hfill
+ \includegraphics[width=0.5\linewidth]{static/11227_2022_4430_Fig2_HTML.jpg}
+\end{figure}
+
+\end{frame}
+
+% -- frame 6 --
+\begin{frame}{Cold start performance — Results}
+Is the performance difference between K3s and MicroK8s statistically significant?
+\pause
+
+Using a Mann-Whitney U test with \(\alpha = 0.05\) and
+\begin{itemize}
+ \item H0: the two populations are equal
+	\item H1: the two populations are not equal
+\end{itemize}
+
+we have a \(p\)-value = 0.202, so we can't reject the null hypothesis.
+
+\end{frame}
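The check above can be reproduced with SciPy; the two samples below are synthetic stand-ins for the measured cold start latencies, not the paper's data:

```python
# Sketch of the statistical comparison of two latency samples
# with a two-sided Mann-Whitney U test (alpha = 0.05).
from scipy.stats import mannwhitneyu

k3s_latencies      = [0.91, 1.02, 0.98, 1.10, 0.95, 1.05]
microk8s_latencies = [0.97, 1.00, 1.08, 0.93, 1.01, 0.99]

stat, p_value = mannwhitneyu(k3s_latencies, microk8s_latencies,
                             alternative="two-sided")
alpha = 0.05
if p_value < alpha:
    print("reject H0: the populations differ")
else:
    print("cannot reject H0")
```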
+
+% -- frame 7 --
+\begin{frame}{Serial execution performance}
+Each function is continuously invoked for a period of 5 min using a single thread. Once a response is received, a new request is immediately sent. Auto-scaling is manually disabled.
+\end{frame}
+
+% -- frame 8 --
+\begin{frame}{Serial execution performance — Results}
+Once again, Kubespray results are slower than both K3s and MicroK8s.
+
+\begin{figure}
+ \centering
+ \includegraphics[width=0.6\linewidth]{static/11227_2022_4430_Fig5_HTML.jpg}
+\end{figure}
+\end{frame}
+
+% -- frame 9 --
+\begin{frame}{Serial execution performance — Results}
+Is the performance difference between K3s and MicroK8s statistically significant?
+\pause
+
+Using a Kruskal-Wallis test with \(\alpha = 0.05\) and
+\begin{itemize}
+ \item H0: the population medians are equal
+	\item H1: the population medians are not equal
+\end{itemize}
+
+the null hypothesis fails to be rejected only for the video-processing test. Keeping the same hypotheses, we can perform the Mann-Whitney U test where, this time, the null hypothesis is rejected in 10 of the 14 tests; it can't be rejected for 3 CPU and 1 network benchmark.
+\end{frame}
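The three-way comparison across distributions can be sketched with SciPy's Kruskal-Wallis test, again on synthetic stand-in data rather than the paper's measurements:

```python
# Sketch of a Kruskal-Wallis H-test across the three distributions
# for one benchmark function (alpha = 0.05).
from scipy.stats import kruskal

kubespray = [1.30, 1.42, 1.35, 1.38, 1.44]
k3s       = [1.01, 0.98, 1.05, 1.02, 0.99]
microk8s  = [1.00, 1.03, 0.97, 1.04, 1.01]

stat, p_value = kruskal(kubespray, k3s, microk8s)
print(f"H = {stat:.3f}, p = {p_value:.4f}")  # reject H0 when p < alpha
```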
+
+% -- frame 10 --
+\begin{frame}{Parallel execution performance using a single replica}
+Each function is invoked for a fixed amount of time using varying concurrency to determine the performance of the auto-scaling behavior. \textit{Reduced isolation.}
+\end{frame}
+
+
+% -- frame 11 --
+\begin{frame}{Parallel execution performance using a single replica — Results}
+This time Kubespray has better performance than both K3s and MicroK8s for 6 of the 14 tests.
+
+\begin{figure}
+ \centering
+ \includegraphics[width=0.5\linewidth]{static/11227_2022_4430_Fig6_HTML.jpg}
+\end{figure}
+\end{frame}
+
+% -- frame 12 --
+\begin{frame}{Parallel execution using native OpenFaaS auto scaling}
+Each function is invoked for a fixed amount of time using varying concurrency to determine the performance of the auto-scaling behavior.
+\pause
+The autoscaler checks the number of successful invocations per second over the last 10 seconds: if it is larger than 5, it scales up the number of function instances, up to a preconfigured maximum.
+\end{frame}
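The scaling rule just described can be sketched as a pure decision function. The parameter names and the one-replica step are assumptions for illustration; the real OpenFaaS path triggers scaling through Prometheus alerts rather than a direct check:

```python
def desired_replicas(current: int,
                     invocations_last_10s: int,
                     max_replicas: int,
                     threshold_rps: float = 5.0) -> int:
    """Scale up when the successful invocation rate over the last
    10 s exceeds the threshold, capped at the configured maximum."""
    rate = invocations_last_10s / 10.0
    if rate > threshold_rps and current < max_replicas:
        return current + 1
    return current
```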
+
+
+% -- frame 13 --
+\begin{frame}{Parallel execution using native OpenFaaS auto scaling — Results}
+Performance is tested with 1 request per second from 6 concurrent workers for more than 200 seconds, successfully reaching the defined threshold for the maximum number of replicas.
+The current number of deployed replicas is not taken into account, which leads to suboptimal scaling decisions: either scaling to the maximum number of configured replicas, or not scaling at all under a consistent load.
+\end{frame}
+
+
+% -- frame 14 --
+\begin{frame}{Parallel execution using Kubernetes Horizontal Pod Autoscaler}
+Each function is invoked in the same way as in the previous test, using the Kubernetes native mechanism. For this test, HPA is configured with a profile that fires whenever the float-operation function has used more than 350 CPU shares (0.35 of a core). \pause
+We collect results for three different execution strategies:
+
+\begin{itemize}
+ \item Start from 1 replica, execute 2 concurrent req/s, increasing the concurrency rate by 2 every 5 min, until 48 req/s are achieved.
+ \item Start from 1 replica, execute 40 concurrent req/s, decreasing the concurrency rate by 2 every 5 min, until 2 req/s are achieved.
+ \item Start from 1 replica and vary the number of concurrent requests every 5 min using the strategy 8, 1, 20, 4, 40, 24, 1, 4, 16, 1, 36, 32.
+\end{itemize}
+\end{frame}
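The three concurrency schedules above can be written out explicitly (values are concurrent requests per second, one entry per 5-minute step):

```python
# The three load schedules described on the slide.
ramp_up    = list(range(2, 49, 2))    # 2, 4, ..., 48 req/s
ramp_down  = list(range(40, 1, -2))   # 40, 38, ..., 2 req/s
random_mix = [8, 1, 20, 4, 40, 24, 1, 4, 16, 1, 36, 32]
```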
+
+
+% -- frame 15 --
+\begin{frame}{Parallel execution using Kubernetes Horizontal Pod Autoscaler — Results}
+Kubespray exhibits higher response times across all three tests, while the results obtained from K3s and MicroK8s are similar.
+
+\begin{figure}
+ \centering
+ \includegraphics[width=1\linewidth]{static/11227_2022_4430_Fig8_HTML.jpg}
+\end{figure}
+\end{frame}