{"id":5635,"date":"2019-03-05T10:30:30","date_gmt":"2019-03-05T09:30:30","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=5635"},"modified":"2023-06-18T18:28:53","modified_gmt":"2023-06-18T16:28:53","slug":"a-dive-into-serverless-on-the-basis-of-aws-lambda","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/03\/05\/a-dive-into-serverless-on-the-basis-of-aws-lambda\/","title":{"rendered":"A Dive into Serverless on the Basis of AWS Lambda"},"content":{"rendered":"\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1880\" height=\"1253\" data-attachment-id=\"5639\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/03\/05\/a-dive-into-serverless-on-the-basis-of-aws-lambda\/lambda\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/lambda.jpeg\" data-orig-size=\"1880,1253\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"lambda\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/lambda-1024x682.jpeg\" src=\"https:\/\/i1.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/lambda.jpeg?fit=656%2C437&amp;ssl=1\" alt=\"\" class=\"wp-image-5639\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/lambda.jpeg 1880w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/lambda-300x200.jpeg 300w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/lambda-768x512.jpeg 768w, 
https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/lambda-1024x682.jpeg 1024w\" sizes=\"auto, (max-width: 1880px) 100vw, 1880px\" \/><\/figure>\n\n\n\n<p>Hypes make it easy to overlook the fact that tech is often reinventing the wheel, forcing developers to update applications and architecture accordingly in painful migrations.<br><br> Besides Kubernetes, one of the current hypes is Serverless computing. While everyone agrees that Serverless offers some advantages, it also introduces many problems. The current trend also shows certain parallels to CGI, PHP and co.<br><br>  In most cases, however, investigations are limited to the problem of cold-boot time. This article will therefore explore Serverless functions and their behavior, especially when scaling them out, and will provide information on the effects this behavior has on other components of the architecture stack. For example, it is shown how the scale-out behavior can very quickly kill the database.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h1 class=\"wp-block-heading\">\nIntroduction<\/h1>\n\n\n\n<p>In order to really understand Serverless, one must analyze where it came from. The technology was made possible by the long-lasting transition to cloud computing. However, with the emergence of Serverless, cloud computing itself must be considered to still be at a <em>very early stage<\/em>, because Serverless reveals one thing: developments so far were many things, but not cloud-native. In fact, developers have tried to design environments they were already familiar with &#8211; or, more explicitly, they have recreated their local environments in the cloud [1, p. 3].<\/p>\n\n\n\n<p>This may be the transition phase to a cloud-native world, allowing software, design and know-how to be migrated step by step. During this phase, virtualization of the physical environment can be observed; the abstraction level, on the other hand, remains the same or changes only slightly.\n
This explains the current success of platforms such as the Google Compute Engine or Amazon EC2, where developers still deal with virtual network devices, operating systems and <strong>virtualized physical machines<\/strong>.<\/p>\n\n\n\n<p>But even since the advent of the cloud, there have been some services with different levels of abstraction &#8211; for example, the Google App Engine, released in 2008, which requires services to be divided into a stateless compute tier and a stateful storage tier [2, p. 2], which in turn comes very close to the concept of Serverless.<\/p>\n\n\n\n<p>The most significant distinction is made in terms of elasticity: whereas in Infrastructure-as-a-Service (IaaS) resource elasticity has to be managed at the <em>virtual<\/em> machine level, in Function-as-a-Service (FaaS) it is administered by the cloud provider [3, p. 159].<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"> Characteristics and Definition<\/h2>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>Serverless computing means that it is an event driven programing where the compute resource needed to run the code is managed as a scalable resource by cloud service provider instead of us managing the same. It refers to platforms that allow an organization to run a specific piece of code on-demand. It\u2018s called Serverless because the organization does not have to maintain a physical or virtual server to execute the code. [\u2026] pay for the usage and no overhead of maintain and manage the server and hence no overhead cost on idle and downtime of servers. <\/p><cite>Jambunathan et al.\u00a0[4]<\/cite><\/blockquote>\n\n\n\n<p>From this, it can be concluded that if there is no usage, no resources are provisioned and therefore no costs arise. In order to be able to remove resources during a non-usage period, it is absolutely necessary that the resources can be made available again quickly enough.\n
From the developer\u2019s point of view, the deployable artifact contains only the code and its direct dependencies, i.e.&nbsp;no operating system and no Docker image. Payment is per use, and started instances are ephemeral.<\/p>\n\n\n\n<p>It is absolutely necessary to draw a distinction from PaaS. Services such as Heroku, Google App Engine and AWS Beanstalk have existed for a long time and offer a similar level of abstraction. Roberts argues that PaaS offerings are generally not designed to scale freely depending on occurring events. For Heroku, operations would still manage individual machines (so-called dynos) and define scaling parameters on a machine-based level [5].<\/p>\n\n\n\n<p>Invoicing also takes place at the machine level and not based on actual usage. In other words, instances are held up, and resources are not raised in parallel with the current demand. Serverless and FaaS allow reacting with very short-lived instances in a fine-granular way, in parallel with rising and declining demand, whereas PaaS scaling leads to a &#8211; if plotted in a graph &#8211; <em>staircase-shaped<\/em> increase in costs\/resources.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">\nFacilitators<\/h1>\n\n\n\n<p>As a result of digitization, companies are increasingly using digital systems to handle their business processes. They expect fail-safe, resilient systems at all times and thus increase the pressure on further development. This section is dedicated to the question of which improvements have enabled Serverless in its current form, with special attention paid to the developments that occurred after 2008 &#8211; the year the Google App Engine was released.<\/p>\n\n\n\n<p>While there was only moderate progress with CPU and memory, strong progress could be detected in the storage segment, especially due to the further spread of SSDs. The situation was similar with the network.\n
Particularly within the data centers and between their replication zones, the connection has improved rapidly [6].<\/p>\n\n\n\n<p>In many cases, the improved network allows separating the storage and computing tiers from each other. While this introduces more distance, and therefore latency and reliance on the network, it also opens up the opportunity of scaling the computing and storage tiers independently of each other.<\/p>\n\n\n\n<p>Without this development, Serverless, which is designed as a stateless tier, would not be possible at all. While distributed systems in interaction with relational databases had problems with regard to replication, performance and thus throughput, NoSQL systems arose. These could easily be replicated and, as a result, provided the required data throughput &#8211; if necessary worldwide.<\/p>\n\n\n\n<p>The next two important milestones were Docker and the DevOps movement, among other things leading to faster delivery. No less important was the trend towards microservices. This is also where the first cloud-native-specific characteristic occurs: from now on, it is easier to scale horizontally than vertically. The foundation for Serverless was then established with Infrastructure-as-Code and low-latency provisioning [6].<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"> Behavior of Serverless Environments<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\"> Scenarios<\/h3>\n\n\n\n<p>This chapter is preceded by a scenario: a company\u2019s application has fully adopted Serverless. The stateless client, in the form of a Single-Page Application (SPA), communicates with a REST API consisting of functions deployed on AWS Lambda. For every endpoint there is a dedicated function, in order to reduce the size of the artifacts and therefore the cold-boot time [7]. As a result, the application consists of 250 independent stateless functions &#8211; so-called nano-services.\n
In order to further reduce the boot times,\none of the more powerful machines was selected as runtime &#8211; since it\nhas been proved that higher memory and CPU allocation also result in\nshorter boot times [7]. A PostgreSQL instance running on AWS RDS\nserves as the database.<\/p>\n\n\n\n<p>During the scenario, the application experiences a sudden increase\nin load. In the previous period, there was only a low load. Depending\non how the provider scales the functions, many effects can occur that\nin the worst case completely paralyze the system.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"> Booting &amp; execution states<\/h3>\n\n\n\n<p>\nLloyd et al.&nbsp;have further investigated the various execution\nstates. These also provide insight into how the execution context is\nstructured &#8211; especially with regard to isolation [3, p. 163].<\/p>\n\n\n\n<p><strong>Provider-cold<\/strong> &#8211; occurs if the function\u2019s code was\naltered. This requires the cloud operator to actually rebuild the\nenveloping artifact.<\/p>\n\n\n\n<p><strong>VM-cold<\/strong> &#8211; can be seen if there is no active instance of a\nruntime that has the artifact present. So the boot time is extended\nby the required transfer time for the artifact. This can occur not\nonly after a provider cold event but also in the process of\nhorizontal scaling whereby a new VM might serve as runtime.<\/p>\n\n\n\n<p><strong>Container-cold<\/strong> &#8211; the artifact has already been transferred\nto the executing machine, is cached there, and therefore the boot\ntime is limited to the actual boot time of the artifact.<\/p>\n\n\n\n<p><strong>Warm<\/strong> &#8211; the artifact is provisioned and has the capacity to\naccept more incoming traffic.<\/p>\n\n\n\n<p>In addition to the start time, the function\u2019s life-span is also\nrelevant. Here it becomes clear that instances with more system\nresources are dismantled more quickly [3, p. 
166].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">\nConcurrency \n<\/h3>\n\n\n\n<p>As research for this article, two functions were deployed on AWS Lambda. The aim of the investigation is to see how functions scale on a sudden increase in load. The first function serves as the analysis target. It reads the property btime from \/proc\/stat and returns it to an incoming HTTP request. The procedure is similar to that of Lloyd et al.&nbsp;In addition, one UUID is created for each started instance and returned along with the btime value.<\/p>\n\n\n\n<p>The second function contains simple logic that triggers <em>N<\/em> concurrent requests to the first function and returns the result as a list once all requests have returned a result.<\/p>\n\n\n\n<p>The collected data then provides information about how many instances of a function are started in parallel &#8211; via the UUID &#8211; and whether they run on different host systems &#8211; the boot time expressed via btime is used for this. The assumption is made that no two machines start at exactly the same time. In terms of configuration, the timeout was set to 15 seconds, the memory limit to 128MB, and the source code was written in JavaScript.<\/p>\n\n\n\n<p>In order to ensure that the exact effects could be precisely controlled, a static value within the code was incremented before the test. This leads to a completely new artifact being built: the already covered provider-cold start. In the test run, 200 requests were sent to the function at an interval of 5 seconds. The resulting numbers can be taken from Fig.&nbsp;1.<\/p>\n\n\n\n<p>Especially remarkable is the number of instances during the first interval.\n
Since no function has yet reached the warm state and the runtime environment cannot make any assumptions about whether a single instance can handle the traffic, AWS did the only correct thing: provide a separate instance for each request.<\/p>\n\n\n\n<p>This, of course, reveals another problem: for each request of the first interval, either a Provider-cold, VM-cold or Container-cold start adds latency to the response time.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"480\" data-attachment-id=\"5648\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/03\/05\/a-dive-into-serverless-on-the-basis-of-aws-lambda\/provider-cold\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/provider-cold.png\" data-orig-size=\"640,480\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"provider-cold\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/provider-cold.png\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/provider-cold.png\" alt=\"\" class=\"wp-image-5648\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/provider-cold.png 640w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/provider-cold-300x225.png 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><figcaption>Figure 1: 200 Requests on Provider-Cold<\/figcaption><\/figure>\n\n\n\n<p>There is a small difference between the absolute\n
number of 200 requests and\nthe 169 instances provided. This can be explained by the fact that\neven if all requests are started at the same time, they are not\nnecessarily executed immediately. Therefore, instances are already\nreused for the last 31 requests of the first interval. All instances\nare distributed among 22 VMs. It is noteworthy that in the second\ninterval AWS has already drastically reduced the number of addressed\ninstances and VMs.<\/p>\n\n\n\n<p>The execution was repeated in a 2nd batch five minutes after\ncompletion of the first test. Following the findings of Lloyd et al.,\nsufficient warm instances should still be available after this\nperiod. Fig.&nbsp;2 points out that AWS now scales the functions much\nmore cautiously and, as expected, starts with fewer instances and\nthen expands them. This corresponds to a build-up scenario to be able\nto scale quickly. This is also supported by the fact that the number\nof machines tends to increase to up to 19 &#8211; quite contrary to the\nbehavior in Fig.&nbsp;1, where after the first interval only two VM\ninstances were used. 
If further scale-out were required, having previously scaled to more machines enables AWS to accomplish more Container-cold instead of VM-cold starts and thus reduce boot time.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"480\" data-attachment-id=\"5649\" data-permalink=\"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2019\/03\/05\/a-dive-into-serverless-on-the-basis-of-aws-lambda\/warm\/\" data-orig-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/warm.png\" data-orig-size=\"640,480\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"warm\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/warm.png\" src=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/warm.png\" alt=\"\" class=\"wp-image-5649\" srcset=\"https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/warm.png 640w, https:\/\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2019\/03\/warm-300x225.png 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><figcaption>Figure 2: 200 Requests, 2nd Batch<\/figcaption><\/figure>\n\n\n\n<p>In order to better control the concurrency and the resulting problems, it is possible to limit the out-scaling per function in the AWS management console. The default value of 1,000 is quite high.\n
The mandatory fixed value also reveals the danger of an overly restrictive configuration, which endangers availability and performance.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"> Stateful Stateless Services<\/h2>\n\n\n\n<p>APIs, microservices and Serverless functions are rarely really stateless. Examples include the connection pool to a database, or any state that is stored in memory and remains there after a task such as processing a request has been completed.<\/p>\n\n\n\n<p>Especially the concept of connection pooling is interesting for saving latency. If connections to the database are kept open, future communication can take place without establishing a new connection [8]. This is especially relevant for APIs and the database, because often several single queries are necessary in order to fulfill a request.<\/p>\n\n\n\n<p>What actually changes with Serverless is the lifespan of this volatile state, as well as the number of dedicated functions forming some sort of API. For example, where previously two instances of one API server would stay online for n days and hold the resulting state, in a Serverless environment there are multiple instances of multiple functions for the same job. The state is therefore divided into smaller, more volatile increments.<\/p>\n\n\n\n<p>To come back to the example of the connection pool, here is a problem: not all components scale with the same elasticity as Serverless. Connections to the database are usually limited [9]; the provisioning of databases is usually mapped to machine instances, billed and scaled accordingly.\n
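<\/p>\n\n\n\n<p>A back-of-the-envelope sketch illustrates the mismatch, reusing the 169 instances observed in the cold-start experiment above; the pool size and the connection limit are assumed example values, not measurements:<\/p>\n\n\n\n

```javascript
// Every warm instance keeps its own pool, so the demanded connection count
// grows with the number of instances, not with the load one instance handles.
function demandedConnections(instances, poolSizePerInstance) {
  return instances * poolSizePerInstance;
}

const dbConnectionLimit = 100; // assumed limit of a small PostgreSQL instance
const demanded = demandedConnections(169, 2); // 169 instances as in Fig. 1

console.log(demanded, demanded > dbConnectionLimit); // 338 true
```

\n\n\n\n<p>Even with a pool of only two connections per instance, a single cold-start burst already exceeds the assumed database limit several times over.<\/p>\n\n\n\n<p>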
The effect of the aforementioned nano-services &#8211; more, smaller functions with more instances, resulting in more short-lived microstates &#8211; intensifies this problem.<\/p>\n\n\n\n<p>If one relates this to the results of the chapter <em>Concurrency<\/em>, it quickly becomes clear that this scaling behavior may introduce issues for parts of the application that are not Serverless and therefore do not scale freely &#8211; quickly leading to situations such as too many connections to the database.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">\nImpacts on the Ecosystem<\/h1>\n\n\n\n<p>This leads to the obvious conclusion that in order to really benefit from the advantages of Serverless, all components of a system should exhibit similar elasticity and scale accordingly. If only the elasticity of the computing tier is improved, this can lead to minor cost savings, but it does not take full advantage of Serverless. While the data tier does not necessarily have to be implemented in a Serverless manner, it must be able to scale proportionally with the upstream tiers.<\/p>\n\n\n\n<p>The influence of Serverless is great in the sense that existing systems are not designed for this type of elasticity. Only a few databases provide support for spawning additional instances during ongoing operation. This is especially hard for relational databases, which hold a lot of the required data in memory.<\/p>\n\n\n\n<p>But there are already first developments, e.g. <em>FaunaDB<\/em>, which supports running as a Serverless database [10]. With <em>Aurora<\/em>, AWS already has a product on offer that provides sufficient elasticity [11].\n
This is rather surprising, as the Aurora API is compatible with PostgreSQL and MySQL; it is therefore no wonder that there are quite a few restrictions to work with [12].<\/p>\n\n\n\n<p>If, alongside the data tier, another critical component in the architecture of an application is used as FaaS, latency problems can compound, especially during fan-out scenarios.<\/p>\n\n\n\n<p>This is by no means the only way, however. Other concepts require even greater rethinking. Going one step further, data access in Serverless functions could be removed entirely &#8211; waiting times should always be avoided in Serverless anyway. Instead, all required data could be part of the event that leads to the invocation of the function. Apache Kafka or AWS SNS offer a wonderful template for such application flows. The small size of the artifact plays an additional role here: instead of having to transport the data to the function, the artifact can be carried to the data (e.g. run on top of S3) &#8211; or to the user (edge computing).<\/p>\n\n\n\n<p>In addition to databases, there are already a number of Serverless services &#8211; especially in the portfolios of cloud providers. Examples include S3 storage &#8211; where the developer does not manage physical or virtual machines &#8211; and other services with similar abstraction, such as Amazon\u2019s Simple Queue Service. These, however, illustrate a tactic of the cloud providers: their own &#8211; proprietary &#8211; offerings are usually very easy to use because they correspond to a Serverless architecture, thereby disrupting existing open-source solutions. While on AWS queues can also be realized with Redis, developers have to define the cluster size on the basis of physical machines.\n
If the elasticity required for Serverless cannot be configured at all, over-provisioning must be used.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">\nConclusion<\/h1>\n\n\n\n<p>The biggest danger of Serverless is the resulting lock-in effect, since not only the FaaS offering itself is used but possibly also proprietary components with high elasticity. A real lock-in relationship occurs not just through the mere use of a single service, but when several are used in complex interaction to fulfill the business case. Here, free software lags a bit behind, and cloud providers understandably promote their own solutions. For example, developers can directly connect to AWS DynamoDB when creating a Lambda &#8211; another service that already scales with the required elasticity [13].<\/p>\n\n\n\n<p>Serverless also raises a number of new questions. In the now widespread microservice architecture, it has become very obvious who owns which data and which advantages or disadvantages this entails. For nano-services, these rules do not necessarily apply. Especially the question of <em>who owns the data<\/em> will surely keep programmers busy for a long time. The latencies are also &#8211; at least still &#8211; worrying. But since this problem gets the most attention, one can assume that there will be continuous improvements.<\/p>\n\n\n\n<p>Nevertheless, Serverless must be granted tremendous potential. It allows developers to focus on code again &#8211; at least after the architectural issues have been resolved. Complexity cannot be eliminated, but Serverless shifts some of it into the responsibility of the cloud providers.<\/p>\n\n\n\n<p>It remains to be seen whether the remaining problems can be ironed out in the next few years through optimization.\n
For the scaling behavior especially, many parameters have to be considered; this could be a wonderful application for prediction, possibly with machine learning.<\/p>\n\n\n\n<p>At the present time, however, no recommendation can be made to fully rely on Serverless. The lock-in effects are too risky, and the necessary changes to existing code do not justify the slight cost savings in operation. Most of the costs are likely to occur during development and not during operation &#8211; a problem that Serverless does not address at all.<\/p>\n\n\n\n<p>Nevertheless, the use of Serverless already makes sense today, especially for tasks that can be accomplished without further delays. And indeed, for parallelizable tasks like applying machine learning models or editing images, Serverless is already used extensively [14].<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">References<\/h2>\n\n\n\n<p>\n[1] E. Jonas, J. Schleier-Smith, V. Sreekanti, C.-C. Tsai, A. Khandelwal, Q. Pu, V. Shankar, J. Menezes Carreira, K. Krauth, N. Yadwadkar, J. Gonzalez, R. A. Popa, I. Stoica, and D. A. Patterson, \u201cCloud programming simplified: A Berkeley view on serverless computing,\u201d EECS Department, University of California, Berkeley, UCB\/EECS-2019-3, Feb. 2019.<\/p>\n\n\n\n<p>\n[2] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, \u201cAbove the clouds: A Berkeley view of cloud computing,\u201d EECS Department, University of California, Berkeley, UCB\/EECS-2009-28, Feb. 2009.<\/p>\n\n\n\n<p>\n[3] W. Lloyd, S. Ramesh, S. Chinthalapati, L. Ly, and S. Pallickara, \u201cServerless computing: An investigation of factors influencing microservice performance,\u201d vol. 0, pp. 159\u2013169, Apr. 2018.<\/p>\n\n\n\n<p>\n[4] B. Jambunathan and K.\n
Yoganathan, \u201cArchitecture decision on\nusing microservices or serverless functions with containers,\u201d 2018.<\/p>\n\n\n\n<p>\n[5] M. Roberts, \u201cServerless architectures.\u201d\n<a href=\"https:\/\/martinfowler.com\/articles\/serverless.html\">https:\/\/martinfowler.com\/articles\/serverless.html<\/a>,\n2018.<\/p>\n\n\n\n<p>\n[6] A. Cockcroft, \u201cEvolution of business logic from monoliths\nthrough microservices, to functions.\u201d\n<a href=\"https:\/\/read.acloud.guru\/evolution-of-business-logic-from-monoliths-through-microservices-to-functions-ff464b95a44d\">https:\/\/read.acloud.guru\/evolution-of-business-logic-from-monoliths-through-microservices-to-functions-ff464b95a44d<\/a>,\nFeb-2017.<\/p>\n\n\n\n<p>\n[7] M. Shilkov, \u201cServerless: Cold start war.\u201d\n<a href=\"https:\/\/mikhail.io\/2018\/08\/serverless-cold-start-war\/\">https:\/\/mikhail.io\/2018\/08\/serverless-cold-start-war\/<\/a>,\n2018.<\/p>\n\n\n\n<p>\n[8] \u201cSQL server connection pooling (ado.net).\u201d\n<a href=\"https:\/\/docs.microsoft.com\/en-us\/dotnet\/framework\/data\/adonet\/sql-server-connection-pooling\">https:\/\/docs.microsoft.com\/en-us\/dotnet\/framework\/data\/adonet\/sql-server-connection-pooling<\/a>,\n2017.<\/p>\n\n\n\n<p>\n[9] V. Tkachenko, \u201cMySQL challenge: 100k connection.\u201d\n<a href=\"https:\/\/www.percona.com\/blog\/2019\/02\/25\/mysql-challenge-100k-connections\/\">https:\/\/www.percona.com\/blog\/2019\/02\/25\/mysql-challenge-100k-connections\/<\/a>,\n2019.<\/p>\n\n\n\n<p>\n[10] C. Anderson, \u201cEscape the cloud database trap with serverless.\u201d\n<a href=\"https:\/\/fauna.com\/blog\/escape-the-cloud-database-trap-with-serverless\">https:\/\/fauna.com\/blog\/escape-the-cloud-database-trap-with-serverless<\/a>,\n2017.<\/p>\n\n\n\n<p>\n[11] A. 
AWS, \u201cAmazon aurora serverless.\u201d\n<a href=\"https:\/\/aws.amazon.com\/de\/rds\/aurora\/serverless\/\">https:\/\/aws.amazon.com\/de\/rds\/aurora\/serverless\/<\/a>.<\/p>\n\n\n\n<p>\n[12] J. Daly, \u201cAurora serverless: The good, the bad and the\nscalable &#8211; jeremy daly.\u201d\n<a href=\"https:\/\/www.jeremydaly.com\/aurora-serverless-the-good-the-bad-and-the-scalable\/\">https:\/\/www.jeremydaly.com\/aurora-serverless-the-good-the-bad-and-the-scalable\/<\/a>,\n2018.<\/p>\n\n\n\n<p>\n[13] N. V. Hoof, \u201cCreate a serverless application with aws lambda\nand dynamodb.\u201d\n<a href=\"https:\/\/ordina-jworks.github.io\/cloud\/2018\/10\/01\/How-to-build-a-Serverless-Application-with-AWS-Lambda-and-DynamoDB.html#dynamodb\">https:\/\/ordina-jworks.github.io\/cloud\/2018\/10\/01\/How-to-build-a-Serverless-Application-with-AWS-Lambda-and-DynamoDB.html#dynamodb<\/a>,\n2018.<\/p>\n\n\n\n<p>[14] G. Inc.,\n\u201cBuilding a serverless machine learning model.\u201d\n<a href=\"https:\/\/cloud.google.com\/solutions\/building-a-serverless-ml-model\">https:\/\/cloud.google.com\/solutions\/building-a-serverless-ml-model<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Hypes help to overlook the fact that tech is often reinventing the wheel, forcing developers to update applications and architecture accordingly in painful migrations. Besides Kubernetes one of those current hypes is Serverless computing. While everyone agrees that Serverless offers some advantages it also introduces many problems. 
The current trend also shows certain parallels to [&hellip;]<\/p>\n","protected":false},"author":909,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1,120,650],"tags":[],"ppma_author":[776],"class_list":["post-5635","post","type-post","status-publish","format-standard","hentry","category-allgemein","category-cloud-technologies","category-scalable-systems"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":3864,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/08\/07\/server-less-computing-vs-security\/","url_meta":{"origin":5635,"position":0},"title":"Server \u201cless\u201d Computing vs. Security","author":"Merve Uzun","date":"7. August 2018","format":false,"excerpt":"Summary about Serverless Computing with Security aspects.","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/Funktionsweise-300x98.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":24203,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2023\/02\/26\/die-zukunft-ist-serverless\/","url_meta":{"origin":5635,"position":1},"title":"Die Zukunft ist Serverless?","author":"Michael Partes","date":"26. February 2023","format":false,"excerpt":"\u00dcberblick Die \u201cCloud\u201d ist ein Begriff, der in den letzten Jahren immens an Bedeutung gewonnen hat. H\u00e4ufig wird sie f\u00fcr die Bereitstellung von Diensten und Services genutzt. 
Im Lauf der Zeit haben sich dabei verschiedene Architekturen entwickelt, die in der Cloud eingesetzt werden und unterschiedliche Ans\u00e4tze f\u00fcr die Handhabung des\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/lh5.googleusercontent.com\/hnARrH3Mz7d41IhTltMgTCpuUfKpg8k6ur_0Ir46moShZzCf53cVBMeUogOFgp2yD-maHIuCu3CIOsnqE_oBCOrEEaB-KfPc8lsQ5jWanA8hFVPvMdC5XYLBboHJ_lUbrtMT5aVqtMNUjTbsLQQNuoM","width":350,"height":200,"srcset":"https:\/\/lh5.googleusercontent.com\/hnARrH3Mz7d41IhTltMgTCpuUfKpg8k6ur_0Ir46moShZzCf53cVBMeUogOFgp2yD-maHIuCu3CIOsnqE_oBCOrEEaB-KfPc8lsQ5jWanA8hFVPvMdC5XYLBboHJ_lUbrtMT5aVqtMNUjTbsLQQNuoM 1x, https:\/\/lh5.googleusercontent.com\/hnARrH3Mz7d41IhTltMgTCpuUfKpg8k6ur_0Ir46moShZzCf53cVBMeUogOFgp2yD-maHIuCu3CIOsnqE_oBCOrEEaB-KfPc8lsQ5jWanA8hFVPvMdC5XYLBboHJ_lUbrtMT5aVqtMNUjTbsLQQNuoM 1.5x"},"classes":[]},{"id":4164,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/08\/31\/tweets-by-donnie-building-a-serverless-sentiment-analysis-application-with-the-twitter-streaming-api-lambda-and-kinesis\/","url_meta":{"origin":5635,"position":2},"title":"Tweets by Donnie\u200a-\u200aBuilding a serverless sentiment analysis application with the twitter streaming API,  Lambda and Kinesis","author":"dr053","date":"31. August 2018","format":false,"excerpt":"tweets-by-donnie dashboard \u00a0 Thinking of Trumps tweets it's pretty obvious that they are controversial. Trying to gain insights of how controversial his tweets really are, we created tweets-by-donnie. \u201cIt\u2019s freezing and snowing in New York\u200a\u2014\u200awe need global warming!\u201d Donald J. 
Trump You decide if it\u2019s meant as a joke or\u2026","rel":"","context":"In &quot;Cloud Technologies&quot;","block_context":{"text":"Cloud Technologies","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/cloud-technologies\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":21653,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2021\/09\/17\/studidash-a-serverless-web-application\/","url_meta":{"origin":5635,"position":3},"title":"&#8220;Studidash&#8221; | A serverless web application","author":"dk119","date":"17. September 2021","format":false,"excerpt":"by Oliver Klein (ok061), Daniel Koch (dk119), Luis B\u00fchler (lb159), Micha Huhn (mh334) Abstract You are probably familiar with the HdM SB-Funktionen. After nearly four semesters we were tired of the boring design and decided to give it a more modern look with a bit more functionality then it currently\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/09\/grafik-1.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/09\/grafik-1.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2021\/09\/grafik-1.png?resize=525%2C300&ssl=1 1.5x"},"classes":[]},{"id":2560,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/08\/17\/build-a-serverless-google-home-app\/","url_meta":{"origin":5635,"position":4},"title":"Build a Serverless Google Home App","author":"mg166@hdm-stuttgart.de","date":"17. August 2017","format":false,"excerpt":"In this blog article, I want to show you how to build your own Google Voice app. For Natural language processing, we will use API.AI. 
Our backend will run on a Google Cloud function, also called serverless functions, written in nodejs. Github You can find the whole project here on\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/archetecture-300x169.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/archetecture-300x169.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/archetecture-300x169.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2017\/08\/archetecture-300x169.png?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":4122,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/08\/27\/building-a-serverless-web-service-for-music-fingerprinting\/","url_meta":{"origin":5635,"position":5},"title":"Building a Serverless Web Service For Music Fingerprinting","author":"Alexis Luengas","date":"27. August 2018","format":false,"excerpt":"Building serverless architectures is hard. At least it was to me in my first attempt to design a loosely coupled system that should, in the long term, mean a good bye to my all-time aversion towards system maintenance. Music information retrieval is also hard. 
It is when you attempt to\u2026","rel":"","context":"In &quot;Cloud Technologies&quot;","block_context":{"text":"Cloud Technologies","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/cloud-technologies\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/08\/Architecture-Diagram-300x190.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"jetpack_sharing_enabled":true,"authors":[{"term_id":776,"user_id":909,"is_guest":0,"slug":"ck165","display_name":"Can Kattwinkel","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/fee91281984251136fef4fca33c3971708c9d5c8d099574e76a2b06ff55ae77a?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/5635","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/users\/909"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/comments?post=5635"}],"version-history":[{"count":13,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/5635\/revisions"}],"predecessor-version":[{"id":5651,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/5635\/revisions\/5651"}],"wp:attachment":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/media?parent=5635"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/categories?post=5635"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/tags?post=5635"},{
"taxonomy":"author","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/ppma_author?post=5635"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}