About the Robustness of Machine Learning

Glitch

In the past couple of years, research in the field of machine learning (ML) has made huge progress, resulting in applications like automated translation, practical speech recognition for smart assistants, useful robots, self-driving cars and many others. So far, however, we have only reached the point where ML works but can easily be broken. This blog post therefore concentrates on the weaknesses ML faces these days. After an overview and categorization of different flaws, we will dig a little deeper into adversarial attacks, which are arguably the most dangerous ones.

Continue reading

Large Scale Deployment for Deep Learning Models with TensorFlow Serving


Introduction

“How do you turn a trained model into a product that will bring value to your enterprise?”

In recent years, serving has become a hot topic in machine learning. With the ongoing success of deep neural networks, there is a growing demand for solutions that address the increasing complexity of inference at scale. This article will explore some of the challenges of serving machine learning models in production. After a brief overview of existing solutions, it will take a closer look at Google’s TensorFlow-Serving system and investigate its capabilities. Note: Even though they may be closely related, this article will not deal with the training aspect of machine learning, only inference.

Inference and Serving

Before diving in, it is important to differentiate between the training and inference phases, because they have completely different requirements.

  • Training is extremely compute-intensive. The goal here is to maximize the number of compute operations in a given time. Latency is not a concern.
  • Inference requires only a fraction of the computing power that training does. However, it should be fast: when you query the model, you want the answer immediately. Inference must be optimized for latency and throughput.

There are two ways to deploy a model for inference; which one to use largely depends on the use case. First, you can push the entire model to client devices and have them do inference there. Lots of ML features are already baked into our mobile devices this way. This works well for some applications, e.g. Face ID or activity detection on phones, but falls flat for many other, large-scale industrial applications. You probably won’t have latency problems, but you are limited to the client’s compute power and local information. On the other hand, you can serve the model yourself. This is suitable for industrial-scale applications such as recommender systems, fraud detection schemes, intelligent intrusion detection systems and so forth. Serving allows for much larger models, direct integration into your own systems, and the direct control and insights that come with it.

Serving Machine Learning at Scale

Of course, it’s never that easy. In most “real-world” scenarios, there isn’t really such a thing as a “finished ML model”. Consider the “Cross-industry standard process for data mining”:

Fig. 1 – Back to the basics: the six phases of CRISP-DM. Source

It might be ancient, but it describes a key concept for successful data mining/machine learning: It is a continuous process [12]. Deployment is part of this process, which means: You will replace your productive models, and you will do it a lot! This will happen for a number of reasons:

  • Data freshness: The ML model is trained on historical data. This data can go stale quickly, because new patterns constantly appear in the real world. Model performance will deteriorate, and you must replace the model with one that was trained on more recent data, before performance drops too low.
  • Model revision: With time, retraining the model might just not be enough to keep the performance up. At this point you need to revise the model architecture itself, perhaps even start from scratch.
  • Experiments: Perhaps you want to try another approach to a problem. For that reason you want to load a temporary, new model, but not discontinue your current one.
  • Rollbacks: Something went wrong, and you need to revert to a previous version.

Version control and lifecycle management aren’t exactly new ideas. Here, however, they come with a caveat: since artificial neural networks are essentially “clunky, massive databases” [5], loading and unloading them can have an impact on performance. For reference, some of the most impactful deep models of recent years are a few hundred megabytes in size (AlexNet: 240 MB, VGG-19: 574 MB, ResNet-200: 519 MB). But as model performance tends to scale with depth, model size can easily grow to multiple gigabytes. That might not be much in terms of “Big Data”, but it is still capable of causing ugly latency spikes when handled poorly. Besides ML performance metrics, the primary concerns are latency and throughput. Thus, the serving solution should be able to [4]:

  • quickly replace a loaded model with another,
  • have multiple models loaded at the same time, in the same process,
  • cope with differences in model size and computational complexity,
  • avoid latency spikes when new models are loaded into RAM,
  • if possible, be optimized for GPUs and TPUs to accelerate inference,
  • scale out inference horizontally, depending on demand.

Serving Before “Model Servers”

Until some three years ago, you basically had to build your ML serving solution yourself. A popular approach was (and still is) to use Flask or some other framework to serve requests against the model, a WSGI server to handle multiple requests at once, and a low-footprint web server like Nginx in front of it all.

Fig. 2 – An exemplary serving solution architecture. Source: [8]

However, while initially simple, these solutions are not meant to perform at “ultra large” scale on their own. They have difficulty benefiting from hardware acceleration and can become complex fast. If you needed scale, you had to create your own solution, like Facebook’s “FBLearnerPredictor” or Uber’s “Michelangelo”. Within Google, initially simple solutions would often evolve into sophisticated, complex pieces of software that scaled but couldn’t be repurposed elsewhere [1].

The Rise of Model Servers

Recent years have seen the creation of various model serving systems, or “model servers”, for general machine learning purposes. They take inspiration from the design principles of web application servers and interface through standard web APIs (REST/RPC) while hiding most of their complexity. Besides simpler deployment and customization, model servers also offer machine-learning-specific optimizations, e.g. support for Nvidia GPUs or Google TPUs. Most model servers have some degree of interoperability with other machine learning platforms, especially the more popular ones. That said, you may still restrict your options depending on your choice of platform.

Fig. 3 – Exemplary model server for an image recognition task. Source: [10]

A selection of popular model serving and inference solutions includes:

  • TensorFlow Serving (Google)
  • TensorRT (Nvidia)
  • Model Server for Apache MXNet (Amazon)
  • Clipper
  • MLflow
  • DeepDetect
  • Skymind Intelligence Layer for Deeplearning4j

TensorFlow-Serving

By far the most battle-tested model serving system out there is Google’s own TensorFlow-Serving. It is used in Google’s internal model hosting service TFS², as part of their TFX general-purpose machine learning platform [2]. It drives services from the Google Play Store’s recommender system to Google’s own fully hosted “Cloud Machine Learning Engine”. TensorFlow-Serving natively uses gRPC, but it also supports RESTful APIs. The software can be downloaded as a binary, as a Docker image or as a C++ library.

Architecture

The core of TensorFlow-Serving is made up of four elements: servables, loaders, sources and managers. The central element in TensorFlow-Serving is the servable [3]. This is where your ML model lives. Servables are the objects that TensorFlow-Serving uses for inference. For example, one servable could correspond to one version of your model. Servables can be simplistic or complicated, anything from lookup tables to multi-gigabyte deep neural networks. The lifecycles of servables are managed by loaders, which are responsible for loading servables into RAM and unloading them again. Sources originate servables, for example by watching the file system where saved models are stored. They also provide the list of the specific servable versions that should be loaded and used in production, the aspired versions. Managers are the broadest class. Their job is to handle the full life cycle of servables, i.e. loading, serving and unloading the aspired versions. They try to fulfill the requests from sources with respect to the specified version policy.

Fig. 4 – TensorFlow-Serving architecture overview. Source: [3]

When a servable is elevated to an aspired version, its source creates a loader object for it. This object only contains metadata at first, not the complete (and potentially large) servable. The manager listens for callbacks from the sources that inform it of new aspired versions. According to its version policy, the manager then executes the requested actions, such as loading the aspired version and unloading the previous one. Loading a servable can be temporarily blocked if resources are not yet available. Unloading a servable can be postponed while there are still active requests to it. Finally, clients interface with the TensorFlow-Serving core through the manager. When the REST API is used, both requests and responses are JSON objects.

Simple Serving Example

Getting started with a minimal setup is as simple as pulling the tensorflow/serving Docker image and pointing it at the saved model file [16]. Here I’m using a version of ResNet v2, a deep CNN for image recognition that has been pretrained on the ImageNet dataset. The image below is encoded in Base64 and sent to the manager as a JSON object.
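To make the request flow concrete, here is a minimal client sketch in Go. It assumes the serving container was started as in [16], exposing the model under the name resnet on the default REST port 8501, and that the tutorial’s ResNet SavedModel accepts Base64-encoded JPEG bytes; the file name shoe.jpg is made up for this example.

package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Read the image and Base64-encode it for the JSON request body.
	img, err := ioutil.ReadFile("shoe.jpg") // hypothetical file name
	if err != nil {
		panic(err)
	}
	payload, err := json.Marshal(map[string]interface{}{
		"instances": []map[string]string{
			{"b64": base64.StdEncoding.EncodeToString(img)},
		},
	})
	if err != nil {
		panic(err)
	}

	// POST the request to the REST endpoint of the serving container.
	resp, err := http.Post(
		"http://localhost:8501/v1/models/resnet:predict",
		"application/json",
		bytes.NewReader(payload),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body)) // per-class probabilities and the predicted class index
}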

Fig. 5 – Some random image to predict. Source

The prediction output of this model consists of the estimated probabilities for each of the 1000 classes in the ImageNet dataset, and the index of the most likely class.

Fig. 6 – Model output, class 771 corresponds to “running_shoe”.

Performance

Implementing and hosting a multi-model serving solution for an industrial-scale web application with millions of users, just for benchmarks, is slightly out of scope for now. However, Google provides some numbers that should give an idea of what you can expect TensorFlow-Serving to do for you.

Latency

A strong point of TensorFlow-Serving is multi-tenancy, i.e. serving multiple models in the same process concurrently. The key problem with this is avoiding cross-model interference, i.e. one model’s performance characteristics affecting those of another. This is especially challenging while models are being loaded to RAM. Google’s solution is to provide a separate thread-pool for model-loading. They report that even under heavy load, while constantly switching between models, the 99th percentile inference request latency stayed in the range from ~75 to ~150 milliseconds in their own TFX benchmarks [2].

Throughput

Google claims that the serving system on its own can handle around 100,000 requests per second per core on a 16 vCPU Intel Xeon E5 2.6 GHz machine [1]. That, however, ignores API overhead and model complexity, which may significantly impact throughput. To accelerate inference on large models with GPUs or TPUs, requests can be batched together and processed jointly. They do not disclose whether this affects request latency. Since late February 2019 (TensorFlow-Serving v1.13), TensorFlow-Serving can work directly in conjunction with TensorRT [14], Nvidia’s high-performance deep learning inference platform, which claims a 40x increase in throughput compared to CPU-only methods [15].

Usage and Adoption

In their paper on TFX (TensorFlow Extended), Google presents their own machine learning platform, which many of their services use [2]. TFX’s serving component, TFS², uses TensorFlow-Serving. As of November 2017, TensorFlow-Serving was handling tens of millions of inferences per second for over 1100 of Google’s own projects [13]. One of the first deployments of TFX is the recommender system for Google Play, which has millions of apps and over a billion active users (over two billion if you count devices). Furthermore, TensorFlow-Serving is also used by companies like IBM, SAP and Cloudera in their respective multi-purpose machine learning and database platforms [2].

Limitations

Today’s machine learning applications are very much capable of smashing all practical limits: DeepMind’s AlphaGo required 1920 CPUs and 280 GPUs running concurrently in real time, for inference, for a single “client” [6]. That example might be excessive, but the power of deep ML models does scale with their size and compute complexity. Deep learning models can become so large that they no longer fit on a single server node (Google claims that they can already serve models up to one terabyte in size in production, using a technique called model sharding [13]). Sometimes the extra compute power is worth it, sometimes you just need to squeeze that extra 0.1 percent of accuracy out of your model, but often there are diminishing returns. To wrap it up, there may be a trade-off between the power of your model and latency, throughput and runtime cost.

Conclusion

When you serve ML models, your return on investment is largely determined by two factors: how easily you can scale out inference and how fast you can adapt your model to change. Model servers like TensorFlow-Serving address the lifecycle of machine learning models without making the process disruptive in a productive environment. A good serving solution can reduce both runtime and implementation costs by a significant margin. While building a productive machine learning system at scale has to integrate a myriad of steps from data preparation to training, validation and testing, a scalable serving solution is the key to making it economically viable.

References and Further Reading

  1. Olston, C., Fiedel, N., Gorovoy, K., Harmsen, J., Lao, L., Li, F., Rajashekhar, V., Ramesh, S., and Soyke, J. (2017). TensorFlow-Serving: Flexible, high-performance ML serving. CoRR, abs/1712.06139.
  2. Baylor, D., Breck, E., Cheng, H.-T., Fiedel, N., Foo, C. Y., Haque, Z., Haykal, S., Ispir, M., Jain, V., Koc, L., Koo, C. Y., Lew, L., Mewald, C., Modi, A. N., Polyzotis, N., Ramesh, S., Roy, S., Whang, S. E., Wicke, M., Wilkiewicz, J., Zhang, X., and Zinkevich, M. (2017). Tfx: A tensorflow-based production-scale machine learning platform. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, pages 1387–1395, New York, NY, USA. ACM.
  3. TensorFlow-Serving documentation – https://www.tensorflow.org/tfx/guide/serving (accessed 11.03.2019)
  4. Serving Models in Production with TensorFlow Serving (TensorFlow Dev Summit 2017) – https://www.youtube.com/watch?v=q_IkJcPyNl0 (accessed 11.03.2019)
  5. Difference Inference vs. Training – https://blogs.nvidia.com/blog/2016/08/22/difference-deep-learning-training-inference-ai/ (accessed 11.03.2019)
  6. Challenges of ML Deployment – https://www.youtube.com/watch?v=JKxIiSfWtjI (accessed 11.03.2019)
  7. Lessons Learned from ML deployment – https://www.youtube.com/watch?v=-UYyyeYJAoQ (accessed 11.03.2019)
  8. https://hackernoon.com/a-guide-to-scaling-machine-learning-models-in-production-aa8831163846 (accessed 11.03.2019)
  9. https://medium.com/@maheshkkumar/a-guide-to-deploying-machine-deep-learning-model-s-in-production-e497fd4b734a (accessed 11.03.2019)
  10. https://medium.com/@vikati/the-rise-of-the-model-servers-9395522b6c58 (accessed 11.03.2019)
  11. https://blog.algorithmia.com/deploying-deep-learning-cloud-services/ (accessed 11.03.2019)
  12. https://the-modeling-agency.com/crisp-dm.pdf (accessed 11.03.2019)
  13. https://ai.googleblog.com/2017/11/latest-innovations-in-tensorflow-serving.html (accessed 12.03.2019)
  14. https://developer.nvidia.com/tensorrt (accessed 12.03.2019)
  15. https://medium.com/tensorflow/optimizing-tensorflow-serving-performance-with-nvidia-tensorrt-6d8a2347869a (accessed 12.03.2019)
  16. https://medium.com/tensorflow/serving-ml-quickly-with-tensorflow-serving-and-docker-7df7094aa008 (accessed 12.03.2019)
  17. https://www.slideshare.net/shunyaueta/tfx-a-tensor-flowbased-productionscale-machine-learning-platform (accessed 12.03.2019)

Improved Vulnerability Detection using Deep Representation Learning

Today’s software is more vulnerable to cyber attacks than ever before. The number of recorded vulnerabilities has increased almost constantly since the early 1990s. The strong competition on the software market, along with many innovative technologies being released every year, forces modern software companies to spend more resources on development and fewer on software quality and testing. In 2017 alone, 14,500 new vulnerabilities were recorded in the CVE (Common Vulnerabilities and Exposures) database, compared to 6,000 the previous year. This trend will continue in the years to come. [1]

Continue reading

Federated Learning

The world is enriched daily with the latest and most sophisticated achievements of Artificial Intelligence (AI). But one challenge that all new technologies need to take seriously is training time. With deep neural networks and the computing power available today, it is finally possible to perform the most complex analyses without the need for pre-processing and manual feature selection. This makes it possible to apply new models to numerous applications. The only things needed are tons of training data, a good model and a lot of patience during training.

Continue reading

About using Machine Learning to improve performance of Go programs

Gophers

This blog post contains some thoughts on using machine learning (ML) to learn the sizes that arrays, slices or maps are going to reach, in order to improve a program’s performance by allocating the necessary memory in advance instead of reallocating every time new elements are appended.

What made me write this blogpost?

Well, first of all I had to, because it is part of the Ultra Large-Scale Systems (ULS) lecture I attended last winter term. But as an introduction I’ll tell you what made me choose this topic: I started to learn Go, and coming mainly from Java, Python and JavaScript, the concept of arrays with fixed sizes and slices wrapped around them for convenience was new to me. When I understood that initializing them with the correct capacity is good for performance and memory usage, I always tried to do so. Until I came across a use case where I could not know the capacity in advance. At almost the same time we talked about “ML for Systems” in the ULS lecture, where the power of ML is used to speed up databases, load-balance Elasticsearch queries and other things. So I came up with the idea of applying ML to programming languages, in this case for learning capacities in Go. By the way, I also wanted to try out ML in Go, which is said to bring some performance advantages compared to Python and to be easier to deliver. But neither ML in Go (Go for ML) nor ML on Go is the topic of this post, though both appear in some parts.

The goal in more detail

As explained in various blog posts like here and there, arrays in Go have fixed sizes. For convenient manipulation they can be wrapped by slices. Appending to a slice that has reached its capacity therefore requires creating a new slice with a larger underlying array, copying the contents of the old slice to the new one and then replacing the old one with the new one. This is what the append function does. That this process is more time-consuming than appending to a slice with sufficient capacity can be shown with some very simple code that just appends 100 times to a test slice in a loop: once the slice is initialized with a capacity of zero and once with 100. For both cases we measure the durations and compare them. Since those durations can vary for the same kind of initialization, we run this 1000 times each and calculate the average duration to get more meaningful results. The averages are calculated by the method printSummary, which is left out here to keep things short. The whole code can be found on GitHub.
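To give an idea of what such a benchmark could look like, here is a rough, self-contained sketch; the names and the exact structure are illustrative, and the original code (including printSummary) is the one on GitHub.

package main

import (
	"fmt"
	"time"
)

// appendBenchmark appends n elements to a slice created with the given
// initial capacity and returns the elapsed time.
func appendBenchmark(n, initialCap int) time.Duration {
	start := time.Now()
	test := make([]int, 0, initialCap)
	for i := 0; i < n; i++ {
		test = append(test, i)
	}
	_ = test
	return time.Since(start)
}

func main() {
	const n, runs = 100, 1000
	var zeroCap, fullCap time.Duration

	// Repeat both variants many times and average, since single runs vary a lot.
	for i := 0; i < runs; i++ {
		zeroCap += appendBenchmark(n, 0)
		fullCap += appendBenchmark(n, n)
	}
	fmt.Printf("avg with capacity 0:   %v\n", zeroCap/runs)
	fmt.Printf("avg with capacity %d: %v\n", n, fullCap/runs)
}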

As expected, the correctly initialized version is faster, running at an average of 1714 ns compared to 2409 ns for the other one. Of course those durations are still just samples and vary when the code runs multiple times. But in over 20 runs of each variant, only once was an average of the badly initialized slice lower than some of the averages of the well-initialized one.

If we also take a look at the capacity the slower version ends up with, we see that it is 128 instead of the required 100. This is because append doubles the capacity of a small slice whenever its limit is reached.
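A quick way to observe this yourself is to print the length and capacity after the loop; this tiny snippet is just for illustration and not part of the original benchmark.

package main

import "fmt"

func main() {
	test := make([]int, 0) // start with capacity zero
	for i := 0; i < 100; i++ {
		test = append(test, i)
	}
	// The capacity doubles on every reallocation: 1, 2, 4, ..., 64, 128.
	fmt.Println(len(test), cap(test)) // prints: 100 128
}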

So we can see that it is worth setting the capacity correctly in advance, for both performance and resource-consumption reasons. But this is not always as easy as in the example we just saw, and sometimes it is not even possible to know in advance the length a slice will grow to. In those cases it might make sense to let the program learn the required capacities. This could help at initialization with make as well as when growing with append.

A basic example

Setup

To check feasibility, I created a basic example that is a bit more complex than the first one but whose required capacity can still be calculated. It iterates over index j and value s of a slice of random integer samples, and for each of them the test slice is created. Then we append three values s times and one value j times. So the final length (and required capacity) of test can be calculated as s*3+j.

Training data is gathered in this loop as well: one sample consists of s and j as input and len(test) as the label. Since the main goal of this scenario is to check whether it is worth using a trained ML model to predict the required capacity, this data is always collected, to create equal conditions for every test case. Ways to avoid the time-expensive training and data collection at runtime are discussed later. A sketch of the loop is shown below.
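The following sketch shows how such a loop could look; the names (inputs, trainingData, sample) are made up for this post, and the original implementation is the one on GitHub.

package main

import "math/rand"

// sample pairs the inputs (s, j) with the observed final length as the label.
type sample struct {
	s, j, label int
}

func main() {
	// Random integer inputs, as described above.
	inputs := make([]int, 1000)
	for i := range inputs {
		inputs[i] = rand.Intn(100)
	}

	var trainingData []sample

	for j, s := range inputs {
		test := make([]int, 0) // the test case under measurement decides this capacity
		for i := 0; i < s; i++ {
			test = append(test, 1, 2, 3) // append three values, s times
		}
		for i := 0; i < j; i++ {
			test = append(test, 4) // append one value, j times
		}
		// The final length is s*3+j; store (s, j) -> len(test) as one training sample.
		trainingData = append(trainingData, sample{s: s, j: j, label: len(test)})
	}

	_ = trainingData // would then be fed to the go-deep MLP described below
}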

For the ML part I chose go-deep. I picked it from this list because it looked well documented, easy to use and sufficient for my needs, though not perfect.

I used the collected training data to train an MLP (multi-layer perceptron) with two hidden layers containing two and five neurons. Of course I configured RegressionMode to use Identity as the activation function in the output layer and MSE (mean squared error) as the loss function. I also played around with some other hyperparameters but kept a lot from the provided examples as well, because the MSE already decreased very fast and became 0.0000 after three training iterations. This is not surprising, since the function to learn is very simple. There is also no need to avoid overfitting in this basic example; I kept the corresponding hyperparameters at low values anyway. In a real-world use case one would probably try to keep the model as small as possible to get the quickest responses.

Results

The following table shows the test cases I compared, along with the average durations in nanoseconds calculated over 1000 tries each. Since those averages again vary from run to run, the table contains three separate runs.

Test case | Avg ns run 1 | Avg ns run 2 | Avg ns run 3
Initialize capacity with zero | 12,790,501 | 14,267,925 | 14,321,735
Use s*3+j directly in make | 5,679,595 | 6,067,968 | 5,943,731
Use a function to calculate s*3+j | 5,242,182 | 6,012,920 | 5,515,661
Use the prediction of the learned model | 10,898,437 | 6,361,911 | 9,056,003
The model’s prediction +1 | 6,069,776 | 5,714,348 | 6,144,386
The model’s prediction on new random data | 10,165,764 | 6,096,929 | 9,296,384

Even though the durations vary, the results show that not initializing the capacity is the worst option. It is usually best to calculate the capacity if possible; whether the calculation happens in a function or inline does not really matter. When I took a closer look at the model’s predictions, I saw that they are quite often exactly one less than the actual capacity. This is why I also added the prediction+1 test case, which is almost as good as the direct calculations. So it is worth investigating a bit deeper into what the model predicts; maybe some fine-tuning of the hyperparameters could also fix the problem instead of adding 1 manually. The results also show that the learned model works on completely new random data as well as on the partly known data from training.

Conclusion

Of course, creating such a model for a small performance optimization is heavily over-engineered and thus not worth it. It could be worth it in cases where you know you have a bottleneck at this point (because your profiler told you) and you cannot calculate the required capacity in advance in any other way. In the introduction I already mentioned that I had such a use case: the length of the slice depends on a sql.Rows object, which doesn’t tell you how many rows it contains in advance. Other examples might be conditional appends, where you cannot know how many elements fulfill the condition to be appended to a slice. But even in those cases the required capacity might depend on something else, for example the current time, the size of the HTTP request that caused this action, or the length the slice reached last time. In those cases, using an ML model might help to avoid a performance bottleneck. With dependencies on previous lengths, RNNs (recurrent neural networks) in particular might be helpful. At least they could probably give a better guess than the developer himself.

Looking ahead

As stated above, in examples like this the engineering effort is too high, so ways of automating it would be desirable. First I thought about a one-size-fits-all solution, meaning one pretrained model that predicts the required capacity for various calls to make. But it would be difficult to find good features, because they could change from one make to the next, and just using all sorts of possible features would create very sparse matrices and require larger models, if it could work at all.

So we should stick to use-case-specific models that can be smaller and use meaningful features depending on their environment, such as the lengths of arrays, slices, maps or strings “close” to them, or the values of specific booleans or integers. The drawback is that individual models need individual training, possibly with production-like data. Training at runtime would cause an overhead that might destroy the benefit the model could bring and slow the program down, at least for a while, until training can be stopped or paused because the ML model’s performance is good enough. So if possible, pure online learning should be avoided, and training on test stages or at times of low traffic should be preferred. If the length of a slice depends on the current traffic, this is of course not possible. Then one should at least dump the model’s weights to the logs from time to time, to be able to reuse them when starting a new node.

Still, we need to solve the over-engineering issue and try to build a model automatically at compile time if the developer asks for it, for example via an additional argument in the call to make. I think this might be another use case for ML on code: finding good features and parameters to build an ML model by inspecting the code. Unfortunately, I’m not sure what such an ML model on code could look like and what it would take to train it. Using a GAN (generative adversarial network) to generate the models would probably require already existing ones to train the discriminator. If the automation could be realized, the use case could also become broader, because then calculating the capacity would be more effort than just saying “learn it”.

Some final thoughts

Using ML would not magically boost performance. It would require developers to remeasure and double-check whether it is worth using. For example, it is not only important how often the program needs to allocate memory, but also where: stack allocation is cheap and heap allocation is expensive, as explained in this blog post. If using ML to predict the capacity requires the program to allocate on the heap, it might be slower even when the predictions are correct. In the test scenario above, all cases except initializing with zero escaped to the heap. There it was worth it, but it needs to be measured. So the performance should be compared with and without learning, for short-running and for longer-running applications. As another example, sometimes the required capacities might simply not be learnable, because they are almost random or depend on things that cannot be used as features in an efficient way.

Another drawback of using ML is that your code behaves less predictably. You won’t know in advance what capacity will be estimated for a slice, and it will be much harder to figure out afterwards why the program estimated exactly what it did.

I also thought about training the model to minimize a mix of runtime and required memory instead of using the final length as the label. But then it is not as easy to obtain the training data anymore. In some cases, however, it might also be difficult to get the “final” length of a slice at all.

The last thing to remember is that it is always helpful to give a learned model some bounds, in this case a minimum and a maximum. My test model, for example, predicted a negative capacity before I got the hyperparameters right, which made my program crash. So if the model for some reason thinks a negative capacity is a great idea, a fixed minimum of zero should prevent the worst. Such bounds also make the program a bit more predictable again.

Reproducibility in Machine Learning

The rise of machine learning has led to changes across all areas of computer science. From a very abstract point of view, heuristics are replaced by black-box machine-learning algorithms providing “better results”. But how do we actually quantify better results? ML-based solutions tend to focus on absolute performance improvements (measured by metrics) rather than on factors like resilience and reproducibility. At the same time, ML models have a significantly growing impact on humans. One can argue that the danger is negligible for applications like playing games, but with direct impact, such as self-driving cars in production, comes responsibility. This responsibility has been reinforced not least by laws such as the EU General Data Protection Regulation (GDPR).

Nevertheless, the objective of this post is not to philosophize about the dangers and dark sides of AI. Instead, this post aims to work out common challenges in reproducibility for machine learning and to show how programming in ML differs from other areas of computer science. Secondly, we will look at practices and workflows that create a higher grade of reproducibility in machine learning algorithms.

Background

Coming from a software engineering background, my first personal experience of programming in machine learning felt like going back in time. Many frameworks are mature and widely used in practice (TensorFlow, Keras, PyTorch, …), but others are still in the early stages and evolve quickly. This shouldn’t be surprising given the short history of current ML implementations. However, the notion of a framework differs from other areas of computer science. TensorFlow and others create an abstraction layer for the underlying mathematical operations and indeed simplify processes like training, optimization and more. But to me, they are closer to a toolkit of operations than a cookbook of best practices.

Scientific results in particular are often implemented with the same toolkit, but as standalone projects. For this reason, the reusability of such implementations is often low. Research scientists are interested in the most recent publications, but there is no baseline project that can be used across different approaches, models and datasets. It is more about copying and pasting workflows, downloading datasets and hacking it together. However, research in ML is now establishing programming paradigms that have existed in other parts of computer science for decades. That said, I am thankful to anyone contributing state-of-the-art implementations in the first place. We will therefore move the scientific scope into the background from now on.

Taking a more practical perspective, Jupyter notebooks are often used as a starting point to explore data and different approaches. They are a great tool to evaluate a proof of concept and to showcase initial findings. However, notebooks tend to become chaotic with increasing complexity. In certain respects, we can compare the workflow to creating an MVP in software engineering: you can reuse the MVP as a starting point for the productive application, but you shouldn’t expect a clean and extensible architecture.

A machine learning workflow

For a better understanding, the following figure shows a typical workflow and the components of development in Data Science:

  1. Load and preprocess data, bring it into an interpretable form for our ML model.
  2. Code a model and implement the black-box magic that empowers AI.
  3. Train, Evaluate and fine-tune the model over days, weeks or months.
ML workflow. Source: https://cloud.google.com/ml-engine/docs/tensorflow/ml-solutions-overview

After an initial implementation and similar to software lifecycles, we have the following steps:

  1. Deploy the program (model) to our dedicated infrastructure (Cloud, local).
  2. Use the model in production.
  3. Monitor the application and its predictions.
  4. Maintain the source code, implement new features and deploy new versions.

Frameworks like TensorFlow provide tools to read data, train models and evaluate them with different metrics. Further, approaches like TensorFlow Serving address the second part of the workflow to deploy models on infrastructure for production. Nonetheless, these tools don’t explicitly address reproducibility issues in ML. For a better understanding, the following section goes one step back by pointing out these challenges.

Challenges in ML reproducibility

In contrast to other fields of computer science, results in ML are non-deterministic. In other words, the same source code can produce different results for the same dataset. The reasons mostly lie in implementation details such as random initialization of parameters or randomly shuffled datasets.

However, the baseline for collaboration in software engineering is a project environment in which changes can be reproduced. There is not enough room in this blog post to discuss concepts like versioning and continuous integration, but in general they lead to projects that are less error-prone thanks to automated testing and deployment. Furthermore, contributors are able to comprehend changes and reproduce them in their own environment (if guidelines and rules are followed).

With this non-determinism in mind, the objective for ML is a process in which the exact same results can be reproduced. The following aspects address this issue:

  1. Versioning of models: Models should be versioned and any changes be transparent.
  2. “Results without context are meaningless.“ (https://www.pachyderm.io/dsbor.html). For reproducibility, the collection of metadata is essential. Running a model on a dataset and versioning it does not answer questions such as: Where did the data come from? How can we rerun the model with an updated dataset?
  3. A reproducibility flag should enable a mode in which features causing non-deterministic results, such as random initialization, are disabled (see the sketch after this list).
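To make the idea of such a flag concrete, here is a minimal sketch in Go; it is purely illustrative, not tied to any particular framework, and the flag name is made up.

package main

import (
	"flag"
	"math/rand"
	"time"
)

func main() {
	// Hypothetical reproducibility flag: when set, every source of randomness
	// is seeded with a fixed value so that reruns yield identical results.
	reproducible := flag.Bool("reproducible", false, "disable non-deterministic behaviour")
	flag.Parse()

	if *reproducible {
		rand.Seed(42) // fixed seed: data shuffling and parameter init become deterministic
	} else {
		rand.Seed(time.Now().UnixNano())
	}

	// ... shuffle the dataset, initialize model parameters, train ...
}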

Coming back to Jupyter notebooks as a baseline for ML projects: they are a nightmare from a versioning and reproducibility perspective. First, notebook cells can be run in different orders, which makes the results hard to understand. Secondly, the actual source of a notebook is illegible, which basically means that one can’t understand changes between versions at the source-code level. As a software developer, just imagine checking out a version and searching the application for changes because the source diff is meaningless.

So, how in practice?

We have gained an intuition for the challenges in ML development and learned that versioning and collecting metadata are crucial for reproducibility. We will now answer the question of how to address these issues in practice.

Data versioning tools

Data Version Control (DVC) and datmo are open-source production tools for model management. In other words, they are versioning tools similar to git, but they address additional data science needs. One fundamental need is the integration of large files from different data sources. Git is not made to handle large files, and machine learning data often lives somewhere else (public cloud, a customer’s infrastructure).

Git Large File Storage (LFS) therefore replaces large files such as data input files with text pointers inside Git and stores the data on a remote server. Going one step further, we don’t just want the data under version control but the entire workflow. For a better understanding, the following figure illustrates a very basic scenario for DVC.

Data and model file versioning with DVC. Source: https://dvc.org/doc/use-cases/data-and-model-files-versioning

We publish and check out the code using a remote hub (GitLab, GitHub, …) just as usual. On top of this, we use DVC to publish and retrieve data versions using a different hub. This separation of code and data makes it possible to test different program versions on various data versions. This is particularly useful for reproducing a model’s performance over time in production (after collecting new data).

On top of this, the tools address the following features (not exclusively):

  • Language- and Framework-agnostic: Implement projects in different languages (Python, R, Julia, ..) using different frameworks (TensorFlow, PyTorch, …)
  • Infrastructure-agnostic: Deploy models to different environments and infrastructures (Google Cloud, AWS, local infrastructure)

However, the infrastructure-agnostic approach comes with a drawback: DVC and datmo lack pipeline-execution features such as build pipelines, monitoring or error handling. The philosophy of these tools is to be very generic, without running servers. They are slim command-line tools without user interfaces.

Pachyderm

In order to come closer to continuous integration, we need deployment pipelines and modular infrastructure. The goal is an automated process of releasing new versions, testing them on a staging environment and deploying them to customers. The borders between infrastructure and programming are blurring, and the same should apply to machine learning. The two keywords that pop up in every (modern) infrastructure are containers (Docker) and Kubernetes. Say hi to Pachyderm.

Pachyderm runs on top of Kubernetes, which makes it deployable to any service that supports Kubernetes (Google Cloud Platform, AWS, Azure, local infrastructure). Further, it integrates git-like features to version code as well as data, and it shares many of git’s terms (repository, pipeline, …). With Pachyderm, we configure continuous integration pipelines using container images.

A baseline Pachyderm workflow. Source: https://www.slideshare.net/joshlk100/reproducible-data-science-review-of-pachyderm-data-version-control-and-git-lfs-tools

The above figure shows a baseline workflow in Pachyderm. Assuming that we have already created a repository and coded our model, we put a file into our dedicated data storage. We then create pipelines whose configurations are written as JSON files and can look as follows:

{
  "pipeline": {
    "name": "word-count"
  },
  "transform": {
    "image": "docker-image",
    "cmd": ["/bin", "/pfs/data", "/pfs/out"]
  },
  "input": {
      "pfs": {
        "repo": "data",
        "glob": "/*"
      }
  }
}

By executing the above configuration with the Pachyderm command-line interface, pachctl create-pipeline -f above_pipeline.json, we run the commands in cmd within a container created from the image defined in image. Further, we can use remote storage like S3 or store the data on the Pachyderm File System (PFS). PFS is a distributed filesystem which is, to a certain degree, comparable to the Hadoop file system (HDFS), with MapReduce jobs replaced by Pachyderm pipelines (see [8]).

After creating the pipeline above, Pachyderm will launch worker pods on Kubernetes. These worker pods will remain up and running, such that they are ready to process any data committed to their input repositories.

KubeFlow: Distributed large-scale deployment

As described before, Pachyderm lets us create scalable and manageable ML pipelines on Kubernetes. Although Pachyderm can parallelize work in a map/reduce-style way, its pipelines mostly rely on single nodes and non-distributed training (multiple GPUs, but not multiple nodes). KubeFlow takes a different approach and mainly focuses on standards to deploy and manage distributed ML on Kubernetes. It integrates tools like Distributed TensorFlow and TensorFlow Serving, as well as JupyterHub, which improves the process of developing in teams on shared notebooks.

However, KubeFlow (as of now) lacks tools that orchestrate data science workflows as seen earlier (data preprocessing, modelling, training, deployment, monitoring, …); it leaves these responsibilities to the developer. Since this blog post mainly focuses on reproducibility in machine learning, KubeFlow does not answer these questions satisfactorily. Consequently, further concepts are out of scope for this post, though no less exciting for productive and large-scale ML engineering.

Nevertheless, reproducibility and productivity should go hand in hand. For this reason, KubeFlow and Pachyderm can be used jointly in practice. In such a scenario, Pachyderm would provide the reproducibility through pipelines, and KubeFlow would bring the ease of deployment and the distributed framework integrations (see [6], [7] for more details).

So what should I use?

After this introduction to tools such as DVC and Pachyderm, one last question remains: which is the best tool in production? And as always, the answer is: it depends. DVC can improve productivity in smaller teams to organize and version projects and to link the source code to the data. However, for organizations willing to introduce a full workflow, richly featured tools such as Pachyderm are the way to go. Taking one step further, KubeFlow paves the way for large-scale and distributed applications.

From a different point of view, this discussion could be seen as a discussion about Kubernetes itself. That one is far more wide-reaching and asks fundamental questions such as: Can we pay someone to set up and maintain the Kubernetes cluster? Are our applications and workflows complex enough (multiple nodes, not just multiple GPUs) to justify the overhead of Kubernetes? Unfortunately, we can’t answer these questions in this blog post.

Wrapping it up

In machine learning, programs can have the same meaning and even speak the same language but output different results, because context matters and implementations are full of (intended) randomness. Further, development in ML is very sensitive to changes, and even small differences can have a large impact on the results. For reproducibility, we have to record the full story and keep track of all changes.

Tools such as DVC and Pachyderm help to keep track of not only the code but also the data. Furthermore, they use pipelines to reproduce results and to simplify collaborative projects. This increases reproducibility and reflects the responsibility that comes with ML. On top of this, these tools are a first step towards compliance with laws like the GDPR, because results can at least be reproduced. However, these solutions are to some extent immature and evolve quickly (but then, so does everything else in ML). There is still a long way to go before practices in ML are comparable to the standards of software engineering.

Related Sources and further reading

  1. Collaboration Issues in Data Science (accessed: 25.02.19): https://github.com/iterative/dvc.org/blob/master/static/docs/philosophy/collaboration-issues.md
  2. Hold Your Machine Learning and AI Models Accountable (accessed 25.02.19): https://medium.com/pachyderm-data/hold-your-machine-learning-and-ai-models-accountable-de887177174c
  3. How to Manage Machine Learning Models (accessed: 25.02.19): https://www.inovex.de/blog/how-to-manage-machine-learning-models/
  4. Introducing Kubeflow – A Composable, Portable, Scalable ML Stack Built for Kubernetes (accessed: 26.02.19): https://kubernetes.io/blog/2017/12/introducing-kubeflow-composable/
  5. Machine-Learning im Kubernetes-Cluster (German, accessed: 25.02.19): https://www.heise.de/developer/artikel/Machine-Learning-im-Kubernetes-Cluster-4226233.html
  6. Machine Learning Workflow (accessed: 26.02.18): https://cloud.google.com/ml-engine/docs/tensorflow/ml-solutions-overview
  7. Pachyderm and Kubeflow integration (accessed: 26.02.18): https://github.com/kubeflow/kubeflow/issues/151
  8. Pachyderm File System (PFS, accessed: 26.02.18): https://docs.pachyderm.io/en/v1.3.7/pachyderm_file_system.html
  9. Provenance: the Missing Feature for Rigorous Data Science. Now in Pachyderm 1.1 (accessed 25.02.19): https://medium.com/pachyderm-data/provenance-the-missing-feature-for-good-data-science-now-in-pachyderm-1-1-2bd9d376a7eb
  10. Reproducibility in ML: Why It Matters and How to Achieve It (accessed: 25.02.19): https://determined.ai/blog/reproducibility-in-ml/
  11. Reproducible data science: review of Pachyderm, Data Version Control and GIT LFS tools (slides, accessed: 25.02.19): https://www.slideshare.net/joshlk100/reproducible-data-science-review-of-pachyderm-data-version-control-and-git-lfs-tools
  12. The Data Science – Bill of Rights (accessed: 25.02.19): https://www.pachyderm.io/dsbor.html

Observability?! – Where do we go from here?

MEME: I always, always test my code. Then I test it again in production.

The last two years in software development and operations have been characterized by the emerging idea of “observability”. The need for a new concept to guide our efforts to control our systems arose from accelerating paradigm shifts driven by the need to scale and by cloud-native technologies. The monitoring landscape, in contrast, stagnated and failed to meet the new challenges that our massively more complex applications pose. Observability therefore evolved into a mission-critical property of modern systems and still attracts much attention. Numerous debates have differentiated monitoring from observability and covered its technical and cultural impact on building and operating systems. At the beginning of 2019, the community reached consensus on the characteristics of observability and elaborated its core principles. Consequently, new tools and SaaS applications appeared, marking the beginning of its commercialization. This post identifies the forces driving the evolution of observability, points out trends we presently perceive and tries to predict future developments.

Continue reading

VVS-Delay – AI in the Cloud

Introduction

Howdy, Geeks! Ever frustrated by public transportation around Stuttgart?
Managed to get up early just to find out your train to university or work is delayed… again?
Yeah, we all know that! We wondered if we could get around this issue by connecting our alarm clock to some algorithms. So we would never ever have to get up too early again.

Well, okay, we’re not quite there yet. But we started by getting some data and derived some barely trustworthy prediction hypotheses from it. In the end it’s up to you whether you believe them or not.

To give you a short overview, here are the components that are involved in the process. You will find them described in more detail below.
Process overview

A few parts in short:
1. crawler and database – get and store departure information
2. visualization – visualizes the delays on a map
3. statistical analysis – some statistical analysis on the delays over a week
4. continuous delivery – keep the production system up to date with the code

Continue reading

FOOLING THE INTELLIGENCE

Adversarial machine learning and its dangers

The world is led by machines; humans are subjected to the robots’ rule. Omniscient computer systems hold control of the world. The newest technology has outpaced human knowledge, while mankind is powerless in the face of stronger, faster, better and almighty cyborgs.

Such dystopian visions of the future often come to mind when reading or hearing the latest news about current advances in the field of artificial intelligence. A lot of sci-fi movies and literature take up this issue and show what might happen if the systems become more intelligent than humans and develop a mind of their own. Even the CEO of SpaceX, Tesla and Neuralink, Elon Musk, who is known for his innovative mindset, has a critical opinion of future progress in artificial intelligence:

If I were to guess what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. […] With artificial intelligence we are summoning the demon.

Elon Musk

Continue reading