Event
15 April 2019
New York
Talk 1: "Slack After Dark": Realtime ML + Kubernetes + TensorFlow + MLflow + Slack API + PipelineAI (Antje Barth, Developer Advocate, MapR)
Welcome to “Slack After Dark” - our Slack-based dating app, which showcases an end-to-end, containerized, integrated ML workflow running online model predictions and online model training with Keras/TensorFlow, PipelineAI, the MapR Data Platform, and the Slack API!
In this talk and live demo, you’ll also see how a streaming architecture helps you move from batch to real-time model training and simplifies your overall model data logistics.
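As a hedged sketch of the streaming idea above (the real stack uses Keras/TensorFlow fed from MapR/Kafka streams; the tiny hand-rolled SGD model and in-memory event generator here are illustrative stand-ins, not the talk's code):

```python
# Illustrative online (streaming) model training: a tiny linear model updated
# one streamed example at a time, standing in for Keras train_on_batch calls
# fed from a Kafka/MapR event stream.

def make_stream():
    """Simulated event stream of (features, label) pairs for y = 2x + 1."""
    for i in range(1000):
        x = (i % 10) / 10.0
        yield x, 2.0 * x + 1.0

def train_online(stream, lr=0.1):
    """Incrementally fit w, b by SGD as events arrive - no batch dataset."""
    w, b = 0.0, 0.0
    for x, y in stream:
        err = (w * x + b) - y
        # SGD update on a single streamed example
        w -= lr * err * x
        b -= lr * err
    return w, b

w, b = train_online(make_stream())
print(f"w={w:.3f}, b={b:.3f}")
```

The point of the sketch: the model is never retrained from scratch; each arriving event nudges the parameters, which is what lets a streaming architecture keep a model current without a batch retraining cycle.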
Talk 2: Real-Time, Continuous ML/AI Model Training, Optimizing, and Predicting with Kubernetes, Kafka, TensorFlow, KubeFlow, MLflow, Keras, Spark ML, PyTorch, Scikit-Learn, and GPUs (Chris Fregly, Founder @ PipelineAI)
Chris Fregly, Founder @ PipelineAI, will walk you through a real-world, complete end-to-end Pipeline-optimization example.
Through a series of live demos, Chris will install, create, and deploy a model ensemble using the PipelineAI Platform with GPUs, TensorFlow, and Scikit-Learn.
While most hyper-parameter optimizers stop at the training phase (i.e. learning rate, tree depth, EC2 instance type, etc.), we extend model validation and tuning into a new post-training optimization phase that includes 8-bit reduced-precision weight quantization and neural-network layer fusing, among many other framework- and hardware-specific optimizations.
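To make the 8-bit reduced-precision idea concrete, here is a minimal sketch of symmetric linear weight quantization, in the spirit of TF-Lite/TensorRT-style schemes; it is not the PipelineAI implementation, and the example weights are invented:

```python
# Illustrative post-training 8-bit weight quantization (symmetric linear
# scheme): map float weights onto int8 levels with one scale per tensor,
# then dequantize for inference. Constants and weights are made up.

def quantize_int8(weights):
    """Quantize a list of float weights to int8 plus a single float scale."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0  # largest weight maps to +/-127
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [qi * scale for qi in q]

weights = [0.91, -0.42, 0.013, -1.27, 0.55]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
```

The trade-off the talk alludes to: each weight now fits in one byte instead of four, at the cost of a bounded rounding error of at most half a quantization step per weight.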
Next, we introduce hyper-parameters at the prediction phase, including request-batch sizing and chipset (CPU vs. GPU vs. TPU). We continuously learn from all phases of our pipeline, including the prediction phase, and we update our model in real time using data from a Kafka stream.
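Request-batch sizing as a prediction-phase hyper-parameter can be reasoned about with a simple cost model. The sketch below uses invented overhead/per-item constants, not PipelineAI measurements:

```python
# Illustrative sweep of a prediction-phase hyper-parameter: request batch size.
# Model: each inference call pays a fixed per-request overhead plus a per-item
# cost, so throughput rises with batch size while per-request latency grows.
# The constants below are invented for illustration.

FIXED_OVERHEAD_MS = 5.0   # e.g. RPC + kernel-launch cost per request
PER_ITEM_MS = 0.2         # marginal cost of one more item in the batch

def latency_ms(batch_size):
    return FIXED_OVERHEAD_MS + PER_ITEM_MS * batch_size

def throughput(batch_size):
    """Items served per millisecond at this batch size."""
    return batch_size / latency_ms(batch_size)

def best_batch(candidates, latency_budget_ms):
    """Largest-throughput batch size that still meets the latency budget."""
    feasible = [b for b in candidates if latency_ms(b) <= latency_budget_ms]
    return max(feasible, key=throughput) if feasible else None

sizes = [1, 8, 32, 128]
choice = best_batch(sizes, latency_budget_ms=20.0)
```

Under this toy model, batching amortizes the fixed overhead, so the tuner picks the largest batch that still satisfies the latency budget - which is exactly why batch size belongs in the prediction-phase search space rather than being fixed at deploy time.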
Lastly, we compute a PipelineAI Efficiency Score for the overall pipeline, covering cost, accuracy, and time. We show techniques to maximize this PipelineAI Efficiency Score using our massive PipelineDB together with the pipeline-wide hyper-parameter tuning techniques described in this talk.
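PipelineAI does not publish its Efficiency Score formula, so as a hedged sketch only, a score over cost, accuracy, and time might combine normalized metrics like this (the formula, weights, and example numbers are all assumptions):

```python
# Hypothetical pipeline efficiency score combining accuracy (higher is better)
# with cost and time (lower is better). The weighting and normalization are
# invented for illustration; PipelineAI's actual scoring is not public.

def efficiency_score(accuracy, cost_usd, latency_ms,
                     max_cost_usd=100.0, max_latency_ms=1000.0,
                     weights=(0.5, 0.25, 0.25)):
    """Weighted score in [0, 1]; higher means a better cost/accuracy/time trade-off."""
    w_acc, w_cost, w_time = weights
    cost_term = 1.0 - min(cost_usd / max_cost_usd, 1.0)   # cheaper -> higher
    time_term = 1.0 - min(latency_ms / max_latency_ms, 1.0)  # faster -> higher
    return w_acc * accuracy + w_cost * cost_term + w_time * time_term

# Compare two hypothetical pipeline variants:
a = efficiency_score(accuracy=0.92, cost_usd=40.0, latency_ms=250.0)
b = efficiency_score(accuracy=0.95, cost_usd=90.0, latency_ms=900.0)
```

A single scalar like this is what makes "maximize the Efficiency Score" tractable for a tuner: variant b is slightly more accurate, but under these assumed weights the cheaper, faster variant a wins.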