
Deploy an end-to-end ML pipeline using Apache Spark


Want to learn how to deploy an end-to-end ML pipeline? In their talk at Scale By The Bay, Prashant Sharma and Nick Pentreath compare a typical streaming machine learning setup with one built on a distributed stream-processing engine like Apache Spark.

Structured Streaming can help perform machine learning in real time at large scale. A typical streaming machine learning end-to-end pipeline consists of:

1. Preprocessing the data for the application, e.g. normalising or cleaning it.
2. Hosting the model as a microservice on Kubernetes, using IBM MAX (the IBM Model Asset eXchange).
3. Scaling the entire pipeline using Apache Spark and Kubernetes.

The talk includes a live demo applying this technique to predict objects in an image with an object detection model. Since this is a streaming application, predictions are made in real time.

Key takeaways:

- Learn how to reuse ML models from the IBM Model Asset eXchange.
- Learn how to scale an online ML application end to end using Apache Spark Structured Streaming and Kubernetes.
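The per-micro-batch flow described above can be sketched in plain Python. This is only an illustration under stated assumptions: the MAX endpoint behaviour, the record shape, and the function names are hypothetical, not from the talk; in a real deployment, Structured Streaming would hand each micro-batch to a function like `score_batch` (e.g. via `foreachBatch`), and `call_max_model` would be an HTTP POST to the model-serving microservice.

```python
# Sketch of the preprocess-then-score step of the streaming pipeline.
# All names and the response shape are assumptions for illustration.

def call_max_model(image_bytes):
    # Stub standing in for an HTTP POST to a MAX object-detection
    # microservice; a real call would send the image bytes over the
    # network and parse the JSON response.
    return {
        "status": "ok",
        "predictions": [{"label": "machine part", "probability": 0.92}],
    }

def preprocess(record):
    # Step 1: application-specific cleaning -- here, drop records
    # with an empty image payload.
    return record if record.get("image") else None

def score_batch(batch):
    # Score one micro-batch of records, keeping the top prediction
    # per record, as a streaming sink function would.
    results = []
    for record in batch:
        record = preprocess(record)
        if record is None:
            continue
        response = call_max_model(record["image"])
        top = max(response["predictions"], key=lambda p: p["probability"])
        results.append((record["id"], top["label"], top["probability"]))
    return results

batch = [{"id": 1, "image": b"\x89PNG..."}, {"id": 2, "image": b""}]
print(score_batch(batch))  # -> [(1, 'machine part', 0.92)]
```

Scaling then comes from Spark distributing micro-batches across executors, while Kubernetes scales the model-serving microservice independently of the streaming job.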
