If you're processing a large amount of data, Apache Spark is usually the best computing engine. But once you've trained a model with Spark, do you know how to serve it? You can keep using Spark for serving, but sometimes that's simply not possible. In this talk from Scale by the Bay, Matthew Tovbin explores various approaches for making models portable outside Spark.
Making Spark ML Models Portable - Know Your Options
After successfully training an ML model with Apache Spark, the next important task is serving it. One option is to keep using Spark for serving as well, but sometimes that's not desirable or possible: for instance, when you want to expose the model as an HTTP service, run it in a Docker container, or use it on a mobile device. This talk explores various approaches for making models portable outside Spark.
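One of the simplest portability approaches is to extract the learned parameters from the trained model and re-implement scoring outside Spark entirely. Below is a minimal sketch for a binary logistic regression: the coefficient values are made up for illustration, but in practice they would come from a trained Spark `LogisticRegressionModel` via its `coefficients` and `intercept` attributes.

```python
import math

# Parameters as they might be extracted from a trained Spark
# LogisticRegressionModel (model.coefficients, model.intercept).
# These particular values are made up for illustration.
coefficients = [0.8, -1.2, 0.3]
intercept = -0.5

def predict_proba(features):
    """Score a single example with plain Python -- no Spark required."""
    z = intercept + sum(w * x for w, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) function

def predict(features, threshold=0.5):
    """Return the predicted class label for one example."""
    return 1 if predict_proba(features) >= threshold else 0

print(round(predict_proba([1.0, 0.5, 2.0]), 4))  # → 0.5744
```

Because the scoring logic no longer depends on Spark, it can be wrapped in an HTTP service, packaged in a Docker container, or ported to a mobile device, exactly the deployment targets the talk discusses. The trade-off is that you must keep the re-implementation in sync with the training pipeline, including any feature transformations.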