In this session, we will talk about how to take deep learning and ML models built with TensorFlow into production quickly, right after prototyping. Using TensorFlow with big datasets in a distributed setting has been a challenge for small teams like ours because of the complicated MLOps code involved. With what we cover in this talk, we can now do it with just a few extra lines of code, letting Databricks handle most of the MLOps so that data scientists can focus on feature engineering and building the actual models. We can also leverage the best of both Spark and TensorFlow in a single project, including TensorFlow ecosystem libraries such as TensorFlow Hub, TensorFlow Recommenders, and TensorFlow Ranking.
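
To make the "few extra lines of code" idea concrete, here is a minimal sketch of distributed Keras training on a Spark cluster. It assumes the spark-tensorflow-distributor package (and its MirroredStrategyRunner) is available on the Databricks cluster; the model, dataset, and parameter values are placeholders for illustration, not the exact code from the talk.

```python
# Minimal sketch: distributed TensorFlow training on a Spark cluster.
# Assumes spark-tensorflow-distributor is installed on the cluster;
# the model and data below are illustrative placeholders.
import tensorflow as tf
from spark_tensorflow_distributor import MirroredStrategyRunner


def train():
    # Ordinary single-node Keras code; the runner wraps it in a
    # MultiWorkerMirroredStrategy across the Spark executors.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 28 * 28).astype("float32") / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(x_train, y_train, epochs=3, batch_size=128)
    return model


# The "few extra lines": hand the training function to the runner,
# which distributes it over the cluster via Spark.
MirroredStrategyRunner(num_slots=2).run(train)
```

The point of this pattern is that the training function stays plain TensorFlow/Keras, so the same code that worked during prototyping can be scaled out on the cluster without writing custom distribution or orchestration logic.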