TensorFlow 2.0 and Cloud AI make it easy to train, deploy, and maintain scalable machine learning models | Google Cloud Blog

March 22, 2019

Since it was open-sourced in 2015, TensorFlow has matured into an entire end-to-end ML ecosystem that includes a variety of tools, libraries, and deployment options to help users go from research to production easily. This month at the 2019 TensorFlow Dev Summit we announced TensorFlow 2.0 to make machine learning models easier to use and deploy.

TensorFlow started out as a machine learning framework and has grown into a comprehensive platform that gives researchers and developers access to both intuitive higher-level APIs and low-level operations. In TensorFlow 2.0, eager execution is enabled by default, with tight Keras integration. You can easily ingest datasets via tf.data pipelines, and you can monitor your training in TensorBoard directly from Colab and Jupyter Notebooks. The TensorFlow team will continue to work on improving TensorFlow 2.0 alpha with a general release candidate coming later in Q2 2019.
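The workflow described above can be sketched in a few lines. This is a minimal, illustrative example (the layer sizes and random data are stand-ins, not from the original post), assuming TensorFlow 2.x is installed:

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# In TF 2.0, eager execution is on by default: ops run immediately,
# with no Session or graph-building boilerplate.
x = tf.constant([[1.0, 2.0]])
print(x * 2)  # evaluates eagerly

# A tf.data pipeline feeding a small Keras model (toy data for illustration).
data = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([32, 4]),
     tf.random.uniform([32], maxval=2, dtype=tf.int32))
).batch(8)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(data, epochs=1, verbose=0)
```

From here, adding `tf.keras.callbacks.TensorBoard` to `model.fit` is all it takes to monitor the run in TensorBoard.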

Upgrading a model with the tf_upgrade_v2 tool.
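For reference, the upgrade tool is a command-line script installed alongside TensorFlow. The file and directory names below are placeholders:

```shell
# Convert a single TF 1.x script to TF 2.0-compatible code.
tf_upgrade_v2 --infile model_v1.py --outfile model_v2.py

# Upgrade an entire project tree and write a report of every change made.
tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/ --reportfile report.txt
```

The generated report flags any calls the tool could not convert automatically, so you know where manual fixes are still needed.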

Experiment and iterate

Both researchers and enterprise data science teams must continuously iterate on model architectures, with a focus on rapid prototyping and speed to a first solution. With eager execution a central focus of TensorFlow 2.0, researchers can use intuitive Python control flow, optimize their eager code with tf.function, and save time with improved error messages. Creating and experimenting with models in TensorFlow has never been easier.
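The tf.function decorator mentioned above traces ordinary Python control flow into a graph. A minimal sketch (the function and values are illustrative, assuming TensorFlow 2.x):

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# Plain Python loops and conditionals work in eager mode; wrapping the
# function with @tf.function lets AutoGraph trace them into a graph,
# speeding up repeated calls without changing the code itself.
@tf.function
def sum_positive(x):
    total = tf.constant(0.0)
    for v in x:          # AutoGraph converts this loop over a tensor
        if v > 0.0:      # ...and this data-dependent conditional
            total = total + v
    return total

result = sum_positive(tf.constant([1.0, -2.0, 3.0]))  # sums only 1.0 and 3.0
```

Removing the decorator gives back plain eager execution, which makes debugging straightforward: develop eagerly, then add `@tf.function` once the logic is settled.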

Faster training is essential for model deployments, retraining, and experimentation. In the past year, the TensorFlow team has worked diligently to improve training performance on a variety of platforms, including the second-generation Cloud TPU (by a factor of 1.6x) and the NVIDIA V100 GPU (by a factor of more than 2x). For inference, we saw speedups of over 3x with Intel’s MKL library, which supports CPU-based Compute Engine instances.

Through add-on extensions, TensorFlow expands to help you build advanced models. For example, TensorFlow Federated lets you train models collaboratively, both in the cloud and on remote (IoT or embedded) devices. Often, your remote devices hold training data that your centralized training system cannot access. We also recently announced the TensorFlow Privacy extension, which helps you train models that avoid memorizing personally identifiable information (PII) present in your training data. Finally, TensorFlow Probability extends TensorFlow’s abilities to more traditional statistical use cases, which you can use in conjunction with other functionality like estimators.

Deploy your ML model in a variety of environments and languages

A core strength of TensorFlow has always been the ability to deploy models into production. In TensorFlow 2.0, the TensorFlow team is making it even easier. TFX Pipelines give you the ability to coordinate how you serve your trained models for inference at runtime, whether on a single instance, or across an entire cluster. Meanwhile, for more resource-constrained systems, like mobile or IoT devices and embedded hardware, you can easily quantize your models to run with TensorFlow Lite. Airbnb, Shazam, and the BBC are all using TensorFlow Lite to enhance their mobile experiences, and to validate as well as classify user-uploaded content.
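Quantizing a Keras model for TensorFlow Lite is a short conversion step. A minimal sketch, assuming TensorFlow 2.x (the tiny model here is a stand-in for a real trained one):

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# A tiny stand-in for a trained Keras model.
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model(tf.zeros([1, 4]))  # call once so the model's input shape is known

# Convert to TensorFlow Lite, applying default post-training quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # a serialized FlatBuffer (bytes)

# Write the result to disk for deployment on a mobile or embedded device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The quantized FlatBuffer is typically a fraction of the original model's size, which is what makes it practical on phones and embedded hardware.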

Exploring and analyzing data with TensorFlow Data Validation.

JavaScript is one of the world’s most popular programming languages, and TensorFlow.js helps make ML available to millions of JavaScript developers. At the Dev Summit, the TensorFlow team announced TensorFlow.js version 1.0. With it, you can not only train and run models in the browser, but also run TensorFlow as part of server-side hosted JavaScript apps, including on App Engine. TensorFlow.js now performs better than ever, and its community has grown substantially: in the year since its initial launch, community members have downloaded TensorFlow.js over 300,000 times, and its repository now incorporates code from over 100 contributors.

How to get started

If you’re eager to get started with the TensorFlow 2.0 alpha on Google Cloud, start up a Deep Learning VM and try out some of the tutorials. If you just want to run a notebook, TensorFlow 2.0 is available in Colab via pip install. Perhaps more importantly, you can also run a Jupyter instance on Google Cloud using a Cloud Dataproc cluster, or launch notebooks directly from Cloud ML Engine, all from within your GCP project.
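The pip install mentioned above is a one-liner (package names as of the March 2019 alpha):

```shell
# Install the TensorFlow 2.0 alpha in Colab or any Python environment.
pip install tensorflow==2.0.0-alpha0

# Or, for GPU support:
pip install tensorflow-gpu==2.0.0-alpha0
```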

Using TensorFlow 2.0 with a Deep Learning VM and GCP Notebook Instances.

Along with announcing the alpha release of TensorFlow 2.0, we also announced new community and education partnerships. In collaboration with O’Reilly Media, we’re hosting TensorFlow World, a week-long conference dedicated to fostering and bringing together the open source community and all things TensorFlow. The call for proposals is open for attendees to submit papers and projects to be highlighted at the event. Finally, we announced two new courses to help learners new to ML and TensorFlow. The first is deeplearning.ai’s Course 1 - Introduction to TensorFlow for AI, ML and DL, part of the TensorFlow: From Basics to Mastery series. The second is Udacity’s Intro to TensorFlow for Deep Learning.

If you’re using TensorFlow 2.0 on Google Cloud, we want to hear about it! Make sure to join our Testing special interest group, submit your project abstracts to TensorFlow World, and share your projects in our #PoweredByTF Challenge on DevPost. To quickly get up to speed on TensorFlow, be sure to check out our free courses on Udacity and DeepLearning.ai.
