Gist

TF Estimators are high-level, pre-made building blocks that help you perform common learning tasks quickly:

  • you can run Estimator-based models on a local host or in a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.
  • estimators simplify sharing implementations between model developers.
  • you can develop a state-of-the-art model with high-level, intuitive code. In short, it is generally much easier to create models with Estimators than with the low-level TensorFlow APIs.
  • estimators are themselves built on tf.keras.layers, which simplifies customization.
  • estimators build the graph for you.
  • estimators provide a safe distributed training loop that controls how and when to:
    • build the graph
    • initialize variables
    • load data
    • handle exceptions
    • create checkpoint files and recover from failures
    • save summaries for TensorBoard

When writing an application with Estimators, you have to separate the data input pipeline from the model, which simplifies experiments with different data sets.

Below is a straightforward example of a linear regression with estimators. It highlights the general workflow.

You need, of course, to import some namespaces:
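The snippets below are a minimal sketch assuming TensorFlow 1.x (in TensorFlow 2.x the same input utilities live under tf.compat.v1) and NumPy arrays as the toy data:

```python
import numpy as np
import tensorflow as tf
```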

and define some data:
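For illustration, assume a roughly linear relationship; the values below are made up for this sketch:

```python
# Made-up training and evaluation data for a roughly linear relationship.
x_train = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
y_train = np.array([3.1, 4.9, 7.2, 8.8], dtype=np.float32)
x_eval = np.array([5.0, 6.0, 7.0, 8.0], dtype=np.float32)
y_eval = np.array([11.1, 13.0, 15.1, 16.9], dtype=np.float32)
```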

The feature columns go into the definition of the estimator:
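Something along these lines, where the single numeric feature name "x" is simply a choice made for this sketch:

```python
# One numeric feature column describing the single input "x".
feature_columns = [tf.feature_column.numeric_column("x", shape=[1])]

# A pre-made estimator: LinearRegressor builds the graph, the training
# loop, checkpointing and TensorBoard summaries for you.
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)
```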

and using input functions allows you to decouple the input pipeline from the model:
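For in-memory NumPy data, numpy_input_fn (under tf.estimator.inputs in TF 1.x) is a convenient way to build such input functions:

```python
# The input functions wrap the data pipeline; swapping data sets means
# swapping input functions, not touching the model.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_eval}, y_eval, batch_size=4, num_epochs=1, shuffle=False)

# Train and evaluate with the same estimator, different input functions.
estimator.train(input_fn=train_input_fn, steps=1000)
print(estimator.evaluate(input_fn=eval_input_fn))
```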

To make predictions, you use input functions as well:
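Again a sketch; for a LinearRegressor, predict yields dictionaries containing a "predictions" entry:

```python
# Predict on new, unlabeled inputs via yet another input function.
x_new = np.array([9.0, 10.0], dtype=np.float32)
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_new}, num_epochs=1, shuffle=False)

for pred in estimator.predict(input_fn=predict_input_fn):
    print(pred["predictions"])
```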

and it’s clear that estimators are, in terms of API mechanics, the fastest road to results. Alongside Keras and the low-level API, estimators form a third API layer in TensorFlow. Whether this approach is the best one depends on your context:

  • the checkpoint mechanism for model restoration should, in particular, be weighed against the non-estimator approaches
  • the (dis)advantages of creating custom estimators 
  • the flexibility of Keras versus the flexibility of estimator pipelines (also in the context of serving models)

Estimators definitely turn TensorFlow into something close to Scikit-Learn, but should you use TF as a replacement for sklearn? Well, probably not.

The whole estimator stack is also available in R, where it works similarly and is perhaps even easier to use.

The TensorFlow API for R is actually very nicely documented and complements the Python docs well.