URL for the model, or an io.IOHandler.
Releases the memory used by the weight tensors and resourceManager.
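For example, a model can be disposed once it is no longer needed. A minimal sketch, reusing the MobileNet URL from the save example below:

const modelUrl =
'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
const model = await tf.loadGraphModel(modelUrl);
const output = model.predict(tf.zeros([1, 224, 224, 3]));
output.dispose();  // dispose the output tensor separately
model.dispose();   // release the weight tensors and resourceManager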
Executes inference for the model for the given input tensors.
A tensor, tensor array, or tensor map of the inputs for the model, keyed by the input node names.
Output node name(s) from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.
A single tensor if a single output is requested, or if no outputs are specified and the model has only one default output; otherwise a tensor array. The order of the tensor array matches the outputs argument if provided, otherwise the order of the model's outputNodes attribute.
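A minimal sketch of execute(); the intermediate node name passed as outputs is an assumption and depends on the converted graph:

const modelUrl =
'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
const model = await tf.loadGraphModel(modelUrl);
const input = tf.zeros([1, 224, 224, 3]);
// Default outputs: returns a single tensor for this model.
const logits = model.execute(input);
// Inspect a hypothetical intermediate node by naming it in outputs.
const pooled = model.execute(input, 'MobilenetV2/Logits/AvgPool');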
Executes inference for the model for the given input tensors asynchronously. Use this method when your model contains control flow ops.
A tensor, tensor array, or tensor map of the inputs for the model, keyed by the input node names.
Output node name(s) from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.
A Promise of a single tensor if a single output is requested, or if no outputs are specified and the model has only one default output; otherwise a Promise of a tensor map.
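A sketch of executeAsync(), assuming a hypothetical model.json whose graph contains control flow ops (the URL and input shape are illustrative):

const model = await tf.loadGraphModel('https://example.com/control-flow-model/model.json');
const input = tf.zeros([1, 224, 224, 3]);
// Await the result because control flow ops resolve asynchronously.
const result = await model.executeAsync(input);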
Loads the model and weight files, constructs the in-memory weight map, and compiles the inference graph.
Synchronously constructs the in-memory weight map and compiles the inference graph. Also initializes any hash tables.
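These methods are usually not called directly: tf.loadGraphModel constructs the GraphModel and awaits load() internally. A minimal sketch:

const modelUrl =
'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
// tf.loadGraphModel creates the GraphModel and calls load() for you.
const model = await tf.loadGraphModel(modelUrl);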
Executes inference for the given input tensors.
Prediction configuration for specifying the batch size and output node names. Currently the batch size option is ignored for graph models.
Inference result tensors. The output is a single tf.Tensor if the model has a single output node; otherwise a Tensor[] or NamedTensorMap[] is returned for models with multiple outputs.
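A minimal sketch of predict(); MobileNet has a single output node, so the result is a single tf.Tensor:

const modelUrl =
'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
const model = await tf.loadGraphModel(modelUrl);
const logits = model.predict(tf.zeros([1, 224, 224, 3]));
logits.print();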
Saves the configuration and/or weights of the GraphModel.
An IOHandler is an object that has a save method of the proper signature defined. The save method manages the storing or transmission of serialized data ("artifacts") that represent the model's topology and weights onto or via a specific medium, such as file downloads, local storage, IndexedDB in the web browser and HTTP requests to a server. TensorFlow.js provides IOHandler implementations for a number of frequently used saving mediums, such as tf.io.browserDownloads and tf.io.browserLocalStorage. See tf.io for more details.

This method also allows you to refer to certain types of IOHandlers as URL-like string shortcuts, such as 'localstorage://' and 'indexeddb://'.
Example 1: Save the model's topology and weights to browser local storage; then load it back.
const modelUrl =
'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
const model = await tf.loadGraphModel(modelUrl);
const zeros = tf.zeros([1, 224, 224, 3]);
model.predict(zeros).print();
const saveResults = await model.save('localstorage://my-model-1');
const loadedModel = await tf.loadGraphModel('localstorage://my-model-1');
console.log('Prediction from loaded model:');
loadedModel.predict(zeros).print();
An instance of IOHandler or a URL-like, scheme-based string shortcut for an IOHandler.
Options for saving the model.
A Promise of SaveResult, which summarizes the result of the saving, such as byte sizes of the saved artifacts for the model's topology and weight values.
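Building on Example 1 (reusing its model variable), a sketch of saving the same model through other handlers; the 'indexeddb://' shortcut is listed above, while the file name prefix passed to tf.io.browserDownloads is illustrative:

// Save to IndexedDB via the URL-like shortcut.
const indexedDbResults = await model.save('indexeddb://my-model-1');
// Or pass an IOHandler directly to trigger a browser file download.
const downloadResults = await model.save(tf.io.browserDownloads('my-model'));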
A tf.GraphModel is a directed, acyclic graph built from a SavedModel GraphDef and allows inference execution.
A tf.GraphModel can only be created by loading from a model converted from a TensorFlow SavedModel using the command line converter tool and loaded via tf.loadGraphModel.