Class GraphModel<ModelURL>

A tf.GraphModel is a directed, acyclic graph built from a SavedModel GraphDef that allows inference execution.

A tf.GraphModel can only be created by loading a model that was converted from a TensorFlow SavedModel with the command-line converter tool, then loaded via tf.loadGraphModel.
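A minimal loading-and-inference sketch (the URL is the public MobileNet v2 model also used in the save example later on this page; MobileNet expects a [1, 224, 224, 3] input):

```javascript
// Sketch only: assumes a browser or Node environment with @tensorflow/tfjs loaded.
const model = await tf.loadGraphModel(
    'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json');
model.predict(tf.zeros([1, 224, 224, 3])).print();
```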

Type Parameters

  • ModelURL extends Url = string | io.IOHandler

Hierarchy

  • GraphModel

Implements

  • InferenceModel

Constructors

  • new GraphModel<ModelURL>(modelUrl, loadOptions?)

    Type Parameters

    • ModelURL extends Url = string | IOHandler

    Parameters

    • modelUrl: ModelURL

      URL of the model, or an io.IOHandler.

    • Optional loadOptions: LoadOptions

    Returns GraphModel<ModelURL>

Properties

inputNodes: string[]
inputs: TensorInfo[]
metadata: {}
modelSignature: {}
modelVersion: string
outputNodes: string[]
outputs: TensorInfo[]
weights: NamedTensorsMap

Methods

  • dispose(): void

    Releases the memory used by the weight tensors and the resourceManager.

  • disposeIntermediateTensors(): void

    Disposes the intermediate tensors kept for model debugging mode (when the flag KEEP_INTERMEDIATE_TENSORS is true).

  • execute(inputs, outputs?)

    Executes inference for the model for the given input tensors.

    Parameters

    • inputs: Tensor<Rank> | Tensor<Rank>[] | NamedTensorMap

      A tensor, tensor array, or tensor map of the inputs for the model, keyed by the input node names.

    • Optional outputs: string | string[]

      Output node names from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.

    Returns Tensor<Rank> | Tensor<Rank>[]

    A single tensor if a single output is requested, or if no outputs are provided and there is only one default output; otherwise a tensor array. The order of the tensor array matches the outputs if provided, otherwise the order of the model's outputNodes attribute.
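    A usage sketch, reusing the public MobileNet v2 model from the save example on this page (the input node name in the commented line is an assumption for illustration):

    ```javascript
    // execute() returns a single tf.Tensor when the model has one default output.
    const model = await tf.loadGraphModel(
        'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json');
    const scores = model.execute(tf.zeros([1, 224, 224, 3]));
    scores.print();

    // Inputs can also be passed as a NamedTensorMap keyed by input node name:
    // const scores2 = model.execute({'input': tf.zeros([1, 224, 224, 3])});
    ```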

  • executeAsync(inputs, outputs?)

    Executes inference for the model for the given input tensors asynchronously. Use this method when your model contains control flow ops.

    Parameters

    • inputs: Tensor<Rank> | Tensor<Rank>[] | NamedTensorMap

      A tensor, tensor array, or tensor map of the inputs for the model, keyed by the input node names.

    • Optional outputs: string | string[]

      Output node names from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.

    Returns Promise<Tensor<Rank> | Tensor<Rank>[]>

    A Promise of a single tensor if a single output is requested, or if no outputs are provided and there is only one default output; otherwise a Promise of a tensor array.
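    A usage sketch (any graph model can be run this way; models containing control flow ops such as converted tf.while_loop or tf.cond must use executeAsync rather than execute):

    ```javascript
    // executeAsync resolves once all ops, including control flow, have run.
    const model = await tf.loadGraphModel(
        'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json');
    const result = await model.executeAsync(tf.zeros([1, 224, 224, 3]));
    result.print();
    ```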

  • getIntermediateTensors(): NamedTensorsMap

    Gets the intermediate tensors kept for model debugging mode (when the flag KEEP_INTERMEDIATE_TENSORS is true).

  • load(): UrlIOHandler<ModelURL> extends IOHandlerSync ? boolean : Promise<boolean>

    Loads the model and weight files, constructs the in-memory weight map, and compiles the inference graph.

  • loadSync(artifacts): boolean

    Synchronously constructs the in-memory weight map and compiles the inference graph. Also initializes any hash tables.

    Parameters

    • artifacts: ModelArtifacts

    Returns boolean

  • predict(inputs, config?)

    Executes inference for the given input tensors.

    See

    inputNodes

    You can also feed any intermediate nodes using a NamedTensorMap as the input type. For example, given the graph InputNode => Intermediate => OutputNode, you can execute the subgraph Intermediate => OutputNode by calling model.execute({'IntermediateNode': tf.tensor(...)});

    This is useful for models that use tf.dynamic_rnn, where the intermediate state needs to be fed manually.

    For batch inference, the tensors for each input need to be concatenated together. For example, with MobileNet the required input shape is [1, 224, 224, 3], which represents [batch, height, width, channel]. If we provide batched data of 100 images, the input tensor should have the shape [100, 224, 224, 3].

    Parameters

    • inputs: Tensor<Rank> | Tensor<Rank>[] | NamedTensorMap

    • Optional config: ModelPredictConfig

      Prediction configuration for specifying the batch size and output node names. Currently the batch size option is ignored for graph models.

    Returns Tensor<Rank> | Tensor<Rank>[] | NamedTensorMap

    Inference result tensors. The output is a single tf.Tensor if the model has a single output node, otherwise a Tensor[] or NamedTensorMap for a model with multiple outputs.
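    The batching convention above can be sketched as follows (MobileNet shapes; the model URL is the one used in the save example on this page):

    ```javascript
    // Batched inference: concatenate inputs along the batch dimension.
    const model = await tf.loadGraphModel(
        'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json');
    const batch = tf.zeros([100, 224, 224, 3]);  // [batch, height, width, channel]
    const predictions = model.predict(batch);    // one prediction row per image
    ```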

  • save(handlerOrURL, config?)

    Saves the configuration and/or weights of the GraphModel.

        An IOHandler is an object that has a save method of the proper signature defined. The save method manages the storing or transmission of serialized data ("artifacts") that represent the model's topology and weights onto or via a specific medium, such as file downloads, local storage, IndexedDB in the web browser and HTTP requests to a server. TensorFlow.js provides IOHandler implementations for a number of frequently used saving mediums, such as tf.io.browserDownloads and tf.io.browserLocalStorage. See tf.io for more details.

        This method also allows you to refer to certain types of IOHandlers as URL-like string shortcuts, such as 'localstorage://' and 'indexeddb://'.

        Example 1: Save model's topology and weights to browser local storage; then load it back.

        const modelUrl =
        'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
        const model = await tf.loadGraphModel(modelUrl);
        const zeros = tf.zeros([1, 224, 224, 3]);
        model.predict(zeros).print();

        const saveResults = await model.save('localstorage://my-model-1');

        const loadedModel = await tf.loadGraphModel('localstorage://my-model-1');
        console.log('Prediction from loaded model:');
        loadedModel.predict(zeros).print();

        Returns

        A Promise of SaveResult, which summarizes the result of the saving, such as byte sizes of the saved artifacts for the model's topology and weight values.

        Parameters

        • handlerOrURL: string | IOHandler

          An instance of IOHandler or a URL-like, scheme-based string shortcut for IOHandler.

        • Optional config: SaveConfig

          Options for saving the model.

        Returns Promise<SaveResult>