
Class GraphModel<ModelURL>

A tf.GraphModel is a directed, acyclic graph built from a SavedModel GraphDef and allows inference execution.

A tf.GraphModel can only be created by loading a model that has been converted from a TensorFlow SavedModel with the command line converter tool; the converted model is then loaded via tf.loadGraphModel.
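
For example, a converted model can be loaded from a hosted model.json and run directly (a minimal sketch; the URL is the MobileNet demo model used in the save example below, and tf is assumed to be available, e.g. via import * as tf from '@tensorflow/tfjs'):

// Load a converted SavedModel and run a sanity-check prediction.
const modelUrl =
    'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
const model = await tf.loadGraphModel(modelUrl);
model.predict(tf.zeros([1, 224, 224, 3])).print();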

Type parameters

  • ModelURL: Url = string | io.IOHandler

Hierarchy

  • GraphModel

Implements

  • InferenceModel

Index

Constructors

  • new GraphModel<ModelURL>(modelUrl: ModelURL, loadOptions?: LoadOptions): GraphModel<ModelURL>
  • Type parameters

    • ModelURL: Url = string | IOHandler

    Parameters

    • modelUrl: ModelURL

      url for the model, or an io.IOHandler.

    • Optional loadOptions: LoadOptions

    Returns GraphModel<ModelURL>

Properties

inputNodes: string[]
inputs: TensorInfo[]
metadata: {}
modelSignature: {}
modelVersion: string
outputNodes: string[]
outputs: TensorInfo[]
weights: NamedTensorsMap

Methods

      • dispose(): void
      • Releases the memory used by the weight tensors and resourceManager.

        Returns void
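
        A minimal usage sketch (assuming a model loaded with tf.loadGraphModel, as in the example under the class description):

        // Free the model's weight tensors once they are no longer needed.
        const model = await tf.loadGraphModel(
            'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json');
        model.predict(tf.zeros([1, 224, 224, 3])).print();
        model.dispose();  // the weight tensors are now released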

      • disposeIntermediateTensors(): void
      • Disposes the intermediate tensors kept when the model runs in debug mode (flag KEEP_INTERMEDIATE_TENSORS is true); see the combined sketch under getIntermediateTensors below.

        Returns void

      • execute(inputs: Tensor<Rank> | NamedTensorMap | Tensor<Rank>[], outputs?: string | string[]): Tensor<Rank> | Tensor<Rank>[]
      • Executes inference for the model for the given input tensors.

        Parameters

        • inputs: Tensor<Rank> | NamedTensorMap | Tensor<Rank>[]

          A tensor, tensor array, or tensor map of inputs for the model, keyed by the input node names.

        • Optional outputs: string | string[]

          Output node name(s) from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.

        Returns Tensor<Rank> | Tensor<Rank>[]

        A single tensor if a single output is requested, or if no outputs are specified and the model has only one default output; otherwise a tensor array. The order of the tensor array matches the outputs argument if provided, otherwise the order of the model's outputNodes attribute.
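
        A brief sketch (assuming the MobileNet model loaded in the example under the class description, which has a single default output):

        // Run inference with execute(), requesting the model's default output.
        const input = tf.zeros([1, 224, 224, 3]);
        const output = model.execute(input);
        output.print();
        // To inspect intermediate nodes, pass their names (which must exist in
        // your own model) as the second argument, e.g. model.execute(input, names).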

      • executeAsync(inputs: Tensor<Rank> | NamedTensorMap | Tensor<Rank>[], outputs?: string | string[]): Promise<Tensor<Rank> | Tensor<Rank>[]>
      • Executes inference for the model for the given input tensors asynchronously. Use this method when your model contains control flow ops.

        Parameters

        • inputs: Tensor<Rank> | NamedTensorMap | Tensor<Rank>[]

          A tensor, tensor array, or tensor map of inputs for the model, keyed by the input node names.

        • Optional outputs: string | string[]

          Output node name(s) from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.

        Returns Promise<Tensor<Rank> | Tensor<Rank>[]>

        A Promise of a single tensor if a single output is requested, or if no outputs are specified and the model has only one default output; otherwise a Promise of a tensor array.
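
        A minimal sketch (the input shape is illustrative; the method is otherwise used like execute):

        // executeAsync() must be used for graphs containing control flow ops
        // (loops, conditionals); execute() will generally throw for such models.
        const input = tf.zeros([1, 224, 224, 3]);
        const result = await model.executeAsync(input);
        // `result` is a Tensor or Tensor[] depending on the requested outputs.
        console.log(Array.isArray(result) ? result.length : result.shape);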

      • getIntermediateTensors(): NamedTensorsMap
      • Gets the intermediate tensors kept when the model runs in debug mode (flag KEEP_INTERMEDIATE_TENSORS is true).

        Returns NamedTensorsMap
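
        A hedged sketch of the debugging flow these two methods support (assuming the flag can be toggled through tf.env(), and a model with a single default output):

        // Keep intermediate tensors during execution, inspect them, then free them.
        tf.env().set('KEEP_INTERMEDIATE_TENSORS', true);
        const out = model.execute(tf.zeros([1, 224, 224, 3]));
        const intermediates = model.getIntermediateTensors();  // NamedTensorsMap
        console.log(Object.keys(intermediates));  // names of the kept nodes
        model.disposeIntermediateTensors();       // release them when done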

      • load(): UrlIOHandler<ModelURL> extends IOHandlerSync ? boolean : Promise<boolean>
      • Loads the model and weight files, constructs the in-memory weight map, and compiles the inference graph.

        Returns UrlIOHandler<ModelURL> extends IOHandlerSync ? boolean : Promise<boolean>

      • loadSync(artifacts: ModelArtifacts): boolean
      • Synchronously constructs the in-memory weight map and compiles the inference graph. Also initializes any hash tables.

        Parameters

        • artifacts: ModelArtifacts

        Returns boolean

      • predict(inputs: Tensor<Rank> | NamedTensorMap | Tensor<Rank>[], config?: ModelPredictConfig): Tensor<Rank> | NamedTensorMap | Tensor<Rank>[]
      • Executes inference for the given input tensors.

        Parameters

        • inputs: Tensor<Rank> | NamedTensorMap | Tensor<Rank>[]
        • Optional config: ModelPredictConfig

          Prediction configuration for specifying the batch size and output node names. The batch size option is currently ignored for graph models.

        Returns Tensor<Rank> | NamedTensorMap | Tensor<Rank>[]

        Inference result tensors. The output is a single tf.Tensor if the model has a single output node; otherwise a Tensor[] or NamedTensorMap is returned for a model with multiple outputs.
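
        A brief sketch of calling predict with a NamedTensorMap (a single-input model is assumed; the input node name is looked up from the model itself rather than hard-coded, since it varies per model):

        // Feed inputs as a map keyed by the model's input node names.
        const input = tf.zeros([1, 224, 224, 3]);            // illustrative shape
        const namedInputs = {[model.inputNodes[0]]: input};  // key taken from the model
        const result = model.predict(namedInputs);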

      • save(handlerOrURL: string | IOHandler, config?: SaveConfig): Promise<SaveResult>
      • Saves the configuration and/or weights of the GraphModel.

        An IOHandler is an object that has a save method of the proper signature defined. The save method manages the storing or transmission of serialized data ("artifacts") that represent the model's topology and weights onto or via a specific medium, such as file downloads, local storage, IndexedDB in the web browser and HTTP requests to a server. TensorFlow.js provides IOHandler implementations for a number of frequently used saving mediums, such as tf.io.browserDownloads and tf.io.browserLocalStorage. See tf.io for more details.

        This method also allows you to refer to certain types of IOHandlers as URL-like string shortcuts, such as 'localstorage://' and 'indexeddb://'.

        Example 1: Save model's topology and weights to browser local storage; then load it back.

        // Load a pretrained MobileNet from a URL and run a sanity-check prediction.
        const modelUrl =
            'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
        const model = await tf.loadGraphModel(modelUrl);
        const zeros = tf.zeros([1, 224, 224, 3]);
        model.predict(zeros).print();

        // Save the model's topology and weights to browser local storage.
        const saveResults = await model.save('localstorage://my-model-1');

        // Load it back and verify it produces the same prediction.
        const loadedModel = await tf.loadGraphModel('localstorage://my-model-1');
        console.log('Prediction from loaded model:');
        loadedModel.predict(zeros).print();

        Parameters

        • handlerOrURL: string | IOHandler

          An instance of IOHandler or a URL-like, scheme-based string shortcut for IOHandler.

        • Optional config: SaveConfig

          Options for saving the model.

        Returns Promise<SaveResult>

        A Promise of SaveResult, which summarizes the result of the saving, such as byte sizes of the saved artifacts for the model's topology and weight values.
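
        Similarly, the 'indexeddb://' shortcut mentioned above can be used in place of 'localstorage://' (a minimal sketch):

        // Save to IndexedDB instead of local storage, then load it back.
        const saveResults = await model.save('indexeddb://my-model-1');
        const reloaded = await tf.loadGraphModel('indexeddb://my-model-1');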