Interface Config

Configuration interface definition for Human library

Contains all configurable parameters
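
Any subset of these properties can be supplied as a plain configuration object; anything left out falls back to the library defaults. A minimal sketch, assuming the Human class and the Config type are exported by the @vladmandic/human package and that the constructor accepts a partial configuration:

```ts
import { Human, Config } from '@vladmandic/human';

// override only the properties that should differ from the library defaults
const config: Partial<Config> = {
  backend: 'webgl',            // one of the allowed backend values listed below
  modelBasePath: '../models/', // base path prepended to every relative modelPath
  async: true,                 // load models and run inference concurrently
  debug: false,                // do not print debug statements to the console
};

const human = new Human(config);
// const result = await human.detect(imageOrVideoElement);
```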

Hierarchy

  • Config

Index

Properties

async

async: boolean

Perform model loading and inference concurrently or sequentially

backend

backend: null | "" | "cpu" | "wasm" | "webgl" | "humangl" | "tensorflow" | "webgpu"

Backend used for TFJS operations

body

body: { enabled: boolean; maxDetected: number; minConfidence: number; modelPath: string; skipFrames: number }

Controls and configures all body detection specific options

  • enabled: true/false
  • modelPath: body pose model, can be absolute path or relative to modelBasePath
  • minConfidence: threshold for discarding a prediction
  • maxDetected: maximum number of people detected in the input, should be set to the minimum number for performance

Type declaration

  • enabled: boolean
  • maxDetected: number
  • minConfidence: number
  • modelPath: string
  • skipFrames: number
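
As a sketch, a body-only override for a single person might look like this; the model file name is illustrative, not an assertion of the actual default:

```ts
import type { Config } from '@vladmandic/human';

const bodyConfig: Partial<Config> = {
  body: {
    enabled: true,
    modelPath: 'movenet-lightning.json', // illustrative name; resolved relative to modelBasePath
    minConfidence: 0.3, // discard predictions scoring below 0.3
    maxDetected: 1,     // expect at most one person; lower values are faster
    skipFrames: 15,     // reuse cached results for up to 15 frames between model runs
  },
};
```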

cacheSensitivity

cacheSensitivity: number

Cache sensitivity

  • values 0..1 where 0.01 means reset cache if input changed more than 1%
  • set to 0 to disable caching
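
For example (values are illustrative):

```ts
import type { Config } from '@vladmandic/human';

// reset cached results whenever the input changes by more than 5%
const cachingConfig: Partial<Config> = { cacheSensitivity: 0.05 };

// or disable input caching entirely
const noCachingConfig: Partial<Config> = { cacheSensitivity: 0 };
```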

debug

debug: boolean

Print debug statements to console

face

face: { description: { enabled: boolean; minConfidence: number; modelPath: string; skipFrames: number }; detector: { iouThreshold: number; maxDetected: number; minConfidence: number; modelPath: string; return: boolean; rotation: boolean; skipFrames: number }; emotion: { enabled: boolean; minConfidence: number; modelPath: string; skipFrames: number }; enabled: boolean; iris: { enabled: boolean; modelPath: string }; mesh: { enabled: boolean; modelPath: string } }

Controls and configures all face-specific options:

  • face detection, face mesh detection, age, gender, emotion detection and face description

Parameters:

  • enabled: true/false
  • modelPath: path for each of the face models
  • minConfidence: threshold for discarding a prediction
  • iouThreshold: amount of overlap between two detected objects before one object is removed
  • maxDetected: maximum number of faces detected in the input, should be set to the minimum number for performance
  • rotation: use calculated rotated face image or just box with rotation as-is, false means higher performance, but incorrect mesh mapping on higher face angles
  • return: return extracted face as tensor for further user processing, in which case the user is responsible for manually disposing the tensor

Type declaration

  • description: { enabled: boolean; minConfidence: number; modelPath: string; skipFrames: number }
    • enabled: boolean
    • minConfidence: number
    • modelPath: string
    • skipFrames: number
  • detector: { iouThreshold: number; maxDetected: number; minConfidence: number; modelPath: string; return: boolean; rotation: boolean; skipFrames: number }
    • iouThreshold: number
    • maxDetected: number
    • minConfidence: number
    • modelPath: string
    • return: boolean
    • rotation: boolean
    • skipFrames: number
  • emotion: { enabled: boolean; minConfidence: number; modelPath: string; skipFrames: number }
    • enabled: boolean
    • minConfidence: number
    • modelPath: string
    • skipFrames: number
  • enabled: boolean
  • iris: { enabled: boolean; modelPath: string }
    • enabled: boolean
    • modelPath: string
  • mesh: { enabled: boolean; modelPath: string }
    • enabled: boolean
    • modelPath: string
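
A sketch of a face configuration that keeps detection and mesh enabled but turns off the heavier sub-modules; model file names are illustrative:

```ts
import type { Config } from '@vladmandic/human';

const faceConfig: Partial<Config> = {
  face: {
    enabled: true,
    detector: {
      modelPath: 'blazeface.json', // illustrative; relative to modelBasePath
      rotation: false,   // faster, but mesh mapping degrades at larger face angles
      maxDetected: 1,    // expect a single face
      minConfidence: 0.2,
      iouThreshold: 0.1,
      skipFrames: 15,
      return: false,     // do not return the extracted face tensor
    },
    mesh: { enabled: true, modelPath: 'facemesh.json' },
    iris: { enabled: false, modelPath: 'iris.json' },
    description: { enabled: false, modelPath: 'faceres.json', minConfidence: 0.1, skipFrames: 15 },
    emotion: { enabled: false, modelPath: 'emotion.json', minConfidence: 0.1, skipFrames: 15 },
  },
};
```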

filter

filter: { blur: number; brightness: number; contrast: number; enabled: boolean; flip: boolean; height: number; hue: number; kodachrome: boolean; negative: boolean; pixelate: number; polaroid: boolean; return: boolean; saturation: number; sepia: boolean; sharpness: number; technicolor: boolean; vintage: boolean; width: number }

Run input through image filters before inference

  • image filters run with near-zero latency as they are executed on the GPU

Type declaration

  • blur: number

    Range: 0 (no blur) to N (blur radius in pixels)

  • brightness: number

    Range: -1 (darken) to 1 (lighten)

  • contrast: number

    Range: -1 (reduce contrast) to 1 (increase contrast)

  • enabled: boolean
  • flip: boolean

    Flip input as mirror image

  • height: number

    Resize input height

    • if both width and height are set to 0, there is no resizing
    • if just one is set, second one is scaled automatically
    • if both are set, values are used as-is
  • hue: number

    Range: 0 (no change) to 360 (hue rotation in degrees)

  • kodachrome: boolean

    Image kodachrome colors

  • negative: boolean

    Image negative

  • pixelate: number

    Range: 0 (no pixelate) to N (number of pixels to pixelate)

  • polaroid: boolean

    Image polaroid camera effect

  • return: boolean

    Return processed canvas imagedata in result

  • saturation: number

    Range: -1 (reduce saturation) to 1 (increase saturation)

  • sepia: boolean

    Image sepia colors

  • sharpness: number

    Range: 0 (no sharpening) to 1 (maximum sharpening)

  • technicolor: boolean

    Image technicolor colors

  • vintage: boolean

    Image vintage colors

  • width: number

    Resize input width

    • if both width and height are set to 0, there is no resizing
    • if just one is set, second one is scaled automatically
    • if both are set, values are used as-is
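
For example, a filter configuration that mirrors the input, resizes it to a fixed width and applies mild adjustments; the type above declares every property, so the sketch spells them all out with illustrative values:

```ts
import type { Config } from '@vladmandic/human';

const filterConfig: Partial<Config> = {
  filter: {
    enabled: true,
    flip: true,        // mirror the input
    width: 640,        // resize width; height is scaled automatically when left at 0
    height: 0,
    brightness: 0.1,   // slightly lighten
    contrast: 0.1,     // slightly increase contrast
    saturation: 0,
    sharpness: 0,
    blur: 0,
    hue: 0,
    pixelate: 0,
    negative: false,
    sepia: false,
    vintage: false,
    kodachrome: false,
    technicolor: false,
    polaroid: false,
    return: true,      // include the processed canvas image data in the result
  },
};
```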

gesture

gesture: { enabled: boolean }

Controls gesture detection

Type declaration

  • enabled: boolean

hand

hand: { detector: { modelPath: string }; enabled: boolean; iouThreshold: number; landmarks: boolean; maxDetected: number; minConfidence: number; rotation: boolean; skeleton: { modelPath: string }; skipFrames: number }

Controls and configures all hand detection specific options

  • enabled: true/false
  • landmarks: detect hand landmarks or just hand boundary box
  • modelPath: paths for hand detector and hand skeleton models, can be absolute path or relative to modelBasePath
  • minConfidence: threshold for discarding a prediction
  • iouThreshold: amount of overlap between two detected objects before one object is removed
  • maxDetected: maximum number of hands detected in the input, should be set to the minimum number for performance
  • rotation: use best-guess rotated hand image or just box with rotation as-is, false means higher performance, but incorrect finger mapping if hand is inverted

Type declaration

  • detector: { modelPath: string }
    • modelPath: string
  • enabled: boolean
  • iouThreshold: number
  • landmarks: boolean
  • maxDetected: number
  • minConfidence: number
  • rotation: boolean
  • skeleton: { modelPath: string }
    • modelPath: string
  • skipFrames: number
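
A sketch of a hand configuration with landmark detection enabled; model file names are illustrative:

```ts
import type { Config } from '@vladmandic/human';

const handConfig: Partial<Config> = {
  hand: {
    enabled: true,
    landmarks: true,   // detect finger landmarks, not just the hand boundary box
    rotation: false,   // faster, but finger mapping degrades if the hand is inverted
    maxDetected: 2,    // expect at most two hands
    minConfidence: 0.5,
    iouThreshold: 0.2,
    skipFrames: 15,
    detector: { modelPath: 'handdetect.json' },   // illustrative file names,
    skeleton: { modelPath: 'handskeleton.json' }, // resolved relative to modelBasePath
  },
};
```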

modelBasePath

modelBasePath: string

Base model path (typically starting with file://, http:// or https://) for all models

  • individual modelPath values are relative to this path
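
For example, with a placeholder base URL, every relative modelPath is resolved against it:

```ts
import type { Config } from '@vladmandic/human';

const pathConfig: Partial<Config> = {
  // placeholder URL; every relative modelPath is resolved against this base
  modelBasePath: 'https://example.com/human/models/',
};
// e.g. a body.modelPath of 'movenet-lightning.json' would be loaded from
// 'https://example.com/human/models/movenet-lightning.json'
```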

object

object: { enabled: boolean; iouThreshold: number; maxDetected: number; minConfidence: number; modelPath: string; skipFrames: number }

Controls and configures all object detection specific options

  • enabled: true/false
  • modelPath: object detection model, can be absolute path or relative to modelBasePath
  • minConfidence: minimum score that detection must have to return as valid object
  • iouThreshold: amount of overlap between two detected objects before one object is removed
  • maxDetected: maximum number of detections to return

Type declaration

  • enabled: boolean
  • iouThreshold: number
  • maxDetected: number
  • minConfidence: number
  • modelPath: string
  • skipFrames: number
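
A sketch of an object detection configuration; the model file name is illustrative:

```ts
import type { Config } from '@vladmandic/human';

const objectConfig: Partial<Config> = {
  object: {
    enabled: true,
    modelPath: 'nanodet.json', // illustrative; relative to modelBasePath
    minConfidence: 0.2,  // minimum score for a detection to be returned
    iouThreshold: 0.4,   // overlap above which the lower-scoring object is dropped
    maxDetected: 10,     // cap on returned detections
    skipFrames: 15,
  },
};
```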

segmentation

segmentation: { enabled: boolean; modelPath: string }

Controls and configures the body segmentation module, which removes the background from an input containing a person. If segmentation is enabled, it runs as a preprocessing task before any other model; alternatively, leave it disabled and use it on-demand via the human.segmentation method, which can remove the background or replace it with a user-provided background.

  • enabled: true/false
  • modelPath: body segmentation model, can be absolute path or relative to modelBasePath

Type declaration

  • enabled: boolean
  • modelPath: string
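
A sketch of the on-demand usage described above, assuming human.segmentation takes the input and an optional replacement background; the model file name is illustrative:

```ts
import { Human } from '@vladmandic/human';

const human = new Human({
  segmentation: { enabled: false, modelPath: 'selfie.json' }, // keep it out of the main pipeline
});

// later, run segmentation only when needed, e.g. to swap in a new background
// const output = await human.segmentation(inputCanvas, backgroundImage);
```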

skipFrame

skipFrame: boolean

Internal variable

  • set by the library, based on the cacheSensitivity analysis of the input, to indicate whether the current frame can be skipped

warmup

warmup: "none" | "face" | "full" | "body"

What to use for human.warmup()

  • warmup pre-initializes all models for faster inference but can take significant time on startup
  • only used for webgl and humangl backends
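
For example, to warm up using a face sample on the webgl backend:

```ts
import { Human } from '@vladmandic/human';

const human = new Human({ warmup: 'face', backend: 'webgl' });

// pay the model initialization cost once at startup instead of on the first detect call
// await human.warmup();
```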

wasmPath

wasmPath: string

Path to *.wasm files if backend is set to wasm
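
For example, when selecting the wasm backend; the URL is a placeholder for wherever the TFJS *.wasm binaries are hosted:

```ts
import { Human } from '@vladmandic/human';

const human = new Human({
  backend: 'wasm',
  // placeholder location; must contain the tfjs-backend-wasm *.wasm files
  wasmPath: 'https://example.com/tfjs/wasm/',
});
```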