Interface Config

Configuration interface definition for Human library

Contains all configurable parameters

Hierarchy

  • Config

Index

Properties

async

async: boolean

Perform model loading and inference concurrently or sequentially

backend

backend: null | "" | "cpu" | "wasm" | "webgl" | "humangl" | "tensorflow"

Backend used for TFJS operations
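As an illustration, a small sketch of choosing a value from the documented backend union type; the `pickBackend` helper is hypothetical, not part of the library:

```typescript
// Allowed backend values, as listed in the Config interface
type Backend = null | "" | "cpu" | "wasm" | "webgl" | "humangl" | "tensorflow";

// Hypothetical helper: prefer the native "tensorflow" backend under Node,
// otherwise the browser-oriented "humangl" backend
function pickBackend(isNode: boolean): Backend {
  return isNode ? "tensorflow" : "humangl";
}

const backend: Backend = pickBackend(typeof process !== "undefined");
```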

body

body: { enabled: boolean; maxDetections: number; modelPath: string; nmsRadius: number; scoreThreshold: number }

Controls and configures all body detection specific options

  • enabled: true/false
  • modelPath: path for the body pose model
  • maxDetections: maximum number of people detected in the input, should be set to the minimum number for performance
  • scoreThreshold: threshold for deciding when to remove people based on score in non-maximum suppression
  • nmsRadius: threshold for deciding whether body parts overlap too much in non-maximum suppression

Type declaration

  • enabled: boolean
  • maxDetections: number
  • modelPath: string
  • nmsRadius: number
  • scoreThreshold: number
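A minimal sketch of a body-detection config object mirroring the fields above; the model file name and numeric values are illustrative assumptions, not verified library defaults:

```typescript
// Sketch of a body-detection config matching the documented shape;
// modelPath and thresholds are assumptions for illustration only
const body = {
  enabled: true,
  modelPath: "posenet.json", // assumed file name, joined to modelBasePath
  maxDetections: 1,          // single person: minimum needed, best performance
  scoreThreshold: 0.3,       // discard people below this score in NMS
  nmsRadius: 20,             // overlap radius for body parts in NMS
};
```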

deallocate

deallocate: boolean

Internal: Use aggressive GPU memory deallocator when backend is set to webgl or humangl

debug

debug: boolean

Print debug statements to console

face

face: { description: { enabled: boolean; minConfidence: number; modelPath: string; skipFrames: number }; detector: { iouThreshold: number; maxFaces: number; minConfidence: number; modelPath: string; return: boolean; rotation: boolean; scoreThreshold: number; skipFrames: number; skipInitial: boolean }; emotion: { enabled: boolean; minConfidence: number; modelPath: string; skipFrames: number }; enabled: boolean; iris: { enabled: boolean; modelPath: string }; mesh: { enabled: boolean; modelPath: string } }

Controls and configures all face-specific options:

  • face detection, face mesh detection, age, gender, emotion detection, and face description

Parameters:

  • enabled: true/false
  • modelPath: path for individual face model
  • rotation: use calculated rotated face image or just box with rotation as-is; false means higher performance, but incorrect mesh mapping at higher face angles
  • maxFaces: maximum number of faces detected in the input; should be set to the minimum number needed for performance
  • skipFrames: how many frames to go without re-running the face detector, running only the modified face mesh analysis; only valid if videoOptimized is set to true
  • skipInitial: if previous detection resulted in no faces detected, should skipFrames be reset immediately to force a new detection cycle
  • minConfidence: threshold for discarding a prediction
  • iouThreshold: threshold for deciding whether boxes overlap too much in non-maximum suppression
  • scoreThreshold: threshold for deciding when to remove boxes based on score in non-maximum suppression
  • return: return extracted face as a tensor for further user processing

Type declaration

  • description: { enabled: boolean; minConfidence: number; modelPath: string; skipFrames: number }
    • enabled: boolean
    • minConfidence: number
    • modelPath: string
    • skipFrames: number
  • detector: { iouThreshold: number; maxFaces: number; minConfidence: number; modelPath: string; return: boolean; rotation: boolean; scoreThreshold: number; skipFrames: number; skipInitial: boolean }
    • iouThreshold: number
    • maxFaces: number
    • minConfidence: number
    • modelPath: string
    • return: boolean
    • rotation: boolean
    • scoreThreshold: number
    • skipFrames: number
    • skipInitial: boolean
  • emotion: { enabled: boolean; minConfidence: number; modelPath: string; skipFrames: number }
    • enabled: boolean
    • minConfidence: number
    • modelPath: string
    • skipFrames: number
  • enabled: boolean
  • iris: { enabled: boolean; modelPath: string }
    • enabled: boolean
    • modelPath: string
  • mesh: { enabled: boolean; modelPath: string }
    • enabled: boolean
    • modelPath: string
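To tie the parameters together, a sketch of a face config that enables only the detector and mesh and disables the heavier analyses; all model file names and threshold values are assumptions, not verified defaults:

```typescript
// Sketch of a face config mirroring the documented shape;
// file names and numeric values are illustrative assumptions
const face = {
  enabled: true,
  detector: {
    modelPath: "blazeface.json", // assumed file name
    rotation: false,             // faster, but mesh mapping degrades at high angles
    maxFaces: 1,                 // minimum needed for best performance
    skipFrames: 15,              // re-run full detector every 16th frame (video only)
    skipInitial: false,
    minConfidence: 0.2,
    iouThreshold: 0.1,
    scoreThreshold: 0.2,
    return: false,               // do not return extracted face tensor
  },
  mesh: { enabled: true, modelPath: "facemesh.json" },
  iris: { enabled: false, modelPath: "iris.json" },
  description: { enabled: false, modelPath: "faceres.json", minConfidence: 0.1, skipFrames: 15 },
  emotion: { enabled: false, modelPath: "emotion.json", minConfidence: 0.1, skipFrames: 15 },
};
```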

filter

filter: { blur: number; brightness: number; contrast: number; enabled: boolean; flip: boolean; height: number; hue: number; kodachrome: boolean; negative: boolean; pixelate: number; polaroid: boolean; return: boolean; saturation: number; sepia: boolean; sharpness: number; technicolor: boolean; vintage: boolean; width: number }

Run input through image filters before inference

  • image filters run with near-zero latency as they are executed on the GPU

Type declaration

  • blur: number

    Range: 0 (no blur) to N (blur radius in pixels)

  • brightness: number

    Range: -1 (darken) to 1 (lighten)

  • contrast: number

    Range: -1 (reduce contrast) to 1 (increase contrast)

  • enabled: boolean
  • flip: boolean

    Flip input as mirror image

  • height: number

    Resize input height

    • if both width and height are set to 0, there is no resizing
    • if just one is set, second one is scaled automatically
    • if both are set, values are used as-is
  • hue: number

    Range: 0 (no change) to 360 (hue rotation in degrees)

  • kodachrome: boolean

    Image kodachrome colors

  • negative: boolean

    Image negative

  • pixelate: number

    Range: 0 (no pixelate) to N (number of pixels to pixelate)

  • polaroid: boolean

    Image polaroid camera effect

  • return: boolean

    Return processed canvas imagedata in result

  • saturation: number

    Range: -1 (reduce saturation) to 1 (increase saturation)

  • sepia: boolean

    Image sepia colors

  • sharpness: number

    Range: 0 (no sharpening) to 1 (maximum sharpening)

  • technicolor: boolean

    Image technicolor colors

  • vintage: boolean

    Image vintage colors

  • width: number

    Resize input width

    • if both width and height are set to 0, there is no resizing
    • if just one is set, second one is scaled automatically
    • if both are set, values are used as-is
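A sketch of a filter config illustrating the resize rule above: width is set and height is left at 0, so height is scaled automatically to preserve aspect ratio. All values are illustrative assumptions:

```typescript
// Sketch of a filter config: resize by width only, mirror the input,
// apply mild sharpening, and return the processed canvas in the result
const filter = {
  enabled: true,
  width: 640,      // height left at 0 → scaled automatically
  height: 0,
  flip: true,      // mirror image
  blur: 0,
  brightness: 0,
  contrast: 0,
  hue: 0,
  saturation: 0,
  sharpness: 0.25, // range 0 (none) to 1 (maximum)
  pixelate: 0,
  negative: false,
  sepia: false,
  vintage: false,
  kodachrome: false,
  technicolor: false,
  polaroid: false,
  return: true,    // include processed canvas imagedata in result
};
```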

gesture

gesture: { enabled: boolean }

Controls gesture detection

Type declaration

  • enabled: boolean

hand

hand: { detector: { modelPath: string }; enabled: boolean; iouThreshold: number; landmarks: boolean; maxHands: number; minConfidence: number; rotation: boolean; scoreThreshold: number; skeleton: { modelPath: string }; skipFrames: number; skipInitial: boolean }

Controls and configures all hand detection specific options

  • enabled: true/false
  • modelPath: paths for both hand detector model and hand skeleton model
  • rotation: use best-guess rotated hand image or just box with rotation as-is, false means higher performance, but incorrect finger mapping if hand is inverted
  • skipFrames: how many frames to go without re-running the hand bounding box detector and just run modified hand skeleton detector, only valid if videoOptimized is set to true
  • skipInitial: if previous detection resulted in no hands detected, should skipFrames be reset immediately to force new detection cycle
  • minConfidence: threshold for discarding a prediction
  • iouThreshold: threshold for deciding whether boxes overlap too much in non-maximum suppression
  • scoreThreshold: threshold for deciding when to remove boxes based on score in non-maximum suppression
  • maxHands: maximum number of hands detected in the input, should be set to the minimum number for performance
  • landmarks: detect hand landmarks or just hand boundary box

Type declaration

  • detector: { modelPath: string }
    • modelPath: string
  • enabled: boolean
  • iouThreshold: number
  • landmarks: boolean
  • maxHands: number
  • minConfidence: number
  • rotation: boolean
  • scoreThreshold: number
  • skeleton: { modelPath: string }
    • modelPath: string
  • skipFrames: number
  • skipInitial: boolean
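A sketch of a hand config matching the documented shape; the detector and skeleton model file names are assumptions, not verified defaults:

```typescript
// Sketch of a hand config mirroring the fields above;
// model file names and values are illustrative assumptions
const hand = {
  enabled: true,
  rotation: false,  // faster, but finger mapping degrades on inverted hands
  skipFrames: 12,   // re-run bounding box detector periodically (video only)
  skipInitial: false,
  minConfidence: 0.1,
  iouThreshold: 0.1,
  scoreThreshold: 0.5,
  maxHands: 1,      // minimum needed for best performance
  landmarks: true,  // full skeleton, not just boundary box
  detector: { modelPath: "handdetect.json" },   // assumed file name
  skeleton: { modelPath: "handskeleton.json" }, // assumed file name
};
```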

modelBasePath

modelBasePath: string

Base model path (typically starting with file://, http:// or https://) for all models

  • individual modelPath values are joined to this path
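As a rough illustration of the joining behavior, a hypothetical helper; the library's actual path-resolution logic may differ:

```typescript
// Hedged sketch: join an individual modelPath to modelBasePath,
// normalizing the slash between them; not the library's actual code
function joinModelPath(basePath: string, modelPath: string): string {
  return basePath.replace(/\/$/, "") + "/" + modelPath.replace(/^\//, "");
}

const url = joinModelPath("https://example.com/models/", "blazeface.json");
// → "https://example.com/models/blazeface.json"
```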

object

object: { enabled: boolean; iouThreshold: number; maxResults: number; minConfidence: number; modelPath: string; skipFrames: number }

Controls and configures all object detection specific options

  • minConfidence: minimum score that detection must have to return as valid object
  • iouThreshold: amount of overlap between two detected objects before one object is removed
  • maxResults: maximum number of detections to return
  • skipFrames: run object detection every n input frames, only valid if videoOptimized is set to true

Type declaration

  • enabled: boolean
  • iouThreshold: number
  • maxResults: number
  • minConfidence: number
  • modelPath: string
  • skipFrames: number
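A sketch of an object-detection config matching the documented shape; the model file name and values are illustrative assumptions:

```typescript
// Sketch of an object-detection config mirroring the fields above;
// modelPath and numeric values are assumptions for illustration only
const object = {
  enabled: true,
  modelPath: "nanodet.json", // assumed file name
  minConfidence: 0.2,        // minimum score for a valid object
  iouThreshold: 0.4,         // overlap before one object is removed
  maxResults: 10,            // cap on returned detections
  skipFrames: 13,            // run every n frames (video only)
};
```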

profile

profile: boolean

Collect and print profiling data during inference operations

scoped

scoped: boolean

Internal: Run all inference operations in an explicit local scope to avoid memory leaks

videoOptimized

videoOptimized: boolean

Perform additional optimizations when input is video

  • must be disabled for images
  • automatically disabled for Image, ImageData, ImageBitmap and Tensor inputs
  • skips boundary detection for the number of frames specified by each model's skipFrames setting, while maintaining in-box detection, since detected objects do not change position that quickly
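The skipFrames mechanism behind this can be sketched as a simple frame counter; this hypothetical helper illustrates the idea (re-run the expensive boundary detector only every N+1 frames) and is not the library's actual scheduling code:

```typescript
// Hedged sketch: decide whether the boundary detector runs on this frame,
// given a skipFrames setting; assumes the counter starts at frame 0
function shouldRunDetector(frame: number, skipFrames: number): boolean {
  return skipFrames <= 0 || frame % (skipFrames + 1) === 0;
}
```

With skipFrames of 4, the detector runs on frames 0, 5, 10, …, while the lighter in-box analysis runs on every frame in between.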

warmup

warmup: "none" | "face" | "full" | "body"

What to use for human.warmup()

  • warmup pre-initializes all models for faster inference but can take significant time on startup
  • only used for webgl and humangl backends

wasmPath

wasmPath: string

Path to *.wasm files if backend is set to wasm