
Configuration interface definition for the Human library

Contains all configurable parameters
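A minimal construction sketch, assuming the library is published as @vladmandic/human and that the Human class accepts a partial Config in its constructor; the import path and the load() and detect() calls below are assumptions based on typical usage and are not defined on this page:

```ts
import { Human, Config } from '@vladmandic/human'; // assumed package name and exports

// Only the properties you want to override need to be supplied;
// everything else keeps its documented default.
const config: Partial<Config> = {
  debug: false,                // silence console debug statements (default: true)
  async: true,                 // load models and run inference concurrently (default: true)
  modelBasePath: '../models/', // documented browser default, shown explicitly
};

const human = new Human(config);

async function run(input: HTMLVideoElement): Promise<void> {
  await human.load();                       // load models listed in the active config
  const result = await human.detect(input); // run detection using that config
  console.log(result);
}
```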

Hierarchy

  • Config

Index

Properties

async

async: boolean

Whether to perform model loading and inference concurrently (true) or sequentially (false)

default: true

backend

backend: "" | "cpu" | "wasm" | "webgl" | "humangl" | "tensorflow" | "webgpu"

Backend used for TFJS operations. Valid built-in backends are listed below; see the selection sketch after the list.

  • Browser: cpu, wasm, webgl, humangl, webgpu
  • NodeJS: cpu, wasm, tensorflow

default: humangl for browser and tensorflow for nodejs
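A hedged illustration of the browser/NodeJS split above; the typeof window environment check is an assumption used for the sketch, not part of the library:

```ts
import type { Config } from '@vladmandic/human'; // assumed type export

// Pick a backend that mirrors the documented defaults for each runtime.
const isBrowser = typeof window !== 'undefined';

const config: Partial<Config> = {
  backend: isBrowser ? 'humangl' : 'tensorflow',
};
```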

body

body: Partial<BodyConfig>

cacheSensitivity

cacheSensitivity: number

Cache sensitivity

  • values 0..1 where 0.01 means reset cache if input changed more than 1%
  • set to 0 to disable caching

default: 0.7
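A hedged sketch of the two settings described above; the split between a video config and a still-image config is illustrative:

```ts
import type { Config } from '@vladmandic/human'; // assumed type export

// Video streams: keep the documented default so cached results are reused
// until the input changes by more than the threshold.
const videoConfig: Partial<Config> = { cacheSensitivity: 0.7 };

// Unrelated still images: disable caching so every input is fully analyzed.
const imageConfig: Partial<Config> = { cacheSensitivity: 0 };
```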

deallocate

deallocate: boolean

Perform immediate garbage collection on deallocated tensors instead of caching them

debug

debug: boolean

Print debug statements to console

default: true

face

face: Partial<FaceConfig>

filter

filter: Partial<FilterConfig>

gesture

gesture: Partial<GestureConfig>

hand

hand: Partial<HandConfig>
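The per-module configs (body, face, gesture, hand, and the object and segmentation entries further down) can be partially overridden in the same way as top-level properties. The enabled flag in the sketch below is an assumption about the shape of those sub-configs and is not defined on this page:

```ts
import type { Config } from '@vladmandic/human'; // assumed type export

// Sketch: turn individual detection modules on or off.
// The `enabled` property is assumed to exist on each sub-config.
const config: Partial<Config> = {
  face: { enabled: true },
  body: { enabled: true },
  hand: { enabled: false },
  gesture: { enabled: false },
};
```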

modelBasePath

modelBasePath: string

Base model path (typically starting with file://, http:// or https://) for all models

  • individual modelPath values are relative to this path

default: ../models/ for browsers and file://models/ for nodejs
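A hedged sketch of serving models from your own host instead of the defaults; the /assets/human-models/ path is a placeholder:

```ts
import type { Config } from '@vladmandic/human'; // assumed type export

const config: Partial<Config> = {
  // Individual modelPath values in the sub-configs resolve relative to this base.
  modelBasePath: '/assets/human-models/', // placeholder path for your own deployment
};
```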

object

object: Partial<ObjectConfig>

segmentation

segmentation: Partial<SegmentationConfig>

skipAllowed

skipAllowed: boolean

Internal Variable

warmup

warmup: "" | "face" | "body" | "none" | "full"

What to use for human.warmup()

  • warmup pre-initializes all models for faster inference but can take significant time on startup
  • used by webgl, humangl and webgpu backends

default: full
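A hedged sketch of limiting warmup to the face models and triggering it explicitly; the constructor and load() call are assumptions about typical usage, while human.warmup() is the method referenced above:

```ts
import { Human } from '@vladmandic/human'; // assumed package name and export

const human = new Human({ warmup: 'face' }); // warm up face models only instead of 'full'

async function init(): Promise<void> {
  await human.load();   // load the configured models
  await human.warmup(); // runs one inference pass so later calls start faster
}
```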

wasmPath

wasmPath: string

Path to *.wasm files if backend is set to wasm

default: auto-detected; links to the jsdelivr CDN when running in a browser
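A hedged sketch of combining the wasm backend with an explicit wasmPath; the jsdelivr URL is illustrative and should be pinned to the tfjs-backend-wasm version actually installed:

```ts
import type { Config } from '@vladmandic/human'; // assumed type export

const config: Partial<Config> = {
  backend: 'wasm',
  // Illustrative CDN location of the TFJS *.wasm binaries; pin to your tfjs version.
  wasmPath: 'https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm/dist/',
};
```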