Perform model loading and inference concurrently or sequentially
default: true
Backend used for TFJS operations; valid built-in backends are:
browser: cpu, wasm, webgl, humangl, webgpu
nodejs: cpu, wasm, tensorflow
default: humangl for browser and tensorflow for nodejs (see the backend sketch below)
Body config BodyConfig
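As an illustration of the backend option above, a minimal sketch, assuming the library is imported from the @vladmandic/human package and that its constructor accepts a partial configuration object; the property name `backend` is inferred from the description above and `wasm` is just one of the listed choices:

```ts
import { Human } from '@vladmandic/human';

// 'backend' is the assumed property name for the TFJS backend selection
const human = new Human({ backend: 'wasm' });

await human.load(); // load the configured models using the selected backend
```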
Cache models in IndexedDB on first successful load
default: true if IndexedDB is available (browsers), false if it is not (nodejs)
Cache sensitivity
default: 0.7
Perform immediate garbage collection on deallocated tensors instead of caching them
Print debug statements to console
default: true
Face config FaceConfig
Filter config FilterConfig
Gesture config GestureConfig
Hand config HandConfig
Base model path (typically starting with file://, http:// or https://) for all models
default: ../models/ for browsers and file://models/ for nodejs
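A short sketch of overriding the model location, for example when models are bundled with a nodejs application; the property name `modelBasePath` is assumed from the description above, and the path shown is the documented nodejs default:

```ts
import { Human } from '@vladmandic/human';

// 'modelBasePath' is the assumed property name for the base model path
const human = new Human({ modelBasePath: 'file://models/' });
```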
Object config ObjectConfig
Segmentation config SegmentationConfig
Internal Variable
What to use for human.warmup()
used by webgl, humangl and webgpu backends
default: full
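A sketch of running warmup explicitly; human.warmup() is the method referenced above, 'full' is the documented default, and `warmup` is the assumed property name:

```ts
import { Human } from '@vladmandic/human';

const human = new Human({ warmup: 'full' }); // 'warmup' is the assumed property name

await human.load();   // load models first
await human.warmup(); // pre-run them so the first real inference call is faster
```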
Path to *.wasm files if backend is set to wasm
default: auto-detects to link to CDN jsdelivr when running in browser
Force WASM loader to use platform fetch
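A sketch of a wasm-backend setup where the *.wasm binaries are fetched from an explicit URL rather than the auto-detected CDN location; the property names `wasmPath` and `wasmPlatformFetch`, as well as the URL shown, are assumptions for illustration:

```ts
import { Human } from '@vladmandic/human';

const human = new Human({
  backend: 'wasm',
  // assumed property name; where the TFJS wasm backend should look for its *.wasm files
  wasmPath: 'https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm/dist/',
  // assumed property name; when true, forces the loader to use the platform fetch implementation
  wasmPlatformFetch: false,
});
```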
Configuration interface definition for the Human library; contains all configurable parameters
Defaults: config
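Putting the parameters together, a hedged sketch of a partial configuration; the property names and the exported Config type are inferred from the descriptions above, and every value shown matches a documented default:

```ts
import { Human } from '@vladmandic/human';
import type { Config } from '@vladmandic/human';

const config: Partial<Config> = {
  async: true,                 // load and run models concurrently
  backend: 'humangl',          // documented browser default
  debug: true,                 // print debug statements to console
  cacheSensitivity: 0.7,       // documented default
  modelBasePath: '../models/', // documented browser default
  warmup: 'full',              // documented default for human.warmup()
};

const human = new Human(config);
await human.load();

const video = document.querySelector('video') as HTMLVideoElement;
const result = await human.detect(video); // run detection with the configured models
console.log(result);
```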