async: Perform model loading and inference concurrently or sequentially
default: true
backend: Backend used for TFJS operations
valid built-in backends:
  browser: cpu, wasm, webgl, humangl, webgpu
  nodejs: cpu, wasm, tensorflow
default: humangl for browser and tensorflow for nodejs

cacheSensitivity: Cache sensitivity; how much the input must change before cached results are invalidated (set to 0 to disable caching)
default: 0.7
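A minimal sketch of how the documented per-environment backend defaults could be selected at runtime. The union types and the `defaultBackend` helper below are local illustrations for this sketch, not the library's actual typings:

```typescript
// Pick a TFJS backend by runtime environment, following the documented
// defaults: humangl in browsers, tensorflow under nodejs.
type BrowserBackend = 'cpu' | 'wasm' | 'webgl' | 'humangl' | 'webgpu';
type NodeBackend = 'cpu' | 'wasm' | 'tensorflow';

function defaultBackend(): BrowserBackend | NodeBackend {
  // globalThis is used so this compiles without the DOM type library
  const isBrowser = typeof (globalThis as any).window !== 'undefined';
  return isBrowser ? 'humangl' : 'tensorflow';
}

console.log(defaultBackend());
```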
deallocate: Perform immediate garbage collection on deallocated tensors instead of caching them
debug: Print debug statements to console
default: true
modelBasePath: Base model path (typically starting with file://, http:// or https://) for all models
default: ../models/ for browsers and file://models/ for nodejs
Internal variable, not intended to be set by the user
warmup: What to use for human.warmup()
warmup is only valid for webgl, humangl and webgpu backends
default: full
wasmPath: Path to *.wasm files if backend is set to wasm
default: auto-detects to link to jsdelivr CDN when running in browser
Config: Configuration interface definition for the Human library
Contains all configurable parameters
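As an illustrative sketch, the options documented above can be gathered into a single config object. The `DocumentedConfig` interface below is a local stand-in that mirrors only the fields described here, not the library's full Config interface (which also carries per-model sub-configurations); the default for `deallocate` is an assumption, since this section does not state it:

```typescript
// Local stand-in mirroring only the fields documented in this section.
interface DocumentedConfig {
  async: boolean;           // concurrent vs sequential model loading/inference
  backend: string;          // e.g. 'humangl' (browser) or 'tensorflow' (nodejs)
  cacheSensitivity: number; // 0 disables result caching
  deallocate: boolean;      // immediate GC of tensors instead of caching them
  debug: boolean;           // print debug statements to console
  modelBasePath: string;    // file://, http:// or https:// base path for all models
  warmup: string;           // what human.warmup() should run
  wasmPath?: string;        // only relevant when backend is 'wasm'
}

// Browser defaults as documented above; deallocate: false is an assumed default.
const browserDefaults: DocumentedConfig = {
  async: true,
  backend: 'humangl',
  cacheSensitivity: 0.7,
  deallocate: false,
  debug: true,
  modelBasePath: '../models/',
  warmup: 'full',
};

console.log(`backend=${browserDefaults.backend} warmup=${browserDefaults.warmup}`);
```

Under nodejs, the documented defaults differ: `backend` becomes 'tensorflow' and `modelBasePath` becomes 'file://models/'.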