Perform model loading and inference concurrently or sequentially
Backend used for TFJS operations. Valid built-in backends are:
- Browser: cpu, wasm, webgl, humangl
- NodeJS: cpu, wasm, tensorflow
Experimental:
- Browser: webgpu - requires a custom build of tfjs-backend-webgpu
Defaults: humangl for browser and tensorflow for nodejs
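For example, a minimal sketch of selecting a backend at initialization, assuming the Human constructor from @vladmandic/human (which accepts a partial configuration object):

```ts
import { Human } from '@vladmandic/human';

// Pick the wasm backend explicitly; omitting `backend` falls back to the
// defaults noted above (humangl in the browser, tensorflow in NodeJS)
const human = new Human({ backend: 'wasm' });
await human.load(); // pre-loads models on the selected backend
```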
Cache sensitivity: a 0..1 value controlling when cached results are reused between inputs; set to 0 to disable caching
Print debug statements to console
Run input through image filters before inference
Base model path (typically starting with file://, http:// or https://) for all models
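As an illustration, a sketch that redirects all model loads to a self-hosted base path; the URL below is a placeholder, not the library default:

```ts
import { Human } from '@vladmandic/human';

// Individual model paths are resolved relative to this base;
// the URL is illustrative only
const human = new Human({ modelBasePath: 'https://example.com/human/models/' });
```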
Internal variable
What to use for human.warmup()
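A sketch of a startup warmup pass; 'full' is assumed here to be one of the accepted warmup mode names:

```ts
import { Human } from '@vladmandic/human';

// Pre-initialize models once so the first real inference is fast;
// 'full' is an assumed mode name, 'none' would skip warmup
const human = new Human({ warmup: 'full' });
await human.warmup(); // runs the configured warmup pass
```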
Path to *.wasm files if backend is set to wasm; defaults to the jsdelivr CDN when running in a browser
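For instance, a sketch that serves the wasm binaries from your own host instead of the CDN; the path is a placeholder:

```ts
import { Human } from '@vladmandic/human';

// The directory below must contain the tfjs *.wasm files;
// '/assets/wasm/' is an illustrative path, not a default
const human = new Human({ backend: 'wasm', wasmPath: '/assets/wasm/' });
```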
Configuration interface definition for the Human library; contains all configurable parameters
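Putting the pieces together, a hedged sketch of a partial configuration; every value below is illustrative rather than a library default, and the option names follow the descriptions above:

```ts
import { Human } from '@vladmandic/human';
import type { Config } from '@vladmandic/human';

// All values are illustrative, not library defaults
const config: Partial<Config> = {
  backend: 'humangl',      // TFJS backend, as described above
  async: true,             // load and run models concurrently
  debug: false,            // suppress debug statements to the console
  cacheSensitivity: 0.7,   // assumed 0..1 scale; 0 disables caching
  modelBasePath: 'https://example.com/human/models/', // placeholder base URL
  warmup: 'full',          // assumed warmup mode name
  wasmPath: '/assets/wasm/', // only used when backend is 'wasm'
};
const human = new Human(config);
await human.load();
```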