
Human library main class

All methods and properties are available only as members of Human class

  • Configuration object definition: Config
  • Results object definition: Result
  • Possible inputs: Input
param userConfig: Config

returns: instance

Hierarchy

  • Human

Index

Constructors

constructor

  • Constructor for the Human library that is further used for all operations

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Human

    instance
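A minimal sketch of a configuration object that could be passed to the constructor. The keys shown (`backend`, `modelBasePath`) are illustrative assumptions, not a definitive schema; consult the Config definition for the actual options.

```typescript
// Plausible partial configuration for the Human constructor (keys are assumptions)
const userConfig = {
  backend: 'webgl',            // which TFJS backend to use (assumed key)
  modelBasePath: '../models/', // where model files are served from (assumed key)
};
// As documented, the constructor accepts an optional Partial<Config>:
// const human = new Human(userConfig);
```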

Properties

config

config: Config

Current configuration

draw

draw: { all: any; body: any; canvas: any; face: any; gesture: any; hand: any; object: any; options: DrawOptions; person: any }

Draw helper classes that can draw detected objects on a canvas using the specified draw options

  • options: DrawOptions global settings for all draw operations, can be overridden for each draw method
  • face: draw detected faces
  • body: draw detected people and body parts
  • hand: draw detected hands and hand parts
  • canvas: draw processed canvas which is a processed copy of the input
  • all: meta-function that performs: canvas, face, body, hand

Type declaration

  • all: any
  • body: any
  • canvas: any
  • face: any
  • gesture: any
  • hand: any
  • object: any
  • options: DrawOptions
  • person: any

env

env: Env

Object containing environment information used for diagnostics

events

events: EventTarget

Container for events dispatched by Human

Possible events:

  • create: triggered when Human object is instantiated
  • load: triggered when models are loaded (explicitly or on-demand)
  • image: triggered when input image is processed
  • result: triggered when detection is complete
  • warmup: triggered when warmup is complete
  • error: triggered on some errors
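Since `events` is a standard EventTarget, the event names above can be subscribed to with `addEventListener`. A self-contained sketch using a plain EventTarget stand-in (not an actual Human instance):

```typescript
// Stand-in EventTarget demonstrating subscription to the documented event names
const events = new EventTarget(); // human.events exposes the same interface
const seen: string[] = [];
for (const name of ['create', 'load', 'image', 'result', 'warmup', 'error']) {
  events.addEventListener(name, (e: Event) => { seen.push(e.type); });
}
// Simulate the library dispatching two of its documented events
events.dispatchEvent(new Event('load'));
events.dispatchEvent(new Event('result'));
// seen is now ['load', 'result']
```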

faceTriangulation

faceTriangulation: number[]

Reference face triangulation array of 468 points, used for triangle references between points

faceUVMap

faceUVMap: number[][]

Reference UV map of 468 values, used for 3D mapping of the face mesh

gl

gl: Record<string, unknown>

WebGL debug info

performance

performance: Record<string, number>

Performance object that contains values for all recently performed operations

process

process: { canvas: null | HTMLCanvasElement | OffscreenCanvas; tensor: null | Tensor<Rank> }

Currently processed image tensor and canvas

Type declaration

  • canvas: null | HTMLCanvasElement | OffscreenCanvas
  • tensor: null | Tensor<Rank>

result

result: Result

Last known result of detect run

  • Can be accessed anytime after initial detection

state

state: string

Current state of Human library

  • Can be polled to determine which operations are currently executing
  • Progresses through: 'config', 'check', 'backend', 'load', 'run:', 'idle'

version

version: string

Current version of Human library in semver format

Methods

detect

enhance

  • enhance(input: Tensor<Rank>): null | Tensor<Rank>
  • Enhance method performs additional enhancements on a previously detected face image for further processing

    Parameters

    • input: Tensor<Rank>

    Returns null | Tensor<Rank>

    Tensor

image

  • image(input: Input): { canvas: null | HTMLCanvasElement | OffscreenCanvas; tensor: null | Tensor<Rank> }
  • Process input and return canvas and tensor

    Parameters

    Returns { canvas: null | HTMLCanvasElement | OffscreenCanvas; tensor: null | Tensor<Rank> }

    • canvas: null | HTMLCanvasElement | OffscreenCanvas
    • tensor: null | Tensor<Rank>

init

  • init(): Promise<void>
  • Explicit backend initialization

    • Normally done implicitly during initial load phase
    • Call to explicitly register and initialize the TFJS backend without any other operations
    • Use when changing backend during runtime

    Returns Promise<void>

    Promise

load

  • load(userConfig?: Partial<Config>): Promise<void>
  • Load method preloads all configured models on-demand

    • Not explicitly required as any required model is loaded implicitly on its first run

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Promise<void>

    Promise

match

  • match(faceEmbedding: number[], db: { embedding: number[]; name: string; source: string }[], threshold?: number): { embedding: number[]; name: string; similarity: number; source: string }
  • Match method finds the best match between the provided face descriptor and a predefined database of known descriptors

    Parameters

    • faceEmbedding: number[]
    • db: { embedding: number[]; name: string; source: string }[]
    • threshold: number = 0

    Returns { embedding: number[]; name: string; similarity: number; source: string }

    best match

    • embedding: number[]
    • name: string
    • similarity: number
    • source: string
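An illustrative re-implementation of the documented match behavior: scan the database for the entry most similar to the probe descriptor. The distance metric (Euclidean) follows the `similarity` method's documentation, but the distance-to-similarity mapping here is an assumption; the library's internals may differ.

```typescript
// Sketch of best-match search over a descriptor database (illustrative, not the library code)
type DBEntry = { embedding: number[]; name: string; source: string };

function bestMatch(probe: number[], db: DBEntry[]) {
  let best = { embedding: [] as number[], name: '', similarity: 0, source: '' };
  for (const entry of db) {
    // Euclidean distance between probe and candidate descriptor
    const dist = Math.sqrt(entry.embedding.reduce((sum, v, i) => sum + (v - probe[i]) ** 2, 0));
    const similarity = 1 / (1 + dist); // map distance into 0..1 (assumed mapping)
    if (similarity > best.similarity) best = { ...entry, similarity };
  }
  return best;
}

const db: DBEntry[] = [
  { name: 'alice', source: 'alice.jpg', embedding: [0.1, 0.2, 0.3] },
  { name: 'bob', source: 'bob.jpg', embedding: [0.9, 0.8, 0.7] },
];
const result = bestMatch([0.12, 0.21, 0.29], db);
// result.name is 'alice'
```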

next

  • Runs interpolation using the last known result and returns a smoothed result. Interpolation is based on time since the last known result, so it can be called independently

    Parameters

    Returns Result

    result: Result
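A minimal sketch of time-based smoothing like the behavior described above: values are interpolated toward the target in proportion to elapsed time. The half-life constant and linear weighting are assumptions for illustration, not the library's actual interpolation.

```typescript
// Illustrative time-weighted interpolation of a single scalar (e.g. one keypoint coordinate)
function interpolate(prev: number, target: number, elapsedMs: number, fullMs = 100): number {
  // Weight grows with time since the last known result, capped at 1
  const weight = Math.min(1, elapsedMs / fullMs);
  return prev + (target - prev) * weight;
}

interpolate(0, 10, 50);  // halfway in time → 5
interpolate(0, 10, 200); // past the window → 10
```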

reset

  • reset(): void
  • Reset configuration to default values

    Returns void

segmentation

  • segmentation(input: Input, background?: Input): Promise<{ alpha: null | HTMLCanvasElement | OffscreenCanvas; canvas: null | HTMLCanvasElement | OffscreenCanvas; data: null | Uint8ClampedArray }>
  • Segmentation method takes any input and returns processed canvas with body segmentation

    • Optional parameter background is used to fill the background with specific input
    • Segmentation is not triggered as part of detect process

    Returns:

    • data as raw data array with per-pixel segmentation values
    • canvas as a canvas containing the input image filtered with segmentation data and optionally merged with the background image; canvas alpha values are set to segmentation values for easy merging
    • alpha as grayscale canvas that represents segmentation alpha values

    Parameters

    Returns Promise<{ alpha: null | HTMLCanvasElement | OffscreenCanvas; canvas: null | HTMLCanvasElement | OffscreenCanvas; data: null | Uint8ClampedArray }>
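A sketch of how per-pixel segmentation alpha values could be merged with a background, mirroring the described return values (`data` per-pixel values, alpha-based merging). This is an illustration of the merging concept, not the library's implementation.

```typescript
// Alpha-blend foreground pixels over a background using per-pixel segmentation
// values (0..255, as in a Uint8ClampedArray); one channel shown for simplicity
function mergeWithBackground(fg: number[], bg: number[], alpha: number[]): number[] {
  return fg.map((v, i) => Math.round((v * alpha[i] + bg[i] * (255 - alpha[i])) / 255));
}

mergeWithBackground([255], [0], [255]); // fully segmented pixel → [255] (foreground)
mergeWithBackground([255], [0], [0]);   // unsegmented pixel → [0] (background)
```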

similarity

  • similarity(embedding1: number[], embedding2: number[]): number
  • Similarity method calculates similarity between two provided face descriptors (face embeddings)

    • Calculation is based on normalized Minkowski distance between two descriptors
    • Default is Euclidean distance which is Minkowski distance of 2nd order

    Parameters

    • embedding1: number[]
    • embedding2: number[]

    Returns number

    similarity: number
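A sketch of a similarity score based on Minkowski distance, as described above (Euclidean distance is the 2nd-order case). The normalization step is an assumption for illustration; the library's exact normalization may differ.

```typescript
// Minkowski distance of the given order between two equal-length descriptors
function minkowskiDistance(a: number[], b: number[], order = 2): number {
  const sum = a.reduce((acc, v, i) => acc + Math.abs(v - b[i]) ** order, 0);
  return sum ** (1 / order);
}

// Map distance to a similarity score; normalization by descriptor length is an assumption
function descriptorSimilarity(a: number[], b: number[], order = 2): number {
  const dist = minkowskiDistance(a, b, order);
  return Math.max(0, 1 - dist / Math.sqrt(a.length));
}

minkowskiDistance([0, 0], [3, 4]);            // Euclidean: 5
descriptorSimilarity([0, 0, 0], [0, 0, 0]);   // identical descriptors → 1
```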

validate

  • validate(userConfig?: Partial<Config>): { expected?: string; reason: string; where: string }[]
  • Validate current configuration schema

    Parameters

    • Optional userConfig: Partial<Config>

    Returns { expected?: string; reason: string; where: string }[]

warmup

  • warmup(userConfig?: Partial<Config>): Promise<Result | { error: any }>
  • Warmup method pre-initializes all configured models for faster inference

    • Can take significant time on startup
    • Only used for webgl and humangl backends

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Promise<Result | { error: any }>

    result: Result