
Human library main class

All methods and properties are available only as members of the Human class

  • Configuration object definition: Config
  • Results object definition: Result
  • Possible inputs: Input
param userConfig: Config

returns: instance of Human

Hierarchy

  • Human

Index

Constructors

constructor

  • Constructor for the Human library that is further used for all operations

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Human
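A minimal sketch of a `Partial<Config>` object that could be passed to the constructor. The property names used here (`backend`, `modelBasePath`, `face.enabled`) follow common Human usage but are assumptions; check them against the Config definition:

```typescript
// Hypothetical partial configuration; key names are assumptions
// and should be verified against the Config type.
const userConfig = {
  backend: 'webgl' as const,   // which TFJS backend to register
  modelBasePath: '../models/', // assumed location of model files
  face: { enabled: true },     // enable the face detection pipeline
};

// The constructor accepts Partial<Config>:
//   const human = new Human(userConfig);
console.log(userConfig.backend);
```

Any key omitted from the partial config falls back to the library default.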

Properties

config

config: Config

Current configuration

distance

distance: (descriptor1: Descriptor, descriptor2: Descriptor, options?: { multiplier: number; order: number }) => number = match.distance

Type declaration

    • (descriptor1: Descriptor, descriptor2: Descriptor, options?: { multiplier: number; order: number }): number
    • Calculates distance between two descriptors

      Parameters

      • descriptor1: Descriptor
      • descriptor2: Descriptor
      • options: { multiplier: number; order: number } = ...
        • multiplier: number

          how much to amplify the difference analysis, in the range 1..100

          • default is 20, which normalizes results so that a similarity above 0.5 can be considered a match
        • order: number

          algorithm to use

          • Euclidean distance if order is 2 (default), Minkowski distance algorithm of nth order if order is higher than 2

      Returns number
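The distance calculation described above can be sketched as a plain Minkowski distance, with order 2 being the Euclidean special case. This is an illustrative re-implementation under stated assumptions, not the library's actual code, and it omits the multiplier option:

```typescript
type Descriptor = number[]; // face embedding vector

// Minkowski distance of the given order between two descriptors;
// order 2 reduces to Euclidean distance, as documented above.
function minkowskiDistance(a: Descriptor, b: Descriptor, order = 2): number {
  if (a.length !== b.length) throw new Error('descriptor lengths differ');
  const sum = a.reduce((acc, v, i) => acc + Math.abs(v - b[i]) ** order, 0);
  return sum ** (1 / order);
}

console.log(minkowskiDistance([0, 0], [3, 4])); // Euclidean: 5
```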

draw

draw: { all: (inCanvas: AnyCanvas, result: Result, drawOptions?: Partial<DrawOptions>) => Promise<null | [void, void, void, void, void]>; body: (inCanvas: AnyCanvas, result: BodyResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; canvas: (input: AnyCanvas | HTMLImageElement | HTMLMediaElement | HTMLVideoElement, output: HTMLCanvasElement) => Promise<void>; face: (inCanvas: AnyCanvas, result: FaceResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; gesture: (inCanvas: AnyCanvas, result: GestureResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; hand: (inCanvas: AnyCanvas, result: HandResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; object: (inCanvas: AnyCanvas, result: ObjectResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; options: DrawOptions; person: (inCanvas: AnyCanvas, result: PersonResult[], drawOptions?: Partial<DrawOptions>) => Promise<void> }

Draw helper classes that can draw detected objects on canvas using specified draw options

  • options: global settings for all draw operations; can be overridden for each individual draw method via DrawOptions

Type declaration

env

env: Env

Object containing environment information used for diagnostics

events

events: undefined | EventTarget

Container for events dispatched by Human (of type EventTarget). Possible events:

  • create: triggered when Human object is instantiated
  • load: triggered when models are loaded (explicitly or on-demand)
  • image: triggered when input image is processed
  • result: triggered when detection is complete
  • warmup: triggered when warmup is complete
  • error: triggered on some errors
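Subscribing to these events uses the standard EventTarget API. In the sketch below a plain EventTarget stands in for `human.events` (which may be undefined in some environments), and the dispatches that the library would perform internally are simulated:

```typescript
// A plain EventTarget standing in for human.events.
const events: EventTarget | undefined = new EventTarget();

const seen: string[] = [];
events?.addEventListener('load', () => seen.push('load'));     // models loaded
events?.addEventListener('result', () => seen.push('result')); // detection complete

// Simulated here; the library dispatches these internally:
events?.dispatchEvent(new Event('load'));
events?.dispatchEvent(new Event('result'));
console.log(seen); // ['load', 'result']
```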

faceTriangulation

faceTriangulation: number[]

Reference face triangulation array of 468 points, used for triangle references between points

faceUVMap

faceUVMap: [number, number][]

Reference UV map of 468 values, used for 3D mapping of the face mesh

gl

gl: Record<string, unknown>

WebGL debug info

match

match: (descriptor: Descriptor, descriptors: Descriptor[], options?: { multiplier: number; order: number; threshold: number }) => { distance: number; index: number; similarity: number } = match.match

Type declaration

    • (descriptor: Descriptor, descriptors: Descriptor[], options?: { multiplier: number; order: number; threshold: number }): { distance: number; index: number; similarity: number }
    • Matches given descriptor to a closest entry in array of descriptors

      Parameters

      • descriptor: Descriptor

        face descriptor

      • descriptors: Descriptor[]

        array of face descriptors to compare the given descriptor to

      • options: { multiplier: number; order: number; threshold: number } = ...

      Returns { distance: number; index: number; similarity: number }

      • index: array index where the best match was found, or -1 if no match
      • distance: calculated distance between the given descriptor and the best match
      • similarity: calculated normalized similarity between the given descriptor and the best match
      • distance: number
      • index: number
      • similarity: number
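The matching behavior described above can be sketched as a nearest-neighbor search over the descriptor array. This is an illustrative re-implementation, not the library's code: it uses plain Euclidean distance, returns index -1 for an empty array, and omits the threshold and multiplier options; the similarity normalization shown is a placeholder, not the library's formula:

```typescript
type Descriptor = number[];

// Find the closest descriptor by Euclidean distance.
function matchDescriptor(descriptor: Descriptor, descriptors: Descriptor[]) {
  let index = -1;
  let distance = Number.POSITIVE_INFINITY;
  descriptors.forEach((candidate, i) => {
    const d = Math.sqrt(candidate.reduce((acc, v, j) => acc + (v - descriptor[j]) ** 2, 0));
    if (d < distance) { distance = d; index = i; }
  });
  // placeholder 0..1 normalization; the library's formula may differ
  const similarity = index === -1 ? 0 : 1 / (1 + distance);
  return { index, distance, similarity };
}

console.log(matchDescriptor([0, 0], [[5, 0], [1, 0]])); // index 1, distance 1
```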

performance

performance: Record<string, number>

Performance object that contains values for all recently performed operations

process

process: { canvas: null | OffscreenCanvas | HTMLCanvasElement; tensor: null | Tensor<Rank> }

Currently processed image tensor and canvas

Type declaration

  • canvas: null | OffscreenCanvas | HTMLCanvasElement
  • tensor: null | Tensor<Rank>

result

result: Result

Last known result of detect run

  • Can be accessed anytime after initial detection

similarity

similarity: (descriptor1: Descriptor, descriptor2: Descriptor, options?: { multiplier: number; order: number }) => number = match.similarity

Type declaration

    • (descriptor1: Descriptor, descriptor2: Descriptor, options?: { multiplier: number; order: number }): number
    • Calculates normalized similarity between two face descriptors based on their distance

      Parameters

      • descriptor1: Descriptor
      • descriptor2: Descriptor
      • options: { multiplier: number; order: number } = ...
        • multiplier: number

          how much to amplify the difference analysis, in the range 1..100

          • default is 20, which normalizes results so that a similarity above 0.5 can be considered a match
        • order: number

          algorithm to use

          • Euclidean distance if order is 2 (default), Minkowski distance algorithm of nth order if order is higher than 2

      Returns number

      similarity between two face descriptors normalized to 0..1 range where 0 is no similarity and 1 is perfect similarity
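One plausible way to map a distance to the documented 0..1 similarity range, with the multiplier amplifying differences as described above. This formula is a hypothetical sketch for illustration; the library's actual normalization is not shown in this reference and may differ:

```typescript
// Hypothetical normalization: NOT the library's actual formula.
// Amplifies the distance by the multiplier, then clamps the
// complement into the 0..1 similarity range.
function similarityFromDistance(distance: number, multiplier = 20): number {
  return Math.min(1, Math.max(0, 1 - (distance * multiplier) / 100));
}

console.log(similarityFromDistance(0));   // 1 (identical descriptors)
console.log(similarityFromDistance(2.5)); // 0.5 with default multiplier 20
```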

state

state: string

Current state of Human library

  • Can be polled to determine which operations are currently executing
  • Progresses through: 'config', 'check', 'backend', 'load', 'run:', 'idle'
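Polling the state string until it reaches 'idle' can be sketched as below. The getter here simulates `human.state` cycling through the documented phases ('run:face' is a hypothetical example of a 'run:' state):

```typescript
// Simulated state source cycling through the documented phases.
const phases = ['config', 'check', 'backend', 'load', 'run:face', 'idle'];
let tick = 0;
const getState = () => phases[Math.min(tick++, phases.length - 1)];

// Poll until the library reports it is idle.
const observed: string[] = [];
let state = '';
while (state !== 'idle') {
  state = getState();
  observed.push(state);
}
console.log(observed); // ends with 'idle'
```

In real code the poll would read `human.state` on a timer rather than in a tight loop.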

version

version: string

Current version of Human library in semver format

Methods

detect

enhance

  • Enhance method performs additional enhancements to a previously detected face image for further processing

    Parameters

    Returns null | Tensor<Rank>

    Tensor

image

init

  • init(): Promise<void>
  • Explicit backend initialization

    • Normally done implicitly during initial load phase
    • Call to explicitly register and initialize the TFJS backend without any other operations
    • Use when changing backend during runtime

    Returns Promise<void>

load

  • load(userConfig?: Partial<Config>): Promise<void>
  • Load method preloads all configured models on-demand

    • Not explicitly required, as any required model is loaded implicitly on its first run

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Promise<void>

    Promise

next

  • Runs interpolation using the last known result and returns a smoothed result. Interpolation is based on time since the last known result, so it can be called independently

    Parameters

    Returns Result

    result: Result

now

  • now(): number
  • Utility wrapper for performance.now()

    Returns number

reset

  • reset(): void

segmentation

  • segmentation(input: Input, background?: Input): Promise<{ alpha: null | OffscreenCanvas | HTMLCanvasElement; canvas: null | OffscreenCanvas | HTMLCanvasElement; data: number[] }>
  • Segmentation method takes any input and returns processed canvas with body segmentation

    • Segmentation is not triggered as part of detect process

    Parameters

    Returns Promise<{ alpha: null | OffscreenCanvas | HTMLCanvasElement; canvas: null | OffscreenCanvas | HTMLCanvasElement; data: number[] }>

    • data as raw data array with per-pixel segmentation values
    • canvas as canvas which is the input image filtered with segmentation data and optionally merged with the background image; canvas alpha values are set to segmentation values for easy merging
    • alpha as grayscale canvas that represents segmentation alpha values

validate

  • validate(userConfig?: Partial<Config>): { expected?: string; reason: string; where: string }[]
  • Validate current configuration schema

    Parameters

    • Optional userConfig: Partial<Config>

    Returns { expected?: string; reason: string; where: string }[]
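A sketch of consuming the returned issue array. The issue contents below are invented for illustration; only the `{ expected?, reason, where }` shape comes from the signature above:

```typescript
// Shape of a single validation issue, per the documented return type.
type ValidationIssue = { expected?: string; reason: string; where: string };

// Hypothetical issues; in real code this would be human.validate(userConfig).
const issues: ValidationIssue[] = [
  { where: 'config.backend', expected: 'string', reason: 'unknown property type' },
];

// Typical consumption: a non-empty array signals a configuration problem.
for (const issue of issues) {
  const expected = issue.expected ? ` (expected ${issue.expected})` : '';
  console.log(`${issue.where}: ${issue.reason}${expected}`);
}
```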

warmup

  • warmup(userConfig?: Partial<Config>): Promise<Result | { error: any }>
  • Warmup method pre-initializes all configured models for faster inference

    • can take significant time on startup
    • only used for webgl and humangl backends

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Promise<Result | { error: any }>

    result: Result