
Human library main class

All methods and properties are available only as members of the Human class

  • Configuration object definition: Config
  • Results object definition: Result
  • Possible inputs: Input
param userConfig: Config

returns: instance of Human

Hierarchy

  • Human

Index

Constructors

constructor

  • Constructor for Human library that is further used for all operations

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Human
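
For illustration, a configuration fragment passed to the constructor might look like the sketch below. The keys and values shown are assumptions based on common Human configuration options, not a complete or authoritative Config:

```typescript
// Illustrative Partial<Config> fragment; keys and values are assumptions,
// not a complete Config definition.
const userConfig = {
  backend: 'webgl',           // which tfjs backend to use
  modelBasePath: '../models', // where model files are loaded from
};
// the instance would then be created with: const human = new Human(userConfig);
```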

Properties

config

config: Config

Current configuration

distance

distance: (descriptor1: Descriptor, descriptor2: Descriptor, options?: MatchOptions) => number = match.distance

Type declaration

    • Calculates distance between two descriptors

      Parameters

      Returns number
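
As a self-contained illustration of what a descriptor distance can look like, the sketch below computes a Euclidean (L2) distance over two equal-length number arrays. This is an assumption for illustration only; the library's actual metric is configurable via MatchOptions:

```typescript
type Descriptor = number[]; // assumption: a descriptor is a plain number array

// Sketch of a Euclidean (L2) distance between two descriptors.
// The real helper accepts MatchOptions to tune the metric; this sketch
// hard-codes L2 for illustration.
function l2Distance(a: Descriptor, b: Descriptor): number {
  if (a.length !== b.length) throw new Error('descriptor lengths must match');
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += (a[i] - b[i]) ** 2;
  return Math.sqrt(sum);
}
```

For example, `l2Distance([0, 0], [3, 4])` yields 5, and identical descriptors yield 0.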

draw

draw: { all: (inCanvas: AnyCanvas, result: Result, drawOptions?: Partial<DrawOptions>) => Promise<null | [void, void, void, void, void]>; body: (inCanvas: AnyCanvas, result: BodyResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; canvas: (input: AnyCanvas | HTMLImageElement | HTMLMediaElement | HTMLVideoElement, output: AnyCanvas) => Promise<void>; face: (inCanvas: AnyCanvas, result: FaceResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; gesture: (inCanvas: AnyCanvas, result: GestureResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; hand: (inCanvas: AnyCanvas, result: HandResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; object: (inCanvas: AnyCanvas, result: ObjectResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; options: DrawOptions; person: (inCanvas: AnyCanvas, result: PersonResult[], drawOptions?: Partial<DrawOptions>) => Promise<void> }

Draw helper methods that can draw detected objects on a canvas using the specified draw options

property options: global settings for all draw operations, can be overridden for each draw method (DrawOptions)

env

env: Env

Object containing environment information used for diagnostics

events

events: undefined | EventTarget

Container for events dispatched by Human (of type EventTarget). Possible events:

  • create: triggered when Human object is instantiated
  • load: triggered when models are loaded (explicitly or on-demand)
  • image: triggered when input image is processed
  • result: triggered when detection is complete
  • warmup: triggered when warmup is complete
  • error: triggered on some errors
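
Since the container is a standard EventTarget, listeners attach with the usual addEventListener API. The self-contained sketch below uses a plain EventTarget as a stand-in; the event names come from the list above, and the dispatch is done manually here only for illustration (the library dispatches these internally):

```typescript
// Sketch: a plain EventTarget stands in for human.events (which may be
// undefined in some environments). Standard listener registration applies.
const events = new EventTarget();

let lastEvent = '';
events.addEventListener('result', () => { lastEvent = 'result'; });
events.addEventListener('error', () => { lastEvent = 'error'; });

// Dispatched manually here for illustration; the library dispatches these internally.
events.dispatchEvent(new Event('result'));
```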

faceTriangulation

faceTriangulation: number[]

Reference face triangulation array of 468 points, used for triangle references between points

faceUVMap

faceUVMap: [number, number][]

Reference UV map of 468 values, used for 3D mapping of the face mesh

gl

gl: Record<string, unknown>

WebGL debug info

match

match: (descriptor: Descriptor, descriptors: Descriptor[], options?: MatchOptions) => { distance: number; index: number; similarity: number } = match.match

Type declaration

    • (descriptor: Descriptor, descriptors: Descriptor[], options?: MatchOptions): { distance: number; index: number; similarity: number }
    • Matches given descriptor to a closest entry in array of descriptors

      Parameters

      • descriptor: Descriptor

        face descriptor

      • descriptors: Descriptor[]

        array of face descriptors to compare the given descriptor to

      • options: MatchOptions = ...

      Returns { distance: number; index: number; similarity: number }

      • index: number - array index where best match was found, or -1 if no match
      • distance: number - calculated distance of given descriptor to the best match
      • similarity: number - calculated normalized similarity of given descriptor to the best match
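
To make the return shape concrete, the self-contained sketch below matches a descriptor against an array of descriptors using L2 distance. Both the metric and the similarity normalization (1 / (1 + distance)) are assumptions for illustration, not the library's exact formulas:

```typescript
type Descriptor = number[]; // assumption: a descriptor is a plain number array

// Sketch of matching a descriptor to the closest entry in an array of
// descriptors; returns { index, distance, similarity } as described above,
// with index -1 when no match is found.
function bestMatch(descriptor: Descriptor, descriptors: Descriptor[]) {
  let best = { index: -1, distance: Number.POSITIVE_INFINITY, similarity: 0 };
  for (let i = 0; i < descriptors.length; i++) {
    let sum = 0;
    for (let j = 0; j < descriptor.length; j++) sum += (descriptor[j] - descriptors[i][j]) ** 2;
    const distance = Math.sqrt(sum);
    if (distance < best.distance) best = { index: i, distance, similarity: 1 / (1 + distance) };
  }
  return best;
}
```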

performance

performance: Record<string, number>

Performance object that contains values for all recently performed operations

process

process: { canvas: null | AnyCanvas; tensor: null | Tensor<Rank> }

Currently processed image tensor and canvas

result

result: Result

Last known result of detect run

  • Can be accessed anytime after initial detection

similarity

similarity: (descriptor1: Descriptor, descriptor2: Descriptor, options?: MatchOptions) => number = match.similarity

Type declaration

    • Calculates normalized similarity between two face descriptors based on their distance

      Parameters

      Returns number

      similarity between two face descriptors normalized to 0..1 range where 0 is no similarity and 1 is perfect similarity
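
The mapping from distance to normalized similarity can be sketched in a self-contained way. The formula below (1 / (1 + distance)) is an illustrative assumption that satisfies the stated contract (0 distance gives similarity 1, growing distance decays toward 0); the library's actual normalization differs and depends on MatchOptions:

```typescript
// Sketch: map a non-negative distance to a 0..1 similarity, where distance 0
// yields perfect similarity (1) and larger distances decay toward 0.
// The exact normalization used by the library is an assumption here.
function toSimilarity(distance: number): number {
  return 1 / (1 + Math.max(0, distance));
}
```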

state

state: string

Current state of Human library

  • Can be polled to determine operations that are currently executed
  • Progresses through: 'config', 'check', 'backend', 'load', 'run:', 'idle'

version

version: string

Current version of Human library in semver format

Methods

compare

  • compare(firstImageTensor: Tensor<Rank>, secondImageTensor: Tensor<Rank>): Promise<number>
  • Compare two input tensors for pixel similarity

    • use human.image to process any valid input and get a tensor that can be used for compare
    • when passing manually generated tensors:
      • both input tensors must be in the format [1, height, width, 3]
      • if tensor resolutions do not match, the second tensor is resized to match the resolution of the first

    Parameters

    • firstImageTensor: Tensor<Rank>
    • secondImageTensor: Tensor<Rank>

    Returns Promise<number>

    • return value is a pixel similarity score normalized by input resolution and RGB channels
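
The normalization idea can be illustrated with a self-contained sketch over flat RGB arrays. This is an assumption, not the method's actual implementation: the real compare operates on [1, height, width, 3] tensors and resizes mismatched inputs, and its exact aggregation is not spelled out here:

```typescript
// Sketch: aggregate per-channel absolute difference between two equally sized
// flat RGB pixel arrays, normalized by total element count (pixels x channels).
// A score of 0 means identical images under this sketch's convention.
function normalizedPixelDiff(a: ArrayLike<number>, b: ArrayLike<number>): number {
  if (a.length !== b.length) throw new Error('inputs must have equal resolution');
  let diff = 0;
  for (let i = 0; i < a.length; i++) diff += Math.abs(a[i] - b[i]);
  return diff / a.length; // normalized by resolution and channel count
}
```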

detect

enhance

  • Enhance method performs additional enhancements on a previously detected face image for further processing

    Parameters

    Returns null | Tensor<Rank>

    Tensor

image

  • image(input: Input, getTensor?: boolean): Promise<{ canvas: null | AnyCanvas; tensor: null | Tensor<Rank> }>
  • Process input and return both canvas and tensor

    Parameters

    • input: Input
    • getTensor: boolean = true

    Returns Promise<{ canvas: null | AnyCanvas; tensor: null | Tensor<Rank> }>

init

  • init(): Promise<void>
  • Explicit backend initialization

    • Normally done implicitly during initial load phase
    • Call to explicitly register and initialize the TFJS backend without any other operations
    • Use when changing backend during runtime

    Returns Promise<void>

load

  • load(userConfig?: Partial<Config>): Promise<void>
  • Load method preloads all configured models on-demand

    • Not explicitly required, as any required model is loaded implicitly on its first run

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Promise<void>

    Promise

next

  • Runs interpolation using the last known result and returns a smoothed result. Interpolation is based on time elapsed since the last known result, so it can be called independently

    Parameters

    Returns Result

    result: Result

now

  • now(): number
  • Utility wrapper for performance.now()

    Returns number

profile

  • profile(input: Input, userConfig?: Partial<Config>): Promise<Record<string, number>>
  • Run detect with tensorflow profiling

    • result object will contain total execution time information for the top-20 kernels
    • actual detection object can be accessed via human.result

    Parameters

    Returns Promise<Record<string, number>>

reset

  • reset(): void

segmentation

  • Segmentation method takes any input and returns processed canvas with body segmentation

    • Segmentation is not triggered as part of detect process

    Parameters

    Returns Promise<{ alpha: null | AnyCanvas; canvas: null | AnyCanvas; data: Tensor<Rank> | number[] }>

    • data as raw data array with per-pixel segmentation values
    • canvas as canvas which is the input image filtered with segmentation data and optionally merged with a background image; canvas alpha values are set to segmentation values for easy merging
    • alpha as grayscale canvas that represents segmentation alpha values
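
The "alpha values set to segmentation values" behavior described above can be sketched in a self-contained way over a raw RGBA buffer. This is an illustration of the compositing idea, not the library's implementation:

```typescript
// Sketch: write one per-pixel segmentation value (0..255) into the alpha
// channel of an RGBA buffer, so the image can be composited over a background.
// `rgba` has width * height * 4 elements; `data` has one value per pixel.
function applySegmentationAlpha(rgba: Uint8ClampedArray, data: number[]): Uint8ClampedArray {
  if (rgba.length !== data.length * 4) throw new Error('buffer size mismatch');
  for (let i = 0; i < data.length; i++) rgba[i * 4 + 3] = data[i];
  return rgba;
}
```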

validate

  • validate(userConfig?: Partial<Config>): { expected?: string; reason: string; where: string }[]
  • Validate current configuration schema

    Parameters

    • Optional userConfig: Partial<Config>

    Returns { expected?: string; reason: string; where: string }[]
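
To make the return shape concrete, the self-contained sketch below reports unknown top-level keys in the `{ expected?, reason, where }` format shown above. The key list and check are hypothetical stand-ins, not the real Config schema validation:

```typescript
// Sketch of the validation result shape: each issue carries `where` (the
// offending path), `reason`, and optionally `expected`. The known-key list
// here is a hypothetical stand-in for the real Config schema.
type ValidationIssue = { expected?: string; reason: string; where: string };

function validateKeys(config: Record<string, unknown>, knownKeys: string[]): ValidationIssue[] {
  const issues: ValidationIssue[] = [];
  for (const key of Object.keys(config)) {
    if (!knownKeys.includes(key)) issues.push({ where: key, reason: 'unknown property' });
  }
  return issues;
}
```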

warmup

  • warmup(userConfig?: Partial<Config>): Promise<Result | { error: any }>
  • Warmup method pre-initializes all configured models for faster inference

    • can take significant time on startup
    • only used for webgl and humangl backends

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Promise<Result | { error: any }>

    result: Result