Human library main class

All methods and properties are available only as members of the Human class

  • Configuration object definition: Config
  • Results object definition: Result
  • Possible inputs: Input

Param

Config

Returns

instance of Human

Hierarchy

  • Human

Constructors

  • Constructor for the Human library that is further used for all operations; see the usage sketch below

    Parameters

    • Optional userConfig: Partial<Config>

      user configuration object Config

    Returns Human
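
A minimal instantiation sketch, assuming the library is consumed as the @vladmandic/human npm package; the model path and nested options shown are illustrative only:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

// the constructor accepts a Partial<Config>, so only values that differ from defaults are needed
const human = new Human({
  modelBasePath: 'https://vladmandic.github.io/human/models', // assumed model location
  face: { enabled: true },
  body: { enabled: false },
});

console.log('human version:', human.version);
```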

Properties

config: Config

Current configuration

distance: ((descriptor1: Descriptor, descriptor2: Descriptor, options?: MatchOptions) => number) = match.distance

Type declaration

    • (descriptor1: Descriptor, descriptor2: Descriptor, options?: MatchOptions): number
    • Calculates distance between two descriptors

      Parameters

      • descriptor1: Descriptor
      • descriptor2: Descriptor
      • options: MatchOptions = ...

        calculation options

        • order - distance algorithm to use: Euclidean distance if order is 2 (default), Minkowski distance of nth order if order is higher than 2
        • multiplier - how much to amplify difference analysis, in range of 1..100; default is 20, which normalizes results so that a similarity above 0.5 can be considered a match

      Returns number
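
A short usage sketch for the distance helper; descriptors are plain number arrays, normally taken from prior detection results (the fixed-value arrays below are placeholders only):

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human();

// descriptors would normally come from detection results (e.g. face embeddings);
// fixed-value arrays are used here only to keep the sketch self-contained
const descriptor1: number[] = new Array(1024).fill(0.5);
const descriptor2: number[] = new Array(1024).fill(0.4);

const euclidean = human.distance(descriptor1, descriptor2);               // order 2 (default)
const minkowski = human.distance(descriptor1, descriptor2, { order: 3 }); // Minkowski distance of 3rd order

console.log({ euclidean, minkowski });
```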

draw: { all: ((inCanvas: AnyCanvas, result: Result, drawOptions?: Partial<DrawOptions>) => Promise<null | [void, void, void, void, void]>); body: ((inCanvas: AnyCanvas, result: BodyResult[], drawOptions?: Partial<DrawOptions>) => void); canvas: ((input: AnyCanvas | HTMLImageElement | HTMLVideoElement, output: AnyCanvas) => void); face: ((inCanvas: AnyCanvas, result: FaceResult[], drawOptions?: Partial<DrawOptions>) => void); gesture: ((inCanvas: AnyCanvas, result: GestureResult[], drawOptions?: Partial<DrawOptions>) => void); hand: ((inCanvas: AnyCanvas, result: HandResult[], drawOptions?: Partial<DrawOptions>) => void); object: ((inCanvas: AnyCanvas, result: ObjectResult[], drawOptions?: Partial<DrawOptions>) => void); options: DrawOptions; person: ((inCanvas: AnyCanvas, result: PersonResult[], drawOptions?: Partial<DrawOptions>) => void) }

Draw helper methods that can draw detected objects on canvas using the specified draw options

  • canvas: draws input to canvas
  • options: global settings for all draw operations; can be overridden for each draw method via DrawOptions
  • face, body, hand, gesture, object, person: draw detected results as overlays on canvas

Type declaration
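
A browser-side sketch showing the draw helpers together with detection; the element ids and the detect call are assumptions not covered by this property's declaration:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human();

async function drawFrame(): Promise<void> {
  const video = document.getElementById('video') as HTMLVideoElement;    // assumed <video id="video">
  const canvas = document.getElementById('canvas') as HTMLCanvasElement; // assumed <canvas id="canvas">

  const result = await human.detect(video); // detect method assumed, as in the library
  human.draw.canvas(video, canvas);         // copy the input frame to the output canvas
  await human.draw.all(canvas, result);     // overlay face/body/hand/gesture/object results
}
```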

env: Env

Object containing environment information used for diagnostics

events: undefined | EventTarget

Container for events dispatched by Human. Possible events:

  • create: triggered when Human object is instantiated
  • load: triggered when models are loaded (explicitly or on-demand)
  • image: triggered when input image is processed
  • result: triggered when detection is complete
  • warmup: triggered when warmup is complete
  • error: triggered on some errors
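
Since events is a standard EventTarget (when available), listeners can be attached in the usual way; a minimal sketch:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human();

// events can be undefined in environments without EventTarget support, hence the optional chaining
human.events?.addEventListener('load', () => console.log('models loaded'));
human.events?.addEventListener('result', () => {
  // detection finished; the latest results are also available as human.result
  console.log('faces detected:', human.result.face.length);
});
human.events?.addEventListener('error', () => console.warn('human reported an error'));
```
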
faceTriangulation: number[]

Reference face triangulation array of 468 points, used for triangle references between points

faceUVMap: [number, number][]

Reference UV map of 468 values, used for 3D mapping of the face mesh

gl: Record<string, unknown>

WebGL debug info

match: ((descriptor: Descriptor, descriptors: Descriptor[], options?: MatchOptions) => { distance: number; index: number; similarity: number }) = match.match

Type declaration

    • (descriptor: Descriptor, descriptors: Descriptor[], options?: MatchOptions): { distance: number; index: number; similarity: number }
    • Matches given descriptor to the closest entry in an array of descriptors

      Parameters

      • descriptor: Descriptor

        face descriptor

      • descriptors: Descriptor[]

        array of face descriptors to compare given descriptor to

      • options: MatchOptions = ...

        see similarity method for options description

      Returns

        • index - array index where the best match was found, or -1 if no match was found
        • distance - calculated distance of the given descriptor to the best match
        • similarity - calculated normalized similarity of the given descriptor to the best match

      Returns { distance: number; index: number; similarity: number }

      • distance: number
      • index: number
      • similarity: number
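
A sketch of matching an unknown descriptor against a small labeled set; the labels and fixed-value descriptors are purely illustrative:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human();

// in practice these descriptors would be embeddings collected from earlier detections
const labels = ['person-a', 'person-b'];                                             // hypothetical labels
const database: number[][] = [new Array(1024).fill(0.5), new Array(1024).fill(0.1)];
const unknown: number[] = new Array(1024).fill(0.45);

const best = human.match(unknown, database);
if (best.index >= 0) console.log('best match:', labels[best.index], 'similarity:', best.similarity);
else console.log('no match found');
```
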
performance: Record<string, number>

Performance object that contains values for all recently performed operations

process: { canvas: null | AnyCanvas; tensor: null | Tensor<Rank> }

Currently processed image tensor and canvas

Type declaration

result: Result

Last known result of detect run

  • Can be accessed anytime after initial detection

similarity: ((descriptor1: Descriptor, descriptor2: Descriptor, options?: MatchOptions) => number) = match.similarity

Type declaration

    • (descriptor1: Descriptor, descriptor2: Descriptor, options?: MatchOptions): number
    • Calculates normalized similarity between two face descriptors based on their distance

      Parameters

      • descriptor1: Descriptor
      • descriptor2: Descriptor
      • options: MatchOptions = ...

        calculation options

        • order - distance algorithm to use: Euclidean distance if order is 2 (default), Minkowski distance of nth order if order is higher than 2
        • multiplier - how much to amplify difference analysis, in range of 1..100; default is 20, which normalizes results so that a similarity above 0.5 can be considered a match
        • min - lower bound used to normalize the similarity result to a given range
        • max - upper bound used to normalize the similarity result to a given range; default range is 0.2...0.8

      Returns similarity between two face descriptors normalized to 0..1 range, where 0 is no similarity and 1 is perfect similarity

      Returns number
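
Continuing the distance sketch above (same Human instance and descriptor arrays), a similarity check might look like this; per the multiplier description, a score above 0.5 can be treated as a probable match:

```ts
const score = human.similarity(descriptor1, descriptor2);                          // default options
const tuned = human.similarity(descriptor1, descriptor2, { min: 0.2, max: 0.8 });  // explicit normalization range

console.log('likely same person:', score > 0.5, { score, tuned });
```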

state: string

Current state of Human library

  • Can be polled to determine operations that are currently executed
  • Progresses through: 'config', 'check', 'backend', 'load', 'run:', 'idle'

tf: any

Instance of TensorFlow/JS used by Human

  • Can be embedded or externally provided TFJS API
version: string

Current version of Human library in semver format

webcam: WebCam = ...

WebCam helper methods

Methods

  • Internal function to measure tensor leaks

    Parameters

    • Rest ...msg: string[]

    Returns void

  • Check model for invalid kernel ops for current backend

    Returns { missing: string[]; name: string }[]

  • Compare two input tensors for pixel similarity; see the sketch after this entry

    • use human.image to process any valid input and get a tensor that can be used for compare
    • when passing manually generated tensors:
    • both input tensors must be in format [1, height, width, 3]
    • if resolution of tensors does not match, second tensor will be resized to match resolution of the first tensor
    • return value is pixel similarity score normalized by input resolution and rgb channels

    Parameters

    Returns Promise<number>
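
A sketch of pixel comparison built on top of the image-processing method described further below; the method names human.image and human.compare are assumed to be the ones documented for the library:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human();

async function pixelDiff(a: HTMLImageElement, b: HTMLImageElement): Promise<number> {
  // human.image returns tensors already in the expected [1, height, width, 3] format
  const t1 = (await human.image(a, true)).tensor;
  const t2 = (await human.image(b, true)).tensor;
  if (!t1 || !t2) return 0;
  const score = await human.compare(t1, t2); // normalized pixel similarity score
  human.tf.dispose([t1, t2]);                // assumption: caller disposes intermediate tensors
  return score;
}
```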

  • emit event

    Parameters

    • event: string

    Returns void

  • Enhance method performs additional enhancements to a previously detected face image for further processing

    Returns

    Tensor

    Parameters

    • input: Tensor<Rank>

      Tensor as provided in human.result.face[n].tensor

    Returns null | Tensor<Rank>

  • Process input and return both canvas and tensor

    Parameters

    • input: Input

      any input Input

    • getTensor: boolean = true

      should image processing also return tensor or just canvas

    Returns object with tensor and canvas

    Returns Promise<{ canvas: null | AnyCanvas; tensor: null | Tensor<Rank> }>
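
A usage sketch, assuming the method is exposed as human.image as in the library:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human();

async function preprocess(element: HTMLImageElement) {
  const processed = await human.image(element, true); // request both canvas and tensor
  if (processed.tensor) {
    console.log('input tensor shape:', processed.tensor.shape);
    human.tf.dispose(processed.tensor); // assumption: caller disposes the returned tensor
  }
  return processed.canvas;
}
```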

  • Explicit backend initialization

    • Normally done implicitly during initial load phase
    • Call to explicitly register and initialize the TFJS backend without running any other operations
    • Use when changing backend during runtime

    Returns Promise<void>
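
A sketch of switching backends at runtime, assuming the method is exposed as human.init as in the library:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human();

async function switchBackend(): Promise<void> {
  human.config.backend = 'wasm'; // change the configured backend (value assumed to be supported in this environment)
  await human.init();            // explicitly re-register and initialize the TFJS backend
}
```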

  • Load method preloads all configured models on-demand

    • Not explicitly required as any required model is loaded implicitly on its first run

    Parameters

    Returns Promise<void>
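
An optional preload sketch, assuming the method is exposed as human.load as in the library:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human({ face: { enabled: true }, hand: { enabled: true } });

async function preload(): Promise<void> {
  await human.load();                              // downloads and initializes all enabled models up front
  console.log('load timings:', human.performance); // per-operation timings, see the performance property
}
```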

  • Runs interpolation using the last known result and returns a smoothed result. Interpolation is based on time since the last known result, so it can be called independently; see the render-loop sketch below

    Returns

    result - Result

    Parameters

    • result: Result = ...

      optional: specific Result set to run interpolation on; defaults to the last known result

    Returns Result
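
A typical render-loop sketch where detection runs at its own pace and interpolation is called once per animation frame; the method name human.next is assumed, as in the library:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human();

function drawLoop(canvas: HTMLCanvasElement): void {
  const interpolated = human.next();         // smoothed variant of the last known result
  void human.draw.all(canvas, interpolated); // overlay the interpolated results
  requestAnimationFrame(() => drawLoop(canvas));
}
```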

  • Utility wrapper for performance.now()

    Returns number

  • Run detect with tensorflow profiling

    • result object will contain total execution time information for top-20 kernels
    • actual detection object can be accessed via human.result

    Parameters

    Returns Promise<{ kernel: string; perc: number; time: number }[]>

  • Reset configuration to default values

    Returns void

  • Segmentation method takes any input and returns processed canvas with body segmentation

    • Segmentation is not triggered as part of detect process

    Parameters

    • input: Input
    • Optional background: Input

      Input

      • Optional parameter background is used to fill the background with specific input

    Returns:

      • data as raw data array with per-pixel segmentation values
      • canvas as canvas which is the input image filtered with segmentation data and optionally merged with background image; canvas alpha values are set to segmentation values for easy merging
      • alpha as grayscale canvas that represents segmentation alpha values

    Returns Promise<{ alpha: null | AnyCanvas; canvas: null | AnyCanvas; data: number[] | Tensor<Rank> }>
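
A background-replacement sketch, assuming the method is exposed as human.segmentation and that segmentation is enabled in the configuration:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human({ segmentation: { enabled: true } }); // config key assumed to enable the segmentation model

async function replaceBackground(input: HTMLImageElement, background: HTMLImageElement) {
  const seg = await human.segmentation(input, background);
  if (seg.canvas) document.body.appendChild(seg.canvas as HTMLCanvasElement); // input merged with the provided background
  return seg.data; // raw per-pixel segmentation values
}
```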

  • Helper function that sleeps for a given amount of time

    Parameters

    • ms: number

      sleep time in milliseconds

    Returns Promise<void>

  • Validate current configuration schema

    Parameters

    • Optional userConfig: Partial<Config>

    Returns { expected?: string; reason: string; where: string }[]

  • Continuously detect video frames; see the sketch after this entry

    Parameters

    • element: HTMLVideoElement

      HTMLVideoElement input

    • run: boolean = true

      boolean: run continuously, or stop if already running; default true

    • delay: number = 0

      number: delay between frame detections in milliseconds; default 0

    Returns Promise<void>
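
A sketch of continuous video detection, assuming the method is exposed as human.video as in the library:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human();
const video = document.getElementById('video') as HTMLVideoElement; // assumed <video id="video">

void human.video(video, true, 100); // keep detecting while the video plays, pausing ~100 ms between frames

// human.result is refreshed continuously while the loop runs
setInterval(() => console.log('gestures:', human.result.gesture.length), 1000);

// calling again with run=false stops the loop (per the run parameter above)
// human.video(video, false);
```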

  • Warmup method pre-initializes all configured models for faster inference

    • can take significant time on startup
    • only used for webgl and humangl backends

    Returns

    result - Result

    Parameters

    Returns Promise<undefined | Result>
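
A startup sketch combining load and warmup, assuming those method names as in the library and a webgl backend:

```ts
import { Human } from '@vladmandic/human'; // package name assumed

const human = new Human({ backend: 'webgl' }); // warmup only applies to webgl/humangl backends

async function startup(): Promise<void> {
  await human.load();                                // preload all configured models
  await human.warmup();                              // run models once so the first real detection is fast
  console.log('warmup timings:', human.performance); // per-operation timings, including warmup
}
```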