
Human library main class

All methods and properties are available only as members of the Human class

  • Configuration object definition: Config
  • Results object definition: Result
  • Possible inputs: Input

Hierarchy

  • Human

Index

Constructors

constructor

  • Creates an instance of the Human library that is further used for all operations

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Human
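
A minimal instantiation sketch; the package name @vladmandic/human and the config keys shown are assumptions based on common usage, not stated on this page:

```ts
import Human from '@vladmandic/human'; // package name is an assumption

// all values in the partial config are optional and merged with defaults
const human = new Human({
  backend: 'webgl',           // assumed Config key: which tfjs backend to use
  modelBasePath: '../models', // assumed Config key: location of model files
});
console.log('human version:', human.version);
```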

Properties

config

config: Config

Current configuration

draw

draw: { all: any; body: any; canvas: any; face: any; gesture: any; hand: any; object: any; options: DrawOptions; person: any }

Draw helper classes that can draw detected objects on canvas using the specified draw options (a usage sketch follows the type declaration below)

  • options: DrawOptions global settings for all draw operations, can be overridden for each draw method
  • face: draw detected faces
  • body: draw detected people and body parts
  • hand: draw detected hands and hand parts
  • canvas: draw processed canvas which is a processed copy of the input
  • all: meta-function that performs: canvas, face, body, hand

Type declaration

  • all: any
  • body: any
  • canvas: any
  • face: any
  • gesture: any
  • hand: any
  • object: any
  • options: DrawOptions
  • person: any
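
A brief usage sketch, assuming a prior detection result; inputVideo and outputCanvas are hypothetical elements and lineWidth is an assumed DrawOptions field:

```ts
const result = await human.detect(inputVideo);  // hypothetical input element
human.draw.options.lineWidth = 2;               // assumed DrawOptions field, applies globally
human.draw.all(outputCanvas, result);           // meta-function: canvas, face, body, hand
```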

env

env: Env

Object containing environment information used for diagnostics

events

events: EventTarget

Container for events dispatched by Human

Possible events:

  • create: triggered when Human object is instantiated
  • load: triggered when models are loaded (explicitly or on-demand)
  • image: triggered when input image is processed
  • result: triggered when detection is complete
  • warmup: triggered when warmup is complete
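
Since events is a standard EventTarget, listeners attach with addEventListener; a minimal sketch, assuming the Result face array exposed via the result property below:

```ts
human.events.addEventListener('load', () => console.log('models loaded'));
human.events.addEventListener('result', () => {
  // the last result is also available as human.result
  console.log('detection complete, faces found:', human.result.face.length);
});
```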

faceTriangulation

faceTriangulation: number[]

Reference face triangulation array of 468 points, used for triangle references between points

faceUVMap

faceUVMap: number[][]

Reference UV map of 468 values, used for 3D mapping of the face mesh

initial

initial: boolean

models

models: { age: null | GraphModel; blazepose: null | GraphModel; centernet: null | GraphModel; efficientpose: null | GraphModel; embedding: null | GraphModel; emotion: null | GraphModel; face: null | [unknown, null | GraphModel, null | GraphModel]; faceres: null | GraphModel; gender: null | GraphModel; handpose: null | [null | GraphModel, null | GraphModel]; movenet: null | GraphModel; nanodet: null | GraphModel; posenet: null | GraphModel; segmentation: null | GraphModel }
internal:

Currently loaded models

Type declaration

  • age: null | GraphModel
  • blazepose: null | GraphModel
  • centernet: null | GraphModel
  • efficientpose: null | GraphModel
  • embedding: null | GraphModel
  • emotion: null | GraphModel
  • face: null | [unknown, null | GraphModel, null | GraphModel]
  • faceres: null | GraphModel
  • gender: null | GraphModel
  • handpose: null | [null | GraphModel, null | GraphModel]
  • movenet: null | GraphModel
  • nanodet: null | GraphModel
  • posenet: null | GraphModel
  • segmentation: null | GraphModel

performance

performance: Record<string, number>

Performance object that contains values for all recently performed operations

process

process: { canvas: null | HTMLCanvasElement | OffscreenCanvas; tensor: null | Tensor<Rank> }

Currently processed image tensor and canvas

Type declaration

  • canvas: null | HTMLCanvasElement | OffscreenCanvas
  • tensor: null | Tensor<Rank>

result

result: Result

Last known result of detect run

  • Can be accessed anytime after initial detection

state

state: string

Current state of Human library

  • Can be polled to determine operations that are currently executed
  • Progresses through: 'config', 'check', 'backend', 'load', 'run:', 'idle'
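
A small polling sketch; input is a hypothetical source:

```ts
const poll = setInterval(() => console.log('human state:', human.state), 100);
const result = await human.detect(input); // hypothetical input
clearInterval(poll); // state returns to 'idle' once detection completes
```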

tf

tf: any
internal:

Instance of TensorFlow/JS used by Human

  • Can be embedded or externally provided

version

version: string

Current version of Human library in semver format

Methods

detect

  • Main detection method

    • Analyze configuration: Config
    • Pre-process input: Input
    • Run inference for all configured models
    • Process and return result: Result

    Parameters

    • input: Input
    • Optional userConfig: Partial<Config>

    Returns Promise<Error | Result>

    result: Result
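
A minimal sketch of a detect call, assuming a hypothetical video element as input:

```ts
const result = await human.detect(video); // any supported Input type works
if (result instanceof Error) throw result; // detect resolves to Error | Result
console.log('faces:', result.face.length, 'bodies:', result.body.length);
```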

enhance

  • enhance(input: Tensor<Rank>): null | Tensor<Rank>
  • Enhance method performs additional enhancements to a previously detected face image for further processing

    Parameters

    • input: Tensor<Rank>

    Returns null | Tensor<Rank>

    Tensor
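
A hedged sketch; faceTensor is a hypothetical tensor holding a previously detected face crop:

```ts
const enhanced = human.enhance(faceTensor); // hypothetical input tensor
if (enhanced) {
  // ...run further processing on the enhanced tensor...
  enhanced.dispose(); // release tensor memory when done
}
```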

image

  • image(input: Input): { canvas: HTMLCanvasElement | OffscreenCanvas; tensor: null | Tensor<Rank> }
  • Process input and return canvas and tensor

    Parameters

    • input: Input

    Returns { canvas: HTMLCanvasElement | OffscreenCanvas; tensor: null | Tensor<Rank> }

    • canvas: HTMLCanvasElement | OffscreenCanvas
    • tensor: null | Tensor<Rank>
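
A short sketch of input pre-processing without detection; source is a hypothetical input element:

```ts
const { canvas, tensor } = human.image(source); // hypothetical input
console.log('processed size:', canvas.width, canvas.height);
tensor?.dispose(); // release tensor memory when done
```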

load

  • load(userConfig?: Partial<Config>): Promise<void>
  • Load method preloads all configured models on-demand

    • Not explicitly required as any required model is loaded implicitly on its first run

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Promise<void>
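
A sketch of explicit preloading, useful to front-load model downloads before the first detect call:

```ts
// optional: without this call, each model loads on its first use instead
await human.load();
```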

match

  • match(faceEmbedding: number[], db: { embedding: number[]; name: string; source: string }[], threshold?: number): { embedding: number[]; name: string; similarity: number; source: string }
  • Match method finds the best match between a provided face descriptor and a predefined database of known descriptors

    Parameters

    • faceEmbedding: number[]
    • db: { embedding: number[]; name: string; source: string }[]
    • threshold: number = 0

    Returns { embedding: number[]; name: string; similarity: number; source: string }

    best match

    • embedding: number[]
    • name: string
    • similarity: number
    • source: string
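
An illustrative sketch; the database entries and the result.face[0].embedding field used as the query descriptor are assumptions based on the parameter shapes above:

```ts
declare const descriptorA: number[]; // hypothetical descriptor from an earlier detect run
declare const descriptorB: number[]; // hypothetical descriptor from an earlier detect run
const db = [
  { name: 'person-a', source: 'a.jpg', embedding: descriptorA },
  { name: 'person-b', source: 'b.jpg', embedding: descriptorB },
];
const best = human.match(result.face[0].embedding, db); // assumed result field
console.log(`best match: ${best.name} (similarity: ${best.similarity})`);
```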

next

  • Runs interpolation using the last known result and returns a smoothened result. Interpolation is based on time since the last known result, so it can be called independently

    Parameters

    • Optional result: Result

    Returns Result

    result: Result
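
A render-loop sketch; passing human.result is an assumption based on the description above, and outputCanvas is hypothetical:

```ts
function render() {
  const interpolated = human.next(human.result); // assumed optional parameter
  human.draw.all(outputCanvas, interpolated);    // hypothetical canvas
  requestAnimationFrame(render);                 // smooth drawing between detect calls
}
render();
```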

segmentation

  • segmentation(input: Input, background?: Input): Promise<null | HTMLCanvasElement | OffscreenCanvas>
  • Segmentation method takes any input and returns a processed canvas with body segmentation. Optional parameter background is used to fill the background with specific input. Segmentation is not triggered as part of the detect process

    Parameters

    • input: Input
    • Optional background: Input

    Returns Promise<null | HTMLCanvasElement | OffscreenCanvas>

    Canvas
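
A compositing sketch; webcamCanvas, backgroundImage, and outputCtx are hypothetical:

```ts
const composited = await human.segmentation(webcamCanvas, backgroundImage);
if (composited) outputCtx.drawImage(composited, 0, 0); // skip drawing when result is null
```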

similarity

  • similarity(embedding1: number[], embedding2: number[]): number
  • Similarity method calculates similarity between two provided face descriptors (face embeddings)

    • Calculation is based on normalized Minkowski distance between the two descriptors

    Parameters

    • embedding1: number[]
    • embedding2: number[]

    Returns number

    similarity: number
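
A comparison sketch; the embedding result field is an assumption, with descriptors taken from two prior detect runs:

```ts
const score = human.similarity(resultA.face[0].embedding, resultB.face[0].embedding);
console.log('descriptor similarity:', score); // higher means more similar
```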

warmup

  • warmup(userConfig?: Partial<Config>): Promise<Result | { error: any }>
  • Warmup method pre-initializes all configured models for faster inference

    • Can take significant time on startup
    • Only used for webgl and humangl backends

    Parameters

    • Optional userConfig: Partial<Config>

    Returns Promise<Result | { error: any }>

    result: Result
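
A startup sketch; warmup pays the initialization cost once so the first real detect call runs at full speed:

```ts
// optional: per the notes above, no effect unless the backend is webgl or humangl
const warmupResult = await human.warmup();
```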