
Class Human

Human library main class

All methods and properties are available only as members of Human class

  • Configuration object definition: Config
  • Results object definition: Result
  • Possible inputs: Input

Hierarchy

  • Human

Index

Constructors

constructor

  • new Human(userConfig?: Config | Record<string, unknown>): Human
  • Creates instance of Human library that is further used for all operations

    Parameters

    • Optional userConfig: Config | Record<string, unknown>

    Returns Human
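A minimal usage sketch for the constructor. The import path is the package name as typically published, and the configuration field names shown (`backend`, and the `face`/`body`/`hand` enable toggles) are assumptions drawn from typical Human configurations, not guaranteed by this page; unspecified fields fall back to library defaults.

```typescript
// import { Human } from '@vladmandic/human'; // assumes the package is installed

// A partial user configuration; the constructor accepts
// Config | Record<string, unknown>, so a plain object works.
const userConfig: Record<string, unknown> = {
  backend: 'webgl',           // assumed field: inference backend
  face: { enabled: true },    // assumed field: enable face detection
  body: { enabled: true },    // assumed field: enable body detection
  hand: { enabled: false },   // assumed field: disable hand detection
};

// const human = new Human(userConfig);
```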

Properties

config

config: Config

Current configuration

draw

draw: { all: (inCanvas: HTMLCanvasElement, result: Result, drawOptions?: DrawOptions) => Promise<null | [void, void, void, void, void]>; body: (inCanvas: HTMLCanvasElement, result: Body[], drawOptions?: DrawOptions) => Promise<void>; canvas: (inCanvas: HTMLCanvasElement, outCanvas: HTMLCanvasElement) => Promise<void>; face: (inCanvas: HTMLCanvasElement, result: Face[], drawOptions?: DrawOptions) => Promise<void>; gesture: (inCanvas: HTMLCanvasElement, result: Gesture[], drawOptions?: DrawOptions) => Promise<void>; hand: (inCanvas: HTMLCanvasElement, result: Hand[], drawOptions?: DrawOptions) => Promise<void>; options: DrawOptions }

Draw helper classes that can draw detected objects on canvas using specified draw options

  • options: DrawOptions global settings for all draw operations, can be overridden for each draw method
  • face: draw detected faces
  • body: draw detected people and body parts
  • hand: draw detected hands and hand parts
  • gesture: draw detected gestures
  • canvas: draw processed canvas which is a processed copy of the input
  • all: meta-function that performs: canvas, face, body, hand, gesture

Type declaration

  • all: (inCanvas: HTMLCanvasElement, result: Result, drawOptions?: DrawOptions) => Promise<null | [void, void, void, void, void]>
      • (inCanvas: HTMLCanvasElement, result: Result, drawOptions?: DrawOptions): Promise<null | [void, void, void, void, void]>
      • Parameters

        • inCanvas: HTMLCanvasElement
        • result: Result
        • Optional drawOptions: DrawOptions

        Returns Promise<null | [void, void, void, void, void]>

  • body: (inCanvas: HTMLCanvasElement, result: Body[], drawOptions?: DrawOptions) => Promise<void>
      • (inCanvas: HTMLCanvasElement, result: Body[], drawOptions?: DrawOptions): Promise<void>
      • Parameters

        • inCanvas: HTMLCanvasElement
        • result: Body[]
        • Optional drawOptions: DrawOptions

        Returns Promise<void>

  • canvas: (inCanvas: HTMLCanvasElement, outCanvas: HTMLCanvasElement) => Promise<void>
      • (inCanvas: HTMLCanvasElement, outCanvas: HTMLCanvasElement): Promise<void>
      • Parameters

        • inCanvas: HTMLCanvasElement
        • outCanvas: HTMLCanvasElement

        Returns Promise<void>

  • face: (inCanvas: HTMLCanvasElement, result: Face[], drawOptions?: DrawOptions) => Promise<void>
      • (inCanvas: HTMLCanvasElement, result: Face[], drawOptions?: DrawOptions): Promise<void>
      • Parameters

        • inCanvas: HTMLCanvasElement
        • result: Face[]
        • Optional drawOptions: DrawOptions

        Returns Promise<void>

  • gesture: (inCanvas: HTMLCanvasElement, result: Gesture[], drawOptions?: DrawOptions) => Promise<void>
      • (inCanvas: HTMLCanvasElement, result: Gesture[], drawOptions?: DrawOptions): Promise<void>
      • Parameters

        • inCanvas: HTMLCanvasElement
        • result: Gesture[]
        • Optional drawOptions: DrawOptions

        Returns Promise<void>

  • hand: (inCanvas: HTMLCanvasElement, result: Hand[], drawOptions?: DrawOptions) => Promise<void>
      • (inCanvas: HTMLCanvasElement, result: Hand[], drawOptions?: DrawOptions): Promise<void>
      • Parameters

        • inCanvas: HTMLCanvasElement
        • result: Hand[]
        • Optional drawOptions: DrawOptions

        Returns Promise<void>

  • options: DrawOptions
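A hedged sketch of how the global `options: DrawOptions` relate to a per-call `drawOptions?` override: the override is merged over the global settings for a single draw call. The `DrawOptionsSketch` type and its field names (`color`, `lineWidth`, `drawLabels`) are illustrative assumptions, not confirmed by this page.

```typescript
// Stand-in shape for DrawOptions, for illustration only.
type DrawOptionsSketch = { color?: string; lineWidth?: number; drawLabels?: boolean };

// Global settings, analogous to draw.options.
const globalOptions: DrawOptionsSketch = { color: 'lightblue', lineWidth: 2, drawLabels: true };

// A per-call override is merged over the global settings, mirroring how an
// optional drawOptions argument overrides draw.options for one method call.
function mergeOptions(global: DrawOptionsSketch, override?: DrawOptionsSketch): DrawOptionsSketch {
  return { ...global, ...override };
}

const faceCallOptions = mergeOptions(globalOptions, { color: 'red' });
// faceCallOptions: { color: 'red', lineWidth: 2, drawLabels: true }
```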

faceTriangulation

faceTriangulation: number[]

Reference face triangulation array of 468 points, used for triangle references between points

faceUVMap

faceUVMap: number[][]

Reference UV map of 468 values, used for 3D mapping of the face mesh

image

image: { canvas: null | HTMLCanvasElement | OffscreenCanvas; tensor: null | Tensor<Rank> }
internal:

Instance of current image being processed

Type declaration

  • canvas: null | HTMLCanvasElement | OffscreenCanvas
  • tensor: null | Tensor<Rank>

models

models: { age: null | GraphModel; blazepose: null | GraphModel; centernet: null | GraphModel; efficientpose: null | GraphModel; embedding: null | GraphModel; emotion: null | GraphModel; face: null | [unknown, null | GraphModel, null | GraphModel]; faceres: null | GraphModel; gender: null | GraphModel; handpose: null | [null | GraphModel, null | GraphModel]; movenet: null | GraphModel; nanodet: null | GraphModel; posenet: null | GraphModel; segmentation: null | GraphModel }
internal:

Currently loaded models

Type declaration

  • age: null | GraphModel
  • blazepose: null | GraphModel
  • centernet: null | GraphModel
  • efficientpose: null | GraphModel
  • embedding: null | GraphModel
  • emotion: null | GraphModel
  • face: null | [unknown, null | GraphModel, null | GraphModel]
  • faceres: null | GraphModel
  • gender: null | GraphModel
  • handpose: null | [null | GraphModel, null | GraphModel]
  • movenet: null | GraphModel
  • nanodet: null | GraphModel
  • posenet: null | GraphModel
  • segmentation: null | GraphModel

performance

performance: Record<string, number>

Performance object that contains values for all recently performed operations
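Since `performance` is a plain `Record<string, number>`, it can be inspected with standard object utilities. A small sketch, sorting operations by elapsed time; the entry names and values here (`total`, `face`, `body`) are hypothetical examples, as actual keys depend on which operations Human has recently performed.

```typescript
// Hypothetical snapshot of the performance record (values in milliseconds).
const perf: Record<string, number> = { total: 120, face: 45, body: 30 };

// Sort operations by elapsed time, slowest first.
const slowestFirst = Object.entries(perf)
  .sort(([, a], [, b]) => b - a)
  .map(([name, ms]) => `${name}: ${ms}ms`);
// slowestFirst[0] === 'total: 120ms'
```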

result

result: Result

Last known result of detect run

  • Can be accessed anytime after initial detection

state

state: string

Current state of Human library

  • Can be polled to determine operations that are currently executed
  • Progresses through: 'config', 'check', 'backend', 'load', 'run:', 'idle'

sysinfo

sysinfo: { agent: string; platform: string }

Platform and agent information detected by Human

Type declaration

  • agent: string
  • platform: string

tf

tf: __module
internal:

Instance of TensorFlow/JS used by Human

  • Can be embedded or externally provided

Static Body

Body: Body

Static Config

Config: Config

Types used by Human

Static DrawOptions

DrawOptions: DrawOptions

Static Face

Face: Face

Static Gesture

Gesture: Gesture

Static Hand

Hand: Hand

Static Item

Item: Item

Static Person

Person: Person

Static Result

Result: Result

Static version

version: string

Current version of Human library in semver format

Methods

detect

  • detect(input: Input, userConfig?: Config | Record<string, unknown>): Promise<Error | Result>
  • Main detection method

    • Analyze configuration: Config
    • Pre-process input: Input
    • Run inference for all configured models
    • Process and return result: Result

    Parameters

    • input: Input
    • Optional userConfig: Config | Record<string, unknown>

    Returns Promise<Error | Result>

    result: Result
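Because detect resolves to `Error | Result`, callers should check for an `Error` before reading result fields. A hedged sketch of that flow; `HumanLike` is a stand-in interface for illustration and is not part of the library, and the `face` array is only one of the fields a real `Result` carries.

```typescript
// Stand-in for the subset of the Human API this sketch needs.
interface HumanLike {
  detect(input: unknown, userConfig?: Record<string, unknown>): Promise<Error | { face: unknown[] }>;
}

async function runDetection(human: HumanLike, input: unknown): Promise<number> {
  const result = await human.detect(input);
  if (result instanceof Error) {
    // Detection failed; surface the error instead of reading result fields.
    throw result;
  }
  return result.face.length; // e.g. number of detected faces
}
```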

enhance

  • enhance(input: Tensor<Rank>): null | Tensor<Rank>
  • Enhance method performs additional enhancements on a previously detected face image for further processing

    Parameters

    • input: Tensor<Rank>

    Returns null | Tensor<Rank>

    Tensor

load

  • load(userConfig?: Config | Record<string, unknown>): Promise<void>
  • Load method preloads all configured models on-demand

    • Not explicitly required as any required model is loaded implicitly on its first run

    Parameters

    • Optional userConfig: Config | Record<string, unknown>

    Returns Promise<void>

match

  • match(faceEmbedding: number[], db: { embedding: number[]; name: string; source: string }[], threshold?: number): { embedding: number[]; name: string; similarity: number; source: string }
  • Match method finds the best match between a provided face descriptor and a predefined database of known descriptors

    Parameters

    • faceEmbedding: number[]
    • db: { embedding: number[]; name: string; source: string }[]
    • threshold: number = 0

    Returns { embedding: number[]; name: string; similarity: number; source: string }

    best match

    • embedding: number[]
    • name: string
    • similarity: number
    • source: string
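A hedged sketch of the kind of best-match search described above: scan the database, score each entry against the provided descriptor, and keep the highest-scoring entry at or above the threshold. The scoring function here (inverse Euclidean distance) is an assumption for illustration; the actual metric used by match is not specified on this page.

```typescript
type DbEntry = { embedding: number[]; name: string; source: string };

// Illustrative similarity score: 1 = identical descriptors, approaching 0
// as the Euclidean distance between them grows.
function scoreSimilarity(a: number[], b: number[]): number {
  const dist = Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
  return 1 / (1 + dist);
}

function bestMatch(faceEmbedding: number[], db: DbEntry[], threshold = 0) {
  let best = { embedding: [] as number[], name: '', source: '', similarity: 0 };
  for (const entry of db) {
    const similarity = scoreSimilarity(faceEmbedding, entry.embedding);
    if (similarity > best.similarity && similarity >= threshold) {
      best = { ...entry, similarity };
    }
  }
  return best;
}
```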

next

  • Runs interpolation using the last known result and returns a smoothed result

    • Interpolation is based on time elapsed since the last known result, so it can be called independently of detection

    Parameters

    Returns Result

    result: Result
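A hedged sketch of time-based interpolation like next performs: blend a previously smoothed value toward the latest known value, weighted by how much time has passed since that result was produced. The exponential form and the half-life constant are assumptions for illustration, not the library's actual smoothing formula.

```typescript
// Blend prev toward latest; the weight approaches 1 as elapsed time grows,
// so the smoothed value converges on the latest known result.
function interpolate(prev: number, latest: number, elapsedMs: number, halfLifeMs = 100): number {
  const weight = 1 - Math.pow(0.5, elapsedMs / halfLifeMs);
  return prev + (latest - prev) * weight;
}

// After one half-life, the smoothed value is halfway to the latest result.
const halfway = interpolate(0, 10, 100); // 5
```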

segmentation

  • segmentation(input: Input, background?: Input): Promise<null | HTMLCanvasElement | OffscreenCanvas>
  • Segmentation method takes any input and returns a processed canvas with body segmentation

    • Optional parameter background is used to fill the background with a specific input
    • Segmentation is not triggered as part of the detect process

    Parameters

    • input: Input
    • Optional background: Input

    Returns Promise<null | HTMLCanvasElement | OffscreenCanvas>

    Canvas

similarity

  • similarity(embedding1: number[], embedding2: number[]): number
  • Similarity method calculates similarity between two provided face descriptors (face embeddings)

    • Calculation is based on normalized Minkowski distance between the two descriptors

    Parameters

    • embedding1: number[]
    • embedding2: number[]

    Returns number

    similarity: number
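A hedged sketch of a Minkowski-distance-based similarity between two descriptors. The order p and the normalization scheme (dividing by the square root of the descriptor length, then inverting into a 0..1 score) are assumptions; the page states only that a normalized Minkowski distance is used.

```typescript
// Minkowski distance of order p (p = 2 is Euclidean distance).
function minkowskiDistance(a: number[], b: number[], p = 2): number {
  const sum = a.reduce((acc, v, i) => acc + Math.abs(v - b[i]) ** p, 0);
  return sum ** (1 / p);
}

function similaritySketch(embedding1: number[], embedding2: number[], p = 2): number {
  // Normalize by descriptor length so dimensionality does not skew the
  // score, then invert: 1 = identical, 0 = maximally distant.
  const dist = minkowskiDistance(embedding1, embedding2, p) / Math.sqrt(embedding1.length);
  return Math.max(0, 1 - dist);
}
```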

warmup

  • warmup(userConfig?: Config | Record<string, unknown>): Promise<Result | { error: any }>
  • Warmup method pre-initializes all configured models for faster inference

    • Can take significant time on startup
    • Only used for webgl and humangl backends

    Parameters

    • Optional userConfig: Config | Record<string, unknown>

    Returns Promise<Result | { error: any }>