
Human Library

3D Face Detection, Face Embedding & Recognition,
Body Pose Tracking, Hand & Finger Tracking,
Iris Analysis, Age & Gender & Emotion Prediction
& Gesture Recognition


Native JavaScript module using the TensorFlow/JS machine learning library
Compatible with Browser, WebWorker and NodeJS execution on both Windows and Linux

  • Browser/WebWorker: Compatible with CPU, WebGL, WASM and WebGPU backends
  • NodeJS: Compatible with the software tfjs-node backend and the CUDA-accelerated tfjs-node-gpu backend (see the sketch below)
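
A minimal NodeJS sketch, assuming @vladmandic/human and @tensorflow/tfjs-node are installed; the backend name 'tensorflow' and the file name input.jpg are illustrative assumptions:

const tf = require('@tensorflow/tfjs-node'); // or @tensorflow/tfjs-node-gpu for CUDA acceleration
const Human = require('@vladmandic/human').default;
const fs = require('fs');

// 'tensorflow' selects the native tfjs-node backend (assumed name)
const human = new Human({ backend: 'tensorflow' });

async function main() {
  const buffer = fs.readFileSync('input.jpg'); // placeholder input file
  const tensor = tf.node.decodeImage(buffer); // decode the image file into a tensor
  const result = await human.detect(tensor);
  console.log(`detected ${result.face.length} face(s)`);
  tf.dispose(tensor); // release tensor memory
}

main();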

Check out the Live Demo for processing of live WebCam video or static images


Project pages


Wiki pages


Additional notes


Default models

Default models in the Human library are:

  • Face Detection: MediaPipe BlazeFace-Back
  • Face Mesh: MediaPipe FaceMesh
  • Face Iris Analysis: MediaPipe Iris
  • Emotion Detection: Oarriaga Emotion
  • Gender Detection: Oarriaga Gender
  • Age Detection: SSR-Net Age IMDB
  • Body Analysis: PoseNet
  • Face Embedding: Sirius-AI MobileFaceNet Embedding

Note that alternative models are provided and can be enabled via configuration.
For example, the PoseNet body model can be switched for the BlazePose model depending on the use case, as shown in the sketch below.
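
A minimal sketch of such a switch, assuming the configuration exposes a per-module modelPath; the file name blazepose.json is an assumption and may differ between versions:

import Human from '@vladmandic/human';

const human = new Human({
  body: {
    enabled: true,
    modelPath: 'blazepose.json', // assumed file name; see List of Models for the exact path
  },
});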

For more info, see Configuration Details and List of Models


See issues and discussions for a list of known limitations and planned enhancements

Suggestions are welcome!




Options

As presented in the demo application...

[Screenshot: options visible in the demo]
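
A minimal sketch of a configuration covering the most common options; defaults are used for anything omitted, and the exact flag names may vary between versions:

import Human from '@vladmandic/human';

const config = {
  backend: 'webgl',           // cpu, wasm, webgl or webgpu in the browser
  face: { enabled: true },    // face detection, mesh, iris, emotion, age and gender
  body: { enabled: true },    // body pose analysis
  hand: { enabled: true },    // hand and finger tracking
  gesture: { enabled: true }, // gesture recognition
};
const human = new Human(config);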




Examples


Training image:

[Image: example training image]

Using static images:

[Image: example using a static image]

Live WebCam view:

[Image: example using live WebCam]




A simple example app that uses Human to process video input and
draw the output on screen using the built-in draw helper functions:

import Human from '@vladmandic/human';

// create instance of human with a simple configuration; defaults are used for all other values
const config = { backend: 'wasm' };
const human = new Human(config);

function detectVideo() {
  // select input HTMLVideoElement and output HTMLCanvasElement from page
  const inputVideo = document.getElementById('video-id');
  const outputCanvas = document.getElementById('canvas-id');
  // run detection using the instance configuration
  human.detect(inputVideo).then((result) => {
    // the result object contains detection details as well as the processed canvas itself
    // first draw processed frame on canvas
    human.draw.canvas(result.canvas, outputCanvas);
    // then draw results on the same canvas
    human.draw.face(outputCanvas, result.face);
    human.draw.body(outputCanvas, result.body);
    human.draw.hand(outputCanvas, result.hand);
    human.draw.gesture(outputCanvas, result.gesture);
    // loop immediately to the next frame
    requestAnimationFrame(detectVideo);
  });
}

detectVideo();
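
Note that requestAnimationFrame is called inside the detect promise handler, so a new frame is only scheduled once the previous detection has completed; this keeps the loop from queueing detections faster than the backend can process them.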