
Human Library

AI-powered 3D Face Detection & Rotation Tracking, Face Description & Recognition,
Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis,
Age & Gender & Emotion Prediction, Gaze Tracking, Gesture Recognition, Body Segmentation


JavaScript module using the TensorFlow/JS machine learning library

  • Browser:
    Compatible with both desktop and mobile platforms
    Compatible with CPU, WebGL, WASM backends
    Compatible with WebWorker execution
  • NodeJS:
    Compatible with both the software tfjs-node backend and
    the GPU-accelerated tfjs-node-gpu backend using CUDA libraries

Check out the Live Demo app for processing live WebCam video or static images

  • To start video detection, simply press Play
  • To process images, simply drag & drop them into your browser window
  • Note: For optimal performance, enable only the models you'd like to use (see the configuration sketch after this list)
  • Note: If you have a modern GPU, the default WebGL backend is preferred; otherwise select the WASM backend
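
In your own code, the same model selection is expressed through the configuration object. A minimal sketch, assuming Human's nested configuration schema (see Configuration Details for the authoritative property list):

// enable only the models you need; everything else stays unloaded
const config = {
  backend: 'webgl',                                     // or 'wasm' on systems without a capable GPU
  face: { enabled: true, emotion: { enabled: false } }, // face detection without emotion prediction
  body: { enabled: false },                             // skip body pose tracking
  hand: { enabled: false },                             // skip hand and finger tracking
};
const human = new Human(config);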

Demos

Project pages

Wiki pages

Additional notes


See issues and discussions for a list of known limitations and planned enhancements

Suggestions are welcome!



Examples

Visit the Examples gallery for more examples
https://vladmandic.github.io/human/samples/samples.html

samples


Options

All options as presented in the demo application...

demo/index.html

Options visible in demo


Results Browser:
[ Demo -> Display -> Show Results ]
Results


Advanced Examples

  1. Face Similarity Matching:
    Extracts all faces from the provided input images,
    sorts them by similarity to the selected face,
    and optionally matches each detected face against a database of known people to guess their names
    (a minimal API sketch follows below)

demo/facematch

Face Matching
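
A minimal sketch of descriptor-based matching, assuming an existing human instance, two already-loaded inputs, and that each detected face carries its descriptor in result.face[n].embedding; the human.similarity() helper compares two descriptors (see demo/facematch for the full implementation):

async function compareFaces(imageA, imageB) { // inputs: any supported input type
  const first = await human.detect(imageA);
  const second = await human.detect(imageB);
  // similarity is a normalized score where 1.0 means identical descriptors
  const score = human.similarity(first.face[0].embedding, second.face[0].embedding);
  console.log(`face similarity: ${Math.round(100 * score)}%`);
}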


  2. Face3D OpenGL Rendering:

demo/face3d

Face3D


  3. VR Model Tracking:
    vrmodel

468-Point Face Mesh Details:
(view in full resolution to see keypoints)

FaceMesh




Quick Start

Simply load Human (IIFE version) directly from a cloud CDN in your HTML file:
(pick one: jsdelivr, unpkg or cdnjs)

<script src="https://cdn.jsdelivr.net/npm/@vladmandic/human/dist/human.js"></script>
<script src="https://unpkg.dev/@vladmandic/human/dist/human.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/human/2.1.5/human.js"></script>

For details, including how to use the Browser ESM version or the NodeJS version of Human, see Installation
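
As an illustration only, a NodeJS setup might look like the sketch below; the exact import path, export shape and backend name depend on your install, so follow the Installation guide for specifics:

// hedged sketch: NodeJS usage with the native tfjs-node backend
const fs = require('fs');
const tf = require('@tensorflow/tfjs-node'); // registers the native 'tensorflow' backend
const Human = require('@vladmandic/human').default; // Human class as default export (assumption)
const human = new Human({ backend: 'tensorflow' });

async function main() {
  const buffer = fs.readFileSync('input.jpg'); // placeholder input file
  const tensor = tf.node.decodeImage(buffer); // decode image bytes into a tensor
  const result = await human.detect(tensor);
  console.log(`detected ${result.face.length} face(s)`);
  tf.dispose(tensor); // release tensor memory
}

main();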


Inputs

The Human library can process all known input types:

  • Image, ImageData, ImageBitmap, Canvas, OffscreenCanvas, Tensor,
  • HTMLImageElement, HTMLCanvasElement, HTMLVideoElement, HTMLMediaElement

Additionally, HTMLVideoElement and HTMLMediaElement can be a standard <video> tag that links to:

  • WebCam on user's system (see the getUserMedia sketch after this list)
  • Any supported video type
    For example: .mp4, .avi, etc.
  • Additional video types supported via HTML5 Media Source Extensions
    Live streaming examples:
    • HLS (HTTP Live Streaming) using hls.js
    • DASH (Dynamic Adaptive Streaming over HTTP) using dash.js
  • WebRTC media track using built-in support
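
A minimal sketch of wiring a WebCam to a <video> element using the standard getUserMedia API; the 'video-id' element id matches the examples that follow:

const video = document.getElementById('video-id');

async function startWebCam() {
  // request the default camera; tighten constraints (resolution, facing mode) as needed
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });
  video.srcObject = stream;
  await video.play(); // the detection examples below can now read frames from this element
}

startWebCam();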

Example

A simple example app that uses Human to process video input and
draw the output on screen using internal draw helper functions

// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config);
// select input HTMLVideoElement and output HTMLCanvasElement from page
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');

function detectVideo() {
  // perform processing using default configuration
  human.detect(inputVideo).then((result) => {
    // result object will contain detected details
    // as well as the processed canvas itself
    // so let's first draw the processed frame on the canvas
    human.draw.canvas(result.canvas, outputCanvas);
    // then draw results on the same canvas
    human.draw.face(outputCanvas, result.face);
    human.draw.body(outputCanvas, result.body);
    human.draw.hand(outputCanvas, result.hand);
    human.draw.gesture(outputCanvas, result.gesture);
    // and immediately loop to the next frame
    requestAnimationFrame(detectVideo);
  });
}

detectVideo();

or using async/await:

// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');

async function detectVideo() {
  const result = await human.detect(inputVideo); // run detection
  human.draw.all(outputCanvas, result); // draw all results
  requestAnimationFrame(detectVideo); // run loop
}

detectVideo(); // start loop

or using Events:

// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');

human.events.addEventListener('detect', () => { // event gets triggered when detect is complete
  human.draw.all(outputCanvas, human.result); // draw all results
});

function detectVideo() {
  human.detect(inputVideo) // run detection
  .then(() => requestAnimationFrame(detectVideo)); // when detection completes, start processing the next frame
}

detectVideo(); // start loop

or using interpolated results for smooth video processing by separating detection and drawing loops:

const human = new Human(); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');
let result;

async function detectVideo() {
  result = await human.detect(inputVideo); // run detection
  requestAnimationFrame(detectVideo); // run detect loop
}

async function drawVideo() {
  if (result) { // check if result is available
    const interpolated = human.next(result); // calculate next interpolated frame
    human.draw.all(outputCanvas, interpolated); // draw the frame
  }
  requestAnimationFrame(drawVideo); // run draw loop
}

detectVideo(); // start detection loop
drawVideo(); // start draw loop

And for even better results, you can run detection in a separate web worker thread
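
A minimal sketch of that pattern, assuming the IIFE bundle loads via importScripts, frames transfer as ImageBitmap objects, and the bundle exposes the class as Human.Human (export names and worker-capable backends may differ per version; see the demo sources for the working implementation):

// detect-worker.js — runs detection off the main thread
importScripts('https://cdn.jsdelivr.net/npm/@vladmandic/human/dist/human.js');
const human = new Human.Human({ backend: 'wasm' }); // webgl is often unavailable inside workers
onmessage = async (msg) => {
  const result = await human.detect(msg.data.image); // image arrives as a transferred ImageBitmap
  result.canvas = null; // drop the non-cloneable canvas reference before posting
  postMessage({ result });
};

// main thread — capture frames, hand them to the worker, draw returned results
const video = document.getElementById('video-id');
const canvas = document.getElementById('canvas-id');
const worker = new Worker('detect-worker.js'); // file name is illustrative
const drawer = new Human(); // main-thread instance used only for its draw helpers

async function sendFrame() {
  const bitmap = await createImageBitmap(video);
  worker.postMessage({ image: bitmap }, [bitmap]); // transfer the frame, do not copy it
}

worker.onmessage = (msg) => {
  drawer.draw.all(canvas, msg.data.result); // draw on the main thread, which owns the DOM
  sendFrame(); // request the next frame once the worker is idle
};

sendFrame(); // start the loop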




Default models

The default models in the Human library are:

  • Face Detection: MediaPipe BlazeFace - Back variation
  • Face Mesh: MediaPipe FaceMesh
  • Face Iris Analysis: MediaPipe Iris
  • Face Description: HSE FaceRes
  • Emotion Detection: Oarriaga Emotion
  • Body Analysis: MoveNet - Lightning variation
  • Hand Analysis: MediaPipe Hands
  • Body Segmentation: Google Selfie
  • Object Detection: MB3 CenterNet

Note that alternative models are provided and can be enabled via configuration
For example, the default MoveNet body model can be switched for BlazePose, EfficientPose or PoseNet depending on the use case
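
As an illustration only, switching the body model is a configuration change; the modelPath value below is a placeholder, so check the List of Models for actual file names:

const human = new Human({
  body: {
    enabled: true,
    modelPath: 'blazepose.json', // placeholder; resolved relative to config.modelBasePath
  },
});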

For more info, see Configuration Details and List of Models




Diagnostics




Human library is written in TypeScript 4.4
Conforming to JavaScript ECMAScript version 2020 standard
Build target is JavaScript ECMAScript version 2018


For details see Wiki Pages
and API Specification

