Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction & Emotion Prediction

Compatible with Browser, WebWorker and NodeJS execution!

This is a pre-release project; see the issues page for a list of known limitations

Suggestions are welcome!


Example using static image: (screenshot)

Example using webcam: (screenshot)


Installation

Important
The packaged (IIFE and ESM) version of Human includes the TensorFlow/JS (TFJS) 2.6.0 library, which can be accessed via human.tf
You should NOT manually load another instance of TFJS, but if you do, be aware of possible version conflicts

There are multiple ways to use the Human library; pick the one that suits you:

1. IIFE script

The simplest way to use Human within a browser

Simply download dist/human.js, include it in your HTML file, and it's ready to use.

  <script src="dist/human.js"><script>

The IIFE script auto-registers the global namespace human within the global window object
This way you can also use the Human library within an embedded <script> tag in your HTML page for an all-in-one approach

The IIFE script is distributed in minified form with an attached sourcemap
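
For example, a minimal all-in-one page might look like this (a sketch; the image element and file names are illustrative):

  <img id="image" src="sample.jpg">
  <script src="dist/human.js"></script>
  <script>
    // the IIFE build registers the global `human` object
    async function run() {
      const result = await human.detect(document.getElementById('image'));
      console.log(result);
    }
    window.onload = run; // wait until the image has loaded
  </script>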

2. ESM module

Recommended way to use Human within a browser

2.1 With Bundler

If you're using a bundler (such as rollup, webpack, or esbuild) to package your client application, you can import the ESM version of the Human library, which supports full tree shaking

  import human from '@vladmandic/human'; // points to @vladmandic/human/dist/human.esm.js

Or, if you prefer to package your own version of tfjs, you can use the nobundle version

  import * as tf from '@tensorflow/tfjs';
  import human from '@vladmandic/human/dist/human.nobundle.js'; // same functionality as default import, but without tfjs bundled

2.2 Using Script Module

You can use the same syntax within your main JS file if it's imported with <script type="module">

  <script src="./index.js" type="module">

and then in your index.js

  import human from 'dist/human.esm.js';

The ESM script is distributed in minified form with an attached sourcemap
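
As a sketch, an index.js that runs detection on a webcam stream might look like this (the video element and stream handling are illustrative):

  import human from 'dist/human.esm.js';

  async function run() {
    const video = document.getElementById('video');
    // request webcam access and feed the stream into the video element
    video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
    await video.play();
    // HTMLVideo is one of the supported input types
    const result = await human.detect(video);
    console.log(result);
  }
  run();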

3. NPM module

Recommended for NodeJS projects that will execute in the backend

The entry point is a bundle in CJS format, dist/human.node.js
You also need to install and include tfjs-node or tfjs-node-gpu in your project so it can register an optimized backend

Install with:

  npm install @tensorflow/tfjs-node @vladmandic/human

And then use with:

  const tf = require('@tensorflow/tfjs-node'); 
  const human = require('@vladmandic/human'); // points to @vladmandic/human/dist/human.node.js

Since NodeJS projects load weights from the local filesystem instead of using http calls, you must modify the default configuration to include the correct paths with the file:// prefix
For example:

const config = {
  body: { enabled: true, modelPath: 'file://models/posenet/model.json' },
}

Note that when using Human in NodeJS, you must load and parse the image before you pass it for detection
For example:

  const buffer = fs.readFileSync(input);
  const image = tf.node.decodeImage(buffer);
  const result = await human.detect(image, config);
  image.dispose();
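
Putting it together, a complete NodeJS sketch might look like this (file names are illustrative; modules without file:// paths are disabled here to keep the example self-contained):

  const fs = require('fs');
  const tf = require('@tensorflow/tfjs-node');
  const human = require('@vladmandic/human');

  const config = {
    face: { enabled: false }, // disabled since its modelPath is not overridden here
    hand: { enabled: false },
    body: { enabled: true, modelPath: 'file://models/posenet/model.json' },
  };

  async function main() {
    const buffer = fs.readFileSync('sample.jpg');     // load image from local filesystem
    const image = tf.node.decodeImage(buffer);        // decode into a tensor
    const result = await human.detect(image, config); // run detection
    image.dispose();                                  // release tensor memory
    console.log(JSON.stringify(result, null, 2));
  }
  main();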

Weights

Pretrained model weights are included in ./models
The default configuration uses paths relative to your entry script, pointing to ../models
If your application resides in a different folder, modify the modelPath property in the configuration of each module
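
For example, if your application serves models from /assets/models instead (paths are illustrative), override the relevant path in each module:

  const config = {
    face: { detector: { modelPath: '/assets/models/blazeface/model.json' } },
    body: { modelPath: '/assets/models/posenet/model.json' },
  };
  const result = await human.detect(image, config);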


Demo

Demos are included in /demo:

Browser:

  • demo-esm: Demo using Browser with ESM module
  • demo-iife: Demo using Browser with IIFE module
  • demo-webworker: Demo using Browser with ESM module and Web Workers

All three browser demos are identical; they just illustrate different ways to load and work with the Human library.

NodeJS:

  • demo-node: Demo using NodeJS with CJS module
    This is a very simple demo: the Human library is compatible with NodeJS execution
    and is able to load images and models from the local filesystem,
    but images must be loaded and decoded manually, as described above

Usage

The Human library does not require special initialization. All configuration is done in a single JSON object, and all model weights are dynamically loaded upon their first usage (and only then; Human will not load weights that it doesn't need according to the configuration).

There is only ONE method you need:

  import * as tf from '@tensorflow/tfjs';
  import human from '@vladmandic/human';

  // 'image': can be any type of image object: HTMLImage, HTMLVideo, HTMLMedia, Canvas, Tensor4D
  // 'options': optional parameter used to override any options present in default configuration
  const result = await human.detect(image, options?)

or if you want to use promises

  human.detect(image, options?).then((result) => {
    // your code
  })

Additionally, the Human library exposes several objects:

  human.defaults // default configuration object
  human.models   // dynamically maintained object of any loaded models
  human.tf       // instance of tfjs used by human
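
For example, after a detection you can inspect these objects (a sketch; the exact model keys depend on which modules have run):

  const result = await human.detect(image);
  console.log(Object.keys(human.models)); // models loaded so far
  console.log(human.tf.getBackend());     // backend used by the bundled tfjs
  console.log(human.defaults.face.detector.maxFaces); // default: 10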

Configuration

Below is the output of the human.defaults object
Any property can be overridden by passing a user object to human.detect()
Note that the user object and the default configuration are merged using deep-merge, so you do not need to redefine the entire configuration

human.defaults = {
  face: {
    enabled: true,
    detector: {
      modelPath: '../models/blazeface/model.json',
      maxFaces: 10,
      skipFrames: 10,
      minConfidence: 0.8,
      iouThreshold: 0.3,
      scoreThreshold: 0.75,
    },
    mesh: {
      enabled: true,
      modelPath: '../models/facemesh/model.json',
    },
    iris: {
      enabled: true,
      modelPath: '../models/iris/model.json',
    },
    age: {
      enabled: true,
      modelPath: '../models/ssrnet-imdb-age/model.json',
      skipFrames: 10,
    },
    gender: {
      enabled: true,
      modelPath: '../models/ssrnet-imdb-gender/model.json',
    },
    emotion: {
      enabled: true,
      minConfidence: 0.5,
      skipFrames: 10,
      useGrayscale: true,
      modelPath: '../models/emotion/model.json',
    },
  },
  body: {
    enabled: true,
    modelPath: '../models/posenet/model.json',
    maxDetections: 5,
    scoreThreshold: 0.75,
    nmsRadius: 20,
  },
  hand: {
    enabled: true,
    skipFrames: 10,
    minConfidence: 0.8,
    iouThreshold: 0.3,
    scoreThreshold: 0.75,
    detector: {
      anchors: '../models/handdetect/anchors.json',
      modelPath: '../models/handdetect/model.json',
    },
    skeleton: {
      modelPath: '../models/handskeleton/model.json',
    },
  },
};

Where:

  • enabled: controls whether the specified module is enabled (note: a module is not loaded until it is required)
  • modelPath: path to a specific pre-trained model's weights
  • maxFaces, maxDetections: how many faces or people to analyze; limiting the number in busy scenes results in higher performance
  • skipFrames: how many frames to skip before re-running bounding box detection (e.g., a face does not move much between frames of a video, so it's OK to reuse the previously detected position and just run face geometry analysis)
  • minConfidence: threshold for discarding a prediction
  • iouThreshold: threshold for deciding whether boxes overlap too much in non-maximum suppression
  • scoreThreshold: threshold for deciding when to remove boxes based on score in non-maximum suppression
  • useGrayscale: convert color input to grayscale before processing, or use a single channel when color input is not supported
  • nmsRadius: radius for deciding whether points are too close together in non-maximum suppression
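
Since the user object is deep-merged with the defaults, overrides can stay minimal. For example, to analyze a single face and skip hand detection entirely (values are illustrative):

  const result = await human.detect(image, {
    face: { detector: { maxFaces: 1 } }, // all other face options keep their defaults
    hand: { enabled: false },            // skip hand detection
  });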

Outputs

The result of human.detect() is a single object that includes data for all enabled modules and all detected objects:

result = {
  face:            // <array of detected objects>
  [
    {
      confidence,  // <number>
      box,         // <array [x, y, width, height]>
      mesh,        // <array of 3D points [x, y, z]> 468 base points & 10 iris points
      annotations, // <list of object { landmark: array of points }> 32 base annotated landmarks & 2 iris annotations
      iris,        // <number> relative distance of iris to camera; multiply by focal length to get actual distance
      age,         // <number> estimated age
      gender,      // <string> 'male', 'female'
    }
  ],
  body:            // <array of detected objects>
  [
    {
      score,       // <number>,
      keypoints,   // <array of 2D landmarks [ score, landmark, position [x, y] ]> 17 annotated landmarks
    }
  ],
  hand:            // <array of detected objects>
  [
    {
      confidence,  // <number>,
      box,         // <array [x, y, width, height]>,
      landmarks,   // <array of 3D points [x, y, z]> 21 points
      annotations, // <array of 3D landmarks [ landmark: <array of points> ]> 5 annotated landmarks
    }
  ],
  emotion:         // <array of emotions>
  [
    {
      score,       // <number> probability of emotion
      emotion,     // <string> 'angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral'
    }
  ],
}

Additionally, the result object includes internal performance data - total time spent and time per module (measured in ms):

  result.performance = {
    body,
    hand,
    face,
    agegender,
    emotion,
    total,
  }
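
A sketch of consuming the result object, using the fields documented above:

  const result = await human.detect(image);
  for (const face of result.face) {
    console.log(`face box=[${face.box}] confidence=${face.confidence} age=${face.age} gender=${face.gender}`);
  }
  for (const body of result.body) console.log(`body score=${body.score}`);
  console.log(`detection took ${result.performance.total} ms`);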

Build

If you want to modify the library and perform a full rebuild:

Clone the repository, install dependencies, check for errors, and run a full rebuild, which creates bundles from /src into /dist:

git clone https://github.com/vladmandic/human
cd human
npm install # installs all project dependencies
npm run lint
npm run build

The project is written in pure JavaScript (ECMAScript 2020)

The only runtime dependency is @tensorflow/tfjs. Development dependencies are eslint, used for code linting, and esbuild, used for IIFE and ESM script bundling


Performance

Performance will vary depending on your hardware, but also on the resolution of the input video/image, which modules are enabled, and their parameters

For example, on a desktop with a low-end nVidia GTX1050 it can perform multiple face detections at 60+ FPS, but drops to 10 FPS on a moderately complex image if all modules are enabled

Performance per module:

  • Enabled all: 10 FPS
  • Face Detect: 80 FPS
  • Face Geometry: 30 FPS (includes face detect)
  • Face Iris: 25 FPS (includes face detect and face geometry)
  • Age: 60 FPS (includes face detect)
  • Gender: 60 FPS (includes face detect)
  • Emotion: 60 FPS (includes face detect)
  • Hand: 40 FPS
  • Body: 50 FPS

The library can also be used on mobile devices


Credits


Todo

  • Tweak default parameters and factorization for age/gender/emotion
  • Verify age/gender models