Human Library
3D Face Detection & Rotation Tracking, Face Embedding & Recognition,
Body Pose Tracking, Hand & Finger Tracking,
Iris Analysis, Age & Gender & Emotion Prediction
& Gesture Recognition
Native JavaScript module using the TensorFlow.js machine learning library
Compatible with Browser, WebWorker and NodeJS execution on both Windows and Linux
- Browser/WebWorker: Compatible with CPU, WebGL, WASM and WebGPU backends
- NodeJS: Compatible with the software-only tfjs-node backend and the CUDA-accelerated tfjs-node-gpu backend (see the sketch after this list)
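As a minimal NodeJS sketch (assuming @tensorflow/tfjs-node is installed alongside @vladmandic/human; the backend name, CommonJS import style and input file name are assumptions, see Installation and Notes on Backends for the supported setup):

const fs = require('fs');
const tf = require('@tensorflow/tfjs-node'); // swap for @tensorflow/tfjs-node-gpu to use CUDA acceleration
const Human = require('@vladmandic/human').default;

async function main() {
  const human = new Human({ backend: 'tensorflow' }); // assumed backend name for tfjs-node
  const buffer = fs.readFileSync('sample.jpg'); // hypothetical input image
  const tensor = tf.node.decodeImage(buffer); // decode file into a tensor that detect() can consume
  const result = await human.detect(tensor);
  console.log(result.face, result.body, result.hand, result.gesture);
  tensor.dispose();
}

main();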
Check out the Live Demo for processing of live WebCam video or static images
Project pages
Wiki pages
- Home
- Demos
- Installation
- Usage & Functions
- Configuration Details
- Output Details
- Face Embedding and Recognition
- Gesture Recognition
Additional notes
- Notes on Backends
- Development Server
- Build Process
- Performance Notes
- Performance Profiling
- Platform Support
- List of Models & Credits
Default models
Default models in the Human library are:
- Face Detection: MediaPipe BlazeFace-Back
- Face Mesh: MediaPipe FaceMesh
- Face Iris Analysis: MediaPipe Iris
- Emotion Detection: Oarriaga Emotion
- Gender Detection: Oarriaga Gender
- Age Detection: SSR-Net Age IMDB
- Body Analysis: PoseNet
- Face Embedding: BecauseofAI MobileFace Embedding
Note that alternative models are provided and can be enabled via configuration.
For example, the default PoseNet body model can be switched for the BlazePose model depending on the use case (see the configuration sketch below).
For more info, see Configuration Details and List of Models
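As a hedged illustration of switching the body model, the configuration could look like this (the modelPath value is an assumed file name; confirm the actual path in Configuration Details):

const config = {
  backend: 'webgl',
  body: { enabled: true, modelPath: 'blazepose.json' }, // assumed file name; default configuration uses the PoseNet model
};
const human = new Human(config);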
See issues and discussions for list of known limitations and planned enhancements
Suggestions are welcome!
Options
As presented in the demo application...
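In code, those demo options map to configuration flags roughly along these lines (a minimal sketch with assumed keys and defaults; see Configuration Details for the full schema):

const config = {
  backend: 'webgl',
  face: { enabled: true, iris: { enabled: true }, emotion: { enabled: true } },
  body: { enabled: true },
  hand: { enabled: true },
  gesture: { enabled: true },
};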
Examples
Training image:
Using static images:
Live WebCam view:
468-Point Face Mesh Details:
A simple example app that uses Human to process video input and
draw the output on screen using the built-in draw helper functions:
import Human from '@vladmandic/human';

// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config);

function detectVideo() {
  // select input HTMLVideoElement and output HTMLCanvasElement from page
  const inputVideo = document.getElementById('video-id');
  const outputCanvas = document.getElementById('canvas-id');
  // perform processing using default configuration
  human.detect(inputVideo).then((result) => {
    // result object contains detected details
    // as well as the processed canvas itself
    // so let's first draw the processed frame on the canvas
    human.draw.canvas(result.canvas, outputCanvas);
    // then draw results on the same canvas
    human.draw.face(outputCanvas, result.face);
    human.draw.body(outputCanvas, result.body);
    human.draw.hand(outputCanvas, result.hand);
    human.draw.gesture(outputCanvas, result.gesture);
    // loop immediately to the next frame
    requestAnimationFrame(detectVideo);
  });
}

detectVideo();
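The same detect() call works on static images; a minimal sketch assuming an HTMLImageElement with id 'image-id' exists on the page (field names follow Output Details and may vary by version):

async function detectImage() {
  const inputImage = document.getElementById('image-id');
  const result = await human.detect(inputImage);
  // each detected face entry includes age, gender and emotion predictions
  for (const face of result.face) console.log(face.age, face.gender, face.emotion);
}

detectImage();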