# Human Library

![Version](https://img.shields.io/github/package-json/v/vladmandic/human?style=flat-square)
![Last Commit](https://img.shields.io/github/last-commit/vladmandic/human?style=flat-square)
![License](https://img.shields.io/github/license/vladmandic/human?style=flat-square)
![GitHub Status Checks](https://img.shields.io/github/checks-status/vladmandic/human/main?style=flat-square)
![Vulnerabilities](https://img.shields.io/snyk/vulnerabilities/github/vladmandic/human?style=flat-square)

**3D Face Detection & Rotation Tracking, Face Embedding & Recognition,**
**Body Pose Tracking, Hand & Finger Tracking,**
**Iris Analysis, Age & Gender & Emotion Prediction**
**& Gesture Recognition**
<br>

Native JavaScript module using TensorFlow/JS Machine Learning library

Compatible with *Browser*, *WebWorker* and *NodeJS* execution on both Windows and Linux

- Browser/WebWorker: compatible with *CPU*, *WebGL*, *WASM* and *WebGPU* backends
- NodeJS: compatible with the software *tfjs-node* backend and the CUDA-accelerated *tfjs-node-gpu* backend
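
Backend selection is part of the configuration object passed to the library. As a minimal sketch (using the backend names from the list above; the full set of accepted values is documented on the Configuration wiki page), an app could pick a backend based on where it is running:

```javascript
// pick a backend name based on the execution environment:
// browsers typically use 'webgl' (with 'wasm' or 'cpu' as fallbacks),
// while NodeJS with tfjs-node registers the 'tensorflow' backend
const isNode = (typeof window === 'undefined');
const config = { backend: isNode ? 'tensorflow' : 'webgl' };
```

The resulting object is then passed to the `Human` constructor.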

Check out the [**Live Demo**](https://vladmandic.github.io/human/demo/index.html) to process live webcam video or static images

<br>

## Project pages

- [**Live Demo**](https://vladmandic.github.io/human/demo/index.html)
- [**Code Repository**](https://github.com/vladmandic/human)
- [**NPM Package**](https://www.npmjs.com/package/@vladmandic/human)
- [**Issues Tracker**](https://github.com/vladmandic/human/issues)
- [**Change Log**](https://github.com/vladmandic/human/blob/main/CHANGELOG.md)

<br>

## Wiki pages

- [**Home**](https://github.com/vladmandic/human/wiki)
- [**Demos**](https://github.com/vladmandic/human/wiki/Demos)
- [**Installation**](https://github.com/vladmandic/human/wiki/Install)
- [**Usage & Functions**](https://github.com/vladmandic/human/wiki/Usage)
- [**Configuration Details**](https://github.com/vladmandic/human/wiki/Configuration)
- [**Output Details**](https://github.com/vladmandic/human/wiki/Outputs)
- [**Face Embedding and Recognition**](https://github.com/vladmandic/human/wiki/Embedding)
- [**Gesture Recognition**](https://github.com/vladmandic/human/wiki/Gesture)

<br>

## Additional notes

- [**Notes on Backends**](https://github.com/vladmandic/human/wiki/Backends)
- [**Development Server**](https://github.com/vladmandic/human/wiki/Development-Server)
- [**Build Process**](https://github.com/vladmandic/human/wiki/Build-Process)
- [**Performance Notes**](https://github.com/vladmandic/human/wiki/Performance)
- [**Performance Profiling**](https://github.com/vladmandic/human/wiki/Profiling)
- [**Platform Support**](https://github.com/vladmandic/human/wiki/Platforms)
- [**List of Models & Credits**](https://github.com/vladmandic/human/wiki/Models)

<br>

## Default models

The default models in the Human library are:
- **Face Detection**: MediaPipe BlazeFace-Back
- **Face Mesh**: MediaPipe FaceMesh
- **Face Iris Analysis**: MediaPipe Iris
- **Emotion Detection**: Oarriaga Emotion
- **Gender Detection**: Oarriaga Gender
- **Age Detection**: SSR-Net Age IMDB
- **Body Analysis**: PoseNet
- **Face Embedding**: Sirius-AI MobileFaceNet Embedding

Note that alternative models are provided and can be enabled via configuration.
For example, the default `PoseNet` body model can be switched for the `BlazePose` model depending on the use case.

For more info, see [**Configuration Details**](https://github.com/vladmandic/human/wiki/Configuration) and [**List of Models**](https://github.com/vladmandic/human/wiki/Models)
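
As a sketch of what such a model switch might look like (the configuration keys and the model path below are illustrative assumptions, not verified values; check the exact names against the Configuration and Models wiki pages):

```javascript
// hypothetical configuration fragment that replaces the default PoseNet
// body model with BlazePose by pointing at a different model file;
// the 'modelPath' value is illustrative, not a real path in this repository
const config = {
  body: {
    enabled: true,
    modelPath: 'blazepose.json', // hypothetical model file name
  },
};
```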
<br>

*See [**issues**](https://github.com/vladmandic/human/issues?q=) and [**discussions**](https://github.com/vladmandic/human/discussions) for a list of known limitations and planned enhancements*

*Suggestions are welcome!*
<br><hr><br>
## Options
As presented in the demo application:

![Options visible in demo](assets/screenshot-menu.png)
<br><hr><br>

## Examples

<br>

**Training image:**

![Example Training Image](assets/screenshot-sample.png)

**Using static images:**

![Example Using Image](assets/screenshot-images.jpg)

**Live WebCam view:**

![Example Using WebCam](assets/screenshot-webcam.jpg)


<br><hr><br>

Example of a simple app that uses Human to process video input and draw the output on screen using the built-in draw helper functions:

```js
import Human from '@vladmandic/human';

// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config);

function detectVideo() {
  // select input HTMLVideoElement and output HTMLCanvasElement from page
  const inputVideo = document.getElementById('video-id');
  const outputCanvas = document.getElementById('canvas-id');
  // perform processing using default configuration
  human.detect(inputVideo).then((result) => {
    // result object contains detected details
    // as well as the processed canvas itself,
    // so first draw the processed frame on the output canvas
    human.draw.canvas(result.canvas, outputCanvas);
    // then draw results on the same canvas
    human.draw.face(outputCanvas, result.face);
    human.draw.body(outputCanvas, result.body);
    human.draw.hand(outputCanvas, result.hand);
    human.draw.gesture(outputCanvas, result.gesture);
    // loop immediately to the next frame
    requestAnimationFrame(detectVideo);
  });
}

detectVideo();
```

<br>

![Downloads](https://img.shields.io/npm/dm/@vladmandic/human?style=flat-square)
![Stars](https://img.shields.io/github/stars/vladmandic/human?style=flat-square)
![Code Size](https://img.shields.io/github/languages/code-size/vladmandic/human?style=flat-square)
![Commit Activity](https://img.shields.io/github/commit-activity/m/vladmandic/human?style=flat-square)