JavaScript module using TensorFlow/JS Machine Learning library

<br>
*Check out [**Simple Live Demo**](https://vladmandic.github.io/human/demo/typescript/index.html), a fully annotated app that makes a good starting point ([html](https://github.com/vladmandic/human/blob/main/demo/typescript/index.html)) ([code](https://github.com/vladmandic/human/blob/main/demo/typescript/index.ts))*

*Check out [**Main Live Demo**](https://vladmandic.github.io/human/demo/index.html) app for advanced processing of webcam, video stream, or static images with all possible tunable options*

- To start video detection, simply press *Play*
- To process images, simply drag & drop them into your browser window
- Note: For optimal performance, select only the models you'd like to use
- Note: If you have a modern GPU, the WebGL (default) backend is preferred; otherwise select the WASM backend

<br>

## Demos

- [**List of all Demo applications**](https://github.com/vladmandic/human/wiki/Demos)
- [*Live:* **Main Application**](https://vladmandic.github.io/human/demo/index.html)
- [*Live:* **Simple Application**](https://vladmandic.github.io/human/demo/typescript/index.html)
- [*Live:* **Face Extraction, Description, Identification and Matching**](https://vladmandic.github.io/human/demo/facematch/index.html)
- [*Live:* **Face Extraction and 3D Rendering**](https://vladmandic.github.io/human/demo/face3d/index.html)
- [*Live:* **Multithreaded Detection Showcasing Maximum Performance**](https://vladmandic.github.io/human/demo/multithread/index.html)

- [**Configuration Details**](https://github.com/vladmandic/human/wiki/Config)
- [**Result Details**](https://github.com/vladmandic/human/wiki/Result)
- [**Caching & Smoothing**](https://github.com/vladmandic/human/wiki/Caching)
- [**Input Processing**](https://github.com/vladmandic/human/wiki/Image)
- [**Face Recognition & Face Description**](https://github.com/vladmandic/human/wiki/Embedding)
- [**Gesture Recognition**](https://github.com/vladmandic/human/wiki/Gesture)
- [**Common Issues**](https://github.com/vladmandic/human/wiki/Issues)

*Suggestions are welcome!*

<hr><br>

## Examples

Visit the [Examples gallery](https://vladmandic.github.io/human/samples/samples.html) for more examples



<br>

## Options

All options as presented in the demo application...
> [demo/index.html](demo/index.html)



<br>

**Results Browser:**
[ *Demo -> Display -> Show Results* ]<br>


<br>

## Advanced Examples

1. **Face Similarity Matching:**
   Extracts all faces from provided input images,
   sorts them by similarity to the selected face,
   and optionally matches each detected face against a database of known people to guess their names
   > [demo/facematch](demo/facematch/index.html)

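   The core matching flow reduces to a few calls; a minimal sketch, assuming two already-loaded inputs and the `human.similarity` helper described in the [Embedding wiki](https://github.com/vladmandic/human/wiki/Embedding):

   ```js
   // sketch: compare two detected faces by their descriptors
   // assumes the face description model is enabled (it is by default)
   async function compareFaces(image1, image2) { // image1/image2 are placeholder inputs
     const res1 = await human.detect(image1);
     const res2 = await human.detect(image2);
     const score = human.similarity(res1.face[0].embedding, res2.face[0].embedding); // 0..1
     console.log(`similarity: ${Math.round(100 * score)}%`);
   }
   ```
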


<br>

2. **Face3D OpenGL Rendering:**
   > [demo/face3d](demo/face3d/index.html)



<br>

3. **VR Model Tracking:**



<br>

**468-Point Face Mesh Details:**
(view in full resolution to see keypoints)



<br><hr><br>

## Quick Start

Simply load `Human` (*IIFE version*) directly from a cloud CDN in your HTML file:
(pick one: `jsdelivr`, `unpkg` or `cdnjs`)

```html
<script src="https://cdn.jsdelivr.net/npm/@vladmandic/human/dist/human.js"></script>
<script src="https://unpkg.dev/@vladmandic/human/dist/human.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/human/2.1.5/human.js"></script>
```
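
Once the script is loaded, the bundle exposes a global `Human`; a minimal usage sketch (the element id is a placeholder, and check the install guide if your version exposes the constructor under a namespace):

```js
const human = new Human(); // use default configuration
async function init() {
  await human.load(); // optional: pre-load configured models
  await human.warmup(); // optional: warm up the backend for a faster first inference
  const result = await human.detect(document.getElementById('video-id')); // placeholder element id
  console.log(result);
}
init();
```
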
For details, including how to use `Browser ESM` version or `NodeJS` version of `Human`, see [**Installation**](https://github.com/vladmandic/human/wiki/Install)

<br>

## Inputs

`Human` library can process all known input types:

- `Image`, `ImageData`, `ImageBitmap`, `Canvas`, `OffscreenCanvas`, `Tensor`,
- `HTMLImageElement`, `HTMLCanvasElement`, `HTMLVideoElement`, `HTMLMediaElement`

Additionally, `HTMLVideoElement` and `HTMLMediaElement` can be a standard `<video>` tag that links to:

- WebCam on user's system
- Any supported video type
  For example: `.mp4`, `.avi`, etc.
- Additional video types supported via *HTML5 Media Source Extensions*
  Live streaming examples (see the sketch after this list):
  - **HLS** (*HTTP Live Streaming*) using `hls.js`
  - **DASH** (*Dynamic Adaptive Streaming over HTTP*) using `dash.js`
- **WebRTC** media track using built-in support
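
As an illustration, a minimal sketch of feeding an HLS live stream into `Human` via `hls.js` (the stream URL and element id are placeholders):

```js
// assumes hls.js is loaded, e.g. <script src="https://cdn.jsdelivr.net/npm/hls.js"></script>
const video = document.getElementById('video-id'); // placeholder id
const hls = new Hls();
hls.loadSource('https://example.com/stream.m3u8'); // placeholder stream url
hls.attachMedia(video); // hls.js feeds the stream into the <video> element
video.onplay = () => human.detect(video).then((result) => console.log(result));
```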

<br>

## Example

A simple app that uses `Human` to process video input and draw the output on screen using the built-in draw helper functions:

```js
// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config);
// select input HTMLVideoElement and output HTMLCanvasElement from page
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');

function detectVideo() {
  // perform processing using default configuration
  human.detect(inputVideo).then((result) => {
    // result object will contain detected details
    // as well as the processed canvas itself
    // so let's first draw the processed frame on the canvas
    human.draw.canvas(result.canvas, outputCanvas);
    // then draw results on the same canvas
    human.draw.face(outputCanvas, result.face);
    human.draw.body(outputCanvas, result.body);
    human.draw.hand(outputCanvas, result.hand);
    human.draw.gesture(outputCanvas, result.gesture);
    // and loop immediately to the next frame
    requestAnimationFrame(detectVideo);
  });
}

detectVideo();
```

or using `async/await`:

```js
// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');

async function detectVideo() {
  const result = await human.detect(inputVideo); // run detection
  human.draw.all(outputCanvas, result); // draw all results
  requestAnimationFrame(detectVideo); // run loop
}

detectVideo(); // start loop
```

or using `Events`:

```js
// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');

human.events.addEventListener('detect', () => { // event gets triggered when detect is complete
  human.draw.all(outputCanvas, human.result); // draw all results
});

function detectVideo() {
  human.detect(inputVideo) // run detection
    .then(() => requestAnimationFrame(detectVideo)); // upon detect completion, start processing the next frame
}

detectVideo(); // start loop
```

or using interpolated results for smooth video processing by separating detection and drawing loops:

```js
const human = new Human(); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');
let result;

async function detectVideo() {
  result = await human.detect(inputVideo); // run detection
  requestAnimationFrame(detectVideo); // run detect loop
}

async function drawVideo() {
  if (result) { // check if result is available
    const interpolated = human.next(result); // calculate next interpolated frame
    human.draw.all(outputCanvas, interpolated); // draw the frame
  }
  requestAnimationFrame(drawVideo); // run draw loop
}

detectVideo(); // start detection loop
drawVideo(); // start draw loop
```

And for even better results, you can run detection in a separate web worker thread

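A minimal sketch of that pattern, assuming the worker loads the IIFE bundle via `importScripts` (file names and the element id are placeholders):

```js
// ---- main.js: capture frames and hand them to the worker as transferable ImageBitmaps ----
const video = document.getElementById('video-id'); // placeholder id
const worker = new Worker('human-worker.js'); // hypothetical worker script name
async function sendFrame() {
  const bitmap = await createImageBitmap(video); // snapshot of the current frame
  worker.postMessage({ bitmap }, [bitmap]); // transfer ownership, no copy
}
worker.onmessage = (msg) => { // worker posts back serializable results
  console.log(msg.data);
  sendFrame(); // request the next frame once the previous one is done
};
video.onplay = () => sendFrame();

// ---- human-worker.js: run detection off the main thread ----
importScripts('https://cdn.jsdelivr.net/npm/@vladmandic/human/dist/human.js');
const human = new Human(); // sketch: assumes global constructor from the IIFE bundle
onmessage = async (msg) => {
  const result = await human.detect(msg.data.bitmap); // ImageBitmap is a supported input type
  postMessage({ faces: result.face.length, bodies: result.body.length }); // results must be serializable
};
```
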
<br><hr><br>

## Default models

Default models in Human library are:

- **Face Detection**: MediaPipe BlazeFace Back variation
- **Face Mesh**: MediaPipe FaceMesh
- **Face Iris Analysis**: MediaPipe Iris
- **Face Description**: HSE FaceRes
- **Emotion Detection**: Oarriaga Emotion
- **Body Analysis**: MoveNet Lightning variation
- **Hand Analysis**: HandTrack & MediaPipe HandLandmarks
- **Body Segmentation**: Google Selfie
- **Object Detection**: CenterNet with MobileNet v3

Note that alternative models are provided and can be enabled via configuration
For example, the `PoseNet` model can be swapped for the `BlazePose`, `EfficientPose` or `MoveNet` model depending on the use case

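For instance, a sketch of selecting an alternative body model via configuration (the `modelPath` value is illustrative; see the Models wiki for actual file names):

```js
const human = new Human({
  body: { enabled: true, modelPath: 'blazepose.json' }, // hypothetical model file name
});
```
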
For more info, see [**Configuration Details**](https://github.com/vladmandic/human/wiki/Configuration) and [**List of Models**](https://github.com/vladmandic/human/wiki/Models)

<br><hr><br>

## Diagnostics
- [How to get diagnostic information or performance trace information](https://github.com/vladmandic/human/wiki/Diag)

<br><hr><br>

`Human` library is written in `TypeScript` [4.4](https://www.typescriptlang.org/docs/handbook/intro.html)
Conforming to latest `JavaScript` [ECMAScript version 2021](https://262.ecma-international.org/) standard
Build target is `JavaScript` [ECMAScript version 2018](https://262.ecma-international.org/9.0/)

<br>

For details see [**Wiki Pages**](https://github.com/vladmandic/human/wiki)
and [**API Specification**](https://vladmandic.github.io/human/typedoc/classes/Human.html)