Demos.md

All demos are included in `/demo` and come with individual documentation per-demo

- **Full** [[*Live*]](https://vladmandic.github.io/human/demo/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo): Main browser demo app that showcases all Human capabilities
- **Simple** [[*Live*]](https://vladmandic.github.io/human/demo/typescript/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/typescript): Simple WebCam processing demo in TypeScript
- **Embedded** [[*Live*]](https://vladmandic.github.io/human/demo/video/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/video): Even simpler demo with tiny code embedded in an HTML file
- **Face Detect** [[*Live*]](https://vladmandic.github.io/human/demo/facedetect/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facedetect): Extracts faces from images and processes details
- **Face Match** [[*Live*]](https://vladmandic.github.io/human/demo/facematch/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Extracts faces from images, calculates face descriptors and similarities, and matches them to a known database
- **Face ID** [[*Live*]](https://vladmandic.github.io/human/demo/faceid/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/faceid): Runs multiple checks to validate webcam input before performing face match against faces in IndexedDB
- **Multi-thread** [[*Live*]](https://vladmandic.github.io/human/demo/multithread/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread): Runs each Human module in a separate web worker for highest possible performance

Home.md

<br>

## Highlights

- Compatible with most server-side and client-side environments and frameworks
- Combines multiple machine learning models which can be switched on-demand depending on the use-case
- Related models are executed in an attention pipeline to provide details when needed
- Optimized input pre-processing that can enhance image quality of any type of input
- Detection of frame changes to trigger only required models for improved performance
- Intelligent temporal interpolation to provide smooth results regardless of processing performance
- Simple unified API
- Built-in Image, Video and WebCam handling

[*Jump to Quick Start*](#quick-start)

<br>

## Compatibility

- **Browser**:
  Compatible with both desktop and mobile platforms
  Compatible with *CPU*, *WebGL*, *WASM* backends
  Compatible with *WebWorker* execution
  Compatible with *WebView*
- **NodeJS**:
  Compatible with *WASM* backend for execution on architectures where *tensorflow* binaries are not available
  Compatible with *tfjs-node* using software execution via *tensorflow* shared libraries
  Compatible with *tfjs-node* using GPU-accelerated execution via *tensorflow* shared libraries and nVidia CUDA
  (see the loading sketch below)
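
For example, a minimal NodeJS loading sketch, assuming `@tensorflow/tfjs-node` and `@vladmandic/human` are installed; the file name `input.jpg` is a placeholder:

```js
// NodeJS sketch: load tfjs-node for native execution, then create Human
const fs = require('fs');
const tf = require('@tensorflow/tfjs-node'); // binds tensorflow shared libraries
const Human = require('@vladmandic/human').default; // Human class is the default export

const human = new Human({ backend: 'tensorflow' }); // use the native tensorflow backend

async function main() {
  const buffer = fs.readFileSync('input.jpg'); // placeholder input image
  const tensor = tf.node.decodeImage(buffer); // decode image file into a tensor
  const result = await human.detect(tensor); // run detection on the tensor
  console.log('faces:', result.face.length);
  tf.dispose(tensor); // release tensor memory
}

main();
```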

<br>

## Releases

- [Release Notes](https://github.com/vladmandic/human/releases)
- [NPM Link](https://www.npmjs.com/package/@vladmandic/human)

- **Full** [[*Live*]](https://vladmandic.github.io/human/demo/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo): Main browser demo app that showcases all Human capabilities
- **Simple** [[*Live*]](https://vladmandic.github.io/human/demo/typescript/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/typescript): Simple WebCam processing demo in TypeScript
- **Embedded** [[*Live*]](https://vladmandic.github.io/human/demo/video/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/video): Even simpler demo with tiny code embedded in an HTML file
- **Face Detect** [[*Live*]](https://vladmandic.github.io/human/demo/facedetect/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facedetect): Extracts faces from images and processes details
- **Face Match** [[*Live*]](https://vladmandic.github.io/human/demo/facematch/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Extracts faces from images, calculates face descriptors and similarities, and matches them to a known database
- **Face ID** [[*Live*]](https://vladmandic.github.io/human/demo/faceid/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/faceid): Runs multiple checks to validate webcam input before performing face match against faces in IndexedDB
- **Multi-thread** [[*Live*]](https://vladmandic.github.io/human/demo/multithread/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread): Runs each Human module in a separate web worker for highest possible performance

*Suggestions are welcome!*

<br><hr><br>

## Quick Start

Simply load `Human` (*IIFE version*) directly from a cloud CDN in your HTML file:
(pick one: `jsdelivr`, `unpkg` or `cdnjs`)

```html
<!DOCTYPE HTML>
<script src="https://cdn.jsdelivr.net/npm/@vladmandic/human/dist/human.js"></script>
<script src="https://unpkg.dev/@vladmandic/human/dist/human.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/human/3.0.0/human.js"></script>
```

For details, including how to use the `Browser ESM` version or the `NodeJS` version of `Human`, see [**Installation**](https://github.com/vladmandic/human/wiki/Install)
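
As an illustrative sketch only, the `Browser ESM` variant might be loaded like this; the bare import specifier assumes a bundler or import map, and actual paths per variant are listed on the Installation page:

```js
// browser ESM sketch: import the Human class from the package
import { Human } from '@vladmandic/human';

const human = new Human(); // create instance with default configuration
await human.load(); // optionally pre-load configured models
await human.warmup(); // optionally warm up models for faster first inference
```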

<br>

## Code Examples

Simple app that uses Human to process video input and
draw output on screen using internal draw helper functions

```js
// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config);
// select input HTMLVideoElement and output HTMLCanvasElement from page
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');

function detectVideo() {
  // perform processing using default configuration
  human.detect(inputVideo).then((result) => {
    // result object will contain detected details
    // as well as the processed canvas itself
    // so let's first draw the processed frame on the canvas
    human.draw.canvas(result.canvas, outputCanvas);
    // then draw results on the same canvas
    human.draw.face(outputCanvas, result.face);
    human.draw.body(outputCanvas, result.body);
    human.draw.hand(outputCanvas, result.hand);
    human.draw.gesture(outputCanvas, result.gesture);
    // and loop immediately to the next frame
    requestAnimationFrame(detectVideo);
    return result;
  });
}

detectVideo();
```

or using `async/await`:

```js
// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');

async function detectVideo() {
  const result = await human.detect(inputVideo); // run detection
  human.draw.all(outputCanvas, result); // draw all results
  requestAnimationFrame(detectVideo); // run loop
}

detectVideo(); // start loop
```

or using `Events`:

```js
// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');

human.events.addEventListener('detect', () => { // event gets triggered when detect is complete
  human.draw.all(outputCanvas, human.result); // draw all results
});

function detectVideo() {
  human.detect(inputVideo) // run detection
    .then(() => requestAnimationFrame(detectVideo)); // when detect completes, start processing the next frame
}

detectVideo(); // start loop
```

or using interpolated results for smooth video processing by separating detection and drawing loops:

```js
const human = new Human(); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');
let result;

async function detectVideo() {
  result = await human.detect(inputVideo); // run detection
  requestAnimationFrame(detectVideo); // run detect loop
}

async function drawVideo() {
  if (result) { // check if result is available
    const interpolated = human.next(result); // get smoothed result using last-known results
    human.draw.all(outputCanvas, interpolated); // draw the frame
  }
  requestAnimationFrame(drawVideo); // run draw loop
}

detectVideo(); // start detection loop
drawVideo(); // start draw loop
```

or the same, but using built-in full video processing instead of running a manual frame-by-frame loop:

```js
const human = new Human(); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');

async function drawResults() {
  const interpolated = human.next(); // get smoothed result using last-known results
  human.draw.all(outputCanvas, interpolated); // draw the frame
  requestAnimationFrame(drawResults); // run draw loop
}

human.video(inputVideo); // start detection loop which continuously updates results
drawResults(); // start draw loop
```

or using built-in webcam helper methods that take care of video handling completely:

```js
const human = new Human(); // create instance of Human
const outputCanvas = document.getElementById('canvas-id');

async function drawResults() {
  const interpolated = human.next(); // get smoothed result using last-known results
  human.draw.canvas(human.webcam.element, outputCanvas); // draw current webcam frame
  human.draw.all(outputCanvas, interpolated); // draw the frame detection results
  requestAnimationFrame(drawResults); // run draw loop
}

await human.webcam.start({ crop: true }); // start webcam
human.video(human.webcam.element); // start detection loop which continuously updates results
drawResults(); // start draw loop
```

And for even better results, you can run detection in a separate web worker thread, as sketched below
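
A minimal sketch of that pattern, shown as two files in one block for brevity; the file name `worker.js`, the message shape, and the IIFE namespace access are illustrative assumptions, and the Multi-thread demo shows a complete implementation:

```js
// main.js: capture frames and hand them to the worker as transferable ImageBitmaps
const worker = new Worker('worker.js');
const inputVideo = document.getElementById('video-id');

worker.onmessage = () => sendFrame(); // when the worker returns a result, send the next frame

async function sendFrame() {
  const bitmap = await createImageBitmap(inputVideo); // snapshot current video frame
  worker.postMessage({ image: bitmap }, [bitmap]); // transfer the frame without copying
}

sendFrame(); // start the loop

// worker.js: run Human inside the worker
importScripts('https://cdn.jsdelivr.net/npm/@vladmandic/human/dist/human.js'); // IIFE bundle registers a global namespace
const human = new Human.Human(); // assumption: class is exposed on the IIFE namespace

onmessage = async (msg) => {
  const result = await human.detect(msg.data.image); // run detection on the received frame
  postMessage({ gesture: result.gesture }); // post back only plain, serializable parts of the result
};
```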

<br><hr><br>

## Inputs

`Human` library can process all known input types:

- `Image`, `ImageData`, `ImageBitmap`, `Canvas`, `OffscreenCanvas`, `Tensor`
- `HTMLImageElement`, `HTMLCanvasElement`, `HTMLVideoElement`, `HTMLMediaElement`

Additionally, `HTMLVideoElement` and `HTMLMediaElement` can be a standard `<video>` tag that links to:

- WebCam on user's system
- Any supported video type
  e.g. `.mp4`, `.avi`, etc.
- Additional video types supported via *HTML5 Media Source Extensions*
  e.g.: **HLS** (*HTTP Live Streaming*) using `hls.js` or **DASH** (*Dynamic Adaptive Streaming over HTTP*) using `dash.js`
- **WebRTC** media track using built-in support
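
Whatever the source, detection is invoked the same way; a brief sketch (element ids are hypothetical placeholders):

```js
// any of the supported input types can be passed to the same detect method
const image = document.getElementById('image-id'); // HTMLImageElement
const video = document.getElementById('video-id'); // HTMLVideoElement: webcam, .mp4, HLS/DASH or WebRTC source
const canvas = document.getElementById('canvas-id'); // HTMLCanvasElement

const resultFromImage = await human.detect(image);
const resultFromVideo = await human.detect(video);
const resultFromCanvas = await human.detect(canvas);
```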

<br><hr><br>

## Detailed Usage

- [**Wiki Home**](https://github.com/vladmandic/human/wiki)
- [**List of all available methods, properties and namespaces**](https://github.com/vladmandic/human/wiki/Usage)
- [**TypeDoc API Specification - Main class**](https://vladmandic.github.io/human/typedoc/classes/Human.html)
- [**TypeDoc API Specification - Full**](https://vladmandic.github.io/human/typedoc/)

<br><hr><br>

## TypeDefs

`Human` is written using TypeScript strong typing and ships with full **TypeDefs** for all classes defined by the library, bundled in `types/human.d.ts` and enabled by default

*Note*: This does not include embedded `tfjs`
If you want to use embedded `tfjs` inside `Human` (`human.tf` namespace) and still have full **TypeDefs**, add this code:

> import type * as tfjs from '@vladmandic/human/dist/tfjs.esm';
> const tf = human.tf as typeof tfjs;

This is not enabled by default as `Human` does not ship with full **TFJS TypeDefs** due to size considerations
Enabling `tfjs` TypeDefs as above creates additional project dependencies (dev-only, as only types are required) as defined in `@vladmandic/human/dist/tfjs.esm.d.ts`:

> @tensorflow/tfjs-core, @tensorflow/tfjs-converter, @tensorflow/tfjs-backend-wasm, @tensorflow/tfjs-backend-webgl

<br><hr><br>

## Default models

Default models in Human library are:

- **Face Detection**: *MediaPipe BlazeFace Back variation*
- **Face Mesh**: *MediaPipe FaceMesh*
- **Face Iris Analysis**: *MediaPipe Iris*
- **Face Description**: *HSE FaceRes*
- **Emotion Detection**: *Oarriaga Emotion*
- **Body Analysis**: *MoveNet Lightning variation*
- **Hand Analysis**: *HandTrack & MediaPipe HandLandmarks*
- **Body Segmentation**: *Google Selfie*
- **Object Detection**: *CenterNet with MobileNet v3*

Note that alternative models are provided and can be enabled via configuration
For example, body pose detection by default uses *MoveNet Lightning*, but can be switched to *MoveNet Thunder* for higher precision, *MoveNet MultiPose* for multi-person detection, or even *PoseNet*, *BlazePose* or *EfficientPose*, depending on the use case
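
A sketch of such a switch via configuration; the exact model file name is an assumption to be verified against the List of Models page:

```js
// switch body detection from the default MoveNet Lightning to MoveNet Thunder
const human = new Human({
  body: {
    enabled: true,
    modelPath: 'movenet-thunder.json', // assumed file name, resolved relative to config.modelBasePath
  },
});
```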

For more info, see [**Configuration Details**](https://github.com/vladmandic/human/wiki/Configuration) and [**List of Models**](https://github.com/vladmandic/human/wiki/Models)

<br><hr><br>

## Diagnostics

- [How to get diagnostic information or performance trace information](https://github.com/vladmandic/human/wiki/Diag)

<br><hr><br>

`Human` library is written in [TypeScript](https://www.typescriptlang.org/docs/handbook/intro.html) **4.9** using [TensorFlow/JS](https://www.tensorflow.org/js/) **4.1** and conforming to the latest `JavaScript` [ECMAScript version 2022](https://262.ecma-international.org/) standard

Build target for distributables is `JavaScript` [ECMAScript version 2018](https://262.ecma-international.org/9.0/)

<br>

For details see [**Wiki Pages**](https://github.com/vladmandic/human/wiki)
and [**API Specification**](https://vladmandic.github.io/human/typedoc/classes/Human.html)

<br>