mirror of https://github.com/vladmandic/human
update demos
parent 7a9a68dc4d
commit 1e147d34e6
Demos.md (6)

@@ -14,9 +14,9 @@ All demos are included in `/demo` and come with individual documentation per-dem
- **Multi-thread** [[*Live*]](https://vladmandic.github.io/human/demo/multithread/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread): Runs each Human module in a separate web worker for highest possible performance
- **NextJS** [[*Live*]](https://vladmandic.github.io/human-next/out/index.html) [[*Details*]](https://github.com/vladmandic/human-next): Use Human with TypeScript, NextJS and ReactJS
- **ElectronJS** [[*Details*]](https://github.com/vladmandic/human-electron): Use Human with TypeScript and ElectronJS to create standalone cross-platform apps
- **3D Analysis** [[*Live*]](https://vladmandic.github.io/human-motion/src/index.html) [[*Details*]](https://github.com/vladmandic/human-motion): 3D tracking and visualization of head, face, eye, body and hand
- **Avatar Bone Mapping** [[*Live*]](https://vladmandic.github.io/human-vrm/src/human-avatar.html) [[*Details*]](https://github.com/vladmandic/human-avatar): Human skeleton with full bone mapping using look-at and inverse kinematics controllers
- **Virtual Model Tracking** [[*Live*]](https://vladmandic.github.io/human-vrm/src/human-vrm.html) [[*Details*]](https://github.com/vladmandic/human-vrm): VR model with head, face, eye, body and hand tracking
- **3D Analysis with BabylonJS** [[*Live*]](https://vladmandic.github.io/human-motion/src/index.html) [[*Details*]](https://github.com/vladmandic/human-motion): 3D tracking and visualization of head, face, eye, body and hand
- **VRM Virtual Model Tracking with Three.JS** [[*Live*]](https://vladmandic.github.io/human-three-vrm/src/human-vrm.html) [[*Details*]](https://github.com/vladmandic/human-three-vrm): VR model with head, face, eye, body and hand tracking
- **VRM Virtual Model Tracking with BabylonJS** [[*Live*]](https://vladmandic.github.io/human-bjs-vrm/src/index.html) [[*Details*]](https://github.com/vladmandic/human-bjs-vrm): VR model with head, face, eye, body and hand tracking

## NodeJS Demos

Home.md (116)

@@ -1,31 +1,40 @@

|
||||

|
||||

|
||||

|
||||

|
||||

|
||||
|
||||
# Human Library

**AI-powered 3D Face Detection & Rotation Tracking, Face Description & Recognition,**
**Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis,**
**Age & Gender & Emotion Prediction, Gaze Tracking, Gesture Recognition, Body Segmentation**

JavaScript module using TensorFlow/JS Machine Learning library

## Highlights

- Compatible with most server-side and client-side environments and frameworks
- Combines multiple machine learning models which can be switched on-demand depending on the use-case
- Related models are executed in an attention pipeline to provide details when needed
- Optimized input pre-processing that can enhance image quality of any type of input
- Detection of frame changes to trigger only the required models for improved performance
- Intelligent temporal interpolation to provide smooth results regardless of processing performance
- Simple unified API
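
The temporal interpolation mentioned above can be illustrated with a minimal, generic sketch: plain linear interpolation between successive detection results. This is only an illustration of the concept, not Human's actual implementation, and all names here are made up:

```javascript
// Generic sketch of temporal smoothing between detection results.
// The shape of a "detection" and the smoothing factor are illustrative assumptions.
function interpolate(previous, current, factor = 0.5) {
  if (!previous) return current; // no history yet, use raw detection
  return {
    x: previous.x + (current.x - previous.x) * factor,
    y: previous.y + (current.y - previous.y) * factor,
  };
}

// Even if detections arrive at a low rate, a draw loop can render
// intermediate positions so movement appears smooth.
let last = null;
for (const detection of [{ x: 0, y: 0 }, { x: 10, y: 20 }]) {
  last = interpolate(last, detection);
}
console.log(last); // x: 5, y: 10 — halfway toward the newest detection
```

In Human itself this idea is exposed through `human.next()`, as shown in the code samples further down the page.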

<br>

## Compatibility

- **Browser**:
  Compatible with both desktop and mobile platforms
  Compatible with *CPU*, *WebGL*, *WASM* backends
  Compatible with *WebWorker* execution
  Compatible with *WebView*
- **NodeJS**:
  Compatible with *WASM* backend for execution on architectures where *tensorflow* binaries are not available
  Compatible with *tfjs-node* using software execution via *tensorflow* shared libraries
  Compatible with *tfjs-node* using GPU-accelerated execution via *tensorflow* shared libraries and nVidia CUDA

<br>

@@ -34,21 +43,41 @@ JavaScript module using TensorFlow/JS Machine Learning library
- [NPM Link](https://www.npmjs.com/package/@vladmandic/human)

## Demos

*Check out [**Simple Live Demo**](https://vladmandic.github.io/human/demo/typescript/index.html) fully annotated app as a good starting point ([html](https://github.com/vladmandic/human/blob/main/demo/typescript/index.html))([code](https://github.com/vladmandic/human/blob/main/demo/typescript/index.ts))*

*Check out [**Main Live Demo**](https://vladmandic.github.io/human/demo/index.html) app for advanced processing of webcam, video stream or static images with all possible tunable options*

- To start video detection, simply press *Play*
- To process images, simply drag & drop them into your Browser window
- Note: For optimal performance, select only the models you'd like to use
- Note: If you have a modern GPU, the *WebGL* (default) backend is preferred, otherwise select the *WASM* backend

<br>

- [**List of all Demo applications**](https://github.com/vladmandic/human/wiki/Demos)
- [**Live Examples gallery**](https://vladmandic.github.io/human/samples/index.html)

### Browser Demos

*All browser demos are self-contained without any external dependencies*

- **Full** [[*Live*]](https://vladmandic.github.io/human/demo/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo): Main browser demo app that showcases all Human capabilities
- **Simple** [[*Live*]](https://vladmandic.github.io/human/demo/typescript/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/typescript): Simple WebCam processing demo in TypeScript
- **Embedded** [[*Live*]](https://vladmandic.github.io/human/demo/video/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/video/index.html): Even simpler demo with tiny code embedded in an HTML file
- **Face Match** [[*Live*]](https://vladmandic.github.io/human/demo/facematch/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Extracts faces from images, calculates face descriptors and similarities, and matches them to a known database
- **Face ID** [[*Live*]](https://vladmandic.github.io/human/demo/faceid/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/faceid): Runs multiple checks to validate webcam input before performing face match to faces in IndexedDB
- **Multi-thread** [[*Live*]](https://vladmandic.github.io/human/demo/multithread/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread): Runs each Human module in a separate web worker for highest possible performance
- **NextJS** [[*Live*]](https://vladmandic.github.io/human-next/out/index.html) [[*Details*]](https://github.com/vladmandic/human-next): Use Human with TypeScript, NextJS and ReactJS
- **ElectronJS** [[*Details*]](https://github.com/vladmandic/human-electron): Use Human with TypeScript and ElectronJS to create standalone cross-platform apps
- **3D Analysis** [[*Live*]](https://vladmandic.github.io/human-motion/src/index.html) [[*Details*]](https://github.com/vladmandic/human-motion): 3D tracking and visualization of head, face, eye, body and hand
- **Virtual Model Tracking** [[*Live*]](https://vladmandic.github.io/human-vrm/src/human-vrm.html) [[*Details*]](https://github.com/vladmandic/human-vrm): VR model with head, face, eye, body and hand tracking

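The Face Match demo above works by comparing face descriptor vectors. A minimal, generic sketch of descriptor matching using cosine similarity (illustrative only: the names and the tiny three-element descriptors are made up, and this is not Human's actual matching code, which uses much longer descriptors):

```javascript
// Generic cosine-similarity matcher for descriptor vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB)); // 1.0 means identical direction
}

// Pick the database entry whose descriptor is most similar to the probe.
function findBestMatch(descriptor, database) {
  let best = { name: null, similarity: -1 };
  for (const entry of database) {
    const similarity = cosineSimilarity(descriptor, entry.descriptor);
    if (similarity > best.similarity) best = { name: entry.name, similarity };
  }
  return best;
}

const database = [
  { name: 'alice', descriptor: [0.9, 0.1, 0.3] },
  { name: 'bob', descriptor: [0.2, 0.8, 0.5] },
];
const match = findBestMatch([0.85, 0.15, 0.35], database);
console.log(match.name); // closest database entry
```
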
### NodeJS Demos

*NodeJS demos may require extra dependencies which are used to decode inputs*
*See the header of each demo for its dependencies, as they are not automatically installed with `Human`*

- **Main** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Process images from files, folders or URLs using native methods
- **Canvas** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Process image from file or URL and draw results to a new image file using `node-canvas`
- **Video** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Processing of video input using `ffmpeg`

@@ -58,7 +87,6 @@ JavaScript module using TensorFlow/JS Machine Learning library

- **Face Match** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Parallel processing of face **match** in multiple child worker threads
- **Multiple Workers** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Runs multiple parallel `human` instances by dispatching them to a pool of pre-created worker processes

## Project pages

- [**Code Repository**](https://github.com/vladmandic/human)

@@ -108,8 +136,8 @@ JavaScript module using TensorFlow/JS Machine Learning library

## Examples

Visit the [Examples gallery](https://vladmandic.github.io/human/samples/index.html) for more examples
<https://vladmandic.github.io/human/samples/index.html>



@@ -142,7 +170,7 @@ and optionally matches detected face with database of known people to guess thei

<br>

2. **3D Rendering:**
> [human-motion](https://github.com/vladmandic/human-motion)

![Face3D](https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-face.jpg)

@@ -151,7 +179,14 @@

<br>

3. **Avatar Bone Mapping:**
> [human-avatar](https://github.com/vladmandic/human-avatar)

![Avatar](https://github.com/vladmandic/human-avatar/raw/main/assets/screenshot.jpg)

<br>

4. **VR Model Tracking:**
> [human-vrm](https://github.com/vladmandic/human-vrm)

![VRM](https://github.com/vladmandic/human-vrm/raw/main/assets/human-vrm-screenshot.jpg)
@@ -287,7 +322,7 @@ async function detectVideo() {

```js
async function drawVideo() {
  if (result) { // check if result is available
    const interpolated = human.next(result); // get smoothened result using last-known results
    human.draw.all(outputCanvas, interpolated); // draw the frame
  }
  requestAnimationFrame(drawVideo); // run draw loop
}
```

@@ -297,6 +332,23 @@

```js
detectVideo(); // start detection loop
drawVideo(); // start draw loop
```


or the same, but using built-in full video processing instead of running a manual frame-by-frame loop:

```js
const human = new Human(); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');

async function drawResults() {
  const interpolated = human.next(); // get smoothened result using last-known results
  human.draw.all(outputCanvas, interpolated); // draw the frame
  requestAnimationFrame(drawResults); // run draw loop
}

human.video(inputVideo); // start detection loop which continuously updates results
drawResults(); // start draw loop
```

And for even better results, you can run detection in a separate web worker thread

<br><hr><br>

@@ -316,7 +368,7 @@ Default models in Human library are:

- **Object Detection**: CenterNet with MobileNet v3

Note that alternative models are provided and can be enabled via configuration
For example, body pose detection by default uses `MoveNet Lightning`, but can be switched to `MoveNet Thunder` for higher precision, `MoveNet MultiPose` for multi-person detection, or even `PoseNet`, `BlazePose` or `EfficientPose` depending on the use case

For more info, see [**Configuration Details**](https://github.com/vladmandic/human/wiki/Configuration) and [**List of Models**](https://github.com/vladmandic/human/wiki/Models)
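
A hedged sketch of how such a model switch might look in configuration; the option and model file names below are assumptions for illustration, so check the Configuration wiki page for the authoritative schema:

```javascript
// Hypothetical configuration fragment: option and model file names are
// assumptions, not verified against the current Human configuration schema.
const config = {
  body: {
    enabled: true,
    modelPath: 'blazepose.json', // switch body pose model away from the default MoveNet
  },
};
const human = new Human(config); // user configuration is merged with defaults
```
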

@@ -328,11 +380,21 @@ For more info, see [**Configuration Details**](https://github.com/vladmandic/hum


<br><hr><br>

`Human` library is written in `TypeScript` [4.8](https://www.typescriptlang.org/docs/handbook/intro.html)
Conforming to latest `JavaScript` [ECMAScript version 2022](https://262.ecma-international.org/) standard
Build target is `JavaScript` [ECMAScript version 2018](https://262.ecma-international.org/11.0/)

<br>

For details see [**Wiki Pages**](https://github.com/vladmandic/human/wiki)
and [**API Specification**](https://vladmandic.github.io/human/typedoc/classes/Human.html)

<br>


|
||||

|
||||

|
||||
<br>
|
||||

|
||||

|
||||

|
||||