mirror of https://github.com/vladmandic/human
commit cf9ea4929d ("update readme"), parent 1e147d34e6, file: Home.md
[](https://github.com/sponsors/vladmandic)

- **Multi-thread** [[*Live*]](https://vladmandic.github.io/human/demo/multithread/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread): Runs each Human module in a separate web worker for highest possible performance
- **NextJS** [[*Live*]](https://vladmandic.github.io/human-next/out/index.html) [[*Details*]](https://github.com/vladmandic/human-next): Use Human with TypeScript, NextJS and ReactJS
- **ElectronJS** [[*Details*]](https://github.com/vladmandic/human-electron): Use Human with TypeScript and ElectronJS to create standalone cross-platform apps
- **3D Analysis with BabylonJS** [[*Live*]](https://vladmandic.github.io/human-motion/src/index.html) [[*Details*]](https://github.com/vladmandic/human-motion): 3D tracking and visualization of head, face, eye, body and hand
- **VRM Virtual Model Tracking with Three.JS** [[*Live*]](https://vladmandic.github.io/human-three-vrm/src/human-vrm.html) [[*Details*]](https://github.com/vladmandic/human-three-vrm): VR model with head, face, eye, body and hand tracking
- **VRM Virtual Model Tracking with BabylonJS** [[*Live*]](https://vladmandic.github.io/human-bjs-vrm/src/index.html) [[*Details*]](https://github.com/vladmandic/human-bjs-vrm): VR model with head, face, eye, body and hand tracking

### NodeJS Demos
<hr><br>

## App Examples

Visit [Examples gallery](https://vladmandic.github.io/human/samples/index.html) for more examples
<https://vladmandic.github.io/human/samples/index.html>

<br>

3. **VR Model Tracking:**
> [human-three-vrm](https://github.com/vladmandic/human-three-vrm)
> [human-bjs-vrm](https://github.com/vladmandic/human-bjs-vrm)

![vrm]

4. **Human as OS native application:**
> [human-electron](https://github.com/vladmandic/human-electron)

<br>

- WebCam on user's system
- Any supported video type
  e.g. `.mp4`, `.avi`, etc.
- Additional video types supported via *HTML5 Media Source Extensions*
  e.g.: **HLS** (*HTTP Live Streaming*) using `hls.js` or **DASH** (*Dynamic Adaptive Streaming over HTTP*) using `dash.js`
- **WebRTC** media track using built-in support
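For the streaming case, a minimal sketch of feeding an HLS stream into a `<video>` element that can then be used as Human input (element id and stream URL are placeholders; `hls.js` usage follows its published API):

```js
import Hls from 'hls.js';

// placeholder element id and stream URL
const video = document.getElementById('video');
const streamUrl = 'https://example.com/stream.m3u8';

if (Hls.isSupported()) {
  // playback via HTML5 Media Source Extensions using hls.js
  const hls = new Hls();
  hls.loadSource(streamUrl);
  hls.attachMedia(video);
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Safari supports HLS natively without hls.js
  video.src = streamUrl;
}
// the playing video element can then be passed to human.detect(video)
```

The same pattern applies to DASH with `dash.js` in place of `hls.js`.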

<br>

## Code Examples

A simple app that uses Human to process video input and draw output on screen using its internal draw helper functions.
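The original example code is not shown in this diff; the snippet below is a sketch of such an app, assuming a page with `video` and `canvas` elements (ids are placeholders) and using the published `Human` API (`detect`, `draw` helpers):

```js
import { Human } from '@vladmandic/human';

const human = new Human(); // instance with default configuration

// placeholder element ids; input can be a video, image, canvas or media stream
const inputVideo = document.getElementById('video');
const outputCanvas = document.getElementById('canvas');

async function detectVideo() {
  const result = await human.detect(inputVideo); // run all enabled models on the current frame
  human.draw.canvas(inputVideo, outputCanvas);   // copy the input frame to the output canvas
  human.draw.all(outputCanvas, result);          // overlay all detected results using built-in helpers
  requestAnimationFrame(detectVideo);            // loop to the next frame
}

detectVideo();
```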
Default models in Human library are:

- **Face Detection**: *MediaPipe BlazeFace Back variation*
- **Face Mesh**: *MediaPipe FaceMesh*
- **Face Iris Analysis**: *MediaPipe Iris*
- **Face Description**: *HSE FaceRes*
- **Emotion Detection**: *Oarriaga Emotion*
- **Body Analysis**: *MoveNet Lightning variation*
- **Hand Analysis**: *HandTrack & MediaPipe HandLandmarks*
- **Body Segmentation**: *Google Selfie*
- **Object Detection**: *CenterNet with MobileNet v3*

Note that alternative models are provided and can be enabled via configuration.
For example, body pose detection by default uses *MoveNet Lightning*, but can be switched to *MoveNet Thunder* for higher precision, *MoveNet MultiPose* for multi-person detection, or even *PoseNet*, *BlazePose* or *EfficientPose* depending on the use case.
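As a sketch, such a switch is made through the configuration object passed to `Human`; the exact `modelPath` file name below is an assumption, so check the models list for the precise value:

```js
import { Human } from '@vladmandic/human';

// illustrative config override: switch the body model from the default MoveNet Lightning
// the modelPath file name here is an assumption; verify against the published list of models
const human = new Human({
  body: {
    enabled: true,
    modelPath: 'movenet-thunder.json', // higher-precision MoveNet Thunder variation
  },
});
```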

For more info, see [**Configuration Details**](https://github.com/vladmandic/human/wiki/Configuration) and [**List of Models**](https://github.com/vladmandic/human/wiki/Models)
<br>

[](https://github.com/sponsors/vladmandic)

![stars]
![forks]
![code-size]