update docs with webrtc

master
Vladimir Mandic 2021-04-12 17:48:51 -04:00
parent 3539f10bcd
commit bd0cfa7ff3
3 changed files with 103 additions and 41 deletions

@@ -63,7 +63,8 @@ const config: Config = {
// typically not needed
videoOptimized: true, // perform additional optimizations when input is video,
                      // must be disabled for images
                      // automatically disabled for Image, ImageData, ImageBitmap and Tensor inputs
                      // skips boundary detection for every n frames
                      // while maintaining in-box detection since objects cannot move that fast
warmup: 'face', // what to use for human.warmup(), can be 'none', 'face', 'full'
                // warmup pre-initializes all models for faster inference but can take
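For context, a minimal sketch of how these two options might be set when creating a `Human` instance (the import follows the NPM package name; adjust to your bundler setup):

```js
// a minimal sketch, assuming the @vladmandic/human NPM package
import Human from '@vladmandic/human';

const human = new Human({
  videoOptimized: false, // disable video optimizations when processing still images
  warmup: 'face',        // pre-initialize face models for faster first inference
});
```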

@@ -16,12 +16,20 @@ For notes on how to use the built-in micro server, see [**Development Server
<br>
### Demo Inputs
Demo in `demo/index.html` loads `demo/index.js`
Demo can process:
- Sample images
- WebCam input
- WebRTC input
Note that WebRTC connection requires a WebRTC server that provides a compatible media track such as an H.264 video track
For a sample WebRTC server implementation, see the <https://github.com/vladmandic/stream-rtsp> project,
which connects to an IP security camera using the RTSP protocol and transcodes the stream to WebRTC,
ready to be consumed by a client such as `Human`
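Regardless of the source, the demo ultimately attaches a `MediaStream` to a `<video>` element that `Human` then processes. A minimal sketch, using the local webcam as the stream source (the element id is hypothetical):

```js
// a minimal sketch: any MediaStream (webcam or WebRTC) works the same way
const video = document.getElementById('video'); // hypothetical <video> element
video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
await video.play();
const result = await human.detect(video); // pass the playing element to Human
```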
<br>
@@ -30,18 +38,51 @@ If your target is desktop, alternatively you can load `demo/browser.js` directly
Demo implements several ways to use `Human` library,
all configurable in `browse.js:ui` configuration object and in the UI itself:
```js
const ui = {
  crop: true,          // video mode crop to size or leave full frame
  columns: 2,          // when processing sample images create this many columns
  facing: true,        // camera facing front or back
  useWorker: false,    // use web workers for processing
  worker: 'index-worker.js',
  samples: ['../assets/sample6.jpg', '../assets/sample1.jpg', '../assets/sample4.jpg', '../assets/sample5.jpg', '../assets/sample3.jpg', '../assets/sample2.jpg'],
  compare: '../assets/sample-me.jpg',
  useWebRTC: false,    // use webrtc as camera source instead of local webcam
  webRTCServer: 'http://localhost:8002',
  webRTCStream: 'reowhite',
  console: true,       // log messages to browser console
  maxFPSframes: 10,    // keep fps history for how many frames
  modelsPreload: true, // preload human models on startup
  modelsWarmup: true,  // warmup human models on startup
  busy: false,         // internal camera busy flag
  buffered: true,      // should output be buffered between frames
  bench: true,         // show gl fps benchmark window
};
```
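For example, to switch the demo from the local webcam to a WebRTC source, the relevant `ui` values can be overridden before the demo starts (server URL and stream name shown are the demo defaults above):

```js
// a minimal sketch: point the demo at a WebRTC server instead of the local webcam
ui.useWebRTC = true;
ui.webRTCServer = 'http://localhost:8002'; // demo default, adjust to your server
ui.webRTCStream = 'reowhite';              // stream name as configured on the server
```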
Additionally, some parameters are held inside the `Human` instance:
```ts
human.draw.drawOptions = {
  color: <string>'rgba(173, 216, 230, 0.3)',    // 'lightblue' with light alpha channel
  labelColor: <string>'rgba(173, 216, 230, 1)', // 'lightblue' with dark alpha channel
  shadowColor: <string>'black',
  font: <string>'small-caps 16px "Segoe UI"',
  lineHeight: <number>20,
  lineWidth: <number>6,
  pointSize: <number>2,
  roundRect: <number>28,
  drawPoints: <Boolean>false,
  drawLabels: <Boolean>true,
  drawBoxes: <Boolean>true,
  drawPolygons: <Boolean>true,
  fillPolygons: <Boolean>false,
  useDepth: <Boolean>true,
  useCurves: <Boolean>false,
  bufferedOutput: <Boolean>false,
  useRawBoxes: <Boolean>false,
};
```
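These options affect the built-in draw helpers; a brief usage sketch, assuming the `human.draw.all` helper and a `canvas` element overlaying the video:

```js
// a minimal sketch: customize draw options, then render detection results
human.draw.drawOptions.drawPoints = true; // enable point cloud rendering
human.draw.drawOptions.fillPolygons = true;
const result = await human.detect(video);
human.draw.all(canvas, result); // draw all detected objects onto the canvas
```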
Demo app can use URL parameters to override configuration values
For example:
@@ -52,13 +93,13 @@ For example:
<br><hr><br>
## Face 3D Rendering using OpenGL
`face3d.html`: Demo that uses `Three.js` for 3D OpenGL rendering of a detected face
<br><hr><br>
## Face Recognition Demo
`demo/facematch.html`: Demo that uses all face description and embedding features to
detect, extract and identify all faces plus calculate similarity between them
@@ -73,7 +114,7 @@ It highlights functionality such as:
<br><hr><br>
## NodeJS Demo
- `node.js`: Demo using NodeJS with CommonJS module
  Simple demo that can process any input image
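A minimal sketch of NodeJS usage under these assumptions: `@tensorflow/tfjs-node` is installed, and the package default export is used (the require path is an assumption; adjust it to the build you load):

```js
// a minimal sketch, assuming tfjs-node and the @vladmandic/human package
const fs = require('fs');
const tf = require('@tensorflow/tfjs-node');
const Human = require('@vladmandic/human').default; // assumed entry point, adjust as needed

async function main() {
  const human = new Human();
  const buffer = fs.readFileSync('input.jpg');   // any input image
  const tensor = tf.node.decodeImage(buffer, 3); // decode to a 3-channel tensor
  const result = await human.detect(tensor);
  console.log(JSON.stringify(result.face, null, 2));
  tensor.dispose();                              // release tensor memory
}

main();
```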

Home.md

@@ -1,17 +1,16 @@
# Human Library
**AI-powered 3D Face Detection & Rotation Tracking, Face Description & Recognition,**
**Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis,**
**Age & Gender & Emotion Prediction, Gesture Recognition**
<br>
JavaScript module using TensorFlow/JS Machine Learning library
- **Browser**:
Compatible with both desktop and mobile platforms
Compatible with *CPU*, *WebGL*, *WASM* backends
Compatible with *WebWorker* execution
- **NodeJS**:
Compatible with both software *tfjs-node* and
@@ -23,32 +22,30 @@ Check out [**Live Demo**](https://vladmandic.github.io/human/demo/index.html) fo
## Demos
- [**Main Application**](https://vladmandic.github.io/human/demo/index.html)
- [**Face Extraction, Description, Identification and Matching**](https://vladmandic.github.io/human/demo/facematch.html)
- [**Face Extraction and 3D Rendering**](https://vladmandic.github.io/human/demo/face3d.html)
- [**Details on Demo Applications**](https://github.com/vladmandic/human/wiki/Demos)
## Project pages
- [**Code Repository**](https://github.com/vladmandic/human)
- [**NPM Package**](https://www.npmjs.com/package/@vladmandic/human)
- [**Issues Tracker**](https://github.com/vladmandic/human/issues)
- [**API Specification: Human**](https://vladmandic.github.io/human/typedoc/classes/human.html)
- [**API Specification: Root**](https://vladmandic.github.io/human/typedoc/)
- [**Change Log**](https://github.com/vladmandic/human/blob/main/CHANGELOG.md)
<br>
## Wiki pages
- [**Home**](https://github.com/vladmandic/human/wiki)
- [**Demos**](https://github.com/vladmandic/human/wiki/Demos)
- [**Installation**](https://github.com/vladmandic/human/wiki/Install)
- [**Usage & Functions**](https://github.com/vladmandic/human/wiki/Usage)
- [**Configuration Details**](https://github.com/vladmandic/human/wiki/Configuration)
- [**Output Details**](https://github.com/vladmandic/human/wiki/Outputs)
- [**Face Recognition & Face Description**](https://github.com/vladmandic/human/wiki/Embedding)
- [**Gesture Recognition**](https://github.com/vladmandic/human/wiki/Gesture)
- [**Common Issues**](https://github.com/vladmandic/human/wiki/Issues)
<br>
## Additional notes
@@ -62,18 +59,42 @@ Check out [**Live Demo**](https://vladmandic.github.io/human/demo/index.html) fo
<br>
*See [**issues**](https://github.com/vladmandic/human/issues?q=) and [**discussions**](https://github.com/vladmandic/human/discussions) for list of known limitations and planned enhancements*
*Suggestions are welcome!*
<hr><br>
## Inputs
`Human` library can process all known input types:
- `Image`, `ImageData`, `ImageBitmap`, `Canvas`, `OffscreenCanvas`, `Tensor`,
- `HTMLImageElement`, `HTMLCanvasElement`, `HTMLVideoElement`, `HTMLMediaElement`
Additionally, `HTMLVideoElement` and `HTMLMediaElement` can be a standard `<video>` tag that links to:
- WebCam on user's system
- Any supported video type
For example: `.mp4`, `.avi`, etc.
- Additional video types supported via *HTML5 Media Source Extensions*
Live streaming examples:
- **HLS** (*HTTP Live Streaming*) using `hls.js`
- **DASH** (Dynamic Adaptive Streaming over HTTP) using `dash.js`
- **WebRTC** media track
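As an illustration, a sketch of feeding an HLS live stream into `Human` via a `<video>` element, assuming `hls.js` is loaded on the page (the stream URL is hypothetical):

```js
// a minimal sketch using hls.js (stream URL is hypothetical)
const video = document.getElementById('video');
if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource('https://example.com/live/stream.m3u8');
  hls.attachMedia(video); // hls.js feeds the stream through Media Source Extensions
}
await video.play();
const result = await human.detect(video); // Human processes the element like any other input
```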
<br><hr><br>
## Default models
Default models in Human library are:
- **Face Detection**: MediaPipe BlazeFace-Back
- **Face Mesh**: MediaPipe FaceMesh
- **Face Description**: HSE FaceRes
- **Face Iris Analysis**: MediaPipe Iris
- **Emotion Detection**: Oarriaga Emotion
- **Body Analysis**: PoseNet
Note that alternative models are provided and can be enabled via configuration
For example, `PoseNet` model can be switched for `BlazePose` model depending on the use case
@@ -82,14 +103,13 @@ For more info, see [**Configuration Details**](https://github.com/vladmandic/hum
<br><hr><br>
`Human` library is written in `TypeScript` [4.2](https://www.typescriptlang.org/docs/handbook/intro.html)
Conforming to `JavaScript` [ECMAScript version 2020](https://www.ecma-international.org/ecma-262/11.0/index.html) standard
Build target is `JavaScript` **ECMAScript version 2018**
<br>
For details see [**Wiki Pages**](https://github.com/vladmandic/human/wiki)
and [**API Specification**](https://vladmandic.github.io/human/typedoc/classes/human.html)
<br>
<br><hr><br>