mirror of https://github.com/vladmandic/human
update docs with webrtc
parent 3539f10bcd
commit bd0cfa7ff3

@@ -63,7 +63,8 @@ const config: Config = {

    // typically not needed
    videoOptimized: true, // perform additional optimizations when input is video,
                          // must be disabled for images
                          // automatically disabled for Image, ImageData, ImageBitmap and Tensor inputs
                          // skips boundary detection for every n frames
                          // while maintaining in-box detection since objects cannot move that fast
    warmup: 'face',       // what to use for human.warmup(), can be 'none', 'face', 'full'
                          // warmup pre-initializes all models for faster inference but can take
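
The warmup described above can also be invoked explicitly. A minimal sketch, assuming an ES module environment that supports top-level await:

```js
import Human from '@vladmandic/human';

const human = new Human({ warmup: 'face' }); // warm up using a built-in face sample
await human.warmup(); // pre-initializes all enabled models
// subsequent human.detect() calls skip model initialization and run faster
```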

**Demos.md**

@@ -16,12 +16,20 @@ For notes on how to use the built-in micro server, see notes on [**Development Server

<br>

### Demo Inputs

Demo in `demo/index.html` loads `demo/index.js`

Demo can process:

- Sample images
- WebCam input
- WebRTC input

Note that a WebRTC connection requires a WebRTC server that provides a compatible media track, such as an H.264 video track
For such a WebRTC server implementation see the <https://github.com/vladmandic/stream-rtsp> project,
which implements a connection to an IP security camera using the RTSP protocol and transcodes it to WebRTC,
ready to be consumed by a client such as `Human`
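
A minimal sketch of how a browser client could consume such a stream; the signaling endpoint, URL scheme and stream name here are hypothetical and depend on the actual WebRTC server implementation:

```js
// hypothetical HTTP-based SDP offer/answer exchange with the WebRTC server
async function connectWebRTC(serverUrl, streamName, videoElement) {
  const pc = new RTCPeerConnection();
  pc.ontrack = (event) => { videoElement.srcObject = event.streams[0]; }; // render the incoming track
  pc.addTransceiver('video', { direction: 'recvonly' }); // receive-only video
  await pc.setLocalDescription(await pc.createOffer());
  const res = await fetch(`${serverUrl}/stream/${streamName}`, { method: 'POST', body: pc.localDescription.sdp });
  await pc.setRemoteDescription({ type: 'answer', sdp: await res.text() });
  return pc;
}
// once the video element is playing it can be passed to human.detect(videoElement)
```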

<br>

@@ -30,18 +38,51 @@ If your target is desktop, alternatively you can load `demo/browser.js` directly

Demo implements several ways to use the `Human` library,
all configurable in the `ui` configuration object in `demo/index.js` and in the UI itself:

- `ui.buffered`: run detection and screen refresh in sequence or as separate buffered functions
- `ui.bufferedFPSTarget`: when using buffered execution, the target FPS for screen refresh
- `ui.useWorker`: run processing in the main thread or in a dedicated web worker thread
- `ui.crop`: resize camera input to fit the screen or run at native resolution
- `ui.facing`: use the front or back camera if the device has multiple cameras
- `ui.modelsPreload`: pre-load all enabled models on page load
- `ui.modelsWarmup`: warmup all loaded models on page load
- `ui.useDepth`: draw points and polygons with different shades depending on detected Z-axis depth
- `ui.drawBoxes`: draw bounding boxes around detected objects (e.g. face)
- `ui.drawPoints`: draw each detected point as a point cloud
- `ui.drawPolygons`: connect detected points with polygons
- `ui.fillPolygons`: fill drawn polygons

```js
const ui = {
  crop: true,          // video mode crop to size or leave full frame
  columns: 2,          // when processing sample images create this many columns
  facing: true,        // camera facing front or back
  useWorker: false,    // use web workers for processing
  worker: 'index-worker.js',
  samples: ['../assets/sample6.jpg', '../assets/sample1.jpg', '../assets/sample4.jpg', '../assets/sample5.jpg', '../assets/sample3.jpg', '../assets/sample2.jpg'],
  compare: '../assets/sample-me.jpg',
  useWebRTC: false,    // use webrtc as camera source instead of local webcam
  webRTCServer: 'http://localhost:8002',
  webRTCStream: 'reowhite',
  console: true,       // log messages to browser console
  maxFPSframes: 10,    // keep fps history for how many frames
  modelsPreload: true, // preload human models on startup
  modelsWarmup: true,  // warmup human models on startup
  busy: false,         // internal camera busy flag
  buffered: true,      // should output be buffered between frames
  bench: true,         // show gl fps benchmark window
};
```

Additionally, some parameters are held inside the `Human` instance:

```ts
human.draw.drawOptions = {
  color: <string>'rgba(173, 216, 230, 0.3)',    // 'lightblue' with light alpha channel
  labelColor: <string>'rgba(173, 216, 230, 1)', // 'lightblue' with dark alpha channel
  shadowColor: <string>'black',
  font: <string>'small-caps 16px "Segoe UI"',
  lineHeight: <number>20,
  lineWidth: <number>6,
  pointSize: <number>2,
  roundRect: <number>28,
  drawPoints: <Boolean>false,
  drawLabels: <Boolean>true,
  drawBoxes: <Boolean>true,
  drawPolygons: <Boolean>true,
  fillPolygons: <Boolean>false,
  useDepth: <Boolean>true,
  useCurves: <Boolean>false,
  bufferedOutput: <Boolean>false,
  useRawBoxes: <Boolean>false,
};
```
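
These options are used by the built-in draw helper methods. A short usage sketch, assuming an initialized `human` instance and the `human.draw.all` convenience method:

```js
const result = await human.detect(video);   // run detection on any supported input
human.draw.drawOptions.fillPolygons = true; // tweak shared draw options at runtime
human.draw.all(canvas, result);             // draw all detected results onto a canvas
```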

Demo app can use URL parameters to override configuration values
For example:

@@ -52,13 +93,13 @@ For example:

<br><hr><br>

## Face 3D Rendering using OpenGL

`face3d.html`: Demo that uses `Three.js` for 3D OpenGL rendering of a detected face

<br><hr><br>

## Face Recognition Demo

`demo/facematch.html`: Demo that uses all face description and embedding features to
detect, extract and identify all faces, plus calculate similarity between them
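
A minimal sketch of the underlying comparison, assuming face description is enabled so each detected face carries an `embedding` descriptor:

```js
const first = await human.detect(imageA);
const second = await human.detect(imageB);
// compare two face descriptors; similarity score is in range 0..1
const score = human.similarity(first.face[0].embedding, second.face[0].embedding);
console.log(`faces are ${Math.round(100 * score)}% similar`);
```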

@@ -73,7 +114,7 @@ It highlights functionality such as:

<br><hr><br>

## NodeJS Demo

- `node.js`: Demo using NodeJS with CommonJS module
  Simple demo that can process any input image
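
A rough outline of what such a demo amounts to, assuming `tfjs-node` is installed and the library exposes its class as default export under CommonJS:

```js
const fs = require('fs');
const tf = require('@tensorflow/tfjs-node');
const Human = require('@vladmandic/human').default; // CommonJS import

async function main(imagePath) {
  const human = new Human();
  const tensor = tf.node.decodeImage(fs.readFileSync(imagePath)); // decode image file into a tensor
  const result = await human.detect(tensor); // tensor inputs work in NodeJS where DOM elements do not
  tensor.dispose();
  console.log(JSON.stringify(result.face, null, 2));
}

main('input.jpg');
```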

**Home.md**

@@ -1,17 +1,16 @@

# Human Library

**AI-powered 3D Face Detection & Rotation Tracking, Face Description & Recognition,**
**Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis,**
**Age & Gender & Emotion Prediction, Gesture Recognition**

<br>

JavaScript module using TensorFlow/JS Machine Learning library

- **Browser**:
  Compatible with both desktop and mobile platforms
  Compatible with *CPU*, *WebGL*, *WASM* backends
  Compatible with *WebWorker* execution
- **NodeJS**:
  Compatible with both software *tfjs-node* and

@@ -23,32 +22,30 @@ Check out [**Live Demo**](https://vladmandic.github.io/human/demo/index.html) fo

## Demos

- [**Main Application**](https://vladmandic.github.io/human/demo/index.html)
- [**Face Extraction, Description, Identification and Matching**](https://vladmandic.github.io/human/demo/facematch.html)
- [**Face Extraction and 3D Rendering**](https://vladmandic.github.io/human/demo/face3d.html)
- [**Details on Demo Applications**](https://github.com/vladmandic/human/wiki/Demos)

## Project pages

- [**Code Repository**](https://github.com/vladmandic/human)
- [**NPM Package**](https://www.npmjs.com/package/@vladmandic/human)
- [**Issues Tracker**](https://github.com/vladmandic/human/issues)
- [**API Specification: Human**](https://vladmandic.github.io/human/typedoc/classes/human.html)
- [**API Specification: Root**](https://vladmandic.github.io/human/typedoc/)
- [**Change Log**](https://github.com/vladmandic/human/blob/main/CHANGELOG.md)

<br>

## Wiki pages

- [**Home**](https://github.com/vladmandic/human/wiki)
- [**Demos**](https://github.com/vladmandic/human/wiki/Demos)
- [**Installation**](https://github.com/vladmandic/human/wiki/Install)
- [**Usage & Functions**](https://github.com/vladmandic/human/wiki/Usage)
- [**Configuration Details**](https://github.com/vladmandic/human/wiki/Configuration)
- [**Output Details**](https://github.com/vladmandic/human/wiki/Outputs)
- [**Face Recognition & Face Description**](https://github.com/vladmandic/human/wiki/Embedding)
- [**Gesture Recognition**](https://github.com/vladmandic/human/wiki/Gesture)
- [**Common Issues**](https://github.com/vladmandic/human/wiki/Issues)

## Additional notes

@@ -62,18 +59,42 @@ Check out [**Live Demo**](https://vladmandic.github.io/human/demo/index.html) fo

<br>

*See [**issues**](https://github.com/vladmandic/human/issues?q=) and [**discussions**](https://github.com/vladmandic/human/discussions) for a list of known limitations and planned enhancements*

*Suggestions are welcome!*

<hr><br>

## Inputs

`Human` library can process all known input types:

- `Image`, `ImageData`, `ImageBitmap`, `Canvas`, `OffscreenCanvas`, `Tensor`
- `HTMLImageElement`, `HTMLCanvasElement`, `HTMLVideoElement`, `HTMLMediaElement`

Additionally, an `HTMLVideoElement` or `HTMLMediaElement` can be a standard `<video>` tag that links to:

- WebCam on user's system
- Any supported video type
  For example: `.mp4`, `.avi`, etc.
- Additional video types supported via *HTML5 Media Source Extensions*
  Live streaming examples (one sketched below):
  - **HLS** (*HTTP Live Streaming*) using `hls.js`
  - **DASH** (*Dynamic Adaptive Streaming over HTTP*) using `dash.js`
- **WebRTC** media track
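
As an illustration of the streaming inputs, a sketch of feeding an HLS stream into a `<video>` element for `Human` to consume; the playlist URL is hypothetical and an initialized `human` instance is assumed:

```js
import Hls from 'hls.js';

const video = document.getElementById('video');
const hls = new Hls();
hls.loadSource('https://example.com/stream.m3u8'); // hypothetical HLS playlist URL
hls.attachMedia(video);                            // feed the stream into the <video> element
video.onplay = async () => {
  const result = await human.detect(video);        // a playing video element is a valid input
  console.log(result.gesture);
};
```
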
<br><hr><br>

## Default models

Default models in Human library are:

- **Face Detection**: MediaPipe BlazeFace-Back
- **Face Mesh**: MediaPipe FaceMesh
- **Face Description**: HSE FaceRes
- **Face Iris Analysis**: MediaPipe Iris
- **Emotion Detection**: Oarriaga Emotion
- **Gender Detection**: Oarriaga Gender
- **Age Detection**: SSR-Net Age IMDB
- **Body Analysis**: PoseNet
- **Face Embedding**: BecauseofAI MobileFace Embedding

Note that alternative models are provided and can be enabled via configuration
For example, `PoseNet` model can be switched for `BlazePose` model depending on the use case
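
A sketch of such a switch via user configuration; the exact `modelPath` value is an assumption and should be checked against the model files actually shipped with the library:

```js
import Human from '@vladmandic/human';

const human = new Human({
  body: { enabled: true, modelPath: 'blazepose.json' }, // assumed model file name for BlazePose
});
const result = await human.detect(video);
```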

@@ -82,14 +103,13 @@ For more info, see [**Configuration Details**](https://github.com/vladmandic/hum

<br><hr><br>

`Human` library is written in `TypeScript` [4.2](https://www.typescriptlang.org/docs/handbook/intro.html)
Conforming to `JavaScript` [ECMAScript version 2020](https://www.ecma-international.org/ecma-262/11.0/index.html) standard
Build target is `JavaScript` **ECMAScript version 2018**

<br>

For details see [**Wiki Pages**](https://github.com/vladmandic/human/wiki)
and [**API Specification**](https://vladmandic.github.io/human/typedoc/classes/human.html)

<br>