diff --git a/Configuration.md b/Configuration.md
index 043cd28..987eac1 100644
--- a/Configuration.md
+++ b/Configuration.md
@@ -63,7 +63,8 @@ const config: Config = {
// typically not needed
videoOptimized: true, // perform additional optimizations when input is video,
// must be disabled for images
- // basically this skips object box boundary detection for every n frames
+ // automatically disabled for Image, ImageData, ImageBitmap and Tensor inputs
+                           // skips boundary detection every n frames
// while maintaining in-box detection since objects cannot move that fast
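+                           // usage note (illustrative, not part of the shipped config): any option
+                           // here can be passed as a partial override at construction time,
+                           // e.g. const human = new Human({ videoOptimized: false });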
warmup: 'face', // what to use for human.warmup(), can be 'none', 'face', 'full'
// warmup pre-initializes all models for faster inference but can take
diff --git a/Demos.md b/Demos.md
index babbcf2..c75b9f4 100644
--- a/Demos.md
+++ b/Demos.md
@@ -16,12 +16,20 @@ On notes on how to use built-in micro server, see notes on [**Development Server
-### Changing Demo Target
+### Demo Inputs
-Demo in `demo/index.html` loads `dist/demo-browser-index.js` which is built from sources in `demo`, starting with `demo/browser`
-This bundled version is needed since mobile browsers (e.g. Chrome on Android) do not support native modules loading yet
+Demo in `demo/index.html` loads `demo/index.js`
-If your target is desktop, alternatively you can load `demo/browser.js` directly and skip requirement to rebuild demo from sources every time
+Demo can process:
+
+- Sample images
+- WebCam input
+- WebRTC input
+
+Note that WebRTC connection requires a WebRTC server that provides a compatible media track, such as an H.264 video track
+For an example of such a WebRTC server, see the project
+that connects to an IP security camera using the RTSP protocol and transcodes the stream to WebRTC,
+ready to be consumed by a client such as `Human`
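+
+A minimal sketch of feeding such a stream to `Human` (assumes `stream` is a `MediaStream` already negotiated over `RTCPeerConnection` and `human` is an initialized instance):
+
+```js
+const video = document.getElementById('video'); // plain <video> element on the page
+video.srcObject = stream; // attach the incoming WebRTC media stream
+await video.play(); // start playback so frames become available
+const result = await human.detect(video); // HTMLVideoElement is a supported input
+```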
@@ -30,18 +38,51 @@ If your target is desktop, alternatively you can load `demo/browser.js` directly
Demo implements several ways to use the `Human` library,
all configurable in the `index.js:ui` configuration object and in the UI itself:
-- `ui.buffered`: run detection and screen refresh in a sequence or as separate buffered functions
-- `ui.bufferedFPSTarget`: when using buffered execution this target fps for screen refresh
-- `ui.useWorker`: run processing in main thread or dedicated web worker thread
-- `ui.crop`: resize camera input to fit screen or run at native resolution
-- `ui.facing`: use front or back camera if device has multiple cameras
-- `ui.modelsPreload`: pre-load all enabled models on page load
-- `ui.modelsWarmup`: warmup all loaded models on page load
-- `ui.useDepth`: draw points and polygons with different shade depending on detected Z-axis depth
-- `ui.drawBoxes`: draw bounding boxes around detected objects (e.g. face)
-- `ui.drawPoints`: draw each deteced point as point cloud
-- `ui.drawPolygons`: connect detected points with polygons
-- `ui.fillPolygons`: fill drawn polygons
+```js
+const ui = {
+ crop: true, // video mode crop to size or leave full frame
+ columns: 2, // when processing sample images create this many columns
+ facing: true, // camera facing front or back
+ useWorker: false, // use web workers for processing
+  worker: 'index-worker.js', // web worker script used when useWorker is enabled
+  samples: ['../assets/sample6.jpg', '../assets/sample1.jpg', '../assets/sample4.jpg', '../assets/sample5.jpg', '../assets/sample3.jpg', '../assets/sample2.jpg'], // images used in sample-processing mode
+  compare: '../assets/sample-me.jpg', // reference image used for face similarity comparison
+  useWebRTC: false, // use webrtc as camera source instead of local webcam
+  webRTCServer: 'http://localhost:8002', // webrtc server address
+  webRTCStream: 'reowhite', // webrtc stream name
+ console: true, // log messages to browser console
+ maxFPSframes: 10, // keep fps history for how many frames
+ modelsPreload: true, // preload human models on startup
+ modelsWarmup: true, // warmup human models on startup
+ busy: false, // internal camera busy flag
+ buffered: true, // should output be buffered between frames
+ bench: true, // show gl fps benchmark window
+};
+```
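+
+For example, detection can be moved off the main thread by toggling the worker-related fields listed above in the demo source (a hypothetical edit):
+
+```js
+ui.useWorker = true; // run processing in a dedicated web worker thread
+ui.worker = 'index-worker.js'; // worker script bundled with the demo
+```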
+
+Additionally, some parameters are held inside the `Human` instance:
+
+```ts
+human.draw.drawOptions = {
+  color: 'rgba(173, 216, 230, 0.3)', // 'lightblue' with low opacity
+  labelColor: 'rgba(173, 216, 230, 1)', // 'lightblue', fully opaque
+  shadowColor: 'black', // shadow color behind drawn labels
+  font: 'small-caps 16px "Segoe UI"', // font used for labels
+  lineHeight: 20, // line height for multi-line labels
+  lineWidth: 6, // line width for boxes and polygons
+  pointSize: 2, // size of drawn points
+  roundRect: 28, // corner radius for bounding boxes
+  drawPoints: false, // draw each detected point as point cloud
+  drawLabels: true, // draw labels with detection results
+  drawBoxes: true, // draw bounding boxes around detected objects
+  drawPolygons: true, // connect detected points with polygons
+  fillPolygons: false, // fill drawn polygons
+  useDepth: true, // shade points and polygons by detected z-axis depth
+  useCurves: false, // draw polygons as curves instead of straight lines
+  bufferedOutput: false, // interpolate results between frames for smoother output
+  useRawBoxes: false, // use raw box coordinates as returned by models
+};
+```
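+
+These options are consumed by the library's draw helpers; a short usage sketch (assumes `canvas` and `result` from a prior `human.detect()` call):
+
+```js
+human.draw.all(canvas, result); // draw all detected results onto canvas using the options above
+```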
Demo app can use URL parameters to override configuration values
For example:
@@ -52,13 +93,13 @@ For example:
-### Face 3D Rendering using OpenGL
+## Face 3D Rendering using OpenGL
`face3d.html`: Demo that uses `Three.js` for 3D OpenGL rendering of a detected face
-### Face Recognition Demo
+## Face Recognition Demo
`demo/facematch.html`: Demo that uses all face description and embedding features to
detect, extract and identify all faces plus calculate similarity between them
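+
+The core of that calculation is a single call; a minimal sketch (assumes at least two detected faces and face description/embedding enabled in config):
+
+```js
+const result = await human.detect(image); // image containing multiple faces
+const score = human.similarity(result.face[0].embedding, result.face[1].embedding);
+console.log(`similarity: ${score}`); // 0 = completely different, 1 = identical
+```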
@@ -73,7 +114,7 @@ It highlights functionality such as:
-## NodeJS
+## NodeJS Demo
- `node.js`: Demo using NodeJS with CommonJS module
Simple demo that can process any input image
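+
+  A minimal sketch of that flow (module names are real; file name and config values are illustrative):
+
+  ```js
+  const fs = require('fs');
+  const tf = require('@tensorflow/tfjs-node'); // software tfjs backend for nodejs
+  const Human = require('@vladmandic/human').default;
+
+  async function main() {
+    const human = new Human({ backend: 'tensorflow' }); // use the tfjs-node backend
+    const buffer = fs.readFileSync('input.jpg'); // illustrative input image path
+    const tensor = tf.node.decodeImage(buffer, 3); // decode to a 3-channel tensor
+    const result = await human.detect(tensor); // tensor is a supported input type
+    console.log('faces detected:', result.face.length);
+    tensor.dispose(); // release tensor memory
+  }
+
+  main();
+  ```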
diff --git a/Home.md b/Home.md
index e383547..44a1c19 100644
--- a/Home.md
+++ b/Home.md
@@ -1,17 +1,16 @@
# Human Library
-**3D Face Detection & Rotation Tracking, Face Embedding & Recognition,**
-**Body Pose Tracking, 3D Hand & Finger Tracking,**
-**Iris Analysis, Age & Gender & Emotion Prediction,**
-**Gesture Recognition**
+**AI-powered 3D Face Detection & Rotation Tracking, Face Description & Recognition,**
+**Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis,**
+**Age & Gender & Emotion Prediction, Gesture Recognition**
JavaScript module using TensorFlow/JS Machine Learning library
- **Browser**:
- Compatible with *CPU*, *WebGL*, *WASM* backends
Compatible with both desktop and mobile platforms
+ Compatible with *CPU*, *WebGL*, *WASM* backends
Compatible with *WebWorker* execution
- **NodeJS**:
Compatible with both software *tfjs-node* and
@@ -23,32 +22,30 @@ Check out [**Live Demo**](https://vladmandic.github.io/human/demo/index.html) fo
## Demos
-- [**Demo Application**](https://vladmandic.github.io/human/demo/index.html)
+- [**Main Application**](https://vladmandic.github.io/human/demo/index.html)
- [**Face Extraction, Description, Identification and Matching**](https://vladmandic.github.io/human/demo/facematch.html)
- [**Face Extraction and 3D Rendering**](https://vladmandic.github.io/human/demo/face3d.html)
+- [**Details on Demo Applications**](https://github.com/vladmandic/human/wiki/Demos)
## Project pages
- [**Code Repository**](https://github.com/vladmandic/human)
- [**NPM Package**](https://www.npmjs.com/package/@vladmandic/human)
- [**Issues Tracker**](https://github.com/vladmandic/human/issues)
-- [**API Specification**](https://vladmandic.github.io/human/typedoc/classes/human.html)
+- [**API Specification: Human**](https://vladmandic.github.io/human/typedoc/classes/human.html)
+- [**API Specification: Root**](https://vladmandic.github.io/human/typedoc/)
- [**Change Log**](https://github.com/vladmandic/human/blob/main/CHANGELOG.md)
-
-
## Wiki pages
- [**Home**](https://github.com/vladmandic/human/wiki)
-- [**Demos**](https://github.com/vladmandic/human/wiki/Demos)
- [**Installation**](https://github.com/vladmandic/human/wiki/Install)
- [**Usage & Functions**](https://github.com/vladmandic/human/wiki/Usage)
- [**Configuration Details**](https://github.com/vladmandic/human/wiki/Configuration)
- [**Output Details**](https://github.com/vladmandic/human/wiki/Outputs)
-- [**Face Recognition & Face Embedding**](https://github.com/vladmandic/human/wiki/Embedding)
+- [**Face Recognition & Face Description**](https://github.com/vladmandic/human/wiki/Embedding)
- [**Gesture Recognition**](https://github.com/vladmandic/human/wiki/Gesture)
-
-
+- [**Common Issues**](https://github.com/vladmandic/human/wiki/Issues)
## Additional notes
@@ -62,18 +59,42 @@ Check out [**Live Demo**](https://vladmandic.github.io/human/demo/index.html) fo
+*See [**issues**](https://github.com/vladmandic/human/issues?q=) and [**discussions**](https://github.com/vladmandic/human/discussions) for a list of known limitations and planned enhancements*
+
+*Suggestions are welcome!*
+
+
+
+## Inputs
+
+`Human` library can process all known input types:
+
+- `Image`, `ImageData`, `ImageBitmap`, `Canvas`, `OffscreenCanvas`, `Tensor`
+- `HTMLImageElement`, `HTMLCanvasElement`, `HTMLVideoElement`, `HTMLMediaElement`
+
+Additionally, `HTMLVideoElement` and `HTMLMediaElement` can be a standard `<video>` tag
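+
+All of these go through the same call; a short sketch (assumes an initialized `human` instance):
+
+```js
+const canvas = document.getElementById('canvas'); // HTMLCanvasElement
+const result = await human.detect(canvas); // same detect() call for any supported input
+```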