diff --git a/Build-Process.md b/Build-Process.md
index 5cc8c89..213526b 100644
--- a/Build-Process.md
+++ b/Build-Process.md
@@ -14,9 +14,12 @@ npm run build
This will rebuild the library itself (all variations) as well as the demo
+
+
Project is written in pure `JavaScript` [ECMAScript version 2020](https://www.ecma-international.org/ecma-262/11.0/index.html)
Build target is `JavaScript` **ECMAScript version 2018**
-Only project depdendency is [@tensorflow/tfjs](https://github.com/tensorflow/tfjs)
-Development dependencies are [eslint](https://github.com/eslint) used for code linting and [esbuild](https://github.com/evanw/esbuild) used for IIFE and ESM script bundling
+
+Only project dependency is [@tensorflow/tfjs](https://github.com/tensorflow/tfjs)
+Development dependencies are [eslint](https://github.com/eslint), used for code linting, and [esbuild](https://github.com/evanw/esbuild), used for IIFE and ESM script bundling
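+
+As a rough sketch, the bundling step with esbuild could look like the following (the entry point and output file names here are assumptions, see `package.json` for the authoritative build scripts):
+
+```js
+// build.js: minimal sketch of producing IIFE and ESM bundles with esbuild
+// (entry point and output names are illustrative, not the project's actual script)
+const esbuild = require('esbuild');
+
+// IIFE bundle for direct <script src=...> usage in browsers
+esbuild.buildSync({
+  entryPoints: ['src/human.js'],
+  outfile: 'dist/human.js',
+  bundle: true,
+  format: 'iife',
+  globalName: 'Human',
+  target: 'es2018', // matches the stated build target
+});
+
+// ESM bundle for `import` usage
+esbuild.buildSync({
+  entryPoints: ['src/human.js'],
+  outfile: 'dist/human.esm.js',
+  bundle: true,
+  format: 'esm',
+  target: 'es2018',
+});
+```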
diff --git a/Configuration.md b/Configuration.md
index aca8dfa..384a51f 100644
--- a/Configuration.md
+++ b/Configuration.md
@@ -16,27 +16,35 @@ config = {
backend: 'webgl', // select tfjs backend to use
console: true, // enable debugging output to console
async: true, // execute enabled models in parallel
- // this disables per-model performance data but slightly increases performance
+ // this disables per-model performance data but
+ // slightly increases performance
// cannot be used if profiling is enabled
profile: false, // enable tfjs profiling
- // this has significant performance impact, only enable for debugging purposes
+ // this has significant performance impact
+ // only enable for debugging purposes
// currently only implemented for age,gender,emotion models
deallocate: false, // aggressively deallocate gpu memory after each usage
- // only valid for webgl backend and only during first call, cannot be changed unless library is reloaded
- // this has significant performance impact, only enable on low-memory devices
+ // only valid for webgl backend and only during first call
+ // cannot be changed unless library is reloaded
+ // this has significant performance impact
+ // only enable on low-memory devices
scoped: false, // enable scoped runs
- // some models *may* have memory leaks, this wrapps everything in a local scope at a cost of performance
+ // some models *may* have memory leaks,
+ // this wraps everything in a local scope at a cost of performance
// typically not needed
- videoOptimized: true, // perform additional optimizations when input is video, must be disabled for images
- filter: { // note: image filters are only available in Browser environments and not in NodeJS as they require WebGL for processing
+ videoOptimized: true, // perform additional optimizations when input is video,
+ // must be disabled for images
+ // basically this skips object box boundary detection every n frames
+ // while maintaining in-box detection, since objects cannot move that fast
+
+ filter: {
enabled: true, // enable image pre-processing filters
- return: true, // return processed canvas imagedata in result
width: 0, // resize input width
height: 0, // resize input height
- // usefull on low-performance devices to reduce the size of processed input
// if both width and height are set to 0, there is no resizing
// if just one is set, second one is scaled automatically
// if both are set, values are used as-is
+ return: true, // return processed canvas imagedata in result
brightness: 0, // range: -1 (darken) to 1 (lighten)
contrast: 0, // range: -1 (reduce contrast) to 1 (increase contrast)
sharpness: 0, // range: 0 (no sharpening) to 1 (maximum sharpening)
@@ -51,90 +59,115 @@ config = {
polaroid: false, // image polaroid camera effect
pixelate: 0, // range: 0 (no pixelate) to N (number of pixels to pixelate)
},
+
+ gesture: {
+ enabled: true, // enable simple gesture recognition
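+    // takes processed data and based on geometry detects simple gestures
+    // easily expandable via code, see `src/gesture.js`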
+ },
+
face: {
enabled: true, // controls if specified modul is enabled
- // face.enabled is required for all face models: detector, mesh, iris, age, gender, emotion
- // note: module is not loaded until it is required
+ // face.enabled is required for all face models:
+ // detector, mesh, iris, age, gender, emotion
+ // (note: module is not loaded until it is required)
detector: {
- modelPath: '../models/blazeface/back/model.json', // can be 'front' or 'back'.
- // 'front' is optimized for large faces such as front-facing camera and 'back' is optimized for distanct faces.
+ modelPath: '../models/blazeface-back.json', // can be 'front' or 'back'.
+ // 'front' is optimized for large faces
+ // such as those from a front-facing camera, and
+ // 'back' is optimized for distant faces.
inputSize: 256, // fixed value: 128 for 'front' and 256 for 'back'
- maxFaces: 10, // maximum number of faces detected in the input, should be set to the minimum number for performance
- skipFrames: 10, // how many frames to go without re-running the face bounding box detector
- // only used for video inputs, ignored for static inputs
- // if model is running st 25 FPS, we can re-use existing bounding box for updated face mesh analysis
- // as the face probably hasn't moved much in short time (10 * 1/25 = 0.25 sec)
- minConfidence: 0.5, // threshold for discarding a prediction
- iouThreshold: 0.3, // threshold for deciding whether boxes overlap too much in non-maximum suppression
- scoreThreshold: 0.7, // threshold for deciding when to remove boxes based on score in non-maximum suppression
+ maxFaces: 10, // maximum number of faces detected in the input
+ // should be set to the minimum number for performance
+ skipFrames: 15, // how many frames to go without re-running the face bounding box detector
+ // only used for video inputs
+ // e.g., if model is running at 25 FPS, we can re-use existing bounding
+ // box for updated face analysis as the head probably hasn't moved much
+ // in that short time (15 * 1/25 = 0.6 sec)
+ minConfidence: 0.1, // threshold for discarding a prediction
+ iouThreshold: 0.1, // threshold for deciding whether boxes overlap too much in
+ // non-maximum suppression (0.1 means drop boxes that overlap more than 10%)
+ scoreThreshold: 0.2, // threshold for deciding when to remove boxes based on score
+ // in non-maximum suppression,
+ // this is applied on detection objects only and before minConfidence
},
+
mesh: {
enabled: true,
- modelPath: '../models/facemesh/model.json',
+ modelPath: '../models/facemesh.json',
inputSize: 192, // fixed value
},
+
iris: {
enabled: true,
- modelPath: '../models/iris/model.json',
- enlargeFactor: 2.3, // empiric tuning
+ modelPath: '../models/iris.json',
inputSize: 64, // fixed value
},
+
age: {
enabled: true,
- modelPath: '../models/ssrnet-age/imdb/model.json', // can be 'imdb' or 'wiki'
- // which determines training set for model
+ modelPath: '../models/age-ssrnet-imdb.json', // can be 'age-ssrnet-imdb' or 'age-ssrnet-wiki'
+ // which determines training set for model
inputSize: 64, // fixed value
- skipFrames: 10, // how many frames to go without re-running the detector, only used for video inputs
+ skipFrames: 15, // how many frames to go without re-running the detector
+ // only used for video inputs
},
+
gender: {
enabled: true,
- minConfidence: 0.8, // threshold for discarding a prediction
- modelPath: '../models/ssrnet-gender/imdb/model.json',
+ minConfidence: 0.1, // threshold for discarding a prediction
+ modelPath: '../models/gender-ssrnet-imdb.json', // can be 'gender', 'gender-ssrnet-imdb' or 'gender-ssrnet-wiki'
+ inputSize: 64, // fixed value
+ skipFrames: 15, // how many frames to go without re-running the detector
+ // only used for video inputs
},
+
emotion: {
enabled: true,
inputSize: 64, // fixed value
- minConfidence: 0.5, // threshold for discarding a prediction
- skipFrames: 10, // how many frames to go without re-running the detector, only used for video inputs
- modelPath: '../models/emotion/model.json',
+ minConfidence: 0.2, // threshold for discarding a prediction
+ skipFrames: 15, // how many frames to go without re-running the detector
+ modelPath: '../models/emotion-large.json', // can be 'mini' or 'large'
},
},
+
body: {
enabled: true,
- modelPath: '../models/posenet/model.json',
+ modelPath: '../models/posenet.json',
inputResolution: 257, // fixed value
- outputStride: 16, // fixed value
- maxDetections: 10, // maximum number of people detected in the input, should be set to the minimum number for performance
- scoreThreshold: 0.7, // threshold for deciding when to remove boxes based on score in non-maximum suppression
+ maxDetections: 10, // maximum number of people detected in the input
+ // should be set to the minimum number for performance
+ scoreThreshold: 0.8, // threshold for deciding when to remove boxes based on score
+ // in non-maximum suppression
nmsRadius: 20, // radius for deciding points are too close in non-maximum suppression
},
+
hand: {
enabled: true,
inputSize: 256, // fixed value
- skipFrames: 10, // how many frames to go without re-running the hand bounding box detector
+ skipFrames: 15, // how many frames to go without re-running the hand bounding box detector
// only used for video inputs
- // if model is running st 25 FPS, we can re-use existing bounding box for updated hand skeleton analysis
- // as the hand probably hasn't moved much in short time (10 * 1/25 = 0.25 sec)
+ // e.g., if model is running at 25 FPS, we can re-use existing bounding
+ // box for updated hand skeleton analysis as the hand probably
+ // hasn't moved much in that short time (15 * 1/25 = 0.6 sec)
minConfidence: 0.5, // threshold for discarding a prediction
- iouThreshold: 0.3, // threshold for deciding whether boxes overlap too much in non-maximum suppression
- scoreThreshold: 0.7, // threshold for deciding when to remove boxes based on score in non-maximum suppression
- enlargeFactor: 1.65, // empiric tuning as skeleton prediction prefers hand box with some whitespace
- maxHands: 10, // maximum number of hands detected in the input, should be set to the minimum number for performance
+ iouThreshold: 0.1, // threshold for deciding whether boxes overlap too much
+ // in non-maximum suppression
+ scoreThreshold: 0.8, // threshold for deciding when to remove boxes based on
+ // score in non-maximum suppression
+ maxHands: 1, // maximum number of hands detected in the input
+ // should be set to the minimum number for performance
+ landmarks: true, // detect hand landmarks or just hand boundary box
detector: {
- modelPath: '../models/handdetect/model.json',
+ modelPath: '../models/handdetect.json',
},
skeleton: {
- modelPath: '../models/handskeleton/model.json',
+ modelPath: '../models/handskeleton.json',
},
},
- gesture: {
- enabled: true, // enable simple gesture recognition
- // takes processed data and based on geometry detects simple gestures
- // easily expandable via code, see `src/gesture.js`
- },
};
```
+
+
Any user configuration and default configuration are merged using deep-merge, so you do not need to redefine the entire configuration
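+
+For example (a minimal sketch, assuming `human` has already been loaded and `input` is an image or video element), you can override just the values you care about and let deep-merge fill in the rest:
+
+```js
+// only the values being changed need to be specified,
+// all remaining values fall back to the defaults shown above
+const config = {
+  face: {
+    age: { enabled: false },    // skip age estimation
+    gender: { enabled: false }, // skip gender estimation
+  },
+  filter: { enabled: false },   // skip image pre-processing
+};
+const result = await human.detect(input, config);
+```
+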
Configuration object is large, but typically you only need to modify a few values:
diff --git a/Demos.md b/Demos.md
index 234f09f..d81031f 100644
--- a/Demos.md
+++ b/Demos.md
@@ -39,6 +39,8 @@ npm run dev
```
On first start, it will install all development dependencies required to rebuild `Human` library
+By default, the web server runs on port `8000`, which is configurable via `dev-server.js:options.port`
+
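+A minimal sketch of that setting (only the `port` field is grounded in the note above, other fields of the options object are omitted):
+
+```js
+// dev-server.js: illustrative fragment of the options object
+const options = {
+  port: 8000, // change to serve the demo on a different port
+};
+```
+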
```log
> @vladmandic/human@0.7.5 dev /home/vlado/dev/human
> npm install && node --trace-warnings --unhandled-rejections=strict --trace-uncaught --no-deprecation dev-server.js
@@ -62,4 +64,3 @@ found 0 vulnerabilities
- `node.js`: Demo using NodeJS with CommonJS module
This is a very simple demo as, although the `Human` library is compatible with NodeJS execution
and is able to load images and models from the local filesystem,
-
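+
+A minimal sketch of such NodeJS usage (the module wiring and file names here are assumptions, see the `node.js` demo itself for the authoritative version):
+
+```js
+// minimal NodeJS sketch using CommonJS modules
+const fs = require('fs');
+const tf = require('@tensorflow/tfjs-node'); // software backend, or tfjs-node-gpu for CUDA
+const human = require('@vladmandic/human'); // assumed to expose detect() as in the browser
+
+async function main() {
+  const buffer = fs.readFileSync('sample.jpg'); // load image from local filesystem
+  const tensor = tf.node.decodeImage(buffer); // decode into a tensor accepted by detect()
+  const result = await human.detect(tensor, { backend: 'tensorflow' });
+  console.log(JSON.stringify(result, null, 2));
+  tensor.dispose(); // release tensor memory
+}
+
+main();
+```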
diff --git a/Home.md b/Home.md
index 7e78362..6d15051 100644
--- a/Home.md
+++ b/Home.md
@@ -27,16 +27,16 @@
- [**Performance Notes**](https://github.com/vladmandic/human/wiki/Performance)
- [**Credits**](https://github.com/vladmandic/human/wiki/Credits)
+
+
Compatible with *Browser*, *WebWorker* and *NodeJS* execution on both Windows and Linux
- Browser/WebWorker: Compatible with *CPU*, *WebGL*, *WASM* and *WebGPU* backends
- NodeJS: Compatible with software *tfjs-node* and CUDA accelerated backends *tfjs-node-gpu*
(and maybe with React-Native as it doesn't use any DOM objects)
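+
+For example, selecting one of the listed backends is a single configuration value (a minimal sketch, assuming `human` and `input` are already set up):
+
+```js
+// pick any of the supported tfjs backends via configuration
+const result = await human.detect(input, { backend: 'wasm' });
+```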
+
+
*This is a pre-release project, see [issues](https://github.com/vladmandic/human/issues) for list of known limitations and planned enhancements*
*Suggestions are welcome!*
-
-
-