lint markdown

master
Vladimir Mandic 2023-04-03 10:41:38 -04:00
parent b38b93c6fc
commit 1abc0f96dd
12 changed files with 46 additions and 16 deletions

@ -17,26 +17,28 @@ If difference is higher than `config.cacheSensitivity` (expressed in range 0..1)
Setting `config.cacheSensitivity=0` disables caching
Caching can be monitored via `human.performance`:
- `totalFrames`: total number of processed frames
- `cachedFrames`: number of frames considered for caching
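The two counters above can be combined into a simple cache hit rate; a minimal sketch with hypothetical counter values standing in for a real `human.performance` object:

```javascript
// Hypothetical counter values, illustrating the fields described above
const performance = { totalFrames: 100, cachedFrames: 40 };

// share of frames that were considered for caching
const hitRate = performance.cachedFrames / performance.totalFrames;
console.log(hitRate); // 0.4
```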
### Per-module results caching
Each module implements its own logic that interprets values of `config.<module>`:
- `skipFrames`: maximum number of frames before cache is invalidated
- `skipTime`: maximum time (in ms) before cache is invalidated
Values are interpreted as **or**, meaning whichever threshold is reached first invalidates the cache
Note that per-module caching logic is only active if the input is considered sufficiently similar
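The **or** semantics can be sketched as follows; the `cacheValid` helper and its threshold values are illustrative, not the library's actual implementation:

```javascript
// Hypothetical per-module thresholds (field names from this page)
const moduleConfig = { skipFrames: 21, skipTime: 2500 };

// Cache stays valid only while NEITHER threshold has been exceeded,
// i.e. whichever threshold is reached first invalidates the cache
function cacheValid(framesSinceRun, msSinceRun, cfg) {
  return framesSinceRun < cfg.skipFrames && msSinceRun < cfg.skipTime;
}

console.log(cacheValid(5, 100, moduleConfig));  // true: neither threshold reached
console.log(cacheValid(30, 100, moduleConfig)); // false: skipFrames exceeded
console.log(cacheValid(5, 3000, moduleConfig)); // false: skipTime exceeded
```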
**Single-stage Modules Caching**:
- Includes: **Body, Emotion, Description, Object, AntiSpoof**
- Module will return the last known good value for a specific object
For example, there is no need to re-run *age/gender* analysis on video input on each frame
since it probably did not change if the input itself is sufficiently similar
**Two-stage Modules Caching**:
- Includes: **Face, Hand**
- Module will run analysis on the last known position of the object but will skip detecting new objects
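The two-stage flow can be illustrated with a small sketch; `runDetector` and `runAnalysis` are hypothetical stubs standing in for the real detection and analysis models, and this is not the library's actual code:

```javascript
// Hypothetical stubs for the two stages
const runDetector = (input) => [{ x: 0, y: 0, width: 100, height: 100 }]; // expensive: find new objects
const runAnalysis = (input, box) => ({ box, score: 0.9 });                // cheaper: analyze a known box

function detectTwoStage(input, cache) {
  // stage 1 (detection) is skipped while the cache is valid; stage 2 (analysis) always runs
  const boxes = cache.valid ? cache.boxes : runDetector(input);
  return boxes.map((box) => runAnalysis(input, box));
}

const cache = { valid: true, boxes: [{ x: 10, y: 10, width: 50, height: 50 }] };
console.log(detectTwoStage('frame', cache).length); // 1: analysis ran on the cached box
```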

Diag.md

@ -1,22 +1,25 @@
# Diagnostics
## Get human version
```js
console.log(human.version);
```
```
> 2.2.0
```
## Enable console debug output
```js
const human = new Human({ debug: true });
```
## Get current configuration
```js
console.log(human.config);
```
```json
{
  "backend": "tensorflow",
@ -27,9 +30,11 @@ console.log(human.config);
```
## Get current environment details
```js
console.log(human.env);
```
```json
{
  "browser": true,
@ -47,10 +52,12 @@ console.log(human.env);
```
## Get list of all models
```js
const models = human.models.list();
console.log(models);
```
```js
models = [
  { name: 'face', loaded: true },
@ -71,6 +78,7 @@ models = [
```
## Get memory usage information
```js
console.log(human.tf.engine().memory());
```
@ -80,9 +88,11 @@ memory = { numTensors: 1053, numDataBuffers: 1053, numBytes: 42736024 };
```
## Get current TensorFlow flags
```js
console.log(human.tf.ENV.flags);
```
```js
flags = { DEBUG: false, PROD: true, CPU_HANDOFF_SIZE_THRESHOLD: 128 };
```
@ -93,6 +103,7 @@ flags = { DEBUG: false, PROD: true, CPU_HANDOFF_SIZE_THRESHOLD: 128 };
const result = await human.detect(input);
console.log(result.performance);
```
```js
performance = {
  backend: 1, load: 283, image: 1, frames: 1, cached: 0, changed: 1, total: 947, draw: 0, face: 390, emotion: 15, embedding: 97, body: 97, hand: 142, object: 312, gesture: 0,

@ -9,6 +9,7 @@ This guide covers multiple scenarios:
<br>
## Install Docker
For details see [Docker Docs: Installation Guide](https://docs.docker.com/engine/install/)
Example: Install Docker using official convenience script:
@ -87,7 +88,7 @@ USER node
> sudo docker build . --file myapp.docker --tag myapp > sudo docker build . --file myapp.docker --tag myapp
### Run container
- Maps `models` from host to a docker container so there is no need to copy it into each container
- Modify path as needed
@ -128,7 +129,7 @@ USER node
> sudo docker build . --file human-web.docker --tag human-web > sudo docker build . --file human-web.docker --tag human-web
### Run container
- Maps `models` from host to a docker container so there is no need to copy it into each container
- Maps human internal web server to external port 8001 so app can be accessed externally
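A run command matching the two mappings above might look like the following sketch; the host path, internal container path and port, and image tag are all assumptions for illustration and should be adjusted to your setup:

```shell
# hypothetical invocation: -v bind-mounts models from the host,
# -p publishes the container's internal web server on external port 8001
sudo docker run --rm \
  -v "$(pwd)/models:/home/node/human/models" \
  -p 8001:10031 \
  human-web
```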

@ -17,7 +17,8 @@
## Labels
If `options.drawLabels` is enabled (default):
- Labels for each feature are parsed using templates
- Label templates can use built-in values in `[]` or be provided as any string literal
- Labels for each feature are set relative to the top-left of the detection box of that feature (face, hand, body, object, etc.)
@ -50,6 +51,7 @@ drawOptions = {
## Example
Example of custom labels:
```js
const drawOptions = {
  bodyLabels: `person confidence is [score]% and has ${human.result?.body?.[0]?.keypoints.length || 'no'} keypoints`,

@ -53,6 +53,7 @@ for (let i = 0; i < currentResult.face.length; i++) {
```
However, note that the default configuration only detects the first face in the frame, so increase the maximum number of detected faces as well:
```js
const myConfig = {
  face: {
@ -118,11 +119,14 @@ a permanent database of faces that can be expanded over time to cover any number
For example, see `/demo/facematch/facematch.js` and example database `/demo/facematch/faces.json`:
> download db with known faces using http/https
```js
const res = await fetch('/demo/facematch/faces.json');
db = (res && res.ok) ? await res.json() : [];
```
> download db with known faces from a local file
```js
const fs = require('fs');
const buffer = fs.readFileSync('/demo/facematch/faces.json');

@ -15,8 +15,10 @@
<br>
## Releases
- [Release Notes](https://github.com/vladmandic/human/releases)
- [NPM Link](https://www.npmjs.com/package/@vladmandic/human)
## Demos
*Check out [**Simple Live Demo**](https://vladmandic.github.io/human/demo/typescript/index.html), a fully annotated app that is a good starting point ([html](https://github.com/vladmandic/human/blob/main/demo/typescript/index.html)) ([code](https://github.com/vladmandic/human/blob/main/demo/typescript/index.ts))*

@ -1,6 +1,7 @@
# Input Processing
`Human` includes optional input pre-processing via `config.filter` configuration:
- using `Canvas` features
- using `WebGL` accelerated filters
- using `TFJS` accelerated enhancements
@ -41,4 +42,4 @@ Individual filters that can be set are:
If set, any input will be processed via histogram equalization to maximize color dynamic range to full spectrum
- `equalization`: boolean
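As a minimal sketch, a configuration fragment enabling histogram equalization could look like this; it is a plain object only, with the `equalization` field from this page and an assumed `enabled` toggle:

```javascript
// Config fragment: enable input pre-processing with histogram equalization
const config = {
  filter: {
    enabled: true,      // assumed toggle for input pre-processing
    equalization: true, // maximize color dynamic range via histogram equalization
  },
};
console.log(config.filter.equalization); // true
```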

@ -23,7 +23,7 @@ type ExternalCanvas = typeof env.Canvas | typeof globalThis.Canvas;
## Examples of Input processing in NodeJS
### 1. Using decode functionality from `tfjs-node`
All primary functionality of `Human` is available, but `human.draw` methods cannot be used as a `canvas` implementation is not present
@ -36,10 +36,12 @@ human.tf.dispose(tensor); // dispose input data, required when working with tens
```
*Note:* For all processing, the correct input tensor **shape** is `[1, height, width, 3]` with **dtype** `float32`
- 1 is the batch number and is a fixed value
- 3 is the number of channels, as used for RGB format
However, `Human` will automatically convert the input tensor to the correct shape:
- if the batch number is omitted
- if the input image is 4-channel, such as **RGBA** images with an alpha channel
- if the input tensor is in a different data type such as `int32`
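The shape-conversion rules above can be sketched without a `tfjs` dependency as a plain transform on the shape array; this illustrates the rules only and is not the library's actual code:

```javascript
// Normalize an input tensor shape toward [1, height, width, 3] per the rules above
function normalizeShape(shape) {
  // add the batch dimension if it was omitted
  let s = shape.length === 3 ? [1, ...shape] : [...shape];
  // a 4-channel (RGBA) input would be converted to 3-channel RGB
  if (s[3] === 4) s = [s[0], s[1], s[2], 3];
  return s;
}

console.log(normalizeShape([720, 1280, 4]));    // [1, 720, 1280, 3]
console.log(normalizeShape([1, 720, 1280, 3])); // already correct: [1, 720, 1280, 3]
```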

@ -23,10 +23,12 @@ Default models in Human library are:
`Human` includes default models but supports a number of additional models and variations of existing models
Additional models can be accessed via:
- [GitHub repository](https://github.com/vladmandic/human-models)
- [NPMjs package](https://www.npmjs.com/package/@vladmandic/human-models)
To use alternative models from a local host:
- download them from either *github* or *npmjs*, and then either:
  - set human configuration value `modelPath` for each model, or
  - set global configuration value `baseModelPath` to the location of the downloaded models
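The two options above can be sketched as configuration fragments; the local URL and the per-model file name are hypothetical examples, not required values:

```javascript
// Option 1: point all model loads at a common base path (hypothetical local URL)
const globalOverride = { baseModelPath: 'http://localhost:8000/models/' };

// Option 2: override the path for an individual model (hypothetical file name)
const perModelOverride = { face: { detector: { modelPath: 'blazeface.json' } } };

console.log(globalOverride.baseModelPath.endsWith('/models/')); // true
```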

@ -10,7 +10,8 @@ Result of `human.detect()` method is a single object that includes data for all
<br> <br>
Full documentation: Full documentation:
- [**Result Interface Specification**](https://vladmandic.github.io/human/typedoc/interfaces/Result.html) - [**Result Interface Specification**](https://vladmandic.github.io/human/typedoc/interfaces/Result.html)
- [**Result Interface Definition**](https://github.com/vladmandic/human/blob/main/src/result.ts) - [**Result Interface Definition**](https://github.com/vladmandic/human/blob/main/src/result.ts)

@ -18,7 +18,7 @@ There is only *ONE* method you need:
const result = await human.detect(input, config?) // run single detection
```
or
<!-- eslint-skip -->
```js ```js
@ -28,6 +28,7 @@ or
``` ```
Notes:
- [**Valid Inputs**](https://github.com/vladmandic/human/wiki/Inputs)
- [**Configuration Details**](https://github.com/vladmandic/human/wiki/Config)
@ -81,6 +82,7 @@ For details, see [segmentation documentation](https://vladmandic.github.io/human
Additional helper namespaces can be used to reduce the amount of manual code that needs to be written, but they do not have to be used
For details, see:
- [Draw methods documentation](https://github.com/vladmandic/human/wiki/Draw) | [Draw options](https://vladmandic.github.io/human/typedoc/interfaces/DrawOptions.html)
- [WebCam API specs](https://vladmandic.github.io/human/typedoc/classes/WebCam.html)
@ -151,7 +153,7 @@ human.tf; // instance of tfjs used
## Results Caching and Smoothing
- By default, `Human` uses frame change detection for results caching
- For best on-screen display results, it is recommended to use results smoothing
For details, see <https://github.com/vladmandic/human/wiki/Caching>

@ -1,2 +1,2 @@
**Human Library Wiki Pages**
3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition