diff --git a/Demos.md b/Demos.md
index 3da82e9..673e42d 100644
--- a/Demos.md
+++ b/Demos.md
@@ -9,7 +9,7 @@ All demos are included in `/demo` and come with individual documentation per-dem
- **Full** [[*Live*]](https://vladmandic.github.io/human/demo/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo): Main browser demo app that showcases all Human capabilities
- **Simple** [[*Live*]](https://vladmandic.github.io/human/demo/typescript/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/typescript): Simple WebCam processing demo in TypeScript
- **Embedded** [[*Live*]](https://vladmandic.github.io/human/demo/video/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/video/index.html): Even simpler demo with tiny code embedded in an HTML file
-- **Face Match** [[*Live*]](https://vladmandic.github.io/human/demo/facematch/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Extract faces from images, calculates face descriptors and simmilarities and matches them to known database
+- **Face Match** [[*Live*]](https://vladmandic.github.io/human/demo/facematch/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Extracts faces from images, calculates face descriptors and similarities, and matches them against a known database
- **Face ID** [[*Live*]](https://vladmandic.github.io/human/demo/faceid/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/faceid): Runs multiple checks to validate webcam input before performing face match against faces stored in IndexedDB
- **Multi-thread** [[*Live*]](https://vladmandic.github.io/human/demo/multithread/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread): Runs each Human module in a separate web worker for highest possible performance
- **NextJS** [[*Live*]](https://vladmandic.github.io/human-next/out/index.html) [[*Details*]](https://github.com/vladmandic/human-next): Use Human with TypeScript, NextJS and ReactJS
diff --git a/Embedding.md b/Embedding.md
index ae74802..994ffdb 100644
--- a/Embedding.md
+++ b/Embedding.md
@@ -38,8 +38,8 @@ const myConfig = {
const human = new Human(myConfig);
const firstResult = await human.detect(firstImage);
const secondResult = await human.detect(secondImage);
-const similarity = human.similarity(firstResult.face[0].embedding, secondResult.face[0].embedding);
-console.log(`faces are ${100 * similarity}% simmilar`);
+const similarity = human.match.similarity(firstResult.face[0].embedding, secondResult.face[0].embedding);
+console.log(`faces are ${100 * similarity}% similar`);
```
If the image or video frame has multiple faces and you want to match all of them, simply loop through all `results.face` entries
@@ -47,8 +47,8 @@ If the image or video frame have multiple faces and you want to match all of the
```js
for (let i = 0; i < currentResult.face.length; i++) {
const currentEmbedding = currentResult.face[i].embedding;
- const similarity = human.similarity(referenceEmbedding, currentEmbedding);
- console.log(`face ${i} is ${100 * similarity}% simmilar`);
+ const similarity = human.match.similarity(referenceEmbedding, currentEmbedding);
+ console.log(`face ${i} is ${100 * similarity}% similar`);
}
```
@@ -61,15 +61,6 @@ const myConfig = {
};
```
-Additional helper function is `human.enhance(face)` which returns an enhanced tensor
-of a face image that can be further visualized with
-
-```js
- const enhanced = human.enhance(face);
- const canvas = document.getElementById('orig');
- human.tf.browser.toPixels(enhanced.squeeze(), canvas);
-```
-
## Face Descriptor
@@ -91,8 +82,8 @@ Changing `order` can make similarity matching more or less sensitive (default or
For example, these will produce slightly different results:
```js
- const similarity2ndOrder = human.similarity(firstEmbedding, secondEmbedding, { order = 2 });
- const similarity3rdOrder = human.similarity(firstEmbedding, secondEmbedding, { order = 3 });
+ const similarity2ndOrder = human.match.similarity(firstEmbedding, secondEmbedding, { order: 2 });
+ const similarity3rdOrder = human.match.similarity(firstEmbedding, secondEmbedding, { order: 3 });
```
@@ -116,7 +107,7 @@ To find the best match, simply use `match` method while providing embedding desc
```js
const embeddingArray = db.map((record) => record.embedding); // build array with just embeddings
- const best = human.match(embedding, embeddingArray); // return is object: { index: number, similarity: number, distance: number }
+ const best = human.match.find(embedding, embeddingArray); // return is object: { index: number, similarity: number, distance: number }
  const label = db[best.index].label; // labels live in the original db records, not in the embeddings array
  console.log({ label, similarity: best.similarity });
```
diff --git a/Home.md b/Home.md
index cae925d..7158fb1 100644
--- a/Home.md
+++ b/Home.md
@@ -68,7 +68,7 @@
- **Full** [[*Live*]](https://vladmandic.github.io/human/demo/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo): Main browser demo app that showcases all Human capabilities
- **Simple** [[*Live*]](https://vladmandic.github.io/human/demo/typescript/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/typescript): Simple WebCam processing demo in TypeScript
- **Embedded** [[*Live*]](https://vladmandic.github.io/human/demo/video/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/video/index.html): Even simpler demo with tiny code embedded in an HTML file
-- **Face Match** [[*Live*]](https://vladmandic.github.io/human/demo/facematch/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Extract faces from images, calculates face descriptors and simmilarities and matches them to known database
+- **Face Match** [[*Live*]](https://vladmandic.github.io/human/demo/facematch/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Extracts faces from images, calculates face descriptors and similarities, and matches them against a known database
- **Face ID** [[*Live*]](https://vladmandic.github.io/human/demo/faceid/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/faceid): Runs multiple checks to validate webcam input before performing face match against faces stored in IndexedDB
- **Multi-thread** [[*Live*]](https://vladmandic.github.io/human/demo/multithread/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread): Runs each Human module in a separate web worker for highest possible performance
- **NextJS** [[*Live*]](https://vladmandic.github.io/human-next/out/index.html) [[*Details*]](https://github.com/vladmandic/human-next): Use Human with TypeScript, NextJS and ReactJS
diff --git a/Models.md b/Models.md
index c664256..ad45ba0 100644
--- a/Models.md
+++ b/Models.md
@@ -79,7 +79,7 @@ Switching model also automatically switches implementation used inside `Human` s
- `Age Detection`: SSR-Net Age IMDB
- `Face Embedding`: BecauseofAI MobileFace Embedding
-**Object detection** can be switched from `mb3-centernet` to `nanodet`
+**Object detection** can be switched from `centernet` to `nanodet`
**Hand detection** can be switched from `handdetect` to `handtrack`
diff --git a/Usage.md b/Usage.md
index f4484f9..6744819 100644
--- a/Usage.md
+++ b/Usage.md
@@ -8,49 +8,108 @@ All configuration is done in a single JSON object and all model weights are dyna
-## Detect
+## Basics
There is only *ONE* method you need:
```js
- const human = new Human(config?) // create instance of human
- const result = await human.detect(input) // run detection
+ const human = new Human(config?) // create instance of human
+ const result = await human.detect(input, config?) // run single detection
```
-or if you want to use promises
+or
```js
- human.detect(input, config?).then((result) => {
- // your code
- })
+ const human = new Human(config?) // create instance of human
+ await human.video(input, config?) // run detection loop on input video
+ // last known results are available in human.results
```
+Notes:
- [**Valid Inputs**](https://github.com/vladmandic/human/wiki/Inputs)
- [**Configuration Details**](https://github.com/vladmandic/human/wiki/Config)
-If no other methods are called, `Human` will
-1. select best detected engine
-2. use default configuration
-3. load required models
-4. perform warmup operations
-5. preprocess input
-6. perform detection
+Standard workflow:
+
+1. Select best detected engine
+2. Use default configuration
+3. Load & compile required models
+4. Perform warmup operations
+5. Preprocess input
+6. Perform detection
+
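+For example, a minimal sketch of this workflow using a single `detect` call (the image element ID is an assumption):
+
+```js
+  const human = new Human();                           // create instance with default configuration
+  const image = document.getElementById('my-image');   // hypothetical image element on the page
+  const result = await human.detect(image);            // engine selection, model loading, warmup and preprocessing happen implicitly
+  console.log(`detected ${result.face.length} faces`); // inspect detection results
+```
+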
-## Results Caching and Smoothing
+## Human Methods
-- By default, `Human` uses frame change detection for results caching
-- For on-screen display best results, it is recommended to use results smoothing
+### Main
+
+Main methods are `human.detect` and `human.video`, described in [Basics](#basics) above
-For details, see
+### Additional
-
+Methods used for **face recognition** and **face matching**.
+For details, see [embedding documentation](https://github.com/vladmandic/human/wiki/Embedding)
+
+```js
+ human.match.similarity(descriptor1, descriptor2) // runs similarity calculation between two provided embedding vectors
+ // vectors for source and target must be previously detected using
+ // face.description module
+ human.match.find(descriptor, descriptors) // finds best match for current face in a provided list of faces
+```
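+
+For example, a minimal sketch that matches the first detected face against a list of known records (`db`, with `label` and `embedding` fields, is a hypothetical array of previously saved descriptors):
+
+```js
+  const result = await human.detect(image);                // detect faces in input
+  const descriptor = result.face[0].embedding;             // descriptor of the first detected face
+  const embeddings = db.map((record) => record.embedding); // build array with just embeddings
+  const best = human.match.find(descriptor, embeddings);   // returns { index, similarity, distance }
+  console.log(`best match is ${db[best.index].label} with similarity ${best.similarity}`);
+```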
+
+Methods used for **body segmentation**, **background removal**, or **background replacement**.
+For details, see [segmentation documentation](https://vladmandic.github.io/human/typedoc/classes/Human.html#segmentation)
+
+```js
+ human.segmentation(input, config?) // runs body segmentation and returns processed image tensor
+ // which can be foreground-only, alpha-only or blended image
+```
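+
+For example, a minimal sketch that draws the segmentation output to a canvas (the element ID is an assumption, and the returned tensor is assumed to be in a shape accepted by `tf.browser.toPixels`):
+
+```js
+  const tensor = await human.segmentation(image);   // returns processed image tensor
+  const canvas = document.getElementById('output'); // hypothetical output canvas element
+  await human.tf.browser.toPixels(tensor, canvas);  // draw tensor to canvas
+  human.tf.dispose(tensor);                         // release tensor memory
+```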
+
+### Helpers
+
+Additional helper namespaces that reduce the amount of manual code that needs to be written; their use is optional
+For details, see:
+- [Draw methods documentation](https://github.com/vladmandic/human/wiki/Draw) | [Draw options](https://vladmandic.github.io/human/typedoc/interfaces/DrawOptions.html)
+- [WebCam API specs](https://vladmandic.github.io/human/typedoc/classes/WebCam.html)
+
+```js
+ human.webcam.* // helper methods to control webcam, main properties are `start`, `stop`, `play`, `pause`
+ human.draw.* // helper methods to draw detected results to canvas, main options are `options`, `canvas`, `all`
+```
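+
+For example, a minimal sketch of a detection loop combining both helpers (element IDs and the `element` start option are assumptions):
+
+```js
+  const video = document.getElementById('video');   // hypothetical video element
+  const canvas = document.getElementById('canvas'); // hypothetical output canvas
+  await human.webcam.start({ element: video });     // start webcam playback in the video element
+  async function loop() {
+    const result = await human.detect(video);       // run detection on the current frame
+    human.draw.canvas(video, canvas);               // copy current frame to the output canvas
+    human.draw.all(canvas, result);                 // draw all detected results on top
+    requestAnimationFrame(loop);                    // schedule next iteration
+  }
+  loop();
+```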
+
+### Implicit
+
+Methods that are typically called as part of the standard workflow and do not need to be called manually
+
+```js
+ human.validate(config?); // validate human configuration
+ human.init(config?); // initialize human and processing backend
+ human.load(config?); // load configured models
+ human.warmup(config?); // warms up human library for faster initial execution after loading
+  human.image(input);     // processes input without detection and returns canvas or tensor
+ human.models.* // namespace that serves current library ml models
+ // main options are `load`, `list`, `loaded`, `reset`, `stats`, `validate`
+```
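+
+For example, the implicit methods can be called explicitly to front-load initialization cost instead of paying it on the first `detect` call:
+
+```js
+  const human = new Human(myConfig); // `myConfig` is a hypothetical configuration object
+  human.validate(myConfig);          // validate configuration values
+  await human.init();                // initialize processing backend
+  await human.load();                // load configured models
+  await human.warmup();              // warm up models for faster first detection
+```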
+
+Utility methods that are typically not directly used except in advanced or troubleshooting cases
+
+```js
+ human.analyze(); // check for memory leaks
+ human.compare(); // compare two images for pixel similarity
+ human.now(); // utility wrapper for performance timestamp
+ human.profile(); // run function via profiler
+ human.reset(); // reset configuration
+ human.sleep(); // utility wrapper for sleep function
+ human.emit(); // internal event emitter
+```
## Human Properties
@@ -65,114 +124,30 @@ human.performance // access to current performance counters
human.state // describing current operation in progress
// progresses through: 'config', 'check', 'backend', 'load', 'run:', 'idle'
human.models // dynamically maintained list of loaded models
-human.env // detected platform capabilities
+human.env // detected platform environment and capabilities
human.events // container for events dispatched by human
Human.defaults // static property of Human class that contains default configuration
```
-
-
-## Human Methods
-
-### General
-
-General purpose methods exposed by `Human`
-
-```js
-human.load(config?) // explicitly call load method that loads configured models
-human.image(input, config?) // runs image processing without detection and returns canvas and tensor
-human.warmup(config?) // warms up human library for faster initial execution after loading
-human.next(result?) // returns time variant smoothened/interpolated result based on last known result
-```
-
-### Utility
-
-Utility methods exposed by `Human` that can be used in advanced cases but are typically not needed
-
-```js
-human.init() // explict backend initialization
-human.validate(config?) // validate configuration values
-human.reset() // reset current configuration to default values
-human.now() // returns platform-independent timestamp, used for performance measurements
-human.profile(input, config?) // runs detection with profiling enabled and returns information on top-20 kernels
-human.compare(input1, input2) // runs pixel-compare on two different inputs and returns score
- // internally used to detect frame-changes and cache validations
-```
-
-### TensorFlow
+## TensorFlow
`Human` internally uses `TensorFlow/JS` for all ML processing
-Access to interal instance of `tfjs` used by `human` is possible via:
+Access to the namespace of the internal instance of `tfjs` used by `human` is possible via:
```js
-human.tf // instance of tfjs used by human, can be embedded or externally loaded
-```
-### Face Recognition
-
-Additional functions used for face recognition:
-For details, see [embedding documentation](https://github.com/vladmandic/human/wiki/Embedding)
-
-```js
-human.similarity(descriptor1, descriptor2) // runs similarity calculation between two provided embedding vectors
- // vectors for source and target must be previously detected using
- // face.description module
-human.match(descriptor, descriptors) // finds best match for current face in a provided list of faces
-human.distance(descriptor1, descriptor2) // checks algorithmic distance between two descriptors
- // opposite of `similarity`
-human.enhance(face) // returns enhanced tensor of a previously detected face
- // that can be used for visualizations
-```
-
-### Input Segmentation and Backgroun Removal or Replacement
-
-`Human` library can attempt to detect outlines of people in provided input and either remove background from input
-or replace it with a user-provided background image
-
-For details on parameters and return values see [API Documentation](https://vladmandic.github.io/human/typedoc/classes/Human.html#segmentation)
-
-```js
- const input = document.getElementById('my-canvas);
- const background = document.getElementById('my-background);
- human.segmentation(input, background);
-```
-
-### Draw Functions
-
-Additional helper functions inside `human.draw`:
-
-```js
- human.draw.all(canvas, result) // interpolates results for smoother operations
- // and triggers each individual draw operation
- human.draw.person(canvas, result) // triggers unified person analysis and draws bounding box
- human.draw.canvas(inCanvas, outCanvas) // simply copies one canvas to another,
- // can be used to draw results.canvas to user canvas on page
- human.draw.face(canvas, results.face) // draw face detection results to canvas
- human.draw.body(canvas, results.body) // draw body detection results to canvas
- human.draw.hand(canvas, result.hand) // draw hand detection results to canvas
- human.draw.object(canvas, result.object) // draw object detection results to canvas
- human.draw.gesture(canvas, result.gesture) // draw detected gesture results to canvas
-```
-
-Style of drawing is configurable via `human.draw.options` object:
-
-```js
- color: 'rgba(173, 216, 230, 0.3)', // 'lightblue' with light alpha channel
- labelColor: 'rgba(173, 216, 230, 1)', // 'lightblue' with dark alpha channel
- shadowColor: 'black', // draw shadows underneath labels, set to blank to disable
- font: 'small-caps 16px "Segoe UI"', // font used for labels
- lineHeight: 20, // spacing between lines for multi-line labels
- lineWidth: 6, // line width of drawn polygons
- drawPoints: true, // draw detected points in all objects
- pointSize: 2, // size of points
- drawLabels: true, // draw labels with detection results
- drawBoxes: true, // draw boxes around detected faces
- roundRect: 8, // should boxes have round corners and rounding value
- drawGestures: true, // should draw gestures in top-left part of the canvas
- drawGaze: true, // should draw gaze arrows
- drawPolygons: true, // draw polygons such as body and face mesh
- fillPolygons: false, // fill polygons in face mesh
- useDepth: true, // use z-axis value when available to determine color shade
- useCurves: false, // draw polygons and boxes using smooth curves instead of lines
+human.tf // instance of tfjs used by human, can be embedded or externally loaded
```
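+
+For example, the embedded instance can be used directly for auxiliary tensor operations:
+
+```js
+  console.log(human.tf.version['tfjs-core']); // version of the embedded tfjs instance
+  const tensor = human.tf.zeros([2, 2]);      // create a tensor via the embedded instance
+  tensor.print();                             // print tensor values
+  human.tf.dispose(tensor);                   // release tensor memory
+```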
+
+## Results Caching and Smoothing
+
+- By default, `Human` uses frame change detection for results caching
+- For best on-screen display results, it is recommended to use results smoothing
+
+For details, see
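+
+For example, a minimal sketch of drawing smoothed results (assuming the `human.next` method, which returns a time-variant interpolated result based on the last known result):
+
+```js
+  const interpolated = human.next();    // interpolate last known result
+  human.draw.all(canvas, interpolated); // draw smoothed results to a previously prepared canvas
+```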