All configuration is done in a single JSON object and all model weights are dynamically loaded.
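A minimal sketch of passing such a configuration object to `detect` (the property names shown are illustrative and depend on the library version; see the configuration documentation for the full list):

```js
// illustrative partial configuration object; assumption: unlisted properties keep library defaults
const myConfig = {
  backend: 'webgl',         // which tfjs backend to use
  face: { enabled: true },  // enable face detection module
  body: { enabled: false }, // disable body detection module
};
const result = await human.detect(image, myConfig);
```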
<br>
## Detect
There is only *ONE* method you need:
```js
const result = await human.detect(image, config?)
```

or if you want to use promises:

```js
human.detect(image, config?).then((result) => {
  // ... process results
})
```
## Results Smoothing
If you're processing video input, you may want to interpolate results for smoother output.
After calling the `detect` method, simply call the `next` method whenever you need a new interpolated frame.

The `result` parameter to the `next` method is optional; if not provided, it will use the last known result.

Example that performs a single detection and then draws a new interpolated result at 50 frames per second:

```js
const result = await human.detect(image, config?)
setInterval(() => {
  const interpolated = human.next(result);
  human.draw.all(canvas, interpolated);
}, 1000 / 50);
```
## Extra Properties and Methods
`Human` library exposes several additional objects and methods:
```js
human.version // string containing version of human library
human.tf // instance of tfjs used by human
human.config // access to configuration object, normally set as parameter to detect()
human.result // access to last known result object, normally returned via call to detect()
human.performance // access to last known performance counters
human.state // <string> describing current operation in progress
// progresses through: 'config', 'check', 'backend', 'load', 'run:<model>', 'idle'
human.sysinfo // object containing current client platform and agent
human.image(image, config?) // runs image processing without detection and returns canvas
human.warmup(config, image?) // warms up human library for faster initial execution after loading
// if image is not provided, it will generate internal sample
```
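For example, a few of these properties can be inspected around a `detect` call; a minimal sketch (the `image` input here stands for whatever you normally pass to `detect`):

```js
console.log(human.version);     // version string of the loaded library
const result = await human.detect(image);
console.log(human.state);       // last operation state, ends at 'idle' once detection completes
console.log(human.performance); // performance counters collected during the last detect() call
```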
## Face Recognition
Additional functions are available for face recognition; for details, see the [embedding documentation](https://github.com/vladmandic/human/wiki/Embedding)

Internal list of modules and objects used by the current instance of `Human`:

```js
human.classes // dynamically maintained list of classes that perform detection on each model
```
## Draw Functions
Additional helper functions inside `human.draw`:

```js
human.draw.all(canvas, result) // draws all detected results onto provided canvas
```
Style of drawing is configurable via the `human.draw.options` object:

```js
drawBoxes: true, // draw boxes around detected faces
roundRect: 8, // should boxes have round corners and rounding value
drawPolygons: true, // draw polygons such as body and face mesh
fillPolygons: false, // fill polygons in face mesh
useDepth: true, // use z-axis value when available to determine color shade
useCurves: false, // draw polygons and boxes using smooth curves instead of lines
bufferedOutput: true, // experimental: buffer and interpolate results between frames
```
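Individual values can be overridden on that object before calling any of the draw helpers; a minimal sketch (option values here are illustrative):

```js
// adjust drawing style for subsequent draw calls
human.draw.options.drawPolygons = true;  // outline face and body mesh
human.draw.options.fillPolygons = false; // but do not fill mesh polygons
human.draw.all(canvas, result);          // helpers pick up the modified options
```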
<br>
## Example Video Processing
Example of a simple app that uses Human to process video input and draw the output on screen using the internal draw helper functions.
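A minimal sketch of such a loop, assuming a playing `<video>` element and an output `<canvas>` element (element ids are illustrative):

```js
// grab input and output elements
const video = document.getElementById('video');   // assumes <video id="video"> with an active stream
const canvas = document.getElementById('canvas'); // assumes <canvas id="canvas"> sized to match the video

async function detectVideo() {
  const result = await human.detect(video);                                     // run detection on the current frame
  canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height); // copy current frame to canvas
  human.draw.all(canvas, result);                                               // overlay detected results using draw helpers
  requestAnimationFrame(detectVideo);                                           // schedule processing of the next frame
}

detectVideo();
```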
<br>
## Example for NodeJS
Note that when using the `Human` library in `NodeJS`, you must load and parse the image *before* you pass it for detection and dispose of it afterwards.
Input format is `Tensor4D[1, width, height, 3]` of type `float32`.
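A minimal sketch of that flow, assuming the `tfjs-node` backend (which provides `tf.node.decodeImage`); the helper name and file path are illustrative:

```js
const fs = require('fs');

async function detectFile(path) {
  const buffer = fs.readFileSync(path);                 // raw image bytes from disk
  const decoded = human.tf.node.decodeImage(buffer, 3); // decode to an RGB image tensor
  const casted = decoded.toFloat();                     // cast to float32 as expected by detect
  const tensor = casted.expandDims(0);                  // add batch dimension to get a 4D tensor
  const result = await human.detect(tensor);            // run detection on the prepared tensor
  human.tf.dispose([decoded, casted, tensor]);          // dispose intermediate tensors afterwards
  return result;
}

detectFile('sample.jpg').then((result) => console.log(result));
```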