mirror of https://github.com/vladmandic/human

commit 93e58e16b5 (parent b6432fc419): add draw wiki

@ -0,0 +1,58 @@
# Draw Methods

The `Human` library includes built-in Canvas draw methods:

- [human.draw.options](https://vladmandic.github.io/human/typedoc/interfaces/DrawOptions.html)
  currently set draw options; can be overridden in each method
  used to set line/text colors, line width, font type, label templates, etc.
- [human.draw.canvas](https://vladmandic.github.io/human/typedoc/functions/draw.canvas.html)
  simply draws the input (video, canvas, image, etc.) to the output canvas
  as it was after processing (see [filters](https://vladmandic.github.io/human/typedoc/interfaces/FilterConfig.html))
- [human.draw.all](https://vladmandic.github.io/human/typedoc/functions/draw.all.html)
  meta function that executes draw for `face`, `hand`, `body`, `object` and `gestures` for all detected features
- [human.draw.person](https://vladmandic.github.io/human/typedoc/functions/draw.all.html)
  meta function that executes draw for `face`, `hand`, `body`, `object` and `gestures` for all detected features belonging to a specific person
- `human.draw.[face|hand|body|object|gestures]`
  draws **points**, **boxes**, **polygons** and **labels** for each detected feature as defined in `draw.options`
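
For example, an individual draw method can be called with per-call option overrides. This is a minimal sketch only: the element id, input variable and option values are illustrative, and the option names are assumed to follow the `DrawOptions` interface linked above:

```js
// minimal sketch: draw only face results, overriding a few draw options for this call only
const outputCanvas = document.getElementById('canvas-id'); // illustrative element id
const result = await human.detect(inputImage);             // any previously obtained result works
human.draw.face(outputCanvas, result.face, { drawBoxes: true, drawPolygons: false, drawLabels: true });
```
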
## Labels

If `options.drawLabels` is enabled (default):

- Labels for each feature are parsed using templates
- Label templates can use built-in values in `[]` or be provided as any string literal
- Labels for each feature are positioned relative to the top-left corner of that feature's detection box (face, hand, body, object, etc.)
- Draw methods automatically handle multi-line labels and vertical spacing
- Built-in label template values that are not matched are removed from the label
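
As a quick illustration, templates can also be overridden globally rather than per call. This is a minimal sketch: the template strings are made up, and it assumes `human.draw.options` is mutable as the currently active defaults:

```js
// minimal sketch: override label templates globally via human.draw.options
human.draw.options.faceLabels = 'face [score]% [emotions]'; // built-in values in [] are substituted per detection
human.draw.options.gestureLabels = '[who]: [what]';         // any value that cannot be matched is dropped from the label
```
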
## Default Label Templates

```js
faceLabels: `face
confidence: [score]%
[gender] [genderScore]%
age: [age] years
distance: [distance]cm
real: [real]%
live: [live]%
[emotions]
roll: [roll]° yaw:[yaw]° pitch:[pitch]°
gaze: [gaze]°`,
bodyLabels: 'body [score]%',
bodyPartLabels: '[label] [score]%',
objectLabels: '[label] [score]%',
handLabels: '[label] [score]%',
fingerLabels: '[label]',
gestureLabels: '[where] [who]: [what]',
```

## Example

Example of custom labels:

```ts
import * as Human from '@vladmandic/human';
...
const drawOptions: Partial<Human.DrawOptions> = {
  bodyLabels: `person confidence is [score]% and has ${human.result?.body?.[0]?.keypoints.length || 'no'} keypoints`,
};
human.draw.all(dom.canvas, human.result, drawOptions);
```

Home.md

@ -12,7 +12,8 @@

**Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis,**
**Age & Gender & Emotion Prediction, Gaze Tracking, Gesture Recognition, Body Segmentation**

JavaScript module using TensorFlow/JS Machine Learning library

<br>

## Highlights

- Compatible with most server-side and client-side environments and frameworks

@ -22,6 +23,7 @@ JavaScript module using TensorFlow/JS Machine Learning library

- Detection of frame changes to trigger only required models for improved performance
- Intelligent temporal interpolation to provide smooth results regardless of processing performance
- Simple unified API
- Built-in Image, Video and WebCam handling

<br>

@ -94,7 +96,8 @@ JavaScript module using TensorFlow/JS Machine Learning library

- [**Code Repository**](https://github.com/vladmandic/human)
- [**NPM Package**](https://www.npmjs.com/package/@vladmandic/human)
- [**Issues Tracker**](https://github.com/vladmandic/human/issues)
- [**TypeDoc API Specification - Main class**](https://vladmandic.github.io/human/typedoc/classes/Human.html)
- [**TypeDoc API Specification - Full**](https://vladmandic.github.io/human/typedoc/)
- [**Change Log**](https://github.com/vladmandic/human/blob/main/CHANGELOG.md)
- [**Current To-do List**](https://github.com/vladmandic/human/blob/main/TODO.md)

@ -105,6 +108,7 @@ JavaScript module using TensorFlow/JS Machine Learning library

- [**Usage & Functions**](https://github.com/vladmandic/human/wiki/Usage)
- [**Configuration Details**](https://github.com/vladmandic/human/wiki/Config)
- [**Result Details**](https://github.com/vladmandic/human/wiki/Result)
- [**Customizing Draw Methods**](https://github.com/vladmandic/human/wiki/Draw)
- [**Caching & Smoothing**](https://github.com/vladmandic/human/wiki/Caching)
- [**Input Processing**](https://github.com/vladmandic/human/wiki/Image)
- [**Face Recognition & Face Description**](https://github.com/vladmandic/human/wiki/Embedding)

@ -237,7 +241,7 @@ Additionally, `HTMLVideoElement`, `HTMLMediaElement` can be a standard `<video>`
e.g.: **HLS** (*HTTP Live Streaming*) using `hls.js` or **DASH** (*Dynamic Adaptive Streaming over HTTP*) using `dash.js` (see the sketch below)
- **WebRTC** media track using built-in support
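
The HLS case could look like this minimal sketch; it assumes `hls.js` is available, and the element id and stream URL are placeholders:

```js
// minimal sketch: attach an HLS stream to a standard <video> element and use it as detection input
import Hls from 'hls.js'; // third-party streaming library, not part of Human

const video = document.getElementById('video-id'); // placeholder element id
if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource('https://example.com/stream.m3u8'); // placeholder stream url
  hls.attachMedia(video);
}
await video.play();
human.video(video); // start detection loop on the streaming video element
```
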

<br>
<br><hr><br>

## Code Examples

@ -346,15 +350,54 @@ const outputCanvas = document.getElementById('canvas-id');
async function drawResults() {
  const interpolated = human.next(); // get smoothed result using last-known results
  human.draw.all(outputCanvas, interpolated); // draw the frame
  requestAnimationFrame(drawResults); // run draw loop
}

human.video(inputVideo); // start detection loop which continuously updates results
drawResults(); // start draw loop
```

or using built-in webcam helper methods that take care of video handling completely:

```js
const human = new Human(); // create instance of Human
const outputCanvas = document.getElementById('canvas-id');

async function drawResults() {
  const interpolated = human.next(); // get smoothed result using last-known results
  human.draw.canvas(human.webcam.element, outputCanvas); // draw current webcam frame
  human.draw.all(outputCanvas, interpolated); // draw the frame detection results
  requestAnimationFrame(drawResults); // run draw loop
}

await human.webcam.start({ crop: true });
human.video(human.webcam.element); // start detection loop which continuously updates results
drawResults(); // start draw loop
```
And for even better results, you can run detection in a separate web worker thread
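
A rough sketch of how that could look is below; the worker file name, message shapes and bundling setup are assumptions, not the library's official worker demo:

```js
// main thread: minimal sketch of off-loading detection to a worker
const human = new Human(); // main-thread instance used only for drawing
const outputCanvas = document.getElementById('canvas-id'); // illustrative element id
const worker = new Worker('human-worker.js', { type: 'module' }); // hypothetical worker script

worker.onmessage = (msg) => {
  human.draw.all(outputCanvas, msg.data); // draw results posted back by the worker
  requestAnimationFrame(detectFrame); // request the next frame once results arrive
};

async function detectFrame() {
  const bitmap = await createImageBitmap(inputVideo); // grab current video frame
  worker.postMessage(bitmap, [bitmap]); // transfer the frame to the worker without copying
}
detectFrame();
```

and the worker side could look like:

```js
// human-worker.js: hypothetical worker script (bare import assumes a bundler or import map)
import { Human } from '@vladmandic/human';

const human = new Human();
onmessage = async (msg) => {
  const result = await human.detect(msg.data); // ImageBitmap is an accepted input type
  postMessage(result); // non-cloneable fields (e.g. tensors) may need to be stripped before posting
};
```
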

<br><hr><br>

## TypeDefs

`Human` is written using TypeScript strong typing and ships with full **TypeDefs** for all classes defined by the library bundled in `types/human.d.ts` and enabled by default
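
For example, a minimal sketch of typed usage; the exported type names are assumed from the library's bundled typedefs and the input variable is illustrative:

```ts
import { Human } from '@vladmandic/human';
import type { Config, Result } from '@vladmandic/human';

const config: Partial<Config> = { debug: false }; // all configuration properties are optional
const human = new Human(config);
const result: Result = await human.detect(inputImage); // result object is fully typed
for (const face of result.face) console.log(face.age, face.gender); // face entries are typed as FaceResult
```
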

*Note*: This does not include embedded `tfjs`
If you want to use embedded `tfjs` inside `Human` (`human.tf` namespace) and still have full **typedefs**, add this code:

```ts
import type * as tfjs from '@vladmandic/human/dist/tfjs.esm';
...
const tf = human.tf as typeof tfjs;
```

This is not enabled by default as `Human` does not ship with full **TFJS TypeDefs** due to size considerations
Enabling `tfjs` TypeDefs as above creates additional project dependencies (dev-only, as only types are required) as defined in `@vladmandic/human/dist/tfjs.esm.d.ts`:

`@tensorflow/tfjs-core`, `@tensorflow/tfjs-converter`, `@tensorflow/tfjs-backend-wasm`, `@tensorflow/tfjs-backend-webgl`

<br><hr><br>

## Default models

@ -384,9 +427,9 @@ For more info, see [**Configuration Details**](https://github.com/vladmandic/hum

<br><hr><br>

`Human` library is written in [TypeScript](https://www.typescriptlang.org/docs/handbook/intro.html) **4.8** using [TensorFlow/JS](https://www.tensorflow.org/js/) **4.0** and conforming to the latest `JavaScript` [ECMAScript version 2022](https://262.ecma-international.org/) standard

Build target for distributables is `JavaScript` [ECMAScript version 2018](https://262.ecma-international.org/9.0/)

<br>
