diff --git a/Configuration.md b/Configuration.md
new file mode 100644
index 0000000..aca8dfa
--- /dev/null
+++ b/Configuration.md
@@ -0,0 +1,152 @@
+## Configuration
+
+Detailed configuration options are explained below, but they are best seen in the menus present in the `demo` application:
+
+![Menus](assets/screenshot-menu.png)
+
+Below is the output of the `human.defaults` object
+Any property can be overridden by passing a user object during `human.detect()`
+Note that the user object and the default configuration are merged using deep-merge, so you do not need to redefine the entire configuration
+
+All configuration details can be changed in real-time!
+
+
+```js
+config = {
+  backend: 'webgl', // select tfjs backend to use
+  console: true, // enable debugging output to console
+  async: true, // execute enabled models in parallel
+    // this disables per-model performance data but slightly increases performance
+    // cannot be used if profiling is enabled
+  profile: false, // enable tfjs profiling
+    // this has a significant performance impact, only enable for debugging purposes
+    // currently only implemented for age, gender and emotion models
+  deallocate: false, // aggressively deallocate gpu memory after each usage
+    // only valid for webgl backend and only during first call, cannot be changed unless library is reloaded
+    // this has a significant performance impact, only enable on low-memory devices
+  scoped: false, // enable scoped runs
+    // some models *may* have memory leaks, this wraps everything in a local scope at a cost of performance
+    // typically not needed
+  videoOptimized: true, // perform additional optimizations when input is video, must be disabled for images
+  filter: { // note: image filters are only available in Browser environments and not in NodeJS as they require WebGL for processing
+    enabled: true, // enable image pre-processing filters
+    return: true, // return processed canvas imagedata in result
+    width: 0, // resize input width
+    height: 0, // resize input height
+      // useful on low-performance devices to reduce the size of the processed input
+      // if both width and height are set to 0, there is no resizing
+      // if just one is set, the second one is scaled automatically
+      // if both are set, values are used as-is
+    brightness: 0, // range: -1 (darken) to 1 (lighten)
+    contrast: 0, // range: -1 (reduce contrast) to 1 (increase contrast)
+    sharpness: 0, // range: 0 (no sharpening) to 1 (maximum sharpening)
+    blur: 0, // range: 0 (no blur) to N (blur radius in pixels)
+    saturation: 0, // range: -1 (reduce saturation) to 1 (increase saturation)
+    hue: 0, // range: 0 (no change) to 360 (hue rotation in degrees)
+    negative: false, // image negative
+    sepia: false, // image sepia colors
+    vintage: false, // image vintage colors
+    kodachrome: false, // image kodachrome colors
+    technicolor: false, // image technicolor colors
+    polaroid: false, // image polaroid camera effect
+    pixelate: 0, // range: 0 (no pixelate) to N (number of pixels to pixelate)
+  },
+  face: {
+    enabled: true, // controls if the specified module is enabled
+      // face.enabled is required for all face models: detector, mesh, iris, age, gender, emotion
+      // note: a module is not loaded until it is required
+    detector: {
+      modelPath: '../models/blazeface/back/model.json', // can be 'front' or 'back'
+        // 'front' is optimized for large faces such as a front-facing camera and 'back' is optimized for distant faces.
+      inputSize: 256, // fixed value: 128 for 'front' and 256 for 'back'
+      maxFaces: 10, // maximum number of faces detected in the input, should be set to the minimum number for performance
+      skipFrames: 10, // how many frames to go without re-running the face bounding box detector
+        // only used for video inputs, ignored for static inputs
+        // if the model is running at 25 FPS, we can re-use the existing bounding box for updated face mesh analysis
+        // as the face probably hasn't moved much in that short time (10 * 1/25 = 0.4 sec)
+      minConfidence: 0.5, // threshold for discarding a prediction
+      iouThreshold: 0.3, // threshold for deciding whether boxes overlap too much in non-maximum suppression
+      scoreThreshold: 0.7, // threshold for deciding when to remove boxes based on score in non-maximum suppression
+    },
+    mesh: {
+      enabled: true,
+      modelPath: '../models/facemesh/model.json',
+      inputSize: 192, // fixed value
+    },
+    iris: {
+      enabled: true,
+      modelPath: '../models/iris/model.json',
+      enlargeFactor: 2.3, // empiric tuning
+      inputSize: 64, // fixed value
+    },
+    age: {
+      enabled: true,
+      modelPath: '../models/ssrnet-age/imdb/model.json', // can be 'imdb' or 'wiki'
+        // which determines the training set for the model
+      inputSize: 64, // fixed value
+      skipFrames: 10, // how many frames to go without re-running the detector, only used for video inputs
+    },
+    gender: {
+      enabled: true,
+      minConfidence: 0.8, // threshold for discarding a prediction
+      modelPath: '../models/ssrnet-gender/imdb/model.json',
+    },
+    emotion: {
+      enabled: true,
+      inputSize: 64, // fixed value
+      minConfidence: 0.5, // threshold for discarding a prediction
+      skipFrames: 10, // how many frames to go without re-running the detector, only used for video inputs
+      modelPath: '../models/emotion/model.json',
+    },
+  },
+  body: {
+    enabled: true,
+    modelPath: '../models/posenet/model.json',
+    inputResolution: 257, // fixed value
+    outputStride: 16, // fixed value
+    maxDetections: 10, // maximum number of people detected in the input, should be set to the minimum number for performance
+    scoreThreshold: 0.7, // threshold for deciding when to remove boxes based on score in non-maximum suppression
+    nmsRadius: 20, // radius for deciding points are too close in non-maximum suppression
+  },
+  hand: {
+    enabled: true,
+    inputSize: 256, // fixed value
+    skipFrames: 10, // how many frames to go without re-running the hand bounding box detector
+      // only used for video inputs
+      // if the model is running at 25 FPS, we can re-use the existing bounding box for updated hand skeleton analysis
+      // as the hand probably hasn't moved much in that short time (10 * 1/25 = 0.4 sec)
+    minConfidence: 0.5, // threshold for discarding a prediction
+    iouThreshold: 0.3, // threshold for deciding whether boxes overlap too much in non-maximum suppression
+    scoreThreshold: 0.7, // threshold for deciding when to remove boxes based on score in non-maximum suppression
+    enlargeFactor: 1.65, // empiric tuning as skeleton prediction prefers a hand box with some whitespace
+    maxHands: 10, // maximum number of hands detected in the input, should be set to the minimum number for performance
+    detector: {
+      modelPath: '../models/handdetect/model.json',
+    },
+    skeleton: {
+      modelPath: '../models/handskeleton/model.json',
+    },
+  },
+  gesture: {
+    enabled: true, // enable simple gesture recognition
+      // takes processed data and detects simple gestures based on geometry
+      // easily expandable via code, see `src/gesture.js`
+  },
+};
+```
+
+Any user configuration and the default configuration are merged using deep-merge, so you do not need to redefine the entire configuration
+The configuration object is large, but typically you only need to modify a few values:
+
+- `enabled`: Choose which models to use
+- `modelPath`: Update as needed to reflect your application's relative path
+
+For example:
+
+```js
+const myConfig = {
+  backend: 'wasm',
+  filter: { enabled: false },
+};
+const result = await human.detect(image, myConfig);
+```
diff --git a/Demos.md b/Demos.md
new file mode 100644
index 0000000..ffbceea
--- /dev/null
+++ b/Demos.md
@@ -0,0 +1,45 @@
+## Demos
+
+Demos are included in `/demo`:
+
+**Browser**:
+- `index.html`: Full demo using Browser with ESM module, includes selectable backends and webworkers
+  It loads `dist/demo-browser-index.js`, which is built from sources in `demo`, starting with `demo/browser`
+  Alternatively, you can load `demo/browser.js` directly
+
+*You can run the browser demo live from git pages, by serving the `demo` folder from your own web server, or by using the
+included micro http2 server with source file monitoring and dynamic rebuild*
+
+### Dev Server
+
+To start the micro http2 dev server, you must provide your own SSL certificates (production or self-signed)
+and place them in `dev-server.js`
+
+Once SSL certificates have been provided, simply run
+```shell
+npm run dev
+```
+On first start, it will install all development dependencies required to rebuild the `Human` library
+
+```log
+> @vladmandic/human@0.7.5 dev /home/vlado/dev/human
+> npm install && node --trace-warnings --unhandled-rejections=strict --trace-uncaught --no-deprecation dev-server.js
+
+audited 321 packages in 2.506s
+found 0 vulnerabilities
+
+2020-11-06 16:19:09 INFO: @vladmandic/human version 0.7.5
+2020-11-06 16:19:09 INFO: User: vlado Platform: linux Arch: x64 Node: v15.0.1
+2020-11-06 16:19:09 STATE: HTTP2 server listening: 8000
+2020-11-06 16:19:09 STATE: Monitoring: [ 'package.json', 'config.js', 'demo', 'src', [length]: 4 ]
+2020-11-06 16:19:16 DATA: GET/2.0 200 text/html 4866 / ::ffff:192.168.0.200
+2020-11-06 16:19:16 DATA: GET/2.0 200 text/javascript 1708910 /dist/demo-browser-index.js ::ffff:192.168.0.200
+```
+
+*If you want to test the `wasm` or `webgpu` backends, enable their loading in `index.html`*
+
+**NodeJS**:
+- `node.js`: Demo using NodeJS with CommonJS module
+  This is a very simple demo: although the `Human` library is compatible with NodeJS execution
+  and is able to load images and models from the local filesystem,
+
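The deep-merge behavior described in the configuration section above can be illustrated with a short self-contained sketch. The `deepMerge` helper below is hypothetical and for illustration only — the library performs the merge internally, so you never call such a helper yourself — but it shows why passing a partial user object is enough:

```javascript
// Illustrative sketch of deep-merge semantics (NOT the library's own code):
// a partial user config is merged into the defaults, so any key you do not
// specify keeps its default value.
function deepMerge(defaults, overrides) {
  const merged = { ...defaults };
  for (const key of Object.keys(overrides)) {
    const value = overrides[key];
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      merged[key] = deepMerge(defaults[key] || {}, value); // recurse into nested objects
    } else {
      merged[key] = value; // primitives and arrays replace the default outright
    }
  }
  return merged;
}

// a tiny subset of the defaults shown above
const defaults = { backend: 'webgl', filter: { enabled: true, brightness: 0 } };
// the user overrides only what differs, as in the `myConfig` example
const merged = deepMerge(defaults, { backend: 'wasm', filter: { enabled: false } });
console.log(merged); // { backend: 'wasm', filter: { enabled: false, brightness: 0 } }
```

Note how `filter.brightness` survives even though the user object redefines `filter`: nested objects are merged key-by-key rather than replaced wholesale.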
diff --git a/Install.md b/Install.md
new file mode 100644
index 0000000..9e6a655
--- /dev/null
+++ b/Install.md
@@ -0,0 +1,126 @@
+## Installation
+
+**Important**
+*The packaged (IIFE and ESM) version of `Human` includes the `TensorFlow/JS (TFJS) 2.7.0` library, which can be accessed via `human.tf`*
+*You should NOT manually load another instance of `tfjs`, but if you do, be aware of possible version conflicts*
+
+There are multiple ways to use the `Human` library; pick the one that suits you:
+
+### Included
+
+- `dist/human.js`: IIFE format bundle with TFJS for Browsers
+- `dist/human.esm.js`: ESM format bundle with TFJS for Browsers
+- `dist/human.esm-nobundle.js`: ESM format bundle without TFJS for Browsers
+- `dist/human.node.js`: CommonJS format bundle with TFJS for NodeJS
+- `dist/human.node-nobundle.js`: CommonJS format bundle without TFJS for NodeJS
+
+All versions include a `sourcemap` *(.map)* and a build `manifest` *(.json)*
+While `Human` is in pre-release mode, all bundles are non-minified
+
+Defaults:
+```json
+  {
+    "main": "dist/human.node.js",
+    "module": "dist/human.esm.js",
+    "browser": "dist/human.esm.js"
+  }
+```
+
+### 1. [IIFE](https://developer.mozilla.org/en-US/docs/Glossary/IIFE) script
+
+*Simplest way for usage within a Browser*
+
+Simply download `dist/human.js`, include it in your `HTML` file, and it's ready to use.
+
+```html