breaking change: convert to object class

pull/293/head
Vladimir Mandic 2020-10-19 11:03:48 -04:00
parent 827a04e2d0
commit 1b7ab8bdcf
6 changed files with 288 additions and 265 deletions

View File

@@ -70,23 +70,46 @@ Simply download `dist/human.js`, include it in your `HTML` file & it's ready to
<script src="dist/human.js"><script>
```
IIFE script auto-registers global namespace `human` within global `Window` object
IIFE script auto-registers global namespace `Human` within global `Window` object
You can use it to create an instance of the `Human` library:
```js
const human = new Human();
```
This way you can also use the `Human` library from an embedded `<script>` tag within your `html` page for an all-in-one approach
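For example, a minimal all-in-one sketch (the `video` element id and the relative `dist/human.js` path are placeholders for your own page):
```html
<script src="dist/human.js"></script>
<script>
  // `Human` class is registered globally by the IIFE script above
  const human = new Human();
  // run detection on any image or video element already present on the page
  const input = document.getElementById('video');
  human.detect(input).then((result) => console.log(result));
</script>
```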
### 2. [ESM](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import) module
*Recommended for usage within `Browser`*
#### 2.1 With Bundler
#### **2.1 Using Script Module**
You could use the same syntax within your main `JS` file if it's imported with `<script type="module">`
If you're using a bundler *(such as rollup, webpack, esbuild)* to package your client application, you can import the ESM version of the `Human` library, which supports full tree shaking
```html
<script src="./index.js" type="module">
```
and then in your `index.js`
```js
import Human from 'dist/human.esm.js'; // for direct import must use path to module, not package name
const human = new Human();
```
#### **2.2 With Bundler**
If you're using a bundler *(such as rollup, webpack, parcel, browserify, esbuild)* to package your client application,
you can import the ESM version of the `Human` library, which supports full tree shaking
Install with:
```shell
npm install @vladmandic/human
```
```js
import human from '@vladmandic/human'; // points to @vladmandic/human/dist/human.esm.js
import Human from '@vladmandic/human'; // points to @vladmandic/human/dist/human.esm.js
// you can also force-load specific version
// for example: `@vladmandic/human/dist/human.esm.js`
const human = new Human();
```
Or if you prefer to package your own version of `tfjs`, you can use the `nobundle` version
@@ -97,20 +120,8 @@ Install with:
```
```js
import tf from '@tensorflow/tfjs'
import human from '@vladmandic/human/dist/human.esm-nobundle.js'; // same functionality as default import, but without tfjs bundled
```
#### 2.2 Using Script Module
You could use the same syntax within your main `JS` file if it's imported with `<script type="module">`
```html
<script src="./index.js" type="module">
```
and then in your `index.js`
```js
import * as tf from 'https://cdnjs.cloudflare.com/ajax/libs/tensorflow/2.6.0/tf.es2017.min.js'; // load tfjs directly from CDN link
import human from 'dist/human.esm.js'; // for direct import must use path to module, not package name
import Human from '@vladmandic/human/dist/human.esm-nobundle.js'; // same functionality as default import, but without tfjs bundled
const human = new Human();
```
### 3. [NPM](https://www.npmjs.com/) module
@@ -127,7 +138,8 @@ Install with:
And then use with:
```js
const tf = require('@tensorflow/tfjs-node'); // can also use '@tensorflow/tfjs-node-gpu' if you have environment with CUDA extensions
const human = require('@vladmandic/human'); // points to @vladmandic/human/dist/human.cjs
const Human = require('@vladmandic/human').default; // points to @vladmandic/human/dist/human.cjs
const human = new Human();
```
Since NodeJS projects load `weights` from the local filesystem instead of using `http` calls, you must modify the default configuration to include correct paths with the `file://` prefix
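For example, a partial configuration sketch (model paths match the NodeJS demo shown later in this commit; adjust them to wherever your `models` folder lives):
```js
const config = {
  backend: 'tensorflow',
  face: {
    detector: { modelPath: 'file://models/blazeface/back/model.json' },
    mesh: { modelPath: 'file://models/facemesh/model.json' },
    // remaining face models (iris, age, gender, emotion) follow the same `file://` pattern
  },
};
const result = await human.detect(image, config);
```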
@@ -198,13 +210,20 @@ Additionally, `Human` library exposes several objects and methods:
```
Note that when using the `Human` library in `NodeJS`, you must load and parse the image *before* you pass it for detection, and dispose of it afterwards
Input format is `Tensor4D[1, width, height, 3]` of type `float32`
For example:
```js
const imageFile = '../assets/sample1.jpg';
const buffer = fs.readFileSync(imageFile);
const image = tf.node.decodeImage(buffer);
const result = human.detect(image, config);
const decoded = tf.node.decodeImage(buffer);
const casted = decoded.toFloat();
const image = casted.expandDims(0);
decoded.dispose();
casted.dispose();
logger.log('Processing:', image.shape);
const human = new Human();
const result = await human.detect(image, config);
image.dispose();
```
@@ -414,15 +433,15 @@ Development dependencies are [eslint](https://github.com/eslint) used for code l
Performance will vary depending on your hardware, but also on the resolution of the input video/image and the enabled modules as well as their parameters
For example, on a desktop with a low-end nVidia GTX1050 it can perform multiple face detections at 60+ FPS, but drops to 10 FPS on medium-complexity images if all modules are enabled
For example, on a desktop with a low-end nVidia GTX1050 it can perform multiple face detections at 60+ FPS, but drops to ~15 FPS on medium-complexity images if all modules are enabled
Performance per module:
- Enabled all: 10 FPS
- Enabled all: 15 FPS
- Image filters: 80 FPS (standalone)
- Face Detect: 80 FPS (standalone)
- Face Geometry: 30 FPS (includes face detect)
- Face Iris: 25 FPS (includes face detect and face geometry)
- Face Iris: 30 FPS (includes face detect and face geometry)
- Age: 60 FPS (includes face detect)
- Gender: 60 FPS (includes face detect)
- Emotion: 60 FPS (includes face detect)
@@ -437,8 +456,11 @@ For performance details, see output of `result.performance` object during runtim
The `Human` library can be used in any modern Browser or NodeJS environment, but there are several items to be aware of:
- **NodeJS**: Due to a missing feature in `tfjs-node`, only some models are available <https://github.com/tensorflow/tfjs/issues/4066>
- **Browser**: `filters` module cannot be used when using web workers <https://github.com/phoboslab/WebGLImageFilter/issues/27>
- **NodeJS**: Due to a missing feature in `tfjs-node`, only some models are available
For unsupported models, the error is: `TypeError: forwardFunc is not a function` (affected modules can be disabled via configuration, as sketched below)
<https://github.com/tensorflow/tfjs/issues/4066>
- **Browser**: Module `filters` cannot be used when using web workers
<https://github.com/phoboslab/WebGLImageFilter/issues/27>
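A possible workaround sketch for NodeJS, assuming the affected model is one of the optional modules (which models are affected depends on the `tfjs` issue linked above): every module can be switched off via its `enabled` flag in the configuration passed to `detect`:
```js
// sketch: run under NodeJS with any module that fails in tfjs-node switched off
// which modules need disabling depends on the tfjs issue above; `hand` is used here only as an example
const config = {
  backend: 'tensorflow',
  face: {
    enabled: true,
    emotion: { enabled: false }, // disable a single face sub-module
  },
  body: { enabled: true },
  hand: { enabled: false }, // disable an entire module
};
const result = await human.detect(image, config);
```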
<hr>

View File

@@ -1,7 +1,9 @@
import human from '../dist/human.esm.js';
import Human from '../dist/human.esm.js';
import draw from './draw.js';
import Menu from './menu.js';
const human = new Human();
// ui options
const ui = {
baseColor: 'rgba(173, 216, 230, 0.3)', // this is 'lightblue', just with alpha channel

View File

@@ -2,7 +2,7 @@ const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');
const process = require('process');
const console = require('console');
const human = require('..'); // this resolves to project root which is '@vladmandic/human'
const Human = require('..').default; // this resolves to project root which is '@vladmandic/human'
const logger = new console.Console({
stdout: process.stdout,
@@ -26,6 +26,7 @@ const logger = new console.Console({
const config = {
backend: 'tensorflow',
console: true,
videoOptimized: false,
face: {
detector: { modelPath: 'file://models/blazeface/back/model.json' },
mesh: { modelPath: 'file://models/facemesh/model.json' },
@@ -47,8 +48,13 @@ async function detect(input, output) {
logger.info('TFJS Flags:', tf.env().features);
logger.log('Loading:', input);
const buffer = fs.readFileSync(input);
const image = tf.node.decodeImage(buffer);
const decoded = tf.node.decodeImage(buffer);
const casted = decoded.toFloat();
const image = casted.expandDims(0);
decoded.dispose();
casted.dispose();
logger.log('Processing:', image.shape);
const human = new Human();
const result = await human.detect(image, config);
image.dispose();
logger.log(result);

View File

@@ -1,7 +1,8 @@
import human from '../dist/human.esm.js';
import Human from '../dist/human.esm.js';
let config;
let busy = false;
const human = new Human();
const log = (...msg) => {
// eslint-disable-next-line no-console

View File

@@ -39,7 +39,7 @@
"scripts": {
"start": "node --trace-warnings --unhandled-rejections=strict --trace-uncaught --no-deprecation src/node.js",
"lint": "eslint src/*.js demo/*.js",
"build-iife": "esbuild --bundle --platform=browser --sourcemap --target=esnext --format=iife --minify --external:fs --global-name=human --metafile=dist/human.json --outfile=dist/human.js src/human.js",
"build-iife": "esbuild --bundle --platform=browser --sourcemap --target=esnext --format=iife --minify --external:fs --global-name=Human --metafile=dist/human.json --outfile=dist/human.js src/human.js",
"build-esm-bundle": "esbuild --bundle --platform=browser --sourcemap --target=esnext --format=esm --minify --external:fs --metafile=dist/human.esm.json --outfile=dist/human.esm.js src/human.js",
"build-esm-nobundle": "esbuild --bundle --platform=browser --sourcemap --target=esnext --format=esm --minify --external:@tensorflow --external:fs --metafile=dist/human.esm-nobundle.json --outfile=dist/human.esm-nobundle.js src/human.js",
"build-node": "esbuild --bundle --platform=node --sourcemap --target=esnext --format=cjs --external:@tensorflow --metafile=dist/human.cjs.json --outfile=dist/human.cjs src/human.js",

View File

@@ -8,22 +8,7 @@ const fxImage = require('./imagefx.js');
const defaults = require('../config.js').default;
const app = require('../package.json');
let config;
let fx;
let state = 'idle';
let offscreenCanvas;
// object that contains all initialized models
const models = {
facemesh: null,
posenet: null,
handpose: null,
iris: null,
age: null,
gender: null,
emotion: null,
};
// static config override for non-video detection
const override = {
face: { detector: { skipFrames: 0 }, age: { skipFrames: 0 }, emotion: { skipFrames: 0 } },
hand: { skipFrames: 0 },
@@ -35,24 +20,6 @@ const now = () => {
return parseInt(Number(process.hrtime.bigint()) / 1000 / 1000);
};
// helper function: wrapper around console output
const log = (...msg) => {
// eslint-disable-next-line no-console
if (msg && config.console) console.log(...msg);
};
// helper function: measure tensor leak
let numTensors = 0;
const analyzeMemoryLeaks = false;
const analyze = (...msg) => {
if (!analyzeMemoryLeaks) return;
const current = tf.engine().state.numTensors;
const previous = numTensors;
numTensors = current;
const leaked = current - previous;
if (leaked !== 0) log(...msg, leaked);
};
// helper function: perform deep merge of multiple objects so it allows full inheritance with overrides
function mergeDeep(...objects) {
const isObject = (obj) => obj && typeof obj === 'object';
@@ -85,223 +52,248 @@ function sanity(input) {
return null;
}
async function load(userConfig) {
if (userConfig) config = mergeDeep(defaults, userConfig);
if (config.face.enabled && !models.facemesh) {
log('Load model: Face');
models.facemesh = await facemesh.load(config.face);
class Human {
constructor() {
this.tf = tf;
this.version = app.version;
this.defaults = defaults;
this.config = defaults;
this.fx = (tf.ENV.flags.IS_BROWSER && (typeof document !== 'undefined')) ? new fxImage.Canvas() : null;
this.state = 'idle';
this.numTensors = 0;
this.analyzeMemoryLeaks = false;
// object that contains all initialized models
this.models = {
facemesh: null,
posenet: null,
handpose: null,
iris: null,
age: null,
gender: null,
emotion: null,
};
// export raw access to underlying models
this.facemesh = facemesh;
this.ssrnet = ssrnet;
this.emotion = emotion;
this.posenet = posenet;
this.handpose = handpose;
}
if (config.body.enabled && !models.posenet) {
log('Load model: Body');
models.posenet = await posenet.load(config.body);
}
if (config.hand.enabled && !models.handpose) {
log('Load model: Hand');
models.handpose = await handpose.load(config.hand);
}
if (config.face.enabled && config.face.age.enabled && !models.age) {
log('Load model: Age');
models.age = await ssrnet.loadAge(config);
}
if (config.face.enabled && config.face.gender.enabled && !models.gender) {
log('Load model: Gender');
models.gender = await ssrnet.loadGender(config);
}
if (config.face.enabled && config.face.emotion.enabled && !models.emotion) {
log('Load model: Emotion');
models.emotion = await emotion.load(config);
}
}
function tfImage(input) {
// let imageData;
let filtered;
if (tf.ENV.flags.IS_BROWSER && config.filter.enabled && !(input instanceof tf.Tensor)) {
const width = input.naturalWidth || input.videoWidth || input.width || (input.shape && (input.shape[1] > 0));
const height = input.naturalHeight || input.videoHeight || input.height || (input.shape && (input.shape[2] > 0));
if (!offscreenCanvas) offscreenCanvas = new OffscreenCanvas(width, height);
/*
if (!offscreenCanvas) {
offscreenCanvas = document.createElement('canvas');
offscreenCanvas.width = width;
offscreenCanvas.height = height;
// helper function: wrapper around console output
log(...msg) {
// eslint-disable-next-line no-console
if (msg && this.config.console) console.log(...msg);
}
// helper function: measure tensor leak
analyze(...msg) {
if (!this.analyzeMemoryLeaks) return;
const current = tf.engine().state.numTensors;
const previous = this.numTensors;
this.numTensors = current;
const leaked = current - previous;
if (leaked !== 0) this.log(...msg, leaked);
}
async load(userConfig) {
if (userConfig) this.config = mergeDeep(defaults, userConfig);
if (this.config.face.enabled && !this.models.facemesh) {
this.log('Load model: Face');
this.models.facemesh = await facemesh.load(this.config.face);
}
*/
const ctx = offscreenCanvas.getContext('2d');
if (input instanceof ImageData) ctx.putImageData(input, 0, 0);
else ctx.drawImage(input, 0, 0, width, height, 0, 0, offscreenCanvas.width, offscreenCanvas.height);
if (!fx) fx = new fxImage.Canvas();
else fx.reset();
fx.addFilter('brightness', config.filter.brightness); // must have at least one filter enabled
if (config.filter.contrast !== 0) fx.addFilter('contrast', config.filter.contrast);
if (config.filter.sharpness !== 0) fx.addFilter('sharpen', config.filter.sharpness);
if (config.filter.blur !== 0) fx.addFilter('blur', config.filter.blur);
if (config.filter.saturation !== 0) fx.addFilter('saturation', config.filter.saturation);
if (config.filter.hue !== 0) fx.addFilter('hue', config.filter.hue);
if (config.filter.negative) fx.addFilter('negative');
if (config.filter.sepia) fx.addFilter('sepia');
if (config.filter.vintage) fx.addFilter('brownie');
if (config.filter.sepia) fx.addFilter('sepia');
if (config.filter.kodachrome) fx.addFilter('kodachrome');
if (config.filter.technicolor) fx.addFilter('technicolor');
if (config.filter.polaroid) fx.addFilter('polaroid');
if (config.filter.pixelate !== 0) fx.addFilter('pixelate', config.filter.pixelate);
filtered = fx.apply(offscreenCanvas);
}
let tensor;
if (input instanceof tf.Tensor) {
tensor = tf.clone(input);
} else {
const pixels = tf.browser.fromPixels(filtered || input);
const casted = pixels.toFloat();
tensor = casted.expandDims(0);
pixels.dispose();
casted.dispose();
}
return { tensor, canvas: config.filter.return ? filtered : null };
}
async function detect(input, userConfig = {}) {
state = 'config';
const perf = {};
let timeStamp;
timeStamp = now();
config = mergeDeep(defaults, userConfig);
if (!config.videoOptimized) config = mergeDeep(config, override);
perf.config = Math.trunc(now() - timeStamp);
// sanity checks
timeStamp = now();
state = 'check';
const error = sanity(input);
if (error) {
log(error, input);
return { error };
}
perf.sanity = Math.trunc(now() - timeStamp);
// eslint-disable-next-line no-async-promise-executor
return new Promise(async (resolve) => {
const timeStart = now();
// configure backend
timeStamp = now();
if (tf.getBackend() !== config.backend) {
state = 'backend';
log('Human library setting backend:', config.backend);
await tf.setBackend(config.backend);
await tf.ready();
if (this.config.body.enabled && !this.models.posenet) {
this.log('Load model: Body');
this.models.posenet = await posenet.load(this.config.body);
}
perf.backend = Math.trunc(now() - timeStamp);
// check number of loaded models
const loadedModels = Object.values(models).filter((a) => a).length;
if (loadedModels === 0) {
log('Human library starting');
log('Configuration:', config);
log('Flags:', tf.ENV.flags);
if (this.config.hand.enabled && !this.models.handpose) {
this.log('Load model: Hand');
this.models.handpose = await handpose.load(this.config.hand);
}
if (this.config.face.enabled && this.config.face.age.enabled && !this.models.age) {
this.log('Load model: Age');
this.models.age = await ssrnet.loadAge(this.config);
}
if (this.config.face.enabled && this.config.face.gender.enabled && !this.models.gender) {
this.log('Load model: Gender');
this.models.gender = await ssrnet.loadGender(this.config);
}
if (this.config.face.enabled && this.config.face.emotion.enabled && !this.models.emotion) {
this.log('Load model: Emotion');
this.models.emotion = await emotion.load(this.config);
}
}
// load models if enabled
timeStamp = now();
state = 'load';
await load();
perf.load = Math.trunc(now() - timeStamp);
tfImage(input) {
// let imageData;
let filtered;
if (this.fx && this.config.filter.enabled && !(input instanceof tf.Tensor)) {
const width = input.naturalWidth || input.videoWidth || input.width || (input.shape && (input.shape[1] > 0));
const height = input.naturalHeight || input.videoHeight || input.height || (input.shape && (input.shape[2] > 0));
const offscreenCanvas = new OffscreenCanvas(width, height);
const ctx = offscreenCanvas.getContext('2d');
if (input instanceof ImageData) ctx.putImageData(input, 0, 0);
else ctx.drawImage(input, 0, 0, width, height, 0, 0, offscreenCanvas.width, offscreenCanvas.height);
this.fx.reset();
this.fx.addFilter('brightness', this.config.filter.brightness); // must have at least one filter enabled
if (this.config.filter.contrast !== 0) this.fx.addFilter('contrast', this.config.filter.contrast);
if (this.config.filter.sharpness !== 0) this.fx.addFilter('sharpen', this.config.filter.sharpness);
if (this.config.filter.blur !== 0) this.fx.addFilter('blur', this.config.filter.blur);
if (this.config.filter.saturation !== 0) this.fx.addFilter('saturation', this.config.filter.saturation);
if (this.config.filter.hue !== 0) this.fx.addFilter('hue', this.config.filter.hue);
if (this.config.filter.negative) this.fx.addFilter('negative');
if (this.config.filter.sepia) this.fx.addFilter('sepia');
if (this.config.filter.vintage) this.fx.addFilter('brownie');
if (this.config.filter.sepia) this.fx.addFilter('sepia');
if (this.config.filter.kodachrome) this.fx.addFilter('kodachrome');
if (this.config.filter.technicolor) this.fx.addFilter('technicolor');
if (this.config.filter.polaroid) this.fx.addFilter('polaroid');
if (this.config.filter.pixelate !== 0) this.fx.addFilter('pixelate', this.config.filter.pixelate);
filtered = this.fx.apply(offscreenCanvas);
}
let tensor;
if (input instanceof tf.Tensor) {
tensor = tf.clone(input);
} else {
const pixels = tf.browser.fromPixels(filtered || input);
const casted = pixels.toFloat();
tensor = casted.expandDims(0);
pixels.dispose();
casted.dispose();
}
return { tensor, canvas: this.config.filter.return ? filtered : null };
}
if (config.scoped) tf.engine().startScope();
analyze('Start Detect:');
async detect(input, userConfig = {}) {
this.state = 'config';
const perf = {};
let timeStamp;
timeStamp = now();
const image = tfImage(input);
perf.image = Math.trunc(now() - timeStamp);
const imageTensor = image.tensor;
this.config = mergeDeep(defaults, userConfig);
if (!this.config.videoOptimized) this.config = mergeDeep(this.config, override);
perf.config = Math.trunc(now() - timeStamp);
// run posenet
state = 'run:body';
// sanity checks
timeStamp = now();
analyze('Start PoseNet');
const poseRes = config.body.enabled ? await models.posenet.estimatePoses(imageTensor, config.body) : [];
analyze('End PoseNet:');
perf.body = Math.trunc(now() - timeStamp);
this.state = 'check';
const error = sanity(input);
if (error) {
this.log(error, input);
return { error };
}
perf.sanity = Math.trunc(now() - timeStamp);
// run handpose
state = 'run:hand';
timeStamp = now();
analyze('Start HandPose:');
const handRes = config.hand.enabled ? await models.handpose.estimateHands(imageTensor, config.hand) : [];
analyze('End HandPose:');
perf.hand = Math.trunc(now() - timeStamp);
// eslint-disable-next-line no-async-promise-executor
return new Promise(async (resolve) => {
const timeStart = now();
// run facemesh, includes blazeface and iris
const faceRes = [];
if (config.face.enabled) {
state = 'run:face';
// configure backend
timeStamp = now();
analyze('Start FaceMesh:');
const faces = await models.facemesh.estimateFaces(imageTensor, config.face);
perf.face = Math.trunc(now() - timeStamp);
for (const face of faces) {
// if something went wrong, skip the face
if (!face.image || face.image.isDisposedInternal) {
log('face object is disposed:', face.image);
continue;
}
// run ssr-net age & gender, inherits face from blazeface
state = 'run:agegender';
timeStamp = now();
const ssrData = (config.face.age.enabled || config.face.gender.enabled) ? await ssrnet.predict(face.image, config) : {};
perf.agegender = Math.trunc(now() - timeStamp);
// run emotion, inherits face from blazeface
state = 'run:emotion';
timeStamp = now();
const emotionData = config.face.emotion.enabled ? await emotion.predict(face.image, config) : {};
perf.emotion = Math.trunc(now() - timeStamp);
// don't need face anymore
face.image.dispose();
// calculate iris distance
// iris: array[ bottom, left, top, right, center ]
const iris = (face.annotations.leftEyeIris && face.annotations.rightEyeIris)
? Math.max(face.annotations.leftEyeIris[3][0] - face.annotations.leftEyeIris[1][0], face.annotations.rightEyeIris[3][0] - face.annotations.rightEyeIris[1][0])
: 0;
faceRes.push({
confidence: face.confidence,
box: face.box,
mesh: face.mesh,
annotations: face.annotations,
age: ssrData.age,
gender: ssrData.gender,
agConfidence: ssrData.confidence,
emotion: emotionData,
iris: (iris !== 0) ? Math.trunc(100 * 11.7 /* human iris size in mm */ / iris) / 100 : 0,
});
analyze('End FaceMesh:');
if (tf.getBackend() !== this.config.backend) {
this.state = 'backend';
this.log('Human library setting backend:', this.config.backend);
await tf.setBackend(this.config.backend);
await tf.ready();
}
}
perf.backend = Math.trunc(now() - timeStamp);
imageTensor.dispose();
state = 'idle';
// check number of loaded models
const loadedModels = Object.values(this.models).filter((a) => a).length;
if (loadedModels === 0) {
this.log('Human library starting');
this.log('Configuration:', this.config);
this.log('Flags:', tf.ENV.flags);
}
if (config.scoped) tf.engine().endScope();
analyze('End Scope:');
// load models if enabled
timeStamp = now();
this.state = 'load';
await this.load();
perf.load = Math.trunc(now() - timeStamp);
perf.total = Math.trunc(now() - timeStart);
resolve({ face: faceRes, body: poseRes, hand: handRes, performance: perf, canvas: image.canvas });
});
if (this.config.scoped) tf.engine().startScope();
this.analyze('Start Detect:');
timeStamp = now();
const image = this.tfImage(input);
perf.image = Math.trunc(now() - timeStamp);
const imageTensor = image.tensor;
// run posenet
this.state = 'run:body';
timeStamp = now();
this.analyze('Start PoseNet');
const poseRes = this.config.body.enabled ? await this.models.posenet.estimatePoses(imageTensor, this.config.body) : [];
this.analyze('End PoseNet:');
perf.body = Math.trunc(now() - timeStamp);
// run handpose
this.state = 'run:hand';
timeStamp = now();
this.analyze('Start HandPose:');
const handRes = this.config.hand.enabled ? await this.models.handpose.estimateHands(imageTensor, this.config.hand) : [];
this.analyze('End HandPose:');
perf.hand = Math.trunc(now() - timeStamp);
// run facemesh, includes blazeface and iris
const faceRes = [];
if (this.config.face.enabled) {
this.state = 'run:face';
timeStamp = now();
this.analyze('Start FaceMesh:');
const faces = await this.models.facemesh.estimateFaces(imageTensor, this.config.face);
perf.face = Math.trunc(now() - timeStamp);
for (const face of faces) {
// if something went wrong, skip the face
if (!face.image || face.image.isDisposedInternal) {
this.log('face object is disposed:', face.image);
continue;
}
// run ssr-net age & gender, inherits face from blazeface
this.state = 'run:agegender';
timeStamp = now();
const ssrData = (this.config.face.age.enabled || this.config.face.gender.enabled) ? await ssrnet.predict(face.image, this.config) : {};
perf.agegender = Math.trunc(now() - timeStamp);
// run emotion, inherits face from blazeface
this.state = 'run:emotion';
timeStamp = now();
const emotionData = this.config.face.emotion.enabled ? await emotion.predict(face.image, this.config) : {};
perf.emotion = Math.trunc(now() - timeStamp);
// don't need face anymore
face.image.dispose();
// calculate iris distance
// iris: array[ bottom, left, top, right, center ]
const iris = (face.annotations.leftEyeIris && face.annotations.rightEyeIris)
? Math.max(face.annotations.leftEyeIris[3][0] - face.annotations.leftEyeIris[1][0], face.annotations.rightEyeIris[3][0] - face.annotations.rightEyeIris[1][0])
: 0;
faceRes.push({
confidence: face.confidence,
box: face.box,
mesh: face.mesh,
annotations: face.annotations,
age: ssrData.age,
gender: ssrData.gender,
agConfidence: ssrData.confidence,
emotion: emotionData,
iris: (iris !== 0) ? Math.trunc(100 * 11.7 /* human iris size in mm */ / iris) / 100 : 0,
});
this.analyze('End FaceMesh:');
}
}
imageTensor.dispose();
this.state = 'idle';
if (this.config.scoped) tf.engine().endScope();
this.analyze('End Scope:');
perf.total = Math.trunc(now() - timeStart);
resolve({ face: faceRes, body: poseRes, hand: handRes, performance: perf, canvas: image.canvas });
});
}
}
exports.detect = detect;
exports.defaults = defaults;
exports.config = config;
exports.models = models;
exports.facemesh = facemesh;
exports.ssrnet = ssrnet;
exports.posenet = posenet;
exports.handpose = handpose;
exports.tf = tf;
exports.version = app.version;
exports.state = state;
// Error: Failed to compile fragment shader
export { Human as default };