documentation overhaul

pull/233/head
Vladimir Mandic 2021-11-10 12:21:45 -05:00
parent 9ffed35d82
commit 6b93191080
38 changed files with 1145 additions and 814 deletions

View File

@ -11,6 +11,7 @@
### **HEAD -> main** 2021/11/09 mandic00@live.com
- disable use of path2d in node
- add liveness module and facerecognition demo
- initial version of facerecognition demo
- rebuild

View File

@ -38,18 +38,33 @@ JavaScript module using TensorFlow/JS Machine Learning library
## Releases
- [Release Notes](https://github.com/vladmandic/human/releases)
- [NPM](https://www.npmjs.com/package/@vladmandic/human)
- [NPM Link](https://www.npmjs.com/package/@vladmandic/human)
## Demos
- [**List of all Demo applications**](https://github.com/vladmandic/human/wiki/Demos)
- [*Live:* **Main Application**](https://vladmandic.github.io/human/demo/index.html)
- [*Live:* **Simple Application**](https://vladmandic.github.io/human/demo/typescript/index.html)
- [*Live:* **Face Extraction, Description, Identification and Matching**](https://vladmandic.github.io/human/demo/facematch/index.html)
- [*Live:* **Face Validation and Matching: FaceID**](https://vladmandic.github.io/human/demo/facerecognition/index.html)
- [*Live:* **Face Extraction and 3D Rendering**](https://vladmandic.github.io/human/demo/face3d/index.html)
- [*Live:* **Multithreaded Detection Showcasing Maximum Performance**](https://vladmandic.github.io/human/demo/multithread/index.html)
- [*Live:* **VR Model with Head, Face, Eye, Body and Hand tracking**](https://vladmandic.github.io/human-vrm/src/human-vrm.html)
- [Examples gallery](https://vladmandic.github.io/human/samples/samples.html)
- [**Examples gallery**](https://vladmandic.github.io/human/samples/samples.html)
### Browser Demos
- **Full** [[*Live*]](https://vladmandic.github.io/human/demo/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo): Main browser demo app that showcases all Human capabilities
- **Simple** [[*Live*]](https://vladmandic.github.io/human/demo/typescript/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/typescript): Simple WebCam processing demo in TypeScript
- **Face Match** [[*Live*]](https://vladmandic.github.io/human/demo/facematch/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Extracts faces from images, calculates face descriptors and similarities, and matches them against a known database
- **Face Recognition** [[*Live*]](https://vladmandic.github.io/human/demo/facerecognition/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facerecognition): Runs multiple checks to validate webcam input before performing face match, similar to *FaceID*
- **Multi-thread** [[*Live*]](https://vladmandic.github.io/human/demo/multithread/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread): Runs each `human` module in a separate web worker for highest possible performance
- **Face 3D** [[*Live*]](https://vladmandic.github.io/human/demo/face3d/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/face3d): Uses WebCam as input and draws a 3D render of the face mesh using `Three.js`
- **Virtual Avatar** [[*Live*]](https://vladmandic.github.io/human-vrm/src/human-vrm.html) [[*Details*]](https://github.com/vladmandic/human-vrm): VR model with head, face, eye, body and hand tracking
### NodeJS Demos
- **Main** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Process images from files, folders or URLs using native methods
- **Canvas** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Process image from file or URL and draw results to a new image file using `node-canvas`
- **Video** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Processing of video input using `ffmpeg`
- **WebCam** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Processing of webcam screenshots using `fswebcam`
- **Events** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Showcases usage of `Human` eventing to get notifications on processing
- **Similarity** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Compares two input images for similarity of detected faces
- **Face Match** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Parallel processing of face **match** in multiple child worker threads
- **Multiple Workers** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Runs multiple `human` detections in parallel by dispatching them to a pool of pre-created worker processes
## Project pages

View File

@ -1,5 +1,64 @@
# Human Library: Demos
For details see Wiki:
For details on other demos see Wiki: [**Demos**](https://github.com/vladmandic/human/wiki/Demos)
- [**Demos**](https://github.com/vladmandic/human/wiki/Demos)
## Main Demo
`index.html`: Full demo using the `Human` ESM module running in browsers.
Includes:
- Selectable inputs:
- Sample images
- Image via drag & drop
- Image via URL param
- WebCam input
- Video stream
- WebRTC stream
- Selectable active `Human` modules
- With interactive module params
- Interactive `Human` image filters
- Selectable interactive `results` browser
- Selectable `backend`
- Multiple execution methods:
- Sync vs Async
- In main thread or web worker
- Live on GitHub Pages, on a user-hosted web server, or via the included [**micro http2 server**](https://github.com/vladmandic/human/wiki/Development-Server)
### Demo Options
- General `Human` library options
in `index.js:userConfig`
- General `Human` `draw` options
in `index.js:drawOptions`
- Demo PWA options
in `index.js:pwa`
- Demo specific options
in `index.js:ui`
```js
console: true, // log messages to browser console
useWorker: true, // use web workers for processing
buffered: true, // should output be buffered between frames
interpolated: true, // should output be interpolated for smoothness between frames
results: false, // show results tree
useWebRTC: false, // use webrtc as camera source instead of local webcam
```
The demo implements several ways to use the `Human` library.
### URL Params
The demo app can use URL parameters to override configuration values.
For example:
- Force using `WASM` as backend: <https://vladmandic.github.io/human/demo/index.html?backend=wasm>
- Enable `WebWorkers`: <https://vladmandic.github.io/human/demo/index.html?worker=true>
- Skip pre-loading and warming up: <https://vladmandic.github.io/human/demo/index.html?preload=false&warmup=false>
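As a rough sketch (not the demo's actual parsing code), such overrides can be read with the standard `URLSearchParams` API and merged into the configuration objects described above; the `userConfig` name follows `index.js:userConfig`:

```js
// rough sketch, not the demo's actual code: read URL parameters and
// override configuration values; parameter names follow the examples above
const userConfig = {}; // stands in for the demo's index.js:userConfig
const params = new URLSearchParams(window.location.search);
if (params.has('backend')) userConfig.backend = params.get('backend'); // ?backend=wasm
if (params.has('warmup')) userConfig.warmup = params.get('warmup'); // ?warmup=false
const useWorker = params.get('worker') === 'true'; // ?worker=true maps to ui.useWorker
```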
### WebRTC
Note that a WebRTC connection requires a WebRTC server that provides a compatible media track, such as an H.264 video track
For a sample WebRTC server implementation, see the <https://github.com/vladmandic/stream-rtsp> project,
which connects to an IP security camera using the RTSP protocol and transcodes the stream to WebRTC,
ready to be consumed by a client such as `Human`
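As a minimal sketch (assuming an already-negotiated `RTCPeerConnection` named `pc` and an instantiated `human` object; signaling is server-specific and omitted), the incoming track can be attached to a video element and fed to `Human` like any other video input:

```js
// minimal sketch: attach an incoming WebRTC track to a video element
// and use it as a regular Human input; `pc` and `human` are assumed to exist
pc.ontrack = (event) => {
  const video = document.getElementById('video');
  video.srcObject = event.streams[0]; // use the webrtc stream as the video source
  video.onloadeddata = async () => {
    await video.play();
    const result = await human.detect(video); // human accepts video elements directly
    console.log('detected faces:', result.face.length);
  };
};
```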

4
demo/benchmark/README.md Normal file
View File

@ -0,0 +1,4 @@
# Human Benchmarks
- `node.js` runs benchmark using `tensorflow` backend in **NodeJS**
- `index.html` runs benchmark using `wasm`, `webgl`, `humangl` and `webgpu` backends in **Browser**

View File

@ -29,14 +29,13 @@
import Human from '../../dist/human.esm.js';
const loop = 20;
const backends = ['wasm', 'webgl', 'humangl', 'webgpu'];
// eslint-disable-next-line no-console
const log = (...msg) => console.log(...msg);
const myConfig = {
backend: 'humangl',
modelBasePath: 'https://vladmandic.github.io/human/models',
wasmPath: 'https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm@3.9.0/dist/',
debug: true,
async: true,
cacheSensitivity: 0,
@ -48,13 +47,16 @@
iris: { enabled: true },
description: { enabled: true },
emotion: { enabled: false },
antispoof: { enabled: true },
liveness: { enabled: true },
},
hand: { enabled: true, rotation: false },
hand: { enabled: true },
body: { enabled: true },
object: { enabled: false },
object: { enabled: true },
};
async function main() {
async function benchmark(backend) {
myConfig.backend = backend;
const human = new Human(myConfig);
await human.tf.ready();
log('Human:', human.version);
@ -74,6 +76,10 @@
log('Average:', Math.round((t2 - t1) / loop));
}
async function main() {
for (const backend of backends) await benchmark(backend);
}
window.onload = main;
</script>
</body>

View File

@ -4,7 +4,7 @@ const log = require('@vladmandic/pilogger');
const canvasJS = require('canvas');
const Human = require('../../dist/human.node-gpu.js').default;
const input = 'samples/groups/group1.jpg';
const input = './samples/in/group-1.jpg';
const loop = 20;
const myConfig = {
@ -22,12 +22,12 @@ const myConfig = {
iris: { enabled: true },
description: { enabled: true },
emotion: { enabled: true },
antispoof: { enabled: true },
liveness: { enabled: true },
},
hand: {
enabled: true,
},
hand: { enabled: true },
body: { enabled: true },
object: { enabled: false },
object: { enabled: true },
};
async function getImage(human) {
@ -36,15 +36,9 @@ async function getImage(human) {
const ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0, img.width, img.height);
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const res = human.tf.tidy(() => {
const tensor = human.tf.tensor(Array.from(imageData.data), [canvas.height, canvas.width, 4], 'int32'); // create rgba image tensor from flat array
const channels = human.tf.split(tensor, 4, 2); // split rgba to channels
const rgb = human.tf.stack([channels[0], channels[1], channels[2]], 2); // stack channels back to rgb
const reshape = human.tf.reshape(rgb, [1, canvas.height, canvas.width, 3]); // move extra dim from the end of tensor and use it as batch number instead
return reshape;
});
log.info('Image:', input, res.shape);
return res;
const tensor = human.tf.tensor(Array.from(imageData.data), [canvas.height, canvas.width, 4], 'int32'); // create rgba image tensor from flat array
log.info('Image:', input, tensor.shape);
return tensor;
}
async function main() {

3
demo/face3d/README.md Normal file
View File

@ -0,0 +1,3 @@
## Human Face 3D Rendering using WebGL
Demo for browsers that uses `Three.js` for 3D WebGL rendering of a detected face

View File

@ -1,8 +1,31 @@
# NodeJS Multi-Threading Match Solution
# Human Face Recognition & Matching
See `node-match.js` and `node-match-worker.js`
- **Browser** demo: `index.html` & `facematch.js`:
Loads sample images, extracts faces and runs match and similarity analysis
- **NodeJS** demo `node-match.js` and `node-match-worker.js`
Advanced multithreading demo that runs a number of worker threads to process a large number of matches
- Sample face database: `faces.json`
## Methods and Properties in `node-match`
<br>
## Browser Face Recognition Demo
- `demo/facematch`: Demo for browsers that uses all face description and embedding features to
detect, extract and identify all faces, plus calculate similarity between them
It highlights functionality such as (see the sketch after this list):
- Loading images
- Extracting faces from images
- Calculating face embedding descriptors
- Calculating similarity between faces and sorting them by similarity
- Finding best face match based on a known list of faces and printing matches
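A condensed sketch of that flow (assuming `human` is initialized and two input images are loaded; it uses the `embedding` descriptor field on face results and the `human.similarity` comparison):

```js
// condensed sketch of the facematch flow described above; assumes `human`
// is initialized and inputs are loaded images; uses the face `embedding` descriptor
async function compare(human, image1, image2) {
  const res1 = await human.detect(image1);
  const res2 = await human.detect(image2);
  const desc1 = res1.face[0].embedding; // descriptor for first detected face
  const desc2 = res2.face[0].embedding; // descriptor for second detected face
  const similarity = human.similarity(desc1, desc2); // 0..1, higher is more similar
  console.log('similarity:', Math.round(100 * similarity), '%');
}
```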
<br>
## NodeJS Multi-Threading Match Solution
### Methods and Properties in `node-match`
- `createBuffer`: create shared buffer array
single copy of data regardless of the number of workers
@ -30,7 +53,7 @@ See `node-match.js` and `node-match-worker.js`
`node-match` runs in a loop, listening for messages from workers until `maxJobs` has been reached
## Performance
### Performance
Performance decreases linearly with the number of records in the database
Performance increases with the number of worker threads, but non-linearly due to communication overhead
@ -45,7 +68,7 @@ Non-linear performance that increases with number of worker threads due to commu
> threadPoolSize: 1 => ~600 ms / match job
> threadPoolSize: 6 => ~200 ms / match job
## Example
### Example
> node node-match
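The shared-buffer pattern itself is standard NodeJS; a minimal standalone sketch (standard `worker_threads` API; names and sizes are illustrative, not the demo's code):

```js
// minimal standalone sketch of the shared-buffer pattern using the standard
// worker_threads api; descriptor data exists as a single copy for all workers
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

const descLength = 1024; // elements per descriptor (illustrative; node-match uses options.descLength)

if (isMainThread) {
  const buffer = new SharedArrayBuffer(4 * 100 * descLength); // preallocated, cannot grow
  const view = new Float32Array(buffer);
  view.set([0.1, 0.2, 0.3]); // main thread writes descriptors into shared memory
  const worker = new Worker(__filename, { workerData: { buffer } });
  worker.on('message', (msg) => { console.log('worker saw:', msg); worker.terminate(); });
  worker.postMessage({ job: 0 }); // dispatch a match job to the worker
} else {
  const view = new Float32Array(workerData.buffer); // views the same memory, no copy made
  parentPort.on('message', (job) => parentPort.postMessage({ job, first: view[0] }));
}
```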

View File

@ -68,7 +68,7 @@ const fuzDescriptor = (descriptor) => {
return descriptor;
};
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const delay = (ms) => new Promise((resolve) => { setTimeout(resolve, ms); });
async function workersClose() {
const current = data.workers.filter((worker) => !!worker).length;
@ -154,7 +154,7 @@ async function createBuffer() {
data.buffer = new SharedArrayBuffer(4 * options.dbMax * options.descLength); // preallocate max number of records as sharedarraybuffers cannot grow
data.view = new Float32Array(data.buffer); // create view into buffer
data.labels.length = 0;
log.data('created shared buffer:', { maxDescriptors: data.view?.length / options.descLength, totalBytes: data.buffer.byteLength, totalElements: data.view?.length });
log.data('created shared buffer:', { maxDescriptors: (data.view?.length || 0) / options.descLength, totalBytes: data.buffer.byteLength, totalElements: data.view?.length });
}
async function main() {

View File

@ -0,0 +1,33 @@
# Human Face Recognition
`facerecognition` runs multiple checks to validate webcam input before performing face match, similar to *FaceID*
## Workflow
- Starts webcam
- Waits until the input video contains a validated face or a timeout is reached, checking:
- Number of people
- Face size
- Face and gaze direction
- Detection scores
- Blink detection (including temporal check for blink speed) to verify live input
- Runs optional anti-spoofing module
- Runs optional liveness module
- Runs match against database of registered faces and presents best match with scores
## Notes
Both `antispoof` and `liveness` models are tiny and
designed to serve as a quick check when used together with other indicators:
- size below 1MB
- very quick inference times as they are very simple (11 ops for antispoof and 23 ops for liveness)
- trained on low-resolution inputs
### Anti-spoofing Module
- Checks if input is realistic (e.g. flags computer-generated faces)
- Configuration: `human.config.face.antispoof.enabled`
- Result: `human.result.face[0].real` as score
### Liveness Module
- Checks if input has obvious recording artifacts (e.g. a played-back phone recording of a face)
- Configuration: `human.config.face.liveness.enabled`
- Result: `human.result.face[0].live` as score
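A short sketch combining both checks, using the configuration paths and result fields documented above (thresholds are illustrative):

```js
// enable both optional modules and read their scores from the detection result;
// configuration paths and result fields as documented above, thresholds illustrative
import Human from '@vladmandic/human';

const human = new Human({
  face: { antispoof: { enabled: true }, liveness: { enabled: true } },
});

async function validate(input) {
  const result = await human.detect(input);
  if (result.face.length !== 1) return false; // expect exactly one person in frame
  const face = result.face[0];
  console.log('real score:', face.real, 'live score:', face.live);
  return (face.real || 0) > 0.8 && (face.live || 0) > 0.8;
}
```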

3
demo/helpers/README.md Normal file
View File

@ -0,0 +1,3 @@
# Helper libraries
Used by main `Human` demo app

View File

@ -148,7 +148,7 @@ let lastDetectedResult = {};
// helper function: async pause
// eslint-disable-next-line @typescript-eslint/no-unused-vars, no-unused-vars
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const delay = (ms) => new Promise((resolve) => { setTimeout(resolve, ms); });
// helper function: translates json to human readable string
function str(...msg) {
@ -421,7 +421,7 @@ async function setupCamera() {
if (!stream) return 'camera stream empty';
const ready = new Promise((resolve) => (video.onloadeddata = () => resolve(true)));
const ready = new Promise((resolve) => { (video.onloadeddata = () => resolve(true)); });
video.srcObject = stream;
await ready;
if (settings.width > settings.height) canvas.style.width = '100vw';

View File

@ -0,0 +1,70 @@
# Human Multithreading Demos
- **Browser** demo `multithread` & `worker`
Runs each `human` module in a separate web worker for highest possible performance
- **NodeJS** demo `node-multiprocess` & `node-multiprocess-worker`
Runs multiple `human` detections in parallel by dispatching them to a pool of pre-created worker processes
<br><hr><br>
## NodeJS Multi-process Demo
`nodejs/node-multiprocess.js` and `nodejs/node-multiprocess-worker.js`: Demo using NodeJS with CommonJS modules
that starts n child worker processes for parallel execution
```shell
node demo/nodejs/node-multiprocess.js
```
```json
2021-06-01 08:54:19 INFO: @vladmandic/human version 2.0.0
2021-06-01 08:54:19 INFO: User: vlado Platform: linux Arch: x64 Node: v16.0.0
2021-06-01 08:54:19 INFO: FaceAPI multi-process test
2021-06-01 08:54:19 STATE: Enumerated images: ./assets 15
2021-06-01 08:54:19 STATE: Main: started worker: 130362
2021-06-01 08:54:19 STATE: Main: started worker: 130363
2021-06-01 08:54:19 STATE: Main: started worker: 130369
2021-06-01 08:54:19 STATE: Main: started worker: 130370
2021-06-01 08:54:20 STATE: Worker: PID: 130370 TensorFlow/JS 3.6.0 Human 2.0.0 Backend: tensorflow
2021-06-01 08:54:20 STATE: Worker: PID: 130362 TensorFlow/JS 3.6.0 Human 2.0.0 Backend: tensorflow
2021-06-01 08:54:20 STATE: Worker: PID: 130369 TensorFlow/JS 3.6.0 Human 2.0.0 Backend: tensorflow
2021-06-01 08:54:20 STATE: Worker: PID: 130363 TensorFlow/JS 3.6.0 Human 2.0.0 Backend: tensorflow
2021-06-01 08:54:21 STATE: Main: dispatching to worker: 130370
2021-06-01 08:54:21 INFO: Latency: worker initializtion: 1348 message round trip: 0
2021-06-01 08:54:21 DATA: Worker received message: 130370 { test: true }
2021-06-01 08:54:21 STATE: Main: dispatching to worker: 130362
2021-06-01 08:54:21 DATA: Worker received message: 130362 { image: 'samples/ai-face.jpg' }
2021-06-01 08:54:21 DATA: Worker received message: 130370 { image: 'samples/ai-body.jpg' }
2021-06-01 08:54:21 STATE: Main: dispatching to worker: 130369
2021-06-01 08:54:21 STATE: Main: dispatching to worker: 130363
2021-06-01 08:54:21 DATA: Worker received message: 130369 { image: 'assets/human-sample-upper.jpg' }
2021-06-01 08:54:21 DATA: Worker received message: 130363 { image: 'assets/sample-me.jpg' }
2021-06-01 08:54:24 DATA: Main: worker finished: 130362 detected faces: 1 bodies: 1 hands: 0 objects: 1
2021-06-01 08:54:24 STATE: Main: dispatching to worker: 130362
2021-06-01 08:54:24 DATA: Worker received message: 130362 { image: 'assets/sample1.jpg' }
2021-06-01 08:54:25 DATA: Main: worker finished: 130369 detected faces: 1 bodies: 1 hands: 0 objects: 1
2021-06-01 08:54:25 STATE: Main: dispatching to worker: 130369
2021-06-01 08:54:25 DATA: Main: worker finished: 130370 detected faces: 1 bodies: 1 hands: 0 objects: 1
2021-06-01 08:54:25 STATE: Main: dispatching to worker: 130370
2021-06-01 08:54:25 DATA: Worker received message: 130369 { image: 'assets/sample2.jpg' }
2021-06-01 08:54:25 DATA: Main: worker finished: 130363 detected faces: 1 bodies: 1 hands: 0 objects: 2
2021-06-01 08:54:25 STATE: Main: dispatching to worker: 130363
2021-06-01 08:54:25 DATA: Worker received message: 130370 { image: 'assets/sample3.jpg' }
2021-06-01 08:54:25 DATA: Worker received message: 130363 { image: 'assets/sample4.jpg' }
2021-06-01 08:54:30 DATA: Main: worker finished: 130362 detected faces: 3 bodies: 1 hands: 0 objects: 7
2021-06-01 08:54:30 STATE: Main: dispatching to worker: 130362
2021-06-01 08:54:30 DATA: Worker received message: 130362 { image: 'assets/sample5.jpg' }
2021-06-01 08:54:31 DATA: Main: worker finished: 130369 detected faces: 3 bodies: 1 hands: 0 objects: 5
2021-06-01 08:54:31 STATE: Main: dispatching to worker: 130369
2021-06-01 08:54:31 DATA: Worker received message: 130369 { image: 'assets/sample6.jpg' }
2021-06-01 08:54:31 DATA: Main: worker finished: 130363 detected faces: 4 bodies: 1 hands: 2 objects: 2
2021-06-01 08:54:31 STATE: Main: dispatching to worker: 130363
2021-06-01 08:54:39 STATE: Main: worker exit: 130370 0
2021-06-01 08:54:39 DATA: Main: worker finished: 130362 detected faces: 1 bodies: 1 hands: 0 objects: 1
2021-06-01 08:54:39 DATA: Main: worker finished: 130369 detected faces: 1 bodies: 1 hands: 1 objects: 3
2021-06-01 08:54:39 STATE: Main: worker exit: 130362 0
2021-06-01 08:54:39 STATE: Main: worker exit: 130369 0
2021-06-01 08:54:41 DATA: Main: worker finished: 130363 detected faces: 9 bodies: 1 hands: 0 objects: 10
2021-06-01 08:54:41 STATE: Main: worker exit: 130363 0
2021-06-01 08:54:41 INFO: Processed: 15 images in total: 22006 ms working: 20658 ms average: 1377 ms
```
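A reduced sketch of the dispatch pattern behind this output (standard `child_process` API; file and message names are illustrative):

```js
// reduced sketch of the dispatch pattern: fork a pool of worker processes and
// send each an image; when a worker reports back, dispatch the next queued image
const child_process = require('child_process');

const images = ['samples/ai-face.jpg', 'samples/ai-body.jpg', 'assets/human-sample-upper.jpg'];
const numWorkers = 2;

for (let i = 0; i < numWorkers; i++) {
  const worker = child_process.fork('demo/multithread/node-multiprocess-worker.js');
  worker.on('message', (msg) => {
    console.log('main: worker finished:', worker.pid, msg);
    const next = images.shift(); // dispatch next image or retire the worker
    if (next) worker.send({ image: next });
    else worker.disconnect();
  });
  worker.send({ image: images.shift() }); // initial dispatch
}
```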

120
demo/nodejs/README.md Normal file
View File

@ -0,0 +1,120 @@
# Human Demos for NodeJS
- `node`: Process images from files, folders or URLs
uses native methods for image loading and decoding without external dependencies
- `node-canvas`: Process image from file or URL and draw results to a new image file using `node-canvas`
uses `node-canvas` library to load and decode images from files, draw detection results and write output to a new image file
- `node-video`: Processing of video input using `ffmpeg`
uses `ffmpeg` to decode video input (a file, stream, or device such as a webcam) and
pipe the results, which the demo app captures as frames and processes with the `Human` library
- `node-webcam`: Processing of webcam screenshots using `fswebcam`
uses `fswebcam` to connect to a webcam and take screenshots at regular intervals, which are then processed by the `Human` library
- `node-event`: Showcases usage of `Human` eventing to get notifications on processing (see the sketch after this list)
- `node-similarity`: Compares two input images for similarity of detected faces
- `process-folder`: Processes all images in an input folder and creates output images
internally used to generate the samples gallery
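For the eventing demo, a hedged sketch of what a subscription might look like (this assumes an `EventTarget`-style `human.events` emitting a `detect` event; see `node-event` for the actual usage):

```js
// hedged sketch of human eventing; assumes an EventTarget-style `human.events`
// emitting a 'detect' event -- see the node-event demo for the actual usage
const Human = require('@vladmandic/human').default;

const human = new Human({ debug: false });
human.events.addEventListener('detect', () => {
  console.log('detection complete, faces:', human.result.face.length);
});
// any subsequent call such as `await human.detect(input)` fires the event
```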
<br>
## Main Demo
`nodejs/node.js`: Demo using NodeJS with CommonJS module
Simple demo that can process any input image
Note that you can run the demo as-is and it will perform detection on the provided sample images,
or you can pass a path to an image to analyze, either on the local filesystem or via URL
```shell
node demo/nodejs/node.js
```
```json
2021-06-01 08:52:15 INFO: @vladmandic/human version 2.0.0
2021-06-01 08:52:15 INFO: User: vlado Platform: linux Arch: x64 Node: v16.0.0
2021-06-01 08:52:15 INFO: Current folder: /home/vlado/dev/human
2021-06-01 08:52:15 INFO: Human: 2.0.0
2021-06-01 08:52:15 INFO: Active Configuration {
backend: 'tensorflow',
modelBasePath: 'file://models/',
wasmPath: '../node_modules/@tensorflow/tfjs-backend-wasm/dist/',
debug: true,
async: false,
warmup: 'full',
cacheSensitivity: 0.75,
filter: {
enabled: true,
width: 0,
height: 0,
flip: true,
return: true,
brightness: 0,
contrast: 0,
sharpness: 0,
blur: 0,
saturation: 0,
hue: 0,
negative: false,
sepia: false,
vintage: false,
kodachrome: false,
technicolor: false,
polaroid: false,
pixelate: 0
},
gesture: { enabled: true },
face: {
enabled: true,
detector: { modelPath: 'blazeface.json', rotation: false, maxDetected: 10, skipFrames: 15, minConfidence: 0.2, iouThreshold: 0.1, return: false, enabled: true },
mesh: { enabled: true, modelPath: 'facemesh.json' },
iris: { enabled: true, modelPath: 'iris.json' },
description: { enabled: true, modelPath: 'faceres.json', skipFrames: 16, minConfidence: 0.1 },
emotion: { enabled: true, minConfidence: 0.1, skipFrames: 17, modelPath: 'emotion.json' }
},
body: { enabled: true, modelPath: 'movenet-lightning.json', maxDetected: 1, minConfidence: 0.2 },
hand: {
enabled: true,
rotation: true,
skipFrames: 18,
minConfidence: 0.1,
iouThreshold: 0.1,
maxDetected: 2,
landmarks: true,
detector: { modelPath: 'handdetect.json' },
skeleton: { modelPath: 'handskeleton.json' }
},
object: { enabled: true, modelPath: 'mb3-centernet.json', minConfidence: 0.2, iouThreshold: 0.4, maxDetected: 10, skipFrames: 19 }
}
08:52:15.673 Human: version: 2.0.0
08:52:15.674 Human: tfjs version: 3.6.0
08:52:15.674 Human: platform: linux x64
08:52:15.674 Human: agent: NodeJS v16.0.0
08:52:15.674 Human: setting backend: tensorflow
08:52:15.710 Human: load model: file://models/blazeface.json
08:52:15.743 Human: load model: file://models/facemesh.json
08:52:15.744 Human: load model: file://models/iris.json
08:52:15.760 Human: load model: file://models/emotion.json
08:52:15.847 Human: load model: file://models/handdetect.json
08:52:15.847 Human: load model: file://models/handskeleton.json
08:52:15.914 Human: load model: file://models/movenet-lightning.json
08:52:15.957 Human: load model: file://models/mb3-centernet.json
08:52:16.015 Human: load model: file://models/faceres.json
08:52:16.015 Human: tf engine state: 50796152 bytes 1318 tensors
2021-06-01 08:52:16 INFO: Loaded: [ 'face', 'movenet', 'handpose', 'emotion', 'centernet', 'faceres', [length]: 6 ]
2021-06-01 08:52:16 INFO: Memory state: { unreliable: true, numTensors: 1318, numDataBuffers: 1318, numBytes: 50796152 }
2021-06-01 08:52:16 INFO: Loading image: private/daz3d/daz3d-kiaria-02.jpg
2021-06-01 08:52:16 STATE: Processing: [ 1, 1300, 1000, 3, [length]: 4 ]
2021-06-01 08:52:17 DATA: Results:
2021-06-01 08:52:17 DATA: Face: #0 boxScore:0.88 faceScore:1 age:16.3 genderScore:0.97 gender:female emotionScore:0.85 emotion:happy iris:61.05
2021-06-01 08:52:17 DATA: Body: #0 score:0.82 keypoints:17
2021-06-01 08:52:17 DATA: Hand: #0 score:0.89
2021-06-01 08:52:17 DATA: Hand: #1 score:0.97
2021-06-01 08:52:17 DATA: Gesture: face#0 gesture:facing left
2021-06-01 08:52:17 DATA: Gesture: body#0 gesture:leaning right
2021-06-01 08:52:17 DATA: Gesture: hand#0 gesture:pinky forward middlefinger up
2021-06-01 08:52:17 DATA: Gesture: hand#1 gesture:pinky forward middlefinger up
2021-06-01 08:52:17 DATA: Gesture: iris#0 gesture:looking left
2021-06-01 08:52:17 DATA: Object: #0 score:0.55 label:person
2021-06-01 08:52:17 DATA: Object: #1 score:0.23 label:bottle
2021-06-01 08:52:17 DATA: Persons:
2021-06-01 08:52:17 DATA: #0: Face:score:1 age:16.3 gender:female iris:61.05 Body:score:0.82 keypoints:17 LeftHand:no RightHand:yes Gestures:4
```
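Stripped of logging, the core of this demo reduces to a few lines (a sketch mirroring the simplified code elsewhere in this changeset):

```js
// sketch mirroring the simplified demo code in this changeset: decode an
// image with tfjs-node and run detection directly on the resulting tensor
const fs = require('fs');
const Human = require('../../dist/human.node.js').default;

async function detect(input) {
  const human = new Human({ modelBasePath: 'file://models/' });
  const buffer = fs.readFileSync(input);
  const tensor = human.tf.node.decodeImage(buffer, 3); // decode to rgb tensor
  const result = await human.detect(tensor);
  human.tf.dispose(tensor); // release tensor memory when done
  console.log('detected faces:', result.face.length);
}

detect('samples/in/group-1.jpg'); // sample path used elsewhere in this changeset
```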

View File

@ -1,21 +0,0 @@
const log = require('@vladmandic/pilogger');
const Human = require('../../dist/human.node.js').default; // or const Human = require('../dist/human.node-gpu.js').default;
const config = {
debug: false,
};
async function main() {
const human = new Human(config);
await human.tf.ready();
log.info('Human:', human.version);
log.data('Environment', human.env);
await human.load();
const models = Object.keys(human.models).map((model) => ({ name: model, loaded: (human.models[model] !== null) }));
log.data('Models:', models);
log.info('Memory state:', human.tf.engine().memory());
// log.data('Config', human.config);
log.info('TFJS flags:', human.tf.ENV.flags);
}
main();

View File

@ -51,19 +51,7 @@ async function detect(input) {
// decode image using tfjs-node so we don't need external dependencies
if (!buffer) return;
const tensor = human.tf.tidy(() => {
const decode = human.tf.node.decodeImage(buffer, 3);
let expand;
if (decode.shape[2] === 4) { // input is in rgba format, need to convert to rgb
const channels = human.tf.split(decode, 4, 2); // split rgba to channels
const rgb = human.tf.stack([channels[0], channels[1], channels[2]], 2); // stack channels back to rgb and ignore alpha
expand = human.tf.reshape(rgb, [1, decode.shape[0], decode.shape[1], 3]); // move extra dim from the end of tensor and use it as batch number instead
} else {
expand = human.tf.expandDims(decode, 0);
}
const cast = human.tf.cast(expand, 'float32');
return cast;
});
const tensor = human.tf.node.decodeImage(buffer, 3);
// run detection
await human.detect(tensor, myConfig);

View File

@ -37,12 +37,10 @@ async function detect(input) {
process.exit(1);
}
const buffer = fs.readFileSync(input);
const decode = human.tf.node.decodeImage(buffer, 3);
const expand = human.tf.expandDims(decode, 0);
const tensor = human.tf.cast(expand, 'float32');
const tensor = human.tf.node.decodeImage(buffer, 3);
log.state('Loaded image:', input, tensor['shape']);
const result = await human.detect(tensor, myConfig);
human.tf.dispose([tensor, decode, expand]);
human.tf.dispose(tensor);
log.state('Detected faces:', result.face.length);
return result;
}

View File

@ -0,0 +1,5 @@
# Human Demo in TypeScript for Browsers
Simple demo app that can be used as a quick-start guide for using `Human` in browser environments
- `index.ts` is compiled to `index.js` which is loaded from `index.html`
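A hedged sketch of what such a minimal quick-start might contain (not the demo's actual `index.ts`): import the ESM bundle, start a webcam, and run a per-frame detection loop:

```js
// hedged quick-start sketch, not the demo's actual index.ts:
// import the esm bundle, start a webcam, and run a per-frame detection loop
import Human from '../../dist/human.esm.js';

const human = new Human({ modelBasePath: '../../models' });
const video = document.getElementById('video');

async function loop() {
  const result = await human.detect(video); // detect on the current video frame
  console.log('faces:', result.face.length);
  requestAnimationFrame(loop); // schedule the next frame
}

async function main() {
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();
  loop();
}

main();
```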

View File

@ -5547,7 +5547,7 @@ var getLeftToRightEyeDepthDifference = (rawCoords) => {
const rightEyeZ = rawCoords[eyeLandmarks.rightBounds[0]][2];
return leftEyeZ - rightEyeZ;
};
var getEyeBox = (rawCoords, face5, eyeInnerCornerIndex, eyeOuterCornerIndex, flip = false, meshSize) => {
var getEyeBox = (rawCoords, face5, eyeInnerCornerIndex, eyeOuterCornerIndex, meshSize, flip = false) => {
const box4 = squarifyBox(enlargeBox(calculateLandmarksBoundingBox([rawCoords[eyeInnerCornerIndex], rawCoords[eyeOuterCornerIndex]]), irisEnlarge));
const boxSize = getBoxSize(box4);
let crop2 = tfjs_esm_exports.image.cropAndResize(face5, [[
@ -5597,8 +5597,8 @@ async function augmentIris(rawCoords, face5, config3, meshSize) {
log("face mesh iris detection requested, but model is not loaded");
return rawCoords;
}
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], true, meshSize);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], true, meshSize);
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], meshSize, true);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], meshSize, true);
const combined = tfjs_esm_exports.concat([leftEyeCrop, rightEyeCrop]);
tfjs_esm_exports.dispose(leftEyeCrop);
tfjs_esm_exports.dispose(rightEyeCrop);
@ -11086,7 +11086,8 @@ var getCanvasContext = (input) => {
throw new Error("invalid canvas");
};
var rad2deg = (theta) => Math.round(theta * 180 / Math.PI);
function point(ctx, x, y, z = 0, localOptions) {
function point(ctx, x, y, z, localOptions) {
z = z || 0;
ctx.fillStyle = localOptions.useDepth && z ? `rgba(${127.5 + 2 * z}, ${127.5 - 2 * z}, 255, 0.3)` : localOptions.color;
ctx.beginPath();
ctx.arc(x, y, localOptions.pointSize, 0, 2 * Math.PI);

File diff suppressed because one or more lines are too long

9
dist/human.esm.js vendored
View File

@ -75872,7 +75872,7 @@ var getLeftToRightEyeDepthDifference = (rawCoords) => {
const rightEyeZ = rawCoords[eyeLandmarks.rightBounds[0]][2];
return leftEyeZ - rightEyeZ;
};
var getEyeBox = (rawCoords, face5, eyeInnerCornerIndex, eyeOuterCornerIndex, flip = false, meshSize) => {
var getEyeBox = (rawCoords, face5, eyeInnerCornerIndex, eyeOuterCornerIndex, meshSize, flip = false) => {
const box4 = squarifyBox(enlargeBox(calculateLandmarksBoundingBox([rawCoords[eyeInnerCornerIndex], rawCoords[eyeOuterCornerIndex]]), irisEnlarge));
const boxSize = getBoxSize(box4);
let crop2 = image.cropAndResize(face5, [[
@ -75922,8 +75922,8 @@ async function augmentIris(rawCoords, face5, config3, meshSize) {
log("face mesh iris detection requested, but model is not loaded");
return rawCoords;
}
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], true, meshSize);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], true, meshSize);
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], meshSize, true);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], meshSize, true);
const combined = concat([leftEyeCrop, rightEyeCrop]);
dispose(leftEyeCrop);
dispose(rightEyeCrop);
@ -81411,7 +81411,8 @@ var getCanvasContext = (input2) => {
throw new Error("invalid canvas");
};
var rad2deg = (theta) => Math.round(theta * 180 / Math.PI);
function point(ctx, x, y, z = 0, localOptions) {
function point(ctx, x, y, z, localOptions) {
z = z || 0;
ctx.fillStyle = localOptions.useDepth && z ? `rgba(${127.5 + 2 * z}, ${127.5 - 2 * z}, 255, 0.3)` : localOptions.color;
ctx.beginPath();
ctx.arc(x, y, localOptions.pointSize, 0, 2 * Math.PI);

File diff suppressed because one or more lines are too long

2
dist/human.js vendored

File diff suppressed because one or more lines are too long

View File

@ -5585,7 +5585,7 @@ var getLeftToRightEyeDepthDifference = (rawCoords) => {
const rightEyeZ = rawCoords[eyeLandmarks.rightBounds[0]][2];
return leftEyeZ - rightEyeZ;
};
var getEyeBox = (rawCoords, face5, eyeInnerCornerIndex, eyeOuterCornerIndex, flip = false, meshSize) => {
var getEyeBox = (rawCoords, face5, eyeInnerCornerIndex, eyeOuterCornerIndex, meshSize, flip = false) => {
const box4 = squarifyBox(enlargeBox(calculateLandmarksBoundingBox([rawCoords[eyeInnerCornerIndex], rawCoords[eyeOuterCornerIndex]]), irisEnlarge));
const boxSize = getBoxSize(box4);
let crop2 = tf12.image.cropAndResize(face5, [[
@ -5635,8 +5635,8 @@ async function augmentIris(rawCoords, face5, config3, meshSize) {
log("face mesh iris detection requested, but model is not loaded");
return rawCoords;
}
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], true, meshSize);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], true, meshSize);
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], meshSize, true);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], meshSize, true);
const combined = tf12.concat([leftEyeCrop, rightEyeCrop]);
tf12.dispose(leftEyeCrop);
tf12.dispose(rightEyeCrop);
@ -11145,7 +11145,8 @@ var getCanvasContext = (input) => {
throw new Error("invalid canvas");
};
var rad2deg = (theta) => Math.round(theta * 180 / Math.PI);
function point(ctx, x, y, z = 0, localOptions) {
function point(ctx, x, y, z, localOptions) {
z = z || 0;
ctx.fillStyle = localOptions.useDepth && z ? `rgba(${127.5 + 2 * z}, ${127.5 - 2 * z}, 255, 0.3)` : localOptions.color;
ctx.beginPath();
ctx.arc(x, y, localOptions.pointSize, 0, 2 * Math.PI);

View File

@ -5586,7 +5586,7 @@ var getLeftToRightEyeDepthDifference = (rawCoords) => {
const rightEyeZ = rawCoords[eyeLandmarks.rightBounds[0]][2];
return leftEyeZ - rightEyeZ;
};
var getEyeBox = (rawCoords, face5, eyeInnerCornerIndex, eyeOuterCornerIndex, flip = false, meshSize) => {
var getEyeBox = (rawCoords, face5, eyeInnerCornerIndex, eyeOuterCornerIndex, meshSize, flip = false) => {
const box4 = squarifyBox(enlargeBox(calculateLandmarksBoundingBox([rawCoords[eyeInnerCornerIndex], rawCoords[eyeOuterCornerIndex]]), irisEnlarge));
const boxSize = getBoxSize(box4);
let crop2 = tf12.image.cropAndResize(face5, [[
@ -5636,8 +5636,8 @@ async function augmentIris(rawCoords, face5, config3, meshSize) {
log("face mesh iris detection requested, but model is not loaded");
return rawCoords;
}
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], true, meshSize);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], true, meshSize);
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], meshSize, true);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], meshSize, true);
const combined = tf12.concat([leftEyeCrop, rightEyeCrop]);
tf12.dispose(leftEyeCrop);
tf12.dispose(rightEyeCrop);
@ -11146,7 +11146,8 @@ var getCanvasContext = (input) => {
throw new Error("invalid canvas");
};
var rad2deg = (theta) => Math.round(theta * 180 / Math.PI);
function point(ctx, x, y, z = 0, localOptions) {
function point(ctx, x, y, z, localOptions) {
z = z || 0;
ctx.fillStyle = localOptions.useDepth && z ? `rgba(${127.5 + 2 * z}, ${127.5 - 2 * z}, 255, 0.3)` : localOptions.color;
ctx.beginPath();
ctx.arc(x, y, localOptions.pointSize, 0, 2 * Math.PI);

9
dist/human.node.js vendored
View File

@ -5585,7 +5585,7 @@ var getLeftToRightEyeDepthDifference = (rawCoords) => {
const rightEyeZ = rawCoords[eyeLandmarks.rightBounds[0]][2];
return leftEyeZ - rightEyeZ;
};
var getEyeBox = (rawCoords, face5, eyeInnerCornerIndex, eyeOuterCornerIndex, flip = false, meshSize) => {
var getEyeBox = (rawCoords, face5, eyeInnerCornerIndex, eyeOuterCornerIndex, meshSize, flip = false) => {
const box4 = squarifyBox(enlargeBox(calculateLandmarksBoundingBox([rawCoords[eyeInnerCornerIndex], rawCoords[eyeOuterCornerIndex]]), irisEnlarge));
const boxSize = getBoxSize(box4);
let crop2 = tf12.image.cropAndResize(face5, [[
@ -5635,8 +5635,8 @@ async function augmentIris(rawCoords, face5, config3, meshSize) {
log("face mesh iris detection requested, but model is not loaded");
return rawCoords;
}
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], true, meshSize);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], true, meshSize);
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], meshSize, true);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face5, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], meshSize, true);
const combined = tf12.concat([leftEyeCrop, rightEyeCrop]);
tf12.dispose(leftEyeCrop);
tf12.dispose(rightEyeCrop);
@ -11145,7 +11145,8 @@ var getCanvasContext = (input) => {
throw new Error("invalid canvas");
};
var rad2deg = (theta) => Math.round(theta * 180 / Math.PI);
function point(ctx, x, y, z = 0, localOptions) {
function point(ctx, x, y, z, localOptions) {
z = z || 0;
ctx.fillStyle = localOptions.useDepth && z ? `rgba(${127.5 + 2 * z}, ${127.5 - 2 * z}, 255, 0.3)` : localOptions.color;
ctx.beginPath();
ctx.arc(x, y, localOptions.pointSize, 0, 2 * Math.PI);

View File

@ -284,4 +284,27 @@ DATA: kernel ops: {
reduction: [ 'Mean' ],
matrices: [ '_FusedMatMul' ]
}
```
INFO: graph model: /home/vlado/dev/human/models/liveness.json
INFO: created on: 2021-11-09T12:39:11.760Z
INFO: metadata: { generatedBy: 'https://github.com/leokwu/livenessnet', convertedBy: 'https://github.com/vladmandic', version: '808.undefined' }
INFO: model inputs based on signature
{ name: 'conv2d_1_input', dtype: 'DT_FLOAT', shape: [ -1, 32, 32, 3 ] }
INFO: model outputs based on signature
{ id: 0, name: 'activation_6', dytpe: 'DT_FLOAT', shape: [ -1, 2 ] }
INFO: tensors: 23
DATA: weights: {
files: [ 'liveness.bin' ],
size: { disk: 592976, memory: 592976 },
count: { total: 23, float32: 22, int32: 1 },
quantized: { none: 23 },
values: { total: 148244, float32: 148242, int32: 2 }
}
DATA: kernel ops: {
graph: [ 'Const', 'Placeholder', 'Identity' ],
convolution: [ '_FusedConv2D', 'MaxPool' ],
arithmetic: [ 'Mul', 'Add', 'AddV2' ],
transformation: [ 'Reshape' ],
matrices: [ '_FusedMatMul' ],
normalization: [ 'Softmax' ]
}
```

View File

@ -66,18 +66,18 @@
"@tensorflow/tfjs-layers": "^3.11.0",
"@tensorflow/tfjs-node": "^3.11.0",
"@tensorflow/tfjs-node-gpu": "^3.11.0",
"@types/node": "^16.11.6",
"@types/node": "^16.11.7",
"@typescript-eslint/eslint-plugin": "^5.3.1",
"@typescript-eslint/parser": "^5.3.1",
"@vladmandic/build": "^0.6.3",
"@vladmandic/pilogger": "^0.3.5",
"canvas": "^2.8.0",
"dayjs": "^1.10.7",
"esbuild": "^0.13.12",
"esbuild": "^0.13.13",
"eslint": "8.2.0",
"eslint-config-airbnb-base": "^14.2.1",
"eslint-config-airbnb-base": "^15.0.0",
"eslint-plugin-html": "^6.2.0",
"eslint-plugin-import": "^2.25.2",
"eslint-plugin-import": "^2.25.3",
"eslint-plugin-json": "^3.1.0",
"eslint-plugin-node": "^11.1.0",
"eslint-plugin-promise": "^5.1.1",

View File

@ -65,7 +65,7 @@ export const getLeftToRightEyeDepthDifference = (rawCoords) => {
};
// Returns a box describing a cropped region around the eye fit for passing to the iris model.
export const getEyeBox = (rawCoords, face, eyeInnerCornerIndex, eyeOuterCornerIndex, flip = false, meshSize) => {
export const getEyeBox = (rawCoords, face, eyeInnerCornerIndex, eyeOuterCornerIndex, meshSize, flip = false) => {
const box = util.squarifyBox(util.enlargeBox(util.calculateLandmarksBoundingBox([rawCoords[eyeInnerCornerIndex], rawCoords[eyeOuterCornerIndex]]), irisEnlarge));
const boxSize = util.getBoxSize(box);
let crop = tf.image.cropAndResize(face, [[
@ -119,8 +119,8 @@ export async function augmentIris(rawCoords, face, config, meshSize) {
if (config.debug) log('face mesh iris detection requested, but model is not loaded');
return rawCoords;
}
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], true, meshSize);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], true, meshSize);
const { box: leftEyeBox, boxSize: leftEyeBoxSize, crop: leftEyeCrop } = getEyeBox(rawCoords, face, eyeLandmarks.leftBounds[0], eyeLandmarks.leftBounds[1], meshSize, true);
const { box: rightEyeBox, boxSize: rightEyeBoxSize, crop: rightEyeCrop } = getEyeBox(rawCoords, face, eyeLandmarks.rightBounds[0], eyeLandmarks.rightBounds[1], meshSize, true);
const combined = tf.concat([leftEyeCrop, rightEyeCrop]);
tf.dispose(leftEyeCrop);
tf.dispose(rightEyeCrop);

View File

@ -549,4 +549,5 @@ export class Human {
}
/** Class Human as default export */
/* eslint no-restricted-exports: ["off", { "restrictedNamedExports": ["default"] }] */
export { Human as default };

View File

@ -77,7 +77,8 @@ const getCanvasContext = (input) => {
const rad2deg = (theta) => Math.round((theta * 180) / Math.PI);
function point(ctx: CanvasRenderingContext2D, x, y, z = 0, localOptions) {
function point(ctx: CanvasRenderingContext2D, x, y, z, localOptions) {
z = z || 0;
ctx.fillStyle = localOptions.useDepth && z ? `rgba(${127.5 + (2 * z)}, ${127.5 - (2 * z)}, 255, 0.3)` : localOptions.color;
ctx.beginPath();
ctx.arc(x, y, localOptions.pointSize, 0, 2 * Math.PI);

View File

@ -66,6 +66,6 @@ export const minmax = (data: Array<number>) => data.reduce((acc: Array<number>,
// helper function: async wait
export async function wait(time) {
const waiting = new Promise((resolve) => setTimeout(() => resolve(true), time));
const waiting = new Promise((resolve) => { setTimeout(() => resolve(true), time); });
await waiting;
}

View File

@ -1,26 +1,26 @@
2021-11-09 19:33:44 INFO:  @vladmandic/human version 2.5.1
2021-11-09 19:33:44 INFO:  User: vlado Platform: linux Arch: x64 Node: v17.0.1
2021-11-09 19:33:44 INFO:  Application: {"name":"@vladmandic/human","version":"2.5.1"}
2021-11-09 19:33:44 INFO:  Environment: {"profile":"production","config":".build.json","package":"package.json","tsconfig":true,"eslintrc":true,"git":true}
2021-11-09 19:33:44 INFO:  Toolchain: {"build":"0.6.3","esbuild":"0.13.12","typescript":"4.4.4","typedoc":"0.22.8","eslint":"8.2.0"}
2021-11-09 19:33:44 INFO:  Build: {"profile":"production","steps":["clean","compile","typings","typedoc","lint","changelog"]}
2021-11-09 19:33:44 STATE: Clean: {"locations":["dist/*","types/*","typedoc/*"]}
2021-11-09 19:33:44 STATE: Compile: {"name":"tfjs/nodejs/cpu","format":"cjs","platform":"node","input":"tfjs/tf-node.ts","output":"dist/tfjs.esm.js","files":1,"inputBytes":102,"outputBytes":1275}
2021-11-09 19:33:45 STATE: Compile: {"name":"human/nodejs/cpu","format":"cjs","platform":"node","input":"src/human.ts","output":"dist/human.node.js","files":57,"inputBytes":527426,"outputBytes":445566}
2021-11-09 19:33:45 STATE: Compile: {"name":"tfjs/nodejs/gpu","format":"cjs","platform":"node","input":"tfjs/tf-node-gpu.ts","output":"dist/tfjs.esm.js","files":1,"inputBytes":110,"outputBytes":1283}
2021-11-09 19:33:45 STATE: Compile: {"name":"human/nodejs/gpu","format":"cjs","platform":"node","input":"src/human.ts","output":"dist/human.node-gpu.js","files":57,"inputBytes":527434,"outputBytes":445570}
2021-11-09 19:33:45 STATE: Compile: {"name":"tfjs/nodejs/wasm","format":"cjs","platform":"node","input":"tfjs/tf-node-wasm.ts","output":"dist/tfjs.esm.js","files":1,"inputBytes":149,"outputBytes":1350}
2021-11-09 19:33:45 STATE: Compile: {"name":"human/nodejs/wasm","format":"cjs","platform":"node","input":"src/human.ts","output":"dist/human.node-wasm.js","files":57,"inputBytes":527501,"outputBytes":445642}
2021-11-09 19:33:45 STATE: Compile: {"name":"tfjs/browser/version","format":"esm","platform":"browser","input":"tfjs/tf-version.ts","output":"dist/tfjs.version.js","files":1,"inputBytes":1063,"outputBytes":1652}
2021-11-09 19:33:45 STATE: Compile: {"name":"tfjs/browser/esm/nobundle","format":"esm","platform":"browser","input":"tfjs/tf-browser.ts","output":"dist/tfjs.esm.js","files":2,"inputBytes":2326,"outputBytes":912}
2021-11-09 19:33:45 STATE: Compile: {"name":"human/browser/esm/nobundle","format":"esm","platform":"browser","input":"src/human.ts","output":"dist/human.esm-nobundle.js","files":57,"inputBytes":527063,"outputBytes":447664}
2021-11-09 19:33:45 STATE: Compile: {"name":"tfjs/browser/esm/custom","format":"esm","platform":"browser","input":"tfjs/tf-custom.ts","output":"dist/tfjs.esm.js","files":2,"inputBytes":2562703,"outputBytes":2497652}
2021-11-09 19:33:46 STATE: Compile: {"name":"human/browser/iife/bundle","format":"iife","platform":"browser","input":"src/human.ts","output":"dist/human.js","files":57,"inputBytes":3023803,"outputBytes":1614837}
2021-11-09 19:33:46 STATE: Compile: {"name":"human/browser/esm/bundle","format":"esm","platform":"browser","input":"src/human.ts","output":"dist/human.esm.js","files":57,"inputBytes":3023803,"outputBytes":2950885}
2021-11-09 19:34:06 STATE: Typings: {"input":"src/human.ts","output":"types","files":50}
2021-11-09 19:34:13 STATE: TypeDoc: {"input":"src/human.ts","output":"typedoc","objects":49,"generated":true}
2021-11-09 19:34:13 STATE: Compile: {"name":"demo/typescript","format":"esm","platform":"browser","input":"demo/typescript/index.ts","output":"demo/typescript/index.js","files":1,"inputBytes":5801,"outputBytes":3822}
2021-11-09 19:34:13 STATE: Compile: {"name":"demo/facerecognition","format":"esm","platform":"browser","input":"demo/facerecognition/index.ts","output":"demo/facerecognition/index.js","files":1,"inputBytes":8949,"outputBytes":6529}
2021-11-09 19:34:48 STATE: Lint: {"locations":["*.json","src/**/*.ts","test/**/*.js","demo/**/*.js"],"files":91,"errors":0,"warnings":0}
2021-11-09 19:34:48 STATE: ChangeLog: {"repository":"https://github.com/vladmandic/human","branch":"main","output":"CHANGELOG.md"}
2021-11-09 19:34:48 INFO:  Done...
2021-11-10 12:12:57 INFO:  @vladmandic/human version 2.5.1
2021-11-10 12:12:57 INFO:  User: vlado Platform: linux Arch: x64 Node: v17.0.1
2021-11-10 12:12:57 INFO:  Application: {"name":"@vladmandic/human","version":"2.5.1"}
2021-11-10 12:12:57 INFO:  Environment: {"profile":"production","config":".build.json","package":"package.json","tsconfig":true,"eslintrc":true,"git":true}
2021-11-10 12:12:57 INFO:  Toolchain: {"build":"0.6.3","esbuild":"0.13.13","typescript":"4.4.4","typedoc":"0.22.8","eslint":"8.2.0"}
2021-11-10 12:12:57 INFO:  Build: {"profile":"production","steps":["clean","compile","typings","typedoc","lint","changelog"]}
2021-11-10 12:12:57 STATE: Clean: {"locations":["dist/*","types/*","typedoc/*"]}
2021-11-10 12:12:57 STATE: Compile: {"name":"tfjs/nodejs/cpu","format":"cjs","platform":"node","input":"tfjs/tf-node.ts","output":"dist/tfjs.esm.js","files":1,"inputBytes":102,"outputBytes":1275}
2021-11-10 12:12:57 STATE: Compile: {"name":"human/nodejs/cpu","format":"cjs","platform":"node","input":"src/human.ts","output":"dist/human.node.js","files":57,"inputBytes":527528,"outputBytes":445576}
2021-11-10 12:12:57 STATE: Compile: {"name":"tfjs/nodejs/gpu","format":"cjs","platform":"node","input":"tfjs/tf-node-gpu.ts","output":"dist/tfjs.esm.js","files":1,"inputBytes":110,"outputBytes":1283}
2021-11-10 12:12:57 STATE: Compile: {"name":"human/nodejs/gpu","format":"cjs","platform":"node","input":"src/human.ts","output":"dist/human.node-gpu.js","files":57,"inputBytes":527536,"outputBytes":445580}
2021-11-10 12:12:57 STATE: Compile: {"name":"tfjs/nodejs/wasm","format":"cjs","platform":"node","input":"tfjs/tf-node-wasm.ts","output":"dist/tfjs.esm.js","files":1,"inputBytes":149,"outputBytes":1350}
2021-11-10 12:12:57 STATE: Compile: {"name":"human/nodejs/wasm","format":"cjs","platform":"node","input":"src/human.ts","output":"dist/human.node-wasm.js","files":57,"inputBytes":527603,"outputBytes":445652}
2021-11-10 12:12:57 STATE: Compile: {"name":"tfjs/browser/version","format":"esm","platform":"browser","input":"tfjs/tf-version.ts","output":"dist/tfjs.version.js","files":1,"inputBytes":1063,"outputBytes":1652}
2021-11-10 12:12:57 STATE: Compile: {"name":"tfjs/browser/esm/nobundle","format":"esm","platform":"browser","input":"tfjs/tf-browser.ts","output":"dist/tfjs.esm.js","files":2,"inputBytes":2326,"outputBytes":912}
2021-11-10 12:12:57 STATE: Compile: {"name":"human/browser/esm/nobundle","format":"esm","platform":"browser","input":"src/human.ts","output":"dist/human.esm-nobundle.js","files":57,"inputBytes":527165,"outputBytes":447674}
2021-11-10 12:12:57 STATE: Compile: {"name":"tfjs/browser/esm/custom","format":"esm","platform":"browser","input":"tfjs/tf-custom.ts","output":"dist/tfjs.esm.js","files":2,"inputBytes":2562703,"outputBytes":2497652}
2021-11-10 12:12:58 STATE: Compile: {"name":"human/browser/iife/bundle","format":"iife","platform":"browser","input":"src/human.ts","output":"dist/human.js","files":57,"inputBytes":3023905,"outputBytes":1614842}
2021-11-10 12:12:58 STATE: Compile: {"name":"human/browser/esm/bundle","format":"esm","platform":"browser","input":"src/human.ts","output":"dist/human.esm.js","files":57,"inputBytes":3023905,"outputBytes":2950895}
2021-11-10 12:13:20 STATE: Typings: {"input":"src/human.ts","output":"types","files":50}
2021-11-10 12:13:27 STATE: TypeDoc: {"input":"src/human.ts","output":"typedoc","objects":49,"generated":true}
2021-11-10 12:13:27 STATE: Compile: {"name":"demo/typescript","format":"esm","platform":"browser","input":"demo/typescript/index.ts","output":"demo/typescript/index.js","files":1,"inputBytes":5801,"outputBytes":3822}
2021-11-10 12:13:27 STATE: Compile: {"name":"demo/facerecognition","format":"esm","platform":"browser","input":"demo/facerecognition/index.ts","output":"demo/facerecognition/index.js","files":1,"inputBytes":8949,"outputBytes":6529}
2021-11-10 12:14:05 STATE: Lint: {"locations":["*.json","src/**/*.ts","test/**/*.js","demo/**/*.js"],"files":90,"errors":0,"warnings":0}
2021-11-10 12:14:06 STATE: ChangeLog: {"repository":"https://github.com/vladmandic/human","branch":"main","output":"CHANGELOG.md"}
2021-11-10 12:14:06 INFO:  Done...

File diff suppressed because it is too large Load Diff

View File

@ -3,7 +3,7 @@ import type { Config } from '../config';
import type { Point } from '../result';
export declare function load(config: Config): Promise<GraphModel>;
export declare const getLeftToRightEyeDepthDifference: (rawCoords: any) => number;
export declare const getEyeBox: (rawCoords: any, face: any, eyeInnerCornerIndex: any, eyeOuterCornerIndex: any, flip: boolean | undefined, meshSize: any) => {
export declare const getEyeBox: (rawCoords: any, face: any, eyeInnerCornerIndex: any, eyeOuterCornerIndex: any, meshSize: any, flip?: boolean) => {
box: {
startPoint: Point;
endPoint: Point;

2
wiki

@ -1 +1 @@
Subproject commit 60b5007b96ddba692561cce29cf03a89d1edc842
Subproject commit 2a937c42e7539b7aa077a9f41085ca573bba7578