updated demos

pull/34/head
Vladimir Mandic 2020-12-08 08:33:00 -05:00
parent b11d5406bc
commit 84f8b2702e
17 changed files with 302 additions and 127 deletions

106
README.md
View File

@ -4,10 +4,7 @@
This is an updated **face-api.js** with the latest available TensorFlow/JS, as the original face-api.js is not compatible with **tfjs 2.0+**.
Forked from **face-api.js** version **0.22.2** released on March 22nd, 2020
- <https://github.com/justadudewhohacks/face-api.js>
- <https://www.npmjs.com/package/face-api.js>
Forked from [face-api.js](https://github.com/justadudewhohacks/face-api.js) version **0.22.2** released on March 22nd, 2020
Currently based on **`TensorFlow/JS` 2.7.0**
@ -18,6 +15,8 @@ And since original Face-API was open-source, I've released this version as well
Unfortunately, the changes ended up being too large for a simple pull request against the original Face-API, so it ended up becoming a full-fledged version on its own
<br>
### Differences
- Compatible with `TensorFlow/JS 2.0+`
@ -41,6 +40,73 @@ Unfortunately, changes ended up being too large for a simple pull request on ori
Which means valid models are **tinyFaceDetector** and **mobileNetv1**
<br>
<hr>
<br>
## Examples
<br>
### Browser
A browser example that uses both models as well as all of the extensions is included in `/example/index.html`
The example can be accessed directly via GitHub Pages at: <https://vladmandic.github.io/face-api/example/>
<br>
*Note: Photos shown below are taken by me*
![alt text](example/screenshot.png)
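A minimal sketch of what the browser example does (assuming `dist/face-api.js` is loaded via a script tag, models are served from `/model`, and an `<img id="photo">` element exists; those names are illustrative, see `/example/index.html` for the actual implementation):
```js
async function run() {
  // load the detector plus the extension models used by the example
  await faceapi.nets.ssdMobilenetv1.loadFromUri('/model');
  await faceapi.nets.faceLandmark68Net.loadFromUri('/model');
  await faceapi.nets.faceExpressionNet.loadFromUri('/model');
  await faceapi.nets.ageGenderNet.loadFromUri('/model');
  await faceapi.nets.faceRecognitionNet.loadFromUri('/model');
  const options = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.1, maxResults: 5 });
  // run detection with all extensions on a static image element
  const results = await faceapi
    .detectAllFaces(document.getElementById('photo'), options)
    .withFaceLandmarks()
    .withFaceExpressions()
    .withFaceDescriptors()
    .withAgeAndGender();
  console.log('Detected faces:', results.length);
}
run();
```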
<br>
### NodeJS
Two NodeJS examples are:
- `/example/node-singleprocess.js`: Regular usage of `FaceAPI` from `NodeJS`
- `/example/node-multiprocess.js`: Multiprocessing showcase that uses a pool of worker processes (`node-multiprocess-worker.js`)
Main starts a fixed pool of worker processes, with each worker having its own instance of `FaceAPI`
Workers notify main when they are ready and main dispatches a job to each ready worker until the job queue is empty
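The message protocol between main and workers is a handful of `process.send` calls; a simplified sketch of the main-side dispatch logic (the full implementation is in the two files listed above):
```js
// simplified sketch: fork one worker, feed it images, let it exit when the queue is empty
const child_process = require('child_process');
const images = ['example/sample (1).jpg', 'example/sample (2).jpg']; // job queue
const worker = child_process.fork('example/node-multiprocess-worker.js');
worker.on('message', (msg) => {
  if (msg.ready) { // worker has loaded its models and is idle
    if (images.length === 0) worker.send({ exit: true }); // nothing left in queue
    else worker.send({ image: images.shift() }); // dispatch next image
  } else if (msg.image) { // worker returned detection results for an image
    console.log('worker finished:', worker.pid, 'detected faces:', msg.detected.length);
  }
});
```
Sample output of a run with two worker processes: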
```json
2020-12-08 08:30:01 INFO: @vladmandic/face-api version 0.9.1
2020-12-08 08:30:01 INFO: User: vlado Platform: linux Arch: x64 Node: v15.0.1
2020-12-08 08:30:01 INFO: FaceAPI multi-process test
2020-12-08 08:30:01 STATE: Main: started worker: 265238
2020-12-08 08:30:01 STATE: Main: started worker: 265244
2020-12-08 08:30:02 STATE: Worker: PID: 265238 TensorFlow/JS 2.7.0 FaceAPI 0.9.1 Backend: tensorflow
2020-12-08 08:30:02 STATE: Worker: PID: 265244 TensorFlow/JS 2.7.0 FaceAPI 0.9.1 Backend: tensorflow
2020-12-08 08:30:02 STATE: Main: dispatching to worker: 265238
2020-12-08 08:30:02 STATE: Main: dispatching to worker: 265244
2020-12-08 08:30:02 DATA: Worker received message: 265238 { image: 'example/sample (1).jpg' }
2020-12-08 08:30:02 DATA: Worker received message: 265244 { image: 'example/sample (2).jpg' }
2020-12-08 08:30:04 DATA: Main: worker finished: 265238 detected faces: 3
2020-12-08 08:30:04 STATE: Main: dispatching to worker: 265238
2020-12-08 08:30:04 DATA: Main: worker finished: 265244 detected faces: 3
2020-12-08 08:30:04 STATE: Main: dispatching to worker: 265244
2020-12-08 08:30:04 DATA: Worker received message: 265238 { image: 'example/sample (3).jpg' }
2020-12-08 08:30:04 DATA: Worker received message: 265244 { image: 'example/sample (4).jpg' }
2020-12-08 08:30:06 DATA: Main: worker finished: 265238 detected faces: 3
2020-12-08 08:30:06 STATE: Main: dispatching to worker: 265238
2020-12-08 08:30:06 DATA: Worker received message: 265238 { image: 'example/sample (5).jpg' }
2020-12-08 08:30:06 DATA: Main: worker finished: 265244 detected faces: 4
2020-12-08 08:30:06 STATE: Main: dispatching to worker: 265244
2020-12-08 08:30:06 DATA: Worker received message: 265244 { image: 'example/sample (6).jpg' }
2020-12-08 08:30:07 DATA: Main: worker finished: 265238 detected faces: 5
2020-12-08 08:30:07 STATE: Main: worker exit: 265238 0
2020-12-08 08:30:08 DATA: Main: worker finished: 265244 detected faces: 4
2020-12-08 08:30:08 INFO: Processed 12 images in 6826 ms
2020-12-08 08:30:08 STATE: Main: worker exit: 265244 0
```
Note that `@tensorflow/tfjs-node` or `@tensorflow/tfjs-node-gpu` must be installed before running the NodeJS examples
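For reference, the load order used by the examples (pick one of the two native backends):
```js
// install one of the native backends first:
//   npm install @tensorflow/tfjs-node       (CPU)
//   npm install @tensorflow/tfjs-node-gpu   (CUDA GPU)
const tf = require('@tensorflow/tfjs-node'); // native TensorFlow bindings
const faceapi = require('@vladmandic/face-api/dist/face-api.node.js'); // or dist/face-api.node-gpu.js for the GPU variant
```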
<br>
<hr>
<br>
## Installation
Face-API ships with several pre-built versions of the library:
@ -68,6 +134,8 @@ Reason for additional `nobundle` version is if you want to include a specific ve
All versions include `sourcemap` and `asset manifest`
<br>
<hr>
<br>
There are several ways to use Face-API:
@ -171,10 +239,16 @@ And then use with:
const faceapi = require('@vladmandic/face-api/dist/face-api.node-gpu.js'); // this loads face-api version with correct bindings for tfjs-node-gpu
```
<br>
<hr>
<br>
## Weights
Pretrained models and their weights are included in `./model`.
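A minimal sketch of loading those weights (mirrors the NodeJS examples; in the browser `loadFromDisk` is replaced by `loadFromUri`):
```js
const path = require('path');
const faceapi = require('@vladmandic/face-api/dist/face-api.node.js');

async function loadModels() {
  const modelPath = path.join(__dirname, 'model'); // points at the ./model folder in this repository
  await faceapi.nets.ssdMobilenetv1.loadFromDisk(modelPath);
  await faceapi.nets.faceLandmark68Net.loadFromDisk(modelPath);
  // load any additional nets (ageGenderNet, faceRecognitionNet, faceExpressionNet) the same way
}
```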
<br>
## Build
If you want to do a full rebuild, either download npm module
@ -218,6 +292,8 @@ npm run build
2020-12-02 16:31:25 STATE: Build for: browserBundle type: esm: { imports: 162, importBytes: 1728673, modules: 576, moduleBytes: 1359851, outputBytes: 1900836, outputFiles: 'dist/face-api.esm.js' }
```
<br>
<hr>
<br>
## Credits & Documentation
@ -225,25 +301,3 @@ npm run build
- Original project and usage documentation: [Face-API](https://github.com/justadudewhohacks/face-api.js)
- Original model weights: [Face-API](https://github.com/justadudewhohacks/face-api.js-models)
- ML API Documentation: [Tensorflow/JS](https://js.tensorflow.org/api/latest/)
<br>
## Example
<br>
### Browser
Example that uses both models as well as all of the extensions is included in `/example/index.html`
Example can be accessed directly using Git pages using URL: <https://vladmandic.github.io/face-api/example/>
<br>
### NodeJS
Example is included in `/example/node.js`
Note that it does not require any other 3rd party libraries
*Note: Photos shown below are taken by me*
![alt text](example/screenshot.png)

File diff suppressed because one or more lines are too long

View File

@ -2060,7 +2060,7 @@
]
},
"package.json": {
"bytes": 1409,
"bytes": 1352,
"imports": []
},
"src/index.ts": {

File diff suppressed because one or more lines are too long

View File

@ -13201,7 +13201,7 @@
]
},
"package.json": {
"bytes": 1409,
"bytes": 1352,
"imports": []
},
"src/index.ts": {

2
dist/face-api.js vendored

File diff suppressed because one or more lines are too long

2
dist/face-api.json vendored
View File

@ -13201,7 +13201,7 @@
]
},
"package.json": {
"bytes": 1409,
"bytes": 1352,
"imports": []
},
"src/index.ts": {

File diff suppressed because one or more lines are too long

View File

@ -2060,7 +2060,7 @@
]
},
"package.json": {
"bytes": 1409,
"bytes": 1352,
"imports": []
},
"src/index.ts": {

File diff suppressed because one or more lines are too long

View File

@ -2060,7 +2060,7 @@
]
},
"package.json": {
"bytes": 1409,
"bytes": 1352,
"imports": []
},
"src/index.ts": {

View File

@ -0,0 +1,67 @@
const fs = require('fs');
const path = require('path');
const log = require('@vladmandic/pilogger');
// workers actually import the tfjs and faceapi modules
const tf = require('@tensorflow/tfjs-node');
const faceapi = require('../dist/face-api.node.js'); // this is equivalent to '@vladmandic/face-api'
// options used by faceapi
const modelPathRoot = '../model';
const minScore = 0.1;
const maxResults = 5;
let optionsSSDMobileNet;
// read image from a file and create tensor to be used by faceapi
// this way we don't need any monkey patches
// you can add any pre-processing here such as resizing, etc.
async function image(img) {
const buffer = fs.readFileSync(img);
const tensor = tf.tidy(() => tf.node.decodeImage(buffer).toFloat().expandDims());
return tensor;
}
// actual faceapi detection
async function detect(img) {
const tensor = await image(img);
const result = await faceapi
.detectAllFaces(tensor, optionsSSDMobileNet)
.withFaceLandmarks()
.withFaceExpressions()
.withFaceDescriptors()
.withAgeAndGender();
process.send({ image: img, detected: result }); // send results back to main
process.send({ ready: true }); // send signal back to main that this worker is now idle and ready for next image
tensor.dispose();
}
async function main() {
// on worker start first initialize message handler so we don't miss any messages
process.on('message', (msg) => {
if (msg.exit) process.exit(); // if main told worker to exit
if (msg.test) process.send({ test: true });
if (msg.image) detect(msg.image); // if main told worker to process image
log.data('Worker received message:', process.pid, msg); // generic log
});
// then initialize tfjs
await faceapi.tf.setBackend('tensorflow');
await faceapi.tf.enableProdMode();
await faceapi.tf.ENV.set('DEBUG', false);
await faceapi.tf.ready();
log.state('Worker: PID:', process.pid, `TensorFlow/JS ${faceapi.tf.version_core} FaceAPI ${faceapi.version.faceapi} Backend: ${faceapi.tf.getBackend()}`);
// and load and initialize faceapi models
const modelPath = path.join(__dirname, modelPathRoot);
await faceapi.nets.ssdMobilenetv1.loadFromDisk(modelPath);
await faceapi.nets.ageGenderNet.loadFromDisk(modelPath);
await faceapi.nets.faceLandmark68Net.loadFromDisk(modelPath);
await faceapi.nets.faceRecognitionNet.loadFromDisk(modelPath);
await faceapi.nets.faceExpressionNet.loadFromDisk(modelPath);
optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence: minScore, maxResults });
// now we're ready, so send message back to main that it knows it can use this worker
process.send({ ready: true });
}
main();

View File

@ -0,0 +1,79 @@
const fs = require('fs');
const path = require('path');
const log = require('@vladmandic/pilogger'); // this is my simple logger with a few extra features
const child_process = require('child_process');
// note that the main process does not import faceapi or tfjs at all
const imgPathRoot = './example'; // modify to include your sample images
const numWorkers = 2; // how many workers will be started
const workers = []; // this holds worker processes
const images = []; // this holds queue of enumerated images
const t = []; // timers
let dir;
// triggered by main when a worker sends a ready message
// if the image queue is empty, signal the worker to exit; otherwise dispatch an image to the worker and remove it from the queue
async function detect(worker) {
if (!t[2]) t[2] = process.hrtime.bigint(); // first time do a timestamp so we can measure initial latency
if (images.length === dir.length) worker.send({ test: true }); // for first image in queue just measure latency
if (images.length === 0) worker.send({ exit: true }); // nothing left in queue
else {
log.state('Main: dispatching to worker:', worker.pid);
worker.send({ image: images[0] });
images.shift();
}
}
// loop that waits for all workers to complete
function waitCompletion() {
const activeWorkers = workers.reduce((any, worker) => (any += worker.connected ? 1 : 0), 0);
if (activeWorkers > 0) setImmediate(() => waitCompletion());
else {
t[1] = process.hrtime.bigint();
log.info('Processed', dir.length, 'images in', Math.trunc(parseInt(t[1] - t[0]) / 1000 / 1000), 'ms');
}
}
function measureLatency() {
t[3] = process.hrtime.bigint();
const latencyInitialization = Math.trunc(parseInt(t[2] - t[0]) / 1000 / 1000);
const latencyRoundTrip = Math.trunc(parseInt(t[3] - t[2]) / 1000 / 1000);
log.info('Latency: worker initialization:', latencyInitialization, 'message round trip:', latencyRoundTrip);
}
async function main() {
log.header();
log.info('FaceAPI multi-process test');
// enumerate all images into queue
dir = fs.readdirSync(imgPathRoot);
for (const imgFile of dir) {
if (imgFile.toLocaleLowerCase().endsWith('.jpg')) images.push(path.join(imgPathRoot, imgFile));
}
t[0] = process.hrtime.bigint();
// manage worker processes
for (let i = 0; i < numWorkers; i++) {
// create worker process
workers[i] = await child_process.fork('example/node-multiprocess-worker.js', ['special']);
// parse message that worker process sends back to main
// if message is ready, dispatch next image in queue
// if message is processing result, just print how many faces were detected
// otherwise it's an unknown message
workers[i].on('message', (msg) => {
if (msg.ready) detect(workers[i]);
else if (msg.image) log.data('Main: worker finished:', workers[i].pid, 'detected faces:', msg.detected.length);
else if (msg.test) measureLatency();
else log.data('Main: worker message:', workers[i].pid, msg);
});
// just log when worker exits
workers[i].on('exit', (msg) => log.state('Main: worker exit:', workers[i].pid, msg));
// just log which worker was started
log.state('Main: started worker:', workers[i].pid);
}
// wait for all workers to complete
waitCompletion();
}
main();

View File

@ -0,0 +1,60 @@
const fs = require('fs');
const path = require('path');
const log = require('@vladmandic/pilogger');
const tf = require('@tensorflow/tfjs-node');
const faceapi = require('../dist/face-api.node.js'); // this is equivalent to '@vladmandic/face-api'
const modelPathRoot = '../model';
const imgPathRoot = './example'; // modify to include your sample images
const minScore = 0.1;
const maxResults = 5;
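// read an image from a file and decode it into a tensor usable by faceapi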
async function image(img) {
const buffer = fs.readFileSync(img);
const decoded = tf.node.decodeImage(buffer);
const casted = decoded.toFloat();
const result = casted.expandDims(0);
decoded.dispose();
casted.dispose();
return result;
}
async function main() {
log.header();
log.info('FaceAPI single-process test');
const t0 = process.hrtime.bigint();
await faceapi.tf.setBackend('tensorflow');
await faceapi.tf.enableProdMode();
await faceapi.tf.ENV.set('DEBUG', false);
await faceapi.tf.ready();
log.state(`Version: TensorFlow/JS ${faceapi.tf?.version_core} FaceAPI ${faceapi.version.faceapi} Backend: ${faceapi.tf?.getBackend()}`);
log.info('Loading FaceAPI models');
const modelPath = path.join(__dirname, modelPathRoot);
await faceapi.nets.ssdMobilenetv1.loadFromDisk(modelPath);
await faceapi.nets.ageGenderNet.loadFromDisk(modelPath);
await faceapi.nets.faceLandmark68Net.loadFromDisk(modelPath);
await faceapi.nets.faceRecognitionNet.loadFromDisk(modelPath);
await faceapi.nets.faceExpressionNet.loadFromDisk(modelPath);
const optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence: minScore, maxResults });
const dir = fs.readdirSync(imgPathRoot);
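// iterate over all jpg images in the sample folder and run detection on each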
for (const img of dir) {
if (!img.toLocaleLowerCase().endsWith('.jpg')) continue;
const tensor = await image(path.join(imgPathRoot, img));
const result = await faceapi
.detectAllFaces(tensor, optionsSSDMobileNet)
.withFaceLandmarks()
.withFaceExpressions()
.withFaceDescriptors()
.withAgeAndGender();
log.data('Image:', img, 'Detected faces:', result.length);
tensor.dispose();
}
const t1 = process.hrtime.bigint();
log.info('Processed', dir.length, 'images in', Math.trunc(parseInt(t1 - t0) / 1000 / 1000), 'ms');
}
main();

View File

@ -1,84 +0,0 @@
process.stderr.write = null; // silly hack to stop tfjs from logging too much to stderr
const fs = require('fs');
const path = require('path');
const tf = require('@tensorflow/tfjs-node');
const faceapi = require('../dist/face-api.node.js');
// if you have module installed, this would be
// const faceapi = require('@vladmandic/face-api');
// configuration options
const modelPathRoot = '../model/'; // path to model folder that will be loaded using http
const imgSize = 512; // maximum image size in pixels
const minScore = 0.1; // minimum score
const maxResults = 5; // maximum number of results to return
const samples = ['sample (1).jpg', 'sample (2).jpg', 'sample (3).jpg', 'sample (4).jpg', 'sample (5).jpg', 'sample (6).jpg']; // sample images to be loaded using http
// helper function to pretty-print json object to string
function str(json) {
const text = json ? JSON.stringify(json).replace(/{|}|"|\[|\]/g, '').replace(/,/g, ', ') : '';
return text;
}
// helper function to print log messages to console
function log(...txt) {
// eslint-disable-next-line no-console
console.log(...txt);
}
async function image(img) {
const buffer = fs.readFileSync(img);
const decoded = tf.node.decodeImage(buffer);
const casted = decoded.toFloat();
const result = casted.expandDims(0);
decoded.dispose();
casted.dispose();
return result;
}
async function main() {
// initialize tfjs
log('FaceAPI Test');
await faceapi.tf.setBackend('tensorflow'); //Sets the backend (cpu, webgl, wasm, tensorflow, etc) responsible for creating tensors and executing operations on those tensors.
await faceapi.tf.enableProdMode();
await faceapi.tf.ENV.set('DEBUG', false);
await faceapi.tf.ready(); //Returns a promise that resolves when the currently selected backend (or the highest priority one) has initialized.
// check version
log(`Version: TensorFlow/JS ${str(faceapi.tf?.version_core || '(not loaded)')} FaceAPI ${str(faceapi?.version || '(not loaded)')} Backend: ${str(faceapi.tf?.getBackend() || '(not loaded)')}`);
log(`Flags: ${JSON.stringify(faceapi.tf.ENV.flags)}`);
// load face-api models
log('Loading FaceAPI models');
const modelPath = path.join(__dirname, modelPathRoot);
await faceapi.nets.ssdMobilenetv1.loadFromDisk(modelPath);
await faceapi.nets.ageGenderNet.loadFromDisk(modelPath);
await faceapi.nets.faceLandmark68Net.loadFromDisk(modelPath);
await faceapi.nets.faceRecognitionNet.loadFromDisk(modelPath);
await faceapi.nets.faceExpressionNet.loadFromDisk(modelPath);
const optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence: minScore, maxResults });
// check tf engine state
const engine = await faceapi.tf.engine();
log(`TF Engine State: ${str(engine.state)}`);
const dir = fs.readdirSync(__dirname);
for (const img of dir) {
if (!img.toLocaleLowerCase().endsWith('.jpg')) continue;
// load image
const tensor = await image(path.join(__dirname, img));
// actual model execution
const result = await faceapi
.detectAllFaces(tensor, optionsSSDMobileNet)
.withFaceLandmarks()
.withFaceExpressions()
.withFaceDescriptors()
.withAgeAndGender();
log('Image:', img, 'Detected faces:', result.length);
// you can access entire result object
// console.log(result);
tensor.dispose();
}
}
main();

3
package-lock.json generated
View File

@ -196,8 +196,7 @@
"@vladmandic/pilogger": {
"version": "0.2.9",
"resolved": "https://registry.npmjs.org/@vladmandic/pilogger/-/pilogger-0.2.9.tgz",
"integrity": "sha512-UaDAFoEJwPw8248u9WQjVexP24wMiglHMWWd4X0gwukZuDw+CkoLddVF8335OYa+pXbP+t/rwx+E50f5rd5IhQ==",
"dev": true
"integrity": "sha512-UaDAFoEJwPw8248u9WQjVexP24wMiglHMWWd4X0gwukZuDw+CkoLddVF8335OYa+pXbP+t/rwx+E50f5rd5IhQ=="
},
"abbrev": {
"version": "1.1.1",

View File

@ -34,19 +34,19 @@
"url": "https://github.com/vladmandic/face-api/issues"
},
"homepage": "https://github.com/vladmandic/face-api#readme",
"Dependencies": {},
"dependencies": {
"@vladmandic/pilogger": "^0.2.9"
},
"devDependencies": {
"@tensorflow/tfjs": "^2.7.0",
"@tensorflow/tfjs-backend-wasm": "^2.7.0",
"@tensorflow/tfjs-node": "^2.7.0",
"@tensorflow/tfjs-node-gpu": "^2.7.0",
"@tensorflow/tfjs-backend-wasm": "^2.7.0",
"@types/node": "^14.14.10",
"@vladmandic/pilogger": "^0.2.9",
"esbuild": "^0.8.17",
"rimraf": "^3.0.2",
"ts-node": "^9.0.0",
"tslib": "^2.0.3",
"typescript": "^4.1.2"
},
"dependencies": {}
}
}