diff --git a/Configuration.md b/Configuration.md
index cd05ba7..a0c8f41 100644
--- a/Configuration.md
+++ b/Configuration.md
@@ -156,7 +156,7 @@ config = {
embedding: {
enabled: false, // to improve accuracy of face embedding extraction it is recommended
// to enable detector.rotation and mesh.enabled
- modelPath: '../models/mobilefacenet.json',
+ modelPath: '../models/mobileface.json',
},
},
diff --git a/Credits.md b/Credits.md
index 884bb89..d2c1077 100644
--- a/Credits.md
+++ b/Credits.md
@@ -8,7 +8,7 @@
- Body Pose Detection: [**PoseNet**](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5)
- Age & Gender Prediction: [**SSR-Net**](https://github.com/shamangary/SSR-Net)
- Emotion Prediction: [**Oarriaga**](https://github.com/oarriaga/face_classification)
-- Face Embedding: [**Sirius-AI MobileFaceNet**](https://github.com/sirius-ai/MobileFaceNet_TF)
+- Face Embedding: [**BecauseofAI MobileFace**](https://github.com/becauseofAI/MobileFace)
- Image Filters: [**WebGLImageFilter**](https://github.com/phoboslab/WebGLImageFilter)
- Pinto Model Zoo: [**Pinto**](https://github.com/PINTO0309/PINTO_model_zoo)
diff --git a/Demos.md b/Demos.md
index f87c285..9d77969 100644
--- a/Demos.md
+++ b/Demos.md
@@ -11,6 +11,8 @@ Demos are included in `/demo`:
-*You can run browser demo either live from git pages, by serving demo folder from your web server or use
-included micro http2 server with source file monitoring and dynamic rebuild*
+*You can run the browser demo live from git pages, by serving the demo folder from your web server, or by using the
+included micro http2 server with source file monitoring and dynamic rebuild*
+For notes on how to use the built-in micro server, see [**Development Server**](https://github.com/vladmandic/human/wiki/Development-Server)
+
### Changing Demo Target
@@ -116,3 +118,14 @@ node demo/node.js
2021-03-06 10:28:54 DATA: Gesture: [ { body: 0, gesture: 'leaning right' }, [length]: 1 ]
10:28:54.968 Human: Warmup full 621 ms
```
+
+
+
+## Face Recognition Demo
+
+`Human` includes an additional browser-based demo that enumerates a number of images,
+extracts all faces from them, processes them and then allows
+selecting any face, which sorts all faces by similarity
+
+Demo is available in `demo/embedding.html`, which uses `demo/embedding.js` as a JavaScript module,
+and can be hosted independently or accessed using the built-in dev server
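The demo's final step, ordering all extracted faces by how similar they are to the selected one, could be sketched as below (a hypothetical sketch; the `faces` array with precomputed `similarity` scores stands in for the values returned by `human.simmilarity`):

```js
// Hypothetical sketch: order faces most-similar-first, given similarity scores
// (in the real demo, scores come from comparing embedding vectors via human.simmilarity)
function sortBySimilarity(faces) {
  return [...faces].sort((a, b) => b.similarity - a.similarity);
}

// illustrative data: image names and similarity scores relative to a selected face
const faces = [
  { image: 'img2.jpg', similarity: 0.31 },
  { image: 'img1.jpg', similarity: 0.92 },
  { image: 'img3.jpg', similarity: 0.77 },
];
const sorted = sortBySimilarity(faces);
```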
diff --git a/Embedding.md b/Embedding.md
index 8ec9c01..ad54038 100644
--- a/Embedding.md
+++ b/Embedding.md
@@ -2,22 +2,32 @@
+## Usage
+
-To use face simmilaity compare feature, you must first enable `face.embedding` module
-and calculate embedding vectors for both first and second image you want to compare.
+To use the face similarity compare feature, you must first enable the `face.embedding` module
+and calculate embedding vectors for both the first and second image you want to compare
+
+To achieve quality results, it is also highly recommended to have `face.mesh` and `face.detector.rotation`
+enabled, as calculating feature vectors on low-quality inputs can lead to false results
For example,
```js
-const myConfig = { face: { embedding: true }};
+const myConfig = {
+ face: {
+ enabled: true,
+ detector: { rotation: true, return: true },
+ mesh: { enabled: true },
+ embedding: { enabled: true },
+ },
+};
+
const human = new Human(myConfig);
const firstResult = await human.detect(firstImage);
const secondResult = await human.detect(secondImage);
-const firstEmbedding = firstResult.face[0].embedding;
-const secondEmbedding = secondResult.face[0].embedding;
-
-const simmilarity = human.simmilarity(firstEmbedding, secondEmbedding);
+const simmilarity = human.simmilarity(firstResult.face[0].embedding, secondResult.face[0].embedding);
-console.log(`faces are ${100 * simmilarity}% simmilar`);
+console.log(`faces are ${100 * simmilarity}% similar`);
```
@@ -32,7 +42,20 @@ for (let i = 0; i < secondResult.face.length; i++) {
}
```
-Embedding vectors are calulated values uniquely identifying a given face and presented as array of 192 float values
+An additional helper function is `human.enhance(face)`, which returns an enhanced tensor
+of a face image that can be further visualized with
+
+```js
+ const enhanced = human.enhance(face);
+ const canvas = document.getElementById('orig');
+ human.tf.browser.toPixels(enhanced.squeeze(), canvas);
+```
+
+
+
+## Embedding Vectors
+
+Embedding vectors are calculated feature values that uniquely identify a given face, presented as an array of 256 float values
They can be stored as normal arrays and reused as needed
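For example, since embeddings are plain numeric arrays, they can be serialized and restored with standard JSON (a minimal sketch; the `db` shape and labels are illustrative, not part of the `Human` API):

```js
// Hypothetical sketch: persist embeddings as JSON and restore them later.
// An embedding is just an array of floats, so JSON round-trips it losslessly.
const db = [
  { label: 'person-a', embedding: [0.1, 0.2, 0.3] }, // real vectors have 256 values
];

const serialized = JSON.stringify(db);
// ...store `serialized` in a file, localStorage, or a database...

const restored = JSON.parse(serialized);
```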
@@ -40,10 +63,42 @@ Simmilarity function is based on *Eucilidean distance* between all points in vec
-*Eucliean distance is limited case of Minkowski distance with order of 2*
-*[Minkowski distance](https://en.wikipedia.org/wiki/Minkowski_distance) is a nth root of sum of nth powers of distances between each point (each value in 192-member array)*
+*Euclidean distance is a limited case of Minkowski distance with order of 2*
+*[Minkowski distance](https://en.wikipedia.org/wiki/Minkowski_distance) is the nth root of the sum of nth powers of distances between each point (each value in the 256-member array)*
-Changing `order` can make simmilarity matching more or less sensitive:
+Changing `order` can make similarity matching more or less sensitive (the default order is 2)
+For example, these will produce slightly different results:
+
+```js
+ const simmilarity2ndOrder = human.simmilarity(firstEmbedding, secondEmbedding, 2);
+  const simmilarity3rdOrder = human.simmilarity(firstEmbedding, secondEmbedding, 3);
+```
+
+How simmilarity is calculated:
```js
-const distance = ((firstEmbedding.map((val, i) => (val - secondEmbedding[i])).reduce((dist, diff) => dist + (diff ** order), 0) ** (1 / order)));
+const distance = ((firstEmbedding.map((val, i) => (val - secondEmbedding[i])).reduce((dist, diff) => dist + (Math.abs(diff) ** order), 0) ** (1 / order)));
```
-*Once embedding values are calculated and stored, if you want to use stored embedding values without requiring `Human` library you can use above formula to calculate simmilarity on the fly.*
+*Once embedding values are calculated and stored, if you want to use them without requiring the `Human` library, you can use the above formula to calculate similarity on the fly.*
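A standalone helper following that formula might look like this (the mapping from distance to a 0..1 similarity score at the end is an illustrative assumption; the exact scaling used by `Human` may differ):

```js
// Minkowski distance between two stored embedding vectors, per the formula above
function embeddingDistance(first, second, order = 2) {
  return first
    .map((val, i) => Math.abs(val - second[i]) ** order)
    .reduce((dist, diff) => dist + diff, 0) ** (1 / order);
}

// illustrative similarity score: identical vectors give 1, larger distances approach 0
function similarityScore(first, second, order = 2) {
  return 1 / (1 + embeddingDistance(first, second, order));
}
```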
+
+
+
+## Face Image Pre-processing
+
+To achieve optimal results, `Human` performs the following operations on an image before calculating the feature vector (embedding):
+
+- Crop to face
+- Find rough face angle and straighten face
+- Detect mesh
+- Find precise face angle and again straighten face
+- Crop again with narrower margins
+- Convert image to grayscale to avoid impact of different colorizations
+- Normalize brightness to common range for all images
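The last two steps could be sketched in plain JavaScript over RGBA pixel data (an illustrative sketch only; `Human`'s actual implementation operates on tensors):

```js
// Hypothetical sketch: convert RGBA pixel data to grayscale, then stretch
// brightness to the full 0..255 range so all images share a common scale
function preprocess(rgba) {
  // luminance-weighted grayscale, one value per pixel
  const gray = [];
  for (let i = 0; i < rgba.length; i += 4) {
    gray.push(0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2]);
  }
  // normalize brightness to a common range for all images
  const min = Math.min(...gray);
  const max = Math.max(...gray);
  const range = (max - min) || 1; // avoid division by zero on flat images
  return gray.map((v) => Math.round(255 * (v - min) / range));
}
```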
+
+
+
+## Demo
+
+`Human` contains a demo that enumerates a number of images,
+extracts all faces from them, processes them and then allows
+selecting any face, which sorts all faces by similarity
+
+Demo is available in `demo/embedding.html`, which uses `demo/embedding.js` as a JavaScript module
diff --git a/Home.md b/Home.md
index bd08de3..02b955b 100644
--- a/Home.md
+++ b/Home.md
@@ -61,7 +61,7 @@ Default models in Human library are:
- **Gender Detection**: Oarriaga Gender
- **Age Detection**: SSR-Net Age IMDB
- **Body Analysis**: PoseNet
-- **Face Embedding**: Sirius-AI MobileFaceNet Embedding
+- **Face Embedding**: BecauseofAI MobileFace Embedding
Note that alternative models are provided and can be enabled via configuration
For example, `PoseNet` model can be switched for `BlazePose` model depending on the use case
diff --git a/Models.md b/Models.md
index 41befb2..c7aa3b9 100644
--- a/Models.md
+++ b/Models.md
@@ -11,7 +11,7 @@ Default models in Human library are:
- **Gender Detection**: Oarriaga Gender
- **Age Detection**: SSR-Net Age IMDB
- **Body Analysis**: PoseNet
-- **Face Embedding**: Sirius-AI MobileFaceNet Embedding
+- **Face Embedding**: BecauseofAI MobileFace Embedding
## Notes
@@ -48,6 +48,7 @@ Default models in Human library are:
| MediaPipe HandPose (HandDetect) | 126K | handdetect.json | 6.8M | handdetect.bin | 152 |
| MediaPipe HandPose (HandSkeleton) | 127K | handskeleton.json | 5.3M | handskeleton.bin | 145 |
| Sirius-AI MobileFaceNet | 125K | mobilefacenet.json | 5.0M | mobilefacenet.bin | 139 |
+| BecauseofAI MobileFace | 33K | mobileface.json | 2.1M | mobileface.bin | 75 |
| FaceBoxes | 212K | faceboxes.json | 2.0M | faceboxes.bin | N/A |
diff --git a/Outputs.md b/Outputs.md
index de906f0..6f08d04 100644
--- a/Outputs.md
+++ b/Outputs.md
@@ -11,21 +11,20 @@ result = {
confidence, // returns faceConfidence if exists, otherwise boxConfidence
faceConfidence // confidence in detection box after running mesh
boxConfidence // confidence in detection box before running mesh
- box, //
- rawBox, // normalized values for box
- mesh, // 468 base points & 10 iris points
- rawMesh, // normalized values for box
+ box, // normalized to input image size
+ boxRaw, // normalized to range of 0..1
+ mesh, // 468 base points & 10 iris points, normalized to input image size
+ meshRaw, // 468 base points & 10 iris points, normalized to range of 0..1
annotations, // 32 base annotated landmarks & 2 iris annotations
-iris, // relative distance of iris to camera, multiple by focal lenght to get actual distance
+iris, // relative distance of iris to camera, multiply by focal length to get actual distance
age, // estimated age
gender, // 'male', 'female'
-embedding, // [float] vector of 192 values used for face simmilarity compare
+embedding, // [float] vector of 256 values used for face similarity compare
- angle: // 3d face rotation values in radians in range of -pi/2 to pi/2 which is -90 to +90 degrees
- // value of 0 means center
+ angle: // 3d face rotation values in radians in range of -pi/2 to pi/2 which is -90 to +90 degrees
{
- roll, // roll is face lean left/right
- yaw, // yaw is face turn left/right
- pitch, // pitch is face move up/down
+ roll, // roll is face lean left/right, value of 0 means center
+ yaw, // yaw is face turn left/right, value of 0 means center
+ pitch, // pitch is face move up/down, value of 0 means center
}
emotion: //
[
@@ -34,6 +33,8 @@ result = {
emotion, // 'angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral'
}
],
+ tensor: // if config.face.detector.return is set to true, detector will
+ // return a raw tensor containing cropped image of a face
}
],
body: //
diff --git a/Usage.md b/Usage.md
index 08e787a..6420d28 100644
--- a/Usage.md
+++ b/Usage.md
@@ -40,6 +40,7 @@ Additionally, `Human` library exposes several objects and methods:
human.simmilarity(embedding1, embedding2) // runs simmilarity calculation between two provided embedding vectors
// vectors for source and target must be previously detected using
// face.embedding module
+ human.enhance(face) // returns enhanced tensor of a previously detected face that can be used for visualizations
human.models // dynamically maintained list of object of any loaded models
human.classes // dynamically maintained list of classes that perform detection on each model
```