From c4642bde54506afd70a5fc32617414fa84b9fc0e Mon Sep 17 00:00:00 2001
From: Vladimir Mandic
Date: Sat, 25 Sep 2021 09:22:04 -0400
Subject: [PATCH] update model list

---
 Models.md | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/Models.md b/Models.md
index e9d0126..566e9dc 100644
--- a/Models.md
+++ b/Models.md
@@ -39,6 +39,12 @@ Models are not re-trained so any bias included in the original models is present
 `Human` includes implementations for several alternative models which are normally not 1:1 replacement,
 but can be switched on-the-fly due to standardized output implementation
+Switching a model also automatically switches the implementation used inside `Human`, so it is critical to keep
+model filenames in their original form
+
+`Human` includes all default models, while alternative models are kept in a separate repository
+and must be downloaded manually from
+<br>
 **Body detection** can be switched from `PoseNet` to `BlazePose`, `EfficientPose` or `MoveNet` depending on the use case:
@@ -62,6 +68,10 @@ but can be switched on-the-fly due to standardized output implementation
 **Object detection** can be switched from `mb3-centernet` to `nanodet`
+**Hand detection** can be switched from `handdetect` to `handtrack`
+
+**Body Segmentation** can be switched from `selfie` to `meet`
+<br>


 ## List of all models included in Human library
@@ -95,6 +105,7 @@ but can be switched on-the-fly due to standardized output implementation
 | MoveNet-MultiPose | 235K | movenet-thunder.json | 9.1M | movenet-thunder.bin | 303 |
 | Google Selfie | 82K | selfie.json | 208K | selfie.bin | 136 |
 | Hand Tracking | 605K | handtrack.json | 2.9M | handtrack.bin | 619 |
+| GEAR Predictor | 28K | gear.json | 1.5M | gear.bin | 25 |
@@ -104,24 +115,25 @@ but can be switched on-the-fly due to standardized output implementation
 ## Credits
 
-- Face Detection: [**MediaPipe BlazeFace**](https://drive.google.com/file/d/1f39lSzU5Oq-j_OXgS67KfN5wNsoeAZ4V/view)
-- Facial Spacial Geometry: [**MediaPipe FaceMesh**](https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view)
-- Eye Iris Details: [**MediaPipe Iris**](https://drive.google.com/file/d/1bsWbokp9AklH2ANjCfmjqEzzxO1CNbMu/view)
-- Face Description: [**HSE-FaceRes**](https://github.com/HSE-asavchenko/HSE_FaceRec_tf)
-- Hand Detection & Skeleton: [**MediaPipe HandPose**](https://drive.google.com/file/d/1sv4sSb9BSNVZhLzxXJ0jBv9DqD-4jnAz/view)
+- Age & Gender Prediction: [**SSR-Net**](https://github.com/shamangary/SSR-Net)
 - Body Pose Detection: [**BlazePose**](https://drive.google.com/file/d/10IU-DRP2ioSNjKFdiGbmmQX81xAYj88s/view)
-- Body Pose Detection: [**PoseNet**](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5)
-- Body Pose Detection: [**MoveNet**](https://blog.tensorflow.org/2021/05/next-generation-pose-detection-with-movenet-and-tensorflowjs.html)
 - Body Pose Detection: [**EfficientPose**](https://github.com/daniegr/EfficientPose)
+- Body Pose Detection: [**MoveNet**](https://blog.tensorflow.org/2021/05/next-generation-pose-detection-with-movenet-and-tensorflowjs.html)
+- Body Pose Detection: [**PoseNet**](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5)
 - Body Segmentation: [**MediaPipe Meet**](https://drive.google.com/file/d/1lnP1bRi9CSqQQXUHa13159vLELYDgDu0/preview)
 - Body Segmentation: [**MediaPipe Selfie**](https://drive.google.com/file/d/1dCfozqknMa068vVsO2j_1FgZkW_e3VWv/preview)
-- Age & Gender Prediction: [**SSR-Net**](https://github.com/shamangary/SSR-Net)
 - Emotion Prediction: [**Oarriaga**](https://github.com/oarriaga/face_classification)
+- Eye Iris Details: [**MediaPipe Iris**](https://drive.google.com/file/d/1bsWbokp9AklH2ANjCfmjqEzzxO1CNbMu/view)
+- Face Description: [**HSE-FaceRes**](https://github.com/HSE-asavchenko/HSE_FaceRec_tf)
+- Face Detection: [**MediaPipe BlazeFace**](https://drive.google.com/file/d/1f39lSzU5Oq-j_OXgS67KfN5wNsoeAZ4V/view)
 - Face Embedding: [**BecauseofAI MobileFace**](https://github.com/becauseofAI/MobileFace)
-- ObjectDetection: [**NanoDet**](https://github.com/RangiLyu/nanodet)
-- ObjectDetection: [**MB3-CenterNet**](https://github.com/610265158/mobilenetv3_centernet)
+- Facial Spatial Geometry: [**MediaPipe FaceMesh**](https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view)
+- Gender, Emotion, Age, Race Prediction: [**GEAR Predictor**](https://github.com/Udolf15/GEAR-Predictor)
+- Hand Detection & Skeleton: [**MediaPipe HandPose**](https://drive.google.com/file/d/1sv4sSb9BSNVZhLzxXJ0jBv9DqD-4jnAz/view)
 - Hand Tracking: [**HandTracking**](https://github.com/victordibia/handtracking)
 - Image Filters: [**WebGLImageFilter**](https://github.com/phoboslab/WebGLImageFilter)
+- Object Detection: [**MB3-CenterNet**](https://github.com/610265158/mobilenetv3_centernet)
+- Object Detection: [**NanoDet**](https://github.com/RangiLyu/nanodet)
 - Pinto Model Zoo: [**Pinto**](https://github.com/PINTO0309/PINTO_model_zoo)
 
 *Included models are included under license inherited from the original model source*
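
The model switching described in the first two hunks is configuration-only: each `Human` module is pointed at an alternative model file, and because the filename selects the implementation, the file must keep its original name. The sketch below is a minimal illustration of that idea, not the library's documented example; it assumes the `@vladmandic/human` default export, the config keys `modelBasePath`, `body.modelPath`, `object.modelPath`, `hand.detector.modelPath` and `segmentation.modelPath`, and a hypothetical local `./models/` folder holding the manually downloaded alternative models, so verify the exact option names against the library's `Config` type.

```ts
// Minimal sketch: switching Human to alternative models purely via configuration.
// Assumptions: config keys as shown below and a local ./models/ folder containing
// the manually downloaded alternative model files under their original filenames.
import Human from '@vladmandic/human';

const human = new Human({
  modelBasePath: './models/', // location of the manually downloaded alternative models
  body: { enabled: true, modelPath: 'movenet-lightning.json' },       // PoseNet -> MoveNet
  object: { enabled: true, modelPath: 'nanodet.json' },               // MB3-CenterNet -> NanoDet
  hand: { enabled: true, detector: { modelPath: 'handtrack.json' } }, // handdetect -> handtrack
  segmentation: { enabled: true, modelPath: 'meet.json' },            // selfie -> meet
});

async function run(input: HTMLImageElement | HTMLVideoElement): Promise<void> {
  const result = await human.detect(input);
  // Results come back in the standardized shape regardless of which model produced them.
  console.log(result.body[0]?.keypoints, result.hand.length, result.object.length);
}
```

Because the output shape is standardized, code that consumes `result.body`, `result.hand` or `result.object` should not need to change when a model is swapped, which is what makes the on-the-fly switching practical.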