mirror of https://github.com/vladmandic/human
update model list
parent a0497b6d14
commit c4642bde54

Models.md: 32 changed lines
@@ -39,6 +39,12 @@ Models are not re-trained so any bias included in the original models is present

`Human` includes implementations for several alternative models which are normally not a 1:1 replacement,
but can be switched on-the-fly due to the standardized output implementation

Switching a model also automatically switches the implementation used inside `Human`, so it is critical to keep
model filenames in their original form

`Human` includes all default models, while alternative models are kept in a separate repository
and must be downloaded manually from <https://github.com/vladmandic/human-models>
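A minimal sketch of how this can be wired up, assuming the published `@vladmandic/human` package, its `modelBasePath` configuration option, and a hypothetical local path holding a downloaded copy of `human-models`; the filenames under that path must stay exactly as distributed:

```ts
// Minimal sketch, not an official example: assumes the published @vladmandic/human
// package and its modelBasePath configuration option
import { Human } from '@vladmandic/human';

const human = new Human({
  // hypothetical local path to a downloaded copy of https://github.com/vladmandic/human-models;
  // model filenames under this path must keep their original names so Human can
  // select the matching implementation for each model
  modelBasePath: '/assets/human-models/models/',
});

async function init(): Promise<void> {
  await human.load(); // pre-loads all models enabled in the configuration
  console.log('loaded model slots:', Object.keys(human.models)); // inspect what was loaded
}

init().catch((err) => console.error(err));
```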
|
|
<br>

**Body detection** can be switched from `PoseNet` to `BlazePose`, `EfficientPose` or `MoveNet` depending on the use case:
|
@@ -62,6 +68,10 @@ but can be switched on-the-fly due to standardized output implementation
|
|
**Object detection** can be switched from `mb3-centernet` to `nanodet`

**Hand detection** can be switched from `handdetect` to `handtrack`

**Body segmentation** can be switched from `selfie` to `meet`
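A configuration sketch for these switches, assuming the per-module `modelPath` keys (`body.modelPath`, `object.modelPath`, `hand.detector.modelPath`, `segmentation.modelPath`); the filenames are illustrative, so use the exact names from the model list below:

```ts
// Sketch of switching alternative models purely via configuration; the modelPath
// keys and filenames below are assumptions based on the default model naming
import { Human } from '@vladmandic/human';

const human = new Human({
  modelBasePath: '/assets/human-models/models/', // hypothetical path to downloaded models
  body:         { enabled: true, modelPath: 'movenet-lightning.json' },       // MoveNet instead of PoseNet
  object:       { enabled: true, modelPath: 'nanodet.json' },                 // NanoDet instead of MB3-CenterNet
  hand:         { enabled: true, detector: { modelPath: 'handtrack.json' } }, // HandTrack instead of HandDetect
  segmentation: { enabled: true, modelPath: 'meet.json' },                    // Meet instead of Selfie
});

async function run(input: HTMLImageElement | HTMLVideoElement): Promise<void> {
  const result = await human.detect(input);
  // the standardized result schema does not change when a model is swapped
  console.log(result.body.length, result.object.length, result.hand.length);
}
```

Because each module maps its raw model output onto the same result schema, downstream code reading `result.body`, `result.object` or `result.hand` stays unchanged when a model is swapped.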
<br><hr><br>

## List of all models included in Human library
|
@@ -95,6 +105,7 @@ but can be switched on-the-fly due to standardized output implementation

| MoveNet-MultiPose | 235K | movenet-thunder.json | 9.1M | movenet-thunder.bin | 303 |
| Google Selfie | 82K | selfie.json | 208K | selfie.bin | 136 |
| Hand Tracking | 605K | handtrack.json | 2.9M | handtrack.bin | 619 |
| GEAR Predictor | 28K | gear.json | 1.5M | gear.bin | 25 |

<br>
|
@@ -104,24 +115,25 @@ but can be switched on-the-fly due to standardized output implementation

## Credits

- Age & Gender Prediction: [**SSR-Net**](https://github.com/shamangary/SSR-Net)
- Body Pose Detection: [**BlazePose**](https://drive.google.com/file/d/10IU-DRP2ioSNjKFdiGbmmQX81xAYj88s/view)
- Body Pose Detection: [**EfficientPose**](https://github.com/daniegr/EfficientPose)
- Body Pose Detection: [**MoveNet**](https://blog.tensorflow.org/2021/05/next-generation-pose-detection-with-movenet-and-tensorflowjs.html)
- Body Pose Detection: [**PoseNet**](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5)
- Body Segmentation: [**MediaPipe Meet**](https://drive.google.com/file/d/1lnP1bRi9CSqQQXUHa13159vLELYDgDu0/preview)
- Body Segmentation: [**MediaPipe Selfie**](https://drive.google.com/file/d/1dCfozqknMa068vVsO2j_1FgZkW_e3VWv/preview)
- Emotion Prediction: [**Oarriaga**](https://github.com/oarriaga/face_classification)
- Eye Iris Details: [**MediaPipe Iris**](https://drive.google.com/file/d/1bsWbokp9AklH2ANjCfmjqEzzxO1CNbMu/view)
- Face Description: [**HSE-FaceRes**](https://github.com/HSE-asavchenko/HSE_FaceRec_tf)
- Face Detection: [**MediaPipe BlazeFace**](https://drive.google.com/file/d/1f39lSzU5Oq-j_OXgS67KfN5wNsoeAZ4V/view)
- Face Embedding: [**BecauseofAI MobileFace**](https://github.com/becauseofAI/MobileFace)
- Facial Spatial Geometry: [**MediaPipe FaceMesh**](https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view)
- Gender, Emotion, Age, Race Prediction: [**GEAR Predictor**](https://github.com/Udolf15/GEAR-Predictor)
- Hand Detection & Skeleton: [**MediaPipe HandPose**](https://drive.google.com/file/d/1sv4sSb9BSNVZhLzxXJ0jBv9DqD-4jnAz/view)
- Hand Tracking: [**HandTracking**](https://github.com/victordibia/handtracking)
- Image Filters: [**WebGLImageFilter**](https://github.com/phoboslab/WebGLImageFilter)
- Object Detection: [**MB3-CenterNet**](https://github.com/610265158/mobilenetv3_centernet)
- Object Detection: [**NanoDet**](https://github.com/RangiLyu/nanodet)
- Pinto Model Zoo: [**Pinto**](https://github.com/PINTO0309/PINTO_model_zoo)
|
|
*Included models are provided under the license inherited from their original model source*
|
|