new samples gallery and major code folder restructure

## Changelog

### **HEAD -> main** 2021/09/24 mandic00@live.com

- new release

### **2.2.3** 2021/09/24 mandic00@live.com

- optimize model loading

### **origin/main** 2021/09/23 mandic00@live.com

- support segmentation for nodejs
- redo segmentation and handtracking
- prototype handtracking

## README.md (61 changed lines)

```diff
@@ -42,6 +42,7 @@ Check out [**Live Demo**](https://vladmandic.github.io/human/demo/index.html) ap
 - [*Live:* **Face Extraction and 3D Rendering**](https://vladmandic.github.io/human/demo/face3d/index.html)
 - [*Live:* **Multithreaded Detection Showcasing Maximum Performance**](https://vladmandic.github.io/human/demo/multithread/index.html)
 - [*Live:* **VR Model with Head, Face, Eye, Body and Hand tracking**](https://vladmandic.github.io/human-vrm/src/human-vrm.html)
+- [Examples gallery](https://vladmandic.github.io/human/samples/samples.html)

 ## Project pages

@@ -75,6 +76,7 @@ Check out [**Live Demo**](https://vladmandic.github.io/human/demo/index.html) ap
 - [**Platform Support**](https://github.com/vladmandic/human/wiki/Platforms)
 - [**Diagnostic and Performance trace information**](https://github.com/vladmandic/human/wiki/Diag)
 - [**List of Models & Credits**](https://github.com/vladmandic/human/wiki/Models)
+- [**Models Download Repository**](https://github.com/vladmandic/human-models)
 - [**Security & Privacy Policy**](https://github.com/vladmandic/human/blob/main/SECURITY.md)
 - [**License & Usage Restrictions**](https://github.com/vladmandic/human/blob/main/LICENSE)

@@ -86,6 +88,15 @@ Check out [**Live Demo**](https://vladmandic.github.io/human/demo/index.html) ap

 <hr><br>

+## Examples
+
+Visit [Examples gallery](https://vladmandic.github.io/human/samples/samples.html) for more examples
+<https://vladmandic.github.io/human/samples/samples.html>
+
+![samples](https://github.com/vladmandic/human/raw/main/assets/samples.jpg)
+
+<br>
+
 ## Options

 All options as presented in the demo application...
```

```diff
@@ -95,52 +106,15 @@ All options as presented in the demo application...

 <br>

-## Examples
-
-<br>
-
-**Face Close-up:**
-![Face](https://github.com/vladmandic/human/raw/main/assets/screenshot-face.jpg)
-
-<br>
-
-**Face under a high angle:**
-![Angle](https://github.com/vladmandic/human/raw/main/assets/screenshot-angle.jpg)
-
-<br>
-
-**Full Person Details:**
-![Person](https://github.com/vladmandic/human/raw/main/assets/screenshot-person.jpg)
-
-<br>
-
-**Pose Detection:**
-![Pose](https://github.com/vladmandic/human/raw/main/assets/screenshot-pose.jpg)
-
-<br>
-
-**Body Segmentation and Background Replacement:**
-![Segmentation](https://github.com/vladmandic/human/raw/main/assets/screenshot-segmentation.jpg)
-
-<br>
-
-**Large Group:**
-![Group](https://github.com/vladmandic/human/raw/main/assets/screenshot-group.jpg)
-
-<br>
-
-**VR Model Tracking:**
-![VRM](https://github.com/vladmandic/human/raw/main/assets/screenshot-vrm.jpg)
-
-<br>
-
 **Results Browser:**
 [ *Demo -> Display -> Show Results* ]<br>
 ![Results](https://github.com/vladmandic/human/raw/main/assets/screenshot-results.jpg)

 <br>

-**Face Similarity Matching:**
+## Advanced Examples
+
+1. **Face Similarity Matching:**
 Extracts all faces from provided input images,
 sorts them by similarity to selected face
 and optionally matches detected face with database of known people to guess their names
```
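
The face matching flow described above reduces to ranking stored descriptors against a selected one. A minimal sketch of that ranking, assuming hypothetical `faces` records kept from prior `human.detect()` calls (each with its `embedding`) and using the `similarity` method from `src/human.ts` shown later in this diff:

```js
// rank previously detected faces by similarity to one selected descriptor;
// `selected` and `faces` are hypothetical inputs kept from earlier detections
function rankBySimilarity(human, selected, faces) {
  return faces
    .map((face) => ({ face, score: human.similarity(selected, face.embedding) })) // 0..1, higher is more similar
    .sort((a, b) => b.score - a.score); // most similar first
}
```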

```diff
@@ -150,13 +124,18 @@ and optionally matches detected face with database of known people to guess thei

 <br>

-**Face3D OpenGL Rendering:**
+2. **Face3D OpenGL Rendering:**
 > [demo/face3d](demo/face3d/index.html)

 ![Face3D](https://github.com/vladmandic/human/raw/main/assets/screenshot-face3d.jpg)

 <br>

+3. **VR Model Tracking:**
+![VRM](https://github.com/vladmandic/human/raw/main/assets/screenshot-vrm.jpg)
+
+<br>
+
 **468-Point Face Mesh Details:**
 (view in full resolution to see keypoints)
```

[binary: 1 image added (297 KiB)]

```diff
@@ -12,7 +12,7 @@ const Human = require('../../dist/human.node.js'); // this is 'const Human = req
 const config = { // just enable all and leave default settings
   debug: false,
   face: { enabled: true }, // includes mesh, iris, emotion, descriptor
-  hand: { enabled: true },
+  hand: { enabled: true, maxDetected: 2, minConfidence: 0.5, detector: { modelPath: 'handtrack.json' } }, // use alternative hand model
   body: { enabled: true },
   object: { enabled: true },
   gestures: { enabled: true },
```
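
Since the demo now defaults to the alternative `handtrack` detector, the same switch can also be made per call. A minimal sketch, assuming an existing `human` instance and an `input` image, using the optional per-call config override accepted by `detect`:

```js
// one-off detection with the alternative hand model, without editing the base config
const result = await human.detect(input, { hand: { enabled: true, detector: { modelPath: 'handtrack.json' } } });
console.log('hands detected:', result.hand.length);
```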

```diff
@@ -66,14 +66,14 @@
     "@tensorflow/tfjs-layers": "^3.9.0",
     "@tensorflow/tfjs-node": "^3.9.0",
     "@tensorflow/tfjs-node-gpu": "^3.9.0",
-    "@types/node": "^16.9.6",
+    "@types/node": "^16.10.1",
     "@typescript-eslint/eslint-plugin": "^4.31.2",
     "@typescript-eslint/parser": "^4.31.2",
     "@vladmandic/build": "^0.5.3",
     "@vladmandic/pilogger": "^0.3.3",
     "canvas": "^2.8.0",
     "dayjs": "^1.10.7",
-    "esbuild": "^0.13.0",
+    "esbuild": "^0.13.2",
     "eslint": "^7.32.0",
     "eslint-config-airbnb-base": "^14.2.1",
     "eslint-plugin-import": "^2.24.2",
```

````diff
@@ -2,3 +2,11 @@

 Sample Images used by `Human` library demos and automated tests
 Not required for normal functioning of library
+
+Samples were generated using default configuration without any fine-tuning using command:
+
+```shell
+node test/test-node-canvas.js samples/in/ samples/out/
+```
+
+Samples gallery viewer: <https://vladmandic.github.io/human/samples/samples.html>
````

[binary image changes: 10 images modified or renamed, 18 added, 6 removed]

New file (57 lines): the samples gallery viewer page

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <title>Human Examples Gallery</title>
  <meta http-equiv="content-type" content="text/html; charset=utf-8">
  <meta name="viewport" content="width=device-width, shrink-to-fit=yes">
  <meta name="keywords" content="Human">
  <meta name="application-name" content="Human">
  <meta name="description" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
  <meta name="msapplication-tooltip" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
  <meta name="theme-color" content="#000000">
  <link rel="manifest" href="../manifest.webmanifest">
  <link rel="shortcut icon" href="../../favicon.ico" type="image/x-icon">
  <link rel="apple-touch-icon" href="../../assets/icon.png">
  <style>
    @font-face { font-family: 'Lato'; font-display: swap; font-style: normal; font-weight: 100; src: local('Lato'), url('../../assets/lato-light.woff2') }
    html { font-family: 'Lato', 'Segoe UI'; font-size: 24px; font-variant: small-caps; }
    body { margin: 24px; background: black; color: white; overflow-x: hidden; overflow-y: auto; text-align: -webkit-center; min-height: 100%; max-height: 100%; }
    ::-webkit-scrollbar { height: 8px; border: 0; border-radius: 0; }
    ::-webkit-scrollbar-thumb { background: grey }
    ::-webkit-scrollbar-track { margin: 3px; }
    .text { margin: 24px }
    .strip { display: flex; width: 100%; overflow: auto; }
    .thumb { height: 150px; margin: 2px; padding: 2px; }
    .thumb:hover { filter: grayscale(1); background: white; }
    .image-container { margin: 24px 3px 3px 3px }
    .image { max-width: -webkit-fill-available; }
  </style>
</head>
<body>
  <div class="text">Human Examples Gallery</div>
  <div id="strip" class="strip"></div>
  <div class="image-container">
    <img id="image" src="data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=" alt="" class="image" />
  </div>
  <script>
    const samples = [
      'ai-body.jpg', 'ai-upper.jpg',
      'person-vlado.jpg', 'person-linda.jpg', 'person-celeste.jpg', 'person-tetiana.jpg',
      'group-1.jpg', 'group-2.jpg', 'group-3.jpg', 'group-4.jpg', 'group-5.jpg', 'group-6.jpg', 'group-7.jpg',
      'daz3d-brianna.jpg', 'daz3d-chiyo.jpg', 'daz3d-cody.jpg', 'daz3d-drew-01.jpg', 'daz3d-drew-02.jpg', 'daz3d-ella-01.jpg', 'daz3d-ella-02.jpg', 'daz3d-gillian.jpg',
      'daz3d-hye-01.jpg', 'daz3d-hye-02.jpg', 'daz3d-kaia.jpg', 'daz3d-karen.jpg', 'daz3d-kiaria-01.jpg', 'daz3d-kiaria-02.jpg', 'daz3d-lilah-01.jpg', 'daz3d-lilah-02.jpg',
      'daz3d-lilah-03.jpg', 'daz3d-lila.jpg', 'daz3d-lindsey.jpg', 'daz3d-megah.jpg', 'daz3d-selina-01.jpg', 'daz3d-selina-02.jpg', 'daz3d-snow.jpg',
      'daz3d-sunshine.jpg', 'daz3d-taia.jpg', 'daz3d-tuesday-01.jpg', 'daz3d-tuesday-02.jpg', 'daz3d-tuesday-03.jpg', 'daz3d-zoe.jpg', 'daz3d-ginnifer.jpg',
      'daz3d-_emotions01.jpg', 'daz3d-_emotions02.jpg', 'daz3d-_emotions03.jpg', 'daz3d-_emotions04.jpg', 'daz3d-_emotions05.jpg',
    ];
    const image = document.getElementById('image');
    for (const sample of samples) {
      const el = document.createElement('img'); // one thumbnail per input sample
      el.className = 'thumb';
      el.src = el.title = el.alt = `/samples/in/${sample}`;
      el.addEventListener('click', () => image.src = image.alt = image.title = el.src.replace('/in/', '/out/')); // clicking a thumbnail shows the processed output image
      document.getElementById('strip')?.appendChild(el);
    }
  </script>
</body>
</html>
```

```diff
@@ -1,3 +1,8 @@
+/**
+ * BlazeFace, FaceMesh & Iris model implementation
+ * See `facemesh.ts` for entry point
+ */
+
 export const MESH_ANNOTATIONS = {
   silhouette: [
     10, 338, 297, 332, 284, 251, 389, 356, 454, 323, 361, 288,
```
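
The `MESH_ANNOTATIONS` map above groups the 468 face-mesh keypoint indices by facial region. A minimal sketch of reading one region out of a detection result, assuming a prior `human.detect()` call returned at least one face with mesh enabled:

```js
// hypothetical usage of the annotation map from the module above
const mesh = result.face[0].mesh; // array of [x, y, z] keypoints
const silhouette = MESH_ANNOTATIONS.silhouette.map((index) => mesh[index]); // pick face-outline points
console.log('silhouette keypoints:', silhouette.length);
```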

```diff
@@ -3,7 +3,7 @@
 */

 import { TRI468 as triangulation } from './blazeface/coords';
-import { mergeDeep, now } from './helpers';
+import { mergeDeep, now } from './util';
 import type { Result, FaceResult, BodyResult, HandResult, ObjectResult, GestureResult, PersonResult } from './result';

 /**
```

```diff
@@ -1,8 +1,10 @@
 /**
- * EfficientPose Module
+ * EfficientPose model implementation
+ *
+ * Based on: [**EfficientPose**](https://github.com/daniegr/EfficientPose)
 */

-import { log, join } from '../helpers';
+import { log, join } from '../util';
 import * as tf from '../../dist/tfjs.esm.js';
 import type { BodyResult } from '../result';
 import type { GraphModel, Tensor } from '../tfjs/types';
```

```diff
@@ -1,8 +1,10 @@
 /**
- * Emotion Module
+ * Emotion model implementation
+ *
+ * [**Oarriaga**](https://github.com/oarriaga/face_classification)
 */

-import { log, join } from '../helpers';
+import { log, join } from '../util';
 import type { Config } from '../config';
 import type { GraphModel, Tensor } from '../tfjs/types';
 import * as tf from '../../dist/tfjs.esm.js';
```

```diff
@@ -1,6 +1,6 @@
 import * as tf from '../dist/tfjs.esm.js';
 import * as image from './image/image';
-import { mergeDeep } from './helpers';
+import { mergeDeep } from './util';

 export type Env = {
   browser: undefined | boolean,
```

```diff
@@ -1,9 +1,9 @@
 /**
- * Module that analyzes person age
- * Obsolete
+ * Face algorithm implementation
+ * Uses FaceMesh, Emotion and FaceRes models to create a unified pipeline
 */

-import { log, now } from './helpers';
+import { log, now } from './util';
 import * as tf from '../dist/tfjs.esm.js';
 import * as facemesh from './blazeface/facemesh';
 import * as emotion from './emotion/emotion';
```

```diff
@@ -1,10 +1,13 @@
 /**
- * HSE-FaceRes Module
+ * FaceRes model implementation
+ *
 * Returns Age, Gender, Descriptor
 * Implements Face similarity function
+ *
+ * Based on: [**HSE-FaceRes**](https://github.com/HSE-asavchenko/HSE_FaceRec_tf)
 */

-import { log, join } from '../helpers';
+import { log, join } from '../util';
 import * as tf from '../../dist/tfjs.esm.js';
 import type { Tensor, GraphModel } from '../tfjs/types';
 import type { Config } from '../config';
```

```diff
@@ -1,3 +1,8 @@
+/**
+ * FingerPose algorithm implementation
+ * See `fingerpose.ts` for entry point
+ */
+
 import { Finger, FingerCurl, FingerDirection } from './description';

 const options = {
```

```diff
@@ -1,3 +1,8 @@
+/**
+ * FingerPose algorithm implementation
+ * See `fingerpose.ts` for entry point
+ */
+
 export default class Gesture {
   name;
   curls;
```

```diff
@@ -1,3 +1,8 @@
+/**
+ * FingerPose algorithm implementation
+ * See `fingerpose.ts` for entry point
+ */
+
 import { Finger, FingerCurl, FingerDirection } from './description';
 import Gesture from './gesture';

```

```diff
@@ -1,5 +1,5 @@
 /**
- * Gesture detection module
+ * Gesture detection algorithm
 */

 import type { GestureResult } from '../result';
```

```diff
@@ -1,3 +1,8 @@
+/**
+ * HandPose model implementation constants
+ * See `handpose.ts` for entry point
+ */
+
 export const anchors = [
   { x: 0.015625, y: 0.015625 },
   { x: 0.015625, y: 0.015625 },
```

```diff
@@ -1,3 +1,8 @@
+/**
+ * HandPose model implementation
+ * See `handpose.ts` for entry point
+ */
+
 import * as tf from '../../dist/tfjs.esm.js';

 export function getBoxSize(box) {
```

```diff
@@ -1,3 +1,8 @@
+/**
+ * HandPose model implementation
+ * See `handpose.ts` for entry point
+ */
+
 import * as tf from '../../dist/tfjs.esm.js';
 import * as box from './box';
 import * as anchors from './anchors';
```

```diff
@@ -1,3 +1,8 @@
+/**
+ * HandPose model implementation
+ * See `handpose.ts` for entry point
+ */
+
 import * as tf from '../../dist/tfjs.esm.js';
 import * as box from './box';
 import * as util from './util';
```

```diff
@@ -1,8 +1,10 @@
 /**
- * HandPose module entry point
+ * HandPose model implementation
+ *
+ * Based on: [**MediaPipe HandPose**](https://drive.google.com/file/d/1sv4sSb9BSNVZhLzxXJ0jBv9DqD-4jnAz/view)
 */

-import { log, join } from '../helpers';
+import { log, join } from '../util';
 import * as tf from '../../dist/tfjs.esm.js';
 import * as handdetector from './handdetector';
 import * as handpipeline from './handpipeline';
```

```diff
@@ -1,8 +1,12 @@
 /**
- * Hand Detection and Segmentation
+ * HandTrack model implementation
+ *
+ * Based on:
+ * - Hand Detection & Skeleton: [**MediaPipe HandPose**](https://drive.google.com/file/d/1sv4sSb9BSNVZhLzxXJ0jBv9DqD-4jnAz/view)
+ * - Hand Tracking: [**HandTracking**](https://github.com/victordibia/handtracking)
 */

-import { log, join } from '../helpers';
+import { log, join } from '../util';
 import * as tf from '../../dist/tfjs.esm.js';
 import type { HandResult } from '../result';
 import type { GraphModel, Tensor } from '../tfjs/types';
```

## src/human.ts (13 changed lines)

```diff
@@ -2,7 +2,7 @@
 * Human main module
 */

-import { log, now, mergeDeep, validate } from './helpers';
+import { log, now, mergeDeep, validate } from './util';
 import { Config, defaults } from './config';
 import type { Result, FaceResult, HandResult, BodyResult, ObjectResult, GestureResult, PersonResult } from './result';
 import * as tf from '../dist/tfjs.esm.js';
@@ -168,7 +168,6 @@ export class Human {
     this.config = JSON.parse(JSON.stringify(defaults));
     Object.seal(this.config);
     if (userConfig) this.config = mergeDeep(this.config, userConfig);
-    validate(defaults, this.config);
     this.tf = tf;
     this.state = 'idle';
     this.#numTensors = 0;
@@ -229,21 +228,25 @@ export class Human {
   }

   /** Reset configuration to default values */
-  reset = () => {
+  reset() {
     const currentBackend = this.config.backend; // save backend;
     this.config = JSON.parse(JSON.stringify(defaults));
     this.config.backend = currentBackend;
   }

   /** Validate current configuration schema */
-  validate = (userConfig?: Partial<Config>) => validate(defaults, userConfig || this.config);
+  validate(userConfig?: Partial<Config>) {
+    return validate(defaults, userConfig || this.config);
+  }

   /** Process input and return canvas and tensor
   *
   * @param input: {@link Input}
   * @returns { tensor, canvas }
   */
-  image = (input: Input) => image.process(input, this.config);
+  image(input: Input) {
+    return image.process(input, this.config);
+  }

   /** Similarity method calculates similarity between two provided face descriptors (face embeddings)
   * - Calculation is based on normalized Minkowski distance between two descriptors
```
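
The doc comment above names normalized Minkowski distance as the similarity basis. An illustrative sketch of that metric for two equal-length descriptors; the mapping to a 0..1 score is an assumption here, not the library's exact normalization:

```js
// illustrative only: Minkowski distance of a given order between two descriptors,
// folded into a rough 0..1 similarity score (1 = identical)
function minkowskiSimilarity(desc1, desc2, order = 2) {
  if (!desc1 || !desc2 || desc1.length !== desc2.length) return 0;
  const distance = desc1.reduce((sum, val, i) => sum + (Math.abs(val - desc2[i]) ** order), 0) ** (1 / order);
  return Math.max(0, 1 - distance); // clamp so dissimilar descriptors floor at 0
}
```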

```diff
@@ -1,5 +1,5 @@
 /**
- * Image Processing module used by Human
+ * Image Processing algorithm implementation
 */

 import * as tf from '../../dist/tfjs.esm.js';
@@ -7,7 +7,7 @@ import * as fxImage from './imagefx';
 import type { Tensor } from '../tfjs/types';
 import type { Config } from '../config';
 import { env } from '../env';
-import { log } from '../helpers';
+import { log } from '../util';

 type Input = Tensor | ImageData | ImageBitmap | HTMLImageElement | HTMLMediaElement | HTMLVideoElement | HTMLCanvasElement | OffscreenCanvas | typeof Image | typeof env.Canvas;

@@ -84,11 +84,11 @@ export function process(input: Input, config: Config): { tensor: Tensor | null,
   let targetHeight = originalHeight;
   if (targetWidth > maxSize) {
     targetWidth = maxSize;
-    targetHeight = targetWidth * originalHeight / originalWidth;
+    targetHeight = Math.trunc(targetWidth * originalHeight / originalWidth);
   }
   if (targetHeight > maxSize) {
     targetHeight = maxSize;
-    targetWidth = targetHeight * originalWidth / originalHeight;
+    targetWidth = Math.trunc(targetHeight * originalWidth / originalHeight);
   }

   // create our canvas and resize it if needed
```
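
The switch to `Math.trunc` keeps the clamped canvas dimensions integral while preserving aspect ratio. The same computation in isolation, as a standalone sketch:

```js
// clamp an oversized input to maxSize on its longer edge, keeping aspect ratio
function clampDimensions(width, height, maxSize) {
  let targetWidth = width;
  let targetHeight = height;
  if (targetWidth > maxSize) {
    targetWidth = maxSize;
    targetHeight = Math.trunc(targetWidth * height / width); // integer result
  }
  if (targetHeight > maxSize) {
    targetHeight = maxSize;
    targetWidth = Math.trunc(targetHeight * width / height);
  }
  return { targetWidth, targetHeight };
}
// clampDimensions(4000, 3000, 2048) => { targetWidth: 2048, targetHeight: 1536 }
```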

```diff
@@ -1,6 +1,10 @@
-/*
-  WebGLImageFilter by Dominic Szablewski: <https://github.com/phoboslab/WebGLImageFilter>
-*/
+/**
+ * Image Filters in WebGL algorithm implementation
+ *
+ * Based on: [WebGLImageFilter](https://github.com/phoboslab/WebGLImageFilter)
+ *
+ * This module is written in ES5 JS and does not conform to code and style standards
+ */

 // @ts-nocheck
```

```diff
@@ -1,5 +1,5 @@
 /**
- * Module that interpolates results for smoother animations
+ * Results interpolation for smoothing of video detection results between detected frames
 */

 import type { Result, FaceResult, BodyResult, HandResult, ObjectResult, GestureResult, PersonResult } from './result';
```
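
Interpolation of this kind is commonly a weighted blend of the last interpolated result with the newest detection. An illustrative sketch for one keypoint list; the blend factor is an assumption, not the library's tuned value:

```js
// blend previous and current [x, y, ...] keypoints for smoother video overlays
function interpolateKeypoints(previous, current, newWeight = 0.25) {
  if (!previous || previous.length !== current.length) return current; // nothing to blend against
  return current.map((point, i) => point.map((coord, j) => (1 - newWeight) * previous[i][j] + newWeight * coord));
}
```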

```diff
@@ -1,4 +1,8 @@
-import { log } from './helpers';
+/**
+ * Loader and Validator for all models used by Human
+ */
+
+import { log } from './util';
 import type { GraphModel } from './tfjs/types';
 import * as facemesh from './blazeface/facemesh';
 import * as faceres from './faceres/faceres';
```
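
Models are otherwise loaded on first use; a short sketch of eager pre-loading, assuming a constructed `human` instance (this mirrors the `human.load()` call in the new batch tool near the end of this diff):

```js
await human.load(); // pre-load all models enabled in the current configuration
console.log('loaded:', Object.keys(human.models).filter((name) => human.models[name]));
```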

```diff
@@ -1,8 +1,10 @@
 /**
- * EfficientPose Module
+ * MoveNet model implementation
+ *
+ * Based on: [**MoveNet**](https://blog.tensorflow.org/2021/05/next-generation-pose-detection-with-movenet-and-tensorflowjs.html)
 */

-import { log, join } from '../helpers';
+import { log, join } from '../util';
 import * as tf from '../../dist/tfjs.esm.js';
 import type { BodyResult } from '../result';
 import type { GraphModel, Tensor } from '../tfjs/types';
@@ -1,8 +1,10 @@
 /**
- * CenterNet object detection module
+ * CenterNet object detection model implementation
+ *
+ * Based on: [**NanoDet**](https://github.com/RangiLyu/nanodet)
 */

-import { log, join } from '../helpers';
+import { log, join } from '../util';
 import * as tf from '../../dist/tfjs.esm.js';
 import { labels } from './labels';
 import type { ObjectResult } from '../result';
@@ -1,5 +1,5 @@
 /**
- * CoCo Labels used by object detection modules
+ * CoCo Labels used by object detection implementations
 */
 export const labels = [
   { class: 1, label: 'person' },
@@ -1,8 +1,10 @@
 /**
- * NanoDet object detection module
+ * NanoDet object detection model implementation
+ *
+ * Based on: [**MB3-CenterNet**](https://github.com/610265158/mobilenetv3_centernet)
 */

-import { log, join } from '../helpers';
+import { log, join } from '../util';
 import * as tf from '../../dist/tfjs.esm.js';
 import { labels } from './labels';
 import type { ObjectResult } from '../result';
@@ -1,5 +1,5 @@
 /**
- * Module that analyzes existing results and recombines them into a unified person object
+ * Analyze detection Results and sort & combine them into per-person view
 */

 import type { FaceResult, BodyResult, HandResult, GestureResult, PersonResult } from './result';
```

```diff
@@ -1,3 +1,8 @@
+/**
+ * PoseNet body detection model implementation
+ * See `posenet.ts` for entry point
+ */
+
 import * as utils from './utils';
 import * as kpt from './keypoints';

@@ -1,3 +1,8 @@
+/**
+ * PoseNet body detection model implementation constants
+ * See `posenet.ts` for entry point
+ */
+
 import * as kpt from './keypoints';
 import type { BodyResult } from '../result';

@@ -1,8 +1,9 @@
 /**
 * Profiling calculations
+ * Debug only
 */

-import { log } from './helpers';
+import { log } from './util';

 export const data = {};
```

```diff
@@ -1,9 +1,12 @@
 /**
- * Module that analyzes person age
- * Obsolete
+ * Age model implementation
+ *
+ * Based on: [**SSR-Net**](https://github.com/shamangary/SSR-Net)
+ *
+ * Obsolete and replaced by `faceres` that performs age/gender/descriptor analysis
 */

-import { log, join } from '../helpers';
+import { log, join } from '../util';
 import * as tf from '../../dist/tfjs.esm.js';
 import type { Config } from '../config';
 import type { GraphModel, Tensor } from '../tfjs/types';
@@ -1,9 +1,12 @@
 /**
- * Module that analyzes person gender
- * Obsolete
+ * Gender model implementation
+ *
+ * Based on: [**SSR-Net**](https://github.com/shamangary/SSR-Net)
+ *
+ * Obsolete and replaced by `faceres` that performs age/gender/descriptor analysis
 */

-import { log, join } from '../helpers';
+import { log, join } from '../util';
 import * as tf from '../../dist/tfjs.esm.js';
 import type { Config } from '../config';
 import type { GraphModel, Tensor } from '../tfjs/types';
```

```diff
@@ -1,4 +1,6 @@
-import { log, now } from '../helpers';
+/** TFJS backend initialization and customization */
+
+import { log, now } from '../util';
 import * as humangl from './humangl';
 import * as env from '../env';
 import * as tf from '../../dist/tfjs.esm.js';
@@ -1,9 +1,6 @@
-/**
- * Custom TFJS backend for Human based on WebGL
- * Not used by default
- */
+/** TFJS custom backend registration */

-import { log } from '../helpers';
+import { log } from '../util';
 import * as tf from '../../dist/tfjs.esm.js';
 import * as image from '../image/image';
 import * as models from '../models';
@@ -1,6 +1,4 @@
-/**
- * Export common TensorFlow types
- */
+/** TFJS common types exports */

 /**
 * TensorFlow Tensor type
```

```diff
@@ -1,4 +1,8 @@
-import { log, now, mergeDeep } from './helpers';
+/**
+ * Warmup algorithm that uses embedded images to exercise loaded models for faster future inference
+ */
+
+import { log, now, mergeDeep } from './util';
 import * as sample from './sample';
 import * as tf from '../dist/tfjs.esm.js';
 import * as image from './image/image';
```
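
Warmup is exposed on the main class; a short usage sketch, assuming a constructed `human` instance:

```js
// run inference once on an embedded sample image so the first real detection is fast
const result = await human.warmup();
console.log('warmup performance:', result.performance);
```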

```diff
@@ -196,7 +196,7 @@ async function test(Human, inputConfig) {
   human.reset();
   config.async = true;
   config.cacheSensitivity = 0;
-  res = await testDetect(human, 'samples/ai-body.jpg', 'default');
+  res = await testDetect(human, 'samples/in/ai-body.jpg', 'default');
   if (!res || res?.face?.length !== 1 || res?.face[0].gender !== 'female') log('error', 'failed: default result face mismatch', res?.face?.length, res?.body?.length, res?.hand?.length, res?.gesture?.length);
   else log('state', 'passed: default result face match');
@@ -205,13 +205,13 @@ async function test(Human, inputConfig) {
   human.reset();
   config.async = false;
   config.cacheSensitivity = 0;
-  res = await testDetect(human, 'samples/ai-body.jpg', 'default');
+  res = await testDetect(human, 'samples/in/ai-body.jpg', 'default');
   if (!res || res?.face?.length !== 1 || res?.face[0].gender !== 'female') log('error', 'failed: default sync', res?.face?.length, res?.body?.length, res?.hand?.length, res?.gesture?.length);
   else log('state', 'passed: default sync');

   // test image processing
   const img1 = await human.image(null);
-  const img2 = await human.image(await getImage(human, 'samples/ai-face.jpg'));
+  const img2 = await human.image(await getImage(human, 'samples/in/ai-face.jpg'));
   if (!img1 || !img2 || img1.tensor !== null || img2.tensor?.shape?.length !== 4) log('error', 'failed: image input', img1?.tensor?.shape, img2?.tensor?.shape);
   else log('state', 'passed: image input', img1?.tensor?.shape, img2?.tensor?.shape);
@@ -225,9 +225,9 @@ async function test(Human, inputConfig) {
   human.reset();
   config.async = false;
   config.cacheSensitivity = 0;
-  let res1 = await testDetect(human, 'samples/ai-face.jpg', 'default');
-  let res2 = await testDetect(human, 'samples/ai-body.jpg', 'default');
-  let res3 = await testDetect(human, 'samples/ai-upper.jpg', 'default');
+  let res1 = await testDetect(human, 'samples/in/ai-face.jpg', 'default');
+  let res2 = await testDetect(human, 'samples/in/ai-body.jpg', 'default');
+  let res3 = await testDetect(human, 'samples/in/ai-upper.jpg', 'default');
   const desc1 = res1 && res1.face && res1.face[0] && res1.face[0].embedding ? [...res1.face[0].embedding] : null;
   const desc2 = res2 && res2.face && res2.face[0] && res2.face[0].embedding ? [...res2.face[0].embedding] : null;
   const desc3 = res3 && res3.face && res3.face[0] && res3.face[0].embedding ? [...res3.face[0].embedding] : null;
@@ -257,7 +257,7 @@ async function test(Human, inputConfig) {
   log('info', 'test object');
   human.reset();
   config.object = { enabled: true };
-  res = await testDetect(human, 'samples/ai-body.jpg', 'default');
+  res = await testDetect(human, 'samples/in/ai-body.jpg', 'default');
   if (!res || res?.object?.length !== 1 || res?.object[0]?.label !== 'person') log('error', 'failed: object result mismatch', res?.object?.length);
   else log('state', 'passed: object result match');
@@ -268,7 +268,7 @@ async function test(Human, inputConfig) {
   config.face = { detector: { minConfidence: 0.0001, maxDetected: 1 } };
   config.body = { minConfidence: 0.0001, maxDetected: 1 };
   config.hand = { minConfidence: 0.0001, maxDetected: 3 };
-  res = await testDetect(human, 'samples/ai-body.jpg', 'default');
+  res = await testDetect(human, 'samples/in/ai-body.jpg', 'default');
   if (!res || res?.face?.length !== 1 || res?.body?.length !== 1 || res?.hand?.length !== 3 || res?.gesture?.length !== 9) log('error', 'failed: sensitive result mismatch', res?.face?.length, res?.body?.length, res?.hand?.length, res?.gesture?.length);
   else log('state', 'passed: sensitive result match');
@@ -293,7 +293,7 @@ async function test(Human, inputConfig) {
   human.reset();
   config.face = { mesh: { enabled: false }, iris: { enabled: false }, description: { enabled: false }, emotion: { enabled: false } };
   config.hand = { landmarks: false };
-  res = await testDetect(human, 'samples/ai-body.jpg', 'default');
+  res = await testDetect(human, 'samples/in/ai-body.jpg', 'default');
   if (!res || res?.face?.length !== 1 || res?.face[0]?.gender || res?.face[0]?.age || res?.face[0]?.embedding) log('error', 'failed: detectors result face mismatch', res?.face);
   else log('state', 'passed: detector result face match');
   if (!res || res?.hand?.length !== 1 || res?.hand[0]?.landmarks) log('error', 'failed: detectors result hand mismatch', res?.hand?.length);
```

```diff
@@ -302,22 +302,22 @@ async function test(Human, inputConfig) {
   // test posenet and movenet
   log('info', 'test body variants');
   config.body = { modelPath: 'posenet.json' };
-  res = await testDetect(human, 'samples/ai-body.jpg', 'posenet');
+  res = await testDetect(human, 'samples/in/ai-body.jpg', 'posenet');
   if (!res || res?.body?.length !== 1) log('error', 'failed: body posenet');
   else log('state', 'passed: body posenet');
   config.body = { modelPath: 'movenet-lightning.json' };
-  res = await testDetect(human, 'samples/ai-body.jpg', 'movenet');
+  res = await testDetect(human, 'samples/in/ai-body.jpg', 'movenet');
   if (!res || res?.body?.length !== 1) log('error', 'failed: body movenet');
   else log('state', 'passed: body movenet');

   // test handdetect and handtrack
   log('info', 'test hand variants');
   config.hand = { enabled: true, maxDetected: 2, minConfidence: 0.1, detector: { modelPath: 'handdetect.json' } };
-  res = await testDetect(human, 'samples/ai-body.jpg', 'handdetect');
+  res = await testDetect(human, 'samples/in/ai-body.jpg', 'handdetect');
   if (!res || res?.hand?.length !== 2) log('error', 'failed: hand handdetect');
   else log('state', 'passed: hand handdetect');
   config.hand = { enabled: true, maxDetected: 2, minConfidence: 0.1, detector: { modelPath: 'handtrack.json' } };
-  res = await testDetect(human, 'samples/ai-body.jpg', 'handtrack');
+  res = await testDetect(human, 'samples/in/ai-body.jpg', 'handtrack');
   if (!res || res?.hand?.length !== 2) log('error', 'failed: hand handtrack');
   else log('state', 'passed: hand handtrack');
```

```diff
@@ -326,28 +326,28 @@ async function test(Human, inputConfig) {
   const second = new Human(config);
   await testDetect(human, null, 'default');
   log('info', 'test: first instance');
-  await testDetect(first, 'samples/ai-upper.jpg', 'default');
+  await testDetect(first, 'samples/in/ai-upper.jpg', 'default');
   log('info', 'test: second instance');
-  await testDetect(second, 'samples/ai-upper.jpg', 'default');
+  await testDetect(second, 'samples/in/ai-upper.jpg', 'default');

   // test async multiple instances
   log('info', 'test: concurrent');
   await Promise.all([
-    testDetect(human, 'samples/ai-face.jpg', 'default', false),
-    testDetect(first, 'samples/ai-face.jpg', 'default', false),
-    testDetect(second, 'samples/ai-face.jpg', 'default', false),
-    testDetect(human, 'samples/ai-body.jpg', 'default', false),
-    testDetect(first, 'samples/ai-body.jpg', 'default', false),
-    testDetect(second, 'samples/ai-body.jpg', 'default', false),
-    testDetect(human, 'samples/ai-upper.jpg', 'default', false),
-    testDetect(first, 'samples/ai-upper.jpg', 'default', false),
-    testDetect(second, 'samples/ai-upper.jpg', 'default', false),
+    testDetect(human, 'samples/in/ai-face.jpg', 'default', false),
+    testDetect(first, 'samples/in/ai-face.jpg', 'default', false),
+    testDetect(second, 'samples/in/ai-face.jpg', 'default', false),
+    testDetect(human, 'samples/in/ai-body.jpg', 'default', false),
+    testDetect(first, 'samples/in/ai-body.jpg', 'default', false),
+    testDetect(second, 'samples/in/ai-body.jpg', 'default', false),
+    testDetect(human, 'samples/in/ai-upper.jpg', 'default', false),
+    testDetect(first, 'samples/in/ai-upper.jpg', 'default', false),
+    testDetect(second, 'samples/in/ai-upper.jpg', 'default', false),
   ]);

   // test monkey-patch
   globalThis.Canvas = canvasJS.Canvas; // monkey-patch to use external canvas library
   globalThis.ImageData = canvasJS.ImageData; // monkey-patch to use external canvas library
-  const inputImage = await canvasJS.loadImage('samples/ai-face.jpg'); // load image using canvas library
+  const inputImage = await canvasJS.loadImage('samples/in/ai-face.jpg'); // load image using canvas library
   const inputCanvas = new canvasJS.Canvas(inputImage.width, inputImage.height); // create canvas
   const ctx = inputCanvas.getContext('2d');
   ctx.drawImage(inputImage, 0, 0); // draw input image onto canvas
```

New file (90 lines): the batch tool invoked as `node test/test-node-canvas.js samples/in/ samples/out/` in the samples README above

```js
const fs = require('fs');
const path = require('path');
const process = require('process');
const log = require('@vladmandic/pilogger');
const canvas = require('canvas');
const tf = require('@tensorflow/tfjs-node'); // for nodejs, `tfjs-node` or `tfjs-node-gpu` should be loaded before using Human
const Human = require('../dist/human.node.js'); // this is 'const Human = require('../dist/human.node-gpu.js').default;'

const config = { // just enable all and leave default settings
  debug: true,
  async: false,
  cacheSensitivity: 0,
  face: { enabled: true },
  hand: { enabled: true },
  body: { enabled: true },
  object: { enabled: true },
  gesture: { enabled: true },
  /*
  face: { enabled: true, detector: { minConfidence: 0.1 } },
  hand: { enabled: true, maxDetected: 2, minConfidence: 0.1, detector: { modelPath: 'handtrack.json' } }, // use alternative hand model
  body: { enabled: true, minConfidence: 0.1 },
  object: { enabled: true, minConfidence: 0.1 },
  gesture: { enabled: true },
  */
};

async function main() {
  log.header();

  globalThis.Canvas = canvas.Canvas; // patch global namespace with canvas library
  globalThis.ImageData = canvas.ImageData; // patch global namespace with canvas library

  const human = new Human.Human(config); // create instance of human
  log.info('Human:', human.version);
  const configErrors = await human.validate();
  if (configErrors.length > 0) log.error('Configuration errors:', configErrors);
  await human.load(); // pre-load models
  log.info('Loaded models:', Object.keys(human.models).filter((a) => human.models[a]));

  const inDir = process.argv[2];
  const outDir = process.argv[3];
  if (process.argv.length !== 4) {
    log.error('Parameters: <input-directory> <output-directory> missing');
    return;
  }
  if (!fs.existsSync(inDir) || !fs.statSync(inDir).isDirectory() || !fs.existsSync(outDir) || !fs.statSync(outDir).isDirectory()) {
    log.error('Invalid directory specified:', 'input:', fs.existsSync(inDir) && fs.statSync(inDir).isDirectory(), 'output:', fs.existsSync(outDir) && fs.statSync(outDir).isDirectory());
    return;
  }

  const dir = fs.readdirSync(inDir);
  const images = dir.filter((f) => fs.statSync(path.join(inDir, f)).isFile() && (f.toLocaleLowerCase().endsWith('.jpg') || f.toLocaleLowerCase().endsWith('.jpeg')));
  log.info(`Processing folder: ${inDir} entries:`, dir.length, 'images', images.length);
  for (const image of images) {
    const inFile = path.join(inDir, image);
    /*
    const inputImage = await canvas.loadImage(inFile); // load image using canvas library
    log.state('Loaded image:', inFile, inputImage.width, inputImage.height);
    const inputCanvas = new canvas.Canvas(inputImage.width, inputImage.height); // create canvas
    const inputCtx = inputCanvas.getContext('2d');
    inputCtx.drawImage(inputImage, 0, 0); // draw input image onto canvas
    */
    const buffer = fs.readFileSync(inFile);
    const tensor = human.tf.tidy(() => { // decode jpeg into a 4d float32 input tensor
      const decode = human.tf.node.decodeImage(buffer, 3);
      const expand = human.tf.expandDims(decode, 0);
      const cast = human.tf.cast(expand, 'float32');
      return cast;
    });
    log.state('Loaded image:', inFile, tensor.shape);

    const result = await human.detect(tensor);
    tf.dispose(tensor);
    log.data(`Detected: ${image}:`, 'Face:', result.face.length, 'Body:', result.body.length, 'Hand:', result.hand.length, 'Objects:', result.object.length, 'Gestures:', result.gesture.length);

    const outputCanvas = new canvas.Canvas(tensor.shape[2], tensor.shape[1]); // create canvas
    const outputCtx = outputCanvas.getContext('2d');
    const inputImage = await canvas.loadImage(buffer); // load image using canvas library
    outputCtx.drawImage(inputImage, 0, 0); // draw input image onto canvas
    human.draw.all(outputCanvas, result); // use human built-in method to draw results as overlays on canvas
    const outFile = path.join(outDir, image);
    const outStream = fs.createWriteStream(outFile); // write canvas to new image file
    outStream.on('finish', () => log.state('Output image:', outFile, outputCanvas.width, outputCanvas.height));
    outStream.on('error', (err) => log.error('Output error:', outFile, err));
    const stream = outputCanvas.createJPEGStream({ quality: 0.5, progressive: true, chromaSubsampling: true });
    stream.pipe(outStream);
  }
}

main();
```

## wiki (2 changed lines)

```diff
@@ -1 +1 @@
-Subproject commit a0497b6d14059099b2764b8f70390f4b6af8db9f
+Subproject commit c4642bde54506afd70a5fc32617414fa84b9fc0e
```