Compare commits


266 Commits
2.0 ... master

Author SHA1 Message Date
Vladimir Mandic 189226d63a full rebuild
Signed-off-by: Vladimir Mandic <mandic00@live.com>
2025-02-05 09:15:34 -05:00
Vladimir Mandic f587b44f66 1.7.15 2025-02-05 09:02:09 -05:00
Vladimir Mandic e3f11b8533 update build platform
Signed-off-by: Vladimir Mandic <mandic00@live.com>
2025-02-05 09:02:06 -05:00
Vladimir Mandic 171d17cadf update changelog 2024-09-10 11:31:01 -04:00
Vladimir Mandic e4cdf624c9 update build environment and full rebuild 2024-09-10 11:30:23 -04:00
Vladimir Mandic c633f9fbe4 1.7.14 2024-09-10 11:17:44 -04:00
Vladimir Mandic ffc3c40362 rebuild 2024-01-20 15:46:59 -05:00
Vladimir Mandic a8193f9077
Merge pull request #188 from rebser/master
fixing leaking EventHandlers when using HTMLCanvasElement
2024-01-20 15:45:04 -05:00
rebser 155f07dccd
fixing leaking EventHandlers when using HTMLCanvasElement 2024-01-19 08:38:59 +01:00
Vladimir Mandic 2f0469fe6e update readme 2024-01-17 17:04:22 -05:00
Vladimir Mandic 697b265337 rebuild types 2024-01-17 17:01:20 -05:00
Vladimir Mandic 4719b81587 rebuild 2024-01-17 16:56:53 -05:00
Vladimir Mandic fc9a39ea13 1.7.13 2024-01-17 16:44:28 -05:00
Vladimir Mandic 438897c5a2 update all dependencies 2024-01-17 16:44:24 -05:00
Vladimir Mandic f4d4780267
Merge pull request #186 from khwalkowicz/master
feat: enable noImplicitAny
2024-01-17 16:06:03 -05:00
Kamil H. Walkowicz a5c767fdff feat: enable noImplicitAny 2024-01-16 18:09:52 +01:00
Vladimir Mandic 1fa29b0fd3 update tfjs and rebuild 2023-06-12 12:02:21 -04:00
Vladimir Mandic 472f2e4480 1.7.12 2023-06-12 12:01:45 -04:00
Vladimir Mandic 4433ce44bc update dependencies 2023-05-08 09:08:30 -04:00
Vladimir Mandic 4ca829f941 1.7.11 2023-05-08 09:08:05 -04:00
Vladimir Mandic 038349968c update tfjs 2023-03-21 08:00:18 -04:00
Vladimir Mandic ae96c7b230 1.7.10 2023-03-21 07:59:27 -04:00
Vladimir Mandic f9f036ba01 change typedefs 2023-01-29 10:08:46 -05:00
Vladimir Mandic 0736a99250 1.7.9 2023-01-29 09:00:29 -05:00
Vladimir Mandic 3ea729badb update dependencies 2023-01-21 09:06:35 -05:00
Vladimir Mandic d36ed6d266 update changelog 2023-01-06 13:25:52 -05:00
Vladimir Mandic 4061d4d62f update tfjs 2023-01-06 13:24:17 -05:00
Vladimir Mandic b034c46f80 1.7.8 2023-01-06 13:04:31 -05:00
Vladimir Mandic aefd776a9e update dependencies 2022-12-21 14:14:22 -05:00
Vladimir Mandic 20eb54beb4 update 2022-12-04 14:14:05 -05:00
Vladimir Mandic e8301c5277 update 2022-12-04 13:23:41 -05:00
Vladimir Mandic fba823ba50 update tfjs 2022-12-01 14:56:40 -05:00
Vladimir Mandic a1cb6de1e8 1.7.7 2022-12-01 14:55:47 -05:00
Vladimir Mandic fb3836019f update dependencies 2022-11-12 11:54:00 -05:00
Vladimir Mandic 15ae496f40 update release 2022-10-18 07:23:49 -04:00
Vladimir Mandic 0009d1bc34 1.7.6 2022-10-18 07:23:04 -04:00
Vladimir Mandic adc4b3a11d update dependencies 2022-10-18 07:10:40 -04:00
Sohaib Ahmed 7e5a1289ff
Fix face angles (yaw, pitch, & roll) accuracy (#130)
Previously derived angles seemed inaccurate and somewhat unusable (given their output was in radians). This update uses a person's mesh positions and chooses specific points for accurate results. It also adds directionality of the movements (e.g. pitching head backwards is a negative result, as is rolling head to the left).

The webcam.js file has also been updated to showcase the correct output in degrees (reducing potential user confusion)

Committer: Sohaib Ahmed <sohaibi.ahmed@icloud.com>

Co-authored-by: Sophia Glisch <sophiaglisch@Sophias-MacBook-Pro.local>
2022-10-18 07:09:35 -04:00
Vladimir Mandic cd2c553737 update tfjs 2022-10-14 08:01:39 -04:00
Vladimir Mandic a433fc0681 1.7.5 2022-10-09 13:42:45 -04:00
Vladimir Mandic f9902b0459 update readme 2022-10-09 13:42:38 -04:00
Vladimir Mandic bd5ab6bb0f update 2022-10-09 13:41:11 -04:00
Vladimir Mandic 96fed4f123 update tfjs 2022-10-09 13:40:33 -04:00
Vladimir Mandic 0cbfd9b01b update dependencies 2022-09-29 10:38:14 -04:00
Vladimir Mandic dea225bbeb
Create FUNDING.yml 2022-09-26 09:39:08 -04:00
Vladimir Mandic 602e86cbec add node-wasm demo 2022-09-25 16:40:42 -04:00
Vladimir Mandic 00bf49b24f 1.7.4 2022-09-25 16:39:22 -04:00
Vladimir Mandic fa33c1281c improve face compare performance 2022-09-14 08:18:51 -04:00
Vladimir Mandic 7f613367a3 update tfjs and typescript 2022-09-04 15:18:07 -04:00
Vladimir Mandic 4d65f459f9 update tfjs 2022-08-24 08:21:15 -04:00
Vladimir Mandic d28e5d2142 1.7.3 2022-08-24 08:20:11 -04:00
Vladimir Mandic 6aeb292453 refresh release 2022-08-23 08:26:07 -04:00
Vladimir Mandic 289faf17f2 1.7.2 2022-08-23 08:25:42 -04:00
Vladimir Mandic 7a6f7d96b7 document and remove optional dependencies 2022-08-23 08:21:20 -04:00
Vladimir Mandic 870eebedfa update dependencies 2022-08-22 13:17:39 -04:00
Vladimir Mandic 1ed702f713 update readme 2022-08-16 20:25:26 -04:00
Nina Egger b2a988e436
update readme 2022-08-03 15:14:56 -04:00
Vladimir Mandic 5c38676a83 update build platform 2022-07-29 09:24:51 -04:00
Vladimir Mandic bac0ef10cf update readme 2022-07-26 07:27:52 -04:00
Vladimir Mandic 8baef0ef68 update links 2022-07-25 08:38:52 -04:00
Vladimir Mandic c5dbb9d4e9 release build 2022-07-25 08:23:57 -04:00
Vladimir Mandic a8021dc2a3 1.7.1 2022-07-25 08:21:02 -04:00
Vladimir Mandic f946780bab refactor dependencies 2022-07-25 08:20:59 -04:00
Vladimir Mandic 8e7061a9aa full rebuild 2022-05-24 07:18:59 -04:00
Vladimir Mandic cd904ca5dd 1.6.11 2022-05-24 07:18:51 -04:00
Vladimir Mandic 496779fee2 1.6.10 2022-05-24 07:17:40 -04:00
Vladimir Mandic 4ba4a99ee1 update tfjs 2022-05-24 07:16:42 -04:00
Vladimir Mandic 31170e750b update changelog 2022-05-18 08:36:24 -04:00
Vladimir Mandic 5f58cd376d update tfjs 2022-05-18 08:36:05 -04:00
Vladimir Mandic 07eb00d7d6 1.6.9 2022-05-18 08:21:59 -04:00
Vladimir Mandic a1f7a0841f update libraries 2022-05-09 08:12:24 -04:00
Vladimir Mandic 49a594a59b 1.6.8 2022-05-09 08:11:31 -04:00
Vladimir Mandic 3b3ab219dc update dependencies 2022-04-09 09:48:06 -04:00
Vladimir Mandic 2fce7338dc exclude impossible detected face boxes 2022-04-05 07:38:11 -04:00
Vladimir Mandic 6cafeafba1 update tfjs 2022-04-01 09:16:17 -04:00
Vladimir Mandic d0f1349a23 1.6.7 2022-04-01 09:15:45 -04:00
abdemirza cdb0e485f8
fixed typo error (#97)
Co-authored-by: Abuzar Mirza <abdermiza@gmail.com>
2022-03-10 06:48:14 -05:00
Vladimir Mandic 5bcc4d2a73 update changelog 2022-03-07 13:17:54 -05:00
Vladimir Mandic 92008ed6f4 update tfjs and ts 2022-03-07 13:17:31 -05:00
Vladimir Mandic c1b38f99fe 1.6.6 2022-03-04 16:48:47 -05:00
Vladimir Mandic 0c5251c219 toolkit refresh 2022-02-07 09:43:35 -05:00
Vladimir Mandic fcf61e5c30 1.6.5 2022-02-07 09:41:55 -05:00
Vladimir Mandic 8c7e21b1c9 update tfjs and expand readme 2022-01-14 10:04:13 -05:00
Vladimir Mandic 2841969df8 1.6.4 2022-01-14 09:54:19 -05:00
Vladimir Mandic 39b137ed63 add node with wasm build target 2022-01-06 07:59:13 -05:00
Vladimir Mandic c53becfc67 1.6.3 2022-01-06 07:58:05 -05:00
Vladimir Mandic fd427cce39 update lint 2022-01-01 07:55:12 -05:00
Vladimir Mandic 43805b50c6 update demos 2022-01-01 07:52:40 -05:00
Vladimir Mandic fc18d89ab6 1.6.2 2022-01-01 07:51:51 -05:00
Vladimir Mandic 0de113080c update 2021-12-27 10:52:58 -05:00
Vladimir Mandic 471ddb7549 update 2021-12-14 15:42:06 -05:00
Vladimir Mandic 70991235df update tfjs 2021-12-09 14:22:22 -05:00
Vladimir Mandic c07be32e26 1.6.1 2021-12-09 14:20:24 -05:00
Vladimir Mandic 936ecba7ec update build 2021-12-06 21:43:06 -05:00
Vladimir Mandic 63476fcbc0 rebuild 2021-12-06 06:34:50 -05:00
Vladimir Mandic 62da12758f update 2021-12-03 11:32:42 -05:00
Vladimir Mandic bd4d5935fe update 2021-12-03 11:28:27 -05:00
Vladimir Mandic 118fbaba4d release preview 2021-12-01 17:21:12 -05:00
Vladimir Mandic e70d9bb18b switch to custom tfjs and new typedefs 2021-12-01 15:37:52 -05:00
Vladimir Mandic f1a2ef34a5 rebuild 2021-12-01 07:51:57 -05:00
Vladimir Mandic e7fd0efd27 1.5.8 2021-11-30 13:17:15 -05:00
Vladimir Mandic eb5501c672 update tfjs 2021-10-28 13:58:21 -04:00
Vladimir Mandic 8b304fa3d4 1.5.7 2021-10-28 13:56:38 -04:00
Vladimir Mandic 1824a62efb update readme 2021-10-23 09:52:51 -04:00
Vladimir Mandic bd2317d42e update tfjs to 3.10.0 2021-10-22 09:06:43 -04:00
Vladimir Mandic 1def723c7b 1.5.6 2021-10-22 09:01:27 -04:00
Vladimir Mandic d78dd3aae1 update dependencies and stricter linting rules 2021-10-19 08:04:24 -04:00
Vladimir Mandic 461e074993 1.5.5 2021-10-19 07:54:26 -04:00
Vladimir Mandic 1d30a9f816 rebuild 2021-09-30 13:45:23 -04:00
Vladimir Mandic fcbfc8589a allow backend change in demo via url params 2021-09-30 13:43:15 -04:00
Vladimir Mandic c7b2c65c97 add node-match demo 2021-09-29 13:03:02 -04:00
Vladimir Mandic 1b4580dd6e fix face matcher 2021-09-29 09:32:30 -04:00
Vladimir Mandic fdddee7101 1.5.4 2021-09-29 09:31:42 -04:00
Vladimir Mandic aee959f464 update build platform and typedoc template 2021-09-18 18:38:13 -04:00
Vladimir Mandic f70e5615b4 update release 2021-09-16 08:31:45 -04:00
Vladimir Mandic 4ba43e08ae 1.5.3 2021-09-16 08:30:53 -04:00
Vladimir Mandic c3049e7c29 simplify tfjs imports 2021-09-16 08:30:50 -04:00
Vladimir Mandic e2609a0ef2 update sourcemaps 2021-09-11 11:14:57 -04:00
Vladimir Mandic d13586f549 reduce bundle size 2021-09-11 11:11:38 -04:00
Vladimir Mandic 519e346f02 enable webgl uniforms 2021-09-10 10:24:33 -04:00
Vladimir Mandic efb307d230 1.5.2 2021-09-10 10:22:09 -04:00
Vladimir Mandic 47f2b53e92 update dependencies 2021-09-08 13:57:03 -04:00
Vladimir Mandic 9b810d8028 redesign build platform 2021-09-08 13:51:28 -04:00
Vladimir Mandic f48cbda416 1.5.1 2021-09-08 13:50:47 -04:00
Vladimir Mandic ac172b8be5 update dependencies 2021-09-05 17:06:09 -04:00
Vladimir Mandic 2c8c8c2c1c update tfjs 3.9.0 2021-08-31 12:21:57 -04:00
Vladimir Mandic 9fb3029211 1.4.2 2021-08-31 12:21:05 -04:00
Vladimir Mandic 225192d18d update dependencies 2021-08-10 08:19:49 -04:00
Vladimir Mandic 8dab959446 update 2021-07-29 09:18:21 -04:00
Vladimir Mandic 42d9d677de update tfjs and typescript 2021-07-29 09:05:49 -04:00
Vladimir Mandic d5b366629b 1.4.1 2021-07-29 09:05:01 -04:00
Vladimir Mandic 1455c35c81 update typedoc 2021-06-18 07:19:03 -04:00
Vladimir Mandic 953ef705ab update with typedoc 4.3 2021-06-08 06:59:55 -04:00
Vladimir Mandic 00803107ce 1.3.1 2021-06-08 06:58:27 -04:00
Vladimir Mandic 2ac6baa02b update build and lint scripts 2021-06-04 09:17:04 -04:00
Vladimir Mandic 7ef748390c update for tfjs 3.7.0 2021-06-04 08:54:48 -04:00
Vladimir Mandic b4ba10898f update 2021-06-04 07:27:31 -04:00
Vladimir Mandic df47b3e2a9 update 2021-05-28 07:27:16 -04:00
Bettina Steger 76daa38bce
fix face expression detection (#56) 2021-05-28 07:26:21 -04:00
Vladimir Mandic e13a6d684b add bufferToVideo 2021-05-27 18:38:30 -04:00
Vladimir Mandic da426d5cfd fix git conflicts 2021-05-27 18:36:59 -04:00
Bettina Steger 1de3551a0b
fix TSC error (#55)
* add bufferToVideo and fetchVideo

* fixes for mov videos

* use oncanplay instead of timeout

* remove video.type
2021-05-27 18:33:47 -04:00
Vladimir Mandic 98ea06fb0e force typescript 4.2 due to typedoc incompatibility with ts 4.3 2021-05-27 16:04:17 -04:00
Vladimir Mandic bf84748777 1.2.5 2021-05-27 14:03:27 -04:00
Bettina Steger 25735fcb34
add bufferToVideo and fetchVideo (#54)
* add bufferToVideo and fetchVideo

* fixes for mov videos

* use oncanplay instead of timeout
2021-05-27 14:02:01 -04:00
Vladimir Mandic 7b8b30bfc9 update dependencies 2021-05-18 08:11:17 -04:00
Vladimir Mandic 107297015e 1.2.4 2021-05-18 08:10:36 -04:00
Vladimir Mandic b9c78b21b0 update tfjs version 2021-05-04 11:18:07 -04:00
Vladimir Mandic 1c577b6ede 1.2.3 2021-05-04 11:17:34 -04:00
Vladimir Mandic b0d195dd57 update for tfjs 3.6.0 2021-04-30 12:01:04 -04:00
Vladimir Mandic f0aefed9e6 1.2.2 2021-04-30 12:00:27 -04:00
Vladimir Mandic 158dbc6208 add node-wasm demo 2021-04-26 14:45:49 -04:00
Vladimir Mandic b8830e8cd3 accept uri as input to demo node and node-canvas 2021-04-25 10:05:25 -04:00
Vladimir Mandic 1410be346a major version full rebuild 2021-04-22 19:50:28 -04:00
Vladimir Mandic 11b0685c9b 1.2.1 2021-04-22 19:50:01 -04:00
Vladimir Mandic 5c13f14b05 update for tfjs 3.5.0 2021-04-22 19:49:56 -04:00
Vladimir Mandic 33fc169fa6 add npmrc 2021-04-20 08:02:42 -04:00
Vladimir Mandic 47cb1aac88 updated node-multiprocess demo 2021-04-16 08:30:15 -04:00
Vladimir Mandic e496c9789f add canvas/image based demo to decode webp 2021-04-15 16:59:35 -04:00
Vladimir Mandic 6f9db4cd09 1.1.12 2021-04-13 11:24:10 -04:00
Vladimir Mandic 3bce447141 update 2021-04-13 11:24:07 -04:00
Vladimir Mandic 0304c9c2f1 update readme 2021-04-10 23:40:12 -04:00
Vladimir Mandic 98b8963505 update 2021-04-08 19:21:36 -04:00
Vladimir Mandic ab8478837d update badges 2021-04-08 19:17:06 -04:00
Vladimir Mandic 5f2aa0456c update cdn links 2021-04-08 18:26:07 -04:00
Vladimir Mandic 48b626b76c 1.1.11 2021-04-06 11:05:52 -04:00
Vladimir Mandic cbeeca675d update tslib 2021-04-06 11:05:49 -04:00
Vladimir Mandic 8e61c418e6
Merge pull request #46 from mayankagarwals/demo_latencyTest_fix
Fixed bug which led to latency not being measured in multiprocess node demo
2021-04-06 11:02:07 -04:00
mayankagarwals 9773e3557a Fixed bug which led to latency not being measured and wrong output on console for demo 2021-04-06 19:14:41 +05:30
Vladimir Mandic c188e2f9d8 add cdn links 2021-04-05 09:36:26 -04:00
Vladimir Mandic 9cf903a5cf update 2021-04-04 09:23:11 -04:00
Vladimir Mandic 8942b0752c update http headers 2021-04-04 09:21:07 -04:00
Vladimir Mandic 99c9ea0b75 1.1.10 2021-04-04 08:28:21 -04:00
Vladimir Mandic ed465fc042 added webhints 2021-04-04 08:28:14 -04:00
Vladimir Mandic 05de572e79 update keywords 2021-04-03 11:36:09 -04:00
Vladimir Mandic 7615b6b234 1.1.9 2021-04-03 11:02:55 -04:00
Vladimir Mandic 5e88795227 fix linting and tests 2021-04-03 11:02:49 -04:00
Vladimir Mandic d1e5e71079 1.1.8 2021-04-01 13:41:16 -04:00
Vladimir Mandic 62f123da0a update 2021-04-01 13:39:54 -04:00
Vladimir Mandic dd024e0ebf 1.1.7 2021-03-31 07:01:25 -04:00
Vladimir Mandic c15e6a5ba4 enable minify 2021-03-31 07:01:22 -04:00
Vladimir Mandic efd2019e19 1.1.6 2021-03-26 10:26:06 -04:00
Vladimir Mandic 40b3a65bdc update node demos 2021-03-26 10:26:02 -04:00
Vladimir Mandic 23bdd3f086 update readme 2021-03-25 07:31:18 -04:00
Vladimir Mandic eaa298211e update description 2021-03-24 22:37:43 -04:00
Vladimir Mandic f47c05cc13 update readme 2021-03-23 15:48:23 -04:00
Vladimir Mandic 18a16a9f2c 1.1.5 2021-03-23 09:36:46 -04:00
Vladimir Mandic 2ad4fc24db add node-canvas demo 2021-03-23 09:36:41 -04:00
Vladimir Mandic 9ccaf781ab refactoring 2021-03-19 21:39:45 -04:00
Vladimir Mandic 09698a891b update 2021-03-19 18:47:34 -04:00
Vladimir Mandic 1b68ca1160 refactoring 2021-03-19 18:46:36 -04:00
Vladimir Mandic d85c913347 update github templates 2021-03-18 11:59:18 -04:00
Vladimir Mandic 325e3852e7 update 2021-03-18 07:39:56 -04:00
Vladimir Mandic 8053e5de99 1.1.4 2021-03-18 06:34:44 -04:00
Vladimir Mandic 98c07fa123 update docs 2021-03-18 06:34:38 -04:00
Vladimir Mandic 796ba2dda3 1.1.3 2021-03-16 06:59:51 -04:00
Vladimir Mandic 4ea115ea0d fix for seedrandom 2021-03-16 06:59:48 -04:00
Vladimir Mandic 4a6572f3ba update 2021-03-15 08:52:51 -04:00
Vladimir Mandic 090a1d9e4b 1.1.2 2021-03-15 08:52:32 -04:00
Vladimir Mandic 5a1cc87be2 update readme 2021-03-15 08:37:40 -04:00
Vladimir Mandic 748998c921 update issue template 2021-03-14 14:21:27 -04:00
Vladimir Mandic 2bbfd8490a create templates 2021-03-14 14:19:42 -04:00
Vladimir Mandic 3238d8b26c
Create codeql-analysis.yml 2021-03-14 14:14:49 -04:00
Vladimir Mandic 527c0de84c 1.1.1 2021-03-14 13:38:13 -04:00
Vladimir Mandic ee6f3398a4 update docs 2021-03-14 09:34:20 -04:00
Vladimir Mandic d29d073c4e full rebuild 2021-03-14 09:28:18 -04:00
Vladimir Mandic ad61f77ea2 reformatted model manifests and weights 2021-03-14 09:23:45 -04:00
Vladimir Mandic 8bc4f095f4 create api specs 2021-03-14 08:47:38 -04:00
Vladimir Mandic cd022855eb update 2021-03-13 12:28:00 -05:00
Vladimir Mandic e43d1b9472 update 2021-03-13 12:26:31 -05:00
Vladimir Mandic de7ef14bf5 update docs 2021-03-13 12:21:32 -05:00
Vladimir Mandic e22ba62899 update readme 2021-03-09 17:33:30 -05:00
Vladimir Mandic 6e115bb37f 1.0.2 2021-03-09 17:32:35 -05:00
Vladimir Mandic 863c6fcd7a update tfjs and esbuild 2021-03-09 17:32:33 -05:00
Vladimir Mandic d024b045d4 update 2021-03-09 13:38:30 -05:00
Vladimir Mandic ecc6fcf668 update dependencies 2021-03-09 13:33:07 -05:00
Vladimir Mandic 8bbd6b875d update 2021-03-09 13:28:55 -05:00
Vladimir Mandic 191aed5336 1.0.1 2021-03-09 13:28:42 -05:00
Vladimir Mandic 0a459bf071 update 2021-03-09 13:28:29 -05:00
Vladimir Mandic 39f7635704 update 2021-03-08 15:07:31 -05:00
Vladimir Mandic ba98b5a9e2 add badges 2021-03-08 14:26:04 -05:00
Vladimir Mandic 7a93246163 optimize for npm 2021-03-08 14:17:39 -05:00
Vladimir Mandic 40e6d5d9cd 0.30.6 2021-03-08 08:55:53 -05:00
Vladimir Mandic 77fb56eb1a update face angle algorithm 2021-03-08 08:55:51 -05:00
Vladimir Mandic c0d2eda2d7 added typings for face angle 2021-03-07 21:15:53 -05:00
Vladimir Mandic acaab78f62 disable landmark printing 2021-03-07 10:09:45 -05:00
Vladimir Mandic afc7e6b645 0.30.5 2021-03-07 10:07:20 -05:00
Vladimir Mandic 53f8dcc7a5 update 2021-03-07 10:07:18 -05:00
Vladimir Mandic e2f1fc3641 enabled live demo on gitpages 2021-03-07 10:05:35 -05:00
Vladimir Mandic c67e78c84b 0.30.4 2021-03-07 09:58:35 -05:00
Vladimir Mandic 8b6d1b76df added face angle calculations 2021-03-07 09:58:20 -05:00
Vladimir Mandic 3d7007f13d added documentation 2021-03-07 07:26:23 -05:00
Vladimir Mandic c1d5153017 package update 2021-03-04 10:47:08 -05:00
Vladimir Mandic 733279eae7 0.30.3 2021-03-04 10:46:45 -05:00
Vladimir Mandic af816ec567 update 2021-02-26 16:03:20 -05:00
Vladimir Mandic e867864bcc 0.30.2 2021-02-26 16:03:04 -05:00
Vladimir Mandic 37b81af913 update 2021-02-25 08:13:15 -05:00
Vladimir Mandic a2f5aee755 0.30.1 2021-02-25 08:12:51 -05:00
Vladimir Mandic a3e213532e update to tfjs 3.2.0 2021-02-25 08:12:39 -05:00
Vladimir Mandic 6ad9c84b0a 0.13.3 2021-02-21 10:01:00 -05:00
Vladimir Mandic 48792e77ce added node-cpu target 2021-02-21 10:00:57 -05:00
Vladimir Mandic 1f264554f6
Merge pull request #39 from xemle/feature/node-cpu
Add node-cpu build for non supported systems of libtensorflow
2021-02-21 09:53:41 -05:00
Sebastian Felis beff86e8b1 Add node-cpu build for non supported systems of libtensorflow 2021-02-21 12:59:09 +01:00
Vladimir Mandic 823bbe443e 0.13.2 2021-02-20 21:49:45 -05:00
Vladimir Mandic b0098e8f17 update 2021-02-20 21:49:39 -05:00
Vladimir Mandic d5b0c77e51 0.13.1 2021-02-20 21:13:07 -05:00
Vladimir Mandic 52849b63a7 0.12.10 2021-02-20 21:12:50 -05:00
Vladimir Mandic 8deecc33b5 exception handling 2021-02-20 21:12:33 -05:00
Vladimir Mandic c34db982bc 0.12.9 2021-02-20 21:06:40 -05:00
Vladimir Mandic a4646a2389 exception handling 2021-02-20 21:06:33 -05:00
Vladimir Mandic ec6f4f8547 0.12.8 2021-02-20 21:00:20 -05:00
Vladimir Mandic 15d0176596 exception handling 2021-02-20 21:00:16 -05:00
Vladimir Mandic 586dcdf477 full rebuild 2021-02-17 09:46:43 -05:00
Vladimir Mandic ee4bbe8749 0.12.7 2021-02-17 09:22:36 -05:00
Vladimir Mandic 675d77937c update for tfjs 3.1.0 2021-02-17 09:22:31 -05:00
Vladimir Mandic d345b24254 update 2021-02-13 08:39:24 -05:00
Vladimir Mandic 7e82c8fbcf 0.12.6 2021-02-13 08:38:42 -05:00
Vladimir Mandic c121965515 update 2021-02-13 08:38:41 -05:00
Vladimir Mandic 6051d86d4d 0.12.5 2021-02-12 09:11:01 -05:00
Vladimir Mandic 24bc527c4b update 2021-02-12 09:11:00 -05:00
Vladimir Mandic 4e77e6e52d 0.12.4 2021-02-06 10:18:47 -05:00
Vladimir Mandic 69a9faa2a5 update 2021-02-06 10:18:46 -05:00
Vladimir Mandic 8226a7eaac 0.12.3 2021-02-06 10:16:26 -05:00
Vladimir Mandic 2984f2df1f update 2021-02-06 10:16:25 -05:00
Vladimir Mandic c3af6149cc 0.12.2 2021-02-02 20:36:31 -05:00
Vladimir Mandic c79f6e7976 update 2021-02-02 20:36:31 -05:00
671 changed files with 22914 additions and 53905 deletions

.build.json (new file, 148 lines)

@ -0,0 +1,148 @@
{
"log": {
"enabled": false,
"debug": false,
"console": true,
"output": "build.log"
},
"profiles": {
"production": ["compile", "typings", "typedoc", "lint", "changelog"],
"development": ["serve", "watch", "compile"]
},
"clean": {
"locations": ["dist/*", "typedoc/*", "types/lib/src"]
},
"lint": {
"locations": [ "src/" ],
"rules": { }
},
"changelog": {
"log": "CHANGELOG.md"
},
"serve": {
"sslKey": "cert/https.key",
"sslCrt": "cert/https.crt",
"httpPort": 8000,
"httpsPort": 8001,
"documentRoot": ".",
"defaultFolder": "demo",
"defaultFile": "index.html"
},
"build": {
"global": {
"target": "es2018",
"treeShaking": true,
"ignoreAnnotations": true,
"sourcemap": false,
"banner": { "js": "/*\n Face-API\n homepage: <https://github.com/vladmandic/face-api>\n author: <https://github.com/vladmandic>'\n*/\n" }
},
"targets": [
{
"name": "tfjs/browser/tf-version",
"platform": "browser",
"format": "esm",
"input": "src/tfjs/tf-version.ts",
"output": "dist/tfjs.version.js"
},
{
"name": "tfjs/node/cpu",
"platform": "node",
"format": "cjs",
"input": "src/tfjs/tf-node.ts",
"output": "dist/tfjs.esm.js",
"external": ["@tensorflow"]
},
{
"name": "faceapi/node/cpu",
"platform": "node",
"format": "cjs",
"input": "src/index.ts",
"output": "dist/face-api.node.js",
"external": ["@tensorflow"]
},
{
"name": "tfjs/node/gpu",
"platform": "node",
"format": "cjs",
"input": "src/tfjs/tf-node-gpu.ts",
"output": "dist/tfjs.esm.js",
"external": ["@tensorflow"]
},
{
"name": "faceapi/node/gpu",
"platform": "node",
"format": "cjs",
"input": "src/index.ts",
"output": "dist/face-api.node-gpu.js",
"external": ["@tensorflow"]
},
{
"name": "tfjs/node/wasm",
"platform": "node",
"format": "cjs",
"input": "src/tfjs/tf-node-wasm.ts",
"output": "dist/tfjs.esm.js",
"external": ["@tensorflow"]
},
{
"name": "faceapi/node/wasm",
"platform": "node",
"format": "cjs",
"input": "src/index.ts",
"output": "dist/face-api.node-wasm.js",
"external": ["@tensorflow"]
},
{
"name": "tfjs/browser/esm/nobundle",
"platform": "browser",
"format": "esm",
"input": "src/tfjs/tf-browser.ts",
"output": "dist/tfjs.esm.js",
"external": ["@tensorflow"]
},
{
"name": "faceapi/browser/esm/nobundle",
"platform": "browser",
"format": "esm",
"input": "src/index.ts",
"output": "dist/face-api.esm-nobundle.js",
"external": ["@tensorflow"]
},
{
"name": "tfjs/browser/esm/bundle",
"platform": "browser",
"format": "esm",
"input": "src/tfjs/tf-browser.ts",
"output": "dist/tfjs.esm.js"
},
{
"name": "faceapi/browser/iife/bundle",
"platform": "browser",
"format": "iife",
"globalName": "faceapi",
"minify": true,
"input": "src/index.ts",
"output": "dist/face-api.js",
"external": ["@tensorflow"]
},
{
"name": "faceapi/browser/esm/bundle",
"platform": "browser",
"format": "esm",
"sourcemap": true,
"input": "src/index.ts",
"output": "dist/face-api.esm.js",
"typings": "types/lib",
"typedoc": "typedoc",
"external": ["@tensorflow"]
}
]
},
"watch": {
"enabled": true,
"locations": [ "src/**" ]
},
"typescript": {
"allowJs": false
}
}

.eslintrc.json (modified)

@ -3,50 +3,74 @@
"env": { "env": {
"browser": true, "browser": true,
"commonjs": true, "commonjs": true,
"es6": true,
"node": true, "node": true,
"es2020": true "es2020": true
}, },
"parser": "@typescript-eslint/parser", "parser": "@typescript-eslint/parser",
"parserOptions": { "ecmaVersion": 2020 }, "parserOptions": { "ecmaVersion": "latest" },
"plugins": ["@typescript-eslint"], "plugins": [
"@typescript-eslint"
],
"extends": [ "extends": [
"eslint:recommended", "eslint:recommended",
"plugin:import/errors", "plugin:import/errors",
"plugin:import/warnings", "plugin:import/warnings",
"plugin:import/typescript",
"plugin:node/recommended", "plugin:node/recommended",
"plugin:promise/recommended", "plugin:promise/recommended",
"plugin:json/recommended-with-comments", "plugin:@typescript-eslint/eslint-recommended",
"plugin:@typescript-eslint/recommended",
"airbnb-base" "airbnb-base"
], ],
"ignorePatterns": [ "node_modules", "types" ], "ignorePatterns": [ "node_modules", "types" ],
"settings": {
"import/resolver": {
"node": {
"extensions": [".js", ".ts"]
}
}
},
"rules": { "rules": {
"max-len": [1, 275, 3], "@typescript-eslint/no-explicit-any": "off",
"@typescript-eslint/ban-types": "off",
"@typescript-eslint/ban-ts-comment": "off",
"@typescript-eslint/explicit-module-boundary-types": "off",
"@typescript-eslint/no-var-requires": "off",
"@typescript-eslint/no-empty-object-type": "off",
"@typescript-eslint/no-require-imports": "off",
"camelcase": "off", "camelcase": "off",
"class-methods-use-this": "off", "class-methods-use-this": "off",
"default-param-last": "off",
"dot-notation": "off",
"func-names": "off",
"guard-for-in": "off",
"import/extensions": "off", "import/extensions": "off",
"import/no-cycle": "off", "import/no-extraneous-dependencies": "off",
"import/no-named-as-default": "off",
"import/no-unresolved": "off",
"import/prefer-default-export": "off", "import/prefer-default-export": "off",
"lines-between-class-members": "off",
"max-len": [1, 275, 3],
"newline-per-chained-call": "off",
"no-async-promise-executor": "off",
"no-await-in-loop": "off", "no-await-in-loop": "off",
"no-bitwise": "off",
"no-case-declarations":"off",
"no-continue": "off", "no-continue": "off",
"no-loop-func": "off",
"no-mixed-operators": "off", "no-mixed-operators": "off",
"no-param-reassign": "off", "no-param-reassign":"off",
"no-plusplus": "off", "no-plusplus": "off",
"no-regex-spaces": "off",
"no-restricted-globals": "off",
"no-restricted-syntax": "off", "no-restricted-syntax": "off",
"no-return-assign": "off", "no-return-assign": "off",
"no-underscore-dangle": "off", "no-underscore-dangle": "off",
"node/no-missing-import": "off", "no-promise-executor-return": "off",
"node/no-missing-import": ["error", { "tryExtensions": [".js", ".json", ".ts"] }],
"node/no-unpublished-import": "off",
"node/no-unpublished-require": "off",
"node/no-unsupported-features/es-syntax": "off", "node/no-unsupported-features/es-syntax": "off",
"no-lonely-if": "off",
"node/shebang": "off",
"object-curly-newline": "off",
"prefer-destructuring": "off", "prefer-destructuring": "off",
"radix": "off", "prefer-template":"off",
"object-curly-newline": "off" "promise/always-return": "off",
"promise/catch-or-return": "off",
"promise/no-nesting": "off",
"radix": "off"
} }
} }

.github/FUNDING.yml (new file, 13 lines)

@ -0,0 +1,13 @@
# These are supported funding model platforms
github: [vladmandic]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

.github/ISSUE_TEMPLATE.md (deleted)

@ -1 +0,0 @@
Please include output of `faceapi.version` object or specify details about your version and platform (OS, NodeJS version, Browser version).

.github/ISSUE_TEMPLATE/issue.md (new file, 28 lines)

@ -0,0 +1,28 @@
---
name: Issue
about: Issue
title: ''
labels: ''
assignees: vladmandic
---
**Issue Description**
**Steps to Reproduce**
**Expected Behavior**
**Environment**
- Module version?
- Built-in demo or custom code?
- Type of module used (e.g. `js`, `esm`, `esm-nobundle`)?
- Browser or NodeJS and version (e.g. NodeJS 14.15 or Chrome 89)?
- OS and Hardware platform (e.g. Windows 10, Ubuntu Linux on x64, Android 10)?
- Packager (if any) (e.g, webpack, rollup, parcel, esbuild, etc.)?
**Additional**
- For installation or startup issues include your `package.json`
- For usage issues, it is recommended to post your code as [gist](https://gist.github.com/)

.github/PULL_REQUEST_TEMPLATE.md (new file)

@ -0,0 +1,3 @@
# Pull Request Template
<br>

.github/workflows/codeql-analysis.yml (new file, 67 lines)

@ -0,0 +1,67 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ master ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ master ]
schedule:
- cron: '21 6 * * 0'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
language: [ 'javascript' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
# Learn more:
# https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
steps:
- name: Checkout repository
uses: actions/checkout@v2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v1
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1

.gitignore (modified, 1 line added)

@ -1 +1,2 @@
node_modules
pnpm-lock.yaml

.hintrc (new file, 13 lines)

@ -0,0 +1,13 @@
{
"extends": [
"web-recommended"
],
"browserslist": [
"last 1 versions",
"not ie < 20"
],
"hints": {
"no-inline-styles": "off",
"meta-charset-utf-8": "off"
}
}

.markdownlint.json (new file, 7 lines)

@ -0,0 +1,7 @@
{
"MD012": false,
"MD013": false,
"MD033": false,
"MD036": false,
"MD041": false
}

.npmignore (new file, 5 lines)

@ -0,0 +1,5 @@
node_modules
pnpm-lock.yaml
typedoc
test
types/lib

.npmrc (new file, 5 lines)

@ -0,0 +1,5 @@
force=true
production=true
legacy-peer-deps=true
strict-peer-dependencies=false
node-options='--no-deprecation'

.vscode/settings.json (new file, 3 lines)

@ -0,0 +1,3 @@
{
"typescript.tsdk": "node_modules/typescript/lib"
}

CHANGELOG.md (new file, 473 lines)

@ -0,0 +1,473 @@
# @vladmandic/face-api
Version: **1.7.15**
Description: **FaceAPI: AI-powered Face Detection & Rotation Tracking, Face Description & Recognition, Age & Gender & Emotion Prediction for Browser and NodeJS using TensorFlow/JS**
Author: **Vladimir Mandic <mandic00@live.com>**
License: **MIT**
Repository: **<https://github.com/vladmandic/face-api>**
## Changelog
### **1.7.15** 2025/02/05 mandic00@live.com
### **origin/master** 2024/09/10 mandic00@live.com
### **1.7.14** 2024/09/10 mandic00@live.com
- rebuild
- merge pull request #188 from rebser/master
- fixing leaking eventhandlers when using htmlcanvaselement
- rebuild types
- rebuild
### **1.7.13** 2024/01/17 mandic00@live.com
- merge pull request #186 from khwalkowicz/master
- feat: enable noimplicitany
### **release: 1.7.12** 2023/06/12 mandic00@live.com
### **1.7.12** 2023/06/12 mandic00@live.com
### **1.7.11** 2023/05/08 mandic00@live.com
### **1.7.10** 2023/03/21 mandic00@live.com
- change typedefs
### **1.7.9** 2023/01/29 mandic00@live.com
### **1.7.8** 2023/01/06 mandic00@live.com
### **1.7.7** 2022/12/01 mandic00@live.com
### **1.7.6** 2022/10/18 mandic00@live.com
- fix face angles (yaw, pitch, & roll) accuracy (#130)
### **1.7.5** 2022/10/09 mandic00@live.com
- create funding.yml
- add node-wasm demo
### **1.7.4** 2022/09/25 mandic00@live.com
- improve face compare performance
### **1.7.3** 2022/08/24 mandic00@live.com
- refresh release
### **1.7.2** 2022/08/23 mandic00@live.com
- document and remove optional dependencies
### **release: 1.7.1** 2022/07/25 mandic00@live.com
### **1.7.1** 2022/07/25 mandic00@live.com
- refactor dependencies
- full rebuild
### **1.6.11** 2022/05/24 mandic00@live.com
### **1.6.10** 2022/05/24 mandic00@live.com
### **1.6.9** 2022/05/18 mandic00@live.com
### **1.6.8** 2022/05/09 mandic00@live.com
- exclude impossible detected face boxes
### **1.6.7** 2022/04/01 mandic00@live.com
- fixed typo error (#97)
### **1.6.6** 2022/03/04 mandic00@live.com
### **1.6.5** 2022/02/07 mandic00@live.com
### **1.6.4** 2022/01/14 mandic00@live.com
- add node with wasm build target
### **1.6.3** 2022/01/06 mandic00@live.com
### **1.6.2** 2022/01/01 mandic00@live.com
### **1.6.1** 2021/12/09 mandic00@live.com
- rebuild
- release preview
- switch to custom tfjs and new typedefs
- rebuild
### **1.5.8** 2021/11/30 mandic00@live.com
### **1.5.7** 2021/10/28 mandic00@live.com
### **1.5.6** 2021/10/22 mandic00@live.com
### **release: 1.5.5** 2021/10/19 mandic00@live.com
### **1.5.5** 2021/10/19 mandic00@live.com
- allow backend change in demo via url params
- add node-match demo
- fix face matcher
### **1.5.4** 2021/09/29 mandic00@live.com
### **1.5.3** 2021/09/16 mandic00@live.com
- simplify tfjs imports
- reduce bundle size
- enable webgl uniforms
### **1.5.2** 2021/09/10 mandic00@live.com
- redesign build platform
### **1.5.1** 2021/09/08 mandic00@live.com
### **1.4.2** 2021/08/31 mandic00@live.com
### **release: 1.4.1** 2021/07/29 mandic00@live.com
### **1.4.1** 2021/07/29 mandic00@live.com
### **release: 1.3.1** 2021/06/18 mandic00@live.com
### **1.3.1** 2021/06/08 mandic00@live.com
- fix face expression detection (#56)
- add buffertovideo
- fix git conflicts
- fix tsc error (#55)
- force typescript 4.2 due to typedoc incompatibility with ts 4.3
### **1.2.5** 2021/05/27 mandic00@live.com
- add buffertovideo and fetchvideo (#54)
### **1.2.4** 2021/05/18 mandic00@live.com
### **1.2.3** 2021/05/04 mandic00@live.com
### **update for tfjs 3.6.0** 2021/04/30 mandic00@live.com
### **1.2.2** 2021/04/30 mandic00@live.com
- add node-wasm demo
- accept uri as input to demo node and node-canvas
- major version full rebuild
### **1.2.1** 2021/04/22 mandic00@live.com
- add npmrc
- add canvas/image based demo to decode webp
### **1.1.12** 2021/04/13 mandic00@live.com
### **1.1.11** 2021/04/06 mandic00@live.com
- merge pull request #46 from mayankagarwals/demo_latencytest_fix
- fixed bug which led to latency not being measured and wrong output on console for demo
- add cdn links
### **1.1.10** 2021/04/04 mandic00@live.com
- added webhints
### **1.1.9** 2021/04/03 mandic00@live.com
- fix linting and tests
### **1.1.8** 2021/04/01 mandic00@live.com
### **1.1.7** 2021/03/31 mandic00@live.com
- enable minify
### **1.1.6** 2021/03/26 mandic00@live.com
### **1.1.5** 2021/03/23 mandic00@live.com
- add node-canvas demo
- refactoring
### **1.1.4** 2021/03/18 mandic00@live.com
### **1.1.3** 2021/03/16 mandic00@live.com
- fix for seedrandom
### **1.1.2** 2021/03/15 mandic00@live.com
- create templates
- create codeql-analysis.yml
### **1.1.1** 2021/03/14 mandic00@live.com
- full rebuild
- reformatted model manifests and weights
- create api specs
### **1.0.2** 2021/03/09 mandic00@live.com
### **release: 1.0.1** 2021/03/09 mandic00@live.com
### **1.0.1** 2021/03/09 mandic00@live.com
- add badges
- optimize for npm
- 0.30.6
- added typings for face angle
- disable landmark printing
- 0.30.5
- enabled live demo on gitpages
- 0.30.4
- added face angle calculations
- added documentation
- package update
- 0.30.3
- 0.30.2
- 0.30.1
- 0.13.3
- added node-cpu target
- merge pull request #39 from xemle/feature/node-cpu
- add node-cpu build for non supported systems of libtensorflow
- 0.13.2
- 0.13.1
- 0.12.10
- exception handling
- 0.12.9
- exception handling
- 0.12.8
- exception handling
### **0.12.7** 2021/02/17 mandic00@live.com
- 0.12.7
- 0.12.6
- 0.12.5
- 0.12.4
- 0.12.3
- 0.12.2
### **update for tfjs 3.0.0** 2021/01/29 mandic00@live.com
- 0.12.1
- rebuild
- 0.11.6
- add check for null face descriptor
- merge pull request #34 from patrickhulce/patch-1
- fix: return empty descriptor for zero-sized faces
- 0.11.5
- 0.11.4
- 0.11.3
- fix typo
- enable full minification
- 0.11.2
- full rebuild
- 0.11.1
- added live webcam demo
- 0.10.2
- ts linting
- version bump
- 0.10.1
- full re-lint and typings generation
- rebuild
### **0.9.5** 2020/12/19 mandic00@live.com
- added tsc build typings
### **0.9.4** 2020/12/15 mandic00@live.com
- package update
### **0.9.3** 2020/12/12 mandic00@live.com
- remove old demo
- merge branch 'master' of https://github.com/vladmandic/face-api
### **0.9.2** 2020/12/08 mandic00@live.com
- merge pull request #19 from meeki007/patch-3
- remove http reff
- fixed typos
### **0.9.1** 2020/12/02 mandic00@live.com
- redesigned tfjs bundling and build process
- push
- merge pull request #17 from meeki007/patch-2
- merge pull request #16 from meeki007/patch-1
- added link to documentation for js.tensorflow 2.7.0
- add comments and fix typo
### **0.8.9** 2020/11/25 mandic00@live.com
- removed node-fetch dependency
### **0.8.8** 2020/11/03 mandic00@live.com
### **0.8.7** 2020/11/03 mandic00@live.com
- removed type from package.json and added nodejs example
### **0.8.6** 2020/10/29 mandic00@live.com
### **0.8.5** 2020/10/27 mandic00@live.com
### **0.8.4** 2020/10/27 mandic00@live.com
- fix webpack compatibility issue
### **0.8.3** 2020/10/25 mandic00@live.com
### **0.8.2** 2020/10/25 mandic00@live.com
- fix for wasm compatibility
### **0.8.1** 2020/10/15 mandic00@live.com
- added cjs builds
### **0.7.4** 2020/10/14 mandic00@live.com
- added nobundle
### **0.7.3** 2020/10/13 mandic00@live.com
### **0.7.2** 2020/10/13 mandic00@live.com
### **0.7.1** 2020/10/13 mandic00@live.com
- switched to monolithic build
### **0.6.3** 2020/10/12 mandic00@live.com
### **0.6.2** 2020/10/11 mandic00@live.com
### **0.6.1** 2020/10/11 mandic00@live.com
- major update
- tfjs 2.6.0
### **0.5.3** 2020/09/18 cyan00@gmail.com
### **0.5.2** 2020/09/16 cyan00@gmail.com
- added build for node
- upgrade to tfjs@2.4.0 and ts-node@9.0.0
- create issue.md
- added issue template
- added faceapi.version object
### **0.5.1** 2020/09/08 cyan00@gmail.com
### **0.4.6** 2020/09/08 cyan00@gmail.com
- added test for @tfjs and backends loaded
### **0.4.5** 2020/08/31 cyan00@gmail.com
- adding build
### **0.4.4** 2020/08/30 cyan00@gmail.com
- change build process
### **0.4.3** 2020/08/29 cyan00@gmail.com
- fix node build error
### **0.4.2** 2020/08/29 cyan00@gmail.com
### **0.4.1** 2020/08/27 cyan00@gmail.com
### **0.3.9** 2020/08/27 cyan00@gmail.com
- added example
### **0.3.8** 2020/08/26 cyan00@gmail.com
- re-added ssd_mobilenet
### **0.3.7** 2020/08/22 cyan00@gmail.com
### **0.3.6** 2020/08/21 cyan00@gmail.com
### **0.3.5** 2020/08/19 cyan00@gmail.com
### **0.3.4** 2020/08/19 cyan00@gmail.com
- switch to commonjs and es2018 for compatibility
### **0.3.3** 2020/08/19 cyan00@gmail.com
### **0.3.2** 2020/08/18 cyan00@gmail.com
### **0.3.1** 2020/08/18 cyan00@gmail.com
- updated build script
- npm publish
- added pre-compiled build
- added pre-bundled dist
- removed unnecessary weights
- initial commit

CODE_OF_CONDUCT (new file, 24 lines)

@ -0,0 +1,24 @@
# Code of Conduct

Use your best judgement
If it will possibly make others uncomfortable, do not post it

- Be respectful
Disagreement is not an opportunity to attack someone else's thoughts or opinions
Although views may differ, remember to approach every situation with patience and care
- Be considerate
Think about how your contribution will affect others in the community
- Be open minded
Embrace new people and new ideas. Our community is continually evolving and we welcome positive change
- Be mindful of your language
Any of the following behavior is unacceptable:
  - Offensive comments of any kind
  - Threats or intimidation
  - Sexually explicit material
  - Or any other kinds of harassment

If you believe someone is violating the code of conduct, we ask that you report it
Participants asked to stop any harassing behavior are expected to comply immediately

CONTRIBUTING (new file, 17 lines)

@ -0,0 +1,17 @@
# Contributing Guidelines

Pull requests from everyone are welcome

Procedure for contributing:

- Create a fork of the repository on github
In the top right corner of GitHub, select "Fork"
- Clone your forked repository to your local system
`git clone https://github.com/<your-username>/<your-fork>`
- Make your changes
- Test your changes against code guidelines
`npm run lint`
- Push changes to your fork
- Submit a PR (pull request)

Your pull request will be reviewed and, pending review results, merged into the main branch
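Taken together, the steps above amount to a short command-line workflow. The following is an illustrative sketch only: the `<your-username>` placeholder is kept as-is, and the `npm install --production=false` flag for development dependencies is taken from the README further below.

```shell
# illustrative contributing workflow; <your-username> is a placeholder
git clone https://github.com/<your-username>/face-api
cd face-api
npm install --production=false    # install including development dependencies
# ... make your changes ...
npm run lint                      # test changes against code guidelines
git commit -am "describe your change"
git push                          # push to your fork, then open a pull request
```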

LICENSE (modified)

@ -1,6 +1,6 @@
MIT License

Copyright (c) Vladimir Mandic

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.md (modified, 477 lines)

@ -1,49 +1,26 @@
![Git Version](https://img.shields.io/github/package-json/v/vladmandic/face-api?style=flat-square&svg=true&label=git)
![NPM Version](https://img.shields.io/npm/v/@vladmandic/face-api.png?style=flat-square)
![Last Commit](https://img.shields.io/github/last-commit/vladmandic/face-api?style=flat-square?svg=true)
![License](https://img.shields.io/github/license/vladmandic/face-api?style=flat-square?svg=true)
![GitHub Status Checks](https://img.shields.io/github/checks-status/vladmandic/face-api/master?style=flat-square?svg=true)
![Vulnerabilities](https://img.shields.io/snyk/vulnerabilities/github/vladmandic/face-api?style=flat-square?svg=true)

# FaceAPI

**AI-powered Face Detection & Rotation Tracking, Face Description & Recognition, Age & Gender & Emotion Prediction for Browser and NodeJS using TensorFlow/JS**

<br>

**Live Demo**: <https://vladmandic.github.io/face-api/demo/webcam.html>

<br>

## Additional Documentation

- [**Tutorial**](TUTORIAL.md)
- [**TypeDoc API Specification**](https://vladmandic.github.io/face-api/typedoc/index.html)

<br><hr><br>

## Examples
@ -51,78 +28,117 @@ Which means valid models are **tinyFaceDetector** and **mobileNetv1**
### Browser

Browser example that uses static images and showcases both models
as well as all of the extensions is included in `/demo/index.html`
Example can be accessed directly using Git pages using URL:
<https://vladmandic.github.io/face-api/demo/index.html>

Browser example that uses live webcam is included in `/demo/webcam.html`
Example can be accessed directly using Git pages using URL:
<https://vladmandic.github.io/face-api/demo/webcam.html>

<br>

**Demo using FaceAPI to process images**
*Note: Photos shown below are taken by me*

![screenshot](demo/screenshot-images.png)

**Demo using FaceAPI to process live webcam**

![screenshot](demo/screenshot-webcam.png)

<br>

### NodeJS

NodeJS examples are:

- `/demo/node-simple.js`:
Simplest possible NodeJS demo for FaceAPI in under 30 lines of JavaScript code (a minimal sketch follows the sample log below)
- `/demo/node.js`:
Using `TFJS` native methods to load images without external dependencies
- `/demo/node-canvas.js` and `/demo/node-image.js`:
Using external `canvas` module to load images
Which also allows for image drawing and saving inside `NodeJS` environment
- `/demo/node-match.js`:
Simple demo that compares face similarity from a given image
to a second image or list of images in a folder
- `/demo/node-multiprocess.js`:
Multiprocessing showcase that uses pool of worker processes
(`node-multiprocess-worker.js`)
Main starts fixed pool of worker processes with each worker having
its instance of `FaceAPI`
Workers communicate with main when they are ready and main dispatches
job to each ready worker until job queue is empty
```json
2021-03-14 08:42:03 INFO: @vladmandic/face-api version 1.0.2
2021-03-14 08:42:03 INFO: User: vlado Platform: linux Arch: x64 Node: v15.7.0
2021-03-14 08:42:03 INFO: FaceAPI multi-process test
2021-03-14 08:42:03 STATE: Main: started worker: 1888019
2021-03-14 08:42:03 STATE: Main: started worker: 1888025
2021-03-14 08:42:04 STATE: Worker: PID: 1888025 TensorFlow/JS 3.3.0 FaceAPI 1.0.2 Backend: tensorflow
2021-03-14 08:42:04 STATE: Worker: PID: 1888019 TensorFlow/JS 3.3.0 FaceAPI 1.0.2 Backend: tensorflow
2021-03-14 08:42:04 STATE: Main: dispatching to worker: 1888019
2021-03-14 08:42:04 STATE: Main: dispatching to worker: 1888025
2021-03-14 08:42:04 DATA: Worker received message: 1888019 { image: 'demo/sample1.jpg' }
2021-03-14 08:42:04 DATA: Worker received message: 1888025 { image: 'demo/sample2.jpg' }
2021-03-14 08:42:06 DATA: Main: worker finished: 1888025 detected faces: 3
2021-03-14 08:42:06 STATE: Main: dispatching to worker: 1888025
2021-03-14 08:42:06 DATA: Worker received message: 1888025 { image: 'demo/sample3.jpg' }
2021-03-14 08:42:06 DATA: Main: worker finished: 1888019 detected faces: 3
2021-03-14 08:42:06 STATE: Main: dispatching to worker: 1888019
2021-03-14 08:42:06 DATA: Worker received message: 1888019 { image: 'demo/sample4.jpg' }
2021-03-14 08:42:07 DATA: Main: worker finished: 1888025 detected faces: 3
2021-03-14 08:42:07 STATE: Main: dispatching to worker: 1888025
2021-03-14 08:42:07 DATA: Worker received message: 1888025 { image: 'demo/sample5.jpg' }
2021-03-14 08:42:08 DATA: Main: worker finished: 1888019 detected faces: 4
2021-03-14 08:42:08 STATE: Main: dispatching to worker: 1888019
2021-03-14 08:42:08 DATA: Worker received message: 1888019 { image: 'demo/sample6.jpg' }
2021-03-14 08:42:09 DATA: Main: worker finished: 1888025 detected faces: 5
2021-03-14 08:42:09 STATE: Main: worker exit: 1888025 0
2021-03-14 08:42:09 DATA: Main: worker finished: 1888019 detected faces: 4
2021-03-14 08:42:09 INFO: Processed 15 images in 5944 ms
2021-03-14 08:42:09 STATE: Main: worker exit: 1888019 0
```
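The sample log above comes from the multiprocess demo. For the simplest case, a minimal sketch might look like the following. This is not a copy of `/demo/node-simple.js`; the `model` folder and the image names `sample1.jpg`/`sample2.jpg` are assumptions for illustration.

```js
// minimal NodeJS sketch (not one of the bundled demos): detect a face in two
// images and compare their similarity; assumes model files in ./model
const fs = require('fs');
const tf = require('@tensorflow/tfjs-node');
const faceapi = require('@vladmandic/face-api');

async function getDescriptor(file) {
  const buffer = fs.readFileSync(file); // read image file into a buffer
  const tensor = tf.node.decodeImage(buffer, 3); // decode to a 3-channel tensor
  const result = await faceapi
    .detectSingleFace(tensor)
    .withFaceLandmarks()
    .withFaceDescriptor(); // 128-element face embedding
  tf.dispose(tensor); // release tensor memory
  return result ? result.descriptor : null;
}

async function main() {
  await faceapi.nets.ssdMobilenetv1.loadFromDisk('model'); // face detector
  await faceapi.nets.faceLandmark68Net.loadFromDisk('model'); // landmarks, required for descriptors
  await faceapi.nets.faceRecognitionNet.loadFromDisk('model'); // descriptor model
  const desc1 = await getDescriptor('sample1.jpg');
  const desc2 = await getDescriptor('sample2.jpg');
  if (desc1 && desc2) {
    // lower distance means more similar faces; ~0.6 is a commonly used match threshold
    console.log('distance:', faceapi.euclideanDistance(desc1, desc2));
  }
}

main();
```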
### NodeJS Notes
- Supported NodeJS versions are **14** up to **22**
NodeJS version **23** and higher are not supported due to incompatibility with TensorFlow/JS
- `@tensorflow/tfjs-node` or `@tensorflow/tfjs-node-gpu`
must be installed before using any **NodeJS** examples
<br><hr><br>

## Quick Start
Simply include latest version of `FaceAPI` directly from a CDN in your HTML:
(pick one, `jsdelivr` or `unpkg`)
```html
<script src="https://cdn.jsdelivr.net/npm/@vladmandic/face-api/dist/face-api.js"></script>
<script src="https://unpkg.dev/@vladmandic/face-api/dist/face-api.js"></script>
```
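The README does not show page code at this point, but as a hedged illustration of what a page could do once the CDN script has loaded (the `/model` path and the `photo` element id are assumptions, not part of the original text):

```js
// hypothetical page script, run after the CDN <script> above has loaded;
// assumes model files served from /model and an <img id="photo"> on the page
async function run() {
  await faceapi.nets.tinyFaceDetector.loadFromUri('/model'); // load detector weights over http
  const img = document.getElementById('photo');
  const options = new faceapi.TinyFaceDetectorOptions();
  const detections = await faceapi.detectAllFaces(img, options);
  console.log(detections); // array of detections with bounding boxes and scores
}
window.onload = run;
```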
## Installation

`FaceAPI` ships with several pre-built versions of the library:

- `dist/face-api.js`: IIFE format for client-side Browser execution
*with* TFJS pre-bundled
- `dist/face-api.esm.js`: ESM format for client-side Browser execution
*with* TFJS pre-bundled
- `dist/face-api.esm-nobundle.js`: ESM format for client-side Browser execution
*without* TFJS pre-bundled
- `dist/face-api.node.js`: CommonJS format for server-side NodeJS execution
*without* TFJS pre-bundled
- `dist/face-api.node-gpu.js`: CommonJS format for server-side NodeJS execution
*without* TFJS pre-bundled and optimized for CUDA GPU acceleration
Defaults are:

```json
{
"main": "dist/face-api.node-js",
@ -133,29 +149,34 @@ Defaults are:
Bundled `TFJS` can be used directly via export: `faceapi.tf`

Reason for additional `nobundle` version is if you want to
include a specific version of TFJS and not rely on pre-packaged one
`FaceAPI` is compatible with TFJS 2.0+ and TFJS 3.0+

All versions include `sourcemap`

<br><hr><br>

There are several ways to use FaceAPI:
### 1. IIFE script

*Recommended for quick tests and backward compatibility with older Browsers that do not support ESM such as IE*

This is the simplest way for usage within Browser
Simply download `dist/face-api.js`, include it in your `HTML` file & it's ready to use:

```html
<script src="dist/face-api.js"></script>
```
Or skip the download and include it directly from a CDN:
```html
<script src="https://cdn.jsdelivr.net/npm/@vladmandic/face-api/dist/face-api.js"></script>
```
IIFE script bundles TFJS and auto-registers global namespace `faceapi` within Window object which can be accessed directly from a `<script>` tag or from your JS file.

<br>
@ -171,6 +192,7 @@ To use ESM import directly in a Browser, you must import your script (e.g. `inde
```html
<script src="./index.js" type="module">
```

and then in your `index.js`

```js
@ -180,6 +202,7 @@ and then in your `index.js`
#### 2.2. With Bundler

Same as above, but expectation is that you've installed `@vladmandic/face-api` package:

```shell
npm install @vladmandic/face-api
```
@ -190,11 +213,15 @@ in which case, you do not need to import a script as module - that depends on yo
```js
import * as faceapi from '@vladmandic/face-api';
```

or if your bundler doesn't recognize `recommended` type, force usage with:

```js
import * as faceapi from '@vladmandic/face-api/dist/face-api.esm.js';
```

or to use non-bundled version

```js
import * as tf from '@tensorflow/tfjs';
import * as faceapi from '@vladmandic/face-api/dist/face-api.esm-nobundle.js';
@@ -208,129 +235,285 @@ or to use non-bundled version
*Recommended for NodeJS projects*
*Note: FaceAPI for NodeJS does not bundle TFJS due to binary dependencies that are installed during TFJS installation*
Install with:
```shell
npm install @tensorflow/tfjs-node
npm install @vladmandic/face-api
```
And then use with:
```js
const tf = require('@tensorflow/tfjs-node');
const faceapi = require('@vladmandic/face-api');
```
If you want to force CommonJS module instead of relying on `recommended` field:
```js
const faceapi = require('@vladmandic/face-api/dist/face-api.node.js');
```
If you want GPU-accelerated execution in NodeJS, you must have CUDA libraries already installed and working
Then install the appropriate version of `FaceAPI`:
```shell
npm install @tensorflow/tfjs-node-gpu
npm install @vladmandic/face-api
```
And then use with:
```js
const tf = require('@tensorflow/tfjs-node-gpu');
const faceapi = require('@vladmandic/face-api/dist/face-api.node-gpu.js'); // this loads face-api version with correct bindings for tfjs-node-gpu
```
If you want to use `FaceAPI` in NodeJS on platforms where **tensorflow** binary libraries are not supported, you can use the NodeJS **WASM** backend.
```shell
npm install @tensorflow/tfjs
npm install @tensorflow/tfjs-backend-wasm
npm install @vladmandic/face-api
```
And then use with:
```js
const tf = require('@tensorflow/tfjs');
const wasm = require('@tensorflow/tfjs-backend-wasm');
const faceapi = require('@vladmandic/face-api/dist/face-api.node-wasm.js'); // use this when using face-api in dev mode
wasm.setWasmPaths('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm/dist/');
await tf.setBackend('wasm');
await tf.ready();
...
```
If you want to use graphical functions inside NodeJS, you must provide an appropriate graphics library, as NodeJS does not include implementations of DOM elements such as HTMLImageElement or HTMLCanvasElement:
Install `Canvas` for NodeJS:
```shell
npm install canvas
```
Patch NodeJS environment to use newly installed `Canvas` library:
```js
const canvas = require('canvas');
const faceapi = require('@vladmandic/face-api');
const { Canvas, Image, ImageData } = canvas
faceapi.env.monkeyPatch({ Canvas, Image, ImageData })
```
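Once patched, the rest of the API works the same as in the browser. A minimal sketch under these assumptions: model files reside in `./model` and a local image `test.jpg` exists (`loadImage` is provided by the `canvas` package):
```js
require('@tensorflow/tfjs-node'); // registers the native tfjs backend
const canvas = require('canvas');
const faceapi = require('@vladmandic/face-api');

faceapi.env.monkeyPatch({ Canvas: canvas.Canvas, Image: canvas.Image, ImageData: canvas.ImageData });

async function detect() {
  await faceapi.nets.ssdMobilenetv1.loadFromDisk('./model'); // assumed model location
  const img = await canvas.loadImage('test.jpg'); // assumed input image
  const detections = await faceapi.detectAllFaces(img);
  console.log(detections);
}
detect();
```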
<br><hr><br>
## Weights
Pretrained models and their weights are included in `./model`.
<br><hr><br>
## Test & Dev Web Server
To install development dependencies, use `npm install --production=false`
Built-in test & dev web server can be started using:
```shell
npm run dev
```
By default it starts an HTTP server on port 8000 and an HTTPS server on port 8001, which can be accessed as:
- <https://localhost:8001/demo/index.html>
- <https://localhost:8001/demo/webcam.html>
```js
2022-01-14 09:56:19 INFO: @vladmandic/face-api version 1.6.4
2022-01-14 09:56:19 INFO: User: vlado Platform: linux Arch: x64 Node: v17.2.0
2022-01-14 09:56:19 INFO: Application: { name: '@vladmandic/face-api', version: '1.6.4' }
2022-01-14 09:56:19 INFO: Environment: { profile: 'development', config: '.build.json', package: 'package.json', tsconfig: true, eslintrc: true, git: true }
2022-01-14 09:56:19 INFO: Toolchain: { build: '0.6.7', esbuild: '0.14.11', typescript: '4.5.4', typedoc: '0.22.10', eslint: '8.6.0' }
2022-01-14 09:56:19 INFO: Build: { profile: 'development', steps: [ 'serve', 'watch', 'compile' ] }
2022-01-14 09:56:19 STATE: WebServer: { ssl: false, port: 8000, root: '.' }
2022-01-14 09:56:19 STATE: WebServer: { ssl: true, port: 8001, root: '.', sslKey: 'build/cert/https.key', sslCrt: 'build/cert/https.crt' }
2022-01-14 09:56:19 STATE: Watch: { locations: [ 'src/**', 'README.md', 'src/**', 'src/**' ] }
2022-01-14 09:56:19 STATE: Compile: { name: 'tfjs/node/cpu', format: 'cjs', platform: 'node', input: 'src/tfjs/tf-node.ts', output: 'dist/tfjs.esm.js', files: 1, inputBytes: 143, outputBytes: 1276 }
2022-01-14 09:56:19 STATE: Compile: { name: 'faceapi/node/cpu', format: 'cjs', platform: 'node', input: 'src/index.ts', output: 'dist/face-api.node.js', files: 162, inputBytes: 234787, outputBytes: 175203 }
2022-01-14 09:56:19 STATE: Compile: { name: 'tfjs/node/gpu', format: 'cjs', platform: 'node', input: 'src/tfjs/tf-node-gpu.ts', output: 'dist/tfjs.esm.js', files: 1, inputBytes: 147, outputBytes: 1296 }
2022-01-14 09:56:19 STATE: Compile: { name: 'faceapi/node/gpu', format: 'cjs', platform: 'node', input: 'src/index.ts', output: 'dist/face-api.node-gpu.js', files: 162, inputBytes: 234807, outputBytes: 175219 }
2022-01-14 09:56:19 STATE: Compile: { name: 'tfjs/node/wasm', format: 'cjs', platform: 'node', input: 'src/tfjs/tf-node-wasm.ts', output: 'dist/tfjs.esm.js', files: 1, inputBytes: 185, outputBytes: 1367 }
2022-01-14 09:56:19 STATE: Compile: { name: 'faceapi/node/wasm', format: 'cjs', platform: 'node', input: 'src/index.ts', output: 'dist/face-api.node-wasm.js', files: 162, inputBytes: 234878, outputBytes: 175294 }
2022-01-14 09:56:19 STATE: Compile: { name: 'tfjs/browser/tf-version', format: 'esm', platform: 'browser', input: 'src/tfjs/tf-version.ts', output: 'dist/tfjs.version.js', files: 1, inputBytes: 1063, outputBytes: 1662 }
2022-01-14 09:56:19 STATE: Compile: { name: 'tfjs/browser/esm/nobundle', format: 'esm', platform: 'browser', input: 'src/tfjs/tf-browser.ts', output: 'dist/tfjs.esm.js', files: 2, inputBytes: 2172, outputBytes: 811 }
2022-01-14 09:56:19 STATE: Compile: { name: 'faceapi/browser/esm/nobundle', format: 'esm', platform: 'browser', input: 'src/index.ts', output: 'dist/face-api.esm-nobundle.js', files: 162, inputBytes: 234322, outputBytes: 169437 }
2022-01-14 09:56:19 STATE: Compile: { name: 'tfjs/browser/esm/bundle', format: 'esm', platform: 'browser', input: 'src/tfjs/tf-browser.ts', output: 'dist/tfjs.esm.js', files: 11, inputBytes: 2172, outputBytes: 2444105 }
2022-01-14 09:56:20 STATE: Compile: { name: 'faceapi/browser/iife/bundle', format: 'iife', platform: 'browser', input: 'src/index.ts', output: 'dist/face-api.js', files: 162, inputBytes: 2677616, outputBytes: 1252572 }
2022-01-14 09:56:20 STATE: Compile: { name: 'faceapi/browser/esm/bundle', format: 'esm', platform: 'browser', input: 'src/index.ts', output: 'dist/face-api.esm.js', files: 162, inputBytes: 2677616, outputBytes: 2435063 }
2022-01-14 09:56:20 INFO: Listening...
...
2022-01-14 09:56:46 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'text/html', size: 1047, url: '/', remote: '::1' }
2022-01-14 09:56:46 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'text/javascript', size: 6919, url: '/index.js', remote: '::1' }
2022-01-14 09:56:46 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'text/javascript', size: 2435063, url: '/dist/face-api.esm.js', remote: '::1' }
2022-01-14 09:56:47 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/octet-stream', size: 4125244, url: '/dist/face-api.esm.js.map', remote: '::1' }
2022-01-14 09:56:47 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/json', size: 3219, url: '/model/tiny_face_detector_model-weights_manifest.json', remote: '::1' }
2022-01-14 09:56:47 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/octet-stream', size: 193321, url: '/model/tiny_face_detector_model.bin', remote: '::1' }
2022-01-14 09:56:47 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/json', size: 28233, url: '/model/ssd_mobilenetv1_model-weights_manifest.json', remote: '::1' }
2022-01-14 09:56:47 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/octet-stream', size: 5616957, url: '/model/ssd_mobilenetv1_model.bin', remote: '::1' }
2022-01-14 09:56:48 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/json', size: 8392, url: '/model/age_gender_model-weights_manifest.json', remote: '::1' }
2022-01-14 09:56:48 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/octet-stream', size: 429708, url: '/model/age_gender_model.bin', remote: '::1' }
2022-01-14 09:56:48 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/json', size: 8485, url: '/model/face_landmark_68_model-weights_manifest.json', remote: '::1' }
2022-01-14 09:56:48 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/octet-stream', size: 356840, url: '/model/face_landmark_68_model.bin', remote: '::1' }
2022-01-14 09:56:48 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/json', size: 19615, url: '/model/face_recognition_model-weights_manifest.json', remote: '::1' }
2022-01-14 09:56:48 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/octet-stream', size: 6444032, url: '/model/face_recognition_model.bin', remote: '::1' }
2022-01-14 09:56:48 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/json', size: 6980, url: '/model/face_expression_model-weights_manifest.json', remote: '::1' }
2022-01-14 09:56:48 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'application/octet-stream', size: 329468, url: '/model/face_expression_model.bin', remote: '::1' }
2022-01-14 09:56:48 DATA: HTTPS: { method: 'GET', ver: '2.0', status: 200, mime: 'image/jpeg', size: 144516, url: '/sample1.jpg', remote: '::1' }
```
<br><hr><br>
## Build
If you want to do a full rebuild, either download the npm module:
```shell
npm install @vladmandic/face-api
cd node_modules/@vladmandic/face-api
```
or clone the git repository:
```shell
git clone https://github.com/vladmandic/face-api
cd face-api
```
Then install all dependencies and run rebuild:
```shell
npm install --production=false
npm run build
```
Build process uses the `@vladmandic/build` module that creates an optimized build for each target:
```js
> @vladmandic/face-api@1.7.1 build /home/vlado/dev/face-api
> node build.js

2022-07-25 08:21:05 INFO: Application: { name: '@vladmandic/face-api', version: '1.7.1' }
2022-07-25 08:21:05 INFO: Environment: { profile: 'production', config: '.build.json', package: 'package.json', tsconfig: true, eslintrc: true, git: true }
2022-07-25 08:21:05 INFO: Toolchain: { build: '0.7.7', esbuild: '0.14.50', typescript: '4.7.4', typedoc: '0.23.9', eslint: '8.20.0' }
2022-07-25 08:21:05 INFO: Build: { profile: 'production', steps: [ 'clean', 'compile', 'typings', 'typedoc', 'lint', 'changelog' ] }
2022-07-25 08:21:05 STATE: Clean: { locations: [ 'dist/*', 'typedoc/*', 'types/lib/src' ] }
2022-07-25 08:21:05 STATE: Compile: { name: 'tfjs/node/cpu', format: 'cjs', platform: 'node', input: 'src/tfjs/tf-node.ts', output: 'dist/tfjs.esm.js', files: 1, inputBytes: 143, outputBytes: 614 }
2022-07-25 08:21:05 STATE: Compile: { name: 'faceapi/node/cpu', format: 'cjs', platform: 'node', input: 'src/index.ts', output: 'dist/face-api.node.js', files: 162, inputBytes: 234137, outputBytes: 85701 }
2022-07-25 08:21:05 STATE: Compile: { name: 'tfjs/node/gpu', format: 'cjs', platform: 'node', input: 'src/tfjs/tf-node-gpu.ts', output: 'dist/tfjs.esm.js', files: 1, inputBytes: 147, outputBytes: 618 }
2022-07-25 08:21:05 STATE: Compile: { name: 'faceapi/node/gpu', format: 'cjs', platform: 'node', input: 'src/index.ts', output: 'dist/face-api.node-gpu.js', files: 162, inputBytes: 234141, outputBytes: 85705 }
2022-07-25 08:21:05 STATE: Compile: { name: 'tfjs/node/wasm', format: 'cjs', platform: 'node', input: 'src/tfjs/tf-node-wasm.ts', output: 'dist/tfjs.esm.js', files: 1, inputBytes: 185, outputBytes: 670 }
2022-07-25 08:21:05 STATE: Compile: { name: 'faceapi/node/wasm', format: 'cjs', platform: 'node', input: 'src/index.ts', output: 'dist/face-api.node-wasm.js', files: 162, inputBytes: 234193, outputBytes: 85755 }
2022-07-25 08:21:05 STATE: Compile: { name: 'tfjs/browser/tf-version', format: 'esm', platform: 'browser', input: 'src/tfjs/tf-version.ts', output: 'dist/tfjs.version.js', files: 1, inputBytes: 1063, outputBytes: 400 }
2022-07-25 08:21:05 STATE: Compile: { name: 'tfjs/browser/esm/nobundle', format: 'esm', platform: 'browser', input: 'src/tfjs/tf-browser.ts', output: 'dist/tfjs.esm.js', files: 2, inputBytes: 910, outputBytes: 527 }
2022-07-25 08:21:05 STATE: Compile: { name: 'faceapi/browser/esm/nobundle', format: 'esm', platform: 'browser', input: 'src/index.ts', output: 'dist/face-api.esm-nobundle.js', files: 162, inputBytes: 234050, outputBytes: 82787 }
2022-07-25 08:21:05 STATE: Compile: { name: 'tfjs/browser/esm/bundle', format: 'esm', platform: 'browser', input: 'src/tfjs/tf-browser.ts', output: 'dist/tfjs.esm.js', files: 11, inputBytes: 910, outputBytes: 1184871 }
2022-07-25 08:21:05 STATE: Compile: { name: 'faceapi/browser/iife/bundle', format: 'iife', platform: 'browser', input: 'src/index.ts', output: 'dist/face-api.js', files: 162, inputBytes: 1418394, outputBytes: 1264631 }
2022-07-25 08:21:05 STATE: Compile: { name: 'faceapi/browser/esm/bundle', format: 'esm', platform: 'browser', input: 'src/index.ts', output: 'dist/face-api.esm.js', files: 162, inputBytes: 1418394, outputBytes: 1264150 }
2022-07-25 08:21:07 STATE: Typings: { input: 'src/index.ts', output: 'types/lib', files: 93 }
2022-07-25 08:21:09 STATE: TypeDoc: { input: 'src/index.ts', output: 'typedoc', objects: 154, generated: true }
2022-07-25 08:21:13 STATE: Lint: { locations: [ 'src/' ], files: 174, errors: 0, warnings: 0 }
2022-07-25 08:21:14 STATE: ChangeLog: { repository: 'https://github.com/vladmandic/face-api', branch: 'master', output: 'CHANGELOG.md' }
2022-07-25 08:21:14 INFO: Done...
2022-07-25 08:21:14 STATE: Copy: { input: 'types/lib/dist/tfjs.esm.d.ts' }
2022-07-25 08:21:15 STATE: API-Extractor: { succeeeded: true, errors: 0, warnings: 417 }
2022-07-25 08:21:15 INFO: FaceAPI Build complete...
```
<br><hr><br>
## Face Mesh
`FaceAPI` landmark model returns 68-point face mesh as detailed in the image below:
![facemesh](demo/facemesh.png)
<br><hr><br>
## Note
This is updated **face-api.js** with latest available TensorFlow/JS as the original is not compatible with **tfjs >=2.0**.
Forked from [face-api.js](https://github.com/justadudewhohacks/face-api.js) version **0.22.2** which was released on March 22nd, 2020
*Why?* I needed a FaceAPI that does not cause version conflict with newer versions of TensorFlow
And since the original FaceAPI was open-source, I've released this version as well
Changes ended up being too large for a simple pull request, so this became a full-fledged version on its own
Plus many features have been added since the original inception
Although a lot of work has gone into this version of `FaceAPI` and it will continue to be maintained,
at this time it is completely superseded by my newer library `Human` which covers the same use cases,
but extends it with newer AI models, additional detection details, compatibility with latest web standard and more
- [Human NPM](https://www.npmjs.com/package/@vladmandic/human)
- [Human Git Repository](https://github.com/vladmandic/human)
<br><hr><br>
## Differences
Compared to [face-api.js](https://github.com/justadudewhohacks/face-api.js) version **0.22.2**:
- Compatible with `TensorFlow/JS 2.0+, 3.0+ and 4.0+`
Currently using **`TensorFlow/JS` 4.16**
Original `face-api.js` is based on `TFJS` **1.7.4**
- Compatible with `WebGL`, `CPU` and `WASM` TFJS Browser backends
- Compatible with both `tfjs-node` and `tfjs-node-gpu` TFJS NodeJS backends
- Updated all type castings for TypeScript type checking to `TypeScript 5.3`
- Switched bundling from `UMD` to `ESM` + `CommonJS` with fallback to `IIFE`
Resulting code is optimized per-platform instead of being universal
Fully tree shakable when imported as an `ESM` module
Browser bundle process uses `ESBuild` instead of `Rollup`
- Added separate `face-api` versions with `tfjs` pre-bundled and without `tfjs`
When using `-nobundle` version, user can load any version of `tfjs` manually
- TypeScript build process now targets `ES2018` instead of dual `ES5`/`ES6`
Resulting code is clean ES2018 JavaScript without polyfills
- Removed old tests, docs, examples
- Removed old package dependencies (`karma`, `jasmine`, `babel`, etc.)
- Updated all package dependencies
- Updated TensorFlow/JS dependencies since backends were removed from `@tensorflow/tfjs-core`
- Updated `mobileNetv1` model due to `batchNorm()` dependency
- Added `version` class that returns JSON object with version of FaceAPI as well as linked TFJS
- Added test/dev built-in HTTP & HTTPS Web server
- Removed `mtcnn` and `tinyYolov2` models as they were non-functional in latest public version of `FaceAPI`
Which means valid models are **tinyFaceDetector** and **mobileNetv1**
*If there is a demand, I can re-implement them back.*
- Added `face angle` calculations that return `roll`, `yaw` and `pitch` (see the sketch after this list)
- Added `typedoc` automatic API specification generation during build
- Added `changelog` automatic generation during build
- New process to generate **TypeDocs** bundle using API-Extractor
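A minimal sketch of reading the added face angle values, assuming `input` is an image, video or canvas element and the detection and landmark models are loaded (per the utility classes in the tutorial below, angles are reported in radians):
```js
const result = await faceapi.detectSingleFace(input).withFaceLandmarks();
if (result) {
  // roll, yaw and pitch are radians in the range of roughly -pi/2 to +pi/2; 0 means centered
  const { roll, yaw, pitch } = result.angle;
  console.log({ roll, yaw, pitch });
}
```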
<br>
## Credits
- Original project: [face-api.js](https://github.com/justadudewhohacks/face-api.js)
- Original model weights: [face-api.js-models](https://github.com/justadudewhohacks/face-api.js-models)
- ML API Documentation: [Tensorflow/JS](https://js.tensorflow.org/api/latest/)
<br>
![Stars](https://img.shields.io/github/stars/vladmandic/face-api?style=flat-square?svg=true)
![Forks](https://badgen.net/github/forks/vladmandic/face-api)
![Code Size](https://img.shields.io/github/languages/code-size/vladmandic/face-api?style=flat-square?svg=true)
![CDN](https://data.jsdelivr.com/v1/package/npm/@vladmandic/face-api/badge)<br>
![Downloads](https://img.shields.io/npm/dw/@vladmandic/face-api.png?style=flat-square)
![Downloads](https://img.shields.io/npm/dm/@vladmandic/face-api.png?style=flat-square)
![Downloads](https://img.shields.io/npm/dy/@vladmandic/face-api.png?style=flat-square)
SECURITY.md Normal file
@@ -0,0 +1,5 @@
# Security Policy
All issues are tracked publicly on GitHub
The entire code base and included dependencies are automatically scanned against known security vulnerabilities
TODO.md Normal file
@@ -0,0 +1,3 @@
# To-do List for FaceAPI
N/A
TUTORIAL.md Normal file
@@ -0,0 +1,747 @@
# FaceAPI Tutorial
## Features
* Face Recognition
* Face Landmark Detection
* Face Expression Recognition
* Age Estimation & Gender Recognition
<br>
## Table of Contents
* **[Usage](#getting-started)**
* **[Loading the Models](#getting-started-loading-models)**
* **[High Level API](#high-level-api)**
* **[Displaying Detection Results](#getting-started-displaying-detection-results)**
* **[Face Detection Options](#getting-started-face-detection-options)**
* **[Utility Classes](#getting-started-utility-classes)**
* **[Other Useful Utility](#getting-started-other-useful-utility)**
* **[Available Models](#models)**
* **[Face Detection](#models-face-detection)**
* **[Face Landmark Detection](#models-face-landmark-detection)**
* **[Face Recognition](#models-face-recognition)**
* **[Face Expression Recognition](#models-face-expression-recognition)**
* **[Age Estimation and Gender Recognition](#models-age-and-gender-recognition)**
* **[API Documentation](https://justadudewhohacks.github.io/face-api.js/docs/globals.html)**
<br><hr><br>
<a name="getting-started"></a>
## Getting Started
<a name="getting-started-loading-models"></a>
### Loading the Models
All global neural network instances are exported via faceapi.nets:
```js
console.log(faceapi.nets)
// ageGenderNet
// faceExpressionNet
// faceLandmark68Net
// faceLandmark68TinyNet
// faceRecognitionNet
// ssdMobilenetv1
// tinyFaceDetector
// tinyYolov2
```
To load a model, you have to provide the corresponding manifest.json file as well as the model weight files (shards) as assets. Simply copy them to your public or assets folder. The manifest.json and shard files of a model have to be located in the same directory / accessible under the same route.
Assuming the models reside in **public/models**:
```js
await faceapi.nets.ssdMobilenetv1.loadFromUri('/models')
// accordingly for the other models:
// await faceapi.nets.faceLandmark68Net.loadFromUri('/models')
// await faceapi.nets.faceRecognitionNet.loadFromUri('/models')
// ...
```
In a nodejs environment you can furthermore load the models directly from disk:
```js
await faceapi.nets.ssdMobilenetv1.loadFromDisk('./models')
```
You can also load the model from a tf.NamedTensorMap:
```js
await faceapi.nets.ssdMobilenetv1.loadFromWeightMap(weightMap)
```
Alternatively, you can also create your own instances of the neural nets:
```js
const net = new faceapi.SsdMobilenetv1()
await net.loadFromUri('/models')
```
You can also load the weights as a Float32Array (in case you want to use the uncompressed models):
```js
// using fetch
net.load(await faceapi.fetchNetWeights('/models/face_detection_model.weights'))
// using axios
const res = await axios.get('/models/face_detection_model.weights', { responseType: 'arraybuffer' })
const weights = new Float32Array(res.data)
net.load(weights)
```
<a name="getting-high-level-api"></a>
### High Level API
In the following **input** can be an HTML img, video or canvas element or the id of that element.
``` html
<img id="myImg" src="images/example.png" />
<video id="myVideo" src="media/example.mp4" />
<canvas id="myCanvas" />
```
```js
const input = document.getElementById('myImg')
// const input = document.getElementById('myVideo')
// const input = document.getElementById('myCanvas')
// or simply:
// const input = 'myImg'
```
### Detecting Faces
Detect all faces in an image. Returns **Array<[FaceDetection](#interface-face-detection)>**:
```js
const detections = await faceapi.detectAllFaces(input)
```
Detect the face with the highest confidence score in an image. Returns **[FaceDetection](#interface-face-detection) | undefined**:
```js
const detection = await faceapi.detectSingleFace(input)
```
By default **detectAllFaces** and **detectSingleFace** utilize the SSD Mobilenet V1 Face Detector. You can specify the face detector by passing the corresponding options object:
```js
const detections1 = await faceapi.detectAllFaces(input, new faceapi.SsdMobilenetv1Options())
const detections2 = await faceapi.detectAllFaces(input, new faceapi.TinyFaceDetectorOptions())
```
You can tune the options of each face detector as shown [here](#getting-started-face-detection-options).
#### Detecting 68 Face Landmark Points
**After face detection, we can furthermore predict the facial landmarks for each detected face as follows:**
Detect all faces in an image + computes 68 Point Face Landmarks for each detected face. Returns **Array<[WithFaceLandmarks<WithFaceDetection<{}>>](#getting-started-utility-classes)>**:
```js
const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLandmarks()
```
Detect the face with the highest confidence score in an image + computes 68 Point Face Landmarks for that face. Returns **[WithFaceLandmarks<WithFaceDetection<{}>>](#getting-started-utility-classes) | undefined**:
```js
const detectionWithLandmarks = await faceapi.detectSingleFace(input).withFaceLandmarks()
```
You can also specify to use the tiny model instead of the default model:
```js
const useTinyModel = true
const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLandmarks(useTinyModel)
```
#### Computing Face Descriptors
**After face detection and facial landmark prediction the face descriptors for each face can be computed as follows:**
Detect all faces in an image + compute 68 Point Face Landmarks for each detected face. Returns **Array<[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes)>**:
```js
const results = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors()
```
Detect the face with the highest confidence score in an image + compute 68 Point Face Landmarks and face descriptor for that face. Returns **[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes) | undefined**:
```js
const result = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
```
#### Recognizing Face Expressions
**Face expression recognition can be performed for detected faces as follows:**
Detect all faces in an image + recognize face expressions of each face. Returns **Array<[WithFaceExpressions<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes)>**:
```js
const detectionsWithExpressions = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceExpressions()
```
Detect the face with the highest confidence score in an image + recognize the face expressions for that face. Returns **[WithFaceExpressions<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes) | undefined**:
```js
const detectionWithExpressions = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceExpressions()
```
**You can also skip .withFaceLandmarks(), which will skip the face alignment step (less stable accuracy):**
Detect all faces without face alignment + recognize face expressions of each face. Returns **Array<[WithFaceExpressions<WithFaceDetection<{}>>](#getting-started-utility-classes)>**:
```js
const detectionsWithExpressions = await faceapi.detectAllFaces(input).withFaceExpressions()
```
Detect the face with the highest confidence score without face alignment + recognize the face expression for that face. Returns **[WithFaceExpressions<WithFaceDetection<{}>>](#getting-started-utility-classes) | undefined**:
```js
const detectionWithExpressions = await faceapi.detectSingleFace(input).withFaceExpressions()
```
#### Age Estimation and Gender Recognition
**Age estimation and gender recognition from detected faces can be done as follows:**
Detect all faces in an image + estimate age and recognize gender of each face. Returns **Array<[WithAge<WithGender<WithFaceLandmarks<WithFaceDetection<{}>>>>](#getting-started-utility-classes)>**:
```js
const detectionsWithAgeAndGender = await faceapi.detectAllFaces(input).withFaceLandmarks().withAgeAndGender()
```
Detect the face with the highest confidence score in an image + estimate age and recognize gender for that face. Returns **[WithAge<WithGender<WithFaceLandmarks<WithFaceDetection<{}>>>>](#getting-started-utility-classes) | undefined**:
```js
const detectionWithAgeAndGender = await faceapi.detectSingleFace(input).withFaceLandmarks().withAgeAndGender()
```
**You can also skip .withFaceLandmarks(), which will skip the face alignment step (less stable accuracy):**
Detect all faces without face alignment + estimate age and recognize gender of each face. Returns **Array<[WithAge<WithGender<WithFaceDetection<{}>>>](#getting-started-utility-classes)>**:
```js
const detectionsWithAgeAndGender = await faceapi.detectAllFaces(input).withAgeAndGender()
```
Detect the face with the highest confidence score without face alignment + estimate age and recognize gender for that face. Returns **[WithAge<WithGender<WithFaceDetection<{}>>>](#getting-started-utility-classes) | undefined**:
```js
const detectionWithAgeAndGender = await faceapi.detectSingleFace(input).withAgeAndGender()
```
#### Composition of Tasks
**Tasks can be composed as follows:**
```js
// all faces
await faceapi.detectAllFaces(input)
await faceapi.detectAllFaces(input).withFaceExpressions()
await faceapi.detectAllFaces(input).withFaceLandmarks()
await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceExpressions()
await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceExpressions().withFaceDescriptors()
await faceapi.detectAllFaces(input).withFaceLandmarks().withAgeAndGender().withFaceDescriptors()
await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceExpressions().withAgeAndGender().withFaceDescriptors()
// single face
await faceapi.detectSingleFace(input)
await faceapi.detectSingleFace(input).withFaceExpressions()
await faceapi.detectSingleFace(input).withFaceLandmarks()
await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceExpressions()
await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceExpressions().withFaceDescriptor()
await faceapi.detectSingleFace(input).withFaceLandmarks().withAgeAndGender().withFaceDescriptor()
await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceExpressions().withAgeAndGender().withFaceDescriptor()
```
#### Face Recognition by Matching Descriptors
To perform face recognition, one can use faceapi.FaceMatcher to compare reference face descriptors to query face descriptors.
First, we initialize the FaceMatcher with the reference data, for example we can simply detect faces in a **referenceImage** and match the descriptors of the detected faces to faces of subsequent images:
```js
const results = await faceapi
.detectAllFaces(referenceImage)
.withFaceLandmarks()
.withFaceDescriptors()
if (!results.length) {
return
}
// create FaceMatcher with automatically assigned labels
// from the detection results for the reference image
const faceMatcher = new faceapi.FaceMatcher(results)
```
Now we can recognize a person's face shown in **queryImage1**:
```js
const singleResult = await faceapi
.detectSingleFace(queryImage1)
.withFaceLandmarks()
.withFaceDescriptor()
if (singleResult) {
const bestMatch = faceMatcher.findBestMatch(singleResult.descriptor)
console.log(bestMatch.toString())
}
```
Or we can recognize all faces shown in **queryImage2**:
```js
const results = await faceapi
.detectAllFaces(queryImage2)
.withFaceLandmarks()
.withFaceDescriptors()
results.forEach(fd => {
const bestMatch = faceMatcher.findBestMatch(fd.descriptor)
console.log(bestMatch.toString())
})
```
You can also create labeled reference descriptors as follows:
```js
const labeledDescriptors = [
new faceapi.LabeledFaceDescriptors(
'obama',
[descriptorObama1, descriptorObama2]
),
new faceapi.LabeledFaceDescriptors(
'trump',
[descriptorTrump]
)
]
const faceMatcher = new faceapi.FaceMatcher(labeledDescriptors)
```
<a name="getting-started-displaying-detection-results"></a>
### Displaying Detection Results
Preparing the overlay canvas:
```js
const displaySize = { width: input.width, height: input.height }
// resize the overlay canvas to the input dimensions
const canvas = document.getElementById('overlay')
faceapi.matchDimensions(canvas, displaySize)
```
face-api.js predefines some high-level drawing functions, which you can utilize:
```js
/* Display detected face bounding boxes */
const detections = await faceapi.detectAllFaces(input)
// resize the detected boxes in case your displayed image has a different size than the original
const resizedDetections = faceapi.resizeResults(detections, displaySize)
// draw detections into the canvas
faceapi.draw.drawDetections(canvas, resizedDetections)
/* Display face landmarks */
const detectionsWithLandmarks = await faceapi
.detectAllFaces(input)
.withFaceLandmarks()
// resize the detected boxes and landmarks in case your displayed image has a different size than the original
const resizedResults = faceapi.resizeResults(detectionsWithLandmarks, displaySize)
// draw detections into the canvas
faceapi.draw.drawDetections(canvas, resizedResults)
// draw the landmarks into the canvas
faceapi.draw.drawFaceLandmarks(canvas, resizedResults)
/* Display face expression results */
const detectionsWithExpressions = await faceapi
.detectAllFaces(input)
.withFaceLandmarks()
.withFaceExpressions()
// resize the detected boxes and landmarks in case your displayed image has a different size than the original
const resizedResults = faceapi.resizeResults(detectionsWithExpressions, displaySize)
// draw detections into the canvas
faceapi.draw.drawDetections(canvas, resizedResults)
// draw a textbox displaying the face expressions with minimum probability into the canvas
const minProbability = 0.05
faceapi.draw.drawFaceExpressions(canvas, resizedResults, minProbability)
```
You can also draw boxes with custom text ([DrawBox](https://github.com/justadudewhohacks/tfjs-image-recognition-base/blob/master/src/draw/DrawBox.ts)):
```js
const box = { x: 50, y: 50, width: 100, height: 100 }
// see DrawBoxOptions below
const drawOptions = {
label: 'Hello I am a box!',
lineWidth: 2
}
const drawBox = new faceapi.draw.DrawBox(box, drawOptions)
drawBox.draw(document.getElementById('myCanvas'))
```
DrawBox drawing options:
```js
export interface IDrawBoxOptions {
boxColor?: string
lineWidth?: number
drawLabelOptions?: IDrawTextFieldOptions
label?: string
}
```
Finally you can draw custom text fields ([DrawTextField](https://github.com/justadudewhohacks/tfjs-image-recognition-base/blob/master/src/draw/DrawTextField.ts)):
```js
const text = [
'This is a textline!',
'This is another textline!'
]
const anchor = { x: 200, y: 200 }
// see DrawTextField below
const drawOptions = {
anchorPosition: 'TOP_LEFT',
backgroundColor: 'rgba(0, 0, 0, 0.5)'
}
const drawBox = new faceapi.draw.DrawTextField(text, anchor, drawOptions)
drawBox.draw(document.getElementById('myCanvas'))
```
DrawTextField drawing options:
```js
export interface IDrawTextFieldOptions {
anchorPosition?: AnchorPosition
backgroundColor?: string
fontColor?: string
fontSize?: number
fontStyle?: string
padding?: number
}
export enum AnchorPosition {
TOP_LEFT = 'TOP_LEFT',
TOP_RIGHT = 'TOP_RIGHT',
BOTTOM_LEFT = 'BOTTOM_LEFT',
BOTTOM_RIGHT = 'BOTTOM_RIGHT'
}
```
<a name="getting-started-face-detection-options"></a>
### Face Detection Options
#### SsdMobilenetv1Options
```js
export interface ISsdMobilenetv1Options {
// minimum confidence threshold
// default: 0.5
minConfidence?: number
// maximum number of faces to return
// default: 100
maxResults?: number
}
// example
const options = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.8 })
```
#### TinyFaceDetectorOptions
```js
export interface ITinyFaceDetectorOptions {
// size at which image is processed, the smaller the faster,
// but less precise in detecting smaller faces, must be divisible
// by 32, common sizes are 128, 160, 224, 320, 416, 512, 608,
// for face tracking via webcam I would recommend using smaller sizes,
// e.g. 128, 160, for detecting smaller faces use larger sizes, e.g. 512, 608
// default: 416
inputSize?: number
// minimum confidence threshold
// default: 0.5
scoreThreshold?: number
}
// example
const options = new faceapi.TinyFaceDetectorOptions({ inputSize: 320 })
```
<a name="getting-started-utility-classes"></a>
### Utility Classes
#### IBox
```js
export interface IBox {
x: number
y: number
width: number
height: number
}
```
#### IFaceDetection
```js
export interface IFaceDetection {
score: number
box: Box
}
```
#### IFaceLandmarks
```js
export interface IFaceLandmarks {
positions: Point[]
shift: Point
}
```
#### WithFaceDetection
```js
export type WithFaceDetection<TSource> = TSource & {
detection: FaceDetection
}
```
#### WithFaceLandmarks
```js
export type WithFaceLandmarks<TSource> = TSource & {
unshiftedLandmarks: FaceLandmarks
landmarks: FaceLandmarks
alignedRect: FaceDetection
angle: { roll: number, yaw: number, pitch: number }
// for angle all values are in radians in range of -pi/2 to pi/2 which is -90 to +90 degrees
// value of 0 means center
}
```
#### WithFaceDescriptor
```js
export type WithFaceDescriptor<TSource> = TSource & {
descriptor: Float32Array
}
```
#### WithFaceExpressions
```js
export type WithFaceExpressions<TSource> = TSource & {
expressions: FaceExpressions
}
```
#### WithAge
```js
export type WithAge<TSource> = TSource & {
age: number
}
```
#### WithGender
```js
export type WithGender<TSource> = TSource & {
gender: Gender
genderProbability: number
}
export enum Gender {
FEMALE = 'female',
MALE = 'male'
}
```
<a name="getting-started-other-useful-utility"></a>
### Other Useful Utility
#### Using the Low Level API
Instead of using the high level API, you can directly use the forward methods of each neural network:
```js
const detections1 = await faceapi.ssdMobilenetv1(input, options)
const detections2 = await faceapi.tinyFaceDetector(input, options)
const landmarks1 = await faceapi.detectFaceLandmarks(faceImage)
const landmarks2 = await faceapi.detectFaceLandmarksTiny(faceImage)
const descriptor = await faceapi.computeFaceDescriptor(alignedFaceImage)
```
#### Extracting a Canvas for an Image Region
```js
const regionsToExtract = [
new faceapi.Rect(0, 0, 100, 100)
]
// actually extractFaces is meant to extract face regions from bounding boxes
// but you can also use it to extract any other region
const canvases = await faceapi.extractFaces(input, regionsToExtract)
```
#### Euclidean Distance
```js
// meant to be used for computing the euclidean distance between two face descriptors
const dist = faceapi.euclideanDistance([0, 0], [0, 10])
console.log(dist) // 10
```
#### Retrieve the Face Landmark Points and Contours
```js
const landmarkPositions = landmarks.positions
// or get the positions of individual contours,
// only available for 68 point face landmarks (FaceLandmarks68)
const jawOutline = landmarks.getJawOutline()
const nose = landmarks.getNose()
const mouth = landmarks.getMouth()
const leftEye = landmarks.getLeftEye()
const rightEye = landmarks.getRightEye()
const leftEyeBrow = landmarks.getLeftEyeBrow()
const rightEyeBrow = landmarks.getRightEyeBrow()
```
#### Fetch and Display Images from a URL
``` html
<img id="myImg" src="">
```
```js
const image = await faceapi.fetchImage('/images/example.png')
console.log(image instanceof HTMLImageElement) // true
// displaying the fetched image content
const myImg = document.getElementById('myImg')
myImg.src = image.src
```
#### Fetching JSON
```js
const json = await faceapi.fetchJson('/files/example.json')
```
#### Creating an Image Picker
``` html
<img id="myImg" src="">
<input id="myFileUpload" type="file" onchange="uploadImage()" accept=".jpg, .jpeg, .png">
```
```js
async function uploadImage() {
const imgFile = document.getElementById('myFileUpload').files[0]
// create an HTMLImageElement from a Blob
const img = await faceapi.bufferToImage(imgFile)
document.getElementById('myImg').src = img.src
}
```
#### Creating a Canvas Element from an Image or Video Element
``` html
<img id="myImg" src="images/example.png" />
<video id="myVideo" src="media/example.mp4" />
```
```js
const canvas1 = faceapi.createCanvasFromMedia(document.getElementById('myImg'))
const canvas2 = faceapi.createCanvasFromMedia(document.getElementById('myVideo'))
```
<a name="models"></a>
<br><hr><br>
## Available Models
<a name="models-face-detection"></a>
### Face Detection Models
#### SSD Mobilenet V1
For face detection, this project implements a SSD (Single Shot Multibox Detector) based on MobileNetV1. The neural net will compute the locations of each face in an image and will return the bounding boxes together with its probability for each face. This face detector aims at obtaining high accuracy in detecting face bounding boxes rather than low inference time. The size of the quantized model is about 5.4 MB (**ssd_mobilenetv1_model**).
The face detection model has been trained on the [WIDERFACE dataset](http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/) and the weights are provided by [yeephycho](https://github.com/yeephycho) in [this](https://github.com/yeephycho/tensorflow-face-detection) repo.
#### Tiny Face Detector
The Tiny Face Detector is a very performant, realtime face detector, which is much faster, smaller and less resource-consuming compared to the SSD Mobilenet V1 face detector; in return it performs slightly less well on detecting small faces. This model is extremely mobile and web friendly, thus it should be your go-to face detector on mobile devices and resource-limited clients. The size of the quantized model is only 190 KB (**tiny_face_detector_model**).
The face detector has been trained on a custom dataset of ~14K images labeled with bounding boxes. Furthermore the model has been trained to predict bounding boxes, which entirely cover facial feature points, thus it in general produces better results in combination with subsequent face landmark detection than SSD Mobilenet V1.
This model is basically an even tinier version of Tiny Yolo V2, replacing the regular convolutions of Yolo with depthwise separable convolutions. Yolo is fully convolutional, thus can easily adapt to different input image sizes to trade off accuracy for performance (inference time).
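For example, a sketch of trading precision for speed with the Tiny Face Detector via its options (described earlier in this tutorial); `input` is assumed to be an image, video or canvas element:
```js
// smaller inputSize is faster but less precise on small faces; value must be divisible by 32
const options = new faceapi.TinyFaceDetectorOptions({ inputSize: 160, scoreThreshold: 0.5 })
const detections = await faceapi.detectAllFaces(input, options)
```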
<a name="models-face-landmark-detection"></a>
### 68 Point Face Landmark Detection Models
This package implements a very lightweight and fast, yet accurate 68 point face landmark detector. The default model has a size of only 350kb (**face_landmark_68_model**) and the tiny model is only 80kb (**face_landmark_68_tiny_model**). Both models employ the ideas of depthwise separable convolutions as well as densely connected blocks. The models have been trained on a dataset of ~35k face images labeled with 68 face landmark points.
<a name="models-face-recognition"></a>
### Face Recognition Model
For face recognition, a ResNet-34 like architecture is implemented to compute a face descriptor (a feature vector with 128 values) from any given face image, which is used to describe the characteristics of a person's face. The model is **not** limited to the set of faces used for training, meaning you can use it for face recognition of any person, for example yourself. You can determine the similarity of two arbitrary faces by comparing their face descriptors, for example by computing the euclidean distance or using any other classifier of your choice.
The neural net is equivalent to the **FaceRecognizerNet** used in [face-recognition.js](https://github.com/justadudewhohacks/face-recognition.js) and the net used in the [dlib](https://github.com/davisking/dlib/blob/master/examples/dnn_face_recognition_ex.cpp) face recognition example. The weights have been trained by [davisking](https://github.com/davisking) and the model achieves a prediction accuracy of 99.38% on the LFW (Labeled Faces in the Wild) benchmark for face recognition.
The size of the quantized model is roughly 6.2 MB (**face_recognition_model**).
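A sketch of comparing two descriptors directly using the euclidean distance utility shown earlier in this tutorial; `img1` and `img2` are assumed inputs, and the 0.6 threshold is a commonly used value rather than one mandated by the library:
```js
const r1 = await faceapi.detectSingleFace(img1).withFaceLandmarks().withFaceDescriptor()
const r2 = await faceapi.detectSingleFace(img2).withFaceLandmarks().withFaceDescriptor()
if (r1 && r2) {
  const distance = faceapi.euclideanDistance(r1.descriptor, r2.descriptor)
  console.log(distance < 0.6 ? 'likely the same person' : 'likely different people')
}
```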
<a name="models-face-expression-recognition"></a>
### Face Expression Recognition Model
The face expression recognition model is lightweight, fast and provides reasonable accuracy. The model has a size of roughly 310kb and it employs depthwise separable convolutions and densely connected blocks. It has been trained on a variety of images from publicly available datasets as well as images scraped from the web. Note that wearing glasses might decrease the accuracy of the prediction results.
<a name="models-age-and-gender-recognition"></a>
### Age and Gender Recognition Model
The age and gender recognition model is a multitask network, which employs a feature extraction layer, an age regression layer and a gender classifier. The model has a size of roughly 420kb and the feature extractor employs a tinier but very similar architecture to Xception.
This model has been trained and tested on the following databases with an 80/20 train/test split each: UTK, FGNET, Chalearn, Wiki, IMDB*, CACD*, MegaAge, MegaAge-Asian. The `*` indicates that these databases have been algorithmically cleaned up, since the initial databases are very noisy.
#### Total Test Results
Total MAE (Mean Age Error): **4.54**
Total Gender Accuracy: **95%**
#### Test results for each database
The `-` indicates that there are no gender labels available for these databases.
Database | UTK | FGNET | Chalearn | Wiki | IMDB* | CACD* | MegaAge | MegaAge-Asian |
----------------|-------:|------:|---------:|-----:|------:|------:|--------:|--------------:|
MAE | 5.25 | 4.23 | 6.24 | 6.54 | 3.63 | 3.20 | 6.23 | 4.21 |
Gender Accuracy | 0.93 | - | 0.94 | 0.95 | - | 0.97 | - | - |
#### Test results for different age category groups
Age Range | 0 - 3 | 4 - 8 | 9 - 18 | 19 - 28 | 29 - 40 | 41 - 60 | 60 - 80 | 80+ |
----------------|-------:|------:|-------:|--------:|--------:|--------:|--------:|--------:|
MAE | 1.52 | 3.06 | 4.82 | 4.99 | 5.43 | 4.94 | 6.17 | 9.91 |
Gender Accuracy | 0.69 | 0.80 | 0.88 | 0.96 | 0.97 | 0.97 | 0.96 | 0.9 |
api-extractor.json Normal file
@@ -0,0 +1,38 @@
{
"$schema": "https://developer.microsoft.com/json-schemas/api-extractor/v7/api-extractor.schema.json",
"mainEntryPointFilePath": "types/lib/src/index.d.ts",
"bundledPackages": ["@tensorflow/tfjs-core", "@tensorflow/tfjs-converter", "@types/offscreencanvas"],
"compiler": {
"skipLibCheck": false
},
"newlineKind": "lf",
"dtsRollup": {
"enabled": true,
"untrimmedFilePath": "types/face-api.d.ts"
},
"docModel": { "enabled": false },
"tsdocMetadata": {
"enabled": false
},
"apiReport": { "enabled": false },
"messages": {
"compilerMessageReporting": {
"default": {
"logLevel": "warning"
}
},
"extractorMessageReporting": {
"default": {
"logLevel": "warning"
},
"ae-missing-release-tag": {
"logLevel": "none"
}
},
"tsdocMessageReporting": {
"default": {
"logLevel": "warning"
}
}
}
}
build.js Normal file
@@ -0,0 +1,77 @@
const fs = require('fs');
const log = require('@vladmandic/pilogger');
const Build = require('@vladmandic/build').Build;
const APIExtractor = require('@microsoft/api-extractor');
const regEx = [
{ search: 'types="@webgpu/types/dist"', replace: 'path="../src/types/webgpu.d.ts"' },
{ search: 'types="offscreencanvas"', replace: 'path="../src/types/offscreencanvas.d.ts"' },
];
function copyFile(src, dst) {
if (!fs.existsSync(src)) {
log.warn('Copy:', { input: src, output: dst });
return;
}
log.state('Copy:', { input: src, output: dst });
const buffer = fs.readFileSync(src);
fs.writeFileSync(dst, buffer);
}
function writeFile(str, dst) {
log.state('Write:', { output: dst });
fs.writeFileSync(dst, str);
}
function regExFile(src, entries) {
if (!fs.existsSync(src)) {
log.warn('Filter:', { src });
return;
}
log.state('Filter:', { input: src });
for (const entry of entries) {
const buffer = fs.readFileSync(src, 'UTF-8');
const lines = buffer.split(/\r?\n/);
const out = [];
for (const line of lines) {
if (line.includes(entry.search)) out.push(line.replace(entry.search, entry.replace));
else out.push(line);
}
fs.writeFileSync(src, out.join('\n'));
}
}
const apiIgnoreList = ['ae-forgotten-export', 'ae-unresolved-link', 'tsdoc-param-tag-missing-hyphen'];
async function main() {
// run production build
const build = new Build();
await build.run('production');
// patch tfjs typedefs
log.state('Copy:', { input: 'types/lib/dist/tfjs.esm.d.ts' });
copyFile('types/lib/dist/tfjs.esm.d.ts', 'dist/tfjs.esm.d.ts');
// run api-extractor to create typedef rollup
const extractorConfig = APIExtractor.ExtractorConfig.loadFileAndPrepare('api-extractor.json');
const extractorResult = APIExtractor.Extractor.invoke(extractorConfig, {
localBuild: true,
showVerboseMessages: false,
messageCallback: (msg) => {
msg.handled = true;
if (msg.logLevel === 'none' || msg.logLevel === 'verbose' || msg.logLevel === 'info') return;
if (msg.sourceFilePath?.includes('/node_modules/')) return;
if (apiIgnoreList.reduce((prev, curr) => prev || msg.messageId.includes(curr), false)) return;
log.data('API', { level: msg.logLevel, category: msg.category, id: msg.messageId, file: msg.sourceFilePath, line: msg.sourceFileLine, text: msg.text });
},
});
log.state('API-Extractor:', { succeeeded: extractorResult.succeeded, errors: extractorResult.errorCount, warnings: extractorResult.warningCount });
regExFile('types/face-api.d.ts', regEx);
writeFile('export * from \'../types/face-api\';', 'dist/face-api.esm-nobundle.d.ts');
writeFile('export * from \'../types/face-api\';', 'dist/face-api.esm.d.ts');
writeFile('export * from \'../types/face-api\';', 'dist/face-api.d.ts');
writeFile('export * from \'../types/face-api\';', 'dist/face-api.node.d.ts');
writeFile('export * from \'../types/face-api\';', 'dist/face-api.node-gpu.d.ts');
writeFile('export * from \'../types/face-api\';', 'dist/face-api.node-wasm.d.ts');
log.info('FaceAPI Build complete...');
}
main();
demo/facemesh.png Normal file (binary file, 14 KiB, not shown)
demo/index.html Normal file
@@ -0,0 +1,17 @@
<!DOCTYPE html>
<html lang="en">
<head>
<title>FaceAPI Static Images Demo</title>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<meta name="viewport" content="width=device-width, shrink-to-fit=yes">
<meta name="application-name" content="FaceAPI">
<meta name="keywords" content="FaceAPI">
<meta name="description" content="FaceAPI: AI-powered Face Detection, Description & Recognition for Browser and NodeJS using Tensorflow/JS; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="msapplication-tooltip" content="FaceAPI: AI-powered Face Detection, Description & Recognition for Browser and NodeJS using Tensorflow/JS; Author: Vladimir Mandic <https://github.com/vladmandic>">
<link rel="shortcut icon" href="../favicon.ico" type="image/x-icon">
<script src="./index.js" type="module"></script>
</head>
<body style="font-family: monospace; background: black; color: white; font-size: 16px; line-height: 22px; margin: 0; overflow-x: hidden;">
<div id="log"></div>
</body>
</html>
demo/index.js
@@ -1,26 +1,27 @@
/**
 * FaceAPI Demo for Browsers
 * Loaded via `index.html`
 */
import * as faceapi from '../dist/face-api.esm.js'; // use when in dev mode
// import * as faceapi from '@vladmandic/face-api'; // use when downloading face-api as npm
// configuration options
const modelPath = '../model/'; // path to model folder that will be loaded using http
// const modelPath = 'https://cdn.jsdelivr.net/npm/@vladmandic/face-api/model/'; // path to model folder that will be loaded using http
const imgSize = 800; // maximum image size in pixels
const minScore = 0.3; // minimum score
const maxResults = 10; // maximum number of results to return
const samples = ['sample1.jpg', 'sample2.jpg', 'sample3.jpg', 'sample4.jpg', 'sample5.jpg', 'sample6.jpg']; // sample images to be loaded using http
// helper function to pretty-print json object to string
const str = (json) => (json ? JSON.stringify(json).replace(/{|}|"|\[|\]/g, '').replace(/,/g, ', ') : '');
// helper function to print strings to html document as a log
function log(...txt) {
  console.log(...txt); // eslint-disable-line no-console
  const div = document.getElementById('log');
  if (div) div.innerHTML += `<br>${txt}`;
}
// helper function to draw detected faces
@@ -32,11 +33,9 @@ function faces(name, title, id, data) {
  canvas.style.position = 'absolute';
  canvas.style.left = `${img.offsetLeft}px`;
  canvas.style.top = `${img.offsetTop}px`;
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext('2d', { willReadFrequently: true });
  if (!ctx) return;
  // draw title
  ctx.font = '1rem sans-serif';
@ -52,6 +51,7 @@ function faces(name, title, id, data) {
ctx.beginPath(); ctx.beginPath();
ctx.rect(person.detection.box.x, person.detection.box.y, person.detection.box.width, person.detection.box.height); ctx.rect(person.detection.box.x, person.detection.box.y, person.detection.box.width, person.detection.box.height);
ctx.stroke(); ctx.stroke();
// draw text labels
ctx.globalAlpha = 1; ctx.globalAlpha = 1;
ctx.fillText(`${Math.round(100 * person.genderProbability)}% ${person.gender}`, person.detection.box.x, person.detection.box.y - 18); ctx.fillText(`${Math.round(100 * person.genderProbability)}% ${person.gender}`, person.detection.box.x, person.detection.box.y - 18);
ctx.fillText(`${Math.round(person.age)} years`, person.detection.box.x, person.detection.box.y - 2); ctx.fillText(`${Math.round(person.age)} years`, person.detection.box.x, person.detection.box.y - 2);
@ -71,8 +71,7 @@ function faces(name, title, id, data) {
// helper function to draw processed image and its results // helper function to draw processed image and its results
function print(title, img, data) { function print(title, img, data) {
// eslint-disable-next-line no-console console.log('Results:', title, img, data); // eslint-disable-line no-console
console.log('Results:', title, img, data);
const el = new Image(); const el = new Image();
el.id = Math.floor(Math.random() * 100000).toString(); el.id = Math.floor(Math.random() * 100000).toString();
el.src = img; el.src = img;
@ -95,7 +94,7 @@ async function image(url) {
const canvas = document.createElement('canvas'); const canvas = document.createElement('canvas');
canvas.height = img.height; canvas.height = img.height;
canvas.width = img.width; canvas.width = img.width;
const ctx = canvas.getContext('2d'); const ctx = canvas.getContext('2d', { willReadFrequently: true });
if (ctx) ctx.drawImage(img, 0, 0, img.width, img.height); if (ctx) ctx.drawImage(img, 0, 0, img.width, img.height);
// return generated canvas to be used by tfjs during detection // return generated canvas to be used by tfjs during detection
resolve(canvas); resolve(canvas);
@ -110,19 +109,24 @@ async function main() {
log('FaceAPI Test'); log('FaceAPI Test');
// if you want to use wasm backend location for wasm binaries must be specified // if you want to use wasm backend location for wasm binaries must be specified
// await faceapi.tf.setWasmPaths('../node_modules/@tensorflow/tfjs-backend-wasm/dist/'); // await faceapi.tf?.setWasmPaths(`https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm@${faceapi.tf.version_core}/dist/`);
// await faceapi.tf.setBackend('wasm'); // await faceapi.tf?.setBackend('wasm');
// log(`WASM SIMD: ${await faceapi.tf?.env().getAsync('WASM_HAS_SIMD_SUPPORT')} Threads: ${await faceapi.tf?.env().getAsync('WASM_HAS_MULTITHREAD_SUPPORT') ? 'Multi' : 'Single'}`);
// default is webgl backend // default is webgl backend
await faceapi.tf.setBackend('webgl'); await faceapi.tf.setBackend('webgl');
await faceapi.tf.ready();
// tfjs optimizations
if (faceapi.tf?.env().flagRegistry.CANVAS2D_WILL_READ_FREQUENTLY) faceapi.tf.env().set('CANVAS2D_WILL_READ_FREQUENTLY', true);
if (faceapi.tf?.env().flagRegistry.WEBGL_EXP_CONV) faceapi.tf.env().set('WEBGL_EXP_CONV', true);
if (faceapi.tf?.env().flagRegistry.WEBGL_EXP_CONV) faceapi.tf.env().set('WEBGL_EXP_CONV', true);
await faceapi.tf.enableProdMode(); await faceapi.tf.enableProdMode();
await faceapi.tf.ENV.set('DEBUG', false);
await faceapi.tf.ready(); await faceapi.tf.ready();
// check version // check version
log(`Version: TensorFlow/JS ${str(faceapi.tf?.version_core || '(not loaded)')} FaceAPI ${str(faceapi?.version || '(not loaded)')} Backend: ${str(faceapi.tf?.getBackend() || '(not loaded)')}`); log(`Version: FaceAPI ${str(faceapi?.version || '(not loaded)')} TensorFlow/JS ${str(faceapi?.tf?.version_core || '(not loaded)')} Backend: ${str(faceapi?.tf?.getBackend() || '(not loaded)')}`);
log(`Flags: ${JSON.stringify(faceapi.tf.ENV.flags)}`); log(`Flags: ${JSON.stringify(faceapi?.tf?.ENV.flags || { tf: 'not loaded' })}`);
// load face-api models // load face-api models
log('Loading FaceAPI models'); log('Loading FaceAPI models');
@ -139,16 +143,9 @@ async function main() {
const engine = await faceapi.tf.engine(); const engine = await faceapi.tf.engine();
log(`TF Engine State: ${str(engine.state)}`); log(`TF Engine State: ${str(engine.state)}`);
// const testT = faceapi.tf.tensor([0]);
// const testF = testT.toFloat();
// console.log(testT.print(), testF.print());
// testT.dispose();
// testF.dispose();
// loop through all images and try to process them // loop through all images and try to process them
log(`Start processing: ${samples.length} images ...<br>`); log(`Start processing: ${samples.length} images ...<br>`);
for (const img of samples) { for (const img of samples) {
// new line
document.body.appendChild(document.createElement('br')); document.body.appendChild(document.createElement('br'));
// load and resize image // load and resize image
const canvas = await image(img); const canvas = await image(img);
@ -162,7 +159,7 @@ async function main() {
.withFaceDescriptors() .withFaceDescriptors()
.withAgeAndGender(); .withAgeAndGender();
// print results to screen // print results to screen
print('TinyFace Detector', img, dataTinyYolo); print('TinyFace:', img, dataTinyYolo);
// actual model execution // actual model execution
const dataSSDMobileNet = await faceapi const dataSSDMobileNet = await faceapi
.detectAllFaces(canvas, optionsSSDMobileNet) .detectAllFaces(canvas, optionsSSDMobileNet)
@ -171,11 +168,9 @@ async function main() {
.withFaceDescriptors() .withFaceDescriptors()
.withAgeAndGender(); .withAgeAndGender();
// print results to screen // print results to screen
print('SSD MobileNet', img, dataSSDMobileNet); print('SSDMobileNet:', img, dataSSDMobileNet);
} catch (err) { } catch (err) {
log(`Image: ${img} Error during processing ${str(err)}`); log(`Image: ${img} Error during processing ${str(err)}`);
// eslint-disable-next-line no-console
console.error(err);
} }
} }
} }
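For context, the two option objects exercised above are constructed along these lines; this is a sketch using the file's own constants, with an illustrative inputSize rather than a value taken from this commit:

// hypothetical construction of the two detector option objects used above
const optionsTinyFace = new faceapi.TinyFaceDetectorOptions({ inputSize: 416, scoreThreshold: minScore });
const optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence: minScore, maxResults });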

demo/node-canvas.js Normal file

@ -0,0 +1,98 @@
/**
* FaceAPI Demo for NodeJS
* - Uses external library [canvas](https://www.npmjs.com/package/canvas) to decode image
* - Loads image from provided param
* - Outputs results to console
*/
// canvas library provides full canvas (load/draw/write) functionality for nodejs
// must be installed manually as it is just a demo dependency and not an actual face-api dependency
const canvas = require('canvas'); // eslint-disable-line node/no-missing-require
const fs = require('fs');
const path = require('path');
const process = require('process');
const log = require('@vladmandic/pilogger');
const tf = require('@tensorflow/tfjs-node'); // in nodejs environments tfjs-node is required to be loaded before face-api
const faceapi = require('../dist/face-api.node.js'); // use this when using face-api in dev mode
// const faceapi = require('@vladmandic/face-api'); // use this when face-api is installed as module (majority of use cases)
const modelPathRoot = '../model';
const imgPathRoot = './demo'; // modify to include your sample images
const minConfidence = 0.15;
const maxResults = 5;
let optionsSSDMobileNet;
async function image(input) {
const img = await canvas.loadImage(input);
const c = canvas.createCanvas(img.width, img.height);
const ctx = c.getContext('2d');
ctx.drawImage(img, 0, 0, img.width, img.height);
// const out = fs.createWriteStream('test.jpg');
// const stream = c.createJPEGStream({ quality: 0.6, progressive: true, chromaSubsampling: true });
// stream.pipe(out);
return c;
}
async function detect(tensor) {
const result = await faceapi
.detectAllFaces(tensor, optionsSSDMobileNet)
.withFaceLandmarks()
.withFaceExpressions()
.withFaceDescriptors()
.withAgeAndGender();
return result;
}
function print(face) {
const expression = Object.entries(face.expressions).reduce((acc, val) => ((val[1] > acc[1]) ? val : acc), ['', 0]);
const box = [face.alignedRect._box._x, face.alignedRect._box._y, face.alignedRect._box._width, face.alignedRect._box._height];
const gender = `Gender: ${Math.round(100 * face.genderProbability)}% ${face.gender}`;
log.data(`Detection confidence: ${Math.round(100 * face.detection._score)}% ${gender} Age: ${Math.round(10 * face.age) / 10} Expression: ${Math.round(100 * expression[1])}% ${expression[0]} Box: ${box.map((a) => Math.round(a))}`);
}
async function main() {
log.header();
log.info('FaceAPI single-process test');
faceapi.env.monkeyPatch({ Canvas: canvas.Canvas, Image: canvas.Image, ImageData: canvas.ImageData });
await faceapi.tf.setBackend('tensorflow');
await faceapi.tf.ready();
log.state(`Version: FaceAPI ${faceapi.version} TensorFlow/JS ${tf.version_core} Backend: ${faceapi.tf?.getBackend()}`);
log.info('Loading FaceAPI models');
const modelPath = path.join(__dirname, modelPathRoot);
await faceapi.nets.ssdMobilenetv1.loadFromDisk(modelPath);
await faceapi.nets.ageGenderNet.loadFromDisk(modelPath);
await faceapi.nets.faceLandmark68Net.loadFromDisk(modelPath);
await faceapi.nets.faceRecognitionNet.loadFromDisk(modelPath);
await faceapi.nets.faceExpressionNet.loadFromDisk(modelPath);
optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence, maxResults });
if (process.argv.length !== 3) {
const t0 = process.hrtime.bigint();
const dir = fs.readdirSync(imgPathRoot);
let numImages = 0;
for (const img of dir) {
if (!img.toLocaleLowerCase().endsWith('.jpg')) continue;
numImages += 1;
const c = await image(path.join(imgPathRoot, img));
const result = await detect(c);
log.data('Image:', img, 'Detected faces:', result.length);
for (const face of result) print(face);
}
const t1 = process.hrtime.bigint();
log.info('Processed', numImages, 'images in', Math.trunc(Number((t1 - t0).toString()) / 1000 / 1000), 'ms');
} else {
const param = process.argv[2];
if (fs.existsSync(param) || param.startsWith('http:') || param.startsWith('https:')) {
const c = await image(param);
const result = await detect(c);
log.data('Image:', param, 'Detected faces:', result.length);
for (const face of result) print(face);
}
}
}
main();
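The commented-out createJPEGStream lines above show where annotated output could be written back to disk; a minimal sketch building on the same canvas package (saveAnnotated and its arguments are hypothetical, not part of this demo):

const fs = require('fs');
// hypothetical helper: draw detected boxes onto the node-canvas and save it as jpeg
function saveAnnotated(c, result, outFile) {
  const ctx = c.getContext('2d');
  ctx.lineWidth = 3;
  ctx.strokeStyle = 'deepskyblue';
  for (const face of result) {
    const { x, y, width, height } = face.detection.box;
    ctx.strokeRect(x, y, width, height);
  }
  c.createJPEGStream({ quality: 0.6 }).pipe(fs.createWriteStream(outFile));
}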

demo/node-face-compare.js Normal file

@ -0,0 +1,35 @@
/**
* FaceAPI demo that loads two images and computes the similarity between the most prominent face in each image
*/
const fs = require('fs');
const tf = require('@tensorflow/tfjs-node');
const faceapi = require('../dist/face-api.node');
let optionsSSDMobileNet;
const getDescriptors = async (imageFile) => {
const buffer = fs.readFileSync(imageFile);
const tensor = tf.node.decodeImage(buffer, 3);
const faces = await faceapi.detectAllFaces(tensor, optionsSSDMobileNet)
.withFaceLandmarks()
.withFaceDescriptors();
tf.dispose(tensor);
return faces.map((face) => face.descriptor);
};
const main = async (file1, file2) => {
console.log('input images:', file1, file2); // eslint-disable-line no-console
await tf.ready();
await faceapi.nets.ssdMobilenetv1.loadFromDisk('model');
optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.5, maxResults: 1 });
await faceapi.nets.faceLandmark68Net.loadFromDisk('model');
await faceapi.nets.faceRecognitionNet.loadFromDisk('model');
const desc1 = await getDescriptors(file1);
const desc2 = await getDescriptors(file2);
const distance = faceapi.euclideanDistance(desc1[0], desc2[0]); // only compare first found face in each image
console.log('distance between most prominent detected faces:', distance); // eslint-disable-line no-console
console.log('similarity between most prominent detected faces:', 1 - distance); // eslint-disable-line no-console
};
main('demo/sample1.jpg', 'demo/sample2.jpg');
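The distance itself is only meaningful against a threshold; a common rule of thumb for these descriptors (an assumption here, not something this demo enforces) is to treat distances below roughly 0.5–0.6 as the same person:

// hypothetical decision helper on top of the distance computed above
const isSamePerson = (distance, threshold = 0.6) => distance < threshold;
console.log('same person:', isSamePerson(0.42)); // true under the assumed threshold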

demo/node-image.js Normal file

@ -0,0 +1,54 @@
/**
* FaceAPI Demo for NodeJS
* - Uses external library [@canvas/image](https://www.npmjs.com/package/@canvas/image) to decode image
* - Loads image from provided param
* - Outputs results to console
*/
// @canvas/image can decode jpeg, png, webp
// must be installed manually as it is just a demo dependency and not an actual face-api dependency
const image = require('@canvas/image'); // eslint-disable-line node/no-missing-require
const fs = require('fs');
const log = require('@vladmandic/pilogger');
const tf = require('@tensorflow/tfjs-node'); // in nodejs environments tfjs-node is required to be loaded before face-api
const faceapi = require('../dist/face-api.node.js'); // use this when using face-api in dev mode
// const faceapi = require('@vladmandic/face-api'); // use this when face-api is installed as module (majority of use cases)
const modelPath = 'model/';
const imageFile = 'demo/sample1.jpg';
const ssdOptions = { minConfidence: 0.1, maxResults: 10 };
async function main() {
log.header();
const buffer = fs.readFileSync(imageFile); // read image from disk
const canvas = await image.imageFromBuffer(buffer); // decode to canvas
const imageData = image.getImageData(canvas); // read decoded image data from canvas
log.info('image:', imageFile, canvas.width, canvas.height);
const tensor = tf.tidy(() => { // create tensor from image data
const data = tf.tensor(Array.from(imageData?.data || []), [canvas.height, canvas.width, 4], 'int32'); // create rgba image tensor from flat array and flip to height x width
const channels = tf.split(data, 4, 2); // split rgba to channels
const rgb = tf.stack([channels[0], channels[1], channels[2]], 2); // stack channels back to rgb
const reshape = tf.reshape(rgb, [1, canvas.height, canvas.width, 3]); // move extra dim from the end of tensor and use it as batch number instead
return reshape;
});
log.info('tensor:', tensor.shape, tensor.size);
// load models
await faceapi.nets.ssdMobilenetv1.loadFromDisk(modelPath);
await faceapi.nets.ageGenderNet.loadFromDisk(modelPath);
await faceapi.nets.faceLandmark68Net.loadFromDisk(modelPath);
await faceapi.nets.faceRecognitionNet.loadFromDisk(modelPath);
await faceapi.nets.faceExpressionNet.loadFromDisk(modelPath);
const optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options(ssdOptions); // create options object
const result = await faceapi // run detection
.detectAllFaces(tensor, optionsSSDMobileNet)
.withFaceLandmarks()
.withFaceExpressions()
.withFaceDescriptors()
.withAgeAndGender();
log.data('results:', result.length);
}
main();
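The split/stack sequence above exists only to drop the alpha channel; an equivalent sketch using tf.slice (assuming the same canvas and imageData variables from this demo) may be easier to follow:

// hypothetical alternative: slice rgb directly out of the rgba tensor
const rgbTensor = tf.tidy(() => {
  const rgba = tf.tensor(Array.from(imageData?.data || []), [canvas.height, canvas.width, 4], 'int32');
  const rgb = tf.slice(rgba, [0, 0, 0], [canvas.height, canvas.width, 3]); // keep r/g/b, drop alpha
  return tf.reshape(rgb, [1, canvas.height, canvas.width, 3]); // add batch dimension
});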

demo/node-match.js Normal file

@ -0,0 +1,84 @@
/**
* FaceAPI Demo for NodeJS
* - Analyzes face descriptors from source (image file or folder containing multiple image files)
* - Analyzes face descriptor from target
* - Finds best match
*/
const fs = require('fs');
const path = require('path');
const log = require('@vladmandic/pilogger');
const tf = require('@tensorflow/tfjs-node'); // in nodejs environments tfjs-node is required to be loaded before face-api
const faceapi = require('../dist/face-api.node.js'); // use this when using face-api in dev mode
// const faceapi = require('@vladmandic/face-api'); // use this when face-api is installed as module (majority of use cases)
let optionsSSDMobileNet;
const minConfidence = 0.1;
const distanceThreshold = 0.5;
const modelPath = 'model';
const labeledFaceDescriptors = [];
async function initFaceAPI() {
await faceapi.nets.ssdMobilenetv1.loadFromDisk(modelPath);
await faceapi.nets.faceLandmark68Net.loadFromDisk(modelPath);
await faceapi.nets.faceExpressionNet.loadFromDisk(modelPath);
await faceapi.nets.faceRecognitionNet.loadFromDisk(modelPath);
optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence, maxResults: 1 });
}
async function getDescriptors(imageFile) {
const buffer = fs.readFileSync(imageFile);
const tensor = tf.node.decodeImage(buffer, 3);
const faces = await faceapi.detectAllFaces(tensor, optionsSSDMobileNet)
.withFaceLandmarks()
.withFaceExpressions()
.withFaceDescriptors();
tf.dispose(tensor);
return faces.map((face) => face.descriptor);
}
async function registerImage(inputFile) {
if (!inputFile.toLowerCase().endsWith('jpg') && !inputFile.toLowerCase().endsWith('png') && !inputFile.toLowerCase().endsWith('gif')) return;
log.data('Registered:', inputFile);
const descriptors = await getDescriptors(inputFile);
for (const descriptor of descriptors) {
const labeledFaceDescriptor = new faceapi.LabeledFaceDescriptors(inputFile, [descriptor]);
labeledFaceDescriptors.push(labeledFaceDescriptor);
}
}
async function findBestMatch(inputFile) {
const matcher = new faceapi.FaceMatcher(labeledFaceDescriptors, distanceThreshold);
const descriptors = await getDescriptors(inputFile);
const matches = [];
for (const descriptor of descriptors) {
const match = await matcher.findBestMatch(descriptor);
matches.push(match);
}
return matches;
}
async function main() {
log.header();
if (process.argv.length !== 4) {
log.error(process.argv[1], 'Expected <source image or folder> <target image>');
process.exit(1);
}
await initFaceAPI();
log.info('Input:', process.argv[2]);
if (fs.statSync(process.argv[2]).isFile()) {
await registerImage(process.argv[2]); // register image
} else if (fs.statSync(process.argv[2]).isDirectory()) {
const dir = fs.readdirSync(process.argv[2]);
for (const f of dir) await registerImage(path.join(process.argv[2], f)); // register all images in a folder
}
log.info('Comparing:', process.argv[3], 'Descriptors:', labeledFaceDescriptors.length);
if (labeledFaceDescriptors.length > 0) {
const bestMatch = await findBestMatch(process.argv[3]); // find best match to all registered images
log.data('Match:', bestMatch);
} else {
log.warn('No registered faces');
}
}
main();
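For reference, FaceMatcher.findBestMatch is essentially a nearest-neighbour search over the registered descriptors; a rough sketch of the equivalent logic (bestMatch here is illustrative, not the library implementation):

// hypothetical nearest-neighbour lookup equivalent to matcher.findBestMatch
function bestMatch(descriptor, threshold = distanceThreshold) {
  let best = { label: 'unknown', distance: Number.MAX_VALUE };
  for (const labeled of labeledFaceDescriptors) {
    for (const ref of labeled.descriptors) {
      const distance = faceapi.euclideanDistance(descriptor, ref);
      if (distance < best.distance) best = { label: labeled.label, distance };
    }
  }
  return best.distance <= threshold ? best : { label: 'unknown', distance: best.distance };
}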

demo/node-multiprocess-worker.js

@ -1,17 +1,20 @@
-// @ts-nocheck
+/**
+ * FaceAPI Demo for NodeJS
+ * - Used by `node-multiprocess.js`
+ */
 const fs = require('fs');
 const path = require('path');
 const log = require('@vladmandic/pilogger');
 // workers actually import tfjs and faceapi modules
-// eslint-disable-next-line import/no-extraneous-dependencies, node/no-unpublished-require
-const tf = require('@tensorflow/tfjs-node');
-const faceapi = require('../dist/face-api.node.js'); // this is equivalent to '@vladmandic/faceapi'
+const tf = require('@tensorflow/tfjs-node'); // in nodejs environments tfjs-node is required to be loaded before face-api
+const faceapi = require('../dist/face-api.node.js'); // use this when using face-api in dev mode
+// const faceapi = require('@vladmandic/face-api'); // use this when face-api is installed as module (majority of use cases)
 // options used by faceapi
 const modelPathRoot = '../model';
-const minScore = 0.1;
+const minConfidence = 0.15;
 const maxResults = 5;
 let optionsSSDMobileNet;
@ -29,10 +32,10 @@ async function detect(img) {
   const tensor = await image(img);
   const result = await faceapi
     .detectAllFaces(tensor, optionsSSDMobileNet)
-    .withFaceLandmarks()
-    .withFaceExpressions()
-    .withFaceDescriptors()
-    .withAgeAndGender();
+    .withFaceLandmarks();
+    // .withFaceExpressions()
+    // .withFaceDescriptors()
+    // .withAgeAndGender();
   process.send({ image: img, detected: result }); // send results back to main
   process.send({ ready: true }); // send signal back to main that this worker is now idle and ready for next image
   tensor.dispose();
@ -52,7 +55,7 @@ async function main() {
   await faceapi.tf.enableProdMode();
   await faceapi.tf.ENV.set('DEBUG', false);
   await faceapi.tf.ready();
-  log.state('Worker: PID:', process.pid, `TensorFlow/JS ${faceapi.tf.version_core} FaceAPI ${faceapi.version.faceapi} Backend: ${faceapi.tf.getBackend()}`);
+  log.state('Worker: PID:', process.pid, `TensorFlow/JS ${faceapi.tf.version_core} FaceAPI ${faceapi.version} Backend: ${faceapi.tf.getBackend()}`);
   // and load and initialize faceapi models
   const modelPath = path.join(__dirname, modelPathRoot);
@ -61,7 +64,7 @@ async function main() {
   await faceapi.nets.faceLandmark68Net.loadFromDisk(modelPath);
   await faceapi.nets.faceRecognitionNet.loadFromDisk(modelPath);
   await faceapi.nets.faceExpressionNet.loadFromDisk(modelPath);
-  optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence: minScore, maxResults });
+  optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence, maxResults });
   // now we're ready, so send message back to main so it knows it can use this worker
   process.send({ ready: true });

demo/node-multiprocess.js

@ -1,23 +1,27 @@
-// @ts-nocheck
+/**
+ * FaceAPI Demo for NodeJS
+ * - Starts multiple worker processes and uses them as worker pool to process all input images
+ * - Images are enumerated in main process and sent for processing to worker processes via ipc
+ */
 const fs = require('fs');
 const path = require('path');
 const log = require('@vladmandic/pilogger'); // this is my simple logger with few extra features
 const child_process = require('child_process');
-// note that main process import faceapi or tfjs at all
+// note that main process does not need to import faceapi or tfjs at all as processing is done in a worker process
-const imgPathRoot = './example'; // modify to include your sample images
+const imgPathRoot = './demo'; // modify to include your sample images
-const numWorkers = 2; // how many workers will be started
+const numWorkers = 4; // how many workers will be started
 const workers = []; // this holds worker processes
 const images = []; // this holds queue of enumerated images
 const t = []; // timers
-let dir;
+let numImages;
 // triggered by main when worker sends ready message
 // if image pool is empty, signal worker to exit otherwise dispatch image to worker and remove image from queue
 async function detect(worker) {
   if (!t[2]) t[2] = process.hrtime.bigint(); // first time do a timestamp so we can measure initial latency
-  if (images.length === dir.length) worker.send({ test: true }); // for first image in queue just measure latency
+  if (images.length === numImages) worker.send({ test: true }); // for first image in queue just measure latency
   if (images.length === 0) worker.send({ exit: true }); // nothing left in queue
   else {
     log.state('Main: dispatching to worker:', worker.pid);
@ -32,14 +36,14 @@ function waitCompletion() {
   if (activeWorkers > 0) setImmediate(() => waitCompletion());
   else {
     t[1] = process.hrtime.bigint();
-    log.info('Processed', dir.length, 'images in', Math.trunc(parseInt(t[1] - t[0]) / 1000 / 1000), 'ms');
+    log.info('Processed:', numImages, 'images in', 'total:', Math.trunc(Number(t[1] - t[0]) / 1000000), 'ms', 'working:', Math.trunc(Number(t[1] - t[2]) / 1000000), 'ms', 'average:', Math.trunc(Number(t[1] - t[2]) / numImages / 1000000), 'ms');
   }
 }
 function measureLatency() {
   t[3] = process.hrtime.bigint();
-  const latencyInitialization = Math.trunc(parseInt(t[2] - t[0]) / 1000 / 1000);
-  const latencyRoundTrip = Math.trunc(parseInt(t[3] - t[2]) / 1000 / 1000);
+  const latencyInitialization = Math.trunc(Number(t[2] - t[0]) / 1000 / 1000);
+  const latencyRoundTrip = Math.trunc(Number(t[3] - t[2]) / 1000 / 1000);
   log.info('Latency: worker initialization: ', latencyInitialization, 'message round trip:', latencyRoundTrip);
 }
@ -48,16 +52,17 @@ async function main() {
   log.info('FaceAPI multi-process test');
   // enumerate all images into queue
-  dir = fs.readdirSync(imgPathRoot);
+  const dir = fs.readdirSync(imgPathRoot);
   for (const imgFile of dir) {
     if (imgFile.toLocaleLowerCase().endsWith('.jpg')) images.push(path.join(imgPathRoot, imgFile));
   }
+  numImages = images.length;
   t[0] = process.hrtime.bigint();
   // manage worker processes
   for (let i = 0; i < numWorkers; i++) {
     // create worker process
-    workers[i] = await child_process.fork('example/node-multiprocess-worker.js', ['special']);
+    workers[i] = await child_process.fork('demo/node-multiprocess-worker.js', ['special']);
     // parse message that worker process sends back to main
     // if message is ready, dispatch next image in queue
     // if message is processing result, just print how many faces were detected
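The hunk ends just before the handler those last comments describe; the wiring is roughly as follows (a sketch consistent with the worker code above, not the verbatim continuation of the file):

// hypothetical main-side message handler matching the comments above
workers[i].on('message', (msg) => {
  if (msg.ready) detect(workers[i]); // worker is idle: dispatch next image from queue
  if (msg.detected) log.data('Main: worker:', workers[i].pid, 'image:', msg.image, 'faces:', msg.detected.length);
});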

demo/node-simple.js Normal file

@ -0,0 +1,31 @@
/**
* FaceAPI Demo for NodeJS
* - Loads image
* - Outputs results to console
*/
const fs = require('fs');
const faceapi = require('../dist/face-api.node.js'); // use this when using face-api in dev mode
// const faceapi = require('@vladmandic/face-api'); // use this when face-api is installed as module (majority of use cases)
async function main() {
await faceapi.nets.ssdMobilenetv1.loadFromDisk('model'); // load models from a specific path
await faceapi.nets.faceLandmark68Net.loadFromDisk('model');
await faceapi.nets.ageGenderNet.loadFromDisk('model');
await faceapi.nets.faceRecognitionNet.loadFromDisk('model');
await faceapi.nets.faceExpressionNet.loadFromDisk('model');
const options = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.1, maxResults: 10 }); // set model options
const buffer = fs.readFileSync('demo/sample1.jpg'); // load jpg image as binary
const decodeT = faceapi.tf.node.decodeImage(buffer, 3); // decode binary buffer to rgb tensor
const expandT = faceapi.tf.expandDims(decodeT, 0); // add batch dimension to tensor
const result = await faceapi.detectAllFaces(expandT, options) // run detection
.withFaceLandmarks()
.withFaceExpressions()
.withFaceDescriptors()
.withAgeAndGender();
faceapi.tf.dispose([decodeT, expandT]); // dispose tensors to avoid memory leaks
console.log({ result }); // eslint-disable-line no-console
}
main();
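To confirm that the dispose call actually releases the input tensors, tfjs tensor accounting can be checked before and after detection; a sketch (the placement is illustrative):

// hypothetical leak check using standard tfjs memory accounting
const before = faceapi.tf.memory().numTensors;
// ... run detectAllFaces and dispose the input tensors as above ...
console.log('tensors leaked:', faceapi.tf.memory().numTensors - before);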

demo/node-wasm.js Normal file

@ -0,0 +1,53 @@
/**
* FaceAPI Demo for NodeJS using WASM
* - Loads WASM binaries from external CDN
* - Loads image
* - Outputs results to console
*/
const fs = require('fs');
const image = require('@canvas/image'); // eslint-disable-line node/no-missing-require
const tf = require('@tensorflow/tfjs');
const wasm = require('@tensorflow/tfjs-backend-wasm');
const faceapi = require('../dist/face-api.node-wasm.js'); // use this when using face-api in dev mode
async function readImage(imageFile) {
const buffer = fs.readFileSync(imageFile); // read image from disk
const canvas = await image.imageFromBuffer(buffer); // decode to canvas
const imageData = image.getImageData(canvas); // read decoded image data from canvas
const tensor = tf.tidy(() => { // create tensor from image data
const data = tf.tensor(Array.from(imageData?.data || []), [canvas.height, canvas.width, 4], 'int32'); // create rgba image tensor from flat array and flip to height x width
const channels = tf.split(data, 4, 2); // split rgba to channels
const rgb = tf.stack([channels[0], channels[1], channels[2]], 2); // stack channels back to rgb
const squeeze = tf.squeeze(rgb); // remove the redundant trailing dimension left over from stacking
return squeeze;
});
console.log(`Image: ${imageFile} [${canvas.width} x ${canvas.height}] Tensor: ${tensor.shape}, Size: ${tensor.size}`); // eslint-disable-line no-console
return tensor;
}
async function main() {
wasm.setWasmPaths('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm/dist/', true);
await tf.setBackend('wasm');
await tf.ready();
console.log(`Version: FaceAPI ${faceapi.version} TensorFlow/JS ${tf.version_core} Backend: ${faceapi.tf.getBackend()}`); // eslint-disable-line no-console
await faceapi.nets.ssdMobilenetv1.loadFromDisk('model'); // load models from a specific path
await faceapi.nets.faceLandmark68Net.loadFromDisk('model');
await faceapi.nets.ageGenderNet.loadFromDisk('model');
await faceapi.nets.faceRecognitionNet.loadFromDisk('model');
await faceapi.nets.faceExpressionNet.loadFromDisk('model');
const options = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.1, maxResults: 10 }); // set model options
const tensor = await readImage('demo/sample1.jpg');
const t0 = performance.now();
const result = await faceapi.detectAllFaces(tensor, options) // run detection
.withFaceLandmarks()
.withFaceExpressions()
.withFaceDescriptors()
.withAgeAndGender();
tf.dispose(tensor); // dispose tensors to avoid memory leaks
const t1 = performance.now();
console.log('Time', t1 - t0); // eslint-disable-line no-console
console.log('Result', result); // eslint-disable-line no-console
}
main();
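WASM throughput depends heavily on whether the runtime supports SIMD and multithreading; both can be probed through standard tfjs environment flags once the backend is ready (a sketch, to be placed inside main() after tf.ready()):

// hypothetical capability probe; both flags are standard tfjs-backend-wasm flags
const simd = await tf.env().getAsync('WASM_HAS_SIMD_SUPPORT');
const threads = await tf.env().getAsync('WASM_HAS_MULTITHREAD_SUPPORT');
console.log(`WASM SIMD: ${simd} Threads: ${threads ? 'multi' : 'single'}`);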

demo/node.js Normal file

@ -0,0 +1,139 @@
/**
* FaceAPI Demo for NodeJS
* - Uses external library [node-fetch](https://www.npmjs.com/package/node-fetch) to load images via http
* - Loads image from provided param
* - Outputs results to console
*/
const fs = require('fs');
const process = require('process');
const path = require('path');
const log = require('@vladmandic/pilogger');
const tf = require('@tensorflow/tfjs-node'); // in nodejs environments tfjs-node is required to be loaded before face-api
const faceapi = require('../dist/face-api.node.js'); // use this when using face-api in dev mode
// const faceapi = require('@vladmandic/face-api'); // use this when face-api is installed as module (majority of use cases)
const modelPathRoot = '../model';
const imgPathRoot = './demo'; // modify to include your sample images
const minConfidence = 0.15;
const maxResults = 5;
let optionsSSDMobileNet;
let fetch; // dynamically imported later
async function image(input) {
// read input image file and create tensor to be used for processing
let buffer;
log.info('Loading image:', input);
if (input.startsWith('http:') || input.startsWith('https:')) {
const res = await fetch(input);
if (res && res.ok) buffer = await res.buffer();
else log.error('Invalid image URL:', input, res.status, res.statusText, res.headers.get('content-type'));
} else {
buffer = fs.readFileSync(input);
}
// decode image using tfjs-node so we don't need external dependencies
// can also be done using canvas.js or some other 3rd party image library
if (!buffer) return {};
const tensor = tf.tidy(() => {
const decode = faceapi.tf.node.decodeImage(buffer, 3);
let expand;
if (decode.shape[2] === 4) { // input is in rgba format, need to convert to rgb
const channels = faceapi.tf.split(decode, 4, 2); // tf.split(tensor, 4, 2); // split rgba to channels
const rgb = faceapi.tf.stack([channels[0], channels[1], channels[2]], 2); // stack channels back to rgb and ignore alpha
expand = faceapi.tf.reshape(rgb, [1, decode.shape[0], decode.shape[1], 3]); // move extra dim from the end of tensor and use it as batch number instead
} else {
expand = faceapi.tf.expandDims(decode, 0);
}
const cast = faceapi.tf.cast(expand, 'float32');
return cast;
});
return tensor;
}
async function detect(tensor) {
try {
const result = await faceapi
.detectAllFaces(tensor, optionsSSDMobileNet)
.withFaceLandmarks()
.withFaceExpressions()
.withFaceDescriptors()
.withAgeAndGender();
return result;
} catch (err) {
log.error('Caught error', err.message);
return [];
}
}
// eslint-disable-next-line no-unused-vars, @typescript-eslint/no-unused-vars
function detectPromise(tensor) {
return new Promise((resolve) => faceapi
.detectAllFaces(tensor, optionsSSDMobileNet)
.withFaceLandmarks()
.withFaceExpressions()
.withFaceDescriptors()
.withAgeAndGender()
.then((res) => resolve(res))
.catch((err) => {
log.error('Caught error', err.message);
resolve([]);
}));
}
function print(face) {
const expression = Object.entries(face.expressions).reduce((acc, val) => ((val[1] > acc[1]) ? val : acc), ['', 0]);
const box = [face.alignedRect._box._x, face.alignedRect._box._y, face.alignedRect._box._width, face.alignedRect._box._height];
const gender = `Gender: ${Math.round(100 * face.genderProbability)}% ${face.gender}`;
log.data(`Detection confidence: ${Math.round(100 * face.detection._score)}% ${gender} Age: ${Math.round(10 * face.age) / 10} Expression: ${Math.round(100 * expression[1])}% ${expression[0]} Box: ${box.map((a) => Math.round(a))}`);
}
async function main() {
log.header();
log.info('FaceAPI single-process test');
// eslint-disable-next-line node/no-extraneous-import
fetch = (await import('node-fetch')).default; // eslint-disable-line node/no-missing-import
await faceapi.tf.setBackend('tensorflow');
await faceapi.tf.ready();
log.state(`Version: TensorFlow/JS ${faceapi.tf?.version_core} FaceAPI ${faceapi.version} Backend: ${faceapi.tf?.getBackend()}`);
log.info('Loading FaceAPI models');
const modelPath = path.join(__dirname, modelPathRoot);
await faceapi.nets.ssdMobilenetv1.loadFromDisk(modelPath);
await faceapi.nets.ageGenderNet.loadFromDisk(modelPath);
await faceapi.nets.faceLandmark68Net.loadFromDisk(modelPath);
await faceapi.nets.faceRecognitionNet.loadFromDisk(modelPath);
await faceapi.nets.faceExpressionNet.loadFromDisk(modelPath);
optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence, maxResults });
if (process.argv.length !== 4) {
const t0 = process.hrtime.bigint();
const dir = fs.readdirSync(imgPathRoot);
for (const img of dir) {
if (!img.toLocaleLowerCase().endsWith('.jpg')) continue;
const tensor = await image(path.join(imgPathRoot, img));
const result = await detect(tensor);
log.data('Image:', img, 'Detected faces:', result.length);
for (const face of result) print(face);
tensor.dispose();
}
const t1 = process.hrtime.bigint();
log.info('Processed', dir.length, 'images in', Math.trunc(Number((t1 - t0)) / 1000 / 1000), 'ms');
} else {
const param = process.argv[2];
if (fs.existsSync(param) || param.startsWith('http:') || param.startsWith('https:')) {
const tensor = await image(param);
const result = await detect(tensor);
// const result = await detectPromise(null);
log.data('Image:', param, 'Detected faces:', result.length);
for (const face of result) print(face);
tensor.dispose();
}
}
}
main();

[7 binary image files changed (Before/After sizes: 141, 178, 216, 206, 162, 295, 569 KiB); binary file demo/screenshot-webcam.png added (240 KiB, not shown)]

demo/webcam.html Normal file

@ -0,0 +1,21 @@
<!DOCTYPE html>
<html lang="en">
<head>
<title>FaceAPI Live WebCam Demo</title>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<meta name="viewport" content="width=device-width, shrink-to-fit=yes">
<meta name="application-name" content="FaceAPI">
<meta name="keywords" content="FaceAPI">
<meta name="description" content="FaceAPI: AI-powered Face Detection, Description & Recognition for Browser and NodeJS using Tensorflow/JS; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="msapplication-tooltip" content="FaceAPI: AI-powered Face Detection, Description & Recognition for Browser and NodeJS using Tensorflow/JS; Author: Vladimir Mandic <https://github.com/vladmandic>">
<link rel="shortcut icon" href="../favicon.ico" type="image/x-icon">
<script src="./webcam.js" type="module"></script>
</head>
<body style="font-family: monospace; background: black; color: white; font-size: 16px; line-height: 22px; margin: 0; overflow: hidden">
<video id="video" playsinline class="video"></video>
<canvas id="canvas" class="canvas" style="position: fixed; top: 0; left: 0; z-index: 10"></canvas>
<div id="log" style="overflow-y: scroll; height: 16.5rem"></div>
</body>
</html>

demo/webcam.js Normal file

@ -0,0 +1,194 @@
/**
* FaceAPI Demo for Browsers
* Loaded via `webcam.html`
*/
import * as faceapi from '../dist/face-api.esm.js'; // use when in dev mode
// import * as faceapi from '@vladmandic/face-api'; // use when downloading face-api as npm
// configuration options
const modelPath = '../model/'; // path to model folder that will be loaded using http
// const modelPath = 'https://cdn.jsdelivr.net/npm/@vladmandic/face-api/model/'; // path to model folder that will be loaded using http
const minScore = 0.2; // minimum score
const maxResults = 5; // maximum number of results to return
let optionsSSDMobileNet;
// helper function to pretty-print json object to string
function str(json) {
let text = '<font color="lightblue">';
text += json ? JSON.stringify(json).replace(/{|}|"|\[|\]/g, '').replace(/,/g, ', ') : '';
text += '</font>';
return text;
}
// helper function to print strings to html document as a log
function log(...txt) {
console.log(...txt); // eslint-disable-line no-console
const div = document.getElementById('log');
if (div) div.innerHTML += `<br>${txt}`;
}
// helper function to draw detected faces
function drawFaces(canvas, data, fps) {
const ctx = canvas.getContext('2d', { willReadFrequently: true });
if (!ctx) return;
ctx.clearRect(0, 0, canvas.width, canvas.height);
// draw title
ctx.font = 'small-caps 20px "Segoe UI"';
ctx.fillStyle = 'white';
ctx.fillText(`FPS: ${fps}`, 10, 25);
for (const person of data) {
// draw box around each face
ctx.lineWidth = 3;
ctx.strokeStyle = 'deepskyblue';
ctx.fillStyle = 'deepskyblue';
ctx.globalAlpha = 0.6;
ctx.beginPath();
ctx.rect(person.detection.box.x, person.detection.box.y, person.detection.box.width, person.detection.box.height);
ctx.stroke();
ctx.globalAlpha = 1;
// draw text labels
const expression = Object.entries(person.expressions).sort((a, b) => b[1] - a[1]);
ctx.fillStyle = 'black';
ctx.fillText(`gender: ${Math.round(100 * person.genderProbability)}% ${person.gender}`, person.detection.box.x, person.detection.box.y - 59);
ctx.fillText(`expression: ${Math.round(100 * expression[0][1])}% ${expression[0][0]}`, person.detection.box.x, person.detection.box.y - 41);
ctx.fillText(`age: ${Math.round(person.age)} years`, person.detection.box.x, person.detection.box.y - 23);
ctx.fillText(`roll:${person.angle.roll}° pitch:${person.angle.pitch}° yaw:${person.angle.yaw}°`, person.detection.box.x, person.detection.box.y - 5);
ctx.fillStyle = 'lightblue';
ctx.fillText(`gender: ${Math.round(100 * person.genderProbability)}% ${person.gender}`, person.detection.box.x, person.detection.box.y - 60);
ctx.fillText(`expression: ${Math.round(100 * expression[0][1])}% ${expression[0][0]}`, person.detection.box.x, person.detection.box.y - 42);
ctx.fillText(`age: ${Math.round(person.age)} years`, person.detection.box.x, person.detection.box.y - 24);
ctx.fillText(`roll:${person.angle.roll}° pitch:${person.angle.pitch}° yaw:${person.angle.yaw}°`, person.detection.box.x, person.detection.box.y - 6);
// draw face points for each face
ctx.globalAlpha = 0.8;
ctx.fillStyle = 'lightblue';
const pointSize = 2;
for (let i = 0; i < person.landmarks.positions.length; i++) {
ctx.beginPath();
ctx.arc(person.landmarks.positions[i].x, person.landmarks.positions[i].y, pointSize, 0, 2 * Math.PI);
ctx.fill();
}
}
}
async function detectVideo(video, canvas) {
if (!video || video.paused) return false;
const t0 = performance.now();
faceapi
.detectAllFaces(video, optionsSSDMobileNet)
.withFaceLandmarks()
.withFaceExpressions()
// .withFaceDescriptors()
.withAgeAndGender()
.then((result) => {
const fps = 1000 / (performance.now() - t0);
drawFaces(canvas, result, fps.toLocaleString());
requestAnimationFrame(() => detectVideo(video, canvas));
return true;
})
.catch((err) => {
log(`Detect Error: ${str(err)}`);
return false;
});
return false;
}
// just initialize everything and call main function
async function setupCamera() {
const video = document.getElementById('video');
const canvas = document.getElementById('canvas');
if (!video || !canvas) return null;
log('Setting up camera');
// setup webcam. note that navigator.mediaDevices requires that page is accessed via https
if (!navigator.mediaDevices) {
log('Camera Error: access not supported');
return null;
}
let stream;
const constraints = { audio: false, video: { facingMode: 'user', resizeMode: 'crop-and-scale' } };
if (window.innerWidth > window.innerHeight) constraints.video.width = { ideal: window.innerWidth };
else constraints.video.height = { ideal: window.innerHeight };
try {
stream = await navigator.mediaDevices.getUserMedia(constraints);
} catch (err) {
if (err.name === 'PermissionDeniedError' || err.name === 'NotAllowedError') log(`Camera Error: camera permission denied: ${err.message || err}`);
if (err.name === 'SourceUnavailableError') log(`Camera Error: camera not available: ${err.message || err}`);
return null;
}
if (stream) {
video.srcObject = stream;
} else {
log('Camera Error: stream empty');
return null;
}
const track = stream.getVideoTracks()[0];
const settings = track.getSettings();
if (settings.deviceId) delete settings.deviceId;
if (settings.groupId) delete settings.groupId;
if (settings.aspectRatio) settings.aspectRatio = Math.trunc(100 * settings.aspectRatio) / 100;
log(`Camera active: ${track.label}`);
log(`Camera settings: ${str(settings)}`);
canvas.addEventListener('click', () => {
if (video && video.readyState >= 2) {
if (video.paused) {
video.play();
detectVideo(video, canvas);
} else {
video.pause();
}
}
log(`Camera state: ${video.paused ? 'paused' : 'playing'}`);
});
return new Promise((resolve) => {
video.onloadeddata = async () => {
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
video.play();
detectVideo(video, canvas);
resolve(true);
};
});
}
async function setupFaceAPI() {
// load face-api models
// log('Models loading');
// await faceapi.nets.tinyFaceDetector.load(modelPath); // using ssdMobilenetv1
await faceapi.nets.ssdMobilenetv1.load(modelPath);
await faceapi.nets.ageGenderNet.load(modelPath);
await faceapi.nets.faceLandmark68Net.load(modelPath);
await faceapi.nets.faceRecognitionNet.load(modelPath);
await faceapi.nets.faceExpressionNet.load(modelPath);
optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence: minScore, maxResults });
// check tf engine state
log(`Models loaded: ${str(faceapi.tf.engine().state.numTensors)} tensors`);
}
async function main() {
// initialize tfjs
log('FaceAPI WebCam Test');
// if you want to use wasm backend location for wasm binaries must be specified
// await faceapi.tf?.setWasmPaths(`https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm@${faceapi.tf.version_core}/dist/`);
// await faceapi.tf?.setBackend('wasm');
// log(`WASM SIMD: ${await faceapi.tf?.env().getAsync('WASM_HAS_SIMD_SUPPORT')} Threads: ${await faceapi.tf?.env().getAsync('WASM_HAS_MULTITHREAD_SUPPORT') ? 'Multi' : 'Single'}`);
// default is webgl backend
await faceapi.tf.setBackend('webgl');
await faceapi.tf.ready();
// tfjs optimizations
if (faceapi.tf?.env().flagRegistry.CANVAS2D_WILL_READ_FREQUENTLY) faceapi.tf.env().set('CANVAS2D_WILL_READ_FREQUENTLY', true);
if (faceapi.tf?.env().flagRegistry.WEBGL_EXP_CONV) faceapi.tf.env().set('WEBGL_EXP_CONV', true);
// check version
log(`Version: FaceAPI ${str(faceapi?.version || '(not loaded)')} TensorFlow/JS ${str(faceapi.tf?.version_core || '(not loaded)')} Backend: ${str(faceapi.tf?.getBackend() || '(not loaded)')}`);
await setupFaceAPI();
await setupCamera();
}
// start processing as soon as page is loaded
window.onload = main;
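Because detectVideo re-queues itself via requestAnimationFrame, detection runs as fast as the pipeline allows; to cap the rate and leave GPU headroom for rendering, the recursion can be throttled, e.g. (a sketch; targetFPS is a made-up knob, not part of this demo):

// hypothetical throttle: schedule the next detection instead of recursing immediately
const targetFPS = 15;
function scheduleNext(video, canvas) {
  setTimeout(() => detectVideo(video, canvas), 1000 / targetFPS);
}
// in the .then() handler, replace requestAnimationFrame(() => detectVideo(video, canvas)) with scheduleNext(video, canvas)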

dist/face-api.d.ts vendored Normal file

@ -0,0 +1 @@
export * from '../types/face-api';

dist/face-api.esm-nobundle.d.ts vendored Normal file

@ -0,0 +1 @@
export * from '../types/face-api';

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

dist/face-api.esm.d.ts vendored Normal file

@ -0,0 +1 @@
export * from '../types/face-api';

dist/face-api.esm.js vendored

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

dist/face-api.esm.json vendored

File diff suppressed because it is too large

dist/face-api.js vendored

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

dist/face-api.json vendored

File diff suppressed because it is too large

dist/face-api.node-gpu.d.ts vendored Normal file

@ -0,0 +1 @@
export * from '../types/face-api';

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

dist/face-api.node-wasm.d.ts vendored Normal file

@ -0,0 +1 @@
export * from '../types/face-api';

dist/face-api.node-wasm.js vendored Normal file

File diff suppressed because one or more lines are too long

dist/face-api.node.d.ts vendored Normal file

@ -0,0 +1 @@
export * from '../types/face-api';

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

dist/face-api.node.json vendored

File diff suppressed because it is too large

dist/tfjs.esm.d.ts vendored Normal file

@ -0,0 +1,28 @@
/*
import '@tensorflow/tfjs-core';
import '@tensorflow/tfjs-core/dist/types';
import '@tensorflow/tfjs-core/dist/register_all_gradients';
import '@tensorflow/tfjs-core/dist/public/chained_ops/register_all_chained_ops';
import '@tensorflow/tfjs-data';
import '@tensorflow/tfjs-layers';
import '@tensorflow/tfjs-converter';
import '@tensorflow/tfjs-backend-cpu';
import '@tensorflow/tfjs-backend-webgl';
import '@tensorflow/tfjs-backend-wasm';
import '@tensorflow/tfjs-backend-webgpu';
*/
export declare const version: {
'tfjs-core': string;
'tfjs-backend-cpu': string;
'tfjs-backend-webgl': string;
'tfjs-data': string;
'tfjs-layers': string;
'tfjs-converter': string;
tfjs: string;
};
export { io, browser, image } from '@tensorflow/tfjs-core';
export { tensor, tidy, softmax, unstack, relu, add, conv2d, cast, zeros, concat, avgPool, stack, fill, transpose, tensor1d, tensor2d, tensor3d, tensor4d, maxPool, matMul, mul, sub, scalar } from '@tensorflow/tfjs-core';
export { div, pad, slice, reshape, slice3d, expandDims, depthwiseConv2d, separableConv2d, sigmoid, exp, tile, batchNorm, clipByValue } from '@tensorflow/tfjs-core';
export { ENV, Variable, Tensor, TensorLike, Rank, Tensor1D, Tensor2D, Tensor3D, Tensor4D, Tensor5D, NamedTensorMap } from '@tensorflow/tfjs-core';
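Since face-api ships its own tfjs build, version checks should go through the re-exported namespace rather than a separately installed tfjs; for example (a sketch, assuming the npm package):

import * as faceapi from '@vladmandic/face-api';
// the bundled tfjs exposes per-package versions, e.g. { 'tfjs-core': '4.22.0', ... }
console.log(faceapi.tf.version['tfjs-core']);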

dist/tfjs.esm.js vendored

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

dist/tfjs.esm.json vendored

File diff suppressed because it is too large

dist/tfjs.version.d.ts vendored Normal file

@ -0,0 +1,9 @@
export declare const version: {
'tfjs-core': string;
'tfjs-backend-cpu': string;
'tfjs-backend-webgl': string;
'tfjs-data': string;
'tfjs-layers': string;
'tfjs-converter': string;
tfjs: string;
};

dist/tfjs.version.js vendored Normal file

@ -0,0 +1,7 @@
/*
Face-API
homepage: <https://github.com/vladmandic/face-api>
author: <https://github.com/vladmandic>
*/
var e="4.22.0";var s="4.22.0";var t="4.22.0";var n="4.22.0";var i="4.22.0";var w={tfjs:e,"tfjs-core":e,"tfjs-converter":s,"tfjs-backend-cpu":t,"tfjs-backend-webgl":n,"tfjs-backend-wasm":i};export{w as version};

example/index.html

@ -1,13 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<title>FaceAPI Static Images Demo</title>
<meta http-equiv="content-type">
<meta content="text/html">
<meta charset="UTF-8">
<script src="./index.js" type="module"></script>
</head>
<body style="font-family: monospace; background: black; color: white; font-size: 16px; line-height: 22px; margin: 0;">
<div id="log"></div>
</body>
</html>

example/webcam.html

@ -1,15 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<title>FaceAPI Live WebCam Demo</title>
<meta http-equiv="content-type">
<meta content="text/html">
<meta charset="UTF-8">
<script src="./webcam.js" type="module"></script>
</head>
<body style="font-family: monospace; background: black; color: white; font-size: 16px; line-height: 22px; margin: 0;">
<video id="video" playsinline class="video"></video>
<canvas id="canvas" class="canvas" style="position: fixed; top: 0; left: 0; z-index: 10"></canvas>
<div id="log"></div>
</body>
</html>

example/webcam.js

@ -1,173 +0,0 @@
import * as faceapi from '../dist/face-api.esm.js';
// configuration options
const modelPath = 'https://vladmandic.github.io/face-api/model/'; // path to model folder that will be loaded using http
const minScore = 0.1; // minimum score
const maxResults = 5; // maximum number of results to return
let optionsSSDMobileNet;
// helper function to pretty-print json object to string
function str(json) {
let text = '<font color="lightblue">';
text += json ? JSON.stringify(json).replace(/{|}|"|\[|\]/g, '').replace(/,/g, ', ') : '';
text += '</font>';
return text;
}
// helper function to print strings to html document as a log
function log(...txt) {
// eslint-disable-next-line no-console
console.log(...txt);
// @ts-ignore
document.getElementById('log').innerHTML += `<br>${txt}`;
}
// helper function to draw detected faces
function drawFaces(canvas, data, fps) {
const ctx = canvas.getContext('2d');
if (!ctx) return;
ctx.clearRect(0, 0, canvas.width, canvas.height);
// draw title
ctx.font = '1.4rem sans-serif';
ctx.fillStyle = 'white';
ctx.fillText(`FPS: ${fps}`, 10, 25);
for (const person of data) {
// draw box around each face
ctx.lineWidth = 3;
ctx.strokeStyle = 'deepskyblue';
ctx.fillStyle = 'deepskyblue';
ctx.globalAlpha = 0.4;
ctx.beginPath();
ctx.rect(person.detection.box.x, person.detection.box.y, person.detection.box.width, person.detection.box.height);
ctx.stroke();
ctx.globalAlpha = 1;
// const expression = person.expressions.sort((a, b) => Object.values(a)[0] - Object.values(b)[0]);
const expression = Object.entries(person.expressions).sort((a, b) => b[1] - a[1]);
ctx.fillText(`gender ${Math.round(100 * person.genderProbability)}% ${person.gender}`, person.detection.box.x, person.detection.box.y - 45);
ctx.fillText(`expression ${Math.round(100 * expression[0][1])}% ${expression[0][0]}`, person.detection.box.x, person.detection.box.y - 25);
ctx.fillText(`age ${Math.round(person.age)} years`, person.detection.box.x, person.detection.box.y - 5);
// draw face points for each face
ctx.fillStyle = 'lightblue';
ctx.globalAlpha = 0.5;
const pointSize = 2;
for (const pt of person.landmarks.positions) {
ctx.beginPath();
ctx.arc(pt.x, pt.y, pointSize, 0, 2 * Math.PI);
ctx.fill();
}
}
}
async function detectVideo(video, canvas) {
const t0 = performance.now();
faceapi
.detectAllFaces(video, optionsSSDMobileNet)
.withFaceLandmarks()
.withFaceExpressions()
// .withFaceDescriptors()
.withAgeAndGender()
.then((result) => {
const fps = 1000 / (performance.now() - t0);
drawFaces(canvas, result, fps.toLocaleString());
requestAnimationFrame(() => detectVideo(video, canvas));
return true;
})
.catch((err) => {
log(`Detect Error: ${str(err)}`);
return false;
});
}
// just initialize everything and call main function
async function setupCamera() {
const video = document.getElementById('video');
const canvas = document.getElementById('canvas');
if (!video || !canvas) return null;
let msg = '';
log('Setting up camera');
// setup webcam. note that navigator.mediaDevices requires that page is accessed via https
if (!navigator.mediaDevices) {
log('Camera Error: access not supported');
return null;
}
let stream;
const constraints = {
audio: false,
video: { facingMode: 'user', resizeMode: 'crop-and-scale' },
};
if (window.innerWidth > window.innerHeight) constraints.video.width = { ideal: window.innerWidth };
else constraints.video.height = { ideal: window.innerHeight };
try {
stream = await navigator.mediaDevices.getUserMedia(constraints);
} catch (err) {
if (err.name === 'PermissionDeniedError' || err.name === 'NotAllowedError') msg = 'camera permission denied';
else if (err.name === 'SourceUnavailableError') msg = 'camera not available';
log(`Camera Error: ${msg}: ${err.message || err}`);
return null;
}
// @ts-ignore
if (stream) video.srcObject = stream;
else {
log('Camera Error: stream empty');
return null;
}
const track = stream.getVideoTracks()[0];
const settings = track.getSettings();
log(`Camera active: ${track.label} ${str(constraints)}`);
log(`Camera settings: ${str(settings)}`);
return new Promise((resolve) => {
video.onloadeddata = async () => {
// @ts-ignore
canvas.width = video.videoWidth;
// @ts-ignore
canvas.height = video.videoHeight;
// @ts-ignore
video.play();
detectVideo(video, canvas);
resolve(true);
};
});
}
async function setupFaceAPI() {
// load face-api models
log('Models loading');
await faceapi.nets.tinyFaceDetector.load(modelPath);
await faceapi.nets.ssdMobilenetv1.load(modelPath);
await faceapi.nets.ageGenderNet.load(modelPath);
await faceapi.nets.faceLandmark68Net.load(modelPath);
await faceapi.nets.faceRecognitionNet.load(modelPath);
await faceapi.nets.faceExpressionNet.load(modelPath);
optionsSSDMobileNet = new faceapi.SsdMobilenetv1Options({ minConfidence: minScore, maxResults });
// check tf engine state
const engine = await faceapi.tf.engine();
log(`Models loaded: ${str(engine.state)}`);
}
async function main() {
// initialize tfjs
log('FaceAPI WebCam Test');
// if you want to use wasm backend location for wasm binaries must be specified
// await faceapi.tf.setWasmPaths('../node_modules/@tensorflow/tfjs-backend-wasm/dist/');
// await faceapi.tf.setBackend('wasm');
// default is webgl backend
await faceapi.tf.setBackend('webgl');
await faceapi.tf.enableProdMode();
await faceapi.tf.ENV.set('DEBUG', false);
await faceapi.tf.ready();
// check version
log(`Version: TensorFlow/JS ${str(faceapi.tf?.version_core || '(not loaded)')} FaceAPI ${str(faceapi?.version || '(not loaded)')} Backend: ${str(faceapi.tf?.getBackend() || '(not loaded)')}`);
log(`Flags: ${str(faceapi.tf.ENV.flags)}`);
setupFaceAPI();
setupCamera();
}
// start processing as soon as page is loaded
window.onload = main;

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

model/face_landmark_68_tiny_model-weights_manifest.json

@ -1 +1,39 @@
[{"weights":[{"name":"dense0/conv0/filters","shape":[3,3,3,32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008194216092427571,"min":-0.9423348506291708}},{"name":"dense0/conv0/bias","shape":[32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006839508168837603,"min":-0.8412595047670252}},{"name":"dense0/conv1/depthwise_filter","shape":[3,3,32,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.009194007106855804,"min":-1.2779669878529567}},{"name":"dense0/conv1/pointwise_filter","shape":[1,1,32,32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0036026100317637128,"min":-0.3170296827952067}},{"name":"dense0/conv1/bias","shape":[32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.000740380117706224,"min":-0.06367269012273527}},{"name":"dense0/conv2/depthwise_filter","shape":[3,3,32,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":1,"min":0}},{"name":"dense0/conv2/pointwise_filter","shape":[1,1,32,32],"dtype":"float32","quantization":{"dtype":"uint8","scale":1,"min":0}},{"name":"dense0/conv2/bias","shape":[32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0037702228508743585,"min":-0.6220867703942692}},{"name":"dense1/conv0/depthwise_filter","shape":[3,3,32,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0033707996209462483,"min":-0.421349952618281}},{"name":"dense1/conv0/pointwise_filter","shape":[1,1,32,64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.014611541991140328,"min":-1.8556658328748217}},{"name":"dense1/conv0/bias","shape":[64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002832523046755323,"min":-0.30307996600281956}},{"name":"dense1/conv1/depthwise_filter","shape":[3,3,64,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006593170586754294,"min":-0.6329443763284123}},{"name":"dense1/conv1/pointwise_filter","shape":[1,1,64,64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.012215249211180444,"min":-1.6001976466646382}},{"name":"dense1/conv1/bias","shape":[64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002384825547536214,"min":-0.3028728445370992}},{"name":"dense1/conv2/depthwise_filter","shape":[3,3,64,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.005859645441466687,"min":-0.7617539073906693}},{"name":"dense1/conv2/pointwise_filter","shape":[1,1,64,64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.013121426806730382,"min":-1.7845140457153321}},{"name":"dense1/conv2/bias","shape":[64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0032247188044529336,"min":-0.46435950784122243}},{"name":"dense2/conv0/depthwise_filter","shape":[3,3,64,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002659512618008782,"min":-0.32977956463308894}},{"name":"dense2/conv0/pointwise_filter","shape":[1,1,64,128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.015499923743453681,"min":-1.9839902391620712}},{"name":"dense2/conv0/bias","shape":[128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0032450980999890497,"min":-0.522460794098237}},{"name":"dense2/conv1/depthwise_filter","shape":[3,3,128,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.005911862382701799,"min":-0.792189559282041}},{"name":"dense2/conv1/pointwise_filter","shape":[1,1,128,128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.021025861478319356,"min":-2.2077154552235325}},{"name":"dense2/conv1/bias","shape":[128],"dtype":"float32","quantization":{"dtyp
e":"uint8","scale":0.00349616945958605,"min":-0.46149436866535865}},{"name":"dense2/conv2/depthwise_filter","shape":[3,3,128,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008104994250278847,"min":-1.013124281284856}},{"name":"dense2/conv2/pointwise_filter","shape":[1,1,128,128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.029337059282789044,"min":-3.5791212325002633}},{"name":"dense2/conv2/bias","shape":[128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0038808938334969913,"min":-0.4230174278511721}},{"name":"fc/weights","shape":[128,136],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.014016061670639936,"min":-1.8921683255363912}},{"name":"fc/bias","shape":[136],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0029505149698724935,"min":0.088760145008564}}],"paths":["face_landmark_68_tiny_model-shard1"]}] [
{
"weights":
[
{"name":"dense0/conv0/filters","shape":[3,3,3,32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008194216092427571,"min":-0.9423348506291708}},
{"name":"dense0/conv0/bias","shape":[32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006839508168837603,"min":-0.8412595047670252}},
{"name":"dense0/conv1/depthwise_filter","shape":[3,3,32,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.009194007106855804,"min":-1.2779669878529567}},
{"name":"dense0/conv1/pointwise_filter","shape":[1,1,32,32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0036026100317637128,"min":-0.3170296827952067}},
{"name":"dense0/conv1/bias","shape":[32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.000740380117706224,"min":-0.06367269012273527}},
{"name":"dense0/conv2/depthwise_filter","shape":[3,3,32,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":1,"min":0}},
{"name":"dense0/conv2/pointwise_filter","shape":[1,1,32,32],"dtype":"float32","quantization":{"dtype":"uint8","scale":1,"min":0}},
{"name":"dense0/conv2/bias","shape":[32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0037702228508743585,"min":-0.6220867703942692}},
{"name":"dense1/conv0/depthwise_filter","shape":[3,3,32,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0033707996209462483,"min":-0.421349952618281}},
{"name":"dense1/conv0/pointwise_filter","shape":[1,1,32,64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.014611541991140328,"min":-1.8556658328748217}},
{"name":"dense1/conv0/bias","shape":[64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002832523046755323,"min":-0.30307996600281956}},
{"name":"dense1/conv1/depthwise_filter","shape":[3,3,64,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006593170586754294,"min":-0.6329443763284123}},
{"name":"dense1/conv1/pointwise_filter","shape":[1,1,64,64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.012215249211180444,"min":-1.6001976466646382}},
{"name":"dense1/conv1/bias","shape":[64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002384825547536214,"min":-0.3028728445370992}},
{"name":"dense1/conv2/depthwise_filter","shape":[3,3,64,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.005859645441466687,"min":-0.7617539073906693}},
{"name":"dense1/conv2/pointwise_filter","shape":[1,1,64,64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.013121426806730382,"min":-1.7845140457153321}},
{"name":"dense1/conv2/bias","shape":[64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0032247188044529336,"min":-0.46435950784122243}},
{"name":"dense2/conv0/depthwise_filter","shape":[3,3,64,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002659512618008782,"min":-0.32977956463308894}},
{"name":"dense2/conv0/pointwise_filter","shape":[1,1,64,128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.015499923743453681,"min":-1.9839902391620712}},
{"name":"dense2/conv0/bias","shape":[128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0032450980999890497,"min":-0.522460794098237}},
{"name":"dense2/conv1/depthwise_filter","shape":[3,3,128,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.005911862382701799,"min":-0.792189559282041}},
{"name":"dense2/conv1/pointwise_filter","shape":[1,1,128,128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.021025861478319356,"min":-2.2077154552235325}},
{"name":"dense2/conv1/bias","shape":[128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.00349616945958605,"min":-0.46149436866535865}},
{"name":"dense2/conv2/depthwise_filter","shape":[3,3,128,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008104994250278847,"min":-1.013124281284856}},
{"name":"dense2/conv2/pointwise_filter","shape":[1,1,128,128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.029337059282789044,"min":-3.5791212325002633}},
{"name":"dense2/conv2/bias","shape":[128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0038808938334969913,"min":-0.4230174278511721}},
{"name":"fc/weights","shape":[128,136],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.014016061670639936,"min":-1.8921683255363912}},
{"name":"fc/bias","shape":[136],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0029505149698724935,"min":0.088760145008564}}
],
"paths":
[
"face_landmark_68_tiny_model.bin"
]
}
]
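
Each entry in the manifest above describes a uint8-quantized float32 tensor: the loader maps every stored byte q back to q * scale + min using that tensor's quantization fields. A minimal TypeScript sketch of the decoding step (editor's illustration; dequantize is a hypothetical helper, not a face-api export):

// Reverse the uint8 quantization declared per tensor in the manifest above.
function dequantize(bytes: Uint8Array, scale: number, min: number): Float32Array {
  const out = new Float32Array(bytes.length);
  for (let i = 0; i < bytes.length; i++) out[i] = bytes[i] * scale + min; // q -> float32
  return out;
}
// e.g. for the "fc/bias" entry: dequantize(shardBytes, 0.0029505149698724935, 0.088760145008564)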

File diff suppressed because one or more lines are too long (4 files)


@@ -1 +1,30 @@
[{"weights":[{"name":"conv0/filters","shape":[3,3,3,16],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.009007044399485869,"min":-1.2069439495311063}},{"name":"conv0/bias","shape":[16],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.005263455241334205,"min":-0.9211046672334858}},{"name":"conv1/depthwise_filter","shape":[3,3,16,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.004001977630690033,"min":-0.5042491814669441}},{"name":"conv1/pointwise_filter","shape":[1,1,16,32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.013836609615999109,"min":-1.411334180831909}},{"name":"conv1/bias","shape":[32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0015159862590771096,"min":-0.30926119685173037}},{"name":"conv2/depthwise_filter","shape":[3,3,32,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002666276225856706,"min":-0.317286870876948}},{"name":"conv2/pointwise_filter","shape":[1,1,32,64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.015265831292844286,"min":-1.6792414422128714}},{"name":"conv2/bias","shape":[64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0020280554598453,"min":-0.37113414915168985}},{"name":"conv3/depthwise_filter","shape":[3,3,64,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006100742489683862,"min":-0.8907084034938438}},{"name":"conv3/pointwise_filter","shape":[1,1,64,128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.016276211832083907,"min":-2.0508026908425725}},{"name":"conv3/bias","shape":[128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.003394414279975143,"min":-0.7637432129944072}},{"name":"conv4/depthwise_filter","shape":[3,3,128,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006716050119961009,"min":-0.8059260143953211}},{"name":"conv4/pointwise_filter","shape":[1,1,128,256],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.021875603993733724,"min":-2.8875797271728514}},{"name":"conv4/bias","shape":[256],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0041141652009066415,"min":-0.8187188749804216}},{"name":"conv5/depthwise_filter","shape":[3,3,256,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008423839597141042,"min":-0.9013508368940915}},{"name":"conv5/pointwise_filter","shape":[1,1,256,512],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.030007277283014035,"min":-3.8709387695088107}},{"name":"conv5/bias","shape":[512],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008402082966823203,"min":-1.4871686851277068}},{"name":"conv8/filters","shape":[1,1,512,25],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.028336129469030042,"min":-4.675461362389957}},{"name":"conv8/bias","shape":[25],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002268134028303857,"min":-0.41053225912299807}}],"paths":["tiny_face_detector_model-shard1"]}] [
{
"weights":
[
{"name":"conv0/filters","shape":[3,3,3,16],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.009007044399485869,"min":-1.2069439495311063}},
{"name":"conv0/bias","shape":[16],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.005263455241334205,"min":-0.9211046672334858}},
{"name":"conv1/depthwise_filter","shape":[3,3,16,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.004001977630690033,"min":-0.5042491814669441}},
{"name":"conv1/pointwise_filter","shape":[1,1,16,32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.013836609615999109,"min":-1.411334180831909}},
{"name":"conv1/bias","shape":[32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0015159862590771096,"min":-0.30926119685173037}},
{"name":"conv2/depthwise_filter","shape":[3,3,32,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002666276225856706,"min":-0.317286870876948}},
{"name":"conv2/pointwise_filter","shape":[1,1,32,64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.015265831292844286,"min":-1.6792414422128714}},
{"name":"conv2/bias","shape":[64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0020280554598453,"min":-0.37113414915168985}},
{"name":"conv3/depthwise_filter","shape":[3,3,64,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006100742489683862,"min":-0.8907084034938438}},
{"name":"conv3/pointwise_filter","shape":[1,1,64,128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.016276211832083907,"min":-2.0508026908425725}},
{"name":"conv3/bias","shape":[128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.003394414279975143,"min":-0.7637432129944072}},
{"name":"conv4/depthwise_filter","shape":[3,3,128,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006716050119961009,"min":-0.8059260143953211}},
{"name":"conv4/pointwise_filter","shape":[1,1,128,256],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.021875603993733724,"min":-2.8875797271728514}},
{"name":"conv4/bias","shape":[256],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0041141652009066415,"min":-0.8187188749804216}},
{"name":"conv5/depthwise_filter","shape":[3,3,256,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008423839597141042,"min":-0.9013508368940915}},
{"name":"conv5/pointwise_filter","shape":[1,1,256,512],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.030007277283014035,"min":-3.8709387695088107}},
{"name":"conv5/bias","shape":[512],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008402082966823203,"min":-1.4871686851277068}},
{"name":"conv8/filters","shape":[1,1,512,25],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.028336129469030042,"min":-4.675461362389957}},
{"name":"conv8/bias","shape":[25],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002268134028303857,"min":-0.41053225912299807}}
],
"paths":
[
"tiny_face_detector_model.bin"
]
}
]
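
These manifests are what the library parses when a model is loaded; the "paths" array names the binary weights shard stored next to the manifest, which this change renames from *-shard1 to a single .bin file. A short usage sketch (editor's illustration; the '/model' directory is an assumed location for the model files):

import * as faceapi from '@vladmandic/face-api';

async function loadModels(modelDir = '/model'): Promise<void> {
  await faceapi.nets.tinyFaceDetector.loadFromUri(modelDir); // fetches the manifest, then the shard it lists
  await faceapi.nets.faceLandmark68TinyNet.loadFromUri(modelDir);
}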

package-lock.json: 2922 lines (generated)

File diff suppressed because it is too large


@@ -1,63 +1,79 @@
 {
   "name": "@vladmandic/face-api",
-  "version": "0.12.1",
-  "description": "FaceAPI: AI-powered Face Detection, Face Embedding & Recognition Using Tensorflow/JS",
+  "version": "1.7.15",
+  "description": "FaceAPI: AI-powered Face Detection & Rotation Tracking, Face Description & Recognition, Age & Gender & Emotion Prediction for Browser and NodeJS using TensorFlow/JS",
+  "sideEffects": false,
   "main": "dist/face-api.node.js",
   "module": "dist/face-api.esm.js",
   "browser": "dist/face-api.esm.js",
-  "types": "types/index.d.ts",
+  "types": "types/face-api.d.ts",
+  "author": "Vladimir Mandic <mandic00@live.com>",
+  "bugs": {
+    "url": "https://github.com/vladmandic/face-api/issues"
+  },
+  "homepage": "https://vladmandic.github.io/face-api/demo/webcam.html",
+  "license": "MIT",
   "engines": {
-    "node": ">=12.0.0"
+    "node": ">=14.0.0"
   },
-  "scripts": {
-    "start": "node --trace-warnings example/node-singleprocess.js",
-    "dev": "npm install && node server/dev.js",
-    "build": "rimraf dist/* types/* && node server/build.js",
-    "lint": "eslint src/**/* example/*.js server/*.js"
-  },
-  "keywords": [
-    "tensorflow",
-    "tf",
-    "tfjs",
-    "face",
-    "face-api",
-    "face-detection",
-    "age-gender"
-  ],
   "repository": {
     "type": "git",
     "url": "git+https://github.com/vladmandic/face-api.git"
   },
-  "publishConfig": {
-    "registry": "https://registry.npmjs.org/"
-  },
-  "author": "Vladimir Mandic <mandic00@live.com>",
-  "license": "MIT",
-  "bugs": {
-    "url": "https://github.com/vladmandic/face-api/issues"
-  },
-  "homepage": "https://github.com/vladmandic/face-api#readme",
-  "dependencies": {
-    "@vladmandic/pilogger": "^0.2.13"
-  },
+  "scripts": {
+    "start": "node --no-warnings demo/node.js",
+    "build": "node build.js",
+    "dev": "build --profile development",
+    "lint": "eslint src/ demo/",
+    "test": "node --trace-warnings test/test-node.js",
+    "scan": "npx auditjs@latest ossi --dev --quiet"
+  },
+  "keywords": [
+    "face-api",
+    "faceapi",
+    "face-detection",
+    "age-gender",
+    "emotion-detection",
+    "face-recognition",
+    "face",
+    "face-description",
+    "tensorflow",
+    "tensorflowjs",
+    "tfjs"
+  ],
   "devDependencies": {
-    "@tensorflow/tfjs": "^3.0.0",
-    "@tensorflow/tfjs-backend-wasm": "^3.0.0",
-    "@tensorflow/tfjs-node": "^3.0.0",
-    "@tensorflow/tfjs-node-gpu": "^3.0.0",
-    "@types/node": "^14.14.22",
-    "@typescript-eslint/eslint-plugin": "^4.14.1",
-    "@typescript-eslint/parser": "^4.14.1",
-    "chokidar": "^3.5.1",
-    "esbuild": "^0.8.36",
-    "eslint": "^7.18.0",
-    "eslint-config-airbnb-base": "^14.2.1",
-    "eslint-plugin-import": "^2.22.1",
-    "eslint-plugin-json": "^2.1.2",
+    "@canvas/image": "^2.0.0",
+    "@microsoft/api-extractor": "^7.49.2",
+    "@tensorflow/tfjs": "^4.22.0",
+    "@tensorflow/tfjs-backend-cpu": "^4.22.0",
+    "@tensorflow/tfjs-backend-wasm": "^4.22.0",
+    "@tensorflow/tfjs-backend-webgl": "^4.22.0",
+    "@tensorflow/tfjs-backend-webgpu": "4.22.0",
+    "@tensorflow/tfjs-converter": "^4.22.0",
+    "@tensorflow/tfjs-core": "^4.22.0",
+    "@tensorflow/tfjs-data": "^4.22.0",
+    "@tensorflow/tfjs-layers": "^4.22.0",
+    "@tensorflow/tfjs-node": "^4.22.0",
+    "@tensorflow/tfjs-node-gpu": "^4.22.0",
+    "@types/node": "^22.13.1",
+    "@types/offscreencanvas": "^2019.7.3",
+    "@typescript-eslint/eslint-plugin": "^8.5.0",
+    "@typescript-eslint/parser": "^8.5.0",
+    "@vladmandic/build": "^0.10.2",
+    "@vladmandic/pilogger": "^0.5.1",
+    "ajv": "^8.17.1",
+    "esbuild": "^0.24.2",
+    "eslint": "8.57.0",
+    "eslint-config-airbnb-base": "^15.0.0",
+    "eslint-plugin-import": "^2.30.0",
+    "eslint-plugin-json": "^4.0.1",
     "eslint-plugin-node": "^11.1.0",
-    "eslint-plugin-promise": "^4.2.1",
-    "rimraf": "^3.0.2",
-    "tslib": "^2.1.0",
-    "typescript": "^4.1.3"
+    "eslint-plugin-promise": "^7.1.0",
+    "node-fetch": "^3.3.2",
+    "rimraf": "^6.0.1",
+    "seedrandom": "^3.0.5",
+    "tslib": "^2.8.1",
+    "typedoc": "^0.27.6",
+    "typescript": "5.7.3"
   }
 }
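
For orientation, the entry points declared above resolve as follows: Node's require() follows "main" to dist/face-api.node.js, bundlers follow "module"/"browser" to dist/face-api.esm.js, and TypeScript reads "types". A minimal consumer sketch (editor's illustration):

import * as faceapi from '@vladmandic/face-api'; // a bundler resolves this to dist/face-api.esm.js
console.log(Object.keys(faceapi.nets)); // lists the available model wrappers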


@@ -1,221 +0,0 @@
#!/usr/bin/env -S node --trace-warnings
/* eslint-disable no-restricted-syntax */
/* eslint-disable import/no-extraneous-dependencies */
/* eslint-disable node/no-unpublished-require */
/* eslint-disable node/shebang */
const fs = require('fs');
const esbuild = require('esbuild');
const ts = require('typescript');
const log = require('@vladmandic/pilogger');
// keeps esbuild service instance cached
let es;
const banner = `
/*
Face-API
homepage: <https://github.com/vladmandic/face-api>
  author: <https://github.com/vladmandic>
*/
`;
// tsc configuration
const tsconfig = {
noEmitOnError: false,
target: ts.ScriptTarget.ES2018,
module: ts.ModuleKind.ES2020,
// outFile: "dist/face-api.d.ts",
outDir: 'types/',
declaration: true,
emitDeclarationOnly: true,
emitDecoratorMetadata: true,
experimentalDecorators: true,
skipLibCheck: true,
strictNullChecks: true,
baseUrl: './',
paths: {
tslib: ['node_modules/tslib/tslib.d.ts'],
},
};
// common configuration
const common = {
banner,
minifyWhitespace: true,
minifyIdentifiers: true,
minifySyntax: true,
bundle: true,
sourcemap: true,
logLevel: 'error',
target: 'es2018',
// tsconfig: './tsconfig.json',
};
const targets = {
node: {
tfjs: {
platform: 'node',
format: 'cjs',
metafile: 'dist/tfjs.esm.json',
entryPoints: ['src/tfjs/tf-node.ts'],
outfile: 'dist/tfjs.esm.js',
external: ['@tensorflow'],
},
node: {
platform: 'node',
format: 'cjs',
metafile: 'dist/face-api.node.json',
entryPoints: ['src/index.ts'],
outfile: 'dist/face-api.node.js',
external: ['@tensorflow'],
},
},
nodeGPU: {
tfjs: {
platform: 'node',
format: 'cjs',
entryPoints: ['src/tfjs/tf-node-gpu.ts'],
outfile: 'dist/tfjs.esm.js',
metafile: 'dist/tfjs.esm.json',
external: ['@tensorflow'],
},
node: {
platform: 'node',
format: 'cjs',
entryPoints: ['src/index.ts'],
outfile: 'dist/face-api.node-gpu.js',
metafile: 'dist/face-api.node-gpu.json',
external: ['@tensorflow'],
},
},
browserNoBundle: {
tfjs: {
platform: 'browser',
format: 'esm',
entryPoints: ['src/tfjs/tf-browser.ts'],
outfile: 'dist/tfjs.esm.js',
metafile: 'dist/tfjs.esm.json',
external: ['fs', 'buffer', 'util', '@tensorflow'],
},
esm: {
platform: 'browser',
format: 'esm',
entryPoints: ['src/index.ts'],
outfile: 'dist/face-api.esm-nobundle.js',
metafile: 'dist/face-api.esm-nobundle.json',
external: ['fs', 'buffer', 'util', '@tensorflow', 'tfjs.esm.js'],
},
},
browserBundle: {
tfjs: {
platform: 'browser',
format: 'esm',
entryPoints: ['src/tfjs/tf-browser.ts'],
outfile: 'dist/tfjs.esm.js',
metafile: 'dist/tfjs.esm.json',
external: ['fs', 'buffer', 'util'],
},
iife: {
platform: 'browser',
format: 'iife',
globalName: 'faceapi',
entryPoints: ['src/index.ts'],
outfile: 'dist/face-api.js',
metafile: 'dist/face-api.json',
external: ['fs', 'buffer', 'util'],
},
esm: {
platform: 'browser',
format: 'esm',
entryPoints: ['src/index.ts'],
outfile: 'dist/face-api.esm.js',
metafile: 'dist/face-api.esm.json',
external: ['fs', 'buffer', 'util'],
},
},
};
async function getStats(metafile) {
const stats = {};
if (!fs.existsSync(metafile)) return stats;
const data = fs.readFileSync(metafile);
const json = JSON.parse(data.toString());
if (json && json.inputs && json.outputs) {
for (const [key, val] of Object.entries(json.inputs)) {
if (key.startsWith('node_modules')) {
stats.modules = (stats.modules || 0) + 1;
stats.moduleBytes = (stats.moduleBytes || 0) + val.bytes;
} else {
stats.imports = (stats.imports || 0) + 1;
stats.importBytes = (stats.importBytes || 0) + val.bytes;
}
}
const files = [];
for (const [key, val] of Object.entries(json.outputs)) {
if (!key.endsWith('.map')) {
files.push(key);
stats.outputBytes = (stats.outputBytes || 0) + val.bytes;
}
}
stats.outputFiles = files.join(', ');
}
return stats;
}
function compile(fileNames, options) {
log.info('Compile:', fileNames);
const program = ts.createProgram(fileNames, options);
const emit = program.emit();
const diag = ts
.getPreEmitDiagnostics(program)
.concat(emit.diagnostics);
for (const info of diag) {
// @ts-ignore
const msg = info.messageText.messageText || info.messageText;
if (msg.includes('package.json')) continue;
if (msg.includes('Expected 0 arguments, but got 1')) continue;
if (info.file) {
const pos = info.file.getLineAndCharacterOfPosition(info.start || 0);
log.error(`TSC: ${info.file.fileName} [${pos.line + 1},${pos.character + 1}]:`, msg);
} else {
log.error('TSC:', msg);
}
}
}
// rebuild on file change
async function build(f, msg) {
log.info('Build: file', msg, f, 'target:', common.target);
if (!es) es = await esbuild.startService();
// common build options
try {
// rebuild all target groups and types
for (const [targetGroupName, targetGroup] of Object.entries(targets)) {
for (const [targetName, targetOptions] of Object.entries(targetGroup)) {
// if triggered from watch mode, rebuild only browser bundle
if ((require.main !== module) && (targetGroupName !== 'browserBundle')) continue;
await es.build({ ...common, ...targetOptions });
const stats = await getStats(targetOptions.metafile);
log.state(`Build for: ${targetGroupName} type: ${targetName}:`, stats);
}
}
} catch (err) {
// catch errors and print where they occurred
log.error('Build error', JSON.stringify(err.errors || err, null, 2));
if (require.main === module) process.exit(1);
}
// generate typings
compile(targets.browserBundle.esm.entryPoints, tsconfig);
if (require.main === module) process.exit(0);
}
if (require.main === module) {
log.header();
build('all', 'startup');
} else {
exports.build = build;
}
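
A note for readers of this removed script: esbuild.startService(), used above to cache a build service instance, was removed in esbuild 0.9; current esbuild exposes the same functionality through a direct build() call, and metafile: true returns the metadata object inline instead of writing it to a JSON path. A hedged TypeScript sketch of one target build under the current API (buildTarget is an editor's illustration, not repo code):

import * as esbuild from 'esbuild';

async function buildTarget(common: esbuild.BuildOptions, target: esbuild.BuildOptions) {
  const result = await esbuild.build({ ...common, ...target, metafile: true });
  return result.metafile; // replaces re-reading dist/*.json from disk as getStats() did above
}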


@@ -1,147 +0,0 @@
/*
micro http2 server with file monitoring and automatic app rebuild
- can process concurrent http requests
  - monitors specified files and folders for changes
- triggers library and application rebuild
- any build errors are immediately displayed and can be corrected without need for restart
- passthrough data compression
*/
const process = require('process');
const fs = require('fs');
const zlib = require('zlib');
const http = require('http');
const http2 = require('http2');
const path = require('path');
// eslint-disable-next-line node/no-unpublished-require, import/no-extraneous-dependencies
const chokidar = require('chokidar');
const log = require('@vladmandic/pilogger');
const build = require('./build.js');
// app configuration
// you can provide your server key and certificate or use provided self-signed ones
// self-signed certificate generated using:
// openssl req -x509 -newkey rsa:4096 -nodes -keyout https.key -out https.crt -days 365 -subj "/C=US/ST=Florida/L=Miami/O=@vladmandic"
// client app does not work without secure server since browsers enforce https for webcam access
const options = {
key: fs.readFileSync('server/https.key'),
cert: fs.readFileSync('server/https.crt'),
root: '..',
default: 'example/index.html',
httpPort: 8000,
httpsPort: 8001,
monitor: ['package.json', 'example', 'src'],
};
// just some predefined mime types
const mime = {
'.html': 'text/html',
'.js': 'text/javascript',
'.css': 'text/css',
'.json': 'application/json',
'.png': 'image/png',
'.jpg': 'image/jpg',
'.gif': 'image/gif',
'.ico': 'image/x-icon',
'.svg': 'image/svg+xml',
'.wav': 'audio/wav',
'.mp4': 'video/mp4',
'.woff': 'application/font-woff',
'.ttf': 'application/font-ttf',
'.wasm': 'application/wasm',
};
// watch filesystem for any changes and notify build when needed
async function watch() {
const watcher = chokidar.watch(options.monitor, {
persistent: true,
ignorePermissionErrors: false,
alwaysStat: false,
ignoreInitial: true,
followSymlinks: true,
usePolling: false,
useFsEvents: false,
atomic: true,
});
// single event handler for file add/change/delete
watcher
.on('add', (evt) => build.build(evt, 'add'))
.on('change', (evt) => build.build(evt, 'modify'))
.on('unlink', (evt) => build.build(evt, 'remove'))
.on('error', (err) => log.error(`Client watcher error: ${err}`))
.on('ready', () => log.state('Monitoring:', options.monitor));
}
// get file content for a valid url request
function handle(url) {
return new Promise((resolve) => {
let obj = { ok: false };
obj.file = url;
if (!fs.existsSync(obj.file)) resolve(null);
obj.stat = fs.statSync(obj.file);
if (obj.stat.isFile()) obj.ok = true;
if (!obj.ok && obj.stat.isDirectory()) {
obj.file = path.join(obj.file, options.default);
// @ts-ignore
obj = handle(obj.file);
}
resolve(obj);
});
}
// process http requests
async function httpRequest(req, res) {
handle(path.join(__dirname, options.root, decodeURI(req.url)))
.then((result) => {
// get original ip of requestor, regardless if it's behind proxy or not
// eslint-disable-next-line dot-notation
const forwarded = (req.headers['forwarded'] || '').match(/for="\[(.*)\]:/);
const ip = (Array.isArray(forwarded) ? forwarded[1] : null) || req.headers['x-forwarded-for'] || req.ip || req.socket.remoteAddress;
if (!result || !result.ok) {
res.writeHead(404, { 'Content-Type': 'text/html' });
res.end('Error 404: Not Found\n', 'utf-8');
log.warn(`${req.method}/${req.httpVersion}`, res.statusCode, req.url, ip);
} else {
const ext = String(path.extname(result.file)).toLowerCase();
const contentType = mime[ext] || 'application/octet-stream';
const accept = req.headers['accept-encoding'] ? req.headers['accept-encoding'].includes('br') : false; // does target accept brotli compressed data
res.writeHead(200, {
// 'Content-Length': result.stat.size, // not using as it's misleading for compressed streams
'Content-Language': 'en', 'Content-Type': contentType, 'Content-Encoding': accept ? 'br' : '', 'Last-Modified': result.stat.mtime, 'Cache-Control': 'no-cache', 'X-Powered-By': `NodeJS/${process.version}`,
});
const compress = zlib.createBrotliCompress({ params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 5 } }); // instance of brotli compression with level 5
const stream = fs.createReadStream(result.file);
if (!accept) stream.pipe(res); // don't compress data
else stream.pipe(compress).pipe(res); // compress data
// alternative methods of sending data
/// 2. read stream and send by chunk
// const stream = fs.createReadStream(result.file);
// stream.on('data', (chunk) => res.write(chunk));
// stream.on('end', () => res.end());
// 3. read entire file and send it as blob
// const data = fs.readFileSync(result.file);
// res.write(data);
log.data(`${req.method}/${req.httpVersion}`, res.statusCode, contentType, result.stat.size, req.url, ip);
}
return null;
})
.catch((err) => log.error('handle error:', err));
}
// app main entry point
async function main() {
log.header();
await watch();
// @ts-ignore
const server1 = http.createServer(options, httpRequest);
server1.on('listening', () => log.state('HTTP server listening:', options.httpPort));
server1.listen(options.httpPort);
const server2 = http2.createSecureServer(options, httpRequest);
server2.on('listening', () => log.state('HTTP2 server listening:', options.httpsPort));
server2.listen(options.httpsPort);
await build.build('all', 'startup');
}
main();
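
One quirk in the removed handle() above: resolve(null) for a missing file is not followed by a return, so fs.statSync still runs and throws, and the promise settles as a rejection that the caller only logs in its .catch. A corrected sketch of the same lookup logic (editor's illustration, not repo code):

import * as fs from 'fs';
import * as path from 'path';

async function handle(url: string, defaultFile = 'example/index.html'): Promise<{ ok: boolean, file: string, stat: fs.Stats } | null> {
  if (!fs.existsSync(url)) return null; // early return avoids statSync on a missing path
  const stat = fs.statSync(url);
  if (stat.isFile()) return { ok: true, file: url, stat };
  if (stat.isDirectory()) return handle(path.join(url, defaultFile), defaultFile); // descend to the default document
  return { ok: false, file: url, stat };
}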


@@ -1,31 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIFazCCA1OgAwIBAgIUKQKodDBJnuweJs5IcTyL4NIp3vgwDQYJKoZIhvcNAQEL
BQAwRTELMAkGA1UEBhMCVVMxEDAOBgNVBAgMB0Zsb3JpZGExDjAMBgNVBAcMBU1p
YW1pMRQwEgYDVQQKDAtAdmxhZG1hbmRpYzAeFw0yMDExMDcxNTE3NDNaFw0yMTEx
MDcxNTE3NDNaMEUxCzAJBgNVBAYTAlVTMRAwDgYDVQQIDAdGbG9yaWRhMQ4wDAYD
VQQHDAVNaWFtaTEUMBIGA1UECgwLQHZsYWRtYW5kaWMwggIiMA0GCSqGSIb3DQEB
AQUAA4ICDwAwggIKAoICAQDSC88PF8NyLkagK5mAZ/d739SOU16l2Cx3zE35zZQh
O29+1L4L+oMksLYipo+FMgtGO+MSzFsvGgKCs2sDSdfyoNSTZ3QaN4BAZ0sbq+wL
cke7yRBTM/XIGOQfhqq8yC2q8/zXwUbZg0UsCAxDGNwUr0Qlm829laIU/UN1KcYS
57Nebl1z05wMEvYmyl4JBAl9ozne7KS9DyW7jbrAXE8TaEy3+pY66kx5GG6v2+up
ScITGm4YPmPPlpOF1UjQloosgxdVa+fVp8aNCa/rf0JNO0Uhb3OKOZ+4kYmpfPn/
trwoKWAa6CV1uAJ+3zDkLMq1JNlrV4OMp1QvX0wzA47a/n466JMN9SFb0Ng5wf19
VOtT5Zu7chDStBudVjxlMDfUixvhvn4sjbaLNYR1fyWPoNXwr0KX2lpTP1QOzp9/
Sd0iiJ8RPfXn8Xo26MStu4I52CZjS7yEMgJGCLH/mgPuSbrHHYYrrrCPJgmQOZG2
TNMI+EqOwQvHh2ghdv7t7EEk4IslBk0QzufMXQ2WFXQ20nvj74mrmmiMuBcmonpR
0egA5/M18ZPLQxYu0Q86NUr4XHtAG1fq+n8pseQ7Avy6Gk6HRiezCbB7TJ9rnNeu
jie1TDajC6W7rx0VF7hcxkIrDgNgnYcjXUV2hMx1lo4fIoWkL3nJJVEthMVIcJOX
EwIDAQABo1MwUTAdBgNVHQ4EFgQUHawIRAo1bW8Xy7l4oKfM+ESjhs0wHwYDVR0j
BBgwFoAUHawIRAo1bW8Xy7l4oKfM+ESjhs0wDwYDVR0TAQH/BAUwAwEB/zANBgkq
hkiG9w0BAQsFAAOCAgEAozQJk5Ahx7rDn/aMXLdZFxR81VfkmHDm7NhlJsdVKUx5
o/iegXnvwc1PoeKsz2S504QiuL8l7jqZoU2WPIm7Vlr+oxBgiKqjo1EqBsUgNCZ7
qxMD84TVp/KBGjKUh1TXhjJwGGfNNr+R/fJGw+36UeuY3fSckjaYTuNuVElp+DoZ
/pGyu1qpcybLfiR8mpQkCeU/iBq5gIjWddbVjlYoTKfqULZrpsAF2AeqELEgyshl
p3PNhW/54TJSn4mWK+39BibYHPkvx8orEuWKyjjRk82hEXi7J3hsGKX29qC3oO40
67DKDWmZdMCz+E1ERf10V0bSp6iJnnlwknHJloZUETV1NY/DdoSC6e8CN0+0cQqL
aJefJ483O3sXyN3v3+DaEFBLPFgRFGZB7eaBwR2xAv/KfjT5dSyi+wA4LZAxsQMC
Q7UYGNAfHLNHJo/bsj12+JDhJaFZ/KoBKzyMUuEXmvjxXNDMCfm+gVQFoLyXkGq3
491W/O7LjR6pkD+ce0qeTFMu3nfUubyfbONVDEfuH4GC1e+FAggCRaBnFsVzCzXj
jxOOLoQ9nwLk8v17mx0BSwX4iuqvXFntfJbzfcnzQfx/qqPFheIbGnmKw1lrRML8
87ZbN6t01+v2YyYe6Mc7p80s1R3jc8aVX8ca2KcYwsJAkg/xz0q5RJwsE1is5UY=
-----END CERTIFICATE-----


@@ -1,52 +0,0 @@
-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQDSC88PF8NyLkag
K5mAZ/d739SOU16l2Cx3zE35zZQhO29+1L4L+oMksLYipo+FMgtGO+MSzFsvGgKC
s2sDSdfyoNSTZ3QaN4BAZ0sbq+wLcke7yRBTM/XIGOQfhqq8yC2q8/zXwUbZg0Us
CAxDGNwUr0Qlm829laIU/UN1KcYS57Nebl1z05wMEvYmyl4JBAl9ozne7KS9DyW7
jbrAXE8TaEy3+pY66kx5GG6v2+upScITGm4YPmPPlpOF1UjQloosgxdVa+fVp8aN
Ca/rf0JNO0Uhb3OKOZ+4kYmpfPn/trwoKWAa6CV1uAJ+3zDkLMq1JNlrV4OMp1Qv
X0wzA47a/n466JMN9SFb0Ng5wf19VOtT5Zu7chDStBudVjxlMDfUixvhvn4sjbaL
NYR1fyWPoNXwr0KX2lpTP1QOzp9/Sd0iiJ8RPfXn8Xo26MStu4I52CZjS7yEMgJG
CLH/mgPuSbrHHYYrrrCPJgmQOZG2TNMI+EqOwQvHh2ghdv7t7EEk4IslBk0QzufM
XQ2WFXQ20nvj74mrmmiMuBcmonpR0egA5/M18ZPLQxYu0Q86NUr4XHtAG1fq+n8p
seQ7Avy6Gk6HRiezCbB7TJ9rnNeujie1TDajC6W7rx0VF7hcxkIrDgNgnYcjXUV2
hMx1lo4fIoWkL3nJJVEthMVIcJOXEwIDAQABAoICAF45S+ZSW6uh1K7PQCnY+a0J
CJncDk5JPhFzhds0fGm39tknaCWJeEECQIIkw6cVfvc/sCpjn9fuTAgDolK0UnoV
6aZCN1P3Z8H8VDYSlm3AEyvLE1avrWbYu6TkzTyoc8wHbXn/yt+SQnpxFccXpMpm
oSRZ0x5jvHS79AHf/mnGpLEMw0FNQOgtrVxTVYGn3PYOPcyhzXi+Dcgn2QmnnxVu
qVOyxqehKTL9YdHjzsB/RN868P5RJocd3gmgVuyzS0KSf+oi4Ln4bFoiaVc0HDL3
DpjkHSl5lgu+xclRNfifKaK+hM0tLHi1VfFB//WrnjdKU3oSpQF4oowprM4Jn5AP
jhRI54JWZlWnvbiAOx7D49xFga3EnqjVH6So2gxi+q3Dv25luXGAnueaBPDpVC6c
nkJm2aCl7T3xlVpW8O5Fs+rsP8Xr9RTyEQJauM01uOi3N2zEeO8ERxTYEW5Sy2U7
OFKRXtLj7Jnejib/SxWGcIX4Wid5QFAygbXz4APfFN22QU0fqmhm4/c2OB/xM8qr
VVFx4xlG2wnuq5CZdZjmK3MTbmSM+pWW8mly/+++p694cf5oXGenYus/JWFNwxj/
fPyA7zQmaTOidu6clDHzkPCOE7TBv9TkQ7lL6ClgE7B39JR65ZQtjCYqRsADKsGI
dFMg+HDmGbVEfWg2V0GBAoIBAQDupImrJ0JXHA/0SEC2Tbz7pE60fRwmBFdhvk4Z
rzZiaOl+M2HXQU6b5DYhKcgdiFah5IuAnsRPo6X5Ug+Q1DV3OFTuEGAkXgqZliNa
aXsJcc0++DYlXX3BrTb66gylVLQRs5tZzsXps5iXWclziDC2go8RKnCwxsxwbzVq
FP4hoBP4dp83WoLF4NznnGFGw3/KLlMivtRxDE5OegpxTuWGlA/bVtT187Ksuuz3
dFUayLfpg0ABS/E7wwAJjSUpPPEi3J/G255H3lZXgS1gWcAf3rGDQYlJKF8UHdja
yWQcAOF+b/bYEpa4lHw+UtKNNkPTiCV4Y7CNQd8a2Gcl7VFTAoIBAQDhUs9r1dhm
rUlNAunVZZZVZ91XhXeqVTa/9xUDEvDh91nB5c7CcuNXxwcX4oTsMF4Bc7CHlvOv
pybp+QLjK310VjxxkFYJT0TKWuYqLjtNkQ93sp8wF3gVCf8m8bMOX/gPfQzNZWKp
un+ZWnzXNU5d2A+63xbZmFzT0Zo6H/h9YEO5Xxw32HCKFzEhl5JD34muZTEXSpdD
p7LUUr5LvnoUqEzonhXx2qRnTLP87d1o0GlkVex9HeeeBgrvm57QYoJnABxw9UFM
/ocLeYsjkmqJQRBDWgiwQlos1pdZyX2Yj20b7Wm5Pxd4aM9gh5EZZMXeQHhbHlWz
UY1IPxfAkytBAoIBAHmYavFDisD58oMlAZwiViXeXaAHk30nfyK1pfPeXBaeoEKG
idb1VsmF6bLSKD4sBwBshExgGWT+3IYCMx43kpqRoGzA+UvugvYpExBxaJiyXMM2
E9jMH1S9HqOQ+CqR00KlwoVrH1rqANk1jbkJbtDAC4fSmSLp2Kd9crj/w1F80FAs
mQnKW5HZ9pUpEEPPP2DUY9XzaCnF/GxuML31VmxRKxc20kIUDzmF8VJQ+0Avf85C
6yz99gfeXzl+qq2teKyrv9nCc47pEhN6JZXPhV53yPk5PmuBX5jPcHxiW1kNddhH
0n3cUuHv/rJ+3vvG555z46vJF9+R7c0u8LfZiTMCggEBAMQd4a/IN0xXM1+2U3SL
sSew+XR+FMPK25aGJmHAkKz9L8CWlzmj6cCy2LevT2aMSqYU3eeGOZ//at1nAV5c
shsaHA30RQ5hUkyWhZLdHnzK752NeQTQyJH3W3+4C9NNMIm6m/QCdLeqPflqSxK9
sPH5ZueN2UOXW+R5oTVKMmxd51RnNhZdasamnPrSBFrTK/EA3pOZNsOKKRqo0jz3
Eyb7vcUSI6OYXFQU7OwO1RGvpKvSJb5Y0wo11DrtRnO16i5gaGDg9u9e8ofISJSz
kcrZOKCGst1HQ1mXhbB+sbSh0aPnJog4I+OHxkgMdvyVO6vQjXExnAIxzzi8wZ25
+oECggEBAIT6q/sn8xFt5Jwc/0Z7YUjd415Nknam09tnbB+UPRR6lt6JFoILx8by
5Y1sN30HWDv27v9G32oZhUDii3Rt3PkbYLqlHy7XBMEXA9WIUo+3Be7mtdL8Wfrj
0zn0b7Hks9a9KsElG1dXUopwjMRL3M22UamaN7e/gl5jz2I7pyc5oaqz9GRDV5yG
slb6gGZ5naMycJD3p8vutXbmgKRr9beRp55UICAbEMdr5p3ks8bfR33Z6t+a97u1
IxI5x5Lb0fdfvL8JK3nRWn7Uzbmm5Ni/OaODNKP+fIm9m2yDAs8LM8RGpPtk6i0d
qIRta3H9KNw2Mhpkm77TtUSV/W5aOmY=
-----END PRIVATE KEY-----


@@ -10,9 +10,9 @@ export abstract class NeuralNetwork<TNetParams> {
     this._name = name;
   }
-  protected _params: TNetParams | undefined = undefined
-  protected _paramMappings: ParamMapping[] = []
+  protected _params: TNetParams | undefined = undefined;
+  protected _paramMappings: ParamMapping[] = [];
   public _name: any;
@@ -62,7 +62,7 @@
     });
   }
-  public dispose(throwOnRedispose: boolean = true) {
+  public dispose(throwOnRedispose = true) {
     this.getParamList().forEach((param) => {
       if (throwOnRedispose && param.tensor.isDisposed) {
         throw new Error(`param tensor has already been disposed for path ${param.path}`);
@@ -102,8 +102,9 @@
   }
     const { readFile } = env.getEnv();
     const { manifestUri, modelBaseUri } = getModelUris(filePath, this.getDefaultModelName());
-    const fetchWeightsFromDisk = (filePaths: string[]) => Promise.all(filePaths.map((fp) => readFile(fp).then((buf) => buf.buffer)));
-    const loadWeights = tf.io.weightsLoaderFactory(fetchWeightsFromDisk);
+    const fetchWeightsFromDisk = (filePaths: string[]) => Promise.all(filePaths.map((fp) => readFile(fp).then((buf) => (typeof buf === 'string' ? Buffer.from(buf) : buf.buffer))));
+    // @ts-ignore async-vs-sync mismatch
+    const loadWeights = tf['io'].weightsLoaderFactory(fetchWeightsFromDisk);
     const manifest = JSON.parse((await readFile(manifestUri)).toString());
     const weightMap = await loadWeights(manifest, modelBaseUri);
     this.loadFromWeightMap(weightMap);
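
The substantive change in this last hunk is the string-vs-Buffer guard: the environment-injected readFile may return a string rather than a Buffer, so each result is normalized before tf.io.weightsLoaderFactory consumes it. The guard in isolation (editor's illustration mirroring the changed line):

const toWeightData = (buf: string | Buffer) => (typeof buf === 'string' ? Buffer.from(buf) : buf.buffer); // string -> Buffer, Buffer -> its underlying ArrayBuffer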


@@ -1,6 +1,10 @@
 export class PlatformBrowser {
   private textEncoder: TextEncoder;
+  constructor() {
+    this.textEncoder = new TextEncoder();
+  }
   fetch(path: string, init?: any): Promise<Response> {
     return fetch(path, init);
   }
@@ -13,9 +17,6 @@ export class PlatformBrowser {
     if (encoding !== 'utf-8' && encoding !== 'utf8') {
       throw new Error(`Browser's encoder only supports utf-8, but got ${encoding}`);
     }
-    if (this.textEncoder == null) {
-      this.textEncoder = new TextEncoder();
-    }
     return this.textEncoder.encode(text);
   }
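
Taken together, the two hunks move TextEncoder creation from a lazy null-check into the constructor, which satisfies strict property initialization under the stricter compiler settings. The resulting class, consolidated (editor's reconstruction from the hunks above):

export class PlatformBrowser {
  private textEncoder: TextEncoder;
  constructor() {
    this.textEncoder = new TextEncoder(); // eager init replaces the lazy null-check
  }
  fetch(path: string, init?: any): Promise<Response> {
    return fetch(path, init);
  }
  encode(text: string, encoding: string): Uint8Array {
    if (encoding !== 'utf-8' && encoding !== 'utf8') {
      throw new Error(`Browser's encoder only supports utf-8, but got ${encoding}`);
    }
    return this.textEncoder.encode(text);
  }
}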

Some files were not shown because too many files have changed in this diff