Compare commits

1264 Commits
v2.7.1 ... main

Author SHA1 Message Date
Vladimir Mandic a6fd9a41c1 update readme 2025-02-05 10:11:17 -05:00
Vladimir Mandic 7e7c6d2ea2 update compatibility notes 2025-02-05 09:50:45 -05:00
Vladimir Mandic 5208b9ec2d full rebuild 2025-02-05 09:41:58 -05:00
Vladimir Mandic f515b9c20d 3.3.5 2025-02-05 09:29:56 -05:00
Vladimir Mandic 5a51889edb update build platform 2025-02-05 09:29:47 -05:00
Vladimir Mandic 745fd626a3 rebuild 2024-10-24 11:11:55 -04:00
Vladimir Mandic c1dc719a67 add human.draw.tensor method 2024-10-24 11:09:45 -04:00
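
A minimal sketch of how the draw namespace is used; human.draw.tensor is only named in the commit message above, so the comment describing it is an assumption.

```ts
import { Human } from '@vladmandic/human';

const human = new Human();

async function render(input: HTMLVideoElement, canvas: HTMLCanvasElement) {
  const result = await human.detect(input);
  await human.draw.all(canvas, result); // established API: overlay all results on a canvas
  // human.draw.tensor is assumed to follow the same pattern,
  // rendering a raw input tensor onto a target canvas
}
```
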
Vladimir Mandic 2b0a2fecc2 3.3.4 2024-10-24 11:09:27 -04:00
Vladimir Mandic 38922fe92d update packages 2024-10-14 09:06:22 -04:00
Vladimir Mandic c80540a934 3.3.3 2024-10-14 09:05:49 -04:00
Vladimir Mandic 49b25830b4 add loaded property to model stats and mark models that did not load correctly 2024-10-14 09:04:10 -04:00
Vladimir Mandic df73c8247f update changelog 2024-09-11 12:16:59 -04:00
Vladimir Mandic dd186ab065 release build 2024-09-11 12:16:36 -04:00
Vladimir Mandic a2acfc433e 3.3.2 2024-09-11 12:14:26 -04:00
Vladimir Mandic 644235433d full rebuild 2024-09-11 12:13:42 -04:00
Vladimir Mandic 42dfe18736 update face roll/pitch/yaw math 2024-09-11 12:13:03 -04:00
Vladimir Mandic c5b7b43fca 3.3.1 2024-09-11 11:23:18 -04:00
Vladimir Mandic 715210db51 add config.face.detector.square option 2024-09-11 11:16:07 -04:00
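
A hedged config sketch for the option added above; the assumption is that square forces a square crop of the detector input, so treat the comment as illustrative rather than documented behavior.

```ts
import { Human, Config } from '@vladmandic/human';

const humanConfig: Partial<Config> = {
  face: {
    enabled: true,
    detector: {
      square: true, // assumed: use a square crop for detector input instead of a rectangular one
    },
  },
};
const human = new Human(humanConfig);
```
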
Vladimir Mandic 9e2c612c1f human 3.3 alpha test run 2024-09-10 15:49:23 -04:00
Vladimir Mandic 862de3e6c8 human 3.3 alpha with new build environment 2024-09-10 15:44:39 -04:00
Vladimir Mandic 1114014bfd update changelog 2024-04-17 11:37:23 -04:00
Vladimir Mandic 001a3d58ea release rebuild 2024-04-17 11:36:54 -04:00
Vladimir Mandic d7e66afe1f fix blazeface tensor scale and update build platform 2024-04-17 11:29:51 -04:00
Vladimir Mandic a2fedaba40 3.2.2 2024-04-17 10:31:25 -04:00
Vladimir Mandic 62396317f5 add public face detector and iris scale options and refresh dependencies 2024-02-15 12:52:31 -05:00
Vladimir Mandic 15a6de03de 3.2.1 2024-02-15 12:49:18 -05:00
Vladimir Mandic c55279ca82 update wiki 2023-12-06 15:01:26 -05:00
Vladimir Mandic 6902405342 update dependencies and run full refresh 2023-12-06 15:00:47 -05:00
Vladimir Mandic b0e6aa57de 3.2.0 2023-12-06 13:32:21 -05:00
Augustin Chan 83964b02b1 Set browser false when navigator object is empty 2023-12-06 10:22:02 -05:00
Augustin Chan 9d1239301c https://github.com/vladmandic/human/issues/402 2023-12-06 10:21:09 -05:00
Vladimir Mandic 709e5100d8 update notes 2023-09-18 12:53:12 -04:00
Vladimir Mandic 1ff7992563 update wiki 2023-09-18 12:49:23 -04:00
Vladimir Mandic 6280f69299 full rebuild 2023-09-18 12:49:04 -04:00
Vladimir Mandic c1bea7d585 3.1.2 2023-09-18 12:44:40 -04:00
Vladimir Mandic 957644e216 major toolkit upgrade 2023-09-18 12:44:36 -04:00
Vladimir Mandic 0e247768ff update wiki 2023-08-07 14:28:28 +02:00
Vladimir Mandic 7b093c44d5 full rebuild 2023-08-05 15:04:11 +02:00
Vladimir Mandic f0b7285d67 major toolkit upgrade 2023-08-05 15:03:11 +02:00
Vladimir Mandic 3e30aa6e42 3.1.1 2023-08-05 14:51:13 +02:00
Vladimir Mandic ad54b34b07 fixes plus tfjs upgrade for new release 2023-06-12 13:30:25 -04:00
Vladimir Mandic d1bcd25b3d 3.0.7 2023-06-12 13:26:59 -04:00
Vladimir Mandic 9a19d051a3 full rebuild 2023-05-08 09:16:52 -04:00
Vladimir Mandic d1a3b3944e update dependencies 2023-05-08 09:13:42 -04:00
Vladimir Mandic 9dd8663e9e update dependencies 2023-05-08 09:13:16 -04:00
Kozyrev Vladislav acf6bead21 fix memory leak in histogramEqualization (bug introduced in cc4650c after the rgb variable was renamed) 2023-05-08 08:55:45 -04:00
Vladimir Mandic 73544e6c1b update wiki 2023-04-03 10:41:48 -04:00
Vladimir Mandic b72d592647 initial work on tracker 2023-04-03 10:36:01 -04:00
Vladimir Mandic e72a7808fb 3.0.6 2023-03-21 08:02:58 -04:00
Vladimir Mandic e30d072ebf add optional crop to multiple models 2023-03-06 18:15:42 -05:00
Vladimir Mandic adbab08203 fix movenet-multipose 2023-02-28 15:03:46 -05:00
Vladimir Mandic 073c6c519d update todo 2023-02-25 09:42:07 -05:00
Vladimir Mandic 059ebe5e36 add electron detection 2023-02-25 09:40:12 -05:00
Vladimir Mandic da3cf359fd fix gender-ssrnet-imdb 2023-02-22 06:45:34 -05:00
Vladimir Mandic c8571ad8e2 add movenet-multipose workaround 2023-02-13 10:25:43 -05:00
Vladimir Mandic cca0102bbc rebuild and publish 2023-02-13 06:53:43 -05:00
Vladimir Mandic 97b6cb152c update build platform 2023-02-10 13:41:37 -05:00
Vladimir Mandic 1bf65413fe update blazeface 2023-02-06 14:30:08 -05:00
Vladimir Mandic 770f433e1a add face.detector.minSize configurable setting 2023-02-03 10:04:53 -05:00
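
A config sketch for the minSize setting added above, assuming it is the minimum face box size in pixels below which candidates are discarded; the threshold value is illustrative.

```ts
import { Human, Config } from '@vladmandic/human';

const humanConfig: Partial<Config> = {
  face: {
    detector: {
      minSize: 64,        // assumed: discard face candidates smaller than 64px
      minConfidence: 0.2, // pre-existing detector confidence threshold
    },
  },
};
const human = new Human(humanConfig);
```
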
Vladimir Mandic fa908be5bb add affectnet 2023-02-02 10:29:02 -05:00
Vladimir Mandic 3aaea20eb4 3.0.5 2023-02-02 08:57:44 -05:00
Vladimir Mandic eb53988f90 add gear-e models 2023-02-01 09:19:15 -05:00
Vladimir Mandic 6fb4d04df3 detect react-native 2023-01-31 08:54:50 -05:00
Vladimir Mandic 870433ece2 redo blazeface annotations 2023-01-29 12:13:55 -05:00
Vladimir Mandic e75bd0e26b 3.0.4 2023-01-29 10:24:45 -05:00
Vladimir Mandic bd994ffc77 update dependencies 2023-01-21 09:14:09 -05:00
Vladimir Mandic 22062e5b7c make navigator calls safe 2023-01-12 15:40:37 -05:00
Vladimir Mandic 3191666d8d update 2023-01-07 15:51:27 -05:00
Vladimir Mandic f82cdcc7f1 fix facedetector-only configs 2023-01-07 15:50:37 -05:00
Vladimir Mandic 41e5541b5a 3.0.3 2023-01-07 15:48:15 -05:00
Vladimir Mandic 35419b581e full rebuild 2023-01-06 13:36:15 -05:00
Vladimir Mandic ddfc3c7e1b update tfjs 2023-01-06 13:23:06 -05:00
Vladimir Mandic 37f8175218 3.0.2 2023-01-06 13:06:17 -05:00
Vladimir Mandic 42217152f9 full rebuild 2023-01-03 14:24:47 -05:00
Vladimir Mandic 5de785558b update node-video 2022-12-29 19:37:38 -05:00
Vladimir Mandic ebc9c72567 update dependencies 2022-12-21 14:17:07 -05:00
Vladimir Mandic cb3646652e default face.rotation disabled 2022-11-28 10:21:14 -05:00
Vladimir Mandic 5156b18f4f update todo and changelog 2022-11-22 11:00:04 -05:00
Vladimir Mandic 69e9720799 release 2022-11-22 10:37:05 -05:00
Vladimir Mandic 481b55cd1a 3.0.1 2022-11-22 10:33:43 -05:00
Vladimir Mandic b47e6251c8 support dynamic loads 2022-11-22 10:33:31 -05:00
Vladimir Mandic daec8d4ba1 polish demos 2022-11-21 14:05:00 -05:00
Vladimir Mandic 55efcafc0f add facedetect demo and fix model async load 2022-11-21 13:07:23 -05:00
Vladimir Mandic 9f24aad194 update wiki 2022-11-20 16:20:27 -05:00
Vladimir Mandic d2593a5094 update all tests 2022-11-20 16:20:02 -05:00
Vladimir Mandic ae744d56c7 enforce markdown linting 2022-11-18 13:14:21 -05:00
Vladimir Mandic 3f774f195b update readme 2022-11-18 12:35:48 -05:00
Vladimir Mandic 06e16eea55 update markdowns 2022-11-18 12:20:14 -05:00
Vladimir Mandic cecff16701 cleanup git history 2022-11-18 11:13:29 -05:00
Vladimir Mandic f278424664 default empty result 2022-11-17 14:53:48 -05:00
Vladimir Mandic 8d9190a773 refactor draw and models namespaces 2022-11-17 14:39:02 -05:00
Vladimir Mandic 8fe34fd723 refactor distance 2022-11-17 10:18:26 -05:00
Vladimir Mandic 1713990f66 add basic anthropometry 2022-11-16 17:47:28 -05:00
Vladimir Mandic 4e418a803c added webcam id specification 2022-11-16 11:27:59 -05:00
Vladimir Mandic 009af80f1d include external typedefs 2022-11-12 12:54:58 -05:00
Vladimir Mandic 5e925b6236 update main demo 2022-11-12 09:33:04 -05:00
Vladimir Mandic 39735b03f6 prepare external typedefs 2022-11-11 16:19:27 -05:00
Vladimir Mandic 4c26e6cbbb rebuild all 2022-11-11 12:34:16 -05:00
Vladimir Mandic b0695ccedf update demos and tests 2022-11-11 12:33:40 -05:00
Vladimir Mandic 12ab4f0e35 include project files for types 2022-11-11 11:11:27 -05:00
Vladimir Mandic cc4650c151 architectural improvements 2022-11-10 20:16:40 -05:00
Vladimir Mandic 1b53b190b1 refresh dependencies 2022-11-04 13:20:56 -04:00
Vladimir Mandic 51dc129da4 update todo 2022-10-28 09:29:15 -04:00
Vladimir Mandic 4b6a25f748 add named exports 2022-10-28 09:26:33 -04:00
Vladimir Mandic a0563a3b91 update typedocs 2022-10-24 15:34:41 -04:00
Vladimir Mandic 0d7f2ba147 update wiki 2022-10-18 10:25:53 -04:00
Vladimir Mandic afb70c52e0 add draw label templates 2022-10-18 10:18:40 -04:00
Vladimir Mandic 510e89d9f2 reduce dev dependencies 2022-10-17 10:47:20 -04:00
Vladimir Mandic 7a82a73273 tensor rank strong typechecks 2022-10-16 20:28:57 -04:00
Vladimir Mandic 41aeadf00f rebuild dependencies 2022-10-13 09:30:33 -04:00
Vladimir Mandic 5218439796 update release 2022-10-09 14:34:58 -04:00
Vladimir Mandic ad55453f35 2.11.1 2022-10-09 14:32:15 -04:00
Vladimir Mandic b2845acf36 update tfjs 2022-10-09 13:59:59 -04:00
Vladimir Mandic 4fddd86f3f add rvm segmentation model 2022-10-02 15:09:00 -04:00
Vladimir Mandic 48df1b13f0 update wiki 2022-09-30 10:20:20 -04:00
Vladimir Mandic 597da8c7d4 update typedefs and typedocs 2022-09-30 10:20:08 -04:00
Vladimir Mandic ec53f70128 add human.webcam methods 2022-09-29 21:28:13 -04:00
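
A usage sketch for the webcam helper introduced above; the option names are assumptions based on the library's typedocs, and the element id is illustrative.

```ts
import { Human } from '@vladmandic/human';

const human = new Human();
const video = document.getElementById('video') as HTMLVideoElement;

async function main() {
  await human.webcam.start({ element: video, crop: true }); // option names assumed
  const result = await human.detect(video);
  console.log(`detected ${result.face.length} face(s)`);
}
main();
```
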
Vladimir Mandic 1ffad0ee1a update dependencies 2022-09-27 11:51:55 -04:00
Vladimir Mandic 3e4d856ac3 update faceid 2022-09-25 10:15:47 -04:00
Vladimir Mandic 6255f2590e update readme 2022-09-25 08:28:29 -04:00
Vladimir Mandic 940576e24d Create FUNDING.yml 2022-09-25 08:17:26 -04:00
Vladimir Mandic e1153aa83c update readme 2022-09-24 11:43:29 -04:00
Vladimir Mandic 7d05bc090e update demos 2022-09-21 15:31:17 -04:00
Vladimir Mandic 4d8369bff2 fix rotation interpolation 2022-09-21 13:51:49 -04:00
Vladimir Mandic b636eedc6b 2.10.3 2022-09-21 13:49:11 -04:00
Vladimir Mandic a89adc81bf update samples 2022-09-19 10:46:11 -04:00
Vladimir Mandic 29736d8b1b add human.video method 2022-09-17 17:19:51 -04:00
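
A sketch of the human.video method added above, assuming it starts a continuous background detection loop on a video element and keeps human.result current; verify against the typedocs.

```ts
import { Human } from '@vladmandic/human';

const human = new Human();
const video = document.getElementById('video') as HTMLVideoElement;

async function main() {
  await human.video(video); // assumed: starts continuous detection in the background
  setInterval(() => console.log('faces:', human.result?.face?.length ?? 0), 1000);
}
main();
```
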
Vladimir Mandic 1eb5f9b6f4 update readme 2022-09-14 11:39:12 -04:00
Vladimir Mandic d78add263a update readme 2022-09-13 16:31:23 -04:00
Vladimir Mandic 164e28ed99 update todo 2022-09-12 09:39:39 -04:00
Vladimir Mandic c79afbd1e7 update node resolver 2022-09-11 12:26:24 -04:00
Vladimir Mandic bf8c68de1e 2.10.2 2022-09-11 11:43:13 -04:00
George Bougakov 357dfc2b38 Add Node.js ESM compatibility (#292) 2022-09-11 11:40:46 -04:00
Vladimir Mandic 2362695039 update 2022-09-08 08:02:26 -04:00
Vladimir Mandic 1c4c41cd55 update todo 2022-09-07 12:46:19 -04:00
Vladimir Mandic b5eb7e9bec release 2022-09-07 12:42:49 -04:00
Vladimir Mandic 546febae9e 2.10.1 2022-09-07 12:34:08 -04:00
Vladimir Mandic cb2205bbab release candidate 2022-09-07 10:54:01 -04:00
Vladimir Mandic 15b50f2181 add config flags 2022-09-06 10:28:54 -04:00
Vladimir Mandic 398aefcad5 test update 2022-09-03 17:17:46 -04:00
Vladimir Mandic 5ff70f756a update settings 2022-09-03 07:15:34 -04:00
Vladimir Mandic cec65ac16c release preview 2022-09-03 07:13:08 -04:00
Vladimir Mandic 9154f4ef3e optimize startup sequence 2022-09-02 14:07:10 -04:00
Vladimir Mandic d33f3e45a1 update 2022-09-02 12:04:26 -04:00
Vladimir Mandic 73e96bf249 reorder backend init code 2022-09-02 11:57:47 -04:00
Vladimir Mandic 2cfee111fb test embedding 2022-09-02 11:11:51 -04:00
Vladimir Mandic 43f44cd114 update backend 2022-09-02 10:22:24 -04:00
Vladimir Mandic 179566cc83 embedding test 2022-09-02 08:08:21 -04:00
Vladimir Mandic 55a6398d95 update tests 2022-09-01 09:27:29 -04:00
Vladimir Mandic a222ce933f add browser iife tests 2022-08-31 18:30:47 -04:00
Vladimir Mandic 39634cb25d minor bug fixes and increased test coverage 2022-08-31 11:29:19 -04:00
Vladimir Mandic cc71013f1d extend release tests 2022-08-30 11:42:38 -04:00
Vladimir Mandic b8c96840bb add model load exception handling 2022-08-30 10:34:56 -04:00
Vladimir Mandic 69b19ec4fa add softwareKernels config option 2022-08-30 10:28:33 -04:00
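
A hedged sketch of the softwareKernels option added above; the assumption is that it enables software (CPU) fallback implementations for kernels missing on the active backend.

```ts
import { Human, Config } from '@vladmandic/human';

const humanConfig: Partial<Config> = {
  softwareKernels: true, // assumed: register CPU fallbacks for missing backend kernels
};
const human = new Human(humanConfig);
```
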
Vladimir Mandic 217c4a903f update typescript 2022-08-28 13:12:27 -04:00
Vladimir Mandic 4cfac787b1 update todo 2022-08-24 08:18:34 -04:00
Vladimir Mandic d5eb5e40ff update tfjs 2022-08-24 08:10:36 -04:00
Vladimir Mandic db74ab4c97 update todo 2022-08-21 15:24:19 -04:00
Vladimir Mandic 47c7bdfae2 expand type safety 2022-08-21 15:23:03 -04:00
Vladimir Mandic fc5f90b639 full eslint rule rewrite 2022-08-21 13:34:51 -04:00
Vladimir Mandic c10c919f1a update demo notes 2022-08-20 09:38:08 -04:00
Vladimir Mandic c308a4edde 2.9.4 2022-08-20 09:29:22 -04:00
Vladimir Mandic 65c9a45f61 add browser test 2022-08-19 09:15:29 -04:00
Vladimir Mandic 3af503b508 update wiki 2022-08-15 11:48:55 -04:00
Vladimir Mandic 96bc063a1d add tensorflow library detection 2022-08-15 11:40:15 -04:00
Vladimir Mandic 6fc26e793c fix wasm detection 2022-08-15 11:29:56 -04:00
Vladimir Mandic 554ed81f49 update build pipeline 2022-08-12 09:51:45 -04:00
Vladimir Mandic 37cf9e37d1 enumerate additional models 2022-08-12 09:13:48 -04:00
Vladimir Mandic a10b37d13a release refresh 2022-08-10 13:50:33 -04:00
Vladimir Mandic f029377d5f 2.9.3 2022-08-10 13:45:19 -04:00
Vladimir Mandic ad90d3fc3e overhaul testing framework 2022-08-10 13:44:38 -04:00
Vladimir Mandic 47b5830c89 release refresh 2022-08-08 15:15:57 -04:00
Vladimir Mandic b09a65cc7e update pending todo notes 2022-08-08 15:10:34 -04:00
Vladimir Mandic 62ea156861 update wiki 2022-08-08 15:09:39 -04:00
Vladimir Mandic 5e1743695d add insightface 2022-08-08 15:09:26 -04:00
Vladimir Mandic ef4caa68fa 2.9.2 2022-08-08 13:38:16 -04:00
Vladimir Mandic 321f962894 update profiling methods 2022-08-04 09:15:13 -04:00
Vladimir Mandic faa9615d3a update build platform 2022-07-29 09:24:04 -04:00
Vladimir Mandic 190340bf70 update packages definitions 2022-07-26 07:36:57 -04:00
Vladimir Mandic fde0f48afe release rebuild 2022-07-25 08:33:07 -04:00
Vladimir Mandic 04644db9a3 2.9.1 2022-07-25 08:30:38 -04:00
Vladimir Mandic f31cef3923 update tfjs 2022-07-25 08:30:34 -04:00
Vladimir Mandic 12937b9abf update tfjs 2022-07-23 14:45:40 -04:00
Vladimir Mandic 4dcad5147f full rebuild 2022-07-21 13:06:13 -04:00
Vladimir Mandic a9bc6087f5 release cleanup 2022-07-21 12:53:10 -04:00
Vladimir Mandic 7a613fb8d2 tflite experiments 2022-07-19 17:49:58 -04:00
Vladimir Mandic 4e872b38d4 update wiki 2022-07-18 08:22:42 -04:00
Vladimir Mandic 7e161b2e94 add load monitor test 2022-07-18 08:22:19 -04:00
Vladimir Mandic 85656cdef5 beta for upcoming major release 2022-07-17 21:31:08 -04:00
Vladimir Mandic b5390363b5 switch to release version of tfjs 2022-07-16 09:08:58 -04:00
Vladimir Mandic 0a62abc07e update method signatures 2022-07-14 10:41:52 -04:00
Vladimir Mandic 43126bc7c9 update demo 2022-07-14 10:02:23 -04:00
Vladimir Mandic d814470a49 update typedocs 2022-07-14 09:36:08 -04:00
Vladimir Mandic e705e0a3a1 placeholder for face contours 2022-07-13 12:08:23 -04:00
Vladimir Mandic 8d92d935ae improve face compare in main demo 2022-07-13 09:26:00 -04:00
Vladimir Mandic 79bc49b2ef add webview support 2022-07-13 08:53:37 -04:00
Vladimir Mandic b302c096ec update dependencies 2022-07-13 08:23:18 -04:00
FaeronGaming d0bacd5028 fix(gear): ensure gear.modelPath is used for loadModel() 2022-07-13 08:22:28 -04:00
Vladimir Mandic d23e824610 npm default install should be prod only 2022-07-07 12:11:05 +02:00
Vladimir Mandic 0d2cfd6ab9 fix npm v7 compatibility 2022-07-05 05:03:31 -04:00
Vladimir Mandic ffdd43faf9 add getModelStats method 2022-07-02 03:39:40 -04:00
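
A sketch of the getModelStats method added above; only the method name is taken from the commit message, and the shape of the returned stats object is not shown in this log.

```ts
import { Human } from '@vladmandic/human';

const human = new Human();

async function main() {
  await human.load();                  // pre-load all configured models
  const stats = human.getModelStats(); // returned fields not documented here
  console.log(stats);
}
main();
```
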
Vladimir Mandic 772964ff49 rebuild 2022-06-21 13:26:58 -04:00
Vladimir Mandic d8b8acec54 update 2022-06-10 08:47:22 -04:00
Vladimir Mandic c331c8b675 release build 2022-06-08 08:52:19 -04:00
Vladimir Mandic ccaf9325a8 2.8.1 2022-06-08 08:44:52 -04:00
Vladimir Mandic 7ec9dfe130 webgpu and wasm optimizations 2022-06-02 10:39:53 -04:00
Vladimir Mandic 62376a5ca2 add faceboxes prototype 2022-05-30 08:58:54 -04:00
Vladimir Mandic 236ecf8286 updated facemesh and attention models 2022-05-29 21:12:18 -04:00
Vladimir Mandic b619035fb4 full rebuild 2022-05-24 07:28:51 -04:00
Vladimir Mandic 51c1d52e6b 2.7.4 2022-05-24 07:28:43 -04:00
Vladimir Mandic cb6a21a505 2.7.3 2022-05-24 07:19:38 -04:00
Vladimir Mandic dade40c78d add face.mesh.keepInvalid config flag 2022-05-22 08:50:51 -04:00
Vladimir Mandic 106669919f initial work for new facemesh model 2022-05-18 17:42:40 -04:00
Vladimir Mandic 45471052c6 update changelog 2022-05-18 08:35:06 -04:00
Vladimir Mandic 68e4ef31b0 update tfjs 2022-05-18 08:33:33 -04:00
Vladimir Mandic 9b7661cd80 2.7.2 2022-05-12 16:47:41 -04:00
Vladimir Mandic 3c45347f10 fix demo when used with video files 2022-05-12 16:47:21 -04:00
Vladimir Mandic 678a58e166 major release 2022-05-09 08:16:00 -04:00
Vladimir Mandic 4c518cfa4b 2.7.1 2022-05-09 08:14:00 -04:00
Vladimir Mandic 7cb384679f update wiki 2022-04-23 13:02:00 -04:00
Vladimir Mandic 4ba1846e12 update todo 2022-04-21 09:58:13 -04:00
Vladimir Mandic 6cb5c00903 support 4k input 2022-04-21 09:39:40 -04:00
Vladimir Mandic ff6e0ef196 update tfjs 2022-04-21 09:38:36 -04:00
Vladimir Mandic 4bd1f53a0b add attention draw methods 2022-04-18 12:26:05 -04:00
Vladimir Mandic 4ab5c778bd fix coloring function 2022-04-18 11:29:45 -04:00
Vladimir Mandic 6ffe7cb364 enable precompile as part of warmup 2022-04-15 07:54:27 -04:00
Vladimir Mandic 2634b510f4 prepare release beta 2022-04-14 11:55:49 -04:00
Vladimir Mandic e9300cc43a change default face crop 2022-04-14 11:47:08 -04:00
Vladimir Mandic 0d2d34d5c7 update wiki 2022-04-11 11:55:30 -04:00
Vladimir Mandic e4bca32fea beta release 2.7 2022-04-11 11:46:35 -04:00
Vladimir Mandic 3950232a35 refactor draw methods 2022-04-11 11:46:00 -04:00
Vladimir Mandic 4ab0a9d18f implement face attention model 2022-04-11 11:45:24 -04:00
Vladimir Mandic fd0d6558f5 add electronjs demo 2022-04-10 11:00:41 -04:00
Vladimir Mandic 106120de3d rebuild 2022-04-10 10:13:13 -04:00
Vladimir Mandic 6abc1a2d4c rebuild 2022-04-05 12:25:41 -04:00
Vladimir Mandic c05722b9cd update tfjs 2022-04-01 12:38:05 -04:00
Vladimir Mandic ccd2f8e244 update 2022-04-01 09:13:32 -04:00
Vladimir Mandic 898866f94a 2.6.5 2022-04-01 09:12:13 -04:00
Vladimir Mandic 1d7e76232f bundle offscreencanvas types 2022-04-01 09:12:04 -04:00
Vladimir Mandic 647953cb67 prototype precompile pass 2022-03-19 11:02:30 -04:00
Vladimir Mandic 507d6fda02 fix changelog generation 2022-03-16 11:38:57 -04:00
Vladimir Mandic 4a1fe79549 fix indexdb config check 2022-03-16 11:19:56 -04:00
Vladimir Mandic dd0a028110 update typescript and tensorflow 2022-03-07 13:24:06 -05:00
Vladimir Mandic 264e9a9ccf 2.6.4 2022-02-27 07:25:45 -05:00
Vladimir Mandic bd269021f2 fix types typo 2022-02-17 08:15:57 -05:00
Vladimir Mandic 15fa4eaa1a refresh 2022-02-14 07:53:28 -05:00
Vladimir Mandic e4862fe8ea add config option wasmPlatformFetch 2022-02-10 15:35:32 -05:00
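
A hedged config sketch for the option added above; the assumption is that wasmPlatformFetch routes the wasm binary download through the tfjs platform fetch handler, and the wasmPath URL is illustrative.

```ts
import { Human, Config } from '@vladmandic/human';

const humanConfig: Partial<Config> = {
  backend: 'wasm',
  wasmPath: 'https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm/dist/', // illustrative
  wasmPlatformFetch: true, // assumed: fetch wasm binaries via the platform fetch handler
};
const human = new Human(humanConfig);
```
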
Vladimir Mandic f34ada60b9 2.6.3 2022-02-10 15:32:53 -05:00
Vladimir Mandic 218895339a rebuild 2022-02-10 12:27:21 -05:00
Vladimir Mandic 81befec667 update toolkit 2022-02-07 10:12:59 -05:00
Vladimir Mandic deb094706e 2.6.2 2022-02-07 09:47:17 -05:00
Vladimir Mandic d3d0b37bf7 update todo 2022-01-20 08:24:23 -05:00
Vladimir Mandic 345433756a release rebuild 2022-01-20 08:17:06 -05:00
Vladimir Mandic 903ee9268f 2.6.1 2022-01-20 07:54:56 -05:00
Vladimir Mandic 2c0057cd30 implement model caching using indexdb 2022-01-17 11:03:21 -05:00
Vladimir Mandic c5911301e9 prototype global fetch handler 2022-01-16 09:49:55 -05:00
Vladimir Mandic c668d8fe3f update samples 2022-01-15 09:18:14 -05:00
Vladimir Mandic 921ecb0934 update samples 2022-01-15 09:11:04 -05:00
Vladimir Mandic d33e4c960c update samples with images under cc licence only 2022-01-14 16:10:32 -05:00
Vladimir Mandic 8b336230e7 fix face box and hand tracking when in front of face 2022-01-14 09:46:16 -05:00
Vladimir Mandic a071a1eee9 2.5.8 2022-01-14 09:42:57 -05:00
Vladimir Mandic c04c8fa03c update 2022-01-08 12:43:44 -05:00
Vladimir Mandic c27f4a19d8 update wiki 2022-01-05 11:49:10 -05:00
Vladimir Mandic bc328cfee9 update wiki 2022-01-05 09:55:07 -05:00
Vladimir Mandic 84bfbc323b update 2022-01-05 08:34:31 -05:00
Vladimir Mandic 51d1f251e6 update demos 2022-01-01 08:13:04 -05:00
Vladimir Mandic 7f82eb58c5 update blazepose 2021-12-31 13:58:03 -05:00
Vladimir Mandic b817ff2150 update dependencies 2021-12-30 12:39:29 -05:00
Vladimir Mandic 15ff1efc4b update hand annotations 2021-12-30 12:14:09 -05:00
Vladimir Mandic 5a6ef389a6 update blazepose 2021-12-29 12:37:46 -05:00
Vladimir Mandic e41664dd18 update 2021-12-28 11:39:54 -05:00
Vladimir Mandic 69a080e64b update demos 2021-12-28 09:40:32 -05:00
Vladimir Mandic ae05c7d2b2 fix samples 2021-12-28 07:03:05 -05:00
libowen.eric da48dcb449 fix(src): typo 2021-12-28 06:59:16 -05:00
Vladimir Mandic 9bc8832166 change on how face box is calculated 2021-12-27 10:59:56 -05:00
Vladimir Mandic 027b287f26 2.5.7 2021-12-27 09:29:15 -05:00
Vladimir Mandic e81683d55c update 2021-12-22 10:04:41 -05:00
Vladimir Mandic 4d8feaff3e fix posenet 2021-12-18 12:24:01 -05:00
Vladimir Mandic 44ad8c6d4d release refresh 2021-12-15 09:30:26 -05:00
Vladimir Mandic e413a0fe15 2.5.6 2021-12-15 09:26:40 -05:00
Vladimir Mandic 8372469e6c strong type for string enums 2021-12-15 09:26:32 -05:00
Vladimir Mandic 54a399f0bc update 2021-12-14 15:45:43 -05:00
Vladimir Mandic 1fe50ae36c rebuild 2021-12-13 21:38:55 -05:00
Vladimir Mandic dd462305b5 update tfjs 2021-12-09 14:44:26 -05:00
Vladimir Mandic 67c60a77b7 fix node detection in electron environment 2021-12-07 17:02:33 -05:00
Vladimir Mandic f720159149 update 2021-12-01 08:27:05 -05:00
Vladimir Mandic c9846f9b77 2.5.5 2021-12-01 08:21:55 -05:00
Vladimir Mandic acc899a3d6 update readme 2021-11-26 12:14:40 -05:00
Vladimir Mandic ea90ed68ad added human-motion 2021-11-26 12:12:46 -05:00
Vladimir Mandic 5ed2e15a4e add offscreencanvas typedefs 2021-11-26 11:55:52 -05:00
Vladimir Mandic a90e8ee723 update blazepose and extend hand annotations 2021-11-24 16:17:03 -05:00
Vladimir Mandic c919784f68 release preview 2021-11-23 10:40:40 -05:00
Vladimir Mandic 7924518151 fix face box scaling on detection 2021-11-23 08:36:32 -05:00
Vladimir Mandic 1db4783611 cleanup 2021-11-22 14:44:25 -05:00
Vladimir Mandic fbbb5aa138 2.5.4 2021-11-22 14:33:46 -05:00
Vladimir Mandic cf304bc514 prototype blazepose detector 2021-11-22 14:33:40 -05:00
Vladimir Mandic 02d883c00f minor fixes 2021-11-21 16:55:17 -05:00
Vladimir Mandic 67667160cb add body 3d interpolation 2021-11-19 18:30:57 -05:00
Vladimir Mandic 9fd7ea723e edit blazepose keypoints 2021-11-19 16:11:03 -05:00
Vladimir Mandic c3a5e1f802 new build process 2021-11-18 10:10:06 -05:00
Vladimir Mandic fd1217c4b3 2.5.3 2021-11-18 10:06:07 -05:00
Vladimir Mandic 7517ac2d8f update typescript 2021-11-17 16:50:21 -05:00
Vladimir Mandic eb65cabf31 create typedef rollup 2021-11-17 15:45:49 -05:00
Vladimir Mandic 8d05c1089e optimize centernet 2021-11-16 20:16:49 -05:00
Vladimir Mandic 7deb9694e7 cache frequent tf constants 2021-11-16 18:31:07 -05:00
Vladimir Mandic 54b492b987 add extra face rotation prior to mesh 2021-11-16 13:07:44 -05:00
Vladimir Mandic 6a6f14f658 release 2.5.2 2021-11-15 09:26:38 -05:00
Vladimir Mandic 798d842c4b improve error handling 2021-11-14 11:22:52 -05:00
Vladimir Mandic 8e0aa270f0 2.5.2 2021-11-14 10:43:00 -05:00
Vladimir Mandic 296c52fed4 fix mobilefacenet module 2021-11-13 17:26:19 -05:00
Vladimir Mandic 1c228c70bf fix gear and ssrnet modules 2021-11-13 12:23:32 -05:00
Vladimir Mandic b93ea7314c fix for face crop when mesh is disabled 2021-11-12 15:17:08 -05:00
Vladimir Mandic 4f2993a2f5 implement optional face masking 2021-11-12 15:07:23 -05:00
Vladimir Mandic 8b56de5140 update todo 2021-11-11 17:02:32 -05:00
Vladimir Mandic 1e4ceeb1e8 add similarity score range normalization 2021-11-11 17:01:10 -05:00
Vladimir Mandic 474db8bf01 add faceid demo 2021-11-11 11:30:55 -05:00
Vladimir Mandic ea6eb0b9c9 documentation overhaul 2021-11-10 12:21:45 -05:00
Vladimir Mandic adb358fe98 auto tensor shape and channels handling 2021-11-09 19:39:18 -05:00
Vladimir Mandic 1729a989af disable use of path2d in node 2021-11-09 18:10:54 -05:00
Vladimir Mandic a06119c20b update wiki 2021-11-09 14:45:45 -05:00
Vladimir Mandic d1545c8740 add liveness module and facerecognition demo 2021-11-09 14:37:50 -05:00
Vladimir Mandic b9e0c1faf4 initial version of facerecognition demo 2021-11-09 10:39:23 -05:00
Vladimir Mandic dc867d85d4 rebuild 2021-11-08 16:41:30 -05:00
Vladimir Mandic 8a524233b0 add type defs when working with relative path imports 2021-11-08 16:36:20 -05:00
Vladimir Mandic 50eff29056 disable humangl backend if webgl 1.0 is detected 2021-11-08 11:35:35 -05:00
Vladimir Mandic 37f62f47fa add additional hand gestures 2021-11-08 07:36:26 -05:00
Vladimir Mandic 33d6e94787 2.5.1 2021-11-08 06:25:07 -05:00
Vladimir Mandic 4c5db5ab04 update automated tests 2021-11-07 10:10:23 -05:00
Vladimir Mandic 7d58d02ca2 new human.compare api 2021-11-07 10:03:33 -05:00
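
A sketch of the human.compare api named above; its semantics are assumed here to be a numeric difference score between two inputs, which this log does not confirm.

```ts
import { Human } from '@vladmandic/human';

const human = new Human();

async function diff(a: HTMLImageElement, b: HTMLImageElement) {
  const score = await human.compare(a, b); // assumed: returns a difference score
  console.log('input difference:', score);
}
```
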
Vladimir Mandic 39d45e1e2b added links to release notes 2021-11-07 08:14:14 -05:00
Vladimir Mandic 16120a87f4 update readme 2021-11-06 10:26:04 -04:00
Vladimir Mandic 4c3ea44199 new frame change detection algorithm 2021-11-06 10:21:51 -04:00
Vladimir Mandic 243826267a add histogram equalization 2021-11-05 15:35:53 -04:00
Vladimir Mandic db63a70c8a add histogram equalization 2021-11-05 15:09:54 -04:00
Vladimir Mandic 0fa9498afe implement wasm missing ops 2021-11-05 13:36:53 -04:00
Vladimir Mandic c2dc38793e performance and memory optimizations 2021-11-05 11:28:06 -04:00
Vladimir Mandic b64e9ae69f fix react compatibility issues 2021-11-04 06:34:13 -04:00
Vladimir Mandic 3a0436bc54 improve box rescaling for all modules 2021-11-03 16:32:07 -04:00
Vladimir Mandic cd1c8fd003 improve precision using wasm backend 2021-11-02 11:42:15 -04:00
Vladimir Mandic 26f6bba361 refactor predict with execute 2021-11-02 11:07:11 -04:00
Vladimir Mandic 8e26744006 update tests 2021-10-31 09:58:48 -04:00
Vladimir Mandic 0c4978310f update hand landmarks model 2021-10-31 09:06:33 -04:00
Vladimir Mandic 355529b074 patch tfjs type defs 2021-10-31 08:03:42 -04:00
Vladimir Mandic da7f4300b2 start 2.5 major version 2021-10-30 12:21:54 -04:00
Vladimir Mandic f3411437a0 build and docs cleanup 2021-10-29 15:55:20 -04:00
Vladimir Mandic a710ef88ec fix firefox bug 2021-10-28 17:25:50 -04:00
Vladimir Mandic 8ea9a89642 update tfjs 2021-10-28 14:40:31 -04:00
Vladimir Mandic e15792e88b 2.4.3 2021-10-28 13:59:57 -04:00
Vladimir Mandic 59058a0b93 additional human.performance counters 2021-10-27 09:45:38 -04:00
Vladimir Mandic 686b0716de 2.4.2 2021-10-27 09:44:17 -04:00
Vladimir Mandic 4fa71659e7 add ts demo 2021-10-27 08:16:06 -04:00
Vladimir Mandic a005c00a5b switch from es2018 to es2020 for main build 2021-10-26 19:38:23 -04:00
Vladimir Mandic 81d5336498 switch to custom tfjs for demos 2021-10-26 15:08:05 -04:00
Vladimir Mandic 8c941597ed update todo 2021-10-25 13:45:04 -04:00
Vladimir Mandic 75123ff212 release 2.4 2021-10-25 13:29:29 -04:00
Vladimir Mandic 385ab03f75 2.4.1 2021-10-25 13:09:41 -04:00
Vladimir Mandic b395a74701 refactoring plus jsdoc comments 2021-10-25 13:09:00 -04:00
Vladimir Mandic 2bd59f1276 increase face similarity match resolution 2021-10-25 09:44:13 -04:00
Vladimir Mandic 12ef0a846b update todo 2021-10-23 09:42:41 -04:00
Vladimir Mandic 2923c6b5af time based caching 2021-10-23 09:38:52 -04:00
Vladimir Mandic a9ca883908 turn on minification 2021-10-22 20:14:13 -04:00
Vladimir Mandic b9547e551a update todo 2021-10-22 16:11:02 -04:00
Vladimir Mandic 87465f99fd initial work on skipTime 2021-10-22 16:09:52 -04:00
Vladimir Mandic 2791ee9fa9 added generic types 2021-10-22 14:46:19 -04:00
Vladimir Mandic c3dab75414 enhanced typing exports 2021-10-22 13:49:40 -04:00
Vladimir Mandic f1639837a6 update tfjs to 3.10.0 2021-10-22 09:48:27 -04:00
Vladimir Mandic 974a295407 add optional autodetected custom wasm path 2021-10-21 12:42:08 -04:00
Vladimir Mandic 20624de6a9 2.3.6 2021-10-21 11:31:46 -04:00
Vladimir Mandic 975d7fb477 fix for human.draw labels and typedefs 2021-10-21 10:54:51 -04:00
Vladimir Mandic 37672d6460 refactor human.env to a class type 2021-10-21 10:26:44 -04:00
Vladimir Mandic 962ef18e1c add human.custom.esm using custom tfjs build 2021-10-20 17:49:00 -04:00
Vladimir Mandic 715f2dbfb5 update handtrack boxes and refactor handpose 2021-10-20 09:10:57 -04:00
Vladimir Mandic 5d5876e749 update demos 2021-10-19 11:28:59 -04:00
Vladimir Mandic 4dc5d84137 2.3.5 2021-10-19 11:25:05 -04:00
Jimmy Nyström c1243b96e4 removed direct usage of performance.now; switched to the utility function that works in both nodejs and browser environments 2021-10-19 09:58:14 -04:00
Vladimir Mandic 00461783dd update 2021-10-19 08:09:46 -04:00
Vladimir Mandic f1953ca1f2 2.3.4 2021-10-19 08:05:19 -04:00
Vladimir Mandic 6a49230874 update dependencies and refresh release 2021-10-19 07:58:51 -04:00
Vladimir Mandic 5ef6158cb1 minor blazepose optimizations 2021-10-15 09:34:40 -04:00
Vladimir Mandic 0d9b7ee0ae compress samples 2021-10-15 07:25:51 -04:00
Vladimir Mandic 6d1d648fdf remove posenet from default package 2021-10-15 06:49:41 -04:00
Vladimir Mandic 761a636c2c enhanced movenet postprocessing 2021-10-14 12:26:59 -04:00
Vladimir Mandic 37a8892cfe update handtrack skip algorithm 2021-10-13 14:49:41 -04:00
Vladimir Mandic 9505c8c80e use transferrable buffer for worker messages 2021-10-13 11:53:54 -04:00
Vladimir Mandic d70e6fa628 update todo 2021-10-13 11:02:44 -04:00
Vladimir Mandic b4e6fda31b add optional anti-spoofing module 2021-10-13 10:56:56 -04:00
Vladimir Mandic 6ff3e12a7e update todo 2021-10-13 08:36:20 -04:00
Vladimir Mandic 6a6694f433 add node-match advanced example using worker thread pool 2021-10-13 08:06:11 -04:00
Vladimir Mandic b509489ed7 package updates 2021-10-12 14:17:33 -04:00
Vladimir Mandic 0f92e3023e optimize image preprocessing 2021-10-12 11:39:18 -04:00
Vladimir Mandic d23fb162a9 update imagefx 2021-10-12 09:48:00 -04:00
Vladimir Mandic 224f3d26c0 set webgpu optimized flags 2021-10-11 09:22:39 -04:00
Vladimir Mandic 7e7cba2168 major precision improvements to movenet and handtrack 2021-10-10 22:29:20 -04:00
Vladimir Mandic 924a0b24f0 image processing fixes 2021-10-10 17:52:43 -04:00
Vladimir Mandic 110f4999a4 redesign body and hand caching and interpolation 2021-10-08 18:39:04 -04:00
Vladimir Mandic 93748a4609 demo default config cleanup 2021-10-08 07:48:48 -04:00
Vladimir Mandic 293eba8379 improve gaze and face angle visualizations in draw 2021-10-07 10:33:10 -04:00
Vladimir Mandic 65888d82a7 release 2.3.1 2021-10-06 11:33:58 -04:00
Vladimir Mandic 92c0fb0584 2.3.1 2021-10-06 11:30:44 -04:00
Vladimir Mandic c47b72c56b workaround for chrome offscreencanvas bug 2021-10-06 11:30:34 -04:00
Vladimir Mandic 0e9195dca3 fix backend conflict in webworker 2021-10-04 17:03:36 -04:00
Vladimir Mandic e0ef7c5b1e add blazepose v2 and add annotations to body results 2021-10-04 16:29:15 -04:00
Vladimir Mandic 6bbbeaf452 fix backend order initialization 2021-10-03 08:12:26 -04:00
Vladimir Mandic 04e832f512 added docker notes 2021-10-02 11:41:51 -04:00
Vladimir Mandic f265eb9f3f update dependencies 2021-10-02 07:46:07 -04:00
Vladimir Mandic 75744b5235 updated hint rules 2021-10-01 12:07:14 -04:00
Vladimir Mandic 1e2290d2a2 updated facematch demo 2021-10-01 11:40:57 -04:00
Vladimir Mandic e548e71810 update wiki 2021-09-30 14:29:14 -04:00
Vladimir Mandic 49112e584b breaking change: new similarity and match methods 2021-09-30 14:28:16 -04:00
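
A sketch of the reworked similarity and match methods named above; exact signatures and return shapes should be checked against the typedocs for this version.

```ts
import { Human } from '@vladmandic/human';

const human = new Human();

async function describe(img: HTMLImageElement): Promise<number[]> {
  const result = await human.detect(img);
  return result.face[0]?.embedding ?? []; // face descriptor from the description model
}

async function compareFaces(a: HTMLImageElement, b: HTMLImageElement, db: number[][]) {
  const score = human.similarity(await describe(a), await describe(b)); // 0..1 similarity
  const best = human.match(await describe(a), db); // closest entry in a descriptor database
  console.log({ score, best });
}
```
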
Vladimir Mandic 5b15508c39 update facematch demo 2021-09-29 08:02:23 -04:00
Vladimir Mandic 07eb238490 update movenet-multipose and samples 2021-09-28 17:07:34 -04:00
Vladimir Mandic 3e61cb083e tweaked default values 2021-09-28 13:48:29 -04:00
Vladimir Mandic 31fbbb01e2 update todo 2021-09-28 12:02:47 -04:00
Vladimir Mandic 8e801a2af5 enable handtrack as default model 2021-09-28 12:02:17 -04:00
Vladimir Mandic 156e857d32 redesign face processing 2021-09-28 12:01:48 -04:00
Vladimir Mandic 28a957316b update types and dependencies 2021-09-27 14:39:54 -04:00
Vladimir Mandic 6be1b062fb refactoring 2021-09-27 13:58:13 -04:00
Vladimir Mandic a21e3c95ed define app specific types 2021-09-27 09:19:43 -04:00
Vladimir Mandic 561d25cfc9 implement box caching for movenet 2021-09-27 08:53:41 -04:00
Vladimir Mandic 04406afcf2 update todo 2021-09-26 10:09:30 -04:00
Vladimir Mandic 5a02271071 update todo 2021-09-26 10:03:39 -04:00
Vladimir Mandic f021a00834 update wiki 2021-09-26 06:53:06 -04:00
Vladimir Mandic 5c507ad8f3 autodetect number of bodies and hands 2021-09-25 19:14:03 -04:00
Vladimir Mandic 7c60d62e6e upload new samples 2021-09-25 16:31:44 -04:00
Vladimir Mandic ad2866bab6 new samples gallery and major code folder restructure 2021-09-25 11:51:15 -04:00
Vladimir Mandic 776f20a6bb update todo 2021-09-24 09:57:03 -04:00
Vladimir Mandic 894dde3edd new release 2021-09-24 09:55:27 -04:00
Vladimir Mandic 7b23c7f0a8 2.2.3 2021-09-24 09:46:35 -04:00
Vladimir Mandic 8bbfb9615a optimize model loading 2021-09-23 14:09:41 -04:00
Vladimir Mandic c52f1c979c support segmentation for nodejs 2021-09-22 19:27:12 -04:00
Vladimir Mandic d3113d6baf update todo and docs 2021-09-22 16:00:43 -04:00
Vladimir Mandic 8a4b498357 redo segmentation and handtracking 2021-09-22 15:16:14 -04:00
Vladimir Mandic 9186e46c57 prototype handtracking 2021-09-21 16:48:16 -04:00
Vladimir Mandic a5977e3f45 automated browser tests 2021-09-20 22:06:49 -04:00
Vladimir Mandic ded141a161 support for dynamic backend switching 2021-09-20 21:59:49 -04:00
Vladimir Mandic 04fcbc7e6a initial automated browser tests 2021-09-20 17:17:13 -04:00
Vladimir Mandic 384d94c0cb enhanced automated test coverage 2021-09-20 09:42:34 -04:00
Vladimir Mandic 57f5fd391f more automated tests 2021-09-19 14:20:22 -04:00
Vladimir Mandic ccd5ba1e46 added configuration validation 2021-09-19 14:07:53 -04:00
Vladimir Mandic cb1ff858e9 updated build platform and typedoc theme 2021-09-18 19:09:02 -04:00
Vladimir Mandic 79f95aa39f prevent validation failed on some model combinations 2021-09-17 14:30:57 -04:00
Vladimir Mandic 64c6195342 webgl exception handling 2021-09-17 14:07:44 -04:00
Vladimir Mandic 5b69a70a62 2.2.2 2021-09-17 14:07:32 -04:00
Vladimir Mandic 8dba39245d experimental webgl status monitoring 2021-09-17 11:23:00 -04:00
Vladimir Mandic 75630a7aa3 major release 2021-09-16 10:49:42 -04:00
Vladimir Mandic 87454b1203 2.2.1 2021-09-16 10:46:24 -04:00
Vladimir Mandic 85017a3d93 add vr model demo 2021-09-16 10:15:20 -04:00
Vladimir Mandic 81d141b852 update readme 2021-09-15 19:12:05 -04:00
Vladimir Mandic c4cdddfb59 all tests passing 2021-09-15 19:02:51 -04:00
Vladimir Mandic 42e6a25294 redefine draw helpers interface 2021-09-15 18:58:54 -04:00
Vladimir Mandic 5f68153af7 add simple webcam and webrtc demo 2021-09-15 13:59:18 -04:00
Vladimir Mandic 43a91ba5e0 added visual results browser to demo 2021-09-15 11:15:38 -04:00
Vladimir Mandic 246415b8cc reorganize tfjs bundle 2021-09-14 22:07:13 -04:00
Vladimir Mandic fae1e76af5 experimental custom tfjs bundle - disabled 2021-09-14 20:07:08 -04:00
Vladimir Mandic 6eaea226da add platform and backend capabilities detection 2021-09-13 23:24:04 -04:00
Vladimir Mandic f4caef2e90 update changelog and todo 2021-09-13 13:54:42 -04:00
Vladimir Mandic 5fe0144924 update dependencies 2021-09-13 13:34:41 -04:00
Vladimir Mandic eb9e6d5cf0 enhanced automated tests 2021-09-13 13:30:46 -04:00
Vladimir Mandic ddf9239ccd enable canvas patching for nodejs 2021-09-13 13:30:08 -04:00
Vladimir Mandic 6dbe8fce42 full ts strict typechecks 2021-09-13 13:29:14 -04:00
Vladimir Mandic a0f5922b9a fix multiple memory leaks 2021-09-13 13:28:35 -04:00
Vladimir Mandic fd0f85a8e9 modularize human class and add model validation 2021-09-12 18:37:06 -04:00
Vladimir Mandic ba8ac1d8b8 update todo 2021-09-12 13:18:33 -04:00
Vladimir Mandic 203dbffa1a add dynamic kernel op detection 2021-09-12 13:17:33 -04:00
Vladimir Mandic 7fa09937b4 added human.env diagnostic class 2021-09-12 12:42:17 -04:00
Vladimir Mandic f6724de956 minor typos 2021-09-12 08:49:56 -04:00
Vladimir Mandic 83b705818d release candidate 2021-09-12 00:30:11 -04:00
Vladimir Mandic b8d594e18d parametrize face config 2021-09-12 00:05:06 -04:00
Vladimir Mandic 81bf83c948 mark all config items as optional 2021-09-11 23:59:41 -04:00
Vladimir Mandic 54c1dfb37a redefine config and result interfaces 2021-09-11 23:54:35 -04:00
Vladimir Mandic 6e8bf0f4f4 fix usage of string enums 2021-09-11 23:08:18 -04:00
Vladimir Mandic 19e4e49c41 start using partial definitions 2021-09-11 16:11:00 -04:00
Vladimir Mandic 34a3a42fba implement event emitters 2021-09-11 16:00:16 -04:00
Vladimir Mandic cd77ccdef6 fix iife loader 2021-09-11 11:42:48 -04:00
Vladimir Mandic c9554f8e77 update sourcemaps 2021-09-11 11:17:13 -04:00
Vladimir Mandic 017934406a simplify dependencies 2021-09-11 10:29:31 -04:00
Vladimir Mandic 52b4310992 change build process 2021-09-10 21:21:29 -04:00
Vladimir Mandic 26570042cd updated wiki 2021-09-06 08:17:48 -04:00
Vladimir Mandic 042505f022 update lint exceptions 2021-09-05 17:05:46 -04:00
Vladimir Mandic d3e9b74e22 update wiki 2021-09-05 16:48:57 -04:00
Vladimir Mandic 79bb653409 add benchmark info 2021-09-05 16:42:11 -04:00
Vladimir Mandic 296501cbf8 update hand detector processing algorithm 2021-09-02 08:50:16 -04:00
Vladimir Mandic d5abaf2405 update 2021-08-31 18:24:30 -04:00
Vladimir Mandic e97df8d380 simplify canvas handling in nodejs 2021-08-31 18:22:16 -04:00
Vladimir Mandic ab2fe916d9 full rebuild 2021-08-31 14:50:16 -04:00
Vladimir Mandic 85b62fadc8 2.1.5 2021-08-31 14:49:07 -04:00
Vladimir Mandic 2e36f43efb added demo node-canvas 2021-08-31 14:48:55 -04:00
Vladimir Mandic 0759c125ce update node-fetch 2021-08-31 13:29:29 -04:00
Vladimir Mandic e58ba5e803 dynamically generate default wasm path 2021-08-31 13:00:06 -04:00
Vladimir Mandic 17356e0a4d updated wiki 2021-08-23 08:41:50 -04:00
Vladimir Mandic ac83b3d153 implement finger poses in hand detection and gestures 2021-08-20 20:43:03 -04:00
Vladimir Mandic 54d717bbff implemented movenet-multipose model 2021-08-20 09:05:07 -04:00
Vladimir Mandic 4f5ee67431 update todo 2021-08-19 17:28:07 -04:00
Vladimir Mandic bfef22c75e 2.1.4 2021-08-19 16:17:03 -04:00
Vladimir Mandic e1546e158f add static type definitions to main class 2021-08-19 16:16:56 -04:00
Vladimir Mandic e4293511d0 fix interpolation overflow 2021-08-18 14:28:31 -04:00
Vladimir Mandic 312f51f07e rebuild full 2021-08-17 18:49:49 -04:00
Vladimir Mandic 649a3a17b5 update angle calculations 2021-08-17 18:46:50 -04:00
Vladimir Mandic 996019eea3 improve face box caching 2021-08-17 09:15:47 -04:00
Vladimir Mandic f9a4f741a9 strict type checks 2021-08-17 08:51:17 -04:00
Vladimir Mandic 71f25a8f12 add webgpu checks 2021-08-15 08:09:40 -04:00
Vladimir Mandic 791b880a54 update todo 2021-08-14 18:02:39 -04:00
Vladimir Mandic f29d85dacd experimental webgpu support 2021-08-14 18:00:26 -04:00
Vladimir Mandic f867d46b85 add experimental webgpu demo 2021-08-14 13:39:26 -04:00
Vladimir Mandic 14cd80b32a add backend initialization checks 2021-08-14 11:17:51 -04:00
Vladimir Mandic eadc65cc5a complete async work 2021-08-14 11:16:26 -04:00
Vladimir Mandic 451e88e1bf update node-webcam 2021-08-13 18:47:37 -04:00
Vladimir Mandic 13c94efb8b list detect cameras 2021-08-13 10:34:09 -04:00
Vladimir Mandic 334bb7061f switch to async data reads 2021-08-12 09:31:16 -04:00
Vladimir Mandic f73520bbd5 2.1.3 2021-08-12 09:29:48 -04:00
Vladimir Mandic 67b7db377d fix centernet & update blazeface 2021-08-11 18:59:02 -04:00
Vladimir Mandic 2eae119c96 update todo 2021-08-09 10:46:03 -04:00
Vladimir Mandic 0a459bc54d update model list 2021-08-06 08:50:50 -04:00
Vladimir Mandic 10b0c28fc3 minor update 2021-08-06 08:29:41 -04:00
Vladimir Mandic 7cedebbe89 minor update 2021-08-05 10:38:04 -04:00
Vladimir Mandic b70775caa9 update build process to remove warnings 2021-07-31 20:42:28 -04:00
Vladimir Mandic 39172c3740 update todo 2021-07-31 07:43:50 -04:00
Vladimir Mandic 775c176036 update typedoc links 2021-07-31 07:29:37 -04:00
Vladimir Mandic cb0b20681b replace movenet with lightning-v4 2021-07-30 07:18:54 -04:00
Vladimir Mandic 4ac41f54a1 update eslint rules 2021-07-30 06:49:41 -04:00
Vladimir Mandic b387bad3f0 enable webgl uniform support for faster warmup 2021-07-29 16:35:16 -04:00
Vladimir Mandic b2db89d9ee 2.1.2 2021-07-29 16:34:03 -04:00
Vladimir Mandic c7613f93e2 fix unregistered ops in tfjs 2021-07-29 16:06:03 -04:00
Vladimir Mandic 20e417ca1c update build 2021-07-29 12:50:06 -04:00
Vladimir Mandic 3bb4c84fb7 fix typo 2021-07-29 11:26:19 -04:00
Vladimir Mandic 5871977f12 updated wiki 2021-07-29 11:06:34 -04:00
Vladimir Mandic 448cd26f61 rebuild new release 2021-07-29 11:03:21 -04:00
Vladimir Mandic 7bf826496c 2.1.1 2021-07-29 11:02:02 -04:00
Vladimir Mandic e84e421a04 updated gesture types 2021-07-29 11:01:50 -04:00
Vladimir Mandic fbe8a8b0f6 update tfjs and typescript 2021-07-29 09:53:13 -04:00
Vladimir Mandic 9fcc0a3431 updated minimum version of nodejs to v14 2021-07-29 09:41:17 -04:00
Vladimir Mandic 9394aaa742 add note on manually disposing tensors 2021-06-18 13:39:20 -04:00
Vladimir Mandic f911b0e2fc update todo 2021-06-18 09:19:34 -04:00
Vladimir Mandic 733a6db43e modularize model loading 2021-06-18 09:16:21 -04:00
Vladimir Mandic 5b367e8591 update typedoc 2021-06-18 07:25:33 -04:00
Vladimir Mandic 1af8b37978 2.0.3 2021-06-18 07:20:33 -04:00
Vladimir Mandic 0f31125b9a update 2021-06-16 15:47:01 -04:00
Vladimir Mandic 6fc1c5c2bc update 2021-06-16 15:46:05 -04:00
Vladimir Mandic 47f1571ffd fix demo paths 2021-06-16 15:40:35 -04:00
Vladimir Mandic c10f31ef6c added multithreaded demo 2021-06-14 10:23:06 -04:00
Vladimir Mandic 2432f19ea5 2.0.2 2021-06-14 10:20:49 -04:00
Vladimir Mandic a6e9b8f35b reorganize demos 2021-06-14 08:16:10 -04:00
Vladimir Mandic bcce8e8872 fix centernet box width & height 2021-06-11 16:12:24 -04:00
Vladimir Mandic e90f268cae update todo 2021-06-09 07:27:19 -04:00
Vladimir Mandic b02e06c4e7 update 2021-06-09 07:19:03 -04:00
Vladimir Mandic 44a07aec2f update demo menu documentation 2021-06-09 07:17:54 -04:00
Vladimir Mandic 6fa6a03cf9 update 2021-06-08 07:37:15 -04:00
Vladimir Mandic 99e1ca3dc9 add body segmentation sample 2021-06-08 07:29:08 -04:00
Vladimir Mandic 19a9e9605e add release notes 2021-06-08 07:09:37 -04:00
Vladimir Mandic 66a101e2aa release 2.0 2021-06-08 07:06:16 -04:00
Vladimir Mandic 62e454db36 2.0.1 2021-06-08 07:02:11 -04:00
Vladimir Mandic d598f1bdb4 add video drag&drop capability 2021-06-07 08:38:16 -04:00
Vladimir Mandic badbe57426 update readme 2021-06-06 20:49:48 -04:00
Vladimir Mandic 3d45825d37 update packages 2021-06-06 20:47:59 -04:00
Vladimir Mandic 58d46094aa modularize build platform 2021-06-06 20:34:29 -04:00
Vladimir Mandic f654b89e8a custom build tfjs from sources 2021-06-06 19:00:34 -04:00
Vladimir Mandic ccad4a8c20 update wasm to tfjs 3.7.0 2021-06-06 12:58:06 -04:00
Vladimir Mandic e65ea98bc3 update defaults 2021-06-05 20:06:36 -04:00
Vladimir Mandic 525634ad26 modularize build platform 2021-06-05 17:51:46 -04:00
Vladimir Mandic d3bea52d51 enable body segmentation and background replacement in demo 2021-06-05 16:13:41 -04:00
Vladimir Mandic 5b3f5289b2 minor git corruption 2021-06-05 15:23:17 -04:00
Vladimir Mandic aa18ecf7f5 update 2021-06-05 15:10:28 -04:00
Vladimir Mandic e64ecbec69 update 2021-06-05 13:02:01 -04:00
Vladimir Mandic 4167d186ee unified build 2021-06-05 12:59:11 -04:00
Vladimir Mandic 302cc31f59 enable body segmentation and background replacement 2021-06-05 11:54:49 -04:00
Vladimir Mandic 5c6ba688c9 work on body segmentation 2021-06-04 20:22:05 -04:00
Vladimir Mandic 5800461d79 added experimental body segmentation module 2021-06-04 13:52:40 -04:00
Vladimir Mandic 2d3e81181c add meet and selfie models 2021-06-04 13:51:01 -04:00
Vladimir Mandic 6e1f9a34a6 update for tfjs 3.7.0 2021-06-04 09:20:59 -04:00
Vladimir Mandic 1e38b9645e update 2021-06-04 07:03:34 -04:00
Vladimir Mandic 3aef4ec048 update gaze strength calculations 2021-06-03 09:53:11 -04:00
Vladimir Mandic 3cdbcbb860 update build with automatic linter 2021-06-03 09:41:53 -04:00
Vladimir Mandic 73edfb9f44 add live hints to demo 2021-06-02 17:29:50 -04:00
Vladimir Mandic b8db2f0a62 switch worker from module to iife importscripts 2021-06-02 16:46:07 -04:00
Vladimir Mandic 2d354d03e1 release candidate 2021-06-02 13:39:02 -04:00
Vladimir Mandic b472276ea0 update wiki 2021-06-02 13:35:59 -04:00
Vladimir Mandic 7498bd061f update tests and demos 2021-06-02 13:35:33 -04:00
Vladimir Mandic baa5beff80 added samples to git 2021-06-02 12:44:12 -04:00
Vladimir Mandic 0d0e7244ef implemented drag & drop for image processing 2021-06-02 12:43:43 -04:00
Vladimir Mandic 851ea87b18 release candidate 2021-06-01 08:59:09 -04:00
Vladimir Mandic e8cb3a361e breaking changes to results.face output properties 2021-06-01 07:37:17 -04:00
Vladimir Mandic 3708732d1a breaking changes to results.object output properties 2021-06-01 07:07:01 -04:00
Vladimir Mandic 33ba2bd266 breaking changes to results.hand output properties 2021-06-01 07:01:59 -04:00
Vladimir Mandic d670fc4ad9 breaking changes to results.body output properties 2021-06-01 06:55:40 -04:00
Vladimir Mandic 0504f25e81 update wiki 2021-05-31 10:40:24 -04:00
Vladimir Mandic 4e9a5ff552 implemented human.next global interpolation method 2021-05-31 10:40:07 -04:00
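
A sketch of the human.next interpolation loop introduced above: detection runs at its own rate while next() returns a temporally smoothed copy of the last result for drawing.

```ts
import { Human } from '@vladmandic/human';

const human = new Human();

async function loop(video: HTMLVideoElement, canvas: HTMLCanvasElement) {
  await human.detect(video);               // run actual detection
  const smooth = human.next(human.result); // interpolated copy of the last result
  await human.draw.all(canvas, smooth);    // draw smoothed values
  requestAnimationFrame(() => loop(video, canvas));
}
```
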
Vladimir Mandic 10b2c78599 update wiki 2021-05-30 23:22:21 -04:00
Vladimir Mandic a965e2f04d finished draw buffering and smoothing and enabled by default 2021-05-30 23:21:48 -04:00
Vladimir Mandic d7de6424d1 update wiki 2021-05-30 18:46:23 -04:00
Vladimir Mandic 7784257c76 update typedoc definitions 2021-05-30 18:45:39 -04:00
Vladimir Mandic 30dcbdd149 update pwa scope 2021-05-30 18:00:51 -04:00
Vladimir Mandic 9aaa835395 implemented service worker 2021-05-30 17:56:40 -04:00
Vladimir Mandic d471a86e0b update todo 2021-05-30 12:05:27 -04:00
Vladimir Mandic f5205bafce release candidate 2021-05-30 12:03:34 -04:00
Vladimir Mandic 9fd87086cc added usage restrictions 2021-05-30 09:51:23 -04:00
Vladimir Mandic 020bb8ce7a update security policy 2021-05-30 09:41:24 -04:00
Vladimir Mandic a3bf652abc quantize handdetect model 2021-05-29 18:29:57 -04:00
Vladimir Mandic 02930dfdb9 update todo list 2021-05-29 09:24:09 -04:00
Vladimir Mandic 185463e30d added experimental movenet-lightning and removed blazepose from default dist 2021-05-29 09:20:01 -04:00
Vladimir Mandic cbe8e5a7d1 update 2021-05-28 15:54:29 -04:00
Vladimir Mandic 9bcfe23395 added experimental face.rotation.gaze 2021-05-28 15:53:51 -04:00
Vladimir Mandic 7ea2bcbb5b fix and optimize for mobile platform 2021-05-28 10:43:48 -04:00
Vladimir Mandic b0af2fb67e lock typescript to 4.2 due to typedoc incompatibility with 4.3 2021-05-27 16:07:02 -04:00
Vladimir Mandic 6a1b0ccce3 1.9.4 2021-05-27 16:05:20 -04:00
Vladimir Mandic ec2f53f4e2 fix demo facecompare 2021-05-26 08:52:31 -04:00
Vladimir Mandic e37c07417e webhint and lighthouse optimizations 2021-05-26 08:47:31 -04:00
Vladimir Mandic b471588b8d update 2021-05-26 07:59:52 -04:00
Vladimir Mandic 0c6bdad1e9 add camera startup diag messages 2021-05-26 07:57:51 -04:00
Vladimir Mandic 08386933d0 update all box calculations 2021-05-25 08:58:20 -04:00
Vladimir Mandic fd2bd21301 implemented unified result.persons that combines face, body and hands for each person 2021-05-24 11:10:13 -04:00
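
A sketch of the unified persons result introduced above; field names on each person entry beyond the face/body/hands grouping named in the commit message are assumptions.

```ts
import { Human } from '@vladmandic/human';

const human = new Human();

async function analyze(input: HTMLImageElement) {
  const result = await human.detect(input);
  for (const person of result.persons) {
    // each entry groups the face, body and hands attributed to one person
    console.log(person.id, person.face?.box, person.body?.keypoints.length);
  }
}
```
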
Vladimir Mandic 1d6f8ddff4 update iris distance docs 2021-05-24 07:18:03 -04:00
Vladimir Mandic 68afebcd24 update iris distance calculations 2021-05-24 07:16:38 -04:00
Vladimir Mandic d3e16112af added experimental results interpolation for smooth draw operations 2021-05-23 13:55:33 -04:00
Vladimir Mandic 80ad09a161 1.9.3 2021-05-23 13:54:44 -04:00
Vladimir Mandic 13b69fb4cd use green weighted for input diff calculation 2021-05-23 13:54:22 -04:00
Vladimir Mandic bce1d62135 implement experimental drawOptions.bufferedOutput and bufferedFactor 2021-05-23 13:52:49 -04:00
Vladimir Mandic f0739716e2 use explicit tensor interface 2021-05-22 21:54:18 -04:00
Vladimir Mandic 9e0318ea52 add tfjs types and remove all instances of any 2021-05-22 21:47:59 -04:00
Vladimir Mandic b192445071 enhance strong typing 2021-05-22 14:53:51 -04:00
Vladimir Mandic a21f9b2a06 rebuild all for release 2021-05-22 13:17:07 -04:00
Vladimir Mandic 98e8e8646a 1.9.2 2021-05-22 13:15:11 -04:00
Vladimir Mandic 3b46a05483 add id and boxraw on missing objects 2021-05-22 12:41:29 -04:00
Vladimir Mandic e49b5f1018 restructure results strong typing 2021-05-22 12:33:19 -04:00
Vladimir Mandic ba89d21f4d update dependencies 2021-05-21 06:54:02 -04:00
Vladimir Mandic db9f650266 1.9.1 2021-05-21 06:51:31 -04:00
Vladimir Mandic 1c52d42e24 caching improvements 2021-05-20 19:14:07 -04:00
Vladimir Mandic a5b5352ea6 add experimental mb3-centernet object detection 2021-05-19 08:27:28 -04:00
Vladimir Mandic 3463bb302f individual model skipFrames values still max high threshold for caching 2021-05-18 11:38:22 -04:00
Vladimir Mandic 6add9ba386 config.videoOptimized has been removed and config.cacheSensitivity has been added instead 2021-05-18 11:36:57 -04:00
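
A config sketch for the replacement option named above: cacheSensitivity measures frame-to-frame input change to decide when cached results can be reused; the value shown is illustrative.

```ts
import { Human, Config } from '@vladmandic/human';

const humanConfig: Partial<Config> = {
  cacheSensitivity: 0.7, // 0 disables caching; controls how much input change invalidates the cache
};
const human = new Human(humanConfig);
```
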
Vladimir Mandic 7cc927cb1c caching determination is now dynamic based on detection of input change and not based on input types 2021-05-18 11:36:24 -04:00
Vladimir Mandic 43ec77d71b human 1.9.0 beta with breaking changes regarding caching 2021-05-18 11:26:16 -04:00
Vladimir Mandic 0b9baffbfd update dependencies 2021-05-18 08:22:33 -04:00
Vladimir Mandic 12e7dc520f 1.8.5 2021-05-18 08:14:56 -04:00
Vladimir Mandic 9c015670e5 update demos 2021-05-17 09:35:41 -04:00
Vladimir Mandic c295588cf7 update 2021-05-17 08:56:57 -04:00
Vladimir Mandic 30c6b80f01 add node-video sample 2021-05-16 23:55:08 -04:00
Vladimir Mandic 7f2a0cddfe update 2021-05-11 15:08:27 -04:00
Vladimir Mandic 68b0cc38b0 add node-webcam demo 2021-05-11 10:11:55 -04:00
Vladimir Mandic 1652300288 fix node build and update model signatures 2021-05-11 07:53:06 -04:00
Vladimir Mandic 73011c6a06 1.8.4 2021-05-11 07:09:44 -04:00
Vladimir Mandic 36715ba3cd update & fix posenet 2021-05-05 10:07:44 -04:00
Vladimir Mandic 4629b94405 1.8.3 2021-05-05 09:55:39 -04:00
Vladimir Mandic a95ca54bbf switch posenet weights 2021-05-04 20:46:33 -04:00
Vladimir Mandic cb60baf47a update tfjs version 2021-05-04 11:19:46 -04:00
Vladimir Mandic 8f4621b637 1.8.2 2021-05-04 11:17:50 -04:00
Vladimir Mandic e1754cf775 release 1.8 with major changes and tfjs 3.6.0 2021-04-30 11:55:04 -04:00
Vladimir Mandic c5852725bb 1.8.1 2021-04-30 11:53:56 -04:00
Vladimir Mandic f77c142965 update 2021-04-28 08:58:21 -04:00
Vladimir Mandic b8592b53c6 blazeface optimizations 2021-04-28 08:55:26 -04:00
Vladimir Mandic 3fe8807440 add hand labels in draw 2021-04-26 07:37:29 -04:00
Vladimir Mandic 6e2d6dc40f cleanup demo workflow 2021-04-26 07:19:30 -04:00
Vladimir Mandic 3dbe82e644 update browser demo defaults 2021-04-25 16:58:18 -04:00
Vladimir Mandic 66b7272987 convert blazeface to module 2021-04-25 16:56:10 -04:00
Vladimir Mandic 92930efb65 version 1.8 release candidate 2021-04-25 14:32:55 -04:00
Vladimir Mandic 773c09cb00 build NodeJS deliverables in non-minified form 2021-04-25 14:32:19 -04:00
Vladimir Mandic b83aaff811 stop building sourcemaps for NodeJS deliverables 2021-04-25 14:32:07 -04:00
Vladimir Mandic 6ba66be1e0 remove deallocate, profile, scoped 2021-04-25 14:31:51 -04:00
Vladimir Mandic 86bec71d28 replaced maxFaces, maxDetections, maxHands, maxResults with maxDetected 2021-04-25 14:31:39 -04:00
Vladimir Mandic b6fb7ce2f5 replaced nmsRadius with built-in default 2021-04-25 14:31:26 -04:00
Vladimir Mandic d60c992da1 unified minConfidence and scoreThreshold as minConfidence 2021-04-25 14:31:13 -04:00
Vladimir Mandic ed0fbd6e3c add exception handlers to all demos 2021-04-25 14:30:40 -04:00
Vladimir Mandic f26dae059e remove blazeface-front and add unhandledrejection handler 2021-04-25 14:15:38 -04:00
Vladimir Mandic fab62c6332 major update for 1.8 release candidate 2021-04-25 13:16:04 -04:00
Vladimir Mandic 7b4055e23d enable webworker detection 2021-04-25 07:51:01 -04:00
Vladimir Mandic 64b45dba61 1.7.1 2021-04-25 07:50:12 -04:00
Vladimir Mandic 8ec4ae5426 remove obsolete binary models 2021-04-24 18:46:22 -04:00
Vladimir Mandic a05b9e7774 enable cross origin isolation 2021-04-24 18:43:59 -04:00
Vladimir Mandic 01c9bb24b5 rewrite posenet decoder 2021-04-24 16:04:49 -04:00
Vladimir Mandic fa1d14cda0 update demo node decoder 2021-04-24 12:12:10 -04:00
Vladimir Mandic ead7dc3153 update posenet model 2021-04-24 11:49:26 -04:00
Vladimir Mandic 94c6cba195 remove efficientpose 2021-04-24 09:31:46 -04:00
Vladimir Mandic b8309bcddb major version rebuild 2021-04-22 19:47:04 -04:00
Vladimir Mandic 9025e76187 1.6.1 2021-04-22 19:46:29 -04:00
Vladimir Mandic 9b39425410 update for tfjs 3.5.0 2021-04-22 19:46:18 -04:00
Vladimir Mandic 5e8f33e821 update todo 2021-04-22 18:39:29 -04:00
Vladimir Mandic 922bafbc88 add npmrc 2021-04-20 08:02:21 -04:00
Vladimir Mandic d234e68fc9 update 2021-04-19 16:19:03 -04:00
Vladimir Mandic 197e7dc2ef added filter.flip feature 2021-04-19 16:02:47 -04:00
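
A config sketch for the filter option added above; flip mirrors the input horizontally before processing.

```ts
import { Human, Config } from '@vladmandic/human';

const humanConfig: Partial<Config> = {
  filter: {
    enabled: true,
    flip: true, // mirror input horizontally before processing
  },
};
const human = new Human(humanConfig);
```
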
Vladimir Mandic 5fc482036d update todo 2021-04-19 11:36:44 -04:00
Vladimir Mandic ca53ff0f2d update 2021-04-19 11:24:17 -04:00
Vladimir Mandic 53e4a81087 update 2021-04-19 11:20:24 -04:00
Vladimir Mandic d46ef5463c update node demo 2021-04-19 11:17:55 -04:00
Vladimir Mandic 5951fbbb7e added demo load image from http 2021-04-19 11:15:29 -04:00
Vladimir Mandic 9dd168733f update gestures 2021-04-19 09:30:04 -04:00
Vladimir Mandic 20f61a6b2b mobile demo optimization and iris gestures 2021-04-18 19:33:40 -04:00
Vladimir Mandic 5de743ceb2 full rebuild 2021-04-16 18:03:15 -04:00
Vladimir Mandic fcce4694a7 new look 2021-04-16 18:00:24 -04:00
Vladimir Mandic bf29fca2bc added benchmarks 2021-04-16 09:31:58 -04:00
Vladimir Mandic 3fc3bf4082 added node-multiprocess demo 2021-04-16 08:34:16 -04:00
Vladimir Mandic 774f649f5a update 2021-04-15 17:13:27 -04:00
Vladimir Mandic beb2987ae0 fix image orientation 2021-04-15 15:26:31 -04:00
Vladimir Mandic 2163fb3cc0 flat app style 2021-04-15 15:01:27 -04:00
Vladimir Mandic 9db6a151ee add full nodejs test coverage 2021-04-15 09:43:55 -04:00
Vladimir Mandic bea26e986d 1.5.2 2021-04-14 12:53:12 -04:00
Vladimir Mandic b9395af7ae update tfjs 3.4.0 2021-04-14 12:53:00 -04:00
Vladimir Mandic 8aec48d98a experimental node-wasm support 2021-04-13 21:45:45 -04:00
Vladimir Mandic 2eecd2fed4 1.5.1 2021-04-13 11:32:26 -04:00
Vladimir Mandic d25d970ef4 fix for safari imagebitmap 2021-04-13 11:32:22 -04:00
Vladimir Mandic 6ff61e8546 refactored human.config and human.draw 2021-04-13 11:05:52 -04:00
Vladimir Mandic 494e290794 1.4.3 2021-04-12 17:49:14 -04:00
Vladimir Mandic fa5f0be769 implement webrtc 2021-04-12 17:48:59 -04:00
Vladimir Mandic 96baa97c29 1.4.2 2021-04-12 08:29:58 -04:00
Vladimir Mandic e3b5dcb75e added support for multiple instances of human 2021-04-12 08:29:52 -04:00
Vladimir Mandic da8307b306 update 2021-04-10 23:37:44 -04:00
Vladimir Mandic 6baf27997b update cdn links 2021-04-10 23:16:06 -04:00
Vladimir Mandic 1fec656bc1 update 2021-04-09 21:58:45 -04:00
Vladimir Mandic 362fda37c9 fix typedoc 2021-04-09 21:53:48 -04:00
Vladimir Mandic 25088c74fa update readme 2021-04-09 21:31:53 -04:00
Vladimir Mandic aaec742c0a exception handling 2021-04-09 10:02:40 -04:00
Vladimir Mandic d9bc088582 1.4.1 2021-04-09 08:08:05 -04:00
Vladimir Mandic 57fe43ab5d add modelBasePath option 2021-04-09 08:07:58 -04:00
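
A config sketch for the modelBasePath option added above, which sets the base location all model files are resolved against; the CDN URL is illustrative.

```ts
import { Human, Config } from '@vladmandic/human';

const humanConfig: Partial<Config> = {
  modelBasePath: 'https://cdn.jsdelivr.net/npm/@vladmandic/human/models/', // illustrative URL
};
const human = new Human(humanConfig);
```
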
Vladimir Mandic 50d3a7697f update badges 2021-04-08 19:16:17 -04:00
Vladimir Mandic 730afee004 update cdn links 2021-04-08 18:37:58 -04:00
Vladimir Mandic b5c77fd149 update 2021-04-08 12:10:15 -04:00
Vladimir Mandic 56ceef11a4 1.3.5 2021-04-06 11:38:07 -04:00
Vladimir Mandic 6d814fd6f4 update tslib 2021-04-06 11:38:01 -04:00
Vladimir Mandic ec0ed9a9c6 update wiki 2021-04-06 07:45:44 -04:00
Vladimir Mandic 1d6c72318b add dynamic viewport and fix web worker 2021-04-05 11:48:24 -04:00
Vladimir Mandic a306378b3b add cdn links 2021-04-05 09:35:56 -04:00
Vladimir Mandic 134ae3bbd9 1.3.4 2021-04-04 09:26:35 -04:00
Vladimir Mandic 4b6dd41f69 update 2021-04-04 09:26:32 -04:00
Vladimir Mandic 54173aa12a implement webhint 2021-04-04 09:25:18 -04:00
Vladimir Mandic 68a8c032bc update 2021-04-03 11:36:53 -04:00
Vladimir Mandic 3120911979 update keywords 2021-04-03 11:10:07 -04:00
Vladimir Mandic fd9174cceb 1.3.3 2021-04-03 10:49:30 -04:00
Vladimir Mandic 3bb6179bf6 fix linting and tests 2021-04-03 10:49:14 -04:00
Vladimir Mandic 2d86a4c1e8 1.3.2 2021-04-02 08:39:50 -04:00
Vladimir Mandic f55119cd70 input type validation 2021-04-02 08:37:35 -04:00
Vladimir Mandic 7f9c8a794f update 2021-04-02 08:15:39 -04:00
Vladimir Mandic 20efb89885 update 2021-04-02 08:09:06 -04:00
Vladimir Mandic 0c678b470c update 2021-04-02 08:06:28 -04:00
Vladimir Mandic 052a93d859 update demos 2021-04-02 08:05:19 -04:00
Vladimir Mandic f0c7cd9b98 normalize all scores 2021-04-01 09:24:56 -04:00
Vladimir Mandic db40c85658 1.3.1 2021-03-30 09:04:51 -04:00
Vladimir Mandic 18ec01ec31 update 2021-03-30 09:04:49 -04:00
Vladimir Mandic c2dd6fe567 added face3d demo 2021-03-30 09:03:18 -04:00
Vladimir Mandic 3006266060 initial work on face3d three.js demo 2021-03-29 15:59:16 -04:00
Vladimir Mandic 07a94fba84 enable buffering 2021-03-29 15:05:14 -04:00
Vladimir Mandic 935c914d5c new icons 2021-03-29 15:01:16 -04:00
Vladimir Mandic 77476652d2 new serve module and demo structure 2021-03-29 14:40:34 -04:00
Vladimir Mandic f9306abac5 move gl flags to correct location 2021-03-28 13:22:22 -04:00
Vladimir Mandic 718ccda645 minor rotation calculation fix 2021-03-28 08:49:56 -04:00
Vladimir Mandic 91e51cf884 remove debug output 2021-03-28 08:44:53 -04:00
Vladimir Mandic 43b4850819 new face rotation calculations 2021-03-28 08:40:39 -04:00
ButzYung ac6b220888 cleanup 2021-03-28 07:32:31 -04:00
ButzYung adbaa24220 rotationMatrixToEulerAngle, and fixes 2021-03-28 07:32:31 -04:00
ButzYung 38581a3a80 face rotation matrix 2021-03-28 07:32:31 -04:00
Vladimir Mandic db7443d96b update 2021-03-27 15:45:37 -04:00
Vladimir Mandic 23e515d26f experimental: add efficientpose 2021-03-27 15:43:48 -04:00
Vladimir Mandic ae9d6caabc implement nanodet 2021-03-27 10:25:31 -04:00
Vladimir Mandic 1dd860d112 start working on efficientpose 2021-03-26 18:50:19 -04:00
Vladimir Mandic 1dbeb93726 update contributing guide 2021-03-26 15:37:43 -04:00
Vladimir Mandic 46328a1e0c update contributing guidelines 2021-03-26 12:37:08 -04:00
Vladimir Mandic 863f5f0caf update 2021-03-25 08:50:41 -04:00
Vladimir Mandic bc9dc6e3fb 1.2.5 2021-03-25 08:44:04 -04:00
Vladimir Mandic 3cb0dca242 fix broken exports 2021-03-25 08:43:51 -04:00
Vladimir Mandic d2560e6954 update faces database 2021-03-24 11:43:28 -04:00
Vladimir Mandic 1419b89f9b updated face description 2021-03-24 11:08:49 -04:00
Vladimir Mandic 10cbf42439 added face matching example to docs 2021-03-23 15:35:54 -04:00
Vladimir Mandic f71b5c9d97 update 2021-03-23 15:25:43 -04:00
Vladimir Mandic 4c3e9818c8 improve face matching 2021-03-23 15:24:58 -04:00
Vladimir Mandic 870ac26f5d 1.2.4 2021-03-23 14:46:50 -04:00
Vladimir Mandic 1a1560cca1 update nanodet and face rotation check 2021-03-23 14:46:44 -04:00
Vladimir Mandic 959f448bc0 1.2.3 2021-03-21 17:47:05 -04:00
Vladimir Mandic 7fe2e66957 update demos 2021-03-21 17:47:00 -04:00
Vladimir Mandic bd4b21cea8 1.2.2 2021-03-21 16:16:17 -04:00
Vladimir Mandic 97921cc5fc precise face rotation 2021-03-21 16:16:13 -04:00
Vladimir Mandic cc542ccbc0 update 2021-03-21 14:24:10 -04:00
Vladimir Mandic 9b7a3cbf18 1.2.1 2021-03-21 14:21:47 -04:00
Vladimir Mandic 7e976a4bfb update wiki 2021-03-21 14:19:04 -04:00
Vladimir Mandic 35cf845fbf new module: face description 2021-03-21 14:18:51 -04:00
Vladimir Mandic d8cef2925e 1.1.11 2021-03-21 07:49:58 -04:00
Vladimir Mandic 61633928cf refactor face classes 2021-03-21 07:49:55 -04:00
Vladimir Mandic 245ecaf710 1.1.10 2021-03-18 16:55:04 -04:00
Vladimir Mandic 3dc7dcefe1 cleanup 2021-03-18 16:55:00 -04:00
Vladimir Mandic 65e930b2f1 update github templates 2021-03-18 11:58:46 -04:00
Vladimir Mandic 9be94b003d update 2021-03-18 07:40:08 -04:00
Vladimir Mandic 2cce7ac7f0 update keywords 2021-03-18 07:14:24 -04:00
Vladimir Mandic 7f644a3bde update 2021-03-17 20:26:43 -04:00
Vladimir Mandic fdddebda2a redefine tensor 2021-03-17 20:23:12 -04:00
Vladimir Mandic 714d2b9187 enforce types 2021-03-17 20:16:40 -04:00
Vladimir Mandic 9af350ba2a regen type declarations 2021-03-17 18:57:00 -04:00
Vladimir Mandic 412aaa3d45 switch to single jumbo dts 2021-03-17 18:48:02 -04:00
Vladimir Mandic b1a096f9e5 update node demo 2021-03-17 18:36:12 -04:00
Vladimir Mandic 5f28dd09ad update 2021-03-17 18:33:12 -04:00
Vladimir Mandic 55b12a629c type definitions 2021-03-17 18:23:19 -04:00
Vladimir Mandic f8babafd14 update package docs 2021-03-17 14:45:15 -04:00
Vladimir Mandic 7bef12037c 1.1.9 2021-03-17 14:35:23 -04:00
Vladimir Mandic ad222d8b15 fix box clamping and raw output 2021-03-17 14:35:11 -04:00
Vladimir Mandic bf1ee06543 update readme 2021-03-17 12:03:36 -04:00
Vladimir Mandic eff034e397 hierarchical readme notes 2021-03-17 12:01:54 -04:00
Vladimir Mandic 899ade3533 update readme 2021-03-17 11:48:34 -04:00
Vladimir Mandic 4de41de2e0 update 2021-03-17 11:41:57 -04:00
Vladimir Mandic b9b9846808 1.1.8 2021-03-17 11:40:35 -04:00
Vladimir Mandic d676bd5310 update 2021-03-17 11:40:31 -04:00
Vladimir Mandic 7b96b04af6 add experimental nanodet object detection 2021-03-17 11:32:37 -04:00
Vladimir Mandic 467271ab1a full models signature 2021-03-17 09:01:59 -04:00
Vladimir Mandic 87c8ce6bfe 1.1.7 2021-03-16 07:16:28 -04:00
Vladimir Mandic 2c43cef0c7 fix for seedrandom 2021-03-16 07:16:25 -04:00
Vladimir Mandic 50b2e94020 update todo 2021-03-15 12:31:23 -04:00
Vladimir Mandic 4990975b5b custom typedoc 2021-03-15 12:29:51 -04:00
Vladimir Mandic 3e33a43076 1.1.6 2021-03-15 12:14:52 -04:00
Vladimir Mandic 7cb191a4cd implement human.match and embedding demo 2021-03-15 12:14:48 -04:00
Vladimir Mandic 3ea4812a3d 1.1.5 2021-03-15 08:56:45 -04:00
Vladimir Mandic e970964994 full rebuild 2021-03-15 08:56:36 -04:00
Vladimir Mandic 261d4aca9b update 2021-03-15 08:52:28 -04:00
Vladimir Mandic 42a60628bd 1.1.4 2021-03-14 13:49:05 -04:00
Vladimir Mandic acfd4de403 fix broken build 2021-03-14 13:49:01 -04:00
Vladimir Mandic eac172bca2 1.1.3 2021-03-14 13:39:52 -04:00
Vladimir Mandic 8cbd9b6210 update 2021-03-14 13:39:47 -04:00
Vladimir Mandic 7b6dbae62e added api specs 2021-03-13 22:38:35 -05:00
Vladimir Mandic f65bae05e4 add typedocs and types 2021-03-13 22:31:09 -05:00
Vladimir Mandic a08c0c0061 strong typings 2021-03-13 13:47:45 -05:00
Vladimir Mandic 5e0307dfe8 update 2021-03-13 12:30:03 -05:00
Vladimir Mandic 99dc340cb8 update docs 2021-03-13 12:13:45 -05:00
Vladimir Mandic 53aea1a9ad update 2021-03-13 11:37:16 -05:00
Vladimir Mandic d373f8e121 update embedding and strong typings 2021-03-13 11:26:53 -05:00
Vladimir Mandic 274be0acad 1.1.2 2021-03-12 18:24:38 -05:00
Vladimir Mandic 43836ad0a6 distance based on minkowski space and limited euclidean space 2021-03-12 18:24:34 -05:00
Vladimir Mandic a458f29dc7 guard against invalid input images 2021-03-12 16:43:36 -05:00
Vladimir Mandic d36a43ae83 1.1.1 2021-03-12 12:54:34 -05:00
Vladimir Mandic 3f83b1706a update wiki 2021-03-12 12:54:30 -05:00
Vladimir Mandic 1f7b699074 switched face embedding to mobileface 2021-03-12 12:54:08 -05:00
Vladimir Mandic 5874a3ee59 updated docs 2021-03-11 22:11:49 -05:00
Vladimir Mandic ce14f232f4 1.0.4 2021-03-11 22:04:54 -05:00
Vladimir Mandic 9aa53df307 add face return tensor 2021-03-11 22:04:44 -05:00
Vladimir Mandic 435030056a add test for face descriptors 2021-03-11 18:26:04 -05:00
Vladimir Mandic 2ff548ee4f wip on embedding 2021-03-11 13:31:36 -05:00
Vladimir Mandic 70812cb6cf simplify face box coordinate calculations 2021-03-11 11:44:22 -05:00
Vladimir Mandic 4ba33a9eb2 annotated models and removed gender-ssrnet 2021-03-11 10:30:20 -05:00
Vladimir Mandic 1b53cd4b6b autodetect inputSizes 2021-03-11 10:26:14 -05:00
Vladimir Mandic d5b6c676c9 update todo 2021-03-10 10:54:39 -05:00
Vladimir Mandic c7c1ee1ffb update 2021-03-10 10:28:20 -05:00
Vladimir Mandic 9c445f70c9 update todo 2021-03-10 10:12:39 -05:00
Vladimir Mandic 53d6719278 1.0.3 2021-03-10 10:02:55 -05:00
Vladimir Mandic 6b61718ac7 strong typing for public classes and hide private classes 2021-03-10 10:02:52 -05:00
Vladimir Mandic d9ac78107c enhanced age, gender, emotion detection 2021-03-10 09:44:45 -05:00
Vladimir Mandic 18bcc549ca full rebuild 2021-03-09 18:36:26 -05:00
Vladimir Mandic b972ba3480 1.0.2 2021-03-09 18:34:43 -05:00
Vladimir Mandic fda0c1630b update tfjs and esbuild 2021-03-09 18:34:04 -05:00
Vladimir Mandic 1a8e3575a4 remove blazeface-front, blazepose-upper, faceboxes 2021-03-09 18:33:50 -05:00
Vladimir Mandic 414e512114 remove blazeface-front and faceboxes 2021-03-09 18:32:35 -05:00
Vladimir Mandic 643158ac22 update 2021-03-09 13:37:49 -05:00
Vladimir Mandic 25bb2f6df1 update 2021-03-09 13:19:52 -05:00
Vladimir Mandic 91252bdb15 update 2021-03-09 13:18:08 -05:00
Vladimir Mandic 007cb9a70c 1.0.1 2021-03-09 13:15:59 -05:00
Vladimir Mandic ee94119d1e fix for face detector when mesh is disabled 2021-03-09 13:15:40 -05:00
Vladimir Mandic 37a9a2049a update badges 2021-03-08 15:06:56 -05:00
Vladimir Mandic 4a7f00ce79 optimize for npm 2021-03-08 14:12:12 -05:00
Vladimir Mandic caad5c4ed0 0.40.9 2021-03-08 10:06:40 -05:00
Vladimir Mandic 0fcd2c6031 fix performance issue when running with low confidence 2021-03-08 10:06:34 -05:00
Vladimir Mandic a527385465 0.40.8 2021-03-08 07:32:30 -05:00
Vladimir Mandic 2c5b297889 update docs and demo 2021-03-08 07:32:24 -05:00
Vladimir Mandic 27b0019463 0.40.7 2021-03-06 17:23:36 -05:00
Vladimir Mandic 1c75ed80e6 update 2021-03-06 17:23:24 -05:00
Vladimir Mandic 282f1e100f implemented 3d face angle calculations 2021-03-06 17:22:47 -05:00
Vladimir Mandic 2dff6a36ff 0.40.6 2021-03-06 10:38:22 -05:00
Vladimir Mandic b9dddcdd0a add curve draw output 2021-03-06 10:38:04 -05:00
Vladimir Mandic 988f7a7cbd update 2021-03-05 14:42:32 -05:00
Vladimir Mandic c53e7ddfc2 update readme 2021-03-05 14:40:44 -05:00
Vladimir Mandic e2858c419d update 2021-03-05 14:31:06 -05:00
Vladimir Mandic e3827ce45e 0.40.5 2021-03-05 14:30:46 -05:00
Vladimir Mandic d030efda21 fix human.draw 2021-03-05 14:30:09 -05:00
Vladimir Mandic 2cb88ffb86 0.40.4 2021-03-05 11:44:00 -05:00
Vladimir Mandic 250207e67e update human.draw helper methods 2021-03-05 11:43:50 -05:00
Vladimir Mandic f72cef0294 fix demo 2021-03-05 07:45:30 -05:00
Vladimir Mandic 133d762249 0.40.3 2021-03-05 07:45:20 -05:00
Vladimir Mandic 57b01aebdd 0.40.2 2021-03-05 07:39:47 -05:00
Vladimir Mandic 98db269b2f added blazepose-upper 2021-03-05 07:39:37 -05:00
Vladimir Mandic 27bd339c41 0.40.1 2021-03-04 10:33:18 -05:00
Vladimir Mandic efbb152bc5 implement blazepose and update demos 2021-03-04 10:33:08 -05:00
Vladimir Mandic 4c20662633 update 2021-03-03 12:10:44 -05:00
Vladimir Mandic 9f5742fd3a add todo list 2021-03-03 12:04:59 -05:00
Vladimir Mandic 9ec3eff801 update 2021-03-03 10:00:14 -05:00
Vladimir Mandic a462f8fb74 0.30.6 2021-03-03 09:59:31 -05:00
Vladimir Mandic 4ad2552322 fine tuning age and face models 2021-03-03 09:59:04 -05:00
Vladimir Mandic c1aa1eb2c1 0.30.5 2021-03-02 11:27:47 -05:00
Vladimir Mandic 4d05c4f604 add debug logging flag 2021-03-02 11:27:42 -05:00
Vladimir Mandic 634ea027c9 0.30.4 2021-03-01 17:20:06 -05:00
Vladimir Mandic 23205e8173 added skipInitial flag 2021-03-01 17:20:02 -05:00
Vladimir Mandic 5c712d792b update 2021-02-28 07:39:48 -05:00
Vladimir Mandic a717ad8000 0.30.3 2021-02-28 07:39:29 -05:00
Vladimir Mandic 672144da49 update 2021-02-28 07:38:13 -05:00
meeki007 e350b3dc1a typo 2021-02-28 07:37:37 -05:00
Vladimir Mandic 5854870215 update 2021-02-26 10:13:31 -05:00
Vladimir Mandic 053d00b548 0.30.2 2021-02-26 10:13:10 -05:00
Vladimir Mandic f4f0e5e30b update 2021-02-26 10:05:56 -05:00
Vladimir Mandic cd5e43ae17 rebuild 2021-02-26 09:04:15 -05:00
meeki007 949b37e1a3 fix typo 2021-02-26 08:59:12 -05:00
Vladimir Mandic 9d0a30c756 update 2021-02-25 07:51:26 -05:00
Vladimir Mandic 667feecef9 0.30.1 2021-02-25 07:51:00 -05:00
Vladimir Mandic 10bbff185e update to tfjs 3.2.0 2021-02-25 07:50:13 -05:00
Vladimir Mandic ceed7e8de3 0.20.11 2021-02-24 09:57:39 -05:00
Vladimir Mandic 1d2067941a update default gender model 2021-02-24 09:57:33 -05:00
Vladimir Mandic 08a267a705 update 2021-02-22 09:14:10 -05:00
Vladimir Mandic 079a3fe0b3 0.20.10 2021-02-22 09:13:16 -05:00
Vladimir Mandic 971f8508bb updated model defaults 2021-02-22 09:13:11 -05:00
Vladimir Mandic f1a431f3ef update 2021-02-21 21:49:59 -05:00
Vladimir Mandic 5867e8c641 0.20.9 2021-02-21 21:49:36 -05:00
Vladimir Mandic c9118536ac update 2021-02-21 14:50:03 -05:00
Vladimir Mandic 1b078f8579 0.20.8 2021-02-21 14:46:56 -05:00
Vladimir Mandic a688476317 update 2021-02-21 14:07:04 -05:00
Vladimir Mandic 39d423e22b 0.20.7 2021-02-21 14:06:35 -05:00
Vladimir Mandic 6128a027c2 build fix 2021-02-21 14:06:13 -05:00
Vladimir Mandic 30d492b8f4 0.20.6 2021-02-21 13:34:30 -05:00
Vladimir Mandic 7955d1f916 embedding fix 2021-02-21 13:34:26 -05:00
Vladimir Mandic bdd46af728 update wiki 2021-02-21 08:19:55 -05:00
Vladimir Mandic 12943574ee 0.20.5 2021-02-21 07:21:03 -05:00
Vladimir Mandic 7c0f122abb fix imagefx and add dev builds 2021-02-21 07:20:58 -05:00
Vladimir Mandic d4ba670a54 update 2021-02-19 08:36:47 -05:00
Vladimir Mandic 91e454e63d 0.20.4 2021-02-19 08:35:48 -05:00
Vladimir Mandic 5c336f60d3 update imagefx 2021-02-19 08:35:41 -05:00
Vladimir Mandic 34f4bc5ee4 update 2021-02-17 10:23:21 -05:00
Vladimir Mandic 86d449bdcd 0.20.3 2021-02-17 10:22:49 -05:00
Vladimir Mandic c70b621d67 update tfjs to 3.1.0 2021-02-17 10:22:38 -05:00
Vladimir Mandic a34b677e75 rebuild 2021-02-13 09:22:40 -05:00
Vladimir Mandic 356c885629 0.20.2 2021-02-13 09:22:10 -05:00
Vladimir Mandic 34fca6fc1c update lint rules 2021-02-13 09:21:48 -05:00
Vladimir Mandic e7c1f95dd2 updated lint rules 2021-02-13 09:16:41 -05:00
Vladimir Mandic 062d3a714c update dev server 2021-02-13 08:42:10 -05:00
Vladimir Mandic 2ea8152273 Merge branch 'main' of https://github.com/vladmandic/human into main 2021-02-08 13:30:52 -05:00
Vladimir Mandic 1acdcfcdd7 update readme 2021-02-08 13:30:49 -05:00
Vladimir Mandic 833db0a241 Create codeql-analysis.yml 2021-02-08 13:29:58 -05:00
Vladimir Mandic 412f1cf001 Create SECURITY.md 2021-02-08 13:28:32 -05:00
Vladimir Mandic 803f3250ca update template 2021-02-08 13:25:28 -05:00
Vladimir Mandic 69434b3f8f add templates 2021-02-08 13:24:23 -05:00
Vladimir Mandic 1c1b6d8fd5 update default github docs 2021-02-08 13:20:37 -05:00
Vladimir Mandic fcf6bedb00 update 2021-02-08 13:10:10 -05:00
Vladimir Mandic 2807df0373 0.20.1 2021-02-08 13:09:36 -05:00
Vladimir Mandic 0ccdd61f7e menu fixes 2021-02-08 13:07:49 -05:00
Vladimir Mandic 4130ddb32f updated typings 2021-02-08 12:47:38 -05:00
Vladimir Mandic bfe688251b convert to typescript 2021-02-08 11:39:09 -05:00
Vladimir Mandic ab620c85fa update 2021-02-06 17:42:47 -05:00
Vladimir Mandic df2b74d73d 0.11.5 2021-02-06 17:42:21 -05:00
Vladimir Mandic 16854c097d added faceboxes alternative model 2021-02-06 17:41:53 -05:00
Vladimir Mandic 90f8bacc23 0.11.4 2021-02-06 10:19:45 -05:00
Vladimir Mandic 89cbf189a8 update 2021-02-06 10:19:45 -05:00
Vladimir Mandic 689c799008 0.11.3 2021-02-02 20:35:15 -05:00
Vladimir Mandic eabfb26036 update 2021-02-02 20:35:14 -05:00
Vladimir Mandic 6eda8f4d07 update wiki 2021-01-30 13:24:06 -05:00
Vladimir Mandic 8f07a35bc2 0.11.2 2021-01-30 13:23:26 -05:00
Vladimir Mandic a5c03e0c6a added warmup for nodejs 2021-01-30 13:23:07 -05:00
Vladimir Mandic 7c76e1cba0 update for tfjs 3.0.0 2021-01-29 10:26:58 -05:00
Vladimir Mandic 3d6bc57b9d 0.11.1 2021-01-29 10:25:26 -05:00
Vladimir Mandic 79269c0ae9 0.10.2 2021-01-22 08:15:34 -05:00
Vladimir Mandic 3b5daf2146 update 2021-01-22 08:15:33 -05:00
Vladimir Mandic 542a5e8e46 update to tfjs 2.8.5 2021-01-20 08:05:30 -05:00
Vladimir Mandic c3bc406fd0 0.10.1 2021-01-20 08:04:59 -05:00
Vladimir Mandic 5e00d7ee3d update 2021-01-18 08:22:51 -05:00
Vladimir Mandic b4919fe65d 0.9.26 2021-01-18 08:22:35 -05:00
Vladimir Mandic de4df71a08 fix face detection when mesh is disabled 2021-01-18 08:22:25 -05:00
Vladimir Mandic 0551cc9f86 version bump 2021-01-13 11:14:23 -05:00
Vladimir Mandic ed3702feeb 0.9.25 2021-01-13 11:13:43 -05:00
Vladimir Mandic b8db173f24 update 2021-01-13 09:40:04 -05:00
Vladimir Mandic 13094b337a added humangl custom backend 2021-01-13 09:35:31 -05:00
Vladimir Mandic 0bac1e11de rebuild 2021-01-12 10:18:58 -05:00
Vladimir Mandic 1932977224 code cleanup and enable minification 2021-01-12 09:55:08 -05:00
Vladimir Mandic b6e9fb04dc fix safari incompatibility 2021-01-12 08:24:00 -05:00
Vladimir Mandic 9e5e6a2531 0.9.24 2021-01-12 08:23:19 -05:00
Vladimir Mandic aa10a36350 work on blazepose 2021-01-11 14:35:57 -05:00
Vladimir Mandic 432512524e update changelog 2021-01-11 09:02:55 -05:00
Vladimir Mandic bea3ea6425 full rebuild 2021-01-11 09:02:43 -05:00
Vladimir Mandic f8ac0dce89 0.9.23 2021-01-11 09:02:15 -05:00
Vladimir Mandic c099c1006f added iris gesture 2021-01-11 09:02:02 -05:00
Vladimir Mandic ea5e50de6c update 2021-01-06 06:52:01 -05:00
Vladimir Mandic a33f2cb41b fix emotion labels 2021-01-06 06:51:20 -05:00
Vladimir Mandic a2a2b8b7fc full rebuild 2021-01-05 16:50:25 -05:00
Vladimir Mandic a3a3238cb1 0.9.22 2021-01-05 16:49:55 -05:00
Vladimir Mandic b2906a70ea update wiki 2021-01-05 16:48:19 -05:00
Vladimir Mandic e63eb072fc remove iris coords if iris is disabled 2021-01-05 16:41:54 -05:00
Vladimir Mandic 34cf0f7fd2 update iris objects 2021-01-05 16:33:43 -05:00
Vladimir Mandic 387de12a09 web worker fix 2021-01-03 10:41:56 -05:00
Vladimir Mandic 73d5c86ced 0.9.21 2021-01-03 10:41:27 -05:00
Vladimir Mandic 17467d457a 0.9.20 2021-01-03 10:23:49 -05:00
Vladimir Mandic 2a1b92ae6f update to tfjs 2.8.2 2021-01-03 10:23:45 -05:00
Vladimir Mandic fe0439ec0d stricter linting, fix face annotations 2020-12-27 08:12:22 -05:00
Vladimir Mandic 8b52a6b3d9 update nodejs platform support 2020-12-23 13:55:22 -05:00
Vladimir Mandic 042ba11648 0.9.19 2020-12-23 13:54:47 -05:00
Vladimir Mandic 4ec8e58077 added rawBox and rawMesh 2020-12-22 12:34:47 -05:00
ButzYung 6ae26e14e7 Variable name changes, setting .rawCoords only if necessary 2020-12-22 12:20:36 -05:00
ButzYung 4f40680434 Option to return raw data (mesh, box) for Facemesh / "preserve aspect ratio" fix from Facemesh upstream 2020-12-22 12:20:36 -05:00
Vladimir Mandic f41052437c updated readme 2020-12-21 08:12:18 -05:00
Vladimir Mandic 69741a8516 update 2020-12-17 21:09:25 -05:00
Vladimir Mandic 247d39358e 0.9.18 2020-12-16 19:17:04 -05:00
Vladimir Mandic cdd10d06b8 add z axis scaling 2020-12-16 19:16:54 -05:00
Vladimir Mandic f1293948b5 major work on body module 2020-12-16 18:36:24 -05:00
Vladimir Mandic c7249463b2 republish due to tfjs 2.8.0 issues 2020-12-16 14:49:14 -05:00
Vladimir Mandic 96c1376a62 0.9.17 2020-12-15 08:44:45 -05:00
Vladimir Mandic 5d671cf0ca updated tfjs 2020-12-15 08:44:42 -05:00
Vladimir Mandic 33a1374f60 updated 2020-12-12 18:35:16 -05:00
Vladimir Mandic 3ad6c824a0 added custom webgl backend 2020-12-12 18:34:30 -05:00
Vladimir Mandic 3546c12aa2 0.9.16 2020-12-12 11:25:13 -05:00
Vladimir Mandic e93568f258 update dependencies 2020-12-12 11:25:12 -05:00
Vladimir Mandic 9bde7c5493 change default ports 2020-12-12 10:15:51 -05:00
Vladimir Mandic 7bd25ca7de 0.9.15 2020-12-11 10:11:54 -05:00
Vladimir Mandic 8df844cd7b improved caching and warmup 2020-12-11 10:11:49 -05:00
Vladimir Mandic b48047109b rebuild 2020-12-10 15:47:43 -05:00
Vladimir Mandic 2fdc0f7940 0.9.14 2020-12-10 15:47:06 -05:00
Vladimir Mandic 565a8b116a conditional hand rotation 2020-12-10 15:46:45 -05:00
Vladimir Mandic 866c2f0d0f updated wiki 2020-12-10 14:48:12 -05:00
Vladimir Mandic 1d0bff46ec staggered skipframes 2020-12-10 14:47:53 -05:00
Vladimir Mandic 1f51188392 0.9.13 2020-12-08 10:54:22 -05:00
Vladimir Mandic 07b2fa4497 implemented face and hand boundary checks 2020-12-08 10:50:26 -05:00
Vladimir Mandic 2309712bd8 embedded sample for warmup 2020-12-08 09:58:30 -05:00
Vladimir Mandic adf638d306 switch to central logger 2020-12-08 09:00:44 -05:00
Vladimir Mandic 9340cb41f0 0.9.12 2020-11-26 10:37:08 -05:00
Vladimir Mandic 22fc4aec80 minor compatibility fixes 2020-11-26 10:37:04 -05:00
Vladimir Mandic 2be58d94ab update for node v15 2020-11-25 09:13:19 -05:00
Vladimir Mandic e7e51641e8 0.9.11 2020-11-23 23:42:42 -05:00
Vladimir Mandic b796b4ea06 implement multi-person gestures 2020-11-23 23:36:04 -05:00
Vladimir Mandic 6c6503768a modularize pipeline models 2020-11-23 22:55:01 -05:00
Vladimir Mandic 0e6ac7048b 2020-11-23 08:44:34 -05:00
Vladimir Mandic a110d45ca9 updated embedding function 2020-11-23 08:40:17 -05:00
Vladimir Mandic d551cdf566 updated wiki 2020-11-23 08:18:08 -05:00
Vladimir Mandic 30a76482a6 updated firefox css styling 2020-11-23 07:44:10 -05:00
Vladimir Mandic 199e79408a 0.9.10 2020-11-21 12:22:00 -05:00
Vladimir Mandic d5b922de02 changed build for optimized node & browser 2020-11-21 12:21:47 -05:00
Vladimir Mandic 9eb2426114 updated font 2020-11-21 07:33:41 -05:00
Vladimir Mandic 0e257ccdf7 0.9.9 2020-11-21 07:20:17 -05:00
Vladimir Mandic 0c0ebb6d5d new screenshots 2020-11-21 07:19:20 -05:00
Vladimir Mandic 4a6313c5b6 update dependencies 2020-11-20 08:53:40 -05:00
Vladimir Mandic 217fe1efc8 camera exception handling 2020-11-20 07:52:50 -05:00
Vladimir Mandic 68f7a487b4 0.9.8 2020-11-19 16:16:58 -05:00
Vladimir Mandic 731237df78 force f16 textures 2020-11-19 16:16:51 -05:00
Vladimir Mandic 387a080c89 bugfix embedding check 2020-11-19 15:22:08 -05:00
Vladimir Mandic 62635d23ed 0.9.7 2020-11-19 14:46:08 -05:00
Vladimir Mandic 26fbe21ab0 ui redesign 2020-11-19 14:45:59 -05:00
Vladimir Mandic 77e73b8dfd 0.9.6 2020-11-18 09:15:22 -05:00
Vladimir Mandic 08e5f6c62b optimize camera resize on mobile 2020-11-18 09:15:03 -05:00
Vladimir Mandic e08c9e9ab2 completed tfjs wrapper 2020-11-18 08:26:28 -05:00
Vladimir Mandic 0d33d7c5b5 update wiki 2020-11-17 17:48:05 -05:00
Vladimir Mandic ddeffb0362 0.9.5 2020-11-17 17:47:16 -05:00
Vladimir Mandic 3f180cfc5d fix serious performance bug around skipframes 2020-11-17 17:42:44 -05:00
Vladimir Mandic 9391acd72d switch to custom tfjs bundle 2020-11-17 12:38:48 -05:00
Vladimir Mandic d7f5dc7a1f 0.9.4 2020-11-17 10:19:15 -05:00
Vladimir Mandic 3451c7b35f switch to tfjs source import 2020-11-17 10:18:15 -05:00
Vladimir Mandic 60f03dc4a8 0.9.3 2020-11-16 23:58:23 -05:00
Vladimir Mandic ecb13151b0 updated build process 2020-11-16 23:58:06 -05:00
Vladimir Mandic 8790786766 updated build scripts 2020-11-16 16:40:25 -05:00
Vladimir Mandic d8734626d1 switched to minified build 2020-11-16 15:51:46 -05:00
Vladimir Mandic a4da0b7fe2 web worker fixes 2020-11-15 09:28:57 -05:00
Vladimir Mandic 92b4ac71dd update buffered output 2020-11-14 17:22:59 -05:00
Vladimir Mandic 426e3fbd54 full rebuild 2020-11-14 07:05:20 -05:00
Vladimir Mandic 212ef0d96b 0.9.2 2020-11-14 07:02:43 -05:00
Vladimir Mandic b5aac6e016 fix camera restart on resize 2020-11-14 07:02:05 -05:00
Vladimir Mandic 5777b86748 update package description 2020-11-13 16:43:48 -05:00
Vladimir Mandic 005268fcfb 0.9.1 2020-11-13 16:42:56 -05:00
Vladimir Mandic 1d4c20707c version bump 2020-11-13 16:42:50 -05:00
Vladimir Mandic 23bd78dbec full rebuild 2020-11-13 16:42:00 -05:00
Vladimir Mandic 17b079ab39 implemented face embedding 2020-11-13 16:13:35 -05:00
Vladimir Mandic 2f3da6441d added internal benchmark tool 2020-11-12 17:00:06 -05:00
Vladimir Mandic df43e41cb8 updated face uv coordinates 2020-11-12 14:52:32 -05:00
Vladimir Mandic 4a7ba26f42 0.8.8 2020-11-12 12:59:00 -05:00
Vladimir Mandic b5dec77ad5 updated packages 2020-11-12 12:58:55 -05:00
Vladimir Mandic 07a794ba81 reduced bundle size 2020-11-12 12:17:57 -05:00
Vladimir Mandic c929c2d1d6 implemented buffered processing 2020-11-12 09:21:26 -05:00
Vladimir Mandic 2a71f81462 fix for conditional model loading 2020-11-11 22:40:05 -05:00
Vladimir Mandic 1de1d4e48a 0.8.7 2020-11-11 15:02:53 -05:00
Vladimir Mandic 1c2223ec9e added performance notes 2020-11-11 15:02:49 -05:00
Vladimir Mandic e4bcdeb105 added notes on models 2020-11-10 10:23:11 -05:00
Vladimir Mandic 63258f91ff update dependencies 2020-11-10 09:54:07 -05:00
Vladimir Mandic d3a1e43348 fix bug in async ops and change imports 2020-11-10 08:57:39 -05:00
Vladimir Mandic 0b57ebbfb4 fix wiki links 2020-11-09 20:23:05 -05:00
Vladimir Mandic e4537f8a73 0.8.6 2020-11-09 20:13:44 -05:00
Vladimir Mandic bc1f987872 add wasm bundle 2020-11-09 20:13:38 -05:00
Vladimir Mandic 18da5913e9 0.8.5 2020-11-09 14:26:19 -05:00
Vladimir Mandic 250187f259 reimplemented blazeface processing 2020-11-09 14:26:10 -05:00
Vladimir Mandic 99fadef352 0.8.4 2020-11-09 09:31:17 -05:00
Vladimir Mandic 6bd064cddf added additional gestures 2020-11-09 09:31:11 -05:00
Vladimir Mandic 63cc122e9b implemented blink detection 2020-11-09 08:57:24 -05:00
Vladimir Mandic 65d045547d fix wasm module 2020-11-09 06:32:11 -05:00
Vladimir Mandic a098a7771f updated readme 2020-11-08 12:44:08 -05:00
Vladimir Mandic c224f80285 updated wiki 2020-11-08 12:40:46 -05:00
Vladimir Mandic a282d89436 updated wiki 2020-11-08 12:35:26 -05:00
Vladimir Mandic d7b3c404ee updated wiki 2020-11-08 12:33:24 -05:00
Vladimir Mandic 784de3bdc9 0.8.3 2020-11-08 12:32:36 -05:00
Vladimir Mandic d0bf167652 refresh 2020-11-08 12:32:31 -05:00
Vladimir Mandic 6af1768fa6 optimizations 2020-11-08 12:26:45 -05:00
Vladimir Mandic 0cffd084b0 update 2020-11-08 10:06:23 -05:00
Vladimir Mandic acb1fba9e5 update wiki 2020-11-08 09:56:27 -05:00
Vladimir Mandic 80eeebc8f2 update hand model 2020-11-08 09:56:02 -05:00
Vladimir Mandic 25884ca3e7 update hand algorithm 2020-11-08 01:17:25 -05:00
Vladimir Mandic bfd86ca85a 0.8.2 2020-11-08 01:16:28 -05:00
Vladimir Mandic 4ae1b2fc37 fix typos 2020-11-07 20:44:15 -05:00
Vladimir Mandic f59f8cedbd update build script 2020-11-07 20:15:09 -05:00
Vladimir Mandic 429560bf35 commit 2020-11-07 11:36:45 -05:00
Vladimir Mandic ae2febbff2 updated changelog 2020-11-07 11:35:43 -05:00
Vladimir Mandic 5c700e5caf 0.8.1 2020-11-07 11:34:44 -05:00
Vladimir Mandic d029f09dd0 updated ui 2020-11-07 11:34:09 -05:00
Vladimir Mandic c31cf49807 fix hand detection performance 2020-11-07 11:25:03 -05:00
Vladimir Mandic 7666519d32 optimized model loader 2020-11-07 10:37:19 -05:00
Vladimir Mandic 311328b70f updated wiki links 2020-11-07 09:47:26 -05:00
Vladimir Mandic a0f4bd7083 Merge branch 'main' of https://github.com/vladmandic/human into main 2020-11-07 09:44:43 -05:00
Vladimir Mandic 6aa74b6405 update dependencies 2020-11-07 09:44:33 -05:00
Vladimir Mandic 38c627c3b3 updated wiki 2020-11-07 09:42:54 -05:00
Vladimir Mandic a9482412ff created wiki 2020-11-07 09:39:54 -05:00
Vladimir Mandic eff52f4e98 Update issue templates 2020-11-07 09:37:56 -05:00
Vladimir Mandic 69201984c1 optimize font resizing 2020-11-06 22:20:42 -05:00
Vladimir Mandic 481b89ec9c fix nms sync call 2020-11-06 19:25:37 -05:00
Vladimir Mandic d8867fd1b4 0.7.6 2020-11-06 16:21:25 -05:00
Vladimir Mandic 8aacb4c7fc fixed memory leaks and updated docs 2020-11-06 16:21:20 -05:00
Vladimir Mandic 5ecc072f0f model tuning 2020-11-06 15:35:58 -05:00
Vladimir Mandic f705ce9dce cache invalidation improvements 2020-11-06 13:50:16 -05:00
Vladimir Mandic 4c2bc9a48a full async operations 2020-11-06 11:39:39 -05:00
Vladimir Mandic 0c044fa9fe 0.7.5 2020-11-05 23:46:43 -05:00
Vladimir Mandic a60db18949 implemented dev-server 2020-11-05 23:46:37 -05:00
Vladimir Mandic 5be8353e3a 0.7.4 2020-11-05 16:00:46 -05:00
Vladimir Mandic 8cc256bc93 fix canvas size on different orientation 2020-11-05 15:59:28 -05:00
Vladimir Mandic ceccda54cf updated emotion models 2020-11-05 15:38:09 -05:00
Vladimir Mandic e96135e652 updated changelog 2020-11-05 09:12:43 -05:00
Vladimir Mandic aef47d2d32 switched from es2020 to es2018 build target 2020-11-05 09:12:31 -05:00
Vladimir Mandic a267e2b04b 0.7.3 2020-11-05 09:06:17 -05:00
Vladimir Mandic a3edf94406 optimized camera and mobile layout 2020-11-05 09:06:09 -05:00
Vladimir Mandic 9578103dd2 fixed worker and filter compatibility 2020-11-05 08:21:23 -05:00
Vladimir Mandic b899df4923 0.7.2 2020-11-04 14:59:33 -05:00
Vladimir Mandic 967877bd76 major work on handpose model 2020-11-04 14:59:30 -05:00
Vladimir Mandic ba0437cc8b updated mobile build 2020-11-04 12:10:26 -05:00
Vladimir Mandic b476d3ec0e update demo build process 2020-11-04 11:57:44 -05:00
Vladimir Mandic da1801ffcf updated docs 2020-11-04 11:45:24 -05:00
Vladimir Mandic edaaa2dd8f updated changelog 2020-11-04 11:44:07 -05:00
Vladimir Mandic a0b31cc5dd 0.7.1 2020-11-04 11:44:00 -05:00
Vladimir Mandic b036205557 changed demo build process 2020-11-04 11:43:51 -05:00
Vladimir Mandic 3360ba7dbe 0.6.7 2020-11-04 10:18:30 -05:00
Vladimir Mandic 1af0b13c4b implemented simple gesture recognition 2020-11-04 10:18:22 -05:00
Vladimir Mandic e890852cc8 0.6.6 2020-11-04 01:13:44 -05:00
Vladimir Mandic d345afbb6a remove debug code 2020-11-04 01:13:40 -05:00
Vladimir Mandic 81ae5483f1 0.6.5 2020-11-04 01:11:30 -05:00
Vladimir Mandic 039b9356e8 redo hand detection 2020-11-04 01:11:24 -05:00
Vladimir Mandic c606e4776f 0.6.4 2020-11-03 15:24:11 -05:00
Vladimir Mandic 19c2f1fab0 added manifest 2020-11-03 15:24:02 -05:00
Vladimir Mandic f6d9a0e362 0.6.3 2020-11-03 11:11:57 -05:00
Vladimir Mandic 369c6faa21 update changelog 2020-11-03 11:11:53 -05:00
Vladimir Mandic 0153841891 enhanced processing resolution 2020-11-03 10:55:33 -05:00
Vladimir Mandic de5d299eee fix pause restart 2020-11-03 09:40:04 -05:00
Vladimir Mandic 63fb870e86 complete model refactoring 2020-11-03 09:34:36 -05:00
Vladimir Mandic 738b04ae35 fixed typo 2020-11-02 22:17:56 -05:00
Vladimir Mandic 82d53ff64b 0.6.2 2020-11-02 22:15:40 -05:00
Vladimir Mandic 38b3af1d84 optimized demo 2020-11-02 22:15:37 -05:00
Vladimir Mandic c570227a19 updated docs 2020-11-02 18:56:12 -05:00
Vladimir Mandic 5c889f37da package update 2020-11-02 18:55:17 -05:00
Vladimir Mandic 95ce9ff303 0.6.1 2020-11-02 18:54:13 -05:00
Vladimir Mandic d2bf2aeade major performance improvements for all models 2020-11-02 18:54:03 -05:00
Vladimir Mandic 18ec5f211f Revert "optimized canvas handling"
This reverts commit eaf603aa26.
2020-11-02 17:31:37 -05:00
Vladimir Mandic eaf603aa26 optimized canvas handling 2020-11-02 17:25:35 -05:00
Vladimir Mandic 3072e3a5d2 minor optimization to imagefx 2020-11-02 12:21:30 -05:00
Vladimir Mandic 8b8a01afe0 fix demo image sample 2020-11-01 14:16:47 -05:00
Vladimir Mandic 4dc8eaf79a added tfjs-vis to distribution 2020-11-01 13:59:25 -05:00
Vladimir Mandic 02c5c445f3 0.5.5 2020-11-01 13:10:40 -05:00
Vladimir Mandic cf6e525972 updated changelog 2020-11-01 13:10:36 -05:00
Vladimir Mandic 35e2131335 changed defaults 2020-11-01 13:10:22 -05:00
Vladimir Mandic 2b60efcf8e 0.5.4 2020-11-01 13:07:58 -05:00
Vladimir Mandic 38be53cb7e implemented memory profiler 2020-11-01 13:07:53 -05:00
Vladimir Mandic 6416e5e327 0.5.3 2020-10-30 15:45:00 -04:00
Vladimir Mandic f7be2a7ad3 improved debug logging 2020-10-30 11:57:23 -04:00
Vladimir Mandic 28679ed8fd 0.5.2 2020-10-30 10:30:12 -04:00
Vladimir Mandic d3a3ca3b50 updated changelog 2020-10-30 10:30:07 -04:00
Vladimir Mandic 0bc74bde6e updated menu ui 2020-10-30 10:29:50 -04:00
Vladimir Mandic ce876f8c48 added wasm and webgpu backends 2020-10-30 10:23:49 -04:00
Vladimir Mandic 6dbd481961 0.5.1 2020-10-30 07:35:30 -04:00
Vladimir Mandic 52f77d8f59 updated changelog 2020-10-30 07:35:19 -04:00
Vladimir Mandic 19c93e9ec9 updated dependencies 2020-10-30 07:34:55 -04:00
Vladimir Mandic 35fed12c5b improve demo line continuous draws 2020-10-30 07:32:35 -04:00
Vladimir Mandic 338e16230d 0.4.10 2020-10-30 07:14:47 -04:00
Vladimir Mandic f96a2dfc6f fix for seedrandom 2020-10-30 07:14:42 -04:00
Vladimir Mandic a1a5b2a1bb 0.4.9 2020-10-29 00:09:33 -04:00
Vladimir Mandic 4a81cd7032 updated dependencies 2020-10-29 00:09:29 -04:00
Vladimir Mandic e64f60854d updated changelog 2020-10-28 15:03:23 -04:00
Vladimir Mandic ace57199c8 0.4.8 2020-10-28 15:03:08 -04:00
Vladimir Mandic e5a9113721 updated build targets and tfjs to 2.7.0 2020-10-28 15:02:59 -04:00
Vladimir Mandic 14503e9eea updated menu library 2020-10-27 14:20:21 -04:00
Vladimir Mandic 742e800155 Revert "updated menu handler"
This reverts commit 476a1cc4d4.
2020-10-27 14:19:24 -04:00
Vladimir Mandic 476a1cc4d4 updated menu handler 2020-10-27 14:07:39 -04:00
Vladimir Mandic 9187202979 0.4.7 2020-10-27 12:49:24 -04:00
Vladimir Mandic 8b264c6f87 0.4.6 2020-10-27 11:56:50 -04:00
Vladimir Mandic d75f68227e fix firefox compatibility bug 2020-10-27 11:56:41 -04:00
Vladimir Mandic 4e42a89074 0.4.5 2020-10-27 10:32:21 -04:00
Vladimir Mandic d4a2c49bdc updated docs 2020-10-27 10:32:12 -04:00
Vladimir Mandic 4a18186f96 updated samples 2020-10-27 10:30:28 -04:00
Vladimir Mandic d06b444b44 0.4.4 2020-10-27 10:06:14 -04:00
Vladimir Mandic fedbb1aac4 implemented input resizing 2020-10-27 10:06:01 -04:00
Vladimir Mandic c731892cc5 updated docs 2020-10-25 21:33:06 -04:00
Vladimir Mandic e50a30f68f updated 2020-10-22 18:50:28 -04:00
Vladimir Mandic 99940a2d01 0.4.3 2020-10-22 18:50:21 -04:00
Vladimir Mandic 50886d1971 update build process 2020-10-22 18:50:09 -04:00
Vladimir Mandic 310d3372ed 0.4.2 2020-10-20 10:08:23 -04:00
Vladimir Mandic c38ac88769 updated docs 2020-10-20 10:08:06 -04:00
Vladimir Mandic a9c03c111f log initialization 2020-10-20 07:58:20 -04:00
Vladimir Mandic f41d9cc43b updated changelog 2020-10-19 11:04:34 -04:00
Vladimir Mandic 358365eb62 0.4.1 2020-10-19 11:04:02 -04:00
Vladimir Mandic 1b7ab8bdcf breaking change: convert to object class 2020-10-19 11:03:48 -04:00
Vladimir Mandic 827a04e2d0 compatibility notes 2020-10-18 14:14:05 -04:00
Vladimir Mandic e83774d7d5 0.3.9 2020-10-18 12:22:39 -04:00
Vladimir Mandic e3c6bcdc01 updated docs 2020-10-18 12:21:17 -04:00
Vladimir Mandic aaeb842cf6 implemented image filters 2020-10-18 12:12:09 -04:00
Vladimir Mandic 5884c8cfe4 pure tensor pipeline without image converts 2020-10-18 09:21:53 -04:00
Vladimir Mandic d44ff5dbb2 autodetect skipFrames 2020-10-18 08:07:45 -04:00
Vladimir Mandic 08c55327bb 0.3.8 2020-10-17 21:00:43 -04:00
Vladimir Mandic 6b8d99ce0c new menu layout 2020-10-17 20:59:43 -04:00
Vladimir Mandic 9ab360f8b5 0.3.7 2020-10-17 11:43:44 -04:00
Vladimir Mandic 8362579a48 updated changelog 2020-10-17 11:43:40 -04:00
Vladimir Mandic 04d0b814c4 added diagnostics output 2020-10-17 11:43:04 -04:00
Vladimir Mandic 2729d9e8bf parallelized agegender operations 2020-10-17 11:38:24 -04:00
Vladimir Mandic 4efb17c040 updated readme 2020-10-17 10:32:02 -04:00
Vladimir Mandic 11161d2430 0.3.6 2020-10-17 10:25:37 -04:00
Vladimir Mandic 09f6863d0e fixed webcam initialization 2020-10-17 10:25:27 -04:00
Vladimir Mandic 87f8e31344 fixed memory leaks and added scoped runs 2020-10-17 10:06:02 -04:00
Vladimir Mandic 13f7a7a5f6 modularized draw 2020-10-17 07:34:45 -04:00
Vladimir Mandic d6ad21cb48 added state handling 2020-10-17 07:15:23 -04:00
Vladimir Mandic 2e680ddc2b refactored package file layout 2020-10-17 06:30:00 -04:00
Vladimir Mandic 43a6a9934a updated changelog 2020-10-16 15:22:20 -04:00
Vladimir Mandic ada14165d4 0.3.5 2020-10-16 15:22:01 -04:00
Vladimir Mandic f31c18f241 added auto-generated changelog 2020-10-16 15:21:56 -04:00
Vladimir Mandic cf6293a6cd updated readme 2020-10-16 15:13:29 -04:00
Vladimir Mandic 11eb6c0252 0.3.4 2020-10-16 15:05:01 -04:00
Vladimir Mandic c82b1698d5 updated examples plus bugfixes 2020-10-16 15:04:51 -04:00
Vladimir Mandic 924eb3eb25 added camera selection 2020-10-16 11:23:59 -04:00
Vladimir Mandic c8b705f805 optimized blazeface anchors 2020-10-16 10:48:10 -04:00
Vladimir Mandic ee65aa7588 added error handling 2020-10-16 10:12:12 -04:00
Vladimir Mandic b0da7fa5b6 updated default values 2020-10-15 20:29:51 -04:00
Vladimir Mandic 3f95980b2d 0.3.3 2020-10-15 20:25:15 -04:00
Vladimir Mandic a859dc64d5 added blazeface back and front models 2020-10-15 20:20:37 -04:00
Vladimir Mandic 4c1aafff31 0.3.2 2020-10-15 18:16:19 -04:00
Vladimir Mandic 69ee764cc1 reduced web worker latency 2020-10-15 18:16:05 -04:00
Vladimir Mandic c4f04a4904 added debugging and versioning 2020-10-15 15:25:58 -04:00
Vladimir Mandic 28688025b3 optimized demos and added scoped runs 2020-10-15 09:43:16 -04:00
Vladimir Mandic 51d3b429e6 added multi backend support 2020-10-15 08:16:34 -04:00
Vladimir Mandic a6024c40e8 0.3.1 2020-10-14 18:30:26 -04:00
Vladimir Mandic 66ef54a249 0.2.10 2020-10-14 18:22:56 -04:00
Vladimir Mandic 3ca9edeaa5 added emotion backend 2020-10-14 18:22:38 -04:00
Vladimir Mandic 23aeb81b76 module parametrization and performance monitoring 2020-10-14 13:23:02 -04:00
Vladimir Mandic 2ab6b08841 implemented multi-hand support 2020-10-14 11:43:33 -04:00
Vladimir Mandic 369692ef62 updated dependencies 2020-10-13 21:08:06 -04:00
Vladimir Mandic 60409c3c15 fixed documentation typos 2020-10-13 20:59:09 -04:00
Vladimir Mandic 56a8338856 0.2.9 2020-10-13 20:52:34 -04:00
Vladimir Mandic 11137f4523 added node build and demo 2020-10-13 20:52:30 -04:00
Vladimir Mandic b19e6372c8 0.2.8 2020-10-13 10:08:50 -04:00
Vladimir Mandic f225726285 0.2.7 2020-10-13 10:07:03 -04:00
Vladimir Mandic 32f5be1bf8 new examples 2020-10-13 10:06:49 -04:00
Vladimir Mandic 4052f3aa65 0.2.6 2020-10-13 09:59:27 -04:00
Vladimir Mandic 22933c1331 updated demo 2020-10-13 09:59:21 -04:00
Vladimir Mandic d94e372363 enable all models by default 2020-10-12 22:03:55 -04:00
Vladimir Mandic 1ee4004a5c 0.2.5 2020-10-12 22:01:44 -04:00
Vladimir Mandic fa3a9e5372 fixed memory leak 2020-10-12 22:01:35 -04:00
Vladimir Mandic ca81481cb2 0.2.4 2020-10-12 15:05:30 -04:00
Vladimir Mandic 64c5fc9a80 0.2.3 2020-10-12 15:03:59 -04:00
Vladimir Mandic 470327411d updated keywords 2020-10-12 15:03:51 -04:00
Vladimir Mandic 2f6007cda3 updated readme 2020-10-12 14:46:08 -04:00
Vladimir Mandic 541e829349 0.2.2 2020-10-12 14:41:59 -04:00
Vladimir Mandic 6ca10e26a1 updated docs 2020-10-12 14:41:50 -04:00
Vladimir Mandic badc79c40a updated readme 2020-10-12 14:33:07 -04:00
Vladimir Mandic 5b56999dd0 updated linting 2020-10-12 11:19:52 -04:00
Vladimir Mandic 4521ee79b6 0.2.1 2020-10-12 11:05:46 -04:00
Vladimir Mandic 709cf22efd updated readme 2020-10-12 11:05:29 -04:00
Vladimir Mandic 210bc8a80d added sample image 2020-10-12 10:59:55 -04:00
Vladimir Mandic a5afed99ec updated docs 2020-10-12 10:31:36 -04:00
Vladimir Mandic c06ed770a3 updated docs 2020-10-12 10:28:42 -04:00
Vladimir Mandic 128dc87e05 updated docs 2020-10-12 10:27:22 -04:00
Vladimir Mandic c223b2cf82 updated model path 2020-10-12 10:20:51 -04:00
Vladimir Mandic cf82f666d2 updated docs 2020-10-12 10:16:31 -04:00
Vladimir Mandic ee1d598367 updated model path 2020-10-12 10:15:00 -04:00
Vladimir Mandic f83d13d960 updated model path 2020-10-12 10:14:26 -04:00
Vladimir Mandic f71fbe1873 updated iife and esm demos 2020-10-12 10:08:00 -04:00
Vladimir Mandic 32025cd7f7 updated demo 2020-10-11 21:21:41 -04:00
Vladimir Mandic e2bd9dbc34 initial public commit 2020-10-11 19:22:43 -04:00
577 changed files with 129490 additions and 1 deletion

.api-extractor.json (new file, 27 lines)
@@ -0,0 +1,27 @@
{
"$schema": "https://developer.microsoft.com/json-schemas/api-extractor/v7/api-extractor.schema.json",
"mainEntryPointFilePath": "types/lib/src/human.d.ts",
"compiler": {
"skipLibCheck": true
},
"newlineKind": "lf",
"dtsRollup": {
"enabled": true,
"untrimmedFilePath": "types/human.d.ts"
},
"docModel": { "enabled": false },
"tsdocMetadata": { "enabled": false },
"apiReport": { "enabled": false },
"messages": {
"compilerMessageReporting": {
"default": { "logLevel": "warning" }
},
"extractorMessageReporting": {
"default": { "logLevel": "warning" },
"ae-missing-release-tag": { "logLevel": "none" }
},
"tsdocMessageReporting": {
"default": { "logLevel": "warning" }
}
}
}
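With `dtsRollup` enabled, API Extractor bundles the generated per-module declarations into the single `types/human.d.ts`, so every public type resolves from one file. A minimal consumer-side sketch, assuming the package `types` field points at that rollup (the `Config` and `Result` names match the library's documented exports; treat the snippet as illustrative):

```ts
// with the dts rollup, all public types resolve from the single bundled human.d.ts
import type { Config, Result } from '@vladmandic/human';

const overrides: Partial<Config> = { debug: false }; // partial configs merge over defaults
const summarize = (res: Result): string => `faces detected: ${res.face.length}`;
```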

.build.json (new file, 181 lines)
@@ -0,0 +1,181 @@
{
"log": {
"enabled": true,
"debug": false,
"console": true,
"output": "test/build.log"
},
"profiles": {
"production": ["clean", "compile", "typings", "typedoc", "lint", "changelog"],
"development": ["serve", "watch", "compile"],
"serve": ["serve"],
"clean": ["clean"]
},
"clean": {
"locations": ["dist/*", "types/*", "typedoc/*"]
},
"lint": {
"locations": [ "**/*.json", "src/**/*.ts", "test/**/*.js", "demo/**/*.js", "**/*.md" ],
"rules": { }
},
"changelog": {
"log": "CHANGELOG.md"
},
"serve": {
"sslKey": "node_modules/@vladmandic/build/cert/https.key",
"sslCrt": "node_modules/@vladmandic/build/cert/https.crt",
"httpPort": 8000,
"httpsPort": 8001,
"documentRoot": ".",
"defaultFolder": "demo",
"defaultFile": "index.html"
},
"build": {
"global": {
"target": "es2018",
"sourcemap": false,
"treeShaking": true,
"ignoreAnnotations": true,
"banner": { "js": "/*\n Human\n homepage: <https://github.com/vladmandic/human>\n author: <https://github.com/vladmandic>'\n*/\n" }
},
"targets": [
{
"name": "tfjs/browser/version",
"platform": "browser",
"format": "esm",
"input": "tfjs/tf-version.ts",
"output": "dist/tfjs.version.js"
},
{
"name": "tfjs/nodejs/cpu",
"platform": "node",
"format": "cjs",
"input": "tfjs/tf-node.ts",
"output": "dist/tfjs.esm.js",
"external": ["@tensorflow"]
},
{
"name": "human/nodejs/cpu",
"platform": "node",
"format": "cjs",
"input": "src/human.ts",
"output": "dist/human.node.js",
"external": ["@tensorflow"]
},
{
"name": "tfjs/nodejs/gpu",
"platform": "node",
"format": "cjs",
"input": "tfjs/tf-node-gpu.ts",
"output": "dist/tfjs.esm.js",
"external": ["@tensorflow"]
},
{
"name": "human/nodejs/gpu",
"platform": "node",
"format": "cjs",
"input": "src/human.ts",
"output": "dist/human.node-gpu.js",
"external": ["@tensorflow"]
},
{
"name": "tfjs/nodejs/wasm",
"platform": "node",
"format": "cjs",
"input": "tfjs/tf-node-wasm.ts",
"output": "dist/tfjs.esm.js",
"minify": false,
"external": ["@tensorflow"]
},
{
"name": "human/nodejs/wasm",
"platform": "node",
"format": "cjs",
"input": "src/human.ts",
"output": "dist/human.node-wasm.js",
"external": ["@tensorflow"]
},
{
"name": "tfjs/browser/esm/nobundle",
"platform": "browser",
"format": "esm",
"input": "tfjs/tf-browser.ts",
"output": "dist/tfjs.esm.js",
"external": ["@tensorflow"]
},
{
"name": "human/browser/esm/nobundle",
"platform": "browser",
"format": "esm",
"input": "src/human.ts",
"output": "dist/human.esm-nobundle.js",
"sourcemap": false,
"external": ["@tensorflow"]
},
{
"name": "tfjs/browser/esm/bundle",
"platform": "browser",
"format": "esm",
"input": "tfjs/tf-browser.ts",
"output": "dist/tfjs.esm.js",
"sourcemap": false,
"minify": true
},
{
"name": "human/browser/iife/bundle",
"platform": "browser",
"format": "iife",
"input": "src/human.ts",
"output": "dist/human.js",
"minify": true,
"globalName": "Human",
"external": ["@tensorflow"]
},
{
"name": "human/browser/esm/bundle",
"platform": "browser",
"format": "esm",
"input": "src/human.ts",
"output": "dist/human.esm.js",
"sourcemap": true,
"minify": false,
"external": ["@tensorflow"],
"typings": "types/lib",
"typedoc": "typedoc"
},
{
"name": "demo/typescript",
"platform": "browser",
"format": "esm",
"input": "demo/typescript/index.ts",
"output": "demo/typescript/index.js",
"sourcemap": true,
"external": ["*/human.esm.js"]
},
{
"name": "demo/faceid",
"platform": "browser",
"format": "esm",
"input": "demo/faceid/index.ts",
"output": "demo/faceid/index.js",
"sourcemap": true,
"external": ["*/human.esm.js"]
},
{
"name": "demo/tracker",
"platform": "browser",
"format": "esm",
"input": "demo/tracker/index.ts",
"output": "demo/tracker/index.js",
"sourcemap": true,
"external": ["*/human.esm.js"]
}
]
},
"watch": {
"locations": [ "src/**/*", "tfjs/**/*", "demo/**/*.ts" ]
},
"typescript": {
"allowJs": false
}
}
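Each target above maps one entry point onto one output bundle: CJS bundles under `dist/human.node*.js` for NodeJS, an ESM bundle for browsers, and an IIFE bundle exposing a `Human` global. A minimal sketch of how the bundles are consumed, assuming the package entry points resolve to these dist files (the `.default` access mirrors the pattern used by the repo's node demos):

```ts
// browser, ESM bundle (dist/human.esm.js): tfjs is included in the bundle
import Human from '@vladmandic/human';

// nodejs, CJS bundle (dist/human.node.js): tfjs is external, provided by @tensorflow/tfjs-node
// const Human = require('@vladmandic/human').default;

const human = new Human();
console.log(human.version); // library version string
```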

.eslintrc.json (new file, 221 lines)
@@ -0,0 +1,221 @@
{
"globals": {
},
"rules": {
"@typescript-eslint/no-require-imports":"off"
},
"overrides": [
{
"files": ["**/*.ts"],
"parser": "@typescript-eslint/parser",
"parserOptions": { "ecmaVersion": "latest", "project": ["./tsconfig.json"] },
"plugins": ["@typescript-eslint"],
"env": {
"browser": true,
"commonjs": false,
"node": false,
"es2021": true
},
"extends": [
"airbnb-base",
"eslint:recommended",
"plugin:@typescript-eslint/eslint-recommended",
"plugin:@typescript-eslint/recommended",
"plugin:@typescript-eslint/recommended-requiring-type-checking",
"plugin:@typescript-eslint/strict",
"plugin:import/recommended",
"plugin:promise/recommended"
],
"rules": {
"@typescript-eslint/ban-ts-comment":"off",
"@typescript-eslint/dot-notation":"off",
"@typescript-eslint/no-empty-interface":"off",
"@typescript-eslint/no-inferrable-types":"off",
"@typescript-eslint/no-misused-promises":"off",
"@typescript-eslint/no-unnecessary-condition":"off",
"@typescript-eslint/no-unsafe-argument":"off",
"@typescript-eslint/no-unsafe-assignment":"off",
"@typescript-eslint/no-unsafe-call":"off",
"@typescript-eslint/no-unsafe-member-access":"off",
"@typescript-eslint/no-unsafe-return":"off",
"@typescript-eslint/no-require-imports":"off",
"@typescript-eslint/no-empty-object-type":"off",
"@typescript-eslint/non-nullable-type-assertion-style":"off",
"@typescript-eslint/prefer-for-of":"off",
"@typescript-eslint/prefer-nullish-coalescing":"off",
"@typescript-eslint/prefer-ts-expect-error":"off",
"@typescript-eslint/restrict-plus-operands":"off",
"@typescript-eslint/restrict-template-expressions":"off",
"dot-notation":"off",
"guard-for-in":"off",
"import/extensions": ["off", "always"],
"import/no-unresolved":"off",
"import/prefer-default-export":"off",
"lines-between-class-members":"off",
"max-len": [1, 275, 3],
"no-async-promise-executor":"off",
"no-await-in-loop":"off",
"no-bitwise":"off",
"no-continue":"off",
"no-lonely-if":"off",
"no-mixed-operators":"off",
"no-param-reassign":"off",
"no-plusplus":"off",
"no-regex-spaces":"off",
"no-restricted-syntax":"off",
"no-return-assign":"off",
"no-void":"off",
"object-curly-newline":"off",
"prefer-destructuring":"off",
"prefer-template":"off",
"radix":"off"
}
},
{
"files": ["**/*.d.ts"],
"parser": "@typescript-eslint/parser",
"parserOptions": { "ecmaVersion": "latest", "project": ["./tsconfig.json"] },
"plugins": ["@typescript-eslint"],
"env": {
"browser": true,
"commonjs": false,
"node": false,
"es2021": true
},
"extends": [
"airbnb-base",
"eslint:recommended",
"plugin:@typescript-eslint/eslint-recommended",
"plugin:@typescript-eslint/recommended",
"plugin:@typescript-eslint/recommended-requiring-type-checking",
"plugin:@typescript-eslint/strict",
"plugin:import/recommended",
"plugin:promise/recommended"
],
"rules": {
"@typescript-eslint/array-type":"off",
"@typescript-eslint/ban-types":"off",
"@typescript-eslint/consistent-indexed-object-style":"off",
"@typescript-eslint/consistent-type-definitions":"off",
"@typescript-eslint/no-empty-interface":"off",
"@typescript-eslint/no-explicit-any":"off",
"@typescript-eslint/no-invalid-void-type":"off",
"@typescript-eslint/no-unnecessary-type-arguments":"off",
"@typescript-eslint/no-unnecessary-type-constraint":"off",
"comma-dangle":"off",
"indent":"off",
"lines-between-class-members":"off",
"max-classes-per-file":"off",
"max-len":"off",
"no-multiple-empty-lines":"off",
"no-shadow":"off",
"no-use-before-define":"off",
"quotes":"off",
"semi":"off"
}
},
{
"files": ["**/*.js"],
"parserOptions": { "sourceType": "module", "ecmaVersion": "latest" },
"plugins": [],
"env": {
"browser": true,
"commonjs": true,
"node": true,
"es2021": true
},
"extends": [
"airbnb-base",
"eslint:recommended",
"plugin:node/recommended",
"plugin:promise/recommended"
],
"rules": {
"dot-notation":"off",
"import/extensions": ["error", "always"],
"import/no-extraneous-dependencies":"off",
"max-len": [1, 275, 3],
"no-await-in-loop":"off",
"no-bitwise":"off",
"no-continue":"off",
"no-mixed-operators":"off",
"no-param-reassign":"off",
"no-plusplus":"off",
"no-regex-spaces":"off",
"no-restricted-syntax":"off",
"no-return-assign":"off",
"node/no-unsupported-features/es-syntax":"off",
"object-curly-newline":"off",
"prefer-destructuring":"off",
"prefer-template":"off",
"radix":"off"
}
},
{
"files": ["**/*.json"],
"parserOptions": { "ecmaVersion": "latest" },
"plugins": ["json"],
"env": {
"browser": false,
"commonjs": false,
"node": false,
"es2021": false
},
"extends": []
},
{
"files": ["**/*.html"],
"parserOptions": { "sourceType": "module", "ecmaVersion": "latest" },
"parser": "@html-eslint/parser",
"plugins": ["html", "@html-eslint"],
"env": {
"browser": true,
"commonjs": false,
"node": false,
"es2021": false
},
"extends": ["plugin:@html-eslint/recommended"],
"rules": {
"@html-eslint/element-newline":"off",
"@html-eslint/attrs-newline":"off",
"@html-eslint/indent": ["error", 2]
}
},
{
"files": ["**/*.md"],
"plugins": ["markdown"],
"processor": "markdown/markdown",
"rules": {
"no-undef":"off"
}
},
{
"files": ["**/*.md/*.js"],
"rules": {
"@typescript-eslint/no-unused-vars":"off",
"@typescript-eslint/triple-slash-reference":"off",
"import/newline-after-import":"off",
"import/no-unresolved":"off",
"no-console":"off",
"no-global-assign":"off",
"no-multi-spaces":"off",
"no-restricted-globals":"off",
"no-undef":"off",
"no-unused-vars":"off",
"node/no-missing-import":"off",
"node/no-missing-require":"off",
"promise/catch-or-return":"off"
}
}
],
"ignorePatterns": [
"node_modules",
"assets",
"dist",
"demo/helpers/*.js",
"demo/typescript/*.js",
"demo/faceid/*.js",
"demo/tracker/*.js",
"typedoc"
]
}

.github/FUNDING.yml (new file, 11 lines)
@@ -0,0 +1,11 @@
github: [vladmandic]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

.github/ISSUE_TEMPLATE/issue.md (new file, 35 lines)
@@ -0,0 +1,35 @@
---
name: Issue
about: Issue
title: ''
labels: ''
assignees: vladmandic
---
**Issue Description**
**Steps to Reproduce**
**Expected Behavior**
**Environment**
- Human library version?
- Built-in demo or custom code?
- Type of module used (e.g. `js`, `esm`, `esm-nobundle`)?
- TensorFlow/JS version (if not using bundled module)?
- Browser or NodeJS and version (e.g. *NodeJS 14.15* or *Chrome 89*)?
- OS and Hardware platform (e.g. *Windows 10*, *Ubuntu Linux on x64*, *Android 10*)?
- Packager (if any) (e.g. *webpack*, *rollup*, *parcel*, *esbuild*, etc.)?
- Framework (if any) (e.g. *React*, *NextJS*, etc.)?
**Diagnostics**
- Check out any applicable [diagnostic steps](https://github.com/vladmandic/human/wiki/Diag)
**Additional**
- For installation or startup issues include your `package.json`
- For usage issues, it is recommended to post your code as [gist](https://gist.github.com/)
- For general questions, create a [discussion topic](https://github.com/vladmandic/human/discussions)

(pull request template, new file, 3 lines)
@@ -0,0 +1,3 @@
# Pull Request Template
<br>

.github/workflows/codeql-analysis.yml (new file, 67 lines)
@@ -0,0 +1,67 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ main ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ main ]
schedule:
- cron: '16 14 * * 6'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
language: [ 'javascript' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
# Learn more:
# https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
steps:
- name: Checkout repository
uses: actions/checkout@v2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v1
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1

.gitignore (new file, 9 lines)
@@ -0,0 +1,9 @@
node_modules/
types/lib
pnpm-lock.yaml
package-lock.json
*.swp
samples/**/*.mp4
samples/**/*.webm
temp
tmp

.gitmodules (new file, 3 lines)
@@ -0,0 +1,3 @@
[submodule "wiki"]
path = wiki
url = https://github.com/vladmandic/human.wiki.git

.hintrc (new file, 16 lines)
@@ -0,0 +1,16 @@
{
"extends": [
"web-recommended"
],
"browserslist": [
"chrome >= 90",
"edge >= 90",
"firefox >= 100",
"android >= 90",
"safari >= 15"
],
"hints": {
"no-inline-styles": "off",
"meta-charset-utf-8": "off"
}
}

.markdownlint.json (new file, 8 lines)
@@ -0,0 +1,8 @@
{
"MD012": false,
"MD013": false,
"MD029": false,
"MD033": false,
"MD036": false,
"MD041": false
}

.npmignore (new file, 7 lines)
@@ -0,0 +1,7 @@
node_modules
pnpm-lock.yaml
samples
typedoc
test
wiki
types/lib

.npmrc (new file, 5 lines)
@@ -0,0 +1,5 @@
force=true
omit=dev
legacy-peer-deps=true
strict-peer-dependencies=false
node-options='--no-deprecation'

.vscode/settings.json (new file, 10 lines)
@@ -0,0 +1,10 @@
{
"search.exclude": {
"dist/*": true,
"node_modules/*": true,
"types": true,
"typedoc": true,
},
"search.useGlobalIgnoreFiles": true,
"search.useParentIgnoreFiles": true
}

CHANGELOG.md (new file, 1236 lines)
File diff suppressed because it is too large.

CODE_OF_CONDUCT (new file, 33 lines)
@@ -0,0 +1,33 @@
# Code of Conduct
Use your best judgement
If it will possibly make others uncomfortable, do not post it
- Be respectful
Disagreement is not an opportunity to attack someone else's thoughts or opinions
Although views may differ, remember to approach every situation with patience and care
- Be considerate
Think about how your contribution will affect others in the community
- Be open minded
Embrace new people and new ideas. Our community is continually evolving and we welcome positive change
- Be mindful of your language
Any of the following behavior is unacceptable:
- Offensive comments of any kind
- Threats or intimidation
- Sexually explicit material
- Or any other kinds of harassment
If you believe someone is violating the code of conduct, we ask that you report it
Participants asked to stop any harassing behavior are expected to comply immediately
<br>
## Usage Restrictions
`Human` library does not allow usage in the following scenarios:
- Any life-critical decisions
- Any form of surveillance without consent of the user is explicitly out of scope

CONTRIBUTING (new file, 22 lines)
@@ -0,0 +1,22 @@
# Contributing Guidelines
Pull requests from everyone are welcome
Procedure for contributing:
- Create a fork of the repository on github
In the top right corner of the GitHub page, select "Fork"
It's recommended to fork the latest version from the main branch to avoid any possible conflicting code updates
- Clone your forked repository to your local system
`git clone https://github.com/<your-username>/<your-fork>`
- Make your changes
- Test your changes against code guidelines
`npm run lint`
- Test your changes in Browser and NodeJS
`npm run dev` and navigate to https://localhost:10031
`node test/test-node.js`
- Push changes to your fork
Exclude files in `/dist`, `/types`, and `/typedoc` from the commit as they are dynamically generated during build
- Submit a PR (pull request)
Your pull request will be reviewed and, pending review results, merged into the main branch

LICENSE (modified)
@@ -1,6 +1,6 @@
 MIT License
-Copyright (c) 2020 Vladimir Mandic
+Copyright (c) Vladimir Mandic
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.md (new file, 469 lines)
@@ -0,0 +1,469 @@
[![](https://img.shields.io/static/v1?label=Sponsor&message=%E2%9D%A4&logo=GitHub&color=%23fe8e86)](https://github.com/sponsors/vladmandic)
![Git Version](https://img.shields.io/github/package-json/v/vladmandic/human?style=flat-square&svg=true&label=git)
![NPM Version](https://img.shields.io/npm/v/@vladmandic/human.png?style=flat-square)
![Last Commit](https://img.shields.io/github/last-commit/vladmandic/human?style=flat-square&svg=true)
![License](https://img.shields.io/github/license/vladmandic/human?style=flat-square&svg=true)
![GitHub Status Checks](https://img.shields.io/github/checks-status/vladmandic/human/main?style=flat-square&svg=true)
# Human Library
**AI-powered 3D Face Detection & Rotation Tracking, Face Description & Recognition,**
**Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis,**
**Age & Gender & Emotion Prediction, Gaze Tracking, Gesture Recognition, Body Segmentation**
<br>
## Highlights
- Compatible with most server-side and client-side environments and frameworks
- Combines multiple machine learning models which can be switched on-demand depending on the use-case
- Related models are executed in an attention pipeline to provide details when needed
- Optimized input pre-processing that can enhance image quality of any type of input
- Detection of frame changes to trigger only required models for improved performance
- Intelligent temporal interpolation to provide smooth results regardless of processing performance
- Simple unified API
- Built-in Image, Video and WebCam handling
[*Jump to Quick Start*](#quick-start)
<br>
## Compatibility
**Browser**:
- Compatible with both desktop and mobile platforms
- Compatible with *WebGPU*, *WebGL*, *WASM*, *CPU* backends
- Compatible with *WebWorker* execution
- Compatible with *WebView*
- Primary platform: *Chromium*-based browsers
- Secondary platform: *Firefox*, *Safari*
**NodeJS**:
- Compatible with *WASM* backend for execution on architectures where *tensorflow* binaries are not available
- Compatible with *tfjs-node* using software execution via *tensorflow* shared libraries
- Compatible with *tfjs-node* using GPU-accelerated execution via *tensorflow* shared libraries and nVidia CUDA
- Supported versions are from **14.x** to **22.x**
- NodeJS version **23.x** is not supported due to breaking changes and issues with `@tensorflow/tfjs`
<br>
## Releases
- [Release Notes](https://github.com/vladmandic/human/releases)
- [NPM Link](https://www.npmjs.com/package/@vladmandic/human)
## Demos
*Check out [**Simple Live Demo**](https://vladmandic.github.io/human/demo/typescript/index.html) fully annotated app as a good starting point ([html](https://github.com/vladmandic/human/blob/main/demo/typescript/index.html))([code](https://github.com/vladmandic/human/blob/main/demo/typescript/index.ts))*
*Check out [**Main Live Demo**](https://vladmandic.github.io/human/demo/index.html) app for advanced processing of webcam, video stream or static images with all possible tunable options*
- To start video detection, simply press *Play*
- To process images, simply drag & drop in your Browser window
- Note: For optimal performance, select only models you'd like to use
- Note: If you have a modern GPU, *WebGL* (default) backend is preferred, otherwise select *WASM* backend
<br>
- [**List of all Demo applications**](https://github.com/vladmandic/human/wiki/Demos)
- [**Live Examples gallery**](https://vladmandic.github.io/human/samples/index.html)
### Browser Demos
*All browser demos are self-contained without any external dependencies*
- **Full** [[*Live*]](https://vladmandic.github.io/human/demo/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo): Main browser demo app that showcases all Human capabilities
- **Simple** [[*Live*]](https://vladmandic.github.io/human/demo/typescript/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/typescript): Simple WebCam processing demo in TypeScript
- **Embedded** [[*Live*]](https://vladmandic.github.io/human/demo/video/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/video/index.html): Even simpler demo with tiny code embedded in HTML file
- **Face Detect** [[*Live*]](https://vladmandic.github.io/human/demo/facedetect/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facedetect): Extracts faces from images and processes face details
- **Face Match** [[*Live*]](https://vladmandic.github.io/human/demo/facematch/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Extracts faces from images, calculates face descriptors and similarities, and matches them against a known database
- **Face ID** [[*Live*]](https://vladmandic.github.io/human/demo/faceid/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/faceid): Runs multiple checks to validate webcam input before performing face match to faces in IndexDB
- **Multi-thread** [[*Live*]](https://vladmandic.github.io/human/demo/multithread/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread): Runs each Human module in a separate web worker for highest possible performance
- **NextJS** [[*Live*]](https://vladmandic.github.io/human-next/out/index.html) [[*Details*]](https://github.com/vladmandic/human-next): Use Human with TypeScript, NextJS and ReactJS
- **ElectronJS** [[*Details*]](https://github.com/vladmandic/human-electron): Use Human with TypeScript and ElectronJS to create standalone cross-platform apps
- **3D Analysis with BabylonJS** [[*Live*]](https://vladmandic.github.io/human-motion/src/index.html) [[*Details*]](https://github.com/vladmandic/human-motion): 3D tracking and visualization of head, face, eye, body and hand
- **VRM Virtual Model Tracking with Three.JS** [[*Live*]](https://vladmandic.github.io/human-three-vrm/src/human-vrm.html) [[*Details*]](https://github.com/vladmandic/human-three-vrm): VR model with head, face, eye, body and hand tracking
- **VRM Virtual Model Tracking with BabylonJS** [[*Live*]](https://vladmandic.github.io/human-bjs-vrm/src/index.html) [[*Details*]](https://github.com/vladmandic/human-bjs-vrm): VR model with head, face, eye, body and hand tracking
### NodeJS Demos
*NodeJS demos may require extra dependencies which are used to decode inputs*
*See the header of each demo for its dependencies, as they are not automatically installed with `Human`*
- **Main** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node.js): Process images from files, folders or URLs using native methods
- **Canvas** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node-canvas.js): Process image from file or URL and draw results to a new image file using `node-canvas`
- **Video** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node-video.js): Processing of video input using `ffmpeg`
- **WebCam** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node-webcam.js): Processing of webcam screenshots using `fswebcam`
- **Events** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node-event.js): Showcases usage of `Human` eventing to get notifications on processing
- **Similarity** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node-similarity.js): Compares two input images for similarity of detected faces
- **Face Match** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch/node-match.js): Parallel processing of face **match** in multiple child worker threads
- **Multiple Workers** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread/node-multiprocess.js): Runs multiple parallel `human` detections by dispatching them to a pool of pre-created worker processes
- **Dynamic Load** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Loads Human dynamically with multiple different desired backends
## Project pages
- [**Code Repository**](https://github.com/vladmandic/human)
- [**NPM Package**](https://www.npmjs.com/package/@vladmandic/human)
- [**Issues Tracker**](https://github.com/vladmandic/human/issues)
- [**TypeDoc API Specification - Main class**](https://vladmandic.github.io/human/typedoc/classes/Human.html)
- [**TypeDoc API Specification - Full**](https://vladmandic.github.io/human/typedoc/)
- [**Change Log**](https://github.com/vladmandic/human/blob/main/CHANGELOG.md)
- [**Current To-do List**](https://github.com/vladmandic/human/blob/main/TODO.md)
## Wiki pages
- [**Home**](https://github.com/vladmandic/human/wiki)
- [**Installation**](https://github.com/vladmandic/human/wiki/Install)
- [**Usage & Functions**](https://github.com/vladmandic/human/wiki/Usage)
- [**Configuration Details**](https://github.com/vladmandic/human/wiki/Config)
- [**Result Details**](https://github.com/vladmandic/human/wiki/Result)
- [**Customizing Draw Methods**](https://github.com/vladmandic/human/wiki/Draw)
- [**Caching & Smoothing**](https://github.com/vladmandic/human/wiki/Caching)
- [**Input Processing**](https://github.com/vladmandic/human/wiki/Image)
- [**Face Recognition & Face Description**](https://github.com/vladmandic/human/wiki/Embedding)
- [**Gesture Recognition**](https://github.com/vladmandic/human/wiki/Gesture)
- [**Common Issues**](https://github.com/vladmandic/human/wiki/Issues)
- [**Background and Benchmarks**](https://github.com/vladmandic/human/wiki/Background)
## Additional notes
- [**Comparing Backends**](https://github.com/vladmandic/human/wiki/Backends)
- [**Development Server**](https://github.com/vladmandic/human/wiki/Development-Server)
- [**Build Process**](https://github.com/vladmandic/human/wiki/Build-Process)
- [**Adding Custom Modules**](https://github.com/vladmandic/human/wiki/Module)
- [**Performance Notes**](https://github.com/vladmandic/human/wiki/Performance)
- [**Performance Profiling**](https://github.com/vladmandic/human/wiki/Profiling)
- [**Platform Support**](https://github.com/vladmandic/human/wiki/Platforms)
- [**Diagnostic and Performance trace information**](https://github.com/vladmandic/human/wiki/Diag)
- [**Dockerize Human applications**](https://github.com/vladmandic/human/wiki/Docker)
- [**List of Models & Credits**](https://github.com/vladmandic/human/wiki/Models)
- [**Models Download Repository**](https://github.com/vladmandic/human-models)
- [**Security & Privacy Policy**](https://github.com/vladmandic/human/blob/main/SECURITY.md)
- [**License & Usage Restrictions**](https://github.com/vladmandic/human/blob/main/LICENSE)
<br>
*See [**issues**](https://github.com/vladmandic/human/issues?q=) and [**discussions**](https://github.com/vladmandic/human/discussions) for list of known limitations and planned enhancements*
*Suggestions are welcome!*
<hr><br>
## App Examples
Visit [Examples gallery](https://vladmandic.github.io/human/samples/index.html) for more examples
[<img src="assets/samples.jpg" width="640"/>](assets/samples.jpg)
<br>
## Options
All options as presented in the demo application...
[demo/index.html](demo/index.html)
[<img src="assets/screenshot-menu.png"/>](assets/screenshot-menu.png)
<br>
**Results Browser:**
[ *Demo -> Display -> Show Results* ]<br>
[<img src="assets/screenshot-results.png"/>](assets/screenshot-results.png)
<br>
## Advanced Examples
1. **Face Similarity Matching:**
Extracts all faces from provided input images,
sorts them by similarity to selected face
and optionally matches detected face with database of known people to guess their names
> [demo/facematch](demo/facematch/index.html)
[<img src="assets/screenshot-facematch.jpg" width="640"/>](assets/screenshot-facematch.jpg)
2. **Face Detect:**
Extracts all detected faces from loaded images on-demand and highlights face details on a selected face
> [demo/facedetect](demo/facedetect/index.html)
[<img src="assets/screenshot-facedetect.jpg" width="640"/>](assets/screenshot-facedetect.jpg)
3. **Face ID:**
Performs validation check on a webcam input to detect a real face and matches it to known faces stored in database
> [demo/faceid](demo/faceid/index.html)
[<img src="assets/screenshot-faceid.jpg" width="640"/>](assets/screenshot-faceid.jpg)
<br>
4. **3D Rendering:**
> [human-motion](https://github.com/vladmandic/human-motion)
[<img src="https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-face.jpg" width="640"/>](https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-face.jpg)
[<img src="https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-body.jpg" width="640"/>](https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-body.jpg)
[<img src="https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-hand.jpg" width="640"/>](https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-hand.jpg)
<br>
5. **VR Model Tracking:**
> [human-three-vrm](https://github.com/vladmandic/human-three-vrm)
> [human-bjs-vrm](https://github.com/vladmandic/human-bjs-vrm)
[<img src="https://github.com/vladmandic/human-three-vrm/raw/main/assets/human-vrm-screenshot.jpg" width="640"/>](https://github.com/vladmandic/human-three-vrm/raw/main/assets/human-vrm-screenshot.jpg)
6. **Human as OS native application:**
> [human-electron](https://github.com/vladmandic/human-electron)
<br>
**468-Point Face Mesh Details:**
(view in full resolution to see keypoints)
[<img src="assets/facemesh.png" width="400"/>](assets/facemesh.png)
<br><hr><br>
## Quick Start
Simply load `Human` (*IIFE version*) directly from a cloud CDN in your HTML file:
(pick one: `jsdelivr`, `unpkg` or `cdnjs`)
```html
<!DOCTYPE HTML>
<script src="https://cdn.jsdelivr.net/npm/@vladmandic/human/dist/human.js"></script>
<script src="https://unpkg.dev/@vladmandic/human/dist/human.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/human/3.0.0/human.js"></script>
```
For details, including how to use `Browser ESM` version or `NodeJS` version of `Human`, see [**Installation**](https://github.com/vladmandic/human/wiki/Install)
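For NodeJS, a minimal sketch (assuming `@vladmandic/human` and `@tensorflow/tfjs-node` are installed from NPM; file names are hypothetical):
```js
// NodeJS sketch: decode an image using tfjs-node and run detection
const fs = require('fs');
const tf = require('@tensorflow/tfjs-node');
const Human = require('@vladmandic/human/dist/human.node.js'); // CommonJS bundle

async function main() {
  const human = new Human.Human({ modelBasePath: 'file://models/' }); // load models from a local folder
  const buffer = fs.readFileSync('input.jpg'); // hypothetical input image
  const tensor = tf.node.decodeImage(buffer, 3); // decode into a 3-channel tensor
  const result = await human.detect(tensor); // run detection
  console.log('faces:', result.face.length);
  tf.dispose(tensor); // release tensor memory
}
main();
```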
<br>
## Code Examples
Simple app that uses Human to process video input and
draw output on screen using internal draw helper functions
```js
// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human.Human(config);
// select input HTMLVideoElement and output HTMLCanvasElement from page
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');
function detectVideo() {
// perform processing using default configuration
human.detect(inputVideo).then((result) => {
// result object will contain detected details
// as well as the processed canvas itself
// so lets first draw processed frame on canvas
human.draw.canvas(result.canvas, outputCanvas);
// then draw results on the same canvas
human.draw.face(outputCanvas, result.face);
human.draw.body(outputCanvas, result.body);
human.draw.hand(outputCanvas, result.hand);
human.draw.gesture(outputCanvas, result.gesture);
// and loop immediately to the next frame
requestAnimationFrame(detectVideo);
return result;
});
}
detectVideo();
```
or using `async/await`:
```js
// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');
async function detectVideo() {
const result = await human.detect(inputVideo); // run detection
human.draw.all(outputCanvas, result); // draw all results
requestAnimationFrame(detectVideo); // run loop
}
detectVideo(); // start loop
```
or using `Events`:
```js
// create instance of human with simple configuration using default values
const config = { backend: 'webgl' };
const human = new Human(config); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');
human.events.addEventListener('detect', () => { // event gets triggered when detect is complete
human.draw.all(outputCanvas, human.result); // draw all results
});
function detectVideo() {
human.detect(inputVideo) // run detection
.then(() => requestAnimationFrame(detectVideo)); // upon detect complete start processing of the next frame
}
detectVideo(); // start loop
```
or using interpolated results for smooth video processing by separating detection and drawing loops:
```js
const human = new Human(); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');
let result;
async function detectVideo() {
result = await human.detect(inputVideo); // run detection
requestAnimationFrame(detectVideo); // run detect loop
}
async function drawVideo() {
if (result) { // check if result is available
const interpolated = human.next(result); // get smoothened result using last-known results
human.draw.all(outputCanvas, interpolated); // draw the frame
}
requestAnimationFrame(drawVideo); // run draw loop
}
detectVideo(); // start detection loop
drawVideo(); // start draw loop
```
or same, but using built-in full video processing instead of running manual frame-by-frame loop:
```js
const human = new Human(); // create instance of Human
const inputVideo = document.getElementById('video-id');
const outputCanvas = document.getElementById('canvas-id');
async function drawResults() {
const interpolated = human.next(); // get smoothened result using last-known results
human.draw.all(outputCanvas, interpolated); // draw the frame
requestAnimationFrame(drawResults); // run draw loop
}
human.video(inputVideo); // start detection loop which continuously updates results
drawResults(); // start draw loop
```
or using built-in webcam helper methods that take care of video handling completely:
```js
const human = new Human(); // create instance of Human
const outputCanvas = document.getElementById('canvas-id');
async function drawResults() {
const interpolated = human.next(); // get smoothened result using last-known results
human.draw.canvas(outputCanvas, human.webcam.element); // draw current webcam frame
human.draw.all(outputCanvas, interpolated); // draw the frame detection results
requestAnimationFrame(drawResults); // run draw loop
}
await human.webcam.start({ crop: true });
human.video(human.webcam.element); // start detection loop which continuously updates results
drawResults(); // start draw loop
```
And for even better results, you can run detection in a separate web worker thread
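A minimal sketch of that pattern, assuming a hypothetical `human-worker.js` script and the IIFE bundle (the bundled multi-thread demo is more elaborate):
```js
// main thread: post video frames to a worker and consume returned results
const worker = new Worker('human-worker.js'); // hypothetical worker script
const video = document.getElementById('video-id'); // assumes video is already playing
async function sendFrame() {
  const bitmap = await createImageBitmap(video); // snapshot current video frame
  worker.postMessage({ bitmap }, [bitmap]); // transfer frame ownership to the worker
}
worker.onmessage = (msg) => {
  console.log('faces:', msg.data.result.face.length);
  sendFrame(); // request next frame once the previous result arrives
};
sendFrame();

// human-worker.js: run detection off the main thread
importScripts('https://cdn.jsdelivr.net/npm/@vladmandic/human/dist/human.js'); // IIFE bundle exposes Human namespace
const human = new Human.Human({ backend: 'webgl' }); // webgl inside a worker requires OffscreenCanvas support
onmessage = async (msg) => {
  const result = await human.detect(msg.data.bitmap); // ImageBitmap is a supported input type
  postMessage({ result: JSON.parse(JSON.stringify(result)) }); // strip non-serializable fields before posting
};
```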
<br><hr><br>
## Inputs
`Human` library can process all known input types:
- `Image`, `ImageData`, `ImageBitmap`, `Canvas`, `OffscreenCanvas`, `Tensor`,
- `HTMLImageElement`, `HTMLCanvasElement`, `HTMLVideoElement`, `HTMLMediaElement`
Additionally, `HTMLVideoElement`, `HTMLMediaElement` can be a standard `<video>` tag that links to:
- WebCam on user's system
- Any supported video type
e.g. `.mp4`, `.avi`, etc.
- Additional video types supported via *HTML5 Media Source Extensions*
e.g.: **HLS** (*HTTP Live Streaming*) using `hls.js` or **DASH** (*Dynamic Adaptive Streaming over HTTP*) using `dash.js` (see the sketch after this list)
- **WebRTC** media track using built-in support
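For example, a sketch of wiring an **HLS** stream into `Human`, assuming `hls.js` is loaded on the page and using a hypothetical stream URL:
```js
// attach an HLS stream to a standard <video> tag, then feed it to Human
const video = document.getElementById('video-id');
const hls = new Hls(); // provided by hls.js
hls.loadSource('https://example.com/stream.m3u8'); // hypothetical stream url
hls.attachMedia(video);
video.onplay = async () => {
  const human = new Human.Human();
  const result = await human.detect(video); // HTMLVideoElement is a supported input
  console.log(result.gesture);
};
```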
<br><hr><br>
## Detailed Usage
- [**Wiki Home**](https://github.com/vladmandic/human/wiki)
- [**List of all available methods, properies and namespaces**](https://github.com/vladmandic/human/wiki/Usage)
- [**TypeDoc API Specification - Main class**](https://vladmandic.github.io/human/typedoc/classes/Human.html)
- [**TypeDoc API Specification - Full**](https://vladmandic.github.io/human/typedoc/)
![typedoc](assets/screenshot-typedoc.png)
<br><hr><br>
## TypeDefs
`Human` is written using TypeScript strong typing and ships with full **TypeDefs** for all classes defined by the library bundled in `types/human.d.ts` and enabled by default
*Note*: This does not include embedded `tfjs`
If you want to use embedded `tfjs` inside `Human` (`human.tf` namespace) and still have full **typedefs**, add this code:
> import type * as tfjs from '@vladmandic/human/dist/tfjs.esm';
> const tf = human.tf as typeof tfjs;
This is not enabled by default as `Human` does not ship with full **TFJS TypeDefs** due to size considerations
Enabling `tfjs` TypeDefs as above creates additional project (dev-only as only types are required) dependencies as defined in `@vladmandic/human/dist/tfjs.esm.d.ts`:
> @tensorflow/tfjs-core, @tensorflow/tfjs-converter, @tensorflow/tfjs-backend-wasm, @tensorflow/tfjs-backend-webgl
<br><hr><br>
## Default models
Default models in Human library are:
- **Face Detection**: *MediaPipe BlazeFace Back variation*
- **Face Mesh**: *MediaPipe FaceMesh*
- **Face Iris Analysis**: *MediaPipe Iris*
- **Face Description**: *HSE FaceRes*
- **Emotion Detection**: *Oarriaga Emotion*
- **Body Analysis**: *MoveNet Lightning variation*
- **Hand Analysis**: *HandTrack & MediaPipe HandLandmarks*
- **Body Segmentation**: *Google Selfie*
- **Object Detection**: *CenterNet with MobileNet v3*
Note that alternative models are provided and can be enabled via configuration
For example, body pose detection by default uses *MoveNet Lightning*, but can be switched to *MoveNet Thunder* for higher precision or *MoveNet MultiPose* for multi-person detection or even *PoseNet*, *BlazePose* or *EfficientPose* depending on the use case
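As an illustration, a configuration sketch for that switch; the exact model filename follows the [human-models](https://github.com/vladmandic/human-models) repository and should be treated as an assumption:
```js
// switch body detection from default MoveNet Lightning to MoveNet MultiPose
const human = new Human.Human({
  modelBasePath: 'https://vladmandic.github.io/human-models/models/', // externally hosted models
  body: { enabled: true, modelPath: 'movenet-multipose.json', maxDetected: 4 }, // model filename is an assumption
});
```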
For more info, see [**Configuration Details**](https://github.com/vladmandic/human/wiki/Configuration) and [**List of Models**](https://github.com/vladmandic/human/wiki/Models)
<br><hr><br>
## Diagnostics
- [How to get diagnostic information or performance trace information](https://github.com/vladmandic/human/wiki/Diag)
<br><hr><br>
`Human` library is written in [TypeScript](https://www.typescriptlang.org/docs/handbook/intro.html) **5.1** using [TensorFlow/JS](https://www.tensorflow.org/js/) **4.10** and conforming to latest `JavaScript` [ECMAScript version 2022](https://262.ecma-international.org/) standard
Build target for distributables is `JavaScript` [ECMAScript version 2018](https://262.ecma-international.org/9.0/)
<br>
For details see [**Wiki Pages**](https://github.com/vladmandic/human/wiki)
and [**API Specification**](https://vladmandic.github.io/human/typedoc/classes/Human.html)
<br>
[![](https://img.shields.io/static/v1?label=Sponsor&message=%E2%9D%A4&logo=GitHub&color=%23fe8e86)](https://github.com/sponsors/vladmandic)
![Stars](https://img.shields.io/github/stars/vladmandic/human?style=flat-square&svg=true)
![Forks](https://badgen.net/github/forks/vladmandic/human)
![Code Size](https://img.shields.io/github/languages/code-size/vladmandic/human?style=flat-square&svg=true)
![CDN](https://data.jsdelivr.com/v1/package/npm/@vladmandic/human/badge)<br>
![Downloads](https://img.shields.io/npm/dw/@vladmandic/human.png?style=flat-square)
![Downloads](https://img.shields.io/npm/dm/@vladmandic/human.png?style=flat-square)
![Downloads](https://img.shields.io/npm/dy/@vladmandic/human.png?style=flat-square)

32
SECURITY.md Normal file
View File

@ -0,0 +1,32 @@
# Security & Privacy Policy
<br>
## Issues
All issues are tracked publicly on GitHub: <https://github.com/vladmandic/human/issues>
<br>
## Vulnerabilities
`Human` library code base and included dependencies are automatically scanned against known security vulnerabilities
Any code commit is validated before merge
- [Dependencies](https://github.com/vladmandic/human/security/dependabot)
- [Scanning Alerts](https://github.com/vladmandic/human/security/code-scanning)
<br>
## Privacy
`Human` library and included demo apps:
- Are fully self-contained and do not send or share data of any kind with external targets
- Do not store any user or system data tracking, user provided inputs (images, video) or detection results
- Do not utilize any analytic services (such as Google Analytics)
`Human` library can establish external connections *only* for the following purposes and *only* when explicitly configured by the user (see the sketch below):
- Load models from externally hosted site (e.g. CDN)
- Load inputs for detection from *http & https* sources
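Both connection types are explicit opt-ins; a minimal sketch using a hypothetical input URL:
```js
// external connections happen only when explicitly configured
const human = new Human.Human({
  modelBasePath: 'https://vladmandic.github.io/human-models/models/', // load models from an externally hosted site
});
const img = new Image();
img.crossOrigin = 'anonymous'; // required to read pixels from cross-origin images
img.src = 'https://example.com/photo.jpg'; // hypothetical https input source
img.onload = async () => console.log(await human.detect(img));
```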

38
TODO.md Normal file
View File

@ -0,0 +1,38 @@
# To-Do list for Human library
## Work-in-Progress
<hr><br>
## Known Issues & Limitations
### Face with Attention
`FaceMesh-Attention` is not supported when using `WASM` backend due to missing kernel op in **TFJS**
No issues with default model `FaceMesh`
### Object Detection
`NanoDet` model is not supported when using `WASM` backend due to missing kernel op in **TFJS**
No issues with default model `MB3-CenterNet`
### Body Detection using MoveNet-MultiPose
Model does not return valid detection scores (all other functionality is unaffected)
### Firefox
Running in **web workers** requires `OffscreenCanvas` which is still disabled by default in **Firefox**
Enable via `about:config` -> `gfx.offscreencanvas.enabled`
[Details](https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas#browser_compatibility)
### Safari
No support for running in **web workers** as Safari still does not support `OffscreenCanvas`
[Details](https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas#browser_compatibility)
### React-Native
`Human` support for **React-Native** is best-effort, but not part of the main development focus
<hr><br>

4
assets/README.md Normal file
View File

@ -0,0 +1,4 @@
# Human Library: Static Assets
Static assets used by `Human` library demos and/or referenced by Wiki pages

BIN
assets/facemesh.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 595 KiB

BIN
assets/icon.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 139 KiB

BIN
assets/lato-light.woff2 Normal file

Binary file not shown.

BIN
assets/samples.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 261 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 70 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 47 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 321 KiB

BIN
assets/screenshot-menu.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 22 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 14 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 38 KiB

BIN
assets/screenshot-vrm.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 42 KiB

153
build.js Normal file
View File

@ -0,0 +1,153 @@
const fs = require('fs');
const path = require('path');
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
const Build = require('@vladmandic/build').Build; // eslint-disable-line node/no-unpublished-require
const APIExtractor = require('@microsoft/api-extractor'); // eslint-disable-line node/no-unpublished-require
const tf = require('@tensorflow/tfjs-node'); // eslint-disable-line node/no-unpublished-require
const packageJSON = require('./package.json');
const logFile = 'test/build.log';
const modelsOut = 'models/models.json';
const modelsFolders = [
'./models',
'../human-models/models',
'../blazepose/model/',
'../anti-spoofing/model',
'../efficientpose/models',
'../insightface/models',
'../movenet/models',
'../nanodet/models',
];
const apiExtractorIgnoreList = [ // eslint-disable-line no-unused-vars
'ae-missing-release-tag',
'tsdoc-param-tag-missing-hyphen',
'tsdoc-escape-right-brace',
'tsdoc-undefined-tag',
'tsdoc-escape-greater-than',
'ae-unresolved-link',
'ae-forgotten-export',
'tsdoc-malformed-inline-tag',
'tsdoc-unnecessary-backslash',
];
const regEx = [
{ search: 'types="@webgpu/types/dist"', replace: 'path="../src/types/webgpu.d.ts"' },
{ search: 'types="offscreencanvas"', replace: 'path="../src/types/offscreencanvas.d.ts"' },
];
function copyFile(src, dst) {
if (!fs.existsSync(src)) {
log.warn('Copy:', { input: src, output: dst });
return;
}
log.state('Copy:', { input: src, output: dst });
const buffer = fs.readFileSync(src);
fs.writeFileSync(dst, buffer);
}
function writeFile(str, dst) {
log.state('Write:', { output: dst });
fs.writeFileSync(dst, str);
}
function regExFile(src, entries) {
if (!fs.existsSync(src)) {
log.warn('Filter:', { src });
return;
}
log.state('Filter:', { input: src });
for (const entry of entries) {
const buffer = fs.readFileSync(src, 'UTF-8');
const lines = buffer.split(/\r?\n/);
const out = [];
for (const line of lines) {
if (line.includes(entry.search)) out.push(line.replace(entry.search, entry.replace));
else out.push(line);
}
fs.writeFileSync(src, out.join('\n'));
}
}
async function analyzeModels() {
log.info('Analyze models:', { folders: modelsFolders.length, result: modelsOut });
let totalSize = 0;
const models = {};
const allModels = [];
for (const folder of modelsFolders) {
try {
if (!fs.existsSync(folder)) continue;
const stat = fs.statSync(folder);
if (!stat.isDirectory()) continue;
const dir = fs.readdirSync(folder);
const found = dir.map((f) => `file://${folder}/${f}`).filter((f) => f.endsWith('json'));
log.state('Models', { folder, models: found.length });
allModels.push(...found);
} catch {
// log.warn('Cannot enumerate:', modelFolder);
}
}
for (const url of allModels) {
// if (!f.endsWith('.json')) continue;
// const url = `file://${modelsDir}/${f}`;
const model = new tf.GraphModel(url); // create model prototype and decide whether to load from cache or from the original model url
model.findIOHandler();
const artifacts = await model.handler.load();
const size = artifacts?.weightData?.byteLength || 0;
totalSize += size;
const name = path.basename(url).replace('.json', '');
if (!models[name]) models[name] = size;
}
const json = JSON.stringify(models, null, 2);
fs.writeFileSync(modelsOut, json);
log.state('Models:', { count: Object.keys(models).length, totalSize });
}
async function main() {
log.logFile(logFile);
log.data('Build', { name: packageJSON.name, version: packageJSON.version });
// run production build
const build = new Build();
await build.run('production');
// patch tfjs typedefs
copyFile('node_modules/@vladmandic/tfjs/types/tfjs-core.d.ts', 'types/tfjs-core.d.ts');
copyFile('node_modules/@vladmandic/tfjs/types/tfjs.d.ts', 'types/tfjs.esm.d.ts');
copyFile('src/types/tsconfig.json', 'types/tsconfig.json');
copyFile('src/types/eslint.json', 'types/.eslintrc.json');
copyFile('src/types/tfjs.esm.d.ts', 'dist/tfjs.esm.d.ts');
regExFile('types/tfjs-core.d.ts', regEx);
// run api-extractor to create typedef rollup
const extractorConfig = APIExtractor.ExtractorConfig.loadFileAndPrepare('.api-extractor.json');
try {
const extractorResult = APIExtractor.Extractor.invoke(extractorConfig, {
localBuild: true,
showVerboseMessages: false,
messageCallback: (msg) => {
msg.handled = true;
if (msg.logLevel === 'none' || msg.logLevel === 'verbose' || msg.logLevel === 'info') return;
if (msg.sourceFilePath?.includes('/node_modules/')) return;
// if (apiExtractorIgnoreList.reduce((prev, curr) => prev || msg.messageId.includes(curr), false)) return; // those are external issues outside of human control
log.data('API', { level: msg.logLevel, category: msg.category, id: msg.messageId, file: msg.sourceFilePath, line: msg.sourceFileLine, text: msg.text });
},
});
log.state('API-Extractor:', { succeeded: extractorResult.succeeded, errors: extractorResult.errorCount, warnings: extractorResult.warningCount });
} catch (err) {
log.error('API-Extractor:', err);
}
regExFile('types/human.d.ts', regEx);
writeFile('export * from \'../types/human\';', 'dist/human.esm-nobundle.d.ts');
writeFile('export * from \'../types/human\';', 'dist/human.esm.d.ts');
writeFile('export * from \'../types/human\';', 'dist/human.d.ts');
writeFile('export * from \'../types/human\';', 'dist/human.node-gpu.d.ts');
writeFile('export * from \'../types/human\';', 'dist/human.node.d.ts');
writeFile('export * from \'../types/human\';', 'dist/human.node-wasm.d.ts');
// generate model signature
await analyzeModels();
log.info('Human Build complete...', { logFile });
}
main();

67
demo/README.md Normal file
View File

@ -0,0 +1,67 @@
# Human Library: Demos
For details on other demos see Wiki: [**Demos**](https://github.com/vladmandic/human/wiki/Demos)
## Main Demo
`index.html`: Full demo using `Human` ESM module running in browsers
Includes:
- Selectable inputs:
- Sample images
- Image via drag & drop
- Image via URL param
- WebCam input
- Video stream
- WebRTC stream
- Selectable active `Human` modules
- With interactive module params
- Interactive `Human` image filters
- Selectable interactive `results` browser
- Selectable `backend`
- Multiple execution methods:
- Sync vs Async
- in main thread or web worker
- live on GitHub Pages, on a user-hosted web server or via the included [**micro http2 server**](https://github.com/vladmandic/human/wiki/Development-Server)
### Demo Options
- General `Human` library options
in `index.js:userConfig`
- General `Human` `draw` options
in `index.js:drawOptions`
- Demo PWA options
in `index.js:pwa`
- Demo specific options
in `index.js:ui`
```js
const ui = {
console: true, // log messages to browser console
useWorker: true, // use web workers for processing
buffered: true, // should output be buffered between frames
interpolated: true, // should output be interpolated for smoothness between frames
results: false, // show results tree
useWebRTC: false, // use webrtc as camera source instead of local webcam
};
```
The demo implements several ways to use the `Human` library
### URL Params
Demo app can use URL parameters to override configuration values
For example:
- Force using `WASM` as backend: <https://vladmandic.github.io/human/demo/index.html?backend=wasm>
- Enable `WebWorkers`: <https://vladmandic.github.io/human/demo/index.html?worker=true>
- Skip pre-loading and warming up: <https://vladmandic.github.io/human/demo/index.html?preload=false&warmup=false>
### WebRTC
Note that WebRTC connection requires a WebRTC server that provides a compatible media track such as H.264 video track
For such a WebRTC server implementation see <https://github.com/vladmandic/stream-rtsp> project
that implements a connection to IP Security camera using RTSP protocol and transcodes it to WebRTC
ready to be consumed by a client such as `Human`
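A client-side sketch of consuming such a stream (signaling details vary per server and are omitted):
```js
// attach an incoming WebRTC media track to a standard <video> tag for Human to consume
const pc = new RTCPeerConnection();
pc.ontrack = (event) => {
  const video = document.getElementById('video'); // hypothetical element id
  video.srcObject = event.streams[0]; // the <video> tag now carries the WebRTC stream
  void video.play();
};
// ... offer/answer exchange with the WebRTC server goes here ...
// once playing, the video element can be passed to human.detect(video) like any other input
```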

160
demo/facedetect/facedetect.js Normal file
View File

@ -0,0 +1,160 @@
/**
* Human demo for browsers
*
* Demo for face detection
*/
/** @type {Human} */
import { Human } from '../../dist/human.esm.js';
let loader;
const humanConfig = { // user configuration for human, used to fine-tune behavior
cacheSensitivity: 0,
debug: true,
modelBasePath: 'https://vladmandic.github.io/human-models/models/',
filter: { enabled: true, equalization: false, flip: false },
face: {
enabled: true,
detector: { rotation: false, maxDetected: 100, minConfidence: 0.2, return: true, square: false },
iris: { enabled: true },
description: { enabled: true },
emotion: { enabled: true },
antispoof: { enabled: true },
liveness: { enabled: true },
},
body: { enabled: false },
hand: { enabled: false },
object: { enabled: false },
gesture: { enabled: false },
segmentation: { enabled: false },
};
const human = new Human(humanConfig); // new instance of human
export const showLoader = (msg) => { loader.setAttribute('msg', msg); loader.style.display = 'block'; };
export const hideLoader = () => loader.style.display = 'none';
class ComponentLoader extends HTMLElement { // watch for attributes
message = document.createElement('div');
static get observedAttributes() { return ['msg']; }
attributeChangedCallback(_name, _prevVal, currVal) {
this.message.innerHTML = currVal;
}
connectedCallback() { // triggered on insert
this.attachShadow({ mode: 'open' });
const css = document.createElement('style');
css.innerHTML = `
.loader-container { top: 450px; justify-content: center; position: fixed; width: 100%; }
.loader-message { font-size: 1.5rem; padding: 1rem; }
.loader { width: 300px; height: 300px; border: 3px solid transparent; border-radius: 50%; border-top: 4px solid #f15e41; animation: spin 4s linear infinite; position: relative; }
.loader::before, .loader::after { content: ""; position: absolute; top: 6px; bottom: 6px; left: 6px; right: 6px; border-radius: 50%; border: 4px solid transparent; }
.loader::before { border-top-color: #bad375; animation: 3s spin linear infinite; }
.loader::after { border-top-color: #26a9e0; animation: spin 1.5s linear infinite; }
@keyframes spin { from { transform: rotate(0deg); } to { transform: rotate(360deg); } }
`;
const container = document.createElement('div');
container.id = 'loader-container';
container.className = 'loader-container';
loader = document.createElement('div');
loader.id = 'loader';
loader.className = 'loader';
this.message.id = 'loader-message';
this.message.className = 'loader-message';
this.message.innerHTML = '';
container.appendChild(this.message);
container.appendChild(loader);
this.shadowRoot?.append(css, container);
loader = this; // eslint-disable-line @typescript-eslint/no-this-alias
}
}
customElements.define('component-loader', ComponentLoader);
function addFace(face, source) {
const deg = (rad) => Math.round((rad || 0) * 180 / Math.PI);
const canvas = document.createElement('canvas');
const emotion = face.emotion?.map((e) => `${Math.round(100 * e.score)}% ${e.emotion}`) || [];
const rotation = `pitch ${deg(face.rotation?.angle.pitch)}° | roll ${deg(face.rotation?.angle.roll)}° | yaw ${deg(face.rotation?.angle.yaw)}°`;
const gaze = `direction ${deg(face.rotation?.gaze.bearing)}° strength ${Math.round(100 * (face.rotation.gaze.strength || 0))}%`;
canvas.title = `
source: ${source}
score: ${Math.round(100 * face.boxScore)}% detection ${Math.round(100 * face.faceScore)}% analysis
age: ${face.age} years | gender: ${face.gender} score ${Math.round(100 * face.genderScore)}%
emotion: ${emotion.join(' | ')}
head rotation: ${rotation}
eyes gaze: ${gaze}
camera distance: ${face.distance}m | ${Math.round(100 * face.distance / 2.54)}in
check: ${Math.round(100 * face.real)}% real ${Math.round(100 * face.live)}% live
`.replace(/ /g, ' ');
canvas.onclick = (e) => {
e.preventDefault();
document.getElementById('description').innerHTML = canvas.title;
};
human.draw.tensor(face.tensor, canvas);
human.tf.dispose(face.tensor);
return canvas;
}
async function addFaces(imgEl) {
showLoader('human: busy');
const faceEl = document.getElementById('faces');
faceEl.innerHTML = '';
const res = await human.detect(imgEl);
console.log(res); // eslint-disable-line no-console
document.getElementById('description').innerHTML = `detected ${res.face.length} faces`;
for (const face of res.face) {
const canvas = addFace(face, imgEl.src.substring(0, 64));
faceEl.appendChild(canvas);
}
hideLoader();
}
function addImage(imageUri) {
const imgEl = new Image(256, 256);
imgEl.onload = () => {
const images = document.getElementById('images');
images.appendChild(imgEl); // add image if loaded ok
images.scroll(images?.offsetWidth, 0);
};
imgEl.onerror = () => console.error('addImage', { imageUri }); // eslint-disable-line no-console
imgEl.onclick = () => addFaces(imgEl);
imgEl.title = imageUri.substring(0, 64);
imgEl.src = encodeURI(imageUri);
}
async function initDragAndDrop() {
const reader = new FileReader();
reader.onload = async (e) => {
if (e.target.result.startsWith('data:image')) await addImage(e.target.result);
};
document.body.addEventListener('dragenter', (evt) => evt.preventDefault());
document.body.addEventListener('dragleave', (evt) => evt.preventDefault());
document.body.addEventListener('dragover', (evt) => evt.preventDefault());
document.body.addEventListener('drop', async (evt) => {
evt.preventDefault();
evt.dataTransfer.dropEffect = 'copy';
for (const f of evt.dataTransfer.files) reader.readAsDataURL(f);
});
document.body.onclick = (e) => {
if (e.target.localName !== 'canvas') document.getElementById('description').innerHTML = '';
};
}
async function main() {
showLoader('loading models');
await human.load();
showLoader('compiling models');
await human.warmup();
showLoader('loading images');
const images = ['group-1.jpg', 'group-2.jpg', 'group-3.jpg', 'group-4.jpg', 'group-5.jpg', 'group-6.jpg', 'group-7.jpg', 'solvay1927.jpg', 'stock-group-1.jpg', 'stock-group-2.jpg', 'stock-models-6.jpg', 'stock-models-7.jpg'];
const imageUris = images.map((a) => `../../samples/in/${a}`);
for (let i = 0; i < imageUris.length; i++) addImage(imageUris[i]);
initDragAndDrop();
hideLoader();
}
window.onload = main;

43
demo/facedetect/index.html Normal file
View File

@ -0,0 +1,43 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Human</title>
<!-- <meta http-equiv="content-type" content="text/html; charset=utf-8"> -->
<meta name="viewport" content="width=device-width, shrink-to-fit=yes">
<meta name="keywords" content="Human">
<meta name="application-name" content="Human">
<meta name="description" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="msapplication-tooltip" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="theme-color" content="#000000">
<link rel="manifest" href="../manifest.webmanifest">
<link rel="shortcut icon" href="../../favicon.ico" type="image/x-icon">
<link rel="apple-touch-icon" href="../../assets/icon.png">
<script src="./facedetect.js" type="module"></script>
<style>
img { object-fit: contain; }
img:hover { filter: grayscale(1); transform: scale(1.08); transition : all 0.3s ease; }
@font-face { font-family: 'Lato'; font-display: swap; font-style: normal; font-weight: 100; src: local('Lato'), url('../../assets/lato-light.woff2') }
html { font-family: 'Lato', 'Segoe UI'; font-size: 24px; font-variant: small-caps; }
body { margin: 24px; background: black; color: white; overflow: hidden; text-align: -webkit-center; width: 100vw; height: 100vh; }
::-webkit-scrollbar { height: 8px; border: 0; border-radius: 0; }
::-webkit-scrollbar-thumb { background: grey }
::-webkit-scrollbar-track { margin: 3px; }
canvas { width: 192px; height: 192px; margin: 2px; padding: 2px; cursor: grab; transform: scale(1.00); transition : all 0.3s ease; }
canvas:hover { filter: grayscale(1); transform: scale(1.08); transition : all 0.3s ease; }
</style>
</head>
<body>
<component-loader></component-loader>
<div style="display: flex">
<div>
<div style="margin: 24px">select image to show detected faces<br>drag & drop to add your images</div>
<div id="images" style="display: flex; width: 98vw; overflow-x: auto; overflow-y: hidden; scroll-behavior: smooth"></div>
</div>
</div>
<div id="list" style="height: 10px"></div>
<div style="margin: 24px">hover or click on face to show details</div>
<div id="faces" style="overflow-y: auto"></div>
<div id="description" style="white-space: pre;"></div>
</body>
</html>

42
demo/faceid/README.md Normal file
View File

@ -0,0 +1,42 @@
# Human Face Recognition: FaceID
`faceid` runs multiple checks to validate webcam input before performing face match
Detected face image and descriptor are stored in client-side IndexDB
## Workflow
- Starts webcam
- Waits until input video contains validated face or timeout is reached
- Number of people
- Face size
- Face and gaze direction
- Detection scores
- Blink detection (including temporal check for blink speed) to verify live input
- Runs `antispoofing` optional module
- Runs `liveness` optional module
- Runs match against database of registered faces and presents best match with scores
## Notes
Both `antispoof` and `liveness` models are tiny and
designed to serve as a quick check when used together with other indicators:
- size below 1MB
- very quick inference times as they are very simple (11 ops for antispoof and 23 ops for liveness)
- trained on low-resolution inputs
### Anti-spoofing Module
- Checks if input is realistic (e.g. computer generated faces)
- Configuration: `human.config.face.antispoof.enabled`
- Result: `human.result.face[0].real` as score
### Liveness Module
- Checks if input has obvious artifacts due to recording (e.g. playing back phone recording of a face)
- Configuration: `human.config.face.liveness.enabled`
- Result: `human.result.face[0].live` as score
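A minimal sketch of enabling both modules and gating on their scores, assuming the `Human` class is already imported and using illustrative thresholds:
```js
// enable optional antispoof and liveness modules and gate on their scores
const human = new Human({
  face: { enabled: true, antispoof: { enabled: true }, liveness: { enabled: true } },
});
async function isRealAndLive(input) {
  const result = await human.detect(input);
  const face = result.face[0];
  if (!face) return false; // no face detected
  return (face.real || 0) > 0.6 && (face.live || 0) > 0.6; // face.real: antispoof score, face.live: liveness score
}
```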
### Models
**FaceID** is compatible with
- `faceres.json` (default) performs combined age/gender/descriptor analysis
- `faceres-deep.json` higher resolution variation of `faceres`
- `insightface` alternative model for face descriptor analysis
- `mobilefacenet` alternative model for face descriptor analysis

49
demo/faceid/index.html Normal file
View File

@ -0,0 +1,49 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Human: Face Recognition</title>
<meta name="viewport" content="width=device-width" id="viewport">
<meta name="keywords" content="Human">
<meta name="application-name" content="Human">
<meta name="description" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="msapplication-tooltip" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="theme-color" content="#000000">
<link rel="manifest" href="../manifest.webmanifest">
<link rel="shortcut icon" href="../../favicon.ico" type="image/x-icon">
<link rel="apple-touch-icon" href="../../assets/icon.png">
<script src="./index.js" type="module"></script>
<style>
@font-face { font-family: 'Lato'; font-display: swap; font-style: normal; font-weight: 100; src: local('Lato'), url('../../assets/lato-light.woff2') }
html { font-family: 'Lato', 'Segoe UI'; font-size: 16px; font-variant: small-caps; }
body { margin: 0; padding: 16px; background: black; color: white; overflow-x: hidden; width: 100vw; height: 100vh; }
body::-webkit-scrollbar { display: none; }
.button { padding: 2px; cursor: pointer; box-shadow: 2px 2px black; width: 64px; text-align: center; place-content: center; margin-left: 16px; height: 16px; display: none }
.ok { position: absolute; top: 64px; right: 20px; width: 150px; background-color: grey; padding: 4px; color: black; font-size: 14px }
</style>
</head>
<body>
<div style="padding: 8px">
<h1 style="margin: 0">faceid demo using human library</h1>
look directly at camera and make sure that detection passes all of the required tests noted on the right hand side of the screen<br>
if input does not satisfy tests within a specific timeout, no image will be selected<br>
once face image is approved, it will be compared with existing face database<br>
you can also store face descriptor with label in a browser's indexdb for future usage<br>
<br>
<i>note: this is not equivalent to full faceid methods as used by modern mobile phones or windows hello<br>
as they rely on additional infrared sensors and depth-sensing and not just camera image for additional levels of security</i>
</div>
<canvas id="canvas" style="padding: 8px"></canvas>
<canvas id="source" style="padding: 8px"></canvas>
<video id="video" playsinline style="display: none"></video>
<pre id="log" style="padding: 8px"></pre>
<div id="match" style="display: none; padding: 8px">
<label for="name">name:</label>
<input id="name" type="text" value="" style="height: 16px; border: none; padding: 2px; margin-left: 8px">
<span id="save" class="button" style="background-color: royalblue">save</span>
<span id="delete" class="button" style="background-color: lightcoral">delete</span>
</div>
<div id="retry" class="button" style="background-color: darkslategray; width: 93%; margin-top: 32px; padding: 12px">retry</div>
<div id="ok"></div>
</body>
</html>

9
demo/faceid/index.js Normal file

File diff suppressed because one or more lines are too long

7
demo/faceid/index.js.map Normal file

File diff suppressed because one or more lines are too long

318
demo/faceid/index.ts Normal file
View File

@ -0,0 +1,318 @@
/**
* Human demo for browsers
* @default Human Library
* @summary <https://github.com/vladmandic/human>
* @author <https://github.com/vladmandic>
* @copyright <https://github.com/vladmandic>
* @license MIT
*/
import * as H from '../../dist/human.esm.js'; // equivalent of @vladmandic/Human
import * as indexDb from './indexdb'; // methods to deal with indexdb
const humanConfig = { // user configuration for human, used to fine-tune behavior
cacheSensitivity: 0.01,
modelBasePath: '../../models',
filter: { enabled: true, equalization: true }, // let's run with histogram equalization
debug: true,
face: {
enabled: true,
detector: { rotation: true, return: true, mask: false }, // return tensor is used to get detected face image
description: { enabled: true }, // default model for face descriptor extraction is faceres
// mobilefacenet: { enabled: true, modelPath: 'https://vladmandic.github.io/human-models/models/mobilefacenet.json' }, // alternative model
// insightface: { enabled: true, modelPath: 'https://vladmandic.github.io/insightface/models/insightface-mobilenet-swish.json' }, // alternative model
iris: { enabled: true }, // needed to determine gaze direction
emotion: { enabled: false }, // not needed
antispoof: { enabled: true }, // enable optional antispoof module
liveness: { enabled: true }, // enable optional liveness module
},
body: { enabled: false },
hand: { enabled: false },
object: { enabled: false },
gesture: { enabled: true }, // parses face and iris gestures
};
// const matchOptions = { order: 2, multiplier: 1000, min: 0.0, max: 1.0 }; // for embedding model
const matchOptions = { order: 2, multiplier: 25, min: 0.2, max: 0.8 }; // for faceres model
const options = {
minConfidence: 0.6, // overall face confidence for box, face, gender, real, live
minSize: 224, // min input to face descriptor model before degradation
maxTime: 30000, // max time before giving up
blinkMin: 10, // minimum duration of a valid blink
blinkMax: 800, // maximum duration of a valid blink
threshold: 0.5, // minimum similarity
distanceMin: 0.4, // closest that face is allowed to be to the camera (in meters)
distanceMax: 1.0, // farthest that face is allowed to be from the camera (in meters)
mask: humanConfig.face.detector.mask,
rotation: humanConfig.face.detector.rotation,
...matchOptions,
};
const ok: Record<string, { status: boolean | undefined, val: number }> = { // must meet all rules
faceCount: { status: false, val: 0 },
faceConfidence: { status: false, val: 0 },
facingCenter: { status: false, val: 0 },
lookingCenter: { status: false, val: 0 },
blinkDetected: { status: false, val: 0 },
faceSize: { status: false, val: 0 },
antispoofCheck: { status: false, val: 0 },
livenessCheck: { status: false, val: 0 },
distance: { status: false, val: 0 },
age: { status: false, val: 0 },
gender: { status: false, val: 0 },
timeout: { status: true, val: 0 },
descriptor: { status: false, val: 0 },
elapsedMs: { status: undefined, val: 0 }, // total time while waiting for valid face
detectFPS: { status: undefined, val: 0 }, // mark detection fps performance
drawFPS: { status: undefined, val: 0 }, // mark redraw fps performance
};
const allOk = () => ok.faceCount.status
&& ok.faceSize.status
&& ok.blinkDetected.status
&& ok.facingCenter.status
&& ok.lookingCenter.status
&& ok.faceConfidence.status
&& ok.antispoofCheck.status
&& ok.livenessCheck.status
&& ok.distance.status
&& ok.descriptor.status
&& ok.age.status
&& ok.gender.status;
const current: { face: H.FaceResult | null, record: indexDb.FaceRecord | null } = { face: null, record: null }; // current face record and matched database record
const blink = { // internal timers for blink start/end/duration
start: 0,
end: 0,
time: 0,
};
// let db: Array<{ name: string, source: string, embedding: number[] }> = []; // holds loaded face descriptor database
const human = new H.Human(humanConfig); // create instance of human with overrides from user configuration
human.env.perfadd = false; // is performance data showing instant or total values
human.draw.options.font = 'small-caps 18px "Lato"'; // set font used to draw labels when using draw methods
human.draw.options.lineHeight = 20;
const dom = { // grab instances of dom objects so we dont have to look them up later
video: document.getElementById('video') as HTMLVideoElement,
canvas: document.getElementById('canvas') as HTMLCanvasElement,
log: document.getElementById('log') as HTMLPreElement,
fps: document.getElementById('fps') as HTMLPreElement,
match: document.getElementById('match') as HTMLDivElement,
name: document.getElementById('name') as HTMLInputElement,
save: document.getElementById('save') as HTMLSpanElement,
delete: document.getElementById('delete') as HTMLSpanElement,
retry: document.getElementById('retry') as HTMLDivElement,
source: document.getElementById('source') as HTMLCanvasElement,
ok: document.getElementById('ok') as HTMLDivElement,
};
const timestamp = { detect: 0, draw: 0 }; // holds information used to calculate performance and possible memory leaks
let startTime = 0;
const log = (...msg) => { // helper method to output messages
dom.log.innerText += msg.join(' ') + '\n';
console.log(...msg); // eslint-disable-line no-console
};
async function webCam() { // initialize webcam
// @ts-ignore resizeMode is not yet defined in tslib
const cameraOptions: MediaStreamConstraints = { audio: false, video: { facingMode: 'user', resizeMode: 'none', width: { ideal: document.body.clientWidth } } };
const stream: MediaStream = await navigator.mediaDevices.getUserMedia(cameraOptions);
const ready = new Promise((resolve) => { dom.video.onloadeddata = () => resolve(true); });
dom.video.srcObject = stream;
void dom.video.play();
await ready;
dom.canvas.width = dom.video.videoWidth;
dom.canvas.height = dom.video.videoHeight;
dom.canvas.style.width = '50%';
dom.canvas.style.height = '50%';
if (human.env.initial) log('video:', dom.video.videoWidth, dom.video.videoHeight, '|', stream.getVideoTracks()[0].label);
dom.canvas.onclick = () => { // pause when clicked on screen and resume on next click
if (dom.video.paused) void dom.video.play();
else dom.video.pause();
};
}
async function detectionLoop() { // main detection loop
if (!dom.video.paused) {
if (current.face?.tensor) human.tf.dispose(current.face.tensor); // dispose previous tensor
await human.detect(dom.video); // actual detection; we're not capturing output in a local variable as it can also be reached via human.result
const now = human.now();
ok.detectFPS.val = Math.round(10000 / (now - timestamp.detect)) / 10;
timestamp.detect = now;
requestAnimationFrame(detectionLoop); // start new frame immediately
}
}
function drawValidationTests() {
let y = 32;
for (const [key, val] of Object.entries(ok)) {
let el = document.getElementById(`ok-${key}`);
if (!el) {
el = document.createElement('div');
el.id = `ok-${key}`;
el.innerText = key;
el.className = 'ok';
el.style.top = `${y}px`;
dom.ok.appendChild(el);
}
if (typeof val.status === 'boolean') el.style.backgroundColor = val.status ? 'lightgreen' : 'lightcoral';
const status = val.status ? 'ok' : 'fail';
el.innerText = `${key}: ${val.val === 0 ? status : val.val}`;
y += 28;
}
}
async function validationLoop(): Promise<H.FaceResult> { // main screen refresh loop
const interpolated = human.next(human.result); // smoothen result using last-known results
human.draw.canvas(dom.video, dom.canvas); // draw canvas to screen
await human.draw.all(dom.canvas, interpolated); // draw labels, boxes, lines, etc.
const now = human.now();
ok.drawFPS.val = Math.round(10000 / (now - timestamp.draw)) / 10;
timestamp.draw = now;
ok.faceCount.val = human.result.face.length;
ok.faceCount.status = ok.faceCount.val === 1; // must be exactly one detected face
if (ok.faceCount.status) { // skip the rest if no face
const gestures: string[] = Object.values(human.result.gesture).map((gesture: H.GestureResult) => gesture.gesture); // flatten all gestures
if (gestures.includes('blink left eye') || gestures.includes('blink right eye')) blink.start = human.now(); // blink starts when eyes get closed
if (blink.start > 0 && !gestures.includes('blink left eye') && !gestures.includes('blink right eye')) blink.end = human.now(); // if blink started how long until eyes are back open
ok.blinkDetected.status = ok.blinkDetected.status || (Math.abs(blink.end - blink.start) > options.blinkMin && Math.abs(blink.end - blink.start) < options.blinkMax);
if (ok.blinkDetected.status && blink.time === 0) blink.time = Math.trunc(blink.end - blink.start);
ok.facingCenter.status = gestures.includes('facing center');
ok.lookingCenter.status = gestures.includes('looking center'); // must face camera and look at camera
ok.faceConfidence.val = human.result.face[0].faceScore || human.result.face[0].boxScore || 0;
ok.faceConfidence.status = ok.faceConfidence.val >= options.minConfidence;
ok.antispoofCheck.val = human.result.face[0].real || 0;
ok.antispoofCheck.status = ok.antispoofCheck.val >= options.minConfidence;
ok.livenessCheck.val = human.result.face[0].live || 0;
ok.livenessCheck.status = ok.livenessCheck.val >= options.minConfidence;
ok.faceSize.val = Math.min(human.result.face[0].box[2], human.result.face[0].box[3]);
ok.faceSize.status = ok.faceSize.val >= options.minSize;
ok.distance.val = human.result.face[0].distance || 0;
ok.distance.status = (ok.distance.val >= options.distanceMin) && (ok.distance.val <= options.distanceMax);
ok.descriptor.val = human.result.face[0].embedding?.length || 0;
ok.descriptor.status = ok.descriptor.val > 0;
ok.age.val = human.result.face[0].age || 0;
ok.age.status = ok.age.val > 0;
ok.gender.val = human.result.face[0].genderScore || 0;
ok.gender.status = ok.gender.val >= options.minConfidence;
}
// run again
ok.timeout.status = ok.elapsedMs.val <= options.maxTime;
drawValidationTests();
if (allOk() || !ok.timeout.status) { // all criteria met or timed out
dom.video.pause();
return human.result.face[0];
}
ok.elapsedMs.val = Math.trunc(human.now() - startTime);
return new Promise((resolve) => {
setTimeout(async () => {
await validationLoop(); // run validation loop until conditions are met
resolve(human.result.face[0]); // recursive promise resolve
}, 30); // used to slow down refresh from maximum refresh rate to a target of ~30 fps
});
}
async function saveRecords() {
if (dom.name.value.length > 0) {
const image = dom.canvas.getContext('2d')?.getImageData(0, 0, dom.canvas.width, dom.canvas.height) as ImageData;
const rec = { id: 0, name: dom.name.value, descriptor: current.face?.embedding as number[], image };
await indexDb.save(rec);
log('saved face record:', rec.name, 'descriptor length:', current.face?.embedding?.length);
log('known face records:', await indexDb.count());
} else {
log('invalid name');
}
}
async function deleteRecord() {
if (current.record && current.record.id > 0) {
await indexDb.remove(current.record);
}
}
async function detectFace() {
dom.canvas.style.height = '';
dom.canvas.getContext('2d')?.clearRect(0, 0, options.minSize, options.minSize);
if (!current?.face?.tensor || !current?.face?.embedding) return false;
console.log('face record:', current.face); // eslint-disable-line no-console
log(`detected face: ${current.face.gender} ${current.face.age || 0}y distance ${100 * (current.face.distance || 0)}cm/${Math.round(100 * (current.face.distance || 0) / 2.54)}in`);
await human.draw.tensor(current.face.tensor, dom.canvas);
if (await indexDb.count() === 0) {
log('face database is empty: nothing to compare face with');
document.body.style.background = 'black';
dom.delete.style.display = 'none';
return false;
}
const db = await indexDb.load();
const descriptors = db.map((rec) => rec.descriptor).filter((desc) => desc.length > 0);
const res = human.match.find(current.face.embedding, descriptors, matchOptions);
current.record = db[res.index] || null;
if (current.record) {
log(`best match: ${current.record.name} | id: ${current.record.id} | similarity: ${Math.round(1000 * res.similarity) / 10}%`);
dom.name.value = current.record.name;
dom.source.style.display = '';
dom.source.getContext('2d')?.putImageData(current.record.image, 0, 0);
}
document.body.style.background = res.similarity > options.threshold ? 'darkgreen' : 'maroon';
return res.similarity > options.threshold;
}
async function main() { // main entry point
ok.faceCount.status = false;
ok.faceConfidence.status = false;
ok.facingCenter.status = false;
ok.blinkDetected.status = false;
ok.faceSize.status = false;
ok.antispoofCheck.status = false;
ok.livenessCheck.status = false;
ok.age.status = false;
ok.gender.status = false;
ok.elapsedMs.val = 0;
dom.match.style.display = 'none';
dom.retry.style.display = 'none';
dom.source.style.display = 'none';
dom.canvas.style.height = '50%';
document.body.style.background = 'black';
await webCam();
await detectionLoop(); // start detection loop
startTime = human.now();
current.face = await validationLoop(); // start validation loop
dom.canvas.width = current.face?.tensor?.shape[1] || options.minSize;
dom.canvas.height = current.face?.tensor?.shape[0] || options.minSize;
dom.source.width = dom.canvas.width;
dom.source.height = dom.canvas.height;
dom.canvas.style.width = '';
dom.match.style.display = 'flex';
dom.save.style.display = 'flex';
dom.delete.style.display = 'flex';
dom.retry.style.display = 'block';
if (!allOk()) { // were all criteria met?
log('did not find valid face');
return false;
}
return detectFace();
}
async function init() {
log('human version:', human.version, '| tfjs version:', human.tf.version['tfjs-core']);
log('options:', JSON.stringify(options).replace(/{|}|"|\[|\]/g, '').replace(/,/g, ' '));
log('initializing webcam...');
await webCam(); // start webcam
log('loading human models...');
await human.load(); // preload all models
log('initializing human...');
log('face embedding model:', humanConfig.face.description.enabled ? 'faceres' : '', humanConfig.face['mobilefacenet']?.enabled ? 'mobilefacenet' : '', humanConfig.face['insightface']?.enabled ? 'insightface' : '');
log('loading face database...');
log('known face records:', await indexDb.count());
dom.retry.addEventListener('click', main);
dom.save.addEventListener('click', saveRecords);
dom.delete.addEventListener('click', deleteRecord);
await human.warmup(); // warmup function to initialize backend for future faster detection
await main();
}
window.onload = init;

demo/faceid/indexdb.ts Normal file
@ -0,0 +1,65 @@
let db: IDBDatabase; // instance of indexdb
const database = 'human';
const table = 'person';
export interface FaceRecord { id: number, name: string, descriptor: number[], image: ImageData }
const log = (...msg) => console.log('indexdb', ...msg); // eslint-disable-line no-console
export async function open() {
if (db) return true;
return new Promise((resolve) => {
const request: IDBOpenDBRequest = indexedDB.open(database, 1);
request.onerror = (evt) => log('error:', evt);
request.onupgradeneeded = (evt: IDBVersionChangeEvent) => { // create if it doesn't exist
log('create:', evt.target);
db = (evt.target as IDBOpenDBRequest).result;
db.createObjectStore(table, { keyPath: 'id', autoIncrement: true });
};
request.onsuccess = (evt) => { // open
db = (evt.target as IDBOpenDBRequest).result;
log('open:', db);
resolve(true);
};
});
}
export async function load(): Promise<FaceRecord[]> {
const faceDB: FaceRecord[] = [];
if (!db) await open(); // open or create if not already done
return new Promise((resolve) => {
const cursor: IDBRequest = db.transaction([table], 'readwrite').objectStore(table).openCursor(null, 'next');
cursor.onerror = (evt) => log('load error:', evt);
cursor.onsuccess = (evt) => {
if ((evt.target as IDBRequest).result) {
faceDB.push((evt.target as IDBRequest).result.value);
(evt.target as IDBRequest).result.continue();
} else {
resolve(faceDB);
}
};
});
}
export async function count(): Promise<number> {
if (!db) await open(); // open or create if not already done
return new Promise((resolve) => {
const store: IDBRequest = db.transaction([table], 'readwrite').objectStore(table).count();
store.onerror = (evt) => log('count error:', evt);
store.onsuccess = () => resolve(store.result);
});
}
export async function save(faceRecord: FaceRecord) {
if (!db) await open(); // open or create if not already done
const newRecord = { name: faceRecord.name, descriptor: faceRecord.descriptor, image: faceRecord.image }; // omit id as it is autoincremented
db.transaction([table], 'readwrite').objectStore(table).put(newRecord);
log('save:', newRecord);
}
export async function remove(faceRecord: FaceRecord) {
if (!db) await open(); // open or create if not already done
db.transaction([table], 'readwrite').objectStore(table).delete(faceRecord.id); // delete based on id
log('delete:', faceRecord);
}
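For reference, a minimal usage sketch of the module above; the record contents are placeholders:

```js
// hypothetical usage of the indexdb module above; descriptor and image are placeholders
import * as indexDb from './indexdb';

async function example() {
  const descriptor = new Array(1024).fill(0); // placeholder embedding
  const image = new ImageData(128, 128); // placeholder face image
  await indexDb.save({ id: 0, name: 'example', descriptor, image }); // id is ignored on save since the store autoincrements
  console.log('records:', await indexDb.count());
  const records = await indexDb.load();
  for (const rec of records) console.log(rec.id, rec.name, rec.descriptor.length);
}
```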

demo/facematch/README.md Normal file
@ -0,0 +1,84 @@
# Human Face Recognition & Matching
- **Browser** demo: `index.html` & `facematch.js`:
Loads sample images, extracts faces and runs match and similarity analysis
- **NodeJS** demo: `node-match.js` and `node-match-worker.js`
Advanced multithreading demo that runs a number of worker threads to process a high number of matches
- Sample face database: `faces.json`
<br>
## Browser Face Recognition Demo
- `demo/facematch`: Demo for Browsers that uses all face description and embedding features to
detect, extract and identify all faces plus calculate similarity between them
It highlights functionality such as:
- Loading images
- Extracting faces from images
- Calculating face embedding descriptors
- Finding face similarity and sorting faces by similarity
- Finding the best face match based on a known list of faces and printing matches (see the sketch below)
<br>
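A minimal sketch of the two matching calls the browser demo builds on, `human.match.similarity` for a single pair and `human.match.find` against a whole database; the image element ids are assumptions:

```js
import { Human } from '../../dist/human.esm.js';

const human = new Human();
const res1 = await human.detect(document.getElementById('image1')); // assumed img elements
const res2 = await human.detect(document.getElementById('image2'));
const desc1 = res1.face[0].embedding; // requires face.description to be enabled (default)
const desc2 = res2.face[0].embedding;
console.log('pair similarity:', human.match.similarity(desc1, desc2)); // 0..1 score
const db = [desc1, desc2]; // stands in for a database of descriptors such as faces.json
const best = await human.match.find(desc1, db); // { index, distance, similarity }
console.log('best match index:', best.index, 'similarity:', best.similarity);
```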
## NodeJS Multi-Threading Match Solution
### Methods and Properties in `node-match`
- `createBuffer`: create shared buffer array
single copy of data regardless of number of workers
fixed size based on `options.dbMax`
- `appendRecord`: add additional batch of descriptors to buffer
can append a batch of records to the buffer at any time
workers are informed of the new content after the append has completed
- `workersStart`: start or expand pool of `threadPoolSize` workers
each worker runs `node-match-worker` and listens for messages from main thread
can shut down workers or create additional worker threads on-the-fly
safe against workers that exit
- `workersClose`: close workers in a pool
first requests workers to exit, then terminates them after a timeout
- `match`: dispatch a match job to a worker
returns first match that satisfies `minThreshold`
assignment to workers uses round-robin since timing for each job is near-fixed and predictable
- `getDescriptor`: get descriptor array for a given id from a buffer
- `fuzDescriptor`: slightly randomize descriptor content to make matching harder
- `getLabel`: fetch label for resolved descriptor index
- `loadDB`: load face database from a JSON file `dbFile`
extracts descriptors and adds them to buffer
extracts labels and maintains them in main thread
for test purposes loads same database `dbFact` times to create a very large database
`node-match` runs in a loop listening for messages from workers until `maxJobs` have been reached; the message protocol is sketched below
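A condensed, hypothetical sketch of that message protocol between the main thread and a single worker; message shapes mirror the `node-match` sources shown later in this diff:

```js
const threads = require('worker_threads');

const worker = new threads.Worker('./node-match-worker.js');
worker.on('message', (msg) => console.log(msg)); // match reply: { request, time, index, distance, similarity }
worker.postMessage(new SharedArrayBuffer(4 * 1024 * 10000)); // one-time: shared buffer for 10k descriptors of 1024 float32 elements
worker.postMessage({ records: 5700 }); // sent whenever the record count changes
worker.postMessage({ threshold: 0.5, debug: false }); // configuration
const buffer = new Float32Array(1024).buffer; // descriptor travels as a transferred ArrayBuffer
worker.postMessage({ descriptor: buffer, request: 1 }, [buffer]); // dispatch one match job
// worker.postMessage({ shutdown: true }); // later: ask the worker to exit cleanly
```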
### Performance
Performance decreases linearly with the number of records in the database
Performance improves with the number of worker threads, but sub-linearly due to communication overhead
- Face database with 10k records:
> threadPoolSize: 1 => ~60 ms / match job
> threadPoolSize: 6 => ~25 ms / match job
- Face database with 50k records:
> threadPoolSize: 1 => ~300 ms / match job
> threadPoolSize: 6 => ~100 ms / match job
- Face database with 100k records:
> threadPoolSize: 1 => ~600 ms / match job
> threadPoolSize: 6 => ~200 ms / match job
### Example
> node node-match
<!-- eslint-skip -->
```js
INFO: options: { dbFile: './faces.json', dbMax: 10000, threadPoolSize: 6, workerSrc: './node-match-worker.js', debug: false, minThreshold: 0.9, descLength: 1024 }
DATA: created shared buffer: { maxDescriptors: 10000, totalBytes: 40960000, totalElements: 10240000 }
DATA: db loaded: { existingRecords: 0, newRecords: 5700 }
INFO: starting worker thread pool: { totalWorkers: 6, alreadyActive: 0 }
STATE: submitted: { matchJobs: 100, poolSize: 6, activeWorkers: 6 }
STATE: { matchJobsFinished: 100, totalTimeMs: 1769, averageTimeMs: 17.69 }
INFO: closing workers: { poolSize: 6, activeWorkers: 6 }
```

demo/facematch/facematch.js Normal file
@ -0,0 +1,257 @@
/**
* Human demo for browsers
*
* Demo for face descriptor analysis and face similarity analysis
*/
/** @type {Human} */
import { Human } from '../../dist/human.esm.js';
const userConfig = {
backend: 'humangl',
async: true,
warmup: 'none',
cacheSensitivity: 0.01,
debug: true,
modelBasePath: '../../models/',
deallocate: true,
filter: {
enabled: true,
equalization: true,
width: 0,
},
face: {
enabled: true,
detector: { return: true, rotation: true, maxDetected: 50, iouThreshold: 0.01, minConfidence: 0.2 },
mesh: { enabled: true },
iris: { enabled: false },
emotion: { enabled: true },
description: { enabled: true },
},
hand: { enabled: false },
gesture: { enabled: false },
body: { enabled: false },
segmentation: { enabled: false },
};
const human = new Human(userConfig); // new instance of human
const all = []; // array that will hold all detected faces
let db = []; // array that holds all known faces
const minScore = 0.4;
function log(...msg) {
const dt = new Date();
const ts = `${dt.getHours().toString().padStart(2, '0')}:${dt.getMinutes().toString().padStart(2, '0')}:${dt.getSeconds().toString().padStart(2, '0')}.${dt.getMilliseconds().toString().padStart(3, '0')}`;
console.log(ts, ...msg); // eslint-disable-line no-console
}
function title(msg) {
document.getElementById('title').innerHTML = msg;
}
async function loadFaceMatchDB() {
// download db with known faces
try {
let res = await fetch('/demo/facematch/faces.json');
if (!res || !res.ok) res = await fetch('/human/demo/facematch/faces.json');
db = (res && res.ok) ? await res.json() : [];
log('Loaded Faces DB:', db);
} catch (err) {
log('Could not load faces database', err);
}
}
async function selectFaceCanvas(face) {
// if we have face image tensor, enhance it and display it
let embedding;
document.getElementById('orig').style.filter = 'blur(16px)';
if (face.tensor) {
title('Sorting Faces by Similarity');
const c = document.getElementById('orig');
await human.draw.tensor(face.tensor, c);
const arr = db.map((rec) => rec.embedding);
const res = await human.match.find(face.embedding, arr);
log('Match:', db[res.index].name);
const emotion = face.emotion[0] ? `${Math.round(100 * face.emotion[0].score)}% ${face.emotion[0].emotion}` : 'N/A';
document.getElementById('desc').innerHTML = `
source: ${face.fileName}<br>
match: ${Math.round(1000 * res.similarity) / 10}% ${db[res.index].name}<br>
score: ${Math.round(100 * face.boxScore)}% detection ${Math.round(100 * face.faceScore)}% analysis<br>
age: ${face.age} years<br>
gender: ${Math.round(100 * face.genderScore)}% ${face.gender}<br>
emotion: ${emotion}<br>
`;
embedding = face.embedding.map((a) => parseFloat(a.toFixed(4)));
navigator.clipboard.writeText(`{"name":"unknown", "source":"${face.fileName}", "embedding":[${embedding}]},`);
}
// loop through all canvases that contain faces
const canvases = document.getElementsByClassName('face');
let time = 0;
for (const canvas of canvases) {
// calculate similarity from selected face to current one in the loop
const current = all[canvas.tag.sample][canvas.tag.face];
const similarity = human.match.similarity(face.embedding, current.embedding);
canvas.tag.similarity = similarity;
// get best match
// draw the canvas
await human.draw.tensor(current.tensor, canvas);
const ctx = canvas.getContext('2d');
ctx.font = 'small-caps 1rem "Lato"';
ctx.fillStyle = 'rgba(0, 0, 0, 1)';
ctx.fillText(`${(100 * similarity).toFixed(1)}%`, 3, 23);
ctx.fillStyle = 'rgba(255, 255, 255, 1)';
ctx.fillText(`${(100 * similarity).toFixed(1)}%`, 4, 24);
ctx.font = 'small-caps 0.8rem "Lato"';
ctx.fillText(`${current.age}y ${(100 * (current.genderScore || 0)).toFixed(1)}% ${current.gender}`, 4, canvas.height - 6);
// identify person
ctx.font = 'small-caps 1rem "Lato"';
const start = human.now();
const arr = db.map((rec) => rec.embedding);
const res = await human.match.find(current.embedding, arr);
time += (human.now() - start);
if (res.similarity > minScore) ctx.fillText(`DB: ${(100 * res.similarity).toFixed(1)}% ${db[res.index].name}`, 4, canvas.height - 30);
}
log('Analyzed:', 'Face:', canvases.length, 'DB:', db.length, 'Time:', time);
// sort all faces by similarity
const sorted = document.getElementById('faces');
[...sorted.children]
.sort((a, b) => parseFloat(b.tag.similarity) - parseFloat(a.tag.similarity))
.forEach((canvas) => sorted.appendChild(canvas));
document.getElementById('orig').style.filter = 'blur(0)';
title('Selected Face');
}
async function addFaceCanvas(index, res, fileName) {
all[index] = res.face;
for (const i in res.face) {
if (!res.face[i].tensor) continue; // did not get valid results
if ((res.face[i].faceScore || 0) < human.config.face.detector.minConfidence) continue; // face analysis score too low
all[index][i].fileName = fileName;
const canvas = document.createElement('canvas');
canvas.tag = { sample: index, face: i, source: fileName };
canvas.width = 200;
canvas.height = 200;
canvas.className = 'face';
const emotion = res.face[i].emotion[0] ? `${Math.round(100 * res.face[i].emotion[0].score)}% ${res.face[i].emotion[0].emotion}` : 'N/A';
canvas.title = `
source: ${res.face[i].fileName}
score: ${Math.round(100 * res.face[i].boxScore)}% detection ${Math.round(100 * res.face[i].faceScore)}% analysis
age: ${res.face[i].age} years
gender: ${Math.round(100 * res.face[i].genderScore)}% ${res.face[i].gender}
emotion: ${emotion}
`.replace(/ {2,}/g, ' '); // collapse template-literal indentation into single spaces
await human.draw.tensor(res.face[i].tensor, canvas);
const ctx = canvas.getContext('2d');
if (!ctx) return;
ctx.font = 'small-caps 0.8rem "Lato"';
ctx.fillStyle = 'rgba(255, 255, 255, 1)';
ctx.fillText(`${res.face[i].age}y ${(100 * (res.face[i].genderScore || 0)).toFixed(1)}% ${res.face[i].gender}`, 4, canvas.height - 6);
const arr = db.map((rec) => rec.embedding);
const result = human.match.find(res.face[i].embedding, arr);
ctx.font = 'small-caps 1rem "Lato"';
if (result.similarity && result.similarity > minScore) ctx.fillText(`${(100 * result.similarity).toFixed(1)}% ${db[result.index].name}`, 4, canvas.height - 30); // annotate canvas when a known face matches above threshold
document.getElementById('faces').appendChild(canvas);
canvas.addEventListener('click', (evt) => {
log('Select:', 'Image:', evt.target.tag.sample, 'Face:', evt.target.tag.face, 'Source:', evt.target.tag.source, all[evt.target.tag.sample][evt.target.tag.face]);
selectFaceCanvas(all[evt.target.tag.sample][evt.target.tag.face]);
});
}
}
async function addImageElement(index, image, length) {
const faces = all.reduce((prev, curr) => prev += curr.length, 0);
title(`Analyzing Input Images<br> ${Math.round(100 * index / length)}% [${index} / ${length}]<br>Found ${faces} Faces`);
return new Promise((resolve) => {
const img = new Image(128, 128);
img.onload = () => { // must wait until image is loaded
document.getElementById('images').appendChild(img); // and finally we can add it
human.detect(img, userConfig)
.then((res) => { // eslint-disable-line promise/always-return
addFaceCanvas(index, res, image); // then wait until image is analyzed
resolve(true);
})
.catch(() => log('human detect error'));
};
img.onerror = () => {
log('Add image error:', index + 1, image);
resolve(false);
};
img.title = image;
img.src = encodeURI(image);
});
}
function createFaceMatchDB() {
log('Creating Faces DB...');
for (const image of all) {
for (const face of image) db.push({ name: 'unknown', source: face.fileName, embedding: face.embedding });
}
log(db);
}
async function main() {
// pre-load human models
await human.load();
title('Loading Face Match Database');
let images = [];
let dir = [];
// load face descriptor database
await loadFaceMatchDB();
// enumerate all sample images in /assets
title('Enumerating Input Images');
const res = await fetch('/samples/in');
dir = (res && res.ok) ? await res.json() : [];
images = images.concat(dir.filter((img) => (img.endsWith('.jpg') && img.includes('sample'))));
// could not dynamically enumerate images so using static list
if (images.length === 0) {
images = [
'ai-face.jpg', 'ai-upper.jpg', 'ai-body.jpg', 'solvay1927.jpg',
'group-1.jpg', 'group-2.jpg', 'group-3.jpg', 'group-4.jpg', 'group-5.jpg', 'group-6.jpg', 'group-7.jpg',
'person-celeste.jpg', 'person-christina.jpg', 'person-lauren.jpg', 'person-lexi.jpg', 'person-linda.jpg', 'person-nicole.jpg', 'person-tasia.jpg', 'person-tetiana.jpg', 'person-vlado.jpg', 'person-vlado1.jpg', 'person-vlado5.jpg',
'stock-group-1.jpg', 'stock-group-2.jpg',
'stock-models-1.jpg', 'stock-models-2.jpg', 'stock-models-3.jpg', 'stock-models-4.jpg', 'stock-models-5.jpg', 'stock-models-6.jpg', 'stock-models-7.jpg', 'stock-models-8.jpg', 'stock-models-9.jpg',
'stock-teen-1.jpg', 'stock-teen-2.jpg', 'stock-teen-3.jpg', 'stock-teen-4.jpg', 'stock-teen-5.jpg', 'stock-teen-6.jpg', 'stock-teen-7.jpg', 'stock-teen-8.jpg',
'stock-models-10.jpg', 'stock-models-11.jpg', 'stock-models-12.jpg', 'stock-models-13.jpg', 'stock-models-14.jpg', 'stock-models-15.jpg', 'stock-models-16.jpg',
'cgi-model-1.jpg', 'cgi-model-2.jpg', 'cgi-model-3.jpg', 'cgi-model-4.jpg', 'cgi-model-5.jpg', 'cgi-model-6.jpg', 'cgi-model-7.jpg', 'cgi-model-8.jpg', 'cgi-model-9.jpg',
'cgi-model-10.jpg', 'cgi-model-11.jpg', 'cgi-model-12.jpg', 'cgi-model-13.jpg', 'cgi-model-14.jpg', 'cgi-model-15.jpg', 'cgi-model-18.jpg', 'cgi-model-19.jpg',
'cgi-model-20.jpg', 'cgi-model-21.jpg', 'cgi-model-22.jpg', 'cgi-model-23.jpg', 'cgi-model-24.jpg', 'cgi-model-25.jpg', 'cgi-model-26.jpg', 'cgi-model-27.jpg', 'cgi-model-28.jpg', 'cgi-model-29.jpg',
'cgi-model-30.jpg', 'cgi-model-31.jpg', 'cgi-model-33.jpg', 'cgi-model-34.jpg',
'cgi-multiangle-1.jpg', 'cgi-multiangle-2.jpg', 'cgi-multiangle-3.jpg', 'cgi-multiangle-4.jpg', 'cgi-multiangle-6.jpg', 'cgi-multiangle-7.jpg', 'cgi-multiangle-8.jpg', 'cgi-multiangle-9.jpg', 'cgi-multiangle-10.jpg', 'cgi-multiangle-11.jpg',
'stock-emotions-a-1.jpg', 'stock-emotions-a-2.jpg', 'stock-emotions-a-3.jpg', 'stock-emotions-a-4.jpg', 'stock-emotions-a-5.jpg', 'stock-emotions-a-6.jpg', 'stock-emotions-a-7.jpg', 'stock-emotions-a-8.jpg',
'stock-emotions-b-1.jpg', 'stock-emotions-b-2.jpg', 'stock-emotions-b-3.jpg', 'stock-emotions-b-4.jpg', 'stock-emotions-b-5.jpg', 'stock-emotions-b-6.jpg', 'stock-emotions-b-7.jpg', 'stock-emotions-b-8.jpg',
];
// add prefix for gitpages
images = images.map((a) => `../../samples/in/${a}`);
log('Adding static image list:', images);
} else {
log('Discovered images:', images);
}
// images = ['/samples/in/person-lexi.jpg', '/samples/in/person-carolina.jpg', '/samples/in/solvay1927.jpg'];
const t0 = human.now();
for (let i = 0; i < images.length; i++) await addImageElement(i, images[i], images.length);
const t1 = human.now();
// print stats
const num = all.reduce((prev, cur) => prev += cur.length, 0);
log('Extracted faces:', num, 'from images:', all.length, 'time:', Math.round(t1 - t0));
log(human.tf.engine().memory());
// if we didn't download db, generate it from current faces
if (!db || db.length === 0) createFaceMatchDB();
title('');
log('Ready');
human.validate(userConfig);
human.match.similarity([], []);
}
window.onload = main;

demo/facematch/faces.json Normal file
File diff suppressed because one or more lines are too long

demo/facematch/index.html Normal file
@ -0,0 +1,50 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Human</title>
<!-- <meta http-equiv="content-type" content="text/html; charset=utf-8"> -->
<meta name="viewport" content="width=device-width, shrink-to-fit=yes">
<meta name="keywords" content="Human">
<meta name="application-name" content="Human">
<meta name="description" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="msapplication-tooltip" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="theme-color" content="#000000">
<link rel="manifest" href="../manifest.webmanifest">
<link rel="shortcut icon" href="../../favicon.ico" type="image/x-icon">
<link rel="apple-touch-icon" href="../../assets/icon.png">
<script src="./facematch.js" type="module"></script>
<style>
img { object-fit: contain; }
@font-face { font-family: 'Lato'; font-display: swap; font-style: normal; font-weight: 100; src: local('Lato'), url('../../assets/lato-light.woff2') }
html { font-family: 'Lato', 'Segoe UI'; font-size: 24px; font-variant: small-caps; }
body { margin: 24px; background: black; color: white; overflow: hidden; text-align: -webkit-center; min-height: 100%; max-height: 100%; }
::-webkit-scrollbar { height: 8px; border: 0; border-radius: 0; }
::-webkit-scrollbar-thumb { background: grey }
::-webkit-scrollbar-track { margin: 3px; }
.orig { width: 200px; height: 200px; padding-bottom: 20px; filter: blur(16px); transition : all 0.5s ease; }
.text { margin: 24px; }
.face { width: 128px; height: 128px; margin: 2px; padding: 2px; cursor: grab; transform: scale(1.00); transition : all 0.3s ease; }
.face:hover { filter: grayscale(1); transform: scale(1.08); transition : all 0.3s ease; }
</style>
</head>
<body>
<div style="display: block">
<div style="display: flex">
<div style="min-width: 400px">
<div class="text" id="title"></div>
<canvas id="orig" class="orig"></canvas>
<div id="desc" style="font-size: 0.8rem; text-align: left;"></div>
</div>
<div style="width: 20px"></div>
<div>
<div class="text">Input Images</div>
<div id="images" style="display: flex; width: 60vw; overflow-x: auto; overflow-y: hidden; scroll-behavior: smooth"></div>
</div>
</div>
<div id="list" style="height: 10px"></div>
<div class="text">Select person to sort by similarity and get a known face match</div>
<div id="faces" style="height: 50vh; overflow-y: auto"></div>
</div>
</body>
</html>

demo/facematch/node-match-worker.js Normal file
@ -0,0 +1,76 @@
/**
* Runs in a worker thread started by `node-match` demo app
*
*/
const threads = require('worker_threads');
let debug = false;
/** @type SharedArrayBuffer */
let buffer;
/** @type Float32Array */
let view;
let threshold = 0;
let records = 0;
const descLength = 1024; // descriptor length in elements (each element is a 4-byte float32)
function distance(descBuffer, index, options = { order: 2, multiplier: 20 }) {
const descriptor = new Float32Array(descBuffer);
let sum = 0;
for (let i = 0; i < descriptor.length; i++) {
const diff = (options.order === 2) ? (descriptor[i] - view[index * descLength + i]) : (Math.abs(descriptor[i] - view[index * descLength + i]));
sum += (options.order === 2) ? (diff * diff) : (diff ** options.order);
}
return (options.multiplier || 20) * sum;
}
function match(descBuffer, options = { order: 2, multiplier: 20 }) {
let best = Number.MAX_SAFE_INTEGER;
let index = -1;
for (let i = 0; i < records; i++) {
const res = distance(descBuffer, i, { order: options.order, multiplier: options.multiplier });
if (res < best) {
best = res;
index = i;
}
if (best < threshold || best === 0) break; // short circuit
}
best = (options.order === 2) ? Math.sqrt(best) : best ** (1 / options.order); // normalize accumulated distance back to original scale
const similarity = Math.round(100 * Math.max(0, 100 - best) / 100.0) / 100; // map distance to a 0..1 similarity: roughly 1 - distance/100, clamped and rounded to 2 decimals
return { index, distance: best, similarity };
}
threads.parentPort?.on('message', (msg) => {
if (typeof msg.descriptor !== 'undefined') { // actual work order to find a match
const t0 = performance.now();
const result = match(msg.descriptor);
const t1 = performance.now();
threads.parentPort?.postMessage({ request: msg.request, time: Math.trunc(t1 - t0), ...result });
return; // short circuit
}
if (msg instanceof SharedArrayBuffer) { // called only once to receive reference to shared array buffer
buffer = msg;
view = new Float32Array(buffer); // initialize f32 view into buffer
if (debug) threads.parentPort?.postMessage(`buffer: ${buffer.byteLength}`);
}
if (typeof msg.records !== 'undefined') { // received every time the number of records changes
records = msg.records;
if (debug) threads.parentPort?.postMessage(`records: ${records}`);
}
if (typeof msg.debug !== 'undefined') { // set verbose logging
debug = msg.debug;
// if (debug) threads.parentPort?.postMessage(`debug: ${debug}`);
}
if (typeof msg.threshold !== 'undefined') { // set minimum similarity threshold
threshold = msg.threshold;
// if (debug) threads.parentPort?.postMessage(`threshold: ${threshold}`);
}
if (typeof msg.shutdown !== 'undefined') { // got message to close worker
if (debug) threads.parentPort?.postMessage('shutting down');
process.exit(0); // eslint-disable-line no-process-exit
}
});
if (debug) threads.parentPort?.postMessage('started');

demo/facematch/node-match.js Normal file
@ -0,0 +1,184 @@
/**
* Human demo app for NodeJS that generates random facial descriptors
* and uses NodeJS multi-threading to start multiple threads for face matching
* uses `node-match-worker.js` to perform actual face matching analysis
*/
const fs = require('fs');
const path = require('path');
const threads = require('worker_threads');
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
// global options
const options = {
dbFile: 'demo/facematch/faces.json', // sample face db
dbMax: 10000, // maximum number of records to hold in memory
threadPoolSize: 12, // number of worker threads to create in thread pool
workerSrc: './node-match-worker.js', // code that executes in the worker thread
debug: true, // verbose messages
minThreshold: 0.5, // match returns first record that meets the similarity threshold, set to 0 to always scan all records
descLength: 1024, // descriptor length
};
// test options
const testOptions = {
dbFact: 175, // load db n times to fake huge size
maxJobs: 200, // exit after processing this many jobs
fuzDescriptors: true, // randomize descriptor content before match for harder jobs
};
// global data structures
const data = {
/** @type string[] */
labels: [], // array of strings, length of array serves as overall number of records so has to be maintained carefully
/** @type SharedArrayBuffer | null */
buffer: null,
/** @type Float32Array | null */
view: null,
/** @type threads.Worker[] */
workers: [], // holds instance of workers. worker can be null if exited
requestID: 0, // each request should increment this counter as its used for round robin assignment
};
let t0 = process.hrtime.bigint(); // used for perf counters
const appendRecords = (labels, descriptors) => {
if (!data.view) return 0;
if (descriptors.length !== labels.length) {
log.error('append error:', { descriptors: descriptors.length, labels: labels.length });
}
// if (options.debug) log.state('appending:', { descriptors: descriptors.length, labels: labels.length });
for (let i = 0; i < descriptors.length; i++) {
for (let j = 0; j < descriptors[i].length; j++) {
data.view[data.labels.length * descriptors[i].length + j] = descriptors[i][j]; // add each descriptors element to buffer
}
data.labels.push(labels[i]); // finally add to labels
}
for (const worker of data.workers) { // inform all workers how many records we have
if (worker) worker.postMessage({ records: data.labels.length });
}
return data.labels.length;
};
const getLabel = (index) => data.labels[index];
const getDescriptor = (index) => {
if (!data.view) return [];
const descriptor = [];
for (let i = 0; i < options.descLength; i++) descriptor.push(data.view[index * options.descLength + i]);
return descriptor;
};
const fuzDescriptor = (descriptor) => {
for (let i = 0; i < descriptor.length; i++) descriptor[i] += Math.random() - 0.5;
return descriptor;
};
const delay = (ms) => new Promise((resolve) => { setTimeout(resolve, ms); });
async function workersClose() {
const current = data.workers.filter((worker) => !!worker).length;
log.info('closing workers:', { poolSize: data.workers.length, activeWorkers: current });
for (const worker of data.workers) {
if (worker) worker.postMessage({ shutdown: true }); // tell worker to exit
}
await delay(250); // wait a little for threads to exit on their own
const remaining = data.workers.filter((worker) => !!worker).length;
if (remaining > 0) {
log.info('terminating remaining workers:', { remaining, pool: data.workers.length });
for (const worker of data.workers) {
if (worker) worker.terminate(); // if worker did not exit cleanly, terminate it
}
}
}
const workerMessage = (index, msg) => {
if (msg.request) {
if (options.debug) log.data('message:', { worker: index, request: msg.request, time: msg.time, label: getLabel(msg.index), similarity: msg.similarity });
if (msg.request >= testOptions.maxJobs) {
const t1 = process.hrtime.bigint();
const elapsed = Math.round(Number(t1 - t0) / 1000 / 1000);
log.state({ matchJobsFinished: testOptions.maxJobs, totalTimeMs: elapsed, averageTimeMs: Math.round(100 * elapsed / testOptions.maxJobs) / 100 });
workersClose();
}
} else {
log.data('message:', { worker: index, msg });
}
};
async function workerClose(id, code) {
const previous = data.workers.filter((worker) => !!worker).length;
delete data.workers[id];
const current = data.workers.filter((worker) => !!worker).length;
if (options.debug) log.state('worker exit:', { id, code, previous, current });
}
async function workersStart(numWorkers) {
const previous = data.workers.filter((worker) => !!worker).length;
log.info('starting worker thread pool:', { totalWorkers: numWorkers, alreadyActive: previous });
for (let i = 0; i < numWorkers; i++) {
if (!data.workers[i]) { // worker does not exist, so create it
const worker = new threads.Worker(path.join(__dirname, options.workerSrc));
worker.on('message', (msg) => workerMessage(i, msg));
worker.on('error', (err) => log.error('worker error:', { err }));
worker.on('exit', (code) => workerClose(i, code));
worker.postMessage(data.buffer); // send buffer to worker
data.workers[i] = worker;
}
data.workers[i]?.postMessage({ records: data.labels.length, threshold: options.minThreshold, debug: options.debug }); // inform worker how many records there are
}
await delay(100); // just wait a bit for everything to settle down
}
const match = (descriptor) => {
// const arr = Float32Array.from(descriptor);
const buffer = new ArrayBuffer(options.descLength * 4);
const view = new Float32Array(buffer);
view.set(descriptor);
const available = data.workers.filter((worker) => !!worker).length; // find number of available workers
if (available > 0) data.workers[data.requestID % available].postMessage({ descriptor: buffer, request: data.requestID }, [buffer]); // round robin to first available worker
else log.error('no available workers');
};
async function loadDB(count) {
const previous = data.labels.length;
if (!fs.existsSync(options.dbFile)) {
log.error('db file does not exist:', options.dbFile);
return;
}
t0 = process.hrtime.bigint();
for (let i = 0; i < count; i++) { // test loop: load entire face db from array of objects n times into buffer
const db = JSON.parse(fs.readFileSync(options.dbFile).toString());
const names = db.map((record) => record.name);
const descriptors = db.map((record) => record.embedding);
appendRecords(names, descriptors);
}
log.data('db loaded:', { existingRecords: previous, newRecords: data.labels.length });
}
async function createBuffer() {
data.buffer = new SharedArrayBuffer(4 * options.dbMax * options.descLength); // preallocate max number of records as sharedarraybuffers cannot grow
data.view = new Float32Array(data.buffer); // create view into buffer
data.labels.length = 0;
log.data('created shared buffer:', { maxDescriptors: (data.view.length || 0) / options.descLength, totalBytes: data.buffer.byteLength, totalElements: data.view.length });
}
async function main() {
log.header();
log.info('options:', options);
await createBuffer(); // create shared buffer array
await loadDB(testOptions.dbFact); // loadDB is a test method that calls the actual appendRecords
await workersStart(options.threadPoolSize); // can be called at anytime to modify worker pool size
for (let i = 0; i < testOptions.maxJobs; i++) {
const idx = Math.trunc(data.labels.length * Math.random()); // grab a random descriptor index that we'll search for
const descriptor = getDescriptor(idx); // grab a descriptor at index
data.requestID++; // increase request id
if (testOptions.fuzDescriptors) match(fuzDescriptor(descriptor)); // fuz descriptor for harder match
else match(descriptor);
if (options.debug) log.debug('submitted job', data.requestID); // we already know what we're searching for so we can compare results
}
log.state('submitted:', { matchJobs: testOptions.maxJobs, poolSize: data.workers.length, activeWorkers: data.workers.filter((worker) => !!worker).length });
}
main();

BIN demo/favicon.ico Normal file
Binary file not shown. Size: 256 KiB

demo/helpers/README.md Normal file
@ -0,0 +1,3 @@
# Helper libraries
Used by main `Human` demo app

demo/helpers/gl-bench.js Normal file
@ -0,0 +1,270 @@
// based on: https://github.com/munrocket/gl-bench
const UICSS = `
#gl-bench { position: absolute; right: 1rem; bottom: 1rem; z-index:1000; -webkit-user-select: none; -moz-user-select: none; user-select: none; }
#gl-bench div { position: relative; display: block; margin: 4px; padding: 0 2px 0 2px; background: #303030; border-radius: 0.1rem; cursor: pointer; opacity: 0.9; }
#gl-bench svg { height: 60px; margin: 0 0px 0px 4px; }
#gl-bench text { font-size: 16px; font-family: 'Lato', 'Segoe UI'; dominant-baseline: middle; text-anchor: middle; }
#gl-bench .gl-mem { font-size: 12px; fill: white; }
#gl-bench .gl-fps { font-size: 13px; fill: white; }
#gl-bench line { stroke-width: 5; stroke: white; stroke-linecap: round; }
#gl-bench polyline { fill: none; stroke: white; stroke-linecap: round; stroke-linejoin: round; stroke-width: 3.5; }
#gl-bench rect { fill: black; }
#gl-bench .opacity { stroke: black; }
`;
const UISVG = `
<div class="gl-box">
<svg viewBox="0 0 60 60">
<text x="27" y="56" class="gl-fps">00 FPS</text>
<text x="30" y="8" class="gl-mem"></text>
<rect x="0" y="14" rx="4" ry="4" width="60" height="32"></rect>
<polyline class="gl-chart"></polyline>
</svg>
<svg viewBox="0 0 14 60" class="gl-cpu-svg">
<line x1="7" y1="38" x2="7" y2="11" class="opacity"/>
<line x1="7" y1="38" x2="7" y2="11" class="gl-cpu" stroke-dasharray="0 27"/>
<path d="M5.35 43c-.464 0-.812.377-.812.812v1.16c-.783.1972-1.421.812-1.595 1.624h-1.16c-.435 0-.812.348-.812.812s.348.812.812.812h1.102v1.653H1.812c-.464 0-.812.377-.812.812 0 .464.377.812.812.812h1.131c.1943.783.812 1.392 1.595 1.595v1.131c0 .464.377.812.812.812.464 0 .812-.377.812-.812V53.15h1.653v1.073c0 .464.377.812.812.812.464 0 .812-.377.812-.812v-1.131c.783-.1943 1.392-.812 1.595-1.595h1.131c.464 0 .812-.377.812-.812 0-.464-.377-.812-.812-.812h-1.073V48.22h1.102c.435 0 .812-.348.812-.812s-.348-.812-.812-.812h-1.16c-.1885-.783-.812-1.421-1.595-1.624v-1.131c0-.464-.377-.812-.812-.812-.464 0-.812.377-.812.812v1.073H6.162v-1.073c0-.464-.377-.812-.812-.812zm.58 3.48h2.088c.754 0 1.363.609 1.363 1.363v2.088c0 .754-.609 1.363-1.363 1.363H5.93c-.754 0-1.363-.609-1.363-1.363v-2.088c0-.754.609-1.363 1.363-1.363z" style="fill: grey"></path>
</svg>
<svg viewBox="0 0 14 60" class="gl-gpu-svg">
<line x1="7" y1="38" x2="7" y2="11" class="opacity"/>
<line x1="7" y1="38" x2="7" y2="11" class="gl-gpu" stroke-dasharray="0 27"/>
<path d="M1.94775 43.3772a.736.736 0 10-.00416 1.472c.58535.00231.56465.1288.6348.3197.07015.18975.04933.43585.04933.43585l-.00653.05405v8.671a.736.736 0 101.472 0v-1.4145c.253.09522.52785.1495.81765.1495h5.267c1.2535 0 2.254-.9752 2.254-2.185v-3.105c0-1.2075-1.00625-2.185-2.254-2.185h-5.267c-.28865 0-.5635.05405-.8165.1495.01806-.16445.04209-.598-.1357-1.0787-.22425-.6072-.9499-1.2765-2.0125-1.2765zm2.9095 3.6455c.42435 0 .7659.36225.7659.8119v2.9785c0 .44965-.34155.8119-.7659.8119s-.7659-.36225-.7659-.8119v-2.9785c0-.44965.34155-.8119.7659-.8119zm4.117 0a2.3 2.3 0 012.3 2.3 2.3 2.3 0 01-2.3 2.3 2.3 2.3 0 01-2.3-2.3 2.3 2.3 0 012.3-2.3z" style="fill: grey"></path>
</svg>
</div>
`;
class GLBench {
/** GLBench constructor
* @param { WebGLRenderingContext | WebGL2RenderingContext | null } gl context
* @param { Object | undefined } settings additional settings
*/
constructor(gl, settings = {}) {
this.css = UICSS;
this.svg = UISVG;
this.paramLogger = () => {};
this.chartLogger = () => {};
this.chartLen = 20;
this.chartHz = 20;
this.names = [];
this.cpuAccums = [];
this.gpuAccums = [];
this.activeAccums = [];
this.chart = new Array(this.chartLen);
this.now = () => ((performance && performance.now) ? performance.now() : Date.now());
this.updateUI = () => {
[].forEach.call(this.nodes['gl-gpu-svg'], (node) => node.style.display = this.trackGPU ? 'inline' : 'none');
};
Object.assign(this, settings);
this.detected = 0;
this.finished = [];
this.isFramebuffer = 0;
this.frameId = 0;
// 120hz device detection
let rafId; let n = 0; let t0;
const loop = (t) => {
if (++n < 20) {
rafId = requestAnimationFrame(loop);
} else {
this.detected = Math.ceil(1e3 * n / (t - t0) / 70);
cancelAnimationFrame(rafId);
}
if (!t0) t0 = t;
};
requestAnimationFrame(loop);
// attach gpu profilers
if (gl) {
const glFinish = async (t, activeAccums) => Promise.resolve(setTimeout(() => {
gl.getError();
const dt = this.now() - t;
activeAccums.forEach((active, i) => {
if (active) this.gpuAccums[i] += dt;
});
}, 0));
const addProfiler = (fn, self, target) => function (...args) { // return a wrapper so gl[fn] = addProfiler(...) installs a timed proxy
const t = self.now();
fn.apply(target, args);
if (self.trackGPU) self.finished.push(glFinish(t, self.activeAccums.slice(0)));
};
/* ['drawArrays', 'drawElements', 'drawArraysInstanced', 'drawBuffers', 'drawElementsInstanced', 'drawRangeElements'].forEach((fn) => {
if (gl[fn]) {
gl[fn] = addProfiler(gl[fn], this, gl);
}
});
*/
const fn = 'drawElements';
if (gl[fn]) {
gl[fn] = addProfiler(gl[fn], this, gl);
} else {
console.log('bench: cannot attach to webgl function');
}
/*
gl.getExtension = ((fn, self) => {
const ext = fn.apply(gl, arguments);
if (ext) {
['drawElementsInstancedANGLE', 'drawBuffersWEBGL'].forEach((fn2) => {
if (ext[fn2]) {
ext[fn2] = addProfiler(ext[fn2], self, ext);
}
});
}
return ext;
})(gl.getExtension, this);
*/
}
// init ui and ui loggers
if (!this.withoutUI) {
if (!this.dom) this.dom = document.body;
const elm = document.createElement('div');
elm.id = 'gl-bench';
this.dom.appendChild(elm);
this.dom.insertAdjacentHTML('afterbegin', '<style id="gl-bench-style">' + this.css + '</style>');
this.dom = elm;
this.dom.addEventListener('click', () => {
this.trackGPU = !this.trackGPU;
this.updateUI();
});
this.paramLogger = ((logger, dom, names) => {
const classes = ['gl-cpu', 'gl-gpu', 'gl-mem', 'gl-fps', 'gl-gpu-svg', 'gl-chart'];
const nodes = {}; // map of class name -> live element collections
classes.forEach((c) => nodes[c] = dom.getElementsByClassName(c));
this.nodes = nodes;
return (i, cpu, gpu, mem, fps, totalTime, frameId) => {
nodes['gl-cpu'][i].style.strokeDasharray = (cpu * 0.27).toFixed(0) + ' 100';
nodes['gl-gpu'][i].style.strokeDasharray = (gpu * 0.27).toFixed(0) + ' 100';
nodes['gl-mem'][i].innerHTML = names[i] ? names[i] : (mem ? 'mem: ' + mem.toFixed(0) + 'mb' : '');
nodes['gl-fps'][i].innerHTML = 'FPS: ' + fps.toFixed(1);
logger(names[i], cpu, gpu, mem, fps, totalTime, frameId);
};
})(this.paramLogger, this.dom, this.names);
this.chartLogger = ((logger, dom) => {
const nodes = { 'gl-chart': dom.getElementsByClassName('gl-chart') };
return (i, chart, circularId) => {
let points = '';
const len = chart.length;
for (let j = 0; j < len; j++) {
const id = (circularId + j + 1) % len;
if (chart[id] !== undefined) points = points + ' ' + (60 * j / (len - 1)).toFixed(1) + ',' + (45 - chart[id] * 0.5 / this.detected).toFixed(1);
}
nodes['gl-chart'][i].setAttribute('points', points);
logger(this.names[i], chart, circularId);
};
})(this.chartLogger, this.dom);
}
}
/**
* Explicit UI add
* @param { string | undefined } name
*/
addUI(name) {
if (this.names.indexOf(name) === -1) {
this.names.push(name);
if (this.dom) {
this.dom.insertAdjacentHTML('beforeend', this.svg);
this.updateUI();
}
this.cpuAccums.push(0);
this.gpuAccums.push(0);
this.activeAccums.push(false);
}
}
/**
* Increase frameID
* @param { number | undefined } now
*/
nextFrame(now) {
this.frameId++;
const t = now || this.now();
// params
if (this.frameId <= 1) {
this.paramFrame = this.frameId;
this.paramTime = t;
} else {
const duration = t - this.paramTime;
if (duration >= 1e3) {
const frameCount = this.frameId - this.paramFrame;
const fps = frameCount / duration * 1e3;
for (let i = 0; i < this.names.length; i++) {
const cpu = this.cpuAccums[i] / duration * 100;
const gpu = this.gpuAccums[i] / duration * 100;
const mem = (performance && performance.memory) ? performance.memory.usedJSHeapSize / (1 << 20) : 0;
this.paramLogger(i, cpu, gpu, mem, fps, duration, frameCount);
this.cpuAccums[i] = 0;
Promise.all(this.finished).then(() => {
this.gpuAccums[i] = 0;
this.finished = [];
});
}
this.paramFrame = this.frameId;
this.paramTime = t;
}
}
// chart
if (!this.detected || !this.chartFrame) {
this.chartFrame = this.frameId;
this.chartTime = t;
this.circularId = 0;
} else {
const timespan = t - this.chartTime;
let hz = this.chartHz * timespan / 1e3;
while (--hz > 0 && this.detected) {
const frameCount = this.frameId - this.chartFrame;
const fps = frameCount / timespan * 1e3;
this.chart[this.circularId % this.chartLen] = fps;
for (let i = 0; i < this.names.length; i++) this.chartLogger(i, this.chart, this.circularId);
this.circularId++;
this.chartFrame = this.frameId;
this.chartTime = t;
}
}
}
/**
* Begin named measurement
* @param { string | undefined } name
*/
begin(name) {
this.updateAccums(name);
}
/**
* End named measure
* @param { string | undefined } name
*/
end(name) {
this.updateAccums(name);
}
updateAccums(name) {
let nameId = this.names.indexOf(name);
if (nameId === -1) {
nameId = this.names.length;
this.addUI(name);
}
const t = this.now();
const dt = t - this.t0;
for (let i = 0; i < nameId + 1; i++) {
if (this.activeAccums[i]) this.cpuAccums[i] += dt;
}
this.activeAccums[nameId] = !this.activeAccums[nameId];
this.t0 = t;
}
}
export default GLBench;
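A minimal usage sketch, assuming a canvas element with id `canvas`; `begin`/`end` bracket a named measurement and `nextFrame` updates the FPS counters and chart:

```js
import GLBench from './gl-bench.js';

const gl = document.getElementById('canvas').getContext('webgl2'); // assumed canvas element
const bench = new GLBench(gl, { trackGPU: false, chartLen: 20, chartHz: 20 });

function frame(now) {
  bench.begin('frame'); // start accumulating cpu time under this label
  // ...draw calls go here...
  bench.end('frame'); // stop accumulating for this label
  bench.nextFrame(now); // update fps counters and chart once per frame
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```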

demo/helpers/jsonview.js Normal file
@ -0,0 +1,157 @@
let callbackFunction = null;
function createElement(type, config) {
const htmlElement = document.createElement(type);
if (config === undefined) return htmlElement;
if (config.className) htmlElement.className = config.className;
if (config.content) htmlElement.textContent = config.content;
if (config.style) htmlElement.style = config.style;
if (config.children) config.children.forEach((el) => !el || htmlElement.appendChild(el));
return htmlElement;
}
function createExpandedElement(node) {
const iElem = createElement('i');
if (node.expanded) { iElem.className = 'fas fa-caret-down'; } else { iElem.className = 'fas fa-caret-right'; }
const caretElem = createElement('div', { style: 'width: 18px; text-align: center; cursor: pointer', children: [iElem] });
const handleClick = node.toggle.bind(node);
caretElem.addEventListener('click', handleClick);
const indexElem = createElement('div', { className: 'json json-index', content: node.key });
indexElem.addEventListener('click', handleClick);
const typeElem = createElement('div', { className: 'json json-type', content: node.type });
const keyElem = createElement('div', { className: 'json json-key', content: node.key });
keyElem.addEventListener('click', handleClick);
const sizeElem = createElement('div', { className: 'json json-size' });
sizeElem.addEventListener('click', handleClick);
if (node.type === 'array') {
sizeElem.innerText = `[${node.children.length} items]`;
} else if (node.type === 'object') {
const size = node.children.find((item) => item.key === 'size');
sizeElem.innerText = size ? `{${size.value.toLocaleString()} bytes}` : `{${node.children.length} properties}`;
}
let lineChildren;
if (node.key === null) lineChildren = [caretElem, typeElem, sizeElem];
else if (node.parent.type === 'array') lineChildren = [caretElem, indexElem, sizeElem];
else lineChildren = [caretElem, keyElem, sizeElem];
const lineElem = createElement('div', { className: 'json-line', children: lineChildren });
if (node.depth > 0) lineElem.style = `margin-left: ${node.depth * 24}px;`;
return lineElem;
}
function createNotExpandedElement(node) {
const caretElem = createElement('div', { style: 'width: 18px' });
const keyElem = createElement('div', { className: 'json json-key', content: node.key });
const separatorElement = createElement('div', { className: 'json-separator', content: ':' });
const valueType = ` json-${typeof node.value}`;
const valueContent = node.value.toLocaleString();
const valueElement = createElement('div', { className: `json json-value${valueType}`, content: valueContent });
const lineElem = createElement('div', { className: 'json-line', children: [caretElem, keyElem, separatorElement, valueElement] });
if (node.depth > 0) lineElem.style = `margin-left: ${node.depth * 24}px;`;
return lineElem;
}
function createNode() {
return {
key: '',
parent: {},
value: null,
expanded: false,
type: '',
children: [],
elem: {},
depth: 0,
hideChildren() {
if (Array.isArray(this.children)) {
this.children.forEach((item) => {
item['elem']['classList'].add('hide');
if (item['expanded']) item.hideChildren();
});
}
},
showChildren() {
if (Array.isArray(this.children)) {
this.children.forEach((item) => {
item['elem']['classList'].remove('hide');
if (item['expanded']) item.showChildren();
});
}
},
toggle() {
if (this.expanded) {
this.hideChildren();
const icon = this.elem?.querySelector('.fas');
icon.classList.replace('fa-caret-down', 'fa-caret-right');
if (callbackFunction !== null) callbackFunction(null);
} else {
this.showChildren();
const icon = this.elem?.querySelector('.fas');
icon.classList.replace('fa-caret-right', 'fa-caret-down');
if (this.type === 'object') {
if (callbackFunction !== null) callbackFunction(`${this.parent?.key}/${this.key}`);
}
}
this.expanded = !this.expanded;
},
};
}
function getType(val) {
let type;
if (Array.isArray(val)) type = 'array';
else if (val === null) type = 'null';
else type = typeof val;
return type;
}
function traverseObject(obj, parent, filter) {
for (const key in obj) {
const child = createNode();
child.parent = parent;
child.key = key;
child.type = getType(obj[key]);
child.depth = parent.depth + 1;
child.expanded = false;
if (Array.isArray(filter) && filter.includes(key)) continue; // skip filtered keys rather than aborting the remaining traversal
if (typeof obj[key] === 'object') {
child.children = [];
parent.children.push(child);
traverseObject(obj[key], child, filter);
child.elem = createExpandedElement(child);
} else {
child.value = obj[key];
child.elem = createNotExpandedElement(child);
parent.children.push(child);
}
}
}
function createTree(obj, title, filter) {
const tree = createNode();
tree.type = title;
tree.key = title;
tree.children = [];
tree.expanded = true;
traverseObject(obj, tree, filter);
tree.elem = createExpandedElement(tree);
return tree;
}
function traverseTree(node, callback) {
callback(node);
if (node.children !== null) node.children.forEach((item) => traverseTree(item, callback));
}
async function jsonView(json, element, title = '', filter = []) {
const tree = createTree(json, title, filter);
traverseTree(tree, (node) => {
if (!node.expanded) node.hideChildren();
element.appendChild(node.elem);
});
}
export default jsonView;
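A minimal usage sketch; the container element id and sample object are assumptions:

```js
import jsonView from './jsonview.js';

// render an object as a collapsible tree inside an existing container element,
// hiding any keys listed in the filter array
const sample = { face: [{ age: 25, gender: 'female' }], performance: { total: 120 } };
jsonView(sample, document.getElementById('json'), 'result', ['tensor']);
```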

demo/helpers/menu.js Normal file
@ -0,0 +1,327 @@
let instance = 0;
let CSScreated = false;
let theme = {
background: '#303030',
hover: '#505050',
itemBackground: 'black',
itemColor: 'white',
buttonBackground: 'lightblue',
buttonHover: 'lightgreen',
checkboxOn: 'lightgreen',
checkboxOff: 'lightcoral',
rangeBackground: 'lightblue',
rangeLabel: 'white',
chartColor: 'lightblue',
};
function createCSS() {
if (CSScreated) return;
const css = `
:root { --rounded: 0.1rem; }
.menu { position: absolute; top: 0rem; right: 0; min-width: 180px; width: max-content; padding: 0.2rem 0.8rem 0 0.8rem; line-height: 1.8rem; z-index: 10; background: ${theme.background}; border: none }
.button { text-shadow: none; }
.menu-container { display: block; max-height: 100vh; }
.menu-container-fadeout { max-height: 0; overflow: hidden; transition: max-height, 0.5s ease; }
.menu-container-fadein { max-height: 100vh; overflow: hidden; transition: max-height, 0.5s ease; }
.menu-item { display: flex; white-space: nowrap; padding: 0.2rem; cursor: default; width: 100%; }
.menu-item:hover { background: ${theme.hover} }
.menu-title { cursor: pointer; }
.menu-hr { margin: 0.2rem; border: 1px solid rgba(0, 0, 0, 0.5) }
.menu-label { padding: 0; font-weight: 800; }
.menu-list { margin-right: 0.8rem; }
select:focus { outline: none; }
.menu-list-item { background: ${theme.itemBackground}; color: ${theme.itemColor}; border: none; padding: 0.2rem; font-family: inherit;
font-variant: inherit; border-radius: var(--rounded); font-weight: 800; }
.menu-chart-title { padding: 0; font-size: 0.8rem; font-weight: 800; align-items: center}
.menu-chart-canvas { background: transparent; margin: 0.2rem 0 0.2rem 0.6rem; }
.menu-button { border: 0; background: ${theme.buttonBackground}; width: -webkit-fill-available; padding: 8px; margin: 8px; cursor: pointer;
border-radius: var(--rounded); justify-content: center; font-family: inherit; font-variant: inherit; font-size: 1rem; font-weight: 800; }
.menu-button:hover { background: ${theme.buttonHover}; box-shadow: 4px 4px 4px 0 black; }
.menu-button:focus { outline: none; }
.menu-checkbox { width: 2.6rem; height: 1rem; background: ${theme.itemBackground}; margin: 0.5rem 1.0rem 0 0; position: relative; border-radius: var(--rounded); }
.menu-checkbox:after { content: 'OFF'; color: ${theme.checkboxOff}; position: absolute; right: 0.2rem; top: -0.4rem; font-weight: 800; font-size: 0.5rem; }
.menu-checkbox:before { content: 'ON'; color: ${theme.checkboxOn}; position: absolute; left: 0.3rem; top: -0.4rem; font-weight: 800; font-size: 0.5rem; }
.menu-checkbox-label { width: 1.3rem; height: 1rem; cursor: pointer; position: absolute; top: 0; left: 0rem; z-index: 1; background: ${theme.checkboxOff};
border-radius: var(--rounded); transition: left 0.6s ease; }
input[type=checkbox] { visibility: hidden; }
input[type=checkbox]:checked + label { left: 1.4rem; background: ${theme.checkboxOn}; }
.menu-range { margin: 0.2rem 1.0rem 0 0; width: 5rem; background: transparent; color: ${theme.rangeBackground}; }
.menu-range:before { color: ${theme.rangeLabel}; margin: 0 0.4rem 0 0; font-weight: 800; font-size: 0.6rem; position: relative; top: 0.3rem; content: attr(value); }
input[type=range] { -webkit-appearance: none; }
input[type=range]::-webkit-slider-runnable-track { width: 100%; height: 1rem; cursor: pointer; background: ${theme.itemBackground}; border-radius: var(--rounded); border: 1px; }
input[type=range]::-moz-range-track { width: 100%; height: 1rem; cursor: pointer; background: ${theme.itemBackground}; border-radius: var(--rounded); border: 1px; }
input[type=range]::-webkit-slider-thumb { border: 1px solid #000000; margin-top: 0; height: 1rem; width: 0.6rem; border-radius: var(--rounded); background: ${theme.rangeBackground}; cursor: pointer; -webkit-appearance: none; }
input[type=range]::-moz-range-thumb { border: 1px solid #000000; margin-top: 0rem; height: 1rem; width: 0.6rem; border-radius: var(--rounded); background: ${theme.rangeBackground}; cursor: pointer; -webkit-appearance: none; }
.svg-background { fill:#303030; cursor:pointer; opacity: 0.6; }
.svg-foreground { fill:white; cursor:pointer; opacity: 0.8; }
`;
const el = document.createElement('style');
el.innerHTML = css;
document.getElementsByTagName('head')[0].appendChild(el);
CSScreated = true;
}
class Menu {
constructor(parent, title, position, userTheme) {
if (userTheme) theme = { ...theme, ...userTheme };
createCSS();
this.createMenu(parent, title, position);
this.id = 0;
this.instance = instance;
instance++;
this._maxFPS = 0;
this.hidden = false;
}
createMenu(parent, title = '', position = { top: null, left: null, bottom: null, right: null }) {
/** @type {HTMLDivElement} */
this.menu = document.createElement('div');
this.menu.id = `menu-${instance}`;
this.menu.className = 'menu';
if (position) {
if (position.top) this.menu.style.top = `${position.top}`;
if (position.bottom) this.menu.style.bottom = `${position.bottom}`;
if (position.left) this.menu.style.left = `${position.left}`;
if (position.right) this.menu.style.right = `${position.right}`;
}
this.container = document.createElement('div');
this.container.id = `menu-container-${instance}`;
this.container.className = 'menu-container menu-container-fadein';
// set menu title with pulldown arrow
const elTitle = document.createElement('div');
elTitle.className = 'menu-title';
elTitle.id = `menu-title-${instance}`;
const svg = `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 448 512" style="width: 2rem; height: 2rem; vertical-align: top;">
<path d="M400 32H48A48 48 0 0 0 0 80v352a48 48 0 0 0 48 48h352a48 48 0 0 0 48-48V80a48 48 0 0 0-48-48zm-51.37 182.31L232.06 348.16a10.38 10.38 0 0 1-16.12 0L99.37 214.31C92.17 206 97.28 192 107.43 192h233.14c10.15 0 15.26 14 8.06 22.31z" class="svg-background"/>
<path d="M348.63 214.31L232.06 348.16a10.38 10.38 0 0 1-16.12 0L99.37 214.31C92.17 206 97.28 192 107.43 192h233.14c10.15 0 15.26 14 8.06 22.31z" class="svg-foreground"/>
</svg>`;
if (title) elTitle.innerHTML = `${title}${svg}`;
this.menu.appendChild(elTitle);
elTitle.addEventListener('click', () => {
if (this.container && this.menu) {
this.container.classList.toggle('menu-container-fadeout');
this.container.classList.toggle('menu-container-fadein');
// this.menu.style.borderStyle = this.container.classList.contains('menu-container-fadeout') ? 'none' : 'solid';
}
});
this.menu.appendChild(this.container);
if (typeof parent === 'object') parent.appendChild(this.menu);
else document.getElementById(parent).appendChild(this.menu);
}
get newID() {
this.id++;
return `menu-${this.instance}-${this.id}`;
}
get ID() {
return `menu-${this.instance}-${this.id}`;
}
get width() {
return this.menu ? this.menu.offsetWidth : 0;
}
get height() {
return this.menu ? this.menu.offsetHeight : 0;
}
hide() {
if (this.container && this.container.classList.contains('menu-container-fadein')) {
this.container.classList.toggle('menu-container-fadeout');
this.container.classList.toggle('menu-container-fadein');
}
}
visible() {
return (this.container ? this.container.classList.contains('menu-container-fadein') : false);
}
toggle(evt) {
if (this.container && this.menu) {
this.container.classList.toggle('menu-container-fadeout');
this.container.classList.toggle('menu-container-fadein');
/*
if (this.container.classList.contains('menu-container-fadein') && evt) {
const x = evt.x || (evt.touches && evt.touches[0] ? evt.touches[0].pageX : null);
// const y = evt.y || (evt.touches && evt.touches[0] ? evt.touches[0].pageY : null);
if (x) this.menu.style.left = `${x - (this.menu.offsetWidth / 2)}px`;
// if (y) this.menu.style.top = '5.5rem'; // `${evt.y + 55}px`;
if (this.menu.offsetLeft < 0) this.menu.style.left = '0';
if ((this.menu.offsetLeft + this.menu.offsetWidth) > window.innerWidth) {
this.menu.style.left = '';
this.menu.style.right = '0';
}
// this.menu.style.borderStyle = 'solid';
} else {
// this.menu.style.borderStyle = 'none';
}
*/
}
}
addTitle(title) {
const el = document.createElement('div');
el.className = 'menu-title';
el.id = this.newID;
el.innerHTML = title;
if (this.menu) this.menu.appendChild(el);
el.addEventListener('click', () => {
this.hidden = !this.hidden;
const all = document.getElementsByClassName('menu');
for (const item of all) {
item.style.display = this.hidden ? 'none' : 'block';
}
});
return el;
}
addLabel(title) {
const el = document.createElement('div');
el.className = 'menu-item menu-label';
el.id = this.newID;
el.innerHTML = title;
if (this.container) this.container.appendChild(el);
return el;
}
addBool(title, object, variable, callback) {
const el = document.createElement('div');
el.className = 'menu-item';
el.innerHTML = `<div class="menu-checkbox"><input class="menu-checkbox" type="checkbox" id="${this.newID}" ${object[variable] ? 'checked' : ''}/><label class="menu-checkbox-label" for="${this.ID}"></label></div>${title}`;
if (this.container) this.container.appendChild(el);
el.addEventListener('change', (evt) => {
if (evt.target) {
object[variable] = evt.target['checked'];
if (callback) callback(evt.target['checked']);
}
});
return el;
}
async addList(title, items, selected, callback) {
const el = document.createElement('div');
el.className = 'menu-item';
let options = '';
for (const item of items) {
const def = item === selected ? 'selected' : '';
options += `<option value="${item}" ${def}>${item}</option>`;
}
el.innerHTML = `<div class="menu-list"><select name="${this.ID}" title="${title}" class="menu-list-item">${options}</select><label for="${this.ID}"></label></div>${title}`;
el.style.fontFamily = document.body.style.fontFamily;
el.style.fontSize = document.body.style.fontSize;
el.style.fontVariant = document.body.style.fontVariant;
if (this.container) this.container.appendChild(el);
el.addEventListener('change', (evt) => {
if (callback && evt.target) callback(items[evt.target['selectedIndex']]);
});
return el;
}
addRange(title, object, variable, min, max, step, callback) {
const el = document.createElement('div');
el.className = 'menu-item';
el.innerHTML = `<input class="menu-range" type="range" title="${title}" id="${this.newID}" min="${min}" max="${max}" step="${step}" value="${object[variable]}">${title}`;
if (this.container) this.container.appendChild(el);
el.addEventListener('change', (evt) => {
if (evt.target) {
object[variable] = parseInt(evt.target['value']) === parseFloat(evt.target['value']) ? parseInt(evt.target['value']) : parseFloat(evt.target['value']);
evt.target.setAttribute('value', evt.target['value']);
if (callback) callback(evt.target['value']);
}
});
el['input'] = el.children[0];
return el;
}
addHTML(html) {
const el = document.createElement('div');
el.className = 'menu-item';
el.id = this.newID;
if (html) el.innerHTML = html;
if (this.container) this.container.appendChild(el);
return el;
}
addButton(titleOn, titleOff, callback) {
const el = document.createElement('button');
el.className = 'menu-item menu-button';
el.style.fontFamily = document.body.style.fontFamily;
el.style.fontSize = document.body.style.fontSize;
el.style.fontVariant = document.body.style.fontVariant;
el.type = 'button';
el.id = this.newID;
el.innerText = titleOn;
if (this.container) this.container.appendChild(el);
el.addEventListener('click', () => {
if (el.innerText === titleOn) el.innerText = titleOff;
else el.innerText = titleOn;
if (callback) callback(el.innerText !== titleOn);
});
return el;
}
addValue(title, val, suffix = '') {
const el = document.createElement('div');
el.className = 'menu-item';
el.id = `menu-val-${title}`;
el.innerText = `${title}: ${val}${suffix}`;
if (this.container) this.container.appendChild(el);
return el;
}
updateValue(title, val, suffix = '') {
const el = document.getElementById(`menu-val-${title}`);
if (el) el.innerText = `${title}: ${val}${suffix}`;
else this.addValue(title, val);
}
addChart(title, id, width = 150, height = 40, color) {
if (color) theme.chartColor = color;
const el = document.createElement('div');
el.className = 'menu-item menu-chart-title';
el.id = this.newID;
el.innerHTML = `<font color=${theme.chartColor}>${title}</font><canvas id="menu-canvas-${id}" class="menu-chart-canvas" width="${width}px" height="${height}px"></canvas>`;
if (this.container) this.container.appendChild(el);
return el;
}
async updateChart(id, values) {
if (!values || (values.length === 0)) return;
/** @type {HTMLCanvasElement} */
const canvas = document.getElementById(`menu-canvas-${id}`);
if (!canvas) return;
const ctx = canvas.getContext('2d');
if (!ctx) return;
ctx.fillStyle = theme.background;
ctx.fillRect(0, 0, canvas.width, canvas.height);
const width = canvas.width / values.length;
const max = 1 + Math.max(...values);
const height = canvas.height / max;
for (let i = 0; i < values.length; i++) {
const gradient = ctx.createLinearGradient(0, (max - values[i]) * height, 0, 0);
gradient.addColorStop(0.1, theme.chartColor);
gradient.addColorStop(0.4, theme.background);
ctx.fillStyle = gradient;
ctx.fillRect(i * width, 0, width - 4, canvas.height);
ctx.fillStyle = theme.background;
ctx.font = `${width / 1.5}px "Segoe UI"`;
ctx.fillText(Math.round(values[i]).toString(), i * width + 1, canvas.height - 1, width - 1);
}
}
}
export default Menu;
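A minimal usage sketch for the Menu helper above; the import path, state object and callbacks here are hypothetical:

```js
import Menu from './menu.js'; // path assumed; adjust to where this helper lives

const settings = { enabled: true, quality: 0.5 }; // hypothetical state object bound to the controls
const menu = new Menu(document.body, 'demo menu', { top: '1rem', right: '1rem' });
menu.addBool('enabled', settings, 'enabled', (val) => console.log('enabled:', val)); // checkbox bound to settings.enabled
menu.addRange('quality', settings, 'quality', 0, 1, 0.1, (val) => console.log('quality:', val)); // slider bound to settings.quality
menu.addButton('start', 'stop', (running) => console.log('running:', running)); // toggle button
menu.addValue('fps', 0); // static value that can be refreshed later
menu.updateValue('fps', 30);
```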

85
demo/helpers/webrtc.js Normal file

@ -0,0 +1,85 @@
const debug = true;
async function log(...msg) {
if (debug) {
const dt = new Date();
const ts = `${dt.getHours().toString().padStart(2, '0')}:${dt.getMinutes().toString().padStart(2, '0')}:${dt.getSeconds().toString().padStart(2, '0')}.${dt.getMilliseconds().toString().padStart(3, '0')}`;
console.log(ts, 'webrtc', ...msg); // eslint-disable-line no-console
}
}
/**
* helper implementation of webrtc
* performs:
* - discovery
* - handshake
* - connect to webrtc stream
* - assign webrtc stream to video element
*
* for development purposes I'm using a test webrtc server that reads an rtsp stream from a security camera:
* <https://github.com/vladmandic/stream-rtsp>
*
* @param {string} server
* @param {string} streamName
* @param {string | HTMLVideoElement} elementName
* @returns {Promise}
*/
async function webRTC(server, streamName, elementName) {
const suuid = streamName;
log('client starting');
log(`server: ${server} stream: ${suuid}`);
const stream = new MediaStream();
const connection = new RTCPeerConnection();
connection.oniceconnectionstatechange = () => log('connection', connection.iceConnectionState);
connection.onnegotiationneeded = async () => {
const offer = await connection.createOffer();
await connection.setLocalDescription(offer);
const res = await fetch(`${server}/stream/receiver/${suuid}`, {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8' },
body: new URLSearchParams({
suuid: `${suuid}`,
data: `${btoa(connection.localDescription ? connection.localDescription.sdp : '')}`,
}),
});
const data = res && res.ok ? await res.text() : '';
if (data.length === 0) {
log('cannot connect:', server);
} else {
connection.setRemoteDescription(new RTCSessionDescription({
type: 'answer',
sdp: atob(data),
}));
log('negotiation start:', offer);
}
};
connection.ontrack = (event) => {
stream.addTrack(event.track);
const video = (typeof elementName === 'string') ? document.getElementById(elementName) : elementName;
if (video instanceof HTMLVideoElement) video.srcObject = stream;
else log('element is not a video element:', elementName);
// video.onloadeddata = async () => log('resolution:', video.videoWidth, video.videoHeight);
log('received track:', event.track);
};
const res = await fetch(`${server}/stream/codec/${suuid}`);
const streams = res && res.ok ? await res.json() : [];
if (streams.length === 0) log('received no streams');
else log('received streams:', streams);
for (const s of streams) connection.addTransceiver(s.Type, { direction: 'sendrecv' });
const channel = connection.createDataChannel(suuid, { maxRetransmits: 10 });
channel.onmessage = (e) => log('channel message:', channel.label, 'payload', e.data);
channel.onerror = (e) => log('channel error:', channel.label, 'payload', e);
// channel.onbufferedamountlow = (e) => log('channel buffering:', channel.label, 'payload', e);
channel.onclose = () => log('channel close', channel.label);
channel.onopen = () => {
log('channel open', channel.label);
setInterval(() => channel.send('ping'), 1000); // send periodic ping because Pion does not handle RTCSessionDescription.close()
};
}
export default webRTC;
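A usage sketch for the helper above, assuming a companion stream server is running; the server address and stream name are placeholders:

```js
import webRTC from './webrtc.js'; // path assumed

const video = document.getElementById('video'); // existing <video> element in the page
webRTC('http://localhost:8002', 'camera1', video) // hypothetical server address and stream name
  .then(() => video.play())
  .catch((err) => console.error('webrtc error:', err));
```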

17
demo/icons.css Normal file

File diff suppressed because one or more lines are too long

133
demo/index-pwa.js Normal file

@ -0,0 +1,133 @@
/**
* PWA Service Worker for Human main demo
*/
/* eslint-disable no-restricted-globals */
/// <reference lib="webworker" />
const skipCaching = false;
const cacheName = 'Human';
const cacheFiles = ['/favicon.ico', 'manifest.webmanifest']; // assets and models are cached on first access
let cacheModels = true; // *.bin; *.json
let cacheWASM = true; // *.wasm
let cacheOther = false; // *
let listening = false;
const stats = { hit: 0, miss: 0 };
const log = (...msg) => {
const dt = new Date();
const ts = `${dt.getHours().toString().padStart(2, '0')}:${dt.getMinutes().toString().padStart(2, '0')}:${dt.getSeconds().toString().padStart(2, '0')}.${dt.getMilliseconds().toString().padStart(3, '0')}`;
console.log(ts, 'pwa', ...msg); // eslint-disable-line no-console
};
async function updateCached(req) {
fetch(req)
.then((update) => {
// update cache if request is ok
if (update.ok) {
caches
.open(cacheName)
.then((cache) => cache.put(req, update))
.catch((err) => log('cache update error', err)); // eslint-disable-line promise/no-nesting
}
return true;
})
.catch((err) => {
log('fetch error', err);
return false;
});
}
async function getCached(evt) {
// just fetch
if (skipCaching) return fetch(evt.request);
// get from cache or fetch if not in cache
let found = await caches.match(evt.request);
if (found && found.ok) {
stats.hit += 1;
} else {
stats.miss += 1;
found = await fetch(evt.request);
}
// if we still don't have it, return the offline page
if (!found || !found.ok) {
found = await caches.match('offline.html');
}
// update cache in the background
if (found && found.type === 'basic' && found.ok) {
const uri = new URL(evt.request.url);
if (uri.pathname.endsWith('.bin') || uri.pathname.endsWith('.json')) {
if (cacheModels) updateCached(evt.request);
} else if (uri.pathname.endsWith('.wasm')) {
if (cacheWASM) updateCached(evt.request);
} else if (cacheOther) {
updateCached(evt.request);
}
}
return found;
}
function cacheInit() {
return caches.open(cacheName) // return the promise so install can wait on it
.then((cache) => cache.addAll(cacheFiles)
.then( // eslint-disable-line promise/no-nesting
() => log('cache refresh:', cacheFiles.length, 'files'),
(err) => log('cache error', err),
))
.catch(() => log('cache error'));
}
if (!listening) {
// get messages from main app to update configuration
self.addEventListener('message', (evt) => {
log('event message:', evt.data);
switch (evt.data.key) {
case 'cacheModels': cacheModels = evt.data.val; break;
case 'cacheWASM': cacheWASM = evt.data.val; break;
case 'cacheOther': cacheOther = evt.data.val; break;
default:
}
});
self.addEventListener('install', (evt) => {
log('install');
self.skipWaiting();
evt.waitUntil(cacheInit()); // invoke the function so waitUntil receives its promise
});
self.addEventListener('activate', (evt) => {
log('activate');
evt.waitUntil(self.clients.claim());
});
self.addEventListener('fetch', (evt) => {
const uri = new URL(evt.request.url);
// if (uri.pathname === '/') { log('cache skip /', evt.request); return; } // skip root access requests
if (evt.request.cache === 'only-if-cached' && evt.request.mode !== 'same-origin') return; // required due to chrome bug
if (uri.origin !== self.location.origin) return; // skip non-local requests
if (evt.request.method !== 'GET') return; // only cache get requests
if (evt.request.url.includes('/api/')) return; // don't cache api requests, failures are handled at the time of call
evt.respondWith(getCached(evt)); // getCached always resolves: cache hit, network fetch or offline page
});
// only trigger controllerchange once
let refreshed = false;
self.addEventListener('controllerchange', (evt) => {
log(`PWA: ${evt.type}`);
if (refreshed) return;
refreshed = true;
self.location.reload();
});
listening = true;
}
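For reference, a sketch of how a main page might register this service worker and push cache policy to it using the message keys handled above; the script path and scope are assumptions:

```js
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/index-pwa.js', { scope: '/' }) // path and scope assumed
    .then((reg) => {
      // update worker cache policy using the keys its message listener understands
      reg.active?.postMessage({ key: 'cacheModels', val: true });
      reg.active?.postMessage({ key: 'cacheWASM', val: true });
      reg.active?.postMessage({ key: 'cacheOther', val: false });
      return reg;
    })
    .catch((err) => console.error('pwa registration failed:', err));
}
```

Note that `reg.active` can still be null right after the first install, so a production page would wait on `navigator.serviceWorker.ready` before posting.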

37
demo/index-worker.js Normal file

@ -0,0 +1,37 @@
/**
* Web worker used by main demo app
* Loaded from index.js
*/
/// <reference lib="webworker"/>
// load Human using IIFE script as Chrome Mobile does not support Modules as Workers
self.importScripts('../dist/human.js'); // eslint-disable-line no-restricted-globals
let busy = false;
// eslint-disable-next-line new-cap, no-undef
const human = new Human.default();
onmessage = async (msg) => { // receive message from main thread
if (busy) return;
busy = true;
// received from index.js using:
// worker.postMessage({ image: image.data.buffer, width: canvas.width, height: canvas.height, userConfig }, [image.data.buffer]);
const image = new ImageData(new Uint8ClampedArray(msg.data.image), msg.data.width, msg.data.height);
let result = {};
result = await human.detect(image, msg.data.userConfig);
result.tensors = human.tf.engine().state.numTensors; // append to result object so the main thread gets the info
result.backend = human.tf.getBackend(); // append to result object so the main thread gets the info
if (result.canvas) { // convert canvas to imageData and send it by reference
const canvas = new OffscreenCanvas(result.canvas.width, result.canvas.height);
const ctx = canvas.getContext('2d');
if (ctx) ctx.drawImage(result.canvas, 0, 0);
const img = ctx ? ctx.getImageData(0, 0, result.canvas.width, result.canvas.height) : null;
result.canvas = null; // must strip original canvas from return value as it cannot be transferred from worker thread
if (img) postMessage({ result, image: img.data.buffer, width: msg.data.width, height: msg.data.height }, [img.data.buffer]);
else postMessage({ result }); // send message back to main thread without image data
} else {
postMessage({ result }); // send message back to main thread without canvas
}
busy = false;
};
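The matching main-thread side might look like this sketch; the worker path, canvas id and empty config are assumptions:

```js
const worker = new Worker('index-worker.js'); // path assumed
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
const image = ctx.getImageData(0, 0, canvas.width, canvas.height);
// transfer the pixel buffer to the worker instead of copying it
worker.postMessage({ image: image.data.buffer, width: canvas.width, height: canvas.height, userConfig: {} }, [image.data.buffer]);
worker.onmessage = (msg) => console.log('backend:', msg.data.result.backend, 'tensors:', msg.data.result.tensors);
```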

118
demo/index.html Normal file

@ -0,0 +1,118 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Human</title>
<meta name="viewport" content="width=device-width" id="viewport">
<meta name="keywords" content="Human">
<meta name="application-name" content="Human">
<meta name="description" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="msapplication-tooltip" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="theme-color" content="#000000">
<link rel="manifest" href="./manifest.webmanifest">
<link rel="shortcut icon" href="../favicon.ico" type="image/x-icon">
<link rel="apple-touch-icon" href="../assets/icon.png">
<link rel="stylesheet" type="text/css" href="./icons.css">
<script src="./index.js" type="module"></script>
<style>
@font-face { font-family: 'Lato'; font-display: swap; font-style: normal; font-weight: 100; src: local('Lato'), url('../assets/lato-light.woff2') }
html { font-family: 'Lato', 'Segoe UI'; font-size: 16px; font-variant: small-caps; }
body { margin: 0; background: black; color: white; overflow-x: hidden; width: 100vw; height: 100vh; }
body::-webkit-scrollbar { display: none; }
hr { width: 100%; }
.play { position: absolute; width: 256px; height: 256px; z-index: 9; bottom: 15%; left: 50%; margin-left: -125px; display: none; filter: grayscale(1); }
.play:hover { filter: grayscale(0); }
.btn-background { fill:grey; cursor: pointer; opacity: 0.6; }
.btn-background:hover { opacity: 1; }
.btn-foreground { fill:white; cursor: pointer; opacity: 0.8; }
.btn-foreground:hover { opacity: 1; }
.status { position: absolute; width: 100vw; bottom: 10%; text-align: center; font-size: 3rem; font-weight: 100; text-shadow: 2px 2px #303030; }
.thumbnail { margin: 8px; box-shadow: 0 0 4px 4px dimgrey; }
.thumbnail:hover { box-shadow: 0 0 8px 8px dimgrey; filter: grayscale(1); }
.log { position: absolute; bottom: 0; margin: 0.4rem 0.4rem 0 0.4rem; font-size: 0.9rem; }
.menubar { width: 100vw; background: #303030; display: flex; justify-content: space-evenly; text-align: center; padding: 8px; cursor: pointer; }
.samples-container { display: flex; flex-wrap: wrap; }
.video { display: none; }
.canvas { margin: 0 auto; }
.bench { position: absolute; right: 0; bottom: 0; }
.compare-image { width: 200px; position: absolute; top: 150px; left: 30px; box-shadow: 0 0 2px 2px black; background: black; display: none; }
.loader { width: 300px; height: 300px; border: 3px solid transparent; border-radius: 50%; border-top: 4px solid #f15e41; animation: spin 4s linear infinite; position: absolute; bottom: 15%; left: 50%; margin-left: -150px; z-index: 15; }
.loader::before, .loader::after { content: ""; position: absolute; top: 6px; bottom: 6px; left: 6px; right: 6px; border-radius: 50%; border: 4px solid transparent; }
.loader::before { border-top-color: #bad375; animation: 3s spin linear infinite; }
.loader::after { border-top-color: #26a9e0; animation: spin 1.5s linear infinite; }
@keyframes spin {
from { transform: rotate(0deg); }
to { transform: rotate(360deg); }
}
.wave { position: fixed; top: 0; left: -90%; width: 100vw; height: 100vh; border-radius: 10%; opacity: .3; z-index: -1; }
.wave.one { animation: rotate 10000ms infinite linear; background: #2F4F4F; }
.wave.two { animation: rotate 15000ms infinite linear; background: #1F3F3F; }
.wave.three { animation: rotate 20000ms infinite linear; background: #0F1F1F; }
@keyframes rotate {
from { transform: rotate(0deg); }
to { transform: rotate(360deg); }
}
.button { text-shadow: 2px 2px black; display: flex; }
.button:hover { text-shadow: -2px -2px black; color: lightblue; }
.button::before { display: inline-block; font-style: normal; font-variant: normal; text-rendering: auto; -webkit-font-smoothing: antialiased; font-family: "FA"; font-weight: 900; font-size: 2.4rem; margin-right: 0.4rem; }
.button-display::before { content: "\f8c4"; }
.button-image::before { content: "\f3f2"; }
.button-process::before { content: "\f3f0"; }
.button-model::before { content: "\f2c2"; }
.button-start::before { content: "\f144"; }
.button-stop::before { content: "\f28b"; }
.icon { width: 180px; text-align: -webkit-center; text-align: -moz-center; filter: grayscale(1); }
.icon:hover { background: #505050; filter: grayscale(0); }
.hint { opacity: 0; transition-duration: 0.5s; transition-property: opacity; font-style: italic; position: fixed; top: 5rem; padding: 8px; margin: 8px; box-shadow: 0 0 2px 2px #303030; }
.input-file { align-self: center; width: 5rem; }
.results { position: absolute; left: 0; top: 5rem; background: #303030; width: 20rem; height: 90%; font-size: 0.8rem; overflow-y: auto; display: none }
.results::-webkit-scrollbar { background-color: #303030; }
.results::-webkit-scrollbar-thumb { background: black; border-radius: 10px; }
.json-line { margin: 4px 0; display: flex; justify-content: flex-start; }
.json { margin-right: 8px; margin-left: 8px; }
.json-type { color: lightyellow; }
.json-key { color: white; }
.json-index { color: lightcoral; }
.json-value { margin-left: 20px; }
.json-number { color: lightgreen; }
.json-boolean { color: lightyellow; }
.json-string { color: lightblue; }
.json-size { color: gray; }
.hide { display: none; }
.fas { display: inline-block; width: 0; height: 0; border-style: solid; }
.fa-caret-down { border-width: 10px 8px 0 8px; border-color: white transparent }
.fa-caret-right { border-width: 10px 0 8px 10px; border-color: transparent transparent transparent white; }
</style>
</head>
<body>
<div id="play" class="play icon-play"></div>
<div id="background">
<div class="wave one"></div>
<div class="wave two"></div>
<div class="wave three"></div>
</div>
<div id="loader" class="loader"></div>
<div id="status" class="status"></div>
<div id="menubar" class="menubar">
<div id="btnDisplay" class="icon"><div class="icon-binoculars"> </div>display</div>
<div id="btnImage" class="icon"><div class="icon-brush"></div>input</div>
<div id="btnProcess" class="icon"><div class="icon-stats"></div>options</div>
<div id="btnModel" class="icon"><div class="icon-games"></div>models</div>
<div id="btnStart" class="icon"><div class="icon-webcam"></div><span id="btnStartText">start video</span></div>
</div>
<div id="media">
<canvas id="canvas" class="canvas"></canvas>
<video id="video" playsinline class="video"></video>
</div>
<div id="compare-container" class="compare-image">
<canvas id="compare-canvas" width="200" height="200"></canvas>
<div id="similarity"></div>
</div>
<div id="samples-container" class="samples-container"></div>
<div id="hint" class="hint"></div>
<div id="log" class="log"></div>
<div id="results" class="results"></div>
</body>
</html>

1025
demo/index.js Normal file

File diff suppressed because it is too large

10
demo/manifest.webmanifest Normal file

@ -0,0 +1,10 @@
{
"name": "Human Library",
"short_name": "Human",
"icons": [{ "src": "../assets/icon.png", "sizes": "512x512", "type": "image/png", "purpose": "any maskable" }],
"start_url": "./index.html",
"scope": "/",
"display": "standalone",
"background_color": "#000000",
"theme_color": "#000000"
}


@ -0,0 +1,71 @@
# Human Multithreading Demos
- **Browser** demo `multithread` & `worker`
Runs each `human` module in a separate web worker for highest possible performance
- **NodeJS** demo `node-multiprocess` & `node-multiprocess-worker`
Runs multiple `human` instances in parallel by dispatching work to a pool of pre-created worker processes
<br><hr><br>
## NodeJS Multi-process Demo
`nodejs/node-multiprocess.js` and `nodejs/node-multiprocess-worker.js`: Demo using NodeJS with CommonJS module
Demo that starts n child worker processes for parallel execution
```shell
node demo/nodejs/node-multiprocess.js
```
<!-- eslint-skip -->
```json
2021-06-01 08:54:19 INFO: @vladmandic/human version 2.0.0
2021-06-01 08:54:19 INFO: User: vlado Platform: linux Arch: x64 Node: v16.0.0
2021-06-01 08:54:19 INFO: Human multi-process test
2021-06-01 08:54:19 STATE: Enumerated images: ./assets 15
2021-06-01 08:54:19 STATE: Main: started worker: 130362
2021-06-01 08:54:19 STATE: Main: started worker: 130363
2021-06-01 08:54:19 STATE: Main: started worker: 130369
2021-06-01 08:54:19 STATE: Main: started worker: 130370
2021-06-01 08:54:20 STATE: Worker: PID: 130370 TensorFlow/JS 3.6.0 Human 2.0.0 Backend: tensorflow
2021-06-01 08:54:20 STATE: Worker: PID: 130362 TensorFlow/JS 3.6.0 Human 2.0.0 Backend: tensorflow
2021-06-01 08:54:20 STATE: Worker: PID: 130369 TensorFlow/JS 3.6.0 Human 2.0.0 Backend: tensorflow
2021-06-01 08:54:20 STATE: Worker: PID: 130363 TensorFlow/JS 3.6.0 Human 2.0.0 Backend: tensorflow
2021-06-01 08:54:21 STATE: Main: dispatching to worker: 130370
2021-06-01 08:54:21 INFO: Latency: worker initialization: 1348 message round trip: 0
2021-06-01 08:54:21 DATA: Worker received message: 130370 { test: true }
2021-06-01 08:54:21 STATE: Main: dispatching to worker: 130362
2021-06-01 08:54:21 DATA: Worker received message: 130362 { image: 'samples/ai-face.jpg' }
2021-06-01 08:54:21 DATA: Worker received message: 130370 { image: 'samples/ai-body.jpg' }
2021-06-01 08:54:21 STATE: Main: dispatching to worker: 130369
2021-06-01 08:54:21 STATE: Main: dispatching to worker: 130363
2021-06-01 08:54:21 DATA: Worker received message: 130369 { image: 'assets/human-sample-upper.jpg' }
2021-06-01 08:54:21 DATA: Worker received message: 130363 { image: 'assets/sample-me.jpg' }
2021-06-01 08:54:24 DATA: Main: worker finished: 130362 detected faces: 1 bodies: 1 hands: 0 objects: 1
2021-06-01 08:54:24 STATE: Main: dispatching to worker: 130362
2021-06-01 08:54:24 DATA: Worker received message: 130362 { image: 'assets/sample1.jpg' }
2021-06-01 08:54:25 DATA: Main: worker finished: 130369 detected faces: 1 bodies: 1 hands: 0 objects: 1
2021-06-01 08:54:25 STATE: Main: dispatching to worker: 130369
2021-06-01 08:54:25 DATA: Main: worker finished: 130370 detected faces: 1 bodies: 1 hands: 0 objects: 1
2021-06-01 08:54:25 STATE: Main: dispatching to worker: 130370
2021-06-01 08:54:25 DATA: Worker received message: 130369 { image: 'assets/sample2.jpg' }
2021-06-01 08:54:25 DATA: Main: worker finished: 130363 detected faces: 1 bodies: 1 hands: 0 objects: 2
2021-06-01 08:54:25 STATE: Main: dispatching to worker: 130363
2021-06-01 08:54:25 DATA: Worker received message: 130370 { image: 'assets/sample3.jpg' }
2021-06-01 08:54:25 DATA: Worker received message: 130363 { image: 'assets/sample4.jpg' }
2021-06-01 08:54:30 DATA: Main: worker finished: 130362 detected faces: 3 bodies: 1 hands: 0 objects: 7
2021-06-01 08:54:30 STATE: Main: dispatching to worker: 130362
2021-06-01 08:54:30 DATA: Worker received message: 130362 { image: 'assets/sample5.jpg' }
2021-06-01 08:54:31 DATA: Main: worker finished: 130369 detected faces: 3 bodies: 1 hands: 0 objects: 5
2021-06-01 08:54:31 STATE: Main: dispatching to worker: 130369
2021-06-01 08:54:31 DATA: Worker received message: 130369 { image: 'assets/sample6.jpg' }
2021-06-01 08:54:31 DATA: Main: worker finished: 130363 detected faces: 4 bodies: 1 hands: 2 objects: 2
2021-06-01 08:54:31 STATE: Main: dispatching to worker: 130363
2021-06-01 08:54:39 STATE: Main: worker exit: 130370 0
2021-06-01 08:54:39 DATA: Main: worker finished: 130362 detected faces: 1 bodies: 1 hands: 0 objects: 1
2021-06-01 08:54:39 DATA: Main: worker finished: 130369 detected faces: 1 bodies: 1 hands: 1 objects: 3
2021-06-01 08:54:39 STATE: Main: worker exit: 130362 0
2021-06-01 08:54:39 STATE: Main: worker exit: 130369 0
2021-06-01 08:54:41 DATA: Main: worker finished: 130363 detected faces: 9 bodies: 1 hands: 0 objects: 10
2021-06-01 08:54:41 STATE: Main: worker exit: 130363 0
2021-06-01 08:54:41 INFO: Processed: 15 images in total: 22006 ms working: 20658 ms average: 1377 ms
```


@ -0,0 +1,33 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Human</title>
<meta name="viewport" content="width=device-width" id="viewport">
<meta name="keywords" content="Human">
<meta name="application-name" content="Human">
<meta name="description" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="msapplication-tooltip" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="theme-color" content="#000000">
<link rel="manifest" href="../../manifest.webmanifest">
<link rel="shortcut icon" href="../../favicon.ico" type="image/x-icon">
<link rel="apple-touch-icon" href="../../assets/icon.png">
<script src="../multithread/index.js" type="module"></script>
<style>
@font-face { font-family: 'Lato'; font-display: swap; font-style: normal; font-weight: 100; src: local('Lato'), url('../../assets/lato-light.woff2') }
html { font-family: 'Lato', 'Segoe UI'; font-size: 16px; font-variant: small-caps; }
body { margin: 0; background: black; color: white; overflow-x: hidden; width: 100vw; height: 100vh; }
body::-webkit-scrollbar { display: none; }
.status { position: absolute; width: 100vw; bottom: 10%; text-align: center; font-size: 3rem; font-weight: 100; text-shadow: 2px 2px #303030; }
.log { position: absolute; bottom: 0; margin: 0.4rem 0.4rem 0 0.4rem; font-size: 0.9rem; }
.video { display: none; }
.canvas { margin: 0 auto; }
</style>
</head>
<body>
<div id="status" class="status"></div>
<canvas id="canvas" class="canvas"></canvas>
<video id="video" playsinline class="video"></video>
<div id="log" class="log"></div>
</body>
</html>

264
demo/multithread/index.js Normal file

@ -0,0 +1,264 @@
/**
* Human demo for browsers
*
* @description Demo app that enables all Human modules and runs them in separate worker threads
*
*/
import { Human } from '../../dist/human.esm.js'; // equivalent of @vladmandic/human
import GLBench from '../helpers/gl-bench.js';
const workerJS = '../multithread/worker.js';
const config = {
main: { // processes input and runs gesture analysis
warmup: 'none',
backend: 'webgl',
modelBasePath: '../../models/',
async: false,
filter: { enabled: true },
face: { enabled: false },
object: { enabled: false },
gesture: { enabled: true },
hand: { enabled: false },
body: { enabled: false },
segmentation: { enabled: false },
},
face: { // runs all face models
warmup: 'none',
backend: 'webgl',
modelBasePath: '../../models/',
async: false,
filter: { enabled: false },
face: { enabled: true },
object: { enabled: false },
gesture: { enabled: false },
hand: { enabled: false },
body: { enabled: false },
segmentation: { enabled: false },
},
body: { // runs body model
warmup: 'none',
backend: 'webgl',
modelBasePath: '../../models/',
async: false,
filter: { enabled: false },
face: { enabled: false },
object: { enabled: false },
gesture: { enabled: false },
hand: { enabled: false },
body: { enabled: true },
segmentation: { enabled: false },
},
hand: { // runs hands model
warmup: 'none',
backend: 'webgl',
modelBasePath: '../../models/',
async: false,
filter: { enabled: false },
face: { enabled: false },
object: { enabled: false },
gesture: { enabled: false },
hand: { enabled: true },
body: { enabled: false },
segmentation: { enabled: false },
},
object: { // runs object model
warmup: 'none',
backend: 'webgl',
modelBasePath: '../../models/',
async: false,
filter: { enabled: false },
face: { enabled: false },
object: { enabled: true },
gesture: { enabled: false },
hand: { enabled: false },
body: { enabled: false },
segmentation: { enabled: false },
},
};
let human;
let canvas;
let video;
let bench;
const busy = {
face: false,
hand: false,
body: false,
object: false,
};
const workers = {
/** @type {Worker | null} */
face: null,
/** @type {Worker | null} */
body: null,
/** @type {Worker | null} */
hand: null,
/** @type {Worker | null} */
object: null,
};
const time = {
main: 0,
draw: 0,
face: '[warmup]',
body: '[warmup]',
hand: '[warmup]',
object: '[warmup]',
};
const start = {
main: 0,
draw: 0,
face: 0,
body: 0,
hand: 0,
object: 0,
};
const result = { // initialize empty result object which will be partially filled with results from each thread
performance: {},
hand: [],
body: [],
face: [],
object: [],
};
function log(...msg) {
const dt = new Date();
const ts = `${dt.getHours().toString().padStart(2, '0')}:${dt.getMinutes().toString().padStart(2, '0')}:${dt.getSeconds().toString().padStart(2, '0')}.${dt.getMilliseconds().toString().padStart(3, '0')}`;
console.log(ts, ...msg); // eslint-disable-line no-console
}
async function drawResults() {
start.draw = human.now();
const interpolated = human.next(result);
await human.draw.all(canvas, interpolated);
time.draw = Math.round(1 + human.now() - start.draw);
const fps = Math.round(10 * 1000 / time.main) / 10;
const draw = Math.round(10 * 1000 / time.draw) / 10;
const div = document.getElementById('log');
if (div) div.innerText = `Human: version ${human.version} | Performance: Main ${time.main}ms Face: ${time.face}ms Body: ${time.body}ms Hand: ${time.hand}ms Object ${time.object}ms | FPS: ${fps} / ${draw}`;
requestAnimationFrame(drawResults);
}
async function receiveMessage(msg) {
result[msg.data.type] = msg.data.result;
busy[msg.data.type] = false;
time[msg.data.type] = Math.round(human.now() - start[msg.data.type]);
}
async function runDetection() {
start.main = human.now();
if (!bench) {
bench = new GLBench(null, { trackGPU: false, chartHz: 20, chartLen: 20 });
bench.begin('human');
}
const ctx = canvas.getContext('2d');
ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
if (!busy.face) {
busy.face = true;
start.face = human.now();
if (workers.face) workers.face.postMessage({ image: imageData.data.buffer, width: canvas.width, height: canvas.height, config: config.face, type: 'face' }, [imageData.data.buffer.slice(0)]);
}
if (!busy.body) {
busy.body = true;
start.body = human.now();
if (workers.body) workers.body.postMessage({ image: imageData.data.buffer, width: canvas.width, height: canvas.height, config: config.body, type: 'body' }, [imageData.data.buffer.slice(0)]);
}
if (!busy.hand) {
busy.hand = true;
start.hand = human.now();
if (workers.hand) workers.hand.postMessage({ image: imageData.data.buffer, width: canvas.width, height: canvas.height, config: config.hand, type: 'hand' }, [imageData.data.buffer.slice(0)]);
}
if (!busy.object) {
busy.object = true;
start.object = human.now();
if (workers.object) workers.object.postMessage({ image: imageData.data.buffer, width: canvas.width, height: canvas.height, config: config.object, type: 'object' }, [imageData.data.buffer.slice(0)]);
}
time.main = Math.round(human.now() - start.main);
bench.nextFrame();
requestAnimationFrame(runDetection);
}
async function setupCamera() {
video = document.getElementById('video');
canvas = document.getElementById('canvas');
const output = document.getElementById('log');
let stream;
const constraints = {
audio: false,
video: {
facingMode: 'user',
resizeMode: 'crop-and-scale',
width: { ideal: document.body.clientWidth },
aspectRatio: document.body.clientWidth / document.body.clientHeight,
},
};
// enumerate devices for diag purposes
navigator.mediaDevices.enumerateDevices()
.then((devices) => log('enumerated devices:', devices))
.catch(() => log('mediaDevices error'));
log('camera constraints', constraints);
try {
stream = await navigator.mediaDevices.getUserMedia(constraints);
} catch (err) {
if (output) output.innerText += `\n${err.name}: ${err.message}`;
log('camera error:', err);
}
if (stream) {
const tracks = stream.getVideoTracks();
log('enumerated viable tracks:', tracks);
const track = stream.getVideoTracks()[0];
const settings = track.getSettings();
log('selected video source:', track, settings);
} else {
log('missing video stream');
}
const promise = !stream || new Promise((resolve) => {
video.onloadeddata = () => {
canvas.style.height = '100vh';
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
video.play();
resolve(true);
};
});
// attach input to video element
if (stream && video) video.srcObject = stream;
return promise;
}
async function startWorkers() {
if (!workers.face) workers.face = new Worker(workerJS);
if (!workers.body) workers.body = new Worker(workerJS);
if (!workers.hand) workers.hand = new Worker(workerJS);
if (!workers.object) workers.object = new Worker(workerJS);
workers.face.onmessage = receiveMessage;
workers.body.onmessage = receiveMessage;
workers.hand.onmessage = receiveMessage;
workers.object.onmessage = receiveMessage;
}
async function main() {
if (typeof Worker === 'undefined' || typeof OffscreenCanvas === 'undefined') {
return;
}
human = new Human(config.main);
const div = document.getElementById('log');
if (div) div.innerText = `Human: version ${human.version}`;
await startWorkers();
await setupCamera();
runDetection();
drawResults();
}
window.onload = main;


@ -0,0 +1,85 @@
/**
* Human demo for NodeJS
*
* Used by node-multiprocess.js as an on-demand started worker process
* Receives messages from parent process and sends results
*/
const fs = require('fs');
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
// workers actually import tfjs and human modules
const tf = require('@tensorflow/tfjs-node'); // eslint-disable-line node/no-unpublished-require
const Human = require('../../dist/human.node.js').default; // or const Human = require('../dist/human.node-gpu.js').default;
let human = null;
const myConfig = {
// backend: 'tensorflow',
modelBasePath: 'file://models/',
debug: false,
async: true,
face: {
enabled: true,
detector: { enabled: true, rotation: false },
mesh: { enabled: true },
iris: { enabled: true },
description: { enabled: true },
emotion: { enabled: true },
},
hand: {
enabled: true,
},
// body: { modelPath: 'blazepose.json', enabled: true },
body: { enabled: true },
object: { enabled: true },
};
// read image from a file and create tensor to be used by human
// this way we don't need any monkey patches
// you can add any pre-processing here such as resizing, etc.
async function image(img) {
const buffer = fs.readFileSync(img);
const tensor = human.tf.tidy(() => human.tf.node.decodeImage(buffer).toFloat().expandDims());
return tensor;
}
// actual human detection
async function detect(img) {
const tensor = await image(img);
const result = await human.detect(tensor);
if (process.send) { // check if ipc exists
process.send({ image: img, detected: result }); // send results back to main
process.send({ ready: true }); // send signal back to main that this worker is now idle and ready for next image
}
tf.dispose(tensor);
}
async function main() {
process.on('unhandledRejection', (err) => {
// @ts-ignore // no idea if exception message is complete
log.error(err?.message || err || 'no error message');
});
// on worker start first initialize message handler so we don't miss any messages
process.on('message', (msg) => {
// if main told worker to exit
if (msg.exit && process.exit) process.exit(); // eslint-disable-line no-process-exit
if (msg.test && process.send) process.send({ test: true });
if (msg.image) detect(msg.image); // if main told worker to process image
log.data('Worker received message:', process.pid, msg); // generic log
});
// create instance of human
human = new Human(myConfig);
// wait until tf is ready
await human.tf.ready();
// pre-load models
log.state('Worker: PID:', process.pid, `TensorFlow/JS ${human.tf.version['tfjs-core']} Human ${human.version} Backend: ${human.tf.getBackend()}`);
await human.load();
// now we're ready, so send message back to main so it knows it can use this worker
if (process.send) process.send({ ready: true });
}
main();


@ -0,0 +1,97 @@
/**
* Human demo for NodeJS
*
* Uses NodeJS fork functionality with inter-processing-messaging
* Starts a pool of worker processes and dispatch work items to each worker when they are available
* Uses node-multiprocess-worker.js for actual processing
*/
const fs = require('fs');
const path = require('path');
const childProcess = require('child_process'); // eslint-disable-line camelcase
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
// note that main process does not import human or tfjs at all, it's all done from worker process
const workerFile = 'demo/multithread/node-multiprocess-worker.js';
const imgPathRoot = './samples/in'; // modify to include your sample images
const numWorkers = 4; // how many workers will be started
const workers = []; // this holds worker processes
const images = []; // this holds queue of enumerated images
const t = []; // timers
let numImages;
// triggered by main when worker sends a ready message
// if the image queue is empty, signal worker to exit; otherwise dispatch the next image to the worker and remove it from the queue
async function submitDetect(worker) {
if (!t[2]) t[2] = process.hrtime.bigint(); // first time do a timestamp so we can measure initial latency
if (images.length === numImages) worker.send({ test: true }); // for first image in queue just measure latency
if (images.length === 0) worker.send({ exit: true }); // nothing left in queue
else {
log.state('Main: dispatching to worker:', worker.pid);
worker.send({ image: images[0] });
images.shift();
}
}
// loop that waits for all workers to complete
function waitCompletion() {
const activeWorkers = workers.reduce((any, worker) => (any += worker.connected ? 1 : 0), 0);
if (activeWorkers > 0) setImmediate(() => waitCompletion());
else {
t[1] = process.hrtime.bigint();
log.info('Processed:', numImages, 'images in', 'total:', Math.trunc(Number(t[1] - t[0]) / 1000000), 'ms', 'working:', Math.trunc(Number(t[1] - t[2]) / 1000000), 'ms', 'average:', Math.trunc(Number(t[1] - t[2]) / numImages / 1000000), 'ms');
}
}
function measureLatency() {
t[3] = process.hrtime.bigint();
const latencyInitialization = Math.trunc(Number(t[2] - t[0]) / 1000 / 1000);
const latencyRoundTrip = Math.trunc(Number(t[3] - t[2]) / 1000 / 1000);
log.info('Latency: worker initialization:', latencyInitialization, 'message round trip:', latencyRoundTrip);
}
async function main() {
process.on('unhandledRejection', (err) => {
// @ts-ignore // no idea if exception message is complete
log.error(err?.message || err || 'no error message');
});
log.header();
log.info('Human multi-process test');
// enumerate all images into queue
const dir = fs.readdirSync(imgPathRoot);
for (const imgFile of dir) {
if (imgFile.toLocaleLowerCase().endsWith('.jpg')) images.push(path.join(imgPathRoot, imgFile));
}
numImages = images.length;
log.state('Enumerated images:', imgPathRoot, numImages);
t[0] = process.hrtime.bigint();
t[1] = process.hrtime.bigint();
t[2] = process.hrtime.bigint();
// manage worker processes
for (let i = 0; i < numWorkers; i++) {
// create worker process
workers[i] = childProcess.fork(workerFile, ['special']); // fork is synchronous and returns the child process handle
// parse message that worker process sends back to main
// if message is ready, dispatch next image in queue
// if message is processing result, just print how many faces were detected
// otherwise it's an unknown message
workers[i].on('message', (msg) => {
if (msg.ready) submitDetect(workers[i]);
else if (msg.image) log.data('Main: worker finished:', workers[i].pid, 'detected faces:', msg.detected.face?.length, 'bodies:', msg.detected.body?.length, 'hands:', msg.detected.hand?.length, 'objects:', msg.detected.object?.length);
else if (msg.test) measureLatency();
else log.data('Main: worker message:', workers[i].pid, msg);
});
// just log when worker exits
workers[i].on('exit', (msg) => log.state('Main: worker exit:', workers[i].pid, msg));
// just log which worker was started
log.state('Main: started worker:', workers[i].pid);
}
// wait for all workers to complete
waitCompletion();
}
main();


@ -0,0 +1,18 @@
/// <reference lib="webworker" />
// load Human using IIFE script as Chrome Mobile does not support Modules as Workers
self.importScripts('../../dist/human.js'); // eslint-disable-line no-restricted-globals
let human;
onmessage = async (msg) => {
// received from index.js using:
// worker.postMessage({ image: image.data.buffer, width: canvas.width, height: canvas.height, config }, [image.data.buffer]);
// Human is registered as global namespace using IIFE script
if (!human) human = new Human.default(msg.data.config); // eslint-disable-line no-undef, new-cap
const image = new ImageData(new Uint8ClampedArray(msg.data.image), msg.data.width, msg.data.height);
let result = {};
result = await human.detect(image, msg.data.config);
postMessage({ result: result[msg.data.type], type: msg.data.type });
};

121
demo/nodejs/README.md Normal file

@ -0,0 +1,121 @@
# Human Demos for NodeJS
- `node`: Process images from files, folders or URLs
uses native methods for image loading and decoding without external dependencies
- `node-canvas`: Process image from file or URL and draw results to a new image file using `node-canvas`
uses `node-canvas` library to load and decode images from files, draw detection results and write output to a new image file
- `node-video`: Processing of video input using `ffmpeg`
uses `ffmpeg` to decode video input (can be a file, stream or device such as a webcam) and
outputs frames to a pipe, where they are captured by the demo app and processed by the `Human` library
- `node-webcam`: Processing of webcam screenshots using `fswebcam`
uses `fswebcam` to connect to a webcam and take screenshots at regular intervals, which are then processed by the `Human` library
- `node-event`: Showcases usage of `Human` eventing to get notifications on processing
- `node-similarity`: Compares two input images for similarity of detected faces
- `process-folder`: Processes all images in an input folder and creates output images
internally used to generate the samples gallery
<br>
## Main Demo
`nodejs/node.js`: Demo using NodeJS with CommonJS module
Simple demo that can process any input image
Note that you can run the demo as-is and it will perform detection on the provided sample images,
or you can pass a path to an image to analyze, either on the local filesystem or via URL (see the example after the sample output below)
```shell
node demo/nodejs/node.js
```
<!-- eslint-skip -->
```js
2021-06-01 08:52:15 INFO: @vladmandic/human version 2.0.0
2021-06-01 08:52:15 INFO: User: vlado Platform: linux Arch: x64 Node: v16.0.0
2021-06-01 08:52:15 INFO: Current folder: /home/vlado/dev/human
2021-06-01 08:52:15 INFO: Human: 2.0.0
2021-06-01 08:52:15 INFO: Active Configuration {
backend: 'tensorflow',
modelBasePath: 'file://models/',
wasmPath: '../node_modules/@tensorflow/tfjs-backend-wasm/dist/',
debug: true,
async: false,
warmup: 'full',
cacheSensitivity: 0.75,
filter: {
enabled: true,
width: 0,
height: 0,
flip: true,
return: true,
brightness: 0,
contrast: 0,
sharpness: 0,
blur: 0,
saturation: 0,
hue: 0,
negative: false,
sepia: false,
vintage: false,
kodachrome: false,
technicolor: false,
polaroid: false,
pixelate: 0
},
gesture: { enabled: true },
face: {
enabled: true,
detector: { modelPath: 'blazeface.json', rotation: false, maxDetected: 10, skipFrames: 15, minConfidence: 0.2, iouThreshold: 0.1, return: false, enabled: true },
mesh: { enabled: true, modelPath: 'facemesh.json' },
iris: { enabled: true, modelPath: 'iris.json' },
description: { enabled: true, modelPath: 'faceres.json', skipFrames: 16, minConfidence: 0.1 },
emotion: { enabled: true, minConfidence: 0.1, skipFrames: 17, modelPath: 'emotion.json' }
},
body: { enabled: true, modelPath: 'movenet-lightning.json', maxDetected: 1, minConfidence: 0.2 },
hand: {
enabled: true,
rotation: true,
skipFrames: 18,
minConfidence: 0.1,
iouThreshold: 0.1,
maxDetected: 2,
landmarks: true,
detector: { modelPath: 'handdetect.json' },
skeleton: { modelPath: 'handskeleton.json' }
},
object: { enabled: true, modelPath: 'centernet.json', minConfidence: 0.2, iouThreshold: 0.4, maxDetected: 10, skipFrames: 19 }
}
08:52:15.673 Human: version: 2.0.0
08:52:15.674 Human: tfjs version: 3.6.0
08:52:15.674 Human: platform: linux x64
08:52:15.674 Human: agent: NodeJS v16.0.0
08:52:15.674 Human: setting backend: tensorflow
08:52:15.710 Human: load model: file://models/blazeface.json
08:52:15.743 Human: load model: file://models/facemesh.json
08:52:15.744 Human: load model: file://models/iris.json
08:52:15.760 Human: load model: file://models/emotion.json
08:52:15.847 Human: load model: file://models/handdetect.json
08:52:15.847 Human: load model: file://models/handskeleton.json
08:52:15.914 Human: load model: file://models/movenet-lightning.json
08:52:15.957 Human: load model: file://models/centernet.json
08:52:16.015 Human: load model: file://models/faceres.json
08:52:16.015 Human: tf engine state: 50796152 bytes 1318 tensors
2021-06-01 08:52:16 INFO: Loaded: [ 'face', 'movenet', 'handpose', 'emotion', 'centernet', 'faceres', [length]: 6 ]
2021-06-01 08:52:16 INFO: Memory state: { unreliable: true, numTensors: 1318, numDataBuffers: 1318, numBytes: 50796152 }
2021-06-01 08:52:16 INFO: Loading image: private/daz3d/daz3d-kiaria-02.jpg
2021-06-01 08:52:16 STATE: Processing: [ 1, 1300, 1000, 3, [length]: 4 ]
2021-06-01 08:52:17 DATA: Results:
2021-06-01 08:52:17 DATA: Face: #0 boxScore:0.88 faceScore:1 age:16.3 genderScore:0.97 gender:female emotionScore:0.85 emotion:happy iris:61.05
2021-06-01 08:52:17 DATA: Body: #0 score:0.82 keypoints:17
2021-06-01 08:52:17 DATA: Hand: #0 score:0.89
2021-06-01 08:52:17 DATA: Hand: #1 score:0.97
2021-06-01 08:52:17 DATA: Gesture: face#0 gesture:facing left
2021-06-01 08:52:17 DATA: Gesture: body#0 gesture:leaning right
2021-06-01 08:52:17 DATA: Gesture: hand#0 gesture:pinky forward middlefinger up
2021-06-01 08:52:17 DATA: Gesture: hand#1 gesture:pinky forward middlefinger up
2021-06-01 08:52:17 DATA: Gesture: iris#0 gesture:looking left
2021-06-01 08:52:17 DATA: Object: #0 score:0.55 label:person
2021-06-01 08:52:17 DATA: Object: #1 score:0.23 label:bottle
2021-06-01 08:52:17 DATA: Persons:
2021-06-01 08:52:17 DATA: #0: Face:score:1 age:16.3 gender:female iris:61.05 Body:score:0.82 keypoints:17 LeftHand:no RightHand:yes Gestures:4
```
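To analyze a specific image instead of the bundled samples, pass a local path or URL; the path below assumes the repository's `samples/in` folder:

```shell
node demo/nodejs/node.js samples/in/ai-body.jpg
```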

66
demo/nodejs/node-bench.js Normal file

@ -0,0 +1,66 @@
/**
* Human simple demo for NodeJS
*/
const childProcess = require('child_process'); // eslint-disable-line camelcase
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
const canvas = require('canvas'); // eslint-disable-line node/no-unpublished-require
const config = {
cacheSensitivity: 0.01,
wasmPlatformFetch: true,
modelBasePath: 'https://vladmandic.github.io/human-models/models/',
};
const count = 10;
async function loadImage(input) {
const inputImage = await canvas.loadImage(input);
const inputCanvas = new canvas.Canvas(inputImage.width, inputImage.height);
const inputCtx = inputCanvas.getContext('2d');
inputCtx.drawImage(inputImage, 0, 0);
const imageData = inputCtx.getImageData(0, 0, inputCanvas.width, inputCanvas.height);
process.send({ input, resolution: [inputImage.width, inputImage.height] });
return imageData;
}
async function runHuman(module, backend) {
if (backend === 'wasm') require('@tensorflow/tfjs-backend-wasm'); // eslint-disable-line node/no-unpublished-require, global-require
const Human = require('../../dist/' + module); // eslint-disable-line global-require, import/no-dynamic-require
config.backend = backend;
const human = new Human.Human(config);
human.env.Canvas = canvas.Canvas;
human.env.Image = canvas.Image;
human.env.ImageData = canvas.ImageData;
process.send({ human: human.version, module });
await human.init();
process.send({ desired: human.config.backend, wasm: human.env.wasm, tfjs: human.tf.version.tfjs, tensorflow: human.env.tensorflow });
const imageData = await loadImage('samples/in/ai-body.jpg');
const t0 = human.now();
await human.load();
const t1 = human.now();
await human.warmup();
const t2 = human.now();
for (let i = 0; i < count; i++) await human.detect(imageData);
const t3 = human.now();
process.send({ backend: human.tf.getBackend(), load: Math.round(t1 - t0), warmup: Math.round(t2 - t1), detect: Math.round(t3 - t2), count, memory: human.tf.memory().numBytes });
}
async function executeWorker(args) {
return new Promise((resolve) => {
const worker = childProcess.fork(process.argv[1], args);
worker.on('message', (msg) => log.data(msg));
worker.on('exit', () => resolve(true));
});
}
async function main() {
if (process.argv[2]) {
await runHuman(process.argv[2], process.argv[3]);
} else {
await executeWorker(['human.node.js', 'tensorflow']);
await executeWorker(['human.node-gpu.js', 'tensorflow']);
await executeWorker(['human.node-wasm.js', 'wasm']);
}
}
main();
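Run without arguments and the script forks itself once per module/backend pair listed in `main()`, benchmarking `human.node.js`, `human.node-gpu.js` and `human.node-wasm.js` in sequence:

```shell
node demo/nodejs/node-bench.js
```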


@ -0,0 +1,82 @@
/**
* Human demo for NodeJS using Canvas library
*
* Requires [canvas](https://www.npmjs.com/package/canvas) to provide Canvas functionality in NodeJS environment
*/
const fs = require('fs');
const process = require('process');
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
// in nodejs environments tfjs-node is required to be loaded before human
const tf = require('@tensorflow/tfjs-node'); // eslint-disable-line node/no-unpublished-require
const canvas = require('canvas'); // eslint-disable-line node/no-unpublished-require
// const human = require('@vladmandic/human'); // use this when human is installed as module (majority of use cases)
const Human = require('../../dist/human.node.js'); // use this when using human in dev mode
const config = { // just enable all and leave default settings
debug: false,
face: { enabled: true, detector: { maxDetected: 10 } }, // includes mesh, iris, emotion, descriptor
hand: { enabled: true, maxDetected: 20, minConfidence: 0.5, detector: { modelPath: 'handtrack.json' } }, // use alternative hand model
body: { enabled: true },
object: { enabled: true },
gestures: { enabled: true },
};
async function main() {
log.header();
globalThis.Canvas = canvas.Canvas; // patch global namespace with canvas library
globalThis.ImageData = canvas.ImageData; // patch global namespace with canvas library
// human.env.Canvas = canvas.Canvas; // alternatively monkey-patch human to use external canvas library
// human.env.ImageData = canvas.ImageData; // alternatively monkey-patch human to use external canvas library
// init
const human = new Human.Human(config); // create instance of human
log.info('Human:', human.version, 'TF:', tf.version_core);
await human.load(); // pre-load models
log.info('Loaded models:', human.models.loaded());
log.info('Memory state:', human.tf.engine().memory());
// parse cmdline
const input = process.argv[2];
let output = process.argv[3];
if (process.argv.length !== 4) log.error('Parameters: <input-image> <output-image> missing');
else if (!fs.existsSync(input) && !input.startsWith('http')) log.error(`File not found: ${process.argv[2]}`);
else {
// everything seems ok, so normalize the output extension now that we know the argument exists
if (!output.toLowerCase().endsWith('.jpg')) output += '.jpg';
const inputImage = await canvas.loadImage(input); // load image using canvas library
log.info('Loaded image', input, inputImage.width, inputImage.height);
const inputCanvas = new canvas.Canvas(inputImage.width, inputImage.height); // create canvas
const inputCtx = inputCanvas.getContext('2d');
inputCtx.drawImage(inputImage, 0, 0); // draw input image onto canvas
const imageData = inputCtx.getImageData(0, 0, inputCanvas.width, inputCanvas.height);
// run detection
const result = await human.detect(imageData);
// print results summary
const persons = result.persons; // invoke persons getter, only used to print summary on console
for (let i = 0; i < persons.length; i++) {
const face = persons[i].face;
const faceTxt = face ? `score:${face.score} age:${face.age} gender:${face.gender} iris:${face.iris}` : null;
const body = persons[i].body;
const bodyTxt = body ? `score:${body.score} keypoints:${body.keypoints.length}` : null;
log.data(`Detected: #${i}: Face:${faceTxt} Body:${bodyTxt} LeftHand:${persons[i].hands.left ? 'yes' : 'no'} RightHand:${persons[i].hands.right ? 'yes' : 'no'} Gestures:${persons[i].gestures.length}`);
}
// draw detected results onto canvas and save it to a file
const outputCanvas = new canvas.Canvas(inputImage.width, inputImage.height); // create canvas
const outputCtx = outputCanvas.getContext('2d');
outputCtx.drawImage(result.canvas || inputImage, 0, 0); // draw input image onto canvas
human.draw.all(outputCanvas, result); // use human built-in method to draw results as overlays on canvas
const outFile = fs.createWriteStream(output); // write canvas to new image file
outFile.on('finish', () => log.state('Output image:', output, outputCanvas.width, outputCanvas.height));
outFile.on('error', (err) => log.error('Output error:', output, err));
const stream = outputCanvas.createJPEGStream({ quality: 0.5, progressive: true, chromaSubsampling: true });
stream.pipe(outFile);
}
}
main();
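A usage sketch matching the parameter check above; the input and output paths are placeholders:

```shell
node demo/nodejs/node-canvas.js samples/in/ai-body.jpg /tmp/ai-body-out.jpg
```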

95
demo/nodejs/node-event.js Normal file

@ -0,0 +1,95 @@
/**
* Human demo for NodeJS
*/
const fs = require('fs');
const process = require('process');
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
// in nodejs environments tfjs-node is required to be loaded before human
const tf = require('@tensorflow/tfjs-node'); // eslint-disable-line node/no-unpublished-require
// const human = require('@vladmandic/human'); // use this when human is installed as module (majority of use cases)
const Human = require('../../dist/human.node.js'); // use this when using human in dev mode
let human = null;
const myConfig = {
modelBasePath: 'file://models/',
debug: false,
async: true,
filter: { enabled: false },
face: {
enabled: true,
detector: { enabled: true },
mesh: { enabled: true },
iris: { enabled: true },
description: { enabled: true },
emotion: { enabled: true },
},
hand: { enabled: true },
body: { enabled: true },
object: { enabled: true },
};
async function detect(input) {
// read input image from file or url into buffer
let buffer;
log.info('Loading image:', input);
if (input.startsWith('http:') || input.startsWith('https:')) {
const res = await fetch(input);
if (res && res.ok) buffer = Buffer.from(await res.arrayBuffer());
else log.error('Invalid image URL:', input, res.status, res.statusText, res.headers.get('content-type'));
} else {
buffer = fs.readFileSync(input);
}
log.data('Image bytes:', buffer?.length, 'buffer:', buffer?.slice(0, 32));
// decode image using tfjs-node so we don't need external dependencies
if (!buffer) return;
const tensor = human.tf.node.decodeImage(buffer, 3);
// run detection
await human.detect(tensor, myConfig);
human.tf.dispose(tensor); // dispose image tensor as we no longer need it
}
async function main() {
log.header();
human = new Human.Human(myConfig);
log.info('Human:', human.version, 'TF:', tf.version_core);
if (human.events) {
human.events.addEventListener('warmup', () => {
log.info('Event Warmup');
});
human.events.addEventListener('load', () => {
log.info('Event Loaded:', human.models.loaded(), human.tf.engine().memory());
});
human.events.addEventListener('image', () => {
log.info('Event Image:', human.process.tensor.shape);
});
human.events.addEventListener('detect', () => {
log.data('Event Detected:');
const persons = human.result.persons;
for (let i = 0; i < persons.length; i++) {
const face = persons[i].face;
const faceTxt = face ? `score:${face.score} age:${face.age} gender:${face.gender} iris:${face.distance}` : null;
const body = persons[i].body;
const bodyTxt = body ? `score:${body.score} keypoints:${body.keypoints?.length}` : null;
log.data(` #${i}: Face:${faceTxt} Body:${bodyTxt} LeftHand:${persons[i].hands.left ? 'yes' : 'no'} RightHand:${persons[i].hands.right ? 'yes' : 'no'} Gestures:${persons[i].gestures.length}`);
}
});
}
await human.tf.ready(); // wait until tf is ready
const input = process.argv[2]; // process input
if (input) await detect(input);
else log.error('Missing <input>');
}
main();

30
demo/nodejs/node-fetch.js Normal file
View File

@ -0,0 +1,30 @@
/**
* Human demo for NodeJS using http fetch to get image file
*
* Requires [node-fetch](https://www.npmjs.com/package/node-fetch) to provide `fetch` functionality in NodeJS environment
*/
const fs = require('fs');
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
// in nodejs environments tfjs-node is required to be loaded before human
const tf = require('@tensorflow/tfjs-node'); // eslint-disable-line node/no-unpublished-require
// const human = require('@vladmandic/human'); // use this when human is installed as module (majority of use cases)
const Human = require('../../dist/human.node.js'); // use this when using human in dev mode
const humanConfig = {
modelBasePath: 'https://vladmandic.github.io/human/models/',
};
async function main(inputFile) {
global.fetch = (await import('node-fetch')).default; // eslint-disable-line node/no-unpublished-import, import/no-unresolved, node/no-missing-import, node/no-extraneous-import
const human = new Human.Human(humanConfig); // create instance of human using default configuration
log.info('Human:', human.version, 'TF:', tf.version_core);
await human.load(); // optional as models would be loaded on-demand first time they are required
await human.warmup(); // optional as model warmup is performed on-demand first time it's executed
const buffer = fs.readFileSync(inputFile); // read file data into buffer
const tensor = human.tf.node.decodeImage(buffer); // decode jpg data
const result = await human.detect(tensor); // run detection; will initialize backend and on-demand load models
log.data(result.gesture);
}
main('samples/in/ai-body.jpg');

View File

@ -0,0 +1,64 @@
/**
* Human Person Similarity test for NodeJS
*/
const fs = require('fs');
const process = require('process');
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
// in nodejs environments tfjs-node is required to be loaded before human
const tf = require('@tensorflow/tfjs-node'); // eslint-disable-line node/no-unpublished-require
// const human = require('@vladmandic/human'); // use this when human is installed as module (majority of use cases)
const Human = require('../../dist/human.node.js'); // use this when using human in dev mode
let human = null;
const myConfig = {
modelBasePath: 'file://models/',
debug: true,
face: { emotion: { enabled: false } },
body: { enabled: false },
hand: { enabled: false },
gesture: { enabled: false },
};
async function init() {
human = new Human.Human(myConfig);
await human.tf.ready();
log.info('Human:', human.version, 'TF:', tf.version_core);
await human.load();
log.info('Loaded:', human.models.loaded());
log.info('Memory state:', human.tf.engine().memory());
}
async function detect(input) {
if (!fs.existsSync(input)) {
throw new Error(`Cannot load image: ${input}`); // Error constructor takes a single message string
}
const buffer = fs.readFileSync(input);
const tensor = human.tf.node.decodeImage(buffer, 3);
log.state('Loaded image:', input, tensor.shape);
const result = await human.detect(tensor, myConfig);
human.tf.dispose(tensor);
log.state('Detected faces:', result.face.length);
return result;
}
async function main() {
log.configure({ inspect: { breakLength: 265 } });
log.header();
if (process.argv.length !== 4) {
log.error('Parameters: <first image> <second image> missing');
return;
}
await init();
const res1 = await detect(process.argv[2]);
const res2 = await detect(process.argv[3]);
if (!res1 || !res1.face || res1.face.length === 0 || !res2 || !res2.face || res2.face.length === 0) {
throw new Error('Could not detect face descriptors');
}
const similarity = human.match.similarity(res1.face[0].embedding, res2.face[0].embedding, { order: 2 });
log.data('Similarity: ', similarity);
}
main();

View File

@ -0,0 +1,32 @@
/**
* Human simple demo for NodeJS
*/
const fs = require('fs');
const process = require('process');
// in nodejs environments tfjs-node is required to be loaded before human
const tf = require('@tensorflow/tfjs-node'); // eslint-disable-line node/no-unpublished-require
// const human = require('@vladmandic/human'); // use this when human is installed as module (majority of use cases)
const Human = require('../../dist/human.node.js'); // use this when using human in dev mode
const humanConfig = {
// add any custom config here
debug: true,
body: { enabled: false },
};
async function detect(inputFile) {
const human = new Human.Human(humanConfig); // create instance of human using default configuration
console.log('Human:', human.version, 'TF:', tf.version_core); // eslint-disable-line no-console
await human.load(); // optional as models would be loaded on-demand first time they are required
await human.warmup(); // optional as model warmup is performed on-demand first time it's executed
const buffer = fs.readFileSync(inputFile); // read file data into buffer
const tensor = human.tf.node.decodeImage(buffer); // decode jpg data
console.log('loaded input file:', inputFile, 'resolution:', tensor.shape); // eslint-disable-line no-console
const result = await human.detect(tensor); // run detection; will initialize backend and on-demand load models
console.log(result); // eslint-disable-line no-console
}
if (process.argv.length === 3) detect(process.argv[2]); // if input file is provided as cmdline parameter use it
else detect('samples/in/ai-body.jpg'); // else use built-in test input file

91
demo/nodejs/node-video.js Normal file
View File

@ -0,0 +1,91 @@
/**
* Human demo for NodeJS
 * Unsupported sample of using external utility ffmpeg to decode video input and process it using Human
 *
 * Uses ffmpeg to process video input and output a stream of motion jpeg images which are then parsed for frame start/end markers by pipe2jpeg
 * Each frame triggers an event with a jpeg buffer that can then be decoded and passed to human for processing
 * If you want to process at specific intervals, set output fps to some value
 * If you want to process an input stream, set the real-time flag and set the input as required
 *
 * Note that [pipe2jpeg](https://www.npmjs.com/package/pipe2jpeg) is not part of Human dependencies and should be installed manually
 * Working version of `ffmpeg` must be present on the system
 */
*/
const process = require('process');
const spawn = require('child_process').spawn;
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
// in nodejs environments tfjs-node is required to be loaded before human
// const tf = require('@tensorflow/tfjs-node'); // eslint-disable-line node/no-unpublished-require
const Pipe2Jpeg = require('pipe2jpeg'); // eslint-disable-line node/no-missing-require, import/no-unresolved
// const human = require('@vladmandic/human'); // use this when human is installed as module (majority of use cases)
const Human = require('../../dist/human.node.js'); // use this when using human in dev mode
let count = 0; // counter
let busy = false; // busy flag
let inputFile = './test.mp4';
if (process.argv.length === 3) inputFile = process.argv[2];
const humanConfig = {
modelBasePath: 'file://models/',
debug: false,
async: true,
filter: { enabled: false },
face: {
enabled: true,
detector: { enabled: true, rotation: false },
mesh: { enabled: true },
iris: { enabled: true },
description: { enabled: true },
emotion: { enabled: true },
},
hand: { enabled: false },
body: { enabled: false },
object: { enabled: false },
};
const human = new Human.Human(humanConfig);
const pipe2jpeg = new Pipe2Jpeg();
const ffmpegParams = [
'-loglevel', 'quiet',
// input
// '-re', // optional process video in real-time not as fast as possible
'-i', `${inputFile}`, // input file
// output
'-an', // drop audio
'-c:v', 'mjpeg', // use motion jpeg as output encoder
'-pix_fmt', 'yuvj422p', // typical for mp4, may need different settings for some videos
'-f', 'image2pipe', // pipe images as output
// '-vf', 'fps=5,scale=800:600', // optional video filter, do anything here such as process at fixed 5fps or resize to specific resolution
'pipe:1', // output to unix pipe that is then captured by pipe2jpeg
];
async function detect(jpegBuffer) {
if (busy) return; // skip processing if busy
busy = true;
const tensor = human.tf.node.decodeJpeg(jpegBuffer, 3); // decode jpeg buffer to raw tensor
const res = await human.detect(tensor);
human.tf.dispose(tensor); // must dispose tensor
// start custom processing here
log.data('frame', { frame: ++count, size: jpegBuffer.length, shape: tensor.shape, face: res?.face?.length, body: res?.body?.length, hand: res?.hand?.length, gesture: res?.gesture?.length });
if (res?.face?.[0]) log.data('person', { score: [res.face[0].boxScore, res.face[0].faceScore], age: res.face[0].age || 0, gender: [res.face[0].genderScore || 0, res.face[0].gender], emotion: res.face[0].emotion?.[0] });
// at the end of processing mark loop as not busy so it can process next frame
busy = false;
}
async function main() {
log.header();
await human.tf.ready();
// pre-load models
log.info({ human: human.version, tf: human.tf.version_core });
log.info({ input: inputFile });
pipe2jpeg.on('data', (jpegBuffer) => detect(jpegBuffer));
const ffmpeg = spawn('ffmpeg', ffmpegParams, { stdio: ['ignore', 'pipe', 'ignore'] });
ffmpeg.on('error', (error) => log.error('ffmpeg error:', error));
ffmpeg.on('exit', (code, signal) => log.info('ffmpeg exit', code, signal));
ffmpeg.stdout.pipe(pipe2jpeg);
}
main();

View File

@ -0,0 +1,94 @@
/**
* Human demo for NodeJS
 * Unsupported sample of using external utility fswebcam to capture screenshots from an attached webcam at regular intervals and process them using Human
 *
 * Note that [node-webcam](https://www.npmjs.com/package/node-webcam) is not part of Human dependencies and should be installed manually
 * Working version of `fswebcam` must be present on the system
 */
*/
let initial = true; // remember if this is the first run to print additional details
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
const nodeWebCam = require('node-webcam'); // eslint-disable-line import/no-unresolved, node/no-missing-require
// in nodejs environments tfjs-node is required to be loaded before human
const tf = require('@tensorflow/tfjs-node'); // eslint-disable-line node/no-unpublished-require
// const human = require('@vladmandic/human'); // use this when human is installed as module (majority of use cases)
const Human = require('../../dist/human.node.js'); // use this when using human in dev mode
// options for node-webcam
const tempFile = 'webcam-snap'; // node-webcam requires writing the snapshot to a file; recommended to use tmpfs to avoid excessive disk writes
const optionsCamera = {
callbackReturn: 'buffer', // return the raw buffer exactly as `fswebcam` writes it to disk, with no additional processing, so it's fastest
saveShots: false, // don't keep the processed frame on disk; note that the temp file is still created by fswebcam, hence the tmpfs recommendation
};
const camera = nodeWebCam.create(optionsCamera);
// options for human
const optionsHuman = {
modelBasePath: 'file://models/',
};
const human = new Human.Human(optionsHuman);
function buffer2tensor(buffer) {
return human.tf.tidy(() => {
if (!buffer) return null;
const decode = human.tf.node.decodeImage(buffer, 3);
let expand;
if (decode.shape[2] === 4) { // input is in rgba format, need to convert to rgb
const channels = human.tf.split(decode, 4, 2); // tf.split(tensor, 4, 2); // split rgba to channels
const rgb = human.tf.stack([channels[0], channels[1], channels[2]], 2); // stack channels back to rgb and ignore alpha
expand = human.tf.reshape(rgb, [1, decode.shape[0], decode.shape[1], 3]); // move extra dim from the end of tensor and use it as batch number instead
} else {
expand = human.tf.expandDims(decode, 0); // input is rgb so use as-is
}
const cast = human.tf.cast(expand, 'float32');
return cast;
});
}
async function detect() {
// trigger next frame every 5 sec
// triggered here before actual capture and detection since we assume it will complete in less than 5sec
// so it's as close as possible to real 5sec and not 5sec + detection time
// if there is a chance of race scenario where detection takes longer than loop trigger, then trigger should be at the end of the function instead
setTimeout(() => detect(), 5000);
camera.capture(tempFile, (err, data) => { // gets the (default) jpeg data from the webcam
if (err) {
log.error('error capturing webcam:', err);
} else {
const tensor = buffer2tensor(data); // create tensor from image buffer
if (initial) log.data('input tensor:', tensor.shape);
human.detect(tensor) // eslint-disable-line promise/no-promise-in-callback
.then((result) => {
if (result && result.face && result.face.length > 0) {
for (let i = 0; i < result.face.length; i++) {
const face = result.face[i];
const emotion = face.emotion?.reduce((prev, curr) => (prev.score > curr.score ? prev : curr));
log.data(`detected face: #${i} boxScore:${face.boxScore} faceScore:${face.faceScore} age:${face.age} genderScore:${face.genderScore} gender:${face.gender} emotionScore:${emotion?.score} emotion:${emotion?.emotion} iris:${face.iris}`);
}
} else {
log.data(' Face: N/A');
}
return result;
})
.catch(() => log.error('human detect error'));
}
initial = false;
});
// alternatively to triggering every 5 sec, simply trigger the next frame as fast as possible
// setImmediate(() => detect());
}
async function main() {
log.info('human:', human.version, 'tf:', tf.version_core);
camera.list((list) => {
log.data('detected camera:', list);
});
await human.load();
detect();
}
log.header();
main();

213
demo/nodejs/node.js Normal file
View File

@ -0,0 +1,213 @@
/**
* Human demo for NodeJS
*/
const fs = require('fs');
const path = require('path');
const process = require('process');
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
// in nodejs environments tfjs-node is required to be loaded before human
const tf = require('@tensorflow/tfjs-node'); // eslint-disable-line node/no-unpublished-require
// const human = require('@vladmandic/human'); // use this when human is installed as module (majority of use cases)
const Human = require('../../dist/human.node.js'); // use this when using human in dev mode
let human = null;
const myConfig = {
// backend: 'tensorflow',
modelBasePath: 'file://models/',
debug: true,
async: false,
filter: {
enabled: true,
flip: true,
},
face: {
enabled: true,
detector: { enabled: true, rotation: false },
mesh: { enabled: true },
iris: { enabled: true },
description: { enabled: true },
emotion: { enabled: true },
},
hand: {
enabled: true,
},
// body: { modelPath: 'blazepose.json', enabled: true },
body: { enabled: true },
object: { enabled: true },
};
async function init() {
// create instance of human
human = new Human.Human(myConfig);
// wait until tf is ready
await human.tf.ready();
log.info('human:', human.version, 'tf:', tf.version_core);
// pre-load models
log.info('Human:', human.version);
// log.info('Active Configuration', human.config);
await human.load();
log.info('Loaded:', human.models.loaded());
// log.info('Memory state:', human.tf.engine().memory());
log.data(tf.backend().binding ? tf.backend().binding.TF_Version : null);
}
async function detect(input) {
// read input image file and create tensor to be used for processing
let buffer;
log.info('Loading image:', input);
if (input.startsWith('http:') || input.startsWith('https:')) {
const res = await fetch(input);
if (res && res.ok) buffer = Buffer.from(await res.arrayBuffer());
else log.error('Invalid image URL:', input, res.status, res.statusText, res.headers.get('content-type'));
} else {
buffer = fs.readFileSync(input);
}
log.data('Image bytes:', buffer?.length, 'buffer:', buffer?.slice(0, 32));
// decode image using tfjs-node so we don't need external dependencies
// can also be done using canvas.js or some other 3rd party image library
if (!buffer) return {};
const tensor = human.tf.tidy(() => {
const decode = human.tf.node.decodeImage(buffer, 3);
let expand;
if (decode.shape[2] === 4) { // input is in rgba format, need to convert to rgb
const channels = human.tf.split(decode, 4, 2); // tf.split(tensor, 4, 2); // split rgba to channels
const rgb = human.tf.stack([channels[0], channels[1], channels[2]], 2); // stack channels back to rgb and ignore alpha
expand = human.tf.reshape(rgb, [1, decode.shape[0], decode.shape[1], 3]); // move extra dim from the end of tensor and use it as batch number instead
} else {
expand = human.tf.expandDims(decode, 0);
}
const cast = human.tf.cast(expand, 'float32');
return cast;
});
// image shape contains image dimensions and depth
log.state('Processing:', tensor.shape);
// run actual detection
let result;
try {
result = await human.detect(tensor, myConfig);
} catch (err) {
log.error('caught', err);
}
// dispose image tensor as we no longer need it
human.tf.dispose(tensor);
// print data to console
log.data('Results:');
if (result && result.face && result.face.length > 0) {
for (let i = 0; i < result.face.length; i++) {
const face = result.face[i];
const emotion = face.emotion.reduce((prev, curr) => (prev.score > curr.score ? prev : curr));
log.data(` Face: #${i} boxScore:${face.boxScore} faceScore:${face.faceScore} age:${face.age} genderScore:${face.genderScore} gender:${face.gender} emotionScore:${emotion.score} emotion:${emotion.emotion} distance:${face.distance}`);
}
} else {
log.data(' Face: N/A');
}
if (result && result.body && result.body.length > 0) {
for (let i = 0; i < result.body.length; i++) {
const body = result.body[i];
log.data(` Body: #${i} score:${body.score} keypoints:${body.keypoints?.length}`);
}
} else {
log.data(' Body: N/A');
}
if (result && result.hand && result.hand.length > 0) {
for (let i = 0; i < result.hand.length; i++) {
const hand = result.hand[i];
log.data(` Hand: #${i} score:${hand.score} keypoints:${hand.keypoints?.length}`);
}
} else {
log.data(' Hand: N/A');
}
if (result && result.gesture && result.gesture.length > 0) {
for (let i = 0; i < result.gesture.length; i++) {
const [key, val] = Object.entries(result.gesture[i]);
log.data(` Gesture: ${key[0]}#${key[1]} gesture:${val[1]}`);
}
} else {
log.data(' Gesture: N/A');
}
if (result && result.object && result.object.length > 0) {
for (let i = 0; i < result.object.length; i++) {
const object = result.object[i];
log.data(` Object: #${i} score:${object.score} label:${object.label}`);
}
} else {
log.data(' Object: N/A');
}
// print data to console
if (result) {
// invoke persons getter
const persons = result.persons;
// write result objects to file
// fs.writeFileSync('result.json', JSON.stringify(result, null, 2));
log.data('Persons:');
for (let i = 0; i < persons.length; i++) {
const face = persons[i].face;
const faceTxt = face ? `score:${face.score} age:${face.age} gender:${face.gender} iris:${face.iris}` : null;
const body = persons[i].body;
const bodyTxt = body ? `score:${body.score} keypoints:${body.keypoints?.length}` : null;
log.data(` #${i}: Face:${faceTxt} Body:${bodyTxt} LeftHand:${persons[i].hands.left ? 'yes' : 'no'} RightHand:${persons[i].hands.right ? 'yes' : 'no'} Gestures:${persons[i].gestures.length}`);
}
}
return result;
}
async function test() {
process.on('unhandledRejection', (err) => {
// @ts-ignore // no idea if exception message is complete
log.error(err?.message || err || 'no error message');
});
// test with embedded full body image
let result;
log.state('Processing embedded warmup image: face');
myConfig.warmup = 'face';
result = await human.warmup(myConfig);
log.state('Processing embedded warmup image: full');
myConfig.warmup = 'full';
result = await human.warmup(myConfig);
// no need to print results as they are printed to console during detection from within the library due to human.config.debug being set
return result;
}
async function main() {
log.configure({ inspect: { breakLength: 265 } });
log.header();
log.info('Current folder:', process.env.PWD);
await init();
const f = process.argv[2];
if (process.argv.length !== 3) {
log.warn('Parameters: <input image | folder> missing');
await test();
} else if (!fs.existsSync(f) && !f.startsWith('http')) {
log.error(`File not found: ${process.argv[2]}`);
} else if (fs.existsSync(f)) {
const stat = fs.statSync(f);
if (stat.isDirectory()) {
const dir = fs.readdirSync(f);
for (const file of dir) {
await detect(path.join(f, file));
}
} else {
await detect(f);
}
} else {
await detect(f);
}
}
main();

View File

@ -0,0 +1,119 @@
/**
* Human demo for NodeJS
*
* Takes input and output folder names parameters and processes all images
* found in input folder and creates annotated images in output folder
*
* Requires [canvas](https://www.npmjs.com/package/canvas) to provide Canvas functionality in NodeJS environment
*/
const fs = require('fs');
const path = require('path');
const process = require('process');
const log = require('@vladmandic/pilogger'); // eslint-disable-line node/no-unpublished-require
const canvas = require('canvas'); // eslint-disable-line node/no-unpublished-require
// for nodejs, `tfjs-node` or `tfjs-node-gpu` should be loaded before using Human
const tf = require('@tensorflow/tfjs-node-gpu'); // eslint-disable-line node/no-unpublished-require
const Human = require('../../dist/human.node-gpu.js'); // equivalent to: const Human = require('../dist/human.node-gpu.js').default;
const config = { // just enable all and leave default settings
modelBasePath: 'file://models',
debug: true,
softwareKernels: true, // slower but enhanced precision since face rotation can work in software mode in nodejs environments
cacheSensitivity: 0.01,
face: { enabled: true, detector: { maxDetected: 100, minConfidence: 0.1 } },
object: { enabled: true, maxDetected: 100, minConfidence: 0.1 },
gesture: { enabled: true },
hand: { enabled: true, maxDetected: 100, minConfidence: 0.2 },
body: { enabled: true, maxDetected: 100, minConfidence: 0.1, modelPath: 'https://vladmandic.github.io/human-models/models/movenet-multipose.json' },
};
const poolSize = 4;
const human = new Human.Human(config); // create instance of human
async function saveFile(shape, buffer, result, outFile) {
return new Promise(async (resolve, reject) => { // eslint-disable-line no-async-promise-executor
const outputCanvas = new canvas.Canvas(shape[2], shape[1]); // create canvas
const outputCtx = outputCanvas.getContext('2d');
const inputImage = await canvas.loadImage(buffer); // load image using canvas library
outputCtx.drawImage(inputImage, 0, 0); // draw input image onto canvas
human.draw.all(outputCanvas, result); // use human built-in method to draw results as overlays on canvas
const outStream = fs.createWriteStream(outFile); // write canvas to new image file
outStream.on('finish', () => {
log.data('Output image:', outFile, outputCanvas.width, outputCanvas.height);
resolve();
});
outStream.on('error', (err) => {
log.error('Output error:', outFile, err);
reject();
});
const stream = outputCanvas.createJPEGStream({ quality: 0.5, progressive: true, chromaSubsampling: true });
stream.pipe(outStream);
});
}
async function processFile(image, inFile, outFile) {
const buffer = fs.readFileSync(inFile);
const tensor = tf.tidy(() => {
const decode = tf.node.decodeImage(buffer, 3);
const expand = tf.expandDims(decode, 0);
const cast = tf.cast(expand, 'float32');
return cast;
});
log.state('Loaded image:', inFile, tensor.shape);
const result = await human.detect(tensor);
human.tf.dispose(tensor);
log.data(`Detected: ${image}:`, 'Face:', result.face.length, 'Body:', result.body.length, 'Hand:', result.hand.length, 'Objects:', result.object.length, 'Gestures:', result.gesture.length);
if (outFile) await saveFile(tensor.shape, buffer, result, outFile);
}
async function main() {
log.header();
globalThis.Canvas = canvas.Canvas; // patch global namespace with canvas library
globalThis.ImageData = canvas.ImageData; // patch global namespace with canvas library
log.info('Human:', human.version, 'TF:', tf.version_core);
const configErrors = await human.validate();
if (configErrors.length > 0) log.error('Configuration errors:', configErrors);
await human.load(); // pre-load models
log.info('Loaded models:', human.models.loaded());
const inDir = process.argv[2];
const outDir = process.argv[3];
if (!inDir) {
log.error('Parameters: <input-directory> missing');
return;
}
if (inDir && (!fs.existsSync(inDir) || !fs.statSync(inDir).isDirectory())) {
log.error('Invalid input directory:', inDir, fs.existsSync(inDir) ? 'not a directory' : 'does not exist');
return;
}
if (!outDir) {
log.info('Parameters: <output-directory> missing, images will not be saved');
}
if (outDir && (!fs.existsSync(outDir) || !fs.statSync(outDir).isDirectory())) {
log.error('Invalid output directory:', outDir, fs.existsSync(outDir) ? 'not a directory' : 'does not exist');
return;
}
const dir = fs.readdirSync(inDir);
const images = dir.filter((f) => fs.statSync(path.join(inDir, f)).isFile() && (f.toLocaleLowerCase().endsWith('.jpg') || f.toLocaleLowerCase().endsWith('.jpeg')));
log.info(`Processing folder: ${inDir} entries:`, dir.length, 'images', images.length);
const t0 = performance.now();
const promises = [];
for (let i = 0; i < images.length; i++) {
const inFile = path.join(inDir, images[i]);
const outFile = outDir ? path.join(outDir, images[i]) : null;
promises.push(processFile(images[i], inFile, outFile));
if (i % poolSize === 0) await Promise.all(promises);
}
await Promise.all(promises);
const t1 = performance.now();
log.info(`Processed ${images.length} images in ${Math.round(t1 - t0)} ms`);
}
main();

36
demo/offline.html Normal file
View File

@ -0,0 +1,36 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<title>Human: Offline</title>
<meta name="viewport" content="width=device-width, shrink-to-fit=yes">
<meta name="mobile-web-app-capable" content="yes">
<meta name="application-name" content="Human">
<meta name="keywords" content="Human">
<meta name="description" content="Human; Author: Vladimir Mandic <mandic00@live.com>">
<meta name="msapplication-tooltip" content="Human; Author: Vladimir Mandic <mandic00@live.com>">
<meta name="theme-color" content="#000000">
<link rel="manifest" href="manifest.webmanifest">
<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">
<link rel="icon" sizes="256x256" href="../assets/icon.png">
<link rel="apple-touch-icon" href="../assets/icon.png">
<link rel="apple-touch-startup-image" href="../assets/icon.png">
<style>
@font-face { font-family: 'Lato'; font-display: swap; font-style: normal; font-weight: 100; src: local('Lato'), url('../assets/lato-light.woff2') }
body { font-family: 'Lato', 'Segoe UI'; font-size: 16px; font-variant: small-caps; background: black; color: #ebebeb; }
h1 { font-size: 2rem; margin-top: 1.2rem; font-weight: bold; }
a { color: white; }
a:link { color: lightblue; text-decoration: none; }
a:hover { color: lightskyblue; text-decoration: none; }
.row { width: 90vw; margin: auto; margin-top: 100px; text-align: center; }
</style>
</head>
<body>
<div class="row text-center">
<h1>
<a href="/">Human: Offline</a><br>
<img alt="icon" src="../assets/icon.png">
</h1>
</div>
</body>
</html>

View File

@ -0,0 +1,61 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<title>Human Demo</title>
<meta name="viewport" content="width=device-width, shrink-to-fit=yes">
<meta name="mobile-web-app-capable" content="yes">
<meta name="application-name" content="Human Demo">
<meta name="keywords" content="Human Demo">
<meta name="description" content="Human Demo; Author: Vladimir Mandic <mandic00@live.com>">
<link rel="manifest" href="../manifest.webmanifest">
<link rel="shortcut icon" href="../favicon.ico" type="image/x-icon">
<link rel="icon" sizes="256x256" href="../assets/icons/dash-256.png">
<link rel="apple-touch-icon" href="../assets/icons/dash-256.png">
<link rel="apple-touch-startup-image" href="../assets/icons/dash-256.png">
<style>
@font-face { font-family: 'CenturyGothic'; font-display: swap; font-style: normal; font-weight: 400; src: local('CenturyGothic'), url('../assets/century-gothic.ttf') format('truetype'); }
html { font-size: 18px; }
body { font-size: 1rem; font-family: "CenturyGothic", "Segoe UI", sans-serif; font-variant: small-caps; width: -webkit-fill-available; height: 100%; background: black; color: white; overflow: hidden; margin: 0; }
select { font-size: 1rem; font-family: "CenturyGothic", "Segoe UI", sans-serif; font-variant: small-caps; background: gray; color: white; border: none; }
</style>
<script src="../segmentation/index.js" type="module"></script>
</head>
<body>
<noscript><h1>javascript is required</h1></noscript>
<nav>
<div id="nav" class="nav"></div>
</nav>
<header>
<div id="header" class="header" style="position: fixed; top: 0; right: 0; padding: 4px; margin: 16px; background: rgba(0, 0, 0, 0.5); z-index: 10; line-height: 2rem;">
<label for="mode">mode</label>
<select id="mode" name="mode">
<option value="default">remove background</option>
<option value="alpha">draw alpha channel</option>
<option value="foreground">full foreground</option>
<option value="state">recurrent state</option>
</select><br>
<label for="composite">composite</label>
<select id="composite" name="composite"></select><br>
<label for="ratio">downsample ratio</label>
<input type="range" name="ratio" id="ratio" min="0.1" max="1" value="0.5" step="0.05">
<div id="fps" style="margin-top: 8px"></div>
</div>
</header>
<main>
<div id="main" class="main">
<video id="webcam" style="position: fixed; top: 0; left: 0; width: 50vw; height: 50vh"></video>
<img id="background" alt="background" style="position: fixed; top: 0; right: 0; width: 50vw; height: 50vh" controls></img>
<canvas id="output" style="position: fixed; bottom: 0; left: 0; height: 50vh"></canvas>
<canvas id="merge" style="position: fixed; bottom: 0; right: 0; height: 50vh"></canvas>
</div>
</main>
<footer>
<div id="footer" class="footer"></div>
</footer>
<aside>
<div id="aside" class="aside"></div>
</aside>
</body>
</html>

View File

@ -0,0 +1,99 @@
/**
* Human demo for browsers
* @default Human Library
* @summary <https://github.com/vladmandic/human>
* @author <https://github.com/vladmandic>
* @copyright <https://github.com/vladmandic>
* @license MIT
*/
import * as H from '../../dist/human.esm.js'; // equivalent of @vladmandic/Human
const humanConfig = { // user configuration for human, used to fine-tune behavior
modelBasePath: 'https://vladmandic.github.io/human-models/models/',
filter: { enabled: true, equalization: false, flip: false },
face: { enabled: false },
body: { enabled: false },
hand: { enabled: false },
object: { enabled: false },
gesture: { enabled: false },
segmentation: {
enabled: true,
modelPath: 'rvm.json', // can use rvm, selfie or meet
ratio: 0.5,
mode: 'default',
},
};
const backgroundImage = '../../samples/in/background.jpg';
const human = new H.Human(humanConfig); // create instance of human with overrides from user configuration
const log = (...msg) => console.log(...msg); // eslint-disable-line no-console
async function main() {
// gather dom elements
const dom = {
background: document.getElementById('background'),
webcam: document.getElementById('webcam'),
output: document.getElementById('output'),
merge: document.getElementById('merge'),
mode: document.getElementById('mode'),
composite: document.getElementById('composite'),
ratio: document.getElementById('ratio'),
fps: document.getElementById('fps'),
};
// set defaults
dom.fps.innerText = 'initializing';
dom.ratio.valueAsNumber = human.config.segmentation.ratio;
dom.background.src = backgroundImage;
dom.composite.innerHTML = ['source-atop', 'color', 'color-burn', 'color-dodge', 'copy', 'darken', 'destination-atop', 'destination-in', 'destination-out', 'destination-over', 'difference', 'exclusion', 'hard-light', 'hue', 'lighten', 'lighter', 'luminosity', 'multiply', 'overlay', 'saturation', 'screen', 'soft-light', 'source-in', 'source-out', 'source-over', 'xor'].map((gco) => `<option value="${gco}">${gco}</option>`).join(''); // eslint-disable-line max-len
const ctxMerge = dom.merge.getContext('2d');
log('human version:', human.version, '| tfjs version:', human.tf.version['tfjs-core']);
log('platform:', human.env.platform, '| agent:', human.env.agent);
await human.load(); // preload all models
log('backend:', human.tf.getBackend(), '| available:', human.env.backends);
log('models stats:', human.models.stats());
log('models loaded:', human.models.loaded());
await human.warmup(); // warmup function to initialize backend for future faster detection
const numTensors = human.tf.engine().state.numTensors;
// initialize webcam
dom.webcam.onplay = () => { // start processing on video play
log('start processing');
dom.output.width = human.webcam.width;
dom.output.height = human.webcam.height;
dom.merge.width = human.webcam.width;
dom.merge.height = human.webcam.height;
loop(); // eslint-disable-line no-use-before-define
};
await human.webcam.start({ element: dom.webcam, crop: true, width: window.innerWidth / 2, height: window.innerHeight / 2 }); // use human webcam helper methods and associate webcam stream with a dom element
if (!human.webcam.track) dom.fps.innerText = 'webcam error';
// processing loop
async function loop() {
if (!human.webcam.element || human.webcam.paused) return; // check if webcam is valid and playing
human.config.segmentation.mode = dom.mode.value; // get segmentation mode from ui
human.config.segmentation.ratio = dom.ratio.valueAsNumber; // get segmentation downsample ratio from ui
const t0 = Date.now();
const rgba = await human.segmentation(human.webcam.element, human.config); // run model and process results
const t1 = Date.now();
if (!rgba) {
dom.fps.innerText = 'error';
return;
}
dom.fps.innerText = `fps: ${Math.round(10000 / (t1 - t0)) / 10}`; // mark performance
human.draw.tensor(rgba, dom.output); // draw raw output
human.tf.dispose(rgba); // dispose tensors
ctxMerge.globalCompositeOperation = 'source-over';
ctxMerge.drawImage(dom.background, 0, 0); // draw original video to first stacked canvas
ctxMerge.globalCompositeOperation = dom.composite.value;
ctxMerge.drawImage(dom.output, 0, 0); // draw processed output to second stacked canvas
if (numTensors !== human.tf.engine().state.numTensors) log({ leak: human.tf.engine().state.numTensors - numTensors }); // check for memory leaks
requestAnimationFrame(loop);
}
}
window.onload = main;

28
demo/tracker/README.md Normal file
View File

@ -0,0 +1,28 @@
## Tracker
### Based on
<https://github.com/opendatacam/node-moving-things-tracker>
### Build
- remove reference to `lodash`:
> `isEqual` in <tracker.js>
- replace external lib:
> curl https://raw.githubusercontent.com/ubilabs/kd-tree-javascript/master/kdTree.js -o lib/kdTree-min.js
- build with `esbuild`:
> node_modules/.bin/esbuild --bundle tracker.js --format=esm --platform=browser --target=esnext --keep-names --tree-shaking=false --analyze --outfile=/home/vlado/dev/human/demo/tracker/tracker.js --banner:js="/* eslint-disable */"
### Usage
computeDistance(item1, item2)
disableKeepInMemory()
enableKeepInMemory()
getAllTrackedItems()
getJSONDebugOfTrackedItems(roundInt = true)
getJSONOfAllTrackedItems()
getJSONOfTrackedItems(roundInt = true)
getTrackedItemsInMOTFormat(frameNb)
reset()
setParams(newParams)
updateTrackedItemsWithNewFrame(detectionsOfThisFrame, frameNb)
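### Example
A minimal usage sketch, assuming the bundled `tracker.js` is imported as an ES module; field names follow `demo/tracker/index.ts` (which passes box centers as `x`/`y`), and the sample detection values are hypothetical:
```js
import tracker from './tracker.js'; // bundle produced by the esbuild step above

// configure matching behavior before feeding frames
tracker.setParams({ unMatchedFramesTolerance: 60, iouLimit: 0.05, distanceLimit: 1e4, matchingAlgorithm: 'kdTree' });
let frame = 0;
// detections: [{ x, y, w, h, name, confidence }] where x/y are box centers
function onFrame(detections) {
  tracker.updateTrackedItemsWithNewFrame(detections, frame++); // match detections against existing tracks
  const tracked = tracker.getJSONOfTrackedItems(true); // [{ id, name, confidence, isZombie, x, y, w, h }]
  for (const item of tracked) console.log(`track ${item.id}: ${item.name} ${item.confidence}${item.isZombie ? ' (zombie)' : ''}`);
}
onFrame([{ x: 320, y: 240, w: 100, h: 120, name: 'face', confidence: 0.9 }]); // hypothetical single detection
```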

65
demo/tracker/index.html Normal file
View File

@ -0,0 +1,65 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Human</title>
<meta name="viewport" content="width=device-width" id="viewport">
<meta name="keywords" content="Human">
<meta name="application-name" content="Human">
<meta name="description" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="msapplication-tooltip" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="theme-color" content="#000000">
<link rel="manifest" href="../manifest.webmanifest">
<link rel="shortcut icon" href="../../favicon.ico" type="image/x-icon">
<link rel="apple-touch-icon" href="../../assets/icon.png">
<script src="./index.js" type="module"></script>
<style>
html { font-family: 'Segoe UI'; font-size: 16px; font-variant: small-caps; }
body { margin: 0; background: black; color: white; overflow-x: hidden; width: 100vw; height: 100vh; }
body::-webkit-scrollbar { display: none; }
input[type="file"] { font-family: 'Segoe UI'; font-size: 14px; font-variant: small-caps; }
::-webkit-file-upload-button { background: #333333; color: white; border: 0; border-radius: 0; padding: 6px 16px; box-shadow: 4px 4px 4px #222222; font-family: 'Segoe UI'; font-size: 14px; font-variant: small-caps; }
</style>
</head>
<body>
<div style="display: flex">
<video id="video" playsinline style="width: 25vw" controls controlslist="nofullscreen nodownload noremoteplayback" disablepictureinpicture loop></video>
<canvas id="canvas" style="width: 75vw"></canvas>
</div>
<div class="uploader" style="padding: 8px">
<input type="file" name="inputvideo" id="inputvideo" accept="video/*"></input>
<input type="checkbox" id="interpolation" name="interpolation"></input>
<label for="tracker">interpolation</label>
</div>
<form id="config" style="padding: 8px; line-height: 1.6rem;">
tracker |
<input type="checkbox" id="tracker" name="tracker" checked></input>
<label for="tracker">enabled</label> |
<input type="checkbox" id="keepInMemory" name="keepInMemory"></input>
<label for="keepInMemory">keepInMemory</label> |
<br>
tracker source |
<input type="radio" id="box-face" name="box" value="face" checked>
<label for="box-face">face</label> |
<input type="radio" id="box-body" name="box" value="body">
<label for="box-face">body</label> |
<input type="radio" id="box-object" name="box" value="object">
<label for="box-face">object</label> |
<br>
tracker config |
<input type="range" id="unMatchedFramesTolerance" name="unMatchedFramesTolerance" min="0" max="300" step="1", value="60"></input>
<label for="unMatchedFramesTolerance">unMatchedFramesTolerance</label> |
<input type="range" id="iouLimit" name="unMatchedFramesTolerance" min="0" max="1" step="0.01", value="0.1"></input>
<label for="iouLimit">iouLimit</label> |
<input type="range" id="distanceLimit" name="unMatchedFramesTolerance" min="0" max="1" step="0.01", value="0.1"></input>
<label for="distanceLimit">distanceLimit</label> |
<input type="radio" id="matchingAlgorithm-kdTree" name="matchingAlgorithm" value="kdTree" checked>
<label for="matchingAlgorithm-kdTree">kdTree</label> |
<input type="radio" id="matchingAlgorithm-munkres" name="matchingAlgorithm" value="munkres">
<label for="matchingAlgorithm-kdTree">munkres</label> |
</form>
<pre id="status" style="position: absolute; top: 12px; right: 20px; background-color: grey; padding: 8px; box-shadow: 2px 2px black"></pre>
<pre id="log" style="padding: 8px"></pre>
<div id="performance" style="position: absolute; bottom: 0; width: 100%; padding: 8px; font-size: 0.8rem;"></div>
</body>
</html>

10
demo/tracker/index.js Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

208
demo/tracker/index.ts Normal file
View File

@ -0,0 +1,208 @@
/**
* Human demo for browsers
* @default Human Library
* @summary <https://github.com/vladmandic/human>
* @author <https://github.com/vladmandic>
* @copyright <https://github.com/vladmandic>
* @license MIT
*/
import * as H from '../../dist/human.esm.js'; // equivalent of @vladmandic/Human
import tracker from './tracker.js';
const humanConfig: Partial<H.Config> = { // user configuration for human, used to fine-tune behavior
debug: true,
backend: 'webgl',
// cacheSensitivity: 0,
// cacheModels: false,
// warmup: 'none',
modelBasePath: 'https://vladmandic.github.io/human-models/models',
filter: { enabled: true, equalization: false, flip: false },
face: {
enabled: true,
detector: { rotation: false, maxDetected: 10, minConfidence: 0.3 },
mesh: { enabled: true },
attention: { enabled: false },
iris: { enabled: false },
description: { enabled: false },
emotion: { enabled: false },
antispoof: { enabled: false },
liveness: { enabled: false },
},
body: { enabled: false, maxDetected: 6, modelPath: 'movenet-multipose.json' },
hand: { enabled: false },
object: { enabled: false, maxDetected: 10 },
segmentation: { enabled: false },
gesture: { enabled: false },
};
interface TrackerConfig {
unMatchedFramesTolerance: number, // number of frames an object can remain unmatched before it is considered gone; ignored if fastDelete is set
iouLimit: number, // exclude things from being matched if their IOU is less than this; 1 means total overlap; 0 means no overlap
fastDelete: boolean, // remove new objects immediately if they could not be matched in the next frames; if set, ignores unMatchedFramesTolerance
distanceLimit: number, // distance limit for matching; if values need to be excluded from matching set their distance to something greater than the distance limit
matchingAlgorithm: 'kdTree' | 'munkres', // algorithm used to match tracks with new detections
}
interface TrackerResult {
id: number,
confidence: number,
bearing: number,
isZombie: boolean,
name: string,
x: number,
y: number,
w: number,
h: number,
}
const trackerConfig: TrackerConfig = {
unMatchedFramesTolerance: 100,
iouLimit: 0.05,
fastDelete: false,
distanceLimit: 1e4,
matchingAlgorithm: 'kdTree',
};
const human = new H.Human(humanConfig); // create instance of human with overrides from user configuration
const dom = { // grab instances of dom objects so we dont have to look them up later
video: document.getElementById('video') as HTMLVideoElement,
canvas: document.getElementById('canvas') as HTMLCanvasElement,
log: document.getElementById('log') as HTMLPreElement,
fps: document.getElementById('status') as HTMLPreElement,
tracker: document.getElementById('tracker') as HTMLInputElement,
interpolation: document.getElementById('interpolation') as HTMLInputElement,
config: document.getElementById('config') as HTMLFormElement,
ctx: (document.getElementById('canvas') as HTMLCanvasElement).getContext('2d') as CanvasRenderingContext2D,
};
const timestamp = { detect: 0, draw: 0, tensors: 0, start: 0 }; // holds information used to calculate performance and possible memory leaks
const fps = { detectFPS: 0, drawFPS: 0, frames: 0, averageMs: 0 }; // holds calculated fps information for both detect and screen refresh
const log = (...msg) => { // helper method to output messages
dom.log.innerText += msg.join(' ') + '\n';
console.log(...msg); // eslint-disable-line no-console
};
const status = (msg) => dom.fps.innerText = msg; // print status element
async function detectionLoop() { // main detection loop
if (!dom.video.paused && dom.video.readyState >= 2) {
if (timestamp.start === 0) timestamp.start = human.now();
// log('profiling data:', await human.profile(dom.video));
await human.detect(dom.video, humanConfig); // actual detection; we're not capturing output in a local variable as it can also be reached via human.result
const tensors = human.tf.memory().numTensors; // check current tensor usage for memory leaks
if (tensors - timestamp.tensors !== 0) log('allocated tensors:', tensors - timestamp.tensors); // printed on start and each time there is a tensor leak
timestamp.tensors = tensors;
fps.detectFPS = Math.round(1000 * 1000 / (human.now() - timestamp.detect)) / 1000;
fps.frames++;
fps.averageMs = Math.round(1000 * (human.now() - timestamp.start) / fps.frames) / 1000;
}
timestamp.detect = human.now();
requestAnimationFrame(detectionLoop); // start new frame immediately
}
function drawLoop() { // main screen refresh loop
if (!dom.video.paused && dom.video.readyState >= 2) {
const res: H.Result = dom.interpolation.checked ? human.next(human.result) : human.result; // interpolate results if enabled
let tracking: H.FaceResult[] | H.BodyResult[] | H.ObjectResult[] = [];
if (human.config.face.enabled) tracking = res.face;
else if (human.config.body.enabled) tracking = res.body;
else if (human.config.object.enabled) tracking = res.object;
else log('unknown object type');
let data: TrackerResult[] = [];
if (dom.tracker.checked) {
const items = tracking.map((obj) => ({
x: obj.box[0] + obj.box[2] / 2,
y: obj.box[1] + obj.box[3] / 2,
w: obj.box[2],
h: obj.box[3],
name: obj.label || (human.config.face.enabled ? 'face' : 'body'),
confidence: obj.score,
}));
tracker.updateTrackedItemsWithNewFrame(items, fps.frames);
data = tracker.getJSONOfTrackedItems(true) as TrackerResult[];
}
human.draw.canvas(dom.video, dom.canvas); // copy input video frame to output canvas
for (let i = 0; i < tracking.length; i++) {
// @ts-ignore
const name = tracking[i].label || (human.config.face.enabled ? 'face' : 'body');
dom.ctx.strokeRect(tracking[i].box[0], tracking[i].box[1], tracking[i].box[2], tracking[i].box[3]); // box is [x, y, width, height]
dom.ctx.fillText(`id: ${tracking[i].id} ${Math.round(100 * tracking[i].score)}% ${name}`, tracking[i].box[0] + 4, tracking[i].box[1] + 16);
if (data[i]) {
dom.ctx.fillText(`t: ${data[i].id} ${Math.round(100 * data[i].confidence)}% ${data[i].name} ${data[i].isZombie ? 'zombie' : ''}`, tracking[i].box[0] + 4, tracking[i].box[1] + 34);
}
}
}
const now = human.now();
fps.drawFPS = Math.round(1000 * 1000 / (now - timestamp.draw)) / 1000;
timestamp.draw = now;
status(dom.video.paused ? 'paused' : `fps: ${fps.detectFPS.toFixed(1).padStart(5, ' ')} detect | ${fps.drawFPS.toFixed(1).padStart(5, ' ')} draw`); // write status
setTimeout(drawLoop, 30); // used to slow down refresh from max refresh rate to a target of ~30 fps
}
async function handleVideo(file: File) {
const url = URL.createObjectURL(file);
dom.video.src = url;
await dom.video.play();
log('loaded video:', file.name, 'resolution:', [dom.video.videoWidth, dom.video.videoHeight], 'duration:', dom.video.duration);
dom.canvas.width = dom.video.videoWidth;
dom.canvas.height = dom.video.videoHeight;
dom.ctx.strokeStyle = 'white';
dom.ctx.fillStyle = 'white';
dom.ctx.font = '16px Segoe UI';
dom.video.playbackRate = 0.25;
}
function initInput() {
document.body.addEventListener('dragenter', (evt) => evt.preventDefault());
document.body.addEventListener('dragleave', (evt) => evt.preventDefault());
document.body.addEventListener('dragover', (evt) => evt.preventDefault());
document.body.addEventListener('drop', async (evt) => {
evt.preventDefault();
if (evt.dataTransfer) evt.dataTransfer.dropEffect = 'copy';
const file = evt.dataTransfer?.files?.[0];
if (file) await handleVideo(file);
log(dom.video.readyState);
});
(document.getElementById('inputvideo') as HTMLInputElement).onchange = async (evt) => {
evt.preventDefault();
const file = evt.target?.['files']?.[0];
if (file) await handleVideo(file);
};
dom.config.onchange = () => {
trackerConfig.distanceLimit = (document.getElementById('distanceLimit') as HTMLInputElement).valueAsNumber;
trackerConfig.iouLimit = (document.getElementById('iouLimit') as HTMLInputElement).valueAsNumber;
trackerConfig.unMatchedFramesTolerance = (document.getElementById('unMatchedFramesTolerance') as HTMLInputElement).valueAsNumber;
trackerConfig.matchingAlgorithm = (document.getElementById('matchingAlgorithm-kdTree') as HTMLInputElement).checked ? 'kdTree' : 'munkres';
tracker.setParams(trackerConfig);
if ((document.getElementById('keepInMemory') as HTMLInputElement).checked) tracker.enableKeepInMemory();
else tracker.disableKeepInMemory();
tracker.reset();
log('tracker config change', JSON.stringify(trackerConfig));
humanConfig.face!.enabled = (document.getElementById('box-face') as HTMLInputElement).checked; // eslint-disable-line @typescript-eslint/no-non-null-assertion
humanConfig.body!.enabled = (document.getElementById('box-body') as HTMLInputElement).checked; // eslint-disable-line @typescript-eslint/no-non-null-assertion
humanConfig.object!.enabled = (document.getElementById('box-object') as HTMLInputElement).checked; // eslint-disable-line @typescript-eslint/no-non-null-assertion
};
dom.tracker.onchange = (evt) => {
log('tracker', (evt.target as HTMLInputElement).checked ? 'enabled' : 'disabled');
tracker.setParams(trackerConfig);
tracker.reset();
};
}
async function main() { // main entry point
log('human version:', human.version, '| tfjs version:', human.tf.version['tfjs-core']);
log('platform:', human.env.platform, '| agent:', human.env.agent);
status('loading...');
await human.load(); // preload all models
log('backend:', human.tf.getBackend(), '| available:', human.env.backends);
log('models loaded:', human.models.loaded());
status('initializing...');
await human.warmup(); // warmup function to initialize backend for future faster detection
initInput(); // initialize input
await detectionLoop(); // start detection loop
drawLoop(); // start draw loop
}
window.onload = main;

1201
demo/tracker/tracker.js Normal file

File diff suppressed because it is too large

View File

@ -0,0 +1,5 @@
# Human Demo in TypeScript for Browsers
Simple demo app that can be used as a quick-start guide for use of `Human` in browser environments
- `index.ts` is compiled to `index.js` which is loaded from `index.html`
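- a typical compile step is sketched below; the exact flags are an assumption (the repo's own build scripts drive the real build), mirroring the esbuild invocation shown in the tracker demo notes above:
> `node_modules/.bin/esbuild --bundle index.ts --format=esm --platform=browser --target=esnext --outfile=index.js`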

View File

@ -0,0 +1,30 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Human</title>
<meta name="viewport" content="width=device-width" id="viewport">
<meta name="keywords" content="Human">
<meta name="application-name" content="Human">
<meta name="description" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="msapplication-tooltip" content="Human: 3D Face Detection, Body Pose, Hand & Finger Tracking, Iris Tracking, Age & Gender Prediction, Emotion Prediction & Gesture Recognition; Author: Vladimir Mandic <https://github.com/vladmandic>">
<meta name="theme-color" content="#000000">
<link rel="manifest" href="../manifest.webmanifest">
<link rel="shortcut icon" href="../../favicon.ico" type="image/x-icon">
<link rel="apple-touch-icon" href="../../assets/icon.png">
<script src="./index.js" type="module"></script>
<style>
@font-face { font-family: 'Lato'; font-display: swap; font-style: normal; font-weight: 100; src: local('Lato'), url('../../assets/lato-light.woff2') }
html { font-family: 'Lato', 'Segoe UI'; font-size: 16px; font-variant: small-caps; }
body { margin: 0; background: black; color: white; overflow-x: hidden; width: 100vw; height: 100vh; }
body::-webkit-scrollbar { display: none; }
</style>
</head>
<body>
<canvas id="canvas" style="margin: 0 auto; width: 100vw"></canvas>
<video id="video" playsinline style="display: none"></video>
<pre id="status" style="position: absolute; top: 12px; right: 20px; background-color: grey; padding: 8px; box-shadow: 2px 2px black"></pre>
<pre id="log" style="padding: 8px"></pre>
<div id="performance" style="position: absolute; bottom: 0; width: 100%; padding: 8px; font-size: 0.8rem;"></div>
</body>
</html>

9
demo/typescript/index.js Normal file
View File

@ -0,0 +1,9 @@
/*
Human
homepage: <https://github.com/vladmandic/human>
author: <https://github.com/vladmandic>
*/
import*as m from"../../dist/human.esm.js";var v=1920,b={debug:!0,backend:"webgl",modelBasePath:"https://vladmandic.github.io/human-models/models/",filter:{enabled:!0,equalization:!1,flip:!1},face:{enabled:!0,detector:{rotation:!1},mesh:{enabled:!0},attention:{enabled:!1},iris:{enabled:!0},description:{enabled:!0},emotion:{enabled:!0},antispoof:{enabled:!0},liveness:{enabled:!0}},body:{enabled:!1},hand:{enabled:!1},object:{enabled:!1},segmentation:{enabled:!1},gesture:{enabled:!0}},e=new m.Human(b);e.env.perfadd=!1;e.draw.options.font='small-caps 18px "Lato"';e.draw.options.lineHeight=20;e.draw.options.drawPoints=!0;var a={video:document.getElementById("video"),canvas:document.getElementById("canvas"),log:document.getElementById("log"),fps:document.getElementById("status"),perf:document.getElementById("performance")},n={detect:0,draw:0,tensors:0,start:0},s={detectFPS:0,drawFPS:0,frames:0,averageMs:0},o=(...t)=>{a.log.innerText+=t.join(" ")+`
`,console.log(...t)},i=t=>a.fps.innerText=t,g=t=>a.perf.innerText="tensors:"+e.tf.memory().numTensors.toString()+" | performance: "+JSON.stringify(t).replace(/"|{|}/g,"").replace(/,/g," | ");async function f(){if(!a.video.paused){n.start===0&&(n.start=e.now()),await e.detect(a.video);let t=e.tf.memory().numTensors;t-n.tensors!==0&&o("allocated tensors:",t-n.tensors),n.tensors=t,s.detectFPS=Math.round(1e3*1e3/(e.now()-n.detect))/1e3,s.frames++,s.averageMs=Math.round(1e3*(e.now()-n.start)/s.frames)/1e3,s.frames%100===0&&!a.video.paused&&o("performance",{...s,tensors:n.tensors})}n.detect=e.now(),requestAnimationFrame(f)}async function u(){var d,r,c;if(!a.video.paused){let l=e.next(e.result),w=await e.image(a.video);e.draw.canvas(w.canvas,a.canvas);let p={bodyLabels:`person confidence [score] and ${(c=(r=(d=e.result)==null?void 0:d.body)==null?void 0:r[0])==null?void 0:c.keypoints.length} keypoints`};await e.draw.all(a.canvas,l,p),g(l.performance)}let t=e.now();s.drawFPS=Math.round(1e3*1e3/(t-n.draw))/1e3,n.draw=t,i(a.video.paused?"paused":`fps: ${s.detectFPS.toFixed(1).padStart(5," ")} detect | ${s.drawFPS.toFixed(1).padStart(5," ")} draw`),setTimeout(u,30)}async function h(){let d=(await e.webcam.enumerate())[0].deviceId,r=await e.webcam.start({element:a.video,crop:!1,width:v,id:d});o(r),a.canvas.width=e.webcam.width,a.canvas.height=e.webcam.height,a.canvas.onclick=async()=>{e.webcam.paused?await e.webcam.play():e.webcam.pause()}}async function y(){o("human version:",e.version,"| tfjs version:",e.tf.version["tfjs-core"]),o("platform:",e.env.platform,"| agent:",e.env.agent),i("loading..."),await e.load(),o("backend:",e.tf.getBackend(),"| available:",e.env.backends),o("models stats:",e.models.stats()),o("models loaded:",e.models.loaded()),o("environment",e.env),i("initializing..."),await e.warmup(),await h(),await f(),await u()}window.onload=y;
//# sourceMappingURL=index.js.map

File diff suppressed because one or more lines are too long

119
demo/typescript/index.ts Normal file

@@ -0,0 +1,119 @@
/**
* Human demo for browsers
* @default Human Library
* @summary <https://github.com/vladmandic/human>
* @author <https://github.com/vladmandic>
* @copyright <https://github.com/vladmandic>
* @license MIT
*/
import * as H from '../../dist/human.esm.js'; // equivalent of @vladmandic/human
const width = 1920; // used by webcam config as well as human maximum resolution; can be anything, but resolutions higher than 4k will disable internal optimizations
const humanConfig: Partial<H.Config> = { // user configuration for human, used to fine-tune behavior
debug: true,
backend: 'webgl',
// cacheSensitivity: 0,
// cacheModels: false,
// warmup: 'none',
// modelBasePath: '../../models',
modelBasePath: 'https://vladmandic.github.io/human-models/models/',
filter: { enabled: true, equalization: false, flip: false },
face: { enabled: true, detector: { rotation: false }, mesh: { enabled: true }, attention: { enabled: false }, iris: { enabled: true }, description: { enabled: true }, emotion: { enabled: true }, antispoof: { enabled: true }, liveness: { enabled: true } },
body: { enabled: false },
hand: { enabled: false },
object: { enabled: false },
segmentation: { enabled: false },
gesture: { enabled: true },
};
const human = new H.Human(humanConfig); // create instance of human with overrides from user configuration
human.env.perfadd = false; // whether performance data shows instant or cumulative values
human.draw.options.font = 'small-caps 18px "Lato"'; // set font used to draw labels when using draw methods
human.draw.options.lineHeight = 20;
human.draw.options.drawPoints = true; // draw points on face mesh
// human.draw.options.fillPolygons = true;
const dom = { // grab instances of dom objects so we don't have to look them up later
video: document.getElementById('video') as HTMLVideoElement,
canvas: document.getElementById('canvas') as HTMLCanvasElement,
log: document.getElementById('log') as HTMLPreElement,
fps: document.getElementById('status') as HTMLPreElement,
perf: document.getElementById('performance') as HTMLDivElement,
};
const timestamp = { detect: 0, draw: 0, tensors: 0, start: 0 }; // holds information used to calculate performance and possible memory leaks
const fps = { detectFPS: 0, drawFPS: 0, frames: 0, averageMs: 0 }; // holds calculated fps information for both detect and screen refresh
const log = (...msg) => { // helper method to output messages
dom.log.innerText += msg.join(' ') + '\n';
console.log(...msg); // eslint-disable-line no-console
};
const status = (msg) => dom.fps.innerText = msg; // write status message to the status element
const perf = (msg) => dom.perf.innerText = 'tensors:' + human.tf.memory().numTensors.toString() + ' | performance: ' + JSON.stringify(msg).replace(/"|{|}/g, '').replace(/,/g, ' | '); // print performance element
async function detectionLoop() { // main detection loop
if (!dom.video.paused) {
if (timestamp.start === 0) timestamp.start = human.now();
// log('profiling data:', await human.profile(dom.video));
await human.detect(dom.video); // actual detection; we're not capturing output in a local variable since it can also be accessed via human.result
const tensors = human.tf.memory().numTensors; // check current tensor usage for memory leaks
if (tensors - timestamp.tensors !== 0) log('allocated tensors:', tensors - timestamp.tensors); // printed on start and each time there is a tensor leak
timestamp.tensors = tensors;
fps.detectFPS = Math.round(1000 * 1000 / (human.now() - timestamp.detect)) / 1000;
fps.frames++;
fps.averageMs = Math.round(1000 * (human.now() - timestamp.start) / fps.frames) / 1000;
if (fps.frames % 100 === 0 && !dom.video.paused) log('performance', { ...fps, tensors: timestamp.tensors });
}
timestamp.detect = human.now();
requestAnimationFrame(detectionLoop); // start new frame immediately
}
async function drawLoop() { // main screen refresh loop
if (!dom.video.paused) {
const interpolated = human.next(human.result); // smooth result using last-known results
const processed = await human.image(dom.video); // get current video frame, but enhanced with human.filters
human.draw.canvas(processed.canvas as HTMLCanvasElement, dom.canvas);
const opt: Partial<H.DrawOptions> = { bodyLabels: `person confidence [score] and ${human.result?.body?.[0]?.keypoints.length} keypoints` };
await human.draw.all(dom.canvas, interpolated, opt); // draw labels, boxes, lines, etc.
perf(interpolated.performance); // write performance data
}
const now = human.now();
fps.drawFPS = Math.round(1000 * 1000 / (now - timestamp.draw)) / 1000;
timestamp.draw = now;
status(dom.video.paused ? 'paused' : `fps: ${fps.detectFPS.toFixed(1).padStart(5, ' ')} detect | ${fps.drawFPS.toFixed(1).padStart(5, ' ')} draw`); // write status
setTimeout(drawLoop, 30); // throttle refresh from max display rate to a target of ~30 fps
}
async function webCam() {
const devices = await human.webcam.enumerate();
const id = devices[0].deviceId; // use first available video source
const webcamStatus = await human.webcam.start({ element: dom.video, crop: false, width, id }); // use human webcam helper methods and associate webcam stream with a dom element
log(webcamStatus);
dom.canvas.width = human.webcam.width;
dom.canvas.height = human.webcam.height;
dom.canvas.onclick = async () => { // pause when clicked on screen and resume on next click
if (human.webcam.paused) await human.webcam.play();
else human.webcam.pause();
};
}
async function main() { // main entry point
log('human version:', human.version, '| tfjs version:', human.tf.version['tfjs-core']);
log('platform:', human.env.platform, '| agent:', human.env.agent);
status('loading...');
await human.load(); // preload all models
log('backend:', human.tf.getBackend(), '| available:', human.env.backends);
log('models stats:', human.models.stats());
log('models loaded:', human.models.loaded());
log('environment', human.env);
status('initializing...');
await human.warmup(); // warmup function to initialize backend for future faster detection
await webCam(); // start webcam
await detectionLoop(); // start detection loop
await drawLoop(); // start draw loop
}
window.onload = main;
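
Worth calling out from detectionLoop above: comparing human.tf.memory().numTensors across frames is a cheap leak detector, since the tensor count should stabilize once models are warmed up, so any steady growth points at tensors that were never disposed. A minimal sketch of the same check in isolation; the import path and modelBasePath mirror the demo and are illustrative, not required:

import * as H from '../../dist/human.esm.js';

const human = new H.Human({ modelBasePath: 'https://vladmandic.github.io/human-models/models/' }); // illustrative config
let lastTensors = 0; // tensor count from the previous frame

async function detectWithLeakCheck(input: HTMLVideoElement) {
  await human.detect(input); // result is also reachable via human.result
  const tensors = human.tf.memory().numTensors; // tensors currently allocated by tfjs
  if (tensors !== lastTensors) console.log('allocated tensors changed by:', tensors - lastTensors); // expected only on first frames
  lastTensors = tensors; // steady growth across frames indicates a leak
}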

58
demo/video/index.html Normal file

@@ -0,0 +1,58 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Human</title>
<meta name="viewport" content="width=device-width" id="viewport">
<meta name="keywords" content="Human">
<meta name="description" content="Human: Demo; Author: Vladimir Mandic <https://github.com/vladmandic>">
<link rel="manifest" href="../manifest.webmanifest">
<link rel="shortcut icon" href="../../favicon.ico" type="image/x-icon">
<style>
@font-face { font-family: 'Lato'; font-display: swap; font-style: normal; font-weight: 100; src: local('Lato'), url('../../assets/lato-light.woff2') }
body { font-family: 'Lato', 'Segoe UI'; font-size: 16px; font-variant: small-caps; margin: 0; background: black; color: white; overflow: hidden; width: 100vw; height: 100vh; }
</style>
</head>
<body>
<canvas id="canvas" style="margin: 0 auto; width: 100%"></canvas>
<pre id="log" style="padding: 8px; position: fixed; bottom: 0"></pre>
<script type="module">
import * as H from '../../dist/human.esm.js'; // equivalent of import @vladmandic/human
const humanConfig = { // user configuration for human, used to fine-tune behavior
modelBasePath: '../../models', // models can also be loaded directly from a CDN
filter: { enabled: true, equalization: true, flip: false },
face: { enabled: true, detector: { rotation: false }, mesh: { enabled: true }, attention: { enabled: false }, iris: { enabled: true }, description: { enabled: true }, emotion: { enabled: true } },
body: { enabled: true },
hand: { enabled: true },
gesture: { enabled: true },
object: { enabled: false },
segmentation: { enabled: false },
};
const human = new H.Human(humanConfig); // create instance of human with overrides from user configuration
const canvas = document.getElementById('canvas'); // output canvas to draw both webcam and detection results
async function drawLoop() { // main screen refresh loop
const interpolated = human.next(); // get smoothed result using last-known results, which are continuously updated based on input webcam video
human.draw.canvas(human.webcam.element, canvas); // draw webcam video to screen canvas; better than using the processed image since this loop runs faster than the processing loop
await human.draw.all(canvas, interpolated); // draw labels, boxes, lines, etc.
setTimeout(drawLoop, 30); // throttle refresh to a target of 1000/30 ≈ 30 fps
}
async function main() { // main entry point
document.getElementById('log').innerHTML = `human version: ${human.version} | tfjs version: ${human.tf.version['tfjs-core']}<br>platform: ${human.env.platform} | agent ${human.env.agent}`;
await human.webcam.start({ crop: true }); // find webcam and start it
human.video(human.webcam.element); // instruct human to continuously detect video frames
canvas.width = human.webcam.width; // set canvas resolution to input webcam native resolution
canvas.height = human.webcam.height;
canvas.onclick = async () => { // pause when clicked on screen and resume on next click
if (human.webcam.paused) await human.webcam.play();
else human.webcam.pause();
};
await drawLoop(); // start draw loop
}
window.onload = main;
</script>
</body>
</html>
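
Note the design difference from the typescript demo earlier in this diff: instead of an explicit requestAnimationFrame detection loop, this page calls human.video(), which keeps human.result continuously updated in the background while the draw loop merely interpolates and renders. If the built-in loop is not wanted, the same page could drive detection explicitly; a minimal sketch of that swap, using only calls already shown in these demos (timing and wiring are illustrative):

import * as H from '../../dist/human.esm.js'; // same build both demos import

// explicit detection loop as an alternative to human.video()
async function detectLoop(human: H.Human, video: HTMLVideoElement) {
  await human.detect(video); // updates human.result as a side effect
  requestAnimationFrame(() => detectLoop(human, video)); // schedule next detection frame
}
// usage: replace `human.video(human.webcam.element)` with `detectLoop(human, human.webcam.element)`;
// the draw loop is unchanged, since human.next() reads human.result either way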

1
dist/human.d.ts vendored Normal file

@@ -0,0 +1 @@
export * from '../types/human';

1
dist/human.esm-nobundle.d.ts vendored Normal file

@@ -0,0 +1 @@
export * from '../types/human';

840
dist/human.esm-nobundle.js vendored Normal file

File diff suppressed because one or more lines are too long

1
dist/human.esm.d.ts vendored Normal file

@@ -0,0 +1 @@
export * from '../types/human';

46857
dist/human.esm.js vendored Normal file

File diff suppressed because one or more lines are too long

7
dist/human.esm.js.map vendored Normal file

File diff suppressed because one or more lines are too long

9359
dist/human.js vendored Normal file

File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff.