
@vladmandic/face-api - v1.6.4

Index

  • Namespaces
  • Enumerations
  • Classes
  • Interfaces
  • Type aliases
  • Properties
  • Variables
  • Functions

Type aliases

AgeAndGenderPrediction

AgeAndGenderPrediction: { age: number; gender: Gender; genderProbability: number }

Type declaration

  • age: number
  • gender: Gender
  • genderProbability: number

BatchNorm

BatchNorm: { sub: tf.Tensor1D; truediv: tf.Tensor1D }

Type declaration

  • sub: tf.Tensor1D
  • truediv: tf.Tensor1D

ConvWithBatchNorm

ConvWithBatchNorm: { bn: BatchNorm; conv: ConvParams }

Type declaration

  • bn: BatchNorm
  • conv: ConvParams

DefaultTinyYolov2NetParams

DefaultTinyYolov2NetParams: { conv0: ConvWithBatchNorm; conv1: ConvWithBatchNorm; conv2: ConvWithBatchNorm; conv3: ConvWithBatchNorm; conv4: ConvWithBatchNorm; conv5: ConvWithBatchNorm; conv6: ConvWithBatchNorm; conv7: ConvWithBatchNorm; conv8: ConvParams }

Environment

Environment: FileSystem & { Canvas: typeof HTMLCanvasElement; CanvasRenderingContext2D: typeof CanvasRenderingContext2D; Image: typeof HTMLImageElement; ImageData: typeof ImageData; Video: typeof HTMLVideoElement; createCanvasElement: any; createImageElement: any; createVideoElement: any; fetch: any }

FaceDetectionFunction

FaceDetectionFunction: (input: TNetInput) => Promise<FaceDetection[]>

Type declaration

  • (input: TNetInput): Promise<FaceDetection[]>

FaceDetectionOptions

FaceDetectionOptions: SsdMobilenetv1Options | TinyFaceDetectorOptions | TinyYolov2Options

FileSystem

FileSystem: { readFile: any }

Type declaration

  • readFile: function
    • readFile(filePath: string): Promise<any>
    • Parameters

      • filePath: string

      Returns Promise<any>

ITinyFaceDetectorOptions

ITinyFaceDetectorOptions: ITinyYolov2Options

MobilenetParams

MobilenetParams: { conv0: SeparableConvParams | ConvParams; conv1: SeparableConvParams; conv2: SeparableConvParams; conv3: SeparableConvParams; conv4: SeparableConvParams; conv5: SeparableConvParams; conv6?: SeparableConvParams; conv7?: SeparableConvParams; conv8: ConvParams }

Type declaration

  • conv0: SeparableConvParams | ConvParams
  • conv1: SeparableConvParams
  • conv2: SeparableConvParams
  • conv3: SeparableConvParams
  • conv4: SeparableConvParams
  • conv5: SeparableConvParams
  • Optional conv6?: SeparableConvParams
  • Optional conv7?: SeparableConvParams
  • conv8: ConvParams

NetOutput

NetOutput: { age: tf.Tensor1D; gender: tf.Tensor2D }

Type declaration

  • age: tf.Tensor1D
  • gender: tf.Tensor2D

NetParams

NetParams: { fc: { age: FCParams; gender: FCParams } }

Type declaration

  • fc: { age: FCParams; gender: FCParams }
    • age: FCParams
    • gender: FCParams

TMediaElement

TMediaElement: HTMLImageElement | HTMLVideoElement | HTMLCanvasElement

TNetInput

TNetInput: TNetInputArg | TNetInputArg[] | NetInput | tf.Tensor4D

TNetInputArg

TNetInputArg: string | TResolvedNetInput

TResolvedNetInput

TResolvedNetInput: TMediaElement | tf.Tensor3D | tf.Tensor4D

TinyYolov2Config

TinyYolov2Config: { anchors: Point[]; classes: string[]; filterSizes?: number[]; iouThreshold: number; isFirstLayerConv2d?: boolean; meanRgb?: [number, number, number]; withClassScores?: boolean; withSeparableConvs: boolean }

Type declaration

  • anchors: Point[]
  • classes: string[]
  • Optional filterSizes?: number[]
  • iouThreshold: number
  • Optional isFirstLayerConv2d?: boolean
  • Optional meanRgb?: [number, number, number]
  • Optional withClassScores?: boolean
  • withSeparableConvs: boolean

TinyYolov2NetParams

TinyYolov2NetParams: DefaultTinyYolov2NetParams | MobilenetParams

WithAge

WithAge<TSource>: TSource & { age: number }

Type parameters

  • TSource

WithFaceDescriptor

WithFaceDescriptor<TSource>: TSource & { descriptor: Float32Array }

Type parameters

  • TSource

WithFaceDetection

WithFaceDetection<TSource>: TSource & { detection: FaceDetection }

Type parameters

  • TSource

WithFaceExpressions

WithFaceExpressions<TSource>: TSource & { expressions: FaceExpressions }

Type parameters

  • TSource

WithFaceLandmarks

WithFaceLandmarks<TSource, TFaceLandmarks>: TSource & { alignedRect: FaceDetection; angle: { pitch: number | undefined; roll: number | undefined; yaw: number | undefined }; landmarks: TFaceLandmarks; unshiftedLandmarks: TFaceLandmarks }

Type parameters

  • TSource
  • TFaceLandmarks

WithGender

WithGender<TSource>: TSource & { gender: Gender; genderProbability: number }

Type parameters

  • TSource

Properties

tf

tf: any

Variables

FACE_EXPRESSION_LABELS

FACE_EXPRESSION_LABELS: string[] = ...

env

env: { createBrowserEnv: () => Environment; createFileSystem: (fs?: any) => FileSystem; createNodejsEnv: () => Environment; getEnv: () => Environment; initialize: () => null | void; isBrowser: () => boolean; isNodejs: () => boolean; monkeyPatch: (env: Partial<Environment>) => void; setEnv: (env: Environment) => void } = ...

Type declaration

  • createBrowserEnv: () => Environment
  • createFileSystem: (fs?: any) => FileSystem
  • createNodejsEnv: () => Environment
  • getEnv: () => Environment
  • initialize: () => null | void
  • isBrowser: () => boolean
  • isNodejs: () => boolean
  • monkeyPatch: (env: Partial<Environment>) => void
  • setEnv: (env: Environment) => void
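
    In Node.js the required browser classes can be supplied through monkeyPatch. A minimal sketch, assuming the node-canvas package provides compatible Canvas, Image, and ImageData implementations (the package name and the cast are illustrative, not part of this API):

      import * as faceapi from '@vladmandic/face-api';
      import { Canvas, Image, ImageData } from 'canvas';

      // Substitute node-canvas implementations for the DOM classes face-api
      // expects; the cast is needed because node-canvas types differ from
      // the DOM lib typings.
      faceapi.env.monkeyPatch({ Canvas, Image, ImageData } as any);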

nets

nets: { ageGenderNet: AgeGenderNet; faceExpressionNet: FaceExpressionNet; faceLandmark68Net: FaceLandmark68Net; faceLandmark68TinyNet: FaceLandmark68TinyNet; faceRecognitionNet: FaceRecognitionNet; ssdMobilenetv1: SsdMobilenetv1; tinyFaceDetector: TinyFaceDetector; tinyYolov2: TinyYolov2 } = ...

Type declaration

  • ageGenderNet: AgeGenderNet
  • faceExpressionNet: FaceExpressionNet
  • faceLandmark68Net: FaceLandmark68Net
  • faceLandmark68TinyNet: FaceLandmark68TinyNet
  • faceRecognitionNet: FaceRecognitionNet
  • ssdMobilenetv1: SsdMobilenetv1
  • tinyFaceDetector: TinyFaceDetector
  • tinyYolov2: TinyYolov2

version

version: string = ...

Functions

Const allFaces

allFacesSsdMobilenetv1

allFacesTinyYolov2

awaitMediaLoaded

  • awaitMediaLoaded(media: HTMLCanvasElement | HTMLImageElement | HTMLVideoElement): Promise<unknown>
  • Parameters

    • media: HTMLCanvasElement | HTMLImageElement | HTMLVideoElement

    Returns Promise<unknown>

bufferToImage

  • bufferToImage(buf: Blob): Promise<HTMLImageElement>

Const computeFaceDescriptor

  • computeFaceDescriptor(input: any): Promise<Float32Array | Float32Array[]>
  • Computes a 128-entry vector (face descriptor / face embedding) from the face shown in an image, which uniquely represents the features of that person's face. The computed face descriptor can be used to measure the similarity between faces by computing the Euclidean distance between two face descriptors.

    Parameters

    • input: any

    Returns Promise<Float32Array | Float32Array[]>

    Face descriptor with 128 entries or array thereof in case of batch input.
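
    A minimal comparison sketch, assuming the face recognition model has already been loaded and imageA/imageB are input elements (hypothetical names):

      const d1 = await faceapi.computeFaceDescriptor(imageA) as Float32Array;
      const d2 = await faceapi.computeFaceDescriptor(imageB) as Float32Array;
      // Smaller distance means more similar faces; a threshold around 0.6
      // is commonly used to decide whether two descriptors match.
      const distance = faceapi.euclideanDistance(d1, d2);
      const isSameFace = distance < 0.6;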

createCanvas

  • createCanvas(__namedParameters: IDimensions): HTMLCanvasElement

createCanvasFromMedia

  • createCanvasFromMedia(media: HTMLImageElement | HTMLVideoElement | ImageData, dims?: IDimensions): HTMLCanvasElement

createFaceDetectionNet

createFaceRecognitionNet

createSsdMobilenetv1

createTinyFaceDetector

createTinyYolov2

  • createTinyYolov2(weights: Float32Array, withSeparableConvs?: boolean): TinyYolov2

detectAllFaces
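
    A typical composed call, sketched on the assumption that detectAllFaces returns a chainable task as in upstream face-api.js and that the detector, landmark, and recognition models are loaded:

      const results = await faceapi
        .detectAllFaces(input, new faceapi.SsdMobilenetv1Options())
        .withFaceLandmarks()
        .withFaceDescriptors();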

Const detectFaceLandmarks

Const detectFaceLandmarksTiny

  • Detects the 68 point face landmark positions of the face shown in an image using a tinier version of the 68 point face landmark model, which is slightly faster at inference, but also slightly less accurate.

    Parameters

    • input: any

    Returns Promise<FaceLandmarks68 | FaceLandmarks68[]>

    68 point face landmarks or array thereof in case of batch input.
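
    A minimal sketch, assuming the weights are served from a hypothetical /models path:

      await faceapi.loadFaceLandmarkTinyModel('/models');
      const landmarks = await faceapi.detectFaceLandmarksTiny(image) as FaceLandmarks68;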

Const detectLandmarks

detectSingleFace

euclideanDistance

  • euclideanDistance(arr1: number[] | Float32Array, arr2: number[] | Float32Array): number
  • Parameters

    • arr1: number[] | Float32Array
    • arr2: number[] | Float32Array

    Returns number

extendWithAge

  • extendWithAge<TSource>(sourceObj: TSource, age: number): WithAge<TSource>

extendWithFaceDescriptor

  • extendWithFaceDescriptor<TSource>(sourceObj: TSource, descriptor: Float32Array): WithFaceDescriptor<TSource>

extendWithFaceDetection

extendWithFaceExpressions

extendWithFaceLandmarks

  • extendWithFaceLandmarks<TSource, TFaceLandmarks>(sourceObj: TSource, unshiftedLandmarks: TFaceLandmarks): WithFaceLandmarks<TSource, TFaceLandmarks>

extendWithGender

  • extendWithGender<TSource>(sourceObj: TSource, gender: Gender, genderProbability: number): WithGender<TSource>

extractFaceTensors

  • extractFaceTensors(imageTensor: any, detections: (FaceDetection | Rect)[]): Promise<tf.Tensor3D[]>
  • Extracts the tensors of the image regions containing the detected faces. Useful if you want to compute the face descriptors for the face images. Using this method is faster than extracting a canvas for each face and converting them to tensors individually.

    Parameters

    • imageTensor: any

      The image tensor that face detection has been performed on.

    • detections: (FaceDetection | Rect)[]

      The face detection results or face bounding boxes for that image.

    Returns Promise<tf.Tensor3D[]>

    Tensors of the corresponding image region for each detected face.
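
    A minimal sketch, assuming @tensorflow/tfjs is available and image is an HTMLImageElement (hypothetical):

      import * as tf from '@tensorflow/tfjs';

      const detections = await faceapi.detectAllFaces(image);
      const imgTensor = tf.browser.fromPixels(image);
      const faceTensors = await faceapi.extractFaceTensors(imgTensor, detections);
      const descriptors = await Promise.all(
        faceTensors.map((t) => faceapi.computeFaceDescriptor(t)),
      );
      // Tensors are not garbage collected; free them explicitly.
      faceTensors.forEach((t) => t.dispose());
      imgTensor.dispose();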

extractFaces

  • extractFaces(input: any, detections: (FaceDetection | Rect)[]): Promise<HTMLCanvasElement[]>
  • Extracts the image regions containing the detected faces.

    Parameters

    • input: any

      The image that face detection has been performed on.

    • detections: (FaceDetection | Rect)[]

      The face detection results or face bounding boxes for that image.

    Returns Promise<HTMLCanvasElement[]>

    The Canvases of the corresponding image region for each detected face.
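
    A minimal sketch (image is a hypothetical input element):

      const detections = await faceapi.detectAllFaces(image);
      const faceCanvases = await faceapi.extractFaces(image, detections);
      // Each canvas contains the cropped region of one detected face.
      faceCanvases.forEach((canvas) => document.body.appendChild(canvas));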

fetchImage

  • fetchImage(uri: string): Promise<HTMLImageElement>

fetchJson

  • fetchJson<T>(uri: string): Promise<T>

fetchNetWeights

  • fetchNetWeights(uri: string): Promise<Float32Array>

fetchOrThrow

  • fetchOrThrow(url: string, init?: RequestInit): Promise<Response>

fetchVideo

  • fetchVideo(uri: string): Promise<HTMLVideoElement>

getContext2dOrThrow

  • getContext2dOrThrow(canvasArg: string | CanvasRenderingContext2D | HTMLCanvasElement): CanvasRenderingContext2D
  • Parameters

    • canvasArg: string | CanvasRenderingContext2D | HTMLCanvasElement

    Returns CanvasRenderingContext2D

getMediaDimensions

  • getMediaDimensions(input: HTMLCanvasElement | IDimensions | HTMLImageElement | HTMLVideoElement): Dimensions

imageTensorToCanvas

  • imageTensorToCanvas(imgTensor: Tensor, canvas?: HTMLCanvasElement): Promise<HTMLCanvasElement>

imageToSquare

  • imageToSquare(input: HTMLCanvasElement | HTMLImageElement, inputSize: number, centerImage?: boolean): HTMLCanvasElement
  • Parameters

    • input: HTMLCanvasElement | HTMLImageElement
    • inputSize: number
    • centerImage: boolean = false

    Returns HTMLCanvasElement

inverseSigmoid

  • inverseSigmoid(x: number): number

iou

  • iou(box1: Box<any>, box2: Box<any>, isIOU?: boolean): number
  • Parameters

    • box1: Box<any>
    • box2: Box<any>
    • isIOU: boolean = true

    Returns number
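
    A small worked example, assuming the exported Box class. Two 100x100 boxes offset by 50px share a 50x50 intersection, so IoU = 2500 / (10000 + 10000 - 2500) ≈ 0.143:

      const a = new faceapi.Box({ x: 0, y: 0, width: 100, height: 100 });
      const b = new faceapi.Box({ x: 50, y: 50, width: 100, height: 100 });
      const overlap = faceapi.iou(a, b); // ≈ 0.143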

isMediaElement

  • isMediaElement(input: any): boolean

isMediaLoaded

  • isMediaLoaded(media: HTMLImageElement | HTMLVideoElement): boolean

isWithAge

  • isWithAge(obj: any): obj is { age: number }

isWithFaceDetection

  • isWithFaceDetection(obj: any): obj is { detection: FaceDetection }

isWithFaceExpressions

isWithFaceLandmarks

isWithGender

  • isWithGender(obj: any): obj is { gender: Gender; genderProbability: number }

Const loadAgeGenderModel

  • loadAgeGenderModel(url: string): Promise<void>

Const loadFaceDetectionModel

  • loadFaceDetectionModel(url: string): Promise<void>

Const loadFaceExpressionModel

  • loadFaceExpressionModel(url: string): Promise<void>

Const loadFaceLandmarkModel

  • loadFaceLandmarkModel(url: string): Promise<void>

Const loadFaceLandmarkTinyModel

  • loadFaceLandmarkTinyModel(url: string): Promise<void>

Const loadFaceRecognitionModel

  • loadFaceRecognitionModel(url: string): Promise<void>

Const loadSsdMobilenetv1Model

  • loadSsdMobilenetv1Model(url: string): Promise<void>

Const loadTinyFaceDetectorModel

  • loadTinyFaceDetectorModel(url: string): Promise<void>

Const loadTinyYolov2Model

  • loadTinyYolov2Model(url: string): Promise<void>
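
    The loaders can run in parallel. A minimal sketch, assuming the weight files are served from a hypothetical /models path:

      await Promise.all([
        faceapi.loadSsdMobilenetv1Model('/models'),
        faceapi.loadFaceLandmarkModel('/models'),
        faceapi.loadFaceRecognitionModel('/models'),
      ]);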

loadWeightMap

  • loadWeightMap(uri: undefined | string, defaultModelName: string): Promise<tf.NamedTensorMap>
  • Parameters

    • uri: undefined | string
    • defaultModelName: string

    Returns Promise<tf.NamedTensorMap>

Const locateFaces

matchDimensions

  • matchDimensions(input: IDimensions, reference: IDimensions, useMediaDimensions?: boolean): { height: number; width: number }

minBbox

nonMaxSuppression

  • nonMaxSuppression(boxes: Box<any>[], scores: number[], iouThreshold: number, isIOU?: boolean): number[]

normalize

  • normalize(x: Tensor4D, meanRgb: number[]): tf.Tensor4D

padToSquare

  • padToSquare(imgTensor: Tensor4D, isCenterImage?: boolean): tf.Tensor4D
  • Pads the smaller dimension of an image tensor with zeros, such that width === height.

    Parameters

    • imgTensor: Tensor4D

      The image tensor.

    • isCenterImage: boolean = false

      (optional, default: false) If true, add an equal amount of padding on both sides of the minor dimension of the image.

    Returns tf.Tensor4D

    The padded tensor with width === height.
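
    A minimal sketch, assuming @tensorflow/tfjs; a 480x640 batch is padded to 640x640:

      import * as tf from '@tensorflow/tfjs';

      const img = tf.zeros([1, 480, 640, 3]) as tf.Tensor4D; // NHWC batch of one
      const squared = faceapi.padToSquare(img, true);        // shape [1, 640, 640, 3]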

Const predictAgeAndGender
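
    A minimal sketch, assuming the age/gender weights are served from a hypothetical /models path:

      await faceapi.loadAgeGenderModel('/models');
      const { age, gender, genderProbability } =
        await faceapi.predictAgeAndGender(image) as AgeAndGenderPrediction;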

Const recognizeFaceExpressions

  • Recognizes the facial expressions from a face image.

    Parameters

    • input: any

    Returns Promise<FaceExpressions | FaceExpressions[]>

    Facial expressions with corresponding probabilities or array thereof in case of batch input.
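
    A minimal sketch, assuming the model has been loaded via loadFaceExpressionModel (image is a hypothetical input element):

      const expressions = await faceapi.recognizeFaceExpressions(image) as FaceExpressions;
      // Probabilities are keyed by the names in FACE_EXPRESSION_LABELS.
      console.log(expressions);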

resizeResults

  • resizeResults<T>(results: T, dimensions: IDimensions): T
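
    Detection results are relative to the processed input dimensions; resizeResults rescales them to a display surface. A minimal overlay sketch (video and canvas are hypothetical elements; the draw helpers follow the upstream face-api.js API):

      const displaySize = { width: video.videoWidth, height: video.videoHeight };
      faceapi.matchDimensions(canvas, displaySize);
      const detections = await faceapi.detectAllFaces(video);
      const resized = faceapi.resizeResults(detections, displaySize);
      faceapi.draw.drawDetections(canvas, resized);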

resolveInput

  • resolveInput(arg: any): any

shuffleArray

  • shuffleArray(inputArray: any[]): any[]

sigmoid

  • sigmoid(x: number): number

Const ssdMobilenetv1

  • Attempts to detect all faces in an image using the SSD Mobilenetv1 network.

    Parameters

    • input: any

      The input image.

    • options: SsdMobilenetv1Options

      (optional, default: see SsdMobilenetv1Options constructor for default parameters).

    Returns Promise<FaceDetection[]>

    Bounding box of each face with score.
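
    A minimal sketch; minConfidence is illustrative (see SsdMobilenetv1Options for the actual defaults):

      const detections = await faceapi.ssdMobilenetv1(
        image,
        new faceapi.SsdMobilenetv1Options({ minConfidence: 0.5 }),
      );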

Const tinyFaceDetector

  • Attempts to detect all faces in an image using the Tiny Face Detector.

    Parameters

    • input: any

      The input image.

    • options: TinyFaceDetectorOptions

      (optional, default: see TinyFaceDetectorOptions constructor for default parameters).

    Returns Promise<FaceDetection[]>

    Bounding box of each face with score.
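
    A minimal sketch; inputSize and scoreThreshold are illustrative (see TinyFaceDetectorOptions for the actual defaults):

      const detections = await faceapi.tinyFaceDetector(
        image,
        new faceapi.TinyFaceDetectorOptions({ inputSize: 416, scoreThreshold: 0.5 }),
      );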

Const tinyYolov2

  • Attempts to detect all faces in an image using the Tiny Yolov2 network.

    Parameters

    • input: any

      The input image.

    • options: ITinyYolov2Options

      (optional, default: see TinyYolov2Options constructor for default parameters).

    Returns Promise<FaceDetection[]>

    Bounding box of each face with score.

toNetInput

  • toNetInput(inputs: any): Promise<NetInput>
  • Validates the inputs to make sure they are valid net inputs and awaits all media elements to finish loading.

    Parameters

    • inputs: any

    Returns Promise<NetInput>

    A NetInput instance, which can be passed into one of the neural networks.
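
    A minimal sketch: several media elements resolve into a single batched NetInput (element names are hypothetical):

      const netInput = await faceapi.toNetInput([imageA, imageB]);
      // Batch input yields an array of descriptors, one per element.
      const descriptors = await faceapi.computeFaceDescriptor(netInput) as Float32Array[];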

validateConfig

  • validateConfig(config: any): void