
@vladmandic/face-api - v1.6.5


Type aliases

AgeAndGenderPrediction: { age: number; gender: Gender; genderProbability: number }

Type declaration

  • age: number
  • gender: Gender
  • genderProbability: number
BatchNorm: { sub: tf.Tensor1D; truediv: tf.Tensor1D }

Type declaration

  • sub: tf.Tensor1D
  • truediv: tf.Tensor1D
ConvWithBatchNorm: { bn: BatchNorm; conv: ConvParams }

Type declaration

  • bn: BatchNorm
  • conv: ConvParams
DefaultTinyYolov2NetParams: { conv0: ConvWithBatchNorm; conv1: ConvWithBatchNorm; conv2: ConvWithBatchNorm; conv3: ConvWithBatchNorm; conv4: ConvWithBatchNorm; conv5: ConvWithBatchNorm; conv6: ConvWithBatchNorm; conv7: ConvWithBatchNorm; conv8: ConvParams }
Environment: FileSystem & { Canvas: typeof HTMLCanvasElement; CanvasRenderingContext2D: typeof CanvasRenderingContext2D; Image: typeof HTMLImageElement; ImageData: typeof ImageData; Video: typeof HTMLVideoElement; createCanvasElement: any; createImageElement: any; createVideoElement: any; fetch: any }
FaceDetectionFunction: (input: TNetInput) => Promise<FaceDetection[]>

FileSystem: { readFile: any }

Type declaration

  • readFile: function
    • readFile(filePath: string): Promise<any>
    • Parameters

      • filePath: string

      Returns Promise<any>

ITinyFaceDetectorOptions: ITinyYolov2Options
MobilenetParams: { conv0: SeparableConvParams | ConvParams; conv1: SeparableConvParams; conv2: SeparableConvParams; conv3: SeparableConvParams; conv4: SeparableConvParams; conv5: SeparableConvParams; conv6?: SeparableConvParams; conv7?: SeparableConvParams; conv8: ConvParams }

Type declaration

  • conv0: SeparableConvParams | ConvParams
  • conv1: SeparableConvParams
  • conv2: SeparableConvParams
  • conv3: SeparableConvParams
  • conv4: SeparableConvParams
  • conv5: SeparableConvParams
  • Optional conv6?: SeparableConvParams
  • Optional conv7?: SeparableConvParams
  • conv8: ConvParams
NetOutput: { age: tf.Tensor1D; gender: tf.Tensor2D }

Type declaration

  • age: tf.Tensor1D
  • gender: tf.Tensor2D
NetParams: { fc: { age: FCParams; gender: FCParams } }

Type declaration

  • fc: { age: FCParams; gender: FCParams }
    • age: FCParams
    • gender: FCParams
TMediaElement: HTMLImageElement | HTMLVideoElement | HTMLCanvasElement
TNetInput: TNetInputArg | TNetInputArg[] | NetInput | tf.Tensor4D
TNetInputArg: string | TResolvedNetInput
TResolvedNetInput: TMediaElement | tf.Tensor3D | tf.Tensor4D
TinyYolov2Config: { anchors: Point[]; classes: string[]; filterSizes?: number[]; iouThreshold: number; isFirstLayerConv2d?: boolean; meanRgb?: [number, number, number]; withClassScores?: boolean; withSeparableConvs: boolean }

Type declaration

  • anchors: Point[]
  • classes: string[]
  • Optional filterSizes?: number[]
  • iouThreshold: number
  • Optional isFirstLayerConv2d?: boolean
  • Optional meanRgb?: [number, number, number]
  • Optional withClassScores?: boolean
  • withSeparableConvs: boolean
WithAge<TSource>: TSource & { age: number }

Type parameters

  • TSource

WithFaceDescriptor<TSource>: TSource & { descriptor: Float32Array }

Type parameters

  • TSource

WithFaceDetection<TSource>: TSource & { detection: FaceDetection }

Type parameters

  • TSource

WithFaceExpressions<TSource>: TSource & { expressions: FaceExpressions }

Type parameters

  • TSource

WithFaceLandmarks<TSource, TFaceLandmarks>: TSource & { alignedRect: FaceDetection; angle: { pitch: number | undefined; roll: number | undefined; yaw: number | undefined }; landmarks: TFaceLandmarks; unshiftedLandmarks: TFaceLandmarks }

Type parameters

  • TSource
  • TFaceLandmarks

WithGender<TSource>: TSource & { gender: Gender; genderProbability: number }

Type parameters

  • TSource

Properties

tf: any

Variables

FACE_EXPRESSION_LABELS: string[] = ...
env: { createBrowserEnv: () => Environment; createFileSystem: (fs?: any) => FileSystem; createNodejsEnv: () => Environment; getEnv: () => Environment; initialize: () => null | void; isBrowser: () => boolean; isNodejs: () => boolean; monkeyPatch: (env: Partial<Environment>) => void; setEnv: (env: Environment) => void } = ...

nets: { ageGenderNet: AgeGenderNet; faceExpressionNet: FaceExpressionNet; faceLandmark68Net: FaceLandmark68Net; faceLandmark68TinyNet: FaceLandmark68TinyNet; faceRecognitionNet: FaceRecognitionNet; ssdMobilenetv1: SsdMobilenetv1; tinyFaceDetector: TinyFaceDetector; tinyYolov2: TinyYolov2 } = ...

version: string = ...

Functions

  • awaitMediaLoaded(media: HTMLCanvasElement | HTMLImageElement | HTMLVideoElement): Promise<unknown>
  • Parameters

    • media: HTMLCanvasElement | HTMLImageElement | HTMLVideoElement

    Returns Promise<unknown>

  • bufferToImage(buf: Blob): Promise<HTMLImageElement>
  • computeFaceDescriptor(input: any): Promise<Float32Array | Float32Array[]>
  • Computes a 128-entry vector (face descriptor / face embedding) from the face shown in an image, which uniquely represents the features of that person's face. The computed face descriptor can be used to measure the similarity between faces by computing the Euclidean distance of two face descriptors.

    Parameters

    • input: any

    Returns Promise<Float32Array | Float32Array[]>

    Face descriptor with 128 entries or array thereof in case of batch input.
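
A minimal sketch of matching two faces with computeFaceDescriptor and euclideanDistance (documented below); the model path and the 0.6 distance threshold are assumptions, not values mandated by the API:

```ts
import * as faceapi from '@vladmandic/face-api';

// Assumes the face recognition model has already been loaded,
// e.g. via loadFaceRecognitionModel('/models') ('/models' is a placeholder).
async function isSameFace(imgA: HTMLImageElement, imgB: HTMLImageElement): Promise<boolean> {
  const descA = (await faceapi.computeFaceDescriptor(imgA)) as Float32Array;
  const descB = (await faceapi.computeFaceDescriptor(imgB)) as Float32Array;
  // Smaller Euclidean distance means more similar faces;
  // 0.6 is a commonly used threshold, tune it for your data.
  return faceapi.euclideanDistance(descA, descB) < 0.6;
}
```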

  • createCanvas(__namedParameters: IDimensions): HTMLCanvasElement
  • createCanvasFromMedia(media: HTMLImageElement | HTMLVideoElement | ImageData, dims?: IDimensions): HTMLCanvasElement
  • createTinyYolov2(weights: Float32Array, withSeparableConvs?: boolean): TinyYolov2
  • detectFaceLandmarksTiny(input: any): Promise<FaceLandmarks68 | FaceLandmarks68[]>
  • Detects the 68 point face landmark positions of the face shown in an image using a tinier version of the 68 point face landmark model, which is slightly faster at inference but also slightly less accurate.

    Parameters

    • input: any

    Returns Promise<FaceLandmarks68 | FaceLandmarks68[]>

    68 point face landmarks or array thereof in case of batch input.
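
For illustration, a hedged sketch of running the tiny landmark model on a single image ('/models' is a placeholder path):

```ts
import * as faceapi from '@vladmandic/face-api';

async function getLandmarks(img: HTMLImageElement) {
  await faceapi.loadFaceLandmarkTinyModel('/models'); // placeholder URL
  // For a single (non-batch) input the result is a single FaceLandmarks68.
  return faceapi.detectFaceLandmarksTiny(img);
}
```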

  • euclideanDistance(arr1: number[] | Float32Array, arr2: number[] | Float32Array): number
  • Parameters

    • arr1: number[] | Float32Array
    • arr2: number[] | Float32Array

    Returns number

  • extendWithAge<TSource>(sourceObj: TSource, age: number): WithAge<TSource>
  • extendWithFaceDescriptor<TSource>(sourceObj: TSource, descriptor: Float32Array): WithFaceDescriptor<TSource>
  • extendWithFaceLandmarks<TSource, TFaceLandmarks>(sourceObj: TSource, unshiftedLandmarks: TFaceLandmarks): WithFaceLandmarks<TSource, TFaceLandmarks>
  • extendWithGender<TSource>(sourceObj: TSource, gender: Gender, genderProbability: number): WithGender<TSource>
  • extractFaceTensors(imageTensor: any, detections: (FaceDetection | Rect)[]): Promise<tf.Tensor3D[]>
  • Extracts the tensors of the image regions containing the detected faces. Useful if you want to compute the face descriptors for the face images. Using this method is faster than extracting a canvas for each face and converting them to tensors individually.

    Parameters

    • imageTensor: any

      The image tensor that face detection has been performed on.

    • detections: (FaceDetection | Rect)[]

      The face detection results or face bounding boxes for that image.

    Returns Promise<tf.Tensor3D[]>

    Tensors of the corresponding image region for each detected face.
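
A sketch of combining extractFaceTensors with computeFaceDescriptor; the detection step is assumed to have happened elsewhere, and the explicit dispose() calls reflect that tfjs tensors must be freed manually:

```ts
import * as faceapi from '@vladmandic/face-api';

async function describeFaces(imageTensor: any, detections: faceapi.FaceDetection[]) {
  const faceTensors = await faceapi.extractFaceTensors(imageTensor, detections);
  const descriptors: (Float32Array | Float32Array[])[] = [];
  for (const t of faceTensors) {
    descriptors.push(await faceapi.computeFaceDescriptor(t));
    t.dispose(); // tensors are not garbage collected; free them explicitly
  }
  return descriptors;
}
```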

  • extractFaces(input: any, detections: (FaceDetection | Rect)[]): Promise<HTMLCanvasElement[]>
  • Extracts the image regions containing the detected faces.

    Parameters

    • input: any

      The image that face detection has been performed on.

    • detections: (FaceDetection | Rect)[]

      The face detection results or face bounding boxes for that image.

    Returns Promise<HTMLCanvasElement[]>

    The Canvases of the corresponding image region for each detected face.
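
The canvas-based variant, sketched below, crops every detected face out of an image; the detector choice and its default options are assumptions:

```ts
import * as faceapi from '@vladmandic/face-api';

// Assumes the SSD Mobilenetv1 model is already loaded.
async function cropFaces(img: HTMLImageElement): Promise<HTMLCanvasElement[]> {
  const detections = await faceapi.ssdMobilenetv1(img, new faceapi.SsdMobilenetv1Options());
  return faceapi.extractFaces(img, detections); // one canvas per detected face
}
```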

  • fetchImage(uri: string): Promise<HTMLImageElement>
  • fetchJson<T>(uri: string): Promise<T>
  • fetchNetWeights(uri: string): Promise<Float32Array>
  • fetchOrThrow(url: string, init?: RequestInit): Promise<Response>
  • fetchVideo(uri: string): Promise<HTMLVideoElement>
  • getContext2dOrThrow(canvasArg: string | CanvasRenderingContext2D | HTMLCanvasElement): CanvasRenderingContext2D
  • Parameters

    • canvasArg: string | CanvasRenderingContext2D | HTMLCanvasElement

    Returns CanvasRenderingContext2D

  • getMediaDimensions(input: HTMLCanvasElement | IDimensions | HTMLImageElement | HTMLVideoElement): Dimensions
  • imageTensorToCanvas(imgTensor: Tensor, canvas?: HTMLCanvasElement): Promise<HTMLCanvasElement>
  • imageToSquare(input: HTMLCanvasElement | HTMLImageElement, inputSize: number, centerImage?: boolean): HTMLCanvasElement
  • Parameters

    • input: HTMLCanvasElement | HTMLImageElement
    • inputSize: number
    • centerImage: boolean = false

    Returns HTMLCanvasElement

  • inverseSigmoid(x: number): number
  • iou(box1: Box<any>, box2: Box<any>, isIOU?: boolean): number
  • Parameters

    • box1: Box<any>
    • box2: Box<any>
    • isIOU: boolean = true

    Returns number

  • isMediaElement(input: any): boolean
  • isMediaLoaded(media: HTMLImageElement | HTMLVideoElement): boolean
  • isWithAge(obj: any): obj is { age: number }
  • isWithFaceDetection(obj: any): obj is { detection: FaceDetection }
  • isWithGender(obj: any): obj is { gender: Gender; genderProbability: number }
  • loadAgeGenderModel(url: string): Promise<void>
  • loadFaceDetectionModel(url: string): Promise<void>
  • loadFaceExpressionModel(url: string): Promise<void>
  • loadFaceLandmarkModel(url: string): Promise<void>
  • loadFaceLandmarkTinyModel(url: string): Promise<void>
  • loadFaceRecognitionModel(url: string): Promise<void>
  • loadSsdMobilenetv1Model(url: string): Promise<void>
  • loadTinyFaceDetectorModel(url: string): Promise<void>
  • loadTinyYolov2Model(url: string): Promise<void>
  • loadWeightMap(uri: undefined | string, defaultModelName: string): Promise<tf.NamedTensorMap>
  • Parameters

    • uri: undefined | string
    • defaultModelName: string

    Returns Promise<tf.NamedTensorMap>
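
The load* helpers above each fetch the weights of one model from a given URL. A typical browser setup might look like the following sketch ('/models' is a placeholder for wherever the model files are hosted):

```ts
import * as faceapi from '@vladmandic/face-api';

const MODEL_URL = '/models'; // placeholder

// Load only the models the application actually needs.
async function setupModels(): Promise<void> {
  await Promise.all([
    faceapi.loadSsdMobilenetv1Model(MODEL_URL),
    faceapi.loadFaceLandmarkModel(MODEL_URL),
    faceapi.loadFaceRecognitionModel(MODEL_URL),
  ]);
}
```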

  • matchDimensions(input: IDimensions, reference: IDimensions, useMediaDimensions?: boolean): { height: number; width: number }
  • nonMaxSuppression(boxes: Box<any>[], scores: number[], iouThreshold: number, isIOU?: boolean): number[]
  • normalize(x: Tensor4D, meanRgb: number[]): tf.Tensor4D
  • padToSquare(imgTensor: Tensor4D, isCenterImage?: boolean): tf.Tensor4D
  • Pads the smaller dimension of an image tensor with zeros, such that width === height.

    Parameters

    • imgTensor: Tensor4D

      The image tensor.

    • isCenterImage: boolean = false

      (optional, default: false) If true, an equal amount of padding is added on both sides of the minor dimension of the image.

    Returns tf.Tensor4D

    The padded tensor with width === height.
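
A small sketch of what padToSquare does to the tensor shape, using the bundled tf instance exposed on the tf property above; the dummy zero image is purely illustrative:

```ts
import * as faceapi from '@vladmandic/face-api';

// One dummy 100x200 RGB image (all zeros), shape [batch, height, width, channels].
const img = faceapi.tf.zeros([1, 100, 200, 3]);
// Pads the smaller dimension (here: height) with zeros; with
// isCenterImage=true the padding is split evenly on both sides.
const squared = faceapi.padToSquare(img, true);
console.log(squared.shape); // expected: [1, 200, 200, 3]
img.dispose(); squared.dispose();
```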

  • recognizeFaceExpressions(input: any): Promise<FaceExpressions | FaceExpressions[]>
  • Recognizes the facial expressions from a face image.

    Parameters

    • input: any

    Returns Promise<FaceExpressions | FaceExpressions[]>

    Facial expressions with corresponding probabilities or array thereof in case of batch input.
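
A hedged usage sketch; the model path is a placeholder, and the cast assumes a single (non-batch) input:

```ts
import * as faceapi from '@vladmandic/face-api';

async function getExpressions(faceImg: HTMLCanvasElement) {
  await faceapi.loadFaceExpressionModel('/models'); // placeholder URL
  const result = (await faceapi.recognizeFaceExpressions(faceImg)) as faceapi.FaceExpressions;
  return result; // maps each expression label to a probability
}
```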

  • resizeResults<T>(results: T, dimensions: IDimensions): T
  • resolveInput(arg: any): any
  • shuffleArray(inputArray: any[]): any[]
  • sigmoid(x: number): number
  • ssdMobilenetv1(input: any, options: SsdMobilenetv1Options): Promise<FaceDetection[]>
  • Attempts to detect all faces in an image using the SSD Mobilenetv1 network.

    Parameters

    • input: any

      The input image.

    • options: SsdMobilenetv1Options

      (optional, default: see SsdMobilenetv1Options constructor for default parameters).

    Returns Promise<FaceDetection[]>

    Bounding box of each face with score.
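
For illustration, a sketch of a full detection call; the minConfidence value is illustrative, not a documented default:

```ts
import * as faceapi from '@vladmandic/face-api';

async function detectFaces(img: HTMLImageElement) {
  await faceapi.loadSsdMobilenetv1Model('/models'); // placeholder URL
  const options = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.5 });
  const detections = await faceapi.ssdMobilenetv1(img, options);
  for (const det of detections) {
    console.log(det.score, det.box); // confidence score and bounding box
  }
}
```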

  • tinyFaceDetector(input: any, options: TinyFaceDetectorOptions): Promise<FaceDetection[]>
  • Attempts to detect all faces in an image using the Tiny Face Detector.

    Parameters

    • input: any

      The input image.

    • options: TinyFaceDetectorOptions

      (optional, default: see TinyFaceDetectorOptions constructor for default parameters).

    Returns Promise<FaceDetection[]>

    Bounding box of each face with score.
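
The same pattern with the Tiny Face Detector, sketched below; inputSize and scoreThreshold are illustrative values (inputSize is commonly a multiple of 32):

```ts
import * as faceapi from '@vladmandic/face-api';

async function detectFacesTiny(img: HTMLImageElement) {
  await faceapi.loadTinyFaceDetectorModel('/models'); // placeholder URL
  const options = new faceapi.TinyFaceDetectorOptions({ inputSize: 416, scoreThreshold: 0.5 });
  return faceapi.tinyFaceDetector(img, options);
}
```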

  • tinyYolov2(input: any, options: ITinyYolov2Options): Promise<FaceDetection[]>
  • Attempts to detect all faces in an image using the Tiny Yolov2 network.

    Parameters

    • input: any

      The input image.

    • options: ITinyYolov2Options

      (optional, default: see TinyYolov2Options constructor for default parameters).

    Returns Promise<FaceDetection[]>

    Bounding box of each face with score.

  • toNetInput(inputs: any): Promise<NetInput>
  • Validates the inputs to make sure they are valid net inputs, and awaits all media elements to finish loading.

    Parameters

    • inputs: any

    Returns Promise<NetInput>

    A NetInput instance, which can be passed into one of the neural networks.
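
A brief sketch of normalizing heterogeneous inputs through toNetInput before forwarding them to a network; the media element here is an assumption:

```ts
import * as faceapi from '@vladmandic/face-api';

async function toInput(video: HTMLVideoElement) {
  // Waits for the media element to finish loading and wraps it
  // in a NetInput that any of the networks above can consume.
  const netInput = await faceapi.toNetInput(video);
  return netInput;
}
```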

  • validateConfig(config: any): void