Live Demo

MaskAnyone in Action

See how SYNAPSIS transforms identifiable audiovisual data into privacy-preserving research material while maintaining analytical value.

Before & After

Face Masking Demonstration

Compare original footage with de-identified output

Sample Recording - Face Masking
Method: Face Masking | Expression preserved

Presentation Recording - Speaker Masking
Method: Face Masking | Multi-person detection

Research Recording - Participant Masking
Method: Face Masking | Expression preserved

Methods

Available Masking Techniques

Pixelation

Block-based face obscuring. Fast, simple, widely understood.

Privacy: High

Blur

Gaussian blur over facial regions. Adjustable intensity.

Privacy: Medium-High

Face Masking

Replaces the face with a synthetic one, preserving expressions and gaze.

Privacy: High | Utility: High

Skeleton Only

Extracts pose data and renders only a skeleton visualization.

Privacy: Maximum

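As a rough illustration of the pixelation technique above, here is a minimal pure-NumPy sketch that averages a face region in fixed-size blocks. The face box is a hypothetical input; a production pipeline would obtain it from a face detector and would typically use OpenCV for speed.

```python
import numpy as np

def pixelate(frame, box, block=8):
    """Obscure the region box = (x, y, w, h) by averaging block x block tiles."""
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w].astype(float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = roi[by:by + block, bx:bx + block]
            # Replace every pixel in the tile with the tile's mean colour
            tile[...] = tile.mean(axis=(0, 1))
    frame[y:y + h, x:x + w] = roi.astype(frame.dtype)
    return frame

# Example: an 8x8 horizontal gradient collapses to one flat block
frame = np.tile(np.arange(8, dtype=np.uint8).reshape(1, 8, 1), (8, 1, 3))
masked = pixelate(frame.copy(), (0, 0, 8, 8), block=8)
```

The blur method works analogously, substituting a Gaussian filter over the same region for the block averaging.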
Data Extraction

Pose & Kinematic Data

Beyond masking, SYNAPSIS extracts skeletal pose data from videos, enabling gesture analysis, movement studies, and behavioral research without exposing participant identity.

  • 33 body keypoints (MediaPipe) or 25 (OpenPose)
  • Hand landmarks (21 points per hand)
  • Export to JSON, CSV for analysis in R/Python
  • Blendshape extraction for facial expression analysis
{
  "frame": 42,
  "timestamp": 1.4,
  "pose": {
    "nose": [0.52, 0.31, 0.98],
    "left_shoulder": [0.61, 0.48, 0.95],
    "right_shoulder": [0.43, 0.47, 0.96],
    "left_wrist": [0.71, 0.62, 0.89],
    "right_wrist": [0.33, 0.58, 0.91]
    // ... 33 keypoints total
  },
  "hands": {
    "left": [...],
    "right": [...]
  }
}
Sample pose data output (JSON)
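The exported JSON can be consumed directly with the Python standard library. The sketch below loads a trimmed copy of the sample frame above and derives a simple kinematic measure; reading each keypoint as [x, y, confidence] in normalized image coordinates is an assumption about the export schema.

```python
import json
import math

# A trimmed copy of the sample frame shown above
frame_json = """
{
  "frame": 42,
  "timestamp": 1.4,
  "pose": {
    "left_shoulder": [0.61, 0.48, 0.95],
    "right_shoulder": [0.43, 0.47, 0.96]
  }
}
"""

frame = json.loads(frame_json)
ls = frame["pose"]["left_shoulder"]
rs = frame["pose"]["right_shoulder"]

# Keypoints are read here as [x, y, confidence] in normalized image
# coordinates (an assumption about the export format)
shoulder_width = math.hypot(ls[0] - rs[0], ls[1] - rs[1])
```

The same pattern extends to per-frame time series, which can be written out as CSV for analysis in R or Python.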

Ready to Try?

Contact us to schedule a live demo or request access to the pilot platform.