3D Human Pose Tracking Priors using Geodesic Mixture Models

Edgar Simo-Serra*, Carme Torras, Francesc Moreno-Noguer

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)


We present a novel approach for learning a finite mixture model on a Riemannian manifold, where Euclidean metrics are not applicable and one must resort to geodesic distances consistent with the manifold geometry. For this purpose, we draw inspiration from a variant of the expectation-maximization algorithm that uses a minimum message length criterion to automatically estimate the optimal number of components from multivariate data lying in a Euclidean space. To apply this approach to Riemannian manifolds, we propose a formulation in which each component is defined on its own tangent space, thus avoiding the loss of accuracy incurred when linearizing the manifold with a single tangent space. Our approach can be applied to any type of manifold for which the tangent space can be estimated. Additionally, we use shrinkage covariance estimation to improve the robustness of the method, especially when dealing with very sparsely distributed samples. We evaluate the approach in a range of settings, from data clustering on manifolds to combining pose and kinematics of articulated bodies for 3D human pose tracking. In all cases, we demonstrate remarkable improvement over several baselines.
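Two ingredients of the abstract can be illustrated in a few lines: linearizing a manifold around a base point via the logarithm map, and shrinkage covariance estimation in that tangent space. The sketch below (a minimal illustration, assuming the unit sphere S² as the manifold, a fixed shrinkage weight, and synthetic samples; none of these choices come from the paper) maps points on the sphere into the tangent space at a base point and fits a shrunken covariance there:

```python
import numpy as np

def sphere_log(p, x):
    """Log map on the unit sphere: project point x into the tangent space at p."""
    c = np.clip(x @ p, -1.0, 1.0)
    theta = np.arccos(c)            # geodesic distance from p to x
    v = x - c * p                   # component of x orthogonal to p
    n = np.linalg.norm(v)
    if n < 1e-12:                   # x coincides with p
        return np.zeros_like(p)
    return theta * v / n            # tangent vector with length theta

def shrinkage_cov(T, lam=0.1):
    """Shrink the sample covariance toward a scaled identity (lam is assumed fixed)."""
    S = np.cov(T, rowvar=False)
    d = S.shape[0]
    return (1.0 - lam) * S + lam * (np.trace(S) / d) * np.eye(d)

rng = np.random.default_rng(0)
p = np.array([0.0, 0.0, 1.0])                   # tangent-space base point
# synthetic samples clustered near the pole, renormalized onto the sphere
X = rng.normal([0.0, 0.0, 1.0], 0.1, size=(50, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)

T = np.array([sphere_log(p, x) for x in X])     # samples in the tangent space at p
Sigma = shrinkage_cov(T)
```

Note that the tangent vectors lie in a 2D plane embedded in 3D, so the raw sample covariance is rank-deficient; the shrinkage term makes it full rank, which hints at why shrinkage helps with sparsely distributed samples.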

Original language: English
Pages (from-to): 388-408
Number of pages: 21
Journal: International Journal of Computer Vision
Issue number: 2
Publication status: Published - 2017 Apr 1


Keywords
  • 3D human pose
  • Human kinematics
  • Mixture modelling
  • Probabilistic priors
  • Riemannian manifolds

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
