Jimin Pi, Ph.D.

Hong Kong University of Science and Technology

Email: jpi (at) connect.ust.hk

LinkedIn      Google Scholar


Gaze-based Text Input System with Dynamic Bayesian Adjustment

  • From Mar 2016 to present.
  • Implemented a gaze-based text input interface.
  • Proposed a probabilistic generative model for gaze.
  • Used Bayesian inference to integrate the gaze cue with contextual knowledge.
  • Achieved a 50% increase in typing speed compared to previous techniques.
  • Tested with spinal cord injury patients in Hong Kong hospitals.

  • Publications:

  • Dynamic Bayesian Adjustment of Dwell Time for Faster Gaze Typing. Jimin Pi, Paul A. Koljonen, Yong Hu and Bertram E. Shi. Submitted to IEEE Transactions on Neural Systems and Rehabilitation Engineering.

  • Probabilistic Adjustment of Dwell Time for Eye Typing. Jimin Pi and Bertram E. Shi. IEEE International Conference on Human System Interaction (HSI), 2017. (Best Oral Presentation Award)   PDF
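The Bayesian integration above can be sketched as follows. This is a minimal illustration with made-up numbers; the function names, scaling rule, and constants are hypothetical and simplified, not the model from the papers:

```python
import numpy as np

# Hypothetical sketch: fuse a gaze likelihood over candidate keys with a
# language-model prior via Bayes' rule, then shorten the dwell threshold
# for keys the posterior strongly favors. Numbers are illustrative only.
def posterior(gaze_likelihood, lm_prior):
    """P(key | gaze, context) ∝ P(gaze | key) * P(key | context)."""
    unnorm = gaze_likelihood * lm_prior
    return unnorm / unnorm.sum()

def adjusted_dwell(base_dwell_ms, post, min_dwell_ms=200.0):
    """Scale the dwell time down as the posterior concentrates on a key."""
    return np.maximum(min_dwell_ms, base_dwell_ms * (1.0 - post))

likelihood = np.array([0.6, 0.3, 0.1])  # e.g. from fixation distance to each key
prior = np.array([0.5, 0.1, 0.4])       # e.g. from a language model, given context
post = posterior(likelihood, prior)
dwell = adjusted_dwell(500.0, post)     # per-key dwell thresholds in ms
```

The key the user is likely targeting (high likelihood and high prior) receives a shorter dwell threshold, which is the mechanism behind the typing-speed gain.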

SLAM-based 3D Gaze Estimation Using Eye Glasses

  • From Jul 2017 to Apr 2018.
  • Estimated the user's pose using a SLAM algorithm.
  • Localized 3D gaze based on the 3D environment context.
  • Achieved an accuracy of 3 degrees over a region larger than 2 m × 2 m.

  • Publication:

  • SLAM-based Localization of 3D Gaze Using A Mobile Eye Tracker. Haofei Wang*, Jimin Pi*, Tong Qin, Shaojie Shen and Bertram E. Shi. ACM Symposium on Eye Tracking Research & Applications (ETRA), 2018. (*Equal contribution)   PDF
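The geometric core of localizing 3D gaze in a mapped environment can be sketched as a ray intersection. This is an assumed simplification (a single planar surface and a hypothetical function name), not the paper's full pipeline:

```python
import numpy as np

# Illustrative geometry only: given the eye position from SLAM and a gaze
# direction in world coordinates, locate the 3D point of regard by
# intersecting the gaze ray with a known planar surface in the map.
def gaze_point_on_plane(origin, direction, plane_point, plane_normal):
    """Intersect the ray origin + t * direction (t >= 0) with a plane."""
    denom = direction @ plane_normal
    if abs(denom) < 1e-9:
        return None  # gaze ray is parallel to the surface
    t = ((plane_point - origin) @ plane_normal) / denom
    return origin + t * direction if t >= 0 else None

p = gaze_point_on_plane(np.array([0.0, 1.5, 0.0]),   # eye position from SLAM
                        np.array([0.0, 0.0, 1.0]),   # unit gaze direction
                        np.array([0.0, 0.0, 2.0]),   # a point on a wall plane
                        np.array([0.0, 0.0, 1.0]))   # wall normal
```

In practice the intersection would be taken against the reconstructed environment geometry rather than a single hand-specified plane.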

Data-driven Online Calibration for Remote Eye Trackers

  • From Aug 2018 to Mar 2019.
  • Investigated eye-tracking degradation through an empirical study.
  • Proposed a calibration mechanism based on natural human-computer interaction history.
  • Solved the calibration problem using an efficient convex optimization scheme.
  • Reduced errors by up to 43% as the head moves over a 20 cm range.

  • Publication:

  • Task-embedded online eye-tracker calibration for improving robustness to head motion. Jimin Pi and Bertram E. Shi. ACM Symposium on Eye Tracking Research & Applications (ETRA), 2019. (Oral)   PDF
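One convex formulation of this kind of calibration is an affine least-squares fit from raw gaze estimates to implicit targets gathered during natural interaction (e.g. click locations). This sketch is an assumed simplification for illustration; the paper's actual objective may differ:

```python
import numpy as np

# Hypothetical sketch: fit an affine correction g -> A g + b mapping raw
# gaze estimates onto implicit calibration targets. The least-squares
# problem is convex and solvable in closed form.
def fit_affine(raw_gaze, targets):
    """Minimize ||[raw_gaze, 1] W - targets||^2 over W (3x2: A rows + b)."""
    X = np.hstack([raw_gaze, np.ones((len(raw_gaze), 1))])
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return W

def apply_correction(W, raw_gaze):
    X = np.hstack([raw_gaze, np.ones((len(raw_gaze), 1))])
    return X @ W

# Synthetic check: raw gaze drifts from the true targets by a known affine.
rng = np.random.default_rng(0)
targets = rng.uniform(0, 1000, size=(50, 2))  # e.g. on-screen click positions
raw = targets @ np.array([[1.05, 0.0], [0.02, 0.98]]) + np.array([12.0, -7.0])
W = fit_affine(raw, targets)
corrected = apply_correction(W, raw)
```

Because new (raw gaze, target) pairs arrive continuously during interaction, the fit can be refreshed online as the head moves.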

Unsupervised Outlier Detection in Appearance-Based Gaze Estimation

  • From Jun 2019 to Aug 2019.
  • Detected outlier inputs in an unsupervised manner during training of the gaze estimator.
  • Reduced error by 8% compared to a model without outlier detection.
  • Detected outliers with a precision of 0.71 at a recall of 0.63.

  • Publication:
    Unsupervised Outlier Detection in Appearance-Based Gaze Estimation.
    Zhaokang Chen*, Didan Deng*, Jimin Pi*, and Bertram E. Shi. (*Equal contribution)
    ICCV 2019 Workshop and Challenge on Real-World Recognition from Low-Quality Images and Videos.
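One simple unsupervised strategy in this spirit (assumed here for illustration, not necessarily the paper's method) is to flag training samples whose per-sample loss is far above the batch median and drop them from the gradient update:

```python
import numpy as np

# Hypothetical sketch: mark samples whose loss exceeds the batch median by
# more than k scaled MADs as outliers; only inliers contribute to training.
def inlier_mask(per_sample_loss, k=3.0):
    """Robust threshold: median + k * 1.4826 * MAD (MAD scaled to sigma)."""
    med = np.median(per_sample_loss)
    mad = np.median(np.abs(per_sample_loss - med)) + 1e-12
    return per_sample_loss <= med + k * 1.4826 * mad

losses = np.array([0.10, 0.12, 0.09, 0.11, 2.50])  # last sample is corrupted
mask = inlier_mask(losses)
```

Because the threshold is computed from the loss distribution itself, no outlier labels are required, matching the unsupervised setting.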

Pose-independent Facial Expression Recognition

  • From Dec 2016 to Feb 2017.
  • Won the Action Unit Intensity Estimation sub-challenge of the Facial Expression Recognition and Analysis Challenge (FERA 2017).
  • Proposed a multi-task deep network addressing different pose angles.
  • Achieved balanced performance among nine pose angles for most AUs.

  • Publication:
    Pose-independent Facial Action Unit Intensity Regression Based on Multi-task Deep Transfer Learning.
    Yuqian Zhou, Jimin Pi, and Bertram E. Shi.
    IEEE International Conference on Automatic Face & Gesture Recognition (FG), 2017.   PDF
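The multi-task structure can be sketched as a shared trunk feeding one regression head per pose bin, with each face routed to the head matching its estimated pose. This toy sketch uses random weights and hypothetical names purely to show the routing idea, not the trained network:

```python
import numpy as np

# Toy sketch of the multi-task idea: a shared feature extractor plus one
# AU-intensity regression head per pose angle (nine bins, as in FERA 2017).
rng = np.random.default_rng(1)
n_poses, feat_dim, n_aus = 9, 16, 7

shared_W = rng.normal(size=(feat_dim, feat_dim))     # shared trunk (toy linear layer)
heads = rng.normal(size=(n_poses, feat_dim, n_aus))  # one regression head per pose

def predict_au_intensity(x, pose_idx):
    """Shared representation, then the pose-specific regression head."""
    h = np.tanh(x @ shared_W)      # shared features across all poses
    return h @ heads[pose_idx]     # AU intensities for this pose bin

x = rng.normal(size=(feat_dim,))           # toy face feature vector
y = predict_au_intensity(x, pose_idx=3)    # one intensity per action unit
```

Sharing the trunk lets sparsely represented poses borrow statistical strength from the others, which is one way to obtain balanced performance across pose angles.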

    Last updated: October 2019