
Learning meaningful representations of chirographic drawing data such as sketches, handwriting, and flowcharts is a gateway to understanding and emulating human creative expression. Although such data are inherently continuous-time, existing works treat them as discrete-time sequences, disregarding their true nature. In this work, we model such data as continuous-time functions and learn compact representations using Neural Ordinary Differential Equations. To this end, we introduce the first continuous-time Seq2Seq model and demonstrate some remarkable properties that set it apart from traditional discrete-time analogues. We also provide solutions to some practical challenges of such models, including a family of parameterized ODE dynamics and a continuous-time data augmentation scheme particularly suited to the task. Our models are validated on several datasets, including VectorMNIST, DiDi, and Quick, Draw!.
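To make the core idea concrete, here is a minimal, illustrative sketch (not the paper's actual model): a pen trajectory is represented as the solution of an ODE dz/dt = f(z, t), so it can be sampled at arbitrary time resolution. A hand-coded linear vector field stands in for a learned neural network, and a simple explicit Euler integrator stands in for an adaptive ODE solver; all names here are hypothetical.

```python
import numpy as np

def euler_odeint(f, z0, ts):
    """Integrate dz/dt = f(z, t) with explicit Euler steps at times ts."""
    zs = [np.asarray(z0, dtype=float)]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        zs.append(zs[-1] + (t1 - t0) * f(zs[-1], t0))
    return np.stack(zs)

# Toy stand-in for learned dynamics: a linear vector field whose
# solution traces a circle, i.e. a closed pen stroke.
W = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def dynamics(z, t):
    return W @ z

# Because the representation is a continuous-time function, the same
# "sketch" can be sampled at any resolution -- coarse or fine -- without
# changing the underlying model.
coarse = euler_odeint(dynamics, [1.0, 0.0], np.linspace(0, 2 * np.pi, 20))
fine = euler_odeint(dynamics, [1.0, 0.0], np.linspace(0, 2 * np.pi, 200))
print(coarse.shape, fine.shape)  # (20, 2) (200, 2)
```

In practice, one would replace `dynamics` with a neural network and `euler_odeint` with an adaptive solver (e.g. `torchdiffeq.odeint`), training by backpropagating through the solve.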
Slides for my ICLR 2022 talk
PS: Reusing any of these slides would require permission from the author.
Code repository
The "VectorMNIST" dataset

(The dataset will be updated further.)
Want to cite this paper?
@inproceedings{das2022sketchode,
  title={Sketch{ODE}: Learning neural sketch representation in continuous time},
  author={Ayan Das and Yongxin Yang and Timothy Hospedales and Tao Xiang and Yi-Zhe Song},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=c-4HSDAWua5}
}