
How can we create digital garments that accurately reflect the properties of real-world clothing? Garment modeling research aims to replicate the shape and texture of physical garments in digital space, enabling applications such as virtual try-on and digital-twin fashion production workflows. Unlike manual 3D modeling, our approach leverages deep learning to predict garment structures directly from images, even when the garments are distorted or occluded. This research is essential for efficient virtual fashion design, production optimization, and personalized user experiences.
We analyze garment images to predict the underlying sewing patterns and unwarped textures. Our approach uses deep neural networks to reconstruct realistic 3D garments across various poses, integrating pattern prediction with texture recovery to create high-fidelity garment models. So far, we have demonstrated garment reconstruction systems that produce detailed and accurate 3D garments from images or 3D scans of garments worn by a user.
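At a structural level, the pipeline combines two predictors, one for the sewing pattern and one for the unwarped texture, and fuses their outputs into a single garment model. The sketch below illustrates only that two-stage structure; the function names, data fields, and return values are illustrative assumptions, not the actual networks described in the papers:

```python
# Illustrative sketch of the two-stage reconstruction structure.
# All names and fields here are assumptions for exposition, not the
# authors' actual implementation (which uses deep neural networks).
from dataclasses import dataclass, field

@dataclass
class SewingPattern:
    """Placeholder for a predicted sewing pattern: 2D panels plus
    stitching information connecting panel edges."""
    panels: list = field(default_factory=list)
    stitches: list = field(default_factory=list)

def predict_sewing_pattern(image):
    # Stand-in for a pattern-prediction network (cf. SPnet/NeuralTailor):
    # maps an input image or scan to a structured sewing pattern.
    return SewingPattern(panels=["front", "back"],
                         stitches=[("front", "back")])

def predict_unwarped_texture(image):
    # Stand-in for a texture-recovery network (cf. DeepIron): recovers
    # the garment texture with pose-induced warping removed.
    return {"texture_map": image, "warp_removed": True}

def reconstruct_garment(image):
    """Fuse pattern prediction and texture recovery into one garment model."""
    pattern = predict_sewing_pattern(image)
    texture = predict_unwarped_texture(image)
    return {"pattern": pattern, "texture": texture}

garment = reconstruct_garment("photo_of_posed_user.png")
print(len(garment["pattern"].panels))  # → 2
```

The key design point is that the two predictions are complementary: the sewing pattern gives the garment's 3D structure independent of pose, while the unwarped texture supplies appearance that can be mapped back onto the reconstructed panels.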
Recent papers
- DeepIron: Predicting Unwarped Garment Texture from a Single Image, H. Kwon and S-H Lee, Eurographics 2024 (Short Paper), Apr. 2024 [Project]
- SPnet: Estimating Garment Sewing Patterns from a Single Image of a Posed User, S. Lim, S. Kim, and S-H Lee, Eurographics 2024 (Short Paper), Apr. 2024 [Project]
- NeuralTailor: Reconstructing Sewing Pattern Structures from 3D Point Clouds of Garments, M. Korosteleva and S-H Lee, ACM Transactions on Graphics (TOG), ACM SIGGRAPH 2022, July 2022 [Project]
- Generating Datasets of 3D Garments with Sewing Patterns, M. Korosteleva and S-H Lee, NeurIPS Datasets and Benchmarks Track, Sep. 2021
- Estimating Garment Patterns from Static Scan Data, S. Bang, M. Korosteleva, and S-H Lee, Computer Graphics Forum (accepted), May 2021 [Project]