# SieveNet

Python 3.6 | License: MIT

This is an unofficial implementation of *SieveNet: A Unified Framework for Robust Image-Based Virtual Try-On*.
The paper can be found here.

## Dataset downloading and processing

Dataset download instructions and the dataset links can be found in the official repos of CP-VTON and VITON.
Put the dataset in the `data` folder.
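
As a quick check that the data landed where the code expects it, here is a minimal sketch. The `train`/`test` split and sub-folder names below follow the usual CP-VTON convention and are an assumption; adjust them to whatever this repo's data loader actually reads.

```python
# Hedged sketch: verify the downloaded VITON/CP-VTON data is unpacked under data/.
# The split and sub-folder names follow the CP-VTON convention and are assumptions.
from pathlib import Path

data_root = Path("data")
for split in ("train", "test"):
    for sub in ("image", "cloth", "cloth-mask", "image-parse", "pose"):
        folder = data_root / split / sub
        print(f"{folder}: {'ok' if folder.is_dir() else 'missing'}")
```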

## Usage

Clone the repo and install the requirements with `pip install -r requirements.txt`.

### Training

#### Coarse-to-Fine Warping module

- In `config.py`, set `self.datamode='train'` and `self.stage='GMM'`
- Then run `python train.py`

You can observe the results during training in TensorBoard, as shown below.

*(TensorBoard screenshot: GMM training)*
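
For reference, a minimal sketch of the two fields this README says to edit in `config.py`; the surrounding class structure and comments are assumptions, only `datamode` and `stage` are named here. The same two fields are what change for the SEG and TOM stages below.

```python
# Hedged sketch of the fields edited in config.py for each run.
# The Config class wrapper is an assumption; only datamode and stage come from the README.
class Config:
    def __init__(self):
        self.datamode = 'train'  # 'train' while training, 'test' while testing
        self.stage = 'GMM'       # 'GMM' (warping), 'SEG' (segmentation), or 'TOM' (texture translation)
```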

#### Conditional Segmentation Mask generation module

- In `config.py`, set `self.datamode='train'` and `self.stage='SEG'`
- Then run `python train.py`

*(TensorBoard screenshot: segmentation training)*

#### Segmentation Assisted Texture Translation module

- In `config.py`, set `self.datamode='train'` and `self.stage='TOM'`
- Then run `python train.py`

*(TensorBoard screenshot: TOM training)*

## Testing on dataset

Please download the checkpoints of all three modules from Google Drive and put them in the `checkpoints` folder (a small sanity-check sketch appears at the end of this section).
For testing, set `self.datamode='test'` in `config.py`.
To test the Coarse-to-Fine Warping, Conditional Segmentation Mask generation, and Segmentation Assisted Texture Translation modules, set `self.stage='GMM'`, `self.stage='SEG'`, and `self.stage='TOM'` respectively.
Here are the testing results. For the Coarse-to-Fine Warping module:

*(TensorBoard screenshot: GMM testing)*

For the Segmentation Assisted Texture Translation module:

*(TensorBoard screenshot: texture translation testing)*
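
As a side note for this section: before switching `self.datamode` to `'test'`, it can help to confirm the downloaded checkpoints are actually in place. The glob below assumes the checkpoint filenames contain the stage name, which may not match the files shipped on Google Drive.

```python
# Hedged sketch: look for one checkpoint per stage under checkpoints/.
# The assumption that filenames contain 'GMM'/'SEG'/'TOM' may not hold for the
# Google Drive files; adjust the pattern to the actual names.
from pathlib import Path

ckpt_dir = Path("checkpoints")
for stage in ("GMM", "SEG", "TOM"):
    matches = sorted(p.name for p in ckpt_dir.glob(f"*{stage}*"))
    print(f"{stage}: {matches if matches else 'no checkpoint found'}")
```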

## Testing on custom image

1. Please download the checkpoints of all three modules from Google Drive and put them in the `checkpoints` folder.
2. Please download the Caffe model from here and put it in the `pose` folder.
3. Generate the human parsing map with the Self-Correction-Human-Parsing repo or with this Colab demo. Select the LIP dataset while generating the human parsing.
4. Set the paths of the input image, the cloth image, and the human-parsing output in the config file (a hedged sketch of these fields follows this list).
5. Then run `python inference.py`. The output will be saved in the `outputs` folder.
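
Here is the hedged sketch referenced in step 4. The attribute names (`img_path`, `cloth_path`, `parse_path`) and the file names are placeholders, not necessarily the names used in this repo's `config.py`; match them to the actual fields.

```python
# Hedged sketch of the three paths set in step 4. Attribute and file names are
# placeholders; align them with the actual fields in config.py.
class InferenceConfig:
    def __init__(self):
        self.img_path = 'inputs/person.jpg'          # input (person) image
        self.cloth_path = 'inputs/cloth.jpg'         # cloth image to try on
        self.parse_path = 'inputs/person_parse.png'  # parsing map from Self-Correction-Human-Parsing (LIP labels)
```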

## Update: Inference using Colab

Please find the inference code for SieveNet in the 2nd part of this notebook.

Open In Colab

## Acknowledgements

Some modules of this implementation are based on this repo.
For generating pose keypoints, I have used the LearnOpenCV implementation of OpenPose; a minimal loading sketch is shown below.
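
For context, this is roughly how the LearnOpenCV tutorial loads the OpenPose Caffe model with OpenCV's DNN module. The file names under `pose/` and the input image path are assumptions; match them to the model downloaded in step 2 of "Testing on custom image".

```python
# Hedged sketch: loading the OpenPose COCO Caffe model with OpenCV's DNN module,
# in the style of the LearnOpenCV tutorial. File names here are assumptions.
import cv2

proto = "pose/pose_deploy_linevec.prototxt"
weights = "pose/pose_iter_440000.caffemodel"
net = cv2.dnn.readNetFromCaffe(proto, weights)

img = cv2.imread("inputs/person.jpg")  # placeholder path
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0 / 255, size=(368, 368),
                             mean=(0, 0, 0), swapRB=False, crop=False)
net.setInput(blob)
heatmaps = net.forward()  # one heatmap per keypoint; peak locations give the joints
print(heatmaps.shape)
```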