Improve model card: Add pipeline tag, library name, project/code links, project page & correct Cityscapes paper link

#3
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +8 -7
README.md CHANGED
@@ -1,16 +1,17 @@
 ---
+license: mit
+pipeline_tag: image-to-image
+library_name: huggan
 tags:
 - conditional-image-generation
-- image-to-image
 - gan
 - cyclegan
-# See a list of available tags here:
-# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
-# task: unconditional-image-generation or conditional-image-generation or image-to-image
-license: mit
 ---
 
 # CycleGAN for unpaired image-to-image translation.
+Paper: [ACDC: The Adverse Conditions Dataset with Correspondences for Robust Semantic Driving Scene Perception](https://huggingface.co/papers/2104.13395)
+Project page: https://acdc.vision.ee.ethz.ch
+Code: https://github.com/huggingface/community-events.git
 
 ## Model description
 
@@ -28,7 +29,7 @@ This allows to obtain a generated translation by G_AB, of an image from domain A
 Under these framework, these aspects have been used to perform style transfer between synthetic data obtained from a simulated driving dataset, GTA5, and the real driving data from Cityscapes.
 This is of paramount importance to develop autonomous driving perception deep learning models, as this allows to generate synthetic data with automatic annotations which resembles real world images, without requiring the intervention of a human annotator.
 This is fundamental because a manual annotator has been shown to require 1.5 to 3.3 hours to create semantic and instance segmentation masks for a single images.
-These have been provided in the original [cityscapes paper (Cordts et al 2016)](https://arxiv.org/abs/2104.13395) and the [adverse condition dataset (Sakaridis et al. 2021)](https://arxiv.org/abs/2104.13395) paper.
+These have been provided in the original [cityscapes paper (Cordts et al 2016)](https://arxiv.org/abs/1604.01683) and the [adverse condition dataset (Sakaridis et al. 2021)](https://arxiv.org/abs/2104.13395) paper.
 
 
 Hence the CycleGAN provides forward and backward translation between synthetic and real world data.
@@ -189,4 +190,4 @@ Row3 is the translation of the immediate above images in row2(real) by means of
 
 copyright = {arXiv.org perpetual, non-exclusive license}
 }
-```
+```
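For reference, combining the added lines with the retained tags, the card's front matter after this change should read roughly as follows (a sketch assembled from the diff above, not copied from the updated file):

```yaml
---
license: mit
pipeline_tag: image-to-image
library_name: huggan
tags:
- conditional-image-generation
- gan
- cyclegan
---
```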