arielreplicate / tres_iqa

Assess the quality of an image

  • Public
  • 147K runs
  • GitHub
  • Paper
  • License

Run time and cost

This model costs approximately $0.00022 to run on Replicate, or about 4,545 runs per $1, but this varies depending on your inputs. It is also open source, and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within one second.
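
If you run the model locally, the published Docker image serves Cog's standard HTTP prediction API on port 5000. Below is a minimal sketch of querying such a container from Python; the input field name ("input_image") is an assumption, so check the container's /openapi.json or the model's API tab on Replicate for the exact schema.

# Minimal sketch: query a locally running copy of the model.
# Start the container first, using the image reference shown on the
# model's Docker tab, e.g.:
#   docker run -d -p 5000:5000 --gpus=all r8.im/arielreplicate/tres_iqa
# NOTE: the input key "input_image" is an assumption; verify it against
# the container's /openapi.json before relying on it.
import base64
import requests

with open("photo.jpg", "rb") as f:
    # Cog's HTTP API expects file inputs as base64 data URIs.
    data_uri = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:5000/predictions",
    json={"input": {"input_image": data_uri}},
)
resp.raise_for_status()
print(resp.json()["output"])  # the predicted quality score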

Readme

Usage

Given an input image, this model predicts the quality of that image. Quality can be defined as how distortion-free an image is, where sources of distortion include noise, blurring, and compression artifacts. Note that a lower score indicates a higher-quality image!
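
As a concrete example, here is a minimal sketch of calling the model through Replicate's Python client (pip install replicate, with REPLICATE_API_TOKEN set in your environment). The input field name "input_image" is an assumption; the model's API tab shows the exact schema.

import replicate

# NOTE: "input_image" is an assumed field name; verify it against the
# model's API schema on Replicate before relying on it.
score = replicate.run(
    "arielreplicate/tres_iqa",
    input={"input_image": open("photo.jpg", "rb")},
)
print(score)  # remember: a lower score means a higher-quality image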

Model

This is an implementation of “No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency”. Paper | Video

This is a model for image quality assessment (IQA), a task in which machine learning models are trained to predict the quality of an image in a manner consistent with human quality raters. No-reference image quality assessment (NR-IQA) means assessing image quality without a “clean” reference image to compare against, i.e., predicting a score from a single image input. This model is an NR-IQA model.
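
To make the no-reference distinction concrete: a full-reference metric such as PSNR needs the clean image alongside the distorted one, whereas an NR-IQA model like this one scores the distorted image alone. The snippet below is a generic illustration of that difference, not code from this repository.

import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray) -> float:
    # Full-reference: requires the clean image as well as the distorted one.
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255).astype(np.uint8)

print(psnr(clean, noisy))  # full-reference: two images required
# No-reference: score = model(noisy) -- a single image is the only input.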

The demo uses a model trained on the LIVE dataset.

Acknowledgement

This code borrows elements from HyperIQA and DETR.

Citation

If you find this work useful for your research, please cite our paper:

@InProceedings{golestaneh2021no,
  title={No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency},
  author={Golestaneh, S Alireza and Dadsetan, Saba and Kitani, Kris M},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={3209--3218},
  year={2022}
}

If you have any questions about our work, please do not hesitate to contact isalirezag@gmail.com.