A modification of Daniel Russell's notebook merged with Katherine Crowson's hq-skip-net changes

Overview

Edits made to this repo by Katherine Crowson

I have added several features to this repository for use in creating higher quality generative art (feature visualization probably also benefits):

  • Deformable convolutions have been added.

  • Higher quality non-learnable upsampling filters (bicubic, Lanczos) have been added, with matching downsampling filters. A bilinear downsampling filter that applies proper low-pass filtering has also been added.

  • The nets can now optionally output to a fixed decorrelated color space, which is then transformed to RGB and passed through a sigmoid. Deep Image Prior as originally written knows nothing about the correlations between RGB color channels in natural images, which can be a disadvantage when using it for feature visualization and generative art (see the sketch after this list).
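To illustrate only the decorrelation idea, here is a minimal sketch; the matrix below is the commonly used one from the Lucid feature visualization library, not necessarily the one this repo ships:

import torch

# Fixed color-correlation matrix (Lucid's color_correlation_svd_sqrt values);
# the repo's actual matrix may differ.
color_correlation = torch.tensor([[0.26,  0.09,  0.02],
                                  [0.27,  0.00, -0.05],
                                  [0.27, -0.09,  0.03]])

def decorrelated_to_rgb(x):
    # x: (N, 3, H, W) net output in the decorrelated space
    rgb = torch.einsum('rc,nchw->nrhw', color_correlation, x)
    return torch.sigmoid(rgb)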

Example:

import torch
from models import get_hq_skip_net

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
input_depth = 32  # e.g. the number of channels of the fixed noise input
net = get_hq_skip_net(input_depth).to(device)

get_hq_skip_net() provides higher quality defaults for the skip net than get_net(), using the added features. Deformable convolutions can be slow; if this is a problem, you can disable them with offset_groups=0 or offset_type='none'. The decorrelated color space can be turned off with decorr_rgb=False. The upsample_mode and downsample_mode defaults are now 'cubic' for visual quality; I would recommend not going below 'linear'. The default channel count and number of scales have been increased.
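For instance, a cheaper configuration that turns these features back off might look like this, using the keyword arguments named above:

net = get_hq_skip_net(input_depth, offset_groups=0, decorr_rgb=False,
                      upsample_mode='linear', downsample_mode='linear')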

The default configuration uses 1x1 convolution layers to create the offsets for the deformable convolutions, because training can become unstable with 3x3. However, to make full use of deformable convolutions, you may want to use 3x3 offset layers and set their learning rate to around 1/10 that of the normal layers:

from torch import optim

net = get_hq_skip_net(input_depth, offset_type='full')
# get_non_offset_params and get_offset_params are helpers provided by this repo
params = [{'params': get_non_offset_params(net), 'lr': lr},
          {'params': get_offset_params(net), 'lr': lr / 10}]
opt = optim.Adam(params)
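For context, a minimal optimization step with this optimizer might look like the following; num_iters, net_input, and target stand in for the notebook's own iteration count, fixed noise input, and target image:

import torch.nn.functional as F

for i in range(num_iters):
    opt.zero_grad()
    out = net(net_input)            # net_input: fixed noise tensor
    loss = F.mse_loss(out, target)  # target: e.g. the corrupted image
    loss.backward()
    opt.step()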

This is a merge of Daniel Russell's deep-image-prior notebook with Katherine Crowson's notebook

Some minor additions: P. Fishwick, 01/28/2022

  • Merged Katherine Crowson's deep_image_prior (https://github.com/crowsonkb/deep-image-prior) into Daniel Russell's original notebook
  • Mounts Google Drive to save the deep_image_prior directory (see the sketch after this list)
  • Updated to the CLIP model RN50x64 with input size 448
  • Lowered cutn to 10 for a V100 (16 GB memory); increase it on an A100
  • Iterates over num_images to create an image batch
  • Saves the image at each display interval
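A minimal sketch of the Drive mounting and periodic saving; num_images, num_steps, display_interval, and the optimization step are placeholders for the notebook's own code:

from google.colab import drive
import numpy as np
from PIL import Image

drive.mount('/content/drive')
save_dir = '/content/drive/MyDrive/deep_image_prior'

num_images, num_steps, display_interval = 4, 1000, 100
for image_idx in range(num_images):
    for step in range(num_steps):
        # ... one optimization step producing `out`, a (3, H, W) array in [0, 1] ...
        if step % display_interval == 0:
            img = Image.fromarray((out.transpose(1, 2, 0) * 255).astype(np.uint8))
            img.save(f'{save_dir}/image_{image_idx}_{step:05}.png')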

Original README

Warning! The optimization may not converge on some GPUs. We've personally experienced issues on Tesla V100 and P40 GPUs. When running the code, first make sure you get results similar to those in the paper; the easiest way to check is the text inpainting notebook. Try setting double precision mode or turning off cudnn.
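In PyTorch, both mitigations are one-liners run before building the model; a sketch:

import torch

# Turn off cudnn:
torch.backends.cudnn.enabled = False
# Or switch to double precision (PyTorch 0.4-era API):
torch.set_default_tensor_type(torch.DoubleTensor)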

Deep image prior

In this repository we provide Jupyter Notebooks to reproduce each figure from the paper:

Deep Image Prior

CVPR 2018

Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky

[paper] [supmat] [project page]

Here we provide the hyperparameters and architectures that were used to generate the figures. Most of them are far from optimal. Do not hesitate to change them and see the effect.

We will expand this README with a list of hyperparameters and options shortly.

Install

Here is the list of libraries you need to install to execute the code:

  • python = 3.6
  • pytorch = 0.4
  • numpy
  • scipy
  • matplotlib
  • scikit-image
  • jupyter

All of them can be installed via conda (anaconda), e.g.

conda install jupyter

or create a conda env with all dependencies via the environment file

conda env create -f environment.yml

Docker image

Alternatively, you can use a Docker image that exposes a Jupyter Notebook with all required dependencies. To build this image, ensure you have both docker and nvidia-docker installed, then run

nvidia-docker build -t deep-image-prior .

After the build you can start the container as

nvidia-docker run --rm -it --ipc=host -p 8888:8888 deep-image-prior

You will be provided a URL through which you can connect to the Jupyter notebook.

Google Colab

To run it using Google Colab, click here and select the notebook to run. Remember to uncomment the first cell to clone the repository into Colab's environment.
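The first cell looks something like this; the URL shown is the upstream repository, so substitute the fork you actually want:

# Uncomment before running in Colab:
# !git clone https://github.com/DmitryUlyanov/deep-image-prior
# %cd deep-image-prior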

Citation

@article{UlyanovVL17,
    author    = {Ulyanov, Dmitry and Vedaldi, Andrea and Lempitsky, Victor},
    title     = {Deep Image Prior},
    journal   = {arXiv:1711.10925},
    year      = {2017}
}