AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation

Overview

AtlasNet [Project Page] [Paper] [Talk]

AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry
In CVPR, 2018.

🚀 New branch : AtlasNet + Shape Reconstruction by Learning Differentiable Surface Representations

[chair.png / chair.gif: example chair reconstruction]

Install

This implementation uses Python 3.6, PyTorch 1.7.1, PyMesh, and CUDA 10.1.

# Copy/Paste the snippet in a terminal
git clone --recurse-submodules https://github.com/ThibaultGROUEIX/AtlasNet.git
cd AtlasNet 

# Dependencies
conda create -n atlasnet python=3.6 --yes
conda activate atlasnet
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.1 -c pytorch --yes
pip install --user --requirement requirements.txt # pip dependencies
Optional: compile the Chamfer distance (MIT) and the Metro distance (GPL3 license)
# Copy/Paste the snippet in a terminal
python auxiliary/ChamferDistancePytorch/chamfer3D/setup.py install #MIT
cd auxiliary
git clone https://github.com/ThibaultGROUEIX/metro_sources.git
cd metro_sources; python setup.py --build # build metro distance #GPL3
cd ../..
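
Before going further, it may help to confirm that PyTorch sees the GPU and was built against the expected CUDA version. This is a minimal sanity check in Python, nothing AtlasNet-specific:

import torch

# Versions should match the conda install above (1.7.1 / CUDA 10.1).
print("torch", torch.__version__, "built for CUDA", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))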

A note on data.

Data download should be automatic. However, due to the new Google Drive traffic caps, you may have to download it manually. If you run into an error when running the demo, refer to issue #61.

You can manually download the data from three sources (they are identical):

Please make sure to unzip the archives into the right places:

cd AtlasNet
mkdir data
unzip ShapeNetV1PointCloud.zip -d ./data/
unzip ShapeNetV1Renderings.zip -d ./data/
unzip metro_files.zip -d ./data/
unzip trained_models.zip -d ./training/
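
A quick way to confirm the archives ended up in the right place (a minimal sketch; the directory names are inferred from the archive names above, so adjust if the zips unpack differently):

import os

expected = ["data/ShapeNetV1PointCloud",
            "data/ShapeNetV1Renderings",
            "data/metro_files",
            "training/trained_models"]
for d in expected:
    # Print a marker for every directory that is still missing.
    print(("ok      " if os.path.isdir(d) else "MISSING ") + d)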

Usage

  • Demo: python train.py --demo
  • Training: python train.py --shapenet13, then monitor progress at http://localhost:8890/ (example training variants follow this list)
  • Latest refactoring (12-2019):
    - [x] Factorize single-view reconstruction and the autoencoder into the same class
    - [x] Factorize the square and sphere templates into the same class
    - [x] Add the latent vector as a bias after the first layer (30% speedup)
    - [x] Remove the last tanh (th) in the decoder
    - [x] Cache all point clouds in one large .pth tensor (drop the nasty Chunk_reader)
    - [x] Make it multi-GPU
    - [x] Add netvision visualization of the results
    - [x] Rewrite the main script in an object-oriented way
    - [x] Check that everything works with the latest PyTorch version
    - [x] Add more layers by default, plus flags for the number of layers and hidden neurons
    - [x] Add a flag to generate a mesh directly
    - [x] Add a python setup install
    - [x] Make sure the GPUs are used at 100%
    - [x] Add the F-score to the Chamfer computation and report it
    - [x] Get rid of the ShapeNet v2 data and use v1
    - [x] Fix path issues: no more sys.path.append
    - [x] Preprocess ShapeNet 55 and add it to the dataloader
    - [x] Minimize dependencies

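The flag combinations below are a sketch, not a verified recipe: they assume --nb_primitives, --template_type and --SVR behave as suggested by the option dump printed at start-up (see the Namespace printout quoted in the comments further down), and the SQUARE value is a guess based on the refactoring notes above.

# Autoencoder with a single sphere template (the default template_type is SPHERE)
python train.py --shapenet13 --nb_primitives 1 --template_type SPHERE

# Autoencoder with 25 square templates
python train.py --shapenet13 --nb_primitives 25 --template_type SQUARE

# Single-view reconstruction presumably adds the --SVR flag
python train.py --shapenet13 --SVR --nb_primitives 1 --template_type SPHERE
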
Quantitative Results

Method                     Chamfer (*1)   F-score (*2)   Metro (*3)   Total train time (min)
Autoencoder, 25 squares    1.35           82.3%          6.82         731
Autoencoder, 1 sphere      1.35           83.3%          6.94         548
Single-view, 25 squares    3.78           63.1%          8.94         1422
Single-view, 1 sphere      3.76           64.4%          9.01         1297
  • (*1) ×1000. Computed between 2500 ground-truth points and 2500 reconstructed points.
  • (*2) F-score with a threshold of 0.001.
  • (*3) ×100. Metro is run on unnormalized point clouds, which explains the difference with the paper's numbers.
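
For reference, both point-cloud metrics can be reproduced with plain PyTorch. The sketch below follows the standard definitions (squared distances averaged in both directions for Chamfer, precision/recall at the 0.001 threshold for the F-score) and is not the repository's CUDA kernel; whether the threshold applies to squared or Euclidean distances is an assumption here.

import torch

def chamfer_and_fscore(a, b, threshold=0.001):
    """a: (N, 3), b: (M, 3) point clouds on the same device."""
    # Pairwise squared distances, then nearest neighbour in each direction.
    d = torch.cdist(a, b) ** 2            # (N, M)
    d_a = d.min(dim=1).values             # for each point of a, closest point of b
    d_b = d.min(dim=0).values             # for each point of b, closest point of a
    chamfer = d_a.mean() + d_b.mean()
    # F-score: harmonic mean of the two coverage ratios at the threshold.
    precision = (d_a < threshold).float().mean()
    recall = (d_b < threshold).float().mean()
    fscore = 2 * precision * recall / (precision + recall + 1e-8)
    return chamfer.item(), fscore.item()

a, b = torch.rand(2500, 3), torch.rand(2500, 3)
print(chamfer_and_fscore(a, b))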

Related projects

Citing this work

@inproceedings{groueix2018,
  title={{AtlasNet: A Papier-M\^ach\'e Approach to Learning 3D Surface Generation}},
  author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
  booktitle={Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}

Comments
  • RuntimeError: CUDA error: out of memory

    Thank you for the great work! I get the error below when I run ./training/train_AE_AtlasNet.py:

    I checked two more similar issues but this looks different. Any idea how to solve it? Any help appreciated!

    File "./training/train_AE_AtlasNet.py", line 151, in dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function File "./training/train_AE_AtlasNet.py", line 64, in distChamfer P = (rx.transpose(2,1) + ry - 2*zz) RuntimeError: CUDA error: out of memory

    I am running PyTorch 0.4.1 on Ubuntu 18.04.

    FULL CODE:

    (pytorch-atlasnet) [email protected]:~/AtlasNet$ python ./training/train_AE_AtlasNet.py --env $env --nb_primitives $nb_primitives |& tee ${env}.txt
    Setting up a new session...
    Namespace(accelerated_chamfer=0, batchSize=32, env='AE_AtlasNet', model='', nb_primitives=25, nepoch=120, num_points=2500, super_points=2500, workers=12)
    Random Seed: 314
    {'plane': '02691156', 'bench': '02828884', 'cabinet': '02933112', 'car': '02958343', 'chair': '03001627', 'monitor': '03211117', 'lamp': '03636649', 'speaker': '03691459', 'firearm': '04090263', 'couch': '04256520', 'table': '04379243', 'cellphone': '04401088', 'watercraft': '04530566'}
    category 02691156 files 4044 0.999752781211372 %
    category 02828884 files 1813 0.9983480176211453 %
    category 02933112 files 1571 0.9993638676844784 %
    category 02958343 files 3514 0.46878335112059766 %
    category 03001627 files 6778 1.0 %
    category 03211117 files 1093 0.9981735159817352 %
    category 03636649 files 2309 0.9961173425366695 %
    category 03691459 files 1597 0.9870210135970334 %
    category 04090263 files 2373 1.0 %
    category 04256520 files 3173 1.0 %
    category 04379243 files 8436 0.9914208485133388 %
    category 04401088 files 1050 0.9980988593155894 %
    category 04530566 files 1939 1.0 %
    {'plane': '02691156', 'bench': '02828884', 'cabinet': '02933112', 'car': '02958343', 'chair': '03001627', 'monitor': '03211117', 'lamp': '03636649', 'speaker': '03691459', 'firearm': '04090263', 'couch': '04256520', 'table': '04379243', 'cellphone': '04401088', 'watercraft': '04530566'}
    category 02691156 files 4044 0.999752781211372 %
    category 02828884 files 1813 0.9983480176211453 %
    category 02933112 files 1571 0.9993638676844784 %
    category 02958343 files 3514 0.46878335112059766 %
    category 03001627 files 6778 1.0 %
    category 03211117 files 1093 0.9981735159817352 %
    category 03636649 files 2309 0.9961173425366695 %
    category 03691459 files 1597 0.9870210135970334 %
    category 04090263 files 2373 1.0 %
    category 04256520 files 3173 1.0 %
    category 04379243 files 8436 0.9914208485133388 %
    category 04401088 files 1050 0.9980988593155894 %
    category 04530566 files 1939 1.0 %
    training set 31747
    testing set 7943
    Traceback (most recent call last):
      File "./training/train_AE_AtlasNet.py", line 151, in <module>
        dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function
      File "./training/train_AE_AtlasNet.py", line 64, in distChamfer
        P = (rx.transpose(2,1) + ry - 2*zz)
    RuntimeError: CUDA error: out of memory

    help wanted 
    opened by spha-code 15
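
    The error comes from the line that materialises the full pairwise distance matrix, P = (rx.transpose(2,1) + ry - 2*zz), which needs batch_size x num_points x num_points floats at once; lowering --batchSize or --num_points is the simplest workaround. For reference, a chunked variant of the same computation (a sketch assuming a recent PyTorch with torch.cdist, not the repository's kernel) trades a little speed for much less peak memory:

    import torch

    def chamfer_chunked(x, y, chunk=512):
        """x: (B, N, 3), y: (B, M, 3) -> per-point squared nearest-neighbour distances."""
        dist1 = torch.cat([
            (torch.cdist(xc, y) ** 2).min(dim=2).values    # (B, chunk) slice of dist1
            for xc in x.split(chunk, dim=1)], dim=1)       # only (B, chunk, M) is in memory at a time
        dist2 = torch.cat([
            (torch.cdist(yc, x) ** 2).min(dim=2).values
            for yc in y.split(chunk, dim=1)], dim=1)
        return dist1, dist2
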
  • To be honest, the latest code is very hard to understand

    I have compared our method with AtlasNet several times, and I need to edit the source code each time. However, the latest code is very hard to understand because it is highly abstracted; it took me an hour to understand the relationship between the modules.

    help wanted 
    opened by hzxie 10
  • Stuck after launching visdom server

    I ran the demo successfully, but after I launch the visdom server

    python -m visdom.server -p 8888

    I am stuck: I can't type any command in my Anaconda window anymore. How do I continue? Thanks!

    help wanted 
    opened by spha-code 10
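
    For what it is worth, visdom.server is a long-running process, so it is expected to keep the terminal busy; running it in the background (or in a second terminal / tmux pane) frees the prompt. A minimal sketch:

    # start the visdom server in the background, logging to a file
    python -m visdom.server -p 8888 > visdom.log 2>&1 &
    # then run the demo/training from the same prompt
    python train.py --demo
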
  • [BUG] Chamfer Distance is not Correct

    I tried to debug chamfer.cu by printing tensor values. I created two point clouds containing 3 and 5 points, respectively; the values are shown below.

    (1,.,.) = 
     0.01 *
      0.0000  0.0000  0.0000
      -20.4838  4.4935  6.1395
      -3.7283 -0.7629  1.7736
    
    (2,.,.) = 
     0.01 *
      0.0000  0.0000  0.0000
      -17.4992  4.4902  5.0518
      -1.6003 -1.2430  0.8040
    [ Variable[CUDAType]{2,3,3} ]
    (1,.,.) = 
      0.0051  0.1850  0.0004
      0.0051  0.1850  0.0093
      0.0096  0.1850  0.0081
      0.0096  0.1850  0.0016
      0.0075  0.1850  0.0004
    
    (2,.,.) = 
     -0.1486 -0.0932 -0.0014
     -0.0406 -0.0932 -0.0017
     -0.2057 -0.0932 -0.0001
     -0.0915 -0.0932 -0.0001
      0.0103 -0.0932 -0.0001
    [ Variable[CUDAType]{2,5,3} ]
    

    I also added print statements in the CUDA functions and got the following output.

    2i = 0, n = 3, j = 0, k = 0, d = 0.03425420, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00038808)
    2i = 0, n = 3, j = 1, k = 0, d = 0.06742091, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00038808)
    2i = 0, n = 3, j = 2, k = 0, d = 0.03920735, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00038808)
    2i = 1, n = 3, j = 0, k = 0, d = 0.03573948, x = (-0.08192606 0.01907521 0.02376382) y = (0.00749534 0.18500790 0.00928491)
    2i = 1, n = 3, j = 1, k = 0, d = 0.03405631, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00749534 0.18500790 0.00928491)
    2i = 1, n = 3, j = 2, k = 0, d = 0.03437031, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00928491)
    2i = 0, n = 3, j = 0, k = 1, d = 0.03434026, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00928491)
    2i = 0, n = 3, j = 1, k = 1, d = 0.06641452, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00928491)
    2i = 0, n = 3, j = 2, k = 1, d = 0.03897782, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00928491)
    2i = 1, n = 3, j = 0, k = 1, d = 0.03490656, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00713918)
    2i = 1, n = 3, j = 1, k = 1, d = 0.03394968, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00713918)
    2i = 1, n = 3, j = 2, k = 1, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00713918)
    2i = 0, n = 3, j = 0, k = 2, d = 0.03438481, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00809300)
    2i = 0, n = 3, j = 1, k = 2, d = 0.06842789, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00809300)
    2i = 0, n = 3, j = 2, k = 2, d = 0.03939636, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00809300)
    2i = 1, n = 3, j = 0, k = 2, d = 0.03508088, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00253408)
    2i = 1, n = 3, j = 1, k = 2, d = 0.03389502, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00253408)
    2i = 1, n = 3, j = 2, k = 2, d = 0.03423970, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00253408)
    2i = 0, n = 3, j = 0, k = 3, d = 0.03432181, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00158027)
    2i = 0, n = 3, j = 1, k = 3, d = 0.06916460, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00158027)
    2i = 0, n = 3, j = 2, k = 3, d = 0.03956439, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00158027)
    2i = 1, n = 3, j = 0, k = 3, d = 0.03652760, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00364473)
    2i = 1, n = 3, j = 1, k = 3, d = 0.03404036, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00364473)
    2i = 1, n = 3, j = 2, k = 3, d = 0.03435681, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00364473)
    3i = 0, n = 3, j = 0, k = 4, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00038808)
    3i = 0, n = 3, j = 1, k = 4, d = 0.06842767, x = (-0.20483765 0.04493479 0.06139540) y = (0.00749534 0.18500790 0.00038808)
    3i = 0, n = 3, j = 2, k = 4, d = 0.03941518, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00749534 0.18500790 0.00038808)
    3i = 1, n = 3, j = 0, k = 4, d = 0.03643737, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00602855)
    3i = 1, n = 3, j = 1, k = 4, d = 0.03406866, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00602855)
    3i = 1, n = 3, j = 2, k = 4, d = 0.03437987, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00602855)
    i = 0, n = 3, j = 0, best = 0.03425420, best_i = 0
    i = 0, n = 3, j = 1, best = 0.06641452, best_i = 1
    i = 0, n = 3, j = 2, best = 0.03897782, best_i = 1
    i = 1, n = 3, j = 0, best = 0.03490656, best_i = 1
    i = 1, n = 3, j = 1, best = 0.03389502, best_i = 2
    i = 1, n = 3, j = 2, best = 0.03423970, best_i = 2
    

    For batch 0 (i = 0), everything looks correct. However, for batch 1 (i = 1), the printed values do not appear in the input tensors. Is there something wrong with the code?

    chamfer 
    opened by hzxie 10
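
    Whatever is happening inside chamfer.cu, a brute-force PyTorch reference makes it easy to cross-check the CUDA kernel on small clouds like the ones above (a sketch; distChamfer stands for whatever entry point your build of the extension exposes):

    import torch

    def chamfer_bruteforce(x, y):
        """x: (B, N, 3), y: (B, M, 3) -> per-point squared nearest-neighbour distances."""
        d = torch.cdist(x, y) ** 2                       # full (B, N, M) distance matrix
        return d.min(dim=2).values, d.min(dim=1).values

    x = torch.rand(2, 3, 3).cuda()
    y = torch.rand(2, 5, 3).cuda()
    ref1, ref2 = chamfer_bruteforce(x, y)
    # dist1, dist2 = distChamfer(x, y)                   # hypothetical call into the CUDA kernel
    # assert torch.allclose(dist1, ref1) and torch.allclose(dist2, ref2)
    print(ref1.shape, ref2.shape)                        # torch.Size([2, 3]) torch.Size([2, 5])
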
  • Evaluate RGB image with pretrained model

    Hi, I am trying to evaluate the pretrained SVR AtlasNet model on an RGB image of a chair. My parameters are very similar to the demo, yet I get weird results (viewed in the Chrome 3D viewer). I used the demo grid generation. When I run your demo plane.jpg through my network I get good results in the 3D viewer (attached: demo plane result, weird_pic_1, weird_pic_2, weird_pic_3). Can you please tell me how to evaluate an RGB image?

    testing 
    opened by Itamare1982 9
  • Test set used as validation to choose best model

    In train_AE_Atlasnet.py, the test set is used as the validation set to choose the best model. The test set should never be used during training and especially not to choose the best model as this biases the results. It's probably more appropriate to report the results on the last training epoch if there was no validation set.

    bug 
    opened by lynetcha 9
  • The corresponding normalized mesh

    I downloaded the corresponding normalized meshes (only 58 MB) from the link you provided. I found that the number of meshes is much smaller than the number of corresponding point clouds. Could you please provide the full dataset of corresponding normalized meshes? Thank you!

    data 
    opened by wang-ps 9
  • Cannot download the point cloud data

    Hi! I'm trying to download the point cloud data provided in this link: https://cloud.enpc.fr/s/j2ECcKleA1IKNzk but the network fails every time I try to download.

    Do you know what's going on or how to download them?

    Thank you in advance!

    data 
    opened by jjpark 8
  • validation loss explodes

    I directly ran the script 'train_AE_Atlasnet.py' without any modification. As the attached screenshot shows, the performance is good on the training set but quite poor on the validation set: the validation loss increases quickly and does not decrease.

    pytorch 
    opened by AkonLau 8
  • About the point cloud dataset

    I found that some of the provided point cloud data are missing. Could you provide the full point cloud dataset, or tell me how to generate it? Thank you!

    data 
    opened by guoyan1991 7
  • Memory Leak

    I found that the unused self.dist1 and self.dist2 in the file "nndistance/functions/nnd.py" cause a memory leak in my environment (Python 3.5.2 with PyTorch 0.4.0).

    class NNDFunction(Function):
        def forward(self, xyz1, xyz2):
            dist1,dist2=cuda_compute_from(xyz1,xyz2)
            # following two lines cause memory leak
            self.dist1 = dist1
            self.dist2 = dist2
            return dist1, dist2
    
        def backward(self, graddist1, graddist2):
            gradxyz1,gradxyz2=grad_cuda_compute_from(graddist1,graddist2)
            return gradxyz1, gradxyz2
    
    chamfer pytorch 
    opened by liuyuan-pal 7
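
    The leak is the classic pattern of stashing tensors on the Function instance itself; with the static-method ctx API, anything backward needs is saved through the context and released together with the autograd graph. A sketch of the same wrapper in that style (cuda_compute_from and grad_cuda_compute_from are placeholders carried over from the snippet above):

    import torch
    from torch.autograd import Function

    class NNDFunction(Function):
        @staticmethod
        def forward(ctx, xyz1, xyz2):
            dist1, dist2 = cuda_compute_from(xyz1, xyz2)      # placeholder kernel call
            # If backward needed any of these tensors, they would be stored with
            # ctx.save_for_backward(...) here instead of on self, so they get
            # freed with the graph instead of leaking.
            return dist1, dist2

        @staticmethod
        def backward(ctx, graddist1, graddist2):
            gradxyz1, gradxyz2 = grad_cuda_compute_from(graddist1, graddist2)  # placeholder
            return gradxyz1, gradxyz2

    # Usage goes through .apply, as with any static-method autograd Function:
    # dist1, dist2 = NNDFunction.apply(xyz1, xyz2)
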
  • Question about running

    Hi, I'm sorry to bother you again. When I ran the code as you explained, I got the following error:

    sh: 1: tmux: not found
    Setting up a new session...
    Exception in user code:

    Could you give me some advice? The full terminal output is as follows:

    /home/yukon/anaconda3/envs/pymesh/bin/python "/media/yukon/Extreme SSD/AtlasNet-master/train.py"
    anshu: Namespace(SVR=False, activation='relu', anisotropic_scaling=False, batch_size=32, batch_size_test=32, bottleneck_size=1024, class_choice=['airplane'], data_augmentation_axis_rotation=False, data_augmentation_random_flips=False, demo=True, demo_input_path='./doc/pictures/plane_input_demo.png', dir_name='', env='Atlasnet', hidden_neurons=512, http_port=8891, id='0', loop_per_epoch=1, lr_decay_1=120, lr_decay_2=140, lr_decay_3=145, lrate=0.001, multi_gpu=[0], nb_primitives=1, nepoch=150, no_learning=False, no_metro=False, normalization='UnitBall', num_layers=2, number_points=2500, number_points_eval=2500, random_rotation=False, random_seed=False, random_translation=False, reload_decoder_path='', reload_model_path='', remove_all_batchNorms=False, run_single_eval=False, sample=True, shapenet13=False, start_epoch=0, template_type='SPHERE', train_only_encoder=False, visdom_port=8890, workers=0)
    Loaded compiled 3D CUDA chamfer distance
    Launching new visdom instance in port 8890
    TMUX=0 tmux new-session -d -s visdom_server ; send-keys "/home/yukon/anaconda3/envs/pymesh/bin/python -m visdom.server -p 8890 > /dev/null 2>&1" Enter
    sh: 1: tmux: not found
    Launching new HTTP instance in port 8891
    TMUX=0 tmux new-session -d -s http_server ; send-keys "/home/yukon/anaconda3/envs/pymesh/bin/python -m http.server -p 8891 > /dev/null 2>&1" Enter
    sh: 1: tmux: not found
    Setting up a new session...
    Exception in user code:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection raise err File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 710, in urlopen chunked=chunked, File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 398, in _make_request conn.request(method, url, **httplib_request_kw) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 239, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1291, in request self._send_request(method, url, body, headers, encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1337, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1286, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1046, in _send_output self.send(msg) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 984, in send self.connect() File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect conn = self._new_conn() File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/adapters.py", line 450, in send timeout=timeout File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 788, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8890): Max retries exceeded with url: /env/Atlasnetatlasnet_singleview_1_sphere_2atlasnet_singleview_1_sphere (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused',))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/visdom/init.py", line 695, in _send data=json.dumps(msg), File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/visdom/init.py", line 656, in _handle_post r = self.session.post(url, data=data) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 577, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 529, in request resp = self.send(prep, **send_kwargs) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 645, in send r = adapter.send(request, **kwargs) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8890): Max retries exceeded with url: /env/Atlasnetatlasnet_singleview_1_sphere_2atlasnet_singleview_1_sphere (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused',)) [Errno 111] Connection refused on_close() takes 1 positional argument but 3 were given New MLP decoder : hidden size 512, num_layers 2, activation relu Network weights loaded from ./training/trained_models/atlasnet_singleview_1_sphere/network.pth! Atlasnet generated mesh at ./doc/pictures/plane_input_demoAtlasnetReconstruction.ply!

    Process finished with exit code 0

    opened by tang-y-q 2
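
    The sh: 1: tmux: not found messages are harmless for the demo itself (the mesh was still written at the end of the log), but they mean the monitoring servers never start. Installing tmux, or launching the servers manually as below, should bring the dashboards back (a sketch; the port numbers are taken from the log above):

    sudo apt-get install tmux            # lets train.py spawn the servers itself
    # or start them by hand in separate terminals:
    python -m visdom.server -p 8890
    python -m http.server 8891
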
  • Question About Visulization

    Hey! Sorry to disturb you again!

    I want to know whether there are any effective Python tools to visualize an .obj file and save it to .png (other than MeshLab).

    Thanks for your reply!

    opened by yufeng9819 1
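
    One dependency-light option is matplotlib's 3D toolkit. The sketch below parses only the v/f lines of an OBJ (a triangulated mesh with 1-based indices is assumed, and mesh.obj / mesh.png are hypothetical file names) and writes a PNG; libraries such as trimesh or pyrender give nicer shading when an off-screen rendering context is available.

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d.art3d import Poly3DCollection

    def load_obj(path):
        """Minimal OBJ reader: keeps only vertex positions and triangle faces."""
        verts, faces = [], []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if parts and parts[0] == "v":
                    verts.append([float(v) for v in parts[1:4]])
                elif parts and parts[0] == "f":
                    # keep only the vertex index of each "v/vt/vn" token (OBJ indices are 1-based)
                    faces.append([int(p.split("/")[0]) - 1 for p in parts[1:4]])
        return np.array(verts), np.array(faces)

    verts, faces = load_obj("mesh.obj")                  # hypothetical input file
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.add_collection3d(Poly3DCollection(verts[faces], facecolors="lightgrey",
                                         edgecolors="k", linewidths=0.1))
    ax.auto_scale_xyz(verts[:, 0], verts[:, 1], verts[:, 2])
    ax.set_axis_off()
    plt.savefig("mesh.png", dpi=200, bbox_inches="tight")
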
  • Compile Metro Distance (GPL3 Licence)

    Hi, I'm sorry to bother you again.
    When I used the code you gave to build the Metro distance, I found that it could not be compiled successfully.
    It reports that the system path cannot be found; the result is shown in the attached screenshot.

    Could you please give some advice on how to solve this problem? Thanks a lot!

    opened by tang-y-q 1
  • Question about train and test strategy

    Hi! Sorry to disturb you again.

    I want to ask about the train and test strategy. In your code you set opt.shapenet13=True. Does that mean you first train the network on all categories and then test on each class to get the metrics for every single class?

    Looking forward to your reply!

    opened by yufeng9819 1
  • AtlasNet checkpoint not available

    Hi @ThibaultGROUEIX, thank you for sharing the code.

    When downloading the model checkpoint using trained_models/download_models.sh (https://cloud.enpc.fr/s/c27Df7fRNXW2uG3/download), which corresponds to version 2.2 of the source code, the link seems to be broken or no longer available. Could you please help me with this?

    Thanks.

    opened by apicis 4