SberSwap: video face swapping based on deep learning

Overview

SberSwap

Results

Video Swap

Installation

  1. Clone this repository
git clone https://github.com/sberbank-ai/sber-swap.git
cd sber-swap
git submodule init
git submodule update
  2. Install the dependencies
pip install -r requirements.txt
  3. Download weights
sh download_models.sh

Usage

  1. Colab Demo, or run the Jupyter notebook SberSwapInference.ipynb locally
  2. Face Swap On Video

Swap onto one specific person in the video. You must provide a face from the target video (for example, a crop from any frame).

python inference.py --source_paths {PATH_TO_IMAGE} --target_faces_paths {PATH_TO_IMAGE} --target_video {PATH_TO_VIDEO}

Swap onto multiple people in the video. You must provide multiple source faces and the corresponding faces from the target video.

python inference.py --source_paths {PATH_TO_IMAGE PATH_TO_IMAGE ...} --target_faces_paths {PATH_TO_IMAGE PATH_TO_IMAGE ...} --target_video {PATH_TO_VIDEO}
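For example, using the sample images that ship in the repository's examples folder (the target video path is a placeholder you supply yourself):

python inference.py --source_paths examples/images/elon_musk.jpg --target_faces_paths examples/images/1.png --target_video {PATH_TO_VIDEO}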
  3. Face Swap On Image

You may set the target face, in which case the source will be swapped onto that person, or you may skip this parameter, in which case the source will be swapped onto any person found in the image.

python inference.py --target_path {PATH_TO_IMAGE} --image_to_image True
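As a concrete sketch using the bundled sample images (assuming --source_paths is also accepted in image-to-image mode, as it is for video):

python inference.py --source_paths examples/images/elon_musk.jpg --target_path examples/images/1.png --image_to_image True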

Training

We also provide the training code for the face swap model. Proceed as follows:

  1. Download the VGGFace2 dataset.
  2. Crop and align the faces with our detection model.
python preprocess_vgg.py --path_to_dataset {PATH_TO_DATASET} --save_path {SAVE_PATH}
  3. Start training.
python train.py --run_name {YOUR_RUN_NAME}
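Putting the two steps together, a minimal run might look like this (the dataset and output paths below are placeholders, not paths shipped with the repository):

python preprocess_vgg.py --path_to_dataset /data/VGGFace2 --save_path /data/VGGFace2_cropped
python train.py --run_name baseline_vggface2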

We provide many different options for training; more information about each of them can be found in train.py. If you would like to log your experiments with wandb, log in to wandb first with wandb login.
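For example, assuming you already have a wandb account, logging in once per machine before launching training is enough (the exact option that switches wandb logging on is defined in train.py and is not reproduced here):

wandb login
python train.py --run_name my_experiment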

Tips:

  1. For the first epochs, we suggest not using the eye detection loss and the scheduler if you train from scratch.
  2. When fine-tuning the model, you can vary the loss coefficients to make the result look more like the source identity, or, conversely, to preserve the features and attributes of the target face.
  3. You can change the backbone of the attribute encoder and the number of AAD ResBlk blocks with the --backbone and --num_blocks parameters.
  4. For fine-tuning you can use our pretrained generator and discriminator weights from the weights folder. We provide weights for models with a U-Net backbone and 1-3 blocks in AAD ResBlk. The main model uses 2 blocks in AAD ResBlk.
Comments
  • Make possible to save the final frames to a separate (temp) folder during video processing

    Hello! Please make it possible to save the final frames to a separate folder during video processing, as is done in SimSwap. This is convenient because you can check a frame while a long video is being processed, so you don't waste time if the result turns out to be bad. It would also make it possible to build a GIF from these frames yourself. Thanks!
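    A minimal sketch of how such frame dumping could be wired in around the existing per-frame loop (the function name, arguments, and output folder below are hypothetical, not part of the current codebase):

    import os
    import cv2

    def save_debug_frame(frame, idx, out_dir="temp_frames"):
        """Write one processed frame to a temporary folder so long runs can be inspected early."""
        os.makedirs(out_dir, exist_ok=True)
        cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.png"), frame)

    # inside the video loop, after a frame has been swapped:
    # save_debug_frame(result_frame, frame_index)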

    opened by netrunner-exe 7
  • Suggested bug fixes

    Unfortunately, I don't know how to make pull requests properly, so I'm asking the developers to fix this or to make the pull request @AlexanderGroshev

    1. In sberbank-ai/sber-swap/utils/inference/masks.py, line 7, lmrks = np.array( lmrks.copy(), dtype=np.int ) should be changed to lmrks = np.array( lmrks.copy(), dtype=np.int32 ). This removes warnings like Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations lmrks = np.array( lmrks.copy(), dtype=np.int ), of which there really are far too many and which badly clutter the useful console output.
    2. In sber-swap/utils/inference/video_processing.py, lines 29 and 30, add -v -8 to os.system(f"ffmpeg -i so that it becomes os.system(f"ffmpeg -v -8 -i {video_with_sound} -vn... This suppresses FFmpeg's own output, making the console output much cleaner and not cluttering it with extra information.
    3. In sber-swap/utils/inference/video_processing.py, line 156, change cv2.VideoWriter_fourcc(*'MP4V') to cv2.VideoWriter_fourcc(*'mp4v'). This solves the OpenCV: FFMPEG: tag 0x5634504d/'MP4V' problem and related issues.

    @AlexanderGroshev fixed the rest of what I found in a pull request. Thanks!

    opened by netrunner-exe 2
  • from coordinate_reg.image_infer import Handler => error

    Hello ! When I run from coordinate_reg.image_infer import Handler:


    OSError                                   Traceback (most recent call last)
    ~\AppData\Local\Temp\ipykernel_21280\3719535116.py in <module>
    ----> 1 from coordinate_reg.image_infer import Handler

    ~\sber-swap\coordinate_reg\image_infer.py in <module>
          2 import numpy as np
          3 import os
    ----> 4 import mxnet as mx
          5 from skimage import transform as trans
          6 import insightface

    ~\anaconda3\envs\original\lib\site-packages\mxnet\__init__.py in <module>
         22 from __future__ import absolute_import
         23
    ---> 24 from .context import Context, current_context, cpu, gpu, cpu_pinned
         25 from . import engine
         26 from .base import MXNetError

    ~\anaconda3\envs\original\lib\site-packages\mxnet\context.py in <module>
         22 import warnings
         23 import ctypes
    ---> 24 from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
         25 from .base import _LIB
         26 from .base import check_call

    ~\anaconda3\envs\original\lib\site-packages\mxnet\base.py in <module>
        211 __version__ = libinfo.__version__
        212 # library instance of mxnet
    --> 213 _LIB = _load_lib()
        214
        215 # type definitions

    ~\anaconda3\envs\original\lib\site-packages\mxnet\base.py in _load_lib()
        202     """Load library by searching possible path."""
        203     lib_path = libinfo.find_lib_path()
    --> 204     lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
        205     # DMatrix functions
        206     lib.MXGetLastError.restype = ctypes.c_char_p

    ~\anaconda3\envs\original\lib\ctypes\__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error)
        354
        355         if handle is None:
    --> 356             self._handle = _dlopen(self._name, mode)
        357         else:
        358             self._handle = handle

    OSError: [WinError 126] The specified module could not be found

    opened by ihorrible 1
  • TypeError: Descriptors cannot not be created directly.

    Hi FaceSwap ! When I run this code:

    python inference.py --source_paths {PATH_TO_IMAGE} --target_faces_paths {PATH_TO_IMAGE} --target_video {PATH_TO_VIDEO}

    I have the following error:

    TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0. If you cannot immediately regenerate your protos, some other possible workarounds are:

    1. Downgrade the protobuf package to 3.20.x or lower.
    2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

    More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
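    Both workarounds named in the error message can be applied from the shell; either one is usually enough (pin the protobuf version as appropriate for your environment):

    pip install "protobuf<=3.20.3"
    # or, slower pure-Python fallback:
    export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python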

    opened by ihorrible 1
  • incorrect versions in requirements.txt

    Hi FaceSwap,

    Please state the correct versions in requirements.txt:

    • ERROR: Could not find a version that satisfies the requirement torch==1.6.0+cu101 (from versions: 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0) ERROR: No matching distribution found for torch==1.6.0+cu101
    • ERROR: Could not find a version that satisfies the requirement torchvision==0.7.0+cu101 (from versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3, 0.8.2, 0.9.0, 0.9.1, 0.10.0, 0.10.1, 0.11.0, 0.11.1, 0.11.2, 0.11.3, 0.12.0, 0.13.0, 0.13.1, 0.14.0) ERROR: No matching distribution found for torchvision==0.7.0+cu101
    • ERROR: Could not find a version that satisfies the requirement onnxruntime-gpu==1.4.0 (from versions: none) ERROR: No matching distribution found for onnxruntime-gpu==1.4.0
    • ERROR: Could not find a version that satisfies the requirement mxnet-cu101mkl (from versions: none) ERROR: No matching distribution found for mxnet-cu101mkl
    • ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. conda-repo-cli 1.0.4 requires pathlib, which is not installed. anaconda-project 0.10.1 requires ruamel-yaml, which is not installed.
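    These pins target CUDA 10.1 builds that were only published for older Python versions (3.7/3.8). On such an interpreter they can still be installed from the PyTorch wheel archive, roughly like this (a hedged sketch, not an officially supported path):

    pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
    pip install onnxruntime-gpu==1.4.0 mxnet-cu101mkl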
    opened by ihorrible 1
  • How to change mask height or exclude specific parts of face from swapping?

    Hi! How can I change the mask height or exclude specific parts of the face from swapping? There are situations when you need to exclude poorly or incorrectly formed parts of the face, or to cut the mask down in width or height. Can you please tell me where in the code this can be changed? Thank you very much for your help!
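    As a generic, hedged sketch (not tied to this repository's actual mask code), shrinking a binary face mask or cutting off its top rows can be done with plain OpenCV/NumPy before blending:

    import cv2
    import numpy as np

    def shrink_mask(mask, erode_px=10, cut_top_frac=0.2):
        """Erode a binary face mask and zero out its top rows (e.g. to keep the original forehead/hair)."""
        kernel = np.ones((erode_px, erode_px), np.uint8)
        out = cv2.erode(mask.astype(np.uint8), kernel)
        out[: int(out.shape[0] * cut_top_frac), :] = 0
        return out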

    opened by netrunner-exe 1
  • Add ability to skip --target_faces_paths option

    Hey! Thank you very much for the work done; we hope that the code will continue to be developed and polished. There is one question. In the description of the --target_faces_paths option, you noted that you can skip this option, in which case any face in the photo or video will be selected. Unfortunately, the ability to choose between using a face from a screenshot and automatically selecting any face does not work. If you skip this parameter, there is always an error (for a photo or a video):

    List of source paths:  ['/content/sber-swap/examples/images/elon_musk.jpg']
    List of target paths:  ['examples/images/1.png', 'examples/images/2.png', 'examples/images/3.png']
    Traceback (most recent call last):
      File "inference.py", line 153, in <module>
        main(args)
      File "inference.py", line 88, in main
        img = crop_face(img, app, args.crop_size)[0]
      File "/content/sber-swap/utils/inference/image_processing.py", line 16, in crop_face
        image, _ = app.get(image_full, crop_size)
      File "/content/sber-swap/insightface_func/face_detect_crop_multi.py", line 58, in get
        metric='default')
      File "/usr/local/lib/python3.7/dist-packages/insightface/model_zoo/scrfd.py", line 204, in detect
        im_ratio = float(img.shape[0]) / img.shape[1]
    AttributeError: 'NoneType' object has no attribute 'shape'
    

    You can skip this parameter only if you remove default=['examples/images/1.png', 'examples/images/2.png', 'examples/images/3.png'], nargs='+' from its definition. But then, when it is needed, the parameter itself will not work ... Please add the ability to choose between automatic face selection and a face taken from a screenshot of the video.
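    A hedged sketch of one way the option could be made truly optional (the argument name mirrors the report; the surrounding code is hypothetical, not the repository's actual inference.py):

    import argparse

    parser = argparse.ArgumentParser()
    # No example-image defaults: when the flag is omitted, the value is simply None.
    parser.add_argument('--target_faces_paths', default=None, nargs='+',
                        help='Optional crops of the faces to replace in the target video')
    args = parser.parse_args()

    if args.target_faces_paths:
        # explicit crops supplied: load and use them
        target_face_paths = args.target_faces_paths
    else:
        # flag omitted: signal "swap onto any detected face" downstream
        target_face_paths = []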

    opened by netrunner-exe 1
  • IEEE Xplore

    Hello! Please accept my congratulations on the publication in IEEE Xplore! There have been no updates for a long time; do you plan to develop the repository further, or has interest in it been lost?

    opened by netrunner-exe 0
  • Colab Type Error

    TypeError on Colab, in the inference section:

    TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.

    opened by andriken 0
  • Make more cleaner console output without warnings

    Hi! If possible, please clean up the console output in the next update! There are a lot of warnings that just clog up the useful output in the console. I didn't copy everything; the rest of the output (more of the same warnings) is identical. Thanks!

    /usr/local/lib/python3.7/dist-packages/kornia/augmentation/augmentation.py:1833: DeprecationWarning: GaussianBlur is no longer maintained and will be removed from the future versions. Please use RandomGaussianBlur instead.
      category=DeprecationWarning,
    /usr/local/lib/python3.7/dist-packages/scipy/fft/__init__.py:97: DeprecationWarning: The module numpy.dual is deprecated.  Instead of using dual, use the functions directly from numpy or scipy.
      from numpy.dual import register_func
    /usr/local/lib/python3.7/dist-packages/scipy/sparse/sputils.py:17: DeprecationWarning: `np.typeDict` is a deprecated alias for `np.sctypeDict`.
      supported_dtypes = [np.typeDict[x] for x in supported_dtypes]
    ... (the same scipy sputils warning is repeated many more times)
    

    and

    Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
      lmrks = np.array( lmrks.copy(), dtype=np.int )
    /content/sber-swap/utils/inference/masks.py:7: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
     16% 46/289 [00:05<00:09, 25.51it/s]
    ... (the same DeprecationWarning is printed again for every processed frame, interleaved with the progress bar)
    
    opened by netrunner-exe 0
  • Problem installing mxnet

    $ py -m pip install mxnet
    Collecting mxnet
      Using cached mxnet-1.7.0.post2-py2.py3-none-win_amd64.whl (33.1 MB)
    Collecting numpy<1.17.0,>=1.8.2
      Using cached numpy-1.16.6.zip (5.1 MB)
      Preparing metadata (setup.py): started
      Preparing metadata (setup.py): finished with status 'done'
    Collecting requests<2.19.0,>=2.18.4
      Using cached requests-2.18.4-py2.py3-none-any.whl (88 kB)
    Collecting graphviz<0.9.0,>=0.8.1
      Using cached graphviz-0.8.4-py2.py3-none-any.whl (16 kB)
    Collecting certifi>=2017.4.17
      Using cached certifi-2022.12.7-py3-none-any.whl (155 kB)
    Collecting chardet<3.1.0,>=3.0.2
      Using cached chardet-3.0.4-py2.py3-none-any.whl (133 kB)
    Collecting idna<2.7,>=2.5
      Using cached idna-2.6-py2.py3-none-any.whl (56 kB)
    Collecting urllib3<1.23,>=1.21.1
      Using cached urllib3-1.22-py2.py3-none-any.whl (132 kB)
    Installing collected packages: urllib3, idna, chardet, numpy, graphviz, certifi, requests, mxnet
      WARNING: The script chardetect.exe is installed in 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Scripts' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
      Attempting uninstall: numpy
        Found existing installation: numpy 1.24.0
        Uninstalling numpy-1.24.0:
          Successfully uninstalled numpy-1.24.0
      DEPRECATION: numpy is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
      Running setup.py install for numpy: started
      Running setup.py install for numpy: finished with status 'error'
      error: subprocess-exited-with-error

    Running setup.py install for numpy did not run successfully. exit code: 1

    [276 lines of output] Running from numpy source directory.

    Note: if you need reliable uninstall behavior, then install with pip instead of using setup.py install:

    - `pip install .`       (from a git repo or downloaded source
                             release)
    - `pip install numpy`   (last NumPy release on PyPi)
    

    C:\Users\Administrator\AppData\Local\Temp\pip-install-_vqy6jc2\numpy_50df99d6d0ce4aa7b689d7b0148eae3d\numpy\distutils\misc_util.py:476: SyntaxWarning: "is" with a literal. Did you mean "=="? return is_string(s) and ('*' in s or '?' is s) blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] NOT AVAILABLE

    blis_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blis not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] NOT AVAILABLE

    openblas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Found executable C:\Program Files\Git\usr\bin\DF.exe Could not locate executable C:\Program customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE

    atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] NOT AVAILABLE

    atlas_3_10_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] NOT AVAILABLE

    atlas_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] NOT AVAILABLE

    atlas_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] NOT AVAILABLE

    accelerate_info: NOT AVAILABLE

    C:\Users\Administrator\AppData\Local\Temp\pip-install-_vqy6jc2\numpy_50df99d6d0ce4aa7b689d7b0148eae3d\numpy\distutils\system_info.py:639: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. self.calc_info() blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blas not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] NOT AVAILABLE

    C:\Users\Administrator\AppData\Local\Temp\pip-install-_vqy6jc2\numpy_50df99d6d0ce4aa7b689d7b0148eae3d\numpy\distutils\system_info.py:639: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. self.calc_info() blas_src_info: NOT AVAILABLE

    C:\Users\Administrator\AppData\Local\Temp\pip-install-_vqy6jc2\numpy_50df99d6d0ce4aa7b689d7b0148eae3d\numpy\distutils\system_info.py:639: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. self.calc_info() NOT AVAILABLE

    'svnversion' is not recognized as an internal or external command, operable program or batch file. non-existing path in 'numpy\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] NOT AVAILABLE

    openblas_lapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] NOT AVAILABLE

    openblas_clapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas,lapack not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] NOT AVAILABLE

    atlas_3_10_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:
    No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:
    No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE

    atlas_3_10_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:
    No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:
    No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE

    atlas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:
    No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:
    No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE

    atlas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:
    No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:
    No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE

    lapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack not found in ['C:\Users\Administrator\AppData\Local\Programs\Python\Python311\lib', 'C:\', 'C:\Users\Administrator\AppData\Local\Programs\Python\Python311\libs'] NOT AVAILABLE

    C:\Users\Administrator\AppData\Local\Temp\pip-install-_vqy6jc2\numpy_50df99d6d0ce4aa7b689d7b0148eae3d\numpy\distutils\system_info.py:639: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. self.calc_info() lapack_src_info: NOT AVAILABLE

    C:\Users\Administrator\AppData\Local\Temp\pip-install-_vqy6jc2\numpy_50df99d6d0ce4aa7b689d7b0148eae3d\numpy\distutils\system_info.py:639: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. self.calc_info() NOT AVAILABLE

    C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools_distutils\dist.py:264: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running install C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources creating build creating build\src.win-amd64-3.1 creating build\src.win-amd64-3.1\numpy creating build\src.win-amd64-3.1\numpy\distutils building library "npymath" sources No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output]

    note: This error originates from a subprocess, and is likely not a problem with pip. Rolling back uninstall of numpy Moving to c:\users\administrator\appdata\local\programs\python\python311\lib\site-packages\numpy-1.24.0.dist-info
    from C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages~umpy-1.24.0.dist-info Moving to c:\users\administrator\appdata\local\programs\python\python311\lib\site-packages\numpy
    from C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages~umpy Moving to c:\users\administrator\appdata\local\programs\python\python311\scripts\f2py.exe from C:\Users\Administrator\AppData\Local\Temp\pip-uninstall-a_fvs93n\f2py.exe error: legacy-install-failure

    Encountered error while trying to install package.

    numpy

    note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure.
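    The failure happens because mxnet 1.7 pins numpy<1.17 (see the log above), which has no wheels for Python 3.11 and cannot be built without MSVC. A hedged workaround is to install the project into an older interpreter, for example:

    conda create -n sber-swap python=3.8 -y
    conda activate sber-swap
    pip install mxnet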

    opened by Rhenm091619 1
  • Mxnet error . for no GPU

    I can't get the versions working on my side. I tried some of the versions, but none of them work.

    File "C:\Users\Administrator\Desktop\Ghost\sber-swap\coordinate_reg\image_infer.py", line 4, in import mxnet as mx ModuleNotFoundError: No module named 'mxnet'

    Does anyone have a solution for this error?
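    If the machine has no GPU, the CPU-only build is usually enough to satisfy the import (a hedged suggestion, not tested against every pinned requirement):

    pip install mxnet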

    opened by Rhenm091619 0
  • Cant install some the requiments

    I got this error message. What seems to be the problem?

    Traceback (most recent call last):
      File "C:\Users\Administrator\Desktop\ghost\preprocess_vgg.py", line 5, in <module>
        from insightface_func.face_detect_crop_single import Face_detect_crop
      File "C:\Users\Administrator\Desktop\ghost\insightface_func\face_detect_crop_single.py", line 8, in <module>
        from insightface.model_zoo import model_zoo
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\insightface\__init__.py", line 16, in <module>
        from . import model_zoo
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
        from .model_zoo import get_model
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
        from .arcface_onnx import *
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
        import onnx
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx\__init__.py", line 20, in <module>
        import onnx.helper  # noqa
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx\helper.py", line 17, in <module>
        from onnx import mapping
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx\mapping.py", line 27, in <module>
        int(TensorProto.STRING): np.dtype(np.object)
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\__init__.py", line 284, in __getattr__
        raise AttributeError("module {!r} has no attribute "
    AttributeError: module 'numpy' has no attribute 'object'

    And this:

    Traceback (most recent call last):
      File "C:\Users\Administrator\Desktop\ghost\train.py", line 27, in <module>
        from utils.training.detector import detect_landmarks, paint_eyes
      File "C:\Users\Administrator\Desktop\ghost\utils\training\detector.py", line 6, in <module>
        from AdaptiveWingLoss.utils.utils import get_preds_fromhm
    ModuleNotFoundError: No module named 'AdaptiveWingLoss.utils'
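    Two hedged pointers based on the messages above: if AdaptiveWingLoss is missing from your checkout, re-running the submodule steps from the Installation section may restore it; and np.object was removed in NumPy 1.24, so pinning an older NumPy avoids the onnx mapping error:

    git submodule init
    git submodule update
    pip install "numpy<1.24"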

    opened by Rhenm091619 4
  • AttributeError: module 'mxnet' has no attribute 'mod'

    Trying to run the repo on Windows 10. Installed mxnet-cu102-2.0.0b20201108.

    The code execution stops on line 115 of image_infer.py: model = mx.mod.Module(symbol=sym, context=ctx, label_names=None)

    Full error: AttributeError: module 'mxnet' has no attribute 'mod'
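    The Module API (mx.mod) only exists in the 1.x line of MXNet and appears to be gone from the 2.0 betas, so a hedged fix is to install a 1.x CUDA build instead, for example:

    pip uninstall mxnet-cu102
    pip install "mxnet-cu102<2.0"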

    opened by antonnes 1
  • Result = source

    Running the colab example locally, I get the attached result image.

    As you can see, this is just the source image again with no swap. Here is my run log: SBER Example.txt

    Why is no actual swapping occurring?

    opened by k128 0
  • Dockerfile

    Not an issue, but I think this could come in handy for someone :)

    Dockerfile

    FROM nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
    
    RUN apt-get update && apt-get install -y software-properties-common
    RUN add-apt-repository ppa:deadsnakes/ppa -y
    RUN apt-get update && apt-get install -y \
        wget \
        python3.8 \
        python3.8-distutils \
        ffmpeg \
        libsm6 \
        libxext6
    
    RUN wget https://bootstrap.pypa.io/get-pip.py
    
    RUN python3.8 get-pip.py
    
    COPY requirements.txt requirements.txt
    
    RUN pip install -r requirements.txt
    

    python3.8 inference.py --target_path {PATH_TO_IMAGE} --image_to_image True
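    To use it, one option is to build the image from the repository root and run inference inside the container with the repo mounted (the --gpus flag assumes the NVIDIA Container Toolkit is installed; paths are placeholders):

    docker build -t sber-swap .
    docker run --gpus all -v "$(pwd)":/workspace -w /workspace sber-swap \
        python3.8 inference.py --target_path {PATH_TO_IMAGE} --image_to_image True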

    opened by Dutch77 2
Owner
Sber AI
Implementation of the algorithm shown in the article "Modelo de Predicción de Éxito de Canciones Basado en Descriptores de Audio"

Success Predictor Implementation of the algorithm shown in the article "Modelo de Predicción de Éxito de Canciones Basado en Descriptores de Audio". B

Rodrigo Nazar Meier 4 Mar 17, 2022
Stochastic Normalizing Flows

Stochastic Normalizing Flows We introduce stochasticity in Boltzmann-generating flows. Normalizing flows are exact-probability generative models that

AI4Science group, FU Berlin (Frank Noé and co-workers) 50 Dec 16, 2022
FcaNet: Frequency Channel Attention Networks

FcaNet: Frequency Channel Attention Networks PyTorch implementation of the paper "FcaNet: Frequency Channel Attention Networks". Simplest usage Models

327 Dec 27, 2022
Geometric Sensitivity Decomposition

Geometric Sensitivity Decomposition This repo is the official implementation of A Geometric Perspective towards Neural Calibration via Sensitivity Dec

16 Dec 26, 2022
Learning to Reconstruct 3D Non-Cuboid Room Layout from a Single RGB Image

NonCuboidRoom Paper Learning to Reconstruct 3D Non-Cuboid Room Layout from a Single RGB Image Cheng Yang*, Jia Zheng*, Xili Dai, Rui Tang, Yi Ma, Xiao

67 Dec 15, 2022
FindFunc is an IDA PRO plugin to find code functions that contain a certain assembly or byte pattern, reference a certain name or string, or conform to various other constraints.

FindFunc: Advanced Filtering/Finding of Functions in IDA Pro FindFunc is an IDA Pro plugin to find code functions that contain a certain assembly or b

213 Dec 17, 2022
Bunch of different tools which helps visualizing and annotating images for semantic/instance segmentation tasks

Data Framework for Semantic/Instance Segmentation Bunch of different tools which helps visualizing, transforming and annotating images for semantic/in

Bruno Fernandes Carvalho 5 Dec 21, 2022
This is the repository for Learning to Generate Piano Music With Sustain Pedals

SusPedal-Gen This is the official repository of Learning to Generate Piano Music With Sustain Pedals Demo Page Dataset The dataset used in this projec

Joann Ching 12 Sep 02, 2022
KDD CUP 2020 Automatic Graph Representation Learning: 1st Place Solution

KDD CUP 2020: AutoGraph Team: aister Members: Jianqiang Huang, Xingyuan Tang, Mingjian Chen, Jin Xu, Bohang Zheng, Yi Qi, Ke Hu, Jun Lei Team Introduc

96 May 30, 2022
Repository for the "Gotta Go Fast When Generating Data with Score-Based Models" paper

Gotta Go Fast When Generating Data with Score-Based Models This repo contains the official implementation for the paper Gotta Go Fast When Generating

Alexia Jolicoeur-Martineau 89 Nov 09, 2022
Unsupervised captioning - Code for Unsupervised Image Captioning

Unsupervised Image Captioning by Yang Feng, Lin Ma, Wei Liu, and Jiebo Luo Introduction Most image captioning models are trained using paired image-se

Yang Feng 207 Dec 24, 2022
Code for the published paper : Learning to recognize rare traffic sign

Improving traffic sign recognition by active search This repo contains code for the paper : "Learning to recognise rare traffic signs" How to use this

samsja 4 Jan 05, 2023
CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images

CurriculumNet Introduction This repo contains related code and models from the ECCV 2018 CurriculumNet paper. CurriculumNet is a new training strategy

156 Jul 04, 2022
HiFi-GAN: High Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks

HiFiGAN Denoiser This is a Unofficial Pytorch implementation of the paper HiFi-GAN: High Fidelity Denoising and Dereverberation Based on Speech Deep F

Rishikesh (ऋषिकेश) 134 Dec 27, 2022
PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP. Democratize AI for everyone.

PatrickStar: Parallel Training of Large Language Models via a Chunk-based Memory Management Meeting PatrickStar Pre-Trained Models (PTM) are becoming

Tencent 633 Dec 28, 2022
Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.

Faster R-CNN and Mask R-CNN in PyTorch 1.0 maskrcnn-benchmark has been deprecated. Please see detectron2, which includes implementations for all model

Facebook Research 9k Jan 04, 2023
simple demo codes for Learning to Teach with Dynamic Loss Functions

Learning to Teach with Dynamic Loss Functions This repo contains the simple demo for the NeurIPS-18 paper: Learning to Teach with Dynamic Loss Functio

Lijun Wu 15 Dec 30, 2021
Full-featured Decision Trees and Random Forests learner.

CID3 This is a full-featured Decision Trees and Random Forests learner. It can save trees or forests to disk for later use. It is possible to query tr

Alejandro Penate-Diaz 3 Aug 15, 2022
​TextWorld is a sandbox learning environment for the training and evaluation of reinforcement learning (RL) agents on text-based games.

TextWorld A text-based game generator and extensible sandbox learning environment for training and testing reinforcement learning (RL) agents. Also ch

Microsoft 983 Dec 23, 2022
A Pytorch Implementation of Source Data-free Domain Adaptation for a Faster R-CNN

A Pytorch Implementation of Source Data-free Domain Adaptation for a Faster R-CNN Please follow Faster R-CNN and DAF to complete the environment confi

2 Jan 12, 2022