(Personalized) Page-Rank computation using PyTorch

Overview

torch-ppr


This package computes page rank and personalized page rank via power iteration with PyTorch, which also supports running the computation on GPUs (or other accelerators).
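
For intuition, page rank can be computed by repeatedly applying the update x ← (1 − α) · A · x + α · x₀ to a normalized adjacency matrix. The following is a dense, self-contained sketch of that idea, not the package's actual (sparse) implementation; the teleport probability of 0.05 is an assumption that appears consistent with the example scores shown below.

# Illustrative power-iteration sketch with a dense adjacency matrix.
# Not torch-ppr's implementation (which uses sparse matrices);
# the teleport probability 0.05 is an assumption.
import torch


def dense_page_rank(edge_index: torch.Tensor, num_nodes: int, teleport: float = 0.05, num_iter: int = 100) -> torch.Tensor:
    # Build a symmetric dense adjacency matrix from the edge list.
    adj = torch.zeros(num_nodes, num_nodes)
    adj[edge_index[0], edge_index[1]] = 1.0
    adj = adj + adj.t()
    # Column-normalize so that each column sums to one.
    adj = adj / adj.sum(dim=0, keepdim=True).clamp_min(1e-12)
    # Power iteration: x <- (1 - teleport) * A @ x + teleport * x0.
    x0 = torch.full((num_nodes,), 1.0 / num_nodes)
    x = x0.clone()
    for _ in range(num_iter):
        x = (1.0 - teleport) * (adj @ x) + teleport * x0
    return x


edge_index = torch.as_tensor([(0, 1), (1, 2), (1, 3), (2, 4)]).t()
print(dense_page_rank(edge_index, num_nodes=5))  # ≈ the scores shown in the example below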

💪 Getting Started

As a first example, consider this small graph with five nodes.

Its edge list is given as

>>> import torch
>>> edge_index = torch.as_tensor(data=[(0, 1), (1, 2), (1, 3), (2, 4)]).t()

We can use

>>> from torch_ppr import page_rank
>>> page_rank(edge_index=edge_index)
tensor([0.1269, 0.3694, 0.2486, 0.1269, 0.1281])

to calculate the page rank, i.e., a measure of global importance. We notice that the central node 1 receives the largest importance score, while all other nodes have lower importance. Moreover, the two structurally indistinguishable nodes 0 and 3 receive the same page rank.

We can also calculate personalized page rank, which measures importance from the perspective of a single node. For instance, for node 2, we have

>>> from torch_ppr import personalized_page_rank
>>> personalized_page_rank(edge_index=edge_index, indices=[2])
tensor([[0.1103, 0.3484, 0.2922, 0.1103, 0.1388]])

Thus, the most important node is again the central node 1. Nodes 0 and 3 receive the same importance value, which is below that of the direct neighbor, node 4.

By virtue of using PyTorch, the code seamlessly works on GPUs, too, and supports automatic differentiation. Moreover, the calculation of personalized page rank supports automatic batch size optimization via torch_max_mem.
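
For example, a minimal sketch of running on an accelerator (assuming CUDA is available, that the computation follows the device of the input tensors, and that passing several indices works analogously to the single-index example above; check the documentation for an explicit device argument):

import torch
from torch_ppr import page_rank, personalized_page_rank

edge_index = torch.as_tensor(data=[(0, 1), (1, 2), (1, 3), (2, 4)]).t()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Global page rank on the chosen device.
scores = page_rank(edge_index=edge_index.to(device))

# Personalized page rank for several source nodes at once; batch sizes are
# tuned automatically via torch_max_mem when memory is tight.
ppr = personalized_page_rank(edge_index=edge_index.to(device), indices=[0, 1, 2])
print(scores.shape, ppr.shape)  # expected: torch.Size([5]) torch.Size([3, 5])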

🚀 Installation

The most recent release can be installed from PyPI with:

$ pip install torch_ppr

The most recent code and data can be installed directly from GitHub with:

$ pip install git+https://github.com/mberr/torch-ppr.git

👐 Contributing

Contributions, whether filing an issue, making a pull request, or forking, are appreciated. See CONTRIBUTING.md for more information on getting involved.

👋 Attribution

⚖️ License

The code in this package is licensed under the MIT License.

🍪 Cookiecutter

This package was created with @audreyfeldroy's cookiecutter package using @cthoyt's cookiecutter-snekpack template.

🛠️ For Developers

See developer instructions

The final section of the README is for those who want to get involved by making a code contribution.

Development Installation

To install in development mode, use the following:

$ git clone https://github.com/mberr/torch-ppr.git
$ cd torch-ppr
$ pip install -e .

🥼 Testing

After cloning the repository and installing tox with pip install tox, the unit tests in the tests/ folder can be run reproducibly with:

$ tox

Additionally, these tests are automatically re-run with each commit in a GitHub Action.

📖 Building the Documentation

The documentation can be built locally using the following:

$ git clone https://github.com/mberr/torch-ppr.git
$ cd torch-ppr
$ tox -e docs
$ open docs/build/html/index.html

Building the documentation automatically installs the package as well as the docs extra specified in setup.cfg. Sphinx plugins like texext can be added there. Additionally, they need to be added to the extensions list in docs/source/conf.py.
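
For illustration, the result might look roughly like this in docs/source/conf.py (the entries shown are examples, not the repository's actual list):

# docs/source/conf.py (illustrative excerpt)
extensions = [
    "sphinx.ext.autodoc",  # example of an entry that may already be present
    "texext",              # newly added plugin; also add it to the docs extra in setup.cfg
]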

📦 Making a Release

After installing the package in development mode and installing tox with pip install tox, the commands for making a new release are contained within the finish environment in tox.ini. Run the following from the shell:

$ tox -e finish

This script does the following:

  1. Uses Bump2Version to switch the version number in setup.cfg, src/torch_ppr/version.py, and docs/source/conf.py to not have the -dev suffix
  2. Packages the code in both a tar archive and a wheel using build
  3. Uploads to PyPI using twine. Be sure to have a .pypirc file configured to avoid the need for manual input at this step
  4. Pushes to GitHub. You'll need to make a release based on the commit where the version was bumped.
  5. Bumps the version to the next patch. If you made big changes and want to bump the version by minor, you can use tox -e bumpversion minor afterwards.

Comments
  • `torch.sparse.mm` breaking API changes

    `torch.sparse.mm` breaking API changes

    Suddenly, everything stopped working 😱, presumably because of the changes to torch.sparse. In particular, I am on PyTorch 1.10, the master branch of PyKEEN, and torch-ppr 0.0.5.

    Problem 1: the allclose() check does not pass now: https://github.com/mberr/torch-ppr/blob/921898f1a4b7770e6cdd1931e935262e456eb3c9/src/torch_ppr/utils.py#L221-L222

    MWE:

    import torch
    from torch_ppr import page_rank
    
    from pykeen.datasets import FB15k237
    
    dataset = FB15k237(create_inverse_triples=False)
    edges = dataset.training.mapped_triples[:, [0, 2]].t()
    pr = page_rank(edge_index=torch.cat([edges, edges.flip(0)], dim=-1), num_nodes=dataset.num_entities)
    
    >> ValueError: Invalid column sum: tensor([1.0000, 1.0000, 1.0000,  ..., 1.0000, 1.0000, 1.0000]). expected 1.0
    

    Looking at it in the debugger:

    • adj_sum does sum up to the number of nodes
    • the check fails with the default tolerance, but passes if I relax it to rtol=1e-4 or atol=1e-4

    Problem 2: the signature of torch.sparse.addmm differs from the one used in power_iteration, so the call fails with an unknown-keyword-argument error.

    https://github.com/mberr/torch-ppr/blob/921898f1a4b7770e6cdd1931e935262e456eb3c9/src/torch_ppr/utils.py#L310

    In fact, I can't find where the kwargs input, sparse, dense come from, because the current signature uses the less readable mat, mat1, mat2. I traced back as far as Torch 1.3.0 and still can't find where they originated. Where does this signature come from? 😅

    My test env

    torch                 1.10.0
    torch-ppr             0.0.5
    
    opened by migalkin 7
  • Incorporating edge weights

    Incorporating edge weights

    Hello,

    Thank you for this great repository; it is a handy package that performs very well! I was wondering, however: is it possible to incorporate edge weights into the personalized page rank method?

    Best, Filip

    opened by Filco306 5
  • RuntimeError torch.sparse.addmm different torch tensor shape

    RuntimeError torch.sparse.addmm different torch tensor shape

    Dear torch-ppr

    I installed torch-ppr on my Mac with Python 3.9 and ran the example code:

    >>> import torch
    >>> edge_index = torch.as_tensor(data=[(0, 1), (1, 2), (1, 3), (2, 4)]).t()
    >>> from torch_ppr import page_rank
    >>> page_rank(edge_index)
    

    I got a RuntimeError:

    x = torch.sparse.addmm(input=x0, sparse=adj, dense=x, beta=alpha, alpha=beta)
    RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x4 and 2x1)
    

    I printed the shapes of x0, adj, and x:

    torch.Size([2, 1])
    torch.Size([2, 4])
    torch.Size([2, 1])
    

    I believe that the shape of adj should be 2x2, but I might be wrong. I found the code that defines adj:

    # convert to sparse matrix, shape: (n, n)
    adj = edge_index_to_sparse_matrix(edge_index=edge_index, num_nodes=num_nodes)
    adj = adj + adj.t()
    

    The adj is symmetric.

    How can I fix this RuntimeError? Any suggestions? Thanks in advance. meatball1982 12-May-2022 09:54:50

    opened by meatball1982 4
  • Expose API functions from top-level

    Expose API functions from top-level

    Also update the cookiecutter package in https://github.com/cthoyt/cookiecutter-snekpack/commit/fa032ffc3c718c208d3a03e212aaa299c193de94 so that this is part of the template by default.

    opened by cthoyt 2
  • Formulate page-rank as a torch.nn Layer

    Formulate page-rank as a torch.nn Layer

    Thank you for this repo!

    The reason for requesting a 'layer' formulation is to convert the page_rank function to an ONNX graph with torch.onnx (which only accepts models).

    Once I have the ONNX model, I can compile it for different hardware (other than CUDA).

    Maybe only the forward pass is needed, with no backward pass, although I think the computation would be differentiable (see the wrapper sketch after this comment list).

    Thanks.

    opened by LM-AuroTripathy 8
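
Regarding the last comment, a minimal sketch of such a wrapper could look as follows. This is not part of torch-ppr, the class name is hypothetical, and whether the underlying sparse operations export cleanly to ONNX is untested.

# Hypothetical wrapper turning the page_rank function into a torch.nn.Module,
# so it can be handed to torch.onnx.export. Forward pass only.
import torch
from torch import nn
from torch_ppr import page_rank


class PageRankLayer(nn.Module):
    """Thin forward-only layer computing page rank scores from an edge index."""

    def forward(self, edge_index: torch.Tensor) -> torch.Tensor:
        return page_rank(edge_index=edge_index)


layer = PageRankLayer()
scores = layer(torch.as_tensor([(0, 1), (1, 2), (1, 3), (2, 4)]).t())
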
Releases(v0.0.8)
  • v0.0.8(Jul 20, 2022)

    What's Changed

    • Update error message of validate_adjacency by @mberr in https://github.com/mberr/torch-ppr/pull/18
    • Add option to add identity matrix by @mberr in https://github.com/mberr/torch-ppr/pull/20

    Full Changelog: https://github.com/mberr/torch-ppr/compare/v0.0.7...v0.0.8

  • v0.0.7(Jun 29, 2022)

    What's Changed

    • Fix torch 1.12 compat by @mberr in https://github.com/mberr/torch-ppr/pull/17

    Full Changelog: https://github.com/mberr/torch-ppr/compare/v0.0.6...v0.0.7

  • v0.0.6(Jun 29, 2022)

    What's Changed

    • Fix language tag in docs by @cthoyt in https://github.com/mberr/torch-ppr/pull/13
    • Fix torch.sparse.addmm use by @mberr in https://github.com/mberr/torch-ppr/pull/12
    • Enable CI on multiple versions of pytorch by @cthoyt in https://github.com/mberr/torch-ppr/pull/14
    • Improve sparse CSR support by @mberr in https://github.com/mberr/torch-ppr/pull/15
    • Increase numerical tolerance by @mberr in https://github.com/mberr/torch-ppr/pull/16

    Full Changelog: https://github.com/mberr/torch-ppr/compare/v0.0.5...v0.0.6

  • v0.0.5(May 12, 2022)

    What's Changed

    • Improve input validation by @mberr in https://github.com/mberr/torch-ppr/pull/10

    Full Changelog: https://github.com/mberr/torch-ppr/compare/v0.0.4...v0.0.5

  • v0.0.4(May 10, 2022)

    What's Changed

    • Expose num_nodes parameter by @mberr in https://github.com/mberr/torch-ppr/pull/8

    Full Changelog: https://github.com/mberr/torch-ppr/compare/v0.0.3...v0.0.4

  • v0.0.3(May 10, 2022)

    What's Changed

    • Add imports to code examples in README by @cthoyt in https://github.com/mberr/torch-ppr/pull/6
    • Expose API functions from top-level by @cthoyt in https://github.com/mberr/torch-ppr/pull/7

    New Contributors

    • @cthoyt made their first contribution in https://github.com/mberr/torch-ppr/pull/6

    Full Changelog: https://github.com/mberr/torch-ppr/compare/v0.0.2...v0.0.3

  • v0.0.2(May 9, 2022)

    What's Changed

    • Fix device resolution order by @mberr in https://github.com/mberr/torch-ppr/pull/5

    Full Changelog: https://github.com/mberr/torch-ppr/compare/v0.0.1...v0.0.2

  • v0.0.1(May 6, 2022)

Owner

Max Berrendorf