TEQS

Welcome to The Eigensolver Quantum School, a crash course designed by students for students. The aim of this program is to take someone with no QC knowledge and put them through a five-day crash course that leaves them in the frame of mind needed to learn from formal texts such as Nielsen and Chuang (which is also one of the prizes of our two-day hackathon!).

TEQS Prerequisites

One of the beauties of learning quantum computing is that, at an elementary level, very few prerequisites are required. The TEQS course is designed so that the only prerequisites are basic linear algebra and classical information processing. To make sure everyone has those under their belt before attending the crash course, we made the three notebooks below; we encourage everyone to read them and solve their exercises (a short code sketch of the first two ideas follows the list).

  • Chapter 1 is on vectors and how they are used to represent the state of a qubit
  • Chapter 2 is on operators and how they are used to manipulate the state of a qubit
  • Chapter 3 is on Classical Information and Boolean Logic
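As a taste of what Chapters 1 and 2 cover, here is a minimal sketch using NumPy (our own illustration; the notebooks themselves may use different tooling): a qubit state as a unit-norm vector and operators as unitary matrices acting on it.

```python
import numpy as np

# Chapter 1: a qubit state is a length-2 complex vector with unit norm.
ket0 = np.array([1, 0], dtype=complex)                     # |0>
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # (|0> + |1>)/sqrt(2)

# Chapter 2: operators are 2x2 unitary matrices that manipulate the state.
X = np.array([[0, 1], [1, 0]], dtype=complex)                  # bit flip
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # Hadamard

print(X @ ket0)                  # |1>
print(H @ ket0)                  # the |+> state
print(np.linalg.norm(H @ ket0))  # unitaries preserve the norm: 1.0
```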

Module Requirements

Lectures

Day 1:

Overview of mathematical prerequisites, brief introduction to quantum states and operators, and classical computing. Content available here.
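The classical-computing part of Day 1 comes down to Boolean logic. As a rough illustration (not taken from the lecture notebook), the sketch below builds the standard gates out of NAND alone, which is one way to see that NAND is universal:

```python
# Plain-Python illustration: NAND is universal for Boolean logic.
def NAND(a: int, b: int) -> int:
    return 1 - (a & b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))  # reproduces the XOR truth table
```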

Day 2:

Reduced quantum postulates from a quantum computing perspective and introduction to basic quantum circuits and simulators using Qiskit. Content available here.
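If you want to try the simulator side before Day 2, the hypothetical snippet below builds and samples a Bell-state circuit. The imports match the Qiskit releases current at the time of the course; newer releases (1.0 and later) removed `execute` and moved the simulator into the separate `qiskit-aer` package, so adjust accordingly.

```python
from qiskit import QuantumCircuit, Aer, execute  # Qiskit 0.x-era imports

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into an equal superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0 (Bell state)
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                # roughly half '00' and half '11', never '01' or '10'
```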

Day 3:

The no-cloning theorem, quantum teleportation protocol, superdense coding, and BB84 cryptographic protocol. Content available here.
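To give a flavour of BB84 without any quantum hardware, here is a toy, noise-free sifting simulation in plain Python. It idealises the quantum step (Bob's outcome simply matches Alice's bit whenever their bases agree) and is not the protocol implementation used in the Day 3 notebook.

```python
import random

n = 20
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("ZX") for _ in range(n)]  # Z = computational, X = Hadamard basis
bob_bases   = [random.choice("ZX") for _ in range(n)]

# With no eavesdropper and no noise, Bob's result equals Alice's bit when
# their bases match; otherwise his outcome is uniformly random.
bob_results = [
    bit if a == b else random.randint(0, 1)
    for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
]

# Sifting: keep only the positions where the bases matched.
sifted_key = [bit for bit, a, b in zip(bob_results, alice_bases, bob_bases) if a == b]
print("sifted key:", sifted_key)
```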

Day 4:

Quantum oracles, Deutsch's algorithm, and how to construct quantum circuits that implement them. Content available here.
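As a sketch of what a Day 4 circuit might look like (one possible construction, not necessarily the one in the notebook, and with the same Qiskit-version caveat as above), Deutsch's algorithm decides with a single oracle query whether a one-bit function is constant or balanced:

```python
from qiskit import QuantumCircuit, Aer, execute

def deutsch_circuit(balanced: bool) -> QuantumCircuit:
    """One query to the oracle decides whether f is constant or balanced."""
    qc = QuantumCircuit(2, 1)
    qc.x(1)          # ancilla to |1>
    qc.h([0, 1])     # |+> on the input qubit, |-> on the ancilla
    if balanced:
        qc.cx(0, 1)  # oracle for the balanced function f(x) = x
    # a constant oracle such as f(x) = 0 adds no gates at all
    qc.h(0)
    qc.measure(0, 0)
    return qc

backend = Aer.get_backend("qasm_simulator")
for balanced in (False, True):
    counts = execute(deutsch_circuit(balanced), backend, shots=256).result().get_counts()
    print("balanced" if balanced else "constant", counts)  # '0' => constant, '1' => balanced
```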

Day 5:

IBM Quantum Fun Day! Introduction to RasQberry and Question and Answer Panel. Content available here.

Hackathon!

Welcome to the Eigensolvers Quantum School Hackathon! The notebook in this folder contains four problems covering all the material from the lectures, designed for participants at every level of experience in quantum computing. You will receive a certificate level based on the problems you complete:

  • First two problems: Beginner
  • First three problems: Intermediate
  • All four problems: Advanced

There are also prizes for the winners of the hackathon:

  • First Place: RasQberry - Premium
  • Second Place: RasQberry - All Inclusive
  • Third Place: RasQberry - Customizable DIY Kit
  • Fourth Place: Nielsen and Chuang

The ranking will be based on the weighted cost of your solutions to problems 3 and 4, as defined in the notebook.

To submit your solutions, fill out the form below with the code you wrote for each problem: https://forms.gle/KkA6gBbhrCZpWgnX8

The deadline for submission is Sunday, July 11th, at 7 pm Indian Standard Time. Remember, the ultimate goal is to have fun and learn some quantum computing while you're at it. All the best!

Owner
The Eigensolvers