PyBrain -- the Python Machine Learning Library
===============================================

INSTALLATION
------------
Quick answer: make sure you have SciPy installed, then

    python setup.py install

Longer answer: if the above was any trouble, we keep more detailed installation
instructions (including those for the dependencies) up-to-date in a wiki at:
http://wiki.github.com/pybrain/pybrain/installation

DOCUMENTATION
-------------
Please read docs/documentation.pdf or browse docs/html/*, featuring: quickstart, tutorials, API reference, etc.
If you have matplotlib, the scripts in examples/* may be instructive as well.
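For readers who want a feel for the library before opening the docs, here is a minimal quickstart sketch: a tiny network trained on XOR with buildNetwork, SupervisedDataSet, and BackpropTrainer. The layer sizes and epoch count are arbitrary illustration choices, not prescribed values.

```python
# Minimal quickstart sketch: a tiny network trained on XOR.
# Layer sizes and the number of epochs are arbitrary illustration choices.
from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

net = buildNetwork(2, 3, 1)            # 2 inputs, 3 hidden units, 1 output
ds = SupervisedDataSet(2, 1)
for inp, target in [((0, 0), (0,)), ((0, 1), (1,)),
                    ((1, 0), (1,)), ((1, 1), (0,))]:
    ds.addSample(inp, target)

trainer = BackpropTrainer(net, ds)
for _ in range(100):                   # a few training epochs
    trainer.train()

print(net.activate((0, 1)))            # should drift towards 1.0
```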
PyBrain - Another Python Machine Learning Library.
Overview
Comments
-
python3.5.2
opened by hhuhhu · 4
Does PyBrain support Python 3.5.2? A simple `import pybrain` aborts, as shown below. I installed it just with `pip install pybrain`.

```
D:\Anaconda3.5.2\python.exe F:/gitProjects/vnpy_future/pre_code/cnn/rnn.py
Traceback (most recent call last):
  File "F:/gitProjects/vnpy_future/pre_code/cnn/rnn.py", line 7, in <module>
    import pybrain
  File "D:\Anaconda3.5.2\lib\site-packages\pybrain\__init__.py", line 1, in <module>
    from structure.__init__ import *
ImportError: No module named 'structure'
```
-
Port most of the code to be Python 3 compatible.
opened by wernight · 4
The code should still work on Python 2.
Import and print are the main changes.
Not everything may be ported. The main part left intact is range(): in Python 2 it returns a list, and in Python 3 it returns an iterator. This should speed things up and in most cases should work without further changes. See http://www.diveinto.org/python3/porting-code-to-python-3-with-2to3.html#xrange
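For illustration, a minimal compatibility sketch of the idiom described above (mine, not a line from the PyBrain diff): bind a lazy range-like name once and use it on both interpreters.

```python
# Works unchanged on Python 2 and 3.
try:
    range_ = xrange          # Python 2: xrange yields values lazily
except NameError:
    range_ = range           # Python 3: range is already lazy

total = sum(i * i for i in range_(10))
print(total)                 # 285 on both interpreters
```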
-
PyPi package update
opened by wernight · 3
There seem to have been many changes since 2009 (over 4 years ago). The version number on GitHub is almost the same, yet it's probably worth making another release.
PyPI allows installing simply for a single user or system-wide, among other things. Not that git clone isn't good in many cases.
-
IndexError after recurrent network copy
opened by wernight · 3
Steps:

```
>>> from pybrain.tools.shortcuts import buildNetwork
>>> net = buildNetwork(2, 4, 1, recurrent=True)
>>> net.activate((1, 1))
array([ 0.02202066])
>>> net.copy()
>>> net.activate((1, 1))
IndexError: index out of bounds
```

This seems to happen only when recurrent=True.
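A possible workaround, sketched under the assumption that only the copied network's internal buffers are at fault (my guess, not a confirmed diagnosis): instead of net.copy(), build a second identical recurrent network and transfer the trained parameters, so the new network starts with fresh buffers.

```python
from pybrain.tools.shortcuts import buildNetwork

net = buildNetwork(2, 4, 1, recurrent=True)
net.activate((1, 1))

# Build a structurally identical network and copy the weights across.
clone = buildNetwork(2, 4, 1, recurrent=True)
clone._setParameters(net.params.copy())
print(clone.activate((1, 1)))   # clone has its own fresh activation buffers
```
-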
KeyError in sortModules
opened by ghost · 3
I have an issue with the sortModules method throwing a KeyError.
Following the tutorial example, I created a script with the following:

```python
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import scipy
import numpy as np

print "\nPython version: %s" % sys.version
print "Numpy version: %s" % np.version.version
print "Scipy version: %s" % scipy.version.version

from pybrain.structure import FeedForwardNetwork
from pybrain.structure import LinearLayer, SigmoidLayer
from pybrain.structure import FullConnection

# Create network
nn = FeedForwardNetwork()

# Set network parameters
INPUT_NDS = 2
HIDDEN_NDS = 3
OUTPUT_NDS = 1

# Create Feed Forward Network layers
inLayer = LinearLayer(INPUT_NDS)
hiddenLayer = SigmoidLayer(HIDDEN_NDS)
outLayer = LinearLayer(OUTPUT_NDS)

# Fully connect all layers
in_to_hidden = FullConnection(inLayer, hiddenLayer)
hidden_to_out = FullConnection(hiddenLayer, outLayer)

# Add the connected layers to the network
nn.addConnection(in_to_hidden)
nn.addConnection(hidden_to_out)

# Sort modules to prepare the NN for use
nn.sortModules()
```

Which gives me:

```
Python version: 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) [GCC 4.4.3]
Numpy version: 1.3.0
Scipy version: 0.7.0
Traceback (most recent call last):
  File "/tmp/py7317Q6c", line 46, in <module>
    nn.sortModules()
  File "/usr/local/lib/python2.6/dist-packages/PyBrain-0.3-py2.6.egg/pybrain/structure/networks/network.py", line 224, in sortModules
    self._topologicalSort()
  File "/usr/local/lib/python2.6/dist-packages/PyBrain-0.3-py2.6.egg/pybrain/structure/networks/network.py", line 188, in _topologicalSort
    graph[c.inmod].append(c.outmod)
KeyError: <LinearLayer 'LinearLayer-3'>
```

I have the latest version of PyBrain installed, so this seems strange. Especially as when I use the shortcut:

```python
from pybrain.tools.shortcuts import buildNetwork
```
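For comparison, the usual cause of this KeyError is that the layers were never registered on the network before the connections were added. Below is a hedged sketch of the corrected construction, following the standard FeedForwardNetwork tutorial pattern (not a confirmed diagnosis of this exact report).

```python
from pybrain.structure import FeedForwardNetwork
from pybrain.structure import LinearLayer, SigmoidLayer
from pybrain.structure import FullConnection

nn = FeedForwardNetwork()

inLayer = LinearLayer(2)
hiddenLayer = SigmoidLayer(3)
outLayer = LinearLayer(1)

# Register the modules on the network *before* adding connections;
# sortModules() raises a KeyError if a connection touches an unknown module.
nn.addInputModule(inLayer)
nn.addModule(hiddenLayer)
nn.addOutputModule(outLayer)

nn.addConnection(FullConnection(inLayer, hiddenLayer))
nn.addConnection(FullConnection(hiddenLayer, outLayer))

nn.sortModules()
print(nn.activate((1, 2)))
```
-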
serialization using pickle freezes network, causes strange caching behaviour
opened by bgbg · 3
This is a duplicate of my Stackoverflow.com question.
I fail to properly serialize/deserialize PyBrain networks using either pickle or cPickle.
See the following example:
```python
from pybrain.datasets import SupervisedDataSet
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer
import cPickle as pickle
import numpy as np

# generate some data
np.random.seed(93939393)
data = SupervisedDataSet(2, 1)
for x in xrange(10):
    y = x * 3
    z = x + y + 0.2 * np.random.randn()
    data.addSample((x, y), (z,))

# build a network and train it
net1 = buildNetwork(data.indim, 2, data.outdim)
trainer1 = BackpropTrainer(net1, dataset=data, verbose=True)
for i in xrange(4):
    trainer1.trainEpochs(1)
    print '\tvalue after %d epochs: %.2f' % (i, net1.activate((1, 4))[0])
```

This is the output of the above code:

```
Total error: 201.501998476
    value after 0 epochs: 2.79
Total error: 152.487616382
    value after 1 epochs: 5.44
Total error: 120.48092561
    value after 2 epochs: 7.56
Total error: 97.9884043452
    value after 3 epochs: 8.41
```

As you can see, network total error decreases as the training progresses. You can also see that the predicted value approaches the expected value of 12.
Now we will do a similar exercise, but will include serialization/deserialization:
```python
print 'creating net2'
net2 = buildNetwork(data.indim, 2, data.outdim)
trainer2 = BackpropTrainer(net2, dataset=data, verbose=True)
trainer2.trainEpochs(1)
print '\tvalue after %d epochs: %.2f' % (1, net2.activate((1, 4))[0])

# So far, so good. Let's test pickle
pickle.dump(net2, open('testNetwork.dump', 'w'))
net2 = pickle.load(open('testNetwork.dump'))
trainer2 = BackpropTrainer(net2, dataset=data, verbose=True)
print 'loaded net2 using pickle, continue training'
for i in xrange(1, 4):
    trainer2.trainEpochs(1)
    print '\tvalue after %d epochs: %.2f' % (i, net2.activate((1, 4))[0])
```

This is the output of this block:

```
creating net2
Total error: 176.339378639
    value after 1 epochs: 5.45
loaded net2 using pickle, continue training
Total error: 123.392181859
    value after 1 epochs: 5.45
Total error: 94.2867637623
    value after 2 epochs: 5.45
Total error: 78.076711114
    value after 3 epochs: 5.45
```

As you can see, training still seems to have some effect on the network (the reported total error value continues to decrease); however, the output of the network freezes at the value from the first training iteration.
Is there any caching mechanism that I need to be aware of that causes this erroneous behaviour? Are there better ways to serialize/deserialize pybrain networks?
Relevant version numbers:
- Python 2.6.5 (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)]
- Numpy 1.5.1
- cPickle 1.71
- pybrain 0.3
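On the question of better ways to serialize networks: one commonly suggested alternative (a sketch, not necessarily the maintainers' answer to this issue) is PyBrain's own XML serialization via NetworkWriter/NetworkReader instead of pickle.

```python
# Hedged sketch: serialize with PyBrain's XML tools rather than pickle.
from pybrain.tools.shortcuts import buildNetwork
from pybrain.tools.customxml.networkwriter import NetworkWriter
from pybrain.tools.customxml.networkreader import NetworkReader

net = buildNetwork(2, 2, 1)
NetworkWriter.writeToFile(net, 'testNetwork.xml')     # save to XML
restored = NetworkReader.readFrom('testNetwork.xml')  # load a fresh copy
print(restored.activate((1, 4)))
```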
-
Hierarchy change: take Black-box optimization out of RL
opened by schaul · 3
Although it technically fits there, it is a bit confusing. I think the split should be along the ontogenetic/phylogenetic distinction: on one side optimization, evolution, PSO, etc. (coevolution methods should fit here, but how about multi-objective optimization?), and on the other side policy gradients and other RL algorithms.
0.3 · Discussion · In progress
-
splitWithProportion returns same type instead of SupervisedDataSet
opened by borakrc · 2
When we call splitWithProportion on a ClassificationDataSet object, the return type is (SupervisedDataSet, SupervisedDataSet) instead of (ClassificationDataSet, ClassificationDataSet). This modification fixes the issue, though it could be improved by calling the constructor with kwargs. I didn't modify the sub-classes, to avoid repeating lines 106-112. I made this change because when we split a sub-class of SupervisedDataSet, we should get a 2-tuple of that sub-class, not a 2-tuple of SupervisedDataSet.
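Until such a fix lands, a user-side workaround sketch (assuming the PyBrain 0.3 dataset API; the helper name split_classification_data is made up for illustration) is to rebuild ClassificationDataSet objects from the two SupervisedDataSet halves.

```python
from pybrain.datasets import ClassificationDataSet

def split_classification_data(alldata, proportion=0.25):
    # splitWithProportion currently returns plain SupervisedDataSet halves
    tst_tmp, trn_tmp = alldata.splitWithProportion(proportion)

    def rewrap(half):
        wrapped = ClassificationDataSet(alldata.indim, 1,
                                        nb_classes=alldata.nClasses)
        for n in range(half.getLength()):
            inp, target = half.getSample(n)
            wrapped.addSample(inp, target)
        return wrapped

    return rewrap(trn_tmp), rewrap(tst_tmp)
```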
-
ImportanceDataSet with BackpropTrainer results in IndexError
opened by kkleidal · 2
I have a dataset which I am clustering using a Gaussian mixture model, and then I want to train a neural network for each of the clusters. I want to use all the points in my dataset, weighted by the probability that they belong to the cluster for which the net is being trained.
Originally, I was not weighting the training data and it worked fine:
```python
'''
Create and train a neural net on the training data, given the actual labels
'''
def create_neural_net(training, labels, weights=None, T=10, silent=False):
    input_units = len(training[0])
    output_units = len(labels[0])
    n = len(training)
    net = FeedForwardNetwork()
    layer_in = SoftmaxLayer(input_units)
    layer_hidden = SigmoidLayer(1000)
    layer_hidden2 = SigmoidLayer(50)
    layer_out = LinearLayer(output_units)
    net.addInputModule(layer_in)
    net.addModule(layer_hidden)
    net.addModule(layer_hidden2)
    net.addOutputModule(layer_out)
    net.addConnection(FullConnection(layer_in, layer_hidden))
    net.addConnection(FullConnection(layer_hidden, layer_hidden2))
    net.addConnection(FullConnection(layer_hidden2, layer_out))
    net.sortModules()
    training_data = SupervisedDataSet(input_units, output_units)
    for i in xrange(n):
        # print len(training[i])  # prints 148
        # print len(labels[i])    # prints 13
        training_data.appendLinked(training[i], labels[i])
    trainer = BackpropTrainer(net, training_data)
    for i in xrange(T):
        if not silent:
            print "Training %d" % (i + 1)
        error = trainer.train()
        if not silent:
            print net.activate(training[0]), labels[0]
        if not silent:
            print "Training iteration %d. Error: %f." % (i + 1, error)
    return net
```

But now when I try to weight the data points:

```python
'''
Create and train a neural net on the training data, given the actual labels
'''
def create_neural_net(training, labels, weights=None, T=10, silent=False):
    input_units = len(training[0])
    output_units = len(labels[0])
    n = len(training)
    net = FeedForwardNetwork()
    layer_in = SoftmaxLayer(input_units)
    layer_hidden = SigmoidLayer(1000)
    layer_hidden2 = SigmoidLayer(50)
    layer_out = LinearLayer(output_units)
    net.addInputModule(layer_in)
    net.addModule(layer_hidden)
    net.addModule(layer_hidden2)
    net.addOutputModule(layer_out)
    net.addConnection(FullConnection(layer_in, layer_hidden))
    net.addConnection(FullConnection(layer_hidden, layer_hidden2))
    net.addConnection(FullConnection(layer_hidden2, layer_out))
    net.sortModules()
    training_data = ImportanceDataSet(input_units, output_units)
    for i in xrange(n):
        # print len(training[i])  # prints 148
        # print len(labels[i])    # prints 13
        training_data.addSample(training[i], labels[i],
                                importance=(weights[i] if weights is not None else None))
    trainer = BackpropTrainer(net, training_data)
    for i in xrange(T):
        if not silent:
            print "Training %d" % (i + 1)
        error = trainer.train()
        if not silent:
            print net.activate(training[0]), labels[0]
        if not silent:
            print "Training iteration %d. Error: %f." % (i + 1, error)
    return net
```

I get the following error:

```
Traceback (most recent call last):
  File "clustering_experiment.py", line 281, in <module>
    total_model = get_model(training, training_labels, num_clusters=NUM_CLUSTERS, T=NUM_ITERS_NEURAL_NET)
  File "clustering_experiment.py", line 177, in get_model
    neural_nets.append(neural_net_plugin.create_neural_net(tra.tolist(), val.tolist(), T=T, silent=True))
  File "/home/neural_net_plugin.py", line 43, in create_neural_net
    error = trainer.train()
  File "/usr/local/lib/python2.7/dist-packages/PyBrain-0.3.1-py2.7.egg/pybrain/supervised/trainers/backprop.py", line 61, in train
    e, p = self._calcDerivs(seq)
  File "/usr/local/lib/python2.7/dist-packages/PyBrain-0.3.1-py2.7.egg/pybrain/supervised/trainers/backprop.py", line 92, in _calcDerivs
    outerr = target - self.module.outputbuffer[offset]
IndexError: index 162 is out of bounds for axis 0 with size 1
```
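One hedged alternative while ImportanceDataSet misbehaves here is to approximate per-sample importance by weighted resampling into a plain SupervisedDataSet. This is a swapped-in technique, not the library's intended mechanism; build_weighted_dataset is a made-up helper, and the variable names follow the snippet above.

```python
import numpy as np
from pybrain.datasets import SupervisedDataSet

def build_weighted_dataset(training, labels, weights, n_draws=None):
    # Draw samples with probability proportional to their importance weight.
    n = len(training)
    n_draws = n_draws or n
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    ds = SupervisedDataSet(len(training[0]), len(labels[0]))
    for idx in np.random.choice(n, size=n_draws, p=p):
        ds.addSample(training[idx], labels[idx])
    return ds
```
-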
Fixes to Python3.x
opened by herodrigues · 2
Changes
All the changes I've made were backported from Python 3 to Python 2 (at least down to Python 2.7).
- Capturing the currently raised exception (it doesn't work in Python 2.5 and earlier)
- map function returns an iterator
- Tuple parameters removed
- `import exceptions` removed, as it's now a built-in module
- `from itertools import izip` removed; it now uses just `zip` (a small compatibility sketch of these idioms follows this list)
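A minimal sketch of the idioms listed above (illustrative only, not lines from the actual diff):

```python
# Exception capture that works on Python 2.6+ and Python 3:
try:
    1 / 0
except ZeroDivisionError as err:
    print(err)

# map() returns an iterator on Python 3; wrap it when a list is needed:
squares = list(map(lambda x: x * x, [1, 2, 3]))

# itertools.izip is gone on Python 3; plain zip is already lazy there:
pairs = list(zip([1, 2], ['a', 'b']))
```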
TODO
- I didn't change the files that use the weave library. In fact, I don't know whether weave is even supported in the latest SciPy versions; I couldn't find any recent references to it, only older notes saying it is not yet supported, such as this and this. Maybe it's time to consider using Cython instead.
- RL-Glue imports are also unchanged because its current Python codec has no Py3 support yet. However, I changed the RL-Glue Python codec source to run on Py2 and Py3 (in fact, I only changed minor things such as the print function and exception statements). If you want to try it, I've uploaded it to my GitHub. Another thing to point out is that no one is maintaining the RL-Glue code anymore.
I didn't write any tests; I just ran the examples in the PyBrain docs and everything worked fine.
-
Add Randlov bicycle RL example.
opened by chrisdembia · 2
I have written part of the RL bicycle problem introduced by Randlov and Alstrom as an example in PyBrain. Hopefully you all would like to include it in PyBrain!
Here's their paper: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.52.3038&rep=rep1&type=pdf
I include some plotting, so you can view the learning.
Please let me know what improvements I should make.
-
cannot import name 'random' from 'scipy'
opened by noeldum · 0
I am using SciPy 1.9.1 and I get the traceback below when using the buildNetwork function.

```
Traceback (most recent call last):
  File "/home/nono/Desktop/tmp/neural/./main.py", line 3, in <module>
    from pybrain.tools.shortcuts import buildNetwork
  File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/__init__.py", line 1, in <module>
    from pybrain.structure.__init__ import *
  File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/structure/__init__.py", line 2, in <module>
    from pybrain.structure.modules.__init__ import *
  File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/structure/modules/__init__.py", line 3, in <module>
    from pybrain.structure.modules.gaussianlayer import GaussianLayer
  File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/structure/modules/gaussianlayer.py", line 3, in <module>
    from scipy import random
ImportError: cannot import name 'random' from 'scipy' (/usr/local/lib/python3.10/dist-packages/scipy-1.9.1-py3.10-linux-x86_64.egg/scipy/__init__.py)
```

This looks like an old reference to something that changed in SciPy and was never updated in PyBrain. Is PyBrain still maintained? The last release is from 2015.
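A hedged local patch sketch (my assumption, not an official fix): scipy.random was an alias for numpy.random and was removed in newer SciPy releases, so pointing the import at NumPy directly usually unblocks it.

```python
# In pybrain/structure/modules/gaussianlayer.py, per the traceback above:
#     from scipy import random          # old line, fails on recent SciPy
from numpy import random                # drop-in replacement

print(random.randn(3))                  # same API as the removed scipy alias
```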
-
library with this error
opened by Ickwarw · 1
This error is presented when I use the PyBrain library.
This is my code:

```python
from pybrain.structure import FeedForwardNetwork
from pybrain.structure import LinearLayer, SigmoidLayer, BiasUnit
from pybrain.structure import FullConnection

rneural = FeedForwardNetwork()

CE = LinearLayer(4)
CO = SigmoidLayer(6)
CS = SigmoidLayer(1)
b1 = BiasUnit()
b2 = BiasUnit()

rneural.addModule(CE)
rneural.addModule(CO)
rneural.addModule(CS)
rneural.addModule(b1)
rneural.addModule(b2)

EO = FullConnection(CE, CO)
OS = FullConnection(CO, CS)
bO = FullConnection(b1, CO)
bS = FullConnection(b2, CS)

rneural.sortModule()
print(rneural)
```

When I run it:

```
python3 rneural.py
Traceback (most recent call last):
  File "/home/warwick/Desktop/scriptsinpython/ai/rneural.py", line 1, in <module>
    from pybrain.structure import FeedForwardNetwork
  File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/__init__.py", line 1, in <module>
    from pybrain.structure.__init__ import *
  File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/structure/__init__.py", line 2, in <module>
    from pybrain.structure.modules.__init__ import *
  File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/structure/modules/__init__.py", line 2, in <module>
    from pybrain.structure.modules.gate import GateLayer, DoubleGateLayer, MultiplicationLayer, SwitchLayer
  File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/structure/modules/gate.py", line 10, in <module>
    from pybrain.tools.functions import sigmoid, sigmoidPrime
  File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/tools/functions.py", line 4, in <module>
    from scipy.linalg import inv, det, svd, logm, expm2
ImportError: cannot import name 'expm2' from 'scipy.linalg' (/home/warwick/environments/my_env/lib/python3.10/site-packages/scipy/linalg/__init__.py)
```

I've tried several solutions, but the only one I haven't tried is downgrading from Python 3.10, and I don't think that is the right fix. If anyone knows how to solve this, thanks.
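A hedged local patch sketch (my assumption, not an upstream fix): newer SciPy releases removed linalg.expm2, and expm computes the matrix exponential the old line needed, so editing pybrain/tools/functions.py usually unblocks the import.

```python
# In pybrain/tools/functions.py, the failing line (per the traceback above) is:
#     from scipy.linalg import inv, det, svd, logm, expm2
# A local patch is to alias the still-available expm in its place:
from scipy.linalg import inv, det, svd, logm, expm as expm2
```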
-
docs: Fix a few typos
opened by timgates42 · 0
There are small typos in:
- pybrain/rl/environments/flexcube/viewer.py
- pybrain/rl/environments/ode/tasks/ccrl.py
- pybrain/rl/environments/ode/tasks/johnnie.py
- pybrain/rl/environments/shipsteer/viewer.py
- pybrain/structure/modules/lstm.py
- pybrain/tests/runtests.py
- pybrain/tools/rlgluebridge.py
Fixes:
- Should read `suggested` rather than `suggestet`.
- Should read `specific` rather than `spezific`.
- Should read `height` rather than `hight`.
- Should read `whether` rather than `wether`.
- Should read `method` rather than `methode`.
Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md
-
Pybrain: 'SupervisedDataSet' object has no attribute '_convertToOneOfMany' error
opened by ghost · 0
I'm working on speech recognition using a Raspberry Pi. While running the model-building code that uses PyBrain features, I got the error: 'SupervisedDataSet' object has no attribute '_convertToOneOfMany'. Any pointers to get me back on the right path would be very much appreciated.

```python
def createRGBdataSet(inputSet, numOfSamples, numOfPoints):
    alldata = ClassificationDataSet(numOfPoints, 1, nb_classes=3)
    # Iterate through all 3 groups and add the samples with the appropriate class label
    for i in range(0, 3 * numOfSamples):
        input = inputSet[i]
        if (i < numOfSamples):
            alldata.addSample(input, [0])
        elif (i >= numOfSamples and i < numOfSamples * 2):
            alldata.addSample(input, [1])
        else:
            alldata.addSample(input, [2])
    return alldata

# Split the dataset into 75% training and 25% test data.
def splitData(alldata):
    tstdata, trndata = alldata.splitWithProportion(0.25)
    trndata._convertToOneOfMany()
    tstdata._convertToOneOfMany()
    return trndata, tstdata
```
-
I am having a problem with my code, please help!
opened by ghost · 0
I'm working on speech recognition using a Raspberry Pi. While running the model-building code that uses PyBrain features, I got the error: 'SupervisedDataSet' object has no attribute '_convertToOneOfMany'. Any pointers to get me back on the right path would be very much appreciated.

```python
def createRGBdataSet(inputSet, numOfSamples, numOfPoints):
    alldata = ClassificationDataSet(numOfPoints, 1, nb_classes=3)
    # Iterate through all 3 groups and add the samples with the appropriate class label
    for i in range(0, 3 * numOfSamples):
        input = inputSet[i]
        if (i < numOfSamples):
            alldata.addSample(input, [0])
        elif (i >= numOfSamples and i < numOfSamples * 2):
            alldata.addSample(input, [1])
        else:
            alldata.addSample(input, [2])
    return alldata

# Split the dataset into 75% training and 25% test data.
def splitData(alldata):
    tstdata, trndata = alldata.splitWithProportion(0.25)
    trndata._convertToOneOfMany()
    tstdata._convertToOneOfMany()
    return trndata, tstdata
```
Releases
-
0.3.3 (Jan 9, 2015)