Python-based Space Physics Environment Data Analysis Software

Overview

pySPEDAS


pySPEDAS is an implementation of the SPEDAS framework for Python.

The Space Physics Environment Data Analysis Software (SPEDAS) framework is written in IDL and contains data loading, data analysis and data plotting tools for various scientific missions (NASA, NOAA, etc.) and ground magnetometers.

Please see our documentation at:

https://pyspedas.readthedocs.io/

Projects Supported

Requirements

Python 3.7+ is required.

We recommend Anaconda, which comes with a suite of packages useful for scientific data analysis. Step-by-step instructions for installing Anaconda are available for Windows, macOS, and Linux.

Installation

Setup your Virtual Environment

To avoid potential dependency issues with other Python packages, we suggest creating a virtual environment for pySPEDAS; you can create a virtual environment in your terminal with:

python -m venv pyspedas

To enter your virtual environment, run the 'activate' script:

Windows

.\pyspedas\Scripts\activate

macOS and Linux

source pyspedas/bin/activate

Using Jupyter notebooks with your virtual environment

To make your virtual environment available in Jupyter, run the following inside the activated environment:

pip install ipykernel
python -m ipykernel install --user --name pyspedas --display-name "Python (pySPEDAS)"

(note: "pyspedas" is the name of your virtual environment)

Then, once you open a notebook, go to "Kernel", then "Change kernel", and select the one named "Python (pySPEDAS)".

Install

pySPEDAS supports Windows, macOS, and Linux. To get started, install the pyspedas package from PyPI:

pip install pyspedas

Upgrade

To upgrade to the latest version of pySPEDAS:

pip install pyspedas --upgrade

Local Data Directories

The recommended way of setting your local data directory is to set the SPEDAS_DATA_DIR environment variable. SPEDAS_DATA_DIR acts as a root data directory for all missions, and will also be used by IDL (if you’re running a recent copy of the bleeding edge).

Mission-specific data directories (e.g., MMS_DATA_DIR for MMS, THM_DATA_DIR for THEMIS) can also be set, and these will override SPEDAS_DATA_DIR.
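As a minimal sketch (the paths are placeholders), the variables can also be set from Python; note that this needs to happen before pyspedas is first imported, since the mission config modules read the environment when they are imported:

```python
import os

# Configure data directories before importing pyspedas; the mission config
# modules read these environment variables when they are first imported.
os.environ['SPEDAS_DATA_DIR'] = '/tmp/spedas_data'      # root for all missions
os.environ['MMS_DATA_DIR'] = '/tmp/spedas_data/mms'     # overrides the root for MMS

# import pyspedas   # import only after the environment is configured
```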

Usage

To get started, import pyspedas and pytplot:

import pyspedas
from pytplot import tplot

You can load data into tplot variables by calling pyspedas.mission.instrument(), e.g.,

To load and plot 1 day of THEMIS FGM data for probe 'd':

thm_fgm = pyspedas.themis.fgm(trange=['2015-10-16', '2015-10-17'], probe='d')

tplot(['thd_fgs_gse', 'thd_fgs_gsm'])

To load and plot 2 minutes of MMS burst mode FGM data:

mms_fgm = pyspedas.mms.fgm(trange=['2015-10-16/13:05:30', '2015-10-16/13:07:30'], data_rate='brst')

tplot(['mms1_fgm_b_gse_brst_l2', 'mms1_fgm_b_gsm_brst_l2'])

Note: by default, pySPEDAS loads all data contained in the CDFs found within the requested time range; this can load data outside of your requested trange. To remove data outside of your requested trange, set the time_clip keyword to True.

To load and plot 6 hours of PSP SWEAP/SPAN-i data:

spi_vars = pyspedas.psp.spi(trange=['2018-11-5', '2018-11-5/06:00'], time_clip=True)

tplot(['DENS', 'VEL', 'T_TENSOR', 'TEMP'])

To download 5 days of STEREO magnetometer data (but not load them into tplot variables):

stereo_files = pyspedas.stereo.mag(trange=['2013-11-1', '2013-11-6'], downloadonly=True)

Standard Options

  • trange: two-element list specifying the time range of interest. This keyword accepts a wide range of formats
  • time_clip: if set, clip the variables to the exact time range specified by the trange keyword
  • suffix: string specifying a suffix to append to the loaded variables
  • varformat: string specifying which CDF variables to load; accepts the wild cards * and ?
  • varnames: string specifying which CDF variables to load (exact names)
  • get_support_data: if set, load the support variables from the CDFs
  • downloadonly: if set, download the files but do not load them into tplot
  • no_update: if set, only load the data from the local cache
  • notplot: if set, load the variables into dictionaries containing numpy arrays (instead of creating the tplot variables)
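The varformat wild cards follow the usual shell-style matching rules. As an illustration only (the variable names below are hypothetical, and Python's fnmatch is used here because it implements the same * and ? semantics), a pattern selects names like this:

```python
from fnmatch import fnmatch

# Hypothetical tplot variable names, for illustration only
names = ['thd_fgs_gse', 'thd_fgs_gsm', 'thd_fgl_dsl', 'thd_fge_gse']

# '*' matches any run of characters, '?' matches exactly one character
selected = [n for n in names if fnmatch(n, 'thd_fg?_gse')]
print(selected)  # ['thd_fgs_gse', 'thd_fge_gse']
```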

Getting Help

To find the options supported, call help on the instrument function you're interested in:

help(pyspedas.themis.fgm)

You can ask questions by creating an issue or by joining the SPEDAS mailing list.

Contributing

We welcome contributions to pySPEDAS; to learn how you can contribute, please see our Contributing Guide.

Code of Conduct

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. To learn more, please see our Code of Conduct.

Additional Information

For examples of pyspedas, see: https://github.com/spedas/pyspedas_examples

For MMS examples, see: https://github.com/spedas/mms-examples

For pytplot, see: https://github.com/MAVENSDC/PyTplot

For cdflib, see: https://github.com/MAVENSDC/cdflib

For SPEDAS, see http://spedas.org/

Comments
  • doesn't load in STATIC energy / phi / theta / etc data

    I was trying to use pyspedas to pull in STATIC data but I noticed large discrepancies in the data pulled by PySPEDAS vs pulling data in IDL. Looking at the CDF file, it seems like pyspedas is ignoring the file's metadata and just pulling variables/support variables. Unfortunately, for STATIC, important parameters (like energy, phi, theta, etc) are given in metadata rather than variables/support variables.

    Please enable reading in of the metadata parameters so STATIC data can be used to the full extent in python as well as in IDL.

    Thank you!

    opened by NeeshaRS 25
  • RBSP Hope file loading UnicodeDecodeError

    Trying to call pyspedas.rbsp.hope(trange = [date1, date2], probe = "a", datatype = "moments", level = "l3", notplot = True) throws a <class 'UnicodeDecodeError'> for SOME dates, but not all. For example, 2018-11-6 and 2018-11-9 fail but 2018-11-7 does not. 'ascii' codec can't decode byte 0xe0 in position 0: ordinal not in range(128).

    I was using notplot = True to get a dictionary out, if that matters.

    opened by kaltensi 6
  • For poes data

    Hello all,

    I am trying to use POES-MEPED data with pyspedas. I ran pyspedas.poes.load(trange), but I found that pyspedas does not load the POES flux data, for example 'mep_ele_flux'. When I read the individual CDF files, the 'mep_ele_flux' variable is present.

    Could you check all the POES data is properly loaded?

    Best regards, Inchun

    opened by chondrite1230 5
  • ylim does not work for a log axis?

    Although this should be asked on the pytplot page, let me ask here as I found (and confirmed) it when plotting THEMIS data. Recently my colleagues and I realized that pytplot.ylim() does not work properly for a tplot variable with ylog=1. Looking into the problem by testing with THEMIS/ESA data, we found:

    • a spectrum-type data with a 2-D "v" array (e.g., (times, 32)) ----> ylim works
    • a spectrum-type data with a 1-D "v" array (e.g., (32,)) ----> ylim does NOT work as expected

    Isn't this a bug in pytplot or pyspedas?

    Copied below are the commands that reproduce the above latter case on my environment (macOS 10.14.6, Python 3.9.6, pytplot 1.7.28, pyspedas 1.2.8).

    import pyspedas
    import pytplot

    pyspedas.themis.spacecraft.particles.esa.esa(varformat='peir')
    pytplot.options('thc_peir_en_eflux', 'zlog', 1)
    pytplot.options('thc_peir_en_eflux', 'ylog', 1)
    element_thc_peir_en_eflux = pytplot.get_data('thc_peir_en_eflux')
    thc_peir_flux_test_meta_data = pytplot.get_data('thc_peir_en_eflux', metadata=True)
    pytplot.store_data('thc_peir_flux_test',
                       data={'x': element_thc_peir_en_eflux[0],
                             'y': element_thc_peir_en_eflux[1],
                             'v': element_thc_peir_en_eflux[2][0]},
                       attr_dict=thc_peir_flux_test_meta_data)
    pytplot.options('thc_peir_flux_test', 'zlog', 1)
    pytplot.zlim('thc_peir_flux_test', (0.00005)*1.e+6, (500.)*1.e+6)
    pytplot.options('thc_peir_flux_test', 'ytitle', 'thc_peir_flux_test')
    pytplot.tplot('thc_peir_flux_test')  # plot

    pytplot.ylim('thc_peir_flux_test', 1000, 24000)
    pytplot.tplot('thc_peir_flux_test')  # plot with a different yrange setting

    opened by horit 5
  • How to specify data directory

    I am loading OMNI data and wish to set local_data_dir to D:/data/omni. In pyspedas/omni/config.py, it seems that local_data_dir can be set using os.environ['OMNI_DATA_DIR']. However, the following code will still download OMNI data into the current directory instead of D:/data/omni, because the change to os.environ is not picked up in config.py. The following code will reproduce the problem. Thanks for your attention.

    import pyspedas
    import os

    os.environ['OMNI_DATA_DIR'] = "C:/data/omni/"
    omni_vars = pyspedas.omni.data(trange=['2013-11-5', '2013-11-6'])

    opened by xnchu 5
  • CDF time conversion to unix time using astropy instead of cdflib

    This improves performance for CDF time to unix time conversion originally performed using cdflib.cdfepoch.unixtime, which uses python's intrinsic for loops for conversion and is way too slow. Simple testing finds that conversion via astropy is more than ten times faster. Although this introduces additional module dependency, this will be useful from users' perspective.

    Note: cdflib developers seem to be considering the integration of astropy's time module in cdflib. https://github.com/MAVENSDC/cdflib/issues/14

    opened by amanotk 5
  • failure in http digest authentication

    Could you modify lines 61 and 185 in utilities/download.py from session.auth = (username, password) to session.auth = requests.auth.HTTPDigestAuth(username, password)? This fixes a bug causing the failure in HTTP digest authentication.
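    For context, a minimal sketch of the suggested change (the credentials are placeholders): requests sends a bare (username, password) tuple as HTTP Basic auth, so servers that require Digest authentication need an explicit HTTPDigestAuth object.

```python
import requests
from requests.auth import HTTPDigestAuth

session = requests.Session()
# A plain (username, password) tuple is sent as HTTP *Basic* auth;
# servers that require Digest auth reject it.
session.auth = HTTPDigestAuth('username', 'password')
```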

    opened by horit 4
  • Problem using cdflib in cdf_to_tplot

    The epoch is converted to unix time at line 195 in cdf_to_tplot.py. If cdflib is used, it is converted using the function unixtime at line 192 in epochs.py. The problem is line 222: unixtime.append(datetime.datetime(*date).timestamp()). It assumes local time instead of UTC, so the time is offset by your local timezone. The code to reproduce the error is attached below. The result should be 2010-01-01/00:00:00.

    import pyspedas
    import numpy as np
    import pandas as pd

    trange = ['2010-01-01/00:00:00', '2010-01-02/00:00:00']
    varname = 'BX_GSE'
    data_omni = pyspedas.omni.data(trange=trange, notplot=True,
                                   varformat=varname, time_clip=True)
    data = np.array(data_omni[varname]['y'])
    unix_time = np.array(data_omni[varname]['x'])
    date_time = pd.to_datetime(data_omni[varname]['x'], unit='s')
    print(date_time[0])
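    To illustrate the underlying behavior the report describes (a standalone sketch, not pySPEDAS code): calling timestamp() on a naive datetime interprets it in the local timezone, while a timezone-aware UTC datetime yields the epoch value CDF times expect.

```python
import datetime

date = (2010, 1, 1, 0, 0, 0)

# Naive datetime: .timestamp() interprets the tuple in the *local* timezone
local_ts = datetime.datetime(*date).timestamp()

# Timezone-aware datetime: interpreted as UTC, as CDF epochs expect
utc_ts = datetime.datetime(*date, tzinfo=datetime.timezone.utc).timestamp()

# local_ts - utc_ts equals the local UTC offset (zero only when local time is UTC)
print(utc_ts)  # 1262304000.0
```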

    opened by xnchu 4
  • Could not import pyspedas in google colab

    I was trying to use pyspedas in google colab, the kernel crashed with warning: qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.

    opened by donglai96 4
  • pyspedas/pyspedas/mms/feeps/mms_feeps_remove_bad_data.py line 50

    Are October 2017 and October 2018 the starting time for bad eyes tables? If so, then you should not take the closest table, but take the table according to the time period.

    opened by PluckZK 3
  • Some updates for ERG

    Could you merge ERG-related scripts from the "erg" branch? The updates include:

    • Minor bug fixes for load routines for some particle data
    • Initial import of load routines for ground observation data
    • An experimental version of part_products routines for ERG satellite data

    And could you discard the "ergsc-devel" branch? At first I tried to merge the above updates to the old branch, but failed, due to an unknown error with analysis/time_domain_filter.py. That is why I have instead created a new branch "erg" from the master and put all the updates in it. Any future updates for the ERG module would be delivered through this new branch.

    opened by horit 3
  • MMS FPI DIS moms omni spin avg doesn't seem to be averaged

    The FPI DIS moments energy spectra data does not seem to be spin-averaged when the omni product is loaded.

    dis_data = pyspedas.mms.fpi(trange=[date_start_str, date_end_str], datatype=['dis-moms'],
                                center_measurement=True, data_rate='fast')
    t, y, z = get_data('mms1_dis_energyspectr_omni_fast')

    Differencing the time stamps gives a 4.5 second interval, which is the non-omni sampling interval.

    opened by kileyy10 1
  • conda install pyspedas [enhancement]

    It would be very useful to be able to install pyspedas using conda, since it would make using this framework much more convenient in conda environments.

    opened by johncoxon 5
  • In the case of unavailable data...

    Consider the following code:

    import pyspedas as spd
    import pytplot as ptt
    
    trange = ['2017-09-01 09:58:30', '2017-09-01 09:59:30']
    mms_fpi_varnames = ['mms3_dis_bulkv_gse_brst']
    _ = spd.mms.fpi(trange=trange, probe=3, data_rate='brst', level='l2',
                    datatype='dis-moms', varnames=mms_fpi_varnames, time_clip=True,
                    latest_version=True)
    fpi_time_unix = ptt.get_data(mms_fpi_varnames[0])[0]
    fpi_v_gse = ptt.get_data(mms_fpi_varnames[0])[1:][0]
    
    fpi_time_utc = spd.time_datetime(fpi_time_unix)
    
    print(f"Start time: {fpi_time_utc[0].strftime('%Y-%m-%d %H:%M:%S')}")
    print(f"End time: {fpi_time_utc[-1].strftime('%Y-%m-%d %H:%M:%S')}")
    

    Given that trange = ['2017-09-01 09:58:30', '2017-09-01 09:59:30'], one would expect the code to print those dates. Or, in the case that 'brst' data is not available, the code should either throw an error or return NaNs.

    However, in this case the dates are: Start time: 2017-09-01 09:17:03, End time: 2017-09-01 09:18:02.

    From what I can surmise, it appears that when burst data is not available for the specified time range, the function looks for the data closest to the specified time range and outputs those times and corresponding data, which was unexpected for me, especially because I specified the time_clip parameter.

    This made me wonder, if this is how the code is supposed to work, or if this is a bug in the code.

    opened by qudsiramiz 4
  • Please use logging instead of prints

    There was some conversation in the PyHC Element chat this morning about suppressing all the printouts pySPEDAS generates when you load data (Time clip was applied to..., Loading variables: ..., etc).

    The OP ended up using a hack like this to redirect stdout: https://gist.github.com/vikjam/755930297430091d8d8df70ac89ea9e2

    But it was brought up that if pySPEDAS used logging (https://docs.python.org/3/library/logging.html) instead of standard print(), it would allow messages to be printed by default but users would have some control over what is shown if they wanted. A few people immediately agreed this would be a good change.

    Please consider this a vote for this feature request?

    opened by sapols 1
  • Can I download latest version of PSP data?

    I am trying to download PSP data, for example SPC data from the first encounter.

    On the SWEAP website: http://sweap.cfa.harvard.edu/data/sci/sweap/spc/L3/2018/11/

    I can see that there is a version 26 of the CDF file. However, when using:

            spcdata = pyspedas.psp.spc(trange=[t0, t1], datatype='l3i', level='l3', 
                                    varnames = [
                                        'np_moment',
                                        'wp_moment',
                                        'vp_moment_RTN',
                                        'vp_moment_SC',
                                        'sc_pos_HCI',
                                        'sc_vel_HCI',
                                        'carr_latitude',
                                        'carr_longitude'
                                    ], 
                                    time_clip=True)
    

    I am getting the first version of the CDF file (V01). Is there a functionality that would allow me to download the latest version that I am not aware of?

    Thanks!

    opened by nsioulas 5
Releases (latest: 1.3)
  • 1.3(Jan 26, 2022)

    • First version to include PyTplot with a matplotlib backend
    • Added geopack wrappers for T89, T96, T01, TS04
    • Large updates to the MMS plug-in, including new tools for calculating energy and angular spectrograms, as well as moments from the FPI and HPCA plasma distribution data
    • Added the 0th (EXPERIMENTAL) version of the ERG plug-in from the Arase team in Japan
    • Added new tools for working with PyTplot variables, e.g., tkm2re, cross products, dot products, normalizing vectors
    • Added routines for wave polarization calculations
    • Added routines for field-aligned coordinate transformations
    • Added plug-in for Spherical Elementary Currents (SECS) and Equivalent Ionospheric Currents (EICS) from Xin Cao and Xiangning Chu at the University of Colorado Boulder
    • Added initial load routine for Heliophysics Application Programmer's Interface (HAPI) data
    • Added initial load routine for Kyoto Dst data
    • Added initial load routine for THEMIS All Sky Imager data
    • Added THEMIS FIT L1 calibration routine
    • Large updates to the documentation at: https://pyspedas.readthedocs.io/
    • Numerous other bug fixes and updates
  • v1.2(Mar 25, 2021)

    Significant v1.2 updates:

    • Dropped support for Python 3.6; we now support only Python 3.7 and later
    • Added support for Python 3.9
    • Implemented performance enhancements for coordinate transformation routines
    • Made numerous updates and bug fixes to the MMS plug-in
    • Added initial support for Solar Orbiter data
  • 1.1(Dec 7, 2020)

  • 1.0(Jun 16, 2020)

Owner
SPEDAS
Space Physics Environment Data Analysis Software