Python document object mapper (load Python objects from JSON and vice versa)

Overview

lupin is a Python JSON object mapper

lupin is meant to help in serializing Python objects to JSON and deserializing JSON data into Python objects.

Installation

pip install lupin

Usage

lupin uses schemas to create a representation of a Python object.

A schema is composed of fields, each of which defines how to load and dump one attribute of an object.

Define schemas

from datetime import datetime
from lupin import Mapper, Schema, fields as f


# 1) Define your models
class Thief(object):
    def __init__(self, name, stolen_items):
        self.name = name
        self.stolen_items = stolen_items


class Painting(object):
    def __init__(self, name, author):
        self.name = name
        self.author = author


class Artist(object):
    def __init__(self, name, birth_date):
        self.name = name
        self.birth_date = birth_date


# 2) Create schemas
artist_schema = Schema({
    "name": f.String(),
    "birthDate": f.DateTime(binding="birth_date", format="%Y-%m-%d")
}, name="artist")

painting_schema = Schema({
    "name": f.String(),
    "author": f.Object(artist_schema)
}, name="painting")

thief_schema = Schema({
    "name": f.String(),
    "stolenItems": f.List(painting_schema, binding="stolen_items")
}, name="thief")

# 3) Create a mapper and register a schema for each model you want to map to JSON objects
mapper = Mapper()

mapper.register(Artist, artist_schema)
mapper.register(Painting, painting_schema)
mapper.register(Thief, thief_schema)


# 4) Create some sample data
leonardo = Artist(name="Leonardo da Vinci", birth_date=datetime(1452, 4, 15))
mona_lisa = Painting(name="Mona Lisa", author=leonardo)
arsene = Thief(name="Arsène Lupin", stolen_items=[mona_lisa])

Dump objects

# use mapper to dump python objects
assert mapper.dump(leonardo) == {
    "name": "Leonardo da Vinci",
    "birthDate": "1452-04-15"
}

assert mapper.dump(mona_lisa) == {
    "name": "Mona Lisa",
    "author": {
        "name": "Leonardo da Vinci",
        "birthDate": "1452-04-15"
    }
}

assert mapper.dump(arsene) == {
    "name": "Arsène Lupin",
    "stolenItems": [
        {
            "name": "Mona Lisa",
            "author": {
                "name": "Leonardo da Vinci",
                "birthDate": "1452-04-15"
            }
        }
    ]
}
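
Since mapper.dump returns plain Python dicts and lists, turning the result into an actual JSON string is just a matter of handing it to the standard json module:

import json

# the output of mapper.dump is made of plain dicts/lists, ready for json.dumps
json_string = json.dumps(mapper.dump(arsene), ensure_ascii=False)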

Load objects

# use mapper to load JSON data
data = {
    "name": "Mona Lisa",
    "author": {
        "name": "Leonardo da Vinci",
        "birthDate": "1452-04-15"
    }
}
painting = mapper.load(data, "painting")  # "painting" is the name of the schema you want to use
artist = painting.author

assert isinstance(painting, Painting)
assert painting.name == "Mona Lisa"

assert isinstance(artist, Artist)
assert artist.name == "Leonardo da Vinci"
assert artist.birth_date == datetime(1452, 4, 15)

Polymorphic lists

Sometimes a list can contain multiple types of objects. In such cases you will have to use a PolymorphicList, and you will also need to add a key to each item's schema to store the type of the object (you can use a Constant field).

Say that our thief has leveled up and has stolen a diamond.

class Diamond(object):
    def __init__(self, carat):
        self.carat = carat


mapper = Mapper()

# Register a schema for diamonds
diamond_schema = Schema({
    "carat": f.Field(),
    "type": f.Constant("diamond")  # this will be used to know which schema to used while loading JSON
}, name="diamond")
mapper.register(Diamond, diamond_schema)

# Change our painting schema in order to include a `type` field
painting_schema = Schema({
    "name": f.String(),
    "type": f.Constant("painting"),
    "author": f.Object(artist_schema)
}, name="painting")
mapper.register(Painting, painting_schema)

# Use `PolymorphicList` for `stolen_items`
thief_schema = Schema({
    "name": f.String(),
    "stolenItems": f.PolymorphicList(on="type",  # JSON key to lookup for the polymorphic type
                                     binding="stolen_items",
                                     schemas={
                                         "painting": painting_schema,  # if `type == "painting"` then use painting_schema
                                         "diamond": diamond_schema  # if `type == "diamond"` then use diamond_schema
                                     })
}, name="thief")
mapper.register(Thief, thief_schema)


diamond = Diamond(carat=20)
arsene.stolen_items.append(diamond)

# Dump object
data = mapper.dump(arsene)
assert data == {
    "name": "Arsène Lupin",
    "stolenItems": [
        {
            "name": "Mona Lisa",
            "type": "painting",
            "author": {
                "name": "Leonardo da Vinci",
                "birthDate": "1452-04-15"
            }
        },
        {
            "carat": 20,
            "type": "diamond"
        }
    ]
}

# Load data
thief = mapper.load(data, "thief")
assert isinstance(thief.stolen_items[0], Painting)
assert isinstance(thief.stolen_items[1], Diamond)

Validation

Lupin provides a set of built-in validators; you can find them in the lupin/validators folder.

While creating your schemas you can assign validators to the fields. Before loading a document, lupin will validate its format. If a field is invalid, an InvalidDocument error is raised containing all the errors detected in the data.

Example:

from lupin import Mapper, Schema, fields as f, validators as v
from lupin.errors import InvalidDocument, InvalidLength
from models import Artist

mapper = Mapper()

artist_schema = Schema({
    "name": f.String(validators=v.Length(max=10)),
}, name="artist")
mapper.register(Artist, artist_schema)

data = {
    "name": "Leonardo da Vinci"
}

try:
    mapper.load(data, "artist", allow_partial=True)
except InvalidDocument as errors:
    error = errors[0]
    assert isinstance(error, InvalidLength)
    assert error.path == ["name"]

Current validators are (a short usage sketch follows the list):

  • DateTimeFormat (validate that a value matches a datetime format)
  • Equal (validate that a value is equal to a predefined one)
  • In (validate that a value is contained in a set of values)
  • Length (validate the length of a value)
  • Match (validate the format of a value with a regex)
  • Type (validate the type of a value; this validator is already included in all fields to match the field type)
  • URL (validate a URL string format)
  • IsNone (validate that a value is None)
  • Between (validate that a value belongs to a range)
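
As mentioned above, here is a short sketch of attaching these validators to fields through the validators argument. Note that v.In accepting a plain list of allowed values is an assumption; only Length and Equal appear verbatim in the examples in this document.

from lupin import Schema, fields as f, validators as v

# hypothetical "gem" schema combining two validators;
# `v.In([...])` taking a list of allowed values is an assumption,
# `v.Length(max=...)` is used exactly as in the validation example above
gem_schema = Schema({
    "name": f.String(validators=v.Length(max=32)),
    "type": f.String(validators=v.In(["diamond", "ruby", "emerald"]))
}, name="gem")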

Combination

You can build validator combinations using the & and | operators.

Example:

from lupin import validators as v
from lupin.errors import ValidationError

validators = v.Equal("Lupin") | v.Equal("Andrésy")
# passes only if the value is "Lupin" or "Andrésy"

validators("Lupin", [])

try:
    validators("Holmes", [])
except ValidationError:
    print("Validation error")