pytest-html
pytest-html is a plugin for pytest that generates an HTML report for test results.
Resources
Contributing
We welcome contributions.
To learn more, see Development
Screenshots
Garbage date-time is printed in the Captured log.
pytest-html report:
------------------------------ Captured log setup ------------------------------
[32mINFO [0m root:test_cyclic_switchover.py:26 Inside Setup
[32mINFO [0m root:test_cyclic_switchover.py:54
------------------------------ Captured log call -------------------------------
[32mINFO [0m root:test_cyclic_switchover.py:82 Switchover Process : SCM
pytest console log:
collected 4 items / 2 deselected / 2 selected
test_cyclic_switchover.py::test_process_switchover[SCM]
------------------------------------------------------------ live log setup -------------------------------------------------------------
2020-07-13 21:51:50 [ INFO] Inside Setup (test_cyclic_switchover.py:26)
2020-07-13 21:51:50 [ INFO] (test_cyclic_switchover.py:54)
------------------------------------------------------------- live log call -------------------------------------------------------------
2020-07-13 21:51:50 [ INFO] Switchover Process : SCM (test_cyclic_switchover.py:82)
PASSED [ 50%]
test_cyclic_switchover.py::test_process_switchover[SAM]
------------------------------------------------------------- live log call -------------------------------------------------------------
2020-07-13 21:51:50 [ INFO] Switchover Process : SAM (test_cyclic_switchover.py:82)
PASSED [100%]
----------------------------------------------------------- live log teardown -----------------------------------------------------------
2020-07-13 21:51:50 [ INFO] Inside Teardown (test_cyclic_switchover.py:60)
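A hedged side note on the excerpt above: the [32m / [0m fragments are ANSI colour codes coming from the logging configuration, which the report shows as literal text. The pytest-html documentation mentions the optional ansi2html package for rendering such codes; whether it fully resolves this particular garbling is an assumption, but it is the first thing to try:

pip install ansi2html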
When I use pytest-html in Docker (with GitLab CI), a lot of environment variables need to be set, some of which hold tokens and passwords. I do not want to show these in my Jenkins report to a wider audience.

It seems they are collected during the very first test, so adding this to a test works, using the request fixture:
# remove any GitLab/CI settings from the report metadata
for k in list(request.config._metadata.keys()):
    if re.search(r'^(GITLAB_|CI_)', k):
        del request.config._metadata[k]
But I would like to do it in conftest.py.
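A minimal conftest.py sketch for the same clean-up, assuming the metadata dictionary is exposed as config._metadata by the pytest-metadata plugin (the getattr guard covers the case where it has not been populated yet):

# conftest.py
import re

def pytest_configure(config):
    # _metadata is attached by the pytest-metadata plugin used by pytest-html
    metadata = getattr(config, "_metadata", None)
    if metadata is None:
        return
    for key in list(metadata.keys()):
        if re.search(r"^(GITLAB_|CI_)", key):
            del metadata[key]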
Hi, to reduce the report file size I'm trying to exclude any captured stdout/stderr/log output. I'm running tests with the following command:
pytest --html=report.html --self-contained-html --capture=no --show-capture=no tests/
With this I still get the Captured log call section for every test, whether it passes or not.
So the question is: how do I keep any captured output out of the report?
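One hedged approach, based on the pytest_html_results_table_html hook documented for pytest-html 3.x (the html builder comes from py.xml, which that version depends on): strip the captured-output block, here only for passing tests.

# conftest.py
import pytest
from py.xml import html

@pytest.hookimpl(optionalhook=True)
def pytest_html_results_table_html(report, data):
    # replace the captured stdout/stderr/log block with a short placeholder
    if report.passed:
        del data[:]
        data.append(html.div("Captured output removed.", class_="empty log"))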
I see the below error when I try to run my pytest.
Command tried: pytest test_blank_pages.py --html=report.html
irb-6672:static cirb_apasunoori$ pytest test_blank_pages.py --html=report.html
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --html=report.html
inifile: None
rootdir: /Users/cirb_apasunoori/PycharmProjects/pythonsitecore/static
irb-6672:static cirb_apasunoori$
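For what it's worth, pytest only recognises --html when the pytest-html plugin is importable in the environment that runs pytest, so an "unrecognized arguments" error like the one above usually means the plugin is missing from that interpreter (an assumption about this particular setup):

pip install pytest-html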
Hi,
Could you please add a new feature so that the user can collapse the error or failure log? It's a little bit hard to find the useful information once there are hundreds of errors or failures in the report. For better understanding, I would like to give an example:
Regards,
Hailin
Hi everyone.
I think it may make sense to create a release schedule. I looked at PyPI and it looks like we haven't released a new version of this plugin since March of this year.
While we're at it, perhaps it would make sense to automate the process as much as possible and create a RELEASING.rst doc similar to pytest's. This way users could be aware of what to expect in terms of feature delivery.
Thoughts @ssbarnea @BeyondEvil @davehunt ?
I believe this would resolve #131 ... however, it isn't really optimal. I didn't want to dig into the HTML logic, so instead I just moved it to be post-processed and tried to coerce the test result into what the existing code seemed to want.
I have some test runs that generate a lot of log entries. When opening the report, it seems my browser spends a lot of time rendering all these entries, which are then immediately collapsed right after.
If I change the behavior to have everything collapsed by default, I can open/render the HTML report much faster.
What I have tested is essentially the same as adding a "collapsed" class here https://github.com/pytest-dev/pytest-html/blob/master/pytest_html/plugin.py#L140
I have no idea if this breaks other use-cases or if other changes are necessary. I also realize that everything related to the collapse functionality is currently in js. Regardless, if something could be done to generate everything collapsed and open the necessary stuff afterwards I believe performance would be a lot better.
Edit: I should mention I use self-contained reports.
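Recent pytest-html releases expose a render_collapsed ini option that does roughly this. A hedged pytest.ini sketch, assuming a version that ships the option:

[pytest]
render_collapsed = True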
In pytest.ini, I have:
addopts = --html $APPRENTI_MOTEUR_LOGS_DIR_TEST/$APPRENTI_MOTEUR_LOGS_HTML_NAME
The path of the HTML report takes the content of the environment variables into account.
But the page header does not:

It would be nice if it would as well :-)
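A hedged workaround sketch using the pytest_html_report_title hook, assuming the goal is simply to get the expanded name into the page header; os.path.expandvars resolves the variables that the header currently shows unexpanded:

# conftest.py
import os

def pytest_html_report_title(report):
    # show the fully expanded report path in the page header
    report.title = os.path.expandvars(
        "$APPRENTI_MOTEUR_LOGS_DIR_TEST/$APPRENTI_MOTEUR_LOGS_HTML_NAME"
    )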
Hi!
First, thank you for this plugin!
I want to recommend changing the font color in the report, because reading grey text on a white background is "hard" on the eyes. Also, if we could increase the font size, that would be great!
Thanks in advance!
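Until the defaults change, a hedged do-it-yourself sketch: pytest-html accepts a --css option for appending an extra stylesheet, so something like the following could help (the exact selectors your report needs are an assumption):

/* custom.css */
body {
  color: #222;      /* darker text instead of grey */
  font-size: 14px;  /* slightly larger base font */
}

run with: pytest --html=report.html --self-contained-html --css=custom.css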
@The-Compiler unicode hurts my head! I encountered a test with 〈 as a parameter. This caused the report to fail due to UnicodeDecodeError. This patch fixes it, but would you mind taking a look to see if there's a smarter approach? I also found it difficult to write a test for this use case.
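For context, a hypothetical minimal reproduction of the kind of test involved (the test name and assertion are made up; only the non-ASCII parameter matters):

import pytest

@pytest.mark.parametrize("bracket", ["〈"])
def test_unicode_parameter(bracket):
    # generating the HTML report for this parametrized id is what hit the UnicodeDecodeError
    assert bracket == "〈"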
pytest --html=report.html works locally, but the report is not created in the Git repository when run via a GitHub Action.
In the GitHub Action, I'm expecting the report to be created in the root directory, but nothing is getting created. Can you help?
Following is my yml file:

name: Test Execution for OrangeHRM - Demo Environment
on:
  schedule:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/[email protected]
      - name: Set up Python
        uses: actions/[email protected]
        with:
          python-version: '3.x'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install setuptools wheel twine
          pip install selenium
          pip install pytest
          pip install PyGithub
          pip install pytest-html
          pip install allure-pytest
          pip install pytest-github-report
          pip install webdriver-manager
          pip install pytest-failed-screenshot
          pip install pytest-metadata
          pip install pytest-cov
      - name: Test Execution after 20 mins interval
        run: |
          pytest --html=report.html
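One hedged observation: the report is written into the runner's workspace, not committed back to the repository, so to get hold of it you would normally publish it as a build artifact, for example with actions/upload-artifact (the action version below is an assumption):

      - name: Upload HTML report
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: pytest-html-report
          path: report.html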
pytest-html 3.2.0: error on report generation on GitLab CI.
Observed: error on report generation.
NB: we have downgraded to version 3.1.1 and everything works well.
I think the problem is in: Explicitly add py.xml dependency.
And the issue is the same as: https://github.com/microsoft/pylance-release/issues/2357
Log:
File "/Users/mobile-ci/.pyenv/versions/3.9.16/bin/pytest", line 8, in
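A hedged note on the workaround mentioned above, pinning the previous release until the py.xml packaging issue is sorted out:

pip install "pytest-html==3.1.1"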
"ful-screen" preview of extra images attached to results for self-contained mode. Unlike image preview in normal mode implemented as part of page with little bit of js+css (likely also possible to implement via , however adress line populated by base64 of the image would be really ugly in such case).
Hello everyone,
I was wondering if there is a way to change the output path of the HTML report after the test has run, by using a test fixture which contains the name and parametrized values of a test, i.e.:
during the test there is a fixture that creates a directory whose name is based on the values of the current test, for example :
output_path = /path/to/test/test_name_parametrized_val1_val2.
However, since the value of config.option.htmlpath seems to be set during the pytest_configure hook, and the tests have not been initialized yet at that point (output_path is not yet created and, even if it were, it is not reachable from the pytest_configure hook), I haven't been able to figure this one out yet.
I would really appreciate any advice on how to accomplish this and I apologize if the solution is trivial as I'm quite new to pytest...
Thanks and have a good day!
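A hedged sketch of one way around the ordering problem: let the report go to its default --html location, record the per-test target directory on config during the run (the _final_report_dir attribute is an invented name for illustration), and move the finished file in pytest_unconfigure, which runs after pytest-html has written the report:

# conftest.py
import shutil

def pytest_unconfigure(config):
    html_path = getattr(config.option, "htmlpath", None)
    target_dir = getattr(config, "_final_report_dir", None)  # set by your fixture during the test
    if html_path and target_dir:
        shutil.move(html_path, target_dir)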
Although the installed versions are pytest-html 3.2.0 and pytest-metadata 1.0.4, the report still shows version 3.1.1 for pytest-html and 1.0.2 for pytest-metadata. Also, the test execution time is slower than on another computer, and the code is the same. What could be the problem?
