ProductOpener::Test - utility functions used by unit and integration tests
Read a gzipped file and return its binary content.
The path of the gzipped file.
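A minimal sketch of such a helper (name hypothetical), using the core IO::Uncompress::Gunzip module:

    use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

    # hypothetical helper: slurp a gzipped file into a scalar
    sub read_gzipped_content {
        my ($filepath) = @_;
        my $content;
        gunzip($filepath => \$content)
            or die "could not gunzip $filepath: $GunzipError\n";
        return $content;
    }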
Check that the OCR result returned by Google Cloud Vision is as expected:
- a single [response] object in the `responses` field
- an integer `created_at` field
String of OCR result JSON as returned by Google Cloud Vision.
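A minimal sketch of the kind of check described (checker name hypothetical), using JSON::MaybeXS and Test::More:

    use JSON::MaybeXS qw(decode_json);
    use Test::More;

    sub check_ocr_result_structure {
        my ($ocr_json) = @_;
        my $ocr_ref = decode_json($ocr_json);
        # a single response object in the `responses` field
        is(scalar @{$ocr_ref->{responses} // []}, 1, "single response object");
        # an integer `created_at` field
        like($ocr_ref->{created_at} // '', qr/^\d+$/, "created_at is an integer");
        return;
    }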
Handles test options around expected_results initialization
For many tests, we compare results from the API with expected results. This enables quick updates when results change, while still keeping control over those changes.
There are two modes: one to update expected results, and one to test against them.
The path of the file containing the test. This should generally be __FILE__ within the test.
A list of $test_id, $test_dir, $expected_result_dir, $update_expected_results
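A typical call at the top of a test file (a sketch based on the documented return values):

    my ($test_id, $test_dir, $expected_result_dir, $update_expected_results)
        = init_expected_results(__FILE__);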
Fail unless there are fewer than 10,000 products in the database.
This is a simple heuristic to ensure we are not running against a production database.
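A sketch of what such a guard could look like, assuming ProductOpener::Data's get_products_collection and the MongoDB driver's count_documents:

    use ProductOpener::Data qw(get_products_collection);

    # a production database would hold far more than 10000 products
    my $count = get_products_collection()->count_documents({});
    die "$count products found, refusing to run against what looks like a production database\n"
        if $count >= 10000;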
For integration tests, we need to start from an empty database, so that the results of the tests are not affected by previously existing content.
This function should only be called by tests, and never on production environments.
For integration tests, we need to start from an empty user base
This function should only be called by tests, and never on production environments.
For integration tests, we need to start from an empty organization base
This function should only be called by tests, and never on production environments.
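In an integration test, these helpers are typically called together during setup (assuming the functions documented above are named remove_all_products, remove_all_users and remove_all_orgs):

    # start each integration test from a clean slate
    remove_all_products();
    remove_all_users();
    remove_all_orgs();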
Capture STDOUT / STDERR with Stdout/Stderr::Extended, following the Capture::Tiny calling style.
This function can help you verify that a command did not output errors, or that something expected is present in its output.
my ($out, $err, $csv_result) = capture_ouputs (sub { return scalar load_csv_or_excel_file($my_excel); });
The method to run while capturing outputs; it should not take any parameters.
Returns an array containing the captured standard output, the captured standard error, and the results of the method (as an array).
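For example, to verify that the command did not output errors (using Test::More):

    is($err, "", "load_csv_or_excel_file did not write to STDERR");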
Compare an object (e.g. product data or an API result) to expected results.
The expected result is stored as a JSON file.
This is so that we can easily see diffs with git diffs.
Tests will always pass when the update_expected_results flag is set, and the new expected results can be diffed / committed in GitHub.
If the test fails, the test reference will be output in the diag.
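A typical call could look like this (a sketch; the expected results file path follows the init_expected_results convention):

    compare_to_expected_results($product_ref, "$expected_result_dir/$test_id.json",
        $update_expected_results);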
Compare a file (e.g. text or HTML file) to expected results.
The expected result is stored as a plain text file.
This is so that we can easily see diffs with git diffs.
Tests will always pass when the update_expected_results flag is set, and the new expected results can be diffed / committed in GitHub.
If the test fails, the test reference will be output in the diag.
Compare a CSV file containing product data (e.g. the result of a CSV export) to expected results.
The expected results are stored as individual JSON files, one per product, in files named [barcode].json, with a flat key/value structure corresponding to the CSV columns.
This is so that we can easily see diffs with git diffs:
- we know how many products are affected
- we see individual diffs with the field name
Tests will always pass when the update_expected_results flag is set, and the new expected results can be diffed / committed in GitHub.
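A sketch of a typical call, assuming the signature mirrors compare_to_expected_results but takes the directory holding the per-product JSON files:

    compare_csv_file_to_expected_results($csv_file, $expected_result_dir,
        $update_expected_results);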
Compare an array containing product data (e.g. the result of a CSV export) to expected results.
The expected results are stored as individual JSON files, one per product, in files named [barcode].json, with a flat key/value structure corresponding to the CSV columns.
This is so that we can easily see diffs with git diffs:
- we know how many products are affected
- we see individual diffs with the field name
Tests will always pass when the update_expected_results flag is set, and the new expected results can be diffed / committed in GitHub.
Create a .sto file from a JSON structure.
This can be handy when you want to store test data in a readable format, while your test needs it as a .sto file.
Path of the source JSON file.
Path of the target .sto file.
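A sketch of a typical call (paths hypothetical):

    create_sto_from_json("$test_dir/inputs/product.json", "$test_dir/inputs/product.sto");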
Normalize an object so that it can be compared across test runs.
We remove some fields and sort some lists.
fields_ignore_content - array of fields whose content should be ignored because it varies from test to test. A star (*) in a field path means the field is an array and the rule applies to every element (hashes are not supported yet).
fields_sort - array of fields whose content needs to be sorted to get predictable results.
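A sketch of a call using both options (field names hypothetical, options assumed to be passed in a hash reference):

    normalize_object_for_test_comparison(
        $object_ref,
        {
            fields_ignore_content => ["created_t", "products.*.last_modified_t"],
            fields_sort => ["products.*.stores_tags"],
        }
    );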
Normalize a product so that it can be compared across test runs.
We remove time-dependent fields and sort some lists.
Like normalize_product_for_test_comparison, but for a list of products.
Array of products
Sort products so that they are always in the same order
Array of products
Normalize a user so that it can be compared across test runs.
We remove time-dependent fields and the password (whose encryption uses a salt), and sort some lists.
Normalize an org so that it can be compared across test runs.
We remove time-dependent fields and the password (whose encryption uses a salt), and sort some lists.
Normalize the HTML of a web page so that it can be compared across test runs.
We remove time-dependent fields.
We also normalize URLs to drop the scheme prefix (so that we avoid false positives in CodeQL).
Wait for an event to happen, up to a certain amount of time
This must be code that checks for the event; it should return a true value if the event occurred, false otherwise.
How many seconds to wait (default 3s).
How much time to wait between checks.
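A sketch of a typical call (assuming positional parameters in the order documented above; get_product_from_db is hypothetical):

    # wait up to 5 seconds, polling twice per second, for the product to appear
    wait_for(sub { return defined get_product_from_db($code); }, 5, 0.5);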