NAME

ProductOpener::Test - utility functions used by unit and integration tests

DESCRIPTION

read_gzip_file($filepath)

Read a gzipped file and return its binary content.

Parameters

String $filepath

The path of the gzipped file.
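
Example usage (a minimal sketch; the path is hypothetical):

    my $content = read_gzip_file("t/inputs/ocr_result.json.gz");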

check_ocr_result($ocr_result)

Check that the OCR result returned by Google Cloud Vision is as expected:

- a single [response] object in the `responses` field
- a `created_at` integer field

Parameters

String $ocr_result

String of OCR result JSON as returned by Google Cloud Vision.
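
Example usage (a minimal sketch, combining it with read_gzip_file above; the path is hypothetical):

    check_ocr_result(read_gzip_file("t/inputs/ocr_result.json.gz"));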

init_expected_results($filepath)

Handles test options around expected_results initialization

For many tests, we compare results from the API with expected results. This enables quick updates when results change, while still keeping control over what changed.

There are two modes: one to update expected results, and one to test against them.

Parameters

String $filepath

The path of the file containing the test. Generally this should be __FILE__ within the test.

Return value

A list of $test_id, $test_dir, $expected_result_dir, $update_expected_results
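
Example usage (a minimal sketch, based on the parameter and return list described above):

    my ($test_id, $test_dir, $expected_result_dir, $update_expected_results)
        = init_expected_results(__FILE__);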

check_not_production ()

Fail unless we have fewer than 10000 products in the database.

This is a simple heuristic to ensure that we are not running against a production database.

remove_all_products ()

For integration tests, we need to start from an empty database, so that the results of the tests are not affected by previously existing content.

This function should only be called by tests, and never on production environments.

remove_all_users ()

For integration tests, we need to start from an empty user base.

This function should only be called by tests, and never on production environments.

remove_all_orgs ()

For integration tests, we need to start from an empty organization base.

This function should only be called by tests, and never on production environments.
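
A typical integration test setup might combine these helpers with the check_not_production safeguard described above; a minimal sketch:

    # refuse to run against anything that looks like a production database
    check_not_production();
    # start from a clean slate
    remove_all_products();
    remove_all_users();
    remove_all_orgs();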

capture_ouputs ($meth)

Capture stdout / stderr with Stdout/Stderr::Extended, following the Capture::Tiny style.

This function can help you verify that a command did not output errors, or that something is present in its output.

Example usage

    my ($out, $err, $csv_result) = capture_ouputs (sub {
        return scalar load_csv_or_excel_file($my_excel);
    });

Arguments

$meth - reference to a sub

Method to run while capturing outputs - it should not take any parameters.

Return value

Returns a list containing the captured standard output, the captured standard error, and the result of the method.

compare_to_expected_results($object_ref, $expected_results_file, $update_expected_results, $test_ref = undef)

Compare an object (e.g. product data or an API result) to expected results.

The expected result is stored as a JSON file.

This is so that we can easily see diffs with git diffs.

Arguments

$object_ref - reference to an object (e.g. $product_ref)

$expected_results_file - path to the file with stored results

$update_expected_results - flag to indicate to save test results as expected results

Tests will always pass when this flag is passed, and the new expected results can be diffed / committed in GitHub.

$test_ref - an optional reference to an object describing the test case

If the test fails, the test reference will be output in the diag message.
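
Example usage (a minimal sketch; the file name within $expected_result_dir is hypothetical):

    compare_to_expected_results($product_ref,
        "$expected_result_dir/product.json", $update_expected_results);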

compare_file_to_expected_results($content_str, $expected_results_file, $update_expected_results, $test_ref = undef)

Compare a file (e.g. text or HTML file) to expected results.

The expected result is stored as a plain text file.

This is so that we can easily see diffs with git diffs.

Arguments

$content_str - the string content to compare to the expected result

$expected_results_file - path to the file with stored results

$update_expected_results - flag to indicate to save test results as expected results

Tests will always pass when this flag is passed, and the new expected results can be diffed / committed in GitHub.

$test_ref - an optional reference to an object describing the test case

If the test fails, the test reference will be output in the diag message.
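
Example usage (a minimal sketch; the variable and file name are hypothetical):

    compare_file_to_expected_results($html,
        "$expected_result_dir/page.html", $update_expected_results);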

compare_csv_file_to_expected_results($csv_file, $expected_results_dir, $update_expected_results, $test_name)

Compare a CSV file containing product data (e.g. the result of a CSV export) to expected results.

The expected results are stored as individual JSON files, one per product, named [barcode].json, with a flat key/value structure corresponding to the CSV columns.

This is so that we can easily see diffs with git diffs:

- we know how many products are affected
- we see individual diffs with the field name

Arguments

$csv_file - CSV file to compare

$expected_results_dir - directory containing the individual JSON files

$update_expected_results - flag to indicate to save test results as expected results

Tests will always pass when this flag is passed, and the new expected results can be diffed / committed in GitHub.

$test_name - name of test for failure display
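
Example usage (a minimal sketch; the paths and test name are hypothetical):

    compare_csv_file_to_expected_results("$test_dir/outputs/export.csv",
        $expected_result_dir, $update_expected_results, "csv_export");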

compare_array_to_expected_results($array_ref, $expected_results_dir, $update_expected_results, $test_name)

Compare an array containing product data (e.g. the result of a CSV export) to expected results.

The expected results are stored as individual JSON files, one per product, named [barcode].json, with a flat key/value structure corresponding to the CSV columns.

This is so that we can easily see diffs with git diffs:

- we know how many products are affected
- we see individual diffs with the field name

Arguments

$array_ref - reference to array of elements to compare

$expected_results_dir - directory containing the individual JSON files

$update_expected_results - flag to indicate to save test results as expected results

Tests will always pass when this flag is passed, and the new expected results can be diffed / committed in GitHub.

$test_name - name of the test for outputs
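
Example usage (a minimal sketch, mirroring the CSV variant above; the test name is hypothetical):

    compare_array_to_expected_results(\@products,
        $expected_result_dir, $update_expected_results, "products_export");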

create_sto_from_json(json_path, sto_path)

Create a .sto file from a JSON structure.

This can be handy when you want to store data for a test in a readable form, while the test needs it as a .sto file.

Arguments

json_path

Path of the source JSON file

sto_path

Path of the target .sto file
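
Example usage (a minimal sketch; both paths are hypothetical):

    create_sto_from_json("$test_dir/inputs/user.json",
        "$data_root/users/tester.sto");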

normalize_object_for_test_comparison($object_ref, $specification_ref)

Normalize an object so that it can be compared across test runs.

We remove some fields and sort some lists.

Arguments

$object_ref - Hash ref containing information

$specification_ref - Hash ref of specification of transforms

fields_ignore_content - array of fields whose content should be ignored because it varies from test to test. A star (*) in a field path means the field is an array of elements and we want to run through all of them (hashes are not supported yet)

fields_sort - array of fields whose content needs to be sorted to get predictable results
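
Example usage (a minimal sketch; the field names are hypothetical illustrations, including of the star syntax):

    normalize_object_for_test_comparison(
        $object_ref,
        {
            # values that change on every run
            fields_ignore_content => ["created_t", "products.*.last_modified_t"],
            # lists whose order is not deterministic
            fields_sort => ["ingredients_tags"],
        }
    );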

normalize_product_for_test_comparison($product_ref)

Normalize a product so that it can be compared across test runs.

We remove time-dependent fields and sort some lists.

Arguments

product_ref - Hash ref containing product information

normalize_products_for_test_comparison(array_ref)

Like normalize_product_for_test_comparison, but for a list of products.

Arguments

array_ref

Array of products

sort_products_for_test_comparison($array_ref, $sort_field)

Sort products so that they are always in the same order.

Arguments

array_ref

Array of products

sort_field

Name of the field used to sort the products

normalize_user_for_test_comparison($user_ref)

Normalize a user so that it can be compared across test runs.

We remove time-dependent fields and the password (whose encryption uses a salt), and sort some lists.

Arguments

user_ref - Hash ref containing user information

normalize_org_for_test_comparison($org_ref)

Normalize an org so that it can be compared across test runs.

We remove time-dependent fields and the password (whose encryption uses a salt), and sort some lists.

Arguments

org_ref - Hash ref containing org information

normalize_html_for_test_comparison ($html_ref)

Normalize the HTML of a web page so that it can be compared across test runs.

We remove time-dependent fields.

We also normalize URLs to remove the scheme prefix (so that we avoid false positives in CodeQL).

Arguments

html_ref - Reference to a string containing the HTML of the page

wait_for($code, $timeout=3, $poll_time=1)

Wait for an event to happen, up to a certain amount of time.

Parameters

$code - sub

This must be the code that checks for the event; it returns a true value if the event happened, false otherwise.

$timeout - float

How many seconds to wait before giving up (default 3)

$poll_time - float

How long to wait between checks, in seconds (default 1)
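
Example usage (a minimal sketch; $expected_file is hypothetical):

    wait_for(sub {return -e $expected_file;}, 5, 0.5);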
