author | duerpei <duep.fnst@fujitsu.com> | 2022-07-28 10:34:43 +0800 |
---|---|---|
committer | duerpei <duep.fnst@fujitsu.com> | 2022-07-28 10:34:43 +0800 |
commit | aa5fab53993f29311f1aef83488eb0f759dabca8 (patch) | |
tree | a8f561e714feaa48c577c24b062fef8fe9c9b2d3 /plugins/agl_test_utils.py | |
parent | 76665693bf19bdbe159849b43cc42142d3093c2f (diff) |
agl-test-framework: demo code submission (tags: needlefish_13.93.0, needlefish/13.93.0, 13.93.0)
Submit the demo code of agl-test-framework
The "agl-test" framework encapsulates pytest and aims to provide a
unified entry point for test-set execution. It can run various test
sets, even when those test sets come from different test frameworks,
process their logs uniformly, and generate a complete test report.
In this way it is convenient to test as many targets as possible,
so that testing covers a wider range of objects and is more
comprehensive.
At present, we plan to support the following test sets in "agl-test":
1. Transplant the test sets under Fuego and AGL-JTA
2. Retain the test sets under pyagl and agl-ptest
   (so "agl-test" will depend on "agl-ptest")
3. Migrate new test sets (with an upstream)
4. Append new test sets (without an upstream)
The output of a test run is summarized at two levels: the first level
is a summary across all test sets, and the second level is the summary
of a single test set. Both are currently displayed in HTML format;
other formats can be considered later.
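The two-level roll-up described above can be sketched as follows. This is a minimal illustration of the idea only, not code from the framework; the test-set names and pass/fail counts are invented for the example, and the real framework renders this as HTML rather than printing it.

```python
# Second level: one summary per test set (names and counts are invented).
results = {
    "test_set_a": {"passed": 10, "failed": 1},
    "test_set_b": {"passed": 5, "failed": 0},
}

# First level: roll every test-set summary up into one overall summary.
overall = {
    "passed": sum(r["passed"] for r in results.values()),
    "failed": sum(r["failed"] for r in results.values()),
}
print(overall)  # {'passed': 15, 'failed': 1}
```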
Bug-AGL: SPEC-4345
Signed-off-by: duerpei <duep.fnst@fujitsu.com>
Change-Id: I25dfedcf8cdd373544c4fae677330defb5d21840
Diffstat (limited to 'plugins/agl_test_utils.py')
-rw-r--r-- | plugins/agl_test_utils.py | 31 |
1 file changed, 31 insertions, 0 deletions
diff --git a/plugins/agl_test_utils.py b/plugins/agl_test_utils.py
new file mode 100644
index 0000000..b1204a0
--- /dev/null
+++ b/plugins/agl_test_utils.py
@@ -0,0 +1,31 @@
+import subprocess
+
+from plugins.agl_test_conf import REPORT_LOGS_DIR
+from plugins.agl_test_conf import TMP_LOGS_DIR
+from plugins.agl_test_conf import TMP_TEST_REPORT
+
+
+#Check if there is the command that we needed
+def find_cmd(cmd):
+    output = subprocess.run(['which',cmd],stdout=subprocess.PIPE)
+    if output.returncode==0:
+        return 0
+    else:
+        print("error: {} is not found".format(cmd))
+        return 1
+
+#Make dir for THIS_TEST to save the log
+def create_dir(THIS_TEST):
+    TMP_THIS_TEST_LOG = TMP_LOGS_DIR + THIS_TEST + "/log/"
+    TMP_TEST_REPORT_THIS = TMP_TEST_REPORT + THIS_TEST
+    subprocess.run(['mkdir','-p',TMP_THIS_TEST_LOG])
+    subprocess.run(['mkdir','-p',TMP_TEST_REPORT_THIS])
+
+# print errors
+def printe(msg):
+    print("**** ERROR: " + msg)
+
+# print debug info
+def printd(msg):
+    # TODO
+    print("==== DEBUG: " + msg)
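The helpers in the patch depend on path constants from plugins.agl_test_conf, which only exist inside the framework. The following self-contained sketch inlines equivalent stand-ins to show how a test set might use find_cmd and create_dir; the TMP_* paths and the "demo_test" name are placeholders invented for this example, and a POSIX `which` binary is assumed to be on PATH.

```python
import os
import subprocess
import tempfile

# Stand-ins for the framework's agl_test_conf constants
# (placeholders, not the real paths used on an AGL image).
TMP_LOGS_DIR = os.path.join(tempfile.mkdtemp(), "logs") + "/"
TMP_TEST_REPORT = os.path.join(tempfile.mkdtemp(), "report") + "/"

def find_cmd(cmd):
    # Return 0 if `cmd` is on PATH, 1 otherwise (the patch's convention).
    output = subprocess.run(['which', cmd], stdout=subprocess.PIPE)
    if output.returncode == 0:
        return 0
    print("error: {} is not found".format(cmd))
    return 1

def create_dir(THIS_TEST):
    # Create the per-test log and report directories, as in the patch.
    subprocess.run(['mkdir', '-p', TMP_LOGS_DIR + THIS_TEST + "/log/"])
    subprocess.run(['mkdir', '-p', TMP_TEST_REPORT + THIS_TEST])

# A test set would typically check its tool dependencies first, then
# set up its log directories before running:
if find_cmd('ls') == 0:
    create_dir('demo_test')
print(os.path.isdir(TMP_LOGS_DIR + 'demo_test/log/'))
```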