Refactor QA

Reviewers: teon.banek, buda, mculinovic

Reviewed By: teon.banek

Subscribers: pullbot

Differential Revision: https://phabricator.memgraph.io/D1752
Matej Ferencevic 2018-11-08 16:51:56 +01:00
parent 8b1aa3c2b6
commit e92036cfcc
222 changed files with 521 additions and 1517 deletions

tests/qa/.gitignore

@@ -5,4 +5,3 @@
*.pyc
ve3/
.quality_assurance_status
latency/memgraph


@@ -1,53 +1,57 @@
# Memgraph quality assurance
In order to test dressipi's queries against memgraph, the following commands
have to be executed:
1. ./init [Dxyz] # downloads query implementations + memgraph
# (the auth is manual for now) + optionally the user can
# define an arcanist diff which will be applied to the
# memgraph source code
2. ./run # compiles and runs a database instance, and also runs the
# test queries
Python script used to run quality assurance tests against Memgraph.
To run the script execute:
TODO: automate further
```
source ve3/bin/activate
./qa.py --help
./qa.py memgraph_V1
```
## TCK Engine
The script requires one positional parameter that specifies which test suite
should be executed. All available test suites can be found in the `tests/`
directory.
Python script used to run TCK tests against memgraph. To run the script, execute:
## openCypher TCK tests
1. python3 tck_engine/test_executor.py
The script uses Behave to run Cucumber tests.
Script uses Behave to run Cucumber tests.
Some gotchas exist when adding openCypher TCK tests to our QA engine:
The following tck tests have been changed:
- In some tests example injection did not work. Behave stores the first row of
a Cucumber table as the headings, and injection is not applied to headings. To
correct this behavior, one row was added to tables where injection was used
(see the sketch at the end of this section).
1. Tests where example injection did not work. Behave stores the first row
in Cucumber tables as headings and the example injection is not working in
headings. To correct this behavior, one row was added to tables where
injection was used.
- Some tests don't fully define the result ordering, yet they still rely on
the result order, so they fail intermittently. To make such a flaky test
ignore output ordering, change the tag "the result should be" to "the result
should be (ignoring element order for lists)".
2. Tests where the results were not always in the same order. The query does
not specify the result order, but the tests did, which led to test failures.
To correct the tests, the tag "the result should be" was replaced with the
tag "the result should be (ignoring element order for lists)".
- Behave can't escape the character '|' and throws a parse error. The affected
queries were changed so that the result is returned under a different name.
3. Behave can't escape the character '|' and it throws a parse error. The
query was then changed and the result was returned with a different name.
`Comparability.feature` tests are failing because integers are compared to
strings, which is not allowed in openCypher.
Comparability.feature tests are failing because integers are compared to
strings, which is not allowed in openCypher.
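The heading-row behavior behind the first gotcha is easy to demonstrate. A
minimal sketch, assuming behave 1.2.5 (the version referenced elsewhere in
this commit):

```python
# Behave keeps the first table row as the headings; iterating a table visits
# only the data rows, so an example injected into that first row is lost.
from behave.model import Table

table = Table(headings=["<x>", "<y>"], rows=[["1", "2"]])
print(table.headings)  # ['<x>', '<y>'] -- the would-be injection target
for row in table:      # visits only the data rows
    print(list(row))   # ['1', '2']
```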
## QA engine issues:
TCK Engine problems:
Comparing tables with ordering doesn't always work; for example:
1. Comparing tables with ordering.
ORDER BY x DESC
| x | y | | x | y |
| 3 | 2 | | 3 | 1 |
| 3 | 1 | | 3 | 2 |
| 1 | 4 | | 1 | 4 |
```
ORDER BY x DESC
| x | y | | x | y |
| 3 | 2 | | 3 | 1 |
| 3 | 1 | | 3 | 2 |
| 1 | 4 | | 1 | 4 |
```
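A hedged Python illustration of the problem: rows that tie on the ORDER BY
key may legally arrive in either order, yet a strict row-by-row comparison
reports a failure.

```python
# ORDER BY x DESC says nothing about y, so both orders below are valid
# answers; only an order-insensitive comparison treats them as equal.
expected = [(3, 2), (3, 1), (1, 4)]
actual = [(3, 1), (3, 2), (1, 4)]
print(expected == actual)                  # False -> flagged as a failure
print(sorted(expected) == sorted(actual))  # True  -> order-insensitive check
```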
2. Properties side effects
| +properties | 1 |
| -properties | 1 |
Side effects aren't tracked or verified; for example:
The database returns properties_set, not properties_created and properties_deleted.
```
| +properties | 1 |
| -properties | 1 |
```
This is because Memgraph currently doesn't report the list of side effects
that happened during query execution.
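A hedged sketch of the limitation, using the neo4j.v1 driver imported
elsewhere in this commit (connection values are the qa.py defaults):

```python
# The result summary exposes a single properties_set counter, so created and
# deleted properties cannot be told apart when verifying side effects.
from neo4j.v1 import GraphDatabase, basic_auth

driver = GraphDatabase.driver("bolt://127.0.0.1:7687",
                              auth=basic_auth("memgraph", "memgraph"),
                              encrypted=False)
with driver.session() as session:
    summary = session.run("CREATE (n {x: 1})").summary()
    print(summary.counters.properties_set)  # 1; no properties_created/_deleted
driver.close()
```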


@@ -2,8 +2,7 @@
commands: TIMEOUT=300 ./continuous_integration
infiles:
- . # current directory
- ../../build_debug/memgraph # memgraph debug binary
- ../../config # directory with config files
- ../../build_release/memgraph # memgraph release binary
outfile_paths: &OUTFILE_PATHS
- \./memgraph/tests/qa/\.quality_assurance_status
@@ -11,6 +10,5 @@
commands: TIMEOUT=300 ./continuous_integration --distributed
infiles:
- . # current directory
- ../../build_debug/memgraph_distributed # memgraph distributed debug binary
- ../../config # directory with config files
- ../../build_release/memgraph_distributed # memgraph distributed release binary
outfile_paths: *OUTFILE_PATHS


@@ -12,121 +12,54 @@ List of responsibilities:
to post the status on Phabricator. (.quality_assurance_status)
"""
import argparse
import atexit
import copy
import os
import sys
import json
import logging
import subprocess
import tempfile
import time
import yaml
from argparse import ArgumentParser
log = logging.getLogger(__name__)
SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
TESTS_DIR = os.path.join(SCRIPT_DIR, "tests")
BASE_DIR = os.path.normpath(os.path.join(SCRIPT_DIR, "..", ".."))
BUILD_DIR = os.path.join(BASE_DIR, "build_release")
class Test:
"""
Class used to store basic information about a single
test suite
@attribute name:
string, name of the test_suite (must be unique)
@attribute test_suite:
string, test_suite within tck_engine/tests which contains tck tests
@attribute memgraph_params
string, any command line arguments that should be passed to
memgraph before evaluating tests from this suite
@attribute mandatory
bool, True if this suite is obligatory for continuous integration
to pass
"""
def __init__(self, name, test_suite, memgraph_params, mandatory):
self.name = name
self.test_suite = test_suite
self.memgraph_params = memgraph_params
self.mandatory = mandatory
# Constants
suites = [
Test(
name="memgraph_V1",
test_suite="memgraph_V1",
memgraph_params="",
mandatory=True
),
Test(
name="memgraph_V1_POD",
test_suite="memgraph_V1",
memgraph_params="--properties-on-disk=x,y,z,w,k,v,a,b,c,d,e,f,r,t,o,prop,age,name,surname,location",
mandatory=True
),
Test(
name="openCypher_M09",
test_suite="openCypher_M09",
memgraph_params="",
mandatory=False
),
]
results_folder = os.path.join("tck_engine", "results")
suite_suffix = "memgraph-{}.json"
qa_status_path = ".quality_assurance_status"
measurements_path = ".apollo_measurements"
def parse_args():
"""
Parse command line arguments
"""
argp = ArgumentParser(description=__doc__)
argp.add_argument("--distributed", action="store_true")
return argp.parse_args()
def get_newest_path(folder, suffix):
"""
:param folder: Scanned folder.
:param suffix: File suffix.
:return: Path to the newest file in the folder with the specified suffix.
"""
name_list = sorted(filter(lambda x: x.endswith(suffix),
os.listdir(folder)))
if len(name_list) <= 0:
sys.exit("Unable to find any file with suffix %s in folder %s!" %
(suffix, folder))
return os.path.join(folder, name_list.pop())
def wait_for_server(port, delay=0.01):
cmd = ["nc", "-z", "-w", "1", "127.0.0.1", str(port)]
count = 0
while subprocess.call(cmd) != 0:
time.sleep(0.01)
if count > 20 / 0.01:
print("Could not wait for server on port", port, "to startup!")
sys.exit(1)
count += 1
time.sleep(delay)
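# (Editorial note, hedged: wait_for_server polls the port with `nc -z` every
# 10 ms and gives up after roughly 20 s (20 / 0.01 iterations); `delay` adds
# one extra sleep once the port is open, so wait_for_server(7687, 1) gives
# Memgraph a second to settle before the first Bolt connection.)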
def generate_measurements(suite, result_path):
"""
:param suite: Test suite name.
:param result_path: File path with json status report.
:return: Measurements string.
"""
if not os.path.exists(result_path):
return ""
with open(result_path) as f:
result = json.load(f)
ret = ""
for i in ["total", "passed", "restarts"]:
for i in ["total", "passed"]:
ret += "{}.{} {}\n".format(suite, i, result[i])
return ret
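# (Editorial note, hedged: for suite "memgraph_V1" and a stats file containing
# {"total": 100, "passed": 90}, this emits the lines "memgraph_V1.total 100"
# and "memgraph_V1.passed 90" that end up in .apollo_measurements.)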
def generate_status(suite, result_path, required=False):
"""
:param suite: Test suite name.
:param result_path: File path with json status report.
:param required: Adds status ticks to the message if required.
:return: Status string.
"""
def generate_status(suite, result_path, required):
if not os.path.exists(result_path):
return ("Internal error!", 0, 1)
with open(result_path) as f:
result = json.load(f)
total = result["total"]
passed = result["passed"]
restarts = result["restarts"]
ratio = passed / total
msg = "{} / {} //({:.2%})//".format(passed, total, ratio)
if required:
@@ -134,15 +67,10 @@ def generate_status(suite, result_path, required=False):
msg += " {icon check color=green}"
else:
msg += " {icon times color=red}"
return (msg, passed, total, restarts)
return (msg, passed, total)
def generate_remarkup(data, distributed=False):
"""
:param data: Tabular data to convert to remarkup.
:return: Remarkup formatted status string.
"""
extra_desc = "distributed " if distributed else ""
ret = "==== Quality assurance {}status: ====\n\n".format(extra_desc)
ret += "<table>\n"
@@ -159,75 +87,200 @@ def generate_remarkup(data, distributed=False):
return ret
if __name__ == "__main__":
args = parse_args()
distributed = []
class MemgraphRunner():
def __init__(self, build_directory):
self.build_directory = build_directory
self.proc_mg = None
self.args = []
def start(self, args):
if args == self.args and self.is_running():
return
self.stop()
self.args = copy.deepcopy(args)
self.durability_directory = tempfile.TemporaryDirectory()
memgraph_binary = os.path.join(self.build_directory, "memgraph")
args_mg = [memgraph_binary, "--durability-directory",
self.durability_directory.name]
self.proc_mg = subprocess.Popen(args_mg + self.args)
wait_for_server(7687, 1)
assert self.is_running(), "The Memgraph process died!"
def is_running(self):
if self.proc_mg is None:
return False
if self.proc_mg.poll() is not None:
return False
return True
def stop(self):
if not self.is_running():
return
self.proc_mg.terminate()
code = self.proc_mg.wait()
assert code == 0, "The Memgraph process exited with non-zero!"
class MemgraphDistributedRunner():
def __init__(self, build_directory, cluster_size):
self.build_directory = build_directory
self.cluster_size = cluster_size
self.procs = []
self.durability_directories = []
self.args = []
def start(self, args):
if args == self.args and self.is_running():
return
self.stop()
self.args = copy.deepcopy(args)
memgraph_binary = os.path.join(self.build_directory,
"memgraph_distributed")
self.procs = []
self.durability_directories = []
for i in range(self.cluster_size):
durability_directory = tempfile.TemporaryDirectory()
self.durability_directories.append(durability_directory)
args_mg = [memgraph_binary]
if i == 0:
args_mg.extend(["--master", "--master-port", "10000"])
else:
args_mg.extend(["--worker", "--worker-id", str(i),
"--worker-port", str(10000 + i),
"--master-port", "10000"])
args_mg.extend(["--durability-directory",
durability_directory.name])
proc_mg = subprocess.Popen(args_mg + self.args)
self.procs.append(proc_mg)
wait_for_server(10000 + i, 1)
wait_for_server(7687, 1)
assert self.is_running(), "The Memgraph cluster died!"
def is_running(self):
if len(self.procs) == 0:
return False
for i, proc in enumerate(self.procs):
code = proc.poll()
if code is not None:
if code != 0:
print("Memgraph node", i, "exited with non-zero!")
return False
return True
def stop(self):
if len(self.procs) == 0:
return
self.procs[0].terminate()
died = False
for i, proc in enumerate(self.procs):
code = proc.wait()
if code != 0:
print("Memgraph node", i, "exited with non-zero!")
died = True
assert not died, "The Memgraph cluster died!"
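# (Editorial note, hedged: with the default --cluster-size 3 this starts one
# master on port 10000 and workers 1 and 2 on ports 10001 and 10002; clients
# still connect over Bolt on port 7687, which the master serves.)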
def main():
# Parse args
argp = argparse.ArgumentParser()
argp.add_argument("--build-directory", default=BUILD_DIR)
argp.add_argument("--cluster-size", default=3, type=int)
argp.add_argument("--distributed", action="store_true")
args = argp.parse_args()
# Load tests from config file
with open(os.path.join(TESTS_DIR, "config.yaml")) as f:
suites = yaml.load(f)
# Tests are not mandatory for distributed
if args.distributed:
distributed = ["--distributed"]
for suite in suites:
suite.mandatory = False
# Logger config
logging.basicConfig(level=logging.INFO)
suite["must_pass"] = False
# venv used to run the qa engine
venv_python = os.path.join(SCRIPT_DIR, "ve3", "bin", "python3")
exec_dir = os.path.realpath(os.path.join(SCRIPT_DIR, "tck_engine"))
tests_dir = os.path.realpath(os.path.join(exec_dir, "tests"))
# Run suites
for suite in suites:
log.info("Starting suite '{}' scenarios.".format(suite.name))
test = os.path.realpath(os.path.join(tests_dir, suite.test_suite))
cmd = [venv_python, "-u",
os.path.join(exec_dir, "test_executor.py"),
"--root", test,
"--test-name", "{}".format(suite.name),
"--db", "memgraph",
"--memgraph-params",
"\"{}\"".format(suite.memgraph_params)] + distributed
subprocess.run(cmd, check=False)
# Measurements
# Temporary directory for suite results
output_dir = tempfile.TemporaryDirectory()
# Memgraph runner
if args.distributed:
memgraph = MemgraphDistributedRunner(args.build_directory,
args.cluster_size)
else:
memgraph = MemgraphRunner(args.build_directory)
@atexit.register
def cleanup():
memgraph.stop()
# Results storage
measurements = ""
# Status table headers
status_data = [["Suite", "Scenarios", "Restarts"]]
# List of mandatory suites that have failed
status_data = [["Suite", "Scenarios"]]
mandatory_fails = []
# Run suites
for suite in suites:
# Get data files for test suite
suite_result_path = get_newest_path(results_folder,
suite_suffix.format(suite.name))
log.info("Memgraph result path is {}".format(suite_result_path))
print("Starting suite '{}' scenarios.".format(suite["name"]))
# Read scenarios
suite_status, suite_passed, suite_total, suite_restarts = \
generate_status(suite.name, suite_result_path,
required=suite.mandatory)
params = []
if "properties_on_disk" in suite:
params = ["--properties-on-disk=" + suite["properties_on_disk"]]
memgraph.start(params)
if suite.mandatory and suite_passed != suite_total or \
not args.distributed and suite_restarts > 0:
mandatory_fails.append(suite.name)
suite["stats_file"] = os.path.join(output_dir.name,
suite["name"] + ".json")
cmd = [venv_python, "-u",
os.path.join(SCRIPT_DIR, "qa.py"),
"--stats-file", suite["stats_file"],
suite["test_suite"]]
status_data.append([suite.name, suite_status, suite_restarts])
measurements += generate_measurements(suite.name, suite_result_path)
# The exit code isn't checked here because the `behave` framework
# returns a non-zero exit code when some tests fail.
subprocess.run(cmd)
suite_status, suite_passed, suite_total = \
generate_status(suite["name"], suite["stats_file"],
suite["must_pass"])
status_data.append([suite["name"], suite_status])
measurements += generate_measurements(suite["name"],
suite["stats_file"])
if suite["must_pass"] and suite_passed != suite_total:
mandatory_fails.append(suite["name"])
break
# Create status message
qa_status_message = generate_remarkup(status_data, args.distributed)
# Create the report file
qa_status_path = os.path.join(SCRIPT_DIR, ".quality_assurance_status")
with open(qa_status_path, "w") as f:
f.write(qa_status_message)
# Create the measurements file
measurements_path = os.path.join(SCRIPT_DIR, ".apollo_measurements")
with open(measurements_path, "w") as f:
f.write(measurements)
log.info("Status is generated in %s" % qa_status_path)
log.info("Measurements are generated in %s" % measurements_path)
print("Status is generated in %s" % qa_status_path)
print("Measurements are generated in %s" % measurements_path)
# Check if tests failed
if mandatory_fails != []:
sys.exit("Some mandatory tests have failed -- %s"
sys.exit("Some tests that must pass have failed -- %s"
% str(mandatory_fails))
if __name__ == "__main__":
main()
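A hedged sketch of the artifacts the refactored runner leaves behind (file
names from the listing above; Apollo collects them via the outfile_paths
shown earlier):

```python
# Both report files are written next to the script at the end of main().
with open(".quality_assurance_status") as f:
    print(f.read())  # remarkup table, "==== Quality assurance status: ===="
with open(".apollo_measurements") as f:
    print(f.read())  # one "suite.metric value" line per measurement
```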

tests/qa/environment.py

@@ -0,0 +1,80 @@
# -*- coding: utf-8 -*-
import json
import logging
import sys
from steps.test_parameters import TestParameters
from neo4j.v1 import GraphDatabase, basic_auth
# Helper class and functions
class TestResults:
def __init__(self):
self.total = 0
self.passed = 0
def num_passed(self):
return self.passed
def num_total(self):
return self.total
def add_test(self, status):
if status == "passed":
self.passed += 1
self.total += 1
# Behave specific functions
def before_all(context):
# logging
logging.basicConfig(level="DEBUG")
context.log = logging.getLogger(__name__)
# driver
uri = "bolt://{}:{}".format(context.config.db_host,
context.config.db_port)
auth_token = basic_auth(
context.config.db_user, context.config.db_pass)
context.driver = GraphDatabase.driver(uri, auth=auth_token,
encrypted=False)
# test results
context.test_results = TestResults()
def before_scenario(context, scenario):
context.test_parameters = TestParameters()
context.exception = None
def after_scenario(context, scenario):
context.test_results.add_test(scenario.status)
if context.config.single_scenario or \
(context.config.single_fail and scenario.status == "failed"):
print("Press enter to continue")
sys.stdin.readline()
def after_feature(context, feature):
if context.config.single_feature:
print("Press enter to continue")
sys.stdin.readline()
def after_all(context):
context.driver.close()
if context.config.stats_file == "":
return
js = {
"total": context.test_results.num_total(),
"passed": context.test_results.num_passed(),
"test_suite": context.config.test_suite,
}
with open(context.config.stats_file, 'w') as f:
json.dump(js, f)
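A hedged sketch connecting the pieces: the JSON written by after_all() above
is the file qa.py's --stats-file names and continuous_integration's
generate_status() later reads (the path below is hypothetical):

```python
import json

# Hypothetical stats file produced by a "./qa.py --stats-file ..." run.
with open("/tmp/memgraph_V1.json") as f:
    stats = json.load(f)
print("{} / {} scenarios passed in suite {}".format(
    stats["passed"], stats["total"], stats["test_suite"]))
```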


@@ -1,29 +0,0 @@
#!/usr/bin/env python3
'''
Filters failing scenarios from a tck test run and prints them to stdout.
'''
from argparse import ArgumentParser
def main():
argp = ArgumentParser(description=__doc__)
argp.add_argument('test_log', metavar='TEST_LOG', type=str,
help='Path to the log of a tck test run')
args = argp.parse_args()
with open(args.test_log) as f:
scenario_failed = False
scenario_lines = []
for line in f:
if line.strip().startswith('Scenario:'):
if scenario_failed:
print(''.join(scenario_lines))
scenario_failed = False
scenario_lines.clear()
if line.strip().startswith('AssertionError'):
scenario_failed = True
scenario_lines.append(line)
if __name__ == '__main__':
main()


@@ -1,97 +0,0 @@
#!/bin/bash
function print_usage_and_exit {
echo "./local_runner --test-suite test_suite [--distributed] [--num-machines num_machines]"
echo "Required arguments:"
echo -e " --test-suite test_suite\trun test_suite scenarios, test_suite must be test folder in tck_engine/tests."
echo -e " --name name\tunique identifer of test_suite and its parameters"
echo "Optional arguments:"
echo -e " --memgraph-params \"param1=value1 param2=value2\"\tcommand line arguments for memgraph"
echo -e " --distributed\trun memgraph in distributed"
echo -e " --num-machines num-machines\tnumber of machines for distributed, default is 3"
exit 1
}
# exit if any subcommand returns a non-zero status
set -e
# read arguments
distributed=false
num_machines=3
memgraph_params=""
while [[ $# -gt 0 ]]; do
case $1 in
--distributed)
distributed=true
shift
;;
--num-machines)
if [ $# -eq 1 ]; then
print_usage_and_exit
fi
num_machines=$2
re='^[0-9]+$'
if ! [[ $num_machines =~ $re ]] ; then
print_usage_and_exit
fi
shift
shift
;;
--memgraph-params)
if [ $# -eq 1 ]; then
print_usage_and_exit
fi
memgraph_params=$2
shift
shift
;;
--name)
if [ $# -eq 1 ]; then
print_usage_and_exit
fi
name=$2
shift
shift
;;
--test-suite)
if [ $# -eq 1 ]; then
print_usage_and_exit
fi
test_suite=$2
shift
shift
;;
*)
# unknown option
print_usage_and_exit
;;
esac
done
if [[ "$test_suite" = "" ]]; then
print_usage_and_exit
fi
# save the path where this script is
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# activate virtualenv
source $script_dir/ve3/bin/activate
# run scenarios
cd ${script_dir}
tck_flags="--root tck_engine/tests/$test_suite
--test-name $name
--db memgraph"
if [[ $distributed = true ]]; then
tck_flags="$tck_flags --distributed"
tck_flags="$tck_flags --num-machines $num_machines"
fi
if [ -n "$memgraph_params" ]; then
python3 tck_engine/test_executor.py $tck_flags --memgraph-params \"$memgraph_params\"
else
python3 tck_engine/test_executor.py $tck_flags
fi


@@ -1,78 +0,0 @@
#!/usr/bin/env python3
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
import json
from argparse import ArgumentParser
"""
Plots graph of latencies of memgraph and neo4j. Takes paths to
json of latencies as arguments.
"""
def main():
argp = ArgumentParser(description=__doc__)
argp.add_argument('--memgraph-latency',
help='Path to the json of memgraph latency')
argp.add_argument('--neo4j-latency',
help='Path to the json of neo4j latency')
args = argp.parse_args()
fig = plt.gcf()
fig.set_size_inches(10, 16)
with open(args.neo4j_latency) as json_file:
json_neo = json.load(json_file)
with open(args.memgraph_latency) as json_file:
json_mem = json.load(json_file)
tests_num = 0
time_list_neo = []
time_list_mem = []
max_time = 0
for key in json_mem['data']:
if json_neo['data'][key]['status'] == "passed" and \
json_mem['data'][key]['status'] == 'passed':
time_neo = json_neo['data'][key]['execution_time']
time_mem = json_mem['data'][key]['execution_time']
max_time = max(max_time, time_neo, time_mem)
offset = 0.01 * max_time
for key in json_mem['data']:
if json_neo['data'][key]['status'] == "passed" and \
json_mem['data'][key]['status'] == 'passed':
time_neo = json_neo['data'][key]['execution_time']
time_mem = json_mem['data'][key]['execution_time']
time_list_neo.append(time_neo)
time_list_mem.append(time_mem)
tests_num += 1
if time_neo < time_mem:
plt.plot((time_mem, time_neo), (tests_num, tests_num), color='red',
label=key, lw=0.3)
else:
plt.plot((time_mem, time_neo), (tests_num, tests_num), color='green',
label=key, lw=0.3)
ratio = '%.2f' % (max(time_neo, time_mem) / min(time_neo, time_mem))
plt.text(max(time_mem, time_neo) + offset, tests_num, key + " ---> " + \
ratio + "x", size=1)
x = range(1, tests_num + 1)
plt.plot(time_list_mem, x, marker='o', markerfacecolor='orange', color='orange',
linestyle='', markersize=0.5)
plt.plot(time_list_neo, x, marker='o', markerfacecolor='blue', color='blue',
linestyle='', markersize=0.5)
plt.margins(0.1, 0.01)
plt.savefig("latency_graph.png", dpi=2000)
if __name__ == '__main__':
main()

tests/qa/qa.py

@@ -0,0 +1,89 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import argparse
import os
import sys
from behave.__main__ import main as behave_main
from behave import configuration
SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
def add_config(option, **kwargs):
found = False
for config in configuration.options:
try:
config[0].index(option)
found = True
except ValueError:
pass
if found:
return
configuration.options.append(((option,), kwargs))
def main():
argp = argparse.ArgumentParser()
args_bool, args_value = [], []
def add_argument(option, **kwargs):
argp.add_argument(option, **kwargs)
add_config(option, **kwargs)
if "action" in kwargs and kwargs["action"].startswith("store"):
val = False if kwargs["action"] == "store_true" else True
args_bool.append((option, val))
else:
args_value.append(option)
# Custom argument for test suite
argp.add_argument("test_suite", help="test suite that should be executed")
add_config("--test-suite")
add_config("--test-directory")
# Arguments that should be passed on to Behave
add_argument("--db-host", default="127.0.0.1",
help="server host (default is 127.0.0.1)")
add_argument("--db-port", default="7687",
help="server port (default is 7687)")
add_argument("--db-user", default="memgraph",
help="server user (default is memgraph)")
add_argument("--db-pass", default="memgraph",
help="server pass (default is memgraph)")
add_argument("--stop", action="store_true",
help="stop testing after first fail")
add_argument("--single-fail", action="store_true",
help="pause after failed scenario")
add_argument("--single-scenario", action="store_true",
help="pause after every scenario")
add_argument("--single-feature", action="store_true",
help="pause after every feature")
add_argument("--stats-file", default="", help="statistics output file")
# Parse arguments
parsed_args = argp.parse_args()
# Find tests
test_directory = os.path.join(SCRIPT_DIR, "tests", parsed_args.test_suite)
# Create arguments for Behave
behave_args = [test_directory]
for arg_name in args_value:
var_name = arg_name[2:].replace("-", "_")
behave_args.extend([arg_name, getattr(parsed_args, var_name)])
for arg_name, arg_val in args_bool:
var_name = arg_name[2:].replace("-", "_")
current = getattr(parsed_args, var_name)
if current != arg_val:
behave_args.append(arg_name)
behave_args.extend(["--test-suite", parsed_args.test_suite])
behave_args.extend(["--test-directory", test_directory])
# Run Behave tests
return behave_main(behave_args)
if __name__ == '__main__':
sys.exit(main())
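The key trick in qa.py is registering its extra flags with Behave's own
configuration so that behave_main() accepts them on the rebuilt argv. A
self-contained, hedged sketch of that mechanism (behave 1.2.5 keeps known
options in behave.configuration.options as ((names...), kwargs) tuples):

```python
from behave import configuration

def add_config(option, **kwargs):
    # Register a custom flag, e.g. "--stats-file", unless already known.
    if not any(option in names for names, _ in configuration.options):
        configuration.options.append(((option,), kwargs))

add_config("--stats-file", default="", help="statistics output file")
# Behave's parser now accepts the flag, so it can be forwarded verbatim:
# behave_main(["tests/memgraph_V1", "--stats-file", "stats.json"])
```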


@@ -0,0 +1,40 @@
# -*- coding: utf-8 -*-
def query(q, context, params={}):
"""
Function used to execute query on database. Query results are
set in context.result_list. If exception occurs, it is set on
context.exception.
@param q:
String, database query.
@param context:
behave.runner.Context, context of all tests.
@return:
List of query results.
"""
results_list = []
session = context.driver.session()
try:
# executing query
results = session.run(q, params)
results_list = list(results)
"""
This code snippet should replace code which is now
executing queries when session.transactions will be supported.
with session.begin_transaction() as tx:
results = tx.run(q, params)
summary = results.summary()
results_list = list(results)
tx.success = True
"""
except Exception as e:
# exception
context.exception = e
context.log.info('%s', str(e))
finally:
session.close()
return results_list


@@ -224,3 +224,23 @@ def step(context):
@then(u'a SyntaxError should be raised at compile time: InvalidUnicodeCharacter')
def step(context):
handle_error(context)
@then(u'a SyntaxError should be raised at compile time: InvalidArgumentPassingMode')
def step_impl(context):
handle_error(context)
@then(u'a SyntaxError should be raised at compile time: InvalidNumberOfArguments')
def step_impl(context):
handle_error(context)
@then(u'a ParameterMissing should be raised at compile time: MissingParameter')
def step_impl(context):
handle_error(context)
@then(u'a ProcedureError should be raised at compile time: ProcedureNotFound')
def step_impl(context):
handle_error(context)


@@ -15,13 +15,11 @@ def clear_graph(context):
@given('an empty graph')
def empty_graph_step(context):
clear_graph(context)
context.graph_properties.set_beginning_parameters()
@given('any graph')
def any_graph_step(context):
clear_graph(context)
context.graph_properties.set_beginning_parameters()
@given('graph "{name}"')
@@ -37,7 +35,8 @@ def create_graph(name, context):
and sets graph properties to beginning values.
"""
clear_graph(context)
path = find_graph_path(name, context.config.root)
path = os.path.join(context.config.test_directory, "graphs",
name + ".cypher")
q_marks = ["'", '"', '`']
@@ -64,15 +63,3 @@ def create_graph(name, context):
i += 1
if single_query.strip() != '':
database.query(single_query, context)
context.graph_properties.set_beginning_parameters()
def find_graph_path(name, path):
"""
Function returns path to .cypher file with given name in
given folder or subfolders. Argument path is path to a given
folder.
"""
for root, dirs, files in os.walk(path):
if name + '.cypher' in files:
return root + '/' + name + '.cypher'


@@ -20,7 +20,6 @@ def parameters_step(context):
def having_executed_step(context):
context.results = database.query(
context.text, context, context.test_parameters.get_parameters())
context.graph_properties.set_beginning_parameters()
@when('executing query')
@@ -295,54 +294,11 @@ def empty_result_step(context):
check_exception(context)
def side_effects_number(prop, table):
"""
Function returns an expected list of side effects for property prop
from a table given in a cucumber test.
@param prop:
String, property from the description; can be nodes, relationships,
labels or properties.
@param table:
behave.model.Table, context table with side effects.
@return
List of expected signed side-effect values for prop, sorted.
"""
ret = []
for row in table:
sign = -1
if row[0][0] == '+':
sign = 1
if row[0][1:] == prop:
ret.append(int(row[1]) * sign)
sign = -1
row = table.headings
if row[0][0] == '+':
sign = 1
if row[0][1:] == prop:
ret.append(int(row[1]) * sign)
ret.sort()
return ret
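# (Editorial worked example, hedged: for the README's table -- heading row
# ["+properties", "1"] swallowed by Behave, data row ["-properties", "1"] --
# side_effects_number("properties", table) collects -1 from the data row and
# +1 from the heading and returns the sorted list [-1, 1].)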
@then('the side effects should be')
def side_effects_step(context):
if not context.config.side_effects:
return
table = context.table
# get side effects from db queries
nodes_dif = side_effects_number("nodes", table)
relationships_dif = side_effects_number("relationships", table)
labels_dif = side_effects_number("labels", table)
properties_dif = side_effects_number("properties", table)
# compare side effects
assert(context.graph_properties.compare(nodes_dif,
relationships_dif, labels_dif, properties_dif))
return
@then('no side effects')
def side_effects_step(context):
if not context.config.side_effects:
return
# check if side effects are non existing
assert(context.graph_properties.compare([], [], [], []))
return


@@ -1,3 +0,0 @@
report/
__pycache__/
*.output


@@ -1,6 +0,0 @@
[behave]
stderr_capture=False
stdout_capture=False
format=progress
junit=1
junit_directory=report


@@ -1,327 +0,0 @@
#
# Copyright 2016 "Neo Technology",
# Network Engine for Objects in Lund AB (http://neotechnology.com)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
Feature: TriadicSelection
Scenario: Handling triadic friend of a friend
Given the binary-tree-1 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b)-->(c)
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b2' |
| 'b3' |
| 'c11' |
| 'c12' |
| 'c21' |
| 'c22' |
And no side effects
Scenario: Handling triadic friend of a friend that is not a friend
Given the binary-tree-1 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b)-->(c)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b3' |
| 'c11' |
| 'c12' |
| 'c21' |
| 'c22' |
And no side effects
Scenario: Handling triadic friend of a friend that is not a friend with different relationship type
Given the binary-tree-1 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b)-->(c)
OPTIONAL MATCH (a)-[r:FOLLOWS]->(c)
WITH c WHERE r IS NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b2' |
| 'c11' |
| 'c12' |
| 'c21' |
| 'c22' |
And no side effects
Scenario: Handling triadic friend of a friend that is not a friend with superset of relationship type
Given the binary-tree-1 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b)-->(c)
OPTIONAL MATCH (a)-[r]->(c)
WITH c WHERE r IS NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'c11' |
| 'c12' |
| 'c21' |
| 'c22' |
And no side effects
Scenario: Handling triadic friend of a friend that is not a friend with implicit subset of relationship type
Given the binary-tree-1 graph
When executing query:
"""
MATCH (a:A)-->(b)-->(c)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b3' |
| 'b4' |
| 'c11' |
| 'c12' |
| 'c21' |
| 'c22' |
| 'c31' |
| 'c32' |
| 'c41' |
| 'c42' |
And no side effects
Scenario: Handling triadic friend of a friend that is not a friend with explicit subset of relationship type
Given the binary-tree-1 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS|FOLLOWS]->(b)-->(c)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b3' |
| 'b4' |
| 'c11' |
| 'c12' |
| 'c21' |
| 'c22' |
| 'c31' |
| 'c32' |
| 'c41' |
| 'c42' |
And no side effects
Scenario: Handling triadic friend of a friend that is not a friend with same labels
Given the binary-tree-2 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b:X)-->(c:X)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b3' |
| 'c11' |
| 'c21' |
And no side effects
Scenario: Handling triadic friend of a friend that is not a friend with different labels
Given the binary-tree-2 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b:X)-->(c:Y)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'c12' |
| 'c22' |
And no side effects
Scenario: Handling triadic friend of a friend that is not a friend with implicit subset of labels
Given the binary-tree-2 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b)-->(c:X)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b3' |
| 'c11' |
| 'c21' |
And no side effects
Scenario: Handling triadic friend of a friend that is not a friend with implicit superset of labels
Given the binary-tree-2 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b:X)-->(c)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b3' |
| 'c11' |
| 'c12' |
| 'c21' |
| 'c22' |
And no side effects
Scenario: Handling triadic friend of a friend that is a friend
Given the binary-tree-2 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b)-->(c)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NOT NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b2' |
And no side effects
Scenario: Handling triadic friend of a friend that is a friend with different relationship type
Given the binary-tree-1 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b)-->(c)
OPTIONAL MATCH (a)-[r:FOLLOWS]->(c)
WITH c WHERE r IS NOT NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b3' |
And no side effects
Scenario: Handling triadic friend of a friend that is a friend with superset of relationship type
Given the binary-tree-1 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b)-->(c)
OPTIONAL MATCH (a)-[r]->(c)
WITH c WHERE r IS NOT NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b2' |
| 'b3' |
And no side effects
Scenario: Handling triadic friend of a friend that is a friend with implicit subset of relationship type
Given the binary-tree-1 graph
When executing query:
"""
MATCH (a:A)-->(b)-->(c)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NOT NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b1' |
| 'b2' |
And no side effects
Scenario: Handling triadic friend of a friend that is a friend with explicit subset of relationship type
Given the binary-tree-1 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS|FOLLOWS]->(b)-->(c)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NOT NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b1' |
| 'b2' |
And no side effects
Scenario: Handling triadic friend of a friend that is a friend with same labels
Given the binary-tree-2 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b:X)-->(c:X)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NOT NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b2' |
And no side effects
Scenario: Handling triadic friend of a friend that is a friend with different labels
Given the binary-tree-2 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b:X)-->(c:Y)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NOT NULL
RETURN c.name
"""
Then the result should be:
| c.name |
And no side effects
Scenario: Handling triadic friend of a friend that is a friend with implicit subset of labels
Given the binary-tree-2 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b)-->(c:X)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NOT NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b2' |
And no side effects
Scenario: Handling triadic friend of a friend that is a friend with implicit superset of labels
Given the binary-tree-2 graph
When executing query:
"""
MATCH (a:A)-[:KNOWS]->(b:X)-->(c)
OPTIONAL MATCH (a)-[r:KNOWS]->(c)
WITH c WHERE r IS NOT NULL
RETURN c.name
"""
Then the result should be:
| c.name |
| 'b2' |
And no side effects


@@ -1,292 +0,0 @@
# -*- coding: utf-8 -*-
import atexit
import datetime
import json
import logging
import os
import subprocess
import sys
import tempfile
import time
from fcntl import fcntl, F_GETFL, F_SETFL
from steps.test_parameters import TestParameters
from neo4j.v1 import GraphDatabase, basic_auth
from steps.graph_properties import GraphProperties
from test_results import TestResults
# Constants - Memgraph flags
COMMON_FLAGS = ["--durability-enabled=false",
"--snapshot-on-exit=false",
"--db-recover-on-startup=false"]
DISTRIBUTED_FLAGS = ["--num-workers", str(6),
"--rpc-num-client-workers", str(6),
"--rpc-num-server-workers", str(6)]
MASTER_PORT = 10000
MASTER_FLAGS = ["--master",
"--master-port", str(MASTER_PORT)]
MEMGRAPH_PORT = 7687
# Module-scoped variables
test_results = TestResults()
temporary_directory = tempfile.TemporaryDirectory()
# Helper functions
def get_script_path():
return os.path.dirname(os.path.realpath(__file__))
def start_process(cmd, stdout=subprocess.DEVNULL,
stderr=subprocess.PIPE, **kwargs):
ret = subprocess.Popen(cmd, stdout=stdout, stderr=stderr, **kwargs)
# set the O_NONBLOCK flag of process stderr file descriptor
if stderr == subprocess.PIPE:
flags = fcntl(ret.stderr, F_GETFL) # get current stderr flags
fcntl(ret.stderr, F_SETFL, flags | os.O_NONBLOCK)
return ret
def is_tested_system_active(context):
return all(proc.poll() is None for proc in context.memgraph_processes)
def is_tested_system_inactive(context):
return not any(proc.poll() is None for proc in context.memgraph_processes)
def get_worker_flags(worker_id):
flags = ["--worker",
"--worker-id", str(worker_id),
"--worker-port", str(10000 + worker_id),
"--master-port", str(10000)]
return flags
def wait_for_server(port, delay=0.01):
cmd = ["nc", "-z", "-w", "1", "127.0.0.1", str(port)]
count = 0
while subprocess.call(cmd) != 0:
time.sleep(0.01)
if count > 20 / 0.01:
print("Could not wait for server on port", port, "to startup!")
sys.exit(1)
count += 1
time.sleep(delay)
def run_memgraph(context, flags, distributed):
if distributed:
memgraph_binary = "memgraph_distributed"
else:
memgraph_binary = "memgraph"
memgraph_cmd = [os.path.join(context.memgraph_dir, memgraph_binary)]
memgraph_subprocess = start_process(memgraph_cmd + flags)
context.memgraph_processes.append(memgraph_subprocess)
def start_memgraph(context):
if context.config.distributed: # Run distributed
flags = COMMON_FLAGS.copy()
if context.config.memgraph_params:
flags += context.extra_flags
master_flags = flags.copy()
master_flags.append("--durability-directory=" + os.path.join(
temporary_directory.name, "master"))
run_memgraph(context, master_flags + DISTRIBUTED_FLAGS + MASTER_FLAGS,
context.config.distributed)
wait_for_server(MASTER_PORT, 0.5)
for i in range(1, int(context.config.num_machines)):
worker_flags = flags.copy()
worker_flags.append("--durability-directory=" + os.path.join(
temporary_directory.name, "worker" + str(i)))
run_memgraph(context, worker_flags + DISTRIBUTED_FLAGS +
get_worker_flags(i), context.config.distributed)
wait_for_server(MASTER_PORT + i, 0.5)
else: # Run single machine memgraph
flags = COMMON_FLAGS.copy()
if context.config.memgraph_params:
flags += context.extra_flags
flags.append("--durability-directory=" + temporary_directory.name)
run_memgraph(context, flags, context.config.distributed)
assert is_tested_system_active(context), "Failed to start memgraph"
wait_for_server(MEMGRAPH_PORT, 0.5) # wait for memgraph to start
def cleanup(context):
if context.config.database == "memgraph":
list(map(lambda p: p.kill(), context.memgraph_processes))
list(map(lambda p: p.wait(), context.memgraph_processes))
assert is_tested_system_inactive(context), "Failed to stop memgraph"
context.memgraph_processes.clear()
def get_test_suite(context):
"""
Returns test suite from a test root folder.
If test root is a feature file, name of file is returned without
.feature extension.
"""
root = context.config.root
if root.endswith("/"):
root = root[0:len(root) - 1]
if root.endswith("features"):
root = root[0: len(root) - len("features") - 1]
test_suite = root.split('/')[-1]
return test_suite
def set_logging(context):
"""
Initializes log and sets logging level to debug.
"""
logging.basicConfig(level="DEBUG")
log = logging.getLogger(__name__)
context.log = log
def create_db_driver(context):
"""
Creates database driver and returns it.
"""
uri = context.config.database_uri
auth_token = basic_auth(
context.config.database_username, context.config.database_password)
if context.config.database == "neo4j" or \
context.config.database == "memgraph":
driver = GraphDatabase.driver(uri, auth=auth_token, encrypted=0)
else:
raise "Unsupported database type"
return driver
# Behave specific functions
def before_step(context, step):
"""
Executes before every step. Checks if step is execution
step and sets context variable to true if it is.
"""
context.execution_step = False
if step.name == "executing query":
context.execution_step = True
def before_scenario(context, scenario):
"""
Executes before every scenario. Initializes test parameters,
graph properties, exception and test execution time.
"""
if context.config.database == "memgraph":
# Check if memgraph is up and running
if is_tested_system_active(context):
context.is_tested_system_restarted = False
else:
cleanup(context)
start_memgraph(context)
context.is_tested_system_restarted = True
context.test_parameters = TestParameters()
context.graph_properties = GraphProperties()
context.exception = None
context.execution_time = None
def before_all(context):
"""
Executes before running tests. Initializes driver and latency
dict and creates needed directories.
"""
timestamp = datetime.datetime.fromtimestamp(
time.time()).strftime("%Y_%m_%d__%H_%M_%S")
latency_file = "latency/" + context.config.database + "/" + \
get_test_suite(context) + "/" + timestamp + ".json"
if not os.path.exists(os.path.dirname(latency_file)):
os.makedirs(os.path.dirname(latency_file))
context.latency_file = latency_file
context.js = dict()
context.js["metadata"] = dict()
context.js["metadata"]["execution_time_unit"] = "seconds"
context.js["data"] = dict()
set_logging(context)
# set config for memgraph
context.memgraph_processes = []
script_path = get_script_path()
context.memgraph_dir = os.path.realpath(
os.path.join(script_path, "../../../build"))
if not os.path.exists(context.memgraph_dir):
context.memgraph_dir = os.path.realpath(
os.path.join(script_path, "../../../build_debug"))
if context.config.memgraph_params:
params = context.config.memgraph_params.strip("\"")
context.extra_flags = params.split()
atexit.register(cleanup, context)
if context.config.database == "memgraph":
start_memgraph(context)
context.driver = create_db_driver(context)
def after_scenario(context, scenario):
"""
Executes after every scenario. Pauses execution if flags are set.
Adds execution time to latency dict if it is not None.
"""
err_output = [p.stderr.read() # noqa unused variable
for p in context.memgraph_processes]
# print error output for each subprocess if scenario failed
if scenario.status == "failed":
for i, err in enumerate(err_output):
if err:
err = err.decode("utf-8")
print("\n", "-" * 5, "Machine {}".format(i), "-" * 5)
list(map(print, [line for line in err.splitlines()]))
test_results.add_test(scenario.status, context.is_tested_system_restarted)
if context.config.single_scenario or \
(context.config.single_fail and scenario.status == "failed"):
print("Press enter to continue")
sys.stdin.readline()
if context.execution_time is not None:
context.js['data'][scenario.name] = {
"execution_time": context.execution_time, "status": scenario.status
}
def after_feature(context, feature):
"""
Executes after every feature. If flag is set, pauses before
executing next scenario.
"""
if context.config.single_feature:
print("Press enter to continue")
sys.stdin.readline()
def after_all(context):
"""
Executes when testing is finished. Creates JSON files of test latency
and test results.
"""
context.driver.close()
timestamp = datetime.datetime.fromtimestamp(
time.time()).strftime("%Y_%m_%d__%H_%M")
test_suite = get_test_suite(context)
file_name = context.config.output_folder + timestamp + \
"-" + context.config.database + "-" + context.config.test_name + \
".json"
js = {
"total": test_results.num_total(),
"passed": test_results.num_passed(),
"restarts": test_results.num_restarts(),
"test_suite": test_suite,
"timestamp": timestamp,
"db": context.config.database
}
with open(file_name, 'w') as f:
json.dump(js, f)
with open(context.latency_file, "a") as f:
json.dump(context.js, f)


@@ -1,2 +0,0 @@
*
!.gitignore


@@ -1,83 +0,0 @@
# -*- coding: utf-8 -*-
import time
def query(q, context, params={}):
"""
Function used to execute query on database. Query results are
set in context.result_list. If exception occurs, it is set on
context.exception.
@param q:
String, database query.
@param context:
behave.runner.Context, context of all tests.
@return:
List of query results.
"""
results_list = []
if (context.config.database == "neo4j" or
context.config.database == "memgraph"):
session = context.driver.session()
start = time.time()
try:
# executing query
results = session.run(q, params)
if context.config.side_effects:
summary = results.summary()
add_side_effects(context, summary.counters)
results_list = list(results)
"""
This code snippet should replace code which is now
executing queries when session.transactions will be supported.
with session.begin_transaction() as tx:
results = tx.run(q, params)
summary = results.summary()
if context.config.side_effects:
add_side_effects(context, summary.counters)
results_list = list(results)
tx.success = True
"""
except Exception as e:
# exception
context.exception = e
context.log.info('%s', str(e))
finally:
end = time.time()
if context.execution_step is not None and \
context.execution_step:
context.execution_time = end - start
session.close()
return results_list
def add_side_effects(context, counters):
"""
Function adds side effects from a query to the graph properties.
@param context:
behave.runner.Context, context of all tests.
"""
graph_properties = context.graph_properties
# check nodes
if counters.nodes_deleted > 0:
graph_properties.change_nodes(-counters.nodes_deleted)
if counters.nodes_created > 0:
graph_properties.change_nodes(counters.nodes_created)
# check relationships
if counters.relationships_deleted > 0:
graph_properties.change_relationships(-counters.relationships_deleted)
if counters.relationships_created > 0:
graph_properties.change_relationships(counters.relationships_created)
# check labels
if counters.labels_removed > 0:
graph_properties.change_labels(-counters.labels_removed)
if counters.labels_added > 0:
graph_properties.change_labels(counters.labels_added)
# check properties
if counters.properties_set > 0:
graph_properties.change_properties(counters.properties_set)


@@ -1,131 +0,0 @@
# -*- coding: utf-8 -*-
class GraphProperties:
"""
Class used to store changes (side effects of queries)
to graph parameters (nodes, relationships, labels and
properties) when executing queries.
"""
def set_beginning_parameters(self):
"""
Method sets parameters to empty lists.
@param self:
Instance of a class.
"""
self.nodes = []
self.relationships = []
self.labels = []
self.properties = []
def __init__(self):
"""
Method sets parameters to empty lists.
@param self:
Instance of a class.
"""
self.nodes = []
self.relationships = []
self.labels = []
self.properties = []
def change_nodes(self, dif):
"""
Method adds node side effect.
@param self:
Instance of a class.
@param dif:
Int, difference between number of nodes before
and after executing query.
"""
self.nodes.append(dif)
def change_relationships(self, dif):
"""
Method adds relationship side effect.
@param self:
Instance of a class.
@param dif:
Int, difference between number of relationships
before and after executing query.
"""
self.relationships.append(dif)
def change_labels(self, dif):
"""
Method adds one label side effect.
@param self:
Instance of a class.
@param dif:
Int, difference between number of labels before
and after executing query.
"""
self.labels.append(dif)
def change_properties(self, dif):
"""
Method adds one property side effect.
@param self:
Instance of a class.
@param dif:
Int, number of properties set in query.
"""
self.properties.append(dif)
def compare(self, nodes_dif, relationships_dif, labels_dif,
properties_dif):
"""
Method used to compare side effects from executing
queries and an expected result from a cucumber test.
@param self:
Instance of a class.
@param nodes_dif:
List of all expected node side effects in order
when executing query.
@param relationships_dif:
List of all expected relationship side effects
in order when executing query.
@param labels_dif:
List of all expected label side effects in order
when executing query.
@param properties_dif:
List of all expected property side effects in order
when executing query.
@return:
True if all side effects are equal, else false.
"""
if len(nodes_dif) != len(self.nodes):
return False
if len(relationships_dif) != len(self.relationships):
return False
if len(labels_dif) != len(self.labels):
return False
if len(properties_dif) != len(self.properties):
return False
for i in range(0, len(nodes_dif)):
if nodes_dif[i] != self.nodes[i]:
return False
for i in range(0, len(relationships_dif)):
if relationships_dif[i] != self.relationships[i]:
return False
for i in range(0, len(labels_dif)):
if labels_dif[i] != self.labels[i]:
return False
for i in range(0, len(properties_dif)):
if properties_dif[i] != self.properties[i]:
return False
return True


@@ -1,135 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from behave.__main__ import main as behave_main
from behave import configuration
from argparse import ArgumentParser
import os
import sys
def parse_args(argv):
argp = ArgumentParser(description=__doc__)
argp.add_argument("--root", default="tck_engine/tests/memgraph_V1",
help="Path to folder where tests are located, default "
"is tck_engine/tests/memgraph_V1")
argp.add_argument(
"--stop", action="store_true", help="Stop testing after first fail.")
argp.add_argument("--side-effects", action="store_false",
help="Check for side effects in tests.")
argp.add_argument("--db", default="memgraph",
choices=["neo4j", "memgraph"],
help="Default is memgraph.")
argp.add_argument("--db-user", default="neo4j", help="Default is neo4j.")
argp.add_argument(
"--db-pass", default="1234", help="Default is 1234.")
argp.add_argument("--db-uri", default="bolt://127.0.0.1:7687",
help="Default is bolt://127.0.0.1:7687.")
argp.add_argument("--output-folder", default="tck_engine/results/",
help="Test result output folder, default is results/.")
argp.add_argument("--logging", default="DEBUG", choices=["INFO", "DEBUG"],
help="Logging level, default is DEBUG.")
argp.add_argument("--unstable", action="store_true",
help="Include unstable feature from features.")
argp.add_argument("--single-fail", action="store_true",
help="Pause after failed scenario.")
argp.add_argument("--single-scenario", action="store_true",
help="Pause after every scenario.")
argp.add_argument("--single-feature", action="store_true",
help="Pause after every feature.")
argp.add_argument("--test-name", default="",
help="Name of the test")
argp.add_argument("--distributed", action="store_true",
help="Run memgraph in distributed")
argp.add_argument("--num-machines", type=int, default=3,
help="Number of machines for distributed run")
argp.add_argument("--memgraph-params", default="",
help="Additional params for memgraph run")
return argp.parse_args(argv)
def add_config(option, dictionary):
configuration.options.append(
((option,), dictionary)
)
def main(argv):
"""
Script used to run behave tests with given options. List of
options is available when running python test_executor.py --help.
"""
args = parse_args(argv)
tests_root = os.path.abspath(args.root)
# adds options to cucumber configuration
add_config("--side-effects",
dict(action="store_false", help="Exclude side effects."))
add_config("--database", dict(help="Choose database(memgraph/neo4j)."))
add_config("--database-password", dict(help="Database password."))
add_config("--database-username", dict(help="Database username."))
add_config("--database-uri", dict(help="Database uri."))
add_config("--output-folder", dict(
help="Folder where results of tests are written."))
add_config("--root", dict(help="Folder with test features."))
add_config("--single-fail",
dict(action="store_true", help="Pause after failed scenario."))
add_config("--single-scenario",
dict(action="store_true", help="Pause after every scenario."))
add_config("--single-feature",
dict(action="store_true", help="Pause after every feature."))
add_config("--test-name", dict(help="Name of the test."))
add_config("--distributed",
dict(action="store_true", help="Run memgraph in distributed."))
add_config("--num-machines",
dict(help="Number of machines for distributed run."))
add_config("--memgraph-params", dict(help="Additional memgraph params."))
# list with all options
# options will be passed to the cucumber engine
behave_options = [tests_root]
if args.stop:
behave_options.append("--stop")
if args.side_effects:
behave_options.append("--side-effects")
if args.db != "memgraph":
behave_options.append("-e")
behave_options.append("memgraph*")
if not args.unstable:
behave_options.append("-e")
behave_options.append("unstable*")
behave_options.append("--database")
behave_options.append(args.db)
behave_options.append("--database-password")
behave_options.append(args.db_pass)
behave_options.append("--database-username")
behave_options.append(args.db_user)
behave_options.append("--database-uri")
behave_options.append(args.db_uri)
behave_options.append("--root")
behave_options.append(args.root)
if (args.single_fail):
behave_options.append("--single-fail")
if (args.single_scenario):
behave_options.append("--single-scenario")
if (args.single_feature):
behave_options.append("--single-feature")
if (args.distributed):
behave_options.append("--distributed")
behave_options.append("--num-machines")
behave_options.append(str(args.num_machines))
behave_options.append("--output-folder")
behave_options.append(args.output_folder)
behave_options.append("--test-name")
behave_options.append(args.test_name)
if (args.memgraph_params):
behave_options.append("--memgraph-params")
behave_options.append(args.memgraph_params)
# runs tests with options
return behave_main(behave_options)
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))


@@ -1,51 +0,0 @@
# -*- coding: utf-8 -*-
class TestResults:
"""
Class used to store test results.
@attribute total:
int, total number of scenarios.
@attribute passed:
int, number of passed scenarios.
@attribute restarts:
int, number of restarts of underlying tested system.
"""
def __init__(self):
self.total = 0
self.passed = 0
self.restarts = 0
def num_passed(self):
"""
Getter for param passed.
"""
return self.passed
def num_total(self):
"""
Getter for param total.
"""
return self.total
def num_restarts(self):
"""
Getter for param restarts.
"""
return self.restarts
def add_test(self, status, is_tested_system_restarted):
"""
Method adds one scenario to current results. If
scenario passed, number of passed scenarios increases.
@param status:
string in behave 1.2.5, 'passed' if scenario passed
"""
if status == "passed":
self.passed += 1
self.total += 1
if is_tested_system_restarted:
self.restarts += 1


@@ -0,0 +1,12 @@
- name: memgraph_V1
test_suite: memgraph_V1
must_pass: true
- name: memgraph_V1_POD
test_suite: memgraph_V1
properties_on_disk: "x,y,z,w,k,v,a,b,c,d,e,f,r,t,o,prop,age,name,surname,location"
must_pass: true
- name: openCypher_M09
test_suite: openCypher_M09
must_pass: false
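A hedged sketch of how continuous_integration consumes these entries
(mirroring the loader in its listing above; yaml.load without an explicit
Loader matches the 2018-era PyYAML usage there):

```python
import yaml

with open("tests/config.yaml") as f:
    suites = yaml.load(f)  # a list of suite dicts, as defined above

for suite in suites:
    params = []
    if "properties_on_disk" in suite:
        params = ["--properties-on-disk=" + suite["properties_on_disk"]]
    print(suite["name"], "must pass" if suite["must_pass"] else "optional", params)
```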


@@ -59,6 +59,20 @@ Feature: Aggregations
| 3 | 0 |
| 2 | 1 |
Scenario: Count test 06:
Given an empty graph
And having executed
"""
CREATE (), (), (), (), ()
"""
When executing query:
"""
MATCH (n) RETURN COUNT(*) AS n
"""
Then the result should be:
| n |
| 5 |
Scenario: Sum test 01:
Given an empty graph
And having executed


@@ -635,6 +635,15 @@ Feature: Functions
| (:y) | false | true |
| (:y) | false | true |
Scenario: E test:
When executing query:
"""
RETURN E() as n
"""
Then the result should be:
| n |
| 2.718281828459045 |
Scenario: Pi test:
When executing query:
"""
@@ -786,6 +795,7 @@ Feature: Functions
Scenario: CounterSet test:
Given an empty graph
When executing query:
"""
WITH counter("n") AS zero
@@ -798,7 +808,6 @@ Feature: Functions
| 42 | 0 | 1 | 0 |
Scenario: Vertex Id test:
# 1024 should be a runtime parameter.
Given an empty graph
And having executed:
"""


@@ -1,5 +1,19 @@
Feature: With
Scenario: With test 01:
Given an empty graph
And having executed:
"""
CREATE (a:A), (b:B), (c:C), (d:D), (e:E), (a)-[:R]->(b), (b)-[:R]->(c), (b)-[:R]->(d), (c)-[:R]->(a), (c)-[:R]->(e), (d)-[:R]->(e)
"""
When executing query:
"""
MATCH (:A)--(a)-->() WITH a, COUNT(*) AS n WHERE n > 1 RETURN a
"""
Then the result should be:
| a |
| (:B) |
Scenario: With test 02:
Given an empty graph
And having executed

Some files were not shown because too many files have changed in this diff.