Merge branch 'master' into E129-MG-label-based-authorization

This commit is contained in:
Josip Mrden 2022-09-06 11:14:27 +02:00
commit 0a66feccff
88 changed files with 3414 additions and 1117 deletions
docs/csv-import-tool
include
src
tests


@ -0,0 +1,204 @@
# CSV Import Tool Documentation
CSV is a universal and very versatile data format used to store large quantities
of data. Each Memgraph database instance has a CSV import tool installed called
`mg_import_csv`. The CSV import tool should be used for initial bulk ingestion
of data into the database. Upon ingestion, the CSV importer creates a snapshot
that will be used by the database to recover its state on its next startup.
If you are already familiar with the Neo4j bulk import tool, then using the
`mg_import_csv` tool should be easy. The CSV import tool is fully compatible
with the [Neo4j CSV
format](https://neo4j.com/docs/operations-manual/current/tools/import/). If you
already have a pipeline set up for Neo4j, you only need to replace `neo4j-admin
import` with `mg_import_csv`.
## CSV File Format
Each row of a CSV file represents a single entry that should be imported into
the database. Both nodes and relationships can be imported into the database
using CSV files.
Each set of CSV files must have a header that describes the data that is stored
in the CSV files. Each field in the CSV header is in the format
`<name>[:<type>]` which identifies the name that should be used for that column
and the type that should be used for that column. The type is optional and
defaults to `string` (see the following chapter).
Each CSV field must be divided using the delimiter and each CSV field can either
be quoted or unquoted. When the field is quoted, the first and last character in
the field *must* be the quote character. If the field isn't quoted, and a quote
character appears in it, it is treated as a regular character. If a quote
character appears inside a quoted string then the quote character must be
doubled in order to escape it. Line feeds and carriage returns are ignored in
the CSV file; also, the file can't contain a NULL character.
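The quoting rules above match the behavior of Python's `csv` module; a minimal sketch (the sample row is made up):

```python
import csv
import io

# One row exercising each rule: an unquoted field, a quoted field, a
# doubled-quote escape, and a quote character inside an unquoted field
# (where it is treated as a regular character).
raw = 'John,"Doe","say ""hi""",un"quoted\r\n'
row = next(csv.reader(io.StringIO(raw), strict=True))
# row == ['John', 'Doe', 'say "hi"', 'un"quoted']
```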
## Properties
Both nodes and relationships can have properties added to them. When importing
properties, the CSV importer uses the name specified in the header of the
corresponding CSV column for the name of the property. A property is designated
by specifying one of the following types in the header:
- `integer`, `int`, `long`, `byte`, `short`: creates an integer property
- `float`, `double`: creates a float property
- `boolean`, `bool`: creates a boolean property
- `string`, `char`: creates a string property
When importing a boolean value, the CSV field must contain exactly the text
`true` to produce a `True` boolean value. All other text values are treated as
`False`.
If you want to import an array of values, you can do so by appending `[]` to any
of the above types. The values of the array are then determined by splitting
the raw CSV value using the array delimiter character.
Assuming that the array delimiter is `;`, the following example:
```plaintext
first_name,last_name:string,number:integer,aliases:string[]
John,Doe,1,Johnny;Jo;J-man
Melissa,Doe,2,Mel
```
will yield these results:
```plaintext
CREATE ({first_name: "John", last_name: "Doe", number: 1, aliases: ["Johnny", "Jo", "J-man"]});
CREATE ({first_name: "Melissa", last_name: "Doe", number: 2, aliases: ["Mel"]});
```
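Putting the header types and the array delimiter together, the conversion can be sketched in Python (the `parse_typed_field` helper is hypothetical and not part of `mg_import_csv`):

```python
import csv
import io

ARRAY_DELIMITER = ";"

def parse_typed_field(raw: str, typ: str):
    # Hypothetical converter mirroring the documented type rules.
    if typ.endswith("[]"):
        return [parse_typed_field(v, typ[:-2]) for v in raw.split(ARRAY_DELIMITER)]
    if typ in ("integer", "int", "long", "byte", "short"):
        return int(raw)
    if typ in ("float", "double"):
        return float(raw)
    if typ in ("boolean", "bool"):
        return raw == "true"  # only the exact text "true" maps to True
    return raw  # string / char

header = "first_name,last_name:string,number:integer,aliases:string[]"
# Missing types default to "string".
names_types = [(h.split(":") + ["string"])[:2] for h in header.split(",")]
row = next(csv.reader(io.StringIO("John,Doe,1,Johnny;Jo;J-man")))
record = {name: parse_typed_field(value, typ)
          for (name, typ), value in zip(names_types, row)}
# record["number"] == 1; record["aliases"] == ["Johnny", "Jo", "J-man"]
```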
### Nodes
When importing nodes, several more types can be specified in the header of the
CSV file (along with all property types):
- `ID`: id of the node that should be used as the node ID when importing
relationships
- `LABEL`: designates that the field contains additional labels for the node
- `IGNORE`: designates that the field should be ignored
The `ID` field type sets the internal ID that will be used for the node when
creating relationships. It is optional and nodes that don't have an ID value
specified will be imported, but can't be connected to any relationships. If you
want to save the ID value as a property in the database, just specify a name for
the ID (`user_id:ID`). If you just want to use the ID during the import, leave
out the name of the field (`:ID`). The `ID` field also supports creating
separate ID spaces. The ID space is specified with the ID space name appended
to the `ID` type in parentheses (`ID(user)`). That allows you to have the same
IDs (by value) for multiple different node files (for example, numbers from 1 to
N). The IDs in each ID space will be treated as an independent set of IDs that
don't interfere with IDs in another ID space.
The `LABEL` field type adds additional labels to the node. The value is treated
as an array type so that multiple additional labels can be specified for each
node. The value is split using the array delimiter (`--array-delimiter` flag).
### Relationships
To import relationships, you must import the corresponding nodes in the same
invocation of `mg_import_csv`.
When importing relationships, several more types can be specified in the header
of the CSV file (along with all property types):
- `START_ID`: id of the start node that should be connected with the
relationship
- `END_ID`: id of the end node that should be connected with the relationship
- `TYPE`: designates the type of the relationship
- `IGNORE`: designates that the field should be ignored
The `START_ID` field type sets the start node that should be connected with the
relationship to the end node. The field *must* be specified and the node ID
must be one of the node IDs that were specified in the node CSV files. The name
of this field is ignored. If the node ID is in an ID space, you can specify the
ID space in the same way as for the node ID (`START_ID(user)`).
The `END_ID` field type sets the end node that should be connected with the
relationship to the start node. The field *must* be specified and the node ID
must be one of the node IDs that were specified in the node CSV files. The name
of this field is ignored. If the node ID is in an ID space, you can specify the
ID space in the same way as for the node ID (`END_ID(user)`).
The `TYPE` field type sets the type of the relationship. Each relationship
*must* have a relationship type, but it doesn't necessarily need to be specified
in the CSV file; it can also be set externally for the whole CSV file. The name
of this field is ignored.
## CSV Importer Flags
The importer has many command line options that allow you to customize the way
the importer loads your data.
The two main flags that are used to specify the input CSV files are `--nodes`
and `--relationships`. A basic description of these flags is provided in the
table below, and a more detailed explanation follows.
| Flag | Description |
|-----------------------| -------------- |
|`--nodes` | Specifies the CSV files that contain the nodes to be imported. |
|`--relationships` | Specifies the CSV files that contain the relationships to be imported.|
|`--delimiter` | Sets the delimiter that should be used when splitting the CSV fields (default `,`)|
|`--quote` | Sets the quote character that should be used to quote a CSV field (default `"`)|
|`--array-delimiter` | Sets the delimiter that should be used when splitting array values (default `;`)|
|`--id-type` | Specifies which data type should be used to store the supplied <br /> node IDs when storing them as properties (if the field name is supplied). <br /> The supported values are either `STRING` or `INTEGER`. (default `STRING`)|
|`--ignore-empty-strings` | Instructs the importer to treat all empty strings as `Null` values <br /> instead of an empty string value (default `false`)|
|`--ignore-extra-columns` | Instructs the importer to ignore all columns (instead of raising an error) <br /> that aren't specified after the last specified column in the CSV header. (default `false`) |
| `--skip-bad-relationships`| Instructs the importer to ignore all relationships (instead of raising an error) <br /> that refer to nodes that don't exist in the node files. (default `false`) |
|`--skip-duplicate-nodes` | Instructs the importer to ignore all duplicate nodes (instead of raising an error). <br /> Duplicate nodes are nodes that have an ID that is the same as another node that was already imported. (default `false`) |
| `--trim-strings`| Instructs the importer to trim all of the loaded CSV field values before processing them further. <br /> Trimming the fields removes all leading and trailing whitespace from them. (default `false`) |
The `--nodes` and `--relationships` flags supply the importer with the CSV
files that contain the nodes and relationships. Multiple files can be
specified in each supplied `--nodes` or `--relationships` flag. Files that are
supplied in one `--nodes` or `--relationships` flag are treated by the CSV
parser as one big CSV file. Only the first line of the first file is parsed for
the CSV header, all other files (and rows) are treated as data. This is useful
when you have a very large CSV file and don't want to edit its first line just
to add a CSV header. Instead, you can specify the header in a separate file
(e.g. `users_header.csv` or `friendships_header.csv`) and have the data intact
in the large file (e.g. `users.csv` or `friendships.csv`). Also, you can supply
additional labels for each set of node files.
The format of the `--nodes` flag is:
`[<label>[:<label>]...=]<file>[,<file>][,<file>]...`. Take note that only the
first `<file>` part is mandatory, all other parts of the flag value are
optional. Multiple `--nodes` flags can be supplied to describe multiple sets of
different node files. For the importer to work, at least one `--nodes` flag
*must* be supplied.
The format of the `--relationships` flag is: `[<type>=]<file>[,<file>][,<file>]...`.
Take note that only the first `<file>` part is mandatory, all other parts of the
flag value are optional. Multiple `--relationships` flags can be supplied to
describe multiple sets of different relationship files. The `--relationships`
flag isn't mandatory.
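As a sketch of the two flag formats (the labels, relationship type, and file names below are hypothetical), the flag values can be assembled like this:

```python
# Hypothetical label set, relationship type, and file names, following
# [<label>[:<label>]...=]<file>[,<file>]... and [<type>=]<file>[,<file>]...
nodes_flag = "--nodes=" + "Person:Employee=" + ",".join(
    ["users_header.csv", "users.csv"])
relationships_flag = "--relationships=" + "IS_FRIEND=" + ",".join(
    ["friendships_header.csv", "friendships.csv"])
argv = ["mg_import_csv", nodes_flag, relationships_flag]
# nodes_flag == '--nodes=Person:Employee=users_header.csv,users.csv'
```

Here the header lives in a separate `*_header.csv` file, and both files in each flag are parsed as one big CSV file, as described above.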
## CSV Parser Logic
The CSV parser uses the same logic as the standard Python CSV parser. The data
is parsed in the same way as the following snippet:
```python
import csv
for row in csv.reader(stream, strict=True):
# process 'row'
```
Python uses 'excel' as the default dialect when parsing CSV files and the
default settings for the CSV parser are:
- delimiter: `','`
- doublequote: `True`
- escapechar: `None`
- lineterminator: `'\r\n'`
- quotechar: `'"'`
- skipinitialspace: `False`
The above snippet can be expanded to:
```python
import csv
for row in csv.reader(stream, delimiter=',', doublequote=True,
escapechar=None, lineterminator='\r\n',
quotechar='"', skipinitialspace=False,
strict=True):
# process 'row'
```
For more information about the meaning of the above values, see:
https://docs.python.org/3/library/csv.html#csv.Dialect
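One practical consequence of `strict=True` is that malformed quoting aborts parsing instead of being silently repaired; for example, a stray character after a closing quote raises `csv.Error`:

```python
import csv
import io

# '"ab"c' has a character after the closing quote; with strict=True the
# parser raises csv.Error instead of guessing what was meant.
try:
    list(csv.reader(io.StringIO('"ab"c,1\r\n'), strict=True))
    failed = False
except csv.Error:
    failed = True
# failed is True
```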


@ -1292,6 +1292,12 @@ struct mgp_proc;
/// Describes a Memgraph magic function.
struct mgp_func;
/// All available log levels that can be used in mgp_log function
MGP_ENUM_CLASS mgp_log_level{
MGP_LOG_LEVEL_TRACE, MGP_LOG_LEVEL_DEBUG, MGP_LOG_LEVEL_INFO,
MGP_LOG_LEVEL_WARN, MGP_LOG_LEVEL_ERROR, MGP_LOG_LEVEL_CRITICAL,
};
/// Entry-point for a query module read procedure, invoked through openCypher.
///
/// Passed in arguments will not live longer than the callback's execution.
@ -1386,6 +1392,9 @@ enum mgp_error mgp_proc_add_result(struct mgp_proc *proc, const char *name, stru
/// Return mgp_error::MGP_ERROR_INVALID_ARGUMENT if `name` is not a valid result name.
/// RETURN mgp_error::MGP_ERROR_LOGIC_ERROR if a result field with the same name was already added.
enum mgp_error mgp_proc_add_deprecated_result(struct mgp_proc *proc, const char *name, struct mgp_type *type);
/// Log a message on a certain level.
enum mgp_error mgp_log(enum mgp_log_level log_level, const char *output);
///@}
/// @name Execution
@ -1512,6 +1521,10 @@ enum mgp_error mgp_module_add_transformation(struct mgp_module *module, const ch
///
///@{
/// State of the database that is exposed to magic functions. Currently it is unused, but it enables extending the
/// functionalities of magic functions in future without breaking the API.
struct mgp_func_context;
/// Add a required argument to a function.
///
/// The order of the added arguments corresponds to the signature of the openCypher function.


@ -2004,4 +2004,82 @@ def _wrap_exceptions():
setattr(module, name, wrap_function(obj))
class Logger:
"""Represents a Logger through which it is possible
to send logs via API to the graph database.
The best way to use this Logger is to have one per query module."""
__slots__ = ("_logger",)
def __init__(self):
self._logger = _mgp._LOGGER
def info(self, out: str) -> None:
"""
Log a message on the INFO level.
Args:
out: String message to be logged.
Examples:
```logger.info("Hello from query module.")```
"""
self._logger.info(out)
def warning(self, out: str) -> None:
"""
Log a message on the WARNING level.
Args:
out: String message to be logged.
Examples:
```logger.warning("Hello from query module.")```
"""
self._logger.warning(out)
def critical(self, out: str) -> None:
"""
Log a message on the CRITICAL level.
Args:
out: String message to be logged.
Examples:
```logger.critical("Hello from query module.")```
"""
self._logger.critical(out)
def error(self, out: str) -> None:
"""
Log a message on the ERROR level.
Args:
out: String message to be logged.
Examples:
```logger.error("Hello from query module.")```
"""
self._logger.error(out)
def trace(self, out: str) -> None:
"""
Log a message on the TRACE level.
Args:
out: String message to be logged.
Examples:
```logger.trace("Hello from query module.")```
"""
self._logger.trace(out)
def debug(self, out: str) -> None:
"""
Log a message on the DEBUG level.
Args:
out: String message to be logged.
Examples:
```logger.debug("Hello from query module.")```
"""
self._logger.debug(out)
_wrap_exceptions()


@ -7,6 +7,7 @@ set(communication_src_files
websocket/listener.cpp
websocket/session.cpp
bolt/v1/value.cpp
bolt/client.cpp
buffer.cpp
client.cpp
context.cpp


@ -0,0 +1,262 @@
// Copyright 2022 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include "communication/bolt/client.hpp"
#include "communication/bolt/v1/codes.hpp"
#include "communication/bolt/v1/value.hpp"
#include "utils/logging.hpp"
namespace {
constexpr uint8_t kBoltV43Version[4] = {0x00, 0x00, 0x03, 0x04};
constexpr uint8_t kEmptyBoltVersion[4] = {0x00, 0x00, 0x00, 0x00};
} // namespace
namespace memgraph::communication::bolt {
Client::Client(communication::ClientContext &context) : client_{&context} {}
void Client::Connect(const io::network::Endpoint &endpoint, const std::string &username, const std::string &password,
const std::string &client_name) {
if (!client_.Connect(endpoint)) {
throw ClientFatalException("Couldn't connect to {}!", endpoint);
}
if (!client_.Write(kPreamble, sizeof(kPreamble), true)) {
spdlog::error("Couldn't send preamble!");
throw ServerCommunicationException();
}
if (!client_.Write(kBoltV43Version, sizeof(kBoltV43Version), true)) {
spdlog::error("Couldn't send protocol version!");
throw ServerCommunicationException();
}
for (int i = 0; i < 3; ++i) {
if (!client_.Write(kEmptyBoltVersion, sizeof(kEmptyBoltVersion), i != 2)) {
spdlog::error("Couldn't send protocol version!");
throw ServerCommunicationException();
}
}
if (!client_.Read(sizeof(kBoltV43Version))) {
spdlog::error("Couldn't get negotiated protocol version!");
throw ServerCommunicationException();
}
if (memcmp(kBoltV43Version, client_.GetData(), sizeof(kBoltV43Version)) != 0) {
spdlog::error("Server negotiated unsupported protocol version!");
throw ClientFatalException("The server negotiated an unsupported protocol version!");
}
client_.ShiftData(sizeof(kBoltV43Version));
if (!encoder_.MessageInit({{"user_agent", client_name},
{"scheme", "basic"},
{"principal", username},
{"credentials", password},
{"routing", {}}})) {
spdlog::error("Couldn't send init message!");
throw ServerCommunicationException();
}
Signature signature{};
Value metadata;
if (!ReadMessage(signature, metadata)) {
spdlog::error("Couldn't read init message response!");
throw ServerCommunicationException();
}
if (signature != Signature::Success) {
spdlog::error("Handshake failed!");
throw ClientFatalException("Handshake with the server failed!");
}
spdlog::debug("Metadata of init message response: {}", metadata);
}
QueryData Client::Execute(const std::string &query, const std::map<std::string, Value> &parameters) {
if (!client_.IsConnected()) {
throw ClientFatalException("You must first connect to the server before using the client!");
}
spdlog::debug("Sending run message with statement: '{}'; parameters: {}", query, parameters);
// It is critical for performance to send the pull message right after the run message; otherwise,
// performance degrades by multiple orders of magnitude.
encoder_.MessageRun(query, parameters, {});
encoder_.MessagePull({});
spdlog::debug("Reading run message response");
Signature signature{};
Value fields;
if (!ReadMessage(signature, fields)) {
throw ServerCommunicationException();
}
if (fields.type() != Value::Type::Map) {
throw ServerMalformedDataException();
}
if (signature == Signature::Failure) {
HandleFailure<ClientQueryException>(fields.ValueMap());
}
if (signature != Signature::Success) {
throw ServerMalformedDataException();
}
spdlog::debug("Reading pull_all message response");
Marker marker{};
Value metadata;
std::vector<std::vector<Value>> records;
while (true) {
if (!GetMessage()) {
throw ServerCommunicationException();
}
if (!decoder_.ReadMessageHeader(&signature, &marker)) {
throw ServerCommunicationException();
}
if (signature == Signature::Record) {
Value record;
if (!decoder_.ReadValue(&record, Value::Type::List)) {
throw ServerCommunicationException();
}
records.emplace_back(std::move(record.ValueList()));
} else if (signature == Signature::Success) {
if (!decoder_.ReadValue(&metadata)) {
throw ServerCommunicationException();
}
break;
} else if (signature == Signature::Failure) {
Value data;
if (!decoder_.ReadValue(&data)) {
throw ServerCommunicationException();
}
HandleFailure<ClientQueryException>(data.ValueMap());
} else {
throw ServerMalformedDataException();
}
}
if (metadata.type() != Value::Type::Map) {
throw ServerMalformedDataException();
}
QueryData ret{{}, std::move(records), std::move(metadata.ValueMap())};
auto &header = fields.ValueMap();
if (header.find("fields") == header.end()) {
throw ServerMalformedDataException();
}
if (header["fields"].type() != Value::Type::List) {
throw ServerMalformedDataException();
}
auto &field_vector = header["fields"].ValueList();
for (auto &field_item : field_vector) {
if (field_item.type() != Value::Type::String) {
throw ServerMalformedDataException();
}
ret.fields.emplace_back(std::move(field_item.ValueString()));
}
return ret;
}
void Client::Reset() {
if (!client_.IsConnected()) {
throw ClientFatalException("You must first connect to the server before using the client!");
}
spdlog::debug("Sending reset message");
encoder_.MessageReset();
Signature signature{};
Value fields;
// In Execute the pull message is sent right after the run message without reading the answer for the run message.
// That means some of the messages sent might get ignored.
while (true) {
if (!ReadMessage(signature, fields)) {
throw ServerCommunicationException();
}
if (signature == Signature::Success) {
break;
}
if (signature != Signature::Ignored) {
throw ServerMalformedDataException();
}
}
}
std::optional<std::map<std::string, Value>> Client::Route(const std::map<std::string, Value> &routing,
const std::vector<Value> &bookmarks,
const std::optional<std::string> &db) {
if (!client_.IsConnected()) {
throw ClientFatalException("You must first connect to the server before using the client!");
}
spdlog::debug("Sending route message with routing: {}; bookmarks: {}; db: {}", routing, bookmarks,
db.has_value() ? *db : Value());
encoder_.MessageRoute(routing, bookmarks, db);
spdlog::debug("Reading route message response");
Signature signature{};
Value fields;
if (!ReadMessage(signature, fields)) {
throw ServerCommunicationException();
}
if (signature == Signature::Ignored) {
return std::nullopt;
}
if (signature == Signature::Failure) {
HandleFailure(fields.ValueMap());
}
if (signature != Signature::Success) {
throw ServerMalformedDataException{};
}
return fields.ValueMap();
}
void Client::Close() { client_.Close(); };
bool Client::GetMessage() {
client_.ClearData();
while (true) {
if (!client_.Read(kChunkHeaderSize)) return false;
size_t chunk_size = client_.GetData()[0];
chunk_size <<= 8U;
chunk_size += client_.GetData()[1];
if (chunk_size == 0) return true;
if (!client_.Read(chunk_size)) return false;
if (decoder_buffer_.GetChunk() != ChunkState::Whole) return false;
client_.ClearData();
}
return true;
}
bool Client::ReadMessage(Signature &signature, Value &ret) {
Marker marker{};
if (!GetMessage()) return false;
if (!decoder_.ReadMessageHeader(&signature, &marker)) return false;
return ReadMessageData(marker, ret);
}
bool Client::ReadMessageData(Marker marker, Value &ret) {
if (marker == Marker::TinyStruct) {
ret = Value();
return true;
}
if (marker == Marker::TinyStruct1) {
return decoder_.ReadValue(&ret);
}
return false;
}
} // namespace memgraph::communication::bolt


@ -11,6 +11,12 @@
#pragma once
#include <map>
#include <optional>
#include <string>
#include <vector>
#include "communication/bolt/v1/codes.hpp"
#include "communication/bolt/v1/decoder/chunked_decoder_buffer.hpp"
#include "communication/bolt/v1/decoder/decoder.hpp"
#include "communication/bolt/v1/encoder/chunked_encoder_buffer.hpp"
@ -19,22 +25,17 @@
#include "communication/context.hpp"
#include "io/network/endpoint.hpp"
#include "utils/exceptions.hpp"
#include "utils/logging.hpp"
namespace memgraph::communication::bolt {
/// This exception is thrown whenever an error occurs during query execution
/// that isn't fatal (eg. mistyped query or some transient error occurred).
/// It should be handled by everyone who uses the client.
class ClientQueryException : public utils::BasicException {
class FailureResponseException : public utils::BasicException {
public:
using utils::BasicException::BasicException;
FailureResponseException() : utils::BasicException{"Couldn't execute query!"} {}
ClientQueryException() : utils::BasicException("Couldn't execute query!") {}
explicit FailureResponseException(const std::string &message) : utils::BasicException{message} {}
template <class... Args>
ClientQueryException(const std::string &code, Args &&...args)
: utils::BasicException(std::forward<Args>(args)...), code_(code) {}
FailureResponseException(const std::string &code, const std::string &message)
: utils::BasicException{message}, code_{code} {}
const std::string &code() const { return code_; }
@ -42,6 +43,14 @@ class ClientQueryException : public utils::BasicException {
std::string code_;
};
/// This exception is thrown whenever an error occurs during query execution
/// that isn't fatal (eg. mistyped query or some transient error occurred).
/// It should be handled by everyone who uses the client.
class ClientQueryException : public FailureResponseException {
public:
using FailureResponseException::FailureResponseException;
};
/// This exception is thrown whenever a fatal error occurs during query
/// execution and/or connecting to the server.
/// It should be handled by everyone who uses the client.
@ -76,12 +85,13 @@ struct QueryData {
/// server. It supports both SSL and plaintext connections.
class Client final {
public:
explicit Client(communication::ClientContext *context) : client_(context) {}
explicit Client(communication::ClientContext &context);
Client(const Client &) = delete;
Client(Client &&) = delete;
Client &operator=(const Client &) = delete;
Client &operator=(Client &&) = delete;
~Client() = default;
/// Method used to connect to the server. Before executing queries this method
/// should be called to set-up the connection to the server. After the
@ -89,50 +99,7 @@ class Client final {
/// established connection.
/// @throws ClientFatalException when we couldn't connect to the server
void Connect(const io::network::Endpoint &endpoint, const std::string &username, const std::string &password,
const std::string &client_name = "memgraph-bolt") {
if (!client_.Connect(endpoint)) {
throw ClientFatalException("Couldn't connect to {}!", endpoint);
}
if (!client_.Write(kPreamble, sizeof(kPreamble), true)) {
SPDLOG_ERROR("Couldn't send preamble!");
throw ServerCommunicationException();
}
for (int i = 0; i < 4; ++i) {
if (!client_.Write(kProtocol, sizeof(kProtocol), i != 3)) {
SPDLOG_ERROR("Couldn't send protocol version!");
throw ServerCommunicationException();
}
}
if (!client_.Read(sizeof(kProtocol))) {
SPDLOG_ERROR("Couldn't get negotiated protocol version!");
throw ServerCommunicationException();
}
if (memcmp(kProtocol, client_.GetData(), sizeof(kProtocol)) != 0) {
SPDLOG_ERROR("Server negotiated unsupported protocol version!");
throw ClientFatalException("The server negotiated an usupported protocol version!");
}
client_.ShiftData(sizeof(kProtocol));
if (!encoder_.MessageInit(client_name, {{"scheme", "basic"}, {"principal", username}, {"credentials", password}})) {
SPDLOG_ERROR("Couldn't send init message!");
throw ServerCommunicationException();
}
Signature signature;
Value metadata;
if (!ReadMessage(&signature, &metadata)) {
SPDLOG_ERROR("Couldn't read init message response!");
throw ServerCommunicationException();
}
if (signature != Signature::Success) {
SPDLOG_ERROR("Handshake failed!");
throw ClientFatalException("Handshake with the server failed!");
}
SPDLOG_INFO("Metadata of init message response: {}", metadata);
}
const std::string &client_name = "memgraph-bolt");
/// Function used to execute queries against the server. Before you can
/// execute queries you must connect the client to the server.
@ -140,168 +107,41 @@ class Client final {
/// executing the query (eg. mistyped query,
/// etc.)
/// @throws ClientFatalException when we couldn't communicate with the server
QueryData Execute(const std::string &query, const std::map<std::string, Value> &parameters) {
if (!client_.IsConnected()) {
throw ClientFatalException("You must first connect to the server before using the client!");
}
SPDLOG_INFO("Sending run message with statement: '{}'; parameters: {}", query, parameters);
encoder_.MessageRun(query, parameters);
encoder_.MessagePullAll();
SPDLOG_INFO("Reading run message response");
Signature signature;
Value fields;
if (!ReadMessage(&signature, &fields)) {
throw ServerCommunicationException();
}
if (fields.type() != Value::Type::Map) {
throw ServerMalformedDataException();
}
if (signature == Signature::Failure) {
HandleFailure();
auto &tmp = fields.ValueMap();
auto it = tmp.find("message");
if (it != tmp.end()) {
auto it_code = tmp.find("code");
if (it_code != tmp.end()) {
throw ClientQueryException(it_code->second.ValueString(), it->second.ValueString());
} else {
throw ClientQueryException("", it->second.ValueString());
}
}
throw ClientQueryException();
} else if (signature != Signature::Success) {
throw ServerMalformedDataException();
}
SPDLOG_INFO("Reading pull_all message response");
Marker marker;
Value metadata;
std::vector<std::vector<Value>> records;
while (true) {
if (!GetMessage()) {
throw ServerCommunicationException();
}
if (!decoder_.ReadMessageHeader(&signature, &marker)) {
throw ServerCommunicationException();
}
if (signature == Signature::Record) {
Value record;
if (!decoder_.ReadValue(&record, Value::Type::List)) {
throw ServerCommunicationException();
}
records.emplace_back(std::move(record.ValueList()));
} else if (signature == Signature::Success) {
if (!decoder_.ReadValue(&metadata)) {
throw ServerCommunicationException();
}
break;
} else if (signature == Signature::Failure) {
Value data;
if (!decoder_.ReadValue(&data)) {
throw ServerCommunicationException();
}
HandleFailure();
auto &tmp = data.ValueMap();
auto it = tmp.find("message");
if (it != tmp.end()) {
auto it_code = tmp.find("code");
if (it_code != tmp.end()) {
throw ClientQueryException(it_code->second.ValueString(), it->second.ValueString());
} else {
throw ClientQueryException("", it->second.ValueString());
}
}
throw ClientQueryException();
} else {
throw ServerMalformedDataException();
}
}
if (metadata.type() != Value::Type::Map) {
throw ServerMalformedDataException();
}
QueryData ret{{}, std::move(records), std::move(metadata.ValueMap())};
auto &header = fields.ValueMap();
if (header.find("fields") == header.end()) {
throw ServerMalformedDataException();
}
if (header["fields"].type() != Value::Type::List) {
throw ServerMalformedDataException();
}
auto &field_vector = header["fields"].ValueList();
for (auto &field_item : field_vector) {
if (field_item.type() != Value::Type::String) {
throw ServerMalformedDataException();
}
ret.fields.emplace_back(std::move(field_item.ValueString()));
}
return ret;
}
QueryData Execute(const std::string &query, const std::map<std::string, Value> &parameters);
/// Close the active client connection.
void Close() { client_.Close(); };
void Close();
/// Can be used to reset the active client connection. Reset is automatically sent after receiving a failure message
/// from the server, which results in throwing a FailureResponseException or any exception derived from it.
void Reset();
/// Can be used to send a route message.
std::optional<std::map<std::string, Value>> Route(const std::map<std::string, Value> &routing,
const std::vector<Value> &bookmarks,
const std::optional<std::string> &db);
private:
bool GetMessage() {
client_.ClearData();
while (true) {
if (!client_.Read(kChunkHeaderSize)) return false;
using ClientEncoder = ClientEncoder<ChunkedEncoderBuffer<communication::ClientOutputStream>>;
size_t chunk_size = client_.GetData()[0];
chunk_size <<= 8;
chunk_size += client_.GetData()[1];
if (chunk_size == 0) return true;
if (!client_.Read(chunk_size)) return false;
if (decoder_buffer_.GetChunk() != ChunkState::Whole) return false;
client_.ClearData();
}
return true;
}
bool ReadMessage(Signature *signature, Value *ret) {
Marker marker;
if (!GetMessage()) return false;
if (!decoder_.ReadMessageHeader(signature, &marker)) return false;
return ReadMessageData(marker, ret);
}
bool ReadMessageData(Marker marker, Value *ret) {
if (marker == Marker::TinyStruct) {
*ret = Value();
return true;
} else if (marker == Marker::TinyStruct1) {
return decoder_.ReadValue(ret);
}
return false;
}
void HandleFailure() {
if (!encoder_.MessageAckFailure()) {
throw ServerCommunicationException();
}
while (true) {
Signature signature;
Value data;
if (!ReadMessage(&signature, &data)) {
throw ServerCommunicationException();
}
if (signature == Signature::Success) {
break;
} else if (signature != Signature::Ignored) {
throw ServerMalformedDataException();
template <typename TException = FailureResponseException>
[[noreturn]] void HandleFailure(const std::map<std::string, Value> &response_map) {
Reset();
auto it = response_map.find("message");
if (it != response_map.end()) {
auto it_code = response_map.find("code");
if (it_code != response_map.end()) {
throw TException(it_code->second.ValueString(), it->second.ValueString());
}
throw TException("", it->second.ValueString());
}
throw TException();
}
bool GetMessage();
bool ReadMessage(Signature &signature, Value &ret);
bool ReadMessageData(Marker marker, Value &ret);
// client
communication::Client client_;
communication::ClientInputStream input_stream_{client_};
@ -313,6 +153,6 @@ class Client final {
// encoder objects
ChunkedEncoderBuffer<communication::ClientOutputStream> encoder_buffer_{output_stream_};
ClientEncoder<ChunkedEncoderBuffer<communication::ClientOutputStream>> encoder_{encoder_buffer_};
ClientEncoder encoder_{encoder_buffer_};
};
} // namespace memgraph::communication::bolt
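The chunk framing that `GetMessage` above parses is a 2-byte big-endian length prefix followed by that many payload bytes, with a zero-length chunk terminating the message. A minimal standalone sketch of the same loop (`DecodeChunks` is a hypothetical helper, not Memgraph code):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Decode Bolt chunk framing: each chunk is a 2-byte big-endian size header
// followed by that many payload bytes; a zero-size header ends the message.
std::vector<uint8_t> DecodeChunks(const std::vector<uint8_t> &wire) {
  std::vector<uint8_t> payload;
  size_t pos = 0;
  while (pos + 2 <= wire.size()) {
    const size_t chunk_size = (static_cast<size_t>(wire[pos]) << 8) | wire[pos + 1];
    pos += 2;
    if (chunk_size == 0) break;  // end-of-message marker
    if (pos + chunk_size > wire.size()) break;  // truncated input
    payload.insert(payload.end(), wire.begin() + pos, wire.begin() + pos + chunk_size);
    pos += chunk_size;
  }
  return payload;
}
```

For example, the wire bytes `00 03 'a' 'b' 'c' 00 00` decode to the three-byte payload `abc`.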


@ -16,7 +16,6 @@
namespace memgraph::communication::bolt {
inline constexpr uint8_t kPreamble[4] = {0x60, 0x60, 0xB0, 0x17};
inline constexpr uint8_t kProtocol[4] = {0x00, 0x00, 0x00, 0x01};
enum class Signature : uint8_t {
Noop = 0x00,


@ -11,6 +11,11 @@
#pragma once
#include <map>
#include <optional>
#include <string>
#include <vector>
#include "communication/bolt/v1/codes.hpp"
#include "communication/bolt/v1/encoder/base_encoder.hpp"
@ -30,6 +35,7 @@ class ClientEncoder : private BaseEncoder<Buffer> {
using BaseEncoder<Buffer>::WriteList;
using BaseEncoder<Buffer>::WriteMap;
using BaseEncoder<Buffer>::WriteString;
using BaseEncoder<Buffer>::WriteNull;
using BaseEncoder<Buffer>::buffer_;
public:
@ -38,10 +44,9 @@ class ClientEncoder : private BaseEncoder<Buffer> {
/**
* Writes an Init message.
*
* From the Bolt v1 documentation:
* InitMessage (signature=0x01) {
* String clientName
* Map<String,Value> authToken
* From the Bolt v4.3 documentation:
* HelloMessage (signature=0x01) {
* Map<String,Value> extra
* }
*
* @param client_name the name of the connected client
@ -49,11 +54,10 @@ class ClientEncoder : private BaseEncoder<Buffer> {
* @returns true if the data was successfully sent to the client
* when flushing, false otherwise
*/
bool MessageInit(const std::string client_name, const std::map<std::string, Value> &auth_token) {
WriteRAW(utils::UnderlyingCast(Marker::TinyStruct2));
bool MessageInit(const std::map<std::string, Value> &extra) {
WriteRAW(utils::UnderlyingCast(Marker::TinyStruct1));
WriteRAW(utils::UnderlyingCast(Signature::Init));
WriteString(client_name);
WriteMap(auth_token);
WriteMap(extra);
// Try to flush all remaining data in the buffer, but tell it that we will
// send more data (the end of message chunk).
if (!buffer_.Flush(true)) return false;
@ -64,10 +68,11 @@ class ClientEncoder : private BaseEncoder<Buffer> {
/**
* Writes a Run message.
*
* From the Bolt v1 documentation:
* From the Bolt v4.3 documentation:
* RunMessage (signature=0x10) {
* String statement
* Map<String,Value> parameters
* String statement
* Map<String,Value> parameters
* Map<String,Value> extra
* }
*
* @param statement the statement that should be executed
@ -75,11 +80,13 @@ class ClientEncoder : private BaseEncoder<Buffer> {
* @returns true if the data was successfully sent to the client
* when flushing, false otherwise
*/
bool MessageRun(const std::string &statement, const std::map<std::string, Value> &parameters, bool have_more = true) {
WriteRAW(utils::UnderlyingCast(Marker::TinyStruct2));
bool MessageRun(const std::string &statement, const std::map<std::string, Value> &parameters,
const std::map<std::string, Value> &extra, bool have_more = true) {
WriteRAW(utils::UnderlyingCast(Marker::TinyStruct3));
WriteRAW(utils::UnderlyingCast(Signature::Run));
WriteString(statement);
WriteMap(parameters);
WriteMap(extra);
// Try to flush all remaining data in the buffer, but tell it that we will
// send more data (the end of message chunk).
if (!buffer_.Flush(true)) return false;
@ -90,18 +97,20 @@ class ClientEncoder : private BaseEncoder<Buffer> {
}
/**
* Writes a DiscardAll message.
* Writes a Discard message.
*
* From the Bolt v1 documentation:
* From the Bolt v4.3 documentation:
* DiscardMessage (signature=0x2F) {
* Map<String,Value> extra
* }
*
* @returns true if the data was successfully sent to the client
* when flushing, false otherwise
*/
bool MessageDiscardAll() {
WriteRAW(utils::UnderlyingCast(Marker::TinyStruct));
bool MessageDiscard(const std::map<std::string, Value> &extra) {
WriteRAW(utils::UnderlyingCast(Marker::TinyStruct1));
WriteRAW(utils::UnderlyingCast(Signature::Discard));
WriteMap(extra);
// Try to flush all remaining data in the buffer, but tell it that we will
// send more data (the end of message chunk).
if (!buffer_.Flush(true)) return false;
@ -112,36 +121,18 @@ class ClientEncoder : private BaseEncoder<Buffer> {
/**
* Writes a PullAll message.
*
* From the Bolt v1 documentation:
* PullAllMessage (signature=0x3F) {
* From the Bolt v4.3 documentation:
* PullMessage (signature=0x3F) {
* Map<String,Value> extra
* }
*
* @returns true if the data was successfully sent to the client
* when flushing, false otherwise
*/
bool MessagePullAll() {
WriteRAW(utils::UnderlyingCast(Marker::TinyStruct));
bool MessagePull(const std::map<std::string, Value> &extra) {
WriteRAW(utils::UnderlyingCast(Marker::TinyStruct1));
WriteRAW(utils::UnderlyingCast(Signature::Pull));
// Try to flush all remaining data in the buffer, but tell it that we will
// send more data (the end of message chunk).
if (!buffer_.Flush(true)) return false;
// Flush an empty chunk to indicate that the message is done.
return buffer_.Flush();
}
/**
* Writes a AckFailure message.
*
* From the Bolt v1 documentation:
* AckFailureMessage (signature=0x0E) {
* }
*
* @returns true if the data was successfully sent to the client
* when flushing, false otherwise
*/
bool MessageAckFailure() {
WriteRAW(utils::UnderlyingCast(Marker::TinyStruct));
WriteRAW(utils::UnderlyingCast(Signature::AckFailure));
WriteMap(extra);
// Try to flush all remaining data in the buffer, but tell it that we will
// send more data (the end of message chunk).
if (!buffer_.Flush(true)) return false;
@ -152,7 +143,7 @@ class ClientEncoder : private BaseEncoder<Buffer> {
/**
* Writes a Reset message.
*
* From the Bolt v1 documentation:
* From the Bolt v4.3 documentation:
* ResetMessage (signature=0x0F) {
* }
*
@ -168,5 +159,36 @@ class ClientEncoder : private BaseEncoder<Buffer> {
// Flush an empty chunk to indicate that the message is done.
return buffer_.Flush();
}
/**
* Writes a Route message.
*
* From the Bolt v4.3 documentation:
* RouteMessage (signature=0x66) {
* Map<String,Value> routing
* List<String> bookmarks
* String db
* }
*
* @returns true if the data was successfully sent to the client
* when flushing, false otherwise
*/
bool MessageRoute(const std::map<std::string, Value> &routing, const std::vector<Value> &bookmarks,
const std::optional<std::string> &db) {
WriteRAW(utils::UnderlyingCast(Marker::TinyStruct3));
WriteRAW(utils::UnderlyingCast(Signature::Route));
WriteMap(routing);
WriteList(bookmarks);
if (db.has_value()) {
WriteString(*db);
} else {
WriteNull();
}
// Try to flush all remaining data in the buffer, but tell it that we will
// send more data (the end of message chunk).
if (!buffer_.Flush(true)) return false;
// Flush an empty chunk to indicate that the message is done.
return buffer_.Flush();
}
};
} // namespace memgraph::communication::bolt
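Each `Message*` helper above begins by writing a TinyStruct marker whose low nibble carries the field count, then the message signature byte. A standalone sketch of those two header bytes (marker values per the Bolt PackStream spec; `MessageHeader` is a hypothetical helper, not Memgraph code):

```cpp
#include <cstdint>
#include <vector>

// Bolt structure markers: 0xB0 is an empty TinyStruct; adding the field
// count (0..15) in the low nibble yields TinyStruct1 (0xB1), TinyStruct2
// (0xB2), and so on.
std::vector<uint8_t> MessageHeader(uint8_t field_count, uint8_t signature) {
  return {static_cast<uint8_t>(0xB0U | (field_count & 0x0FU)), signature};
}
```

For instance, a Bolt v4.3 RUN message (signature 0x10, three fields) starts with the bytes `0xB3 0x10`.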


@ -117,29 +117,6 @@ class Encoder : private BaseEncoder<Buffer> {
return buffer_.Flush();
}
/**
* Sends an Ignored message.
*
* From the bolt v1 documentation:
* IgnoredMessage (signature=0x7E) {
* Map<String,Value> metadata
* }
*
* @param metadata the metadata map object that should be sent
* @returns true if the data was successfully sent to the client,
* false otherwise
*/
bool MessageIgnored(const std::map<std::string, Value> &metadata) {
WriteRAW(utils::UnderlyingCast(Marker::TinyStruct1));
WriteRAW(utils::UnderlyingCast(Signature::Ignored));
WriteMap(metadata);
// Try to flush all remaining data in the buffer, but tell it that we will
// send more data (the end of message chunk).
if (!buffer_.Flush(true)) return false;
// Flush an empty chunk to indicate that the message is done.
return buffer_.Flush();
}
/**
* Sends an Ignored message.
*


@ -15,6 +15,7 @@
#include "communication/bolt/v1/codes.hpp"
#include "communication/bolt/v1/state.hpp"
#include "communication/bolt/v1/states/handlers.hpp"
#include "communication/bolt/v1/value.hpp"
#include "utils/cast.hpp"
#include "utils/likely.hpp"
@ -30,8 +31,8 @@ namespace memgraph::communication::bolt {
*/
template <typename TSession>
State StateErrorRun(TSession &session, State state) {
Marker marker;
Signature signature;
Marker marker{};
Signature signature{};
if (!session.decoder_.ReadMessageHeader(&signature, &marker)) {
spdlog::trace("Missing header data!");
return State::Close;
@ -45,54 +46,49 @@ State StateErrorRun(TSession &session, State state) {
// Clear the data buffer if it has any leftover data.
session.encoder_buffer_.Clear();
if ((session.version_.major == 1 && signature == Signature::AckFailure) || signature == Signature::Reset) {
if (signature == Signature::AckFailure) {
spdlog::trace("AckFailure received");
} else {
spdlog::trace("Reset received");
}
if (session.version_.major == 1 && signature == Signature::AckFailure) {
spdlog::trace("AckFailure received");
if (!session.encoder_.MessageSuccess()) {
spdlog::trace("Couldn't send success message!");
return State::Close;
}
if (signature == Signature::Reset) {
session.Abort();
return State::Idle;
}
// We got AckFailure get back to right state.
MG_ASSERT(state == State::Error, "Shouldn't happen");
return State::Idle;
} else {
uint8_t value = utils::UnderlyingCast(marker);
// All bolt client messages have fewer than 15 parameters, so if we receive
// anything other than a TinyStruct it's an error.
if ((value & 0xF0) != utils::UnderlyingCast(Marker::TinyStruct)) {
spdlog::trace("Expected TinyStruct marker, but received 0x{:02X}!", value);
return State::Close;
}
// We need to clean up all parameters from this command.
value &= 0x0F; // The length is stored in the lower nibble.
Value dv;
for (int i = 0; i < value; ++i) {
if (!session.decoder_.ReadValue(&dv)) {
spdlog::trace("Couldn't clean up parameter {} / {}!", i, value);
return State::Close;
}
}
// Ignore this message.
if (!session.encoder_.MessageIgnored()) {
spdlog::trace("Couldn't send ignored message!");
return State::Close;
}
// Cleanup done, command ignored, stay in error state.
return state;
}
if (signature == Signature::Reset) {
spdlog::trace("Reset received");
return HandleReset(session, marker);
}
uint8_t value = utils::UnderlyingCast(marker);
// All bolt client messages have fewer than 15 parameters, so if we receive
// anything other than a TinyStruct it's an error.
if ((value & 0xF0U) != utils::UnderlyingCast(Marker::TinyStruct)) {
spdlog::trace("Expected TinyStruct marker, but received 0x{:02X}!", value);
return State::Close;
}
// We need to clean up all parameters from this command.
value &= 0x0FU; // The length is stored in the lower nibble.
Value dv;
for (int i = 0; i < value; ++i) {
if (!session.decoder_.ReadValue(&dv)) {
spdlog::trace("Couldn't clean up parameter {} / {}!", i, value);
return State::Close;
}
}
// Ignore this message.
if (!session.encoder_.MessageIgnored()) {
spdlog::trace("Couldn't send ignored message!");
return State::Close;
}
// Cleanup done, command ignored, stay in error state.
return state;
}
} // namespace memgraph::communication::bolt
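The nibble arithmetic used in the error state above, isolated as a sketch (predicate names are hypothetical): a received marker is valid only if its high nibble matches the TinyStruct base, and the number of parameters to drain sits in its low nibble.

```cpp
#include <cstdint>

// A TinyStruct marker has high nibble 0xB; the number of fields to read
// (and, in the error state, to discard) is stored in the low nibble.
constexpr bool IsTinyStruct(uint8_t marker) { return (marker & 0xF0U) == 0xB0U; }
constexpr uint8_t FieldCount(uint8_t marker) { return marker & 0x0FU; }
```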


@ -74,7 +74,7 @@ State RunHandlerV4(Signature signature, TSession &session, State state, Marker m
}
case Signature::Route: {
if constexpr (bolt_minor >= 3) {
if (signature == Signature::Route) return HandleRoute<TSession>(session);
if (signature == Signature::Route) return HandleRoute<TSession>(session, marker);
} else {
spdlog::trace("Supported only in bolt v4.3");
return State::Close;


@ -18,6 +18,7 @@
#include "communication/bolt/v1/codes.hpp"
#include "communication/bolt/v1/constants.hpp"
#include "communication/bolt/v1/exceptions.hpp"
#include "communication/bolt/v1/state.hpp"
#include "communication/bolt/v1/value.hpp"
#include "communication/exceptions.hpp"
@ -136,7 +137,7 @@ template <bool is_pull, typename TSession>
State HandlePullDiscardV1(TSession &session, const State state, const Marker marker) {
const auto expected_marker = Marker::TinyStruct;
if (marker != expected_marker) {
spdlog::trace("Expected {} marker, but received 0x{:02X}!", "TinyStruct", utils::UnderlyingCast(marker));
spdlog::trace("Expected TinyStruct marker, but received 0x{:02X}!", utils::UnderlyingCast(marker));
return State::Close;
}
@ -157,7 +158,7 @@ template <bool is_pull, typename TSession>
State HandlePullDiscardV4(TSession &session, const State state, const Marker marker) {
const auto expected_marker = Marker::TinyStruct1;
if (marker != expected_marker) {
spdlog::trace("Expected {} marker, but received 0x{:02X}!", "TinyStruct1", utils::UnderlyingCast(marker));
spdlog::trace("Expected TinyStruct1 marker, but received 0x{:02X}!", utils::UnderlyingCast(marker));
return State::Close;
}
@ -216,7 +217,8 @@ State HandleRunV1(TSession &session, const State state, const Marker marker) {
session.version_.major == 1 ? "TinyStruct2" : "TinyStruct3", utils::UnderlyingCast(marker));
return State::Close;
}
Value query, params;
Value query;
Value params;
if (!session.decoder_.ReadValue(&query, Value::Type::String)) {
spdlog::trace("Couldn't read query string!");
return State::Close;
@ -234,10 +236,12 @@ template <typename TSession>
State HandleRunV4(TSession &session, const State state, const Marker marker) {
const auto expected_marker = Marker::TinyStruct3;
if (marker != expected_marker) {
spdlog::trace("Expected {} marker, but received 0x{:02X}!", "TinyStruct3", utils::UnderlyingCast(marker));
spdlog::trace("Expected TinyStruct3 marker, but received 0x{:02X}!", utils::UnderlyingCast(marker));
return State::Close;
}
Value query, params, extra;
Value query;
Value params;
Value extra;
if (!session.decoder_.ReadValue(&query, Value::Type::String)) {
spdlog::trace("Couldn't read query string!");
return State::Close;
@ -292,9 +296,6 @@ State HandleReset(TSession &session, const Marker marker) {
return State::Close;
}
// Clear all pending data and send a success message.
session.encoder_buffer_.Clear();
if (!session.encoder_.MessageSuccess()) {
spdlog::trace("Couldn't send success message!");
return State::Close;
@ -403,12 +404,33 @@ State HandleGoodbye() {
}
template <typename TSession>
State HandleRoute(TSession &session) {
// Route message is not implemented since it is neo4j specific, therefore we
// will receive it an inform user that there is no implementation.
State HandleRoute(TSession &session, const Marker marker) {
// Route message is not implemented since it is Neo4j specific, therefore we will receive it and inform user that
// there is no implementation. Before that, we have to read out the fields from the buffer to leave it in a clean
// state.
if (marker != Marker::TinyStruct3) {
spdlog::trace("Expected TinyStruct3 marker, but received 0x{:02x}!", utils::UnderlyingCast(marker));
return State::Close;
}
Value routing;
if (!session.decoder_.ReadValue(&routing, Value::Type::Map)) {
spdlog::trace("Couldn't read routing field!");
return State::Close;
}
Value bookmarks;
if (!session.decoder_.ReadValue(&bookmarks, Value::Type::List)) {
spdlog::trace("Couldn't read bookmarks field!");
return State::Close;
}
Value db;
if (!session.decoder_.ReadValue(&db)) {
spdlog::trace("Couldn't read db field!");
return State::Close;
}
session.encoder_buffer_.Clear();
bool fail_sent =
session.encoder_.MessageFailure({{"code", 66}, {"message", "Route message not supported in Memgraph!"}});
session.encoder_.MessageFailure({{"code", "66"}, {"message", "Route message is not supported in Memgraph!"}});
if (!fail_sent) {
spdlog::trace("Couldn't send failure message!");
return State::Close;


@ -168,8 +168,6 @@ DEFINE_string(bolt_server_name_for_init, "",
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_string(data_directory, "mg_data", "Path to directory in which to save all permanent data.");
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_HIDDEN_string(log_link_basename, "", "Basename used for symlink creation to the last log file.");
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_uint64(memory_warning_threshold, 1024,
"Memory warning threshold, in MB. If Memgraph detects there is "
"less available RAM it will log a warning. Set to 0 to "
@ -220,11 +218,6 @@ DEFINE_bool(telemetry_enabled, false,
"the database runtime (vertex and edge counts and resource usage) "
"to allow for easier improvement of the product.");
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_bool(storage_restore_replicas_on_startup, true,
"Controls replicas should be restored automatically."); // TODO(42jeremy) this must be removed once T0835
// is implemented.
// Streams flags
// NOLINTNEXTLINE (cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_uint32(
@ -358,7 +351,8 @@ DEFINE_VALIDATED_string(query_modules_directory, "",
});
// Logging flags
DEFINE_bool(also_log_to_stderr, false, "Log messages go to stderr in addition to logfiles");
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_HIDDEN_bool(also_log_to_stderr, false, "Log messages go to stderr in addition to logfiles");
DEFINE_string(log_file, "", "Path to where the log should be stored.");
namespace {
@ -438,9 +432,9 @@ void AddLoggerSink(spdlog::sink_ptr new_sink) {
} // namespace
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_string(license_key, "", "License key for Memgraph Enterprise.");
DEFINE_HIDDEN_string(license_key, "", "License key for Memgraph Enterprise.");
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_string(organization_name, "", "Organization name.");
DEFINE_HIDDEN_string(organization_name, "", "Organization name.");
/// Encapsulates Dbms and Interpreter that are passed through the network server
/// and worker to the session.
@ -528,6 +522,8 @@ class BoltSession final : public memgraph::communication::bolt::Session<memgraph
// Wrap QueryException into ClientError, because we want to allow the
// client to fix their query.
throw memgraph::communication::bolt::ClientError(e.what());
} catch (const memgraph::query::ReplicationException &e) {
throw memgraph::communication::bolt::ClientError(e.what());
}
}
@ -823,7 +819,7 @@ int main(int argc, char **argv) {
.wal_file_size_kibibytes = FLAGS_storage_wal_file_size_kib,
.wal_file_flush_every_n_tx = FLAGS_storage_wal_file_flush_every_n_tx,
.snapshot_on_exit = FLAGS_storage_snapshot_on_exit,
.restore_replicas_on_startup = FLAGS_storage_restore_replicas_on_startup},
.restore_replicas_on_startup = true},
.transaction = {.isolation_level = ParseIsolationLevel()}};
if (FLAGS_storage_snapshot_interval_sec == 0) {
if (FLAGS_storage_wal_enabled) {


@ -12,7 +12,7 @@
#include "query/cypher_query_interpreter.hpp"
// NOLINTNEXTLINE (cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_HIDDEN_bool(query_cost_planner, true, "Use the cost-estimating query planner.");
DEFINE_bool(query_cost_planner, true, "Use the cost-estimating query planner.");
// NOLINTNEXTLINE (cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_VALIDATED_int32(query_plan_cache_ttl, 60, "Time to live for cached query plans, in seconds.",
FLAG_IN_RANGE(0, std::numeric_limits<int32_t>::max()));


@ -324,7 +324,7 @@ class DbAccessor final {
void AdvanceCommand() { accessor_->AdvanceCommand(); }
utils::BasicResult<storage::ConstraintViolation, void> Commit() { return accessor_->Commit(); }
utils::BasicResult<storage::StorageDataManipulationError, void> Commit() { return accessor_->Commit(); }
void Abort() { accessor_->Abort(); }


@ -188,6 +188,12 @@ class FreeMemoryModificationInMulticommandTxException : public QueryException {
: QueryException("Free memory query not allowed in multicommand transactions.") {}
};
class ShowConfigModificationInMulticommandTxException : public QueryException {
public:
ShowConfigModificationInMulticommandTxException()
: QueryException("Show config query not allowed in multicommand transactions.") {}
};
class TriggerModificationInMulticommandTxException : public QueryException {
public:
TriggerModificationInMulticommandTxException()
@ -224,4 +230,11 @@ class VersionInfoInMulticommandTxException : public QueryException {
: QueryException("Version info query not allowed in multicommand transactions.") {}
};
class ReplicationException : public utils::BasicException {
public:
using utils::BasicException::BasicException;
explicit ReplicationException(const std::string &message)
: utils::BasicException("Replication Exception: {} Check the status of the replicas using 'SHOW REPLICAS' query.",
message) {}
};
} // namespace memgraph::query


@ -2675,5 +2675,13 @@ cpp<#
(:serialize (:slk))
(:clone))
(lcp:define-class show-config-query (query) ()
(:public
#>cpp
DEFVISITABLE(QueryVisitor<void>);
cpp<#)
(:serialize (:slk))
(:clone))
(lcp:pop-namespace) ;; namespace query
(lcp:pop-namespace) ;; namespace memgraph


@ -94,6 +94,7 @@ class StreamQuery;
class SettingQuery;
class VersionQuery;
class Foreach;
class ShowConfigQuery;
using TreeCompositeVisitor = utils::CompositeVisitor<
SingleQuery, CypherUnion, NamedExpression, OrOperator, XorOperator, AndOperator, NotOperator, AdditionOperator,
@ -125,9 +126,9 @@ class ExpressionVisitor
None, ParameterLookup, Identifier, PrimitiveLiteral, RegexMatch> {};
template <class TResult>
class QueryVisitor
: public utils::Visitor<TResult, CypherQuery, ExplainQuery, ProfileQuery, IndexQuery, AuthQuery, InfoQuery,
ConstraintQuery, DumpQuery, ReplicationQuery, LockPathQuery, FreeMemoryQuery, TriggerQuery,
IsolationLevelQuery, CreateSnapshotQuery, StreamQuery, SettingQuery, VersionQuery> {};
class QueryVisitor : public utils::Visitor<TResult, CypherQuery, ExplainQuery, ProfileQuery, IndexQuery, AuthQuery,
InfoQuery, ConstraintQuery, DumpQuery, ReplicationQuery, LockPathQuery,
FreeMemoryQuery, TriggerQuery, IsolationLevelQuery, CreateSnapshotQuery,
StreamQuery, SettingQuery, VersionQuery, ShowConfigQuery> {};
} // namespace memgraph::query


@ -2467,6 +2467,11 @@ antlrcpp::Any CypherMainVisitor::visitForeach(MemgraphCypher::ForeachContext *ct
return for_each;
}
antlrcpp::Any CypherMainVisitor::visitShowConfigQuery(MemgraphCypher::ShowConfigQueryContext * /*ctx*/) {
query_ = storage_->Create<ShowConfigQuery>();
return query_;
}
LabelIx CypherMainVisitor::AddLabel(const std::string &name) { return storage_->GetLabelIx(name); }
PropertyIx CypherMainVisitor::AddProperty(const std::string &name) { return storage_->GetPropertyIx(name); }


@ -877,6 +877,11 @@ class CypherMainVisitor : public antlropencypher::MemgraphCypherBaseVisitor {
*/
antlrcpp::Any visitForeach(MemgraphCypher::ForeachContext *ctx) override;
/**
* @return ShowConfigQuery*
*/
antlrcpp::Any visitShowConfigQuery(MemgraphCypher::ShowConfigQueryContext *ctx) override;
public:
Query *query() { return query_; }
const static std::string kAnonPrefix;


@ -125,6 +125,7 @@ query : cypherQuery
| streamQuery
| settingQuery
| versionQuery
| showConfigQuery
;
authQuery : createRole
@ -395,4 +396,6 @@ showSetting : SHOW DATABASE SETTING settingName ;
showSettings : SHOW DATABASE SETTINGS ;
showConfigQuery : SHOW CONFIG ;
versionQuery : SHOW VERSION ;


@ -66,6 +66,8 @@ class PrivilegeExtractor : public QueryVisitor<void>, public HierarchicalTreeVis
void Visit(FreeMemoryQuery &free_memory_query) override { AddPrivilege(AuthQuery::Privilege::FREE_MEMORY); }
void Visit(ShowConfigQuery & /*show_config_query*/) override { AddPrivilege(AuthQuery::Privilege::CONFIG); }
void Visit(TriggerQuery &trigger_query) override { AddPrivilege(AuthQuery::Privilege::TRIGGER); }
void Visit(StreamQuery &stream_query) override { AddPrivilege(AuthQuery::Privilege::STREAM); }


@ -204,6 +204,7 @@ const trie::Trie kKeywords = {"union",
"pulsar",
"service_url",
"version",
"config",
"websocket",
"foreach",
"labels",


@ -21,6 +21,7 @@
#include <limits>
#include <optional>
#include <unordered_map>
#include <variant>
#include "auth/models.hpp"
#include "glue/communication.hpp"
@ -78,6 +79,9 @@ extern const Event TriggersCreated;
namespace memgraph::query {
template <typename>
constexpr auto kAlwaysFalse = false;
namespace {
void UpdateTypeCount(const plan::ReadWriteTypeChecker::RWType type) {
switch (type) {
@ -802,6 +806,37 @@ Callback HandleStreamQuery(StreamQuery *stream_query, const Parameters &paramete
}
}
Callback HandleConfigQuery() {
Callback callback;
callback.header = {"name", "default_value", "current_value", "description"};
callback.fn = [] {
std::vector<GFLAGS_NAMESPACE::CommandLineFlagInfo> flags;
GetAllFlags(&flags);
std::vector<std::vector<TypedValue>> results;
for (const auto &flag : flags) {
if (flag.hidden ||
// These flags are not defined with gflags macros but are specified in config/flags.yaml
flag.name == "help" || flag.name == "help_xml" || flag.name == "version") {
continue;
}
std::vector<TypedValue> current_fields;
current_fields.emplace_back(flag.name);
current_fields.emplace_back(flag.default_value);
current_fields.emplace_back(flag.current_value);
current_fields.emplace_back(flag.description);
results.emplace_back(std::move(current_fields));
}
return results;
};
return callback;
}
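`HandleConfigQuery` above skips hidden flags and the gflags built-ins (`help`, `help_xml`, `version`) that are not declared via `DEFINE_*` macros. The filter in isolation, over a stand-in struct rather than gflags' real `CommandLineFlagInfo`:

```cpp
#include <string>

// Stand-in for the fields of gflags' CommandLineFlagInfo used by the filter.
struct FlagInfo {
  std::string name;
  bool hidden;
};

// Mirror of the skip rule in HandleConfigQuery: keep a flag only if it is
// not hidden and not one of the gflags pseudo-flags.
bool IsUserVisibleFlag(const FlagInfo &flag) {
  return !flag.hidden && flag.name != "help" && flag.name != "help_xml" && flag.name != "version";
}
```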
Callback HandleSettingQuery(SettingQuery *setting_query, const Parameters &parameters, DbAccessor *db_accessor) {
Frame frame(0);
SymbolTable symbol_table;
@ -1375,23 +1410,34 @@ PreparedQuery PrepareIndexQuery(ParsedQuery parsed_query, bool in_explicit_trans
handler = [interpreter_context, label, properties_stringified = std::move(properties_stringified),
label_name = index_query->label_.name, properties = std::move(properties),
invalidate_plan_cache = std::move(invalidate_plan_cache)](Notification &index_notification) {
if (properties.empty()) {
if (!interpreter_context->db->CreateIndex(label)) {
index_notification.code = NotificationCode::EXISTANT_INDEX;
index_notification.title =
fmt::format("Index on label {} on properties {} already exists.", label_name, properties_stringified);
}
EventCounter::IncrementCounter(EventCounter::LabelIndexCreated);
MG_ASSERT(properties.size() <= 1U);
auto maybe_index_error = properties.empty() ? interpreter_context->db->CreateIndex(label)
: interpreter_context->db->CreateIndex(label, properties[0]);
utils::OnScopeExit invalidator(invalidate_plan_cache);
if (maybe_index_error.HasError()) {
const auto &error = maybe_index_error.GetError();
std::visit(
[&index_notification, &label_name, &properties_stringified]<typename T>(T &&) {
using ErrorType = std::remove_cvref_t<T>;
if constexpr (std::is_same_v<ErrorType, storage::ReplicationError>) {
EventCounter::IncrementCounter(EventCounter::LabelIndexCreated);
throw ReplicationException(
fmt::format("At least one SYNC replica has not confirmed the creation of the index on label {} "
"on properties {}.",
label_name, properties_stringified));
} else if constexpr (std::is_same_v<ErrorType, storage::IndexDefinitionError>) {
index_notification.code = NotificationCode::EXISTENT_INDEX;
index_notification.title = fmt::format("Index on label {} on properties {} already exists.",
label_name, properties_stringified);
} else {
static_assert(kAlwaysFalse<T>, "Missing type from variant visitor");
}
},
error);
} else {
MG_ASSERT(properties.size() == 1U);
if (!interpreter_context->db->CreateIndex(label, properties[0])) {
index_notification.code = NotificationCode::EXISTANT_INDEX;
index_notification.title =
fmt::format("Index on label {} on properties {} already exists.", label_name, properties_stringified);
}
EventCounter::IncrementCounter(EventCounter::LabelPropertyIndexCreated);
EventCounter::IncrementCounter(EventCounter::LabelIndexCreated);
}
invalidate_plan_cache();
};
break;
}
@ -1402,21 +1448,31 @@ PreparedQuery PrepareIndexQuery(ParsedQuery parsed_query, bool in_explicit_trans
handler = [interpreter_context, label, properties_stringified = std::move(properties_stringified),
label_name = index_query->label_.name, properties = std::move(properties),
invalidate_plan_cache = std::move(invalidate_plan_cache)](Notification &index_notification) {
if (properties.empty()) {
if (!interpreter_context->db->DropIndex(label)) {
index_notification.code = NotificationCode::NONEXISTANT_INDEX;
index_notification.title =
fmt::format("Index on label {} on properties {} doesn't exist.", label_name, properties_stringified);
}
} else {
MG_ASSERT(properties.size() == 1U);
if (!interpreter_context->db->DropIndex(label, properties[0])) {
index_notification.code = NotificationCode::NONEXISTANT_INDEX;
index_notification.title =
fmt::format("Index on label {} on properties {} doesn't exist.", label_name, properties_stringified);
}
MG_ASSERT(properties.size() <= 1U);
auto maybe_index_error = properties.empty() ? interpreter_context->db->DropIndex(label)
: interpreter_context->db->DropIndex(label, properties[0]);
utils::OnScopeExit invalidator(invalidate_plan_cache);
if (maybe_index_error.HasError()) {
const auto &error = maybe_index_error.GetError();
std::visit(
[&index_notification, &label_name, &properties_stringified]<typename T>(T &&) {
using ErrorType = std::remove_cvref_t<T>;
if constexpr (std::is_same_v<ErrorType, storage::ReplicationError>) {
throw ReplicationException(
fmt::format("At least one SYNC replica has not confirmed the dropping of the index on label {} "
"on properties {}.",
label_name, properties_stringified));
} else if constexpr (std::is_same_v<ErrorType, storage::IndexDefinitionError>) {
index_notification.code = NotificationCode::NONEXISTENT_INDEX;
index_notification.title = fmt::format("Index on label {} on properties {} doesn't exist.",
label_name, properties_stringified);
} else {
static_assert(kAlwaysFalse<T>, "Missing type from variant visitor");
}
},
error);
}
invalidate_plan_cache();
};
break;
}
@ -1544,6 +1600,28 @@ PreparedQuery PrepareFreeMemoryQuery(ParsedQuery parsed_query, const bool in_exp
RWType::NONE};
}
PreparedQuery PrepareShowConfigQuery(ParsedQuery parsed_query, const bool in_explicit_transaction) {
if (in_explicit_transaction) {
throw ShowConfigModificationInMulticommandTxException();
}
auto callback = HandleConfigQuery();
return PreparedQuery{std::move(callback.header), std::move(parsed_query.required_privileges),
[callback_fn = std::move(callback.fn), pull_plan = std::shared_ptr<PullPlanVector>{nullptr}](
AnyStream *stream, std::optional<int> n) mutable -> std::optional<QueryHandlerResult> {
if (!pull_plan) [[unlikely]] {
pull_plan = std::make_shared<PullPlanVector>(callback_fn());
}
if (pull_plan->Pull(stream, n)) {
return QueryHandlerResult::COMMIT;
}
return std::nullopt;
},
RWType::NONE};
}
TriggerEventType ToTriggerEventType(const TriggerQuery::EventType event_type) {
switch (event_type) {
case TriggerQuery::EventType::ANY:
@ -1943,21 +2021,37 @@ PreparedQuery PrepareConstraintQuery(ParsedQuery parsed_query, bool in_explicit_
handler = [interpreter_context, label, label_name = constraint_query->constraint_.label.name,
properties_stringified = std::move(properties_stringified),
properties = std::move(properties)](Notification &constraint_notification) {
auto res = interpreter_context->db->CreateExistenceConstraint(label, properties[0]);
if (res.HasError()) {
auto violation = res.GetError();
auto label_name = interpreter_context->db->LabelToName(violation.label);
MG_ASSERT(violation.properties.size() == 1U);
auto property_name = interpreter_context->db->PropertyToName(*violation.properties.begin());
throw QueryRuntimeException(
"Unable to create existence constraint :{}({}), because an "
"existing node violates it.",
label_name, property_name);
}
if (res.HasValue() && !res.GetValue()) {
constraint_notification.code = NotificationCode::EXISTANT_CONSTRAINT;
constraint_notification.title = fmt::format(
"Constraint EXISTS on label {} on properties {} already exists.", label_name, properties_stringified);
auto maybe_constraint_error = interpreter_context->db->CreateExistenceConstraint(label, properties[0]);
if (maybe_constraint_error.HasError()) {
const auto &error = maybe_constraint_error.GetError();
std::visit(
[&interpreter_context, &label_name, &properties_stringified,
&constraint_notification]<typename T>(T &&arg) {
using ErrorType = std::remove_cvref_t<T>;
if constexpr (std::is_same_v<ErrorType, storage::ConstraintViolation>) {
auto &violation = arg;
MG_ASSERT(violation.properties.size() == 1U);
auto property_name = interpreter_context->db->PropertyToName(*violation.properties.begin());
throw QueryRuntimeException(
"Unable to create existence constraint :{}({}), because an "
"existing node violates it.",
label_name, property_name);
} else if constexpr (std::is_same_v<ErrorType, storage::ConstraintDefinitionError>) {
constraint_notification.code = NotificationCode::EXISTENT_CONSTRAINT;
constraint_notification.title =
fmt::format("Constraint EXISTS on label {} on properties {} already exists.", label_name,
properties_stringified);
} else if constexpr (std::is_same_v<ErrorType, storage::ReplicationError>) {
throw ReplicationException(
"At least one SYNC replica has not confirmed the creation of the EXISTS constraint on label "
"{} on properties {}.",
label_name, properties_stringified);
} else {
static_assert(kAlwaysFalse<T>, "Missing type from variant visitor");
}
},
error);
}
};
break;
@ -1975,21 +2069,35 @@ PreparedQuery PrepareConstraintQuery(ParsedQuery parsed_query, bool in_explicit_
handler = [interpreter_context, label, label_name = constraint_query->constraint_.label.name,
properties_stringified = std::move(properties_stringified),
property_set = std::move(property_set)](Notification &constraint_notification) {
auto res = interpreter_context->db->CreateUniqueConstraint(label, property_set);
if (res.HasError()) {
auto violation = res.GetError();
auto label_name = interpreter_context->db->LabelToName(violation.label);
std::stringstream property_names_stream;
utils::PrintIterable(property_names_stream, violation.properties, ", ",
[&interpreter_context](auto &stream, const auto &prop) {
stream << interpreter_context->db->PropertyToName(prop);
});
throw QueryRuntimeException(
"Unable to create unique constraint :{}({}), because an "
"existing node violates it.",
label_name, property_names_stream.str());
auto maybe_constraint_error = interpreter_context->db->CreateUniqueConstraint(label, property_set);
if (maybe_constraint_error.HasError()) {
const auto &error = maybe_constraint_error.GetError();
std::visit(
[&interpreter_context, &label_name, &properties_stringified]<typename T>(T &&arg) {
using ErrorType = std::remove_cvref_t<T>;
if constexpr (std::is_same_v<ErrorType, storage::ConstraintViolation>) {
auto &violation = arg;
auto violation_label_name = interpreter_context->db->LabelToName(violation.label);
std::stringstream property_names_stream;
utils::PrintIterable(property_names_stream, violation.properties, ", ",
[&interpreter_context](auto &stream, const auto &prop) {
stream << interpreter_context->db->PropertyToName(prop);
});
throw QueryRuntimeException(
"Unable to create unique constraint :{}({}), because an "
"existing node violates it.",
violation_label_name, property_names_stream.str());
} else if constexpr (std::is_same_v<ErrorType, storage::ReplicationError>) {
throw ReplicationException(fmt::format(
"At least one SYNC replica has not confirmed the creation of the UNIQUE constraint: {}({}).",
label_name, properties_stringified));
} else {
static_assert(kAlwaysFalse<T>, "Missing type from variant visitor");
}
},
error);
}
switch (res.GetValue()) {
switch (maybe_constraint_error.GetValue()) {
case storage::UniqueConstraints::CreationStatus::EMPTY_PROPERTIES:
throw SyntaxException(
"At least one property must be used for unique "
@ -2000,7 +2108,7 @@ PreparedQuery PrepareConstraintQuery(ParsedQuery parsed_query, bool in_explicit_
"for unique constraints is exceeded.",
storage::kUniqueConstraintsMaxProperties);
case storage::UniqueConstraints::CreationStatus::ALREADY_EXISTS:
constraint_notification.code = NotificationCode::EXISTANT_CONSTRAINT;
constraint_notification.code = NotificationCode::EXISTENT_CONSTRAINT;
constraint_notification.title =
fmt::format("Constraint UNIQUE on label {} on properties {} already exists.", label_name,
properties_stringified);
@ -2028,10 +2136,27 @@ PreparedQuery PrepareConstraintQuery(ParsedQuery parsed_query, bool in_explicit_
handler = [interpreter_context, label, label_name = constraint_query->constraint_.label.name,
properties_stringified = std::move(properties_stringified),
properties = std::move(properties)](Notification &constraint_notification) {
if (!interpreter_context->db->DropExistenceConstraint(label, properties[0])) {
constraint_notification.code = NotificationCode::NONEXISTANT_CONSTRAINT;
constraint_notification.title = fmt::format(
"Constraint EXISTS on label {} on properties {} doesn't exist.", label_name, properties_stringified);
auto maybe_constraint_error = interpreter_context->db->DropExistenceConstraint(label, properties[0]);
if (maybe_constraint_error.HasError()) {
const auto &error = maybe_constraint_error.GetError();
std::visit(
[&label_name, &properties_stringified, &constraint_notification]<typename T>(T &&) {
using ErrorType = std::remove_cvref_t<T>;
if constexpr (std::is_same_v<ErrorType, storage::ConstraintDefinitionError>) {
constraint_notification.code = NotificationCode::NONEXISTENT_CONSTRAINT;
constraint_notification.title =
fmt::format("Constraint EXISTS on label {} on properties {} doesn't exist.", label_name,
properties_stringified);
} else if constexpr (std::is_same_v<ErrorType, storage::ReplicationError>) {
throw ReplicationException(
fmt::format("At least one SYNC replica has not confirmed the dropping of the EXISTS "
"constraint on label {} on properties {}.",
label_name, properties_stringified));
} else {
static_assert(kAlwaysFalse<T>, "Missing type from variant visitor");
}
},
error);
}
return std::vector<std::vector<TypedValue>>();
};
@ -2050,7 +2175,24 @@ PreparedQuery PrepareConstraintQuery(ParsedQuery parsed_query, bool in_explicit_
handler = [interpreter_context, label, label_name = constraint_query->constraint_.label.name,
properties_stringified = std::move(properties_stringified),
property_set = std::move(property_set)](Notification &constraint_notification) {
auto res = interpreter_context->db->DropUniqueConstraint(label, property_set);
auto maybe_constraint_error = interpreter_context->db->DropUniqueConstraint(label, property_set);
if (maybe_constraint_error.HasError()) {
const auto &error = maybe_constraint_error.GetError();
std::visit(
[&label_name, &properties_stringified]<typename T>(T &&) {
using ErrorType = std::remove_cvref_t<T>;
if constexpr (std::is_same_v<ErrorType, storage::ReplicationError>) {
throw ReplicationException(
fmt::format("At least one SYNC replica has not confirmed the dropping of the UNIQUE "
"constraint on label {} on properties {}.",
label_name, properties_stringified));
} else {
static_assert(kAlwaysFalse<T>, "Missing type from variant visitor");
}
},
error);
}
const auto &res = maybe_constraint_error.GetValue();
switch (res) {
case storage::UniqueConstraints::DeletionStatus::EMPTY_PROPERTIES:
throw SyntaxException(
@ -2064,7 +2206,7 @@ PreparedQuery PrepareConstraintQuery(ParsedQuery parsed_query, bool in_explicit_
storage::kUniqueConstraintsMaxProperties);
break;
case storage::UniqueConstraints::DeletionStatus::NOT_FOUND:
constraint_notification.code = NotificationCode::NONEXISTANT_CONSTRAINT;
constraint_notification.code = NotificationCode::NONEXISTENT_CONSTRAINT;
constraint_notification.title =
fmt::format("Constraint UNIQUE on label {} on properties {} doesn't exist.", label_name,
properties_stringified);
@ -2204,6 +2346,8 @@ Interpreter::PrepareResult Interpreter::Prepare(const std::string &query_string,
&*execution_db_accessor_);
} else if (utils::Downcast<FreeMemoryQuery>(parsed_query.query)) {
prepared_query = PrepareFreeMemoryQuery(std::move(parsed_query), in_explicit_transaction_, interpreter_context_);
} else if (utils::Downcast<ShowConfigQuery>(parsed_query.query)) {
prepared_query = PrepareShowConfigQuery(std::move(parsed_query), in_explicit_transaction_);
} else if (utils::Downcast<TriggerQuery>(parsed_query.query)) {
prepared_query =
PrepareTriggerQuery(std::move(parsed_query), in_explicit_transaction_, &query_execution->notifications,
@ -2280,28 +2424,41 @@ void RunTriggersIndividually(const utils::SkipList<Trigger> &triggers, Interpret
continue;
}
auto maybe_constraint_violation = db_accessor.Commit();
if (maybe_constraint_violation.HasError()) {
const auto &constraint_violation = maybe_constraint_violation.GetError();
switch (constraint_violation.type) {
case storage::ConstraintViolation::Type::EXISTENCE: {
const auto &label_name = db_accessor.LabelToName(constraint_violation.label);
MG_ASSERT(constraint_violation.properties.size() == 1U);
const auto &property_name = db_accessor.PropertyToName(*constraint_violation.properties.begin());
spdlog::warn("Trigger '{}' failed to commit due to existence constraint violation on :{}({})", trigger.Name(),
label_name, property_name);
break;
}
case storage::ConstraintViolation::Type::UNIQUE: {
const auto &label_name = db_accessor.LabelToName(constraint_violation.label);
std::stringstream property_names_stream;
utils::PrintIterable(property_names_stream, constraint_violation.properties, ", ",
[&](auto &stream, const auto &prop) { stream << db_accessor.PropertyToName(prop); });
spdlog::warn("Trigger '{}' failed to commit due to unique constraint violation on :{}({})", trigger.Name(),
label_name, property_names_stream.str());
break;
}
}
auto maybe_commit_error = db_accessor.Commit();
if (maybe_commit_error.HasError()) {
const auto &error = maybe_commit_error.GetError();
std::visit(
[&trigger, &db_accessor]<typename T>(T &&arg) {
using ErrorType = std::remove_cvref_t<T>;
if constexpr (std::is_same_v<ErrorType, storage::ReplicationError>) {
spdlog::warn("At least one SYNC replica has not confirmed execution of the trigger '{}'.",
trigger.Name());
} else if constexpr (std::is_same_v<ErrorType, storage::ConstraintViolation>) {
const auto &constraint_violation = arg;
switch (constraint_violation.type) {
case storage::ConstraintViolation::Type::EXISTENCE: {
const auto &label_name = db_accessor.LabelToName(constraint_violation.label);
MG_ASSERT(constraint_violation.properties.size() == 1U);
const auto &property_name = db_accessor.PropertyToName(*constraint_violation.properties.begin());
spdlog::warn("Trigger '{}' failed to commit due to existence constraint violation on :{}({})",
trigger.Name(), label_name, property_name);
break;
}
case storage::ConstraintViolation::Type::UNIQUE: {
const auto &label_name = db_accessor.LabelToName(constraint_violation.label);
std::stringstream property_names_stream;
utils::PrintIterable(
property_names_stream, constraint_violation.properties, ", ",
[&](auto &stream, const auto &prop) { stream << db_accessor.PropertyToName(prop); });
spdlog::warn("Trigger '{}' failed to commit due to unique constraint violation on :{}({})",
trigger.Name(), label_name, property_names_stream.str());
break;
}
}
} else {
static_assert(kAlwaysFalse<T>, "Missing type from variant visitor");
}
},
error);
}
}
}
@ -2342,32 +2499,45 @@ void Interpreter::Commit() {
db_accessor_.reset();
trigger_context_collector_.reset();
};
utils::OnScopeExit members_resetter(reset_necessary_members);
auto maybe_constraint_violation = db_accessor_->Commit();
if (maybe_constraint_violation.HasError()) {
const auto &constraint_violation = maybe_constraint_violation.GetError();
switch (constraint_violation.type) {
case storage::ConstraintViolation::Type::EXISTENCE: {
auto label_name = execution_db_accessor_->LabelToName(constraint_violation.label);
MG_ASSERT(constraint_violation.properties.size() == 1U);
auto property_name = execution_db_accessor_->PropertyToName(*constraint_violation.properties.begin());
reset_necessary_members();
throw QueryException("Unable to commit due to existence constraint violation on :{}({})", label_name,
property_name);
break;
}
case storage::ConstraintViolation::Type::UNIQUE: {
auto label_name = execution_db_accessor_->LabelToName(constraint_violation.label);
std::stringstream property_names_stream;
utils::PrintIterable(
property_names_stream, constraint_violation.properties, ", ",
[this](auto &stream, const auto &prop) { stream << execution_db_accessor_->PropertyToName(prop); });
reset_necessary_members();
throw QueryException("Unable to commit due to unique constraint violation on :{}({})", label_name,
property_names_stream.str());
break;
}
}
auto commit_confirmed_by_all_sync_replicas = true;
auto maybe_commit_error = db_accessor_->Commit();
if (maybe_commit_error.HasError()) {
const auto &error = maybe_commit_error.GetError();
std::visit(
[&execution_db_accessor = execution_db_accessor_,
&commit_confirmed_by_all_sync_replicas]<typename T>(T &&arg) {
using ErrorType = std::remove_cvref_t<T>;
if constexpr (std::is_same_v<ErrorType, storage::ReplicationError>) {
commit_confirmed_by_all_sync_replicas = false;
} else if constexpr (std::is_same_v<ErrorType, storage::ConstraintViolation>) {
const auto &constraint_violation = arg;
const auto &label_name = execution_db_accessor->LabelToName(constraint_violation.label);
switch (constraint_violation.type) {
case storage::ConstraintViolation::Type::EXISTENCE: {
MG_ASSERT(constraint_violation.properties.size() == 1U);
const auto &property_name = execution_db_accessor->PropertyToName(*constraint_violation.properties.begin());
throw QueryException("Unable to commit due to existence constraint violation on :{}({})", label_name,
property_name);
}
case storage::ConstraintViolation::Type::UNIQUE: {
std::stringstream property_names_stream;
utils::PrintIterable(property_names_stream, constraint_violation.properties, ", ",
[&execution_db_accessor](auto &stream, const auto &prop) {
stream << execution_db_accessor->PropertyToName(prop);
});
throw QueryException("Unable to commit due to unique constraint violation on :{}({})", label_name,
property_names_stream.str());
}
}
} else {
static_assert(kAlwaysFalse<T>, "Missing type from variant visitor");
}
},
error);
}
// The ordered execution of after commit triggers depends heavily on the exclusiveness of db_accessor_->Commit():
@ -2386,9 +2556,10 @@ void Interpreter::Commit() {
});
}
reset_necessary_members();
SPDLOG_DEBUG("Finished committing the transaction");
if (!commit_confirmed_by_all_sync_replicas) {
throw ReplicationException("At least one SYNC replica has not confirmed committing last transaction.");
}
}
void Interpreter::AdvanceCommand() {

View File

@ -52,15 +52,15 @@ constexpr std::string_view GetCodeString(const NotificationCode code) {
return "DropStream"sv;
case NotificationCode::DROP_TRIGGER:
return "DropTrigger"sv;
case NotificationCode::EXISTANT_CONSTRAINT:
case NotificationCode::EXISTENT_CONSTRAINT:
return "ConstraintAlreadyExists"sv;
case NotificationCode::EXISTANT_INDEX:
case NotificationCode::EXISTENT_INDEX:
return "IndexAlreadyExists"sv;
case NotificationCode::LOAD_CSV_TIP:
return "LoadCSVTip"sv;
case NotificationCode::NONEXISTANT_INDEX:
case NotificationCode::NONEXISTENT_INDEX:
return "IndexDoesNotExist"sv;
case NotificationCode::NONEXISTANT_CONSTRAINT:
case NotificationCode::NONEXISTENT_CONSTRAINT:
return "ConstraintDoesNotExist"sv;
case NotificationCode::REGISTER_REPLICA:
return "RegisterReplica"sv;
@ -114,4 +114,4 @@ std::string ExecutionStatsKeyToString(const ExecutionStats::Key key) {
}
}
} // namespace memgraph::query
} // namespace memgraph::query

View File

@ -34,11 +34,11 @@ enum class NotificationCode : uint8_t {
DROP_REPLICA,
DROP_STREAM,
DROP_TRIGGER,
EXISTANT_INDEX,
EXISTANT_CONSTRAINT,
EXISTENT_INDEX,
EXISTENT_CONSTRAINT,
LOAD_CSV_TIP,
NONEXISTANT_INDEX,
NONEXISTANT_CONSTRAINT,
NONEXISTENT_INDEX,
NONEXISTENT_CONSTRAINT,
REPLICA_PORT_WARNING,
REGISTER_REPLICA,
SET_REPLICA,

View File

@ -13,11 +13,12 @@
#include "utils/flag_validation.hpp"
DEFINE_VALIDATED_HIDDEN_int64(query_vertex_count_to_expand_existing, 10,
"Maximum count of indexed vertices which provoke "
"indexed lookup and then expand to existing, instead of "
"a regular expand. Default is 10, to turn off use -1.",
FLAG_IN_RANGE(-1, std::numeric_limits<std::int64_t>::max()));
// NOLINTNEXTLINE (cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_VALIDATED_int64(query_vertex_count_to_expand_existing, 10,
"Maximum count of indexed vertices which provoke "
"indexed lookup and then expand to existing, instead of "
"a regular expand. Default is 10, to turn off use -1.",
FLAG_IN_RANGE(-1, std::numeric_limits<std::int64_t>::max()));
namespace memgraph::query::plan::impl {

View File

@ -17,8 +17,9 @@
#include "utils/flag_validation.hpp"
#include "utils/logging.hpp"
DEFINE_VALIDATED_HIDDEN_uint64(query_max_plans, 1000U, "Maximum number of generated plans for a query.",
FLAG_IN_RANGE(1, std::numeric_limits<std::uint64_t>::max()));
// NOLINTNEXTLINE (cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_VALIDATED_uint64(query_max_plans, 1000U, "Maximum number of generated plans for a query.",
FLAG_IN_RANGE(1, std::numeric_limits<std::uint64_t>::max()));
namespace memgraph::query::plan::impl {

View File

@ -2830,3 +2830,29 @@ mgp_error mgp_module_add_function(mgp_module *module, const char *name, mgp_func
},
result);
}
mgp_error mgp_log(const mgp_log_level log_level, const char *output) {
return WrapExceptions([=] {
switch (log_level) {
case mgp_log_level::MGP_LOG_LEVEL_TRACE:
spdlog::trace(output);
return;
case mgp_log_level::MGP_LOG_LEVEL_DEBUG:
spdlog::debug(output);
return;
case mgp_log_level::MGP_LOG_LEVEL_INFO:
spdlog::info(output);
return;
case mgp_log_level::MGP_LOG_LEVEL_WARN:
spdlog::warn(output);
return;
case mgp_log_level::MGP_LOG_LEVEL_ERROR:
spdlog::error(output);
return;
case mgp_log_level::MGP_LOG_LEVEL_CRITICAL:
spdlog::critical(output);
return;
}
throw std::invalid_argument{fmt::format("Invalid log level: {}", log_level)};
});
}

View File

@ -2052,6 +2052,81 @@ PyObject *PyPathMakeWithStart(PyTypeObject *type, PyObject *vertex) {
return py_path;
}
// clang-format off
struct PyLogger {
PyObject_HEAD
};
// clang-format on
PyObject *PyLoggerLog(PyLogger *self, PyObject *args, const mgp_log_level level) {
MG_ASSERT(self);
const char *out = nullptr;
if (!PyArg_ParseTuple(args, "s", &out)) {
return nullptr;
}
if (RaiseExceptionFromErrorCode(mgp_log(level, out))) {
return nullptr;
}
Py_RETURN_NONE;
}
PyObject *PyLoggerLogInfo(PyLogger *self, PyObject *args) {
return PyLoggerLog(self, args, mgp_log_level::MGP_LOG_LEVEL_INFO);
}
PyObject *PyLoggerLogWarning(PyLogger *self, PyObject *args) {
return PyLoggerLog(self, args, mgp_log_level::MGP_LOG_LEVEL_WARN);
}
PyObject *PyLoggerLogError(PyLogger *self, PyObject *args) {
return PyLoggerLog(self, args, mgp_log_level::MGP_LOG_LEVEL_ERROR);
}
PyObject *PyLoggerLogCritical(PyLogger *self, PyObject *args) {
return PyLoggerLog(self, args, mgp_log_level::MGP_LOG_LEVEL_CRITICAL);
}
PyObject *PyLoggerLogTrace(PyLogger *self, PyObject *args) {
return PyLoggerLog(self, args, mgp_log_level::MGP_LOG_LEVEL_TRACE);
}
PyObject *PyLoggerLogDebug(PyLogger *self, PyObject *args) {
return PyLoggerLog(self, args, mgp_log_level::MGP_LOG_LEVEL_DEBUG);
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
static PyMethodDef PyLoggerMethods[] = {
{"__reduce__", reinterpret_cast<PyCFunction>(DisallowPickleAndCopy), METH_NOARGS, "__reduce__ is not supported"},
{"info", reinterpret_cast<PyCFunction>(PyLoggerLogInfo), METH_VARARGS,
"Logs a message with level INFO on this logger."},
{"warning", reinterpret_cast<PyCFunction>(PyLoggerLogWarning), METH_VARARGS,
"Logs a message with level WARNING on this logger."},
{"error", reinterpret_cast<PyCFunction>(PyLoggerLogError), METH_VARARGS,
"Logs a message with level ERROR on this logger."},
{"critical", reinterpret_cast<PyCFunction>(PyLoggerLogCritical), METH_VARARGS,
"Logs a message with level CRITICAL on this logger."},
{"trace", reinterpret_cast<PyCFunction>(PyLoggerLogTrace), METH_VARARGS,
"Logs a message with level TRACE on this logger."},
{"debug", reinterpret_cast<PyCFunction>(PyLoggerLogDebug), METH_VARARGS,
"Logs a message with level DEBUG on this logger."},
{nullptr},
};
// clang-format off
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
static PyTypeObject PyLoggerType = {
PyVarObject_HEAD_INIT(nullptr, 0)
.tp_name = "_mgp.Logger",
.tp_basicsize = sizeof(PyLogger),
// NOLINTNEXTLINE(hicpp-signed-bitwise)
.tp_flags = Py_TPFLAGS_DEFAULT,
.tp_doc = "Logging API.",
.tp_methods = PyLoggerMethods,
};
// clang-format on
struct PyMgpError {
const char *name;
PyObject *&exception;
@ -2103,6 +2178,7 @@ PyObject *PyInitMgpModule() {
if (!register_type(&PyCypherTypeType, "Type")) return nullptr;
if (!register_type(&PyMessagesType, "Messages")) return nullptr;
if (!register_type(&PyMessageType, "Message")) return nullptr;
if (!register_type(&PyLoggerType, "Logger")) return nullptr;
std::array py_mgp_errors{
PyMgpError{"_mgp.UnknownError", gMgpUnknownError, PyExc_RuntimeError, nullptr},
@ -2169,8 +2245,14 @@ auto WithMgpModule(mgp_module *module_def, const TFun &fun) {
"import a new module. Is some other thread also importing Python "
"modules?");
auto *py_query_module = MakePyQueryModule(module_def);
MG_ASSERT(py_query_module);
MG_ASSERT(py_mgp.SetAttr("_MODULE", py_query_module));
// NOLINTNEXTLINE(cppcoreguidelines-pro-type-cstyle-cast)
auto *py_logger = reinterpret_cast<PyObject *>(PyObject_New(PyLogger, &PyLoggerType));
MG_ASSERT(py_mgp.SetAttr("_LOGGER", py_logger));
auto ret = fun();
auto maybe_exc = py::FetchError();
MG_ASSERT(py_mgp.SetAttr("_MODULE", Py_None));

View File

@ -222,23 +222,24 @@ void Storage::ReplicationClient::IfStreamingTransaction(const std::function<void
}
}
void Storage::ReplicationClient::FinalizeTransactionReplication() {
bool Storage::ReplicationClient::FinalizeTransactionReplication() {
// We can only check the state because it guarantees to be only
// valid during a single transaction replication (if the assumption
// that this and other transaction replication functions can only be
// called from a one thread stands)
if (replica_state_ != replication::ReplicaState::REPLICATING) {
return;
return false;
}
if (mode_ == replication::ReplicationMode::ASYNC) {
thread_pool_.AddTask([this] { this->FinalizeTransactionReplicationInternal(); });
thread_pool_.AddTask([this] { static_cast<void>(this->FinalizeTransactionReplicationInternal()); });
return true;
} else {
FinalizeTransactionReplicationInternal();
return FinalizeTransactionReplicationInternal();
}
}
void Storage::ReplicationClient::FinalizeTransactionReplicationInternal() {
bool Storage::ReplicationClient::FinalizeTransactionReplicationInternal() {
MG_ASSERT(replica_stream_, "Missing stream for transaction deltas");
try {
auto response = replica_stream_->Finalize();
@ -249,6 +250,7 @@ void Storage::ReplicationClient::FinalizeTransactionReplicationInternal() {
thread_pool_.AddTask([&, this] { this->RecoverReplica(response.current_commit_timestamp); });
} else {
replica_state_.store(replication::ReplicaState::READY);
return true;
}
} catch (const rpc::RpcFailedException &) {
replica_stream_.reset();
@ -258,6 +260,7 @@ void Storage::ReplicationClient::FinalizeTransactionReplicationInternal() {
}
HandleRpcFailure();
}
return false;
}
void Storage::ReplicationClient::RecoverReplica(uint64_t replica_commit) {

View File

@ -103,7 +103,8 @@ class Storage::ReplicationClient {
// StartTransactionReplication, stream is created.
void IfStreamingTransaction(const std::function<void(ReplicaStream &handler)> &callback);
void FinalizeTransactionReplication();
// Return whether the transaction could be finalized on the replication client or not.
[[nodiscard]] bool FinalizeTransactionReplication();
// Transfer the snapshot file.
// @param path Path of the snapshot file.
@ -125,7 +126,7 @@ class Storage::ReplicationClient {
Storage::TimestampInfo GetTimestampInfo();
private:
void FinalizeTransactionReplicationInternal();
[[nodiscard]] bool FinalizeTransactionReplicationInternal();
void RecoverReplica(uint64_t replica_commit);

View File

@ -495,14 +495,14 @@ uint64_t Storage::ReplicationServer::ReadAndApplyDelta(durability::BaseDecoder *
spdlog::trace(" Create label index on :{}", delta.operation_label.label);
// Need to send the timestamp
if (commit_timestamp_and_accessor) throw utils::BasicException("Invalid transaction!");
if (!storage_->CreateIndex(storage_->NameToLabel(delta.operation_label.label), timestamp))
if (storage_->CreateIndex(storage_->NameToLabel(delta.operation_label.label), timestamp).HasError())
throw utils::BasicException("Invalid transaction!");
break;
}
case durability::WalDeltaData::Type::LABEL_INDEX_DROP: {
spdlog::trace(" Drop label index on :{}", delta.operation_label.label);
if (commit_timestamp_and_accessor) throw utils::BasicException("Invalid transaction!");
if (!storage_->DropIndex(storage_->NameToLabel(delta.operation_label.label), timestamp))
if (storage_->DropIndex(storage_->NameToLabel(delta.operation_label.label), timestamp).HasError())
throw utils::BasicException("Invalid transaction!");
break;
}
@ -510,8 +510,10 @@ uint64_t Storage::ReplicationServer::ReadAndApplyDelta(durability::BaseDecoder *
spdlog::trace(" Create label+property index on :{} ({})", delta.operation_label_property.label,
delta.operation_label_property.property);
if (commit_timestamp_and_accessor) throw utils::BasicException("Invalid transaction!");
if (!storage_->CreateIndex(storage_->NameToLabel(delta.operation_label_property.label),
storage_->NameToProperty(delta.operation_label_property.property), timestamp))
if (storage_
->CreateIndex(storage_->NameToLabel(delta.operation_label_property.label),
storage_->NameToProperty(delta.operation_label_property.property), timestamp)
.HasError())
throw utils::BasicException("Invalid transaction!");
break;
}
@ -519,8 +521,10 @@ uint64_t Storage::ReplicationServer::ReadAndApplyDelta(durability::BaseDecoder *
spdlog::trace(" Drop label+property index on :{} ({})", delta.operation_label_property.label,
delta.operation_label_property.property);
if (commit_timestamp_and_accessor) throw utils::BasicException("Invalid transaction!");
if (!storage_->DropIndex(storage_->NameToLabel(delta.operation_label_property.label),
storage_->NameToProperty(delta.operation_label_property.property), timestamp))
if (storage_
->DropIndex(storage_->NameToLabel(delta.operation_label_property.label),
storage_->NameToProperty(delta.operation_label_property.property), timestamp)
.HasError())
throw utils::BasicException("Invalid transaction!");
break;
}
@ -531,16 +535,17 @@ uint64_t Storage::ReplicationServer::ReadAndApplyDelta(durability::BaseDecoder *
auto ret = storage_->CreateExistenceConstraint(
storage_->NameToLabel(delta.operation_label_property.label),
storage_->NameToProperty(delta.operation_label_property.property), timestamp);
if (!ret.HasValue() || !ret.GetValue()) throw utils::BasicException("Invalid transaction!");
if (ret.HasError()) throw utils::BasicException("Invalid transaction!");
break;
}
case durability::WalDeltaData::Type::EXISTENCE_CONSTRAINT_DROP: {
spdlog::trace(" Drop existence constraint on :{} ({})", delta.operation_label_property.label,
delta.operation_label_property.property);
if (commit_timestamp_and_accessor) throw utils::BasicException("Invalid transaction!");
if (!storage_->DropExistenceConstraint(storage_->NameToLabel(delta.operation_label_property.label),
storage_->NameToProperty(delta.operation_label_property.property),
timestamp))
if (storage_
->DropExistenceConstraint(storage_->NameToLabel(delta.operation_label_property.label),
storage_->NameToProperty(delta.operation_label_property.property), timestamp)
.HasError())
throw utils::BasicException("Invalid transaction!");
break;
}
@ -570,7 +575,8 @@ uint64_t Storage::ReplicationServer::ReadAndApplyDelta(durability::BaseDecoder *
}
auto ret = storage_->DropUniqueConstraint(storage_->NameToLabel(delta.operation_label_properties.label),
properties, timestamp);
if (ret != UniqueConstraints::DeletionStatus::SUCCESS) throw utils::BasicException("Invalid transaction!");
if (ret.HasError() || ret.GetValue() != UniqueConstraints::DeletionStatus::SUCCESS)
throw utils::BasicException("Invalid transaction!");
break;
}
}

View File

@ -46,6 +46,7 @@
#include "storage/v2/replication/replication_client.hpp"
#include "storage/v2/replication/replication_server.hpp"
#include "storage/v2/replication/rpc.hpp"
#include "storage/v2/storage_error.hpp"
namespace memgraph::storage {
@ -846,11 +847,13 @@ EdgeTypeId Storage::Accessor::NameToEdgeType(const std::string_view name) { retu
void Storage::Accessor::AdvanceCommand() { ++transaction_.command_id; }
utils::BasicResult<ConstraintViolation, void> Storage::Accessor::Commit(
utils::BasicResult<StorageDataManipulationError, void> Storage::Accessor::Commit(
const std::optional<uint64_t> desired_commit_timestamp) {
MG_ASSERT(is_transaction_active_, "The transaction is already terminated!");
MG_ASSERT(!transaction_.must_abort, "The transaction can't be committed!");
auto could_replicate_all_sync_replicas = true;
if (transaction_.deltas.empty()) {
// We don't have to update the commit timestamp here because no one reads
// it.
@ -869,7 +872,7 @@ utils::BasicResult<ConstraintViolation, void> Storage::Accessor::Commit(
auto validation_result = ValidateExistenceConstraints(*prev.vertex, storage_->constraints_);
if (validation_result) {
Abort();
return *validation_result;
return StorageDataManipulationError{*validation_result};
}
}
@ -926,7 +929,7 @@ utils::BasicResult<ConstraintViolation, void> Storage::Accessor::Commit(
// Replica can log only the write transaction received from Main
// so the Wal files are consistent
if (storage_->replication_role_ == ReplicationRole::MAIN || desired_commit_timestamp.has_value()) {
storage_->AppendToWal(transaction_, *commit_timestamp_);
could_replicate_all_sync_replicas = storage_->AppendToWalDataManipulation(transaction_, *commit_timestamp_);
}
// Take committed_transactions lock while holding the engine lock to
@ -954,11 +957,15 @@ utils::BasicResult<ConstraintViolation, void> Storage::Accessor::Commit(
if (unique_constraint_violation) {
Abort();
return *unique_constraint_violation;
return StorageDataManipulationError{*unique_constraint_violation};
}
}
is_transaction_active_ = false;
if (!could_replicate_all_sync_replicas) {
return StorageDataManipulationError{ReplicationError{}};
}
return {};
}
@ -1157,46 +1164,82 @@ EdgeTypeId Storage::NameToEdgeType(const std::string_view name) {
return EdgeTypeId::FromUint(name_id_mapper_.NameToId(name));
}
bool Storage::CreateIndex(LabelId label, const std::optional<uint64_t> desired_commit_timestamp) {
utils::BasicResult<StorageIndexDefinitionError, void> Storage::CreateIndex(
LabelId label, const std::optional<uint64_t> desired_commit_timestamp) {
std::unique_lock<utils::RWLock> storage_guard(main_lock_);
if (!indices_.label_index.CreateIndex(label, vertices_.access())) return false;
if (!indices_.label_index.CreateIndex(label, vertices_.access())) {
return StorageIndexDefinitionError{IndexDefinitionError{}};
}
const auto commit_timestamp = CommitTimestamp(desired_commit_timestamp);
AppendToWal(durability::StorageGlobalOperation::LABEL_INDEX_CREATE, label, {}, commit_timestamp);
const auto success =
AppendToWalDataDefinition(durability::StorageGlobalOperation::LABEL_INDEX_CREATE, label, {}, commit_timestamp);
commit_log_->MarkFinished(commit_timestamp);
last_commit_timestamp_ = commit_timestamp;
return true;
if (success) {
return {};
}
return StorageIndexDefinitionError{ReplicationError{}};
}
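
The shape of the `utils::BasicResult<StorageIndexDefinitionError, void>` returns above can be sketched with a simplified stand-in (the type names and `CreateIndexSketch` below are illustrative assumptions, not the real Memgraph API):

```cpp
#include <optional>
#include <utility>
#include <variant>

// Hypothetical stand-ins for the storage error types used in the diff.
struct IndexDefinitionError {};
struct ReplicationError {};
using StorageIndexDefinitionError = std::variant<IndexDefinitionError, ReplicationError>;

// Simplified sketch of a BasicResult<TError, void>: empty on success,
// carries the error otherwise. The real utils::BasicResult is more general.
template <typename TError>
class VoidResult {
 public:
  VoidResult() = default;  // success
  VoidResult(TError error) : error_(std::move(error)) {}
  bool HasError() const { return error_.has_value(); }
  const TError &GetError() const { return *error_; }

 private:
  std::optional<TError> error_;
};

// Mirrors the control flow of Storage::CreateIndex above: a failed index
// build yields IndexDefinitionError, and an unconfirmed WAL replication to a
// SYNC replica yields ReplicationError; otherwise an empty (success) result.
VoidResult<StorageIndexDefinitionError> CreateIndexSketch(bool index_created, bool wal_replicated) {
  if (!index_created) return StorageIndexDefinitionError{IndexDefinitionError{}};
  if (!wal_replicated) return StorageIndexDefinitionError{ReplicationError{}};
  return {};
}
```

Callers then branch on `HasError()` and `std::visit` the nested variant, which is exactly the pattern the interpreter-side handlers in this diff follow.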
bool Storage::CreateIndex(LabelId label, PropertyId property, const std::optional<uint64_t> desired_commit_timestamp) {
utils::BasicResult<StorageIndexDefinitionError, void> Storage::CreateIndex(
LabelId label, PropertyId property, const std::optional<uint64_t> desired_commit_timestamp) {
std::unique_lock<utils::RWLock> storage_guard(main_lock_);
if (!indices_.label_property_index.CreateIndex(label, property, vertices_.access())) return false;
if (!indices_.label_property_index.CreateIndex(label, property, vertices_.access())) {
return StorageIndexDefinitionError{IndexDefinitionError{}};
}
const auto commit_timestamp = CommitTimestamp(desired_commit_timestamp);
AppendToWal(durability::StorageGlobalOperation::LABEL_PROPERTY_INDEX_CREATE, label, {property}, commit_timestamp);
auto success = AppendToWalDataDefinition(durability::StorageGlobalOperation::LABEL_PROPERTY_INDEX_CREATE, label,
{property}, commit_timestamp);
commit_log_->MarkFinished(commit_timestamp);
last_commit_timestamp_ = commit_timestamp;
return true;
if (success) {
return {};
}
return StorageIndexDefinitionError{ReplicationError{}};
}
bool Storage::DropIndex(LabelId label, const std::optional<uint64_t> desired_commit_timestamp) {
utils::BasicResult<StorageIndexDefinitionError, void> Storage::DropIndex(
LabelId label, const std::optional<uint64_t> desired_commit_timestamp) {
std::unique_lock<utils::RWLock> storage_guard(main_lock_);
if (!indices_.label_index.DropIndex(label)) return false;
if (!indices_.label_index.DropIndex(label)) {
return StorageIndexDefinitionError{IndexDefinitionError{}};
}
const auto commit_timestamp = CommitTimestamp(desired_commit_timestamp);
AppendToWal(durability::StorageGlobalOperation::LABEL_INDEX_DROP, label, {}, commit_timestamp);
auto success =
AppendToWalDataDefinition(durability::StorageGlobalOperation::LABEL_INDEX_DROP, label, {}, commit_timestamp);
commit_log_->MarkFinished(commit_timestamp);
last_commit_timestamp_ = commit_timestamp;
return true;
if (success) {
return {};
}
return StorageIndexDefinitionError{ReplicationError{}};
}
bool Storage::DropIndex(LabelId label, PropertyId property, const std::optional<uint64_t> desired_commit_timestamp) {
utils::BasicResult<StorageIndexDefinitionError, void> Storage::DropIndex(
LabelId label, PropertyId property, const std::optional<uint64_t> desired_commit_timestamp) {
std::unique_lock<utils::RWLock> storage_guard(main_lock_);
if (!indices_.label_property_index.DropIndex(label, property)) return false;
if (!indices_.label_property_index.DropIndex(label, property)) {
return StorageIndexDefinitionError{IndexDefinitionError{}};
}
// For a description why using `timestamp_` is correct, see
// `CreateIndex(LabelId label)`.
const auto commit_timestamp = CommitTimestamp(desired_commit_timestamp);
AppendToWal(durability::StorageGlobalOperation::LABEL_PROPERTY_INDEX_DROP, label, {property}, commit_timestamp);
auto success = AppendToWalDataDefinition(durability::StorageGlobalOperation::LABEL_PROPERTY_INDEX_DROP, label,
{property}, commit_timestamp);
commit_log_->MarkFinished(commit_timestamp);
last_commit_timestamp_ = commit_timestamp;
return true;
if (success) {
return {};
}
return StorageIndexDefinitionError{ReplicationError{}};
}
IndicesInfo Storage::ListAllIndices() const {
@@ -1204,55 +1247,92 @@ IndicesInfo Storage::ListAllIndices() const {
return {indices_.label_index.ListIndices(), indices_.label_property_index.ListIndices()};
}
utils::BasicResult<ConstraintViolation, bool> Storage::CreateExistenceConstraint(
utils::BasicResult<StorageExistenceConstraintDefinitionError, void> Storage::CreateExistenceConstraint(
LabelId label, PropertyId property, const std::optional<uint64_t> desired_commit_timestamp) {
std::unique_lock<utils::RWLock> storage_guard(main_lock_);
auto ret = storage::CreateExistenceConstraint(&constraints_, label, property, vertices_.access());
if (ret.HasError() || !ret.GetValue()) return ret;
if (ret.HasError()) {
return StorageExistenceConstraintDefinitionError{ret.GetError()};
}
if (!ret.GetValue()) {
return StorageExistenceConstraintDefinitionError{ConstraintDefinitionError{}};
}
const auto commit_timestamp = CommitTimestamp(desired_commit_timestamp);
AppendToWal(durability::StorageGlobalOperation::EXISTENCE_CONSTRAINT_CREATE, label, {property}, commit_timestamp);
auto success = AppendToWalDataDefinition(durability::StorageGlobalOperation::EXISTENCE_CONSTRAINT_CREATE, label,
{property}, commit_timestamp);
commit_log_->MarkFinished(commit_timestamp);
last_commit_timestamp_ = commit_timestamp;
return true;
if (success) {
return {};
}
return StorageExistenceConstraintDefinitionError{ReplicationError{}};
}
bool Storage::DropExistenceConstraint(LabelId label, PropertyId property,
const std::optional<uint64_t> desired_commit_timestamp) {
utils::BasicResult<StorageExistenceConstraintDroppingError, void> Storage::DropExistenceConstraint(
LabelId label, PropertyId property, const std::optional<uint64_t> desired_commit_timestamp) {
std::unique_lock<utils::RWLock> storage_guard(main_lock_);
if (!storage::DropExistenceConstraint(&constraints_, label, property)) return false;
const auto commit_timestamp = CommitTimestamp(desired_commit_timestamp);
AppendToWal(durability::StorageGlobalOperation::EXISTENCE_CONSTRAINT_DROP, label, {property}, commit_timestamp);
commit_log_->MarkFinished(commit_timestamp);
last_commit_timestamp_ = commit_timestamp;
return true;
}
utils::BasicResult<ConstraintViolation, UniqueConstraints::CreationStatus> Storage::CreateUniqueConstraint(
LabelId label, const std::set<PropertyId> &properties, const std::optional<uint64_t> desired_commit_timestamp) {
std::unique_lock<utils::RWLock> storage_guard(main_lock_);
auto ret = constraints_.unique_constraints.CreateConstraint(label, properties, vertices_.access());
if (ret.HasError() || ret.GetValue() != UniqueConstraints::CreationStatus::SUCCESS) {
return ret;
if (!storage::DropExistenceConstraint(&constraints_, label, property)) {
return StorageExistenceConstraintDroppingError{ConstraintDefinitionError{}};
}
const auto commit_timestamp = CommitTimestamp(desired_commit_timestamp);
AppendToWal(durability::StorageGlobalOperation::UNIQUE_CONSTRAINT_CREATE, label, properties, commit_timestamp);
auto success = AppendToWalDataDefinition(durability::StorageGlobalOperation::EXISTENCE_CONSTRAINT_DROP, label,
{property}, commit_timestamp);
commit_log_->MarkFinished(commit_timestamp);
last_commit_timestamp_ = commit_timestamp;
return UniqueConstraints::CreationStatus::SUCCESS;
if (success) {
return {};
}
return StorageExistenceConstraintDroppingError{ReplicationError{}};
}
UniqueConstraints::DeletionStatus Storage::DropUniqueConstraint(
LabelId label, const std::set<PropertyId> &properties, const std::optional<uint64_t> desired_commit_timestamp) {
utils::BasicResult<StorageUniqueConstraintDefinitionError, UniqueConstraints::CreationStatus>
Storage::CreateUniqueConstraint(LabelId label, const std::set<PropertyId> &properties,
const std::optional<uint64_t> desired_commit_timestamp) {
std::unique_lock<utils::RWLock> storage_guard(main_lock_);
auto ret = constraints_.unique_constraints.CreateConstraint(label, properties, vertices_.access());
if (ret.HasError()) {
return StorageUniqueConstraintDefinitionError{ret.GetError()};
}
if (ret.GetValue() != UniqueConstraints::CreationStatus::SUCCESS) {
return ret.GetValue();
}
const auto commit_timestamp = CommitTimestamp(desired_commit_timestamp);
auto success = AppendToWalDataDefinition(durability::StorageGlobalOperation::UNIQUE_CONSTRAINT_CREATE, label,
properties, commit_timestamp);
commit_log_->MarkFinished(commit_timestamp);
last_commit_timestamp_ = commit_timestamp;
if (success) {
return UniqueConstraints::CreationStatus::SUCCESS;
}
return StorageUniqueConstraintDefinitionError{ReplicationError{}};
}
utils::BasicResult<StorageUniqueConstraintDroppingError, UniqueConstraints::DeletionStatus>
Storage::DropUniqueConstraint(LabelId label, const std::set<PropertyId> &properties,
const std::optional<uint64_t> desired_commit_timestamp) {
std::unique_lock<utils::RWLock> storage_guard(main_lock_);
auto ret = constraints_.unique_constraints.DropConstraint(label, properties);
if (ret != UniqueConstraints::DeletionStatus::SUCCESS) {
return ret;
}
const auto commit_timestamp = CommitTimestamp(desired_commit_timestamp);
AppendToWal(durability::StorageGlobalOperation::UNIQUE_CONSTRAINT_DROP, label, properties, commit_timestamp);
auto success = AppendToWalDataDefinition(durability::StorageGlobalOperation::UNIQUE_CONSTRAINT_DROP, label,
properties, commit_timestamp);
commit_log_->MarkFinished(commit_timestamp);
last_commit_timestamp_ = commit_timestamp;
return UniqueConstraints::DeletionStatus::SUCCESS;
if (success) {
return UniqueConstraints::DeletionStatus::SUCCESS;
}
return StorageUniqueConstraintDroppingError{ReplicationError{}};
}
ConstraintsInfo Storage::ListAllConstraints() const {
@@ -1605,8 +1685,10 @@ void Storage::FinalizeWalFile() {
}
}
void Storage::AppendToWal(const Transaction &transaction, uint64_t final_commit_timestamp) {
if (!InitializeWalFile()) return;
bool Storage::AppendToWalDataManipulation(const Transaction &transaction, uint64_t final_commit_timestamp) {
if (!InitializeWalFile()) {
return true;
}
// Traverse deltas and append them to the WAL file.
// A single transaction will always be contained in a single WAL file.
auto current_commit_timestamp = transaction.commit_timestamp->load(std::memory_order_acquire);
@@ -1775,17 +1857,28 @@ void Storage::AppendToWal(const Transaction &transaction, uint64_t final_commit_
FinalizeWalFile();
auto finalized_on_all_replicas = true;
replication_clients_.WithLock([&](auto &clients) {
for (auto &client : clients) {
client->IfStreamingTransaction([&](auto &stream) { stream.AppendTransactionEnd(final_commit_timestamp); });
client->FinalizeTransactionReplication();
const auto finalized = client->FinalizeTransactionReplication();
if (client->Mode() == replication::ReplicationMode::SYNC) {
finalized_on_all_replicas = finalized && finalized_on_all_replicas;
}
}
});
return finalized_on_all_replicas;
}
void Storage::AppendToWal(durability::StorageGlobalOperation operation, LabelId label,
const std::set<PropertyId> &properties, uint64_t final_commit_timestamp) {
if (!InitializeWalFile()) return;
bool Storage::AppendToWalDataDefinition(durability::StorageGlobalOperation operation, LabelId label,
const std::set<PropertyId> &properties, uint64_t final_commit_timestamp) {
if (!InitializeWalFile()) {
return true;
}
auto finalized_on_all_replicas = true;
wal_file_->AppendOperation(operation, label, properties, final_commit_timestamp);
{
if (replication_role_.load() == ReplicationRole::MAIN) {
@@ -1794,12 +1887,17 @@ void Storage::AppendToWal(durability::StorageGlobalOperation operation, LabelId
client->StartTransactionReplication(wal_file_->SequenceNumber());
client->IfStreamingTransaction(
[&](auto &stream) { stream.AppendOperation(operation, label, properties, final_commit_timestamp); });
client->FinalizeTransactionReplication();
const auto finalized = client->FinalizeTransactionReplication();
if (client->Mode() == replication::ReplicationMode::SYNC) {
finalized_on_all_replicas = finalized && finalized_on_all_replicas;
}
}
});
}
}
FinalizeWalFile();
return finalized_on_all_replicas;
}
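The rule both `AppendToWalDataManipulation` and `AppendToWalDataDefinition` now implement — only SYNC replicas can veto success, ASYNC replicas are fire-and-forget — can be isolated into a small self-contained sketch. The types below (`ReplicationMode`, `ReplicaAck`) are illustrative stand-ins, not Memgraph's actual replication client API:

```cpp
#include <cassert>
#include <vector>

// Stand-in for replication::ReplicationMode.
enum class ReplicationMode { SYNC, ASYNC };

// One entry per replica: its mode and whether finalization succeeded
// (the result of FinalizeTransactionReplication in the real code).
struct ReplicaAck {
  ReplicationMode mode;
  bool finalized;
};

// Mirrors the aggregation in the diff: start from true, AND in the
// result of every SYNC replica, ignore ASYNC replicas entirely.
bool FinalizedOnAllSyncReplicas(const std::vector<ReplicaAck> &acks) {
  bool finalized_on_all = true;
  for (const auto &ack : acks) {
    if (ack.mode == ReplicationMode::SYNC) {
      finalized_on_all = finalized_on_all && ack.finalized;
    }
  }
  return finalized_on_all;
}
```

Note that with no replicas (or only ASYNC ones) the function returns true, which matches the early `return true;` when `InitializeWalFile()` fails: absence of SYNC replicas is never treated as a replication error.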
utils::BasicResult<Storage::CreateSnapshotError> Storage::CreateSnapshot() {
@@ -48,6 +48,7 @@
#include "storage/v2/replication/enums.hpp"
#include "storage/v2/replication/rpc.hpp"
#include "storage/v2/replication/serialization.hpp"
#include "storage/v2/storage_error.hpp"
namespace memgraph::storage {
@@ -309,11 +310,14 @@ class Storage final {
void AdvanceCommand();
/// Commit returns `ConstraintViolation` if the changes made by this
/// transaction violate an existence or unique constraint. In that case the
/// transaction is automatically aborted. Otherwise, void is returned.
/// Returns void if the transaction has been committed.
/// Returns `StorageDataManipulationError` if an error occurs. The error can be:
/// * `ReplicationError`: there is at least one SYNC replica that has not confirmed receiving the transaction.
/// * `ConstraintViolation`: the changes made by this transaction violate an existence or unique constraint. In this
/// case the transaction is automatically aborted.
/// @throw std::bad_alloc
utils::BasicResult<ConstraintViolation, void> Commit(std::optional<uint64_t> desired_commit_timestamp = {});
utils::BasicResult<StorageDataManipulationError, void> Commit(
std::optional<uint64_t> desired_commit_timestamp = {});
/// @throw std::bad_alloc
void Abort();
@@ -352,54 +356,83 @@ class Storage final {
/// @throw std::bad_alloc if unable to insert a new mapping
EdgeTypeId NameToEdgeType(std::string_view name);
/// Create an index.
/// Returns void if the index has been created.
/// Returns `StorageIndexDefinitionError` if an error occurs. The error can be:
/// * `IndexDefinitionError`: the index already exists.
/// * `ReplicationError`: there is at least one SYNC replica that has not confirmed receiving the transaction.
/// @throw std::bad_alloc
bool CreateIndex(LabelId label, std::optional<uint64_t> desired_commit_timestamp = {});
utils::BasicResult<StorageIndexDefinitionError, void> CreateIndex(
LabelId label, std::optional<uint64_t> desired_commit_timestamp = {});
/// Create an index.
/// Returns void if the index has been created.
/// Returns `StorageIndexDefinitionError` if an error occurs. The error can be:
/// * `ReplicationError`: there is at least one SYNC replica that has not confirmed receiving the transaction.
/// * `IndexDefinitionError`: the index already exists.
/// @throw std::bad_alloc
bool CreateIndex(LabelId label, PropertyId property, std::optional<uint64_t> desired_commit_timestamp = {});
utils::BasicResult<StorageIndexDefinitionError, void> CreateIndex(
LabelId label, PropertyId property, std::optional<uint64_t> desired_commit_timestamp = {});
bool DropIndex(LabelId label, std::optional<uint64_t> desired_commit_timestamp = {});
/// Drop an existing index.
/// Returns void if the index has been dropped.
/// Returns `StorageIndexDefinitionError` if an error occurs. The error can be:
/// * `ReplicationError`: there is at least one SYNC replica that has not confirmed receiving the transaction.
/// * `IndexDefinitionError`: the index does not exist.
utils::BasicResult<StorageIndexDefinitionError, void> DropIndex(
LabelId label, std::optional<uint64_t> desired_commit_timestamp = {});
bool DropIndex(LabelId label, PropertyId property, std::optional<uint64_t> desired_commit_timestamp = {});
/// Drop an existing index.
/// Returns void if the index has been dropped.
/// Returns `StorageIndexDefinitionError` if an error occurs. The error can be:
/// * `ReplicationError`: there is at least one SYNC replica that has not confirmed receiving the transaction.
/// * `IndexDefinitionError`: the index does not exist.
utils::BasicResult<StorageIndexDefinitionError, void> DropIndex(
LabelId label, PropertyId property, std::optional<uint64_t> desired_commit_timestamp = {});
IndicesInfo ListAllIndices() const;
/// Creates an existence constraint. Returns true if the constraint was
/// successfuly added, false if it already exists and a `ConstraintViolation`
/// if there is an existing vertex violating the constraint.
///
/// Returns void if the existence constraint has been created.
/// Returns `StorageExistenceConstraintDefinitionError` if an error occurs. The error can be:
/// * `ReplicationError`: there is at least one SYNC replica that has not confirmed receiving the transaction.
/// * `ConstraintViolation`: an existing vertex would violate the new constraint.
/// * `ConstraintDefinitionError`: the constraint already exists.
/// @throw std::bad_alloc
/// @throw std::length_error
utils::BasicResult<ConstraintViolation, bool> CreateExistenceConstraint(
utils::BasicResult<StorageExistenceConstraintDefinitionError, void> CreateExistenceConstraint(
LabelId label, PropertyId property, std::optional<uint64_t> desired_commit_timestamp = {});
/// Removes an existence constraint. Returns true if the constraint was
/// removed, and false if it doesn't exist.
bool DropExistenceConstraint(LabelId label, PropertyId property,
std::optional<uint64_t> desired_commit_timestamp = {});
/// Drop an existing existence constraint.
/// Returns void if the existence constraint has been dropped.
/// Returns `StorageExistenceConstraintDroppingError` if an error occurs. The error can be:
/// * `ReplicationError`: there is at least one SYNC replica that has not confirmed receiving the transaction.
/// * `ConstraintDefinitionError`: the constraint does not exist.
utils::BasicResult<StorageExistenceConstraintDroppingError, void> DropExistenceConstraint(
LabelId label, PropertyId property, std::optional<uint64_t> desired_commit_timestamp = {});
/// Creates a unique constraint. In the case of two vertices violating the
/// constraint, it returns `ConstraintViolation`. Otherwise returns a
/// `UniqueConstraints::CreationStatus` enum with the following possibilities:
/// * `SUCCESS` if the constraint was successfully created,
/// * `ALREADY_EXISTS` if the constraint already existed,
/// * `EMPTY_PROPERTIES` if the property set is empty, or
// * `PROPERTIES_SIZE_LIMIT_EXCEEDED` if the property set exceeds the
// limit of maximum number of properties.
///
/// Create a unique constraint.
/// Returns `StorageUniqueConstraintDefinitionError` if an error occurs. The error can be:
/// * `ReplicationError`: there is at least one SYNC replica that has not confirmed receiving the transaction.
/// * `ConstraintViolation`: there are already vertices violating the constraint.
/// Returns `UniqueConstraints::CreationStatus` otherwise. Value can be:
/// * `SUCCESS` if the constraint was successfully created,
/// * `ALREADY_EXISTS` if the constraint already existed,
/// * `EMPTY_PROPERTIES` if the property set is empty, or
/// * `PROPERTIES_SIZE_LIMIT_EXCEEDED` if the property set exceeds the limit of maximum number of properties.
/// @throw std::bad_alloc
utils::BasicResult<ConstraintViolation, UniqueConstraints::CreationStatus> CreateUniqueConstraint(
utils::BasicResult<StorageUniqueConstraintDefinitionError, UniqueConstraints::CreationStatus> CreateUniqueConstraint(
LabelId label, const std::set<PropertyId> &properties, std::optional<uint64_t> desired_commit_timestamp = {});
/// Removes a unique constraint. Returns `UniqueConstraints::DeletionStatus`
/// enum with the following possibilities:
/// * `SUCCESS` if constraint was successfully removed,
/// * `NOT_FOUND` if the specified constraint was not found,
/// * `EMPTY_PROPERTIES` if the property set is empty, or
/// * `PROPERTIES_SIZE_LIMIT_EXCEEDED` if the property set exceeds the
// limit of maximum number of properties.
UniqueConstraints::DeletionStatus DropUniqueConstraint(LabelId label, const std::set<PropertyId> &properties,
std::optional<uint64_t> desired_commit_timestamp = {});
/// Removes an existing unique constraint.
/// Returns `StorageUniqueConstraintDroppingError` if an error occurs. The error can be:
/// * `ReplicationError`: there is at least one SYNC replica that has not confirmed receiving the transaction.
/// Returns `UniqueConstraints::DeletionStatus` otherwise. Value can be:
/// * `SUCCESS` if constraint was successfully removed,
/// * `NOT_FOUND` if the specified constraint was not found,
/// * `EMPTY_PROPERTIES` if the property set is empty, or
/// * `PROPERTIES_SIZE_LIMIT_EXCEEDED` if the property set exceeds the limit of maximum number of properties.
utils::BasicResult<StorageUniqueConstraintDroppingError, UniqueConstraints::DeletionStatus> DropUniqueConstraint(
LabelId label, const std::set<PropertyId> &properties, std::optional<uint64_t> desired_commit_timestamp = {});
ConstraintsInfo ListAllConstraints() const;
@@ -474,9 +507,11 @@ class Storage final {
bool InitializeWalFile();
void FinalizeWalFile();
void AppendToWal(const Transaction &transaction, uint64_t final_commit_timestamp);
void AppendToWal(durability::StorageGlobalOperation operation, LabelId label, const std::set<PropertyId> &properties,
uint64_t final_commit_timestamp);
/// Returns true in all cases except when a SYNC replica has not sent confirmation.
[[nodiscard]] bool AppendToWalDataManipulation(const Transaction &transaction, uint64_t final_commit_timestamp);
/// Returns true in all cases except when a SYNC replica has not sent confirmation.
[[nodiscard]] bool AppendToWalDataDefinition(durability::StorageGlobalOperation operation, LabelId label,
const std::set<PropertyId> &properties, uint64_t final_commit_timestamp);
uint64_t CommitTimestamp(std::optional<uint64_t> desired_commit_timestamp = {});
@@ -0,0 +1,38 @@
// Copyright 2022 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#pragma once
#include "storage/v2/constraints.hpp"
#include <variant>
namespace memgraph::storage {
struct ReplicationError {};
using StorageDataManipulationError = std::variant<ConstraintViolation, ReplicationError>;
struct IndexDefinitionError {};
using StorageIndexDefinitionError = std::variant<IndexDefinitionError, ReplicationError>;
struct ConstraintDefinitionError {};
using StorageExistenceConstraintDefinitionError =
std::variant<ConstraintViolation, ConstraintDefinitionError, ReplicationError>;
using StorageExistenceConstraintDroppingError = std::variant<ConstraintDefinitionError, ReplicationError>;
using StorageUniqueConstraintDefinitionError = std::variant<ConstraintViolation, ReplicationError>;
using StorageUniqueConstraintDroppingError = std::variant<ReplicationError>;
} // namespace memgraph::storage
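The error aliases above are plain `std::variant` instantiations, so a caller can dispatch on the concrete failure with `std::visit`. A minimal self-contained sketch of that pattern follows; it uses stand-in structs and `std::optional` in place of `utils::BasicResult` (which is not reproduced here), so names and shapes are illustrative only:

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <type_traits>
#include <variant>

// Stand-ins for the structs defined in storage_error.hpp.
struct ReplicationError {};
struct IndexDefinitionError {};
using StorageIndexDefinitionError = std::variant<IndexDefinitionError, ReplicationError>;

// An empty optional plays the role of BasicResult's success ("void") state.
// A caller of Storage::CreateIndex could map each alternative to a message:
std::string DescribeIndexResult(const std::optional<StorageIndexDefinitionError> &result) {
  if (!result) return "index created";
  return std::visit(
      [](const auto &err) -> std::string {
        using T = std::decay_t<decltype(err)>;
        if constexpr (std::is_same_v<T, IndexDefinitionError>) {
          return "index already exists";
        } else {
          return "at least one SYNC replica did not confirm";
        }
      },
      *result);
}
```

Because the variants share `ReplicationError` across all five aliases, a generic visitor can translate replication failures uniformly regardless of which DDL operation produced them.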
@@ -45,7 +45,7 @@ class ExpansionBenchFixture : public benchmark::Fixture {
MG_ASSERT(!dba.Commit().HasError());
}
MG_ASSERT(db->CreateIndex(label));
MG_ASSERT(!db->CreateIndex(label).HasError());
interpreter_context.emplace(&*db, memgraph::query::InterpreterConfig{}, data_directory);
interpreter.emplace(&*interpreter_context);
@@ -83,7 +83,7 @@ static void AddStarGraph(memgraph::storage::Storage *db, int spoke_count, int de
}
MG_ASSERT(!dba.Commit().HasError());
}
MG_ASSERT(db->CreateIndex(db->NameToLabel(kStartLabel)));
MG_ASSERT(!db->CreateIndex(db->NameToLabel(kStartLabel)).HasError());
}
static void AddTree(memgraph::storage::Storage *db, int vertex_count) {
@@ -105,7 +105,7 @@ static void AddTree(memgraph::storage::Storage *db, int vertex_count) {
}
MG_ASSERT(!dba.Commit().HasError());
}
MG_ASSERT(db->CreateIndex(db->NameToLabel(kStartLabel)));
MG_ASSERT(!db->CreateIndex(db->NameToLabel(kStartLabel)).HasError());
}
static memgraph::query::CypherQuery *ParseCypherQuery(const std::string &query_string,
@@ -16,6 +16,7 @@
#include <gtest/gtest.h>
#include "storage/v2/storage.hpp"
#include "storage/v2/storage_error.hpp"
#include "utils/thread.hpp"
const uint64_t kNumVerifiers = 5;
@@ -29,7 +30,7 @@ TEST(Storage, LabelIndex) {
auto store = memgraph::storage::Storage();
auto label = store.NameToLabel("label");
ASSERT_TRUE(store.CreateIndex(label));
ASSERT_FALSE(store.CreateIndex(label).HasError());
std::vector<std::thread> verifiers;
verifiers.reserve(kNumVerifiers);
@@ -111,7 +112,7 @@ TEST(Storage, LabelPropertyIndex) {
auto label = store.NameToLabel("label");
auto prop = store.NameToProperty("prop");
ASSERT_TRUE(store.CreateIndex(label, prop));
ASSERT_FALSE(store.CreateIndex(label, prop).HasError());
std::vector<std::thread> verifiers;
verifiers.reserve(kNumVerifiers);
@@ -38,6 +38,7 @@ add_subdirectory(isolation_levels)
add_subdirectory(streams)
add_subdirectory(temporal_types)
add_subdirectory(write_procedures)
add_subdirectory(configuration)
add_subdirectory(magic_functions)
add_subdirectory(module_file_manager)
add_subdirectory(monitoring_server)
@@ -0,0 +1,6 @@
function(copy_configuration_check_e2e_python_files FILE_NAME)
copy_e2e_python_files(write_procedures ${FILE_NAME})
endfunction()
copy_configuration_check_e2e_python_files(default_config.py)
copy_configuration_check_e2e_python_files(configuration_check.py)
@@ -0,0 +1,46 @@
# Copyright 2022 Memgraph Ltd.
#
# Use of this software is governed by the Business Source License
# included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
# License, and you may not use this file except in compliance with the Business Source License.
#
# As of the Change Date specified in that file, in accordance with
# the Business Source License, use of this software will be governed
# by the Apache License, Version 2.0, included in the file
# licenses/APL.txt.
import sys
import mgclient
import pytest
import default_config
def test_does_default_config_match():
connection = mgclient.connect(host="localhost", port=7687)
connection.autocommit = True
cursor = connection.cursor()
cursor.execute("SHOW CONFIG")
config = cursor.fetchall()
assert len(config) == len(default_config.startup_config_dict)
for flag in config:
flag_name = flag[0]
# The default value of these is dependent on the given machine.
machine_dependent_configurations = ["bolt_num_workers", "data_directory", "log_file"]
if flag_name in machine_dependent_configurations:
continue
# default_value
assert default_config.startup_config_dict[flag_name][0] == flag[1]
# current_value
assert default_config.startup_config_dict[flag_name][1] == flag[2]
# description
assert default_config.startup_config_dict[flag_name][2] == flag[3]
if __name__ == "__main__":
sys.exit(pytest.main([__file__, "-rA"]))
@@ -0,0 +1,166 @@
# Copyright 2022 Memgraph Ltd.
#
# Use of this software is governed by the Business Source License
# included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
# License, and you may not use this file except in compliance with the Business Source License.
#
# As of the Change Date specified in that file, in accordance with
# the Business Source License, use of this software will be governed
# by the Apache License, Version 2.0, included in the file
# licenses/APL.txt.
# To check that the SHOW CONFIG command works correctly, a couple of configuration flags have been passed to the testing instance. These are:
# "--log-level=TRACE", "--storage-properties-on-edges=True", "--storage-snapshot-interval-sec", "300", "--storage-wal-enabled=True"
# If you wish to modify these, update the startup_config_dict and workloads.yaml!
startup_config_dict = {
"auth_module_create_missing_role": ("true", "true", "Set to false to disable creation of missing roles."),
"auth_module_create_missing_user": ("true", "true", "Set to false to disable creation of missing users."),
"auth_module_executable": ("", "", "Absolute path to the auth module executable that should be used."),
"auth_module_manage_roles": (
"true",
"true",
"Set to false to disable management of roles through the auth module.",
),
"auth_module_timeout_ms": (
"10000",
"10000",
"Timeout (in milliseconds) used when waiting for a response from the auth module.",
),
"auth_password_permit_null": ("true", "true", "Set to false to disable null passwords."),
"auth_password_strength_regex": (
".+",
".+",
"The regular expression that should be used to match the entire entered password to ensure its strength.",
),
"allow_load_csv": ("true", "true", "Controls whether LOAD CSV clause is allowed in queries."),
"audit_buffer_flush_interval_ms": (
"200",
"200",
"Interval (in milliseconds) used for flushing the audit log buffer.",
),
"audit_buffer_size": ("100000", "100000", "Maximum number of items in the audit log buffer."),
"audit_enabled": ("false", "false", "Set to true to enable audit logging."),
"auth_user_or_role_name_regex": (
"[a-zA-Z0-9_.+-@]+",
"[a-zA-Z0-9_.+-@]+",
"Set to the regular expression that each user or role name must fulfill.",
),
"bolt_address": ("0.0.0.0", "0.0.0.0", "IP address on which the Bolt server should listen."),
"bolt_cert_file": ("", "", "Certificate file which should be used for the Bolt server."),
"bolt_key_file": ("", "", "Key file which should be used for the Bolt server."),
"bolt_num_workers": (
"12",
"12",
"Number of workers used by the Bolt server. By default, this will be the number of processing units available on the machine.",
),
"bolt_port": ("7687", "7687", "Port on which the Bolt server should listen."),
"bolt_server_name_for_init": (
"",
"",
"Server name which the database should send to the client in the Bolt INIT message.",
),
"bolt_session_inactivity_timeout": (
"1800",
"1800",
"Time in seconds after which inactive Bolt sessions will be closed.",
),
"data_directory": ("mg_data", "mg_data", "Path to directory in which to save all permanent data."),
"isolation_level": (
"SNAPSHOT_ISOLATION",
"SNAPSHOT_ISOLATION",
"Default isolation level used for the transactions. Allowed values: SNAPSHOT_ISOLATION, READ_COMMITTED, READ_UNCOMMITTED",
),
"kafka_bootstrap_servers": (
"",
"",
"List of default Kafka brokers as a comma separated list of broker host or host:port.",
),
"log_file": ("", "", "Path to where the log should be stored."),
"log_level": (
"WARNING",
"TRACE",
"Minimum log level. Allowed values: TRACE, DEBUG, INFO, WARNING, ERROR, CRITICAL",
),
"memory_limit": (
"0",
"0",
"Total memory limit in MiB. Set to 0 to use the default values which are 100% of the phyisical memory if the swap is enabled and 90% of the physical memory otherwise.",
),
"memory_warning_threshold": (
"1024",
"1024",
"Memory warning threshold, in MB. If Memgraph detects there is less available RAM it will log a warning. Set to 0 to disable.",
),
"monitoring_address": (
"0.0.0.0",
"0.0.0.0",
"IP address on which the websocket server for Memgraph monitoring should listen.",
),
"monitoring_port": ("7444", "7444", "Port on which the websocket server for Memgraph monitoring should listen."),
"pulsar_service_url": ("", "", "Default URL used while connecting to Pulsar brokers."),
"query_execution_timeout_sec": (
"600",
"600",
"Maximum allowed query execution time. Queries exceeding this limit will be aborted. Value of 0 means no limit.",
),
"query_modules_directory": (
"",
"",
"Directory where modules with custom query procedures are stored. NOTE: Multiple comma-separated directories can be defined.",
),
"replication_replica_check_frequency_sec": (
"1",
"1",
"The time duration between two replica checks/pings. If < 1, replicas will NOT be checked at all. NOTE: The MAIN instance allocates a new thread for each REPLICA.",
),
"storage_gc_cycle_sec": ("30", "30", "Storage garbage collector interval (in seconds)."),
"storage_properties_on_edges": ("false", "true", "Controls whether edges have properties."),
"storage_recover_on_startup": (
"false",
"false",
"Controls whether the storage recovers persisted data on startup.",
),
"storage_snapshot_interval_sec": (
"0",
"300",
"Storage snapshot creation interval (in seconds). Set to 0 to disable periodic snapshot creation.",
),
"storage_snapshot_on_exit": ("false", "false", "Controls whether the storage creates another snapshot on exit."),
"storage_snapshot_retention_count": ("3", "3", "The number of snapshots that should always be kept."),
"storage_wal_enabled": (
"false",
"true",
"Controls whether the storage uses write-ahead-logging. To enable WAL periodic snapshots must be enabled.",
),
"storage_wal_file_flush_every_n_tx": (
"100000",
"100000",
"Issue a 'fsync' call after this amount of transactions are written to the WAL file. Set to 1 for fully synchronous operation.",
),
"storage_wal_file_size_kib": ("20480", "20480", "Minimum file size of each WAL file."),
"stream_transaction_conflict_retries": (
"30",
"30",
"Number of times to retry when a stream transformation fails to commit because of conflicting transactions",
),
"stream_transaction_retry_interval": (
"500",
"500",
"Retry interval in milliseconds when a stream transformation fails to commit because of conflicting transactions",
),
"telemetry_enabled": (
"false",
"false",
"Set to true to enable telemetry. We collect information about the running system (CPU and memory information) and information about the database runtime (vertex and edge counts and resource usage) to allow for easier improvement of the product.",
),
"query_cost_planner": ("true", "true", "Use the cost-estimating query planner."),
"query_plan_cache_ttl": ("60", "60", "Time to live for cached query plans, in seconds."),
"query_vertex_count_to_expand_existing": (
"10",
"10",
"Maximum count of indexed vertices which provoke indexed lookup and then expand to existing, instead of a regular expand. Default is 10, to turn off use -1.",
),
"query_max_plans": ("1000", "1000", "Maximum number of generated plans for a query."),
"flag_file": ("", "", "load flags from file"),
}
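Each entry in the map above pairs a flag name with a `(default, expected, description)` tuple. As a rough sketch (assuming exactly this tuple shape; the two sample flags below are copied from the map, not read from a live instance), a small helper can list which flags the test configuration overrides:

```python
# Hypothetical helper over a flag table shaped like the dict above:
# name -> (default_value, expected_value, description).
FLAGS = {
    "storage_gc_cycle_sec": ("30", "30", "Storage garbage collector interval (in seconds)."),
    "storage_properties_on_edges": ("false", "true", "Controls whether edges have properties."),
}


def changed_flags(flags):
    """Return the sorted names of flags whose expected value differs from the default."""
    return sorted(
        name
        for name, (default, expected, _description) in flags.items()
        if default != expected
    )


print(changed_flags(FLAGS))
```

For the two sample flags, only `storage_properties_on_edges` is reported as overridden.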


@@ -0,0 +1,13 @@
template_cluster: &template_cluster
cluster:
main:
args: ["--log-level=TRACE", "--storage-properties-on-edges=True", "--storage-snapshot-interval-sec", "300", "--storage-wal-enabled=True"]
log_file: "configuration-check-e2e.log"
setup_queries: []
validation_queries: []
workloads:
- name: "Configuration check"
binary: "tests/e2e/pytest_runner.sh"
args: ["configuration/configuration_check.py"]
<<: *template_cluster


@@ -22,13 +22,13 @@ static void ReturnFunctionArgument(struct mgp_list *args, mgp_func_context *ctx,
mgp_value *value{nullptr};
auto err_code = mgp_list_at(args, 0, &value);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
mgp_func_result_set_error_msg(result, "Failed to fetch list!", memory);
static_cast<void>(mgp_func_result_set_error_msg(result, "Failed to fetch list!", memory));
return;
}
err_code = mgp_func_result_set_value(result, value, memory);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
mgp_func_result_set_error_msg(result, "Failed to construct return value!", memory);
static_cast<void>(mgp_func_result_set_error_msg(result, "Failed to construct return value!", memory));
return;
}
}
@@ -38,13 +38,13 @@ static void ReturnOptionalArgument(struct mgp_list *args, mgp_func_context *ctx,
mgp_value *value{nullptr};
auto err_code = mgp_list_at(args, 0, &value);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
mgp_func_result_set_error_msg(result, "Failed to fetch list!", memory);
static_cast<void>(mgp_func_result_set_error_msg(result, "Failed to fetch list!", memory));
return;
}
err_code = mgp_func_result_set_value(result, value, memory);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
mgp_func_result_set_error_msg(result, "Failed to construct return value!", memory);
static_cast<void>(mgp_func_result_set_error_msg(result, "Failed to construct return value!", memory));
return;
}
}
@@ -57,14 +57,14 @@ double GetElementFromArg(struct mgp_list *args, int index) {
double result;
int is_int;
mgp_value_is_int(value, &is_int);
static_cast<void>(mgp_value_is_int(value, &is_int));
if (is_int) {
int64_t result_int;
mgp_value_get_int(value, &result_int);
static_cast<void>(mgp_value_get_int(value, &result_int));
result = static_cast<double>(result_int);
} else {
mgp_value_get_double(value, &result);
static_cast<void>(mgp_value_get_double(value, &result));
}
return result;
}
@@ -77,30 +77,30 @@ static void AddTwoNumbers(struct mgp_list *args, mgp_func_context *ctx, mgp_func
first = GetElementFromArg(args, 0);
second = GetElementFromArg(args, 1);
} catch (...) {
mgp_func_result_set_error_msg(result, "Unable to fetch the result!", memory);
static_cast<void>(mgp_func_result_set_error_msg(result, "Unable to fetch the result!", memory));
return;
}
mgp_value *value{nullptr};
auto summation = first + second;
mgp_value_make_double(summation, memory, &value);
static_cast<void>(mgp_value_make_double(summation, memory, &value));
memgraph::utils::OnScopeExit delete_summation_value([&value] { mgp_value_destroy(value); });
auto err_code = mgp_func_result_set_value(result, value, memory);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
mgp_func_result_set_error_msg(result, "Failed to construct return value!", memory);
static_cast<void>(mgp_func_result_set_error_msg(result, "Failed to construct return value!", memory));
}
}
static void ReturnNull(struct mgp_list *args, mgp_func_context *ctx, mgp_func_result *result,
struct mgp_memory *memory) {
mgp_value *value{nullptr};
mgp_value_make_null(memory, &value);
static_cast<void>(mgp_value_make_null(memory, &value));
memgraph::utils::OnScopeExit delete_null([&value] { mgp_value_destroy(value); });
auto err_code = mgp_func_result_set_value(result, value, memory);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
mgp_func_result_set_error_msg(result, "Failed to fetch list!", memory);
static_cast<void>(mgp_func_result_set_error_msg(result, "Failed to fetch list!", memory));
}
}
} // namespace
@@ -116,7 +116,7 @@ extern "C" int mgp_init_module(struct mgp_module *module, struct mgp_memory *mem
}
mgp_type *type_any{nullptr};
mgp_type_any(&type_any);
static_cast<void>(mgp_type_any(&type_any));
err_code = mgp_func_add_arg(func, "argument", type_any);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
return 1;
@@ -131,11 +131,11 @@ extern "C" int mgp_init_module(struct mgp_module *module, struct mgp_memory *mem
}
mgp_value *default_value{nullptr};
mgp_value_make_int(42, memory, &default_value);
static_cast<void>(mgp_value_make_int(42, memory, &default_value));
memgraph::utils::OnScopeExit delete_summation_value([&default_value] { mgp_value_destroy(default_value); });
mgp_type *type_int{nullptr};
mgp_type_int(&type_int);
static_cast<void>(mgp_type_int(&type_int));
err_code = mgp_func_add_opt_arg(func, "opt_argument", type_int, default_value);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
return 1;
@@ -150,7 +150,7 @@ extern "C" int mgp_init_module(struct mgp_module *module, struct mgp_memory *mem
}
mgp_type *type_number{nullptr};
mgp_type_number(&type_number);
static_cast<void>(mgp_type_number(&type_number));
err_code = mgp_func_add_arg(func, "first", type_number);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
return 1;


@@ -15,25 +15,25 @@ static void TryToWrite(struct mgp_list *args, mgp_func_context *ctx, mgp_func_re
struct mgp_memory *memory) {
mgp_value *value{nullptr};
mgp_vertex *vertex{nullptr};
mgp_list_at(args, 0, &value);
mgp_value_get_vertex(value, &vertex);
static_cast<void>(mgp_list_at(args, 0, &value));
static_cast<void>(mgp_value_get_vertex(value, &vertex));
const char *name;
mgp_list_at(args, 1, &value);
mgp_value_get_string(value, &name);
static_cast<void>(mgp_list_at(args, 1, &value));
static_cast<void>(mgp_value_get_string(value, &name));
mgp_list_at(args, 2, &value);
static_cast<void>(mgp_list_at(args, 2, &value));
// Setting a property should set an error
auto err_code = mgp_vertex_set_property(vertex, name, value);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
mgp_func_result_set_error_msg(result, "Cannot set property in the function!", memory);
static_cast<void>(mgp_func_result_set_error_msg(result, "Cannot set property in the function!", memory));
return;
}
err_code = mgp_func_result_set_value(result, value, memory);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
mgp_func_result_set_error_msg(result, "Failed to construct return value!", memory);
static_cast<void>(mgp_func_result_set_error_msg(result, "Failed to construct return value!", memory));
return;
}
}
@@ -49,23 +49,23 @@ extern "C" int mgp_init_module(struct mgp_module *module, struct mgp_memory *mem
}
mgp_type *type_vertex{nullptr};
mgp_type_node(&type_vertex);
static_cast<void>(mgp_type_node(&type_vertex));
err_code = mgp_func_add_arg(func, "argument", type_vertex);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
return 1;
}
mgp_type *type_string{nullptr};
mgp_type_string(&type_string);
static_cast<void>(mgp_type_string(&type_string));
err_code = mgp_func_add_arg(func, "name", type_string);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
return 1;
}
mgp_type *any_type{nullptr};
mgp_type_any(&any_type);
static_cast<void>(mgp_type_any(&any_type));
mgp_type *nullable_type{nullptr};
mgp_type_nullable(any_type, &nullable_type);
static_cast<void>(mgp_type_nullable(any_type, &nullable_type));
err_code = mgp_func_add_arg(func, "value", nullable_type);
if (err_code != mgp_error::MGP_ERROR_NO_ERROR) {
return 1;


@@ -1,10 +1,21 @@
// Copyright 2022 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include "mg_procedure.h"
int *gVal = NULL;
void set_error(struct mgp_result *result) { mgp_result_set_error_msg(result, "Something went wrong"); }
static void procedure(const struct mgp_list *args, const struct mgp_graph *graph, struct mgp_result *result,
static void procedure(struct mgp_list *args, struct mgp_graph *graph, struct mgp_result *result,
struct mgp_memory *memory) {
struct mgp_result_record *record = NULL;
const enum mgp_error new_record_err = mgp_result_new_record(result, &record);
@@ -21,14 +32,14 @@ static void procedure(const struct mgp_list *args, const struct mgp_graph *graph
int mgp_init_module(struct mgp_module *module, struct mgp_memory *memory) {
const size_t one_gb = 1 << 30;
const enum mgp_error alloc_err = mgp_global_alloc(one_gb, &gVal);
const enum mgp_error alloc_err = mgp_global_alloc(one_gb, (void **)(&gVal));
if (alloc_err != MGP_ERROR_NO_ERROR) return 1;
struct mgp_proc *proc = NULL;
const enum mgp_error proc_err = mgp_module_add_read_procedure(module, "procedure", procedure, &proc);
if (proc_err != MGP_ERROR_NO_ERROR) return 1;
const struct mgp_type *string_type = NULL;
struct mgp_type *string_type = NULL;
const enum mgp_error string_type_err = mgp_type_string(&string_type);
if (string_type_err != MGP_ERROR_NO_ERROR) return 1;
if (mgp_proc_add_result(proc, "result", string_type) != MGP_ERROR_NO_ERROR) return 1;


@@ -1,3 +1,14 @@
// Copyright 2022 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include "mg_procedure.h"
int *gVal = NULL;
@@ -6,7 +17,7 @@ void set_error(struct mgp_result *result) { mgp_result_set_error_msg(result, "So
void set_out_of_memory_error(struct mgp_result *result) { mgp_result_set_error_msg(result, "Out of memory"); }
static void error(const struct mgp_list *args, const struct mgp_graph *graph, struct mgp_result *result,
static void error(struct mgp_list *args, struct mgp_graph *graph, struct mgp_result *result,
struct mgp_memory *memory) {
const size_t one_gb = 1 << 30;
if (gVal) {
@@ -14,7 +25,7 @@ static void error(const struct mgp_list *args, const struct mgp_graph *graph, st
gVal = NULL;
}
if (!gVal) {
const enum mgp_error err = mgp_global_alloc(one_gb, &gVal);
const enum mgp_error err = mgp_global_alloc(one_gb, (void **)(&gVal));
if (err == MGP_ERROR_UNABLE_TO_ALLOCATE) return set_out_of_memory_error(result);
if (err != MGP_ERROR_NO_ERROR) return set_error(result);
}
@@ -29,11 +40,11 @@ static void error(const struct mgp_list *args, const struct mgp_graph *graph, st
if (result_inserted != MGP_ERROR_NO_ERROR) return set_error(result);
}
static void success(const struct mgp_list *args, const struct mgp_graph *graph, struct mgp_result *result,
static void success(struct mgp_list *args, struct mgp_graph *graph, struct mgp_result *result,
struct mgp_memory *memory) {
const size_t bytes = 1024;
if (!gVal) {
const enum mgp_error err = mgp_global_alloc(bytes, &gVal);
const enum mgp_error err = mgp_global_alloc(bytes, (void **)(&gVal));
if (err == MGP_ERROR_UNABLE_TO_ALLOCATE) return set_out_of_memory_error(result);
if (err != MGP_ERROR_NO_ERROR) return set_error(result);
}


@@ -8,7 +8,9 @@ def mg_sleep_and_assert(expected_value, function_to_retrieve_data, max_duration=
current_time = time.time()
duration = current_time - start_time
if duration > max_duration:
assert False, " mg_sleep_and_assert has tried for too long and did not get the expected result!"
assert (
False
), f" mg_sleep_and_assert has tried for too long and did not get the expected result! Last result was: {result}"
time.sleep(time_between_attempt)
result = function_to_retrieve_data()
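The hunk above extends the retry helper so its failure message reports the last value it observed. A self-contained sketch of the same polling helper (a hypothetical standalone version; the real one lives in the project's test utilities):

```python
import time


def mg_sleep_and_assert(expected_value, function_to_retrieve_data,
                        max_duration=20, time_between_attempt=0.2):
    """Poll until the fetched value matches the expectation, failing with the
    last result seen if max_duration (seconds) is exceeded."""
    start_time = time.time()
    result = function_to_retrieve_data()
    while result != expected_value:
        if time.time() - start_time > max_duration:
            assert False, (
                "mg_sleep_and_assert has tried for too long and did not get "
                f"the expected result! Last result was: {result}"
            )
        time.sleep(time_between_attempt)
        result = function_to_retrieve_data()
    return result
```

Including the last result in the assertion message makes a timed-out e2e run diagnosable without re-running it under a debugger.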

File diff suppressed because it is too large.


@@ -20,3 +20,18 @@ def underlying_graph_is_mutable(ctx: mgp.ProcCtx, object: mgp.Any) -> mgp.Record
@mgp.read_proc
def graph_is_mutable(ctx: mgp.ProcCtx) -> mgp.Record(mutable=bool):
return mgp.Record(mutable=ctx.graph.is_mutable())
@mgp.read_proc
def log_message(ctx: mgp.ProcCtx, message: str) -> mgp.Record(success=bool):
logger = mgp.Logger()
try:
logger.info(message)
logger.critical(message)
logger.trace(message)
logger.debug(message)
logger.warning(message)
logger.error(message)
except RuntimeError:
return mgp.Record(success=False)
return mgp.Record(success=True)


@@ -13,8 +13,7 @@ import typing
import mgclient
import sys
import pytest
from common import (execute_and_fetch_all,
has_one_result_row, has_n_result_row)
from common import execute_and_fetch_all, has_one_result_row, has_n_result_row
def test_is_write(connection):
@@ -22,15 +21,19 @@ def test_is_write(connection):
result_order = "name, signature, is_write"
cursor = connection.cursor()
for proc in execute_and_fetch_all(
cursor, "CALL mg.procedures() YIELD * WITH name, signature, "
"is_write WHERE name STARTS WITH 'write' "
f"RETURN {result_order}"):
cursor,
"CALL mg.procedures() YIELD * WITH name, signature, "
"is_write WHERE name STARTS WITH 'write' "
f"RETURN {result_order}",
):
assert proc[is_write] is True
for proc in execute_and_fetch_all(
cursor, "CALL mg.procedures() YIELD * WITH name, signature, "
"is_write WHERE NOT name STARTS WITH 'write' "
f"RETURN {result_order}"):
cursor,
"CALL mg.procedures() YIELD * WITH name, signature, "
"is_write WHERE NOT name STARTS WITH 'write' "
f"RETURN {result_order}",
):
assert proc[is_write] is False
assert cursor.description[0].name == "name"
@@ -41,8 +44,7 @@ def test_is_write(connection):
def test_single_vertex(connection):
cursor = connection.cursor()
assert has_n_result_row(cursor, "MATCH (n) RETURN n", 0)
result = execute_and_fetch_all(
cursor, "CALL write.create_vertex() YIELD v RETURN v")
result = execute_and_fetch_all(cursor, "CALL write.create_vertex() YIELD v RETURN v")
vertex = result[0][0]
assert isinstance(vertex, mgclient.Node)
assert has_one_result_row(cursor, "MATCH (n) RETURN n")
@@ -50,14 +52,10 @@ def test_single_vertex(connection):
assert vertex.properties == {}
def add_label(label: str):
execute_and_fetch_all(
cursor, f"MATCH (n) CALL write.add_label(n, '{label}') "
"YIELD * RETURN *")
execute_and_fetch_all(cursor, f"MATCH (n) CALL write.add_label(n, '{label}') " "YIELD * RETURN *")
def remove_label(label: str):
execute_and_fetch_all(
cursor, f"MATCH (n) CALL write.remove_label(n, '{label}') "
"YIELD * RETURN *")
execute_and_fetch_all(cursor, f"MATCH (n) CALL write.remove_label(n, '{label}') " "YIELD * RETURN *")
def get_vertex() -> mgclient.Node:
return execute_and_fetch_all(cursor, "MATCH (n) RETURN n")[0][0]
@@ -65,8 +63,10 @@ def test_single_vertex(connection):
def set_property(property_name: str, property: typing.Any):
nonlocal cursor
execute_and_fetch_all(
cursor, f"MATCH (n) CALL write.set_property(n, '{property_name}', "
"$property) YIELD * RETURN *", {"property": property})
cursor,
f"MATCH (n) CALL write.set_property(n, '{property_name}', " "$property) YIELD * RETURN *",
{"property": property},
)
label_1 = "LABEL1"
label_2 = "LABEL2"
@@ -89,24 +89,23 @@ def test_single_vertex(connection):
set_property(property_name, None)
assert get_vertex().properties == {}
execute_and_fetch_all(
cursor, "MATCH (n) CALL write.delete_vertex(n) YIELD * RETURN 1")
execute_and_fetch_all(cursor, "MATCH (n) CALL write.delete_vertex(n) YIELD * RETURN 1")
assert has_n_result_row(cursor, "MATCH (n) RETURN n", 0)
def test_single_edge(connection):
cursor = connection.cursor()
assert has_n_result_row(cursor, "MATCH (n) RETURN n", 0)
v1_id = execute_and_fetch_all(
cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
v2_id = execute_and_fetch_all(
cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
v1_id = execute_and_fetch_all(cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
v2_id = execute_and_fetch_all(cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
edge_type = "EDGE"
edge = execute_and_fetch_all(
cursor, f"MATCH (n) WHERE id(n) = {v1_id} "
f"MATCH (m) WHERE id(m) = {v2_id} "
f"CALL write.create_edge(n, m, '{edge_type}') "
"YIELD e RETURN e")[0][0]
cursor,
f"MATCH (n) WHERE id(n) = {v1_id} "
f"MATCH (m) WHERE id(m) = {v2_id} "
f"CALL write.create_edge(n, m, '{edge_type}') "
"YIELD e RETURN e",
)[0][0]
assert edge.type == edge_type
assert edge.properties == {}
@@ -120,9 +119,10 @@ def test_single_edge(connection):
def set_property(property_name: str, property: typing.Any):
nonlocal cursor
execute_and_fetch_all(
cursor, "MATCH ()-[e]->() "
f"CALL write.set_property(e, '{property_name}', "
"$property) YIELD * RETURN *", {"property": property})
cursor,
"MATCH ()-[e]->() " f"CALL write.set_property(e, '{property_name}', " "$property) YIELD * RETURN *",
{"property": property},
)
set_property(property_name, property_value_1)
assert get_edge().properties == {property_name: property_value_1}
@@ -130,64 +130,74 @@ def test_single_edge(connection):
assert get_edge().properties == {property_name: property_value_2}
set_property(property_name, None)
assert get_edge().properties == {}
execute_and_fetch_all(
cursor, "MATCH ()-[e]->() CALL write.delete_edge(e) YIELD * RETURN 1")
execute_and_fetch_all(cursor, "MATCH ()-[e]->() CALL write.delete_edge(e) YIELD * RETURN 1")
assert has_n_result_row(cursor, "MATCH ()-[e]->() RETURN e", 0)
def test_detach_delete_vertex(connection):
cursor = connection.cursor()
assert has_n_result_row(cursor, "MATCH (n) RETURN n", 0)
v1_id = execute_and_fetch_all(
cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
v2_id = execute_and_fetch_all(
cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
v1_id = execute_and_fetch_all(cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
v2_id = execute_and_fetch_all(cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
execute_and_fetch_all(
cursor, f"MATCH (n) WHERE id(n) = {v1_id} "
cursor,
f"MATCH (n) WHERE id(n) = {v1_id} "
f"MATCH (m) WHERE id(m) = {v2_id} "
f"CALL write.create_edge(n, m, 'EDGE') "
"YIELD e RETURN e")
"YIELD e RETURN e",
)
assert has_one_result_row(cursor, "MATCH (n)-[e]->(m) RETURN n, e, m")
execute_and_fetch_all(
cursor, f"MATCH (n) WHERE id(n) = {v1_id} "
"CALL write.detach_delete_vertex(n) YIELD * RETURN 1")
cursor, f"MATCH (n) WHERE id(n) = {v1_id} " "CALL write.detach_delete_vertex(n) YIELD * RETURN 1"
)
assert has_n_result_row(cursor, "MATCH (n)-[e]->(m) RETURN n, e, m", 0)
assert has_n_result_row(cursor, "MATCH ()-[e]->() RETURN e", 0)
assert has_one_result_row(
cursor, f"MATCH (n) WHERE id(n) = {v2_id} RETURN n")
assert has_one_result_row(cursor, f"MATCH (n) WHERE id(n) = {v2_id} RETURN n")
def test_graph_mutability(connection):
cursor = connection.cursor()
assert has_n_result_row(cursor, "MATCH (n) RETURN n", 0)
v1_id = execute_and_fetch_all(
cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
v2_id = execute_and_fetch_all(
cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
v1_id = execute_and_fetch_all(cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
v2_id = execute_and_fetch_all(cursor, "CALL write.create_vertex() YIELD v RETURN v")[0][0].id
execute_and_fetch_all(
cursor, f"MATCH (n) WHERE id(n) = {v1_id} "
cursor,
f"MATCH (n) WHERE id(n) = {v1_id} "
f"MATCH (m) WHERE id(m) = {v2_id} "
f"CALL write.create_edge(n, m, 'EDGE') "
"YIELD e RETURN e")
"YIELD e RETURN e",
)
def test_mutability(is_write: bool):
module = "write" if is_write else "read"
assert execute_and_fetch_all(
cursor, f"CALL {module}.graph_is_mutable() "
"YIELD mutable RETURN mutable")[0][0] is is_write
assert execute_and_fetch_all(
cursor, "MATCH (n) "
f"CALL {module}.underlying_graph_is_mutable(n) "
"YIELD mutable RETURN mutable")[0][0] is is_write
assert execute_and_fetch_all(
cursor, "MATCH (n)-[e]->(m) "
f"CALL {module}.underlying_graph_is_mutable(e) "
"YIELD mutable RETURN mutable")[0][0] is is_write
assert (
execute_and_fetch_all(cursor, f"CALL {module}.graph_is_mutable() " "YIELD mutable RETURN mutable")[0][0]
is is_write
)
assert (
execute_and_fetch_all(
cursor, "MATCH (n) " f"CALL {module}.underlying_graph_is_mutable(n) " "YIELD mutable RETURN mutable"
)[0][0]
is is_write
)
assert (
execute_and_fetch_all(
cursor,
"MATCH (n)-[e]->(m) " f"CALL {module}.underlying_graph_is_mutable(e) " "YIELD mutable RETURN mutable",
)[0][0]
is is_write
)
test_mutability(True)
test_mutability(False)
def test_log_message(connection):
cursor = connection.cursor()
success = execute_and_fetch_all(cursor, "CALL read.log_message('message') YIELD success RETURN success")[0][0]
assert success is True
if __name__ == "__main__":
sys.exit(pytest.main([__file__, "-rA"]))


@@ -86,7 +86,7 @@ int main(int argc, char **argv) {
memgraph::io::network::Endpoint endpoint(memgraph::io::network::ResolveHostname(FLAGS_address), FLAGS_port);
memgraph::communication::ClientContext context(FLAGS_use_ssl);
memgraph::communication::bolt::Client client(&context);
memgraph::communication::bolt::Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);
client.Execute(FLAGS_query, JsonToValue(nlohmann::json::parse(FLAGS_params_json)).ValueMap());


@@ -33,7 +33,7 @@ int main(int argc, char **argv) {
memgraph::io::network::Endpoint endpoint(memgraph::io::network::ResolveHostname(FLAGS_address), FLAGS_port);
memgraph::communication::ClientContext context(FLAGS_use_ssl);
memgraph::communication::bolt::Client client(&context);
memgraph::communication::bolt::Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);


@@ -38,7 +38,7 @@ int main(int argc, char **argv) {
memgraph::io::network::Endpoint endpoint(memgraph::io::network::ResolveHostname(FLAGS_address), FLAGS_port);
memgraph::communication::ClientContext context(FLAGS_use_ssl);
memgraph::communication::bolt::Client client(&context);
memgraph::communication::bolt::Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);


@@ -35,7 +35,7 @@ int main(int argc, char **argv) {
memgraph::io::network::Endpoint endpoint(memgraph::io::network::ResolveHostname(FLAGS_address), FLAGS_port);
memgraph::communication::ClientContext context(FLAGS_use_ssl);
memgraph::communication::bolt::Client client(&context);
memgraph::communication::bolt::Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);


@@ -38,7 +38,7 @@ int main(int argc, char **argv) {
memgraph::io::network::Endpoint endpoint(memgraph::io::network::ResolveHostname(FLAGS_address), FLAGS_port);
memgraph::communication::ClientContext context(FLAGS_use_ssl);
memgraph::communication::bolt::Client client(&context);
memgraph::communication::bolt::Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);


@@ -37,7 +37,7 @@ int main(int argc, char **argv) {
memgraph::io::network::Endpoint endpoint(memgraph::io::network::ResolveHostname(FLAGS_address), FLAGS_port);
memgraph::communication::ClientContext context(FLAGS_use_ssl);
memgraph::communication::bolt::Client client(&context);
memgraph::communication::bolt::Client client(context);
{
std::string what;


@@ -36,7 +36,7 @@ int main(int argc, char **argv) {
memgraph::io::network::Endpoint endpoint(memgraph::io::network::ResolveHostname(FLAGS_address), FLAGS_port);
memgraph::communication::ClientContext context(FLAGS_use_ssl);
memgraph::communication::bolt::Client client(&context);
memgraph::communication::bolt::Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);
auto ret = client.Execute("DUMP DATABASE", {});


@@ -65,7 +65,7 @@ class BoltClient : public ::testing::Test {
memgraph::io::network::Endpoint endpoint_{memgraph::io::network::ResolveHostname(FLAGS_address),
static_cast<uint16_t>(FLAGS_port)};
memgraph::communication::ClientContext context_{FLAGS_use_ssl};
Client client_{&context_};
Client client_{context_};
};
const std::string kNoCurrentTransactionToCommit = "No current transaction to commit.";
@@ -100,20 +100,20 @@ TEST_F(BoltClient, DoubleRollbackWithoutTransaction) {
TEST_F(BoltClient, DoubleBegin) {
EXPECT_TRUE(Execute("begin"));
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_TRUE(TransactionActive());
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, DoubleBeginAndCommit) {
EXPECT_TRUE(Execute("begin"));
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_TRUE(Execute("commit"));
EXPECT_THROW(Execute("commit", kNoCurrentTransactionToCommit), ClientQueryException);
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, DoubleBeginAndRollback) {
EXPECT_TRUE(Execute("begin"));
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_TRUE(Execute("rollback"));
EXPECT_THROW(Execute("rollback", kNoCurrentTransactionToRollback), ClientQueryException);
EXPECT_FALSE(TransactionActive());
}
@@ -157,30 +157,29 @@ TEST_F(BoltClient, BeginAndCorrectQueriesAndBegin) {
EXPECT_TRUE(Execute("create (n)"));
ASSERT_EQ(GetCount(), count + 1);
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_EQ(GetCount(), count + 1);
EXPECT_TRUE(TransactionActive());
EXPECT_EQ(GetCount(), count);
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, BeginAndWrongQueryAndRollback) {
EXPECT_TRUE(Execute("begin"));
EXPECT_THROW(Execute("asdasd"), ClientQueryException);
EXPECT_TRUE(Execute("rollback"));
EXPECT_THROW(Execute("rollback", kNoCurrentTransactionToRollback), ClientQueryException);
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, BeginAndWrongQueryAndCommit) {
EXPECT_TRUE(Execute("begin"));
EXPECT_THROW(Execute("asdasd"), ClientQueryException);
EXPECT_THROW(Execute("commit", kCommitInvalid), ClientQueryException);
EXPECT_TRUE(Execute("rollback"));
EXPECT_THROW(Execute("commit", kNoCurrentTransactionToCommit), ClientQueryException);
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, BeginAndWrongQueryAndBegin) {
EXPECT_TRUE(Execute("begin"));
EXPECT_THROW(Execute("asdasd"), ClientQueryException);
EXPECT_THROW(Execute("commit", kCommitInvalid), ClientQueryException);
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_THROW(Execute("commit", kNoCurrentTransactionToCommit), ClientQueryException);
EXPECT_TRUE(Execute("begin"));
EXPECT_TRUE(TransactionActive());
}
@@ -230,7 +229,7 @@ TEST_F(BoltClient, CorrectQueryAndBeginAndBegin) {
EXPECT_TRUE(Execute("match (n) return count(n)"));
EXPECT_TRUE(Execute("begin"));
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_TRUE(TransactionActive());
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, WrongQueryAndBeginAndCommit) {
@@ -251,7 +250,7 @@ TEST_F(BoltClient, WrongQueryAndBeginAndBegin) {
EXPECT_THROW(Execute("asdasd"), ClientQueryException);
EXPECT_TRUE(Execute("begin"));
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_TRUE(TransactionActive());
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, CorrectQueriesAndBeginAndCommit) {
@@ -278,7 +277,7 @@ TEST_F(BoltClient, CorrectQueriesAndBeginAndBegin) {
}
EXPECT_TRUE(Execute("begin"));
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_TRUE(TransactionActive());
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, WrongQueriesAndBeginAndCommit) {
@@ -305,7 +304,7 @@ TEST_F(BoltClient, WrongQueriesAndBeginAndBegin) {
}
EXPECT_TRUE(Execute("begin"));
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_TRUE(TransactionActive());
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, CorrectQueriesAndBeginAndCorrectQueriesAndCommit) {
@@ -341,7 +340,7 @@ TEST_F(BoltClient, CorrectQueriesAndBeginAndCorrectQueriesAndBegin) {
EXPECT_TRUE(Execute("match (n) return count(n)"));
}
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_TRUE(TransactionActive());
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, WrongQueriesAndBeginAndCorrectQueriesAndCommit) {
@@ -377,7 +376,7 @@ TEST_F(BoltClient, WrongQueriesAndBeginAndCorrectQueriesAndBegin) {
EXPECT_TRUE(Execute("match (n) return count(n)"));
}
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_TRUE(TransactionActive());
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, CorrectQueriesAndBeginAndWrongQueriesAndCommit) {
@@ -388,8 +387,8 @@ TEST_F(BoltClient, CorrectQueriesAndBeginAndWrongQueriesAndCommit) {
for (int i = 0; i < 3; ++i) {
EXPECT_THROW(Execute("asdasd"), ClientQueryException);
}
EXPECT_THROW(Execute("commit", kCommitInvalid), ClientQueryException);
EXPECT_TRUE(TransactionActive());
EXPECT_THROW(Execute("commit", kNoCurrentTransactionToCommit), ClientQueryException);
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, CorrectQueriesAndBeginAndWrongQueriesAndRollback) {
@@ -400,7 +399,7 @@ TEST_F(BoltClient, CorrectQueriesAndBeginAndWrongQueriesAndRollback) {
for (int i = 0; i < 3; ++i) {
EXPECT_THROW(Execute("asdasd"), ClientQueryException);
}
EXPECT_TRUE(Execute("rollback"));
EXPECT_THROW(Execute("rollback", kNoCurrentTransactionToRollback), ClientQueryException);
EXPECT_FALSE(TransactionActive());
}
@@ -412,7 +411,7 @@ TEST_F(BoltClient, CorrectQueriesAndBeginAndWrongQueriesAndBegin) {
for (int i = 0; i < 3; ++i) {
EXPECT_THROW(Execute("asdasd"), ClientQueryException);
}
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_TRUE(Execute("begin"));
EXPECT_TRUE(TransactionActive());
}
@@ -424,8 +423,8 @@ TEST_F(BoltClient, WrongQueriesAndBeginAndWrongQueriesAndCommit) {
for (int i = 0; i < 3; ++i) {
EXPECT_THROW(Execute("asdasd"), ClientQueryException);
}
EXPECT_THROW(Execute("commit", kCommitInvalid), ClientQueryException);
EXPECT_TRUE(TransactionActive());
EXPECT_THROW(Execute("commit", kNoCurrentTransactionToCommit), ClientQueryException);
EXPECT_FALSE(TransactionActive());
}
TEST_F(BoltClient, WrongQueriesAndBeginAndWrongQueriesAndRollback) {
@@ -436,7 +435,7 @@ TEST_F(BoltClient, WrongQueriesAndBeginAndWrongQueriesAndRollback) {
for (int i = 0; i < 3; ++i) {
EXPECT_THROW(Execute("asdasd"), ClientQueryException);
}
EXPECT_TRUE(Execute("rollback"));
EXPECT_THROW(Execute("rollback", kNoCurrentTransactionToRollback), ClientQueryException);
EXPECT_FALSE(TransactionActive());
}
@@ -448,7 +447,7 @@ TEST_F(BoltClient, WrongQueriesAndBeginAndWrongQueriesAndBegin) {
for (int i = 0; i < 3; ++i) {
EXPECT_THROW(Execute("asdasd"), ClientQueryException);
}
EXPECT_THROW(Execute("begin", kNestedTransactions), ClientQueryException);
EXPECT_TRUE(Execute("begin"));
EXPECT_TRUE(TransactionActive());
}
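The flipped expectations in these tests encode a semantic change: a BEGIN issued inside an open transaction now raises and leaves no transaction active, and a subsequent COMMIT or ROLLBACK fails with a "no current transaction" error. A toy Python state machine (purely illustrative, not Memgraph's implementation) that satisfies the same assertions:

```python
class TxError(Exception):
    """Stand-in for the ClientQueryException raised by the Bolt client."""


class Session:
    """Toy model of the transaction semantics the tests above assert."""

    def __init__(self):
        self.active = False  # mirrors TransactionActive()

    def execute(self, query):
        q = query.strip().lower()
        if q == "begin":
            if self.active:
                # A nested BEGIN fails *and* aborts the open transaction.
                self.active = False
                raise TxError("Nested transactions are not supported.")
            self.active = True
        elif q == "commit":
            if not self.active:
                raise TxError("No current transaction to commit.")
            self.active = False
        elif q == "rollback":
            if not self.active:
                raise TxError("No current transaction to rollback.")
            self.active = False
```

Under the old behavior the nested BEGIN would raise but leave the transaction open; the new expectations (`EXPECT_FALSE(TransactionActive())`, commit/rollback now throwing) match the abort-on-error model above.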


@@ -6,6 +6,7 @@
should be consistent."
(:require [neo4j-clj.core :as dbclient]
[clojure.tools.logging :refer [info]]
[clojure.string :as string]
[jepsen [client :as client]
[checker :as checker]
[generator :as gen]]
@@ -80,13 +81,21 @@
:ok
:fail)))
(catch Exception e
; Transaction can fail on serialization errors
(assoc op :type :fail :info (str e))))
(if (string/includes? (str e) "At least one SYNC replica has not confirmed committing last transaction.")
(assoc op :type :ok :info (str e)); Exception due to down sync replica is accepted/expected
(assoc op :type :fail :info (str e)))
))
(assoc op :type :fail))))
(teardown! [this test]
(when (= replication-role :main)
(c/with-session conn session
(c/detach-delete-all session))))
(try
(c/detach-delete-all session)
(catch Exception e
(if-not (string/includes? (str e) "At least one SYNC replica has not confirmed committing last transaction.")
(throw (Exception. (str "Invalid exception when deleting all nodes: " e)))); Exception due to down sync replica is accepted/expected
)
))))
(close! [_ est]
(dbclient/disconnect conn)))

View File

@ -1,6 +1,7 @@
(ns jepsen.memgraph.basic
"Basic Memgraph test"
(:require [neo4j-clj.core :as dbclient]
(:require [neo4j-clj.core :as dbclient]
[clojure.string :as string]
[jepsen [client :as client]
[checker :as checker]
[generator :as gen]]
@ -53,7 +54,13 @@
(assoc op :type :fail, :error :not-found)))))
(teardown! [this test]
(c/with-session conn session
(detach-delete-all session)))
(try
(c/detach-delete-all session)
(catch Exception e
(if-not (string/includes? (str e) "At least one SYNC replica has not confirmed committing last transaction.")
(throw (Exception. (str "Invalid exception when deleting all nodes: " e)))); Exception due to down sync replica is accepted/expected
)
)))
(close! [_ est]
(dbclient/disconnect conn)))
@ -73,4 +80,3 @@
:timeline (timeline/html)})
:generator (gen/mix [r w cas])
:final-generator (gen/once r)})

View File

@ -40,11 +40,11 @@
name
" "
(replication-mode-str node-config)
" TO \""
" TO '"
(:ip node-config)
":"
(:port node-config)
"\"")))
"'")))
(defn create-set-replica-role-query
[port]
@ -103,12 +103,12 @@
(doseq [n (filter #(= (:replication-role (val %))
:replica)
node-config)]
(try
(try
(c/with-session conn session
((c/create-register-replica-query
(first n)
(second n)) session))
(catch Exception e)))
(catch Exception e)))
(assoc op :type :ok))
(assoc op :type :fail)))
cases))

View File

@ -13,7 +13,6 @@
[jepsen.memgraph [basic :as basic]
[bank :as bank]
[large :as large]
[sequential :as sequential]
[support :as s]
[nemesis :as nemesis]
[edn :as e]]))
@ -22,7 +21,6 @@
"A map of workload names to functions that can take opts and construct
workloads."
{:bank bank/workload
;; :sequential sequential/workload (T0532-MG)
:large large/workload})
(def nemesis-configuration

View File

@ -2,6 +2,7 @@
"Large write test"
(:require [neo4j-clj.core :as dbclient]
[clojure.tools.logging :refer [info]]
[clojure.string :as string]
[jepsen [client :as client]
[checker :as checker]
[generator :as gen]]
@ -40,13 +41,27 @@
:node node}))
:add (if (= replication-role :main)
(c/with-session conn session
(create-nodes session)
(assoc op :type :ok))
(try
((create-nodes session)
(assoc op :type :ok))
(catch Exception e
(if (string/includes? (str e) "At least one SYNC replica has not confirmed committing last transaction.")
(assoc op :type :ok :info (str e)); Exception due to down sync replica is accepted/expected
(assoc op :type :fail :info (str e)))
)
)
)
(assoc op :type :fail))))
(teardown! [this test]
(when (= replication-role :main)
(c/with-session conn session
(c/detach-delete-all session))))
(try
(c/detach-delete-all session)
(catch Exception e
(if-not (string/includes? (str e) "At least one SYNC replica has not confirmed committing last transaction.")
(throw (Exception. (str "Invalid exception when deleting all nodes: " e)))); Exception due to down sync replica is accepted/expected
)
))))
(close! [_ est]
(dbclient/disconnect conn)))

View File

@ -1,154 +0,0 @@
(ns jepsen.memgraph.sequential
"Sequential test"
(:require [neo4j-clj.core :as dbclient]
[clojure.tools.logging :refer [info]]
[jepsen [client :as client]
[checker :as checker]
[generator :as gen]]
[jepsen.checker.timeline :as timeline]
[jepsen.memgraph.client :as c]))
(dbclient/defquery get-all-nodes
"MATCH (n:Node) RETURN n ORDER BY n.id;")
(dbclient/defquery create-node
"CREATE (n:Node {id: $id});")
(dbclient/defquery delete-node-with-id
"MATCH (n:Node {id: $id}) DELETE n;")
(def next-node-for-add (atom 0))
(defn add-next-node
"Add a new node with its id set to the next highest"
[conn]
(when (dbclient/with-transaction conn tx
(create-node tx {:id (swap! next-node-for-add identity)}))
(swap! next-node-for-add inc)))
(def next-node-for-delete (atom 0))
(defn delete-oldest-node
"Delete a node with the lowest id"
[conn]
(when (dbclient/with-transaction conn tx
(delete-node-with-id tx {:id (swap! next-node-for-delete identity)}))
(swap! next-node-for-delete inc)))
(c/replication-client Client []
(open! [this test node]
(c/replication-open-connection this node node-config))
(setup! [this test]
(when (= replication-role :main)
(c/with-session conn session
(c/detach-delete-all session)
(create-node session {:id 0}))))
(invoke! [this test op]
(c/replication-invoke-case (:f op)
:read (c/with-session conn session
(assoc op
:type :ok
:value {:ids (->> (get-all-nodes session)
(map #(-> % :n :id))
(reduce conj []))
:node node}))
:add (if (= replication-role :main)
(try
(assoc op :type (if (add-next-node conn) :ok :fail))
(catch Exception e
; Transaction can fail on serialization errors
(assoc op :type :fail :info (str e))))
(assoc op :type :fail))
:delete (if (= replication-role :main)
(try
(assoc op :type (if (delete-oldest-node conn) :ok :fail))
(catch Exception e
; Transaction can fail on serialization errors
(assoc op :type :fail :info (str e))))
(assoc op :type :fail))))
(teardown! [this test]
(when (= replication-role :main)
(c/with-session conn session
(c/detach-delete-all session))))
(close! [_ est]
(dbclient/disconnect conn)))
(defn add-node
"Add node with id set to current_max_id + 1"
[test process]
{:type :invoke :f :add :value nil})
(defn read-ids
"Read all current ids of nodes"
[test process]
{:type :invoke :f :read :value nil})
(defn delete-node
"Delete node with the lowest id"
[test process]
{:type :invoke :f :delete :value nil})
(defn strictly-increasing
[coll]
(every?
#(< (first %) (second %))
(partition 2 1 coll)))
(defn increased-by-1
[coll]
(every?
#(= (inc (first %)) (second %))
(partition 2 1 coll)))
(defn sequential-checker
"Check if all nodes have nodes with ids that are strictly increasing by 1.
  All nodes need to have at least 1 non-empty read."
[]
(reify checker/Checker
(check [this test history opts]
(let [ok-reads (->> history
(filter #(= :ok (:type %)))
(filter #(= :read (:f %))))
bad-reads (->> ok-reads
(map (fn [op]
(let [ids (-> op :value :ids)]
(when (not-empty ids)
(cond ((complement strictly-increasing) ids)
{:type :not-increasing-ids
:op op})))))
;; if there are multiple threads, it is unclear how to guarantee that the ids are created in order
;;((complement increased-by-1) ids)
;;{:type :ids-missing
;; :op op})))))
(filter identity)
(into []))
empty-nodes (let [all-nodes (->> ok-reads
(map #(-> % :value :node))
(reduce conj #{}))]
(->> all-nodes
(filter (fn [node]
(every?
empty?
(->> ok-reads
(map :value)
(filter #(= node (:node %)))
(map :ids)))))
(filter identity)
(into [])))]
{:valid? (and
(empty? bad-reads)
(empty? empty-nodes))
:empty-nodes empty-nodes
:bad-reads bad-reads}))))
(defn workload
[opts]
{:client (Client. nil nil nil (:node-config opts))
:checker (checker/compose
{:sequential (sequential-checker)
:timeline (timeline/html)})
:generator (c/replication-gen
(gen/phases (cycle [(gen/time-limit 1 (gen/mix [read-ids add-node]))
(gen/once delete-node)])))
:final-generator (gen/once read-ids)})

View File

@ -25,8 +25,7 @@
:--storage-recover-on-startup
:--storage-wal-enabled
:--storage-snapshot-interval-sec 300
:--storage-properties-on-edges
:--storage-restore-replicas-on-startup false))
:--storage-properties-on-edges))
(defn stop-node!
[test node]

View File

@ -117,7 +117,7 @@ int main(int argc, char **argv) {
Endpoint endpoint(FLAGS_address, FLAGS_port);
ClientContext context(FLAGS_use_ssl);
Client client(&context);
Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);
std::vector<std::unique_ptr<TestClient>> clients;

View File

@ -317,7 +317,7 @@ int main(int argc, char **argv) {
Endpoint endpoint(FLAGS_address, FLAGS_port);
ClientContext context(FLAGS_use_ssl);
Client client(&context);
Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);
num_pos.store(NumNodesWithLabel(client, "Pos"));

View File

@ -118,7 +118,7 @@ class TestClient {
private:
memgraph::communication::ClientContext context_{FLAGS_use_ssl};
Client client_{&context_};
Client client_{context_};
};
void RunMultithreadedTest(std::vector<std::unique_ptr<TestClient>> &clients) {

View File

@ -261,7 +261,7 @@ int main(int argc, char **argv) {
auto independent_nodes_ids = [&] {
Endpoint endpoint(memgraph::io::network::ResolveHostname(FLAGS_address), FLAGS_port);
ClientContext context(FLAGS_use_ssl);
Client client(&context);
Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);
return IndependentSet(client, INDEPENDENT_LABEL);
}();

View File

@ -67,7 +67,7 @@ void ExecuteQueries(const std::vector<std::string> &queries, std::ostream &ostre
threads.push_back(std::thread([&]() {
Endpoint endpoint(FLAGS_address, FLAGS_port);
ClientContext context(FLAGS_use_ssl);
Client client(&context);
Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);
std::string str;

View File

@ -32,7 +32,7 @@ int main(int argc, char **argv) {
memgraph::io::network::Endpoint endpoint(memgraph::io::network::ResolveHostname(FLAGS_address), FLAGS_port);
memgraph::communication::ClientContext context(FLAGS_use_ssl);
memgraph::communication::bolt::Client client(&context);
memgraph::communication::bolt::Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);

View File

@ -177,7 +177,7 @@ void Execute(
threads.push_back(std::thread([&, worker]() {
memgraph::io::network::Endpoint endpoint(FLAGS_address, FLAGS_port);
memgraph::communication::ClientContext context(FLAGS_use_ssl);
memgraph::communication::bolt::Client client(&context);
memgraph::communication::bolt::Client client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);
ready.fetch_add(1, std::memory_order_acq_rel);

View File

@ -66,7 +66,7 @@ class GraphSession {
}
EndpointT endpoint(FLAGS_address, FLAGS_port);
client_ = std::make_unique<ClientT>(&context_);
client_ = std::make_unique<ClientT>(context_);
client_->Connect(endpoint, FLAGS_username, FLAGS_password);
}
@ -387,7 +387,7 @@ int main(int argc, char **argv) {
// create client
EndpointT endpoint(FLAGS_address, FLAGS_port);
ClientContextT context(FLAGS_use_ssl);
ClientT client(&context);
ClientT client(context);
client.Connect(endpoint, FLAGS_username, FLAGS_password);
// cleanup and create indexes

View File

@ -368,5 +368,5 @@ add_dependencies(memgraph__unit test_lcp)
# Test websocket
find_package(Boost REQUIRED)
add_unit_test(websocket.cpp)
target_link_libraries(${test_prefix}websocket mg-communication Boost::headers)
add_unit_test(monitoring.cpp)
target_link_libraries(${test_prefix}monitoring mg-communication Boost::headers)

View File

@ -1,4 +1,4 @@
// Copyright 2021 Memgraph Ltd.
// Copyright 2022 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@ -98,15 +98,19 @@ void PrintOutput(std::vector<uint8_t> &output) {
* TODO (mferencevic): document
*/
void CheckOutput(std::vector<uint8_t> &output, const uint8_t *data, uint64_t len, bool clear = true) {
if (clear)
if (clear) {
ASSERT_EQ(len, output.size());
else
} else {
ASSERT_LE(len, output.size());
for (size_t i = 0; i < len; ++i) EXPECT_EQ(output[i], data[i]);
if (clear)
}
for (size_t i = 0; i < len; ++i) {
EXPECT_EQ(output[i], data[i]) << i;
}
if (clear) {
output.clear();
else
} else {
output.erase(output.begin(), output.begin() + len);
}
}
/**

View File

@ -9,6 +9,8 @@
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include <string>
#include <gflags/gflags.h>
#include "bolt_common.hpp"
@ -151,7 +153,10 @@ inline constexpr uint8_t noop[] = {0x00, 0x00};
} // namespace v4_1
namespace v4_3 {
inline constexpr uint8_t route[]{0xb0, 0x60};
inline constexpr uint8_t handshake_req[] = {0x60, 0x60, 0xb0, 0x17, 0x00, 0x00, 0x03, 0x04, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
inline constexpr uint8_t handshake_resp[] = {0x00, 0x00, 0x03, 0x04};
inline constexpr uint8_t route[]{0xb3, 0x66, 0xa0, 0x90, 0xc0};
} // namespace v4_3
// Write bolt chunk header (length)
@ -705,67 +710,89 @@ TEST(BoltSession, ErrorWrongMarker) {
}
TEST(BoltSession, ErrorOK) {
// v1
{
SCOPED_TRACE("v1");
// test ACK_FAILURE and RESET
const uint8_t *dataset[] = {ackfailure_req, reset_req};
for (int i = 0; i < 2; ++i) {
SCOPED_TRACE("i: " + std::to_string(i));
// first test with socket write success, then with socket write fail
for (int j = 0; j < 2; ++j) {
SCOPED_TRACE("j: " + std::to_string(j));
const auto write_success = j == 0;
INIT_VARS;
ExecuteHandshake(input_stream, session, output);
ExecuteInit(input_stream, session, output);
ASSERT_EQ(session.version_.major, 1U);
ExecuteInit(input_stream, session, output);
WriteRunRequest(input_stream, kInvalidQuery);
session.Execute();
output.clear();
output_stream.SetWriteSuccess(j == 0);
if (j == 0) {
output_stream.SetWriteSuccess(write_success);
if (write_success) {
ExecuteCommand(input_stream, session, dataset[i], 2);
} else {
ASSERT_THROW(ExecuteCommand(input_stream, session, dataset[i], 2), SessionException);
}
// assert that all data from the init message was cleaned up
ASSERT_EQ(session.decoder_buffer_.Size(), 0);
EXPECT_EQ(session.decoder_buffer_.Size(), 0);
if (j == 0) {
ASSERT_EQ(session.state_, State::Idle);
if (write_success) {
EXPECT_EQ(session.state_, State::Idle);
CheckOutput(output, success_resp, sizeof(success_resp));
} else {
ASSERT_EQ(session.state_, State::Close);
ASSERT_EQ(output.size(), 0);
EXPECT_EQ(session.state_, State::Close);
EXPECT_EQ(output.size(), 0);
}
}
}
}
// v4+
{
SCOPED_TRACE("v4");
const uint8_t *dataset[] = {ackfailure_req, v4::reset_req};
for (int i = 0; i < 2; ++i) {
INIT_VARS;
SCOPED_TRACE("i: " + std::to_string(i));
// first test with socket write success, then with socket write fail
for (int j = 0; j < 2; ++j) {
SCOPED_TRACE("j: " + std::to_string(j));
const auto write_success = j == 0;
const auto is_reset = i == 1;
INIT_VARS;
ExecuteHandshake(input_stream, session, output, v4::handshake_req, v4::handshake_resp);
ExecuteInit(input_stream, session, output, true);
ExecuteHandshake(input_stream, session, output, v4::handshake_req, v4::handshake_resp);
ASSERT_EQ(session.version_.major, 4U);
ExecuteInit(input_stream, session, output, true);
WriteRunRequest(input_stream, kInvalidQuery, true);
session.Execute();
WriteRunRequest(input_stream, kInvalidQuery, true);
session.Execute();
output.clear();
output.clear();
output_stream.SetWriteSuccess(write_success);
ExecuteCommand(input_stream, session, dataset[i], 2);
// ACK_FAILURE does not exist in v3+, ignored message is sent
if (write_success) {
ExecuteCommand(input_stream, session, dataset[i], 2);
} else {
ASSERT_THROW(ExecuteCommand(input_stream, session, dataset[i], 2), SessionException);
}
// ACK_FAILURE does not exist in v4+
if (i == 0) {
ASSERT_EQ(session.state_, State::Error);
} else {
ASSERT_EQ(session.state_, State::Idle);
CheckOutput(output, success_resp, sizeof(success_resp));
if (write_success) {
if (is_reset) {
EXPECT_EQ(session.state_, State::Idle);
CheckOutput(output, success_resp, sizeof(success_resp));
} else {
ASSERT_EQ(session.state_, State::Error);
CheckOutput(output, ignored_resp, sizeof(ignored_resp));
}
} else {
EXPECT_EQ(session.state_, State::Close);
}
}
}
}
@ -950,18 +977,100 @@ TEST(BoltSession, Noop) {
TEST(BoltSession, Route) {
// Memgraph does not support route message, but it handles it
{
SCOPED_TRACE("v1");
INIT_VARS;
ExecuteHandshake(input_stream, session, output);
ExecuteInit(input_stream, session, output);
ASSERT_THROW(ExecuteCommand(input_stream, session, v4_3::route, sizeof(v4_3::route)), SessionException);
EXPECT_EQ(session.state_, State::Close);
}
{
SCOPED_TRACE("v4");
INIT_VARS;
ExecuteHandshake(input_stream, session, output, v4::handshake_req, v4::handshake_resp);
ExecuteHandshake(input_stream, session, output, v4_3::handshake_req, v4_3::handshake_resp);
ExecuteInit(input_stream, session, output, true);
ASSERT_THROW(ExecuteCommand(input_stream, session, v4_3::route, sizeof(v4_3::route)), SessionException);
ASSERT_NO_THROW(ExecuteCommand(input_stream, session, v4_3::route, sizeof(v4_3::route)));
static constexpr uint8_t expected_resp[] = {
0x00 /*two bytes of chunk header, chunk contains 64 bytes of data*/,
0x40,
0xb1 /*TinyStruct1*/,
0x7f /*Failure*/,
0xa2 /*TinyMap with 2 items*/,
0x84 /*TinyString with 4 chars*/,
'c',
'o',
'd',
'e',
0x82 /*TinyString with 2 chars*/,
'6',
'6',
0x87 /*TinyString with 7 chars*/,
'm',
'e',
's',
's',
'a',
'g',
'e',
0xd0 /*String*/,
0x2b /*With 43 chars*/,
'R',
'o',
'u',
't',
'e',
' ',
'm',
'e',
's',
's',
'a',
'g',
'e',
' ',
'i',
's',
' ',
'n',
'o',
't',
' ',
's',
'u',
'p',
'p',
'o',
'r',
't',
'e',
'd',
' ',
'i',
'n',
' ',
'M',
'e',
'm',
'g',
'r',
'a',
'p',
'h',
'!',
0x00 /*Terminating zeros*/,
0x00,
};
EXPECT_EQ(input_stream.size(), 0U);
CheckOutput(output, expected_resp, sizeof(expected_resp));
EXPECT_EQ(session.state_, State::Error);
SCOPED_TRACE("Try to reset connection after ROUTE failed");
ASSERT_NO_THROW(ExecuteCommand(input_stream, session, v4::reset_req, sizeof(v4::reset_req)));
EXPECT_EQ(input_stream.size(), 0U);
CheckOutput(output, success_resp, sizeof(success_resp));
EXPECT_EQ(session.state_, State::Idle);
}
}
@ -992,3 +1101,24 @@ TEST(BoltSession, Rollback) {
ASSERT_THROW(ExecuteCommand(input_stream, session, v4::rollback, sizeof(v4::rollback)), SessionException);
}
}
TEST(BoltSession, ResetInIdle) {
{
SCOPED_TRACE("v1");
INIT_VARS;
ExecuteHandshake(input_stream, session, output);
ExecuteInit(input_stream, session, output);
ASSERT_NO_THROW(ExecuteCommand(input_stream, session, reset_req, sizeof(reset_req)));
EXPECT_EQ(session.state_, State::Idle);
}
{
SCOPED_TRACE("v4");
INIT_VARS;
ExecuteHandshake(input_stream, session, output, v4_3::handshake_req, v4_3::handshake_resp);
ExecuteInit(input_stream, session, output, true);
ASSERT_NO_THROW(ExecuteCommand(input_stream, session, v4::reset_req, sizeof(v4::reset_req)));
EXPECT_EQ(session.state_, State::Idle);
}
}

View File

@ -4251,6 +4251,24 @@ TEST_P(CypherMainVisitorTest, VersionQuery) {
ASSERT_NO_THROW(ast_generator.ParseQuery("SHOW VERSION"));
}
TEST_P(CypherMainVisitorTest, ConfigQuery) {
auto &ast_generator = *GetParam();
TestInvalidQuery("SHOW CF", ast_generator);
TestInvalidQuery("SHOW CFG", ast_generator);
TestInvalidQuery("SHOW CFGS", ast_generator);
TestInvalidQuery("SHOW CONF", ast_generator);
TestInvalidQuery("SHOW CONFIGS", ast_generator);
TestInvalidQuery("SHOW CONFIGURATION", ast_generator);
TestInvalidQuery("SHOW CONFIGURATIONS", ast_generator);
Query *query = ast_generator.ParseQuery("SHOW CONFIG");
auto *ptr = dynamic_cast<ShowConfigQuery *>(query);
ASSERT_TRUE(ptr != nullptr);
ASSERT_NO_THROW(ast_generator.ParseQuery("SHOW CONFIG"));
}
TEST_P(CypherMainVisitorTest, ForeachThrow) {
auto &ast_generator = *GetParam();
EXPECT_THROW(ast_generator.ParseQuery("FOREACH(i IN [1, 2] | UNWIND [1,2,3] AS j CREATE (n))"), SyntaxException);

View File

@ -7,17 +7,19 @@
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt."""
// licenses/APL.txt.
#define BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <atomic>
#include <cstddef>
#include <string>
#include <string_view>
#include <thread>
#include <unordered_set>
#include <vector>
#include <fmt/core.h>
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <spdlog/common.h>
#include <spdlog/spdlog.h>
#include <boost/asio/connect.hpp>
#include <boost/asio/ip/tcp.hpp>
@ -38,8 +40,6 @@ inline constexpr auto kResponseSuccess{"success"};
inline constexpr auto kResponseMessage{"message"};
struct MockAuth : public memgraph::communication::websocket::AuthenticationInterface {
MockAuth() = default;
bool Authenticate(const std::string & /*username*/, const std::string & /*password*/) const override {
return authentication;
}
@ -55,25 +55,50 @@ struct MockAuth : public memgraph::communication::websocket::AuthenticationInter
bool has_any_users{true};
};
class WebSocketServerTest : public ::testing::Test {
public:
class MonitoringServerTest : public ::testing::Test {
protected:
WebSocketServerTest() : websocket_server{{"0.0.0.0", 0}, &context, auth} {
EXPECT_NO_THROW(websocket_server.Start());
}
void SetUp() override { ASSERT_NO_THROW(monitoring_server.Start()); }
void TearDown() override {
EXPECT_NO_THROW(websocket_server.Shutdown());
EXPECT_NO_THROW(websocket_server.AwaitShutdown());
StopLogging();
ASSERT_NO_THROW(monitoring_server.Shutdown());
ASSERT_NO_THROW(monitoring_server.AwaitShutdown());
}
std::string ServerPort() const { return std::to_string(websocket_server.GetEndpoint().port()); }
std::string ServerPort() const { return std::to_string(monitoring_server.GetEndpoint().port()); }
std::string ServerAddress() const { return websocket_server.GetEndpoint().address().to_string(); }
std::string ServerAddress() const { return monitoring_server.GetEndpoint().address().to_string(); }
void StartLogging(std::vector<std::pair<spdlog::level::level_enum, std::string>> messages) {
messages_ = std::move(messages);
logging_.store(true, std::memory_order_relaxed);
bg_thread_ = std::jthread([this]() {
while (logging_.load(std::memory_order_relaxed)) {
for (const auto &[message_level, message_content] : messages_) {
spdlog::log(message_level, message_content);
spdlog::default_logger()->flush();
}
}
});
}
void StopLogging() {
if (!logging_.load(std::memory_order_relaxed)) {
return;
}
logging_.store(false, std::memory_order_relaxed);
ASSERT_TRUE(bg_thread_.joinable());
bg_thread_.join();
}
MockAuth auth;
memgraph::communication::ServerContext context{};
memgraph::communication::websocket::Server websocket_server;
memgraph::communication::ServerContext context;
memgraph::communication::websocket::Server monitoring_server{{"0.0.0.0", 0}, &context, auth};
private:
std::jthread bg_thread_;
std::vector<std::pair<spdlog::level::level_enum, std::string>> messages_;
std::atomic<bool> logging_{false};
};
class Client {
@ -97,18 +122,18 @@ class Client {
std::string Read() {
ws_.read(buffer_);
const std::string response = beast::buffers_to_string(buffer_.data());
std::string response = beast::buffers_to_string(buffer_.data());
buffer_.consume(buffer_.size());
return response;
}
private:
net::io_context ioc_{};
net::io_context ioc_;
websocket::stream<tcp::socket> ws_{ioc_};
beast::flat_buffer buffer_;
};
TEST(WebSocketServer, WebsocketWorkflow) {
TEST(MonitoringServer, MonitoringWorkflow) {
/**
* Notice how there is no port management for the clients
* and the servers, that is because when using "0.0.0.0" as address and
@ -116,75 +141,85 @@ TEST(WebSocketServer, WebsocketWorkflow) {
* and it is the keeper of all available port numbers and
* assigns them automatically.
*/
MockAuth auth{};
memgraph::communication::ServerContext context{};
memgraph::communication::websocket::Server websocket_server({"0.0.0.0", 0}, &context, auth);
const auto port = websocket_server.GetEndpoint().port();
MockAuth auth;
memgraph::communication::ServerContext context;
memgraph::communication::websocket::Server monitoring_server({"0.0.0.0", 0}, &context, auth);
const auto port = monitoring_server.GetEndpoint().port();
SCOPED_TRACE(fmt::format("Checking port number different than 0: {}", port));
EXPECT_NE(port, 0);
EXPECT_NO_THROW(websocket_server.Start());
EXPECT_TRUE(websocket_server.IsRunning());
EXPECT_NO_THROW(monitoring_server.Start());
EXPECT_TRUE(monitoring_server.IsRunning());
EXPECT_NO_THROW(websocket_server.Shutdown());
EXPECT_FALSE(websocket_server.IsRunning());
EXPECT_NO_THROW(monitoring_server.Shutdown());
EXPECT_FALSE(monitoring_server.IsRunning());
EXPECT_NO_THROW(websocket_server.AwaitShutdown());
EXPECT_FALSE(websocket_server.IsRunning());
EXPECT_NO_THROW(monitoring_server.AwaitShutdown());
EXPECT_FALSE(monitoring_server.IsRunning());
}
TEST_F(WebSocketServerTest, WebsocketConnection) {
TEST(MonitoringServer, Connection) {
MockAuth auth;
memgraph::communication::ServerContext context;
memgraph::communication::websocket::Server monitoring_server({"0.0.0.0", 0}, &context, auth);
ASSERT_NO_THROW(monitoring_server.Start());
{
auto client = Client{};
EXPECT_NO_THROW(client.Connect("0.0.0.0", ServerPort()));
Client client;
EXPECT_NO_THROW(client.Connect("0.0.0.0", std::to_string(monitoring_server.GetEndpoint().port())));
}
websocket_server.Shutdown();
websocket_server.AwaitShutdown();
ASSERT_NO_THROW(monitoring_server.Shutdown());
ASSERT_NO_THROW(monitoring_server.AwaitShutdown());
ASSERT_FALSE(monitoring_server.IsRunning());
}
TEST_F(WebSocketServerTest, WebsocketLogging) {
TEST_F(MonitoringServerTest, Logging) {
auth.has_any_users = false;
// Set up the websocket logger as one of the defaults for spdlog
{
auto default_logger = spdlog::default_logger();
auto sinks = default_logger->sinks();
sinks.push_back(websocket_server.GetLoggingSink());
sinks.push_back(monitoring_server.GetLoggingSink());
auto logger = std::make_shared<spdlog::logger>("memgraph_log", sinks.begin(), sinks.end());
logger->set_level(default_logger->level());
logger->flush_on(spdlog::level::trace);
spdlog::set_default_logger(std::move(logger));
}
{
auto client = Client();
client.Connect(ServerAddress(), ServerPort());
Client client;
client.Connect(ServerAddress(), ServerPort());
std::vector<std::pair<spdlog::level::level_enum, std::string>> messages{
{spdlog::level::err, "Sending error message!"},
{spdlog::level::warn, "Sending warn message!"},
{spdlog::level::info, "Sending info message!"},
{spdlog::level::trace, "Sending trace message!"},
};
auto log_message = [](spdlog::level::level_enum level, std::string_view message) {
spdlog::log(level, message);
spdlog::default_logger()->flush();
};
auto log_and_check = [log_message, &client](spdlog::level::level_enum level, std::string_view message,
std::string_view log_level_received) {
std::thread(log_message, level, message).detach();
const auto received_message = client.Read();
EXPECT_EQ(received_message, fmt::format("{{\"event\": \"log\", \"level\": \"{}\", \"message\": \"{}\"}}\n",
log_level_received, message));
};
log_and_check(spdlog::level::err, "Sending error message!", "error");
log_and_check(spdlog::level::warn, "Sending warn message!", "warning");
log_and_check(spdlog::level::info, "Sending info message!", "info");
log_and_check(spdlog::level::trace, "Sending trace message!", "trace");
StartLogging(messages);
std::unordered_set<std::string> received_messages;
// In the worst case we might need up to 100 reads to collect all expected
// messages, in case messages get reordered on the network
for (size_t i{0}; i < 100; ++i) {
const auto received_message = client.Read();
received_messages.insert(received_message);
if (received_messages.size() == 4) {
break;
}
}
ASSERT_EQ(received_messages.size(), 4);
for (const auto &[message_level, message_content] : messages) {
EXPECT_TRUE(
received_messages.contains(fmt::format("{{\"event\": \"log\", \"level\": \"{}\", \"message\": \"{}\"}}\n",
spdlog::level::to_string_view(message_level), message_content)));
}
}
TEST_F(WebSocketServerTest, WebsocketAuthenticationParsingError) {
TEST_F(MonitoringServerTest, AuthenticationParsingError) {
static constexpr auto auth_fail = "Cannot parse JSON for WebSocket authentication";
{
SCOPED_TRACE("Checking handling of first request parsing error.");
auto client = Client();
Client client;
EXPECT_NO_THROW(client.Connect(ServerAddress(), ServerPort()));
EXPECT_NO_THROW(client.Write("Test"));
const auto response = nlohmann::json::parse(client.Read());
@ -196,7 +231,7 @@ TEST_F(WebSocketServerTest, WebsocketAuthenticationParsingError) {
}
{
SCOPED_TRACE("Checking handling of JSON parsing error.");
auto client = Client();
Client client;
EXPECT_NO_THROW(client.Connect(ServerAddress(), ServerPort()));
const std::string json_without_comma = R"({"username": "user" "password": "123"})";
EXPECT_NO_THROW(client.Write(json_without_comma));
@ -209,12 +244,12 @@ TEST_F(WebSocketServerTest, WebsocketAuthenticationParsingError) {
}
}
TEST_F(WebSocketServerTest, WebsocketAuthenticationWhenAuthPasses) {
TEST_F(MonitoringServerTest, AuthenticationWhenAuthPasses) {
static constexpr auto auth_success = R"({"message":"User has been successfully authenticated!","success":true})";
{
SCOPED_TRACE("Checking successful authentication response.");
auto client = Client();
Client client;
EXPECT_NO_THROW(client.Connect(ServerAddress(), ServerPort()));
EXPECT_NO_THROW(client.Write(R"({"username": "user", "password": "123"})"));
const auto response = client.Read();
@ -223,13 +258,13 @@ TEST_F(WebSocketServerTest, WebsocketAuthenticationWhenAuthPasses) {
}
}
TEST_F(WebSocketServerTest, WebsocketAuthenticationWithMultipleAttempts) {
TEST_F(MonitoringServerTest, AuthenticationWithMultipleAttempts) {
static constexpr auto auth_success = R"({"message":"User has been successfully authenticated!","success":true})";
static constexpr auto auth_fail = "Cannot parse JSON for WebSocket authentication";
{
SCOPED_TRACE("Checking multiple authentication tries from same client");
auto client = Client();
Client client;
EXPECT_NO_THROW(client.Connect(ServerAddress(), ServerPort()));
EXPECT_NO_THROW(client.Write(R"({"username": "user" "password": "123"})"));
@ -250,8 +285,8 @@ TEST_F(WebSocketServerTest, WebsocketAuthenticationWithMultipleAttempts) {
}
{
SCOPED_TRACE("Checking multiple authentication tries from different clients");
auto client1 = Client();
auto client2 = Client();
Client client1;
Client client2;
EXPECT_NO_THROW(client1.Connect(ServerAddress(), ServerPort()));
EXPECT_NO_THROW(client2.Connect(ServerAddress(), ServerPort()));
@ -274,12 +309,12 @@ TEST_F(WebSocketServerTest, WebsocketAuthenticationWithMultipleAttempts) {
}
}
TEST_F(WebSocketServerTest, WebsocketAuthenticationFails) {
TEST_F(MonitoringServerTest, AuthenticationFails) {
auth.authentication = false;
static constexpr auto auth_fail = R"({"message":"Authentication failed!","success":false})";
{
auto client = Client();
Client client;
EXPECT_NO_THROW(client.Connect(ServerAddress(), ServerPort()));
EXPECT_NO_THROW(client.Write(R"({"username": "user", "password": "123"})"));
@ -289,12 +324,12 @@ TEST_F(WebSocketServerTest, WebsocketAuthenticationFails) {
}
#ifdef MG_ENTERPRISE
TEST_F(WebSocketServerTest, WebsocketAuthorizationFails) {
TEST_F(MonitoringServerTest, AuthorizationFails) {
auth.authorization = false;
static constexpr auto auth_fail = R"({"message":"Authorization failed!","success":false})";
{
auto client = Client();
Client client;
EXPECT_NO_THROW(client.Connect(ServerAddress(), ServerPort()));
EXPECT_NO_THROW(client.Write(R"({"username": "user", "password": "123"})"));

View File

@ -49,8 +49,8 @@ class QueryCostEstimator : public ::testing::Test {
int symbol_count = 0;
void SetUp() {
ASSERT_TRUE(db.CreateIndex(label));
ASSERT_TRUE(db.CreateIndex(label, property));
ASSERT_FALSE(db.CreateIndex(label).HasError());
ASSERT_FALSE(db.CreateIndex(label, property).HasError());
storage_dba.emplace(db.Access());
dba.emplace(&*storage_dba);
}

View File

@ -531,8 +531,8 @@ TEST(DumpTest, IndicesKeys) {
CreateVertex(&dba, {"Label1", "Label 2"}, {{"p", memgraph::storage::PropertyValue(1)}}, false);
ASSERT_FALSE(dba.Commit().HasError());
}
ASSERT_TRUE(db.CreateIndex(db.NameToLabel("Label1"), db.NameToProperty("prop")));
ASSERT_TRUE(db.CreateIndex(db.NameToLabel("Label 2"), db.NameToProperty("prop `")));
ASSERT_FALSE(db.CreateIndex(db.NameToLabel("Label1"), db.NameToProperty("prop")).HasError());
ASSERT_FALSE(db.CreateIndex(db.NameToLabel("Label 2"), db.NameToProperty("prop `")).HasError());
{
ResultStreamFaker stream(&db);
@@ -558,8 +558,7 @@ TEST(DumpTest, ExistenceConstraints) {
}
{
auto res = db.CreateExistenceConstraint(db.NameToLabel("L`abel 1"), db.NameToProperty("prop"));
ASSERT_TRUE(res.HasValue());
ASSERT_TRUE(res.GetValue());
ASSERT_FALSE(res.HasError());
}
{
@@ -694,16 +693,15 @@ TEST(DumpTest, CheckStateSimpleGraph) {
}
{
auto ret = db.CreateExistenceConstraint(db.NameToLabel("Person"), db.NameToProperty("name"));
ASSERT_TRUE(ret.HasValue());
ASSERT_TRUE(ret.GetValue());
ASSERT_FALSE(ret.HasError());
}
{
auto ret = db.CreateUniqueConstraint(db.NameToLabel("Person"), {db.NameToProperty("name")});
ASSERT_TRUE(ret.HasValue());
ASSERT_EQ(ret.GetValue(), memgraph::storage::UniqueConstraints::CreationStatus::SUCCESS);
}
ASSERT_TRUE(db.CreateIndex(db.NameToLabel("Person"), db.NameToProperty("id")));
ASSERT_TRUE(db.CreateIndex(db.NameToLabel("Person"), db.NameToProperty("unexisting_property")));
ASSERT_FALSE(db.CreateIndex(db.NameToLabel("Person"), db.NameToProperty("id")).HasError());
ASSERT_FALSE(db.CreateIndex(db.NameToLabel("Person"), db.NameToProperty("unexisting_property")).HasError());
const auto &db_initial_state = GetState(&db);
memgraph::storage::Storage db_dump;
@@ -852,19 +850,17 @@ TEST(DumpTest, MultiplePartialPulls) {
memgraph::storage::Storage db;
{
// Create indices
db.CreateIndex(db.NameToLabel("PERSON"), db.NameToProperty("name"));
db.CreateIndex(db.NameToLabel("PERSON"), db.NameToProperty("surname"));
ASSERT_FALSE(db.CreateIndex(db.NameToLabel("PERSON"), db.NameToProperty("name")).HasError());
ASSERT_FALSE(db.CreateIndex(db.NameToLabel("PERSON"), db.NameToProperty("surname")).HasError());
// Create existence constraints
{
auto res = db.CreateExistenceConstraint(db.NameToLabel("PERSON"), db.NameToProperty("name"));
ASSERT_TRUE(res.HasValue());
ASSERT_TRUE(res.GetValue());
ASSERT_FALSE(res.HasError());
}
{
auto res = db.CreateExistenceConstraint(db.NameToLabel("PERSON"), db.NameToProperty("surname"));
ASSERT_TRUE(res.HasValue());
ASSERT_TRUE(res.GetValue());
ASSERT_FALSE(res.HasError());
}
// Create unique constraints

View File

@@ -105,7 +105,7 @@ TEST(QueryPlan, ScanAll) {
TEST(QueryPlan, ScanAllByLabel) {
memgraph::storage::Storage db;
auto label = db.NameToLabel("label");
ASSERT_TRUE(db.CreateIndex(label));
ASSERT_FALSE(db.CreateIndex(label).HasError());
{
auto dba = db.Access();
// Add some unlabeled vertices

View File

@@ -11,6 +11,7 @@
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <variant>
#include "storage/v2/storage.hpp"
@@ -42,29 +43,29 @@ TEST_F(ConstraintsTest, ExistenceConstraintsCreateAndDrop) {
EXPECT_EQ(storage.ListAllConstraints().existence.size(), 0);
{
auto res = storage.CreateExistenceConstraint(label1, prop1);
EXPECT_TRUE(res.HasValue() && res.GetValue());
EXPECT_FALSE(res.HasError());
}
EXPECT_THAT(storage.ListAllConstraints().existence, UnorderedElementsAre(std::make_pair(label1, prop1)));
{
auto res = storage.CreateExistenceConstraint(label1, prop1);
EXPECT_TRUE(res.HasValue() && !res.GetValue());
EXPECT_TRUE(res.HasError());
}
EXPECT_THAT(storage.ListAllConstraints().existence, UnorderedElementsAre(std::make_pair(label1, prop1)));
{
auto res = storage.CreateExistenceConstraint(label2, prop1);
EXPECT_TRUE(res.HasValue() && res.GetValue());
EXPECT_FALSE(res.HasError());
}
EXPECT_THAT(storage.ListAllConstraints().existence,
UnorderedElementsAre(std::make_pair(label1, prop1), std::make_pair(label2, prop1)));
EXPECT_TRUE(storage.DropExistenceConstraint(label1, prop1));
EXPECT_FALSE(storage.DropExistenceConstraint(label1, prop1));
EXPECT_FALSE(storage.DropExistenceConstraint(label1, prop1).HasError());
EXPECT_TRUE(storage.DropExistenceConstraint(label1, prop1).HasError());
EXPECT_THAT(storage.ListAllConstraints().existence, UnorderedElementsAre(std::make_pair(label2, prop1)));
EXPECT_TRUE(storage.DropExistenceConstraint(label2, prop1));
EXPECT_FALSE(storage.DropExistenceConstraint(label2, prop2));
EXPECT_FALSE(storage.DropExistenceConstraint(label2, prop1).HasError());
EXPECT_TRUE(storage.DropExistenceConstraint(label2, prop2).HasError());
EXPECT_EQ(storage.ListAllConstraints().existence.size(), 0);
{
auto res = storage.CreateExistenceConstraint(label2, prop1);
EXPECT_TRUE(res.HasValue() && res.GetValue());
EXPECT_FALSE(res.HasError());
}
EXPECT_THAT(storage.ListAllConstraints().existence, UnorderedElementsAre(std::make_pair(label2, prop1)));
}
@@ -80,7 +81,7 @@ TEST_F(ConstraintsTest, ExistenceConstraintsCreateFailure1) {
{
auto res = storage.CreateExistenceConstraint(label1, prop1);
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(),
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::EXISTENCE, label1, std::set<PropertyId>{prop1}}));
}
{
@@ -92,7 +93,7 @@ TEST_F(ConstraintsTest, ExistenceConstraintsCreateFailure1) {
}
{
auto res = storage.CreateExistenceConstraint(label1, prop1);
EXPECT_TRUE(res.HasValue() && res.GetValue());
EXPECT_FALSE(res.HasError());
}
}
@@ -107,7 +108,7 @@ TEST_F(ConstraintsTest, ExistenceConstraintsCreateFailure2) {
{
auto res = storage.CreateExistenceConstraint(label1, prop1);
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(),
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::EXISTENCE, label1, std::set<PropertyId>{prop1}}));
}
{
@@ -119,7 +120,7 @@ TEST_F(ConstraintsTest, ExistenceConstraintsCreateFailure2) {
}
{
auto res = storage.CreateExistenceConstraint(label1, prop1);
EXPECT_TRUE(res.HasValue() && res.GetValue());
EXPECT_FALSE(res.HasError());
}
}
@@ -127,7 +128,7 @@ TEST_F(ConstraintsTest, ExistenceConstraintsCreateFailure2) {
TEST_F(ConstraintsTest, ExistenceConstraintsViolationOnCommit) {
{
auto res = storage.CreateExistenceConstraint(label1, prop1);
ASSERT_TRUE(res.HasValue() && res.GetValue());
EXPECT_FALSE(res.HasError());
}
{
@@ -137,7 +138,7 @@ TEST_F(ConstraintsTest, ExistenceConstraintsViolationOnCommit) {
auto res = acc.Commit();
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(),
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::EXISTENCE, label1, std::set<PropertyId>{prop1}}));
}
@@ -157,7 +158,7 @@ TEST_F(ConstraintsTest, ExistenceConstraintsViolationOnCommit) {
auto res = acc.Commit();
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(),
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::EXISTENCE, label1, std::set<PropertyId>{prop1}}));
}
@@ -173,7 +174,7 @@ TEST_F(ConstraintsTest, ExistenceConstraintsViolationOnCommit) {
ASSERT_NO_ERROR(acc.Commit());
}
ASSERT_TRUE(storage.DropExistenceConstraint(label1, prop1));
ASSERT_FALSE(storage.DropExistenceConstraint(label1, prop1).HasError());
{
auto acc = storage.Access();
@@ -208,12 +209,12 @@ TEST_F(ConstraintsTest, UniqueConstraintsCreateAndDropAndList) {
EXPECT_THAT(storage.ListAllConstraints().unique,
UnorderedElementsAre(std::make_pair(label1, std::set<PropertyId>{prop1}),
std::make_pair(label2, std::set<PropertyId>{prop1})));
EXPECT_EQ(storage.DropUniqueConstraint(label1, {prop1}), UniqueConstraints::DeletionStatus::SUCCESS);
EXPECT_EQ(storage.DropUniqueConstraint(label1, {prop1}), UniqueConstraints::DeletionStatus::NOT_FOUND);
EXPECT_EQ(storage.DropUniqueConstraint(label1, {prop1}).GetValue(), UniqueConstraints::DeletionStatus::SUCCESS);
EXPECT_EQ(storage.DropUniqueConstraint(label1, {prop1}).GetValue(), UniqueConstraints::DeletionStatus::NOT_FOUND);
EXPECT_THAT(storage.ListAllConstraints().unique,
UnorderedElementsAre(std::make_pair(label2, std::set<PropertyId>{prop1})));
EXPECT_EQ(storage.DropUniqueConstraint(label2, {prop1}), UniqueConstraints::DeletionStatus::SUCCESS);
EXPECT_EQ(storage.DropUniqueConstraint(label2, {prop2}), UniqueConstraints::DeletionStatus::NOT_FOUND);
EXPECT_EQ(storage.DropUniqueConstraint(label2, {prop1}).GetValue(), UniqueConstraints::DeletionStatus::SUCCESS);
EXPECT_EQ(storage.DropUniqueConstraint(label2, {prop2}).GetValue(), UniqueConstraints::DeletionStatus::NOT_FOUND);
EXPECT_EQ(storage.ListAllConstraints().unique.size(), 0);
{
auto res = storage.CreateUniqueConstraint(label2, {prop1});
@@ -239,7 +240,7 @@ TEST_F(ConstraintsTest, UniqueConstraintsCreateFailure1) {
{
auto res = storage.CreateUniqueConstraint(label1, {prop1});
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(),
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set<PropertyId>{prop1}}));
}
@@ -273,7 +274,7 @@ TEST_F(ConstraintsTest, UniqueConstraintsCreateFailure2) {
{
auto res = storage.CreateUniqueConstraint(label1, {prop1});
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(),
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set<PropertyId>{prop1}}));
}
@@ -458,7 +459,7 @@ TEST_F(ConstraintsTest, UniqueConstraintsViolationOnCommit1) {
ASSERT_NO_ERROR(vertex2.SetProperty(prop1, PropertyValue(1)));
auto res = acc.Commit();
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(),
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set<PropertyId>{prop1}}));
}
}
@@ -500,7 +501,7 @@ TEST_F(ConstraintsTest, UniqueConstraintsViolationOnCommit2) {
ASSERT_NO_ERROR(acc2.Commit());
auto res = acc3.Commit();
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(),
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set<PropertyId>{prop1}}));
}
}
@@ -545,11 +546,11 @@ TEST_F(ConstraintsTest, UniqueConstraintsViolationOnCommit3) {
auto res = acc2.Commit();
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(),
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set<PropertyId>{prop1}}));
res = acc3.Commit();
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(),
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set<PropertyId>{prop1}}));
}
}
@@ -620,7 +621,8 @@ TEST_F(ConstraintsTest, UniqueConstraintsLabelAlteration) {
auto res = acc.Commit();
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(), (ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set{prop1}}));
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set{prop1}}));
}
{
@@ -654,7 +656,8 @@ TEST_F(ConstraintsTest, UniqueConstraintsLabelAlteration) {
auto res = acc1.Commit();
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(), (ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set{prop1}}));
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set{prop1}}));
}
}
@@ -669,7 +672,7 @@ TEST_F(ConstraintsTest, UniqueConstraintsPropertySetSize) {
}
// Removing a constraint with empty property set should also fail.
ASSERT_EQ(storage.DropUniqueConstraint(label1, {}), UniqueConstraints::DeletionStatus::EMPTY_PROPERTIES);
ASSERT_EQ(storage.DropUniqueConstraint(label1, {}).GetValue(), UniqueConstraints::DeletionStatus::EMPTY_PROPERTIES);
// Create a set of 33 properties.
std::set<PropertyId> properties;
@@ -686,7 +689,7 @@ TEST_F(ConstraintsTest, UniqueConstraintsPropertySetSize) {
}
// An attempt to delete constraint with too large property set should fail.
ASSERT_EQ(storage.DropUniqueConstraint(label1, properties),
ASSERT_EQ(storage.DropUniqueConstraint(label1, properties).GetValue(),
UniqueConstraints::DeletionStatus::PROPERTIES_SIZE_LIMIT_EXCEEDED);
// Remove one property from the set.
@@ -702,7 +705,7 @@ TEST_F(ConstraintsTest, UniqueConstraintsPropertySetSize) {
EXPECT_THAT(storage.ListAllConstraints().unique, UnorderedElementsAre(std::make_pair(label1, properties)));
// Removing a constraint with 32 properties should succeed.
ASSERT_EQ(storage.DropUniqueConstraint(label1, properties), UniqueConstraints::DeletionStatus::SUCCESS);
ASSERT_EQ(storage.DropUniqueConstraint(label1, properties).GetValue(), UniqueConstraints::DeletionStatus::SUCCESS);
ASSERT_TRUE(storage.ListAllConstraints().unique.empty());
}
@@ -749,7 +752,7 @@ TEST_F(ConstraintsTest, UniqueConstraintsMultipleProperties) {
ASSERT_NO_ERROR(vertex2->SetProperty(prop2, PropertyValue(2)));
auto res = acc.Commit();
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(),
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set<PropertyId>{prop1, prop2}}));
}
@@ -861,7 +864,8 @@ TEST_F(ConstraintsTest, UniqueConstraintsInsertRemoveAbortInsert) {
auto res = acc.Commit();
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(), (ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set{prop1, prop2}}));
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set{prop1, prop2}}));
}
}
@@ -900,7 +904,8 @@ TEST_F(ConstraintsTest, UniqueConstraintsDeleteVertexSetProperty) {
auto res = acc1.Commit();
ASSERT_TRUE(res.HasError());
EXPECT_EQ(res.GetError(), (ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set{prop1}}));
EXPECT_EQ(std::get<ConstraintViolation>(res.GetError()),
(ConstraintViolation{ConstraintViolation::Type::UNIQUE, label1, std::set{prop1}}));
ASSERT_NO_ERROR(acc2.Commit());
}
@@ -922,7 +927,8 @@ TEST_F(ConstraintsTest, UniqueConstraintsInsertDropInsert) {
ASSERT_NO_ERROR(acc.Commit());
}
ASSERT_EQ(storage.DropUniqueConstraint(label1, {prop2, prop1}), UniqueConstraints::DeletionStatus::SUCCESS);
ASSERT_EQ(storage.DropUniqueConstraint(label1, {prop2, prop1}).GetValue(),
UniqueConstraints::DeletionStatus::SUCCESS);
{
auto acc = storage.Access();
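A second pattern in the constraints hunks above: comparisons against `res.GetError()` become `std::get<ConstraintViolation>(res.GetError())`, which suggests the error side of the result grew from a single `ConstraintViolation` into a `std::variant` of several error kinds. A sketch of why the extraction step is now required — the struct fields and the second variant alternative are assumptions for illustration, not Memgraph's real definitions:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <variant>

// Assumed shapes, loosely modeled on the identifiers in the diff.
struct ConstraintViolation {
  enum class Type { EXISTENCE, UNIQUE };
  Type type;
  int label;                 // stand-in for a LabelId
  std::set<int> properties;  // stand-in for std::set<PropertyId>
  bool operator==(const ConstraintViolation &other) const {
    return type == other.type && label == other.label && properties == other.properties;
  }
};
struct ReplicationError {};  // hypothetical second error alternative

using StorageError = std::variant<ConstraintViolation, ReplicationError>;

// Before: a failing commit's error *was* a ConstraintViolation and compared
// directly. After: it is one alternative of a variant, so the test must first
// extract it with std::get<ConstraintViolation>(...).
StorageError FailCommit() {
  return ConstraintViolation{ConstraintViolation::Type::UNIQUE, 1, {2}};
}
```

`std::get` throws `std::bad_variant_access` if the variant holds a different alternative, so the `EXPECT_EQ` lines in the diff implicitly also assert that the commit failed with a constraint violation rather than some other error kind.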

View File

@@ -74,10 +74,10 @@ class DurabilityTest : public ::testing::TestWithParam<bool> {
auto et2 = store->NameToEdgeType("base_et2");
// Create label index.
ASSERT_TRUE(store->CreateIndex(label_unindexed));
ASSERT_FALSE(store->CreateIndex(label_unindexed).HasError());
// Create label+property index.
ASSERT_TRUE(store->CreateIndex(label_indexed, property_id));
ASSERT_FALSE(store->CreateIndex(label_indexed, property_id).HasError());
// Create existence constraint.
ASSERT_FALSE(store->CreateExistenceConstraint(label_unindexed, property_id).HasError());
@@ -138,10 +138,10 @@ class DurabilityTest : public ::testing::TestWithParam<bool> {
auto et4 = store->NameToEdgeType("extended_et4");
// Create label index.
ASSERT_TRUE(store->CreateIndex(label_unused));
ASSERT_FALSE(store->CreateIndex(label_unused).HasError());
// Create label+property index.
ASSERT_TRUE(store->CreateIndex(label_indexed, property_count));
ASSERT_FALSE(store->CreateIndex(label_indexed, property_count).HasError());
// Create existence constraint.
ASSERT_FALSE(store->CreateExistenceConstraint(label_unused, property_count).HasError());
@@ -1433,17 +1433,17 @@ TEST_P(DurabilityTest, WalCreateAndRemoveEverything) {
CreateExtendedDataset(&store);
auto indices = store.ListAllIndices();
for (const auto &index : indices.label) {
ASSERT_TRUE(store.DropIndex(index));
ASSERT_FALSE(store.DropIndex(index).HasError());
}
for (const auto &index : indices.label_property) {
ASSERT_TRUE(store.DropIndex(index.first, index.second));
ASSERT_FALSE(store.DropIndex(index.first, index.second).HasError());
}
auto constraints = store.ListAllConstraints();
for (const auto &constraint : constraints.existence) {
ASSERT_TRUE(store.DropExistenceConstraint(constraint.first, constraint.second));
ASSERT_FALSE(store.DropExistenceConstraint(constraint.first, constraint.second).HasError());
}
for (const auto &constraint : constraints.unique) {
ASSERT_EQ(store.DropUniqueConstraint(constraint.first, constraint.second),
ASSERT_EQ(store.DropUniqueConstraint(constraint.first, constraint.second).GetValue(),
memgraph::storage::UniqueConstraints::DeletionStatus::SUCCESS);
}
auto acc = store.Access();
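The third pattern, visible in the `DropUniqueConstraint` hunks: the `DeletionStatus` enum that used to be returned directly is now wrapped in the result object, so call sites compare `res.GetValue()` against the same enum values as before. A sketch of that wrapping, assuming a simplified wrapper and drop function invented for illustration:

```cpp
#include <cassert>
#include <optional>

enum class DeletionStatus { SUCCESS, NOT_FOUND, EMPTY_PROPERTIES };

// Hypothetical wrapper mirroring the diff: the enum that used to be the
// return value now travels inside a result object and is read via GetValue().
struct DropResult {
  std::optional<DeletionStatus> status;
  bool HasError() const { return !status.has_value(); }
  DeletionStatus GetValue() const { return *status; }
};

DropResult DropUniqueConstraintSketch(bool constraint_exists) {
  // Old: return constraint_exists ? DeletionStatus::SUCCESS
  //                               : DeletionStatus::NOT_FOUND;
  // New: the same status, wrapped, so callers write res.GetValue() == ...
  return DropResult{constraint_exists ? DeletionStatus::SUCCESS : DeletionStatus::NOT_FOUND};
}
```

Note that "not found" stays a status value rather than becoming an error, which is why the rewritten assertions call `GetValue()` unconditionally instead of checking `HasError()` first.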

View File

@@ -169,7 +169,7 @@ TEST(StorageV2Gc, Indices) {
memgraph::storage::Storage storage(memgraph::storage::Config{
.gc = {.type = memgraph::storage::Config::Gc::Type::PERIODIC, .interval = std::chrono::milliseconds(100)}});
ASSERT_TRUE(storage.CreateIndex(storage.NameToLabel("label")));
ASSERT_FALSE(storage.CreateIndex(storage.NameToLabel("label")).HasError());
{
auto acc0 = storage.Access();

View File

@@ -78,7 +78,7 @@ TEST_F(IndexTest, LabelIndexCreate) {
ASSERT_NO_ERROR(acc.Commit());
}
EXPECT_TRUE(storage.CreateIndex(label1));
EXPECT_FALSE(storage.CreateIndex(label1).HasError());
{
auto acc = storage.Access();
@@ -163,7 +163,7 @@ TEST_F(IndexTest, LabelIndexDrop) {
ASSERT_NO_ERROR(acc.Commit());
}
EXPECT_TRUE(storage.CreateIndex(label1));
EXPECT_FALSE(storage.CreateIndex(label1).HasError());
{
auto acc = storage.Access();
@@ -171,14 +171,14 @@ TEST_F(IndexTest, LabelIndexDrop) {
EXPECT_THAT(GetIds(acc.Vertices(label1, View::NEW), View::NEW), UnorderedElementsAre(1, 3, 5, 7, 9));
}
EXPECT_TRUE(storage.DropIndex(label1));
EXPECT_FALSE(storage.DropIndex(label1).HasError());
{
auto acc = storage.Access();
EXPECT_FALSE(acc.LabelIndexExists(label1));
}
EXPECT_EQ(storage.ListAllIndices().label.size(), 0);
EXPECT_FALSE(storage.DropIndex(label1));
EXPECT_TRUE(storage.DropIndex(label1).HasError());
{
auto acc = storage.Access();
EXPECT_FALSE(acc.LabelIndexExists(label1));
@@ -194,7 +194,7 @@ TEST_F(IndexTest, LabelIndexDrop) {
ASSERT_NO_ERROR(acc.Commit());
}
EXPECT_TRUE(storage.CreateIndex(label1));
EXPECT_FALSE(storage.CreateIndex(label1).HasError());
{
auto acc = storage.Access();
EXPECT_TRUE(acc.LabelIndexExists(label1));
@@ -227,8 +227,8 @@ TEST_F(IndexTest, LabelIndexBasic) {
// 3. Remove Label1 from odd numbered vertices, and add it to even numbered
// vertices.
// 4. Delete even numbered vertices.
EXPECT_TRUE(storage.CreateIndex(label1));
EXPECT_TRUE(storage.CreateIndex(label2));
EXPECT_FALSE(storage.CreateIndex(label1).HasError());
EXPECT_FALSE(storage.CreateIndex(label2).HasError());
auto acc = storage.Access();
EXPECT_THAT(storage.ListAllIndices().label, UnorderedElementsAre(label1, label2));
@@ -292,8 +292,8 @@ TEST_F(IndexTest, LabelIndexDuplicateVersions) {
// By removing labels and adding them again we create duplicate entries for
// the same vertex in the index (they only differ by the timestamp). This test
// checks that duplicates are properly filtered out.
EXPECT_TRUE(storage.CreateIndex(label1));
EXPECT_TRUE(storage.CreateIndex(label2));
EXPECT_FALSE(storage.CreateIndex(label1).HasError());
EXPECT_FALSE(storage.CreateIndex(label2).HasError());
{
auto acc = storage.Access();
@@ -329,8 +329,8 @@ TEST_F(IndexTest, LabelIndexDuplicateVersions) {
// NOLINTNEXTLINE(hicpp-special-member-functions)
TEST_F(IndexTest, LabelIndexTransactionalIsolation) {
// Check that transactions only see entries they are supposed to see.
EXPECT_TRUE(storage.CreateIndex(label1));
EXPECT_TRUE(storage.CreateIndex(label2));
EXPECT_FALSE(storage.CreateIndex(label1).HasError());
EXPECT_FALSE(storage.CreateIndex(label2).HasError());
auto acc_before = storage.Access();
auto acc = storage.Access();
@@ -356,8 +356,8 @@ TEST_F(IndexTest, LabelIndexTransactionalIsolation) {
// NOLINTNEXTLINE(hicpp-special-member-functions)
TEST_F(IndexTest, LabelIndexCountEstimate) {
EXPECT_TRUE(storage.CreateIndex(label1));
EXPECT_TRUE(storage.CreateIndex(label2));
EXPECT_FALSE(storage.CreateIndex(label1).HasError());
EXPECT_FALSE(storage.CreateIndex(label2).HasError());
auto acc = storage.Access();
for (int i = 0; i < 20; ++i) {
@@ -372,7 +372,7 @@ TEST_F(IndexTest, LabelIndexCountEstimate) {
// NOLINTNEXTLINE(hicpp-special-member-functions)
TEST_F(IndexTest, LabelPropertyIndexCreateAndDrop) {
EXPECT_EQ(storage.ListAllIndices().label_property.size(), 0);
EXPECT_TRUE(storage.CreateIndex(label1, prop_id));
EXPECT_FALSE(storage.CreateIndex(label1, prop_id).HasError());
{
auto acc = storage.Access();
EXPECT_TRUE(acc.LabelPropertyIndexExists(label1, prop_id));
@@ -382,10 +382,10 @@ TEST_F(IndexTest, LabelPropertyIndexCreateAndDrop) {
auto acc = storage.Access();
EXPECT_FALSE(acc.LabelPropertyIndexExists(label2, prop_id));
}
EXPECT_FALSE(storage.CreateIndex(label1, prop_id));
EXPECT_TRUE(storage.CreateIndex(label1, prop_id).HasError());
EXPECT_THAT(storage.ListAllIndices().label_property, UnorderedElementsAre(std::make_pair(label1, prop_id)));
EXPECT_TRUE(storage.CreateIndex(label2, prop_id));
EXPECT_FALSE(storage.CreateIndex(label2, prop_id).HasError());
{
auto acc = storage.Access();
EXPECT_TRUE(acc.LabelPropertyIndexExists(label2, prop_id));
@@ -393,15 +393,15 @@ TEST_F(IndexTest, LabelPropertyIndexCreateAndDrop) {
EXPECT_THAT(storage.ListAllIndices().label_property,
UnorderedElementsAre(std::make_pair(label1, prop_id), std::make_pair(label2, prop_id)));
EXPECT_TRUE(storage.DropIndex(label1, prop_id));
EXPECT_FALSE(storage.DropIndex(label1, prop_id).HasError());
{
auto acc = storage.Access();
EXPECT_FALSE(acc.LabelPropertyIndexExists(label1, prop_id));
}
EXPECT_THAT(storage.ListAllIndices().label_property, UnorderedElementsAre(std::make_pair(label2, prop_id)));
EXPECT_FALSE(storage.DropIndex(label1, prop_id));
EXPECT_TRUE(storage.DropIndex(label1, prop_id).HasError());
EXPECT_TRUE(storage.DropIndex(label2, prop_id));
EXPECT_FALSE(storage.DropIndex(label2, prop_id).HasError());
{
auto acc = storage.Access();
EXPECT_FALSE(acc.LabelPropertyIndexExists(label2, prop_id));
@@ -416,8 +416,8 @@ TEST_F(IndexTest, LabelPropertyIndexCreateAndDrop) {
// NOLINTNEXTLINE(hicpp-special-member-functions)
TEST_F(IndexTest, LabelPropertyIndexBasic) {
storage.CreateIndex(label1, prop_val);
storage.CreateIndex(label2, prop_val);
EXPECT_FALSE(storage.CreateIndex(label1, prop_val).HasError());
EXPECT_FALSE(storage.CreateIndex(label2, prop_val).HasError());
auto acc = storage.Access();
EXPECT_THAT(GetIds(acc.Vertices(label1, prop_val, View::OLD), View::OLD), IsEmpty());
@@ -476,7 +476,7 @@ TEST_F(IndexTest, LabelPropertyIndexBasic) {
// NOLINTNEXTLINE(hicpp-special-member-functions)
TEST_F(IndexTest, LabelPropertyIndexDuplicateVersions) {
storage.CreateIndex(label1, prop_val);
EXPECT_FALSE(storage.CreateIndex(label1, prop_val).HasError());
{
auto acc = storage.Access();
for (int i = 0; i < 5; ++i) {
@@ -511,7 +511,7 @@ TEST_F(IndexTest, LabelPropertyIndexDuplicateVersions) {
// NOLINTNEXTLINE(hicpp-special-member-functions)
TEST_F(IndexTest, LabelPropertyIndexTransactionalIsolation) {
storage.CreateIndex(label1, prop_val);
EXPECT_FALSE(storage.CreateIndex(label1, prop_val).HasError());
auto acc_before = storage.Access();
auto acc = storage.Access();
@@ -545,7 +545,7 @@ TEST_F(IndexTest, LabelPropertyIndexFiltering) {
// We also have a mix of doubles and integers to verify that they are sorted
// properly.
storage.CreateIndex(label1, prop_val);
EXPECT_FALSE(storage.CreateIndex(label1, prop_val).HasError());
{
auto acc = storage.Access();
@@ -603,7 +603,7 @@ TEST_F(IndexTest, LabelPropertyIndexFiltering) {
// NOLINTNEXTLINE(hicpp-special-member-functions)
TEST_F(IndexTest, LabelPropertyIndexCountEstimate) {
storage.CreateIndex(label1, prop_val);
EXPECT_FALSE(storage.CreateIndex(label1, prop_val).HasError());
auto acc = storage.Access();
for (int i = 1; i <= 10; ++i) {
@@ -625,7 +625,7 @@ TEST_F(IndexTest, LabelPropertyIndexCountEstimate) {
}
TEST_F(IndexTest, LabelPropertyIndexMixedIteration) {
storage.CreateIndex(label1, prop_val);
EXPECT_FALSE(storage.CreateIndex(label1, prop_val).HasError());
const std::array temporals{TemporalData{TemporalType::Date, 23}, TemporalData{TemporalType::Date, 28},
TemporalData{TemporalType::LocalDateTime, 20}};

View File

@@ -210,8 +210,8 @@ TEST_F(ReplicationTest, BasicSynchronousReplicationTest) {
const auto *property = "property";
const auto *property_extra = "property_extra";
{
ASSERT_TRUE(main_store.CreateIndex(main_store.NameToLabel(label)));
ASSERT_TRUE(main_store.CreateIndex(main_store.NameToLabel(label), main_store.NameToProperty(property)));
ASSERT_FALSE(main_store.CreateIndex(main_store.NameToLabel(label)).HasError());
ASSERT_FALSE(main_store.CreateIndex(main_store.NameToLabel(label), main_store.NameToProperty(property)).HasError());
ASSERT_FALSE(
main_store.CreateExistenceConstraint(main_store.NameToLabel(label), main_store.NameToProperty(property))
.HasError());
@@ -241,13 +241,15 @@ TEST_F(ReplicationTest, BasicSynchronousReplicationTest) {
// existence constraint drop
// unique constriant drop
{
ASSERT_TRUE(main_store.DropIndex(main_store.NameToLabel(label)));
ASSERT_TRUE(main_store.DropIndex(main_store.NameToLabel(label), main_store.NameToProperty(property)));
ASSERT_TRUE(main_store.DropExistenceConstraint(main_store.NameToLabel(label), main_store.NameToProperty(property)));
ASSERT_EQ(
main_store.DropUniqueConstraint(main_store.NameToLabel(label), {main_store.NameToProperty(property),
main_store.NameToProperty(property_extra)}),
memgraph::storage::UniqueConstraints::DeletionStatus::SUCCESS);
ASSERT_FALSE(main_store.DropIndex(main_store.NameToLabel(label)).HasError());
ASSERT_FALSE(main_store.DropIndex(main_store.NameToLabel(label), main_store.NameToProperty(property)).HasError());
ASSERT_FALSE(main_store.DropExistenceConstraint(main_store.NameToLabel(label), main_store.NameToProperty(property))
.HasError());
ASSERT_EQ(main_store
.DropUniqueConstraint(main_store.NameToLabel(label), {main_store.NameToProperty(property),
main_store.NameToProperty(property_extra)})
.GetValue(),
memgraph::storage::UniqueConstraints::DeletionStatus::SUCCESS);
}
{