Summary:
This diff escapes label names (and edge type/property names) that contain
spaces or special characters, to prevent possible openCypher injections.
Consider an example where the label name is 'hello :world'. `DUMP DATABASE`
used to return a query that creates the node (u:hello :world), i.e. a node
with two labels, 'hello' and 'world'. This fix escapes the name so that the
generated query creates a node with exactly one label, as expected:
```
(u:`hello :world`)
```
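For illustration, a minimal sketch of the escaping rule (`escape_name` is a
hypothetical helper, not the actual implementation): the name is wrapped in
backticks, and any literal backtick inside it is doubled, per openCypher.
```
def escape_name(name):
    # Wrap the identifier in backticks; escape embedded backticks by
    # doubling them, as openCypher prescribes.
    return "`{}`".format(name.replace("`", "``"))

print(escape_name("hello :world"))  # prints: `hello :world`
```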
Reviewers: mferencevic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2774
Summary:
The storage now uses a file in the data directory (`.lock`) to determine
whether there is another instance of the storage running with the same data
directory. This notifies the user/administrator that the system is running
in an unsupported configuration.
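The approach is roughly the following (a Python sketch using a POSIX `flock`
on the lock file; the actual implementation lives in the C++ storage code and
may differ in detail):
```
import fcntl
import os
import sys

def lock_data_directory(data_directory):
    # Take an exclusive, non-blocking lock on <data_directory>/.lock.
    fd = os.open(os.path.join(data_directory, ".lock"),
                 os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        sys.exit("Another instance is already using this data directory!")
    return fd  # the descriptor must stay open for the process lifetime
```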
Reviewers: teon.banek, ipaljak
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2719
Summary:
The importer now supports all of the flags offered by the modern Neo4j CSV
importer.
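A hypothetical invocation in the Neo4j style (the flag names below follow the
Neo4j importer and are shown for illustration only; consult
`mg_import_csv --help` for the exact set):
```
mg_import_csv --nodes=nodes.csv --relationships=relationships.csv \
    --delimiter="," --quote="\"" --array-delimiter=";"
```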
Reviewers: teon.banek, ipaljak
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2709
Summary:
This diff contains the functionality necessary to save and restore unique
constraint operations. The change is backward compatible with the previous
snapshot/WAL version. Integration tests for migration from older snapshot and
WAL versions are also included.
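For reference, the kind of operation that is now saved and restored, e.g.:
```
CREATE CONSTRAINT ON (n:Node) ASSERT n.id IS UNIQUE;
```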
Reviewers: mferencevic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2680
Summary:
The new CSV parser in `mg_import_csv` behaves the same as the standard Python
CSV parser when importing a CSV file. Tests are added for all CSV field
edge cases.
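For example, with the default dialect, Python's `csv` module parses quoted
fields that contain delimiters and doubled quotes like this, and the new
parser is expected to produce the same result:
```
import csv
import io

data = 'id,name\n1,"Hello, ""World"""\n'
print(list(csv.reader(io.StringIO(data))))
# [['id', 'name'], ['1', 'Hello, "World"']]
```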
Reviewers: teon.banek, ipaljak
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2702
Summary:
This diff restores (and fixes) the old mg_import_csv implementation. The
importer now supports the new storage engine.
Reviewers: teon.banek, ipaljak
Reviewed By: teon.banek, ipaljak
Subscribers: buda, pullbot
Differential Revision: https://phabricator.memgraph.io/D2690
Summary: The test now checks whether the cluster is still alive at the end.
Reviewers: mferencevic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2343
Summary:
The test now uses `ha_client`. Logging is also modified to output 1-indexed
worker ids.
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2113
Summary:
- Included HA client
- Fixed log messages to be 1-indexed
- Added id properties to created nodes for easier debugging
- Create and check steps are now executed 20 times each
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2111
Summary:
- Server ids are now 1-indexed in logs
- All created nodes have distinct id properties, which helps with debugging
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2109
Summary:
This test checks the correctness of the leader election process when it's
decoupled from log replication. In other words, in this test we do not change
the state of the database, i.e., the Raft log remains empty.
The test proceeds as follows for clusters of size 3 and 5 (see the sketch
after the steps):
1. Start a random subset of workers in the cluster
2. Check if the leader has been elected
3. Kill all living workers
4. GOTO 1 and repeat 10 times
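A rough sketch of that loop (`start_worker`, `kill_worker` and
`leader_elected` are hypothetical harness hooks; the real test drives actual
worker processes):
```
import random

def run_leader_election_test(cluster_size, start_worker, kill_worker,
                             leader_elected):
    for _ in range(10):
        # 1. Start a random subset of workers in the cluster.
        alive = random.sample(range(cluster_size),
                              random.randint(1, cluster_size))
        for worker in alive:
            start_worker(worker)
        # 2. A leader can only be elected while a majority of the
        #    cluster is alive.
        assert leader_elected() == (len(alive) > cluster_size // 2)
        # 3. Kill all living workers.
        for worker in alive:
            kill_worker(worker)
```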
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2105
Summary:
HA should now support constraints in the same way the SM version does.
So far I have only tested this manually, but I plan to add a new integration
test for it as well.
Reviewers: ipaljak, vkasljevic, mferencevic
Reviewed By: ipaljak, mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2083
Summary:
At the moment, this test will fail. There are currently two issues with it:
- Cap'n Proto limit exception might occur
- `db.Reset()` method can hang (deadlock)
The first issue should be resolved by the migration to SLK, and we are
currently working on the second. The test will land after those are fixed.
Reviewers: msantl, mferencevic
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2006
Summary:
Added a new config parameter: replication timeout. This parameter sets an
upper limit on the duration of the replication phase; once the timeout is
exceeded, the transaction engine stops accepting new transactions (sketched
below).
We could experience this timeout in two cases:
1. a network partition
2. majority of the cluster stops working
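A minimal sketch of the mechanism (illustrative names only; the real engine
is C++ and more involved):
```
import time

class TransactionEngine:
    def __init__(self, replication_timeout_sec):
        self.replication_timeout = replication_timeout_sec
        self.last_replication_success = time.monotonic()

    def begin_transaction(self):
        stalled = (time.monotonic() - self.last_replication_success
                   > self.replication_timeout)
        if stalled:
            raise RuntimeError(
                "replication timed out; not accepting new transactions")
        # ... otherwise create and return a new transaction ...
```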
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1893
Summary:
Added index creation and deletion handling in StateDelta.
Also included is an integration test that creates an index and makes sure it
gets replicated; the test kills each peer in turn, eventually causing a leader
re-election.
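The replicated operation here is an index creation of the form:
```
CREATE INDEX ON :Label(property);
```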
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1886
Summary:
I've refactored the integration test for HA so we can reuse the common
parts like starting/stopping workers.
I've also added a test that triggers log compaction and checks that the
transferred snapshot is the same as the original one.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1847
Summary:
In this part of log compaction for Raft, I've implemented snapshotting
and snapshot recovery. I've also refactored the code a bit, so `RaftServer` now
has a pointer to `GraphDb` and can perform some operations on its own.
Log compaction requires some further work. Since snapshotting isn't
synchronized between peers, and each peer can progress at its own pace, once
we've compacted the log so that the next entry to be sent to peer `x` is no
longer available, we need to send the whole snapshot over the wire. This means
that the next part will contain the `InstallSnapshotRPC`, and then maybe one
more that implements the logic of choosing between sending a `LogEntry` and
the whole snapshot.
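The eventual decision logic should look roughly like this (a sketch with
hypothetical names, not the actual `RaftServer` code):
```
def replicate_to(peer, next_index, last_compacted_index, log,
                 send_snapshot, send_entries):
    # `send_snapshot` / `send_entries` stand in for InstallSnapshotRPC and
    # the regular log-replication RPC, respectively.
    if next_index[peer] <= last_compacted_index:
        # The entries this peer still needs were compacted away, so the
        # only remaining option is to ship the whole snapshot.
        send_snapshot(peer)
    else:
        send_entries(peer, log[next_index[peer]:])
```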
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1834
Summary: Run the basic test with two cluster sizes, 3 and 5.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1794
Summary:
Created a new integration test for Raft protocol.
The test iterates through the Raft cluster and does the following:
* kill machine `X`
* execute a query
* bring `X` back to life
The first step is to insert a vertex into the cluster, and the last step is to
check whether the cluster has all the data.
I also edited some of the Raft core files because this test surfaced some bugs.
The `tester` binary is a hacked version of the HA client, as are the parts of
the code that refuse to execute a query if the machine is not in `Leader` mode.
Those parts will go away once we have a proper HA client.
I've run `runner.py` for a while (215 times)
```
while ./runner.py &> log.txt; do echo -n "."; done
```
and it didn't break.
Reviewers: ipaljak, mferencevic
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1788
Summary:
This diff splits single node and distributed storage from each other.
Currently all of the storage code is copied into two directories (one single
node, one distributed). The logic used in the storage implementation isn't
touched; it will be refactored in subsequent diffs.
To clean the working directory after this diff you should execute:
```
rm database/state_delta.capnp
rm database/state_delta.hpp
rm storage/concurrent_id_mapper_rpc_messages.capnp
rm storage/concurrent_id_mapper_rpc_messages.hpp
```
Reviewers: teon.banek, buda, msantl
Reviewed By: teon.banek, msantl
Subscribers: teon.banek, pullbot
Differential Revision: https://phabricator.memgraph.io/D1625
Summary:
This diff changes the RPC layer to directly return `TResponse` to the user when
issuing a `Call<...>` RPC call. The call now throws an exception on failure
(instead of returning `nullopt` as it did before).
All servers (network, RPC and distributed) are set to have explicit `Shutdown`
methods so that a controlled shutdown can always be performed. The object
destructors now have `CHECK`s to enforce that the `AwaitShutdown` methods were
called.
The distributed Memgraph is changed so that none of the binaries
(master/workers) crash when there is a communication failure. Instead, the
whole cluster starts a graceful shutdown when a persistent communication error
is detected.
Transient errors are allowed during execution. The transaction that errored out
will be aborted on the whole cluster. The cluster state is managed using a new
Heartbeat RPC call.
Reviewers: buda, teon.banek, msantl
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1604
Summary:
This change improves detection of erroneous situations when starting a
distributed cluster on a single machine. It asserts that the user hasn't
started more memgraph nodes on the same machine with the same durability
directory. Also, this diff improves worker registration: workers no longer
need explicitly set IP addresses, because the master deduces them from the
connecting IP when a worker registers.
Reviewers: teon.banek, buda, msantl
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1582