Summary:
This is unfortunately needed under the C++17 standard, so that the
allocator is correctly propagated to elements of pair which respect the
"Uses Allocator" protocol. The C++20 standard resolves this issue, but we
still have a long way to go before it is released and implemented by
compiler and standard library vendors.
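For illustration, a minimal sketch of the behaviour in question, using standard `std::pmr` facilities rather than our own allocator: a `pmr` container must hand its memory resource down to `pair` elements whose members are allocator-aware.
```
#include <memory_resource>
#include <string>
#include <utility>
#include <vector>

int main() {
  std::pmr::monotonic_buffer_resource pool;
  // Uses-allocator construction: the vector hands its memory resource to
  // each pair, which must forward it to both pmr::string members.
  std::pmr::vector<std::pair<std::pmr::string, std::pmr::string>> v(&pool);
  v.emplace_back("some key", "some value");
  // With correct propagation both strings allocate from `pool`;
  // without it they silently fall back to the default resource.
}
```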
Reviewers: mtomic, llugovic, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2107
Summary:
The test now uses `ha_client`. Logging is also modified to output 1-indexed
worker ids.
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2113
Summary:
- Included HA client
- Fixed log messages to be 1-indexed
- Added id properties to created nodes for easier debugging
- Create and check steps are now executed 20 times each
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2111
Summary:
- Server ids are now 1-indexed in logs
- All created nodes have distinct id properties, which helps with debugging
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2109
Summary:
This test checks the correctness of the leader election process when it's
decoupled from log replication. In other words, in this test we do not change
the state of the database, i.e., the Raft log remains empty.
The test proceeds as follows for clusters of size 3 and 5:
1. Start a random subset of workers in the cluster
2. Check if the leader has been elected
3. Kill all living workers
4. GOTO 1 and repeat 10 times
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2105
Summary:
This diff only introduces a MemoryResource member to TypedValue and correctly
propagates it through various constructors and assignments. At the moment,
MemoryResource is not used to actually allocate anything.
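Roughly, the pattern looks like the following simplified sketch, with `std::pmr::memory_resource` standing in for our own `MemoryResource` (the real TypedValue is a tagged union; only the resource-carrying part is shown):
```
#include <memory_resource>

class TypedValue {
 public:
  explicit TypedValue(std::pmr::memory_resource *memory =
                          std::pmr::get_default_resource())
      : memory_(memory) {}

  // Allocator-extended copy: the *target* resource is chosen by the caller;
  // memory resources never propagate on copy.
  TypedValue(const TypedValue &other, std::pmr::memory_resource *memory)
      : memory_(memory) { /* copy the payload, allocating from memory_ */ }

  TypedValue(const TypedValue &other)
      : TypedValue(other, std::pmr::get_default_resource()) {}

  std::pmr::memory_resource *GetMemoryResource() const { return memory_; }

 private:
  std::pmr::memory_resource *memory_;  // carried, but not yet used to allocate
};
```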
Reviewers: mtomic, llugovic, mferencevic, msantl
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2085
Summary:
HA should now support constraints in the same way the single node version does.
I've only tested this manually, but I plan to add a new integration test for
this as well.
Reviewers: ipaljak, vkasljevic, mferencevic
Reviewed By: ipaljak, mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2083
Summary:
At the moment, this test will fail. There are currently two issues with it:
- Cap'n Proto limit exception might occur
- `db.Reset()` method can hang (deadlock)
The first issue should be resolved by migration to SLK and we are currently
working on the second one. The test will land after those are fixed.
Reviewers: msantl, mferencevic
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2006
Summary:
Edge creation in tests used to happen in a different
transaction than vertex creation, which caused unexpected
behaviour. With this change, both vertices and edges are
created in the same transaction.
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2089
Summary:
Preparing unique constraint code to be implemented in HA. To do so, I'm
moving everything to the `storage/common/` folder. I also added a new header,
`gid.hpp`, which does an `#ifdef` include of the correct `gid.hpp` based on the
product.
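The dispatching header looks roughly like the following sketch (macro names and include paths are illustrative, not the actual ones):
```
// storage/common/gid.hpp -- hypothetical sketch of the product dispatch.
#pragma once

#ifdef MG_SINGLE_NODE_HA
#include "storage/single_node_ha/gid.hpp"
#else
#include "storage/single_node/gid.hpp"
#endif
```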
Reviewers: ipaljak, mferencevic, vkasljevic, teon.banek
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2079
Summary:
During its leadership, a peer can receive RPC messages from other peers telling it that its reign is over.
The problem arises when this happens during a transaction commit.
This is handled in the following way.
If we're the current leader and we want to commit a transaction, we need to make sure the Raft log is replicated before we can tell the client that the transaction is committed.
During that wait, we can only notice that the replication is taking too long, and we report that with `LOG(WARNING)` messages.
If the Raft mode changes during the wait, our Raft implementation will internally commit this transaction, but it won't be able to acquire the Raft lock because `db.Reset` has been called.
This is why there is a manual lock acquire. If we detect that `db.Reset` has been called, we throw an `UnexpectedLeaderChangeException` to the client.
Another issue concerns long-running transactions: if someone kills a `memgraph_ha` instance during a commit, the transaction will have the `abort` hint set. This causes `src/query/operator.cpp` to throw a `HintedAbortError`. We need to catch this during shutdown because, from the user's perspective, `memgraph_ha` isn't dead, and the transaction wasn't aborted because it took too long; catching it lets us differentiate between those two cases.
Reviewers: mferencevic, ipaljak
Reviewed By: mferencevic, ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1956
Summary:
Micro benchmarks show no change compared to global new & delete. This is
to be expected, because Unwind relies only on `std::vector` which ought
to reserve the memory in reasonable chunks.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2064
Summary:
Micro benchmarks show an improvement to performance of about 10%
compared to global new & delete.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2061
Summary:
Micro benchmarks show that MonotonicBufferResource improves performance
by a factor of 1.5.
Reviewers: mtomic, mferencevic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2048
Summary:
Benchmarks show minor improvements. Perhaps it makes sense at some later date
to use another allocator for things lasting only in a single `Pull`.
Reviewers: mferencevic, mtomic, llugovic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2018
Summary:
Unfortunately, the written micro benchmark reports only minor
improvements compared to the default allocator. The results are in some
cases even a tiny bit worse.
Reviewers: mtomic, mferencevic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2039
Summary:
This change introduces dumping of index keys. During the dump process,
an internal label is assigned to each vertex, and an index on the vertex's
internal property id is created for faster matching during edge creation.
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: msantl, pullbot
Differential Revision: https://phabricator.memgraph.io/D2046
Summary:
Prior to this change, DumpGenerator returned a single huge query that
dumped the entire graph. This change splits that query into multiple
queries, each dumping a single vertex/edge. For easier vertex matching
when dumping edges, an internal property id is assigned to each vertex and
removed after the whole graph is dumped.
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2038
Summary:
According to the written benchmark, using MonotonicBufferResource yields
significant improvements to performance of Distinct. The setup fills the
database with vertices depending on the benchmark state. No edges are
created. Then we run DISTINCT on that. Since each vertex is unique, we
will store everything in the `DistinctCursor::seen_rows_`, which is
backed by a MemoryResource. This setup, on my machine, yields 10 times
better performance when run with MonotonicBufferResource.
Reviewers: mferencevic, mtomic, msantl
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1894
Summary:
There will be a lot of leftover files; execute the following commands inside
`src/` to remove them:
```
git clean -xf
rm -r rpc/ storage/single_node_ha/rpc/
```
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2011
Summary:
This API change is needed in order to propagate the memory allocation
scheme for the execution of LogicalOperator::Cursor.
Depends on D1990
Reviewers: mtomic, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1980
Summary:
The new distributed directory is inside the query directory and mirrors the query
structure. This groups all of the distributed (query) source code
together, which should make the potential directory extraction easier.
Reviewers: mferencevic, llugovic, mtomic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1923
Summary:
The tests `RelationshipPatternNoDetails` and `PatternPartBraces` in
`memgraph__unit__cypher_main_visitor` checked for the names of the anonymous
identifiers and therefore implicitly relied on the order of the traversal of the
tree.
This "bug" surfaced when Memgraph was compiled with GCC (tested on >= 6.3.0).
Reviewers: mtomic, teon.banek, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1945
Summary: New tutorial for backpacking through Europe
Reviewers: dsantl, msantl, buda
Reviewed By: dsantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1954
Summary:
Test to check that recovery works even if the snapshot is corrupted
in the distributed version.
Depends on D1930
Reviewers: vkasljevic, mferencevic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1950
Summary:
Same as `UniqueLabelPropertyConstraint` except it works with multiple properties.
Because of that, it is a bit more complex and slower.
Reviewers: msantl, ipaljak
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1926
Summary:
This is a bugfix for D1836. It made `SymbolTable` return references to vector
elements, which then get invalidated and weird stuff happens.
This made a `DCHECK` in `rule_based_planner.hpp` trigger, and it was noticed by
@ipaljak 2 months later. All `DCHECK`s in `rule_based_planner.hpp` are now
changed to `CHECK`s.
Also, the hash function for `Symbol` was wrong, because it took the
`user_declared` field into consideration while the `==` operator doesn't.
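The invariant being restored: values that compare equal must hash equally, so `user_declared` may participate in neither operation or in both. A simplified sketch:
```
#include <cstdint>
#include <functional>
#include <string>

struct Symbol {
  std::string name;
  int64_t position;
  bool user_declared;  // deliberately ignored by == and hash

  bool operator==(const Symbol &other) const {
    return position == other.position && name == other.name;
  }
};

namespace std {
template <>
struct hash<Symbol> {
  size_t operator()(const Symbol &symbol) const {
    // Combine only the fields that participate in equality.
    return hash<int64_t>{}(symbol.position) ^
           (hash<string>{}(symbol.name) << 1);
  }
};
}  // namespace std
```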
Reviewers: ipaljak, teon.banek, mferencevic, msantl
Reviewed By: msantl
Subscribers: pullbot, ipaljak
Differential Revision: https://phabricator.memgraph.io/D1938
Summary:
For HA benchmarks, if one of the executables exits with a status other
than zero, the benchmark should fail.
Also, removing `LOG(INFO)`, since failing benchmarks should flag where to look.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1921
Summary:
This macro benchmark measures read throughput in HA.
The test first creates a random graph with a given number of nodes
and edges. After that, it concurrently performs the following query
for 10 seconds:
```
MATCH (n {id:$random_id})-[e]->(m) RETURN e, m;
```
In other words, it randomly picks a node and returns all its neighbours.
Locally measured results are as follows:
| nodes | edges | queries per second |
| 100 | 500 | 8900 |
| 1000 | 5000 | 2700 |
| 10000 | 50000 | 1200 |
Running the same test on Memgraph single node yields very similar results
(within a few hundred queries per second).
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1916
Summary:
The case with existence constraints is about 8-9% slower than the case without
existence constraints. Before this diff, that difference was about 15-16%.
Reviewers: msantl, ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1917
Summary:
UniqueLabelPropertyConstraint defines label + property restriction on vertices.
Label + Property + PropertyValue for that property must be unique at any given moment.
Reviewers: msantl, ipaljak
Reviewed By: msantl
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1884
Summary:
PROVE:PLAN clears the previously stored results for the current suite (which is
the suite associated with the current package) and prevents "result
accumulation" (and the accompanying huge and partly outdated reports).
Reviewers: mtomic, teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1909
Summary: Add flag to disable printing of records.
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1898
Summary:
Added a new config parameter: replication timeout. This parameter sets the
upper limit on the replication phase; once the timeout is exceeded, the
transaction engine stops accepting new transactions.
We could experience this timeout in two cases:
1. a network partition
2. majority of the cluster stops working
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1893
Summary:
Added index creation and deletion handling in StateDelta.
Also included an integration test that creates an index and makes sure that it
gets replicated, killing each peer in turn and eventually causing a leader re-election.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1886
Summary:
`EdgesIterable` is used for iterating over edges in distributed Memgraph. Because of the LRU cache, there is a possibility of data getting evicted while someone iterates over it.
To prevent that, `EdgesIterable` will lock that data and release it when it's destructed.
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1868
Summary:
CachedDataLock is necessary for the LRU cache, as remote data is no longer
persistent. Most methods handle this internally, but for methods that
return pointers or references to remote data, we need to manually
lock the data.
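As a rough sketch with hypothetical simplified types (the real implementation guards with locks; this one just pins a counter), the RAII idea is:
```
#include <unordered_map>

struct CacheEntry {
  int pin_count = 0;
  // ... remote data ...
};

struct Cache {
  CacheEntry *Find(int key) { return &entries_[key]; }
  bool Evictable(const CacheEntry &entry) const { return entry.pin_count == 0; }
  std::unordered_map<int, CacheEntry> entries_;
};

// RAII guard: while alive, the LRU cache must not evict the entry, so
// pointers and references into it stay valid.
class CachedDataLock {
 public:
  CachedDataLock(Cache &cache, int key) : entry_(cache.Find(key)) {
    ++entry_->pin_count;
  }
  ~CachedDataLock() { --entry_->pin_count; }
  CachedDataLock(const CachedDataLock &) = delete;
  CachedDataLock &operator=(const CachedDataLock &) = delete;

 private:
  CacheEntry *entry_;
};
```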
Reviewers: msantl, ipaljak
Reviewed By: msantl
Subscribers: teon.banek, pullbot
Differential Revision: https://phabricator.memgraph.io/D1869
Summary: Benchmark shows that database with `ExistenceConstraints` is around 16% slower compared to case without `ExistenceConstraints`.
Reviewers: teon.banek, msantl, ipaljak, mferencevic
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1876
Summary:
An existence constraint ensures that all nodes with a certain label have a certain property.
`ExistenceRule` defines a label -> properties rule and `ExistenceConstraints` manages all
constraints.
Reviewers: msantl, ipaljak, teon.banek, mferencevic
Reviewed By: msantl, teon.banek, mferencevic
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1797
Summary:
During the following scenario:
- start a HA cluster with 3 machines
- find the leader and start sending queries
- SIGTERM the leader but leave other 2 machines untouched
The leader would be stuck in the shutdown phase.
This was happening because during the shutdown phase of the Bolt server, a
`graph_db_accessor` would try to commit a transaction after we had already shut
down the Raft server. Raft, although not running, still thinks it's in
Leader mode. The Tx engine calls the `SafeToCommit` method to commit transactions
and ends up in an infinite loop.
Since Raft was shut down, it won't handle any of the incoming RPCs and won't
change its mode.
The fix here is to shut down the Bolt server before Raft, so we don't have any
pending commits once Raft is shut down.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1853
Summary:
I've refactored the integration test for HA so we can reuse the common
parts like starting/stopping workers.
I've also added a test that triggers the log compaction and checks that the
transferred snapshot is the same as the original one.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1847
Summary:
This change splits mg-communication into mg-communication and
mg-comm-rpc. The main reason for doing this is to make the separation of
enterprise features from community Memgraph clearer.
Reviewers: mferencevic, msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1844
Summary:
RuleBasedPlanner now generates only the regular ScanAll operations, and
Filter operations are appended as soon as possible. The newly added
Rewrite step takes this operator tree and replaces viable Filter &
ScanAll operators with the appropriate ScanAllBy<Index> operator. This
change ought to simplify the behaviour of DistributedPlanner when that
stage is moved before the indexed lookup rewrite.
Showing the unoptimized plan in the interactive planner is also supported.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1839
Summary:
All AST nodes had a member `uid_` that was used as a key in
`SymbolTable`. It is renamed to `symbol_pos_` and it appears only in
`Identifier`, `NamedExpression` and `Aggregation`, since only those types were
used in `SymbolTable`. SymbolGenerator is now responsible for creating symbols
in `SymbolTable` and assigning positions to AST nodes.
Cloning and serialization code is now simpler since there is no need to track
UIDs.
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1836
Summary:
In this part of log compaction for Raft, I've implemented snapshooting
and snapshot recovery. I've also refactored the code a bit, so `RaftServer` now
has a pointer to the `GraphDb` and it can do some things by itself.
Log compaction requires some further work. Since snapshooting isn't synchronous
between peers, and each peer can work at their own pace, once we've compacted
the log so that the next log to be sent to peer `x` isn't available anymore, we
need to send the snapshot over the wire. This means that the next part will
contain the `InstallSnapshotRPC` and then maybe one more that will implement the
logic of sending `LogEntry` or the whole snapshot.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1834
Summary:
`ReplicationLog` had a classic off-by-one bug: the `valid_prefix`
variable wasn't set properly.
This diff also includes a poor man's version of a HA client. This client
assumes that all the HA instances run on a single machine and that the
corresponding Bolt endpoints have open ports ranging from `7687` to
`7687 + num_machines - 1`.
This should make it easier to test certain things, i.e. disk usage, P25.
This test revealed the bug with `ReplicationLog`.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1813
Summary:
Variable expansions cannot appear in merge patterns or after updates,
so they can only be planned with GraphState::OLD. Because of that, it makes
sense to remove the GraphView parameter from them to reduce confusion.
Reviewers: teon.banek, llugovic
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1825
Summary:
Implement proper plan cloning using LCP instead of hacking it with
serialization.
Depends on D1815
Reviewers: teon.banek, llugovic
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1816
Summary:
Once a leader loses its leadership, in order to handle hanging
transactions, we reset the storage and the transaction engine.
This requires re-applying all the committed entries from the log.
Once we add snapshots (log compaction), we will need to do that as well.
One thing to have in mind is the `election_timeout_min` parameter. If it's set
too low, it could trigger leader re-election too often.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1822
Summary:
Use the newly added LCP functionality to get rid of the manually written
`Clone` functions in the AST.
Depends on D1808
Reviewers: teon.banek, llugovic
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1815
Summary:
When there was an empty line comment starting right at the end of the
query, the stripper wouldn't properly change state from `IN_LINE_COMMENT` to `OUT`
and it would return the wrong length (0) from `MatchWhitespaceAndComments`.
Because of that, the two slashes would be interpreted as division operators.
The query "RETURN 5;//" would be changed by the stripper into "RETURN 5; / /", which
obviously can't be parsed.
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1819
Summary:
It seems like the HA benchmark is flaky and unrelated builds fail
because of it. Let's disable it for now until we figure out what is happening.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1814
Summary:
Rename Context to ExecutionContext and make it a struct.
Move ParsingContext to cypher_main_visitor.hpp.
Reviewers: mtomic, llugovic
Reviewed By: llugovic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1810
Summary:
In order to get more consistent results, give the benchmark a certain
amount of time it is supposed to run, rather than a number of queries.
The results on my machine are as follows:
```
duration 10.0004
executed_writes 25190
write_per_second 2518.91
duration 10.0005
executed_writes 25096
write_per_second 2509.48
duration 10.0004
executed_writes 23068
write_per_second 2306.7
duration 10.0006
executed_writes 26390
write_per_second 2638.84
duration 10.0008
executed_writes 26246
write_per_second 2624.38
duration 10.0006
executed_writes 24752
write_per_second 2475.06
duration 10.0027
executed_writes 24818
write_per_second 2481.14
duration 10.0032
executed_writes 25148
write_per_second 2513.99
duration 10.0009
executed_writes 25075
write_per_second 2507.28
duration 10.0008
executed_writes 25846
write_per_second 2584.4
duration 10.0006
executed_writes 25671
write_per_second 2566.96
duration 10.0025
executed_writes 25983
write_per_second 2597.65
```
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1812
Summary:
Identifiers would be printed correctly if they were printed as part of a larger
`Expression` (so the visitor did the printing). However, if `PrintObject` was
called with an `Identifier *` then it would delegate to the template overload
and just print the pointer.
Reviewers: mtomic, teon.banek
Reviewed By: mtomic
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1802
Summary:
This diff removes the need for a database when parsing a query and
creating an Ast. Instead of storing storage::{Label,Property,EdgeType}
in Ast nodes, we store the name and an index into all of the names. This
allows for easy creation of a map from {Label,Property,EdgeType} index
into the concrete storage type. Obviously, this comes with a performance
penalty during execution, but it should be minor. The upside is that the
query/frontend minimally depends on storage (PropertyValue), which makes
writing tests easier as well as running them a lot faster (there is no
database setup). This is most noticeable in the ast_serialization test
which took a long time due to the start-up of a distributed database.
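Condensed, the indirection might look like this sketch (names illustrative, not the actual API): the AST keeps the name plus a position into a name table, and execution resolves that table once into concrete storage ids.
```
#include <cstdint>
#include <string>
#include <vector>

// Parsing side: no database needed, an AST node stores a name and an index
// into this table instead of a storage::Label.
struct AstStorage {
  std::vector<std::string> label_names;

  int32_t GetLabelIx(const std::string &name) {
    for (size_t i = 0; i < label_names.size(); ++i)
      if (label_names[i] == name) return static_cast<int32_t>(i);
    label_names.push_back(name);
    return static_cast<int32_t>(label_names.size()) - 1;
  }
};

// Execution side: build, once per query, a positional map from name index to
// the concrete storage type (a plain id here); `db.Label` stands in for the
// accessor's name-to-id lookup.
using Label = uint16_t;

template <class Database>
std::vector<Label> MakeLabelMapping(const AstStorage &ast, Database &db) {
  std::vector<Label> mapping;
  mapping.reserve(ast.label_names.size());
  for (const auto &name : ast.label_names) mapping.push_back(db.Label(name));
  return mapping;
}
```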
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1774
Summary:
Each `raft::LogEntry` is now persisted under its own key in our `KVStore`. Locally running our HA feature benchmark yields the following results:
```
duration 23.7
executed_writes: 15000
write_per_second: 632.888
```
This represents about a 5x increase in throughput.
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1799
Summary:
A simple benchmark that starts a HA cluster with 3 machines.
The benchmark issues only `CREATE (:Node)` queries.
Local results (debug build), for this raft config, are:
```
duration 4.26899
executed_writes 300
write_per_second 70.2743
```
Reviewers: ipaljak, mferencevic
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1798
Summary:
The `Edges` class didn't properly filter edges when given both the edge type
and destination arguments, which caused weird bugs in query execution (the wrong
edge getting matched in `Expand` with the existing node flag set). This is probably
because we didn't support using both filters at the same time, but the API and
documentation for the `VertexAccessor::in` and `VertexAccessor::out` functions
didn't reflect that.
Reviewers: teon.banek, msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1796
Summary: Run the basic test with two cluster sizes, 3 and 5.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1794
Summary:
Created a new integration test for the Raft protocol.
The test iterates through the Raft cluster and does the following:
* kill machine `X`
* execute a query
* bring `X` back to life
The first step is to insert a vertex into the cluster, and the last step is to check
that the cluster has all the data.
I also edited some of the Raft core files because this test surfaced some bugs.
The `tester` binary is a hacked version of the HA client, and so are the parts of
the code that refuse to execute a query if the machine is not in `Leader` mode.
Those parts will go away once we have a proper HA client.
I've run the `runner.py` for a while (215 times)
```
while ./runner.py &> log.txt; do echo -n "."; done
```
and it didn't break.
Reviewers: ipaljak, mferencevic
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1788
Summary: This simple function is required by the TensorFlow integration so that Memgraph can always return regular matrices of the desired size.
Reviewers: teon.banek, mtomic, dsantl
Reviewed By: mtomic, dsantl
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1783
Summary:
This (almost) removes the dependency of operators on NodeAtom and
EdgeAtom. Only EdgeAtom::Direction is needed. The change was done as the
initial step of removing the dependency on storage from the AST. Additionally,
it makes sense for LogicalOperator to only depend on Expression classes.
Reviewers: mtomic, llugovic
Reviewed By: llugovic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1779
Summary:
TransactionReplicator replicates transactions on follower machines in
HA Memgraph. Our DB accessor API doesn't provide the functionality to
begin transactions with non-increasing ids. This is why the
`TransactionReplicator` uses an internal map that maps tx ids from the leader
node to transactions on the follower node (whose ids don't have to match the
leader's tx ids).
If the leader has the following transaction timeline:
```
L
tx1
|
| tx2
| |
| |
| |
| |
| |
| tx2
|
|
|
|
tx1
```
`tx2` will commit first and will be replicated. When applying `tx2`, follower
nodes will start a new transaction with tx id `1`. When `tx1` starts
replicating, followers will start a new transaction with tx id `2`. And this is
where the `TransactionReplicator` kicks in.
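Schematically, the bookkeeping could look like the following sketch (hypothetical simplified types):
```
#include <cstdint>
#include <unordered_map>

using TransactionId = std::uint64_t;
struct Transaction {
  TransactionId local_id;
};

// Maps the leader's tx ids onto locally begun transactions; local ids are
// assigned in the follower's own increasing order and may differ from the
// leader's.
class TransactionReplicator {
 public:
  Transaction *GetOrBegin(TransactionId leader_tx_id) {
    auto it = replicated_.find(leader_tx_id);
    if (it == replicated_.end())
      it = replicated_.emplace(leader_tx_id, Transaction{next_local_id_++})
               .first;
    return &it->second;
  }

  // Called when the leader's commit for this tx is applied locally.
  void Commit(TransactionId leader_tx_id) { replicated_.erase(leader_tx_id); }

 private:
  TransactionId next_local_id_ = 1;
  std::unordered_map<TransactionId, Transaction> replicated_;
};
```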
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1775
Summary:
This is just the first diff that tries to wire the Raft protocol into
Memgraph.
In this diff I'm introducing transaction engine reset functionality. I also
introduced `RaftInterface` which should be used wherever someone wants to access
Raft from Memgraph.
For design decisions see the feature spec.
Reviewers: ipaljak, teon.banek
Reviewed By: ipaljak
Subscribers: pullbot, teon.banek
Differential Revision: https://phabricator.memgraph.io/D1758
Summary:
With quite frequent changes of the serialization backend, it's getting
really painful to keep updating the C++ implementation. This diff defines the
Symbol class via LCP, so that the C++ serialization is generated instead
of written by hand.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1741
Summary:
Instantiation with VertexAccessor was never used, so the template
needlessly complicated the rest of the codebase.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1736
Summary:
TypeInfo will be needed for upcoming serialization via SLK. This diff
changes the already defined TypeInfo by removing the reliance on
capnp::typeId calls. The struct itself is now in utils.
Hopefully, this shouldn't break our RPC stack due to new ID generation.
Reviewers: mferencevic, mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1735
Summary:
GraphDb now has a GetStorageStat method that returns a live view of an
object containing vertex_count, edge_count and avg_degree().
The stat is updated on each garbage collection run and represents the number of
objects that are visible to any transaction.
Reviewers: teon.banek, msantl, ipaljak
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1731
Summary: SmallVector data structure introduction. This is the first step: this revision contains the SmallVector implementation and tests. The second step will be swapping std::vector with SmallVector in the current codebase.
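Assuming an LLVM-style interface where the inline capacity is a template parameter (the header path and namespace here are assumptions), usage would look like:
```
#include "utils/small_vector.hpp"  // assumed header location

void Example() {
  // Assumed LLVM-style API: elements live in an inline buffer of 4, so no
  // heap allocation happens until the fifth push_back spills to the heap.
  utils::SmallVector<int, 4> values;
  for (int i = 0; i < 4; ++i) values.push_back(i);  // inline storage only
  values.push_back(4);  // first heap allocation
}
```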
Reviewers: teon.banek, buda, ipaljak, mtomic
Reviewed By: teon.banek, ipaljak
Subscribers: ipaljak, pullbot
Differential Revision: https://phabricator.memgraph.io/D1730
Summary:
This change makes HierarchicalTreeVisitor visit only Cypher related AST
nodes. QueryVisitor can be used to differentiate between various query
types we have. The next step is to either rename HierarchicalTreeVisitor
to something like CypherQueryVisitor, or perhaps extract Clause visiting
from it.
Reviewers: mtomic, llugovic
Reviewed By: llugovic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1710
Summary:
The single-binary distributed tests didn't wait for the whole cluster to be up
and running before they started running their tests. Now all tests ensure that
all workers are registered to the master and all other workers before starting
any tests.
Reviewers: mculinovic
Reviewed By: mculinovic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1724
Summary:
See https://app.asana.com/0/743890251333732/888297761596047/f for more details.
# BEFORE
```
Note: Google Test filter = *UniqueConstraintRecovery*
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from Durability
[ RUN ] Durability.UniqueConstraintRecovery
unknown file: Failure
C++ exception with description "Index couldn't be created due to constraint
violation!" thrown in the test body.
[ FAILED ] Durability.UniqueConstraintRecovery (3 ms)
[----------] 1 test from Durability (3 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (3 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] Durability.UniqueConstraintRecovery
1 FAILED TEST
```
# AFTER
```
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from Durability
[ RUN ] Durability.UniqueConstraintRecovery
[ OK ] Durability.UniqueConstraintRecovery (4 ms)
[----------] 1 test from Durability (4 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (4 ms total)
[ PASSED ] 1 test.
```
Reviewers: ipaljak, vkasljevic
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1714
Summary:
Initial fork (in the form of a copy/paste) from the current single node
version. Once we finish HA, we plan to re-link the files to the single node
versions if they don't change.
Reviewers: ipaljak, buda, mferencevic
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1705
Summary:
A blocking transaction has the ability to stop the transaction engine from
starting new transactions (regular or blocking) and to wait for all other active
transactions to finish (to become non-active, committed or aborted). One thing
that blocking transactions support is defining a parent transaction which
does not need to end in order for the blocking one to start. This is because of
a use case where we start nested transactions.
One could think we should build indexes inside those blocking transactions. This
is true, and I wanted to implement it, but it would require some digging in
the interpreter which I didn't want to do in this change.
Reviewers: mferencevic, vkasljevic, teon.banek
Reviewed By: mferencevic, teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1695
Summary:
`Query` is now an abstract class which has `CypherQuery`,
`ExplainQuery`, `IndexQuery`, `AuthQuery` and `StreamQuery` as derived
classes. Only `CypherQuery` is forwarded to planner and the rest of the
queries are handled directly in the interpreter. This enabled us to
remove auth, explain and stream operators, clean up `Context` class and
remove coupling between `Results` class and plan cache. This should make
it easier to add similar functionality because no logical operator
boilerplate is needed. It should also be easier to separate community
and enterprise features for open source.
Remove Explain logical operator
Separate IndexQuery in AST
Handle index creation in interpreter
Remove CreateIndex operator and ast nodes
Remove plan cache reference from Results
Move auth queries out of operator tree
Remove auth from context
Fix tests, separate stream queries
Remove in_explicit_transaction and streams from context
Reviewers: teon.banek, mferencevic, msantl
Reviewed By: teon.banek, mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1664
Summary:
LabelPropertyIndex now has the ability to enforce a unique constraint.
This doesn't lock the tx engine.
Reviewers: teon.banek, mferencevic
Reviewed By: teon.banek
Subscribers: pullbot, vkasljevic, buda
Differential Revision: https://phabricator.memgraph.io/D1660
Summary: Up till now, `AstStorage` also took care of tracking the root of the `Query`, and loading or cloning of `Query` nodes would change that root. This felt out of place because sometimes `AstStorage` is used only for storing expressions, and we don't even have an entire query in the storage. This diff removes that feature from `AstStorage`. Now its only functionality is owning AST nodes and assigning unique IDs to them.
Reviewers: teon.banek, llugovic
Reviewed By: teon.banek
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1646
Summary:
Start removing `is_remote` from `Address`
Remove `GlobalAddress`
Remove `GlobalizedAddress`
Remove bitmasks from `Address`
Remove `is_local` from `Address`
Remove `is_local` from `RecordAccessor`
Remove `worker_id` from `Address`
Remove `worker_id` from `GidGenerator`
Unfriend `IndexRpcServer` from `Storage`
Remove `LocalizedAddressIfPossible`
Make member private
Remove `worker_id` from `Storage`
Copy function to ease removal of distributed logic
Remove `worker_id` from `WriteAheadLog`
Remove `worker_id` from `GraphDb`
Remove `worker_id` from durability
Remove nonexistent function
Remove `gid` from `Address`
Remove usage of `Address`
Remove `Address`
Remove `VertexAddress` and `EdgeAddress`
Fix Id test
Remove `cypher_id` from `VersionList`
Remove `cypher_id` from durability
Remove `cypher_id` member from `VersionList`
Remove `cypher_id` from database
Fix recovery (revert D1142)
Remove unnecessary functions from `GraphDbAccessor`
Revert `InsertEdge` implementation to the way it was in/before D1142
Remove leftover `VertexAddress` from `Edge`
Remove `PostCreateIndex` and `PopulateIndexFromBuildIndex`
Split durability paths into single node and distributed
Fix `TransactionIdFromWalFilename` implementation
Fix tests
Remove `cypher_id` from `snapshooter` and `durability` test
Reviewers: msantl, teon.banek
Reviewed By: msantl
Subscribers: msantl, pullbot
Differential Revision: https://phabricator.memgraph.io/D1647
Summary:
To clean the working directory after this diff you should execute:
```
rm src/database/counters_rpc_messages.capnp
rm src/database/counters_rpc_messages.hpp
rm src/database/serialization.capnp
rm src/database/serialization.hpp
```
Reviewers: teon.banek, msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1636
Summary:
This should fix the issue of having a write followed by a read during
distributed execution. Any kind of merger of plans should behave like a
Cartesian with regards to planning the following ScanAll.
Reviewers: mtomic, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1624
Summary:
This diff splits single node and distributed storage from each other.
Currently all of the storage code is copied into two directories (one single
node, one distributed). The logic used in the storage implementation isn't
touched; it will be refactored in the following diffs.
To clean the working directory after this diff you should execute:
```
rm database/state_delta.capnp
rm database/state_delta.hpp
rm storage/concurrent_id_mapper_rpc_messages.capnp
rm storage/concurrent_id_mapper_rpc_messages.hpp
```
Reviewers: teon.banek, buda, msantl
Reviewed By: teon.banek, msantl
Subscribers: teon.banek, pullbot
Differential Revision: https://phabricator.memgraph.io/D1625
Summary:
TODO:
~~1. Figure out how to propagate exceptions during lambda evaluation to master.~~
~~2. Make some more complicated test cases to see if everything is~~
~~sent over the network properly (lambdas depending on frame, evaluation context).~~
~~3. Support only `GraphView::OLD`.~~
4. [MAYBE] Send only parts of the frame necessary for lambda evaluation.
~~5. Fix EdgeType handling~~
--------------------
Serialize frame and send it in PrepareForExpand RPC
Move Lambda out of ExpandVariable
Send symbol table and filter lambda in CreateBfsSubcursor RPC
Evaluate filter lambda during the expansion
Send evaluation context in CreateBfsSubcursor RPC
Reviewers: teon.banek, msantl
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1600
Summary:
This should allow us to more easily decouple the code which should be
open sourced. Unfortunately, the downside of this approach is that we
cannot rely on virtual calls to dispatch the serialization to the correct
type. Another downside is that members need to be publicly accessible
for serialization.
Reviewers: mtomic, msantl
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1596
Summary:
This diff changes the RPC layer to directly return `TResponse` to the user when
issuing a `Call<...>` RPC call. The call throws an exception on failure
(instead of previously returning `nullopt`).
All servers (network, RPC and distributed) are set to have explicit `Shutdown`
methods so that a controlled shutdown can always be performed. The object
destructors now have `CHECK`s to enforce that the `AwaitShutdown` methods were
called.
The distributed Memgraph is changed so that none of the binaries (master/workers)
crash when there is a communication failure. Instead, the whole cluster starts
a graceful shutdown when a persistent communication error is detected.
Transient errors are allowed during execution. The transaction that errored out
will be aborted on the whole cluster. The cluster state is managed using a new
Heartbeat RPC call.
Reviewers: buda, teon.banek, msantl
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1604
Summary:
Knowledge of distributed operators is now removed from the PlanPrinter
class. The DistributedOperatorVisitor and PlanPrinter are modified so
that they may be multiply inherited. This is done using virtual
inheritance of HierarchicalLogicalOperatorVisitor. Multiple inheritance
is used to derive a DistributedPlanPrinter which knows how to print
distributed operators. This removes the dependency of single node
pretty printing on distributed.
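In miniature, the layout is the following diamond (class names as described above, bodies elided); virtual inheritance guarantees a single visitor base subobject:
```
struct HierarchicalLogicalOperatorVisitor {
  virtual ~HierarchicalLogicalOperatorVisitor() = default;
  // ... Visit overloads for single node operators ...
};

// Both intermediate classes inherit *virtually*, so the diamond below
// contains exactly one copy of the visitor base.
class PlanPrinter : public virtual HierarchicalLogicalOperatorVisitor {
  // ... printing of single node operators ...
};

class DistributedOperatorVisitor
    : public virtual HierarchicalLogicalOperatorVisitor {
  // ... Visit overloads for distributed operators ...
};

// Multiple inheritance combines the two without duplicating the base.
class DistributedPlanPrinter : public PlanPrinter,
                               public DistributedOperatorVisitor {
  // ... printing of distributed operators ...
};
```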
Reviewers: msantl, mtomic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1606
Summary:
With this diff we should be able to support dynamic worker addition.
This is, of course, a minimal-effort, maximal-impact approach.
This diff introduces new RPC calls when a worker registers.
The `DynamicWorkerAddition` doesn't use `GraphDbAccessor` to get indices because
they create WAL entries.
Reviewers: vkasljevic, ipaljak, buda
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1594
Summary:
In a bunch of places `TypedValue` was used where `PropertyValue` should be. A lot of times it was only because `TypedValue` serialization code could be reused for `PropertyValue`, only without providing callbacks for `VERTEX`, `EDGE` and `PATH`. So first I wrote separate serialization code for `PropertyValue` and put it into storage folder. Then I fixed all the places where `TypedValue` was incorrectly used instead of `PropertyValue`. I also disabled implicit `TypedValue` to `PropertyValue` conversion in hopes of preventing misuse in the future.
After that, I wrote code for `VertexAccessor` and `EdgeAccessor` serialization and put it into the `storage` folder because it was almost duplicated in distributed BFS and pull produce RPC messages. On the sender side, some subset of records (old or new or both) is serialized, and on the receiver side, records are deserialized and immediately put into the transaction cache.
Then I rewrote the `TypedValue` serialization functions (`SaveCapnpTypedValue` and `LoadCapnpTypedValue`) to not take callbacks for `VERTEX`, `EDGE` and `PATH`, but use accessor serialization functions instead. That means that any code that wants to use `TypedValue` serialization must hold a reference to `GraphDbAccessor` and `DataManager`, so that should make clients reconsider if they really want to use `TypedValue` instead of `PropertyValue`.
Reviewers: teon.banek, msantl
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1598
Summary:
This change introduces a pure virtual interface for the transaction
engine, which is then implemented in two versions: single node and distributed.
The interface classes now have the following hierarchy:
```
Engine (pure interface)
|
+----+---------- EngineDistributed (common logic)
| |
EngineSingleNode +-------+--------+
| |
EngineMaster EngineWorker
```
In addition to this layout the `EngineMaster` uses `EngineSingleNode` as its
underlying storage engine and only changes the necessary functions to make
them work with the `EngineWorker`.
After this change I recommend that you delete the following leftover files:
```
rm src/distributed/transactional_cache_cleaner_rpc_messages.*
rm src/transactions/common.*
rm src/transactions/engine_rpc_messages.*
```
Reviewers: teon.banek, msantl, buda
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1589
Summary:
This change improves detection of erroneous situations when starting a
distributed cluster on a single machine. It asserts that the user hasn't
started more memgraph nodes on the same machine with the same durability
directory. Also, this diff improves worker registration. Now workers don't have
to have explicitly set IP addresses. The master will deduce them from the
connecting IP when the worker registers.
Reviewers: teon.banek, buda, msantl
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1582
Summary:
This is a simple change which modifies the interface of
awesome_memgraph_functions to accept a C-style pointer to an array with a
count. Doing things this way allows us to easily try out different
allocation schemes for function arguments. In this diff, we are now
using stack allocation of arguments in a plain fixed-size array. This is
done when the number of arguments is small. According to heaptrack, this
small change should yield noticeable improvements to heap usage.
Obviously, this doesn't solve the problem of heap allocations inside
TypedValue arguments themselves. These allocations appear when
std::string and std::vector is used inside TypedValue.
Micro benchmarks show that there is some performance improvement,
mostly around the limits of using array vs std::vector. The improvement is
more noticeable with multiple threads, due to primary gain being in avoiding
calls to memory allocation.
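The dispatch is roughly the following sketch (stand-in types; the real signature belongs to awesome_memgraph_functions):
```
#include <cstddef>
#include <vector>

struct Value {  // stand-in for TypedValue
  double number = 0.0;
};

using AwesomeFunction = Value (*)(const Value *args, size_t count);

Value Invoke(AwesomeFunction fn, const std::vector<double> &inputs) {
  constexpr size_t kStackLimit = 8;  // assumption: most calls pass few args
  if (inputs.size() <= kStackLimit) {
    Value args[kStackLimit];  // hot path: no heap allocation
    for (size_t i = 0; i < inputs.size(); ++i) args[i].number = inputs[i];
    return fn(args, inputs.size());
  }
  std::vector<Value> args(inputs.size());  // cold path: spill to the heap
  for (size_t i = 0; i < inputs.size(); ++i) args[i].number = inputs[i];
  return fn(args.data(), inputs.size());
}
```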
Reviewers: mtomic, msantl, mferencevic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1581
Summary: Recovery file is version inconsistent iff it starts with a correct magic number, has at least one integer written after that and that integer differs from kVersion.
Reviewers: mferencevic, ipaljak, vkasljevic, buda
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1565
Summary:
In order to add kafka benchmark, `memgraph_bolt.cpp` has been split.
Now we have `memgraph_init.cpp/hpp` files with common memgraph startup code.
Kafka benchmark implements a new `main` function that doesn't start a bolt
server, it just creates and starts a stream. Then it waits for the stream to
start consuming and measures the time it took to import the given number of
entries.
This benchmark is in a new folder, `feature_benchmark`, and so should be any new
benchmark that measures the performance of Memgraph's features.
Reviewers: mferencevic, teon.banek, ipaljak, vkasljevic
Reviewed By: mferencevic, teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1552
Summary:
This diff introduces a new flag:
* `--synchronous-commit`
The `--synchronous-commit` flag tells the WAL when the deltas should be flushed to
the disk drive. By default this is off, and the WAL flushes deltas every `N`
milliseconds. If it's turned on, then on every transaction end, commit or abort, the
WAL will first flush the deltas and only after that return from ending the
transaction.
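Schematically, the flag's effect at transaction end looks like this sketch (hypothetical simplified types, not the actual WAL code):
```
#include <vector>

struct StateDelta {
  enum class Type { kOther, kCommit, kAbort } type = Type::kOther;
};

class WriteAheadLog {
 public:
  explicit WriteAheadLog(bool synchronous_commit)
      : synchronous_commit_(synchronous_commit) {}

  void Emplace(const StateDelta &delta) {
    deltas_.push_back(delta);
    // Default mode: a background thread flushes every N milliseconds.
    // With --synchronous-commit, the flush happens before the call that
    // ends the transaction (commit or abort) returns.
    if (synchronous_commit_ && delta.type != StateDelta::Type::kOther) Flush();
  }

 private:
  void Flush() { /* fsync the buffered deltas to disk */ deltas_.clear(); }

  bool synchronous_commit_;
  std::vector<StateDelta> deltas_;
};
```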
Reviewers: buda, vkasljevic, mferencevic, teon.banek, ipaljak
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1542
Summary: A quick clean-up of user-visible error messages. Tried to make them grammatically correct by capitalizing the first word in the sentence and putting a dot at the end.
Reviewers: teon.banek, buda, ipaljak
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1571
Summary:
Changed GRANT ROLE to SET ROLE. Now it is `SET ROLE FOR user TO ROLE` instead of `GRANT ROLE role TO user`. It makes more sense because our users can only have 1 role.
Changed REVOKE ROLE to CLEAR ROLE. Now it is `CLEAR ROLE FOR user` instead of `REVOKE ROLE role FOR user`. REVOKE ROLE would throw an exception if the user was not a member of the role. CLEAR ROLE clears the role whatever it is. I find that the latter makes more sense combined with SET ROLE.
Changed `SHOW ROLE FOR USER user` to `SHOW ROLE FOR user`.
Changed `SHOW USERS FOR ROLE role` to `SHOW USERS FOR role`.
Reviewers: mferencevic, teon.banek, buda
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1572
Summary:
The visitor pattern's main issue is the cyclical dependency between classes that
are visited and the visitor instance itself. We need to decouple this
dependency if we want to open source part of the code, namely the
non-distributed part. This decoupling is achieved through the use of
`dynamic_cast` in distributed operators. Hopefully the solution is good
enough and doesn't cause performance issues. An alternative solution is
to build our own custom double dispatch solution, but that would
basically boil down to our own implementation of runtime type information
and casts.
Note, this only decouples the distributed operators. If and when we
decide that other operators shouldn't be open sourced, the same
`dynamic_cast` pattern should be applied in them also.
Depends on D1563
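In miniature, the `dynamic_cast` pattern looks like the following sketch (simplified names): the open-sourceable base visitor never mentions distributed operators, and a distributed operator probes for the extended interface at runtime.
```
#include <stdexcept>

struct ScanAll;
struct PullRemote;

// Base visitor: knows only the operators that can be open sourced.
struct HierarchicalLogicalOperatorVisitor {
  virtual ~HierarchicalLogicalOperatorVisitor() = default;
  virtual bool Visit(ScanAll &) = 0;
};

// Extension interface implemented only by distributed-aware visitors.
struct DistributedOperatorVisitor {
  virtual ~DistributedOperatorVisitor() = default;
  virtual bool Visit(PullRemote &) = 0;
};

struct ScanAll {
  bool Accept(HierarchicalLogicalOperatorVisitor &visitor) {
    return visitor.Visit(*this);
  }
};

struct PullRemote {
  bool Accept(HierarchicalLogicalOperatorVisitor &visitor) {
    // Probe at runtime instead of extending the base visitor's interface.
    if (auto *distributed =
            dynamic_cast<DistributedOperatorVisitor *>(&visitor))
      return distributed->Visit(*this);
    throw std::logic_error("Visitor cannot handle distributed operators");
  }
};
```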
Reviewers: mtomic, msantl, buda
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1566
Summary:
This is the first step in separating the implementation of distributed
features in operators. Following steps are:
* decoupling distributed visitors
* injecting distributed details in operator state
* minor cleanup or anything else that was overlooked
Reviewers: mtomic, msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1563
Summary:
We want to make sure that the index doesn't miss any results.
This test inserts vertices in the db and checks that the number of inserted
vertices is the same as the number of results from a scan all and a scan all by
label property.
https://app.asana.com/0/478665099752750/762723827276182/f
Reviewers: ipaljak, vkasljevic, teon.banek
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1558
Summary:
Added WAL and Snapshot explorer utility executables that read a WAL or a
snapshot file and print its content; the main purpose, though, is that they
check it.
This is useful when debugging durability and you want to know where it gets stuck.
Reviewers: dgleich, buda
Reviewed By: buda
Subscribers: ipaljak, vkasljevic, teon.banek, pullbot
Differential Revision: https://phabricator.memgraph.io/D1422