Summary: For example, the aggregate element produced for `COUNT(*)` has its `value` set to `NULL`.
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2463
Summary:
Switching to Storage V2 API will require passing storage::View when
serializing VertexAccessor and EdgeAccessor, so this is just the first
step in adapting the code.
Reviewers: mferencevic, ipaljak
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2352
Summary:
This makes Gid the same as the one in storage/v2. Before they can be
merged into one implementation, we probably want a similar
transition for the remaining ID types.
Depends on D2346
Reviewers: mferencevic, ipaljak
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2347
Summary:
It never made sense that a global ID is its own namespace in the storage
directory tree.
Reviewers: mferencevic, ipaljak
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2346
Summary: The test now checks if the cluster is alive at the end of the test.
Reviewers: mferencevic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2343
Summary:
This effectively replaces the old PropertyValue implementation with the
one in storage/v2.
Depends on D2333
Reviewers: mferencevic, ipaljak
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2335
Summary:
With a pool allocator, lookups in an STL set and map are up to 50% faster.
This is probably due to the contiguous memory of the pooled objects, i.e. the
nodes of those containers. In some cases, the lookup outperforms the SkipList.
Insertions are also faster, though not as dramatically, up to 30%. This makes
a notable difference when the STL containers are used in a single thread, as
they then outperform the SkipList significantly.
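A minimal standard-library sketch of the pooling idea (the diff itself uses Memgraph's own utils memory resources rather than std::pmr); the point is that the container's nodes all come from one pool and therefore sit close together in memory:
```
// Sketch with C++17 std::pmr types; Memgraph's own allocator differs.
#include <iostream>
#include <memory_resource>
#include <set>

int main() {
  // Nodes of the set are carved out of a pool, so consecutive insertions
  // end up near each other in memory, which helps lookup cache locality.
  std::pmr::unsynchronized_pool_resource pool;
  std::pmr::set<int> numbers(&pool);
  for (int i = 0; i < 1000; ++i) numbers.insert(i);
  std::cout << *numbers.find(42) << "\n";
}
```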
Reviewers: mferencevic, ipaljak
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2326
Summary:
There's a possible race condition where we add a deleted vertex into an
index and garbage collection removes it from the main storage before the
indices are cleaned up.
Reviewers: mferencevic, teon.banek
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2314
Summary: Currently set to run for 2 hours.
Reviewers: mferencevic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2206
Summary:
For proper client interaction, we need to expose the (term_id, log_index)
pair for the transaction that's about to be replicated and we need to be able
to retrieve the status of a transaction defined by that pair. The transaction
status can be one of the following (see the sketch after this list):
1) REPLICATED (self-explanatory)
2) WAITING (waiting for replication)
3) ABORTED (self-explanatory)
4) INVALID (received request with either invalid term_id or invalid log_index)
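A hypothetical C++ sketch of the exposed types (actual Memgraph names and signatures may differ):
```
// Hypothetical sketch; not the actual Memgraph interface.
#include <cstdint>
#include <iostream>
#include <utility>

enum class TxStatus : std::uint8_t {
  REPLICATED,  // replicated on a majority of peers
  WAITING,     // still waiting for replication
  ABORTED,     // the transaction was aborted
  INVALID,     // the (term_id, log_index) pair doesn't match any known entry
};

// The client first obtains the (term_id, log_index) pair of the transaction
// being replicated and later polls its status with that same pair.
using TxPosition = std::pair<std::uint64_t, std::uint64_t>;

int main() {
  TxPosition pos{3, 42};  // term 3, log index 42
  TxStatus status = TxStatus::WAITING;
  std::cout << pos.first << " " << pos.second << " "
            << static_cast<int>(status) << "\n";
}
```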
Reviewers: mferencevic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2201
Summary:
The first 2 tuple elements are redundant as they are available through
EdgeAccessor and they needlessly complicate the usage of the API.
Reviewers: mtomic, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2200
Summary:
The functionality of the test is the same as for single node Memgraph.
Locally, it seems to work fine. I'll update the Apollo-related files when I feel
a bit more certain that everything works locally.
Reviewers: mferencevic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2135
Summary:
This is a preparation step in case we want to have a custom allocator in
SkipList, for example a pool-based allocator for SkipListNode.
The introduction of MemoryResource and the removal of `calloc` have reduced
the performance a bit according to micro benchmarks. This performance hit is
not visible in benchmarks which do more concurrent operations.
Reviewers: mferencevic, mtomic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2140
Summary:
This change implements full edge support in storage v2. Edges can be created
and deleted. Support for detach-deleting vertices is added, and regular vertex
deletion verifies the existence of edges.
Reviewers: mtomic, teon.banek
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2180
Summary:
Initial implementation of new storage engine. It implements snapshot isolation
for transactions. All changes in the database are stored as deltas instead of
making full copies. Currently, the storage supports full transaction
functionality (commit, abort, command advancement). So far, support has been
implemented only for vertices that carry only labels.
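A very rough, hypothetical sketch of the delta idea described above (the real storage keeps deltas in per-transaction chains and its layout differs):
```
// Hypothetical sketch only; field names and layout are illustrative.
#include <cstdint>
#include <iostream>
#include <vector>

// A change is recorded as a small delta instead of copying the whole vertex;
// older versions (or aborted changes) are reconstructed by applying deltas.
struct Delta {
  enum class Action { ADD_LABEL, REMOVE_LABEL } action;
  std::uint64_t label;
  std::uint64_t transaction_id;  // which transaction produced the change
};

struct Vertex {
  std::vector<std::uint64_t> labels;  // latest version of the labels
  std::vector<Delta> deltas;          // history used for snapshot isolation
};

int main() {
  Vertex v;
  v.labels.push_back(7);
  // Undo information for transaction 1: removing label 7 restores the old state.
  v.deltas.push_back({Delta::Action::REMOVE_LABEL, 7, 1});
  std::cout << v.labels.size() << " " << v.deltas.size() << "\n";
}
```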
Reviewers: teon.banek, mtomic
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2138
Summary:
Micro benchmarks show some minor variations compared to the previous
commit. Smaller cases are a bit worse while larger data cases are a bit
better.
Reviewers: mtomic, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2136
Summary:
Micro benchmarks show improvements in performance of MapLiteral from 5%
to 40% depending on the size of the input. On the other hand, a sequence
of AdditionOperators behaves the same with both allocation schemes.
Reviewers: mtomic, mferencevic, msantl
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2132
Summary:
The global variable may hide the fact that it uses the default
utils::NewDeleteResource() for allocations.
Reviewers: mtomic, llugovic, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2121
Summary:
Stream queries to the output table.
For effective output streaming, a simple operator, `OutputTableStream`, is
implemented; it fetches and produces a single row on each Pull.
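A hypothetical sketch of the one-row-per-Pull pattern (names and types are simplified and are not the actual operator code):
```
// Hypothetical sketch; not the actual OutputTableStream implementation.
#include <cstddef>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Row {
  std::vector<std::string> cells;
};

class OutputTableStreamCursor {
 public:
  explicit OutputTableStreamCursor(std::vector<Row> rows)
      : rows_(std::move(rows)) {}

  // Each Pull emits exactly one row, so results can be streamed to the
  // client instead of materializing the whole table first.
  bool Pull(Row *out) {
    if (next_ >= rows_.size()) return false;
    *out = rows_[next_++];
    return true;
  }

 private:
  std::vector<Row> rows_;
  std::size_t next_ = 0;
};

int main() {
  OutputTableStreamCursor cursor({{{"1", "Alice"}}, {{"2", "Bob"}}});
  Row row;
  while (cursor.Pull(&row)) std::cout << row.cells[1] << "\n";
}
```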
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2099
Summary:
This is unfortunately needed under the C++17 standard, so that the
allocator is correctly propagated to elements of a pair which respect the
"Uses Allocator" protocol. The C++20 standard resolves this issue, but we
still have a long way to go before it is released and implemented by
compiler and standard library vendors.
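A minimal standard-library illustration of the "Uses Allocator" protocol the summary refers to (the diff itself concerns Memgraph's own allocator, not std::pmr): when a pair lives in an allocator-aware container, both of its elements should end up using the container's memory resource.
```
// Standard-library illustration only; Memgraph's utils allocator differs.
#include <cassert>
#include <map>
#include <memory_resource>
#include <string>

int main() {
  std::pmr::monotonic_buffer_resource buffer;
  std::pmr::map<int, std::pmr::string> names(&buffer);
  // The mapped std::pmr::string is built through uses-allocator construction,
  // so it allocates from `buffer` as well, not from the global heap.
  names.emplace(1, "a fairly long string that certainly needs heap storage");
  assert(names.at(1).get_allocator().resource() == &buffer);
}
```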
Reviewers: mtomic, llugovic, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2107
Summary:
The test now uses `ha_client`. Logging is also modified to output 1-indexed
worker ids.
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2113
Summary:
- Included HA client
- Fixed log messages to be 1-indexed
- Added id properties to created nodes for easier debugging
- Create and check steps are now executed 20 times each
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2111
Summary:
- Server ids are now 1-indexed in logs
- All created nodes have distinct id properties, which helps with debugging
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2109
Summary:
This test checks the correctness of the leader election process when it's
decoupled from log replication. In other words, in this test we do not change
the state of the database, i.e. the Raft log remains empty.
The test proceeds as follows for clusters of size 3 and 5:
1. Start a random subset of workers in the cluster
2. Check if the leader has been elected
3. Kill all living workers
4. GOTO 1 and repeat 10 times
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2105
Summary:
This diff only introduces a MemoryResource member to TypedValue and correctly
propagates through various constructors and assignments. At the moment,
MemoryResource is not used to actually allocate anything.
Reviewers: mtomic, llugovic, mferencevic, msantl
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2085
Summary:
HA should now support constraints in the same way the SM version does.
I've only tested this manually, but I plan to add a new integration test for
this as well.
Reviewers: ipaljak, vkasljevic, mferencevic
Reviewed By: ipaljak, mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2083
Summary:
At the moment, this test will fail. There are currently two issues with it:
- Cap'n Proto limit exception might occur
- `db.Reset()` method can hang (deadlock)
The first issue should be resolved by migration to SLK and we are currently
working on the second one. The test will land after those are fixed.
Reviewers: msantl, mferencevic
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2006
Summary:
Edge creation in tests used to happen in a different
transaction than vertex creation, which caused unexpected
behaviour. In this change, both vertices and edges are
created in the same transaction.
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2089
Summary:
Preparing the unique constraint code to be implemented in HA. To do so, I'm
moving everything to the `storage/common/` folder. I also added a new header,
`gid.hpp`, which does an `#ifdef` include of the correct `gid.hpp` based on
the product.
Reviewers: ipaljak, mferencevic, vkasljevic, teon.banek
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2079
Summary:
During its leadership, one peer can receive RPC messages from other peers indicating that its reign is over.
The problem arises when this happens during a transaction commit.
This is handled in the following way.
If we're the current leader and we want to commit a transaction, we need to make sure the Raft log is replicated before we can tell the client that the transaction is committed.
During that wait, we can only notice that the replication is taking too long, and we report that with `LOG(WARNING)` messages.
If the Raft mode changes during the wait, our Raft implementation will internally commit this transaction but won't be able to acquire the Raft lock because `db.Reset` has been called.
This is why there is a manual lock acquire. If we detect that `db.Reset` has been called, we throw an `UnexpectedLeaderChangeException` to the client.
Another issue concerns long-running transactions: if someone kills a `memgraph_ha` instance during the commit, the transaction will have its `abort` hint set. This will cause `src/query/operator.cpp` to throw a `HintedAbortError`. We need to catch this during shutdown, because from the user's perspective `memgraph_ha` isn't dead and the transaction wasn't aborted for taking too long, so we need to differentiate between those two cases.
Reviewers: mferencevic, ipaljak
Reviewed By: mferencevic, ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1956
Summary:
Micro benchmarks show no change compared to global new & delete. This is
to be expected, because Unwind relies only on `std::vector` which ought
to reserve the memory in reasonable chunks.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2064
Summary:
Micro benchmarks show an improvement to performance of about 10%
compared to global new & delete.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2061
Summary:
Micro benchmarks show that MonotonicBufferResource improves performance
by a factor of 1.5.
Reviewers: mtomic, mferencevic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2048
Summary:
Benchmarks show minor improvements. Perhaps it makes sense at some later date
to use another allocator for things lasting only in a single `Pull`.
Reviewers: mferencevic, mtomic, llugovic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2018
Summary:
Unfortunately, the written micro benchmark reports only minor
improvements compared to the default allocator. The results are in some
cases even a tiny bit worse.
Reviewers: mtomic, mferencevic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2039
Summary:
This change introduces dumping of index keys. During the dump process,
an internal label is assigned to each vertex and an index on the vertex's
internal property id is created for faster matching during edge creation.
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: msantl, pullbot
Differential Revision: https://phabricator.memgraph.io/D2046
Summary:
Prior to this change, DumpGenerator returned a single huge query that
dumped the entire graph. This change splits that query into multiple
queries, each dumping a single vertex or edge. For easier vertex matching
when dumping edges, an internal property id is assigned to each vertex and
removed after the whole graph is dumped.
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2038
Summary:
According to the written benchmark, using MonotonicBufferResource yields
significant improvements to the performance of Distinct. The setup fills the
database with vertices depending on the benchmark state. No edges are
created. Then we run DISTINCT on that. Since each vertex is unique, we
will store everything in `DistinctCursor::seen_rows_`, which is
backed by a MemoryResource. This setup, on my machine, yields 10 times
better performance when run with MonotonicBufferResource.
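A rough standard-library analogue of the benchmarked setup (DistinctCursor itself uses Memgraph's utils memory resources, not std::pmr):
```
// Rough analogue using C++17 std::pmr types.
#include <iostream>
#include <memory_resource>
#include <unordered_set>

int main() {
  std::pmr::monotonic_buffer_resource arena;
  // Every distinct row is remembered here; with a monotonic arena, growing
  // the set is cheap and all memory is released at once when the arena dies.
  std::pmr::unordered_set<long> seen_rows(&arena);
  for (long id = 0; id < 100000; ++id) seen_rows.insert(id);
  std::cout << seen_rows.size() << "\n";
}
```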
Reviewers: mferencevic, mtomic, msantl
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1894
Summary:
There will be a lot of leftover files; execute the following commands inside
`src/` to remove them:
```
git clean -xf
rm -r rpc/ storage/single_node_ha/rpc/
```
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2011
Summary:
This API change is needed in order to propagate the memory allocation
scheme for the execution of LogicalOperator::Cursor.
Depends on D1990
Reviewers: mtomic, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1980
Summary:
The new distributed directory is inside the query directory and mirrors its
structure. This groups all of the distributed (query) source code
together, which should make the potential directory extraction easier.
Reviewers: mferencevic, llugovic, mtomic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1923
Summary:
The tests `RelationshipPatternNoDetails` and `PatternPartBraces` in
`memgraph__unit__cypher_main_visitor` checked for the names of the anonymous
identifiers and therefore implicitly relied on the order of the traversal of the
tree.
This "bug" surfaced when Memgraph was compiled with GCC (tested on >= 6.3.0).
Reviewers: mtomic, teon.banek, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1945
Summary: New tutorial for backpacking through Europe.
Reviewers: dsantl, msantl, buda
Reviewed By: dsantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1954
Summary:
Test to check that recovery works in distributed Memgraph even if the
snapshot is corrupted.
Depends on D1930
Reviewers: vkasljevic, mferencevic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1950
Summary:
Same as `UniqueLabelPropertyConstraint` except that it works with multiple
properties. Because of that, it is a bit more complex and slower.
Reviewers: msantl, ipaljak
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1926
Summary:
This is a bugfix for D1836. It made `SymbolTable` return references to vector
elements, which then get invalidated and weird stuff happens.
This made a `DCHECK` in `rule_based_planner.hpp` trigger, and it was noticed by
@ipaljak 2 months later. All `DCHECK`s in `rule_based_planner.hpp` are now
changed to `CHECK`s.
Also, the hash function for `Symbol` was wrong because it took the
`user_declared` field into consideration, while the `==` operator doesn't.
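An illustration of the invariant the fix restores, using simplified, hypothetical names: the hash must only use fields that `==` compares, otherwise two equal symbols can land in different hash buckets.
```
// Simplified illustration; not the actual Symbol definition.
#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <unordered_set>

struct Symbol {
  std::string name;
  int position;
  bool user_declared;  // intentionally NOT part of equality
};

inline bool operator==(const Symbol &a, const Symbol &b) {
  return a.name == b.name && a.position == b.position;
}

struct SymbolHash {
  std::size_t operator()(const Symbol &s) const {
    // Hash only the fields that participate in operator==.
    return std::hash<std::string>{}(s.name) ^
           (std::hash<int>{}(s.position) << 1);
  }
};

int main() {
  std::unordered_set<Symbol, SymbolHash> symbols;
  symbols.insert({"n", 0, true});
  // Found even though user_declared differs, because hash and == agree.
  std::cout << symbols.count({"n", 0, false}) << "\n";
}
```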
Reviewers: ipaljak, teon.banek, mferencevic, msantl
Reviewed By: msantl
Subscribers: pullbot, ipaljak
Differential Revision: https://phabricator.memgraph.io/D1938
Summary:
For HA benchmarks, if one of the executables exits with a status other
than zero, the benchmark should fail.
Also, removing `LOG(INFO)`, since failing benchmarks should flag where to look.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1921
Summary:
This macro benchmark measures read throughput in HA.
The test first creates a random graph with a given number of nodes
and edges. After that, it concurrently performs the following query
for 10 seconds:
```
MATCH (n {id:$random_id})-[e]->(m) RETURN e, m;
```
In other words, it randomly picks a node and returns all its neighbours.
Locally measured results are as follows:
| nodes | edges | queries per second |
| 100 | 500 | 8900 |
| 1000 | 5000 | 2700 |
| 10000 | 50000 | 1200 |
Running the same test on single-node Memgraph yields very similar results
(the difference is up to a few hundred queries per second).
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1916
Summary:
The case with existence constraints is about 8-9% slower than the case
without existence constraints. Before this diff, that difference was about
15-16%.
Reviewers: msantl, ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1917
Summary:
UniqueLabelPropertyConstraint defines a label + property restriction on
vertices: the label + property + property value combination must be unique
at any given moment.
Reviewers: msantl, ipaljak
Reviewed By: msantl
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1884
Summary:
PROVE:PLAN clears the previously stored results for the current suite (which is
the suite associated with the current package) and prevents "result
accumulation" (and the accompanying huge and partly outdated reports).
Reviewers: mtomic, teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1909
Summary: Add flag to disable printing of records.
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1898
Summary:
Added a new config parameter, the replication timeout. This parameter sets
the upper limit for the replication phase; once the timeout is exceeded, the
transaction engine stops accepting new transactions.
We could experience this timeout in two cases:
1. a network partition
2. majority of the cluster stops working
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1893
Summary:
Added index creation and deletion handling in StateDelta.
Also included an integration test that creates an index and makes sure that
it gets replicated by killing each peer, eventually causing a leader
re-election.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1886
Summary:
`EdgesIterable` is used for iterating over edges in distributed Memgraph. Because of the LRU cache, there is a possibility of data getting evicted while someone iterates over it.
To prevent that, `EdgesIterable` locks that data and releases it when it is destructed.
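A hypothetical RAII sketch of that idea (the real implementation likely uses a shared lock or a reference count rather than an exclusive mutex): the cached edge data stays pinned for as long as the iterable is alive.
```
// Hypothetical sketch; not the actual EdgesIterable implementation.
#include <iostream>
#include <mutex>
#include <vector>

struct EdgeRecord {
  long from;
  long to;
};

class EdgesIterable {
 public:
  // Pin the cached data on construction; the pin is dropped automatically
  // when the iterable goes out of scope, allowing eviction again.
  EdgesIterable(std::mutex &cache_lock, const std::vector<EdgeRecord> &edges)
      : guard_(cache_lock), edges_(&edges) {}

  auto begin() const { return edges_->begin(); }
  auto end() const { return edges_->end(); }

 private:
  std::lock_guard<std::mutex> guard_;
  const std::vector<EdgeRecord> *edges_;
};

int main() {
  std::mutex cache_lock;
  std::vector<EdgeRecord> edges{{1, 2}, {1, 3}};
  EdgesIterable iterable(cache_lock, edges);
  for (const auto &edge : iterable)
    std::cout << edge.from << "->" << edge.to << "\n";
}
```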
Reviewers: msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1868
Summary:
CachedDataLock is necessary for the LRU cache, as remote data is no longer
persistent. Most methods handle this internally, but for methods that
return pointers or references to remote data, we need to manually
lock the data.
Reviewers: msantl, ipaljak
Reviewed By: msantl
Subscribers: teon.banek, pullbot
Differential Revision: https://phabricator.memgraph.io/D1869
Summary: The benchmark shows that a database with `ExistenceConstraints` is around 16% slower compared to the case without `ExistenceConstraints`.
Reviewers: teon.banek, msantl, ipaljak, mferencevic
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1876
Summary:
An existence constraint ensures that all nodes with a certain label have a
certain property. `ExistenceRule` defines the label -> properties rule and
`ExistenceConstraints` manages all constraints.
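A hypothetical sketch of the shape of such a rule and its check (names are illustrative, not the actual Memgraph code):
```
// Illustrative sketch only.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Every vertex with `label` must have all of `properties` set.
struct ExistenceRule {
  std::uint64_t label;
  std::vector<std::uint64_t> properties;
};

// A vertex satisfies the rule if it either lacks the label or has every
// required property.
bool Satisfies(const ExistenceRule &rule,
               const std::vector<std::uint64_t> &vertex_labels,
               const std::vector<std::uint64_t> &vertex_properties) {
  if (std::find(vertex_labels.begin(), vertex_labels.end(), rule.label) ==
      vertex_labels.end())
    return true;
  return std::all_of(
      rule.properties.begin(), rule.properties.end(), [&](std::uint64_t prop) {
        return std::find(vertex_properties.begin(), vertex_properties.end(),
                         prop) != vertex_properties.end();
      });
}

int main() {
  ExistenceRule rule{1, {10, 11}};
  std::cout << Satisfies(rule, {1}, {10, 11}) << " "
            << Satisfies(rule, {1}, {10}) << "\n";  // prints "1 0"
}
```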
Reviewers: msantl, ipaljak, teon.banek, mferencevic
Reviewed By: msantl, teon.banek, mferencevic
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1797
Summary:
During the following scenario:
- start a HA cluster with 3 machines
- find the leader and start sending queries
- SIGTERM the leader but leave other 2 machines untouched
The leader would be stuck in the shutdown phase.
This was happening because during the shutdown phase of the Bolt server, a
`graph_db_accessor` would try to commit a transaction after we've already shut
down the Raft server. Raft, although not running, still thinks it's in
Leader mode. The Tx Engine calls the `SafeToCommit` method to commit
transactions, and ends up in an infinite loop.
Since Raft was shut down, it won't handle any of the incoming RPCs and won't
change its mode.
The fix here is to shut down the Bolt server before Raft, so we don't have any
pending commits once Raft is shut down.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1853
Summary:
I've refactored the integration test for HA so we can reuse the common
parts like starting/stopping workers.
I've also added a test that triggers log compaction and checks that the
transferred snapshot is the same as the original one.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1847
Summary:
This change splits mg-communication into mg-communication and
mg-comm-rpc. The main reason for doing this is to make the separation of
enterprise features from community Memgraph clearer.
Reviewers: mferencevic, msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1844
Summary:
RuleBasedPlanner now generates only the regular ScanAll operations, and
Filter operations are appended as soon as possible. The newly added
Rewrite step takes this operator tree and replaces viable Filter &
ScanAll operators with the appropriate ScanAllBy<Index> operator. This
change ought to simplify the behaviour of DistributedPlanner when that
stage is moved before the indexed lookup rewrite.
Showing the unoptimized plan in the interactive planner is also supported.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1839
Summary:
All AST nodes had a member `uid_` that was used as a key in
`SymbolTable`. It is renamed to `symbol_pos_` and it appears only in
`Identifier`, `NamedExpression` and `Aggregation`, since only those types were
used in `SymbolTable`. SymbolGenerator is now responsible for creating symbols
in `SymbolTable` and assigning positions to AST nodes.
Cloning and serialization code is now simpler since there is no need to track
UIDs.
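A hypothetical sketch of the position-based lookup described above (simplified; not the actual SymbolTable code): AST nodes remember only an index into the table instead of carrying a unique id used as a map key.
```
// Simplified, hypothetical sketch.
#include <cstdint>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Symbol {
  std::string name;
};

class SymbolTable {
 public:
  // Returns the position the AST node stores as its `symbol_pos_`.
  std::int32_t CreateSymbol(std::string name) {
    symbols_.push_back({std::move(name)});
    return static_cast<std::int32_t>(symbols_.size()) - 1;
  }

  const Symbol &at(std::int32_t symbol_pos) const {
    return symbols_[symbol_pos];
  }

 private:
  std::vector<Symbol> symbols_;
};

int main() {
  SymbolTable table;
  std::int32_t pos = table.CreateSymbol("n");  // stored on the Identifier node
  std::cout << table.at(pos).name << "\n";
}
```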
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1836
Summary:
In this part of log compaction for Raft, I've implemented snapshotting
and snapshot recovery. I've also refactored the code a bit, so `RaftServer` now
has a pointer to the `GraphDb` and it can do some things by itself.
Log compaction requires some further work. Since snapshotting isn't
synchronous between peers, and each peer can work at its own pace, once
we've compacted the log so that the next log entry to be sent to peer `x`
isn't available anymore, we
need to send the snapshot over the wire. This means that the next part will
contain the `InstallSnapshotRPC` and then maybe one more that will implement the
logic of sending `LogEntry` or the whole snapshot.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1834
Summary:
`ReplicationLog` had a classic off-by-one bug. The `valid_prefix`
variable wasn't set properly.
This diff also includes a poor man's version of an HA client. This client
assumes that all the HA instances run on a single machine and that the
corresponding Bolt endpoints have open ports ranging from `7687` to
`7687 + num_machines - 1`.
This should make it easier to test certain things, e.g. disk usage, P25.
This test revealed the bug with `ReplicationLog`.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1813
Summary:
Variable expansions cannot appear in merge patterns or after updates,
so they can only be planned with GraphState::OLD. Because of that, it makes
sense to remove the GraphView parameter from them to reduce confusion.
Reviewers: teon.banek, llugovic
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1825
Summary:
Implement proper plan cloning using LCP instead of hacking it with
serialization.
Depends on D1815
Reviewers: teon.banek, llugovic
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1816
Summary:
Once a leader loses its leadership, in order to handle hanging
transactions, we reset the storage and the transaction engine.
This requires re-applying all the committed entries from the log.
Once we add snapshots (log compaction), we will need to do that as well.
One thing to keep in mind is the `election_timeout_min` parameter. If it's set
too low, it could trigger leader re-election too often.
Reviewers: ipaljak
Reviewed By: ipaljak
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1822