It is possible for multiple read-only queries to access the same sequence of
vertices/edges. The reader mode of the spin lock ensures that multiple threads
can make progress at the same time.
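A minimal sketch of the reader-mode idea, using `std::shared_mutex` as a stand-in for the storage's spin lock (the names below are illustrative, not the actual Memgraph types):
```
#include <shared_mutex>
#include <vector>

// Stand-in for the per-storage lock; the real implementation is a spin lock
// with reader/writer modes, this sketch only shows the locking discipline.
std::shared_mutex vertices_lock;
std::vector<int> vertices;  // placeholder for the shared vertex/edge sequence

// Read-only queries take the lock in shared (reader) mode, so any number of
// them can scan the same sequence concurrently.
int CountVertices() {
  std::shared_lock<std::shared_mutex> lock(vertices_lock);
  return static_cast<int>(vertices.size());
}

// Modifications take the lock exclusively, blocking readers only while the
// structure itself is being changed.
void AddVertex(int v) {
  std::unique_lock<std::shared_mutex> lock(vertices_lock);
  vertices.push_back(v);
}
```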
Add a timestamp to the DELETE_DESERIALIZED_OBJECT delta recording when the object was created.
RocksDB currently doesn't provide timestamp() functionality in TransactionDB iterators,
so a constant "0" timestamp is used for DELETE_DESERIALIZED_OBJECT.
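A rough sketch of that workaround (the `Delta` struct and constant below are hypothetical, for illustration only): because the iterator cannot report when the on-disk object was written, the delta is created with the constant fallback timestamp.
```
#include <cstdint>

// Illustrative delta type; the real Delta class in the storage looks different.
enum class DeltaAction { DELETE_DESERIALIZED_OBJECT /* ... */ };

struct Delta {
  DeltaAction action;
  uint64_t ts;  // timestamp at which the deserialized object was created
};

// RocksDB's TransactionDB iterator currently has no timestamp() accessor, so
// the real creation time of the deserialized object cannot be recovered.
constexpr uint64_t kDeserializedFallbackTs = 0;

Delta MakeDeleteDeserializedDelta(/* real timestamp unavailable */) {
  return Delta{DeltaAction::DELETE_DESERIALIZED_OBJECT, kDeserializedFallbackTs};
}
```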
Add a supernode vertex cache to account for long delta chains and to keep modifications made in the same module independent of the scanning of nodes in the next iteration of the pulling mechanism.
The parallel_exec_info should have been passed to this function before; without
it, the recovery of label-property indices was never parallelized.
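A hedged sketch of the intended call shape (the struct and function names are assumptions, not the real recovery code): when the parallel execution info is present, the vertex batches are split across the recovery threads; when it is absent, the index is rebuilt sequentially.
```
#include <atomic>
#include <cstdint>
#include <optional>
#include <thread>
#include <utility>
#include <vector>

// Illustrative stand-in for the recovery's parallel execution info:
// vertex batches (start gid, item count) plus the number of recovery threads.
struct ParallelExecInfo {
  std::vector<std::pair<uint64_t, uint64_t>> vertex_batches;
  size_t thread_count{1};
};

// Placeholder for indexing the vertices of a single batch.
void IndexBatch(uint64_t start_gid, uint64_t count) { /* ... */ }

void RecoverLabelPropertyIndex(const std::optional<ParallelExecInfo> &parallel_exec_info) {
  if (!parallel_exec_info) {
    // No info: the whole index is rebuilt sequentially on the calling thread.
    IndexBatch(0, UINT64_MAX);
    return;
  }
  const auto &info = *parallel_exec_info;
  std::atomic<size_t> next{0};
  std::vector<std::thread> workers;
  for (size_t i = 0; i < info.thread_count; ++i) {
    workers.emplace_back([&] {
      // Each worker keeps pulling the next unprocessed batch.
      for (size_t b = next.fetch_add(1); b < info.vertex_batches.size(); b = next.fetch_add(1)) {
        IndexBatch(info.vertex_batches[b].first, info.vertex_batches[b].second);
      }
    });
  }
  for (auto &w : workers) w.join();
}
```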
* Decouple BoltSession and communication::bolt::Session
* Add CREATE/USE/DROP DATABASE
* Add SHOW DATABASES
* Cover WebSocket session
* Simple session safety implemented via RWLock
* Storage symlinks for backward compatibility
* Extend the audit log with the DB info
* Add auth part
* Add tenant recovery
* Parallelize edge recovery
* Load vertex labels and properties in parallel
* Add parallel connectivity loading
* Add batches information to snapshot
* Introduce `items_per_batch` and `recovery_thread_count` flags
* Make it possible to load snapshots with an old version
* Add vertex batches to `RecoveryInfo`
* Extend durability integration tests with v15 test cases
* Add `std::vector` based `InitProperties`
* Use `InitProperties` in snapshot loading
* Added a check for an invalid reference to the underlying edge
* Added fix and e2e tests
* Isolation level tracking based on from_vertex_
* Added an explicit transaction test + edge accessor changes based on the vertex_edge
* Autocommit in tests; initialize `deleted` by checking out_edges
Co-authored-by: Marko Budiselić <marko.budiselic@memgraph.com>
Add a report for the case where a sync replica does not confirm within a timeout:
- Add a new exception, ReplicationException, returned when a sync replica does not confirm the reception of messages (new data, a new constraint/index, or triggers)
- Update the logic to throw ReplicationException when needed for insertion of new data, triggers, or creation of a new constraint/index (sketched below)
- Add end-to-end tests to cover the loss of connection with sync/async replicas when adding new data, adding new constraints/indexes, and triggers
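A simplified sketch of that confirmation path (generic C++ with hypothetical names; the real replication client is more involved): each SYNC replica reports completion through a future, and if any of them fails to confirm within the timeout a `ReplicationException` is thrown back to the query.
```
#include <chrono>
#include <future>
#include <stdexcept>
#include <vector>

// Thrown when at least one SYNC replica does not confirm within the timeout.
struct ReplicationException : std::runtime_error {
  using std::runtime_error::runtime_error;
};

// `confirmations` holds one future per SYNC replica for the current commit
// (new data, a new constraint/index, or a trigger-produced change).
void WaitForSyncReplicas(std::vector<std::future<bool>> &confirmations,
                         std::chrono::milliseconds timeout) {
  for (auto &confirmation : confirmations) {
    if (confirmation.wait_for(timeout) != std::future_status::ready ||
        !confirmation.get()) {
      throw ReplicationException(
          "At least one SYNC replica did not confirm the last transaction.");
    }
  }
}
```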
Add end-to-end tests to cover the creation and drop of indexes, existence constraints, and uniqueness constraints
Improved the tooling function mg_sleep_and_assert to also show the last result when the duration is exceeded
* Storage takes care of saving the settings when a new replica is added
* Restore replicas at startup
* Modify interactive_mg_runner + memgraph so that the data directory can be configured in CONTEXT
* Extend e2e test
* Correct typo
* Add a config flag to specify whether the replication state should be stored (true by default when starting Memgraph)
* Remove unnecessary "--" in the yaml file
* Make sure Memgraph stops if a replica can't be restored.
* Add UT covering the parsing of ReplicaStatus to/from JSON (a sketch follows this list)
* Add assert in e2e script to check that a port is free before using it
* Add test covering crash on Jepsen
* Make sure the application crashes if it starts on corrupted replication info.
Starting with a non-responsive replica is allowed.
* Add a temporary startup flag: this is needed so Jepsen does not automatically restore the replica on startup of main. This will be removed in T0835
* Add test
* Add implementation and adapted test
* Update workloads.yaml to have a timeout > 0
* Update tests (failing due to merging of "add replica state")
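A minimal sketch of the ReplicaStatus JSON round-trip mentioned above, using nlohmann::json; the field set (name, socket address, sync mode) is an assumption for illustration, not the exact on-disk schema.
```
#include <string>
#include <nlohmann/json.hpp>

// Illustrative replica status; the real struct stores whatever data is needed
// to re-register the replica on startup.
struct ReplicaStatus {
  std::string name;
  std::string socket_address;  // "ip:port"
  bool sync_mode{false};
};

nlohmann::json ToJson(const ReplicaStatus &status) {
  return {{"name", status.name},
          {"socket_address", status.socket_address},
          {"sync_mode", status.sync_mode}};
}

ReplicaStatus FromJson(const nlohmann::json &j) {
  // Throws (and therefore stops startup) if a field is missing or corrupted,
  // matching the "crash on corrupted replication info" behaviour above.
  return ReplicaStatus{j.at("name").get<std::string>(),
                       j.at("socket_address").get<std::string>(),
                       j.at("sync_mode").get<bool>()};
}
```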
This PR introduces READ COMMITTED and READ UNCOMMITTED isolation levels.
The isolation level can be set with a config option or with a query, for different scopes.
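A conceptual sketch of what the three levels mean for version visibility (textbook MVCC semantics with illustrative names, not Memgraph's actual visibility code):
```
#include <cstdint>

enum class IsolationLevel { SNAPSHOT_ISOLATION, READ_COMMITTED, READ_UNCOMMITTED };

// Illustrative record version: whether and when it was committed.
struct Version {
  bool committed;
  uint64_t commit_timestamp;  // only meaningful when `committed` is true
};

// Decide whether a transaction that started at `start_ts` may observe `v`.
bool IsVisible(const Version &v, uint64_t start_ts, IsolationLevel level) {
  switch (level) {
    case IsolationLevel::SNAPSHOT_ISOLATION:
      // Only data committed before this transaction started is visible.
      return v.committed && v.commit_timestamp <= start_ts;
    case IsolationLevel::READ_COMMITTED:
      // Any committed data is visible, even if committed after we started.
      return v.committed;
    case IsolationLevel::READ_UNCOMMITTED:
      // Everything is visible, including other transactions' uncommitted writes.
      return true;
  }
  return false;
}
```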
* Throw OOMException while creating vertices and edges
* Throw on indices creation
* Throw on setting a property
* Throw an OOM exception while recovering
* Throw exception when query engine asks for extra memory
* Block the out-of-memory exception during skip list GC
* Define an additional commit log constructor which takes an oldest active id (a sketch follows this list)
* Delay commit log construction until the recovery process is finished
* Add test for commit log with initial id
* Silence the macro redefinition warning
* Set state to invalid after exception
* Add proper locking
* Start background replicating only if in valid state
* Freeze transaction timestamp on replica
* Timeout fixes
* Fix Jepsen run script
* Disable perf checker and enable nemesis
* Add documentation for some chunks of code
* Decrease timeout so main doesn't hang on network partitions too long
* Add config for replication client/server
* Add SSL to replication
* Add semi-sync replication
* Expose necessary information about replication
* Thread pool fix
* Set BasicResult value type to void
* Add basic communication process using commit timestamp
* Add file number to req
* Add proper recovery handling
* Allow loading of WALs with same seq num
* Always allow the desired commit timestamp
* Set replica timestamp for operation
* Mark non-transactional timestamp as finished
* Add file transfer over RPC
* Snapshot transfer implementation
* Allow snapshot creation only for MAIN instances
* Replica and main can have replication clients
* Use only snapshots and WALs that are from the Main storage
* Add flush lock and expose buffer
* Add fstat for file size and TryFlushing method
* Use lseek for size
Co-authored-by: Antonio Andelic <antonio.andelic@memgraph.io>
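A minimal sketch of the commit-log constructor mentioned earlier in this list (a hypothetical class, not the actual implementation): constructing the log with the oldest active id after recovery lets it treat every earlier transaction as already finished instead of tracking them individually.
```
#include <cstdint>
#include <set>

class CommitLog {
 public:
  CommitLog() = default;
  // Construct with an initial id: every transaction id below
  // `oldest_active_id` is considered finished up front.
  explicit CommitLog(uint64_t oldest_active_id) : oldest_active_(oldest_active_id) {}

  void MarkFinished(uint64_t id) {
    if (id < oldest_active_) return;  // already covered by the initial id
    finished_.insert(id);
    // Advance the contiguous "all finished" frontier.
    while (finished_.count(oldest_active_)) {
      finished_.erase(oldest_active_);
      ++oldest_active_;
    }
  }

  // Smallest id that is not yet known to be finished.
  uint64_t OldestActive() const { return oldest_active_; }

 private:
  uint64_t oldest_active_{0};
  std::set<uint64_t> finished_;
};
```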
* Add tests for multiple clients
* Use variant for RPC server and clients
* Use a synchronized list for replication clients; extract variant access into a function
* Set MAIN as default, add unregister function, add a name for replication clients
* Use the regular list for clients
* Use test fixture so storage directory is cleaned
* Use seq_cst for replication_state
Co-authored-by: Antonio Andelic <antonio.andelic@memgraph.io>
This implements the initial version of synchronous replication.
Currently, only one replica is supported and that isn't configurable.
To run the main instance use the following command:
```
./memgraph \
--main \
--data-directory main-data \
--storage-properties-on-edges \
--storage-wal-enabled \
--storage-snapshot-interval-sec 300
```
To run the replica instance use the following command:
```
./memgraph \
--replica \
--data-directory replica-data \
--storage-properties-on-edges \
--bolt-port 7688
```
You can then write/read data through Bolt port 7687 (the main instance) and
also read the data from the replica instance through Bolt port 7688.
NOTE: The main instance *must* be started without any data and the replica
*must* be started before any data is added to the main instance.
* Add basic synchronous replication test
* Using RWLock for replication stuff
Co-authored-by: Matej Ferencevic <matej.ferencevic@memgraph.io>
Co-authored-by: Antonio Andelic <antonio.andelic@memgraph.io>