* Fix up REPLICA GetInfo and CreateSnapshot
Subtle bug where these actions were using the incorrect transactional
access while in the REPLICA role. This caused the timestamp to be
incorrectly bumped, breaking replication on the REPLICA.
* Delay DNS resolution
Rather than resolving at endpoint creation, we now resolve only on
Socket connect. This allows k8s deployments to change their IPs during
pod restarts (see the sketch after this list).
* Minor SonarSource fixes
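A minimal sketch of the deferred resolution, assuming illustrative `Endpoint`/`Connect` names rather than the actual socket classes: the hostname is stored untouched and only turned into an address inside the connect call, so a restarted pod with a new IP is picked up.

```cpp
// Keep the hostname in the endpoint and resolve it only when the socket
// actually connects. Names here are illustrative, not the real classes.
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

#include <cstdint>
#include <string>

struct Endpoint {
  std::string host;  // stored as-is; no resolution at construction time
  uint16_t port;
};

int Connect(const Endpoint &ep) {
  addrinfo hints{};
  hints.ai_family = AF_UNSPEC;
  hints.ai_socktype = SOCK_STREAM;

  addrinfo *result = nullptr;
  // Resolution happens here, at connect time.
  if (getaddrinfo(ep.host.c_str(), std::to_string(ep.port).c_str(), &hints, &result) != 0) return -1;

  int fd = -1;
  for (addrinfo *rp = result; rp != nullptr; rp = rp->ai_next) {
    fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
    if (fd == -1) continue;
    if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0) break;  // connected
    close(fd);
    fd = -1;
  }
  freeaddrinfo(result);
  return fd;  // -1 on failure
}
```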
---------
Co-authored-by: Andreja <andreja.tonev@memgraph.io>
Co-authored-by: DavIvek <david.ivekovic@memgraph.io>
* Interpreter transaction ID decoupled from storage transaction ID
* Transactional scope for indices, statistics and constraints
* Storage::Accessor now has 2 modes (unique and shared)
* Introduced ResourceLock to fix pthread mutex problems
* Split InfoQuery in two: non-transactional SystemInfoQuery and transactional DatabaseInfoQuery
* Replicable and durable statistics
* Bumped WAL/Snapshot versions
* Initial implementation of the Lamport clock
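A minimal sketch of the Lamport clock idea, with illustrative names rather than the actual implementation: a logical counter that is advanced on every local event and merged with timestamps received from other instances.

```cpp
// Illustrative Lamport clock; names are placeholders, not the real internals.
#include <atomic>
#include <cstdint>

class LamportClock {
 public:
  // Local event (e.g. a system transaction): simply advance the counter.
  uint64_t Tick() { return counter_.fetch_add(1, std::memory_order_acq_rel) + 1; }

  // On receiving a timestamp from another instance, jump ahead of it, then tick.
  uint64_t Receive(uint64_t remote) {
    uint64_t current = counter_.load(std::memory_order_acquire);
    while (current < remote &&
           !counter_.compare_exchange_weak(current, remote, std::memory_order_acq_rel)) {
    }
    return Tick();
  }

 private:
  std::atomic<uint64_t> counter_{0};
};
```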
---------
Co-authored-by: Andreja Tonev <andreja.tonev@memgraph.io>
Add a report for the case where a sync replica does not confirm within a timeout:
-Add a new exception, ReplicationException, returned when a sync replica does not confirm the reception of messages (new data, a new constraint/index, or triggers)
-Update the logic to throw ReplicationException where needed: insertion of new data, triggers, and creation of new constraints/indexes (see the sketch after this list)
-Add end-to-end tests to cover the loss of connection with sync/async replicas when adding new data, adding new constraints/indexes, and triggers
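A minimal sketch of the confirmation check; `Replica` and `CheckReplicas` are hypothetical stand-ins, only the ReplicationException name comes from the change itself.

```cpp
#include <stdexcept>
#include <string>
#include <vector>

struct ReplicationException : std::runtime_error {
  using std::runtime_error::runtime_error;
};

enum class ReplicaMode { SYNC, ASYNC };

struct Replica {
  std::string name;
  ReplicaMode mode;
  bool confirmed_within_timeout;  // stand-in for waiting on the replication stream
};

// Called after the deltas for new data, triggers or a new constraint/index
// have been streamed to the replicas.
void CheckReplicas(const std::vector<Replica> &replicas) {
  for (const auto &replica : replicas) {
    // An ASYNC replica may lag behind without failing the query; a SYNC
    // replica that did not confirm in time is reported back to the user.
    if (replica.mode == ReplicaMode::SYNC && !replica.confirmed_within_timeout) {
      throw ReplicationException("Sync replica '" + replica.name +
                                 "' did not confirm the transaction within the timeout.");
    }
  }
}
```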
Add end-to-end tests to cover the creation and drop of indexes, existence constraints, and uniqueness constraints
Improved the tooling function mg_sleep_and_assert to also show the last result when the duration is exceeded
Summary:
Store accumulated results as `communication::bolt::Value`s instead of
`TypedValue`s.
Add additional overloads for `Result` and `Summary` which accept `TypedValue`s
but internally perform conversions.
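A minimal sketch of the overload pattern, with simplified stand-in types instead of the real `communication::bolt::Value` and `TypedValue` interfaces: the native overload stores the values directly, while the `TypedValue` overload converts and forwards.

```cpp
#include <utility>
#include <vector>

struct BoltValue {};   // stand-in for communication::bolt::Value
struct TypedValue {};  // stand-in for the query engine's TypedValue

// Hypothetical conversion helper.
BoltValue ToBoltValue(const TypedValue & /*value*/) { return BoltValue{}; }

class ResultStream {
 public:
  // Preferred overload: values are already in the wire format.
  void Result(std::vector<BoltValue> values) { rows_.push_back(std::move(values)); }

  // Convenience overload: accepts TypedValues and converts internally.
  void Result(const std::vector<TypedValue> &values) {
    std::vector<BoltValue> converted;
    converted.reserve(values.size());
    for (const auto &value : values) converted.push_back(ToBoltValue(value));
    Result(std::move(converted));
  }

 private:
  std::vector<std::vector<BoltValue>> rows_;
};
```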
Reviewers: teon.banek, mferencevic
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2514
Summary:
Depends on D2471
- Add pointer to storage to `InterpreterContext`
- Rename `operator()` to `Prepare`
- Use `Interpret` instead of `operator()` (`Interpret` will be removed soon)
- Remove the `in_explicit_transaction` parameter
- Remove the memory resource parameter from `Interpret`
- Remove the storage accessor parameter from `Interpret`
- Fix up tests (remove the `Interpreter` from `database_transaction_timeout`)
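A rough sketch of the shape after the refactor; the exact signatures are assumptions, only `InterpreterContext`, `Prepare` and the removed parameters come from the list above.

```cpp
#include <map>
#include <string>

namespace storage { class Storage; }  // stand-in forward declaration
struct TypedValue {};                  // stand-in type

struct InterpreterContext {
  storage::Storage *db;  // the interpreter now reaches storage through the context
  // ... plan cache, configuration, etc.
};

class Interpreter {
 public:
  explicit Interpreter(InterpreterContext *context) : context_(context) {}

  // Previously `operator()`; the in_explicit_transaction flag, the memory
  // resource and the storage accessor are no longer passed in by the caller.
  void Prepare(const std::string &query, const std::map<std::string, TypedValue> &params) {
    // parse, plan and cache the query using context_->db ...
  }

 private:
  InterpreterContext *context_;
};
```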
Reviewers: teon.banek, mferencevic
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2482
Summary:
With a pool allocator, lookups in STL set and map are up to 50% faster.
This is probably due to the contiguous memory of pooled objects, i.e.,
the nodes of those containers. In some cases, the lookup even outperforms
the SkipList. Insertions are also faster, though not as dramatically
(up to 30%). This makes a significant difference when the STL containers
are used from a single thread, where they outperform the SkipList.
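A minimal illustration of the idea using the standard `<memory_resource>` pool as a stand-in for the custom pool allocator: the container nodes are carved out of pooled chunks instead of arbitrary heap locations, which is friendlier to the CPU cache during lookups.

```cpp
#include <cstdint>
#include <map>
#include <memory_resource>
#include <set>

int main() {
  std::pmr::unsynchronized_pool_resource pool;

  // Node-based containers allocate one node per element; with a pool resource
  // those nodes end up packed close together in memory.
  std::pmr::set<std::int64_t> ids(&pool);
  std::pmr::map<std::int64_t, std::int64_t> degrees(&pool);

  for (std::int64_t i = 0; i < 100000; ++i) {
    ids.insert(i);
    degrees.emplace(i, i % 8);
  }

  // Lookups traverse pooled, cache-friendly nodes.
  return static_cast<int>(ids.count(42'000) + degrees.count(13));
}
```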
Reviewers: mferencevic, ipaljak
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2326
Summary:
Micro benchmarks show some minor variations compared to the previous
commit. Smaller cases are a bit worse, while larger data cases are a bit
better.
Reviewers: mtomic, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2136
Summary:
Micro benchmarks show improvements in performance of MapLiteral from 5%
to 40% depending on the size of the input. On the other hand, a sequence
of AdditionOperators behaves the same with both allocation schemes.
Reviewers: mtomic, mferencevic, msantl
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2132
Summary:
Micro benchmarks show no change compared to global new & delete. This is
to be expected, because Unwind relies only on `std::vector`, which ought
to reserve memory in reasonable chunks.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2064
Summary:
Micro benchmarks show an improvement in performance of about 10%
compared to global new & delete.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2061
Summary:
Micro benchmarks show that MonotonicBufferResource improves performance
by a factor of 1.5.
Reviewers: mtomic, mferencevic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2048
Summary:
Benchmarks show minor improvements. Perhaps it makes sense at some later
date to use another allocator for objects that live only within a single `Pull`.
Reviewers: mferencevic, mtomic, llugovic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2018
Summary:
Unfortunately, the written micro benchmark reports only minor
improvements compared to the default allocator. In some cases the
results are even a tiny bit worse.
Reviewers: mtomic, mferencevic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2039
Summary:
According to the written benchmark, using MonotonicBufferResource yields
significant improvements to the performance of Distinct. The setup fills
the database with a number of vertices depending on the benchmark state;
no edges are created. We then run DISTINCT on that. Since each vertex is
unique, everything ends up stored in `DistinctCursor::seen_rows_`, which
is backed by a MemoryResource. On my machine, this setup yields 10 times
better performance when run with MonotonicBufferResource.
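A minimal sketch of that setup, using the standard monotonic resource as a stand-in for MonotonicBufferResource: every distinct row is remembered in a set whose nodes come from one upfront buffer, so the hot loop does no individual heap allocations.

```cpp
#include <cstdint>
#include <memory_resource>
#include <set>

int main() {
  std::pmr::monotonic_buffer_resource memory(1 << 20);  // 1 MiB initial block
  std::pmr::set<std::int64_t> seen_rows(&memory);       // stand-in for seen_rows_

  std::int64_t distinct_count = 0;
  for (std::int64_t vertex_id = 0; vertex_id < 100000; ++vertex_id) {
    // Every vertex is unique, so each one is inserted, mirroring the setup
    // where DISTINCT has to remember every row it has produced.
    if (seen_rows.insert(vertex_id).second) ++distinct_count;
  }
  return distinct_count == 100000 ? 0 : 1;
}
```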
Reviewers: mferencevic, mtomic, msantl
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1894