Summary:
Depends on D2471
- Add pointer to storage to `InterpreterContext`
- Rename `operator()` to `Prepare`
- Use `Interpret` instead of `operator()` (`Interpret` will be removed soon)
- Remove the `in_explicit_transaction` parameter
- Remove the memory resource parameter from `Interpret`
- Remove the storage accessor parameter from `Interpret`
- Fix up tests (remove the `Interpreter` from `database_transaction_timeout`)
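A rough sketch of what the reshaped interface looks like after the changes listed above; all type names here are illustrative stand-ins, not the exact Memgraph declarations:

```cpp
#include <map>
#include <string>

namespace storage { class Storage {}; }  // stand-in for the storage engine
struct PropertyValue {};                  // stand-in for a query parameter value
struct PrepareResult {};                  // stand-in for the result of preparation

struct InterpreterContext {
  explicit InterpreterContext(storage::Storage *db) : db(db) {}
  storage::Storage *db;  // the pointer to storage now lives in the context
};

class Interpreter {
 public:
  explicit Interpreter(InterpreterContext *context) : context_(context) {}

  // Formerly `operator()`. The storage accessor, the memory resource and the
  // `in_explicit_transaction` flag are no longer passed in; they come from
  // `context_` and the interpreter's own state.
  PrepareResult Prepare(const std::string &query,
                        const std::map<std::string, PropertyValue> &params) {
    (void)query;
    (void)params;
    return {};
  }

 private:
  InterpreterContext *context_;
};

int main() {
  storage::Storage db;
  InterpreterContext context(&db);
  Interpreter interpreter(&context);
  interpreter.Prepare("MATCH (n) RETURN n", {});
  return 0;
}
```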
Reviewers: teon.banek, mferencevic
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2482
Summary:
With a pool allocator, lookups in STL set and map are up to 50% faster.
This is probably due to the contiguous memory of the pooled objects,
i.e. the nodes of those containers. In some cases the lookup even
outperforms the SkipList. Insertions are also faster, up to 30%, though
not as dramatically. This makes a significant difference when the STL
containers are used from a single thread, as they then clearly
outperform the SkipList.
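The effect can be reproduced with the standard library's pool resource; Memgraph ships its own pool allocator, so treat this only as a minimal std::pmr analogue of the technique:

```cpp
#include <memory_resource>
#include <set>

int main() {
  // Tree nodes of `numbers` are carved out of chunks owned by the pool, so
  // they end up close together in memory instead of one `new` per node.
  std::pmr::unsynchronized_pool_resource pool;
  std::pmr::set<int> numbers(&pool);

  for (int i = 0; i < 100000; ++i) numbers.insert(i);

  // Lookups traverse nodes with better cache locality than with the default
  // allocator, which is where the measured speedup comes from.
  return numbers.find(42) != numbers.end() ? 0 : 1;
}
```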
Reviewers: mferencevic, ipaljak
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2326
Summary:
Micro benchmarks show some minor variation compared to the previous
commit: smaller cases are a bit worse, while cases with larger data are
a bit better.
Reviewers: mtomic, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2136
Summary:
Micro benchmarks show no change compared to global new & delete. This is
to be expected, because Unwind relies only on `std::vector`, which
already reserves memory in reasonably sized chunks and therefore makes
very few allocations to begin with.
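To see why, one can count how many allocations a `std::vector` actually makes; the counting resource below is only an illustration, not code from this change:

```cpp
#include <cstddef>
#include <memory_resource>
#include <vector>

// Counts how many times the underlying resource is hit.
class CountingResource final : public std::pmr::memory_resource {
 public:
  std::size_t allocations = 0;

 private:
  void *do_allocate(std::size_t bytes, std::size_t alignment) override {
    ++allocations;
    return std::pmr::new_delete_resource()->allocate(bytes, alignment);
  }
  void do_deallocate(void *p, std::size_t bytes, std::size_t alignment) override {
    std::pmr::new_delete_resource()->deallocate(p, bytes, alignment);
  }
  bool do_is_equal(const std::pmr::memory_resource &other) const noexcept override {
    return this == &other;
  }
};

int main() {
  CountingResource counting;
  std::pmr::vector<int> rows(&counting);
  for (int i = 0; i < 1000000; ++i) rows.push_back(i);
  // Geometric growth keeps `counting.allocations` in the low tens, so the
  // choice of allocator barely matters for this workload.
  return counting.allocations < 64 ? 0 : 1;
}
```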
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2064
Summary:
Micro benchmarks show a performance improvement of about 10% compared
to global new & delete.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2061
Summary:
Micro benchmarks show that MonotonicBufferResource improves performance
by a factor of 1.5.
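The MonotonicBufferResource used here follows the same idea as the standard `std::pmr::monotonic_buffer_resource`, so a minimal sketch of the technique using the std::pmr variant:

```cpp
#include <memory_resource>
#include <string>
#include <vector>

int main() {
  // Allocations are bump-pointer carved from `buffer` (falling back to larger
  // upstream chunks if it fills up); nothing is freed until the resource goes
  // away, which makes each individual allocation extremely cheap.
  char buffer[4096];
  std::pmr::monotonic_buffer_resource memory(buffer, sizeof(buffer));

  std::pmr::vector<std::pmr::string> names(&memory);
  names.emplace_back("alice");
  names.emplace_back("bob");
  return names.size() == 2 ? 0 : 1;
}
```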
Reviewers: mtomic, mferencevic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2048
Summary:
Benchmarks show minor improvements. At some later date it may make sense
to use a separate allocator for objects that live only within a single `Pull`.
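A hypothetical sketch of that idea: give each `Pull` its own arena and reset it once the row has been produced (the `ProcessRow` helper is a stand-in, not an existing function):

```cpp
#include <memory_resource>
#include <vector>

// Stand-in for whatever short-lived work a single Pull performs.
void ProcessRow(std::pmr::vector<int> &scratch) { scratch.push_back(42); }

void PullLoop(int rows) {
  std::pmr::monotonic_buffer_resource per_pull_memory;
  for (int i = 0; i < rows; ++i) {
    {
      // Everything allocated here lives only for the duration of this Pull.
      std::pmr::vector<int> scratch(&per_pull_memory);
      ProcessRow(scratch);
    }  // `scratch` is destroyed before the arena is reset
    per_pull_memory.release();  // hand all per-Pull memory back in one go
  }
}

int main() {
  PullLoop(10);
  return 0;
}
```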
Reviewers: mferencevic, mtomic, llugovic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2018
Summary:
Unfortunately, the written micro benchmark reports only minor
improvements compared to the default allocator. In some cases the
results are even a tiny bit worse.
Reviewers: mtomic, mferencevic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2039
Summary:
According to the written benchmark, using MonotonicBufferResource yields
a significant improvement to the performance of Distinct. The setup
fills the database with a number of vertices determined by the benchmark
state; no edges are created. We then run DISTINCT on that data. Since
each vertex is unique, everything ends up stored in
`DistinctCursor::seen_rows_`, which is backed by a MemoryResource. On my
machine, this setup yields 10 times better performance when run with
MonotonicBufferResource.
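A rough stand-in for that mechanism (the class and member names below mirror the description only loosely and are not the real operator code):

```cpp
#include <memory_resource>
#include <unordered_set>

// Minimal Distinct-like filter: every row id it has seen goes into a set
// whose nodes are allocated from the passed-in memory resource.
class DistinctLike {
 public:
  explicit DistinctLike(std::pmr::memory_resource *memory) : seen_rows_(memory) {}

  // Returns true only the first time a given row is seen.
  bool Pull(int row) { return seen_rows_.insert(row).second; }

 private:
  std::pmr::unordered_set<int> seen_rows_;
};

int main() {
  // With a monotonic resource every node insertion is a cheap bump allocation
  // instead of a call to global new, which is where the speedup comes from.
  std::pmr::monotonic_buffer_resource memory;
  DistinctLike distinct(&memory);
  int unique = 0;
  for (int row = 0; row < 100000; ++row) unique += distinct.Pull(row) ? 1 : 0;
  return unique == 100000 ? 0 : 1;
}
```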
Reviewers: mferencevic, mtomic, msantl
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1894