Summary:
Micro benchmarks show some minor variations compared to the previous
commit. Cases with smaller data are slightly worse, while cases with
larger data are slightly better.
Reviewers: mtomic, mferencevic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2136
Summary:
Micro benchmarks show improvements in the performance of MapLiteral of 5%
to 40%, depending on the size of the input. On the other hand, a sequence
of AdditionOperators behaves the same with both allocation schemes.
Reviewers: mtomic, mferencevic, msantl
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2132
Summary:
Micro benchmarks show no change compared to global new & delete. This is
to be expected, because Unwind relies only on `std::vector`, which ought
to reserve memory in reasonably sized chunks.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2064
Summary:
Micro benchmarks show an improvement in performance of about 10%
compared to global new & delete.
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2061
Summary:
Micro benchmarks show that MonotonicBufferResource improves performance
by a factor of 1.5.
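For context, the MonotonicBufferResource named here presumably follows the same idea as the standard-library `std::pmr::monotonic_buffer_resource`: allocations are served by bumping a pointer inside progressively larger chunks, and everything is released at once on destruction. A minimal sketch with standard `std::pmr` types (not the Memgraph internals):
```
#include <memory_resource>
#include <vector>

int main() {
  // Allocations are served by bumping a pointer inside chunks obtained
  // from the upstream (default) resource; per-element deallocations are no-ops.
  std::pmr::monotonic_buffer_resource mem(/*initial_size=*/1024);

  // The vector allocates through `mem` instead of global new & delete.
  std::pmr::vector<int> values(&mem);
  for (int i = 0; i < 1000; ++i) values.push_back(i);

  // All memory is released in one go when `mem` is destroyed, which is
  // what makes deallocation effectively free.
}
```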
Reviewers: mtomic, mferencevic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2048
Summary:
Benchmarks show minor improvements. Perhaps it makes sense at some later date
to use another allocator for objects that live only during a single `Pull`.
Reviewers: mferencevic, mtomic, llugovic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2018
Summary:
Unfortunately, the written micro benchmark reports only minor
improvements compared to the default allocator. In some cases the results
are even slightly worse.
Reviewers: mtomic, mferencevic, llugovic
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2039
Summary:
According to the written benchmark, using MonotonicBufferResource yields
significant improvements to the performance of Distinct. The setup fills the
database with vertices, the number depending on the benchmark state; no edges
are created. We then run DISTINCT on that. Since each vertex is unique,
everything ends up stored in `DistinctCursor::seen_rows_`, which is backed by
a MemoryResource. On my machine, this setup yields 10 times better
performance when run with MonotonicBufferResource.
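To illustrate the gist of that setup, here is a sketch using standard `std::pmr` types rather than the actual `DistinctCursor` internals (names and structure are illustrative only):
```
#include <memory_resource>
#include <string>
#include <unordered_set>

// Hypothetical stand-in for a row of distinct values.
using Row = std::string;

int main() {
  // Backing the set of seen rows with a monotonic resource means the many
  // small node allocations never hit global new & delete individually and
  // are all released together when the query finishes.
  std::pmr::monotonic_buffer_resource mem;
  std::pmr::unordered_set<Row> seen_rows(&mem);

  // In the benchmark every vertex is unique, so every row gets inserted.
  seen_rows.insert("vertex-1");
  seen_rows.insert("vertex-2");
}
```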
Reviewers: mferencevic, mtomic, msantl
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1894
Summary:
This diff removes the need for a database when parsing a query and
creating an Ast. Instead of storing storage::{Label,Property,EdgeType}
in Ast nodes, we store the name and an index into all of the names. This
allows for easy creation of a map from {Label,Property,EdgeType} index
into the concrete storage type. Obviously, this comes with a performance
penalty during execution, but it should be minor. The upside is that the
query/frontend minimally depends on storage (PropertyValue), which makes
writing tests easier as well as running them a lot faster (there is no
database setup). This is most noticeable in the ast_serialization test,
which used to take a long time due to the start-up of a distributed database.
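A rough sketch of the indirection this introduces, with hypothetical names (the actual AST and storage types differ):
```
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for resolving a label name against storage.
int LookUpLabelInStorage(const std::string &name) {
  static std::unordered_map<std::string, int> ids;
  return ids.emplace(name, static_cast<int>(ids.size())).first->second;
}

// An AST node no longer holds storage::Label; it keeps the name and an
// index into the query's list of all label names.
struct LabelIx {
  std::string name;
  int64_t ix;
};

// Before execution, each name is resolved exactly once, producing a dense
// index -> storage id mapping; during execution the only extra cost is a
// single lookup: mapping[node.label_.ix].
std::vector<int> ResolveLabels(const std::vector<std::string> &label_names) {
  std::vector<int> mapping;
  mapping.reserve(label_names.size());
  for (const auto &name : label_names)
    mapping.push_back(LookUpLabelInStorage(name));
  return mapping;
}
```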
Reviewers: mtomic, llugovic
Reviewed By: mtomic
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1774
Summary:
This change makes HierarchicalTreeVisitor visit only Cypher-related AST
nodes. QueryVisitor can be used to differentiate between various query
types we have. The next step is to either rename HierarchicalTreeVisitor
to something like CypherQueryVisitor, or perhaps extract Clause visiting
from it.
Reviewers: mtomic, llugovic
Reviewed By: llugovic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1710
Summary:
`Query` is now an abstract class which has `CypherQuery`,
`ExplainQuery`, `IndexQuery`, `AuthQuery` and `StreamQuery` as derived
classes. Only `CypherQuery` is forwarded to the planner; the rest of the
queries are handled directly in the interpreter. This enabled us to
remove the auth, explain and stream operators, clean up the `Context` class
and remove the coupling between the `Results` class and the plan cache. This
should make
it easier to add similar functionality because no logical operator
boilerplate is needed. It should also be easier to separate community
and enterprise features for open source.
Remove Explain logical operator
Separate IndexQuery in AST
Handle index creation in interpreter
Remove CreateIndex operator and ast nodes
Remove plan cache reference from Results
Move auth queries out of operator tree
Remove auth from context
Fix tests, separate stream queries
Remove in_explicit_transaction and streams from context
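A minimal, self-contained sketch of the resulting dispatch (hypothetical and heavily simplified; the real signatures and the concrete visitor differ):
```
#include <iostream>

// Forward declarations for the visitor's overload set.
class CypherQuery;
class IndexQuery;

// Each derived query type gets its own Visit overload.
class QueryVisitor {
 public:
  virtual void Visit(CypherQuery &) = 0;
  virtual void Visit(IndexQuery &) = 0;
  virtual ~QueryVisitor() = default;
};

// `Query` is only an abstract base; it is never planned or executed itself.
class Query {
 public:
  virtual void Accept(QueryVisitor &visitor) = 0;
  virtual ~Query() = default;
};

class CypherQuery : public Query {
 public:
  void Accept(QueryVisitor &visitor) override { visitor.Visit(*this); }
};

class IndexQuery : public Query {
 public:
  void Accept(QueryVisitor &visitor) override { visitor.Visit(*this); }
};

// The interpreter forwards only CypherQuery to the planner; other query
// kinds are handled directly, with no logical operator boilerplate.
class InterpreterVisitor : public QueryVisitor {
 public:
  void Visit(CypherQuery &) override { std::cout << "plan and execute\n"; }
  void Visit(IndexQuery &) override { std::cout << "create index directly\n"; }
};

int main() {
  CypherQuery cypher;
  IndexQuery index;
  InterpreterVisitor interpreter;
  cypher.Accept(interpreter);  // goes through the planner
  index.Accept(interpreter);   // handled in the interpreter
}
```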
Reviewers: teon.banek, mferencevic, msantl
Reviewed By: teon.banek, mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1664
Summary:
LabelPropertyIndex now has the ability to enforce a unique constraint.
This doesn't lock the tx engine.
Reviewers: teon.banek, mferencevic
Reviewed By: teon.banek
Subscribers: pullbot, vkasljevic, buda
Differential Revision: https://phabricator.memgraph.io/D1660
Summary: Up till now, `AstStorage` also took care of tracking the root of the `Query`, and loading or cloning of `Query` nodes would change that root. This felt out of place because sometimes `AstStorage` is used only for storing expressions, and we don't even have an entire query in the storage. This diff removes that feature from `AstStorage`. Now its only functionality is owning AST nodes and assigning unique IDs to them.
Reviewers: teon.banek, llugovic
Reviewed By: teon.banek
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D1646
Summary:
To clean the working directory after this diff you should execute:
```
rm src/database/counters_rpc_messages.capnp
rm src/database/counters_rpc_messages.hpp
rm src/database/serialization.capnp
rm src/database/serialization.hpp
```
Reviewers: teon.banek, msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1636
Summary:
This is a simple change which modifies the interface of
awesome_memgraph_functions to accept a C-style pointer to an array along
with a count. Doing things this way allows us to easily try out different
allocation schemes for function arguments. In this diff, we now use stack
allocation of arguments in a plain fixed-size array when the number of
arguments is small. According to heaptrack, this small change should yield
noticeable improvements in heap usage.
Obviously, this doesn't solve the problem of heap allocations inside
TypedValue arguments themselves. These allocations appear when
std::string and std::vector are used inside TypedValue.
Micro benchmarks show some performance improvement, mostly around the
boundary between using the fixed array and std::vector. The improvement is
more noticeable with multiple threads, since the primary gain is in avoiding
calls to memory allocation.
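The shape of the calling convention, sketched with hypothetical helper names (the real TypedValue and evaluator differ):
```
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical stand-ins for the real types.
struct TypedValue {};
using AwesomeFunction = TypedValue (*)(const TypedValue *args, int64_t nargs);

constexpr int64_t kMaxStackArgs = 8;  // illustrative threshold

// `eval_arg(i)` evaluates the i-th argument expression.
TypedValue CallFunction(AwesomeFunction fn, int64_t nargs,
                        const std::function<TypedValue(int64_t)> &eval_arg) {
  if (nargs <= kMaxStackArgs) {
    // Small argument counts: evaluate straight into a fixed-size array on
    // the stack, so the argument list itself needs no heap allocation.
    TypedValue stack_args[kMaxStackArgs];
    for (int64_t i = 0; i < nargs; ++i) stack_args[i] = eval_arg(i);
    return fn(stack_args, nargs);
  }
  // Larger argument lists fall back to a heap-allocated buffer.
  std::vector<TypedValue> heap_args;
  heap_args.reserve(nargs);
  for (int64_t i = 0; i < nargs; ++i) heap_args.push_back(eval_arg(i));
  return fn(heap_args.data(), nargs);
}
```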
Reviewers: mtomic, msantl, mferencevic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1581
Summary:
GraphDbAccessor is now constructed only through GraphDb. This allows the
concrete GraphDb to instantiate a concrete GraphDbAccessor, which lets us
use virtual calls so that the implementation may be kept separate. The
major downside of doing things this way is the heap allocation of
GraphDbAccessor. In case it turns out to be a real performance issue, an
alternative solution, such as a pointer to a static implementation, may be
used.
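A skeletal version of that construction path (hypothetical and heavily simplified):
```
#include <memory>

// Hypothetical, stripped-down version of the pattern described above.
class GraphDbAccessor {
 public:
  virtual ~GraphDbAccessor() = default;
  // ... virtual accessor operations ...
};

class GraphDb {
 public:
  virtual ~GraphDb() = default;
  // Each concrete GraphDb hands out its own accessor implementation; the
  // price is one heap allocation per accessor.
  virtual std::unique_ptr<GraphDbAccessor> Access() = 0;
};

class SingleNodeDb final : public GraphDb {
  class SingleNodeAccessor final : public GraphDbAccessor {
    // ... single-node specific implementation ...
  };

 public:
  std::unique_ptr<GraphDbAccessor> Access() override {
    return std::make_unique<SingleNodeAccessor>();
  }
};
```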
InsertVertexIntoRemote is now a non-member function, which reduces
coupling. It made no sense for it to be a member function because it used
only the public parts of GraphDbAccessor.
Reviewers: msantl, mtomic, mferencevic
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1504
Summary:
The directory was never actually copied on apollo, so the tests weren't
even doing anything...
Also remove the fswatcher unit test; it should be rewritten correctly.
Reviewers: mislav.bradac, mferencevic, buda
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1108
Summary:
GraphDb is refactored to become an API exposing different parts
necessary for the database to function. These different parts can have
different implementations in SingleNode or distributed Master/Server
GraphDb implementations.
Internally, GraphDb is implemented using two class hierarchies. One
contains all the members and the correct wiring for each situation. The
other takes care of initialization and shutdown. This architecture is
practical because it can guarantee that the construction of the object
structure is complete before any state is initialized.
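Roughly, the split described above looks like this (hypothetical names, heavily simplified):
```
// Hierarchy 1: only members and their wiring, no startup logic. Each
// GraphDb flavour (single node, distributed master, ...) has its own
// variant of this structure.
struct SingleNodeMembers {
  // storage, transaction engine, ... wired together for single node
};

// Hierarchy 2: owns the members and performs initialization and shutdown.
// Because `members_` is a fully constructed subobject by the time the
// constructor body runs, state initialization can rely on the complete
// object structure being in place.
class SingleNode {
 public:
  SingleNode() { Start(); }
  ~SingleNode() { Shutdown(); }

 private:
  void Start() { /* initialize runtime state, start background jobs */ }
  void Shutdown() { /* stop background jobs, flush state */ }

  SingleNodeMembers members_;
};
```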
Reviewers: buda, mislav.bradac, dgleich, teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1093
Summary:
The Union query combinator implementation consists of:
* adjustments to the AST and `cypher_main_visitor`
* enabling `QueryStripper` to parse multiple `return` statements (not stopping after the first)
* symbol generation for union results
* union logical operator
* query plan generator adjustments
Reviewers: teon.banek, mislav.bradac
Reviewed By: teon.banek
Subscribers: pullbot, buda
Differential Revision: https://phabricator.memgraph.io/D1038
Summary:
Remove name from GraphDb.
Take GraphDb in query test macros instead of accessor.
Add is_accepting_transactions flag to GraphDb.
Reviewers: mislav.bradac, florijan, mferencevic
Reviewed By: mislav.bradac
Subscribers: mferencevic, pullbot
Differential Revision: https://phabricator.memgraph.io/D940
Summary: This change increases the planning time, but should reduce memory consumption.
Reviewers: florijan, mislav.bradac
Reviewed By: mislav.bradac
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D901
Summary:
- Removed BreadthFirstAtom, using EdgeAtom only with a Type enum.
- Both variable expansions (breadth and depth first) now have mandatory inner node and edge Identifiers.
- Both variable expansions use inline property filtering and support inline lambdas.
- BFS and variable expansion now have the same planning process.
- Planner modified in the following ways:
- Variable expansions support inline property filtering (two filters added to all_filters, one for inline, one for post-expand).
- Asserting against existing_edge since we don't support that anymore.
- Edge and node symbols bound after variable expansion to prevent post-expand filters from getting inlined.
- Some things simplified due to different handling.
- BreadthFirstExpand logical operator merged into ExpandVariable. Two Cursor classes remain, and the appropriate one is chosen dynamically.
As part of the planned planner refactor, we should ensure that a filter is applied only once. The current implementation is very suboptimal for property filtering in variable expansions.
@buda: we will start refactoring this soon. The current planner logic is too dense and complex, and it is becoming technical debt. Most of the time I spent working on this went into figuring out the planning, and I still needed Teon's help at times. Implementing the correct and optimal version of query execution (avoiding multiple potentially expensive filterings) was also out of reach, in part due to tech debt.
Reviewers: buda, teon.banek
Reviewed By: teon.banek
Subscribers: pullbot, buda
Differential Revision: https://phabricator.memgraph.io/D852
Summary:
Benchmark planning and estimating indexed ScanAll. According to the benchmark,
caching speeds up the whole process of planning and estimation by a factor of
2. Most of the performance gain is in the `CostEstimator` itself, due to the
many calls to `VerticesCount` made when estimating all of the generated plans.
Reviewers: mislav.bradac, florijan
Reviewed By: mislav.bradac
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D765