This PR adds support for generating randomized workloads that are executed
against a simulated cluster, as well as against a correctness model. Initially
this just generates ScanAll and CreateVertex requests; everything it creates is
also inserted into a `std::set`, and when we do a ScanAll, it asserts that we
get the same number of results back. This will become much more sophisticated
over time, but it's already hitting pay-dirt.
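A minimal sketch of that idea, with `SimulatedCluster`, `CreateVertex`, and
`ScanAll` as hypothetical stand-ins for the actual simulator API:
```
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <random>
#include <set>

// Hypothetical stand-in for the simulated cluster; the real workload
// generator drives actual shard requests.
struct SimulatedCluster {
  std::set<std::uint64_t> vertices;  // stand-in for real shard state
  void CreateVertex(std::uint64_t id) { vertices.insert(id); }
  std::size_t ScanAll() const { return vertices.size(); }
};

int main() {
  std::mt19937_64 rng{42};  // deterministic seed makes failures reproducible
  SimulatedCluster cluster;
  std::set<std::uint64_t> model;  // the correctness model mirrors the cluster

  for (int i = 0; i < 1000; ++i) {
    if (rng() % 2 == 0) {
      const std::uint64_t id = rng();
      cluster.CreateVertex(id);
      model.insert(id);  // everything created is also recorded in the model
    } else {
      // A ScanAll must return exactly as many results as the model expects.
      assert(cluster.ScanAll() == model.size());
    }
  }
}
```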
The communication between the ShardRequestManager and the RsmClient
used to be direct. This PR changes it to future-based
communication. The RsmClient stores state about the currently
processed future (either a read or a write request) and exposes blocking
and non-blocking functionality to obtain the filled future. The
ShardRequestManager, for now, will send off the set of requests present
in the ExecutionState and block on each of them until the requests are
completed or the set of paginated responses (caused by, for example, the
batch limit in ScanAll) is ready for the next round.
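A rough sketch of that pattern using `std::future`; the real RsmClient uses
the project's own future type and request/response messages:
```
#include <chrono>
#include <future>
#include <optional>
#include <string>
#include <utility>

class RsmClientSketch {
  std::optional<std::future<std::string>> current_;  // in-flight request

 public:
  void SendReadRequest(std::string request) {
    // Fire the request off asynchronously and remember the future.
    current_ = std::async(std::launch::async, [r = std::move(request)] {
      return "response to " + r;
    });
  }

  // Blocking: wait until the future is filled.
  std::string AwaitResponse() {
    std::string result = current_->get();
    current_.reset();
    return result;
  }

  // Non-blocking: hand the response back only if it is already available.
  std::optional<std::string> PollResponse() {
    using namespace std::chrono_literals;
    if (current_->wait_for(0s) != std::future_status::ready) {
      return std::nullopt;
    }
    std::string result = current_->get();
    current_.reset();
    return result;
  }
};
```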
* Use query-v2 in the main executable
* Set up machine manager in memgraph
* Add `ShardRequestManager` to `Interpreter`
* Make vertex creation work
* Make scan all work
* Add edge type map in shard request manager
* Send schema over request
* Empty out DbAccessor
* Store shard mapping at creation
* Remove failing CI steps
Co-authored-by: János Benjamin Antal <benjamin.antal@memgraph.io>
Create shard-side handlers for basic messages
Implement the handlers for CreateVertices, CreateEdges and ScanAll. Use
or modify the defined messages to interact with individual Shards and
test their behavior. Each Shard is currently owned by a ShardRsm
instance. The two top-level dispatching functions, Read() and Apply(),
are responsible for read and write operations, respectively. Currently
there are a handful of messages that are defined but not yet utilized;
these will be used in the near future, as will a couple of handler
functions that still have empty implementations.
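A simplified sketch of this dispatching structure; the variant-based message
types and handler bodies here are illustrative, not the actual definitions:
```
#include <variant>

struct ScanAllRequest {};
struct CreateVerticesRequest {};
struct CreateEdgesRequest {};

using ReadRequest = std::variant<ScanAllRequest>;
using WriteRequest = std::variant<CreateVerticesRequest, CreateEdgesRequest>;

class ShardRsmSketch {
 public:
  // Read() dispatches read-only messages to their handlers.
  void Read(const ReadRequest &request) {
    std::visit([this](const auto &r) { Handle(r); }, request);
  }

  // Apply() dispatches state-modifying messages to their handlers.
  void Apply(const WriteRequest &request) {
    std::visit([this](const auto &r) { Handle(r); }, request);
  }

 private:
  void Handle(const ScanAllRequest &) { /* read vertices from the shard */ }
  void Handle(const CreateVerticesRequest &) { /* insert vertices */ }
  void Handle(const CreateEdgesRequest &) { /* insert edges */ }
};
```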
* Create LocalTransport Io provider for sending messages to components on the same machine (see the sketch after this list)
* Move src/io/simulation/message_conversion.hpp to src/io/message_conversion.hpp for use in other Io providers
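A minimal illustration of the LocalTransport idea: delivery between
components on the same machine can be a mutex-guarded in-memory queue rather
than a socket. The class below is a sketch, not the actual Io provider
interface:
```
#include <deque>
#include <mutex>
#include <optional>
#include <string>
#include <utility>

class LocalTransportSketch {
  std::mutex mu_;
  std::deque<std::string> inbox_;  // messages never leave process memory

 public:
  void Send(std::string message) {
    const std::lock_guard<std::mutex> lock(mu_);
    inbox_.push_back(std::move(message));
  }

  std::optional<std::string> Receive() {
    const std::lock_guard<std::mutex> lock(mu_);
    if (inbox_.empty()) return std::nullopt;
    std::string msg = std::move(inbox_.front());
    inbox_.pop_front();
    return msg;
  }
};
```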
Summary:
There will be a lot of leftover files; execute the following commands inside
`src/` to remove them:
```
git clean -xf
rm -r rpc/ storage/single_node_ha/rpc/
```
Reviewers: teon.banek
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D2011
Summary:
Since we need to send `StateDelta`s over the wire in HA, we need to be
able to serialize those bad boys.
This diff hopefully does this the right way.
Reviewers: teon.banek, mferencevic, ipaljak
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1725
Summary:
This should allow us to more easily decouple the code which should be
open sourced. Unfortunately, the downside of this approach is that we
cannot rely on virtual calls to dispatch the serialization to the correct
type. Another downside is that members need to be publicly accessible
for serialization.
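A small illustration of the trade-off, with invented types: without virtual
dispatch, the serialization overload is chosen statically from the concrete
type, and it can only reach publicly accessible members:
```
#include <ostream>
#include <sstream>

struct Edge {
  int from;  // must be public: the free function below cannot see privates
  int to;
};

// No Serializable base class and no virtual Save(); the overload is picked
// statically from the concrete type at the call site.
void Save(const Edge &edge, std::ostream &out) {
  out << edge.from << ' ' << edge.to;
}

int main() {
  std::ostringstream out;
  Save(Edge{1, 2}, out);  // dispatch resolved at compile time
}
```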
Reviewers: mtomic, msantl
Reviewed By: mtomic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1596
Summary:
This diff changes the RPC layer to directly return `TResponse` to the user when
issuing a `Call<...>` RPC call. The call now throws an exception on failure
(instead of returning `nullopt` as before).
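Sketched below is what the new calling convention looks like from the
caller's side; `EchoReq`/`EchoRes` and the client type are invented for
illustration, not the real RPC API:
```
#include <stdexcept>
#include <string>

struct EchoReq { std::string message; };
struct EchoRes { std::string message; };

struct RpcClientSketch {
  // Call<...> in the real API is a template over the RPC type; this sketch
  // keeps one hardcoded request/response pair.
  EchoRes Call(const EchoReq &req) {
    const bool communication_failed = false;  // stand-in for a network error
    if (communication_failed) throw std::runtime_error("RPC call failed");
    return EchoRes{req.message};  // response is returned directly
  }
};

int main() {
  RpcClientSketch client;
  try {
    const EchoRes res = client.Call(EchoReq{"ping"});
    static_cast<void>(res.message);  // use the response directly, no optional
  } catch (const std::runtime_error &) {
    // a failure now surfaces as an exception instead of std::nullopt
  }
}
```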
All servers (network, RPC and distributed) are set to have explicit `Shutdown`
methods so that a controlled shutdown can always be performed. The object
destructors now have `CHECK`s to enforce that the `AwaitShutdown` methods were
called.
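A compact sketch of that shutdown contract, using `assert` in place of
`CHECK` and invented names:
```
#include <atomic>
#include <cassert>

class ServerSketch {
  std::atomic<bool> alive_{true};
  bool awaited_{false};

 public:
  void Shutdown() { alive_ = false; }  // request a controlled stop
  void AwaitShutdown() {
    // join worker threads here; afterwards destruction is safe
    awaited_ = true;
  }
  ~ServerSketch() { assert(awaited_ && "AwaitShutdown() was not called"); }
};

int main() {
  ServerSketch server;
  server.Shutdown();       // initiate the graceful shutdown
  server.AwaitShutdown();  // block until done; this satisfies the check
}
```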
The distributed memgraph is changed so that none of the binaries
(master/workers) crash when there is a communication failure. Instead, the
whole cluster starts a graceful shutdown when a persistent communication
error is detected.
Transient errors are allowed during execution. The transaction that errored out
will be aborted on the whole cluster. The cluster state is managed using a new
Heartbeat RPC call.
Reviewers: buda, teon.banek, msantl
Reviewed By: teon.banek
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1604
Summary:
These functions were defined in multiple places. They have been moved to
cmake/functions.cmake so that there is a single source of truth.
Reviewers: mferencevic, msantl, mculinovic
Reviewed By: mferencevic
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1578
Summary:
This diff implements OpenSSL support in the network stack.
Currently, SSL support is only enabled for Bolt connections;
support for RPC connections will be added in another diff.
Reviewers: buda, teon.banek
Reviewed By: buda
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1328
Summary:
Converts the RPC stack to use Cap'n Proto for serialization instead of
boost. There are still some traces of boost in other places in the code,
but most of it is removed. A future diff should clean up boost for good.
The RPC API is now changed to be more flexible with regard to how data is
serialized. This makes the simplest cases a bit more verbose, but
allows complex serialization code to be written correctly instead of
relying on hacks. (For reference, look for the old serialization of
`PullRpc`, which had nasty pointer hacks to inject accessors into
`TypedValue`.)
Since RPC messages were uselessly modeled via inheritance from a Message
base class, that class is now removed. Furthermore, that approach
doesn't really work with Cap'n Proto. Instead, each message type is
required to have some type information. This can be automated, so
`define-rpc` has been added to LCP, which hopefully simplifies defining
new RPC request and response messages.
Specify Cap'n Proto schema ID in cmake
This preserves Cap'n Proto generated typeIds across multiple generations
of capnp schemas through LCP. It is imperative that the typeId stays the
same to ensure that different compilations of Memgraph can communicate
via RPC in a distributed cluster.
Use CLOS for meta information on C++ types in LCP
Since some structure slots and functions have started to repeat
themselves, it makes sense to model C++ meta information via the Common
Lisp Object System.
Depends on D1391
Reviewers: buda, dgleich, mferencevic, mtomic, mculinovic, msantl
Reviewed By: msantl
Subscribers: pullbot
Differential Revision: https://phabricator.memgraph.io/D1407