Fix distributed plan caching and dispatching race condition

Summary:
The distributed plan is now dispatched to workers before being inserted
into the cache. This prevents a race condition in which another thread
could take the cached plan, assume it had already been dispatched, and
then crash during execution.

The fix introduces the possibility that the same plan may be distributed
and dispatched twice (with different plan IDs). This shouldn't cause
issues during execution, because only one of the plans will be cached and
therefore executed. Obviously, we still need to invalidate the unused
plan from the workers' caches.

Reviewers: florijan, msantl

Reviewed By: florijan

Subscribers: pullbot

Differential Revision: https://phabricator.memgraph.io/D1178
This commit is contained in:
Teon Banek 2018-02-07 11:02:03 +01:00
parent 7e7f7ce580
commit d05d73f13a


@@ -89,10 +89,6 @@ Interpreter::Results Interpreter::operator()(
         ctx.symbol_table_, std::move(ast_storage));
   }
-  if (FLAGS_query_plan_cache) {
-    plan = plan_cache_.access().insert(stripped.hash(), plan).first->second;
-  }
   // Dispatch plans to workers (if we have any for them).
   if (plan->distributed_plan().worker_plan) {
     auto &dispatcher = db_accessor.db().plan_dispatcher();
@@ -100,6 +96,12 @@ Interpreter::Results Interpreter::operator()(
         plan->distributed_plan().worker_plan,
         plan->symbol_table());
   }
+  if (FLAGS_query_plan_cache) {
+    // TODO: If the same plan was already cached, invalidate the dispatched
+    // plan (above) from workers.
+    plan = plan_cache_.access().insert(stripped.hash(), plan).first->second;
+  }
 }
 auto *logical_plan = &plan->plan();