memgraph/tests/feature_benchmark/kafka/generate.py
Matija Santl 57b84f2da3 Add kafka benchmark
Summary:
In order to add the Kafka benchmark, `memgraph_bolt.cpp` has been split.
Now we have `memgraph_init.cpp/hpp` files with the common Memgraph startup code.
The Kafka benchmark implements a new `main` function that doesn't start a Bolt
server; it just creates and starts a stream. It then waits for the stream to
start consuming and measures the time it took to import the given number of
entries.
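
As a rough, non-authoritative sketch of that measurement loop (the real
benchmark `main` is C++ and not shown here), assuming a hypothetical
`stream_entry_count()` callable that reports how many entries the stream has
imported so far:

import time


def measure_import(stream_entry_count, expected_entries, poll_interval=0.1):
    # Block until the stream reports the expected number of imported entries,
    # then return the elapsed wall-clock time in seconds.
    start = time.time()
    while stream_entry_count() < expected_entries:
        time.sleep(poll_interval)
    return time.time() - start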

This benchmark lives in a new folder, `feature_benchmark`, and so should any new
benchmark that measures the performance of Memgraph's features.

Reviewers: mferencevic, teon.banek, ipaljak, vkasljevic

Reviewed By: mferencevic, teon.banek

Subscribers: pullbot

Differential Revision: https://phabricator.memgraph.io/D1552
2018-08-29 16:35:31 +02:00

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
'''
Kafka benchmark dataset generator.
'''

import random
import sys
from argparse import ArgumentParser


def get_edge():
    """Return a random (from, to) node pair that is not a self-loop.

    Reads the module-level `args` for the node count.
    """
    from_node = random.randint(0, args.nodes)
    to_node = random.randint(0, args.nodes)
    while from_node == to_node:
        to_node = random.randint(0, args.nodes)
    return (from_node, to_node)


def parse_args():
    argp = ArgumentParser(description=__doc__)
    argp.add_argument("--nodes", type=int, default=100,
                      help="Number of nodes.")
    argp.add_argument("--edges", type=int, default=30,
                      help="Number of edges.")
    return argp.parse_args()


args = parse_args()
edges = set()

# Emit one line per node id.
for i in range(args.nodes):
    print("%d\n" % i)

# Emit unique edges, re-sampling whenever a duplicate pair is drawn.
for i in range(args.edges):
    edge = get_edge()
    while edge in edges:
        edge = get_edge()
    edges.add(edge)
    print("%d %d\n" % edge)

sys.exit(0)
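
The generator only writes the dataset to stdout; getting it into Kafka is
outside the scope of this file. As one possible way to publish the output to a
topic for the benchmark, here is a minimal sketch using the third-party
kafka-python package, where the broker address, topic name, and script name
are assumptions rather than part of the benchmark setup:

#!/usr/bin/env python3
# Hypothetical companion script: read the generator's output from stdin and
# publish each non-empty line as one Kafka message.
import sys

from kafka import KafkaProducer  # third-party: pip install kafka-python

producer = KafkaProducer(bootstrap_servers="localhost:9092")  # assumed broker
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue  # skip the blank lines produced by the extra "\n" in print()
    producer.send("test", line.encode("utf-8"))  # assumed topic name
producer.flush()

The two scripts could then be chained, for example
`./generate.py --nodes 1000 --edges 5000 | ./produce.py`.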