translated by LuuMing
This commit is contained in: parent 981f1e7155 · commit 0365015f62
@@ -1,438 +0,0 @@
|
||||
Writing a Time Series Database from Scratch
|
||||
============================================================
|
||||
|
||||
|
||||
I work on monitoring. In particular on [Prometheus][2], a monitoring system that includes a custom time series database, and its integration with [Kubernetes][3].
|
||||
|
||||
In many ways Kubernetes represents all the things Prometheus was designed for. It makes continuous deployments, auto scaling, and other features of highly dynamic environments easily accessible. The query language and operational model, among many other conceptual decisions, make Prometheus particularly well-suited for such environments. Yet, if monitored workloads become significantly more dynamic, this also puts new strains on the monitoring system itself. With this in mind, rather than doubling back on problems Prometheus already solves well, we specifically aim to increase its performance in environments with highly dynamic, or transient services.
|
||||
|
||||
Prometheus's storage layer has historically shown outstanding performance, where a single server is able to ingest up to one million samples per second as several million time series, all while occupying a surprisingly small amount of disk space. While the current storage has served us well, I propose a newly designed storage subsystem that corrects for shortcomings of the existing solution and is equipped to handle the next order of scale.
|
||||
|
||||
> Note: I've no background in databases. What I say might be wrong and mislead. You can channel your criticism towards me (fabxc) in #prometheus on Freenode.
|
||||
|
||||
### Problems, Problems, Problem Space
|
||||
|
||||
First, a quick outline of what we are trying to accomplish and what key problems it raises. For each, we take a look at Prometheus' current approach, what it does well, and which problems we aim to address with the new design.
|
||||
|
||||
### Time series data
|
||||
|
||||
We have a system that collects data points over time.
|
||||
|
||||
```
|
||||
identifier -> (t0, v0), (t1, v1), (t2, v2), (t3, v3), ....
|
||||
```
|
||||
|
||||
Each data point is a tuple of a timestamp and a value. For the purpose of monitoring, the timestamp is an integer and the value any number. A 64 bit float turns out to be a good representation for counter as well as gauge values, so we go with that. A sequence of data points with strictly monotonically increasing timestamps is a series, which is addressed by an identifier. Our identifier is a metric name with a dictionary of _label dimensions_ . Label dimensions partition the measurement space of a single metric. Each metric name plus a unique set of labels is its own _time series_ that has a value stream associated with it.
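To make the model concrete, here is a minimal sketch in Go of the data model described above. The type and field names are illustrative only, not the actual Prometheus types:

```
package main

import "fmt"

// Labels identify a series; the metric name is just the reserved label __name__.
type Labels map[string]string

// Sample is a single (timestamp, value) data point.
type Sample struct {
	Timestamp int64   // e.g. milliseconds since the Unix epoch
	Value     float64 // counters and gauges both fit a 64 bit float
}

// Series is an identifier plus its value stream, ordered by timestamp.
type Series struct {
	Labels  Labels
	Samples []Sample
}

func main() {
	s := Series{
		Labels: Labels{"__name__": "requests_total", "method": "GET", "instance": "10.0.0.1:80"},
		Samples: []Sample{
			{Timestamp: 1000, Value: 1},
			{Timestamp: 2000, Value: 3},
		},
	}
	fmt.Println(s.Labels["__name__"], len(s.Samples))
}
```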
|
||||
|
||||
This is a typical set of series identifiers that are part of metric counting requests:
|
||||
|
||||
```
|
||||
requests_total{path="/status", method="GET", instance="10.0.0.1:80"}
requests_total{path="/status", method="POST", instance="10.0.0.3:80"}
requests_total{path="/", method="GET", instance="10.0.0.2:80"}
|
||||
```
|
||||
|
||||
Let's simplify this representation right away: A metric name can be treated as just another label dimension — `__name__` in our case. At the query level, it might be treated specially but that doesn't concern our way of storing it, as we will see later.
|
||||
|
||||
```
|
||||
{__name__="requests_total", path="/status", method="GET", instance=”10.0.0.1:80”}
|
||||
{__name__="requests_total", path="/status", method="POST", instance=”10.0.0.3:80”}
|
||||
{__name__="requests_total", path="/", method="GET", instance=”10.0.0.2:80”}
|
||||
```
|
||||
|
||||
When querying time series data, we want to do so by selecting series by their labels. In the simplest case `{__name__="requests_total"}` selects all series belonging to the `requests_total` metric. For all selected series, we retrieve data points within a specified time window.
|
||||
In more complex queries, we may wish to select series satisfying several label selectors at once and also represent more complex conditions than equality. For example, negative (`method!="GET"`) or regular expression matching (`method=~"PUT|POST"`).
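The following is a small, hypothetical sketch in Go of how such label selectors could be evaluated against a series' label set. It is not the actual Prometheus matcher API, just an illustration of equality, negation, and anchored regular expression matching:

```
package main

import (
	"fmt"
	"regexp"
)

// Matcher is an illustrative label selector: name, value, and two flags.
type Matcher struct {
	Name    string
	Value   string
	Negate  bool // e.g. method!="GET"
	IsRegex bool // e.g. method=~"PUT|POST"
}

// Matches reports whether the given label set satisfies the matcher.
func (m Matcher) Matches(labels map[string]string) bool {
	v := labels[m.Name]
	var ok bool
	if m.IsRegex {
		ok, _ = regexp.MatchString("^(?:"+m.Value+")$", v) // anchored match
	} else {
		ok = v == m.Value
	}
	if m.Negate {
		return !ok
	}
	return ok
}

func main() {
	lbls := map[string]string{"__name__": "requests_total", "method": "POST"}
	fmt.Println(Matcher{Name: "method", Value: "GET", Negate: true}.Matches(lbls))       // true
	fmt.Println(Matcher{Name: "method", Value: "PUT|POST", IsRegex: true}.Matches(lbls)) // true
}
```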
|
||||
|
||||
This largely defines the stored data and how it is recalled.
|
||||
|
||||
### Vertical and Horizontal
|
||||
|
||||
In a simplified view, all data points can be laid out on a two-dimensional plane. The _horizontal_ dimension represents the time and the series identifier space spreads across the _vertical_ dimension.
|
||||
|
||||
```
|
||||
series
|
||||
^
|
||||
│ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="GET"}
|
||||
│ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="POST"}
|
||||
│ . . . . . . .
|
||||
│ . . . . . . . . . . . . . . . . . . . ...
|
||||
│ . . . . . . . . . . . . . . . . . . . . .
|
||||
│ . . . . . . . . . . . . . . . . . . . . . {__name__="errors_total", method="POST"}
|
||||
│ . . . . . . . . . . . . . . . . . {__name__="errors_total", method="GET"}
|
||||
│ . . . . . . . . . . . . . .
|
||||
│ . . . . . . . . . . . . . . . . . . . ...
|
||||
│ . . . . . . . . . . . . . . . . . . . .
|
||||
v
|
||||
<-------------------- time --------------------->
|
||||
```
|
||||
|
||||
Prometheus retrieves data points by periodically scraping the current values for a set of time series. The entity from which we retrieve such a batch is called a _target_ . Thereby, the write pattern is completely vertical and highly concurrent as samples from each target are ingested independently.
|
||||
To provide some measurement of scale: A single Prometheus instance collects data points from tens of thousands of _targets_ , which expose hundreds to thousands of different time series each.
|
||||
|
||||
At the scale of collecting millions of data points per second, batching writes is a non-negotiable performance requirement. Writing single data points scattered across our disk would be painfully slow. Thus, we want to write larger chunks of data in sequence.
|
||||
This is an unsurprising fact for spinning disks, as their head would have to physically move to different sections all the time. While SSDs are known for fast random writes, they actually can't modify individual bytes but only write in _pages_ of 4KiB or more. This means writing a 16 byte sample is equivalent to writing a full 4KiB page. This behavior is part of what is known as [ _write amplification_ ][4], which as a bonus causes your SSD to wear out – so it wouldn't just be slow, but literally destroy your hardware within a few days or weeks.
|
||||
For more in-depth information on the problem, the ["Coding for SSDs" series][5] of blog posts is an excellent resource. Let's just consider the main takeaway: sequential and batched writes are the ideal write pattern for spinning disks and SSDs alike. A simple rule to stick to.
|
||||
|
||||
The querying pattern is significantly more differentiated than the write pattern. We can query a single datapoint for a single series, a single datapoint for 10000 series, weeks of data points for a single series, weeks of data points for 10000 series, etc. So on our two-dimensional plane, queries are neither fully vertical nor horizontal, but a rectangular combination of the two.
|
||||
[Recording rules][6] mitigate the problem for known queries but are not a general solution for ad-hoc queries, which still have to perform reasonably well.
|
||||
|
||||
We know that we want to write in batches, but the only batches we get are vertical sets of data points across series. When querying data points for a series over a time window, not only would it be hard to figure out where the individual points can be found, we'd also have to read from a lot of random places on disk. With possibly millions of touched samples per query, this is slow even on the fastest SSDs. Reads will also retrieve more data from our disk than the requested 16 byte sample. SSDs will load a full page, HDDs will at least read an entire sector. Either way, we are wasting precious read throughput.
|
||||
So ideally, samples for the same series would be stored sequentially so we can just scan through them with as few reads as possible. On top of that, we only need to know where this sequence starts to access all data points.
|
||||
|
||||
There's obviously a strong tension between the ideal pattern for writing collected data to disk and the layout that would be significantly more efficient for serving queries. It is _the_ fundamental problem our TSDB has to solve.
|
||||
|
||||
#### Current solution
|
||||
|
||||
Time to take a look at how Prometheus's current storage, let's call it "V2", addresses this problem.
|
||||
We create one file per time series that contains all of its samples in sequential order. As appending single samples to all those files every few seconds is expensive, we batch up 1KiB chunks of samples for a series in memory and append those chunks to the individual files, once they are full. This approach solves a large part of the problem. Writes are now batched, samples are stored sequentially. It also enables incredibly efficient compression formats, based on the property that a given sample changes only very little with respect to the previous sample in the same series. Facebook's paper on their Gorilla TSDB describes a similar chunk-based approach and [introduces a compression format][7] that reduces 16 byte samples to an average of 1.37 bytes. The V2 storage uses various compression formats including a variation of Gorilla’s.
|
||||
|
||||
```
|
||||
┌──────────┬─────────┬─────────┬─────────┬─────────┐ series A
|
||||
└──────────┴─────────┴─────────┴─────────┴─────────┘
|
||||
┌──────────┬─────────┬─────────┬─────────┬─────────┐ series B
|
||||
└──────────┴─────────┴─────────┴─────────┴─────────┘
|
||||
. . .
|
||||
┌──────────┬─────────┬─────────┬─────────┬─────────┬─────────┐ series XYZ
|
||||
└──────────┴─────────┴─────────┴─────────┴─────────┴─────────┘
|
||||
chunk 1 chunk 2 chunk 3 ...
|
||||
```
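A rough sketch, in Go, of the batching idea described above: samples for one series accumulate in an in-memory buffer and are appended to that series' file once roughly 1KiB has been collected. For simplicity it stores raw 16 byte samples; the real V2 storage compresses chunks and its code looks nothing like this.

```
package main

import (
	"encoding/binary"
	"math"
	"os"
)

const chunkSize = 1024 // the 1KiB chunk size used by V2

type seriesWriter struct {
	file *os.File
	buf  []byte
}

// add buffers a raw 16 byte sample and appends the buffer to the series file
// once a full chunk has accumulated.
func (w *seriesWriter) add(ts int64, v float64) error {
	var sample [16]byte
	binary.LittleEndian.PutUint64(sample[0:8], uint64(ts))
	binary.LittleEndian.PutUint64(sample[8:16], math.Float64bits(v))
	w.buf = append(w.buf, sample[:]...)

	if len(w.buf) >= chunkSize {
		if _, err := w.file.Write(w.buf); err != nil { // one sequential append per chunk
			return err
		}
		w.buf = w.buf[:0]
	}
	return nil
}

func main() {
	f, err := os.OpenFile("series_A.chunks", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	w := &seriesWriter{file: f}
	for i := 0; i < 200; i++ {
		if err := w.add(int64(i)*1000, float64(i)); err != nil {
			panic(err)
		}
	}
}
```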
|
||||
|
||||
While the chunk-based approach is great, keeping a separate file for each series is troubling the V2 storage for various reasons:
|
||||
|
||||
* We actually need a lot more files than the number of time series we are currently collecting data for. More on that in the section on "Series Churn". With several million files, sooner or later we may run out of [inodes][1] on our filesystem. This is a condition we can only recover from by reformatting our disks, which is as invasive and disruptive as it could be. We generally want to avoid formatting disks specifically to fit a single application.
|
||||
* Even when chunked, several thousand chunks per second are completed and ready to be persisted. This still requires thousands of individual disk writes every second. While this is alleviated by also batching up several completed chunks for a series, it in return increases the total memory footprint of data that is waiting to be persisted.
|
||||
* It's infeasible to keep all files open for reads and writes. In particular because ~99% of data is never queried again after 24 hours. If it is queried though, we have to open up to thousands of files, find and read relevant data points into memory, and close them again. As this would result in high query latencies, data chunks are cached rather aggressively, leading to problems outlined further in the section on "Resource Consumption".
|
||||
* Eventually, old data has to be deleted and data needs to be removed from the front of millions of files. This means that deletions are actually write-intensive operations. Additionally, cycling through millions of files and analyzing them makes this a process that often takes hours. By the time it completes, it might have to start over again. Oh yeah, and deleting the old files will cause further write amplification for your SSD!
|
||||
* Chunks that are currently accumulating are only held in memory. If the application crashes, data will be lost. To avoid this, the memory state is periodically checkpointed to disk, which may take significantly longer than the window of data loss we are willing to accept. Restoring the checkpoint may also take several minutes, causing painfully long restart cycles.
|
||||
|
||||
The key takeaway from the existing design is the concept of chunks, which we most certainly want to keep. The most recent chunks always being held in memory is also generally good. After all, the most recent data is queried the most by a large margin.
|
||||
Having one file per time series is a concept we would like to find an alternative to.
|
||||
|
||||
### Series Churn
|
||||
|
||||
In the Prometheus context, we use the term _series churn_ to describe that a set of time series becomes inactive, i.e. receives no more data points, and a new set of active series appears instead.
|
||||
For example, all series exposed by a given microservice instance have a respective “instance” label attached that identifies its origin. If we perform a rolling update of our microservice and swap out every instance with a newer version, series churn occurs. In more dynamic environments those events may happen on an hourly basis. Cluster orchestration systems like Kubernetes allow continuous auto-scaling and frequent rolling updates of applications, potentially creating tens of thousands of new application instances, and with them completely new sets of time series, every day.
|
||||
|
||||
```
|
||||
series
|
||||
^
|
||||
│ . . . . . .
|
||||
│ . . . . . .
|
||||
│ . . . . . .
|
||||
│ . . . . . . .
|
||||
│ . . . . . . .
|
||||
│ . . . . . . .
|
||||
│ . . . . . .
|
||||
│ . . . . . .
|
||||
│ . . . . .
|
||||
│ . . . . .
|
||||
│ . . . . .
|
||||
v
|
||||
<-------------------- time --------------------->
|
||||
```
|
||||
|
||||
So even if the entire infrastructure roughly remains constant in size, over time there's a linear growth of time series in our database. While a Prometheus server will happily collect data for 10 million time series, query performance is significantly impacted if data has to be found among a billion series.
|
||||
|
||||
#### Current solution
|
||||
|
||||
The current V2 storage of Prometheus has an index based on LevelDB for all series that are currently stored. It allows querying series containing a given label pair, but lacks a scalable way to combine results from different label selections.
|
||||
For example, selecting all series with label `__name__="requests_total"` works efficiently, but selecting all series with `instance="A" AND __name__="requests_total"` has scalability problems. We will later revisit what causes this and which tweaks are necessary to improve lookup latencies.
|
||||
|
||||
This problem is in fact what spawned the initial hunt for a better storage system. Prometheus needed an improved indexing approach for quickly searching hundreds of millions of time series.
|
||||
|
||||
### Resource consumption
|
||||
|
||||
Resource consumption is one of the consistent topics when trying to scale Prometheus (or anything, really). But it's not actually the absolute resource hunger that is troubling users. In fact, Prometheus manages an incredible throughput given its requirements. The problem is rather its relative unpredictability and instability in the face of changes. By its architecture the V2 storage slowly builds up chunks of sample data, which causes the memory consumption to ramp up over time. As chunks get completed, they are written to disk and can be evicted from memory. Eventually, Prometheus's memory usage reaches a steady state. That is until the monitored environment changes — _series churn_ increases the usage of memory, CPU, and disk IO every time we scale an application or do a rolling update.
|
||||
If the change is ongoing, it will yet again reach a steady state eventually but it will be significantly higher than in a more static environment. Transition periods are often multiple hours long and it is hard to determine what the maximum resource usage will be.
|
||||
|
||||
The approach of having a single file per time series also makes it way too easy for a single query to knock out the Prometheus process. When querying data that is not cached in memory, the files for queried series are opened and the chunks containing relevant data points are read into memory. If the amount of data exceeds the memory available, Prometheus quits rather ungracefully by getting OOM-killed.
|
||||
After the query is completed the loaded data can be released again but it is generally cached much longer to serve subsequent queries on the same data faster. The latter is a good thing obviously.
|
||||
|
||||
Lastly, we looked at write amplification in the context of SSDs and how Prometheus addresses it by batching up writes to mitigate it. Nonetheless, in several places it still causes write amplification by having too small batches and not aligning data precisely on page boundaries. For larger Prometheus servers, a reduced hardware lifetime was observed in the real world. Chances are that this is still rather normal for database applications with high write throughput, but we should keep an eye on whether we can mitigate it.
|
||||
|
||||
### Starting Over
|
||||
|
||||
By now we have a good idea of our problem domain, how the V2 storage solves it, and where its design has issues. We also saw some great concepts that we want to adapt more or less seamlessly. A fair amount of V2's problems can be addressed with improvements and partial redesigns, but to keep things fun (and after carefully evaluating my options, of course), I decided to take a stab at writing an entire time series database — from scratch, i.e. writing bytes to the file system.
|
||||
|
||||
The critical concerns of performance and resource usage are a direct consequence of the chosen storage format. We have to find the right set of algorithms and disk layout for our data to implement a well-performing storage layer.
|
||||
|
||||
This is where I take the shortcut and drive straight to the solution — skip the headache, failed ideas, endless sketching, tears, and despair.
|
||||
|
||||
### V3 — Macro Design
|
||||
|
||||
What's the macro layout of our storage? In short, everything that is revealed when running `tree` on our data directory. Just looking at that gives us a surprisingly good picture of what is going on.
|
||||
|
||||
```
|
||||
$ tree ./data
|
||||
./data
|
||||
├── b-000001
|
||||
│ ├── chunks
|
||||
│ │ ├── 000001
|
||||
│ │ ├── 000002
|
||||
│ │ └── 000003
|
||||
│ ├── index
|
||||
│ └── meta.json
|
||||
├── b-000004
|
||||
│ ├── chunks
|
||||
│ │ └── 000001
|
||||
│ ├── index
|
||||
│ └── meta.json
|
||||
├── b-000005
|
||||
│ ├── chunks
|
||||
│ │ └── 000001
|
||||
│ ├── index
|
||||
│ └── meta.json
|
||||
└── b-000006
|
||||
├── meta.json
|
||||
└── wal
|
||||
├── 000001
|
||||
├── 000002
|
||||
└── 000003
|
||||
```
|
||||
|
||||
At the top level, we have a sequence of numbered blocks, prefixed with `b-`. Each block obviously holds a file containing an index and a "chunks" directory holding more numbered files. The "chunks" directory contains nothing but raw chunks of data points for various series. Just as for V2, this makes reading series data over a time window very cheap and allows us to apply the same efficient compression algorithms. The concept has proven to work well and we stick with it. Obviously, there is no longer a single file per series but instead a handful of files hold chunks for many of them.
|
||||
The existence of an “index” file should not be surprising. Let's just assume it contains a lot of black magic allowing us to find labels, their possible values, entire time series and the chunks holding their data points.
|
||||
|
||||
But why are there several directories containing the layout of index and chunk files? And why does the last one contain a "wal" directory instead? Understanding those two questions solves about 90% of our problems.
|
||||
|
||||
#### Many Little Databases
|
||||
|
||||
We partition our _horizontal_ dimension, i.e. the time space, into non-overlapping blocks. Each block acts as a fully independent database containing all time series data for its time window. Hence, it has its own index and set of chunk files.
|
||||
|
||||
```
|
||||
|
||||
t0 t1 t2 t3 now
|
||||
┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
|
||||
│ │ │ │ │ │ │ │ ┌────────────┐
|
||||
│ │ │ │ │ │ │ mutable │ <─── write ──── ┤ Prometheus │
|
||||
│ │ │ │ │ │ │ │ └────────────┘
|
||||
└───────────┘ └───────────┘ └───────────┘ └───────────┘ ^
|
||||
└──────────────┴───────┬──────┴──────────────┘ │
|
||||
│ query
|
||||
│ │
|
||||
merge ─────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
Every block of data is immutable. Of course, we must be able to add new series and samples to the most recent block as we collect new data. For this block, all new data is written to an in-memory database that provides the same lookup properties as our persistent blocks. The in-memory data structures can be updated efficiently. To prevent data loss, all incoming data is also written to a temporary _write ahead log_ , which is the set of files in our “wal” directory, from which we can re-populate the in-memory database on restart.
|
||||
All these files come with their own serialization format, including all the things one would expect: lots of flags, offsets, varints, and CRC32 checksums. Good fun to come up with, rather boring to read about.
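As a rough illustration of the write path, here is a minimal Go sketch of the write-ahead-log idea: a record is appended and synced to a log file before the in-memory database is updated, so the memory state can be rebuilt after a crash. The record layout and sync strategy here are made up for the example; the real format is the flag/offset/varint/CRC32 business mentioned above.

```
package main

import (
	"encoding/binary"
	"math"
	"os"
)

type wal struct{ f *os.File }

// logSample appends one fixed-size record (series ID, timestamp, value) and
// syncs it; real implementations batch records and sync far less aggressively.
func (w *wal) logSample(seriesID uint64, ts int64, v float64) error {
	var rec [24]byte
	binary.LittleEndian.PutUint64(rec[0:8], seriesID)
	binary.LittleEndian.PutUint64(rec[8:16], uint64(ts))
	binary.LittleEndian.PutUint64(rec[16:24], math.Float64bits(v))
	if _, err := w.f.Write(rec[:]); err != nil {
		return err
	}
	return w.f.Sync()
}

func main() {
	f, err := os.OpenFile("wal-000001", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	w := &wal{f: f}
	if err := w.logSample(42, 1500000000000, 0.99); err != nil {
		panic(err)
	}
	// Only once the record is durable is the in-memory head block updated.
}
```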
|
||||
|
||||
This layout allows us to fan out queries to all blocks relevant to the queried time range. The partial results from each block are merged back together to form the overall result.
|
||||
|
||||
This horizontal partitioning adds a few great capabilities:
|
||||
|
||||
* When querying a time range, we can easily ignore all data blocks outside of this range. It trivially addresses the problem of _series churn_ by reducing the set of inspected data to begin with.
|
||||
* When completing a block, we can persist the data from our in-memory database by sequentially writing just a handful of larger files. We avoid any write-amplification and serve SSDs and HDDs equally well.
|
||||
* We keep the good property of V2 that recent chunks, which are queried most, are always hot in memory.
|
||||
* Nicely enough, we are also no longer bound to the fixed 1KiB chunk size to better align data on disk. We can pick any size that makes the most sense for the individual data points and chosen compression format.
|
||||
* Deleting old data becomes extremely cheap and instantaneous. We merely have to delete a single directory. Remember, in the old storage we had to analyze and re-write up to hundreds of millions of files, which could take hours to converge.
|
||||
|
||||
Each block also contains a `meta.json` file. It simply holds human-readable information about the block to easily understand the state of our storage and the data it contains.
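For illustration, here is a hypothetical sketch of the kind of fields such a `meta.json` could carry, expressed as a Go struct; the exact field names and contents of the real format may differ:

```
package main

import (
	"encoding/json"
	"fmt"
)

// BlockMeta is a guess at the kind of data a block's meta.json might hold.
type BlockMeta struct {
	MinTime int64 `json:"minTime"` // start of the block's time window (ms)
	MaxTime int64 `json:"maxTime"` // end of the block's time window (ms)
	Stats   struct {
		NumSeries  uint64 `json:"numSeries"`
		NumSamples uint64 `json:"numSamples"`
	} `json:"stats"`
	Compaction struct {
		Level   int      `json:"level"`   // how many times data in this block was compacted
		Sources []string `json:"sources"` // the blocks it was compacted from
	} `json:"compaction"`
}

func main() {
	var m BlockMeta
	m.MinTime, m.MaxTime = 1500000000000, 1500007200000
	m.Stats.NumSeries, m.Stats.NumSamples = 1000, 1200000
	m.Compaction.Level = 1
	m.Compaction.Sources = []string{"b-000001"}

	out, _ := json.MarshalIndent(m, "", "  ")
	fmt.Println(string(out))
}
```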
|
||||
|
||||
##### mmap
|
||||
|
||||
Moving from millions of small files to a handful of larger ones allows us to keep all files open with little overhead. This unblocks the usage of [`mmap(2)`][8], a system call that allows us to transparently back a virtual memory region by file contents. For simplicity, you might want to think of it like swap space, just that all our data is on disk already and no writes occur when swapping data out of memory.
|
||||
|
||||
This means we can treat all contents of our database as if they were in memory without occupying any physical RAM. Only if we access certain byte ranges in our database files does the operating system lazily load pages from disk. This puts the operating system in charge of all memory management related to our persisted data. Generally, it is more qualified to make such decisions, as it has the full view on the entire machine and all its processes. Queried data can be rather aggressively cached in memory, yet under memory pressure the pages will be evicted. If the machine has unused memory, Prometheus will now happily cache the entire database, yet will immediately return it once another application needs it.
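A minimal Go sketch of memory-mapping one of the chunk files directly via `syscall.Mmap` (Unix-only); a real implementation would likely use a small wrapper around it, but the effect is the same: the file's bytes become addressable without explicit reads, and the OS pages them in lazily.

```
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.Open("./data/b-000001/chunks/000001")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	fi, err := f.Stat()
	if err != nil {
		panic(err)
	}
	// Map the whole chunk file read-only; the OS pages data in lazily on access
	// and may evict it again under memory pressure.
	data, err := syscall.Mmap(int(f.Fd()), 0, int(fi.Size()), syscall.PROT_READ, syscall.MAP_SHARED)
	if err != nil {
		panic(err)
	}
	defer syscall.Munmap(data)

	fmt.Println("mapped bytes:", len(data))
}
```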
|
||||
Therefore, queries can no longer easily OOM our process by querying more persisted data than fits into RAM. The memory cache size becomes fully adaptive and data is only loaded once the query actually needs it.
|
||||
|
||||
From my understanding, this is how a lot of databases work today and an ideal way to do it if the disk format allows — unless one is confident to outsmart the OS from within the process. We certainly get a lot of capabilities with little work from our side.
|
||||
|
||||
#### Compaction
|
||||
|
||||
The storage has to periodically "cut" a new block and write the previous one, which is now completed, to disk. Only after the block has been successfully persisted are the write ahead log files, which are used to restore in-memory blocks, deleted.
|
||||
We are interested in keeping each block reasonably short (about two hours for a typical setup) to avoid accumulating too much data in memory. When querying multiple blocks, we have to merge their results into an overall result. This merge procedure obviously comes with a cost and a week-long query should not have to merge 80+ partial results.
|
||||
|
||||
To achieve both, we introduce _compaction_ . Compaction describes the process of taking one or more blocks of data and writing them into a, potentially larger, block. It can also modify existing data along the way, e.g. dropping deleted data, or restructuring our sample chunks for improved query performance.
|
||||
|
||||
```
|
||||
|
||||
t0 t1 t2 t3 t4 now
|
||||
┌────────────┐ ┌──────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
|
||||
│ 1 │ │ 2 │ │ 3 │ │ 4 │ │ 5 mutable │ before
|
||||
└────────────┘ └──────────┘ └───────────┘ └───────────┘ └───────────┘
|
||||
┌─────────────────────────────────────────┐ ┌───────────┐ ┌───────────┐
|
||||
│ 1 compacted │ │ 4 │ │ 5 mutable │ after (option A)
|
||||
└─────────────────────────────────────────┘ └───────────┘ └───────────┘
|
||||
┌──────────────────────────┐ ┌──────────────────────────┐ ┌───────────┐
|
||||
│ 1 compacted │ │ 3 compacted │ │ 5 mutable │ after (option B)
|
||||
└──────────────────────────┘ └──────────────────────────┘ └───────────┘
|
||||
```
|
||||
|
||||
In this example we have the sequential blocks `[1, 2, 3, 4]`. Blocks 1, 2, and 3 can be compacted together and the new layout is `[1, 4]`. Alternatively, compact them in pairs of two into `[1, 3]`. All time series data still exist but now in fewer blocks overall. This significantly reduces the merging cost at query time as fewer partial query results have to be merged.
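A tiny Go sketch of what planning such a compaction could look like, under the assumption that we greedily pick a run of adjacent completed blocks whose combined time range stays below some maximum; the selection heuristic is illustrative, not the actual algorithm:

```
package main

import "fmt"

type block struct {
	name             string
	minTime, maxTime int64
}

// plan returns the longest prefix of adjacent blocks whose combined time
// range fits into maxRange, or nil if there is nothing worth compacting.
func plan(blocks []block, maxRange int64) []block {
	var out []block
	for _, b := range blocks {
		if len(out) > 0 && b.maxTime-out[0].minTime > maxRange {
			break
		}
		out = append(out, b)
	}
	if len(out) < 2 {
		return nil
	}
	return out
}

func main() {
	blocks := []block{
		{"b-000001", 0, 2}, {"b-000002", 2, 4}, {"b-000003", 4, 6}, {"b-000004", 6, 8},
	}
	fmt.Println(plan(blocks, 6)) // -> the first three blocks, as in "option A" above
}
```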
|
||||
|
||||
#### Retention
|
||||
|
||||
We saw that deleting old data was a slow process in the V2 storage and put a toll on CPU, memory, and disk alike. How can we drop old data in our block based design? Quite simply, by just deleting the directory of a block that has no data within our configured retention window. In the example below, block 1 can safely be deleted, whereas 2 has to stick around until it falls fully behind the boundary.
|
||||
|
||||
```
|
||||
|
|
||||
┌────────────┐ ┌────┼─────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
|
||||
│ 1 │ │ 2 | │ │ 3 │ │ 4 │ │ 5 │ . . .
|
||||
└────────────┘ └────┼─────┘ └───────────┘ └───────────┘ └───────────┘
|
||||
|
|
||||
|
|
||||
retention boundary
|
||||
```
|
||||
|
||||
The older data gets, the larger the blocks may become as we keep compacting previously compacted blocks. An upper limit has to be applied so blocks don’t grow to span the entire database and thus diminish the original benefits of our design.
|
||||
Conveniently, this also limits the total disk overhead of blocks that are partially inside and partially outside of the retention window, i.e. block 2 in the example above. When setting the maximum block size at 10% of the total retention window, our total overhead of keeping block 2 around is also bound by 10%.
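As a sketch of how cheap this becomes, the following hypothetical Go snippet drops every block whose entire time range lies before the retention boundary by removing its directory, while blocks straddling the boundary are left alone:

```
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

type block struct {
	dir     string
	maxTime int64 // end of the block's time window
}

// enforceRetention deletes the directory of every block that is fully behind
// the retention boundary; partially covered blocks stay until they fall
// completely behind it.
func enforceRetention(blocks []block, boundary int64) error {
	for _, b := range blocks {
		if b.maxTime < boundary { // e.g. block 1 in the diagram above
			fmt.Println("dropping", b.dir)
			if err := os.RemoveAll(filepath.Join("./data", b.dir)); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	_ = enforceRetention([]block{{"b-000001", 900}, {"b-000002", 1100}}, 1000)
}
```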
|
||||
|
||||
Summed up, retention deletion goes from very expensive, to practically free.
|
||||
|
||||
> _If you've come this far and have some background in databases, you might be asking one thing by now: Is any of this new? — Not really; and probably for the better._
|
||||
>
|
||||
> _The pattern of batching data up in memory, tracked in a write ahead log, and periodically flushed to disk is ubiquitous today._
|
||||
> _The benefits we have seen apply almost universally regardless of the data's domain specifics. Prominent open source examples following this approach are LevelDB, Cassandra, InfluxDB, or HBase. The key takeaway is to avoid reinventing an inferior wheel, researching proven methods, and applying them with the right twist._
|
||||
> _Running out of places to add your own magic dust later is an unlikely scenario._
|
||||
|
||||
### The Index
|
||||
|
||||
The initial motivation to investigate storage improvements was the set of problems brought by _series churn_ . The block-based layout reduces the total number of series that have to be considered for serving a query. So assuming our index lookup was of complexity _O(n^2)_ , we managed to reduce the _n_ a fair amount and now have an improved complexity of _O(n^2)_ — uhm, wait... damnit.
|
||||
A quick flashback to "Algorithms 101" reminds us that this, in theory, did not buy us anything. If things were bad before, they are just as bad now. Theory can be depressing.
|
||||
|
||||
In practice, most of our queries will already be answered significantly faster. Yet, queries spanning the full time range remain slow even if they just need to find a handful of series. My original idea, dating back way before all this work was started, was a solution to exactly this problem: we need a more capable [ _inverted index_ ][9].
|
||||
An inverted index provides a fast lookup of data items based on a subset of their contents. Simply put, I can look up all series that have a label `app="nginx"` without having to walk through every single series and check whether it contains that label.
|
||||
|
||||
For that, each series is assigned a unique ID by which it can be retrieved in constant time, i.e. O(1). In this case the ID is our _forward index_ .
|
||||
|
||||
> Example: If the series with IDs 10, 29, and 9 contain the label `app="nginx"`, the inverted index for the label "nginx" is the simple list `[10, 29, 9]`, which can be used to quickly retrieve all series containing the label. Even if there were 20 billion further series, it would not affect the speed of this lookup.
|
||||
|
||||
In short, if _n_ is our total number of series, and _m_ is the result size for a given query, the complexity of our query using the index is now _O(m)_ . Queries scaling along the amount of data they retrieve ( _m_ ) instead of the data body being searched ( _n_ ) is a great property as _m_ is generally significantly smaller.
|
||||
For brevity, let’s assume we can retrieve the inverted index list itself in constant time.
|
||||
|
||||
Actually, this is almost exactly the kind of inverted index V2 has and a minimum requirement to serve performant queries across millions of series. The keen observer will have noticed that in the worst case, a label exists in all series and thus _m_ is, again, in _O(n)_ . This is expected and perfectly fine. If you query all data, it naturally takes longer. Things become problematic once we get involved with more complex queries.
|
||||
|
||||
#### Combining Labels
|
||||
|
||||
Labels associated with millions of series are common. Suppose a horizontally scaling “foo” microservice with hundreds of instances with thousands of series each. Every single series will have the label `app="foo"`. Of course, one generally won't query all series but restrict the query by further labels, e.g. I want to know how many requests my service instances received and query `__name__="requests_total" AND app="foo"`.
|
||||
|
||||
To find all series satisfying both label selectors, we take the inverted index list for each and intersect them. The resulting set will typically be orders of magnitude smaller than each input list individually. As each input list has the worst case size O(n), the brute force solution of nested iteration over both lists, has a runtime of O(n^2). The same cost applies for other set operations, such as the union (`app="foo" OR app="bar"`). When adding further label selectors to the query, the exponent increases for each to O(n^3), O(n^4), O(n^5), ... O(n^k). A lot of tricks can be played to minimize the effective runtime by changing the execution order. The more sophisticated, the more knowledge about the shape of the data and the relationships between labels is needed. This introduces a lot of complexity, yet does not decrease our algorithmic worst case runtime.
|
||||
|
||||
This is essentially the approach in the V2 storage and luckily a seemingly slight modification is enough to gain significant improvements. What happens if we assume that the IDs in our inverted indices are sorted?
|
||||
|
||||
Suppose this example of lists for our initial query:
|
||||
|
||||
```
|
||||
__name__="requests_total" -> [ 9999, 1000, 1001, 2000000, 2000001, 2000002, 2000003 ]
|
||||
app="foo" -> [ 1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002 ]
|
||||
|
||||
intersection => [ 1000, 1001 ]
|
||||
```
|
||||
|
||||
The intersection is fairly small. We can find it by setting a cursor at the beginning of each list and always advancing the one at the smaller number. When both numbers are equal, we add the number to our result and advance both cursors. Overall, we scan both lists in this zig-zag pattern and thus have a total cost of _O(2n) = O(n)_ as we only ever move forward in either list.
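Here is the zig-zag intersection as a short Go function, using the sorted example lists from above; the total work is linear in the combined length of the inputs:

```
package main

import "fmt"

// intersect computes the intersection of two sorted posting lists by always
// advancing the cursor that points at the smaller ID.
func intersect(a, b []uint64) []uint64 {
	var out []uint64
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i] < b[j]:
			i++
		case a[i] > b[j]:
			j++
		default:
			out = append(out, a[i])
			i++
			j++
		}
	}
	return out
}

func main() {
	name := []uint64{1000, 1001, 9999, 2000000, 2000001, 2000002, 2000003}
	app := []uint64{1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002}
	fmt.Println(intersect(name, app)) // -> [1000 1001]
}
```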
|
||||
|
||||
The procedure for more than two lists of different set operations works similarly. So the number _k_ of set operations merely modifies the factor ( _O(k*n)_ ) instead of the exponent ( _O(n^k)_ ) of our worst-case lookup runtime. A great improvement.
|
||||
What I described here is a simplified version of the canonical search index used by practically any [full text search engine][10] out there. Every series descriptor is treated as a short "document", and every label (name + fixed value) as a "word" inside of it. We can ignore a lot of additional data typically encountered in search engine indices, such as word position and frequency data.
|
||||
Seemingly endless research exists on approaches improving the practical runtime, often making some assumptions about the input data. Unsurprisingly, there are also plenty of techniques to compress inverted indices that come with their own benefits and drawbacks. As our "documents" are tiny and the “words” are hugely repetitive across all series, compression becomes almost irrelevant. For example, a real-world dataset of ~4.4 million series with about 12 labels each has less than 5,000 unique labels. For our initial storage version, we stick to the basic approach without compression, and just a few simple tweaks added to skip over large ranges of non-intersecting IDs.
|
||||
|
||||
While keeping the IDs sorted may sound simple, it is not always a trivial invariant to keep up. For instance, the V2 storage assigns hashes as IDs to new series and we cannot efficiently build up sorted inverted indices.
|
||||
Another daunting task is modifying the indices on disk as data gets deleted or updated. Typically, the easiest approach is to simply recompute and rewrite them, but doing so while keeping the database queryable and consistent is the hard part. The V3 storage does exactly this by having a separate immutable index per block that is only modified via rewrite on compaction. Only the indices for the mutable blocks, which are held entirely in memory, need to be updated.
|
||||
|
||||
### Benchmarking
|
||||
|
||||
I started initial development of the storage with a benchmark based on ~4.4 million series descriptors extracted from a real world data set and generated synthetic data points to feed into those series. This iteration just tested the stand-alone storage and was crucial to quickly identify performance bottlenecks and trigger deadlocks only experienced under highly concurrent load.
|
||||
|
||||
After the conceptual implementation was done, the benchmark could sustain a write throughput of 20 million data points per second on my Macbook Pro — all while a dozen Chrome tabs and Slack were running. So while this sounded all great it also indicated that there's no further point in pushing this benchmark (or running it in a less random environment for that matter). After all, it is synthetic and thus not worth much beyond a good first impression. Starting out about 20x above the initial design target, it was time to embed this into an actual Prometheus server, adding all the practical overhead and flakes only experienced in more realistic environments.
|
||||
|
||||
We actually had no reproducible benchmarking setup for Prometheus, in particular none that allowed A/B testing of different versions. Concerning in hindsight, but [now we have one][11]!
|
||||
|
||||
Our tool allows us to declaratively define a benchmarking scenario, which is then deployed to a Kubernetes cluster on AWS. While this is not the best environment for all-out benchmarking, it certainly reflects our user base better than dedicated bare metal servers with 64 cores and 128GB of memory.
|
||||
We deploy two Prometheus 1.5.2 servers (V2 storage) and two Prometheus servers from the 2.0 development branch (V3 storage). Each Prometheus server runs on a dedicated machine with an SSD. A horizontally scaled application exposing typical microservice metrics is deployed to worker nodes. Additionally, the Kubernetes cluster itself and the nodes are being monitored. The whole setup is supervised by yet another Meta-Prometheus, monitoring each Prometheus server for health and performance.
|
||||
To simulate series churn, the microservice is periodically scaled up and down to remove old pods and spawn new pods, exposing new series. Query load is simulated by a selection of "typical" queries, run against one server of each Prometheus version.
|
||||
|
||||
Overall the scaling and querying load as well as the sampling frequency significantly exceed today's production deployments of Prometheus. For instance, we swap out 60% of our microservice instances every 15 minutes to produce series churn. This would likely only happen 1-5 times a day in a modern infrastructure. This ensures that our V3 design is capable of handling the workloads of the years ahead. As a result, the performance differences between Prometheus 1.5.2 and 2.0 are larger than in a more moderate environment.
|
||||
In total, we are collecting about 110,000 samples per second from 850 targets exposing half a million series at a time.
|
||||
|
||||
After leaving this setup running for a while, we can take a look at the numbers. We evaluate several metrics over the first 12 hours within which both versions reached a steady state.
|
||||
|
||||
> Be aware of the slightly truncated Y axis in screen shots from the Prometheus graph UI.
|
||||
|
||||

|
||||
> _Heap memory usage in GB_
|
||||
|
||||
Memory usage is the most troubling resource for users today as it is relatively unpredictable and it may cause the process to crash.
|
||||
Obviously, the queried servers are consuming more memory, which can largely be attributed to the overhead of the query engine, which will be subject to future optimizations. Overall, Prometheus 2.0's memory consumption is reduced by 3-4x. After about six hours, there is a clear spike in Prometheus 1.5, which aligns with our retention boundary at six hours. As deletions are quite costly, resource consumption ramps up. This will become visible throughout various other graphs below.
|
||||
|
||||

|
||||
> _CPU usage in cores/second_
|
||||
|
||||
A similar pattern shows for CPU usage, but the delta between queried and non-queried servers is more significant. Averaging at about 0.5 cores/sec while ingesting about 110,000 samples/second, our new storage becomes almost negligible compared to the cycles spent on query evaluation. In total the new storage needs 3-10 times fewer CPU resources.
|
||||
|
||||

|
||||
>_Disk writes in MB/second_
|
||||
|
||||
The by far most dramatic and unexpected improvement shows in write utilization of our disk. It clearly shows why Prometheus 1.5 is prone to wear out SSDs. We see an initial ramp-up as soon as the first chunks are persisted into the series files and a second ramp-up once deletion starts rewriting them. Surprisingly, the queried and non-queried server show a very different utilization.
|
||||
Prometheus 2.0, on the other hand, merely writes about a single megabyte per second to its write ahead log. Writes periodically spike when blocks are compacted to disk. Overall savings: a staggering 97-99%.
|
||||
|
||||

|
||||
> _Disk size in GB_
|
||||
|
||||
Closely related to disk writes is the total amount of occupied disk space. As we are using almost the same compression algorithm for samples, which is the bulk of our data, they should be about the same. In a more stable setup that would largely be true, but as we are dealing with high _series churn_ , there's also the per-series overhead to consider.
|
||||
As we can see, Prometheus 1.5 ramps up storage space a lot faster before both versions reach a steady state as the retention kicks in. Prometheus 2.0 seems to have a significantly lower overhead per individual series. We can nicely see how space is linearly filled up by the write ahead log and instantaneously drops as it gets compacted. The fact that the lines for both Prometheus 2.0 servers do not exactly match needs further investigation.
|
||||
|
||||
This all looks quite promising. The important piece left is query latency. The new index should have improved our lookup complexity. What has not substantially changed is processing of this data, e.g. in `rate()` functions or aggregations. Those aspects are part of the query engine.
|
||||
|
||||

|
||||
>_99th percentile query latency in seconds_
|
||||
|
||||
Expectations are completely met by the data. In Prometheus 1.5 the query latency increases over time as more series are stored. It only levels off once retention starts and old series are deleted. In contrast, Prometheus 2.0 stays in place right from the beginning.
|
||||
Some caution must be taken on how this data was collected. The queries fired against the servers were chosen by estimating a good mix of range and instant queries, doing heavier and more lightweight computations, and touching few or many series. It does not necessarily represent a real-world distribution of queries. It is also not representative for queries hitting cold data and we can assume that all sample data is practically always hot in memory in either storage.
|
||||
Nonetheless, we can say with good confidence, that the overall query performance became very resilient to series churn and improved by up to 4x in our straining benchmarking scenario. In a more static environment, we can assume query time to be mostly spent in the query engine itself and the improvement to be notably lower.
|
||||
|
||||

|
||||
>_Ingested samples/second_
|
||||
|
||||
Lastly, a quick look into our ingestion rates of the different Prometheus servers. We can see that both servers with the V3 storage have the same ingestion rate. After a few hours it becomes unstable, which is caused by various nodes of the benchmarking cluster becoming unresponsive due to high load rather than the Prometheus instances. (The fact that both 2.0 lines exactly match is hopefully convincing enough.)
|
||||
Both Prometheus 1.5.2 servers start suffering from significant drops in ingestion rate even though more CPU and memory resources are available. The high stress of series churn causes a larger amount of data to not be collected.
|
||||
|
||||
But what's the _absolute maximum_ number of samples per second you could ingest now?
|
||||
|
||||
I don't know — and deliberately don't care.
|
||||
|
||||
There are a lot of factors that shape the data flowing into Prometheus and there is no single number capable of capturing quality. Maximum ingestion rate has historically been a metric leading to skewed benchmarks and neglect of more important aspects such as query performance and resilience to series churn. The rough assumption that resource usage increases linearly was confirmed by some basic testing. It is easy to extrapolate what could be possible.
|
||||
|
||||
Our benchmarking setup simulates a highly dynamic environment stressing Prometheus more than most real-world setups today. The results show we went way above our initial design goal, while running on non-optimal cloud servers. Ultimately, success will be determined by user feedback rather than benchmarking numbers.
|
||||
|
||||
> Note: _At time of writing this, Prometheus 1.6 is in development, which will allow configuring the maximum memory usage more reliably and may notably reduce overall consumption in favor of slightly increased CPU utilization. I did not repeat the tests against this as the overall results still hold, especially when facing high series churn._
|
||||
|
||||
### Conclusion
|
||||
|
||||
Prometheus sets out to handle high cardinality of series and throughput of individual samples. It remains a challenging task, but the new storage seems to position us well for the hyper-scale, hyper-convergent, GIFEE infrastructure of the futu... well, it seems to work pretty well.
|
||||
|
||||
A [first alpha release of Prometheus 2.0][12] with the new V3 storage is available for testing. Expect crashes, deadlocks, and other bugs at this early stage.
|
||||
|
||||
The code for the storage itself can be found [in a separate project][13]. It's surprisingly agnostic to Prometheus itself and could be useful for a wide range of applications looking for an efficient local storage time series database.
|
||||
|
||||
> _There's a long list of people to thank for their contributions to this work. Here they go in no particular order:_
|
||||
>
|
||||
> _The groundlaying work by Bjoern Rabenstein and Julius Volz on the V2 storage engine and their feedback on V3 was fundamental to everything seen in this new generation._
|
||||
>
|
||||
> _Wilhelm Bierbaum's ongoing advice and insight contributed significantly to the new design. Brian Brazil's continous feedback ensured that we ended up with a semantically sound approach. Insightful discussions with Peter Bourgon validated the design and shaped this write-up._
|
||||
>
|
||||
> _Not to forget my entire team at CoreOS and the company itself for supporting and sponsoring this work. Thanks to everyone who listened to my ramblings about SSDs, floats, and serialization formats again and again._
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fabxc.org/blog/2017-04-10-writing-a-tsdb/
|
||||
|
||||
作者:[Fabian Reinartz ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://twitter.com/fabxc
|
||||
[1]:https://en.wikipedia.org/wiki/Inode
|
||||
[2]:https://prometheus.io/
|
||||
[3]:https://kubernetes.io/
|
||||
[4]:https://en.wikipedia.org/wiki/Write_amplification
|
||||
[5]:http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/
|
||||
[6]:https://prometheus.io/docs/practices/rules/
|
||||
[7]:http://www.vldb.org/pvldb/vol8/p1816-teller.pdf
|
||||
[8]:https://en.wikipedia.org/wiki/Mmap
|
||||
[9]:https://en.wikipedia.org/wiki/Inverted_index
|
||||
[10]:https://en.wikipedia.org/wiki/Search_engine_indexing#Inverted_indices
|
||||
[11]:https://github.com/prometheus/prombench
|
||||
[12]:https://prometheus.io/blog/2017/04/10/promehteus-20-sneak-peak/
|
||||
[13]:https://github.com/prometheus/tsdb
|
@@ -0,0 +1,431 @@
|
||||
从零写一个时间序列数据库
|
||||
============================================================
|
||||
|
||||
|
||||
我从事监控工作。特别是在 [Prometheus][2] 上,监控系统包含一个自定义的时间序列数据库,并且集成在 [Kubernetes][3] 上。
|
||||
|
||||
在许多方面上 Kubernetes 展现出了所有 Prometheus 的设计用途。它使得<ruby>持续部署<rt>continuous deployments</rt></ruby>,<ruby>弹性伸缩<rt>auto scaling</rt></ruby>和其他<ruby>高动态环境<rt>highly dynamic environments</rt></ruby>下的功能可以轻易地访问。在众多概念上的决策中,查询语句和操作模型使得 Prometheus 特别适合这种环境。但是,如果监控的工作负载动态程度显著地增加,这就会给监控系统本身带来新的压力。记住了这一点,而不是回过头来看 Prometheus 已经解决的很好的问题,我们就可以明确目标去提升它高动态或<ruby>瞬态服务<rt>transient services</rt></ruby>环境下的表现。
|
||||
|
||||
Prometheus 的存储层在很长一段时间里都展现出卓越的性能:单一服务器每秒能够摄入多达一百万个样本,这些样本分属数百万个时间序列,同时只占用很少的磁盘空间。尽管当前的存储做得很好,但我依旧提出一个新设计的存储子系统,它更正了现存解决方案的缺点,并具备处理更大规模数据的能力。
|
||||
|
||||
注释:我没有数据库方面的背景。我说的东西可能是错的并让你误入歧途。你可以在 Freenode 的 #prometheus 频道上提出你的批评(fabxc)
|
||||
|
||||
### 问题,难题,问题域
|
||||
|
||||
首先,快速地概览一下我们要完成的东西和它的关键难题。我们可以先看一下 Prometheus 当前的做法 ,它为什么做的这么好,以及我们打算用新设计解决哪些问题。
|
||||
|
||||
### 时间序列数据
|
||||
|
||||
我们有一个收集一段时间数据的系统。
|
||||
|
||||
```
|
||||
identifier -> (t0, v0), (t1, v1), (t2, v2), (t3, v3), ....
|
||||
```
|
||||
|
||||
每个数据点是一个时间戳和值的元组。在监控中,时间戳是一个整数,值可以是任意数字。64 位浮点数对于计数器和测量值来说是一个好的表示方法,因此我们将会使用它。一系列严格单调递增的时间戳数据点是一个序列,它由标识符所引用。我们的标识符是一个带有<ruby>标签维度<rt>label dimensions</rt></ruby>字典的度量名称。标签维度分开了单一指标的测量空间。每一个指标名称加上一个独一无二的标签集就成了它自己的时间序列,它有一个与之关联的<ruby>数据流<rt>value stream</rt></ruby>。
|
||||
|
||||
这是一个典型的<ruby>序列标识符<rt>series identifiers </rt></ruby>集,它是统计请求指标的一部分:
|
||||
|
||||
```
|
||||
requests_total{path="/status", method="GET", instance="10.0.0.1:80"}
requests_total{path="/status", method="POST", instance="10.0.0.3:80"}
requests_total{path="/", method="GET", instance="10.0.0.2:80"}
|
||||
```
|
||||
|
||||
让我们简化一下表示方法:度量名称可以当作另一个维度标签,在我们的例子中是 `__name__`。对于查询语句,可以对它进行特殊处理,但与我们存储的方式无关,我们后面也会见到。
|
||||
|
||||
```
|
||||
{__name__="requests_total", path="/status", method="GET", instance=”10.0.0.1:80”}
|
||||
{__name__="requests_total", path="/status", method="POST", instance=”10.0.0.3:80”}
|
||||
{__name__="requests_total", path="/", method="GET", instance=”10.0.0.2:80”}
|
||||
```
|
||||
|
||||
我们想通过标签来查询时间序列数据。在最简单的情况下,使用 `{__name__="requests_total"}` 选择所有属于 `requests_total` 指标的数据。对于所有选择的序列,我们在给定的时间窗口内获取数据点。
|
||||
在更复杂的语句中,我们或许想一次性选择满足多个标签的序列,并且表示比相等条件更复杂的情况。例如,非语句(`method!="GET"`)或正则表达式匹配(`method=~"PUT|POST"`)。
|
||||
|
||||
这些在很大程度上定义了存储的数据和它的获取方式。
|
||||
|
||||
### 纵与横
|
||||
|
||||
在简化的视图中,所有的数据点可以分布在二维平面上。水平维度代表着时间,序列标识符域经纵轴展开。
|
||||
|
||||
```
|
||||
series
|
||||
^
|
||||
│ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="GET"}
|
||||
│ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="POST"}
|
||||
│ . . . . . . .
|
||||
│ . . . . . . . . . . . . . . . . . . . ...
|
||||
│ . . . . . . . . . . . . . . . . . . . . .
|
||||
│ . . . . . . . . . . . . . . . . . . . . . {__name__="errors_total", method="POST"}
|
||||
│ . . . . . . . . . . . . . . . . . {__name__="errors_total", method="GET"}
|
||||
│ . . . . . . . . . . . . . .
|
||||
│ . . . . . . . . . . . . . . . . . . . ...
|
||||
│ . . . . . . . . . . . . . . . . . . . .
|
||||
v
|
||||
<-------------------- time --------------------->
|
||||
```
|
||||
|
||||
Prometheus 通过定期地抓取一组时间序列的当前值来获取数据点。我们获取到的实体称为目标。因此,写入模式完全地垂直且高度并发,因为来自每个目标的样本是独立摄入的。这里提供一些测量的规模:单一 Prometheus 实例从成千上万的目标中收集数据点,每个数据点都暴露在成百上千个不同的时间序列中。
|
||||
|
||||
在每秒采集数百万数据点这种规模下,批量写入是一个不能妥协的性能要求。在磁盘上分散地写入单个数据点会相当地缓慢。因此,我们想要按顺序写入更大的数据块。
|
||||
对于旋转式磁盘,它的磁头始终得物理上地向不同的扇区上移动,这是一个不足为奇的事实。而我们都知道 SSD 具有快速随机写入的特点,但事实上它不能修改单独的字节,只能写入一页 4KiB 或更多的数据量。这就意味着写入 16 字节的样本相当于写入满满一个 4Kib 的页。这一行为部分上属于[写入放大][4],这种特性会损耗你的 SSD。因此它不仅影响速度,而且还毫不夸张地在几天或几个周内破坏掉你的硬件。
|
||||
关于此问题更深层次的资料,[“Coding for SSDs”系列][5]博客是极好的资源。让我们想想有什么收获:顺序写入和批量写入对于旋转式磁盘和 SSD 来说都是理想的写入模式。大道至简。
|
||||
|
||||
查询模式比起写入模式千差万别。我们可以查询单一序列的一个数据点,也可以为 10000 个序列查询一个数据点,还可以查询一个序列几个周的数据点,甚至是 10000 个序列几个周的数据点。因此在我们的二维平面上,查询范围不是完全水平或垂直的,而是二者形成矩形似的组合。
|
||||
[记录规则][6]减轻了已知查询的问题,但对于<ruby>点对点<rt>ad-hoc</rt></ruby>查询来说并不是一个通用的解决方法。
|
||||
|
||||
我们知道自己想要批量地写入,但我们得到的仅仅是一系列垂直数据点的集合。当查询一段时间窗口内的数据点时,我们不仅很难弄清楚在哪才能找到这些单独的点,而且不得不从磁盘上大量随机的地方读取。也许一条查询语句会有数百万的样本,即使在最快的 SSD 上也会很慢。读入也会从磁盘上获取更多的数据而不仅仅是 16 字节的样本。SSD 会加载一整页,HDD 至少会读取整个扇区。不论哪一种,我们都在浪费宝贵的读吞吐量。
|
||||
因此在理想上,相同序列的样本将按顺序存储,这样我们就能通过尽可能少的读取来扫描它们。在上层,我们仅需要知道序列的起始位置就能访问所有的数据点。
|
||||
|
||||
显然,将收集到的数据写入磁盘的理想模式与能够显著提高查询效率的布局之间存在着很强的张力。这是我们 TSDB 需要解决的一个基本问题。
|
||||
|
||||
#### 当前的解法
|
||||
|
||||
是时候看一下当前 Prometheus 是如何存储数据来解决这一问题的,让我们称它为“V2”。
|
||||
我们创建一个时间序列的文件,它包含所有样本并按顺序存储。因为每几秒附加一个样本数据到所有文件中非常昂贵,我们打包 1Kib 样本序列的数据块在内存中,一旦打包完成就附加这些数据块到单独的文件中。这一方法解决了大部分问题。写入目前是批量的,样本也是按顺序存储的。它还支持非常高效的压缩格式,基于给定的同一序列的样本相对之前的数据仅发生非常小的改变这一特性。Facebook 在他们 Gorilla TSDB 上的论文中描述了一个相似的基于数据块的方法,并且[引入了一种压缩格式][7],它能够减少 16 字节的样本到平均 1.37 字节。V2 存储使用了包含 Gorilla 等的各种压缩格式。
|
||||
|
||||
```
|
||||
┌──────────┬─────────┬─────────┬─────────┬─────────┐ series A
|
||||
└──────────┴─────────┴─────────┴─────────┴─────────┘
|
||||
┌──────────┬─────────┬─────────┬─────────┬─────────┐ series B
|
||||
└──────────┴─────────┴─────────┴─────────┴─────────┘
|
||||
. . .
|
||||
┌──────────┬─────────┬─────────┬─────────┬─────────┬─────────┐ series XYZ
|
||||
└──────────┴─────────┴─────────┴─────────┴─────────┴─────────┘
|
||||
chunk 1 chunk 2 chunk 3 ...
|
||||
```
|
||||
|
||||
尽管基于块存储的方法非常棒,但为每个序列保存一个独立的文件会给 V2 存储带来麻烦,因为:
|
||||
|
||||
* 我们实际上需要比当前收集的时间序列数目使用更多的文件。多出的部分在<ruby>序列分流<rt>Series Churn</rt></ruby>上。拥有几百万个文件,迟早会使用光文件系统中的 [inodes][1]。这种情况我们只可以通过重新格式化来恢复磁盘,这种方式是最具有破坏性的。我们通常想要避免为了适应一个应用程序而格式化磁盘。
|
||||
* 即使是分块写入,每秒也会有数千个数据块写完并准备持久化。这依然需要每秒数千次的磁盘写入。尽管通过为每个序列打包多个已完成的数据块来缓解,但这反过来还是增加了等待持久化数据的总内存占用。
|
||||
* 要保持所有文件的打开状态进行读写是不可行的。特别是因为 99% 的数据在 24 小时之后不再会被查询到。如果它还是被查询到,我们就得打开数千个文件,找到并读取相关的数据点到内存中,然后再关掉。这样做就会引起很高的查询延迟,数据块缓存加剧会导致新的问题,这一点在“资源消耗”一节另作讲述。
|
||||
* 最终,旧的数据需要被删除并且数据需要从数百万文件的头部删除。这就意味着删除实际上是高强度的写入操作。此外,循环遍历数百万文件并且进行分析通常会导致这一过程花费数小时。当它完成时,可能又得重新来过。喔天,继续删除旧文件又会进一步导致 SSD 产生写入放大。
|
||||
* 目前所积累的数据块仅维持在内存中。如果应用崩溃,数据就会丢失。为了避免这种情况,内存状态会定期的保存在磁盘上,这比我们能接受数据丢失的时间要长的多。恢复检查点也会花费数分钟,导致很长的重启周期。
|
||||
|
||||
我们能够从现有的设计中学到的关键部分是数据块的概念,这一点会依旧延续。最近一段时间的数据块会保持在内存中也大体上不错。毕竟,最近时间段的数据会大量的查询到。一个时间序列对应一个文件,这种概念是我们想要替换掉的。
|
||||
|
||||
### 序列分流
|
||||
|
||||
在 Prometheus 的<ruby>上下文<rt>context</rt></ruby>中,我们使用术语<ruby>序列分流<rt>series churn</rt></ruby>来描述不活越的时间序列集合,即不再接收数据点,取而代之的是出现一组新的活跃序列。
|
||||
例如,由给定微服务实例产生的所有序列都有一个相对的“instance”标签来标识它的起源。如果我们为微服务执行了<ruby>滚动更新<rt>rolling update</rt></ruby>,并且为每个实例替换一个新的版本,序列分流便会发生。在更加动态的环境中,这些事情基本上每小时都会发生。像 Kubernetes 这样的<ruby>集群编排<rt>Cluster orchestration</rt></ruby>系统允许应用连续性的自动伸缩和频繁的滚动更新,这样也许会创建成千上万个新的应用程序实例,并且伴随着全新的时间序列集合,每天都是如此。
|
||||
|
||||
```
|
||||
series
|
||||
^
|
||||
│ . . . . . .
|
||||
│ . . . . . .
|
||||
│ . . . . . .
|
||||
│ . . . . . . .
|
||||
│ . . . . . . .
|
||||
│ . . . . . . .
|
||||
│ . . . . . .
|
||||
│ . . . . . .
|
||||
│ . . . . .
|
||||
│ . . . . .
|
||||
│ . . . . .
|
||||
v
|
||||
<-------------------- time --------------------->
|
||||
```
|
||||
|
||||
所以即便整个基础设施的规模基本保持不变,过一段时间后数据库内的时间序列还是会成线性增长。尽管 Prometheus 很愿意采集 1000 万个时间序列数据,但要想在 10 亿的序列中找到数据,查询效果还是会受到严重的影响。
|
||||
|
||||
#### 当前解法
|
||||
|
||||
当前 Prometheus 的 V2 存储系统对所有保存的序列拥有基于 LevelDB 的索引。它允许查询语句含有给定的<ruby>标签对<rt>label pair</rt></ruby>,但是缺乏可伸缩的方法来从不同的标签选集中组合查询结果。
|
||||
例如,从所有的序列中选择标签 `__name__="requests_total"` 非常高效,但是选择 `instance="A" AND __name__="requests_total"` 就有了可伸缩性的问题。我们稍后会重新考虑导致这一点的原因和能够提升查找延迟的调整方法。
|
||||
|
||||
事实上正是这个问题才催生出了对更好的存储系统的最初探索。Prometheus 需要为查找亿万的时间序列改进索引方法。
|
||||
|
||||
### 资源消耗
|
||||
|
||||
当试图量化 Prometheus (或其他任何事情,真的)时,资源消耗是永恒不变的话题之一。但真正困扰用户的并不是对资源的绝对渴求。事实上,由于给定的需求,Prometheus 管理着令人难以置信的吞吐量。问题更在于面对变化时的相对未知性与不稳定性。由于自身的架构设计,V2 存储系统构建样本数据块相当缓慢,这一点导致内存占用随时间递增。当数据块完成之后,它们可以写到磁盘上并从内存中清除。最终,Prometheus 的内存使用到达平衡状态。直到监测环境发生了改变——每次我们扩展应用或者进行滚动更新,序列分流都会增加内存、CPU、磁盘 IO 的使用。如果变更正在进行,那么它最终还是会到达一个稳定的状态,但比起更加静态的环境,它的资源消耗会显著地提高。过渡时间通常为数个小时,而且难以确定最大资源使用量。
|
||||
|
||||
为每个时间序列保存一个文件这种方法也使得单一查询很容易崩溃 Prometheus 进程。当查询的数据没有缓存在内存中,查询的序列文件就会被打开,然后将含有相关数据点的数据块读入内存。如果数据量超出内存可用量,Prometheus 就会因 OOM 被杀死而退出。
|
||||
在查询语句完成之后,加载的数据便可以被再次释放掉,但通常会缓存更长的时间,以便更快地查询相同的数据。后者看起来是件不错的事情。
|
||||
|
||||
最后,我们看看之前提到的 SSD 的写入放大,以及 Prometheus 是如何通过批量写入来解决这个问题的。尽管如此,在许多地方还是存在因为拥有太多小批量数据以及在页的边界上未精确对齐的数据而导致的写入放大。对于更大规模的 Prometheus 服务器,现实当中发现会缩减硬件寿命的问题。这一点对于数据库应用的高写入吞吐量来说仍然相当普遍,但我们应该放眼看看是否可以解决它。
### Starting Over

By now we have a good idea of the problem domain, how the V2 storage solves it, and where its design struggles. We also saw several great concepts that we want to carry over more or less seamlessly. A fair share of V2's problems could be addressed with improvements and a partial redesign, but to keep things fun (and after carefully evaluating my options, of course), I decided to take a stab at writing an entire time series database, from scratch, i.e. writing bytes to the file system.

The critical concerns of performance and resource usage are a direct consequence of the chosen storage format. We have to find the right set of algorithms and a disk layout for our data to implement a well-performing storage layer.

This is where I take the shortcut and drive straight to the solution, skipping the headaches, the failed ideas, the endless sketching, the tears and the despair.

### V3: Macro Design

What is the macro layout of our storage going to be? In short, everything that is revealed when we run `tree` on our data directory. Just looking at it gives us a surprisingly good picture of what is going on.

```
$ tree ./data
./data
├── b-000001
│   ├── chunks
│   │   ├── 000001
│   │   ├── 000002
│   │   └── 000003
│   ├── index
│   └── meta.json
├── b-000004
│   ├── chunks
│   │   └── 000001
│   ├── index
│   └── meta.json
├── b-000005
│   ├── chunks
│   │   └── 000001
│   ├── index
│   └── meta.json
└── b-000006
    ├── meta.json
    └── wal
        ├── 000001
        ├── 000002
        └── 000003
```

At the top level, we have a sequence of numbered blocks, prefixed with `b-`. Each block obviously holds an index file and a "chunks" directory containing more numbered files. The "chunks" directory holds nothing but the raw chunks of data points for various series. Just as for V2, this makes reading series data over a time window very cheap and allows us to apply the same efficient compression algorithms. The concept has proven to work well and we stick with it. Obviously, there is no longer a single file per series, but instead a handful of files holding chunks for many series.

The existence of an `index` file should not be surprising. Let's just assume it contains a lot of black magic that allows us to find labels, their possible values, entire time series, and the chunks holding their data points.

But why are there several directories containing this layout of index and chunk files? And why does the last one contain a "wal" directory instead? Understanding these two questions solves about ninety percent of our problems.

#### Many Little Databases

We partition our horizontal dimension, i.e. the time space, into non-overlapping blocks. Each block acts as a fully independent database containing all time series data for its time window. Hence, it has its own index and its own set of chunk files.

```

t0             t1             t2             t3             now
 ┌───────────┐  ┌───────────┐  ┌───────────┐  ┌───────────┐
 │           │  │           │  │           │  │           │                 ┌────────────┐
 │           │  │           │  │           │  │  mutable  │ <─── write ──── ┤ Prometheus │
 │           │  │           │  │           │  │           │                 └────────────┘
 └───────────┘  └───────────┘  └───────────┘  └───────────┘                        ^
       └──────────────┴───────┬──────┴──────────────┘                              │
                              │                                                  query
                              │                                                    │
                            merge ─────────────────────────────────────────────────┘
```

The data of each block is immutable. Of course, we must be able to add new series and samples to the most recent block as we collect new data. For that block, all new data is written to an in-memory database that provides the same lookup properties as our persistent blocks. The in-memory data structures can be updated efficiently. To prevent data loss, all incoming data is additionally written to a temporary write ahead log, which is the set of files in the `wal` directory, from which we can re-populate the in-memory database on restart.

All of these files come with their own serialization format, featuring everything one would expect: lots of flags, offsets, varints, and CRC32 checksums. Good fun to come up with, rather boring to read about.

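To make the write path a bit more concrete, here is a minimal Go sketch of the idea: a sample is appended to the write ahead log first and only then to the in-memory block. All names and the record format here are made up for illustration; the actual implementation is considerably more involved.

```
package walsketch

import (
	"bufio"
	"encoding/binary"
	"hash/crc32"
	"math"
	"os"
)

// sample is one (timestamp, value) pair for the series identified by ref.
type sample struct {
	ref uint64
	t   int64
	v   float64
}

// wal is a minimal write ahead log: fixed-size records with a CRC32 suffix
// so that torn writes can be detected when the log is replayed on restart.
type wal struct {
	w *bufio.Writer
}

func openWAL(path string) (*wal, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o666)
	if err != nil {
		return nil, err
	}
	return &wal{w: bufio.NewWriter(f)}, nil
}

func (w *wal) append(s sample) error {
	var rec [28]byte
	binary.BigEndian.PutUint64(rec[0:], s.ref)
	binary.BigEndian.PutUint64(rec[8:], uint64(s.t))
	binary.BigEndian.PutUint64(rec[16:], math.Float64bits(s.v))
	binary.BigEndian.PutUint32(rec[24:], crc32.ChecksumIEEE(rec[:24]))
	_, err := w.w.Write(rec[:])
	return err
}

// headBlock is the mutable in-memory block for the most recent time window.
type headBlock struct {
	wal    *wal
	series map[uint64][]sample // series ID -> samples in timestamp order
}

// Append first persists the sample to the WAL and only then applies it to
// the in-memory state, so a crash can always be recovered by log replay.
func (h *headBlock) Append(s sample) error {
	if err := h.wal.append(s); err != nil {
		return err
	}
	h.series[s.ref] = append(h.series[s.ref], s)
	return nil
}
```

Writing the log through a buffered writer batches many tiny records into larger disk writes, which is exactly the batching-versus-memory trade-off discussed earlier.
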
This layout allows us to fan out queries to all blocks relevant to the queried time range. The partial results from each block are then merged back together into the overall result.

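A rough sketch of that fan-out, written against a hypothetical `Block` interface; the real query path works on iterators and is quite a bit more elaborate:

```
package fanout

// Sample and Series form a heavily simplified query result: the data points
// of one series within the requested time range.
type Sample struct {
	T int64
	V float64
}

type Series struct {
	Ref     uint64
	Samples []Sample
}

// Matcher selects series by an exact label value, e.g. {app, foo}.
type Matcher struct{ Name, Value string }

// Block is the minimal read interface each persisted or in-memory block
// would need to expose for this sketch.
type Block interface {
	MinTime() int64
	MaxTime() int64
	Select(ms ...Matcher) []Series
}

// Query fans out to the blocks overlapping [mint, maxt] and merges their
// partial, per-block results into one result per series. It assumes blocks
// are passed in ascending time order and cover disjoint ranges, so simply
// appending keeps each series' samples sorted by timestamp.
func Query(blocks []Block, mint, maxt int64, ms ...Matcher) []Series {
	merged := map[uint64]*Series{}
	for _, b := range blocks {
		if b.MaxTime() < mint || b.MinTime() > maxt {
			continue // block lies entirely outside the queried range
		}
		for _, s := range b.Select(ms...) {
			r, ok := merged[s.Ref]
			if !ok {
				r = &Series{Ref: s.Ref}
				merged[s.Ref] = r
			}
			r.Samples = append(r.Samples, s.Samples...)
		}
	}
	out := make([]Series, 0, len(merged))
	for _, s := range merged {
		out = append(out, *s)
	}
	return out
}
```
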
This horizontal partitioning adds a few great capabilities:

* When querying a time range, we can simply ignore all data blocks outside of that range. It trivially addresses the problem of series churn by reducing the set of inspected data to begin with.
* When completing a block, we can persist the data from our in-memory database by sequentially writing just a handful of larger files. That avoids any write amplification and serves SSDs and HDDs equally well.
* We keep the good property of V2 that recent chunks, which are queried most, always stay hot in memory.
* Nicely enough, we are also no longer bound to a fixed 1KiB chunk size chosen to better align data on disk. We can pick whatever size makes the most sense for the individual data points and the chosen compression format.
* Deleting old data becomes extremely cheap and almost instantaneous. We merely have to delete a single directory. Remember that in the old storage we had to analyze and rewrite up to hundreds of millions of files, which could take hours.

Each block also contains a `meta.json` file. It simply holds human-readable information about the block's storage state and the data it contains.

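For illustration, that metadata could be modeled by a small struct like the following. The field names are assumptions for this sketch, not the exact schema Prometheus uses:

```
package blockmeta

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// BlockMeta models what a meta.json could contain: the time range the block
// covers plus a few statistics. The field names are illustrative only.
type BlockMeta struct {
	MinTime         int64  `json:"minTime"`    // earliest sample timestamp in the block
	MaxTime         int64  `json:"maxTime"`    // latest sample timestamp in the block
	NumSeries       uint64 `json:"numSeries"`
	NumSamples      uint64 `json:"numSamples"`
	CompactionLevel int    `json:"compactionLevel"` // how often the block has been compacted
}

// readMeta loads and decodes <blockDir>/meta.json.
func readMeta(blockDir string) (*BlockMeta, error) {
	b, err := os.ReadFile(filepath.Join(blockDir, "meta.json"))
	if err != nil {
		return nil, err
	}
	var m BlockMeta
	if err := json.Unmarshal(b, &m); err != nil {
		return nil, err
	}
	return &m, nil
}
```
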
##### mmap

Moving from millions of small files to a handful of larger ones allows us to keep all files open with little overhead. This unlocks the use of [`mmap(2)`][8], a system call that lets us transparently back a region of virtual memory with file contents. For simplicity, you might think of it like swap space, except that all our data is already on disk and no writes occur when data is swapped out of memory.

This means we can treat all the contents of our database as if they were in memory without occupying any physical RAM. Only when we access a given byte range of our database files does the operating system lazily load pages from disk. This puts the operating system in charge of all memory management related to our persisted data. Generally, it is better qualified to make such decisions, as it has a full view of the entire machine and all of its processes. Queried data can be cached in memory rather aggressively, yet under memory pressure the pages are evicted. If the machine has unused memory, Prometheus will happily cache the entire database, yet it will immediately return the memory once another process needs it.

Therefore, queries can no longer easily OOM our process by querying more persisted data than fits into RAM. The memory cache size becomes fully adaptive, and data is only loaded once a query actually needs it.

From my understanding, this is how a lot of today's databases work, and an ideal way to do it if the disk format allows it, unless one is confident about outsmarting the OS from within the process. We do very little work on our side and get a lot of capabilities for free from the outside.

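A minimal sketch of such a mapping in Go, using the raw `syscall` package on a Unix-like system; a real implementation would go through a portability wrapper and also take care of unmapping:

```
package mmapsketch

import (
	"os"
	"syscall"
)

// openChunkFile maps an entire chunk file into the process's address space.
// Nothing is read from disk at this point; the kernel pages bytes in lazily
// when the returned slice is accessed, and may evict them again under
// memory pressure.
func openChunkFile(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	// The mapping stays valid after the file descriptor is closed.
	defer f.Close()

	fi, err := f.Stat()
	if err != nil {
		return nil, err
	}
	// Read-only, shared mapping: the file contents directly back the memory.
	return syscall.Mmap(int(f.Fd()), 0, int(fi.Size()), syscall.PROT_READ, syscall.MAP_SHARED)
}
```
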
#### Compaction

The storage has to periodically "cut" a new block and write the previous, now completed, one to disk. Only after the block has been successfully persisted are the write ahead log files, which are used to restore the in-memory block, deleted.

We want to keep each block reasonably short (a typical configuration is two hours) to avoid accumulating too much data in memory. When querying multiple blocks, we have to merge their results into one overall result. This merge procedure obviously comes at a cost, and a week-long query should not have to merge more than 80 partial results.

To achieve both, we introduce compaction. Compaction describes the process of taking one or more blocks and writing them into a, potentially larger, block. It can also modify existing data along the way, e.g. dropping deleted data or restructuring sample chunks for improved query performance.

```

t0             t1            t2             t3             t4             now
 ┌────────────┐  ┌──────────┐  ┌───────────┐  ┌───────────┐  ┌───────────┐
 │ 1          │  │ 2        │  │ 3         │  │ 4         │  │ 5 mutable │    before
 └────────────┘  └──────────┘  └───────────┘  └───────────┘  └───────────┘
 ┌─────────────────────────────────────────┐  ┌───────────┐  ┌───────────┐
 │ 1              compacted                │  │ 4         │  │ 5 mutable │    after (option A)
 └─────────────────────────────────────────┘  └───────────┘  └───────────┘
 ┌──────────────────────────┐  ┌──────────────────────────┐  ┌───────────┐
 │ 1       compacted        │  │ 3      compacted         │  │ 5 mutable │    after (option B)
 └──────────────────────────┘  └──────────────────────────┘  └───────────┘
```

In this example we have the sequential blocks `[1, 2, 3, 4]`. Blocks 1, 2, and 3 can be compacted together and the new layout is `[1, 4]`. Alternatively, compact them in pairs into `[1, 3]`. All of the time series data still exists, but now in fewer blocks overall. This significantly reduces the merging cost at query time, as fewer partial results have to be merged.

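As a rough illustration of the idea, and not of Prometheus's actual heuristics, a compaction planner might greedily group adjacent blocks until a maximum combined time range is reached:

```
package compactsketch

// blockInfo is the bit of metadata the planner needs per block,
// as it would be read from each block's meta.json.
type blockInfo struct {
	dir     string
	minTime int64
	maxTime int64
}

// plan walks the blocks in time order and greedily groups adjacent ones,
// closing a group as soon as adding the next block would make the combined
// time range exceed maxRange. Groups of more than one block are candidates
// for compaction into a single larger block.
func plan(blocks []blockInfo, maxRange int64) [][]blockInfo {
	var (
		plans [][]blockInfo
		cur   []blockInfo
	)
	for _, b := range blocks {
		if len(cur) > 0 && b.maxTime-cur[0].minTime > maxRange {
			if len(cur) > 1 {
				plans = append(plans, cur)
			}
			cur = nil
		}
		cur = append(cur, b)
	}
	if len(cur) > 1 {
		plans = append(plans, cur)
	}
	return plans
}
```
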
#### Retention

We saw that deleting old data is a slow process in the V2 storage and takes a toll on CPU, memory, and disk. How can we drop old data in our block-based design? Quite simply, by deleting the directory of any block that has no data inside the configured retention window. In the example below, block 1 can safely be deleted, whereas block 2 has to stick around until it lies entirely behind the boundary.

```
                      |
 ┌────────────┐  ┌────┼─────┐  ┌───────────┐  ┌───────────┐  ┌───────────┐
 │ 1          │  │ 2  |     │  │ 3         │  │ 4         │  │ 5         │   . . .
 └────────────┘  └────┼─────┘  └───────────┘  └───────────┘  └───────────┘
                      |
                      |
             retention boundary
```

The older the data gets, the larger the blocks may become, since we keep compacting previously compacted blocks. An upper limit has to be applied so blocks do not grow to span the entire database and thus diminish the original benefits of our design.

Conveniently, this also limits the total disk overhead of blocks that are partially inside and partially outside of the retention window, i.e. block 2 in the example above. When setting the maximum block size to 10% of the total retention window, our total overhead of keeping block 2 around is also bounded by 10%.

Summed up, retention and deletion went from being very expensive to practically free.

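In code, retention then boils down to little more than the following sketch, assuming the `b-*` directory layout from above and some helper that reads a block's maximum timestamp from its `meta.json`:

```
package retentionsketch

import (
	"os"
	"path/filepath"
	"time"
)

// deleteExpired removes every block directory whose newest sample lies
// entirely behind the retention boundary. maxTimeOf stands in for reading
// the block's maximum timestamp from its meta.json.
func deleteExpired(dataDir string, maxTimeOf func(dir string) (int64, error), retention time.Duration) error {
	boundary := time.Now().Add(-retention).UnixMilli()

	dirs, err := filepath.Glob(filepath.Join(dataDir, "b-*"))
	if err != nil {
		return err
	}
	for _, dir := range dirs {
		maxt, err := maxTimeOf(dir)
		if err != nil {
			return err
		}
		// Blocks straddling the boundary (like block 2 above) are kept;
		// only blocks fully outside the retention window are dropped.
		if maxt < boundary {
			if err := os.RemoveAll(dir); err != nil {
				return err
			}
		}
	}
	return nil
}
```
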
> If you've come this far and have some background in databases, you might be asking by now: is any of this new? Not really, and it can probably be done even better.

> The pattern of batching data up in memory, tracking it in a write ahead log, and periodically flushing it to disk is ubiquitous today.

> The benefits we have seen apply almost universally, regardless of the data's domain. Prominent open source examples following this approach are LevelDB, Cassandra, InfluxDB, and HBase. The key takeaway is to avoid reinventing an inferior wheel, to research proven methods, and to apply them with the right twist.

> There is still plenty of room to add your own magic dust.

### The Index

The initial idea that sparked the search for a storage improvement was addressing the problem of series churn. The block-based layout reduces the total number of series that have to be considered for a query. So assuming our index lookup has a complexity of `O(n^2)`, we managed to reduce n quite a bit and now have an improved complexity of... `O(n^2)`. Hmm, wait... damn.

A quick flashback to "Algorithms 101" reminds us that, in theory, this bought us nothing. If things were bad before, they are just as bad now. Theory can be depressing.

In practice, most of our queries are already answered significantly faster. Yet, queries spanning the full time range remain slow, even if they only need to find a handful of series. My original idea, dating back to well before all this work started, was a solution to exactly this problem: we need a more capable [inverted index][9]. An inverted index provides a fast lookup of data items based on a subset of their contents. Simply put, I can look up all series that have the label `app="nginx"` without having to walk through every single file and check whether it contains that label.

For that, each series is assigned a unique ID by which it can be retrieved in constant time, i.e. O(1). In this case the ID is our forward index.

> Example: if the series with IDs 10, 29, and 9 contain the label `app="nginx"`, the inverted index for the label "nginx" is the simple list `[10, 29, 9]`, which can be used to quickly retrieve all series containing the label. Even with 20 billion further series, it would not affect the speed of this lookup.

In short, if n is our total number of series and m is the result size of a given query, the complexity of a query using the index is now O(m). Queries scaling with the amount of data they retrieve (m) instead of the body of data being searched (n) is a great property, as m is generally a lot smaller.

For brevity, let's assume we can retrieve the inverted index list itself in constant time.

Actually, this is almost exactly the kind of inverted index V2 already has, and a minimum requirement to serve performant queries across millions of series. The keen observer will have noticed that, in the worst case, a label is present in all series and thus m is, again, in O(n). This is expected and perfectly fine: if you query all of the data, it naturally takes longer. Things become problematic once we get involved with more complex queries.

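A sketch of both index directions as plain in-memory maps; the real index is an on-disk format, but the structure is the same in spirit:

```
package indexsketch

// label is a single name/value pair, e.g. {app, nginx}.
type label struct {
	name, value string
}

// memIndex keeps a forward index (series ID -> its label set) and an
// inverted index (label -> IDs of all series carrying that label).
type memIndex struct {
	lastID   uint64
	forward  map[uint64][]label
	inverted map[label][]uint64 // postings lists
}

func newMemIndex() *memIndex {
	return &memIndex{
		forward:  map[uint64][]label{},
		inverted: map[label][]uint64{},
	}
}

// add registers a new series and returns its ID. IDs are handed out in
// increasing order, which conveniently keeps every postings list sorted.
func (ix *memIndex) add(lset []label) uint64 {
	ix.lastID++
	id := ix.lastID
	ix.forward[id] = lset
	for _, l := range lset {
		ix.inverted[l] = append(ix.inverted[l], id)
	}
	return id
}

// postings returns the series IDs carrying the given label pair, e.g.
// postings("app", "nginx") for the example above.
func (ix *memIndex) postings(name, value string) []uint64 {
	return ix.inverted[label{name, value}]
}
```
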
#### Combining Labels

Labels associated with millions of series are common. Suppose a horizontally scaling "foo" microservice with hundreds of instances and thousands of series each. Every single one of those series carries the label `app="foo"`. Of course, one generally won't query all of them but will restrict the query through further labels, e.g. I want to know how many requests my service instances received and query `__name__="requests_total" AND app="foo"`.

To find all series satisfying both label selectors, we take the inverted index list for each and intersect them. The resulting set is typically orders of magnitude smaller than either input list. As each input list has a worst-case size of O(n), the brute force solution of nested iteration over both lists has a runtime of O(n^2). The same cost applies to other set operations, such as the union (`app="foo" OR app="bar"`). When adding further label selectors to the query, the exponent increases for each one: O(n^3), O(n^4), O(n^5), ... O(n^k). A lot of tricks can be played to optimize the effective runtime by changing the execution order. The more sophisticated they are, the more knowledge about the shape of the data and the relationships between labels is required. This introduces a lot of complexity, yet does not decrease the algorithmic worst-case runtime.

This is essentially the approach of the V2 storage, and luckily a seemingly slight modification is enough to gain significant improvements. What happens if we assume that the IDs in our inverted indices are sorted?

Suppose this example of lists for our initial query:

```
__name__="requests_total"   ->   [ 9999, 1000, 1001, 2000000, 2000001, 2000002, 2000003 ]
     app="foo"              ->   [ 1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002 ]

             intersection   =>   [ 1000, 1001 ]
```

The intersection is fairly small. We can find it by setting a cursor at the beginning of each list and always advancing the one pointing at the smaller number. When both numbers are equal, we add the number to our result and advance both cursors. Overall, we scan both lists in this zig-zag pattern, and the total cost is O(2n) = O(n), because we only ever move forward in either list.

The procedure for two or more lists and for different set operations works similarly. So k set operations merely modify the factor, O(k*n), instead of the exponent, O(n^k), of our worst-case lookup runtime. A great improvement.

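That zig-zag scan translates almost directly into code. A sketch for two lists:

```
package indexsketch

// intersect returns the IDs present in both sorted postings lists in O(n).
// It advances whichever cursor points at the smaller ID; on a match the ID
// is kept and both cursors move on.
func intersect(a, b []uint64) []uint64 {
	var out []uint64
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i] < b[j]:
			i++
		case a[i] > b[j]:
			j++
		default: // equal IDs: part of the intersection
			out = append(out, a[i])
			i++
			j++
		}
	}
	return out
}
```

Intersecting the postings of k label selectors is then just repeated application of `intersect`, which is where the O(k*n) factor comes from; a union works the same way, except that non-matching IDs are kept as well.
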
What I described here is a simplified version of the canonical search index used by practically any [full text search engine][10] out there. Every series descriptor is treated as a short "document", and every label (name plus fixed value) as a "word" inside of it. We can ignore a lot of the additional data typically found in search engine indices, such as word position and frequency data.

Seemingly endless research exists on approaches that improve the practical runtime, often making some assumptions about the input data. Unsurprisingly, there are also plenty of techniques to compress inverted indices, each with its own benefits and drawbacks. As our "documents" are tiny and the "words" repeat massively across all series, compression becomes almost irrelevant. For example, a real-world dataset of about 4.4 million series with roughly 12 labels each has fewer than 5,000 distinct labels. For our initial storage version we stick to the basic approach without compression, adding just a few simple tweaks to skip over large ranges of non-intersecting IDs.

While keeping the IDs sorted sounds simple, it is not always a trivial invariant to maintain. For instance, the V2 storage assigns hashes to new series as their IDs, so we cannot efficiently build sorted inverted indices. Another daunting task is modifying the indices on disk as data gets deleted or updated. Typically, the easiest approach is to simply recompute and rewrite them, but to do so while keeping the database queryable and consistent. The V3 storage does exactly this by having a separate immutable index per block that is only modified via rewrite on compaction. Only the index of the mutable block, which is held entirely in memory, needs to be updated.

### Benchmarking

I started with an initial benchmark of the storage based on about 4.4 million series descriptors extracted from a real-world dataset, and generated synthetic data points to feed into those series. This iteration only exercised the stand-alone storage; it was crucial for quickly identifying performance bottlenecks and triggering deadlocks that only show up under highly concurrent load.

After the conceptual implementation was done, the benchmark could sustain a write throughput of about 20 million data points per second on my Macbook Pro, all while a dozen Chrome tabs and Slack were running. So, as great as this sounded, it also indicated there was no further point in pushing this benchmark (or in running it in a less random environment, for that matter). After all, it is synthetic data and therefore not worth much beyond a good first impression. Starting out about 20x above the initial design target, it was time to embed this into an actual Prometheus server, with all the more realistic overheads and scenarios.

We actually had no reproducible benchmarking setup for Prometheus, in particular none that allowed A/B testing of different versions. Concerning in hindsight, but [now we have one][11]!

The tool lets us declaratively define a benchmarking scenario, which is then deployed to a Kubernetes cluster on AWS. While that is not the best environment for all-out benchmarking, it certainly reflects our user base better than dedicated bare metal servers with 64 cores and 128GB of memory. We deploy two Prometheus 1.5.2 servers (V2 storage) and two Prometheus servers from the 2.0 development branch (V3 storage). Each Prometheus server runs on a dedicated machine with an SSD. Horizontally scaled applications exposing typical microservice metrics are deployed to worker nodes. Additionally, the Kubernetes cluster itself and its nodes are monitored. The whole setup is supervised by yet another Meta-Prometheus, which watches each Prometheus server for health and performance. To simulate series churn, the microservices are periodically scaled up and down to remove old pods and spawn new ones, generating new series. Query load is simulated by a selection of typical queries, run against one server of each Prometheus version.

Overall, the scaling and query load as well as the sampling frequency significantly exceed production deployments of Prometheus. For instance, we swap out 60% of the microservice instances every 15 minutes to produce series churn. In a modern infrastructure this would happen only about 1-5 times per day. This ensures that our V3 design is capable of handling the workloads of the years ahead. As a result, the performance differences between Prometheus 1.5.2 and 2.0 are larger than they would be in a more moderate environment.

In total, we collect about 110,000 samples per second from 850 targets exposing half a million series at a time.

After leaving this setup running for a while, we can take a look at the numbers. We evaluate several metrics over the state both versions reached after 12 hours.

> Be aware of the slightly truncated Y axis in the screenshots from the Prometheus graph UI.



> Heap memory usage in GB

Memory resources are the most troubling for users, as they are relatively unpredictable and may cause the process to crash.

Obviously, the queried servers consume more memory, which can largely be attributed to the overhead of the query engine and will be a subject of future optimizations. Overall, Prometheus 2.0's memory consumption is reduced by 3-4x. After about six hours, there is a clear spike in Prometheus 1.5, which aligns with our retention boundary of six hours. As deletions are quite costly, resource consumption ramps up. This becomes visible in several of the graphs below as well.



> CPU usage in cores/second

A similar pattern shows for CPU usage, but the difference between queried and non-queried servers is more significant. Averaging at about 0.5 cores/second while ingesting roughly 110,000 samples per second, our new storage's CPU consumption is negligible compared to the cycles spent on query evaluation.



> Disk writes in MB/second

The graph of utilized disk bandwidth is by far the most striking one. It clearly shows why Prometheus 1.5 so easily wears out SSDs. We see an initial ramp-up as soon as the first chunks are persisted into the series files, and a second one once deletion starts rewriting them. Surprisingly, the queried and non-queried servers show very different utilization.

Prometheus 2.0, on the other hand, merely writes about a single megabyte per second to its write ahead log. Writes only spike periodically when blocks are compacted to disk. Overall savings: a staggering 97-99%.



> Disk size in GB

Closely related to disk writes is the total amount of occupied disk space. As we use almost the same compression algorithm for samples, disk usage should be about the same. In a considerably more stable setup that would largely hold, but since we are dealing with high series churn, the per-series overhead also matters.

As we can see, Prometheus 1.5 uses up storage space a lot faster before both versions reach a steady state as retention kicks in. Prometheus 2.0 seems to have a significantly lower per-series overhead. We can nicely see how the write ahead log linearly fills up the space and drops instantly once it is compacted. The fact that the curves of the two Prometheus 2.0 servers do not match exactly will need further investigation.

This all looks very promising. The remaining important piece is query latency. The new index should have improved our lookup complexity. What has not substantially changed is the processing of the data, e.g. in `rate()` functions or aggregations; that is the job of the query engine.



> 99th percentile query latency in seconds

The data fully meets expectations. On Prometheus 1.5, query latency increases over time as more series are stored; it only levels off once retention starts and old series are deleted. In contrast, Prometheus 2.0 stays in place right from the beginning.

Some care has to be taken with how this data was collected. The queries fired against the servers were chosen by estimating a good mix of range and instant queries, doing heavier and lighter computations, and touching few or many files. It does not necessarily represent a real-world distribution of queries. Nor is it representative of query performance against cold data, for which we can assume all sample data is hot in memory.

Nonetheless, we can say with reasonable confidence that the overall query performance became very resilient to series churn and improved by up to 4x in our high-stress benchmarking scenario. In a more static environment, where most of the query time is spent in the query engine itself, the improvement would be notably lower.



> Ingested samples per second

Finally, a quick look at the ingestion rates of the different Prometheus servers. We can see that both servers with the V3 storage have the same ingestion rate. After a couple of hours it becomes unstable, which is caused by various nodes of the benchmarking cluster becoming unresponsive due to high load, not by the Prometheus instances. (The fact that both 2.0 curves match exactly before that point will hopefully be convincing enough.)

Both Prometheus 1.5.2 servers suffer from a significantly lower ingestion rate, even though more CPU and memory resources are available. The high series churn makes it impossible to collect more data.

So what is the absolute maximum number of samples per second you could ingest now?

I don't know, and I deliberately don't care.

There are a lot of factors that shape the data flowing into Prometheus, and no single number is capable of capturing quality. Maximum ingestion rate has historically been a metric leading to skewed benchmarks while neglecting more important properties such as query performance and resilience to series churn. The rough assumption that resource usage increases linearly was validated by some basic testing, and it is easy to extrapolate what could be possible.

Our benchmarking setup simulates a highly dynamic environment that stresses Prometheus more than the real world does today. The results show that we went way above the initial design goal, even while running on non-optimal cloud servers.

> Note: at the time of writing, Prometheus 1.6 is in development. It will allow configuring the maximum memory usage more reliably and may notably reduce overall consumption in favor of slightly higher CPU utilization. I did not repeat the tests against it, as the overall results still hold, especially in the face of high series churn.

### Conclusion

Prometheus sets out to handle high cardinality of series and the throughput of individual samples. It remains a challenging task, but the new storage seems to position us well for the hyper-scale, hyper-convergent, GIFEE infrastructure of the future. Well, it seems to work pretty well.

A first [alpha release of Prometheus 2.0][12] with the new V3 storage is available for testing. Expect crashes, deadlocks, and other bugs at this early stage.

The code for the storage itself can be found in [a separate project][13]. It is surprisingly agnostic to Prometheus itself and could be useful for a much wider range of applications looking for efficient local storage of time series data.

> A lot of people deserve thanks for their contributions to this work. Here they go in no particular order:

> The foundational work by Bjoern Rabenstein and Julius Volz on the V2 storage engine, and their feedback on V3, laid the ground for this new generation.

> Wilhelm Bierbaum's ongoing advice and insight contributed significantly to the new design. Brian Brazil's continuous feedback ensured that we ended up with a semantically sound approach. Insightful discussions with Peter Bourgon validated the design and shaped this write-up.

> And not to forget my whole team at CoreOS and the company itself for supporting and sponsoring this work. Thanks to everyone who listened to my ramblings about SSDs, floating point numbers, and serialization formats again and again.

--------------------------------------------------------------------------------

via: https://fabxc.org/blog/2017-04-10-writing-a-tsdb/

Author: [Fabian Reinartz][a]
Translator: [LuuMing](https://github.com/LuuMing)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://twitter.com/fabxc
[1]: https://en.wikipedia.org/wiki/Inode
[2]: https://prometheus.io/
[3]: https://kubernetes.io/
[4]: https://en.wikipedia.org/wiki/Write_amplification
[5]: http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/
[6]: https://prometheus.io/docs/practices/rules/
[7]: http://www.vldb.org/pvldb/vol8/p1816-teller.pdf
[8]: https://en.wikipedia.org/wiki/Mmap
[9]: https://en.wikipedia.org/wiki/Inverted_index
[10]: https://en.wikipedia.org/wiki/Search_engine_indexing#Inverted_indices
[11]: https://github.com/prometheus/prombench
[12]: https://prometheus.io/blog/2017/04/10/promehteus-20-sneak-peak/
[13]: https://github.com/prometheus/tsdb