[#]: subject: "Get started with Parseable, an open source log storage and observability platform"
[#]: via: "https://opensource.com/article/22/11/parseable-observability-platform"
[#]: author: "Nitish Tiwari https://opensource.com/users/tiwarinitish86"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Get started with Parseable, an open source log storage and observability platform
======

Written in Rust, Parseable leverages advances in data compression, storage, and networking to deliver a simple, efficient logging platform that just works.

Log data is one of the fastest-growing segments of data storage. It's also one of the most complicated spaces, with many products and solutions whose use cases overlap and whose marketing can be confusing.

This article looks at Parseable, a log storage and observability platform. Parseable is geared towards a better user experience, with an interface that's easy to deploy and use and a simple, cloud-native architecture. I'll also show how to set up Parseable with FluentBit to store logs.
### What is Parseable?

[Parseable][1] is a free and open source log storage and observability platform. Written in Rust, it leverages advances in data compression, storage, and networking to deliver a simple, efficient logging platform that just works.

Some of the core concepts behind Parseable are:
#### Index-free

Traditionally, text search engines like Elastic have doubled as log storage platforms. This makes sense, because log data must be searchable to be really useful. But indexing comes at a high cost: it's CPU intensive and slows down ingestion, and the index data these systems generate is roughly the same size as the raw log data, which doubles the storage cost and increases complexity. Parseable changes this. With a columnar data format (Parquet), it can compress and query log data efficiently without indexing it.
#### Ownership of both data and content

With Parquet as the storage format and files stored in standard object storage buckets, users own their log data and have complete access to the actual content. This means users can easily point analysis tools like Spark, Presto, or TensorFlow at the data to extract more value from it. This is an extremely powerful feature, opening up new avenues of data analysis.
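
As a quick, hedged illustration, once a Parquet file has been copied out of the bucket (for example, with the AWS CLI or the MinIO client), any Parquet-aware tool can query it directly. The sketch below uses the DuckDB CLI; the file name is hypothetical, and the `Status` field comes from the example schema in the next section:

```
$ # Hypothetical file name; copy a Parquet file out of your bucket first.
$ echo "SELECT Status, count(*) AS events FROM 'logs.parquet' GROUP BY Status;" | duckdb
```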
#### Fluid schema

Logs are generally semi-structured by nature, and they're ever-evolving. For example, a developer may start with a log schema like this:

```
{
    "Status": "Ready",
    "Application": "Example"
}
```

But as more information is collected, the log schema may evolve to:

```
{
    "Status": "Ready",
    "Application": {
        "UserID": "3187F492-8449-4486-A2A0-015AE34F1D09",
        "Name": "Example"
    }
}
```

Engineering and SRE teams regularly face schema-related issues. Parseable solves this with a fluid schema approach that lets users change the schema on the fly.
#### Simple ingestion

The ingestion mechanisms of current logging platforms are quite convoluted, with several competing protocols and connectors. Parseable aims to make log ingestion as easy as possible: you send logs to Parseable with plain HTTP POST calls. No complicated SDKs are required.
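
As a hedged sketch, assuming a local Parseable instance on port 8000 with the demo credentials used later in this article (the stream name and payload here are illustrative; see the Parseable documentation for the exact API of your version), ingestion can be as simple as:

```
$ # Create a log stream (it may need to exist before you send events to it).
$ curl -u parseable:parseable -X PUT http://localhost:8000/api/v1/logstream/demo

$ # Send a JSON event to the stream.
$ curl -u parseable:parseable -X POST http://localhost:8000/api/v1/logstream/demo \
    -H 'Content-Type: application/json' \
    -d '[{"Status": "Ready", "Application": "Example"}]'
```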
What if you want to use a logging agent like FluentBit, Vector, Logstash, or others? Almost all the major log collectors support HTTP output, so Parseable is already compatible with your favorite log collection agent.
### Get started

You can use a Docker image to try out Parseable. The commands below run Parseable in demo mode, backed by a publicly accessible object storage bucket.
```
$ cat << EOF > parseable-env
P_S3_URL=https://minio.parseable.io:9000
P_S3_ACCESS_KEY=minioadmin
P_S3_SECRET_KEY=minioadmin
P_S3_REGION=us-east-1
P_S3_BUCKET=parseable
P_LOCAL_STORAGE=/data
P_USERNAME=parseable
P_PASSWORD=parseable
EOF

$ mkdir -p /tmp/data

$ docker run \
    -p 8000:8000 \
    --env-file parseable-env \
    -v /tmp/data:/data \
    parseable/parseable:latest
```
Log in to the Parseable UI using the credentials passed here (that's `parseable` and `parseable`). The demo already contains some data, because this instance points to a publicly accessible bucket.

Make sure to change the bucket and credentials to your own object storage instance before sending any data to Parseable.
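
For example, here is a sketch of the same environment file pointed at your own S3-compatible object store; every value below is a placeholder to replace with your own details:

```
$ # Placeholder values only; substitute your own endpoint, bucket, and credentials.
$ cat << EOF > parseable-env
P_S3_URL=https://your-object-store.example.com:9000
P_S3_ACCESS_KEY=<your-access-key>
P_S3_SECRET_KEY=<your-secret-key>
P_S3_REGION=<your-region>
P_S3_BUCKET=<your-bucket>
P_LOCAL_STORAGE=/data
P_USERNAME=<admin-username>
P_PASSWORD=<admin-password>
EOF
```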
Refer to the [documentation][2] to understand how Parseable works and how to ingest logs.
### Set up FluentBit to send logs to Parseable

You can use a Docker Compose file to configure both Parseable and FluentBit, making the whole setup easy to bring up and tear down as needed.

First, save this file as `fluent-bit.conf` in a directory. It is the configuration FluentBit uses to send data to Parseable.
```
[SERVICE]
    Flush        5
    Daemon       Off
    Log_Level    debug

[INPUT]
    Name    dummy
    Tag     dummy

[OUTPUT]
    Name              http
    Match             *
    Host              parseable
    http_User         parseable
    http_Passwd       parseable
    format            json
    Port              8000
    Header            X-P-META-meta1 value1
    Header            X-P-TAG-tag1 value1
    URI               /api/v1/logstream/fluentbit1
    Json_date_key     timestamp
    Json_date_format  iso8601
```
Now save the following file as `docker-compose.yaml` in the same directory as above:
```
version: "3.7"
services:
  fluent-bit:
    image: fluent/fluent-bit
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
    depends_on:
      - parseable
  parseable:
    image: parseable/parseable
    ports:
      - "8000:8000"
    environment:
      - P_S3_URL=https://minio.parseable.io:9000
      - P_S3_ACCESS_KEY=minioadmin
      - P_S3_SECRET_KEY=minioadmin
      - P_S3_REGION=us-east-1
      - P_S3_BUCKET=parseable
      - P_LOCAL_STORAGE=/tmp/data
      - P_USERNAME=parseable
      - P_PASSWORD=parseable
```
The `docker-compose.yaml` file refers to `fluent-bit.conf` and passes it to the FluentBit container as its configuration file.
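
With both files saved, you can bring the stack up with Docker Compose. The commands below assume the newer `docker compose` plugin; use `docker-compose` instead if you have the standalone binary:

```
$ # Start Parseable and FluentBit in the background.
$ docker compose up -d

$ # Optionally, follow the FluentBit container's output to watch it ship logs.
$ docker compose logs -f fluent-bit
```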
Parseable is deployed with its default configuration (as in the Docker setup above). You can observe the data the FluentBit container sends to Parseable in the Parseable Console running at **[http://localhost:8000][3]**.
### Wrap up

In this article, you've taken a first look at Parseable, an open source log storage and analysis platform built in Rust. A single Docker command gets you started with Parseable, so you can experience the UI and set up FluentBit as a data source. If you think this looks too easy, then it's probably time to try Parseable!

--------------------------------------------------------------------------------
via: https://opensource.com/article/22/11/parseable-observability-platform

Author: [Nitish Tiwari][a]
Topic selection: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/tiwarinitish86
[b]: https://github.com/lkxed
[1]: https://github.com/parseablehq/parseable
[2]: https://www.parseable.io/docs/introduction
[3]: http://localhost:8000