[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to process real-time data with Apache)
[#]: via: (https://opensource.com/article/20/2/real-time-data-processing)
[#]: author: (Simon Crosby https://opensource.com/users/simon-crosby)

How to process real-time data with Apache
======

Open source is leading the way with a rich canvas of projects for processing real-time events.

![Alarm clocks with different time][1]

In the "always-on" future with billions of connected devices, storing raw data for analysis later will not be an option because users want accurate responses in real time. Prediction of failures and other context-sensitive conditions require data to be processed in real time—certainly before it hits a database.
|
||
|
|
||
|
It's tempting to simply say "the cloud will scale" to meet demands to process streaming data in real time, but some simple examples show that it can never meet the need for real-time responsiveness to boundless data streams. In these situations—from mobile devices to IoT—a new paradigm is needed. Whereas cloud computing relies on a "store then analyze" big data approach, there is a critical need for software frameworks that are comfortable instantly processing endless, noisy, and voluminous streams of data as they arrive to permit a real-time response, prediction, or insight.

For example, the city of Palo Alto, Calif., produces more streaming data from its traffic infrastructure per day than the Twitter Firehose. That's a lot of data. Predicting city traffic for consumers like Uber, Lyft, and FedEx requires real-time analysis, learning, and prediction. Event processing in the cloud leads to an inescapable latency of about half a second per event.

We need a simple yet powerful programming paradigm that lets applications process boundless data streams on the fly in these and similar situations:

* Data volumes are huge, or moving raw data is expensive.
* Data is generated by widely distributed assets (such as mobile devices).
* Data is of ephemeral value, and analysis can't wait.
* It is critical to always have the latest insight, and extrapolation won't do.

### Publish and subscribe
A key architectural pattern in the domain of event-driven systems is the concept of pub/sub, or publish/subscribe, messaging. This is an asynchronous communication method in which messages are delivered from _publishers_ (anything producing data) to _subscribers_ (applications that process data). Pub/sub decouples arbitrary numbers of senders from an unknown set of consumers.

In pub/sub, sources _publish_ events for a _topic_ to a _broker_ that stores them in the order in which they are received. An application _subscribes_ to one or more _topics_, and the _broker_ forwards matching events. Apache Kafka, Apache Pulsar, and CNCF NATS are pub/sub systems. Cloud services for pub/sub include Google Pub/Sub, AWS Kinesis, Azure Service Bus, Confluent Cloud, and others.

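As a concrete sketch of the pattern, here is a minimal publisher and subscriber using Apache Kafka's Python client (the `kafka-python` package); the broker address, topic name, and payload are illustrative assumptions:

```python
# A minimal pub/sub sketch using the kafka-python package.
# Broker address, topic name, and payload are assumptions for illustration.
from kafka import KafkaProducer, KafkaConsumer

# Publisher: a data source sends events for a topic to the broker.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("traffic-events", b'{"intersection": 17, "vehicles": 42}')
producer.flush()

# Subscriber: the broker forwards events for the subscribed topic,
# in the order in which it received them.
consumer = KafkaConsumer("traffic-events", bootstrap_servers="localhost:9092")
for message in consumer:
    print(message.value)
```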
Pub/sub systems do not _run_ subscriber applications—they simply _deliver_ data to topic subscribers.

Streaming data often contains events that are updates to the state of applications or infrastructure. When choosing an architecture to process data, the role of a data-distribution system such as a pub/sub framework is limited: the "how" of the consumer application lies beyond its scope, which leaves an enormous amount of complexity for the developer to manage. So-called stream processors are a special kind of subscriber that analyzes data on the fly and delivers results back to the same broker.

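As a sketch of that idea, reusing the hypothetical `traffic-events` topic from above, a stream processor subscribes, analyzes each event as it arrives, and publishes a derived result back to the same broker:

```python
# A sketch of a stream processor: a special kind of subscriber that
# analyzes events on the fly and delivers results back to the broker.
import json

from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer("traffic-events", bootstrap_servers="localhost:9092")
producer = KafkaProducer(bootstrap_servers="localhost:9092")

for message in consumer:
    event = json.loads(message.value)
    # The application logic lives here, outside the pub/sub system itself.
    if event["vehicles"] > 30:
        alert = {"intersection": event["intersection"], "status": "congested"}
        producer.send("traffic-alerts", json.dumps(alert).encode("utf-8"))
```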
### Apache Spark
[Apache Spark][2] is a unified analytics engine for large-scale data processing. Often, Apache Spark Streaming is used as a stream processor, for example, to feed machine learning models with new data. Spark Streaming breaks data into mini-batches that are each independently analyzed by a Spark model or some other system. The stream of events is grouped into mini-batches for analysis (a sketch of the mini-batch model follows the list below), but the stream processor itself must be elastic:

* The stream processor must be capable of scaling with the data rate, even across servers and clouds, and it must also balance load across instances, ensuring resilience and other application-layer needs.
* It must be able to analyze data from sources that report at widely different rates, meaning it must be stateful—or store state in a database. This latter approach is often used when Spark Streaming serves as the stream processor, and it can cause performance problems when ultra-low-latency responses are needed.

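Here is a minimal sketch of the mini-batch model using PySpark's DStream API, assuming a local Spark installation and a socket source on port 9999:

```python
# A sketch of Spark Streaming's mini-batch model: the boundless stream
# is grouped into one-second batches, each analyzed independently.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "MiniBatchSketch")
ssc = StreamingContext(sc, batchDuration=1)  # one mini-batch per second

# Each line arriving on the socket becomes a record in the current batch.
lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()  # results are emitted once per mini-batch

ssc.start()
ssc.awaitTermination()
```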
A related project, [Apache Samza][3], offers a way to process real-time event streams and to scale elastically using [Hadoop YARN][4] or [Apache Mesos][5] to manage compute resources.

### Solving the problem of scaling data
It's important to note that even Samza cannot entirely alleviate data processing demands for the application developer. Scaling data rates mean that event-processing tasks need to be load-balanced across many instances, and the only way to share the resulting application-layer state between instances is to use a database. However, the moment state coordination between tasks of an application devolves to a database, there is an inevitable knock-on effect upon performance. Moreover, the choice of database is crucial. As the system scales, cluster management for the database becomes the next potential bottleneck.

This can be solved with alternative solutions that are stateful, elastic, and can be used in place of a stream processor. At the application level (within each container or instance), these solutions build a stateful model of concurrent, interlinked "web agents" on the fly from streaming updates. Agents are concurrent "nano-services" that consume raw data for a single source and maintain their state. Agents interlink to share state based on real-world relationships between sources found in the data, such as containment and proximity. Agents thus form a graph of concurrent services that can analyze their own state and the states of agents to which they are linked. Each agent provides a nano-service for a single data source that converts from raw data to state and analyzes, learns, and predicts from its own changes and those of its linked subgraph.

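As a purely hypothetical Python sketch of the agent idea (none of these class or method names come from a real library), an agent holds its state in memory, links to related agents, and analyzes its linked subgraph on every update:

```python
# A hypothetical sketch of a stateful "web agent": it consumes raw data
# for one source, keeps its state in memory, and links to related agents.
class Agent:
    def __init__(self, source_id):
        self.source_id = source_id
        self.state = {}    # in-memory state; no database round-trip
        self.links = []    # agents related by containment, proximity, etc.

    def link(self, other):
        self.links.append(other)

    def on_event(self, event):
        self.state.update(event)   # convert raw data to state
        return self.analyze()

    def analyze(self):
        # Compute over this agent's own state plus its linked subgraph.
        neighborhood = [self.state] + [agent.state for agent in self.links]
        return {"sources": len(neighborhood),
                "congested": any(s.get("vehicles", 0) > 30 for s in neighborhood)}
```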
These solutions simplify application architecture by allowing agents—digital twins of real-world sources—to be widely distributed, even while maintaining the distributed graph that interlinks them at the application layer. This is because the links are URLs that map to the current runtime execution instance of the solution and the agent itself. In this way, the application seamlessly scales across instances without DevOps concerns. Agents consume data and maintain state. They also compute over their own state and that of other agents. Because agents are stateful, there is no need for a database, and insights are computed at memory speed.

### Reading world data with open source
There is a sea change afoot in the way we view data: Instead of the database being the system of record, the real world is, and digital twins of real-world things can continuously stream their state. Fortunately, the open source community is leading the way with a rich canvas of projects for processing real-time events. From pub/sub, where the most active communities are Apache Kafka, Pulsar, and CNCF NATS, to the analytical frameworks that continually process streamed data, including Apache Spark, [Flink][6], [Beam][7], Samza, and Apache-licensed [SwimOS][8] and [Hazelcast][9], developers have the widest choice of software systems. Notably, there is no comparably rich set of proprietary software frameworks. Developers have spoken, and the future of software is open source.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/2/real-time-data-processing

作者:[Simon Crosby][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/simon-crosby
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://spark.apache.org/
[3]: https://samza.apache.org/
[4]: https://hadoop.apache.org/
[5]: http://mesos.apache.org/
[6]: https://flink.apache.org/
[7]: https://beam.apache.org
[8]: https://github.com/swimos/swim
[9]: https://hazelcast.com/
|