Merge pull request #19516 from leommxj/master

20190226 Reducing security risks with centralized logging translated
This commit is contained in:
Xingyu.Wang 2020-09-07 13:59:51 +08:00 committed by GitHub
commit aed835b2d6
2 changed files with 82 additions and 82 deletions


@ -1,82 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (leommxj)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Reducing security risks with centralized logging)
[#]: via: (https://opensource.com/article/19/2/reducing-security-risks-centralized-logging)
[#]: author: (Hannah Suarez https://opensource.com/users/hcs)
Reducing security risks with centralized logging
======
Centralizing logs and structuring log data for processing can mitigate risks related to insufficient logging.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
Logging and log analysis are essential to securing infrastructure, particularly when we consider common vulnerabilities. This article, based on my lightning talk [Let's use centralized log collection to make incident response teams happy][1] at FOSDEM'19, aims to raise awareness about the security concerns around insufficient logging, offer a way to avoid the risk, and advocate for more secure practices _(disclaimer: I work for NXLog)._
### Why log collection and why centralized logging?
Logging is, to be specific, an append-only sequence of records written to disk. In practice, logs help you investigate an infrastructure issue as you try to find a cause for misbehavior. A challenge comes up when you have heterogeneous systems with their own standards and formats, and you want to be able to handle and process these in a dependable way. This often comes at the cost of metadata. Centralized logging solutions require commonality, and that commonality often removes the rich metadata many open source logging tools provide.
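As a minimal illustration of that "append-only sequence of records written to disk" (and of how keeping records structured preserves their metadata), here is a short Python sketch; the file name and field names are made up for the example and not tied to any particular tool.
```
import json
import time

def append_record(path, **fields):
    """Append one structured record (JSON Lines) to a log file.

    Opening the file in append mode means existing records are never
    rewritten; new lines are only added, an append-only sequence on disk.
    """
    record = {"timestamp": time.time(), **fields}
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

# Illustrative call with documentation-range values.
append_record("events.log", event="login_failed", user="amy", source_ip="203.0.113.7")
```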
### The security risk of insufficient logging and monitoring
The Open Web Application Security Project ([OWASP][2]) is a nonprofit organization that contributes to the industry with incredible projects (including many [tools][3] focusing on software security). The organization regularly reports on the riskiest security challenges for application developers and maintainers. In its most recent report on the [top 10 most critical web application security risks][4], OWASP added Insufficient Logging and Monitoring to its list. OWASP warns of risks due to insufficient logging, detection, monitoring, and active response in the following types of scenarios.
* Important auditable events, such as logins, failed logins, and high-value transactions, are not logged.
* Warnings and errors generate no, inadequate, or unclear log messages.
* Logs are only being stored locally.
* The application is unable to detect, escalate, or alert for active attacks in real time or near real time.
These instances can be mitigated by centralizing logs (i.e., not storing logs locally) and structuring log data for processing (i.e., in alerting dashboards and security suites).
For example, imagine a DNS query leads to a malicious site named **hacked.badsite.net**. With DNS monitoring, administrators monitor and proactively analyze DNS queries and responses. The efficiency of DNS monitoring relies both on sufficient logging and log collection to catch potential issues and on structuring the resulting DNS logs for further processing:
```
2019-01-29
Time (GMT)      Source                  Destination             Protocol-Info
12:42:42.112898 SOURCE_IP               xxx.xx.xx.x             DNS     Standard query 0x1de7  A hacked.badsite.net
```
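To make "structuring the resulting DNS log for further processing" a bit more concrete, here is a minimal Python sketch (not an NXLog configuration; the regular expression and field names are written only for the sample line above):
```
import json
import re

# The sample capture line shown above.
DNS_LINE = (
    "12:42:42.112898 SOURCE_IP               xxx.xx.xx.x             "
    "DNS     Standard query 0x1de7  A hacked.badsite.net"
)

# Pattern and field names are illustrative, written for this sample only.
PATTERN = re.compile(
    r"(?P<time>\S+)\s+(?P<source>\S+)\s+(?P<destination>\S+)\s+DNS\s+"
    r"Standard query (?P<query_id>0x[0-9a-f]+)\s+(?P<record_type>\S+)\s+(?P<query_name>\S+)"
)

match = PATTERN.match(DNS_LINE)
if match:
    event = match.groupdict()
    # A structured record like this can feed dashboards, alerting rules, or a SIEM.
    print(json.dumps(event, indent=2))
    if event["query_name"].endswith("badsite.net"):
        print("ALERT: query to known-bad domain")
```
Once the query name is a named field, alerting on known-bad domains becomes a simple comparison instead of a substring hunt.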
You can try this yourself and run through other examples and snippets with the [NXLog Community Edition][5] _(disclaimer again: I work for NXLog)._
### Important aside: unstructured vs. structured data
It's important to take a moment and consider the log data format. For example, let's consider this log message:
```
debug1: Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2
```
This log contains a predefined structure, such as a metadata keyword before the colon ( **debug1** ). However, the rest of the log field is an unstructured string ( **Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2** ). So, while the message is easily available in a human-readable format, it is not a format a computer can easily parse.
Unstructured event data poses limitations including difficulty of parsing, searching, and analyzing the logs. The important metadata is too often set in an unstructured data field in the form of a freeform string like the example above. Logging administrators will come across this problem at some point as they attempt to standardize/normalize log data and centralize their log sources.
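A common workaround, sketched below in Python purely for illustration (the pattern and field names are assumptions for this sample message, not any standard), is to parse the freeform string into named fields before the event is shipped anywhere:
```
import json
import re

MESSAGE = (
    "debug1: Failed password for invalid user amy "
    "from SOURCE_IP port SOURCE_PORT ssh2"
)

# Written specifically for the sample message above.
PATTERN = re.compile(
    r"(?P<level>\w+): Failed password for (?:invalid user )?(?P<user>\S+) "
    r"from (?P<source_ip>\S+) port (?P<source_port>\S+) (?P<service>\S+)"
)

match = PATTERN.match(MESSAGE)
if match:
    structured = match.groupdict()
    structured["event"] = "auth_failure"
    # The event is now searchable by field (user, source_ip, ...) instead of by substring.
    print(json.dumps(structured, indent=2))
```
With fields like **user** and **source_ip** in place, the same event can be searched, aggregated, and correlated far more reliably than a raw string.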
### Where to go next
Alongside centralizing and structuring your logs, make sure you're collecting the right log data—Sysmon, PowerShell, Windows EventLog, DNS debug, NetFlow, ETW, kernel monitoring, file integrity monitoring, database logs, external cloud logs, and so on. Also have the right tools and processes in place to collect, aggregate, and help make sense of the data.
Hopefully, this gives you a starting point to centralize log collection across diverse sources and send the logs to outside destinations such as dashboards, monitoring software, analytics software, and specialized security information and event management (SIEM) suites.
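As a rough sketch of that last step, sending a structured record to an outside destination, and assuming a central collector listening for syslog over UDP (the address below is a placeholder; in practice you would likely use a dedicated shipper such as NXLog instead), Python's standard library alone can forward an event:
```
import json
import logging
import logging.handlers

# Placeholder address for a central collector; replace with your aggregation host.
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))

logger = logging.getLogger("central")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

event = {
    "event": "dns_query",
    "query_name": "hacked.badsite.net",
    "source": "SOURCE_IP",
}
# Sending JSON keeps the record machine-parseable on the collector side.
logger.info(json.dumps(event))
```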
What does your centralized logging strategy look like? Share your thoughts in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/2/reducing-security-risks-centralized-logging
Author: [Hannah Suarez][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/hcs
[b]: https://github.com/lujun9972
[1]: https://fosdem.org/2019/schedule/event/lets_use_centralized_log_collection_to_make_incident_response_teams_happy/
[2]: https://www.owasp.org/index.php/Main_Page
[3]: https://github.com/OWASP
[4]: https://www.owasp.org/index.php/Top_10-2017_Top_10
[5]: https://nxlog.co/products/nxlog-community-edition/download


@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: (leommxj)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Reducing security risks with centralized logging)
[#]: via: (https://opensource.com/article/19/2/reducing-security-risks-centralized-logging)
[#]: author: (Hannah Suarez https://opensource.com/users/hcs)
Reducing security risks with centralized logging
======
Centralizing logs and structuring log data for processing can mitigate risks related to insufficient logging.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
Logging and log analysis are essential to securing infrastructure, particularly when we consider common vulnerabilities. This article, based on my lightning talk [Let's use centralized log collection to make incident response teams happy][1] at FOSDEM'19, aims to raise awareness about the security concerns around insufficient logging, offer a way to avoid the risk, and advocate for more secure practices _(disclaimer: I work for NXLog)._
### Why log collection and why centralized logging?
Logging is, to be specific, an append-only sequence of records written to disk. In practice, logs help you investigate an infrastructure issue as you try to find a cause for misbehavior. A challenge comes up when you have heterogeneous systems with their own standards and formats, and you want to be able to handle and process these in a dependable way. This often comes at the cost of metadata. Centralized logging solutions require commonality, and that commonality often removes the rich metadata many open source logging tools provide.
### The security risk of insufficient logging and monitoring
The Open Web Application Security Project ([OWASP][2]) is a nonprofit organization that contributes to the industry with incredible projects (including many [tools][3] focusing on software security). The organization regularly reports on the riskiest security challenges for application developers and maintainers. In its most recent report on the [top 10 most critical web application security risks][4], OWASP added Insufficient Logging and Monitoring to its list. OWASP warns of risks due to insufficient logging, detection, monitoring, and active response in the following types of scenarios.
* Important auditable events, such as logins, failed logins, and high-value transactions, are not logged.
* Warnings and errors generate no, inadequate, or unclear log messages.
* Logs are only being stored locally.
* The application is unable to detect, escalate, or alert for active attacks in real time or near real time.
These instances can be mitigated by centralizing logs (i.e., not storing logs locally) and structuring log data for processing (i.e., in alerting dashboards and security suites).
For example, imagine a DNS query leads to a malicious site named **hacked.badsite.net**. With DNS monitoring, administrators monitor and proactively analyze DNS queries and responses. The efficiency of DNS monitoring relies both on sufficient logging and log collection to catch potential issues and on structuring the resulting DNS logs for further processing:
```
2019-01-29
Time (GMT)      Source                  Destination             Protocol-Info
12:42:42.112898 SOURCE_IP               xxx.xx.xx.x             DNS     Standard query 0x1de7  A hacked.badsite.net
```
You can try this yourself and run through other examples and snippets with the [NXLog Community Edition][5] _(disclaimer again: I work for NXLog)._
### Important aside: unstructured vs. structured data
It's important to take a moment and consider the log data format. For example, let's consider this log message:
```
debug1: Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2
```
This log contains a predefined structure, such as a metadata keyword before the colon (**debug1**). However, the rest of the log field is an unstructured string (**Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2**). So, while the message is easily available in a human-readable format, it is not a format a computer can easily parse.
Unstructured event data poses limitations, including difficulty parsing, searching, and analyzing the logs. The important metadata is too often set in an unstructured data field in the form of a freeform string like the example above. Logging administrators will come across this problem at some point as they attempt to standardize/normalize log data and centralize their log sources.
### Where to go next
Alongside centralizing and structuring your logs, make sure you're collecting the right log data: Sysmon, PowerShell, Windows EventLog, DNS debug, NetFlow, ETW, kernel monitoring, file integrity monitoring, database logs, external cloud logs, and so on. Also have the right tools and processes in place to collect, aggregate, and help make sense of the data.
Hopefully, this gives you a starting point to centralize log collection across diverse sources and send the logs to outside destinations such as dashboards, monitoring software, analytics software, and specialized security information and event management (SIEM) suites.
What does your centralized logging strategy look like? Share your thoughts in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/2/reducing-security-risks-centralized-logging
Author: [Hannah Suarez][a]
Topic selection: [lujun9972][b]
Translator: [leommxj](https://github.com/leommxj)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/hcs
[b]: https://github.com/lujun9972
[1]: https://fosdem.org/2019/schedule/event/lets_use_centralized_log_collection_to_make_incident_response_teams_happy/
[2]: https://www.owasp.org/index.php/Main_Page
[3]: https://github.com/OWASP
[4]: https://www.owasp.org/index.php/Top_10-2017_Top_10
[5]: https://nxlog.co/products/nxlog-community-edition/download