Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-09-07 16:07:37 +08:00
commit 018ccdf5d1
9 changed files with 317 additions and 228 deletions

View File

@ -1,82 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (leommxj)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Reducing security risks with centralized logging)
[#]: via: (https://opensource.com/article/19/2/reducing-security-risks-centralized-logging)
[#]: author: (Hannah Suarez https://opensource.com/users/hcs)
Reducing security risks with centralized logging
======
Centralizing logs and structuring log data for processing can mitigate risks related to insufficient logging.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
Logging and log analysis are essential to securing infrastructure, particularly when we consider common vulnerabilities. This article, based on my lightning talk [Let's use centralized log collection to make incident response teams happy][1] at FOSDEM'19, aims to raise awareness about the security concerns around insufficient logging, offer a way to avoid the risk, and advocate for more secure practices _(disclaimer: I work for NXLog)._
### Why log collection and why centralized logging?
Logging is, to be specific, an append-only sequence of records written to disk. In practice, logs help you investigate an infrastructure issue as you try to find a cause for misbehavior. A challenge comes up when you have heterogeneous systems with their own standards and formats, and you want to be able to handle and process these in a dependable way. This often comes at the cost of metadata. Centralized logging solutions require commonality, and that commonality often removes the rich metadata many open source logging tools provide.
### The security risk of insufficient logging and monitoring
The Open Web Application Security Project ([OWASP][2]) is a nonprofit organization that contributes to the industry with incredible projects (including many [tools][3] focusing on software security). The organization regularly reports on the riskiest security challenges for application developers and maintainers. In its most recent report on the [top 10 most critical web application security risks][4], OWASP added Insufficient Logging and Monitoring to its list. OWASP warns of risks due to insufficient logging, detection, monitoring, and active response in the following types of scenarios.
* Important auditable events, such as logins, failed logins, and high-value transactions are not logged.
* Warnings and errors generate no, inadequate, or unclear log messages.
* Logs are only being stored locally.
* The application is unable to detect, escalate, or alert for active attacks in real time or near real time.
These risks can be mitigated by centralizing logs (i.e., not storing logs only locally) and structuring log data for processing (e.g., in alerting dashboards and security suites).
For example, imagine a DNS query leads to a malicious site named **hacked.badsite.net**. With DNS monitoring, administrators monitor and proactively analyze DNS queries and responses. The effectiveness of DNS monitoring relies on sufficient logging and log collection to catch potential issues, as well as on structuring the resulting DNS logs for further processing:
```
2019-01-29
Time (GMT)      Source                  Destination             Protocol-Info
12:42:42.112898 SOURCE_IP               xxx.xx.xx.x             DNS     Standard query 0x1de7  A hacked.badsite.net
```
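A capture line like the one above becomes far more useful once it is broken into named fields. Here is a minimal sketch of such a parser; the field layout is assumed from the example itself, not from any particular tool's output format:

```python
import json

# Assumed layout, taken from the example above: time, source, destination,
# protocol, and a trailing freeform info string.
LINE = ("12:42:42.112898 SOURCE_IP               xxx.xx.xx.x             "
        "DNS     Standard query 0x1de7  A hacked.badsite.net")

def parse_dns_line(line):
    """Split a whitespace-aligned DNS capture line into named fields."""
    time, source, dest, proto, *info = line.split()
    return {
        "time": time,
        "source": source,
        "destination": dest,
        "protocol": proto,
        "info": " ".join(info),
        # Pull the queried name out of the info string so it can be matched
        # against a blocklist or alerting rule downstream.
        "query": info[-1],
    }

record = parse_dns_line(LINE)
print(json.dumps(record, indent=2))
```

With records in this shape, matching `record["query"]` against a domain blocklist becomes a one-line check instead of a fragile string search.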
You can try this yourself and run through other examples and snippets with the [NXLog Community Edition][5] _(disclaimer again: I work for NXLog)._
### Important aside: unstructured vs. structured data
It's important to take a moment and consider the log data format. For example, let's consider this log message:
```
debug1: Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2
```
This log contains some predefined structure, such as the metadata keyword before the colon (**debug1**). However, the rest of the message is an unstructured string (**Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2**). So, while the message is easily available in a human-readable format, it is not a format a computer can easily parse.
Unstructured event data poses limitations, including difficulty in parsing, searching, and analyzing the logs. Too often, important metadata is embedded in an unstructured data field as a freeform string, like the example above. Logging administrators will come across this problem at some point as they attempt to standardize/normalize log data and centralize their log sources.
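As a hedged illustration of what normalization can look like, the sshd message above can be lifted into key/value form with a regular expression. The field names here are my own choices for the sketch, not a standard schema:

```python
import re

MESSAGE = ("debug1: Failed password for invalid user amy "
           "from SOURCE_IP port SOURCE_PORT ssh2")

# Capture the pieces a human reads out of the freeform string:
# severity keyword, username, source address, port, and protocol.
PATTERN = re.compile(
    r"(?P<level>\w+): Failed password for invalid user (?P<user>\S+) "
    r"from (?P<src_ip>\S+) port (?P<src_port>\S+) (?P<proto>\S+)"
)

match = PATTERN.match(MESSAGE)
event = match.groupdict() if match else {}
print(event)
```

Once every log source is mapped into fields like these, a central collector can index, search, and alert on `user` or `src_ip` uniformly, regardless of which system emitted the original string.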
### Where to go next
Alongside centralizing and structuring your logs, make sure you're collecting the right log data—Sysmon, PowerShell, Windows EventLog, DNS debug, NetFlow, ETW, kernel monitoring, file integrity monitoring, database logs, external cloud logs, and so on. Also have the right tools and processes in place to collect, aggregate, and help make sense of the data.
Hopefully, this gives you a starting point for centralizing log collection across diverse sources and sending the results to outside destinations such as dashboards, monitoring software, analytics software, and specialized tools like security information and event management (SIEM) suites.
What does your centralized logging strategy look like? Share your thoughts in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/2/reducing-security-risks-centralized-logging
作者:[Hannah Suarez][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hcs
[b]: https://github.com/lujun9972
[1]: https://fosdem.org/2019/schedule/event/lets_use_centralized_log_collection_to_make_incident_response_teams_happy/
[2]: https://www.owasp.org/index.php/Main_Page
[3]: https://github.com/OWASP
[4]: https://www.owasp.org/index.php/Top_10-2017_Top_10
[5]: https://nxlog.co/products/nxlog-community-edition/download

View File

@ -1,60 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Information could be half the world's mass by 2245, says researcher)
[#]: via: (https://www.networkworld.com/article/3570438/information-could-be-half-the-worlds-mass-by-2245-says-researcher.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Information could be half the world's mass by 2245, says researcher
======
Because of the amount of energy and resources used to create and store digital information, the data should be considered physical, and not just invisible ones and zeroes, according to one theoretical physicist.
Luza Studios / Getty Images
Digital content should be considered a fifth state of matter, along with gas, liquid, plasma and solid, suggests one university scholar.
Because of the energy and resources used to create, store and distribute data physically and digitally, data has evolved and should now be considered as mass, according to Melvin Vopson, a senior lecturer at the U.K.'s University of Portsmouth and author of an article, "[The information catastrophe][1]," published in the journal AIP Advances.
Vopson also claims digital bits are on a course to overwhelm the planet and will eventually outnumber atoms.
The idea of assigning mass to digital information builds off some existing data points. Vopson cites an IBM estimate that finds data is created at a rate of 2.5 quintillion bytes every day. He also factors in data storage densities of more than 1 terabit per inch to compare the size of a bit to the size of an atom.
Presuming 50% annual growth in data generation, "the number of bits would equal the number of atoms on Earth in approximately 150 years," according to a [media release][2] announcing Vopson's research.
"It would be approximately 130 years until the power needed to sustain digital information creation would equal all the power currently produced on planet Earth, and by 2245, half of Earth's mass would be converted to digital information mass," the release reads.
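Those projections can be sanity-checked with some back-of-envelope arithmetic. This is my own sketch, using the figures quoted above plus a commonly cited estimate of the number of atoms on Earth (an assumption, not a number from the article):

```python
import math

BYTES_PER_DAY = 2.5e18      # IBM estimate quoted above: 2.5 quintillion bytes/day
BITS_PER_YEAR = BYTES_PER_DAY * 8 * 365
GROWTH = 1.5                # the assumed 50% annual growth in data generation
ATOMS_ON_EARTH = 1.33e50    # commonly cited estimate (assumption, not from the article)

# Years n until a single year's production alone reaches the atom count:
# BITS_PER_YEAR * GROWTH**n >= ATOMS_ON_EARTH
years = math.log(ATOMS_ON_EARTH / BITS_PER_YEAR) / math.log(GROWTH)
print(f"bits catch up with atoms in roughly {years:.0f} years")
```

This lands in the same ballpark as the ~150-year figure; the exact number depends on which starting estimates you use.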
The COVID-19 pandemic is increasing the rate of digital data creation and accelerating this process, Vopson adds.
He warns of an impending saturation point: "Even assuming that future technological progress brings the bit size down to sizes closer to the atom itself, this volume of digital information will take up more than the size of the planet, leading to what we define as the information catastrophe," Vopson writes in the [paper][3].
"We are literally changing the planet bit by bit, and it is an invisible crisis," says Vopson, a former R&D scientist at Seagate Technology.
Vopson isn't alone in exploring the idea that information isn't simply imperceptible ones and zeroes. According to the release, Vopson draws on the mass-energy comparisons in Einstein's theory of general relativity; the work of Rolf Landauer, who applied the laws of thermodynamics to information; and the work of Claude Shannon, the inventor of the digital bit.
"When one brings information content into existing physical theories, it is almost like an extra dimension to everything in physics," Vopson says.
With a growth rate that seems unstoppable, digital information production "will consume most of the planetary power capacity, leading to ethical and environmental concerns," his paper concludes.
Interestingly, and a bit more speculatively, Vopson also suggests that if, as he projects, the future mass of the planet is made up predominantly of bits of information, and enough power exists to create them (which is not certain), then "one could envisage a future world mostly computer simulated and dominated by digital bits and computer code," he writes.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3570438/information-could-be-half-the-worlds-mass-by-2245-says-researcher.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://aip.scitation.org/doi/10.1063/5.0019941
[2]: https://publishing.aip.org/publications/latest-content/digital-content-on-track-to-equal-half-earths-mass-by-2245/
[3]: https://aip.scitation.org/doi/full/10.1063/5.0019941
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,84 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage your software repositories with this open source tool)
[#]: via: (https://opensource.com/article/20/8/manage-repositories-pulp)
[#]: author: (Melanie Corr https://opensource.com/users/melanie-corr)
Manage your software repositories with this open source tool
======
An introduction to Pulp, the open source repository management solution
that is growing in scope and functionality.
![Cut pieces of citrus fruit and pomegranates][1]
[Foreman][2] is a robust management and automation product that provides administrators of Linux environments with enterprise-level solutions for four key scenarios: provisioning management, configuration management, patch management, and content management. A major component of the content management functionality in Foreman is provided by the Pulp project. While Pulp is an integral part of this product, it is also a standalone, free, and open source project that is making huge progress on its own.
Let's take a look at the Pulp project, especially the features of the latest release, Pulp 3.
### What is Pulp?
Pulp is a platform for managing repositories of software packages and making them available to a large number of consumers. You can use Pulp to mirror, synchronize, upload, and promote content like RPMs, Python packages, Ansible collections, container images, and more across different environments. If you have dozens, hundreds, or even thousands of software packages and need a better way to manage them, Pulp can help.
The latest major version is [Pulp 3][3], which was released in December 2019. Pulp 3 is the culmination of years of gathering user requirements and a complete technical overhaul of the existing Pulp architecture to increase reliability and flexibility. Plus, it includes a vast range of new features.
### Who uses Pulp?
For the most part, Pulp users administer enterprise software environments where the stability and reliability of content are paramount. Pulp users want a platform to develop content without worrying that repositories might disappear. They want to promote content across the different stages of their lifecycle environment in a secure manner that optimizes disk space and scales their environment to meet new demands. They also need the flexibility to work with a wide variety of content types. Pulp 3 provides that and more.
### Manage a wide variety of content in one place
After you install Pulp, you can add [content plugins][4] for the content types that you plan to manage, mirror the content locally, add privately hosted content, and blend content to suit your requirements. If you're an Ansible user, for example, and you don't want to host your private content on Ansible Galaxy, you can add the Pulp Ansible plugin, mirror the public Ansible content that you require, and use Pulp as an on-premises platform to manage and distribute a scalable blend of public and private Ansible roles and collections across your organization. You can do this with any content type. There is a wide variety of content plugins available, including RPM, Debian, Python, Container, and Ansible, to name but a few. There is also a File plugin, which you can use to manage files like ISO images.
If you don't find a plugin for the content type that you require, Pulp 3 has introduced a new plugin API and plugin template to make it easy to create a Pulp plugin of your own. You can use the [plugin writing guide][5] to autogenerate a minimal viable plugin, and then start building from there.
### High availability
With Pulp 3, the change from MongoDB to PostgreSQL facilitated major improvements around performance and data integrity. Pulp users now have a fully open source tech stack that provides high availability (HA) and better scalability.
### Repository versioning
Using Pulp 3, you can experiment without risk. Every time you add or remove content, Pulp creates an immutable repository version so that you can roll back to earlier versions and thereby guarantee the safety and stability of your operation. Using publications and distributions, you can expose multiple versions of a repository, which you can use as another method of rolling back to an earlier version. To roll back, you can simply point your distribution to an older publication.
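To make that rollback concrete, here is a minimal sketch of what re-pointing a distribution at an older publication can look like against Pulp 3's REST conventions. The hrefs below are placeholders, and the exact endpoint path depends on which content plugin you use, so treat this as illustration rather than a verbatim recipe:

```python
import json

# Placeholders standing in for hrefs you would read back from Pulp's API.
DISTRIBUTION_HREF = "/pulp/api/v3/distributions/file/file/<uuid>/"
OLD_PUBLICATION_HREF = "/pulp/api/v3/publications/file/file/<uuid>/"

def rollback_payload(publication_href):
    """Body of the PATCH request that re-points a distribution."""
    return {"publication": publication_href}

payload = rollback_payload(OLD_PUBLICATION_HREF)
# e.g., sent with the requests library:
#   requests.patch(BASE_URL + DISTRIBUTION_HREF, json=payload, auth=("admin", PASSWORD))
print(json.dumps(payload))
```

Because repository versions are immutable, the older publication is guaranteed to still describe exactly the content it did when it was created.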
### Disk optimization
One of the major challenges for any software development environment is disk optimization. If you're downloading a constant stream of packages—for example, nightly builds of repositories that you require today but will no longer require tomorrow—disk space will quickly become an issue. Pulp 3 has been designed with disk optimization in mind. While the default option downloads and saves all software packages, you can also enable either the "on demand" or "streamed" option. The "on demand" option saves disk space by downloading and saving only the content that clients request. With the "streamed" option, you also download upon client request, but you don't save the content in Pulp. This is ideal for synchronizing content, for example, from a nightly repository, and saves you from having to perform a disk cleanup at a later stage.
### Multiple storage options
Even with the best disk optimization, as your project grows, you might need a way to scale your deployment to match your requirements. As well as local file storage, Pulp supports a range of cloud storage options, such as Amazon S3 and Azure, to ensure that you can scale to meet the demands of your deployment.
### Protect your content
Pulp 3 has the option of adding the [Certguard][6] plugin, which provides an X.509 capable ContentGuard that requires clients to submit a certificate proving their entitlement to content before receiving content from Pulp.
Any client presenting an X.509 or Red Hat Subscription Management-based certificate at request time will be authorized, as long as the client certification is not expired, is signed by the Certificate Authority, and was stored on the Certguard when it was created. The client presents the certificate using transport layer security (TLS), which proves that the client has not only the certificate but also its key. You can develop, confident in the knowledge that your content is being protected.
The Pulp team is also actively working on a role-based access control system for the entire Pulp deployment so that administrators can ensure that the right users have access to the right environments.
### Try Pulp in a container
If you're interested in evaluating Pulp 3 for yourself, you can easily install [Pulp 3 in a Container][7] using Docker or Podman. The Pulp team is constantly working on simplifying the installation process. You can also use an [Ansible playbook][8] to automate the full installation and configuration of Pulp 3.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/8/manage-repositories-pulp
作者:[Melanie Corr][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/melanie-corr
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fruit-orange-pomegranate-pulp-unsplash.jpg?itok=4cvODZDJ (Oranges and pomegranates)
[2]: https://opensource.com/article/17/8/system-management-foreman
[3]: https://pulpproject.org/about-pulp-3/
[4]: https://pulpproject.org/content-plugins/
[5]: https://docs.pulpproject.org/plugins/plugin-writer/index.html
[6]: https://pulp-certguard.readthedocs.io/en/latest/
[7]: https://pulpproject.org/pulp-in-one-container/
[8]: https://pulp-installer.readthedocs.io/en/latest/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,89 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Jargon Buster: What is a Linux Distribution? Why is it Called a Distribution?)
[#]: via: (https://itsfoss.com/what-is-linux-distribution/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Linux Jargon Buster: What is a Linux Distribution? Why is it Called a Distribution?
======
In this chapter of the Linux Jargon Buster, let's discuss something elementary.
Let's discuss what a Linux distribution is, why it is called a distribution (or distro), and how it is different from the Linux kernel. You'll also learn a thing or two about why some people insist on calling Linux GNU/Linux.
### What is a Linux distribution?
A Linux distribution is an operating system composed of the Linux kernel, [GNU tools][1], additional software, and a package manager. It may also include a display server and a [desktop environment][2] so that it can be used as a regular desktop operating system.
The term is Linux distribution (or distro, for short) because an entity like Debian or Ubuntu distributes the Linux kernel along with all the necessary software and utilities (such as a network manager, package manager, and desktop environment) so that it can be used as an operating system.
Your distribution also takes responsibility for providing updates to maintain the kernel and other utilities.
So, Linux is the kernel, whereas a Linux distribution is the operating system. This is why they are also sometimes referred to as Linux-based operating systems.
Don't worry if not all of the above makes sense right away. I'll explain it in a bit more detail.
### Linux is just a kernel, not an operating system: What does it mean?
You might have come across that phrase, and it's entirely correct. The kernel is at the core of an operating system, close to the actual hardware. You interact with it using applications and the shell.
![Linux Kernel Structure][3]
To understand that, I'll use the same analogy I used in my [detailed guide on what is Linux][4]. Think of the operating system as a vehicle and the kernel as its engine. You cannot drive an engine directly. Similarly, you cannot use the kernel directly.
![Operating System Analogy][5]
A Linux distribution can be seen as a vehicle manufacturer like Toyota or Ford: it provides you ready-to-use cars, just as Ubuntu or Fedora provide you ready-to-use operating systems based on Linux.
### What is GNU/Linux?
Take a look at this picture once again. What [Linus Torvalds][6] created in 1991 is just the innermost circle, i.e. the Linux kernel.
![Linux Kernel Structure][3]
To use Linux even in the most primitive form (without even a GUI), you need a shell. Most commonly, it is the Bash shell.
And then you need to run some commands in the shell to do some work. Can you recall some basic Linux commands? There are cat, cp, mv, grep, find, diff, gzip, and more.
Technically, not all of these so-called Linux commands belong to Linux exclusively. Many of them originate from the UNIX operating system.
Even before Linux came into existence, Richard Stallman had created the GNU project (GNU is a recursive acronym for "GNU's Not Unix"), the first free software project, in 1983. The [GNU project][7] implemented many of the popular Unix utilities like cat, grep, and awk, along with a shell (Bash), while also developing its own compilers (GCC) and editors (Emacs).
Back in the '80s, UNIX was proprietary and super expensive. This is why Linus Torvalds developed a new kernel that was like UNIX. To interact with the Linux kernel, Torvalds used the GNU tools, which were available for free under the GPL license.
With the GNU tools, it also behaved like UNIX. This is why Linux is also termed a UNIX-like operating system.
You cannot imagine Linux without the shell and all those commands. Since Linux integrates deeply with the GNU tools and is almost dependent on them, purists demand that GNU get its fair share of recognition, which is why they insist on calling it GNU Linux (written as GNU/Linux).
### Conclusion
![][8]
So, what is the correct term? Linux, GNU/Linux, Linux distribution, Linux distro, Linux-based operating system, or UNIX-like operating system? I say it depends on you and the context. I have provided enough detail so that you have a better understanding of these related terms.
I hope you are enjoying this **Linux Jargon Buster** series and learning new things. Your feedback and suggestions are welcome.
--------------------------------------------------------------------------------
via: https://itsfoss.com/what-is-linux-distribution/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.gnu.org/manual/blurbs.html
[2]: https://itsfoss.com/what-is-desktop-environment/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/Linux_Kernel_structure.png?resize=800%2C350&ssl=1
[4]: https://itsfoss.com/what-is-linux/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/operating_system_analogy.png?resize=800%2C350&ssl=1
[6]: https://itsfoss.com/linus-torvalds-facts/
[7]: https://www.gnu.org/gnu/thegnuproject.en.html
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/what-is-linux-distribution.png?resize=800%2C450&ssl=1

View File

@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: (leommxj)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Reducing security risks with centralized logging)
[#]: via: (https://opensource.com/article/19/2/reducing-security-risks-centralized-logging)
[#]: author: (Hannah Suarez https://opensource.com/users/hcs)
通过集中日志记录来减少安全风险
======
集中日志并结构化待处理的日志数据可缓解与缺少日志相关的风险
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
日志记录和日志分析对于保护基础设施安全来说至关重要尤其是当我们考虑到常见漏洞的时候。这篇文章基于我在 FOSDEM'19 上的闪电演讲 [Let's use centralized log collection to make incident response teams happy][1],目的是提高大家对日志记录不足这一安全问题的重视,提供一种避免风险的方法,并倡导更安全的实践。(声明:我为 NXLog 工作。)
### 为什么收集日志?为什么集中日志记录?
确切地说,日志是写入磁盘的、只可追加的记录序列。在实际工作中,日志可以在你尝试寻找异常的根源时帮助你调查基础设施问题。当你拥有多个各有自己标准与格式的异构系统,并且想用一种可靠的方法来接收和处理它们的日志时,挑战就来了。这通常以牺牲元数据为代价:集中日志记录解决方案要求共性,而这种共性常常会去除许多开源日志记录工具所提供的丰富元数据。
### 缺少日志记录与监控的安全风险
开源 Web 应用程序安全项目([OWASP][2])是一个为业界贡献了许多杰出项目(包括许多专注于软件安全的[工具][3])的非营利组织。OWASP 定期为应用开发人员和维护者报告最具风险的安全挑战。在最新一版《[10 项最严重的 Web 应用程序安全风险][4]》中OWASP 将“日志记录和监控不足”加入了列表。OWASP 警告,在下列场景中,日志记录、检测、监控和主动响应的不足会带来风险:
* 未记录重要的可审计事件,如:登录、登录失败和高额交易。
* 告警和错误事件没有生成日志消息,或者生成的日志消息不充分、不清晰。
* 日志只存储在本地。
* 应用程序无法实时或准实时地对进行中的攻击进行检测、上报或告警。
可以通过集中日志记录(即不只将日志存储在本地)和结构化日志数据以供处理(例如,用于告警仪表盘和安全套件)来缓解上述情形。
举例来说,假设一次 DNS 查询指向了名为 **hacked.badsite.net** 的恶意网站。通过 DNS 监控,管理员可以监控并主动分析 DNS 查询与响应。DNS 监控的效果既依赖于充足的日志记录与日志收集以发现潜在问题,也依赖于对产生的 DNS 日志进行结构化以便进一步处理:
```
2019-01-29
Time (GMT)      Source                  Destination             Protocol-Info
12:42:42.112898 SOURCE_IP               xxx.xx.xx.x             DNS     Standard query 0x1de7  A hacked.badsite.net
```
你可以在 [NXLog Community Edition][5] 中自己尝试这个例子,也可以运行其他的例子和代码片段。(再次声明:我为 NXLog 工作。)
### 重要的一点:非结构化数据与结构化数据
花一点时间考虑一下日志数据的格式是很重要的。例如,我们来看看下面这条日志消息:
```
debug1: Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2
```
这条日志包含一个预定义的结构,例如冒号前面的元数据关键词(**debug1**)。然而,日志的其余部分是一个非结构化的字符串(**Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2**)。因此,虽然这条消息易于人类阅读,却不是计算机容易解析的格式。
非结构化的事件数据存在局限性,包括难以解析、搜索和分析日志。重要的元数据常常像上面的例子一样,以自由格式字符串的形式置于非结构化数据字段中。日志管理员在尝试对日志数据进行标准化/归一化并集中日志源时,迟早会遇到这个问题。
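作为一个示意(以下字段名为笔者自拟,并非某种标准模式),可以用一个正则表达式把上面那条 sshd 日志提升为键值结构:

```python
import re

MESSAGE = ("debug1: Failed password for invalid user amy "
           "from SOURCE_IP port SOURCE_PORT ssh2")

# 捕获人眼能从这条自由格式字符串中读出的各个部分:
# 级别关键词、用户名、来源地址、端口和协议
PATTERN = re.compile(
    r"(?P<level>\w+): Failed password for invalid user (?P<user>\S+) "
    r"from (?P<src_ip>\S+) port (?P<src_port>\S+) (?P<proto>\S+)"
)

match = PATTERN.match(MESSAGE)
event = match.groupdict() if match else {}
print(event)
```

当所有日志源都被映射成这样的字段之后,集中式的收集器就可以统一地按 `user` 或 `src_ip` 进行索引、搜索和告警,而不必关心原始字符串来自哪个系统。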
### 接下来怎么做
除了集中和结构化日志之外还要确保你收集了正确的日志数据——Sysmon、PowerShell、Windows 事件日志、DNS 调试日志、NetFlow、ETW、内核监控、文件完整性监控、数据库日志、外部云日志等等。同样还要建立合适的工具和流程来收集、汇总并帮助理解这些数据。
希望这能为你从不同日志源集中收集日志提供一个起点将日志发送到仪表盘、监控软件、分析软件以及像安全信息与事件管理SIEM套件这样的专业软件等外部系统。
你的集中日志策略是怎样的?请在评论中分享你的想法。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/2/reducing-security-risks-centralized-logging
作者:[Hannah Suarez][a]
选题:[lujun9972][b]
译者:[leommxj](https://github.com/leommxj)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hcs
[b]: https://github.com/lujun9972
[1]: https://fosdem.org/2019/schedule/event/lets_use_centralized_log_collection_to_make_incident_response_teams_happy/
[2]: https://www.owasp.org/index.php/Main_Page
[3]: https://github.com/OWASP
[4]: https://www.owasp.org/index.php/Top_10-2017_Top_10
[5]: https://nxlog.co/products/nxlog-community-edition/download

View File

@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Information could be half the world's mass by 2245, says researcher)
[#]: via: (https://www.networkworld.com/article/3570438/information-could-be-half-the-worlds-mass-by-2245-says-researcher.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
研究人员表示,到 2245 年信息量可能占世界质量的一半
======
> 根据一位理论物理学家的说法,由于创建和存储数字信息所使用的能源和资源数量,数据应该被视为物理的,而不仅仅是看不见的一和零。
一位大学学者建议,数字内容应该与气体、液体、等离子体和固体一样,被视为第五种物质状态。
英国朴茨茅斯大学高级讲师、发表在《AIP Advances》杂志上的《[信息灾难][1]》一文的作者 Melvin Vopson 称,由于以物理和数字方式创建、存储和分发数据所使用的能量和资源,数据已经发生了演变,现在应该被视为质量。
Vopson 还声称,数字比特正在走向压倒地球的道路,最终将超过原子的数量。
给数字信息分配质量的想法建立在一些现有数据点的基础上。Vopson 引用了 IBM 的一项估计:数据每天以 2.5 <ruby>艾字节<rt>exabyte</rt></ruby>(即 2.5×10¹⁸ 字节)的速度产生。他还将每英寸超过 1 <ruby>太比特<rt>terabit</rt></ruby>的数据存储密度考虑在内,将比特的大小与原子的大小进行比较。
假设数据生成量每年增长 50%,根据宣布 Vopson 研究的[媒体发布][2],“比特的数量将在大约 150 年内等于地球上的原子数量。”
新闻稿中写道:“大约 130 年后,维持数字信息创造所需的电力将等于目前地球上产生的全部电力,而到 2245 年,地球质量的一半将转化为数字信息质量。”
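可以用一段简单的草算来粗略检验上述推算(以下为笔者自己的估算;地球原子总数取常见的估计值,并非来自原文):

```python
import math

BYTES_PER_DAY = 2.5e18      # 上文引用的 IBM 估计:每天 2.5×10¹⁸ 字节
BITS_PER_YEAR = BYTES_PER_DAY * 8 * 365
GROWTH = 1.5                # 假设数据生成量每年增长 50%
ATOMS_ON_EARTH = 1.33e50    # 常见的地球原子数估计值(假设,并非来自原文)

# 求使某一年的产量达到原子总数所需的年数 n
# BITS_PER_YEAR * GROWTH**n >= ATOMS_ON_EARTH
years = math.log(ATOMS_ON_EARTH / BITS_PER_YEAR) / math.log(GROWTH)
print(f"比特数大约在 {years:.0f} 年后追上原子数")
```

结果与原文约 150 年的说法在同一量级;具体数字取决于所采用的初始估计值。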
Vopson 补充说COVID-19 大流行正在提高数字数据创造的速度,并加速这一进程。
他警告说,一个饱和点即将到来。“即使假设未来的技术进步能将比特的大小降低到接近原子本身的尺寸,这些数字信息的体积也将超过地球的体积,从而导致我们所定义的‘信息灾难’。”Vopson 在[论文][3]中写道。
“我们正在一点一点地改变这个星球这是一场看不见的危机”Vopson 说,他是希捷科技公司的前研发科学家。
探索“信息并不仅仅是不可察觉的 1 和 0”这一想法的并非只有 Vopson 一人。据新闻稿称Vopson 借鉴了爱因斯坦广义相对论中的质能关系、将热力学定律应用于信息的 Rolf Landauer 的工作,以及数字比特的发明者 Claude Shannon 的工作。
“当一个人将信息内容带入现有的物理理论中时这几乎就像物理学中的一切都多了一个维度”Vopson 说。
鉴于其增长速度似乎不可阻挡,他的论文总结道,数字信息生产“将消耗地球上大部分的电力,从而引发伦理和环境问题”。
有趣的是也更为大胆的是Vopson 还提出,如果如他所预测的那样,未来地球的质量主要由信息比特构成,并且存在足够的电力来创造它们(这一点并不确定),那么“可以设想,未来的世界将主要由计算机模拟,并由数字比特和计算机代码主导”,他写道。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3570438/information-could-be-half-the-worlds-mass-by-2245-says-researcher.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://aip.scitation.org/doi/10.1063/5.0019941
[2]: https://publishing.aip.org/publications/latest-content/digital-content-on-track-to-equal-half-earths-mass-by-2245/
[3]: https://aip.scitation.org/doi/full/10.1063/5.0019941

View File

@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage your software repositories with this open source tool)
[#]: via: (https://opensource.com/article/20/8/manage-repositories-pulp)
[#]: author: (Melanie Corr https://opensource.com/users/melanie-corr)
用这个开源工具管理你的软件仓库
======
本文介绍开源仓库管理解决方案 Pulp它的范围和功能都在不断增长。
![Cut pieces of citrus fruit and pomegranates][1]
[Foreman][2] 是一个强大的管理和自动化产品,它为 Linux 环境的管理员提供了针对四个关键场景的企业级解决方案:置备管理、配置管理、补丁管理和内容管理。Foreman 中内容管理功能的一个主要组成部分是由 Pulp 项目提供的。虽然 Pulp 是该产品的一个组成部分,但它也是一个独立的、自由且开源的项目,自身也在取得巨大的进步。
让我们来看看 Pulp 项目,特别是最新版本 Pulp 3 的功能。
### 什么是 Pulp
Pulp 是一个管理软件包仓库并将其提供给大量消费者的平台。你可以使用 Pulp 在不同环境中镜像、同步、上传和推广诸如 RPM、Python 包、Ansible 集合、容器镜像等内容。如果你有几十个、几百个甚至上千个软件包并需要一种更好的方式来管理它们Pulp 可以帮助你。
最新的主要版本是 [Pulp 3][3],它于 2019 年 12 月发布。Pulp 3 是多年来收集用户需求的结晶,并对现有的 Pulp 架构进行了全面的技术改造,以提高可靠性和灵活性。另外,它还包含了大量的新功能。
### 谁在使用 Pulp
大多数情况下Pulp 用户管理的企业软件环境中内容的稳定性和可靠性是最重要的。Pulp 用户希望有一个平台来开发内容而不用担心仓库可能会消失。他们希望以安全的方式在其生命周期环境的不同阶段推广内容优化磁盘空间并扩展环境以满足新的需求。他们还需要灵活处理各种内容类型。Pulp 3 提供了这些以及更多。
### 在一处管理各类内容
安装 Pulp 后,你可以为你计划管理的内容类型添加[内容插件][4],将内容镜像到本地,添加私有托管的内容,并按你的需求混合内容。例如,如果你是 Ansible 用户,而你又不想在 Ansible Galaxy 上托管你的私有内容,你可以添加 Pulp 的 Ansible 插件,镜像你所需要的公共 Ansible 内容,并将 Pulp 作为一个本地平台,在你的组织中管理和分发可扩展的公共和私有 Ansible 角色与集合的组合。任何内容类型都可以这样做。有各种各样的内容插件可供选择,包括 RPM、Debian、Python、容器和 Ansible 等等。还有一个文件插件,你可以用它来管理 ISO 镜像等文件。
如果你没有找到你所需的内容类型插件Pulp 3 引入了新的插件 API 和插件模板,你可以轻松创建一个属于自己的 Pulp 插件。你可以根据[插件编写指南][5]自动生成一个最小可用的插件,然后从那里开始构建。
### 高可用性
在 Pulp 3 中,从 MongoDB 到 PostgreSQL 的转变带来了性能和数据完整性方面的重大改进。Pulp 用户现在拥有一个完全开源的技术栈它可以提供高可用性HA和更好的扩展性。
### 仓库版本管理
使用 Pulp 3你可以毫无风险地进行试验。每次你添加或删除内容时Pulp 都会创建一个不可变的仓库版本,这样你就可以回滚到早期的版本,从而保证你操作的安全性和稳定性。通过使用发布和分发,你可以公开一个仓库的多个版本,你可以将其作为回滚到早期版本的另一种方法。如要回滚,你可以简单地将你的分发指向一个旧的发布。
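为了让“回滚”更具体一些,下面按照 Pulp 3 REST API 的一般约定,勾勒出把分发重新指向旧发布的 PATCH 请求大致的样子。其中的路径与 href 都是占位符,具体端点取决于所用的内容插件,仅供示意:

```python
import json

# 占位符,代表你会从 Pulp API 中读回的 href
DISTRIBUTION_HREF = "/pulp/api/v3/distributions/file/file/<uuid>/"
OLD_PUBLICATION_HREF = "/pulp/api/v3/publications/file/file/<uuid>/"

def rollback_payload(publication_href):
    """构造把分发重新指向某个发布的 PATCH 请求体。"""
    return {"publication": publication_href}

payload = rollback_payload(OLD_PUBLICATION_HREF)
# 例如,用 requests 库发送:
#   requests.patch(BASE_URL + DISTRIBUTION_HREF, json=payload, auth=("admin", PASSWORD))
print(json.dumps(payload))
```

由于仓库版本是不可变的,旧的发布所描述的内容与它创建之时完全一致,因此这样的回滚是安全的。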
### 磁盘优化
任何软件开发环境的主要挑战之一是磁盘优化。如果你在不断地下载软件包例如你今天需要但明天就不再需要的仓库的每日构建磁盘空间将很快成为一个问题。Pulp 3 的设计已经考虑到了磁盘优化。默认选项会下载并保存所有的软件包,但你也可以启用“按需”或“流式”选项。“按需”选项只下载并保存客户端请求的内容,从而节省磁盘空间。使用“流式”选项,同样会按客户端的请求进行下载,但不会将内容保存在 Pulp 中。这非常适合同步诸如每日构建仓库之类的内容,并让你以后免于执行磁盘清理。
### 多个存储选项
即使有最好的磁盘优化随着项目的发展你可能需要一种方法来扩展你的部署以满足需求。除了本地文件存储Pulp 还支持一系列的云存储选项,如 Amazon S3 和 Azure以确保你可以扩展满足你的部署需求。
### 保护你的内容
Pulp 3 可以选择添加 [Certguard][6] 插件,该插件提供了一个支持 X.509 的 ContentGuard它要求客户端在从 Pulp 接收内容之前提交证书,以证明其获取内容的权利。
只要客户端的证书没有过期、由证书颁发机构签署并且在创建时已存储在 Certguard 上,任何在请求时出示基于 X.509 或基于 Red Hat 订阅管理的证书的客户端都将获得授权。客户端通过传输层安全协议TLS出示证书这证明客户端不仅持有证书还持有对应的密钥。你可以放心地进行开发因为你知道你的内容正受到保护。
Pulp 团队也在积极为整个 Pulp 部署一个基于角色的访问控制系统,这样管理员就可以确保正确的用户可以访问正确的环境。
### 在容器中试用 Pulp
如果你有兴趣亲自评估 Pulp 3你可以使用 Docker 或 Podman 轻松地[在容器中安装 Pulp 3][7]。Pulp 团队一直在努力简化安装过程。你也可以使用 [Ansible playbook][8] 来自动完成 Pulp 3 的全部安装和配置。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/8/manage-repositories-pulp
作者:[Melanie Corr][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/melanie-corr
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fruit-orange-pomegranate-pulp-unsplash.jpg?itok=4cvODZDJ (Oranges and pomegranates)
[2]: https://opensource.com/article/17/8/system-management-foreman
[3]: https://pulpproject.org/about-pulp-3/
[4]: https://pulpproject.org/content-plugins/
[5]: https://docs.pulpproject.org/plugins/plugin-writer/index.html
[6]: https://pulp-certguard.readthedocs.io/en/latest/
[7]: https://pulpproject.org/pulp-in-one-container/
[8]: https://pulp-installer.readthedocs.io/en/latest/