Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-04-18 13:52:30 +08:00
commit 1d4e3d4f13
8 changed files with 919 additions and 97 deletions

View File

@ -1,31 +1,31 @@
[#]: collector: (lujun9972)
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12120-1.html)
[#]: subject: (Build a private social network with a Raspberry Pi)
[#]: via: (https://opensource.com/article/20/3/raspberry-pi-open-source-social)
[#]: author: (Giuseppe Cassibba https://opensource.com/users/peppe8o)
用树莓派搭建一个私人社交网络
======
手把手教你怎样以低硬件成本和简易步骤搭建自己的社交网络。
> 手把手教你怎样以低成本的硬件和简易步骤搭建自己的社交网络。
![Team of people around the world][1]
近年来,社交网络已经革新了人们的生活习惯。为了维持与朋友和家人的联系,人们每天都会使用社交频道。但是涉及到隐私和数据安全时,仍有一些普遍问题。尽管社交网络创建了复杂的隐私策略来保护用户的信息,但如果你不想自己的信息被泄露,最好的办法还是把数据保存在自己的服务器上。
近年来,社交网络已经革新了人们的生活习惯。人们每天都会使用社交频道与朋友和家人联系。但是涉及到隐私和数据安全时,仍有一些共同的问题。尽管社交网络创建了复杂的隐私策略来保护用户的信息,但如果你不想自己的信息被泄露,最好的办法还是把数据保存在自己的服务器上。
一个树莓派 — 多才多艺的 Raspbian Lite 版本就可以让你搭建很多有用的家庭服务(参照我的文章[树莓派项目][2])。通过搜索开源软件你就可以实现一些令人痴迷的功能,你也可以用这个神奇的设备来感受那些功能。其中一个有趣的尝试就是在你的树莓派上安装 OSSN译注OpenSource Social Network
一个树莓派 — 多才多艺的 Raspbian Lite 版本就可以让你搭建很多有用的家庭服务(参照我的文章[树莓派项目][2])。通过搜索开源软件你就可以实现一些令人痴迷的功能,你也可以用这个神奇的设备来感受那些功能。其中一个有趣的尝试就是在你的树莓派上安装 OSSN。
### OSSN是什么
### OSSN 是什么?
[OSSN][3] 是用 PHP 写的一个快速开发社交网络软件让你可以搭建自己的社交网站。OSSN 可以用来搭建不同类型的社交应用,如:
<ruby>[开源社交网络][3]<rt>OpenSource Social Network</rt></ruby>OSSN是用 PHP 写的一个快速开发社交网络软件让你可以搭建自己的社交网站。OSSN 可以用来搭建不同类型的社交应用,如:
* 私人内部网
* 公用/公开网络
* 社区
OSSN 支持的功能:
* 照片
@ -35,13 +35,11 @@ OSSN 支持的功能:
* 搜索
* 聊天
OSSN 运行在 LAMP 服务器上。硬件需求很简单,却能提供强大的用户界面,也友好支持移动端。
### 我们需要准备什么
这个项目很简单,而且由于我们只安装远程 web 服务,因此我们只需要一些便宜的零件就够了。我使用的是树莓派 3B+,但是用树莓派 3A+ 或其他更新的板应该也可以。
这个项目很简单,而且由于我们只安装远程 Web 服务,因此我们只需要一些便宜的零件就够了。我使用的是树莓派 3B+,但是用树莓派 3A+ 或其他更新的板应该也可以。
硬件:
@ -49,76 +47,66 @@ OSSN 运行在 LAMP 服务器上。硬件需求很简单,却能提供强大的
* 一张 SD 卡(最好是性能好点的卡,至少 16 GB
* 一台有 SFTP 软件(如免费的 [Filezilla][4])的桌面 PC用来把安装包传到你的树莓派上
### 操作步骤
我们首先搭建一个传统的 LAMP 服务器,然后配置数据库用户和安装 OSSN。
#### 1\. 安装 Raspbian Buster Lite 操作系统
#### 1安装 Raspbian Buster Lite 操作系统
你可以直接参照我的文章[在你的树莓派上安装 Raspbian Buster Lite][5]。
为了确保你的系统是最新的ssh 登录到树莓派后在终端输入下面的命令:
```bash
sudo apt-get update
sudo apt-get upgrade
```
#### 2\. 安装 LAMP 服务
#### 2安装 LAMP 服务
LAMPLinux、Apache、MySQL、PHP服务通常与 MySQL 数据库配合。在我们的项目中,我们选择 MariaDB因为它更轻量完美支持树莓派。
#### 3\. 安装 Apache 服务:
安装 Apache 服务:
```
`sudo apt-get install apache2 -y`
sudo apt-get install apache2 -y
```
你可以通过在浏览器输入 `http://<<YouRpiIPAddress>>` 来检查 Apache 是否安装正确:
![][6]
#### 4\. 安装 PHP:
安装 PHP
```
`sudo apt-get install php -y`
sudo apt-get install php -y
```
#### 5\. 安装 MariaDB 服务和 PHP connector:
安装 MariaDB 服务和 PHP connector
```
`sudo apt-get install mariadb-server php-mysql -y`
sudo apt-get install mariadb-server php-mysql -y
```
#### 6\. 安装 phpMyAdmin:
安装 phpMyAdmin
在 OSSN 中 phpMyAdmin 不是强制安装的,但我建议你安装,因为它可以简化数据库的管理。
```
`sudo apt-get install phpmyadmin`
sudo apt-get install phpmyadmin
```
在 phpMyAdmin 配置界面,执行以下步骤:
* 按下空格和 OK 选择 apache强制
* 在 dbconfig-common 选择,配置 phpMyAdmin 的数据库。
* 输入想设置的密码,按下 OK。
* 再次输入 phpMyAdmin 密码来确认,按下 OK。
* 按下空格和 OK 选择 apache强制
* 在 dbconfig-common 选择“Yes”,配置 phpMyAdmin 的数据库。
* 输入想设置的密码,按下 OK
* 再次输入 phpMyAdmin 密码来确认,按下 OK
#### 7\. 为 phpMyAdmin 用户添加数据库权限来管理数据库:
为 phpMyAdmin 用户添加数据库权限来管理数据库:
我们用 root 用户连接 MariaDB默认没有密码来设置权限。
```
sudo mysql -uroot -p
grant all privileges on *.* to 'phpmyadmin'@'localhost';
@ -126,11 +114,10 @@ flush privileges;
quit
```
#### 8\. 最后,重启 Apache 服务:
最后,重启 Apache 服务:
```
`sudo systemctl restart apache2.service`
sudo systemctl restart apache2.service
```
在浏览器输入 `http://<<YouRpiIPAddress>>/phpmyadmin/` 来检查 phpMyAdmin 是否正常:
@ -139,52 +126,44 @@ quit
默认的 phpMyAdmin 登录凭证:
* 用户名phpmyadmin
* 用户名:`phpmyadmin`
* 密码:在 phpMyAdmin 安装步骤中你设置的密码
### 安装 OSSN 所需的其他包和配置 PHP
#### 3、安装 OSSN 所需的其他包和配置 PHP
在第一次配置 OSSN 前,我们还需要在系统上安装一些所需的包:
* PHP 版本 5.67.0 或 7.1
* PHP 版本 5.67.0 或 7.1
* MYSQL 5 及以上
* APACHE
* MOD_REWRITE
* 需要打开 PHP 扩展 cURL 和 Mcrypt
* PHP GD 扩展
* PHP ZIP 扩展
* 打开 PHP 设置 allow_url_fopen
* 打开 PHP 设置 `allow_url_fopen`
* PHP JSON 支持
* PHP XML 支持
* PHP OpenSSL
在终端输入以下命令来安装上述包:
```
`sudo apt-get install php7.3-curl php7.3-gd php7.3-zip php7.3-json php7.3-xml`
sudo apt-get install php7.3-curl php7.3-gd php7.3-zip php7.3-json php7.3-xml
```
#### 1\. 打开 MOD_REWRITE:
打开 mod_rewrite
```
`sudo a2enmod rewrite`
sudo a2enmod rewrite
```
#### 2\. 修改默认的 Apache 配置,使用 mod_rewrite:
修改默认的 Apache 配置,使用 mod_rewrite
```
`sudo nano /etc/apache2/sites-available/000-default.conf`
sudo nano /etc/apache2/sites-available/000-default.conf
```
#### 3\. 在 **000-default.conf** 文件中添加下面的内容:
`000-default.conf` 文件中添加下面的内容:
```ini
<VirtualHost *:80>
@ -192,18 +171,17 @@ quit
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# SECTION TO ADD --------------------------------
# 需要添加的部分开始 --------------------------------
<Directory /var/www/html>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Require all granted
</Directory>
# END SECTION TO ADD --------------------------------
# 需要添加的部分结束 --------------------------------
</VirtualHost>
```
#### 4\. 安装 Mcrypt:
安装 Mcrypt
```bash
sudo apt install php-dev libmcrypt-dev php-pear
@ -211,45 +189,42 @@ sudo pecl channel-update pecl.php.net
sudo pecl install mcrypt-1.0.2
```
#### 5\. 在 `/etc/php/7.3/apache2/php.ini` 文件中添加 `extension=mcrypt.so`(或取消注释)打开 Mcrypt 模块:
打开 Mcrypt 模块:
`/etc/php/7.3/apache2/php.ini` 文件中 `extension=mcrypt.so`(或取消注释):
```bash
sudo nano /etc/php/7.3/apache2/php.ini
```
**allow_url_fopen** 应该已经在 `/etc/php/7.3/apache2/php.ini` 文件中打开了。OpenSSL 应该在 php7.3 中安装了。
#### 6\. 我建议的另一个设置是把 PHP 最大上传文件数修改为 16 MB
`allow_url_fopen` 应该已经在 `/etc/php/7.3/apache2/php.ini` 文件中打开了。OpenSSL 应该在 php7.3 中安装了。
我建议的另一个设置是把 PHP 最大上传文件数修改为 16 MB
```
`sudo nano /etc/php/7.3/apache2/php.ini`
sudo nano /etc/php/7.3/apache2/php.ini
```
#### 7\. 搜索到 **upload_max_filesize** 所在的行,参照下面的设置:
搜索到 `upload_max_filesize` 所在的行,参照下面的设置:
```
`upload_max_filesize = 16M`
upload_max_filesize = 16M
```
#### 8\. 保存并退出,重启 Apache
保存并退出,重启 Apache
```
`sudo systemctl restart apache2.service`
sudo systemctl restart apache2.service
```
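重启之后,可以顺便确认一下所需的 PHP 扩展和 Apache 的 rewrite 模块都已经加载(下面只是一个检查示例,扩展名请以你的实际安装为准):

```
php -m | grep -E 'curl|gd|zip|json|xml|mcrypt'   # 列出已启用的 PHP 扩展
apachectl -M | grep rewrite                      # 确认 rewrite 模块已加载
```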
### 安装 OSSN
#### 4、安装 OSSN
#### 1\. 创建数据库,设置用户:
##### 创建数据库,设置用户
回到 phpmyadmin web页面浏览器输入 `http://<<YouRpiIPAddress>>/phpmyadmin/`)并登录:
回到 phpmyadmin web 页面(浏览器输入 `http://<<YouRpiIPAddress>>/phpmyadmin/`)并登录:
用户名: phpmyadmin
密码:在 phpMyAdmin 安装步骤中你设置的密码
- 用户名: `phpmyadmin`
- 密码:在 phpMyAdmin 安装步骤中你设置的密码
点击数据库标签页:
@ -261,13 +236,11 @@ sudo nano /etc/php/7.3/apache2/php.ini
现在为 OSSN 创建一个数据库用户,我使用下面的凭证:
用户名: ossn_db_user
密码: ossn_db_password
- 用户名: `ossn_db_user`
- 密码: `ossn_db_password`
在终端输入下面的命令如果你没有修改过密码root 密码应该仍然是空):
```bash
sudo mysql -uroot -p
CREATE USER 'ossn_db_user'@'localhost' IDENTIFIED BY 'ossn_db_password';
@ -276,34 +249,30 @@ flush privileges;
quit
```
#### 2\. 安装 OSSN 软件:
##### 安装 OSSN 软件
在你 PC 上从 [OSSN 下载页面][10] 下载 OSSN 安装压缩文件,保存为文件 `ossn-v5.2-1577836800.zip`
使用你习惯的 SFTP 软件把整个压缩文件通过 SFTP 传到树莓派的新目录 `/home/pi/download` 下。常用的默认 SFTP 连接参数是:
* 主机:你树莓派的 IP 地址
* 用户名pi
* 用户名:`pi`
* 密码raspberry如果没有修改过默认密码
* 端口: 22
在终端输入:
```bash
cd /home/pi/download/ #Enter directory where OSSN installation files have been transferred
unzip ossn-v5.2-1577836800.zip #Extracts all files from zip
cd /var/www/html/ #Enter Apache web directory
sudo rm index.html #Removes Apache default page - we'll use OSSN one
cd /home/pi/download/ # 进入上传的 OSSN 安装文件的目录。
unzip ossn-v5.2-1577836800.zip # 从压缩包中提取所有文件
cd /var/www/html/ # 进入 Apache Web 目录
sudo rm index.html # 删除 Apache 默认页面 - 我们将使用 OSSN
sudo cp -R /home/pi/download/ossn-v5.2-1577836800/* ./ #Copy installation files to web directory
sudo chown -R www-data:www-data ./
```
创建数据文件夹OSSN 需要一个文件夹来存放数据。出于安全目的OSSN 建议这个文件夹创建在公开文档根目录之外。所以,我们在 `/opt` 下创建。
```bash
sudo mkdir /opt/ossn_data
sudo chown -R www-data:www-data /opt/ossn_data/
@ -337,7 +306,7 @@ sudo chown -R www-data:www-data /opt/ossn_data/
![][17]
*本文首发在 [peppe8o.com][18]。已获得转载授权。*
本文首发在 [peppe8o.com][18]。已获得转载授权。
--------------------------------------------------------------------------------
@ -346,7 +315,7 @@ via: https://opensource.com/article/20/3/raspberry-pi-open-source-social
作者:[Giuseppe Cassibba][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ethernet consortium announces completion of 800GbE spec)
[#]: via: (https://www.networkworld.com/article/3538529/ethernet-consortium-announces-completion-of-800gbe-spec.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Ethernet consortium announces completion of 800GbE spec
======
The specification for 800GbE doubles the maximum speed of the current Ethernet standard, but also tweaks other aspects including latency.
The industry-backed Ethernet Technology Consortium has announced the completion of a specification for 800 Gigabit Ethernet technology.
Based on many of the technologies used in the current top-end 400 Gigabit Ethernet protocol, the new spec is formally known as 800GBASE-R. The consortium that designed it (then known as the 25 Gigabit Ethernet Consortium) was also instrumental in developing the 25, 50, and 100 Gigabit Ethernet protocols and includes Broadcom, Cisco, Google, and Microsoft among its members.
**[ Now see [the hidden cause of slow internet and how to fix it][1].]**
The 800GbE spec adds new media access control (MAC) and physical coding sublayer (PCS) methods, which tweak these functions to distribute data across eight physical lanes running at a native 106.25Gbps. (A lane can be a copper twisted pair or, in optical cables, a strand of fiber or a wavelength.) The 800GBASE-R specification is built on two 400 GbE 2xClause PCSs to create a single MAC which operates at a combined 800Gbps.
And while the focus is on eight 106.25G lanes, it's not locked in. It is possible to run 16 lanes at half the speed, or 53.125Gbps.
The new standard offers half the latency of the 400G Ethernet specification, and it also cuts the forward error correction (FEC) overhead on networks running at 50 Gbps, 100 Gbps, and 200 Gbps by half, thus reducing the packet-processing load on the NIC.
The lower latency will feed the need for speed in latency-sensitive applications like [high-performance computing][2] and artificial intelligence, where lots of data needs to be moved around as fast as possible.
Doubling from 400G to 800G wasn't too great of a technological leap. It meant adding more lanes at the same transfer rate, with a few tweaks. But breaking a terabit, something Cisco and other networking firms have been talking about for a decade, will require a significant reworking of the technology and won't be an easy fix.
It likely won't be cheap, either. 800G works with existing hardware, and 400GbE switches are not cheap, running as high as six figures. Moving past the terabit barrier with a major revision to the technology will likely be even more expensive. But for hyperscalers and HPC customers, that's par for the course.
The ETC didn't say when to expect new hardware supporting the 800G, but given its modest change to existing specs, it could appear this year, assuming the pandemic-induced shutdown doesn't throw a monkey wrench into plans.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3538529/ethernet-consortium-announces-completion-of-800gbe-spec.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3107744/internet/the-hidden-cause-of-slow-internet-and-how-to-fix-it.html
[2]: https://www.networkworld.com/article/3444399/high-performance-computing-do-you-need-it.html
[3]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Face-to-face collaboration for community to become more impactful Chip Childers)
[#]: via: (https://www.linux.com/articles/face-to-face-collaboration-for-community-to-become-more-impactful-chip-childers/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
Face-to-face collaboration for community to become more impactful Chip Childers
======
“Anytime there's a crisis, there's also an opportunity for open source communities to find ways to be helpful,” affirms Chip Childers, the Executive Director of Cloud Foundry Foundation. He believes the remarkable work the Cloud Foundry community has done over the years to enhance enterprise developer productivity will now help those who have adopted the platform to combat COVID-19.
“It (the platform) allows them to move quickly to respond to changing conditions, whether those are market conditions, or in this case, a global pandemic. We're actually pretty proud of a lot of the end-users and how they're able to use the software more efficiently now,” he says.
Prior to becoming the Executive Director, Childers served as the CTO for a little over five years. He feels “it's a very exciting time to be stepping into the role,” and has already chalked out a robust strategy to take the Foundation to the next level.
For starters, Google has upped its membership of Cloud Foundry and is now at the technical tier. “We were quite happy to have them decide to upgrade and we appreciate their support on that,” says Childers.
He is also happy with how Kubernetes and Cloud Foundry are coming together.
“Cloud Foundry and Kubernetes are now becoming complementary. The Cloud Foundry developer platform should run on the Kube infrastructure platform. Both communities have reached the point where they're ready to see this get blended,” says Childers.
**Unswerving Support**
VMware ranks among the largest contributors to communities. Last year, it acquired Pivotal, which is one of the biggest stakeholders in the Cloud Foundry project. So, what are the implications for Cloud Foundry of a player that has stakes in both Kubernetes and Cloud Foundry?
“It's a clear indication the teams that used to work for Pivotal are being intermixed with the teams that were at VMware, and through some of their other acquisitions like Heptio, they're all working together on the shared vision of creating the Tanzu platform. This happens to align well with the vision that other Cloud Foundry distributions have — like the way SUSE builds Kubernetes-based infrastructure and layers Cloud Foundry on top for developer productivity,” explains Childers.
Childers believes Paul Fazzone's promotion to Chairman of the Board will provide an additional boost to Cloud Foundry.
“Paul heads R&D for VMware's modern applications business unit. He's been on our board for a long time. So it's great that we're going to continue to see the investment in the open source community not just at the business level from VMware, but also in terms of supporting the contributors, be it developers, product managers, project management support, or marketing support. They're not wavering in their commitment to open source across the board, and Cloud Foundry is one of those communities that will benefit from that,” he states.
**A Three-Pronged Strategy**
The core mission of the Foundation is to bring a world-class enterprise developer experience to as many enterprise developers as possible. Childers, therefore, considers “enterprise developer productivity for Kubernetes infrastructure our new north star” and has devised a three-pronged strategy to align with it.
“The first thing is that we will focus on working with the contributing community — the Foundation staff, key member contacts and participants — to improve their experience.
Next is to build upon the inclusivity and diversity work that the Foundation has done over the years. “Weve always had a number of different audiences to whom weve tried to describe the work of the project: line of business leaders, CIOs, IT operations, and enterprise developers. It is important to make a little bit of a dent in the broader technology industry,” avers Childers.
The third and final prong of Childers' strategy is to encourage developers or companies participating in Cloud Foundry efforts to also participate in Istio (an open platform that connects, manages, and secures microservices), Kubernetes, and the various scenes within the community.
“There's a massive overlap, which I think will be very beneficial to everyone. The learnings that developers helping to code Kubernetes or Istio learn during that process should flow in such a way that the CF community builds the developer experience on top of those systems. From this perspective, cross-pollination is really, really important,” says Childers.
Childers is also planning a tectonic shift in the way Foundation contributors work. “While now it's going to become a little bit harder as we deal with some of the challenges around the pandemic, I'm very excited about spending a lot of our effort focused on increasing the impact of the face-to-face collaboration time that we're able to create for the community,” adds Childers.
--------------------------------------------------------------------------------
via: https://www.linux.com/articles/face-to-face-collaboration-for-community-to-become-more-impactful-chip-childers/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,194 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create a SDN on Linux with open source)
[#]: via: (https://opensource.com/article/20/4/quagga-linux)
[#]: author: (M Umer https://opensource.com/users/noisybotnet)
Create a SDN on Linux with open source
======
Make your Linux system act like a router with the open source routing stack Quagga.
![Coding on a computer][1]
Network routing protocols fall into two main categories: interior gateway protocols and exterior gateway protocols. Interior gateway protocols are used by routers to share information within a single autonomous system. If you are running Linux, you can make your system behave as a router through the open source (GPLv2) routing stack [Quagga][2].
### What is Quagga?
Quagga is a [routing software suite][3] and a fork of [GNU Zebra][4]. It provides implementations of all major routing protocols such as Open Shortest Path First (OSPF), Routing Information Protocol (RIP), Border Gateway Protocol (BGP), and Intermediate System to Intermediate System (IS-IS) for Unix-like platforms.
Although Quagga implements the routing protocols for both IPv4 and IPv6, it doesn't act as a complete router. A true router not only implements all the routing protocols but also has the ability to forward network traffic. Quagga only implements the routing stack, and the job of forwarding network traffic is handled by the Linux kernel.
### Architecture
Quagga implements the different routing protocols through protocol-specific daemons. The daemon name is the same as the routing protocol followed by the letter "d." Zebra is the core and a protocol-independent daemon that provides an [abstraction layer][5] to the kernel and presents the Zserv API over TCP sockets to Quagga clients. Each protocol-specific daemon is responsible for running the relevant protocol and building the routing table based on the information exchanged.
![Quagga architecture][6]
### Setup
This tutorial implements the OSPF protocol to configure dynamic routing using Quagga. The setup includes two CentOS 7.7 hosts, named Alpha and Beta. Both hosts share access to the **192.168.122.0/24** network.
**Host Alpha:**
IP: 192.168.122.100/24
Gateway: 192.168.122.1
**Host Beta:**
IP: 192.168.122.50/24
Gateway: 192.168.122.1
### Install the package
First, install the Quagga package on both hosts. It is available in the CentOS base repo:
```
yum install quagga -y
```
### Enable IP forwarding
Next, enable IP forwarding on both hosts, since that job will be performed by the Linux kernel:
```
sysctl -w net.ipv4.ip_forward=1
sysctl -p
```
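Note that `sysctl -w` only changes the running kernel; to make the setting survive a reboot, you can also drop it into a sysctl configuration file. A minimal sketch, assuming the standard `/etc/sysctl.d/` layout on CentOS 7 (the file name here is arbitrary):

```
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/90-ip-forward.conf
sysctl --system    # reloads settings from all sysctl configuration files
```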
### Configuration
Now, go into the **/etc/quagga** directory and create the configuration files for your setup. You need three files:
* **zebra.conf**: Quagga's daemon configuration file, which is where you'll define the interfaces and their IP addresses and IP forwarding
* **ospfd.conf**: The protocol configuration file, which is where you'll define the networks that will be offered through the OSPF protocol
* **daemons**: Where you'll specify the relevant protocol daemons that are required to run
On host Alpha,
```
 [root@alpha]# cat /etc/quagga/zebra.conf
interface eth0
 ip address 192.168.122.100/24
 ipv6 nd suppress-ra
interface eth1
 ip address 10.12.13.1/24
 ipv6 nd suppress-ra
interface lo
ip forwarding
line vty
[root@alpha]# cat /etc/quagga/ospfd.conf
interface eth0
interface eth1
interface lo
router ospf
 network 192.168.122.0/24 area 0.0.0.0
 network 10.12.13.0/24 area 0.0.0.0
line vty
[root@alpha ~]# cat /etc/quagga/daemons
zebra=yes
ospfd=yes
```
On host Beta,
```
[root@beta quagga]# cat zebra.conf
interface eth0
 ip address 192.168.122.50/24
 ipv6 nd suppress-ra
interface eth1
 ip address 10.10.10.1/24
 ipv6 nd suppress-ra
interface lo
ip forwarding
line vty
[root@beta quagga]# cat ospfd.conf
interface eth0
interface eth1
interface lo
router ospf
 network 192.168.122.0/24 area 0.0.0.0
 network 10.10.10.0/24 area 0.0.0.0
line vty
[root@beta ~]# cat /etc/quagga/daemons
zebra=yes
ospfd=yes
```
### Configure the firewall
To use the OSPF protocol, you must allow it in the firewall:
```
firewall-cmd --add-protocol=ospf --permanent
firewall-cmd --reload
```
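You can confirm that the rule is active with:

```
firewall-cmd --list-all
```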
Now, start the zebra and ospfd daemons.
```
# systemctl start zebra
# systemctl start ospfd
```
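To have the routing daemons come back automatically after a reboot, and to check that the two routers have formed an OSPF adjacency, something like the following should work (the `vtysh` shell is shipped with the quagga package; this is only a sketch of one way to verify):

```
# systemctl enable zebra ospfd
# vtysh -c 'show ip ospf neighbor'
```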
Look at the route table on both hosts using:
```
[root@alpha ~]# ip route show  
default via 192.168.122.1 dev eth0 proto static metric 100
10.10.10.0/24 via 192.168.122.50 dev eth0 proto zebra metric 20
10.12.13.0/24 dev eth1 proto kernel scope link src 10.12.13.1
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.100 metric 100
```
You can see that the routing table on Alpha contains an entry of **10.10.10.0/24** via **192.168.122.50** offered through protocol **zebra**. Similarly, on host Beta, the table contains an entry of network **10.12.13.0/24** via **192.168.122.100**.
```
[root@beta ~]# ip route show
default via 192.168.122.1 dev eth0 proto static metric 100
10.10.10.0/24 dev eth1 proto kernel scope link src 10.10.10.1
10.12.13.0/24 via 192.168.122.100 dev eth0 proto zebra metric 20
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.50 metric 100
```
### Conclusion
As you can see, the setup and configuration are relatively simple. To add complexity, you can add more network interfaces to the router to provide routing for more networks. You can also implement BGP and RIP protocols using the same method.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/quagga-linux
作者:[M Umer][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/noisybotnet
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://www.quagga.net/
[3]: https://en.wikipedia.org/wiki/Quagga_(software)
[4]: https://www.gnu.org/software/zebra/
[5]: https://en.wikipedia.org/wiki/Abstraction_layer
[6]: https://opensource.com/sites/default/files/uploads/quagga_arch.png (Quagga architecture)

View File

@ -0,0 +1,207 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to compress files on Linux 5 ways)
[#]: via: (https://www.networkworld.com/article/3538471/how-to-compress-files-on-linux-5-ways.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to compress files on Linux 5 ways
======
There are a number of tools that you use to compress files on Linux systems, but they don't all behave the same way or yield the same level of compression. In this post, we compare five of them.
There are quite a few commands on Linux for compressing files. One of the newest and most effective is **xz**, but they all have advantages for both saving disk space and preserving files for later use. In this post, we compare the compression commands and point out the significant differences.
### tar
The tar command is not specifically a compression command. It's generally used to pull a number of files into a single file for easy transport to another system or to back the files up as a related group. It also offers compression as a feature, which makes a lot of sense, and adding the **z** option is what makes this happen.
When compression is added to a **tar** command with the **z** option, tar uses **gzip** to do the compressing.
You can use **tar** to compress a single file as easily as a group though this offers no particular advantage over using **gzip** directly. To use **tar** for this, just identify the file as you would a group of files with a “tar cfz newtarfile filename” command like this:
```
$ tar cfz bigfile.tgz bigfile
^ ^
| |
+- new file +- file to be compressed
$ ls -l bigfile*
-rw-rw-r-- 1 shs shs 103270400 Apr 16 16:09 bigfile
-rw-rw-r-- 1 shs shs 21608325 Apr 16 16:08 bigfile.tgz
```
Note the significant reduction in the file size.
If you prefer, you can use the **tar.gz** extension, which might make the character of the file a bit more obvious, but most Linux users will probably recognize **tgz** as meaning the same thing: the combination of **tar** and **gz** to indicate that the file is a compressed tar file. You will be left with both the original file and the compressed file once the compression is complete.
To collect a number of files together and compress the resultant “tar ball” in one command, use the same basic syntax, but specify the files to be included as a group in place of the single file. Here's an example:
```
$ tar cfz bin.tgz bin/*
^ ^
| +-- files to include
+ new file
```
### zip
The **zip** command creates a compressed file while leaving the original file intact. The syntax is straightforward except that, as with **tar**, you have to remember that your original file should be the last argument on the command line.
```
$ zip ./bigfile.zip bigfile
updating: bigfile (deflated 79%)
$ ls -l bigfile bigfile.zip
-rw-rw-r-- 1 shs shs 103270400 Apr 16 11:18 bigfile
-rw-rw-r-- 1 shs shs 21606889 Apr 16 11:19 bigfile.zip
```
### gzip
The **gzip** command is very simple to use. You just type "gzip" followed by the name of the file you want to compress. Unlike the commands described above, **gzip** will compress the file "in place". In other words, the original file will be replaced by the compressed file.
```
$ gzip bigfile
$ ls -l bigfile*
-rw-rw-r-- 1 shs shs 21606751 Apr 15 17:57 bigfile.gz
```
### bzip2
As with the **gzip** command, **bzip2** will compress the file that you select "in place", leaving only the compressed file.
```
$ bzip2 bigfile
$ ls -l bigfile*
-rw-rw-r-- 1 shs shs 18115234 Apr 15 17:57 bigfile.bz2
```
### xz
A relative newcomer to the compression command team, **xz** is a front runner in terms of how well it compresses files. Like the two previous commands, you only need to supply the file name to the command. Again, the original file is compressed in place.
```
$ xz bigfile
$ ls -l bigfile*
-rw-rw-r-- 1 shs shs 13427236 Apr 15 17:30 bigfile.xz
```
For large files, you are likely to notice that **xz** takes longer to run than other compression commands, but the compression results are very impressive.
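If the run time matters more than squeezing out the last few bytes, **xz** can trade compression level for speed with the usual **-1** through **-9** presets. A quick sketch (the file names are placeholders, and results will vary with your file and hardware):

```
$ time xz -1 -k bigfile           # fastest preset; -k keeps the original file
$ mv bigfile.xz bigfile-fast.xz   # set the first result aside for comparison
$ time xz -9 -k bigfile           # slowest preset, usually the smallest output
```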
### Comparisons to consider
Most people have heard it said that "size isn't everything". So, let's compare file size as well as some other issues to be considered when you make plans for how you want to compress your files.
The stats shown below all relate to compressing the single file, bigfile, used in the example commands shown above. This file is a large and fairly random text file. Compression rates will depend to some extent on the content of the files.
#### Size reduction
When compared, the various compression commands shown above yielded the following results. The percentages represent how the compressed files compare with the original file.
```
-rw-rw-r-- 1 shs shs 103270400 Apr 16 14:01 bigfile
------------------------------------------------------
-rw-rw-r-- 1 shs shs 18115234 Apr 16 13:59 bigfile.bz2 ~17%
-rw-rw-r-- 1 shs shs 21606751 Apr 16 14:00 bigfile.gz ~21%
-rw-rw-r-- 1 shs shs 21608322 Apr 16 13:59 bigfile.tgz ~21%
-rw-rw-r-- 1 shs shs 13427236 Apr 16 14:00 bigfile.xz ~13%
-rw-rw-r-- 1 shs shs 21606889 Apr 16 13:59 bigfile.zip ~21%
```
The **xz** command wins, ending up at only 13% of the size of the original file, but all of these compression commands reduced the original file size quite significantly.
#### Whether the original files are replaced
The **bzip2**, **gzip** and **xz** commands all replace the original files with compressed versions. The **tar** and **zip** commands do not.
#### Run time
The **xz** command seems to take more time than the other commands to compress the files. For bigfile, the approximate times were:
```
command run-time
tar 4.9 seconds
zip 5.2 seconds
bzip2 22.8 seconds
gzip 4.8 seconds
xz 50.4 seconds
```
Decompression times are likely to be considerably smaller than compression times.
#### File permissions
Regardless of what permissions you have set on your original file, permissions for the compressed file will be based on your **umask** setting, except for **bzip2** which retains the original file's permissions.
#### Compatibility with Windows
The **zip** command creates a file which can be used (i.e., decompressed) on Windows systems as well as Linux and other Unix systems without having to install other tools which may or may not be available.
### Decompressing files
The commands for decompressing files are similar to those used to compress the files. These commands would work for decompressing bigfile after the compression commands shown above were run; see the note after this list about keeping the compressed files.
* tar: **tar xf bigfile.tgz**
* zip: **unzip bigfile.zip**
* gzip: **gunzip bigfile.gz**
* bzip2: **bunzip2 bigfile.bz2**
* xz: **xz -d bigfile.xz** or **unxz bigfile.xz**
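Note that **gunzip**, **bunzip2**, and **unxz** also work "in place," removing the compressed file once it has been expanded. If you want to keep both copies, gzip (version 1.6 and later), bzip2, and xz all accept a **-k** (keep) option, for example:

```
$ xz -dk bigfile.xz    # decompress to bigfile, but keep bigfile.xz
```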
### Running your own compression comparisons
If you'd like to run some tests on your own, grab a large but replaceable file and compress it using each of the commands shown above, preferably using a new subdirectory. You might have to first install **xz** if you want to include it in the tests. This script can make the comparison easier, but will likely take a few minutes to complete.
```
#!/bin/bash
# ask user for filename
echo -n "filename> "
read filename
# you need this because some commands will replace the original file
cp $filename $filename-2
# clean up first (in case previous results are still available)
rm $filename.*
tar cvfz ./$filename.tgz $filename > /dev/null
zip $filename.zip $filename > /dev/null
bzip2 $filename
# recover original file
cp $filename-2 $filename
gzip $filename
# recover original file
cp $filename-2 $filename
xz $filename
# show results
ls -l $filename.*
# replace the original file
mv $filename-2 $filename
```
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3538471/how-to-compress-files-on-linux-5-ways.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,164 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to set up and run WordPress for your classroom)
[#]: via: (https://opensource.com/article/20/4/wordpress-virtual-machine)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
How to set up and run WordPress for your classroom
======
Follow these simple steps to customize WordPress for use in the classroom using free open source software.
![Painting art on a computer screen][1]
There are many good reasons to set up WordPress for your classroom. As more schools switch to online classes, WordPress can become the go-to content management system. Teachers using WordPress can provide a number of different educational choices to differentiate instruction for their students. Blogging is an accessible way to create content that energizes student learning. Teachers can write short stories, poems, and provide picture galleries that function as story starters. Students can comment and those comments can be moderated by their teacher.
There are free options like [WordPress.com][2] and [Edublogs][3]. However, these free versions are limited, and you may want to explore all your options. You can install [Virtualbox][4] on any Windows, macOS, or Linux computer. You can use your own computer or an extra you happen to have access to in a virtual environment.
On Linux, you can install Virtualbox from your package manager. For instance, on Debian, Elementary OS, or Ubuntu:
```
$ sudo apt install virtualbox
```
On Fedora:
```
$ sudo dnf install virtualbox
```
### Download a Wordpress image
Wordpress is easy to install, but server configuration and management can be difficult for the uninitiated. That's why there's [Turnkey Linux][5], a project dedicated to creating virtual machine images and containers of popular server software, preconfigured and ready to run. With Turnkey Linux, you just download a disk image containing the operating system and the software you want to run, and then import that image into Virtualbox.
To get started with Wordpress, download the **VM** virtual machine image from [turnkeylinux.org/wordpress][6] (in the **Builds** section). Make sure you download the image labeled **VM**, because that's the only format meant for Virtualbox.
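If you prefer a terminal to the VirtualBox GUI, the same import can be scripted with **VBoxManage**. A minimal sketch, assuming the appliance you extracted from the download is called turnkey-wordpress.ova and that you want to name the VM "wordpress-classroom" (both names are placeholders; use your actual file name):

```
$ VBoxManage import turnkey-wordpress.ova --vsys 0 --vmname "wordpress-classroom"
```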
### Import the image into Virtualbox
After installing Virtualbox, launch the application and import the virtual machine image into Virtualbox.
![][7]
Networking on the imported image is set to NAT by default. You will want to change the network settings to "bridged."
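You can make that change in the VM's network settings panel or, while the VM is powered off, from the command line. Another sketch, assuming the VM name used above and a host interface named eth0 (run `VBoxManage list bridgedifs` to see the real adapter names on your machine):

```
$ VBoxManage modifyvm "wordpress-classroom" --nic1 bridged --bridgeadapter1 eth0
```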
![Virtualbox menu][8]
After restarting the virtual machine, you are prompted to add passwords for MySQL, Adminer, and the WordPress **admin** user.
Then you see the network configuration console for the installation. Launch a web browser and navigate to the **web** address provided (in this example, it's 192.168.86.149).
![Console][9]
In a web browser, you see a login screen for your Wordpress installation. Click on the **Login** link.
![Wordpress welcome][10]
Enter **admin** as the username, followed by the password you created earlier. Click the **Login** link. On this first login as **admin**, you can choose a new password. Be sure to remember it!
![Login screen][11]
After logging in, you're presented with the WordPress Dashboard. The software will likely notify you, in the upper left corner of the window, that a new version of Wordpress exists. Update to the latest versions as prompted so your site is secure.
It's important to note that your Wordpress blog isn't visible to anyone on the Internet yet. It only exists in your local network: only people in your building who are connected to the same router or wifi access point as you can see your Wordpress site right now. The worldwide Internet can't get to it because you're behind a firewall (embedded in your router, and possibly also in your computer).
![Wordpress dashboard][12]
Following the upgrade, the application restarts, and you're ready to begin configuring WordPress to your liking.
![Wordpress configuration][13]
On the far left, there is a button to **Customize Your Site**.
There, you can choose the name of your site. You can accept the default theme, which is "Twenty Nineteen," or choose another. My favorite is "Twenty Ten," but browse through the themes available to find your personal favorite. WordPress comes with five free themes installed. You can download other free themes from the [WordPress][14][.org][15] site or choose to purchase a premium theme.
When you click the **Customize Your Site** button, you're presented with new menu options. Select **Site Identity** and change the name of your site. You might use the name of your school or classroom. There's also room to choose a byline (the credit given to the author of a blog post). You can choose the colors for your site and where you will place menus and widgets. WordPress widgets add content and features to the sidebars of your site. Homepage settings are important, as they allow you to choose between a static page that might have a description of your school or classroom or having your blog entries displayed prominently. You can add additional CSS.
![Turnkey theme][16]
You can edit your front page, add additional pages like "About," or add a blog post. You can also manage widgets, manage menus, turn comments on or off, or add a link to learn more about WordPress.
Customizing your site allows you to configure a number of options quickly and easily.
WordPress has dozens of widgets that you can place in different areas of your page. Widgets are independent sections of content that can be placed into specific areas provided by your theme. These areas are called sidebars.
### Adding content
After you have WordPress configured to your liking, you probably want to get busy creating content. The best way to do that is to head back to the WordPress Dashboard.
On the left side, near the top of the page, you see **Posts**. Select that link and a dropdown appears. Choose **Add New** to create your very first blog post.
![Add post dropdown][17]
Fill in your title in the top block and then move down to the body. It's like using a word processor. WordPress has all the tools you need to write. You can set the font size from _small_ to _huge_. You can start a paragraph with dropped capitals. The text and background color can be changed. Your posts can include quote blocks and embedded content. A wide variety of embedded content is supported so you can make your posts a dynamic multimedia experience.
![Wordpress classroom blog][18]
### Going online
So far, your Wordpress blog only exists on your local network. Anyone using the same router as you (your housemates or classroom) can see your Wordpress site by navigating to 192.168.86.149, but once you're away from that router, the site becomes inaccessible.
If you want to go online with your custom Wordpress site, you have to allow traffic through your router, and then direct that traffic to the computer running Virtualbox. If you've installed Virtualbox on a laptop, then your website would disappear any time you closed your laptop, which is why servers that never get shut down exist. But if this is just a fun lesson on how to run a Wordpress site, then having a website that's only available during class hours is fine.
If you have access to your router, then you can log into it and make the adjustments yourself. If you don't own or control your router, then you must talk to your systems administrator for access.
A _router_ is the box you got from your internet service provider. You might also call it your _modem_.
Every device is different, so there's no way for me to definitively tell you what you need to click on to adjust your settings. Generally, you access your home router through a web browser. Your router's address is often printed on the bottom of the router and begins with either 192.168 or 10.
Navigate to the router address and log in with the credentials you were provided when you got your internet service. It's often as simple as `admin` with a numeric password (sometimes this password is printed on the router, too). If you don't know the login, call your internet provider and ask for details.
Different routers use different terms for the same thing; keywords to look for are **Port forwarding**, **Virtual server**, and **Firewall**. Whatever your router calls it, you want to accept traffic coming to port 80 of your router and forward that traffic to the same port of your virtual machine's IP address (in this example, that is 192.168.86.149, but it could be different for you).
![Example router setting screen][19]
Now you're allowing traffic through the web port of your router's firewall. To view your Wordpress site over the Internet, get your worldwide IP address. You can get your global IP by going to the site [icanhazip.com][20]. Then go to a different computer, open a browser, and navigate to that IP address. As long as Virtualbox is running, you'll see your Wordpress site on the Internet. You can do this from anywhere in the world, because your site is on the Internet now.
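If you'd rather stay in the terminal, the same information is one command away:

```
$ curl icanhazip.com    # prints your public (worldwide) IP address
```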
Most websites use a domain name so you don't have to remember global IP addresses. You can purchase a domain name from services like [webhosting.coop][21] or [gandi.net][22], or a temporary one from [freenom.com][23]. Mapping that to your Wordpress site, however, is out of scope for this article.
### Wordpress for everyone
[WordPress][24] is open source and is licensed under the [GNU Public License][25]. You are welcome to contribute to WordPress as either a [developer][26] or enthusiast. WordPress is committed to being as inclusive and accessible as possible.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/wordpress-virtual-machine
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
[2]: https://wordpress.com/
[3]: https://edublogs.org/
[4]: https://www.virtualbox.org/
[5]: https://www.turnkeylinux.org
[6]: https://www.turnkeylinux.org/wordpress
[7]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_1.png
[8]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_2.png (Virtualbox menu)
[9]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_3.png (Console)
[10]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_4.png (Wordpress welcome)
[11]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_5.png (Login screen)
[12]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_6.png (Wordpress dashboard)
[13]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_7.png (Wordpress configuration)
[14]: http://Wordpress.org
[15]: http://WordPress.org
[16]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_8.png (Turnkey theme)
[17]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_12.png (Add post dropdown)
[18]: https://opensource.com/sites/default/files/uploads/how_to_get_started_with_wp_in_the_classroom_13.png (Wordpress classroom blog)
[19]: https://opensource.com/sites/default/files/router-web.jpg (Example router setting screen)
[20]: http://icanhazip.com/
[21]: https://webhosting.coop/domain-names
[22]: https://www.gandi.net
[23]: http://freenom.com/
[24]: https://wordpress.org/
[25]: https://github.com/WordPress/WordPress/blob/master/license.txt
[26]: https://wordpress.org/five-for-the-future/

View File

@ -0,0 +1,174 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Is reporting 100% of code coverage reasonable?)
[#]: via: (https://opensource.com/article/20/4/testing-code-coverage)
[#]: author: (Eric Herman https://opensource.com/users/ericherman)
Is reporting 100% of code coverage reasonable?
======
The time required to reach reporting 100% of code coverage is considerably less than what I would have estimated before this exploration.
![Code going into a computer.][1]
The [Foundation for Public Code][2] works to enable open and collaborative public-purpose software for public organizations (like local governments) internationally. We do this by supporting software at the codebase level through codebase stewardship. We also publish the [Standard for Public Code][3] (draft version 0.1.4 at the time of this writing), which helps open source codebase communities build solutions that can be reused successfully by other organizations. It includes guidance for policymakers, managers, developers, designers, and vendors.
Among other things, the standard addresses [code coverage][4], or how much of the code is executed when an automated test suite runs. It's one way to measure the likelihood that the code contains undetected software bugs. In the standard's ["Use continuous integration" requirements][5], it says, "source code test and documentation coverage **should** be monitored." Additionally, the [guidance to check][6] this requirement states, "code coverage tools check whether coverage is at 100% of the code."
Over my software development career, which spans more than two decades, I have worked on codebases large and small and some with very high percentages of code coverage. Yet none of the non-trivial codebases I have contributed to have reported 100% test coverage. This made me question whether the "_check whether coverage is at 100%_" guidance would be followed.
When I think about the nature of the test coverage gaps in the codebases I have worked on, they typically have been around system states that are very difficult (and in some cases, impossible) to create. For instance, in earlier versions of Java, I recall we were required to write catch blocks for exceptions that could never be thrown.
Previously, I reasoned that 100% test coverage is something to aspire to, but it is probably not worth the cost on most codebases and may not be realistic in a few.
Coverage tools have been getting smarter and more tunable over time. Languages have been getting lighter, and libraries have been getting easier to mock and test. So how unreasonable is 100% coverage of functionality today?
### Resource exhaustion
The high-quality but low test-coverage codebases I contribute to happen to be written in C or C++. A quick glance at these codebases shows that there is a class of common low-coverage situations that I'll lump together under the umbrella of resource exhaustion: out of memory, out of disk space, etc.
Here is a simple example of code that does not check for resource exhaustion; in this case, memory allocation failure:
```
char *buf = malloc(80);
sprintf(buf, "hello, world");
```
This example code needs to allocate a small buffer, so it calls **malloc(80)**, and **malloc** usually returns a pointer to 80 bytes of memory … but that can fail. In the (unlikely) case that **malloc** returns **NULL**, the code above will proceed to call **sprintf** with a **NULL** pointer which causes a crash. It is typical in C code to do something more like this:
```
char *buf = malloc(80);
if (buf == NULL) {
    fprintf(stderr, "malloc returned NULL for 80 bytes?\n");
    return NULL;
}
sprintf(buf, "hello, world");
```
This code guards against **malloc** returning **NULL**, which is better. However, creating tests for correct behavior in the face of this kind of resource exhaustion can be really hard. It's not impossible, of course, and there are multiple approaches. Many approaches result in fragile tests, which require a lot of maintenance over time, and these tests can be very time-consuming to build in the first place.
### Exploration
Pondering this, I decided to run a little experiment to see if I could learn something about the costs and consequences of this strict, 100% criterion.
Since I do some embedded-systems development, I have a few C libraries that I've developed and reused over the years in my embedded projects. I decided to look at some of these libraries and see just how hard it would be to bring them up to 100% code coverage. In the process, I paid attention to the impact on code clarity, code structure, and performance.
#### A library with preexisting dependency injection
Step one is measuring by adding code coverage to a codebase. Since this is C, **gcc** provides quite a lot by default with the **\--coverage** option, and **lcov** (with **genhtml**) does a good job of making reports; thus, this step was easy. I expected the starting coverage to be pretty good—it was, but it had a few untested branches, as well as the predicted gaps around error conditions and error reporting.
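For readers who have not used these tools, the whole loop looks roughly like this (a sketch with placeholder file names, not the author's actual build commands):

```
# compile the library and its tests with coverage instrumentation
gcc -O0 --coverage -o run-tests tests.c library.c

# run the tests; this writes the .gcda coverage data files
./run-tests

# collect the data and render an HTML report
lcov --capture --directory . --output-file coverage.info
genhtml coverage.info --output-directory coverage-report
```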
I made error reporting pluggable, so it was easier to capture and make assertions around error messages in previously untested branches.
Since this code already allowed for pluggable implementations of **malloc** and **free**, it was straightforward to write little malloc and free wrappers that I could inject memory allocation failures into. Within an hour or two, that was covered.
In the process, I realized that there was one condition where, from the perspective of the calling client code, it was impossible to distinguish between the situation where an error occurs and one where **NULL** is a valid return value. For you C programmers, it was essentially similar to the following:
```
/* stashes a copy of the value
 * returns the previously stashed value */
char *foo_stash(foo_s *context,
                char *stash_me,
                size_t stash_me_len)
{
    char *copy = malloc(stash_me_len);
    if (copy == NULL) {
        return NULL;
    }
    memcpy(copy, stash_me, stash_me_len);
    char *previous = context->stash;
    context->stash = copy;
    /* previous may be NULL */
    return previous;
}
```
I adjusted the API to allow the error information to be explicitly available. If you are a C developer, you know there are various ways this can be accomplished. I chose an approach similar to this:
```
/* stashes a copy of the value
 * returns the previously stashed value
 * on error, the 'err' pointer is set to 1 */
char *foo_stash2(foo_s *context,
                char *stash_me,
                size_t stash_me_len,
                int *err)
{
    char *copy = malloc(stash_me_len);
    if (copy == NULL) {
        *err = 1;
        return NULL;
    }
    memcpy(copy, stash_me, stash_me_len);
    char *previous = context->stash;
    context->stash = copy;
    /* previous may be NULL */
    return previous;
}
```
Without testing for resource exhaustion, it may have taken a long time for me to notice this (now obvious) shortcoming of the API.
To get **lcov** to report 100% test coverage, I had to tell the compiler to [not inline any code][7], something I learned it does even at optimization level zero.
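With gcc, one way to do that is to add the **-fno-inline** flag to the coverage build. An illustrative compile line (again with placeholder file names, not the author's exact command):

```
# keep every function out-of-line so the coverage report attributes each line correctly
gcc -O0 -fno-inline --coverage -o run-tests tests.c library.c
```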
When embedded in actual firmware, the compiler optimized away the unused indirection; therefore, the added indirection in the source code imposed no real-world performance penalty in the compiled firmware.
Of course, this was the easy library.
#### A more typical library
Once I established a method of injecting memory allocation failures in tests, I decided to move onto another library, but one for which malloc and free were not already pluggable. I had questions. How invasive will this be to the codebase? Will it clutter the code, making it less clear? How time-consuming will it be?
While I don't always record coverage metrics, I am a big believer in testing: more than 20 years ago, I learned that my code improves if I write the tests and client code [before][8] the implementation code, and I have worked that way ever since. (In [_Test-Driven Development: By Example_][9], you can find my name in the acknowledgments.) Yet, when I added code coverage reporting to the second library, I was surprised to see that (at some point in the past) I had added a pair of functions to the library without adding tests for them. The other untested areas were, unsurprisingly, code to handle memory-allocation failure.
Writing tests for the pair of untested functions was, of course, quick and easy. The coverage tools also revealed that I had a function with an untested code branch that, given only a quick glance, contained a bug. The fix was trivial, yet I was surprised to find a bug, given the different projects where I use this library. Nonetheless, there it was, a humbling reminder that, all too often, bugs lurk in untested code.
Next up was the more challenging stuff: testing for resource exhaustion. I started by introducing some global variables for the malloc/free function pointers, as well as a variable to hold a memory-tracking object. Once that was working, I moved those variables from global scope into a context argument that was already present. Refactoring the code to allow for the necessary indirection took only a couple of hours (less time than I expected), and the complexity added was negligible.
### Reflections
My conclusion from the first library was that it was well worth the time. The code is now more flexible, the API is now more complete for the caller, and writing the failure injection harness was pretty easy.
From the second library, I was reminded that even less-pluggable code could be made testable without adding undue levels of complexity. The code improved, I fixed a bug, and I can be more confident in the code. Also, the additional modularity of being able to plug in an alternative memory allocator is a feature that may prove more valuable in the future.
Exclusion comments are a feature of **lcov** to cause coverage reporting to ignore a block of code. Interestingly, I didn't feel the need to use exclusion comments in either library.
I am more certain than ever that even very good code is improved by investing in test coverage.
Both of these codebases are small, had some modularity already, began from a point of good testing, are single-threaded, and contain no graphical UI code. If I were to try to tackle this on one of the larger, more monolithic codebases I contribute to, it would be harder and require a larger time investment. There would likely be some sections of code where I might still conclude that the best thing to do would be to "cheat" by tuning the tooling to not report on some section of code.
That said, I estimate that the time required to reach reporting 100% of code coverage is considerably less than what I would have estimated before this exploration.
If you happen to be a C coder and want to see a running example of this, including **gcov** / **lcov** usage, I extracted the out-of-memory injecting code and put it in an [example repository][10].
Have you pushed a codebase to 100% coverage by tests, or tried to? What was your experience? Please share it in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/testing-code-coverage
作者:[Eric Herman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ericherman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 (Code going into a computer.)
[2]: https://publiccode.net/
[3]: https://standard.publiccode.net
[4]: https://en.wikipedia.org/wiki/Code_coverage
[5]: https://standard.publiccode.net/criteria/continuous-integration.html#requirements
[6]: https://standard.publiccode.net/criteria/continuous-integration.html#how-to-test
[7]: https://twitter.com/Eric_Herman/status/1224983465784938496
[8]: https://opensource.com/article/20/2/automate-unit-tests
[9]: https://www.oreilly.com/library/view/test-driven-development/0321146530/
[10]: https://github.com/ericherman/context-alloc