Mirror of https://github.com/LCTT/TranslateProject.git

Commit 716f606025: Merge branch 'master' of https://github.com/LCTT/TranslateProject

published/20170131 Book review Ours to Hack and to Own.md (new file, 64 lines)
@ -0,0 +1,64 @@
Book review: Ours to Hack and to Own
============================================================



Image by: opensource.com

The age of ownership appears to be over, and I'm not just talking about the devices and software that so many of us bring into our homes and our lives. I'm also talking about the platforms and services those devices and applications rely on.

While many of the services we use are free, we have no control over them. In essence, these firms control what we see, what we hear, and what we read. Not only that: many of them are also changing the nature of work. They are using closed platforms to power a shift from full-time jobs to the [gig economy][2], one that offers little in the way of security or certainty.

This movement has wide-ranging implications for the web and for everyone who uses and relies on it. The vision of an open internet from just over two decades ago is fading and is quickly being replaced by an impenetrable curtain.

One remedy that is gaining popularity is to build [platform cooperatives][3], digital platforms owned by their users. As the book [Ours to Hack and to Own][4] lays out, the idea behind platform cooperatives shares many of the same roots as open source.

Scholar Trebor Scholz and writer Nathan Schneider have collected 40 essays exploring the rise of, and the need for, platform cooperatives as tools that ordinary people can use to promote openness and to push back against the opaqueness and restrictions of closed systems.

### Where open source fits in

At and near the core of any platform cooperative lies open source: not just open source technology, but also the principles and ethos that underlie open source: openness, transparency, cooperation, collaboration, and sharing.

In his introduction to the book, Trebor Scholz points out that:

> In opposition to the black-box systems of the Snowden-era internet, these platforms need to distinguish themselves by making their data flows transparent. They need to show where the data about customers and workers are stored, to whom they are sold, and for what purpose.

It is that transparency, so essential to open source, which helps make platform cooperatives so appealing, and such a refreshing change from many of the platforms that exist today.

Open source software inevitably plays a significant role in the vision of platform cooperatives that Ours to Hack and to Own shares. Open source software provides a fast, inexpensive way for groups to build the technical infrastructure that can power their cooperatives.

Mickey Metts illustrates this in the essay "Meet Your Friendly Neighborhood Tech Co-Op." Metts works for a firm called Agaric, which uses Drupal to build platforms for groups and small businesses that could not do so on their own. Beyond that, Metts encourages anyone who wants to build and run their own business, or a cooperative, to embrace free and open source software. Why? Because it is high quality, it is inexpensive, it is customizable, and you can connect with large communities of helpful, passionate people.

### Not always open source, but open source is always there

Not all of the essays in this book focus on, or even mention, open source; but the key elements of the open source way, cooperation, community, open governance, and digital freedom, are always simmering just below the surface.

In fact, as many of the essays in Ours to Hack and to Own argue, platform cooperatives can be important building blocks of a more open, commons-based economy and society. That can mean, in Douglas Rushkoff's words, organizations like Creative Commons compensating "for the privatization of shared intellectual resources." It can also mean cities running their own "distributed common data infrastructures," as Barcelona's CTO Francesca Bria describes it, "with systems that ensure the security, privacy, and rights of citizens' data."

### Final thoughts

If you are looking for a blueprint for changing the internet and the way we work, Ours to Hack and to Own is not it. The book is more manifesto than user guide. That said, Ours to Hack and to Own offers a glimpse of what we could do if we applied the principles of the open source way to society and to the wider world.

--------------------------------------------------------------------------------

About the author:

Scott Nesbitt: writer, editor, soldier of fortune, ocelot wrangler, husband and father, blogger, and collector of pottery. Scott is some of these things. He is also a long-time user of open source software who writes extensively about it in articles and on blogs. You can find him on Twitter and GitHub.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/1/review-book-ours-to-hack-and-own

Author: [Scott Nesbitt][a]
Translator: [darsh8](https://github.com/darsh8)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/scottnesbitt
[1]:https://opensource.com/article/17/1/review-book-ours-to-hack-and-own?rate=dgkFEuCLLeutLMH2N_4TmUupAJDjgNvFpqWqYCbQb-8
[2]:https://en.wikipedia.org/wiki/Access_economy
[3]:https://en.wikipedia.org/wiki/Platform_cooperative
[4]:http://www.orbooks.com/catalog/ours-to-hack-and-to-own/
[5]:https://opensource.com/user/14925/feed
[6]:https://opensource.com/users/scottnesbitt
@ -1,84 +1,101 @@
Commands to Check System and Hardware Information
======

Hello, Linux lovers. In this article I will discuss some things that matter to a system administrator. As we all know, being a good system administrator means knowing everything about your IT infrastructure and having all the information about your servers, both hardware and operating system. The commands below will help you discover all of your hardware and system information.

### 1 Check system information

```
$ uname -a
```

![uname command][2]

It provides you with all the information about the system: the kernel name, hostname, kernel version, kernel release, and hardware name.
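For reference, the output looks something like the following sketch (the hostname, kernel build, and architecture are illustrative; your machine will differ):

```
$ uname -a
Linux centos7 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
```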
### 2 Check hardware information

```
$ lshw
```

![lshw command][4]

Using `lshw` will show all of the hardware information on your screen.

### 3 Check block device (hard disk, flash drive) information

```
$ lsblk
```

![lsblk command][6]

The `lsblk` command prints all the information about block devices on the screen. Use `lsblk -a` to display all block devices.
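If you only need a few fields, `lsblk` can also print selected columns (a small sketch; the names below are standard `lsblk` output columns):

```
$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```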
### 4 Check CPU information

```
$ lscpu
```

![lscpu command][8]

`lscpu` displays all the CPU information on the screen.

### 5 Check PCI information

```
$ lspci
```

![lspci command][10]

All the network adapter cards, USB cards, and graphics cards are known as PCI devices. To see their information, use `lspci`.

`lspci -v` will give detailed information about the PCI cards.

`lspci -t` will show them in a tree format.

### 6 Check USB information

```
$ lsusb
```

![lsusb command][12]

To see information about all the USB controllers and devices connected to the machine, we use `lsusb`.

### 7 Check SCSI information

```
$ lsscsi
```

![lsscsi][14]

To see SCSI information, type `lsscsi`. `lsscsi -s` also shows the sizes of the partitions.

### 8 Check file system information

```
$ fdisk -l
```

![fdisk command][16]

Using `fdisk -l` will show information about the file systems. Although the main function of `fdisk` is to modify file systems, it can also create new partitions and delete old ones (details in my future tutorials).

That's it for now, my Linux lovers. I suggest you check out my articles about other Linux commands **[here][17]** and **[here][18]**.

--------------------------------------------------------------------------------

via: http://linuxtechlab.com/commands-system-hardware-info/

Author: [Shusain][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
@ -0,0 +1,59 @@
Demand for Security Experts Is Growing Fast
============================================================



> The Open Source Jobs Report from Dice and The Linux Foundation finds high future demand for professionals with security experience.

The demand for security professionals is real. Of the more than 75,000 positions on [Dice.com][4], 15 percent are security jobs. "Every year in the U.S., 40,000 jobs for information security analysts go unfilled, and employers are struggling to fill 200,000 other cybersecurity-related roles, according to cybersecurity data tool [CyberSeek][5]," says [Forbes][6]. We know that demand for security experts is growing fast, but the level of interest is still low.

### Security is the area to watch

In my experience, very few college students are interested in security jobs, which is why so many people should see security as an opportunity. Entry-level tech pros gravitate toward business analyst or systems analyst roles because they believe that if they want to learn and apply core IT concepts, they have to stick to analyst roles or roles closer to product development. That is not the case.

In fact, if you are interested in becoming a business leader, security is the area to watch. As a security professional, you have to understand the business end to end, and you have to look at the big picture to give your company an advantage.

### Be fearless

Analyst and security roles are not quite the same. Out of necessity, companies continue to merge engineering and security roles. Businesses are automating infrastructure and code deployments at an unprecedented pace, which raises the importance of security as part of the daily life of every technology professional. In our [Linux Foundation Open Source Jobs Report][7], 42 percent of hiring managers said that future demand for professionals with security experience is high.

There has never been a more exciting time to be in security. If you keep up with the latest tech news, you will notice how much of it relates to security: data breaches, system failures, and fraud. Security teams work in constantly changing, fast-paced environments. The real challenge is being proactive about security, finding and eliminating vulnerabilities, while maintaining or even improving the end-user experience.

### Growth is coming

As with every aspect of technology, security will continue to grow along with the cloud. Businesses are moving to the cloud at an ever-increasing rate, and that is exposing more security holes in organizations than in the past. As the cloud matures, security becomes more and more important.

Regulations are also tightening, and personally identifiable information (PII) is being defined ever more broadly. Many companies are finding that they must invest in security to stay compliant and avoid becoming a headline. Facing big fines, reputational damage, and executives' job security, companies are allocating more and more budget to security tooling and staffing.

### Training and support

Even if you do not choose a dedicated security role, you are bound to find yourself needing to write secure code, and if you do not have that skill, you will be fighting an uphill battle. Learning on the job is fine if your company offers it, but I recommend a combination of training, mentorship, and constant practice. If you do not use your security skills, you will quickly lose them amid the fast-evolving sophistication of malicious attacks.

For those looking for a security job, my advice is to find the people in your organization who are strongest in engineering, development, or architecture; interact with them and with other teams, do the hands-on work, and be sure to keep the big picture in mind. Be the person in your organization who stands out: someone who can write secure code while also thinking about strategy and overall infrastructure health.

### The endgame

More and more companies are investing in security and trying to fill the open roles in their technology teams. If you are interested in management, security is the area worth watching. Executive leadership wants to know that their company is playing by the rules, that their data is secure, and that they are protected against breaches and loss.

Security that is implemented wisely and with strategic thinking gets noticed. Security is paramount for executives and consumers alike, and I encourage anyone interested in security to get trained and contribute.

_[Download][2] the full 2017 Open Source Jobs Report now._

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/os-jobs-report/2017/11/security-jobs-are-hot-get-trained-and-get-noticed

Author: [BEN COLLEN][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://www.linux.com/users/bencollen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:http://bit.ly/2017OSSjobsreport
[3]:https://www.linux.com/files/images/security-skillspng
[4]:http://www.dice.com/
[5]:http://cyberseek.org/index.html#about
[6]:https://www.forbes.com/sites/jeffkauflin/2017/03/16/the-fast-growing-job-with-a-huge-skills-gap-cyber-security/#292f0a675163
[7]:http://media.dice.com/report/the-2017-open-source-jobs-report-employers-prioritize-hiring-open-source-professionals-with-latest-skills/
published/20171205 How to Use the Date Command in Linux.md (new file, 158 lines)
@ -0,0 +1,158 @@
How to Use the date Command
======



In this article, we will walk through some examples of how to use the `date` command in Linux. The `date` command can be used to print or set the system date and time. It is simple to use; see the examples and syntax below.

By default, when the `date` command is run without any arguments, it prints the current system date and time:

```shell
$ date
Sat 2 Dec 12:34:12 CST 2017
```

### Syntax

```
Usage: date [OPTION]... [+FORMAT]
or: date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]
Display the current time in the given FORMAT, or set the system date.
```

### Examples

The following examples show you how to use the `date` command to find the date and time at various points in the past or future.

#### 1. Find the date 5 weeks from now

```shell
date -d "5 weeks"
Sun Jan 7 19:53:50 CST 2018
```

#### 2. Find the date 5 weeks and 4 days from now

```shell
date -d "5 weeks 4 days"
Thu Jan 11 19:55:35 CST 2018
```

#### 3. Get the date of next month

```shell
date -d "next month"
Wed Jan 3 19:57:43 CST 2018
```

#### 4. Get the date of last Sunday

```shell
date -d last-sunday
Sun Nov 26 00:00:00 CST 2017
```

The `date` command also has many formatting options; the following examples show you how to format its output.

#### 5. Display the date in `yyyy-mm-dd` format

```shell
date +"%F"
2017-12-03
```

#### 6. Display the date in `mm/dd/yyyy` format

```shell
date +"%m/%d/%Y"
12/03/2017
```

#### 7. Display only the time

```shell
date +"%T"
20:07:04
```

#### 8. Display the day of the year

```shell
date +"%j"
337
```

#### 9. Formatting options

| Format | Description |
|---------------|----------------|
| `%%` | A literal percent sign (`%`). |
| `%a` | The abbreviated weekday name (e.g., `Sun`). |
| `%A` | The full weekday name (e.g., `Sunday`). |
| `%b` | The abbreviated month name (e.g., `Jan`). |
| `%B` | The locale's full month name (e.g., `January`). |
| `%c` | The date and time (e.g., `Thu Mar 3 23:05:25 2005`). |
| `%C` | The century; like `%Y`, but with the last two digits omitted (e.g., `20`). |
| `%d` | The day of the month (e.g., `01`). |
| `%D` | The date; same as `%m/%d/%y`. |
| `%e` | The day of the month, space-padded; same as `%_d`. |
| `%F` | The full date; same as `%Y-%m-%d`. |
| `%g` | The last two digits of the year (see `%G`). |
| `%G` | The year (see `%V`); usually used together with `%V`. |
| `%h` | Same as `%b`. |
| `%H` | The hour (`00`..`23`). |
| `%I` | The hour (`01`..`12`). |
| `%j` | The day of the year (`001`..`366`). |
| `%k` | The hour, space-padded (` 0`..`23`); same as `%_H`. |
| `%l` | The hour, space-padded (` 1`..`12`); same as `%_I`. |
| `%m` | The month (`01`..`12`). |
| `%M` | The minute (`00`..`59`). |
| `%n` | A newline. |
| `%N` | Nanoseconds (`000000000`..`999999999`). |
| `%p` | The locale's equivalent of `AM` or `PM`; blank if not known. |
| `%P` | Like `%p`, but lowercase. |
| `%r` | The locale's 12-hour clock time (e.g., `11:11:04 PM`). |
| `%R` | The 24-hour hour and minute; same as `%H:%M`. |
| `%s` | The number of seconds since 1970-01-01 00:00:00 UTC. |
| `%S` | The second (`00`..`60`). |
| `%t` | A tab. |
| `%T` | The time; same as `%H:%M:%S`. |
| `%u` | The day of the week (`1`..`7`); 1 is Monday. |
| `%U` | The week of the year, with Sunday as the first day of the week (`00`..`53`). |
| `%V` | The week of the year, with Monday as the first day of the week (`01`..`53`). |
| `%w` | The day of the week as a number (`0`..`6`); 0 is Sunday. |
| `%W` | The week of the year, with Monday as the first day of the week (`00`..`53`). |
| `%x` | The locale's date representation (e.g., `12/31/99`). |
| `%X` | The locale's time representation (e.g., `23:13:48`). |
| `%y` | The last two digits of the year (`00`..`99`). |
| `%Y` | The year. |
| `%z` | The numeric time zone in `+hhmm` format (e.g., `-0400`). |
| `%:z` | The numeric time zone in `+hh:mm` format (e.g., `-04:00`). |
| `%::z` | The numeric time zone in `+hh:mm:ss` format (e.g., `-04:00:00`). |
| `%:::z` | The numeric time zone, with `:` to the necessary precision (e.g., `-04`, `+05:30`). |
| `%Z` | The alphabetic time zone abbreviation (e.g., `EDT`). |
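These specifiers can be combined freely in a single format string. For example (the output shown is illustrative and depends on your locale and clock):

```shell
date +"%A, %B %d %Y %T %Z"
Sunday, December 03 2017 20:15:42 CST
```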
#### 10. Set the system time

You can also use `date` to set the system time manually, using the `--set` option. For example, the following will set the system time to 4:22 PM on August 30, 2017.

```shell
date --set="20170830 16:22"
```
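Related to the `%s` specifier in the table above, GNU `date` can also convert an epoch timestamp back into a readable date using the `@` prefix (the timestamp below is an arbitrary example, and the output depends on your time zone):

```shell
date -d @1512345678
Sun Dec  3 18:01:18 CST 2017
```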
Of course, if you are using one of our [VPS hosting services][1], you can always contact our expert Linux administrators (via live chat or by opening a support ticket) and ask them anything about the `date` command. They are available 24×7 and will help you right away. (LCTT note: advertising from the original article.)

PS. If you liked this post, please share it using the buttons below or leave a comment. Thank you.

--------------------------------------------------------------------------------

via: https://www.rosehosting.com/blog/use-the-date-command-in-linux/

Author: [rosehosting][a]
Translator: [lujun9972](https://github.com/lujun9972)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://www.rosehosting.com
[1]:https://www.rosehosting.com/hosting-services.html
@ -0,0 +1,143 @@
How to Use File Compression in Linux
=======



> Linux systems offer many options for compressing files; the key is choosing the one that best suits your needs.

If you have any doubt about the commands and options available on Linux systems for file compression, you might want to look at the output of the `apropos compress` command. Chances are you'll be surprised by how many commands there are for compressing and decompressing files, as well as for comparing and examining compressed files, searching through their contents, and even converting them from one compression format to another (e.g., from `.z` to `.gz`).

Here are just the entries that apply to bzip2 compression. Add zip, gzip, and xz to the mix, and you have a lot of choices.

```
$ apropos compress | grep ^bz
bzcat (1) - decompresses files to stdout
bzcmp (1) - compare bzip2 compressed files
bzdiff (1) - compare bzip2 compressed files
bzegrep (1) - search possibly bzip2 compressed files for a regular expression
bzexe (1) - compress executable files in place
bzfgrep (1) - search possibly bzip2 compressed files for a regular expression
bzgrep (1) - search possibly bzip2 compressed files for a regular expression
bzip2 (1) - a block-sorting file compressor, v1.0.6
bzless (1) - file perusal filter for crt viewing of bzip2 compressed text
bzmore (1) - file perusal filter for crt viewing of bzip2 compressed text
```

On my Ubuntu system, the `apropos compress` command lists more than 60 commands.

### Compression algorithms

Compression is not a one-size-fits-all affair. Some compression tools are lossy, such as those used to shrink mp3 files while leaving listeners with something close to the original sound. But the algorithms used to compress or archive user files on the Linux command line must be able to reproduce the original data exactly. In other words, they must be lossless.

How is that done? Imagine a run of 300 identical characters on a line being compressed to something like "300x". An algorithm that simple would not be of much benefit for most files, though, because files rarely contain long runs of identical characters; real content sits somewhere between that extreme and complete randomness. Compression algorithms are far more complex, and they have only been getting more so since compression was first introduced in the early days of Unix.
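A quick way to see this effect for yourself is to compress a file that contains nothing but a repeated character and compare sizes (a small sketch; the file names are arbitrary and the exact sizes will vary by tool version):

```
# build a 10 MB file of repeated "0" characters, then compress a copy of it
$ dd if=/dev/zero bs=1M count=10 2>/dev/null | tr '\0' '0' > zeroes.txt
$ gzip -k zeroes.txt      # -k keeps the original file
$ ls -l zeroes.txt zeroes.txt.gz
```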
### Compression commands on Linux

The most commonly used file compression commands on Linux systems include `zip`, `gzip`, `bzip2`, and `xz`. All of these commands work in a similar way, but there are trade-offs to weigh: how much the files are compressed (how much space you save), how long the compression takes, and how compatible the compressed files are with the other systems you need to use them on.

Sometimes compressing a file doesn't pay off in time or effort. In the example below, the compressed file is actually larger than the original. This is not a common situation, but it can happen, especially when the file contents approach a certain degree of randomness.

```
$ time zip bigfile.zip bigfile
adding: bigfile (deflated 0%)
real 0m0.055s
user 0m0.000s
sys 0m0.016s
$ ls -l bigfile*
-rw-r--r-- 1 root root 0 Dec 20 22:36 bigfile
-rw------- 1 root root 164 Dec 20 22:41 bigfile.zip
```

Notice that the compressed version of the file (`bigfile.zip`) is larger than the original (`bigfile`). If compression increases a file's size or only shrinks it by a small percentage, the only benefit may be the convenience of an online backup. If you see a message like the following after compressing a file, you are not gaining much from the compression.

```
(deflated 1%)
```

The content of a file plays a large role in how well it compresses. In the example above, the file size increased because the content was too random. Compress a file that contains nothing but zeroes, and you'll see an astonishing compression ratio. In such an extreme case, each of the three commonly used compression tools does an excellent job.

```
-rw-rw-r-- 1 shs shs 10485760 Dec 8 12:31 zeroes.txt
-rw-rw-r-- 1 shs shs 49 Dec 8 17:28 zeroes.txt.bz2
-rw-rw-r-- 1 shs shs 10219 Dec 8 17:28 zeroes.txt.gz
-rw-rw-r-- 1 shs shs 1660 Dec 8 12:31 zeroes.txt.xz
-rw-rw-r-- 1 shs shs 10360 Dec 8 12:24 zeroes.zip
```

Impressive as that is, you are unlikely to see files of more than 10 million bytes compress down to fewer than 50 bytes, because files like that are essentially impossible.

In more realistic cases, the size differences vary overall but are not significant, as with this admittedly small jpg image file.

```
-rw-r--r-- 1 shs shs 13522 Dec 11 18:58 image.jpg
-rw-r--r-- 1 shs shs 13875 Dec 11 18:58 image.jpg.bz2
-rw-r--r-- 1 shs shs 13441 Dec 11 18:58 image.jpg.gz
-rw-r--r-- 1 shs shs 13508 Dec 11 18:58 image.jpg.xz
-rw-r--r-- 1 shs shs 13581 Dec 11 18:58 image.jpg.zip
```

Do the same with a large text file, and you will see significant differences.

```
$ ls -l textfile*
-rw-rw-r-- 1 shs shs 8740836 Dec 11 18:41 textfile
-rw-rw-r-- 1 shs shs 1519807 Dec 11 18:41 textfile.bz2
-rw-rw-r-- 1 shs shs 1977669 Dec 11 18:41 textfile.gz
-rw-rw-r-- 1 shs shs 1024700 Dec 11 18:41 textfile.xz
-rw-rw-r-- 1 shs shs 1977808 Dec 11 18:41 textfile.zip
```

In this case, `xz` reduced the file size considerably more than the other compression commands did, with `bzip2` coming in second.

### Looking at compressed files

The commands ending in "more" (`bzmore` and the others) let you view the contents of compressed files without decompressing them first.

```
bzmore (1) - file perusal filter for crt viewing of bzip2 compressed text
lzmore (1) - view xz or lzma compressed (text) files
xzmore (1) - view xz or lzma compressed (text) files
zmore (1) - file perusal filter for crt viewing of compressed text
```

These commands do a fair amount of work to decompress a file's contents for display. On the other hand, they don't leave a decompressed copy of the file on your system; they simply decompress what is needed on the fly.

```
$ xzmore textfile.xz | head -1
Here is the agenda for tomorrow's staff meeting:
```

### Comparing compressed files

Several of the compression toolkits include a diff command (e.g., `xzdiff`). These tools pass the work off to `cmp` and `diff` for the comparison rather than doing an algorithm-specific comparison. For example, the `xzdiff` command compares bz2 files as easily as it compares xz files.
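As a quick sketch (the file names here are placeholders), the usage mirrors plain `diff`:

```
$ xzdiff notes-v1.txt.xz notes-v2.txt.xz
$ zdiff old-log.gz new-log.gz
```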
### How to choose the best Linux compression tool

Which compression tool is best depends on your work. In some cases, the choice depends on the content of the data you are compressing. In more cases, it depends on your organization's conventions, unless you are particularly sensitive about disk space. Here are some general recommendations:

**zip** is best for files that need to be shared with, or used on, Windows systems.

**gzip** may be best for files you want to use only on Unix/Linux systems. Although bzip2 is approaching ubiquity, gzip looks likely to be around for a long time.

**bzip2** uses a different algorithm than gzip and produces smaller files, but it takes longer to compress them.

**xz** generally offers the best compression ratio, but it also takes considerably longer. It is newer than the other tools and may not yet exist on the systems you work on.

### A note

When you are compressing files, you have a lot of choices; in rare cases, you won't actually end up saving disk space.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3240938/linux/how-to-squeeze-the-most-out-of-linux-file-compression.html

Author: [Sandra Henry-Stocker][1]
Translator: [singledo][2]
Proofreader: [wxy][4]

This article was originally compiled by [LCTT][3] and is proudly presented by Linux中国

[1]:https://www.networkworld.com
[2]:https://github.com/singledo
[3]:https://github.com/LCTT/TranslateProject
[4]:https://github.com/wxy
@ -0,0 +1,136 @@
Set Ubuntu Derivatives Back to Default with Resetter
======



*The Resetter tool returns Ubuntu, Linux Mint (and other Ubuntu-based distributions) to their initial configurations.*

How many times have you dived into Ubuntu (or an Ubuntu derivative), configured things and installed software, only to find that your desktop (or server) platform isn't quite what you wanted? This situation can be problematic when you already have a lot of user files on the machine. In that case you have a choice: either back up all of your data, reinstall the operating system, and copy your data back onto the machine, or use a tool like [Resetter][1] to do the same job.

Resetter is a new tool (developed by a Canadian developer who goes by the name "[gaining][2]"), written in Python and PyQt, that resets Ubuntu, Linux Mint (and a few other Ubuntu-based derivatives) back to their initial configurations. Resetter offers two different reset options: Automatic and Custom. With the Automatic option, the tool will:

* Remove user-installed applications
* Delete users and home directories
* Create a default backup user
* Auto-install missing pre-installed apps (MPIAs)
* Remove non-default users
* Remove snap packages

The Custom option will:

* Remove user-installed applications, or let you select which applications to remove
* Remove old kernels
* Let you choose which users to delete
* Delete users and home directories
* Create a default backup user
* Let you create a custom backup user
* Auto-install MPIAs, or let you select which MPIAs to install
* Remove non-default users
* View all dependent packages
* Remove snap packages

I will walk you through the process of installing and using Resetter. However, I must tell you that this tool is in a very early beta. Even so, Resetter is definitely worth a try. In fact, I encourage you to test the application and submit bug reports (either through [GitHub][3] or directly to the developer's email address, [gaining7@outlook.com][4]).

It should also be noted that the currently supported derivatives are:

* Debian 9.2 (stable) Gnome edition
* Linux Mint 17.3+ (support for Mint 18.3 coming soon)
* Ubuntu 14.04+ (although I found 17.10 not supported)
* Elementary OS 0.4+
* Linux Deepin 15.4+

With that said, let's install and use Resetter. I will be demonstrating on the [Elementary OS Loki][5] platform.

### Installation

There are a few ways to install Resetter. The method I chose is via the `gdebi` helper application. Why? Because it picks up all the dependencies necessary for installation. First, we must install that particular tool. Open a terminal window and issue the command:

```
sudo apt install gdebi
```

Once that is installed, point your browser to the [Resetter download page][6] and download the most recent version of the software. Once it has downloaded, open your file manager, navigate to the downloaded file, and click (or double-click, depending on how you've configured your desktop) the `resetter_XXX-stable_all.deb` file (where XXX is the release number). The `gdebi` application will open (Figure 1). Click the Install Package button, type your `sudo` password, and Resetter will install.

![gdebi][8]

*Figure 1: Installing Resetter with gdebi*

Once it is installed, you are ready for what comes next.

### Using Resetter

**Remember: before you do this, back up your data. Don't say I didn't warn you.**

From a terminal window, issue the command `sudo resetter`. You will be prompted for your `sudo` password. Once Resetter opens, it will automatically detect your distribution (Figure 2).

![Resetter][11]

*Figure 2: The Resetter main window*

We will test the Resetter flow with an automatic reset. From the main window, click Automatic Reset. The application will give a clear warning that it is about to reset your operating system (in my case, Elementary OS 0.4.1 Loki) to its factory defaults (Figure 3).

![warning][13]

*Figure 3: Resetter warns you before continuing.*

Click "Yes", and Resetter displays all the packages it will remove (Figure 4). If you have no problem with that, click OK and the reset begins.

![remove packages][15]

*Figure 4: All of the packages to be removed in order to reset Elementary OS to factory defaults.*

During the reset, the application displays a progress window (Figure 5). Depending on how much you have installed, the process should not take too long.

![progress][17]

*Figure 5: The Resetter progress window*

When the process completes, Resetter displays a new username and password for logging back in to your newly reset distribution (Figure 6).

![new user][19]

*Figure 6: The new username and password*

Click OK and then, when prompted, click "Yes" to reboot the system. When prompted to log in, use the new credentials the Resetter application gave you. After a successful login, you will need to re-create your original user. That user's home directory is still intact, so all you need to do is issue the command `sudo useradd USERNAME` (where USERNAME is the username). Once you've done that, issue the command `sudo passwd USERNAME` (where USERNAME is the username). With the user/password set, you can log out and log back in as your old user (with the same home directory you had before resetting the operating system).

### My results

I must confess that after adding the password to my old user (and testing it by switching to that user with the `su` command), I was unable to log in to the Elementary OS desktop with that user. To solve the problem, I logged in with the Resetter-created user, moved the old user's home directory aside, deleted the old user (with the command `sudo deluser jack`), and re-created the old user (with the command `sudo useradd -m jack`).

After doing that, I checked the original home directory, only to find that its ownership had changed from `jack.jack` to `1000.1000`. That was easily fixed with the command `sudo chown -R jack.jack /home/jack`. The lesson? If you use Resetter and find you cannot log in with your old user (after re-creating the user and giving it a new password), make sure to change the ownership of that user's home directory.
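Put together, the recovery sequence implied above looks like the following sketch (assuming, as in my case, that the old user is `jack`; substitute your own username):

```
sudo mv /home/jack /home/jack.old        # set the old home directory aside
sudo deluser jack                        # remove the broken account
sudo useradd -m jack                     # re-create the user with a fresh home
sudo passwd jack                         # give the account a password
sudo cp -a /home/jack.old/. /home/jack/  # bring the old files back
sudo chown -R jack.jack /home/jack       # fix ownership (it may show up as 1000.1000)
```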
Outside of that one problem, Resetter did an outstanding job of taking Elementary OS Loki back to its default state. Although Resetter is in beta, it is a rather impressive tool. Give it a try and see if you have the same outstanding results I did.

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2017/12/set-ubuntu-derivatives-back-default-resetter

Author: [Jack Wallen][a]
Translator: [stevenzdg988](https://github.com/stevenzdg988)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://www.linux.com/users/jlwallen
[1]:https://github.com/gaining/Resetter
[2]:https://github.com/gaining
[3]:https://github.com
[4]:mailto:gaining7@outlook.com
[5]:https://elementary.io/
[6]:https://github.com/gaining/Resetter/releases/tag/v1.1.3-stable
[7]:/files/images/resetter1jpg-0
[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_1_0.jpg?itok=3c_qrApr (gdebi)
[9]:/licenses/category/used-permission
[10]:/files/images/resetter2jpg
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_2.jpg?itok=bmawiCYJ (Resetter)
[12]:/files/images/resetter3jpg
[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_3.jpg?itok=2wlbC3Ue (warning)
[14]:/files/images/resetter4jpg-1
[15]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_4_1.jpg?itok=f2I3noDM (remove packages)
[16]:/files/images/resetter5jpg
[17]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_5.jpg?itok=3FYs5_2S (progress)
[18]:/files/images/resetter6jpg
[19]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_6.jpg?itok=R9SVZgF1 (new username)
[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
@ -1,32 +1,34 @@
Finding the Path Where yum Installed a Package on CentOS/RHEL
======

**I have [installed the htop package on CentOS/RHEL][1]. Now I want to find out where the package's files were installed. Is there an easy way to tell where yum installed a package on CentOS/RHEL?**

The [yum command][2] is an interactive, open source, rpm-based package manager for CentOS/RHEL and clones. It can automatically perform the following operations for you:

1. Core system file updates
2. Package updates
3. Installation of new packages
4. Removal of old packages
5. Queries on installed and/or available packages

yum is similar to other high-level package managers such as the [apt-get command][3] and the [apt command][4].

### Where yum installs a package

For demonstration purposes, install `htop` with the following command:

```
# yum install htop
```

To list the files installed by a yum package named htop, run the following `rpm` command:

```
# rpm -q {packageNameHere}
# rpm -ql htop
```

Sample outputs:

```
/usr/bin/htop
@ -37,18 +39,17 @@
/usr/share/doc/htop-2.0.2/README
/usr/share/man/man1/htop.1.gz
/usr/share/pixmaps/htop.png
```

### How to see the files installed by a yum package using the repoquery command

First install the yum-utils package using the [yum command][2]:

```
# yum install yum-utils
```

Sample outputs:

```
Resolving Dependencies
@ -60,9 +61,9 @@
---> Package libxml2-python.x86_64 0:2.9.1-6.el7_2.3 will be installed
---> Package python-kitchen.noarch 0:1.1.1-5.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================
 Package          Arch    Version           Repository                    Size
=======================================================================================
@ -71,56 +72,61 @@
Installing for dependencies:
 libxml2-python   x86_64  2.9.1-6.el7_2.3   rhui-rhel-7-server-rhui-rpms  247 k
 python-kitchen   noarch  1.1.1-5.el7       rhui-rhel-7-server-rhui-rpms  266 k

Transaction Summary
=======================================================================================
Install 1 Package (+2 Dependent packages)

Total download size: 630 k
Installed size: 3.1 M
Is this ok [y/d/N]: y
Downloading packages:
(1/3): python-kitchen-1.1.1-5.el7.noarch.rpm         | 266 kB 00:00:00
(2/3): libxml2-python-2.9.1-6.el7_2.3.x86_64.rpm     | 247 kB 00:00:00
(3/3): yum-utils-1.1.31-42.el7.noarch.rpm            | 117 kB 00:00:00
---------------------------------------------------------------------------------------
Total                                        1.0 MB/s | 630 kB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python-kitchen-1.1.1-5.el7.noarch           1/3
  Installing : libxml2-python-2.9.1-6.el7_2.3.x86_64       2/3
  Installing : yum-utils-1.1.31-42.el7.noarch              3/3
  Verifying  : libxml2-python-2.9.1-6.el7_2.3.x86_64       1/3
  Verifying  : yum-utils-1.1.31-42.el7.noarch              2/3
  Verifying  : python-kitchen-1.1.1-5.el7.noarch           3/3

Installed:
  yum-utils.noarch 0:1.1.31-42.el7

Dependency Installed:
  libxml2-python.x86_64 0:2.9.1-6.el7_2.3   python-kitchen.noarch 0:1.1.1-5.el7

Complete!
```

### How do I list the contents of an installed package using yum?

Now run the `repoquery` command:

```
# repoquery --list htop
```

Or:

```
# repoquery -l htop
```

Sample outputs:

[![yum where is package installed][5]][5]

*Determining the path where a yum package was installed, using the repoquery command*

You can also use the `type` command or the `command` command to find just the location of a given binary file, such as `httpd` or `htop`:

```
$ type -a httpd
@ -128,19 +134,19 @@
$ type -a htop
$ command -V htop
```
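Going the other direction, if you already know a file's path and want the package that owns it, the following reverse queries may help (a sketch; `rpm -qf` is standard, and `yum provides` accepts file names or globs):

```
# which package owns /usr/bin/htop?
$ rpm -qf /usr/bin/htop
$ yum provides '*/htop'
```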
### About the author

The author is the creator of nixCraft, a seasoned sysadmin, and a trainer for the Linux operating system and Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][6], [Facebook][7], [Google+][8].

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/yum-determining-finding-path-that-yum-package-installed-to/

Author: [cyberciti][a]
Translator: [cyleung](https://github.com/cyleung)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/faq/centos-redhat-linux-install-htop-command-using-yum/
@ -151,3 +157,5 @@
[6]:https://twitter.com/nixcraft
[7]:https://facebook.com/nixcraft
[8]:https://plus.google.com/+CybercitiBiz
@ -1,6 +1,3 @@
Anatomy of a Program in Memory
============================================================
@ -0,0 +1,116 @@
Creating a YUM repository from an ISO & an online repo
======

The YUM tool is one of the most important tools for CentOS/RHEL/Fedora. Although it has been replaced with DNF in the latest builds of Fedora, that does not at all mean it has run its course. It is still widely used for installing rpm packages; we have already discussed YUM with examples in an earlier tutorial ([**READ HERE**][1]).

In this tutorial, we are going to learn to create a local YUM repository, first by using an ISO image of the OS, and then by creating a mirror of an online yum repository.

### Creating a YUM repository from a DVD ISO

We are using a CentOS 7 DVD for this tutorial, and the same process should work on RHEL 7 as well.

Firstly, create a directory named YUM in the root folder:

```
$ mkdir /YUM
```

Then mount the CentOS 7 ISO:

```
$ mount -t iso9660 -o loop /home/dan/Centos-7-x86_x64-DVD.iso /mnt/iso/
```

Next, copy the packages from the mounted ISO to the /YUM folder.
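The exact copy command isn't shown in the original; something along these lines should work, assuming the DVD has a top-level Packages directory, as CentOS 7 ISOs do:

```
# copy every package from the mounted ISO into the local repo directory
$ cp -ar /mnt/iso/Packages/* /YUM/
```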
Once all the packages have been copied to the system, we will install the packages required for creating the repository. Go to /YUM and install the following RPM packages:

```
$ rpm -ivh deltarpm
$ rpm -ivh python-deltarpm
$ rpm -ivh createrepo
```

Once these packages have been installed, we will create a file named **local.repo** in the **/etc/yum.repos.d** folder with all the yum information:

```
$ vi /etc/yum.repos.d/local.repo
```

```
[localrepo]
name=Local YUM
baseurl=file:///YUM
gpgcheck=0
enabled=1
```

Save & exit the file. Next we will create the repo data by running the following command:

```
$ createrepo -v /YUM
```

It will take some time to create the repo data. Once the process finishes, run:

```
$ yum clean all
```

to clean the cache, and then run:

```
$ yum repolist
```

to check the list of all repositories. You should see the repo "localrepo" in the list.

### Creating a mirror YUM repository from an online repository

The process involved in creating this repository is similar to creating one from an ISO image, with one exception: we will fetch our rpm packages from an online repository instead of an ISO.

Firstly, we need to find an online repository to get the latest packages. It is advised to find an online yum repository closest to your location, in order to optimize download speeds. We will be using the one below; you can select the one nearest to your location from the [CENTOS MIRROR LIST][2].

After selecting a mirror, we will sync that mirror with our system using rsync, but before you do that, make sure you have plenty of space on your server:

```
$ rsync -avz rsync://mirror.fibergrid.in/centos/7.2/os/x86_64/Packages/s/ /YUM
```

The sync will take quite a while (maybe an hour), depending on your internet speed. After the syncing is completed, we will update our repo data:

```
$ createrepo -v /YUM
```

Our yum repository is now ready to use. We can create a cron job to update the repository automatically at a set time, daily or weekly, as per your needs.

To create a cron job for syncing the repository, run:

```
$ crontab -e
```

and add the following line:

```
30 12 * * * rsync -avz rsync://mirror.fibergrid.in/centos/7.2/os/x86_64/Packages/ /YUM
```

This will sync the repository every day at 12:30 PM. Also remember to create the repository configuration file in /etc/yum.repos.d, as we did above.

That's it folks, you now have your own yum repository to use. Please share this article if you like it, and leave your comments/queries in the comment box below.

--------------------------------------------------------------------------------

via: http://linuxtechlab.com/creating-yum-repository-iso-online-repo/

Author: [Shusain][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/using-yum-command-examples/
[2]:http://mirror.centos.org/centos/
@ -1,4 +1,4 @@
Translating by Torival python-hwinfo : Display Summary Of Hardware Information In Linux
======
To date, we have covered most of the utilities that discover Linux system hardware information & configuration, but there are still plenty of commands available for the same purpose.
@ -1,3 +1,4 @@
Translating by FelixYFZ
How to test internet speed in Linux terminal
======
Learn how to use the speedtest cli tool to test internet speed in the Linux terminal. Also includes a one-liner python command to get speed details right away.
@ -1,83 +0,0 @@
Translating by zjon

What is a firewall?
======
Network-based firewalls have become almost ubiquitous across US enterprises for their proven defense against an ever-increasing array of threats.

A recent study by network testing firm NSS Labs found that up to 80% of US large businesses run a next-generation firewall. Research firm IDC estimates the firewall and related unified threat management market was a $7.6 billion industry in 2015 and is expected to reach $12.7 billion by 2020.

**[ If you're upgrading, here's [What to consider when deploying a next generation firewall][1]. ]**

### What is a firewall?

Firewalls act as a perimeter defense tool that monitors traffic and either allows it or blocks it. Over the years, the functionality of firewalls has increased, and now most firewalls can not only block a set of known threats and enforce advanced access control list policies, but they can also deeply inspect individual packets of traffic and test packets to determine if they are safe. Most firewalls are deployed as network hardware that processes traffic and software that allows end users to configure and manage the system. Increasingly, software-only versions of firewalls are being deployed in highly virtualized environments to enforce policies on segmented networks or in the IaaS public cloud.

Advancements in firewall technology have created new options for firewall deployments over the past decade, so now there are a handful of options for end users looking to deploy a firewall. These include:

### Stateful firewalls

When firewalls were first created, they were stateless: the hardware that the traffic traversed while being inspected monitored each packet of network traffic individually, blocking or allowing it in isolation. Beginning in the mid-to-late 1990s, one of the first major advancements in firewalls was the introduction of state. Stateful firewalls examine traffic in a more holistic context, taking into account the operating state and characteristics of the network connection to provide a more holistic firewall. Maintaining this state allows the firewall, for example, to allow certain traffic for certain users while blocking that same traffic for other users.

### Next-generation firewalls

Over the years, firewalls have added myriad new features, including deep packet inspection, intrusion detection and prevention, and inspection of encrypted traffic. Next-generation firewalls (NGFWs) refer to firewalls that have integrated many of these advanced features into the firewall.

### Proxy-based firewalls

These firewalls act as a gateway between end users who request data and the source of that data. All traffic is filtered through this proxy before being passed on to the end user. This protects the client from exposure to threats by masking the identity of the original requester of the information.

### Web application firewalls

These firewalls sit in front of specific applications, as opposed to sitting on an entry or exit point of a broader network. Whereas proxy-based firewalls are typically thought of as protecting end-user clients, WAFs are typically thought of as protecting the application servers.

### Firewall hardware

Firewall hardware is typically a straightforward server that can act as a router for filtering traffic and running firewall software. These devices are placed at the edge of a corporate network, between a router and the Internet service provider's connection point. A typical enterprise may deploy dozens of physical firewalls throughout a data center. Users need to determine what throughput capacity they need the firewall to support, based on the size of the user base and the speed of the Internet connection.

### Firewall software

Typically, end users deploy multiple firewall hardware endpoints and a central firewall software system to manage the deployment. This central system is where policies and features are configured, where analysis can be done, and where threats can be responded to.

### Next-generation firewalls

Over the years, firewalls have added myriad new features, including deep packet inspection, intrusion detection and prevention, and inspection of encrypted traffic. Next-generation firewalls (NGFWs) refer to firewalls that have integrated many of these advanced features, and here is a description of some of them.

### Stateful inspection

This is the basic firewall functionality in which the device blocks known unwanted traffic.

### Anti-virus

This functionality, which searches for known viruses and vulnerabilities in network traffic, is aided by the firewall receiving updates on the latest threats and being constantly updated to protect against them.

### Intrusion Prevention Systems (IPS)

This class of security products can be deployed as a standalone product, but IPS functionality is increasingly being integrated into NGFWs. Whereas basic firewall technologies identify and block certain types of network traffic, an IPS uses more granular security measures, such as signature tracing and anomaly detection, to prevent unwanted threats from entering corporate networks. IPS systems have replaced the previous version of this technology, Intrusion Detection Systems (IDS), which focused more on identifying threats than on containing them.

### Deep Packet Inspection (DPI)

DPI can be part of, or used in conjunction with, an IPS, but it has nonetheless become an important feature of NGFWs because of its ability to provide granular analysis of traffic, most specifically the headers of traffic packets and the traffic data itself. DPI can also be used to monitor outbound traffic to ensure sensitive information is not leaving corporate networks, a technology referred to as Data Loss Prevention (DLP).

### SSL Inspection

Secure Sockets Layer (SSL) inspection is the idea of inspecting encrypted traffic to test for threats. As more and more traffic is encrypted, SSL inspection is becoming an important component of the DPI technology being implemented in NGFWs. SSL inspection acts as a buffer that decrypts the traffic before it is delivered to the final destination for testing.

### Sandboxing

This is one of the newer features being rolled into NGFWs; it refers to the ability of a firewall to take certain unknown traffic or code and run it in a test environment to determine whether it is nefarious.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3230457/lan-wan/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html

Author: [Brandon Butler][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://www.networkworld.com/author/Brandon-Butler/
[1]:https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
@ -1,159 +0,0 @@
How To Create A Custom Ubuntu Live CD Image
======


Today let us discuss how to create a custom Ubuntu live CD image (ISO). We have already done this using [**Pinguy Builder**][1], but it seems to be discontinued now; I haven't seen any recent updates from the official Pinguy Builder site. Fortunately, I found an alternative tool for creating Ubuntu live CD images. Meet **Cubic**, an acronym for **C**ustom **Ub**untu **I**SO **C**reator, a GUI application for creating a customized bootable Ubuntu live CD (ISO) image.

Cubic is being actively developed, and it offers many options to easily create a customized Ubuntu live CD. It has an integrated command-line chroot environment where you can do all your customization, such as installing new packages and kernels, adding more background wallpapers, and adding additional files and folders. It has an intuitive GUI that allows effortless navigation (back and forth with a mouse click) during the live image creation process. You can create a new custom image or modify existing projects. Since it is used to make Ubuntu live images, I believe it can also be used with other Ubuntu flavours and derivatives, such as Linux Mint.

### Install Cubic

The Cubic developer has made a PPA to ease the installation process. To install Cubic on your Ubuntu system, run the following commands one by one in your terminal:

```
sudo apt-add-repository ppa:cubic-wizard/release
```

```
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6494C6D6997C215E
```

```
sudo apt update
```

```
sudo apt install cubic
```

### Create A Custom Ubuntu Live CD Image Using Cubic

Once installed, launch Cubic from the application menu or dock. This is how Cubic looks on my Ubuntu 16.04 LTS desktop system.

Choose a directory for your new project. It is the directory where your files will be saved.

[![][2]][3]

Please note that Cubic will not create a live CD of your current system. Instead, it creates a custom live CD from an Ubuntu installation CD, so you should have a recent ISO image on hand.

Choose the path where you have stored your Ubuntu installation ISO image. Cubic will automatically fill out all the details of your custom OS. You can change the details if you want. Click Next to continue.

[![][2]][4]

Next, the compressed Linux file system from the source installation medium will be extracted to your project's directory (i.e., **/home/ostechnix/custom_ubuntu** in our case).

[![][2]][5]

Once the file system is extracted, you will land in the chroot environment automatically. If you don't see a terminal prompt, press the ENTER key a few times.

[![][2]][6]

From here you can install any additional packages, add background images, add software source repository lists, add the latest Linux kernel to your live CD, and make any other customizations.

For example, I want vim installed on my live CD, so I am going to install it now.

[![][2]][7]

We don't need "sudo", because we are already in a root environment.

Similarly, install any additional Linux kernel versions if you want.

```
apt install linux-image-extra-4.10.0-24-generic
```

Also, you can update the software sources list (add or remove repositories):

[![][2]][8]

After modifying the sources list, don't forget to run the "apt update" command to refresh it:

```
apt update
```

You can also add files or folders to the live CD. Copy the files/folders (right-click on them and choose Copy, or press CTRL+C), right-click in the terminal (inside the Cubic window), choose **Paste file(s)**, and finally click Copy in the bottom corner of the Cubic wizard.

[![][2]][9]

**Note for Ubuntu 17.10 users:**

On an Ubuntu 17.10 system, DNS lookup may not work in the chroot environment. If you are making a custom Ubuntu 17.10 live image, you need to point to the correct resolv.conf file:

```
ln -sr /run/systemd/resolve/resolv.conf /run/systemd/resolve/stub-resolv.conf
```

To verify that DNS resolution works, run:

```
cat /etc/resolv.conf
ping google.com
```

Add your own wallpapers if you want. To do so, go to the **/usr/share/backgrounds/** directory:

```
cd /usr/share/backgrounds
```

and drag and drop the images into the Cubic window, or copy the images, right-click on the Cubic terminal window, and choose the **Paste file(s)** option. Also, make sure you have added the new wallpapers to an XML file under **/usr/share/gnome-background-properties**, so that you can choose the newly added images in the **Change Desktop Background** dialog when you right-click on your desktop; a sketch of such a file is shown below.
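The article doesn't show what that XML file looks like; the following is a minimal sketch (the file name and image path are placeholders, and `zoom` is one of GNOME's standard scaling options):

```
cat > /usr/share/gnome-background-properties/my-wallpapers.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd">
<wallpapers>
  <wallpaper>
    <name>My Wallpaper</name>
    <filename>/usr/share/backgrounds/my-wallpaper.jpg</filename>
    <options>zoom</options>
  </wallpaper>
</wallpapers>
EOF
```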
When you have made all your changes, click Next in the Cubic wizard.

Next, choose the Linux kernel version to use when booting into the new live ISO. If you have installed any additional kernels, they will also be listed in this section. Just choose the kernel you'd like to use for your live CD.

[![][2]][10]

In the next section, select the packages that you want to remove from your live image. The selected packages will be automatically removed after the Ubuntu OS has been installed using the custom live image. Please be careful when choosing packages to remove; you might unknowingly remove a package that another package depends on.

[![][2]][11]

Now the live image creation process will start. It will take some time, depending on your system's specifications.

[![][2]][12]

Once the image creation process is complete, click Finish. Cubic will display the details of the newly created custom image.

If you want to modify the newly created custom live image in the future, **uncheck** the option that says **"Delete all project files, except the generated disk image and the corresponding MD5 checksum file"**. Cubic will then leave the custom image in the project's working directory, and you can make changes in the future without having to start all over again.

To create a new live image for a different Ubuntu version, use a different project directory.

### Modify A Custom Ubuntu Live CD Image Using Cubic

Launch Cubic from the menu, and select an existing project directory. Click the Next button, and you will see the following three options:

1. Create a disk image from the existing project.
2. Continue customizing the existing project.
3. Delete the existing project.

[![][2]][13]

The first option allows you to create a new live ISO image from your existing project, using the same customizations you previously made. If you lost your ISO image, you can use the first option to create a new one.

The second option allows you to make additional changes to your existing project. If you choose this option, you will land in the chroot environment again. You can add new files or folders, install new software, remove software, add other Linux kernels, add desktop backgrounds, and so on.

The third option will delete the existing project, so you can start over from the beginning. Please note that this option deletes all files, including the newly generated ISO.

I made a custom Ubuntu 16.04 LTS desktop live CD using Cubic. It worked just fine, as described here. If you want to create an Ubuntu live CD, Cubic might be a good choice.

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/create-custom-ubuntu-live-cd-image/

Author: [SK][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/pinguy-builder-build-custom-ubuntu-os/
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-1.png
[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-2.png
[5]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-3.png
[6]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-4.png
[7]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-6.png
[8]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-5.png
[9]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-7.png
[10]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-8.png
[11]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-10-1.png
[12]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-12-1.png
[13]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-13.png
sources/tech/20171220 Containers without Docker at Red Hat.md (new file, 115 lines)
@ -0,0 +1,115 @@
|
||||
Containers without Docker at Red Hat
|
||||
======
|
||||
|
||||
The Docker (now [Moby][1]) project has done a lot to popularize containers in recent years. Along the way, though, it has generated concerns about its concentration of functionality into a single, monolithic system under the control of a single daemon running with root privileges: `dockerd`. Those concerns were reflected in a [talk][2] by Dan Walsh, head of the container team at Red Hat, at [KubeCon \+ CloudNativeCon][3]. Walsh spoke about the work the container team is doing to replace Docker with a set of smaller, interoperable components. His rallying cry is "no big fat daemons" as he finds them to be contrary to the venerated Unix philosophy.
|
||||
|
||||
### The quest to modularize Docker
|
||||
|
||||
As we saw in an [earlier article][4], the basic set of container operations is not that complicated: you need to pull a container image, create a container from the image, and start it. On top of that, you need to be able to build images and push them to a registry. Most people still use Docker for all of those steps but, as it turns out, Docker isn't the only name in town anymore: an early alternative was `rkt`, which led to the creation of various standards like CRI (runtime), OCI (image), and CNI (networking) that allow backends like [CRI-O][5] or Docker to interoperate with, for example, [Kubernetes][6].
|
||||
|
||||
These standards led Red Hat to create a set of "core utils" like the CRI-O runtime that implements the parts of the standards that Kubernetes needs. But Red Hat's [OpenShift][7] project needs more than what Kubernetes provides. Developers will want to be able to build containers and push them to the registry. Those operations need a whole different bag of tricks.
|
||||
|
||||
It turns out that there are multiple tools to build containers right now. Apart from Docker itself, a [session][8] from Michael Ducy of Sysdig reviewed eight image builders, and that's probably not all of them. Ducy identified the ideal build tool as one that would create a minimal image in a reproducible way. A minimal image is one where there is no operating system, only the application and its essential dependencies. Ducy identified [Distroless][9], [Smith][10], and [Source-to-Image][11] as good tools to build minimal images, which he called "micro-containers".
|
||||
|
||||
A reproducible container is one that you can build multiple times and always get the same result. For that, Ducy said you have to use a "declarative" approach (as opposed to "imperative"), which is understandable given that he comes from the Chef configuration-management world. He gave the examples of [Ansible Container][12], [Habitat][13], [nixos-container][14], and Smith (yes, again) as being good approaches, provided you were familiar with their domain-specific languages. He added that Habitat ships its own supervisor in its containers, which may be superfluous if you already have an external one, like systemd, Docker, or Kubernetes. To complete the list, we should mention the new [BuildKit][15] from Docker and [Buildah][16], which is part of Red Hat's [Project Atomic][17].
|
||||
|
||||
### Building containers with Buildah
|
||||
|
||||
![\[Buildah logo\]][18] Buildah's name apparently comes from Walsh's colorful [Boston accent][19]; the Boston theme permeates the branding of the tool: the logo, for example, is a Boston terrier dog (seen at right). This project takes a different approach from Ducy's decree: instead of enforcing a declarative configuration-management approach to containers, why not build simple tools that can be used by your favorite configuration-management tool? If you want to use regular command-line commands like `cp` (instead of Docker's custom `COPY` directive, for example), you can. But you can also use Ansible or Puppet, OS-specific or language-specific installers like APT or pip, or whatever other system to provision the content of your containers. This is what building a container looks like with regular shell commands and simply using `make` to install a binary inside the container:
|
||||
```
# pull a base image, equivalent to a Dockerfile's FROM command;
# `buildah from` prints the name of the new working container
ctr=$(buildah from redhat)

# mount the working container's filesystem to work on it
crt=$(buildah mount $ctr)
cp foo $crt
make install DESTDIR=$crt

# then make a snapshot of the container as an image
buildah commit $ctr
```
|
||||
|
||||
An interesting thing with this approach is that, since you reuse normal build tools from the host environment, you can build really minimal images because you don't need to install all the dependencies in the image. Usually, when building a container image, the target application build dependencies need to be installed within the container. For example, building from source usually requires a compiler toolchain in the container, because it is not meant to access the host environment. A lot of containers will also ship basic Unix tools like `ps` or `bash` which are not actually necessary in a micro-container. Developers often forget to (or simply can't) remove some dependencies from the built containers; that common practice creates unnecessary overhead and attack surface.
|
||||
|
||||
The modular approach of Buildah means you can run at least parts of the build as non-root: the `mount` command still needs the `CAP_SYS_ADMIN` capability, but there is an [issue][20] open to resolve this. However, Buildah [shares][21] the same [limitation][22] as Docker in that it can't build containers inside containers. For Docker, you need to run the container in "privileged" mode, which is not possible in certain environments (like [GitLab Continuous Integration][23], for example) and, even when it is possible, the configuration is [messy][24] at best.
|
||||
|
||||
The manual commit step allows fine-grained control over when to create container snapshots. While in a Dockerfile every line creates a new snapshot, with Buildah commit checkpoints are explicitly chosen, which reduces unnecessary snapshots and saves disk space. This is useful to isolate sensitive material like private keys or passwords which sometimes mistakenly end up in public images as well.
|
||||
|
||||
While Docker builds non-standard, Docker-specific images, Buildah produces standard OCI images among [other output formats][25]. For backward compatibility, it has a command called `build-using-dockerfile` or [`buildah bud`][26] that parses normal Dockerfiles. Buildah has an `enter` command to inspect images from the inside directly and a `run` command to start containers on the fly. It does all the work without any "fat daemon" running in the background and uses standard tools like `runc`.
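As a rough sketch of how those commands fit together (the `-t` tag flag, image name, and the command run inside the container are assumptions for illustration, following Docker conventions):

```
# build an OCI image from an existing Dockerfile
buildah bud -t myapp .

# create a working container from the image and run a command inside it
ctr=$(buildah from myapp)
buildah run $ctr -- ls /
```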
|
||||
|
||||
Ducy's criticism of Buildah was that it was not declarative, which made it less reproducible. When allowing shell commands anything can happen: for example, a shell script might download arbitrary binaries, without any way of subsequently retracing where those come from. Shell command effects may vary according to the environment. In contrast to shell-based tools, configuration-management systems like Puppet or Chef are designed to "converge" over a final configuration that is more reliable, at least in theory: in practice you can call shell commands from configuration-management systems. Walsh, however, argued that existing configuration management can be used on top of Buildah, but it doesn't force users down that path. This fits well with the classic "separation" principle of the Unix philosophy ("mechanism not policy").
|
||||
|
||||
At this point, Buildah is in beta and Red Hat is working on integrating it into OpenShift. I have tested Buildah while writing this article and, short of some documentation issues, it generally works reliably. It could use some polishing in error handling, but it is definitely a great asset to add to your container toolbox.
|
||||
|
||||
### Replacing the rest of the Docker command-line
|
||||
|
||||
Walsh continued his presentation by giving an overview of another project that Red Hat is working on, tentatively called [libpod][27]. The name derives from a "pod" in Kubernetes, which is a way to group containers inside a host, to share namespaces, for example.
|
||||
|
||||
Libpod includes the `kpod` command to inspect and manipulate container storage directly. Walsh explained this can be useful if, for example, `dockerd` hangs or if a Kubernetes cluster crashes. `kpod` is basically an independent re-implementation of the `docker` command-line tool. There is a command to list running containers (`kpod ps`) or images (`kpod images`). In fact, there is a [translation cheat sheet][28] documenting all Docker commands with a `kpod` equivalent.
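For instance, the two commands named above map directly onto their Docker counterparts:

```
# list running containers, the equivalent of `docker ps`
kpod ps

# list locally stored images, the equivalent of `docker images`
kpod images
```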
|
||||
|
||||
One of the nice things with the modular approach is that when you run a container with `kpod run`, the container is directly started as a subprocess of the current shell, instead of a subprocess of `dockerd`. In theory, this allows running containers directly from systemd, removing the duplicate work `dockerd` is doing. It enables things like [socket-activated containers][29], which is something that is [not straightforward][30] to do with Docker, or [even with Kubernetes][31] right now. In my experiments, however, I have found that containers started with `kpod` lack some fundamental functionality, namely networking (!), although there is an [issue in progress][32] to complete that implementation.
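A hedged sketch of what running a container straight from systemd could look like; the unit name, image, and install path are invented for illustration and not taken from the project's documentation:

```
# write a hypothetical unit file letting systemd supervise a container
# directly, with no intermediary daemon
cat > /etc/systemd/system/example-container.service <<'EOF'
[Unit]
Description=Container supervised directly by systemd

[Service]
ExecStart=/usr/bin/kpod run example-image
EOF

systemctl daemon-reload
systemctl start example-container
```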
|
||||
|
||||
A final command we haven't covered is `push`. While the above commands provide a good process for working with local containers, they don't cover remote registries, which allow developers to actively collaborate on application packaging. Registries are also an essential part of a continuous-deployment framework. This is where the [skopeo][33] project comes in. Skopeo is another Atomic project that "performs various operations on container images and image repositories", according to the `README` file. It was originally designed to inspect the contents of container registries without actually downloading the sometimes voluminous images as `docker pull` does. Docker [refused patches][34] to support inspection, suggesting the creation of a separate tool, which led to Skopeo. After `pull`, `push` was the logical next step and Skopeo can now do a bunch of other things like copying and converting images between registries without having to store a copy locally. Because this functionality was useful to other projects as well, a lot of the Skopeo code now lives in a reusable library called [containers/image][35]. That library is in turn used by [Pivotal][36], Google's [container-diff][37], `kpod push`, and `buildah push`.
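As an illustration of the operations described above (the registry names are hypothetical), inspecting and copying with Skopeo looks roughly like this:

```
# inspect a remote image's metadata without downloading its layers
skopeo inspect docker://docker.io/library/alpine:latest

# copy an image between two registries without storing it locally
skopeo copy docker://registry.example.com/app:1.0 docker://mirror.example.com/app:1.0
```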
|
||||
|
||||
`kpod` is not directly tied to Kubernetes, so the name might change in the future -- especially since Red Hat legal has not cleared the name yet. (In fact, just as this article was going to "press", the name was changed to [`podman`][38].) The team wants to implement more "pod-level" commands which would allow operations on multiple containers, a bit like what [`docker compose`][39] might do. But at that level, a better tool might be [Kompose][40] which can execute [Compose YAML files][41] into a Kubernetes cluster. Some Docker commands (like [`swarm`][42]) will never be implemented, on purpose, as they are best left for Kubernetes itself to handle.
|
||||
|
||||
It seems that the effort to modularize Docker that started a few years ago is finally bearing fruit. While, at this point, `kpod` is under heavy development and probably should not be used in production, the design of those different tools is certainly interesting; a lot of it is ready for development environments. Right now, the only way to install libpod is to compile it from source, but we should expect packages coming out for your favorite distribution eventually.
|
||||
|
||||
> This article [first appeared][43] in the [Linux Weekly News][44].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://anarc.at/blog/2017-12-20-docker-without-docker/
|
||||
|
||||
作者:[À propos de moi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://anarc.at
|
||||
[1]:https://mobyproject.org/
|
||||
[2]:https://kccncna17.sched.com/event/CU8j/cri-o-hosted-by-daniel-walsh-red-hat
|
||||
[3]:http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america
|
||||
[4]:https://lwn.net/Articles/741897/
|
||||
[5]:http://cri-o.io/
|
||||
[6]:https://kubernetes.io/
|
||||
[7]:https://www.openshift.com/
|
||||
[8]:https://kccncna17.sched.com/event/CU6B/building-better-containers-a-survey-of-container-build-tools-i-michael-ducy-chef
|
||||
[9]:https://github.com/GoogleCloudPlatform/distroless
|
||||
[10]:https://github.com/oracle/smith
|
||||
[11]:https://github.com/openshift/source-to-image
|
||||
[12]:https://www.ansible.com/ansible-container
|
||||
[13]:https://www.habitat.sh/
|
||||
[14]:https://nixos.org/nixos/manual/#ch-containers
|
||||
[15]:https://github.com/moby/buildkit
|
||||
[16]:https://github.com/projectatomic/buildah
|
||||
[17]:https://www.projectatomic.io/
|
||||
[18]:https://raw.githubusercontent.com/projectatomic/buildah/master/logos/buildah-logomark_large.png (Buildah logo)
|
||||
[19]:https://en.wikipedia.org/wiki/Boston_accent
|
||||
[20]:https://github.com/projectatomic/buildah/issues/171
|
||||
[21]:https://github.com/projectatomic/buildah/issues/158
|
||||
[22]:https://github.com/moby/moby/issues/27886#issuecomment-281278525
|
||||
[23]:https://about.gitlab.com/features/gitlab-ci-cd/
|
||||
[24]:https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
|
||||
[25]:https://github.com/projectatomic/buildah/blob/master/docs/buildah-push.md
|
||||
[26]:https://github.com/projectatomic/buildah/blob/master/docs/buildah-bud.md
|
||||
[27]:https://github.com/projectatomic/libpod
|
||||
[28]:https://github.com/projectatomic/libpod/blob/master/transfer.md#development-transfer
|
||||
[29]:http://0pointer.de/blog/projects/socket-activated-containers.html
|
||||
[30]:https://legacy-developer.atlassian.com/blog/2015/03/docker-systemd-socket-activation/
|
||||
[31]:https://github.com/kubernetes/kubernetes/issues/484
|
||||
[32]:https://github.com/projectatomic/libpod/issues/129
|
||||
[33]:https://github.com/projectatomic/skopeo
|
||||
[34]:https://github.com/moby/moby/pull/14258
|
||||
[35]:https://github.com/containers/image
|
||||
[36]:https://pivotal.io/
|
||||
[37]:https://github.com/GoogleCloudPlatform/container-diff
|
||||
[38]:https://github.com/projectatomic/libpod/blob/master/docs/podman.1.md
|
||||
[39]:https://docs.docker.com/compose/overview/#compose-documentation
|
||||
[40]:http://kompose.io/
|
||||
[41]:https://docs.docker.com/compose/compose-file/
|
||||
[42]:https://docs.docker.com/engine/swarm/
|
||||
[43]:https://lwn.net/Articles/741841/
|
||||
[44]:http://lwn.net/
|
@ -1,61 +0,0 @@
|
||||
translating by CYLeft
|
||||
|
||||
cURL vs. wget: Their Differences, Usage and Which One You Should Use
|
||||
======
|
||||

|
||||
|
||||
For downloading files directly from the Linux command line, there are two utilities that immediately come to mind: `wget` and `cURL`. They share a lot of features and can easily get many of the same tasks accomplished.
|
||||
|
||||
Though they share similar features, they aren't exactly the same. These programs fit slightly different roles and use cases, and do have traits that make each better for certain situations.
|
||||
|
||||
### cURL vs wget: Their Similarities
|
||||
|
||||
Both wget and cURL can download things. At their core, that's what they both do. They can make requests of the Internet and pull back the requested item. That could be a file, picture, or even the raw HTML of a website.
|
||||
|
||||
Both programs are also capable of making HTTP POST requests. This means they can send data to a website, like filling out a form.
|
||||
|
||||
Since both are command line tools, they were also both designed to be scriptable. You can include both wget and cURL in your [Bash scripts][1] to automatically interact with online content and retrieve what you need.
|
||||
|
||||
### wget Advantages
|
||||
|
||||
![wget download][2]
|
||||
|
||||
wget is simple and straightforward. It's meant for quick downloads, and it's excellent at it. wget is a single self-contained program. It doesn't require any extra libraries, and it's not meant to do anything beyond the scope of what it does.
|
||||
|
||||
Because wget is so tailored for straight downloads, it also has the ability to download recursively. That allows you to download everything on a page or all of the files in an FTP directory at once.
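For example, a recursive fetch might look like this (the URL and depth are placeholders):

```
# mirror a documentation tree two levels deep,
# without ascending to parent directories
wget --recursive --level=2 --no-parent https://example.com/docs/
```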
|
||||
|
||||
wget also has intelligent defaults. It specifies how to handle a lot of things that a normal browser would, like cookies and redirects, without the need to add any configuration. Lastly, wget works out of the box.
|
||||
|
||||
### cURL Advantages
|
||||
|
||||
![cURL Download][3]
|
||||
|
||||
cURL is a multi-tool. Sure, it can download content from the Internet. It can do a lot more, too.
|
||||
|
||||
cURL is powered by a library: libcurl. This means you can write entire programs based on cURL, allowing you to base graphical download programs on libcurl and get access to all of its functionality.
|
||||
|
||||
The wide range of protocols that cURL supports is probably its biggest selling point. cURL can access websites over HTTP and HTTPS and can handle FTP in both directions. It supports LDAP and even Samba shares. You can actually use cURL to send and retrieve email.
|
||||
|
||||
cURL has some neat security features, too. cURL supports loads of SSL/TLS libraries. It also supports Internet access via proxies, including SOCKS. That means you can use cURL over Tor.
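As a quick sketch, routing a request through Tor's local SOCKS proxy might look like this (the port assumes a default Tor installation, and the URL is a placeholder):

```
# resolve the hostname through the proxy too, so no DNS queries leak
curl --socks5-hostname 127.0.0.1:9050 https://example.com/
```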
|
||||
|
||||
cURL also supports gzip compression to send large amounts of data more easily.
|
||||
|
||||
### Closing Thoughts
|
||||
So should you use cURL or wget? That really depends. If you want to download something quickly without needing to worry about flags, then you should go with wget. It's simple and just works. If you want to do something more complex, cURL should be your immediate choice.
|
||||
|
||||
cURL allows you to do a lot more. You can think of cURL like a stripped-down command line web browser. It supports just about every protocol you can think of and can access and interact with nearly all online content. The only difference is that a browser renders the responses that it receives, and cURL doesn't.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/curl-vs-wget/
|
||||
|
||||
作者:[Nick Congleton][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com/author/nickcongleton/
|
||||
[1]:https://www.maketecheasier.com/beginners-guide-scripting-linux/
|
||||
[2]:https://www.maketecheasier.com/assets/uploads/2017/12/wgc-wget.jpg (wget download)
|
||||
[3]:https://www.maketecheasier.com/assets/uploads/2017/12/wgc-curl.jpg (cURL Download)
|
@ -0,0 +1,74 @@
|
||||
Ubuntu Updates for the Meltdown / Spectre Vulnerabilities
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||
* For up-to-date patch, package, and USN links, please refer to: [https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown][2]
|
||||
|
||||
Unfortunately, you’ve probably already read about one of the most widespread security issues in modern computing history — colloquially known as “[Meltdown][5]” ([CVE-2017-5754][6]) and “[Spectre][7]” ([CVE-2017-5753][8] and [CVE-2017-5715][9]) — affecting practically every computer built in the last 10 years, running any operating system. That includes [Ubuntu][10].
|
||||
|
||||
I say “unfortunately”, in part because there was a coordinated release date of January 9, 2018, agreed upon by essentially every operating system, hardware, and cloud vendor in the world. By design, operating system updates would be available at the same time as the public disclosure of the security vulnerability. While it happens rarely, this is an industry standard best practice, which has broken down in this case.
|
||||
|
||||
At its heart, this vulnerability is a CPU hardware architecture design issue. But there are billions of affected hardware devices, and replacing CPUs is simply unreasonable. As a result, operating system kernels — Windows, MacOS, Linux, and many others — are being patched to mitigate the critical security vulnerability.
|
||||
|
||||
Canonical engineers have been working on this since we were made aware under the embargoed disclosure (November 2017) and have worked through the Christmas and New Years holidays, testing and integrating an incredibly complex patch set into a broad set of Ubuntu kernels and CPU architectures.
|
||||
|
||||
Ubuntu users of the 64-bit x86 architecture (aka, amd64) can expect updated kernels by the original January 9, 2018 coordinated release date, and sooner if possible. Updates will be available for:
|
||||
|
||||
* Ubuntu 17.10 (Artful) — Linux 4.13 HWE
|
||||
|
||||
* Ubuntu 16.04 LTS (Xenial) — Linux 4.4 (and 4.4 HWE)
|
||||
|
||||
* Ubuntu 14.04 LTS (Trusty) — Linux 3.13
|
||||
|
||||
* Ubuntu 12.04 ESM** (Precise) — Linux 3.2
|
||||
* Note that an [Ubuntu Advantage license][1] is required for the 12.04 ESM kernel update, as Ubuntu 12.04 LTS is past its end-of-life
|
||||
|
||||
Ubuntu 18.04 LTS (Bionic) will release in April of 2018, and will ship a 4.15 kernel, which includes the [KPTI][11] patchset as integrated upstream.
|
||||
|
||||
Ubuntu optimized kernels for the Amazon, Google, and Microsoft public clouds are also covered by these updates, as well as the rest of Canonical’s [Certified Public Clouds][12] including Oracle, OVH, Rackspace, IBM Cloud, Joyent, and Dimension Data.
|
||||
|
||||
These kernel fixes will not be [Livepatch-able][13]. The source code changes required to address this problem comprise hundreds of independent patches, touching hundreds of files and thousands of lines of code. The sheer complexity of this patchset is not compatible with the Linux kernel Livepatch mechanism. An update and a reboot will be required to activate this update.
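In practice, applying these fixes on an affected Ubuntu machine is therefore an ordinary package upgrade plus a reboot; a minimal sketch:

```
sudo apt update
sudo apt dist-upgrade    # pulls in the patched kernel packages
sudo reboot              # required: the fixes are not Livepatch-able
uname -r                 # after rebooting, confirm the new kernel version
```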
|
||||
|
||||
Furthermore, you can expect Ubuntu security updates for a number of other related packages, including CPU microcode, GCC and QEMU in the coming days.
|
||||
|
||||
We don’t have a performance analysis to share at this time, but please do stay tuned here as we’ll followup with that as soon as possible.
|
||||
|
||||
Thanks,
|
||||
[@DustinKirkland][14]
|
||||
VP of Product
|
||||
Canonical / Ubuntu
|
||||
|
||||
### About the author
|
||||
|
||||

|
||||
|
||||
Dustin Kirkland is part of Canonical's Ubuntu Product and Strategy team, working for Mark Shuttleworth, and leading the technical strategy, road map, and life cycle of the Ubuntu Cloud and IoT commercial offerings. Formerly the CTO of Gazzang, a venture funded start-up acquired by Cloudera, Dustin designed and implemented an innovative key management system for the cloud, called zTrustee, and delivered comprehensive security for cloud and big data platforms with eCryptfs and other encryption technologies. Dustin is an active Core Developer of the Ubuntu Linux distribution, maintainer of 20+ open source projects, and the creator of Byobu, DivItUp.com, and LinuxSearch.org. A Fightin' Texas Aggie Class of 2001 graduate, Dustin lives in Austin, Texas, with his wife Kim, daughters, and his Australian Shepherds, Aggie and Tiger. Dustin is also an avid home brewer.
|
||||
|
||||
[More articles by Dustin][3]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://insights.ubuntu.com/2018/01/04/ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/
|
||||
|
||||
作者:[Dustin Kirkland][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://insights.ubuntu.com/author/kirkland/
|
||||
[1]:https://www.ubuntu.com/support/esm
|
||||
[2]:https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown
|
||||
[3]:https://insights.ubuntu.com/author/kirkland/
|
||||
[4]:https://insights.ubuntu.com/author/kirkland/
|
||||
[5]:https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability)
|
||||
[6]:https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5754.html
|
||||
[7]:https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)
|
||||
[8]:https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5753.html
|
||||
[9]:https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5715.html
|
||||
[10]:https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown
|
||||
[11]:https://lwn.net/Articles/742404/
|
||||
[12]:https://partners.ubuntu.com/programmes/public-cloud
|
||||
[13]:https://www.ubuntu.com/server/livepatch
|
||||
[14]:https://twitter.com/dustinkirkland
|
@ -0,0 +1,99 @@
|
||||
What’s behind the Intel design flaw forcing numerous patches?
|
||||
============================================================
|
||||
|
||||
### There's obviously a big problem, but we don't know exactly what.
|
||||
|
||||
|
||||

|
||||
|
||||
|
||||
Both Windows and Linux are receiving significant security updates that can, in the worst case, cause performance to drop by half, to defend against a problem that as yet hasn't been fully disclosed.
|
||||
|
||||
Patches to the Linux kernel have been trickling in over the past few weeks. Microsoft has been [testing the Windows updates in the Insider program since November][3], and it is expected to put the alterations into mainstream Windows builds on Patch Tuesday next week. Microsoft's Azure has scheduled maintenance next week, and Amazon's AWS is scheduled for maintenance on Friday—presumably related.
|
||||
|
||||
Since the Linux patches [first came to light][4], a clearer picture of what seems to be wrong has emerged. While Linux and Windows differ in many regards, the basic elements of how these two operating systems—and indeed, every other x86 operating system such as FreeBSD and [macOS][5]—handle system memory is the same, because these parts of the operating system are so tightly coupled to the capabilities of the processor.
|
||||
|
||||
### Keeping track of addresses
|
||||
|
||||
Every byte of memory in a system is implicitly numbered, those numbers being each byte's address. The very earliest operating systems operated using physical memory addresses, but physical memory addresses are inconvenient for lots of reasons. For example, there are often gaps in the addresses, and (particularly on 32-bit systems), physical addresses can be awkward to manipulate, requiring 36-bit numbers, or even larger ones.
|
||||
|
||||
Accordingly, modern operating systems all depend on a broad concept called virtual memory. Virtual memory systems allow both programs and the kernels themselves to operate in a simple, clean, uniform environment. Instead of the physical addresses with their gaps and other oddities, every program, and the kernel itself, uses virtual addresses to access memory. These virtual addresses are contiguous—no need to worry about gaps—and sized conveniently to make them easy to manipulate. 32-bit programs see only 32-bit addresses, even if the physical address requires 36-bit or more numbering.
|
||||
|
||||
While this virtual addressing is transparent to almost every piece of software, the processor does ultimately need to know which physical memory a virtual address refers to. There's a mapping from virtual addresses to physical addresses, and that's stored in a large data structure called a page table. Operating systems build the page table, using a layout determined by the processor, and the processor and operating system in conjunction use the page table whenever they need to convert between virtual and physical addresses.
|
||||
|
||||
This whole mapping process is so important and fundamental to modern operating systems and processors that the processor has dedicated cache—the translation lookaside buffer, or TLB—that stores a certain number of virtual-to-physical mappings so that it can avoid using the full page table every time.
|
||||
|
||||
The use of virtual memory gives us a number of useful features beyond the simplicity of addressing. Chief among these is that each individual program is given its own set of virtual addresses, with its own set of virtual to physical mappings. This is the fundamental technique used to provide "protected memory;" one program cannot corrupt or tamper with the memory of another program, because the other program's memory simply isn't part of the first program's mapping.
|
||||
|
||||
But this use of an individual mapping per process, and hence extra page tables, puts pressure on the TLB cache. The TLB isn't very big—typically a few hundred mappings in total—and the more page tables a system uses, the less likely it is that the TLB will include any particular virtual-to-physical translation.
|
||||
|
||||
### Half and half
|
||||
|
||||
To make the best use of the TLB, every mainstream operating system splits the range of virtual addresses into two. One half of the addresses is used for each program; the other half is used for the kernel. When switching between processes, only half the page table entries change—the ones belonging to the program. The kernel half is common to every program (because there's only one kernel), and so it can use the same page table mapping for every process. This helps the TLB enormously; while it still has to discard mappings belonging to the process' half of memory addresses, it can keep the mappings for the kernel's half.
|
||||
|
||||
This design isn't completely set in stone. Work was done on Linux to make it possible to give a 32-bit process the entire range of addresses, with no sharing between the kernel's page table and that of each program. While this gave the programs more address space, it carried a performance cost, because the TLB had to reload the kernel's page table entries every time kernel code needed to run. Accordingly, this approach was never widely used on x86 systems.
|
||||
|
||||
One downside of the decision to split the virtual address space between the kernel and each program is that the memory protection is weakened. If the kernel had its own set of page tables and virtual addresses, it would be afforded the same protection as different programs have from one another; the kernel's memory would be simply invisible. But with the split addressing, user programs and the kernel use the same address range, and, in principle, a user program would be able to read and write kernel memory.
|
||||
|
||||
To prevent this obviously undesirable situation, the processor and virtual addressing system have a concept of "rings" or "modes." x86 processors have lots of rings, but for this issue, only two are relevant: "user" (ring 3) and "supervisor" (ring 0). When running regular user programs, the processor is put into user mode, ring 3. When running kernel code, the processor is in ring 0, supervisor mode, also known as kernel mode.
|
||||
|
||||
These rings are used to protect the kernel memory from user programs. The page tables aren't just mapping from virtual to physical addresses; they also contain metadata about those addresses, including information about which rings can access an address. The kernel's page table entries are all marked as only being accessible to ring 0; the program's entries are marked as being accessible from any ring. If an attempt is made to access ring 0 memory while in ring 3, the processor blocks the access and generates an exception. The result of this is that user programs, running in ring 3, should not be able to learn anything about the kernel and its ring 0 memory.
|
||||
|
||||
At least, that's the theory. The spate of patches and updates shows that somewhere this has broken down. This is where the big mystery lies.
|
||||
|
||||
### Moving between rings
|
||||
|
||||
Here's what we do know. Every modern processor performs a certain amount of speculative execution. For example, given some instructions that add two numbers and then store the result in memory, a processor might speculatively do the addition before ascertaining whether the destination in memory is actually accessible and writeable. In the common case, where the location _is_ writeable, the processor managed to save some time, as it did the arithmetic in parallel with figuring out what the destination in memory was. If it discovers that the location isn't accessible—for example, a program trying to write to an address that has no mapping and no physical location at all—then it will generate an exception and the speculative execution is wasted.
|
||||
|
||||
Intel processors, specifically—[though not AMD ones][6]—allow speculative execution of ring 3 code that writes to ring 0 memory. The processors _do_ properly block the write, but the speculative execution minutely disturbs the processor state, because certain data will be loaded into cache and the TLB in order to ascertain whether the write should be allowed. This in turn means that some operations will be a few cycles quicker, or a few cycles slower, depending on whether their data is still in cache or not. As well as this, Intel's processors have special features, such as the Software Guard Extensions (SGX) introduced with Skylake processors, that slightly change how attempts to access memory are handled. Again, the processor does still protect ring 0 memory from ring 3 programs, but again, its caches and other internal state are changed, creating measurable differences.
|
||||
|
||||
What we don't know, yet, is just how much kernel memory information can be leaked to user programs or how easily that leaking can occur. And which Intel processors are affected? Again it's not entirely clear, but indications are that every Intel chip with speculative execution (which is all the mainstream processors introduced since the Pentium Pro, from 1995) can leak information this way.
|
||||
|
||||
The first wind of this problem came from researchers from [Graz Technical University in Austria][7]. The information leakage they discovered was enough to undermine kernel mode Address Space Layout Randomization (kernel ASLR, or KASLR). ASLR is something of a last-ditch effort to prevent the exploitation of [buffer overflows][8]. With ASLR, programs and their data are placed at random memory addresses, which makes it a little harder for attackers to exploit security flaws. KASLR applies that same randomization to the kernel so that the kernel's data (including page tables) and code are randomly located.
|
||||
|
||||
The Graz researchers developed [KAISER][9], a set of Linux kernel patches to defend against the problem.
|
||||
|
||||
If the problem were just that it enabled the derandomization of ASLR, this probably wouldn't be a huge disaster. ASLR is a nice protection, but it's known to be imperfect. It's meant to be a hurdle for attackers, not an impenetrable barrier. The industry reaction—a fairly major change to both Windows and Linux, developed with some secrecy—suggests that it's not just ASLR that's defeated and that a more general ability to leak information from the kernel has been developed. Indeed, researchers have [started to tweet][10] that they're able to leak and read arbitrary kernel data. Another possibility is that the flaw can be used to escape out of a virtual machine and compromise a hypervisor.
|
||||
|
||||
The solution that both the Windows and Linux developers have picked is substantially the same, and derived from that KAISER work: the kernel page table entries are no longer shared with each process. In Linux, this is called Kernel Page Table Isolation (KPTI).
|
||||
|
||||
With the patches, the memory address is still split in two; it's just the kernel half is almost empty. It's not quite empty, because a few kernel pieces need to be mapped permanently, whether the processor is running in ring 3 _or_ ring 0, but it's close to empty. This means that even if a malicious user program tries to probe kernel memory and leak information, it will fail—there's simply nothing to leak. The real kernel page tables are only used when the kernel itself is running.
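On a system with these patches applied, the isolation is typically announced at boot time; a rough check (the exact log wording is an assumption and differs between kernel versions):

```
# KPTI-enabled kernels typically log a page-table isolation message at boot
dmesg | grep -i 'page table'
```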
|
||||
|
||||
This undermines the very reason for the split address space in the first place. The TLB now needs to clear out any entries related to the real kernel page tables every time it switches to a user program, putting an end to the performance saving that splitting enabled.
|
||||
|
||||
The impact of this will vary depending on the workload. Every time a program makes a call into the kernel—to read from disk, to send data to the network, to open a file, and so on—that call will be a little more expensive, since it will force the TLB to be flushed and the real kernel page table to be loaded. Programs that don't use the kernel much might see a hit of perhaps 2-3 percent—there's still some overhead because the kernel always has to run occasionally, to handle things like multitasking.
|
||||
|
||||
But workloads that call into the kernel a ton will see much greater performance drop off. In a benchmark, a program that does virtually nothing _other_ than call into the kernel saw [its performance drop by about 50 percent][11]; in other words, each call into the kernel took twice as long with the patch than it did without. Benchmarks that use Linux's loopback networking also see a big hit, such as [17 percent][12] in this Postgres benchmark. Real database workloads using real networking should see lower impact, because with real networks, the overhead of calling into the kernel tends to be dominated by the overhead of using the actual network.
|
||||
|
||||
While Intel systems are the ones known to have the defect, they may not be the only ones affected. Some platforms, such as SPARC and IBM's S390, are immune to the problem, as their processor memory management doesn't need the split address space and shared kernel page tables; operating systems on those platforms have always isolated their kernel page tables from user mode ones. But others, such as ARM, may not be so lucky; [comparable patches for ARM Linux][13] are under development.
|
||||
|
||||
|
||||
|
||||
[PETER BRIGHT][14] is Technology Editor at Ars. He covers Microsoft, programming and software development, Web technology and browsers, and security. He is based in Brooklyn, NY.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://arstechnica.com/gadgets/2018/01/whats-behind-the-intel-design-flaw-forcing-numerous-patches/
|
||||
|
||||
作者:[ PETER BRIGHT ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://arstechnica.com/author/peter-bright/
|
||||
[1]:https://arstechnica.com/author/peter-bright/
|
||||
[2]:https://arstechnica.com/gadgets/2018/01/whats-behind-the-intel-design-flaw-forcing-numerous-patches/?comments=1
|
||||
[3]:https://twitter.com/aionescu/status/930412525111296000
|
||||
[4]:https://lwn.net/SubscriberLink/741878/eb6c9d3913d7cb2b/
|
||||
[5]:https://twitter.com/aionescu/status/948609809540046849
|
||||
[6]:https://lkml.org/lkml/2017/12/27/2
|
||||
[7]:https://gruss.cc/files/kaiser.pdf
|
||||
[8]:https://arstechnica.com/information-technology/2015/08/how-security-flaws-work-the-buffer-overflow/
|
||||
[9]:https://github.com/IAIK/KAISER
|
||||
[10]:https://twitter.com/brainsmoke/status/948561799875502080
|
||||
[11]:https://twitter.com/grsecurity/status/947257569906757638
|
||||
[12]:https://www.postgresql.org/message-id/20180102222354.qikjmf7dvnjgbkxe@alap3.anarazel.de
|
||||
[13]:https://lwn.net/Articles/740393/
|
||||
[14]:https://arstechnica.com/author/peter-bright
|
||||
[15]:https://arstechnica.com/author/peter-bright
|
@ -0,0 +1,70 @@
|
||||
How To Display Asterisks When You Type Password In terminal
|
||||
======
|
||||
|
||||

|
||||
|
||||
When you type passwords in a web browser login or any GUI login, the passwords will be masked as asterisks (`******`) or bullets like •••••••••••••. This is a built-in security mechanism to prevent the users near you from viewing your password. But when you type a password in the Terminal to perform any administrative task with **sudo** or **su**, you won't even see the asterisks or bullets as you type. There won't be any visual indication of entering the password, no cursor movement, nothing at all. You will not know whether you entered all the characters or not. All you will see is just a blank screen!
|
||||
|
||||
Look at the following screenshot.
|
||||
|
||||
![][2]
|
||||
|
||||
As you see in the above image, I've already entered the password, but there was no indication (either asterisks or bullets). Now, I am not sure whether I entered all the characters of my password or not. This security mechanism also prevents the person near you from guessing the password length. Of course, this behavior can be changed. That is what this guide is all about. It is not that difficult. Read on!
|
||||
|
||||
#### Display Asterisks When You Type Password In terminal
|
||||
|
||||
To display asterisks as you type your password in the Terminal, we need to make a small modification to the **"/etc/sudoers"** file. Before making any changes, it is better to back up this file. To do so, just run:
|
||||
```
|
||||
sudo cp /etc/sudoers{,.bak}
|
||||
```
|
||||
|
||||
The above command will back up the /etc/sudoers file to a new file named /etc/sudoers.bak. You can restore it, just in case something goes wrong after editing the file.
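Should something go wrong, restoring the backup is simply the reverse copy, for example:

```
# restore the original sudoers file from the backup created above
sudo cp /etc/sudoers.bak /etc/sudoers
```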
|
||||
|
||||
Next, edit the **"/etc/sudoers"** file using the command:
|
||||
```
|
||||
sudo visudo
|
||||
```
|
||||
|
||||
Find the following line:
|
||||
```
|
||||
Defaults env_reset
|
||||
```
|
||||
|
||||
![][3]
|
||||
|
||||
Add an extra word **",pwfeedback"** to the end of that line as shown below.
|
||||
```
|
||||
Defaults env_reset,pwfeedback
|
||||
```
|
||||
|
||||
![][4]
|
||||
|
||||
Then, press **"CTRL+x"** and **"y"** to save and close the file. Restart your Terminal for the changes to take effect.
|
||||
|
||||
Now, you will see asterisks when you enter password in Terminal.
|
||||
|
||||
![][5]
|
||||
|
||||
If you're not comfortable seeing a blank screen when you type passwords in the Terminal, this small tweak will help. Please be aware that other users can predict the password length if they see the asterisks as you type it. If you don't mind that, go ahead and make the changes described above to make your password visible (masked as asterisks, of course!).
|
||||
|
||||
And, that's all for now. More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/display-asterisks-type-password-terminal/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/password-1.png ()
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1.png ()
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1-1.png ()
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-2.png ()
|
@ -0,0 +1,110 @@
|
||||
Linux paste Command Explained For Beginners (5 Examples)
|
||||
======
|
||||
|
||||
Sometimes, while working on the command line in Linux, there may arise a situation wherein you have to merge lines of multiple files to create more meaningful/useful data. Well, you'll be glad to know there exists a command line utility **paste** that does this for you. In this tutorial, we will discuss the basics of this command as well as the main features it offers using easy-to-understand examples.
|
||||
|
||||
But before we do that, it's worth mentioning that all examples mentioned in this article have been tested on Ubuntu 16.04 LTS.
|
||||
|
||||
### Linux paste command
|
||||
|
||||
As already mentioned above, the paste command merges lines of files. Here's the tool's syntax:
|
||||
|
||||
```
|
||||
paste [OPTION]... [FILE]...
|
||||
```
|
||||
|
||||
And here's how the man page of paste explains it:
|
||||
```
|
||||
Write lines consisting of the sequentially corresponding lines from each FILE, separated by TABs,
|
||||
to standard output. With no FILE, or when FILE is -, read standard input.
|
||||
```
|
||||
|
||||
The following Q&A-styled examples should give you a better idea of how paste works.
|
||||
|
||||
### Q1. How to join lines of multiple files using paste command?
|
||||
|
||||
Suppose we have three files - file1.txt, file2.txt, and file3.txt - with following contents:
|
||||
|
||||
[![How to join lines of multiple files using paste command][1]][2]
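If you want to recreate these files to follow along, something like the following would do; the exact values are hypothetical stand-ins for the contents shown in the screenshot:

```
# create three small sample files (contents are made up for illustration)
printf '1\n2\n3\n'                     > file1.txt
printf 'India\nFrance\nBrazil\n'       > file2.txt
printf 'Asia\nEurope\nSouth America\n' > file3.txt
```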
|
||||
|
||||
Suppose the task is to merge the lines of these files so that each row of the final output contains the index, country, and continent. You can do that using paste in the following way:
|
||||
|
||||
```
paste file1.txt file2.txt file3.txt
```
|
||||
|
||||
[![result of merging lines][3]][4]
|
||||
|
||||
### Q2. How to apply delimiters when using paste?
|
||||
|
||||
Sometimes, there can be a requirement to add a delimiting character between entries of each resulting row. This can be done using the **-d** command line option, which requires you to provide the delimiting character you want to use.
|
||||
|
||||
For example, to apply a colon (:) as a delimiting character, use the paste command in the following way:
|
||||
|
||||
```
|
||||
paste -d : file1.txt file2.txt file3.txt
|
||||
```
|
||||
|
||||
Here's the output this command produced on our system:
|
||||
|
||||
[![How to apply delimiters when using paste][5]][6]
|
||||
|
||||
### Q3. How to change the way in which lines are merged?
|
||||
|
||||
By default, the paste command merges lines in a way that entries in the first column belong to the first file, those in the second column to the second file, and so on. However, if you want, you can change this so that the merge operation happens row-wise, with all of a file's lines placed in a single row.
|
||||
|
||||
This you can do using the **-s** command line option.
|
||||
|
||||
```
|
||||
paste -s file1.txt file2.txt file3.txt
|
||||
```
|
||||
|
||||
Following is the output:
|
||||
|
||||
[![How to change the way in which lines are merged][7]][8]
|
||||
|
||||
### Q4. How to use multiple delimiters?
|
||||
|
||||
Yes, you can use multiple delimiters as well. For example, if you want to use both : and |, you can do that in the following way:
|
||||
|
||||
```
|
||||
paste -d ':|' file1.txt file2.txt file3.txt
|
||||
```
|
||||
|
||||
Following is the output:
|
||||
|
||||
[![How to use multiple delimiters][9]][10]
|
||||
|
||||
### Q5. How to make sure merged lines are NUL terminated?
|
||||
|
||||
By default, lines merged through paste end in a newline. However, if you want, you can make them NUL terminated, something which you can do using the **-z** option.
|
||||
|
||||
```
|
||||
paste -z file1.txt file2.txt file3.txt
|
||||
```
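Since NUL bytes are invisible in a terminal, one possible way to confirm the effect of **-z** is to pipe the output through od:

```
# od -c prints non-printable bytes, so the NUL separators show up as \0
paste -z file1.txt file2.txt file3.txt | od -c | head
```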
|
||||
|
||||
### Conclusion
|
||||
|
||||
As most of you'd agree, the paste command isn't difficult to understand and use. It may offer a limited set of command line options, but the tool does what it claims. You may not require it on a daily basis, but paste can be a real time-saver in some scenarios. Just in case you need it, [here's the tool's man page][11].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/linux-paste-command/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com
|
||||
[1]:https://www.howtoforge.com/images/command-tutorial/paste-3-files.png
|
||||
[2]:https://www.howtoforge.com/images/command-tutorial/big/paste-3-files.png
|
||||
[3]:https://www.howtoforge.com/images/command-tutorial/paste-basic-usage.png
|
||||
[4]:https://www.howtoforge.com/images/command-tutorial/big/paste-basic-usage.png
|
||||
[5]:https://www.howtoforge.com/images/command-tutorial/paste-d-option.png
|
||||
[6]:https://www.howtoforge.com/images/command-tutorial/big/paste-d-option.png
|
||||
[7]:https://www.howtoforge.com/images/command-tutorial/paste-s-option.png
|
||||
[8]:https://www.howtoforge.com/images/command-tutorial/big/paste-s-option.png
|
||||
[9]:https://www.howtoforge.com/images/command-tutorial/paste-d-mult1.png
|
||||
[10]:https://www.howtoforge.com/images/command-tutorial/big/paste-d-mult1.png
|
||||
[11]:https://linux.die.net/man/1/paste
|
@ -0,0 +1,101 @@
|
||||
Meltdown and Spectre Linux Kernel Status
|
||||
============================================================
|
||||
|
||||
|
||||
By now, everyone knows that something “big” just got announced regarding computer security. Heck, when the [Daily Mail does a report on it][1] , you know something is bad…
|
||||
|
||||
Anyway, I’m not going to go into the details about the problems being reported, other than to point you at the wonderfully written [Project Zero paper on the issues involved here][2]. They should just give out the 2018 [Pwnie][3] award right now, it’s that amazingly good.
|
||||
|
||||
If you do want technical details for how we are resolving those issues in the kernel, see the always awesome [lwn.net writeup for the details][4].
|
||||
|
||||
Also, here’s a good summary of [lots of other postings][5] that includes announcements from various vendors.
|
||||
|
||||
As for how this was all handled by the companies involved, well this could be described as a textbook example of how _NOT_ to interact with the Linux kernel community properly. The people and companies involved know what happened, and I’m sure it will all come out eventually, but right now we need to focus on fixing the issues involved, and not pointing blame, no matter how much we want to.
|
||||
|
||||
### What you can do right now
|
||||
|
||||
If your Linux systems are running a normal Linux distribution, go update your kernel. They should all have the updates in them already. And then keep updating them over the next few weeks, we are still working out lots of corner case bugs given that the testing involved here is complex given the huge variety of systems and workloads this affects. If your distro does not have kernel updates, then I strongly suggest changing distros right now.
|
||||
|
||||
However there are lots of systems out there that are not running “normal” Linux distributions for various reasons (rumor has it that it is way more than the “traditional” corporate distros). They rely on the LTS kernel updates, or the normal stable kernel updates, or they are in-house franken-kernels. For those people here’s the status of what is going on regarding all of this mess in the upstream kernels you can use.
|
||||
|
||||
### Meltdown – x86
|
||||
|
||||
Right now, Linus’s kernel tree contains all of the fixes we currently know about to handle the Meltdown vulnerability for the x86 architecture. Go enable the CONFIG_PAGE_TABLE_ISOLATION kernel build option, and rebuild and reboot and all should be fine.
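If you are unsure whether your current kernel already has the option enabled, one quick check (the path assumes a typical distribution layout that installs kernel config files under /boot) is:

```
# check whether the running kernel was built with page table isolation
grep CONFIG_PAGE_TABLE_ISOLATION= /boot/config-$(uname -r)
```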
|
||||
|
||||
However, Linus’s tree is currently at 4.15-rc6 + some outstanding patches. 4.15-rc7 should be out tomorrow, with those outstanding patches to resolve some issues, but most people do not run a -rc kernel in a “normal” environment.
|
||||
|
||||
Because of this, the x86 kernel developers have done a wonderful job in their development of the page table isolation code, so much so that the backport to the latest stable kernel, 4.14, has been almost trivial for me to do. This means that the latest 4.14 release (4.14.12 at this moment in time) is what you should be running. 4.14.13 will be out in a few more days, with some additional fixes in it that are needed for some systems that have boot-time problems with 4.14.12 (it's an obvious problem: if it does not boot, just add the patches now queued up).
|
||||
|
||||
I would personally like to thank Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, Peter Zijlstra, Josh Poimboeuf, Juergen Gross, and Linus Torvalds for all of the work they have done in getting these fixes developed and merged upstream in a form that was so easy for me to consume to allow the stable releases to work properly. Without that effort, I don’t even want to think about what would have happened.
|
||||
|
||||
For the older long term stable (LTS) kernels, I have leaned heavily on the wonderful work of Hugh Dickins, Dave Hansen, Jiri Kosina and Borislav Petkov to bring the same functionality to the 4.4 and 4.9 stable kernel trees. I also had immense help from Guenter Roeck, Kees Cook, Jamie Iles, and many others in tracking down nasty bugs and missing patches. I want to also call out David Woodhouse, Eduardo Valentin, Laura Abbott, and Rik van Riel for their help with the backporting and integration as well, their help was essential in numerous tricky places.
|
||||
|
||||
These LTS kernels also have the CONFIG_PAGE_TABLE_ISOLATION build option that should be enabled to get complete protection.
|
||||
|
||||
As this backport is very different from the mainline version that is in 4.14 and 4.15, there are different bugs happening, right now we know of some VDSO issues that are getting worked on, and some odd virtual machine setups are reporting strange errors, but those are the minority at the moment, and should not stop you from upgrading at all right now. If you do run into problems with these releases, please let us know on the stable kernel mailing list.
|
||||
|
||||
If you rely on any other kernel tree other than 4.4, 4.9, or 4.14 right now, and you do not have a distribution supporting you, you are out of luck. The lack of patches to resolve the Meltdown problem is so minor compared to the hundreds of other known exploits and bugs that your kernel version currently contains. You need to worry about that more than anything else at this moment, and get your systems up to date first.
|
||||
|
||||
Also, go yell at the people who forced you to run an obsoleted and insecure kernel version, they are the ones that need to learn that doing so is a totally reckless act.
|
||||
|
||||
### Meltdown – ARM64
|
||||
|
||||
Right now the ARM64 set of patches for the Meltdown issue are not merged into Linus’s tree. They are [staged and ready to be merged][6] into 4.16-rc1 once 4.15 is released in a few weeks. Because these patches are not in a released kernel from Linus yet, I can not backport them into the stable kernel releases (hey, we have [rules][7] for a reason…)
|
||||
|
||||
Due to them not being in a released kernel, if you rely on ARM64 for your systems (i.e. Android), I point you at the [Android Common Kernel tree][8]. All of the ARM64 fixes have been merged into the [3.18,][9] [4.4,][10] and [4.9 branches][11] as of this point in time.
|
||||
|
||||
I would strongly recommend just tracking those branches as more fixes get added over time due to testing and things catch up with what gets merged into the upstream kernel releases over time, especially as I do not know when these patches will land in the stable and LTS kernel releases at this point in time.
|
||||
|
||||
For the 4.4 and 4.9 LTS kernels, odds are these patches will never get merged into them, due to the large number of prerequisite patches required. All of those prerequisite patches have been long merged and tested in the android-common kernels, so I think it is a better idea to just rely on those kernel branches instead of the LTS release for ARM systems at this point in time.
|
||||
|
||||
Also note, I merge all of the LTS kernel updates into those branches usually within a day or so of being released, so you should be following those branches no matter what, to ensure your ARM systems are up to date and secure.
|
||||
|
||||
### Spectre
|
||||
|
||||
Now things get “interesting”…
|
||||
|
||||
Again, if you are running a distro kernel, you _might_ be covered, as some of the distros have merged various patches that they claim mitigate most of the problems here. I suggest updating and testing for yourself to see whether you are worried about this attack vector.
|
||||
|
||||
For upstream, well, the status is there is no fixes merged into any upstream tree for these types of issues yet. There are numerous patches floating around on the different mailing lists that are proposing solutions for how to resolve them, but they are under heavy development, some of the patch series do not even build or apply to any known trees, the series conflict with each other, and it’s a general mess.
|
||||
|
||||
This is due to the fact that the Spectre issues were the last to be addressed by the kernel developers. All of us were working on the Meltdown issue, and we had no real information on exactly what the Spectre problem was at all, and what patches were floating around were in even worse shape than what have been publicly posted.
|
||||
|
||||
Because of all of this, it is going to take us in the kernel community a few weeks to resolve these issues and get them merged upstream. The fixes are coming in to various subsystems all over the kernel, and will be collected and released in the stable kernel updates as they are merged, so again, you are best off just staying up to date with either your distribution’s kernel releases, or the LTS and stable kernel releases.
|
||||
|
||||
It’s not the best news, I know, but it’s reality. If it’s any consolation, it does not seem that any other operating system has full solutions for these issues either, the whole industry is in the same boat right now, and we just need to wait and let the developers solve the problem as quickly as they can.
|
||||
|
||||
The proposed solutions are not trivial, but some of them are amazingly good. The [Retpoline][12] post from Paul Turner is an example of some of the new concepts being created to help resolve these issues. This is going to be an area of lots of research over the next years to come up with ways to mitigate the potential problems involved in hardware that wants to try to predict the future before it happens.
|
||||
|
||||
### Other arches
|
||||
|
||||
Right now, I have not seen patches for any other architectures than x86 and arm64. There are rumors of patches floating around in some of the enterprise distributions for some of the other processor types, and hopefully they will surface in the weeks to come to get merged properly upstream. I have no idea when that will happen; if you are dependent on a specific architecture, I suggest asking on the arch-specific mailing list about this to get a straight answer.
|
||||
|
||||
### Conclusion
|
||||
|
||||
Again, update your kernels, don’t delay, and don’t stop. The updates to resolve these problems will continue to come for a long period of time. Also, there are still lots of other bugs and security issues being resolved in the stable and LTS kernel releases that are totally independent of these types of issues, so keeping up to date is always a good idea.
|
||||
|
||||
Right now, there are a lot of very overworked, grumpy, sleepless, and just generally pissed off kernel developers working as hard as they can to resolve these issues that they themselves did not cause at all. Please be considerate of their situation right now. They need all the love and support and free supply of their favorite beverage that we can provide them to ensure that we all end up with fixed systems as soon as possible.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://kroah.com/log/blog/2018/01/06/meltdown-status/
|
||||
|
||||
作者:[Greg Kroah-Hartman ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://kroah.com
|
||||
[1]:http://www.dailymail.co.uk/sciencetech/article-5238789/Intel-says-security-updates-fix-Meltdown-Spectre.html
|
||||
[2]:https://googleprojectzero.blogspot.fr/2018/01/reading-privileged-memory-with-side.html
|
||||
[3]:https://pwnies.com/
|
||||
[4]:https://lwn.net/Articles/743265/
|
||||
[5]:https://lwn.net/Articles/742999/
|
||||
[6]:https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=kpti
|
||||
[7]:https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
|
||||
[8]:https://android.googlesource.com/kernel/common/
|
||||
[9]:https://android.googlesource.com/kernel/common/+/android-3.18
|
||||
[10]:https://android.googlesource.com/kernel/common/+/android-4.4
|
||||
[11]:https://android.googlesource.com/kernel/common/+/android-4.9
|
||||
[12]:https://support.google.com/faqs/answer/7625886
|
@ -1,63 +0,0 @@
|
||||
书评:《Ours to Hack and to Own》
|
||||
============================================================
|
||||
|
||||

|
||||
Image by : opensource.com
|
||||
|
||||
私有制的时代看起来似乎结束了,我将不仅仅讨论那些由我们中的许多人引入到我们的家庭与生活的设备和软件。我也将讨论这些设备与应用依赖的平台与服务。
|
||||
|
||||
尽管我们使用的许多服务是免费的,我们对它们并没有任何控制。本质上讲,这些企业确实控制着我们所看到的,听到的以及阅读到的内容。不仅如此,许多企业还在改变工作的性质。他们正使用封闭的平台来助长由全职工作到[零工经济][2]的转变方式,这种方式提供极少的安全性与确定性。
|
||||
|
||||
这项行动对于网络以及每一个使用与依赖网络的人产生了广泛的影响。仅仅二十多年前的开放网络的想象正在逐渐消逝并迅速地被一块难以穿透的幕帘所取代。
|
||||
|
||||
一种变得流行的补救办法就是建立[平台合作][3], 由他们的用户所拥有的电子化平台。正如这本书所阐述的,平台合作社背后的观点与开源有许多相同的根源。
|
||||
|
||||
学者Trebor Scholz和作家Nathan Schneider已经收集了40篇探讨平台合作社作为普通人可使用以提升开放性并对闭源系统的不透明性及各种限制予以还击的工具的增长及需求的论文。
|
||||
|
||||
### 哪里适合开源
|
||||
|
||||
任何平台合作社核心及接近核心的部分依赖与开源;不仅开源技术是必要的,构成开源开放性,透明性,协同合作以及共享的准则与理念同样不可或缺。
|
||||
|
||||
在这本书的介绍中, Trebor Scholz指出:
|
||||
|
||||
> 与网络的黑盒子系统相反,这些平台需要使它们的数据流透明来辨别自身。他们需要展示客户与员工的数据在哪里存储,数据出售给了谁以及数据为了何种目的。
|
||||
|
||||
正是对开源如此重要的透明性,促使平台合作社如此吸引人并在目前大量已存平台之中成为令人耳目一新的变化。
|
||||
|
||||
开源软件在《Ours to Hack and to Own》所分享的平台合作社的构想中必然充当着重要角色。开源软件能够为群体建立助推合作社的技术型公共建设提供快速,不算昂贵的途径。
|
||||
|
||||
Mickey Metts在论文中这样形容, "与你的友好的社区型技术合作社相遇。(原文:Meet Your Friendly Neighborhood Tech Co-Op.)" Metts为一家名为Agaric的企业工作,这家企业使用Drupal为团体及小型企业建立他们不能独自完成的产品。除此以外, Metts还鼓励任何想要建立并运营自己的企业的公司或合作社的人接受免费且开源的软件。为什么呢?因为它是高质量的,不算昂贵的,可定制的,并且你能够与由乐于助人而又热情的人们组成的大型社区产生联系。
|
||||
|
||||
### 不总是开源的,但开源总在
|
||||
|
||||
这本书里不是所有的论文都聚焦或提及开源的;但是,开源方式的关键元素-合作,社区,开放管理以及电子自由化-总是在其表面若隐若现。
|
||||
|
||||
事实上正如《Ours to Hack and to Own》中许多论文所讨论的,建立一个更加开放,基于平常人的经济与社会区块,平台合作社会变得非常重要。用Douglas Rushkoff的话讲,那会是类似Creative Commons的组织“对共享知识资源的私有化”的补偿。它们也如Barcelona的CTO(首席执行官)Francesca Bria所描述的那样,是“通过确保市民数据安全性,隐私性和权利的系统”来运营他们自己的“分布式通用数据基础架构”的城市。
|
||||
|
||||
### 最后的思考
|
||||
|
||||
如果你在寻找改变互联网的蓝图以及我们工作的方式,《Ours to Hack and to Own》并不是你要寻找的。这本书与其说是用户指南,不如说是一种宣言。如书中所说,《Ours to Hack and to Own》让我们略微了解如果我们将开源方式准则应用于社会及更加广泛的世界我们能够做的事。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Scott Nesbitt -作家,编辑,雇佣兵,虎猫牛仔(原文:Ocelot wrangle),丈夫与父亲,博客写手,陶器收藏家。Scott正是做这样的一些事情。他还是大量写关于开源软件文章与博客的长期开源用户。你可以在Twitter,Github上找到他。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/1/review-book-ours-to-hack-and-own
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
译者:[darsh8](https://github.com/darsh8)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/scottnesbitt
|
||||
[1]:https://opensource.com/article/17/1/review-book-ours-to-hack-and-own?rate=dgkFEuCLLeutLMH2N_4TmUupAJDjgNvFpqWqYCbQb-8
|
||||
[2]:https://en.wikipedia.org/wiki/Access_economy
|
||||
[3]:https://en.wikipedia.org/wiki/Platform_cooperative
|
||||
[4]:http://www.orbooks.com/catalog/ours-to-hack-and-to-own/
|
||||
[5]:https://opensource.com/user/14925/feed
|
||||
[6]:https://opensource.com/users/scottnesbitt
|
@ -1,76 +1,75 @@
|
||||
translating by lujun9972
|
||||
|
||||
Vmware Linux Guest Add a New Hard Disk Without Rebooting Guest
|
||||
在不重启的情况下为 Vmware Linux 客户机添加新硬盘
|
||||
======
|
||||
|
||||
As a system admin, I need to use additional hard drives for to provide more storage space or to separate system data from user data. This procedure, adding physical block devices to virtualized guests, describes how to add a hard drive on the host to a virtualized guest using VMWare software running Linux as guest.
|
||||
作为一名系统管理员,我经常需要用额外的硬盘来扩充存储空间,或将系统数据与用户数据分离。本文描述将物理块设备添加到虚拟化客户机的过程,告诉你如何把主机上的一块硬盘添加到一台使用 VMWare 软件虚拟化的 Linux 客户机上。
|
||||
|
||||
It is possible to add or remove a SCSI device explicitly, or to re-scan an entire SCSI bus without rebooting a running Linux VM guest. This how to is tested under Vmware Server and Vmware Workstation v6.0 (but should work with older version too). All instructions are tested on RHEL, Fedora, CentOS and Ubuntu Linux guest / hosts operating systems.
|
||||
你可以显式地添加或删除一个 SCSI 设备,或者重新扫描整个 SCSI 总线,而不用重启 Linux 虚拟机。本指南在 Vmware Server 和 Vmware Workstation v6.0 中通过了测试(更老版本应该也支持)。所有命令在 RHEL、Fedora、CentOS 和 Ubuntu Linux 客户机/主机操作系统下都经过了测试。
|
||||
|
||||
|
||||
## Step # 1: Add a New Disk To Vm Guest
|
||||
## 步骤 # 1:添加新硬盘到虚拟客户机
|
||||
|
||||
First, you need to add hard disk by visiting vmware hardware settings menu.
|
||||
Click on VM > Settings
|
||||
首先,通过 vmware 硬件设置菜单添加硬盘。
|
||||
点击 VM > Settings
|
||||
|
||||
![Fig.01: Vmware Virtual Machine Settings ][1]
|
||||
![Fig.01:Vmware Virtual Machine Settings ][1]
|
||||
|
||||
Alternatively you can press CTRL + D to bring settings dialog box.
|
||||
你也可以按下 CTRL + D 进入设置对话框。
|
||||
|
||||
Click on Add+ to add new hardware to guest:
|
||||
点击 Add+ 添加新硬盘到客户机:
|
||||
|
||||
![Fig.02: VMWare adding a new hardware][2]
|
||||
![Fig.02:VMWare adding a new hardware][2]
|
||||
|
||||
选择硬件类型为 Hard disk 然后点击 Next
|
||||
|
||||
Select hardware type Hard disk and click on Next
|
||||
![Fig.03 VMware Adding a new disk wizard ][3]
|
||||
|
||||
Select create a new virtual disk and click on Next
|
||||
选择 `create a new virtual disk` 然后点击 Next
|
||||
|
||||
![Fig.04: Vmware Wizard Disk ][4]
|
||||
![Fig.04:Vmware Wizard Disk ][4]
|
||||
|
||||
Set virtual disk type to SCSI and click on Next
|
||||
设置虚拟磁盘类型为 SCSI 然后点击 Next
|
||||
|
||||
![Fig.05: Vmware Virtual Disk][5]
|
||||
![Fig.05:Vmware Virtual Disk][5]
|
||||
|
||||
Set maximum disk size as per your requirements and click on Next
|
||||
按需要设置最大磁盘大小,然后点击 Next
|
||||
|
||||
![Fig.06: Finalizing Disk Virtual Addition ][6]
|
||||
![Fig.06:Finalizing Disk Virtual Addition ][6]
|
||||
|
||||
Finally, set file location and click on Finish.
|
||||
最后,选择文件存放位置然后点击 Finish。
|
||||
|
||||
## Step # 2: Rescan the SCSI Bus to Add a SCSI Device Without rebooting the VM
|
||||
## 步骤 # 2:重新扫描 SCSI 总线,在不重启虚拟机的情况下添加 SCSI 设备
|
||||
|
||||
A rescan can be issued by typing the following command:
|
||||
输入下面命令重新扫描 SCSI 总线:
|
||||
|
||||
```
|
||||
echo "- - -" > /sys/class/scsi_host/ **host#** /scan
|
||||
echo "- - -" > /sys/class/scsi_host/host# /scan
|
||||
fdisk -l
|
||||
tail -f /var/log/message
|
||||
```
|
||||
|
||||
Sample outputs:
|
||||
输出为:
|
||||
|
||||
![Linux Vmware Rescan New Scsi Disk Without Reboot][7]
|
||||
|
||||
Replace host# with actual value such as host0. You can find scsi_host value using the following command:
|
||||
你需要将 `host#` 替换成真实的值,比如 host0。你可以通过下面命令来查出这个值:
|
||||
|
||||
`# ls /sys/class/scsi_host`
|
||||
|
||||
Output:
|
||||
输出:
|
||||
|
||||
```
|
||||
host0
|
||||
```
|
||||
|
||||
Now type the following to send a rescan request:
|
||||
然后输入下面的命令来请求重新扫描:
|
||||
|
||||
```
|
||||
echo "- - -" > /sys/class/scsi_host/ **host0** /scan
|
||||
echo "- - -" > /sys/class/scsi_host/host0/scan
|
||||
fdisk -l
|
||||
tail -f /var/log/message
|
||||
```
|
||||
|
||||
Sample Outputs:
|
||||
输出为:
|
||||
|
||||
```
|
||||
Jul 18 16:29:39 localhost kernel: Vendor: VMware, Model: VMware Virtual S Rev: 1.0
|
||||
@ -109,33 +108,33 @@ Jul 18 16:29:39 localhost kernel: sd 0:0:2:0: Attached scsi disk sdc
|
||||
Jul 18 16:29:39 localhost kernel: sd 0:0:2:0: Attached scsi generic sg2 type 0
|
||||
```
|
||||
|
||||
### How Do I Delete a Single Device Called /dev/sdc?
|
||||
### 如何删除 `/dev/sdc` 这块设备?
|
||||
|
||||
In addition to re-scanning the entire bus, a specific device can be added or existing device deleted using the following command:
|
||||
除了重新扫描整个总线外,你也可以使用下面命令添加或删除指定磁盘:
|
||||
|
||||
```
|
||||
# echo 1 > /sys/block/devName/device/delete
|
||||
# echo 1 > /sys/block/ **sdc** /device/delete
|
||||
# echo 1 > /sys/block/sdc/device/delete
|
||||
```
|
||||
|
||||
### How Do I Add a Single Device Called /dev/sdc?
|
||||
### 如何添加 `/dev/sdc` 这块设备?
|
||||
|
||||
To add a single device explicitly, use the following syntax:
|
||||
使用下面语法添加指定设备:
|
||||
|
||||
```
|
||||
# echo "scsi add-single-device <H> <B> <T> <L>" > /proc/scsi/scsi
|
||||
```
|
||||
|
||||
Where,
|
||||
这里,
|
||||
|
||||
* <H> : Host
|
||||
* <B> : Bus (Channel)
|
||||
* <T> : Target (Id)
|
||||
* <L> : LUN numbers
|
||||
* <H>:Host
|
||||
* <B>:Bus (Channel)
|
||||
* <T>:Target (Id)
|
||||
* <L>:LUN numbers
|
||||
|
||||
|
||||
|
||||
For e.g. add /dev/sdc with host # 0, bus # 0, target # 2, and LUN # 0, enter:
|
||||
例如,使用参数 host#0、bus#0、target#2 以及 LUN#0 来添加 /dev/sdc,则输入:
|
||||
|
||||
```
|
||||
# echo "scsi add-single-device 0 0 2 0">/proc/scsi/scsi
|
||||
@ -143,7 +142,7 @@ For e.g. add /dev/sdc with host # 0, bus # 0, target # 2, and LUN # 0, enter:
|
||||
# cat /proc/scsi/scsi
|
||||
```
|
||||
|
||||
Sample Outputs:
|
||||
结果输出:
|
||||
|
||||
```
|
||||
Attached devices:
|
||||
@ -158,9 +157,9 @@ Host: scsi0 Channel: 00 Id: 02 Lun: 00
|
||||
Type: Direct-Access ANSI SCSI revision: 02
|
||||
```
|
||||
|
||||
## Step #3: Format a New Disk
|
||||
## 步骤 #3:格式化新磁盘
|
||||
|
||||
Now, you can create partition using [fdisk and format it using mkfs.ext3][8] command:
|
||||
现在,你可以 [使用 fdisk 创建分区并用 mkfs.ext3][8] 命令将其格式化:
|
||||
|
||||
```
|
||||
# fdisk /dev/sdc
|
||||
@ -170,39 +169,39 @@ Now, you can create partition using [fdisk and format it using mkfs.ext3][8] com
|
||||
# mkfs.ext4 /dev/sdc3
|
||||
```
|
||||
|
||||
## Step #4: Create a Mount Point And Update /etc/fstab
|
||||
## 步骤 #4:创建挂载点并更新 /etc/fstab
|
||||
|
||||
`# mkdir /disk3`
|
||||
|
||||
Open /etc/fstab file, enter:
|
||||
打开 /etc/fstab 文件,输入:
|
||||
|
||||
`# vi /etc/fstab`
|
||||
|
||||
Append as follows:
|
||||
加入下面这行:
|
||||
|
||||
```
|
||||
/dev/sdc3 /disk3 ext3 defaults 1 2
|
||||
```
|
||||
|
||||
For ext4 fs:
|
||||
若是 ext4 文件系统则加入:
|
||||
|
||||
```
|
||||
/dev/sdc3 /disk3 ext4 defaults 1 2
|
||||
```
|
||||
|
||||
Save and close the file.
|
||||
保存并关闭文件。
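加入条目后,新分区并不会立即挂载。下面是一个示意性的验证步骤(假设挂载点为 /disk3):

```
mount -a        # 挂载 /etc/fstab 中所有尚未挂载的文件系统
df -h /disk3    # 确认新分区已经挂载
```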
|
||||
|
||||
#### Optional Task: Label the partition
|
||||
#### 可选操作:为分区加标签
|
||||
|
||||
[You can label the partition using e2label command][9]. For example, if you want to label the new partition /backupDisk, enter
|
||||
[你可以使用 e2label 命令为分区加标签][9]。假设你想把新分区标记为 /backupDisk,则输入:
|
||||
|
||||
`# e2label /dev/sdc1 /backupDisk`
|
||||
|
||||
See "[The importance of Linux partitions][10]
|
||||
详情参见 “[Linux 分区的重要性][10]”。
|
||||
|
||||
## about the author
|
||||
## 关于作者
|
||||
|
||||
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][11], [Facebook][12], [Google+][13].
|
||||
作者既是 nixCraft 的创造者,也是一名经验丰富的系统管理员,还是 Linux 操作系统/Unix shell 脚本培训师。他曾服务过全球客户,并与多个行业合作过,包括 IT、教育、国防和空间研究以及非盈利机构。你可以在 [Twitter][11]、[Facebook][12]、[Google+][13] 上关注他。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
@ -1,8 +1,8 @@
|
||||
translating by lujun9972
|
||||
How to use curl command with proxy username/password on Linux/ Unix
|
||||
如何让 curl 命令通过代理访问
|
||||
======
|
||||
|
||||
My sysadmin provided me the following proxy details:
|
||||
我的系统管理员给我提供了如下代理信息:
|
||||
```
|
||||
IP: 202.54.1.1
|
||||
Port: 3128
|
||||
@ -10,15 +10,14 @@ Username: foo
|
||||
Password: bar
|
||||
```
|
||||
|
||||
The settings worked perfectly with Google Chrome and Firefox browser. How do I use it with the curl command? How do I tell the curl command to use my proxy settings from Google Chrome browser?
|
||||
该设置在 Google Chrome 和 Firefox 浏览器上很容易设置。但是我要怎么把它应用到 curl 命令上呢?我要如何让 curl 命令使用我在 Google Chrome 浏览器上的代理设置呢?
|
||||
|
||||
很多 Linux 和 Unix 命令行工具(比如 curl 命令、wget 命令、lynx 命令等)使用名为 `http_proxy`、`https_proxy`、`ftp_proxy` 的环境变量来获取代理信息。它允许你通过代理服务器(使用或不使用用户名/密码均可)来连接那些基于文本的会话和应用。**本文就会演示一下如何让 curl 通过代理服务器发送 HTTP/HTTPS 请求。**
|
||||
|
||||
## 让 curl 命令使用代理的语法
|
||||
|
||||
|
||||
Many Linux and Unix command line tools such as curl command, wget command, lynx command, and others; use the environment variable called http_proxy, https_proxy, ftp_proxy to find the proxy details. It allows you to connect text based session and applications via the proxy server with or without a userame/password. T **his page shows how to perform HTTP/HTTPS requests with cURL cli using PROXY server.**
|
||||
|
||||
## Unix and Linux curl command with proxy syntax
|
||||
|
||||
|
||||
The syntax is:
|
||||
语法为:
|
||||
```
|
||||
## Set the proxy address of your uni/company/vpn network ##
|
||||
export http_proxy=http://your-ip-address:port/
|
||||
@ -32,7 +31,7 @@ export https_proxy=https://user:password@your-proxy-ip-address:port/
|
||||
```
|
||||
|
||||
|
||||
Another option is to pass the -x option to the curl command. To use the specified proxy:
|
||||
另一种方法是使用 curl 命令的 -x 选项:
|
||||
```
|
||||
curl -x <[protocol://][user:password@]proxyhost[:port]> url
|
||||
--proxy <[protocol://][user:password@]proxyhost[:port]> url
|
||||
@ -40,9 +39,9 @@ curl -x <[protocol://][user:password@]proxyhost[:port]> url
|
||||
-x http://user:password@Your-Ip-Here:Port url
|
||||
```
|
||||
|
||||
## Linux use curl command with proxy
|
||||
## 在 Linux 上的一个例子
|
||||
|
||||
First set the http_proxy:
|
||||
首先设置 `http_proxy`:
|
||||
```
|
||||
## proxy server, 202.54.1.1, port: 3128, user: foo, password: bar ##
|
||||
export http_proxy=http://foo:bar@202.54.1.1:3128/
|
||||
@ -51,7 +50,7 @@ export https_proxy=$http_proxy
|
||||
curl -I https://www.cyberciti.biz
|
||||
curl -v -I https://www.cyberciti.biz
|
||||
```
|
||||
Sample outputs:
|
||||
输出为:
|
||||
|
||||
```
|
||||
* Rebuilt URL to: www.cyberciti.biz/
|
||||
@ -98,44 +97,43 @@ Connection: keep-alive
|
||||
* Connection #0 to host 10.12.249.194 left intact
|
||||
```
|
||||
|
||||
|
||||
In this example, I'm downloading a pdf file:
|
||||
本例中,我来下载一个 pdf 文件:
|
||||
```
|
||||
$ export http_proxy="vivek:myPasswordHere@10.12.249.194:3128/"
|
||||
$ curl -v -O http://dl.cyberciti.biz/pdfdownloads/b8bf71be9da19d3feeee27a0a6960cb3/569b7f08/cms/631.pdf
|
||||
```
|
||||
OR use the -x option:
|
||||
也可以使用 -x 选项:
|
||||
```
|
||||
curl -x 'http://vivek:myPasswordHere@10.12.249.194:3128' -v -O https://dl.cyberciti.biz/pdfdownloads/b8bf71be9da19d3feeee27a0a6960cb3/569b7f08/cms/631.pdf
|
||||
```
|
||||
Sample outputs:
|
||||
[![Fig.01: curl in action \(click to enlarge\)][1]][2]
|
||||
输出为:
|
||||
![Fig.01:curl in action \(click to enlarge\)][1]
|
||||
|
||||
## How to use the specified proxy server with curl on Unix
|
||||
## Unix 上的一个例子
|
||||
|
||||
```
|
||||
$ curl -x http://prox_server_vpn:3128/ -I https://www.cyberciti.biz/faq/howto-nginx-customizing-404-403-error-page/
|
||||
```
|
||||
|
||||
## How to use socks protocol?
|
||||
## socks 协议怎么办呢?
|
||||
|
||||
The syntax is same:
|
||||
语法也是一样的:
|
||||
```
|
||||
curl -x socks5://[user:password@]proxyhost[:port]/ url
|
||||
curl --socks5 192.168.1.254:3099 https://www.cyberciti.biz/
|
||||
```
|
||||
|
||||
## How do I configure and setup curl to permanently use a proxy connection?
|
||||
## 如何让代理设置永久生效?
|
||||
|
||||
Update/edit your ~/.curlrc file using a text editor such as vim:
|
||||
编辑 ~/.curlrc 文件:
|
||||
`$ vi ~/.curlrc`
|
||||
Append the following:
|
||||
添加下面内容:
|
||||
```
|
||||
proxy = server1.cyberciti.biz:3128
|
||||
proxy-user = "foo:bar"
|
||||
```
|
||||
|
||||
Save and close the file. Another option is create a bash shell alias in your ~/.bashrc file:
|
||||
保存并关闭该文件。另一种方法是在你的 `~/.bashrc` 文件中创建一个别名:
|
||||
```
|
||||
## alias for curl command
|
||||
## set proxy-server and port, the syntax is
|
||||
@ -143,7 +141,7 @@ Save and close the file. Another option is create a bash shell alias in your ~/.
|
||||
alias curl = "curl -x server1.cyberciti.biz:3128"
|
||||
```
|
||||
|
||||
Remember, the proxy string can be specified with a protocol:// prefix to specify alternative proxy protocols. Use socks4://, socks4a://, socks5:// or socks5h:// to request the specific SOCKS version to be used. No protocol specified, http:// and all others will be treated as HTTP proxies. If the port number is not specified in the proxy string, it is assumed to be 1080. The -x option overrides existing environment variables that set the proxy to use. If there's an environment variable setting a proxy, you can set proxy to "" to override it. See curl command man page [here for more info][3].
|
||||
记住,代理字符串中可以使用 `protocol://` 前缀来指定不同的代理协议。使用 `socks4://`、`socks4a://`、`socks5://` 或者 `socks5h://` 来指定使用的 SOCKS 版本。若没有指定协议,则 `http://` 以及其它未识别的前缀都会被当作 HTTP 代理。若没有指定端口号,则默认为 1080。`-x` 选项的值要优先于环境变量设置的值。若不想走代理,而环境变量中又设置了代理,那么可以把代理设置为 "" 来覆盖环境变量的值。[详细信息请参阅 curl 的 man 页][3]。
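下面是一个简单的示意(代理地址沿用上文的示例,SOCKS 端口 1080 仅为假设):

```
## 环境变量里设置了代理 ##
export http_proxy=http://foo:bar@202.54.1.1:3128/

## 这一次请求不走任何代理 ##
curl -x "" -I https://www.cyberciti.biz

## 这一次请求改走 SOCKS5 代理(由代理端解析域名)##
curl -x socks5h://202.54.1.1:1080 -I https://www.cyberciti.biz
```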
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
78
translated/tech/20171011 What is a firewall.md
Normal file
78
translated/tech/20171011 What is a firewall.md
Normal file
@ -0,0 +1,78 @@
|
||||
什么是防火墙?
|
||||
=====
|
||||
基于网络的防火墙在美国企业中已经无处不在,因为它们已被证明能够抵御日益增长的威胁。
|
||||
|
||||
网络测试公司 NSS 实验室最近的一项研究发现,高达 80% 的美国大型企业运行着下一代防火墙。研究公司 IDC 估计,防火墙和相关的统一威胁管理市场的营业额在 2015 年为 76 亿美元,预计到 2020 年底将达到 127 亿美元。
|
||||
|
||||
**如果你想升级你的防火墙,可参考:[部署下一代防火墙时需要考虑什么][1]**
|
||||
|
||||
### 什么是防火墙?
|
||||
|
||||
防火墙是一种监控流量的边界防御工具,它要么放行流量,要么屏蔽流量。多年来,防火墙的功能不断增强,现在大多数防火墙不仅可以阻止一组已知的威胁、执行高级的访问控制列表策略,还可以深入检查每个数据包并对其进行测试,以确定它们是否安全。大多数防火墙以网络硬件的形式部署,由硬件处理流量,由软件供终端用户配置和管理系统。越来越多的软件版防火墙被部署到高度虚拟化的环境中,在被隔离的网络或 IaaS 公有云中执行策略。
|
||||
|
||||
过去十年中,防火墙技术的进步创造了新的部署方式,所以现在对于要部署防火墙的最终用户来说,有如下一些选择:
|
||||
|
||||
### 有状态的防火墙
|
||||
最初的防火墙是无状态的,也就是说,流量所通过的硬件会逐个检查每个网络数据包,并单独地屏蔽或放行它。从 1990 年代中后期开始,防火墙的第一个重大进展是引入了状态。有状态防火墙在更全面的上下文中检查流量,同时考虑到网络连接的工作状态和特性,从而提供更全面的防护。例如,有状态防火墙可以允许某些用户的特定流量通过,同时阻塞其他用户的同类流量。
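下面是一个示意性的例子(假设使用 Linux 的 iptables;规则仅为演示,并非完整的防火墙策略),体现“有状态”的含义:只放行属于已建立连接的数据包和新建的 SSH 连接,其余一律丢弃:

```
# 放行属于已建立连接及其关联连接的数据包
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# 放行新建的 SSH 连接(TCP 22 端口)
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
# 默认策略:丢弃其余所有入站流量
iptables -P INPUT DROP
```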
|
||||
|
||||
### 下一代防火墙
|
||||
多年来,防火墙增加了多种新的特性,包括深度包检查、入侵检测与防御,以及对加密流量的检查。下一代防火墙(NGFW)是指集成了许多这类先进功能的防火墙。
|
||||
|
||||
### 基于代理的防火墙
|
||||
|
||||
这类防火墙充当请求数据的最终用户和数据源之间的网关。所有流量在传递给最终用户之前都经过这个代理过滤。它通过掩盖信息的原始请求者的身份来保护客户端免受威胁。
|
||||
|
||||
### Web 应用防火墙
|
||||
|
||||
这类防火墙位于特定应用程序的前面,而不是位于更广阔的网络的入口或出口上。基于代理的防火墙通常被认为是在保护终端客户,而 WAF 通常被认为是在保护应用服务器。
|
||||
|
||||
### 防火墙硬件
|
||||
|
||||
防火墙硬件通常是一个简单的服务器,它充当路由器来过滤流量并运行防火墙软件。这些设备放置在企业网络的边缘,位于路由器和 Internet 服务提供商的连接点之间。一个企业通常可能在整个数据中心部署十几个物理防火墙。用户需要根据用户基数的大小和 Internet 连接的速率来确定防火墙需要支持的吞吐量。
|
||||
|
||||
### 防火墙软件
|
||||
|
||||
通常,终端用户会部署多个防火墙硬件节点和一个集中的防火墙软件系统来管理这些部署。这个中心系统用来配置策略和特性,进行分析,并对威胁作出响应。
|
||||
|
||||
### 下一代防火墙
|
||||
|
||||
如前所述,下一代防火墙(NGFW)是指集成了深度包检查、入侵防御、加密流量检查等先进功能的防火墙,下面描述其中的一些功能。
|
||||
|
||||
### 有状态的检测
|
||||
|
||||
阻止已知不需要的流量,这是基本的防火墙功能。
|
||||
|
||||
### 抵御病毒
|
||||
|
||||
在网络流量中搜索已知的病毒和漏洞利用。防火墙会接收最新威胁的更新来辅助这一功能,从而持续防范最新的威胁。
|
||||
|
||||
### 入侵防御系统
|
||||
|
||||
这类安全产品可以作为独立产品部署,但 IPS 功能正逐步融入 NGFW。虽然基本的防火墙技术能识别和阻止某些类型的网络流量,但 IPS 使用更细粒度的安全措施,如签名跟踪和异常检测,以防止威胁进入公司网络。IPS 系统已经取代了这一技术的早期版本,即入侵检测系统(IDS),后者的重点是识别威胁而不是遏制威胁。
|
||||
|
||||
### 深度包检测(DPI)
|
||||
|
||||
DPI 可以是 IPS 的一部分,也可以与 IPS 结合使用,它已经成为 NGFW 的一个重要特征,因为它提供了细粒度的流量分析能力,可以具体到数据包内容和包头。DPI 还可以用来监测出站流量,以确保敏感信息不会离开公司网络,这种技术称为数据防泄漏(DLP)。
|
||||
|
||||
### SSL 检测
|
||||
|
||||
安全套接字层(SSL)检测是一种检查加密流量中是否存在威胁的方法。随着越来越多的流量被加密,SSL 检测成为 NGFW 所实施的 DPI 技术的一个重要组成部分。SSL 检测相当于一个缓冲区,它在流量被送往最终目的地之前先解密并检查它。
|
||||
|
||||
### 沙盒
|
||||
|
||||
沙盒是 NGFW 中一个较新的特性,它指防火墙能够接收某些未知的流量或代码,并在一个隔离的测试环境中运行,以确定其是否有恶意。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3230457/lan-wan/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
|
||||
|
||||
作者:[Brandon Butler][a]
|
||||
译者:[zjon](https://github.com/zjon)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Brandon-Butler/
|
||||
[1]:https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
|
||||
|
||||
|
@ -0,0 +1,157 @@
|
||||
如何创建 Ubuntu Live CD (Linux 中国注:Ubuntu 原生光盘)的定制镜像
|
||||
======
|
||||

|
||||
|
||||
今天让我们来讨论一下如何创建 Ubuntu Live CD 的定制镜像(ISO)。我们以前使用 [**Pinguy Builder**][1] 完成这项工作,但它现在似乎停止开发了,Pinguy Builder 的官方网站最近没有任何更新。幸运的是,我找到了另一种创建 Ubuntu Live CD 镜像的工具:**Cubic**,即 **C**ustom **Ub**untu **I**SO **C**reator(Ubuntu 镜像定制器)的缩写,它是一个用来创建可定制、可启动的 Ubuntu Live CD(ISO)镜像的 GUI(图形用户界面)应用程序。
|
||||
|
||||
Cubic 正在积极开发,它提供了许多选项来轻松地创建定制的 Ubuntu Live CD。它集成了一个 ``chroot`` 命令行环境(Linux 中国注:Change Root,也就是改变程序执行时所参考的根目录位置),你可以在其中进行各种定制,比如安装新的软件包和内核、添加更多的背景壁纸、添加更多的文件和文件夹。它有一个直观的 GUI 界面,在镜像创建过程中可以用鼠标点击轻松地来回切换导航。你可以创建一个新的自定义镜像,也可以修改现有的项目。因为它可以用来制作 Ubuntu Live 镜像,所以我相信它也可以用于制作其他 Ubuntu 发行版和衍生版的镜像,比如 Linux Mint。
|
||||
### 安装 Cubic
|
||||
|
||||
Cubic 的开发人员已经开发出了一个 PPA (Linux 中国注:Personal Package Archives 首字母简写,私有的软件包档案) 来简化安装过程。要在 Ubuntu 系统上安装 Cubic ,在你的终端上运行以下命令:
|
||||
```
|
||||
sudo apt-add-repository ppa:cubic-wizard/release
|
||||
```
|
||||
```
|
||||
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6494C6D6997C215E
|
||||
```
|
||||
```
|
||||
sudo apt update
|
||||
```
|
||||
```
|
||||
sudo apt install cubic
|
||||
```
|
||||
|
||||
### 利用 Cubic 创建 Ubuntu Live CD 的定制镜像
|
||||
|
||||
|
||||
安装完成后,从应用程序菜单或坞站启动 Cubic。这是在我在 Ubuntu 16.04 LTS 桌面系统中 Cubic 的样子。
|
||||
|
||||
为新项目选择一个目录。它是保存镜像文件的目录。
|
||||
[![][2]][3]
|
||||
|
||||
请注意,Cubic 并不是为您当前的系统创建 Live CD 镜像,而是利用 Ubuntu 安装 ISO 来创建一个定制的 Live CD,因此你手头应该有一个最新的 ISO 镜像。
|
||||
选择您存储 Ubuntu 安装 ISO 镜像的路径。Cubic 将自动填写您定制操作系统的所有细节。如果你愿意,你可以改变细节。单击 Next 继续。
|
||||
[![][2]][4]
|
||||
|
||||
|
||||
接下来,安装介质中压缩的 Linux 文件系统将被提取到项目目录中(在我们的例子中是 **/home/ostechnix/custom_ubuntu**)。
|
||||
[![][2]][5]
|
||||
|
||||
|
||||
一旦文件系统被提取出来,将自动加载到``chroot``环境。如果你没有看到终端提示,按下回车键几次。
|
||||
[![][2]][6]
|
||||
|
||||
|
||||
在这里,你可以安装任何额外的软件包、添加背景图片、修改软件源列表、安装最新的 Linux 内核,以及对你的 Live CD 进行其他各种定制。
|
||||
|
||||
例如,我希望 `vim` 安装在我的 Live CD 中,所以现在就要安装它。
|
||||
[![][2]][7]
|
||||
|
||||
|
||||
我们不需要使用 ``sudo``,因为我们已经在具有最高权限(root)的环境中了。
|
||||
|
||||
类似地,如果需要,可以安装任何其他版本的 Linux 内核。
|
||||
```
|
||||
apt install linux-image-extra-4.10.0-24-generic
|
||||
```
|
||||
|
||||
此外,您还可以更新软件源列表(添加或删除软件存储库列表):
|
||||
[![][2]][8]
|
||||
|
||||
修改源列表后,不要忘记运行 ``apt update`` 命令来更新源列表:
|
||||
```
|
||||
apt update
|
||||
```
|
||||
|
||||
|
||||
另外,您还可以向 Live CD 中添加文件或文件夹。复制文件/文件夹(右击它们并选择复制,或者使用 `CTRL+C`),然后在终端内(Cubic 窗口中)右键单击,选择 **Paste file(s)**,最后点击 Cubic 向导底部的复制按钮将其复制进去。
|
||||
[![][2]][9]
|
||||
|
||||
**Ubuntu 17.10 用户注意事项: **
|
||||
|
||||
|
||||
在 Ubuntu 17.10 系统中,DNS 查询可能无法在 ``chroot`` 环境中工作。如果您正在制作一个定制的 Ubuntu 17.10 原生镜像,您需要指向正确的 `resolv.conf` 配置文件:
|
||||
```
|
||||
ln -sr /run/systemd/resolve/resolv.conf /run/systemd/resolve/stub-resolv.conf
|
||||
|
||||
```
|
||||
|
||||
验证 DNS 解析工作,运行:
|
||||
```
|
||||
cat /etc/resolv.conf
|
||||
ping google.com
|
||||
```
|
||||
|
||||
|
||||
如果你想的话,可以添加你自己的壁纸。要做到这一点,请切换到 **/usr/share/backgrounds/** 目录,
|
||||
```
|
||||
cd /usr/share/backgrounds
|
||||
```
|
||||
|
||||
|
||||
然后将图像拖放到 Cubic 窗口中,或者复制图像后在 Cubic 终端窗口中右键单击,选择 **Paste file(s)** 选项。此外,确保你在 **/usr/share/gnome-background-properties** 目录下的 XML 文件中注册了新的壁纸,这样你就可以在桌面上右键单击并选择 **Change Desktop Background** 来使用新添加的图像。完成所有更改后,在 Cubic 向导中单击 ``Next``。
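下面是一个注册壁纸的示意性写法(在 ``chroot`` 环境中执行;文件名 mywallpaper 与显示名称均为假设,请替换为你自己的):

```
# 为新壁纸生成 gnome-background-properties 的 XML 描述文件
cat > /usr/share/gnome-background-properties/mywallpaper.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd">
<wallpapers>
  <wallpaper>
    <name>My Wallpaper</name>
    <filename>/usr/share/backgrounds/mywallpaper.jpg</filename>
    <options>zoom</options>
  </wallpaper>
</wallpapers>
EOF
```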
|
||||
|
||||
接下来,选择引导到新的原生 ISO 镜像时使用的 Linux 内核版本。如果已经安装了其他版本内核,它们也将在这部分中被列出。然后选择您想在 Live CD 中使用的内核。
|
||||
[![][2]][10]
|
||||
|
||||
|
||||
在下一节中,选择要从您的原生映像中删除的软件包。在使用定制的原生映像安装完 Ubuntu 操作系统后,所选的软件包将自动删除。在选择要删除的软件包时,要格外小心,您可能在不知不觉中删除了一个软件包,而此软件包又是另外一个软件包的依赖包。
|
||||
[![][2]][11]
|
||||
|
||||
|
||||
接下来,原生镜像创建过程将开始。这里所要花费的时间取决于你定制的系统规格。
|
||||
[![][2]][12]
|
||||
|
||||
|
||||
镜像创建完成后,单击 ``Finish``。Cubic 将显示新创建的自定义镜像的细节。
|
||||
|
||||
如果你想在将来修改刚刚创建的自定义原生镜像,请**取消勾选** **“Delete all project files, except the generated disk image and the corresponding MD5 checksum file”**(**删除除了生成的磁盘映像和相应的 MD5 校验和文件之外的所有项目文件**)这个选项。这样 Cubic 将在项目的工作目录中保留自定义镜像的相关文件,您将来可以直接进行修改,而不用从头再来一遍。
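你可以用保留下来的 MD5 校验和文件来验证 ISO 是否完好。下面是一个示意(两个文件名均为假设,请换成 Cubic 实际生成的文件名):

```
# 计算 ISO 的 MD5,并与 Cubic 生成的校验和文件内容对比
md5sum custom-ubuntu.iso
cat custom-ubuntu.md5
```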
|
||||
|
||||
要为不同的 Ubuntu 版本创建新的原生镜像,最好使用不同的项目目录。
|
||||
### 利用 Cubic 修改 Ubuntu Live CD 的定制镜像
|
||||
|
||||
从菜单中启动 Cubic ,并选择一个现有的项目目录。单击 Next 按钮,您将看到以下三个选项:
|
||||
1. 从现有项目创建一个磁盘映像。
|
||||
2. 继续定制现有项目。
|
||||
3. 删除当前项目。
|
||||
|
||||
|
||||
|
||||
[![][2]][13]
|
||||
|
||||
|
||||
第一个选项将允许您使用之前所做的自定义在现有项目中创建一个新的原生 ISO 镜像。如果您丢失了 ISO 镜像,您可以使用第一个选项来创建一个新的。
|
||||
|
||||
第二个选项允许您在现有项目中进行任何其他更改。如果您选择此选项,您将再次进入 ``chroot``环境。您可以添加新的文件或文件夹,安装任何新的软件,删除任何软件,添加其他的 Linux 内核,添加桌面背景等等。
|
||||
|
||||
第三个选项将删除现有的项目,所以您可以从头开始。选择此选项将删除所有文件,包括新生成的 ISO 镜像文件。
|
||||
|
||||
我用 Cubic 做了一个定制的 Ubuntu 16.04 LTS 桌面 Live CD,就像这篇文章里描述的一样。如果你想创建一个 Ubuntu Live CD,Cubic 可能是一个不错的选择。
|
||||
|
||||
就这些了,再会!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/create-custom-ubuntu-live-cd-image/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[stevenzdg988](https://github.com/stevenzdg988)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/pinguy-builder-build-custom-ubuntu-os/
|
||||
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-1.png ()
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-2.png ()
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-3.png ()
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-4.png ()
|
||||
[7]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-6.png ()
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-5.png ()
|
||||
[9]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-7.png ()
|
||||
[10]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-8.png ()
|
||||
[11]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-10-1.png ()
|
||||
[12]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-12-1.png ()
|
||||
[13]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-13.png ()
|
@ -1,31 +1,28 @@
|
||||
[并发服务器:第四部分 - libuv][17]
|
||||
并发服务器(四):libuv
|
||||
============================================================
|
||||
|
||||
这是写并发网络服务器系列文章的第四部分。在这一部分中,我们将使用 libuv 去再次重写我们的服务器,并且也讨论关于使用一个线程池在回调中去处理耗时任务。最终,我们去看一下底层的 libuv,花一点时间去学习如何用异步 API 对文件系统阻塞操作进行封装。
|
||||
这是并发网络服务器系列文章的第四部分。在这一部分中,我们将使用 libuv 再次重写我们的服务器,并且也会讨论关于使用一个线程池在回调中去处理耗时任务。最终,我们去看一下底层的 libuv,花一点时间去学习如何用异步 API 对文件系统阻塞操作进行封装。
|
||||
|
||||
这一系列的所有文章包括:
|
||||
本系列的所有文章:
|
||||
|
||||
* [第一部分 - 简介][7]
|
||||
* [第一节 - 简介][7]
|
||||
* [第二节 - 线程][8]
|
||||
* [第三节 - 事件驱动][9]
|
||||
* [第四节 - libuv][10]
|
||||
|
||||
* [第二部分 - 线程][8]
|
||||
### 使用 libuv 抽象出事件驱动循环
|
||||
|
||||
* [第三部分 - 事件驱动][9]
|
||||
在 [第三节][11] 中,我们看到了基于 `select` 和 `epoll` 的服务器的相似之处,并且,我说过,在它们之间抽象出细微的差别是件很有吸引力的事。许多库已经做到了这些,所以在这一部分中我将去选一个并使用它。我选的这个库是 [libuv][12],它最初设计用于 Node.js 底层的可移植平台层,并且,后来发现在其它的项目中已有使用。libuv 是用 C 写的,因此,它具有很高的可移植性,非常适用嵌入到像 JavaScript 和 Python 这样的高级语言中。
|
||||
|
||||
* [第四部分 - libuv][10]
|
||||
|
||||
### 使用 Linux 抽象出事件驱动循环
|
||||
|
||||
在 [第三部分][11] 中,我们看到了基于 `select` 和 `epoll` 的相似之处,并且,我说过,在它们之间抽象出细微的差别是件很有魅力的事。Numerous 库已经做到了这些,但是,因为在这一部分中,我将去选一个并使用它。我选的这个库是 [libuv][12],它最初设计用于 Node.js 底层的轻便的平台层,并且,后来发现在其它的项目中已有使用。libuv 是用 C 写的,因此,它具有很高的可移植性,非常适用嵌入到像 JavaScript 和 Python 这样的高级语言中。
|
||||
|
||||
虽然 libuv 为抽象出底层平台细节已经有了一个非常大的框架,但它仍然是一个以 _事件循环_ 思想为中心的。在我们第三部分的事件驱动服务器中,事件循环在 main 函数中是很明确的;当使用 libuv 时,循环通常隐藏在库自身中,而用户代码仅需要注册事件句柄(作为一个回调函数)和运行这个循环。此外,libuv 将为给定的平台实现更快的事件循环实现。对于 Linux 它是 epoll,等等。
|
||||
虽然 libuv 为抽象出底层平台细节已经变成了一个相当大的框架,但它仍然是以 _事件循环_ 思想为中心的。在我们第三部分的事件驱动服务器中,事件循环在 `main` 函数中是很明确的;当使用 libuv 时,该循环通常隐藏在库自身中,而用户代码仅需要注册事件句柄(作为一个回调函数)和运行这个循环。此外,libuv 会在给定的平台上使用更快的事件循环实现,对于 Linux 它是 epoll,等等。
|
||||
|
||||

|
||||
|
||||
libuv 支持多路事件循环,并且,因此一个事件循环在库中是非常重要的;它有一个句柄 - `uv_loop_t`,和创建/杀死/启动/停止循环的函数。也就是说,在这篇文章中,我将仅需要使用 “默认的” 循环,libuv 可通过 `uv_default_loop()` 提供它;多路循环大多用于多线程事件驱动的服务器,这是一个更高级别的话题,我将留在这一系列文章的以后部分。
|
||||
libuv 支持多个事件循环,因此,事件循环是该库中的一等对象;它有一个句柄 —— `uv_loop_t`,以及创建/销毁/启动/停止循环的函数。也就是说,在这篇文章中,我将仅需要使用 “默认的” 循环,libuv 可通过 `uv_default_loop()` 提供它;多路循环大多用于多线程事件驱动的服务器,这是一个更高级别的话题,我将留在这一系列文章的以后部分。
|
||||
|
||||
### 使用 libuv 的并发服务器
|
||||
|
||||
为了对 libuv 有一个更深的印象,让我们跳转到我们的可靠的协议服务器,它通过我们的这个系列已经有了一个强大的重新实现。这个服务器的结构与第三部分中的基于 select 和 epoll 的服务器有一些相似之处。因为,它也依赖回调。完整的 [示例代码在这里][13];我们开始设置这个服务器的套接字绑定到一个本地端口:
|
||||
为了对 libuv 有一个更深的印象,让我们跳转到我们的可靠协议的服务器,它通过我们的这个系列已经有了一个强大的重新实现。这个服务器的结构与第三部分中的基于 select 和 epoll 的服务器有一些相似之处,因为,它也依赖回调。完整的 [示例代码在这里][13];我们开始设置这个服务器的套接字绑定到一个本地端口:
|
||||
|
||||
```
|
||||
int portnum = 9090;
|
||||
@ -50,11 +47,11 @@ if ((rc = uv_tcp_bind(&server_stream, (const struct sockaddr*)&server_address, 0
|
||||
}
|
||||
```
|
||||
|
||||
除了它被封装进 libuv APIs 中之外,你看到的是一个相当标准的套接字。在它的返回中,我们取得一个可工作于任何 libuv 支持的平台上的轻便的接口。
|
||||
除了它被封装进 libuv API 中之外,你看到的是一个相当标准的套接字。在它的返回中,我们取得一个可工作于任何 libuv 支持的平台上的可移植接口。
|
||||
|
||||
这些代码也很认真负责地演示了错误处理;多数的 libuv 函数返回一个整数状态,返回一个负数意味着出现了一个错误。在我们的服务器中,我们把这些错误按致命的问题处理,但也可以设想为一个更优雅的恢复。
|
||||
这些代码也展示了很认真负责的错误处理;多数的 libuv 函数返回一个整数状态,返回一个负数意味着出现了一个错误。在我们的服务器中,我们把这些错误看做致命问题进行处理,但也可以设想为一个更优雅的错误恢复。
|
||||
|
||||
现在,那个套接字已经绑定,是时候去监听它了。这里我们运行一个回调注册:
|
||||
现在,那个套接字已经绑定,是时候去监听它了。这里我们运行首个回调注册:
|
||||
|
||||
```
|
||||
// Listen on the socket for new peers to connect. When a new peer connects,
|
||||
@ -64,9 +61,9 @@ if ((rc = uv_listen((uv_stream_t*)&server_stream, N_BACKLOG, on_peer_connected))
|
||||
}
|
||||
```
|
||||
|
||||
当新的对端连接到这个套接字,`uv_listen` 将被调用去注册一个事件循环回调。我们的回调在这里被称为 `on_peer_connected`,并且我们一会儿将去检测它。
|
||||
`uv_listen` 注册了一个事件回调,当新的对端连接到这个套接字时,事件循环就会调用它。我们的回调在这里被称为 `on_peer_connected`,我们一会儿将去查看它。
|
||||
|
||||
最终,main 运行这个 libuv 循环,直到它被停止(`uv_run` 仅在循环被停止或者发生错误时返回)
|
||||
最终,`main` 运行这个 libuv 循环,直到它被停止(`uv_run` 仅在循环被停止或者发生错误时返回)。
|
||||
|
||||
```
|
||||
// Run the libuv event loop.
|
||||
@ -76,7 +73,7 @@ uv_run(uv_default_loop(), UV_RUN_DEFAULT);
|
||||
return uv_loop_close(uv_default_loop());
|
||||
```
|
||||
|
||||
注意,那个仅是一个单一的通过 main 优先去运行的事件循环回调;我们不久将看到怎么去添加更多的另外的回调。在事件循环的整个运行时中,添加和删除回调并不是一个问题 - 事实上,大多数服务器就是这么写的。
|
||||
注意,在运行事件循环之前,只有一个回调是通过 main 注册的;我们稍后将看到怎么去添加更多的回调。在事件循环的整个运行过程中,添加和删除回调并不是一个问题 —— 事实上,大多数服务器就是这么写的。
|
||||
|
||||
这是一个 `on_peer_connected`,它处理到服务器的新的客户端连接:
|
||||
|
||||
@ -135,9 +132,8 @@ void on_peer_connected(uv_stream_t* server_stream, int status) {
|
||||
|
||||
这些代码都有很好的注释,但是,这里有一些重要的 libuv 语法我想去强调一下:
|
||||
|
||||
* 进入回调中的自定义数据:因为 C 还没有停用,这可能是个挑战,libuv 在它的处理类型中有一个 `void*` 数据域;这些域可以被用于进入到用户数据。例如,注意 `client->data` 是如何指向到一个 `peer_state_t` 结构上,以便于通过 `uv_write` 和 `uv_read_start` 注册的回调可以知道它们正在处理的是哪个客户端的数据。
|
||||
|
||||
* 内存管理:事件驱动编程在语言中使用垃圾回收是非常容易的,因为,回调通常运行在一个它们注册的完全不同的栈框架中,使得基于栈的内存管理很困难。它总是需要传递堆分配的数据到 libuv 回调中(当所有回调运行时,除了 main,其它的都运行在栈上),并且,为了避免泄漏,许多情况下都要求这些数据去安全释放。这些都是些需要实践的内容 [[1]][6]。
|
||||
* 传入自定义数据到回调中:因为 C 还没有闭包,这可能是个挑战,libuv 在它的所有的处理类型中有一个 `void* data` 字段;这些字段可以被用于传递用户数据。例如,注意 `client->data` 是如何指向到一个 `peer_state_t` 结构上,以便于 `uv_write` 和 `uv_read_start` 注册的回调可以知道它们正在处理的是哪个客户端的数据。
|
||||
* 内存管理:在带有垃圾回收的语言中进行事件驱动编程是非常容易的,因为,回调通常运行在一个它们注册的完全不同的栈帧中,使得基于栈的内存管理很困难。它总是需要传递堆分配的数据到 libuv 回调中(当所有回调运行时,除了 main,其它的都运行在栈上),并且,为了避免泄漏,许多情况下都要求这些数据去安全释放。这些都是些需要实践的内容 [[1]][6]。
|
||||
|
||||
这个服务器上对端的状态如下:
|
||||
|
||||
@ -479,11 +475,11 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/
|
||||
[4]:https://eli.thegreenplace.net/tag/concurrency
|
||||
[5]:https://eli.thegreenplace.net/tag/c-c
|
||||
[6]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id4
|
||||
[7]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/
|
||||
[8]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/
|
||||
[9]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
|
||||
[7]:https://linux.cn/article-8993-1.html
|
||||
[8]:https://linux.cn/article-9002-1.html
|
||||
[9]:https://linux.cn/article-9117-1.html
|
||||
[10]:http://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/
|
||||
[11]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
|
||||
[11]:https://linux.cn/article-9117-1.html
|
||||
[12]:http://libuv.org/
|
||||
[13]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/uv-server.c
|
||||
[14]:https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/#id5
|
||||
|
@ -1,58 +0,0 @@
|
||||
安全工作热门:受到培训并获得注意
|
||||
============================================================
|
||||
|
||||

|
||||
来自 Dice 和 Linux 基金会的“开源工作报告”发现,未来对具有安全经验的专业人员的需求很高。[经许可使用][1]
|
||||
|
||||
对安全专业人员的需求是真实的。在 [Dice.com][4] 中,超过 75,000 个职位中有 15% 是安全职位。[Forbes][6] 中称:“根据网络安全数据工具 [CyberSeek][5],在美国每年有 4 万个信息安全分析师的职位空缺,雇主正在努力填补其他 20 万个与网络安全相关的工作。”我们知道,安全专家的需求正在快速增长,但兴趣水平还很低。
|
||||
|
||||
### 安全是要关注的领域
|
||||
|
||||
根据我的经验,很少有大学生对安全工作感兴趣,所以很多人把安全视为利基。入门级技术专家对业务分析师或系统分析师感兴趣,因为他们认为,如果想学习和应用核心 IT 概念,就必须坚持分析师工作或者更接近产品开发的工作。事实并非如此。
|
||||
|
||||
事实上,如果你有兴趣领先于商业领导者,那么安全是要关注的领域 - 作为一名安全专业人员,你必须端到端地了解业务,你必须看大局来给你的公司优势。
|
||||
|
||||
### 无所畏惧
|
||||
|
||||
分析师和安全工作并不完全相同。公司出于必要继续合并工程和安全工作。企业正在以前所未有的速度进行基础架构和代码的自动化部署,从而提高了安全作为所有技术专业人士日常生活的一部分的重要性。在我们的[ Linux 基金会的开源工作报告][7]中,42% 的招聘经理表示未来对有安全经验的专业人士的需求很大。
|
||||
|
||||
在安全方面,从未有过比现在更激动人心的时刻。如果你随时关注最新的技术新闻,就会发现大量的事情与安全相关:数据泄露、系统故障和欺诈。安全团队在不断变化、快节奏的环境中工作。真正的挑战在于,在保持甚至改进最终用户体验的同时,积极主动地保障安全,发现并消除漏洞。
|
||||
|
||||
### 增长即将来临
|
||||
|
||||
在技术的任何方面,安全将继续与云一起成长。企业越来越多地转向云计算,这暴露出比组织过去更多的安全漏洞。随着云的成熟,安全变得越来越重要。
|
||||
|
||||
法规也在不断收紧:个人身份信息(PII)的范围越来越宽泛。许多公司都发现,他们必须投资安全来保持合规,避免成为头条新闻。由于面临巨额罚款、声誉受损以及高管职位不保的风险,公司开始为安全工具和人员安排越来越多的预算。
|
||||
|
||||
### 培训和支持
|
||||
|
||||
即使你不选择一个特定的安全工作,你也一定会发现自己需要写安全的代码,如果你没有这个技能,你将开始一场艰苦的战斗。如果你的公司提供在工作中学习的话也是鼓励的,但我建议结合培训、指导和不断实践。如果你不使用安全技能,你将很快在快速进化的恶意攻击的复杂性中失去它们。
|
||||
|
||||
对于那些寻找安全工作的人来说,我的建议是找到组织中那些在工程、开发或者架构领域最为强大的人员 - 与他们和其他团队进行交流,做好实际工作,并且确保在心里保持大局。成为你的组织中一个脱颖而出的人,一个可以写安全的代码,同时也可以考虑战略和整体基础设施健康状况的人。
|
||||
|
||||
### 终局
|
||||
|
||||
越来越多的公司正在投资安全,并试图填补技术团队中的空缺职位。如果你对管理感兴趣,那么安全是值得关注的领域。高管层希望知道他们的公司在按规则行事,他们的数据是安全的,并且免受破坏和损失。
|
||||
|
||||
明智地实施、具有战略眼光的安全会受到关注。安全对高管和消费者都至关重要,我鼓励任何对安全感兴趣的人参加培训并做出贡献。
|
||||
|
||||
_现在[下载][2]完整的 2017 年开源工作报告_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/os-jobs-report/2017/11/security-jobs-are-hot-get-trained-and-get-noticed
|
||||
|
||||
作者:[ BEN COLLEN][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/bencollen
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:http://bit.ly/2017OSSjobsreport
|
||||
[3]:https://www.linux.com/files/images/security-skillspng
|
||||
[4]:http://www.dice.com/
|
||||
[5]:http://cyberseek.org/index.html#about
|
||||
[6]:https://www.forbes.com/sites/jeffkauflin/2017/03/16/the-fast-growing-job-with-a-huge-skills-gap-cyber-security/#292f0a675163
|
||||
[7]:http://media.dice.com/report/the-2017-open-source-jobs-report-employers-prioritize-hiring-open-source-professionals-with-latest-skills/
|
@ -1,163 +0,0 @@
|
||||
如何使用 Date 命令
|
||||
======
|
||||
在本文中,我们会通过一些案例来演示如何使用 Linux 中的 date 命令。date 命令可以让用户查看或设置系统日期和时间。date 命令很简单,请参见下面的例子和语法。
|
||||
|
||||
默认情况下,当不带任何参数运行 date 命令时,它会输出当前系统日期和时间:
|
||||
|
||||
```shell
|
||||
date
|
||||
```
|
||||
|
||||
```
|
||||
Sat 2 Dec 12:34:12 CST 2017
|
||||
```
|
||||
|
||||
#### 语法
|
||||
|
||||
```
|
||||
Usage: date [OPTION]... [+FORMAT]
|
||||
or: date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]
|
||||
Display the current time in the given FORMAT, or set the system date.
|
||||
|
||||
```
|
||||
|
||||
### 案例
|
||||
|
||||
下面这些案例会向你演示如何使用 date 命令来查看前后一段时间的日期时间。
|
||||
|
||||
#### 1\. 查找5周后的日期
|
||||
|
||||
```shell
|
||||
date -d "5 weeks"
|
||||
Sun Jan 7 19:53:50 CST 2018
|
||||
|
||||
```
|
||||
|
||||
#### 2\. 查找5周后又过4天的日期
|
||||
|
||||
```shell
|
||||
date -d "5 weeks 4 days"
|
||||
Thu Jan 11 19:55:35 CST 2018
|
||||
|
||||
```
|
||||
|
||||
#### 3\. 获取下个月的日期
|
||||
|
||||
```shell
|
||||
date -d "next month"
|
||||
Wed Jan 3 19:57:43 CST 2018
|
||||
```
|
||||
|
||||
#### 4\. 获取上周日的日期
|
||||
|
||||
```shell
|
||||
date -d last-sunday
|
||||
Sun Nov 26 00:00:00 CST 2017
|
||||
```
|
||||
|
||||
date 命令还有很多格式化相关的选项,下面的例子向你演示如何格式化 date 命令的输出。
|
||||
|
||||
#### 5\. 以 yyyy-mm-dd 的格式显示日期
|
||||
|
||||
```shell
|
||||
date +"%F"
|
||||
2017-12-03
|
||||
```
|
||||
|
||||
#### 6\. 以 mm/dd/yyyy 的格式显示日期
|
||||
|
||||
```shell
|
||||
date +"%m/%d/%Y"
|
||||
12/03/2017
|
||||
|
||||
```
|
||||
|
||||
#### 7\. 只显示时间
|
||||
|
||||
```shell
|
||||
date +"%T"
|
||||
20:07:04
|
||||
|
||||
```
|
||||
|
||||
#### 8\. 显示今天是一年中的第几天
|
||||
|
||||
```shell
|
||||
date +"%j"
|
||||
337
|
||||
|
||||
```
|
||||
|
||||
#### 9\. 与格式化相关的选项
|
||||
|
||||
| **%%** | 百分号 (“**%**“). |
|
||||
| **%a** | 星期的缩写形式 (像这样, **Sun**). |
|
||||
| **%A** | 星期的完整形式 (像这样, **Sunday**). |
|
||||
| **%b** | 缩写的月份 (像这样, **Jan**). |
|
||||
| **%B** | 当前区域的月份全称 (像这样, **January**). |
|
||||
| **%c** | 日期以及时间 (像这样, **Thu Mar 3 23:05:25 2005**). |
|
||||
| **%C** | 本世纪; 类似 **%Y**, 但是会省略最后两位 (像这样, **20**). |
|
||||
| **%d** | 月中的第几日 (像这样, **01**). |
|
||||
| **%D** | 日期; 效果与 **%m/%d/%y** 一样. |
|
||||
| **%e** | 月中的第几日, 会填充空格; 与 **%_d** 一样. |
|
||||
| **%F** | 完整的日期; 跟 **%Y-%m-%d** 一样. |
|
||||
| **%g** | 年份的后两位 (参见 **%G**). |
|
||||
| **%G** | 年份 (参见 **%V**); 通常跟 **%V** 连用. |
|
||||
| **%h** | 同 **%b**. |
|
||||
| **%H** | 小时 (**00**..**23**). |
|
||||
| **%I** | 小时 (**01**..**12**). |
|
||||
| **%j** | 一年中的第几天 (**001**..**366**). |
|
||||
| **%k** | 小时, 用空格填充 ( **0**..**23**); 同 **%_H**. |
|
||||
| **%l** | 小时, 用空格填充 ( **1**..**12**); 同 **%_I**. |
|
||||
| **%m** | 月份 (**01**..**12**). |
|
||||
| **%M** | 分钟 (**00**..**59**). |
|
||||
| **%n** | 换行. |
|
||||
| **%N** | 纳秒 (**000000000**..**999999999**). |
|
||||
| **%p** | 当前区域时间是上午 **AM** 还是下午 **PM**; 未知则为空. |
|
||||
| **%P** | 类似 **%p**, 但是用小写字母显示. |
|
||||
| **%r** | 当前区域的 12 小时制时间 (像这样, **11:11:04 PM**). |
|
||||
| **%R** | 24-小时制的小时和分钟; 同 **%H:%M**. |
|
||||
| **%s** | 从 1970-01-01 00:00:00 UTC 到现在经历的秒数. |
|
||||
| **%S** | 秒数 (**00**..**60**). |
|
||||
| **%t** | tab 制表符. |
|
||||
| **%T** | 时间; 同 **%H:%M:%S**. |
|
||||
| **%u** | 星期 (**1**..**7**); 1 表示 **星期一**. |
|
||||
| **%U** | 一年中的第几个星期, 以周日为一周的开始 (**00**..**53**). |
|
||||
| **%V** | 一年中的第几个星期,以周一为一周的开始 (**01**..**53**). |
|
||||
| **%w** | 用数字表示周几 (**0**..**6**); 0 表示 **周日**. |
|
||||
| **%W** | 一年中的第几个星期, 周一为一周的开始 (**00**..**53**). |
|
||||
| **%x** | 当前区域的日期表示 (像这样, **12/31/99**). |
|
||||
| **%X** | 当前区域的时间表示 (像这样, **23:13:48**). |
|
||||
| **%y** | 年份的后面两位 (**00**..**99**). |
|
||||
| **%Y** | 年. |
|
||||
| **%z** | +hhmm 指定数字时区 (像这样, **-0400**). |
|
||||
| **%:z** | +hh:mm 指定数字时区 (像这样, **-04:00**). |
|
||||
| **%::z** | +hh:mm:ss 指定数字时区 (像这样, **-04:00:00**). |
|
||||
| **%:::z** | 指定数字时区, 其中 “**:**” 的个数由你需要的精度来决定 (例如, **-04**, **+05:30**). |
|
||||
| **%Z** | 时区的字符缩写(例如, EDT). |
|
||||
|
||||
#### 10\. 设置系统时间
|
||||
|
||||
你也可以使用 date 来手工设置系统时间,方法是使用 `--set` 选项。下面的例子会将系统时间设置成 2017 年 8 月 30 日下午 4 点 22 分:
|
||||
|
||||
```shell
|
||||
date --set="20170830 16:22"
|
||||
|
||||
```
|
||||
|
||||
当然,如果你使用的是我们的 [VPS Hosting services][1],你随时可以就 date 命令的任何问题联系并咨询我们的 Linux 专家管理员(通过客服电话或者工单的方式)。他们 24×7 在线,会立即向您提供帮助。
|
||||
|
||||
PS:如果你喜欢这篇帖子,请点击下面的按钮分享或者留言。谢谢。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.rosehosting.com/blog/use-the-date-command-in-linux/
|
||||
|
||||
作者:[][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.rosehosting.com
|
||||
[1]:https://www.rosehosting.com/hosting-services.html
|
@ -1,115 +0,0 @@
|
||||
如何优雅的使用大部分的 Linux 文件压缩
|
||||
=======
|
||||
如果你对 Linux 系统下文件压缩命令和选项之多有任何疑问,你应该看一下 **apropos compress** 这个命令的输出。你会惊异于有如此多的命令可以进行文件的压缩和解压缩,还有许多命令可以对压缩文件进行比较、检验,在压缩文件的内容中进行搜索,甚至能把压缩文件从一种格式转换成另外一种格式(如 *.z 格式转为 *.gz 格式)。
|
||||
仅以 bzip2 相关的命令为例(zip、gzip 和 xz 也有一组类似的命令),你就能看到一串有意思的结果:
|
||||
|
||||
```
|
||||
$ apropos compress | grep ^bz
|
||||
bzcat (1) - decompresses files to stdout
|
||||
bzcmp (1) - compare bzip2 compressed files
|
||||
bzdiff (1) - compare bzip2 compressed files
|
||||
bzegrep (1) - search possibly bzip2 compressed files for a regular expression
|
||||
bzexe (1) - compress executable files in place
|
||||
bzfgrep (1) - search possibly bzip2 compressed files for a regular expression
|
||||
bzgrep (1) - search possibly bzip2 compressed files for a regular expression
|
||||
bzip2 (1) - a block-sorting file compressor, v1.0.6
|
||||
bzless (1) - file perusal filter for crt viewing of bzip2 compressed text
|
||||
bzmore (1) - file perusal filter for crt viewing of bzip2 compressed text
|
||||
```
|
||||
|
||||
在我的 Ubuntu 系统上,apropos compress 命令列出了超过 60 条命令。
|
||||
|
||||
## 压缩算法
|
||||
压缩并没有普适的方案。某些压缩工具是有损压缩,例如能够减小 mp3 文件的大小,同时让听者获得接近原声的听觉感受。但 Linux 命令行上用于压缩文件或归档文件的算法能够将其重新恢复为原始数据,换句话说,这些算法是无损的。
|
||||
|
||||
这是如何做到的?一行中 300 个相同的字符可以被压缩成类似 “300x” 的形式,但这种算法对大多数文件的压缩效果有限,因为文件中完全随机的序列远比相同字符的序列多得多。压缩算法变得越来越复杂和多样,这要从 Unix 早期 compress 被首次引入时说起。
|
||||
|
||||
## 在 Linux 系统上的压缩命令
|
||||
在 Linux 系统上,最常用的压缩命令是 zip、gzip、bzip2 和 xz。这些命令以大致相同的方式工作,但你需要权衡文件内容的压缩程度、压缩花费的时间,以及压缩文件在你需要使用的其他系统上的兼容性。
|
||||
有时候,压缩一个文件不但没有收益,还是浪费时间。在下面的例子中,被压缩的文件甚至比原始文件还要大。虽然这种情况并不普遍,但当文件内容达到一定程度的随机性时就可能发生。
|
||||
|
||||
```
|
||||
$ time zip bigfile.zip bigfile
|
||||
adding: bigfile (default 0% )
|
||||
real 0m0.055s
|
||||
user 0m0.000s
|
||||
sys 0m0.016s
|
||||
$ ls -l bigfile*
|
||||
-rw-r--r-- 1 root root 0 12月 20 22:36 bigfile
|
||||
-rw------- 1 root root 164 12月 20 22:41 bigfile.zip
|
||||
```
|
||||
注意压缩后的文件(bigfile.zip)比源文件(bigfile)要大。如果压缩只让文件增大,或者只减小很小的百分比,那么压缩的唯一好处可能就是方便在线备份。如果你在压缩文件后看到了类似下面的信息,说明你并没有从压缩中得到多少益处:

( deflated 1% )
|
||||
|
||||
文件内容在压缩过程中起着重要作用。上面文件大小增加的例子,是因为文件内容过于随机。如果压缩一个内容只包含 0 的文件,你会得到一个惊人的压缩比。在这种极端的情况下,几个常用的压缩工具都有非常棒的效果:
|
||||
|
||||
```
|
||||
-rw-rw-r-- 1 shs shs 10485760 Dec 8 12:31 zeroes.txt
|
||||
-rw-rw-r-- 1 shs shs 49 Dec 8 17:28 zeroes.txt.bz2
|
||||
-rw-rw-r-- 1 shs shs 10219 Dec 8 17:28 zeroes.txt.gz
|
||||
-rw-rw-r-- 1 shs shs 1660 Dec 8 12:31 zeroes.txt.xz
|
||||
-rw-rw-r-- 1 shs shs 10360 Dec 8 12:24 zeroes.zip
|
||||
```
|
||||
当然,在真实的文件上,你不太可能见到如此惊人的压缩比。
|
||||
在更真实的情况下,文件大小的差异并不显著。下面是一个较小的 jpg 图片文件的压缩结果:
|
||||
|
||||
```
|
||||
-rw-r--r-- 1 shs shs 13522 Dec 11 18:58 image.jpg
|
||||
-rw-r--r-- 1 shs shs 13875 Dec 11 18:58 image.jpg.bz2
|
||||
-rw-r--r-- 1 shs shs 13441 Dec 11 18:58 image.jpg.gz
|
||||
-rw-r--r-- 1 shs shs 13508 Dec 11 18:58 image.jpg.xz
|
||||
-rw-r--r-- 1 shs shs 13581 Dec 11 18:58 image.jpg.zip
|
||||
```
|
||||
|
||||
而在压缩文本文件时,你会发现明显的差异:
|
||||
```
|
||||
$ ls -l textfile*
|
||||
-rw-rw-r-- 1 shs shs 8740836 Dec 11 18:41 textfile
|
||||
-rw-rw-r-- 1 shs shs 1519807 Dec 11 18:41 textfile.bz2
|
||||
-rw-rw-r-- 1 shs shs 1977669 Dec 11 18:41 textfile.gz
|
||||
-rw-rw-r-- 1 shs shs 1024700 Dec 11 18:41 textfile.xz
|
||||
-rw-rw-r-- 1 shs shs 1977808 Dec 11 18:41 textfile.zip
|
||||
```
|
||||
|
||||
在这种情况下,xz 相较于其他压缩命令能更有效地减小文件大小,其次是 bzip2。
|
||||
|
||||
## 查看压缩文件
|
||||
|
||||
以 more 结尾的命令能够让你在不显式解压文件的情况下查看压缩文件的内容:
|
||||
|
||||
```
|
||||
bzmore (1) - file perusal filter for crt viewing of bzip2 compressed text
|
||||
lzmore (1) - view xz or lzma compressed (text) files
|
||||
xzmore (1) - view xz or lzma compressed (text) files
|
||||
zmore (1) - file perusal filter for crt viewing of compressed text
|
||||
```
|
||||
这些命令其实做了不少工作,因为它们必须把文件解压才能显示给用户。但另一方面,它们不会在系统中留下解压后的文件,它们只是即时地解压显示。
|
||||
|
||||
```
|
||||
$ xzmore textfile.xz | head -1
|
||||
Here is the agenda for tomorrow's staff meeting:
|
||||
```
|
||||
|
||||
## 比较压缩文件
|
||||
许多压缩工具集都包含一个差异比较命令(例如 xzdiff)。这些工具把比较工作交给 cmp 和 diff 来完成,而不是做特定于压缩算法的比较。例如,xzdiff 命令比较 bz2 类型的文件和比较 xz 类型的文件一样简单。
|
||||
|
||||
## 如何选择最好的 Linux 压缩工具
|
||||
如何选择压缩工具取决于你的工作。在一些情况下,选择取决于你所压缩的数据内容;在更多的情况下,则取决于你所在组织的惯例,除非你对磁盘空间特别敏感。下面是一般的建议:

* zip:文件需要被分享,或者会在 Windows 系统下使用。
* gzip:文件主要在 Unix/Linux 系统下使用。它历史悠久,几乎无处不在。
* bzip2:使用了不同的算法,能产生比 gzip 更小的文件,但要花更长的时间。
* xz:通常能提供最好的压缩率,但也会花费相当多的时间。它比其他工具更新,可能在你使用的系统上并不存在。
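一个简单的办法是用你自己的数据实际测试一下。下面是一个示意(假设待压缩的文件名为 bigfile,且你的 gzip、bzip2、xz 版本支持用 -k 选项保留原文件):

```
# 分别用几种工具压缩同一个文件,然后比较结果大小
gzip -k bigfile
bzip2 -k bigfile
xz -k bigfile
zip bigfile.zip bigfile
ls -l bigfile*
```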
|
||||
|
||||
## 注意
|
||||
当你在压缩文件时,你有很多选择;而在极少数情况下,压缩并不能节省磁盘存储空间。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
via: https://www.networkworld.com/article/3240938/linux/how-to-squeeze-the-most-out-of-linux-file-compression.html
|
||||
|
||||
作者 :[ Sandra Henry-Stocker ][1] 译者:[ singledo ][2] 校对:校对者ID
|
||||
|
||||
本文由 [ LCTT ][3]原创编译,Linux中国 荣誉推出
|
||||
|
||||
[1]:https://www.networkworld.com
|
||||
[2]:https://github.com/singledo
|
||||
[3]:https://github.com/LCTT/TranslateProject
|
@ -1,149 +0,0 @@
|
||||
translated by stevenzdg988
|
||||
|
||||
利用 Resetter 将 Ubuntu 衍生版重置为初始状态
|
||||
======
|
||||
有多少次你安装了 Ubuntu(或 Ubuntu 衍生版本),配置了各项内容并安装了软件,却发现你的桌面(或服务器)平台并不是你想要的?当机器上已经产生了大量的用户文件时,这种情况就会很麻烦。这时你有两个选择:要么备份你所有的数据、重新安装操作系统、再将数据复制回本机;要么利用一种类似于 [Resetter][1] 的工具来达到同样的效果。
|
||||
|
||||
Resetter 是一个较新的工具(由一位网名叫 “[gaining][2]” 的加拿大开发者开发),用 Python 和 PyQt 编写,可以将 Ubuntu、Linux Mint(以及其他一些基于 Ubuntu 的衍生版)重置回初始配置。Resetter 提供了两种不同的重置选项:自动和自定义。使用自动选项,该工具会完成以下内容:
|
||||
* 删除用户安装的应用软件
|
||||
|
||||
* 删除用户及家目录
|
||||
|
||||
* 创建默认备份用户
|
||||
|
||||
* 自动安装预装的应用软件(MPIAs)
|
||||
|
||||
* 删除非默认用户
|
||||
|
||||
* 删除协议软件包
|
||||
|
||||
自定义选项:
|
||||
|
||||
* 删除用户安装的应用程序或者允许你选择要删除的应用程序
|
||||
|
||||
* 删除旧的内核
|
||||
|
||||
* 允许你选择用户进行删除
|
||||
|
||||
* 删除用户及家目录
|
||||
|
||||
* 创建默认备份用户
|
||||
|
||||
* 允许您创建自定义备份用户
|
||||
|
||||
* 自动安装MPIAs或选择MPIAs进行安装
|
||||
|
||||
* 删除非默认用户
|
||||
|
||||
* 查看所有相关依赖包
|
||||
|
||||
* 删除协议软件包
|
||||
|
||||
我将带领您完成安装和使用 Resetter 的过程。但是,我必须提醒你,这个工具还处于很早期的测试阶段。即便如此,Resetter 也绝对值得一试。实际上,我鼓励您测试这个应用程序并提交 bug 报告(您可以通过 [GitHub][3] 提交,或者直接发送到开发人员的电子邮件地址 [gaining7@outlook.com][4])。
|
||||
|
||||
还应注意的是,目前仅支持的衍生版有:
|
||||
* Debian 9.2(稳定)Gnome版本
|
||||
* Linux Mint 17.3 +(支持Mint 18.3即将推出)
|
||||
* Ubuntu 14.04+(虽然我发现不支持17.10)
|
||||
* Elementary OS 0.4+
|
||||
* Linux Deepin 15.4+
|
||||
|
||||
说到这里,让我们安装和使用Resetter。我将在[Elementary OS Loki][5]平台展示
|
||||
### 安装
|
||||
|
||||
有几种方法可以安装Resetter。我选择的方法是通过gdebi辅助应用程序,为什么?因为它将获取安装所需的所有依赖项。首先,我们必须安装那个特定的工具。打开终端窗口并发出命令:
|
||||
```
|
||||
sudo apt install gdebi
|
||||
```
|
||||
安装完毕后,请用浏览器打开 [Resetter 下载页面][6],下载该软件的最新版本。下载完毕后,打开文件管理器,导航到下载的文件,然后单击(或双击,这取决于你桌面的配置)resetter_XXX-stable_all.deb 文件(XXX 是版本号)。gdebi 应用程序将会打开(图 1)。点击安装包按钮,输入你的 sudo 密码,接下来 Resetter 将开始安装。
|
||||
## [resetter_1.jpg][7]
|
||||
|
||||
![gdebi][8]
|
||||
|
||||
图1:利用gdebi安装Resetter
|
||||
[使用许可][9]
|
||||
|
||||
当安装完成,准备接下来的操作。
|
||||
### 使用 Resetter
|
||||
|
||||
记住,在做这些之前,必须先备份数据,别怪我没提醒你。在终端窗口发出命令 `sudo resetter`,您将被提示输入 sudo 密码。Resetter 打开后,它将自动检测您的发行版(图 2)。
|
||||
## [resetter_2.jpg][10]
|
||||
|
||||
![Resetter][11]
|
||||
|
||||
图2: Resetter 主窗口
|
||||
[使用许可][9]
|
||||
|
||||
我们将通过自动重置来测试 Resetter 的流程。从主窗口,点击Automatic Reset(自动复位)。这款应用将提供一个明确的警告,它将把你的操作系统(的实例,Elementary OS 0.4.1 Loki)重新设置为出厂默认状态(图3)。
|
||||
## [resetter_3.jpg][12]
|
||||
|
||||
![警告][13]
|
||||
|
||||
图3:在继续之前,Resetter警告您。
|
||||
[用户许可][9]
|
||||
|
||||
单击Yes,Resetter将显示它将删除的所有包(图4)。如果您没有问题,单击OK,重置将开始。
|
||||
## [resetter_4.jpg][14]
|
||||
|
||||
![移除软件包][15]
|
||||
|
||||
图4:所有要删除的包,以便将 Elementary OS 重置为出厂默认值。
|
||||
[使用许可][9]
|
||||
|
||||
在重置过程中,应用程序将显示一个进度窗口(图5)。根据安装的数量,这个过程不应该花费太长时间。
|
||||
## [resetter_5.jpg][16]
|
||||
|
||||
![进度][17]
|
||||
|
||||
图5: Resetter 进度窗口
|
||||
[使用许可][9]
|
||||
|
||||
当进程完成时,Resetter将显示一个新的用户名和密码,以便重新登录到新重置的发行版(图6)。
|
||||
## [resetter_6.jpg][18]
|
||||
|
||||
![新用户][19]
|
||||
|
||||
图6:新用户及密码
|
||||
[使用许可][9]
|
||||
|
||||
单击 OK,然后在提示时单击 Yes 以重新启动系统。当提示登录时,使用 Resetter 应用程序提供给您的新凭证。成功登录后,您需要重新创建您的原始用户。该用户的家目录仍然是完整的,所以您需要做的就是发出命令 `sudo useradd USERNAME`(USERNAME 是用户名)。完成之后,发出命令 `sudo passwd USERNAME`(USERNAME 是用户名)设置密码(命令小结见下)。之后您就可以注销并以旧用户的身份登录(使用重置操作系统之前的同一个家目录)。
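下面把恢复旧用户所需的命令汇总成一个示意(用户名 jack 仅为示例,请换成你自己的用户名):

```
sudo useradd jack                     # 重新创建原来的用户(家目录已存在时不必加 -m)
sudo passwd jack                      # 为该用户设置密码
sudo chown -R jack:jack /home/jack    # 如有需要,修正家目录的所有权
```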
|
||||
### 我的成果
|
||||
|
||||
我必须承认,在为我的老用户添加了密码(并用 su 命令切换到该用户验证)之后,我还是无法用该用户登录到 Elementary OS 桌面。为了解决这个问题,我登录了 Resetter 创建的用户,移动了老用户的家目录,删除了老用户(使用命令 `sudo deluser jack`),并重新创建了老用户(使用命令 `sudo useradd -m jack`)。
|
||||
|
||||
这样做之后,我检查了原来的家目录,发现用户的所有权从 jack.jack 变成了 1000.1000。利用命令 `sudo chown -R jack.jack /home/jack`,就可以轻松修正这个问题。这一点非常关键:如果您使用 Resetter 后发现无法用您的老用户登录(在您重新创建用户并设置新密码之后),请确保修改该用户家目录的所有权。
|
||||
|
||||
除了这个问题之外,Resetter 在将 Elementary OS Loki 恢复到默认状态方面表现出色。虽然 Resetter 还处于测试阶段,但它已经是一个相当令人印象深刻的工具。试一试,看看你是否能得到和我一样好的结果。
|
||||
|
||||
从Linux基金会和edX的免费[" Linux入门"][20]课程学习更多关于Linux的知识。
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/12/set-ubuntu-derivatives-back-default-resetter
|
||||
|
||||
作者:[Jack Wallen][a]
|
||||
译者:[stevenzdg988](https://github.com/stevenzdg988)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/jlwallen
|
||||
[1]:https://github.com/gaining/Resetter
|
||||
[2]:https://github.com/gaining
|
||||
[3]:https://github.com
|
||||
[4]:mailto:gaining7@outlook.com
|
||||
[5]:https://elementary.io/
|
||||
[6]:https://github.com/gaining/Resetter/releases/tag/v1.1.3-stable
|
||||
[7]:/files/images/resetter1jpg-0
|
||||
[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_1_0.jpg?itok=3c_qrApr (gdebi)
|
||||
[9]:/licenses/category/used-permission
|
||||
[10]:/files/images/resetter2jpg
|
||||
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_2.jpg?itok=bmawiCYJ (Resetter)
|
||||
[12]:/files/images/resetter3jpg
|
||||
[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_3.jpg?itok=2wlbC3Ue (warning)
|
||||
[14]:/files/images/resetter4jpg-1
|
||||
[15]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_4_1.jpg?itok=f2I3noDM (remove packages)
|
||||
[16]:/files/images/resetter5jpg
|
||||
[17]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_5.jpg?itok=3FYs5_2S (progress)
|
||||
[18]:/files/images/resetter6jpg
|
||||
[19]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/resetter_6.jpg?itok=R9SVZgF1 (new username)
|
||||
[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -0,0 +1,60 @@
|
||||
cURL VS wget:根据两者的差异和使用习惯,你应该选用哪一个?
|
||||
======
|
||||

|
||||
|
||||
当想要直接通过 Linux 命令行下载文件时,马上就能想到两个工具:wget 和 cURL。它们有很多共同的特性,可以很轻易地完成一些相同的任务。
|
||||
|
||||
虽然它们有一些相似的特性,但并不完全一样。这两个程序适用于不同的场合,在各自擅长的场景下都有独到之处。
|
||||
|
||||
### cURL vs wget: 相似之处
|
||||
|
||||
wget 和 cURL 都可以下载内容,这是它们设计的核心。它们都可以向互联网发送请求并取回请求的内容,可以是文件、图片,或者诸如网站原始 HTML 之类的其他内容。
|
||||
|
||||
这两个程序都可以进行 HTTP POST 请求。这意味着它们都可以向网站发送数据,比如说填充表单什么的。
|
||||
|
||||
由于这两者都是命令行工具,它们都被设计成脚本程序。wget 和 cURL 都可以写进你的 [Bash 脚本][1] ,自动与新内容交互,下载所需内容。
|
||||
|
||||
### wget 的优势
|
||||
|
||||
![wget download][2]
|
||||
|
||||
wget 简单直接,这意味着你能享受它超凡的下载速度。wget 是一个独立的程序,不需要额外的库,也不会去做下载之外的事情。
|
||||
|
||||
wget 是专业的直接下载程序,支持递归下载。同时,它也允许你从网页或 FTP 目录下载任何内容。
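例如,下面是一个示意性的递归下载命令(URL 仅为示例):

```
# 递归下载一个目录:-r 递归,-np 不进入上级目录,-k 把链接改写为本地可用
wget -r -np -k https://example.com/docs/
```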
|
||||
|
||||
wget 拥有聪明的默认设置。它规定了很多事物在常规浏览器里的处理方式,比如 cookie 和重定向,这些都不需要额外配置。可以说,wget 开箱即用!
|
||||
|
||||
### cURL 优势
|
||||
|
||||
![cURL Download][3]
|
||||
|
||||
cURL 是一个多功能工具。当然,它可以下载网络内容,但同时也能做更多别的事情。
|
||||
|
||||
cURL 由一个库提供支持:libcurl。这意味着你可以基于 cURL 编写整个程序,也可以基于 libcurl 编写图形化的下载程序,使用它的全部功能。
|
||||
|
||||
cURL 宽泛的网络协议支持可能是其最大的卖点。cURL 支持访问 HTTP 和 HTTPS 协议,能够处理 FTP 传送。它支持 LDAP 协议,甚至支持 Samba 分享。实际上,你还可以用 cURL 收发邮件。
|
||||
|
||||
cURL 也有一些不错的安全特性。cURL 支持许多 SSL/TLS 库,也支持通过网络代理访问,包括 SOCKS。这意味着,你可以通过 Tor 来使用 cURL。
|
||||
|
||||
cURL 同样支持让数据发送变得更容易的 gzip 压缩技术。
|
||||
|
||||
### 思考总结
|
||||
|
||||
那你应该使用 cURL 还是 wget?这得看实际用途。如果你只想快速下载文件,不想操心各种选项参数,那你应该使用轻便高效的 wget。如果你想做一些更复杂的事情,那你应该选择 cURL。
|
||||
|
||||
cURL 可以做很多事情。你可以把 cURL 想象成一个精简的命令行网页浏览器。它支持几乎所有你能想到的协议,可以交互访问几乎所有在线内容。唯一和浏览器不同的是,cURL 不会渲染接收到的响应信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/curl-vs-wget/
|
||||
|
||||
作者:[Nick Congleton][a]
|
||||
译者:[CYLeft](https://github.com/CYLeft)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com/author/nickcongleton/
|
||||
[1]:https://www.maketecheasier.com/beginners-guide-scripting-linux/
|
||||
[2]:https://www.maketecheasier.com/assets/uploads/2017/12/wgc-wget.jpg (wget download)
|
||||
[3]:https://www.maketecheasier.com/assets/uploads/2017/12/wgc-curl.jpg (cURL Download)
|