This commit is contained in:
geekpi 2016-11-16 10:37:03 +08:00
commit 5c41df73f3
60 changed files with 5202 additions and 1970 deletions
View File
@ -1,43 +1,43 @@
用 dpkg 命令在 Debian 系的 Linux 系统中管理软件包
==================
[dpkg][7] 意即 Debian 包管理器Debian PacKaGe manager。dpkg 是一个可以安装、构建、删除及管理 Debian 软件包的命令行工具。dpkg 将 Aptitude首选而更用户友好作为执行所有操作的前端界面。
其它的一些工具如 dpkg-deb 和 dpkg-query 等也使用 dpkg 作为执行某些操作的前端。
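作为补充说明(这部分并非原文内容,只是一个示意),下面给出 dpkg-deb 和 dpkg-query 的两个常见用法,帮助理解它们与 dpkg 的关系;示例中的 atom-amd64.deb 和 atom 只是沿用本文后面用到的假设包名:

```
$ dpkg-deb -I atom-amd64.deb    # 查看 .deb 包的控制信息(版本、依赖等)
$ dpkg-query -W atom            # 查询某个已安装包的名称和版本
```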
现在大多数系统管理员使用 Apt、[Apt-Get][6] 及 Aptitude 等工具,不用费心就可以轻松地管理软件。
尽管如此,必要的时候还是需要用 dpkg 来安装某些软件。其它的一些在 Linux 系统上广泛使用的包管理工具还有 [yum][5]、[dnf][4]、[apt-get][3]、dpkg、[rpm][2]、[Zypper][1]、pacman、urpmi 等等。
现在,我要在装有 Ubuntu 15.10 的机器上用一些实例讲解最常用的 dpkg 命令。
### 1) dpkg 常见命令的语法及 dpkg 文件位置
下面是 dpkg 常见命令的语法及 dpkg 相关文件的位置,如果想深入了解,这些对你肯定大有益处。
```
### dpkg 命令的语法
$ dpkg -[command] [.deb package name]
$ dpkg -[command] [package name]
### dpkg 相关文件的位置
$ /var/lib/dpkg
### 这个文件包含了被 dpkg 命令install、remove 等)所修改的包的信息
$ /var/lib/dpkg/status
### 这个文件包含了可用包的列表
$ /var/lib/dpkg/available
```
### 2) 安装/升级软件
在基于 Debian 的系统里,比如 Debian、Mint、Ubuntu 和 elementaryOS用以下命令来安装/升级 .deb 软件包。这里我要用 `atom-amd64.deb` 文件安装 Atom。要是已经安装了 Atom就会升级它否则就会安装一个新的 Atom。
```
### 安装或升级 dpkg 软件包
$ sudo dpkg -i atom-amd64.deb
Selecting previously unselected package atom.
(Reading database ... 426102 files and directories currently installed.)
@ -52,9 +52,9 @@ Processing triggers for mime-support (3.58ubuntu1) ...
```
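补充提示(非原文内容,仅作参考):`dpkg -i` 不会自动解决依赖关系。如果安装时报依赖错误,通常可以再执行一次 apt 的修复命令来补全依赖,例如:

```
$ sudo apt-get install -f    # 自动安装缺失的依赖并完成配置
```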
### 3) 从文件夹里安装软件
在基于 Debian 的系统里,用下列命令从目录中逐个安装软件。这会安装 `/opt/software` 目录下的所有以 .deb 为后缀的软件。
```
$ sudo dpkg -iR /opt/software
@ -70,9 +70,9 @@ Processing triggers for desktop-file-utils (0.22-1ubuntu3) ...
Processing triggers for mime-support (3.58ubuntu1) ...
```
### 4) 显示已安装软件列表
以下命令可以列出 Debian 系的系统中所有已安装的软件,同时会显示软件版本和描述信息。
```
$ dpkg -l
@ -92,11 +92,10 @@ ii account-plugin-salut 3.12.10-0ubuntu2 amd64
```
### 5) 查看指定的已安装软件
用以下命令列出指定的一个已安装软件,同时会显示软件版本和描述信息。
```
$ dpkg -l atom
Desired=Unknown/Install/Remove/Purge/Hold
@ -108,9 +107,9 @@ ii atom 1.5.3 amd64 A hackable text editor for the 21st
```
### 6) 查看软件安装目录
以下命令可以在基于 Debian 的系统上查看软件的安装路径。
```
$ dpkg -L atom
@ -128,9 +127,9 @@ $ dpkg -L atom
```
### 7) 查看 deb 包内容
下列命令可以查看 deb 包内容。它会显示 .deb 包中的一系列文件。
```
$ dpkg -c atom-amd64.deb
@ -149,9 +148,9 @@ drwxr-xr-x root/root 0 2016-02-13 02:13 ./usr/share/doc/
.
```
### 8) 显示软件的详细信息
以下命令可以显示软件的详细信息,如软件名、软件类别、版本、维护者、软件架构、依赖的软件、软件描述等等。
```
$ dpkg -s atom
@ -169,9 +168,9 @@ Description: A hackable text editor for the 21st Century.
Atom is a free and open source text editor that is modern, approachable, and hackable to the core.
```
### 9) 查看文件属于哪个软件
用以下命令来查看文件属于哪个软件。
```
$ dpkg -S /usr/bin/atom
@ -179,11 +178,11 @@ atom: /usr/bin/atom
```
### 10) 移除/删除软件
以下命令可以用来移除/删除一个已经安装的软件,但不删除配置文件。
```
$ sudo dpkg -r atom
(Reading database ... 426404 files and directories currently installed.)
@ -196,9 +195,9 @@ Processing triggers for mime-support (3.58ubuntu1) ...
```
### 11) 清除软件
以下命令可以用来移除/删除包括配置文件在内的所有文件。
```
$ sudo dpkg -P atom
@ -212,28 +211,26 @@ Processing triggers for mime-support (3.58ubuntu1) ...
```
### 12) 了解更多
用以下命令来查看更多关于 dpkg 的信息。
```
$ dpkg --help
or
$ man dpkg
```
开始体验 dpkg 吧。
--------------------------------------------------------------------------------
via: http://www.2daygeek.com/dpkg-command-examples/
作者:[MAGESH MARUTHAMUTHU][a]
译者:[GitFuture](https://github.com/GitFuture)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -0,0 +1,58 @@
为满足当今和未来 IT 需求,培训员工还是雇佣新人?
================================================================
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/cio_talent_4.png?itok=QLhyS_Xf)
在数字化时代,由于 IT 工具不断更新,技术公司紧随其后,对 IT 技能的需求也不断变化。对于企业来说,寻找和雇佣那些拥有令人垂涎能力的创新人才,是非常不容易的。同时,培训内部员工来使他们接受新的技能和挑战,需要一定的时间,而时间要求常常是紧迫的。
[Sandy Hill][1] 对 IT 涉及到的多项技术都很熟悉。她是 [Pegasystems][2] 的 IT 总监,负责多个 IT 团队,工作范围从应用的部署一直到数据中心的运营。更重要的是Pegasystems 开发的应用能够帮助销售、市场、服务以及运营团队简化操作流程、改善客户联络。这意味着她既要掌握利用内部 IT 资源的最佳方法,也要面对公司客户所遇到的 IT 挑战。
![](https://enterprisersproject.com/sites/default/files/CIO_Q%20and%20A_0.png)
**TEP企业家项目这些年你是如何调整培训重心的**
**Hill**:在过去的几年中,我们经历了爆炸式的发展,现在我们要实现更多的全球化进程。因此,培训目标是确保每个人都在同一起跑线上。
我们主要的关注点在培养员工使用新产品和工具上,这些新产品和工具能够推动创新,提高工作效率。例如,我们使用了之前没有的资产管理系统。因此我们需要为全部员工做培训,而不是雇佣那些已经知道该产品的人。当我们正在发展的时候,我们也试图保持紧张的预算和稳定的职员总数。所以,我们更愿意在内部培训而不是雇佣新人。
**TEP说说培训方法吧怎样帮助你的员工发展他们的技能**
**Hill**:我要求每一位员工制定一个技术性的和非技术性的训练目标。这作为他们绩效评估的一部分。他们的技术性目标需要与他们的工作职能相符,非技术岗目标则随意,比如着重发展一项软技能,或是学一些专业领域之外的东西。我每年对职员进行一次评估,看看差距和不足之处,以使团队保持全面发展。
**TEP你的训练计划能够在多大程度上减轻招聘工作量, 保持职员的稳定性?**
**Hill**:使我们的职员保持学习新技术的兴趣,可以让他们不断提高技能。让职员知道我们重视他们并且让他们在擅长的领域成长和发展,以此激励他们。
**TEP你们发现哪些培训是最有效的**
**HILL**:我们使用几种不同的培训方法,认为效果很好。对新的或特殊的项目,我们会由供应商提供培训课程,作为项目的一部分。要是这个方法不能实现,我们将进行脱产培训。我们也会购买一些在线的培训课程。我也鼓励职员每年参加至少一次会议,以了解行业的动向。
**TEP哪些技能需求更适合雇佣新人而不是培训现有员工**
**Hill**:这和项目有关。最近有一个项目,需要使用 OpenStack而我们根本没有这方面的专家。所以我们与一家从事这一领域的咨询公司合作。我们利用他们的专业人员运行该项目并现场培训我们的内部团队成员。让内部员工学习他们需要的技能同时还要完成他们的日常工作是一项艰巨的任务。
顾问帮助我们确定我们需要的员工人数。这样我们就可以对员工进行评估,看是否存在缺口。如果存在人员上的缺口,我们还需要额外的培训或是员工招聘。我们也确实雇佣了一些合同工。另一个选择是对一些全职员工进行为期六至八周的培训,但我们的项目模式不容许这么做。
**TEP最近雇佣的员工他们的那些技能特别能够吸引到你**
**Hill**:在最近的招聘中,我更看重软技能。除了扎实的技术能力外,他们需要能够在团队中进行有效的沟通和工作,要有说服他人,谈判和解决冲突的能力。
IT 人常常独来独往不擅社交。然而如今IT 与整个组织结合越来越紧密,为其他业务部门提供有用的更新和状态报告的能力至关重要,可展示 IT 部门存在的重要性。
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2016/6/training-vs-hiring-meet-it-needs-today-and-tomorrow
作者:[Paul Desmond][a]
译者:[Cathon](https://github.com/Cathon)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://enterprisersproject.com/user/paul-desmond
[1]: https://enterprisersproject.com/user/sandy-hill
[2]: https://www.pega.com/pega-can?&utm_source=google&utm_medium=cpc&utm_campaign=900.US.Evaluate&utm_term=pegasystems&gloc=9009726&utm_content=smAXuLA4U|pcrid|102822102849|pkw|pegasystems|pmt|e|pdv|c|
View File
@ -0,0 +1,140 @@
轻轻几个点击,在 AWS 和 Azure 上搭建 Docker 数据中心
===================================================
通过几个点击即可在 “AWS 快速起步”和“Azure 市场”上高效搭建产品级 Docker 数据中心。
通过 AWS 快速起步的 CloudFormation 模板和 Azure 市场上的预编译模板来部署 Docker 数据中心,使得在公有云基础设施上部署企业级的 CaaS Docker 环境比以往更加容易。
Docker 数据中心 CaaS 平台为各种规模的企业的敏捷应用部署提供了容器和集群的编排和管理,使之更简单、安全和可伸缩。使用新为 Docker 数据中心预编译的云模板,开发者和 IT 运维人员可以无缝的把容器化的应用迁移到亚马逊 EC2 或者微软的 Azure 环境而无需修改任何代码。现在,企业可以快速实现更高的计算和运营效率,可以通过短短几步操作实现支持 Docker 的容器管理和编排。
### 什么是 Docker 数据中心?
Docker 数据中心包括了 Docker 通用控制面板Docker Universal Control PlaneUCP、Docker 可信注册库Docker Trusted RegistryDTR和商用版 Docker 引擎CS Docker Engine并带有与客户的应用服务等级协议相匹配的商业支持服务。
- Docker 通用控制面板UCP一种企业级的集群管理方案帮助客户通过单个管理面板管理整个集群
- Docker 可信注册库DTR 一种镜像存储管理方案,帮助客户安全存储和管理 Docker 镜像
- 商用版的 Docker 引擎
![](http://img.scoop.it/lVraAJgJbjAKqfWCLtLuZLnTzqrqzN7Y9aBZTaXoQ8Q=)
### 在 AWS 上快速布置 Docker 数据中心
秉承 Docker 与 AWS 最佳实践,参照 AWS 快速起步教程来,你可以在 AWS 云上快速部署 Docker 容器。Docker 数据中心快速起步基于模块化和可定制的 CloudFormation 模板,客户可以在其之上增加额外功能或者为自己的 Docker 部署修改模板。
- [AWS 的 Docker 数据中心应用说明](https://youtu.be/aUx7ZdFSkXU)
#### 架构
![](http://img.scoop.it/sZ3_TxLba42QB-r_6vuApLnTzqrqzN7Y9aBZTaXoQ8Q=)
AWS CloudFormation 的安装过程始于创建所需的 AWS 资源包括VPC、安全组、公有与私有子网、因特网网关、NAT 网关与 S3 bucket。
然后AWS Cloudformation 启动第一个 UCP 控制器实例,紧接着,安装 Docker 引擎和 UCP 容器。它把第一个 UCP 控制器创建的根证书备份到 S3。一旦第一个 UCP 控制器成功运行,其他 UCP 控制器、UCP 集群节点和第一个 DTR 复制的进程就会被触发。和第一个 UCP 控制器节点类似,其他所有节点创建进程也都由商用版 Docker 引擎开始,然后安装并运行 UCP 和 DTR 容器以加入集群。两个弹性负载均衡器ELB一个分配给 UCP另外一个为 DTR 服务它们启动并自动完成配置来在两个可用区AZ之间提供弹性负载均衡。
除这些之外如有需要UCP 控制器和节点在 ASG 中启动并提供扩展功能。这种架构确保 UCP 和 DTR 两者都部署在两个 AZ 上以增强弹性与高可靠性。在公有或者私有 HostedZone 上Route53 用来动态注册或者配置 UCP 和 DTR。
![](http://img.scoop.it/HM7Ag6RFvMXvZ_iBxRgKo7nTzqrqzN7Y9aBZTaXoQ8Q=)
#### 快速起步模板的核心功能如下:
- 创建 VPC、不同 AZ 上的私有和公有子网、ELB、NAT 网关、因特网网关、自动伸缩组,它们全部基于 AWS 最佳实践
- 为 DDC 创建一个 S3 bucket其用于证书备份和 DTR 映像存储DTR 需要额外配置)
- 在客户的 VPC 范畴,跨多 AZ 部署 3 个 UCP 控制器
- 创建预配置了健康检查的 UCP ELB
- 创建一个 DNS 记录并关联到 UCP ELB
- 创建可伸缩的 UCP 节点集群
- 在 VPC 范畴内,跨多 AZ 创建 3 个 DTR 副本
- 创建一个预配置了健康检查的 DTR ELB
- 创建一个 DNS 记录,并关联到 DTR ELB
- [下载 AWS 快速指南](https://s3.amazonaws.com/quickstart-reference/docker/latest/doc/docker-datacenter-on-the-aws-cloud.pdf)
### 在 AWS 使用 Docker 数据中心
1. 登录 [Docker Store][1] 获取 [30 天免费试用][2]或者[联系销售][4]
2. 确认之后看到提示“Launch Stack”后客户会被重定向到 AWS Cloudformation 入口
3. 确认启动 Docker 的 AWS 区域
4. 提供启动参数
5. 确认并启动
6. 启动完成之后,点击输出标签可以看到 UCP/DTR 的 URL、缺省用户名、密码和 S3 bucket 的名称
- [Docker 数据中心需要 2000 美元的信用担保](https://aws.amazon.com/mp/contactdocker/)
### 在 Azure 使用 Azure 市场的预编译模板部署
在 Azure 市场上Docker 数据中心是一个预先编译的模板,客户可以在 Azure 横跨全球的数据中心即起即用。客户可以根据自己需求从 Azure 提供的各种 VM 中选择适合自己的 VM 部署 Docker 数据中心。
#### 架构
![](http://img.scoop.it/V9SpuBCoAnUnkRL3J-FRFLnTzqrqzN7Y9aBZTaXoQ8Q=)
Azure 部署过程始于输入一些基本用户信息,如 ssh 登录的管理员用户名(系统级管理员)和资源组名称。你可以把资源组理解为一组有生命周期和部署边界的资源集合。你可以在这个链接了解更多关于资源组的信息: http://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/ 。
下一步输入集群详细信息包括UCP 控制器 VM 大小、控制器个数(缺省为 3 个、UCP 节点 VM 大小、UCP 节点个数(缺省 1最大值为 10、DTR 节点 VM 大小、DTR 节点个数、虚拟网络名和地址例如10.0.0.1/19。关于网络客户可以配置 2 个子网:第一个子网分配给 UCP 控制器 ,第二个分配给 DTC 和 UCP 节点。
最后,点击 OK 完成部署。对于小集群,服务开通需要大约 15-19 分钟,大集群更久些。
![](http://img.scoop.it/DXPM5-GXP0j2kEhno0kdRLnTzqrqzN7Y9aBZTaXoQ8Q=)
![](http://img.scoop.it/321ElkCf6rqb7u_-nlGPtrnTzqrqzN7Y9aBZTaXoQ8Q=)
#### 如何在 Azure 部署
1. 注册 [Docker 数据中心 30 天试用][5]许可或者[联系销售][6]
2. [跳转到微软 Azure 市场的 Docker 数据中心][7]
3. [查看部署文档][8]
---
通过注册获取 Docker 数据中心许可证开始,然后你就能够通过 AWS 或者 Azure 模板搭建自己的数据中心。
- [获取 30 天试用许可证][9]
- [通过视频理解 Docker 数据中心架构][10]
- [观看演示视频][11]
- [获取 AWS 提供的部署 Docker 数据中心的 75 美元红包奖励][12]
了解有关 Docker 的更多信息:
- 初识 Docker? 尝试一下 10 分钟[在线学习课程][20]
- 分享镜像,自动构建,或用一个[免费的 Docker Hub 账号][21]尝试更多
- 阅读 [Docker 1.12 发行说明][22]
- 订阅 [Docker Weekly][23]
- 报名参加即将到来的 [Docker Online Meetups][24]
- 参加即将发生的 [Docker Meetups][25]
- 观看 [DockerCon EU2015][26]视频
- 开始为 [Docker][27] 贡献力量
--------------------------------------------------------------------------------
via: https://blog.docker.com/2016/06/docker-datacenter-aws-azure-cloud/
作者:[Trisha McCanna][a]
译者:[firstadream](https://github.com/firstadream)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://blog.docker.com/author/trisha/
[1]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[2]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[4]: https://goto.docker.com/contact-us.html
[5]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[6]: https://goto.docker.com/contact-us.html
[7]: https://azure.microsoft.com/en-us/marketplace/partners/docker/dockerdatacenterdocker-datacenter/
[8]: https://success.docker.com/Datacenter/Apply/Docker_Datacenter_on_Azure
[9]: http://www.docker.com/trial
[10]: https://www.youtube.com/playlist?list=PLkA60AVN3hh8tFH7xzI5Y-vP48wUiuXfH
[11]: https://www.youtube.com/playlist?list=PLkA60AVN3hh8a8JaIOA5Q757KiqEjPKWr
[12]: https://aws.amazon.com/quickstart/promo/
[20]: https://docs.docker.com/engine/understanding-docker/
[21]: https://hub.docker.com/
[22]: https://docs.docker.com/release-notes/
[23]: https://www.docker.com/subscribe_newsletter/
[24]: http://www.meetup.com/Docker-Online-Meetup/
[25]: https://www.docker.com/community/meetup-groups
[26]: https://www.youtube.com/playlist?list=PLkA60AVN3hh87OoVra6MHf2L4UR9xwJkv
[27]: https://docs.docker.com/contributing/contributing/
View File
@ -0,0 +1,143 @@
2016 年 Linux 下五个最佳视频编辑软件
=====================================================
![](https://itsfoss.com/wp-content/uploads/2016/06/linux-video-ditor-software.jpg)
概要: 在这篇文章中Tiwo 讨论了 Linux 下最佳视频编辑器的优缺点和在基于 Ubuntu 的发行版中的安装方法。
在过去,我们已经在类似的文章中讨论了 [Linux 下最佳图像管理应用软件][1][Linux 上四个最佳的现代开源代码编辑器][2]。今天,我们来看看 Linux 下的最佳视频编辑软件。
当谈及免费的视频编辑软件Windows Movie Maker 和 iMovie 是大多数人经常推荐的。
不幸的是,它们在 GNU/Linux 下都是不可用的。但是你不必担心这个,因为我们已经为你收集了一系列最佳的视频编辑器。
### Linux下最佳的视频编辑应用程序
接下来,让我们来看看 Linux 下排名前五的最佳视频编辑软件:
#### 1. Kdenlive
![](https://itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg)
[Kdenlive][3] 是一款来自于 KDE 的自由而开源的视频编辑软件,它提供双视频监视器、多轨时间轴、剪辑列表、可自定义的布局支持、基本效果和基本转换的功能。
它支持各种文件格式和各种摄像机和相机包括低分辨率摄像机Raw 和 AVI DV 编辑mpeg2、mpeg4 和 h264 AVCHD小型摄像机和摄像机高分辨率摄像机文件包括 HDV 和 AVCHD 摄像机;专业摄像机,包括 XDCAM-HD™ 流、IMX™D10流、DVCAMD10、DVCAM、DVCPRO™、DVCPRO50™ 流和 DNxHD™ 流等等。
你可以在命令行下运行下面的命令安装 :
```
sudo apt-get install kdenlive
```
或者,打开 Ubuntu 软件中心,然后搜索 Kdenlive。
#### 2. OpenShot
![](https://itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg)
[OpenShot][5] 是我们这个 Linux 视频编辑软件列表中的第二选择。 OpenShot 可以帮助您创建支持过渡、效果、调整音频电平的电影,当然,它也支持大多数格式和编解码器。
您还可以将电影导出到 DVD上传到 YouTube、Vimeo、Xbox 360 和许多其他常见的格式。 OpenShot 比 Kdenlive 更简单。 所以如果你需要一个简单界面的视频编辑器OpenShot 会是一个不错的选择。
最新的版本是 2.0.7。您可以从终端窗口运行以下命令安装 OpenShot 视频编辑器:
```
sudo apt-get install openshot
```
它需要下载 25 MB安装后需要 70 MB 硬盘空间。
#### 3. Flowblade Movie Editor
![](https://itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg)
[Flowblade Movie Editor][6] 是一个用于 Linux 的多轨非线性视频编辑器。它是自由而开源的。 它配备了一个时尚而现代的用户界面。
它是用 Python 编写的,旨在提供一个快速、精确的功能。 Flowblade 致力于在 Linux 和其他自由平台上提供最好的体验。 所以现在没有 Windows 和 OS X 版本。
要在 Ubuntu 和其他基于 Ubuntu 的系统上安装 Flowblade请使用以下命令
```
sudo apt-get install flowblade
```
#### 4. Lightworks
![](https://itsfoss.com/wp-content/uploads/2016/06/lightworks-running-on-ubuntu-16.04.jpg)
如果你要寻找一个有更多功能的视频编辑软件,这会是答案。 [Lightworks][7] 是一个跨平台的专业的视频编辑器,在 Linux、Mac OS X 和 Windows 系统下都可用。
它是一个获奖的专业的[非线性编辑][8]NLE软件支持高达 4K 的分辨率以及标清和高清格式的视频。
该应用程序有两个版本Lightworks 免费版和 Lightworks 专业版。不过免费版本不支持 VimeoH.264 / MPEG-4和 YouTubeH.264 / MPEG-4 - 高达 2160p4K UHD、蓝光和 H.264 / MP4 导出选项,以及可配置的位速率设置,但是专业版本支持。
- Lightworks 免费版
- Lightworks 专业版
专业版本有更多的功能例如更高的分辨率支持4K 和蓝光支持等。
##### 怎么安装 Lightworks
不同于其他的视频编辑器,安装 Lightwork 不像运行单个命令那么直接。别担心,这不会很复杂。
- 第1步 你可以从 [Lightworks 下载页面][9]下载安装包。这个安装包大约 79.5MB。*请注意这里没有32 位 Linux 的支持。*
- 第2步 一旦下载,你可以使用 [Gdebi 软件包安装器][10]来安装。Gdebi 会自动下载依赖关系 :
![](https://itsfoss.com/wp-content/uploads/2016/06/Installing-lightworks-on-ubuntu.jpg)
- 第3步 现在你可以从 Ubuntu 仪表板或您的 Linux 发行版菜单中打开它。
- 第4步 当你第一次使用它时,需要一个账号。点击 “Not Registerd?” 按钮来注册。别担心,它是免费的。
- 第5步 在你的账号通过验证后,就可以登录了。
现在Lightworks 可以使用了。
需要 Lightworks 的视频教程? 在 [Lightworks 视频教程页][11]得到它们。
#### 5. Blender
![](https://itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg)
Blender 是一个专业的,工业级的开源、跨平台的视频编辑器。在 3D 作品的制作中,是非常受欢迎的。 Blender 已被用于几部好莱坞电影的制作,包括蜘蛛侠系列。
虽然最初是设计用于制作 3D 模型,但它也可以用于各种格式的视频编辑和输入能力。 该视频编辑器包括:
- 实时预览、亮度波形、色度矢量示波器和直方图显示
- 音频混合、同步、擦除和波形可视化
- 多达 32 个插槽用于添加视频、图像、音频、场景、面具和效果
- 速度控制、调整图层、过渡、关键帧、过滤器等
最新的版本可以从 [Blender 下载页][12]下载。
### 哪一个是最好的视频编辑软件?
如果你需要一个简单的视频编辑器OpenShot、Kdenlive 和 Flowblade 是不错的选择。它们适合初学者,在标准配置的电脑上也能正常使用。
如果你有一个高性能的计算机,并且需要高级功能,你可以使用 Lightworks。如果你正在寻找更高级的功能 Blender 可以帮助你。
这就是我写的 5 个最佳的视频编辑软件,它们可以在 Ubuntu、Linux Mint、Elementary 和其他 Linux 发行版下使用。 请与我们分享您最喜欢的视频编辑器。
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-video-editing-software-linux/
作者:[Tiwo Satriatama][a]
译者:[DockerChen](https://github.com/DockerChen)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/tiwo/
[1]: https://linux.cn/article-7462-1.html
[2]: https://linux.cn/article-7468-1.html
[3]: https://kdenlive.org/
[4]: https://itsfoss.com/tag/open-source/
[5]: http://www.openshot.org/
[6]: http://jliljebl.github.io/flowblade/
[7]: https://www.lwks.com/
[8]: https://en.wikipedia.org/wiki/Non-linear_editing_system
[9]: https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206
[10]: https://itsfoss.com/gdebi-default-ubuntu-software-center/
[11]: https://www.lwks.com/videotutorials
[12]: https://www.blender.org/download/
View File
@ -1,10 +1,9 @@
怎样用 Tar 和 OpenSSL 给文件和目录加密及解密
=========
当你有重要的敏感数据的时候,给你的文件和目录额外加一层保护是至关重要的,特别是当你需要通过网络与他人传输数据的时候。
由于这个原因,我在寻找一个可以在 Linux 上加密及解密文件和目录的实用程序,幸运的是我找到了一个用 tarLinux 的一个压缩打包工具)和 OpenSSL 来解决的方案。借助这两个工具,你真的可以毫不费力地创建和加密 tar 归档文件。
在这篇文章中,我们将了解如何使用 OpenSSL 创建和加密 tar 或 gzgzip另一种压缩文件归档文件
@ -12,40 +11,37 @@ OneNewLife translated
```
# openssl command command-options arguments
```
### 在 Linux 中加密文件
要加密当前工作目录的内容(根据文件的大小,这可能需要一点时间):
```
# tar -czf - * | openssl enc -e -aes256 -out secured.tar.gz
```
上述命令的解释:
1. `enc`  使用密码算法进行编码的 openssl 命令
2. `-e`  用来加密输入文件的 `enc` 命令选项,这里是指前一个 tar 命令的输出
3. `-aes256`  加密用的算法
4. `-out`  用于指定输出文件名的 `enc` 命令选项,这里文件名是 `secured.tar.gz`
### 在 Linux 中解密文件
要解密上述 tar 归档内容,使用以下命令。
```
# openssl enc -d -aes256 -in secured.tar.gz | tar xz -C test
```
上述命令的解释:
1. `-d`  用于解密文件
2. `-C`  提取内容到 `test` 子目录
下图展示了加密过程,以及当你尝试执行以下操作时会发生什么:
1. 以传统方式提取 tar 包的内容
2. 使用了错误的密码的时候
@ -53,7 +49,7 @@ OneNewLife translated
[![在 Linux 中加密和解密 Tar 归档文件](http://www.tecmint.com/wp-content/uploads/2016/08/Encrypt-Decrypt-Tar-Archive-Files-in-Linux.png)][1]
*在 Linux 中加密和解密 Tar 归档文件*
当你在本地网络或因特网工作的时候,你可以随时通过加密来保护你和他人共享的重要文本或文件,这有助于降低将其暴露给恶意攻击者的风险。
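作为补充(这不是原文的内容,只是一个示意用法):如果不想在交互提示中手工输入密码,可以利用 openssl 的 `-pass` 选项从文件中读取口令,并加上 `-salt` 增强安全性;示例中的 secret.key 是假设的口令文件,注意保护好它的权限:

```
# tar -czf - * | openssl enc -e -aes256 -salt -pass file:./secret.key -out secured.tar.gz
# openssl enc -d -aes256 -pass file:./secret.key -in secured.tar.gz | tar xz -C test
```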
@ -61,13 +57,11 @@ OneNewLife translated
--------------------------------------------------------------------------------
via: http://www.tecmint.com/encrypt-decrypt-files-tar-openssl-linux/
作者:[Gabriel Cánepa][a]
译者:[OneNewLife](https://github.com/OneNewLife)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -0,0 +1,51 @@
拥有开源项目部门的公司可以从四个方面获益
====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_creativity.png?itok=x2HTRKVW)
在我的第一篇关于开源项目部门program office的系列文章中我深入剖析了[什么是开源项目部门,为什么你的公司需要一个开源项目部门][1]。接着我又说到了[谷歌是如何创建一种新的开源项目部门的][2]。而这篇文章,我将阐述拥有一个开源项目部门的好处。
乍一看非软件开发公司会更加热情的去拥抱开源项目部门的一个重要原因是他们并没有什么损失。毕竟他们并不需要依靠这些软件产品来获得收益。比如Facebook 可以很轻易的释放出一个 “分布式键值数据存储” 作为开源项目,是因为他们并没有售卖一个叫做 “企业级键值数据存储” 的产品。这回答了关于风险的问题,但是并没有回答他们如何通过向开源生态共献代码而获益的问题。让我们逐个来推测和探讨其中可能的原因。你会发现开源项目供应商的许多动机都是相同的,但是也有些许不同。
### 招聘
招聘可能是一个将开源项目部门推销给上层管理部门的最容易方法。向他们展示与招聘相关的成本,以及投资回报率,然后解释如何与天才工程师发展关系,从而与那些对这些项目感兴趣并且十分乐意在其中工作的天才开发者们建立联系。不需要我多说了,你懂的!
### 技术影响
曾几何时,那些没有专门从事软件销售的公司是难以直接对他们软件供应商的开发周期施加影响力的,尤其当他们并不是一个大客户时。开源完全改变了这一点,它将用户与供应商放在了一个更公平的竞争环境中。随着开源开发的兴起,任何人,假如他们愿意投入时间和资源的话,都可以将技术推向一个选定的方向。但是这些公司发现,虽然将投资用于开发上会带来丰硕的成果,但是总体战略的努力却更加有效——对比一下 bug 的修复和软件的构建——大多数公司都将 bug 的修复推给上游的开源项目,但是一些公司开始认识到通过更深层次的回报承诺和更快的功能开发来协调持久的工作,将会更有利于业务。通过开源项目部门模式,公司的职员能够从开源社区中准确嗅出战略重心,然后投入开发资源。
对于快速增长的公司,如 Google 和 Facebook其对现有的开源项目提供的领导力仍然不足以满足业务的膨胀。面对激烈的增长和建立超大规模系统所带来的挑战许多大型企业开始构建仅供内部使用的高度定制的软件栈。除非他们能说服别人在一些基础设施项目上达成合作。因此虽然他们保持在诸如 Linux 内核Apache 和其他现有项目领域的投资他们也开始推出自己的大型项目。Facebook 发布了 CassandraTwitter 创造了 Mesos并且甚至谷歌也创建了 Kubernetes 项目。这些项目已成为行业创新的主要平台证实了该举措是相关公司引人注目的成功。请注意Facebook 在它需要创造一个新软件项目来解决更大规模的问题之后,已经在内部停止使用 Cassandra 了,但是,这时 Cassandra 已经变得流行,而 DataStax 公司接过了开发任务)。所有这些项目已经促使了开发商、相关的项目、以及最终用户构成的整个生态加速增长和发展。
没有与公司战略举措取得一致的开源项目部门不可能成功的。不这样做的话,这些公司依然会试图单独地解决这些问题,而且更慢。不仅拥有这些项目可以帮助内部解决业务问题,它们也帮助这些公司逐渐成为行业巨头。当然,谷歌成为行业巨头好多年了,但是 Kubernetes 的发展确保了软件的质量,并且在容器技术未来的发展方向上有着直接的话语权,并且远超之前就有的话语权。这些公司目前还是闻名于他们超大规模的基础设施和硅谷的中坚份子。鲜为人知,但是更为重要的是它们与技术生产人员的亲密度。开源项目部门凭借技术建议和与有影响力的开发者的关系,再加上在社区治理和人员管理方面深厚的专业知识来引领这些工作,并最大限度地发挥其影响力,
### 市场营销能力
与技术的影响齐头并进的是每个公司谈论他们在开源方面的努力。通过传播这些与项目和社区有关的消息一个开源项目部门能够通过有针对性的营销活动来提供最大的影响。营销在开放源码领域一直是一个肮脏的词汇因为每个人都有一个由企业营销造成的糟糕的经历。在开源社区中营销呈现出一种与传统方法截然不同的形式它会更注重于我们的社区已经在战略方向上做了什么。因此一个开源项目部门不可能去宣传一些根本还没有发布任何代码的项目但是他们会讨论他们创造什么软件和参与了其他什么举措。基本上不会有“雾件vaporware”。
想想谷歌的开源项目部门作出的第一份工作。他们不只是简单的贡献代码给 Linux 内核或其他项目他们更多的是谈论它并经常在开源会议主题演讲。他们不仅仅是把钱给写开源代码的代码的学生他们还创建了一个全球计划——“Google Summer of Code”现在已经成为一种开源发展的文化试金石。这些市场营销的作用在 Kubernetes 开发完成之前就奠定了谷歌在开源世界巨头的地位。最终使得,谷歌在创建 GPLv3 授权协议期间拥有重要影响力,并且在科技活动中公司的发言人和开源项目部门的代表人成为了主要人物。开源项目部门是协调这些工作的最好的实体,并可以为母公司提供真正的价值。
### 改善内部流程
改善内部流程听起来不像一个大好处但克服混乱的内部流程对于每一个开源项目部门都是一个挑战不论是对软件供应商还是公司内的部门。而软件供应商必须确保他们的流程不与他们发布的产品重叠例如不小心开源了他们的商业售卖软件用户更关心的是侵犯了知识产权IP专利、版权和商标。没有人想只是因为释放软件而被起诉。没有一个活跃的开源项目部门去管理和协调这些许可和其他法律问题的话大公司在开源流程和管理上会面临着巨大的困难。为什么这个很重要呢如果不同的团队释放的软件是在不兼容的许可证下那么这不仅是一个坑爹的尴尬它还将对实现最基本的目标改良协作产生巨大的障碍。
考虑到还有许多这样的公司仍在飞快的增长,如果无法建立基本流程规则的话,将可以预见到它们将会遇到阻力。我见过一个罗列着批准、未经批准的许可证的巨大的电子表格,以及指导如何(或如何不)创建开源社区而遵守法律限制。关键是当开发者需要做出决定时要有一个可以依据的东西,并且每次当开发人员想要为一个开源社区贡献代码时,可以不产生大量的法律开销,和效率低下的知识产权检查。
有一个活跃的开放源码项目部门,负责维护许可规则和源码的贡献,以及建立针对工程师的培训项目,有助于避免潜在的法律缺陷和昂贵的诉讼。毕竟,良好的开源项目合作可以减少由于某人没有看许可证而导致公司赔钱这样的事件。好消息是,公司已经可以较少地担心关于专有的知识产权与软件供应商冲突的事。坏消息是,它们的法律问题不够复杂,尤其是当他们需要直接面对软件供应商的阻力时。
你的组织是如何受益于拥有一个开源项目部门的?可以在评论中与我们分享。
本文作者 John Mark Walker 是 Dell EMC 的产品管理总监,负责管理 ViPR 控制器产品及 CoprHD 开源社区。他领导过包括 ManageIQ 在内的许多开源社区。
--------------------------------------------------------------------------------
via: https://opensource.com/business/16/9/4-big-ways-companies-benefit-having-open-source-program-offices
作者:[John Mark Walker][a]
译者:[chao-zhi](https://github.com/chao-zhi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/johnmark
[1]: https://opensource.com/business/16/5/whats-open-source-program-office
[2]: https://opensource.com/business/16/8/google-open-source-program-office
View File
@ -0,0 +1,57 @@
宽松开源许可证的崛起意味着什么
====
为什么像 GNU GPL 这样的限制性许可证越来越不受青睐。
“如果你用了任何开源软件, 那么你软件的其他部分也必须开源。” 这是微软前 CEO 巴尔默 2001 年说的, 尽管他说的不对, 还是引发了人们对自由软件的 FUD (恐惧, 不确定和怀疑fear, uncertainty and doubt。 大概这才是他的意图。
对开源软件的这些 FUD 主要与开源许可有关。 现在有许多不同的许可证, 当中有些限制比其他的更严格(也有人称“更具保护性”)。 诸如 GNU 通用公共许可证 GPL 这样的限制性许可证使用了 copyleft 的概念。 copyleft 赋予人们自由发布软件副本和修改版的权力, 只要衍生工作保留同样的权力。 bash 和 GIMP 等开源项目就是使用了 GPL(v3)。 还有一个 AGPL Affero GPL 许可证, 它为网络上的软件(如 web service提供了 copyleft 许可。
这意味着, 如果你使用了这种许可的代码, 然后加入了你自己的专有代码, 那么在一些情况下, 整个代码, 包括你的代码也就遵从这种限制性开源许可证。 Ballmer 说的大概就是这类的许可证。
但宽松许可证不同。 比如, 只要保留版权声明和许可声明且不要求开发者承担责任, MIT 许可证允许任何人任意使用开源代码, 包括修改和出售。 另一个比较流行的宽松开源许可证是 Apache 许可证 2.0,它还包含了贡献者向用户提供专利授权相关的条款。 使用 MIT 许可证的有 JQuery、.NET Core 和 Rails 使用 Apache 许可证 2.0 的软件包括安卓, Apache 和 Swift。
两种许可证类型最终都是为了让软件更有用。 限制性许可证促进了参与和分享的开源理念, 使每一个人都能从软件中得到最大化的利益。 而宽松许可证通过允许人们任意使用软件来确保人们能从软件中得到最多的利益, 即使这意味着他们可以使用代码, 修改它, 据为己有,甚至以专有软件出售,而不做任何回报。
开源许可证管理公司黑鸭子软件的数据显示, 去年使用最多的开源许可证是限制性许可证 GPL 2.0,份额大约 25%。 宽松许可证 MIT 和 Apache 2.0 次之, 份额分别为 18% 和 16% 再后面是 GPL 3.0, 份额大约 10%。 这样来看, 限制性许可证占 35% 宽松许可证占 34% 几乎是平手。
但这份当下的数据没有揭示发展趋势。黑鸭子软件的数据显示, 从 2009 年到 2015 年的六年间, MIT 许可证的份额上升了 15.7% Apache 的份额上升了 12.4%。 在这段时期, GPL v2 和 v3 的份额惊人地下降了 21.4%。 换言之, 在这段时期里, 大量软件从限制性许可证转到宽松许可证。
这个趋势还在继续。 黑鸭子软件的[最新数据][1]显示, MIT 现在的份额为 26% GPL v2 为 21% Apache 2 为 16% GPL v3 为 9%。 即 30% 的限制性许可证和 42% 的宽松许可证--与前一年的 35% 的限制许可证和 34% 的宽松许可证相比, 发生了重大的转变。 对 GitHub 上使用许可证的[调查研究][2]证实了这种转变。 它显示 MIT 以压倒性的 45% 占有率成为最流行的许可证, 与之相比, GPL v2 只有 13% Apache 11%。
![](http://images.techhive.com/images/article/2016/09/open-source-licenses.jpg-100682571-large.idge.jpeg)
### 引领趋势
从限制性许可证到宽松许可证,这么大的转变背后是什么呢? 是公司害怕如果使用了限制性许可证的软件,他们就会像巴尔默说的那样,失去自己私有软件的控制权了吗? 事实上, 可能就是如此。 比如, Google 就[禁用了 Affero GPL 软件][3]。
[Instructional Media + Magic][4] 的主席 Jim Farmer 是一个教育开源技术的开发者。 他认为很多公司为避免法律问题而不使用限制性许可证。 “问题就在于复杂性。 许可证的复杂性越高, 被他人因为某行为而告上法庭的可能性越高。 高复杂性更可能带来诉讼”, 他说。
他补充说, 这种对限制性许可证的恐惧正被律师们驱动着, 许多律师建议自己的客户使用 MIT 或 Apache 2.0 许可证的软件, 并明确反对使用 Affero 许可证的软件。
他说, 这会对软件开发者产生影响, 因为如果公司都避开限制性许可证软件的使用,开发者想要自己的软件被使用, 就更会把新的软件使用宽松许可证。
但 SalesAgility开源 SuiteCRM 背后的公司)的 CEO Greg Soper 认为这种到宽松许可证的转变也是由一些开发者驱动的。 “看看像 Rocket.Chat 这样的应用。 开发者本可以选择 GPL 2.0 或 Affero 许可证, 但他们选择了宽松许可证,” 他说。 “这样可以给这个应用最大的机会, 因为专有软件厂商可以使用它, 不会伤害到他们的产品, 且不需要把他们的产品也使用开源许可证。 这样如果开发者想要让第三方应用使用他的应用的话, 他有理由选择宽松许可证。”
Soper 指出, 限制性许可证致力于帮助开源项目获得成功,方式是阻止开发者拿了别人的代码、做了修改,但不把结果回报给社区。 “Affero 许可证对我们的产品健康发展很重要, 因为如果有人利用了我们的代码开发,做得比我们好, 却又不把代码回报回来, 就会扼杀掉我们的产品,” 他说。 “ 对 Rocket.Chat 则不同, 因为如果它使用 Affero 那么它会污染公司的知识产权, 所以公司不会使用它。 不同的许可证有不同的使用案例。”
曾在 Gnome、OpenOffice 工作过,现在是 LibreOffice 的开源开发者的 Michael Meeks 同意 Jim Farmer 的观点,认为许多公司确实出于对法律的担心,而选择使用宽松许可证的软件。 “copyleft 许可证有风险, 但同样也有巨大的益处。 遗憾的是人们都听从律师的, 而律师只是讲风险, 却从不告诉你有些事是安全的。”
巴尔默发表他的错误言论已经过去 15 年了, 但它产生的 FUD 还是有影响--即使从限制性许可证到宽松许可证的转变并不是他的目的。
--------------------------------------------------------------------------------
via: http://www.cio.com/article/3120235/open-source-tools/what-the-rise-of-permissive-open-source-licenses-means.html
作者:[Paul Rubens][a]
译者:[willcoderwang](https://github.com/willcoderwang)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.cio.com/author/Paul-Rubens/
[1]: https://www.blackducksoftware.com/top-open-source-licenses
[2]: https://github.com/blog/1964-open-source-license-usage-on-github-com
[3]: http://www.theregister.co.uk/2011/03/31/google_on_open_source_licenses/
[4]: http://immagic.com/
View File
@ -0,0 +1,113 @@
在 Linux 下使用 TCP 封装器来加强网络服务安全
===========
在这篇文章中,我们将会讲述什么是 TCP 封装器TCP wrappers以及如何在一台 Linux 服务器上配置它们来[限制网络服务的权限][7]。在开始之前,我们必须澄清 TCP 封装器并不能消除对于正确[配置防火墙][6]的需要。
就这一点而言,你可以把这个工具看作是一个[基于主机的访问控制列表][5],而且并不能作为你的系统的[终极安全措施][4]。通过使用一个防火墙和 TCP 封装器,而不是只偏爱其中的一个,你将会确保你的服务不会被出现单点故障。
### 正确理解 hosts.allow 和 hosts.deny 文件
当一个网络请求到达你的主机的时候TCP 封装器会使用 `hosts.allow``hosts.deny` (按照这样的顺序)来决定客户端是否应该被允许使用所提供的某个服务。
在默认情况下,这些文件内容是空的,或者被注释掉,或者根本不存在。所以,任何请求都会被允许通过 TCP 封装器,你的系统就只能依靠防火墙来提供所有的保护。这并不是我们想要的。由于在一开始我们就介绍过的原因,请确保下面两个文件都存在:
```
# ls -l /etc/hosts.allow /etc/hosts.deny
```
两个文件的编写语法规则是一样的:
```
<services> : <clients> [: <option1> : <option2> : ...]
```
在文件中,
1. `services` 指当前规则对应的服务,是一个逗号分割的列表。
2. `clients` 指被规则影响的主机名或者 IP 地址,逗号分割的。下面的通配符也可以接受:
1. `ALL` 表示所有事物,应用于`clients`和`services`。
2. `LOCAL` 表示匹配在正式域名中没有完全限定主机名FQDN的机器例如 `localhost`
3. `KNOWN` 表示主机名,主机地址,或者用户是已知的(即可以通过 DNS 或其它服务解析到)。
4. `UNKNOWN``KNOWN` 相反。
5. `PARANOID` 如果进行反向 DNS 查找彼此返回了不同的地址,那么连接就会被断开(首先根据 IP 去解析主机名,然后根据主机名去获得 IP 地址)。
3. 最后,一个冒号分割的动作列表表示了当一个规则被触发的时候会采取什么操作(示例见下)。
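下面是一条示意性的规则这不是原文的内容选项字段的具体写法请以系统的 hosts_access(5)/hosts_options(5) 手册为准),演示了规则被触发时如何用 `spawn` 选项记录一条日志IP 段和日志路径均为假设:

```
# /etc/hosts.allow 中的示例:允许 192.168.0.x 访问 sshd并把来访客户端信息追加到日志
sshd : 192.168.0. : spawn (/bin/echo "%d from %c" >> /var/log/tcpwrappers.log) &
```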
你应该记住 `/etc/hosts.allow` 文件中允许一个服务接入的规则要优先于 `/etc/hosts.deny` 中的规则。另外还有,如果两个规则应用于同一个服务,只有第一个规则会被纳入考虑。
不幸的是,不是所有的网络服务都支持 TCP 封装器,为了查看一个给定的服务是否支持它们,可以执行以下命令:
```
# ldd /path/to/binary | grep libwrap
```
如果以上命令执行以后得到了以下结果,那么它就支持 TCP 封装器。下面以 `sshd``vsftpd` 为例,输出如下所示。
[![Find Supported Services in TCP Wrapper](http://www.tecmint.com/wp-content/uploads/2016/10/Find-Supported-Services-in-TCP-Wrapper.png)][3]
*查找支持 TCP 封装器的服务*
### 如何使用 TCP 封装器来限制服务的权限
当你编辑 `/etc/hosts.allow``/etc/hosts.deny` 的时候,确保你在最后一个非空行后面通过回车键来添加一个新的行。
为了使得 [SSH 和 FTP][2] 服务只允许 `localhost``192.168.0.102` 并且拒绝所有其他用户,在 `/etc/hosts.deny` 添加如下内容:
```
sshd,vsftpd : ALL
ALL : ALL
```
而且在 `/etc/hosts.allow` 文件中添加如下内容:
```
sshd,vsftpd : 192.168.0.102,LOCAL
```
这些更改会立刻生效并且不需要重新启动。
在下图中你会看到,如果在最后一行中删掉 `LOCAL`FTP 服务对于 `localhost` 将不可用;在我们重新添加了这个通配符以后,服务又变得可用了。
[![确认 FTP 权限 ](http://www.tecmint.com/wp-content/uploads/2016/10/Verify-FTP-Access.png)][1]
*确认 FTP 权限*
为了允许所有服务对于主机名中含有 `example.com` 都可用,在 `hosts.allow` 中添加如下一行:
```
ALL : .example.com
```
而为了禁止 `10.0.1.0/24` 的机器访问 `vsftpd` 服务,在 `hosts.deny` 文件中添加如下一行:
```
vsftpd : 10.0.1.
```
在最后的两个例子中,注意到客户端列表每行开头和结尾的点。这是用来表示 “所有名字或者 IP 中含有那个字符串的主机或客户端”
这篇文章对你有用吗?你有什么问题或者评论吗?请你尽情在下面留言交流。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/secure-linux-tcp-wrappers-hosts-allow-deny-restrict-access/
作者:[Gabriel Cánepa][a]
译者:[LinuxBars](https://LinuxBar.org)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Verify-FTP-Access.png
[2]:http://www.tecmint.com/block-ssh-and-ftp-access-to-specific-ip-and-network-range/
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-Supported-Services-in-TCP-Wrapper.png
[4]:http://www.tecmint.com/linux-server-hardening-security-tips/
[5]:https://linux.cn/article-3966-1.html
[6]:https://linux.cn/article-4425-1.html
[7]:https://linux.cn/article-7719-1.html
View File
@ -0,0 +1,166 @@
删除一个目录下部分类型之外的所有文件的三种方法
=========
有的时候,你可能会遇到这种情况,你需要删除一个目录下的所有文件,或者只是简单的通过删除除了一些指定类型(以指定扩展名结尾)之外的文件来清理一个目录。
在这篇文章,我们将会向你展现如何通过 `rm``find``globignore` 命令删除一个目录下除了指定文件扩展名或者类型的之外的文件。
在我们进一步深入之前,让我们开始简要的了解一下 Linux 中的一个重要的概念 —— 文件名模式匹配,它可以让我们解决眼前的问题。
在 Linux 下,一个 shell 模式是一个包含以下特殊字符的字符串,称为通配符或者元字符:
1. `*`  匹配 0 个或者多个字符
2. `?`  匹配任意单个字符
3. `[序列]`  匹配序列中的任意一个字符
4. `[!序列]`  匹配任意一个不在序列中的字符
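为了更直观一点,这里补充几个简单的示意例子(非原文内容,假设当前目录下有 file1.txt、file2.txt、file25.txt 和 notes.txt 等文件):

```
$ ls *.txt          # 匹配所有以 .txt 结尾的文件
$ ls file?.txt      # 匹配 file1.txt、file2.txt不匹配 file25.txt
$ ls file[12].txt   # 只匹配 file1.txt 和 file2.txt
$ ls [!f]*.txt      # 匹配不以 f 开头的文件,例如 notes.txt
```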
我们将在这儿探索三种可能的办法,包括:
### 使用扩展模式匹配操作符删除文件
下来列出了不同的扩展模式匹配操作符,这些模式列表是一个用 `|` 分割包含一个或者多个文件名的列表:
1. `*(模式列表)`  匹配 0 个或者多个出现的指定模式
2. `?(模式列表)`  匹配 0 个或者 1 个出现的指定模式
3. `+(模式列表)`  匹配 1 个或者多个出现的指定模式
4. `@(模式列表)`  匹配指定模式中的某一个
5. `!(模式列表)`  匹配除了指定模式之外的任何内容
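顺便补充一个简单的示意(非原文内容):在按下文方式开启 extglob 选项之后,这些操作符也可以直接用在 `ls` 等命令里,例如(假设目录中有 .zip 和 .tar 文件):

```
$ ls @(*.zip|*.tar)     # 只列出 .zip 或 .tar 文件
$ ls !(*.zip|*.tar)     # 列出除 .zip 和 .tar 之外的文件
```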
为了使用它们,需要像下面一样打开 extglob shell 选项:
```
# shopt -s extglob
```
**1. 输入以下命令,删除一个目录下除了 filename 之外的所有文件**
```
$ rm -v !("filename")
```
![删除 Linux 下除了一个文件之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/DeleteAll-Files-Except-One-File-in-Linux.png)
*删除 Linux 下除了一个文件之外的所有文件*
**2. 删除除了 filename1 和 filename2 之外的所有文件**
```
$ rm -v !("filename1"|"filename2")
```
![在 Linux 下删除除了一些文件之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Few-Files-in-Linux.png)
*在 Linux 下删除除了一些文件之外的所有文件*
**3. 下面的例子显示如何通过交互模式删除除了 `.zip` 之外的所有文件**
```
$ rm -i !(*.zip)
```
![在 Linux 下删除除了 Zip 文件之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Zip-Files-in-Linux.png)
*在 Linux 下删除除了 Zip 文件之外的所有文件*
**4. 接下来,通过如下的方式你可以删除一个目录下除了所有的`.zip` 和 `.odt` 文件的所有文件,并且在删除的时候,显示正在删除的文件:**
```
$ rm -v !(*.zip|*.odt)
```
![删除除了指定文件扩展的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Certain-File-Extensions.png)
*删除除了指定文件扩展的所有文件*
一旦你已经执行了所有需要的命令,你还可以使用如下的方式关闭 extglob shell 选项。
```
$ shopt -u extglob
```
### 使用 Linux 下的 find 命令删除文件
在这种方法下,我们可以[只使用 find 命令][5]的适当的选项或者采用管道配合 `xargs` 命令,如下所示:
```
$ find /directory/ -type f -not -name 'PATTERN' -delete
$ find /directory/ -type f -not -name 'PATTERN' -print0 | xargs -0 -I {} rm {}
$ find /directory/ -type f -not -name 'PATTERN' -print0 | xargs -0 -I {} rm [options] {}
```
**5. 下面的命令将会删除当前目录下除了 `.gz` 之外的所有文件**
```
$ find . -type f -not -name '*.gz' -delete
```
![find 命令 —— 删除 .gz 之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-gz-Files.png)
*find 命令 —— 删除 .gz 之外的所有文件*
**6. 使用管道和 xargs你可以通过如下的方式修改上面的例子**
```
$ find . -type f -not -name '*gz' -print0 | xargs -0 -I {} rm -v {}
```
![使用 find 和 xargs 命令删除文件](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-Files-Using-Find-and-Xargs-Command.png)
*使用 find 和 xargs 命令删除文件*
**7. 让我们看一个额外的例子,下面的命令行将会删除掉当前目录下除了 `.gz``.odt` 和 `.jpg` 之外的所有文件:**
```
$ find . -type f -not \( -name '*gz' -or -name '*odt' -or -name '*.jpg' \) -delete
```
![删除除了指定扩展文件的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-File-Extensions.png)
*删除除了指定扩展文件的所有文件*
### 通过 bash 中的 GLOBIGNORE 变量删除文件
然而,最后的方法,只适用于 bash。 `GLOBIGNORE` 变量存储了一个路径名展开pathname expansion功能的忽略模式或文件名列表以冒号分隔。
为了使用这种方法,切换到要删除文件的目录,像下面这样设置 `GLOBIGNORE` 变量:
```
$ cd test
$ GLOBIGNORE=*.odt:*.iso:*.txt
```
在这种情况下,除了 `.odt``.iso` 和 `.txt` 之外的所有文件,都将从当前目录删除。
现在,运行如下的命令清空这个目录:
```
$ rm -v *
```
之后,关闭 `GLOBIGNORE` 变量:
```
$ unset GLOBIGNORE
```
![使用 bash 变量 GLOBIGNORE 删除文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-Files-Using-Bash-GlobIgnore.png)
*使用 bash 变量 GLOBIGNORE 删除文件*
注:为了理解上面的命令行采用的标识的意思,请参考我们在每一个插图中使用的命令对应的 man 手册。
就这些了!如果你知道有实现相同目录的其他命令行技术,不要忘了通过下面的反馈部分分享给我们。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/delete-all-files-in-directory-except-one-few-file-extensions/
作者:[Aaron Kili][a]
译者:[yangmingming](https://github.com/yangmingming)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-Files-Using-Bash-GlobIgnore.png
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-File-Extensions.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-Files-Using-Find-and-Xargs-Command.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-gz-Files.png
[5]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Certain-File-Extensions.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Zip-Files-in-Linux.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Few-Files-in-Linux.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/DeleteAll-Files-Except-One-File-in-Linux.png
View File
@ -0,0 +1,129 @@
如何在 Linux 中将文件编码转换为 UTF-8
===============
在这篇教程中,我们将解释字符编码的含义,然后给出一些使用命令行工具将使用某种字符编码的文件转化为另一种编码的例子。最后,我们将一起看一看如何在 Linux 下将使用各种字符编码的文件转化为 UTF-8 编码。
你可能已经知道,计算机除了二进制数据,是不会理解和存储字符、数字或者任何人类能够理解的东西的。一个二进制位只有两种可能的值,也就是 `0` 或 `1`、`真` 或 `假`、`是` 或 `否`。其它的任何事物,比如字符、数据和图片,必须要以二进制的形式来表现,以供计算机处理。
简单来说,字符编码是一种可以指示电脑来将原始的 0 和 1 解释成实际字符的方式,在这些字符编码中,字符都以一串数字来表示。
字符编码方案有很多种,比如 ASCII、ANSI、Unicode 等等。下面是 ASCII 编码的一个例子。
```
字符 二进制
A 01000001
B 01000010
```
在 Linux 中,命令行工具 `iconv` 用来将使用一种编码的文本转化为另一种编码。
你可以使用 `file` 命令,并添加 `-i``--mime` 参数来查看一个文件的字符编码,这个参数可以让程序像下面的例子一样输出字符串的 mime (Multipurpose Internet Mail Extensions) 数据:
```
$ file -i Car.java
$ file -i CarDriver.java
```
![在 Linux 中查看文件的编码](http://www.tecmint.com/wp-content/uploads/2016/10/Check-File-Encoding-in-Linux.png)
*在 Linux 中查看文件的编码*
iconv 工具的使用方法如下:
```
$ iconv option
$ iconv options -f from-encoding -t to-encoding inputfile(s) -o outputfile
```
在这里,`-f` 或 `--from-code` 表明了输入编码,而 `-t``--to-encoding` 指定了输出编码。
为了列出所有已有编码的字符集,你可以使用以下命令:
```
$ iconv -l
```
![列出所有已有编码字符集](http://www.tecmint.com/wp-content/uploads/2016/10/List-Coded-Charsets-in-Linux.png)
*列出所有已有编码字符集*
### 将文件从 ISO-8859-1 编码转换为 UTF-8 编码
下面,我们将学习如何将一种编码方案转换为另一种编码方案。下面的命令将会将 ISO-8859-1 编码转换为 UTF-8 编码。
考虑如下文件 `input.file`,其中包含这几个字符:
```
(这里是几个非 UTF-8 编码的字符,网页转存后无法正常显示)
```
我们从查看这个文件的编码开始,然后来查看文件内容。最后,我们可以把所有字符转换为 UTF-8 编码。
在运行 `iconv` 命令之后,我们可以像下面这样检查输出文件的内容,和它使用的字符编码。
```
$ file -i input.file
$ cat input.file
$ iconv -f ISO-8859-1 -t UTF-8//TRANSLIT input.file -o out.file
$ cat out.file
$ file -i out.file
```
![在 Linux 中将 ISO-8859-1 转化为 UTF-8](http://www.tecmint.com/wp-content/uploads/2016/10/Converts-UTF8-to-ASCII-in-Linux.png)
*在 Linux 中将 ISO-8859-1 转化为 UTF-8*
注意:如果输出编码后面添加了 `//IGNORE` 字符串,那些不能被转换的字符将不会被转换,并且在转换后,程序会显示一条错误信息。
好,如果字符串 `//TRANSLIT` 被添加到了上面例子中的输出编码之后 (`UTF-8//TRANSLIT`),待转换的字符会尽量采用形译原则。也就是说,如果某个字符在输出编码方案中不能被表示的话,它将会被替换为一个形状比较相似的字符。
而且,如果一个字符不在输出编码中,而且不能被形译,它将会在输出文件中被一个问号标记 `?` 代替。
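作为补充(非原文内容,仅为示意),下面两条命令分别演示 `//TRANSLIT``//IGNORE` 的效果,文件名沿用上文的 input.file

```
$ iconv -f UTF-8 -t ASCII//TRANSLIT input.file -o translit.file   # 无法表示的字符尽量用形近字符代替
$ iconv -f UTF-8 -t ASCII//IGNORE input.file -o ignore.file       # 无法表示的字符直接丢弃
```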
### 将多个文件转换为 UTF-8 编码
回到我们的主题。如果你想将多个文件甚至某目录下所有文件转化为 UTF-8 编码,你可以像下面一样,编写一个简单的 shell 脚本,并将其命名为 `encoding.sh`
```
#!/bin/bash
### 将 values_here 替换为输入编码
FROM_ENCODING="value_here"
### 输出编码 (UTF-8)
TO_ENCODING="UTF-8"
### 转换命令
CONVERT=" iconv -f $FROM_ENCODING -t $TO_ENCODING"
### 使用循环转换多个文件
for file in *.txt; do
$CONVERT "$file" -o "${file%.txt}.utf8.converted"
done
exit 0
```
保存文件,然后为它添加可执行权限。在待转换文件 (*.txt) 所在的目录中运行这个脚本。
```
$ chmod +x encoding.sh
$ ./encoding.sh
```
重要事项:你也可以使这个脚本变得更通用,比如转换任意特定的字符编码到另一种编码。为了达到这个目的,你只需要改变 `FROM_ENCODING``TO_ENCODING` 变量的值。别忘了改一下输出文件的文件名 `"${file%.txt}.utf8.converted"`.
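下面是一个按照这个思路改写的示意脚本(非原文内容,仅供参考):把输入、输出编码改为从命令行参数读取,这样同一个脚本就可以用于任意两种编码之间的转换:

```
#!/bin/bash
# 用法示意: ./encoding.sh ISO-8859-1 UTF-8
FROM_ENCODING="$1"              # 第一个参数:输入编码
TO_ENCODING="${2:-UTF-8}"       # 第二个参数:输出编码,缺省为 UTF-8

for file in *.txt; do
    iconv -f "$FROM_ENCODING" -t "$TO_ENCODING" "$file" -o "${file%.txt}.converted.txt"
done
```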
若要了解更多信息,可以查看 `iconv` 的手册页 (man page)。
```
$ man iconv
```
将这篇指南总结一下,理解字符编码的概念、了解如何将一种编码方案转换为另一种,是一个电脑用户处理文本时必须要掌握的知识,程序员更甚。
最后,你可以在下面的评论部分中与我们联系,提出问题或反馈。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/convert-files-to-utf-8-encoding-in-linux/
作者:[Aaron Kili][a]
译者:[StdioA](https://github.com/StdioA)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Converts-UTF8-to-ASCII-in-Linux.png
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Coded-Charsets-in-Linux.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-File-Encoding-in-Linux.png
View File
@ -0,0 +1,27 @@
98% 的开发者在工作中使用了开源软件
==================
![developer using open source](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/07/developer.jpg?resize=750%2C500)
开源每天都会达到新的高度。但是一个新的研究表明超过 98% 的开发者在工作中使用开源工具。
Git 仓库管理软件 [GitLab][1] 进行了一项调查披露了一些关于开源接受度的有趣事实。针对开发人员群体的调查表明 98% 的开发者更喜欢在工作中使用开源91% 选择在工作和个人项目中选择使用相同的开发工具。此外92% 的人认为分布式版本控制系统Git 仓库)在工作中很重要。
在所有的偏好编程语言中JavaScript 占了 51% 的受访者比例。它后面是 Python、PHP、Java、Swift 和 Objective-C。86% 的开发者认为安全是代码的主要判断标准。
GitLab 首席执行官兼联合创始人 Sid Sijbrandij 在一次声明中表示:“尽管过程驱动的开发技术在过去已经取得了成功,但开发人员正在寻找一种更自然的软件开发革新以促进项目生命周期内的协作和信息共享。”
这份报告来自 GitLab 在 7 月 6 日和 27 日之间对使用其存储库平台的 362 家初创企业和企业的 CTO、开发人员和 DevOps 专业人士的调查。
--------------------------------------------------------------------------------
via: http://opensourceforu.com/2016/11/98-percent-developers-use-open-source-at-work/
作者:[JAGMEET SINGH][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://opensourceforu.com/author/jagmeet-singh/
[1]:https://about.gitlab.com/2016/11/02/global-developer-survey-2016/
View File
@ -0,0 +1,87 @@
LINUX NOW RUNS ON 99.6% OF TOP 500 SUPERCOMPUTERS
============================================================
[
![Linux rules the world of supercomputers](https://itsfoss.com/wp-content/uploads/2016/11/Linux-King-Supercomputer-world-min.jpg)
][12]
_Brief: Linux may have just 2% in the desktop market share, but when it comes to supercomputers, Linux is simply ruling it with over 99% of the share._
Linux running on more than 99% of the top 500 fastest supercomputers in the world is no surprise. If you followed our previous reports, in the year 2015, [Linux was running on more than 97% of the top 500 supercomputers][13]. This year, it just got better.
This information is collected by an independent organization [Top500][14] that publishes the details about the top 500 fastest supercomputers known to them, twice a year. You can [go to the website and filter out the list][15] based on country, OS type used, vendors etc. Don't worry, I'll do it for you and present some of the most interesting facts from this year's list.
### LINUX GOT 498 OUT OF 500
If I have to break it down in numbers, 498 out of the top 500 supercomputers run Linux. The remaining two supercomputers run Unix-based operating systems. Windows, which was running on one supercomputer until last year, is nowhere in the list this year. Perhaps none of the supercomputers can run Windows 10 (pun intended).
To summarize the list of top 500 supercomputers based on OS this year:
* Linux: 498
* Unix: 2
* Windows: 0
To give you a year wise summary of Linux shares on the top 500 supercomputers:
* In 2012: 94%
* In [2013][6]: 95%
* In [2014][7]: 97%
* In [2015][8]: 97.2%
* In 2016: 99.6%
* In 2017: ???
In addition to that, the first 380 fastest supercomputers run Linux, including, of course, the fastest supercomputer, which is based in China. Unix is used by the 386th and 387th ranked supercomputers, also based in China.
### SOME OTHER INTERESTING STATS ABOUT FASTEST SUPERCOMPUTERS
[
![List of top 10 fastest supercomputers in the world in 2016](https://itsfoss.com/wp-content/uploads/2016/11/fastest-supercomputers.png)
][16]
Moving Linux aside, I was looking at the list and thought of sharing some other interesting stats with you.
* World's fastest supercomputer is [Sunway TaihuLight][9]. It is based in the [National Supercomputing Center in Wuxi][10], China. It has a speed of 93 PFLOPS.
* World's second fastest supercomputer is also based in China ([Tianhe-2][11]), while the third spot is taken by the US-based Titan.
* Out of the top 10 fastest supercomputers, the USA has 5, Japan and China have 2 each, while Switzerland has 1.
* The US and China both have 171 supercomputers each in the list of the top 500 supercomputers.
* Japan has 27, France has 20, while India, Russia and Saudi Arabia have 5 supercomputers each in the list.
Some interesting facts, aren't they? You can filter out your own list [here][18] for further details. For the moment, I am happy to brag about Linux running on 99% of the top 500 supercomputers and look forward to a perfect score of 100% next year.
While you are reading it, do share this article on social media. It's an achievement for Linux and we got to show off :P
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-99-percent-top-500-supercomputers
作者:[Abhishek Prakash ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/abhishek/
[1]:https://twitter.com/share?original_referer=https%3A%2F%2Fitsfoss.com%2F&source=tweetbutton&text=Linux+Now+Runs+On+99.6%25+Of+Top+500+Supercomputers&url=https%3A%2F%2Fitsfoss.com%2Flinux-99-percent-top-500-supercomputers%2F&via=%40itsfoss
[2]:https://www.linkedin.com/cws/share?url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/
[3]:http://pinterest.com/pin/create/button/?url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/&description=Linux+Now+Runs+On+99.6%25+Of+Top+500+Supercomputers&media=https://itsfoss.com/wp-content/uploads/2016/11/Linux-King-Supercomputer-world-min.jpg
[4]:https://twitter.com/share?text=%23Linux+now+runs+on+more+than+99%25+of+top+500+%23supercomputers+in+the+world&via=itsfoss&related=itsfoss&url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/
[5]:https://twitter.com/share?text=%23Linux+now+runs+on+more+than+99%25+of+top+500+%23supercomputers+in+the+world&via=itsfoss&related=itsfoss&url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/
[6]:https://itsfoss.com/95-percent-worlds-top-500-supercomputers-run-linux/
[7]:https://itsfoss.com/97-percent-worlds-top-500-supercomputers-run-linux/
[8]:https://itsfoss.com/linux-runs-97-percent-worlds-top-500-supercomputers/
[9]:https://en.wikipedia.org/wiki/Sunway_TaihuLight
[10]:https://www.top500.org/site/50623
[11]:https://en.wikipedia.org/wiki/Tianhe-2
[12]:https://itsfoss.com/wp-content/uploads/2016/11/Linux-King-Supercomputer-world-min.jpg
[13]:https://itsfoss.com/linux-runs-97-percent-worlds-top-500-supercomputers/
[14]:https://www.top500.org/
[15]:https://www.top500.org/statistics/sublist/
[16]:https://itsfoss.com/wp-content/uploads/2016/11/fastest-supercomputers.png
[17]:https://itsfoss.com/digikam-5-0-released-install-it-in-ubuntu-linux/
[18]:https://www.top500.org/statistics/sublist/
View File
@ -1,3 +1,4 @@
yangmingming translating
# aria2 (Command Line Downloader) command examples
[aria2][4] is a free, open source, lightweight multi-protocol & multi-source command-line download utility. It supports HTTP/HTTPS, FTP, SFTP, BitTorrent and Metalink. aria2 can be manipulated via built-in JSON-RPC and XML-RPC interfaces. aria2 automatically validates chunks of data while downloading a file. It can download a file from multiple sources/protocols and tries to utilize your maximum download bandwidth. By default, most Linux distributions include aria2, so it can be installed easily from the official repository. Some GUI download managers, such as [uget][3], use aria2 as a plugin to improve the download speed.
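As a quick illustration (not part of the original text), installing aria2 from the official repositories typically looks like this on common distributions:

```
sudo apt-get install aria2    # Debian / Ubuntu
sudo dnf install aria2        # Fedora
sudo pacman -S aria2          # Arch Linux
```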
View File
@ -1,148 +0,0 @@
TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016
=====================================================
![](https://itsfoss.com/wp-content/uploads/2016/06/linux-video-ditor-software.jpg)
Brief: Tiwo discusses the best video editors for Linux, their pros and cons and the installation method for Ubuntu-based distros in this article.
We have discussed [best photo management applications for Linux][1], [best code editors for Linux][2] in similar articles in the past. Today we shall see the best video editing software for Linux.
When asked about free video editing software, Windows Movie Maker and iMovie is what most people often suggest.
Unfortunately, both of them are not available for GNU/Linux. But you dont need to worry about it, we have pooled together a list of best free video editors for you.
### BEST VIDEO EDITOR APPS FOR LINUX
Lets have a look at the top 5 best free video editing software for Linux below :
#### 1. KDENLIVE
![](https://itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg)
[Kdenlive][3] is a free and [open source][4] video editing software from KDE that provides dual video monitors, a multi-track timeline, clip list, customizable layout support, basic effects, and basic transitions.
It supports wide variety of file formats and a wide range of camcorders and cameras including Low resolution camcorder (Raw and AVI DV editing), Mpeg2, mpeg4 and h264 AVCHD (small cameras and camcorders), High resolution camcorder files, including HDV and AVCHD camcorders, Professional camcorders, including XDCAM-HD™ streams, IMX™ (D10) streams, DVCAM (D10) , DVCAM, DVCPRO™, DVCPRO50™ streams and DNxHD™ streams.
You can install it from terminal by running the following command :
```
sudo apt-get install kdenlive
```
Or, open Ubuntu Software Center then search Kdenlive.
#### 2. OPENSHOT
![](https://itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg)
[OpenShot][5] is the second choice in our list of Linux video editing software. OpenShot can help you create films with support for transitions, effects and adjusting audio levels, and of course, it supports most formats and codecs.
You can also export your film to DVD, upload to YouTube, Vimeo, Xbox 360, and many other common formats. OpenShot is simpler than Kdenlive. So if you need a video editor with a simple UI, OpenShot is a good choice.
The latest version is 2.0.7. You can install the OpenShot video editor by running the following command from a terminal window:
```
sudo apt-get install openshot
```
It needs to download 25 MB, and 70 MB disk space after installed.
#### 3. FLOWBLADE MOVIE EDITOR
![](https://itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg)
[Flowblade Movie Editor][6] is a multitrack non-linear video editor for Linux. It is free and open source. It comes with a stylish and modern user interface.
Written in Python, it is designed to be fast and precise. Flowblade has focused on providing the best possible experience on Linux and other free platforms, so there's no Windows or OS X version for now.
To install Flowblade in Ubuntu and other Ubuntu based systems, use the command below:
```
sudo apt-get install flowblade
```
#### 4. LIGHTWORKS
![](https://itsfoss.com/wp-content/uploads/2016/06/lightworks-running-on-ubuntu-16.04.jpg)
If you looking for a video editor software that has more feature, this is the answer. [Lightworks][7] is a cross-platform professional video editor, available for Linux, Mac OS X and Windows.
It is an award winning professional [non-linear editing][8] (NLE) software that supports resolutions up to 4K as well as video in SD and HD formats.
This application has two versions: Lightworks Free and Lightworks Pro. The free version doesn't support export to Vimeo (H.264 / MPEG-4) and YouTube (H.264 / MPEG-4) up to 2160p (4K UHD), Blu-ray, or H.264/MP4 with configurable bitrate settings, while the Pro version does.
- Lightworks Free
- Lightworks Pro
Pro version has more features such as higher resolution support, 4K and Blue Ray support etc.
##### HOW TO INSTALL LIGHTWORKS?
Unlike the other video editors, installing Lightworks is not as straightforward as running a single command. Don't worry, it's not that complicated either.
- Step 1  You can get the package from the [Lightworks Downloads Page][9]. The package size is about 79.5 MB.
>Please note: There's no Linux 32-bit support.
- Step 2 Once downloaded, you can install it using [Gdebi package installer][10]. Gdebi automatically downloads the dependency :
![](https://itsfoss.com/wp-content/uploads/2016/06/Installing-lightworks-on-ubuntu.jpg)
- Step 3 Now you can open it from Ubuntu dashboard, or your Linux distros menu.
- Step 4 It needs an account when you use it for first time. Click at Not Registerd? button to register. Dont worry, its free!
- Step 5 After your account has been verified, now login.
Now the Lightworks is ready to use.
Need Lightworks video tutorial? Get them at [Lightworks video tutorials Page][11].
#### 5. BLENDER
![](https://itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg)
Blender is a professional, industry-grade open source, cross platform video editor. It is popular for 3D works. Blender has been used in several Hollywood movies including Spider Man series.
Although originally designed for 3D modeling, it can also be used for video editing, with input support for a variety of formats. The Video Editor includes:
- Live preview, luma waveform, chroma vectorscope and histogram displays
- Audio mixing, syncing, scrubbing and waveform visualization
- Up to 32 slots for adding video, images, audio, scenes, masks and effects
- Speed control, adjustment layers, transitions, keyframes, filters and more.
The latest version can be downloaded from [Blender Download Page][12].
### WHICH IS THE BEST VIDEO EDITING SOFTWARE?
If you need a simple video editor, OpenShot, Kdenlive or Flowblade is a good choice. These are suitable for beginners and a system with standard specification.
Then if you have a high-end computer, and need advanced features you can go out with Lightworks. If you are looking for more advanced features, Blender has got your back.
So thats all I can write about 5 best video editing software for Linux such as Ubuntu, Linux Mint, Elementary, and other Linux distributions. Share with us which video editor you like the most.
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-video-editing-software-linux/
作者:[Tiwo Satriatama][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/tiwo/
[1]: https://itsfoss.com/linux-photo-management-software/
[2]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[3]: https://kdenlive.org/
[4]: https://itsfoss.com/tag/open-source/
[5]: http://www.openshot.org/
[6]: http://jliljebl.github.io/flowblade/
[7]: https://www.lwks.com/
[8]: https://en.wikipedia.org/wiki/Non-linear_editing_system
[9]: https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206
[10]: https://itsfoss.com/gdebi-default-ubuntu-software-center/
[11]: https://www.lwks.com/videotutorials
[12]: https://www.blender.org/download/

View File

@ -0,0 +1,84 @@
How To Manually Backup Your SMS / MMS Messages On Android?
============================================================
![Android backup sms mms](https://iwf1.com/wordpress/wp-content/uploads/2016/10/Android-backup-sms-mms.jpg)
If you're switching devices or upgrading your system, making a backup of your data might be of crucial importance.
One of the places where our important data may lie is in our SMS / MMS messages; be it of sentimental or practical value, backing it up might prove quite useful.
However, unlike our photos, videos or song files, which can be transferred and backed up with relative ease, backing up our SMS / MMS usually proves to be a more complicated task that commonly requires a third-party app or service.
### Why Do It Manually?
Although there currently exist quite a few different apps that might take care of backing up SMS and MMS for you, you may want to consider doing it manually for the following reasons:
1. Apps **may not work** on different devices or different Android versions.
2. Apps may back up your data by uploading it to the Internet cloud, therefore requiring you to **jeopardize the safety** of your content.
3. By backing up manually, you have complete control over where your data goes and what it goes through, thus **limiting the risk of spyware** in the process.
4. Doing it manually can be overall **less time consuming, easier and more straightforward** than any other way.
### How To Backup SMS / MMS Manually?
To back up your SMS / MMS messages manually, you'll need to have an Android tool called [adb][1] installed on your computer.
Now, the important thing to know regarding SMS / MMS is that Android stores them in a database commonly named **mmssms.db**.
Since the location of that database may differ from one device to another, and also because other SMS apps can create databases of their own (such as gommssms.db created by the GO SMS app), the first thing you'd want to do is to search for these databases.
So, open up your CLI tool (I use the Linux Terminal, you may use Windows CMD or PowerShell) and issue the following commands:
Note: below is the series of commands needed for the task; an explanation of what each command does follows.
```
adb root
adb shell
find / -name "*mmssms*"
exit
adb pull /PATH/TO/mmssms.db /PATH/TO/DESTINATION/FOLDER
```
#### Explanation:
We start with the adb root command in order to start adb in root mode, so that we'll have permission to reach system-protected files as well.
“adb shell” is used to get inside the device shell.
Next, the “find” command is used to search for the databases. (In my case it's found in: /data/data/com.android.providers.telephony/databases/mmssms.db)
* Tip: if your Terminal prints too many irrelevant results, try refining your “find” parameters (google it).
[
![Android SMS&MMS databases](http://iwf1.com/wordpress/wp-content/uploads/2016/10/Android-SMSMMS-databases-730x726.jpg)
][2]
Android SMS&MMS databases
Then we use the exit command in order to exit back to our local system directory.
Lastly, adb pull is used to copy the database file into a folder on our computer.
Now, once you're ready to restore your SMS / MMS messages, whether it's on a new device or a new system version, simply search again for the location of mmssms.db on the new system and replace it with the one you've backed up.
Use adb push to replace it, e.g.: adb push ~/Downloads/mmssms.db /data/data/com.android.providers.telephony/databases/mmssms.db
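As a hedged sketch of that restore step (paths, permissions and ownership may differ between devices and Android versions):
```
adb root
adb push ~/Downloads/mmssms.db /data/data/com.android.providers.telephony/databases/mmssms.db
# Depending on the device, you may also need to restore the file permissions, e.g.:
# adb shell chmod 660 /data/data/com.android.providers.telephony/databases/mmssms.db
adb reboot   # reboot so the messaging app re-reads the database
```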
--------------------------------------------------------------------------------
via: https://iwf1.com/how-to-manually-backup-your-sms-mms-messages-on-android/
作者:[Liron ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/tag/android
[1]:http://developer.android.com/tools/help/adb.html
[2]:http://iwf1.com/wordpress/wp-content/uploads/2016/10/Android-SMSMMS-databases.jpg

View File

@ -1,153 +0,0 @@
LinuxBars translating
How to Secure Network Services Using TCP Wrappers in Linux
===========
In this article we will explain what TCP wrappers are and how to configure them to [restrict access to network services][7] running on a Linux server. Before we start, however, we must clarify that the use of TCP wrappers does not eliminate the need for a properly [configured firewall][6].
In this regard, you can think of this tool as a [host-based access control list][5], and not as the [ultimate security measure][4] for your system. By using a firewall and TCP wrappers, instead of favoring one over the other, you will make sure that your server is not left with a single point of failure.
### Understanding hosts.allow and hosts.deny
When a network request reaches your server, TCP wrappers uses `hosts.allow` and `hosts.deny` (in that order) to determine if the client should be allowed to use a given service.
By default, these files are empty, all commented out, or do not exist. Thus, everything is allowed through the TCP wrappers layer and your system is left to rely on the firewall for full protection. Since this is not desired, due to the reason we stated in the introduction, make sure both files exist:
```
# ls -l /etc/hosts.allow /etc/hosts.deny
```
The syntax of both files is the same:
```
<services> : <clients> [: <option1> : <option2> : ...]
```
where,
1. services is a comma-separated list of services the current rule should be applied to.
2. clients represents a comma-separated list of hostnames or IP addresses affected by the rule. The following wildcards are accepted:
1. ALL matches everything. Applies both to clients and services.
2. LOCAL matches hosts without a period in their FQDN, such as localhost.
3. KNOWN indicates a situation where the hostname, host address, or user is known.
4. UNKNOWN is the opposite of KNOWN.
5. PARANOID causes a connection to be dropped if reverse DNS lookups (first on the IP address to determine the host name, then on the host name to obtain the IP addresses) return a different address in each case.
3. Finally, an optional list of colon-separated actions indicates what should happen when a given rule is triggered.
You may want to keep in mind that a rule allowing access to a given service in `/etc/hosts.allow` takes precedence over a rule in `/etc/hosts.deny` prohibiting it. Additionally, if two rules apply to the same service, only the first one will be taken into account.
Unfortunately, not all network services support the use of TCP wrappers. To determine if a given service supports them, do:
```
# ldd /path/to/binary | grep libwrap
```
If the above command returns output, the service can be TCP-wrapped. Examples of this are sshd and vsftpd, as shown here:
[![Find Supported Services in TCP Wrapper](http://www.tecmint.com/wp-content/uploads/2016/10/Find-Supported-Services-in-TCP-Wrapper.png)][3]
Find Supported Services in TCP Wrapper
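If you want to check several daemons at once, a small shell loop does the job. This is only a sketch; the binary paths below are assumptions and may differ on your distribution:
```
for bin in /usr/sbin/sshd /usr/sbin/vsftpd /usr/sbin/in.telnetd; do
    [ -x "$bin" ] || continue
    if ldd "$bin" | grep -q libwrap; then
        echo "$bin: supports TCP wrappers"
    else
        echo "$bin: no TCP wrappers support"
    fi
done
```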
### How to Use TCP Wrappers to Restrict Access to Services
As you edit `/etc/hosts.allow` and `/etc/hosts.deny`, make sure you add a newline by pressing Enter after the last non-empty line.
To [allow SSH and FTP access][2] only to 192.168.0.102 and localhost and deny all others, add these two lines in `/etc/hosts.deny`:
```
sshd,vsftpd : ALL
ALL : ALL
```
and the following line in `/etc/hosts.allow`:
```
sshd,vsftpd : 192.168.0.102,LOCAL
```
TCP Wrappers hosts.deny File
```
#
# hosts.deny This file contains access rules which are used to
# deny connections to network services that either use
# the tcp_wrappers library or that have been
# started through a tcp_wrappers-enabled xinetd.
#
# The rules in this file can also be set up in
# /etc/hosts.allow with a 'deny' option instead.
#
# See 'man 5 hosts_options' and 'man 5 hosts_access'
# for information on rule syntax.
# See 'man tcpd' for information on tcp_wrappers
#
sshd,vsftpd : ALL
ALL : ALL
```
TCP Wrappers hosts.allow File
```
#
# hosts.allow This file contains access rules which are used to
# allow or deny connections to network services that
# either use the tcp_wrappers library or that have been
# started through a tcp_wrappers-enabled xinetd.
#
# See 'man 5 hosts_options' and 'man 5 hosts_access'
# for information on rule syntax.
# See 'man tcpd' for information on tcp_wrappers
#
sshd,vsftpd : 192.168.0.102,LOCAL
```
These changes take place immediately without the need for a restart.
In the following image you can see the effect of removing the word `LOCAL` from the last line: the FTP server will become unavailable for localhost. After we add the wildcard back, the service becomes available again.
[![Verify FTP Access ](http://www.tecmint.com/wp-content/uploads/2016/10/Verify-FTP-Access.png)][1]
Verify FTP Access
To allow all services to hosts where the name contains `example.com`, add this line in `hosts.allow`:
```
ALL : .example.com
```
and to deny access to vsftpd to machines on 10.0.1.0/24, add this line in `hosts.deny`:
```
vsftpd : 10.0.1.
```
In the last two examples, notice the dot at the beginning and at the end of the client list, respectively. It is used to indicate “ALL hosts and / or clients where the name or the IP contains that string”.
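If the tcpdmatch utility is available on your system (it usually ships with the TCP wrappers package), you can predict how a connection would be handled before relying on your rules; a quick sketch:
```
# Show which rule matches and whether access would be granted or denied
# for a vsftpd connection coming from 10.0.1.5
tcpdmatch vsftpd 10.0.1.5
```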
Was this article helpful to you? Do you have any questions or comments? Feel free to drop us a note using the comment form below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/secure-linux-tcp-wrappers-hosts-allow-deny-restrict-access/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Verify-FTP-Access.png
[2]:http://www.tecmint.com/block-ssh-and-ftp-access-to-specific-ip-and-network-range/
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-Supported-Services-in-TCP-Wrapper.png
[4]:http://www.tecmint.com/linux-server-hardening-security-tips/
[5]:http://www.tecmint.com/secure-files-using-acls-in-linux/
[6]:http://www.tecmint.com/configure-firewalld-in-centos-7/
[7]:http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/

View File

@ -0,0 +1,100 @@
Livepatch Apply Critical Security Patches to Ubuntu Linux Kernel Without Rebooting
============================================================
If you are a system administrator in charge of maintaining critical systems in enterprise environments, we are sure you know two important things:
1) Finding a downtime window to install security patches in order to handle kernel or operating system vulnerabilities can be difficult. If the company or business you work for does not have security policies in place, operations management may end up favoring uptime over the need to solve vulnerabilities. Additionally, internal bureaucracy can cause delays in granting approvals for a downtime. Been there myself.
2) Sometimes you cant really afford a downtime, and should be prepared to mitigate any potential exposures to malicious attacks some other way.
The good news is that Canonical has recently released (actually, a couple of days ago) its Livepatch service to apply critical kernel patches to Ubuntu 16.04 (64-bit edition / 4.4.x kernel) without the need for a reboot afterwards. Yes, you read that right: with Livepatch, you don't need to restart your Ubuntu 16.04 server in order for the security patches to take effect.
### Signing up for Ubuntu Livepatch
In order to use Canonical Livepatch Service, you need to sign up at [https://auth.livepatch.canonical.com/][1] and indicate if you are a regular Ubuntu user or an Advantage subscriber (paid option). All Ubuntu users can link up to 3 different machines to Livepatch through the use of a token:
[
![Canonical Livepatch Service](http://www.tecmint.com/wp-content/uploads/2016/10/Canonical-Livepatch-Service.png)
][2]
Canonical Livepatch Service
In the next step you will be prompted to enter your Ubuntu One credentials or sign up for a new account. If you choose the latter, you will need to confirm your email address in order to finish your registration:
[
![Ubuntu One Confirmation Mail](http://www.tecmint.com/wp-content/uploads/2016/10/Ubuntu-One-Confirmation-Mail.png)
][3]
Ubuntu One Confirmation Mail
Once you click on the link above to confirm your email address, youll be ready to go back to [https://auth.livepatch.canonical.com/][4] and get your Livepatch token.
### Getting and Using your Livepatch Token
To begin, copy the unique token assigned to your Ubuntu One account:
[
![Canonical Livepatch Token](http://www.tecmint.com/wp-content/uploads/2016/10/Livepatch-Token.png)
][5]
Canonical Livepatch Token
Then go to a terminal and type:
```
$ sudo snap install canonical-livepatch
```
The above command will install the canonical-livepatch snap, whereas
```
$ sudo canonical-livepatch enable [YOUR TOKEN HERE]
```
will enable it for your system. If this last command indicates it can't find canonical-livepatch, make sure `/snap/bin` has been added to your PATH. A workaround consists of changing your working directory to `/snap/bin` and running:
```
$ sudo ./canonical-livepatch enable [YOUR TOKEN HERE]
```
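Alternatively, instead of changing your working directory, you can simply call the binary by its full path; a minimal sketch:
```
$ sudo /snap/bin/canonical-livepatch enable [YOUR TOKEN HERE]
```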
[
![Install Livepatch in Ubuntu](http://www.tecmint.com/wp-content/uploads/2016/10/Install-Livepatch-in-Ubuntu.png)
][6]
Install Livepatch in Ubuntu
Over time, you'll want to check the description and the status of patches applied to your kernel. Fortunately, this is as easy as running:
```
$ sudo ./canonical-livepatch status --verbose
```
as you can see in the following image:
[
![Check Livepatch Status in Ubuntu](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Livepatch-Status.png)
][7]
Check Livepatch Status in Ubuntu
Having enabled Livepatch on your Ubuntu server, you will be able to keep planned and unplanned downtimes to a minimum while keeping your system secure. Hopefully Canonical's initiative will earn you a pat on the back from management or, better yet, a raise.
Feel free to let us know if you have any questions about this article. Just drop us a note using the comment form below and we will get back to you as soon as possible.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/livepatch-install-critical-security-patches-to-ubuntu-kernel
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://auth.livepatch.canonical.com/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Canonical-Livepatch-Service.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Ubuntu-One-Confirmation-Mail.png
[4]:https://auth.livepatch.canonical.com/
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Livepatch-Token.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/Install-Livepatch-in-Ubuntu.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Livepatch-Status.png

View File

@ -1,552 +0,0 @@
OneNewLife translating
# Getting Started with Webpack 2
![](https://cdn-images-1.medium.com/max/2000/1*yI44h8Df-l-2LUqvXIi8JQ.png)
Webpack 2 will be out of beta [once the documentation has been finished][26]. But that doesnt mean you cant start using version 2 now if you know how to configure it.
### What is Webpack?
At its simplest, Webpack is a module bundler for your JavaScript. However, since its release its evolved into a manager of all your front-end code (either intentionally or by the communitys will).
![](https://cdn-images-1.medium.com/max/800/1*yBt2rFj2DbckFliGE0LEyg.png)
A task runner such as _Gulp_ can handle many different preprocessors and transpilers, but in all cases, it will take a source _input_ and crunch it into a compiled _output_. However, it does this on a case-by-case basis with no concern for the system at large. That is the burden of the developer: to pick up where the task runner left off and find the proper way for all these moving parts to mesh together in production.
Webpack attempts to lighten the developer load a bit by asking a bold question: _what if there were a part of the development process that handled dependencies on its own? What if we could simply write code in such a way that the build process managed itself, based on only what was necessary in the end?_
![](https://cdn-images-1.medium.com/max/800/1*TOFfoH0cXTc8G3Y_F6j3Jg.png)
If you've been a part of the web community for the past few years, you already know the preferred method of solving a problem: _build this with JavaScript_. And so Webpack attempts to make the build process easier by passing dependencies through JavaScript. But the true power of its design isn't simply the code _management_ part; it's that this management layer is 100% valid JavaScript (with Node features). Webpack gives you the ability to write valid JavaScript that has a better sense of the system at large.
In other words: _you don't write code for Webpack. You write code for your project_. And Webpack keeps up (with some config, of course).
In a nutshell, if youve ever struggled with any of the following:
* Accidentally including stylesheets and JS libraries you dont need into production, bloating the size
* Encountering scoping issues—both from CSS and JavaScript
* Finding a good system for using Node/Bower modules in your JavaScript, or relying on a crazy backend configuration to properly utilize those modules
* Needing to optimize asset delivery better but fearing youll break something
…then you could benefit from Webpack. It handles all the above effortlessly by letting JavaScript worry about your dependencies and load order instead of your developer brain. The best part? Webpack can even run purely on the server side, meaning you can still build [progressively-enhanced][25] websites using Webpack.
### First Steps
Well use [Yarn][24] (`brew install yarn`) in this tutorial instead of `npm`, but its totally up to you; they do the same thing. From our project folder, well run the following in a terminal window to add Webpack 2 to both our global packages and our local project:
```
yarn global add webpack@2.1.0-beta.25 webpack-dev-server@2.1.0-beta.9
yarn add --dev webpack@2.1.0-beta.25 webpack-dev-server@2.1.0-beta.9
```
Well then declare a webpack configuration with a `webpack.config.js` file in the root of our project directory:
```
'use strict';
const webpack = require("webpack");
module.exports = {
  context: __dirname + "/src",
  entry: {
    app: "./app.js",
  },
  output: {
    path: __dirname + "/dist",
    filename: "[name].bundle.js",
  },
};
```
_Note: `__dirname` refers to the root of your project._
Remember that Webpack “knows” what's going on in your project? It _knows_ by reading your code (don't worry; it signed an NDA). Webpack basically does the following:
1. Starting from the `context` folder, …
2. … it looks for `entry` filenames 
3. … and reads the content. Every `import` ([ES6][7]) or `require()` (Node) dependency it finds as it parses the code, it bundles for the final build. It then searches _those_ dependencies, and those dependencies' dependencies, until it reaches the very end of the “tree”, only bundling what it needs to, and nothing else.
4. From there, Webpack bundles everything to the `output.path` folder, naming it using the `output.filename` naming template (`[name]` gets replaced with the object key from `entry`)
So if our `src/app.js` file looked something like this (assuming we ran `yarn add --dev moment` beforehand):
```
'use strict';

import moment from 'moment';
var rightNow = moment().format('MMMM Do YYYY, h:mm:ss a');
console.log( rightNow );

// "October 23rd 2016, 9:30:24 pm"
```
We'd run
```
webpack -p
```
_Note: the `-p` flag enables “production” mode and uglifies/minifies the output._
And it would output a `dist/app.bundle.js` that logs the current date & time to the console. Note that Webpack automatically knew what `'moment'` referred to (although if you had a `moment.js` file in your directory, by default Webpack would have prioritized this over your `moment` Node module).
### Working with Multiple Files
You can specify any number of entry/output points you wish by modifying only the `entry` object.
#### Multiple files, bundled together
```
'use strict';
const webpack = require("webpack");
module.exports = {
  context: __dirname + "/src",
  entry: {
    app: ["./home.js", "./events.js", "./vendor.js"],
  },
  output: {
    path: __dirname + "/dist",
    filename: "[name].bundle.js",
  },
};
```
Will all be bundled together as one `dist/app.bundle.js` file, in array order.
#### Multiple files, multiple outputs
```
const webpack = require("webpack");
module.exports = {
  context: __dirname + "/src",
  entry: {
    home: "./home.js",
    events: "./events.js",
    contact: "./contact.js",
  },
  output: {
    path: __dirname + "/dist",
    filename: "[name].bundle.js",
  },
};
```
Alternately, you may choose to bundle multiple JS files to break up parts of your app. This will be bundled as 3 files: `dist/home.bundle.js`, `dist/events.bundle.js`, and `dist/contact.bundle.js`.
#### Advanced auto-bundling
If you're breaking up your application into multiple `output` bundles (useful if one part of your app has a ton of JS you don't need to load up front), there's a likelihood you may be duplicating code across those files, because Webpack resolves each dependency separately. Fortunately, Webpack has a built-in _CommonsChunk_ plugin to handle this:
```
module.exports = {
  // …
  plugins: [
    new webpack.optimize.CommonsChunkPlugin({
      name: "commons",
      filename: "commons.js",
      minChunks: 2,
    }),
  ],
  // …
};
```
Now, across your `output` files, if you have any modules that get loaded `2` or more times (set by `minChunks`), it will bundle that into a `commons.js` file which you can then cache on the client side. This will result in an additional header request, sure, but you prevent the client from downloading the same libraries more than once. So there are many scenarios where this is a net gain for speed.
### Developing
Webpack actually has its own development server, so whether youre developing a static site or are just prototyping your front-end, its perfect for either. To get that running, just add a `devServer` object to `webpack.config.js`:
```
module.exports = {
context: __dirname + "/src",
entry: {
app: "./app.js",
},
output: {
filename: "[name].bundle.js",
path: __dirname + "/dist/assets",
publicPath: "/assets", // New
},
devServer: {
contentBase: __dirname + "/src", // New
},
};
```
Now make a `src/index.html` file that has:
```
<script src="/assets/app.bundle.js"></script>
```
… and from your terminal, run:
```
webpack-dev-server
```
Your server is now running at `localhost:8080`. _Note how `/assets` in the script tag matches `output.publicPath`: you can name this whatever you want (useful if you need a CDN)._
Webpack will hot-load any JavaScript changes as you make them without the need to refresh your browser. However, any changes to the `webpack.config.js` file will require a server restart to take effect.
### Globally-accessible methods
Need to use some of your functions from a global namespace? Simply set `output.library` within `webpack.config.js`:
```
module.exports = {
output: {
library: 'myClassName',
}
};
```
… and it will attach your bundle to a `window.myClassName` instance. So using that name scope, you could call methods available to that entry point (you can read more about this setting [on the documentation][23]).
### Loaders
Up until now, weve only covered working with JavaScript. Its important to start with JavaScript because _thats the only language Webpack speaks_. We can work with virtually any file type, as long as we pass it into JavaScript. We do that with _Loaders_.
A loader can refer to a preprocessor such as Sass, or a transpiler such as Babel. On NPM, theyre usually named `*-loader` such as `sass-loader` or `babel-loader`.
#### Babel + ES6
If we wanted to use ES6 via [Babel][22] in our project, wed first install the appropriate loaders locally:
```
yarn add --dev babel-loader babel-core babel-preset-es2015
```
… and then add it to `webpack.config.js` so Webpack knows where to use it.
```
module.exports = {
  // …
  module: {
    rules: [
      {
        test: /\.js$/,
        use: [{
          loader: "babel-loader",
          options: { presets: ["es2015"] }
        }],
      },
      // Loaders for other file types can go here
    ],
  },
  // …
};
```
_A note for Webpack 1 users: the core concept for Loaders remains the same, but the syntax has improved. Until they finish the docs this may/may not be the exact preferred syntax._
This uses the `/\.js$/` RegEx to look for any files that end in `.js`, which are then loaded via Babel. Webpack relies on RegEx tests to give you complete control; it doesn't limit you to file extensions or assume your code must be organized in a certain way. For example: maybe your `/my_legacy_code/` folder isn't written in ES6. So you could modify the `test` above to be `/^((?!my_legacy_folder).)*\.js$/`, which would exclude that specific folder but process the rest with Babel.
#### CSS + Style Loader
If we wanted to only load CSS as our application needed, we could do that as well. Lets say we have an `index.js` file. Well import it from there:
```
import styles from './assets/stylesheets/application.css';
```
Well get the following error: `You may need an appropriate loader to handle this file type`. Remember that Webpack can only understand JavaScript, so well have to install the appropriate loader:
```
yarn add --dev css-loader style-loader
```
… and then add a rule to `webpack.config.js`:
```
module.exports = {
  // …
  module: {
    rules: [
      {
        test: /\.css$/,
        use: ["style-loader", "css-loader"],
      },
      // …
    ],
  },
};
```
_Loaders are processed in __reverse array order__. That means `css-loader` will run before `style-loader`._
You may notice that even in production builds, this actually bundles your CSS in with your bundled JavaScript, and `style-loader` manually writes your styles to the `<head>`. At first glance it may seem a little kooky, but slowly starts to make more sense the more you think about it. Youve saved a header request—saving valuable time on some connections—and if youre loading your DOM with JavaScript anyway, this essentially eliminates [FOUC][21] on its own.
Youll also notice that—out of the box—Webpack has automatically resolved all of your `@import` queries by packaging those files together as one (rather than relying on CSSs default import which can result in gratuitious header requests and slow-loading assets).
Loading CSS from your JS is pretty amazing, because you now can modularize your CSS in powerful new ways. Say you loaded `button.css` only through `button.js`. This would mean that if `button.js` is never actually used, its CSS wouldn't bloat our production build. If you adhere to component-oriented CSS practices such as SMACSS or BEM, you see the value in pairing your CSS more closely with your markup + JavaScript.
#### CSS + Node modules
We can use Webpack to take advantage of importing Node modules using Nodes `~` prefix. If we ran `yarn add normalize.css`, we could use:
```
@import "~normalize.css";
```
… and take full advantage of NPM managing our third party styles for us—versioning and all—without any copy + pasting on our part. Further, getting Webpack to bundle CSS for us has obvious advantages over using CSSs default import, saving the client from gratuitous header requests and slow load times.
_Update: this and the following section have been updated for accuracy, no longer confusing using CSS Modules to simply import Node modules. Thanks to [Albert Fernández][20] for the help!_
#### CSS Modules
You may have heard of [CSS Modules][19], which takes the _C_ out of _CSS_. It typically works best only if youre building the DOM with JavaScript, but in essence, it magically scopes your CSS classes to the JavaScript file that loaded it ([learn more about it here][18]). If you plan on using it, CSS Modules comes packaged with `css-loader` (`yarn add --dev css-loader`):
```
module.exports = {
  // …
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [
          "style-loader",
          { loader: "css-loader", options: { modules: true } }
        ],
      },
      // …
    ],
  },
};
```
_Note: for `css-loader` we're now using the __expanded object syntax__ to pass an option to it. You can use a string instead as shorthand to use the default options, as we're still doing with `style-loader`._
* * *
It's worth noting that you can actually drop the `~` when importing Node Modules with CSS Modules enabled (e.g.: `@import "normalize.css";`). However, you may encounter build errors now when you `@import` your own CSS. If you're getting “can't find ___” errors, try adding a `resolve` object to `webpack.config.js` to give Webpack a better understanding of your intended module order.
```
const path = require("path");

module.exports = {
  // …
  resolve: {
    modules: [path.resolve(__dirname, "src"), "node_modules"]
  },
};
```
We specified our source directory first, and then `node_modules`. So Webpack will handle resolution a little better, first looking through our source directory and then the installed Node modules, in that order (replace `"src"` and `"node_modules"` with your source and Node module directories, respectively).
#### Sass
Need to use Sass? No problem. Install:
```
yarn add --dev sass-loader node-sass
```
And add another rule:
```
module.exports = {
  // …
  module: {
    rules: [
      {
        test: /\.(sass|scss)$/,
        use: [
          "style-loader",
          "css-loader",
          "sass-loader",
        ]
      }
      // …
    ],
  },
};
```
Then when your JavaScript calls for an `import` of a `.scss` or `.sass` file, Webpack will do its thing.
#### CSS bundled separately
Maybe youre dealing with progressive enhancement; maybe you need a separate CSS file for some other reason. We can do that easily by swapping out `style-loader` with `extract-text-webpack-plugin` in our config without having to change any code. Take our example `app.js` file:
```
import styles from './assets/stylesheets/application.css';
```
Lets install the plugin locally (we need the beta version for this as of Oct 2016)…
```
yarn add --dev extract-text-webpack-plugin@2.0.0-beta.4
```
… and add to `webpack.config.js`:
```
const ExtractTextPlugin = require("extract-text-webpack-plugin");

module.exports = {
  // …
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [
          ExtractTextPlugin.extract("css"),
          { loader: "css-loader", options: { modules: true } },
        ],
      },
      // …
    ]
  },
  plugins: [
    new ExtractTextPlugin({
      filename: "[name].bundle.css",
      allChunks: true,
    }),
  ],
};
```
Now when running `webpack -p` youll also notice an `app.bundle.css` file in your `output` directory. Simply add a `<link>` tag to that file in your HTML as you would normally.
#### HTML
As you might have guessed, there's also an [html-loader][6] [plugin][17] for Webpack. However, when we get to loading HTML with JavaScript, this is about the point where we branch off into a myriad of differing approaches, and I can't think of one single example that would set you up for whatever you're planning on doing next. Typically, you'd load HTML for the purpose of using JavaScript-flavored markup such as [JSX][16] or [Mustache][15] or [Handlebars][14] to be used within a larger system such as [React][13], [Angular][12], [Vue][11], or [Ember][10].
So Ill end the tutorial here: you _can_ load markup with Webpack, but by this point youll be making your own decisions about your architecture that neither I nor Webpack can make for you. But using the above examples for reference and searching for the right loaders on NPM should be enough to get you going.
### Thinking in Modules
In order to get the most out of Webpack, youll have to think in modules—small, reusable, self-contained processes that do one thing and one thing well. That means taking something like this:
```
└── js/
└── application.js // 300KB of spaghetti code
```
… and turning it into this:
```
└── js/
├── components/
│ ├── button.js
│ ├── calendar.js
│ ├── comment.js
│ ├── modal.js
│ ├── tab.js
│ ├── timer.js
│ ├── video.js
│ └── wysiwyg.js
└── application.js // ~ 1KB of code; imports from ./components/
```
The result is clean, reusable code. Each individual component depends on `import`-ing its own dependencies, and `export`-ing what it wants to make public to other modules. Pair this with Babel + ES6, and you can utilize [JavaScript Classes][9] for great modularity, and _don't-think-about-it_ scoping that just works.
For more on modules, see [this excellent article by Preethi Kasreddy][8].
* * *
### Further Reading
* [Whats New in Webpack 2][5]
* [Webpack Config docs][4]
* [Webpack Examples][3]
* [React + Webpack Starter Kit][2]
* [Webpack How-to][1]
--------------------------------------------------------------------------------
via: https://blog.madewithenvy.com/getting-started-with-webpack-2-ed2b86c68783#.oozfpppao
作者:[Drew Powers][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.madewithenvy.com/@an_ennui
[1]:https://github.com/petehunt/webpack-howto
[2]:https://github.com/kriasoft/react-starter-kit
[3]:https://github.com/webpack/webpack/tree/master/examples
[4]:https://webpack.js.org/configuration/
[5]:https://gist.github.com/sokra/27b24881210b56bbaff7
[6]:https://github.com/webpack/html-loader
[7]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import
[8]:https://medium.freecodecamp.com/javascript-modules-a-beginner-s-guide-783f7d7a5fcc
[9]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes
[10]:http://emberjs.com/
[11]:http://vuejs.org/
[12]:https://angularjs.org/
[13]:https://facebook.github.io/react/
[14]:http://handlebarsjs.com/
[15]:https://github.com/janl/mustache.js/
[16]:https://jsx.github.io/
[17]:https://github.com/webpack/html-loader
[18]:https://github.com/css-modules/css-modules
[19]:https://github.com/css-modules/css-modules
[20]:https://medium.com/u/901a038e32e5
[21]:https://en.wikipedia.org/wiki/Flash_of_unstyled_content
[22]:https://babeljs.io/
[23]:https://webpack.js.org/concepts/output/#output-library
[24]:https://yarnpkg.com/
[25]:https://www.smashingmagazine.com/2009/04/progressive-enhancement-what-it-is-and-how-to-use-it/
[26]:https://github.com/webpack/webpack/issues/1545#issuecomment-255446425

View File

@ -1,121 +0,0 @@
alim0x translating
How to Check Bad Sectors or Bad Blocks on Hard Disk in Linux
===
Let us start by defining a bad sector/block: it's a section on a disk drive or flash memory that cannot be read from or written to anymore, as a result of permanent [physical damage on the disk][7] surface or failed flash memory transistors.
As bad sectors continue to accumulate, they can undesirably or destructively affect your disk drive or flash memory capacity, or even lead to a possible hardware failure.
It is also important to note that the presence of bad blocks should alert you to start thinking of getting a new disk drive, or simply to mark the bad blocks as unusable.
Therefore, in this article, we will go through the necessary steps that can enable you to determine the presence or absence of bad sectors on your Linux disk drive or flash memory using certain [disk scanning utilities][6].
That said, below are the methods:
### Check Bad Sectors in Linux Disks Using badblocks Tool
The badblocks program enables users to scan a device for bad sectors or blocks. The device can be a hard disk or an external disk drive, represented by a file such as /dev/sdc.
First, use the [fdisk command][5] with superuser privileges to display information about all your disk drives or flash memory plus their partitions:
```
$ sudo fdisk -l
```
[![List Linux Filesystem Partitions](http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Filesystem-Partitions.png)][4]
List Linux Filesystem Partitions
Then scan your Linux disk drive to check for bad sectors/blocks by typing:
```
$ sudo badblocks -v /dev/sda10 > badsectors.txt
```
[![Scan Hard Disk Bad Sectors in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Scan-Hard-Disk-Bad-Sectors-in-Linux.png)][3]
Scan Hard Disk Bad Sectors in Linux
In the command above, badblocks is scanning the device /dev/sda10 (remember to specify your actual device), with the `-v` option enabling it to display details of the operation. In addition, the results of the operation are stored in the file badsectors.txt by means of output redirection.
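By default badblocks performs a read-only scan. If you want a more thorough (but slower) test, the `-n` option runs a non-destructive read-write check and `-s` shows progress; make sure the partition is unmounted first. A hedged example:
```
$ sudo umount /dev/sda10
$ sudo badblocks -nsv /dev/sda10 > badsectors.txt
```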
In case you discover any bad sectors on your disk drive, unmount the disk and instruct the operating system not to write to the reported sectors as follows.
You will need to employ e2fsck (for ext2/ext3/ext4 file systems) or the fsck command with the badsectors.txt file and the device file, as in the command below.
The `-l` option tells the command to add the block numbers listed in the file specified by filename (badsectors.txt) to the list of bad blocks.
```
------------ Specifically for ext2/ext3/ext4 file-systems ------------
$ sudo e2fsck -l badsectors.txt /dev/sda10
OR
------------ For other file-systems ------------
$ sudo fsck -l badsectors.txt /dev/sda10
```
### Scan Bad Sectors on Linux Disk Using Smartmontools
This method is more reliable and efficient for modern disks (ATA/SATA and SCSI/SAS hard drives and solid-state drives) which ship with a S.M.A.R.T (Self-Monitoring, Analysis and Reporting Technology) system that helps detect, report and possibly log their health status, so that you can figure out any impending hardware failures.
You can install smartmontools by running the command below:
```
------------ On Debian/Ubuntu based systems ------------
$ sudo apt-get install smartmontools
------------ On RHEL/CentOS based systems ------------
$ sudo yum install smartmontools
```
Once the installation is complete, use smartctl which controls the S.M.A.R.T system integrated into a disk. You can look through its man page or help page as follows:
```
$ man smartctl
$ smartctl -h
```
Now execute the smartctl command and name your specific device as an argument, as in the following command; the flag `-H` or `--health` is included to display the SMART overall health self-assessment test result.
```
$ sudo smartctl -H /dev/sda10
```
[![Check Linux Hard Disk Health](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Linux-Hard-Disk-Health.png)][2]
Check Linux Hard Disk Health
The result above indicates that your hard disk is healthy and may not experience hardware failure anytime soon.
For an overview of disk information, use the `-a` or `--all` option to print out all SMART information concerning a disk and `-x` or `--xall` which displays all SMART and non-SMART information about a disk.
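You can also ask the drive to run its built-in self-tests and review the results later; for example:
```
$ sudo smartctl -t short /dev/sda10     # start the drive's short self-test in the background
$ sudo smartctl -l selftest /dev/sda10  # later, print the self-test log and results
```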
In this tutorial, we covered a very important topic concerning [disk drive health diagnostics][1], you can reach us via the feedback section below to share your thoughts or ask any questions and remember to always stay connected to Tecmint.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/check-linux-hard-disk-bad-sectors-bad-blocks/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/defragment-linux-system-partitions-and-directories/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Linux-Hard-Disk-Health.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Scan-Hard-Disk-Bad-Sectors-in-Linux.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Filesystem-Partitions.png
[5]:http://www.tecmint.com/fdisk-commands-to-manage-linux-disk-partitions/
[6]:http://www.tecmint.com/ncdu-a-ncurses-based-disk-usage-analyzer-and-tracker/
[7]:http://www.tecmint.com/defragment-linux-system-partitions-and-directories/

View File

@ -1,3 +1,5 @@
GitFuture get translating
DTrace for Linux 2016 DTrace for Linux 2016
=========== ===========

View File

@ -0,0 +1,112 @@
How to Sort Output of ls Command By Last Modified Date and Time
============================================================
One of the most common things a Linux user will always do on the command line is [listing the contents of a directory][1]. As we may already know, [ls][2] and [dir][3] are the two commands available on Linux for listing directory contents, with the former being more popular and, in most cases, preferred by users.
When listing directory contents, the results can be sorted based on several criteria such as alphabetical order of filenames, modification time, access time, version and file size. Sorting using each of these file properties can be enabled by using a specific flag.
In this brief [ls command guide][4], we will look at how to [sort the output of ls command][5] by last modification time (date and time).
Let us start by executing some [basic ls commands][6].
### Linux Basic ls Commands
1. Running the ls command without any arguments will list the contents of the current working directory.
```
$ ls
```
[
![List Content of Working Directory](http://www.tecmint.com/wp-content/uploads/2016/10/List-Content-of-Working-Directory.png)
][7]
List Content of Working Directory
2. To list the contents of any directory, for example the /etc directory, use:
```
$ ls /etc
```
[
![List Contents of Directory](http://www.tecmint.com/wp-content/uploads/2016/10/List-Contents-of-Directory.png)
][8]
List Contents of Directory
3. A directory always contains a few hidden files (at least two), therefore, to show all files in a directory, use the `-a` or `--all` flag:
```
$ ls -a
```
[
![List Hidden Files in Directory](http://www.tecmint.com/wp-content/uploads/2016/10/List-Hidden-Files-in-Directory.png)
][9]
List Hidden Files in Directory
4. You can as well print detailed information about each file in the ls output, such as the file permissions, number of links, owner's name and group owner, file size, time of last modification and the file/directory name.
This is activated by the `-l` option, which means a long listing format as in the next screenshot:
```
$ ls -l
```
[
![Long List Directory Contents](http://www.tecmint.com/wp-content/uploads/2016/10/ls-Long-List-Format.png)
][10]
Long List Directory Contents
### Sort Files Based on Time and Date
5. To list files in a directory and [sort them by last modified date and time][11], make use of the `-t` option as in the command below:
```
$ ls -lt
```
[
![Sort ls Output by Date and Time](http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-by-Date-and-Time.png)
][12]
Sort ls Output by Date and Time
6. If you want to reverse-sort the files based on date and time, you can use the `-r` option like so:
```
$ ls -ltr
```
[
![Sort ls Output Reverse by Date and Time](http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-Reverse-by-Date-and-Time.png)
][13]
Sort ls Output Reverse by Date and Time
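If you only want to see the handful of most recently modified entries, you can pipe the sorted listing through head, for example:
```
# 6 lines = the "total" summary line plus the 5 newest entries
$ ls -lt | head -n 6
```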
We will end here for now; however, there is more usage information and options for the [ls command][14], so make it a point to look through it or any other guides offering [ls command tricks every Linux user should know][15] or [how to use the sort command][16]. Last but not least, you can reach us via the feedback section below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/sort-ls-output-by-last-modified-date-and-time
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/file-and-directory-management-in-linux/
[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
[3]:http://www.tecmint.com/linux-dir-command-usage-with-examples/
[4]:http://www.tecmint.com/tag/linux-ls-command/
[5]:http://www.tecmint.com/sort-command-linux/
[6]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Content-of-Working-Directory.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Contents-of-Directory.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Hidden-Files-in-Directory.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/10/ls-Long-List-Format.png
[11]:http://www.tecmint.com/find-and-sort-files-modification-date-and-time-in-linux/
[12]:http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-by-Date-and-Time.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-Reverse-by-Date-and-Time.png
[14]:http://www.tecmint.com/tag/linux-ls-command/
[15]:http://www.tecmint.com/linux-ls-command-tricks/
[16]:http://www.tecmint.com/linux-sort-command-examples/

View File

@ -0,0 +1,115 @@
3 Ways to Extract and Copy Files from ISO Image in Linux
============================================================
Let's say you have a large ISO file on your Linux server and you want to access, extract or copy one single file from it. How do you do it? Well, in Linux there are a couple of ways to do it.
For example, you can use the standard mount command to mount an ISO image in read-only mode using the loop device and then copy the files to another directory.
### Mount or Extract ISO File in Linux
To do so, you must have an ISO file (I used the ubuntu-16.10-server-amd64.iso image) and a mount point directory to mount or extract the ISO files.
First create a mount point directory where you are going to mount the image, as shown:
```
$ sudo mkdir /mnt/iso
```
Once the directory has been created, you can easily mount the ubuntu-16.10-server-amd64.iso file and verify its contents by running the following commands.
```
$ sudo mount -o loop ubuntu-16.10-server-amd64.iso /mnt/iso
$ ls /mnt/iso/
```
[
![Mount ISO File in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Mount-ISO-File-in-Linux.png)
][1]
Mount ISO File in Linux
Now you can go inside the mounted directory (/mnt/iso) and access the files, or copy the files to the `/tmp` directory using the [cp command][2].
```
$ cd /mnt/iso
$ sudo cp md5sum.txt /tmp/
$ sudo cp -r ubuntu /tmp/
```
[
![Copy Files From ISO File in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Copy-Files-From-ISO-File-in-Linux.png)
][3]
Copy Files From ISO File in Linux
Note: The `-r` option is used to copy directories recursively; if you want, you can also [monitor the progress of the copy command][4].
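When you are done copying, leave the mount point and unmount the image, for example:
```
$ cd ~
$ sudo umount /mnt/iso
```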
### Extract ISO Content Using 7zip Command
If you don't want to mount the ISO file, you can simply install 7zip, an open source archive program used to pack or unpack a number of different formats including TAR, XZ, GZIP, ZIP, BZIP2, etc.
```
$ sudo apt-get install p7zip-full p7zip-rar [On Debian/Ubuntu systems]
$ sudo yum install p7zip p7zip-plugins [On CentOS/RHEL systems]
```
Once the 7zip program has been installed, you can use the 7z command to extract the ISO file's contents.
```
$ 7z x ubuntu-16.10-server-amd64.iso
```
[
![7zip - Extract ISO File Content in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Extract-ISO-Content-in-Linux.png)
][5]
7zip Extract ISO File Content in Linux
Note: Compared to the Linux mount command, 7zip seems much faster and is smart enough to pack or unpack many archive formats.
### Extract ISO Content Using isoinfo Command
The isoinfo command is used for directory listings of iso9660 images, but you can also use this program to extract files.
As I said, the isoinfo program performs directory listings, so first list the contents of the ISO file.
```
$ isoinfo -i ubuntu-16.10-server-amd64.iso -l
```
[
![List ISO Content in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/List-ISO-Content-in-Linux.png)
][6]
List ISO Content in Linux
Now you can extract a single file from an ISO image like so:
```
$ isoinfo -i ubuntu-16.10-server-amd64.iso -x MD5SUM.TXT > MD5SUM.TXT
```
Note: The redirection is needed as `-x` option extracts to stdout.
[
![Extract Single File from ISO in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Extract-Single-File-from-ISO-in-Linux.png)
][7]
Extract Single File from ISO in Linux
Well, there are many ways to do it; if you know of any other useful command or program to extract or copy files from an ISO file, do share it with us via the comment section.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/extract-files-from-iso-files-linux
作者:[Ravi Saive][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/admin/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Mount-ISO-File-in-Linux.png
[2]:http://www.tecmint.com/advanced-copy-command-shows-progress-bar-while-copying-files/
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Copy-Files-From-ISO-File-in-Linux.png
[4]:http://www.tecmint.com/monitor-copy-backup-tar-progress-in-linux-using-pv-command/
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Extract-ISO-Content-in-Linux.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/List-ISO-Content-in-Linux.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Extract-Single-File-from-ISO-in-Linux.png

View File

@ -1,3 +1,5 @@
translating by chenzhijun
HOW TO CREATE AN EBOOK WITH CALIBRE IN LINUX [COMPLETE GUIDE] HOW TO CREATE AN EBOOK WITH CALIBRE IN LINUX [COMPLETE GUIDE]
==== ====

View File

@ -0,0 +1,100 @@
4 Useful Way to Know Plugged USB Device Name in Linux
============================================================
As a newbie, one of the many [things you should master in Linux][1] is identification of devices attached to your system. It may be your computer's hard disk, an external hard drive or removable media such as a USB drive or SD memory card.
Using USB drives for file transfer is so common today, and for those (new Linux users) who prefer to use the command line, learning the different ways to identify a USB device name is very important when you need to format it.
Once you attach a device to your system, such as a USB drive, especially on a desktop, it is automatically mounted to a given directory, normally under /media/username/device-label, and you can then access the files in it from that directory. However, this is not the case with a server, where you have to [manually mount a device][2] and specify its mount point.
Linux identifies devices using special device files stored in the `/dev` directory. Some of the files you will find in this directory include `/dev/sda` or `/dev/hda`, which represents your first master drive; each partition will be represented by a number, such as `/dev/sda1` or `/dev/hda1` for the first partition, and so on.
```
$ ls /dev/sda*
```
[
![List All Linux Device Names](http://www.tecmint.com/wp-content/uploads/2016/10/List-All-Linux-Device-Names.png)
][3]
List All Linux Device Names
Now lets find out device names using some different command-line tools as shown:
### Find Out Plugged USB Device Name Using df Command
To view each device attached to your system as well as its mount point, you can use the [df command][4] (which checks Linux disk space utilization) as shown in the image below:
```
$ df -h
```
[
![Find USB Device Name Using df Command](http://www.tecmint.com/wp-content/uploads/2016/10/Find-USB-Device-Name.png)
][5]
Find USB Device Name Using df Command
### Use lsblk Command to Find USB Device Name
You can also use the [lsblk command (list block devices)][6] which lists all block devices attached to your system like so:
```
$ lsblk
```
[
![List Linux Block Devices](http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Block-Devices.png)
][7]
List Linux Block Devices
### Identify USB Device Name with fdisk Utility
[fdisk is a powerful utility][8] which prints out the partition table of all your block devices, USB drives included; you can run it with root privileges as follows:
```
$ sudo fdisk -l
```
[
![List Partition Table of Block Devices](http://www.tecmint.com/wp-content/uploads/2016/10/List-Partition-Table.png)
][9]
List Partition Table of Block Devices
### Determine USB Device Name with dmesg Command
dmesg is an important command that prints or controls the kernel ring buffer, a data structure which [stores information about the kernel's operations][10].
Run the command below to view kernel operation messages, which will also print information about your USB device:
```
$ dmesg
```
[
![dmesg - Prints USB Device Name](http://www.tecmint.com/wp-content/uploads/2016/10/dmesg-shows-kernel-information.png)
][11]
dmesg Prints USB Device Name
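Since dmesg output can be very long, it helps to narrow it down; for example, right after plugging the drive in you could run:
```
$ dmesg | tail -n 20          # show only the most recent kernel messages
$ dmesg | grep -i "sd[a-z]"   # or filter for disk device names such as sdb, sdc
```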
That is all for now. In this article, we have covered different approaches for finding out a USB device name from the command line. You can also share with us any other methods for the same purpose, or perhaps offer us your thoughts about the article via the response section below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/find-usb-device-name-in-linux
作者:[Aaron Kili ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/tag/linux-tricks/
[2]:http://www.tecmint.com/mount-filesystem-in-linux/
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/List-All-Linux-Device-Names.png
[4]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-USB-Device-Name.png
[6]:http://www.tecmint.com/commands-to-collect-system-and-hardware-information-in-linux/
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Block-Devices.png
[8]:http://www.tecmint.com/fdisk-commands-to-manage-linux-disk-partitions/
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Partition-Table.png
[10]:http://www.tecmint.com/dmesg-commands/
[11]:http://www.tecmint.com/wp-content/uploads/2016/10/dmesg-shows-kernel-information.png

View File

@ -1,146 +0,0 @@
# How to Convert Files to UTF-8 Encoding in Linux
In this guide, we will describe what character encoding is and cover a few examples of converting files from one character encoding to another using a command line tool. Then finally, we will look at how to convert several files from any character set (charset) to UTF-8 encoding in Linux.
As you may probably have in mind already, a computer does not understand or store letters, numbers or anything else that we as humans can perceive, except bits. A bit has only two possible values: either a `0` or `1`, `true` or `false`, `yes` or `no`. Everything else, such as letters, numbers and images, must be represented in bits for a computer to process.
In simple terms, character encoding is a way of informing a computer how to interpret raw zeroes and ones into actual characters, where a character is represented by a set of numbers. When we type text in a file, the words and sentences we form are cooked up from different characters, and characters are organized into a charset.
There are various encoding schemes out there such as ASCII, ANSI, Unicode among others. Below is an example of ASCII encoding.
```
Character bits
A 01000001
B 01000010
```
In Linux, the iconv command line tool is used to convert text from one form of encoding to another.
You can check the encoding of a file using the file command with the `-i` or `--mime` flag, which enables printing of the mime type string, as in the examples below:
```
$ file -i Car.java
$ file -i CarDriver.java
```
[
![Check File Encoding in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Check-File-Encoding-in-Linux.png)
][3]
Check File Encoding in Linux
The syntax for using iconv is as follows:
```
$ iconv option
$ iconv options -f from-encoding -t to-encoding inputfile(s) -o outputfile
```
Where `-f` or `--from-code` specifies the input encoding and `-t` or `--to-code` specifies the output encoding.
To list all known coded character sets, run the command below:
```
$ iconv -l
```
[
![List Coded Charsets in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/List-Coded-Charsets-in-Linux.png)
][2]
List Coded Charsets in Linux
### Convert Files from UTF-8 to ASCII Encoding
Next, we will learn how to convert from one encoding scheme to another. The command below converts from ISO-8859-1 to UTF-8 encoding.
Consider a file named `input.file` which contains the characters:
```
<EFBFBD> <20> <20> <20>
```
Let us start by checking the encoding of the characters in the file and then view the file contents. We can then convert all the characters to the target encoding.
After running the iconv command, we then check the contents of the output file and the new encoding of the characters as below.
```
$ file -i input.file
$ cat input.file
$ iconv -f ISO-8859-1 -t UTF-8//TRANSLIT input.file -o out.file
$ cat out.file
$ file -i out.file
```
[
![Convert UTF-8 to ASCII in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Converts-UTF8-to-ASCII-in-Linux.png)
][1]
Convert UTF-8 to ASCII in Linux
Note: In case the string `//IGNORE` is added to to-encoding, characters that can't be converted are discarded, and an error is displayed after conversion.
Again, if the string `//TRANSLIT` is added to to-encoding as in the example above (ASCII//TRANSLIT), characters being converted are transliterated as needed and where possible. This implies that if a character can't be represented in the target character set, it can be approximated through one or more similar looking characters.
Consequently, any character that can't be transliterated and is not in the target character set is replaced with a question mark `(?)` in the output.
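A minimal sketch of both suffixes, assuming an `input.file` encoded in UTF-8 that contains characters outside ASCII (file names here are only placeholders):

```
$ iconv -f UTF-8 -t ASCII//TRANSLIT input.file -o out-translit.file   # approximate unconvertible characters
$ iconv -f UTF-8 -t ASCII//IGNORE input.file -o out-ignore.file       # drop unconvertible characters
```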
### Convert Multiple Files to UTF-8 Encoding
Coming back to our main topic, to convert multiple or all files in a directory to UTF-8 encoding, you can write a small shell script called encoding.sh as follows:
```
#!/bin/bash
#enter input encoding here
FROM_ENCODING="value_here"
#output encoding(UTF-8)
TO_ENCODING="UTF-8"
#convert
CONVERT=" iconv -f $FROM_ENCODING -t $TO_ENCODING"
#loop to convert multiple files
for file in *.txt; do
$CONVERT "$file" -o "${file%.txt}.utf8.converted"
done
exit 0
```
Save the file, then make the script executable. Run it from the directory where your files (`*.txt`) are located.
```
$ chmod +x encoding.sh
$ ./encoding.sh
```
Important: You can as well use this script for general conversion of multiple files from one given encoding to another; simply play around with the values of the `FROM_ENCODING` and `TO_ENCODING` variables, not forgetting the output file name `"${file%.txt}.utf8.converted"`.
For more information, look through the iconv man page.
```
$ man iconv
```
To sum up this guide, understanding encoding and how to convert from one character encoding scheme to another is necessary knowledge for every computer user, and more so for programmers when it comes to dealing with text.
Lastly, you can get in touch with us by using the comment section below for any questions or feedback.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/convert-files-to-utf-8-encoding-in-linux/#
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Converts-UTF8-to-ASCII-in-Linux.png
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Coded-Charsets-in-Linux.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-File-Encoding-in-Linux.png

View File

@ -0,0 +1,117 @@
How to Compress and Decompress a .bz2 File in Linux
============================================================
To compress a file is to significantly decrease its size by encoding its data using fewer bits, and it is normally a useful practice [during backup and transfer of files][1] over a network. On the other hand, decompressing a file means restoring its data to the original state.
There are several [file compression and decompression tools][2] available in Linux, such as gzip, 7-zip, Lrzip, [PeaZip][3] and many more.
In this tutorial, we will look at how to compress and decompress `.bz2` files using the bzip2 tool in Linux.
Bzip2 is a well-known compression tool and it's available on most, if not all, of the major Linux distributions; you can use the appropriate command for your distribution to install it.
```
$ sudo apt install bzip2 [On Debian/Ubuntu]
$ sudo yum install bzip2 [On CentOS/RHEL]
$ sudo dnf install bzip2 [On Fedora 22+]
```
The conventional syntax of using bzip2 is:
```
$ bzip2 option(s) filenames
```
### How to Use “bzip2” to Compress Files in Linux
You can compress a file as below, where the flag `-z` enables file compression:
```
$ bzip2 filename
OR
$ bzip2 -z filename
```
To compress a `.tar` file, use the command format:
```
$ bzip2 -z backup.tar
```
Important: By default, bzip2 deletes the input files during compression or decompression; to keep the input files, use the `-k` or `--keep` option.
In addition, the `-f` or `--force` flag will force bzip2 to overwrite an existing output file.
```
------ To keep input file ------
$ bzip2 -zk filename
$ bzip2 -zk backup.tar
```
You can as well set the block size, from 100k up to 900k, using `-1` or `--fast` through `-9` or `--best`, as shown in the examples below:
```
$ bzip2 -k1 Etcher-linux-x64.AppImage
$ ls -lh Etcher-linux-x64.AppImage.bz2
$ bzip2 -k9 Etcher-linux-x64.AppImage
$ bzip2 -kf9 Etcher-linux-x64.AppImage
$ ls -lh Etcher-linux-x64.AppImage.bz2
```
The screenshot below shows how to use options to keep the input file, force bzip2 to overwrite an output file and set the block size during compression.
[
![Compress Files Using bzip2 in Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Compress-Files-Using-bzip2-in-Linux.png)
][4]
Compress Files Using bzip2 in Linux
### How to Use “bzip2” to Decompress Files in Linux
To decompress a `.bz2` file, make use of the `-d` or `--decompress` option like so:
```
$ bzip2 -d filename.bz2
```
Note: The file must end with a `.bz2` extension for the command above to work.
```
$ bzip2 -vd Etcher-linux-x64.AppImage.bz2
$ bzip2 -vfd Etcher-linux-x64.AppImage.bz2
$ ls -l Etcher-linux-x64.AppImage
```
[
![Decompress bzip2 File in Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Decompression-bzip2-File-in-Linux.png)
][5]
Decompress bzip2 File in Linux
To view the bzip2 help page and man page, type the command below:
```
$ bzip2 -h
$ man bzip2
```
Lastly, with the simple examples above, I believe you are now capable of compressing and decompressing `.bz2` files using the bzip2 tool in Linux. However, for any questions or feedback, reach us using the comment section below.
Importantly, you may want to go over a few important [Tar command examples][6] in Linux so as to learn using the tar utility to [create compressed archive files][7].
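As a quick, hedged pointer in that direction, tar can call bzip2 for you via the `-j` flag; the archive and directory names below are only placeholders:

```
$ tar -cjf backup.tar.bz2 /path/to/directory    # create a bzip2-compressed archive
$ tar -xjf backup.tar.bz2                       # extract it again
```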
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-compress-decompress-bz2-files-using-bzip2
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
[2]:http://www.tecmint.com/command-line-archive-tools-for-linux/
[3]:http://www.tecmint.com/peazip-linux-file-manager-and-file-archive-tool/
[4]:http://www.tecmint.com/wp-content/uploads/2016/11/Compress-Files-Using-bzip2-in-Linux.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/Decompression-bzip2-File-in-Linux.png
[6]:http://www.tecmint.com/18-tar-command-examples-in-linux/
[7]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/

View File

@ -1,283 +0,0 @@
# 4 Easy Ways To Generate A Strong Password In Linux
![Generate a strong password in Linux](https://www.ostechnix.com/wp-content/uploads/2016/11/password-720x340.jpg)
Image Courtesy: Google.
Yesterday, we covered how to [force users to use a strong password in DEB based systems][8] such as Debian, Ubuntu, Linux Mint, Elementary OS etc. You might wonder what a strong password looks like, and how you could create one. No worries! Here are 4 easy ways to generate a strong password in Linux. Of course, there are many free tools and ways to accomplish this task, however I consider these methods simple and straightforward. Let us get started.
### 1\. Generate a strong password in Linux using OpenSSL
OpenSSL is available for all Unix-like distributions, Solaris, Mac OS X, and Windows.
To generate a random password with OpenSSL, fire up your Terminal and run the following command:
```
openssl rand -base64 14
```
Here, the -base64 option makes sure the generated password contains only printable characters that can be typed on a keyboard.
Sample output:
```
wXCHXlxuhrFrFMQLqik=
```
[
![sksk_003](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_003.png)
][6]
The above command will generate a random and strong password from 14 random bytes (about 20 base64 characters). Remember, it is always recommended to use passwords of at least 14 characters. Of course, you can generate any length you like using openssl.
For more details, refer to the man pages.
```
man openssl
```
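If you prefer a password without `=`, `+` or `/` characters, openssl can also emit hexadecimal output; a small sketch:

```
$ openssl rand -hex 14    # 14 random bytes printed as 28 hexadecimal characters
```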
### 2\. Generate a strong password in Linux using Pwgen
pwgen is a simple, yet useful, command line utility to generate random and strong passwords in seconds. It generates secure passwords that can be easily memorized by humans. It is available in most Unix-like operating systems.
To install pwgen in DEB based systems, run:
```
sudo apt-get install pwgen
```
In RPM based systems:
```
sudo yum install pwgen
```
In Arch based systems:
```
sudo pacman -S pwgen
```
Once pwgen is installed, generate a random and strong password of 14 characters using the command:
```
pwgen 14 1
```
Sample output:
```
Choo4aicozai3a
```
[
![sksk_004](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_004.png)
][5]
The above command will create only one password of 14 characters. To create 2 different passwords of 14 characters each, run:
```
pwgen 14 2
```
Sample output:
```
xee7seerez6Kau Aeshu0geveeji8
```
To create 100 different passwords (not that you would usually need that many) of 14 characters, run:
```
pwgen 14
```
Sample output:
```
kaeNg3EiVei4ei Oo0iehiJaix5Ae aenuv2eree2Quo iaT7zahH1eN2Aj Bie2owaiFahsie
gaan9zu5Xeh5ah ahGeeth8ea5ooh Ir0ueda5poogh5 uo0ohqu2ufaiX2 Mei0pee6Og3zae
Oofeiceer8Aipu sheew3aeReidir Dee4Heib2eim2o eig6jar8giPhae Zahde9nae1Niew
quatol5Oi3Bah2 quue4eebaiNgaa oGoahieSh5oL4m aequeeQue2piti laige5seePhugo
iiGo9Uthee4ros WievaiQu2xech6 shaeve0maaK3ae ool8Pai2eighis EPheiRiet1ohci
ZieX9outhoht8N Uh1UoPhah2Thee reaGhohZae5idi oiG4ooshiyi5in keePh1ohshei8y
aim5Eevah2thah Xaej8tha5eisho IeGie1Anaalaev gaoY3ohthooh3x chaebeesahTh8e
soh7oosieY5eiD ahmoh6Ihii6que Shoowoo5dahbah ieW0aiChubee7I Caet6aikai6aex
coo1du2Re9aika Ohnei5Egoh7leV aiyie6Ahdeipho EiV0aeToeth1da iNgaesu4eeyu0S
Eeb1suoV3naera railai2Vaina8u xu3OhVee1reeyu Og0eavae3oohoh audahneihaeK8a
foo6iechi5Eira oXeixoh6EwuboD we1eiDahNgoh9s ko1Eeju1iedu1z aeP7achiisohr7
phang5caeGei5j ait4Shuo5Aitai no4eis9Tohd8oh Quiet6oTaaQuei Dei2pu2NaefeCa
Shiim9quiuy0ku yiewooph3thieL thu8Aphai1ieDa Phahnahch1Aam1 oocex7Yaith8oo
eraiGaech5ahNg neixa3malif5Ya Eux7chah8ahXix eex1lahXae4Mei uGhahzonu6airu
yah8uWahn3jeiW Yi4ye4Choongie io1Vo3aiQuahpi rie4Rucheet6ae Dohbieyaeleis5
xi1Zaushohbei7 jeeb9EiSiech0u eewo0Oow7ielie aiquooZamah5th kouj7Jaivohx9o
biyeeshesaDi9e she9ooj3zuw6Ah Eit7dei1Yei5la xohN0aeSheipaa Eeg9Phob6neema
eengoneo4saeL4 aeghi4feephu6W eiWash2Vie1mee chieceish5ioPe ool4Hongo7ef1o
jahBe1pui9thou eeV2choohoa4ee Ohmae0eef4ic8I Eet0deiyohdiew Ke9ue5thohzei3
aiyoxeiva8Maih gieRahgh8anahM ve2ath9Eyi5iet quohg6ok3Ahgee theingaech5Nef
```
[
![sksk_005](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_005.png)
][4]
To include at least 1 number in the password, run:
```
pwgen -n 14 1
```
Sample output:
```
xoiFush3ceiPhe
```
There are also some useful options available to use with the pwgen command.
```
-c or --capitalize (Include at least one capital letter in the password)
-A or --no-capitalize (Don't include capital letters in the password)
-n or --numerals (Include at least one number in the password)
-0 or --no-numerals (Don't include numbers in the password)
-y or --symbols (Include at least one special symbol in the password)
-s or --secure (Generate completely random passwords)
-B or --ambiguous (Don't include ambiguous characters in the password)
-h or --help (Print a help message)
-H or --sha1=path/to/file[#seed] (Use sha1 hash of given file as a (not so) random generator)
-C (Print the generated passwords in columns)
-1 (Don't print the generated passwords in columns)
-v or --no-vowels (Do not use any vowels so as to avoid accidental nasty words)
```
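For instance, a hedged example combining a few of these options (the 14-character length and single-password count are just illustrative choices):

```
$ pwgen -cnys 14 1    # one fully random 14-character password with capitals, numerals and symbols
```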
For more details, check the man pages.
```
man pwgen
```
### 3\. Generate a strong password in Linux using GPG
GPG (GnuPG or GNU Privacy Guard) is a free command-line program and a replacement for Symantec's PGP cryptographic software. It is available for Unix-like operating systems, Microsoft Windows and Android.
To generate a random and strong password from 14 random bytes using GPG, run the following command from the Terminal:
```
gpg --gen-random --armor 1 14
```
Sample output:
```
DkmsrUy3klzzbIbavx8=
```
[
![sksk_006](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_006.png)
][3]
The above command will generate a secure, random, strong and base64 encoded password.
### 4\. Generate a strong password in Linux using Perl
Perl is available in most Linux distributions' default repositories. Install it using your package manager.
For example, to install Perl on DEB based systems run:
```
sudo apt-get install perl
```
To install Perl on RPM based systems, run:
```
sudo yum install perl
```
On Arch based systems:
```
sudo pacman -S perl
```
Once Perl is installed, create a file:
```
vi password.pl
```
Add the following contents in it.
```
#!/usr/bin/perl
# Character set to pick from: lowercase letters, uppercase letters and digits
my @alphanumeric = ('a'..'z', 'A'..'Z', 0..9);
# Join 9 randomly chosen characters (indices 0..8) into one password
my $randpassword = join '', map $alphanumeric[rand @alphanumeric], 0..8;
print "$randpassword\n"
```
[
![sksk_001](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_001.png)
][2]
Save and close the file.
Now, go to the location where you saved the file, and run the following command:
```
perl password.pl
```
Replace password.pl with your own filename.
Sample output:
```
3V4CJJnYd
```
[
![sksk_002](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_002.png)
][1]
Note: I couldn't find the original author of this script. If anyone knows the author's name, please let me know in the comment section below, and I will add it to this guide.
Please note that you should memorize the passwords you have generated, or keep them in a safe place. I recommend you memorize the password and then delete it from your system, which is much better in case your system is ever compromised by hackers.
That's all for today, folks. I will be here with another interesting article soon. Until then, stay tuned with OSTechNix.
Happy Weekend!
Cheers!!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/4-easy-ways-to-generate-a-strong-password-in-linux/
作者:[ SK ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_002.png
[2]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_001.png
[3]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_006.png
[4]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_005.png
[5]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_004.png
[6]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_003.png
[7]:http://ostechnix.tradepub.com/free/w_ubun08/prgm.cgi?a=1
[8]:https://www.ostechnix.com/force-users-use-strong-passwords-debian-ubuntu/

View File

@ -1,3 +1,5 @@
Vic020
# How to design and add your own font on Linux with Glyphr
LibreOffice already offers plenty of fonts, and users can always download and add more. However, if you want to create your own custom font, you can do it easily by using Glyphr. Glyphr is a new open source vector font designer with an intuitive and easy to use graphical interface and a rich set of features that will take care of every aspect of the font design. Although the application is still in early development, it is already pretty good. Here's a quick guide showing how to design your own custom fonts on Glyphr, and how to add them on LibreOffice once you're done.

View File

@ -0,0 +1,109 @@
CLOUD FOCUSED LINUX DISTROS FOR PEOPLE WHO BREATHE ONLINE
============================================================
[
![Best Linux distributions for cloud computing](https://itsfoss.com/wp-content/uploads/2016/11/cloud-centric-Linux-distributions.jpg)
][6]
_Brief: We list some cloud-centric Linux distributions that might be termed real Linux alternatives to Chrome OS._
The world is moving to cloud-based services and we all know the kind of love that Chrome OS got. Well, it does deserve respect. It's super fast, light, power-efficient, minimalistic, beautifully designed, and utilizes the full potential of cloud technology available today.
Although [Chrome OS][7] is available only on Google's own hardware, there are other means to experience the potential of cloud computing right on your laptop or desktop.
As I have repeatedly said, there is always something for everybody in the Linux domain, be it [Linux distributions that look like Windows][8] or Mac OS. Linux is all about sharing, love and some really bleeding edge computing experience. Let's crack this list right away!
### 1\. CUB LINUX
![Cub Linux Desktop](https://itsfoss.com/wp-content/uploads/2016/10/cub1.jpg)
It is not Chrome OS. But the above image features the desktop of [Cub Linux][9]. Say what?
Cub Linux is no news for Linux users. But if you did not already know, Cub Linux is a web-focused Linux distro inspired by the mainstream Chrome OS. It is also the open source brother of Chrome OS from mother Linux.
Chrome OS has the Chrome Browser as its primary component. Not so long ago, a project named [Chromixium OS][10] was started to recreate the Chrome OS experience by using the Chromium Browser in place of the Chrome Browser. Due to some legal issues, the name was later changed to Cub Linux (Chromium+Ubuntu).
![cub2](https://itsfoss.com/wp-content/uploads/2016/10/cub2.jpg)
Well, history apart, as the name hints, Cub Linux is based on Ubuntu and features the lightweight Openbox Desktop Environment. The desktop is customized to give a Chrome OS impression and looks really neat.
In the apps department, you can install web applications from the Chrome web store as well as all the Ubuntu software. Yup, along with all the snappy apps of Chrome OS, you'll still get the Ubuntu goodies.
As far as the performance is concerned, the operating system is super fast thanks to its Openbox Desktop Environment. Based on Ubuntu Linux, the stability of Cub Linux is unquestionable. The desktop itself is a treat to the eyes with all its smooth animations and beautiful UI.
I suggest Cub Linux to anybody who spends most of their time in a browser and does some light tasks now and then. Well, a browser is all you need, and a browser is exactly what you'll get.
### 2\. PEPPERMINT OS
A good number of people look towards Linux because they want a no-BS computing experience. Some people do not really like the hassle of an anti-virus, a defragmenter, a cleaner and so on, as they want an operating system and not a baby. And I must say Peppermint OS is really good at being no-BS. [Peppermint OS][12] developers have put a good amount of effort into understanding the users' requirements and needs.
![pep1](https://itsfoss.com/wp-content/uploads/2016/11/pep1.jpg)
There is a very small number of selected software included by default. The traditional ideology of including a couple of apps from every software category is ditched here for good. The power to customize the computer as per your needs has been given to the user. By the way, do we really need to install so many applications when we can get web alternatives for almost all of them?
Ice
Ice is a useful little tool that converts your favorite and most used websites into desktop applications that you can directly launch from your desktop or the menu. It's what we call a site-specific browser.
![pep4](https://itsfoss.com/wp-content/uploads/2016/11/pep4.jpg)
Love Facebook? Why not make a Facebook web app on your desktop for quick launch? While there are people complaining about the lack of a decent Google Drive application for Linux, Ice allows you to access Drive with just a click. Not just Drive, the functionality of Ice is limited only by your imagination.
Peppermint OS 7 is based on Ubuntu 16.04. It not only provides smooth, rock solid performance but also a very swift response. A heavily customized LXDE will be your home screen. And the customization I'm speaking about is driven to achieve both snappy performance and visual appeal.
Peppermint OS hits more of a middle ground among the cloud-native operating system types. Although the skeleton of the OS is designed to support the speedy cloud apps, the native Ubuntu applications play well too. If you are someone like me who wants an OS that is balanced in online-offline capabilities, [Peppermint OS is for you][13].
### 3\. APRICITY OS
[Apricity OS][15] stole the show for being one of the most aesthetically pleasing Linux distros out there. It's just gorgeous. It's like the Mona Lisa of the Linux domain. But there's more to it than just the looks.
![ap2](https://itsfoss.com/wp-content/uploads/2016/11/ap2.jpg)
The prime reason [Apricity OS][16] makes this list is its simplicity. While OS desktop design is getting chaotic and congested with elements (and I'm not talking only about non-Linux operating systems), Apricity de-clutters everything and simplifies the very basic human-desktop interaction. The GNOME desktop environment is customized beautifully here. They made it really simple.
The pre-installed software list is really small. Almost all Linux distros ship the same pre-installed software, but Apricity OS has a completely new set: Chrome instead of Firefox. I was really waiting for this. I mean, why not give us what's rocking out there?
Apricity OS also features the Ice tool that we discussed in the last segment, but instead of Firefox, the Chrome browser is used in website-desktop integration. Apricity OS has Numix Circle icons by default, and every time you add a popular web app, a beautiful icon is placed on your Dock.
![](https://itsfoss.com/wp-content/uploads/2016/11/ap1.jpg)
See what I mean?
Apricity OS is based on Arch Linux. (So anybody looking for a quick start with Arch, and a beautiful one at that, can download the Apricity ISO [here][17].) Apricity fully upholds the Arch principle of freedom of choice. Within just 10 minutes with Ice, you'll have all your favorite web apps set up.
Gorgeous backgrounds, a minimalistic desktop and great functionality make Apricity OS a really great choice for setting up an amazing cloud-centric system. It'll take 5 minutes for Apricity OS to make you fall in love with it. I mean it.
There you have it, people: cloud-centric Linux distros for online dwellers. Do give us your take on the web app vs. native app topic. Don't forget to share.
--------------------------------------------------------------------------------
via: https://itsfoss.com/cloud-focused-linux-distros/
作者:[Aquil Roshan ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/aquil/
[1]:https://itsfoss.com/author/aquil/
[2]:https://itsfoss.com/cloud-focused-linux-distros/#comments
[3]:https://twitter.com/share?original_referer=https%3A%2F%2Fitsfoss.com%2F&source=tweetbutton&text=Cloud+Focused+Linux+Distros+For+People+Who+Breathe+Online&url=https%3A%2F%2Fitsfoss.com%2Fcloud-focused-linux-distros%2F&via=%40itsfoss
[4]:https://www.linkedin.com/cws/share?url=https://itsfoss.com/cloud-focused-linux-distros/
[5]:http://pinterest.com/pin/create/button/?url=https://itsfoss.com/cloud-focused-linux-distros/&description=Cloud+Focused+Linux+Distros+For+People+Who+Breathe+Online&media=https://itsfoss.com/wp-content/uploads/2016/11/cloud-centric-Linux-distributions.jpg
[6]:https://itsfoss.com/wp-content/uploads/2016/11/cloud-centric-Linux-distributions.jpg
[7]:https://en.wikipedia.org/wiki/Chrome_OS
[8]:https://itsfoss.com/windows-like-linux-distributions/
[9]:https://cublinux.com/
[10]:https://itsfoss.com/chromixiumos-released/
[11]:https://itsfoss.com/year-2013-linux-2-linux-distributions-discontinued/
[12]:https://peppermintos.com/
[13]:https://peppermintos.com/
[14]:https://itsfoss.com/pennsylvania-high-school-ubuntu/
[15]:https://apricityos.com/
[16]:https://itsfoss.com/apricity-os/
[17]:https://apricityos.com/

View File

@ -0,0 +1,223 @@
Translating by rusking
# Kali Linux Fresh Installation Guide
Kali Linux is arguably one of the best out of the box [Linux distributions available for security testing][18]. While many of the tools in Kali can be installed in most Linux distributions, the Offensive Security team developing Kali has put countless hours into perfecting their ready to boot security distribution.
Kali Linux is a Debian based, security distribution. The distribution comes pre-loaded with hundreds of well known security tools and has gained quite a name for itself.
Kali even has an industry respected certification available called “Pentesting with Kali”. The certification is a rigorous 24 hour challenge in which applicants must successfully compromise a number of computers with another 24 hours to write up a professional penetration test report that is sent to and graded by the personnel at Offensive Security. Successfully passing this exam will allow the test taker to obtain the OSCP credential.
The focus of this guide and future articles is to help individuals become more familiar with Kali Linux and several of the tools available within the distribution.
Please be sure to use extreme caution with the tools included with Kali as many of them can accidentally be used in a manner that will break computer systems. The information contained within all of these Kali articles is intended for legal usages.
#### System Requirements
Kali has some minimum suggested specifications for hardware. Depending upon the intended use, more may be desired. This guide will be assuming that the reader will want to install Kali as the only operating system on the computer.
1. At least 10GB of disk space; strongly encouraged to have more
2. At least 512MB of RAM; more is encouraged, especially for graphical environments
3. USB or CD/DVD boot support
4. Kali Linux ISO available from [https://www.kali.org/downloads/][1]
#### Create a Bootable USB Using dd Command
This guide will be assuming that a USB drive is available to use as the installation media. Take note that the USB drive should be as close to 4/8GB as possible and ALL DATA WILL BE REMOVED!
The author has had issues with larger USB drives but some may still work. Regardless, following the next few steps WILL RESULT IN DATA LOSS ON THE USB DRIVE.
Please be sure to backup all data before proceeding. This bootable Kali Linux USB drive is going to be created from another Linux machine.
Step 1 is to obtain the Kali Linux ISO. This guide is going to use the latest version of Kali with the Enlightenment [Linux desktop environment][17].
To obtain this version, type the following into a terminal.
```
$ cd ~/Downloads
$ wget -c http://cdimage.kali.org/kali-2016.2/kali-linux-e17-2016.2-amd64.iso
```
The two commands above will download the Kali Linux ISO into the current user's Downloads folder.
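Before writing the image, it is good practice to verify the download; a hedged sketch (compare the result against the checksum published on the Kali downloads page):

```
$ sha256sum ~/Downloads/kali-linux-e17-2016.2-amd64.iso   # compare against the official value
```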
The next step is to write the ISO to a USB drive to boot the installer. To accomplish this we can use the dd tool within Linux. First, though, the disk name needs to be located with the lsblk command.
```
$ lsblk
```
[
![Find Out USB Device Name in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Find-USB-Device-Name-in-Linux.png)
][16]
Find Out USB Device Name in Linux
With the name of the USB drive determined as `/dev/sdc`, the Kali ISO can be written to the drive with the dd tool.
```
$ sudo dd if=~/Downloads/kali-linux-e17-2016.2-amd64.iso of=/dev/sdc
```
Important: The above command requires root privileges so utilize sudo or login as the root user to run the command. Also this command will REMOVE EVERYTHING on the USB drive. Be sure to backup needed data.
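As a side note, not part of the original guide: on systems with GNU coreutils 8.24 or newer, dd can report progress, and a larger block size usually speeds up the copy. A hedged variant of the same command:

```
$ sudo dd if=~/Downloads/kali-linux-e17-2016.2-amd64.iso of=/dev/sdc bs=4M status=progress && sync
```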
Once the ISO is copied over to the USB drive, proceed further to install Kali Linux.
### Installation of Kali Linux Distribution
1. First, plug the USB drive into the respective computer that Kali should be installed upon and proceed to boot to the USB drive. Upon successful booting to the USB drive, the user will be presented with the following screen and should proceed with the Install or Graphical Install options.
This guide will be using the Graphical Install method.
[
![Kali Linux Boot Menu](http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Boot-Menu.png)
][15]
Kali Linux Boot Menu
2. The next couple of screens will ask the user to select locale information such as language, country, and keyboard layout.
Once through the locale information, the installer will prompt for a hostname and domain for this install. Provide the appropriate information for the environment and continue installing.
[
![Set Hostname for Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Set-Hostname-for-Kali-Linux.png)
][14]
Set Hostname for Kali Linux
[
![Set Domain for Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Set-Domain-for-Kali-Linux.png)
][13]
Set Domain for Kali Linux
3. After setting up the hostname and domain name, the root user's password needs to be set. DO NOT FORGET THIS PASSWORD.
[
![Set Root User Password for Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Set-Root-User-Password-for-Kali-Linux.png)
][12]
Set Root User Password for Kali Linux
4. After the password is set, the installer will prompt for time zone data and then pause at the disk partitioning.
If Kali will be the only operating system on the machine, the easiest option is to use 'Guided Use Entire Disk' and then select the storage device on which you wish to install Kali.
[
![Select Kali Linux Installation Type](http://www.tecmint.com/wp-content/uploads/2016/10/Select-Kali-Linux-Installation-Type.png)
][11]
Select Kali Linux Installation Type
[
![Select Kali Linux Installation Disk](http://www.tecmint.com/wp-content/uploads/2016/10/Select-Kali-Linux-Installation-Disk.png)
][10]
Select Kali Linux Installation Disk
5. The next question will prompt the user to determine the partitioning on the storage device. Most installs can simply put all data on one partition though.
[
![Install Kali Linux Files in Partition](http://www.tecmint.com/wp-content/uploads/2016/10/Install-Kali-Linux-Files-in-Partition.png)
][9]
Install Kali Linux Files in Partition
6. The final step will ask the user to confirm all changes to be made to the disk on the host machine. Be aware that continuing will ERASE DATA ON THE DISK.
[
![Confirm Disk Partition Write Changes](http://www.tecmint.com/wp-content/uploads/2016/10/Confirm-Disk-Partition-Write-Changes.png)
][8]
Confirm Disk Partition Write Changes
7. Once the partition changes are confirmed, the installer will run through the process of installing the files. Once it is completed, the system will want to set up a network mirror to obtain future pieces of software and updates. Be sure to enable this functionality if you wish to use the Kali repositories.
[
![Configure Kali Linux Package Manager](http://www.tecmint.com/wp-content/uploads/2016/10/Configure-Kali-Linux-Package-Manager.png)
][7]
Configure Kali Linux Package Manager
8. After selecting a network mirror, the system will ask to install GRUB. Again, this guide is assuming that Kali is to be the only operating system on this computer.
Selecting 'Yes' on this screen will allow the user to pick the device on which to write the necessary boot loader information to boot Kali.
[
![Install GRUB Boot Loader](http://www.tecmint.com/wp-content/uploads/2016/10/Install-GRUB-Boot-Loader.png)
][6]
Install GRUB Boot Loader
[
![Select Partition to Install GRUB Boot Loader](http://www.tecmint.com/wp-content/uploads/2016/10/Select-Partition-to-Install-GRUB-Boot-Loader.png)
][5]
Select Partition to Install GRUB Boot Loader
9. Once the installer finishes installing GRUB to the disk, it will alert the user to reboot the machine to boot into the newly installed Kali machine.
[
![Kali Linux Installation Completed](http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Installation-Completed.png)
][4]
Kali Linux Installation Completed
10. Since this guide installed Enlightenment as the Kali desktop environment, it will likely boot into a shell by default.
In order to launch Enlightenment, log in as the root user with the password created earlier in the installation process.
Once logged in, all that needs to be issued to start Enlightenment is the startx command.
```
# startx
```
[
![Start Enlightenment Desktop in Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Start-Enlightenment-Desktop-in-Kali-Linux.png)
][3]
Start Enlightenment Desktop in Kali Linux
The first time that Enlightenment is run, it will ask the user for some configuration preferences and then launch the Desktop Environment.
[
![Kali Linux Enlightenment Desktop](http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Enlightenment-Desktop.png)
][2]
Kali Linux Enlightenment Desktop
At this point, Kali is successfully installed and ready to be used! Upcoming articles will walk through the tools available within Kali and how they can be utilized to test the security posture of hosts and networks. Please feel free to post any comments or questions below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/kali-linux-installation-guide/
作者:[Rob Turner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/robturner/
[1]:https://www.kali.org/downloads/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Enlightenment-Desktop.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Start-Enlightenment-Desktop-in-Kali-Linux.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Installation-Completed.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Select-Partition-to-Install-GRUB-Boot-Loader.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/Install-GRUB-Boot-Loader.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Configure-Kali-Linux-Package-Manager.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/10/Confirm-Disk-Partition-Write-Changes.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/Install-Kali-Linux-Files-in-Partition.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/10/Select-Kali-Linux-Installation-Disk.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/10/Select-Kali-Linux-Installation-Type.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/10/Set-Root-User-Password-for-Kali-Linux.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/10/Set-Domain-for-Kali-Linux.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/10/Set-Hostname-for-Kali-Linux.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Boot-Menu.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-USB-Device-Name-in-Linux.png
[17]:http://www.tecmint.com/best-linux-desktop-environments/
[18]:http://www.tecmint.com/best-security-centric-linux-distributions-of-2016/

View File

@ -0,0 +1,56 @@
When to use NGINX instead of Apache
=====
>They're both popular open-source web servers but, according to NGINX CEO Gus Robertson, they have different use cases. And Microsoft? Its web server has dropped below 10 percent of all active websites for the first time in 20 years.
![web-server-popularity-october-2016.png](http://zdnet1.cbsistatic.com/hub/i/r/2016/11/07/f38d190e-046c-49e6-b451-096ee0776a04/resize/770xauto/b009f53417e9a4af207eff6271b90c43/web-server-popularity-october-2016.png)
Apache is the top web server, but NGINX continues to gain and Microsoft IIS falls below 10 percent for the first time in decades.
[NGINX][4] has risen to become the number two web server. It surpassed [Microsoft Internet Information Services (IIS)][5] long ago and has been creeping up on the long-time top web server, [Apache][6]. But, NGINX CEO Gus Robertson said in an interview, Apache and NGINX are not after the same audiences.
"I think Apache is a great web server. NGINX is different use case," said Robertson. "We don't see Apache as a rival. Our customers use NGINX to replace hardware load balancers and build micro-services neither of which is Apache."
Indeed, Robertson finds that many customers use both open-source web services. "Customers will use NGINX in front of Apache for load balancing and applications. Our architecture is quite different and we can do better concurrent web services." He also said NGINX works better in cloud configurations.
He concluded, "We're the only web server still growing, everyone else is still shrinking."
That's not quite true. According to the [October Netcraft web server survey][7], Apache saw the largest increase of active sites this month, gaining 1.8 million, while NGINX gained 400,000, the second-largest growth. These gains, coupled with Microsoft's loss of 1.2 million active sites, led to Microsoft's share of active sites dropping to 9.27 percent, the first time that it has fallen below 10 percent. Apache increased its market share by 0.19 percentage points and continues to dominate, now with 46.30 percent of active sites. Still, it is true that over the years Apache has been slowly declining, while NGINX is now at 19 percent.
NGINX's developers are seeking to make their open-core commercial web server, [NGINX Plus][8], more competitive by continuing to improve it. With the latest release, [NGINX Plus Release 11 (R11)][9], the server is both easier to extend and customize, and supports a broader range of deployments.
The biggest addition is binary compatibility for [dynamic modules][10]. This means that dynamic modules that have been compiled against the [open-source NGINX software][11] can be loaded into NGINX Plus.
This means you can leverage the large number of [third-party NGINX modules][12] to extend and add functionality to NGINX Plus, drawing from a range of open-source and commercially produced modules. Developers can create custom extensions, add-ons, and new products based on the supported NGINX Plus core.
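For readers unfamiliar with how such modules are wired in, a dynamic module is typically enabled with a single `load_module` directive at the top of nginx.conf; the module filename below is only an illustrative example:

```
# /etc/nginx/nginx.conf (main context)
load_module modules/ngx_http_geoip2_module.so;   # load a dynamically built third-party module
```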
NGINX Plus R11 also added other enhancements:
* [Improved TCP/UDP load balancing][1] -- New features include SSL server name routing, new logging functionality, additional variables, and improved PROXY protocol support. These new features enhance debugging capabilities and enable you to support a broader range of enterprise applications.
* [Better geolocation by IP address][2] -- The third-party GeoIP2 module is now certified and provided to NGINX Plus customers. This new version provides localized and richer location detail than the original GeoIP module.
* [Enhanced nginScript module][3] -- nginScript is the next-generation configuration language for NGINX Plus, based on JavaScript. New features enable you to modify request and response data on the fly in the Stream (TCP/UDP) module.
The end result? NGINX is poised to continue to make the race for the top web server a two-horse race. Microsoft IIS? It continues to slowly fade away.
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/when-to-use-nginx-instead-of-apache/
作者:[ Steven J. Vaughan-Nichols][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-tcp-udp-lb
[2]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-geoip2
[3]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-nginScript
[4]:https://www.nginx.com/
[5]:https://www.iis.net/
[6]:https://httpd.apache.org/
[7]:https://news.netcraft.com/archives/2016/10/21/october-2016-web-server-survey.html
[8]:https://www.nginx.com/products/
[9]:https://www.nginx.com/blog/nginx-plus-r11-released/
[10]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-dynamic-modules
[11]:https://www.nginx.com/products/download-oss/
[12]:https://www.nginx.com/resources/wiki/modules/index.html?utm_source=nginx-plus-r11-released&utm_medium=blog

View File

@ -0,0 +1,119 @@
ucasFL translating
# How to Recover a Deleted File in Linux
Did this ever happen to you? You realized that you had mistakenly deleted a file either through the Del key, or using `rm` in the command line.
In the first case, you can always go to the Trash, [search for the file][6], and restore it to its original location. But what about the second case? As I am sure you probably know, the Linux command line does not send removed files anywhere: it REMOVES them. Bum. They're gone.
In this article we will share a tip that may be helpful to prevent this from happening to you, and a tool that you may consider using if at any point you are careless enough to do it anyway.
### Create an alias to rm -i
The `-i` switch, when used with rm (and also other [file-manipulation tools such as cp or mv][5]), causes a prompt to appear before removing a file.
The same applies to [copying, moving, or renaming a file][4] in a location where one with the same name exists already.
This prompt gives you a second chance to consider if you actually want to remove the file; if you confirm the prompt, it will be gone. In that case, I'm sorry, but this tip will not protect you from your own carelessness.
To replace rm with an alias to `'rm -i'`, do:
```
alias rm='rm -i'
```
The alias command will confirm that rm is now aliased:
[
![Add Alias rm Command](http://www.tecmint.com/wp-content/uploads/2016/11/Add-Alias-rm-Command.png)
][3]
Add Alias rm Command
However, this will only last for the current user session in the current shell. To make the change permanent, you will have to save it to `~/.bashrc` (some distributions may use `~/.profile` instead) as shown below:
[
![Add Alias Permanently in Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Add-Alias-Permanently-in-Linux.png)
][2]
Add Alias Permanently in Linux
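If you prefer to do it from the command line instead of opening an editor, something along these lines should work (assuming your shell reads `~/.bashrc`):

```
$ echo "alias rm='rm -i'" >> ~/.bashrc    # append the alias to your bash startup file
```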
In order for the changes in `~/.bashrc` (or `~/.profile`) to take effect immediately, source the file from the current shell:
```
. ~/.bashrc
```
[
![Active Alias in Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Active-Alias-in-Linux.png)
][1]
Active Alias in Linux
### The forensics tool Foremost
Hopefully, you will be careful with your files and will only need to use this tool while recovering a lost file from an external disk or USB drive.
However, if you realize you accidentally removed a file on your system and are about to panic: don't. Let's take a look at foremost, a forensics tool that was designed for this kind of scenario.
To install foremost in CentOS/RHEL 7, you will need to enable Repoforge first:
```
# rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
# yum install foremost
```
Whereas in Debian and derivatives, just do
```
# aptitude install foremost
```
Once the installation has completed, let's proceed with a simple test. We will begin by removing an image file named `nosdos.jpg` from the /boot/images directory:
```
# cd /boot/images
# rm nosdos.jpg
```
To recover it, use foremost as follows (you'll need to identify the underlying partition first; `/dev/sda1` is where `/boot` resides in this case):
```
# foremost -t jpg -i /dev/sda1 -o /home/gacanepa/rescued
```
where /home/gacanepa/rescued is a directory on a separate disk; keep in mind that recovering files on the same drive where the removed ones were located is not a wise move.
If, during the recovery, you occupy the same disk sectors where the removed files used to be, it may not be possible to recover anything. Additionally, it is essential to stop all your activities before performing the recovery.
After foremost has finished executing, the recovered file (if recovery was possible) will be found inside the /home/gacanepa/rescued/jpg directory.
##### Summary
In this article we have explained how to avoid removing a file accidentally and how to attempt to recover it if such an undesired event happens. Be warned, however, that foremost can take quite a while to run depending on the size of the partition.
As always, dont hesitate to let us know if you have questions or comments. Feel free to drop us a note using the form below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/recover-deleted-file-in-linux/
作者:[ Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/wp-content/uploads/2016/11/Active-Alias-in-Linux.png
[2]:http://www.tecmint.com/wp-content/uploads/2016/11/Add-Alias-Permanently-in-Linux.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/11/Add-Alias-rm-Command.png
[4]:http://www.tecmint.com/rename-multiple-files-in-linux/
[5]:http://www.tecmint.com/progress-monitor-check-progress-of-linux-commands/
[6]:http://www.tecmint.com/linux-find-command-to-search-multiple-filenames-extensions/

View File

@ -0,0 +1,182 @@
# 4 Ways to Batch Convert Your PNG to JPG and Vice-Versa
In computing, batch processing is the [execution of a series of tasks][11] in a program non-interactively. In this guide we will offer you 4 simple ways to batch convert several `.PNG` images to `.JPG` and vice-versa using Linux command-line tools.
We will use the convert command line tool in all the examples; however, you can as well make use of mogrify to achieve this.
The syntax for using convert is:
```
$ convert input-option input-file output-option output-file
```
And for mogrify is:
```
$ mogrify options input-file
```
Note: With mogrify, the original image file is replaced with the new image file by default, but it is possible to prevent this by using certain options that you can find in the man page (one such option is shown below).
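For example, one commonly used way to keep the originals with mogrify is the `-format` option, which writes new files with the new extension instead of overwriting; a small sketch:

```
$ mogrify -format jpg *.png    # creates a .jpg copy of every .png, leaving the originals in place
```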
Below are the various ways to batch convert your all `.PNG` images to `.JPG` format, if you want to convert `.JPG`to `.PNG`, you can modify the commands according to your needs.
### 1\. Convert PNG to JPG Using ls and xargs Commands
The [ls command][10] allows you to list all your png images and xargs makes it possible to build and execute a convert command from standard input to convert all `.png` images to `.jpg`.
```
----------- Convert PNG to JPG -----------
$ ls -1 *.png | xargs -n 1 bash -c 'convert "$0" "${0%.png}.jpg"'
----------- Convert JPG to PNG -----------
$ ls -1 *.jpg | xargs -n 1 bash -c 'convert "$0" "${0%.jpg}.png"'
```
Explanation about the options used in the above command.
1. `-1`  flag tells ls to list one image per line.
2. `-n`  specifies the maximum number of arguments, which is 1 in this case.
3. `-c`  instructs bash to run the given command.
4. `${0%.png}.jpg`  sets the name of the new converted image, the % sign helps to remove the old file extension.
[
![Convert PNG to JPG Format in Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-in-Linux.png)
][9]
Convert PNG to JPG Format in Linux
I used the `ls -ltr` command to [list all files by modified date and time][8].
Similarly, you can convert all your `.jpg` images to `.png` by tweaking the above command.
### 2\. Convert PNG to JPG Using GNU Parallel Command
GNU Parallel enables a user to build and execute shell commands from standard input in parallel. Make sure you have GNU Parallel installed on your system, otherwise install it using the appropriate commands below:
```
$ sudo apt-get install parallel [On Debian/Ubuntu systems]
$ sudo yum install parallel [On RHEL/CentOS and Fedora]
```
Once Parallel utility installed, you can run the following command to convert all `.png` images to `.jpg` format from the standard input.
```
----------- Convert PNG to JPG -----------
$ parallel convert '{}' '{.}.jpg' ::: *.png
----------- Convert JPG to PNG -----------
$ parallel convert '{}' '{.}.png' ::: *.jpg
```
Where,
1. `{}`  input line which is a replacement string substituted by a complete line read from the input source.
2. `{.}`  input line minus extension.
3. `:::`  specifies input source, that is the command line for the example above where *png or *jpg is the argument.
[
![Parallel Command - Converts All PNG Images to JPG Format](http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-Using-Parallel-Command.png)
][7]
Parallel Command Converts All PNG Images to JPG Format
Alternatively, you can as well use [ls][6] and parallel commands together to batch convert all your images as shown:
```
----------- Convert PNG to JPG -----------
$ ls -1 *.png | parallel convert '{}' '{.}.jpg'
----------- Convert JPG to PNG -----------
$ ls -1 *.jpg | parallel convert '{}' '{.}.png'
```
### 3\. Convert PNG to JPG Using for loop Command
To avoid the hassle of writing a shell script, you can execute a `for loop` from the command line as follows:
```
----------- Convert PNG to JPG -----------
$ bash -c 'for image in *.png; do convert "$image" "${image%.png}.jpg"; echo "image $image converted to ${image%.png}.jpg"; done'
----------- Convert JPG to PNG -----------
$ bash -c 'for image in *.jpg; do convert "$image" "${image%.jpg}.png"; echo "image $image converted to ${image%.jpg}.png"; done'
```
Description of each option used in the above command:
1. -c allows for execution of the for loop statement in single quotes.
2. The image variable holds each image filename in the directory in turn.
3. For each conversion operation, the [echo command][1] informs the user that a png image has been converted to jpg format (and vice-versa) with a line such as "image $image converted to ${image%.png}.jpg".
4. "${image%.png}.jpg" creates the name of the converted image, where % removes the extension of the old image format.
[
![for loop - Convert PNG to JPG Format](http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-Using-for-loop-Command.png)
][5]
for loop Convert PNG to JPG Format
### 4\. Convert PNG to JPG Using Shell Script
If you do not want to make your command line dirty as in the previous example, write a small script like so:
Note: Appropriately interchange the `.png` and `.jpg` extensions as in the example below for conversion from one format to another.
```
#!/bin/bash
#convert
for image in *.png; do
convert "$image" "${image%.png}.jpg"
echo "image $image converted to ${image%.png}.jpg"
done
exit 0
```
Save it as `convert.sh` and make the script executable and then run it from within the directory that has your images.
```
$ chmod +x convert.sh
$ ./convert.sh
```
[
![Batch Image Convert Using Shell Script](http://www.tecmint.com/wp-content/uploads/2016/11/Batch-Image-Convert-Using-Shell-Script.png)
][4]
Batch Image Convert Using Shell Script
In summary, we covered some important ways to batch convert `.png` images to `.jpg` format and vice-versa. If you want to optimize images, you can go through our guide that shows [how to compress png and jpg images in Linux][3].
You can as well share with us any other methods including [Linux command line tools][2] for converting images from one format to another on the terminal, or ask a question via the comment section below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-image-conversion-tools/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/echo-command-in-linux/
[2]:http://www.tecmint.com/tag/linux-tricks/
[3]:http://www.tecmint.com/optimize-and-compress-jpeg-or-png-batch-images-linux-commandline/
[4]:http://www.tecmint.com/wp-content/uploads/2016/11/Batch-Image-Convert-Using-Shell-Script.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-Using-for-loop-Command.png
[6]:http://www.tecmint.com/tag/linux-ls-command/
[7]:http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-Using-Parallel-Command.png
[8]:http://www.tecmint.com/sort-ls-output-by-last-modified-date-and-time/
[9]:http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-in-Linux.png
[10]:http://www.tecmint.com/tag/linux-ls-command/
[11]:http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/

View File

@ -0,0 +1,106 @@
How To Update Wifi Network Password From Terminal In Arch Linux
============================================================
![Update Wifi Network Password From Terminal In Arch Linux](https://www.ostechnix.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
After changing the wifi network password in my router, my Arch Linux test machine lost its Internet connection. So I wanted to update the new password from the Terminal, because my Arch Linux test box doesn't have a graphical desktop environment. Changing the old wifi password to a new one is pretty easy in GUI mode: I would simply open the network manager and update the password in a few seconds. However, I was not sure how to update the wifi network password from the command line in Arch Linux. So, I started to dig into Google and found a good solution on the Arch Linux forum. In case you are ever in the same situation, read on. It's not that difficult.
### Update Wifi Network Password From Terminal
After changing the password in the router, I ran the _wifi-menu_ command to update the new password. But it kept throwing the following error.
```
sudo wifi-menu
```
It displayed the list of available wifi networks.
[
![sksk_001](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_001-1.png)
][2]
My wifi network name is Murugs9376. Then, I selected my network and hit the OK button. Instead of asking for the new password (I thought it was going to ask me if the password had been changed), it showed the following error.
```
Interface 'wlp9s0' is controlled by netctl-auto
WPA association/authentication failed for interface 'wlp9s0'
```
[
![sksk_002](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_002-1.png)
][3]
I don't have much experience with Arch-based distributions, so I went through the Arch Linux forum hoping for a solution. Thankfully, someone had posted the same problem and got a workaround from a fellow Arch user. The following is how to update the wifi network password from the Terminal in Arch-based distributions.
The network profiles are stored in the /etc/netctl/ folder. For example, here are the wifi network profiles on my Arch Linux test box.
```
ls /etc/netctl/
Sample Output:
examples ostechnix 'wlp9s0-Chendhan Cell Service' wlp9s0-Pratheesh
hooks wlp9s0 wlp9s0-Murugu9376
interfaces wlp9s0-AndroidAP wlp9s0-none
```
[
![sksk_003](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_003-1.png)
][4]
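For reference, a wireless profile in that folder is just a small key=value file. The snippet below is only a hypothetical sketch of what such a file typically contains (the field names follow the usual netctl wireless-wpa layout; your interface, ESSID and key will of course differ):

```
cat /etc/netctl/wlp9s0-Murugu9376

Description='Automatically generated profile by wifi-menu'
Interface=wlp9s0
Connection=wireless
Security=wpa
ESSID=Murugu9376
IP=dhcp
Key='old-password-here'
```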
All I need to do to set the new password is delete my wifi network profile (e.g. wlp9s0-Murugu9376) and re-run the _wifi-menu_ command to enter the new password.
So, first let us delete the wifi profile using the command:
```
sudo rm /etc/netctl/wlp9s0-Murugu9376
```
After deleting the profile, run the wifi-menu command to enter the new password.
```
sudo wifi-menu
```
Select the wifi-network and press ENTER.
[
![sksk_004](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_004-1.png)
][5]
Enter a name for the profile.
[
![sksk_005](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_005-1.png)
][6]
Finally, enter the security key (the new wifi password) for the network profile and hit the ENTER key.
[
![sksk_006](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_006-1.png)
][7]
That's it. We have now updated the wifi network password. As you can see, updating the password from the Terminal in Arch Linux is no big deal. Anyone can do it in a matter of seconds.
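As an aside, instead of deleting the profile you could presumably just replace the old passphrase inside it, since the Key= field is all that changed. A rough sketch of that alternative is below (the profile name is the one from my box; whether you restart with netctl or netctl-auto depends on how the interface is managed):

```
# Update only the passphrase in the existing profile.
sudo sed -i "s/^Key=.*/Key='MyNewPassword'/" /etc/netctl/wlp9s0-Murugu9376

# Then bring the profile up again. With plain netctl:
sudo netctl restart wlp9s0-Murugu9376

# Or, if the interface is handled by netctl-auto (as the earlier error suggested):
sudo systemctl restart netctl-auto@wlp9s0.service
```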
If you find this guide useful, please share it on your social networks and support us.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/update-wifi-network-password-terminal-arch-linux/
作者:[ SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:http://ostechnix.tradepub.com/free/w_pacb38/prgm.cgi?a=1
[2]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_001-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_002-1.png
[4]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_003-1.png
[5]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_004-1.png
[6]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_005-1.png
[7]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_006-1.png

View File

@ -0,0 +1,125 @@
How to check if port is in use on Linux or Unix
============================================================
[
![](https://s0.cyberciti.org/images/category/old/linux-logo.png)
][1]
How do I determine if a port is in use under Linux or a Unix-like system? How can I verify which ports are listening on a Linux server?
It is important to verify which ports are listening on the server's network interfaces. You need to pay attention to open ports to detect an intrusion. Apart from intrusions, for troubleshooting purposes it may be necessary to check whether a port is already in use by a different application on your servers. For example, you may install the Apache and Nginx servers on the same system, so you need to know whether Apache or Nginx is using TCP port 80/443. This quick tutorial shows how to use the netstat, nmap and lsof commands to check which ports are in use and to view the application that is utilizing each port.
### How to check the listening ports and applications on Linux:
1. Open a terminal application i.e. shell prompt.
2. Run any one of the following commands:
```
sudo lsof -i -P -n | grep LISTEN
sudo netstat -tulpn | grep LISTEN
sudo nmap -sTU -O IP-address-Here
```
Let us look at these commands and their output in detail.
### Option #1: lsof command
The syntax is:
```
$ sudo lsof -i -P -n
$ sudo lsof -i -P -n | grep LISTEN
$ doas lsof -i -P -n | grep LISTEN
```
(The `doas` variant of the command above is for OpenBSD.)
Sample outputs:
[
![Fig.01: Check the listening ports and applications with lsof command](https://s0.cyberciti.org/uploads/faq/2016/11/lsof-outputs.png)
][2]
Fig.01: Check the listening ports and applications with lsof command
Consider the last line from above outputs:
```
sshd 85379 root 3u IPv4 0xffff80000039e000 0t0 TCP 10.86.128.138:22 (LISTEN)
```
- sshd is the name of the application.
- 10.86.128.138 is the IP address to which the sshd application is bound (LISTEN).
- 22 is the TCP port that is being used (LISTEN).
- 85379 is the process ID of the sshd process.
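As an extra tip (not part of the output above), lsof can also print just the PID listening on a given port, which is handy in scripts; port 22 here is only an example:
```
$ sudo lsof -t -i :22
85379
$ ps -p "$(sudo lsof -t -i :22)" -o comm=
sshd
```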
### Option #2: netstat command
You can check the listening ports and applications with netstat as follows.
### Linux netstat syntax
```
$ netstat -tulpn | grep LISTEN
```
### FreeBSD/MacOS X netstat syntax
```
$ netstat -anp tcp | grep LISTEN
$ netstat -anp udp | grep LISTEN
```
### OpenBSD netstat syntax
```
$ netstat -na -f inet | grep LISTEN
$ netstat -nat | grep LISTEN
```
### Option #3: nmap command
The syntax is:
```
$ sudo nmap -sT -O localhost
$ sudo nmap -sU -O 192.168.2.13 ##[ list open UDP ports ]##
$ sudo nmap -sT -O 192.168.2.13 ##[ list open TCP ports ]##
```
Sample outputs:
[
![Fig.02: Determines which ports are listening for TCP connections using nmap](https://s0.cyberciti.org/uploads/faq/2016/11/nmap-outputs.png)
][3]
Fig.02: Determines which ports are listening for TCP connections using nmap
You can combine TCP/UDP scan in a single command:
`$ sudo nmap -sTU -O 192.168.2.13`
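If you want to script the check, for example before starting a second web server, you can wrap the lsof invocation shown earlier in a small shell function. This is a minimal sketch, not from the original article:
```
# Returns success (0) if something is already LISTENing on the given TCP port.
port_in_use() {
    sudo lsof -P -n -i TCP:"$1" 2>/dev/null | grep -q LISTEN
}

if port_in_use 80; then
    echo "TCP port 80 is already taken - perhaps Apache or Nginx is running."
fi
```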
### A note about Windows users
You can check port usage on the Windows operating system using the following commands:
```
netstat -bano | more
netstat -bano | grep LISTENING
netstat -bano | findstr /R /C:"LISTENING"
```
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
作者:[ VIVEK GITE][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
[1]:https://www.cyberciti.biz/faq/category/linux/
[2]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/lsof-outputs/
[3]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/nmap-outputs/

View File

@ -0,0 +1,114 @@
User Editorial: Steam Machines & SteamOS after a year in the wild
====
On this day, last year, [Valve released Steam Machines onto the world][2], after the typical Valve delays. While the state of the Linux desktop regarding gaming has improved, Steam Machines have not taken off as a platform, and SteamOS remains stagnant. What happened with these projects from Valve? Why were they created, why did they fail, and what could have been done to make them succeed?
**Context**
In 2012, when Windows 8 was released, it included an app store, much like iOS and Android. With the new touch-friendly user interface Microsoft debuted, there was a new set of APIs available called "WinRT," for creating these immersive touch-friendly applications in the UI language called "Metro." Applications created with this new API, however, could only be distributed via the Windows Store, with Microsoft taking a 30% cut, just like the other stores. To Gabe Newell, CEO of Valve, this was unacceptable, and he saw the risk of Microsoft using its position to push the Windows Store and Metro applications to crush Valve, as it had done to Netscape using Internet Explorer.
To Valve, the strength of the PC running Windows was that it was an open platform, where anyone could run whatever they want without control from the operating system or hardware vendor. The alternative to these proprietary platforms closing in on third-party application stores like Steam was to push a truly open platform that grants everyone the freedom to change things: Linux. Linux is just a kernel, but you can easily create an operating system with it and other software like the GNU core utilities and Gnome, such as Ubuntu. While pushing Ubuntu and other Linux distributions would give Valve a sanctuary platform in case Microsoft or Apple turned hostile, Linux also gave them the possibility of creating a new platform.
**Conception**
Valve seemed to have found an opportunity in the console space, if we can call Steam Machines consoles. To meet the user interface expectations of a console, used on a large screen television from afar, Big Picture Mode was created. A core principle of the machines was openness; the software could be swapped out for Windows, as an example, and the CAD designs for the controller are available for people's projects.
Originally, Valve had planned to create their own box as a "flagship" machine. However, these only shipped as prototypes to testers in 2013. They would also let other OEMs like Dell create their own Steam Machines, allowing a variety of pricing and specification options. A company called "Xi3" showed off a small box, small enough to fit in a palm, as a possible candidate to become a premiere Steam Machine, which created more hype around Steam Machines. Ultimately, Valve decided to go only with OEM partners to make and advertise Steam Machines, rather than doing it themselves.
More "out there" ideas were considered too. Biometrics, gaze tracking, and motion controllers were considered for the controller. Of these, the released Steam Controller had a gyroscope, and the HTC Vive controllers had various tracking and motion features that may have originally been intended for the early controller concepts. The controller was also originally meant to be more radical in its approach, with a touchscreen in the middle that had customizable, context-sensitive actions. Ultimately, the launch controller was more conservative, but still had features like the dual trackpads and advanced software that gave it flexibility. Valve had also considered making a version of Steam Machines and SteamOS for smaller hardware like laptops. This ultimately never bore any fruit, though the "Smach Z" handheld could be compared to it.
In [September 2013][3], Valve announced Steam Machines and SteamOS, with an expected release in the middle of 2014. The aforementioned 300 prototype machines were released to testers in December, and in January 2,000 more machines were provided to developers. SteamOS was released for testers experienced with Linux to try out. With the feedback given, Valve decided to delay the release until November 2015.
The late launch caused problems with partners; Dell's Steam Machine was launched a year early running Windows as the Alienware Alpha, with extra software to improve usability with a controller.
**Launch**
With the launch, Valve and their OEM partners released their machines, and Valve also released the Steam Controller and the Steam Link. A retail presence was established with GameStop and other brick and mortar stores providing space. Before release, some OEMs pulled out of the launch; Origin PC and Falcon Northwest, two high-end boutique builders. They had claimed performance issues and limitations had made them decide not to ship SteamOS.
The machines had launched to mixed reviews. The Steam Link was praised and many had considered buying one for their existing PC instead of buying a Steam Machine for the living room. The Steam Controller reception was muddled, due to its rich feature set but high learning curve. The Steam Machines themselves ultimately launched to the muddiest reception, however. Reviewers like LinusTechTips noticed glaring defects with the SteamOS software, including performance issues. Many of the machines were criticized for their high price point and poor value, especially when compared to the option of building a PC from the perspective of a PC gamer, or the price in comparison to other consoles. The use of SteamOS was criticized over compatibility, bugs, and lower performance than Windows. Of the available options, the Alienware Steam Machine was considered to be the most interesting option due to its value relative to other machines and small form factor.
By using Debian Linux as the base, Valve had many "launch titles" for the platform, as they had a library of pre-existing Linux titles. The initial availability of games was seen as favourable compared with other consoles. However, many titles originally announced for the platform never came out, or came out much later. Rocket League and Mad Max only came out recently, a year after the initial announcements, and titles like The Witcher 3 and Batman: Arkham Knight never came to the platform, despite initial promises from Valve or the publishers. In the case of The Witcher 3, the developer, CD Projekt Red, denied they had ever announced a port, despite their game appearing in a list of titles on sale that had, or were announced to have, Linux and SteamOS support. In addition, many "AAA" titles have not been ported, though this situation continues to improve over time.
**Neglect**
With the Steam Machines launched, developers at Valve moved on to other projects. Of the projects being worked on, virtual reality was seen as the most important, with about a third of employees working on it as of June. Valve saw virtual reality as something to develop, and Steam as the prime ecosystem for delivering VR. Using HTC to manufacture, they designed their own virtual reality headset and controllers, and would continue to develop new revisions. However, Linux and Steam Machines fell by the wayside with this focus. SteamVR, until recently, did not support Linux (Linux support is still not public, but it was shown off at SteamDevDays), which called into question Valve's commitment to Steam Machines and to Linux as an open platform with a future.
There has been little development of SteamOS itself. The last major update, SteamOS 2.0, mostly synchronized with upstream Debian and required a reinstallation, and subsequent patches simply keep synchronizing with upstream sources. While Valve has made improvements to projects like Mesa, which have improved performance for many users, it has done little with Steam Machines as a product.
Many features continue to go undeveloped. Steam's built-in functionality like chat and broadcasting continues to be weak, but this affects all platforms that Steam runs on. More pressingly, services like Netflix, Twitch, and Spotify are not integrated into the interface like on most major consoles. Accessing them requires using the browser, which can be slow and clunky, if it even achieves what is wanted, or bringing in software from third-party sources, which requires using the terminal, and that software might not be very usable with a controller: a poor UX for what is supposed to be an appliance.
Valve put little effort into marketing the platform, preferring to leave this to the OEMs. However, most OEMs were either boutique builders or makers of barebones systems. Of the OEMs, only Dell was a major player in the PC market, and the only one who pushed Steam Machines with advertisements.
Sales were not strong. Only 500,000 controllers had been sold seven months on (as stated in June 2016), including those bundled with a Steam Machine. This puts retail Steam Machines, not counting machines people have installed SteamOS on, in the low hundreds of thousands. Compared to the existing PC and console install bases, this is low.
**Post-mortem thoughts**
So, with the story of what happened, can we identify why Steam Machines failed, and ways they could succeed in the future?
_Vision and purpose_
Steam Machines did not make clear what they were in the market, nor did any advantages particularly stand out. On the PC flank, building PCs had become popular and is a cheaper option with better upgrade and flexibility options. On the console flank, they were outflanked by consoles with low initial investment, despite a possibly higher TCO with game prices, and a far simpler user experience.
With PCs, flexibility is seen as a core asset, with users being able to use their machines beyond gaming, doing work and other tasks. While Steam Machines were just PCs running SteamOS with no restrictions, the SteamOS software and marketing had solidified their view as consoles to PC gamers, compounded by the price and lower flexibility in hardware with some options. In the living room where these machines could have made sense to PC gamers, the Steam Link offered a way to access content on a PC in another room, and small form factor hardware like NUCs and Mini-ITX motherboards allowed for custom built PCs that are more socially acceptable in living rooms. The SteamOS software was also available to “convert” these PCs into Steam Machines, but people seeking flexibility and compatibility often opted for a Linux or Windows desktop. Recent strides in Windows and desktop Linux have simplified maintenance tasks associated with desktop-experience computers, automating most of it.
With consoles, simplicity is a virtue. Even as they have expanded in their roles, with media features often a priority, they are still a plug-and-play experience where compatibility and experience are guaranteed, with a low barrier of entry. Consoles also have long life cycles, ranging from four to seven years, and the fixed hardware during this life cycle allows developers to target and optimize specifically for their specifications and features. New mid-life upgrades like "Scorpio" and the PlayStation 4 Pro may change the unified experience previously shared by users, but manufacturers are requiring games to work on the original model consoles to avoid the most problematic aspects. To keep users attached to the systems, social networks and exclusive games are used. Games also come on discs that can be freely reused and resold, which is a positive for retailers and users. Steam Machines have none of these guarantees; they carry PC complexity and higher initial prices despite a living-room-friendly exterior.
_Reconciliation_
With this, Steam Machines could be seen as a "worst of both worlds" product, carrying the burdens of both kinds of product without clearly presenting as one or the other, or as some kind of new product category. There are also many deficiencies that neither camp experiences, like the lack of AAA titles that appear on consoles and Windows PCs, and the lack of clients for services like Netflix. Despite this, Valve has put little effort into improving the product or even trying to resolve seemingly contradictory goals, like the mutual distrust between PC and console gaming.
Some things may make it impossible to reconcile the two concepts into one category or the other, though. Things like graphics settings and mods may make it hard to create a foolproof experience, and the complexity of the underlying system appears from time to time.
One of the most complex parts is the concept of having a lineup: users need to evaluate not only the costs and specifications of a system, but its value, and its value relative to other systems. You need some way for the user to know that their hardware can run any given game, either by some automated benchmark system with comparisons, or by a grading system, though these need to be simple and need to support (almost) every game in the library. In addition, you also need to worry about how these systems and grades will age: what does a "2016 Grade A" machine mean three years from now?
_Valve, effort, and organization_
Valve's organizational structure may be detrimental to creating platforms like Steam Machines, let alone maintaining services like Steam. Their mostly leaderless structure, with people supposedly moving their desks to ad-hoc units and working on projects that they alone decide to work on, can be great for creative endeavours, as well as research and development. It is said Valve only hires what they consider to be the "cream of the crop," with very strict standards, tying them to what they deem more "worthy" work. This view may be inaccurate, though: cliques often exist, the word of Gabe Newell is more important than the "leaked" employee handbook lets on, and people are hired and then fired as needed, as a form of contracting for work on certain aspects.
However, this leaves projects that aren't glamorous or interesting, but need persistent and often mundane work, to wither on the vine. Customer support for Valve has been a constant headache, with enraged users feeling ignored, and Valve sometimes only acting when legally required to do so, as with the automated refund system that was forced into action by Australian and European legislation, or the still-ongoing Counter-Strike: Global Offensive item gambling site controversy involving the gambling commission of Washington State.
This has affected Steam Machines as a result. With the launch delayed by a year, some partners' hands were forced: Dell launched the Alienware Steam Machine a year earlier as the Alienware Alpha, which meant the hardware was outdated by launch. These delays may have affected game availability as well. How developers and hardware partners feel as a result of the delayed and unspectacular launch is not clear. Valve's platform for virtual reality simply wasn't available on Linux, and as such not on SteamOS, until recently, even as SteamVR was receiving significant developer effort.
_The "long game"_
Valve is seen as playing a "long game" with Steam Machines and SteamOS, though it appears as if there is no roadmap. An example of Valve aiming for the long term is Steam itself, from its humble and initially reviled beginnings as a patching platform for their games to the popular distribution and social network it is today. It also helped that Steam was required to play Valve's games like Half-Life 2 and Counter-Strike 1.6. However, it doesn't seem as if Valve is putting the same effort into Steam Machines as they did with Steam before. There is also entrenched competition that Steam in its early days never really dealt with, arguably including Valve itself, with Steam on Windows.
_Gambit_
With the lack of development of Steam Machines, one wonders if the platform was a bargaining chip of sorts. Valve's Linux efforts, out of which Steam Machines grew, were originally started because of concerns that Microsoft and Apple would push them out of the market with native app stores; Steam Machines grew so Valve would have a safe haven in case this happened, and a bargaining chip with which to remind the developers of its host platforms of possible independence. When these threats turned out to be non-threatening, Valve slowed down development. I don't quite buy this, however; Valve expended a lot of goodwill with hardware partners and developers trying to push this, only to halt it. You could say both Microsoft and Valve called each other's bluffs: Microsoft with a locked-down Windows 8, and Valve with its capability as an independent player.
Even then, who is to say developers wouldn't follow Microsoft onto a locked-in platform, if it could offer superior deals to publishers, or better customer relationships? In addition, Microsoft is now pushing Xbox-on-Windows integration with cross-buy, Xbox Live integration, and Xbox-exclusive games on Windows, all while preserving Windows as an open platform, which is arguably more of a threat to Steam.
Another point you could argue is that all of this with Steam Machines was simply to push Linux adoption in PC gaming, and that Steam Machines themselves existed merely to make it more palatable to publishers and developers by implying a large push and continued support. If so, it was an awfully expensive gambit; developers supported Linux before and after Steam Machines anyway, and it could have backfired, with developers pulling out of Linux because the Promised Land of SteamOS never arrived.
**My opinions on what could have been done**
I think there is an interesting product in Steam Machines, and that there is a market for it, but a lack of interest and effort, as well as possible confusion over what it should have been, has been damaging. I see Steam Machines as a way to cut out the complexity of PC gaming (worrying about parts, life cycles, and maintenance) while keeping advantages like cheap games, mods, and an open platform that can be fiddled with if the user desires. However, they need to get core aspects like pricing, marketing, lineup, and software right.
I think Steam Machines can make compromises on things like upgradability (though it is possible to preserve it, if done with attention to user experience) and choice, to reduce friction; PCs would still exist for those who want those options. The paralysis of choice is a real dilemma, and the sheer number of poorly valued options available as Steam Machines didn't help. Valve needs a flagship machine to lead Steam Machines. Arguably, the Alienware model was close, but it was never made official. There is good industrial design talent in Valve, and if they focused on their own machine and put effort in, it might be worth it. A company like Dell or HTC could manufacture for Valve, bringing their experience in. Defining life cycles, and having only one or two specifications updated periodically, would help, especially if they worked with developers to establish this as a baseline that should be supported. I'm not sure about the OEMs; if Valve puts its effort behind one machine, they might be made redundant and would ultimately only hinder development of the platform.
Addressing the software issues is essential. The lack of integration with services like Netflix and Twitch, which exist fluidly on consoles and are easily put into place on a PC despite living room user interface issues, is holding Steam Machines back. Although Valve has slowly been acquiring movie licenses for distribution on Steam, people will use existing and trusted streaming sources. This needs to be addressed, especially as people use their consoles as part of their home theatre. Fixing issues with the Steam client and platform is essential, and feature parity with other platforms is a good idea. Performance issues with Linux and its graphics stack are also a problem, but this is slowly improving. Getting ports of games will be another issue. Game porting shops like Feral Interactive and Aspyr Media help the library, but they need to be contracted by publishers and developers, and they often use wrappers that add overhead. Valve has helped studios directly with porting, such as with Rocket League, but this has rarely happened, and when it did, it happened slowly, at the typical Valve pace. The monolith of AAA games can't be ignored either; the situation has improved dramatically, but studios like Bethesda are still reluctant to port, especially given a small user base, the lack of support from Valve for Steam Machines even if Linux is doing relatively well, and the lack of extra DRM like Denuvo.
Valve also needs to put effort into the bits beyond hardware and software. With a single machine, they have a stake in it and can subsidize the hardware effectively. This would put it at parity with consoles, and possibly make it cheaper than custom-built PCs. Marketing the product to whichever market segments would be interested in the machines is essential. (I myself would be interested in the machines. I don't like the hassle of dealing with PC building or the premium on prebuilt machines, but consoles often lack the games I want to play, and I have an existing library of games on Steam that I acquired cheaply.) Retail partners may not be effective, due to their interest in selling and reselling physical copies of games.
Even with my suggestions for the platform and product, I'm not sure how effective they would be in helping Steam Machines achieve their full potential and do well in the marketplace. Ultimately, it comes down to learning not just from your own mistakes, but also from the mistakes of previous entrants like the 3DO and the Pippin, which relied on an open platform or descended from desktop-experience computing; those lessons are relevant to Valve's current situation, and to the future of Nintendo's Switch, which steps into the same realm of possible confusion over what it stands for.
_Note: Clean-up done by liamdawe; all thoughts are from the submitter._
This article was submitted by a guest, we encourage anyone to [submit their own articles][1].
--------------------------------------------------------------------------------
via: https://www.gamingonlinux.com/articles/user-editorial-steam-machines-steamos-after-a-year-in-the-wild.8474
作者:[calvin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.gamingonlinux.com/profiles/5163
[1]:https://www.gamingonlinux.com/submit-article/
[2]:https://www.gamingonlinux.com/articles/steam-machines-steam-link-steam-controller-officially-released-steamos-sale.6201
[3]:https://www.gamingonlinux.com/articles/valve-announces-steam-machines-you-can-win-one-too.2469

View File

@ -0,0 +1,47 @@
[XFCE GETS A `DO NOT DISTURB` MODE AND PER APPLICATION NOTIFICATION SETTINGS][7]
============================================================
The Xfce developers are busy [porting][3] Xfce applications and components to GTK3, and in the process, they are also adding new features.
**"Do not disturb"**, a much requested feature, landed in _xfce4-notifyd_ 0.3.4 (the Xfce notification daemon) [recently][4]. Using this, you can suppress notification bubbles for a limited time-frame.
Furthermore, **the latest _xfce4-notifyd_ includes an option to enable or disable notifications on a per-application basis**.
After an application sends a notification, the app is added to a list in the notification settings. From here, you can control which applications can show notifications.
Both the "Do not disturb" mode and the application-specific notification settings can be found in _Settings > Notifications_:
[
![](https://1.bp.blogspot.com/-fvSesp1ukaQ/WCR8JQVgfiI/AAAAAAAAYl8/IJ1CshVQizs9aG2ClfraVaNjKP3OyxvAgCLcB/s400/xfce-do-not-disturb.png)
][5]
Right now there's no way of accessing notifications missed due to the "Do not disturb" mode being enabled. However, **a notification logging / persistence feature is expected in a future release.**
And finally, yet **another feature** in _xfce4-notifyd_ 0.3.4 is an **option to display notifications on the primary monitor** (until now, notifications were displayed on the active monitor). This option is not available in the GUI for now, and it must be enabled using Xfconf (Settings Editor) by adding a Boolean property called "/primary-monitor" (without the quotes) to _xfce4-notifyd_ and setting it to "True":
[
![](https://2.bp.blogspot.com/-M8xZpEHMrq8/WCR9EufvsnI/AAAAAAAAYmA/nLI5JQUtmE0J9TgvNM9ZKGHBdwwBhRH3QCLcB/s400/xfce-xfconf.png)
][6]
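If you prefer the command line over the Settings Editor, the same Boolean property can presumably be created with `xfconf-query` (the channel `xfce4-notifyd` and the property path `/primary-monitor` are the ones mentioned above):

```
# Create the /primary-monitor boolean in the xfce4-notifyd channel and enable it.
xfconf-query -c xfce4-notifyd -p /primary-monitor -n -t bool -s true

# Check the stored value afterwards.
xfconf-query -c xfce4-notifyd -p /primary-monitor
```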
**_xfce4-notifyd_ 0.3.4 is not available in a PPA right now, but it will probably be added to the [Xfce GTK3 PPA][1] soon.**
**If you want to build it from source, download it from [HERE][2].**
--------------------------------------------------------------------------------
via: http://www.webupd8.org/2016/11/xfce-gets-do-not-disturb-mode-and-per.html
作者:[Andrew ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.webupd8.org/p/about.html
[1]:https://launchpad.net/~xubuntu-dev/+archive/ubuntu/xfce4-gtk3
[2]:http://archive.xfce.org/src/apps/xfce4-notifyd/0.3/
[3]:https://wiki.xfce.org/releng/4.14/roadmap
[4]:http://simon.shimmerproject.org/2016/11/09/xfce4-notifyd-0-3-4-released-do-not-disturb-and-per-application-settings/
[5]:https://1.bp.blogspot.com/-fvSesp1ukaQ/WCR8JQVgfiI/AAAAAAAAYl8/IJ1CshVQizs9aG2ClfraVaNjKP3OyxvAgCLcB/s1600/xfce-do-not-disturb.png
[6]:https://2.bp.blogspot.com/-M8xZpEHMrq8/WCR9EufvsnI/AAAAAAAAYmA/nLI5JQUtmE0J9TgvNM9ZKGHBdwwBhRH3QCLcB/s1600/xfce-xfconf.png
[7]:http://www.webupd8.org/2016/11/xfce-gets-do-not-disturb-mode-and-per.html

View File

@ -0,0 +1,172 @@
### [Can Linux containers save IoT from a security meltdown?][28]
![](http://hackerboards.com/files/internet_of_things_wikimedia1-thm.jpg)
In this final IoT series post, Canonical and Resin.io champion Linux container technology as a solution to IoT security and interoperability challenges.
![Artik 7](http://hackerboards.com/files/samsung_artik710-thm.jpg)
**Artik 7**
Despite growing security threats, the Internet of Things hype shows no sign of abating. Feeling the FoMo, companies are busily rearranging their roadmaps for IoT. The transition to IoT runs even deeper and broader than the mobile revolution. Everything gets swallowed in the IoT maw, including smartphones, which are often our windows on the IoT world, and sometimes our hubs or sensor endpoints.
New IoT-focused processors and embedded boards continue to reshape the tech landscape. Since our [Linux and Open Source Hardware for IoT][5] story in September, we've seen [Intel Atom E3900][6] "Apollo Lake" SoCs aimed at IoT gateways, as well as [new Samsung Artik modules][7], including a Linux-driven, 64-bit Artik 7 COM for gateways and an RTOS-ready, Cortex-M4 Artik 0. ARM announced [Cortex-M23 and Cortex-M33][8] cores for IoT endpoints featuring ARMv8-M and TrustZone security.
Security is a selling point for these products, and for good reason. The Mirai botnet that recently attacked the Dyn service and blacked out much of the U.S. Internet for a day brought Linux-based IoT into the forefront, and not in a good way. Just as IoT devices can be turned to the dark side via DDoS, the devices and their owners can also be victimized directly by malicious attacks.
![Cortex-M33 and -M23](http://hackerboards.com/files/arm_cortexm33m23-thm.jpg)
**Cortex-M33 and -M23**
The Dyn attack reinforced the view that IoT will move forward more confidently in controlled and protected industrial environments rather than in the home. It's not that consumer [IoT security technology][9] is unavailable, but unless products are designed for security from scratch, as are many of the solutions in our [smart home hub story][10], security adds cost and complexity.
In this final, future-looking segment of our IoT series, we look at two Linux-based, Docker-oriented container technologies that are being proposed as solutions to IoT security. Containers might also help solve the ongoing issues of development complexity and barriers to interoperability that we explored in our story on [IoT frameworks][11].
We spoke with Canonical's Oliver Ries, VP Engineering Ubuntu Client Platform, about his company's Ubuntu Core and its Docker-friendly, container-like Snaps package management technology. We also interviewed Resin.io CEO and co-founder Alexandros Marinos about his company's new Docker-based ResinOS for IoT.
**Ubuntu Core Snaps to**
Canonical's IoT-oriented [Snappy Ubuntu Core][12] version of Ubuntu is built around a container-like snap package management mechanism, and offers app store support. The snaps technology was recently [released on its own][13] for other Linux distributions. On November 3, Canonical released [Ubuntu Core 16][14], which improves white-label app store and update control services.
<center>
[
![](http://hackerboards.com/files/canonical_ubuntucore16_diagram-sm.jpg)
][15]
**Classic Ubuntu (left) architecture vs. Ubuntu Core 16**
(click image to enlarge)
</center>
The snap mechanism offers automatic updates, and helps block unauthorized updates. Using transactional systems management, snaps ensure that updates either deploy as intended or not at all. In Ubuntu Core, security is further strengthened with AppArmor, and the fact that all application files are kept in separate silos, and are read-only.
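As a rough illustration of that transactional model (the `hello` snap is just a stand-in example, and exact output varies by release), the install/refresh/revert cycle looks like this on an Ubuntu Core or snapd-enabled system:

```
# Install a snap; it lands in its own read-only, versioned silo.
sudo snap install hello

# Fetch the next revision; if the update cannot complete, the old revision stays active.
sudo snap refresh hello

# Roll back to the previously installed revision if the new one misbehaves.
sudo snap revert hello

# List the revisions kept on disk and see which one is current.
snap list --all hello
```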
![LimeSDR](http://hackerboards.com/files/limesdr-thm.jpg)
**LimeSDR**
Ubuntu Core, which was part of our recent [survey of open source IoT OSes][16], now runs on Gumstix boards, Erle Robotics drones, Dell Edge Gateways, the [Nextcloud Box][17], LimeSDR, the Mycroft home hub, Intel's Joule, and SBCs compliant with Linaro's 96Boards spec. Canonical is also collaborating with the Linaro IoT and Embedded (LITE) Segment Group on its [96Boards IoT Edition][18]. Initially, 96Boards IE is focused on Zephyr-driven Cortex-M4 boards like Seeed's [BLE Carbon][19], but it will expand to gateway boards that can run Ubuntu Core.
"Ubuntu Core and snaps have relevance from edge to gateway to the cloud," says Canonical's Ries. "The ability to run snap packages on any major distribution, including Ubuntu Server and Ubuntu for Cloud, allows us to provide a coherent experience. Snaps can be upgraded in a failsafe manner using transactional updates, which is important in an IoT world moving to continuous updates for security, bug fixes, or new features."
![Nextcloud Box](http://hackerboards.com/files/nextcloud_box3-thm.jpg)
**Nextcloud Box**
Security and reliability are key points of emphasis, says Ries. "Snaps can run completely isolated from one another and from the OS, making it possible for two applications to securely run on a single gateway," he says. "Snaps are read-only and authenticated, guaranteeing the integrity of the code."
Ries also touts the technology for reducing development time. "Snap packages allow a developer to deliver the same binary package to any platform that supports it, thereby cutting down on development and testing costs, deployment time, and update speed," says Ries. "With snap packages, the developer is in full control of the lifecycle, and can update immediately. Snap packages provide all required dependencies, so developers can choose which components they use."
**ResinOS: Docker for IoT**
Resin.io, which makes the commercial IoT framework of the same name, recently spun off the framework's Yocto Linux based [ResinOS 2.0][20] as an open source project. Whereas Ubuntu Core runs Docker container engines within snap packages, ResinOS runs Docker on the host. The minimalist ResinOS abstracts the complexity of working with Yocto code, enabling developers to quickly deploy Docker containers.
<center>
[
![](http://hackerboards.com/files/resinio_resinos_arch-sm.jpg)
][21]
**ResinOS 2.0 architecture**
(click image to enlarge)
</center>
Like the Linux-based CoreOS, ResinOS integrates systemd control services and a networking stack, enabling secure rollouts of updated applications over a heterogeneous network. However, it's designed to run on resource-constrained devices such as ARM hacker boards, whereas CoreOS and other Docker-oriented OSes like the Red Hat based Project Atomic are currently x86-only and prefer a resource-rich server platform. ResinOS can run on 20 Linux devices and counting, including the Raspberry Pi, BeagleBone, and Odroid-C1.
"We believe that Linux containers are even more important for embedded than for the cloud," says Resin.io's Marinos. "In the cloud, containers represent an optimization over previous processes, but in embedded they represent the long-delayed arrival of generic virtualization."
![BeagleBone Black](http://hackerboards.com/files/beaglebone-hand-thm.jpg)
**BeagleBone Black**
When applied to IoT, full enterprise virtual machines have performance issues and restrictions on direct hardware access, says Marinos. Mobile VMs like OSGi and Android's Dalvik can be used for IoT, but they require Java, among other limitations.
Using Docker may seem natural for enterprise developers, but how do you convince embedded hackers to move to an entirely new paradigm? "Rather than transferring practices from the cloud wholesale, ResinOS is optimized for embedded," answers Marinos. In addition, he says, containers are better than typical IoT technologies at containing failure. "If there's a software defect, the host OS can remain functional and even connected. To recover, you can either restart the container or push an update. The ability to update a device without rebooting it further removes failure opportunities."
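At the plain Docker level, the recover-or-update cycle Marinos describes boils down to a few commands. This is a generic sketch with made-up container and image names, not resin.io's own deployment tooling, which wraps these steps for you:

```
# Recover from a misbehaving application without touching the host OS.
docker restart sensor-app

# Or push an update: pull the new image and swap the container, no device reboot needed.
docker pull registry.example.com/sensor-app:v2
docker stop sensor-app && docker rm sensor-app
docker run -d --restart=always --name sensor-app registry.example.com/sensor-app:v2
```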
According to Marinos, other benefits accrue from better alignment with the cloud, such as access to a broader set of developers. Containers provide "a uniform paradigm across data center and edge, and a way to easily transfer technology, workflows, infrastructure, and even applications to the edge," he adds.
The inherent security benefits in containers are being augmented with other technologies, says Marinos. "As the Docker community pushes to implement signed images and attestation, these naturally transfer to ResinOS," he says. "Similar benefits accrue when the Linux kernel is hardened to improve container security, or gains the ability to better manage resources consumed by a container."
Containers also fit in well with open source IoT frameworks, says Marinos. "Linux containers are easy to use in combination with an almost endless variety of protocols, applications, languages and libraries," says Marinos. "Resin.io has participated in the AllSeen Alliance, and we have worked with partners who use IoTivity and Thread."
**Future IoT: Smarter Gateways and Endpoints**
Marinos and Canonical's Ries agree on several future trends in IoT. First, the original conception of IoT, in which MCU-based endpoints communicate directly with the cloud for processing, is quickly being replaced with a fog computing architecture. That calls for more intelligent gateways that do a lot more than aggregate data and translate between ZigBee and WiFi.
Second, gateways and smart edge devices are increasingly running multiple apps. Third, many of these devices will provide onboard analytics, which we're seeing in the latest [smart home hubs][22]. Finally, rich media will soon become part of the IoT mix.
<center>
[
![](http://hackerboards.com/files/eurotech_reliagate2026-sm.jpg)
][23] [
![](http://hackerboards.com/files/advantech_ubc221-sm.jpg)
][24]
**Some recent IoT gateways: Eurotech's [ReliaGate 20-26][1] and Advantech's [UBC-221][2]**
(click images to enlarge)
</center>
"Intelligent gateways are taking over a lot of the processing and control functions that were originally envisioned for the cloud," says Marinos. "Accordingly, we're seeing an increased push for containerization, so feature- and security-related improvements can be deployed with a cloud-like workflow. The decentralization is driven by factors such as the mobile data crunch, an evolving legal framework, and various physical limitations."
Platforms like Ubuntu Core are enabling an "explosion of software becoming available for gateways," says Canonical's Ries. "The ability to run multiple applications on a single device is appealing both for users annoyed with the multitude of single-function devices, and for device owners, who can now generate ongoing software revenues."
<center>
[
![](http://hackerboards.com/files/myomega_mynxg-sm.jpg)
][25] [
![](http://hackerboards.com/files/technexion_ls1021aiot_front-sm.jpg)
][26]
**Two more IoT gateways: [MyOmega MYNXG IC2 Controller][3] (left) and TechNexion's [LS1021A-IoT Gateway][4]**
(click images to enlarge)
</center>
It's not only gateways; endpoints are getting smarter, too. "Reading a lot of IoT coverage, you get the impression that all endpoints run on microcontrollers," says Marinos. "But we were surprised by the large number of Linux endpoints out there, like digital signage, drones, and industrial machinery, that perform tasks rather than operate as an intermediary. We call this the shadow IoT."
Canonical's Ries agrees that a single-minded focus on minimalist technology misses out on the emerging IoT landscape. "The notion of lightweight is very short lived in an industry that's developing as fast as IoT," says Ries. "Today's premium consumer hardware will be powering endpoints in a matter of months."
While much of the IoT world will remain lightweight and "headless," with sensors like accelerometers and temperature sensors communicating in whisper-thin data streams, many of the newer IoT applications use rich media. "Media input/output is simply another type of peripheral," says Marinos. "There's always the issue of multiple containers competing for a limited resource, but it's not much different than with sensor or Bluetooth antenna access."
Ries sees a trend of "increasing smartness at the edge" in both industrial and home gateways. "We are seeing a large uptick in AI, machine learning, computer vision, and context awareness," says Ries. "Why run face detection software in the cloud and incur delays and bandwidth and computing costs, when the same software could run at the edge?"
As we explored in our [opening story][27] of this IoT series, there are IoT issues related to security, such as loss of privacy and the tradeoffs of living in a surveillance culture. There are also questions about the wisdom of relinquishing one's decisions to AI agents that may be controlled by someone else. These won't be fully solved by containers, snaps, or any other technology.
Perhaps we'd be happier if Alexa handled the details of our lives while we sweat the big stuff, and maybe there's a way to balance privacy and utility. For now, we're still exploring, and that's all for the good.
--------------------------------------------------------------------------------
via: http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/
作者:[Eric Brown][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/
[1]:http://hackerboards.com/atom-based-gateway-taps-new-open-source-iot-cloud-platform/
[2]:http://hackerboards.com/compact-iot-gateway-runs-yocto-linux-on-quark/
[3]:http://hackerboards.com/wireless-crazed-customizable-iot-gateway-uses-arm-or-x86-coms/
[4]:http://hackerboards.com/iot-gateway-runs-linux-on-qoriq-accepts-arduino-shields/
[5]:http://hackerboards.com/linux-and-open-source-hardware-for-building-iot-devices/
[6]:http://hackerboards.com/intel-launches-14nm-atom-e3900-and-spins-an-automotive-version/
[7]:http://hackerboards.com/samsung-adds-first-64-bit-and-cortex-m4-based-artik-modules/
[8]:http://hackerboards.com/new-cortex-m-chips-add-armv8-and-trustzone/
[9]:http://hackerboards.com/exploring-security-challenges-in-linux-based-iot-devices/
[10]:http://hackerboards.com/linux-based-smart-home-hubs-advance-into-ai/
[11]:http://hackerboards.com/open-source-projects-for-the-internet-of-things-from-a-to-z/
[12]:http://hackerboards.com/lightweight-snappy-ubuntu-core-os-targets-iot/
[13]:http://hackerboards.com/canonical-pushes-snap-as-a-universal-linux-package-format/
[14]:http://hackerboards.com/ubuntu-core-16-gets-smaller-goes-all-snaps/
[15]:http://hackerboards.com/files/canonical_ubuntucore16_diagram.jpg
[16]:http://hackerboards.com/open-source-oses-for-the-internet-of-things/
[17]:http://hackerboards.com/private-cloud-server-and-iot-gateway-runs-ubuntu-snappy-on-rpi/
[18]:http://hackerboards.com/linaro-beams-lite-at-internet-of-things-devices/
[19]:http://hackerboards.com/96boards-goes-cortex-m4-with-iot-edition-and-carbon-sbc/
[20]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/%3Ca%20href=
[21]:http://hackerboards.com/files/resinio_resinos_arch.jpg
[22]:http://hackerboards.com/linux-based-smart-home-hubs-advance-into-ai/
[23]:http://hackerboards.com/files/eurotech_reliagate2026.jpg
[24]:http://hackerboards.com/files/advantech_ubc221.jpg
[25]:http://hackerboards.com/files/myomega_mynxg.jpg
[26]:http://hackerboards.com/files/technexion_ls1021aiot_front.jpg
[27]:http://hackerboards.com/an-open-source-perspective-on-the-internet-of-things-part-1/
[28]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/

View File

@ -0,0 +1,233 @@
Neofetch Shows Linux System Information with Distribution Logo
============================================================
Neofetch is a cross-platform, easy-to-use [system information command line script][3] that collects your Linux system information and displays it on the terminal next to an image, which could be your distribution's logo or any ASCII art of your choice.
Neofetch is very similar to the [ScreenFetch][4] or [Linux_Logo][5] utilities, but it is highly customizable and comes with some extra features, as discussed below.
Its main features include: it is fast; it prints a full color image (your distribution's logo in ASCII) alongside your system information; it is highly customizable in terms of which information is printed on the terminal, and where and when; and it can take a screenshot of your desktop when the script exits, enabled by a special flag.
#### Required Dependencies:
1. Bash 3.0+ with ncurses support.
2. w3m-img (occasionally packaged with w3m) or iTerm2 or Terminology for printing images.
3. [imagemagick][1]  for thumbnail creation.
4. A [Linux terminal emulator][2] that supports \033[14t [3], or xdotool, or xwininfo + xprop, or xwininfo + xdpyinfo.
5. On Linux, you need feh, nitrogen or gsettings for wallpaper support.
Important: You can read more about the optional dependencies in the Neofetch Github repository, to check whether your [Linux terminal emulator][6] actually supports \033[14t, and to see any extra dependencies needed for the script to work well on your distro.
### How To Install Neofetch in Linux
Neofetch can be easily installed from third-party repositories on almost all Linux distributions by following below respective installation instructions as per your distribution.
#### On Debian
```
$ echo "deb http://dl.bintray.com/dawidd6/neofetch jessie main" | sudo tee -a /etc/apt/sources.list
$ curl -L "https://bintray.com/user/downloadSubjectPublicKey?username=bintray" -o Release-neofetch.key && sudo apt-key add Release-neofetch.key && rm Release-neofetch.key
$ sudo apt-get update
$ sudo apt-get install neofetch
```
#### On Ubuntu and Linux Mint
```
$ sudo add-apt-repository ppa:dawidd0811/neofetch
$ sudo apt-get update
$ sudo apt-get install neofetch
```
#### On RHEL, CentOS and Fedora
You need to have dnf-plugins-core installed on your system, or else install it with the command below:
```
$ sudo yum install dnf-plugins-core
```
Enable COPR repository and install neofetch package.
```
$ sudo dnf copr enable konimex/neofetch
$ sudo dnf install neofetch
```
#### On Arch Linux
You can either install neofetch or neofetch-git from the AUR using packer or Yaourt.
```
$ packer -S neofetch
$ packer -S neofetch-git
OR
$ yaourt -S neofetch
$ yaourt -S neofetch-git
```
#### On Gentoo
Install app-misc/neofetch from Gentoo/Funtoo's official repositories. However, in case you need the git version of the package, you can install =app-misc/neofetch-9999.
### How To Use Neofetch in Linux
Once you have installed the package, the general syntax for using it is:
```
$ neofetch
```
Note: If w3m-img or [imagemagick][7] is not installed on your system, [screenfetch][8] will be enabled by default and neofetch will display your [ASCII art logo][9] as in the image below.
#### Linux Mint Information
[
![Linux Mint System Information](http://www.tecmint.com/wp-content/uploads/2016/11/Linux-Mint-System-Information.png)
][10]
Linux Mint System Information
#### Ubuntu Information
[
![Ubuntu System Information](http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information.png)
][11]
Ubuntu System Information
If you want to display the default distribution logo as an image, you should install w3m-img or imagemagick on your system as follows:
```
$ sudo apt-get install w3m-img [On Debian/Ubuntu/Mint]
$ sudo yum install w3m-img [On RHEL/CentOS/Fedora]
```
Then run neofetch again, and you will see your Linux distribution's default logo displayed as an image.
```
$ neofetch
```
[
![Ubuntu System Information with Logo](http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information-with-Logo.png)
][12]
Ubuntu System Information with Logo
After running neofetch for the first time, it will create a configuration file with all options and settings: `$HOME/.config/neofetch/config`.
This configuration file lets you, through the `printinfo ()` function, alter the system information that you want to print on the terminal. You can type in new lines of information, modify the information lineup, delete certain lines, and also tweak the script using bash code to manage the information to be printed out.
You can open the configuration file using your favorite editor as follows:
```
$ vi ~/.config/neofetch/config
```
Below is an excerpt of the configuration file on my system showing the `printinfo ()` function.
Neofetch Configuration File
```
#!/usr/bin/env bash
# vim:fdm=marker
#
# Neofetch config file
# https://github.com/dylanaraps/neofetch
# Speed up script by not using unicode
export LC_ALL=C
export LANG=C
# Info Options {{{
# Info
# See this wiki page for more info:
# https://github.com/dylanaraps/neofetch/wiki/Customizing-Info
printinfo() {
info title
info underline
info "Model" model
info "OS" distro
info "Kernel" kernel
info "Uptime" uptime
info "Packages" packages
info "Shell" shell
info "Resolution" resolution
info "DE" de
info "WM" wm
info "WM Theme" wmtheme
info "Theme" theme
info "Icons" icons
info "Terminal" term
info "Terminal Font" termfont
info "CPU" cpu
info "GPU" gpu
info "Memory" memory
# info "CPU Usage" cpu_usage
# info "Disk" disk
# info "Battery" battery
# info "Font" font
# info "Song" song
# info "Local IP" localip
# info "Public IP" publicip
# info "Users" users
# info "Birthday" birthday
info linebreak
info cols
info linebreak
}
.....
```
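For example, to also print disk usage and your local IP address, you would just uncomment the two corresponding lines inside `printinfo()` (both already exist, commented out, in the default config shown above) and run neofetch again:

```
info "Disk" disk
info "Local IP" localip
```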
Type the command below to view all flags and their configuration values you can use with neofetch script:
```
$ neofetch --help
```
To launch neofetch with all functions and flags enabled, employ the `--test` flag:
```
$ neofetch --test
```
You can enable the ASCII art logo again using the `--ascii` flag:
```
$ neofetch --ascii
```
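Many config options also have matching command-line flags, so you can try a tweak without editing the config file first. The flag names below may vary between versions, so confirm them with `neofetch --help` on your system:

```
# Print another distribution's ASCII logo instead of your own.
$ neofetch --ascii_distro arch

# Temporarily hide a few info lines without touching the config file.
$ neofetch --disable theme icons wm
```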
In this article, we have covered a simple and highly configurable/customizable command line script that gathers your system information and displays it on the terminal.
Remember to get in touch with us via the feedback form below to ask any questions or give us your thoughts concerning the neofetch script.
Last but not least, if you know of any similar scripts out there, do not hesitate to let us know, we will be pleased to hear from you.
Visit the [neofetch Github repository][13].
--------------------------------------------------------------------------------
via: http://www.tecmint.com/neofetch-shows-linux-system-information-with-logo
作者:[Aaron Kili ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-imagemagick-in-linux/
[2]:http://www.tecmint.com/linux-terminal-emulators/
[3]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/
[4]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/
[5]:http://www.tecmint.com/linux_logo-tool-to-print-color-ansi-logos-of-linux/
[6]:http://www.tecmint.com/linux-terminal-emulators/
[7]:http://www.tecmint.com/install-imagemagick-in-linux/
[8]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/
[9]:http://www.tecmint.com/linux_logo-tool-to-print-color-ansi-logos-of-linux/
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Linux-Mint-System-Information.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information-with-Logo.png
[13]:https://github.com/dylanaraps/neofetch

View File

@ -0,0 +1,78 @@
Make KDE Plasma 5 Desktop Look & Feel Like Windows 10 Using These Extensions
============================================================
![kde-plasma-to-windows-10](https://iwf1.com/wordpress/wp-content/uploads/2016/11/KDE-Plasma-to-Windows-10.jpg)
Here is how, in a few steps, you can make the KDE Plasma 5 desktop look like Windows 10.
Apart from the menu, much of the Plasma desktop already resembles Win 10 pretty closely. Therefore, it only requires a few light touches to make the two look almost identical.
### The Start Menu
The first and probably most iconic part of making Plasma look like Win 10 is by achieving the Win 10 Start Menu look.
This can easily be done by installing [Zren's Tiled Menu][1].
#### To install:
1. Right click on Plasma Desktop -> Unlock Widgets
2. Right click on Plasma Desktop -> Add Widgets
3. Get new widgets -> Download New Plasma Widgets
4. Search for "Tiled Menu" -> Install
#### To activate:
1. Right click on your current menu button -> Alternatives…
2. Select "Tiled Menu" -> click Switch
[
![KDE Tiled Menu extension.](http://iwf1.com/wordpress/wp-content/uploads/2016/11/KDE-Tiled-Menu-extension-730x619.jpg)
][2]
KDE Tiled Menu extension.
### The Theme
The next thing you might want after the menu is a theme. Luckily, [K10ne][3] offers you a Win 10 theme experience.
#### To install:
1. Open up "System Settings" from Plasma's menu -> Workspace Theme
2. Select "Desktop Theme" from the sidebar -> Get new Theme
3. Search for "K10ne" -> Install
#### To activate:
1. Open up "System Settings" from Plasma's menu -> Workspace Theme
2. Select "Desktop Theme" from the sidebar -> "K10ne"
3. Apply
### The Task Bar
Lastly, you might also want to incorporate a more Win 10 style task bar, just to have a more complete experience.
This time, the package you need, called “Icons-only Task Manager”, is usually installed by default on most distros. If you don’t have it, check your distro’s appropriate channels to find out how to get it.
#### To activate:
1. Right click on Plasma Desktop -> Unlock Widgets
2. Right click on Plasma Desktop -> Add Widgets
3. Drag & drop “Icons-only Task Manager” to the suitable place on your desktops panel
--------------------------------------------------------------------------------
via: https://iwf1.com/make-kde-plasma-5-desktop-look-feel-like-windows-10-using-these-extensions/
作者:[Liron][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/tag/linux
[1]:https://github.com/Zren/plasma-applets/tree/master/tiledmenu
[2]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/KDE-Tiled-Menu-extension.jpg
[3]:https://store.kde.org/p/1153465/

View File

@ -0,0 +1,231 @@
Apache Vs Nginx Vs Node.js And What It Means About The Performance Of WordPress Vs Ghost
============================================================
![Node vs Apache vs Nginx](https://iwf1.com/wordpress/wp-content/uploads/2016/11/Node-vs-Apache-vs-Nginx-730x430.jpg)
Ultimate battle of the giants: can the rising star Node.js prevail against the titans Apache and Nginx?
Just like you, I too have read the various kinds of opinions / facts which are scattered all over the Internet throughout all sorts of sources, some of which I consider reliable, while others, perhaps shady or doubtful.
Many of the sources I read were quite contradictory (ahem, did someone say StackOverflow?[1][2]), while others showed clear yet surprising results[3], which played a crucial role in pushing me towards running my own tests and experiments.
At first, I did some thought experiments, thinking I might avoid all the hassle of building and running physical tests of my own; I was drowning deep in those before I even knew it.
Nonetheless, looking back on it, it seems my initial thoughts were quite accurate after all and have been reaffirmed by my tests; a fact which reminds me of what I learned back in school about Einstein and his photoelectric effect experiments, where he faced a wave-particle duality and initially concluded that the experiments were affected by his state of mind; that is, when he expected the result to be a wave, then so it was, and vice versa.
That said, I’m pretty sure my results won’t prove to be a duality anytime in the near future, although my own state of mind probably did have an effect, to some extent, on them.
### About The Comparison
One of the sources I read came up with a revolutionary way, in my view, to deal with the natural subjectiveness and personal biases an author may have.
A way which I decided to embrace as well; thus I declare the following in advance:
Developers spend many years honing their craft. Those who reach higher levels usually make their own choice based on a host of factors. Its subjective; youll promote and defend your technology decision.
That said, the point of this comparison is not to become another “use whatever suits you, buddy” article. I will make recommendations based on my own experience, requirements and biases. Youll agree with some points and disagree with others; thats great — your comments will help others make an informed choice.
And thank you to Craig Buckler of [SitePoint][2] for re-enlightening me regarding the purpose of comparisons: a purpose I tend to forget as I try to please all visitors.
### About The Tests
All tests were run locally on:
* Intel core i7-2600k machine of 4 cores and 8 threads.
* **[Gentoo Linux][1]** is the operating system used to run the tests.
The tool used for benchmarking: ApacheBench, Version 2.3 <$Revision: 1748469 $>.
The tests included a series of benchmarks, ranging from 1,000 to 10,000 requests with a concurrency of 100 to 1,000; the results were quite surprising.
In addition, a stress test was run to measure how each server functions under high load.
As for the content, the main focus was a static file containing a number of Lorem Ipsum verses with headings and an image.
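The exact ApacheBench invocations are not listed here, but a run matching the parameters described above might look something like this sketch (the URL and flag values are illustrative assumptions, not the article’s actual commands):
```
# Illustrative benchmark: 10,000 requests at a concurrency of 100
# against the static test page.
ab -n 10000 -c 100 http://localhost/index.html

# Illustrative stress test: 100,000 requests with a concurrency of 1,000.
ab -n 100000 -c 1000 http://localhost/index.html
```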
[
![Lorem Ipsum and ApacheBenchmark](http://iwf1.com/wordpress/wp-content/uploads/2016/11/Lorem-Ipsum-and-ApacheBenchmark-730x411.jpg)
][3]
Lorem Ipsum and ApacheBenchmark
The reason I decided to focus on static files is that they remove all sorts of rendering factors that may affect the tests, such as the speed of a programming language interpreter, how well the interpreter is integrated with the server, and so on.
Also, based on my own experience, a substantial part of the average page load time is usually spent on static content such as images, so in order to see which server could save us the most of that precious time, it seems more realistic to focus on that part.
That aside, I also wanted to test a more realistic scenario, where I benchmarked each server while running a dynamic page of different CMSs (more details about that later on).
### The Servers
As I’m running Gentoo Linux, you could say that each of my HTTP servers starts from an optimized state to begin with, since I built them using only the use-flags I actually needed. That is, there shouldn’t be any unnecessary code or modules loaded or running in the background while I ran my tests.
[
![Apache vs Nginx vs Node.js use-flags](http://iwf1.com/wordpress/wp-content/uploads/2016/10/Apache-vs-Nginx-vs-Node.js-use-flags-730x241.jpg)
][4]
Apache vs Nginx vs Node.js use-flags
### Apache
```
$: curl -i http://localhost/index.html
HTTP/1.1 200 OK
Date: Sun, 30 Oct 2016 15:35:44 GMT
Server: Apache
Last-Modified: Sun, 30 Oct 2016 14:13:36 GMT
ETag: "2cf2-54015b280046d"
Accept-Ranges: bytes
Content-Length: 11506
Cache-Control: max-age=600
Expires: Sun, 30 Oct 2016 15:45:44 GMT
Vary: Accept-Encoding
Content-Type: text/html
```
Apache was configured with “event mpm”.
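If you want to verify which MPM your own Apache build is running, a quick check like the following sketch should do (it assumes `apachectl` is available on your PATH):
```
# Print Apache's compile settings and filter for the MPM line;
# on a setup like the one above this should report the event MPM.
apachectl -V | grep -i 'mpm'
```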
### Nginx
```
$: curl -i http://localhost/index.html
HTTP/1.1 200 OK
Server: nginx/1.10.1
Date: Sun, 30 Oct 2016 14:17:30 GMT
Content-Type: text/html
Content-Length: 11506
Last-Modified: Sun, 30 Oct 2016 14:13:36 GMT
Connection: keep-alive
Keep-Alive: timeout=20
ETag: "58160010-2cf2"
Accept-Ranges: bytes
```
Nginx included various tweaks, among them: “sendfile on”, “tcp_nopush on” and “tcp_nodelay on”.
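To confirm that such directives are active in your own setup, one option is to dump the effective configuration and filter for them; a sketch (the `-T` flag is available in Nginx 1.9.2 and later):
```
# Dump the full effective Nginx configuration and search for the tweaks
# mentioned above (run with root privileges).
sudo nginx -T | grep -E 'sendfile|tcp_nopush|tcp_nodelay'
```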
### Node.js
```
$: curl -i http://127.0.0.1:8080
HTTP/1.1 200 OK
Content-Length: 11506
Etag: 15
Last-Modified: Thu, 27 Oct 2016 14:09:58 GMT
Content-Type: text/html
Date: Sun, 30 Oct 2016 16:39:47 GMT
Connection: keep-alive
```
The Node.js server used in the static tests was custom built from scratch, tailor-made to be as lightweight and fast as possible; no external modules (outside of Node’s core) were used.
### The Results
Click on the images to enlarge:
[
![Apache vs Nginx vs Node: performance under requests load (per 100 concurrent users)](http://iwf1.com/wordpress/wp-content/uploads/2016/11/requests-730x234.jpg)
][5]
Apache vs Nginx vs Node: performance under requests load (per 100 concurrent users)
[
![Apache vs Nginx vs Node: performance under concurrent users load](http://iwf1.com/wordpress/wp-content/uploads/2016/11/concurrency-730x234.jpg)
][6]
Apache vs Nginx vs Node: performance under concurrent users load (per 1,000 requests)
### Stress Testing
[
![Apache vs Nginx vs Node: time to complete 100,000 requests with concurrency of 1,000](http://iwf1.com/wordpress/wp-content/uploads/2016/11/stress.jpg)
][7]
Apache vs Nginx vs Node: time to complete 100,000 requests with concurrency of 1,000
### What Can We Learn From The Results?
Judging by the results above, it appears that Nginx can complete the highest number of requests in the least amount of time; in other words, **Nginx** is the fastest HTTP server.
Another thing we can learn, which is quite surprising as a matter of fact, is that Node.js can be faster than Nginx and Apache in some cases, given the right amount of concurrent users and requests.
To those who wonder, the answer is no: when the number of requests was raised during the concurrency test, Nginx returned to the leading position.
Unlike Apache and Nginx, Node.js, especially clustered Node, seems to be indifferent to the number of concurrent users hitting it. As the chart shows, clustered Node keeps a straight line at around 0.1 seconds, while both Apache and Nginx suffer a variation of about 0.2 seconds.
A conclusion that can be drawn from the above statistics is that the smaller the site, the less it matters which server it uses. However, as a site’s audience grows, the impact of the HTTP server becomes more apparent.
The bottom line: when it comes to the raw speed of each server, as depicted by the stress test, my sense is that the most crucial factor behind the performance is not some special algorithm, but rather the programming language each server runs.
Both Apache and Nginx are written in C, an AOT (Ahead Of Time) compiled language, whereas Node.js uses JavaScript, an interpreted / JIT (Just In Time) compiled language. This means there’s additional work for the Node.js server on its way to executing a program.
I base this sense not only upon the results above but also upon further results, which you’ll see below, where I got pretty much the same performance parity even when using an optimized Node.js server built with the popular Express framework.
### The Bigger Picture
At the end of the day, an HTTP server is quite useless without the content it serves. Therefore when looking to compare web servers, a vital part we must take into account is the content we wish to run on top of it.
Although other functions exist as well, the most popular use of an HTTP server is running a website. Hence, to see the real-life implications of each server’s performance, I decided to compare WordPress, the most widely used CMS (Content Management System) in the world, with Ghost, a rising star with a gimmick of using JavaScript at its core.
Will a Ghost web page based on JavaScript alone be able to outperform a WordPress page running on top of PHP and Apache / Nginx?
That’s an interesting question, since Ghost has the advantage of using a single, coherent tool for its actions, with no additional layers needed, whereas WordPress needs to rely on the integration between Apache / Nginx and PHP, an integration which might incur significant performance drawbacks.
Adding to that, there’s also a significant performance difference between PHP and Node.js in favor of the latter, which I’ll briefly talk about below, so things might come out a bit differently than they initially seemed.
### PHP Vs Node.js
In order to compare WordPress and Ghost we must first consider an essential component which affects both.
Essentially, WordPress is a PHP based CMS while Ghost is Node.js (JavaScript) based. Unlike PHP, Node.js enjoys the following advantages:
* Non-blocking I/O
* Event driven
* Modern, less legacy code encumbered
Since there are plenty of comparisons out there explaining and demonstrating Node.js’ raw speed over PHP (including PHP 7), I shall not elaborate further on the subject; Google it, I implore you.
So, given that Node.js outperforms PHP in general, will it be significant enough to make a Node.js website faster than Apache / Nginx with PHP?
### WordPress Vs Ghost
When comparing WordPress to Ghost some would say its like comparing apples to oranges and for the most part Ill agree, as WordPress is a fully fledged CMS while Ghost is basically just a blogging platform at the moment.
However, the two still share many overlapping areas where both can be used to publish thoughts to the world.
Given that premise, how can we compare the two when one runs on a totally different code base than the other, including themes and core features?
Indeed, a scientific lab-conditioned test would be hard to devise. However, in this comparison I’m interested in a more real-life scenario, where WordPress gets to keep its theme and so does Ghost. Thus, the goal here is to have both platforms’ web pages as similar in size as possible and let PHP and Node.js do their magic behind the scenes.
Since the results were measured against different criteria and, most importantly, not exactly the same page sizes, it wouldn’t be fair to display them side by side in a chart. Hence, a table is used instead:
[
![Node vs Nginx vs Apache comparison table](http://iwf1.com/wordpress/wp-content/uploads/2016/11/Node-vs-Nginx-vs-Apache-comparison-table-730x185.jpg)
][8]
Node vs Nginx vs Apache running WordPress & Ghost. Top 2 rows are WordPress, bottom 2 are Ghost
As you can see, despite the fact that Ghost (Node.js) is loading a smaller-sized page (you’d be surprised how much difference 1kB can make), it still remains slower than WordPress with both Nginx and Apache.
Also, does fronting every Node server hit with an Nginx proxy that serves as a load balancer actually contribute to or detract from performance?
Well, according to the table above, if it has any effect at all then it is a detracting one, which is a reasonable outcome, as adding another layer should make things slower. However, the numbers above show it just might be negligible.
But the most important thing the table above shows us is that even though Node.js is faster than PHP, the role an HTTP server plays may surpass the importance of what type of programming language a certain web platform uses.
Of course, on the other hand, if the page loaded were a lot more reliant on server-side script serving, then the results would have wound up a bit different, I suspect.
At the end of it, if a web platform really wants to beat WordPress at its own game, performance-wise that is, the conclusion arising from this comparison is that it will have to have some sort of customized tool, a la PHP-FPM, that communicates with JavaScript directly (instead of running it as a server), so it can fully harness the power of JS to reach better performance.
--------------------------------------------------------------------------------
via: https://iwf1.com/apache-vs-nginx-vs-node-js-and-what-it-means-about-the-performance-of-wordpress-vs-ghost/
作者:[Liron][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/tag/linux
[1]:http://iwf1.com/5-reasons-use-gentoo-linux/
[2]:https://www.sitepoint.com/sitepoint-smackdown-php-vs-node-js/
[3]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/Lorem-Ipsum-and-ApacheBenchmark.jpg
[4]:http://iwf1.com/wordpress/wp-content/uploads/2016/10/Apache-vs-Nginx-vs-Node.js-use-flags.jpg
[5]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/requests.jpg
[6]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/concurrency.jpg
[7]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/stress.jpg
[8]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/Node-vs-Nginx-vs-Apache-comparison-table.jpg

View File

@ -0,0 +1,90 @@
Translating by StdioA
How to Check Timezone in Linux
============================================================
In this short article, we will walk newbies through the various simple ways of checking the system timezone in Linux. Time management on a Linux machine, especially on a production server, is always an important aspect of system administration.
There are a number of time management utilities available on Linux, such as the date and timedatectl commands, to get the current timezone of the system and [synchronize with a remote NTP server][1] to enable automatic and more accurate system time handling.
Well, let us dive into the different ways of finding out our Linux system timezone.
1. We will start by using the traditional date command to find out the present timezone as follows:
```
$ date
```
Alternatively, type the command below, where `%Z` format prints the alphabetic timezone and `%z` prints the numeric timezone:
```
$ date +"%Z %z"
```
[
![Find Linux Timezone](http://www.tecmint.com/wp-content/uploads/2016/10/Find-Linux-Timezone.png)
][2]
Find Linux Timezone
Note: There are many formats in the date man page that you can make use of, to alter the output of the date command:
```
$ man date
```
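For instance, here is a quick sketch combining a few common format specifiers with the timezone ones shown above:
```
$ date +"%A %d %B %Y, %Z %z"
```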
2. Next, you can likewise use timedatectl; when you run it without any options, the command displays an overview of the system, including the timezone, like so:
```
$ timedatectl
```
More so, try to employ a pipeline and [grep command][3] to only filter the timezone as below:
```
$ timedatectl | grep "Time zone"
```
[
![Find Current Linux Timezone](http://www.tecmint.com/wp-content/uploads/2016/10/Find-Current-Linux-Timezone.png)
][4]
Find Current Linux Timezone
Learn how to [set timezone in Linux using timedatectl][5] command.
3. In addition, display the content of the file `/etc/timezone` using the [cat utility][6] to check your timezone:
```
$ cat /etc/timezone
```
[
![Check Timezone of Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Timezone-of-Linux.png)
][7]
Check Timezone of Linux
For RHEL/CentOS/Fedora users, here is one more command for the same purpose:
```
$ grep ZONE /etc/sysconfig/clock
```
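As an additional quick check, on many modern distributions `/etc/localtime` is typically a symlink into `/usr/share/zoneinfo`, so listing it also reveals the configured timezone:
```
$ ls -l /etc/localtime
```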
That’s all! Do not forget to share your thoughts about the article by means of the feedback form below. Importantly, you should look through this time management guide for Linux to get more insight into handling time on your system; it has simple and easy-to-follow examples.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/check-linux-timezone
作者:[Aaron Kili ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-ntp-server-in-centos/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-Linux-Timezone.png
[3]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-Current-Linux-Timezone.png
[5]:http://www.tecmint.com/set-time-timezone-and-synchronize-time-using-timedatectl-command/
[6]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Timezone-of-Linux.png

View File

@ -0,0 +1,40 @@
Introduction to Eclipse Che, a next-generation, web-based IDE
============================================================
![Introduction to Eclipse Che, a next-generation, web-based IDE](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDU_OSDC_OpenClass_520x292_FINAL_JD.png?itok=ETOrrpcP "Introduction to Eclipse Che, a next-generation, web-based IDE")
>Image by : opensource.com
Correctly installing and configuring an integrated development environment, workspace, and build tools in order to contribute to a project can be a daunting or time-consuming task, even for experienced developers. Tyler Jewell, CEO of [Codenvy][1], faced this problem when attempting to set up a simple Java project while working on getting his coding skills back after dealing with some health issues and having spent time in managerial positions. After multiple days of struggling, Jewell could not get the project to work, but inspiration struck him. He wanted to make it so that "anyone, anytime can contribute to a project without installing software."
It is this idea that led to the development of [Eclipse Che][2].
Eclipse Che is a web-based integrated development environment (IDE) and workspace. Workspaces in Eclipse Che are bundled with an appropriate runtime stack and serve their own IDE, all in one tightly integrated bundle. A project in one of these workspaces has everything it needs to run without the developer having to do anything more than picking the correct stack when creating a workspace.
The ready-to-go bundled stacks included with Eclipse Che cover most of the modern popular languages. There are stacks for C++, Java, Go, PHP, Python, .NET, Node.js, Ruby on Rails, and Android development. A Stack Library provides even more options and if that is not enough, there is the option to create a custom stack that can provide specialized environments.
Eclipse Che is a full-featured IDE, not a simple web-based text editor. It is built on Orion and the JDT. Intellisense and debugging are both supported, and version control with both Git and Subversion is integrated. The IDE can even be shared by multiple users for paired programming. With just a web browser, a developer can write and debug their code. However, if a developer would prefer to use a desktop-based IDE, it is possible to connect to the workspace with a SSH connection.
One of the major technologies underlying Eclipse Che is [Linux containers][3], using Docker. Workspaces are built using Docker, and installing a local copy of Eclipse Che requires nothing but Docker and a small script file. The first time `che.sh start` is run, the requisite Docker containers are downloaded and run. If setting up Docker to install Eclipse Che is too much work for you, Codenvy does offer online hosting options. They even provide 4GB workspaces for open source projects for any contributor to the project. Using Codenvy's hosting option or another online hosting method, it is possible to provide a URL to potential contributors that will automatically create a workspace complete with a project's code, all with one click.
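For a rough idea of what a local setup looks like, the sketch below assumes Docker is already installed and running, and that the `che.sh` launcher script has already been obtained from the Eclipse Che project (both are assumptions; the article only names the script):
```
# Sketch: make the launcher executable and start Che.
# The first run downloads and starts the required Docker containers.
chmod +x che.sh
./che.sh start
```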
Beyond Codenvy, contributors to Eclipse Che include Microsoft, Red Hat, IBM, Samsung, and many others. Several of the contributors are working on customized versions of Eclipse Che for their own specific purposes. For example, Samsung's [Artik IDE][4] for IoT projects. A web-based IDE might turn some people off, but Eclipse Che has a lot to offer, and with so many big names in the industry involved, it is worth checking out.
* * *
If you are interested in learning more about Eclipse Che, [CheConf 2016][5] takes place on November 15. CheConf 2016 is an online conference and registration is free. Sessions start at 11:00 am Eastern time (4:00 pm UTC) and end at 5:30 pm Eastern time (10:30 pm UTC).
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/11/introduction-eclipse-che
作者:[Joshua Allen Holm][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/holmja
[1]:http://codenvy.com/
[2]:http://eclipse.org/che
[3]:https://opensource.com/resources/what-are-linux-containers
[4]:http://eclipse.org/che/artik
[5]:https://eclipse.org/che/checonf/

View File

@ -0,0 +1,175 @@
Build, Deploy and Manage Custom Apps with IBM Bluemix
============================================================
![IBM Blue mix logo](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/IBM-Blue-mix-logo.jpg?resize=300%2C266)
_IBMs Bluemix affords developers an opportunity to build, deploy and manage custom apps. Bluemix is built on Cloud Foundry. It supports a number of programming languages as well as OpenWhisk, which allows developers to call any function without the need for resource management._
Bluemix is an open standards, cloud-based platform implemented by IBM. It has an open architecture which enables organisations to create, develop and manage their applications on the cloud. It is based on Cloud Foundry and hence can be considered as a Platform as a Service (PaaS). With Bluemix, developers need not worry about cloud configurations, but can concentrate on their applications. Cloud configurations will be done automatically by Bluemix.
Bluemix also provides a dashboard, with which developers can create, manage and view services and applications, while monitoring resource usage also.
It supports the following programming languages:
* Java
* Python
* Ruby on Rails
* PHP
* Node.js
It also supports OpenWhisk (Function as a Service), which is also an IBM product that allows developers to call any function without requiring any resource management.
![Figure 1 An Overview of IBM Bluemix](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-1-An-Overview-of-IBM-Bluemix.jpg?resize=296%2C307)
Figure 1: An Overview of IBM Bluemix
![Figure 2 The IBM Bluemix architecture](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-2-The-IBM-Bluemix-architecture.jpg?resize=350%2C239)
Figure 2: The IBM Bluemix architecture
![Figure 3 Creating an organisation in IBM Bluemix](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-3-Creating-an-organisation-in-IBM-Bluemix.jpg?resize=350%2C280)
Figure 3: Creating an organisation in IBM Bluemix
**How IBM Bluemix works**
Bluemix is built on top of IBMs SoftLayer IaaS (Infrastructure as a Service). It uses Cloud Foundry as an open source PaaS. It starts by pushing code through Cloud Foundry, which plays the role of combining the code and suitable runtime environment based on the programming language in which the application is written. IBM services, third party services or community built services can be used for different functionalities. Secure connectors can be used to connect to on-premise systems and the cloud.
![Figure 4 Setting up Space in IBM Bluemix](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-4-Setting-up-Space-in-IBM-Bluemix.jpg?resize=350%2C267)
Figure 4: Setting up Space in IBM Bluemix
![Figure 5 The app template](http://i2.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-5-The-app-template.jpg?resize=350%2C135)
Figure 5: The app template
![Figure 6 IBM Bluemix supported programming languages](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-6-IBM-Bluemix-supported-programming-languages.jpg?resize=350%2C173)
Figure 6: IBM Bluemix supported programming languages
**Creating an app in Bluemix**
In this article, we will create a sample Hello World application in IBM Bluemix by using the Liberty for Java starter pack, in just a few simple steps.
1\. Go to [_https://console.ng.bluemix.net/registration/_][2].
2\. Confirm the Bluemix account.
3\. Click on the confirmation link in the mail to complete the sign up process.
4\. Give your email ID and click on _Continue_ to log in.
5\. Enter the password and click on _Log in._
6. _Set up_ and _Environment_ share resources in specific regions.
7\. Create Space to manage access and roll-back in Bluemix. We can map Spaces to development stages such as dev, test, uat, pre-prod and prod.
![Figure 7 Naming the app](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-7-Naming-the-app.jpg?resize=350%2C133)
Figure 7: Naming the app
![Figure 8 Knowing when the app is ready](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-8-Knowing-when-the-app-is-ready.jpg?resize=350%2C170)
Figure 8: Knowing when the app is ready
![Figure 9 The IBM Bluemix Java App](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-9-The-IBM-Bluemix-Java-App.jpg?resize=350%2C151)
Figure 9: The IBM Bluemix Java App
8\. Once this initial configuration is completed, click on _I’m ready_. _Good to Go_!
9\. Verify the IBM Bluemix dashboard after successfully logging in, specifically sections such as Cloud Foundry Apps where 2GB is available and Virtual Server where 0 instances are available, as of now.
10\. Click on _Create app_. Choose the template for app creation. In our case, we will go for a Web app.
11\. How do you get started? Click on Liberty for Java, and then verify the description.
12\. Click on _Continue_.
13\. What do you want to name your new app? For this article, lets use osfy-bluemix-tutorial and click on _Finish_.
14\. It will take some time to create resources and to host an application on Bluemix.
15\. In a few minutes, your app will be up and running. Note the URL of the application.
16\. Visit the applications URL _http://osfy-bluemix-tutorial.au-syd.mybluemix.net/_. Bingo, our first Java application is up and running on IBM Bluemix.
17\. To verify the source code, click on _Files_ and navigate to different files and folders in the portal.
18\. The _Logs_ section provides all the activity logs, starting from the applications creation.
19\. The _Environment Variables_ section provides details on all the environment variables of VCAP_Services as well as those that are user defined.
20\. To verify the applications consumption of resources, go to the Liberty for Java section.
21\. The _Overview_ section of each application contains details regarding resources, the applications health, and activity logs, by default.
22\. Open Eclipse, go to the Help menu and click on _Eclipse Marketplace_.
23\. Find _IBM Eclipse tools_ for _Bluemix_ and click on _Install_.
24\. Confirm the selected features and install them in Eclipse.
25\. Download the application starter code. Import it into Eclipse by clicking on _File Menu_, select _Import Existing Projects_ into _Workspace_ and start modifying the existing code.
![Figure 10 The Java app source files](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-10-The-Java-app-source-files.jpg?resize=350%2C173)
Figure 10: The Java app source files
![Figure 11 The Java app logs](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-11-The-Java-app-logs.jpg?resize=350%2C133)
Figure 11: The Java app logs
![Figure 12 Java app -- Liberty for Java](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-12-Java-app-Liberty-for-Java.jpg?resize=350%2C169)
Figure 12: Java app — Liberty for Java
**Why IBM Bluemix?**
Here are some compelling reasons to use IBM Bluemix:
* Supports multiple languages and platforms
* Free trial
1\. Minimal registration process
2\. No credit card required
3\. 30-day trial period with quotas of 2GB of runtime, 20 services, 500 routes
4\. Unlimited access to standard support
5\. No production use limitations
* Pay only for the use of each runtime and service
* Quick set-up hence faster time to market
* Continuous delivery of new features
* Secure integration with on-premise resources
* Use cases
1\. Web applications and mobile back-ends
2\. APIs and on-premise integration
* DevOps services are available as SaaS on the cloud and support continuous delivery of:
1\. Web IDE
2\. SCM
3\. Agile planning
4\. Delivery pipeline service
--------------------------------------------------------------------------------
via: http://opensourceforu.com/2016/11/build-deploy-manage-custom-apps-ibm-bluemix/
作者:[MITESH_SONI][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://opensourceforu.com/author/mitesh_soni/
[1]:http://opensourceforu.com/wp-content/uploads/2016/10/Figure-7-Naming-the-app.jpg
[2]:https://console.ng.bluemix.net/registration/

View File

@ -0,0 +1,68 @@
Is Mozilla Firefox Collecting Your Data Without Your Consent?
============================================================
![Mozilla Firefox collects your data](https://iwf1.com/wordpress/wp-content/uploads/2016/11/Mozilla-Firefox-collects-your-data-730x429.jpg)
A geolocation service packaged with the Firefox web browser keeps running in the background, even while the browser itself is closed.
We’ve still not fully recovered from the news about the scandalous browser add-on which was meant to protect users’ privacy but instead **[sells their information to third-party companies][1]**, and already we are perhaps facing another, much bigger in scale, new outrage.
**MLS** is Mozilla Location Service which lets devices determine their location based on network infrastructure like WiFi access points, cell towers and Bluetooth beacons.
Pretty much, it is Mozillas equivalent to Google Location Service which is the service used when you turn on your GPS on Android devices and opt for High accuracy mode.
Those of you who have ever experienced GPS issues will probably appreciate how accurate this mode actually is.
But besides being able to accurately pinpoint your location, another side of it is that the service, through the use of WiFi networks, is able to collect personally identifiable information of both the **users who knowingly contribute to the database** and the **owners of the WiFi devices being scanned**.
That being said, Mozilla also mentions you can opt out from the service, but can you really?
### When The Background Becomes Your Privacy Foreground
Being a [crowdsource][2] project, in order to maintain and grow MLS, Mozilla is in fact dependent of users contributions, thus theyve developed a number of ways through which users can participate.
One of these ways, meant to be used by end users is a Android app called Stumbler:
> “Mozilla Stumbler is an open-source wireless network scanner which collects GPS, cellular and wireless network metadata for our crowd-sourced location database.”[1]
Yet Stumbler is not only a standalone app but also a service used by Firefox for Android “to contribute data and enhance” MLS.
The problem with that service lies in the fact that it runs in the background without most users being aware of it, **even though you may have disabled it**.
According to Mozilla[1], to enable the service you need to open the Settings menu (in Firefox for Android) -> Open the Privacy section -> scroll to the bottom to see the Data Choices, and finally, Check the Mozilla Location Service box.
[
![Mozilla Location Services is unchecked yet Stumbler is on](http://iwf1.com/wordpress/wp-content/uploads/2016/11/Mozilla-Location-Services-is-unchecked-yet-Stumler-is-on-730x602.jpg)
][3]
Mozilla Location Services is unchecked yet Stumbler is on
In reality, you’ll find that the Stumbler service is **actively running on your device in the background**, meaning it’s practically invisible because it has no interface, even though the MLS box is unchecked, and furthermore, even if all the Data Choices check boxes are unchecked and the Firefox browser itself is closed.
Apparently, the only way to stop Stumbler is by ending it directly; however, to do so you’ll first need a way to detect that it’s running, and ultimately that’s just a temporary solution that only lasts until the device’s next reboot.
### How To Stay Safer?
In order to exempt yourself from MLS data collection, there are still a few methods you may practice, in the hope that those won’t be disregarded by Mozilla just like the MLS check box in Firefox for Android.
Make your wireless network hidden or add the string “_nomap” to the end of its name, e.g. “myWirelessNetwork” becomes “myWirelessNetwork_nomap”. This should signal to Mozilla’s applications that you do not wish to participate in their data collection.
As for the Stumbler service on Android, due to it being a service (as opposed to a process), you probably won’t be able to see it in the list of running processes / recent apps. Thus, either use a dedicated app to close it, or enable “Developer Options”, go to “Running services”, tap on Firefox and finally, stop “stumbler”.
--------------------------------------------------------------------------------
via: https://iwf1.com/is-mozilla-firefox-collecting-your-data-without-your-consent/
作者:[Liron][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/is-mozilla-firefox-collecting-your-data-without-your-consent/
[1]:https://iwf1.com/shock-this-popular-browser-add-on-sells-your-browsing-history/
[2]:https://en.wikipedia.org/wiki/Crowdsourcing
[3]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/Mozilla-Location-Services-is-unchecked-yet-Stumler-is-on.jpg

View File

@ -0,0 +1,177 @@
How to Check Which Apache Modules are Enabled/Loaded in Linux
============================================================
In this guide, we will briefly talk about the Apache web server front-end and how to list or check which Apache modules have been enabled on your server.
Apache is built based on the principle of modularity; this way, it enables web server administrators to add different modules to extend its primary functionality and [enhance apache performance][5] as well.
Some of the common Apache modules include:
1. mod_ssl, which offers [HTTPS for Apache][1].
2. mod_rewrite, which allows for matching URL patterns with regular expressions and performing a transparent redirect using [.htaccess tricks][2], or applying an HTTP status code response.
3. mod_security, which helps you [protect Apache against Brute Force or DDoS attacks][3].
4. mod_status, which allows you to [monitor Apache web server load and page statistics][4].
In Linux, the apachectl or apache2ctl command is used to control the Apache HTTP Server interface; it is a front-end to Apache.
You can display the usage information for apache2ctl as below:
```
$ apache2ctl help
OR
$ apachectl help
```
apachectl help
```
Usage: /usr/sbin/httpd [-D name] [-d directory] [-f file]
[-C "directive"] [-c "directive"]
[-k start|restart|graceful|graceful-stop|stop]
[-v] [-V] [-h] [-l] [-L] [-t] [-S]
Options:
-D name : define a name for use in directives
-d directory : specify an alternate initial ServerRoot
-f file : specify an alternate ServerConfigFile
-C "directive" : process directive before reading config files
-c "directive" : process directive after reading config files
-e level : show startup errors of level (see LogLevel)
-E file : log startup errors to file
-v : show version number
-V : show compile settings
-h : list available command line options (this page)
-l : list compiled in modules
-L : list available configuration directives
-t -D DUMP_VHOSTS : show parsed settings (currently only vhost settings)
-S : a synonym for -t -D DUMP_VHOSTS
-t -D DUMP_MODULES : show all loaded modules
-M : a synonym for -t -D DUMP_MODULES
-t : run syntax check for config files
```
apache2ctl can function in two possible modes, a Sys V init mode and pass-through mode. In the SysV init mode, apache2ctl takes simple, one-word commands in the form below:
```
$ apachectl command
OR
$ apache2ctl command
```
For instance, to start Apache and check its status, run these two commands with root user privileges by employing the [sudo command][6], in case you are a normal user:
```
$ sudo apache2ctl start
$ sudo apache2ctl status
```
Check Apache Status
```
tecmint@TecMint ~ $ sudo apache2ctl start
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
httpd (pid 1456) already running
tecmint@TecMint ~ $ sudo apache2ctl status
Apache Server Status for localhost (via 127.0.0.1)
Server Version: Apache/2.4.18 (Ubuntu)
Server MPM: prefork
Server Built: 2016-07-14T12:32:26
-------------------------------------------------------------------------------
Current Time: Tuesday, 15-Nov-2016 11:47:28 IST
Restart Time: Tuesday, 15-Nov-2016 10:21:46 IST
Parent Server Config. Generation: 2
Parent Server MPM Generation: 1
Server uptime: 1 hour 25 minutes 41 seconds
Server load: 0.97 0.94 0.77
Total accesses: 2 - Total Traffic: 3 kB
CPU Usage: u0 s0 cu0 cs0
.000389 requests/sec - 0 B/second - 1536 B/request
1 requests currently being processed, 4 idle workers
__W__...........................................................
................................................................
......................
Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process
```
And when operating in pass-through mode, apache2ctl can take all the Apache arguments in the following syntax:
```
$ apachectl [apache-argument]
$ apache2ctl [apache-argument]
```
All the Apache-arguments can be listed as follows:
```
$ apache2 help [On Debian based systems]
$ httpd help [On RHEL based systems]
```
#### Check Enabled Apache Modules
Therefore, in order to check which modules are enabled on your Apache web server, run the applicable command below for your distribution, where `-t -D DUMP_MODULES` is an Apache argument to show all enabled/loaded modules:
```
--------------- On Debian based systems ---------------
$ apache2ctl -t -D DUMP_MODULES
OR
$ apache2ctl -M
```
```
--------------- On RHEL based systems ---------------
$ apachectl -t -D DUMP_MODULES
OR
$ httpd -M
$ apache2ctl -M
```
List Apache Enabled Loaded Modules
```
[root@tecmint httpd]# apachectl -M
Loaded Modules:
core_module (static)
mpm_prefork_module (static)
http_module (static)
so_module (static)
auth_basic_module (shared)
auth_digest_module (shared)
authn_file_module (shared)
authn_alias_module (shared)
authn_anon_module (shared)
authn_dbm_module (shared)
authn_default_module (shared)
authz_host_module (shared)
authz_user_module (shared)
authz_owner_module (shared)
authz_groupfile_module (shared)
authz_dbm_module (shared)
authz_default_module (shared)
ldap_module (shared)
authnz_ldap_module (shared)
include_module (shared)
....
```
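If you only care about a single module, you can filter the list instead of reading through it; for example, a quick sketch using mod_rewrite as the module of interest:
```
$ apache2ctl -M | grep -i rewrite     [On Debian based systems]
$ apachectl -M | grep -i rewrite      [On RHEL based systems]
```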
That’s all! In this simple tutorial, we explained how to use the Apache front-end tools to list enabled/loaded Apache modules. Keep in mind that you can get in touch using the feedback form below to send us your questions or comments concerning this guide.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/check-apache-modules-enabled
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-lets-encrypt-ssl-certificate-to-secure-apache-on-rhel-centos/
[2]:http://www.tecmint.com/apache-htaccess-tricks/
[3]:http://www.tecmint.com/protect-apache-using-mod_security-and-mod_evasive-on-rhel-centos-fedora/
[4]:http://www.tecmint.com/monitor-apache-web-server-load-and-page-statistics/
[5]:http://www.tecmint.com/apache-performance-tuning/
[6]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/

View File

@ -1,59 +0,0 @@
Training vs. hiring to meet the IT needs of today and tomorrow
培训还是雇人,来满足当今和未来的 IT 需求
================================================================
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/cio_talent_4.png?itok=QLhyS_Xf)
在数字化时代,由于企业需要不断跟上工具和技术更新换代的步伐,对 IT 技能的需求也稳定增长。对于企业来说,寻找和雇佣那些拥有令人垂涎能力的创新人才,是非常不容易的。同时,培训内部员工来使他们接受新的技能和挑战,需要一定的时间。而且,这也往往满足不了需求。
[Sandy Hill][1] 对多种 IT 学科涉及到的多项技术都很熟悉。她作为 [Pegasystems][2] 项目的 IT 主管,负责的 IT 团队涉及的领域从应用的部署到数据中心的运营。更重要的是Pegasystems 开发应用来帮助销售,市场,服务以及运行团队简化操作,联系客户。这意味着她需要掌握和利用 IT 内部资源的最佳方法,面对公司客户遇到的 IT 挑战。
![](https://enterprisersproject.com/sites/default/files/CIO_Q%20and%20A_0.png)
**企业家项目TEP这些年你是如何调整培训重心的**
**Hill**:在过去的几十年中,我们经历了爆炸式的发展,所以现在我们要实现更多的全球化进程。随之而来的培训方面,将确保每个人都在同一起跑线上。
我们大多的关注点已经转移到培养员工使用新的产品和工具上,这些新产品和工具的实现,能够推动创新,并提高工作效率。例如,我们实现了资产管理系统; 以前我们是没有的。因此我们需要为全部员工做培训,而不是雇佣那些已经知道该产品的人。当我们正在发展的时候,我们也试图保持紧张的预算和稳定的职员总数。所以,我们更愿意在内部培训而不是雇佣新人。
**TEP说说培训方法吧你是怎样帮助你的员工发展他们的技能**
**Hill**:我要求每一位员工制定一个技术性的和非技术性的训练目标。这作为他们绩效评估的一部分。他们的技术性目标需要与他们的工作职能相符,非技术行目标则着重发展一项软技能,或是学一些专业领域之外的东西。我每年对职员进行一次评估,看看差距和不足之处,以使团队保持全面发展。
**TEP你的训练计划能够在多大程度上减轻招聘和保留职员的问题**
**Hill**:使我们的职员对学习新的技术保持兴奋,让他们的技能更好。让职员知道我们重视他们并且让他们在擅长的领域成长和发展,以此激励他们。
**TEP你有没有发现哪种培训是最有效的**
**HILL**:我们使用几种不同的、我们认为有效的培训方法。当有新的或特殊的项目时,我们会尝试在项目中加入一套由供应商主导的培训课程。要是这个方法行不通,我们就进行异地培训。我们也会购买一些在线的培训课程。我也鼓励职员每年至少参加一次会议,以了解行业的动向。
**TEP你有没有发现有哪些技能雇佣新人要比培训现有员工要好**
**Hill**:这和项目有关。最近有一个计划,试图实现 OpenStack而我们根本没有这方面的专家。所以我们与一家从事这一领域的咨询公司合作。我们利用他们的专业知识帮助我们运行项目并现场培训我们的内部团队成员。让内部员工在完成日常工作的同时学习他们需要的技能这是一项艰巨的任务。
顾问帮助我们确定我们需要的对某一技术熟练的的员工人数。这使我们能够对员工进行评估,看看是否存在缺口。如果存在人员上的缺口,我们还需要额外的培训或是员工招聘。我们也确实雇佣了一些承包商。另一个选择是让一些全职员工进行为期六至八周的培训,但我们的项目模式不容许这么做。
**TEP想一下你最近雇佣的员工他们的那些技能特别能够吸引到你**
**Hill**:在最近的招聘中,我侧重于软技能。除了扎实的技术能力外,他们需要能够在团队中进行有效的沟通和工作,要有说服他人,谈判和解决冲突的能力。
IT 人一向独来独往。他们一般不是社交最多的人。现在IT 越来越整合到组织中,它为其他业务部门提供有用的更新报告和状态报告的能力是至关重要的,这也表明 IT 是积极的存在,并将取得成功。
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2016/6/training-vs-hiring-meet-it-needs-today-and-tomorrow
作者:[Paul Desmond][a]
译者:[Cathon](https://github.com/Cathon)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://enterprisersproject.com/user/paul-desmond
[1]: https://enterprisersproject.com/user/sandy-hill
[2]: https://www.pega.com/pega-can?&utm_source=google&utm_medium=cpc&utm_campaign=900.US.Evaluate&utm_term=pegasystems&gloc=9009726&utm_content=smAXuLA4U|pcrid|102822102849|pkw|pegasystems|pmt|e|pdv|c|

View File

@ -1,57 +0,0 @@
宽松开源许可证的崛起意味着什么
====
为什么像 GNU GPL 这样的限制性许可证越来越不受青睐。
“如果你用了任何开源软件, 那么你软件的其他部分也必须开源。” 这是微软 CEO Steve Ballmer 2001 年说的, 尽管他说的不对, 还是引发了人们对自由软件的 FUD (恐惧, 不确定和怀疑)。 大概这才是他的意图。
对开源软件的这些 FUD 主要与开源许可有关。 现在有许多不同的许可证, 当中有些限制比其他的更严格(也有人称“更具保护性”)。 诸如 GNU 通用公共许可证 GPL 这样的限制性许可证使用了 copyleft 的概念。 copyleft 赋予人们自由发布软件副本和修改版的权力, 只要衍生工作给予人们同样的权力。 bash 和 GIMP 等开源项目就是使用了 GPL v3。 还有一个 Affero GPL 的许可证, 它为网络上的软件(如网络服务)提供了 copyleft 许可。
这意味着, 如果你使用了这种许可的代码, 然后加入了你自己的专有代码, 那么在一些情况下, 整个代码, 包括你的代码也就遵从这种限制性开源许可证。 Ballmer 说的大概就是这类的许可证。
但宽松许可证不同。 比如, 只要保留属性且不要求开发者承担责任, MIT 许可证允许任何人任意使用开源代码, 包括修改和出售。 另一个比较流行的宽松开源许可证, Apache 许可证 2.0 也把专利权从贡献者授予用户。 JQuery, .NET Core 和 Rails 使用了 MIT 许可证, 使用 Apache 许可证 2.0 的软件包括安卓, Apache 和 Swift。
两种许可证类型最终都是为了让软件更有用。 限制性许可证促进了参与和分享的开源理念, 使每个人从软件中得到最多的利益。 而宽松许可证通过允许人们任意使用软件来确保人们能从软件中得到最多的利益, 即使这意味着他们可以使用代码, 修改它, 据为己有,甚至以专有软件出售,而不做任何回报。
开源许可证管理公司 Black Duck Software 的数据显示, 去年使用最多的开源许可证是限制性许可证 GPL 2.0, 市占率大约 25%。 宽松许可证 MIT 和 Apache 2.0 次之, 市占率分别为 18% 和 16% 再后面是 GPL 3.0, 市占率大约 10%。 这样来看, 限制性许可证占 35% 宽松许可证占 34% 几乎是平手。
但这个数据没有显示趋势。 Black Duck 的数据显示, 从 2009 年到 2015 年的六年间, MIT 许可证的市占率上升了 15.7% Apache 的市占率上升了 12.4%。 在这段时期, GPL v2 和 v3 的市占率惊人地下降了 21.4%。 换言之, 在这段时期里, 大量市占率从限制性许可证移动到宽松许可证。
这个趋势还在继续。 Black Duck 的[最新数据][1]显示, MIT 现在的市占率为 26% GPL v2 为 21% Apache 2 为 16% GPL v3 为 9%。 即 30% 的限制性许可证和 42% 的宽松许可证--与前一年的 35% 的限制许可证和 34% 的宽松许可证相比, 发生了重大的转变。 对 GitHub 上使用许可证的[调查研究][2]证实了这种转变。 它显示 MIT 以压倒性的 45% 占有率成为最流行的许可证, 与之相比, GPL v2 只有 13% Apache 11%。
![](http://images.techhive.com/images/article/2016/09/open-source-licenses.jpg-100682571-large.idge.jpeg)
### 引领趋势
从限制性许可证到宽松许可证,这么大的转变背后是什么呢? 是公司害怕如果使用了限制性许可证的软件他们就会像Ballmer说的那样失去自己私有软件的控制权了吗 事实上, 可能就是如此。 比如, Google [禁用了 Affero GPL 软件][3]。
[Instructional Media + Magic][4] 的主席 Jim Farmer 是一个教育开源技术的开发者。 他作为很多公司为避开法律问题而不使用限制性许可证。 “问题就在于复杂性。 许可证的复杂性越高, 被人因为某此行为而告上法庭的可能性越高。 高复杂性更可能带来麻烦“, 他说。
他补充说, 这种对限制性许可证的恐惧正被律师们驱动着, 许多律师建议自己的客户使用 MIT 或 Apache 2.0 许可证的软件, 并明确反对使用 Affero 许可证的软件。
他说, 这会对软件开发者产生影响, 因为如果公司都避开限制性许可证软件的使用,开发者想要自己的软件被使用, 就更会把新的软件使用宽松许可证。
但 SalesAgility 也就是开源 SuiteCRM 的那家公司,的 CEO Greg Soper 认为这种到宽松许可证的转变也由一些开发者驱动。 “看看像 Rocket.Chat 这样的应用。 开发者本可以选择 GPL 2.0 或 Affero 许可证, 但他们选择了宽松许可证,” 他说。 “这样可以给这个应用最大的机会, 因为专有软件厂商可以使用它, 不会伤害到他们的产品, 且不需要把他们的产品也使用开源许可证。 这样如果开发者想要让第三方应用使用他的应用的话, 他有理由选择宽松许可证。”
Soper 指出, 限制性许可证的设计,就是通过阻止开发者拿了别人的代码,做了修改,但不把结果回报给社区来帮助开源项目。 “ Affero 许可证对我们的产品很重要, 因为如果有人 fork 了,并做得比我们好, 却又不把代码回报回来, 就会杀死我们的产品,” 他说。 “ 对 Rocket.Chat 则不同, 因为如果它使用 Affero 那么它会污染公司的 IP 所以公司不会使用它。 不同的许可证有不同的使用案例。”
曾在 Gnome 现在是 LibreOffice 的 OpenOffice 上工作的开源开发者 Michael Meeks 同意 Jim Farmer 的,许多公司确实出于对法律的担心,而选择使用宽松许可证的软件的观点。 “copyleft 许可证有风险, 但同样也有巨大的益处。 遗憾的是人们都听从律师, 而律师只是讲风险, 但从不告诉你有些事是安全的。”
Ballmer 发表他不正确的言论已经 15 年了, 但它产生的 FUD 还是有影响--即使从限制性许可证到宽松许可证的转变并不是他想要的。
--------------------------------------------------------------------------------
via: http://www.cio.com/article/3120235/open-source-tools/what-the-rise-of-permissive-open-source-licenses-means.html
作者:[Paul Rubens ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.cio.com/author/Paul-Rubens/
[1]: https://www.blackducksoftware.com/top-open-source-licenses
[2]: https://github.com/blog/1964-open-source-license-usage-on-github-com
[3]: http://www.theregister.co.uk/2011/03/31/google_on_open_source_licenses/
[4]: http://immagic.com/

View File

@ -1,145 +0,0 @@
聊聊Docker Datacenter在AWS和AZURE上的应用
===================================================
三言两语介绍一下AWS快速启动应用和Azure Marketplace上产品化和高可用性的Docker部署模板。
Docker Datacenter AWS快速启动应用使用CloudFormation模板和AZure Marketpalce上预编译的模板来简化企业CaaS Docker环境在公有云基础设施下的部署。
为敏捷应用而生的CaaS平台为各种规模企业提供容器、集群编排和管理等各种简单、安全和可伸缩的服务。使用为Docker Datacenter预编译的崭新的云模板开发者和IT运维人员可以无缝的把它们的应用迁移到亚马逊EC2或者微软的Azure环境而无需修改任何代码。现在企业可以快速实现更高的计算和运营效率Docker可以通过短短几步操作支持容器管理和编排。
### 什么是Docker Datacenter ?
Docker Datacenter包括Docker通用控制平面Docker可信注册表和与客户的应用服务等级协议相匹配的商用CS Docker引擎。
- Docker通用控制平面(UCP),一种企业级的集群管理方案,帮助客户通过单个管理仪表盘管理整个集群
- Docker可信注册表DTR 一种映像管理方案帮助客户安全存储和管理Docker映像
- 商用版的Docker引擎
![](http://img.scoop.it/lVraAJgJbjAKqfWCLtLuZLnTzqrqzN7Y9aBZTaXoQ8Q=)
### 在AWS上快速布置Docker Datacenter
秉承Docker与AWS最佳实践参照AWS快速启动教程你可以在AWS云上快速部署Docker容器。Docker Datacenter快速应用基于模块化和可定制的CloudFormation模板客户可以在其之上增加额外功能或者为自己的Docker部署修改模板。
[AWS的Docker Datacenter应用说明](https://youtu.be/aUx7ZdFSkXU)
#### 架构
![](http://img.scoop.it/sZ3_TxLba42QB-r_6vuApLnTzqrqzN7Y9aBZTaXoQ8Q=)
AWS Cloudformation通过创建AWS资源开始安装进程这些AWS需要的资源包括VPC, 安全组公有与私有子网因特网网关NAT网关与S3 bucket。
然后AWS Cloudformation启动第一个UCP控制器实例紧接着安装Docker引擎和UCP容器。它把UCP控制器创建的根证书备份到S3。一旦第一个UCP控制器成功运行其他UCP控制器UCP集群结点和第一个DTR复制进程就会被触发。和第一个UCP控制器结点类似其他所有结点创建进程也都由商业版的Docker引擎开始然后安装并运行UCP和DTR容器以加入集群。两个弹性负载均衡器ELB一个分配给UCP另外一个为DTR服务它们启动、自动完成配置并在两个可用区Availability Zone之间提供弹性负载均衡。
除些之外如有需要UCP控制器和结点在ASG中启动并提供扩展功能。这种架构确保UCP和DTR两者都部署在两个AZ上以增强弹性与高可靠性。在公有或者私有HostedZoneRoute53用来动态注册或者配置UCP和DTR 。
![](http://img.scoop.it/HM7Ag6RFvMXvZ_iBxRgKo7nTzqrqzN7Y9aBZTaXoQ8Q=)
### 快速启动模板的核心功能如下:
- 创建VPC不同AZ上的私有和公有子网ELBNAT网关因特网网关自动伸缩组它们全部基于AWS最佳实践
- 为DDC创建一个S3 bucket其应用于证书备份和DTR映像存储DTR需要额外配置
- 在客户的VPC范畴跨多AZ部署3个UCP控制器
- 创建预配置正常检测的UCP ELB
- 创建一个DNS记录并关联到UCP ELB
- 创建可伸缩的UCP结点集群
- 在VPC范畴内跨多AZ创建3个DTR副本
- 创建一个预配置正常检测的DTR
- 创建一个DNS记录并关联到DTR ELB
[下载AWS快速指南](https://s3.amazonaws.com/quickstart-reference/docker/latest/doc/docker-datacenter-on-the-aws-cloud.pdf)
### 在AWS使用Docker Datacenter
1. 登录[Docker Store][1]获取[30天免费试用][2]或者[联系销售][4]
2. 确认之后提示“Launch Stack”客户会被重定向到AWS Cloudformation入口
3. 确认启动Docker的AWS区域
4. 提供启动参数
5. 确认并启动
6. 启动完成之后点击输出分页标签可以看到UCP/DTR的 URL、缺省用户名、密码和S3 bucket的名称
[Docker Datacenter需要2000美刀信用担保](https://aws.amazon.com/mp/contactdocker/)
### 在Azure使用Azure Marketplace上预编译的模板部署
在Azure Marketplace上Docker Datacenter是一个预先编译的模板客户可以在Azure全球不同的数据中心即起即用。客户可以根据自己需求从Azure提供的各种VM中选择部署适合自己的Docker Datacenter。
#### 架构
![](http://img.scoop.it/V9SpuBCoAnUnkRL3J-FRFLnTzqrqzN7Y9aBZTaXoQ8Q=)
Azure部署进程开始于输入一些基本用户信息如ssh-ing管理员用户名系统级管理员和资源组名称。你可以把资源组理解为一组有生命周期和部署边界的资源集合。你可以在这个链接了解更多关于资源组的信息[azure.microsoft.com/en-us/documentation/articles/resource-group-overview/](azure.microsoft.com/en-us/documentation/articles/resource-group-overview/)
下一步输入集群详细信息包括UCP控制器VM大小控制器个数缺省为3个UCP结点VM大小UCP结点个数缺省1最大值为10DTR结点VM大小DTR结点个数虚拟网络名和地址例如10.0.0.1/19。关于网络客户可以配置2个子网第一个子网分配给UCP控制器 第二个分配给DTC和UCP结点。
最后点击OK完成部署。对于小集群服务开通需要大约15-19分钟大集群更久些。
![](http://img.scoop.it/DXPM5-GXP0j2kEhno0kdRLnTzqrqzN7Y9aBZTaXoQ8Q=)
![](http://img.scoop.it/321ElkCf6rqb7u_-nlGPtrnTzqrqzN7Y9aBZTaXoQ8Q=)
#### 如何在Azure部署
1. 注册[Docker Datacenter30天试用][5]许可或者[联系销售][6]
2. [跳转到微软Azure Markplace的Docker Datacenter][7]
3. [评审部署文档][8]
如果客户注册获取Docker Datacenter许可证那么他们将授权启动AWS或者Azure模板.
- [获取30天试用许可证][9]
- [通过视频理解Docker Datacenter架构][10]
- [观看演示视频][11]
- [获取AWS提供的部署Docker Datacenter的75美元红包奖励][12]
### 了解有关Docker的更多信息
- 初识Docker? 尝试一下10分钟[在线学习课程][20]
- 分享映像,自动构建,或用一个[免费的Docker Hub账号][21]尝试更多
- 阅读[Docker 1.12 发行说明][22]
- 订阅[Docker Weekly][23]
- 报名参加即将到来的[Docker Online Meetups][24]
- 参加即将发生的[Docker Meetups][25]
- 观看[DockerCon EU2015][26]视频
- 开始为[Docker][27]贡献力量
--------------------------------------------------------------------------------
via: https://blog.docker.com/2016/06/docker-datacenter-aws-azure-cloud/
作者:[Trisha McCanna][a]
译者:[firstadream](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://blog.docker.com/author/trisha/
[1]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[2]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[4]: https://goto.docker.com/contact-us.html
[5]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[6]: https://goto.docker.com/contact-us.html
[7]: https://azure.microsoft.com/en-us/marketplace/partners/docker/dockerdatacenterdocker-datacenter/
[8]: https://success.docker.com/Datacenter/Apply/Docker_Datacenter_on_Azure
[9]: http://www.docker.com/trial
[10]: https://www.youtube.com/playlist?list=PLkA60AVN3hh8tFH7xzI5Y-vP48wUiuXfH
[11]: https://www.youtube.com/playlist?list=PLkA60AVN3hh8a8JaIOA5Q757KiqEjPKWr
[12]: https://aws.amazon.com/quickstart/promo/
[20]: https://docs.docker.com/engine/understanding-docker/
[21]: https://hub.docker.com/
[22]: https://docs.docker.com/release-notes/
[23]: https://www.docker.com/subscribe_newsletter/
[24]: http://www.meetup.com/Docker-Online-Meetup/
[25]: https://www.docker.com/community/meetup-groups
[26]: https://www.youtube.com/playlist?list=PLkA60AVN3hh87OoVra6MHf2L4UR9xwJkv
[27]: https://docs.docker.com/contributing/contributing/

View File

@ -1,50 +0,0 @@
拥有开源项目部门的公司可以从四个方面获益
====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_creativity.png?itok=x2HTRKVW)
在我的第一篇关于开源项目部门的系列文章中,我深入剖析了[什么是开源项目部门,为什么你的公司需要一个开源项目部门][1]。接着我又说到了[谷歌是如何创建一个新的开源项目部门的][2]。而这篇文章,我将阐述拥有一个开源项目部门的好处。
乍一看非软件开发公司会更加热情的去拥抱开源项目部门的一个重要原因是他们并没有什么损失。毕竟他们并不需要依靠这些软件产品来获得收益。比如Facebook 可以很轻易的释放出一个 “分布式键值数据存储” 作为开源项目,是因为他们并没有售卖一个叫做 “分布式键值数据存储” 的产品。这回答了关于风险的问题,但是并没有回答他们如何通过向开源生态共献代码而获益的问题。让我们逐个来推测和探讨其中可能的原因。你会发现开源项目供应商的许多动机都是相同的,但是也有些许不同。
### 招聘
招聘可能是一个最容易的方法将一个开源项目售卖给上层管理部门。向他们展示与招聘相关的成本,以及投资回报率,然后解释如何与天才工程师发展关系,从而与那些对这些项目感兴趣并且十分乐意在其中工作的天才开发者们建立联系。不需要我多说了,你懂的!
### 技术影响
曾几何时,那些没有专门从事软件销售的公司难以直接影响他们软件供应商的开发周期,尤其是他们并不是一个大客户时。开源完全改变了这一点,它将用户与供应商放在了一个更公平的竞争环境中。随着开源开发的兴起,任何人,假如他们愿意投入时间和资源的话,都可以将技术推向一个选定的方向。但是这些公司发现,虽然将投资用于开发上会带来丰硕的成果,但是总体战略的努力却更加有效——试想 bug 的修复 VS 软件的构建——大多数公司都将 bug 的修复推给上游的开源部门,但是一些公司开始认识到通过更深层次的回报承诺和更快的功能开发来协调持久的工作,将会更有利于业务。通过一个开源项目部门的模型,公司的职员能够从开源社区中准确嗅出战略重心,然后投入开发资源。
对于快速增长的公司,如 Google 和 Facebook其对现有的开源项目提供的领导力仍然不足以满足业务的膨胀。面对激烈的增长和建立超大规模系统所带来的挑战许多大型企业开始为软件构建仅供内部使用的高度定制的栈。除非他们能说服别人在一些基础设施项目上达成合作因此虽然他们保持在诸如 Linux 内核Apache 和其他现有项目领域的投资他们也开始推出自己的大型项目。Facebook 发布了 CassandraTwitter 创造了 Mesos并且甚至谷歌也创建了 Kubernetes 项目。这些项目已成为行业创新的主要平台证实该举措是相关公司引人注目的成功。请注意Facebook 内部停止使用 Cassandra 后,它需要创造一个新软件项目来解决更大规模的问题,但是,这时 Cassandra 已经变得流行,而 DataStax 已经开始承担开发任务)。所有这些项目已经促使了开发商、相关的项目、以及最终用户来供应加速的增长和发展的整个生态。
开源项目部门和公司战略举措是不可能不协调的。没有这种努力,每个所提到的公司依然在试图单独地和更慢解决这些问题。不仅拥有这些项目可以帮助解决内部业务问题,它们也帮助这些公司逐渐成为行业巨头。当然,谷歌当了好多年行业巨头,但是 Kubernetes 的发展确保了软件的质量,并且在容器技术未来的发展方向上有着直接的话语权,并且远超之前就有的话语权。这些公司目前还是闻名于他们超大规模的基础设施和硅谷的中坚份子。鲜为人知,但是更为重要的是它们与技术生产人员的亲密度。开源项目办公室凭借技术建议和与有影响力的开发者的关系,再加上在社区治理和人员管理方面深厚的专业知识来引领这些工作,并最大限度地发挥其影响力,
### 市场营销能力
与技术的影响齐头并进的是每个公司谈论如何开源的努力。通过推敲这些项目和社区周围的消息,一个开源项目部门能够通过有针对性的营销活动来提供最大的影响。营销在开放源码领域一直是一个肮脏的词汇,因为每个人都有一个由企业营销造成的糟糕的经历。在开源社区中,营销呈现出一种与传统方法截然不同的形式,他会更注重于我们的社区已经在战略方向上做了什么。因此,一个开源项目部门不可能去宣传一些根本还没有发布任何代码的项目,但是他们会讨论他们创造什么软件和参与了其他什么举措。基本上,不会有“雾件”。
想想谷歌的开源项目办公室作出的第一份工作。他们不只是简单的贡献代码给 Linux 内核或其他项目他们更多的是谈论它并经常在开源会议主题演讲。他们不仅仅是把钱给写开源的代码的学生他们还创建了一个全球计划——“Google Summer of Code”现在已经成为一种开源发展的文化试金石。这些市场营销的作用在 Kubernetes 开发完成之前就奠定了谷歌在开源世界巨头的地位。最终使得,谷歌在创建 GPLv3 授权协议期间拥有重要影响力,并且在科技活动中公司的发言人和开源项目部门代表人成为主要人物。开源项目部门是协调这些工作的最好的实体,并可以为母公司提供真正的价值。
###改善内部流程
改善内部流程听起来不像一个大好处但克服混乱的内部流程对于每一个开源项目部门都是一个挑战不论是软件开发商还是驱动开发公司。而软件供应商必须确保他们的流程不与他们发布的产品重叠例如不小心开源了他们的专业软件用户更关心的是侵犯了知识产权IP专利、版权和商标。没有人想只是因为释放软件而被起诉。没有一个活跃的开源项目部门去管理和协调这些许可和其他法律问题大公司在开源流程和管理上面临着巨大的困难。为什么这个很重要呢如果不同的组释放的软件是在不兼容的许可证下那么这不仅是一个坑爹的尴尬它还将对实现最基本的目标改良协作产生巨大的障碍。
考虑到还有许多这样的公司仍在飞快的增长,如果无法建立基本流程规则的话,将可以预见到它们将会遇到阻力。我见过一个巨大的电子表格罗列着批准、未经批准的许可证,以及指导如何(或如何不)创建开源社区而遵守法律限制。关键是当开发者需要做出决定时要有一个可以依据的东西,并且每次当开发人员想要为一个开源社区贡献代码时,可以不产生大量的法律开销,和效率低下的知识产权检查。
有一个活跃的开放源码项目部门,负责维护许可规则和源的贡献,以及建立培训项目工程师,有助于避免潜在的法律缺陷和昂贵的诉讼。毕竟,良好的开源项目合作可以减少由于某人没有看许可证而导致公司赔钱这样的事件。好消息是,公司已经不用担心关于专有的知识产权与软件供应商冲突的事。坏消息是,它们的法律问题不够复杂,尤其是当他们直接需要软件供应商提供法律阻力时。
你的组织是如何受益于拥有一个开源项目部门的?可以在评论中与我们分享。
--------------------------------------------------------------------------------
via: https://opensource.com/business/16/9/4-big-ways-companies-benefit-having-open-source-program-offices
作者:[John Mark Walker][a]
译者:[chao-zhi](https://github.com/chao-zhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/johnmark
[1]: https://opensource.com/business/16/5/whats-open-source-program-office
[2]: https://opensource.com/business/16/8/google-open-source-program-office

View File

@ -0,0 +1,515 @@
# Webpack 2 入门
![](https://cdn-images-1.medium.com/max/2000/1*yI44h8Df-l-2LUqvXIi8JQ.png)
Webpack 2 将在[文档完成][26]之后结束测试阶段、发布正式版。不过这并不意味着你现在不能开始使用第 2 版,前提是你知道怎么配置它。
### Webpack 是什么
用最简单的话来说Webpack 是一个 JavaScript 模块打包器。然而,自发布以来,它已经(有意地或顺应社区的意愿)演变成了你所有前端代码的管理工具。
![](https://cdn-images-1.medium.com/max/800/1*yBt2rFj2DbckFliGE0LEyg.png)
任务运行器(例如 Gulp可以处理许多不同的预处理器和转换器但在所有场景下它都是接收一个输入源并将其压缩成一个编译好的输出文件。不过它是逐个任务单独处理的并不关心整个系统。于是负担就落在了开发者身上接手任务运行器没有覆盖的部分并找到合适的方式把所有这些资源在生产环境中整合到一起。
Webpack 试图通过提出一个大胆的想法来减轻开发者的负担:如果有一部分开发过程可以自动处理依赖关系会怎样?如果我们可以简单地写代码,让构建过程只基于最终需求管理自己会怎样?
![](https://cdn-images-1.medium.com/max/800/1*TOFfoH0cXTc8G3Y_F6j3Jg.png)
如果你过去几年一直是 web 社区的一员,你已经知道解决问题的首选方法:使用 JavaScript 来构建。因此 Webpack 尝试通过 JavaScript 传递依赖关系使构建过程更加容易。不过这个设计真正的亮点不是简单的代码管理部分,而是管理层由 100% 有效的 JavaScript 实现(具有 Nodejs 特性。Webpack 让你写出有效的 JavaScript同时对整个系统有更好、更全面的了解。
换句话来说:你不需要为 Webpack 写代码。你只需要写项目代码。而且 Webpack 会持续工作(当然需要一些配置)。
简而言之,如果你曾经遇到过以下任何一种情况:
* 意外引入一些你不需要在生产中用上的样式表和 JS 库,使项目膨胀
* 遇到作用域的问题 —— CSS 和 JavaScript 都会有
* 找到一个好的构建系统让你在 JavaScript 中使用 Node/Bower 模块,或者依靠一个疯狂的后端配置来正确地使用这些模块
* 需要优化资产交付,但担心你会弄坏一些东西
那么你就可以从 Webpack 中受益了。它让 JavaScript 去操心你的依赖关系和加载顺序而不是让开发者的大脑去操心。最好的部分是Webpack 甚至可以纯粹在服务器端运行,这意味着你还可以使用 Webpack 构建[渐进增强][25]的网站。
### 第一步
我们将在本教程中使用 [Yarn][24](运行命令 `brew install yarn` 代替 `npm`,不过这完全取决于你,它们做同样的事情。在我们的项目文件夹中,我们将在终端窗口中运行以下代码,将 Webpack 2 添加到我们的全局软件包以及本地项目中:
```
yarn global add webpack@2.1.0-beta.25 webpack-dev-server@2.1.0-beta.9
yarn add --dev webpack@2.1.0-beta.25 webpack-dev-server@2.1.0-beta.9
```
我们接着会通过项目根目录的一个 `webpack.config.js` 文件来声明 webpack 的配置:
```
'use strict';
const webpack = require('webpack');
module.exports = {
context: __dirname + '/src',
entry: {
app: './app.js',
},
output: {
path: __dirname + '/dist',
filename: '[name].bundle.js',
},
};
```
注意:`__dirname` 是指你的项目根目录
记住Webpack “知道”你的项目发生了什么。它通过阅读你的代码来实现(别担心,它签署了一个 NDA 协议。Webpack 基本上执行以下操作:
1. 从 `context` 文件夹开始...
2. ...它查找 `entry` 的文件名...
3. ...并读取内容。每一个 `import`[ES6][7])或 `require()`Nodejs的依赖会在它解析代码的时候找到它会在最终构建的时候打包这些依赖项。然后它会搜索那些依赖项以及那些依赖项所依赖的依赖项直到它到达“树”的最底端 —— 只是打包它所需要的,没有其它东西。
4. Webpack 从 `context` 文件夹打包所有东西到 `output.path` 文件夹,使用 `output.filename` 命名模板来为其命名(其中 `[name]` 被替换成来自 `entry` 的对象键)。
所以如果我们的 `src/app.js` 文件看起来像这样(假设我们事先运行了 `yarn add --dev moment`
```
'use strict';
import moment from 'moment';
var rightNow = moment().format('MMMM Do YYYY, h:mm:ss a');
console.log( rightNow );
// "October 23rd 2016, 9:30:24 pm"
```
我们应该运行:
```
webpack -p
```
注意:`-p` 标志表示“生产production”模式这会压缩输出文件。
它会输出一个 `dist/app.bundle.js`,这会将当前日期和时间打印到控制台。要注意 Webpack 会自动识别 `'moment'` 指代什么(虽然如果你有一个 `moment.js` 文件在你的目录,默认情况下 Webpack 会优先考虑你的 `moment` Node 模块)。
### 使用多个文件
你可以通过仅仅修改 `entry` 对象来指定任意数量的输入/输出点。
#### 打包多个文件
```
'use strict';
const webpack = require("webpack");
module.exports = {
context: __dirname + "/src",
entry: {
app: ["./home.js", "./events.js", "./vendor.js"],
},
output: {
path: __dirname + "/dist",
filename: "[name].bundle.js",
},
};
```
所有文件都会按照数组的顺序一起被打包成一个 `dist/app.bundle.js` 文件。
#### 输出多个文件
```
const webpack = require("webpack");
module.exports = {
context: __dirname + "/src",
entry: {
home: "./home.js",
events: "./events.js",
contact: "./contact.js",
},
output: {
path: __dirname + "/dist",
filename: "[name].bundle.js",
},
};
```
或者,你可以选择打包成多个 JS 文件以便于分割应用的某些模块。这将被打包成 3 个文件:`dist/home.bundle.js``dist/events.bundle.js` 和 `dist/contact.bundle.js`
#### 高级打包自动化
如果你将应用分割成多个 `output` 输出文件(当你的应用有一大部分不需要预先加载的 JS 时这会很有用),这些文件之间可能存在重复的代码,因为 Webpack 会分别解析每个文件的依赖关系。幸运的是Webpack 有一个内置的 `CommonsChunk` 插件来处理这个问题:
```
module.exports = {
// …
plugins: [
new webpack.optimize.CommonsChunkPlugin({
name: "commons",
filename: "commons.bundle.js",
minChunks: 2,
}),
],
// …
};
```
现在,在你的多个 `output` 输出文件之间,如果有任何模块被加载了 2 次或以上(由 `minChunks` 设置),它就会被打包进一个 `commons.bundle.js` 文件,然后你可以将其缓存在客户端。这确实会多出一次请求,但是可以防止客户端多次下载同一个库。因此,在很多情景下,这会大大提升速度。
### 开发
Webpack 实际上有自己的开发服务器,所以无论你是开发一个静态网站还是只是你的网站前端原型,它都是无可挑剔的。要运行那个服务器,只需要添加一个 `devServer` 对象到 `webpack.config.js`
```
module.exports = {
context: __dirname + "/src",
entry: {
app: "./app.js",
},
output: {
filename: "[name].bundle.js",
path: __dirname + "/dist/assets",
publicPath: "/assets", // New
},
devServer: {
contentBase: __dirname + "/src", // New
},
};
```
现在创建一个包含以下代码的 `src/index.html` 文件:
```
<script src="/assets/app.bundle.js"></script>
```
... 在你的终端运行:
```
webpack-dev-server
```
你的服务器现在运行在 `localhost:8080`。注意 `script` 标签里面的 `/assets` 是怎么匹配到 `output.publicPath` 的 —— 你可以随意更改它的名称(如果你需要一个 CDN 的时候这会很有用)。
Webpack 会热加载所有 JavaScript 更改,而不需要刷新你的浏览器。但是,所有 `webpack.config.js` 文件里面的更改都需要重新启动服务器才能生效。
### 全局访问方法
需要在全局空间使用你的函数?在 `webpack.config.js` 里面简单地设置 `output.library`
```
module.exports = {
output: {
library: 'myClassName',
}
};
```
这会将你打包好的文件附加到一个 `window.myClassName` 实例。因此,使用该命名空间,你可以调用入口文件的可用方法(可以在[文档][23]中阅读有关此设置的更多信息)。
### 加载器
到目前为止,我们所做的一切只涉及 JavaScript。从一开始使用 JavaScript 是重要的,因为它是 Webpack 唯一支持的语言。事实上我们可以处理几乎所有文件类型,只要我们将其转换成 JavaScript。我们用加载器来实现这个功能。
加载器可以是 Sass 这样的预处理器,或者是 Babel 这样的转译器。在 NPM 上,它们通常被命名为 `*-loader`,例如 `sass-loader``babel-loader`
#### Babel 和 ES6
如果我们想在项目中通过 [Babel][22] 来使用 ES6我们首先需要在本地安装合适的加载器
```
yarn add --dev babel-loader babel-core babel-preset-es2015
```
然后将它添加到 `webpack.config.js`,让 Webpack 知道在哪里使用它。
```
module.exports = {
// …
module: {
rules: [
{
test: /\.js$/,
use: [{
loader: "babel-loader",
options: { presets: ["es2015"] }
}],
},
// Loaders for other file types can go here
],
},
// …
};
```
Webpack 1 的用户注意:加载器的核心概念没有任何改变,但是语法改进了。直到官方文档完成之前,这可能不是确切的首选语法。
`/\.js$/` 这个正则表达式查找所有以 `.js` 结尾的待通过 Babel 加载的文件。Webpack 依靠正则检查给予你完全的控制权 —— 它不限制你的文件扩展名或者假设你的代码必须以某种方式组织。例如:也许你的 `/my_legacy_code/` 文件夹下的内容不是用 ES6 写的。所以你可以把上述的 `test` 修改为 `/^((?!my_legacy_folder).)*\.js$/`,这将会排除那个特定的文件夹,不过会用 Babel 处理其余的文件。
#### CSS 和 Style 加载器
如果我们只想加载 CSS 作为我们的应用程序,我们也可以这样做。假设我们有一个 `index.js` 文件,我们将从那里引入:
```
import styles from './assets/stylesheets/application.css';
```
我们会得到以下错误:`你可能需要一个合适的加载器来处理这种类型的文件`。记住Webpack 只能识别 JavaScript所以我们必须安装合适的加载器
```
yarn add --dev css-loader style-loader
```
然后添加一条规则到 `webpack.config.js`
```
module.exports = {
// …
module: {
rules: [
{
test: /\.css$/,
use: ["style-loader", "css-loader"],
},
// …
],
},
};
```
加载器以数组的逆序处理。这意味着 `css-loader` 会比 `style-loader` 先执行。
你可能会注意到,即使在生产版本中,这实际上是将你的 CSS 和 JavaScript 打包在一起,`style-loader` 手动将你的样式写到 `<head>`。乍一看,它可能看起来有点怪异,但你仔细想想这就慢慢开始变得更加有意义了。你已经节省了一个头部请求 —— 节省了一些连接上的时间。如果你用 JavaScript 来加载你的 DOM无论如何这从本质上消除了 [FOUC][21]。
你还会注意到一个开箱即用的特性 —— Webpack 已经通过将这些文件打包在一起以自动解决你所有的 `@import` 查询(而不是依靠 CSS 默认的 import 方式,这会导致无谓的头部请求以及资源加载缓慢)。
从你的 JS 加载 CSS 是非常惊人的,因为你现在可以用一种新的强大的方式将你的 CSS 模块化。仅仅通过加载 `button.js` 来加载 `button.css`。这将意味着如果 `button.js` 从来没有真正使用过的话,它的 CSS 就不会膨胀我们的生产版本。如果你坚持面向组件的 CSS 实践,如 SMACSS 或 BEM你会看到更紧密地结合你的 CSS 和你的标记 + JavaScript 的价值。
#### CSS 和 Node 模块
我们可以使用 Webpack 来利用 Node 的使用 `~` 前缀导入 Node 模块的优势。如果我们运行 `yarn add normalize.css`,我们可以使用:
```
@import "~normalize.css";
```
并且充分利用 NPM 来管理我们的第三方样式 —— 版本控制、没有任何副本和粘贴的部分。此外,让 Webpack 为我们打包 CSS 比起使用 CSS 的默认导入方式有明显的优势 —— 节省无谓的头部请求和加载时间。
更新:这一节和下面一节已经更新为准确的用法,不再使用 CSS 模块简单地导入 Node 模块。感谢 [Albert Fernández][20] 的帮助!
#### CSS 模块
你可能听说过 [CSS 模块][19],它消除了 CSS 的层叠性。通常它的最适用场景是只有当你使用 JavaScript 构建 DOM 的时候,但实质上,它神奇地将你的 CSS 类放置到加载它的 JavaScript 文件([在这里了解更多][18]。如果你打算使用它CSS 模块已经与 `css-loader` 封装在一起(`yarn add --dev css-loader`
```
module.exports = {
// …
module: {
rules: [
{
test: /\.css$/,
use: [
"style-loader",
{ loader: "css-loader", options: { modules: true } }
],
},
// …
],
},
};
```
注意:对于 `css-loader`,我们现在使用扩展对象语法来给它传递一个选项。你可以使用一个更为精简的字符串来取代默认选项,正如我们仍然使用了 `'style-loader'`
* * *
值得注意的是,启用了 CSS 模块之后,导入 Node 模块时其实可以去掉 `~`(例如:`@import 'normalize.css';`)。但是,当你 `@import` 你自己的 CSS 的时候,你可能会遇到构建错误。如果你遇到“无法找到 ____”的错误尝试添加一个 `resolve` 对象到 `webpack.config.js`,让 Webpack 更好地理解你的模块加载顺序。
```
const path = require("path");
module.exports = {
//…
resolve: {
modules: [path.resolve(__dirname, "src"), "node_modules"]
},
};
```
我们首先指定源目录,然后指定 `node_modules`。这样 Webpack 会更好地处理模块解析,按照既定的顺序先查找我们的源目录,再查找已安装的 Node 模块(请分别用你的源目录和 Node 模块目录替换 `'src'` 和 `'node_modules'`)。
#### Sass
需要使用 Sass没问题。安装
```
yarn add --dev sass-loader node-sass
```
并添加新的规则:
```
module.exports = {
// …
module: {
rules: [
{
test: /\.(sass|scss)$/,
use: [
"style-loader",
"css-loader",
"sass-loader",
]
}
// …
],
},
};
```
然后当你的 JavaScript 对一个 `.scss` 或 `.sass` 文件调用 `import` 的时候Webpack 就会处理它。
#### CSS 独立打包
或许你在处理渐进增强的问题;或许你因为其它原因需要一个单独的 CSS 文件。我们可以通过在我们的配置中用 `extract-text-webpack-plugin` 替换 `style-loader` 而轻易地做到这一点,这不需要更改任何代码。以我们的 `app.js` 文件为例:
```
import styles from './assets/stylesheets/application.css';
```
让我们安装这个插件到本地(我们需要 2016 年 10 月的 测试版本):
```
yarn add --dev extract-text-webpack-plugin@2.0.0-beta.4
```
并且添加到 `webpack.config.js`
```
const ExtractTextPlugin = require("extract-text-webpack-plugin");
module.exports = {
// …
module: {
rules: [
{
test: /\.css$/,
use: [
ExtractTextPlugin.extract("css"),
{ loader: "css-loader", options: { modules: true } },
],
},
// …
]
},
plugins: [
new ExtractTextPlugin({
filename: "[name].bundle.css",
allChunks: true,
}),
],
};
```
现在当运行 `webpack -p` 的时候,你的 `output` 目录还会有一个 `app.bundle.css` 文件。只需要像往常一样简单地在你的 HTML 中向该文件添加一个 `<link>` 标签即可。
#### HTML
正如你可能已经猜到Webpack 还有一个 [`html-loader`][6] 插件。但是,当我们用 JavaScript 加载 HTML 时,我们针对不同的场景分成了不同的方法,我无法想出一个单一的例子来为你计划下一步做什么。通常,你需要加载 HTML 以便于在更大的系统(如 [React][13]、[Angular][12]、[Vue][11] 或 [Ember][10])中使用 JavaScript 风格的标记,如 [JSX][16]、[Mustache][15] 或 [Handlebars][14]。
教程到此为止了:你可以用 Webpack 加载标记,但是进展到这一步的时候,关于你的架构,你将做出自己的决定,我和 Webpack 都无法左右你。不过参考以上的例子以及搜索 NPM 上适用的加载器应该足够你发展下去了。
### 从模块的角度思考
为了充分使用 Webpack你必须从模块的角度来思考 —— 细粒度的、可复用的、用于高效处理每一件事的独立的处理程序。这意味着采取这样的方式:
```
└── js/
└── application.js // 300KB of spaghetti code
```
将其转变成这样:
```
└── js/
├── components/
│ ├── button.js
│ ├── calendar.js
│ ├── comment.js
│ ├── modal.js
│ ├── tab.js
│ ├── timer.js
│ ├── video.js
│ └── wysiwyg.js
└── application.js // ~ 1KB of code; imports from ./components/
```
结果呈现了整洁的、可复用的代码。每一个独立的组件依赖于 `import` 自身的依赖,并 `export` 它想要暴露给其它模块的部分。结合 Babel 和 ES6你可以利用 [JavaScript 类][9] 来实现更强大的模块化,而不用考虑它的工作原理。
有关模块的更多信息,请参阅 Preethi Kasreddy [这篇优秀的文章][8].
* * *
### 延伸阅读
* [Webpack 2 的新特性][5]
* [Webpack 配置文档][4]
* [Webpack 范例][3]
* [React + Webpack 入门套件][2]
* [怎么使用 Webpack][1]
--------------------------------------------------------------------------------
via: https://blog.madewithenvy.com/getting-started-with-webpack-2-ed2b86c68783#.oozfpppao
作者:[Drew Powers][a]
译者:[OneNewLife](https://github.com/OneNewLife)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.madewithenvy.com/@an_ennui
[1]:https://github.com/petehunt/webpack-howto
[2]:https://github.com/kriasoft/react-starter-kit
[3]:https://github.com/webpack/webpack/tree/master/examples
[4]:https://webpack.js.org/configuration/
[5]:https://gist.github.com/sokra/27b24881210b56bbaff7
[6]:https://github.com/webpack/html-loader
[7]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import
[8]:https://medium.freecodecamp.com/javascript-modules-a-beginner-s-guide-783f7d7a5fcc
[9]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes
[10]:http://emberjs.com/
[11]:http://vuejs.org/
[12]:https://angularjs.org/
[13]:https://facebook.github.io/react/
[14]:http://handlebarsjs.com/
[15]:https://github.com/janl/mustache.js/
[16]:https://jsx.github.io/
[17]:https://github.com/webpack/html-loader
[18]:https://github.com/css-modules/css-modules
[19]:https://github.com/css-modules/css-modules
[20]:https://medium.com/u/901a038e32e5
[21]:https://en.wikipedia.org/wiki/Flash_of_unstyled_content
[22]:https://babeljs.io/
[23]:https://webpack.js.org/concepts/output/#output-library
[24]:https://yarnpkg.com/
[25]:https://www.smashingmagazine.com/2009/04/progressive-enhancement-what-it-is-and-how-to-use-it/
[26]:https://github.com/webpack/webpack/issues/1545#issuecomment-255446425

View File

@ -1,178 +0,0 @@
# 删除在一个目录下除了一个或者一些带扩展名文件的其他所有文件的三种方法
有的时候,你可能会遇到这种情况,你需要删除一个目录下的所有文件,或者只是简单的通过删除除了一些指定类型(以指定扩展名结尾)的文件来清空一个目录。
在这篇文章中,我们将会向你展示如何使用 rm、find 命令和 GLOBIGNORE 变量,删除一个目录下除了指定后缀或类型之外的所有文件。
在我们进一步深入之前,让我们开始简要的了解一下 Linux 中的一个重要的概念 —— 文件名模式匹配,它可以让我们解决眼前的问题。
在 Linux 下,一个 shell 模式是一个包含以下特殊字符的字符串,这些字符称为通配符或者元字符(列表之后给出一个简单的示例):
1. `*`  匹配 0 个或者多个字符
2. `?`  匹配任意单个字符
3. `[seq]`  匹配序列中的任意一个字符
4. `[!seq]`  匹配任意一个不在序列中的字符
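下面是一个简单的示例(其中的文件名仅用于演示),展示这些通配符在 shell 中的匹配效果:

```
$ ls
backup.zip  notes.txt  photo1.jpg  photo2.jpg  report.odt
$ ls *.jpg          # * 匹配 0 个或多个字符
photo1.jpg  photo2.jpg
$ ls photo?.jpg     # ? 匹配任意单个字符
photo1.jpg  photo2.jpg
$ ls [!p]*          # [!seq] 匹配任意一个不在序列中的字符
backup.zip  notes.txt  report.odt
```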
我们将在这儿探索三种可能的办法,包括:
### 使用扩展模式匹配操作符删除文件
下面列出了不同的扩展模式匹配操作符,其中的模式列表是一个用 `|` 分隔的、包含一个或多个文件名模式的列表:
1. `*(pattern-list)`  匹配 0 个或者多个出现的指定模式
2. `?(pattern-list)`  匹配 0 个或者 1 个出现的指定模式
3. `+(pattern-list)`  匹配 1 个或者多个出现的指定模式
4. `@(pattern-list)`  匹配指定模式中的某一个
5. `!(pattern-list)`  匹配除了指定模式之外的任何内容
为了使用它们,像下面一样打开 extglob shell 选项:
```
# shopt -s extglob
```
#### 1. 输入以下命令,删除一个目录下除了 filename 之外的所有文件
```
$ rm -v !("filename")
```
[![删除 Linux 下除了一个文件之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/DeleteAll-Files-Except-One-File-in-Linux.png)][9]
删除 Linux 下除了一个文件之外的所有文件
#### 2. 删除除了 filename1 和 filename2 之外的所有文件
```
$ rm -v !("filename1"|"filename2")
```
[![在 Linux 下删除除了一些文件之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Few-Files-in-Linux.png)][8]
在 Linux 下删除除了一些文件之外的所有文件
#### 3. 下面的例子显示如何通过交互模式删除除了 `.zip` 之外的所有文件
```
$ rm -i !(*.zip)
```
[![在 Linux 下删除除了 Zip 文件之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Zip-Files-in-Linux.png)][7]
在 Linux 下删除除了 Zip 文件之外的所有文件
#### 4. 接下来,你可以通过如下的方式删除一个目录下除了 `.zip` 和 `.odt` 之外的所有文件,并在删除时显示正在删除的文件:
```
$ rm -v !(*.zip|*.odt)
```
[![删除除了指定文件扩展的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Certain-File-Extensions.png)][6]
删除除了指定文件扩展的所有文件
一旦你已经执行了所有需要的命令,使用如下的方式关闭 extglob shell 选项。
```
$ shopt -u extglob
```
### 使用 Linux 下的 find 命令删除文件
在这种方法下,我们可以[只使用 find 命令][5]的适当的选项或者采用管道配合 xargs 命令,如下所示:
```
$ find /directory/ -type f -not -name 'PATTERN' -delete
$ find /directory/ -type f -not -name 'PATTERN' -print0 | xargs -0 -I {} rm {}
$ find /directory/ -type f -not -name 'PATTERN' -print0 | xargs -0 -I {} rm [options] {}
```
#### 5. 下面的命令将会删除当前目录下除了 `.gz` 之外的所有文件
```
$ find . -type f -not -name '*.gz' -delete
```
[![find 命令 —— 删除 .gz 之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-gz-Files.png)][4]
find 命令 —— 删除 .gz 之外的所有文件
#### 6. 使用管道和 xargs你可以通过如下的方式修改上面的例子
```
$ find . -type f -not -name '*gz' -print0 | xargs -0 -I {} rm -v {}
```
[![使用 find 和 xargs 命令删除文件](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-Files-Using-Find-and-Xargs-Command.png)][3]
使用 find 和 xargs 命令删除文件
#### 7. 让我们再看一个例子,下面的命令将会删除当前目录下除了 `.gz`、`.odt` 和 `.jpg` 之外的所有文件:
```
$ find . -type f -not \( -name '*gz' -or -name '*odt' -or -name '*.jpg' \) -delete
```
[![删除除了指定扩展文件的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-File-Extensions.png)][2]
删除除了指定扩展文件的所有文件
### 通过 bash 中的 GLOBIGNORE 变量删除文件
然而,最后这种方法只适用于 bash。GLOBIGNORE 变量存储着一个以冒号分隔的模式(或文件名)列表,路径名扩展时会忽略与这些模式匹配的文件。
为了使用这种方法,移动到要删除文件的目录,像下面这样设置 GLOBIGNORE 变量:
```
$ cd test
$ GLOBIGNORE=*.odt:*.iso:*.txt
```
在这种情况下,除了 `.odt`、`.iso` 和 `.txt` 之外的所有文件,都将从当前目录删除。
现在,运行如下的命令清空这个目录:
```
$ rm -v *
```
之后,关闭 GLOBIGNORE 变量:
```
$ unset GLOBIGNORE
```
[![使用 bash 变量 GLOBIGNORE 删除文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-Files-Using-Bash-GlobIgnore.png)][1]
使用 bash 变量 GLOBIGNORE 删除文件
注:为了理解上面的命令行采用的标识的意思,请参考我们在每一个插图中使用的命令对应的 man 手册。
就这些了!如果你还知道其他能实现相同目的的命令行技术,不要忘了通过下面的反馈部分分享给我们。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/delete-all-files-in-directory-except-one-few-file-extensions/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
作者:[ Aaron Kili][a]
译者:[yangmingming](https://github.com/yangmingming)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-Files-Using-Bash-GlobIgnore.png
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-File-Extensions.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-Files-Using-Find-and-Xargs-Command.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-gz-Files.png
[5]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Certain-File-Extensions.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Zip-Files-in-Linux.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Few-Files-in-Linux.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/DeleteAll-Files-Except-One-File-in-Linux.png

View File

@ -0,0 +1,119 @@
在 Linux 上检测硬盘坏道和坏块
===
让我们从定义坏道和坏块开始说起,它们是一块磁盘或闪存上不再能够被读写的部分,一般是由于磁盘表面特定的[物理损坏][7]或闪存晶体管失效导致的。
随着坏道的继续积累,它们会对你的磁盘或闪存容量产生令人不快或破坏性的影响,甚至可能会导致硬件失效。
同时还需要注意的是坏块的存在警示你应该开始考虑买块新磁盘了,或者简单地将坏块标记为不可用。
因此,在这篇文章中,我们通过几个必要的步骤,使用特定的[磁盘扫描工具][6]让你能够判断 Linux 磁盘或闪存是否存在坏道。
以下就是步骤:
### 在 Linux 上使用坏块工具检查坏道
坏块工具可以让用户扫描设备检查坏道或坏块。设备可以是一个磁盘或外置磁盘,由一个如 /dev/sdc 这样的文件代表。
首先,通过超级用户权限执行 [fdisk 命令][5]来显示你的所有磁盘或闪存的信息以及它们的分区信息:
```
$ sudo fdisk -l
```
[![列出 Linux 文件系统分区](http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Filesystem-Partitions.png)][4]
列出 Linux 文件系统分区
然后用这个命令检查你的 Linux 硬盘上的坏道/坏块:
```
$ sudo badblocks -v /dev/sda10 > badsectors.txt
```
[![在 Linux 上扫描硬盘坏道](http://www.tecmint.com/wp-content/uploads/2016/10/Scan-Hard-Disk-Bad-Sectors-in-Linux.png)][3]
在 Linux 上扫描硬盘坏道
上面的命令中badblocks 扫描设备 /dev/sda10记得指定你的实际设备-v 选项让它显示操作的详情。另外,这里使用了输出重定向将操作结果重定向到了文件 badsectors.txt。
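如果你想在扫描时看到进度,或者做一次非破坏性的读写测试badblocks 还提供了 `-s`(显示扫描进度)和 `-n`(非破坏性读写模式,要求文件系统未挂载)等选项,下面是一个示例(设备名请换成你自己的):

```
$ sudo badblocks -nsv /dev/sda10 > badsectors.txt
```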
如果你在你的磁盘上发现任何坏道,卸载磁盘并像下面这样让系统不要将数据写入回报的扇区中。
你需要执行 e2fsck针对 ext2/ext3/ext4 文件系统)或 fsck 命令,命令中还需要用到 badsectors.txt 文件和设备文件。
`-l` 选项告诉命令将指定文件名文件badsectors.txt中列出的扇区号码加入坏块列表。
```
------------ 针对 ext2/ext3/ext4 文件系统 ------------
$ sudo e2fsck -l badsectors.txt /dev/sda10
------------ 针对其它文件系统 ------------
$ sudo fsck -l badsectors.txt /dev/sda10
```
### 在 Linux 上使用 Smartmontools 工具扫描坏道
这个方法对带有 S.M.A.R.TSelf-Monitoring, Analysis and Reporting Technology自我监控、分析和报告技术系统的现代磁盘ATA/SATA 和 SCSI/SAS 硬盘以及固态硬盘更加可靠和高效。S.M.A.R.T 系统能够帮助检测、报告并可能记录磁盘的健康状况,这样你就可以找出任何可能出现的硬件失效。
你可以使用以下命令安装 smartmontools
```
------------ 在基于 Debian/Ubuntu 的系统上 ------------
$ sudo apt-get install smartmontools
------------ 在基于 RHEL/CentOS 的系统上 ------------
$ sudo yum install smartmontools
```
安装完成之后,使用 smartctl 控制磁盘集成的 S.M.A.R.T 系统。你可以这样查看它的手册或帮助:
```
$ man smartctl
$ smartctl -h
```
然后执行 smartctl 命令并在命令中指定你的设备作为参数,以下命令包含了参数 `-H` 或 `--health`,用来显示 SMART 整体健康自我评估测试结果。
```
$ sudo smartctl -H /dev/sda10
```
[![检查 Linux 硬盘健康](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Linux-Hard-Disk-Health.png)][2]
检查 Linux 硬盘健康
上面的结果指出你的硬盘很健康,近期内不大可能发生硬件失效。
要获取磁盘信息总览,使用 `-a` 或 `--all` 选项来显示磁盘所有的 SMART 信息,用 `-x` 或 `--xall` 来显示所有 SMART 信息以及非 SMART 信息。
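例如,下面的命令以整块磁盘 /dev/sda 为例(请替换为你自己的设备名):

```
$ sudo smartctl -a /dev/sda
$ sudo smartctl -x /dev/sda
```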
在这个教程中,我们讨论了有关[磁盘健康诊断][1]的重要话题,你可以在下面的反馈区分享你的想法或提问,并且记得常回来看看。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/check-linux-hard-disk-bad-sectors-bad-blocks/
作者:[Aaron Kili][a]
译者:[alim0x](https://github.com/alim0x)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/defragment-linux-system-partitions-and-directories/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Linux-Hard-Disk-Health.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Scan-Hard-Disk-Bad-Sectors-in-Linux.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Filesystem-Partitions.png
[5]:http://www.tecmint.com/fdisk-commands-to-manage-linux-disk-partitions/
[6]:http://www.tecmint.com/ncdu-a-ncurses-based-disk-usage-analyzer-and-tracker/
[7]:http://www.tecmint.com/defragment-linux-system-partitions-and-directories/

View File

@ -0,0 +1,287 @@
# 4 种简单方法让你在 Linux 下生成一个高强度密码
![在 Linux 下生成一个高强度密码](https://www.ostechnix.com/wp-content/uploads/2016/11/password-720x340.jpg)
图片来源: Google.
昨天,我们已经分享了如何 [要求用户在基于 DEB 的系统中使用一个高强度的密码][8],例如在 Debian、Ubuntu、Linux Mint 和 Elementary OS 等系统中。那么,你可能会疑惑一个高强度的密码究竟是什么样的呢?怎么才能生成一个那样的密码呢?不用担心,下面我们将介绍 4 种简单方法让你在 Linux 中生成一个高强度密码。当然,已经有很多免费的工具或者方式来完成这个任务,但这里我们仅考虑那些简单直接的方法。下面就让我们开始吧。
### 1. 在 Linux 中使用 OpenSSL 来生成一个高强度密码
OpenSSL 在所有的类 Unix 发行版、Solaris、Mac OS X 和 Windows 中都可以获取到。
要使用 OpenSSL 生成一个随机密码,唤起你的终端并运行下面的命令:
```
openssl rand -base64 14
```
上面的 `-base64` 选项将确保生成的密码可以用键盘敲出来。
样例输出:
```
wXCHXlxuhrFrFMQLqik=
```
[
![sksk_003](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_003.png)
][6]
上面的命令将基于 14 个随机字节生成一个高强度密码(经 base64 编码后长度约为 20 个字符)。记住,我们强烈推荐你使用不短于 14 个字符的密码。
当然你可以使用 OpenSSL 生成任意长度的密码。
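例如,要基于 32 个随机字节生成一个更长的密码,可以运行下面的命令(字节数仅作示例):

```
openssl rand -base64 32
```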
要了解更多信息,可以参考联机手册:
```
man openssl
```
### 2. 在 Linux 中使用 Pwgen 来生成一个高强度密码
pwgen 是一个简单却非常有用的命令行工具,用它可以在短时间内生成一个随机且高强度的密码。它设计出的安全密码可以被人们更容易地记住。在大多数的类 Unix 系统中都可以获取到它。
在基于 DEB 的系统中安装 pwgen 请运行:
```
sudo apt-get install pwgen
```
在基于 RPM 的系统中,运行:
```
sudo yum install pwgen
```
在基于 Arch 的系统中,则运行:
```
sudo pacman -S pwgen
```
一旦 pwgen 安装完成后,便可以使用下面的命令来生成 1 个长度为 14 个字符的随机高强度密码:
```
pwgen 14 1
```
样例输出:
```
Choo4aicozai3a
```
[
![sksk_004](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_004.png)
][5]
上面的命令将只生成 1 个长度为 14 个字符的密码,如果要生成 2 个长度为 14 个字符的不同密码,则可以运行:
```
pwgen 14 2
```
样例输出:
```
xee7seerez6Kau Aeshu0geveeji8
```
如果要生成 100 个(尽管可能没有必要生成那么多)长度为 14 个字符的不同密码,则可以运行:
```
pwgen 14
```
样例输出:
```
kaeNg3EiVei4ei Oo0iehiJaix5Ae aenuv2eree2Quo iaT7zahH1eN2Aj Bie2owaiFahsie
gaan9zu5Xeh5ah ahGeeth8ea5ooh Ir0ueda5poogh5 uo0ohqu2ufaiX2 Mei0pee6Og3zae
Oofeiceer8Aipu sheew3aeReidir Dee4Heib2eim2o eig6jar8giPhae Zahde9nae1Niew
quatol5Oi3Bah2 quue4eebaiNgaa oGoahieSh5oL4m aequeeQue2piti laige5seePhugo
iiGo9Uthee4ros WievaiQu2xech6 shaeve0maaK3ae ool8Pai2eighis EPheiRiet1ohci
ZieX9outhoht8N Uh1UoPhah2Thee reaGhohZae5idi oiG4ooshiyi5in keePh1ohshei8y
aim5Eevah2thah Xaej8tha5eisho IeGie1Anaalaev gaoY3ohthooh3x chaebeesahTh8e
soh7oosieY5eiD ahmoh6Ihii6que Shoowoo5dahbah ieW0aiChubee7I Caet6aikai6aex
coo1du2Re9aika Ohnei5Egoh7leV aiyie6Ahdeipho EiV0aeToeth1da iNgaesu4eeyu0S
Eeb1suoV3naera railai2Vaina8u xu3OhVee1reeyu Og0eavae3oohoh audahneihaeK8a
foo6iechi5Eira oXeixoh6EwuboD we1eiDahNgoh9s ko1Eeju1iedu1z aeP7achiisohr7
phang5caeGei5j ait4Shuo5Aitai no4eis9Tohd8oh Quiet6oTaaQuei Dei2pu2NaefeCa
Shiim9quiuy0ku yiewooph3thieL thu8Aphai1ieDa Phahnahch1Aam1 oocex7Yaith8oo
eraiGaech5ahNg neixa3malif5Ya Eux7chah8ahXix eex1lahXae4Mei uGhahzonu6airu
yah8uWahn3jeiW Yi4ye4Choongie io1Vo3aiQuahpi rie4Rucheet6ae Dohbieyaeleis5
xi1Zaushohbei7 jeeb9EiSiech0u eewo0Oow7ielie aiquooZamah5th kouj7Jaivohx9o
biyeeshesaDi9e she9ooj3zuw6Ah Eit7dei1Yei5la xohN0aeSheipaa Eeg9Phob6neema
eengoneo4saeL4 aeghi4feephu6W eiWash2Vie1mee chieceish5ioPe ool4Hongo7ef1o
jahBe1pui9thou eeV2choohoa4ee Ohmae0eef4ic8I Eet0deiyohdiew Ke9ue5thohzei3
aiyoxeiva8Maih gieRahgh8anahM ve2ath9Eyi5iet quohg6ok3Ahgee theingaech5Nef
```
[
![sksk_005](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_005.png)
][4]
如果要在密码中包含至少 1 个数字,则可以运行:
```
pwgen 14 1 -n 1
```
样例输出:
```
xoiFush3ceiPhe
```
另外pwgen 命令还有一些很实用的选项(选项列表之后给出一个组合使用的示例):
```
-c 或 --capitalize (在密码中包含至少一个大写字母)
-A 或 --no-capitalize (在密码中不包含大写字母)
-n 或 --numerals (在密码中包含至少一个数字)
-0 或 --no-numerals (在密码中不包含数字)
-y 或 --symbols (在密码中包含至少一个特殊字符)
-s 或 --secure (生成完全随机的密码)
-B 或 --ambiguous (在密码中不包含容易混淆的字符,例如 l 和 1、O 和 0
-h 或 --help (输出帮助信息)
-H 或 --sha1=path/to/file[#seed] (使用某个给定文件的 sha1 哈希值来作为随机数的生成种子)
-C (按列输出生成好的密码)
-1 (不按列输出生成好的密码)
-v 或 --no-vowels (不使用任何元音字母,以防止生成下流的词语)
```
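例如,下面的命令(仅作示例)会生成 1 个长度为 14 个字符、至少包含一个大写字母、一个数字和一个特殊字符的密码:

```
pwgen -cny 14 1
```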
若想了解更多信息,请查阅其联机手册:
```
man pwgen
```
### 3. 在 Linux 中使用 GPG 来生成一个高强度密码
GPG (GnuPG or GNU Privacy Guard) 是一个免费的命令行程序,可以用于替代赛门铁克的 PGP 加密软件。在类 Unix 操作系统、Microsoft Windows 和 Android 中都可以获取到它。
要使用 GPG 基于 14 个随机字节生成 1 个高强度密码,请在终端中运行下面的命令:
```
gpg --gen-random --armor 1 14
```
样例输出:
```
DkmsrUy3klzzbIbavx8=
```
[
![sksk_006](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_006.png)
][3]
上面的命令将生成一个安全、随机、高强度且基于 base64 编码的密码。
### 4. 在 Linux 中使用 Perl 来生成一个高强度密码
Perl 在大多数 Linux 发行版本的默认软件仓库中都可以获取到,你可以使用相应的包管理器来安装它。
例如在基于 DEB 的系统中,可以运行下面的命令来安装 Perl
```
sudo apt-get install perl
```
在基于 RPM 的系统中安装 Perl ,可以运行:
```
sudo yum install perl
```
在基于 Arch 的系统中,则运行:
```
sudo pacman -S perl
```
一旦 Perl 安装完成,使用下面的命令创建一个文件:
```
vi password.pl
```
接着添加下面的内容到这个文件中:
```
#!/usr/bin/perl
# 候选字符集:小写字母、大写字母和数字
my @alphanumeric = ('a'..'z', 'A'..'Z', 0..9);
# 从字符集中随机挑选 9 个字符0..8 共 9 个下标)拼接成密码
my $randpassword = join '', map $alphanumeric[rand @alphanumeric], 0..8;
print "$randpassword\n";
```
[
![sksk_001](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_001.png)
][2]
保存并关闭该文件。
接着,切换到你刚才保存文件的地方,并运行下面的命令:
```
perl password.pl
```
使用你自己定义的文件名来替换上面命令中的 `password.pl`
样例输出:
```
3V4CJJnYd
```
[
![sksk_002](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_002.png)
][1]
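顺便一提:上面的脚本生成的是 9 个字符的密码(下标 `0..8` 共 9 个)。如果想像前面几种方法那样得到 14 个字符的密码,可以把脚本中的 `0..8` 改成 `0..13`,或者直接使用下面这个等价的 Perl 单行命令(仅作示例):

```
perl -le 'my @a = ("a".."z", "A".."Z", 0..9); print join "", map { $a[rand @a] } 0..13'
```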
注: 我无法找到这个脚本的原有作者,假如你知道作者的名字,请在下面的评论部分让我知晓,我将在这篇指南中添加上该作者的名字。
请注意:对于你生成的密码,你必须记住它,或者将它保存到你电脑中一个安全的地方。我建议你记住密码并将它从你的系统中删除,因为这总比你的系统被黑客控制要好。
伙计们,今天就是这么多了。不久我将带来另一篇有意思的文章。在此之前,敬请关注 OSTechNix。
Happy Weekend!
Cheers!!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/4-easy-ways-to-generate-a-strong-password-in-linux/
作者:[ SK ][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_002.png
[2]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_001.png
[3]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_006.png
[4]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_005.png
[5]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_004.png
[6]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_003.png
[7]:http://ostechnix.tradepub.com/free/w_ubun08/prgm.cgi?a=1
[8]:https://www.ostechnix.com/force-users-use-strong-passwords-debian-ubuntu/

View File

@ -0,0 +1,261 @@
# 一份关于在Kali Linux下使用Nmap网络安全扫描器的实用指南
在这篇 Kali Linux 系列的第二篇文章中,我们将讨论一个叫做 '[nmap][30]' 的网络工具。虽然 nmap 不是 Kali 下唯一的工具,但它是最[有用的网络映射工具][29]之一。
1. [第一部分-为初学者准备的Kali Linux安装指南][4]
Nmap 是 Network Mapper 的缩写,由 Gordon Lyon 维护(更多关于 Lyon 先生的信息在这里:[http://insecure.org/fyodor/][28]),并被世界各地许多安全专业人员使用。
这个工具在 Linux 和 Windows 下都能使用,并且是用命令行驱动的。对于那些不喜欢命令行的人nmap 还有一个漂亮的图形化前端,叫做 zenmap。
强烈建议个人去学习nmap的CLI版本因为与图形化版本zenmap相比它提供了更多的灵活性。
nmap 到底有什么用呢很好的问题。nmap 可以让管理员快速而彻底地了解网络上的系统,这也是它名字的由来Network MAPper即 nmap。
Nmap 能够快速找到活动的主机以及与该主机相关联的服务。结合 Nmap 脚本引擎(通常缩写为 NSENmap 的功能还可以得到进一步扩展。
此脚本引擎允许管理员快速创建可用于确定其网络上是否存在新发现的漏洞的脚本。许多脚本已经被开发并且包含在大多数的nmap安装中。
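例如,你可以用下面的命令大致浏览一下系统中自带了哪些 NSE 脚本(脚本目录在不同发行版上可能不同,这里以常见的默认路径为例):

```
# ls /usr/share/nmap/scripts/ | less
```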
提醒一句nmap 会被怀有善意和恶意的人共同使用。应该非常小心,确保你没有在未经书面/法律协议明确授权的前提下,用 nmap 去扫描别人的系统。使用 nmap 工具的时候请务必谨慎!
#### 系统要求
1. [Kali Linux][3] (nmap可以用于其他操作系统并且功能也和这个指南里面讲的类似).
2. 另一台计算机,并且装有 nmap 的那台计算机有权限扫描它 —— 这通常很容易通过 [VirtualBox][2] 之类的软件创建虚拟机来实现。
1. 想要有一台合适的机器来练习,可以了解一下 Metasploitable 2
2. 下载MS2 [Metasploitable2][1]
3. 一个可用的网络连接,或者使用虚拟机为这两台计算机建立有效的内部网络连接
### 在 Kali Linux 中使用 Nmap
使用 nmap 的第一步是登录 Kali Linux如果需要就启动一个图形会话本系列的第一篇文章安装的是[带有 Enlightenment 桌面环境的 Kali Linux][27])。
在安装过程中,安装程序会提示用户设置用于登录的 'root' 用户密码。登录到 Kali Linux 机器后,使用命令 'startx' 就可以启动 Enlightenment 桌面环境 —— 值得注意的是nmap 不需要桌面环境也能运行。
```
# startx
```
[
![Start Desktop Environment in Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Start-Desktop-Environment-in-Kali-Linux.png)
][26]
在Kali Linux中启动桌面环境
登录并启动桌面环境之后,需要打开一个终端窗口。点击桌面背景会出现一个菜单,按如下路径即可打开终端:应用程序 -> 系统 -> 'Xterm'、'UXterm' 或 '根终端'。
作者本人偏爱一个叫做 '[Terminator][25]' 的终端程序,但它可能不在 Kali Linux 的默认安装中。上面列出的所有终端程序都能满足使用 nmap 的需要。
[
![Launch Terminal in Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Launch-Terminal-in-Kali-Linux.png)
][24]
在Kali Linux下启动终端
一旦终端启动nmap的乐趣就开始了。 对于这个特定的教程创建了一个具有Kali机器和Metasploitable机器的专用网络。
这会使事情变得更容易和更安全因为私有的网络范围将确保扫描保持在安全的机器上防止易受攻击的Metasploitable机器被其他人攻击。
在此示例中,这两台计算机都位于专用的 192.168.56.0/24 网络上。Kali 机器的 IP 地址为 192.168.56.101,要扫描的 Metasploitable 机器的 IP 地址为 192.168.56.102。
如果无法获得 IP 地址信息一次快速的 nmap 扫描可以帮助确定特定网络上有哪些主机。这种扫描称为“简单列表”扫描Simple List因此要将 `-sL` 参数传递给 nmap 命令。
```
# nmap -sL 192.168.56.0/24
```
[
![Nmap - Scan Network for Live Hosts](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network.png)
][23]
Nmap 扫描网络上的存活主机
遗憾的是,这次初始扫描没有返回任何存活主机。有时,这与某些操作系统处理[端口扫描网络流量][22]的方式有关。
不用担心在这里有一些技巧可以使nmap尝试找到这些机器。 下一个技巧会告诉nmap只是尝试ping 192.168.56.0/24网络中的所有地址。
```
# nmap -sn 192.168.56.0/24
```
[
![Nmap - Ping All Connected Live Network Hosts](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Ping-All-Network-Live-Hosts.png)
][21]
Nmap Ping所有已连接的活动网络主机
这一次nmap 返回了一些可以进一步扫描的潜在主机!在此命令中,`-sn` 禁用了 nmap 默认的端口扫描行为,只让 nmap 尝试 ping 这些主机。
让我们尝试让nmap端口扫描这些特定的主机看看会出现什么。
```
# nmap 192.168.56.1,100-102
```
[
![Nmap - Network Ports Scan on Host](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-for-Ports-on-Hosts.png)
][20]
Nmap 在主机上扫描网络端口
哇! 这一次nmap挖到了一个金矿。 这个特定的主机有相当多的[开放网络端口] [19]。
这些端口都代表着这台机器上正在监听的某种服务。回忆一下前面的内容192.168.56.102 这个 IP 地址分配给了那台易受攻击的机器,这就是为什么[这个主机上有这么多开放端口][18]。
在大多数机器上开放这么多端口是非常不正常的,所以仔细调查一下这台机器会是一个明智的想法。管理员可以去追踪网络上对应的物理机器并在本机上查看,但这并不有趣,尤其是 nmap 可以更快地帮我们完成!
下一个扫描是服务扫描,通常用于尝试确定机器上什么[服务在特定的端口被监听] [17]。
Nmap将探测所有打开的端口并尝试从每个端口上运行的服务中获取信息。
```
# nmap -sV 192.168.56.102
```
[
![Nmap - Scan Network Services Listening of Ports](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Services-Ports.png)
][16]
Nmap 扫描网络服务监听端口
请注意,这次 nmap 给出了一些猜测,提示在特定端口上可能运行着什么服务(在白框中突出显示),而且 nmap 还试图[确认运行在这台机器上的操作系统的信息][15]以及它的主机名(而且非常成功!)。
查看这个输出,应该引起网络管理员相当多的关注。 第一行声称VSftpd版本2.3.4正在这台机器上运行! 这是一个真正的旧版本的VSftpd。
在 ExploitDB 上查一下就会发现,这个特定版本在 2011 年被发现存在一个非常严重的漏洞ExploitDB ID 17491。
让我们使用nmap更加清楚的查看这个特别的端口并且看看可以确认什么东西。
```
# nmap -sC 192.168.56.102 -p 21
```
[
![Nmap - Scan Particular Post on Machine](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Particular-Port-on-Host.png)
][14]
Nmap 扫描机器上的特定端口
此命令让 nmap 在主机的 FTP 端口(`-p 21`)上运行其默认脚本(`-sC`。这可能是问题也可能不是问题但 nmap 确实发现这个特定的服务器[允许匿名 FTP 登录][13]。
结合早先发现的存在旧漏洞的 VSftpd 版本,这应该引起一些关注。让我们看看 nmap 有没有可以检查 VSftpd 漏洞的脚本。
```
# locate .nse | grep ftp
```
[
![Nmap - Scan VSftpd Vulnerability](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Service-Vulnerability.png)
][12]
Nmap 扫描VSftpd漏洞
注意nmap有一个NSE脚本已经用来处理VSftpd后门问题 让我们尝试对这个主机运行这个脚本,看看会发生什么,但首先知道如何使用脚本可能是很重要的。
```
# nmap --script-help=ftp-vsftpd-backdoor.nse
```
[
![Learn Nmap NSE Script Usage](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Learn-NSE-Script.png)
][11]
了解Nmap NSE脚本使用情况
通过这段描述可以明显看出,这个脚本可以用来检查这台特定的机器是否存在先前识别出的那个 ExploitDB 漏洞。
让我们运行这个脚本,看看会发生什么。
```
# nmap --script=ftp-vsftpd-backdoor.nse 192.168.56.102 -p 21
```
[
![Nmap - Scan Host for Vulnerable](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Host-for-Vulnerable.png)
][10]
Nmap 扫描易受攻击的主机
哎呀Nmap 的脚本返回了一些危险的消息。这台机器很可能需要进行更加详细的调查。这并不意味着这台机器已经被入侵并被用来做可怕/糟糕的事情,但它应该引起网络/安全团队的一些关注。
Nmap 的选项极为丰富,功能也非常强大。到目前为止的扫描都尽量让 nmap 的网络流量保持温和,然而以这种方式扫描个人拥有的网络可能会非常耗时。
Nmap 可以进行更激进的扫描,往往一条命令就能得到之前多条命令所获得的大部分信息。我们来看看激进扫描的输出(注意:激进扫描可能会触发[入侵检测/预防系统][9]!)。
```
# nmap -A 192.168.56.102
```
[
![Nmap - Complete Network Scan on Host](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Host.png)
][8]
Nmap 在主机上完成网络扫描
注意这一次使用一个命令nmap返回了很多关于在这台特定机器上运行的开放端口服务和配置的信息。 这些信息中的大部分可用于帮助确定[如何保护本机] [7]以及评估网络上可能存在的软件。
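如果想把这类完整扫描的结果保存成文件,以便日后对比或审计,可以加上 nmap 的 `-oN` 选项(下面的文件名仅作示例):

```
# nmap -A -oN metasploitable-scan.txt 192.168.56.102
```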
这只是 nmap 能在主机或网段上发现的许多有用信息中的一小部分。强烈建议大家继续在自己拥有的网络上[用 nmap][6] 进行实验(不要拿别人的网络来练手!)。
有一个关于Nmap网络扫描的官方指南作者Gordon Lyon可从亚马逊上获得。
[
![Nmap Network Scanning Guide](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Network-Security-Scanner-Guide.png)
][5]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/nmap-network-security-scanner-in-kali-linux/
作者:[Rob Turner][a]
译者:[DockerChen](https://github.com/DockerChen)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/robturner/
[1]:https://sourceforge.net/projects/metasploitable/files/Metasploitable2/
[2]:http://www.tecmint.com/install-virtualbox-on-redhat-centos-fedora/
[3]:http://www.tecmint.com/kali-linux-installation-guide
[4]:http://www.tecmint.com/kali-linux-installation-guide
[5]:http://amzn.to/2eFNYrD
[6]:http://www.tecmint.com/nmap-command-examples/
[7]:http://www.tecmint.com/security-and-hardening-centos-7-guide/
[8]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Host.png
[9]:http://www.tecmint.com/protect-apache-using-mod_security-and-mod_evasive-on-rhel-centos-fedora/
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Host-for-Vulnerable.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Learn-NSE-Script.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Service-Vulnerability.png
[13]:http://www.tecmint.com/setup-ftp-anonymous-logins-in-linux/
[14]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Particular-Port-on-Host.png
[15]:http://www.tecmint.com/commands-to-collect-system-and-hardware-information-in-linux/
[16]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Services-Ports.png
[17]:http://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
[18]:http://www.tecmint.com/find-open-ports-in-linux/
[19]:http://www.tecmint.com/find-open-ports-in-linux/
[20]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-for-Ports-on-Hosts.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Ping-All-Network-Live-Hosts.png
[22]:http://www.tecmint.com/audit-network-performance-security-and-troubleshooting-in-linux/
[23]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network.png
[24]:http://www.tecmint.com/wp-content/uploads/2016/11/Launch-Terminal-in-Kali-Linux.png
[25]:http://www.tecmint.com/terminator-a-linux-terminal-emulator-to-manage-multiple-terminal-windows/
[26]:http://www.tecmint.com/wp-content/uploads/2016/11/Start-Desktop-Environment-in-Kali-Linux.png
[27]:http://www.tecmint.com/kali-linux-installation-guide
[28]:http://insecure.org/fyodor/
[29]:http://www.tecmint.com/bcc-best-linux-performance-monitoring-tools/
[30]:http://www.tecmint.com/nmap-command-examples/

View File

@ -21,15 +21,15 @@ Windows的Linux子系统测试在上周刚刚完成所有测试并放出升
![](https//openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=186c4d0&c=a8c914bf9b64cf67abc65e319f8e71c7951fb1aa&p=0) ![](https//openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=186c4d0&c=a8c914bf9b64cf67abc65e319f8e71c7951fb1aa&p=0)
First up was the SQLite embedded database benchmark. The out-of-the-box Ubuntu/Bash on Windows performance was quite slow, but when switching that 14.04 environment to 16.04 LTS, the performance was much faster. However, for this disk-heavy workload the native Ubuntu Linux installations were almost twice as fast as relying upon the Windows Subsystem for Linux. 首先是 SQLite 嵌入式数据库基准测试。开箱即用的 Ubuntu/Bash on Windows 性能相当慢,但是把环境从 14.04 切换到 16.04 LTS 之后,性能会快很多。然而,对于这种重磁盘的工作负载,原生安装的 Ubuntu Linux 比 Windows 的 Linux 子系统快了近 2 倍。
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=fa40825&c=0912dc3f6d6a9f36da09fdd4c0cf4e330fa40f90&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=fa40825&c=0912dc3f6d6a9f36da09fdd4c0cf4e330fa40f90&p=0)
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=8419652&c=9b9f6b0822ed5b9dc2977a7f2faf499fce4fba23&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=8419652&c=9b9f6b0822ed5b9dc2977a7f2faf499fce4fba23&p=0)
The CompileBench test profile as additional disk-focused workloads show that this is the particular subsystem really straining the Ubuntu performance atop Windows 10 with it being up to multiple times slower. 编译测试作为额外的重磁盘测试显示, 定制的Windows子系统真的成倍的限制了Ubuntu性能.
Next up were some basic system memory speed tests with Stream. 接下来,是一些使用Stream的基本的系统内存速度测试
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=9560e6f&c=ebbc6937fa8daf0540e0df353432a29f938cf7ed&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=9560e6f&c=ebbc6937fa8daf0540e0df353432a29f938cf7ed&p=0)
@ -37,29 +37,29 @@ Next up were some basic system memory speed tests with Stream.
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=5a2e9d2&c=d37eee4c9394fa8104e7e49e26c964af70ec326b&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=5a2e9d2&c=d37eee4c9394fa8104e7e49e26c964af70ec326b&p=0)
Strangely, the Stream memory benchmarks show better performance with Ubuntu on Windows than Ubuntu itself! This happened on both the 14.04 and 16.04 based environments that the Windows results came out faster. 奇怪的是, 这些内存的基准测试显示Ubuntu on Windows的性能比原生的Ubuntu好!这个现象同时发生在基于同样的Windows却环境不同的14.04和16.04上.
Next are more of the CPU-heavy tests, 接下来, 是一些重CPU测试.
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=ee1f01f&c=3e9a67230e0e081b99ee3237e702c0b40ee73d60&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=ee1f01f&c=3e9a67230e0e081b99ee3237e702c0b40ee73d60&p=0)
With the Dolfyn scientific test, the performance between Ubuntu on Windows and Ubuntu installed bare metal was actually quite close. With Ubuntu 16.04 the performance is slower on both platforms due to the newer GCC compiler regressing the performance. 通过Dolfyn科学测试Ubuntu On Windows和原生Ubuntu之间的性能其实是相当接近的。 对于Ubuntu 16.04由于较新的GCC编译器回退性能两个平台上的性能都较慢。
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=dd69257&c=0e31babb8b96be1ae38ea739fbb1346bf9bc4b07&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=dd69257&c=0e31babb8b96be1ae38ea739fbb1346bf9bc4b07&p=0)
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=a02416b&c=c8abb70dee982dd494fb1891bd9dc154fa7a7f47&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=a02416b&c=c8abb70dee982dd494fb1891bd9dc154fa7a7f47&p=0)
Fhourstones and John The Ripper show that the performance of Ubuntu running on Windows via the Windows Subsystem for Linux can be incredibly close to the bare metal Ubuntu Linux performance! 透过Fhourstones和John The Ripper表明通过在Windows上运行Linux子系统的Ubuntu的性能可以非常接近裸机Ubuntu Linux性能
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=3140e3c&c=f4bf6330a7d58b5939c61cbd91fe5db379c1592a&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=3140e3c&c=f4bf6330a7d58b5939c61cbd91fe5db379c1592a&p=0)
The x264 results were another strange case similar to Stream where the best performance was actually with Ubuntu on Windows 10 via WSL! 类似于Stream, x264结果是另一个奇怪的情况其中最好的性能实际上是使用WSL Ubuntu On Windows
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=ad12f0b&c=f50c829c97d731f6926c5a874cf83f8fc5440067&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=ad12f0b&c=f50c829c97d731f6926c5a874cf83f8fc5440067&p=0)
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=8b7a7ca&c=3de3e8537d08665e8a41380b6b2298c09f408fa0&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=8b7a7ca&c=3de3e8537d08665e8a41380b6b2298c09f408fa0&p=0)
The timed compilation benchmarks were heavily in favor of the bare metal Ubuntu Linux installations outside of Windows. This is likely due to these large program compilations requiring plenty of disk reads and from the earlier disk-focused benchmarks showing that is the big area where the Windows Subsystem for Linux is slow. 定时编译基准测试的结果明显有利于 Windows 之外裸机安装的 Ubuntu Linux。这很可能是因为大型程序的编译需要大量磁盘读取而先前的重磁盘基准测试已经表明这正是 Windows 的 Linux 子系统缓慢的重灾区。
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=25892d8&c=f6cd3fa4a3497e3d2663106e0bf3fcd227f9b9a3&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=25892d8&c=f6cd3fa4a3497e3d2663106e0bf3fcd227f9b9a3&p=0)
@ -67,11 +67,11 @@ The timed compilation benchmarks were heavily in favor of the bare metal Ubuntu
![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=4899bb2&c=80df0e1e749910ebd84b0d6c2688316e5cfb8cda&p=0) ![](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=4899bb2&c=80df0e1e749910ebd84b0d6c2688316e5cfb8cda&p=0)
Many of our other common open-source benchmarks show that for the strictly CPU-focused tests, the Windows Subsystem for Linux is close -- or even matches -- the native Ubuntu Linux performance running on the actual hardware. 许多其他的通用开源基准测试表明,在严格的重 CPU 测试中Windows 的 Linux 子系统的性能非常接近、甚至等同于运行在实际硬件上的原生 Ubuntu Linux。
These latest Windows Subsystem for Linux results are actually rather impressive. The big letdown is just the continued slow disk/file-system performance, but for CPU-bound workloads the results are very compelling. There's also the rare cases with x264 and Stream where the performance of the Ubuntu user-space on Windows appears to clearly outperform that of Ubuntu Linux running on the hardware by itself. 最新的 Windows 的 Linux 子系统测试结果实际上相当令人印象深刻。让人失望的仅仅是持续缓慢的磁盘/文件系统性能,但是对于受 CPU 限制的工作负载,结果是非常引人注目的。还有 x264 和 Stream 这些罕见的情况Ubuntu On Windows 上的性能似乎明显优于直接运行在硬件上的 Ubuntu Linux。
Overall the experience was actually quite pleasant and haven't run into any other bugs or annoyances while running with Ubuntu/Bash on Windows. If you're interested in more Windows vs. Linux benchmarks, please consider voicing yourself as a Phoronix Premium subscriber. 总的来说,这次测试体验十分愉快,在 Ubuntu/Bash on Windows 上也没有遇到任何其他的 bug 或烦心事。如果你还有兴趣了解更多关于 Windows 和 Linux 的基准测试,欢迎留言讨论。
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
via: https://www.phoronix.com/scan.php?page=article&item=windows10-anv-wsl&num=1 via: https://www.phoronix.com/scan.php?page=article&item=windows10-anv-wsl&num=1