merge from lctt.
This commit is contained in:
runningwater 2016-11-27 22:48:59 +08:00
commit e9e058c5a5
108 changed files with 9274 additions and 4244 deletions

用 dpkg 命令在 Debian 系的 Linux 系统中管理软件包
==================
[dpkg][7] 意即 Debian 包管理器(Debian PacKaGe manager)。dpkg 是一个可以安装、构建、删除及管理 Debian 软件包的命令行工具,而 Aptitude(首选的、更为用户友好的工具)则是 dpkg 执行所有操作的前端界面。
其它的一些工具如 dpkg-deb 和 dpkg-query 等也使用 dpkg 作为执行某些操作的前端。
现在大多数系统管理员使用 Apt、[Apt-Get][6] 及 Aptitude 等工具,不用费心就可以轻松地管理软件。
尽管如此,必要的时候还是需要用 dpkg 来安装某些软件。其它的一些在 Linux 系统上广泛使用的包管理工具还有 [yum][5]、[dnf][4]、[apt-get][3]、dpkg、[rpm][2]、[Zypper][1]、pacman、urpmi 等等。
现在,我要在装有 Ubuntu 15.10 的机器上用一些实例讲解最常用的 dpkg 命令。
### 1) dpkg 常见命令的语法及 dpkg 文件位置
下面是 dpkg 常见命令的语法及 dpkg 相关文件的位置,如果想深入了解,这些对你肯定大有益处。
```
### dpkg 命令的语法
$ dpkg -[command] [.deb package name]
$ dpkg -[command] [package name]
### dpkg 相关文件的位置
$ /var/lib/dpkg
### 这个文件包含了被 dpkg 命令(install、remove 等)所修改的包的信息
$ /var/lib/dpkg/status
### 这个文件包含了可用包的列表
$ /var/lib/dpkg/available
```
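`/var/lib/dpkg/status` 是一个纯文本数据库,每个软件包对应一段以 `Package:` 开头的字段记录。下面用一份示意样本(字段格式与真实文件一致,数据为假设)演示如何用 awk 提取包名和版本:

```shell
# 生成一份与 /var/lib/dpkg/status 字段格式相同的示意样本
cat > /tmp/status.sample <<'EOF'
Package: atom
Status: install ok installed
Version: 1.5.3
Architecture: amd64
EOF

# 按字段提取包名与版本;在真实系统上可把文件路径换成 /var/lib/dpkg/status
awk -F': ' '/^Package:/ {p=$2} /^Version:/ {print p, $2}' /tmp/status.sample
# 输出:atom 1.5.3
```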
### 2) 安装/升级软件
在基于 Debian 的系统里,比如 Debian、Mint、Ubuntu 和 elementary OS,用以下命令来安装/升级 .deb 软件包。这里我要用 `atom-amd64.deb` 文件安装 Atom。要是已经安装了 Atom,就会升级它;否则就会安装一个新的。
```
### 安装或升级 dpkg 软件包
$ sudo dpkg -i atom-amd64.deb
Selecting previously unselected package atom.
(Reading database ... 426102 files and directories currently installed.)
Preparing to unpack atom-amd64.deb ...
Unpacking atom (1.5.3) over (1.5.3) ...
Setting up atom (1.5.3) ...
Processing triggers for gnome-menus (3.13.3-6ubuntu1) ...
Processing triggers for bamfdaemon (0.5.2~bzr0+15.10.20150627.1-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for desktop-file-utils (0.22-1ubuntu3) ...
Processing triggers for mime-support (3.58ubuntu1) ...
```
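需要注意,`dpkg -i` 不会自动解决依赖:如果 .deb 所需的依赖缺失,安装会因“依赖问题”而失败。一个常见做法是在失败后用 `apt-get install -f` 补齐依赖。下面是一个最小示意脚本(函数名为假设):

```shell
# 安装一个 .deb;若因依赖缺失失败,则调用 apt-get -f 修复(需要 root 权限)
install_deb() {
  sudo dpkg -i "$1" || sudo apt-get install -f -y
}

# 用法示例:
# install_deb atom-amd64.deb
```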
### 3) 从文件夹里安装软件
在基于 Debian 的系统里,用下列命令从目录中逐个安装软件。这会安装 `/opt/software` 目录下的所有以 .deb 为后缀的软件。
```
$ sudo dpkg -iR /opt/software
Selecting previously unselected package atom.
(Reading database ... 423303 files and directories currently installed.)
Preparing to unpack /opt/software/atom-amd64.deb ...
Unpacking atom (1.5.3) ...
Setting up atom (1.5.3) ...
Processing triggers for gnome-menus (3.13.3-6ubuntu1) ...
Processing triggers for bamfdaemon (0.5.2~bzr0+15.10.20150627.1-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for desktop-file-utils (0.22-1ubuntu3) ...
Processing triggers for mime-support (3.58ubuntu1) ...
```
### 4) 显示已安装软件列表
以下命令可以列出 Debian 系的系统中所有已安装的软件,同时会显示软件版本和描述信息。
```
$ dpkg -l
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-===========================-==================================-============-================================================================
ii account-plugin-aim 3.12.10-0ubuntu2 amd64 Messaging account plugin for AIM
ii account-plugin-facebook 0.12+15.10.20150723-0ubuntu1 all GNOME Control Center account plugin for single signon - facebook
ii account-plugin-flickr 0.12+15.10.20150723-0ubuntu1 all GNOME Control Center account plugin for single signon - flickr
ii account-plugin-google 0.12+15.10.20150723-0ubuntu1 all GNOME Control Center account plugin for single signon
ii account-plugin-jabber 3.12.10-0ubuntu2 amd64 Messaging account plugin for Jabber/XMPP
ii account-plugin-salut 3.12.10-0ubuntu2 amd64 Messaging account plugin for Local XMPP (Salut)
.
.
```
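`dpkg -l` 输出的第一列是状态标志(如 `ii` 表示已正常安装,`rc` 表示已删除但保留了配置文件)。可以用 awk 只筛出已安装的包名。下面用一份示意数据演示,在真实系统上直接用管道接 `dpkg -l` 即可:

```shell
# 构造两行 dpkg -l 风格的示意输出
cat > /tmp/dpkg-l.sample <<'EOF'
ii  atom        1.5.3  amd64  A hackable text editor
rc  oldpkg      0.1    amd64  removed, config files remain
EOF

# 只打印状态为 ii(已安装)的包名;真实用法:dpkg -l | awk '$1 == "ii" {print $2}'
awk '$1 == "ii" {print $2}' /tmp/dpkg-l.sample
# 输出:atom
```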
### 5) 查看指定的已安装软件
用以下命令列出指定的一个已安装软件,同时会显示软件版本和描述信息。
```
$ dpkg -l atom
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==========-=========-===================-============================================
ii atom 1.5.3 amd64 A hackable text editor for the 21st Century.
```
### 6) 查看软件安装目录
以下命令可以在基于 Debian 的系统上查看软件的安装路径。
```
$ dpkg -L atom
/.
/usr
/usr/bin
/usr/bin/atom
/usr/share
/usr/share/lintian
/usr/share/lintian/overrides
/usr/share/lintian/overrides/atom
/usr/share/pixmaps
/usr/share/pixmaps/atom.png
/usr/share/doc
```
### 7) 查看 deb 包内容
下列命令可以查看 deb 包内容。它会显示 .deb 包中的一系列文件。
```
$ dpkg -c atom-amd64.deb
drwxr-xr-x root/root 0 2016-02-13 02:13 ./
drwxr-xr-x root/root 0 2016-02-13 02:13 ./usr/
drwxr-xr-x root/root 0 2016-02-13 02:13 ./usr/bin/
-rwxr-xr-x root/root 3067 2016-02-13 02:13 ./usr/bin/atom
drwxr-xr-x root/root 0 2016-02-13 02:13 ./usr/share/
drwxr-xr-x root/root 0 2016-02-13 02:13 ./usr/share/lintian/
drwxr-xr-x root/root 0 2016-02-13 02:13 ./usr/share/lintian/overrides/
-rw-r--r-- root/root 299 2016-02-13 02:13 ./usr/share/lintian/overrides/atom
drwxr-xr-x root/root 0 2016-02-13 02:13 ./usr/share/pixmaps/
-rw-r--r-- root/root 643183 2016-02-13 02:13 ./usr/share/pixmaps/atom.png
drwxr-xr-x root/root 0 2016-02-13 02:13 ./usr/share/doc/
.
.
```
### 8) 显示软件的详细信息
以下命令可以显示软件的详细信息,如软件名、软件类别、版本、维护者、软件架构、依赖的软件、软件描述等等。
```
$ dpkg -s atom
Package: atom
Status: install ok installed
Priority: optional
Section: devel
Installed-Size: 213496
Maintainer: GitHub <atom@github.com>
Architecture: amd64
Version: 1.5.3
Depends: git, gconf2, gconf-service, libgtk2.0-0, libudev0 | libudev1, libgcrypt11 | libgcrypt20, libnotify4, libxtst6, libnss3, python, gvfs-bin, xdg-utils, libcap2
Recommends: lsb-release
Suggests: libgnome-keyring0, gir1.2-gnomekeyring-1.0
Description: A hackable text editor for the 21st Century.
Atom is a free and open source text editor that is modern, approachable, and hackable to the core.
```
### 9) 查看文件属于哪个软件
用以下命令来查看文件属于哪个软件。
```
$ dpkg -S /usr/bin/atom
atom: /usr/bin/atom
```
### 10) 移除/删除软件
以下命令可以用来移除/删除一个已经安装的软件,但不删除配置文件。
```
$ sudo dpkg -r atom
(Reading database ... 426404 files and directories currently installed.)
Removing atom (1.5.3) ...
Processing triggers for gnome-menus (3.13.3-6ubuntu1) ...
Processing triggers for bamfdaemon (0.5.2~bzr0+15.10.20150627.1-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for desktop-file-utils (0.22-1ubuntu3) ...
Processing triggers for mime-support (3.58ubuntu1) ...
```
### 11) 清除软件
以下命令可以用来移除/删除包括配置文件在内的所有文件。
```
$ sudo dpkg -P atom
(Reading database ... 426404 files and directories currently installed.)
Removing atom (1.5.3) ...
Processing triggers for gnome-menus (3.13.3-6ubuntu1) ...
Processing triggers for bamfdaemon (0.5.2~bzr0+15.10.20150627.1-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for desktop-file-utils (0.22-1ubuntu3) ...
Processing triggers for mime-support (3.58ubuntu1) ...
```
### 12) 了解更多
用以下命令来查看更多关于 dpkg 的信息。
```
$ dpkg --help
$ man dpkg
```
开始体验 dpkg 吧。
--------------------------------------------------------------------------------
via: http://www.2daygeek.com/dpkg-command-examples/
作者:[MAGESH MARUTHAMUTHU][a]
译者:[GitFuture](https://github.com/GitFuture)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.2daygeek.com/author/magesh/
[1]:http://www.2daygeek.com/zypper-command-examples/
[2]:http://www.2daygeek.com/rpm-command-examples/
[3]:http://www.2daygeek.com/apt-get-apt-cache-command-examples/
[4]:http://www.2daygeek.com/dnf-command-examples/
[5]:http://www.2daygeek.com/yum-command-examples/
[6]:http://www.2daygeek.com/apt-get-apt-cache-command-examples/
[7]:https://wiki.debian.org/Teams/Dpkg
[8]:http://www.2daygeek.com/author/magesh/

病毒过后,系统管理员投向了 Linux
=======================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/OPENHERE_blue.png?itok=3eqp-7gT)
我的开源之旅,始于 2001 年我在大学担任兼职系统管理员的时候。我所在的小组以教学为目的,为大学乃至学术界的其他领域建立商业案例研究。
随着团队的发展渐渐地开始需要一个由文件服务、intranet 应用,域登录等功能构建而成的健壮的局域网。 我们的 IT 基础设施主要由跑着 Windows 98 的计算机组成,这些计算机对于大学的 IT 实验室来说已经太老了,就重新分配给了我们部门。
### 初探 Linux
一天,作为大学 IT 采购计划的一部分,我们部门收到了一台 IBM 服务器。我们计划将其用作 Internet 网关、域控制器、文件服务器和备份服务器,以及 intranet 应用程序主机。
拆封后,我们注意到它附带了红帽 Linux 的 CD。我们的 22 人团队(包括我)对 Linux 一无所知。经过几天的研究,我找到了一位朋友的朋友,一位以 Linux RTOS(Linux 的实时操作系统)领域编程为生的人,向他求助如何安装。
光看着那朋友用 CD 驱动器载入第一张安装 CD 并进入 Anaconda 安装系统,我的头都晕了。 大约一个小时,我们完成了基本的安装,但仍然没有可用的 internet 连接。
又花了一个小时的折腾才使我们连接到互联网,但仍没有域登录或 Internet 网关功能。 经过一个周末的折腾,我们可以让我们的 Windows 98 机器作为 Linux PC 的代理,终于构出了一个正常工作的共享互联环境。 但域登录还需要一段时间。
我们用龟速的电话调制解调器下载了 [Samba][1],并手动配置它作为域控制器。文件服务也通过 NFS Kernel Server 开启了,随后为 Windows 98 的网络邻居创建了用户目录,并进行了必要的调整和配置。
这个设置完美运行了一段时间,直到最终我们决定开始使用 Intranet 应用管理时间表和一些别的东西。 这个时候,我已经离开了该组织,并把大部分系统管理员的东西交给了接替我的人。
### 再遇 Linux
2004 年,我又重新装回了 Linux。我的妻子经营的一份独立员工安置业务使用来自 Monster.com 等服务的数据来打通客户与求职者的交流渠道。
作为我们两人中的计算机好点的那个,在计算机和互联网出故障的时候,维修就成了我的分内之事。我们还需要用许多工具尝试,从堆积如山的简历中筛选出她每天必须看的。
Windows [BSoD][2](蓝屏) 早已司空见惯,但只要我们的付费数据是安全的,那就还算可以容忍。为此我将不得不每周花几个小时去做备份。
一天,我们的电脑中了毒,并且通过简单的方法无法清除。我们并不了解磁盘上的数据发生了些什么。当磁盘彻底挂掉后,我们插入了一周前的辅助备份磁盘,但是一周后它也挂了。我们的第二个备份直接拒绝启动。是时候寻求专业帮助了,所以我们把电脑送到一家靠谱的维修店。两天以后,我们被告知一些恶意软件或病毒已经将某些种类的文件擦除殆尽,其中包括我们的付费数据。
这是对我妻子的商业计划的一个巨大的打击,同时意味着丢失合同并耽误了账单。我曾短期出国工作,并在台湾的 [Computex 2004][3] 购买了我的第一台笔记本电脑。预装的是 Windows XP,但我还是想换成 Linux。我知道 Linux 已经为桌面端做好了准备,[Mandrake Linux][4](曼德拉草)是一个很不错的选择。我第一次安装就很顺利,所有工作都执行得非常漂亮。我使用 [OpenOffice][5] 来满足我写作、演示文稿和电子表格的需求。
我们为我们的计算机买了新的硬盘驱动器,并为其安装了 Mandrake Linux。用 OpenOffice 替换了 Microsoft Office。 我们依靠 Web 邮件来满足邮件需求,并在 2004 年的 11 月迎来了 [Mozilla Firefox][6]。我的妻子马上从中看到了好处,因为没有崩溃或病毒/恶意软件感染!更重要的是,我们告别了困扰 Windows 98 和 XP 的频繁崩溃问题。 她一直使用这个发行版。
而我,开始尝试其他的发行版。我爱上了 distro-hopping(LCTT 译注:指在不同版本的 Linux 发行版之间频繁切换的 Linux 用户)和第一时间尝试新发行版的感觉。我也经常会在 Apache 和 NGINX 上尝试和测试 Web 应用程序,如 Drupal、Joomla 和 WordPress。现在我们 2006 年出生的儿子在 Linux 下成长,也对 Tux Paint、GCompris 和 SMPlayer 非常满意。
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/3/my-linux-story-soumya-sarkar
作者:[Soumya Sarkar][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)
[a]: https://opensource.com/users/ssarkarhyd
[1]: https://www.samba.org/
[2]: https://en.wikipedia.org/wiki/Blue_Screen_of_Death
[3]: https://en.wikipedia.org/wiki/Computex_Taipei
[4]: https://en.wikipedia.org/wiki/Mandriva_Linux
[5]: http://www.openoffice.org/
[6]: https://www.mozilla.org/en-US/firefox/new/

我成为软件工程师的原因和经历
==========================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/myopensourcestory.png?itok=6TXlAkFi)
1989 年,乌干达首都坎帕拉。
我明智的父母决定与其将我留在家里添麻烦,不如把我送到叔叔的办公室学学电脑。几天后,我和另外六、七个小孩,还有一台放置在课桌上的崭新电脑,一起置身于 21 层楼的一间狭小房屋中。很明显我们还不够格去碰那家伙。在长达三周无趣的 DOS 命令学习后,美好时光来到,终于轮到我来输 **copy doc.txt d:** 啦。
那将文件写入五英寸软盘的奇怪的声音听起来却像音乐般美妙。那段时间这块软盘简直成为了我的至宝。我把所有可以拷贝的东西都放在上面了。然而1989 年的乌干达,人们的生活十分“正统”,相比较而言,捣鼓电脑、拷贝文件还有格式化磁盘就称不上“正统”。于是我不得不专注于自己接受的教育,远离计算机科学,走入建筑工程学。
之后几年里,我和同龄人一样,干过很多份工作也学到了许多技能。我教过幼儿园的小朋友,也教过大人如何使用软件,在服装店工作过,还在教堂中担任过引座员。在我获取堪萨斯大学的学位时,我正在技术管理员的手下做技术助理,听上去比较神气,其实也就是搞搞学生数据库而已。
当我 2007 年毕业时,计算机技术已经变得不可或缺。建筑工程学的方方面面都与计算机科学深深的交织在一起,所以我们不经意间学了些简单的编程知识。我对于这方面一直很着迷,但我不得不成为一位“正统”的工程师,由此我发展了一项秘密的私人爱好:写科幻小说。
在我的故事中,我以我笔下的女主角的形式存在。她们都是编程能力出众的科学家,总是卷入冒险,并用自己的技术发明战胜那些渣渣们,有时甚至要在现场发明新方法。我提到的这些“新技术”,有的是基于真实世界中的发明,也有些是从科幻小说中读到的。这就意味着我需要了解这些技术的原理,而且我的研究使我关注了许多有趣的 reddit 版块和电子杂志。
### 开源:巨大的宝库
那几周在 DOS 命令上花费的经历对我影响巨大。我在一些非专业的项目上耗费心血,并占用了宝贵的学习时间。Geocities 刚向所有 Yahoo! 用户开放时,我就创建了一个网站,用于发布一些用小型数码相机拍摄的个人图片。我建立了多个免费网站,帮助家人和朋友解决一些他们所遇到的电脑问题,还为教堂搭建了一个图书馆数据库。
这意味着我需要一直研究并尝试获取更多的信息,使它们变得更棒。互联网(上帝保佑)让开源进入了我的视野。突然之间,30 天试用期和许可证限制对我而言就变成了过去式。我可以完全不受这些限制,继续使用 GIMP、Inkscape 和 OpenOffice。
### 是正经做些事情的时候了
我很幸运,有商业伙伴喜欢我的经历。她也是个想象力丰富的人,期待更高效、更便捷的互联世界。我们根据我们以往成功道路中经历的弱点制定了解决方案,但执行却成了一个问题。我们都缺乏给产品带来活力的能力,每当我们试图将想法带到投资人面前时,这表现的尤为突出。
我们需要学习编程。于是 2015 年夏末,我们来到 Holberton 学校。那是一所坐落于旧金山、由社区推进、基于项目教学的学校。
一天早晨我的商业伙伴来找我,以她独有的方式(每当她有疯狂想法想要拉我入伙时),进行一场对话。
**Zee**: Gloria我想和你说点事在你说“不”前能先听我说完吗
**Me**: 不行。
**Zee**: 为做全栈工程师,咱们申请上一所学校吧。
**Me**: 什么?
**Zee**: 就是这,看!就是这所学校,我们要申请这所学校来学习编程。
**Me**: 我不明白。我们不是正在网上学 Python 和…
**Zee**: 这不一样。相信我。
**Me**: 那…
**Zee**: 这就是不信任我了。
**Me**: 好吧 … 给我看看。
### 抛开偏见
我读到的和我们在网上看到的似乎很相似。这简直太棒了,以至于让人觉得不太真实,但我们还是决定尝试一下,全力以赴,看看结果如何。
要成为学生,我们需要经历四步选择,不过选择的依据仅仅是天赋和动机,而不是学历和编程经历。筛选便是课程的开始,通过它我们开始学习与合作。
根据我和我伙伴的经验,Holberton 学校的申请流程比其他的申请流程有趣太多了,就像场游戏。如果你完成了一项挑战,就能通往下一关,在那里有别的有趣的挑战正等着你。我们创建了 Twitter 账号,在 Medium 上写博客,为了创建网站而学习 HTML 和 CSS,并打造了一个充满活力的在线社区,虽然在此之前我们并不知晓有谁会来。
在线社区最吸引人的就是大家有多种多样的使用电脑的经验,而背景和性别不是社区创始人(我们私下里称他们为“The Trinity”)做出选择的因素。大家只是喜欢聚在一块儿交流。我们都行进在通过学习编程来提升自己计算机技术的旅途上。
相较于其他的的申请流程,我们不需要泄露很多的身份信息。就像我的伙伴,她的名字里看不出她的性别和种族。直到最后一个步骤,在视频聊天的时候, The Trinity 才知道她是一位有色人种女性。迄今为止,促使她达到这个级别的只是她的热情和才华。肤色和性别并没有妨碍或者帮助到她。还有比这更酷的吗?
获得录取通知书的晚上,我们知道生活将向我们的梦想转变。2016 年 1 月 22 日,我们来到巴特瑞大街 98 号,去见我们的同学们 [Hippokampoiers][2],这是我们的初次见面。很明显,在见面之前,“The Trinity”已经做了很多工作,聚集了一批形形色色的人,他们充满激情与热情,致力于成长为全栈工程师。
这所学校有种与众不同的体验。每天都是向某一方面编程的一次竭力的冲锋。交给我们的工程项目并不会有很多指导,我们需要使用一切可以使用的资源找出解决方案。[Holberton 学校][1] 认为,信息来源相较于以前已经大大丰富了。MOOC(大型开放式课程)、教程、可用的开源软件和项目,以及线上社区等等,为我们完成项目提供了足够的知识。加之宝贵的导师团队来指导我们制定解决方案,这所学校变得并不仅仅是一所学校,我们已经成为了求学者的团体。任何对软件工程感兴趣并对这种学习方法感兴趣的人,我都强烈推荐这所学校。在这里的经历会让人有些悲喜交加,但是绝对值得。
### 开源问题
我最早使用的开源系统是 [Fedora][3],一个 [Red Hat][4] 赞助的项目。与一名 IRC 成员交流时,她推荐了这款免费的操作系统。虽然在此之前我还未独自安装过操作系统,但是这激起了我对开源的兴趣,以及日常使用计算机时对开源软件的依赖。我们提倡为开源贡献代码,创造并使用开源的项目。我们的项目就放在 GitHub 上,任何人都可以使用或是向它贡献出自己的力量。我们也会使用或以自己的方式为一些既存的开源项目做出贡献。在学校里,我们使用的大部分工具是开源的,例如 Fedora、[Vagrant][5]、[VirtualBox][6]、[GCC][7] 和 [Discourse][8],仅举几例。
在向软件工程师行进的路上,我始终憧憬着有朝一日能为开源社区做出一份贡献,能与他人分享我所掌握的知识。
### 多样性问题
站在教室里,和 29 位求学者交流心得,真是令人陶醉。学员中 40% 是女性,44% 是有色人种。当你是一位有色人种女性,并身处于这个以缺乏多样性而著名的领域时,这些数字就变得非常重要了。这里是高科技圣地麦加中的一片绿洲,而我已经到达。
想要成为一个全栈工程师是十分困难的,你甚至很难了解这意味着什么。这是一条充满挑战但又有丰富回报的旅途。科技推动着未来飞速发展,而你也是美好未来很重要的一部分。虽然媒体在持续的关注解决科技公司的多样化的问题,但是如果能认清自己,清楚自己的背景,知道自己为什么想成为一名全栈工程师,你便能在某一方面迅速成长。
不过可能最重要的是,告诉大家,女性在计算机的发展史上扮演过多么重要的角色,以帮助更多的女性回归到科技界,而且在给予就业机会时,不会因性别等因素而感到犹豫。女性的才能将会共同影响科技的未来,以及整个世界的未来。
------------------------------------------------------------------------------
via: https://opensource.com/life/16/4/my-open-source-story-gloria-bwandungi
作者:[Gloria Bwandungi][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/nappybrain
[1]: https://www.holbertonschool.com/
[2]: https://twitter.com/hippokampoiers
[3]: https://en.wikipedia.org/wiki/Fedora_(operating_system)
[4]: https://www.redhat.com/
[5]: https://www.vagrantup.com/
[6]: https://www.virtualbox.org/
[7]: https://gcc.gnu.org/
[8]: https://www.discourse.org/

通过 docker-compose 进行快速原型设计
========================================
在这篇文章中,我们将考察一个 Node.js 开发原型该原型用于从英国三个主要折扣网店查找“Raspberry PI Zero”的库存。
我写好了代码,然后经过一晚的鼓捣把它部署在 Azure 上的 Ubuntu 虚拟机上。Docker 和 docker-compose 工具使得部署和更新过程非常快。
### 还记得链接指令(link)吗?
如果你已经阅读过 [Hands-on Docker tutorial][1],那么你应该已经可以使用命令行链接 Docker 容器。通过命令行将 Node.js 的计数器链接到 Redis 服务器,其命令可能如下所示:
```
$ docker run -d -P --name redis1 redis
$ docker run -d -p 3000:3000 --link redis1:redis hit_counter
```
现在假设你的应用程序分为三层:
- Web 前端
- 处理长时间运行任务的批处理层
- Redis 或者 mongo 数据库
在只需管理几个容器时,通过 `--link` 进行显式链接还算可行,但是随着我们向应用程序添加更多层或容器,这种方式可能会失控。
### 加入 docker-compose
![](http://blog.alexellis.io/content/images/2016/05/docker-compose-logo-01.png)
*Docker Compose logo*
docker-compose 工具是标准 Docker 工具箱的一部分,也可以单独下载。 它提供了一组丰富的功能,通过纯文本 YAML 文件配置所有应用程序的部件。
上面的例子看起来像这样:
```
version: "2.0"
services:
  redis1:
    image: redis
  hit_counter:
    build: ./hit_counter
    ports:
      - 3000:3000
```
从 Docker 1.10 开始,我们可以利用网络覆盖(network overlays)来帮助我们在多个主机上进行扩展。在此之前,链接仅能工作在单个主机上。`docker-compose scale` 命令可以用来在需要时带来更多的计算能力。
> 查看 docker.com 上的 [docker-compose][2] 参考
### 真实工作示例:Raspberry PI 库存警示
![](http://blog.alexellis.io/content/images/2016/05/Raspberry_Pi_Zero_ver_1-3_1_of_3_large.JPG)
*新的 Raspberry PI Zero v1.3 图片,由 Pimoroni 提供*
Raspberry PI Zero 非常火爆:它是一个极小的微型计算机,具有 1GHz CPU 和 512MB RAM,可以运行完整的 Linux、Docker、Node.js、Ruby 和其他许多流行的开源工具。PI Zero 最好的优点之一就是它成本只有 5 美元。这也意味着它销售的速度非常之快。
*如果你想在 PI 上尝试 Docker 和 Swarm请查看下面的教程[Docker Swarm on the PI Zero][3]*
### 原始网站:whereismypizero.com
我发现一个网页,它使用屏幕抓取以找出 4-5 个最受欢迎的折扣网店是否有库存。
- 网站包含静态 HTML 网页
- 向每个折扣网店发出一个 XMLHttpRequest 访问 /public/api/
- 服务器向每个网店发出 HTTP 请求并执行抓屏
每一次对 /public/api/ 的调用需要执行 3 秒钟,而使用 Apache Bench(ab),我每秒只能完成 0.25 个请求。
### 重新发明轮子
零售商似乎并不介意 whereismypizero.com 抓取他们的网站的商品库存信息,所以我开始从头写一个类似的工具。 我尝试通过缓存和解耦 web 层来处理更多的抓取请求。 Redis 是执行这项工作的完美工具。 它允许我设置一个自动过期的键/值对(即一个简单的缓存),还可以通过 pub/sub 在 Node.js 进程之间传输消息。
> 复刻或者追踪放在 github 上的代码: [alexellis/pi_zero_stock][4]
如果之前使用过 Node.js你肯定知道它是单线程的并且任何 CPU 密集型任务,如解析 HTML 或 JSON 都可能导致速度放缓。一种缓解这种情况的方法是使用一个工作进程和 Redis 消息通道作为它和 web 层之间的连接组织。
- Web 层
- 使用 200 代表缓存命中(该商店的 Redis 键存在)
- 使用 202 代表缓存未命中(该商店的 Redis 键不存在,因此发出消息)
- 因为我们只是读一个 Redis 键,响应时间非常快。
- 库存抓取器
- 执行 HTTP 请求
- 用于在不同类型的网店上抓屏
- 更新 Redis 键的缓存失效时间为 60 秒
- 另外,锁定一个 Redis 键,以防止对网店过多的 HTTP 请求。
```
version: "2.0"
services:
  web:
    build: ./web/
    ports:
      - "3000:3000"
  stock_fetch:
    build: ./stock_fetch/
  redis:
    image: redis
```
*来自示例的 docker-compose.yml 文件*
一旦本地正常工作,再向 Azure 的 Ubuntu 16.04 镜像云部署就轻车熟路,只花了不到 5 分钟。我登录、克隆仓库并键入 `docker-compose up -d`,这就是全部的工作。快速实现整个系统的原型,不会比这几个步骤更多。任何人(包括 whereismypizero.com 的所有者)只需两行命令就可以部署新解决方案:
```
$ git clone https://github.com/alexellis/pi_zero_stock
$ docker-compose up -d
```
更新网站很容易,只需要先执行 `git pull` 命令,然后执行带上 `--build` 参数的 `docker-compose up -d` 命令。
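把这两步合成一个小脚本也很方便(纯属示意,假设当前目录就是克隆下来的仓库):

```shell
# 拉取最新代码并重建、重启容器
update_site() {
  git pull && docker-compose up -d --build
}

# 用法:
# cd pi_zero_stock && update_site
```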
如果你仍然在手动链接你的 Docker 容器,请尝试一下 Docker Compose,可以用你自己的代码,也可以用如下我的代码:
> 复刻或者追踪在 github 上的代码: [alexellis/pi_zero_stock][5]
### 一睹测试网站芳容
目前测试网站使用 docker-compose 部署:[stockalert.alexellis.io][6]
![](http://blog.alexellis.io/content/images/2016/05/Screen-Shot-2016-05-16-at-22-34-26-1.png)
*预览于 2016 年 5 月 16 日*
----------
via: http://blog.alexellis.io/rapid-prototype-docker-compose/
作者:[Alex Ellis][a]
译者:[firstadream](https://github.com/firstadream)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://blog.alexellis.io/author/alex/
[1]: http://blog.alexellis.io/handsondocker
[2]: https://docs.docker.com/compose/compose-file/
[3]: http://blog.alexellis.io/dockerswarm-pizero/
[4]: https://github.com/alexellis/pi_zero_stock
[5]: https://github.com/alexellis/pi_zero_stock
[6]: http://stockalert.alexellis.io/

aria2 (命令行下载器)实例
============
[aria2][4] 是一个自由、开源、轻量级多协议和多源的命令行下载工具。它支持 HTTP/HTTPS、FTP、SFTP、 BitTorrent 和 Metalink 协议。aria2 可以通过内建的 JSON-RPC 和 XML-RPC 接口来操纵。aria2 下载文件的时候,自动验证数据块。它可以通过多个来源或者多个协议下载一个文件,并且会尝试利用你的最大下载带宽。默认情况下,所有的 Linux 发行版都包括 aria2所以我们可以从官方库中很容易的安装。一些 GUI 下载管理器例如 [uget][3] 使用 aria2 作为插件来提高下载速度。
### Aria2 特性
* 支持 HTTP/HTTPS GET
* 支持 HTTP 代理
* 支持 HTTP BASIC 认证
* 支持 HTTP 代理认证
* 支持 FTP (主动、被动模式)
* 通过 HTTP 代理的 FTP(GET 命令或者隧道)
* 分段下载
* 支持 Cookie
* 可以作为守护进程运行。
* 支持使用 fast 扩展的 BitTorrent 协议
* 支持在多文件 torrent 中选择文件
* 支持 Metalink 3.0 版本(HTTP/FTP/BitTorrent)
* 限制下载、上传速度
### 1) Linux 下安装 aria2
我们可以很容易的在所有的 Linux 发行版上安装 aria2 命令行下载器,例如 Debian、Ubuntu、Mint、RHEL、CentOS、Fedora、suse、openSUSE、Arch Linux、Manjaro、Mageia 等等……只需要输入下面的命令安装即可。对于 CentOS、RHEL 系统,我们需要开启 [EPEL][2] 或者 [RPMForge][1] 库的支持。
```
[对于 Debian、 Ubuntu 和 Mint]
$ sudo apt-get install aria2
[对于 CentOS、 RHEL、 Fedora 21 和更早些的操作系统]
# yum install aria2
[Fedora 22 和 之后的系统]
# dnf install aria2
[对于 suse 和 openSUSE]
# zypper install aria2
[Mageia]
# urpmi aria2
[对于 Arch Linux 和 Manjaro]
$ sudo pacman -S aria2
```
### 2) 下载单个文件
下面的命令将会从指定的 URL 中下载一个文件,并且保存在当前目录,在下载文件的过程中,我们可以看到文件的(日期、时间、下载速度和下载进度)。
```
# aria2c https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
[#986c80 19MiB/21MiB(90%) CN:1 DL:3.0MiB]
03/22 09:49:13 [NOTICE] Download complete: /opt/owncloud-9.0.0.tar.bz2
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
986c80|OK | 3.0MiB/s|/opt/owncloud-9.0.0.tar.bz2
Status Legend:
(OK):download completed.
```
### 3) 使用不同的名字保存文件
在初始化下载的时候,我们可以使用 `-o`(小写)选项在保存文件的时候使用不同的名字。这儿我们将要使用 owncloud.zip 文件名来保存文件。
```
# aria2c -o owncloud.zip https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
[#d31304 16MiB/21MiB(74%) CN:1 DL:6.2MiB]
03/22 09:51:02 [NOTICE] Download complete: /opt/owncloud.zip
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
d31304|OK | 7.3MiB/s|/opt/owncloud.zip
Status Legend:
(OK):download completed.
```
### 4) 下载速度限制
默认情况下,aria2 会利用全部带宽来下载文件,在文件下载完成之前,我们在服务器就什么也做不了(这将会影响其他服务访问带宽)。所以在下载大文件时最好使用 `--max-download-limit` 选项来避免进一步的问题。
```
# aria2c --max-download-limit=500k https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
[#7f9fbf 21MiB/21MiB(99%) CN:1 DL:466KiB]
03/22 09:54:51 [NOTICE] Download complete: /opt/owncloud-9.0.0.tar.bz2
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
7f9fbf|OK | 462KiB/s|/opt/owncloud-9.0.0.tar.bz2
Status Legend:
(OK):download completed.
```
### 5) 下载多个文件
下面的命令将会从指定位置下载超过一个的文件并保存到当前目录,在下载文件的过程中,我们可以看到文件的(日期、时间、下载速度和下载进度)。
```
# aria2c -Z https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2 ftp://ftp.gnu.org/gnu/wget/wget-1.17.tar.gz
[DL:1.7MiB][#53533c 272KiB/21MiB(1%)][#b52bb1 768KiB/3.6MiB(20%)]
03/22 10:25:54 [NOTICE] Download complete: /opt/wget-1.17.tar.gz
[#53533c 18MiB/21MiB(86%) CN:1 DL:3.2MiB]
03/22 10:25:59 [NOTICE] Download complete: /opt/owncloud-9.0.0.tar.bz2
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
b52bb1|OK | 2.8MiB/s|/opt/wget-1.17.tar.gz
53533c|OK | 3.4MiB/s|/opt/owncloud-9.0.0.tar.bz2
Status Legend:
(OK):download completed.
```
### 6) 续传未完成的下载
当你遇到一些网络连接问题或者系统问题,而将要下载一个大文件(例如:ISO 镜像文件)时,我建议你使用 `-c` 选项,它可以帮助我们从断点处续传未完成的下载,并且像往常一样完成。不然的话,当你再次下载,它将会开始一次新的下载,并保存成一个不同的文件名(自动地在文件名后面添加 .1)。注意:如果出现了任何中断,aria2 会使用 .aria2 后缀保存(未完成的)文件。
```
# aria2c -c https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
[#db0b08 8.2MiB/21MiB(38%) CN:1 DL:3.1MiB ETA:4s]^C
03/22 10:09:26 [NOTICE] Shutdown sequence commencing... Press Ctrl-C again for emergency shutdown.
03/22 10:09:26 [NOTICE] Download GID#db0b08bf55d5908d not complete: /opt/owncloud-9.0.0.tar.bz2
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
db0b08|INPR| 3.3MiB/s|/opt/owncloud-9.0.0.tar.bz2
Status Legend:
(INPR):download in-progress.
```
如果重新启动传输,aria2 将会恢复下载。
```
# aria2c -c https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
[#873d08 21MiB/21MiB(98%) CN:1 DL:2.7MiB]
03/22 10:09:57 [NOTICE] Download complete: /opt/owncloud-9.0.0.tar.bz2
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
873d08|OK | 1.9MiB/s|/opt/owncloud-9.0.0.tar.bz2
Status Legend:
(OK):download completed.
```
### 7) 从文件获取输入
就像 wget 可以从一个文件获取输入的 URL 列表来下载一样,我们需要创建一个文件,将每一个 URL 存储在单独的行中。aria2 命令行可以添加 `-i` 选项来执行此操作。
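该文件可以像下面这样创建,每行一个 URL(这里沿用上文示例中的两个地址):

```shell
# 生成供 aria2c -i 使用的 URL 列表文件
cat > test-aria2.txt <<'EOF'
https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
ftp://ftp.gnu.org/gnu/wget/wget-1.17.tar.gz
EOF
```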
```
# aria2c -i test-aria2.txt
[DL:3.9MiB][#b97984 192KiB/21MiB(0%)][#673c8e 2.5MiB/3.6MiB(69%)]
03/22 10:14:22 [NOTICE] Download complete: /opt/wget-1.17.tar.gz
[#b97984 19MiB/21MiB(90%) CN:1 DL:2.5MiB]
03/22 10:14:30 [NOTICE] Download complete: /opt/owncloud-9.0.0.tar.bz2
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
673c8e|OK | 4.3MiB/s|/opt/wget-1.17.tar.gz
b97984|OK | 2.5MiB/s|/opt/owncloud-9.0.0.tar.bz2
Status Legend:
(OK):download completed.
```
### 8) 每个主机使用两个连接来下载
默认情况下,每次下载对每台主机最多只建立一条连接。我们可以通过 aria2 命令行添加 `-x2`(2 表示两个连接)来创建到每台主机的多个连接,以加快下载速度。
```
# aria2c -x2 https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
[#ddd4cd 18MiB/21MiB(83%) CN:1 DL:5.0MiB]
03/22 10:16:27 [NOTICE] Download complete: /opt/owncloud-9.0.0.tar.bz2
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
ddd4cd|OK | 5.5MiB/s|/opt/owncloud-9.0.0.tar.bz2
Status Legend:
(OK):download completed.
```
### 9) 下载 BitTorrent 种子文件
我们可以使用 aria2 命令行直接下载一个 BitTorrent 种子文件:
```
# aria2c https://torcache.net/torrent/C86F4E743253E0EBF3090CCFFCC9B56FA38451A3.torrent?title=[kat.cr]irudhi.suttru.2015.official.teaser.full.hd.1080p.pathi.team.sr
[#388321 0B/0B CN:1 DL:0B]
03/22 20:06:14 [NOTICE] Download complete: /opt/[kat.cr]irudhi.suttru.2015.official.teaser.full.hd.1080p.pathi.team.sr.torrent
03/22 20:06:14 [ERROR] Exception caught
Exception: [BtPostDownloadHandler.cc:98] errorCode=25 Could not parse BitTorrent metainfo
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
388321|OK | 11MiB/s|/opt/[kat.cr]irudhi.suttru.2015.official.teaser.full.hd.1080p.pathi.team.sr.torrent
Status Legend:
(OK):download completed.
```
### 10) 下载 BitTorrent 磁力链接
使用 aria2,我们也可以通过 BitTorrent 磁力链接直接下载:
```
# aria2c 'magnet:?xt=urn:btih:248D0A1CD08284299DE78D5C1ED359BB46717D8C'
```
### 11) 下载 BitTorrent Metalink 种子
我们也可以通过 aria2 命令行直接下载一个 Metalink 文件。
```
# aria2c https://curl.haxx.se/metalink.cgi?curl=tar.bz2
```
### 12) 从密码保护的网站下载一个文件
或者,我们也可以从一个密码保护网站下载一个文件。下面的命令行将会从一个密码保护网站中下载文件。
```
# aria2c --http-user=xxx --http-password=xxx https://download.owncloud.org/community/owncloud-9.0.0.tar.bz2
# aria2c --ftp-user=xxx --ftp-password=xxx ftp://ftp.gnu.org/gnu/wget/wget-1.17.tar.gz
```
### 13) 阅读更多关于 aria2
如果你希望了解更多选项(它们与 wget 的类似),可以输入下面的命令行在你自己的终端获取详细信息:
```
# man aria2c
or
# aria2c --help
```
谢谢欣赏 :-)
--------------------------------------------------------------------------------
via: http://www.2daygeek.com/aria2-command-line-download-utility-tool/
作者:[MAGESH MARUTHAMUTHU][a]
译者:[yangmingming](https://github.com/yangmingming)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.2daygeek.com/author/magesh/
[1]:http://www.2daygeek.com/aria2-command-line-download-utility-tool/
[2]:http://www.2daygeek.com/aria2-command-line-download-utility-tool/
[3]:http://www.2daygeek.com/install-uget-download-manager-on-ubuntu-centos-debian-fedora-mint-rhel-opensuse/
[4]:https://aria2.github.io/

Linux 与 Windows 的设备驱动模型比对架构、API 和开发环境比较
============================================================================================
> 名词缩写:
> - API:应用程序接口(Application Program Interface)
> - ABI:应用系统二进制接口(Application Binary Interface)
设备驱动是操作系统的一部分,它能够通过一些特定的编程接口便于硬件设备的使用,这样软件就可以控制并且运行那些设备了。因为每个驱动都对应不同的操作系统,所以你就需要不同的 Linux、Windows 或 Unix 设备驱动,以便能够在不同的计算机上使用你的设备。这就是为什么当你雇佣一个驱动开发者或者选择一个研发服务商提供者的时候,查看他们为各种操作系统平台开发驱动的经验是非常重要的。
![](https://c2.staticflickr.com/8/7289/26775594584_d2fe7483f9_c.jpg)
驱动开发的第一步是理解每个操作系统处理它的驱动的不同方式、底层驱动模型、它使用的架构、以及可用的开发工具。例如Linux 驱动程序模型就与 Windows 非常不同。虽然 Windows 提倡驱动程序开发和操作系统开发分别进行,并通过一组 ABI 调用来结合驱动程序和操作系统,但是 Linux 设备驱动程序开发不依赖任何稳定的 ABI 或 API所以它的驱动代码并没有被纳入内核中。每一种模型都有自己的优点和缺点但是如果你想为你的设备提供全面支持那么重要的是要全面的了解它们。
在本文中,我们将比较 Windows 和 Linux 设备驱动程序探索不同的架构API构建开发和分发希望让您比较深入的理解如何开始为每一个操作系统编写设备驱动程序。
### 1. 设备驱动架构
Windows 设备驱动程序的体系结构和 Linux 中使用的不同它们各有优缺点。差异主要受以下原因的影响Windows 是闭源操作系统,而 Linux 是开源操作系统。比较 Linux 和 Windows 设备驱动程序架构将帮助我们理解 Windows 和 Linux 驱动程序背后的核心差异。
#### 1.1. Windows 驱动架构
虽然 Linux 内核分发时带着 Linux 驱动,而 Windows 内核则不包括设备驱动程序。与之不同的是,现代 Windows 设备驱动程序编写使用 Windows 驱动模型(WDM),这是一种完全支持即插即用和电源管理的模型,所以可以根据需要加载和卸载驱动程序。
处理来自应用的请求,是由 Windows 内核中被称为 I/O 管理器的部分来完成的。I/O 管理器的作用是将这些请求转换为 I/O 请求数据包(IO Request Packets,IRP),IRP 可以被用来在驱动层识别请求并且传输数据。
Windows 驱动模型 WDM 提供三种驱动, 它们形成了三个层:
- 过滤(Filter)驱动提供关于 IRP 的可选附加处理。
- 功能(Function)驱动是实现接口和每个设备通信的主要驱动。
- 总线(Bus)驱动服务不同的配适器和不同的总线控制器来实现主机模式控制设备。
一个 IRP 通过这些层就像它们经过 I/O 管理器到达底层硬件那样。每个层能够独立的处理一个 IRP 并且把它们送回 I/O 管理器。在硬件底层中有硬件抽象层(HAL),它提供一个通用的接口到物理设备。
#### 1.2. Linux 驱动架构
相比于 Windows 设备驱动Linux 设备驱动架构根本性的不同就是 Linux 没有一个标准的驱动模型也没有一个干净分隔的层。每一个设备驱动都被当做一个能够自动的从内核中加载和卸载的模块来实现。Linux 为即插即用设备和电源管理设备提供一些方式,以便那些驱动可以使用它们来正确地管理这些设备,但这并不是必须的。
模块导出它们提供的函数,并通过调用这些函数、传入任意定义的数据结构来进行沟通。请求来自文件系统或网络层的用户应用,并被转化为需要的数据结构。模块能够按层堆叠,在一个模块进行处理之后,另外一个再处理,有些模块提供了对一类设备的公共调用接口,例如 USB 设备。
Linux 设备驱动程序支持三种设备:
- 实现一个字节流接口的字符(Character)设备。
- 用于存放文件系统和处理多字节数据块 IO 的块(Block)设备。
- 用于通过网络传输数据包的网络(Network)接口。
Linux 也有一个硬件抽象层(HAL),它实际扮演了物理硬件的设备驱动接口。
### 2. 设备驱动 API
Linux 和 Windows 驱动 API 都属于事件驱动类型:只有当某些事件发生的时候,驱动代码才执行——当用户的应用程序希望从设备获取一些东西,或者当设备有某些请求需要告知操作系统。
#### 2.1. 初始化
在 Windows 上,驱动被表示为 `DriverObject` 结构,它在 `DriverEntry` 函数的执行过程中被初始化。这些入口点也注册一些回调函数,用来响应设备的添加和移除、驱动卸载和处理新进入的 IRP。当一个设备连接的时候Windows 创建一个设备对象,这个设备对象在设备驱动后面处理所有应用请求。
相比于 WindowsLinux 设备驱动生命周期由内核模块的 `module_init``module_exit` 函数负责管理,它们分别用于模块的加载和卸载。它们负责注册模块来通过使用内核接口来处理设备的请求。这个模块需要创建一个设备文件(或者一个网络接口),为其所希望管理的设备指定一个数字识别号,并注册一些当用户与设备文件交互的时候所使用的回调函数。
#### 2.2. 命名和声明设备
##### 在 Windows 上注册设备
Windows 设备驱动在新连接设备时是由回调函数 `AddDevice` 通知的。它接下来就去创建一个设备对象(device object),用于识别该设备的特定的驱动实例。取决于驱动的类型,设备对象可以是物理设备对象(Physical Device Object,PDO)、功能设备对象(Function Device Object,FDO)或者过滤设备对象(Filter Device Object,FIDO)。设备对象能够堆叠,PDO 在底层。
设备对象在这个设备连接在计算机期间一直存在。`DeviceExtension` 结构能够被用于关联到一个设备对象的全局数据。
设备对象可以有如下形式的名字 `\Device\DeviceName`,这被系统用来识别和定位它们。应用可以使用 `CreateFile` API 函数来打开一个有上述名字的文件,获得一个可以用于和设备交互的句柄。
然而,通常只有 PDO 有自己的名字。未命名的设备能够通过设备级接口来访问。设备驱动注册一个或多个接口,以 128 位全局唯一标识符(GUID)来标示它们。用户应用能够使用已知的 GUID 来获取一个设备的句柄。
##### 在 Linux 上注册设备
在 Linux 平台上,用户应用通过文件系统入口访问设备,它通常位于 `/dev` 目录。在模块初始化的时候,它通过调用内核函数 `register_chrdev` 创建了所有需要的入口。应用可以发起 `open` 系统调用来获取一个文件描述符来与设备进行交互。这个调用(以及将来对该文件描述符的进一步调用,例如 `read`、`write` 或 `close`)会被分配到由该模块安装到 `file_operations` 或者 `block_device_operations` 这样的数据结构中的回调函数。
设备驱动模块负责分配和保持任何需要用于操作的数据结构。传送进文件系统回调函数的 `file` 结构有一个 `private_data` 字段,它可以被用来存放指向具体驱动数据的指针。块设备和网络接口 API 也提供类似的字段。
虽然应用使用文件系统的节点来定位设备,但是 Linux 在内部使用一个主设备号(major numbers)和次设备号(minor numbers)的概念来识别设备及其驱动。主设备号被用来识别设备驱动,而次设备号由驱动使用来识别它所管理的设备。驱动为了去管理一个或多个固定的主设备号,必须首先注册自己或者让系统来分配未使用的设备号给它。
目前,Linux 为主次设备对(major-minor pairs)使用一个 32 位的值,其中 12 位分配主设备号,并允许多达 4096 个不同的设备。主次设备对对于字符设备和块设备是不同的,所以一个字符设备和一个块设备能使用相同的设备对而不导致冲突。网络接口是通过像 eth0 这样的符号名来识别的,这又区别于使用主次设备号的字符设备和块设备。
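在一台 Linux 机器上,可以直接观察到这一机制,例如字符设备 `/dev/null` 固定使用主设备号 1、次设备号 3:

```shell
# stat 的 %t/%T 以十六进制打印主/次设备号
stat -c 'major=%t minor=%T' /dev/null
# 输出:major=1 minor=3

# ls -l 输出的首字符为 c,表明这是一个字符设备
ls -l /dev/null
```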
#### 2.3. 交换数据
Linux 和 Windows 都支持在用户级应用程序和内核级驱动程序之间传输数据的三种方式:
- **缓冲型输入输出(Buffered Input-Output)**:它使用由内核管理的缓冲区。对于写操作,内核从用户空间缓冲区中拷贝数据到内核分配的缓冲区,并且把它传送到设备驱动中。读操作也一样,由内核将数据从内核缓冲区中拷贝到应用提供的缓冲区中。
- **直接型输入输出(Direct Input-Output)**:它不使用拷贝功能。代替它的是,内核在物理内存中钉死一块用户分配的缓冲区,以便它可以一直留在那里,在数据传输过程中不被交换出去。
- **内存映射(Memory mapping)**:它也能够由内核管理,这样内核和用户空间应用就能够通过不同的地址访问同样的内存页。
##### **Windows 上的驱动程序 I/O 模式**
支持缓冲型 I/O 是 WDM 的内置功能。缓冲区能够被设备驱动通过在 IRP 结构中的 `AssociatedIrp.SystemBuffer` 字段访问。当需要和用户空间通讯的时候,驱动只需从这个缓冲区中进行读写操作。
Windows 上的直接 I/O 由内存描述符列表(memory descriptor lists,MDL)介导。这种半透明的结构是通过在 IRP 中的 `MdlAddress` 字段来访问的。它们被用来定位由用户应用程序分配的缓冲区的物理地址,并在 I/O 请求期间钉死不动。
在 Windows 上进行数据传输的第三个选项称为 `METHOD_NEITHER`。 在这种情况下,内核需要传送用户空间的输入输出缓冲区的虚拟地址给驱动,而不需要确定它们有效或者保证它们映射到一个可以由设备驱动访问的物理内存地址。设备驱动负责处理这些数据传输的细节。
##### **Linux 上的驱动程序 I/O 模式**
Linux 提供许多函数,例如 `clear_user`、`copy_to_user`、`strncpy_from_user` 等,用来在内核和用户内存之间进行缓冲区数据传输。这些函数保证了指向数据缓冲区指针的有效性,并且通过在内存区域之间安全地拷贝数据缓冲区来处理数据传输的所有细节。
然而,块设备的驱动对已知大小的整个数据块进行操作,这些数据块可以在内核和用户地址区域之间快速移动而不需要拷贝。Linux 内核会为所有的块设备驱动自动处理这种情况:块请求队列不用多余的拷贝来处理传送数据块,而 Linux 系统调用接口会把文件系统请求转换为块请求。
最终,设备驱动能够从内核地址区域分配一些存储页面(不可交换的)并且使用 `remap_pfn_range` 函数来直接映射这些页面到用户进程的地址空间。然后应用能获取这些缓冲区的虚拟地址并且使用它来和设备驱动交流。
### 3. 设备驱动开发环境
#### 3.1. 设备驱动框架
##### Windows 驱动程序工具包
Windows 是一个闭源操作系统。Microsoft 提供 Windows 驱动程序工具包以方便非 Microsoft 供应商开发 Windows 设备驱动。工具包中包含开发、调试、检验和打包 Windows 设备驱动等所需的所有内容。
Windows 驱动模型(Windows Driver Model,WDM)为设备驱动定义了一个干净的接口框架。Windows 保持这些接口的源代码和二进制的兼容性。编译好的 WDM 驱动通常是前向兼容的:也就是说,一个较旧的驱动能够在没有重新编译的情况下在较新的系统上运行,但是它当然不能够访问系统提供的新功能。但是,驱动不保证后向兼容性。
##### **Linux 源代码**
和 Windows 相对比Linux 是一个开源操作系统,因此 Linux 的整个源代码是用于驱动开发的 SDK。没有驱动设备的正式框架但是 Linux 内核包含许多提供了如驱动注册这样的通用服务的子系统。这些子系统的接口在内核头文件中描述。
尽管 Linux 有定义接口但这些接口在设计上并不稳定。Linux 不提供有关前向和后向兼容的任何保证。设备驱动对于不同的内核版本需要重新编译。没有稳定性的保证可以让 Linux 内核进行快速开发,因为开发人员不必去支持旧的接口,并且能够使用最好的方法解决手头的这些问题。
当为 Linux 写树内(in-tree,指当前 Linux 内核开发主干)驱动程序时,这种不断变化的环境不会造成任何问题,因为它们作为内核源代码的一部分,与内核本身同步更新。然而,闭源驱动必须单独开发,并且必须在树外(out-of-tree)维护它们,以支持不同的内核版本。因此,Linux 鼓励设备驱动程序开发人员在树内维护他们的驱动。
#### 3.2. 为设备驱动构建系统
Windows 驱动程序工具包为 Microsoft Visual Studio 添加了驱动开发支持,并包括用来构建驱动程序代码的编译器。开发 Windows 设备驱动程序与在 IDE 中开发用户空间应用程序没有太大的区别。Microsoft 提供了一个企业 Windows 驱动程序工具包,提供了类似于 Linux 命令行的构建环境。
Linux 使用 Makefile 作为树内和树外系统设备驱动程序的构建系统。Linux 构建系统非常发达,通常是一个设备驱动程序只需要少数行就产生一个可工作的二进制代码。开发人员可以使用任何 [IDE][5],只要它可以处理 Linux 源代码库和运行 `make` ,他们也可以很容易地从终端手动编译驱动程序。
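树外模块的构建通常只需要一个很短的 Makefile,它把实际工作转交给内核源码树的构建系统。下面是一个常见的最小写法(示意,假设模块源文件名为 hello.c):

```makefile
# obj-m 告诉内核构建系统把 hello.c 编译为可加载模块 hello.ko
obj-m += hello.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```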
#### 3.3. 文档支持
Windows 对于驱动程序的开发有良好的文档支持。Windows 驱动程序工具包包括文档和示例驱动程序代码,通过 MSDN 可获得关于内核接口的大量信息,并存在大量的有关驱动程序开发和 Windows 底层的参考和指南。
Linux 的文档则没有那么详尽,但整个 Linux 源代码都可供驱动开发人员使用,缓解了这一问题。源代码树中的 Documentation 目录描述了一些 Linux 子系统,另外还有[几本书][4]更详细地介绍了 Linux 设备驱动程序开发和 Linux 内核概览。
Linux 没有提供指定的设备驱动程序样例,但现有的生产级驱动程序的源代码都是公开的,可以用作开发新设备驱动程序的参考。
#### 3.4. 调试支持
Linux 和 Windows 都有可用于追踪调试驱动程序代码的日志机制。在 Windows 上将使用 `DbgPrint` 函数,而在 Linux 上使用的函数称为 `printk`。然而,并不是每个问题都可以通过只使用日志记录和源代码来解决。有时断点更有用,因为它们允许检查驱动代码的动态行为。交互式调试对于研究崩溃的原因也是必不可少的。
Windows 通过其内核级调试器 `WinDbg` 支持交互式调试。这需要通过一个串行端口连接两台机器一台计算机运行被调试的内核另一台运行调试器和控制被调试的操作系统。Windows 驱动程序工具包包括 Windows 内核的调试符号,因此 Windows 的数据结构将在调试器中部分可见。
Linux 还支持通过 `KDB` 和 `KGDB` 进行交互式调试。调试支持可以内置到内核中,并在启动时启用。之后,可以直接通过物理键盘调试系统,或通过串行端口从另一台计算机连接到它。KDB 提供了一个简单的命令行界面,这是在同一台机器上调试内核的唯一方法;然而,KDB 缺乏源代码级的调试支持。KGDB 通过串行端口提供了一个更复杂的接口,它允许使用像 GDB 这样标准的应用程序调试器来调试 Linux 内核,就像调试任何其它用户空间应用程序一样。
### 4. 设备驱动分发
#### 4.1. 安装设备驱动
在 Windows 上,安装的驱动程序是由被称为 INF 的文本文件描述的,这些文件通常存储在 `C:\Windows\INF` 目录中。它们由驱动供应商提供,定义了哪些设备由该驱动程序服务、哪里可以找到驱动程序的二进制文件,以及驱动程序的版本等。
当一个新设备插入计算机时,Windows 会查看已经安装的驱动程序,并选择适当的一个加载。当设备被移除的时候,Windows 会自动卸载相应的驱动。
在 Linux 上,一些驱动被构建到内核中并且保持永久的加载。非必要的驱动被构建为内核模块,它们通常是存储在 `/lib/modules/kernel-version` 目录中。这个目录还包含各种配置文件,如 `modules.dep`,用于描述内核模块之间的依赖关系。
虽然 Linux 内核可以在自身启动时加载一些模块,但通常模块加载由用户空间应用程序监督。例如,`init` 进程可能在系统初始化期间加载一些模块,`udev` 守护程序负责跟踪新插入的设备并为它们加载适当的模块。
#### 4.2. 更新设备驱动
Windows 为设备驱动程序提供了稳定的二进制接口,因此在某些情况下,无需与系统一起更新驱动程序二进制文件。任何必要的更新由 Windows Update 服务处理,它负责定位、下载和安装适用于系统的最新版本的驱动程序。
然而Linux 不提供稳定的二进制接口,因此有必要在每次内核更新时重新编译和更新所有必需的设备驱动程序。显然,内置在内核中的设备驱动程序会自动更新,但是树外模块会产生轻微的问题。 维护最新的模块二进制文件的任务通常用 [DKMS][3] 来解决:这是一个当安装新的内核版本时自动重建所有注册的内核模块的服务。
#### 4.3. 安全方面的考虑
所有 Windows 设备驱动程序在 Windows 加载它们之前必须被数字签名。在开发期间可以使用自签名证书,但是分发给终端用户的驱动程序包必须使用 Microsoft 信任的有效证书进行签名。供应商可以从 Microsoft 授权的任何受信任的证书颁发机构获取软件出版商证书Software Publisher Certificate。然后此证书由 Microsoft 交叉签名,并且生成的交叉证书用于在发行之前签署驱动程序包。
Linux 内核也能配置为在加载内核模块前校验其签名,并禁止加载不可信的内核模块。内核所信任的公钥集在构建时是固定的,并且是完全可配置的。内核所执行检查的严格程度在构建时也是可配置的,范围从简单地为不可信模块发出警告,到拒绝加载任何有效性可疑的模块。
### 5. 结论
如上所示Windows 和 Linux 设备驱动程序基础设施有一些共同点,例如调用 API 的方法,但更多的细节是相当不同的。最突出的差异源于 Windows 是由商业公司开发的封闭源操作系统这个事实。这使得 Windows 上有好的、文档化的、稳定的驱动 ABI 和正式框架,而在 Linux 上,更多的是源代码做了一个有益的补充。文档支持也在 Windows 环境中更加发达,因为 Microsoft 具有维护它所需的资源。
另一方面Linux 不会使用框架来限制设备驱动程序开发人员,并且内核和产品级设备驱动程序的源代码可以在需要的时候有所帮助。缺乏接口稳定性也有其作用,因为它意味着最新的设备驱动程序总是使用最新的接口,内核本身承载较小的后向兼容性负担,这带来了更干净的代码。
了解这些差异以及每个系统的具体情况,是为您的设备提供有效的驱动程序开发和支持的关键的第一步。我们希望这篇对 Windows 和 Linux 设备驱动程序开发所做的对比,有助于您理解它们,并在研究设备驱动程序开发的过程中,把它作为一个很好的起点。
--------------------------------------------------------------------------------
via: http://xmodulo.com/linux-vs-windows-device-driver-model.html
作者:[Dennis Turpitka][a]
译者:[FrankXinqi &YangYang](https://github.com/FrankXinqi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://xmodulo.com/author/dennis
[1]: http://xmodulo.com/linux-vs-windows-device-driver-model.html?format=pdf
[2]: https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=PBHS9R4MB9RX4
[3]: http://xmodulo.com/build-kernel-module-dkms-linux.html
[4]: http://xmodulo.com/go/linux_device_driver_books
[5]: https://linux.cn/article-7704-1.html


@ -1,41 +1,43 @@
谁需要 GUI——Linux 终端生存之道
=================================================

完全在 Linux 终端中生存并不容易,但这绝对是可行的。

![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-1-100669790-orig.jpg)

### 处理常见功能的最佳 Linux shell 应用

你是否曾想像过完完全全在 Linux 终端里生存?没有图形桌面,没有现代的 GUI 软件,只有文本 —— 在 Linux shell 中,除了文本还是文本。这可能并不容易,但这是绝对可行的。[我最近尝试完全在 Linux shell 中生存了 30 天][1]。下边提到的就是我最喜欢用的 shell 应用,可以用来处理大部分的常用电脑功能(网页浏览、文字处理等)。这些显然有些不足,因为纯文本操作实在是有些艰难。

![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-2-100669791-orig.png)

### 在 Linux 终端里发邮件

要在终端里发邮件,选择有很多。很多人会推荐 mutt 和 notmuch。这两个软件都功能强大并且表现非凡,但是我却更喜欢 alpine。为何?不仅是因为它的高效性,还因为如果你习惯了像 Thunderbird 之类的 GUI 邮件客户端,你会发现 alpine 的界面与它们非常相似。

![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-3-100669837-orig.jpg)

### 在 Linux 终端里浏览网页

我有一个词要告诉你:[w3m][5]。好吧,我承认这并不是一个真实的词。但 w3m 的确是我在 Linux 终端的 web 浏览器选择。它能够很好地呈现网页,并且它也足够强大,可以用来在像 Google+ 之类的网站上发布消息(尽管方法并不有趣)。Lynx 可能是基于文本的 Web 浏览器的事实标准,但 w3m 还是我的最爱。

![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-4-100669838-orig.jpg)

### 在 Linux 终端里编辑文本

对于编辑简单的文本文件,有一个应用是我的最爱。不!不!不是 emacs,同样也绝对不是 vim。对于编辑文本文件或者简要记下笔记,我喜欢使用 nano。对,就是 nano。它非常简单,易于学习,并且使用方便。当然还有更多的软件具有更多功能,但 nano 的使用是最令人愉快的。

![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-5-100669839-orig.jpg)

### 在 Linux 终端里处理文字

在一个只有文本的 shell 之中,“文本编辑器”和“文字处理程序”实在没有什么大的区别。但是像我这样需要大量写作的人,有一个专门用于长期写作的软件是非常必要的。而我的最爱就是 wordgrinder。它有足够的工具让我愉快工作——一个菜单驱动的界面(使用快捷键控制),并且支持 OpenDocument、HTML 或其他等多种文件格式。

![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-6-100669795-orig.jpg)

### 在 Linux 终端里听音乐

当谈到在 shell 中播放音乐(比如 mp3、ogg 等),有一个软件绝对是卫冕之王:[cmus][7]。它支持所有你想得到的文件格式。它的使用超级简单,运行速度超级快,并且只使用系统少量的资源。如此清洁,如此流畅。这才是一个好的音乐播放器的样子。

![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-7-100669796-orig.jpg)
@ -47,7 +49,7 @@
### 在 Linux 终端里发布推文

这不是开玩笑!由于 [rainbowstream][10] 的存在,我们已经可以在终端里发布推文了。尽管我时不时遇到一些 bug,但整体上它工作得很好。虽然没有网页版 Twitter 或官方移动客户端那么好用,但这是一个终端版的 Twitter,来试一试吧。尽管它的功能还未完善,但是用起来还是很酷,不是吗?

![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-9-100669798-orig.jpg)
@ -59,25 +61,25 @@
### 在 Linux 终端里管理进程

可以使用 [htop][12]。与 top 相似,但更好用、更美观。有时候,我打开 htop 之后就让它一直运行。没有原因,就是喜欢!从某方面说,它就像将音乐可视化——当然,这里显示的是 RAM 和 CPU 的使用情况。

![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-11-100669800-orig.png)

### 在 Linux 终端管理文件

在一个纯文本终端里并不意味着你不能享受生活之美好。比方说,一个出色的文件浏览和管理器。这方面,[Midnight Commander][13] 是很好用的。

![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-12-100669801-orig.png)

### 在 Linux 终端里管理终端窗口

如果要在终端里工作很长时间,就需要一个多窗口终端了。它是这样一个软件 —— 可以将用户终端会话分割成一个自定义网格,从而可以同时使用和查看多个终端应用。对于 shell,它相当于一个平铺式窗口管理器。我最喜欢用的是 [tmux][14]。不过 [GNU Screen][15] 也很好用。学习怎么使用它们可能要花点时间,但一旦会用,就会很方便。

![](http://core0.staticworld.net/images/article/2016/07/linux-terminal-13-100669802-orig.jpg)

### 在 Linux 终端里进行讲稿演示

这类软件有 LibreOffice、Google slides、gasp 或者 PowerPoint。我在讲稿演示软件上花费很多时间,很高兴有一个终端版的软件。它称做“[文本演示程序(tpp)][16]”。很显然,没有图片,只是一个使用简单标记语言将放在一起的幻灯片展示出来的简单程序。它不可能让你在其中插入猫的图片,但可以让你在终端里进行完整的演示。

--------------------------------------------------------------------------------
@ -85,7 +87,7 @@ via: http://www.networkworld.com/article/3091139/linux/who-needs-a-gui-how-to-li
作者:[Bryan Lunduke][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[jasminepeng](https://github.com/jasminepeng)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,58 @@
为满足当今和未来 IT 需求,培训员工还是雇佣新人?
================================================================
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/cio_talent_4.png?itok=QLhyS_Xf)
在数字化时代,由于 IT 工具不断更新,技术公司紧随其后,对 IT 技能的需求也不断变化。对于企业来说,寻找和雇佣那些拥有令人垂涎能力的创新人才,是非常不容易的。同时,培训内部员工来使他们接受新的技能和挑战,需要一定的时间,而时间要求常常是紧迫的。
[Sandy Hill][1] 对 IT 涉及到的多项技术都很熟悉。她作为 [Pegasystems][2] 项目的 IT 总监,负责多个 IT 团队,工作范围从应用部署到数据中心运营。更重要的是,Pegasystems 开发的应用能够帮助销售、市场、服务以及运营团队简化操作流程、改善客户联络。这意味着她需要掌握使用 IT 内部资源的最佳方法,面对公司客户遇到的 IT 挑战。
![](https://enterprisersproject.com/sites/default/files/CIO_Q%20and%20A_0.png)
**TEP企业家项目这些年你是如何调整培训重心的**
**Hill**:在过去的几年中,我们经历了爆炸式的发展,现在我们要实现更多的全球化进程。因此,培训目标是确保每个人都在同一起跑线上。
我们主要的关注点在培养员工使用新产品和工具上,这些新产品和工具能够推动创新,提高工作效率。例如,我们使用了之前没有的资产管理系统。因此我们需要为全部员工做培训,而不是雇佣那些已经知道该产品的人。当我们正在发展的时候,我们也试图保持紧张的预算和稳定的职员总数。所以,我们更愿意在内部培训而不是雇佣新人。
**TEP说说培训方法吧怎样帮助你的员工发展他们的技能**
**Hill**:我要求每一位员工制定一个技术性的和非技术性的训练目标。这作为他们绩效评估的一部分。他们的技术性目标需要与他们的工作职能相符,非技术岗目标则随意,比如着重发展一项软技能,或是学一些专业领域之外的东西。我每年对职员进行一次评估,看看差距和不足之处,以使团队保持全面发展。
**TEP你的训练计划能够在多大程度上减轻招聘工作量, 保持职员的稳定性?**
**Hill**:使我们的职员保持学习新技术的兴趣,可以让他们不断提高技能。让职员知道我们重视他们并且让他们在擅长的领域成长和发展,以此激励他们。
**TEP你们发现哪些培训是最有效的**
**Hill**:我们使用几种不同的培训方法,我们认为效果很好。对新的或特殊的项目,我们会由供应商提供培训课程,作为项目的一部分。要是这个方法行不通,我们就进行脱产培训。我们也会购买一些在线的培训课程。我还鼓励职员每年参加至少一次会议,以了解行业的动向。
**TEP哪些技能需求更适合雇佣新人而不是培训现有员工**
**Hill**:这和项目有关。最近有一个项目,需要使用 OpenStack而我们根本没有这方面的专家。所以我们与一家从事这一领域的咨询公司合作。我们利用他们的专业人员运行该项目并现场培训我们的内部团队成员。让内部员工学习他们需要的技能同时还要完成他们的日常工作是一项艰巨的任务。
顾问帮助我们确定我们需要的员工人数。这样我们就可以对员工进行评估,看是否存在缺口。如果存在人员上的缺口,我们还需要额外的培训或是员工招聘。我们也确实雇佣了一些合同工。另一个选择是对一些全职员工进行为期六至八周的培训,但我们的项目模式不容许这么做。
**TEP最近雇佣的员工他们的那些技能特别能够吸引到你**
**Hill**:在最近的招聘中,我更看重软技能。除了扎实的技术能力外,他们需要能够在团队中进行有效的沟通和工作,要有说服他人,谈判和解决冲突的能力。
IT 人常常独来独往不擅社交。然而如今IT 与整个组织结合越来越紧密,为其他业务部门提供有用的更新和状态报告的能力至关重要,可展示 IT 部门存在的重要性。
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2016/6/training-vs-hiring-meet-it-needs-today-and-tomorrow
作者:[Paul Desmond][a]
译者:[Cathon](https://github.com/Cathon)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://enterprisersproject.com/user/paul-desmond
[1]: https://enterprisersproject.com/user/sandy-hill
[2]: https://www.pega.com/pega-can?&utm_source=google&utm_medium=cpc&utm_campaign=900.US.Evaluate&utm_term=pegasystems&gloc=9009726&utm_content=smAXuLA4U|pcrid|102822102849|pkw|pegasystems|pmt|e|pdv|c|


@ -0,0 +1,140 @@
轻轻几个点击,在 AWS 和 Azure 上搭建 Docker 数据中心
===================================================
通过几个点击即可在 “AWS 快速起步”和“Azure 市场”上高效搭建产品级 Docker 数据中心。
通过 AWS 快速起步的 CloudFormation 模板和 Azure 市场上的预编译模板来部署 Docker 数据中心,使得在公有云基础设施上部署企业级的 CaaS Docker 环境比以往更加容易。
Docker 数据中心 CaaS 平台为各种规模的企业的敏捷应用部署提供了容器和集群的编排和管理,使之更简单、安全和可伸缩。使用新为 Docker 数据中心预编译的云模板,开发者和 IT 运维人员可以无缝的把容器化的应用迁移到亚马逊 EC2 或者微软的 Azure 环境而无需修改任何代码。现在,企业可以快速实现更高的计算和运营效率,可以通过短短几步操作实现支持 Docker 的容器管理和编排。
### 什么是 Docker 数据中心?
Docker 数据中心包括了 Docker 通用控制面板(Docker Universal Control Plane,UCP)、Docker 可信注册库(Docker Trusted Registry,DTR)和商用版 Docker 引擎(CS Docker Engine),并带有与客户的应用服务等级协议相匹配的商业支持服务。
- Docker 通用控制面板UCP一种企业级的集群管理方案帮助客户通过单个管理面板管理整个集群
- Docker 可信注册库DTR 一种镜像存储管理方案,帮助客户安全存储和管理 Docker 镜像
- 商用版的 Docker 引擎
![](http://img.scoop.it/lVraAJgJbjAKqfWCLtLuZLnTzqrqzN7Y9aBZTaXoQ8Q=)
### 在 AWS 上快速布置 Docker 数据中心
秉承 Docker 与 AWS 最佳实践,参照 AWS 快速起步教程来,你可以在 AWS 云上快速部署 Docker 容器。Docker 数据中心快速起步基于模块化和可定制的 CloudFormation 模板,客户可以在其之上增加额外功能或者为自己的 Docker 部署修改模板。
- [AWS 的 Docker 数据中心应用说明](https://youtu.be/aUx7ZdFSkXU)
#### 架构
![](http://img.scoop.it/sZ3_TxLba42QB-r_6vuApLnTzqrqzN7Y9aBZTaXoQ8Q=)
AWS CloudFormation 的安装过程始于创建所需的 AWS 资源,这些资源包括:VPC、安全组、公有与私有子网、因特网网关、NAT 网关与 S3 bucket。
然后AWS Cloudformation 启动第一个 UCP 控制器实例,紧接着,安装 Docker 引擎和 UCP 容器。它把第一个 UCP 控制器创建的根证书备份到 S3。一旦第一个 UCP 控制器成功运行,其他 UCP 控制器、UCP 集群节点和第一个 DTR 复制的进程就会被触发。和第一个 UCP 控制器节点类似,其他所有节点创建进程也都由商用版 Docker 引擎开始,然后安装并运行 UCP 和 DTR 容器以加入集群。两个弹性负载均衡器ELB一个分配给 UCP另外一个为 DTR 服务它们启动并自动完成配置来在两个可用区AZ之间提供弹性负载均衡。
除这些之外如有需要UCP 控制器和节点在 ASG 中启动并提供扩展功能。这种架构确保 UCP 和 DTR 两者都部署在两个 AZ 上以增强弹性与高可靠性。在公有或者私有 HostedZone 上Route53 用来动态注册或者配置 UCP 和 DTR。
![](http://img.scoop.it/HM7Ag6RFvMXvZ_iBxRgKo7nTzqrqzN7Y9aBZTaXoQ8Q=)
#### 快速起步模板的核心功能如下:
- 创建 VPC、不同 AZ 上的私有和公有子网、ELB、NAT 网关、因特网网关、自动伸缩组,它们全部基于 AWS 最佳实践
- 为 DDC 创建一个 S3 bucket其用于证书备份和 DTR 映像存储DTR 需要额外配置)
- 在客户的 VPC 范畴,跨多 AZ 部署 3 个 UCP 控制器
- 创建预先配置了健康检查的 UCP ELB
- 创建一个 DNS 记录并关联到 UCP ELB
- 创建可伸缩的 UCP 节点集群
- 在 VPC 范畴内,跨多 AZ 创建 3 个 DTR 副本
- 创建一个预先配置了健康检查的 DTR ELB
- 创建一个 DNS 记录,并关联到 DTR ELB
- [下载 AWS 快速指南](https://s3.amazonaws.com/quickstart-reference/docker/latest/doc/docker-datacenter-on-the-aws-cloud.pdf)
### 在 AWS 使用 Docker 数据中心
1. 登录 [Docker Store][1] 获取 [30 天免费试用][2]或者[联系销售][4]
2. 确认之后看到提示“Launch Stack”后客户会被重定向到 AWS Cloudformation 入口
3. 确认启动 Docker 的 AWS 区域
4. 提供启动参数
5. 确认并启动
6. 启动完成之后,点击输出标签可以看到 UCP/DTR 的 URL、缺省用户名、密码和 S3 bucket 的名称
- [Docker 数据中心需要 2000 美元的信用担保](https://aws.amazon.com/mp/contactdocker/)
### 在 Azure 使用 Azure 市场的预编译模板部署
在 Azure 市场上Docker 数据中心是一个预先编译的模板,客户可以在 Azure 横跨全球的数据中心即起即用。客户可以根据自己需求从 Azure 提供的各种 VM 中选择适合自己的 VM 部署 Docker 数据中心。
#### 架构
![](http://img.scoop.it/V9SpuBCoAnUnkRL3J-FRFLnTzqrqzN7Y9aBZTaXoQ8Q=)
Azure 部署过程始于输入一些基本用户信息,如 ssh 登录的管理员用户名(系统级管理员)和资源组名称。你可以把资源组理解为一组有生命周期和部署边界的资源集合。你可以在这个链接了解更多关于资源组的信息: http://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/ 。
下一步,输入集群详细信息,包括:UCP 控制器 VM 大小、控制器个数(缺省为 3 个)、UCP 节点 VM 大小、UCP 节点个数(缺省 1,最大值为 10)、DTR 节点 VM 大小、DTR 节点个数、虚拟网络名和地址(例如:10.0.0.1/19)。关于网络,客户可以配置 2 个子网:第一个子网分配给 UCP 控制器,第二个分配给 DTR 和 UCP 节点。
最后,点击 OK 完成部署。对于小集群,服务开通需要大约 15-19 分钟,大集群更久些。
![](http://img.scoop.it/DXPM5-GXP0j2kEhno0kdRLnTzqrqzN7Y9aBZTaXoQ8Q=)
![](http://img.scoop.it/321ElkCf6rqb7u_-nlGPtrnTzqrqzN7Y9aBZTaXoQ8Q=)
#### 如何在 Azure 部署
1. 注册 [Docker 数据中心 30 天试用][5]许可或者[联系销售][6]
2. [跳转到微软 Azure 市场的 Docker 数据中心][7]
3. [查看部署文档][8]
---
通过注册获取 Docker 数据中心许可证开始,然后你就能够通过 AWS 或者 Azure 模板搭建自己的数据中心。
- [获取 30 天试用许可证][9]
- [通过视频理解 Docker 数据中心架构][10]
- [观看演示视频][11]
- [获取 AWS 提供的部署 Docker 数据中心的 75 美元红包奖励][12]
了解有关 Docker 的更多信息:
- 初识 Docker? 尝试一下 10 分钟[在线学习课程][20]
- 分享镜像,自动构建,或用一个[免费的 Docker Hub 账号][21]尝试更多
- 阅读 [Docker 1.12 发行说明][22]
- 订阅 [Docker Weekly][23]
- 报名参加即将到来的 [Docker Online Meetups][24]
- 参加即将发生的 [Docker Meetups][25]
- 观看 [DockerCon EU2015][26]视频
- 开始为 [Docker][27] 贡献力量
--------------------------------------------------------------------------------
via: https://blog.docker.com/2016/06/docker-datacenter-aws-azure-cloud/
作者:[Trisha McCanna][a]
译者:[firstadream](https://github.com/firstadream)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://blog.docker.com/author/trisha/
[1]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[2]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[4]: https://goto.docker.com/contact-us.html
[5]: https://store.docker.com/login?next=%2Fbundles%2Fdocker-datacenter%2Fpurchase?plan=free-trial
[6]: https://goto.docker.com/contact-us.html
[7]: https://azure.microsoft.com/en-us/marketplace/partners/docker/dockerdatacenterdocker-datacenter/
[8]: https://success.docker.com/Datacenter/Apply/Docker_Datacenter_on_Azure
[9]: http://www.docker.com/trial
[10]: https://www.youtube.com/playlist?list=PLkA60AVN3hh8tFH7xzI5Y-vP48wUiuXfH
[11]: https://www.youtube.com/playlist?list=PLkA60AVN3hh8a8JaIOA5Q757KiqEjPKWr
[12]: https://aws.amazon.com/quickstart/promo/
[20]: https://docs.docker.com/engine/understanding-docker/
[21]: https://hub.docker.com/
[22]: https://docs.docker.com/release-notes/
[23]: https://www.docker.com/subscribe_newsletter/
[24]: http://www.meetup.com/Docker-Online-Meetup/
[25]: https://www.docker.com/community/meetup-groups
[26]: https://www.youtube.com/playlist?list=PLkA60AVN3hh87OoVra6MHf2L4UR9xwJkv
[27]: https://docs.docker.com/contributing/contributing/


@ -0,0 +1,143 @@
2016 年 Linux 下五个最佳视频编辑软件
=====================================================
![](https://itsfoss.com/wp-content/uploads/2016/06/linux-video-ditor-software.jpg)
概要: 在这篇文章中Tiwo 讨论了 Linux 下最佳视频编辑器的优缺点和在基于 Ubuntu 的发行版中的安装方法。
在过去,我们已经在类似的文章中讨论了 [Linux 下最佳图像管理应用软件][1][Linux 上四个最佳的现代开源代码编辑器][2]。今天,我们来看看 Linux 下的最佳视频编辑软件。
当谈及免费的视频编辑软件Windows Movie Maker 和 iMovie 是大多数人经常推荐的。
不幸的是,它们在 GNU/Linux 下都是不可用的。但是你不必担心这个,因为我们已经为你收集了一系列最佳的视频编辑器。
### Linux下最佳的视频编辑应用程序
接下来,让我们来看看 Linux 下排名前五的最佳视频编辑软件:
#### 1. Kdenlive
![](https://itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg)
[Kdenlive][3] 是一款来自于 KDE 的自由而开源的视频编辑软件,它提供双视频监视器、多轨时间轴、剪辑列表、可自定义的布局支持、基本效果和基本转换的功能。
它支持各种文件格式和各种摄像机和相机包括低分辨率摄像机Raw 和 AVI DV 编辑mpeg2、mpeg4 和 h264 AVCHD小型摄像机和摄像机高分辨率摄像机文件包括 HDV 和 AVCHD 摄像机;专业摄像机,包括 XDCAM-HD^TM 流、IMX^TM D10流、DVCAMD10、DVCAM、DVCPRO^TM 、DVCPRO50^TM 流和 DNxHD^TM 流等等。
你可以在命令行下运行下面的命令来安装:
```
sudo apt-get install kdenlive
```
或者,打开 Ubuntu 软件中心,然后搜索 Kdenlive。
#### 2. OpenShot
![](https://itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg)
[OpenShot][5] 是我们这个 Linux 视频编辑软件列表中的第二选择。 OpenShot 可以帮助您创建支持过渡、效果、调整音频电平的电影,当然,它也支持大多数格式和编解码器。
您还可以将电影导出到 DVD上传到 YouTube、Vimeo、Xbox 360 和许多其他常见的格式。 OpenShot 比 Kdenlive 更简单。 所以如果你需要一个简单界面的视频编辑器OpenShot 会是一个不错的选择。
最新的版本是 2.0.7。您可以从终端窗口运行以下命令安装 OpenShot 视频编辑器:
```
sudo apt-get install openshot
```
它需要下载 25 MB安装后需要 70 MB 硬盘空间。
#### 3. Flowblade Movie Editor
![](https://itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg)
[Flowblade Movie Editor][6] 是一个用于 Linux 的多轨非线性视频编辑器。它是自由而开源的。 它配备了一个时尚而现代的用户界面。
它是用 Python 编写的,旨在提供一个快速、精确的功能。 Flowblade 致力于在 Linux 和其他自由平台上提供最好的体验。 所以现在没有 Windows 和 OS X 版本。
要在 Ubuntu 和其他基于 Ubuntu 的系统上安装 Flowblade请使用以下命令
```
sudo apt-get install flowblade
```
#### 4. Lightworks
![](https://itsfoss.com/wp-content/uploads/2016/06/lightworks-running-on-ubuntu-16.04.jpg)
如果你要寻找一个有更多功能的视频编辑软件,这会是答案。 [Lightworks][7] 是一个跨平台的专业的视频编辑器,在 Linux、Mac OS X 和 Windows 系统下都可用。
它是一个获奖的专业的[非线性编辑][8]NLE软件支持高达 4K 的分辨率以及标清和高清格式的视频。
该应用程序有两个版本Lightworks 免费版和 Lightworks 专业版。不过免费版本不支持 VimeoH.264 / MPEG-4和 YouTubeH.264 / MPEG-4 - 高达 2160p4K UHD、蓝光和 H.264 / MP4 导出选项,以及可配置的位速率设置,但是专业版本支持。
- Lightworks 免费版
- Lightworks 专业版
专业版本有更多的功能例如更高的分辨率支持4K 和蓝光支持等。
##### 怎么安装 Lightworks?
不同于其他的视频编辑器,安装 Lightwork 不像运行单个命令那么直接。别担心,这不会很复杂。
- 第1步 你可以从 [Lightworks 下载页面][9]下载安装包。这个安装包大约 79.5MB。*请注意这里没有32 位 Linux 的支持。*
- 第2步 一旦下载,你可以使用 [Gdebi 软件包安装器][10]来安装。Gdebi 会自动下载依赖关系 :
![](https://itsfoss.com/wp-content/uploads/2016/06/Installing-lightworks-on-ubuntu.jpg)
- 第3步 现在你可以从 Ubuntu 仪表板或您的 Linux 发行版菜单中打开它。
- 第4步 当你第一次使用它时,需要一个账号。点击 “Not Registerd?” 按钮来注册。别担心,它是免费的。
- 第5步 在你的账号通过验证后,就可以登录了。
现在Lightworks 可以使用了。
需要 Lightworks 的视频教程? 在 [Lightworks 视频教程页][11]得到它们。
#### 5. Blender
![](https://itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg)
Blender 是一个专业的,工业级的开源、跨平台的视频编辑器。在 3D 作品的制作中,是非常受欢迎的。 Blender 已被用于几部好莱坞电影的制作,包括蜘蛛侠系列。
虽然最初是设计用于制作 3D 模型,但它也可以用于各种格式的视频编辑和输入能力。 该视频编辑器包括:
- 实时预览、亮度波形、色度矢量示波器和直方图显示
- 音频混合、同步、擦除和波形可视化
- 多达 32 个插槽用于添加视频、图像、音频、场景、面具和效果
- 速度控制、调整图层、过渡、关键帧、过滤器等
最新的版本可以从 [Blender 下载页][12]下载。
### 哪一个是最好的视频编辑软件?
如果你需要一个简单的视频编辑器OpenShot、Kdenlive 和 Flowblade 是一个不错的选择。这些软件是适合初学者的,并且带有标准规范的系统。
如果你有一个高性能的计算机,并且需要高级功能,你可以使用 Lightworks。如果你正在寻找更高级的功能 Blender 可以帮助你。
这就是我写的 5 个最佳的视频编辑软件,它们可以在 Ubuntu、Linux Mint、Elementary 和其他 Linux 发行版下使用。 请与我们分享您最喜欢的视频编辑器。
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-video-editing-software-linux/
作者:[Tiwo Satriatama][a]
译者:[DockerChen](https://github.com/DockerChen)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/tiwo/
[1]: https://linux.cn/article-7462-1.html
[2]: https://linux.cn/article-7468-1.html
[3]: https://kdenlive.org/
[4]: https://itsfoss.com/tag/open-source/
[5]: http://www.openshot.org/
[6]: http://jliljebl.github.io/flowblade/
[7]: https://www.lwks.com/
[8]: https://en.wikipedia.org/wiki/Non-linear_editing_system
[9]: https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206
[10]: https://itsfoss.com/gdebi-default-ubuntu-software-center/
[11]: https://www.lwks.com/videotutorials
[12]: https://www.blender.org/download/


@ -0,0 +1,69 @@
怎样用 Tar 和 OpenSSL 给文件和目录加密及解密
=========
当你有重要的敏感数据的时候,给你的文件和目录额外加一层保护是至关重要的,特别是当你需要通过网络与他人传输数据的时候。
由于这个原因,我一直在寻找一个可以在 Linux 上加密及解密文件和目录的实用程序,幸运的是我找到了一个用 tar(Linux 的一个压缩打包工具)和 OpenSSL 来解决的方案。借助这两个工具,你真的可以毫不费力地创建和加密 tar 归档文件。
在这篇文章中,我们将了解如何使用 OpenSSL 创建和加密 tar 或 gzgzip另一种压缩文件归档文件
牢记使用 OpenSSL 的常规方式是:
```
# openssl command command-options arguments
```
### 在 Linux 中加密文件
要加密当前工作目录的内容(根据文件的大小,这可能需要一点时间):
```
# tar -czf - * | openssl enc -e -aes256 -out secured.tar.gz
```
上述命令的解释:
1. `enc` - 使用加密算法进行编码的 openssl 命令
2. `-e`  用来加密输入文件的 `enc` 命令选项,这里是指前一个 tar 命令的输出
3. `-aes256`  加密用的算法
4. `-out`  用于指定输出文件名的 `enc` 命令选项,这里文件名是 `secured.tar.gz`
### 在 Linux 中解密文件
要解密上述 tar 归档内容,使用以下命令。
```
# openssl enc -d -aes256 -in secured.tar.gz | tar xz -C test
```
上述命令的解释:
1. `-d`  用于解密文件
2. `-C`  提取内容到 `test` 子目录
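下面是一个完整的加解密往返示例(其中的目录名、文件名和口令均为假设;这里用 `-pass pass:...` 选项来避免交互式地输入口令):

```shell
# 创建测试数据
mkdir -p demo && echo "top secret" > demo/data.txt

# 打包并加密(-pass pass:... 用于非交互式地提供口令)
tar -czf - -C demo . | openssl enc -e -aes256 -pass pass:Secret123 -out secured.tar.gz

# 解密并解包到 restored 目录
mkdir -p restored
openssl enc -d -aes256 -pass pass:Secret123 -in secured.tar.gz | tar xz -C restored

cat restored/data.txt
# 输出: top secret
```

注意加密和解密两端必须使用完全相同的算法(这里是 `-aes256`)和口令,否则解密会失败。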
下图展示了加解密过程,以及当你尝试执行以下操作时会发生什么:
1. 以传统方式提取 tar 包的内容
2. 使用了错误的密码的时候
3. 当你输入正确的密码的时候
[![在 Linux 中加密和解密 Tar 归档文件](http://www.tecmint.com/wp-content/uploads/2016/08/Encrypt-Decrypt-Tar-Archive-Files-in-Linux.png)][1]
*在 Linux 中加密和解密 Tar 归档文件*
当你在本地网络或因特网工作的时候,你可以随时通过加密来保护你和他人共享的重要文本或文件,这有助于降低将其暴露给恶意攻击者的风险。
我们研究了一种使用 OpenSSL一个 openssl 命令行工具)加密 tar 包的简单技术你可以参考它的手册页man page来获取更多信息和有用的命令。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/encrypt-decrypt-files-tar-openssl-linux/
作者:[Gabriel Cánepa][a]
译者:[OneNewLife](https://github.com/OneNewLife)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/wp-content/uploads/2016/08/Encrypt-Decrypt-Tar-Archive-Files-in-Linux.png


@ -0,0 +1,188 @@
写一个 JavaScript 框架:比 setTimeout 更棒的定时执行
===================
这是 [JavaScript 框架系列][2]的第二章。在这一章里,我打算讲一下浏览器里异步代码的几种不同执行方式。你将了解到不同的定时器和事件循环机制之间的差异,比如 setTimeout 和 Promise。
这个系列是关于一个开源的客户端框架,叫做 NX。在这个系列里我主要解释一下写该框架不得不克服的主要困难。如果你对 NX 感兴趣可以参观我们的 [主页][1]。
这个系列包含以下几个章节:
1. [项目结构][2]
2. 定时执行 (当前章节)
3. [沙箱代码评估][3]
4. [数据绑定介绍](https://blog.risingstack.com/writing-a-javascript-framework-data-binding-dirty-checking/)
5. [数据绑定与 ES6 代理](https://blog.risingstack.com/writing-a-javascript-framework-data-binding-es6-proxy/)
6. 自定义元素
7. 客户端路由
### 异步代码执行
你可能比较熟悉 `Promise`、`process.nextTick()`、`setTimeout()`,或许还有 `requestAnimationFrame()` 这些异步执行代码的方式。它们内部都使用了事件循环,但是它们在精确计时方面有一些不同。
在这一章里,我将解释它们之间的不同,然后给大家演示怎样在一个类似 NX 这样的先进框架里面实现一个定时系统。不用我们重新做一个,我们将使用原生的事件循环来达到我们的目的。
### 事件循环
事件循环甚至没有在 [ES6 规范](http://www.ecma-international.org/ecma-262/6.0/)里提到。JavaScript 自身只有任务Job和任务队列job queue。更加复杂的事件循环是在 NodeJS 和 HTML5 规范里分别定义的,因为这篇是针对前端的,我会在详细说明后者。
事件循环可以被看做某个条件的循环。它不停的寻找新的任务来运行。这个循环中的一次迭代叫做一个滴答tick。在一次滴答期间执行的代码称为一次任务task
```
while (eventLoop.waitForTask()) {
eventLoop.processNextTask()
}
```
任务是同步代码,它可以在循环中调度其它任务。一个简单的调用新任务的方式是 `setTimeout(taskFn)`。不管怎样, 任务可能有很多来源,比如用户事件、网络或者 DOM 操作。
![](https://risingstack-blog.s3.amazonaws.com/2016/Aug/Execution_timing_event_lopp_with_tasks-1470127590983.svg)
### 任务队列
更复杂一些的是,事件循环可以有多个任务队列。这里有两个约束条件,相同任务源的事件必须在相同的队列,以及任务必须按插入的顺序进行处理。除此之外,浏览器可以做任何它想做的事情。例如,它可以决定接下来处理哪个任务队列。
```
while (eventLoop.waitForTask()) {
const taskQueue = eventLoop.selectTaskQueue()
if (taskQueue.hasNextTask()) {
taskQueue.processNextTask()
}
}
```
在这个模型下,我们无法精确地控制定时。如果用 `setTimeout()`,浏览器可能会决定先把其它几个队列完全处理完,才轮到我们的队列。
![](https://risingstack-blog.s3.amazonaws.com/2016/Aug/Execution_timing_event_loop_with_task_queues-1470127624172.svg)
### 微任务队列
幸运的是事件循环还提供了一个叫做微任务microtask队列的单一队列。当前任务结束的时候微任务队列会清空每个滴答里的任务。
```
while (eventLoop.waitForTask()) {
const taskQueue = eventLoop.selectTaskQueue()
if (taskQueue.hasNextTask()) {
taskQueue.processNextTask()
}
const microtaskQueue = eventLoop.microTaskQueue
while (microtaskQueue.hasNextMicrotask()) {
microtaskQueue.processNextMicrotask()
}
}
```
最简单的调用微任务的方法是 `Promise.resolve().then(microtaskFn)`。微任务按照插入顺序进行处理,并且由于仅存在一个微任务队列,浏览器不会把时间弄乱了。
此外,微任务可以调度新的微任务,它将插入到同一个队列,并在同一个滴答内处理。
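下面这个小例子(可以在浏览器或 Node.js 中运行)演示了同步代码、微任务和任务的执行顺序:

```javascript
const order = []

order.push('sync')                                    // 当前任务中的同步代码
setTimeout(() => order.push('task'), 0)               // 调度一个(宏)任务
Promise.resolve().then(() => order.push('microtask')) // 调度一个微任务

// 微任务在当前任务结束时立即运行,因此先于 setTimeout 调度的任务;
// 在稍后的一个任务里打印顺序:
setTimeout(() => console.log(order.join(' -> ')), 10)
// 输出: sync -> microtask -> task
```

这正是上文所说的:微任务队列在当前任务结束后、下一个任务开始前被清空。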
![](https://risingstack-blog.s3.amazonaws.com/2016/Aug/Execution_timing_event_loop_with_microtask_queue-1470127679393.svg)
### 绘制Rendering
最后是绘制(rendering)。与事件处理和解析不同,绘制并不是在单独的后台任务中完成的。它是一个可以运行在每个循环滴答结束时的算法。
在这里浏览器又有了许多自由:它可能在每个任务以后绘制,但是它也可能在好几百个任务都执行了以后也不绘制。
幸运的是,我们有 `requestAnimationFrame()`,它在下一个绘制之前执行传递的函数。我们最终的事件模型像这样:
```
while (eventLoop.waitForTask()) {
const taskQueue = eventLoop.selectTaskQueue()
if (taskQueue.hasNextTask()) {
taskQueue.processNextTask()
}
const microtaskQueue = eventLoop.microTaskQueue
while (microtaskQueue.hasNextMicrotask()) {
microtaskQueue.processNextMicrotask()
}
if (shouldRender()) {
applyScrollResizeAndCSS()
runAnimationFrames()
render()
}
}
```
现在,用我们所学到的知识来创建一个定时系统吧!
### 利用事件循环
和大多数现代框架一样,[NX][1] 也是基于 DOM 操作和数据绑定的。为了取得更好的性能,这些操作会被批量化并异步执行。基于以上理由,我们用到了 `Promise`、`MutationObserver` 和 `requestAnimationFrame()`。
我们所期望的定时器是这样的:
1. 代码来自于开发者
2. 数据绑定和 DOM 操作由 NX 来执行
3. 开发者定义事件钩子
4. 浏览器进行绘制
#### 步骤 1
NX 通过 [ES6 代理](https://ponyfoo.com/articles/es6-proxies-in-depth) 同步地注册对象变动,通过 [MutationObserver](https://davidwalsh.name/mutationobserver-api)(变动观测器)同步地注册 DOM 变动(下一章将详细介绍)。为了批量执行,它把相应的反应作为微任务延迟到步骤 2 之后才执行。对象变动的延迟是通过 `Promise.resolve().then(reaction)` 完成的,而 DOM 变动的反应则由变动观测器自动处理,因为它内部就使用了微任务。
#### 步骤 2
来自开发者的代码(任务)运行完成,由 NX 注册的微任务开始执行。因为它们是微任务,所以按序执行。注意,我们仍然在同一个滴答循环中。
#### 步骤 3
开发者通过 `requestAnimationFrame(hook)` 通知 NX 运行钩子。这可能在滴答循环后发生。重要的是,钩子运行在下一次绘制之前和所有数据操作之后,并且 DOM 和 CSS 改变都已经完成。
#### 步骤 4
浏览器绘制下一个视图。这也有可能发生在滴答循环之后,但是绝对不会发生在一个滴答的步骤 3 之前。
### 牢记在心里的事情
我们在原生的事件循环之上实现了一个简单而有效的定时系统。理论上讲它运行的很好,但是还是很脆弱,一个轻微的错误可能会导致很严重的 BUG。
在一个复杂的系统当中,最重要的就是建立一定的规则并在以后保持它们。在 NX 中有以下规则:
1. 永远不用 `setTimeout(fn, 0)` 来进行内部操作
2. 用相同的方法来注册微任务
3. 微任务仅供内部操作
4. 不要干预开发者钩子运行时间
#### 规则 1 和 2
数据反应和 DOM 操作会按照操作顺序依次执行。只要不把不同的调度方式混在一起,就可以安全地延迟它们的执行;混用则会出现莫名其妙的问题。
`setTimeout(fn, 0)` 的行为完全不可预测。使用不同的方法注册微任务也会发生混乱。例如,下面的例子中 microtask2 不会正确地在 microtask1 之前运行。
```
Promise.resolve().then().then(microtask1)
Promise.resolve().then(microtask2)
```
![](https://risingstack-blog.s3.amazonaws.com/2016/Aug/Execution_timing_microtask_registration_method-1470127727609.svg)
#### 规则 3 和 4
分离开发者的代码执行和内部操作的时间窗口是非常重要的。混合这两种行为会导致不可预测的事情发生,并且它会需要开发者了解框架内部。我想很多前台开发者已经有过类似经历。
### 结论
如果你对 NX 框架感兴趣,可以参观我们的[主页][1]。还可以在 GIT 上找到我们的[源代码][5]。
在下一节我们再见,我们将讨论 [沙盒化代码执行][4]
你也可以给我们留言。
--------------------------------------------------------------------------------
via: https://blog.risingstack.com/writing-a-javascript-framework-execution-timing-beyond-settimeout/
作者:[Bertalan Miklos][a]
译者:[kokialoves](https://github.com/kokialoves)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://blog.risingstack.com/author/bertalan/
[1]: http://nx-framework.com/
[2]: https://blog.risingstack.com/writing-a-javascript-framework-project-structuring/
[3]: https://blog.risingstack.com/writing-a-javascript-framework-sandboxed-code-evaluation/
[4]: https://blog.risingstack.com/writing-a-javascript-framework-sandboxed-code-evaluation/
[5]: https://github.com/RisingStack/nx-framework


@ -1,9 +1,11 @@
JavaScript 小模块的开销
====

(更新:2016/10/30:我写完这篇文章之后,我在[这个基准测试中发现了一个错误](https://github.com/nolanlawson/cost-of-small-modules/pull/8),它会导致 Rollup 比它预期的看起来要好一些。不过整体结果并没有明显的不同(Rollup 仍然击败了 Browserify 和 Webpack,虽然它并不像 Closure 那么好),所以我只是更新了图表。该基准测试现在包括了 [RequireJS 和 RequireJS Almond 打包器](https://github.com/nolanlawson/cost-of-small-modules/pull/5),所以文章中现在也包括了它们。要看原始帖子,可以查看[历史版本](https://web.archive.org/web/20160822181421/https://nolanlawson.com/2016/08/15/the-cost-of-small-modules/)。)

大约一年之前,我在将一个大型 JavaScript 代码库重构为更小的模块时发现了 Browserify 和 Webpack 中一个令人沮丧的事实:

> “代码越模块化,代码体积就越大。:< ”- Nolan Lawson

过了一段时间,Sam Saccone 发布了一些关于 [Tumblr][1] 和 [Imgur][2] 页面加载性能的出色的研究。其中指出:
@ -15,9 +17,9 @@
一个页面中包含的 JavaScript 脚本越多,页面加载也将越慢。庞大的 JavaScript 包会导致浏览器花费更多的时间去下载、解析和执行,这些都将加长载入时间。

即使当你使用如 Webpack [code splitting][3]、Browserify [factor bundles][4] 等工具将代码分解为多个包,该开销也仅仅是被延迟到页面生命周期的晚些时候。JavaScript 迟早都将有一笔开销。

此外,由于 JavaScript 是一门动态语言,同时流行的 [CommonJS][5] 模块也是动态的,所以这就使得在最终分发给用户的代码中剔除无用的代码变得异常困难。譬如,你可能只使用到 jQuery 中的 $.ajax,但是通过载入 jQuery 包,你将付出整个包的代价。

JavaScript 社区对这个问题提出的解决办法是提倡 [小模块][6] 的使用。小模块不仅有许多 [美好且实用的好处][7] 如易于维护,易于理解,易于集成等,而且还可以通过鼓励包含小巧的功能而不是庞大的库来解决之前提到的 jQuery 的问题。
@ -66,7 +68,7 @@ $ browserify node_modules/qs | browserify-count-modules
顺带一提,我写过的最大的开源站点 [Pokedex.org][21] 包含了 4 个包,共 311 个模块。

让我们先暂时忽略这些 JavaScript 包的实际大小,我认为去探索一下一定数量的模块本身的开销会是一件有意思的事。虽然 Sam Saccone 的文章 [“2016 年 ES2015 转译的开销”][22] 已经广为流传,但是我认为他的结论还未到达足够深度,所以让我们挖掘得稍微再深一点吧。

### 测试环节!
@ -86,13 +88,13 @@ console.log(total)
module.exports = 1 module.exports = 1
``` ```
我测试了五种打包方法Browserify, 带 [bundle-collapser][24] 插件的 Browserify, Webpack, Rollup 和 Closure Compiler。对于 Rollup 和 Closure Compiler 我使用了 ES6 模块,而对于 Browserify 和 Webpack 则用的 CommonJS目的是为了不涉及其各自缺点而导致测试的不公平由于它们可能需要做一些转译工作如 Babel 一样,而这些工作将会增加其自身的运行时间)。 我测试了五种打包方法Browserify、带 [bundle-collapser][24] 插件的 Browserify、Webpack、Rollup 和 Closure Compiler。对于 Rollup 和 Closure Compiler 我使用了 ES6 模块,而对于 Browserify 和 Webpack 则用的 CommonJS目的是为了不涉及其各自缺点而导致测试的不公平由于它们可能需要做一些转译工作如 Babel 一样,而这些工作将会增加其自身的运行时间)。
为了更好地模拟一个生产环境,我将带 -mangle 和 -compress 参数的 Uglify 用于所有的包,并且使用 gzip 压缩后通过 GitHub Pages 用 HTTPS 协议进行传输。对于每个包,我一共下载并执行 15 次,然后取其平均值,并使用 performance.now() 函数来记录载入时间(未使用缓存)与执行时间。 为了更好地模拟一个生产环境,我对所有的包采用带 `-mangle``-compress` 参数的 `Uglify` ,并且使用 gzip 压缩后通过 GitHub Pages 用 HTTPS 协议进行传输。对于每个包,我一共下载并执行 15 次,然后取其平均值,并使用 `performance.now()` 函数来记录载入时间(未使用缓存)与执行时间。
### 包大小

在我们查看测试结果之前,我们有必要先来看一眼我们要测试的包文件。下表是每个包最小化处理后但并未使用 gzip 压缩时的体积大小单位Byte
| | 100 个模块 | 1000 个模块 | 5000 个模块 |
| --- | --- | --- | --- |
| rollup | 300 | 2145 | 11510 |
| closure | 302 | 2140 | 11789 |

Browserify 和 Webpack 的工作方式是隔离各个模块到各自的函数空间,然后声明一个全局载入器,并在每次 `require()` 函数调用时定位到正确的模块处。下面是我们的 Browserify 包的样子:
```
(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o
```
在 100 个模块时,各包的差异是微不足道的,但是一旦模块数量达到 1000 个甚至 5000 个时差异将会变得非常巨大。iPod Touch 在不同包上的差异并不明显,而对于具有一定年代的 Nexus 5 来说Browserify 和 Webpack 明显耗时更多。

与此同时,我发现有意思的是 Rollup 和 Closure 的运行开销对于 iPod 而言几乎可以忽略,并且与模块的数量关系也不大。而对于 Nexus 5 来说,运行的开销并非完全可以忽略,但 Rollup/Closure 仍比 Browserify/Webpack 低很多。后者若未在几百毫秒内完成加载则将会占用主线程的好几帧的时间,这就意味着用户界面将冻结并且等待直到模块载入完成。

值得注意的是前面这些测试都是在千兆网速下进行的,所以从网络情况来看,这只是一个最理想的状况。借助 Chrome 开发者工具,我们可以人为地将 Nexus 5 的网速限制到 3G 水平,然后来看一眼这对测试产生的影响([查看表格][30]
一旦我们将网速考虑进来Browserify/Webpack 和 Rollup/Closure 的差异将变得更为显著。在 1000 个模块规模(接近于 Reddit 1050 个模块的规模Browserify 花费的时间比 Rollup 长大约 400 毫秒。然而 400 毫秒已经不是一个小数目了,正如 Google 和 Bing 指出的,亚秒级的延迟都会 [对用户的参与产生明显的影响][32] 。

还有一件事需要指出,那就是这个测试并非测量 100 个、1000 个或者 5000 个模块的每个模块的精确运行时间。因为这还与你对 `require()` 函数的使用有关。在这些包中,我采用的是对每个模块调用一次 `require()` 函数。但如果你每个模块调用了多次 `require()` 函数(这在代码库中非常常见)或者你多次动态调用 `require()` 函数(例如在子函数中调用 `require()` 函数),那么你将发现明显的性能退化。

Reddit 的移动站点就是一个很好的例子。虽然该站点有 1050 个模块,但是我测量了它们使用 Browserify 的实际执行时间后发现比“1000 个模块”的测试结果差好多。当使用那台运行 Chrome 的 Nexus 5 时,我测出 Reddit 的 Browserify `require()` 函数耗时 2.14 秒。而那个“1000 个模块”脚本中的等效函数只需要 197 毫秒(在搭载 i7 处理器的 Surface Book 上的桌面版 Chrome我测出的结果分别为 559 毫秒与 37 毫秒,虽然给出桌面平台的结果有些令人惊讶)。

这结果提示我们有必要对每个模块使用多个 `require()` 函数的情况再进行一次测试。不过,我并不认为这对 Browserify 和 Webpack 会是一个公平的测试,因为 Rollup 和 Closure 都会将重复的 ES6 库导入处理为一个顶级变量声明同时也阻止了顶层空间以外的其他区域的导入。所以根本上来说Rollup 和 Closure 中一个导入和多个导入的开销是相同的,而对于 Browserify 和 Webpack运行开销随 `require()` 函数的数量线性增长。

为了我们这个分析的目的我认为最好假设模块的数量是性能的短板。而事实上“5000 个模块”也是一个比“5000 个 `require()` 函数调用”更好的度量标准。
### 结论
给出这些结果之后,我对 Closure Compiler 和 Rollup 在 JavaScript 社区并没有得到过多关注而感到惊讶。我猜测或许是因为(前者)需要依赖 Java后者仍然相当不成熟并且未能做到开箱即用详见 [Calvin Metcalf 的评论][37] 中所作的不错的总结)。

即使没有足够数量的 JavaScript 开发者加入到 Rollup 或 Closure 的队伍中,我认为 npm 包作者们也已准备好了去帮助解决这些问题。如果你使用 npm 安装 lodash你将会发现其主要的导入是一个巨大的 JavaScript 模块,而不是你期望的 Lodash 的超模块hyper-modular特性`require('lodash/uniq')``require('lodash.uniq')` 等等)。对于 PouchDB我们做了一个类似的声明以 [使用 Rollup 作为预发布步骤][38],这将产生对于用户而言尽可能小的包。

同时,我创建了 [rollupify][39] 来尝试将这过程变得更为简单一些,只需拖动到已存在的 Browserify 工程中即可。其基本思想是在你自己的项目中使用导入import和导出export可以使用 [cjs-to-es6][40] 来帮助迁移),然后使用 `require()` 函数来载入第三方包。这样一来,你依旧可以在你自己的代码库中享受所有模块化的优点,同时能导出一个适当大小的大模块来发布给你的用户。不幸的是,你依旧得为第三方库付出一些代价,但是我发现这是对于当前 npm 生态系统的一个很好的折中方案。

所以结论如下:**一个大的 JavaScript 包比一百个小 JavaScript 模块要快**。尽管这是事实,我依旧希望我们社区能最终发现我们所处的困境——提倡小模块的原则对开发者有利,但是对用户不利。同时希望能优化我们的工具,使得我们可以对两方面都有利。

### 福利时间!三款桌面浏览器
[![Nexus 5 (3G) RequireJS 结果][53]](https://nolanwlawson.files.wordpress.com/2016/08/2016-08-20-14_45_29-small_modules3-xlsx-excel.png)

更新 3我写了一个 [optimize-js](http://github.com/nolanlawson/optimize-js),它会减少一些函数内的函数的解析成本。
--------------------------------------------------------------------------------

via: https://nolanlawson.com/2016/08/15/the-cost-of-small-modules/

作者:[Nolan][a]
译者:[Yinr](https://github.com/Yinr)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
新手指南 通过 Docker 在 Linux 上托管 .NET Core
=====
这篇文章基于我之前的文章 [.NET Core 入门][1]。首先,我把 RESTful API 从 .NET Core RC1 升级到了 .NET Core 1.0,然后,我增加了对 Docker 的支持并描述了如何在 Linux 生产环境里托管它。
![](http://blog.scottlogic.com/nsoper/assets/noob.png)
我是首次接触 Docker 并且距离成为一名 Linux 高手还有很远的一段路程。因此,这里的很多想法是来自一个新手。
### 安装
按照 https://www.microsoft.com/net/core 上的介绍在你的电脑上安装 .NET Core 。这将会同时在 Windows 上安装 dotnet 命令行工具以及最新的 Visual Studio 工具。
### 源代码
你可以直接到 [GitHub](https://github.com/niksoper/aspnet5-books/tree/blog-docker) 上找到最新的完整源代码。
### 转换到 .NET CORE 1.0
自然地,当我考虑如何把 API 从 .NET Core RC1 升级到 .NET Core 1.0 时想到的第一个求助的地方就是谷歌搜索。我是按照下面这两条非常全面的指导来进行升级的:
- [从 DNX 迁移到 .NET Core CLI][2]
- [从 ASP.NET 5 RC1 迁移到 ASP.NET Core 1.0][3]
当你迁移代码的时候,我建议仔细阅读这两篇指导,因为我在没有阅读第一篇指导的情况下又尝试浏览第二篇,结果感到非常迷惑和沮丧。
我不想描述细节上的改变因为你可以看 GitHub 上的[提交](https://github.com/niksoper/aspnet5-books/commit/b41ad38794c69a70a572be3ffad051fd2d7c53c0)。这儿是我所作改变的总结:
- 更新 `global.json``project.json` 上的版本号
- 删除 `project.json` 上的废弃章节
- 使用轻型 `ControllerBase` 而不是 `Controller` 因为我不需要与 MVC 视图相关的方法(这是一个可选的改变)。
- 从辅助方法中去掉 `Http` 前缀,比如:`HttpNotFound` -> `NotFound`
- `LogVerbose` -> `LogTrace`
- 名字空间改变: `Microsoft.AspNetCore.*`
- 在 `Startup` 中使用 `SetBasePath`(没有它 `appsettings.json` 将不会被发现)
- 通过 `WebHostBuilder` 来运行而不是通过 `WebApplication.Run` 来运行
- 删除 Serilog在写文章的时候它不支持 .NET Core 1.0
唯一令我真正头疼的事是需要移动 Serilog。我本可以实现自己的文件记录器但是我删除了文件记录功能因为我不想为了这次操作在这件事情上花费精力。
不幸的是,将有大量的第三方开发者不得不花时间去追赶 .NET Core 1.0 的步伐。我非常同情他们,因为他们通常是在业余时间工作,却根本无法拥有接近微软那样的资源。我建议阅读 Travis Illig 的文章 [.NET Core 1.0 发布了,但 Autofac 在哪儿?][4],这是一篇关于第三方开发者观点的文章。
做了这些改变以后,我可以在 `project.json` 所在的目录下用 dotnet 恢复依赖、构建并运行,可以看到 API 又像以前一样工作了。
### 通过 Docker 运行
在我写这篇文章的时候Docker 只能够在 Linux 系统上原生运行。[Windows](https://docs.docker.com/engine/installation/windows/#/docker-for-windows) 和 [OS X](https://docs.docker.com/engine/installation/mac/) 上有对 Docker 的 beta 版支持,但是它们都必须依赖于虚拟化技术,因此我选择在虚拟机中运行 Ubuntu 14.04。如果你还没有安装过 Docker请按照[指导](https://docs.docker.com/engine/installation/linux/ubuntulinux/)来安装。
我最近阅读了一些关于 Docker 的东西,但我直到现在还没有真正用它来干任何事。我假设读者还没有关于 Docker 的知识,因此我会解释我所使用的所有命令。
#### HELLO DOCKER
在 Ubuntu 上安装好 Docker 之后,我所进行的下一步就是按照 https://www.microsoft.com/net/core#docker 上的介绍来开始运行 .NET Core 和 Docker。
首先启动一个已安装有 .NET Core 的容器。
```
docker run -it microsoft/dotnet:latest
```
`-it` 选项表示交互,所以你执行这条命令之后,你就处于容器之内了,可以如你所希望的那样执行任何 bash 命令。
然后我们可以执行下面这五条命令来在 Docker 内部运行起来微软 .NET Core 控制台应用程序示例。
```
mkdir hwapp
cd hwapp
dotnet new
dotnet restore
dotnet run
```
你可以通过运行 `exit` 来离开容器,然后运行 `docker ps -a` 命令,这会显示你创建的那个已经退出的容器。你可以通过运行命令 `docker rm <container_name>` 来清除容器。
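上面是逐个删除容器的做法;也可以批量清理。下面是一个示意脚本(假设本机已安装 Docker若没有安装则直接跳过以便在任何环境下试运行

```shell
# 示意脚本:批量删除所有已退出exited状态的容器
if command -v docker >/dev/null 2>&1; then
  # -a 包含已退出的容器,-q 只输出容器 ID-f 按状态过滤
  exited=$(docker ps -aq -f status=exited)
  if [ -n "$exited" ]; then
    docker rm $exited
  fi
  STATUS=done
else
  echo "跳过:本机未安装 docker"
  STATUS=skipped
fi
```

其中 `status=exited` 过滤器只匹配已退出的容器,正在运行的容器不会被误删。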
#### 挂载源代码
我的下一步骤是使用和上面相同的 microsoft/dotnet 镜像,但是将为我们的应用程序以[数据卷](https://docs.docker.com/engine/tutorials/dockervolumes/1)的方式挂载上源代码。
首先签出有相关提交的仓库:
```
git clone https://github.com/niksoper/aspnet5-books.git
cd aspnet5-books/src/MvcLibrary
git checkout dotnet-core-1.0
```
现在启动一个容器来运行 .NET Core 1.0,并将源代码放在 `/book` 下。注意更改 `/path/to/repo` 这部分文件来匹配你的电脑:
```
docker run -it \
-v /path/to/repo/aspnet5-books/src/MvcLibrary:/books \
microsoft/dotnet:latest
```
现在你可以在容器中运行应用程序了!
```
cd /books
dotnet restore
dotnet run
```
作为一个概念性展示,这的确很棒,但是我们可不想每次运行一个程序都要考虑如何把源代码挂载到容器里。
#### 增加一个 DOCKERFILE
我的下一步骤是引入一个 Dockerfile这可以让应用程序很容易在自己的容器内启动。
我的 Dockerfile 和 `project.json` 一样位于 `src/MvcLibrary` 目录下,看起来像下面这样:
```
FROM microsoft/dotnet:latest
# 为应用程序源代码创建目录
RUN mkdir -p /usr/src/books
WORKDIR /usr/src/books
# 复制源代码并恢复依赖关系
COPY . /usr/src/books
RUN dotnet restore
# 暴露端口并运行应用程序
EXPOSE 5000
CMD [ "dotnet", "run" ]
```
严格来说,`RUN mkdir -p /usr/src/books` 命令是不需要的,因为 `COPY` 会自动创建丢失的目录。
Docker 镜像是按层建立的,我们从包含 .NET Core 的镜像开始,添加另一个从源代码生成应用程序,然后运行这个应用程序的层。
添加了 Dockerfile 以后,我通过运行下面的命令来生成一个镜像,并使用生成的镜像启动一个容器(确保在和 Dockerfile 相同的目录下进行操作,并且你应该使用自己的用户名)。
```
docker build -t niksoper/netcore-books .
docker run -it niksoper/netcore-books
```
你应该看到程序能够和之前一样的运行,不过这一次我们不需要像之前那样安装源代码,因为源代码已经包含在 docker 镜像里面了。
#### 暴露并发布端口
这个 API 并不是特别有用,除非我们需要从容器外面和它进行通信。 Docker 已经有了暴露和发布端口的概念,但这是两件完全不同的事。
据 Docker [官方文档](https://docs.docker.com/engine/reference/builder/#/expose)
> `EXPOSE` 指令通知 Docker 容器在运行时监听特定的网络端口。`EXPOSE` 指令不能够让容器的端口可被主机访问。要使其可被访问,你必须通过 `-p` 标志来发布一个端口范围或者使用 `-P` 标志来发布所有暴露的端口
`EXPOSE` 指令只是将元数据添加到镜像上,所以如文档中所说,你可以把它看作是写给镜像使用者的说明。从技术上讲,我本应该忽略 `EXPOSE 5000` 这行指令,因为我知道 API 正在监听的端口,但把它留下来是很有用的,并且值得推荐。
在这个阶段,我想直接从主机访问这个 API ,因此我需要通过 `-p` 指令来发布这个端口,这将允许请求从主机上的端口 5000 转发到容器上的端口 5000无论这个端口是不是之前通过 Dockerfile 暴露的。
```
docker run -d -p 5000:5000 niksoper/netcore-books
```
通过 `-d` 指令告诉 docker 在分离模式下运行容器,因此我们不能看到它的输出,但是它依旧会运行并监听端口 5000。你可以通过 `docker ps` 来证实这件事。
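虽然分离模式下看不到实时输出,但事后仍然可以用 `docker logs` 查看容器打印过的内容。下面是一个示意脚本(假设本机已安装 Docker并且刚以分离模式启动过容器若条件不满足则直接跳过

```shell
# 示意脚本:查看最近创建的容器的输出
if command -v docker >/dev/null 2>&1; then
  CID=$(docker ps -lq)   # -l最近创建的容器-q只输出容器 ID
  if [ -n "$CID" ]; then
    docker logs "$CID"
    STATUS=done
  else
    echo "没有找到容器"
    STATUS=no_container
  fi
else
  echo "跳过:本机未安装 docker"
  STATUS=skipped
fi
```

如果容器是带 `--name` 参数启动的,也可以直接用 `docker logs <容器名>` 查看。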
因此,接下来我准备从主机向容器发起一个请求来庆祝一下:
```
curl http://localhost:5000/api/books
```
它不工作。
重复进行相同 `curl` 请求,我看到了两个错误:要么是 `curl: (56) Recv failure: Connection reset by peer`,要么是 `curl: (52) Empty reply from server`
我返回去看 docker run 的[文档](https://docs.docker.com/engine/reference/run/#/expose-incoming-ports),然后再次检查我所使用的 `-p` 选项以及 Dockerfile 中的 `EXPOSE` 指令是否正确。我没有发现任何问题,这让我开始有些沮丧。
重新振作起来以后,我决定去咨询当地的一个 Scott Logic DevOps 大师 Dave Wybourn也在[这篇 Docker Swarm 的文章](http://blog.scottlogic.com/2016/08/30/docker-1-12-swarm-mode-round-robin.html)里提到过),他的团队也曾遇到这个实际问题。这个问题是我没有配置过 [Kestrel](https://docs.asp.net/en/latest/fundamentals/servers.html#kestrel),这是一个用于 .NET Core 的全新的轻量级、跨平台 web 服务器。
默认情况下Kestrel 会监听 `http://localhost:5000`。但问题是,这儿的 `localhost` 是一个回环接口。

据[维基百科](https://en.wikipedia.org/wiki/Localhost)

> 在计算机网络中localhost 是一个代表本机的主机名。本地主机可以通过网络回环接口访问在主机上运行的网络服务。通过使用回环接口可以绕过任何硬件网络接口。

当运行在容器内时这是一个问题,因为 `localhost` 只能够在容器内访问。解决方法是更新 `Startup.cs` 里的 `Main` 方法来配置 Kestrel 监听的 URL
```
public static void Main(string[] args)
{
var host = new WebHostBuilder()
.UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.UseUrls("http://*:5000") // 在所有网络接口上监听端口 5000
.UseIISIntegration()
.UseStartup<Startup>()
.Build();
host.Run();
}
```
通过这些额外的配置,我可以重建镜像,并在容器中运行应用程序,它将能够接收来自主机的请求:
```
docker build -t niksoper/netcore-books .
docker run -d -p 5000:5000 niksoper/netcore-books
curl -i http://localhost:5000/api/books
```
我现在得到下面这些响应:
```
HTTP/1.1 200 OK
Date: Tue, 30 Aug 2016 15:25:43 GMT
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
Server: Kestrel
[{"id":"1","title":"RESTful API with ASP.NET Core MVC 1.0","author":"Nick Soper"}]
```
### 在生产环境中运行 KESTREL
[微软的介绍](https://docs.asp.net/en/latest/publishing/linuxproduction.html#why-use-a-reverse-proxy-server)
> Kestrel 可以很好地处理来自 ASP.NET 的动态内容。然而,它在网络服务方面的特性不如 IIS、Apache 或者 Nginx 这样的全功能服务器。反向代理服务器可以替你分担处理静态内容、缓存请求、压缩请求、SSL 终端等 HTTP 服务器的工作。
因此我需要在我的 Linux 机器上把 Nginx 设置成一个反向代理服务器。微软介绍了如何[发布到 Linux 生产环境下](https://docs.asp.net/en/latest/publishing/linuxproduction.html)的指导教程。我把说明总结在这儿:
1. 通过 `dotnet publish` 来给应用程序产生一个自包含包。
2. 把已发布的应用程序复制到服务器上
3. 安装并配置 Nginx作为反向代理服务器
4. 安装并配置 [supervisor](http://supervisord.org/)(用于确保应用程序处于运行状态)
5. 安装并配置 [AppArmor](https://wiki.ubuntu.com/AppArmor)(用于限制应用的资源使用)
6. 配置服务器防火墙
7. 安全加固 Nginx从源代码构建和配置 SSL
这些内容已经超出了本文的范围,因此我将侧重于如何把 Nginx 配置成一个反向代理服务器。自然地,我通过 Docker 来完成这件事。
### 在另一个容器中运行 NGINX
我的目标是在第二个 Docker 容器中运行 Nginx 并把它配置成我们的应用程序容器的反向代理服务器。
我使用的是[来自 Docker Hub 的官方 Nginx 镜像](https://hub.docker.com/_/nginx/)。首先我尝试这样做:
```
docker run -d -p 8080:80 --name web nginx
```
这启动了一个运行 Nginx 的容器并把主机上的 8080 端口映射到了容器的 80 端口上。现在在浏览器中打开网址 `http://localhost:8080` 会显示出 Nginx 的默认登录页面。
现在我们证实了运行 Nginx 是多么的简单,我们可以关闭这个容器。
```
docker rm -f web
```
### 把 NGINX 配置成一个反向代理服务器
可以通过像下面这样编辑位于 `/etc/nginx/conf.d/default.conf` 的配置文件,把 Nginx 配置成一个反向代理服务器:
```
server {
listen 80;
location / {
proxy_pass http://localhost:6666;
}
}
```
通过上面的配置可以让 Nginx 将所有对根目录的访问请求代理到 `http://localhost:6666`。记住这里的 `localhost` 指的是运行 Nginx 的容器。我们可以在 Nginx 容器内部利用卷来使用我们自己的配置文件:
```
docker run -d -p 8080:80 \
-v /path/to/my.conf:/etc/nginx/conf.d/default.conf \
nginx
```
注意:这把一个单一文件从主机映射到容器中,而不是一个完整目录。
### 在容器间进行通信
Docker 允许内部容器通过共享虚拟网络进行通信。默认情况下,所有通过 Docker 守护进程启动的容器都可以访问一种叫做“桥”的虚拟网络。这使得一个容器可以被另一个容器在相同的网络上通过 IP 地址和端口来引用。
你可以通过监测inspect容器来找到它的 IP 地址。我将从之前创建的 `niksoper/netcore-books` 镜像中启动一个容器并监测inspect
```
docker run -d -p 5000:5000 --name books niksoper/netcore-books
docker inspect books
```
![](http://blog.scottlogic.com/nsoper/assets/docker-inspect-ip.PNG)
我们可以看到这个容器的 IP 地址是 `"IPAddress": "172.17.0.3"`
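顺带一提,`docker inspect` 也支持 `--format` 参数Go 模板语法),可以直接提取出 IP 地址,而不必在完整的 JSON 输出里翻找。下面是一个示意脚本(容器名 `books` 沿用上文;若本机没有 Docker 或该容器,则直接跳过):

```shell
# 示意脚本:用 Go 模板语法直接提取容器的 IP 地址
if command -v docker >/dev/null 2>&1 && docker inspect books >/dev/null 2>&1; then
  IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' books)
  echo "books 容器的 IP$IP"
  STATUS=done
else
  echo "跳过:本机没有可用的 docker 或名为 books 的容器"
  STATUS=skipped
fi
```

`{{ .NetworkSettings.IPAddress }}` 对应 `docker inspect` 完整输出中的同名字段。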
所以现在如果我创建下面的 Nginx 配置文件,并使用这个文件启动一个 Nginx 容器,它将代理请求到我的 API
```
server {
listen 80;
location / {
proxy_pass http://172.17.0.3:5000;
}
}
```
现在我可以使用这个配置文件启动一个 Nginx 容器(注意我把主机上的 8080 端口映射到了 Nginx 容器上的 80 端口):
```
docker run -d -p 8080:80 \
-v ~/dev/nginx/my.nginx.conf:/etc/nginx/conf.d/default.conf \
nginx
```
一个到 `http://localhost:8080` 的请求将被代理到应用上。注意下面 `curl` 响应的 `Server` 响应头:
![](http://blog.scottlogic.com/nsoper/assets/nginx-proxy-response.PNG)
### DOCKER COMPOSE
在这个地方,我为自己的进步而感到高兴,但我认为一定还有更好的方法来配置 Nginx可以不需要知道应用程序容器的确切 IP 地址。另一个当地的 Scott Logic DevOps 大师 Jason Ebbin 在这个地方进行了改进,并建议使用 [Docker Compose](https://docs.docker.com/compose/)。
概况描述一下Docker Compose 使得一组通过声明式语法互相连接的容器很容易启动。我不想再细说 Docker Compose 是如何工作的,因为你可以在[之前的文章](http://blog.scottlogic.com/2016/01/25/playing-with-docker-compose-and-erlang.html)中找到。
我将通过一个我所使用的 `docker-compose.yml` 文件来启动:
```
version: '2'
services:
books-service:
container_name: books-api
build: .
reverse-proxy:
container_name: reverse-proxy
image: nginx
ports:
- "9090:8080"
volumes:
- ./proxy.conf:/etc/nginx/conf.d/default.conf
```
*这是版本 2 语法,所以为了能够正常工作,你至少需要 1.6 版本的 Docker Compose。*
这个文件告诉 Docker 创建两个服务:一个是给应用的,另一个是给 Nginx 反向代理服务器的。
### BOOKS-SERVICE
这会使用与 docker-compose.yml 位于相同目录下的 Dockerfile 构建一个叫做 `books-api` 的容器。注意这个容器不需要发布任何端口,因为只要能够从反向代理服务器访问它就可以,而不需要从主机操作系统访问它。
### REVERSE-PROXY
这将基于 nginx 镜像启动一个叫做 `reverse-proxy` 的容器,并将位于当前目录下的 `proxy.conf` 文件挂载为配置。它把主机上的 9090 端口映射到容器中的 8080 端口,这将允许我们在 `http://localhost:9090` 上通过主机访问容器。
`proxy.conf` 文件看起来像下面这样:
```
server {
listen 8080;
location / {
proxy_pass http://books-service:5000;
}
}
```
这儿的关键点是我们现在可以通过名字引用 `books-service`,因此我们不需要知道 `books-api` 这个容器的 IP 地址!
现在我们可以通过一个运行着的反向代理启动两个容器(`-d` 意味着这是独立的,因此我们不能看到来自容器的输出):
```
docker-compose up -d
```
验证我们所创建的容器:
```
docker ps
```
最后来验证我们可以通过反向代理访问该 API
```
curl -i http://localhost:9090/api/books
```
### 怎么做到的?
Docker Compose 通过创建一个新的叫做 `mvclibrary_default` 的虚拟网络来实现这件事,这个虚拟网络同时用于 `books-api``reverse-proxy` 容器(名字是基于 `docker-compose.yml` 文件的父目录)。
通过 `docker network ls` 来验证网络已经存在:
![](http://blog.scottlogic.com/nsoper/assets/docker-network-ls.PNG)
你可以使用 `docker network inspect mvclibrary_default` 来看到新的网络的细节:
![](http://blog.scottlogic.com/nsoper/assets/network-inspect.PNG)
注意 Docker 已经给网络分配了子网:`"Subnet": "172.18.0.0/16"`。`/16` 部分是无类别域间路由CIDR表示法。对 CIDR 的完整解释已经超出了本文的范围,简单来说它只是表示一个 IP 地址范围。运行 `docker network inspect bridge` 显示子网:`"Subnet": "172.17.0.0/16"`,因此这两个网络是不重叠的。
现在用 `docker inspect books-api` 来确认应用程序的容器正在使用该网络:
![](http://blog.scottlogic.com/nsoper/assets/docker-inspect-books-api.PNG)
注意容器的两个别名(`"Aliases"`)是容器标识符(`3c42db680459`)和由 `docker-compose.yml` 给出的服务名(`books-service`)。我们通过 `books-service` 别名在自定义 Nginx 配置文件中来引用应用程序的容器。这本可以通过 `docker network create` 手动创建,但是我喜欢用 Docker Compose因为它可以干净简洁地将容器创建和依存捆绑在一起。
### 结论
所以现在我可以通过几个简单的步骤在 Linux 系统上用 Nginx 运行应用程序,不需要对主机操作系统做任何长期的改变:
```
git clone https://github.com/niksoper/aspnet5-books.git
cd aspnet5-books/src/MvcLibrary
git checkout blog-docker
docker-compose up -d
curl -i http://localhost:9090/api/books
```
我知道我在这篇文章中所写的内容不是一个真正的生产环境就绪的配置,因为我没有涉及以下这些内容,而其中绝大多数主题都需要单独一篇完整的文章来叙述。
- 安全考虑比如防火墙和 SSL 配置
- 如何确保应用程序保持运行状态
- 如何选择需要包含的 Docker 镜像(我把所有的都放入了 Dockerfile 中)
- 数据库 - 如何在容器中管理它们
对我来说这是一个非常有趣的学习经历,因为有一段时间我对探索 ASP.NET Core 的跨平台支持非常好奇,使用“Configuration as Code”的 Docker Compose 方法来探索一下 DevOps 的世界也是非常愉快并且很有教育意义的。
如果你对 Docker 很好奇,那么我鼓励你来尝试学习它 或许这会让你离开舒适区,不过,有可能你会喜欢它?
--------------------------------------------------------------------------------
via: http://blog.scottlogic.com/2016/09/05/hosting-netcore-on-linux-with-docker.html
作者:[Nick Soper][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://blog.scottlogic.com/nsoper
[1]: http://blog.scottlogic.com/2016/01/20/restful-api-with-aspnet50.html
[2]: https://docs.microsoft.com/en-us/dotnet/articles/core/migrating-from-dnx
[3]: https://docs.asp.net/en/latest/migration/rc1-to-rtm.html
[4]: http://www.paraesthesia.com/archive/2016/06/29/netcore-rtm-where-is-autofac/
Adobe 的新任首席信息官CIO对于开始一个新领导职位的建议
====

![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_Leadership_3.png?itok=QWUGMw-V)

过去几个月里,我在一家十分受人尊敬的基于云的技术公司担任新的 CIO 一职。我的首要任务之一就是熟悉组织的人、文化和当务之急的事件。

作为这一目标的一部分,我访问了所有主要的网站。而在印度,上任不到两个月时,我被问道:“你打算做什么?你的计划是什么?” 我回答道,这个问题不会让经验丰富的 CIO 们感到吃惊:我现在仍然处于探索模式,我在做的主要是聆听和学习。

我从来没有在入职时制定一份蓝图说我要做什么。我知道一些 CIO 拥有一本关于他要怎么做的“剧本”。他会煽动整个组织将他的计划付诸行动。

是的,在有些地方是完全崩坏了并无法发挥作用的情况下,这种行动可能是有意义的。但是,当我进入到一个公司时,我的策略是先开始一个探索的过程。我不想带入任何先入为主的观念,比如什么事应该是什么样子的,哪些工作和哪些是有冲突的,而哪些不是。
### 了解客户

从很早开始,我们就收到客户的会面请求。与客户会面是一种很好的方式,来启发你对 IT 机构未来的思考,包括各种我们可以改进的地方,如技术、客户和消费者。

### 对未来的计划

作为一个新上任的领导者,我有一个全新的视角用以考虑组织的未来,而不会有挑战和障碍来干扰我。

CIO 所需要做的就是推动 IT 进化到下一代。当我会见我的员工时,我问他们我们三到五年后的未来可以做什么,以便我们可以开始尽早定位。这意味着开始讨论方案和当务之急的事。

从那以后,它使领导小组团结在一起,所以我们能够共同来组建我们的下一代体系——它的使命、愿景、组织模式和操作规范。如果你开始从内而外的改变,那么它会渗透到业务和其他你所做的一切事情上。
作者:[Cynthia Stoddard][a]
译者:[Chao-zhi](https://github.com/Chao-zhi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
拥有开源项目部门的公司可以从四个方面获益
====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_creativity.png?itok=x2HTRKVW)
在我的第一篇关于开源项目部门program office的系列文章中我深入剖析了[什么是开源项目部门,为什么你的公司需要一个开源项目部门][1]。接着我又说到了[谷歌是如何创建一种新的开源项目部门的][2]。而这篇文章,我将阐述拥有一个开源项目部门的好处。
乍一看,非软件开发公司会更热情地去拥抱开源项目部门,一个重要原因是:他们并没有什么损失。毕竟,他们并不需要依靠这些软件产品来获得收益。比如Facebook 可以很轻易地发布一个“分布式键值数据存储”作为开源项目,是因为他们并没有售卖一个叫做“企业级键值数据存储”的产品。这回答了关于风险的问题,但是并没有回答他们如何通过向开源生态贡献代码而获益的问题。让我们逐个来推测和探讨其中可能的原因。你会发现开源项目供应商的许多动机都是相同的,但是也有些许不同。
### 招聘
招聘可能是一个将开源项目部门推销给上层管理部门的最容易方法。向他们展示与招聘相关的成本,以及投资回报率,然后解释如何与天才工程师发展关系,从而与那些对这些项目感兴趣并且十分乐意在其中工作的天才开发者们建立联系。不需要我多说了,你懂的!
### 技术影响
曾几何时,那些没有专门从事软件销售的公司是难以直接对他们软件供应商的开发周期施加影响力的,尤其当他们并不是一个大客户时。开源完全改变了这一点,它将用户与供应商放在了一个更公平的竞争环境中。随着开源开发的兴起,任何人,假如他们愿意投入时间和资源的话,都可以将技术推向一个选定的方向。但是这些公司发现,虽然将投资用于开发上会带来丰硕的成果,但是总体战略的努力却更加有效——对比一下 bug 的修复和软件的构建——大多数公司都将 bug 的修复推给上游的开源项目,但是一些公司开始认识到通过更深层次的回报承诺和更快的功能开发来协调持久的工作,将会更有利于业务。通过开源项目部门模式,公司的职员能够从开源社区中准确嗅出战略重心,然后投入开发资源。
对于快速增长的公司,如 Google 和 Facebook其对现有的开源项目提供的领导力仍然不足以满足业务的膨胀。面对激烈的增长和建立超大规模系统所带来的挑战许多大型企业开始构建仅供内部使用的高度定制的软件栈除非他们能说服别人在一些基础设施项目上达成合作。因此虽然他们保持在诸如 Linux 内核Apache 和其他现有项目领域的投资他们也开始推出自己的大型项目。Facebook 发布了 CassandraTwitter 创造了 Mesos并且甚至谷歌也创建了 Kubernetes 项目。这些项目已成为行业创新的主要平台证实了该举措是相关公司引人注目的成功。请注意Facebook 在它需要创造一个新软件项目来解决更大规模的问题之后,已经在内部停止使用 Cassandra 了,但是,这时 Cassandra 已经变得流行,而 DataStax 公司接过了开发任务)。所有这些项目已经促使了开发商、相关的项目、以及最终用户构成的整个生态加速增长和发展。
没有与公司战略举措取得一致的开源项目部门是不可能成功的。不这样做的话,这些公司依然会试图单独地解决这些问题,而且更慢。不仅拥有这些项目可以帮助内部解决业务问题,它们也帮助这些公司逐渐成为行业巨头。当然,谷歌成为行业巨头好多年了,但是 Kubernetes 的发展确保了软件的质量,并且在容器技术未来的发展方向上有着直接的话语权,并且远超之前就有的话语权。这些公司目前还是闻名于他们超大规模的基础设施和硅谷的中坚分子。鲜为人知,但是更为重要的是它们与技术生产人员的亲密度。开源项目部门凭借技术建议和与有影响力的开发者的关系,再加上在社区治理和人员管理方面深厚的专业知识来引领这些工作,并最大限度地发挥其影响力。
### 市场营销能力
与技术的影响齐头并进的是每个公司谈论他们在开源方面的努力。通过传播这些与项目和社区有关的消息一个开源项目部门能够通过有针对性的营销活动来提供最大的影响。营销在开放源码领域一直是一个肮脏的词汇因为每个人都有一个由企业营销造成的糟糕的经历。在开源社区中营销呈现出一种与传统方法截然不同的形式它会更注重于我们的社区已经在战略方向上做了什么。因此一个开源项目部门不可能去宣传一些根本还没有发布任何代码的项目但是他们会讨论他们创造什么软件和参与了其他什么举措。基本上不会有“雾件vaporware”。
想想谷歌的开源项目部门作出的第一份工作。他们不只是简单地贡献代码给 Linux 内核或其他项目他们更多的是谈论它并经常在开源会议主题演讲。他们不仅仅是把钱给编写开源代码的学生他们还创建了一个全球计划——“Google Summer of Code”现在已经成为一种开源发展的文化试金石。这些市场营销的作用在 Kubernetes 开发完成之前就奠定了谷歌在开源世界的巨头地位。最终,谷歌在创建 GPLv3 授权协议期间拥有重要影响力,并且在科技活动中公司的发言人和开源项目部门的代表人成为了主要人物。开源项目部门是协调这些工作的最好的实体,并可以为母公司提供真正的价值。
### 改善内部流程
改善内部流程听起来不像一个大好处但克服混乱的内部流程对于每一个开源项目部门都是一个挑战不论是对软件供应商还是公司内的部门。而软件供应商必须确保他们的流程不与他们发布的产品重叠例如不小心开源了他们的商业售卖软件用户更关心的是侵犯了知识产权IP专利、版权和商标。没有人想只是因为释放软件而被起诉。没有一个活跃的开源项目部门去管理和协调这些许可和其他法律问题的话大公司在开源流程和管理上会面临着巨大的困难。为什么这个很重要呢如果不同的团队释放的软件是在不兼容的许可证下那么这不仅是一个坑爹的尴尬它还将对实现最基本的目标改良协作产生巨大的障碍。
考虑到还有许多这样的公司仍在飞快的增长,如果无法建立基本流程规则的话,将可以预见到它们将会遇到阻力。我见过一个罗列着批准、未经批准的许可证的巨大的电子表格,以及指导如何(或如何不)创建开源社区而遵守法律限制。关键是当开发者需要做出决定时要有一个可以依据的东西,并且每次当开发人员想要为一个开源社区贡献代码时,可以不产生大量的法律开销,和效率低下的知识产权检查。
有一个活跃的开源项目部门,负责维护许可规则和源码贡献,以及建立面向工程师的培训项目,有助于避免潜在的法律缺陷和昂贵的诉讼。毕竟,良好的开源项目合作可以减少由于某人没有看许可证而导致公司赔钱这样的事件。好消息是,公司已经可以较少地担心专有知识产权与软件供应商发生冲突的事。坏消息是,它们的法律问题不够复杂,尤其是当他们需要直接面对软件供应商的阻力时。
你的组织是如何受益于拥有一个开源项目部门的?可以在评论中与我们分享。
本文作者 John Mark Walker 是 Dell EMC 的产品管理总监,负责管理 ViPR 控制器产品及 CoprHD 开源社区。他领导过包括 ManageIQ 在内的许多开源社区。
--------------------------------------------------------------------------------
via: https://opensource.com/business/16/9/4-big-ways-companies-benefit-having-open-source-program-offices
作者:[John Mark Walker][a]
译者:[chao-zhi](https://github.com/chao-zhi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/johnmark
[1]: https://opensource.com/business/16/5/whats-open-source-program-office
[2]: https://opensource.com/business/16/8/google-open-source-program-office
宽松开源许可证的崛起意味着什么
====
为什么像 GNU GPL 这样的限制性许可证越来越不受青睐。
“如果你用了任何开源软件, 那么你软件的其他部分也必须开源。” 这是微软前 CEO 巴尔默 2001 年说的, 尽管他说的不对, 还是引发了人们对自由软件的 FUD恐惧、不确定和怀疑即 fear, uncertainty and doubt。 大概这才是他的意图。
对开源软件的这些 FUD 主要与开源许可有关。 现在有许多不同的许可证, 当中有些限制比其他的更严格(也有人称“更具保护性”)。 诸如 GNU 通用公共许可证 GPL 这样的限制性许可证使用了 copyleft 的概念。 copyleft 赋予人们自由发布软件副本和修改版的权力, 只要衍生工作保留同样的权力。 bash 和 GIMP 等开源项目就是使用了 GPL(v3)。 还有一个 AGPL Affero GPL 许可证, 它为网络上的软件(如 web service提供了 copyleft 许可。
这意味着, 如果你使用了这种许可的代码, 然后加入了你自己的专有代码, 那么在一些情况下, 整个代码, 包括你的代码也就遵从这种限制性开源许可证。 巴尔默说的大概就是这类的许可证。
但宽松许可证不同。 比如, 只要保留版权声明和许可声明且不要求开发者承担责任, MIT 许可证允许任何人任意使用开源代码, 包括修改和出售。 另一个比较流行的宽松开源许可证是 Apache 许可证 2.0,它还包含了贡献者向用户提供专利授权相关的条款。 使用 MIT 许可证的有 JQuery、.NET Core 和 Rails 使用 Apache 许可证 2.0 的软件包括安卓, Apache 和 Swift。
两种许可证类型最终都是为了让软件更有用。 限制性许可证促进了参与和分享的开源理念, 使每一个人都能从软件中得到最大化的利益。 而宽松许可证通过允许人们任意使用软件来确保人们能从软件中得到最多的利益, 即使这意味着他们可以使用代码, 修改它, 据为己有,甚至以专有软件出售,而不做任何回报。
开源许可证管理公司黑鸭子软件的数据显示, 去年使用最多的开源许可证是限制性许可证 GPL 2.0,份额大约 25%。 宽松许可证 MIT 和 Apache 2.0 次之, 份额分别为 18% 和 16% 再后面是 GPL 3.0, 份额大约 10%。 这样来看, 限制性许可证占 35% 宽松许可证占 34% 几乎是平手。
但这份当下的数据没有揭示发展趋势。黑鸭子软件的数据显示, 从 2009 年到 2015 年的六年间, MIT 许可证的份额上升了 15.7% Apache 的份额上升了 12.4%。 在这段时期, GPL v2 和 v3 的份额惊人地下降了 21.4%。 换言之, 在这段时期里, 大量软件从限制性许可证转到宽松许可证。
这个趋势还在继续。 黑鸭子软件的[最新数据][1]显示, MIT 现在的份额为 26% GPL v2 为 21% Apache 2 为 16% GPL v3 为 9%。 即 30% 的限制性许可证和 42% 的宽松许可证——与前一年的 35% 的限制许可证和 34% 的宽松许可证相比, 发生了重大的转变。 对 GitHub 上使用许可证的[调查研究][2]证实了这种转变。 它显示 MIT 以压倒性的 45% 占有率成为最流行的许可证, 与之相比, GPL v2 只有 13% Apache 只有 11%。
![](http://images.techhive.com/images/article/2016/09/open-source-licenses.jpg-100682571-large.idge.jpeg)
### 引领趋势
从限制性许可证到宽松许可证,这么大的转变背后是什么呢? 是公司害怕如果使用了限制性许可证的软件,他们就会像巴尔默说的那样,失去自己私有软件的控制权了吗? 事实上, 可能就是如此。 比如, Google 就[禁用了 Affero GPL 软件][3]。
[Instructional Media + Magic][4] 的主席 Jim Farmer 是一个教育开源技术的开发者。 他认为很多公司为避免法律问题而不使用限制性许可证。 “问题就在于复杂性。 许可证的复杂性越高, 被他人因为某行为而告上法庭的可能性越高。 高复杂性更可能带来诉讼”, 他说。
他补充说, 这种对限制性许可证的恐惧正被律师们驱动着, 许多律师建议自己的客户使用 MIT 或 Apache 2.0 许可证的软件, 并明确反对使用 Affero 许可证的软件。
他说, 这会对软件开发者产生影响, 因为如果公司都避开限制性许可证软件的使用,开发者想要自己的软件被使用, 就更会把新的软件使用宽松许可证。
但 SalesAgility开源 SuiteCRM 背后的公司)的 CEO Greg Soper 认为这种到宽松许可证的转变也是由一些开发者驱动的。 “看看像 Rocket.Chat 这样的应用。 开发者本可以选择 GPL 2.0 或 Affero 许可证, 但他们选择了宽松许可证,” 他说。 “这样可以给这个应用最大的机会, 因为专有软件厂商可以使用它, 不会伤害到他们的产品, 且不需要把他们的产品也使用开源许可证。 这样如果开发者想要让第三方应用使用他的应用的话, 他有理由选择宽松许可证。”
Soper 指出, 限制性许可证致力于帮助开源项目获得成功,方式是阻止开发者拿了别人的代码、做了修改,但不把结果回报给社区。 “Affero 许可证对我们的产品健康发展很重要, 因为如果有人利用了我们的代码开发,做得比我们好, 却又不把代码回报回来, 就会扼杀掉我们的产品,” 他说。 “ 对 Rocket.Chat 则不同, 因为如果它使用 Affero 那么它会污染公司的知识产权, 所以公司不会使用它。 不同的许可证有不同的使用案例。”
曾在 Gnome、OpenOffice 工作过,现在是 LibreOffice 的开源开发者的 Michael Meeks 同意 Jim Farmer 的观点,认为许多公司确实出于对法律的担心,而选择使用宽松许可证的软件。 “copyleft 许可证有风险, 但同样也有巨大的益处。 遗憾的是人们都听从律师的, 而律师只是讲风险, 却从不告诉你有些事是安全的。”
巴尔默发表他的错误言论已经过去 15 年了, 但它产生的 FUD 还是有影响——即使从限制性许可证到宽松许可证的转变并不是他的目的。
--------------------------------------------------------------------------------
via: http://www.cio.com/article/3120235/open-source-tools/what-the-rise-of-permissive-open-source-licenses-means.html
作者:[Paul Rubens][a]
译者:[willcoderwang](https://github.com/willcoderwang)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.cio.com/author/Paul-Rubens/
[1]: https://www.blackducksoftware.com/top-open-source-licenses
[2]: https://github.com/blog/1964-open-source-license-usage-on-github-com
[3]: http://www.theregister.co.uk/2011/03/31/google_on_open_source_licenses/
[4]: http://immagic.com/
Wire一个极酷、专注于个人隐私的开源聊天应用程序已经来到了 Linux 上
===========
[![开源聊天应用程序 WIRE 来到了 Linux 上](https://itsfoss.com/wp-content/uploads/2016/10/wire-on-desktop-linux.jpeg)][21]
回到大约两年前,一些曾开发 [Skype][20] 的开发人员发行了一个漂亮的新聊天应用程序:[Wire][19]。当我说它漂亮的时候,只是谈论它的“外貌”。Wire 具有一个许多其他聊天应用程序所没有的整洁优美的“外貌”,但这并不是它最大的卖点。
从一开始Wire 就推销自己是[世界上最注重隐私的聊天应用程序][18]。无论是文本、语音电话,还是图表、图像等基本的内容,它都提供端到端的加密。
WhatsApp 也提供‘端到端加密’,但是考虑一下它的所有者 [Facebook 为了吸引用户而把 WhatsApp 的数据分享出去][17]。我不太相信 WhatsApp 以及它的加密手段。
使 Wire 对于我们这些 FOSS自由/开源软件)爱好者来说更加重要的是,几个月前 [Wire 开源了][16]。几个月下来我们见到了一个用于 Linux 的 beta 版本 Wire 桌面应用程序。
除了一个包装器以外,桌面版的 Wire 并没有比 web 版多任何东西。感谢 [Electron 开源项目][15]提供了一种开发跨平台桌面应用程序的简单方式。许多其他应用程序也借助 Electron 为 Linux 带来了原生桌面应用程序,包括 [Skype][14]。
### WIRE 的特性:
在我们了解有关 Linux 版 Wire 应用程序的更多信息之前,让我们先快速看一下它的一些主要特性。
* 开源应用程序
* 针对所有类型内容的全加密
* 无广告,无数据收集,无数据分享
* 支持文本,语音以及视频聊天
* 支持群聊和群电话
* [音频过滤器][1](不需要吸入氦气,只需要使用过滤器就可以用有趣的声音说话)
* 不需要电话号码,可以使用邮箱登录
* 优美、现代化的界面
* 跨平台聊天应用程序iOSAndroidWebMacWindows 和 Linux 客户端均有相应版本)
* 欧洲法保护(欧洲法比美国法更注重隐私)
Wire 有一些更棒的特性,尤其是和 [Snapchat][13] 类似的音频过滤器。
### 在 Linux 上安装 WIRE
在安装 Wire 到 Linux 上之前,让我先警告你它目前还处于 beta 阶段。所以,如果你遇到一些故障,请不要生气。
Wire 有一个 64 位系统可使用的 .deb 客户端。如果你不确定自己的电脑是 [32 位还是 64 位系统][12],可以参考该链接中的技巧来确认。你可以从下面的链接下载 .deb 文件。
- [下载 Linux 版 Wire [Beta]][11]
如果感兴趣的话,你也可以看一看它的源代码:
- [桌面版 Wire 源代码][10]
这是 Wire 的默认界面,看起来像 [elementary OS Loki][9]:
![Linux 上的 Wire 桌面应用程序](https://itsfoss.com/wp-content/uploads/2016/10/Wire-desktop-appl-linux.jpeg)
你看,他们甚至还弄了一个机器人:)
你已经开始使用 Wire 了吗?如果是,你的体验是什么样的?如果没有,你将尝试一下吗?因为它现在是[开源的][7]并且可以在 Linux 上使用。
--------------------------------------------------------------------------------
via: https://itsfoss.com/wire-messaging-linux/
作者:[Abhishek Prakash][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/abhishek/
[1]:https://medium.com/colorful-conversations/the-tune-for-this-summer-audio-filters-eca8cb0b4c57#.c8gvs143k
[7]:https://itsfoss.com/tag/open-source
[8]:https://itsfoss.com/wp-content/uploads/2016/10/Wire-desktop-appl-linux.jpeg
[9]:https://itsfoss.com/tag/elementary-os-loki/
[10]:https://github.com/wireapp/wire-desktop
[11]:https://wire.com/download/
[12]:https://itsfoss.com/32-bit-64-bit-ubuntu/
[13]:https://www.snapchat.com/
[14]:https://itsfoss.com/skpe-alpha-linux/
[15]:http://electron.atom.io/
[16]:http://www.infoworld.com/article/3099194/security/wire-open-sources-messaging-client-woos-developers.html
[17]:https://techcrunch.com/2016/08/25/whatsapp-to-share-user-data-with-facebook-for-ad-targeting-heres-how-to-opt-out/
[18]:http://www.ibtimes.co.uk/wire-worlds-most-private-messaging-app-offers-total-encryption-calls-texts-1548964
[19]:https://wire.com/
[20]:https://www.skype.com/en/
[21]:https://itsfoss.com/wp-content/uploads/2016/10/wire-on-desktop-linux.jpeg

View File

@ -0,0 +1,113 @@
在 Linux 下使用 TCP 封装器来加强网络服务安全
===========
在这篇文章中,我们将会讲述什么是 TCP 封装器TCP wrappers以及如何在一台 Linux 服务器上配置它们来[限制网络服务的权限][7]。在开始之前,我们必须澄清 TCP 封装器并不能消除对于正确[配置防火墙][6]的需要。
就这一点而言,你可以把这个工具看作是一个[基于主机的访问控制列表][5],但它并不能作为你的系统的[终极安全措施][4]。同时使用防火墙和 TCP 封装器,而不是只偏爱其中的一个,将会确保你的服务不会出现单点故障。
### 正确理解 hosts.allow 和 hosts.deny 文件
当一个网络请求到达你的主机的时候TCP 封装器会使用 `hosts.allow``hosts.deny`(按照这样的顺序)来决定客户端是否应该被允许使用某个提供的服务。
在默认情况下,这些文件内容是空的,或者被注释掉,或者根本不存在。此时,任何请求都会被允许通过 TCP 封装器,你的系统将只能依靠防火墙来提供所有的保护。但这并不是我们想要的。由于在一开始我们就介绍过的原因,请确保下面两个文件都存在:
```
# ls -l /etc/hosts.allow /etc/hosts.deny
```
两个文件的编写语法规则是一样的:
```
<services> : <clients> [: <option1> : <option2> : ...]
```
在文件中,
1. `services` 指当前规则对应的服务,是一个逗号分割的列表。
2. `clients` 指被规则影响的主机名或者 IP 地址,逗号分割的。下面的通配符也可以接受:
1. `ALL` 表示所有事物,应用于`clients`和`services`。
2. `LOCAL` 表示匹配主机名中不包含点号(即没有完全限定域名FQDN的机器例如 `localhost`
3. `KNOWN` 表示主机名、主机地址或者用户是已知的(即可以通过 DNS 或其它服务解析到)。
4. `UNKNOWN``KNOWN` 相反。
5. `PARANOID` 表示当正向和反向 DNS 解析(先根据 IP 解析主机名,再根据主机名解析 IP返回了不一致的结果时连接会被断开。
3. 最后,一个冒号分割的动作列表表示了当一个规则被触发的时候会采取什么操作。
你应该记住 `/etc/hosts.allow` 文件中允许一个服务接入的规则要优先于 `/etc/hosts.deny` 中的规则。另外还有,如果两个规则应用于同一个服务,只有第一个规则会被纳入考虑。
不幸的是,不是所有的网络服务都支持 TCP 封装器。要查看一个给定的服务是否支持它,可以执行以下命令:
```
# ldd /path/to/binary | grep libwrap
```
如果以上命令执行以后得到了类似以下的结果,那么这个服务就支持 TCP 封装器。下面以 `sshd``vsftpd` 为例,输出如下所示。
[![Find Supported Services in TCP Wrapper](http://www.tecmint.com/wp-content/uploads/2016/10/Find-Supported-Services-in-TCP-Wrapper.png)][3]
*查找支持 TCP 封装器的服务*
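如果需要批量检查多个程序,可以把上面的 `ldd | grep libwrap` 判断包装成一个小脚本。下面是一个假设的辅助函数(非文章原文,函数名为演示用途),其中以 `/bin/sh` 作为示例参数,实际使用时可以替换为 `/usr/sbin/sshd` 等服务程序的路径:

```shell
#!/bin/bash
# 假设的辅助函数:检查一个可执行文件是否链接了 libwrap即是否支持 TCP 封装器)
check_libwrap() {
    if ldd "$1" 2>/dev/null | grep -q libwrap; then
        echo "$1: 支持 TCP 封装器"
    else
        echo "$1: 不支持 TCP 封装器"
    fi
}

# 示例:检查 /bin/sh系统上存在的任意可执行文件均可作为参数
check_libwrap /bin/sh
```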
### 如何使用 TCP 封装器来限制服务的权限
当你编辑 `/etc/hosts.allow``/etc/hosts.deny` 的时候,确保你在最后一个非空行后面通过回车键来添加一个新的行。
为了使得 [SSH 和 FTP][2] 服务只允许 `localhost``192.168.0.102` 并且拒绝所有其他用户,在 `/etc/hosts.deny` 添加如下内容:
```
sshd,vsftpd : ALL
ALL : ALL
```
而且在 `/etc/hosts.allow` 文件中添加如下内容:
```
sshd,vsftpd : 192.168.0.102,LOCAL
```
这些更改会立刻生效并且不需要重新启动。
在下图中你会看到:在 `/etc/hosts.allow` 的最后一行中删掉 `LOCAL` 通配符后FTP 服务对 `localhost` 不可用;重新添加该通配符以后,服务又变得可用了。
[![确认 FTP 权限 ](http://www.tecmint.com/wp-content/uploads/2016/10/Verify-FTP-Access.png)][1]
*确认 FTP 权限*
为了允许所有服务对于主机名中含有 `example.com` 都可用,在 `hosts.allow` 中添加如下一行:
```
ALL : .example.com
```
而为了禁止 `10.0.1.0/24` 的机器访问 `vsftpd` 服务,在 `hosts.deny` 文件中添加如下一行:
```
vsftpd : 10.0.1.
```
在最后的两个例子中,注意客户端列表里开头(`.example.com`)和结尾(`10.0.1.`)的点:前者匹配所有以该字符串结尾的主机名,后者匹配所有以该字符串开头的 IP 地址。
这篇文章对你有用吗?你有什么问题或者评论吗?请你尽情在下面留言交流。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/secure-linux-tcp-wrappers-hosts-allow-deny-restrict-access/
作者:[Gabriel Cánepa][a]
译者:[LinuxBars](https://LinuxBar.org)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Verify-FTP-Access.png
[2]:http://www.tecmint.com/block-ssh-and-ftp-access-to-specific-ip-and-network-range/
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-Supported-Services-in-TCP-Wrapper.png
[4]:http://www.tecmint.com/linux-server-hardening-security-tips/
[5]:https://linux.cn/article-3966-1.html
[6]:https://linux.cn/article-4425-1.html
[7]:https://linux.cn/article-7719-1.html

View File

@ -1,50 +1,45 @@
怎样在 RHEL、CentOS 和 Fedora 上安装 Git 并设置 Git 账号
=========

对于新手来说Git 是一个自由、开源、高效的分布式版本控制系统VCS它是基于速度、高性能以及数据一致性而设计的以支持从小规模到大体量的软件开发项目。

Git 是一个可以让你追踪软件改动、版本回滚以及创建另外一个版本的目录和文件的软件仓库。

Git 主要是用 C 语言来写的,混杂了少量的 Perl 脚本和各种 shell 脚本。它主要在 Linux 内核上运行,并且有以下列举的卓越的性能:

- 易于上手
- 运行速度飞快,且大部分操作在本地进行,因此,它极大地提升了那些需要与远程服务器通信的集中式系统的速度。
- 高效
- 提供数据一致性检查
- 支持低开销的本地分支
- 提供非常便利的暂存区
- 可以集成其它工具来支持多种工作流

在这篇操作指南中,我们将介绍在 CentOS/RHEL 7/6 和 Fedora 20-24 Linux 发行版上安装 Git 的必要步骤以及怎么配置 Git以便于你可以快速开始工作。

### 使用 Yum 安装 Git

我们将从系统默认的仓库安装 Git并通过运行以下 [YUM 包管理器][8] 的更新命令来确保你系统的软件包都是最新的:

```
# yum update
```

接着,通过以下命令来安装 Git

```
# yum install git
```

在 Git 成功安装之后,你可以通过以下命令来显示 Git 安装的版本:

```
# git --version
```

![检查 Git 的安装版本](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Git-Version.png)

*检查 Git 安装的版本*

注意:从系统默认仓库安装的 Git 会是比较旧的版本。如果你想拥有最新版的 Git请考虑使用以下说明来编译源代码进行安装。
@ -55,7 +50,6 @@ Git 主要是用 C 语言来写的,当中还混入了 Perl 和各种各样的
```
# yum groupinstall "Development Tools"
# yum install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel
```

安装所需的软件依赖包之后,转到官方的 [Git 发布页面][6],抓取最新版的 Git 并使用下列命令编译它的源代码:
@ -68,39 +62,36 @@ Git 主要是用 C 语言来写的,当中还混入了 Perl 和各种各样的
```
# ./configure --prefix=/usr/local
# make install
# git --version
```

![检查 Git 的安装版本](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Git-Source-Version.png)

*检查 Git 的安装版本*

**推荐阅读:** [Linux 下 11 个最好用的 Git 客户端和 Git 仓库查看器][4]

### 在 Linux 设置 Git 账户

在这个环节中,我们将介绍如何使用正确的用户信息(如:姓名、邮件地址)和 `git config` 命令来设置 Git 账户,以避免出现提交错误。

注意:确保将下面的 `username` 替换为在你的系统上创建和使用的 Git 用户的真实名称。

你可以使用下面的 [useradd 命令][3] 创建一个 Git 用户,其中 `-m` 选项用于在 `/home` 目录下创建用户主目录,`-s` 选项用于指定用户默认的 shell。

```
# useradd -m -s /bin/bash username
# passwd username
```

现在,将新用户添加到 `wheel` 用户组以启用其使用 `sudo` 命令的权限:

```
# usermod username -aG wheel
```

![创建 Git 用户账号](http://www.tecmint.com/wp-content/uploads/2016/10/Create-Git-User-Account.png)

*创建 Git 用户账号*

然后通过以下命令使用新用户配置 Git
@ -108,40 +99,36 @@ Git 主要是用 C 语言来写的,当中还混入了 Perl 和各种各样的
```
# su username
$ sudo git config --global user.name "Your Name"
$ sudo git config --global user.email "you@example.com"
```

现在通过下面的命令校验 Git 的配置。

```
$ sudo git config --list
```

如果配置没有错误的话,你应该能够看到类似以下详细信息的输出:

```
user.name=username
user.email= username@some-domian.com
```

![在 Linux 设置 Git 用户](http://www.tecmint.com/wp-content/uploads/2016/10/Setup-Git-Account.png)

*在 Linux 设置 Git 用户*

### 总结

在这个简单的教程中,我们已经了解怎么在你的 Linux 系统上安装 Git 以及配置它。我相信你应该可以驾轻就熟。
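作为补充,下面是一段可以在沙盒环境中验证 `git config --global` 效果的脚本(非文章原文,使用了一个临时的 `HOME` 目录和演示用的假设用户信息,不会影响你真实的配置):

```shell
#!/bin/bash
# 沙盒演示:在独立的 HOME 目录中设置并读取 Git 的全局用户信息
export HOME=$(mktemp -d)                       # 临时 HOME全局配置写入其中的 .gitconfig
git config --global user.name "Test User"
git config --global user.email "test@example.com"
git config --global --get user.name            # Test User
git config --global --get user.email           # test@example.com
```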
--------------------------------------------------------------------------------

via: http://www.tecmint.com/install-git-centos-fedora-redhat/

作者:[Aaron Kili][a]
译者:[OneNewLife](https://github.com/OneNewLife)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -153,4 +140,4 @@ user.email= username@some-domian.com
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Git-Source-Version.png
[6]:https://github.com/git/git/releases
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Git-Version.png
[8]:https://linux.cn/article-2272-1.html

View File

@ -0,0 +1,166 @@
删除一个目录下部分类型之外的所有文件的三种方法
=========
有的时候,你可能会遇到这种情况,你需要删除一个目录下的所有文件,或者只是简单的通过删除除了一些指定类型(以指定扩展名结尾)之外的文件来清理一个目录。
在这篇文章中,我们将会向你展示如何通过 `rm``find``GLOBIGNORE` 命令删除一个目录下除指定文件扩展名或类型之外的所有文件。
在我们进一步深入之前,让我们开始简要的了解一下 Linux 中的一个重要的概念 —— 文件名模式匹配,它可以让我们解决眼前的问题。
在 Linux 下,一个 shell 模式是一个包含以下特殊字符的字符串,称为通配符或者元字符:
1. `*`  匹配 0 个或者多个字符
2. `?`  匹配任意单个字符
3. `[序列]`  匹配序列中的任意一个字符
4. `[!序列]`  匹配任意一个不在序列中的字符
我们将在这儿探索三种可能的办法,包括:
### 使用扩展模式匹配操作符删除文件
下面列出了不同的扩展模式匹配操作符其中的模式列表pattern-list是一个用 `|` 分隔的、包含一个或多个模式的列表:

1. `*(模式列表)`  匹配 0 个或多个指定的模式
2. `?(模式列表)`  匹配 0 个或 1 个指定的模式
3. `+(模式列表)`  匹配 1 个或多个指定的模式
4. `@(模式列表)`  匹配给定模式中的某一个
5. `!(模式列表)`  匹配除了指定模式之外的任何内容
为了使用它们,需要像下面一样打开 extglob shell 选项:
```
# shopt -s extglob
```
**1. 输入以下命令,删除一个目录下除了 filename 之外的所有文件**
```
$ rm -v !("filename")
```
![删除 Linux 下除了一个文件之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/DeleteAll-Files-Except-One-File-in-Linux.png)
*删除 Linux 下除了一个文件之外的所有文件*
**2. 删除除了 filename1 和 filename2 之外的所有文件**
```
$ rm -v !("filename1"|"filename2")
```
![在 Linux 下删除除了一些文件之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Few-Files-in-Linux.png)
*在 Linux 下删除除了一些文件之外的所有文件*
**3. 下面的例子显示如何通过交互模式删除除了 `.zip` 文件之外的所有文件:**
```
$ rm -i !(*.zip)
```
![在 Linux 下删除除了 Zip 文件之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Zip-Files-in-Linux.png)
*在 Linux 下删除除了 Zip 文件之外的所有文件*
**4. 接下来,通过如下的方式,你可以删除一个目录下除了 `.zip``.odt` 之外的所有文件,并且在删除的时候,显示正在删除的文件:**
```
$ rm -v !(*.zip|*.odt)
```
![删除除了指定文件扩展的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Certain-File-Extensions.png)
*删除除了指定文件扩展的所有文件*
一旦你已经执行了所有需要的命令,你还可以使用如下的方式关闭 extglob shell 选项。
```
$ shopt -u extglob
```
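以下是一段可以在临时目录中验证上面 extglob 删除效果的沙盒演示脚本(非文章原文,其中的目录和文件名均为演示用的假设值):

```shell
#!/bin/bash
# 沙盒演示:在临时目录中用 extglob 模式删除 .zip 和 .odt 之外的文件
shopt -s extglob
dir=$(mktemp -d)
cd "$dir" || exit 1
touch a.zip b.odt c.txt d.log
rm -v -- !(*.zip|*.odt)     # 只会删除 c.txt 和 d.log
ls                          # 剩下 a.zip 和 b.odt
shopt -u extglob
```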
### 使用 Linux 下的 find 命令删除文件
在这种方法下,我们可以[只使用 find 命令][5]的适当的选项或者采用管道配合 `xargs` 命令,如下所示:
```
$ find /directory/ -type f -not -name 'PATTERN' -delete
$ find /directory/ -type f -not -name 'PATTERN' -print0 | xargs -0 -I {} rm {}
$ find /directory/ -type f -not -name 'PATTERN' -print0 | xargs -0 -I {} rm [options] {}
```
**5. 下面的命令将会删除当前目录下除了 `.gz` 之外的所有文件**
```
$ find . -type f -not -name '*.gz' -delete
```
![find 命令 —— 删除 .gz 之外的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-gz-Files.png)
*find 命令 —— 删除 .gz 之外的所有文件*
**6. 使用管道和 xargs你可以通过如下的方式修改上面的例子**
```
$ find . -type f -not -name '*gz' -print0 | xargs -0 -I {} rm -v {}
```
![使用 find 和 xargs 命令删除文件](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-Files-Using-Find-and-Xargs-Command.png)
*使用 find 和 xargs 命令删除文件*
**7. 让我们看一个额外的例子,下面的命令行将会删除掉当前目录下除了 `.gz``.odt` 和 `.jpg` 之外的所有文件:**
```
$ find . -type f -not \( -name '*gz' -or -name '*odt' -or -name '*.jpg' \) -delete
```
![删除除了指定扩展文件的所有文件](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-File-Extensions.png)
*删除除了指定扩展文件的所有文件*
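同样可以在一个临时目录中验证这种 `-not \( ... \)` 组合条件的效果,下面是一段沙盒演示脚本(非文章原文,文件名均为演示用的假设值):

```shell
#!/bin/bash
# 沙盒演示:用 find 的 -not \( ... \) 组合条件删除三种扩展名之外的文件
dir=$(mktemp -d)
touch "$dir"/{a.gz,b.odt,c.jpg,d.tmp}
find "$dir" -type f -not \( -name '*.gz' -o -name '*.odt' -o -name '*.jpg' \) -delete
ls "$dir"    # 剩下 a.gz、b.odt 和 c.jpg
```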
### 通过 bash 中的 GLOBIGNORE 变量删除文件
最后一种方法只适用于 bash。`GLOBIGNORE` 变量存储了一个路径名展开pathname expansion功能的忽略模式或文件名列表以冒号分隔。
为了使用这种方法,切换到要删除文件的目录,像下面这样设置 `GLOBIGNORE` 变量:
```
$ cd test
$ GLOBIGNORE=*.odt:*.iso:*.txt
```
在这种情况下,除了 `.odt``.iso` 和 `.txt` 之外的所有文件,都将从当前目录删除。
现在,运行如下的命令清空这个目录:
```
$ rm -v *
```
之后,关闭 `GLOBIGNORE` 变量:
```
$ unset GLOBIGNORE
```
![使用 bash 变量 GLOBIGNORE 删除文件](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-Files-Using-Bash-GlobIgnore.png)
*使用 bash 变量 GLOBIGNORE 删除文件*
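下面是一段可以在临时目录中验证 `GLOBIGNORE` 效果的沙盒演示脚本(非文章原文,文件名均为演示用的假设值):

```shell
#!/bin/bash
# 沙盒演示GLOBIGNORE 使 * 的展开结果跳过被匹配的文件
dir=$(mktemp -d)
cd "$dir" || exit 1
touch a.odt b.iso c.txt d.log
GLOBIGNORE='*.odt:*.iso:*.txt'
rm -v -- *        # * 只会展开为 d.log
unset GLOBIGNORE
ls                # 剩下 a.odt、b.iso 和 c.txt
```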
注:为了理解上面的命令行采用的标识的意思,请参考我们在每一个插图中使用的命令对应的 man 手册。
就这些了!如果你知道有实现相同目的的其他命令行技术,不要忘了通过下面的反馈部分分享给我们。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/delete-all-files-in-directory-except-one-few-file-extensions/
作者:[Aaron Kili][a]
译者:[yangmingming](https://github.com/yangmingming)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-Files-Using-Bash-GlobIgnore.png
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-File-Extensions.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-Files-Using-Find-and-Xargs-Command.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-gz-Files.png
[5]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Certain-File-Extensions.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Zip-Files-in-Linux.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Few-Files-in-Linux.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/DeleteAll-Files-Except-One-File-in-Linux.png

View File

@ -1,13 +1,13 @@
怎样在 CentOS 里下载 RPM 包及其所有依赖包
========

![怎样在 CentOS 下下载 RPM 包及其所有依赖包](https://www.ostechnix.com/wp-content/uploads/2016/10/Download-a-package-720x340.png)

前几天我尝试去创建一个仅包含了我们经常在 CentOS 7 下使用的软件的本地仓库。当然,我们可以使用 curl 或者 wget 下载任何软件包,然而这些命令并不能下载要求的依赖软件包。你必须去花一些时间而且手动地去寻找和下载被安装的软件所依赖的软件包。然而,我们并不是必须这样。在这个简短的教程中,我将会带领你以两种方式下载软件包及其所有依赖包。我已经在 CentOS 7 下进行了测试,不过这些相同的步骤或许在其他基于 RPM 管理系统的发行版上也可以工作,例如 RHEL、Fedora 和 Scientific Linux。

### 方法 1利用 “Downloadonly” 插件下载 RPM 软件包及其所有依赖包

我们可以通过 yum 命令的 “Downloadonly” 插件下载 RPM 软件包及其所有依赖包。

为了安装 Downloadonly 插件,以 root 身份运行以下命令。
@ -15,13 +15,13 @@
```
yum install yum-plugin-downloadonly
```

现在,运行以下命令去下载一个 RPM 软件包。

```
yum install --downloadonly <package-name>
```

默认情况下,这个命令将会下载并把软件包保存到 `/var/cache/yum/``rhel-{arch}-channel/packageslocation` 目录,不过,你也可以下载和保存软件包到任何位置,这可以通过 `--downloaddir` 选项来指定。

```
yum install --downloadonly --downloaddir=<directory> <package-name>
```
@ -86,9 +86,9 @@ Total 331 kB/s | 3.0 MB 00:00:09
```
exiting because "Download Only" specified
```

![rootserver1_001](http://www.ostechnix.com/wp-content/uploads/2016/10/root@server1_001.png)

现在去你指定的目录位置下,你将会看到那里有下载好的软件包和依赖的软件。在我这种情况下,我已经把软件包下载到 `/root/mypackages/` 目录下。

让我们来查看一下内容。
@ -108,10 +108,9 @@ mailcap-2.1.41-2.el7.noarch.rpm
[![rootserver1_003](http://www.ostechnix.com/wp-content/uploads/2016/10/root@server1_003-1.png)][5]

正如你在上面输出中所看到的httpd 软件包已经连同它的所有依赖一并下载完成了。

请注意,这个插件适用于 `yum install/yum update`,但是并不适用于 `yum groupinstall`。默认情况下,这个插件将会下载仓库中最新可用的软件包。然而你可以通过指定版本号来下载某个特定的软件版本。

例子:
@ -119,29 +118,28 @@ mailcap-2.1.41-2.el7.noarch.rpm
```
yum install --downloadonly --downloaddir=/root/mypackages/ httpd-2.2.6-40.el7
```

此外,你也可以如下一次性下载多个包:

```
yum install --downloadonly --downloaddir=/root/mypackages/ httpd vsftpd
```

### 方法 2使用 “Yumdownloader” 工具来下载 RPM 软件包及其所有依赖包

“Yumdownloader” 是一款简单,但是却十分有用的命令行工具,它可以一次性下载任何 RPM 软件包及其所有依赖包。

以 root 身份运行如下命令安装 “Yumdownloader” 工具。

```
yum install yum-utils
```

一旦安装完成,运行如下命令去下载一个软件包,例如 httpd

```
yumdownloader httpd
```

为了根据所有依赖性下载软件包,我们使用 `--resolve` 参数:

```
yumdownloader --resolve httpd
```
@ -149,18 +147,19 @@ yumdownloader --resolve httpd
默认情况下Yumdownloader 将会下载软件包到当前工作目录下。

为了将软件下载到一个特定的目录下,我们使用 `--destdir` 参数:

```
yumdownloader --resolve --destdir=/root/mypackages/ httpd
```

或者

```
yumdownloader --resolve --destdir /root/mypackages/ httpd
```

终端输出:

```
Loaded plugins: fastestmirror
@ -188,9 +187,10 @@ Loading mirror speeds from cached hostfile
(5/5): httpd-2.4.6-40.el7.centos.4.x86_64.rpm | 2.7 MB 00:00:19
```

![rootserver1_004](http://www.ostechnix.com/wp-content/uploads/2016/10/root@server1_004-1.png)

让我们确认一下软件包是否被下载到我们指定的目录下。

```
ls /root/mypackages/
```
@ -205,7 +205,7 @@ httpd-tools-2.4.6-40.el7.centos.4.x86_64.rpm
mailcap-2.1.41-2.el7.noarch.rpm
```

![rootserver1_005](http://www.ostechnix.com/wp-content/uploads/2016/10/root@server1_005.png)

不像 Downloadonly 插件Yumdownloader 可以下载一组相关的软件包。
@ -213,7 +213,7 @@ mailcap-2.1.41-2.el7.noarch.rpm
```
yumdownloader "@Development Tools" --resolve --destdir /root/mypackages/
```

在我看来,我喜欢 Yumdownloader 更胜于 Yum 的 Downloadonly 插件。但是,两者都十分简单易懂而且可以完成相同的工作。

这就是今天所有的内容,如果你觉得这份教程有用,请在你的社交媒体上分享,让更多的人知道。
@ -224,10 +224,8 @@ yumdownloader "@Development Tools" --resolve --destdir /root/mypackages/
via: https://www.ostechnix.com/download-rpm-package-dependencies-centos/

作者:[SK][a]
译者:[LinuxBars](https://github.com/LinuxBars)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,202 @@
如何在 Arch Linux 的终端里设定 WiFi 网络
===
![How To Setup A WiFi In Arch Linux Using Terminal](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/how-to-connect-to-wifi-in-arch-linux-cli_orig.jpg)
如果你使用的是 [Arch][15] 之外的其他 [Linux 发行版][16],那么可能会不习惯在终端里设置 WiFi。尽管整个过程有点简单不过我还是要讲一下。在这篇文章里我将带领新手们通过一步步的设置向导把你们的 Arch Linux 接入到你的 WiFi 网络里。
在 Linux 里有很多程序来设置无线连接,我们可以用 **ip****iw** 来配置因特网连接,但是对于新手来说有点复杂。所以我们会使用 **netctl** 命令,这是一个基于命令行的工具,用来通过配置文件来设置和管理网络连接。
注意:所有的设定都需要 root 权限,或者你也可以使用 sudo 命令来完成。
### 搜索网络
运行下面的命令来查看你的网络接口:
```
iwconfig
```
运行如下命令启用你的网络接口,如果没有启用的话:
```
ip link set interface up
```
运行下面的命令搜索可用的 WiFi 网络。可以向下翻页来查看。
```
iwlist interface scan | less
```
**注意:** 命令里的 interface 是之前用 **iwconfig** 获取到的实际网络接口。
扫描完,如果不使用该接口可以运行如下命令关闭:
```
ip link set interface down
```
### 使用 netctl 配置 Wi-Fi
在使用 `netctl` 设置连接之前,你必须先检查一下你的网卡在 Linux 下的兼容性。
运行命令:
```
lspci -k
```
这条命令是用来检查内核是否加载了你的无线网卡驱动。输出必须是像这样的:
![arch linux wifi kernel compatibility ](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-wifi-find-kernel-compatibility_orig.png)
如果内核没有加载驱动,你就必须使用有线连接来安装一下。这里是 Linux 无线网络的官方维基页面:[https://wireless.wiki.kernel.org/][11]。
如果你的无线网卡和 Linux 兼容,你就可以开始使用 `netctl` 进行配置了。
`netctl` 使用配置文件,这是一个包含连接信息的文件。创建这个文件有简单和困难两种方式。
#### 简单方式 Wifi-menu
如果你想用 `wifi-menu`,必须安装 `dialog`
1. 运行命令: `wifi-menu`
2. 选择你的网络
![wifi-menu to setup wifi in arch](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-to-setup-wifi-in-arch_orig.png)
3. 输入正确的密码并等待
![wifi-menu setup wifi password in arch](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-type-wifi-password-in-arch.png?605)
如果没有连接失败的信息,你可以用下面的命令确认下:
```
ping -c 3 www.google.com
```
哇!如果你看到正在 ping意味着网络设置成功。你现在已经在 Arch Linux 下连上 WiFi 了。如果有任何问题,可以倒回去重来。也许漏了什么。
#### 困难方式
比起上面的 `wifi-menu` 命令,这种方式会难一点点,所以我叫做困难方式。在上面的命令里,网络配置会自动生成。而在困难方式里,我们将手动修改配置文件。不过不要担心,也没那么难。那我们开始吧!
1. 首先第一件事,你必须要知道网络接口的名字,通常会是 `wlan0``wlp2s0`,但是也有很多例外。要确认你自己的网络接口,输入 `iwconfig` 命令并记下来。
[![scan wifi networks in arch linux cli](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/scan-wifi-networks-in-arch-linux-cli_orig.png)][8]     
2. 运行命令:
```
cd /etc/netctl/examples
```
在这个目录里,有很多不同的配置文件例子。
3. 拷贝将用到的配置文件例子到 `/etc/netctl/your_profile`
```
cp /etc/netctl/examples/wireless-wpa /etc/netctl/your_profile
```
4. 你可以用这个命令来查看配置文件内容: `cat /etc/netctl/your_profile`
![view network profile in arch linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/view-network-profile-in-arch-linux_orig.png)
5. 用 `vi` 或者 `nano` 编辑你的配置文件的下面几个部分:
```
nano /etc/netctl/your_profile
```
- `Interface`:比如说 `wlan0`
- `ESSID`:你的无线网络名字
- `key`:你的无线网络密码
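在编辑之前,可以参考下面这个假设的 `wireless-wpa` 配置示例(其中接口名、网络名称和密码都是占位值,请替换为你自己的实际信息):

```
Description='示例无线连接WPA 加密)'
Interface=wlan0
Connection=wireless
Security=wpa
ESSID='your_essid'
Key='your_key'
IP=dhcp
```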
**注意:** 
如果你不知道怎么用 `nano`,打开文件后,编辑要修改的地方,完了按 `ctrl+o`,然后回车,然后按 `ctrl+x`
![edit network profile in arch](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edit-network-profile-in-arch_orig.png)
### 运行 netctl
1. 运行命令:
```
cd /etc/netctl
ls
```
你一定会看到 `wifi-menu` 生成的配置文件,比如 `wlan0-SSID`;或者你选择了困难方式,你一定会看到你自己创建的配置文件。
2. 运行命令启动连接配置:`netctl start your_profile`。
3. 用下面的命令测试连接:
```
ping -c 3 www.google.com
```
输出看上去像这样:
![check internet connection in arch linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/check-internet-connection-in-arch-linux_orig.png)      
4. 最后,你必须运行下面的命令:`netctl enable your_profile`。 
```
netctl enable your_profile
```
这样将创建并激活一个 systemd 服务,然后开机时自动启动。然后欢呼吧!你在你的 Arch Linux 里配置好 wifi 网络啦。
### 其他工具
你还可以使用其他程序来设置无线连接:
iw
1. `iw dev wlan0 link`  状态
2. `iw dev wlan0 scan`  搜索网络
3. `iw dev wlan0 connect your_essid`  连接到开放网络
4. `iw dev wlan0 connect your_essid key your_key`  使用 16 进制密钥连接到 WEP 加密的网络
wpa_supplicant
- [https://wiki.archlinux.org/index.php/WPA_supplicant][4]
Wicd
- [https://wiki.archlinux.org/index.php/wicd][3]
NetworkManager
- [https://wiki.archlinux.org/index.php/NetworkManager][2]
### 总结
会了吧!我提供了在 **Arch Linux** 里接入 WiFI 网络的三种方式。这里有一件事我再强调一下,当你执行第一条命令的时候,请记住你的网络接口名字。在接下来搜索网络的命令里,请使用你的网络接口名字比如 `wlan0``wlp2s0`(上一个命令里得到的),而不是用 interface 这个词。如果你碰到任何问题,可以在下面的评论区里直接留言给我。然后别忘了在你的朋友圈里和大家分享这篇文章哦。谢谢!
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/how-to-setup-a-wifi-in-arch-linux-using-terminal
作者:[Mohd Sohail][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.linuxandubuntu.com/contact-us.html
[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-to-setup-wifi-in-arch_orig.png
[2]:https://wiki.archlinux.org/index.php/NetworkManager
[3]:https://wiki.archlinux.org/index.php/wicd
[4]:https://wiki.archlinux.org/index.php/WPA_supplicant
[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/check-internet-connection-in-arch-linux_orig.png
[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edit-network-profile-in-arch_orig.png
[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/view-network-profile-in-arch-linux_orig.png
[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/scan-wifi-networks-in-arch-linux-cli_orig.png
[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-type-wifi-password-in-arch_orig.png?605
[10]:http://www.linuxandubuntu.com/home/5-best-arch-linux-based-linux-distributions
[11]:https://wireless.wiki.kernel.org/
[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-wifi-find-kernel-compatibility_orig.png
[13]:http://www.linuxandubuntu.com/home/arch-linux-take-your-linux-knowledge-to-next-level-review
[14]:http://www.linuxandubuntu.com/home/category/arch-linux
[15]:http://www.linuxandubuntu.com/home/arch-linux-take-your-linux-knowledge-to-next-level-review
[16]:http://linuxandubuntu.com/home/category/distros

View File

@ -0,0 +1,117 @@
在 Linux 上检测硬盘上的坏道和坏块
===
让我们从坏道和坏块的定义开始说起,它们是一块磁盘或闪存上不再能够被读写的部分,一般是由于磁盘表面特定的[物理损坏][7]或闪存晶体管失效导致的。
随着坏道的继续积累,它们会对你的磁盘或闪存容量产生令人不快或破坏性的影响,甚至可能会导致硬件失效。
同时还需要注意的是坏块的存在警示你应该开始考虑买块新磁盘了,或者简单地将坏块标记为不可用。
因此,在这篇文章中,我们通过几个必要的步骤,使用特定的[磁盘扫描工具][6]让你能够判断 Linux 磁盘或闪存是否存在坏道。
以下就是步骤:
### 在 Linux 上使用 badblocks 工具检查坏道

badblocks 工具可以让用户扫描设备来检查坏道或坏块。设备可以是一块硬盘或外置磁盘,由一个如 `/dev/sdc` 这样的文件代表。
首先,通过超级用户权限执行 [fdisk 命令][5]来显示你的所有磁盘或闪存的信息以及它们的分区信息:
```
$ sudo fdisk -l
```
[![列出 Linux 文件系统分区](http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Filesystem-Partitions.png)][4]
*列出 Linux 文件系统分区*
然后用如下命令检查你的 Linux 硬盘上的坏道/坏块:
```
$ sudo badblocks -v /dev/sda10 > badsectors.txt
```
[![在 Linux 上扫描硬盘坏道](http://www.tecmint.com/wp-content/uploads/2016/10/Scan-Hard-Disk-Bad-Sectors-in-Linux.png)][3]
*在 Linux 上扫描硬盘坏道*
上面的命令中badblocks 扫描设备 `/dev/sda10`(记得指定你的实际设备),`-v` 选项让它显示操作的详情。另外,这里使用了输出重定向将操作结果重定向到了文件 `badsectors.txt`
如果你在你的磁盘上发现任何坏道,卸载磁盘并像下面这样让系统不要将数据写入回报的扇区中。
你需要执行 `e2fsck`(针对 ext2/ext3/ext4 文件系统)或 `fsck` 命令,命令中还需要用到 `badsectors.txt` 文件和设备文件。
`-l` 选项告诉命令将在指定的文件 `badsectors.txt` 中列出的扇区号码加入坏块列表。
```
------------ 针对 ext2/ext3/ext4 文件系统 ------------
$ sudo e2fsck -l badsectors.txt /dev/sda10
------------ 针对其它文件系统 ------------
$ sudo fsck -l badsectors.txt /dev/sda10
```
### 在 Linux 上使用 Smartmontools 工具扫描坏道
这个方法对带有 S.M.A.R.TSelf-Monitoring, Analysis and Reporting Technology自我监控、分析和报告技术系统的现代磁盘ATA/SATA 和 SCSI/SAS 硬盘以及固态硬盘更加的可靠和高效。S.M.A.R.T 系统能够帮助检测、报告并可能记录它们的健康状况,这样你就可以找出任何可能出现的硬件失效。
你可以使用以下命令安装 `smartmontools`
```
------------ 在基于 Debian/Ubuntu 的系统上 ------------
$ sudo apt-get install smartmontools
------------ 在基于 RHEL/CentOS 的系统上 ------------
$ sudo yum install smartmontools
```
安装完成之后,使用 `smartctl` 控制磁盘集成的 S.M.A.R.T 系统。你可以这样查看它的手册或帮助:
```
$ man smartctl
$ smartctl -h
```
然后执行 `smartctl` 命令并在命令中指定你的设备作为参数,以下命令包含了参数 `-H``--health` 以显示 SMART 整体健康自我评估测试结果。
```
$ sudo smartctl -H /dev/sda10
```
[![检查 Linux 硬盘健康](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Linux-Hard-Disk-Health.png)][2]
*检查 Linux 硬盘健康*
上面的结果指出你的硬盘很健康,近期内不大可能发生硬件失效。
要获取磁盘信息总览,使用 `-a``--all` 选项来显示关于磁盘所有的 SMART 信息,`-x` 或 `--xall` 来显示所有关于磁盘的 SMART 信息以及非 SMART 信息。
在这个教程中,我们覆盖了有关[磁盘健康诊断][1]的重要话题,你可以在下面的反馈区来分享你的想法或提问,并且记得多回来看看。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/check-linux-hard-disk-bad-sectors-bad-blocks/
作者:[Aaron Kili][a]
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/defragment-linux-system-partitions-and-directories/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Linux-Hard-Disk-Health.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Scan-Hard-Disk-Bad-Sectors-in-Linux.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Filesystem-Partitions.png
[5]:http://www.tecmint.com/fdisk-commands-to-manage-linux-disk-partitions/
[6]:http://www.tecmint.com/ncdu-a-ncurses-based-disk-usage-analyzer-and-tracker/
[7]:http://www.tecmint.com/defragment-linux-system-partitions-and-directories/

View File

@ -0,0 +1,108 @@
如何按最后修改时间对 ls 命令的输出进行排序
============================================================
Linux 用户常常做的一个事情是,是在命令行[列出目录内容][1]。
我们已经知道,[ls][2] 和 [dir][3] 是两个可用于列出目录内容的 Linux 命令,其中前者更受欢迎,在大多数情况下是用户的首选。
我们列出目录内容时,可以按照不同的标准进行排序,例如文件名、修改时间、添加时间、版本或者文件大小。可以通过指定一个特别的参数来使用这些文件的属性进行排序。
在这个简洁的 [ls 命令指导][4]中,我们将看看如何通过上次修改时间(日期和时分秒)[排序 ls 命令的输出结果][5] 。
让我们由执行一些基本的 [ls 命令][6]开始。
### Linux 基本 ls 命令
1、 不带任何参数运行 ls 命令将列出当前工作目录的内容。
```
$ ls
```
![List Content of Working Directory](http://www.tecmint.com/wp-content/uploads/2016/10/List-Content-of-Working-Directory.png)
*列出工作目录的内容*
2、要列出任何目录的内容例如 /etc 目录使用如下命令:
```
$ ls /etc
```
![List Contents of Directory](http://www.tecmint.com/wp-content/uploads/2016/10/List-Contents-of-Directory.png)
*列出工作目录 /etc 的内容*
3、一个目录总是包含一些隐藏的文件至少有两个因此要展示目录中的所有文件使用 `-a``--all` 选项:
```
$ ls -a
```
![List Hidden Files in Directory](http://www.tecmint.com/wp-content/uploads/2016/10/List-Hidden-Files-in-Directory.png)
*列出工作目录的隐藏文件*
4、你还可以打印输出的每一个文件的详细信息例如文件权限、链接数、所有者名称和组所有者、文件大小、最后修改的时间和文件/目录名称。
这可以通过 `-l` 选项来实现,它会产生如下面的屏幕截图所示的长列表格式。
```
$ ls -l
```
![Long List Directory Contents](http://www.tecmint.com/wp-content/uploads/2016/10/ls-Long-List-Format.png)
*长列表目录内容*
### 基于日期和基于时刻排序文件
5、要在目录中列出文件并[按最后修改日期和时间排序][11],可以像下面的命令一样使用 `-t` 选项:
```
$ ls -lt
```
![Sort ls Output by Date and Time](http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-by-Date-and-Time.png)
*按日期和时间排序 ls 的输出内容*
6、如果你想要按日期和时间对文件进行逆向排序可以加上 `-r` 选项,像这样:
```
$ ls -ltr
```
![Sort ls Output Reverse by Date and Time](http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-Reverse-by-Date-and-Time.png)
*按日期和时间排序的逆向输出*
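下面是一段可以在临时目录中验证 `-t``-tr` 排序效果的沙盒演示脚本(非文章原文,文件名和日期均为演示用的假设值,`touch -d` 用于伪造修改时间):

```shell
#!/bin/bash
# 沙盒演示:用三个修改时间不同的文件验证 ls -t 与 ls -tr 的排序
dir=$(mktemp -d)
touch -d '2016-01-01' "$dir/old.txt"
touch -d '2016-06-01' "$dir/middle.txt"
touch -d '2016-12-01' "$dir/newest.txt"
ls -t "$dir"     # newest.txt 排在最前(最近修改的优先)
ls -tr "$dir"    # old.txt 排在最前(-r 反转顺序)
```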
我们将在这里结束。不过,[ls 命令][14]还有更多的使用信息和选项,值得深入了解,比如《[每一个用户应该知道 ls 的命令技巧][15]》或《[使用排序命令][16]》。
最后,你可以通过以下反馈部分联系我们。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/sort-ls-output-by-last-modified-date-and-time
作者:[Aaron Kili][a]
译者:[zky001](https://github.com/zky001)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/file-and-directory-management-in-linux/
[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
[3]:http://www.tecmint.com/linux-dir-command-usage-with-examples/
[4]:http://www.tecmint.com/tag/linux-ls-command/
[5]:http://www.tecmint.com/sort-command-linux/
[6]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Content-of-Working-Directory.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Contents-of-Directory.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Hidden-Files-in-Directory.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/10/ls-Long-List-Format.png
[11]:http://www.tecmint.com/find-and-sort-files-modification-date-and-time-in-linux/
[12]:http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-by-Date-and-Time.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/10/Sort-ls-Output-Reverse-by-Date-and-Time.png
[14]:http://www.tecmint.com/tag/linux-ls-command/
[15]:http://www.tecmint.com/linux-ls-command-tricks/
[16]:http://www.tecmint.com/linux-sort-command-examples/
@ -0,0 +1,117 @@
在 Linux 系统下从 ISO 镜像中提取和复制文件的 3 种方法
============================================================
假设你的 Linux 服务器上有一个超大的 ISO 镜像文件,你想要打开它,然后提取或者复制其中的一个文件。你会怎么做呢?
其实在 Linux 系统里,有很多方法来实现这个要求。
比如说,你可以使用传统的 mount 命令以只读方式把 ISO 镜像文件加载为 loop 设备,然后再把文件复制到另一个目录。
### 在 Linux 系统下提取 ISO 镜像文件
为了完成该测试,你得有一个 ISO 镜像文件(我使用 ubuntu-16.10-server-amd64.iso 系统镜像文件)以及用于挂载和提取 ISO 镜像文件的目录。
首先,使用如下命令创建一个挂载目录来挂载 ISO 镜像文件:
```
$ sudo mkdir /mnt/iso
```
目录创建完成后,你就可以运行如下命令很容易地挂载 ubuntu-16.10-server-amd64.iso 系统镜像文件,并查看其中的内容。
```
$ sudo mount -o loop ubuntu-16.10-server-amd64.iso /mnt/iso
$ ls /mnt/iso/
```
[
![Mount ISO File in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Mount-ISO-File-in-Linux.png)
][1]
*在 Linux 系统里挂载 ISO 镜像*
现在你就可以进入到挂载目录 /mnt/iso 里,查看文件或者使用 [cp 命令][2]把文件复制到 /tmp 目录了。
```
$ cd /mnt/iso
$ sudo cp md5sum.txt /tmp/
$ sudo cp -r ubuntu /tmp/
```
[
![Copy Files From ISO File in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Copy-Files-From-ISO-File-in-Linux.png)
][3]
*在 Linux 系统中复制 ISO 镜像里的文件*
注意:`-r` 选项用于递归复制目录里的内容。如有必要,你也可以[监控复制命令的完成进度][4]。
### 使用 7zip 命令提取 ISO 镜像里的内容
如果不想挂载 ISO 镜像,你可以简单地安装一个 7zip 工具,这是一个自由而开源的解压缩软件,用于压缩或解压不同类型格式的文件,包括 TAR、XZ、GZIP、ZIP、BZIP2 等等。
```
$ sudo apt-get install p7zip-full p7zip-rar [On Debian/Ubuntu systems]
$ sudo yum install p7zip p7zip-plugins [On CentOS/RHEL systems]
```
7zip 软件安装完成后,你就可以使用`7z` 命令提取 ISO 镜像文件里的内容了。
```
$ 7z x ubuntu-16.10-server-amd64.iso
```
[
![7zip - Extract ISO File Content in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Extract-ISO-Content-in-Linux.png)
][5]
*使用 7zip 工具在 Linux 系统下提取 ISO 镜像里的文件*
注意:跟 Linux 的 mount 命令相比起来7zip 在压缩和解压缩任何格式的文件时速度更快,更智能。
### 使用 isoinfo 命令来提取 ISO 镜像文件内容
虽然 `isoinfo` 命令是用来以目录的形式列出 iso9660 镜像文件的内容,但是你也可以使用该程序来提取文件。
如前所述,`isoinfo` 程序会显示目录列表,因此我们先列出 ISO 镜像文件的内容。
```
$ isoinfo -i ubuntu-16.10-server-amd64.iso -l
```
[
![List ISO Content in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/List-ISO-Content-in-Linux.png)
][6]
*Linux 里列出 ISO 文件的内容*
现在你可以按如下的方式从 ISO 镜像文件中提取单个文件:
```
$ isoinfo -i ubuntu-16.10-server-amd64.iso -x MD5SUM.TXT > MD5SUM.TXT
```
注意:因为 `-x` 解压到标准输出,必须使用重定向来提取指定文件。
[
![Extract Single File from ISO in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Extract-Single-File-from-ISO-in-Linux.png)
][7]
*从 ISO 镜像文件中提取单个文件*
就到这里吧,其实还有很多种方法来实现这个要求,如果你还知道其它有用的命令或工具来提取复制出 ISO 镜像文件中的文件,请在下面的评论中跟大家分享下。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/extract-files-from-iso-files-linux
作者:[Ravi Saive][a]
译者:[rusking](https://github.com/rusking)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/admin/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Mount-ISO-File-in-Linux.png
[2]:http://www.tecmint.com/advanced-copy-command-shows-progress-bar-while-copying-files/
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Copy-Files-From-ISO-File-in-Linux.png
[4]:http://www.tecmint.com/monitor-copy-backup-tar-progress-in-linux-using-pv-command/
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Extract-ISO-Content-in-Linux.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/List-ISO-Content-in-Linux.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Extract-Single-File-from-ISO-in-Linux.png
@ -0,0 +1,98 @@
完整指南:在 Linux 上使用 Calibre 创建电子书
====
[![Guide to create an eBoook in Linux with Calibre](https://itsfoss.com/wp-content/uploads/2016/10/Create-an-eBook-in-Linux.jpg)][8]
摘要:这份初学者指南是告诉你如何在 Linux 上用 Calibre 工具快速创建一本电子书。
自从 Amazon亚马逊多年前开始销售电子书电子书就有了质的飞跃并且变得越来越流行。好消息是用自由开源的工具就能非常容易地创建电子书。
在这个教程中,我会告诉你如何在 Linux 上创建一本电子书。
### 在 Linux 上创建一本电子书
要创建一本电子书,你可能需要两个软件:一个文本处理器(当然,我使用的是 [LibreOffice][7])和 Calibre 。[Calibre][6] 是一个非常优秀的电子书阅读器,也是一个电子书库的程序。你可以使用它来[在 Linux 上打开 ePub 文件][5]或者管理你收集的电子书。LCTT 译注LibreOffice 是 Linux 上用来处理文本的软件,类似于 Windows 的 Office 软件)
除了这些软件之外你还需要准备一个电子书封面1410×2250 像素)和你的原稿。
### 第一步
首先,你需要用你的文本处理器程序打开你的原稿。 Calibre 可以自动的为你创建一个书籍目录。要使用到这个功能,你需要在你的原稿中设置每一章的标题样式为 Heading 1在 LibreOffice 中要做到这个只需要高亮标题并且在段落样式下拉框中选择“Heading 1”即可。
![ebook creation with Calibre](https://itsfoss.com/wp-content/uploads/2016/10/header1.png)
如果你想要有子章节,并且希望他们也被加入到目录中,只需要设置这些子章节的标题为 Heading 2。
做完这些之后,保存你的文档为 HTML 格式文件。
### 第二步
在 Calibre 程序里面点击“添加书籍Add books”按钮。在对话框出现后你可以打开你刚刚存储的 HTML 格式文件,将它加入到 Calibre 中。
![create ebooks with Calibre](https://itsfoss.com/wp-content/uploads/2016/10/calibre1.png)
### 第三步
一旦这个 HTML 文件加入到 Calibre 库中选择这个新文件并且点击“编辑元数据Edit Metadata”按钮。在这里你可以添加下面的这些信息标题Title)、 作者Author、封面图片cover image、 描述description和其它的一些信息。当你填完之后点击“Ok”。
![creating ebooks with Calibre in Linux](https://itsfoss.com/wp-content/uploads/2016/10/calibre2.png)
### 第四步
现在点击“转换书籍Convert books”按钮。
在新的窗口中,这里会有一些可选项,但是你不会需要使用它们。
![creating ebooks with Calibre in Linux -2](https://itsfoss.com/wp-content/uploads/2016/10/calibre3.png)
在新窗口的右上部选择框中,选择 epub 文件格式。Calibre 也有创建 mobi 等其它格式的选项,但是我发现用它创建 mobi 文件时经常出现意料之外的问题。
![creating ebooks with Calibre in Linux -3](https://itsfoss.com/wp-content/uploads/2016/10/calibre4.png)
### 第五步
在左边新的对话框中点击“外观Look & Feel”然后勾选“移除段落间空白Remove spacing between paragraphs”。
![creating ebooks with Calibre in Linux - 4](https://itsfoss.com/wp-content/uploads/2016/10/calibre5.png)
接下来,我们会创建一个内容目录。如果不打算在你的书中使用目录,你可以跳过这个步骤。选中“内容目录(Table of Contents”标签。接下来点击“一级目录Level 1 TOC (XPath expression”右边的魔术棒图标。
![creating ebooks with Calibre in Linux - 5](https://itsfoss.com/wp-content/uploads/2016/10/calibre6.png)
在这个新的窗口中,在“匹配 HTML 标签Match HTML tags with tag name”下的下拉菜单中选择“h1”。点击“OK” 来关闭这个窗口。如果你有子章节在“二级目录Level 2 TOC (XPath expression)”下选择“h2”。
![creating ebooks with Calibre in Linux - 6](https://itsfoss.com/wp-content/uploads/2016/10/calibre7.png)
在我们开始生成电子书前,选择输出 EPUB 文件。在这个新的页面选择“插入目录Insert inline Table of Contents”选项。
![creating ebooks with Calibre in Linux - 7](https://itsfoss.com/wp-content/uploads/2016/10/calibre8.png)
现在你需要做的是点击“OK”来开始生成电子书。除非你的原稿是一个大文件否则生成电子书的过程一般都完成得很快。
到此为止,你就已经创建一本电子书了。
对一些特别的用户比如他们知道如何写 CSS 样式文件LCTT 译注CSS 文件可以用来美化 HTML 页面)Calibre 给了这类用户一个选项来为文章增加 CSS 样式。只需要回到“外观Look & Feel”部分选择“风格styling”标签选项。但如果你想创建一个 mobi 格式的文件,因为一些原因,它是不能接受 CSS 样式文件的。
![creating ebooks with Calibre in Linux - 8](https://itsfoss.com/wp-content/uploads/2016/10/calibre9.png)
好了,是不是感到非常容易?我希望这个教程可以帮助你在 Linux 上创建电子书。
--------------------------------------------------------------------------------
via: https://itsfoss.com/create-ebook-calibre-linux/
作者:[John Paul][a]
译者:[chenzhijun](https://github.com/chenzhijun)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[1]:http://pinterest.com/pin/create/button/?url=https://itsfoss.com/create-ebook-calibre-linux/&description=How+To+Create+An+Ebook+With+Calibre+In+Linux+%5BComplete+Guide%5D&media=https://itsfoss.com/wp-content/uploads/2016/10/Create-an-eBook-in-Linux.jpg
[2]:https://www.linkedin.com/cws/share?url=https://itsfoss.com/create-ebook-calibre-linux/
[3]:https://twitter.com/share?original_referer=https%3A%2F%2Fitsfoss.com%2F&source=tweetbutton&text=How+To+Create+An+Ebook+With+Calibre+In+Linux+%5BComplete+Guide%5D&url=https%3A%2F%2Fitsfoss.com%2Fcreate-ebook-calibre-linux%2F&via=%40itsfoss
[4]:https://itsfoss.com/fix-updater-issue-pear-os-8/
[5]:https://itsfoss.com/open-epub-books-ubuntu-linux/
[6]:http://calibre-ebook.com/
[7]:https://www.libreoffice.org/
[8]:https://itsfoss.com/wp-content/uploads/2016/10/Create-an-eBook-in-Linux.jpg
@ -38,35 +38,31 @@ pwgen 14 1

你也可以使用以下标记:

- -c 或 --capitalize
  生成的密码中至少包含一个大写字母
- -A 或 --no-capitalize
  生成的密码中不含大写字母
- -n 或 --numerals
  生成的密码中至少包含一个数字
- -0 或 --no-numerals
  生成的密码中不含数字
- -y 或 --symbols
  生成的密码中至少包含一个特殊字符
- -s 或 --secure
  生成一个完全随机的密码
- -B 或 --ambiguous
  生成的密码中不含易混淆字符ambiguous characters
- -h 或 --help
  输出帮助信息
- -H 或 --sha1=path/to/file[#seed]
  使用指定文件的 sha1 哈希值作为随机生成器
- -C
  按列输出生成的密码
- -1
  不按列输出生成的密码
- -v 或 --no-vowels
  不使用任何元音,以免意外生成让人讨厌的单词

### 使用 gpg 生成高强度密码

@ -76,10 +72,9 @@ pwgen 14 1

```
gpg --gen-random --armor 1 14
```

### 其它方法

当然,可能还有很多方法可以生成一个高强度密码。比方说,你可以添加以下 bash shell 函数到 `~/.bashrc` 文件:

```
genpasswd() {
```

@ -94,10 +89,8 @@ genpasswd() {

via: https://www.rosehosting.com/blog/generate-password-linux-command-line/

作者:[RoseHosting][a]
译者:[GHLandy](https://github.com/GHLandy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,129 @@
如何在 Linux 中将文件编码转换为 UTF-8
===============
在这篇教程中,我们将解释字符编码的含义,然后给出一些使用命令行工具将使用某种字符编码的文件转化为另一种编码的例子。最后,我们将一起看一看如何在 Linux 下将使用各种字符编码的文件转化为 UTF-8 编码。
你可能已经知道,计算机除了二进制数据,是不会理解和存储字符、数字或者任何人类能够理解的东西的。一个二进制位只有两种可能的值,也就是 `0``1``真`或`假``是`或`否`。其它的任何事物,比如字符、数据和图片,必须要以二进制的形式来表现,以供计算机处理。
简单来说,字符编码是一种可以指示电脑来将原始的 0 和 1 解释成实际字符的方式,在这些字符编码中,字符都以一串数字来表示。
字符编码方案有很多种,比如 ASCII、ANSI、Unicode 等等。下面是 ASCII 编码的一个例子。
```
字符 二进制
A 01000001
B 01000010
```
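这张字符与数字的对应关系可以在 shell 里直接验证:`printf` 既能打印字符对应的数字编码,也能按数字转义打印出字符(下面只是一个演示性的小例子):

```shell
# 打印字符 'A' 的十进制 ASCII 码65对应二进制 01000001
printf '%d\n' "'A"

# 反过来65 的八进制是 101用八进制转义打印出字符 A
printf '\101\n'
```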
在 Linux 中,命令行工具 `iconv` 用来将使用一种编码的文本转化为另一种编码。
你可以使用 `file` 命令,并添加 `-i``--mime` 参数来查看一个文件的字符编码,这个参数可以让程序像下面的例子一样输出字符串的 mime (Multipurpose Internet Mail Extensions) 数据:
```
$ file -i Car.java
$ file -i CarDriver.java
```
![在 Linux 中查看文件的编码](http://www.tecmint.com/wp-content/uploads/2016/10/Check-File-Encoding-in-Linux.png)
*在 Linux 中查看文件的编码*
iconv 工具的使用方法如下:
```
$ iconv option
$ iconv options -f from-encoding -t to-encoding inputfile(s) -o outputfile
```
在这里,`-f` 或 `--from-code` 表明了输入编码,而 `-t``--to-code` 指定了输出编码。
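举一个可以直接运行的小例子:先把一段 UTF-8 文本转换为 UTF-16再转换回 UTF-8文件名仅为演示

```shell
# 准备一个 UTF-8 编码的输入文件
printf 'hello\n' > in.txt

# UTF-8 -> UTF-16再转回 UTF-8
iconv -f UTF-8 -t UTF-16 in.txt -o mid.txt
iconv -f UTF-16 -t UTF-8 mid.txt -o out.txt

# 往返转换后,内容应与原文件一致
cat out.txt
```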
为了列出所有已有编码的字符集,你可以使用以下命令:
```
$ iconv -l
```
![列出所有已有编码字符集](http://www.tecmint.com/wp-content/uploads/2016/10/List-Coded-Charsets-in-Linux.png)
*列出所有已有编码字符集*
### 将文件从 ISO-8859-1 编码转换为 UTF-8 编码
下面,我们将学习如何将一种编码方案转换为另一种编码方案。下面的命令将会将 ISO-8859-1 编码转换为 UTF-8 编码。
考虑如下文件 `input.file`,其中包含这几个字符:
```
<EFBFBD> <20> <20> <20>
```
我们从查看这个文件的编码开始,然后来查看文件内容。最后,我们可以把所有字符转换为 UTF-8 编码。
在运行 `iconv` 命令之后,我们可以像下面这样检查输出文件的内容,和它使用的字符编码。
```
$ file -i input.file
$ cat input.file
$ iconv -f ISO-8859-1 -t UTF-8//TRANSLIT input.file -o out.file
$ cat out.file
$ file -i out.file
```
![在 Linux 中将 ISO-8859-1 转化为 UTF-8](http://www.tecmint.com/wp-content/uploads/2016/10/Converts-UTF8-to-ASCII-in-Linux.png)
*在 Linux 中将 ISO-8859-1 转化为 UTF-8*
注意:如果在输出编码后面添加了 `//IGNORE` 字符串,那些不能被转换的字符将被直接丢弃,并且转换结束后,程序会显示一条错误信息。
好,如果字符串 `//TRANSLIT` 被添加到了上面例子中的输出编码之后 (`UTF-8//TRANSLIT`),待转换的字符会尽量采用形译原则。也就是说,如果某个字符在输出编码方案中不能被表示的话,它将会被替换为一个形状比较相似的字符。
而且,如果一个字符不在输出编码中,而且不能被形译,它将会在输出文件中被一个问号标记 `?` 代替。
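`//IGNORE` 的效果可以用一行管道快速观察(以 glibc 的 iconv 为例:`é` 无法用 ASCII 表示,会被直接丢弃):

```shell
# 'café' 中的 éUTF-8 编码为 0xC3 0xA9不在 ASCII 字符集中,
# 使用 //IGNORE 时它会被丢弃,只输出 'caf'
# 注意iconv 可能仍会报告错误并以非零状态退出)
printf 'caf\303\251\n' | iconv -f UTF-8 -t ASCII//IGNORE
```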
### 将多个文件转换为 UTF-8 编码
回到我们的主题。如果你想将多个文件甚至某目录下所有文件转化为 UTF-8 编码,你可以像下面一样,编写一个简单的 shell 脚本,并将其命名为 `encoding.sh`
```
#!/bin/bash
### 将 value_here 替换为输入编码
FROM_ENCODING="value_here"
### 输出编码 (UTF-8)
TO_ENCODING="UTF-8"
### 转换命令
CONVERT="iconv -f $FROM_ENCODING -t $TO_ENCODING"
### 使用循环转换多个文件
for file in *.txt; do
$CONVERT "$file" -o "${file%.txt}.utf8.converted"
done
exit 0
```
保存文件,然后为它添加可执行权限。在待转换文件 (*.txt) 所在的目录中运行这个脚本。
```
$ chmod +x encoding.sh
$ ./encoding.sh
```
重要事项:你也可以使这个脚本变得更通用,比如转换任意一种字符编码到另一种编码。为了达到这个目的,你只需要改变 `FROM_ENCODING``TO_ENCODING` 变量的值。别忘了改一下输出文件的文件名 `"${file%.txt}.utf8.converted"`
若要了解更多信息,可以查看 `iconv` 的手册页 (man page)。
```
$ man iconv
```
将这篇指南总结一下,理解字符编码的概念、了解如何将一种编码方案转换为另一种,是一个电脑用户处理文本时必须要掌握的知识,程序员更甚。
最后,你可以在下面的评论部分中与我们联系,提出问题或反馈。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/convert-files-to-utf-8-encoding-in-linux/
作者:[Aaron Kili][a]
译者:[StdioA](https://github.com/StdioA)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Converts-UTF8-to-ASCII-in-Linux.png
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Coded-Charsets-in-Linux.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-File-Encoding-in-Linux.png
@ -0,0 +1,27 @@
98% 的开发者在工作中使用了开源软件
==================
![developer using open source](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/07/developer.jpg?resize=750%2C500)
开源每天都会达到新的高度。一项新的研究表明,超过 98% 的开发者在工作中使用开源工具。
Git 仓库管理软件 [GitLab][1] 进行了一项调查,披露了一些关于开源接受度的有趣事实。针对开发人员群体的调查表明98% 的开发者更喜欢在工作中使用开源91% 的人在工作和个人项目中选择使用相同的开发工具。此外92% 的人认为分布式版本控制系统Git 仓库)在工作中很重要。
在所有的偏好编程语言中JavaScript 占了 51% 的受访者比例,其后依次是 Python、PHP、Java、Swift 和 Objective-C。86% 的开发者认为安全是评判代码的主要标准。
GitLab 首席执行官兼联合创始人 Sid Sijbrandij 在一次声明中表示:“尽管过程驱动的开发技术在过去已经取得了成功,但开发人员正在寻找一种更自然的软件开发革新以促进项目生命周期内的协作和信息共享。”
这份报告来自 GitLab 在 7 月 6 日和 27 日之间对使用其存储库平台的 362 家初创企业和企业的 CTO、开发人员和 DevOps 专业人士的调查。
--------------------------------------------------------------------------------
via: http://opensourceforu.com/2016/11/98-percent-developers-use-open-source-at-work/
作者:[JAGMEET SINGH][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://opensourceforu.com/author/jagmeet-singh/
[1]:https://about.gitlab.com/2016/11/02/global-developer-survey-2016/
@ -0,0 +1,116 @@
如何在 Linux 中压缩及解压缩 .bz2 文件
============================================================
对文件进行压缩,可以通过使用较少的字节对文件中的数据进行编码来显著地减小文件的大小,并且在跨网络的[文件的备份和传送][1]时很有用。 另一方面,解压文件意味着将文件中的数据恢复到初始状态。
Linux 中有几个[文件压缩和解压缩工具][2]比如gzip、7-zip、Lrzip、[PeaZip][3] 等等。
本篇教程中,我们将介绍如何在 Linux 中使用 bzip2 工具压缩及解压缩`.bz2`文件。
bzip2 是一个非常有名的压缩工具,并且在大多数主流 Linux 发行版上都有,你可以在你的发行版上用合适的命令来安装它。
```
$ sudo apt install bzip2 [On Debian/Ubuntu]
$ sudo yum install bzip2 [On CentOS/RHEL]
$ sudo dnf install bzip2 [On Fedora 22+]
```
使用 bzip2 的常规语法是:
```
$ bzip2 option(s) filenames
```
### 如何在 Linux 中使用“bzip2”压缩文件
你可以如下压缩一个文件,使用`-z`标志启用压缩:
```
$ bzip2 filename
或者
$ bzip2 -z filename
```
要压缩一个`.tar`文件,使用的命令为:
```
$ bzip2 -z backup.tar
```
重要bzip2 默认会在压缩及解压缩文件时删除输入文件(原文件),要保留输入文件,使用`-k`或者`--keep`选项。
此外,`-f`或者`--force`标志会强制让 bzip2 覆盖已有的输出文件。
```
------ 要保留输入文件 ------
$ bzip2 -zk filename
$ bzip2 -zk backup.tar
```
你也可以设置块的大小,从 100k 到 900k分别使用`-1`或者`--fast`到`-9`或者`--best`
```
$ bzip2 -k1 Etcher-linux-x64.AppImage
$ ls -lh Etcher-linux-x64.AppImage.bz2
$ bzip2 -k9 Etcher-linux-x64.AppImage
$ bzip2 -kf9 Etcher-linux-x64.AppImage
$ ls -lh Etcher-linux-x64.AppImage.bz2
```
下面的截屏展示了如何使用选项来保留输入文件,强制 bzip2 覆盖输出文件,并且在压缩中设置块的大小。
![Compress Files Using bzip2 in Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Compress-Files-Using-bzip2-in-Linux.png)
*在 Linux 中使用 bzip2 压缩文件*
### 如何在 Linux 中使用“bzip2”解压缩文件
要解压缩`.bz2`文件,确保使用`-d`或者`--decompress`选项:
```
$ bzip2 -d filename.bz2
```
注意:文件必须带有`.bz2`扩展名,上面的命令才能使用。
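下面用一个临时文件演示完整的压缩-解压往返过程(文件名仅为演示):

```shell
# 创建测试文件并压缩(-z 压缩、-k 保留输入文件)
printf 'hello bzip2' > demo.txt
bzip2 -zk demo.txt        # 生成 demo.txt.bz2demo.txt 被保留

# 删除原文件后再解压,内容应被完整还原
rm demo.txt
bzip2 -d demo.txt.bz2     # 还原出 demo.txt
cat demo.txt
```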
```
$ bzip2 -vd Etcher-linux-x64.AppImage.bz2
$ bzip2 -vfd Etcher-linux-x64.AppImage.bz2
$ ls -l Etcher-linux-x64.AppImage
```
![Decompress bzip2 File in Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Decompression-bzip2-File-in-Linux.png)
*在 Linux 中解压 bzip2 文件*
要浏览 bzip2 的帮助及 man 页面,输入下面的命令:
```
$ bzip2 -h
$ man bzip2
```
最后,通过上面简单的阐述,我相信你现在已经可以在 Linux 中压缩及解压缩`.bz2`文件了。如果有任何问题和反馈,请在评论区中留言。
重要的是,你可能想在 Linux 中查看一些重要的 [tar 命令示例][6],以便学习使用 tar 命令来[创建压缩归档文件][7]。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-compress-decompress-bz2-files-using-bzip2
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
[2]:http://www.tecmint.com/command-line-archive-tools-for-linux/
[3]:http://www.tecmint.com/peazip-linux-file-manager-and-file-archive-tool/
[4]:http://www.tecmint.com/wp-content/uploads/2016/11/Compress-Files-Using-bzip2-in-Linux.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/Decompression-bzip2-File-in-Linux.png
[6]:http://www.tecmint.com/18-tar-command-examples-in-linux/
[7]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/
@ -0,0 +1,110 @@
在 Linux 上使用 Glyphr 设计自己的字体
=============
LibreOffice 提供了丰富的字体,并且用户可以自行选择和下载更多字体。然而,如果你想创造自己的字体,使用 Glyphr 也可以非常容易地做到。Glyphr 是一个新的开源矢量字体设计器通过直观而易用的图形界面和丰富的功能集可以完成字体设计的方方面面。虽然这个应用还处于早期开发阶段但是已经十分棒了。下面会有一个简短的快速入门教你如何使用 Glyphr 创建字体并加入到 LibreOffice。
首先,从[官方 Git 库][14]下载 Glyphr。它提供 32 位和 64 位版本的二进制格式。完成下载后,进入下载文件夹, 解压文件,进入解压后的文件夹,右键点击 Glyphr Studio选择“Run”。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_1.png)
][13]
启动应用后会给你三个选项。一是从头创建一个新的字体集;二是读取已有的项目,可以是 Glyphr Studio 项目文件,也可以是其它 OpenType 字体otf、TrueType 字体ttf甚至是 SVG 字体;三是读取已有的两个示例之一,在示例的基础上修改创建。我将会选择第一个选项,并教你一些简单的设计概念。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_2.png)
][12]
完成进入编辑界面后, 你可以从屏幕左边的面板中选择字母,然后在右边的绘制区域设计。我选择 A 字母的图标开始编辑它。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_3.png)
][11]
要在绘图板上设计图形可以从该板左上角选择“形状”工具矩形、椭圆形或者路径也可以使用工具栏第二行第一项的“路径编辑”工具。使用任意一个工具开始在板上放置路径点path point来创建形状。添加的点越多后续步骤中可调整的形状选项就越多。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_4.png)
][10]
将点移动到不同位置可以获得不同的路径。要调整这些点,可以使用“路径编辑”工具(位于路径编辑工具右边):点击形状,就会出现可编辑的点,然后可以把这些点拖到你喜欢的任意位置。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_5.png)
][9]
最后,形状编辑工具可以让你选择形状并将其拖动到其它位置、更改其尺寸以及旋转。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_6.png)
][8]
其它有用的设计动作集还有左侧面板提供的复制-粘贴、翻转-旋转操作。来看个例子,假设我现在正在创作 B 字母, 我要把已经创建好的上部分镜像到下半部分,保持设计的高度一致性。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_7.png)
][7]
现在,为了达到这个目的,选择形状编辑工具,选中欲镜像的部分,点击复制操作,然后点击图形,把粘贴出来的形状拖放到你需要的位置,再根据需要进行水平或者垂直翻转。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_8.png)
][6]
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_9.png)
][5]
这款应用值得讲述的地方太多了。如果有兴趣深入,还可以了解数字化编辑、曲线和引导等功能。
然而,字体并不仅仅是单个字形的设计,还需要学习字体设计的其它方面。通过应用左上角菜单栏上的“导航”,还可以设置特定字符对之间的字间距、增加连字、部件,以及设置常规的字体属性等。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_10.png)
][4]
最棒的是,你可以通过“测试驱动”来试用你的新字体,帮助你判断字体设计的效果、间距是否合适,从而尽量优化你的字体。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_11.png)
][3]
完成设计和优化后,我们还可以导出 ttf 和 svg 格式的字体。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_12.png)
][2]
要将新的字体加入到系统中,打开字体浏览器并点击“安装”按钮。如果它不工作,可以在主目录下创建一个新的文件夹叫做`.fonts`,并将字体复制进去。也可以使用 root 用户打开文件管理器,进入 `/usr/share/fonts/opentype` 创建一个新的文件夹并粘贴字体文件到里面。然后打开终端,输入命令重建字体缓存:`sudo fc-cache -f -v`。
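上述手动安装步骤可以概括成下面几条命令(`MyFont.ttf` 是假设的字体文件名,`fc-cache` 仅在系统安装了 fontconfig 时可用):

```shell
# 假设:导出的字体文件名为 MyFont.ttf
FONT=MyFont.ttf

# 在主目录下创建用户字体目录,并把字体复制进去(如果文件存在)
mkdir -p ~/.fonts
if [ -f "$FONT" ]; then
    cp "$FONT" ~/.fonts/
fi

# 重建字体缓存(系统装有 fontconfig 时)
command -v fc-cache >/dev/null && fc-cache -f ~/.fonts || true
```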
在 LibreOffice 中已经可以看见新的字体咯,同样也可以使用你系统中的其它文本应用程序如 Gedit 来测试新字体。
[
![](https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/pic_13.png)
][1]
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-design-and-add-your-own-font-on-linux-with-glyphr/
作者:[Bill Toulas][a]
译者:[VicYu/Vic020](http://vicyu.net)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/howtoforgecom
[1]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_13.png
[2]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_12.png
[3]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_11.png
[4]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_10.png
[5]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_9.png
[6]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_8.png
[7]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_7.png
[8]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_6.png
[9]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_5.png
[10]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_4.png
[11]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_3.png
[12]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_2.png
[13]:https://www.howtoforge.com/images/how-to-design-and-add-your-own-font-on-linux-with-glyphr/big/pic_1.png
[14]:https://github.com/glyphr-studio/Glyphr-Studio-Desktop
@ -0,0 +1,85 @@
开源 vs. 闭源
===========================
[![开源 vs. 闭源](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/open-source-vs-closed-source_orig.jpg)][2]
**开源操作系统**和**闭源操作系统**之间有诸多不同。这里我们仅略举数端。
### 开源是什么?自由!
[![开源是什么?](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-is-open-source.jpg?250)][1]
这是用户需要知道的最重要的一点。无论我是否打算修改代码,其他人出于善意的修改都不应受到限制。且如果用户喜欢,他们可以分享这个软件。使用**开源**软件,这些都是可能的。
**闭源**操作系统的许可条款很是吓人。但真的所有人都会看吗许多用户只是点了一下“Accept”而已。
### 价格
几乎所有的开源操作系统都是免费的,仅接受自愿性质的捐款。而且只需一个 **CD/DVD 或 USB**,就能把系统安装到所有你想安装的电脑上。
闭源操作系统相较于开源操作系统就贵了许多,如果你用它来构建 PC每台 PC 得花不少于 $100。
我们可以用花在闭源软件上的钱去买更好的东西。
### 选择
开源操作系统的发行版有很多。如果你不中意眼下的这个,你可以尝试其他的发行版,总能找到适合你的那一个。
诸如 Ubuntu Studio、Bio Linux、Edubuntu、Kali Linux、Qubes、SteamOS 这些,就是针对特定用户群的发行版。
闭源软件有“选择”这种说法吗?我觉得是没有的。
### 隐私
**开源操作系统**无需收集用户数据信息。不会根据用户的个性显示广告,也不会把你的信息卖给第三方机构。如果开发者需要资金,他们会要求捐助,或是在他们的网页上打广告。广告的内容也会与 Linux 相关,对我们有益无害。
你一定知道那个主流的闭源操作系统(名字就不写了,大家都知道的)。据说因为收集用户信息这件事,该系统的许多用户干脆关闭了免费更新服务:为了避免升级,他们宁可承担安全风险,也不要这个免费更新。
### 安全
开源软件大多比较安全。究其缘由,不仅是因为市场占有率低,其本身的构成方式也比较安全。例如Qubes OS 是最安全的操作系统之一,它将运行的软件相互隔离。
闭源操作系统向易用性妥协,使其变得愈发脆弱。
### 硬件支持
我们中有大多数人,家里还放着那些不想丢的旧 PC这时一个轻量级的发行版可能会让这些老家伙们重获新生。你可以试试在文章 [5 个适合旧电脑的轻量级 Linux][3] 中罗列的那些轻量级操作系统发行版。基于 Linux 的系统几乎包含所有设备的驱动,所以基本不需要你亲自寻找、安装驱动。
闭源操作系统时不时中止对旧硬件的支持,迫使用户去购买新的硬件。我们还不得不亲自寻找、安装驱动。
### 社区支持
几乎所有的**开源操作系统**都有用户论坛,你可以在那里提问题,并从别的用户那里得到答案。大家在那里分享技巧和窍门,互帮互助。有经验的 Linux 用户往往乐于帮助新手,为此他们常常不遗余力。
**闭源操作系统**社区和开源操作系统社区简直无法相提并论。提出的问题常常得不到回答。
### 商业化
在操作系统上挣钱是非常困难的。(开源)开发者一般通过用户的捐款和在网站上打广告来挣钱。捐款主要用于支付主机的开销和开发者的薪水。
许多闭源操作系统,不仅仅通过出售软件的使用许可这种方式来赚钱,还会推送广告大赚一票。
### 垃圾应用
我承认有些开源操作系统会提供一些我们可能一辈子都不会用到的应用,有些人认为他们是垃圾应用。但是也有发行版只提供最小安装,其中就不包含这些不想要的软件。所以,这不是真正的问题。
而所有的闭源操作系统中都包含厂商安装的垃圾应用,强制你安装,就像在安装一个干净系统一样。
开源软件更像是一门哲学。一种分享和学习的方式。甚至还有益于经济。
我们已经将我们知晓的罗列了出来。如果你觉得我们还漏了些什么,可以评论告诉我们。
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/open-source-vs-closed-source
作者:[Mohd Sohail][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linkedin.com/in/mohdsohail
[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-is-open-source_orig.jpg?250
[2]:http://www.linuxandubuntu.com/home/open-source-vs-closed-source
[3]:http://www.linuxandubuntu.com/home/5-lightweight-linux-for-old-computers
@ -0,0 +1,34 @@
标志性文本编辑器 Vim 迎来其 25 周年纪念日
===============
![标志性文本编辑器 Vim 迎来其 25 周年纪念日](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-lead-happybirthday.png?itok=38SOo5_T "标志性文本编辑器 Vim 迎来其 25 周年纪念日")
*图片由 [Oscar Cortez][8] 提供。由 Opensource.com 编辑,遵循 [CC BY-SA 2.0][7] 协议发布。*
让我们把时钟往回拨一点。不,别停…再来一点……好了!在 25 年前当你的某些专家同事还在蹒跚学步时Bram Moolenaar 已经开始为他的 Amiga 计算机开发一款文本编辑器。他曾经是 Unix 上的 vi 用户,但 Amiga 上却没有与其类似的编辑器。在三年的开发之后1991 年 11 月 2 日,他发布了“仿 vi 编辑器 (Vi IMitation)”(也就是 [Vim][6])的第一个版本。
两年之后,随着 2.0 版本的发布Vim 的功能集已经超过了 vi所以这款编辑器的全称也被改为了“vi 增强版 (Vi IMproved)”。现在,刚刚度过 25 岁生日的 Vim已经可以在绝大部分平台上运行——Windows、OS/2、OpenVMS、BSD、Android、iOS——并且已在 OS X 及很多 Linux 发行版上成为标配软件。它受到了许多赞誉,也受到了许多批评,不同组织或开发者也会围绕它来发起争论,甚至有些面试官会问:“你用 Emacs 还是 Vim”Vim 采用自由许可证基于与 GPL 兼容的[慈善软件许可证charityware license][5](“帮助乌干达的可怜儿童”)发布。
Vim 是一个灵活的、可扩展的编辑器,带有一个强大的插件系统,被深入集成到许多开发工具,并且可以编辑上百种编程语言或文件类型的文件。 在 Vim 诞生二十五年后Bram Moolenaar 仍然在主导开发并维护它——这真是一个壮举Vim 曾经在超过 20 年的时间里数次间歇中断开发,但在 2016 年 9 月,它发布了 [8.0 版本](https://linux.cn/article-7766-1.html)添加了许多新功能为现代程序员用户提供了更多方便。很多年来Vim 在官网上售卖 T 恤及 logo 贴纸,所得的销售款为支持 [ICCF][4]——一个帮助乌干达儿童的德国慈善机构做出了巨大贡献。Vim 所支持的慈善项目深受 Moolenaar 喜爱Mooleaar 本人也去过乌干达,在基巴莱的一个儿童中心做志愿者。
Vim 在开源历史上记下了有趣的一笔:一个工程,在 25 年中,坚持由一个稳定的核心贡献者维护,拥有超多的用户,但很多人从未了解过它的历史。如果你想简单的了解 Vim[请访问它的网站][3],或者你可以读一读 [Vim 入门的一些小技巧][2],或者在 Opensource.com 阅读一位 [Vim 新用户的故事][1]。
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/11/happy-birthday-vim-25
作者:[D Ruth Bavousett][a]
译者:[StdioA](https://github.com/StdioA)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://opensource.com/business/16/8/7-reasons-love-vim
[2]:https://opensource.com/life/16/7/tips-getting-started-vim
[3]:http://www.vim.org/
[4]:http://iccf-holland.org/
[5]:http://vimdoc.sourceforge.net/htmldoc/uganda.html#license
[6]:http://www.vim.org/
[7]:https://creativecommons.org/licenses/by-sa/2.0/
[8]:https://www.flickr.com/photos/drastudio/7161064915/in/photolist-bUNk7t-d1eWDm-7X6nmx-7AUchG-7AQpe8-9ob1yW-YZfUi-5LqxMi-9rye8j-7xptiR-9AE5Pe-duq1Wu-7DvLFt-7Mt7TN-7xRZHa-e19sFi-7uc6u3-dV7YuK-9DRH37-6oQE3u-9u3TG9-9jbg3J-7ELgDS-5Sgp87-8NXn1u-7ZSBk7-9kytY5-7f1cMC-3sdkMh-8SWLRX-8ebBMm-pfRPHJ-9wsSQW-8iZj4Z-pCaSMa-ejZLDj-7NnKCZ-9PjXb1-92hxHD-7LbXSZ-7cAZuB-7eJgiE-7VKc9d-8Yuun8-9tZReM-dxp7r8-9NH1R4-7QfoWB-7RGWtU-7NCPf9
@ -0,0 +1,192 @@
全新 Kali Linux 系统安装指南
===============
Kali Linux 系统可以说是在[安全测试方面最好][18]的开箱即用的 Linux 发行版。Kali 下的很多工具软件都可以安装在大多数的 Linux 发行版中Offensive Security 团队在 Kali 系统的开发过程中投入大量的时间精力来完善这个用于渗透测试和安全审计的 Linux 发行版。
Kali Linux 是基于 Debian 的面向安全的发行版本。该系统由于预安装了上百个知名的安全工具软件而出名。
Kali 甚至在信息安全领域还有一个含金量较高的认证叫做“Kali 渗透测试Pentesting with Kali”认证。该认证的申请者必须在艰难的 24 小时内成功入侵多台计算机,然后另外 24 小时内完成渗透测试报告并发送给 Offensive Security 的安全人员进行评审。成功通过考试的人将会获得 OSCP 认证证书。
本安装指南及以后的文章主要是为了帮助个人熟悉 Kali Linux 系统和其中一些工具软件的使用。
**请谨慎使用 Kali 下的工具,因为其中一些工具如果使用不当将会导致计算机系统损坏。请在合法的途径下使用所有 Kali 系列文章中所包含的信息。**
### 系统要求
Kali 系统对硬件有一些最基本的要求及建议。根据你的使用目的,也可以使用更高的配置。这篇文章中,假设读者想要把 Kali 安装为电脑上唯一的操作系统。
1. 至少 10GB 的磁盘空间;强烈建议分配更多的存储空间。
2. 至少 512MB 的内存;希望有更多的内存,尤其是在图形界面下。
3. 支持 USB 或 CD/DVD 启动方式。
4. Kali Linux 系统 ISO 镜像下载地址 [https://www.kali.org/downloads/][1]。
### 使用 dd 命令创建 USB 启动工具
本文假设使用 USB 设备来引导安装系统。注意:请尽可能使用 4GB 或者 8GB 的 USB 设备,并且其上的所有数据将会被删除。
本文作者在使用更大容量的 USB 设备时曾在安装过程中遇到问题,但别人未必会遇到。不管怎么说,下面的安装步骤将会清除 USB 设备内的数据。
在开始之前请务必备份所有数据。用于安装 Kali Linux 系统的 USB 启动设备将在另外一台机器上创建完成。
第一步是获取 Kali Linux 系统 ISO 镜像文件。本指南将使用最新版的包含 Enlightenment 桌面环境的 Kali Linux 系统进行安装。
在终端下输入如下命令来获取这个版本的 ISO 镜像文件。
```
$ cd ~/Downloads
$ wget -c http://cdimage.kali.org/kali-2016.2/kali-linux-e17-2016.2-amd64.iso
```
上面两个命令将会把 Kali Linux 的 ISO 镜像文件下载到当前用户的 Downloads 目录。
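下载完成后,在写入 USB 设备之前,最好先用 `sha256sum` 校验镜像的完整性Kali 官网会公布每个镜像的校验值)。下面用一个临时示例文件演示其用法:

```shell
# 用示例文件演示:先生成校验值文件,再进行校验
printf 'demo-iso-content' > demo.iso
sha256sum demo.iso > demo.iso.sha256

# 校验;输出 “demo.iso: OK” 表示文件完好
sha256sum -c demo.iso.sha256
```

对真正的 Kali 镜像,只需把官网公布的 SHA256 值与 `sha256sum kali-linux-e17-2016.2-amd64.iso` 的输出进行对比即可。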
下一步是把 ISO 镜像写入到 USB 设备中来启动安装程序。我们可以使用 Linux 系统中的 `dd` 命令来完成该操作。首先,该 USB 设备要在 `lsblk` 命令下可找到。
```
$ lsblk
```
![Find Out USB Device Name in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Find-USB-Device-Name-in-Linux.png)
*在 Linux 系统中确认 USB 设备名*
确定 USB 设备的名字为 `/dev/sdc`,可以使用 dd 工具将 Kali 系统镜像写入到 USB 设备中。
```
$ sudo dd if=~/Downloads/kali-linux-e17-2016.2-amd64.iso of=/dev/sdc
```
**注意:**以上命令需要 root 权限,因此使用 `sudo` 命令或使用 root 账号登录来执行该命令。这个命令会删除 USB 设备中的所有数据。确保已备份所需的数据。
一旦 ISO 镜像文件完全复制到 USB 设备,接下来可进行 Kali Linux 系统的安装。
### 安装 Kali Linux 系统
1、 首先把 USB 设备插入到要安装 Kali 操作系统的电脑上,然后从 USB 设备引导系统启动。只要成功地从 USB 设备启动系统你将会看到下面的图形界面选择“Install”或者“Graphical Install”选项。
本指南将使用“Graphical Install”方式进行安装。
![Kali Linux Boot Menu](http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Boot-Menu.png)
*Kali Linux 启动菜单*
2、 下面几个界面将会询问用户选择区域设置信息比如语言、国家以及键盘布局。
选择完成之后,系统将会提示用户输入主机名和域名信息。输入合适的环境信息后,点击继续安装。
![Set Hostname for Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Set-Hostname-for-Kali-Linux.png)
*设置 Kali Linux 系统的主机名*
![Set Domain for Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Set-Domain-for-Kali-Linux.png)
*设置 Kali Linux 系统的域名*
3、 主机名和域名设置完成后需要设置 root 用户的密码。请勿忘记该密码。
![Set Root User Password for Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Set-Root-User-Password-for-Kali-Linux.png)
*设置 Kali Linux 系统用户密码*
4、 密码设置完成之后安装步骤会提示用户选择时区然后停留在硬盘分区界面。
如果 Kali Linux 是这个电脑上的唯一操作系统最简单的做法就是使用“Guided - use entire disk”然后选择你需要安装 Kali 的存储设备。
![Select Kali Linux Installation Type](http://www.tecmint.com/wp-content/uploads/2016/10/Select-Kali-Linux-Installation-Type.png)
*选择 Kali Linux 系统安装类型*
![](http://www.tecmint.com/wp-content/uploads/2016/10/Select-Kali-Linux-Installation-Disk.png)
*选择 Kali Linux 安装磁盘*
5、 下一步将提示用户在存储设备上进行分区。大多数情况下我们可以把整个系统安装在一个分区内。
![Install Kali Linux Files in Partition](http://www.tecmint.com/wp-content/uploads/2016/10/Install-Kali-Linux-Files-in-Partition.png)
*在分区上安装 Kali Linux 系统*
6、 最后一步是提示用户确认将所有的更改写入到主机硬盘。注意,点确认后将会**清空整个磁盘上的所有数据**。
![Confirm Disk Partition Write Changes](http://www.tecmint.com/wp-content/uploads/2016/10/Confirm-Disk-Partition-Write-Changes.png)
*确认磁盘分区更改*
7、 一旦确认分区更改,安装程序将会开始复制文件。安装完成后,你需要设置一个网络镜像源来获取软件包和系统更新。如果你希望使用 Kali 的软件库,确保开启此功能。
![Configure Kali Linux Package Manager](http://www.tecmint.com/wp-content/uploads/2016/10/Configure-Kali-Linux-Package-Manager.png)
*配置 Kali Linux 包管理器*
8、 选择网络镜像源后,系统将会询问你是否安装 Grub 引导程序。再次说明,本文假设你的电脑上仅安装 Kali Linux 这一个操作系统。
在该屏幕上选择“Yes”然后选择要写入引导程序信息的硬盘设备。
![Install GRUB Boot Loader](http://www.tecmint.com/wp-content/uploads/2016/10/Install-GRUB-Boot-Loader.png)
*安装 Grub 引导程序*
![Select Partition to Install GRUB Boot Loader](http://www.tecmint.com/wp-content/uploads/2016/10/Select-Partition-to-Install-GRUB-Boot-Loader.png)
*选择安装 Grub 引导程序的分区*
9、 当 Grub 安装完成后,系统将会提醒用户重启机器以进入新安装的 Kali Linux 系统。
![Kali Linux Installation Completed](http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Installation-Completed.png)
*Kali Linux 系统安装完成*
10、 因为本指南使用 Enlightenment 作为 Kali Linux 系统的桌面环境,所以默认情况下系统会启动进入 shell 环境。
使用 root 账号及之前安装过程中设置的密码登录系统,以便运行 Enlightenment 桌面环境。
登录成功后输入命令`startx`进入 Enlightenment 桌面环境。
```
# startx
```
![Start Enlightenment Desktop in Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Start-Enlightenment-Desktop-in-Kali-Linux.png)
*Kali Linux 下进入 Enlightenment 桌面环境*
初次进入 Enlightenment 桌面环境时,它将会询问用户进行一些首选项配置,然后再运行桌面环境。
[
![Kali Linux Enlightenment Desktop](http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Enlightenment-Desktop.png)
][2]
*Kali Linux Enlightenment 桌面*
此时,你已经成功地安装了 Kali Linux 系统,并可以使用了。后续的文章我们将探讨 Kali 系统中一些有用的工具以及如何使用这些工具来探测主机及网络方面的安全状况。
请随意发表任何评论或提出相关的问题。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/kali-linux-installation-guide/
作者:[Rob Turner][a]
译者:[rusking](https://github.com/rusking)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/robturner/
[1]:https://www.kali.org/downloads/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Enlightenment-Desktop.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Start-Enlightenment-Desktop-in-Kali-Linux.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Installation-Completed.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Select-Partition-to-Install-GRUB-Boot-Loader.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/Install-GRUB-Boot-Loader.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Configure-Kali-Linux-Package-Manager.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/10/Confirm-Disk-Partition-Write-Changes.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/Install-Kali-Linux-Files-in-Partition.png
[10]:http://www.tecmint.com/wp-content/uploads/2016/10/Select-Kali-Linux-Installation-Disk.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/10/Select-Kali-Linux-Installation-Type.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/10/Set-Root-User-Password-for-Kali-Linux.png
[13]:http://www.tecmint.com/wp-content/uploads/2016/10/Set-Domain-for-Kali-Linux.png
[14]:http://www.tecmint.com/wp-content/uploads/2016/10/Set-Hostname-for-Kali-Linux.png
[15]:http://www.tecmint.com/wp-content/uploads/2016/10/Kali-Linux-Boot-Menu.png
[16]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-USB-Device-Name-in-Linux.png
[17]:http://www.tecmint.com/best-linux-desktop-environments/
[18]:http://www.tecmint.com/best-security-centric-linux-distributions-of-2016/
何时 NGINX 将取代 Apache
=====
> NGINX 和 Apache 两者都是主流的开源 web 服务器,但是据 NGINX 的首席执行官 Gus Robertson 所言,他们有不同的使用场景。此外还有微软,其 web 服务器 IIS 在活跃网站的份额在 20 年间首次跌破 10%。
![活跃网站的 web 服务器市场份额](http://zdnet1.cbsistatic.com/hub/i/r/2016/11/07/f38d190e-046c-49e6-b451-096ee0776a04/resize/770xauto/b009f53417e9a4af207eff6271b90c43/web-server-popularity-october-2016.png)
*Apache 是最受欢迎的 web 服务器,不过 NGINX 正逐渐增长,而微软的 IIS 几十年来首次跌破 10%。*
[NGINX][4] 已经成为第二大 web 服务器。它在很久以前就已经超越了[微软 IIS][5],并且一直在老大 [Apache][6] 的身后穷追不舍。但是NGINX 的首席执行官 Gus Robertson 在接受采访时表示Apache 和 NGINX 的用户群体不一样。
“我认为 Apache 是很好的 web 服务器。NGINX 和它的使用场景不同”Robertson 说。“我们没有把 Apache 当成竞争对手。我们的用户使用 NGINX 来取代硬件负载均衡器和构建微服务,这两个都不是 Apache 的长处。”
事实上Robertson 发现许多用户同时使用了这两种开源的 web 服务。“用户会在 Apache 的上层使用 NGINX 来实现负载均衡。我们的架构差异很大,我们可以提供更好的并发 web 服务。”他还表示 NGINX 在云环境中表现更优秀。
他总结说,“我们是唯一一个仍然在持续增长的 web 服务器,其它的 web 服务器都在慢慢缩小份额。”
这不太准确。根据 [Netcraft 十月份的网络服务器调查][7]Apache 当月的活跃网站增加得最多,获得了 180 万个新站点,而 NGINX 增加了 40 万个新站点,排在第二位。
这些增长,加上微软损失的 120 万个活跃站点,导致微软的活跃网站份额下降到 9.27%,这是他们第一次跌破 10%。Apache 的市场份额提高了 0.19%,并继续领跑市场,现在坐拥 46.3% 的活跃站点。尽管如此,多年来 Apache 一直在缓慢下降,而 NGINX 现在上升到了 19%。
NGINX 的开发者正在努力打造他们的开放核心open-core商业 web 服务器 —— [NGINX Plus][8]通过不断的改进使其变得更有竞争力。NGINX Plus 最新的版本是 [NGINX Plus 11 版R11][9],该服务器易于扩展和自定义,并支持更广泛的部署。
这次最大的补充是 [动态模块][10] 的二进制兼容性。也就是说为 [开源 NGINX 软件][11] 编译的动态模块也可以加载到 NGINX Plus。
这意味着你可以利用大量的[第三方 NGINX 模块][12] 来扩展 NGINX Plus 的功能,借鉴一系列开源和商业化生产的模块。开发者可以基于支持 NGINX Plus 的内核创建自定义扩展、附加组件和新产品。
NGINX Plus R11 还增强了其它功能:
* [提升 TCP/UDP 负载均衡][1] —— 新功能包括 SSL 服务器路由、新的日志功能、附加变量以及改进的代理协议支持。这些新功能增强了调试功能,使你能够支持更广泛的企业应用。
* [更好的 IP 定位][2] —— 第三方的 GeoIP2 模块现在已经通过认证,并提供给 NGINX Plus 用户。这个新版本提供比原来的 GeoIP 模块更精准和丰富的位置信息。
* [增强的 nginScript 模块][3] —— nginScript 是基于 JavaScript 的 NGINX Plus 的下一代配置语言。新功能可以让你在流TCP/UDP模块中即时修改请求和响应数据。
最终结果NGINX 准备继续与 Apache 竞争顶级 web 服务器的宝座。至于微软的 IIS它将逐渐淡出市场。
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/when-to-use-nginx-instead-of-apache/
作者:[Steven J. Vaughan-Nichols][a]
译者:[OneNewLife](https://github.com/OneNewLife)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-tcp-udp-lb
[2]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-geoip2
[3]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-nginScript
[4]:https://www.nginx.com/
[5]:https://www.iis.net/
[6]:https://httpd.apache.org/
[7]:https://news.netcraft.com/archives/2016/10/21/october-2016-web-server-survey.html
[8]:https://www.nginx.com/products/
[9]:https://www.nginx.com/blog/nginx-plus-r11-released/
[10]:https://www.nginx.com/blog/nginx-plus-r11-released/?utm_source=nginx-plus-r11-released&utm_medium=blog#r11-dynamic-modules
[11]:https://www.nginx.com/products/download-oss/
[12]:https://www.nginx.com/resources/wiki/modules/index.html?utm_source=nginx-plus-r11-released&utm_medium=blog
在 Kali Linux 下实战 Nmap网络安全扫描器
========
在这第二篇 Kali Linux 文章中,我们将讨论一个称为 [nmap][30] 的网络工具。虽然 nmap 不是 Kali 下唯一的工具,但它是最[有用的网络映射工具][29]之一。
- [第一部分 - 为初学者准备的 Kali Linux 安装指南][4]
Nmap 是 Network Mapper 的缩写,由 Gordon Lyon 维护(更多关于 Mr. Lyon 的信息在这里: [http://insecure.org/fyodor/][28]) ,并被世界各地许多的安全专业人员使用。
这个工具在 Linux 和 Windows 下都能使用,并且是用命令行驱动的。对于那些害怕命令行的人nmap 还有一个漂亮的图形化前端,叫做 zenmap。
强烈建议个人去学习 nmap 的命令行版本,因为与图形化版本 zenmap 相比,它提供了更多的灵活性。
对服务器进行 nmap 扫描的目的是什么很好的问题。Nmap 允许管理员快速彻底地了解网络上的系统,因此,它的名字叫 Network MAPper 或者 nmap。
Nmap 能够快速找到活动的主机和与该主机相关联的服务。Nmap 的功能还可以通过结合 Nmap 脚本引擎(通常缩写为 NSE进一步被扩展。
这个脚本引擎允许管理员快速创建可用于确定其网络上是否存在新发现的漏洞的脚本。已经有许多脚本被开发出来并且包含在大多数的 nmap 安装中。
提醒一句 - 使用 nmap 的人既可能是善意的,也可能是恶意的。应该非常小心,确保你不要使用 nmap 对没有明确得到书面许可的系统进行扫描。请在使用 nmap 工具的时候注意!
#### 系统要求
1. [Kali Linux][3] (nmap 可以用于其他操作系统,并且功能也和这个指南里面讲的类似)。
2. 另一台计算机,并且装有 nmap 的计算机有权限扫描它 - 这通常很容易通过软件来实现,例如通过 [VirtualBox][2] 创建虚拟机。
1. 想要有一个好的机器来练习一下,可以了解一下 Metasploitable 2。
2. 下载 MS2 [Metasploitable2][1]。
3. 一个可以工作的网络连接,或者是使用虚拟机就可以为这两台计算机建立有效的内部网络连接。
### Kali Linux 使用 Nmap
使用 nmap 的第一步是登录 Kali Linux如果需要就启动一个图形会话本系列的第一篇文章安装了 [Kali Linux 的 Enlightenment 桌面环境][27])。
在安装过程中安装程序将提示用户输入用来登录的“root”用户和密码。 一旦登录到 Kali Linux 机器,使用命令`startx`就可以启动 Enlightenment 桌面环境 - 值得注意的是 nmap 不需要运行桌面环境。
```
# startx
```
![Start Desktop Environment in Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Start-Desktop-Environment-in-Kali-Linux.png)
*在 Kali Linux 中启动桌面环境*
一旦登录到 Enlightenment将需要打开终端窗口。通过点击桌面背景将会出现一个菜单。导航到终端可以进行如下操作应用程序 -> 系统 -> 'Xterm' 或 'UXterm' 或 '根终端'。
作者是名为 '[Terminator][25]' 的终端程序的粉丝,但是它可能不在 Kali Linux 的默认安装中。这里列出的所有终端程序都可以用来运行 nmap。
![Launch Terminal in Kali Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Launch-Terminal-in-Kali-Linux.png)
*在 Kali Linux 下启动终端*
一旦终端启动nmap 的乐趣就开始了。在这个教程中,我们将创建一个 Kali 机器和 Metasploitable 机器之间的私有网络。
这会使事情变得更容易和更安全,因为私有的网络范围将确保扫描保持在安全的机器上,防止易受攻击的 Metasploitable 机器被其他人攻击。
### 怎样在我的网络上找到活动主机
在此示例中,这两台计算机都位于专用的 192.168.56.0/24 网络上。 Kali 机器的 IP 地址为 192.168.56.101,要扫描的 Metasploitable 机器的 IP 地址为 192.168.56.102。
假如我们不知道 IP 地址信息,但是可以通过快速 nmap 扫描来帮助确定在特定网络上哪些是活动主机。这种扫描称为 “简单列表” 扫描,将 `-sL`参数传递给 nmap 命令。
```
# nmap -sL 192.168.56.0/24
```
![Nmap - Scan Network for Live Hosts](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network.png)
*Nmap 扫描网络上的活动主机*
遗憾的是,这次初始扫描没有返回任何活动主机。有时,这与某些操作系统处理[端口扫描网络流量][22]的方式有关。
### 在我的网络中找到并 ping 所有活动主机
不用担心,在这里有一些技巧可以使 nmap 尝试找到这些机器。 下一个技巧会告诉 nmap 尝试去 ping 192.168.56.0/24 网络中的所有地址。
```
# nmap -sn 192.168.56.0/24
```
![Nmap - Ping All Connected Live Network Hosts](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Ping-All-Network-Live-Hosts.png)
*Nmap Ping 所有已连接的活动网络主机*
这次 nmap 会返回一些潜在的主机来进行扫描! 在此命令中,`-sn` 禁用 nmap 的尝试对主机端口扫描的默认行为,只是让 nmap 尝试 ping 主机。
### 找到主机上的开放端口
让我们尝试让 nmap 端口扫描这些特定的主机,看看会出现什么。
```
# nmap 192.168.56.1,100-102
```
![Nmap - Network Ports Scan on Host](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-for-Ports-on-Hosts.png)
*Nmap 在主机上扫描网络端口*
哇! 这一次 nmap 挖到了一个金矿。 这个特定的主机有相当多的[开放网络端口][19]。
这些端口全都代表着在此特定机器上的某种监听服务。 我们前面说过192.168.56.102 的 IP 地址会分配给一台易受攻击的机器,这就是为什么在这个主机上会有这么多[开放端口][18]。
在大多数机器上打开这么多端口是非常不正常的,所以赶快调查这台机器是个明智的想法。管理员可以检查下网络上的物理机器,并在本地查看这些机器,但这不会很有趣,特别是当 nmap 可以为我们更快地做到时!
### 找到主机上监听端口的服务
下一个扫描是服务扫描,通常用于尝试确定机器上什么[服务监听在特定的端口][17]。
Nmap 将探测所有打开的端口,并尝试从每个端口上运行的服务中获取信息。
```
# nmap -sV 192.168.56.102
```
![Nmap - Scan Network Services Listening of Ports](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Services-Ports.png)
*Nmap 扫描网络服务监听端口*
请注意,这次 nmap 给出了一些关于特定端口上所运行服务的推测(在白框中突出显示),而且 nmap 也试图确认运行在这台机器上的[这个操作系统的信息][15]和它的主机名(也非常成功!)。
查看这个输出,应该会引起网络管理员相当多的关注。第一行声称 VSftpd 2.3.4 版本正在这台机器上运行,这是一个相当老的 VSftpd 版本。
通过查找 ExploitDB会发现这个版本早在 2011 年就被发现了一个非常严重的漏洞ExploitDB ID 17491
### 发现主机上的匿名 FTP 登录
让我们使用 nmap 更加清楚的查看这个端口,并且看看可以确认什么。
```
# nmap -sC 192.168.56.102 -p 21
```
![Nmap - Scan Particular Post on Machine](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Particular-Port-on-Host.png)
*Nmap 扫描机器上的特定端口*
使用此命令,让 nmap 在主机上的 FTP 端口(`-p 21`)上运行其默认脚本(`-sC`)。 虽然它可能是、也可能不是一个问题,但是 nmap 确实发现在这个特定的服务器[是允许匿名 FTP 登录的][13]。
### 检查主机上的漏洞
这与我们早先了解到的 VSftpd 存在旧漏洞的信息相匹配,应该引起一些关注。让我们看看 nmap 有没有脚本可以用来检查 VSftpd 漏洞。
```
# locate .nse | grep ftp
```
![Nmap - Scan VSftpd Vulnerability](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Service-Vulnerability.png)
*Nmap 扫描 VSftpd 漏洞*
注意nmap 已经有一个用来检测 VSftpd 后门问题的 NSE 脚本!让我们尝试对这个主机运行这个脚本,看看会发生什么,但首先了解一下如何使用这个脚本可能是很重要的。
```
# nmap --script-help=ftp-vsftpd-backdoor.nse
```
![Learn Nmap NSE Script Usage](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Learn-NSE-Script.png)
*了解 Nmap NSE 脚本使用*
通过这个描述,很明显,这个脚本可以用来试图查看这个特定的机器是否容易受到先前识别的 ExploitDB 问题的影响。
让我们运行这个脚本,看看会发生什么。
```
# nmap --script=ftp-vsftpd-backdoor.nse 192.168.56.102 -p 21
```
![Nmap - Scan Host for Vulnerable](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Host-for-Vulnerable.png)
*Nmap 扫描易受攻击的主机*
Nmap 的脚本返回了一些危险的消息:这台机器可能面临风险,值得之后进行更详细的调查。虽然这并不意味着该机器已经被攻陷并被用来做一些可怕/糟糕的事情,但它应该给网络/安全团队带来一些关注。
Nmap 具有极高的选择性,也可以非常安静。到目前为止的大多数扫描中nmap 的网络流量都保持得比较低调,但是以这种方式扫描个人拥有的网络可能非常耗时。
Nmap 有能力做更激进的扫描,往往一个命令就能得到之前几个命令所获得的同样信息。让我们来看看激进扫描的输出(注意 - 激进的扫描会触发[入侵检测/预防系统][9]的警报!)。
```
# nmap -A 192.168.56.102
```
![Nmap - Complete Network Scan on Host](http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Host.png)
*Nmap 在主机上完成网络扫描*
注意这一次使用一个命令nmap 返回了很多关于在这台特定机器上运行的开放端口、服务和配置的信息。 这些信息中的大部分可用于帮助确定[如何保护本机][7]以及评估网络上可能运行的软件。
这只是 nmap 能够在主机或网段上找到的许多有用信息的一小部分。强烈建议大家继续在自己拥有的网络上用 [nmap][6] 进行实验(不要通过扫描其他人的主机来练习!)。
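如果手头暂时没有 Metasploitable 这样的靶机,也可以(仅作为一个练习示例,不属于原文内容)先对本机回环地址做一次无害的快速扫描:

```shell
# -F 只扫描最常见的 100 个端口;扫描 127.0.0.1 不会影响其他主机
if command -v nmap >/dev/null 2>&1; then
    nmap -F 127.0.0.1
else
    echo "nmap 尚未安装,可用发行版的包管理器安装,例如 apt-get install nmap"
fi
```

这样练习时所有流量都留在本机,不会触碰到别人的主机。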
有一个关于 Nmap 网络扫描的官方指南,作者 Gordon Lyon可从[亚马逊](http://amzn.to/2eFNYrD)上获得。
方便的话可以留下你的评论和问题(或者使用 nmap 扫描器的技巧)。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/nmap-network-security-scanner-in-kali-linux/
作者:[Rob Turner][a]
译者:[DockerChen](https://github.com/DockerChen)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/robturner/
[1]:https://sourceforge.net/projects/metasploitable/files/Metasploitable2/
[2]:http://www.tecmint.com/install-virtualbox-on-redhat-centos-fedora/
[3]:http://www.tecmint.com/kali-linux-installation-guide
[4]:http://www.tecmint.com/kali-linux-installation-guide
[5]:http://amzn.to/2eFNYrD
[6]:http://www.tecmint.com/nmap-command-examples/
[7]:http://www.tecmint.com/security-and-hardening-centos-7-guide/
[8]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Host.png
[9]:http://www.tecmint.com/protect-apache-using-mod_security-and-mod_evasive-on-rhel-centos-fedora/
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Host-for-Vulnerable.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Learn-NSE-Script.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Service-Vulnerability.png
[13]:http://www.tecmint.com/setup-ftp-anonymous-logins-in-linux/
[14]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Particular-Port-on-Host.png
[15]:http://www.tecmint.com/commands-to-collect-system-and-hardware-information-in-linux/
[16]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network-Services-Ports.png
[17]:http://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
[18]:http://www.tecmint.com/find-open-ports-in-linux/
[19]:http://www.tecmint.com/find-open-ports-in-linux/
[20]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-for-Ports-on-Hosts.png
[21]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Ping-All-Network-Live-Hosts.png
[22]:http://www.tecmint.com/audit-network-performance-security-and-troubleshooting-in-linux/
[23]:http://www.tecmint.com/wp-content/uploads/2016/11/Nmap-Scan-Network.png
[24]:http://www.tecmint.com/wp-content/uploads/2016/11/Launch-Terminal-in-Kali-Linux.png
[25]:http://www.tecmint.com/terminator-a-linux-terminal-emulator-to-manage-multiple-terminal-windows/
[26]:http://www.tecmint.com/wp-content/uploads/2016/11/Start-Desktop-Environment-in-Kali-Linux.png
[27]:http://www.tecmint.com/kali-linux-installation-guide
[28]:http://insecure.org/fyodor/
[29]:http://www.tecmint.com/bcc-best-linux-performance-monitoring-tools/
[30]:http://www.tecmint.com/nmap-command-examples/
如何在 Linux 中恢复一个删除了的文件
==============
你曾经是否遇到这样的事?当你发现的时候,你已经通过删除键,或者在命令行中使用 `rm` 命令,错误的删除了一个不该删除的文件。
在第一种情况下,你可以到垃圾箱,[搜索那个文件][6]然后把它复原到原始位置。但是第二种情况又该怎么办呢你可能知道Linux 命令行不会把删除的文件转移到任何位置而是直接把它们移除了biu~,它们就不复存在了。
在这篇文章里,将分享一个很有用的技巧来避免此事发生。同时,也会分享一个工具,不小心删除了某些不该删除的文件时,也许用得上。
### 为 `rm` 创建别名 `rm -i`
`-i` 选项配合 `rm` 命令(也包括其他[文件处理命令比如 `cp` 或者 `mv`][5])使用时,在删除文件前会出现一个提示。
在[复制、移动或重命名文件][4]时,如果目标位置已经存在同名文件,这个选项同样适用。
这个提示会给你第二次机会来考虑是否真的要删除该文件 - 如果你在这个提示上选择确定,那么文件就被删除了。这种情况下,很抱歉,这个技巧并不能防止你的粗心大意。
要用 `rm -i` 别名替代 `rm`,这样做:
```
alias rm='rm -i'
```
运行 `alias` 命令可以确定 `rm` 现在已经被别名了:
![增加 rm 别名的命令](http://www.tecmint.com/wp-content/uploads/2016/11/Add-Alias-rm-Command.png)
*为 rm 增加别名*
然而,这只对当前用户的当前 shell 有效。要想永久生效,你必须像下面展示的这样把它保存到 `~/.bashrc` 中(一些版本的 Linux 系统中可能是 `~/.profile`)。
![在 Linux 中永久增添别名](http://www.tecmint.com/wp-content/uploads/2016/11/Add-Alias-Permanently-in-Linux.png)
*在 Linux 中永久增添别名*
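把这一步写成命令的话,大致是下面这个样子(一个最简单的示意,假设别名文件就是 `~/.bashrc`

```shell
# 把 rm 的别名永久写入 ~/.bashrc某些发行版中是 ~/.profile
touch ~/.bashrc
echo "alias rm='rm -i'" >> ~/.bashrc
# 确认别名已经写入
grep "alias rm" ~/.bashrc
```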
为了让 `~/.bashrc`(或 `~/.profile`)中所做的改变立即生效,需要在当前 shell 中 source 该文件:
```
. ~/.bashrc
```
![在 Linux 中激活别名](http://www.tecmint.com/wp-content/uploads/2016/11/Active-Alias-in-Linux.png)
*在 Linux 中激活别名*
### 取证工具Foremost
但愿你对于你的文件足够小心,当你要从外部磁盘或 USB 设备中恢复丢失的文件时,你只需使用这个工具即可。
然而,当你意识到你意外地删除了系统中的一个文件并感到恐慌时,不用担心。让我们来看一看 `foremost`,一个用来处理这种状况的取证工具。
要在 CentOS/RHEL 7 中安装 Foremost需要首先启用 Repoforge
```
# rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
# yum install foremost
```
然而在 Debian 及其衍生系统中,需这样做:
```
# aptitude install foremost
```
安装完成后,我们做一个简单的测试吧。首先删除 `/boot/images` 目录下一个名为 `nosdos.jpg` 的图像文件:
```
# cd images
# rm nosdos.jpg
```
要恢复这个文件,如下所示使用 `foremost`(要先确认所在分区 - 本例中, `/boot` 位于 `/dev/sda1` 分区中)。
```
# foremost -t jpg -i /dev/sda1 -o /home/gacanepa/rescued
```
其中,`/home/gacanepa/rescued` 是另外一个磁盘中的目录。请记住,把文件恢复到被删除文件所在的磁盘中不是一个明智的做法。
如果在恢复过程中,占用了被删除文件之前所在的磁盘分区,就可能无法恢复文件。另外,进行文件恢复操作前不要做任何其他操作。
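顺带一提,上面命令里的分区设备名(例如 `/dev/sda1`)可以用 `df` 确认,这是一个通用的小技巧,和 `foremost` 本身无关:

```shell
# 确认某个目录位于哪个分区(以 /boot 为例;若该目录不存在则退回到 /
target=/boot
[ -d "$target" ] || target=/
# df 输出第二行的第一列即为该分区的设备名
df -P "$target" | awk 'NR==2 { print $1 }'
```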
`foremost` 执行完成以后,恢复出来的文件(如果可以恢复)将出现在 `/home/gacanepa/rescued/jpg` 目录中。
### 总结
在这篇文章中,我们阐述了如何避免意外删除一个不该删除的文件,以及万一这类事情发生,如何恢复文件。还要警告一下, `foremost` 可能运行很长时间,时间长短取决于分区的大小。
如果您有什么问题或想法,和往常一样,不要犹豫,告诉我们。可以给我们留言。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/recover-deleted-file-in-linux/
作者:[Gabriel Cánepa][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/wp-content/uploads/2016/11/Active-Alias-in-Linux.png
[2]:http://www.tecmint.com/wp-content/uploads/2016/11/Add-Alias-Permanently-in-Linux.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/11/Add-Alias-rm-Command.png
[4]:http://www.tecmint.com/rename-multiple-files-in-linux/
[5]:http://www.tecmint.com/progress-monitor-check-progress-of-linux-commands/
[6]:http://www.tecmint.com/linux-find-command-to-search-multiple-filenames-extensions/
Xfce 桌面新增‘免打扰’模式以及单一应用通知设置的新特性
============================================================
Xfce 的开发者们正忙于把 Xfce 的应用和部件[转移][3]到 GTK3 上,在这个过程中,他们也增加了一些新的特性。
**“免打扰”**,一个常被要求增加的特性,[最近][4]已登陆到了 xfce-notifyd 0.3.4 (Xfce 通知进程)上。
更进一步,**最新的 xfce-notifyd 还包括了一个可以针对单个应用开启或关闭通知的选项**。
当一个应用发出一个通知以后,这个应用就被加入到到了通知设置的列表里。从通知列表里,你可以控制哪些应用能够显示通知。
“免打扰”模式和应用特定的通知设置均可在“设置” > “通知”中找到:
![](https://1.bp.blogspot.com/-fvSesp1ukaQ/WCR8JQVgfiI/AAAAAAAAYl8/IJ1CshVQizs9aG2ClfraVaNjKP3OyxvAgCLcB/s400/xfce-do-not-disturb.png)
现在为止,还没有办法查看由于启用“免打扰”模式而错过的通知。然而,可以预期将来会发布**通知记录/持久化特性**。
最后xfce-notifyd 0.3.4 的**另一个特性**是**可以选择在主监视器上显示通知**(在此之前,通知总是显示在当前监视器上)。这个特性目前在图形设置界面里还没有,必须使用 `Xfconf`(设置编辑器)在 xfce4-notifyd 通道下增添一个叫做 `/primary-monitor`(不带引号)的布尔属性,并设置为 `True` 来启用该特性:
![](https://2.bp.blogspot.com/-M8xZpEHMrq8/WCR9EufvsnI/AAAAAAAAYmA/nLI5JQUtmE0J9TgvNM9ZKGHBdwwBhRH3QCLcB/s400/xfce-xfconf.png)
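原文没有给出具体命令;按照 Xfconf 的常规用法,大致可以用 `xfconf-query` 来新建这个属性(这是一个示意,通道名 `xfce4-notifyd` 与属性名取自上文,命令需要在 Xfce 会话中执行):

```shell
# -c 指定通道,-p 指定属性,-n 新建属性,-t bool 指定布尔类型,-s true 设置其值
if command -v xfconf-query >/dev/null 2>&1; then
    xfconf-query -c xfce4-notifyd -p /primary-monitor -n -t bool -s true
else
    echo "xfconf-query 不可用(需要在 Xfce 桌面环境下运行)"
fi
```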
**xfce4-notifyd 0.3.4 目前在 PPA 上还没有,但是不久它可能被增添到 [Xfce GTK3 PPA][1]中。**
**如果你想直接从源代码编译,从[这儿][2]下载。**
--------------------------------------------------------------------------------
via: http://www.webupd8.org/2016/11/xfce-gets-do-not-disturb-mode-and-per.html
作者:[Andrew][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.webupd8.org/p/about.html
[1]:https://launchpad.net/~xubuntu-dev/+archive/ubuntu/xfce4-gtk3
[2]:http://archive.xfce.org/src/apps/xfce4-notifyd/0.3/
[3]:https://wiki.xfce.org/releng/4.14/roadmap
[4]:http://simon.shimmerproject.org/2016/11/09/xfce4-notifyd-0-3-4-released-do-not-disturb-and-per-application-settings/
[5]:https://1.bp.blogspot.com/-fvSesp1ukaQ/WCR8JQVgfiI/AAAAAAAAYl8/IJ1CshVQizs9aG2ClfraVaNjKP3OyxvAgCLcB/s1600/xfce-do-not-disturb.png
[6]:https://2.bp.blogspot.com/-M8xZpEHMrq8/WCR9EufvsnI/AAAAAAAAYmA/nLI5JQUtmE0J9TgvNM9ZKGHBdwwBhRH3QCLcB/s1600/xfce-xfconf.png
[7]:http://www.webupd8.org/2016/11/xfce-gets-do-not-disturb-mode-and-per.html
通过安装扩展让 KDE Plasma 5 桌面看起来感觉就像 Windows 10 桌面
============================================================
![kde-plasma-to-windows-10](https://iwf1.com/wordpress/wp-content/uploads/2016/11/KDE-Plasma-to-Windows-10.jpg)
通过一些步骤,我将告诉你如何把 KDE Plasma 5 桌面变成 Windows 10 桌面。
除了菜单KDE Plasma 桌面的许多地方已经和 Win 10 桌面非常像了。因此,只需要一点点改动就可以使二者看起来几乎一样。
### 开始菜单
让 KDE Plasma 桌面看起来像 Win 10 桌面的首要以及可能最有标志性的环节是实现 Win 10 的 ‘开始’ 菜单。
通过安装 [Zren's Tiled Menu][1],这很容易实现。
#### 安装
1、 在 KDE Plasma 桌面上单击右键 -> 解锁窗口部件Unlock Widgets
2、 在 KDE Plasma 桌面上单击右键 -> 增添窗口部件Add Widgets
3、 获取新窗口部件 -> 下载新的 Plasma 窗口部件Download New Plasma Widgets
4、 搜索 “Tiled Menu” -> 安装Install
#### 激活
1、 在你当前的菜单按钮上单击右键 -> 替代……Alternatives…
2、 选择 “Tiled Menu” -> 点击切换Switch
![KDE Tiled Menu extension.](http://iwf1.com/wordpress/wp-content/uploads/2016/11/KDE-Tiled-Menu-extension-730x619.jpg)
*KDE Tiled 菜单扩展*
### 主题
弄好菜单以后,下一个你可能需要的就是主题。幸运的是, [K10ne][3] 提供了一个 WIn 10 主题体验。
#### 安装:
1、 从 Plasma 桌面菜单打开 “系统设置System Settings” -> 工作空间主题Workspace Theme
2、 从侧边栏选择 “桌面主题Desktop Theme” -> 获取新主题Get new Theme
3、 搜索 “K10ne” -> 安装Install
#### 激活
1、 从 Plasma 桌面菜单选择 “系统设置System Settings” -> 工作空间主题Workspace Theme
2、 从侧边栏选择 “桌面主题Desktop Theme” -> “K10ne”
3、 点击应用Apply
### 任务栏
最后,为了有一个更加完整的体验,你可能也想拥有一个更具 Win 10 风格的任务栏。
这次你需要的安装包叫做 “Icons-only Task Manager”它在大多数 Linux 发行版中通常会默认安装。如果没有安装,需要通过你的系统的合适渠道来获取它。
#### 激活
1、 在 Plasma 桌面上单击右键 -> 解锁窗口部件Unlock Widgets
2、 在 Plasma 桌面上单击右键 -> 增添窗口部件Add Widgets
3、 把 “Icons-only Task Manager” 拖放到你的桌面面板的合适位置。
--------------------------------------------------------------------------------
via: https://iwf1.com/make-kde-plasma-5-desktop-look-feel-like-windows-10-using-these-extensions/
作者:[Liron][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/tag/linux
[1]:https://github.com/Zren/plasma-applets/tree/master/tiledmenu
[2]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/KDE-Tiled-Menu-extension.jpg
[3]:https://store.kde.org/p/1153465/
在 Linux 中查看你的时区
============================================================
在这篇短文中,我们将向你简单介绍几种 Linux 下查看系统时区的简单方法。在 Linux 机器中,尤其是生产服务器上的时间管理技能,是在系统管理中一个极其重要的方面。
Linux 包含多种可用的时间管理工具,比如 `date` 和 `timedatectl` 命令,你可以用它们来获取当前系统时区,也可以[将系统时间与 NTP 服务器同步][1],来自动地、更精确地进行时间管理。
好,我们一起来看几种查看我们的 Linux 系统时区的不同方法。
### 1、我们从使用传统的 `date` 命令开始
使用下面的命令,来看一看我们的当前时区:
```
$ date
```
或者,你也可以使用下面的命令。其中 `%Z` 格式可以输出字符形式的时区,而 `%z` 输出数字形式的时区:
```
$ date +"%Z %z"
```
![Find Linux Timezone](http://www.tecmint.com/wp-content/uploads/2016/10/Find-Linux-Timezone.png)
*查看 Linux 时区*
注意:`date` 的手册页中包含很多输出格式,你可以利用它们,来替换你的 `date` 命令的输出内容:
```
$ man date
```
### 2、接下来你同样可以用 `timedatectl` 命令
当你不带任何参数运行它时,这条命令可以像下图一样,输出系统时间概览,其中包含当前时区:
```
$ timedatectl
```
然后,你可以在命令中提供一条管道,然后用 [grep 命令][3] 来像下面一样,只过滤出时区信息:
```
$ timedatectl | grep "Time zone"
```
![Find Current Linux Timezone](http://www.tecmint.com/wp-content/uploads/2016/10/Find-Current-Linux-Timezone.png)
*查看当前 Linux 时区*
同样,我们可以学习如何使用 timedatectl 来[设置 Linux 时区][5]。
### 3、进一步显示文件 `/etc/timezone` 的内容
使用 [cat 工具][6]显示文件 `/etc/timezone` 的内容,来查看你的时区:
```
$ cat /etc/timezone
```
![Check Timezone of Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Timezone-of-Linux.png)
*在 Linux 中查看时区*
对于 RHEL/CentOS/Fedora 用户,这里还有一条可以起到同样效果的命令:
```
$ grep ZONE /etc/sysconfig/clock
```
就这些了!别忘了在下面的反馈栏中分享你对于这篇文章中的看法。重要的是:你应该通过这篇 Linux 时区管理指南来学习更多系统时间管理的知识,因为它含有很多易于操作的实例。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/check-linux-timezone
作者:[Aaron Kili][a]
译者:[StdioA](https://github.com/StdioA)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-ntp-server-in-centos/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-Linux-Timezone.png
[3]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-Current-Linux-Timezone.png
[5]:http://www.tecmint.com/set-time-timezone-and-synchronize-time-using-timedatectl-command/
[6]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Timezone-of-Linux.png
火狐是否在未经授权的情况下搜集您的数据?
===============
![Mozilla Firefox collects your data](https://iwf1.com/wordpress/wp-content/uploads/2016/11/Mozilla-Firefox-collects-your-data-730x429.jpg)
打包在 Firefox Web 浏览器里面的地理位置服务即使浏览器关闭后也会在后台运行。
我们还没有从关于浏览器插件丑闻的消息中平复下来。插件原本目的是保卫隐私,[但现在却把信息卖给了第三方公司](https://iwf1.com/shock-this-popular-browser-add-on-sells-your-browsing-history/)。然而更令人愤怒的是其规模完全超出我们的预计。
**MLS**,即 Mozilla 位置服务,它可以让设备基于 WiFi 接入点、无线电基站和蓝牙信标等基础设施来确定自己的位置。
MLS 非常像是 Google 位置服务,后者是需要在 Android 设备上打开 GPS 并选择“高精度”模式时使用的服务。
那些曾经经历过 GPS 问题的人可能会知道这个模式是多么精确。
MLS 服务除了能够准确地确定您的位置以外,这个服务可以通过使用 WiFi 网络收集两种个人身份信息,包括**愿意贡献到数据库的用户**和**被扫描的 WiFi 设备的所有者**。
话虽这么说Mozilla 也提到说你可以选择退出服务,但你真的可以退出吗?
### 当后台变成你隐私的展台
作为一个众包项目,为了维护和发展 MLSMozilla 事实上非常依赖于用户的贡献,因此他们开发了多种方法以便用户参与进来。
其中之一,就是被终端用户使用的一款名为 Stumbler 的 Android 应用程序:
> [Mozilla Stumbler][4] 是一个开源的无线网络扫描器,它为我们的众包位置数据库收集 GPS、蜂窝网络和无线网络元数据。
这样一来Stumbler 不仅仅是一个独立的应用程序,同时也是 Firefox 在 Android 设备上提供的一种为 MLS“贡献数据和增强功能”的服务。
该服务的问题在于它在后台运行,而大多数用户都不知道它,**即使您可能已经禁用它**。
根据 Mozilla 提供的[信息][4],要启用该服务,您需要打开“设置”菜单(在 Firefox for Android 版本中) -> 打开“隐私”部分 -> 滚动到底部以查看“数据选择”,最后,勾选 Mozilla 位置服务框。
![Mozilla Location Services is unchecked yet Stumbler is on](http://iwf1.com/wordpress/wp-content/uploads/2016/11/Mozilla-Location-Services-is-unchecked-yet-Stumler-is-on-730x602.jpg)
*Mozilla 定位服务尚未勾选,但 Stumbler 已开启*
实际上,你会发现 Stumbler 服务**运行在你的设备后台**,这意味着它几乎不可见,因为它没有接口,即使 MLS 框未被选中,即使“数据选择”复选框都未选中,甚至 Firefox 浏览器本身已被关闭。
显然,停止 Stumbler 的唯一方法是直接结束它。然而,这样做之前你首先需要一种方法来检测它的运行并结束它,而这最终也只是一个到设备重启为止的临时解决方案。
### 如何保证安全
为了避免 MLS 收集您的数据,仍然有一些方法值得您尝试一下,希望这些方法不会像 Firefox for Android 中的 MLS 复选框一样被 Mozilla 忽视。
您可以将您的无线网络 SSID 隐藏,或者在 SSID 结尾添加 “\_nomap”例如将您的 SSID 从 “myWirelessNetwork” 更名为 “myWirelessNetwork\_nomap”。这是在向 Mozilla 的应用程序暗示,您不希望参与他们的数据收集活动。
对于 Android 上的 Stumbler 服务,由于是一个服务(而不是进程),您可能无法在运行的进程/最近的应用程序列表中看到它。 因此,使用专用应用程序关闭它或启用“开发人员选项”,并转到“运行服务” -> 点击 Firefox最后停止“stumbler”。
--------------------------------------------------------------------------------
via: https://iwf1.com/is-mozilla-firefox-collecting-your-data-without-your-consent/
作者:[Liron][a]
译者:[flankershen](https://github.com/flankershen)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/is-mozilla-firefox-collecting-your-data-without-your-consent/
[1]:https://iwf1.com/shock-this-popular-browser-add-on-sells-your-browsing-history/
[2]:https://en.wikipedia.org/wiki/Crowdsourcing
[3]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/Mozilla-Location-Services-is-unchecked-yet-Stumler-is-on.jpg
[4]: https://location.services.mozilla.com/apps
现在 Linux 运行在 99.6% 的 TOP500 超级计算机上
============================================================
![Linux rules the world of supercomputers](https://itsfoss.com/wp-content/uploads/2016/11/Linux-King-Supercomputer-world-min.jpg)
_简介虽然 Linux 在桌面操作系统只有 2% 的市场占有率但是对于超级计算机来说Linux 用 99% 的市场占有率轻松地获取了统治地位。_
**Linux 运行在超过 99% 的 TOP500 超级计算机上**这并不会让人感到惊讶。2015 年我们报道过[“Linux 正运行在超过 97% 的 TOP500 超级计算机上”][13],今年 Linux 表现得更好。
这些信息是由独立组织 [Top500][4] 收集的,他们每年两次公布已知的最快的 500 台超级计算机的细节。你可以[打开这个网站,用以下条件筛选所需要的信息][15]:国家、使用的操作系统类型、所有者等。别担心,我将会从这份表格中筛选整理出今年几个有趣的事实。
### Linux 运行在 500 台超级计算机中的 498 台
如果要将上面的百分比细化到具体数量的话500 台超级计算机中的 498 台运行着 Linux剩余的两台运行着基于 Unix 的操作系统。直到去年,还有一台超级计算机运行着 Windows今年的列表中没有出现 Windows 的身影。或许,这些超级计算机没有一台能运行得动 Windows 10一语双关
总结一下今年表单上 TOP500 超级计算机所运行操作系统情况:
* Linux: 498
* Unix: 2
* Windows: 0
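顺便验证一下 99.6% 这个数字498 除以 500

```shell
# 498 / 500 = 99.6%
awk 'BEGIN { printf "%.1f%%\n", 498 / 500 * 100 }'
# 输出99.6%
```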
还有一份总结,它清晰展现了每年 Linux 在 TOP500 超级计算机的份额的变化情况。
* 2012 年: 94%
* [2013][6] 年: 95%
* [2014][7] 年: 97%
* [2015][8] 年: 97.2%
* 2016 年: 99.6%
* 2017 年: ???
另外,最快的前 380 台超级计算机运行着 Linux包括中国的那台最快的超级计算机。排名第 386 和第 387 的超级计算机运行着 Unix它们同样来自中国。←_←
### 其他关于最快的超级计算机的有趣数据
![List of top 10 fastest supercomputers in the world in 2016](https://itsfoss.com/wp-content/uploads/2016/11/fastest-supercomputers.png)
除去 Linux我在表单中还找到了几个有趣的数据想跟你分享。
* 全球最快的超级计算机是[神威太湖之光][9]. 它位于中国的[国家超级计算无锡中心][10]。它有着 93PFlops 的速度。
* 全球第二快的超级计算机是中国的[天河二号][11],第三的位置则属于美国的 Titan。
* 在速度前十的超级计算机中,美国占据了 5 台,日本和中国各有 4 台,瑞士有 1 台。
* 美国和中国都各有 171 台超级计算机进入了 TOP500 的榜单中。
* 日本有 27 台,法国有 20 台,印度、俄罗斯和沙特阿拉伯各有 5 台进入了榜单中。
有趣的事实,不是吗?你能点击[这里][18]筛选出属于自己的榜单来获得更多信息。现在我很开心来宣传 Linux 运行在 99% 的 TOP500 超级计算机上,期待下一年能有 100% 的更好成绩。
当你在阅读这篇文章时,请在社交平台上分享这篇文章,这是 Linux 的一个成就,我们引以为豪~ :P
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-99-percent-top-500-supercomputers
作者:[Abhishek Prakash][a]
译者:[ypingcn](https://github.com/ypingcn)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]: https://twitter.com/share?original_referer=https%3A%2F%2Fitsfoss.com%2F&amp;source=tweetbutton&amp;text=Linux+Now+Runs+On+99.6%25+Of+Top+500+Supercomputers&amp;url=https%3A%2F%2Fitsfoss.com%2Flinux-99-percent-top-500-supercomputers%2F&amp;via=%40itsfoss
[2]: https://www.linkedin.com/cws/share?url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/
[3]: http://pinterest.com/pin/create/button/?url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/&amp;description=Linux+Now+Runs+On+99.6%25+Of+Top+500+Supercomputers&amp;media=https://itsfoss.com/wp-content/uploads/2016/11/Linux-King-Supercomputer-world-min.jpg
[4]: https://twitter.com/share?text=%23Linux+now+runs+on+more+than+99%25+of+top+500+%23supercomputers+in+the+world&amp;via=itsfoss&amp;related=itsfoss&amp;url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/
[5]: https://twitter.com/share?text=%23Linux+now+runs+on+more+than+99%25+of+top+500+%23supercomputers+in+the+world&amp;via=itsfoss&amp;related=itsfoss&amp;url=https://itsfoss.com/linux-99-percent-top-500-supercomputers/
[6]: https://itsfoss.com/95-percent-worlds-top-500-supercomputers-run-linux/
[7]: https://itsfoss.com/97-percent-worlds-top-500-supercomputers-run-linux/
[8]: https://itsfoss.com/linux-runs-97-percent-worlds-top-500-supercomputers/
[9]: https://en.wikipedia.org/wiki/Sunway_TaihuLight
[10]: https://www.top500.org/site/50623
[11]: https://en.wikipedia.org/wiki/Tianhe-2
[12]: https://itsfoss.com/wp-content/uploads/2016/11/Linux-King-Supercomputer-world-min.jpg
[13]: https://itsfoss.com/linux-runs-97-percent-worlds-top-500-supercomputers/
[14]: https://www.top500.org/
[15]: https://www.top500.org/statistics/sublist/
[16]: https://itsfoss.com/wp-content/uploads/2016/11/fastest-supercomputers.png
[17]: https://itsfoss.com/digikam-5-0-released-install-it-in-ubuntu-linux/
[18]: https://www.top500.org/statistics/sublist/
如何使用 Apache 控制命令检查它的模块是否已经启用或加载
============================================================
本篇中,我们会简要地讨论 Apache 服务器前端以及如何列出或查看已经启用的 Apache 模块。
Apache 基于模块化的理念而构建,这样就可以让 web 管理员添加不同的模块来扩展主要的功能及[增强性能][5]。
常见的 Apache 模块有:
1. mod_ssl  提供了 [HTTPS 功能][1]。
2. mod_rewrite  可以用正则表达式匹配 url 样式,并且使用 [.htaccess 技巧][2]来进行透明转发,或者提供 HTTP 状态码回应。
3. mod_security  用于[保护 Apache 免于暴力破解或者 DDoS 攻击][3]。
4. mod_status - 用于[监测 Apache 的负载及页面统计][4]。
在 Linux 中,`apachectl` 或者 `apache2ctl` 命令用于控制 Apache 服务器,是 Apache 的前端。
你可以用下面的命令显示 `apache2ctl` 的使用信息:
```
$ apache2ctl help
或者
$ apachectl help
```
```
Usage: /usr/sbin/httpd [-D name] [-d directory] [-f file]
[-C "directive"] [-c "directive"]
[-k start|restart|graceful|graceful-stop|stop]
[-v] [-V] [-h] [-l] [-L] [-t] [-S]
Options:
-D name : define a name for use in directives
-d directory : specify an alternate initial ServerRoot
-f file : specify an alternate ServerConfigFile
-C "directive" : process directive before reading config files
-c "directive" : process directive after reading config files
-e level : show startup errors of level (see LogLevel)
-E file : log startup errors to file
-v : show version number
-V : show compile settings
-h : list available command line options (this page)
-l : list compiled in modules
-L : list available configuration directives
-t -D DUMP_VHOSTS : show parsed settings (currently only vhost settings)
-S : a synonym for -t -D DUMP_VHOSTS
-t -D DUMP_MODULES : show all loaded modules
-M : a synonym for -t -D DUMP_MODULES
-t : run syntax check for config files
```
`apache2ctl` 可以工作在两种模式下SysV init 模式和直通模式。在 SysV init 模式下,`apache2ctl` 用如下的简单的单命令形式:
```
$ apachectl command
或者
$ apache2ctl command
```
比如要启动并检查它的状态,运行这两个命令。如果你是普通用户,使用 [sudo 命令][6]来以 root 用户权限来运行:
```
$ sudo apache2ctl start
$ sudo apache2ctl status
```
```
tecmint@TecMint ~ $ sudo apache2ctl start
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1\. Set the 'ServerName' directive globally to suppress this message
httpd (pid 1456) already running
tecmint@TecMint ~ $ sudo apache2ctl status
Apache Server Status for localhost (via 127.0.0.1)
Server Version: Apache/2.4.18 (Ubuntu)
Server MPM: prefork
Server Built: 2016-07-14T12:32:26
-------------------------------------------------------------------------------
Current Time: Tuesday, 15-Nov-2016 11:47:28 IST
Restart Time: Tuesday, 15-Nov-2016 10:21:46 IST
Parent Server Config. Generation: 2
Parent Server MPM Generation: 1
Server uptime: 1 hour 25 minutes 41 seconds
Server load: 0.97 0.94 0.77
Total accesses: 2 - Total Traffic: 3 kB
CPU Usage: u0 s0 cu0 cs0
.000389 requests/sec - 0 B/second - 1536 B/request
1 requests currently being processed, 4 idle workers
__W__...........................................................
................................................................
......................
Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process
```
当在直通模式下,`apache2ctl` 可以用下面的语法带上所有 Apache 的参数:
```
$ apachectl [apache-argument]
$ apache2ctl [apache-argument]
```
可以用下面的命令列出所有的 Apache 参数:
```
$ apache2 -h [在基于 Debian 的系统中]
$ httpd -h [在 RHEL 的系统中]
```
### 检查启用的 Apache 模块
因此,要查看你的 Apache 服务器启用了哪些模块,在你的发行版中运行适当的命令即可,`-t -D DUMP_MODULES` 是一个用于显示所有已启用模块的 Apache 参数:
```
--------------- 在基于 Debian 的系统中 ---------------
$ apache2ctl -t -D DUMP_MODULES
或者
$ apache2ctl -M
```
```
--------------- 在 RHEL 的系统中 ---------------
$ apachectl -t -D DUMP_MODULES
或者
$ httpd -M
```
```
[root@tecmint httpd]# apachectl -M
Loaded Modules:
core_module (static)
mpm_prefork_module (static)
http_module (static)
so_module (static)
auth_basic_module (shared)
auth_digest_module (shared)
authn_file_module (shared)
authn_alias_module (shared)
authn_anon_module (shared)
authn_dbm_module (shared)
authn_default_module (shared)
authz_host_module (shared)
authz_user_module (shared)
authz_owner_module (shared)
authz_groupfile_module (shared)
authz_dbm_module (shared)
authz_default_module (shared)
ldap_module (shared)
authnz_ldap_module (shared)
include_module (shared)
....
```
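如果启用的模块很多,还可以把 `-M` 的输出交给 `awk` 做个简单的汇总。下面是一个假设性的小脚本:为了便于演示,脚本里硬编码了一段示例输出,实际使用时可以换成 `apachectl -M 2>/dev/null` 的真实输出:

```shell
# 示例:统计静态编译与动态加载(共享)模块的数量。
# 这里的 modules 是硬编码的示例输出;实际可用 modules=$(apachectl -M 2>/dev/null)
modules='Loaded Modules:
 core_module (static)
 mpm_prefork_module (static)
 http_module (static)
 so_module (static)
 auth_basic_module (shared)
 authn_file_module (shared)'

counts=$(printf '%s\n' "$modules" | awk '
  /\(static\)/ { s++ }    # 静态编译进 httpd 二进制的模块
  /\(shared\)/ { d++ }    # 通过 LoadModule 指令按需加载的共享模块
  END { printf "static=%d shared=%d total=%d", s, d, s+d }')
echo "$counts"
```

静态模块编译进了 httpd 二进制文件本身,而动态(共享)模块则由配置文件中的 `LoadModule` 指令按需加载。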
就是这样!在这篇简单的教程中,我们解释了如何使用 Apache 前端工具来列出已启用的 Apache 模块。记住,你可以在下面的反馈表中给我们留下你的问题或者留言。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/check-apache-modules-enabled
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-lets-encrypt-ssl-certificate-to-secure-apache-on-rhel-centos/
[2]:http://www.tecmint.com/apache-htaccess-tricks/
[3]:http://www.tecmint.com/protect-apache-using-mod_security-and-mod_evasive-on-rhel-centos-fedora/
[4]:http://www.tecmint.com/monitor-apache-web-server-load-and-page-statistics/
[5]:http://www.tecmint.com/apache-performance-tuning/
[6]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/


@ -0,0 +1,139 @@
安卓编年史8Android 1.5 Cupcake——虚拟键盘打开设备设计的大门
================================================================================
![安卓 1.5 的屏幕软键盘输入时的输入建议栏,大写状态键盘,数字与符号界面,更多符号弹窗。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/kb5.png)
*安卓 1.5 的虚拟键盘输入时的输入建议栏、大写状态键盘、数字与符号界面、更多符号弹窗。
[Ron Amadeo 供图]*
### Android 1.5 Cupcake——虚拟键盘打开设备设计的大门 ###
2009 年 4 月,也就是安卓 1.1 发布将近三个月后,安卓 1.5 发布了。这是第一个拥有公开的、市场化代号的安卓版本纸杯蛋糕Cupcake。从这个版本开始每个版本的安卓将会拥有一个按字母表排序的、以小吃为主题的代号。
纸杯蛋糕新增功能中最重要的明显当属虚拟键盘。这是 OEM 厂商第一次有可能抛开带有数不清的按键的实体键盘以及复杂的滑动结构,创造出平板风格的安卓设备。
安卓的按键标识可以在大小写之间切换,这取决于大写锁定是否开启。尽管默认情况下它是关闭的,但显示在键盘顶部的建议栏中有个选项可以打开它。在按键弹框中带有省略号的按键(就像上图中的“u”键那样),长按即可输入弹框中的[发音符号][1]。键盘可以切换到数字和符号,长按句号键可以打开更多符号。
![1.5和1.1中的应用程序界面和通知面板的对比。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/abweave.png)
*1.5 和 1.1 中的应用程序界面和通知面板的对比。
[Ron Amadeo 供图]*
给新的“摄像机”功能增加了新图标Google Talk 从 IM 中分离出来成为了一个独立的应用。亚马逊 MP3 和浏览器的图标同样经过了重新设计。亚马逊 MP3 图标更改的主要原因是亚马逊即将计划推出其它的安卓应用而“A”图标所指范围太泛了。浏览器图标无疑是安卓 1.1 中最糟糕的设计,所以它被重新设计了,并且不再像是一个桌面操作系统的对话框。应用抽屉的最后一个改变是“图片”,它被重新命名为了“相册”。
通知面板同样经过了重新设计。面板背景加上了布纹纹理,通知的渐变效果也被平滑化了。安卓 1.5 在系统核心部分有许多设计上的微小改变,这些改变影响到所有的应用。在“清除通知”按钮上,你可以看到全新的系统按钮风格,与旧版本的按钮相比有了渐变、更细的边框线以及更少的阴影。
![安卓1.5和1.1中的“添加到主屏幕”对话框。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/widget.png)
*安卓 1.5 和 1.1 中的“添加到主屏幕”对话框。
[Ron Amadeo 供图]*
第三方小部件是纸杯蛋糕的另一个头等特性,它们现在仍然是安卓的本质特征之一。无论是用来控制应用还是显示应用的信息,开发者们都可以为他们的应用捆绑一个主屏幕小部件。谷歌同样展示了一些它们自己的新的小部件,分别来自日历和音乐这两个应用。
![左:日历小部件,音乐小部件以及一排实时文件夹的截图。中:文件夹列表。右:“带电话号码的联系人”实时文件夹的打开视图。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/folders-and-widgets-and-stuff.png)
*左:日历小部件,音乐小部件以及一排实时文件夹的截图。中:文件夹列表。右:“带电话号码的联系人”实时文件夹的打开视图。
[Ron Amadeo 供图]*
在上方左边的截图里你可以看到新的日历和音乐图标。日历小部件只能显示当天的一个事件,点击它会打开日历。你不能够选择日历所显示的内容,小部件也不能够重新设置大小——它就是上面看起来的那个样子。音乐小部件是蓝色的——尽管音乐应用里没有一丁点的蓝色——它展示了歌曲名和歌手名,此外还有播放和下一曲按钮。
同样在左侧截图里,底部一排的头三个文件夹是一个叫做“[实时文件夹](http://android-developers.blogspot.com/2009/04/live-folders.html)”的新特性。它们可以在“添加到主屏幕”菜单中的新顶层选项“文件夹”中被找到,就像你在中间那张图看到的那样。实时文件夹可以展示一个应用的内容而不用打开这个应用。纸杯蛋糕带来的都是和联系人相关的实时文件夹,能够显示所有联系人,带有电话号码的联系人和加星标的联系人。
实时文件夹在主屏的弹窗使用了一个简单的列表视图,而不是图标。联系人只是实时文件夹的一个初级应用,它是给开发者使用的一个完整 API。谷歌用 Google Books 应用做了个图书文件夹的演示,它可以显示 RSS 订阅或是一个网站的热门故事。实时文件夹是安卓没有成功实现的想法之一这个特性最终在蜂巢3.x中被取消。
![摄像机和相机界面,屏幕上有触摸快门。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/device-2013-12-26-11016071.png)
*摄像机和相机界面,屏幕上有触摸快门。
[Ron Amadeo 供图]*
如果你不能认出新的“摄像机”图标,这不奇怪,因为视频录制是在安卓 1.5 中才被添加进来的。相机和摄像机两个图标其实是同一个应用,你可以通过菜单中的“切换至相机”和“切换至摄像机”选项在它们之间切换。T-Mobile G1 上录制的视频质量并不高。一个“高”质量的测试视频输出为 .3GP 格式的视频文件,其分辨率仅为 352 x 288帧率只有 4FPS。
除了新的视频特性,相机应用中还可以看到一些急需的 UI 调整。上方左侧的快照展示了最近拍摄的那张照片,点击它会跳转到相册中的相机胶卷。各个界面上方右侧的圆形图标是触摸快门,这意味着,从 1.5 开始,安卓设备不再需要一个实体相机按钮。
这个界面相比于之后版本的相机应用实际上更加接近于安卓 4.2 的设计。尽管后续的设计会向相机加入愚蠢的皮革纹理和更多的控制设置,安卓最终还是回到了基本的设计,安卓 4.2 的重新设计和这里有很多共同之处。安卓 1.5 中的原始布局演变成了安卓 4.2 中的最小化的、全屏的取景器。
![Google Talk运行在Google Talk中vs运行在IM应用中。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/gtalk-im.png)
*Google Talk 运行在 Google Talk 中 vs 运行在 IM 应用中。
[Ron Amadeo 供图]*
安卓 1.0 的 IM 即时通讯应用在功能上支持 Google Talk但在安卓 1.5 中Google Talk 从中分离出来成为独立应用IM 应用中对其的支持也随之被移除。Google Talk上图左侧明显是基于 IM 应用(上图右侧)的,但随着独立应用在 1.5 中的发布,在 IM 应用上的相关工作就被放弃了。
新的 Google Talk 应用拥有重新设计过的状态栏、右侧状态指示灯、重新设计过的移动设备标识,是个灰色的安卓小绿人图案。聊天界面的蓝色的输入框变成了更加合理的灰色,消息的背景从淡绿和白色变成了淡绿和绿色。有了独立的应用,谷歌可以向其中添加 Gtalk 独有的特性,比如“不保存聊天记录”聊天,该特性可以阻止 Gmail 保存每个聊天记录。
![安卓1.5的日历更加明亮。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/cal15.png)
*安卓 1.5 的日历更加明亮。
[Ron Amadeo 供图]*
日历抛弃了丑陋的黑色背景上白色方块的设计,转变为全浅色主题。所有东西的背景都变成了白色,顶部的星期日变成了蓝色。单独的约会方块从带有颜色的细条变成了拥有整个颜色背景,文字也变为白色。这将是很长一段时间内日历的最后一次改动。
![从左到右:新的浏览器控件,缩放视图,复制/粘贴文本高亮。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/browser-craziness.png)
*从左到右:新的浏览器控件,缩放视图,复制/粘贴文本高亮。
[Ron Amadeo 供图]*
安卓 1.5 从系统全局修改了缩放控件。缩放控件不再是两个大圆形,取而代之的是一个圆角的矩形从中间分开为左右两个按钮。这些新的控件被用在了浏览器、谷歌地图和相册之中。
浏览器在缩放功能上做了很多工作。在放大或缩小之后点击“1x”按钮可以回到正常缩放状态。底部右侧的按钮会缩小显示整个页面并在页面上显示一个放大矩形框就像你在上面中间截图中看到的那样。拖动矩形框并释放页面的那一部分就会回到“1x”视图显示。安卓并没有加速滚动这使得最快的滚动速度也着实很慢——这就是谷歌对长网页页面导航的解决方案。
浏览器的另一个新增功能就是从网页上复制文本——之前你只能从输入框中复制文本。在菜单中选择“复制文本”会激活高亮模式在网页文本上拖动你的手指使它们高亮。G1 的轨迹球对于这种精准的移动十分的方便,并且能够控制鼠标指针。这里并没有可以拖动的光标,当你的手指离开屏幕的时候,安卓就会复制文本并且移除高亮。所以你必须做到荒谬般的精确来使用复制功能。
安卓 1.5 中的浏览器很容易崩溃——比之前的版本经常多了。仅仅是以桌面模式浏览 Ars Technica 就会导致崩溃,许多其它的站点也是一样。
![](http://cdn.arstechnica.net/wp-content/uploads/2013/12/lockscreen.png)
*Ron Amadeo 供图*
默认的锁屏界面和图形锁屏都不再是空荡荡的黑色背景,而是和主屏幕一致的壁纸。
图形解锁界面的浅色背景显示出了谷歌在圆圈对齐工作上的草率和马虎。白色圆圈在黑圆圈里从来都不是在正中心的位置——像这样基本的对齐问题对于这一时期的安卓是个频繁出现的问题。
![Youtube上传工具内容快照自动旋转设置全新的音乐应用设计。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/TWEAKS2.png)
*Youtube 上传工具,内容快照,自动旋转设置,全新的音乐应用设计。
[Ron Amadeo 供图]*
安卓 1.5 给予了 YouTube 应用向其网站上传视频的能力。上传通过从相册中分享视频到 YouTube 应用来完成,或从 YouTube 应用中直接打开一个视频。这将会打开一个上传界面,用户在这里可以设置像视频标题、标签和权限这样的选项。照片可以以类似的方式上传到 Picasa这是一个谷歌建立的图片网站。
整个系统的调整没有多少。常用联系人在联系人列表中可以显示图片(尽管常规联系人还是没有图片)。第三张截图展示了设置中全新的自动旋转选项——这个版本同样也是第一个支持基于从设备内部传感器读取的数据来自动切换方向的版本。
![HTC Magic第二部安卓设备第一个不带实体键盘的设备。](http://cdn.arstechnica.net/wp-content/uploads/2014/04/htc-magic-white.jpg)
*HTC Magic第二部安卓设备第一个不带实体键盘的设备。
[HTC 供图]*
纸杯蛋糕在改进安卓上做了巨大的工作特别是在硬件方面。虚拟键盘意味着不再需要实体键盘。自动旋转使得系统更加接近 iPhone屏幕上的虚拟快门按键同样也意味着实体相机按键变成了可选项。1.5 发布后不久第二部安卓设备的出现将会展示出这个平台未来的方向HTC Magic。Magic上图没有实体键盘或相机按钮。它是没有间隙、没有滑动结构的平板状设备依赖于安卓的虚拟按键来完成任务。
安卓旗舰机开始可能有着最多按键——一个实体 qwerty 键盘——后来随着时间流逝开始慢慢减少按键数量。而 Magic 是重大的一步,去除了整个键盘和相机按钮,它仍然使用通话和挂断键、四个系统键以及轨迹球。
#### 谷歌地图是第一个登陆谷歌市场的内置应用 ####
尽管这篇文章为了简单起见主要以安卓版本顺序来组织应用更新但还是有一些在这时间线之外的东西值得我们特别注意一下。2009 年 6 月 14 日,谷歌地图成为第一个通过谷歌市场更新的预置应用。尽管其它的所有应用更新还是要在一个完整的系统发布中进行更新,但是地图从系统中脱离了出来,只要新特性已经就绪就可以随时接收升级周期之外的更新。
将应用从核心系统分离发布到安卓市场上将成为谷歌前进的主要关注点。总的来说OTA 更新是个重大的主动改进——这需要 OEM 厂商和运营商的合作,二者都是拖后腿的角色。更新同样没有做到到达每个设备。今天,谷歌市场给了谷歌一个与每个安卓手机之间的联系渠道,而没有了这样的外界干扰。
然而,这是后来才需要考虑的问题。在 2009 年,谷歌只有两种裸机需要支持,而且早期的安卓运营商似乎对谷歌的升级需要反应积极。这些早期的行动对谷歌这方面来说将被证明是非常积极的决定。一开始,公司只在最重要的应用——地图和 Gmail 上——走这条路线,但后来它将大部分预置应用导入安卓市场。后来的举措比如 Google Play 服务甚至将应用 API 从系统移除加入了谷歌商店。
至于这时的新地图应用,得到了一个新的路线界面,此外还有提供公共交通和步行方向的能力。现在,路线只有个朴素的黑色列表界面——逐步风格的导航很快就会登场。
2009 年 6 月同时还是苹果发布第三代 iPhone——3GS——以及第三版 iPhone OS 的时候。iPhone OS 3 的主要特性大多是追赶性的项目,比如复制/粘贴和对彩信的支持。苹果的硬件依然更好,软件更流畅、更整合,还有更好的设计。不过凭借疯狂的开发步伐,谷歌正走在快速追赶的道路上iPhone OS 2 是在安卓 0.5 的 Milestone 5 版本之前发布的,而在 iOS 一年的发布周期里,安卓发布了五个版本。
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron 是 Ars Technica 的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/8/
译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://en.wikipedia.org/wiki/Diacritic
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo


@ -0,0 +1,85 @@
Ubuntu 14.04/16.04 与 Windows 10 周年版 Ubuntu Bash 性能对比
===========================
今年初,当 Microsoft 和 Canonical 发布 [Windows 10 Bash 和 Ubuntu 用户空间][1] 时,我做了一些 [Ubuntu on Windows 10 对比原生 Ubuntu][2] 的初步性能测试。这次,我将发布更多的、关于原生纯净的 Ubuntu 与基于 Windows 10 的 Ubuntu 的基准对比。
![Windows 的 Linux 子系统](http://www.phoronix.net/image.php?id=windows10-anv-wsl&image=windows_wsl_1_med)
Windows 的 Linux 子系统的所有测试都是在随 Windows 10 周年更新放出的版本上完成的。默认的 Ubuntu 用户空间还是 Ubuntu 14.04,但是已经可以升级到 16.04。所以测试首先在 14.04 上进行,完成后将系统升级到 16.04 版本并重复所有测试。完成所有基于 Windows 的 Ubuntu 子系统测试后,我在同样的系统上干净地安装了 Ubuntu 14.04.5 和 Ubuntu 16.04 LTS 来做性能对比。
![Ubuntu](http://www.phoronix.net/image.php?id=windows10-anv-wsl&image=windows_wsl_2_med)
配置为 Intel i5 6600K Skylake16G 内存和 256G 东芝 ssd测试过程中每个操作系统都采用其原生默认配置和软件包。
![Phoronix 测试套件](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=09989b3&p=2)
这次 Ubuntu/Bash on Windows 和原生 Ubuntu 对比测试,采用开源软件 [Phoronix 测试套件](http://www.phoronix-test-suite.com/),完全自动化并可重复测试。
![SQLite 嵌入式数据库基准测试](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=186c4d0&c=a8c914bf9b64cf67abc65e319f8e71c7951fb1aa&p=0)
首先是 SQLite 嵌入式数据库基准测试。这方面开箱即用的 Ubuntu/Bash on Windows 性能是相当的慢,但是如果将环境从 14.04 升级到 16.04 LTS性能会快很多。然而对于繁重磁盘操作的任务原生 Ubuntu Linux 几乎比 Windows 的子系统 Linux 快了近 2 倍。
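下面用一段小脚本演示这里“快了近 2 倍”的换算方式。脚本中的耗时数字是虚构的,仅作演示,并非本次测试的真实数据;这类测试的数值越低表示性能越好:

```shell
# 虚构的 SQLite 测试耗时(秒),数值越低越好
native=12.5   # 原生 Ubuntu Linux虚构值
wsl=23.8      # Ubuntu on Windows (WSL)(虚构值)

# 相对倍数 = WSL 耗时 / 原生耗时
ratio=$(awk -v a="$wsl" -v b="$native" 'BEGIN { printf "%.1f", a / b }')
echo "WSL 耗时约为原生 Ubuntu 的 ${ratio} 倍"
```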
![编译测试:编译](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=fa40825&c=0912dc3f6d6a9f36da09fdd4c0cf4e330fa40f90&p=0)
![编译测试:初始创建](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=8419652&c=9b9f6b0822ed5b9dc2977a7f2faf499fce4fba23&p=0)
编译测试作为额外的繁重磁盘操作测试显示,定制的 Windows 子系统真的成倍地拖慢了 Ubuntu 的性能。
接下来,是一些使用 Stream 的基本的系统内存速度测试:
![Stream复制](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=9560e6f&c=ebbc6937fa8daf0540e0df353432a29f938cf7ed&p=0)
![Stream缩放](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=63fa8d6&c=88cd58f9eca6d3a09699d60d9f877529113fb1bc&p=0)
![Stream添加](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=5a2e9d2&c=d37eee4c9394fa8104e7e49e26c964af70ec326b&p=0)
奇怪的是,这些 Stream 内存基准测试显示 Ubuntu on Windows 的性能比原生的 Ubuntu 好!这个现象在基于同一 Windows 系统的 14.04 和 16.04 LTS 两种环境中都出现了。
接下来,是一些繁重 CPU 操作测试。
![Dolfyn 科学测试](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=ee1f01f&c=3e9a67230e0e081b99ee3237e702c0b40ee73d60&p=0)
通过 Dolfyn 科学测试Ubuntu On Windows 和原生 Ubuntu 之间的性能其实是相当接近的。 对于 Ubuntu 16.04,由于较新的 GCC 编译器性能衰减,两个平台上的性能都较慢。
![Fhourstones 测试](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=dd69257&c=0e31babb8b96be1ae38ea739fbb1346bf9bc4b07&p=0)
![John The Ripper 测试](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=a02416b&c=c8abb70dee982dd494fb1891bd9dc154fa7a7f47&p=0)
透过 Fhourstones 测试和 John The Ripper 测试表明,通过在 Windows 的 Linux 子系统运行的 Ubuntu 的性能可以非常接近裸机 Ubuntu Linux 性能!
![x264 测试H264视频编码](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=3140e3c&c=f4bf6330a7d58b5939c61cbd91fe5db379c1592a&p=0)
类似于 Stream 测试x264 的结果是另一个奇怪的情况:其中最好的性能实际上来自使用 Linux 子系统的 Ubuntu On Windows
![Linux 内核编译时间](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=ad12f0b&c=f50c829c97d731f6926c5a874cf83f8fc5440067&p=0)
![PHP 编译时间](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=8b7a7ca&c=3de3e8537d08665e8a41380b6b2298c09f408fa0&p=0)
计时编译基准测试的结果非常有利于裸机 Ubuntu Linux。这应该是由于大型程序编译需要大量读写磁盘而在先前的测试中已经发现这是基于 Windows 的 Linux 子系统缓慢的一大领域。
![Crafty](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=25892d8&c=f6cd3fa4a3497e3d2663106e0bf3fcd227f9b9a3&p=0)
![FLAC 音频编码](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=2ea1062&c=fbbec58a6aa1f3fb8dbc55e3de612afc99c666f7&p=0)
![OpenSSL](https://openbenchmarking.org/embed.php?i=1608096-LO-BASHWINDO87&sha=4899bb2&c=80df0e1e749910ebd84b0d6c2688316e5cfb8cda&p=0)
许多其他的通用开源基准测试表明,严格的针对 CPU 的测试Windows 子系统的 Ubuntu 的性能是很接近的,甚至是与原生安装在实际硬件中的 Ubuntu Linux 相等。
最新的 Windows 的 Linux 子系统,测试结果实际上相当令人印象深刻。让人沮丧的仅仅只是持续缓慢的磁盘/文件系统性能,但是对于受 CPU 限制的工作负载,结果是非常引人注目的。还有很罕见的情况,如 x264 和 Stream 测试Ubuntu On Windows 上的性能看起来明显优于运行在实际硬件上的 Ubuntu Linux。
总的来说,体验是十分愉快的,并且我在 Ubuntu/Bash on Windows 上也没有遇到任何其他的 bug。如果你还有兴趣了解更多关于 Windows 和 Linux 的基准测试欢迎留言讨论。
--------------------------------------------------------------------------------
via: https://www.phoronix.com/scan.php?page=article&item=windows10-anv-wsl&num=1
作者:[Michael Larabel][a]
译者:[VicYu/Vic020](http://vicyu.net)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.michaellarabel.com/
[1]: http://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-User-Space-On-Win10
[2]: http://www.phoronix.com/scan.php?page=article&item=windows-10-lxcore&num=1


@ -1,3 +1,7 @@
/*翻译中 WangYueScream LemonDemo*/
What is open source
===========================


@ -1,68 +0,0 @@
/*翻译中 WangYueScream LemonDemo*/
Fedora-powered computer lab at our university
==========
![Fedora-powered computer lab at our university](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/fedora-powered-computer-lab-945x400.png)
At the [University of Novi Sad in Serbia, Faculty of Sciences, Department of Mathematics and Informatics][5], we teach our students a lot of things. From an introduction to programming to machine learning, all the courses make them think like great developers and software engineers. The pace is fast and there are many students, so we must have a setup on which we can rely on. We decided to switch our computer lab to Fedora.
### Previous setup
Our previous solution was keeping our development software in Windows [virtual machines][4] installed on Ubuntu Linux. This seemed like a good idea at the time. However, there were a couple of drawbacks. Firstly, there were serious performance losses because of running virtual machines. Performance and speed of the operating system was impacted because of this. Also, sometimes virtual machines ran concurrently in another user's session. This led to serious slowdowns. We were losing precious time on booting the machines and then booting the virtual machines. Lastly, we realized that most of our software was Linux-compatible. Virtual machines weren't necessary. We had to find a better solution.
### Enter Fedora!
![Computer lab in Serbia powered by Fedora](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/jxXtuFO-1024x576.jpg)
Picture of a computer lab running Fedora Workstation by default
We thought about replacing the virtual machines with a “bare bones” installation for a while. We decided to go for Fedora for several reasons.
#### Cutting edge of development
In our courses, we use many different development tools. Therefore, it is crucial that we always use the latest and greatest development tools available. In Fedora, we found 95% of the tools we needed in the official software repositories! For a few tools, we had to do a manual installation. This was easy in Fedora because you have almost all development tools available out of the box.
What we realized in this process was that we used a lot of free and open source software and tools. Having all that software always up to date was always going to be a lot of work but not with Fedora.
#### Hardware compatibility
The second reason for choosing Fedora in our computer lab was hardware compatibility. The computers in the lab are new. In the past, there were some problems with older kernel versions. In Fedora, we knew that we would always have a recent kernel. As we expected, everything worked out of the box without any issues.
We decided that we would go for the [Workstation edition][3] of Fedora with [GNOME desktop environment][2]. Students found it easy, intuitive, and fast to navigate through the operating system. It was important for us that students have an easy environment where they could focus on the tasks at hand and the course itself rather than a complicated or slow user interface.
#### Powered by freedom
Lastly, in our department, we value free and open source software greatly. By utilizing such software, students are able to use it freely even when they graduate and start working. In the process, they also learn about Fedora and free and open source software in general.
### Switching the computer lab
We took one of the computers and fully set it up manually. That included preparing all the needed scripts and software, setting up remote access, and other important components. We also made one user account per course so students could easily store their files.
After that one computer was ready, we used a great, free and open source tool called [CloneZilla][1]. CloneZilla let us make a hard drive image for restoration. The image size was around 11GB. We used some fast USB 3.0 flash drives to restore the disk image to the remaining computers. We managed to fully provision and set up twenty-four computers in one hour and fifteen minutes, with just a couple of flash drives.
### Future work
All the computers in our computer lab are now exclusively using Fedora (with no virtual machines). The remaining work is to set up some administration scripts for remotely installing software, turning computers on and off, and so forth.
We would like to thank all Fedora maintainers, packagers, and other contributors. We hope our work encourages other schools and universities to make a switch similar to ours. We happily confirm that Fedora works great for us, and we can also vouch that it would work great for you!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-computer-lab-university/
作者:[Nemanja Milošević][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/nmilosev/
[1]:http://clonezilla.org/
[2]:https://www.gnome.org/
[3]:https://getfedora.org/workstation/
[4]:https://en.wikipedia.org/wiki/Virtual_machine
[5]:http://www.dmi.rs/


@ -1,57 +0,0 @@
willcoderwang translating
# Would You Consider Riding in a Driverless Car?
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/Writers-Opinion-Driverless-Car-Featured.jpg "Would You Consider Riding in a Driverless Car?s")
Technology goes through major movements. The last one we entered into was the wearables phase with the Apple Watch and clones, FitBits, and Google Glass. It seems like the next phase they've been working on for quite a while is the driverless car.
These cars, sometimes called autonomous cars, self-driving cars, or robotic cars, would literally drive themselves thanks to technology. They detect their surroundings, such as obstacles and signs, and use GPS to find their way. But would they be safe to drive in? We asked our technology-minded writers, “Would you consider riding in a driverless car?”
### Our Opinion
**Derrik** reports that he would ride in a driver-less car because “_the technology is there and a lot of smart people have been working on it for a long time._” He admits there are issues with them, but for the most part he believes a lot of the accidents happen when a human does get involved. But if you take humans out of the equation, he thinks riding in a driverless car “_would be incredibly safe_.”
For **Phil**, these cars give him “the willies,” yet he admits that's only in the abstract as he's never ridden in one. He agrees with Derrik that the tech is well developed and knows how it works, but then sees himself as “_a tough sell, being a bit of a luddite at heart._” He admits to even rarely using cruise control. Yet he agrees that a driver relying on it too much would be what would make him feel unsafe.
![writers-opinion-driverless-car](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/Writers-Opinion-Driverless-Car.jpg "writers-opinion-driverless-car")
**Robert** agrees that “_the concept is a little freaky,_” but in principle he doesn't see why cars shouldn't go in that direction. He notes that planes have gone that route and become much safer, and he believes the accidents we see are mainly “_caused by human error due to over-relying on the technology then not knowing what to do when it fails._”
He's a bit of an “anxious passenger” as it is, preferring to have control over the situation, so for him much of it would have to do with where the car is being driven. He'd be okay with it if it was driving in the city at slow speeds but definitely not on “motorways or weaving English country roads with barely enough room for two cars to drive past each other.” He and Phil both see English roads as much different than American ones. He suggests letting others be the guinea pigs and joining in after it's known to be safe.
For **Mahesh**, he would definitely ride in a driverless car, as he knows that the companies with these cars “_have robust technology and would never put their customers at risk._” He agrees that it depends on the roads that the cars are being driven on.
My opinion kind of floats in the middle of all the others. While I'm normally one to jump readily into new technology, putting my life at risk makes it different. I agree that the cars have been in development so long they're bound to be safe. And frankly there are many drivers on the road that are much more dangerous than driverless cars. But like Robert, I think I'll let others be the guinea pigs and will welcome the technology once it becomes a bit more commonplace.
### Your Opinion
Where do you sit with this issue? Do you trust this emerging technology? Or would you be a nervous nelly in one of these cars? Would you consider driving in a driverless car? Jump into the discussion below in the comments.
<small>Image Credit: [Steve Jurvetson][4] and [Steve Jurvetson at Wikimedia Commons][3]</small>
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/riding-driverless-car/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
作者:[Laura Tucker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.maketecheasier.com/author/lauratucker/
[1]:https://www.maketecheasier.com/riding-driverless-car/#comments
[2]:https://www.maketecheasier.com/author/lauratucker/
[3]:https://commons.m.wikimedia.org/wiki/File:Inside_the_Google_RoboCar_today_with_PlanetLabs.jpg
[4]:https://commons.m.wikimedia.org/wiki/File:Jurvetson_Google_driverless_car_trimmed.jpg
[5]:https://support.google.com/adsense/troubleshooter/1631343
[6]:https://www.maketecheasier.com/best-wordpress-video-plugins/
[7]:https://www.maketecheasier.com/hidden-google-games/
[8]:mailto:?subject=Would%20You%20Consider%20Riding%20in%20a%20Driverless%20Car?&body=https%3A%2F%2Fwww.maketecheasier.com%2Friding-driverless-car%2F
[9]:http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Friding-driverless-car%2F&text=Would+You+Consider+Riding+in+a+Driverless+Car%3F
[10]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Friding-driverless-car%2F
[11]:https://www.maketecheasier.com/category/opinion/

View File

@ -1,56 +0,0 @@
# Arch Linux: In a world of polish, DIY never felt so good
![Tripple Renault photo by Gilles Paire via Shutterstock ](https://regmedia.co.uk/2016/10/31/tripple_renault_photo_by_gilles_paire_via_shutterstock.jpg?x=648&y=348&crop=1)
Dig through the annals of Linux journalism and you'll find a surprising amount of coverage of some pretty obscure distros. Flashy new distros like Elementary OS and Solus garner attention for their slick interfaces, and anything shipping with a MATE desktop gets coverage by simple virtue of using MATE.
Thanks to television shows like _Mr Robot_, I fully expect coverage of even Kali Linux to be on the uptick soon.
In all that coverage, though, there's one very widely used distro that's almost totally ignored: Arch Linux.
Arch gets very little coverage for several reasons, not the least of which is that it's somewhat difficult to install and requires you to feel comfortable with the command line to get it working. Worse, from the point of view of anyone trying to appeal to mainstream users, that difficulty is by design - nothing keeps the noobs out like a daunting install process.
It's a shame, though, because once the installation is complete, Arch is actually - in my experience - far easier to use than any other Linux distro I've tried.
But yes, installation is a pain. Hand-partitioning, hand-mounting and generating your own `fstab` files takes more time and effort than clicking "install" and merrily heading off to do something else. But the process of installing Arch teaches you a lot. It pulls back the curtain so you can see what's behind it. In fact it makes the curtain disappear entirely. In Arch, _you_ are the person behind the curtain.
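As a rough sketch of what you end up writing by hand (or generating with `genfstab` from the Arch install scripts), here is what a single fstab entry boils down to. The UUID and mount options below are invented for illustration; on a real system the UUID comes from `blkid` or `genfstab -U /mnt`:

```shell
# Hypothetical values for illustration only.
uuid="3f1c2a9e-5d7b-4c1a-9e2f-0a1b2c3d4e5f"
mountpoint="/"
fstype="ext4"
options="rw,relatime"

# An fstab entry has six fields:
# device, mountpoint, filesystem type, mount options, dump flag, fsck pass number.
fstab_entry="UUID=${uuid} ${mountpoint} ${fstype} ${options} 0 1"
echo "$fstab_entry"
```

Hand-assembling lines like this is exactly the kind of curtain-lifting the install process forces on you: afterwards, nothing in `/etc/fstab` is a mystery.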
In addition to its reputation for being difficult to install, Arch is justly revered for its customizability, though this is somewhat misunderstood. There is no "default" desktop in Arch. What you want installed on top of the base set of Arch packages is entirely up to you.
![ARCH "DESKTOP" SCREENSHOT LINUX - OBVS VARIES DEPENDING ON USER ](https://regmedia.co.uk/2016/11/01/arch.jpg?x=648&y=364&infer_y=1 "ARCH "DESKTOP" SCREENSHOT LINUX - OBVS VARIES DEPENDING ON USER ")
While you can see this as infinite customizability, you can also see it as totally lacking in customization. For example, unlike - say - Ubuntu there is almost no patching or customization happening in Arch. Arch developers simply pass on what upstream developers have released, end of story. For some this is good; you can run "pure" GNOME, for instance. But in other cases, some custom patching can take care of bugs that upstream devs might not prioritize.
The lack of a default set of applications and desktop system also does not make for tidy reviews - or reviews at all really, since what I install will no doubt be different to what you choose. I happened to select a very minimal setup of bare Openbox, tint2 and dmenu. You might prefer the latest release of GNOME. We'd both be running Arch, but our experiences of it would be totally different. This is of course true of any distro, but most others have a default desktop at least.
Still there are common elements that together can make the basis of an Arch review. There is, for example, the primary reason I switched - Arch is a rolling release distro. This means two things. First, the latest kernels are delivered as soon as they're available and reasonably stable. This means I can test things that are difficult to test with other distros. The other big win for a rolling distro is that all updates are delivered when they're ready. Not only does this mean newer software sooner, it means there's no massive system updates that might break things.
Many people feel that Arch is less stable because it's rolling, but in my experience over the last nine months I would argue the opposite.
I have yet to break anything with an update. I did once have to roll back because my /boot partition wasn't mounted when I updated and changes weren't written, but that was pure user error. Bugs that do surface (like some regressions related to the trackpad on a Dell XPS laptop I was testing) are fixed and updates are available much faster than they would be with a non-rolling distro. In short, I've found Arch's rolling release updates to be far more stable than anything else I've been using alongside it. The only caveat I have to add to that is read the wiki and pay close attention to what you're updating.
This brings us to the main reason I suspect that Arch's appeal is limited - you have to pay attention to what you're doing. Blindly updating Arch is risky - but it's risky with any distro; you've just been conditioned to think it's not because you have no choice.
All of which leads me to the other major reason I embraced Arch - the [Arch Philosophy][1]. The part in particular that I find appealing is this bit: "[Arch] is targeted at the proficient GNU/Linux user, or anyone with a do-it-yourself attitude who is willing to read the documentation, and solve their own problems."
As Linux moves further into the mainstream developers seem to feel a greater need to smooth over all the rough areas - as if mirroring the opaque user experience of proprietary software were somehow the apex of functionality.
Strange though it sounds in this day and age, there are many of us who actually prefer to configure things ourselves. In this sense Arch may well be the last refuge of the DIY Linux user. ®
--------------------------------------------------------------------------------
via: http://www.theregister.co.uk/2016/11/02/arch_linux_taster/
作者:[Scott Gilbertson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.theregister.co.uk/Author/1785
[1]:https://wiki.archlinux.org/index.php/Arch_Linux
[2]:http://www.theregister.co.uk/Author/1785
[3]:https://www.linkedin.com/shareArticle?mini=true&url=http://www.theregister.co.uk/2016/11/02/arch_linux_taster/&title=Arch%20Linux%3A%20In%20a%20world%20of%20polish%2C%20DIY%20never%20felt%20so%20good&summary=Last%20refuge%20for%20purists
[4]:http://twitter.com/share?text=Arch%20Linux%3A%20In%20a%20world%20of%20polish%2C%20DIY%20never%20felt%20so%20good&url=http://www.theregister.co.uk/2016/11/02/arch_linux_taster/&via=theregister
[5]:http://www.reddit.com/submit?url=http://www.theregister.co.uk/2016/11/02/arch_linux_taster/&title=Arch%20Linux%3A%20In%20a%20world%20of%20polish%2C%20DIY%20never%20felt%20so%20good


@ -0,0 +1,54 @@
# Build Strong Real-Time Streaming Apps with Apache Calcite
![Calcite](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/calcite.jpg?itok=CUZmjPjy "Calcite ")
Calcite is a data framework that lets you build custom database functionality, explains Microsoft developer Atri Sharma in this preview of his upcoming talk at Apache: Big Data Europe, Nov. 14-16 in Seville, Spain. [Creative Commons Zero][2] Wikimedia Commons: Parent Géry
The [Apache Calcite][7] data management framework contains many pieces of a typical database management system but omits others, such as storage of data and algorithms to process data. In his talk at the upcoming [Apache: Big Data][6] conference in Seville, Spain, Atri Sharma, a Software Engineer for Azure Data Lake at Microsoft, will talk about developing applications using [Apache Calcite][5]'s advanced query planning capabilities. We spoke with Sharma to learn more about Calcite and how existing applications can take advantage of its functionality.
![Atri Sharma](https://www.linux.com/sites/lcom/files/styles/floated_images/public/atri-sharma.jpg?itok=77cvZWfw "Atri Sharma")
Atri Sharma, Software Engineer, Azure Data Lake, Microsoft. [Used with permission][1]
**Linux.com: Can you provide some background on Apache Calcite? What does it do?**
Atri Sharma: Calcite is a framework that is the basis of many database kernels. Calcite empowers you to build your custom database functionality and use the required resources from Calcite. For example, Hive uses Calcite for cost-based query optimization, Drill and Kylin use Calcite for SQL parsing and optimization, and Apex uses Calcite for streaming SQL.
**Linux.com: What are some features that make Apache Calcite different from other frameworks?**
Atri: Calcite is unique in the sense that it allows you to build your own data platform. Calcite does not manage your data directly but rather allows you to use Calcite's libraries to define your own components. For example, instead of providing a generic query optimizer, it allows you to define custom query optimizers using the Planners available in Calcite.
**Linux.com: Apache Calcite itself does not store or process data. How does that affect application development?**
Atri: Calcite is a dependency in the kernel of your database. It is targeted for data management platforms that wish to extend their functionalities without writing a lot of functionality from scratch.
**Linux.com: Who should be using it? Can you give some examples?**
Atri: Any data management platform looking to extend their functionalities should use Calcite. We are the foundation of your next high-performance database!
Specifically, I think the biggest examples would be Hive using Calcite for query optimization and Flink for parsing and streaming SQL processing. Hive and Flink are full-fledged data management engines, and they use Calcite for highly specialized purposes. This is a good case study for applications of Calcite to further strengthen the core of a data management platform.
**Linux.com: What are some new features that youre looking forward to?**
Atri: Streaming SQL enhancements are something I am very excited about. These features are exciting because they will enable users of Calcite to develop real-time streaming applications much faster, and the strength and capabilities of these applications will be manifold. Streaming applications are the new de facto, and the strength to have query optimization in streaming SQL will be very useful for a large crowd. Also, there is discussion ongoing about temporal tables, so watch out for more!
--------------------------------------------------------------------------------
via: https://www.linux.com/news/build-strong-real-time-streaming-apps-apache-calcite
作者:[AMBER ANKERHOLZ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/aankerholz
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/atri-sharmajpg
[4]:https://www.linux.com/files/images/calcitejpg
[5]:https://calcite.apache.org/
[6]:http://events.linuxfoundation.org/events/apache-big-data-europe
[7]:https://calcite.apache.org/


@ -1,144 +0,0 @@
Translating by Firstadream
Rapid prototyping with docker-compose
========================================
In this write-up we'll look at a Node.js prototype for **finding stock of the Raspberry PI Zero** from three major outlets in the UK.
I wrote the code and deployed it to an Ubuntu VM in Azure within a single evening of hacking. Docker and the docker-compose tool made the deployment and update process extremely quick.
### Remember linking?
If you've already been through the [Hands-On Docker tutorial][1] then you will have experience linking Docker containers on the command line. Linking a Node hit counter to a Redis server on the command line may look like this:
```
$ docker run -d -P --name redis1 redis
$ docker run -d -p 3000:3000 --link redis1:redis hit_counter
```
Now imagine your application has three tiers
- Web front-end
- Batch tier for processing long running tasks
- Redis or mongo database
Explicit linking through `--link` is just about manageable with a couple of containers, but can get out of hand as we add more tiers or containers to the application.
### Enter docker-compose
![](http://blog.alexellis.io/content/images/2016/05/docker-compose-logo-01.png)
>Docker Compose logo
The docker-compose tool is part of the standard Docker Toolbox and can also be downloaded separately. It provides a rich set of features to configure all of an application's parts through a plain-text YAML file.
The above example would look like this:
```
version: "2.0"
services:
  redis1:
    image: redis
  hit_counter:
    build: ./hit_counter
    ports:
      - 3000:3000
```
From Docker 1.10 onwards we can take advantage of network overlays to help us scale out across multiple hosts. Prior to this linking only worked across a single host. The `docker-compose scale` command can be used to bring on more computing power as the need arises.
>View the [docker-compose][2] reference on docker.com
### Real-world example: Raspberry PI Stock Alert
![](http://blog.alexellis.io/content/images/2016/05/Raspberry_Pi_Zero_ver_1-3_1_of_3_large.JPG)
>The new Raspberry PI Zero v1.3 image courtesy of Pimoroni
There is a huge buzz around the Raspberry PI Zero - a tiny microcomputer with a 1GHz CPU and 512MB RAM capable of running full Linux, Docker, Node.js, Ruby and many other popular open-source tools. One of the best things about the PI Zero is that it costs only 5 USD. That also means that stock gets snapped up really quickly.
*If you want to try Docker or Swarm on the PI check out the tutorial below.*
>[Docker Swarm on the PI Zero][3]
### Original site: whereismypizero.com
I found a webpage which used screen scraping to find whether 4-5 of the most popular outlets had stock.
- The site contained a static HTML page
- Issued one XMLHttpRequest per outlet accessing /public/api/
- The server issued the HTTP request to each shop and performed the scraping
Every call to /public/api/ took 3 seconds to execute and using Apache Bench (ab) I was only able to get through 0.25 requests per second.
### Reinventing the wheel
The retailers didn't seem to mind whereismypizero.com scraping their sites for stock, so I set about writing a similar tool from the ground up. I intended to handle a much higher number of requests per second through caching and by de-coupling the scrape from the web tier. Redis was the perfect tool for the job. It allowed me to set an automatically expiring key/value pair (i.e. a simple cache) and also to transmit messages between Node processes through pub/sub.
>Fork or star the code on Github: [alexellis/pi_zero_stock][4]
If you've worked with Node.js before then you will know it is single-threaded and that any CPU intensive tasks such as parsing HTML or JSON could lead to a slow-down. One way to mitigate that is to use a second worker process and a Redis messaging channel as connective tissue between this and the web tier.
- Web tier
  - Gives 200 for cache hit (Redis key exists for store)
  - Gives 202 for cache miss (Redis key doesn't exist, so issues message)
  - Since we are only ever reading a Redis key, the response time is very quick.
- Stock Fetcher
  - Performs HTTP request
  - Scrapes for different types of web stores
  - Updates a Redis key with a cache expiry of 60 seconds
  - Also locks a Redis key to prevent too many in-flight HTTP requests to the web stores.
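The two tiers above can be sketched in Python (a minimal sketch with hypothetical names; a plain dict with expiry times stands in for Redis, and `publish` stands in for the pub/sub message to the fetcher):

```python
import time

CACHE_TTL = 60  # seconds, matching the fetcher's Redis expiry


class TtlCache:
    """A stand-in for Redis SETEX/GET: values expire after a TTL."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl=CACHE_TTL):
        self._store[key] = (value, time.time() + ttl)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.time() >= expires_at:
            del self._store[key]  # expired: behave like a missing key
            return None
        return value


def handle_request(cache, store_name, publish):
    """Web tier: 200 on cache hit, 202 plus a 'please fetch' message on miss."""
    stock = cache.get(store_name)
    if stock is not None:
        return 200, stock
    publish(store_name)  # ask the stock fetcher to scrape this store
    return 202, None


if __name__ == "__main__":
    cache = TtlCache()
    requested = []
    print(handle_request(cache, "pimoroni", requested.append))  # cache miss -> (202, None)
    cache.set("pimoroni", "in stock")  # the fetcher fills the cache
    print(handle_request(cache, "pimoroni", requested.append))  # hit -> (200, 'in stock')
```

The point of the design shows up in the second call: once the fetcher has filled the key, the web tier never touches the slow scraping path until the TTL lapses.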
```
version: "2.0"
services:
  web:
    build: ./web/
    ports:
      - "3000:3000"
  stock_fetch:
    build: ./stock_fetch/
  redis:
    image: redis
```
*The docker-compose.yml file from the example.*
Once I had this working locally, deploying to an Ubuntu 16.04 image in the cloud (Azure) took less than 5 minutes. I logged in, cloned the repository and typed in `docker-compose up -d`. That was all it took - rapid prototyping a whole system doesn't get much better. Anyone (including the owner of whereismypizero.com) can deploy the new solution with just two lines:
```
$ git clone https://github.com/alexellis/pi_zero_stock
$ docker-compose up -d
```
Updating the site is easy and just involves a `git pull` followed by a `docker-compose up -d` with the `--build` argument passed along.
If you are still linking your Docker containers manually, try Docker Compose for yourself or my code below:
>Fork or star the code on Github: [alexellis/pi_zero_stock][5]
### Check out the test site
The test site is currently deployed using docker-compose.
>[stockalert.alexellis.io][6]
![](http://blog.alexellis.io/content/images/2016/05/Screen-Shot-2016-05-16-at-22-34-26-1.png)
Preview as of 16th of May 2016
----------
via: http://blog.alexellis.io/rapid-prototype-docker-compose/
作者:[Alex Ellis][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://blog.alexellis.io/author/alex/
[1]: http://blog.alexellis.io/handsondocker
[2]: https://docs.docker.com/compose/compose-file/
[3]: http://blog.alexellis.io/dockerswarm-pizero/
[4]: https://github.com/alexellis/pi_zero_stock
[5]: https://github.com/alexellis/pi_zero_stock
[6]: http://stockalert.alexellis.io/


@ -1,3 +1,6 @@
@poodarchu is translating
Building a data science portfolio: Storytelling with data
========
@ -419,13 +422,13 @@ data["class_size"].head()
Out[4]:
| | CSD | BOROUGH | SCHOOL CODE | SCHOOL NAME | GRADE | PROGRAM TYPE | CORE SUBJECT (MS CORE and 9-12 ONLY) | CORE COURSE (MS CORE and 9-12 ONLY) | SERVICE CATEGORY(K-9* ONLY) | NUMBER OF STUDENTS / SEATS FILLED | NUMBER OF SECTIONS | AVERAGE CLASS SIZE | SIZE OF SMALLEST CLASS | SIZE OF LARGEST CLASS | DATA SOURCE | SCHOOLWIDE PUPIL-TEACHER RATIO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | M | M015 | P.S. 015 Roberto Clemente | 0K | GEN ED | - | - | - | 19.0 | 1.0 | 19.0 | 19.0 | 19.0 | ATS | NaN |
| 1 | 1 | M | M015 | P.S. 015 Roberto Clemente | 0K | CTT | - | - | - | 21.0 | 1.0 | 21.0 | 21.0 | 21.0 | ATS | NaN |
| 2 | 1 | M | M015 | P.S. 015 Roberto Clemente | 01 | GEN ED | - | - | - | 17.0 | 1.0 | 17.0 | 17.0 | 17.0 | ATS | NaN |
| 3 | 1 | M | M015 | P.S. 015 Roberto Clemente | 01 | CTT | - | - | - | 17.0 | 1.0 | 17.0 | 17.0 | 17.0 | ATS | NaN |
| 4 | 1 | M | M015 | P.S. 015 Roberto Clemente | 02 | GEN ED | - | - | - | 15.0 | 1.0 | 15.0 | 15.0 | 15.0 | ATS | NaN |
As you can see above, it looks like the `DBN` is actually a combination of `CSD`, `BOROUGH`, and `SCHOOL CODE`. For those unfamiliar with New York City, it is composed of `5` boroughs. Each borough is an organizational unit, and is about the same size as a fairly large US city. `DBN` stands for `District Borough Number`. It looks like `CSD` is the District, `BOROUGH` is the borough, and when combined with the `SCHOOL CODE`, forms the `DBN`. There's no systematized way to find insights like this in data, and it requires some exploration and playing around to figure out.
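The combination can be sketched in plain Python (an editorial aside with a hypothetical helper, not the article's own code; the zero-padding of `CSD` is inferred from DBNs like `01M015` in the data shown above):

```python
def make_dbn(csd, school_code):
    # DBN = two-digit, zero-padded district number + school code,
    # e.g. CSD 1 and school code "M015" -> "01M015".
    return "{0:02d}{1}".format(csd, school_code)


print(make_dbn(1, "M015"))  # 01M015
```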
@ -468,13 +471,13 @@ survey.head()
```
Out[16]:
| | N_p | N_s | N_t | aca_p_11 | aca_s_11 | aca_t_11 | aca_tot_11 | bn | com_p_11 | com_s_11 | ... | t_q8c_1 | t_q8c_2 | t_q8c_3 | t_q8c_4 | t_q9 | t_q9_1 | t_q9_2 | t_q9_3 | t_q9_4 | t_q9_5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 90.0 | NaN | 22.0 | 7.8 | NaN | 7.9 | 7.9 | M015 | 7.6 | NaN | ... | 29.0 | 67.0 | 5.0 | 0.0 | NaN | 5.0 | 14.0 | 52.0 | 24.0 | 5.0 |
| 1 | 161.0 | NaN | 34.0 | 7.8 | NaN | 9.1 | 8.4 | M019 | 7.6 | NaN | ... | 74.0 | 21.0 | 6.0 | 0.0 | NaN | 3.0 | 6.0 | 3.0 | 78.0 | 9.0 |
| 2 | 367.0 | NaN | 42.0 | 8.6 | NaN | 7.5 | 8.0 | M020 | 8.3 | NaN | ... | 33.0 | 35.0 | 20.0 | 13.0 | NaN | 3.0 | 5.0 | 16.0 | 70.0 | 5.0 |
| 3 | 151.0 | 145.0 | 29.0 | 8.5 | 7.4 | 7.8 | 7.9 | M034 | 8.2 | 5.9 | ... | 21.0 | 45.0 | 28.0 | 7.0 | NaN | 0.0 | 18.0 | 32.0 | 39.0 | 11.0 |
| 4 | 90.0 | NaN | 23.0 | 7.9 | NaN | 8.1 | 8.0 | M063 | 7.9 | NaN | ... | 59.0 | 36.0 | 5.0 | 0.0 | NaN | 10.0 | 5.0 | 10.0 | 60.0 | 15.0 |
5 rows × 2773 columns
@ -513,13 +516,13 @@ data["class_size"].head()
Out[18]:
| | CSD | BOROUGH | SCHOOL CODE | SCHOOL NAME | GRADE | PROGRAM TYPE | CORE SUBJECT (MS CORE and 9-12 ONLY) | CORE COURSE (MS CORE and 9-12 ONLY) | SERVICE CATEGORY(K-9* ONLY) | NUMBER OF STUDENTS / SEATS FILLED | NUMBER OF SECTIONS | AVERAGE CLASS SIZE | SIZE OF SMALLEST CLASS | SIZE OF LARGEST CLASS | DATA SOURCE | SCHOOLWIDE PUPIL-TEACHER RATIO | DBN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | M | M015 | P.S. 015 Roberto Clemente | 0K | GEN ED | - | - | - | 19.0 | 1.0 | 19.0 | 19.0 | 19.0 | ATS | NaN | 01M015 |
| 1 | 1 | M | M015 | P.S. 015 Roberto Clemente | 0K | CTT | - | - | - | 21.0 | 1.0 | 21.0 | 21.0 | 21.0 | ATS | NaN | 01M015 |
| 2 | 1 | M | M015 | P.S. 015 Roberto Clemente | 01 | GEN ED | - | - | - | 17.0 | 1.0 | 17.0 | 17.0 | 17.0 | ATS | NaN | 01M015 |
| 3 | 1 | M | M015 | P.S. 015 Roberto Clemente | 01 | CTT | - | - | - | 17.0 | 1.0 | 17.0 | 17.0 | 17.0 | ATS | NaN | 01M015 |
| 4 | 1 | M | M015 | P.S. 015 Roberto Clemente | 02 | GEN ED | - | - | - | 15.0 | 1.0 | 15.0 | 15.0 | 15.0 | ATS | NaN | 01M015 |
There are several rows for each high school (as you can see by the repeated `DBN` and `SCHOOL NAME` fields). However, if we take a look at the `sat_results` dataset, it only has one row per high school:
@ -531,13 +534,13 @@ data["sat_results"].head()
Out[21]:
| | DBN | SCHOOL NAME | Num of SAT Test Takers | SAT Critical Reading Avg. Score | SAT Math Avg. Score | SAT Writing Avg. Score |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 01M292 | HENRY STREET SCHOOL FOR INTERNATIONAL STUDIES | 29 | 355 | 404 | 363 |
| 1 | 01M448 | UNIVERSITY NEIGHBORHOOD HIGH SCHOOL | 91 | 383 | 423 | 366 |
| 2 | 01M450 | EAST SIDE COMMUNITY SCHOOL | 70 | 377 | 402 | 370 |
| 3 | 01M458 | FORSYTH SATELLITE ACADEMY | 7 | 414 | 401 | 359 |
| 4 | 01M509 | MARTA VALLE HIGH SCHOOL | 44 | 390 | 433 | 384 |
In order to combine these datasets, we'll need to find a way to condense datasets like `class_size` to the point where there's only a single row per high school. If not, there won't be a way to compare SAT scores to class size. We can accomplish this by first understanding the data better, then by doing some aggregation. With the `class_size` dataset, it looks like `GRADE` and `PROGRAM TYPE` have multiple values for each school. By restricting each field to a single value, we can filter most of the duplicate rows. In the below code, we:
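The condensation idea can be sketched in plain Python (a hypothetical miniature, not the article's actual pandas code, which is truncated in this diff; for brevity it restricts only `PROGRAM TYPE` before averaging per school):

```python
from statistics import mean

# A tiny stand-in for the class_size rows shown in the table above.
rows = [
    {"DBN": "01M015", "GRADE": "0K", "PROGRAM TYPE": "GEN ED", "AVERAGE CLASS SIZE": 19.0},
    {"DBN": "01M015", "GRADE": "0K", "PROGRAM TYPE": "CTT",    "AVERAGE CLASS SIZE": 21.0},
    {"DBN": "01M015", "GRADE": "01", "PROGRAM TYPE": "GEN ED", "AVERAGE CLASS SIZE": 17.0},
]

# Step 1: restrict a field to a single value to drop duplicate rows.
filtered = [r for r in rows if r["PROGRAM TYPE"] == "GEN ED"]

# Step 2: aggregate the remaining rows down to one value per school.
by_school = {}
for r in filtered:
    by_school.setdefault(r["DBN"], []).append(r["AVERAGE CLASS SIZE"])
condensed = {dbn: mean(sizes) for dbn, sizes in by_school.items()}

print(condensed)  # {'01M015': 18.0}
```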


@ -1,69 +0,0 @@
**************Translating By messon007**************
Microfluidic cooling may prevent the demise of Moore's Law
============================================================
![](http://tr1.cbsistatic.com/hub/i/r/2015/12/09/a7cb82d1-96e8-43b5-bfbd-d4593869b230/resize/620x/9607388a284e3a61a39f4399a9202bd7/networkingistock000042544852agsandrew.jpg)
>Image: iStock/agsandrew
Existing technology's inability to keep microchips cool is fast becoming the number one reason why [Moore's Law][1] may soon meet its demise.
In the ongoing need for digital speed, scientists and engineers are working hard to squeeze more transistors and support circuitry onto an already-crowded piece of silicon. However, as complex as that seems, it pales in comparison to the [problem of heat buildup][2].
"Right now, we're limited in the power we can put into microchips," says John Ditri, principal investigator at Lockheed Martin in [this press release][3]. "One of the biggest challenges is managing the heat. If you can manage the heat, you can use fewer chips, and that means using less material, which results in cost savings as well as reduced system size and weight. If you manage the heat and use the same number of chips, you'll get even greater performance in your system."
Resistance to the flow of electrons through silicon causes the heat, and packing so many transistors in such a small space creates enough heat to destroy components. One way to eliminate heat buildup is to reduce the flow of electrons by [using photonics at the chip level][4]. However, photonic technology is not without its set of problems.
SEE: [Silicon photonics will revolutionize data centers in 2015][5]
### Microfluid cooling might be the answer
To seek out other solutions, the Defense Advanced Research Projects Agency (DARPA) has initiated a program called [ICECool Applications][6] (Intra/Interchip Enhanced Cooling). "ICECool is exploring disruptive thermal technologies that will mitigate thermal limitations on the operation of military electronic systems while significantly reducing the size, weight, and power consumption," explains the [GSA website FedBizOpps.gov][7].
What is unique about this method of cooling is the push to use a combination of intra- and/or inter-chip microfluidic cooling and on-chip thermal interconnects.
![](http://tr4.cbsistatic.com/hub/i/r/2016/05/25/fd3d0d17-bd86-4d25-a89a-a7050c4d59c4/resize/300x/e9c18034bde66526310c667aac92fbf5/microcooling-1.png)
>MicroCooling 1 Image: DARPA
The [DARPA ICECool Application announcement][8] notes, "Such miniature intra- and/or inter-chip passages (see right) may take the form of axial micro-channels, radial passages, and/or cross-flow passages, and may involve micro-pores and manifolded structures to distribute and re-direct liquid flow, including in the form of localized liquid jets, in the most favorable manner to meet the specified heat flux and heat density metrics."
Using the above technology, engineers at Lockheed Martin have experimentally demonstrated how on-chip cooling is a significant improvement. "Phase I of the ICECool program verified the effectiveness of Lockheed's embedded microfluidic cooling approach by showing a four-times reduction in thermal resistance while cooling a thermal demonstration die dissipating 1 kW/cm² die-level heat flux with multiple local 30 kW/cm² hot spots," mentions the Lockheed Martin press release.
In phase II of the Lockheed Martin project, the engineers focused on RF amplifiers. The press release continues, "Utilizing its ICECool technology, the team has been able to demonstrate greater than six times increase in RF output power from a given amplifier while still running cooler than its conventionally cooled counterpart."
### Moving to production
Confident of the technology, Lockheed Martin is already designing and building a functional microfluidic cooled transmit antenna. Lockheed Martin is also collaborating with Qorvo to integrate its thermal solution with Qorvo's high-performance [GaN process][9].
The authors of the research paper [DARPA's Intra/Interchip Enhanced Cooling (ICECool) Program][10] suggest ICECool Applications will produce a paradigm shift in the thermal management of electronic systems. "ICECool Apps performers will define and demonstrate intra-chip and inter-chip thermal management approaches that are tailored to specific applications and this approach will be consistent with the materials sets, fabrication processes, and operating environment of the intended application."
If this microfluidic technology is as successful as scientists and engineers suggest, it seems Moore's Law does have a fighting chance.
--------------------------------------------------------------------------------
via: http://www.techrepublic.com/article/microfluidic-cooling-may-prevent-the-demise-of-moores-law/
作者:[Michael Kassner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.techrepublic.com/search/?a=michael+kassner
[1]: http://www.intel.com/content/www/us/en/history/museum-gordon-moore-law.html
[2]: https://books.google.com/books?id=mfec2Zw_b7wC&pg=PA154&lpg=PA154&dq=does+heat+destroy+transistors&source=bl&ots=-aNdbMD7FD&sig=XUUiaYG_6rcxHncx4cI4Cqe3t20&hl=en&sa=X&ved=0ahUKEwif4M_Yu_PMAhVL7oMKHW3GC3cQ6AEITTAH#v=onepage&q=does%20heat%20destroy%20transis
[3]: http://www.lockheedmartin.com/us/news/press-releases/2016/march/160308-mst-cool-technology-turns-down-the-heat-on-high-tech-equipment.html
[4]: http://www.techrepublic.com/article/silicon-photonics-will-revolutionize-data-centers-in-2015/
[5]: http://www.techrepublic.com/article/silicon-photonics-will-revolutionize-data-centers-in-2015/
[6]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
[7]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
[8]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
[9]: http://electronicdesign.com/communications/what-s-difference-between-gaas-and-gan-rf-power-amplifiers
[10]: http://www.csmantech.org/Digests/2013/papers/050.pdf


@ -1,148 +0,0 @@
TOP 5 BEST VIDEO EDITING SOFTWARE FOR LINUX IN 2016
=====================================================
![](https://itsfoss.com/wp-content/uploads/2016/06/linux-video-ditor-software.jpg)
Brief: Tiwo discusses the best video editors for Linux, their pros and cons and the installation method for Ubuntu-based distros in this article.
We have discussed [best photo management applications for Linux][1], [best code editors for Linux][2] in similar articles in the past. Today we shall see the best video editing software for Linux.
When asked about free video editing software, Windows Movie Maker and iMovie are what most people often suggest.
Unfortunately, neither of them is available for GNU/Linux. But you don't need to worry about it; we have pooled together a list of the best free video editors for you.
### BEST VIDEO EDITOR APPS FOR LINUX
Let's have a look at the top 5 free video editors for Linux below:
#### 1. KDENLIVE
![](https://itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg)
[Kdenlive][3] is a free and [open source][4] video editing software from KDE that provides dual video monitors, a multi-track timeline, clip list, customizable layout support, basic effects, and basic transitions.
It supports a wide variety of file formats and a wide range of camcorders and cameras, including: low-resolution camcorders (raw and AVI DV editing); MPEG-2, MPEG-4 and H.264 AVCHD (small cameras and camcorders); high-resolution camcorder files, including HDV and AVCHD camcorders; and professional camcorder formats, including XDCAM-HD™ streams, IMX™ (D10) streams, DVCAM (D10), DVCAM, DVCPRO™, DVCPRO50™ and DNxHD™ streams.
You can install it from terminal by running the following command :
```
sudo apt-get install kdenlive
```
Or, open Ubuntu Software Center then search Kdenlive.
#### 2. OPENSHOT
![](https://itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg)
[OpenShot][5] is the second choice on our list of Linux video editing software. OpenShot helps you create films with support for transitions, effects, adjusting audio levels and, of course, most formats and codecs.
You can also export your film to DVD, or upload it to YouTube, Vimeo, Xbox 360, and many other common formats. OpenShot is simpler than Kdenlive, so if you need a video editor with a simple UI, OpenShot is a good choice.
The latest version is 2.0.7. You can install the OpenShot video editor by running the following command from a terminal window:
```
sudo apt-get install openshot
```
The download is about 25 MB, and it takes about 70 MB of disk space once installed.
#### 3. FLOWBLADE MOVIE EDITOR
![](https://itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg)
[Flowblade Movie Editor][6] is a multitrack non-linear video editor for Linux. It is free and open source. It comes with a stylish and modern user interface.
Written in Python, it is designed to be fast and precise. Flowblade has focused on providing the best possible experience on Linux and other free platforms, so there's no Windows or OS X version for now.
To install Flowblade in Ubuntu and other Ubuntu based systems, use the command below:
```
sudo apt-get install flowblade
```
#### 4. LIGHTWORKS
![](https://itsfoss.com/wp-content/uploads/2016/06/lightworks-running-on-ubuntu-16.04.jpg)
If you are looking for a video editor with more features, this is the answer. [Lightworks][7] is a cross-platform professional video editor, available for Linux, Mac OS X and Windows.
It is an award winning professional [non-linear editing][8] (NLE) software that supports resolutions up to 4K as well as video in SD and HD formats.
This application has two versions: Lightworks Free and Lightworks Pro. The free version doesn't offer export to Vimeo (H.264/MPEG-4), YouTube (H.264/MPEG-4) up to 2160p (4K UHD), Blu-ray, or H.264/MP4 with a configurable bitrate setting; the Pro version does.
- Lightworks Free
- Lightworks Pro
The Pro version has more features, such as higher resolution support and 4K and Blu-ray support.
##### HOW TO INSTALL LIGHTWORKS?
Unlike the other video editors, installing Lightworks is not as straightforward as running a single command. Don't worry, it's not that complicated either.
- Step 1 You can get the package from the [Lightworks Downloads Page][9]. The package is about 79.5 MB.
>Please note: There's no Linux 32-bit support.
- Step 2 Once downloaded, you can install it using the [Gdebi package installer][10]. Gdebi automatically downloads the dependencies:
![](https://itsfoss.com/wp-content/uploads/2016/06/Installing-lightworks-on-ubuntu.jpg)
- Step 3 Now you can open it from the Ubuntu dashboard, or your Linux distro's menu.
- Step 4 It needs an account when you use it for first time. Click at Not Registerd? button to register. Dont worry, its free!
- Step 5 After your account has been verified, now login.
Now Lightworks is ready to use.
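If you prefer the terminal over the graphical Gdebi steps above, the same install can be sketched like this (the package filename below is illustrative, not the real one; use whatever file you actually downloaded, and note the guard so the snippet is harmless where gdebi is absent):

```shell
# Hypothetical package name; substitute the .deb you downloaded.
PKG=lwks-14.0.0-amd64.deb
if command -v gdebi >/dev/null 2>&1 && [ -f "$PKG" ]; then
    # -n runs gdebi non-interactively; it resolves dependencies like the GUI does
    sudo gdebi -n "$PKG"
else
    echo "gdebi or $PKG not present; install gdebi with: sudo apt-get install gdebi"
fi
```

The guard also makes the snippet safe to paste into provisioning scripts that run on machines where the download has not happened yet.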
Need Lightworks video tutorial? Get them at [Lightworks video tutorials Page][11].
#### 5. BLENDER
![](https://itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg)
Blender is a professional, industry-grade, open source, cross-platform video editor. It is popular for 3D work. Blender has been used in several Hollywood movies, including the Spider-Man series.
Although originally designed for 3D modeling, it can also be used for video editing, with input support for a variety of formats. The Video Editor includes:
- Live preview, luma waveform, chroma vectorscope and histogram displays
- Audio mixing, syncing, scrubbing and waveform visualization
- Up to 32 slots for adding video, images, audio, scenes, masks and effects
- Speed control, adjustment layers, transitions, keyframes, filters and more.
The latest version can be downloaded from [Blender Download Page][12].
### WHICH IS THE BEST VIDEO EDITING SOFTWARE?
If you need a simple video editor, OpenShot, Kdenlive or Flowblade is a good choice. These are suitable for beginners and for systems with standard specifications.
If you have a high-end computer and need advanced features, you can go with Lightworks. If you are looking for even more advanced features, Blender has got your back.
So thats all I can write about 5 best video editing software for Linux such as Ubuntu, Linux Mint, Elementary, and other Linux distributions. Share with us which video editor you like the most.
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-video-editing-software-linux/
作者:[Tiwo Satriatama][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/tiwo/
[1]: https://itsfoss.com/linux-photo-management-software/
[2]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[3]: https://kdenlive.org/
[4]: https://itsfoss.com/tag/open-source/
[5]: http://www.openshot.org/
[6]: http://jliljebl.github.io/flowblade/
[7]: https://www.lwks.com/
[8]: https://en.wikipedia.org/wiki/Non-linear_editing_system
[9]: https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206
[10]: https://itsfoss.com/gdebi-default-ubuntu-software-center/
[11]: https://www.lwks.com/videotutorials
[12]: https://www.blender.org/download/

OneNewLife translating
# How to Encrypt and Decrypt Files and Directories Using Tar and OpenSSL
When you have important sensitive data, its crucial to add an extra layer of security to your files and directories, especially when you need to transmit the data to others over a network.
Thats why I went looking for a utility to encrypt and decrypt certain files and directories in Linux. Luckily, I found that tar together with OpenSSL can do the trick: with the help of these two tools you can easily create and encrypt a tar archive file without any hassle.
In this article, we will see how to create and encrypt a tar or gz (gzip) archive file with OpenSSL:
Remember that the conventional form of using OpenSSL is:
```
# openssl command command-options arguments
```
#### Encrypt Files in Linux
To encrypt the contents of the current working directory (depending on the size of the files, this may take a while):
```
# tar -czf - * | openssl enc -e -aes256 -out secured.tar.gz
```
Explanation of the above command:
1. `enc`  openssl command to encode with ciphers
2. `-e`  an enc option used to encrypt the input file, which in this case is the output of the tar command
3. `-aes256`  the encryption cipher
4. `-out`  enc option used to specify the name of the output file, secured.tar.gz
#### Decrypt Files in Linux
To decrypt a tar archive contents, use the following command.
```
# openssl enc -d -aes256 -in secured.tar.gz | tar xz -C test
```
Explanation of the above command:
1. `-d`  used to decrypt the files
2. `-C`  extract in subdirectory named test
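Putting the two commands together, here is a minimal, self-contained round trip you can run in a scratch directory. The `-pass pass:...` option and the sample passphrase are additions for making the sketch non-interactive; in real use, let openssl prompt for the passphrase instead of putting it on the command line:

```shell
# Create sample data and a destination directory for extraction
mkdir -p demo test
echo "secret data" > demo/file.txt

# Encrypt: tar the directory and pipe it through openssl
tar -czf - demo | openssl enc -e -aes256 -pass pass:demo123 -out secured.tar.gz

# Decrypt: reverse the pipe and extract into ./test
openssl enc -d -aes256 -pass pass:demo123 -in secured.tar.gz | tar xz -C test

cat test/demo/file.txt   # prints: secret data
```

Because encryption and decryption must use the same cipher and key-derivation defaults, perform both steps with the same OpenSSL version when moving archives between machines.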
The following image shows the encryption process and what happens when you try to:
1. extract the contents of the tarball the traditional way
2. use the wrong password, and
3. when you enter the right password
[![Encrypt or Decrypt Tar Archive File in Linux](http://www.tecmint.com/wp-content/uploads/2016/08/Encrypt-Decrypt-Tar-Archive-Files-in-Linux.png)][1]
Encrypt or Decrypt Tar Archive File in Linux
When you are working on a local network or the Internet, you can always secure your vital documents or files that you share with others by encrypting them; this can help reduce the risk of exposing them to malicious attackers.
We looked at a simple technique for encrypting tarballs using OpenSSL, a command line cryptography tool. You can refer to its man page for more information and useful commands.
As usual, for any additional thoughts or simple tips that you wish to share with us, use the feedback form below and in the upcoming tip, we shall look at a way of translating rwx permissions into octal form.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/encrypt-decrypt-files-tar-openssl-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/wp-content/uploads/2016/08/Encrypt-Decrypt-Tar-Archive-Files-in-Linux.png

translating by ucasFL
COOLEST PRIVACY FOCUSED OPEN SOURCE CHAT APP WIRE COMES TO LINUX
===========
[![Open Source messaging app Wire comes to Linux](https://itsfoss.com/wp-content/uploads/2016/10/wire-on-desktop-linux.jpeg)][21]
Around two years back, a few people behind [Skype][20] launched a beautiful new messaging app, [Wire][19]. When I say beautiful, I am talking about the looks. Wire has an uncluttered, sleek look which many other messaging apps dont have. But thats not its best selling point.
Since the beginning, Wire marketed itself as the [worlds most private messaging app][18]. It offers end to end encryption to text and voice calls, graphics, images, basically every content you share.
WhatsApp also offers end to end encryption but considering that its owner [Facebook is sharing WhatsApp data for ad targeting][17], I have less faith in WhatsApp and its encryption.
What makes Wire even more special for us FOSS lovers is that a few months back [Wire went open source][16]. A few months down the line, we have a beta version of the Wire desktop application for Linux.
The desktop client is nothing more than a wrapper around its web version. Thanks to the [open source Electron project][15] for providing a way to easily make cross-platform desktop applications. Many other applications have used Electron to bring a native desktop app to Linux, including [Skype][14].
### WIRE FEATURES:
Before we see more about the Linux version of Wire, lets have a quick look at some of its main features.
* Open source application
* Complete encryption for all type of contents
* No ads, no data gathering, no data sharing
* Text, voice and video chats
* Group chats and calls
* [Audio filters][1] (no need to inhale Helium, just apply that filter and talk in a funny voice)
* No phone numbers required, can be signed up with email
* Sleek, modern interface
* Cross platform messaging app with iOS, Android, Web, Mac, Windows and Linux clients
* Protected by European laws (which are more privacy oriented than the US ones)
Wire has some seriously cool features up its sleeve, especially those audio filters akin to [Snapchat][13].
### INSTALL WIRE ON LINUX
Before you go on installing Wire on Linux, let me warn you that it is still in beta phase. So, if you encounter a few bugs, dont get miffed.
Wire has a .deb client available for 64 bit systems. You can use these tips to find out if you got [32 bit or 64 bit system][12]. You can download the .deb file from the link below:
[Download Wire for Linux [Beta]][11]
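If you are unsure about your architecture, a quick generic check from a terminal (not specific to Wire) is:

```shell
# x86_64 indicates a 64-bit Intel/AMD system; i686/i386 would be 32-bit
uname -m
```

The linked article above covers this in more detail, including how to check it graphically.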
If you are interested, you can have a look at the source code also:
[Wire Desktop Source Code][10]
This is what the default interface of Wire looks like in [elementary OS Loki][9]:
[![Wire desktop application in Linux](https://itsfoss.com/wp-content/uploads/2016/10/Wire-desktop-appl-linux.jpeg)][8]
You see, they have even got bots here :)
Have you been already using Wire? If yes, how is your experience with it? If no, will you give it a try since its [open source][7] now and available for Linux?
--------------------------------------------------------------------------------
via: https://itsfoss.com/wire-messaging-linux/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ItsFoss+%28Its+FOSS%21+An+Open+Source+Blog%29
作者:[ Abhishek Prakash ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/abhishek/
[1]:https://medium.com/colorful-conversations/the-tune-for-this-summer-audio-filters-eca8cb0b4c57#.c8gvs143k
[2]:http://pinterest.com/pin/create/button/?url=https://itsfoss.com/wire-messaging-linux/&description=Coolest+Privacy+Focused+Open+Source+Chat+App+Wire+Comes+To+Linux&media=https://itsfoss.com/wp-content/uploads/2016/10/wire-on-desktop-linux.jpeg
[3]:https://www.linkedin.com/cws/share?url=https://itsfoss.com/wire-messaging-linux/
[4]:https://twitter.com/share?original_referer=https%3A%2F%2Fitsfoss.com%2F&source=tweetbutton&text=Coolest+Privacy+Focused+Open+Source+Chat+App+Wire+Comes+To+Linux&url=https%3A%2F%2Fitsfoss.com%2Fwire-messaging-linux%2F&via=%40itsfoss
[5]:https://itsfoss.com/wire-messaging-linux/#comments
[6]:https://itsfoss.com/author/abhishek/
[7]:https://itsfoss.com/tag/open-source
[8]:https://itsfoss.com/wp-content/uploads/2016/10/Wire-desktop-appl-linux.jpeg
[9]:https://itsfoss.com/tag/elementary-os-loki/
[10]:https://github.com/wireapp/wire-desktop
[11]:https://wire.com/download/
[12]:https://itsfoss.com/32-bit-64-bit-ubuntu/
[13]:https://www.snapchat.com/
[14]:https://itsfoss.com/skpe-alpha-linux/
[15]:http://electron.atom.io/
[16]:http://www.infoworld.com/article/3099194/security/wire-open-sources-messaging-client-woos-developers.html
[17]:https://techcrunch.com/2016/08/25/whatsapp-to-share-user-data-with-facebook-for-ad-targeting-heres-how-to-opt-out/
[18]:http://www.ibtimes.co.uk/wire-worlds-most-private-messaging-app-offers-total-encryption-calls-texts-1548964
[19]:https://wire.com/
[20]:https://www.skype.com/en/
[21]:https://itsfoss.com/wp-content/uploads/2016/10/wire-on-desktop-linux.jpeg

fuowang translating
WattOS: A Rock-Solid, Lightning-Fast, Lightweight Linux Distro For All
=============================

willcoderewang translating
How To Manually Backup Your SMS / MMS Messages On Android?
============================================================
![Android backup sms mms](https://iwf1.com/wordpress/wp-content/uploads/2016/10/Android-backup-sms-mms.jpg)
If youre switching a device or upgrading your system, making a backup of your data might be of crucial importance.
One of the places where our important data may lie, is in our SMS / MMS messages, be it of sentimental or utilizable value, backing it up might prove quite useful.
However, unlike our photos, videos or song files, which can be transferred and backed up with relative ease, backing up our SMS / MMS messages usually proves to be a more complicated task, one that commonly requires involving a third-party app or service.
### Why Do It Manually?
Although quite a few different apps currently exist that can take care of backing up SMS and MMS for you, you may want to consider doing it manually for the following reasons:
1. Apps **may not work** on different devices or different Android versions.
2. Apps may backup your data by uploading it to the Internet cloud therefore requiring you to **jeopardize the safety** of your content.
3. By backing up manually, you have complete control over where your data goes and what it goes through, thus **limiting the risk of spyware** in the process.
4. Doing it manually can be overall **less time consuming, easier and more straightforward** than any other way.
### How To Backup SMS / MMS Manually?
To backup your SMS / MMS messages manually youll need to have an Android tool called [adb][1] installed on your computer.
Now, the important thing to know regarding SMS / MMS is that Android stores them in a database commonly named **mmssms.db.**
Since the location of that database may differ from one device to another, and because other SMS apps can create databases of their own (such as gommssms.db, created by the GO SMS app), the first thing youd want to do is search for these databases.
So, open up your CLI tool (I use Linux Terminal, you may use Windows CMD or PowerShell) and issue the following commands:
Note: below is a series of commands needed for the task and later is the explanation of what each command does.
```
adb root
adb shell
find / -name "*mmssms*"
exit
adb pull /PATH/TO/mmssms.db /PATH/TO/DESTINATION/FOLDER
```
#### Explanation:
We start with adb root command in order to start adb in root mode so that well have permissions to reach system protected files as well.
“adb shell” is used to get inside the device shell.
Next, the “find” command is used to search for the databases. (in my case its found in: /data/data/com.android.providers.telephony/databases/mmssms.db)
* Tip: if your Terminal prints too many irrelevant results, try refining your “find” parameters (google it).
[
![Android SMS&MMS databases](http://iwf1.com/wordpress/wp-content/uploads/2016/10/Android-SMSMMS-databases-730x726.jpg)
][2]
Android SMS&MMS databases
Then we use exit command in order to exit back to our local system directory.
Lastly, adb pull is used to copy the database files into a folder on our computer.
Now, once youre ready to restore your SMS / MMS messages, whether its on a new device or a new system version, simply search again for the location of mmssms.db on the new system and replace it with the one youve backed up.
Use adb push to replace it, e.g.: `adb push ~/Downloads/mmssms.db /data/data/com.android.providers.telephony/databases/mmssms.db`
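The whole backup procedure can be collected into a small script. This is a hedged, one-shot variation on the manual steps above (the search path and output filename are choices of this sketch, not from the article), guarded so it reports and moves on when adb or a connected device is missing:

```shell
# One-shot SMS/MMS database backup; requires a rooted device for /data access.
if command -v adb >/dev/null 2>&1; then
    adb root 2>/dev/null
    # Locate the database; adb shell output may carry \r, so strip it
    DB=$(adb shell find / -name "*mmssms*" 2>/dev/null | grep 'mmssms\.db$' | head -n 1 | tr -d '\r')
    if [ -n "$DB" ]; then
        adb pull "$DB" ./mmssms-backup.db && echo "backed up $DB to ./mmssms-backup.db"
    else
        echo "no mmssms.db found (is the device connected and rooted?)"
    fi
else
    echo "adb not found; install the Android platform tools first"
fi
```

Restoring is the mirror image: `adb push ./mmssms-backup.db "$DB"` against the path found on the new system.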
--------------------------------------------------------------------------------
via: https://iwf1.com/how-to-manually-backup-your-sms-mms-messages-on-android/
作者:[Liron ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/tag/android
[1]:http://developer.android.com/tools/help/adb.html
[2]:http://iwf1.com/wordpress/wp-content/uploads/2016/10/Android-SMSMMS-databases.jpg

LinuxBars translating
How to Secure Network Services Using TCP Wrappers in Linux
===========
In this article we will explain what TCP wrappers are and how to configure them to [restrict access to network services][7] running on a Linux server. Before we start, however, we must clarify that the use of TCP wrappers does not eliminate the need for a properly [configured firewall][6].
In this regard, you can think of this tool as a [host-based access control list][5], and not as the [ultimate security measure][4] for your system. By using a firewall and TCP wrappers, instead of favoring one over the other, you will make sure that your server is not left with a single point of failure.
### Understanding hosts.allow and hosts.deny
When a network request reaches your server, TCP wrappers uses `hosts.allow` and `hosts.deny` (in that order) to determine if the client should be allowed to use a given service.
By default, these files are empty, all commented out, or do not exist. Thus, everything is allowed through the TCP wrappers layer and your system is left to rely on the firewall for full protection. Since this is not desired, due to the reason we stated in the introduction, make sure both files exist:
```
# ls -l /etc/hosts.allow /etc/hosts.deny
```
The syntax of both files is the same:
```
<services> : <clients> [: <option1> : <option2> : ...]
```
where,
1. services is a comma-separated list of services the current rule should be applied to.
2. clients represent the list of comma-separated hostnames or IP addresses affected by the rule. The following wildcards are accepted:
1. ALL matches everything. Applies both to clients and services.
2. LOCAL matches hosts without a period in their FQDN, such as localhost.
3. KNOWN  indicates a situation where the hostname, host address, or user is known.
4. UNKNOWN is the opposite of KNOWN.
5. PARANOID causes a connection to be dropped if reverse DNS lookups (first on IP address to determine host name, then on host name to obtain the IP addresses) return a different address in each case.
3. Finally, an optional list of colon-separated actions indicates what should happen when a given rule is triggered.
You may want to keep in mind that a rule allowing access to a given service in `/etc/hosts.allow` takes precedence over a rule in `/etc/hosts.deny` prohibiting it. Additionally, if two rules apply to the same service, only the first one will be taken into account.
Unfortunately, not all network services support the use of TCP wrappers. To determine if a given service supports them, do:
```
# ldd /path/to/binary | grep libwrap
```
If the above command returns output, the service can be TCP-wrapped. Examples of this are sshd and vsftpd, as shown here:
[![Find Supported Services in TCP Wrapper](http://www.tecmint.com/wp-content/uploads/2016/10/Find-Supported-Services-in-TCP-Wrapper.png)][3]
Find Supported Services in TCP Wrapper
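The ldd check above can be looped over several daemons at once. A quick sketch (the binary paths are typical Debian/RHEL locations and may differ on your distribution):

```shell
# Report libwrap support for a few common network daemons
for bin in /usr/sbin/sshd /usr/sbin/vsftpd /usr/sbin/xinetd; do
    if [ -x "$bin" ]; then
        if ldd "$bin" 2>/dev/null | grep -q libwrap; then
            echo "$bin: supports TCP wrappers"
        else
            echo "$bin: no libwrap linkage"
        fi
    fi
done
```

Daemons without libwrap linkage must be protected by the firewall alone, which is another reason to keep both layers configured.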
### How to Use TCP Wrappers to Restrict Access to Services
As you edit `/etc/hosts.allow` and `/etc/hosts.deny`, make sure you add a newline by pressing Enter after the last non-empty line.
To [allow SSH and FTP access][2] only to 192.168.0.102 and localhost and deny all others, add these two lines in `/etc/hosts.deny`:
```
sshd,vsftpd : ALL
ALL : ALL
```
and the following line in `/etc/hosts.allow`:
```
sshd,vsftpd : 192.168.0.102,LOCAL
```
TCP Wrappers hosts.deny File
```
#
# hosts.deny This file contains access rules which are used to
# deny connections to network services that either use
# the tcp_wrappers library or that have been
# started through a tcp_wrappers-enabled xinetd.
#
# The rules in this file can also be set up in
# /etc/hosts.allow with a 'deny' option instead.
#
# See 'man 5 hosts_options' and 'man 5 hosts_access'
# for information on rule syntax.
# See 'man tcpd' for information on tcp_wrappers
#
sshd,vsftpd : ALL
ALL : ALL
```
TCP Wrappers hosts.allow File
```
#
# hosts.allow This file contains access rules which are used to
# allow or deny connections to network services that
# either use the tcp_wrappers library or that have been
# started through a tcp_wrappers-enabled xinetd.
#
# See 'man 5 hosts_options' and 'man 5 hosts_access'
# for information on rule syntax.
# See 'man tcpd' for information on tcp_wrappers
#
sshd,vsftpd : 192.168.0.102,LOCAL
```
These changes take place immediately without the need for a restart.
In the following image you can see the effect of removing the word `LOCAL` from the last line: the FTP server will become unavailable for localhost. After we add the wildcard back, the service becomes available again.
[![Verify FTP Access ](http://www.tecmint.com/wp-content/uploads/2016/10/Verify-FTP-Access.png)][1]
Verify FTP Access
To allow all services to hosts where the name contains `example.com`, add this line in `hosts.allow`:
```
ALL : .example.com
```
and to deny access to vsftpd to machines on 10.0.1.0/24, add this line in `hosts.deny`:
```
vsftpd : 10.0.1.
```
In the last two examples, notice the dot at the beginning and at the end of the client list. It is used to indicate “all hosts and / or clients whose name or IP contains that string”.
Was this article helpful to you? Do you have any questions or comments? Feel free to drop us a note using the comment form below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/secure-linux-tcp-wrappers-hosts-allow-deny-restrict-access/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Verify-FTP-Access.png
[2]:http://www.tecmint.com/block-ssh-and-ftp-access-to-specific-ip-and-network-range/
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-Supported-Services-in-TCP-Wrapper.png
[4]:http://www.tecmint.com/linux-server-hardening-security-tips/
[5]:http://www.tecmint.com/secure-files-using-acls-in-linux/
[6]:http://www.tecmint.com/configure-firewalld-in-centos-7/
[7]:http://www.tecmint.com/mandatory-access-control-with-selinux-or-apparmor-linux/

bjwrkj translating..
# Suspend to Idle
### Introduction
The Linux kernel supports a variety of sleep states.  These states provide power savings by placing the various parts of the system into low power modes.  The four sleep states are suspend to idle, power-on standby (standby), suspend to ram, and suspend to disk.  These are also referred to sometimes by their ACPI state: S0, S1, S3, and S4, respectively.  Suspend to idle is purely software driven and involves keeping the CPUs in their deepest idle state as much as possible.  Power-on standby involves placing devices in low power states and powering off all non-boot CPUs.  Suspend to ram takes this further by powering off all CPUs and putting the memory into self-refresh.  Lastly, suspend to disk gets the greatest power savings through powering off as much of the system as possible, including the memory.  The contents of memory are written to disk, and on resume this is read back into memory.
This blog post focuses on the implementation of suspend to idle.  As described above, suspend to idle is a software implemented sleep state.  The system goes through a normal platform suspend where it freezes the user space and puts peripherals into low-power states.  However, instead of powering off and hotplugging out CPUs, the system is quiesced and forced into an idle cpu state.  With peripherals in low power mode, no IRQs should occur, aside from wake related irqs.  These wake irqs could be timers set to wake the system (RTC, generic timers, etc), or other sources like power buttons, USB, and other peripherals.
During freeze, a special cpuidle function is called as processors enter idle.  This enter_freeze() function can be as simple as calling the cpuidle enter() function, or can be much more complex.  The complexity of the function is dependent on the SoCs requirements and methods for placing the SoC into lower power modes.
### Prerequisites
### Platform suspend_ops
Typically, to support S2I, a system must implement a platform_suspend_ops and provide at least minimal suspend support.  This meant filling in at least the valid() function in the platform_suspend_ops.  If suspend-to-idle and suspend-to-ram was to be supported, the suspend_valid_only_mem would be used for the valid function.
Recently, however, automatic support for S2I was added to the kernel.  Sudeep Holla proposed a change that would provide S2I support on systems without requiring the implementation of platform_suspend_ops.  This patch set was accepted and will be part of the 4.9 release.  The patch can be found at:  [https://lkml.org/lkml/2016/8/19/474][1]
With suspend_ops defined, the system will report the valid platform suspend states when the /sys/power/state is read.
```
# cat /sys/power/state
freeze mem
```
This example shows that both S0 (suspend to idle) and S3 (suspend to ram) are supported on this platform.  With Sudeeps change, only freeze will show up for platforms which do not implement platform_suspend_ops.
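A script can use the same sysfs file to decide whether suspend-to-idle is available before attempting it; this small check is a generic sketch, not from the original post:

```shell
# Check which sleep states the running kernel advertises
if [ -r /sys/power/state ]; then
    if grep -qw freeze /sys/power/state; then
        echo "suspend-to-idle (freeze) is supported"
    else
        echo "freeze not advertised; states: $(cat /sys/power/state)"
    fi
else
    echo "/sys/power/state not available (not a Linux host, or sysfs not mounted)"
fi
```

With the automatic S2I support described above, kernels from 4.9 onward should advertise freeze even without platform_suspend_ops.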
### Wake IRQ support
Once the system is placed into a sleep state, the system must receive wake events which will resume the system.  These wake events are generated from devices on the system.  It is important to make sure that device drivers utilize wake irqs and configure themselves to generate wake events upon receiving wake irqs.  If wake devices are not identified properly, the system will take the interrupt and then go back to sleep and will not resume.
Once devices implement proper wake API usage, they can be used to generate wake events.  Make sure DT files also specify wake sources properly.  An example of configuring a wakeup-source is the following (arch/arm/boot/dts/am335x-evm.dts):
```
    gpio_keys: volume_keys@0 {
               compatible = "gpio-keys";
               #address-cells = <1>;
               #size-cells = <0>;
               autorepeat;

               switch@9 {
                       label = "volume-up";
                       linux,code = <115>;
                       gpios = <&gpio0 2 GPIO_ACTIVE_LOW>;
                       wakeup-source;
               };

               switch@10 {
                       label = "volume-down";
                       linux,code = <114>;
                       gpios = <&gpio0 3 GPIO_ACTIVE_LOW>;
                       wakeup-source;
               };
       };
```
As you can see, two gpio keys are defined to be wakeup-sources.  Either of these keys, when pressed, would generate a wake event during suspend.
An alternative to DT configuration is if the device driver itself configures wake support in the code using the typical wakeup facilities.
### Implementation
### Freeze function
Systems should define a enter_freeze() function in their cpuidle driver if they want to take full advantage of suspend to idle.  The enter_freeze() function uses a slightly different function prototype than the enter() function.  As such, you cant just specify the enter() for both enter and enter_freeze.  At a minimum, it will directly call the enter() function.  If no enter_freeze() is specified, the suspend will occur, but the extra things that would have occurred if enter_freeze() was present, like tick_freeze() and stop_critical_timings(), will not occur.  This results in timer IRQs waking up the system.  This will not result in a resume, as the system will go back into suspend after handling the IRQ.
During suspend, minimal interrupts should occur (ideally none).
The picture below shows a plot of power usage vs time.  The two spikes on the graph are the suspend and the resume.  The small periodic spikes before and after the suspend are the system exiting idle to do bookkeeping operations, scheduling tasks, and handling timers.  It takes a certain period of time for the system to go back into the deeper idle state due to latency.
![blog-picture-one](http://www.linaro.org/wp-content/uploads/2016/10/blog-picture-one-1024x767.png)
Power Usage Time Progression
The ftrace capture shown below displays the activity on the 4 CPUs before, during, and after the suspend/resume operation.  As you can see, during the suspend, no IPIs or IRQs are handled.  
![blog-picture-2](http://www.linaro.org/wp-content/uploads/2016/10/blog-picture-2-1024x577.png)
Ftrace capture of Suspend/Resume
### Idle State Support
You must determine which idle states support freeze.  During freeze, the power code will determine the deepest idle state that supports freeze.  This is done by iterating through the idle states and looking for which states have defined enter_freeze().  The cpuidle driver or SoC specific suspend code must determine which idle states should implement freeze and it must configure them by specifying the freeze function for all applicable idle states for each cpu.
As an example, the Qualcomm platform will set the enter_freeze function during the suspend init function in the platform suspend code.  This is done after the cpuidle driver is initialized so that all structures are defined and in place.
### Driver support for Suspend/Resume
You may encounter buggy drivers during your first successful suspend operation.  Many drivers have not had robust testing of suspend/resume paths.  You may even find that suspend may not have much to do because pm_runtime has already done everything you would have done in the suspend.  Because the user space is frozen, the devices should already be idled and pm_runtime disabled.
### Testing
Testing for suspend to idle can be done either manually, or through using something that does an auto suspend (script/process/etc), auto sleep or through something like Android where if a wakelock is not held the system continuously tried to suspend.  If done manually, the following will place the system in freeze:
```
/ # echo freeze > /sys/power/state
[  142.580832] PM: Syncing filesystems … done.
[  142.583977] Freezing user space processes … (elapsed 0.001 seconds) done.
[  142.591164] Double checking all user space processes after OOM killer disable… (elapsed 0.000 seconds)
[  142.600444] Freezing remaining freezable tasks … (elapsed 0.001 seconds) done.
[  142.608073] Suspending console(s) (use no_console_suspend to debug)
[  142.708787] mmc1: Reset 0x1 never completed.
[  142.710608] msm_otg 78d9000.phy: USB in low power mode
[  142.711379] PM: suspend of devices complete after 102.883 msecs
[  142.712162] PM: late suspend of devices complete after 0.773 msecs
[  142.712607] PM: noirq suspend of devices complete after 0.438 msecs
< system suspended >
….
< wake irq triggered >
[  147.700522] PM: noirq resume of devices complete after 0.216 msecs
[  147.701004] PM: early resume of devices complete after 0.353 msecs
[  147.701636] msm_otg 78d9000.phy: USB exited from low power mode
[  147.704492] PM: resume of devices complete after 3.479 msecs
[  147.835599] Restarting tasks … done.
/ #
```
In the above example, it should be noted that the MMC driver was responsible for 100ms of that 102.883ms.  Some device drivers will still have work to do when suspending.  This may be flushing of data out to disk or other tasks which take some time.
If the system has freeze defined, it will try to suspend the system.  If it does not have freeze capabilities, you will see the following:
```
/ # echo freeze > /sys/power/state 
sh: write error: Invalid argument
/ #
```
### Future Developments
There are two areas where work is currently being done on Suspend to Idle on ARM platforms.  The first area was mentioned earlier in the platform_suspend_ops prerequisite section.  The work to always allow for the freeze state was accepted and will be part of the 4.9 kernel.  The other area that is being worked on is the freeze_function support.
The freeze_function implementation is currently required if you want the best response/performance.  However, since most SoCs will use the ARM cpuidle driver, it makes sense for the ARM cpuidle driver to implement its own generic freeze_function.  And in fact, ARM is working to add this generic support.  A SoC vendor should only have to implement specialized freeze_functions if they implement their own cpuidle driver or require additional provisioning before entering their deepest freezable idle state.
--------------------------------------------------------------------------------
via: http://www.linaro.org/blog/suspend-to-idle/
作者:[Andy Gross][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linaro.org/author/andygross/
[1]:https://lkml.org/lkml/2016/8/19/474


@ -0,0 +1,101 @@
Yinux translating
Livepatch: Apply Critical Security Patches to Ubuntu Linux Kernel Without Rebooting
============================================================
If you are a system administrator in charge of maintaining critical systems in enterprise environments, we are sure you know two important things:
1) Finding a downtime window to install security patches in order to handle kernel or operating system vulnerabilities can be difficult. If the company or business you work for does not have security policies in place, operations management may end up favoring uptime over the need to solve vulnerabilities. Additionally, internal bureaucracy can cause delays in granting approvals for a downtime. Been there myself.
2) Sometimes you cant really afford a downtime, and should be prepared to mitigate any potential exposures to malicious attacks some other way.
The good news is that Canonical has recently released (actually, a couple of days ago) its Livepatch service to apply critical kernel patches to Ubuntu 16.04 (64-bit edition / 4.4.x kernel) without the need for a reboot afterwards. Yes, you read that right: with Livepatch, you dont need to restart your Ubuntu 16.04 server in order for the security patches to take effect.
### Signing up for Ubuntu Livepatch
In order to use Canonical Livepatch Service, you need to sign up at [https://auth.livepatch.canonical.com/][1] and indicate if you are a regular Ubuntu user or an Advantage subscriber (paid option). All Ubuntu users can link up to 3 different machines to Livepatch through the use of a token:
[
![Canonical Livepatch Service](http://www.tecmint.com/wp-content/uploads/2016/10/Canonical-Livepatch-Service.png)
][2]
Canonical Livepatch Service
In the next step you will be prompted to enter your Ubuntu One credentials or sign up for a new account. If you choose the latter, you will need to confirm your email address in order to finish your registration:
[
![Ubuntu One Confirmation Mail](http://www.tecmint.com/wp-content/uploads/2016/10/Ubuntu-One-Confirmation-Mail.png)
][3]
Ubuntu One Confirmation Mail
Once you click on the link above to confirm your email address, youll be ready to go back to [https://auth.livepatch.canonical.com/][4] and get your Livepatch token.
### Getting and Using your Livepatch Token
To begin, copy the unique token assigned to your Ubuntu One account:
[
![Canonical Livepatch Token](http://www.tecmint.com/wp-content/uploads/2016/10/Livepatch-Token.png)
][5]
Canonical Livepatch Token
Then go to a terminal and type:
```
$ sudo snap install canonical-livepatch
```
The above command will install the canonical-livepatch snap, whereas
```
$ sudo canonical-livepatch enable [YOUR TOKEN HERE]
```
will enable it for your system. If this last command indicates it cant find canonical-livepatch, make sure `/snap/bin` has been added to your PATH. A workaround consists of changing your working directory to `/snap/bin` and running:
```
$ sudo ./canonical-livepatch enable [YOUR TOKEN HERE]
```
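Alternatively, a small sketch of making `/snap/bin` reachable for the current shell session (the line that persists it in `~/.profile` is shown commented out, since it modifies your login environment):

```shell
# add /snap/bin to PATH for this shell session, if it is not there already
case ":$PATH:" in
  *:/snap/bin:*) ;;                            # already present
  *) PATH="$PATH:/snap/bin"; export PATH ;;
esac

# to make it permanent for future logins, append it to ~/.profile:
# echo 'export PATH="$PATH:/snap/bin"' >> ~/.profile

echo "$PATH" | grep -q "/snap/bin" && echo "snap binaries reachable"
```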
[
![Install Livepatch in Ubuntu](http://www.tecmint.com/wp-content/uploads/2016/10/Install-Livepatch-in-Ubuntu.png)
][6]
Install Livepatch in Ubuntu
Over time, youll want to check the description and the status of patches applied to your kernel. Fortunately, this is as easy as running:
```
$ sudo ./canonical-livepatch status --verbose
```
as you can see in the following image:
[
![Check Livepatch Status in Ubuntu](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Livepatch-Status.png)
][7]
Check Livepatch Status in Ubuntu
Having enabled Livepatch on your Ubuntu server, you will be able to keep planned and unplanned downtime to a minimum while keeping your system secure. Hopefully Canonicals initiative will earn you a pat on the back from management, or better yet, a raise.
Feel free to let us know if you have any questions about this article. Just drop us a note using the comment form below and we will get back to you as soon as possible.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/livepatch-install-critical-security-patches-to-ubuntu-kernel
作者:[Gabriel Cánepa][a]
译者:[Yinux](https://github.com/Yinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://auth.livepatch.canonical.com/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Canonical-Livepatch-Service.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Ubuntu-One-Confirmation-Mail.png
[4]:https://auth.livepatch.canonical.com/
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Livepatch-Token.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/Install-Livepatch-in-Ubuntu.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Livepatch-Status.png


@ -1,3 +1,5 @@
FSSlc translating
HOW TO SHARE STEAM GAME FILES BETWEEN LINUX AND WINDOWS
============


@ -1,550 +0,0 @@
# Getting Started with Webpack 2
![](https://cdn-images-1.medium.com/max/2000/1*yI44h8Df-l-2LUqvXIi8JQ.png)
Webpack 2 will be out of beta [once the documentation has been finished][26]. But that doesnt mean you cant start using version 2 now if you know how to configure it.
### What is Webpack?
At its simplest, Webpack is a module bundler for your JavaScript. However, since its release its evolved into a manager of all your front-end code (either intentionally or by the communitys will).
![](https://cdn-images-1.medium.com/max/800/1*yBt2rFj2DbckFliGE0LEyg.png)
A task runner such as _Gulp_ can handle many different preprocessors and transpilers, but in all cases, it will take a source _input_ and crunch it into a compiled _output_. However, it does this on a case-by-case basis with no concern for the system at large. That is the burden of the developer: to pick up where the task runner left off and find the proper way for all these moving parts to mesh together in production.
Webpack attempts to lighten the developer load a bit by asking a bold question: _what if there were a part of the development process that handled dependencies on its own? What if we could simply write code in such a way that the build process managed itself, based on only what was necessary in the end?_
![](https://cdn-images-1.medium.com/max/800/1*TOFfoH0cXTc8G3Y_F6j3Jg.png)
If youve been a part of the web community for the past few years, you already know the preferred method of solving a problem: _build this with JavaScript._ And so Webpack attempts to make the build process easier by passing dependencies through JavaScript. But the true power of its design isnt simply the code _management_ part; its that this management layer is 100% valid JavaScript (with Node features). Webpack gives you the ability to write valid JavaScript that has a better sense of the system at large.
In other words: _you dont write code for Webpack. You write code for your project._ And Webpack keeps up (with some config, of course).
In a nutshell, if youve ever struggled with any of the following:
* Accidentally including stylesheets and JS libraries you dont need into production, bloating the size
* Encountering scoping issues—both from CSS and JavaScript
* Finding a good system for using Node/Bower modules in your JavaScript, or relying on a crazy backend configuration to properly utilize those modules
* Needing to optimize asset delivery better but fearing youll break something
…then you could benefit from Webpack. It handles all the above effortlessly by letting JavaScript worry about your dependencies and load order instead of your developer brain. The best part? Webpack can even run purely on the server side, meaning you can still build [progressively-enhanced][25] websites using Webpack.
### First Steps
Well use [Yarn][24] (`brew install yarn`) in this tutorial instead of `npm`, but its totally up to you; they do the same thing. From our project folder, well run the following in a terminal window to add Webpack 2 to both our global packages and our local project:
```
yarn global add webpack@2.1.0-beta.25 webpack-dev-server@2.1.0-beta.9
yarn add --dev webpack@2.1.0-beta.25 webpack-dev-server@2.1.0-beta.9
```
Well then declare a webpack configuration with a `webpack.config.js` file in the root of our project directory:
```
'use strict';

const webpack = require("webpack");

module.exports = {
  context: __dirname + "/src",
  entry: {
    app: "./app.js",
  },
  output: {
    path: __dirname + "/dist",
    filename: "[name].bundle.js",
  },
};
```
_Note: `__dirname` refers to the root of your project._
Remember that Webpack “knows” whats going on in your project? It _knows_ by reading your code (dont worry; it signed an NDA). Webpack basically does the following:
1. Starting from the `context` folder, …
2. … it looks for `entry` filenames 
3. … and reads the content. Every `import` ([ES6][7]) or `require()` (Node) dependency it finds as it parses the code, it bundles for the final build. It then searches _those_ dependencies, and those dependencies dependencies, until it reaches the very end of the “tree”—only bundling what it needed to, and nothing else.
4. From there, Webpack bundles everything to the `output.path` folder, naming it using the `output.filename` naming template (`[name]` gets replaced with the object key from `entry`)
So if our `src/app.js` file looked something like this (assuming we ran `yarn add --dev moment` beforehand):
```
'use strict';

import moment from 'moment';

var rightNow = moment().format('MMMM Do YYYY, h:mm:ss a');
console.log( rightNow );
// "October 23rd 2016, 9:30:24 pm"
```
Wed run
```
webpack -p
```
_Note: the `-p` flag is “production” mode and uglifies/minifies output._
And it would output a `dist/app.bundle.js` that logged the current date & time to the console. Note that Webpack automatically knew what `'moment'` referred to (although if you had a `moment.js` file in your directory, by default Webpack would have prioritized this over your `moment` Node module).
### Working with Multiple Files
You can specify any number of entry/output points you wish by modifying only the `entry` object.
#### Multiple files, bundled together
```
'use strict';

const webpack = require("webpack");

module.exports = {
  context: __dirname + "/src",
  entry: {
    app: ["./home.js", "./events.js", "./vendor.js"],
  },
  output: {
    path: __dirname + "/dist",
    filename: "[name].bundle.js",
  },
};
```
Will all be bundled together as one `dist/app.bundle.js` file, in array order.
#### Multiple files, multiple outputs
```
const webpack = require("webpack");

module.exports = {
  context: __dirname + "/src",
  entry: {
    home: "./home.js",
    events: "./events.js",
    contact: "./contact.js",
  },
  output: {
    path: __dirname + "/dist",
    filename: "[name].bundle.js",
  },
};
```
Alternately, you may choose to bundle multiple JS files to break up parts of your app. This will be bundled as 3 files: `dist/home.bundle.js`, `dist/events.bundle.js`, and `dist/contact.bundle.js`.
#### Advanced auto-bundling
If youre breaking up your application into multiple `output` bundles (useful if one part of your app has a ton of JS you dont need to load up front), theres a likelihood you may be duplicating code across those files, because it will resolve each dependency separately from one another. Fortunately, Webpack has a built-in _CommonsChunk_ plugin to handle this:
```
module.exports = {
  // …
  plugins: [
    new webpack.optimize.CommonsChunkPlugin({
      name: "commons",
      filename: "commons.js",
      minChunks: 2,
    }),
  ],
  // …
};
```
Now, across your `output` files, if you have any modules that get loaded `2` or more times (set by `minChunks`), it will bundle that into a `commons.js` file which you can then cache on the client side. This will result in an additional header request, sure, but you prevent the client from downloading the same libraries more than once. So there are many scenarios where this is a net gain for speed.
### Developing
Webpack actually has its own development server, so whether youre developing a static site or are just prototyping your front-end, its perfect for either. To get that running, just add a `devServer` object to `webpack.config.js`:
```
module.exports = {
  context: __dirname + "/src",
  entry: {
    app: "./app.js",
  },
  output: {
    filename: "[name].bundle.js",
    path: __dirname + "/dist/assets",
    publicPath: "/assets",            // New
  },
  devServer: {
    contentBase: __dirname + "/src",  // New
  },
};
```
Now make a `src/index.html` file that has:
```
<script src="/assets/app.bundle.js"></script>
```
… and from your terminal, run:
```
webpack-dev-server
```
Your server is now running at `localhost:8080`. _Note how `/assets` in the script tag matches `output.publicPath`; you can name this whatever you want (useful if you need a CDN)._
Webpack will hotload any JavaScript changes as you make them without the need to refresh your browser. However, any changes to the `webpack.config.js` file will require a server restart to take effect.
### Globally-accessible methods
Need to use some of your functions from a global namespace? Simply set `output.library` within `webpack.config.js`:
```
module.exports = {
  output: {
    library: 'myClassName',
  }
};
```
… and it will attach your bundle to a `window.myClassName` instance. So using that name scope, you could call methods available to that entry point (you can read more about this setting [on the documentation][23]).
### Loaders
Up until now, weve only covered working with JavaScript. Its important to start with JavaScript because _thats the only language Webpack speaks_. We can work with virtually any file type, as long as we pass it into JavaScript. We do that with _Loaders_.
A loader can refer to a preprocessor such as Sass, or a transpiler such as Babel. On NPM, theyre usually named `*-loader` such as `sass-loader` or `babel-loader`.
#### Babel + ES6
If we wanted to use ES6 via [Babel][22] in our project, wed first install the appropriate loaders locally:
```
yarn add --dev babel-loader babel-core babel-preset-es2015
```
… and then add it to `webpack.config.js` so Webpack knows where to use it.
```
module.exports = {
  // …
  module: {
    rules: [
      {
        test: /\.js$/,
        use: [{
          loader: "babel-loader",
          options: { presets: ["es2015"] }
        }],
      },
      // Loaders for other file types can go here
    ],
  },
  // …
};
```
_A note for Webpack 1 users: the core concept for Loaders remains the same, but the syntax has improved. Until they finish the docs this may/may not be the exact preferred syntax._
This uses the `/\.js$/` RegEx to find any files ending in `.js` and load them via Babel. Webpack relies on RegEx tests to give you complete control—it doesnt limit you to file extensions or assume your code must be organized in a certain way. For example: maybe your `/my_legacy_folder/` folder isnt written in ES6. So you could modify the `test` above to be `/^((?!my_legacy_folder).)*\.js$/`, which would exclude that specific folder but process the rest with Babel.
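Before committing a `test` RegEx like that exclusion pattern to your config, it can be sanity-checked in plain Node (a sketch; the file paths are hypothetical):

```javascript
// the two patterns discussed above
const allJs      = /\.js$/;
const skipLegacy = /^((?!my_legacy_folder).)*\.js$/;

const paths = [
  "src/app.js",
  "my_legacy_folder/old.js",
  "src/styles.css",
];

// print how each path fares against both tests
for (const p of paths) {
  console.log(p, "allJs:", allJs.test(p), "skipLegacy:", skipLegacy.test(p));
}
// src/app.js allJs: true skipLegacy: true
// my_legacy_folder/old.js allJs: true skipLegacy: false
// src/styles.css allJs: false skipLegacy: false
```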
#### CSS + Style Loader
If we wanted to only load CSS as our application needed, we could do that as well. Lets say we have an `index.js` file. Well import it from there:
```
import styles from './assets/stylesheets/application.css';
```
Well get the following error: `You may need an appropriate loader to handle this file type`. Remember that Webpack can only understand JavaScript, so well have to install the appropriate loader:
```
yarn add --dev css-loader style-loader
```
… and then add a rule to `webpack.config.js`:
```
module.exports = {
  // …
  module: {
    rules: [
      {
        test: /\.css$/,
        use: ["style-loader", "css-loader"],
      },
      // …
    ],
  },
};
```
_Loaders are processed in **reverse array order**. That means `css-loader` will run before `style-loader`._
You may notice that even in production builds, this actually bundles your CSS in with your bundled JavaScript, and `style-loader` manually writes your styles to the `<head>`. At first glance it may seem a little kooky, but slowly starts to make more sense the more you think about it. Youve saved a header request—saving valuable time on some connections—and if youre loading your DOM with JavaScript anyway, this essentially eliminates [FOUC][21] on its own.
Youll also notice that—out of the box—Webpack has automatically resolved all of your `@import` queries by packaging those files together as one (rather than relying on CSSs default import which can result in gratuitous header requests and slow-loading assets).
Loading CSS from your JS is pretty amazing, because you now can modularize your CSS in powerful new ways. Say you loaded `button.css` only through `button.js`. This would mean if `button.js` is never actually used, its CSS wouldnt bloat out our production build. If you adhere to component-oriented CSS practices such as SMACSS or BEM, you see the value in pairing your CSS more closely with your markup + JavaScript.
#### CSS + Node modules
We can use Webpack to take advantage of importing Node modules using Nodes `~` prefix. If we ran `yarn add normalize.css`, we could use:
```
@import "~normalize.css";
```
… and take full advantage of NPM managing our third party styles for us—versioning and all—without any copy + pasting on our part. Further, getting Webpack to bundle CSS for us has obvious advantages over using CSSs default import, saving the client from gratuitous header requests and slow load times.
_Update: this and the following section have been updated for accuracy, no longer confusing CSS Modules with simply importing Node modules. Thanks to [Albert Fernández][20] for the help!_
#### CSS Modules
You may have heard of [CSS Modules][19], which takes the _C_ out of _CSS_. It typically works best only if youre building the DOM with JavaScript, but in essence, it magically scopes your CSS classes to the JavaScript file that loaded it ([learn more about it here][18]). If you plan on using it, CSS Modules comes packaged with `css-loader` (`yarn add --dev css-loader`):
```
module.exports = {
  // …
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [
          "style-loader",
          { loader: "css-loader", options: { modules: true } }
        ],
      },
      // …
    ],
  },
};
```
_Note: for `css-loader` were now using the **expanded object syntax** to pass an option to it. You can use a string instead as shorthand for the default options, as were still doing with `style-loader`._
* * *
Its worth noting that you can actually drop the `~` when importing Node Modules with CSS Modules enabled (e.g.: `@import "normalize.css";`). However, you may encounter build errors now when you `@import` your own CSS. If youre getting “cant find ___” errors, try adding a `resolve` object to `webpack.config.js` to give Webpack a better understanding of your intended module order.
```
const path = require("path");

module.exports = {
  // …
  resolve: {
    modules: [path.resolve(__dirname, "src"), "node_modules"]
  },
};
```
We specified our source directory first, and then `node_modules`. So Webpack will handle resolution a little better, first looking through our source directory and then the installed Node modules, in that order (replace `"src"` and `"node_modules"` with your source and Node module directories, respectively).
#### Sass
Need to use Sass? No problem. Install:
```
yarn add --dev sass-loader node-sass
```
And add another rule:
```
module.exports = {
  // …
  module: {
    rules: [
      {
        test: /\.(sass|scss)$/,
        use: [
          "style-loader",
          "css-loader",
          "sass-loader",
        ]
      }
      // …
    ],
  },
};
```
Then when your JavaScript calls for an `import` on a `.scss` or `.sass` file, Webpack will do its thing.
#### CSS bundled separately
Maybe youre dealing with progressive enhancement; maybe you need a separate CSS file for some other reason. We can do that easily by swapping out `style-loader` with `extract-text-webpack-plugin` in our config without having to change any code. Take our example `app.js` file:
```
import styles from './assets/stylesheets/application.css';
```
Lets install the plugin locally (we need the beta version for this as of Oct 2016)…
```
yarn add --dev extract-text-webpack-plugin@2.0.0-beta.4
```
… and add to `webpack.config.js`:
```
const ExtractTextPlugin = require("extract-text-webpack-plugin");

module.exports = {
  // …
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [
          ExtractTextPlugin.extract("css"),
          { loader: "css-loader", options: { modules: true } },
        ],
      },
      // …
    ]
  },
  plugins: [
    new ExtractTextPlugin({
      filename: "[name].bundle.css",
      allChunks: true,
    }),
  ],
};
```
Now when running `webpack -p` youll also notice an `app.bundle.css` file in your `output` directory. Simply add a `<link>` tag to that file in your HTML as you would normally.
#### HTML
As you might have guessed, theres also an [html-loader][17] plugin for Webpack. However, when we get to loading HTML with JavaScript, this is about the point where we branch off into a myriad of differing approaches, and I cant think of one single example that would set you up for whatever youre planning on doing next. Typically, youd load HTML for the purpose of using JavaScript-flavored markup such as [JSX][16] or [Mustache][15] or [Handlebars][14] to be used within a larger system such as [React][13], [Angular][12], [Vue][11], or [Ember][10].
So Ill end the tutorial here: you _can_ load markup with Webpack, but by this point youll be making your own decisions about your architecture that neither I nor Webpack can make for you. But using the above examples for reference and searching for the right loaders on NPM should be enough to get you going.
### Thinking in Modules
In order to get the most out of Webpack, youll have to think in modules—small, reusable, self-contained processes that do one thing and one thing well. That means taking something like this:
```
└── js/
└── application.js // 300KB of spaghetti code
```
… and turning it into this:
```
└── js/
├── components/
│ ├── button.js
│ ├── calendar.js
│ ├── comment.js
│ ├── modal.js
│ ├── tab.js
│ ├── timer.js
│ ├── video.js
│ └── wysiwyg.js
└── application.js // ~ 1KB of code; imports from ./components/
```
The result is clean, reusable code. Each individual component depends on `import`-ing its own dependencies, and `export`-ing what it wants to make public to other modules. Pair this with Babel + ES6, and you can utilize [JavaScript Classes][9] for great modularity, and _dont-think-about-it_ scoping that just works.
For more on modules, see [this excellent article by Preethi Kasreddy][8].
* * *
### Further Reading
* [Whats New in Webpack 2][5]
* [Webpack Config docs][4]
* [Webpack Examples][3]
* [React + Webpack Starter Kit][2]
* [Webpack How-to][1]
--------------------------------------------------------------------------------
via: https://blog.madewithenvy.com/getting-started-with-webpack-2-ed2b86c68783#.oozfpppao
作者:[Drew Powers][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.madewithenvy.com/@an_ennui
[1]:https://github.com/petehunt/webpack-howto
[2]:https://github.com/kriasoft/react-starter-kit
[3]:https://github.com/webpack/webpack/tree/master/examples
[4]:https://webpack.js.org/configuration/
[5]:https://gist.github.com/sokra/27b24881210b56bbaff7
[6]:https://github.com/webpack/html-loader
[7]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import
[8]:https://medium.freecodecamp.com/javascript-modules-a-beginner-s-guide-783f7d7a5fcc
[9]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes
[10]:http://emberjs.com/
[11]:http://vuejs.org/
[12]:https://angularjs.org/
[13]:https://facebook.github.io/react/
[14]:http://handlebarsjs.com/
[15]:https://github.com/janl/mustache.js/
[16]:https://jsx.github.io/
[17]:https://github.com/webpack/html-loader
[18]:https://github.com/css-modules/css-modules
[19]:https://github.com/css-modules/css-modules
[20]:https://medium.com/u/901a038e32e5
[21]:https://en.wikipedia.org/wiki/Flash_of_unstyled_content
[22]:https://babeljs.io/
[23]:https://webpack.js.org/concepts/output/#output-library
[24]:https://yarnpkg.com/
[25]:https://www.smashingmagazine.com/2009/04/progressive-enhancement-what-it-is-and-how-to-use-it/
[26]:https://github.com/webpack/webpack/issues/1545#issuecomment-255446425


@ -1,73 +0,0 @@
wcnnbdk1 translating
Physical RAM attack can root Android and possibly other devices
===
>Attackers can reliably flip bits in physical memory cells in order to compromise mobile devices and computers
![](http://images.techhive.com/images/idgnsImport/2015/08/id-2969037-security1-100606370-large.jpg)
Researchers have devised a new way to compromise Android devices without exploiting any software vulnerabilities and instead taking advantage of a physical design weakness in RAM chips. The attack technique could also affect other ARM and x86-based devices and computers.
The attack stems from the push over the past decade to pack more DRAM (dynamic random-access memory) capacity onto increasingly smaller chips, which can lead to memory cells on adjacent rows leaking electric charges to one another under certain conditions.
For example, repeated and rapid accessing of physical memory locations -- an action now dubbed "hammering" -- can cause the bit values from adjacent locations to flip from 0 to 1 or the other way around.
While such electrical interference has been known for a while and has been studied by vendors from a reliability standpoint -- because memory corruption can lead to system crashes -- researchers have shown that it can also have serious security implications when triggered in a controlled manner.
In March 2015, researchers from Google's Project Zero [presented two privilege escalation exploits][7] based on this memory "row hammer" effect on the x86-64 CPU architecture. One of the exploits allowed code to escape the Google Chrome sandbox and be executed directly on the OS and the other gained kernel-level privileges on a Linux machine.
Since then, other researchers have further investigated the problem and have shown that it could be [exploited from websites through JavaScript][6] or [could affect virtualized servers][5] running in cloud environments. However, there have been doubts about whether the technique would also work on the significantly different ARM architecture used in smartphones and other mobile devices.
But now, a team of researchers from the VUSec Group at Vrije Universiteit Amsterdam in the Netherlands, the Graz University of Technology in Austria, and the University of California in Santa Barbara has demonstrated not only are Rowhammer attacks possible on ARM, but they're even easier to pull off than on x86.
The researchers dubbed their new attack Drammer, which stands for deterministic Rowhammer, and plan to present it Wednesday at the 23rd ACM Conference on Computer and Communications Security in Vienna. The attack builds upon previous Rowhammer techniques devised and demonstrated in the past.
The VUSec researchers have created a malicious Android application that doesn't require any permissions and gains root privileges when it is executed by using undetectable memory bit flipping.
The researchers tested 27 Android devices from different manufacturers, 21 using ARMv7 (32-bit) and six using ARMv8 (64-bit) architectures. They managed to flip bits on 17 of the ARMv7 devices and one of the ARMv8 devices, indicating they are vulnerable to the attack.
Furthermore, Drammer can be combined with other Android vulnerabilities such as [Stagefright][4] or [BAndroid][3] to build remote attacks that don't require users to manually download the malicious app.
Google is aware of this type of attack. "After researchers reported this issue to our Vulnerability Rewards Program, we worked closely with them to deeply understand it in order to better secure our users," a Google representative said in an emailed statement. "Weve developed a mitigation which we will include in our upcoming November security bulletin.”
Google's mitigation complicates the attack, but it doesn't fix the underlying problem, according to the VUSec researchers.
In fact, fixing what is essentially a hardware issue in software is impossible. Hardware vendors are investigating the problem and may be able to fix it in future memory chips, but chips present in existing devices will likely remain vulnerable.
Even worse, it's hard to say which devices are affected because there are many factors that come into play and haven't yet been fully investigated, the researchers said. For example, a memory controller might behave differently when the device battery level is under a certain threshold, so a device that doesn't appear to be vulnerable under a full charge might be vulnerable when its battery is low, the researchers explained.
Also, there's an adage in cybersecurity: Attacks always get better, they never get worse. Rowhammer attacks have grown from theoretical to practical but probabilistic and now to practical and deterministic. This means that a device that does not appear to be affected today could be proven vulnerable to an improved Rowhammer technique tomorrow.
Drammer was demonstrated on Android because the researchers wanted to investigate the impact on ARM-based devices, but the underlying technique likely applies to all architectures and operating systems. The new attack is also a vast improvement over past techniques that required either luck or special features that are present only on certain platforms and easily disabled.
Drammer relies on DMA (direct memory access) buffers used by many hardware subsystems, including graphics, network, and sound. While Drammer is implemented using Android's ION memory allocator, APIs and methods to allocate DMA buffers are present in all operating systems, and this warning is one of the paper's major contributions.
"For the very first time, we show that we can do targeted, fully reliable and deterministic Rowhammer without any special feature," said Cristiano Giuffrida, one of the VUSec researchers. "The memory massaging part is not even Android specific. It will work on any Linux platform -- and we suspect also on other operating systems -- because it exploits the inherent properties of the memory management inside the OS kernel."
"I expect that we're going to see many other flavors of this attack on different platforms," added Herbert Bos, a professor at Vrije Universiteit Amsterdam and leader of the VUSec Systems Security research group.
Along with their [paper][2], the researchers have released an Android app that can test if an Android device is vulnerable to Rowhammer -- at least to the currently known techniques. The app is not yet available on Google Play but can be downloaded from the [VUSec Drammer website][1] to be installed manually. An open-source Rowhammer simulator that can help other researchers investigate this issue further is also available.
--------------------------------------------------------------------------------
via: http://www.csoonline.com/article/3134726/security/physical-ram-attack-can-root-android-and-possibly-other-devices.html
作者:[Lucian Constantin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.csoonline.com/author/Lucian-Constantin/
[1]:https://www.vusec.net/projects/drammer/
[2]:https://vvdveen.com/publications/drammer.pdf
[3]:https://www.vusec.net/projects/bandroid/
[4]:http://www.csoonline.com/article/3045836/security/new-stagefright-exploit-puts-millions-of-android-devices-at-risk.html
[5]:http://www.infoworld.com/article/3105889/security/flip-feng-shui-attack-on-cloud-vms-exploits-hardware-weaknesses.html
[6]:http://www.computerworld.com/article/2954582/security/researchers-develop-astonishing-webbased-attack-on-a-computers-dram.html
[7]:http://www.computerworld.com/article/2895898/google-researchers-hack-computers-using-dram-electrical-leaks.html
[8]:http://csoonline.com/newsletters/signup.html

yangmingming translating
# 3 Ways to Delete All Files in a Directory Except One or Few Files with Extensions
Sometimes you get into a situation where you need to delete all files in a directory or simply cleanup a directory by removing all files except files of a given type (ending with a particular extension).
In this article, we will show you how to delete files in a directory except certain file extensions or types using rm, find and globignore commands.
Before we move any further, let us start by briefly having a look at one important concept in Linux filename pattern matching, which will enable us to deal with our issue at hand.
In Linux, a shell pattern is a string that consists of the following special characters, which are referred to as wildcards or metacharacters:
1. `*`  matches zero or more characters
2. `?`  matches any single character
3. `[seq]`  matches any character in seq
4. `[!seq]`  matches any character not in seq
There are three possible methods we shall explore here, and these include:
### Delete Files Using Extended Pattern Matching Operators
The different extended pattern matching operators are listed below, where pattern-list is a list containing one or more filenames, separated using the `|` character:
1. `*(pattern-list)`  matches zero or more occurrences of the specified patterns
2. `?(pattern-list)`  matches zero or one occurrence of the specified patterns
3. `+(pattern-list)`  matches one or more occurrences of the specified patterns
4. `@(pattern-list)`  matches one of the specified patterns
5. `!(pattern-list)`  matches anything except one of the given patterns
To use them, enable the extglob shell option as follows:
```
# shopt -s extglob
```
#### 1. To delete all files in a directory except filename, type the command below:
```
$ rm -v !("filename")
```
[![Delete All Files Except One File in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/DeleteAll-Files-Except-One-File-in-Linux.png)][9]
Delete All Files Except One File in Linux
#### 2. To delete all files with the exception of filename1 and filename2:
```
$ rm -v !("filename1"|"filename2")
```
[![Delete All Files Except Few Files in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Few-Files-in-Linux.png)][8]
Delete All Files Except Few Files in Linux
#### 3. The example below shows how to remove all files other than all `.zip` files interactively:
```
$ rm -i !(*.zip)
```
[![Delete All Files Except Zip Files in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Zip-Files-in-Linux.png)][7]
Delete All Files Except Zip Files in Linux
#### 4. Next, you can delete all files in a directory apart from all `.zip` and `.odt` files as follows, while displaying what is being done:
```
$ rm -v !(*.zip|*.odt)
```
[![Delete All Files Except Certain File Extensions](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Certain-File-Extensions.png)][6]
Delete All Files Except Certain File Extensions
Once you have all the required commands, turn off the extglob shell option like so:
```
$ shopt -u extglob
```
### Delete Files Using Linux find Command
Under this method, we can [use the find command exclusively][5] with appropriate options, or in conjunction with the xargs command by employing a pipeline, as in the forms below:
```
$ find /directory/ -type f -not -name 'PATTERN' -delete
$ find /directory/ -type f -not -name 'PATTERN' -print0 | xargs -0 -I {} rm {}
$ find /directory/ -type f -not -name 'PATTERN' -print0 | xargs -0 -I {} rm [options] {}
```
#### 5. The following command will delete all files apart from `.gz` files in the current directory:
```
$ find . -type f -not -name '*.gz' -delete
```
[![Command find - Remove All Files Except .gz Files](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-gz-Files.png)][4]
Command find Remove All Files Except .gz Files
#### 6. Using a pipeline and xargs, you can modify the case above as follows:
```
$ find . -type f -not -name '*.gz' -print0 | xargs -0 -I {} rm -v {}
```
[![Remove Files Using find and xargs Commands](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-Files-Using-Find-and-Xargs-Command.png)][3]
Remove Files Using find and xargs Commands
#### 7. Let us look at one additional example, the command below will wipe out all files excluding `.gz`, `.odt`, and `.jpg` files in the current directory:
```
$ find . -type f -not \( -name '*.gz' -or -name '*.odt' -or -name '*.jpg' \) -delete
```
[![Remove All Files Except File Extensions](http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-File-Extensions.png)][2]
Remove All Files Except File Extensions
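The find variants can likewise be rehearsed in a throwaway directory first. A minimal sketch (the filenames are invented for the demo):

```shell
#!/usr/bin/env bash
# Safe sandbox demo of deleting everything except certain extensions with find.
set -e
workdir=$(mktemp -d)
cd "$workdir"
touch keep1.gz keep2.gz report.odt photo.jpg junk.log tmp.bak

# Delete all regular files except .gz, .odt and .jpg files (as in example 7 above).
find . -type f -not \( -name '*.gz' -o -name '*.odt' -o -name '*.jpg' \) -delete

ls      # keep1.gz  keep2.gz  photo.jpg  report.odt
```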
### Delete Files Using Bash GLOBIGNORE Variable
This last approach, however, only works with bash. Here, the GLOBIGNORE variable stores a colon-separated list of patterns (filenames) to be ignored by pathname expansion.
To employ this method, move into the directory that you wish to clean up, then set the GLOBIGNORE variable as follows:
```
$ cd test
$ GLOBIGNORE=*.odt:*.iso:*.txt
```
In this instance, all files other than `.odt`, `.iso`, and `.txt` files will be removed from the current directory.
Now run the command to clean up the directory:
```
$ rm -v *
```
Afterwards, unset the GLOBIGNORE variable:
```
$ unset GLOBIGNORE
```
[![Delete Files Using Bash GLOBIGNORE Variable](http://www.tecmint.com/wp-content/uploads/2016/10/Delete-Files-Using-Bash-GlobIgnore.png)][1]
Delete Files Using Bash GLOBIGNORE Variable
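As before, this is easy to rehearse in a scratch directory. A minimal sketch (bash only, as noted above; filenames invented for the demo):

```shell
#!/usr/bin/env bash
# Safe sandbox demo of the GLOBIGNORE approach.
set -e
workdir=$(mktemp -d)
cd "$workdir"
touch a.odt b.iso c.txt junk.log tmp.bak

GLOBIGNORE='*.odt:*.iso:*.txt'
rm -v *         # pathname expansion now skips the ignored patterns
unset GLOBIGNORE

ls      # a.odt  b.iso  c.txt
```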
Note: To understand the meaning of the flags employed in the commands above, refer to the man pages of each command we have used in the various illustrations.
That's all! If you have any other command-line techniques in mind for the same purpose, do not forget to share them with us via our feedback section below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/delete-all-files-in-directory-except-one-few-file-extensions/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+tecmint+%28Tecmint%3A+Linux+Howto%27s+Guide%29
作者:[ Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-Files-Using-Bash-GlobIgnore.png
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-File-Extensions.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-Files-Using-Find-and-Xargs-Command.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/Remove-All-Files-Except-gz-Files.png
[5]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/
[6]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Certain-File-Extensions.png
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Zip-Files-in-Linux.png
[8]:http://www.tecmint.com/wp-content/uploads/2016/10/Delete-All-Files-Except-Few-Files-in-Linux.png
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/DeleteAll-Files-Except-One-File-in-Linux.png

zpl1025
How To Setup A WiFi Network In Arch Linux Using Terminal
===
![How To Setup A WiFi In Arch Linux Using Terminal](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/how-to-connect-to-wifi-in-arch-linux-cli_orig.jpg)
If you're coming from a [Linux distro][16] other than [Arch][15], setting up WiFi from the terminal on [Arch Linux][14] can feel like one of the toughest tasks, though the process itself is fairly straightforward. In this article, I'll walk newbies through a step-by-step guide to connecting Arch Linux to a WiFi network. There are a lot of programs to set up a wireless connection in Linux; we could use **ip** and **iw** to configure an Internet connection, but that would be a little complicated for newbies. So we'll use **netctl**, a CLI-based tool used to configure and manage network connections via profiles.
Note: You must be root for all of the configuration below; alternatively, you can use sudo.
### Scanning Network
Run the command to know the name of your network interface -
```
iwconfig
```
Run the command to bring the interface up:
```
ip link set interface up
```
Run the command to scan for available WiFi networks, then scroll down to look for your network:
```
iwlist interface scan | less
```
**Note:** Where interface is your network interface that you found using the above  **iwconfig** command.
Run the command to bring the interface back down:
```
ip link set interface down
```
### Setup A Wi-Fi Using netctl:
Before configuring a connection with netctl, you must check your network card's compatibility with Linux.
1. Run the command:
```
lspci -k
```
This command checks whether the kernel has loaded the driver for your wireless card. The output should look like this:
[![arch linux wifi kernel compatibility ](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-wifi-find-kernel-compatibility_orig.png)][12]
If the kernel didn't load the driver, you must install it using an Ethernet connection. Here is the Official Linux Wireless Wiki: [https://wireless.wiki.kernel.org/][11]
If your wireless card is compatible with Linux, you can start with **netctl configuration**.
**netctl** works with profiles; a profile is a file that contains information about a connection. A profile can be created the hard way or the easy way.
### The Easy Way: wifi-menu
If you want to use wifi-menu, the dialog package must be installed.
1. Run the command: wifi-menu
2.  Select your Network
[![wifi-menu to setup wifi in arch](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-to-setup-wifi-in-arch_orig.png)][1]
3. Type the correct password and wait.
[![wifi-menu setup wifi password in arch](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-type-wifi-password-in-arch.png?605)][9]
If you don't see a failed connection message, you can verify the connection by typing:
```
ping -c 3 www.google.com
```
Hurrah! If you see ping replies, the network has been set up successfully. You're now connected to a WiFi network on Arch Linux. If you get any errors, follow the steps above again; perhaps you've missed something.
### The Hard Way
In comparison to the above wifi-menu method, this method is a little harder; that's why I call it the hard way. In the previous method, the network profile was set up automatically. In this method, we'll set up the profile manually. But don't worry, it's not going to be much more complicated. So let's get started!
1. The first thing you must do is find the name of your interface. It's usually wlan0 or wlp2s0, but there are many exceptions. To find the name of your interface, type the command iwconfig and note it down.
[![scan wifi networks in arch linux cli](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/scan-wifi-networks-in-arch-linux-cli_orig.png)][8]     
2. Run the command:
```
cd /etc/netctl/examples
```
In this subdirectory you can see different profile examples.
3. Copy your profile example to **_/etc/netctl/your_profile_**
```
cp /etc/netctl/examples/wireless-wpa /etc/netctl/your_profile
```
4. You can view the profile content by typing: cat /etc/netctl/your_profile
[![view network profile in arch linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/view-network-profile-in-arch-linux_orig.png)][7]
5. Edit the following fields of your profile using vi or nano:
```
nano /etc/netctl/your_profile
```
1. **Interface**: e.g. wlan0
2. **ESSID**: the name of your wireless network
3. **key**: the password of your wireless network

**Note:** If you don't know how to use nano, just edit your text; when you finish, press Ctrl+O and Enter to save, then Ctrl+X to exit.
[![edit network profile in arch](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edit-network-profile-in-arch_orig.png)][6]
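For reference, a filled-in profile based on the wireless-wpa example typically ends up looking roughly like this (the interface name, ESSID, and key below are placeholders, not values to copy verbatim):

```
Description='Example WPA wireless connection'
Interface=wlan0
Connection=wireless
Security=wpa
ESSID='YourNetworkName'
Key='YourWiFiPassword'
IP=dhcp
```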
### Running netctl
1. Run the command:
```
cd /etc/netctl
ls
```
You should see the profile created by wifi-menu, for example wlan0-SSID; or, if you used the hard way, the profile you created yourself.
2. Start your connection profile by typing the command: netctl start your_profile.
3. Test your connection typing:
```
ping -c 3 www.google.com
```
The output must look like this:[![check internet connection in arch linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/check-internet-connection-in-arch-linux_orig.png)][5]       
4. Finally, you must run the following command: netctl enable your_profile.
```
netctl enable your_profile
```
This will create and enable a systemd service that starts when the computer boots. So it's time to shout Hurrah! You've set up a WiFi network on your Arch Linux.
### Other Utilities
Also, you can use other programs to set up a wireless connection, for example iw:

1. `iw dev wlan0 link` - status
2. `iw dev wlan0 scan` - scan for networks
3. `iw dev wlan0 connect your_essid` - connect to an open network
4. `iw dev wlan0 connect your_essid key your_key` - connect to a WEP-encrypted network using a hexadecimal key
wpa_supplicant
[https://wiki.archlinux.org/index.php/WPA_supplicant][4]
Wicd
[https://wiki.archlinux.org/index.php/wicd][3]
NetworkManager
[https://wiki.archlinux.org/index.php/NetworkManager][2]
### Conclusion
So there you go! I have mentioned 3 ways to connect to a WiFi network in your **Arch Linux**. One thing that I want to stress here is that when you're executing the first command, please note down the interface name. In the next command, where we're scanning networks, don't just type interface but the name of your interface, such as wlan0 or wlp2s0 (which you got from the previous command). If you have any problem, then do talk to me in the comment section below. Also don't forget to share this article with your friends on social media. Thank you!
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/how-to-setup-a-wifi-in-arch-linux-using-terminal
作者:[Mohd Sohail][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.linuxandubuntu.com/contact-us.html
[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-to-setup-wifi-in-arch_orig.png
[2]:https://wiki.archlinux.org/index.php/NetworkManager
[3]:https://wiki.archlinux.org/index.php/wicd
[4]:https://wiki.archlinux.org/index.php/WPA_supplicant
[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/check-internet-connection-in-arch-linux_orig.png
[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edit-network-profile-in-arch_orig.png
[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/view-network-profile-in-arch-linux_orig.png
[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/scan-wifi-networks-in-arch-linux-cli_orig.png
[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-type-wifi-password-in-arch_orig.png?605
[10]:http://www.linuxandubuntu.com/home/5-best-arch-linux-based-linux-distributions
[11]:https://wireless.wiki.kernel.org/
[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-wifi-find-kernel-compatibility_orig.png
[13]:http://www.linuxandubuntu.com/home/arch-linux-take-your-linux-knowledge-to-next-level-review
[14]:http://www.linuxandubuntu.com/home/category/arch-linux
[15]:http://www.linuxandubuntu.com/home/arch-linux-take-your-linux-knowledge-to-next-level-review
[16]:http://linuxandubuntu.com/home/category/distros

# Applying the Linus Torvalds “Good Taste” Coding Requirement
In [a recent interview with Linus Torvalds][1], the creator of Linux, at approximately 14:20, he made a quick point about coding with “good taste”. Good taste? The interviewer prodded him for details and Linus came prepared with illustrations.
He presented a code snippet. But this wasnt “good taste” code. This snippet was an example of poor taste in order to provide some initial contrast.
![](https://d262ilb51hltx0.cloudfront.net/max/1200/1*X2VgEA_IkLvsCS-X4iPY7g.png)
Its a function, written in C, that removes an object from a linked list. It contains 10 lines of code.
He called attention to the if-statement at the bottom. It was _this_ if-statement that he criticized.
I paused the video and studied the slide. I had recently written very similar code. Linus was effectively saying I had poor taste. I swallowed my pride and continued the video.
Linus explained to the audience, as I already knew, that when removing an object from a linked list, there are two cases to consider. If the object is at the start of the list there is a different process for its removal than if it is in the middle of the list. And this is the reason for the “poor taste” if-statement.
But if he admits it is necessary, then why is it so bad?
Next he revealed a second slide to the audience. This was his example of the same function, but written with “good taste”.
![](https://d262ilb51hltx0.cloudfront.net/max/1200/1*GHFLYFB3vDQeakMyUGPglw.png)
The original 10 lines of code had now been reduced to 4.
But it wasnt the line count that mattered. It was that if-statement. Its gone. No longer needed. The code has been refactored so that, regardless of the objects position in the list, the same process is applied to remove it.
Linus explained the new code, the elimination of the edge case, and that was it. The interview then moved on to the next topic.
I studied the code for a moment. Linus was right. The second slide _was_ better. If this was a test to determine good taste from poor taste, I would have failed. The thought that it might be possible to eliminate that conditional statement had never occurred to me. And I had written it more than once, since I commonly work with linked lists.
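The code on the slides appears only as images, but the version that circulates from the talk looks roughly like the following sketch (my reconstruction, assuming a minimal singly linked list type; not Linus's verbatim code):

```c
#include <stddef.h>

/* A minimal singly linked list node for illustration. */
typedef struct node {
    struct node *next;
    int val;
} node;

/* "Poor taste": the head of the list is a special case. */
void remove_entry(node **head, node *entry)
{
    node *prev = NULL;
    node *walk = *head;

    while (walk != entry) {
        prev = walk;
        walk = walk->next;
    }

    if (!prev)
        *head = entry->next;   /* entry was the first element */
    else
        prev->next = entry->next;
}

/* "Good taste": an indirect pointer makes both cases identical. */
void remove_entry_good(node **head, node *entry)
{
    node **indirect = head;

    /* Walk the list, looking at the address of each next pointer... */
    while (*indirect != entry)
        indirect = &(*indirect)->next;

    /* ...and simply replace whatever pointed at entry. */
    *indirect = entry->next;
}
```

The indirect-pointer version treats the head pointer as just another next pointer, so the first-element edge case no longer exists.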
Whats good about this illustration isnt just that it teaches you a better way to remove an item from a linked list, but that it makes you consider that the code youve written, the little algorithms youve sprinkled throughout the program, may have room for improvement in ways youve never considered.
So this was my focus as I went back and reviewed the code in my most recent project. Perhaps it was serendipitous that it also happened to be written in C.
To the best of my ability to discern, the crux of the “good taste” requirement is the elimination of edge cases, which tend to reveal themselves as conditional statements. The fewer conditions you test for, the better your code _tastes_.
Here is one particular example of an improvement I made that I wanted to share.
### Initializing Grid Edges
Below is an algorithm I wrote to initialize the points along the edge of a grid, which is represented as a multidimensional array: grid[rows][cols].
Again, the purpose of this code was to only initialize the values of the points that reside on the edge of the gridso only the top row, bottom row, left column, and right column.
To accomplish this I initially looped over every point in the grid and used conditionals to test for the edges. This is what it looked like:
```
for (r = 0; r < GRID_SIZE; ++r) {
    for (c = 0; c < GRID_SIZE; ++c) {

        // Top Edge
        if (r == 0)
            grid[r][c] = 0;

        // Left Edge
        if (c == 0)
            grid[r][c] = 0;

        // Right Edge
        if (c == GRID_SIZE - 1)
            grid[r][c] = 0;

        // Bottom Edge
        if (r == GRID_SIZE - 1)
            grid[r][c] = 0;
    }
}
```
Even though it works, in hindsight, there are some issues with this construct.
1. Complexity: the use of 4 conditional statements inside 2 nested loops seems overly complex.
2. Efficiency: given that GRID_SIZE has a value of 64, this loop performs 4,096 iterations in order to set values for only the 256 edge points.
Linus would probably agree, this is not very _tasty_.
So I did some tinkering with it. After a little while I was able to reduce the complexity to only a single for-loop containing four conditionals. It was only a slight improvement in complexity, but a large improvement in performance, because it performed only 256 loop iterations, one for each point along the edge.
```
for (i = 0; i < GRID_SIZE * 4; ++i) {

    // Top Edge
    if (i < GRID_SIZE)
        grid[0][i] = 0;

    // Right Edge
    else if (i < GRID_SIZE * 2)
        grid[i - GRID_SIZE][GRID_SIZE - 1] = 0;

    // Left Edge
    else if (i < GRID_SIZE * 3)
        grid[i - (GRID_SIZE * 2)][0] = 0;

    // Bottom Edge
    else
        grid[GRID_SIZE - 1][i - (GRID_SIZE * 3)] = 0;
}
```
An improvement, yes. But it looked really ugly. Its not exactly code that is easy to follow. Based on that alone, I wasnt satisfied.
I continued to tinker. Could this really be improved further? In fact, the answer was _YES_. And what I eventually came up with was so astoundingly simple and elegant that I honestly couldnt believe it took me this long to find it.
Below is the final version of the code. It has _one for-loop_ and _no conditionals_. Moreover, the loop only performs 64 iterations. It vastly improves both complexity and efficiency.
```
for (i = 0; i < GRID_SIZE; ++i) {

    // Top Edge
    grid[0][i] = 0;

    // Bottom Edge
    grid[GRID_SIZE - 1][i] = 0;

    // Left Edge
    grid[i][0] = 0;

    // Right Edge
    grid[i][GRID_SIZE - 1] = 0;
}
```
This code initializes four different edge points for each loop iteration. Its not complex. Its highly efficient. Its easy to read. Compared to the original version, and even the second version, they are like night and day.
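As a sanity check, the final loop can be wrapped in a function and verified to touch exactly the edge cells and nothing else; a small sketch using a hypothetical init_edges helper and a GRID_SIZE of 8 for brevity:

```c
#define GRID_SIZE 8

/* The final "good taste" version: one pass, four edge points per iteration. */
static void init_edges(int grid[GRID_SIZE][GRID_SIZE])
{
    int i;

    for (i = 0; i < GRID_SIZE; ++i) {
        grid[0][i] = 0;                /* top edge */
        grid[GRID_SIZE - 1][i] = 0;    /* bottom edge */
        grid[i][0] = 0;                /* left edge */
        grid[i][GRID_SIZE - 1] = 0;    /* right edge */
    }
}
```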
I was quite satisfied.
--------------------------------------------------------------------------------
via: https://medium.com/@bartobri/applying-the-linus-tarvolds-good-taste-coding-requirement-99749f37684a
作者:[Brian Barto][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.com/@bartobri?source=post_header_lockup
[1]:https://www.ted.com/talks/linus_torvalds_the_mind_behind_linux

alim0x translating
How to Check Bad Sectors or Bad Blocks on Hard Disk in Linux
===
Let us start by defining a bad sector/block: it's a section on a disk drive or flash memory that can no longer be read from or written to, as a result of fixed [physical damage on the disk][7] surface or failed flash memory transistors.
As bad sectors continue to accumulate, they can undesirably or destructively affect your disk drive or flash memory capacity or even lead to a possible hardware failure.
It is also important to note that the presence of bad blocks should alert you to start thinking of getting a new disk drive or simply mark the bad blocks as unusable.
Therefore, in this article, we will go through the necessary steps that can enable you determine the presence or absence of bad sectors on your Linux disk drive or flash memory using certain [disk scanning utilities][6].
That said, below are the methods:
### Check Bad Sectors in Linux Disks Using badblocks Tool
The badblocks program enables users to scan a device for bad sectors or blocks. The device can be a hard disk or an external disk drive, represented by a file such as /dev/sdc.
Firstly, use the [fdisk command][5] with superuser privileges to display information about all your disk drives or flash memory plus their partitions:
```
$ sudo fdisk -l
```
[![List Linux Filesystem Partitions](http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Filesystem-Partitions.png)][4]
List Linux Filesystem Partitions
Then scan your Linux disk drive to check for bad sectors/blocks by typing:
```
$ sudo badblocks -v /dev/sda10 > badsectors.txt
```
[![Scan Hard Disk Bad Sectors in Linux](http://www.tecmint.com/wp-content/uploads/2016/10/Scan-Hard-Disk-Bad-Sectors-in-Linux.png)][3]
Scan Hard Disk Bad Sectors in Linux
In the command above, badblocks is scanning device /dev/sda10 (remember to specify your actual device), with the `-v` option enabling it to display details of the operation. In addition, the results of the operation are stored in the file badsectors.txt by means of output redirection.
In case you discover any bad sectors on your disk drive, unmount the disk and instruct the operating system not to write to the reported sectors as follows.
You will need to employ e2fsck (for ext2/ext3/ext4 file systems) or fsck command with the badsectors.txt file and the device file as in the command below.
The `-l` option tells the command to add the block numbers listed in the file specified by filename (badsectors.txt) to the list of bad blocks.
```
------------ Specifically for ext2/ext3/ext4 file-systems ------------
$ sudo e2fsck -l badsectors.txt /dev/sda10
OR
------------ For other file-systems ------------
$ sudo fsck -l badsectors.txt /dev/sda10
```
### Scan Bad Sectors on Linux Disk Using Smartmontools
This method is more reliable and efficient for modern disks (ATA/SATA and SCSI/SAS hard drives and solid-state drives), which ship with a S.M.A.R.T (Self-Monitoring, Analysis and Reporting Technology) system that helps detect, report and possibly log their health status, so that you can figure out any impending hardware failures.
You can install smartmontools by running the command below:
```
------------ On Debian/Ubuntu based systems ------------
$ sudo apt-get install smartmontools
------------ On RHEL/CentOS based systems ------------
$ sudo yum install smartmontools
```
Once the installation is complete, use smartctl, which controls the S.M.A.R.T system integrated into a disk. You can look through its man page or help page as follows:
```
$ man smartctl
$ smartctl -h
```
Now execute the smartctl command, naming your specific device as an argument, as in the following command; the flag `-H` or `--health` is included to display the SMART overall-health self-assessment test result.
```
$ sudo smartctl -H /dev/sda10
```
[![Check Linux Hard Disk Health](http://www.tecmint.com/wp-content/uploads/2016/10/Check-Linux-Hard-Disk-Health.png)][2]
Check Linux Hard Disk Health
The result above indicates that your hard disk is healthy and is unlikely to experience a hardware failure anytime soon.
For an overview of disk information, use the `-a` or `--all` option to print out all SMART information concerning a disk and `-x` or `--xall` which displays all SMART and non-SMART information about a disk.
In this tutorial, we covered a very important topic concerning [disk drive health diagnostics][1], you can reach us via the feedback section below to share your thoughts or ask any questions and remember to always stay connected to Tecmint.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/check-linux-hard-disk-bad-sectors-bad-blocks/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/defragment-linux-system-partitions-and-directories/
[2]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Linux-Hard-Disk-Health.png
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/Scan-Hard-Disk-Bad-Sectors-in-Linux.png
[4]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Filesystem-Partitions.png
[5]:http://www.tecmint.com/fdisk-commands-to-manage-linux-disk-partitions/
[6]:http://www.tecmint.com/ncdu-a-ncurses-based-disk-usage-analyzer-and-tracker/
[7]:http://www.tecmint.com/defragment-linux-system-partitions-and-directories/

DTrace for Linux 2016
===========
![](https://raw.githubusercontent.com/brendangregg/bcc/master/images/bcc_tracing_tools_2016.png)
With the final major capability for BPF tracing (timed sampling) merging in Linux 4.9-rc1, the Linux kernel now has raw capabilities similar to those provided by DTrace, the advanced tracer from Solaris. As a long time DTrace user and expert, this is an exciting milestone! On Linux, you can now analyze the performance of applications and the kernel using production-safe low-overhead custom tracing, with latency histograms, frequency counts, and more.
There have been many tracing projects for Linux, but the technology that finally merged didnt start out as a tracing project at all: it began as enhancements to Berkeley Packet Filter (BPF). At first, these enhancements allowed BPF to redirect packets to create software-defined networks. Later on, support for tracing events was added, enabling programmatic tracing in Linux.
While BPF currently lacks a high-level language like DTrace, the front-ends available have been enough for me to create many BPF tools, some based on my older [DTraceToolkit][37]. In this post I'll describe how you can use these tools, the front-ends available, and discuss where the technology is going next.
### Screenshots
I've been adding BPF-based tracing tools to the open source [bcc][36] project (thanks to Brenden Blanco, of PLUMgrid, for leading bcc development). See the [bcc install][35] instructions. It will add a collection of tools under /usr/share/bcc/tools, including the following.
Tracing new processes:
```
# execsnoop
PCOMM PID RET ARGS
bash 15887 0 /usr/bin/man ls
preconv 15894 0 /usr/bin/preconv -e UTF-8
man 15896 0 /usr/bin/tbl
man 15897 0 /usr/bin/nroff -mandoc -rLL=169n -rLT=169n -Tutf8
man 15898 0 /usr/bin/pager -s
nroff 15900 0 /usr/bin/locale charmap
nroff 15901 0 /usr/bin/groff -mtty-char -Tutf8 -mandoc -rLL=169n -rLT=169n
groff 15902 0 /usr/bin/troff -mtty-char -mandoc -rLL=169n -rLT=169n -Tutf8
groff 15903 0 /usr/bin/grotty
```
Histogram of disk I/O latency:
```
# biolatency -m
Tracing block device I/O... Hit Ctrl-C to end.
^C
msecs : count distribution
0 -> 1 : 96 |************************************ |
2 -> 3 : 25 |********* |
4 -> 7 : 29 |*********** |
8 -> 15 : 62 |*********************** |
16 -> 31 : 100 |**************************************|
32 -> 63 : 62 |*********************** |
64 -> 127 : 18 |****** |
```
Tracing common ext4 operations slower than 5 milliseconds:
```
# ext4slower 5
Tracing ext4 operations slower than 5 ms
TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME
21:49:45 supervise 3570 W 18 0 5.48 status.new
21:49:48 supervise 12770 R 128 0 7.55 run
21:49:48 run 12770 R 497 0 16.46 nsswitch.conf
21:49:48 run 12770 R 1680 0 17.42 netflix_environment.sh
21:49:48 run 12770 R 1079 0 9.53 service_functions.sh
21:49:48 run 12772 R 128 0 17.74 svstat
21:49:48 svstat 12772 R 18 0 8.67 status
21:49:48 run 12774 R 128 0 15.76 stat
21:49:48 run 12777 R 128 0 7.89 grep
21:49:48 run 12776 R 128 0 8.25 ps
21:49:48 run 12780 R 128 0 11.07 xargs
21:49:48 ps 12776 R 832 0 12.02 libprocps.so.4.0.0
21:49:48 run 12779 R 128 0 13.21 cut
[...]
```
Tracing new active TCP connections (connect()):
```
# tcpconnect
PID COMM IP SADDR DADDR DPORT
1479 telnet 4 127.0.0.1 127.0.0.1 23
1469 curl 4 10.201.219.236 54.245.105.25 80
1469 curl 4 10.201.219.236 54.67.101.145 80
1991 telnet 6 ::1 ::1 23
2015 ssh 6 fe80::2000:bff:fe82:3ac fe80::2000:bff:fe82:3ac 22
```
Tracing DNS latency by tracing getaddrinfo()/gethostbyname() library calls:
```
# gethostlatency
TIME PID COMM LATms HOST
06:10:24 28011 wget 90.00 www.iovisor.org
06:10:28 28127 wget 0.00 www.iovisor.org
06:10:41 28404 wget 9.00 www.netflix.com
06:10:48 28544 curl 35.00 www.netflix.com.au
06:11:10 29054 curl 31.00 www.plumgrid.com
06:11:16 29195 curl 3.00 www.facebook.com
06:11:25 29404 curl 72.00 foo
06:11:28 29475 curl 1.00 foo
```
Interval summaries of VFS operations by type:
```
# vfsstat
TIME READ/s WRITE/s CREATE/s OPEN/s FSYNC/s
18:35:32: 231 12 4 98 0
18:35:33: 274 13 4 106 0
18:35:34: 586 86 4 251 0
18:35:35: 241 15 4 99 0
```
Tracing off-CPU time with kernel and user stack traces (summarized in kernel), for a given PID:
```
# offcputime -d -p 24347
Tracing off-CPU time (us) of PID 24347 by user + kernel stack... Hit Ctrl-C to end.
^C
[...]
ffffffff810a9581 finish_task_switch
ffffffff8185d385 schedule
ffffffff81085672 do_wait
ffffffff8108687b sys_wait4
ffffffff81861bf6 entry_SYSCALL_64_fastpath
--
00007f6733a6b64a waitpid
- bash (24347)
4952
ffffffff810a9581 finish_task_switch
ffffffff8185d385 schedule
ffffffff81860c48 schedule_timeout
ffffffff810c5672 wait_woken
ffffffff8150715a n_tty_read
ffffffff815010f2 tty_read
ffffffff8122cd67 __vfs_read
ffffffff8122df65 vfs_read
ffffffff8122f465 sys_read
ffffffff81861bf6 entry_SYSCALL_64_fastpath
--
00007f6733a969b0 read
- bash (24347)
1450908
```
Tracing MySQL query latency (via a USDT probe):
```
# mysqld_qslower `pgrep -n mysqld`
Tracing MySQL server queries for PID 14371 slower than 1 ms...
TIME(s) PID MS QUERY
0.000000 18608 130.751 SELECT * FROM words WHERE word REGEXP '^bre.*n$'
2.921535 18608 130.590 SELECT * FROM words WHERE word REGEXP '^alex.*$'
4.603549 18608 24.164 SELECT COUNT(*) FROM words
9.733847 18608 130.936 SELECT count(*) AS count FROM words WHERE word REGEXP '^bre.*n$'
17.864776 18608 130.298 SELECT * FROM words WHERE word REGEXP '^bre.*n$' ORDER BY word
```
Using the trace multi-tool to watch login requests, by instrumenting the pam library:
```
# trace 'pam:pam_start "%s: %s", arg1, arg2'
TIME PID COMM FUNC -
17:49:45 5558 sshd pam_start sshd: root
17:49:47 5662 sudo pam_start sudo: root
17:49:49 5727 login pam_start login: bgregg
```
Many tools have usage messages (-h), and all should have man pages and text files of example output in the bcc project.
### Out of necessity
In 2014, Linux tracing had some kernel summary features (from ftrace and perf_events), but outside those we still had to dump and post-process data, a decades-old technique that has high overhead at scale. You couldn't frequency-count process names, function names, stack traces, or other arbitrary data in the kernel. You couldn't save variables in one probe event and then retrieve them in another, which meant that you couldn't measure latency (or time deltas) in custom places, and you couldn't create in-kernel latency histograms. You couldn't trace USDT probes. You couldn't even write custom programs. DTrace could do all of these, but only on Solaris or BSD. On Linux, some out-of-tree tracers like SystemTap could serve these needs, but brought their own challenges. (For the sake of completeness: yes, you _could_ write kprobe-based kernel modules, but practically no one did.)
In 2014 I joined the Netflix cloud performance team. Having spent years as a DTrace expert, it might have seemed crazy for me to move to Linux. But I had some motivations, in particular seeking a greater challenge: performance tuning the Netflix cloud, with its rapid application changes, microservice architecture, and distributed systems. Sometimes this job involves systems tracing, for which I'd previously used DTrace. Without DTrace on Linux, I began by using what was built in to the Linux kernel, ftrace and perf_events, and from them made a toolkit of tracing tools ([perf-tools][34]). They have been invaluable. But I couldn't do some tasks, particularly latency histograms and stack trace counting. We needed kernel tracing to be programmatic.
### What happened?
BPF adds programmatic capabilities to the existing kernel tracing facilities (tracepoints, kprobes, uprobes). It has been enhanced rapidly in the Linux 4.x series.
Timed sampling was the final major piece, and it landed in Linux 4.9-rc1 ([patchset][33]). Many thanks to Alexei Starovoitov (now working on BPF at Facebook), the lead developer behind these BPF enhancements.
The Linux kernel now has the following features built in (added between 2.6 and 4.9):
* Dynamic tracing, kernel-level (BPF support for kprobes)
* Dynamic tracing, user-level (BPF support for uprobes)
* Static tracing, kernel-level (BPF support for tracepoints)
* Timed sampling events (BPF with perf_event_open)
* PMC events (BPF with perf_event_open)
* Filtering (via BPF programs)
* Debug output (bpf_trace_printk())
* Per-event output (bpf_perf_event_output())
* Basic variables (global & per-thread variables, via BPF maps)
* Associative arrays (via BPF maps)
* Frequency counting (via BPF maps)
* Histograms (power-of-2, linear, and custom, via BPF maps)
* Timestamps and time deltas (bpf_ktime_get_ns(), and BPF programs)
* Stack traces, kernel (BPF stackmap)
* Stack traces, user (BPF stackmap)
* Overwrite ring buffers (perf_event_attr.write_backward)
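To show what the power-of-2 histograms in that list compute, here is a small user-space sketch of the bucketing (purely illustrative: the real counting happens in-kernel in a BPF map via the bpf_log2l() helper, and the sample sizes below are made up):

```python
from collections import Counter

def log2_bucket(value):
    # Index of the power-of-2 bucket a value falls into, mimicking
    # the bpf_log2l()-style bucketing behind bcc's log2 histograms.
    bucket = 0
    while value > 1:
        value >>= 1
        bucket += 1
    return bucket

# Hypothetical I/O sizes in KB, bucketed the way bitehist presents them
sizes_kb = [4, 6, 12, 17, 28, 40, 70, 130]
hist = Counter(log2_bucket(s) for s in sizes_kb)
for b in sorted(hist):
    low, high = 1 << b, (1 << (b + 1)) - 1
    print(f"{low} -> {high} : {hist[b]}")
```

Running this prints rows like `4 -> 7 : 2`, the same shape as the bitehist output shown earlier; in BPF the increment of each bucket happens on every event, in kernel context, with only the final counts copied to user space.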
The front-end we are using is bcc, which provides both Python and Lua interfaces. bcc adds:
* Static tracing, user-level (USDT probes via uprobes)
* Debug output (Python with BPF.trace_pipe() and BPF.trace_fields())
* Per-event output (BPF_PERF_OUTPUT macro and BPF.open_perf_buffer())
* Interval output (BPF.get_table() and table.clear())
* Histogram printing (table.print_log2_hist())
* C struct navigation, kernel-level (bcc rewriter maps to bpf_probe_read())
* Symbol resolution, kernel-level (ksym(), ksymaddr())
* Symbol resolution, user-level (usymaddr())
* BPF tracepoint support (via TRACEPOINT_PROBE)
* BPF stack trace support (incl. walk method for stack frames)
* Various other helper macros and functions
* Examples (under /examples)
* Many tools (under /tools)
* Tutorials (/docs/tutorial*.md)
* Reference guide (/docs/reference_guide.md)
I'd been holding off on this post until the last major feature was integrated, and now it has been, in 4.9-rc1. There are still some minor missing things we have workarounds for, and additional things we might do, but what we have right now is worth celebrating. Linux now has advanced tracing capabilities built in.
### Safety
BPF and its enhancements are designed to be production safe, and it is used today in large scale production environments. But if you're determined, you may still be able to find a way to hang the kernel. That experience should be the exception rather than the rule, and such bugs will be fixed fast, especially since BPF is part of Linux. All eyes are on Linux.
We did hit a couple of non-BPF bugs during development that needed to be fixed: RCU not being reentrant, which could cause kernel hangs for funccount and was fixed by the "bpf: map pre-alloc" patchset in 4.6, with a workaround in bcc for older kernels; and a uprobe memory accounting issue, which failed uprobe allocations and was fixed by the "uprobes: Fix the memcg accounting" patch in 4.8 and backported to earlier kernels (e.g., it's in the current 4.4.27 and 4.4.0-45.66).
### Why did Linux tracing take so long?
Prior work had been split among several other tracers: there was never a consolidated effort on any single one. For more about this and other issues, see my 2014 [tracing summit talk][32]. One thing I didn't note there was the counter effect of partial solutions: some companies had found another tracer (SystemTap or LTTng) was sufficient for their specific needs, and while they have been happy to hear about BPF, contributing to its development wasn't a priority given their existing solution.
BPF has only been enhanced to do tracing in the last two years. This process could have gone faster, but early on there were zero full-time engineers working on BPF tracing. Alexei Starovoitov (BPF lead), Brenden Blanco (bcc lead), myself, and others, all had other priorities. I tracked my hours on this at Netflix (voluntarily), and I've spent around 7% of my time on BPF/bcc. It wasn't that much of a priority, in part because we had our own workarounds (including my perf-tools, which work on older kernels).
Now that BPF tracing has arrived, there are already tech companies on the lookout for BPF skills. I can still highly recommend [Netflix][31]. (If you're trying to hire _me_ for BPF skills, then I'm still very happy at Netflix!)
### Ease of use
What might appear to be the largest remaining difference between DTrace and bcc/BPF is ease of use. But it depends on what you're doing with BPF tracing. Either you are:
* **Using BPF tools/metrics**: There should be no difference. Tools behave the same, GUIs can access similar metrics. Most people will use BPF in this way.
* **Developing tools/metrics**: bcc right now is much harder. DTrace has its own concise language, D, similar to awk, whereas bcc uses existing languages (C and Python or Lua) with libraries. A bcc tool in C+Python may be a _lot_ more code than a D-only tool: 10x the lines, or more. However, many DTrace tools used shell wrapping to provide arguments and error checking, inflating the code to a much bigger size. The coding difficulty is also different: the rewriter in bcc can get fiddly, which makes some scripts much more complicated to develop (extra bpf_probe_read()s, requiring more knowledge of BPF internals). This situation should improve over time as planned improvements land.
* **Running common one-liners**: Fairly similar. DTrace could do many with the "dtrace" command, whereas bcc has a variety of multitools: trace, argdist, funccount, funclatency, etc.
* **Writing custom ad hoc one-liners**: With DTrace this was trivial, and accelerated advanced analysis by allowing rapid custom questions to be posed and answered by the system. bcc is currently limited by its multitools and their scope.
In short, if you're an end user of BPF tools, you shouldn't notice these differences. If you're an advanced user and tool developer (like me), bcc is a lot more difficult right now.
To show a current example of the bcc Python front-end, here's the code for tracing disk I/O and printing I/O size as a histogram:
```
from bcc import BPF
from time import sleep
# load BPF program
b = BPF(text="""
#include <uapi/linux/ptrace.h>
#include <linux/blkdev.h>
BPF_HISTOGRAM(dist);
int kprobe__blk_account_io_completion(struct pt_regs *ctx, struct request *req)
{
dist.increment(bpf_log2l(req->__data_len / 1024));
return 0;
}
""")
# header
print("Tracing... Hit Ctrl-C to end.")
# trace until Ctrl-C
try:
sleep(99999999)
except KeyboardInterrupt:
    print()
# output
b["dist"].print_log2_hist("kbytes")
```
Note the embedded C (text=) in the Python code.
This gets the job done, but there's also room for improvement. Fortunately, we have time to do so: it will take many months before people are on Linux 4.9 and can use BPF, so we have time to create tools and front-ends.
### A higher-level language
An easier front-end, such as a higher-level language, may not improve adoption as much as you might imagine. Most people will use the canned tools (and GUIs), and only some of us will actually write them. But I'm not opposed to a higher-level language either, and some already exist, like SystemTap:
```
#!/usr/bin/stap
/*
* opensnoop.stp Trace file open()s. Basic version of opensnoop.
*/
probe begin
{
printf("\n%6s %6s %16s %s\n", "UID", "PID", "COMM", "PATH");
}
probe syscall.open
{
printf("%6d %6d %16s %s\n", uid(), pid(), execname(), filename);
}
```
Wouldn't it be nice if we could have the SystemTap front-end with all its language integration and tapsets, with the high-performance kernel built in BPF back-end? Richard Henderson of Red Hat has already begun work on this, and has released an [initial version][30]!
There's also [ply][29], an entirely new higher-level language for BPF:
```
#!/usr/bin/env ply
kprobe:SyS_*
{
$syscalls[func].count()
}
```
This is also promising.
I think the real challenge for tool developers won't be the language, though: it will be knowing what to do with these new superpowers.
### How you can contribute
* **Promotion**: There are currently no marketing efforts for BPF tracing. Some companies know it and are using it (Facebook, Netflix, Github, and more), but it'll take years to become widely known. You can help by sharing articles and resources with others in the industry.
* **Education**: You can write articles, give meetup talks, and contribute to bcc documentation. Share case studies of how BPF has solved real issues, and provided value to your company.
* **Fix bcc issues**: See the [bcc issue list][19], which includes bugs and feature requests.
* **File bugs**: Use bcc/BPF, and file bugs as you find them.
* **New tools**: There are more observability tools to develop, but please don't be hasty: people are going to spend hours learning and using your tool, so make it as intuitive and excellent as possible (see my [docs][18]). As Mike Muuss has said about his [ping][17] program: "If I'd known then that it would be my most famous accomplishment in life, I might have worked on it another day or two and added some more options."
* **High-level language**: If the existing bcc front-end languages really bother you, maybe you can come up with something much better. If you build it in bcc you can leverage libbcc. Or, you could help the SystemTap BPF or ply efforts.
* **GUI integration**: Apart from the bcc CLI observability tools, how can this new information be visualized? Latency heat maps, flame graphs, and more.
### Other Tracers
What about SystemTap, ktap, sysdig, LTTng, etc? It's possible that they all have a future, either by using BPF, or by becoming better at what they specifically do. Explaining each will be a blog post by itself.
And DTrace itself? We're still using it at Netflix, on our FreeBSD-based CDN.
### Further bcc/BPF Reading
I've written a [bcc/BPF Tool End-User Tutorial][28], a [bcc Python Developer's Tutorial][27], a [bcc/BPF Reference Guide][26], and contributed useful [/tools][25], each with an [example.txt][24] file and [man page][23]. My prior posts about bcc & BPF include:
* [eBPF: One Small Step][16] (we later just called it BPF)
* [bcc: Taming Linux 4.3+ Tracing Superpowers][15]
* [Linux eBPF Stack Trace Hack][14] (stack traces are now officially supported)
* [Linux eBPF Off-CPU Flame Graph][13] (" " ")
* [Linux Wakeup and Off-Wake Profiling][12] (" " ")
* [Linux Chain Graph Prototype][11] (" " ")
* [Linux eBPF/bcc uprobes][10]
* [Linux BPF Superpowers][9]
* [Ubuntu Xenial bcc/BPF][8]
* [Linux bcc Tracing Security Capabilities][7]
* [Linux MySQL Slow Query Tracing with bcc/BPF][6]
* [Linux bcc ext4 Latency Tracing][5]
* [Linux bcc/BPF Run Queue (Scheduler) Latency][4]
* [Linux bcc/BPF Node.js USDT Tracing][3]
* [Linux bcc tcptop][2]
* [Linux 4.9's Efficient BPF-based Profiler][1]
I've also given a talk about bcc/BPF, at Facebook's Performance@Scale event: [Linux BPF Superpowers][22]. In December, I'm giving a tutorial and talk on BPF/bcc at [USENIX LISA][21] in Boston.
### Acknowledgements
* Van Jacobson and Steve McCanne, who created the original BPF as a packet filter.
* Barton P. Miller, Jeffrey K. Hollingsworth, and Jon Cargille, for inventing dynamic tracing, and publishing the paper: "Dynamic Program Instrumentation for Scalable Performance Tools", Scalable High-performance Computing Conference (SHPCC), Knoxville, Tennessee, May 1994.
* kerninst (ParaDyn, UW-Madison), an early dynamic tracing tool that showed the value of dynamic tracing (late 1990s).
* Mathieu Desnoyers (of LTTng), the lead developer of kernel markers that led to tracepoints.
* IBM developed kprobes as part of DProbes. DProbes was combined with LTT to provide Linux dynamic tracing in 2000, but wasn't integrated.
* Bryan Cantrill, Mike Shapiro, and Adam Leventhal (Sun Microsystems), the core developers of DTrace, an awesome tool which proved that dynamic tracing could be production safe and easy to use (2004). Given the mechanics of dynamic tracing, this was a crucial turning point for the technology: that it became safe enough to be shipped _by default in Solaris_, an OS known for reliability.
* The many Sun Microsystems staff in marketing, sales, training, and other roles, for promoting DTrace and creating the awareness and desire for advanced system tracing.
* Roland McGrath (at Red Hat), the lead developer of utrace, which became uprobes.
* Alexei Starovoitov (PLUMgrid, then Facebook), the lead developer of enhanced BPF: the programmatic kernel components necessary.
* Many other Linux kernel engineers who contributed feedback, code, testing, and their own patchsets for the development of enhanced BPF (search lkml for BPF): Wang Nan, Daniel Borkmann, David S. Miller, Peter Zijlstra, and many others.
* Brenden Blanco (PLUMgrid), the lead developer of bcc.
* Sasha Goldshtein (Sela) developed tracepoint support in bcc, developed the most powerful bcc multitools trace and argdist, and contributed to USDT support.
* Vicent Martí and others at Github engineering, for developing the lua front-end for bcc, and contributing parts of USDT.
* Allan McAleavy, Mark Drayton, and other bcc contributors for various improvements.
Thanks to Netflix for providing the environment and support where I've been able to contribute to BPF and bcc tracing, and help get them done. I've also contributed to tracing in general over the years by developing tracing tools (using TNF/prex, DTrace, SystemTap, ktap, ftrace, perf, and now bcc/BPF), and books, blogs, and talks.
Finally, thanks to [Deirdré][20] for editing another post.
### Conclusion
Linux doesn't have DTrace (the language), but it now does, in a way, have the DTraceToolkit (the tools).
The Linux 4.9 kernel has the final capabilities needed to support modern tracing, via enhancements to its built-in BPF engine. The hardest part is now done: kernel support. Future work now includes more performance CLI tools, alternate higher-level languages, and GUIs.
For customers of performance analysis products, this is also good news: you can now ask for latency histograms and heatmaps, CPU and off-CPU flame graphs, better latency breakdowns, and lower-cost instrumentation. Per-packet tracing and processing in user space is now the old inefficient way.
So when are you going to upgrade to Linux 4.9? Once it is officially released, new performance tools await: apt-get install bcc-tools.
Enjoy!
Brendan
--------------------------------------------------------------------------------
via: http://www.brendangregg.com/blog/2016-10-27/dtrace-for-linux-2016.html
作者:[Brendan Gregg][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.brendangregg.com/
[1]:http://www.brendangregg.com/blog/2016-10-21/linux-efficient-profiler.html
[2]:http://www.brendangregg.com/blog/2016-10-15/linux-bcc-tcptop.html
[3]:http://www.brendangregg.com/blog/2016-10-12/linux-bcc-nodejs-usdt.html
[4]:http://www.brendangregg.com/blog/2016-10-08/linux-bcc-runqlat.html
[5]:http://www.brendangregg.com/blog/2016-10-06/linux-bcc-ext4dist-ext4slower.html
[6]:http://www.brendangregg.com/blog/2016-10-04/linux-bcc-mysqld-qslower.html
[7]:http://www.brendangregg.com/blog/2016-10-01/linux-bcc-security-capabilities.html
[8]:http://www.brendangregg.com/blog/2016-06-14/ubuntu-xenial-bcc-bpf.html
[9]:http://www.brendangregg.com/blog/2016-03-05/linux-bpf-superpowers.html
[10]:http://www.brendangregg.com/blog/2016-02-08/linux-ebpf-bcc-uprobes.html
[11]:http://www.brendangregg.com/blog/2016-02-05/ebpf-chaingraph-prototype.html
[12]:http://www.brendangregg.com/blog/2016-02-01/linux-wakeup-offwake-profiling.html
[13]:http://www.brendangregg.com/blog/2016-01-20/ebpf-offcpu-flame-graph.html
[14]:http://www.brendangregg.com/blog/2016-01-18/ebpf-stack-trace-hack.html
[15]:http://www.brendangregg.com/blog/2015-09-22/bcc-linux-4.3-tracing.html
[16]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html
[17]:http://ftp.arl.army.mil/~mike/ping.html
[18]:https://github.com/iovisor/bcc/blob/master/CONTRIBUTING-SCRIPTS.md
[19]:https://github.com/iovisor/bcc/issues
[20]:http://www.brendangregg.com/blog/2016-07-23/deirdre.html
[21]:https://www.usenix.org/conference/lisa16
[22]:http://www.brendangregg.com/blog/2016-03-05/linux-bpf-superpowers.html
[23]:https://github.com/iovisor/bcc/tree/master/man/man8
[24]:https://github.com/iovisor/bcc/tree/master/tools
[25]:https://github.com/iovisor/bcc/tree/master/tools
[26]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[27]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md
[28]:https://github.com/iovisor/bcc/blob/master/docs/tutorial.md
[29]:https://wkz.github.io/ply/
[30]:https://lkml.org/lkml/2016/6/14/749
[31]:http://www.brendangregg.com/blog/2016-03-30/working-at-netflix-2016.html
[32]:http://www.slideshare.net/brendangregg/from-dtrace-to-linux
[33]:https://lkml.org/lkml/2016/9/1/831
[34]:https://github.com/brendangregg/perf-tools
[35]:https://github.com/iovisor/bcc/blob/master/INSTALL.md
[36]:https://github.com/iovisor/bcc
[37]:https://github.com/opendtrace/toolkit
[38]:https://raw.githubusercontent.com/brendangregg/bcc/master/images/bcc_tracing_tools_2016.png
HOW TO CREATE AN EBOOK WITH CALIBRE IN LINUX [COMPLETE GUIDE]
====
[![Guide to create an eBoook in Linux with Calibre](https://itsfoss.com/wp-content/uploads/2016/10/Create-an-eBook-in-Linux.jpg)][8]
_Brief: This beginner's guide shows you how to quickly create an ebook with the Calibre tool in Linux._
Ebooks have been growing by leaps and bounds in popularity since Amazon started selling them several years ago. The good news is that they are very easy to create with Free and Open Source tools.
In this tutorial, I'll show you how to create an eBook in Linux.
### CREATING AN EBOOK IN LINUX
To create an ebook you'll need two pieces of software: a word processor (I'll be using [LibreOffice][7], of course) and Calibre. [Calibre][6] is a great ebook reader and library program. You can use it to [open ePub files in Linux][5] or to manage your collection of eBooks.
Besides this software, you also need an ebook cover (1410×2250) and your manuscript.
### STEP 1
First, you need to open your manuscript with your word processor. Calibre can automatically create a table of contents for you. In order to do so, you need to set the chapter titles in your manuscript to Heading 1. Just highlight the chapter titles and select "Heading 1" from the paragraph style drop-down box.
![ebook creation with Calibre](https://itsfoss.com/wp-content/uploads/2016/10/header1.png)
If you plan to have sub-chapters and want them to be added to the TOC, then make all those titles Heading 2.
Now, save your document as an HTML file.
### STEP 2
In Calibre, click the “Add books” button. After the dialog box appears, you can browse to where your HTML file is located and add it to the program.
![create ebooks with Calibre](https://itsfoss.com/wp-content/uploads/2016/10/calibre1.png)
### STEP 3
Once the new HTML file is added to the Calibre library, select the new file and click the “Edit metadata” button. From here you can add the following information: Title, Author, cover image, description and more. When youre done, click “OK”.
![creating ebooks with Calibre in Linux](https://itsfoss.com/wp-content/uploads/2016/10/calibre2.png)
### STEP 4
Now click the “Convert books” button.
In the new window, there are quite a few options available, but you don't need to use them all.
![creating ebooks with Calibre in Linux -2](https://itsfoss.com/wp-content/uploads/2016/10/calibre3.png)
From the top right of the new screen, select epub. Calibre also gives you the option to create a mobi file, but I found that those files didn't always work the way I wanted them to.
![creating ebooks with Calibre in Linux -3](https://itsfoss.com/wp-content/uploads/2016/10/calibre4.png)
### STEP 5
Click the “Look & Feel” tab from the left side of the new dialog box. Now, select the “Remove spacing between paragraphs”.
![creating ebooks with Calibre in Linux - 4](https://itsfoss.com/wp-content/uploads/2016/10/calibre5.png)
Next, we will create the table of contents. If you don't plan to use a table of contents in your book, you can skip this step. Select the Table of Contents tab. From there, click the wand icon to the right of "Level 1 TOC (XPath expression)".
![creating ebooks with Calibre in Linux - 5](https://itsfoss.com/wp-content/uploads/2016/10/calibre6.png)
In the new window, select “h1” from the drop down menu under “Match HTML tags with tag name”. Click “OK” to close the window. If you set up sub-chapters, set the “Level 2 TOC (XPath expression)” to h2.
![creating ebooks with Calibre in Linux - 6](https://itsfoss.com/wp-content/uploads/2016/10/calibre7.png)
Before we start the conversion, select EPUB Output. On the new page, select the “Insert inline Table of Contents” option.
![creating ebooks with Calibre in Linux - 7](https://itsfoss.com/wp-content/uploads/2016/10/calibre8.png)
Now all you have to do is click “OK” to start the conversion process. Unless you have a huge file, the conversion should finish fairly quickly.
There you go, you just created a quick ebook!
For more advanced users who know how to write CSS, Calibre gives you the option to add CSS styling to your text. Just go to the "Look & Feel" section and select the styling tab. If you try to do this with mobi, it won't accept all of the styling for some reason.
![creating ebooks with Calibre in Linux - 8](https://itsfoss.com/wp-content/uploads/2016/10/calibre9.png)
Well, that was fairly easy, wasn't it? I hope this tutorial helped you create eBooks in Linux.
--------------------------------------------------------------------------------
via: https://itsfoss.com/create-ebook-calibre-linux/
作者:[John Paul ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[1]:http://pinterest.com/pin/create/button/?url=https://itsfoss.com/create-ebook-calibre-linux/&description=How+To+Create+An+Ebook+With+Calibre+In+Linux+%5BComplete+Guide%5D&media=https://itsfoss.com/wp-content/uploads/2016/10/Create-an-eBook-in-Linux.jpg
[2]:https://www.linkedin.com/cws/share?url=https://itsfoss.com/create-ebook-calibre-linux/
[3]:https://twitter.com/share?original_referer=https%3A%2F%2Fitsfoss.com%2F&source=tweetbutton&text=How+To+Create+An+Ebook+With+Calibre+In+Linux+%5BComplete+Guide%5D&url=https%3A%2F%2Fitsfoss.com%2Fcreate-ebook-calibre-linux%2F&via=%40itsfoss
[4]:https://itsfoss.com/fix-updater-issue-pear-os-8/
[5]:https://itsfoss.com/open-epub-books-ubuntu-linux/
[6]:http://calibre-ebook.com/
[7]:https://www.libreoffice.org/
[8]:https://itsfoss.com/wp-content/uploads/2016/10/Create-an-eBook-in-Linux.jpg
# I don't understand Python's Asyncio
Recently I started looking into Python's new [asyncio][4] module a bit more. The reason for this is that I needed to do something that works better with evented IO, and I figured I might give the new hot thing in the Python world a try. Primarily what I learned from this exercise is that it's a much more complex system than I expected, and I am now at the point where I am very confident that I do not know how to use it properly.
It's not conceptually hard to understand and borrows a lot from Twisted, but it has so many elements that play into it that I'm no longer sure how the individual bits and pieces are supposed to go together. Since I'm not clever enough to actually propose anything better, I figured I'd share my thoughts about what confuses me instead, so that others might be able to use that in some capacity to understand it.
### The Primitives
`asyncio` is supposed to implement asynchronous IO with the help of coroutines. Originally implemented as a library around the `yield` and `yield from` expressions it's now a much more complex beast as the language evolved at the same time. So here is the current set of things that you need to know exist:
* event loops
* event loop policies
* awaitables
* coroutine functions
* old style coroutine functions
* coroutines
* coroutine wrappers
* generators
* futures
* concurrent futures
* tasks
* handles
* executors
* transports
* protocols
In addition the language gained a few special methods that are new:
* `__aenter__` and `__aexit__` for asynchronous `with` blocks
* `__aiter__` and `__anext__` for asynchronous iterators (async loops and async comprehensions). For extra fun that protocol already changed once. In 3.5 it returns an awaitable (a coroutine); in Python 3.6 it will return a newfangled async generator.
* `__await__` for custom awaitables
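As a minimal sketch of the first pair of special methods, here's a tiny async context manager (the `Connection` class and its sleeps are made up for illustration; a real one would await an actual connect and close):

```python
import asyncio

class Connection:
    # "async with" drives these two methods, awaiting each in turn,
    # so setup and teardown can themselves do asynchronous work.
    async def __aenter__(self):
        await asyncio.sleep(0)  # stand-in for an async connect
        return self

    async def __aexit__(self, exc_type, exc, tb):
        await asyncio.sleep(0)  # stand-in for an async close

async def main():
    async with Connection() as conn:
        return isinstance(conn, Connection)

loop = asyncio.new_event_loop()
result = loop.run_until_complete(main())
loop.close()
print(result)  # True
```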
That's quite a bit to know and the documentation covers those parts. However here are some notes I made on some of those things to understand them better:
### Event Loops
The event loop in asyncio is a bit different than you would expect from first look. On the surface it looks like each thread has one event loop but that's not really how it works. Here is how I think this works:
* if you are the main thread an event loop is created when you call `asyncio.get_event_loop()`
* if you are any other thread, a runtime error is raised from `asyncio.get_event_loop()`
* You can at any point call `asyncio.set_event_loop()` to bind an event loop to the current thread. Such an event loop can be created with the `asyncio.new_event_loop()` function.
* Event loops can be used without being bound to the current thread.
* `asyncio.get_event_loop()` returns the thread-bound event loop; it does not return the currently running event loop.
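A small experiment makes the thread-binding rules above concrete (a sketch I use to convince myself; the behavior shown matches the default event loop policy):

```python
import asyncio
import threading

def worker(results):
    # A non-main thread has no event loop bound to it by default,
    # so get_event_loop() raises RuntimeError here...
    try:
        asyncio.get_event_loop()
        results["raised"] = False
    except RuntimeError:
        results["raised"] = True
    # ...until we create a loop and bind it to this thread explicitly.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    results["rebound"] = asyncio.get_event_loop() is loop
    loop.close()

results = {}
t = threading.Thread(target=worker, args=(results,))
t.start()
t.join()
print(results)  # {'raised': True, 'rebound': True}
```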
The combination of these behaviors is super confusing for a few reasons. First of all you need to know that these functions are delegates to the underlying event loop policy which is globally set. The default is to bind the event loop to the thread. Alternatively one could in theory bind the event loop to a greenlet or something similar if one would so desire. However it's important to know that library code does not control the policy and as such cannot reason that asyncio will scope to a thread.
Secondly, asyncio does not require event loops to be bound to the context through the policy. An event loop can work just fine in isolation. However, this is the first problem for library code, as a coroutine or something similar does not know which event loop is responsible for scheduling it. This means that if you call <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">asyncio.get_event_loop()</tt> from within a coroutine you might not get back the event loop that ran you. This is also the reason why all APIs take an optional explicit loop parameter. So, for instance, to figure out which coroutine is currently running one cannot reliably invoke something like this:
```
def get_task():
loop = asyncio.get_event_loop()
try:
return asyncio.Task.get_current(loop)
except RuntimeError:
return None
```
Instead the loop has to be passed explicitly. This furthermore requires you to pass the loop through explicitly everywhere in library code, or very strange things will happen. I'm not sure what the thinking behind that design is, but if this is not being fixed (for instance by having <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">get_event_loop()</tt> return the actually running loop), then the only other change that makes sense is to disallow explicit loop passing and require the loop to be bound to the current context (thread, etc.).
Since the event loop policy does not provide an identifier for the current context, it is also impossible for a library to "key" to the current context in any way. There are also no callbacks that would permit hooking the teardown of such a context, which further limits what can realistically be done.
### Awaitables and Coroutines
In my humble opinion the biggest design mistake of Python was to overload iterators so much. They are now being used not just for iteration but also for various types of coroutines. One of the biggest design mistakes of iterators in Python is that <cite>StopIteration</cite> bubbles up if not caught. This can cause very frustrating problems where an exception somewhere can cause a generator or coroutine elsewhere to abort. This is a long-running issue that Jinja, for instance, has to fight with. The template engine internally renders into a generator, and when a template for some reason raises a <cite>StopIteration</cite> the rendering just ends there.
Python is slowly learning the lesson of not overloading this system so much. First of all, in 3.something the asyncio module landed without language support. So it was decorators and generators all the way down. To implement the <cite>yield from</cite> support and more, <cite>StopIteration</cite> was overloaded once more. This led to surprising behavior like this:
```
>>> def foo(n):
... if n in (0, 1):
... return [1]
... for item in range(n):
... yield item * 2
...
>>> list(foo(0))
[]
>>> list(foo(1))
[]
>>> list(foo(2))
[0, 2]
```
No error, no warning. Just not the behavior you would expect. This is because a <cite>return</cite> with a value from a function that is a generator actually raises a <cite>StopIteration</cite> with a single argument that is not picked up by the iterator protocol, only handled in the coroutine code.
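Python's eventual fix for this class of surprise is PEP 479: under `generator_stop` semantics (a future import in 3.5+, the default since 3.7), a stray <cite>StopIteration</cite> escaping inside a generator is converted into a <cite>RuntimeError</cite> instead of silently ending iteration. A small sketch (`fetch_values` is a made-up example name):

```python
def fetch_values(source):
    while True:
        # next(source) raises StopIteration when exhausted; before PEP 479
        # that silently terminated *this* generator as well.
        yield next(source)

gen = fetch_values(iter([1, 2]))
print(next(gen), next(gen))  # 1 2
try:
    next(gen)  # the escaping StopIteration is now turned into a RuntimeError
except RuntimeError as exc:
    print(type(exc).__name__)  # RuntimeError
```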
With 3.5 and 3.6 a lot changed, because now in addition to generators we have coroutine objects. Instead of making a coroutine by wrapping a generator, there is now a separate object type which is created directly. It's implemented by prefixing a function with <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">async</tt>. For instance <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">async def x()</tt> will make such a coroutine. Now in 3.6 there will be separate async generators that will raise <cite>StopAsyncIteration</cite> to keep things apart. Additionally, with Python 3.5 and later there is now a future import (<tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">generator_stop</tt>) that will raise a <cite>RuntimeError</cite> if code raises <cite>StopIteration</cite> in an iteration step.
Why am I mentioning all this? Because the old stuff does not really go away. Generators still have <cite>send</cite> and <cite>throw</cite> and coroutines still largely behave like generators. That is a lot of stuff you need to know now for quite some time going forward.
To unify a lot of this duplication we have a few more concepts in Python now:
* awaitable: an object with an <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">__await__</tt> method. This is for instance implemented by native coroutines and old style coroutines and some others.
* coroutinefunction: a function that, when called, returns a native coroutine. Not to be confused with an ordinary function that merely returns a coroutine object.
* a coroutine: a native coroutine. Note that old asyncio coroutines are not considered coroutines by the current documentation as far as I can tell. At the very least <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">inspect.iscoroutine</tt> does not consider that a coroutine. It's however picked up by the future/awaitable branches.
Particularly confusing is that <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">asyncio.iscoroutinefunction</tt> and <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">inspect.iscoroutinefunction</tt> do different things. The same goes for <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">inspect.iscoroutine</tt> and <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">inspect.iscoroutinefunction</tt>. Note that even though inspect does not know anything about asyncio legacy coroutine functions in the type check, it is apparently aware of them when you check for awaitable status, even though such a function does not conform to <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">__await__</tt>.
### Coroutine Wrappers
Whenever you run <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">async def</tt>, Python invokes a thread-local coroutine wrapper. It's set with <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">sys.set_coroutine_wrapper</tt>, and it's a function that can wrap the coroutine. It looks a bit like this:
```
>>> import sys
>>> sys.set_coroutine_wrapper(lambda x: 42)
>>> async def foo():
... pass
...
>>> foo()
__main__:1: RuntimeWarning: coroutine 'foo' was never awaited
42
```
In this case I never actually invoke the original function, just to give you a hint of what this can do. As far as I can tell this is always thread-local, so if you swap out the event loop policy you need to figure out separately how to make this coroutine wrapper sync up with the same context, if that's something you want to do. Newly spawned threads will not inherit the wrapper from the parent thread.
This is not to be confused with the asyncio coroutine wrapping code.
### Awaitables and Futures
Some things are awaitables. As far as I can see the following things are considered awaitable:
* native coroutines
* generators that have the fake <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">CO_ITERABLE_COROUTINE</tt> flag set (we will cover that)
* objects with an <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">__await__</tt> method
Essentially these are all objects with an <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">__await__</tt> method, except that the generators don't have one, for legacy reasons. Where does the <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">CO_ITERABLE_COROUTINE</tt> flag come from? It comes from a coroutine wrapper (not to be confused with <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">sys.set_coroutine_wrapper</tt>), namely <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">@asyncio.coroutine</tt>.
That, through some indirection, will wrap the generator with <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">types.coroutine</tt> (not to be confused with <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">types.CoroutineType</tt> or <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">asyncio.coroutine</tt>), which will re-create the internal code object with the additional flag <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">CO_ITERABLE_COROUTINE</tt>.
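Stripped of all the flags and wrappers, the awaitable protocol itself is small: an __await__ method returns a generator-like object that yields while suspended and delivers its final value through StopIteration. A minimal hand-driven sketch (the Eventually class is made up; in real code an event loop would do the driving):

```python
class Eventually:
    """A minimal awaitable: any object with __await__ can be awaited."""
    def __await__(self):
        yield "suspended"   # hand control back to whoever is driving us
        return 42           # delivered to the driver via StopIteration.value

aw = Eventually().__await__()
print(next(aw))             # suspended
try:
    aw.send(None)           # resume; the return value ends the generator
except StopIteration as stop:
    print(stop.value)       # 42
```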
So now that we know what those things are, what are futures? First we need to clear up one thing: there are actually two (completely incompatible) types of futures in Python 3: <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">asyncio.futures.Future</tt> and <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">concurrent.futures.Future</tt>. One came before the other, but both are still used, even within asyncio. For instance, <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">asyncio.run_coroutine_threadsafe()</tt> will dispatch a coroutine to an event loop running in another thread, but it will then return a <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">concurrent.futures.Future</tt> object instead of an <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">asyncio.futures.Future</tt> object.
This makes sense because only the <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">concurrent.futures.Future</tt> object is thread safe.
So now that we know there are two incompatible futures we should clarify what futures are in asyncio. Honestly I'm not entirely sure where the differences are but I'm going to call this "eventual" for the moment. It's an object that eventually will hold a value and you can do some handling with that eventual result while it's still computing. Some variations of this are called deferreds, others are called promises. What the exact difference is is above my head.
What can you do with a future? You can attach a callback that will be invoked once it's ready or you can attach a callback that will be invoked if the future fails. Additionally you can <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">await</tt> it (it implements <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">__await__</tt> and is thus awaitable). Additionally futures can be cancelled.
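A small sketch of those capabilities on an asyncio future (loop.create_future() has been available since Python 3.5.2; the main and events names are made up):

```python
import asyncio

events = []

async def main(loop):
    fut = loop.create_future()
    fut.add_done_callback(lambda f: events.append(f.result()))  # ready callback
    loop.call_soon(fut.set_result, 42)  # resolve it on the next loop iteration
    events.append(await fut)            # futures are awaitable too

loop = asyncio.new_event_loop()
loop.run_until_complete(main(loop))
loop.close()
print(events)  # [42, 42]
```

The done callback fires first because it was registered before the task started awaiting the future.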
So how do you get such a future? By calling <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">asyncio.ensure_future</tt> on an awaitable object. This will also make a good old generator into such a future. However if you read the docs you will read that <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">asyncio.ensure_future</tt> actually returns a <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">Task</tt>. So what's a task?
### Tasks
A task is a future that wraps a coroutine in particular. It works like a future, but it also has some extra methods to extract the current stack of the contained coroutine. We already saw tasks mentioned earlier, because the task is the main way to figure out what an event loop is currently doing, via <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">Task.get_current</tt>.
There is also a difference in how cancellation works for tasks and futures, but that's beyond the scope of this post. Cancellation is its own entire beast. If you are in a coroutine and you know you are currently running, you can get your own task through <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">Task.get_current</tt> as mentioned, but this requires knowledge of which event loop you are dispatched on, which might or might not be the thread-bound one.
It's not possible for a coroutine to know which loop goes with it. Also, the <cite>Task</cite> does not provide that information through a public API. However, if you did manage to get hold of a task, you can currently access <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">task._loop</tt> to find your way back to the event loop.
### Handles
In addition to all of this there are handles. Handles are opaque objects representing pending executions that cannot be awaited but can be cancelled. In particular, if you schedule the execution of a call with <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">call_soon</tt> or <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">call_soon_threadsafe</tt> (and some others), you get back a handle you can then use to cancel the execution as a best-effort attempt, but you can't wait for the call to actually take place.
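A short sketch of a handle in action, using call_later, which returns a cancellable TimerHandle (the fired list is just for illustration):

```python
import asyncio

loop = asyncio.new_event_loop()
fired = []

handle = loop.call_later(60, fired.append, "too late")  # returns a TimerHandle
handle.cancel()            # best effort: the scheduled call will never run
loop.call_soon(loop.stop)  # call_soon also returns a (plain) Handle
loop.run_forever()
loop.close()

print(handle.cancelled(), fired)  # True []
```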
### Executors
Since you can have multiple event loops, but it's not obvious what the use of more than one of them per thread would be, the obvious assumption is that a common setup is to have N threads with an event loop each. So how do you inform another event loop about doing some work? You cannot schedule a callback into an event loop in another thread _and_ get the result back. For that you need to use executors instead.
Executors come from <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">concurrent.futures</tt>, for instance, and they allow you to schedule work into threads that are themselves not evented. For instance, you can use <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">run_in_executor</tt> on the event loop to schedule a function to be called in another thread. The result is then returned as an asyncio future instead of a concurrent future, which is what <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">run_coroutine_threadsafe</tt> would return. I did not yet have enough mental capacity to figure out why those APIs exist, how you are supposed to use them, and when to use which one. The documentation suggests that the executor stuff could be used to build multiprocess things.
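A minimal run_in_executor sketch (blocking_io is a made-up stand-in for real blocking work; passing None selects the loop's default thread pool executor):

```python
import asyncio
import time

def blocking_io(n):
    time.sleep(0.01)  # stands in for real blocking work (disk, DNS, ...)
    return n * 2

loop = asyncio.new_event_loop()
# The blocking call runs in a worker thread while the loop stays free, and we
# get back an awaitable asyncio future rather than a concurrent.futures.Future.
result = loop.run_until_complete(loop.run_in_executor(None, blocking_io, 21))
loop.close()
print(result)  # 42
```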
### Transports and Protocols
I always thought these would be the confusing things, but they are basically a verbatim copy of the same concepts from Twisted. So read those docs if you want to understand them.
### How to use asyncio
Now that we roughly understand asyncio, here are a few patterns that people seem to use when they write asyncio code:
* pass the event loop to all coroutines. That appears to be what a part of the community is doing. Giving a coroutine knowledge about what loop is going to schedule it makes it possible for the coroutine to learn about its task.
* alternatively you require that the loop is bound to the thread. That also lets a coroutine learn about its task. Ideally, support both. Sadly the community is already torn over what to do.
* If you want to use contextual data (think thread locals) you are a bit out of luck currently. The most popular workaround is apparently Atlassian's <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">aiolocals</tt>, which basically requires you to manually propagate contextual information into spawned coroutines, since the interpreter does not provide support for this. This means that if you have a utility library spawning coroutines you will lose context.
* Ignore that the old coroutine stuff in Python exists. Use 3.5 only, with the new <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">async def</tt> keyword and co. In particular you will need that anyway to somewhat enjoy the experience, because with older versions you do not have async context managers, which turn out to be very necessary for resource management.
* Learn to restart the event loop for cleanup. This is something that took me longer to realize than I wish it did, but the sanest way to deal with cleanup logic that is written in async code is to restart the event loop a few times until nothing pending is left. Since sadly there is no common pattern to deal with this, you will end up with some ugly workarounds at times. For instance <cite>aiohttp</cite>'s web support also does this pattern, so if you want to combine two cleanup logics you will probably have to reimplement the utility helper that it provides, since that helper completely tears down the loop when it's done. This is also not the first library I saw do this :(
* Working with subprocesses is non-obvious. You need to have an event loop running in the main thread, which I suppose is listening in on signal events and then dispatches them to other event loops. This requires that the loop is notified via <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">asyncio.get_child_watcher().attach_loop(...)</tt>.
* Writing code that supports both async and sync is somewhat of a lost cause. It also gets dangerous quickly when you start being clever and try to support <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">with</tt> and <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">async with</tt> on the same object for instance.
* If you want to give a coroutine a better name to figure out why it was not being awaited, setting <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">__name__</tt> doesn't help. You need to set <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">__qualname__</tt> instead which is what the error message printer uses.
* Sometimes internal type conversions can screw you over. In particular, the <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">asyncio.wait()</tt> function will make sure all things passed are futures, which means that if you pass coroutines instead, you will have a hard time finding out if your coroutine finished or is pending, since the input objects no longer match the output objects. In that case the only real sane thing to do is to ensure that everything is a future upfront.
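The last point above, wrapping everything into futures upfront, can be sketched like this (job is a made-up coroutine; the loop is bound to the thread so ensure_future can find it):

```python
import asyncio

async def job(n):
    return n * n

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)  # ensure_future falls back to the bound loop
# Wrap everything upfront so the objects passed to wait() are the very same
# objects that come back in the done/pending sets.
futures = [asyncio.ensure_future(job(n)) for n in (1, 2, 3)]
done, pending = loop.run_until_complete(asyncio.wait(futures))
loop.close()
print(done == set(futures), sorted(f.result() for f in done))  # True [1, 4, 9]
```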
### Context Data
Aside from the insane complexity and my own lack of understanding of how to best write APIs for it, my biggest issue is the complete lack of consideration for context-local data. This is something that the node community has learned by now. <tt class="docutils literal" style="font-family: &quot;Ubuntu Mono&quot;, Consolas, &quot;Deja Vu Sans Mono&quot;, &quot;Bitstream Vera Sans Mono&quot;, Monaco, &quot;Courier New&quot;; font-size: 0.9em; background: rgb(238, 238, 238);">continuation-local-storage</tt> exists, but it is generally accepted to have been implemented too late. Continuation-local storage and similar concepts are regularly used to enforce security policies in a concurrent environment, and corruption of that information can cause severe security issues.
The fact that Python does not even have any store at all for this is more than disappointing. I was looking into this in particular because I'm investigating how to best support [Sentry's breadcrumbs][3] for asyncio and I do not see a sane way to do it. There is no concept of context in asyncio, there is no way to figure out which event loop you are working with from generic code and without monkeypatching the world this information will not be available.
Node is currently going through the process of [finding a long term solution for this problem][2]. That this is not something to be ignored can be seen from it being a recurring issue in all ecosystems. It comes up with JavaScript, Python and the .NET environment. The problem [is named async context propagation][1] and solutions go by many names. In Go, the context package needs to be used and explicitly passed to all goroutines (not a perfect solution, but at least one). .NET has the best solution in the form of local call contexts. It can be a thread context, a web request context, or something similar, but it's automatically propagated unless suppressed. This is the gold standard of what to aim for. Microsoft has had this solved for more than 15 years now, I believe.
I don't know if the ecosystem is still young enough that logical call contexts can be added but now might still be the time.
### Personal Thoughts
Man that thing is complex and it keeps getting more complex. I do not have the mental capacity to casually work with asyncio. It requires constantly updating the knowledge with all language changes and it has tremendously complicated the language. It's impressive that an ecosystem is evolving around it but I can't help but get the impression that it will take quite a few more years for it to become a particularly enjoyable and stable development experience.
What landed in 3.5 (the actual new coroutine objects) is great. In particular, with the changes that are coming up there is a sensible base that I wish had been there in earlier versions. The entire mess of overloading generators to be coroutines was a mistake in my mind. With regards to what's in asyncio, I'm not sure of anything. It's an incredibly complex thing and super messy internally. It's hard to comprehend how it works in all details: when you can pass a generator, when it has to be a real coroutine, what futures are, what tasks are, how the loop works, and all that before we even get to the actual IO part.
The worst part is that asyncio is not even particularly fast. David Beazley's live-demo, hacked-up asyncio replacement is twice as fast as it. There is an enormous amount of complexity that's hard to understand and reason about, and then it fails on its main promise. I'm not sure what to think about it, but I know at least that I don't understand asyncio well enough to feel confident about giving people advice on how to structure code for it.
--------------------------------------------------------------------------------
via: http://lucumr.pocoo.org/2016/10/30/i-dont-understand-asyncio/
Author: [Armin Ronacher][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://lucumr.pocoo.org/about/
[1]:https://docs.google.com/document/d/1tlQ0R6wQFGqCS5KeIw0ddoLbaSYx6aU7vyXOkv-wvlM/edit
[2]:https://github.com/nodejs/node-eps/pull/18
[3]:https://docs.sentry.io/learn/breadcrumbs/
[4]:https://docs.python.org/3/library/asyncio.html

###Translating by rusking
4 Useful Way to Know Plugged USB Device Name in Linux
============================================================
As a newbie, one of the many [things you should master in Linux][1] is identification of the devices attached to your system. It may be your computer's hard disk, an external hard drive, or removable media such as a USB drive or an SD memory card.
Using USB drives for file transfer is very common today, and for those (new Linux users) who prefer to use the command line, learning the different ways to identify a USB device name is very important when you need to format it.
Once you attach a device to your system, such as a USB drive, especially on a desktop, it is automatically mounted to a given directory, normally under /media/username/device-label, and you can then access the files in it from that directory. However, this is not the case with a server, where you have to [manually mount a device][2] and specify its mount point.
Linux identifies devices using special device files stored in the `/dev` directory. Some of the files you will find in this directory include `/dev/sda` or `/dev/hda`, which represent your first master drive; each partition will be represented by a number, such as `/dev/sda1` or `/dev/hda1` for the first partition, and so on.
```
$ ls /dev/sda*
```
[
![List All Linux Device Names](http://www.tecmint.com/wp-content/uploads/2016/10/List-All-Linux-Device-Names.png)
][3]
List All Linux Device Names
Now let's find out device names using some different command-line tools, as shown below:
### Find Out Plugged USB Device Name Using df Command
To view each device attached to your system as well as its mount point, you can use the [df command][4] (which checks Linux disk space utilization) as shown in the image below:
```
$ df -h
```
[
![Find USB Device Name Using df Command](http://www.tecmint.com/wp-content/uploads/2016/10/Find-USB-Device-Name.png)
][5]
Find USB Device Name Using df Command
### Use lsblk Command to Find USB Device Name
You can also use the [lsblk command (list block devices)][6] which lists all block devices attached to your system like so:
```
$ lsblk
```
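lsblk can also be told which columns to print; `lsblk -o NAME,TRAN` adds the transport column, where USB-attached devices show up as `usb`. A minimal sketch of filtering such output with awk (the sample lines below are made up for illustration, not read from your machine):

```shell
# Illustrative `lsblk -o NAME,TRAN` output; on a live system you would
# pipe lsblk itself into awk instead of this sample text.
sample='sda  sata
sdb  usb
sr0  sata'
# Keep only rows whose transport column says "usb" and print /dev paths.
usb_devs=$(printf '%s\n' "$sample" | awk '$2 == "usb" { print "/dev/" $1 }')
echo "$usb_devs"   # → /dev/sdb
```

On a live system, `lsblk -o NAME,TRAN | awk '$2 == "usb" { print "/dev/" $1 }'` applies the same filter to real devices.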
[
![List Linux Block Devices](http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Block-Devices.png)
][7]
List Linux Block Devices
### Identify USB Device Name with fdisk Utility
[fdisk is a powerful utility][8] which prints out the partition table of all your block devices, including a USB drive. You can run it with root privileges as follows:
```
$ sudo fdisk -l
```
[
![List Partition Table of Block Devices](http://www.tecmint.com/wp-content/uploads/2016/10/List-Partition-Table.png)
][9]
List Partition Table of Block Devices
### Determine USB Device Name with dmesg Command
dmesg is an important command that prints or controls the kernel ring buffer, a data structure which [stores information about the kernel's operations][10].
Run the command below to view kernel operation messages, which will also print information about your USB device:
```
$ dmesg
```
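Because dmesg prints the entire ring buffer, it usually pays to filter its output. A small sketch that pulls the device name out of a typical attach message; the sample line is illustrative and the exact wording varies between kernel versions, so on a live system you would pipe `dmesg` itself into the filter:

```shell
# Illustrative kernel message for a newly attached USB disk.
line='sd 6:0:0:0: [sdb] Attached SCSI removable disk'
# Extract the bracketed device name and build the /dev path from it.
dev=$(printf '%s\n' "$line" | grep -o '\[sd[a-z]*\]' | tr -d '[]')
echo "/dev/$dev"   # → /dev/sdb
```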
[
![dmesg - Prints USB Device Name](http://www.tecmint.com/wp-content/uploads/2016/10/dmesg-shows-kernel-information.png)
][11]
dmesg Prints USB Device Name
That is all for now. In this article, we covered different approaches to finding out a USB device name from the command line. You can also share with us any other methods for the same purpose, or offer your thoughts about the article via the comment section below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/find-usb-device-name-in-linux
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/tag/linux-tricks/
[2]:http://www.tecmint.com/mount-filesystem-in-linux/
[3]:http://www.tecmint.com/wp-content/uploads/2016/10/List-All-Linux-Device-Names.png
[4]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Find-USB-Device-Name.png
[6]:http://www.tecmint.com/commands-to-collect-system-and-hardware-information-in-linux/
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Linux-Block-Devices.png
[8]:http://www.tecmint.com/fdisk-commands-to-manage-linux-disk-partitions/
[9]:http://www.tecmint.com/wp-content/uploads/2016/10/List-Partition-Table.png
[10]:http://www.tecmint.com/dmesg-commands/
[11]:http://www.tecmint.com/wp-content/uploads/2016/10/dmesg-shows-kernel-information.png

View File

@ -1,3 +1,5 @@
【翻译中】 by jayjay823
# 5 Best FPS Games For Linux
![best FPS games for linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/fps-games-for-linux.jpg?581)

View File

@ -1,28 +0,0 @@
# 98 percent of developers use open source at work
![developer using open source](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/07/developer.jpg?resize=750%2C500)
Open source is reaching new heights each day, and a new study that surfaced online claims over 98 percent of developers use open source tools at work.
Git repository manager [GitLab][1] has conducted a survey that revealed some interesting facts about open source adoption. The survey, conducted among a group of developers, found that of the 98 percent who prefer to use open source at work, 91 percent opt for the same development tools for work and personal projects. Moreover, 92 percent of the whole group consider distributed version control systems (Git repositories) crucial for their everyday work.
Among the preferred programming languages, JavaScript comes out on top with 51 percent of respondents, followed by Python, PHP, Java, Swift and Objective-C. Furthermore, 86 percent of developers consider security a prime factor when judging code.
“While process-driven development techniques have been successful in the past, developers are searching for a more natural evolution of software development that fosters collaboration and information sharing across the lifecycle of a project,” said Sid Sijbrandij, CEO and co-founder of GitLab, in a statement.
GitLab surveyed 362 startup and enterprise CTOs, developers and DevOps professionals who used its repository platform between July 6 and 27.
--------------------------------------------------------------------------------
via: http://opensourceforu.com/2016/11/98-percent-developers-use-open-source-at-work/
作者:[JAGMEET SINGH][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://opensourceforu.com/author/jagmeet-singh/
[1]:https://about.gitlab.com/2016/11/02/global-developer-survey-2016/

View File

@ -1,3 +1,5 @@
**************Translating by messon007******************
# Perl and the birth of the dynamic web
>The fascinating story of Perl's role in the dynamic web spans newsgroups and mailing lists, computer science labs, and continents.

View File

@ -0,0 +1,129 @@
### Create a simple wallpaper with Fedora and Inkscape
![inkscape-wallpaper](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/inkscape-wallpaper-945x400.png)
In our previous two Inkscape articles, we have [covered the basics of using Inkscape, creating objects,][18] and [doing some basic manipulations and color changes.][17]
In this next installment, we are going to put all these new skills together, and create our first composition — a simple wallpaper.
### Changing the document size
When going through the previous tutorials, you probably noticed the default document size shown on the main canvas window as a black-bordered rectangle. The default document size in Inkscape is the A4 paper size:
[
![Screenshot from 2016-09-07 08-37-01](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-08-37-01.png)
][16]
For this wallpaper, we are going to resize the document to **1024px x 768px**. To change the document size, go to `File` > `Document Properties…`. In the Custom Size section of the Document Properties dialog, enter a width of 1024px and a height of 768px:
[
![Screenshot from 2016-09-07 09-00-00](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-00-00.png)
][15]
The document outline on the page should now look something like this:
[
![Screenshot from 2016-09-07 09-01-03](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-01-03.png)
][14]
### Drawing the background
Next up, we are going to draw a rectangle as big as the document. So, using the **rectangle tool**, draw a rectangle, and adjust its size using the Tools Control bar.
[
![rect](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/rect.png)
][13]
Next up, add a Gradient Fill to the rectangle. [If you need a refresher on adding gradients, check out the previous adding colours article.][12]
[
![Screenshot from 2016-09-07 09-41-13](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-41-13.png)
][11]
Your rectangle might also have a stroke colour set. Use the fill and stroke dialog to set the stroke paint to **none**.
[
![Screenshot from 2016-09-07 09-44-15](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-44-15.png)
][10]
### Drawing the pattern
Next we are going to draw a triangle. Use the star/polygon tool with 3 points. You can **press and hold down the CTRL** key to give your triangle an angle and symmetry.
[
![Screenshot from 2016-09-07 09-52-38](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-52-38.png)
][9]
Select the triangle and press **CTRL+D**, to duplicate it (the duplicated figure will overlap the existing one), **so be sure to move it after duplicating.**
[
![Screenshot from 2016-09-07 10-44-01](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-10-44-01.png)
][8]
Select one of the triangles as shown, and go to **OBJECT > FLIP-HORIZONTAL**
[
![Screenshot from 2016-09-07 09-57-23](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-57-23.png)
][7][
![Screenshot from 2016-09-07 09-57-42](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-57-42.png)
][6]
Recolour your three triangles with three colours that look good against your background.
[
![Screenshot from 2016-09-07 09-58-52](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-58-52.png)
][5]
Select all your triangles, and duplicate them again to fill out your pattern:
[
![Screenshot from 2016-09-07 10-49-25](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-10-49-25.png)
][4]
### Exporting your background
Finally, we need to export our document as a PNG file. Open up the export dialog with **FILE > EXPORT PNG**, select the file location and name, make sure the Drawing tab is selected, and click on **EXPORT**.
[
![Screenshot from 2016-09-07 11-07-05](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-11-07-05-1.png)
][3]
Let no tool be a barrier to your imagination. Come up with beautiful wallpapers and submit the designs for [FEDORA 25 wallpapers][2]. Your design might get lucky enough to be used by thousands of Fedora users. Here are some examples of wallpapers created with Inkscape and the techniques above:
[
![back1](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/back1.png)
][1]
![back2](https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/back2.png)
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/inkscape-design-imagination/
作者:[a2batic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://a2batic.id.fedoraproject.org/
[1]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/back1.png
[2]:https://fedoramagazine.org/keeping-fedora-beautiful-contribute-wallpaper/
[3]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-11-07-05-1.png
[4]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-10-49-25.png
[5]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-58-52.png
[6]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-57-42.png
[7]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-57-23.png
[8]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-10-44-01.png
[9]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-52-38.png
[10]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-44-15.png
[11]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-41-13.png
[12]:https://fedoramagazine.org/inkscape-adding-colour/
[13]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/rect.png
[14]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-01-03.png
[15]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-09-00-00.png
[16]:https://1504253206.rsc.cdn77.org/wp-content/uploads/2016/10/Screenshot-from-2016-09-07-08-37-01.png
[17]:https://fedoramagazine.org/inkscape-adding-colour/
[18]:https://fedoramagazine.org/getting-started-inkscape-fedora/
[19]:https://fedoramagazine.org/inkscape-design-imagination/

View File

@ -0,0 +1,109 @@
CLOUD FOCUSED LINUX DISTROS FOR PEOPLE WHO BREATHE ONLINE
============================================================
[
![Best Linux distributions for cloud computing](https://itsfoss.com/wp-content/uploads/2016/11/cloud-centric-Linux-distributions.jpg)
][6]
_Brief: We list some cloud-centric Linux distributions that might be termed real Linux alternatives to Chrome OS._
The world is moving to cloud-based services, and we all know the kind of love that Chrome OS got. Well, it does deserve respect. It's super fast, light, power-efficient, minimalistic, beautifully designed, and utilizes the full potential of the cloud that technology permits today.
Although [Chrome OS][7] is available only for Google's hardware, there are other ways to experience the potential of cloud computing right on your laptop or desktop.
As I have repeatedly said, there is always something for everybody in the Linux domain, be it [Linux distributions that look like Windows][8] or Mac OS. Linux is all about sharing, love, and some really bleeding-edge computing experience. Let's crack this list right away!
### 1\. CUB LINUX
![Cub Linux Desktop](https://itsfoss.com/wp-content/uploads/2016/10/cub1.jpg)
It is not Chrome OS. But the above image is featuring the desktop of [Cub Linux][9]. Say what?
Cub Linux is no news to Linux users. But if you did not already know, Cub Linux is a web-focused Linux distro inspired by the mainstream Chrome OS. It is also the open source brother of Chrome OS from mother Linux.
Chrome OS has the Chrome browser as its primary component. Not so long ago, a project by the name of [Chromixium OS][10] was started to recreate the Chrome OS experience using the Chromium browser in place of the Chrome browser. Due to some legal issues, the name was later changed to Cub Linux (Chromium + Ubuntu).
![cub2](https://itsfoss.com/wp-content/uploads/2016/10/cub2.jpg)
Well, history apart, as the name hints, Cub Linux is based on Ubuntu and features the lightweight Openbox desktop environment. The desktop is customized to give a Chrome OS impression, and it looks really neat.
In the apps department, you can install web applications from the Chrome Web Store as well as all the Ubuntu software. Yup, along with all the snappy apps of Chrome OS, you'll still get the Ubuntu goodies.
As far as the performance is concerned, the operating system is super fast thanks to its Openbox Desktop Environment. Based on Ubuntu Linux, the stability of Cub Linux is unquestionable. The desktop itself is a treat to the eyes with all its smooth animations and beautiful UI.
[Suggested Read: Year 2013 For Linux, 2 Linux Distributions Discontinued][11]
I suggest Cub Linux to anybody who spends most of their time in a browser and does some home tasks now and then. Well, a browser is all you need, and a browser is exactly what you'll get.
### 2\. PEPPERMINT OS
A good number of people look towards Linux because they want a no-BS computing experience. Some people do not really like the hassle of an anti-virus, a defragmenter, a cleaner, et cetera, as they want an operating system, not a baby. And I must say Peppermint OS is really good at being no-BS. The [Peppermint OS][12] developers have put a good amount of effort into understanding user requirements and needs.
![pep1](https://itsfoss.com/wp-content/uploads/2016/11/pep1.jpg)
There is a very small number of selected applications included by default. The traditional ideology of including a couple of apps from every software category is ditched here for good. The power to customize the computer as needed has been given to the user. By the way, do we really need to install so many applications when we can get web alternatives for almost all of them?
#### Ice
Ice is a useful little tool that converts your favorite and most-used websites into desktop applications that you can launch directly from your desktop or the menu. It's what we call a site-specific browser.
![pep4](https://itsfoss.com/wp-content/uploads/2016/11/pep4.jpg)
Love Facebook? Why not make a Facebook web app on your desktop for quick launch? While people complain about the lack of a decent Google Drive application for Linux, Ice lets you access Drive with just a click. Not just Drive: the functionality of Ice is limited only by your imagination.
Peppermint OS 7 is based on Ubuntu 16.04. It not only provides smooth, rock-solid performance but also a very swift response. A heavily customized LXDE will be your home screen, and the customization I'm speaking about is driven to achieve both snappy performance and visual appeal.
Peppermint OS hits more of a middle ground among the cloud-native operating system types. Although the skeleton of the OS is designed to support speedy cloud apps, the native Ubuntu applications play well too. If you are someone like me who wants an OS balanced in online-offline capabilities, [Peppermint OS is for you][13].
[Suggested Read: Pennsylvania High School Distributes 1,700 Ubuntu Laptops to Students][14]
### 3\. APRICITY OS
[Apricity OS][15] stole the show for being one of the most aesthetically pleasing Linux distros out there. It's just gorgeous. It's like the Mona Lisa of the Linux domain. But there's more to it than just the looks.
![ap2](https://itsfoss.com/wp-content/uploads/2016/11/ap2.jpg)
The prime reason [Apricity OS][16] makes this list is its simplicity. While OS desktop design is getting chaotic and congested with elements (and I'm not talking only about non-Linux operating systems), Apricity de-clutters everything and simplifies the very basic human-desktop interaction. The Gnome desktop environment is customized beautifully here. They made it really simple.
The pre-installed software list is really small. Almost all Linux distros ship the same pre-installed software, but Apricity OS has a completely new set: Chrome instead of Firefox. I was really waiting for this. I mean, why not give us what's rocking out there?
Apricity OS also features the Ice tool that we discussed in the last segment, but instead of Firefox, the Chrome browser is used for website-desktop integration. Apricity OS ships Numix Circle icons by default, and every time you add a popular web app, a beautiful icon is placed on your dock.
![](https://itsfoss.com/wp-content/uploads/2016/11/ap1.jpg)
See what I mean?
Apricity OS is based on Arch Linux. (So anybody looking for a quick start with Arch, and a beautiful one at that, can download the Apricity ISO [here][17].) Apricity fully upholds the Arch principle of freedom of choice. Within just 10 minutes with Ice, you'll have all your favorite web apps set up.
Gorgeous backgrounds, a minimalistic desktop, and great functionality make Apricity OS a really great choice for setting up an amazing cloud-centric system. It'll take Apricity OS 5 minutes to make you fall in love with it. I mean it.
There you have it, people: cloud-centric Linux distros for online dwellers. Do give us your take on the web app vs. native app topic. Don't forget to share.
--------------------------------------------------------------------------------
via: https://itsfoss.com/cloud-focused-linux-distros/
作者:[Aquil Roshan][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/aquil/
[1]:https://itsfoss.com/author/aquil/
[2]:https://itsfoss.com/cloud-focused-linux-distros/#comments
[3]:https://twitter.com/share?original_referer=https%3A%2F%2Fitsfoss.com%2F&source=tweetbutton&text=Cloud+Focused+Linux+Distros+For+People+Who+Breathe+Online&url=https%3A%2F%2Fitsfoss.com%2Fcloud-focused-linux-distros%2F&via=%40itsfoss
[4]:https://www.linkedin.com/cws/share?url=https://itsfoss.com/cloud-focused-linux-distros/
[5]:http://pinterest.com/pin/create/button/?url=https://itsfoss.com/cloud-focused-linux-distros/&description=Cloud+Focused+Linux+Distros+For+People+Who+Breathe+Online&media=https://itsfoss.com/wp-content/uploads/2016/11/cloud-centric-Linux-distributions.jpg
[6]:https://itsfoss.com/wp-content/uploads/2016/11/cloud-centric-Linux-distributions.jpg
[7]:https://en.wikipedia.org/wiki/Chrome_OS
[8]:https://itsfoss.com/windows-like-linux-distributions/
[9]:https://cublinux.com/
[10]:https://itsfoss.com/chromixiumos-released/
[11]:https://itsfoss.com/year-2013-linux-2-linux-distributions-discontinued/
[12]:https://peppermintos.com/
[13]:https://peppermintos.com/
[14]:https://itsfoss.com/pennsylvania-high-school-ubuntu/
[15]:https://apricityos.com/
[16]:https://itsfoss.com/apricity-os/
[17]:https://apricityos.com/

View File

@ -0,0 +1,182 @@
# 4 Ways to Batch Convert Your PNG to JPG and Vice-Versa
In computing, batch processing is the [execution of a series of tasks][11] in a program non-interactively. In this guide, we will offer you 4 simple ways to batch convert several `.PNG` images to `.JPG` and vice-versa using Linux command-line tools.
We will use the convert command-line tool in all the examples; however, you can also use mogrify to achieve the same.
The syntax for using convert is:
```
$ convert input-option input-file output-option output-file
```
And for mogrify is:
```
$ mogrify options input-file
```
Note: With mogrify, the original image file is replaced with the new image file by default, but it is possible to prevent this by using certain options that you can find in the man page.
Below are the various ways to batch convert all your `.PNG` images to `.JPG` format; if you want to convert `.JPG` to `.PNG`, you can modify the commands according to your needs.
### 1\. Convert PNG to JPG Using ls and xargs Commands
The [ls command][10] allows you to list all your png images, and xargs makes it possible to build and execute a convert command from standard input to convert all `.png` images to `.jpg`.
```
----------- Convert PNG to JPG -----------
$ ls -1 *.png | xargs -n 1 bash -c 'convert "$0" "${0%.png}.jpg"'
----------- Convert JPG to PNG -----------
$ ls -1 *.jpg | xargs -n 1 bash -c 'convert "$0" "${0%.jpg}.png"'
```
Explanation of the options used in the above command:
1. `-1` - tells ls to list one image per line.
2. `-n` - specifies the maximum number of arguments, which is 1 in this case.
3. `-c` - instructs bash to run the given command.
4. `${0%.png}.jpg` - sets the name of the new converted image; the `%` sign removes the old file extension.
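The `${0%.png}.jpg` part is ordinary shell suffix removal, so you can check what name it produces in isolation before running the conversion over real files:

```shell
# %.png strips the shortest trailing match of ".png"; ".jpg" is then appended.
f='photo.png'
new="${f%.png}.jpg"
echo "$new"   # → photo.jpg
```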
[
![Convert PNG to JPG Format in Linux](http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-in-Linux.png)
][9]
Convert PNG to JPG Format in Linux
I used the `ls -ltr` command to [list all files by modified date and time][8].
Similarly, you can use the above command to convert all your `.jpg` images to `.png` by tweaking it as needed.
### 2\. Convert PNG to JPG Using GNU Parallel Command
GNU Parallel enables a user to build and execute shell commands from standard input in parallel. Make sure you have GNU Parallel installed on your system; otherwise, install it using the appropriate command below:
```
$ sudo apt-get install parallel [On Debian/Ubuntu systems]
$ sudo yum install parallel [On RHEL/CentOS and Fedora]
```
Once the Parallel utility is installed, you can run the following command to convert all `.png` images to `.jpg` format from the standard input.
```
----------- Convert PNG to JPG -----------
$ parallel convert '{}' '{.}.jpg' ::: *.png
----------- Convert JPG to PNG -----------
$ parallel convert '{}' '{.}.png' ::: *.jpg
```
Where,
1. `{}` - the input line: a replacement string substituted by a complete line read from the input source.
2. `{.}` - the input line minus its extension.
3. `:::` - specifies the input source; in the example above, `*.png` or `*.jpg` is the argument.
[
![Parallel Command - Converts All PNG Images to JPG Format](http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-Using-Parallel-Command.png)
][7]
Parallel Command Converts All PNG Images to JPG Format
Alternatively, you can also use the [ls][6] and parallel commands together to batch convert all your images, as shown:
```
----------- Convert PNG to JPG -----------
$ ls -1 *.png | parallel convert '{}' '{.}.jpg'
----------- Convert JPG to PNG -----------
$ ls -1 *.jpg | parallel convert '{}' '{.}.png'
```
### 3\. Convert PNG to JPG Using for loop Command
To avoid the hassle of writing a shell script, you can execute a `for` loop from the command line as follows:
```
----------- Convert PNG to JPG -----------
$ bash -c 'for image in *.png; do convert "$image" "${image%.png}.jpg"; echo "image $image converted to ${image%.png}.jpg"; done'
----------- Convert JPG to PNG -----------
$ bash -c 'for image in *.jpg; do convert "$image" "${image%.jpg}.png"; echo "image $image converted to ${image%.jpg}.png"; done'
```
Description of each option used in the above command:
1. `-c` allows execution of the for loop statement given in single quotes.
2. The `image` variable takes the name of each image in the directory in turn.
3. For each conversion operation, the [echo command][1] informs the user that a png image has been converted to jpg format (and vice-versa) in the line `"image $image converted to ${image%.png}.jpg"`.
4. `"${image%.png}.jpg"` creates the name of the converted image, where `%` removes the old file extension.
[
![for loop - Convert PNG to JPG Format](http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-Using-for-loop-Command.png)
][5]
for loop Convert PNG to JPG Format
### 4\. Convert PNG to JPG Using Shell Script
If you do not want to clutter your command line as in the previous example, write a small script like so:
Note: Appropriately interchange the `.png` and `.jpg` extensions as in the example below for conversion from one format to another.
```
#!/bin/bash
#convert
for image in *.png; do
convert "$image" "${image%.png}.jpg"
echo "image $image converted to ${image%.png}.jpg"
done
exit 0
```
Save it as `convert.sh`, make the script executable, and then run it from within the directory that contains your images.
```
$ chmod +x convert.sh
$ ./convert.sh
```
[
![Batch Image Convert Using Shell Script](http://www.tecmint.com/wp-content/uploads/2016/11/Batch-Image-Convert-Using-Shell-Script.png)
][4]
Batch Image Convert Using Shell Script
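The script can also be generalized so that the two extensions are passed as arguments instead of being hard-coded. A sketch under the assumption that ImageMagick's convert is installed; the `batch_convert` function name is ours:

```shell
#!/bin/bash
# Sketch: batch_convert FROM TO, e.g. batch_convert png jpg
batch_convert() {
    local from=$1 to=$2 image
    for image in *."$from"; do
        [ -e "$image" ] || continue   # the glob matched nothing; skip
        convert "$image" "${image%.$from}.$to"
        echo "image $image converted to ${image%.$from}.$to"
    done
}
```

Called as `batch_convert jpg png`, the same function covers the reverse direction.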
In summary, we covered some important ways to batch convert `.png` images to `.jpg` format and vice-versa. If you want to optimize images, you can go through our guide that shows [how to compress png and jpg images in Linux][3].
You can also share with us any other methods, including [Linux command line tools][2], for converting images from one format to another on the terminal, or ask a question via the comment section below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-image-conversion-tools/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/echo-command-in-linux/
[2]:http://www.tecmint.com/tag/linux-tricks/
[3]:http://www.tecmint.com/optimize-and-compress-jpeg-or-png-batch-images-linux-commandline/
[4]:http://www.tecmint.com/wp-content/uploads/2016/11/Batch-Image-Convert-Using-Shell-Script.png
[5]:http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-Using-for-loop-Command.png
[6]:http://www.tecmint.com/tag/linux-ls-command/
[7]:http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-Using-Parallel-Command.png
[8]:http://www.tecmint.com/sort-ls-output-by-last-modified-date-and-time/
[9]:http://www.tecmint.com/wp-content/uploads/2016/11/Convert-PNG-to-JPG-in-Linux.png
[10]:http://www.tecmint.com/tag/linux-ls-command/
[11]:http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/

View File

@ -0,0 +1,107 @@
translating by chenzhijun
How To Update Wifi Network Password From Terminal In Arch Linux
============================================================
After I changed the wifi network password on my router, my Arch Linux test machine lost its Internet connection. So I wanted to update the new password from the terminal, because my Arch Linux test box doesn't have a graphical desktop environment. Changing the old wifi password to a new one is pretty easy in GUI mode: I would simply open the network manager and update the wifi password in a few seconds. However, I did not know how to update the wifi network password from the command line in Arch Linux. So I started to dig into Google and found a working solution on the Arch Linux forum. In case you've ever been in the same situation, read on. It's not that difficult.
### Update Wifi Network Password From Terminal
After changing the password on the router, I ran the _wifi-menu_ command to update it, but the command kept throwing an error, as described below.
```
sudo wifi-menu
```
It displayed the list of available wifi networks.
[
![sksk_001](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_001-1.png)
][2]
My wifi network name is Murugs9376. I selected my network and hit the OK button. Instead of asking for the new password (I thought it would, since the password had been changed), it showed the following error.
```
Interface 'wlp9s0' is controlled by netctl-auto
WPA association/authentication failed for interface 'wlp9s0'
```
[
![sksk_002](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_002-1.png)
][3]
I don't have much experience with Arch-based distributions, so I went through the Arch Linux forum hoping for a solution. Thankfully, someone had posted the same problem and got a workaround from a fellow Arch user. The following is how to update the wifi network password from the terminal in Arch-based distributions.
The network profiles are stored in the /etc/netctl/ folder. For example, here are the wifi network profile details of my Arch Linux test box.
```
ls /etc/netctl/
Sample Output:
examples ostechnix 'wlp9s0-Chendhan Cell Service' wlp9s0-Pratheesh
hooks wlp9s0 wlp9s0-Murugu9376
interfaces wlp9s0-AndroidAP wlp9s0-none
```
[
![sksk_003](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_003-1.png)
][4]
All I need to do to update the password is delete my wifi network profile (e.g. wlp9s0-Murugu9376) and re-run the _wifi-menu_ command to enter the new password.
So, first let us delete the wifi profile using the command:
```
sudo rm /etc/netctl/wlp9s0-Murugu9376
```
After deleting the profile, run the wifi-menu command to update the password.
```
sudo wifi-menu
```
Select the wifi network and press ENTER.
[
![sksk_004](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_004-1.png)
][5]
Enter a name for the profile.
[
![sksk_005](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_005-1.png)
][6]
Finally, enter the security key for the network profile and hit the ENTER key.
[
![sksk_006](http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_006-1.png)
][7]
That's it. We have now updated the wifi network password. As you can see, updating the password from the terminal in Arch Linux is no big deal; anyone could do it in a matter of seconds.
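An alternative to deleting the profile is to edit the stored key in place: netctl keeps the passphrase in a `Key=` line inside the profile file. A sketch against a throwaway copy; on a real system the file lives under /etc/netctl/, the edit needs sudo, and the profile contents below are illustrative:

```shell
# Throwaway file standing in for /etc/netctl/wlp9s0-Murugu9376.
profile=$(mktemp)
cat > "$profile" <<'EOF'
Interface=wlp9s0
Connection=wireless
Security=wpa
ESSID=Murugu9376
Key='OldPassword'
EOF
# Replace the stored passphrase, then re-read it to confirm.
sed -i "s/^Key=.*/Key='NewPassword123'/" "$profile"
grep '^Key=' "$profile"   # → Key='NewPassword123'
```

After editing the real profile, `sudo netctl restart <profile-name>` reconnects with the new key.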
If you find this guide useful, please share it on your social networks and support us.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/update-wifi-network-password-terminal-arch-linux/
作者:[ SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:http://ostechnix.tradepub.com/free/w_pacb38/prgm.cgi?a=1
[2]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_001-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_002-1.png
[4]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_003-1.png
[5]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_004-1.png
[6]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_005-1.png
[7]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@sk_006-1.png
@ -0,0 +1,126 @@
Translating by zky001
How to check if port is in use on Linux or Unix
============================================================
[
![](https://s0.cyberciti.org/images/category/old/linux-logo.png)
][1]
How do I determine if a port is in use under Linux or a Unix-like system? How can I verify which ports are listening on a Linux server?
It is important to verify which ports are listening on the server’s network interfaces. You need to pay attention to open ports to detect an intrusion. Apart from intrusion detection, for troubleshooting purposes it may be necessary to check if a port is already in use by a different application on your servers. For example, you may install the Apache and Nginx servers on the same system, so it is necessary to know whether Apache or Nginx is using TCP port 80/443. This quick tutorial provides steps to use the netstat, nmap and lsof commands to check the ports in use and view the application that is utilizing the port.
### How to check the listening ports and applications on Linux:
1. Open a terminal application i.e. shell prompt.
2. Run any one of the following commands:
```
sudo lsof -i -P -n | grep LISTEN
sudo netstat -tulpn | grep LISTEN
sudo nmap -sTU -O IP-address-Here
```
Let us see these commands and their output in detail.
### Option #1: lsof command
The syntax is:
```
$ sudo lsof -i -P -n
$ sudo lsof -i -P -n | grep LISTEN
$ doas lsof -i -P -n | grep LISTEN ### OpenBSD ###
```
Sample outputs:
[
![Fig.01: Check the listening ports and applications with lsof command](https://s0.cyberciti.org/uploads/faq/2016/11/lsof-outputs.png)
][2]
Fig.01: Check the listening ports and applications with lsof command
Consider the last line from the above output:
```
sshd 85379 root 3u IPv4 0xffff80000039e000 0t0 TCP 10.86.128.138:22 (LISTEN)
```
- sshd is the name of the application.
- 10.86.128.138 is the IP address to which the sshd application is bound (LISTEN)
- 22 is the TCP port that is being used (LISTEN)
- 85379 is the process ID of the sshd process
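When you need the same check from a script rather than by reading lsof output, one quick approach (a sketch, not from the original tutorial; unlike lsof it cannot tell you which process owns the port) is simply to try connecting to the port:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if some process is accepting TCP connections on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex() returns 0 when the connection succeeds, i.e. when
        # something is listening on that port; any non-zero errno means the
        # port is free (or filtered by a firewall).
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for port in (22, 80, 443):
        print(port, "in use" if port_in_use(port) else "free")
```

This only answers the “is something listening?” question; to map a busy port back to a process you still need lsof or netstat as shown above.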
### Option #2: netstat command
You can check the listening ports and applications with netstat as follows.
### Linux netstat syntax
```
$ netstat -tulpn | grep LISTEN
```
### FreeBSD/MacOS X netstat syntax
```
$ netstat -anp tcp | grep LISTEN
$ netstat -anp udp | grep LISTEN
```
### OpenBSD netstat syntax
```
$ netstat -na -f inet | grep LISTEN
$ netstat -nat | grep LISTEN
```
### Option #3: nmap command
The syntax is:
```
$ sudo nmap -sT -O localhost
$ sudo nmap -sU -O 192.168.2.13 ##[ list open UDP ports ]##
$ sudo nmap -sT -O 192.168.2.13 ##[ list open TCP ports ]##
```
Sample outputs:
[
![Fig.02: Determines which ports are listening for TCP connections using nmap](https://s0.cyberciti.org/uploads/faq/2016/11/nmap-outputs.png)
][3]
Fig.02: Determines which ports are listening for TCP connections using nmap
You can combine TCP/UDP scan in a single command:
`$ sudo nmap -sTU -O 192.168.2.13`
### A note about Windows users
You can check port usage from a Windows operating system using the following commands:
```
netstat -bano | more
netstat -bano | findstr LISTENING
netstat -bano | findstr /R /C:"LISTENING"
```
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
作者:[ VIVEK GITE][a]
译者:[zky001](https://github.com/zky001)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
[1]:https://www.cyberciti.biz/faq/category/linux/
[2]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/lsof-outputs/
[3]:http://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/nmap-outputs/
@ -0,0 +1,114 @@
User Editorial: Steam Machines & SteamOS after a year in the wild
====
On this day, last year, [Valve released Steam Machines onto the world][2], after the typical Valve delays. While the state of the Linux desktop regarding gaming has improved, Steam Machines have not taken off as a platform, and SteamOS remains stagnant. What happened with these projects from Valve? Why were they created, why did they fail, and what could have been done to make them succeed?
**Context**
In 2012, when Windows 8 released, it included an app store, much like iOS and Android. With the new touch-friendly user interface Microsoft debuted, there was a new set of APIs available called “WinRT,” for creating these immersive touch-friendly applications in the UI language called “Metro.” Applications created with this new API, however, could only be distributed via the Windows Store, with Microsoft taking out a 30% cut, just like the other stores. To Gabe Newell, CEO of Valve, this was unacceptable, and he saw the risks of Microsoft using its position to push the Windows Store and Metro applications to crush Valve, like what they had done to Netscape using Internet Explorer.
To Valve, the strength of the PC running Windows was that it was an open platform, where anyone could run whatever they want without control by the operating system or hardware vendor. The alternative to these proprietary platforms closing in on third-party application stores like Steam was to push a truly open platform that grants everyone the freedom to make changes: Linux. Linux is just a kernel, but you can easily create an operating system with it and other software, like the GNU core utilities and Gnome; Ubuntu is one example. While pushing Ubuntu and other Linux distributions would give Valve a sanctuary platform in case Microsoft or Apple turned hostile, Linux also gave them the possibility of creating a new platform.
**Conception**
Valve seemed to have found an opportunity in the console space, if we can call Steam Machines consoles. To achieve the user interface expectations of a console, being used on a large screen television from afar, Big Picture Mode was created. A core principle of the machines was openness; the software was able to be swapped out for Windows, as an example, and the CAD designs for the controller are available for peoples projects.
Originally, Valve had planned to create their own box as a “flagship” machine. However, these only shipped as prototypes to testers in 2013. They would also let other OEMs like Dell create their own Steam Machines, allowing a variety of pricing and specification options. A company called “Xi3” showed off their small box, small enough to fit into a palm, as a possible candidate to become a premier Steam Machine, which created more hype around Steam Machines. Ultimately, Valve decided to go only with OEM partners to make and advertise Steam Machines, rather than doing it themselves.
More “out there” ideas were considered. Biometrics, gaze tracking, and motion controllers were considered for the controller. Of them, the released Steam Controller had a gyroscope, and the HTC Vive controllers had various tracking and motion features that may have been originally intended for the original controller concepts. The controller was also originally intended to be more radical in its approach, with a touchscreen in the middle that had customizable, context-sensitive actions. Ultimately, the launch controller was more conservative, but still had features like the dual trackpads and advanced software that gave it flexibility. Valve had also considered making a version of Steam Machines and SteamOS for smaller hardware like laptops. This ultimately never bore any fruit, though the “Smach Z” handheld could be compared to this.
In [September 2013][3], Valve announced Steam Machines and SteamOS, with an expected release in the middle of 2014. Three hundred prototype machines were released to testers in December, and in January, 2000 more machines were provided to developers. SteamOS was released for testers experienced with Linux to try out. With the feedback given, Valve decided to delay the release until November 2015.
The late launch caused problems with partners; Dells Steam Machine was launched a year early running Windows as the Alienware Alpha, with extra software to improve usability with a controller.
**Launch**
With the launch, Valve and their OEM partners released their machines, and Valve also released the Steam Controller and the Steam Link. A retail presence was established with GameStop and other brick and mortar stores providing space. Before release, some OEMs pulled out of the launch; Origin PC and Falcon Northwest, two high-end boutique builders. They had claimed performance issues and limitations had made them decide not to ship SteamOS.
The machines had launched to mixed reviews. The Steam Link was praised and many had considered buying one for their existing PC instead of buying a Steam Machine for the living room. The Steam Controller reception was muddled, due to its rich feature set but high learning curve. The Steam Machines themselves ultimately launched to the muddiest reception, however. Reviewers like LinusTechTips noticed glaring defects with the SteamOS software, including performance issues. Many of the machines were criticized for their high price point and poor value, especially when compared to the option of building a PC from the perspective of a PC gamer, or the price in comparison to other consoles. The use of SteamOS was criticized over compatibility, bugs, and lower performance than Windows. Of the available options, the Alienware Steam Machine was considered to be the most interesting option due to its value relative to other machines and small form factor.
By using Debian Linux as the base, Valve had many “launch titles” for the platform, as they had a library of pre-existing Linux titles. The initial availability of games was seen as favourable compared to other consoles. However, many titles originally announced for the platform never came out, or came out much later. Rocket League and Mad Max came out only recently, after the initial announcements a year ago, and titles like The Witcher 3 and Batman: Arkham Knight never came to the platform, despite initial promises from Valve or publishers. In the case of The Witcher 3, the developer, CD Projekt Red, denied they had ever announced a port, despite their game appearing in a list of titles on sale that had or were announced to have Linux and SteamOS support. In addition, many “AAA” titles have not been ported, though this situation continues to improve over time.
**Neglect**
With the Steam Machines launched, developers at Valve moved on to other projects. Of the projects being worked on, virtual reality was seen as the most important, with about a third of employees working on it as of June. Valve saw virtual reality as something to develop, and Steam as the prime ecosystem for delivering VR. Using HTC to manufacture, they designed their own virtual reality headset and controllers, and would continue to develop new revisions. However, Linux and Steam Machines fell by the wayside with this focus. SteamVR, until recently, did not support Linux (it's still not public yet, but it was shown off at SteamDevDays running on Linux), which put into question Valve’s commitment to Steam Machines and to Linux as an open platform with a future.
There has been little development to SteamOS itself. The last major update, SteamOS 2.0 was mostly synchronizing with upstream Debian and required a reinstallation, and continued patches simply continue synchronizing with upstream sources. While Valve has made improvements to projects like Mesa, which have improved performance for many users, it has done little with Steam Machines as a product.
Many features continue to go undeveloped. Steam’s built-in functionality like chat and broadcasting continues to be weak, though this affects all platforms that Steam runs on. More pressingly, services like Netflix, Twitch, and Spotify are not integrated into the interface like they are on most major consoles; accessing them requires using the browser, which can be slow and clunky, if it even achieves what is wanted, or bringing in software from third-party sources, which requires using the terminal, and the software might not be very usable with a controller this is a poor UX for what’s considered to be an appliance.
Valve put little effort into marketing the platform, preferring to leave this to OEMs. However, most OEMs were either boutique builders or makers of barebones machines. Of the OEMs, only Dell was a major player in the PC market, and the only one who pushed Steam Machines with advertisements.
Sales were not strong: 500,000 controllers had been sold seven months on (as stated in June 2016), including those bundled with a Steam Machine. That puts retail Steam Machines, not counting machines people have installed SteamOS on, in the low hundreds of thousands. Compared to the existing PC and console install bases, this is low.
**Post-mortem thoughts**
So, with the story of what happened, can we identify why Steam Machines failed, and ways they could succeed in the future?
_Vision and purpose_
Steam Machines did not make clear what they were in the market, nor did any advantages particularly stand out. On the PC flank, building PCs had become popular and is a cheaper option with better upgrade and flexibility options. On the console flank, they were outflanked by consoles with low initial investment, despite a possibly higher TCO with game prices, and a far simpler user experience.
With PCs, flexibility is seen as a core asset, with users being able to use their machines beyond gaming, doing work and other tasks. While Steam Machines were just PCs running SteamOS with no restrictions, the SteamOS software and marketing had solidified their view as consoles to PC gamers, compounded by the price and lower flexibility in hardware with some options. In the living room where these machines could have made sense to PC gamers, the Steam Link offered a way to access content on a PC in another room, and small form factor hardware like NUCs and Mini-ITX motherboards allowed for custom built PCs that are more socially acceptable in living rooms. The SteamOS software was also available to “convert” these PCs into Steam Machines, but people seeking flexibility and compatibility often opted for a Linux or Windows desktop. Recent strides in Windows and desktop Linux have simplified maintenance tasks associated with desktop-experience computers, automating most of it.
With consoles, simplicity is a virtue. Even as they have expanded in their roles, with media features often a priority, they are still a plug and play experience where compatibility and experience are guaranteed, with a low barrier of entry. Consoles also have long life cycles, ranging from four to seven years, and the fixed hardware during this life cycle allow developers to target and optimize especially for their specifications and features. New mid-life upgrades like “Scorpio” and PlayStation 4 Pro may change the unified experience previously shared by users, but manufactures are requiring games to work on the original model consoles to avoid the most problematic aspects. To keep users attached to the systems, social networks and exclusive games are used. Games also come on discs that can be freely reused and resold, which is a positive for retailers and users. Steam Machines have none of these guarantees; they carry PC complexity and higher initial prices despite a living room friendly exterior.
_Reconciliation_
With this, Steam Machines could be seen as a “worst of both worlds” product, carrying the burdens of both kinds of product, without showing clearly as one or the other, or some kind of new product category. There also exist many deficiencies that neither party experiences, like lack of AAA titles that appear on consoles and Windows PCs, and lack of clients for services like Netflix. Despite this, Valve has shown little effort into improving the product or even trying to resolve the seemingly contradictory goals like the mutual distrust of PC and console gaming.
Some things may make it impossible to reconcile the two concepts into one category or the other, though. Things like graphics settings and mods may make it hard to create a foolproof experience, and the complexity of the underlying system appears from time to time.
One of the most complex parts is the concept of having a lineup: users need to evaluate not only the costs and specifications of a system, but its value, and its value relative to other systems. You need some way for the user to know that their hardware can run any given game, either by some automated benchmark system with comparison, or by a grading system, though either needs to be simple and needs to support (almost) every game in the library. In addition, you also need to worry about how these systems and grades will age what does a “2016 Grade A” machine mean three years from now?
_Valve, effort, and organization_
Valves organizational structure may be detrimental to creating platforms like Steam Machines, let alone maintaining services like Steam. Their mostly leaderless structure with people supposedly moving their desks to ad-hoc units working on projects that they alone decide to work on can be great for creative endeavours, as well as research and development. Its said Valve only hires what they consider to be the “cream of the crop,” with very strict standards, tying them to what they deem more "worthy" work. This view may be inaccurate though; as cliques often exist, the word of Gabe Newell is more important than the “leaked” employee handbook lets on, and people hired and then fired as needed, as a form of contractor working on certain aspects.
However, this leaves projects that aren’t glamorous or interesting, but need persistent and often mundane work, to wither on the vine. Customer support for Valve has been a constant headache, with enraged users feeling ignored, and Valve sometimes only acting when legally required to do so, as with the automated refund system that was forced into action by Australian and European legislation, or the Counter-Strike: Global Offensive item gambling site controversy involving the gambling commission of Washington State that’s still ongoing.
This has affected Steam Machines as a result. With the launch delayed by a year, some partners’ hands were forced: Dell launched the Alienware Steam Machine a year early as the Alienware Alpha, leaving the hardware outdated at the actual launch. The delays may have affected game availability as well. The opinions of developers and hardware partners following the delayed and non-monumental launch are not clear. Valve’s platform for virtual reality simply wasn’t available on Linux, and as such neither was it on SteamOS, until recently, even as SteamVR was receiving significant developer effort.
_The “long game”_
Valve is seen as playing a “long game” with Steam Machines and SteamOS, though it appears as if there is no roadmap. An example of Valve aiming for the long term is Steam itself, from its humble and initially reviled beginnings as a patching platform for their games to the popular distribution and social network it is today. It also helped that Steam was required to play Valve’s games like Half-Life 2 and Counter-Strike 1.6. However, it doesn’t seem as if Valve is putting the same effort into Steam Machines as they did with Steam before. There is also entrenched competition that Steam in its early days never really dealt with; that competition arguably includes Valve itself, with Steam on Windows.
_Gambit_
With the lack of development on Steam Machines, one wonders if the platform was a bargaining chip of sorts. Valve’s Linux efforts had originally started because of concerns that Microsoft and Apple would push them out of the market with native app stores, and Steam Machines grew out of that so Valve would have a safe haven in case this happened, and a bargaining chip to remind the developers of its host platforms of its possible independence. When these threats turned out to be empty, Valve slowed down development. I don’t see it this way, however; Valve expended a lot of goodwill with hardware partners and developers trying to push this, only to halt it. You could say both Microsoft and Valve called each other’s bluffs Microsoft with a locked-down Windows 8, and Valve with its capability as an independent player.
Even then, who is to say developers wouldnt follow Microsoft with a locked-in platform, if they can offer superior deals to publishers, or better customer relationships? In addition, now Microsoft is pushing Xbox on Windows integration with cross-buy, Xbox Live integration, and Xbox exclusive games on Windows, all while preserving Windows as an open platform arguably more a threat to Steam.
Another point you could argue is that all of this with Steam Machines was simply to push Linux adoption in PC gaming, and Steam Machines existed simply to make Linux more palatable to publishers and developers by implying a large push and continued support. However, that would make it an awfully expensive gambit; developers supported Linux before and after Steam Machines, and it could have backfired, with developers pulling out of Linux because the Promised Land of SteamOS never came.
**My opinions on what could have been done**
I think theres an interesting product with Steam Machines, and that there is a market for it, but lack of interest and effort, as well as possible confusion in what it should have been has been damaging for it. I see Steam Machines as a way to cut out the complexity of PC gaming of worrying about parts, life cycles, and maintenance; while giving the advantages like cheap games, mods, and an open platform that can be fiddled with if the user desires. However, they need to get core aspects like pricing, marketing, lineup, and software right.
I think Steam Machines can make compromises on things like upgradability (though it’s possible to preserve this it should be done with attention to user experience) and choices, to reduce friction. PCs would still exist to provide those options. The paralysis of choice is a real dilemma, and the sheer number of poorly valued options available with Steam Machines didn't help. Valve needs a flagship machine to lead Steam Machines. Arguably, the Alienware model was close, but it wasn’t officially designated as such. There’s good industrial design talent at Valve, and if they focused on their own machine, with real effort put in, it might be worth it. A company like Dell or HTC could manufacture for Valve, bringing their experience in. Defining life cycles and only having one or two specifications updated periodically would help, especially if they worked with developers to establish a baseline that should be supported. I’m not sure about OEMs; if Valve put their effort behind one machine, OEMs might be made redundant and would ultimately only hinder development of the platform.
Addressing the software issues is essential. The lack of integration with services like Netflix and Twitch that exist fluidly on console and easily put into place on PC, despite living room user interface issues, are holding Steam Machines back. Although Valve has slowly been acquiring movie licenses for distribution on Steam, people will use existing and trusted streaming sources. This needs to be addressed, especially as people use their consoles as parts of their home theatre. Fixing issues with the Steam client and platform are essential, and feature parity with other platforms is a good idea. Performance issues with Linux and its graphics stack are also a problem, but this is slowly improving. Getting ports of games will also be another issue. Game porting shops like Feral Interactive and Aspyr Media help the library, but they need to be contracted by publishers and developers, and they often use wrappers that add overhead. Valve has helped studios directly with porting, such as with Rocket League, but this has rarely happened and when it did, slowly at the typical Valve pace. The monolith of AAA games cant be ignored either the situation has improved dramatically, but studios like Bethesda are still reluctant to port, especially with a small user base, lack of support from Valve with Steam Machines even if Linux is doing relatively well, and the lack of extra DRM like Denuvo.
Valve also needs to put effort into the other bits beyond hardware and software. With one machine, they have an interest and can subsidize the hardware effectively. This would put it into parity with consoles, and possibly cheaper than custom built PCs. Efforts to marketing the product to market segments that would be interested in the machines are essential, whatever they are. (I myself would be interested in the machines. I dont like the hassle of dealing with PC building or the premium on prebuilt machines, but consoles often lack the games I want to play, and I have an existing library of games on Steam I acquired cheaply.) Retail partners may not be effective, due to their interest in selling and reselling physical copies of games.
Even with my suggestions for the platform and product, I’m not sure how effective they would be in helping Steam Machines achieve their full potential and do well in the marketplace. Ultimately, it means learning not just from your own mistakes, but from the mistakes of previous entrants like 3DO and Pippin, who relied on an open platform or were descended from desktop-experience computing and are relevant to Valve’s current situation; and from the future of Nintendo's Switch, which steps into the same realm of possible confusion of values.
_Note: Clearing up done by liamdawe; all thoughts are from the submitter._
This article was submitted by a guest, we encourage anyone to [submit their own articles][1].
--------------------------------------------------------------------------------
via: https://www.gamingonlinux.com/articles/user-editorial-steam-machines-steamos-after-a-year-in-the-wild.8474
作者:[calvin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.gamingonlinux.com/profiles/5163
[1]:https://www.gamingonlinux.com/submit-article/
[2]:https://www.gamingonlinux.com/articles/steam-machines-steam-link-steam-controller-officially-released-steamos-sale.6201
[3]:https://www.gamingonlinux.com/articles/valve-announces-steam-machines-you-can-win-one-too.2469
@ -0,0 +1,172 @@
### [Can Linux containers save IoT from a security meltdown?][28]
![](http://hackerboards.com/files/internet_of_things_wikimedia1-thm.jpg)
In this final IoT series post, Canonical and Resin.io champion Linux container technology as a solution to IoT security and interoperability challenges.
|
![](http://hackerboards.com/files/samsung_artik710-thm.jpg)
**Artik 7** |
Despite growing security threats, the Internet of Things hype shows no sign of abating. Feeling the FoMo, companies are busily rearranging their roadmaps for IoT. The transition to IoT runs even deeper and broader than the mobile revolution. Everything gets swallowed in the IoT maw, including smartphones, which are often our windows on the IoT world, and sometimes our hubs or sensor endpoints.
New IoT focused processors and embedded boards continue to reshape the tech landscape. Since our [Linux and Open Source Hardware for IoT][5] story in September, we’ve seen [Intel Atom E3900][6] “Apollo Lake” SoCs aimed at IoT gateways, as well as [new Samsung Artik modules][7], including a Linux-driven, 64-bit Artik 7 COM for gateways and an RTOS-ready, Cortex-M4 Artik 0. ARM announced [Cortex-M23 and Cortex-M33][8] cores for IoT endpoints featuring ARMv8-M and TrustZone security.
Security is a selling point for these products, and for good reason. The Mirai botnet that recently attacked the Dyn service and blacked out much of the U.S. Internet for a day brought Linux-based IoT into the forefront — and not in a good way. Just as IoT devices can be turned to the dark side via DDoS, the devices and their owners can also be victimized directly by malicious attacks.
|
![](http://hackerboards.com/files/arm_cortexm33m23-thm.jpg)
**Cortex-M33 and -M23** |
The Dyn attack reinforced the view that IoT will more confidently move forward in more controlled and protected industrial environments rather than the home. Its not that consumer [IoT security technology][9] is unavailable, but unless products are designed for security from scratch, as are many of the solutions in our [smart home hub story][10], security adds cost and complexity.
In this final, future-looking segment of our IoT series, we look at two Linux-based, Docker-oriented container technologies that are being proposed as solutions to IoT security. Containers might also help solve the ongoing issues of development complexity and barriers to interoperability that we explored in our story on [IoT frameworks][11].
We spoke with Canonicals Oliver Ries, VP Engineering Ubuntu Client Platform about his companys Ubuntu Core and its Docker-friendly, container-like Snaps package management technology. We also interviewed Resin.io CEO and co-founder Alexandros Marinos about his companys new Docker-based ResinOS for IoT.
**Ubuntu Core Snaps to**
Canonicals IoT-oriented [Snappy Ubuntu Core][12] version of Ubuntu is built around a container-like snap package management mechanism, and offers app store support. The snaps technology was recently [released on its own][13] for other Linux distributions. On November 3, Canonical released [Ubuntu Core 16][14], which improves white label app store and update control services.
<center>
[
![](http://hackerboards.com/files/canonical_ubuntucore16_diagram-sm.jpg)
][15]
**Classic Ubuntu (left) architecture vs. Ubuntu Core 16**
(click image to enlarge)
</center>
The snap mechanism offers automatic updates, and helps block unauthorized updates. Using transactional systems management, snaps ensure that updates either deploy as intended or not at all. In Ubuntu Core, security is further strengthened with AppArmor, and the fact that all application files are kept in separate silos, and are read-only.
|
![](http://hackerboards.com/files/limesdr-thm.jpg)
**LimeSDR** |
Ubuntu Core, which was part of our recent [survey of open source IoT OSes][16], now runs on Gumstix boards, Erle Robotics drones, Dell Edge Gateways, the [Nextcloud Box][17], LimeSDR, the Mycroft home hub, Intel’s Joule, and SBCs compliant with Linaro’s 96Boards spec. Canonical is also collaborating with the Linaro IoT and Embedded (LITE) Segment Group on its [96Boards IoT Edition][18]. Initially, 96Boards IE is focused on Zephyr-driven Cortex-M4 boards like Seeed’s [BLE Carbon][19], but it will expand to gateway boards that can run Ubuntu Core.
“Ubuntu Core and snaps have relevance from edge to gateway to the cloud,” says Canonicals Ries. “The ability to run snap packages on any major distribution, including Ubuntu Server and Ubuntu for Cloud, allows us to provide a coherent experience. Snaps can be upgraded in a failsafe manner using transactional updates, which is important in an IoT world moving to continuous updates for security, bug fixes, or new features.”
![](http://hackerboards.com/files/nextcloud_box3-thm.jpg)
**Nextcloud Box**
Security and reliability are key points of emphasis, says Ries. “Snaps can run completely isolated from one another and from the OS, making it possible for two applications to securely run on a single gateway,” he says. “Snaps are read-only and authenticated, guaranteeing the integrity of the code.”
Ries also touts the technology for reducing development time. “Snap packages allow a developer to deliver the same binary package to any platform that supports it, thereby cutting down on development and testing costs, deployment time, and update speed,” says Ries. “With snap packages, the developer is in full control of the lifecycle, and can update immediately. Snap packages provide all required dependencies, so developers can choose which components they use.”
**ResinOS: Docker for IoT**
Resin.io, which makes the commercial IoT framework of the same name, recently spun off the framework's Yocto Linux based [ResinOS 2.0][20] as an open source project. Whereas Ubuntu Core runs Docker container engines within snap packages, ResinOS runs Docker on the host. The minimalist ResinOS abstracts the complexity of working with Yocto code, enabling developers to quickly deploy Docker containers.
<center>
[
![](http://hackerboards.com/files/resinio_resinos_arch-sm.jpg)
][21]
**ResinOS 2.0 architecture**
(click image to enlarge)
</center>
Like the Linux-based CoreOS, ResinOS integrates systemd control services and a networking stack, enabling secure rollouts of updated applications over a heterogeneous network. However, it's designed to run on resource-constrained devices such as ARM hacker boards, whereas CoreOS and other Docker-oriented OSes like the Red Hat based Project Atomic are currently x86 only and prefer a resource-rich server platform. ResinOS can run on 20 Linux devices and counting, including the Raspberry Pi, BeagleBone, and Odroid-C1.
“We believe that Linux containers are even more important for embedded than for the cloud,” says Resin.io's Marinos. “In the cloud, containers represent an optimization over previous processes, but in embedded they represent the long-delayed arrival of generic virtualization.”
![](http://hackerboards.com/files/beaglebone-hand-thm.jpg)
**BeagleBone Black**
When applied to IoT, full enterprise virtual machines have performance issues and restrictions on direct hardware access, says Marinos. Mobile VMs like OSGi and Android's Dalvik can be used for IoT, but they require Java, among other limitations.
Using Docker may seem natural for enterprise developers, but how do you convince embedded hackers to move to an entirely new paradigm? “Rather than transferring practices from the cloud wholesale, ResinOS is optimized for embedded,” answers Marinos. In addition, he says, containers are better than typical IoT technologies at containing failure. “If there's a software defect, the host OS can remain functional and even connected. To recover, you can either restart the container or push an update. The ability to update a device without rebooting it further removes failure opportunities.”
According to Marinos, other benefits accrue from better alignment with the cloud, such as access to a broader set of developers. Containers provide “a uniform paradigm across data center and edge, and a way to easily transfer technology, workflows, infrastructure, and even applications to the edge,” he adds.
The inherent security benefits in containers are being augmented with other technologies, says Marinos. “As the Docker community pushes to implement signed images and attestation, these naturally transfer to ResinOS,” he says. “Similar benefits accrue when the Linux kernel is hardened to improve container security, or gains the ability to better manage resources consumed by a container.”
Containers also fit in well with open source IoT frameworks, says Marinos. “Linux containers are easy to use in combination with an almost endless variety of protocols, applications, languages and libraries,” says Marinos. “Resin.io has participated in the AllSeen Alliance, and we have worked with partners who use IoTivity and Thread.”
**Future IoT: Smarter Gateways and Endpoints**
Marinos and Canonical's Ries agree on several future trends in IoT. First, the original conception of IoT, in which MCU-based endpoints communicate directly with the cloud for processing, is quickly being replaced with a fog computing architecture. That calls for more intelligent gateways that do a lot more than aggregate data and translate between ZigBee and WiFi.
Second, gateways and smart edge devices are increasingly running multiple apps. Third, many of these devices will provide onboard analytics, which we're seeing in the latest [smart home hubs][22]. Finally, rich media will soon become part of the IoT mix.
<center>
[
![](http://hackerboards.com/files/eurotech_reliagate2026-sm.jpg)
][23] [
![](http://hackerboards.com/files/advantech_ubc221-sm.jpg)
][24]
**Some recent IoT gateways: Eurotech's [ReliaGate 20-26][1] and Advantech's [UBC-221][2]**
(click images to enlarge)
</center>
“Intelligent gateways are taking over a lot of the processing and control functions that were originally envisioned for the cloud,” says Marinos. “Accordingly, we're seeing an increased push for containerization, so feature- and security-related improvements can be deployed with a cloud-like workflow. The decentralization is driven by factors such as the mobile data crunch, an evolving legal framework, and various physical limitations.”
Platforms like Ubuntu Core are enabling an “explosion of software becoming available for gateways,” says Canonical's Ries. “The ability to run multiple applications on a single device is appealing both for users annoyed with the multitude of single-function devices, and for device owners, who can now generate ongoing software revenues.”
<center>
[
![](http://hackerboards.com/files/myomega_mynxg-sm.jpg)
][25] [
![](http://hackerboards.com/files/technexion_ls1021aiot_front-sm.jpg)
][26]
**Two more IoT gateways: [MyOmega MYNXG IC2 Controller (left) and TechNexion's ][3][LS1021A-IoT Gateway][4]**
(click images to enlarge)
</center>
It's not only gateways — endpoints are getting smarter, too. “Reading a lot of IoT coverage, you get the impression that all endpoints run on microcontrollers,” says Marinos. “But we were surprised by the large number of Linux endpoints out there, like digital signage, drones, and industrial machinery, that perform tasks rather than operate as an intermediary. We call this the shadow IoT.”
Canonical's Ries agrees that a single-minded focus on minimalist technology misses out on the emerging IoT landscape. “The notion of lightweight is very short lived in an industry that's developing as fast as IoT,” says Ries. “Today's premium consumer hardware will be powering endpoints in a matter of months.”
While much of the IoT world will remain lightweight and “headless,” with sensors like accelerometers and temperature sensors communicating in whisper thin data streams, many of the newer IoT applications use rich media. “Media input/output is simply another type of peripheral,” says Marinos. “There's always the issue of multiple containers competing for a limited resource, but it's not much different than with sensor or Bluetooth antenna access.”
Ries sees a trend of “increasing smartness at the edge” in both industrial and home gateways. “We are seeing a large uptick in AI, machine learning, computer vision, and context awareness,” says Ries. “Why run face detection software in the cloud and incur delays and bandwidth and computing costs, when the same software could run at the edge?”
As we explored in our [opening story][27] of this IoT series, there are IoT issues related to security such as loss of privacy and the tradeoffs from living in a surveillance culture. There are also questions about the wisdom of relinquishing one's decisions to AI agents that may be controlled by someone else. These won't be fully solved by containers, snaps, or any other technology.
Perhaps we'd be happier if Alexa handled the details of our lives while we sweat the big stuff, and maybe there's a way to balance privacy and utility. For now, we're still exploring, and that's all for the good.
--------------------------------------------------------------------------------
via: http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/
作者:[Eric Brown][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/
[1]:http://hackerboards.com/atom-based-gateway-taps-new-open-source-iot-cloud-platform/
[2]:http://hackerboards.com/compact-iot-gateway-runs-yocto-linux-on-quark/
[3]:http://hackerboards.com/wireless-crazed-customizable-iot-gateway-uses-arm-or-x86-coms/
[4]:http://hackerboards.com/iot-gateway-runs-linux-on-qoriq-accepts-arduino-shields/
[5]:http://hackerboards.com/linux-and-open-source-hardware-for-building-iot-devices/
[6]:http://hackerboards.com/intel-launches-14nm-atom-e3900-and-spins-an-automotive-version/
[7]:http://hackerboards.com/samsung-adds-first-64-bit-and-cortex-m4-based-artik-modules/
[8]:http://hackerboards.com/new-cortex-m-chips-add-armv8-and-trustzone/
[9]:http://hackerboards.com/exploring-security-challenges-in-linux-based-iot-devices/
[10]:http://hackerboards.com/linux-based-smart-home-hubs-advance-into-ai/
[11]:http://hackerboards.com/open-source-projects-for-the-internet-of-things-from-a-to-z/
[12]:http://hackerboards.com/lightweight-snappy-ubuntu-core-os-targets-iot/
[13]:http://hackerboards.com/canonical-pushes-snap-as-a-universal-linux-package-format/
[14]:http://hackerboards.com/ubuntu-core-16-gets-smaller-goes-all-snaps/
[15]:http://hackerboards.com/files/canonical_ubuntucore16_diagram.jpg
[16]:http://hackerboards.com/open-source-oses-for-the-internet-of-things/
[17]:http://hackerboards.com/private-cloud-server-and-iot-gateway-runs-ubuntu-snappy-on-rpi/
[18]:http://hackerboards.com/linaro-beams-lite-at-internet-of-things-devices/
[19]:http://hackerboards.com/96boards-goes-cortex-m4-with-iot-edition-and-carbon-sbc/
[20]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/%3Ca%20href=
[21]:http://hackerboards.com/files/resinio_resinos_arch.jpg
[22]:http://hackerboards.com/linux-based-smart-home-hubs-advance-into-ai/
[23]:http://hackerboards.com/files/eurotech_reliagate2026.jpg
[24]:http://hackerboards.com/files/advantech_ubc221.jpg
[25]:http://hackerboards.com/files/myomega_mynxg.jpg
[26]:http://hackerboards.com/files/technexion_ls1021aiot_front.jpg
[27]:http://hackerboards.com/an-open-source-perspective-on-the-internet-of-things-part-1/
[28]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/

Neofetch Shows Linux System Information with Distribution Logo
============================================================
Neofetch is a cross-platform, easy-to-use [system information command line script][3] that collects your Linux system information and displays it on the terminal next to an image, which can be your distribution's logo or any ASCII art of your choice.
Neofetch is very similar to the [ScreenFetch][4] or [Linux_Logo][5] utilities, but is highly customizable and comes with some extra features, as discussed below.
Its main features include: it is fast, it prints a full-color image of your distribution's logo in ASCII alongside your system information, it is highly customizable in terms of which information is printed on the terminal, and where and when, and it can take a screenshot of your desktop on exit when enabled by a special flag.
#### Required Dependencies:
1. Bash 3.0+ with ncurses support.
2. w3m-img (occasionally packaged with w3m) or iTerm2 or Terminology for printing images.
3. [imagemagick][1] for thumbnail creation.
4. [Linux terminal emulator][2] should support \033[14t [3], or xdotool, or xwininfo + xprop, or xwininfo + xdpyinfo.
5. On Linux, you need feh, nitrogen or gsettings for wallpaper support.
Important: You can read more about optional dependencies in the Neofetch GitHub repository to check whether your [Linux terminal emulator][6] actually supports \033[14t, and whether any extra dependencies are needed for the script to work well on your distro.
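Before installing, a quick sanity check for the optional image dependencies can save a retry. This sketch probes for the binaries behind the packages listed above (`w3m`, imagemagick's `convert`, and `feh` are my assumed command names):

```shell
# Report which optional image-related tools are already on the PATH.
for dep in w3m convert feh; do
    if command -v "$dep" >/dev/null 2>&1; then
        echo "$dep: found"
    else
        echo "$dep: missing"
    fi
done
```

Any tool reported as missing can be installed with your distribution's package manager before running neofetch.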
### How To Install Neofetch in Linux
Neofetch can be easily installed from third-party repositories on almost all Linux distributions by following the respective installation instructions for your distribution below.
#### On Debian
```
$ echo "deb http://dl.bintray.com/dawidd6/neofetch jessie main" | sudo tee -a /etc/apt/sources.list
$ curl -L "https://bintray.com/user/downloadSubjectPublicKey?username=bintray" -o Release-neofetch.key && sudo apt-key add Release-neofetch.key && rm Release-neofetch.key
$ sudo apt-get update
$ sudo apt-get install neofetch
```
#### On Ubuntu and Linux Mint
```
$ sudo add-apt-repository ppa:dawidd0811/neofetch
$ sudo apt-get update
$ sudo apt-get install neofetch
```
#### On RHEL, CentOS and Fedora
You need to have dnf-plugins-core installed on your system, or else install it with the command below:
```
$ sudo yum install dnf-plugins-core
```
Enable COPR repository and install neofetch package.
```
$ sudo dnf copr enable konimex/neofetch
$ sudo dnf install neofetch
```
#### On Arch Linux
You can either install neofetch or neofetch-git from the AUR using packer or Yaourt.
```
$ packer -S neofetch
$ packer -S neofetch-git
OR
$ yaourt -S neofetch
$ yaourt -S neofetch-git
```
#### On Gentoo
Install app-misc/neofetch from Gentoo/Funtoo's official repositories. However, in case you need the git version of the package, you can install =app-misc/neofetch-9999.
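The corresponding commands would look roughly like this (a sketch; the package atoms are as named above):

```
$ sudo emerge -a app-misc/neofetch
### or, for the live git version:
$ sudo emerge -a =app-misc/neofetch-9999
```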
### How To Use Neofetch in Linux
Once you have installed the package, the general syntax for using it is:
```
$ neofetch
```
Note: If w3m-img or [imagemagick][7] is not installed on your system, [screenfetch][8] will be enabled by default and neofetch will display your [ASCII art logo][9] as in the image below.
#### Linux Mint Information
[
![Linux Mint System Information](http://www.tecmint.com/wp-content/uploads/2016/11/Linux-Mint-System-Information.png)
][10]
Linux Mint System Information
#### Ubuntu Information
[
![Ubuntu System Information](http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information.png)
][11]
Ubuntu System Information
If you want to display the default distribution logo as an image, you should install w3m-img or imagemagick on your system as follows:
```
$ sudo apt-get install w3m-img [On Debian/Ubuntu/Mint]
$ sudo yum install w3m-img [On RHEL/CentOS/Fedora]
```
Then run neofetch again; this time you will see your Linux distribution's default logo displayed as the image.
```
$ neofetch
```
[
![Ubuntu System Information with Logo](http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information-with-Logo.png)
][12]
Ubuntu System Information with Logo
When you run neofetch for the first time, it creates a configuration file with all options and settings: `$HOME/.config/neofetch/config`.
This configuration file enables you, through the `printinfo ()` function, to alter the system information that you want to print on the terminal. You can type in new lines of information, modify the information lineup, delete certain lines, and also tweak the script using bash code to manage the information to be printed out.
You can open the configuration file using your favorite editor as follows:
```
$ vi ~/.config/neofetch/config
```
Below is an excerpt of the configuration file on my system showing the `printinfo ()` function.
Neofetch Configuration File
```
#!/usr/bin/env bash
# vim:fdm=marker
#
# Neofetch config file
# https://github.com/dylanaraps/neofetch
# Speed up script by not using unicode
export LC_ALL=C
export LANG=C
# Info Options {{{
# Info
# See this wiki page for more info:
# https://github.com/dylanaraps/neofetch/wiki/Customizing-Info
printinfo() {
info title
info underline
info "Model" model
info "OS" distro
info "Kernel" kernel
info "Uptime" uptime
info "Packages" packages
info "Shell" shell
info "Resolution" resolution
info "DE" de
info "WM" wm
info "WM Theme" wmtheme
info "Theme" theme
info "Icons" icons
info "Terminal" term
info "Terminal Font" termfont
info "CPU" cpu
info "GPU" gpu
info "Memory" memory
# info "CPU Usage" cpu_usage
# info "Disk" disk
# info "Battery" battery
# info "Font" font
# info "Song" song
# info "Local IP" localip
# info "Public IP" publicip
# info "Users" users
# info "Birthday" birthday
info linebreak
info cols
info linebreak
}
.....
```
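As a hypothetical customization (the `info` field names are as shown in the excerpt above; `prin` is neofetch's helper for free-form lines, per its customization wiki), you could enable a commented-out field and add a custom line inside `printinfo ()`:

```
info "Disk" disk                    # enable a built-in field
prin "Kernel build" "$(uname -v)"   # print a custom label/value pair
```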
Type the command below to view all the flags and their configuration values you can use with the neofetch script:
```
$ neofetch --help
```
To launch neofetch with all functions and flags enabled, employ the `--test` flag:
```
$ neofetch --test
```
You can enable the ASCII art logo again using the `--ascii` flag:
```
$ neofetch --ascii
```
In this article, we have covered a simple and highly configurable/customizable command line script that gathers your system information and displays it on the terminal.
Remember to get in touch with us via the feedback form below to ask any questions or give us your thoughts concerning the neofetch script.
Last but not least, if you know of any similar scripts out there, do not hesitate to let us know, we will be pleased to hear from you.
Visit the [neofetch Github repository][13].
--------------------------------------------------------------------------------
via: http://www.tecmint.com/neofetch-shows-linux-system-information-with-logo
作者:[Aaron Kili ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-imagemagick-in-linux/
[2]:http://www.tecmint.com/linux-terminal-emulators/
[3]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/
[4]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/
[5]:http://www.tecmint.com/linux_logo-tool-to-print-color-ansi-logos-of-linux/
[6]:http://www.tecmint.com/linux-terminal-emulators/
[7]:http://www.tecmint.com/install-imagemagick-in-linux/
[8]:http://www.tecmint.com/screenfetch-system-information-generator-for-linux/
[9]:http://www.tecmint.com/linux_logo-tool-to-print-color-ansi-logos-of-linux/
[10]:http://www.tecmint.com/wp-content/uploads/2016/11/Linux-Mint-System-Information.png
[11]:http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information.png
[12]:http://www.tecmint.com/wp-content/uploads/2016/11/Ubuntu-System-Information-with-Logo.png
[13]:https://github.com/dylanaraps/neofetch

OneNewLife translating
Apache Vs Nginx Vs Node.js And What It Means About The Performance Of WordPress Vs Ghost
============================================================
![Node vs Apache vs Nginx](https://iwf1.com/wordpress/wp-content/uploads/2016/11/Node-vs-Apache-vs-Nginx-730x430.jpg)
Ultimate battle of the giants: can the rising star Node.js prevail against the titans Apache and Nginx?
Just like you, I too have read the various kinds of opinions / facts which are scattered all over the Internet throughout all sorts of sources, some of which I consider reliable, while others, perhaps shady or doubtful.
Many of the sources I read were quite contradictory (ahm, did someone say StackOverflow?[1][2]); others showed clear yet surprising results[3], which played a crucial role in pushing me towards running my own tests and experiments.
At first, I did some thought experiments, thinking I might avoid all the hassle of building and running physical tests of my own; I was drowning deep in those before I even knew it.
Nonetheless, looking back on it, it seems that my initial thoughts were quite accurate after all and have been reaffirmed by my tests; a fact which reminds me of what I learned back in school regarding Einstein and his photoelectric effect experiments, where he faced a wave-particle duality and initially concluded that the experiments were affected by his state of mind: when he expected the result would be a wave, then so it was, and vice versa.
That said, I'm pretty sure my results won't prove to be a duality anytime in the near future, although my own state of mind probably did have an effect, to some extent, on them.
### About The Comparison
One of the sources I read came up with a revolutionary way, in my view, to deal with the natural subjectivity and personal biases an author may have.
A way which I decided to embrace as well, thus I declare the following in advance:
Developers spend many years honing their craft. Those who reach higher levels usually make their own choice based on a host of factors. It's subjective; you'll promote and defend your technology decision.
That said, the point of this comparison is not to become another “use whatever suits you, buddy” article. I will make recommendations based on my own experience, requirements and biases. You'll agree with some points and disagree with others; that's great — your comments will help others make an informed choice.
And thank you to Craig Buckler of [SitePoint][2] for re-enlightening me regarding the purpose of comparisons, a purpose I tend to forget as I'm trying to please all visitors.
### About The Tests
All tests were run locally on:
* An Intel Core i7-2600K machine with 4 cores and 8 threads.
* **[Gentoo Linux][1]** is the operating system used to run the tests.
The tool used for benchmarking: ApacheBench, Version 2.3 <$Revision: 1748469 $>.
The tests included a series of benchmarks, starting from 1,000 up to 10,000 requests and a concurrency of 100 up to 1,000; the results were quite surprising.
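To make that test matrix concrete, here is a sketch of how such a sweep could be scripted with ApacheBench (the exact invocations and the URL are my assumptions, not taken from the article; the commands are printed for review and can be piped to `sh` to actually run them):

```shell
# Print one ab invocation per (requests, concurrency) pair in the matrix.
for n in 1000 5000 10000; do
    for c in 100 500 1000; do
        echo "ab -n $n -c $c http://localhost/index.html"
    done
done
```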
In addition, a stress test to measure server behavior under high load was also carried out.
As for the content, the main focus was a static file containing a number of Lorem Ipsum verses with headings and an image.
[
![Lorem Ipsum and ApacheBenchmark](http://iwf1.com/wordpress/wp-content/uploads/2016/11/Lorem-Ipsum-and-ApacheBenchmark-730x411.jpg)
][3]
Lorem Ipsum and ApacheBenchmark
The reason I decided to focus on static files is that they remove all sorts of rendering factors that may have an effect on the tests, such as: the speed of a programming language interpreter, how well an interpreter is integrated with the server, etc…
Also, based on my own experience, a substantial part of the average page load time is usually spent on static content such as images, therefore in order to see which server could save us the most of that precious time it seems more realistic to focus on that part.
That aside, I also wanted to test a more real-world scenario where I benchmarked each server upon running a dynamic page of different CMSs (more details about that later on).
### The Servers
As I'm running Gentoo Linux, you could say that each of my HTTP servers starts from an optimized state to begin with, since I built them using only the USE flags I actually needed. That is, there shouldn't have been any unnecessary code or modules loaded or running in the background while I ran my tests.
[
![Apache vs Nginx vs Node.js use-flags](http://iwf1.com/wordpress/wp-content/uploads/2016/10/Apache-vs-Nginx-vs-Node.js-use-flags-730x241.jpg)
][4]
Apache vs Nginx vs Node.js use-flags
### Apache
`$: curl -i http://localhost/index.html
HTTP/1.1 200 OK
Date: Sun, 30 Oct 2016 15:35:44 GMT
Server: Apache
Last-Modified: Sun, 30 Oct 2016 14:13:36 GMT
ETag: "2cf2-54015b280046d"
Accept-Ranges: bytes
Content-Length: 11506
Cache-Control: max-age=600
Expires: Sun, 30 Oct 2016 15:45:44 GMT
Vary: Accept-Encoding
Content-Type: text/html`
Apache was configured with “event mpm”.
### Nginx
`$: curl -i http://localhost/index.html
HTTP/1.1 200 OK
Server: nginx/1.10.1
Date: Sun, 30 Oct 2016 14:17:30 GMT
Content-Type: text/html
Content-Length: 11506
Last-Modified: Sun, 30 Oct 2016 14:13:36 GMT
Connection: keep-alive
Keep-Alive: timeout=20
ETag: "58160010-2cf2"
Accept-Ranges: bytes`
Nginx included various tweaks, among them: “sendfile on”, “tcp_nopush on” and “tcp_nodelay on”.
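For reference, those directives live in the `http` context of nginx.conf; a minimal sketch (the keepalive value mirrors the `Keep-Alive: timeout=20` response header above, the rest is taken verbatim from the tweaks listed):

```
http {
    sendfile          on;
    tcp_nopush        on;
    tcp_nodelay       on;
    keepalive_timeout 20;
}
```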
### Node.js
`$: curl -i http://127.0.0.1:8080
HTTP/1.1 200 OK
Content-Length: 11506
Etag: 15
Last-Modified: Thu, 27 Oct 2016 14:09:58 GMT
Content-Type: text/html
Date: Sun, 30 Oct 2016 16:39:47 GMT
Connection: keep-alive`
The Node.js server used in the static tests was custom built from scratch, tailor-made to be as lightweight and fast as possible; no external modules (outside of Node's core) were used.
### The Results
Click on the images to enlarge:
[
![Apache vs Nginx vs Node: performance under requests load (per 100 concurrent users)](http://iwf1.com/wordpress/wp-content/uploads/2016/11/requests-730x234.jpg)
][5]
Apache vs Nginx vs Node: performance under requests load (per 100 concurrent users)
[
![Apache vs Nginx vs Node: performance under concurrent users load](http://iwf1.com/wordpress/wp-content/uploads/2016/11/concurrency-730x234.jpg)
][6]
Apache vs Nginx vs Node: performance under concurrent users load (per 1,000 requests)
### Stress Testing
[
![Apache vs Nginx vs Node: time to complete 100,000 requests with concurrency of 1,000](http://iwf1.com/wordpress/wp-content/uploads/2016/11/stress.jpg)
][7]
Apache vs Nginx vs Node: time to complete 100,000 requests with concurrency of 1,000
### What Can We Learn From The Results?
Judging by the results above, it appears that Nginx can complete the highest amount of requests in the least amount of time, in other words, **Nginx** is the fastest HTTP server.
Another thing we can learn, which is quite surprising as a matter of fact, is that Node.js can be faster than Nginx and Apache in some cases, given the right number of concurrent users and requests.
To those who wonder, the answer is NO: when the number of requests was raised during the concurrency test, Nginx returned to the leading position.
Unlike Apache and Nginx, Node.js, especially clustered Node, seems to be indifferent to the number of concurrent users hitting it. As the chart shows, clustered Node keeps a straight line at around 0.1 seconds while both Apache and Nginx suffer a variation of about 0.2 seconds.
A conclusion that can be drawn based on the above statistics is that the smaller the site, the less it matters which server it uses. However, as the site's audience grows larger, the more apparent the impact of the HTTP server becomes.
The bottom line: when it comes to the raw speed of each server, as depicted by the stress test, my sense is that the most crucial factor behind the performance is not some special algorithm but the programming language each server runs.
Both Apache and Nginx are written in C, which is an AOT (Ahead Of Time) compiled language, whereas Node.js uses JavaScript, which is an interpreted / JIT (Just In Time) compiled language. This means there's additional work for the Node.js server on its way to executing a program.
I base this sense not only upon the results above but also upon further results, which you'll see below, where I got pretty much the same performance parity even when using an optimized Node.js server built with the popular Express framework.
### The Bigger Picture
At the end of the day, an HTTP server is quite useless without the content it serves. Therefore, when looking to compare web servers, a vital part we must take into account is the content we wish to run on top of them.
Although other functions exist as well, the most widely popular use of an HTTP server is running a website. Hence, to see the real-life implications of each server's performance, I decided to compare WordPress, the most widely used CMS (Content Management System) in the world, with Ghost, a rising star with a gimmick of using JavaScript at its core.
Will a Ghost web-page based on JavaScript alone be able to outperform a WordPress page running on top of PHP and Apache / Nginx?
That's an interesting question, since Ghost has the advantage of using a single, coherent tool for its actions, with no additional layers needed, whereas WordPress needs to rely on the integration between Apache / Nginx and PHP, an integration which might incur significant performance drawbacks.
Adding to that, there's also a significant performance difference between PHP and Node.js in favor of the latter, which I'll briefly talk about below, so things might come out a bit differently than initially expected.
### PHP Vs Node.js
In order to compare WordPress and Ghost we must first consider an essential component which affects both.
Essentially, WordPress is a PHP based CMS while Ghost is Node.js (JavaScript) based. Unlike PHP, Node.js enjoys the following advantages:
* Non-blocking I/O
* Event driven
* Modern, less legacy code encumbered
Since there are plenty of comparisons out there explaining and demonstrating Node.js's raw speed over PHP (including PHP 7), I shall not elaborate further on the subject; Google it, I implore you.
So, given that Node.js outperforms PHP in general, will it be significant enough to make a Node.js website faster than Apache / Nginx with PHP?
### WordPress Vs Ghost
When comparing WordPress to Ghost some would say it's like comparing apples to oranges, and for the most part I'll agree, as WordPress is a fully fledged CMS while Ghost is basically just a blogging platform at the moment.
However, the two still share many overlapping areas where both can be used to publish thoughts to the world.
Given that premise, how can we compare the two when one runs on a totally different code base than the other, including themes and core features?
Indeed, a scientific lab-conditioned test would be hard to devise. However, in this comparison I'm interested in a more real-life scenario, where WordPress gets to keep its theme and so does Ghost. Thus, the goal here is to have both platforms' web pages as similar in size as possible and let PHP and Node.js do their magic behind the scenes.
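One simple way to sanity-check that "similar size" premise is curl's `-w` write-out format, which reports transferred bytes and total time per fetch (a sketch; the file:// target below is just a self-contained stand-in for the actual WordPress / Ghost page URLs):

```shell
# Write a tiny stand-in page, then report its size and fetch time.
printf '<html>Lorem ipsum</html>\n' > /tmp/page.html
curl -so /dev/null -w 'size: %{size_download} bytes, time: %{time_total}s\n' \
    file:///tmp/page.html
```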
Since the results were measured against different criteria and, most importantly, not the exact same sizes, it wouldn't be fair to display them side by side in a chart. Hence, a table is used instead:
[
![Node vs Nginx vs Apache comparison table](http://iwf1.com/wordpress/wp-content/uploads/2016/11/Node-vs-Nginx-vs-Apache-comparison-table-730x185.jpg)
][8]
Node vs Nginx vs Apache running WordPress & Ghost. Top 2 rows are WordPress, bottom 2 are Ghost
As you can see, despite the fact that Ghost (Node.js) is loading a smaller page (you'd be surprised how much difference 1kB can make), it still remains slower than WordPress with both Nginx and Apache.
Also, does fronting every Node server hit with an Nginx proxy that serves as a load balancer actually contribute to or detract from performance?
Well, according to the table above, if it has any effect at all then it is a detracting one, which is a reasonable outcome, as adding another layer should make things slower. However, the numbers above show it just might be negligible.
But the most important thing the table above shows us is that even though Node.js is faster than PHP, the role an HTTP server plays may surpass the importance of which programming language a certain web platform uses.
Of course, on the other hand, if the page loaded were a lot more reliant on server-side script serving, then the results would have wound up a bit different, I suspect.
At the end of it, if a web platform really wants to beat WordPress at its own game, performance-wise that is, the conclusion arising from this comparison is that it'll have to have some sort of customized tool a-la PHP-FPM that will communicate with JavaScript directly (instead of running it as a server), so it could fully harness JS's power to reach better performance.
--------------------------------------------------------------------------------
via: https://iwf1.com/apache-vs-nginx-vs-node-js-and-what-it-means-about-the-performance-of-wordpress-vs-ghost/
作者:[Liron][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://iwf1.com/tag/linux
[1]:http://iwf1.com/5-reasons-use-gentoo-linux/
[2]:https://www.sitepoint.com/sitepoint-smackdown-php-vs-node-js/
[3]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/Lorem-Ipsum-and-ApacheBenchmark.jpg
[4]:http://iwf1.com/wordpress/wp-content/uploads/2016/10/Apache-vs-Nginx-vs-Node.js-use-flags.jpg
[5]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/requests.jpg
[6]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/concurrency.jpg
[7]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/stress.jpg
[8]:http://iwf1.com/wordpress/wp-content/uploads/2016/11/Node-vs-Nginx-vs-Apache-comparison-table.jpg


@ -0,0 +1,42 @@
Vic020
Introduction to Eclipse Che, a next-generation, web-based IDE
============================================================
![Introduction to Eclipse Che, a next-generation, web-based IDE](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDU_OSDC_OpenClass_520x292_FINAL_JD.png?itok=ETOrrpcP "Introduction to Eclipse Che, a next-generation, web-based IDE")
>Image by : opensource.com
Correctly installing and configuring an integrated development environment, workspace, and build tools in order to contribute to a project can be a daunting or time-consuming task, even for experienced developers. Tyler Jewell, CEO of [Codenvy][1], faced this problem while attempting to set up a simple Java project as he worked to get his coding skills back after dealing with some health issues and spending time in managerial positions. After multiple days of struggling, Jewell could not get the project to work, but inspiration struck him. He wanted to make it so that "anyone, anytime can contribute to a project without installing software."
It is this idea that led to the development of [Eclipse Che][2].
Eclipse Che is a web-based integrated development environment (IDE) and workspace. Workspaces in Eclipse Che are bundled with an appropriate runtime stack and serve their own IDE, all in one tightly integrated bundle. A project in one of these workspaces has everything it needs to run without the developer having to do anything more than picking the correct stack when creating a workspace.
The ready-to-go bundled stacks included with Eclipse Che cover most of the modern popular languages. There are stacks for C++, Java, Go, PHP, Python, .NET, Node.js, Ruby on Rails, and Android development. A Stack Library provides even more options and if that is not enough, there is the option to create a custom stack that can provide specialized environments.
Eclipse Che is a full-featured IDE, not a simple web-based text editor. It is built on Orion and the JDT. Intellisense and debugging are both supported, and version control with both Git and Subversion is integrated. The IDE can even be shared by multiple users for pair programming. With just a web browser, a developer can write and debug their code. However, if a developer would prefer to use a desktop-based IDE, it is possible to connect to the workspace over an SSH connection.
One of the major technologies underlying Eclipse Che is [Linux containers][3], using Docker. Workspaces are built using Docker, and installing a local copy of Eclipse Che requires nothing but Docker and a small script file. The first time `che.sh start` is run, the requisite Docker containers are downloaded and run. If setting up Docker to install Eclipse Che is too much work for you, Codenvy does offer online hosting options. They even provide 4GB workspaces for open source projects for any contributor to the project. Using Codenvy's hosting option or another online hosting method, it is possible to provide a URL to potential contributors that will automatically create a workspace complete with a project's code, all with one click.
Beyond Codenvy, contributors to Eclipse Che include Microsoft, Red Hat, IBM, Samsung, and many others. Several of the contributors are working on customized versions of Eclipse Che for their own specific purposes; for example, Samsung's [Artik IDE][4] for IoT projects. A web-based IDE might turn some people off, but Eclipse Che has a lot to offer, and with so many big names in the industry involved, it is worth checking out.
* * *
If you are interested in learning more about Eclipse Che, [CheConf 2016][5] takes place on November 15. CheConf 2016 is an online conference and registration is free. Sessions start at 11:00 am Eastern time (4:00 pm UTC) and end at 5:30 pm Eastern time (10:30 pm UTC).
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/11/introduction-eclipse-che
作者:[Joshua Allen Holm][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/holmja
[1]:http://codenvy.com/
[2]:http://eclipse.org/che
[3]:https://opensource.com/resources/what-are-linux-containers
[4]:http://eclipse.org/che/artik
[5]:https://eclipse.org/che/checonf/


@ -0,0 +1,175 @@
Build, Deploy and Manage Custom Apps with IBM Bluemix
============================================================
![IBM Blue mix logo](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/IBM-Blue-mix-logo.jpg?resize=300%2C266)
_IBM's Bluemix affords developers an opportunity to build, deploy and manage custom apps. Bluemix is built on Cloud Foundry. It supports a number of programming languages as well as OpenWhisk, which allows developers to call any function without the need for resource management._
Bluemix is an open standards, cloud-based platform implemented by IBM. It has an open architecture which enables organisations to create, develop and manage their applications on the cloud. It is based on Cloud Foundry and hence can be considered as a Platform as a Service (PaaS). With Bluemix, developers need not worry about cloud configurations, but can concentrate on their applications. Cloud configurations will be done automatically by Bluemix.
Bluemix also provides a dashboard, with which developers can create, manage and view services and applications, while also monitoring resource usage.
It supports the following programming languages:
* Java
* Python
* Ruby on Rails
* PHP
* Node.js
It also supports OpenWhisk (Function as a Service), which is also an IBM product that allows developers to call any function without requiring any resource management.
![Figure 1 An Overview of IBM Bluemix](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-1-An-Overview-of-IBM-Bluemix.jpg?resize=296%2C307)
Figure 1: An Overview of IBM Bluemix
![Figure 2 The IBM Bluemix architecture](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-2-The-IBM-Bluemix-architecture.jpg?resize=350%2C239)
Figure 2: The IBM Bluemix architecture
![Figure 3 Creating an organisation in IBM Bluemix](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-3-Creating-an-organisation-in-IBM-Bluemix.jpg?resize=350%2C280)
Figure 3: Creating an organisation in IBM Bluemix
**How IBM Bluemix works**
Bluemix is built on top of IBM's SoftLayer IaaS (Infrastructure as a Service). It uses Cloud Foundry as an open source PaaS. It starts by pushing code through Cloud Foundry, which combines the code with a suitable runtime environment based on the programming language in which the application is written. IBM services, third-party services or community-built services can be used for different functionalities. Secure connectors can be used to connect on-premise systems to the cloud.
![Figure 4 Setting up Space in IBM Bluemix](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-4-Setting-up-Space-in-IBM-Bluemix.jpg?resize=350%2C267)
Figure 4: Setting up Space in IBM Bluemix
![Figure 5 The app template](http://i2.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-5-The-app-template.jpg?resize=350%2C135)
Figure 5: The app template
![Figure 6 IBM Bluemix supported programming languages](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-6-IBM-Bluemix-supported-programming-languages.jpg?resize=350%2C173)
Figure 6: IBM Bluemix supported programming languages
**Creating an app in Bluemix**
In this article, we will create a sample Hello World application in IBM Bluemix by using the Liberty for Java starter pack, in just a few simple steps.
1\. Go to [_https://console.ng.bluemix.net/registration/_][2].
2\. Confirm the Bluemix account.
3\. Click on the confirmation link in the mail to complete the sign up process.
4\. Give your email ID and click on _Continue_ to log in.
5\. Enter the password and click on _Log in._
6\. _Set up_ an _Environment_ to share resources in specific regions.
7\. Create Space to manage access and roll-back in Bluemix. We can map Spaces to development stages such as dev, test, uat, pre-prod and prod.
![Figure 7 Naming the app](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-7-Naming-the-app.jpg?resize=350%2C133)
Figure 7: Naming the app
![Figure 8 Knowing when the app is ready](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-8-Knowing-when-the-app-is-ready.jpg?resize=350%2C170)
Figure 8: Knowing when the app is ready
![Figure 9 The IBM Bluemix Java App](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-9-The-IBM-Bluemix-Java-App.jpg?resize=350%2C151)
Figure 9: The IBM Bluemix Java App
8\. Once this initial configuration is completed, click on _I'm ready. Good to Go!_
9\. Verify the IBM Bluemix dashboard after successfully logging in, specifically sections such as Cloud Foundry Apps where 2GB is available and Virtual Server where 0 instances are available, as of now.
10\. Click on _Create app_. Choose the template for app creation. In our case, we will go for a Web app.
11\. How do you get started? Click on Liberty for Java, and then verify the description.
12\. Click on _Continue_.
13\. What do you want to name your new app? For this article, lets use osfy-bluemix-tutorial and click on _Finish_.
14\. It will take some time to create resources and to host an application on Bluemix.
15\. In a few minutes, your app will be up and running. Note the URL of the application.
16\. Visit the application's URL _http://osfy-bluemix-tutorial.au-syd.mybluemix.net/_. Bingo, our first Java application is up and running on IBM Bluemix.
17\. To verify the source code, click on _Files_ and navigate to different files and folders in the portal.
18\. The _Logs_ section provides all the activity logs, starting from the application's creation.
19\. The _Environment Variables_ section provides details on all the environment variables of VCAP_Services as well as those that are user defined.
20\. To verify the applications consumption of resources, go to the Liberty for Java section.
21\. The _Overview_ section of each application contains details regarding resources, the application's health, and activity logs, by default.
22\. Open Eclipse, go to the Help menu and click on _Eclipse Marketplace_.
23\. Find _IBM Eclipse tools_ for _Bluemix_ and click on _Install_.
24\. Confirm the selected features and install them in Eclipse.
25\. Download the application starter code. Import it into Eclipse by clicking on _File Menu_, select _Import Existing Projects_ into _Workspace_ and start modifying the existing code.
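Since Bluemix is built on Cloud Foundry, the same deployment can also be driven from the `cf` command-line tool instead of the web portal. Below is a rough sketch, not an official walkthrough: only the app name comes from this tutorial, while the manifest values (memory, instances) are illustrative assumptions.

```shell
# Sketch of a Cloud Foundry manifest for the tutorial app
# (memory/instance values are illustrative assumptions):
cat > manifest.yml <<'EOF'
applications:
- name: osfy-bluemix-tutorial
  memory: 256M
  instances: 1
EOF

# With the cf CLI installed and a Bluemix account, deployment would then be roughly:
#   cf api https://api.ng.bluemix.net
#   cf login
#   cf push
```

`cf push` reads `manifest.yml` from the current directory, so the starter code imported into Eclipse can be deployed from the same folder.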
![Figure 10 The Java app source files](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-10-The-Java-app-source-files.jpg?resize=350%2C173)
Figure 10: The Java app source files
![Figure 11 The Java app logs](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-11-The-Java-app-logs.jpg?resize=350%2C133)
Figure 11: The Java app logs
![Figure 12 Java app -- Liberty for Java](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-12-Java-app-Liberty-for-Java.jpg?resize=350%2C169)
Figure 12: Java app — Liberty for Java
**Why IBM Bluemix?**
Here are some compelling reasons to use IBM Bluemix:
* Supports multiple languages and platforms
* Free trial
1\. Minimal registration process
2\. No credit card required
3\. 30-day trial period with quotas of 2 GB of runtime, 20 services, 500 routes
4\. Unlimited access to standard support
5\. No production use limitations
* Pay only for the use of each runtime and service
* Quick set-up hence faster time to market
* Continuous delivery of new features
* Secure integration with on-premise resources
* Use cases
1\. Web applications and mobile back-ends
2\. APIs and on-premise integration
* DevOps services are available as SaaS on the cloud and support continuous delivery of:
1\. Web IDE
2\. SCM
3\. Agile planning
4\. Delivery pipeline service
--------------------------------------------------------------------------------
via: http://opensourceforu.com/2016/11/build-deploy-manage-custom-apps-ibm-bluemix/
作者:[MITESH_SONI][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://opensourceforu.com/author/mitesh_soni/
[1]:http://opensourceforu.com/wp-content/uploads/2016/10/Figure-7-Naming-the-app.jpg
[2]:https://console.ng.bluemix.net/registration/


@ -0,0 +1,68 @@
微流冷却技术可能让摩尔定律起死回生
============================================================
![](http://tr1.cbsistatic.com/hub/i/r/2015/12/09/a7cb82d1-96e8-43b5-bfbd-d4593869b230/resize/620x/9607388a284e3a61a39f4399a9202bd7/networkingistock000042544852agsandrew.jpg)
>Image: iStock/agsandrew
现有的技术无法对微芯片进行有效的冷却,这正快速成为摩尔定律消亡的第一原因。
随着对数字计算速度的需求,科学家和工程师正努力地将更多的晶体管和支撑电路放在已经很拥挤的硅片上。的确,它非常地复杂,然而,和复杂性相比,热量聚积引起的问题更严重。
洛克希德马丁公司首席研究员 John Ditri 在新闻稿中说到:当前,我们可以放入微芯片的功能是有限的,最主要的原因之一是发热的管理。如果你能管理好发热,你可以用较少的芯片、较少的材料,那样就可以节约成本,并能减少系统的大小和重量。如果你能管理好发热,用相同数量的芯片将能获得更好的系统性能。
硅对电子流动的阻力产生了热量,在如此小的空间封装如此多的晶体管累积了足以毁坏元器件的热量。一种消除热累积的方法是在芯片层用光子学技术减少电子的流动,然而光子学技术有它的一系列问题。
参见:[Silicon photonics will revolutionize data centers in 2015][5]2015 年硅光子将引起数据中心的革命)
### 微流冷却技术可能是问题的解决之道
为了寻找其他解决办法国防高级研究计划局DARPA发起了一个关于ICECool应用[ICECool Applications][6] (片内/片间增强冷却技术)的项目。GSA网站 [GSA website FedBizOpps.gov][7] 报道ICECool正在探索革命性的热技术其将减轻热耗对军用电子系统的限制同时能显著减小军用电子系统的尺寸重量和功耗。
微流冷却方法的独特之处在于组合使用片内和(或)片间微流冷却技术和片上热互连技术。
![](http://tr4.cbsistatic.com/hub/i/r/2016/05/25/fd3d0d17-bd86-4d25-a89a-a7050c4d59c4/resize/300x/e9c18034bde66526310c667aac92fbf5/microcooling-1.png)
>MicroCooling 1 Image: DARPA
DARPA ICECool应用项目 [DARPA ICECool Application announcement][8] 指出, 这种微型片内和(或)片间通道可采用轴向微通道,径向通道和(或)横流通道,采用微孔和歧管结构及局部液体喷射形式来疏散和重新引导微流,从而以最有利的方式来满足指定的散热指标。
通过上面的技术洛克希德马丁的工程师已经实验性地证明了片上冷却是如何得到显著改善的。洛克希德马丁新闻报道ICECool项目的第一阶段发现当冷却具有多个局部30kW/cm2热点发热为1kw/cm2的芯片时热阻减少了4倍进而验证了洛克希德的嵌入式微流冷却方法的有效性。
第二阶段,洛克希德马丁的工程师聚焦于 RF 放大器。通过 ICECool 的技术,团队演示了 RF 的输出功率可以得到 6 倍的增长,而放大器的温度仍比常规冷却方式下更低。
### 投产
出于对技术的信心,洛克希德马丁已经在设计和制造实用的微流冷却发射天线。洛克希德马丁还与 Qorvo 合作,将其热解决方案与 Qorvo 的高性能 [GaN 工艺][9]集成。
研究论文 [DARPA's Intra/Interchip Enhanced Cooling (ICECool) Program][10] 的作者认为ICECool将使电子系统的热管理模式发生改变。ICECool应用的执行者将根据应用来定制片内和片间的热管理方法这个方法需要兼顾应用的材料制造工艺和工作环境。
如果微流冷却能像科学家和工程师所说的成功的话,似乎摩尔定律会起死回生。
--------------------------------------------------------------------------------
via: http://www.techrepublic.com/article/microfluidic-cooling-may-prevent-the-demise-of-moores-law/
作者:[Michael Kassner][a]
译者:[messon007](https://github.com/messon007)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.techrepublic.com/search/?a=michael+kassner
[1]: http://www.intel.com/content/www/us/en/history/museum-gordon-moore-law.html
[2]: https://books.google.com/books?id=mfec2Zw_b7wC&pg=PA154&lpg=PA154&dq=does+heat+destroy+transistors&source=bl&ots=-aNdbMD7FD&sig=XUUiaYG_6rcxHncx4cI4Cqe3t20&hl=en&sa=X&ved=0ahUKEwif4M_Yu_PMAhVL7oMKHW3GC3cQ6AEITTAH#v=onepage&q=does%20heat%20destroy%20transis
[3]: http://www.lockheedmartin.com/us/news/press-releases/2016/march/160308-mst-cool-technology-turns-down-the-heat-on-high-tech-equipment.html
[4]: http://www.techrepublic.com/article/silicon-photonics-will-revolutionize-data-centers-in-2015/
[5]: http://www.techrepublic.com/article/silicon-photonics-will-revolutionize-data-centers-in-2015/
[6]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
[7]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
[8]: https://www.fbo.gov/index?s=opportunity&mode=form&id=0be99f61fbac0501828a9d3160883b97&tab=core&_cview=1
[9]: http://electronicdesign.com/communications/what-s-difference-between-gaas-and-gan-rf-power-amplifiers
[10]: http://www.csmantech.org/Digests/2013/papers/050.pdf


@ -1,59 +0,0 @@
Training vs. hiring to meet the IT needs of today and tomorrow
培训还是雇人,来满足当今和未来的 IT 需求
================================================================
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/cio_talent_4.png?itok=QLhyS_Xf)
在数字化时代,由于企业需要不断跟上工具和技术更新换代的步伐,对 IT 技能的需求也稳定增长。对于企业来说,寻找和雇佣那些拥有令人垂涎能力的创新人才,是非常不容易的。同时,培训内部员工来使他们接受新的技能和挑战,需要一定的时间。而且,这也往往满足不了需求。
[Sandy Hill][1] 对多种 IT 学科涉及到的多项技术都很熟悉。她作为 [Pegasystems][2] 的 IT 主管,负责的 IT 团队涉及的领域从应用的部署到数据中心的运营。更重要的是Pegasystems 开发的应用能帮助销售、市场、服务以及运营团队简化操作、联系客户。这意味着她需要掌握利用 IT 内部资源的最佳方法,以应对公司客户遇到的 IT 挑战。
![](https://enterprisersproject.com/sites/default/files/CIO_Q%20and%20A_0.png)
**企业家项目TEP这些年你是如何调整培训重心的**
**Hill**:在过去的几十年中,我们经历了爆炸式的发展,所以现在我们要实现更多的全球化进程。随之而来的培训方面,将确保每个人都在同一起跑线上。
我们大多的关注点已经转移到培养员工使用新的产品和工具上,这些新产品和工具的引入,能够推动创新,并提高工作效率。例如,我们上线了资产管理系统,这是以前没有的。因此我们需要为全体员工做培训,而不是雇佣已经了解该产品的人。在公司发展的同时,我们也要保持紧张的预算和稳定的职员总数。所以,我们更愿意在内部培训而不是雇佣新人。
**TEP说说培训方法吧你是怎样帮助你的员工发展他们的技能**
**Hill**:我要求每一位员工制定一个技术性的和一个非技术性的训练目标,这是他们绩效评估的一部分。技术性目标需要与他们的工作职能相符,非技术性目标则着重发展一项软技能,或是学一些专业领域之外的东西。我每年对职员进行一次评估,看看差距和不足之处,以使团队保持全面发展。
**TEP你的训练计划能够在多大程度上减轻招聘和保留职员的问题**
**Hill**:使我们的职员对学习新的技术保持兴奋,让他们的技能更好。让职员知道我们重视他们并且让他们在擅长的领域成长和发展,以此激励他们。
**TEP你有没有发现哪种培训是最有效的**
**HILL**:我们使用几种经实践证明有效的培训方法。当有新的或特殊的项目时,我们会尝试在项目中加入一套由供应商主导的培训课程。要是这个方法行不通,我们就进行异地培训。我们也会购买一些在线的培训课程。我还鼓励职员每年至少参加一次会议,以了解行业的动向。
**TEP你有没有发现有哪些技能雇佣新人要比培训现有员工要好**
**Hill**:这和项目有关。最近的一个计划是实现 OpenStack而我们根本没有这方面的专家。所以我们与一家从事这一领域的咨询公司合作利用他们的专业知识帮助我们运行项目并现场培训我们的内部团队成员。让内部员工在完成每天工作的同时学习他们需要的技能这是一项艰巨的任务。
顾问帮助我们确定了需要熟练掌握某一技术的员工人数。这使我们能够对员工进行评估,看看是否存在缺口。如果存在人员上的缺口,我们还需要额外的培训或是员工招聘。我们也确实雇佣了一些承包商。另一个选择是让一些全职员工进行为期六至八周的培训,但我们的项目进度不容许这么做。
**TEP想一下你最近雇佣的员工他们的那些技能特别能够吸引到你**
**Hill**:在最近的招聘中,我侧重于软技能。除了扎实的技术能力外,他们需要能够在团队中进行有效的沟通和工作,要有说服他人,谈判和解决冲突的能力。
IT 人一向独来独往。他们一般不是社交最多的人。现在IT 越来越整合到组织中,它为其他业务部门提供有用的更新报告和状态报告的能力是至关重要的,这也表明 IT 是积极的存在,并将取得成功。
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2016/6/training-vs-hiring-meet-it-needs-today-and-tomorrow
作者:[Paul Desmond][a]
译者:[Cathon](https://github.com/Cathon)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://enterprisersproject.com/user/paul-desmond
[1]: https://enterprisersproject.com/user/sandy-hill
[2]: https://www.pega.com/pega-can?&utm_source=google&utm_medium=cpc&utm_campaign=900.US.Evaluate&utm_term=pegasystems&gloc=9009726&utm_content=smAXuLA4U|pcrid|102822102849|pkw|pegasystems|pmt|e|pdv|c|


@ -1,57 +0,0 @@
宽松开源许可证的崛起意味着什么
====
为什么像 GNU GPL 这样的限制性许可证越来越不受青睐。
“如果你用了任何开源软件, 那么你软件的其他部分也必须开源。” 这是微软 CEO Steve Ballmer 2001 年说的, 尽管他说的不对, 还是引发了人们对自由软件的 FUD (恐惧, 不确定和怀疑)。 大概这才是他的意图。
对开源软件的这些 FUD 主要与开源许可有关。 现在有许多不同的许可证, 当中有些限制比其他的更严格(也有人称“更具保护性”)。 诸如 GNU 通用公共许可证 GPL 这样的限制性许可证使用了 copyleft 的概念。 copyleft 赋予人们自由发布软件副本和修改版的权力, 只要衍生工作给予人们同样的权力。 bash 和 GIMP 等开源项目就是使用了 GPL v3。 还有一个 Affero GPL 的许可证, 它为网络上的软件(如网络服务)提供了 copyleft 许可。
这意味着, 如果你使用了这种许可的代码, 然后加入了你自己的专有代码, 那么在一些情况下, 整个代码, 包括你的代码也就遵从这种限制性开源许可证。 Ballmer 说的大概就是这类的许可证。
但宽松许可证不同。 比如, 只要保留属性且不要求开发者承担责任, MIT 许可证允许任何人任意使用开源代码, 包括修改和出售。 另一个比较流行的宽松开源许可证, Apache 许可证 2.0 也把专利权从贡献者授予用户。 JQuery, .NET Core 和 Rails 使用了 MIT 许可证, 使用 Apache 许可证 2.0 的软件包括安卓, Apache 和 Swift。
两种许可证类型最终都是为了让软件更有用。 限制性许可证促进了参与和分享的开源理念, 使每个人从软件中得到最多的利益。 而宽松许可证通过允许人们任意使用软件来确保人们能从软件中得到最多的利益, 即使这意味着他们可以使用代码, 修改它, 据为己有,甚至以专有软件出售,而不做任何回报。
开源许可证管理公司 Black Duck Software 的数据显示, 去年使用最多的开源许可证是限制性许可证 GPL 2.0, 市占率大约 25%。 宽松许可证 MIT 和 Apache 2.0 次之, 市占率分别为 18% 和 16%, 再后面是 GPL 3.0, 市占率大约 10%。 这样来看, 限制性许可证占 35%, 宽松许可证占 34%, 几乎是平手。
但这个数据没有显示趋势。 Black Duck 的数据显示, 从 2009 年到 2015 年的六年间, MIT 许可证的市占率上升了 15.7%, Apache 的市占率上升了 12.4%。 在这段时期, GPL v2 和 v3 的市占率惊人地下降了 21.4%。 换言之, 在这段时期里, 大量市占率从限制性许可证转移到了宽松许可证。
这个趋势还在继续。 Black Duck 的[最新数据][1]显示, MIT 现在的市占率为 26%, GPL v2 为 21%, Apache 2 为 16%, GPL v3 为 9%。 即 30% 的限制性许可证和 42% 的宽松许可证, 与前一年 35% 的限制性许可证和 34% 的宽松许可证相比, 发生了重大的转变。 对 GitHub 上使用许可证的[调查研究][2]证实了这种转变。 它显示 MIT 以压倒性的 45% 占有率成为最流行的许可证, 与之相比, GPL v2 只有 13%, Apache 11%。
![](http://images.techhive.com/images/article/2016/09/open-source-licenses.jpg-100682571-large.idge.jpeg)
### 引领趋势
从限制性许可证到宽松许可证,这么大的转变背后是什么呢? 是公司害怕如果使用了限制性许可证的软件他们就会像Ballmer说的那样失去自己私有软件的控制权了吗 事实上, 可能就是如此。 比如, Google [禁用了 Affero GPL 软件][3]。
[Instructional Media + Magic][4] 的主席 Jim Farmer 是一个从事教育领域开源技术的开发者。 他认为很多公司为避开法律问题而不使用限制性许可证。 “问题就在于复杂性。 许可证的复杂性越高, 被人因为某些行为而告上法庭的可能性越高。 高复杂性更可能带来麻烦”, 他说。
他补充说, 这种对限制性许可证的恐惧正被律师们驱动着, 许多律师建议自己的客户使用 MIT 或 Apache 2.0 许可证的软件, 并明确反对使用 Affero 许可证的软件。
他说, 这会对软件开发者产生影响, 因为如果公司都避开限制性许可证软件的使用,开发者想要自己的软件被使用, 就更会把新的软件使用宽松许可证。
但开源 SuiteCRM 背后的公司 SalesAgility 的 CEO Greg Soper 认为, 这种向宽松许可证的转变也由一些开发者驱动。 “看看像 Rocket.Chat 这样的应用。 开发者本可以选择 GPL 2.0 或 Affero 许可证, 但他们选择了宽松许可证,” 他说。 “这样可以给这个应用最大的机会, 因为专有软件厂商可以使用它, 不会伤害到他们的产品, 且不需要把他们的产品也使用开源许可证。 这样如果开发者想要让第三方应用使用他的应用的话, 他有理由选择宽松许可证。”
Soper 指出, 限制性许可证的设计,就是通过阻止开发者拿了别人的代码,做了修改,但不把结果回报给社区来帮助开源项目。 “ Affero 许可证对我们的产品很重要, 因为如果有人 fork 了,并做得比我们好, 却又不把代码回报回来, 就会杀死我们的产品,” 他说。 “ 对 Rocket.Chat 则不同, 因为如果它使用 Affero 那么它会污染公司的 IP 所以公司不会使用它。 不同的许可证有不同的使用案例。”
曾参与 Gnome 和 OpenOffice、现在参与 LibreOffice 开发的开源开发者 Michael Meeks 同意 Jim Farmer 的观点, 即许多公司确实出于对法律的担心而选择使用宽松许可证的软件。 “copyleft 许可证有风险, 但同样也有巨大的益处。 遗憾的是人们都听从律师, 而律师只讲风险, 却从不告诉你有些事是安全的。”
Ballmer 发表他不正确的言论已经 15 年了, 但它产生的 FUD 还是有影响--即使从限制性许可证到宽松许可证的转变并不是他想要的。
--------------------------------------------------------------------------------
via: http://www.cio.com/article/3120235/open-source-tools/what-the-rise-of-permissive-open-source-licenses-means.html
作者:[Paul Rubens ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.cio.com/author/Paul-Rubens/
[1]: https://www.blackducksoftware.com/top-open-source-licenses
[2]: https://github.com/blog/1964-open-source-license-usage-on-github-com
[3]: http://www.theregister.co.uk/2011/03/31/google_on_open_source_licenses/
[4]: http://immagic.com/


@ -0,0 +1,71 @@
针对物理 RAM 的攻击可以取得 Android 设备的根权限,其它设备也存在这样的可能
===
>攻击者确实可以通过在物理存储单元中触发位翻转来侵入移动设备与计算机
![](http://images.techhive.com/images/idgnsImport/2015/08/id-2969037-security1-100606370-large.jpg)
研究者们发现了一种新的在不利用任何软件漏洞情况下,利用 RAM 芯片物理设计上的弱点来侵入 Android 设备的方式。这种攻击技术同样可以影响到其它如 ARM 和 X86 架构的设备与计算机。
攻击起源于过去十多年中将更多的 DRAM动态随机存取存储器容量封装进越来越小的芯片中这导致存储单元在特定情况下电子会在相邻的两行row存储单元之间从一边泄漏到另一边。
例如反复且快速地访问相同的物理存储位置这种行为被称为“hammering”锤击可以导致相邻位置的位值从 0 反转成 1或者相反。
虽然这样的电子干扰已经被生产商知晓并且从可靠性角度研究了一段时间了 -- 因为内存错误能够导致系统崩溃 -- 研究者展示了在可控方式的触发下它所存在的严重安全隐患。
在 2015 年 4 月,来自谷歌 Project Zero 项目的研究者公布了两份基于内存 “row hammer” 的针对 x86-64 CPU 架构的[提权利用][7]。其中一份利用可以使代码从谷歌 Chrome 浏览器的沙盒里逃逸并直接在系统上执行,另一份可以在 Linux 机器上获取内核级kernel-level权限。
此后,其他的研究者进行了更深入的调查,并且展示了[通过网站中的 JavaScript 脚本进行利用的方式][6],甚至能够影响运行在云环境下的[虚拟服务器][5]。然而,对于这项技术是否可以应用在智能手机和移动设备大量使用的 ARM 架构中还存有疑问。
现在,一队成员来自荷兰阿姆斯特丹自由大学,奥地利格拉茨技术大学和加州大学圣塔芭芭拉分校的 VUSec 小组,已经证明了 Rowhammer 不仅仅可以应用在 ARM 架构上并且甚至比在 x86 架构上更容易。
研究者们将他们的新攻击命名为 Drammer意为确定性deterministicRowhammer 攻击,并计划于周三在维也纳举办的第 23 届 ACM 计算机与通信安全大会上展示。这种攻击建立在之前就被发现与实现的 Rowhammer 技术之上。
VUSec 小组的研究者已经制造了一个适用于 Android 设备的恶意应用,当它被执行的时候利用不易察觉的内存位反转在不需要任何权限的情况下就可以获取设备根权限。
研究者们测试了来自不同制造商的 27 款 Android 设备21 款使用 ARMv732 位)指令集架构,其它 6 款使用 ARMv864 位)指令集架构。他们成功地在 17 款 ARMv7 设备和 1 款 ARMv8 设备上实现了位反转,表明这些设备是易受攻击的。
此外Drammer 能够与其它的 Android 漏洞组合使用,例如 [Stagefright][4] 或者 [BAndroid][3] 来实现无需用户手动下载恶意应用的远程攻击。
谷歌已经注意到了这一类型的攻击。“在研究者向漏洞奖励计划报告了这个问题之后,我们与他们进行了密切的沟通来深入理解这个问题,以便我们更好地保护用户,”一位谷歌的代表在一份邮件声明中这样说到。“我们已经开发了一个缓解方案mitigation将会包含在十一月的安全更新中。”
VUSec 的研究者认为,谷歌的缓解方案将会使得攻击过程更为复杂,但是它不能修复潜在的问题。
事实上,从软件上去修复一个由硬件导致的问题是不现实的。硬件供应商正在研究相关问题并且有可能在将来的内存芯片中被修复,但是在现有设备的芯片中风险依然存在。
更糟的是,研究者们说,由于有许多因素会影响到攻击的成功与否并且这些因素尚未被研究透彻,因此很难去说有哪些设备会被影响到。例如,内存控制器可能会在不同的电量的情况下展现不同的行为,因此一个设备可能在满电的情况下没有风险,当它处于低电量的情况下就是有风险的。
同样,网络安全领域有这样一句俗语:“攻击只会越来越强,不会越来越弱。”Attacks always get better, they never get worse.Rowhammer 攻击已经从理论变成了现实,它同样可能从现在的简单实现变成更成熟的利用。这意味着今天不受影响的某个设备,明天就有可能被改进后的 Rowhammer 技术证明是存在风险的。
Drammer 在 Android 上实现是因为研究者期望研究基于 ARM 设备的影响,但是潜在的技术可以被使用在所有的架构与操作系统上。新的攻击相较于之前建立在运气与特殊特性与特定平台之上并且十分容易失效的技术已经是一个巨大的进步了。
Drammer 依靠被大量硬件子系统所使用的 DMA直接存储访问缓存其中包括图形、网络和声音子系统。Drammer 的实现采用了所有 Android 设备上都有的 ION 内存分配器的接口与方法,这一点是论文的主要贡献之一。
"破天荒的,我们成功的展示了我们可以做到,在不依赖任何特定的特性情况下完全可靠的证明了 Rowhammer“ VUSec 小组中的其中一位研究者, Cristiano Giuffrida 这样说道。”攻击所利用的内存位置并非是 Android 独有的。攻击在任何的 Linux 平台上都能工作 -- 我们甚至怀疑其它操作系统也可以 -- 因为它利用的是操作系统内核内存管理中固有的特性。“
“我期待我们可以看到更多针对其它平台的攻击变种”阿姆斯特丹自由大学教授兼 VUSec 系统安全研究小组负责人 Herbert Bos 补充道。
在他们的[文章][2]之外,研究者们还发布了一个 Android 应用,用来测试 Android 设备在当前已知的技术条件下是否易受 Rowhammer 攻击。该应用尚未上架谷歌应用商店Google Play可以从 [VUSec Drammer 网站][1] 下载来手动安装。他们同时开源了一个 Rowhammer 模拟器,帮助其他的研究者更深入地研究这个问题。
--------------------------------------------------------------------------------
via: http://www.csoonline.com/article/3134726/security/physical-ram-attack-can-root-android-and-possibly-other-devices.html
作者:[Lucian Constantin][a]
译者:[wcnnbdk1](https://github.com/wcnnbdk1)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.csoonline.com/author/Lucian-Constantin/
[1]:https://www.vusec.net/projects/drammer/
[2]:https://vvdveen.com/publications/drammer.pdf
[3]:https://www.vusec.net/projects/bandroid/
[4]:http://www.csoonline.com/article/3045836/security/new-stagefright-exploit-puts-millions-of-android-devices-at-risk.html
[5]:http://www.infoworld.com/article/3105889/security/flip-feng-shui-attack-on-cloud-vms-exploits-hardware-weaknesses.html
[6]:http://www.computerworld.com/article/2954582/security/researchers-develop-astonishing-webbased-attack-on-a-computers-dram.html
[7]:http://www.computerworld.com/article/2895898/google-researchers-hack-computers-using-dram-electrical-leaks.html
[8]:http://csoonline.com/newsletters/signup.html


@ -0,0 +1,84 @@
我们大学的机房使用 Fedora 系统
==========
![Fedora-powered computer lab at our university](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/fedora-powered-computer-lab-945x400.png)
在[塞尔维亚共和国诺维萨德大学的自然科学系和数学与信息学系][5],我们教学生很多东西。从编程语言的入门到机器学习,所有开设的课程最终目的是让我们的学生能够像专业的开发者和软件工程师一样思考。课程时间紧凑而且学生众多,所以我们必须对现有可利用的资源进行合理调整以满足正常的教学。最终我们决定将机房电脑系统换为 Fedora。
### 以前的设置
我们过去的解决方案是在 Ubuntu 系统上面安装 Windows [虚拟机][4],并在虚拟机下安装好教学所需的开发软件。这在当时看起来是一个很不错的主意。然而,这种方法有很多弊端。首先,运行虚拟机造成了严重的性能浪费:计算机资源利用率不高,操作系统的运行速度也随之降低。此外,虚拟机有时候会在另一个用户会话里面同时运行,这会使计算机的工作效率严重降低。我们有限的宝贵时间不应该花费在启动电脑和启动虚拟机上。最后,我们意识到大部分教学所需的软件都有对应的 Linux 版本,虚拟机并不是必需的。我们必须寻找一个更好的解决办法。
### 进入 Fedora!
![Computer lab in Serbia powered by Fedora](https://cdn.fedoramagazine.org/wp-content/uploads/2016/10/jxXtuFO-1024x576.jpg)
默认运行 Fedora 工作站版本的一个机房的照片
我们思考过暂时替代以前的安装 Windows 虚拟机方案。我们最终决定使用 Fedora 有很多原因。
#### 发展的前沿
在我们所教授的课程中,我们会用到很多各种各样的开发工具。因此,我们能及时获取到最新的、最好用的开发工具很重要。在 Fedora 下,我们发现我们用到的开发工具有 95% 都能够在官方的软件仓库中找到。只有少量的一些工具,我们不得不手动安装。这在 Fedora 下很简单,因为你能获取到几乎所有的能开箱即用的开发工具。
我们意识到在这个过程中我们使用了大量自由、开源的软件和工具。这些软件总是能够及时更新并且可以用来做大量的工作而不仅仅局限于 Fedora 平台。
#### 硬件兼容性
我们选择 Fedora 用作机房的第二个原因是硬件兼容性。机房现在的电脑还比较新,而较低版本的内核在这些新硬件上总有些问题。在 Fedora 下,我们发现我们总能获得最新的内核版本。正如我们预期的那样,一切运行完美,没有任何差错。
我们决定我们最终会使用带有 [GNOME 桌面环境][2]的 Fedora [工作站版本][3]。学生群体通过使用这个版本的 Fedora 会发现很容易、直观、快速的上手。学生有一个简单舒适的环境对我们很重要,这样他们会更多的关注自己的任务和课程本身而不是一个复杂的或者运行缓慢的用户界面。
#### 自主的技术支持
最近,我们院系给予自由、开放源代码的软件很高的评价。通过使用这些软件,学生们即便在毕业后和工作的时候,仍然能够继续自由地使用它们。在这个过程中,他们通常也对 Fedora 和自由、开源的软件有一定了解。
### 转换机房
我们将机房的一台电脑完全手动安装。包括准备所有必要的脚本和软件,设置远程控制权限和一些其他的重要组成部分。我们也为每一门课程单独设置一个用户以方便学生存储他们的文件。
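上文提到为每门课程单独设置一个用户,这类重复性工作可以用一个简单的脚本批量完成。下面是一个示意(课程名为虚构举例,并非原文内容;真正创建账号需要 root 权限,这里只生成命令清单):

```shell
# 为每门课程生成对应的 useradd 命令(示意;课程名仅为举例)
for course in prog101 algorithms ml; do
    echo "useradd -m -s /bin/bash $course"
done > create-course-users.sh
chmod +x create-course-users.sh

# 管理员确认无误后,再以 root 身份执行:
#   sudo sh ./create-course-users.sh
```

先生成清单再执行的好处是,可以在触碰系统之前核对一遍将要创建的账号。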
一台电脑安装配置好后,我们使用一个强大的、免费开源的工具 [CloneZilla][1] 为硬盘制作镜像备份。镜像大小约为 11 GB。我们用一些带有高速 USB 3.0 接口的闪存盘将磁盘镜像还原到剩余的电脑上。我们仅仅利用若干个闪存设备,就在 75 分钟内设置好了剩余的 24 台电脑。
### 将来的工作
我们机房现在所有的电脑都完全使用 Fedora (没有虚拟机)。剩下的工作是设置一些管理脚本方便远程安装软件,电脑的开关等等。
我们由衷地感谢所有 Fedora 的维护人员,包制作人员和其他贡献者。我们希望我们的工作能够鼓励其他的学校和大学像我们一样将机房电脑的操作系统转向 Fedora。我们很高兴地确认了 Fedora 完全适合我们同时我们也担保 Fedora 同样适合你。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-computer-lab-university/
作者:[Nemanja Milošević][a]
译者:[WangYueScream](https://github.com/WangYueScream)[LemonDemo](https://github.com/LemonDemo)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/nmilosev/
[1]:http://clonezilla.org/
[2]:https://www.gnome.org/
[3]:https://getfedora.org/workstation/
[4]:https://en.wikipedia.org/wiki/Virtual_machine
[5]:http://www.dmi.rs/


@ -0,0 +1,55 @@
# 你会考虑乘坐无人驾驶汽车吗?
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/Writers-Opinion-Driverless-Car-Featured.jpg "你会考虑乘坐无人驾驶汽车吗?")
科技在经历重大进展。最近我们看到的是苹果手表及其各种仿制品、FitBit 运动手环、谷歌眼镜等可穿戴设备。看起来下一个就是人们研究了很长时间的无人驾驶汽车了。
这些汽车,有时也叫做自动汽车,自动驾驶汽车,或机器人汽车,确实可以依靠技术自己驾驶。它们能探测周边环境,如障碍物和标志,并使用 GPS 找到自己的路线。但是它们驾驶起来安全吗?我们请教我们的科技作者,“你会考虑乘坐无人驾驶汽车吗?”
### 我们的观点
**Derrik** 说他会乘坐无人驾驶汽车,因为 “_技术早就存在而且很多聪明能干的人研究了很长时间。_” 他承认它们还是有些问题但他相信很多事故的发生是因为有人的参与。如果不考虑人他认为乘坐无人驾驶汽车会“_难以置信的安全_。”
对 **Phil** 来说,这些汽车让他“紧张”,但他也承认这只是他的想象,因为他从没乘坐过。他同意 Derrik 关于这些技术已经高度发达的观点,也知道它的原理,但仍然认为“_自己对新技术接受缓慢,不会购买这类车_”。他甚至坦白说平时很少使用定速巡航。他认为过于依赖定速巡航的司机会让他感到不安全。
![writers-opinion-driverless-car](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/10/Writers-Opinion-Driverless-Car.jpg "编辑对无人驾驶汽车的观点")
**Robert** 认为“_这个概念确实有点怪_”但原则上他看不到汽车不向那个方向发展的理由。 他指出飞机已经走了那条路,而且变得更加安全, 他认为事故发生的主因是“_人们过于依赖科技手段而当科技出现故障时不知所措而导致。_”
他是一个“焦虑型乘客”, 更喜欢控制整个局面。 对他来说,他的车子在哪里开很重要。如果是以低速在城市中驾驶他感觉还好,但如果是在“宽度不足两车道的弯弯曲曲的英国乡村道路”上,则绝不可以。他和 Phil 都认为英国道路与美国道路大大不同。他建议让别人去做小白鼠, 等到确定安全了再乘坐。
对 **Mahesh** 来说,他绝对会乘坐无人驾驶汽车,因为他知道这些汽车公司“_拥有坚实的安全技术,决不会让他们的顾客去冒险_”。他承认安全与否还与车辆行驶的道路有关。
我的观点有些像这些观点的折中。虽然平时我会快速地投入到新科技中,但如果要拿生命去冒险,我不会那么做。我承认这些汽车发展了很久,应该很安全。而且坦率地说,很多司机比无人驾驶汽车危险得多。但和 Robert 一样,我想我会让其他人去做小白鼠,等到它更普遍了再去乘坐。
### 你的观点
在这个问题上,你的观点是什么呢? 你会信任新生的科学技术呢,还是乘坐无人驾驶汽车时会紧张到不行? 你会考虑乘坐无人驾驶汽车吗? 在下面的评论里加入讨论吧。
<small style="box-sizing: inherit; font-size: 16px;">图片来自: [Steve Jurvetson][4] 和 [Steve Jurvetson at Wikimedia Commons][3]</small>
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/riding-driverless-car/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
作者:[Laura Tucker][a]
译者:[willcoderwang](https://github.com/willcoderwang)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.maketecheasier.com/author/lauratucker/
[1]:https://www.maketecheasier.com/riding-driverless-car/#comments
[2]:https://www.maketecheasier.com/author/lauratucker/
[3]:https://commons.m.wikimedia.org/wiki/File:Inside_the_Google_RoboCar_today_with_PlanetLabs.jpg
[4]:https://commons.m.wikimedia.org/wiki/File:Jurvetson_Google_driverless_car_trimmed.jpg
[5]:https://support.google.com/adsense/troubleshooter/1631343
[6]:https://www.maketecheasier.com/best-wordpress-video-plugins/
[7]:https://www.maketecheasier.com/hidden-google-games/
[8]:mailto:?subject=Would%20You%20Consider%20Riding%20in%20a%20Driverless%20Car?&body=https%3A%2F%2Fwww.maketecheasier.com%2Friding-driverless-car%2F
[9]:http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Friding-driverless-car%2F&text=Would+You+Consider+Riding+in+a+Driverless+Car%3F
[10]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Friding-driverless-car%2F
[11]:https://www.maketecheasier.com/category/opinion/

Arch Linux: DIY 用户的最后圣地
![Tripple Renault photo by Gilles Paire via Shutterstock](https://regmedia.co.uk/2016/10/31/tripple_renault_photo_by_gilles_paire_via_shutterstock.jpg?x=648&y=348&crop=1)
深入研究下Linux系统的新闻史你会发现其中有一些鲜为人知的Linux发行版而且关于这些操作系统的新闻报道的数量也十分惊人。
新发行版中的佼佼者,比如 Elementary OS 和 Solus 操作系统因其华丽的界面而被大家所关注并且任何搭载MATE桌面环境的操作系统都因其简洁性而被广泛报道。
感谢像《黑客军团》这样的电视节目我确信关于Kali Linux系统的报道将会飙升。
尽管有很多关于Linux系统的报道然而仍然有一个被广泛使用的Linux发行版几乎被大家完全遗忘了Arch Linux系统。
关于Arch的新闻报道很少的原因有很多其中最主要的原因是它很难安装而且你还得熟练地在命令行下完成各种配置以使其正常运行。更可怕的是大多数的用户认为最难的是配置系统复杂的安装过程令无数的菜鸟们望而怯步。
这的确很遗憾在我看来实际上一旦安装完成后Arch比我用过的其它Linux发行版要容易使用得多。
确实如此Arch的安装过程很让人蛋疼。有些发行版的安装过程只需要点击“安装”后就可以放手地去干其它事了。Arch相对来说要花费更多的时间和精力去完成硬盘分区手动挂载生成fstab文件等。但是从Arch的安装过程中我们学到很多。它掀开帷幕让我们弄明白很多背后的东西。事实上这个帷幕已经彻底消失了在Arch的世界里你就是帷幕背后的主宰。
除了大家所熟知的难安装外Arch甚至没有自己默认的桌面环境虽然这有些让人难以理解但是Arch也因其可定制化而被广泛推崇。你可以自行决定在Arch的基础软件包上安装任何东西。
![ARCH "DESKTOP" SCREENSHOT LINUX - OBVS VARIES DEPENDING ON USER](https://regmedia.co.uk/2016/11/01/arch.jpg?x=648&y=364&infer_y=1)
你可以认为Arch是高度可定制化的或者说它完全没有定制化。比如不像Ubuntu系统那样Arch几乎没有修改过或是定制开发后的软件包。Arch的开发者从始至终都使用上游开发者提供的软件包。对于部分用户来说这种情况非常棒。比如你可以使用纯粹的未定制化开发过的GNOME桌面环境。但是在某些情况下一些上游开发者未更新过的定制化软件包可能存在很多的缺陷。 
由于Arch缺乏默认的应用程序和桌面环境桌面的管理完全交由用户自己负责。我曾经使用最小化安装配置Openbox、tint2和dmenu桌面管理工具但是安装后的效果却让我很失望。因此我更倾向于使用最新版的GNOME桌面系统。在使用Arch的过程中我们需要自己安装一个桌面环境这带来的体验是完全不一样的。任何发行版其实都可以这样做但是大多数的Linux系统至少会提供一个默认的桌面环境。
然而Arch还是有很多共性的元素一起构成这个基本系统。比如说我使用Arch系统的主要原因是因为它是一个滚动更新的发行版。这意味着两件事情。首先Arch使用最新的稳定版内核。这就意味着我可以在Arch系统里完成在其它Linux发行版中很难完成的测试。滚动版最大的一个好处就是所有可用的软件更新包会被即时发布出来。这不只是说明软件包更新速度快而且也没有太多的系统升级包会被拆分。
由于Arch是一个滚动更新的发行版因此很多用户认为它是不稳定的。但是在我使用了9个多月之后我并不赞同这种观点。
我在每一次升级系统的过程中,从未损坏过任何软件。有一次升级系统之后我不得不回滚,因为系统启动分区/boot无法挂载成功但是后来我发现那完全是自己操作上的失误。一些底层的系统问题比如我遇到的戴尔XPS笔记本触摸板的功能回归也已经被修复并且可用的软件包更新速度要比其它非滚动发行版快得多。总的来说我认为Arch滚动更新的发布模式比其它我在用的发行版要稳定得多。唯一一点我要强调的是查阅维基上的资料多关注你要更新的内容。
你必须要小心你正在做的操作因为Arch也不是任你肆意胡来的。盲目的更新Arch系统是极其危险的。但是任何一个发行版的更新都有风险。在你别无选择的时候你得根据实际情况三思而后行。
Arch的哲学理念是我支持它的另外一个最主要的原因。我认为Arch最吸引用户的一点就是Arch面向的是专业的Linux用户或者是有“自己动手”的态度并愿意查资料解决问题的任何人。
随着Linux逐渐成为主流的操作系统开发者们更需要顺利地渡过每一个艰难的技术领域。那些晦涩难懂的专有软件方面的经验恰恰能反映出用户高深的技术能力。
尽管在这个时代听起来有些怪怪的但是事实上我们大多数的用户更愿意自己动手装配一些东西。在这种情形下Arch将会是Linux DIY用户的最后圣地。
--------------------------------------------------------------------------------
via: http://www.theregister.co.uk/2016/11/02/arch_linux_taster/
作者:[Scott Gilbertson][a]
译者:[rusking](https://github.com/rusking)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.theregister.co.uk/Author/1785
[1]:https://wiki.archlinux.org/index.php/Arch_Linux
[2]:http://www.theregister.co.uk/Author/1785
[3]:https://www.linkedin.com/shareArticle?mini=true&url=http://www.theregister.co.uk/2016/11/02/arch_linux_taster/&title=Arch%20Linux%3A%20In%20a%20world%20of%20polish%2C%20DIY%20never%20felt%20so%20good&summary=Last%20refuge%20for%20purists
[4]:http://twitter.com/share?text=Arch%20Linux%3A%20In%20a%20world%20of%20polish%2C%20DIY%20never%20felt%20so%20good&url=http://www.theregister.co.uk/2016/11/02/arch_linux_taster/&via=theregister
[5]:http://www.reddit.com/submit?url=http://www.theregister.co.uk/2016/11/02/arch_linux_taster/&title=Arch%20Linux%3A%20In%20a%20world%20of%20polish%2C%20DIY%20never%20felt%20so%20good

安卓编年史
================================================================================
![安卓1.5的屏幕软键盘输入时的输入建议栏,大写状态键盘,数字与符号界面,更多符号弹窗。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/kb5.png)
安卓1.5的虚拟键盘输入时的输入建议栏,大写状态键盘,数字与符号界面,更多符号弹窗。
Ron Amadeo供图
### Android 1.5 Cupcake——虚拟键盘打开设备设计的大门 ###
在2009年4月安卓1.1发布将近三个月后,安卓1.5发布了。这是第一个拥有公开的、市场化代号的安卓版本纸杯蛋糕Cupcake。从这个版本开始每个版本的安卓将会拥有一个按字母表排序的、以小吃为主题的代号。
纸杯蛋糕新增功能中最重要的明显当属虚拟键盘。这是OEM厂商第一次有可能抛开数不清按键的实体键盘以及复杂的滑动结构创造出平板风格的安卓设备。
安卓的按键标识可以在大小写之间切换这取决于大写锁定是否开启。尽管默认情况下它是关闭的显示在键盘顶部的建议栏有个选项可以打开它。带有省略号的按键就像上图中的“u”键那样长按时可以输入弹框中的[发音符号][1]。键盘可以切换到数字和符号,长按句号键可以打开更多符号。
![1.5和1.1中的应用程序界面和通知面板的对比。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/abweave.png)
1.5和1.1中的应用程序界面和通知面板的对比。
Ron Amadeo供图
“摄像机”功能加入了新图标Google Talk从IM中分离出来成为了一个独立的应用。亚马逊MP3和浏览器的图标同样经过了重新设计。亚马逊MP3图标更改的主要原因是亚马逊即将计划推出其它的安卓应用而“A”图标所指范围太泛了。浏览器图标无疑是安卓1.1中最糟糕的设计,所以它被重新设计了,并且不再像是一个桌面操作系统的对话框。应用抽屉的最后一个改变是“图片”,它被重新命名为了“相册”。
通知面板同样经过了重新设计。面板背景加上了布纹纹理通知的渐变效果也被平滑化了。安卓1.5在系统核心部分有许多设计上的微小改变,这些改变影响到所有的应用。在“清除通知”按钮上,你可以看到全新的系统按钮风格,相比与旧版本的按钮有了渐变,更细的边框线以及更少的阴影。
![安卓1.5和1.1中的“添加到主屏幕”对话框。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/widget.png)
安卓1.5和1.1中的“添加到主屏幕”对话框。
Ron Amadeo供图
第三方小部件是纸杯蛋糕的另一个头等特性,它们现在仍然是安卓的本质特征之一。无论是用来控制应用还是显示应用的信息,开发者们都可以为他们的应用捆绑一个主屏幕小部件。谷歌同样展示了一些它们自己的新的小部件,分别来自日历和音乐这两个应用。
![左:日历小部件,音乐小部件以及一排实时文件夹的截图。中:文件夹列表。右:“带电话号码的联系人”实时文件夹的打开视图。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/folders-and-widgets-and-stuff.png)
左:日历小部件,音乐小部件以及一排实时文件夹的截图。中:文件夹列表。右:“带电话号码的联系人”实时文件夹的打开视图。
Ron Amadeo供图
在上方左边的截图里你可以看到新的日历和音乐图标。日历小部件只能显示当天的一个事件,点击它会打开日历。你不能够选择日历所显示的内容,小部件也不能够重新设置大小——它就是上面看起来的那个样子。音乐小部件是蓝色的——尽管音乐应用里没有一丁点的蓝色——它展示了歌曲名和歌手名,此外还有播放和下一曲按钮。
同样在左侧截图里,底部一排的头三个文件夹是一个叫做“实时文件夹”的新特性。它们可以在“添加到主屏幕”菜单中的新顶层选项“文件夹”中被找到,就像你在中间那张图看到的那样。实时文件夹可以展示一个应用的内容而不用打开这个应用。纸杯蛋糕带来的都是和联系人相关的实时文件夹,能够显示所有联系人,带有电话号码的联系人和加星标的联系人。
实时文件夹在主屏的弹窗使用了一个简单的列表视图而不是图标。联系人只是实时文件夹的一个初级应用它是给开发者使用的一个完整API。谷歌用Google Books应用做了个图书文件夹的演示它可以显示RSS订阅或是一个网站的热门故事。实时文件夹是安卓没有成功实现的想法之一这个特性最终在蜂巢(3.x)中被取消。
![摄像机和相机界面,屏幕上有触摸快门。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/device-2013-12-26-11016071.png)
摄像机和相机界面,屏幕上有触摸快门。
Ron Amadeo供图
如果你不能认出新的“摄像机”图标,这不奇怪,视频录制是在安卓1.5中才被添加进来的。相机和摄像机两个图标其实是同一个应用你可以通过菜单中的“切换至相机”和“切换至摄像机”选项在它们之间切换。T-Mobile G1上录制的视频质量并不高。一个“高”质量的测试视频输出一个.3GP格式的视频文件其分辨率仅为352 x 288帧率只有4FPS。
除了新的视频特性相机应用中还可以看到一些急需的UI调整。上方左侧的快照展示了最近拍摄的那张照片点击它会跳转到相册中的相机胶卷。各个界面上方右侧的圆形图标是触摸快门这意味着从1.5开始,安卓设备不再需要一个实体相机按钮。
这个界面相比于之后版本的相机应用,实际上更加接近于安卓4.2的设计。尽管后续的设计会向相机加入愚蠢的皮革纹理以及更多的控制设置,安卓最终还是回到了基本的设计,安卓4.2的重新设计和这里有很多共同之处。安卓1.5中的原始布局演变成了安卓4.2中的最小化的,全屏的取景器。
![Google Talk运行在Google Talk中vs运行在IM应用中。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/gtalk-im.png)
Google Talk运行在Google Talk中vs运行在IM应用中。
Ron Amadeo供图
安卓1.0的IM即时通讯应用功能上支持Google Talk但在安卓1.5中Google Talk从中分离出来成为独立应用。IM应用中对其的支持已经被移除。Google Talk上图左侧明显是基于IM应用上图右侧但随着独立应用在1.5中的发布在IM应用的工作被放弃了。
新的Google Talk应用拥有重新设计过的状态栏右侧状态指示灯重新设计过的移动设备标识是个灰色的安卓小绿人图案。聊天界面的蓝色的输入框变成了更加合理的灰色消息的背景从淡绿和白色变成了淡绿和绿色。有了独立的应用谷歌可以向其中添加Gtalk独有的特性比如“不保存聊天记录”聊天该特性可以阻止Gmail保存每个聊天记录。
![安卓1.5的日历更加明亮。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/cal15.png)
安卓1.5的日历更加明亮。
Ron Amadeo供图
日历抛弃了丑陋的黑色背景上白色方块的设计,转变为全浅色主题。所有东西的背景都变成了白色,顶部的星期日变成了蓝色。单独的约会方块从带有颜色的细条变成了拥有整个颜色背景,文字也变为白色。这将是很长一段时间内日历的最后一次改动。
![从左到右:新的浏览器控件,缩放视图,复制/粘贴文本高亮。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/browser-craziness.png)
从左到右:新的浏览器控件,缩放视图,复制/粘贴文本高亮。
Ron Amadeo供图
安卓1.5从系统全局修改了缩放控件。缩放控件不再是两个大圆形,取而代之的是一个圆角的椭圆形从中间分开为左右两个按钮。这些新的控件被用在了浏览器,谷歌地图和相册之中。
浏览器在缩放功能上做了很多工作。在放大或缩小之后点击“1x”按钮可以回到正常缩放状态。底部右侧的按钮会缩小显示整个页面并在页面上显示一个放大矩形框就像你能在上面中间截图看到的那样。拖动矩形框并释放会以“1x”视图显示页面的那一部分。安卓并没有滚动加速这使得最大滚动速度着实很慢——这就是谷歌为长网页导航给出的解决方案。
浏览器的另一个新增功能就是从网页上复制文本——之前你只能从输入框中复制文本。在菜单中选择“复制文本”会激活高亮模式在网页文本上拖动你的手指会使它们高亮。G1的轨迹球对于这种精准的移动十分的方便并且能够控制鼠标指针。这里并没有可以拖动的光标当你的手指离开屏幕的时候安卓就会复制文本并且移除高亮。所以你必须做到荒谬般的精确来使用复制功能。
安卓1.5中的浏览器很容易崩溃——比之前的版本经常多了。仅仅是以桌面模式浏览Ars Technica就会导致崩溃许多其它的站点也是一样。
![](http://cdn.arstechnica.net/wp-content/uploads/2013/12/lockscreen.png)
Ron Amadeo供图
默认的锁屏界面和图形锁屏都不再是空荡荡的黑色背景,而是和主屏幕一致的壁纸。
图形解锁界面的浅色背景显示出了谷歌在圆圈对齐工作上的草率和马虎。白色圆圈在黑圆圈里从来都不是在正中心的位置——像这样基本的对齐问题对于这一时期的安卓是个频繁出现的问题。
![Youtube上传工具内容快照自动旋转设置全新的音乐应用设计。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/TWEAKS2.png)
Youtube上传工具内容快照自动旋转设置全新的音乐应用设计。
Ron Amadeo供图
安卓1.5给予了YouTube应用向其网站上传视频的能力。上传通过从相册中分享视频到YouTube应用来完成或从YouTube应用中直接打开一个视频。这将会打开一个上传界面用户在这里可以设置像视频标题标签和权限这样的选项。照片可以以类似的方式上传到Picasa一个谷歌建立的图片网站。
整个系统的调整没有多少。现在喜爱的联系人在联系人列表中可以显示图片(尽管常规联系人还是没有图片)。第三张截图展示了设置中全新的自动旋转选项——这个版本同样也是第一个支持基于从设备内部传感器读取的数据自动切换方向的版本。
> #### 谷歌地图是第一个登陆谷歌市场的内置应用 ####
>
> 尽管这篇文章为了简单起见主要以安卓版本顺序来组织应用更新但还是有一些在这时间线之外的东西值得我们特别注意一下。2009年6月14日谷歌地图成为第一个通过谷歌市场更新的预置应用。尽管其它的所有应用更新还是要求一个完整的系统更新地图从系统中脱离了出来只要新特性已经就绪就可以随时接收升级周期之外的更新。
>
> 将应用从核心系统分离发布到安卓市场上将成为谷歌前进的主要关注点。总的来说OTA更新是个重大的主动改进——这需要OEM厂商和运营商的合作二者都是拖着后退的角色。更新同样没有做到到达每个设备。今天谷歌市场给了谷歌一个与每个安卓手机之间的联系而没有了这样的外界干扰。
>
> 然而这是后来才需要考虑的问题。在2009年谷歌只有两部裸机需要支持而且早期的安卓运营商似乎对谷歌的升级需要反应积极。这些早期的行动对谷歌部分来说将被证明是非常积极的决定。一开始公司只在最重要的应用——地图和Gmail上——走这条路线但后来它将大部分预置应用导入安卓市场。后来的举措比如Google Play服务甚至将应用API从系统移除加入了谷歌商店。
>
> 至于这时的新地图应用,得到了一个新的路线界面,此外还有提供公共交通和步行方向的能力。现在,路线只有个朴素的黑色列表界面——逐向风格的导航很快就会登场。
>
> 2009年6月同时还是苹果发布第三代iPhone——3GS——以及第三版iPhone OS的时候。iPhone OS 3的主要特性大多是追赶上来的项目比如复制/粘贴和对彩信的支持。苹果的硬件依然是更好的软件更流畅更整合还有更好的设计。尽管谷歌疯狂的开发步伐使得它不得不走上追赶的道路。iPhone OS 2是在安卓0.5的Milestone 5版本之前发布的在iOS一年的发布周期里安卓发布了五个版本。
![HTC Magic第二部安卓设备第一个不带实体键盘的设备。](http://cdn.arstechnica.net/wp-content/uploads/2014/04/htc-magic-white.jpg)
HTC Magic第二部安卓设备第一个不带实体键盘的设备。
HTC供图
纸杯蛋糕在改进安卓上做了巨大的工作特别是从硬件方面。虚拟键盘意味着不再需要实体键盘。自动旋转使得系统更加接近iPhone屏幕上的虚拟快门按键同样也意味着实体相机按键变成了可选选项。1.5发布后不久,第二部安卓设备的出现将会展示出这个平台未来的方向:HTC Magic。Magic上图没有实体键盘或相机按钮。它是没有间隙没有滑动结构的平板状设备依赖于安卓的虚拟按键来完成任务。
安卓旗舰机开始可能有着最多按键——一个实体qwerty键盘——后来随着时间流逝开始慢慢减少按键数量。而Magic是重大的一步去除了整个键盘和相机按钮它仍然使用通话和挂断键四个系统键以及轨迹球。
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron是Ars Technica的评论编缉专注于安卓系统和谷歌产品。他总是在追寻新鲜事物还喜欢拆解事物看看它们到底是怎么运作的。
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/8/
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://en.wikipedia.org/wiki/Diacritic
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

病毒过后,系统管理员看上了 Linux
=======================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/OPENHERE_blue.png?itok=3eqp-7gT)
我开源事业的第一步,是 2001 年我作为一名兼职系统管理员为大学工作的时候。我所在的小组以教学为目的,不仅在大学中,还在学术界的其他领域建立商业案例研究。
随着团队的发展,我们渐渐需要一个由文件服务、intranet 应用、域登录domain logons等构成的健壮的局域网络。我们的 IT 基础设施主要由跑着 Windows 98 的计算机组成,这些计算机对于大学的 IT 实验室来说已经太老了,就重新分配给了我们部门。
### 初探 Linux
一天,作为大学 IT 采购计划的一部分,我们部门收到了一台 IBM 服务器。我们计划将其用作 Internet 网关、域控制站、文件服务器和备份服务器,以及 intranet 应用程序主机。
拆封后,我们注意到它附带了红帽 Linux 的 CD。 我们的 22 人团队(包括我)对 Linux 一无所知。 经过几天的研究,我找到了一位朋友的朋友,一位以 Linux RTOS 编程为生的人。 求助他如何安装。
光看着那朋友用 CD 驱动器载入第一张安装 CD 并进入 Anaconda 安装系统,我的头都晕了。 大约一个小时,我们完成了基本的安装,但仍然没有可用的 internet 连接。
另一个小时的折腾使我们连接到互联网,但仍没有域登录或 Internet 网关功能。经过一个周末的折腾,我们让 Windows 98 的终端把 Linux PC 的 IP 作为代理,终于构建出了一个正常工作的共享互联环境。但域登录还需要一段时间。
我们用龟速的电话调制解调器下载了 [Samba][1],并手动配置它作为域控制站。文件服务也通过 NFS Kernel Server 开启了,随后为 Windows 98 的网络邻居创建用户目录并进行了必要的调整和配置。
这个设置完美运行了一段时间,直到最终我们决定开始使用 Intranet 应用管理时间表和一些别的东西。 这个时候,我已经离开了组织,并把大部分系统管理员的东西交给了接替我的人。
### 再遇 Linux
2004 年,我又重新装回了 Linux。我的妻子经营的一份独立的员工安置业务使用来自 Monster.com 等服务的数据来打通客户与求职者的交流渠道。
作为我们两人中的计算机好点的那个,在计算机和互联网出故障的时候,维修就成了我的分内之事。我们还需要用许多工具尝试,从堆积如山的简历中筛选出她每天必须看的。
Windows [BSoDs][2](蓝屏) 早已司空见惯,但只要我们的付费数据是安全的,那就还算可以容忍。为此我将不得不每周花几个小时去做备份。
一天,我们的电脑中了毒,并且通过简单的方法无法清除。我们并不了解磁盘上的数据发生了些什么。当磁盘彻底挂掉后,我们换上了一周前的备份磁盘,但是一周后它也挂了。我们的第二个备份直接拒绝启动。是时候寻求专业帮助了,所以我们把电脑送到一家靠谱的维修店。两天以后,我们被告知一些恶意软件或病毒已经将某些文件类型擦除殆尽,其中包括我们的付费数据。
这是对我妻子的商业计划的一个巨大的打击,同时意味着丢失合同并耽误了账单。我曾短期出国工作,并在台湾的 [Computex 2004][3] 购买了我的第一台笔记本电脑。 预装的是 Windows XP但我还是想换成 Linux。 我知道 Linux 已经为桌面端做好了准备,[Mandrake Linux][4] 是一个很不错的选择。 我第一次安装就很顺利。所有工作都执行的非常漂亮。我使用 [OpenOffice][5] 来满足我写作,演示文稿和电子表格的需求。
我们为我们的计算机换上了新的硬盘驱动器,并为其安装了 Mandrake Linux用 OpenOffice 替换了 Microsoft Office。我们依靠网络邮件来满足邮件需求并在 2004 年的 11 月迎来了 [Mozilla Firefox][6]。我的妻子马上从中看到了好处,因为没有崩溃或病毒/恶意软件感染。更重要的是,我们告别了困扰 Windows 98 和 XP 的频繁崩溃问题。她从此一直使用同一个发行版。
而我,开始尝试其他的发行版。 我爱上了 distro-hopping (译注:用于描述在不同版本的 Linux 发行版之间频繁切换的 Linux 用户的术语)和第一时间尝试新发行版的感觉。我也经常会在 Apache 和 NGINX 上尝试和测试 Web 应用程序,如 DrupalJoomla 和 WordPress。现在我们 2006 年出生的儿子,在 Linux 下成长。 也对 Tux PaintGcompris 和 SMPlayer 非常满意。
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/3/my-linux-story-soumya-sarkar
作者:[Soumya Sarkar][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[校对者ID](https://github.com/校对者ID)
[a]: https://opensource.com/users/ssarkarhyd
[1]: https://www.samba.org/
[2]: https://en.wikipedia.org/wiki/Blue_Screen_of_Death
[3]: https://en.wikipedia.org/wiki/Computex_Taipei
[4]: https://en.wikipedia.org/wiki/Mandriva_Linux
[5]: http://www.openoffice.org/
[6]: https://www.mozilla.org/en-US/firefox/new/

我成为一名软件工程师的原因和经历
==========================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/myopensourcestory.png?itok=6TXlAkFi)
1989 年乌干达首都,坎帕拉。
赞美我的父母,他们机智地把我送到叔叔的办公室去学着用电脑,而非将我留在家里添麻烦。几日后,我和另外六、七个小孩,还有一台放置在与讲台垂直的课桌上的崭新电脑,一起置身于 21 层楼高的狭小房间中。很明显我们还不够格去碰那家伙。在长达三周无趣的 DOS 命令学习后,终于迎来了这美妙的时光。终于轮到我来输 **copy doc.txt d:** 了。
那奇怪的声音其实是将一个简单文件写入五英寸软盘的声音但听起来却像音乐般美妙。那段时间这块软盘简直成为了我的至宝。我把所有我可以拷贝的东西都放在上面了。然而1989 年的乌干达,人们的生活十分正经,相较而言捣鼓电脑,拷贝文件还有格式化磁盘就称不上正经。我不得不专注于自己的学业,这让我离开了计算机科学走入了建筑工程学。
在这些年里,我和同龄人一样,干过很多份工作也学到了许多技能。我教过幼儿园的小朋友,也教过大人如何使用软件,在服装店工作过,还在教堂中担任过付费招待。在我获取堪萨斯大学的学位时,我正在技术高管的手下做技术助理,其实也就听上去比较洋气,也就是搞搞学生数据库而已。
当我 2007 年毕业时,这些技术已经变得不可或缺。建筑工程学的方方面面都与计算机科学深深的交织在一起,所以我们也都在不经意间也都学了些简单的编程知识。我对于这方面一直很着迷。但由于我不得不成为一位正经的工程师,所以我发展了一项私人爱好:写科幻小说。
在我的故事中,我以我笔下的女英雄的形式存在。她们都是编程能力出众的科学家,总是在冒险的途中用自己的技术发明战胜那些渣渣们,有时发明要在战斗中进行。我想出的这些“新技术”,一般基于真实世界中的发明。也有些是从买来的科幻小说中读到的。这就意味着我需要了解这些技术的原理,而且我的研究使我有意无意的关注了许多有趣的 subreddit 和电子杂志
### 开源:巨大的宝库
在我的经历中,那几周花在 DOS 命令上的时间仍然记忆犹新,我在一些偏门的项目上耗费心血,并占据了宝贵的学习时间。Geocities 已向所有 Yahoo! 用户开放,我就创建了一个网站,用于发布一些由我用小型数码相机拍摄的个人图片。我还随性地帮助家人和朋友解决他们遇到的电脑问题,同时也为教堂搭建了一个图书馆数据库。
这意味着我需要一直研究并尝试获取更多的信息使它们变得更棒。上帝保佑让互联网和开源砸在了我的面前。然后30 天试用和 license 限制对我而言就变成了过去式。我可以完全不受这些限制,持续的使用 GIMP、Inkscape 和 OpenOffice。
### 是时候正经了
我很幸运,有商业伙伴看出了我故事中的奇妙。她也是个想象力丰富的人,对更高效、更便捷的互联这个世界,充满了各种美好的想法。我们根据我们以往成功道路中经历的痛点制定了解决方案,但执行却成了一个问题。我们都缺乏那种将产品带入生活的能力,每当我们试图将想法带到投资人面前时,都表现的尤为突出。
我们需要学习编程。所以在 2015 年的夏末,我们踏上了征途,来到了 Holberton 学校的阶前。那是一所座落于旧金山由社区推进,基于项目教学的学校。
我的商业伙伴一天早上找到我,并开始了一段充满她方式的对话,每当她有疯狂想法想要拉我入伙时。
**Zee**: Gloria我想和你说点事在拒绝前能先听我说完吗
**Me**: 不行。
**Zee**: 我们想要申请一所学校的全栈工程师。
**Me**: 什么?
**Zee**: 就是这,看!就是这所学校,我们要申请这所学校学习编程。
**Me**: 我不明白。我们不是正在网上学 Python 和…
**Zee**: 这不一样。相信我。
**Me**: 那…
**Zee**: 还不相信吗?
**Me**: 好吧…我看看。
### 抛开偏见
我看到的和我们在网上听说的几乎差不多。这简直太棒了,以至于让人觉得不太真实,但我们还是决定尝试一下,双脚起跳,看看结果如何。
要成为学生,我们需要经历四个步骤,仅仅是针对才能和态度,无关学历和编程经历的筛选。筛选便是课程的开始,通过它我们开始学习与合作。
根据我和我合作伙伴的经验,相比 Holberton 学校的申请流程,其他的申请流程实在是太无聊了。它就像场游戏,如果你完成了一项挑战,你就能通往下一关,在那里有别的有趣的挑战正等着你。我们创建了 Twitter 账号,在 Medium 上写博客,为了创建网站而学习 HTML 和 CSS打造了一个充满活力的社区虽然在此之前我们并不知晓有谁会来。
在线社区最吸引人的就是我们使用电脑的经验是多种多样的而我们的背景和性别并非创始人我们私下里称他为“The Trinity三位一体做出选择的因素。大家只是喜欢聚在一块交流。我们都是通过学习编程来提升自己计算机技术的聪明人。
相较于其他的的申请流程,我们不需要泄露很多的身份信息。就像我的商业伙伴,她的名字里看不出她的性别和种族。直到最后一个步骤,在视频聊天的时候, The Trinity 才知道她是一位有色人种女性。迄今为止,促使她达到这个程度的只是她的热情和才华。肤色和性别,并没有妨碍或者帮助到她。还有比这更酷的吗?
那个我们获得录取通知书的晚上我们知道我们的命运已经改变我们获得了原先梦寐以求的生活。2016 年 1 月 22 日,我们来到北至巴特瑞大街 98 号,去见我们的小伙伴 [Hippokampoiers][2]这是我们的初次见面。很明显在见面之前“The Trinity”就已经开始做一些令人激动的事了。他们已经聚集了一批形形色色的人他们都专注于成为全栈工程师并为之乐此不疲。
这所大学有种与众不同的体验。感觉每天都是向编程的一次竭力的冲锋。我们着手的工程,并不会有很多指导,我们需要使用一切我们可以使用的资源找出解决方案。[Holberton 学校][1] 的办学宗旨便是向学员提供相较于我们已知而言更为多样的信息渠道。MOOCs大型开放式课程、教程、可用的开源软件和项目以及线上社区层出不穷将我们完成项目所需要的知识全都衔接了起来。加之宝贵的导师团队来指导我们制定解决方案这所学校变得并不仅仅是一所学校我们已经成为了求学者的社区。任何对软件工程感兴趣并对这种学习方法感兴趣的人我都十分推荐这所学校。下次开课在 2016 年 10 月,并且会接受新的申请。虽然会让人有些悲喜交加,但是那真的很值得。
### 开源问题
我最早使用的开源系统是 [Fedora][3],一个 [Red Hat][4] 赞助的项目。在与 IRC 中一名成员一番惊慌失措的交流后,她推荐了这款免费的系统。 虽然在此之前,我还未独自安装过操作系统,但是这激起了我对开源的兴趣和日常使用计算机时对开源软件的依赖性。我们提倡为开源贡献代码,创造并使用开源的项目。我们的项目就在 Github 上,任何人都可以使用或是向它贡献出自己的力量。我们也会使用或以自己的方式为一些既存的开源项目做出贡献。在学校里,我们使用的大部分工具是开源的,例如 Fedora、[Vagrant][5]、[VirtualBox][6]、[GCC][7] 和 [Discourse][8],仅举几例。
重回软件工程师之路以后,我始终憧憬着有这样一个时刻——能为开源社区做出一份贡献,能与他人分享我所掌握的知识。
### 多样性很重要
站在教室里,在 29 双明亮的眼睛的关注下交流心得,真是令人陶醉。学员中有 40% 是女性,有 44% 的有色人种。当你是一位有色人种且为女性,并身处于这个以缺乏多样性而著名的领域时,这些数字就变得非常重要了。那是高科技圣地麦加上的绿洲。我知道我做到了。
想要成为一个全栈的工程师是十分困难的,你甚至很难了解这意味着什么。这是一条充满挑战的路途,道路四周布满了对收获的未知。科技推动着未来飞速发展,而你也是美好未来很重要的一部分。虽然媒体在持续的关注解决科技公司的多样化的问题,但是如果能认清自己,了解自己,知道自己为什么想成为一名全栈工程师,这样你便能觅得一处生根发芽。
不过可能最重要的是,提醒人们女性在计算机的发展史上扮演着多么重要的角色,以帮助更多的女性回归到科技界,并使她们充满期待,而非对自己的性别与能力感到犹豫。她们的才能将会描绘出不仅仅是科技的未来,而是整个世界的未来。
------------------------------------------------------------------------------
via: https://opensource.com/life/16/4/my-open-source-story-gloria-bwandungi
作者:[Gloria Bwandungi][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/nappybrain
[1]: https://www.holbertonschool.com/
[2]: https://twitter.com/hippokampoiers
[3]: https://en.wikipedia.org/wiki/Fedora_(operating_system)
[4]: https://www.redhat.com/
[5]: https://www.vagrantup.com/
[6]: https://www.virtualbox.org/
[7]: https://gcc.gnu.org/
[8]: https://www.discourse.org/

Linux vs. Windows 设备驱动模型架构、API 和开发环境比较
============================================================================================
APIApplication Program Interface应用程序接口
ABIApplication Binary Interface应用程序二进制接口
设备驱动是操作系统的一部分,它通过特定的编程接口让硬件设备得以使用,这样软件就可以控制并操作那些设备了。因为每个驱动都是针对特定操作系统的,所以要在不同的计算机上使用你的设备,你就需要分别编写 Linux、Windows 或 Unix 设备驱动。这就是为什么在雇佣驱动开发者或者选择研发服务提供商的时候,考察他们为各种操作系统平台开发驱动的经验非常重要。
![](https://c2.staticflickr.com/8/7289/26775594584_d2fe7483f9_c.jpg)
驱动开发的第一步是理解每个操作系统处理其驱动的不同方式底层的驱动模型、使用的架构、以及可用的开发工具。例如Linux 的驱动程序模型就与 Windows 的非常不同。Windows 提倡驱动程序开发和操作系统开发分别进行,并通过一组 ABI 调用来结合驱动程序和操作系统;而 Linux 设备驱动程序开发不依赖任何稳定的 ABI 或 API驱动代码被直接纳入内核之中。每一种模型都有自己的优点和缺点但是如果你想为你的设备提供全面支持全面地了解它们是非常重要的。
在本文中,我们将比较 Windows 和 Linux 设备驱动程序,探索其不同的架构、API、开发和分发方式希望为您深入理解如何着手为这两种操作系统编写设备驱动程序提供一个良好的起点。
### 1. 设备驱动架构
Windows 和 Linux 中使用的设备驱动程序体系结构是不同的它们各有优缺点。差异主要源于一个事实Windows 是闭源操作系统,而 Linux 是开源操作系统。比较 Linux 和 Windows 的设备驱动程序架构,将帮助我们理解二者背后的核心差异。
#### 1.1. Windows 驱动架构
虽然 Linux 内核随同 Linux 驱动一起发布,但 Windows 内核并不包括设备驱动程序。现代 Windows 设备驱动程序使用 Windows 驱动模型WDM编写这是一种完全支持即插即用和电源管理的模型驱动程序可以根据需要加载和卸载。
处理来自应用的请求,是由 Windows 内核中称为 I/O 管理器的部分来完成的。I/O 管理器的作用是把这些请求转换为 I/O 请求数据包IRPIRP 用于在驱动层之间识别请求并传输数据。
Windows 驱动模型WDM提供三种驱动它们形成了三个层
- 过滤驱动提供关于 IRPs 的可选附加处理。
- 功能驱动是实现接口和每个设备通信的主要驱动。
- 总线驱动服务于不同的适配器和总线控制器,实现对设备的主机模式控制。
IRP 经过这些层,就像它们从 I/O 管理器传递到底层硬件那样。每一层都能够独立地处理一个 IRP 并把它送回 I/O 管理器。在硬件底层有硬件抽象层HAL它为物理设备提供一个通用的接口。
#### 1.2. Linux 驱动架构
相比于 Windows 设备驱动Linux 设备驱动架构核心的不同就是 Linux 没有一个标准的驱动模型也没有一个干净独立的层。每一个设备驱动都被当做一个能够自动的从内核中加载和卸载模块来实现。Linux 为即插即用设备和电源管理设备提供一些方式,以便那些驱动可以使用它们来正确地管理这些设备,但这并不是必须的。
外部模块与 Linux 内核通信的方式,是调用内核导出的函数并以约定的数据结构传递数据。来自用户应用的请求实际来自文件系统层或者网络层,它们会被转化为所需的数据结构。模块可以按层组织,由某个模块为一族设备(例如 USB 设备)提供统一接口,逐个处理请求。
Linux 设备驱动程序支持三种设备:
- 实现一个字节流接口的字符设备。
- 承载文件系统、以多字节数据块形式存取数据的块设备。
- 用于通过网络传输数据包的网络接口。
Linux 也有一个硬件抽象层HAL它像一个接口一样连接硬件和设备驱动。
### 2. 设备驱动 API
Linux 和 Windows 驱动 API 都属于事件驱动类型:只有当某些事件发生的时候,驱动代码才执行:当用户的应用程序希望从设备获取一些东西,或者当设备有某些请求需要告知操作系统。
#### 2.1. 初始化
在 Windows 上驱动由驱动对象结构表示驱动对象结构在驱动入口函数的执行过程中被初始化。这些入口点也注册一些回调函数对设备的添加和移除、驱动卸载和新进入的IRP做出回应。当一个设备连接的时候Windows 创建一个设备对象,这个设备对象代表设备驱动处理所有应用请求。
相比于 WindowsLinux 设备驱动的生命周期由内核模块的 module_init 和 module_exit 函数负责管理,它们分别在模块加载和卸载时被调用。模块负责注册自己,并通过内核接口来处理设备的请求:创建一个设备文件(或者一个网络接口),声明它希望管理的设备号,并注册一些当用户和设备文件交互时被调用的回调函数。
#### 2.2. 命名和声明设备
##### **在 Windows 上注册设备**
Windows 设备驱动在新设备连接时通过回调函数 AddDevice 得到通知。它接着会创建一个用于标识该设备的设备对象。取决于驱动的类型设备对象可以是物理设备对象PDO、功能设备对象FDO或者过滤设备对象FIDO。设备对象可以堆叠PDO 位于最底层。
设备对象在设备连接到计算机期间一直存在。设备扩展结构可以用来把全局数据与设备对象关联起来。
设备对象可以拥有形如 **\Device\DeviceName** 的名字,系统用它来识别和定位设备。应用程序使用 CreateFile API 函数打开一个具有上述名字的文件,获得一个句柄,然后就可以用这个句柄和设备交互。
然而,通常只有 PDO 有自己的名字。未命名的设备可以通过设备类接口来访问:设备驱动会注册一个或多个接口,并以 128 位的全局唯一标识符GUID标识它们。用户应用同样可以通过 GUID 来获取句柄。
##### **在 Linux 上注册设备**
在 Linux 平台上,用户应用通过位于 /dev 目录的文件系统入口访问设备。在模块初始化的时候,驱动通过调用内核函数 register_chrdev 创建所有需要的入口。对这些文件(以及由其返回的描述符)的进一步系统调用,例如读、写或关闭,会被分发到模块安装在 file_operations 或 block_device_operations 等结构中的回调函数。
设备驱动模块负责分配和维护其运行所需的任何数据结构。传入文件系统回调函数的 file 结构有一个 private_data 字段,可以用来存放指向驱动私有数据的指针。块设备和网络接口 API 也提供类似的字段。
虽然应用使用文件系统节点来定位设备,但是 Linux 在内部使用主设备号和次设备号来识别设备及其驱动。主设备号用来识别设备驱动,而次设备号由驱动用来识别它所管理的设备。驱动为了管理一个或多个固定的主设备号,必须先注册自己,或者让系统动态分配一个未使用的设备号给它。
目前Linux 为主次设备对使用一个 32 位的值,其中 12 位分配给主设备号,允许多达 4096 个不同的驱动,其余 20 位分配给次设备号。主次设备对对于字符设备和块设备是相互独立的,所以一个字符设备和一个块设备可以使用相同的设备对而不导致冲突。网络接口则是通过像 eth0 这样的符号名来识别的,与字符设备和块设备的主次设备号无关。
#### 2.3. 交换数据
Linux 和 Windows 都支持在用户级应用程序和内核级驱动程序之间传输数据的三种方式:
- **缓冲型输入输出** 它使用由内核管理的缓冲区。对于写操作,内核从用户空间缓冲区中拷贝数据到内核分配缓冲区,并且把它传送到设备驱动中。读操作是一样的,由内核将数据从内核缓冲区中拷贝到应用提供的缓冲区中。
- **直接型输入输出** 它不使用拷贝功能。取而代之的是,内核将用户分配的缓冲区锁定在物理内存中,使它在数据传输过程中不被换出。
- **内存映射** 它也能够由内核管理,这样内核和用户空间应用就能够通过不同的地址访问同样的内存页。
##### **Windows 上的 I/O 模式**
对缓冲型 I/O 的支持是 WDM 的内置功能。设备驱动可以通过 IRP 结构中的 AssociatedIrp.SystemBuffer 字段访问缓冲区。当需要和用户空间通信的时候,驱动只需对该缓冲区进行读写操作。
Windows 上的直接 I/O 由内存描述符列表MDLs介导。这种半透明的结构是通过在 IRP 中的 MdlAddress 字段来访问的。它们被用来定位由用户应用程序分配的缓冲区的物理地址,并在 I/O 请求的持续时间内固定。
在 Windows 上进行数据传输的第三个选项称为 METHOD_NEITHER。在这种情况下内核把用户空间输入输出缓冲区的虚拟地址直接传给驱动既不验证它们是否有效也不保证它们映射到设备驱动可访问的物理内存。设备驱动负责处理数据传输的全部细节。
##### **Linux 上的驱动程序 I/O 模式**
Linux 提供了许多函数,例如 clear_user、copy_to_user、strncpy_from_user 等,用来在内核和用户内存之间执行缓冲区数据传输。这些函数会验证指向数据缓冲区的指针是否有效,并通过在存储器区域之间安全地拷贝数据来处理传输的所有细节。
然而,块设备的驱动对已知大小的整个数据块进行操作,这些数据块可以在内核和用户地址空间之间快速移动而不需要拷贝。Linux 内核为所有块设备驱动自动处理这种情况:块请求队列无需多余拷贝即可传送数据块,而 Linux 系统调用接口负责把文件系统请求转换为块请求。
最终,设备驱动还能够在内核地址空间分配一些存储页面(不可换出),并使用 remap_pfn_range 函数把这些页面直接映射到用户进程的地址空间。之后,应用就能获取该缓冲区的虚拟地址,并用它来和设备驱动通信。
### 3. 设备驱动开发环境
#### 3.1. 设备驱动框架
##### **Windows 驱动程序工具包**
Windows 是一个闭源操作系统。Microsoft 提供 Windows 驱动程序工具包,以方便非 Microsoft 供应商开发 Windows 设备驱动。工具包中包含开发、调试、验证和打包 Windows 设备驱动所需的所有内容。
Windows 驱动模型为设备驱动定义了一个干净的接口框架。Windows 保持这些接口的源代码和二进制兼容性。编译好的 WDM 驱动通常是前向兼容的:也就是说,一个较旧的驱动不需要重新编译就能够在较新的系统上运行,但它当然无法访问系统提供的新功能。不过,驱动不保证后向兼容性。
##### **Linux 源代码**
和 Windows 相对比Linux 是一个开源操作系统,因此 Linux 的整个源代码就是用于驱动开发的 SDK。虽然没有设备驱动的正式框架但是 Linux 内核包含许多提供驱动注册等通用服务的子系统。这些子系统的接口在内核头文件中描述。
尽管 Linux 也定义了接口这些接口在设计上并不稳定。Linux 不对前向和后向兼容性做任何保证。设备驱动对于不同的内核版本需要重新编译。不保证稳定性使得 Linux 内核能够快速开发,因为开发人员不必支持旧的接口,并且能够使用最好的方法解决手头的问题。
当为 Linux 编写树内in-tree驱动程序时这种不断变化的环境不会造成任何问题因为它们作为内核源代码的一部分与内核本身同步更新。然而闭源驱动必须在树外单独开发并且必须不断维护才能支持不同的内核版本。因此Linux 鼓励设备驱动程序开发人员在树内维护他们的驱动。
#### 3.2. 为设备驱动构建系统
Windows 驱动程序工具包为 Microsoft Visual Studio 添加了驱动开发支持,并包括用来构建驱动程序代码的编译器。开发 Windows 设备驱动程序与在 IDE 中开发用户空间应用程序没有太大的区别。Microsoft 提供了一个企业 Windows 驱动程序工具包,使其能够构建类似于 Linux 的命令行环境。
Linux 使用 Makefile 作为树内和树外系统设备驱动程序的构建系统。Linux 构建系统非常发达,通常是一个设备驱动程序只需要少数行就产生一个可工作的二进制代码。开发人员可以使用任何[IDE][5],只要它可以处理 Linux 源代码库和运行 make ,或者他们可以很容易地从终端手动编译驱动程序。
#### 3.3. 文档支持
Windows 具有对于驱动程序的开发的良好文档支持。Windows 驱动程序工具包包括文档和示例驱动程序代码,通过 MSDN 可获得关于内核接口的大量信息,并存在大量的有关驱动程序开发和 Windows 内部的参考和指南。
Linux 的文档不如 Windows 详尽,但整个 Linux 源代码都可供驱动开发人员查阅,这弥补了这一不足。源代码树中的 Documentation 目录记录了一部分 Linux 子系统,此外还有关于 Linux 设备驱动程序开发和 Linux 内核概述的[多本书籍][4]。
Linux 不提供指定的设备驱动程序的样本,但现有生产驱动程序的源代码可用,可以用作开发新设备驱动程序的参考。
#### 3.4. 调试支持
Linux 和 Windows 都有可用于追踪调试驱动程序代码的日志记录工具。在 Windows 上将使用 DbgPrint 函数,而在 Linux 上使用的函数称为 printk。然而并不是每个问题都可以通过只使用日志记录和源代码来解决。有时断点更有用因为它们允许检查驱动代码的动态行为。交互式调试对于研究崩溃的原因也是必不可少的。
Windows 通过其内核级调试器 WinDbg 支持交互式调试。这需要通过串行端口连接两台机器一台计算机运行被调试的内核另一台运行调试器并控制被调试的操作系统。Windows 驱动程序工具包包括 Windows 内核的调试符号,因此 Windows 的数据结构在调试器中部分可见。
Linux 还支持通过 KDB 和 KGDB 进行交互式调试。调试支持可以内置到内核中并在启动时启用。之后可以直接通过物理键盘调试系统或者通过串行端口从另一台计算机连接。KDB 提供了一个简单的命令行界面这是在同一台机器上调试内核的唯一方法。然而KDB 缺乏源代码级的调试支持。KGDB 则通过串行端口提供一个更复杂的接口。它允许使用 GDB 这样的标准应用程序调试器来调试 Linux 内核,就像调试任何其他用户空间应用程序一样。
### 4. 设备驱动的分发
#### 4.1. 安装设备驱动
在 Windows 上,安装的驱动程序由被称为 INF 的文本文件描述,这些文件通常存储在 C:\Windows\INF 目录中。它们由驱动供应商提供,定义了驱动程序服务哪些设备、到哪里可以找到驱动程序的二进制文件、以及驱动程序的版本等。
当一个新设备插入计算机时Windows 通过查看已经安装的驱动程序,选择适当的一个加载。当设备被移除的时候,对应的驱动也会自动卸载。
在 Linux 上,一些驱动被构建进内核并保持永久加载。非必要的驱动被构建为内核模块,通常存储在 /lib/modules/kernel-version 目录中。这个目录还包含各种配置文件,如 modules.dep用于描述内核模块之间的依赖关系。
虽然 Linux 内核可以自身在启动时加载一些模块但通常模块加载由用户空间应用程序监督。例如init 进程可能在系统初始化期间加载一些模块udev 守护程序负责跟踪新插入的设备并为它们加载适当的模块。
#### 4.2. 更新设备驱动
Windows 为设备驱动程序提供了稳定的二进制接口,因此在某些情况下,无需随系统一起更新驱动程序二进制文件。任何必要的更新由 Windows Update 服务处理,它负责定位、下载和安装适用于当前系统的最新版驱动程序。
然而Linux 不提供稳定的二进制接口,因此每次内核更新时都有必要重新编译和更新所有必需的设备驱动程序。显然,内置在内核中的设备驱动程序会自动更新,但是树外模块会产生一些小问题。维护最新的模块二进制文件的任务通常用 [DKMS][3] 来解决:这是一个在安装新内核版本时自动重建所有已注册内核模块的服务。
#### 4.3. 安全注意事项
所有 Windows 设备驱动程序必须在 Windows 加载它们之前进行数字签名。在开发期间可以使用自签名证书,但是分发给终端用户的驱动程序包必须使用 Microsoft 信任的有效证书进行签名。供应商可以从 Microsoft 授权的任何受信任的证书颁发机构获取软件出版商证书。然后,此证书由 Microsoft 交叉签名,并且生成的交叉证书用于在发行之前签署驱动程序包。
Linux 内核还可以配置为验证正在加载的内核模块的签名,并禁止不可信的内核模块。被内核所信任的公钥集在构建时是固定的,并且是完全可配置的。由内核执行的检查,它的严格性在构建时也是可配置的,范围从简单地为不可信模块发出警告,到拒绝加载任何可疑的有效性。
### 5. 结论
如上所示Windows 和 Linux 的设备驱动程序基础设施有一些共同点,例如调用 API 的方法,但更多的细节相当不同。最突出的差异源于 Windows 是由商业公司开发的闭源操作系统这一事实。这使得 Windows 上必须有良好的、文档化的、稳定的驱动程序 ABI 和正式的框架;而在 Linux 上,这些更像是对源代码的有益补充。文档支持在 Windows 环境中也更加完善,因为 Microsoft 拥有维护它所需的资源。
另一方面Linux 不会使用框架限制设备驱动程序开发人员,并且内核和生产设备驱动程序的源代码可以在需要的时候有所帮助。缺乏接口稳定性也有意义,因为它意味着最新的设备驱动程序总是使用最新的接口,内核本身承载较小的后向兼容性负担,这带来了更干净的代码。
了解这些差异以及每个系统的具体情况,是为您的设备提供有效的驱动程序开发和支持的关键第一步。我们希望这篇关于 Windows 和 Linux 设备驱动程序开发的对比,有助于您理解它们,并为您研究设备驱动程序开发提供一个很好的起点。
--------------------------------------------------------------------------------
via: http://xmodulo.com/linux-vs-windows-device-driver-model.html
作者:[Dennis Turpitka][a]
译者:[译者ID](https://github.com/FrankXinqi(&YangYang))
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://xmodulo.com/author/dennis
[1]: http://xmodulo.com/linux-vs-windows-device-driver-model.html?format=pdf
[2]: https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=PBHS9R4MB9RX4
[3]: http://xmodulo.com/build-kernel-module-dkms-linux.html
[4]: http://xmodulo.com/go/linux_device_driver_books
[5]: http://xmodulo.com/good-ide-for-c-cpp-linux.html
