Mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-01-28 23:20:10 +08:00
Commit 9dfe7e4db2
README.md
@@ -51,113 +51,117 @@ How LCTT is organized

* 2014/12/25 Promoted runningwater to Core Translator.
* 2015/04/19 Launched the LFS-BOOK-7.7-systemd project.
* 2015/06/09 Promoted ictlyh and dongfengweixiao to Core Translators.
* 2015/11/10 Promoted strugglingyouth, FSSlc, Vic020, and alim0x to Core Translators.

Active members
-------------------------------

Current active TP members:
- CORE @wxy,
- CORE @carolinewuyan,
- CORE @DeadFire,
- CORE @geekpi,
- CORE @GOLinux,
- CORE @reinoir,
- CORE @bazz2,
- CORE @zpl1025,
- CORE @ictlyh,
- CORE @dongfengweixiao
- CORE @carolinewuyan,
- CORE @strugglingyouth,
- CORE @FSSlc
- CORE @zpl1025,
- CORE @bazz2,
- CORE @Vic020,
- CORE @dongfengweixiao,
- CORE @alim0x,
- Senior @reinoir,
- Senior @tinyeyeser,
- Senior @vito-L,
- Senior @jasminepeng,
- Senior @willqian,
- Senior @vizv,
- @ZTinoZ,
- @Vic020,
- @runningwater,
- @KayGuoWhu,
- @luoxcat,
- @alim0x,
- @2q1w2007,
- @theo-l,
- @FSSlc,
- @su-kaiyao,
- @blueabysm,
- @flsf,
- @martin2011qi,
- @SPccman,
- @wi-cuckoo,
- @Linchenguang,
- @linuhap,
- @crowner,
- @Linux-pdz,
- @H-mudcup,
- @yechunxiao19,
- @woodboow,
- @Stevearzh,
- @disylee,
- @cvsher,
- @wwy-hust,
- @johnhoow,
- @felixonmars,
- @TxmszLou,
- @shipsw,
- @scusjs,
- @wangjiezhe,
- @hyaocuk,
- @MikeCoder,
- @ZhouJ-sh,
- @boredivan,
- @goreliu,
- @l3b2w1,
- @JonathanKang,
- @NearTan,
- @jiajia9linuxer,
- @Love-xuan,
- @coloka,
- @owen-carter,
- @luoyutiantang,
- @JeffDing,
- @icybreaker,
- @tenght,
- @liuaiping,
- @mtunique,
- @rogetfan,
- @nd0104,
- @mr-ping,
- @szrlee,
- @lfzark,
- @CNprober,
- @DongShuaike,
- @ggaaooppeenngg,
- @haimingfg,
- @213edu,
- @Tanete,
- @guodongxiaren,
- @zzlyzq,
- @FineFan,
- @yujianxuechuan,
- @Medusar,
- @shaohaolin,
- @ailurus1991,
- @liaoishere,
- @CHINAANSHE,
- @stduolc,
- @yupmoon,
- @tomatoKiller,
- @zhangboyue,
- @kingname,
- @KevinSJ,
- @zsJacky,
- @willqian,
- @Hao-Ding,
- @JygjHappy,
- @Maclauring,
- @small-Wood,
- @cereuz,
- @fbigun,
- @lijhg,
- @soooogreen,
- runningwater,
- ZTinoZ,
- theo-l,
- luoxcat,
- disylee,
- wi-cuckoo,
- haimingfg,
- KayGuoWhu,
- wwy-hust,
- martin2011qi,
- cvsher,
- su-kaiyao,
- flsf,
- SPccman,
- Stevearzh
- Linchenguang,
- oska874
- Linux-pdz,
- 2q1w2007,
- felixonmars,
- wyangsun,
- MikeCoder,
- mr-ping,
- xiqingongzi
- H-mudcup,
- zhangboyue,
- goreliu,
- DongShuaike,
- TxmszLou,
- ZhouJ-sh,
- wangjiezhe,
- NearTan,
- icybreaker,
- shipsw,
- johnhoow,
- linuhap,
- boredivan,
- blueabysm,
- liaoishere,
- yechunxiao19,
- l3b2w1,
- XLCYun,
- KevinSJ,
- tenght,
- coloka,
- luoyutiantang,
- yupmoon,
- jiajia9linuxer,
- scusjs,
- tnuoccalanosrep,
- woodboow,
- 1w2b3l,
- crowner,
- mtunique,
- dingdongnigetou,
- CNprober,
- JonathanKang,
- Medusar,
- hyaocuk,
- szrlee,
- Xuanwo,
- nd0104,
- xiaoyu33,
- guodongxiaren,
- zzlyzq,
- yujianxuechuan,
- ailurus1991,
- ggaaooppeenngg,
- Ricky-Gong,
- lfzark,
- 213edu,
- Tanete,
- liuaiping,
- jerryling315,
- tomatoKiller,
- stduolc,
- shaohaolin,
- Timeszoro,
- rogetfan,
- FineFan,
- kingname,
- jasminepeng,
- JeffDing,
- CHINAANSHE,
(Top 100, ranked by lines committed)

Active members of the LFS project:

@@ -169,7 +173,7 @@ Active members of the LFS project:

- @KevinSJ
- @Yuking-net

(Updated 2015/06/09, ranked by the GitHub contributors list)
(Updated 2015/11/29)

Thank you all for your support!
@@ -0,0 +1,165 @@

How to switch from NetworkManager to systemd-networkd on Linux
================================================================================

In the Linux world, the adoption of [systemd][1] has been a subject of heated debate, and the flame war between its supporters and detractors is still burning. Nevertheless, most mainstream Linux distributions today have adopted systemd as their default init system.

Billed by its author as a system that is "never finished, never complete, but tracking progress of technology", systemd is far more than an init process: it is designed as a broader system and service management platform, an ecosystem encompassing a growing number of core system processes, libraries, and utilities.

One part of **systemd** is **systemd-networkd**, which handles network configuration within the systemd ecosystem. Using systemd-networkd, you can configure basic DHCP or static IP networking for network devices. It can also configure virtual networking features such as bridges, tunnels, and VLANs. systemd-networkd does not yet support wireless networking directly, but you can configure a wireless adapter with the wpa_supplicant service and hook it up to **systemd-networkd**.

On many Linux distributions, NetworkManager is still the default network configuration manager. Compared to NetworkManager, **systemd-networkd** remains under active development and lacks some features. For example, it cannot yet keep your computer connected at all times across multiple interfaces the way NetworkManager can, and it does not offer ifup/ifdown hooks for higher-level scripting. Still, systemd-networkd integrates very well with the other systemd components (e.g., **resolved** for name resolution, **timesyncd** for NTP, udevd for device naming), and over time it will only play a larger role in systemd environments.

If you are happy with how **systemd-networkd** is progressing, switching from NetworkManager to systemd-networkd is worth considering. If you are fiercely opposed to systemd and are happy with NetworkManager or a [basic network service][2], that is fine too.

But for those who want to try systemd-networkd, read on: this guide explains how to switch from NetworkManager to systemd-networkd on Linux.

### Requirements ###

systemd-networkd ships with systemd version 210 and higher. Linux distributions such as Debian 8 "Jessie" (systemd 215), Fedora 21 (systemd 217), Ubuntu 15.04 (systemd 219), or later are therefore compatible with systemd-networkd.

For other distributions, check your systemd version before proceeding.

    $ systemctl --version

### Switching from NetworkManager to systemd-networkd ###

Switching from NetworkManager to systemd-networkd is actually quite straightforward (and so is the reverse).
First, disable the NetworkManager service and enable systemd-networkd as follows.

    $ sudo systemctl disable NetworkManager
    $ sudo systemctl enable systemd-networkd

You also need to enable the **systemd-resolved** service, which systemd-networkd uses for name resolution. This service also implements a caching DNS server.

    $ sudo systemctl enable systemd-resolved
    $ sudo systemctl start systemd-resolved

Once started, **systemd-resolved** creates its own resolv.conf somewhere under the /run/systemd directory. However, it is common practice to store DNS resolver information in /etc/resolv.conf, and many applications still rely on /etc/resolv.conf. So for compatibility, create a symlink to /etc/resolv.conf as follows.

    $ sudo rm /etc/resolv.conf
    $ sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf

### Configuring network connections with systemd-networkd ###

To configure network services with systemd-networkd, you specify the configuration in plain-text files with a .network extension. These network configuration files are stored in and loaded from /etc/systemd/network. When there are multiple files, systemd-networkd loads and processes them one by one in lexical order.

Start by creating the /etc/systemd/network directory.

    $ sudo mkdir /etc/systemd/network

#### DHCP networking ####

Let's configure DHCP networking first. For this, create the following configuration file. The file name can be arbitrary, but remember that the files are processed in lexical order.

    $ sudo vi /etc/systemd/network/20-dhcp.network

----------

    [Match]
    Name=enp3*

    [Network]
    DHCP=yes

As you can see above, each network configuration file contains one or more "sections", each of which begins with [XXX]. Each section contains one or more key/value pairs. The `[Match]` section determines which network device(s) the configuration file applies to. For example, this file matches any network device whose name starts with enp3 (e.g., enp3s0, enp3s1, enp3s2, and so on). For any matched interface, it then applies the DHCP network configuration specified in the [Network] section.
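The `Name=` value above is a shell-style glob. As a rough illustration (using Python's `fnmatch`, which follows the same glob rules, rather than anything from systemd itself), the `enp3*` pattern behaves like this:

```python
from fnmatch import fnmatch

# The [Match] pattern from 20-dhcp.network above.
pattern = "enp3*"

# Interface names that start with "enp3" match the glob...
assert fnmatch("enp3s0", pattern)
assert fnmatch("enp3s1", pattern)

# ...while other interfaces are left untouched.
assert not fnmatch("eth0", pattern)
assert not fnmatch("enp4s0", pattern)

print("enp3* matches enp3s0/enp3s1 but not eth0/enp4s0")
```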
#### Static IP networking ####

If you want to assign a static IP address to a network device, create the following configuration file.

    $ sudo vi /etc/systemd/network/10-static-enp3s0.network

----------

    [Match]
    Name=enp3s0

    [Network]
    Address=192.168.10.50/24
    Gateway=192.168.10.1
    DNS=8.8.8.8

As you can guess, the interface enp3s0 is assigned the address 192.168.10.50/24, the default gateway 192.168.10.1, and the DNS server 8.8.8.8. One subtlety here is that the interface name enp3s0 also matches the glob pattern defined in the earlier DHCP configuration. However, since the file "10-static-enp3s0.network" is processed before "20-dhcp.network" in lexical order, the static configuration takes precedence over the DHCP configuration for the enp3s0 interface.
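The precedence described above falls out of plain lexical file ordering. A minimal sketch (ordinary Python string sorting, standing in for systemd-networkd's load order):

```python
# The two configuration files from the examples above.
files = ["20-dhcp.network", "10-static-enp3s0.network"]

# systemd-networkd processes files in lexical order, so the
# "10-..." static config is handled before the "20-..." DHCP one.
order = sorted(files)
assert order == ["10-static-enp3s0.network", "20-dhcp.network"]

print("processing order:", " -> ".join(order))
```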
Once you have finished creating the configuration files, restart the systemd-networkd service or reboot.

    $ sudo systemctl restart systemd-networkd

Check the status of the services by running:

    $ systemctl status systemd-networkd
    $ systemctl status systemd-resolved

![](https://farm1.staticflickr.com/719/21010813392_76abe123ed_c.jpg)

### Configuring virtual network devices with systemd-networkd ###

**systemd-networkd** also lets you configure virtual network devices such as bridges, VLANs, tunnels, VXLANs, bonds, and so on. You must configure these virtual devices in files with a .netdev extension.

Here I show how to configure a bridge interface.

#### Linux bridge ####

If you want to create a Linux bridge (br0) and add a physical interface (eth1) to it, create the following configuration.

    $ sudo vi /etc/systemd/network/bridge-br0.netdev

----------

    [NetDev]
    Name=br0
    Kind=bridge

Then configure the bridge interface br0 and the slave interface eth1 with .network files as follows.

    $ sudo vi /etc/systemd/network/bridge-br0-slave.network

----------

    [Match]
    Name=eth1

    [Network]
    Bridge=br0

----------

    $ sudo vi /etc/systemd/network/bridge-br0.network

----------

    [Match]
    Name=br0

    [Network]
    Address=192.168.10.100/24
    Gateway=192.168.10.1
    DNS=8.8.8.8
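All of these unit files use INI-style sections and key/value pairs. As a rough illustration only (parsing the bridge definition above with Python's `configparser`, which reads the same INI shape; this is not a systemd API), the structure can be inspected like this:

```python
import configparser

# The .netdev fragment for the bridge, as shown above.
netdev = """
[NetDev]
Name=br0
Kind=bridge
"""

parser = configparser.ConfigParser()
parser.read_string(netdev)

# One [NetDev] section with Name and Kind keys.
assert parser.sections() == ["NetDev"]
assert parser["NetDev"]["Name"] == "br0"
assert parser["NetDev"]["Kind"] == "bridge"

print("bridge", parser["NetDev"]["Name"], "of kind", parser["NetDev"]["Kind"])
```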
Finally, restart systemd-networkd.

    $ sudo systemctl restart systemd-networkd

You can use the [brctl tool][3] to verify that the bridge br0 has been created.

### Summary ###

With systemd aspiring to become the manager of Linux systems, it is no surprise that something like systemd-networkd exists to manage network configuration. At this stage, however, systemd-networkd seems better suited to server environments where the network configuration is relatively stable. For desktop/laptop environments with assorted transient wired/wireless interfaces, NetworkManager is still the better choice.

For those who want to learn more about systemd-networkd, refer to the official [man page][4] for the full list of supported options and key points.

--------------------------------------------------------------------------------

via: http://xmodulo.com/switch-from-networkmanager-to-systemd-networkd.html

Author: [Dan Nanni][a]
Translator: [ictlyh](http://mutouxiaogui.cn/blog)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/use-systemd-system-administration-debian.html
[2]:http://xmodulo.com/disable-network-manager-linux.html
[3]:http://xmodulo.com/how-to-configure-linux-bridge-interface.html
[4]:http://www.freedesktop.org/software/systemd/man/systemd.network.html
@@ -0,0 +1,427 @@

The supercoders: 15 of the world's best living programmers!
================================================================================

When developers talk about the world's top programmers, these are the names that tend to come up.

There seem to be plenty of programmers these days, and plenty of excellent ones among them. But which programmers are the best of the best?

While that is hard to judge objectively, it is certainly a topic developers love to debate. ITworld dug into the programmer community, dodging the flying spittle of the arguments, to find whatever consensus might exist. As it turns out, a handful of names keep coming up in these discussions.

![](http://images.techhive.com/images/article/2015/09/superman-620x465-100611650-orig.jpg)

*Image credit: [tom_bullock CC BY 2.0][1]*

Let's take a look at these world-class programmers!
### Margaret Hamilton ###

![](http://images.techhive.com/images/article/2015/09/margaret_hamilton-620x465-100611764-orig.jpg)

*Image credit: [NASA][2]*

**Claim to fame: The brains behind the Apollo flight control software**

Bio: As director of the Software Engineering Division at the Charles Stark Draper Laboratory, she headed the team that designed and built the onboard flight control software for NASA's Apollo and Skylab missions. Building on that Apollo work, she went on to develop the [Universal Systems Language][5] and the [Development Before the Fact][6] paradigm. She pioneered the concepts of [asynchronous software, priority scheduling, and ultra-reliable software design][7], and is credited with coining the term "[software engineering][8]". She won the [Augusta Ada Lovelace Award][9] in 1986 and NASA's [Exceptional Space Act Award][10] in 2003.

Comments:

> "Hamilton invented testing; she pretty much formalized computer engineering in the US." —— [ford_beeblebrox][11]

> "I think before her (and, without disrespect, including Knuth) computer programming was (and to an extent still is) a branch of mathematics. However, a flight control system for a spacecraft clearly moved programming into a different category." —— [Dan Allen][12]

> "... she introduced the term 'software engineering' and set the best example of how to do it." —— [David Hamilton][13]

> "What a badass" —— [Drukered][14]
### Donald Knuth ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_donald_knuth-620x465-100502872-orig.jpg)

*Image credit: [vonguard CC BY-SA 2.0][15]*

**Claim to fame: Author of The Art of Computer Programming (TAOCP)**

Bio: Wrote [the definitive book on the theory of programming][16]. Also created the TeX digital typesetting system. In 1971 he was the first recipient of the ACM [Grace Murray Hopper Award][17]. He won the ACM [A. M. Turing Award][18] in 1974, the [National Medal of Science][19] in 1979, and the IEEE [John von Neumann Medal][20] in 1995. He was inducted into the [Computer History Museum Hall of Fellows][21] in 1998.

Comments:

> "... wrote The Art of Computer Programming (TAOCP), probably the greatest contribution to computer programming ever." —— [Anonymous][22]

> "Don Knuth's TeX is the only piece of software I've ever used that is practically free of bugs. Very impressive!" —— [Jaap Weel][23]

> "Brilliant, if you ask me!" —— [Mitch Rees-Jones][24]
### Ken Thompson ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_ken-thompson-620x465-100502874-orig.jpg)

*Image credit: [Association for Computing Machinery][25]*

**Claim to fame: Father of Unix**

Bio: Co-created Unix with [Dennis Ritchie][26]. Also created the [B language][27], the [UTF-8 character encoding scheme][28], and the [ed text editor][29], and is a co-developer of the Go language. He (together with Ritchie) won the [A. M. Turing Award][30] in 1983, the [IEEE Computer Pioneer Award][31] in 1994, and the [National Medal of Technology][32] in 1998. He was inducted into the [Computer History Museum Hall of Fellows][33] in 1997.

Comments:

> "... probably the most accomplished programmer ever. The Unix kernel, the Unix tools, the world chess champion program Belle, Plan 9, the Go language." —— [Pete Prokopowicz][34]

> "Ken's contributions are, as far as I know, unmatched; they are so fundamental, so practical, and have stood the test of time, as they are still in use today." —— [Jan Jannink][35]
### Richard Stallman ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_richard_stallman-620x465-100502868-orig.jpg)

*Image credit: [Jiel Beaumadier CC BY-SA 3.0][135]*

**Claim to fame: Creator of Emacs and GCC**

Bio: Founded the [GNU Project][36] and created many of its core tools, such as [Emacs, GCC, GDB][37], and [GNU Make][38]. Also founded the [Free Software Foundation][39]. Won the ACM [Grace Murray Hopper Award][40] in 1990 and the [EFF Pioneer Award][41] in 1998.

Comments:

> "... single-handedly out-coded a roomful of crack Lisp hackers in the Symbolics vs. LMI battle." —— [Srinivasan Krishnan][42]

> "Through his programming excellence and strength of conviction, he spawned an entire subculture of programming and computing." —— [Dan Dunay][43]

> "I may disagree with the great man on many things, but without writing his epitaph just yet, he is undeniably a great programmer." —— [Marko Poutiainen][44]

> "Try to imagine Linux without the prior work of the GNU Project. Stallman was its bombshell!" —— [John Burnette][45]
### Anders Hejlsberg ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_anders_hejlsberg-620x465-100502873-orig.jpg)

*Image credit: [D.Begley CC BY 2.0][46]*

**Claim to fame: Creator of Turbo Pascal**

Bio: [Original author of Turbo Pascal][47], the most popular Pascal compiler and the first integrated development environment. Later [led the construction of Delphi][48], Turbo Pascal's successor. Also the [chief designer and architect of C#][49]. Won [Dr. Dobb's Excellence in Programming Award][50] in 2001.

Comments:

> "He wrote the [Pascal] compiler in assembly language for both of the dominant PC operating systems of the day (DOS and CPM). It compiled, linked, and ran in seconds rather than minutes." —— [Steve Wood][51]

> "I revere this guy - he created the development tools that were my favorites through three crucial periods on my way to becoming a professional software engineer." —— [Stefan Kiryazov][52]
### Doug Cutting ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_doug_cutting-620x465-100502871-orig.jpg)

*Image credit: [vonguard CC BY-SA 2.0][53]*

**Claim to fame: Creator of Lucene**

Bio: [Developed the Lucene search engine as well as the web crawler Nutch][54] and [Hadoop][55], a suite for distributed processing of large data sets. A strong proponent of open source (Lucene, Nutch, and Hadoop are all open source). Former [director of the Apache Software Foundation][56].

Comments:

> "... the same man who gave the world an excellent search framework (lucene/solr) and opened the door to big data for everyone (hadoop)." —— [Rajesh Rao][57]

> "His creation of and work on Lucene and Hadoop (among other projects) has created a tremendous amount of wealth and employment in the world..." —— [Amit Nithianandan][58]
### Sanjay Ghemawat ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_sanjay_ghemawat-620x465-100502876-orig.jpg)

*Image credit: [Association for Computing Machinery][59]*

**Claim to fame: Key Google architect**

Bio: [Helped design and implement some of Google's large distributed systems][60], including MapReduce, BigTable, Spanner, and the Google File System. [Created Unix's ical][61] calendaring system. Elected to the [National Academy of Engineering][62] in 2009. Won the [ACM-Infosys Foundation Award in the Computing Sciences][63] in 2012.

Comments:

> "Jeff Dean's wingman." —— [Ahmet Alp Balkan][64]
### Jeff Dean ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jeff_dean-620x465-100502866-orig.jpg)

*Image credit: [Google][65]*

**Claim to fame: The brains behind Google's search indexing**

Bio: Helped design and implement [many of Google's large distributed systems][66], including web crawling, indexing and search, AdSense, MapReduce, BigTable, and Spanner. Elected to the [National Academy of Engineering][67] in 2009. Won the ACM [SIGOPS Mark Weiser Award][68] and the [ACM-Infosys Foundation Award in the Computing Sciences][69] in 2012.

Comments:

> "... brought breakthroughs in data mining (GFS, MapReduce, BigTable)." —— [Natu Lauchande][70]

> "... designed, built, and deployed MapReduce and BigTable, among countless other things" —— [Erik Goldman][71]
### Linus Torvalds ###

![](http://images.techhive.com/images/article/2015/09/linus_torvalds-620x465-100611765-orig.jpg)

*Image credit: [Krd CC BY-SA 4.0][72]*

**Claim to fame: Creator of Linux**

Bio: Created the [Linux kernel][73] and [Git][74], the open source version control system. Winner of numerous awards and honors, including the [EFF Pioneer Award][75] in 1998, the [British Computer Society's Lovelace Medal][76] in 2000, the [Millennium Technology Prize][77] in 2012, and the [IEEE Computer Society's Computer Pioneer Award][78] in 2014. Also inducted into the [Computer History Museum Hall of Fellows][79] in 2008 and the [Internet Hall of Fame][80] in 2012.

Comments:

> "He wrote the Linux kernel in just a few years, while the GNU Hurd (the kernel developed by GNU) has spent 25 years in development with no sign of being ready to release. His achievement is what brought hope." —— [Erich Ficker][81]

> "Torvalds is probably the programmer's programmer." —— [Dan Allen][82]

> "He is truly amazing." —— [Alok Tripathy][83]
### John Carmack ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_john_carmack-620x465-100502867-orig.jpg)

*Image credit: [QuakeCon CC BY 2.0][84]*

**Claim to fame: Creator of Doom**

Bio: Co-founder of id Software, where he built genre-defining first-person shooters such as Wolfenstein 3D, Doom, and Quake. Pioneered groundbreaking computer graphics techniques such as [adaptive tile refresh][86], [binary space partitioning][87], and surface caching. Inducted into the [Academy of Interactive Arts and Sciences Hall of Fame][88] in 2001, won engineering [Emmy awards][89] in 2007 and 2008, and received the Lifetime Achievement Award from the [Game Developers Choice Awards][90] in 2010.

Comments:

> "He wrote his first rendering engine before he was 20 years old. The guy is a genius. I wish I were a quarter as talented." —— [Alex Dolinsky][91]

> "... Wolfenstein 3D, Doom, and Quake were revolutionary at the time and have influenced a generation of game designers." —— [dniblock][92]

> "He can write basically anything in a weekend...." —— [Greg Naughton][93]

> "He is the Mozart of computer coding..." —— [Chris Morris][94]
### Fabrice Bellard ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_fabrice_bellard-620x465-100502870-orig.jpg)

*Image credit: [Duff][95]*

**Claim to fame: Creator of QEMU**

Bio: Created [a series of well-known open source programs][96], including QEMU, a platform for hardware emulation and virtualization; FFmpeg, for handling multimedia data; the Tiny C Compiler; and LZEXE, an executable compressor. [Winner of the Obfuscated C Code Contest][97] in 2000 and 2001, and winner of the [Google-O'Reilly Open Source Award][98] in 2011. Former world record holder for [computing the most digits of pi][99].

Comments:

> "I find everything Fabrice Bellard does remarkable and impressive." —— [raphinou][100]

> "Fabrice Bellard is the most productive programmer in the world..." —— [Pavan Yara][101]

> "He is the Nikola Tesla of software engineering." —— [Michael Valladolid][102]

> "He has been prolific since the 1980s, producing a steady stream of successful projects." —— [Michael Biggins][103]
### Jon Skeet ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jon_skeet-620x465-100502863-orig.jpg)

*Image credit: [Craig Murphy CC BY 2.0][104]*

**Claim to fame: Legendary Stack Overflow contributor**

Bio: A Google engineer and author of [C# in Depth][105]. Holds the [highest reputation ever earned on Stack Overflow][106], answering an average of 390 questions per month.

Comments:

> "He doesn't need a debugger; a single stare from him is enough for the bug to confess and reveal itself." —— [Steven A. Lowe][107]

> "If his code fails to compile, the compiler should apologize." —— [Dan Dyer][108]

> "He doesn't need coding conventions; his code is the convention." —— [Anonymous][109]
### Adam D'Angelo ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_image_adam_dangelo-620x465-100502875-orig.jpg)

*Image credit: [Philip Neustrom CC BY 2.0][110]*

**Claim to fame: Co-founder of Quora**

Bio: As a Facebook engineer, he [built the foundation of its news feed feature][111]. He had risen to become Facebook's CTO and VP of engineering by the time he left to co-found Quora. [Finished eighth in the USA Computing Olympiad][112] as a high-school student in 2001. Member of the [California Institute of Technology team that won a silver medal][113] at the ACM International Collegiate Programming Contest in 2004, and a finalist in Topcoder's collegiate [Algorithm Coding Competition][114] in 2005.

Comments:

> "An all-around programming virtuoso." —— [Anonymous][115]

> "For every good thing I do he has like six." —— [Mark Zuckerberg][116]
### Petr Mitrichev ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_petr_mitrichev-620x465-100502869-orig.jpg)

*Image credit: [Facebook][117]*

**Claim to fame: One of the most competitive programmers of all time**

Bio: [Two-time gold medalist][118] in the International Olympiad in Informatics (2000, 2002). In 2006 he [won the Google Code Jam][119] as well as the [TopCoder Open algorithm competition][120]. Also a two-time winner of the Facebook Hacker Cup ([2011][121], [2013][122]). At the time of writing, he was [ranked second among TopCoder's algorithm competitors][123] (handle: Petr) and [second on Codeforces][124] as well.

Comments:

> "He is an idol of competitive programmers, even here in India..." —— [Kavish Dwivedi][125]
### Gennady Korotkevich ###

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_gennady_korot-620x465-100502864-orig.jpg)

*Image credit: [Ishandutta2007 CC BY-SA 3.0][126]*

**Claim to fame: Competitive programming prodigy**

Bio: The youngest-ever participant (at age 11) in the International Olympiad in Informatics, and a [six-time gold medalist][127] (2007-2012). Member of the [winning team][128] at the 2013 ACM International Collegiate Programming Contest and winner of the [2014 Facebook Hacker Cup][129]. At the time of writing, he was [ranked first on Codeforces][130] (handle: Tourist) and [first on TopCoder][131].

Comments:

> "A programming prodigy!" —— [Prateek Joshi][132]

> "Gennady is awesome, and the reason I have a strong developer team in Belarus." —— [Chris Howard][133]

> "Tourist is genius." —— [Nuka Shrinivas Rao][134]

--------------------------------------------------------------------------------

via: http://www.itworld.com/article/2823547/enterprise-software/158256-superclass-14-of-the-world-s-best-living-programmers.html#slide1

Author: [Phil Johnson][a]
Translator: [martin2011qi](https://github.com/martin2011qi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.itworld.com/author/Phil-Johnson/
[1]:https://www.flickr.com/photos/tombullock/15713223772
[2]:https://commons.wikimedia.org/wiki/File:Margaret_Hamilton_in_action.jpg
[3]:http://klabs.org/home_page/hamilton.htm
[4]:https://www.youtube.com/watch?v=DWcITjqZtpU&feature=youtu.be&t=3m12s
[5]:http://www.htius.com/Articles/r12ham.pdf
[6]:http://www.htius.com/Articles/Inside_DBTF.htm
[7]:http://www.nasa.gov/home/hqnews/2003/sep/HQ_03281_Hamilton_Honor.html
[8]:http://www.nasa.gov/50th/50th_magazine/scientists.html
[9]:https://books.google.com/books?id=JcmV0wfQEoYC&pg=PA321&lpg=PA321&dq=ada+lovelace+award+1986&source=bl&ots=qGdBKsUa3G&sig=bkTftPAhM1vZ_3VgPcv-38ggSNo&hl=en&sa=X&ved=0CDkQ6AEwBGoVChMI3paoxJHWxwIVA3I-Ch1whwPn#v=onepage&q=ada%20lovelace%20award%201986&f=false
[10]:http://history.nasa.gov/alsj/a11/a11Hamilton.html
[11]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrswof
[12]:http://qr.ae/RFEZLk
[13]:http://qr.ae/RFEZUn
[14]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrv9u9
[15]:https://www.flickr.com/photos/44451574@N00/5347112697
[16]:http://cs.stanford.edu/~uno/taocp.html
[17]:http://awards.acm.org/award_winners/knuth_1013846.cfm
[18]:http://amturing.acm.org/award_winners/knuth_1013846.cfm
[19]:http://www.nsf.gov/od/nms/recip_details.jsp?recip_id=198
[20]:http://www.ieee.org/documents/von_neumann_rl.pdf
[21]:http://www.computerhistory.org/fellowawards/hall/bios/Donald,Knuth/
[22]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answers/3063
[23]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Jaap-Weel
[24]:http://qr.ae/RFE94x
[25]:http://amturing.acm.org/photo/thompson_4588371.cfm
[26]:https://www.youtube.com/watch?v=JoVQTPbD6UY
[27]:https://www.bell-labs.com/usr/dmr/www/bintro.html
[28]:http://doc.cat-v.org/bell_labs/utf-8_history
[29]:http://c2.com/cgi/wiki?EdIsTheStandardTextEditor
[30]:http://amturing.acm.org/award_winners/thompson_4588371.cfm
[31]:http://www.computer.org/portal/web/awards/cp-thompson
[32]:http://www.uspto.gov/about/nmti/recipients/1998.jsp
[33]:http://www.computerhistory.org/fellowawards/hall/bios/Ken,Thompson/
[34]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Pete-Prokopowicz-1
[35]:http://qr.ae/RFEWBY
[36]:https://groups.google.com/forum/#!msg/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J
[37]:http://www.emacswiki.org/emacs/RichardStallman
[38]:https://www.gnu.org/gnu/thegnuproject.html
[39]:http://www.emacswiki.org/emacs/FreeSoftwareFoundation
[40]:http://awards.acm.org/award_winners/stallman_9380313.cfm
[41]:https://w2.eff.org/awards/pioneer/1998.php
[42]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton/comment/4146397
[43]:http://qr.ae/RFEaib
[44]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Marko-Poutiainen
[45]:http://qr.ae/RFEUqp
[46]:https://www.flickr.com/photos/begley/2979906130
[47]:http://www.taoyue.com/tutorials/pascal/history.html
[48]:http://c2.com/cgi/wiki?AndersHejlsberg
[49]:http://www.microsoft.com/about/technicalrecognition/anders-hejlsberg.aspx
[50]:http://www.drdobbs.com/windows/dr-dobbs-excellence-in-programming-award/184404602
[51]:http://qr.ae/RFEZrv
[52]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Stefan-Kiryazov
[53]:https://www.flickr.com/photos/vonguard/4076389963/
[54]:http://www.wizards-of-os.org/archiv/sprecher/a_c/doug_cutting.html
[55]:http://hadoop.apache.org/
[56]:https://www.linkedin.com/in/cutting
[57]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Shalin-Shekhar-Mangar/comment/2293071
[58]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answer/Amit-Nithianandan
[59]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm
[60]:http://research.google.com/pubs/SanjayGhemawat.html
[61]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat
[62]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009
[63]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm
[64]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat/answer/Ahmet-Alp-Balkan
[65]:http://research.google.com/people/jeff/index.html
[66]:http://research.google.com/people/jeff/index.html
[67]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009
[68]:http://news.cs.washington.edu/2012/10/10/uw-cse-ph-d-alum-jeff-dean-wins-2012-sigops-mark-weiser-award/
[69]:http://awards.acm.org/award_winners/dean_2879385.cfm
[70]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Natu-Lauchande
[71]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Cosmin-Negruseri/comment/28399
[72]:https://commons.wikimedia.org/wiki/File:LinuxCon_Europe_Linus_Torvalds_05.jpg
[73]:http://www.linuxfoundation.org/about/staff#torvalds
[74]:http://git-scm.com/book/en/Getting-Started-A-Short-History-of-Git
[75]:https://w2.eff.org/awards/pioneer/1998.php
[76]:http://www.bcs.org/content/ConWebDoc/14769
[77]:http://www.zdnet.com/blog/open-source/linus-torvalds-wins-the-tech-equivalent-of-a-nobel-prize-the-millennium-technology-prize/10789
[78]:http://www.computer.org/portal/web/pressroom/Linus-Torvalds-Named-Recipient-of-the-2014-IEEE-Computer-Society-Computer-Pioneer-Award
[79]:http://www.computerhistory.org/fellowawards/hall/bios/Linus,Torvalds/
[80]:http://www.internethalloffame.org/inductees/linus-torvalds
[81]:http://qr.ae/RFEeeo
[82]:http://qr.ae/RFEZLk
[83]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Alok-Tripathy-1
[84]:https://www.flickr.com/photos/quakecon/9434713998
[85]:http://doom.wikia.com/wiki/John_Carmack
[86]:http://thegamershub.net/2012/04/gaming-gods-john-carmack/
[87]:http://www.shamusyoung.com/twentysidedtale/?p=4759
[88]:http://www.interactive.org/special_awards/details.asp?idSpecialAwards=6
[89]:http://www.itworld.com/article/2951105/it-management/a-fly-named-for-bill-gates-and-9-other-unusual-honors-for-tech-s-elite.html#slide8
[90]:http://www.gamechoiceawards.com/archive/lifetime.html
[91]:http://qr.ae/RFEEgr
[92]:http://www.itworld.com/answers/topic/software/question/whos-best-living-programmer#comment-424562
[93]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton
[94]:http://money.cnn.com/2003/08/21/commentary/game_over/column_gaming/
[95]:http://dufoli.wordpress.com/2007/06/23/ammmmaaaazing-night/
[96]:http://bellard.org/
[97]:http://www.ioccc.org/winners.html#B
[98]:http://www.oscon.com/oscon2011/public/schedule/detail/21161
[99]:http://bellard.org/pi/pi2700e9/
[100]:https://news.ycombinator.com/item?id=7850797
[101]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/1718701
[102]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/2454450
[103]:http://qr.ae/RFEjhZ
[104]:https://www.flickr.com/photos/craigmurphy/4325516497
[105]:http://www.amazon.co.uk/gp/product/1935182471?ie=UTF8&tag=developetutor-21&linkCode=as2&camp=1634&creative=19450&creativeASIN=1935182471
[106]:http://stackexchange.com/leagues/1/alltime/stackoverflow
[107]:http://meta.stackexchange.com/a/9156
[108]:http://meta.stackexchange.com/a/9138
[109]:http://meta.stackexchange.com/a/9182
[110]:https://www.flickr.com/photos/philipn/5326344032
[111]:http://www.crunchbase.com/person/adam-d-angelo
[112]:http://www.exeter.edu/documents/Exeter_Bulletin/fall_01/oncampus.html
[113]:http://icpc.baylor.edu/community/results-2004
[114]:https://www.topcoder.com/tc?module=Static&d1=pressroom&d2=pr_022205
[115]:http://qr.ae/RFfOfe
[116]:http://www.businessinsider.com/in-new-alleged-ims-mark-zuckerberg-talks-about-adam-dangelo-2012-9#ixzz369FcQoLB
[117]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1
[118]:http://stats.ioinformatics.org/people/1849
[119]:http://googlepress.blogspot.com/2006/10/google-announces-winner-of-global-code_27.html
[120]:http://community.topcoder.com/tc?module=SimpleStats&c=coder_achievements&d1=statistics&d2=coderAchievements&cr=10574855
[121]:https://www.facebook.com/notes/facebook-hacker-cup/facebook-hacker-cup-finals/208549245827651
[122]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1
[123]:http://community.topcoder.com/tc?module=AlgoRank
[124]:http://codeforces.com/ratings
[125]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Venkateswaran-Vicky/comment/1960855
[126]:http://commons.wikimedia.org/wiki/File:Gennady_Korot.jpg
[127]:http://stats.ioinformatics.org/people/804
[128]:http://icpc.baylor.edu/regionals/finder/world-finals-2013/standings
[129]:https://www.facebook.com/hackercup/posts/10152022955628845
[130]:http://codeforces.com/ratings
[131]:http://community.topcoder.com/tc?module=AlgoRank
|
||||
[132]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi
|
||||
[133]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4720779
|
||||
[134]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4880549
|
||||
[135]:http://commons.wikimedia.org/wiki/File:Jielbeaumadier_richard_stallman_2010.jpg
|
@ -0,0 +1,128 @@
一位开发者的 Linux 容器之旅
================================================================================
![](https://deis.com/images/blog-images/dev_journey_0.jpg)

我告诉你一个秘密:使我的应用程序进入全世界的 DevOps、云计算之类的东西对我来说仍然有一点神秘。但随着时间流逝,我意识到理解大规模的机器增减和应用程序部署的来龙去脉,对一个开发者来说是非常重要的知识。这类似于成为一个专业的音乐家:你当然需要知道如何使用你的乐器,但是,如果你不知道一个录音棚是如何工作的,或者如何融入一个交响乐团,那么你在这样的环境中工作就会变得非常困难。

在软件开发的世界里,使你的代码进入我们的更大的世界如同把它编写出来一样重要。DevOps 很重要,而且是非常重要。

因此,为了弥合开发(Dev)和部署(Ops)之间的空隙,我会从头开始介绍容器技术。为什么是容器?因为有强力的证据表明,容器是机器抽象的下一步:使计算机成为场所而不再是一个东西。理解容器是我们共同的旅程。

在这篇文章中,我会介绍容器化(containerization)背后的概念,包括容器和虚拟机的区别,以及容器构建背后的逻辑和它是如何适应应用程序架构的。我会探讨轻量级的 Linux 操作系统是如何适应容器生态系统的,还会讨论使用镜像创建可重用的容器。最后我会介绍容器集群如何使你的应用程序可以快速扩展。

在后面的文章中,我会一步一步向你介绍容器化一个示例应用程序的过程,以及如何为你的应用程序容器创建一个托管集群。同时,我会向你展示如何使用 Deis 将你的示例应用程序部署到你本地系统以及多种云供应商的虚拟机上。

让我们开始吧。

### 虚拟机的好处 ###

为了理解容器如何适应事物的发展,你首先要了解容器的前任:虚拟机。

[虚拟机][1](virtual machine,VM)是运行在物理宿主机上的软件抽象。配置一个虚拟机就像是购买一台计算机:你需要定义你想要的 CPU 数目、RAM 和磁盘存储容量。配置好了机器后,你为它加载操作系统,以及你想让虚拟机支持的任何服务器或者应用程序。

虚拟机允许你在一台硬件主机上运行多个模拟计算机。这是一个简单的示意图:

![](https://deis.com/images/blog-images/dev_journey_1.png)

虚拟机可以让你充分利用你的硬件资源。你可以购买一台巨大的、轰隆作响的机器,然后在上面运行多个虚拟机。你可以有一个数据库虚拟机,以及很多运行相同版本定制应用程序的虚拟机所构成的集群。你可以在有限的硬件资源上获得很大的扩展能力。如果你觉得你需要更多的虚拟机,而且你的宿主硬件还有容量,你可以添加任何你需要的虚拟机。或者,如果你不再需要一个虚拟机,你可以关闭该虚拟机并删除虚拟机镜像。

### 虚拟机的局限 ###

但是,虚拟机确实有局限。

如上面所示,假如你在一个主机上创建了三个虚拟机。主机有 12 个 CPU、48 GB 内存和 3 TB 的存储空间。每个虚拟机配置为有 4 个 CPU、16 GB 内存和 1 TB 存储空间。到现在为止,一切都还好,主机有这个容量。

但这里有个缺陷。所有分配给一个虚拟机的资源,无论是什么,都是专有的。每台机器都分配了 16 GB 的内存。但是,如果第一个虚拟机永远不会使用超过 1 GB 的分配内存,剩余的 15 GB 就会被浪费在那里。如果第三个虚拟机只使用分配的 1 TB 存储空间中的 100 GB,其余的 900 GB 就成为浪费空间。

这里没有资源的流动。每台虚拟机拥有分配给它的所有资源。因此,在某种程度上我们又回到了虚拟机之前的时代,把大部分金钱花费在未使用的资源上。

虚拟机还有*另一个*缺陷:让它们跑起来需要很长时间。如果你处于基础设施需要快速增长的情形,即使增加虚拟机是自动的,你仍然会发现你的很多时间都浪费在等待机器上线上。

### 来到:容器 ###

概念上来说,容器是一个 Linux 进程,Linux 认为它只是一个运行中的进程。该进程只知道它被告知的东西。另外,在容器化方面,该容器进程也分配了它自己的 IP 地址。这点很重要,重要的事情讲三遍,这是第二遍。**在容器化方面,容器进程有它自己的 IP 地址。** 一旦给予了一个 IP 地址,该进程就是宿主网络中可识别的资源。然后,你可以在容器管理器上运行命令,使容器 IP 映射到主机中能访问公网的 IP 地址。建立了该映射,无论出于什么意图和目的,容器就是网络上一个可访问的独立机器,从概念上类似于虚拟机。
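这个“容器 IP 映射到宿主机端口”的过程,可以用 Docker 的命令行粗略感受一下(镜像名 nginx 和端口号都是假设的示例;脚本默认 DRY_RUN=1,只打印将要执行的命令,真正使用时把 DRY_RUN 设为 0 即可):

```shell
# DRY_RUN=1 时只打印命令而不实际调用 docker,便于安全地演示
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# 启动一个容器,并把宿主机的 8080 端口映射到容器进程的 80 端口
run docker run -d --name web -p 8080:80 nginx

# 查看该容器进程在容器网络中分配到的独立 IP 地址
run docker inspect -f '{{ .NetworkSettings.IPAddress }}' web
```

建立了这样的映射之后,访问宿主机的 8080 端口就相当于访问容器内的 80 端口,容器在网络上看起来就像一台独立的机器。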
这是第三遍:容器是拥有独立 IP 地址、从而使其成为网络上可识别的独立 Linux 进程。下面是一个示意图:

![](https://deis.com/images/blog-images/dev_journey_2.png)

容器/进程以动态、合作的方式共享主机上的资源。如果容器只需要 1GB 内存,它就只会使用 1GB;如果它需要 4GB,就会使用 4GB。CPU 和存储空间的利用也是如此。CPU、内存和存储空间的分配是动态的,和典型虚拟机的静态方式不同。所有这些资源的共享都由容器管理器来管理。

最后,容器能非常快速地启动。

因此,容器的好处是:**你获得了虚拟机独立和封装的好处,而抛弃了静态资源专有的缺陷**。另外,由于容器能快速加载到内存,在扩展到多个容器时你能获得更好的性能。

### 容器托管、配置和管理 ###

托管容器的计算机运行着只保留主要部分的精简版 Linux。现在,宿主计算机流行的底层操作系统是之前提到的 [CoreOS][2],当然还有其它,例如 [Red Hat Atomic Host][3] 和 [Ubuntu Snappy][4]。

该 Linux 操作系统被所有容器所共享,减少了容器所占空间的重复和冗余。每个容器只包括该容器特有的部分。下面是一个示意图:

![](https://deis.com/images/blog-images/dev_journey_3.png)

你可以用它所需的组件来配置容器。一个容器组件被称为**层(layer)**。层就是一个容器镜像(你会在后面的部分看到更多关于容器镜像的介绍)。你从一个基本层开始,这通常是你想在容器中使用的操作系统。(容器管理器只提供你所要的操作系统在宿主操作系统中不存在的部分。)当你构建你的容器配置时,你需要添加层,例如你想要添加网络服务器时,这个层就是 Apache;如果容器要运行脚本,则需要添加 PHP 或 Python 运行时环境。

分层非常灵活。如果应用程序或者服务容器需要 PHP 5.2 版本,你相应地配置该容器即可。如果你有另一个应用程序或者服务需要 PHP 5.6 版本,没问题,你可以使用 PHP 5.6 配置该容器。不像虚拟机那样,更改一个版本的运行时依赖需要经过大量的配置和安装过程;对于容器,你只需要在容器配置文件中重新定义层。

所有上面描述的容器的各种功能都由一个称为容器管理器(container manager)的软件控制。现在,最流行的容器管理器是 [Docker][5] 和 [Rocket][6]。上面的示意图展示了以 Docker 为容器管理器、CentOS 为宿主操作系统的主机情景。

### 容器由镜像构成 ###

当你需要将应用程序构建到容器时,你就要编译镜像。镜像代表了你的容器完成其工作所需的容器模板(镜像里面还可以包含镜像,如下图所示)。镜像存储在注册库(registry)中,注册库通过网络访问。

从概念上讲,注册库类似于一个使用 Java 的人眼中的 [Maven][7] 仓库,或使用 .NET 的人眼中的 [NuGet][8] 服务器。你会创建一个列出了你应用程序所需镜像的容器配置文件。然后你使用容器管理器创建一个容器,它包括了你的应用程序代码,以及从容器注册库中下载的部分资源。例如,如果你的应用程序包括了一些 PHP 文件,你的容器配置文件会声明你要从注册库中获取 PHP 运行时环境;另外,你还要使用容器配置文件声明需要复制到容器文件系统中的 .php 文件。容器管理器会把你应用程序所需的所有东西封装为一个独立容器,该容器将会在容器管理器的管理下运行在宿主计算机上。
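作为一个直观的参考,这种“分层”的容器配置文件可以用 Docker 的 Dockerfile 粗略示意一下(下面的基础镜像、软件包和文件名都是假设的示例,并非原文的具体实现):

```dockerfile
# 基础层:想在容器中使用的操作系统
FROM ubuntu:14.04

# 追加一层:Apache 网络服务器与 PHP 运行时环境
RUN apt-get update && apt-get install -y apache2 php5

# 把应用程序的 .php 文件复制进容器的文件系统
COPY index.php /var/www/html/

# 容器启动时运行的进程
CMD ["apache2ctl", "-D", "FOREGROUND"]
```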
这是容器创建背后概念的示意图:

![](https://deis.com/images/blog-images/dev_journey_4.png)

让我们仔细看看这个示意图。

(1)代表一个容器配置文件,它定义了你的容器所需的东西以及容器如何构建。当你在主机上运行容器时,容器管理器会读取该配置文件,从云上的注册库中获取你需要的容器镜像,(2)将镜像作为层添加到你的容器中。

另外,如果组成镜像还需要其它镜像,容器管理器也会获取这些镜像并把它们作为层添加进来。(3)容器管理器会将需要的文件复制到容器中。

如果你使用了配置(provisioning)服务,例如 [Deis][9],你刚刚创建的应用程序容器就可以做成镜像,(4)配置服务会将它部署到你选择的云供应商上,比如 AWS 和 Rackspace 这样的云供应商。

### 集群中的容器 ###

好了。这是一个很好的例子,说明了容器比虚拟机提供了更好的配置灵活性和资源利用率。但是,这并不是全部。

容器真正的灵活是在集群中。记住,每个容器有一个独立的 IP 地址,因此可以把它放到负载均衡器后面,而这就把容器的能力提升到了一个新的层面。

你可以在一个负载均衡容器后运行容器集群,以获得更高的性能和高可用计算。这是一个例子:

![](https://deis.com/images/blog-images/dev_journey_5.png)

假如你开发了一个资源密集型的应用程序,例如图片处理。使用类似 [Deis][9] 的容器配置技术,你可以创建一个容器镜像,它包括了你的图片处理程序以及该程序需要的所有资源。然后,你可以在主机的负载均衡器下部署一个或多个容器实例。一旦创建了容器镜像,你可以随时使用它;当系统繁忙时,可以添加更多的容器实例来满足手头的工作。

这里还有更多好消息。每次添加实例到环境中时,你不需要手动配置负载均衡器来接纳你的容器;你可以使用服务发现技术让容器告知均衡器它已可用。然后,一旦获知,均衡器就会将流量分发到新的节点。

### 全部放在一起 ###

容器技术完善了虚拟机缺失的部分。类似 CoreOS、RHEL Atomic 和 Ubuntu Snappy 这样的宿主操作系统,与类似 Docker 和 Rocket 这样的容器管理技术结合起来,使得容器变得日益流行。

尽管容器变得越来越普遍,掌握它们还是需要一段时间。但是,一旦你懂得了它们的窍门,你可以使用类似 [Deis][9] 这样的配置技术使容器创建和部署变得更加简单。

从概念上理解容器和进一步实际使用它们完成工作一样重要。但我认为,不实际动手把想法付诸实践,概念也难以理解。因此,我们该系列的下一阶段就是:创建一些容器。

--------------------------------------------------------------------------------

via: https://deis.com/blog/2015/developer-journey-linux-containers

作者:[Bob Reselman][a]
译者:[ictlyh](http://www.mutouxiaogui.cn/blog/)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://deis.com/blog
[1]:https://en.wikipedia.org/wiki/Virtual_machine
[2]:https://coreos.com/using-coreos/
[3]:http://www.projectatomic.io/
[4]:https://developer.ubuntu.com/en/snappy/
[5]:https://www.docker.com/
[6]:https://coreos.com/blog/rocket/
[7]:https://en.wikipedia.org/wiki/Apache_Maven
[8]:https://www.nuget.org/
[9]:http://deis.com/learn
@ -1,26 +1,26 @@
修复 Shell 脚本在 Ubuntu 中的默认打开方式
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Run-Shell-Script-on-Double-Click.jpg)

当你双击一个脚本(.sh 文件)的时候,你想要做的是什么?通常的想法是执行它。但是在 Ubuntu 下面却不是这样,或者我应该更确切地说,在 Files(Nautilus)中不是这样。你可能会疯狂地大叫“运行文件,运行文件”,但是文件没有运行,而是用 Gedit 打开了。

我知道你也许会说:文件有可执行权限么?我会说:是的。脚本有可执行权限,但是当我双击它的时候,它还是用文本编辑器打开了。我不希望这样,如果你遇到了同样的问题,我想你也许也不希望这样。
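顺带一提,“脚本有可执行权限”这件事本身可以在终端里快速验证。下面是一个自包含的小演示(/tmp/hello.sh 只是示例路径):

```shell
# 创建一个简单的 shell 脚本
cat > /tmp/hello.sh <<'EOF'
#!/bin/sh
echo "hello"
EOF

# 赋予可执行权限,然后直接运行
chmod +x /tmp/hello.sh
/tmp/hello.sh
```

在终端里它会输出 hello;本文要解决的,正是让双击时也能得到同样的“执行”行为。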
我知道你或许已经被建议在终端下面执行,我知道这个可行,但是这不是在 GUI 下不能运行的借口,是么?

这篇教程中,我们会看到**如何在双击后运行 shell 脚本**。

#### 修复在 Ubuntu 中 shell 脚本用文本编辑器打开的方式 ####

shell 脚本用文本编辑器打开的原因是 Files(Ubuntu 中的文件管理器)中的默认行为设置。在更早的版本中,它或许会询问你是要运行文件还是用编辑器打开;而这个默认行为在新的版本中被修改了。

要修复这个,进入文件管理器,并在菜单中点击**选项(Preferences)**:

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/execute-shell-program-ubuntu-1.png)

接下来在**文件选项(Files Preferences)**中进入**行为(Behavior)**标签,你会看到**可执行的文本文件(Executable Text Files)**选项。

默认情况下,它被设置成“在打开时显示文本文件(View executable text files when they are opened)”。我建议你把它改成“每次询问(Ask each time)”,这样你可以选择是执行还是编辑;当然,你也可以选择“在打开时运行可执行文本文件(Run executable text files when they are opened)”。你可以自行选择。

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/execute-shell-program-ubuntu-2.png)
@ -32,7 +32,7 @@ via: http://itsfoss.com/shell-script-opens-text-editor/
作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -0,0 +1,44 @@
好奇 Linux?试试云端的 Linux 桌面
================================================================================
Linux 在桌面操作系统市场上只占据了非常小的份额,从目前的调查结果来看,估计只有 2% 的市场份额;对比来看,丰富多变的 Windows 系统占据了接近 90% 的市场份额。对于 Linux 来说,要挑战 Windows 在桌面操作系统市场的垄断,需要有一个让用户学习不同操作系统的简单方式。如果你相信传统的 Windows 用户会再买一台机器来使用 Linux,那你就太天真了。我们也只能试想让用户重新分区、设置引导程序来使用双系统,或者干脆跳过所有这些步骤,寻找一个最简单的方法。

![](http://www.linuxlinks.com/portal/content/reviews/Cloud/CloudComputing.png)

我们实验过一系列让用户无风险地试用 Linux 的方法,不涉及任何分区管理,包括 CD/DVD 光盘、USB 存储棒和桌面虚拟化软件等等。通过实验,我强烈推荐使用 VMware 的 VMware Player 或者 Oracle VirtualBox 虚拟机,对于桌面操作系统或者便携式电脑的用户,这是一种安装运行多操作系统的相对简单而且免费的方法。每一台虚拟机和其他虚拟机相隔离,但是共享 CPU、内存、网络接口等等。虚拟机仍需要一定的资源来安装运行 Linux,也需要一台相当强劲的主机。但对于一个好奇心不大的人,这样做实在是太麻烦了。

要打破用户传统的使用观念是非常困难的。很多 Windows 用户可以尝试使用 Linux 提供的自由软件,但也有太多要学习的 Linux 系统知识,这会花掉他们相当一部分时间才能习惯 Linux 的工作方式。

当然了,对于一个第一次在 Linux 上操作的新手,有没有一个更高效的方法呢?答案是肯定的,接着往下看看云实验平台。

### LabxNow ###

![LabxNow](http://www.linuxlinks.com/portal/content/reviews/Cloud/Screenshot-LabxNow.png)

LabxNow 提供了一个免费服务,方便广大用户通过浏览器来访问远程 Linux 桌面。开发者将其打造为一个用户个人远程实验室(用户可以在系统里运行、开发任何程序),用户可以在任何地方通过互联网登入远程实验室。

这项服务现在可以为个人用户提供 2 核处理器、4GB 内存和 10GB 固态硬盘的环境,运行在一台 4 颗 AMD 6272 处理器、128GB 内存的服务器上。

#### 配置参数: ####

- 系统镜像:基于 Ubuntu 14.04 的 Xfce 4.10、RHEL 6.5、CentOS(Gnome 桌面)、Oracle
- 硬件:CPU - 1 核或者 2 核;内存:512MB、1GB、2GB 或 4GB
- 超快的网络数据传输
- 可以运行在所有流行的浏览器上
- 可以安装、运行任意程序,这是一个非常棒的方法,可以随意做实验、学习你想学的任何知识,没有一点风险
- 添加、删除、管理、定制虚拟机非常方便
- 支持虚拟机共享,远程桌面

你所需要的只是一台有稳定网络的设备。不用担心虚拟专用服务器(VPS)、域名或者硬件带来的高费用。LabxNow 提供了一个在 Ubuntu、RHEL 和 CentOS 上做实验的非常好的方法。它给 Windows 用户提供一个极好的环境,让他们探索美妙的 Linux 世界;说得深入一点,它可以让用户随时随地在里面工作,而没有要在每台设备上安装 Linux 的压力。点击下面这个链接进入 [www.labxnow.org/labxweb/][1]。

另外还有一些其它服务(大部分是收费服务)可以让用户使用 Linux,包括 Cloudsigma 环境的 7 天使用权和 Icebergs.io(通过 HTML5 实现 root 权限)。但是现在,我推荐 LabxNow。

--------------------------------------------------------------------------------

来自:http://www.linuxlinks.com/article/20151003095334682/LinuxCloud.html

译者:[sevenot](https://github.com/sevenot)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://www.labxnow.org/labxweb/
@ -1,22 +1,25 @@
命令行下使用 Mop 监视股票价格
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-featured-new.jpg)

有一份隐性收入通常很不错,特别是当你可以轻松地协调业余和全职工作时。如果你的日常工作使用了联网的电脑,交易股票就是一个获取额外收入的很流行的选项。

但是目前只有很少的股票监视软件可以用在 Linux 上,其中大多数还是基于图形界面的。如果你是一个 Linux 专家,并且大量的工作时间是在没有图形界面的电脑上呢?你是不是就没办法了?不,这里还有一些命令行下的股票追踪工具,Mop 就是其中之一,也是本文要聊一聊的工具。

### Mop ###

Mop,如上所述,是一个命令行下连续显示和更新美股和独立股票信息的工具。它使用 GO 语言实现,是 Michael Dvorkin 的智慧结晶。

### 下载安装 ###

因为这个项目是使用 GO 实现的,所以你要做的第一步是在你的计算机上安装这种编程语言。下面就是在 Debian 系的系统(比如 Ubuntu)上安装 GO 的步骤:

    sudo apt-get install golang
    mkdir ~/workspace
    echo 'export GOPATH="$HOME/workspace"' >> ~/.bashrc
    source ~/.bashrc

GO 安装好后的下一步是安装 Mop 工具和配置环境,你要做的是运行下面的命令:

    sudo apt-get install git
    go get github.com/michaeldv/mop
@ -24,12 +27,13 @@ GO 安装好后的下一步是安装Mop 工具和配置环境,你要做的是
    make install
    export PATH="$PATH:$GOPATH/bin"

完成之后就可以运行下面的命令执行 Mop:

    mop
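如果提示找不到 `mop` 命令,可以先确认环境变量是否设置正确(假设使用上面默认的 ~/workspace 作为 GOPATH):

```shell
# 设置并检查 GOPATH 与 PATH
export GOPATH="$HOME/workspace"
export PATH="$PATH:$GOPATH/bin"

echo "$GOPATH"
# PATH 里应当能找到 $GOPATH/bin 这一项
echo "$PATH" | tr ':' '\n' | grep "workspace/bin"
```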
### 特性 ###

当你第一次运行 Mop 时,你会看到类似下面的输出信息:

![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-first-run.jpg)
@ -37,19 +41,19 @@ GO 安装好后的下一步是安装Mop 工具和配置环境,你要做的是
### 添加删除股票 ###

Mop 允许你轻松地从输出列表上添加/删除个股信息。要添加,你只需要按“+”并输入股票名称。举个例子,下图就是添加 Facebook(FB)到列表里。

![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-add-stock.png)

我按下了“+”键,就出现了包含文本“Add tickers:”的一列,提示我添加股票名称。我添加了 FB 然后按下回车。输出列表更新了,我添加的新股票也出现在列表里了:

![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-stock-added.png)

类似的,你可以使用“-”键并提供股票名称来删除一个股票。

#### 根据价格分组 ####

还有一个把股票分组的办法:依据它们的股价升跌。你所要做的就是按下“g”键。接下来,股票会分组显示:上升的在一起用绿色字体显示,而下跌的股票用黑色字体显示。

如下所示:
@ -57,7 +61,7 @@ Mop 允许你轻松的从输出列表上添加/删除个股信息。要添加,
#### 列排序 ####

Mop 同时也允许你根据不同的列类型改变排序规则。这种用法需要你按下“o”(这个命令默认使用第一列的值来排序),然后使用左右键来选择你要排序的列。完成之后按下回车对内容重新排序。

举个例子,下面的截图就是根据输出内容的第一列、按照字母表排序之后的结果。
@ -67,12 +71,13 @@ Mop 同时也允许你根据不同的列类型改变排序规则。这种用法
#### 其他选项 ####

其它的可用选项包括“p”:暂停市场和股票信息更新;“q”或者“esc”:退出命令行程序;“?”:显示帮助页。

![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-help.png)

### 结论 ###

Mop 是一个基础的股票监控工具,并没有提供太多的特性,只提供了它所声称的功能。很明显,这个工具并不是为专业股票交易者提供的,而仅仅为你在只有命令行的机器上提供了一个得体的跟踪股票信息的选择。

--------------------------------------------------------------------------------
@ -80,7 +85,7 @@ via: https://www.maketecheasier.com/monitor-stock-prices-ubuntu-command-line/
作者:[Himanshu Arora][a]
译者:[oska874](https://github.com/oska874)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,113 @@
用浏览器管理 Docker
================================================================================
Docker 越来越流行了。在一个容器里面而不是虚拟机里运行一个完整的操作系统是一种非常棒的技术和想法。Docker 已经通过节省工作时间拯救了成千上万的系统管理员和开发人员。它是一个开源技术,提供一个平台来把应用程序当作容器来打包、分发、共享和运行,而不用关注主机上运行的是什么操作系统。它没有开发语言、框架或打包系统的限制,并且可以在任何时间、任何地点运行,从小型计算机到高端服务器都可以。运行和管理 docker 容器可能会花费一点点努力和时间,所以现在有一款基于 web 的应用程序 DockerUI,可以让管理和运行容器变得很简单。DockerUI 对那些不熟悉 Linux 命令行、但又很想运行容器化程序的人很有帮助。DockerUI 是一个开源的基于 web 的应用程序,它最值得称道的是华丽的设计,以及用来运行和管理 docker 的简洁的操作界面。

下面会介绍如何在 Linux 上安装配置 DockerUI。

### 1. 安装 docker ###

首先,我们需要安装 docker。我们得感谢 docker 的开发者,让我们可以简单地在主流 Linux 发行版上安装 docker。为了安装 docker,我们得在对应的发行版上使用下面的命令。

#### Ubuntu/Fedora/CentOS/RHEL/Debian ####

docker 维护者已经写了一个非常棒的脚本,用它可以在 Ubuntu 15.04/14.10/14.04、CentOS 6.x/7、Fedora 22、RHEL 7 和 Debian 8.x 这几个 Linux 发行版上安装 docker。这个脚本可以识别出我们的机器上运行的 Linux 发行版本,将需要的源库添加到文件系统,更新本地的安装源目录,最后安装 docker 及其依赖库。要使用这个脚本安装 docker,我们需要在 root 用户或者 sudo 权限下运行如下的命令:

    # curl -sSL https://get.docker.com/ | sh

#### OpenSuse/SUSE Linux 企业版 ####

要在运行了 OpenSuse 13.1/13.2 或者 SUSE Linux Enterprise Server 12 的机器上安装 docker,我们只需要简单地执行 zypper 命令。运行下面的命令就可以安装最新版本的 docker:

    # zypper in docker

#### ArchLinux ####

docker 在 ArchLinux 的官方源和社区维护的 AUR 库中都可以找到,所以在 ArchLinux 上我们有两种方式来安装 docker。使用官方源安装,需要执行下面的 pacman 命令:

    # pacman -S docker

如果要从社区源 AUR 安装 docker,需要执行下面的命令:

    # yaourt -S docker-git

### 2. 启动 ###

安装好 docker 之后,我们需要运行 docker 守护进程,然后才能运行并管理 docker 容器。我们需要使用下列命令来确保 docker 守护进程已经安装并运行了。

#### 在 SysVinit 上 ####

    # service docker start

#### 在 Systemd 上 ####

    # systemctl start docker

### 3. 安装 DockerUI ###

安装 DockerUI 比安装 docker 要简单很多。我们仅仅需要从 docker 注册库上拉取 dockerui,然后在容器里面运行。要完成这些,我们只需要简单地执行下面的命令:

    # docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui

![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png)

在上面的命令里,dockerui 使用的默认端口是 9000,我们需要使用 `-p` 参数映射默认端口;使用 `-v` 标志我们可以指定 docker 的 socket;如果主机使用了 SELinux,那么就得使用 `--privileged` 标志。
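容器起来之后,也可以在命令行里简单探测一下 9000 端口上的 DockerUI 是否已经可以访问(这只是一个辅助性的小脚本,localhost 和端口按你的实际配置修改):

```shell
# 返回 DockerUI 首页的 HTTP 状态码;连接失败或没有安装 curl 时输出 unreachable
check_dockerui() {
    code=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:9000" 2>/dev/null)
    if [ -z "$code" ] || [ "$code" = "000" ]; then
        echo unreachable
    else
        echo "$code"
    fi
}

status=$(check_dockerui)
echo "DockerUI 状态:$status"
```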
执行完上面的命令后,我们要检查 DockerUI 容器是否运行了,可以使用下面的命令检查:

    # docker ps

![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png)

### 4. 拉取 docker 镜像 ###

现在我们还不能直接使用 DockerUI 拉取镜像,所以我们需要在命令行下拉取 docker 镜像。要完成这些我们需要执行下面的命令。

    # docker pull ubuntu

![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png)

上面的命令将会从 docker 官方源 [Docker Hub][1] 拉取一个标志为 ubuntu 的镜像。类似的,我们可以从 Hub 拉取需要的其它镜像。

### 5. 管理 ###

启动了 DockerUI 容器之后,我们可以用它来执行启动、暂停、终止、删除以及 DockerUI 提供的其它操作 docker 容器的命令。

首先,我们需要在 web 浏览器里面打开 dockerui:在浏览器里面输入 http://ip-address:9000 或者 http://mydomain.com:9000,具体要根据你的系统配置。默认情况下登录不需要认证,但是可以配置我们的 web 服务器来要求登录认证。要启动一个容器,我们需要有包含我们要运行的程序的镜像。

#### 创建 ####

要创建容器,我们需要在 Images 页面里,点击我们想创建的容器的镜像 id。然后点击 `Create` 按钮,接下来我们就会被要求输入创建容器所需要的属性。这些都完成之后,我们需要点击按钮 `Create` 完成最终的创建。

![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png)

#### 停止 ####

要停止一个容器,我们只需要跳转到 `Containers` 页面,然后选取要停止的容器,并在 Actions 的子菜单里面按下 Stop 就行了。

![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png)

#### 暂停与恢复 ####

要暂停一个容器,只需要简单地选取目标容器,然后点击 Pause 就行了。恢复一个容器只需要在 Actions 的子菜单里面点击 Unpause 就行了。

#### 删除 ####

类似于我们上面完成的任务,杀掉或者删除一个容器或镜像也是很简单的。只需要检查、选择容器或镜像,然后点击 Kill 或者 Remove 就行了。

### 结论 ###

DockerUI 使用 docker 远程 API 提供了一个很棒的管理 docker 容器的 web 界面。它的开发者们完全使用 HTML 和 JS 设计、开发了这个应用。目前这个程序还处于开发中,并且还有大量的工作要完成,所以我们并不推荐将它应用在生产环境。它可以帮助用户简单地完成管理容器和镜像,而且只需要一点点工作。如果想要为 DockerUI 做贡献,可以访问它们的 [Github 仓库][2]。如果有问题、建议、反馈,请写在下面的评论框,这样我们就可以修改或者更新我们的内容。谢谢。

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/

作者:[Arun Pyasi][a]
译者:[oska874](https://github.com/oska874)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:https://hub.docker.com/
[2]:https://github.com/crosbymichael/dockerui/
@ -1,9 +1,8 @@
如何在 CentOS 7.0 上配置 Ceph 存储
================================================================================
Ceph 是一个将数据存储在单一分布式计算机集群上的开源软件平台。当你计划构建一个云时,你首先需要决定如何实现你的存储。开源的 Ceph 是红帽原生技术之一,它基于称为 RADOS 的对象存储系统,用一组网关 API 表示块、文件和对象模式中的数据。由于它自身开源的特性,这种便携存储平台能在公有云和私有云上安装和使用。Ceph 集群的拓扑结构是按照备份和信息分布设计的,这种内在设计能提供数据完整性。它的设计目标就是容错,并通过正确配置能运行于商业硬件和一些更高级的系统。

Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它需要最近的内核以及其它最新的库。在这篇指南中,我们会使用最小化安装的 CentOS-7.0。

### 系统资源 ###
@ -25,11 +24,11 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
### 安装前的配置 ###

在安装 Ceph 存储之前,我们要在每个节点上完成一些步骤。第一件事情就是确保每个节点的网络已经配置好并且能相互访问。

**配置 Hosts**

要在每个节点上配置 hosts 条目,要像下面这样打开默认的 hosts 配置文件(LCTT 译注:或者做相应的 DNS 解析)。

    # vi /etc/hosts
@ -46,9 +45,9 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
**配置防火墙**

如果你正在使用启用了防火墙的限制性环境,确保在你的 Ceph 存储管理节点和客户端节点中开放了以下的端口。

你必须在你的 Admin Calamari 节点开放 80、2003 以及 4505-4506 端口,并且允许通过 80 号端口访问 Ceph 或 Calamari 管理节点,以便你网络中的客户端能访问 Calamari web 用户界面。

你可以使用下面的命令在 CentOS 7 中启动并启用防火墙。
@ -62,7 +61,7 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
    #firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
    #firewall-cmd --reload

在 Ceph Monitor 节点,你要在防火墙中允许通过以下端口。

    #firewall-cmd --zone=public --add-port=6789/tcp --permanent
@ -82,9 +81,9 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
    #yum update
    #shutdown -r 0

### 设置 Ceph 用户 ###

现在我们会新建一个单独的 sudo 用户,用于在每个节点安装 ceph-deploy 工具,并允许该用户无密码访问每个节点,因为它需要在 Ceph 节点上安装软件和配置文件而不会有输入密码提示。

运行下面的命令在 ceph-storage 主机上新建有独立 home 目录的新用户。
@ -100,7 +99,7 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
### 设置 SSH 密钥 ###

现在我们会在 Ceph 管理节点生成 SSH 密钥并把密钥复制到每个 Ceph 集群节点。

在 ceph-node 运行下面的命令复制它的 ssh 密钥到 ceph-storage。
@ -125,7 +124,8 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
### 配置 PID 数目 ###

要配置 PID 数目的值,我们会使用下面的命令检查默认的内核值。默认情况下,它是一个较小的最大线程数 32768。

如下图所示,通过编辑系统配置文件,将该值配置为一个更大的数。

![更改 PID 值](http://blog.linoxide.com/wp-content/uploads/2015/10/3-PID-value.png)
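相关的内核参数是 kernel.pid_max。下面是一个可以直接在终端里尝试的小演示(4194303 这个数值仅为示例;真正修改需要 root 权限,把配置行追加到 /etc/sysctl.conf 并执行 sysctl -p,这里只演示查看当前值和构造配置行):

```shell
# 查看内核当前允许的最大 PID/线程数(默认通常是 32768)
cat /proc/sys/kernel/pid_max

# 要永久调大,可以把类似下面这一行追加到 /etc/sysctl.conf,然后执行 sysctl -p
echo 'kernel.pid_max = 4194303'
```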
@ -142,9 +142,9 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
    #rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm

![添加 Ceph 仓库](http://blog.linoxide.com/wp-content/uploads/2015/10/k1.png)

或者创建一个新文件并更新 Ceph 库参数,别忘了替换你当前的 Release 和版本号。

    [root@ceph-storage ~]# vi /etc/yum.repos.d/ceph.repo
@ -160,7 +160,7 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
之后更新你的系统并安装 ceph-deploy 软件包。

### 安装 ceph-deploy 软件包 ###

我们运行下面的命令以及 ceph-deploy 安装命令来更新系统以及最新的 ceph 库和其它软件包。
@ -181,15 +181,16 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
![设置 ceph 集群](http://blog.linoxide.com/wp-content/uploads/2015/10/k4.png)

如果成功执行了上面的命令,你会看到它新建了配置文件。

现在配置 Ceph 默认的配置文件,用任意编辑器打开它,并在会影响你公共网络的 global 参数下面添加以下两行。

    #vim ceph.conf
    osd pool default size = 1
    public network = 45.79.0.0/16

### 安装 Ceph ###

现在我们准备在和 Ceph 集群相关的每个节点上安装 Ceph。我们使用下面的命令在 ceph-storage 和 ceph-node 上安装 Ceph。

    #ceph-deploy install ceph-node ceph-storage
@ -201,7 +202,7 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
    #ceph-deploy mon create-initial

![Ceph 初始化监视器](http://blog.linoxide.com/wp-content/uploads/2015/10/k6.png)

### 设置 OSDs 和 OSD 守护进程 ###
@ -223,9 +224,9 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
    #ceph-deploy admin ceph-node ceph-storage

### 测试 Ceph ###

我们快完成了 Ceph 集群设置,让我们在 ceph 管理节点上运行下面的命令检查正在运行的 ceph 状态。

    #ceph status
    #ceph health
@ -235,7 +236,7 @@ Ceph 能在任何 Linux 发行版上安装,但为了能正确运行,它要
### 总结 ###

在这篇详细的文章中,我们学习了如何使用两台安装了 CentOS 7 的虚拟机设置 Ceph 存储集群,这能用于备份或者作为用于处理其它虚拟机的本地存储。我们希望这篇文章能对你有所帮助。当你试着安装的时候,记得分享你的经验。

--------------------------------------------------------------------------------
@ -243,7 +244,7 @@ via: http://linoxide.com/storage/setup-red-hat-ceph-storage-centos-7-0/
作者:[Kashif Siddique][a]
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,38 @@
Nautilus 的文件搜索将迎来巨大提升
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/10/nautilus-new-search-filters.jpg)

*在 Nautilus 中搜索零散文件和文件夹将会变得相当简单。*

[GNOME 文件管理器][1]中正在开发一个新的**搜索过滤器**。它大量使用 GNOME 漂亮的弹出式菜单,以通过简单的方法来缩小搜索结果,精确地找到你所需要的。

开发者 Georges Stavracas 正致力于开发新的 UI,他[说][2]这个新的界面“更干净、更合理、更直观”。

从他[上传到 YouTube][3] 的视频来看(他还没有把视频嵌入到博客里),他说得没错。

> 他在他的博客中写到:“Nautilus 有非常复杂但是强大的内部组成,它允许我们做很多事情。事实上在代码上存在各种可能。那么,为何它曾经看上去这么糟糕?”

这个问题的部分原因比较令人吃惊:新的搜索过滤器界面向用户展示了这些“强大的内部组成”。搜索结果可以根据类型、名字或者日期范围来进行过滤。

对于像 Nautilus 这类应用的任何修改都有可能让一些用户不安,因此像这样有帮助、直观的新 UI 也会带来一些争议。

不过,对用户不满的担心似乎并没有影响进度(毫无疑问,尽管像[移除“输入即搜索”][4]这样的争议自 2014 年以来就一直没有停过)。GNOME 3.18 已在[上个月发布][5],给 Nautilus 引入了新的文件进度对话框,以及对远程共享的更好整合,包括 Google Drive。

Stavracas 的搜索过滤器还没被合并进 Files 的主干中,但是重做的搜索 UI 已经初步计划在明年春天的 GNOME 3.20 中实现。

--------------------------------------------------------------------------------

via: http://www.omgubuntu.co.uk/2015/10/new-nautilus-search-filter-ui

作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[Caroline](https://github.com/carolinewuyan)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://wiki.gnome.org/Apps/Nautilus
[2]:http://feaneron.com/2015/10/12/the-new-search-for-gnome-files-aka-nautilus/
[3]:https://www.youtube.com/watch?v=X2sPRXDzmUw
[4]:http://www.omgubuntu.co.uk/2014/01/ubuntu-14-04-nautilus-type-ahead-patch
[5]:http://www.omgubuntu.co.uk/2015/09/gnome-3-18-release-new-features
@ -2,9 +2,9 @@ Linux 下如何安装 Retro Terminal
================================================================================
![Retro Terminal in Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Retro-Terminal-Linux.jpeg)

你有怀旧情结?那就试试**安装复古终端应用** [cool-retro-term][1] 来一瞥过去的时光吧。顾名思义,`cool-retro-term` 是一个兼具酷炫和怀旧的终端。

你还记得那段遍地都是 CRT 显示器、终端屏幕闪烁不停的时光吗?现在你并不需要穿越到过去来见证那段时光。假如你观看背景设置在上世纪 90 年代的电影,你就可以看到大量带有绿色或黑底白字的显像管显示器。这种极客光环让它们看起来非常酷!

若你已经厌倦了你机器中终端的外表,正寻找某些炫酷且“新奇”的东西,则 `cool-retro-term` 将会带给你一个复古的终端外表,使你可以重温过去。你也可以改变它的颜色、动画类型并添加一些额外的特效。
@ -48,7 +48,7 @@ Linux 下如何安装 Retro Terminal
    ./cool-retro-term

假如你想把这个应用放在程序菜单中以便快速找到,这样你就不用每次都手动地用命令来启动它,则你可以使用下面的命令:

    sudo cp cool-retro-term.desktop /usr/share/applications
@ -60,13 +60,13 @@ Linux 下如何安装 Retro Terminal
via: http://itsfoss.com/cool-retro-term/

作者:[Abhishek Prakash][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:https://github.com/Swordfish90/cool-retro-term
[2]:http://itsfoss.com/tag/antergos/
[3]:https://manjaro.github.io/
@ -1,6 +1,6 @@
|
||||
如何在 Linux 上使用 SSHfs 挂载一个远程文件系统
|
||||
================================================================================
|
||||
你有想通过安全 shell 挂载一个远程文件系统到本地的经历吗?如果有的话,SSHfs 也许就是你所需要的。它通过使用 SSH 和 Fuse(LCTT 译注:Filesystem in Userspace,用户态文件系统,是 Linux 中用于挂载某些网络空间,如 SSH,到本地文件系统的模块) 允许你挂载远程计算机(或者服务器)到本地。
|
||||
你曾经想过用安全 shell 挂载一个远程文件系统到本地吗?如果有的话,SSHfs 也许就是你所需要的。它通过使用 SSH 和 Fuse(LCTT 译注:Filesystem in Userspace,用户态文件系统,是 Linux 中用于挂载某些网络空间,如 SSH,到本地文件系统的模块) 允许你挂载远程计算机(或者服务器)到本地。
|
||||
|
||||
**注意**: 这篇文章假设你明白[SSH 如何工作并在你的系统中配置 SSH][1]。
|
||||
|
||||
@ -16,7 +16,7 @@
|
||||
|
||||
如果你使用的不是 Ubuntu,那就在你的发行版软件包管理器中搜索软件包名称。最好搜索和 fuse 或 SSHfs 相关的关键字,因为取决于你运行的系统,软件包名称可能稍微有些不同。
|
||||
|
||||
在你的系统上安装完软件包之后,就该创建 fuse 组了。在你安装 fuse 的时候,应该会在你的系统上创建一个组。如果没有的话,在终端窗口中输入以下命令以便在你的 Linux 系统中创建组:
|
||||
在你的系统上安装完软件包之后,就该创建好 fuse 组了。在你安装 fuse 的时候,应该会在你的系统上创建一个组。如果没有的话,在终端窗口中输入以下命令以便在你的 Linux 系统中创建组:
|
||||
|
||||
sudo groupadd fuse
|
||||
|
||||
@ -26,7 +26,7 @@
|
||||
|
||||
![sshfs 添加用户到组 fuse](https://www.maketecheasier.com/assets/uploads/2015/10/sshfs-add-user-to-fuse-group.png)
|
||||
|
||||
别担心上面命令的 `$USER`。shell 会自动用你自己的用户名替换。处理了和组相关的事之后,就是时候创建要挂载远程文件的目录了。
|
||||
别担心上面命令的 `$USER`。shell 会自动用你自己的用户名替换。处理了和组相关的工作之后,就是时候创建要挂载远程文件的目录了。
|
||||
|
||||
mkdir ~/remote_folder
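除了每次手动运行 sshfs 命令之外,也可以在 /etc/fstab 中加入类似下面的一行来定义这个挂载(其中主机名、用户名与密钥路径均为假设,请按实际情况替换):

```
user@example.com:/remote/path  /home/you/remote_folder  fuse.sshfs  noauto,user,_netdev,IdentityFile=/home/you/.ssh/id_rsa  0  0
```

之后执行 `mount /home/you/remote_folder` 即可完成挂载。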
|
||||
|
||||
@ -54,9 +54,9 @@
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在 Linux 上有很多工具可以用于访问远程文件并挂载到本地。如之前所说,如果有的话,也只有很少的工具能充分利用 SSH 的强大功能。我希望在这篇指南的帮助下,也能认识到 SSHfs 是一个多么强大的工具。
|
||||
在 Linux 上有很多工具可以用于访问远程文件并挂载到本地。但是如之前所说,如果有的话,也只有很少的工具能充分利用 SSH 的强大功能。我希望在这篇指南的帮助下,也能认识到 SSHfs 是一个多么强大的工具。
|
||||
|
||||
你觉得 SSHfs 怎么样呢?在线的评论框里告诉我们吧!
|
||||
你觉得 SSHfs 怎么样呢?在下面的评论框里告诉我们吧!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -64,7 +64,7 @@ via: https://www.maketecheasier.com/sshfs-mount-remote-filesystem-linux/
|
||||
|
||||
作者:[Derrik Diener][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,4 +1,3 @@
|
||||
|
||||
如何在 Linux 终端下创建新的文件系统/分区
|
||||
================================================================================
|
||||
![](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-feature-image.png)
|
||||
@ -13,8 +12,7 @@
|
||||
|
||||
![cfdisk-lsblk](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-lsblk.png)
|
||||
|
||||
|
||||
一旦你运行了 `lsblk`,你应该会看到当前系统上每个磁盘的详细列表。看看这个列表,然后找出你想要使用的磁盘。在本文中,我将使用 `sdb` 来进行演示。
|
||||
当你运行了 `lsblk`,你应该会看到当前系统上每个磁盘的详细列表。看看这个列表,然后找出你想要使用的磁盘。在本文中,我将使用 `sdb` 来进行演示。
|
||||
|
||||
在终端输入这个命令。它会显示一个功能强大的基于终端的分区编辑程序。
|
||||
|
||||
@ -26,9 +24,7 @@
|
||||
|
||||
当输入此命令后,你将进入分区编辑器中,然后访问你想改变的磁盘。
|
||||
|
||||
Since hard drive partitions are different, depending on a user’s needs, this part of the guide will go over **how to set up a split Linux home/root system layout**.
|
||||
|
||||
由于磁盘分区的不同,这取决于用户的需求,这部分的指南将在 **如何建立一个分布的 Linux home/root 文件分区**。
|
||||
由于磁盘分区方案因用户的需求而异,这部分指南将介绍**如何建立一个 home/root 分离的 Linux 分区布局**。
|
||||
|
||||
首先,需要创建根分区。这需要根据磁盘的字节数来进行分割。我测试的磁盘是 32 GB。
|
||||
|
||||
@ -38,7 +34,7 @@ Since hard drive partitions are different, depending on a user’s needs, this p
|
||||
|
||||
该程序会要求你输入分区大小。一旦你指定好大小后,按 Enter 键。这将被称为根分区(或 /dev/sdb1)。
|
||||
|
||||
接下来该创建用户分区(/dev/sdb2)了。你需要在 CFdisk 中再选择一些空闲分区。使用箭头选择 [ NEW ] 选项,然后按 Enter 键。输入你用户分区的大小,然后按 Enter 键来创建它。
|
||||
接下来该创建 home 分区(/dev/sdb2)了。你需要在 CFdisk 中再选择一些空闲分区。使用箭头选择 [ NEW ] 选项,然后按 Enter 键。输入你的 home 分区的大小,然后按 Enter 键来创建它。
|
||||
|
||||
![cfdisk-create-home-partition](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-create-home-partition.png)
|
||||
|
||||
@ -48,7 +44,7 @@ Since hard drive partitions are different, depending on a user’s needs, this p
|
||||
|
||||
![cfdisk-specify-partition-type-swap](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-specify-partition-type-swap.png)
|
||||
|
||||
现在,交换分区被创建了,该指定其类型。使用上下箭头来选择它。之后,使用左右箭头选择 [ TYPE ] 。找到 Linux swap 选项,然后按 Enter 键。
|
||||
现在交换分区已经创建好了,接下来该指定其类型。使用上下箭头来选择它。之后,使用左右箭头选择 [ TYPE ] 。找到 Linux swap 选项,然后按 Enter 键。
|
||||
|
||||
![cfdisk-write-partition-table](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-write-partition-table.jpg)
|
||||
|
||||
@ -56,13 +52,13 @@ Since hard drive partitions are different, depending on a user’s needs, this p
|
||||
|
||||
### 使用 mkfs 创建文件系统 ###
|
||||
|
||||
有时候,你并不需要一个完整的分区,你只想要创建一个文件系统而已。你可以在终端直接使用 `mkfs` 命令来实现。
|
||||
有时候,你并不需要重新进行整个分区,而只是想创建一个文件系统而已。你可以在终端直接使用 `mkfs` 命令来实现。
|
||||
|
||||
![cfdisk-mkfs-list-partitions-lsblk](https://www.maketecheasier.com/assets/uploads/2015/10/cfdisk-mkfs-list-partitions-lsblk.png)
|
||||
|
||||
首先,找出你要使用的磁盘。在终端输入 `lsblk` 找出来。它会打印出列表,之后只要找到你想制作文件系统的分区或盘符。
|
||||
首先,找出你要使用的磁盘。在终端输入 `lsblk` 找出来。它会打印出列表,之后只要找到你想创建文件系统的分区或盘符。
|
||||
|
||||
在这个例子中,我将使用 `/dev/sdb1` 的第一个分区。只对 `/dev/sdb` 使用 mkfs(将会使用整个分区)。
|
||||
在这个例子中,我将使用第二个硬盘的 `/dev/sdb1` 作为第一个分区。可以对 `/dev/sdb` 使用 mkfs(这将会使用整个分区)。
|
||||
|
||||
![cfdisk-mkfs-make-file-system-ext4](https://www.maketecheasier.com/assets/uploads/2015/10/cfdisk-mkfs-make-file-system-ext4.png)
|
||||
|
||||
@ -70,13 +66,13 @@ Since hard drive partitions are different, depending on a user’s needs, this p
|
||||
|
||||
sudo mkfs.ext4 /dev/sdb1
|
||||
|
||||
在终端。应当指出的是,`mkfs.ext4` 可以将你指定的任何文件系统改变。
|
||||
在终端中运行。应当指出的是,`mkfs.ext4` 可以换成任何你想要使用的文件系统。
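如果手头没有可以随意格式化的磁盘,也可以在一个普通文件上练习 mkfs,整个过程不需要 root 权限(下面的路径由 mktemp 生成,仅作演示):

```shell
PATH=$PATH:/usr/sbin:/sbin      # mkfs.* 通常位于 sbin 目录
img=$(mktemp)
truncate -s 8M "$img"           # 生成一个 8 MB 的空白镜像文件
mkfs.ext4 -F -q "$img"          # -F 允许对非块设备(普通文件)创建文件系统
echo "已在 $img 上创建 ext4 文件系统"
```

用完后删除该文件即可,不会影响系统中的任何真实分区。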
|
||||
|
||||
### 结论 ###
|
||||
|
||||
虽然使用图形工具编辑文件系统和分区更容易,但终端可以说是更有效的。终端的加载速度更快,点击几个按钮即可。GParted 和其它工具一样,它也是一个完整的工具。我希望在本教程的帮助下,你会明白如何在终端中高效的编辑文件系统。
|
||||
|
||||
你是否更喜欢使用基于终端的方法在 Linux 上编辑分区?为什么或为什么不?在下面告诉我们!
|
||||
你是否更喜欢使用基于终端的方法在 Linux 上编辑分区?不管是不是,请在下面告诉我们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -84,7 +80,7 @@ via: https://www.maketecheasier.com/create-file-systems-partitions-terminal-linu
|
||||
|
||||
作者:[Derrik Diener][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,84 @@
|
||||
如何在 Ubuntu 上用 Go For It 管理您的待办清单
|
||||
================================================================================
|
||||
![](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-featured1.jpg)
|
||||
|
||||
任务管理可以说是工作及日常生活中最重要也最具挑战性的事情之一。当您在工作中承担越来越多的责任时,您的表现将与您管理任务的能力直接挂钩。
|
||||
|
||||
若您的工作有部分需要在电脑上完成,那么您一定很乐意知道,有多款应用软件自称可以为您减轻任务管理的负担。即便这些软件中的大多数都是为 Windows 用户服务的,在 Linux 系统中仍然有不少选择。在本文中,我们就来讨论这样一款软件:Go For It.
|
||||
|
||||
### Go For It ###
|
||||
|
||||
[Go For It][1] (GFI) 由 Manuel Kehl 开发,他声称:“这是款简单易用且时尚优雅的生产力软件,以待办清单(To-Do List)为主打特色,并整合了一个能让你专注于当前事务的定时器。”这款软件的定时器功能尤其有趣,它还可以让您在继续工作之前暂停下来,放松一段时间。
|
||||
|
||||
### 下载并安装 ###
|
||||
|
||||
使用基于 Debian 的系统(如 Ubuntu)的用户可以通过运行以下终端命令轻松地安装这款软件:
|
||||
|
||||
sudo add-apt-repository ppa:mank319/go-for-it
|
||||
sudo apt-get update
|
||||
sudo apt-get install go-for-it
|
||||
|
||||
以上命令执行完毕后,您就可以使用这条命令运行这款应用软件了:
|
||||
|
||||
go-for-it
|
||||
|
||||
### 使用及配置###
|
||||
|
||||
当你第一次运行 GFI 时,它的界面是长这样的:
|
||||
|
||||
![gfi-first-run](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-first-run1.png)
|
||||
|
||||
可以看到,界面由三个标签页组成,分别是*待办* (To-Do),*定时器* (Timer)和*完成* (Done)。*待办*页是一个任务列表(上图所示的4个任务是默认生成的——您可以点击头部的方框删除它们),*定时器*页内含有任务定时器,而*完成*页则是已完成任务的列表。底部有个文本框,您可以在此输入任务描述,并点击“+”号将任务添加到上面的列表中。
|
||||
|
||||
举个例子,我将一个名为“MTE-research-work”的任务添加到了列表中,并点击选中了它,如下图所示:
|
||||
|
||||
![gfi-task-added](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-task-added1.png)
|
||||
|
||||
然后我进入*定时器*页,在这里我可以看到一个为当前“MTE-research-work”任务设定的定时器,定时25分钟。
|
||||
|
||||
![gfi-active-task-timer](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-active-task-timer.png)
|
||||
|
||||
当然,您可以将定时器设定为你喜欢的任何值。然而我并没有修改,而是直接点击下方的“开始 (Start)”按钮启动定时器。一旦剩余时间为60秒,GFI 就会给出一个提示。
|
||||
|
||||
![gfi-first-notification-new](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-first-notification-new.jpg)
|
||||
|
||||
一旦时间到,它会提醒我休息5分钟。
|
||||
|
||||
![gfi-time-up-notification-new](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-time-up-notification-new.jpg)
|
||||
|
||||
5分钟过后,我可以为我的任务再次开启定时器。
|
||||
|
||||
![gfi-break-time-up-new](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-break-time-up-new.jpg)
|
||||
|
||||
任务完成以后,您可以点击*定时器*页中的“完成 (Done)”按钮,然后这个任务就会从*待办*页被转移到*完成*页。
|
||||
|
||||
![gfi-task-done](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-task-done1.png)
|
||||
|
||||
GFI 也能让您稍微调整一些它的设置。例如,下图所示的设置窗口就包含了一些选项,让您修改默认的任务时长,休息时长和提示时刻。
|
||||
|
||||
![gfi-settings](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-settings1.png)
|
||||
|
||||
值得一提的是,GFI 是以 TODO.txt 格式保存待办清单的,这种格式方便了移动设备之间的同步,也让您能使用其他前端程序来编辑任务——更多详情请阅读[这里][2]。
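作为示意,TODO.txt 格式的清单大致如下(条目内容为虚构示例):

```
(A) 2015-11-12 MTE-research-work +MTE @computer
(B) 2015-11-12 整理邮件收件箱 @computer
x 2015-11-10 2015-11-08 提交周报 @office
```

其中 (A)/(B) 表示优先级,+ 开头的是项目,@ 开头的是场景,行首的 x 表示任务已完成。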
|
||||
|
||||
您还可以通过以下视频观看 GFI 的动态展示。
|
||||
|
||||
注:youtube 视频
|
||||
<iframe frameborder="0" src="http://www.youtube.com/embed/mnw556C9FZQ?autoplay=1&autohide=2&border=1&wmode=opaque&enablejsapi=1&controls=1&showinfo=0" id="youtube-iframe"></iframe>
|
||||
|
||||
### 结论###
|
||||
|
||||
正如您所看到的,GFI 是一款简洁明了且易于使用的任务管理软件。虽然它没有提供非常丰富的功能,但它实现了它的承诺,定时器的整合特别有用。如果您正在寻找一款实现了基础功能,并且开源的 Linux 任务管理软件,Go For It 值得您一试。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/to-do-lists-ubuntu-go-for-it/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[Ricky-Gong](https://github.com/Ricky-Gong)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com/author/himanshu/
|
||||
[1]:http://manuel-kehl.de/projects/go-for-it/
|
||||
[2]:http://todotxt.com/
|
@ -1,15 +1,13 @@
|
||||
Linux又问必答-- 如何在Linux中改变默认的Java版本
|
||||
Linux 有问必答:如何在 Linux 中改变默认的 Java 版本
|
||||
================================================================================
|
||||
> **提问**:当我尝试在Linux中运行一个Java程序时,我遇到了一个错误。看上去像程序编译所使用的Javab版本与我本地的不同。我该如何在Linux上切换默认的Java版本?
|
||||
> **提问**:当我尝试在Linux中运行一个Java程序时,我遇到了一个错误。看上去像程序编译所使用的Java版本与我本地的不同。我该如何在Linux上切换默认的Java版本?
|
||||
|
||||
>
|
||||
> Exception in thread "main" java.lang.UnsupportedClassVersionError: com/xmodulo/hmon/gui/NetConf : Unsupported major.minor version 51.0
|
||||
|
||||
当Java程序编译时,编译环境会设置一个“target”变量来设置程序可以运行的最低Java版本。如果你Linux系统上运行的程序不满足最低的JRE版本要求,那么你会在运行的时候遇到下面的错误。
|
||||
当Java程序编译时,编译环境会设置一个“target”变量来设置程序可以运行的最低Java版本。如果你的 Linux 系统不能满足程序要求的最低 JRE 版本,那么你会在运行的时候遇到下面的错误。
|
||||
|
||||
Exception in thread "main" java.lang.UnsupportedClassVersionError: com/xmodulo/hmon/gui/NetConf : Unsupported major.minor version 51.0
|
||||
|
||||
比如,这种情况下程序在Java JRE 1.7下编译,但是系统只有Java JRE 1.6。
|
||||
比如,程序在Java JRE 1.7下编译,但是系统只有Java JRE 1.6。
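补充一点原文之外的背景:错误信息里的“major.minor version 51.0”来自 class 文件头,文件的第 7、8 字节(大端序)就是 major version,50 对应 Java 6、51 对应 Java 7、52 对应 Java 8。下面用 shell 伪造一个最小的 class 文件头来演示如何读出它:

```shell
f=$(mktemp)
# class 文件头:magic(0xCAFEBABE)+ minor(2 字节)+ major(2 字节,这里写入 51)
printf '\312\376\272\276\000\000\000\063' > "$f"
# 跳过前 6 字节,按大端序读出 2 字节的 major version
major=$(od -An -j6 -N2 -tu1 "$f" | awk 'NR==1 {print $1*256+$2}')
echo "$major"    # 51,即 Java 7
rm -f "$f"
```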
|
||||
|
||||
要解决这个问题,你需要改变默认的Java版本到Java JRE 1.7或者更高(假设JRE已经安装了)。
|
||||
|
||||
@ -21,7 +19,7 @@ Linux又问必答-- 如何在Linux中改变默认的Java版本
|
||||
|
||||
本例中,总共安装了4个不同的Java版本:OpenJDK JRE 1.6、Oracle Java JRE 1.6、OpenJDK JRE 1.7 和 Oracle Java JRE 1.7。现在默认的Java版本是OpenJDK JRE 1.6。
|
||||
|
||||
如果没有安装需要的Java JRE,你可以参考[这些指导][1]来完成安装。
|
||||
如果没有安装需要的Java JRE,你可以参考[这些指导][1]来完成安装。
|
||||
|
||||
现在有可用的候选版本,你可以用下面的命令在可用的Java JRE之间**切换默认的Java版本**:
|
||||
|
||||
@ -45,7 +43,7 @@ via: http://ask.xmodulo.com/change-default-java-version-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,5 +1,4 @@
|
||||
|
||||
Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell
|
||||
Linux 有问必答:如何知道当前正在使用的 shell 是哪个?
|
||||
================================================================================
|
||||
> **问题**: 我经常在命令行中切换 shell。是否有一个快速简便的方法来找出我当前正在使用的 shell 呢?此外,我怎么能找到当前 shell 的版本?
|
||||
|
||||
@ -7,36 +6,30 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell
|
||||
|
||||
有多种方式可以查看你目前在使用什么 shell,最简单的方法就是通过使用 shell 的特殊参数。
|
||||
|
||||
其一,[一个名为 "$$" 的特殊参数][1] 表示当前你正在运行的 shell 的 PID。此参数是只读的,不能被修改。所以,下面的命令也将显示你正在运行的 shell 的名字:
|
||||
其一,[一个名为 "$$" 的特殊参数][1] 表示当前你正在运行的 shell 实例的 PID。此参数是只读的,不能被修改。所以,下面的命令也将显示你正在运行的 shell 的名字:
|
||||
|
||||
$ ps -p $$
|
||||
|
||||
----------
|
||||
|
||||
PID TTY TIME CMD
|
||||
21666 pts/4 00:00:00 bash
|
||||
|
||||
上述命令可在所有可用的 shell 中工作。
|
||||
|
||||
如果你不使用 csh,使用 shell 的特殊参数 “$$” 可以找出当前的 shell,这表示当前正在运行的 shell 或 shell 脚本的名称。这是 Bash 的一个特殊参数,但也可用在其他 shells 中,如 sh, zsh, tcsh or dash。使用 echo 命令也可以查看你目前正在使用的 shell 的名称。
|
||||
如果你不使用 csh,找到当前使用的 shell 的另外一个办法是使用特殊参数 “$0” ,它表示当前正在运行的 shell 或 shell 脚本的名称。这是 Bash 的一个特殊参数,但也可用在其他 shell 中,如 sh、zsh、tcsh 或 dash。使用 echo 命令可以查看你目前正在使用的 shell 的名称。
|
||||
|
||||
$ echo $0
|
||||
|
||||
----------
|
||||
|
||||
bash
|
||||
|
||||
不要将 $SHELL 看成是一个单独的环境变量,它被设置为整个路径下的默认 shell。因此,这个变量并不一定指向你当前使用的 shell。例如,即使你在终端中调用不同的 shell,$SHELL 也保持不变。
|
||||
不要被一个叫做 $SHELL 的单独的环境变量所迷惑,它被设置为你的默认 shell 的完整路径。因此,这个变量并不一定指向你当前使用的 shell。例如,即使你在终端中调用不同的 shell,$SHELL 也保持不变。
|
||||
|
||||
$ echo $SHELL
|
||||
|
||||
----------
|
||||
|
||||
/bin/shell
|
||||
|
||||
![](https://c2.staticflickr.com/6/5688/22544087680_4a9c180485_c.jpg)
|
||||
|
||||
因此,找出当前的shell,你应该使用 $$ 或 $0,但不是 $ SHELL。
|
||||
因此,要找出当前的 shell,你应该使用 $$ 或 $0,而不是 $SHELL。
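作为补充,在 Linux 上也可以不借助 ps,直接从 /proc 文件系统读出同样的信息($$ 展开为当前 shell 进程的 PID):

```shell
# /proc/<PID>/comm 保存着进程名,对 shell 进程来说就是 shell 的名字
current_shell=$(cat /proc/$$/comm)
current_shell=${current_shell#-}    # 登录 shell 的进程名可能带前导“-”,去掉它
echo "$current_shell"
```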
|
||||
|
||||
### 找出当前 Shell 的版本 ###
|
||||
|
||||
@ -46,8 +39,6 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell
|
||||
|
||||
$ bash --version
|
||||
|
||||
----------
|
||||
|
||||
GNU bash, version 4.3.30(1)-release (x86_64-pc-linux-gnu)
|
||||
Copyright (C) 2013 Free Software Foundation, Inc.
|
||||
License GPLv3+: GNU GPL version 3 or later
|
||||
@ -59,23 +50,17 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell
|
||||
|
||||
$ zsh --version
|
||||
|
||||
----------
|
||||
|
||||
zsh 5.0.7 (x86_64-pc-linux-gnu)
|
||||
|
||||
**对于** tcsh **shell**:
|
||||
$ tcsh --version
|
||||
|
||||
----------
|
||||
|
||||
tcsh 6.18.01 (Astron) 2012-02-14 (x86_64-unknown-linux) options wide,nls,dl,al,kan,rh,nd,color,filec
|
||||
|
||||
对于一些 shells,你还可以使用 shell 特定的变量(例如,$ BASH_VERSION 或 $ ZSH_VERSION)。
|
||||
对于某些 shell,你还可以使用 shell 特定的变量(例如,$BASH_VERSION 或 $ZSH_VERSION)。
|
||||
|
||||
$ echo $BASH_VERSION
|
||||
|
||||
----------
|
||||
|
||||
4.3.8(1)-release
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
@ -84,7 +69,7 @@ via: http://ask.xmodulo.com/which-shell-am-i-using.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,12 +1,12 @@
|
||||
LastPass的开源替代品
|
||||
LastPass 的开源替代品
|
||||
================================================================================
|
||||
LastPass是一个跨平台的密码管理程序。在Linux平台中,它可作为Firefox, Chrome和Opera浏览器的插件使用。LastPass Sesame支持Ubuntu/Debian与Fedora系统。此外,LastPass还有安装在Firefox Portable的便携版,可将其安装在USB设备上。再加上适用于Ubuntu/Debian, Fedora和openSUSE的LastPass Pocket, 其具有良好的跨平台覆盖性。虽然LastPass备受好评,但它是一个专有软件。此外,LastPass最近被LogMeIn收购。如果你在找一个开源的替代品,这篇文章可能会对你有所帮助。
|
||||
|
||||
我们正面临着信息大爆炸。无论你是要在线经营生意,找工作,还是只为了休闲来进行阅读,互联网都是一个广大的信息源。在这种情况下,长期保留信息是很困难的。然而,及时地获取某些特定信息非常重要。密码就是这样的一个例子。
|
||||
我们正面临着信息大爆炸。无论你是要在线经营生意,找工作,还是只为了休闲来进行阅读,互联网都是一个海量的信息源。在这种情况下,长期保留信息是很困难的。然而,及时地获取某些特定信息非常重要。密码就是这样的一个例子。
|
||||
|
||||
作为一个电脑用户,你可能会面临在不同服务或网站使用相同或不同密码的困境。这个事情非常复杂,因为有些网站会限制你对密码的选择。比如,一个网站可能会限制密码的最小位数,大写字母,数字或者特殊字符,这使得在所有网站使用统一密码变得不可能。更重要的是,不在不同网站中使用同一密码有安全方面的原因。这样就不可避免地意味着人们经常会有很多密码要记。一个解决方案是将所有的密码写下来。然而,这种做法也极度的不安全。
|
||||
|
||||
为了解决需要记忆无穷多串密码的问题,目前比较流行的解决方案是使用密码管理软件。事实上,这类软件对于活跃的互联网用户来说极为实用。它使得你获取、管理和安全保存所有密码变得极为容易,而大多数密码都是被软件或文件系统加密过的。因此,用户只需要记住一个简单的密码就可以获取到其它所有密码。密码管理软件鼓励用户对于不同服务去采用独一无二的,非直观的强密码。
|
||||
为了解决需要记忆无穷多串密码的问题,目前比较流行的解决方案是使用密码管理软件。事实上,这类软件对于活跃的互联网用户来说极为实用。它使得你获取、管理和安全保存所有密码变得极为容易,而大多数密码都是用软件或文件系统加密过的。因此,用户只需要记住一个简单的密码就可以获取到其它所有密码。密码管理软件鼓励用户对于不同服务去采用独一无二的,非直观的高强度的密码。
|
||||
|
||||
为了让大家更深入地了解Linux软件的质量,我将介绍4款优秀的、可替代LastPass的开源软件。
|
||||
|
||||
@ -14,25 +14,27 @@ LastPass是一个跨平台的密码管理程序。在Linux平台中,它可作
|
||||
|
||||
![KeePassX软件截图](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-KeePassX.png)
|
||||
|
||||
KeePassX提供KeePass的多平台接口,是一款开源、跨平台的密码管理软件。这款软件可以帮助你以安全的方式保管密码。你可以将所有密码保存在一个数据库中,而这个数据库被一个主密码或密码盘来保管。
|
||||
KeePassX是KeePass的多平台移植,是一款开源、跨平台的密码管理软件。这款软件可以帮助你以安全的方式保管密码。你可以将所有密码保存在一个数据库中,而这个数据库由一个主密码或密码盘来保护。这样用户只需要记住一个主密码,或插入密码盘,即可解锁整个数据库。
|
||||
|
||||
密码数据库使用AES(即Rijndael)或者TwoFish算法进行加密,密钥长度为256位。
|
||||
|
||||
该软件功能包括:
|
||||
|
||||
- 多重管理模式 - 使每条密码更容易被识别
|
||||
- 管理模式丰富
|
||||
- 通过标题使每条密码更容易被识别
|
||||
- 可设置密码过期时间
|
||||
- 可插入附件
|
||||
- 可为不同分组或密码自定义标志
|
||||
- 在分组中对密码排序
|
||||
- 搜索函数:可在特定分组或整个数据库中搜索
|
||||
- Auto-Type: 这个功能允许你在登录网站时只需要按下几个键。KeePassX可以帮助你输入剩下的密码。Auto-Type通过读取当前窗口的标题,对密码数据库进行搜索来获取相应的密码
|
||||
- 数据库安全性强,用户可通过密码或一个密钥文件(可存储在CD或U盘中)访问数据库
|
||||
- 自动生成安全的密码
|
||||
- 具有预防措施,获取选中的密码并检查其安全性
|
||||
- 加密 - 用256位密钥,通过AES(高级加密标准)或TwoFish算法加密数据库
|
||||
- 搜索功能:可在特定分组或整个数据库中搜索
|
||||
- 自动键入: 这个功能允许你在登录网站时只需要按下几个键。KeePassX可以帮助你输入剩下的密码。自动键入通过读取当前窗口的标题,对密码数据库进行搜索来获取相应的密码
|
||||
- 数据库安全性强,用户可通过密码或一个密钥文件(可存储在CD或U盘中)访问数据库(或两者)
|
||||
- 安全密码自动生成
|
||||
- 具有预防措施,获取用星号隐藏的密码并检查其安全性
|
||||
- 加密 - 用256位密钥,通过AES(高级加密标准)或TwoFish算法加密数据库
|
||||
- 密码可以导入或导出。可从PwManager文件(*.pwm)或KWallet文件(*.xml)中导入密码,可导出为文本(*.txt)格式。
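顺带一提,即便不借助这类图形工具,在 Linux 下也可以直接用系统的随机源生成一条高强度密码(以下仅为演示,与 KeePassX 本身无关):

```shell
# 从 /dev/urandom 取随机字节,只保留字母、数字和几个特殊字符,截取 16 位
password=$(tr -dc 'A-Za-z0-9!@#$%' < /dev/urandom | head -c 16)
echo "$password"
```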
|
||||
|
||||
---
|
||||
- 软件官网:[www.keepassx.org][1]
|
||||
- 开发者:KeepassX Team
|
||||
- 软件许可证:GNU GPL V2
|
||||
@ -42,21 +44,23 @@ KeePassX提供KeePass的多平台接口,是一款开源、跨平台的密码
|
||||
|
||||
![Encryptr软件截图](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Encryptr.png)
|
||||
|
||||
Encryptr是一个开源的、零知晓的、基于云端的密码管理/电子钱包软件,以Crypton为基础开发。Crypton是一个Javascript库,允许开发者利用其开发应用,上传文件至服务器,而服务器无法知道用户所存储的文件内容。
|
||||
Encryptr是一个开源的、零知识(zero-knowledge)的、基于云端的密码管理/电子钱包软件,以Crypton为基础开发。Crypton是一个Javascript库,允许开发者利用其开发应用来上传文件至服务器,而服务器无法知道用户所存储的文件内容。
|
||||
|
||||
Encryptr可将你的敏感信息,比如密码、信用卡数据、PIN码、或认证码存储在云端。然而,由于它基于零知晓的Cypton框架开发,Encryptr可保证只有用户才拥有访问或读取秘密信息的权限。
|
||||
Encryptr可将你的敏感信息,比如密码、信用卡数据、PIN码、或认证码存储在云端。然而,由于它基于零知识的Cypton框架开发,Encryptr可保证只有用户才拥有访问或读取秘密信息的权限。
|
||||
|
||||
由于其跨平台的特性,Encryptr允许用户随时随地、安全地通过一个账户从云端获取机密信息。
|
||||
|
||||
软件特性包括:
|
||||
|
||||
- 使用极安全、零知晓的Crypton框架,软件只在本地加密/解密数据
|
||||
- 使用非常安全的零知识Crypton框架,只在你的本地加密/解密数据
|
||||
- 易于使用
|
||||
- 基于云端
|
||||
- 可存储三种类型的数据:密码、信用卡账号以及通用的键值对
|
||||
- 可对每条密码设置“备注”项
|
||||
- 对本地密码进行缓存加密,以节省上传时间
|
||||
- 过滤和搜索密码
|
||||
- 对密码进行本地加密缓存,以节省载入时间
|
||||
|
||||
---
|
||||
- 软件官网: [encryptr.org][2]
|
||||
- 开发者: Tommy Williams
|
||||
- 软件许可证: GNU GPL v3
|
||||
@ -74,7 +78,9 @@ RatticDB被设计为一个“密码生命周期管理工具”而不是单单一
|
||||
|
||||
- 简洁的ACL设计
|
||||
- 可改变队列功能,可让用户知晓何时需要更改某应用的密码
|
||||
- Ansible配置
|
||||
- 支持Ansible配置
|
||||
|
||||
---
|
||||
|
||||
- 软件官网: [rattic.org][3]
|
||||
- 开发者: Daniel Hall
|
||||
@ -85,9 +91,9 @@ RatticDB被设计为一个“密码生命周期管理工具”而不是单单一
|
||||
|
||||
![Seahorse软件截图](http://www.linuxlinks.com/portal/content/reviews/Security/Screenshot-Seahorse.png)
|
||||
|
||||
Seahorse是一个于Gnome前端运行的GnuPG - GNU隐私保护软件。它的目标是提供一个易于使用密钥管理工具,一并提供一个易于使用的界面来控制加密操作。
|
||||
Seahorse是一个GnuPG(GNU隐私保护软件)的Gnome前端界面。它的目标是提供一个易于使用的密钥管理工具,以及一个易于使用的界面来控制加密操作。
|
||||
|
||||
Seahorse是一个工具,用来提供安全沟通和数据存储服务。数据加密和数字密钥生成操作可以轻易通过GUI来演示,密钥管理操作也可以轻易通过直观的界面来进行。
|
||||
Seahorse是一个工具,用来提供安全传输和数据存储服务。数据加密和数字密钥生成操作可以轻易通过GUI来操作,密钥管理操作也可以轻易通过直观的界面来进行。
|
||||
|
||||
此外,Seahorse 还包含:一个 Gedit 插件,可以在 Nautilus 文件管理器中管理文件,一个管理剪贴板内容的小程序,一个存储私密密码的代理,还有一个 GnuPG 和 OpenSSH 的密钥管理工具。
|
||||
|
||||
@ -95,7 +101,7 @@ Seahorse是一个工具,用来提供安全沟通和数据存储服务。数据
|
||||
|
||||
- 对文本进行加密/解密/签名
|
||||
- 管理密钥及密钥环
|
||||
- 将密钥及密钥环于密钥服务器同步
|
||||
- 将密钥及密钥环与密钥服务器同步
|
||||
- 密码签名及发布
|
||||
- 将密码缓存起来,无需多次重复键入
|
||||
- 对密钥及密钥环进行备份
|
||||
@ -103,6 +109,8 @@ Seahorse是一个工具,用来提供安全沟通和数据存储服务。数据
|
||||
- 生成SSH密钥,对其进行验证及储存
|
||||
- 多语言支持
|
||||
|
||||
---
|
||||
|
||||
- 软件官网: [www.gnome.org/projects/seahorse][4]
|
||||
- 开发者: Jacob Perkins, Jose Carlos, Garcia Sogo, Jean Schurger, Stef Walter, Adam Schreiber
|
||||
- 软件许可证: GNU GPL v2
|
||||
@ -113,7 +121,7 @@ Seahorse是一个工具,用来提供安全沟通和数据存储服务。数据
|
||||
via: http://www.linuxlinks.com/article/20151108125950773/LastPassAlternatives.html
|
||||
|
||||
译者:[StdioA](https://github.com/StdioA)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,48 @@
|
||||
Linux 有问必答:如何在 Linux 上自动设置 JAVA_HOME 环境变量
|
||||
================================================================================
|
||||
> **问题**:我需要在我的 Linux 机器上编译 Java 程序。为此我已经安装了 JDK (Java Development Kit),而现在我正试图设置 JAVA\_HOME 环境变量使其指向安装好的 JDK 。关于在 Linux 上设置 JAVA\_HOME 环境变量,最受推崇的办法是什么?
|
||||
|
||||
许多 Java 程序或基于 Java 的*集成开发环境* (IDE)都需要设置好 JAVA_HOME 环境变量。该变量应指向 *Java 开发工具包* (JDK)或 *Java 运行时环境* (JRE)的安装目录。JDK 不仅包含了 JRE 提供的一切,还带有用于编译 Java 程序的额外的二进制代码和库文件(例如编译器,调试器及 JavaDoc 文档生成器)。JDK 是用来构建 Java 程序的,如果只是运行已经构建好的 Java 程序,单独一份 JRE 就足够了。
|
||||
|
||||
当您正试图设置 JAVA\_HOME 环境变量时,麻烦的事情在于 JAVA\_HOME 变量需要根据以下几点而改变:(1) 您是否安装了 JDK 或 JRE;(2) 您安装了哪个版本;(3) 您安装的是 Oracle JDK 还是 Open JDK。
|
||||
|
||||
因此每当您的开发环境或运行时环境发生改变(例如为 JDK 更新版本)时,您需要根据实际情况调整 JAVA\_HOME 变量,而这种做法是繁重且缺乏效率的。
|
||||
|
||||
以下 export 命令能为您**自动设置** JAVA\_HOME 环境变量,而无须顾及上述的因素。
|
||||
|
||||
若您安装的是 JRE:
|
||||
|
||||
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
|
||||
|
||||
若您安装的是 JDK:
|
||||
|
||||
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which javac))))
|
||||
|
||||
根据您的情况,将上述命令中的一条写入 ~/.bashrc(或 /etc/profile)文件中,它就会永久地设置好 JAVA\_HOME 变量。
|
||||
|
||||
注意,由于 java 或 javac 可能经过多级符号链接,因此这里用 `readlink -f` 命令来获取它们真正的执行路径。
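可以用一组人造的多级符号链接来验证这个解析过程(下面的目录结构完全是演示用的假设):

```shell
demo=$(readlink -f "$(mktemp -d)")
mkdir -p "$demo/jvm/java-8-openjdk/bin"
touch "$demo/jvm/java-8-openjdk/bin/java"
ln -s "$demo/jvm/java-8-openjdk/bin/java" "$demo/etc-alternatives-java"
ln -s "$demo/etc-alternatives-java" "$demo/usr-bin-java"
# readlink -f 穿过两层符号链接,得到真实路径
resolved=$(readlink -f "$demo/usr-bin-java")
# 连续两次 dirname 去掉末尾的 /bin/java,即 JAVA_HOME 应指向的目录
java_home=$(dirname "$(dirname "$resolved")")
echo "$java_home"
rm -rf "$demo"
```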
|
||||
|
||||
举个例子,假如您安装的是 Oracle JRE 7,那么上述的第一条 export 命令将自动设置 JAVA\_HOME 为:
|
||||
|
||||
/usr/lib/jvm/java-7-oracle/jre
|
||||
|
||||
若您安装的是 Open JDK 第8版,那么第二条 export 命令将设置 JAVA\_HOME 为:
|
||||
|
||||
/usr/lib/jvm/java-8-openjdk-amd64
|
||||
|
||||
![](https://c1.staticflickr.com/1/700/22961948071_c73a3261dd_c.jpg)
|
||||
|
||||
简而言之,这些 export 命令会在您重装/升级您的JDK/JRE,或[更换默认 Java 版本][1]时自动更新 JAVA\_HOME 变量。您不再需要手动调整它。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/set-java_home-environment-variable-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[Ricky-Gong](https://github.com/Ricky-Gong)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
||||
[1]:http://ask.xmodulo.com/change-default-java-version-linux.html
|
@ -0,0 +1,48 @@
|
||||
N1:下一代开源邮件客户端
|
||||
================================================================================
|
||||
![N1 Open Source email client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/N1-email-client.png)
|
||||
|
||||
当我们谈论到 Linux 中的邮件客户端,通常 Thunderbird、Geary 和 [Evolution][3] 就会出现在我们的脑海。作为对这些大咖们的挑战,一款新的开源邮件客户端正在进入市场。
|
||||
|
||||
### 设计和功能 ###
|
||||
|
||||
[N1][4]是一个设计与功能并重的新一代开源邮件客户端。作为一个开源软件,N1目前支持 Linux 和 Mac OS X,Windows的版本还在开发中。
|
||||
|
||||
N1宣传它自己为“可扩展的开源邮件客户端”,因为它包含了 Javascript 插件框架,任何人都可以为它创建强大的新功能。可扩展是一个非常流行的功能,它帮助[开源编辑器Atom][5]变得流行。N1同样把重点放在了可扩展上面。
|
||||
|
||||
除了可扩展性,N1同样着重设计了程序的外观。下面N1的截图就是个很好的例子:
|
||||
|
||||
![N1 Open Source email client on Mac OS X](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/N1-email-client-1.jpeg)
|
||||
|
||||
*Mac OS X上的N1客户端。图片来自:N1*
|
||||
|
||||
除了这个功能,N1兼容上百个邮件服务提供商,包括Gmail、Yahoo、iCloud、Microsoft Exchange等等,这个桌面应用提供了离线功能。
|
||||
|
||||
### 目前只能邀请使用 ###
|
||||
|
||||
我不知道为什么现在每家都在效仿 OnePlus 的“邀请制”营销策略。目前,N1 桌面端只有收到邀请才能下载。你可以用下面的链接请求一个邀请,N1 团队会在几天内把下载链接通过邮件发给你。
|
||||
|
||||
|
||||
- [请求N1邀请][6]
|
||||
|
||||
### 感兴趣了么? ###
|
||||
|
||||
我并不是桌面邮件客户端的粉丝,但是 N1 的确引起了我的兴趣,让我想要试一试。你呢?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/n1-open-source-email-client/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:https://www.mozilla.org/en-US/thunderbird/
|
||||
[2]:https://wiki.gnome.org/Apps/Geary
|
||||
[3]:https://help.gnome.org/users/evolution/stable/
|
||||
[4]:https://nylas.com/N1/
|
||||
[5]:http://itsfoss.com/atom-stable-released/
|
||||
[6]:https://invite.nylas.com/download
|
@ -0,0 +1,68 @@
|
||||
如何在 Ubuntu 15.10,14.04 中安装 NVIDIA 358.16 驱动程序
|
||||
================================================================================
|
||||
![nvidia-logo-1](http://ubuntuhandbook.org/wp-content/uploads/2015/06/nvidia-logo-1.png)
|
||||
|
||||
[NVIDIA 358.16][1] —— NVIDIA 358 系列的第一个稳定版本已经发布,对 358.09(测试版)中的问题做了一些修正,以及一些小的改进。
|
||||
|
||||
NVIDIA 358 增加了一个新的 **nvidia-modeset.ko** 内核模块,可以配合 nvidia.ko 内核模块工作来调用 GPU 显示引擎。在以后发布版本中,**nvidia-modeset.ko** 内核驱动程序将被用于模式设置接口的基础,该接口由内核的直接渲染管理器(DRM)所提供。
|
||||
|
||||
新的驱动程序也有新的 GLX 协议扩展,以及在 OpenGL 驱动中分配大量内存的系统内存分配新机制。新的 GPU **GeForce 805A** 和 **GeForce GTX 960A** 都支持。NVIDIA 358.16 也支持 X.Org 1.18 服务器和 OpenGL 4.3。
|
||||
|
||||
### 如何在 Ubuntu 中安装 NVIDIA 358.16 : ###
|
||||
|
||||
> **请不要在生产设备上安装,除非你知道自己在做什么以及如何才能恢复。**
|
||||
|
||||
对于官方的二进制文件,请到 [nvidia.com/object/unix.html][2] 查看。
|
||||
|
||||
对于那些喜欢 Ubuntu PPA 的,我建议你使用 [显卡驱动 PPA][3]。到目前为止,支持 Ubuntu 16.04, Ubuntu 15.10, Ubuntu 15.04, Ubuntu 14.04。
|
||||
|
||||
**1. 添加 PPA.**
|
||||
|
||||
通过按 `Ctrl+Alt+T` 快捷键从 Unity 桌面打开终端。终端启动后,粘贴下面的命令并按回车键:
|
||||
|
||||
sudo add-apt-repository ppa:graphics-drivers/ppa
|
||||
|
||||
![nvidia-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/08/nvidia-ppa.jpg)
|
||||
|
||||
它会要求你输入密码。输入时密码不会显示在屏幕上,输完后按 Enter 继续。
|
||||
|
||||
**2. 刷新并安装新的驱动程序**
|
||||
|
||||
添加 PPA 后,逐一运行下面的命令刷新软件库并安装新的驱动程序:
|
||||
|
||||
sudo apt-get update
|
||||
|
||||
sudo apt-get install nvidia-358 nvidia-settings
|
||||
|
||||
### (如果需要的话,) 卸载: ###
|
||||
|
||||
开机从 GRUB 菜单进入恢复模式,进入根控制台。然后逐一运行下面的命令:
|
||||
|
||||
重新挂载文件系统为可写:
|
||||
|
||||
mount -o remount,rw /
|
||||
|
||||
删除所有的 nvidia 包:
|
||||
|
||||
apt-get purge nvidia*
|
||||
|
||||
最后返回菜单并重新启动:
|
||||
|
||||
reboot
|
||||
|
||||
要禁用/删除显卡驱动 PPA,点击系统设置下的**软件和更新**,然后导航到**其他软件**标签。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ubuntuhandbook.org/index.php/2015/11/install-nvidia-358-16-driver-ubuntu-15-10/
|
||||
|
||||
作者:[Ji m][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ubuntuhandbook.org/index.php/about/
|
||||
[1]:http://www.nvidia.com/Download/driverResults.aspx/95921/en-us
|
||||
[2]:http://www.nvidia.com/object/unix.html
|
||||
[3]:https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa
|
@ -0,0 +1,46 @@
|
||||
在 Ubuntu 15.10 上安装 Intel Graphics 安装器
|
||||
================================================================================
|
||||
![Intel graphics installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel_logo.jpg)
|
||||
|
||||
Intel 最近发布了一个新版本的 Linux Graphics 安装器。在新版本中,将不支持 Ubuntu 15.04,而必须用 Ubuntu 15.10 Wily。
|
||||
|
||||
> Linux 版 Intel® Graphics 安装器可以让你很容易的为你的 Intel Graphics 硬件安装最新版的图形与视频驱动。它能保证你一直使用最新的增强与优化功能,并能够安装到 Intel Graphics Stack 中,来保证你在你的 Intel 图形硬件下,享受到最佳的用户体验。*现在 Linux 版的 Intel® Graphics 安装器支持最新版的 Ubuntu。*
|
||||
|
||||
![intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel-graphics-installer.jpg)
|
||||
|
||||
### 安装 ###
|
||||
|
||||
**1.** 从[这个链接页面][1]中下载该安装器。当前支持 Ubuntu 15.10 的版本是1.2.1版。你可以在**系统设置 -> 详细信息**中检查你的操作系统(32位或64位)的类型。
|
||||
|
||||
![download-intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/download-intel-graphics-installer.jpg)
|
||||
|
||||
**2.** 一旦下载完成,到下载目录中点击 .deb 安装包,用 Ubuntu 软件中心打开它,最后点击“安装”按钮。
|
||||
|
||||
![install-via-software-center](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-via-software-center.jpg)
|
||||
|
||||
**3.** 为了让系统信任 Intel Graphics 安装器,你需要通过下面的命令来为它添加密钥。
|
||||
|
||||
用快捷键`Ctrl+Alt+T`或者在 Unity Dash 中的“应用程序启动器”中打开终端。依次粘贴运行下面的命令。
|
||||
|
||||
wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg -O - | sudo apt-key add -
|
||||
|
||||
wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg-2 -O - | sudo apt-key add -
|
||||
|
||||
![trust-intel](http://ubuntuhandbook.org/wp-content/uploads/2015/11/trust-intel.jpg)
|
||||
|
||||
注意:在运行第一个命令的过程中,如果密钥下载完成后,光标停住不动并且一直闪烁的话,就像上面图片显示的那样,输入你的密码(输入时屏幕上不会有任何显示)然后回车就行了。
|
||||
|
||||
最后通过 Unity Dash 或应用程序启动器打开 Intel Graphics 安装器。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ubuntuhandbook.org/index.php/2015/11/install-intel-graphics-installer-in-ubuntu-15-10/
|
||||
|
||||
作者:[Ji m][a]
|
||||
译者:[XLCYun](https://github.com/XLCYun)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ubuntuhandbook.org/index.php/about/
|
||||
[1]:https://01.org/linuxgraphics/downloads
|
@ -1,11 +1,10 @@
|
||||
第七部分 - 在 Linux 客户端配置基于 Kerberos 身份验证的 NFS 服务器
|
||||
RHCE 系列(七):在 Linux 客户端配置基于 Kerberos 身份验证的 NFS 服务器
|
||||
================================================================================
|
||||
在本系列的前一篇文章,我们回顾了[如何在可能包括多种类型操作系统的网络上配置 Samba 共享][1]。现在,如果你需要为一组类-Unix 客户端配置文件共享,很自然的你会想到网络文件系统,或简称 NFS。
|
||||
|
||||
在本系列的前一篇文章,我们回顾了[如何在可能包括多种类型操作系统的网络上配置 Samba 共享][1]。现在,如果你需要为一组类 Unix 客户端配置文件共享,很自然的你会想到网络文件系统,或简称 NFS。
|
||||
|
||||
![设置使用 Kerberos 进行身份验证的 NFS 服务器](http://www.tecmint.com/wp-content/uploads/2015/09/Setting-Kerberos-Authentication-with-NFS.jpg)
|
||||
|
||||
RHCE 系列:第七部分 - 设置使用 Kerberos 进行身份验证的 NFS 服务器
|
||||
*RHCE 系列:第七部分 - 设置使用 Kerberos 进行身份验证的 NFS 服务器*
|
||||
|
||||
在这篇文章中我们会介绍配置基于 Kerberos 身份验证的 NFS 共享的整个流程。假设你已经配置好了一个 NFS 服务器和一个客户端。如果还没有,可以参考 [安装和配置 NFS 服务器][2] - 它列出了需要安装的依赖软件包并解释了在进行下一步之前如何在服务器上进行初始化配置。
|
||||
|
||||
@ -24,28 +23,26 @@ RHCE 系列:第七部分 - 设置使用 Kerberos 进行身份验证的 NFS 服
|
||||
|
||||
#### 创建 NFS 组并配置 NFS 共享目录 ####
|
||||
|
||||
1. 新建一个名为 nfs 的组并给它添加用户 nfsnobody,然后更改 /nfs 目录的权限为 0770,组属主为 nfs。于是,nfsnobody(对应请求用户)在共享目录有写的权限,你就不需要在 /etc/exports 文件中使用 no_root_squash(译者注:设为 root_squash 意味着在访问 NFS 服务器上的文件时,客户机上的 root 用户不会被当作 root 用户来对待)。
|
||||
1、 新建一个名为 nfs 的组并给它添加用户 nfsnobody,然后更改 /nfs 目录的权限为 0770,组属主为 nfs。于是,nfsnobody(对应请求用户)在共享目录有写的权限,你就不需要在 /etc/exports 文件中使用 no_root_squash(LCTT 译注:设为 root_squash 意味着在访问 NFS 服务器上的文件时,客户机上的 root 用户不会被当作 root 用户来对待)。
|
||||
|
||||
# groupadd nfs
|
||||
# usermod -a -G nfs nfsnobody
|
||||
# chmod 0770 /nfs
|
||||
# chgrp nfs /nfs
|
||||
|
||||
2. 像下面那样更改 export 文件(/etc/exports)只允许从 box1 使用 Kerberos 安全验证的访问(sec=krb5)。
|
||||
2、 像下面那样更改 export 文件(/etc/exports)只允许从 box1 使用 Kerberos 安全验证的访问(sec=krb5)。
|
||||
|
||||
**注意**:anongid 的值设置为之前新建的组 nfs 的 GID:
|
||||
|
||||
**exports – 添加 NFS 共享**
|
||||
|
||||
----------
|
||||
|
||||
/nfs box1(rw,sec=krb5,anongid=1004)
|
||||
|
||||
3. 再次 exprot(-r)所有(-a)NFS 共享。为输出添加详情(-v)是个好主意,因为它提供了发生错误时解决问题的有用信息:
|
||||
3、 再次导出(-r)所有(-a)NFS 共享。为输出添加详情(-v)是个好主意,因为它提供了发生错误时解决问题的有用信息:
|
||||
|
||||
# exportfs -arv
4、 重启并启用 NFS 服务器以及相关服务。注意你不需要启动 nfs-lock 和 nfs-idmapd,因为系统启动时其它服务会自动启动它们:

    # systemctl restart rpcbind nfs-server nfs-lock nfs-idmap
    # systemctl enable rpcbind nfs-server

正如你看到的,为了简便,NFS 服务器和 KDC 在同一台机器上,当然如果你有更多可用机器你也可以把它们安装在不同的机器上。两台机器都在 `mydomain.com` 域。

最后同样重要的是,Kerberos 要求客户端和服务器中至少有一个域名解析的基本方式和[网络时间协议][5]服务,因为 Kerberos 身份验证的安全一部分基于时间戳。

为了配置域名解析,我们在客户端和服务器中编辑 /etc/hosts 文件:

**host 文件 – 为域添加 DNS**

----------

    192.168.0.18 box1.mydomain.com box1
    192.168.0.20 box2.mydomain.com box2
    # chronyc tracking

![用 Chrony 同步服务器时间](http://www.tecmint.com/wp-content/uploads/2015/09/Synchronize-Time-with-Chrony.png)

*用 Chrony 同步服务器时间*

### 安装和配置 Kerberos ###
![创建 Kerberos 数据库](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Kerberos-Database.png)

*创建 Kerberos 数据库*

下一步,使用 kadmin.local 工具为 root 创建管理权限:

![添加 Kerberos 到 NFS 服务器](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Kerboros-for-NFS.png)

*添加 Kerberos 到 NFS 服务器*

为 root/admin 获取和缓存票据授权票据(ticket-granting ticket):

![缓存 Kerberos](http://www.tecmint.com/wp-content/uploads/2015/09/Cache-kerberos-Ticket.png)

*缓存 Kerberos*

真正使用 Kerberos 之前的最后一步是保存被授权使用 Kerberos 身份验证的规则到一个密钥表文件(在服务器中):

![挂载 NFS 共享](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-NFS-Share.png)

*挂载 NFS 共享*

现在让我们卸载共享,在客户端中重命名密钥表文件(模拟它不存在)然后试着再次挂载共享目录:
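这一步对应的命令大致如下(只是一个示意草稿:假设 NFS 服务器为 box2、客户端挂载点为 /mnt,与上文 /etc/hosts 中的主机名一致;截图中显示的就是类似的操作):

```shell
# 示意:先正常挂载,然后模拟密钥表缺失再尝试挂载。
mount -t nfs4 -o sec=krb5 box2:/nfs /mnt      # 有密钥表时应当挂载成功
umount /mnt
mv /etc/krb5.keytab /etc/krb5.keytab.bak      # 重命名密钥表,模拟它不存在
mount -t nfs4 -o sec=krb5 box2:/nfs /mnt      # 此时应因无法取得 Kerberos 凭据而失败
```

验证完毕后记得把密钥表文件改回原名,以恢复正常的 Kerberos 挂载。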
![挂载/卸载 Kerberos NFS 共享](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Unmount-Kerberos-NFS-Share.png)

*挂载/卸载 Kerberos NFS 共享*

现在你可以使用基于 Kerberos 身份验证的 NFS 共享了。
--------------------------------------------------------------------------------

via: http://www.tecmint.com/setting-up-nfs-server-with-kerberos-based-authentica

作者:[Gabriel Cánepa][a]
译者:[ictlyh](http://www.mutouxiaogui.cn/blog/)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://linux.cn/article-6550-1.html
[2]:http://www.tecmint.com/configure-nfs-server/
[3]:http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/
[4]:http://www.tecmint.com/firewalld-rules-for-centos-7/
sevenot translating
Curious about Linux? Try Linux Desktop on the Cloud
================================================================================
Linux maintains a very small market share as a desktop operating system. Current surveys estimate its share to be a mere 2%; contrast that with the various strains (no pun intended) of Windows, which total nearly 90% of the desktop market. For Linux to challenge Microsoft's monopoly on the desktop, there needs to be a simple way of learning about this different operating system. And it would be naive to believe a typical Windows user is going to buy a second machine, tinker with partitioning a hard disk to set up a multi-boot system, or just jump ship to Linux without an easy way back.

![](http://www.linuxlinks.com/portal/content/reviews/Cloud/CloudComputing.png)

We have examined a number of risk-free ways users can experiment with Linux without dabbling with partition management. Various options include Live CD/DVDs, USB keys and desktop virtualization software. For the latter, I can strongly recommend VMware Player or Oracle VirtualBox, two relatively easy and free ways of installing and running multiple operating systems on a desktop or laptop computer. Each virtual machine has its own share of CPU, memory, network interfaces etc., which is isolated from other virtual machines. But virtual machines still require some effort to get Linux up and running, and a reasonably powerful machine. Too much effort for a merely inquisitive mind.

It can be difficult to break down preconceptions. Many Windows users will have experimented with free software that is available on Linux. But there are many facets to learn on Linux. And it takes time to become accustomed to the way things work in Linux.

Surely there should be an effortless way for a beginner to experiment with Linux for the first time? Indeed there is; step forward the online cloud lab.

### LabxNow ###

![LabxNow](http://www.linuxlinks.com/portal/content/reviews/Cloud/Screenshot-LabxNow.png)

LabxNow provides a free service for general users, offering a Linux remote desktop in the browser. The developers promote the service as a personal remote lab (to play around, develop, whatever!) that is accessible from anywhere, over the internet of course.

The service currently offers a free virtual private server with 2 cores, 4GB RAM and 10GB SSD space. The service runs on four AMD 6272 CPUs with 128GB RAM.

#### Features include: ####

- Machine images: Ubuntu 14.04 with Xfce 4.10, RHEL 6.5, CentOS with Gnome, and Oracle
- Hardware: CPU - 1 or 2 cores; RAM: 512MB, 1GB, 2GB or 4GB
- Fast network for data transfers
- Works with all popular browsers
- Install anything, run anything - an excellent way to experiment and learn all about Linux without any risk
- Easily add, delete, manage and customize VMs
- Share VMs, remote desktop support

All you need is a reasonable Internet-connected device. Forget about high-cost VPS, domain space or hardware support. LabxNow offers a great way of experimenting with Ubuntu, RHEL and CentOS. It gives Windows users an excellent environment to dip their toes into the wonderful world of Linux. Further, it allows users to do (programming) work from anywhere in the world without the stress of installing Linux on each machine. Point your web browser at [www.labxnow.org/labxweb/][1].

There are other services (mostly paid) that allow users to experiment with Linux. These include CloudSigma, which offers a free 7-day trial, and Icebergs.io (full root access via HTML5). But for now, LabxNow gets my recommendation.

--------------------------------------------------------------------------------

via: http://www.linuxlinks.com/article/20151003095334682/LinuxCloud.html

译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://www.labxnow.org/labxweb/
Translating by ZTinoZ
7 ways hackers can use Wi-Fi against you
================================================================================
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/intro_title-100626673-orig.jpg)

### 7 ways hackers can use Wi-Fi against you ###

Wi-Fi — oh so convenient, yet oh so dangerous. Here are seven ways you could be giving away your identity through a Wi-Fi connection and what to do instead.

![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/1_free-hotspots-100626674-orig.jpg)

### Using free hotspots ###

They seem to be everywhere, and their numbers are expected to [quadruple over the next four years][1]. But many of them are untrustworthy, created just so your login credentials, to email or even more sensitive accounts, can be picked up by hackers using “sniffers” — software that captures any information you submit over the connection. The best defense against sniffing hackers is to use a VPN (virtual private network). A VPN keeps your private data protected because it encrypts what you input.

![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/2_online-banking-100626675-orig.jpg)

### Banking online ###

You might think that no one needs to be warned against banking online using free Wi-Fi, but cybersecurity firm Kaspersky Lab says that [more than 100 banks worldwide have lost $900 million][2] from cyberhacking, so it would seem that a lot of people are doing it. If you want to use the free Wi-Fi in a coffee shop because you’re confident it will be legitimate, confirm the exact network name with the barista. It’s pretty easy for [someone else in the shop with a router to set up an open connection][3] with a name that seems like it would be the name of the shop’s Wi-Fi.

![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/3_keeping-wifi-on-100626676-orig.jpg)

### Keeping Wi-Fi on all the time ###

When your phone’s Wi-Fi is automatically enabled, you can be connected to an unsecure network without even realizing it. Use your phone’s [location-based Wi-Fi feature][4], if it’s available. It will turn off your Wi-Fi when you’re away from your saved networks and will turn back on when you’re within range.

![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/4_not-using-firewall-100626677-orig.jpg)

### Not using a firewall ###

A firewall is your first line of defense against malicious intruders. It’s meant to let good traffic through your computer on a network and keep hackers and malware out. You should turn it off only when your antivirus software has its own firewall.

![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/5_browsing-unencrypted-sites-100626678-orig.jpg)

### Browsing unencrypted websites ###

Sad to say, [55% of the Web’s top 1 million sites don’t offer encryption][5]. An unencrypted website allows all data transmissions to be viewed by the prying eyes of hackers. Your browser will indicate when a site is secure (you’ll see a gray padlock with Mozilla Firefox, for example, and a green lock icon with Chrome). But even a secure website can’t protect you from sidejackers, who can steal the cookies from a website you visited, whether it’s a valid site or not, through a public network.

![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/6_updating-security-software-100626679-orig.jpg)

### Not updating your security software ###

If you want to ensure that your own network is well protected, upgrade the firmware of your router. All you have to do is go to your router’s administration page to check. Normally, you can download the newest firmware right from the manufacturer’s site.

![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/7_securing-home-wifi-100626680-orig.jpg)

### Not securing your home Wi-Fi ###

Needless to say, it is important to set up a password that is not too easy to guess, and change your connection’s default name. You can also filter your MAC address so your router will recognize only certain devices.

**Josh Althuser** is an open software advocate, Web architect and tech entrepreneur. Over the past 12 years, he has spent most of his time advocating for open-source software and managing teams and projects, as well as providing enterprise-level consultancy for Web applications and helping bring their products to the market. You may connect with him on [Twitter][6].

--------------------------------------------------------------------------------

via: http://www.networkworld.com/article/3003170/mobile-security/7-ways-hackers-can-use-wi-fi-against-you.html

作者:[Josh Althuser][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://twitter.com/JoshAlthuser
[1]:http://www.pcworld.com/article/243464/number_of_wifi_hotspots_to_quadruple_by_2015_says_study.html
[2]:http://www.nytimes.com/2015/02/15/world/bank-hackers-steal-millions-via-malware.html?hp&action=click&pgtype=Homepage&module=first-column-region%C2%AEion=top-news&WT.nav=top-news&_r=3
[3]:http://news.yahoo.com/blogs/upgrade-your-life/banking-online-not-hacked-182159934.html
[4]:http://pocketnow.com/2014/10/15/should-you-leave-your-smartphones-wifi-on-or-turn-it-off
[5]:http://www.cnet.com/news/chrome-becoming-tool-in-googles-push-for-encrypted-web/
[6]:https://twitter.com/JoshAlthuser
eSpeak: Text To Speech Tool For Linux
================================================================================
![Text to speech tool in Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Text-to-speech-Linux.jpg)

[eSpeak][1] is a command line tool for Linux that converts text to speech. It is a compact speech synthesizer that supports English and many other languages. It is written in C.

eSpeak reads text from the standard input or from an input file. The voice generated, however, is nowhere close to a human voice. But it is still a compact and handy tool if you want to use it in your projects.

Some of the main features of eSpeak are:

- A command line tool for Linux and Windows
- Speaks text from a file or from stdin
- Shared library version for use by other programs
- SAPI5 version for Windows, so it can be used with screen readers and other programs that support the Windows SAPI5 interface
- Ported to other platforms, including Android, Mac OS X etc.
- Several voice characteristics to choose from
- Speech output can be saved as a [.WAV file][2]
- SSML ([Speech Synthesis Markup Language][3]) is partially supported, along with HTML
- Tiny in size: the complete program, with language support etc., is under 2 MB
- Can translate text into phoneme codes, so it could be adapted as a front end for another speech synthesis engine
- Development tools available for producing and tuning phoneme data

### Install eSpeak ###

To install eSpeak on an Ubuntu-based system, use the command below in a terminal:

    sudo apt-get install espeak

eSpeak is an old tool and I presume it should be available in the repositories of other Linux distributions such as Arch Linux, Fedora etc. You can install eSpeak easily using dnf, pacman etc.

To use eSpeak, just type `espeak`, enter some text, and press Enter to hear it read aloud. Use Ctrl+C to close the running program.
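For example (a quick sketch, assuming espeak was installed as shown above; the output file name is arbitrary):

```shell
# Speak a sentence directly from the command line.
espeak "Welcome to Linux"

# Read text from stdin, slow the speed to 120 words per minute (-s),
# and save the speech to a WAV file (-w) instead of playing it.
echo "Text to speech on Linux" | espeak -s 120 -w demo.wav
```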
![eSpeak command line](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/eSpeak-example.png)

There are several other options available. You can browse through them in the help section of the program.

### GUI version: Gespeaker ###

If you prefer a GUI over the command line, you can install Gespeaker, which provides a GTK front end to eSpeak.

Use the command below to install Gespeaker:

    sudo apt-get install gespeaker

The interface is straightforward and easy to use. You can explore it all by yourself.

![eSpeak GUI tool for text to speech in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/eSpeak-GUI.png)

While such tools might not be useful for general computing needs, they could be handy if you are working on projects where text to speech conversion is required. I'll let you decide how to use this speech synthesizer.

--------------------------------------------------------------------------------

via: http://itsfoss.com/espeak-text-speech-linux/

作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:http://espeak.sourceforge.net/
[2]:http://en.wikipedia.org/wiki/WAV
[3]:http://en.wikipedia.org/wiki/Speech_Synthesis_Markup_Language
sevenot translating
A Linux User Using ‘Windows 10′ After More than 8 Years – See Comparison
================================================================================
Windows 10 is the newest member of the Windows NT family, which became generally available on July 29, 2015. It is the successor of Windows 8.1. Windows 10 is supported on IA-32, AMD64 and ARMv7 processors.

via: http://www.tecmint.com/a-linux-user-using-windows-10-after-more-than-8-year

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:https://www.microsoft.com/en-us/software-download/windows10ISO
martin translating...

Superclass: 15 of the world’s best living programmers
================================================================================
When developers discuss who the world’s top programmer is, these names tend to come up a lot.

![](http://images.techhive.com/images/article/2015/09/superman-620x465-100611650-orig.jpg)

Image courtesy [tom_bullock CC BY 2.0][1]

It seems like there are lots of programmers out there these days, and lots of really good programmers. But which one is the very best?

Even though there’s no way to really say who the best living programmer is, that hasn’t stopped developers from frequently kicking the topic around. ITworld has solicited input and scoured coder discussion forums to see if there was any consensus. As it turned out, a handful of names did frequently get mentioned in these discussions.

Use the arrows above to read about 15 people commonly cited as the world’s best living programmer.

![](http://images.techhive.com/images/article/2015/09/margaret_hamilton-620x465-100611764-orig.jpg)

Image courtesy [NASA][2]

### Margaret Hamilton ###

**Main claim to fame: The brains behind Apollo’s flight control software**

Credentials: As the Director of the Software Engineering Division at Charles Stark Draper Laboratory, she headed up the team which [designed and built][3] the on-board [flight control software for NASA’s Apollo][4] and Skylab missions. Based on her Apollo work, she later developed the [Universal Systems Language][5] and [Development Before the Fact][6] paradigm. Pioneered the concepts of [asynchronous software, priority scheduling, and ultra-reliable software design][7]. Coined the term “[software engineering][8].” Winner of the [Augusta Ada Lovelace Award][9] in 1986 and [NASA’s Exceptional Space Act Award in 2003][10].

Quotes: “Hamilton invented testing, she pretty much formalised Computer Engineering in the US.” [ford_beeblebrox][11]

“I think before her (and without disrespect including Knuth) computer programming was (and to an extent remains) a branch of mathematics. However a flight control system for a spacecraft clearly moves programming into a different paradigm.” [Dan Allen][12]

“... she originated the term ‘software engineering’ — and offered a great example of how to do it.” [David Hamilton][13]

“What a badass” [Drukered][14]
![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_donald_knuth-620x465-100502872-orig.jpg)

Image courtesy [vonguard CC BY-SA 2.0][15]

### Donald Knuth ###

**Main claim to fame: Author of The Art of Computer Programming**

Credentials: Wrote the [definitive book on the theory of programming][16]. Created the TeX digital typesetting system. [First winner of the ACM’s Grace Murray Hopper Award][17] in 1971. Winner of the ACM’s [A. M. Turing][18] Award in 1974, the [National Medal of Science][19] in 1979 and the IEEE’s [John von Neumann Medal][20] in 1995. Named a [Fellow at the Computer History Museum][21] in 1998.

Quotes: “... wrote The Art of Computer Programming which is probably the most comprehensive work on computer programming ever.” [Anonymous][22]

“There is only one large computer program I have used in which there are to a decent approximation 0 bugs: Don Knuth's TeX. That's impressive.” [Jaap Weel][23]

“Pretty awesome if you ask me.” [Mitch Rees-Jones][24]

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_ken-thompson-620x465-100502874-orig.jpg)

Image courtesy [Association for Computing Machinery][25]

### Ken Thompson ###

**Main claim to fame: Creator of Unix**

Credentials: Co-creator, [along with Dennis Ritchie][26], of Unix. Creator of the [B programming language][27], the [UTF-8 character encoding scheme][28], the ed [text editor][29], and co-developer of the Go programming language. Co-winner (along with Ritchie) of the [A.M. Turing Award][30] in 1983, the [IEEE Computer Pioneer Award][31] in 1994, and the [National Medal of Technology][32] in 1998. Inducted as a [fellow of the Computer History Museum][33] in 1997.

Quotes: “... probably the most accomplished programmer ever. Unix kernel, Unix tools, world-champion chess program Belle, Plan 9, Go Language.” [Pete Prokopowicz][34]

“Ken's contributions, more than anyone else I can think of, were fundamental and yet so practical and timeless they are still in daily use.” [Jan Jannink][35]

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_richard_stallman-620x465-100502868-orig.jpg)

Image courtesy Jiel Beaumadier CC BY-SA 3.0

### Richard Stallman ###

**Main claim to fame: Creator of Emacs, GCC**

Credentials: Founded the [GNU Project][36] and created many of its core tools, such as [Emacs, GCC, GDB][37], and [GNU Make][38]. Also founded the [Free Software Foundation][39]. Winner of the ACM's [Grace Murray Hopper Award][40] in 1990 and the [EFF's Pioneer Award in 1998][41].

Quotes: “... there was the time when he single-handedly outcoded several of the best Lisp hackers around, in the Symbolics vs LMI fight.” [Srinivasan Krishnan][42]

“Through his amazing mastery of programming and force of will, he created a whole sub-culture in programming and computers.” [Dan Dunay][43]

“I might disagree on many things with the great man, but he is still one of the most important programmers, alive or dead” [Marko Poutiainen][44]

“Try to imagine Linux without the prior work on the GNU project. Stallman's the bomb, yo.” [John Burnette][45]
![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_anders_hejlsberg-620x465-100502873-orig.jpg)

Image courtesy [D.Begley CC BY 2.0][46]

### Anders Hejlsberg ###

**Main claim to fame: Creator of Turbo Pascal**

Credentials: [The original author of what became Turbo Pascal][47], one of the most popular Pascal compilers and the first integrated development environment. Later, [led the building of Delphi][48], Turbo Pascal’s successor. [Chief designer and architect of C#][49]. Winner of [Dr. Dobb's Excellence in Programming Award][50] in 2001.

Quotes: “He wrote the [Pascal] compiler in assembly language for both of the dominant PC operating systems of the day (DOS and CPM). It was designed to compile, link and run a program in seconds rather than minutes.” [Steve Wood][51]

“I revere this guy - he created the development tools that were my favourite through three key periods along my path to becoming a professional software engineer.” [Stefan Kiryazov][52]

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_doug_cutting-620x465-100502871-orig.jpg)

Image courtesy [vonguard CC BY-SA 2.0][53]

### Doug Cutting ###

**Main claim to fame: Creator of Lucene**

Credentials: [Developed the Lucene search engine, as well as Nutch][54], a web crawler, and [Hadoop][55], a set of tools for distributed processing of large data sets. A strong proponent of open source (Lucene, Nutch and Hadoop are all open source). A [former director of the Apache Software Foundation][56].

Quotes: “... he is the same guy who has written an exceptional search framework (lucene/solr) and opened the big-data gateway to the world (hadoop).” [Rajesh Rao][57]

“His creation/work on Lucene and Hadoop (among other projects) has created a tremendous amount of wealth and employment for folks in the world….” [Amit Nithianandan][58]

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_sanjay_ghemawat-620x465-100502876-orig.jpg)

Image courtesy [Association for Computing Machinery][59]

### Sanjay Ghemawat ###

**Main claim to fame: Key Google architect**

Credentials: [Helped to design and implement some of Google’s large distributed systems][60], including MapReduce, BigTable, Spanner and Google File System. [Created Unix’s ical][61] calendaring system. Elected to the [National Academy of Engineering][62] in 2009. Winner of the [ACM-Infosys Foundation Award in the Computing Sciences][63] in 2012.

Quote: “Jeff Dean's wingman.” [Ahmet Alp Balkan][64]

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jeff_dean-620x465-100502866-orig.jpg)

Image courtesy [Google][65]

### Jeff Dean ###

**Main claim to fame: The brains behind Google search indexing**

Credentials: Helped to design and implement [many of Google’s large-scale distributed systems][66], including website crawling, indexing and searching, AdSense, MapReduce, BigTable and Spanner. Elected to the [National Academy of Engineering][67] in 2009. 2012 winner of the ACM’s [SIGOPS Mark Weiser Award][68] and the [ACM-Infosys Foundation Award in the Computing Sciences][69].

Quotes: “... for bringing breakthroughs in data mining (GFS, Map and Reduce, Big Table).” [Natu Lauchande][70]

“... conceived, built, and deployed MapReduce and BigTable, among a bazillion other things” [Erik Goldman][71]
![](http://images.techhive.com/images/article/2015/09/linus_torvalds-620x465-100611765-orig.jpg)

Image courtesy [Krd CC BY-SA 4.0][72]

### Linus Torvalds ###

**Main claim to fame: Creator of Linux**

Credentials: Created the [Linux kernel][73] and [Git][74], an open source version control system. Winner of numerous awards and honors, including the [EFF Pioneer Award][75] in 1998, the [British Computer Society’s Lovelace Medal][76] in 2000, the [Millennium Technology Prize][77] in 2012 and the [IEEE Computer Society’s Computer Pioneer Award][78] in 2014. Also inducted into the [Computer History Museum’s Hall of Fellows][79] in 2008 and the [Internet Hall of Fame][80] in 2012.

Quotes: “To put into perspective what an achievement this is, he wrote the Linux kernel in a few years while the GNU Hurd (a GNU-developed kernel) has been under development for 25 years and has still yet to release a production-ready example.” [Erich Ficker][81]

“Torvalds is probably the programmer's programmer.” [Dan Allen][82]

“He's pretty darn good.” [Alok Tripathy][83]

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_john_carmack-620x465-100502867-orig.jpg)

Image courtesy [QuakeCon CC BY 2.0][84]

### John Carmack ###

**Main claim to fame: Creator of Doom**

Credentials: Cofounded id Software and [created such influential FPS games][85] as Wolfenstein 3D, Doom and Quake. Pioneered such ground-breaking computer graphics techniques as [adaptive tile refresh][86], [binary space partitioning][87], and surface caching. Inducted into the [Academy of Interactive Arts and Sciences Hall of Fame][88] in 2001, [won Emmy awards][89] in the Engineering & Technology category in 2007 and 2008, and given a lifetime achievement award by the [Game Developers Choice Awards][90] in 2010.

Quotes: “He wrote his first rendering engine before he was 20 years old. The guy's a genius. I wish I were a quarter the programmer he is.” [Alex Dolinsky][91]

“... Wolfenstein 3D, Doom and Quake were revolutionary at the time and have influenced a generation of game designers.” [dniblock][92]

“He can write basically anything in a weekend....” [Greg Naughton][93]

“He is the Mozart of computer coding….” [Chris Morris][94]

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_fabrice_bellard-620x465-100502870-orig.jpg)

Image courtesy [Duff][95]

### Fabrice Bellard ###

**Main claim to fame: Creator of QEMU**

Credentials: Created a [variety of well-known open-source software programs][96], including QEMU, a platform for hardware emulation and virtualization, FFmpeg, for handling multimedia data, the Tiny C Compiler and LZEXE, an executable file compressor. [Winner of the Obfuscated C Code Contest][97] in 2000 and 2001 and the [Google-O'Reilly Open Source Award][98] in 2011. Former world record holder for [calculating the most digits of Pi][99].

Quotes: “I find Fabrice Bellard's work remarkable and impressive.” [raphinou][100]

“Fabrice Bellard is the most productive programmer in the world....” [Pavan Yara][101]

“He's like the Nikola Tesla of software engineering.” [Michael Valladolid][102]

“He's a prolific serial achiever since the 1980s.” [Michael Biggins][103]
![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jon_skeet-620x465-100502863-orig.jpg)

Image courtesy [Craig Murphy CC BY 2.0][104]

### Jon Skeet ###

**Main claim to fame: Legendary Stack Overflow contributor**

Credentials: Google engineer and author of [C# in Depth][105]. Holds the [highest reputation score of all time on Stack Overflow][106], answering, on average, 390 questions per month.

Quotes: “Jon Skeet doesn't need a debugger, he just stares down the bug until the code confesses.” [Steven A. Lowe][107]

“When Jon Skeet's code fails to compile, the compiler apologises.” [Dan Dyer][108]

“Jon Skeet's code doesn't follow a coding convention. It is the coding convention.” [Anonymous][109]

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_image_adam_dangelo-620x465-100502875-orig.jpg)

Image courtesy [Philip Neustrom CC BY 2.0][110]

### Adam D'Angelo ###

**Main claim to fame: Co-founder of Quora**

Credentials: As an engineer at Facebook, [built initial infrastructure for its news feed][111]. Went on to become CTO and VP of engineering at Facebook, before leaving to co-found Quora. [Eighth place finisher at the USA Computing Olympiad][112] as a high school student in 2001. Member of [California Institute of Technology’s silver medal winning team][113] at the ACM International Collegiate Programming Contest in 2004. [Finalist in the Algorithm Coding Competition][114] of the Topcoder Collegiate Challenge in 2005.

Quotes: “An ‘All-Rounder’ Programmer.” [Anonymous][115]

“For every good thing I make he has like six.” [Mark Zuckerberg][116]

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_petr_mitrichev-620x465-100502869-orig.jpg)

Image courtesy [Facebook][117]

### Petr Mitrichev ###

**Main claim to fame: One of the top competitive programmers of all time**

Credentials: [Two-time gold medal winner][118] in the International Olympiad in Informatics (2000, 2002). In 2006, [won the Google Code Jam][119] and was also the [TopCoder Open Algorithm champion][120]. Also, two-time winner of the Facebook Hacker Cup ([2011][121], [2013][122]). At the time of this writing, [the second-ranked algorithm competitor on TopCoder][123] (handle: Petr) and also [ranked second by Codeforces][124].

Quote: “He is an idol in competitive programming even here in India…” [Kavish Dwivedi][125]

![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_gennady_korot-620x465-100502864-orig.jpg)

Image courtesy [Ishandutta2007 CC BY-SA 3.0][126]

### Gennady Korotkevich ###

**Main claim to fame: Competitive programming prodigy**

Credentials: Youngest participant ever (age 11) and [six-time gold medalist][127] (2007-2012) in the International Olympiad in Informatics. Part of [the winning team][128] at the ACM International Collegiate Programming Contest in 2013 and winner of the [2014 Facebook Hacker Cup][129]. At the time of this writing, [ranked first by Codeforces][130] (handle: Tourist) and [first among algorithm competitors by TopCoder][131].

Quotes: “A programming prodigy!” [Prateek Joshi][132]

“Gennady is definitely amazing, and a visible example of why I have a large development team in Belarus.” [Chris Howard][133]

“Tourist is genius.” [Nuka Shrinivas Rao][134]

--------------------------------------------------------------------------------

via: http://www.itworld.com/article/2823547/enterprise-software/158256-superclass-14-of-the-world-s-best-living-programmers.html#slide1

作者:[Phil Johnson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.itworld.com/author/Phil-Johnson/
[1]:https://www.flickr.com/photos/tombullock/15713223772
[2]:https://commons.wikimedia.org/wiki/File:Margaret_Hamilton_in_action.jpg
[3]:http://klabs.org/home_page/hamilton.htm
[4]:https://www.youtube.com/watch?v=DWcITjqZtpU&feature=youtu.be&t=3m12s
[5]:http://www.htius.com/Articles/r12ham.pdf
[6]:http://www.htius.com/Articles/Inside_DBTF.htm
[7]:http://www.nasa.gov/home/hqnews/2003/sep/HQ_03281_Hamilton_Honor.html
[8]:http://www.nasa.gov/50th/50th_magazine/scientists.html
[9]:https://books.google.com/books?id=JcmV0wfQEoYC&pg=PA321&lpg=PA321&dq=ada+lovelace+award+1986&source=bl&ots=qGdBKsUa3G&sig=bkTftPAhM1vZ_3VgPcv-38ggSNo&hl=en&sa=X&ved=0CDkQ6AEwBGoVChMI3paoxJHWxwIVA3I-Ch1whwPn#v=onepage&q=ada%20lovelace%20award%201986&f=false
[10]:http://history.nasa.gov/alsj/a11/a11Hamilton.html
[11]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrswof
[12]:http://qr.ae/RFEZLk
[13]:http://qr.ae/RFEZUn
[14]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrv9u9
[15]:https://www.flickr.com/photos/44451574@N00/5347112697
[16]:http://cs.stanford.edu/~uno/taocp.html
[17]:http://awards.acm.org/award_winners/knuth_1013846.cfm
[18]:http://amturing.acm.org/award_winners/knuth_1013846.cfm
[19]:http://www.nsf.gov/od/nms/recip_details.jsp?recip_id=198
[20]:http://www.ieee.org/documents/von_neumann_rl.pdf
[21]:http://www.computerhistory.org/fellowawards/hall/bios/Donald,Knuth/
[22]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answers/3063
[23]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Jaap-Weel
[24]:http://qr.ae/RFE94x
[25]:http://amturing.acm.org/photo/thompson_4588371.cfm
[26]:https://www.youtube.com/watch?v=JoVQTPbD6UY
[27]:https://www.bell-labs.com/usr/dmr/www/bintro.html
[28]:http://doc.cat-v.org/bell_labs/utf-8_history
[29]:http://c2.com/cgi/wiki?EdIsTheStandardTextEditor
[30]:http://amturing.acm.org/award_winners/thompson_4588371.cfm
[31]:http://www.computer.org/portal/web/awards/cp-thompson
[32]:http://www.uspto.gov/about/nmti/recipients/1998.jsp
[33]:http://www.computerhistory.org/fellowawards/hall/bios/Ken,Thompson/
[34]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Pete-Prokopowicz-1
[35]:http://qr.ae/RFEWBY
[36]:https://groups.google.com/forum/#!msg/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J
[37]:http://www.emacswiki.org/emacs/RichardStallman
[38]:https://www.gnu.org/gnu/thegnuproject.html
[39]:http://www.emacswiki.org/emacs/FreeSoftwareFoundation
[40]:http://awards.acm.org/award_winners/stallman_9380313.cfm
[41]:https://w2.eff.org/awards/pioneer/1998.php
[42]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton/comment/4146397
[43]:http://qr.ae/RFEaib
[44]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Marko-Poutiainen
[45]:http://qr.ae/RFEUqp
[46]:https://www.flickr.com/photos/begley/2979906130
[47]:http://www.taoyue.com/tutorials/pascal/history.html
[48]:http://c2.com/cgi/wiki?AndersHejlsberg
[49]:http://www.microsoft.com/about/technicalrecognition/anders-hejlsberg.aspx
[50]:http://www.drdobbs.com/windows/dr-dobbs-excellence-in-programming-award/184404602
[51]:http://qr.ae/RFEZrv
[52]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Stefan-Kiryazov
[53]:https://www.flickr.com/photos/vonguard/4076389963/
[54]:http://www.wizards-of-os.org/archiv/sprecher/a_c/doug_cutting.html
[55]:http://hadoop.apache.org/
[56]:https://www.linkedin.com/in/cutting
[57]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Shalin-Shekhar-Mangar/comment/2293071
[58]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answer/Amit-Nithianandan
[59]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm
[60]:http://research.google.com/pubs/SanjayGhemawat.html
[61]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat
[62]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009
[63]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm
[64]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat/answer/Ahmet-Alp-Balkan
[65]:http://research.google.com/people/jeff/index.html
[66]:http://research.google.com/people/jeff/index.html
[67]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009
[68]:http://news.cs.washington.edu/2012/10/10/uw-cse-ph-d-alum-jeff-dean-wins-2012-sigops-mark-weiser-award/
[69]:http://awards.acm.org/award_winners/dean_2879385.cfm
[70]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Natu-Lauchande
[71]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Cosmin-Negruseri/comment/28399
[72]:https://commons.wikimedia.org/wiki/File:LinuxCon_Europe_Linus_Torvalds_05.jpg
[73]:http://www.linuxfoundation.org/about/staff#torvalds
[74]:http://git-scm.com/book/en/Getting-Started-A-Short-History-of-Git
[75]:https://w2.eff.org/awards/pioneer/1998.php
[76]:http://www.bcs.org/content/ConWebDoc/14769
[77]:http://www.zdnet.com/blog/open-source/linus-torvalds-wins-the-tech-equivalent-of-a-nobel-prize-the-millennium-technology-prize/10789
[78]:http://www.computer.org/portal/web/pressroom/Linus-Torvalds-Named-Recipient-of-the-2014-IEEE-Computer-Society-Computer-Pioneer-Award
[79]:http://www.computerhistory.org/fellowawards/hall/bios/Linus,Torvalds/
[80]:http://www.internethalloffame.org/inductees/linus-torvalds
[81]:http://qr.ae/RFEeeo
[82]:http://qr.ae/RFEZLk
[83]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Alok-Tripathy-1
[84]:https://www.flickr.com/photos/quakecon/9434713998
[85]:http://doom.wikia.com/wiki/John_Carmack
[86]:http://thegamershub.net/2012/04/gaming-gods-john-carmack/
[87]:http://www.shamusyoung.com/twentysidedtale/?p=4759
[88]:http://www.interactive.org/special_awards/details.asp?idSpecialAwards=6
[89]:http://www.itworld.com/article/2951105/it-management/a-fly-named-for-bill-gates-and-9-other-unusual-honors-for-tech-s-elite.html#slide8
[90]:http://www.gamechoiceawards.com/archive/lifetime.html
[91]:http://qr.ae/RFEEgr
[92]:http://www.itworld.com/answers/topic/software/question/whos-best-living-programmer#comment-424562
[93]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton
[94]:http://money.cnn.com/2003/08/21/commentary/game_over/column_gaming/
[95]:http://dufoli.wordpress.com/2007/06/23/ammmmaaaazing-night/
[96]:http://bellard.org/
[97]:http://www.ioccc.org/winners.html#B
[98]:http://www.oscon.com/oscon2011/public/schedule/detail/21161
[99]:http://bellard.org/pi/pi2700e9/
[100]:https://news.ycombinator.com/item?id=7850797
[101]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/1718701
[102]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/2454450
[103]:http://qr.ae/RFEjhZ
[104]:https://www.flickr.com/photos/craigmurphy/4325516497
[105]:http://www.amazon.co.uk/gp/product/1935182471?ie=UTF8&tag=developetutor-21&linkCode=as2&camp=1634&creative=19450&creativeASIN=1935182471
[106]:http://stackexchange.com/leagues/1/alltime/stackoverflow
[107]:http://meta.stackexchange.com/a/9156
[108]:http://meta.stackexchange.com/a/9138
[109]:http://meta.stackexchange.com/a/9182
[110]:https://www.flickr.com/photos/philipn/5326344032
[111]:http://www.crunchbase.com/person/adam-d-angelo
[112]:http://www.exeter.edu/documents/Exeter_Bulletin/fall_01/oncampus.html
[113]:http://icpc.baylor.edu/community/results-2004
[114]:https://www.topcoder.com/tc?module=Static&d1=pressroom&d2=pr_022205
[115]:http://qr.ae/RFfOfe
[116]:http://www.businessinsider.com/in-new-alleged-ims-mark-zuckerberg-talks-about-adam-dangelo-2012-9#ixzz369FcQoLB
[117]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1
[118]:http://stats.ioinformatics.org/people/1849
[119]:http://googlepress.blogspot.com/2006/10/google-announces-winner-of-global-code_27.html
[120]:http://community.topcoder.com/tc?module=SimpleStats&c=coder_achievements&d1=statistics&d2=coderAchievements&cr=10574855
[121]:https://www.facebook.com/notes/facebook-hacker-cup/facebook-hacker-cup-finals/208549245827651
[122]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1
[123]:http://community.topcoder.com/tc?module=AlgoRank
[124]:http://codeforces.com/ratings
[125]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Venkateswaran-Vicky/comment/1960855
[126]:http://commons.wikimedia.org/wiki/File:Gennady_Korot.jpg
[127]:http://stats.ioinformatics.org/people/804
[128]:http://icpc.baylor.edu/regionals/finder/world-finals-2013/standings
[129]:https://www.facebook.com/hackercup/posts/10152022955628845
[130]:http://codeforces.com/ratings
[131]:http://community.topcoder.com/tc?module=AlgoRank
[132]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi
[133]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4720779
[134]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4880549

icybreaker translating...
14 tips for teaching open source development
================================================================================
Academia is an excellent platform for training and preparing the open source developers of tomorrow. In research, we occasionally open source software we write. We do this for two reasons. One, to promote the use of the tools we produce. And two, to learn more about the impact and issues other people face when using them. With this background of writing research software, I was tasked with redesigning the undergraduate software engineering course for second-year students at the University of Bradford.

How bad a boss is Linus Torvalds?
================================================================================
![linus torvalds](http://images.techhive.com/images/article/2015/08/linus_torvalds-100600260-primary.idge.jpg)

*Linus Torvalds addressed a packed auditorium of Linux enthusiasts during his speech at the LinuxWorld show in San Jose, California, on August 10, 1999. Credit: James Niccolai*

**It depends on context. In the world of software development, he’s what passes for normal. The question is whether that situation should be allowed to continue.**

I've known Linus Torvalds, Linux's inventor, for over 20 years. We're not chums, but we like each other.

Lately, Torvalds has been getting a lot of flak for his management style. Linus doesn't suffer fools gladly. He has one way of judging people in his business of developing the Linux kernel: How good is your code?

Nothing else matters. As Torvalds said earlier this year at the Linux.conf.au Conference, "I'm not a nice person, and I don't care about you. [I care about the technology and the kernel][1] -- that's what's important to me."

Now, I can deal with that kind of person. If you can't, you should avoid the Linux kernel community, where you'll find a lot of this kind of meritocratic thinking. Which is not to say that I think everything in Linuxland is hunky-dory and should be impervious to calls for change. A meritocracy I can live with; a bastion of male dominance where women are subjected to scorn and disrespect is a problem.

That's why I see the recent brouhaha about Torvalds' management style -- or more accurately, his total indifference to the personal side of management -- as nothing more than standard operating procedure in the world of software development. And at the same time, I see another instance that has come to light as evidence of a need for things to really change.

The first situation arose with the [release of Linux 4.3][2], when Torvalds used the Linux Kernel Mailing List to tear into a developer who had inserted some networking code that Torvalds thought was -- well, let's say "crappy." "[[A]nd it generates [crappy] code.][3] It looks bad, and there's no reason for it." He goes on in this vein for quite a while. Besides the word "crap" and its earthier synonym, he uses the word "idiotic" pretty often.

Here's the thing, though. He's right. I read the code. It's badly written and it does indeed seem to have been designed to use the new "overflow_usub()" function just for the sake of using it.

Now, some people see this diatribe as evidence that Torvalds is a bad-tempered bully. I see a perfectionist who, within his field, doesn't put up with crap.

Many people have told me that this is not how professional programmers should act. People, have you ever worked with top developers? That's exactly how they act, at Apple, Microsoft, Oracle and everywhere else I've known them.

I've heard Steve Jobs rip a developer to pieces. I've cringed while a senior Oracle developer lead tore into a room of new programmers like a piranha through goldfish.

In Accidental Empires, his classic book on the rise of PCs, Robert X. Cringely described Microsoft's software management style when Bill Gates was in charge as a system where "Each level, from Gates on down, screams at the next, goading and humiliating them." Ah, yes, that's the Microsoft I knew and hated.

The difference between the leaders at big proprietary software companies and Torvalds is that he says everything in the open for the whole world to see. The others do it in private conference rooms. I've heard people claim that Torvalds would be fired in their company. Nope. He'd be right where he is now: on top of his programming world.

Oh, and there's another difference. If you get, say, Larry Ellison mad at you, you can kiss your job goodbye. When you get Torvalds angry at your work, you'll get yelled at in an email. That's it.

You see, Torvalds isn't anyone's boss. He's the guy in charge of a project with about 10,000 contributors, but he has zero hiring and firing authority. He can hurt your feelings, but that's about it.

That said, there is a serious problem within both open-source and proprietary software development circles. No matter how good a programmer you are, if you're a woman, the cards are stacked against you.

No case shows this better than that of Sarah Sharp, an Intel developer and formerly a top Linux programmer. [In a post on her blog in October][4], she explained why she had stopped contributing to the Linux kernel more than a year earlier: "I finally realized that I could no longer contribute to a community where I was technically respected, but I could not ask for personal respect.... I did not want to work professionally with people who were allowed to get away with subtle sexist or homophobic jokes."

Who can blame her? I can't. Torvalds, like almost every software manager I've ever known, I'm sorry to say, has permitted a hostile work environment.

He would probably say that it's not his job to ensure that Linux contributors behave with professionalism and mutual respect. He's concerned with the code and nothing but the code.

As Sharp wrote:

> I have the utmost respect for the technical efforts of the Linux kernel community. They have scaled and grown a project that is focused on maintaining some of the highest coding standards out there. The focus on technical excellence, in combination with overloaded maintainers, and people with different cultural and social norms, means that Linux kernel maintainers are often blunt, rude, or brutal to get their job done. Top Linux kernel developers often yell at each other in order to correct each other's behavior.
>
> That's not a communication style that works for me. …
>
> Many senior Linux kernel developers stand by the right of maintainers to be technically and personally brutal. Even if they are very nice people in person, they do not want to see the Linux kernel communication style change.

She's right.

Where I differ from other observers is that I don't think that this problem is in any way unique to Linux or open-source communities. With five years of work in the technology business and 25 years as a technology journalist, I've seen this kind of immature boy behavior everywhere.

It's not Torvalds' fault. He's a technical leader with a vision, not a manager. The real problem is that there seems to be no one in the software development universe who can set a supportive tone for teams and communities.

Looking ahead, I hope that companies and organizations, such as the Linux Foundation, can find a way to empower community managers or other managers to encourage and enforce civil behavior.

We won't, unfortunately, find that kind of managerial finesse in our pure technical or business leaders. It's not in their DNA.

--------------------------------------------------------------------------------

via: http://www.computerworld.com/article/3004387/it-management/how-bad-a-boss-is-linus-torvalds.html

作者:[Steven J. Vaughan-Nichols][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/
[1]:http://www.computerworld.com/article/2874475/linus-torvalds-diversity-gaffe-brings-out-the-best-and-worst-of-the-open-source-world.html
[2]:http://www.zdnet.com/article/linux-4-3-released-after-linus-torvalds-scraps-brain-damage-code/
[3]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html
[4]:http://sarah.thesharps.us/2015/10/05/closing-a-door/

Review: 5 memory debuggers for Linux coding
================================================================================
![](http://images.techhive.com/images/article/2015/11/penguinadmin-2400px-100627186-primary.idge.jpg)
Credit: [Moini][1]

As a programmer, I'm aware that I tend to make mistakes -- and why not? Even programmers are human. Some errors are detected during code compilation, while others get caught during software testing. However, a category of error exists that usually does not get detected at either of these stages and that may cause the software to behave unexpectedly -- or worse, terminate prematurely.

If you haven't already guessed it, I am talking about memory-related errors. These errors can be not only time-consuming to debug manually but also difficult to find and correct. It's also worth mentioning that they are surprisingly common, especially in software written in programming languages like C and C++, which were designed for use with [manual memory management][2].

Thankfully, several programming tools exist that can help you find memory errors in your software programs. In this roundup, I assess five popular, free and open-source memory debuggers that are available for Linux: Dmalloc, Electric Fence, Memcheck, Memwatch and Mtrace. I've used all five in my day-to-day programming, and so these reviews are based on practical experience.

### [Dmalloc][3] ###

**Developer**: Gray Watson
**Reviewed version**: 5.5.2
**Linux support**: All flavors
**License**: Creative Commons Attribution-Share Alike 3.0 License

Dmalloc is a memory-debugging tool developed by Gray Watson. It is implemented as a library that provides wrappers around standard memory management functions like **malloc(), calloc(), free()** and more, enabling programmers to detect problematic code.

![cw dmalloc output](http://images.techhive.com/images/article/2015/11/cw_dmalloc-output-100627040-large.idge.png)
Dmalloc

As listed on the tool's Web page, the debugging features it provides include memory-leak tracking, [double free][4] error tracking and [fence-post write detection][5]. Other features include file/line number reporting and general logging of statistics.

#### What's new ####

Version 5.5.2 is primarily a [bug-fix release][6] containing corrections for a couple of build and install problems.

#### What's good about it ####

The best part about Dmalloc is that it's extremely configurable. For example, you can configure it to include support for C++ programs as well as threaded applications. A useful capability it provides is runtime configurability, which means that you can easily enable/disable the features the tool provides while it is being executed.

You can also use Dmalloc with the [GNU Project Debugger (GDB)][7] -- just add the contents of the dmalloc.gdb file (located in the contrib subdirectory in Dmalloc's source package) to the .gdbinit file in your home directory.

Another thing that I really like about Dmalloc is its extensive documentation. Just head to the [documentation section][8] on its official website, and you'll get everything from how to download, install, run and use the library to detailed descriptions of the features it provides and an explanation of the output file it produces. There's also a section containing solutions to some common problems.

#### Other considerations ####

Like Mtrace, Dmalloc requires programmers to make changes to their program's source code. In this case you may, at the very least, want to add the **dmalloc.h** header, because it allows the tool to report the file/line numbers of calls that generate problems -- something that is very useful, as it saves time while debugging.

In addition, the Dmalloc library, which is produced after the package is compiled, needs to be linked with your program while the program is being compiled.

However, complicating things somewhat is the fact that you also need to set an environment variable, dubbed **DMALLOC_OPTIONS**, that the debugging tool uses to configure the memory debugging features -- as well as the location of the output file -- at runtime. While you can manually assign a value to the environment variable, beginners may find that process a bit tough, given that the Dmalloc features you want to enable are listed as part of that value and are actually represented as a sum of their respective hexadecimal values -- you can read more about it [here][9].

An easier way to set the environment variable is to use the [Dmalloc Utility Program][10], which was designed for just that purpose.

#### Bottom line ####

Dmalloc's real strength lies in the configurability options it provides. It is also highly portable, having been successfully ported to many OSes, including AIX, BSD/OS, DG/UX, Free/Net/OpenBSD, GNU/Hurd, HPUX, Irix, Linux, MS-DOS, NeXT, OSF, SCO, Solaris, SunOS, Ultrix, Unixware and even Unicos (on a Cray T3E). Although the tool has a bit of a learning curve associated with it, the features it provides are worth it.

### [Electric Fence][15] ###

**Developer**: Bruce Perens
**Reviewed version**: 2.2.3
**Linux support**: All flavors
**License**: GNU GPL (version 2)

Electric Fence is a memory-debugging tool developed by Bruce Perens. It is implemented in the form of a library that your program needs to link to, and is capable of detecting overruns of memory allocated on the [heap][11], as well as accesses to memory that has already been released.

![cw electric fence output](http://images.techhive.com/images/article/2015/11/cw_electric-fence-output-100627041-large.idge.png)
Electric Fence

As the name suggests, Electric Fence creates a virtual fence around each allocated buffer in such a way that any illegal memory access results in a [segmentation fault][12]. The tool supports both C and C++ programs.

#### What's new ####

Version 2.2.3 contains a fix for the tool's build system, allowing it to actually pass the -fno-builtin-malloc option to the [GNU Compiler Collection (GCC)][13].

#### What's good about it ####

The first thing that I liked about Electric Fence is that -- unlike Memwatch, Dmalloc and Mtrace -- it doesn't require you to make any changes in the source code of your program. You just need to link your program with the tool's library during compilation.

Secondly, the way the debugging tool is implemented makes sure that a segmentation fault is generated on the very first instruction that causes a bounds violation, which is always better than having the problem detected at a later stage.

Electric Fence always produces a copyright message in its output, irrespective of whether an error was detected or not. This behavior is quite useful, as it also acts as confirmation that you are actually running an Electric Fence-enabled version of your program.

#### Other considerations ####

On the other hand, what I really miss in Electric Fence is the ability to detect memory leaks, as that is one of the most common and potentially serious problems in software written in C/C++. In addition, the tool cannot detect overruns of memory allocated on the stack, and it is not thread-safe.

Given that the tool allocates an inaccessible virtual memory page both before and after each user-allocated memory buffer, it ends up consuming a lot of extra memory if your program makes many dynamic memory allocations.

Another limitation of the tool is that it cannot explicitly tell you exactly where the problem lies in your program's code -- all it does is produce a segmentation fault whenever it detects a memory-related error. To find out the exact line number, you'll have to debug your Electric Fence-enabled program with a tool like [The GNU Project Debugger (GDB)][14], which in turn depends on the -g compiler option to produce line numbers in its output.

Finally, although Electric Fence is capable of detecting most buffer overruns, an exception is the scenario where the allocated buffer size is not a multiple of the word size of the system -- in that case, an overrun (even if it's only a few bytes) won't be detected.

#### Bottom line ####

Despite all its limitations, where Electric Fence scores is ease of use -- just link your program with the tool once, and it'll alert you every time it detects a memory issue it's capable of detecting. However, as already mentioned, the tool requires you to use a source-code debugger like GDB.

### [Memcheck][16] ###

**Developer**: [Valgrind Developers][17]

**Reviewed version**: 3.10.1

**Linux support**: All flavors

**License**: GPL

[Valgrind][18] is a suite that provides several tools for debugging and profiling Linux programs. Although it works with programs written in many different languages -- such as Java, Perl, Python, assembly code, Fortran, Ada and more -- the tools it provides are largely aimed at programs written in C and C++.

The most popular Valgrind tool is Memcheck, a memory-error detector that can detect issues such as memory leaks, invalid memory access, use of undefined values and problems related to allocation and deallocation of heap memory.

#### What's new ####

This [release][19] of the suite (3.10.1) is a minor one that primarily contains fixes to bugs reported in version 3.10.0. In addition, it also "backports fixes for all reported missing AArch64 ARMv8 instructions and syscalls from the trunk."

#### What's good about it ####

Memcheck, like all other Valgrind tools, is basically a command-line utility. It's very easy to use: If you normally run your program on the command line in a form such as prog arg1 arg2, you just need to add a few values, like this: valgrind --leak-check=full prog arg1 arg2.

![cw memcheck output](http://images.techhive.com/images/article/2015/11/cw_memcheck-output-100627037-large.idge.png)

Memcheck

(Note: You don't need to mention Memcheck anywhere in the command line because it's the default Valgrind tool. However, you do need to initially compile your program with the -g option -- which adds debugging information -- so that Memcheck's error messages include exact line numbers.)

What I really like about Memcheck is that it provides a lot of command-line options (such as the --leak-check option mentioned above), allowing you to control not only how the tool works but also how it produces its output.

For example, you can enable the --track-origins option to see information on the sources of uninitialized data in your program. Enabling the --show-mismatched-frees option will let Memcheck check that memory allocation and deallocation techniques match. For code written in C, Memcheck will make sure that only the free() function is used to deallocate memory allocated by malloc(), while for code written in C++, the tool will check whether or not the delete and delete[] operators are used to deallocate memory allocated by new and new[], respectively. If a mismatch is detected, an error is reported.

But the best part, especially for beginners, is that the tool even produces suggestions about which command-line option the user should use to make the output more meaningful. For example, if you do not use the basic --leak-check option, it will produce output suggesting: "Rerun with --leak-check=full to see details of leaked memory." And if there are uninitialized variables in the program, the tool will generate a message that says, "Use --track-origins=yes to see where uninitialised values come from."

Another useful feature of Memcheck is that it lets you [create suppression files][20], allowing you to suppress certain errors that you can't fix at the moment -- this way you won't be reminded of them every time the tool is run. It's worth mentioning that there already exists a default suppression file that Memcheck reads to suppress errors in the system libraries, such as the C library, that come pre-installed with your OS. You can either create a new suppression file for your own use, or edit the existing one (usually /usr/lib/valgrind/default.supp).

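
A suppression is just a small stanza in such a file; the sketch below (with a made-up suppression name and function name) silences one specific leak report:

```
{
   ignore_leak_in_foo_init
   Memcheck:Leak
   fun:malloc
   fun:foo_init
}
```

The first line inside the braces is a free-form name, the second names the tool and error kind, and the fun: lines match the call stack from the innermost frame outward. You pass your own file to Valgrind with the --suppressions option.
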
For those seeking advanced functionality, it's worth knowing that Memcheck can also [detect memory errors][21] in programs that use [custom memory allocators][22]. In addition, it provides [monitor commands][23] that can be used while working with Valgrind's built-in gdbserver, as well as a [client request mechanism][24] that allows you not only to tell the tool facts about the behavior of your program, but to make queries as well.

#### Other considerations ####

While there's no denying that Memcheck can save you a lot of debugging time and frustration, the tool uses a lot of memory, and so can make your program's execution significantly slower (around 20 to 30 times, [according to the documentation][25]).

Aside from this, there are some other limitations, too. According to some user comments, Memcheck apparently isn't [thread-safe][26], and it doesn't detect [static buffer overruns][27]. Also, there are some Linux programs, like [GNU Emacs][28], that currently do not work with Memcheck.

If you're interested in taking a look, an exhaustive list of Valgrind's limitations can be found [here][29].

#### Bottom line ####

Memcheck is a handy memory-debugging tool both for beginners and for those looking for advanced features. While it's very easy to use if all you need is basic debugging and error checking, there's a bit of a learning curve if you want to use features like suppression files or monitor commands.

Although it has a long list of limitations, Valgrind (and hence Memcheck) claims on its site that it is used by [thousands of programmers][30] across the world -- the team behind the tool says it's received feedback from users in over 30 countries, with some of them working on projects with up to a whopping 25 million lines of code.

### [Memwatch][31] ###

**Developer**: Johan Lindh

**Reviewed version**: 2.71

**Linux support**: All flavors

**License**: GNU GPL

Memwatch is a memory-debugging tool developed by Johan Lindh. Although it's primarily a memory-leak detector, it is also capable (according to its Web page) of detecting other memory-related issues like [double-free error tracking and erroneous frees][32], buffer overflow and underflow, [wild pointer][33] writes, and more.

The tool works with programs written in C. Although you can also use it with C++ programs, it's not recommended (according to the Q&A file that comes with the tool's source package).

#### What's new ####

This version adds ULONG_LONG_MAX to detect whether a program is 32-bit or 64-bit.

#### What's good about it ####

Like Dmalloc, Memwatch comes with good documentation. You can refer to the USING file if you want to learn how the tool works; how it performs initialization, cleanup and I/O operations; and more. Then there is a FAQ file that is aimed at helping users in case they face any common errors while using Memwatch. Finally, there is a test.c file that contains a working example of the tool for your reference.

![cw memwatch output](http://images.techhive.com/images/article/2015/11/cw_memwatch_output-100627038-large.idge.png)

Memwatch

Unlike Mtrace, the log file to which Memwatch writes its output (usually memwatch.log) is in human-readable form. Also, instead of truncating it, Memwatch appends the memory-debugging output to the file each time the tool is run, allowing you to easily refer to previous outputs should the need arise.

It's also worth mentioning that when you execute your program with Memwatch enabled, the tool produces a one-line output on [stdout][34] informing you that some errors were found -- you can then head to the log file for details. If no such error message is produced, you can rest assured that the log file won't contain any errors -- this actually saves time if you're running the tool several times.

Another thing that I liked about Memwatch is that it also provides a way to capture the tool's output from within the code and handle it however you like (refer to the mwSetOutFunc() function in the Memwatch source code for more on this).

#### Other considerations ####

Like Mtrace and Dmalloc, Memwatch requires you to add extra code to your source file -- you have to include the memwatch.h header file in your code. Also, while compiling your program, you need to either compile memwatch.c along with your program's source files or include the object module from a compile of that file, as well as define the MEMWATCH and MW_STDIO variables on the command line. Needless to say, the -g compiler option is also required if you want exact line numbers in the output.

There are some features that it doesn't contain. For example, the tool cannot detect attempts to write to an address that has already been freed, or to read data from outside the allocated memory. Also, it's not thread-safe. Finally, as I've already pointed out, there is no guarantee of how the tool will behave if you use it with programs written in C++.

#### Bottom line ####

Memwatch can detect many memory-related problems, making it a handy debugging tool when dealing with projects written in C. Given that it has a very small source code base, you can learn how the tool works, debug it if the need arises, and even extend or update its functionality as per your requirements.

### [Mtrace][35] ###

**Developers**: Roland McGrath and Ulrich Drepper

**Reviewed version**: 2.21

**Linux support**: All flavors

**License**: GNU LGPL

Mtrace is a memory-debugging tool included in [the GNU C library][36]. It works with both C and C++ programs on Linux, and detects memory leaks caused by unbalanced calls to the malloc() and free() functions.

![cw mtrace output](http://images.techhive.com/images/article/2015/11/cw_mtrace-output-100627039-large.idge.png)

Mtrace

The tool is implemented in the form of a function called mtrace(), which traces all malloc/free calls made by a program and logs the information in a user-specified file. Because the file contains data in a computer-readable format, a Perl script -- also named mtrace -- is used to convert and display it in human-readable form.

#### What's new ####

[The Mtrace source][37] and [the Perl file][38] that now come with the GNU C library (version 2.21) add nothing new to the tool aside from an update to the copyright dates.

#### What's good about it ####

The best part about Mtrace is that the learning curve for it isn't steep; all you need to understand is how and where to add the mtrace() -- and the corresponding muntrace() -- function calls in your code, and how to use the mtrace Perl script. The latter is very straightforward -- all you have to do is run the mtrace <program-executable> <log-file-generated-upon-program-execution> command. (For an example, see the last command in the screenshot above.)

Another thing that I like about Mtrace is that it's scalable -- which means that you can not only use it to debug a complete program, but can also use it to detect memory leaks in individual modules of the program. Just call the mtrace() and muntrace() functions within each module.

Finally, since the tool is triggered when the mtrace() function -- which you add in your program's source code -- is executed, you have the flexibility to enable the tool dynamically (during program execution) [using signals][39].

#### Other considerations ####

Because the calls to the mtrace() and muntrace() functions -- which are declared in the mcheck.h file that you need to include in your program's source -- are fundamental to Mtrace's operation (the muntrace() function is not [always required][40]), the tool requires programmers to make changes in their code at least once.

Be aware that you need to compile your program with the -g option (provided by both the [GCC][41] and [G++][42] compilers), which enables the debugging tool to display exact line numbers in the output. In addition, some programs (depending on how big their source code is) can take a long time to compile. Finally, compiling with -g increases the size of the executable (because it produces extra information for debugging), so you have to remember to recompile the program without -g after testing has been completed.

To use Mtrace, you need some basic knowledge of environment variables in Linux, given that the path to the user-specified file -- which the mtrace() function uses to log all the information -- has to be set as the value of the MALLOC_TRACE environment variable before the program is executed.

Feature-wise, Mtrace is limited to detecting memory leaks and attempts to free memory that was never allocated. It can't detect other memory-related issues such as illegal memory access or use of uninitialized memory. Also, [there have been complaints][43] that it's not [thread-safe][44].

### Conclusions ###

Needless to say, each memory debugger that I've discussed here has its own qualities and limitations. So, which one is best suited for you mostly depends on what features you require, although ease of setup and use might also be a deciding factor in some cases.

Mtrace is best suited for cases where you just want to catch memory leaks in your software program. It can save you some time, too, since the tool comes pre-installed on your Linux system, something which is also helpful in situations where the development machines aren't connected to the Internet or you aren't allowed to download a third-party tool for any kind of debugging.

Dmalloc, on the other hand, can not only detect more error types than Mtrace, but also provides more features, such as runtime configurability and GDB integration. Also, unlike any other tool discussed here, Dmalloc is thread-safe. Not to mention that it comes with detailed documentation, making it ideal for beginners.

Although Memwatch comes with even more comprehensive documentation than Dmalloc, and can detect even more error types, you can only use it with software written in the C programming language. One of its standout features is that it lets you handle its output from within the code of your program, something that is helpful in case you want to customize the format of the output.

If making changes to your program's source code is not what you want, you can use Electric Fence. However, keep in mind that it can only detect a couple of error types, and that doesn't include memory leaks. Plus, you also need to know GDB basics to make the most of this memory-debugging tool.

Memcheck is probably the most comprehensive of them all. It detects more error types and provides more features than any other tool discussed here -- and it doesn't require you to make any changes in your program's source code. But be aware that, while the learning curve is not very high for basic usage, if you want to use its advanced features, a level of expertise is definitely required.

--------------------------------------------------------------------------------

via: http://www.computerworld.com/article/3003957/linux/review-5-memory-debuggers-for-linux-coding.html

作者:[Himanshu Arora][a]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.computerworld.com/author/Himanshu-Arora/
[1]:https://openclipart.org/detail/132427/penguin-admin
[2]:https://en.wikipedia.org/wiki/Manual_memory_management
[3]:http://dmalloc.com/
[4]:https://www.owasp.org/index.php/Double_Free
[5]:https://stuff.mit.edu/afs/sipb/project/gnucash-test/src/dmalloc-4.8.2/dmalloc.html#Fence-Post%20Overruns
[6]:http://dmalloc.com/releases/notes/dmalloc-5.5.2.html
[7]:http://www.gnu.org/software/gdb/
[8]:http://dmalloc.com/docs/
[9]:http://dmalloc.com/docs/latest/online/dmalloc_26.html#SEC32
[10]:http://dmalloc.com/docs/latest/online/dmalloc_23.html#SEC29
[11]:https://en.wikipedia.org/wiki/Memory_management#Dynamic_memory_allocation
[12]:https://en.wikipedia.org/wiki/Segmentation_fault
[13]:https://en.wikipedia.org/wiki/GNU_Compiler_Collection
[14]:http://www.gnu.org/software/gdb/
[15]:https://launchpad.net/ubuntu/+source/electric-fence/2.2.3
[16]:http://valgrind.org/docs/manual/mc-manual.html
[17]:http://valgrind.org/info/developers.html
[18]:http://valgrind.org/
[19]:http://valgrind.org/docs/manual/dist.news.html
[20]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles
[21]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.mempools
[22]:http://stackoverflow.com/questions/4642671/c-memory-allocators
[23]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands
[24]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs
[25]:http://valgrind.org/docs/manual/valgrind_manual.pdf
[26]:http://sourceforge.net/p/valgrind/mailman/message/30292453/
[27]:https://msdn.microsoft.com/en-us/library/ee798431%28v=cs.20%29.aspx
[28]:http://www.computerworld.com/article/2484425/linux/5-free-linux-text-editors-for-programming-and-word-processing.html?nsdr=true&page=2
[29]:http://valgrind.org/docs/manual/manual-core.html#manual-core.limits
[30]:http://valgrind.org/info/
[31]:http://www.linkdata.se/sourcecode/memwatch/
[32]:http://www.cecalc.ula.ve/documentacion/tutoriales/WorkshopDebugger/007-2579-007/sgi_html/ch09.html
[33]:http://c2.com/cgi/wiki?WildPointer
[34]:https://en.wikipedia.org/wiki/Standard_streams#Standard_output_.28stdout.29
[35]:http://www.gnu.org/software/libc/manual/html_node/Tracing-malloc.html
[36]:https://www.gnu.org/software/libc/
[37]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.c;h=df10128b872b4adc4086cf74e5d965c1c11d35d2;hb=HEAD
[38]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.pl;h=0737890510e9837f26ebee2ba36c9058affb0bf1;hb=HEAD
[39]:http://webcache.googleusercontent.com/search?q=cache:s6ywlLtkSqQJ:www.gnu.org/s/libc/manual/html_node/Tips-for-the-Memory-Debugger.html+&cd=1&hl=en&ct=clnk&gl=in&client=Ubuntu
[40]:http://www.gnu.org/software/libc/manual/html_node/Using-the-Memory-Debugger.html#Using-the-Memory-Debugger
[41]:http://linux.die.net/man/1/gcc
[42]:http://linux.die.net/man/1/g++
[43]:https://sourceware.org/ml/libc-help/2014-05/msg00008.html
[44]:https://en.wikipedia.org/wiki/Thread_safety

20 Years of GIMP Evolution: Step by Step
================================================================================

注:youtube 视频

<iframe width="660" height="371" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/PSJAzJ6mkVw?feature=oembed"></iframe>

[GIMP][1] (GNU Image Manipulation Program) is a superb free and open source graphics editor. Development began in 1995 as a student project of Peter Mattis and Spencer Kimball at the University of California, Berkeley. In 1997 the project was renamed "GIMP" and became an official part of the [GNU Project][2]. Over the years GIMP has remained one of the best graphics editors around, and the "GIMP vs. Photoshop" holy war one of the most popular debates.

The first announcement, dated 21.11.1995:

> From: Peter Mattis
>
> Subject: ANNOUNCE: The GIMP
>
> Date: 1995-11-21
>
> Message-ID: <48s543$r7b@agate.berkeley.edu>
>
> Newsgroups: comp.os.linux.development.apps,comp.os.linux.misc,comp.windows.x.apps
>
> The GIMP: the General Image Manipulation Program
> ------------------------------------------------
>
> The GIMP is designed to provide an intuitive graphical interface to a
> variety of image editing operations. Here is a list of the GIMP's
> major features:
>
> Image viewing
> -------------
>
> * Supports 8, 15, 16 and 24 bit color.
> * Ordered and Floyd-Steinberg dithering for 8 bit displays.
> * View images as rgb color, grayscale or indexed color.
> * Simultaneously edit multiple images.
> * Zoom and pan in real-time.
> * GIF, JPEG, PNG, TIFF and XPM support.
>
> Image editing
> -------------
>
> * Selection tools including rectangle, ellipse, free, fuzzy, bezier
> and intelligent.
> * Transformation tools including rotate, scale, shear and flip.
> * Painting tools including bucket, brush, airbrush, clone, convolve,
> blend and text.
> * Effects filters (such as blur, edge detect).
> * Channel & color operations (such as add, composite, decompose).
> * Plug-ins which allow for the easy addition of new file formats and
> new effect filters.
> * Multiple undo/redo.

### GIMP 0.54, 1996 ###

![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/054.png)

GIMP 0.54 required an X11 display, an X server and the Motif 1.2 widget toolkit, and it supported 8-, 15-, 16- and 24-bit color depths with RGB and grayscale color. Supported image formats: GIF, JPEG, PNG, TIFF and XPM.

Basic functionality: rectangle, ellipse, free, fuzzy, bezier and intelligent selection tools, plus rotate, scale, shear, clone, blend and flip operations on images.

Extended tools: text operations, effects filters, tools for channel and color manipulation, and undo and redo operations. GIMP has supported a plugin system since the first version.

GIMP 0.54 could be run on Linux, HP-UX, Solaris and SGI IRIX.

### GIMP 0.60, 1997 ###

![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/060.gif)

This was a development release, not intended for all users. GIMP got new toolkits -- GDK (GIMP Drawing Kit) and GTK (GIMP Toolkit) -- and Motif support was deprecated. GIMP Toolkit was also the beginning of the GTK+ cross-platform widget toolkit. New features:

- basic layers
- sub-pixel sampling
- brush spacing
- improved airbrush
- paint modes

### GIMP 0.99, 1997 ###

![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/099.png)

Since version 0.99, GIMP has supported scripts and macros (Script-Fu). GTK and GDK, with some improvements, got a new name -- GTK+. Other improvements:

- support for large images (larger than 100 MB)
- new native format -- XCF
- new API -- making it easy to write plugins and extensions

### GIMP 1.0, 1998 ###

![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/100.gif)

GIMP and GTK+ were split into separate projects. The official GIMP website was reconstructed and contained new tutorials, plugins and documentation. New features:

- tile-based memory management
- massive changes in the plugin API
- the XCF format now supports layers, guides and selections
- web interface
- online graphics generation

### GIMP 1.2, 2000 ###

New features:

- translations for non-English languages
- many bugs fixed in GTK+ and GIMP
- many new plugins
- image map
- new tools: resize, measure, dodge, burn, smudge, sample colorize and curve bend
- image pipes
- preview of images before saving
- scaled brush preview
- recursive selection by path
- new navigation window
- drag'n'drop
- watermark support

### GIMP 2.0, 2004 ###

![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/200.png)

The biggest change: the new GTK+ 2.x toolkit.

### GIMP 2.2, 2004 ###

![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/220.png)

Many bugfixes and drag'n'drop support.

### GIMP 2.4, 2007 ###

![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/240.png)

New features:

- better drag'n'drop support
- Ti-Fu was replaced by Script-Fu -- the new script interpreter
- new plugins: photocopy, softglow, neon, cartoon, dog, glob and others

### GIMP 2.6, 2008 ###

New features:

- renewed graphical interface
- new selection tool
- GEGL (GEneric Graphics Library) integration
- "The Utility Window Hint" for MDI behavior

### GIMP 2.8, 2012 ###

![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/280.png)

New features:

- some visual changes in the GUI
- new save and export menus
- renewed text editor
- layer group support
- JPEG2000 support and export to PDF
- webpage screenshot tool

--------------------------------------------------------------------------------

via: https://tlhp.cf/20-years-of-gimp-evolution/

作者:[Pavlo Rudyi][a]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://tlhp.cf/author/paul/
[1]:https://gimp.org/
[2]:http://www.gnu.org/

Linux Foundation Explains a "World without Linux" and Open Source
================================================================================

> The Linux Foundation responds to questions about its "World without Linux" movies, including what the Internet would be like without Linux and other open source software.

![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/hey_22.png)

Would the world really be tremendously different if Linux, the open source operating system kernel, did not exist? Would there be no Internet or movies? Those are the questions some viewers of the [Linux Foundation's][1] ongoing "[World without Linux][2]" video series are asking. Here are some answers.

In case you've missed it, the "World without Linux" series is a collection of quirky short films that depict, well, a world without Linux (and open source software more generally). They have emphasized themes like [Linux's role in movie-making][3] and in [serving the Internet][4].

To offer perspective on the series's claims, direction and hidden symbols, Jennifer Cloer, vice president of communications at The Linux Foundation, recently sent The VAR Guy responses to some common queries about the movies. Below are the answers, in her own words.

### The latest episode takes Sam and Annie to the movies. Would today's graphics really be that much different without Linux? ###

In episode #4, we do a bit of a parody on "Avatar." Love it or hate it, the graphics in the real "Avatar" are pretty impressive. In a world without Linux, the graphics would be horrible but we wouldn't even know it because we wouldn't know any better. But in fact, "Avatar" was created using Linux. Weta Digital used one of the world's largest Linux clusters to render the film and do 3D modeling. It's also been reported that "Lord of the Rings," "Fantastic Four" and "King Kong," among others, have used Linux. We hope this episode can bring attention to that work, which hasn't been widely reported.

### Some people criticized the original episode for concluding there would be no Internet without Linux. What's your reaction? ###

We enjoyed the debate that resulted from the debut episode. With more than 100,000 views to date of that episode alone, it brought awareness to the role that Linux plays in society and to the worldwide community of contributors and supporters. Of course the Internet would exist without Linux but it wouldn't be the Internet we know today and it wouldn't have matured at the pace it has. Each episode makes a bold and fun statement about Linux's role in our every day lives. We hope this can help extend the story of Linux to more people around the world.

### Why is Sam and Annie's cat named String? ###

Nothing in the series is a coincidence. Look closely and you'll find all kinds of inside Linux and geek jokes. String is named after String theory and was named by our Linux.com Editor Libby Clark. In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. Kind of like Sam, Annie and String in a World Without Linux.

### What can we expect from the next two episodes and, in particular, the finale? When will it air? ###

In episode #5, we'll go to space and experience what a world without Linux would mean to exploration. It's a wild ride. In the finale, we finally get to see Linus in a world without Linux. There have been clues throughout the series as to what this finale will include but I can't give more than that away since there are ongoing contests to find the clues. And I can't give away the air date for the finale! You'll have to follow #WorldWithoutLinux to learn more.

### Can you give us a hint on the clues in episode #4? ###

There is another reference to the Free Burger Restaurant in this episode. Linux also actually does appear in this world without Linux but in a very covert way; you could say it's like reading Linux in another language. And, of course, just for fun, String makes another appearance.

### Is the series achieving what you hoped? ###

Yes. We're really happy to see people share and engage with these stories. We hope that it's reaching people who might not otherwise know the story of Linux or understand its pervasiveness in the world today. It's really about surfacing this to a broader audience and giving thanks to the worldwide community of developers and companies that support Linux and all the things it makes possible.

--------------------------------------------------------------------------------

via: http://thevarguy.com/open-source-application-software-companies/linux-foundation-explains-world-without-linux-and-open-so

作者:[Christopher Tozzi][a]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://thevarguy.com/author/christopher-tozzi
[1]:http://linuxfoundation.org/
[2]:http://www.linuxfoundation.org/world-without-linux
[3]:http://thevarguy.com/open-source-application-software-companies/new-linux-foundation-video-highlights-role-open-source-3d
[4]:http://thevarguy.com/open-source-application-software-companies/100715/would-internet-exist-without-linux-yes-without-open-sourc

@ -0,0 +1,77 @@
|
||||
Microsoft and Linux: True Romance or Toxic Love?
================================================================================

Every now and then, you come across a news story that makes you choke on your coffee or splutter hot latte all over your monitor. Microsoft's recent proclamations of love for Linux are an outstanding example of such a story.

Common sense says that Microsoft and the FOSS movement should be perpetual enemies. In the eyes of many, Microsoft embodies most of the greedy excesses that the Free Software movement rejects. In addition, Microsoft previously has labeled Linux as a cancer and the FOSS community as a "pack of thieves".

We can understand why Microsoft has been afraid of a free operating system. When combined with open-source applications that challenge Microsoft's core product line, it threatens Microsoft's grip on the desktop/laptop market.

In spite of Microsoft's fears over its desktop dominance, the Web server marketplace is one arena where Linux has had the greatest impact. Today, the majority of Web servers are Linux boxes. This includes most of the world's busiest sites. The sight of so much unclaimed licensing revenue must be painful indeed for Microsoft.

Handheld devices are another realm where Microsoft has lost ground to free software. At one point, its Windows CE and Pocket PC operating systems were at the forefront of mobile computing. Windows-powered PDA devices were the shiniest and flashiest gadgets around. But, that all ended when Apple released its iPhone. Since then, Android has stepped into the limelight, with Windows Mobile largely ignored and forgotten. The Android platform is built on free and open-source components.

The rapid expansion in Android's market share is due to the open nature of the platform. Unlike with iOS, any phone manufacturer can release an Android handset. And, unlike with Windows Mobile, there are no licensing fees. This has been really good news for consumers. It has led to lots of powerful and cheap handsets appearing from manufacturers all over the world. It's a very definite vindication of the value of FOSS software.

Losing the battle for the Web and mobile computing is a brutal loss for Microsoft. When you consider the size of those two markets combined, the desktop market seems like a stagnant backwater. Nobody likes to lose, especially when money is on the line. And, Microsoft does have a lot to lose. You would expect Microsoft to be bitter about it. And in the past, it has been.

Microsoft has fought back against Linux and FOSS using every weapon at its disposal, from propaganda to patent threats, and although these attacks have slowed the adoption of Linux, they haven't stopped it.

So, you can forgive us for being shocked when Microsoft starts handing out t-shirts and badges that say "Microsoft Loves Linux" at open-source conferences and events. Could it be true? Does Microsoft really love Linux?

Of course, PR slogans and free t-shirts do not equal truth. Actions speak louder than words. And when you consider Microsoft's actions, Microsoft's stance becomes a little more ambiguous.

On the one hand, Microsoft is recruiting hundreds of Linux developers and sysadmins. It's releasing its .NET Core framework as an open-source project with cross-platform support (so that .NET apps can run on OS X and Linux). And, it is partnering with Linux companies to bring popular distros to its Azure platform. In fact, Microsoft even has gone so far as to create its own Linux distro for its Azure data center.

On the other hand, Microsoft continues to launch legal attacks on open-source projects directly and through puppet corporations. It's clear that Microsoft hasn't had some big moral change of heart over proprietary vs. free software, so why the public declarations of adoration?

To state the obvious, Microsoft is a profit-making entity. It's an investment vehicle for its shareholders and a source of income for its employees. Everything it does has a single ultimate goal: revenue. Microsoft doesn't act out of love or even hate (although that's a common accusation).

So the question shouldn't be "does Microsoft really love Linux?" Instead, we should ask how Microsoft is going to profit from all this.

Let's take the open-source release of .NET Core. This move makes it easy to port the .NET runtime to any platform. That extends the reach of Microsoft's .NET framework far beyond the Windows platform.

Opening .NET Core ultimately will make it possible for .NET developers to produce cross-platform apps for OS X, Linux, iOS and even Android--all from a single codebase.

From a developer's perspective, this makes the .NET framework much more attractive than before. Being able to reach many platforms from a single codebase dramatically increases the potential target market for any app developed using the .NET framework.

What's more, a strong Open Source community would provide developers with lots of code to reuse in their own projects. So, the availability of open-source projects would make the .NET framework all the more attractive to developers.

On the plus side, opening .NET Core reduces fragmentation across different platforms and means a wider choice of apps for consumers. That means more choice, both in terms of open-source software and proprietary apps.

From Microsoft's point of view, it would gain a huge army of developers. Microsoft profits by selling training, certification, technical support, development tools (including Visual Studio) and proprietary extensions.

The question we should ask ourselves is does this benefit or hurt the Free Software community?

Widespread adoption of the .NET framework could mean the eventual death of competing open-source projects, forcing us all to dance to Microsoft's tune.

Moving beyond .NET, Microsoft is drawing a lot of attention to its Linux support on its Azure cloud computing platform. Remember, Azure originally was Windows Azure. That's because Windows Server was the only supported operating system. Today, Azure offers support for a number of Linux distros too.

There's one reason for this: paying customers who need and want Linux services. If Microsoft didn't offer Linux virtual machines, those customers would do business with someone else.

It looks like Microsoft is waking up to the fact that Linux is here to stay. Microsoft cannot feasibly wipe it out, so it has to embrace it.

This brings us back to the question of why there is so much buzz about Microsoft and Linux. We're all talking about it, because Microsoft wants us to think about it. After all, all these stories trace back to Microsoft, whether it's through press releases, blog posts or public announcements at conferences. The company is working hard to draw attention to its Linux expertise.

What other possible purpose could be behind Chief Architect Kamala Subramaniam's blog post announcing Azure Cloud Switch? ACS is a custom Linux distro that Microsoft uses to automate the configuration of its switch hardware in the Azure data centers.

ACS is not publicly available. It's intended for internal use in the Azure data center, and it's unlikely that anyone else would be able to find a use for it. In fact, Subramaniam states the same thing herself in her post.

So, Microsoft won't be making any money from selling ACS, and it won't attract a user base by giving it away. Instead, Microsoft gets to draw attention to Linux and Azure, strengthening its position as a Linux cloud computing platform.

Is Microsoft's new-found love for Linux good news for the community?

We shouldn't be quick to forget Microsoft's mantra of Embrace, Extend and Extinguish. Right now, Microsoft is very much in the early stages of embracing Linux. Will Microsoft seek to splinter the community through custom extensions and proprietary "standards"?

Let us know what you think in the comments below.

--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/microsoft-and-linux-true-romance-or-toxic-love-0

作者:[James Darvell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxjournal.com/users/james-darvell

wyangsun translating

Linux workstation security checklist
================================================================================

This is a set of recommendations used by the Linux Foundation for their systems
administrators. All LF employees are remote workers and we use this set of
guidelines to ensure that a sysadmin's system passes core security requirements
in order to reduce the risk of it becoming an attack vector against the rest
of our infrastructure.

Even if your systems administrators are not remote workers, chances are that
they perform a lot of their work either from a portable laptop in a work
environment, or set up their home systems to access the work infrastructure
for after-hours/emergency support. In either case, you can adapt this set of
recommendations to suit your environment.

This, by no means, is an exhaustive "workstation hardening" document, but
rather an attempt at a set of baseline recommendations to avoid most glaring
security errors without introducing too much inconvenience. You may read this
document and think it is way too paranoid, while someone else may think this
barely scratches the surface. Security is just like driving on the highway --
anyone going slower than you is an idiot, while anyone driving faster than you
is a crazy person. These guidelines are merely a basic set of core safety
rules that is neither exhaustive, nor a replacement for experience, vigilance,
and common sense.

Each section is split into two areas:

- The checklist that can be adapted to your project's needs
- A free-form list of considerations that explain what dictated these decisions

## Severity levels

The items in each checklist include the severity level, which we hope will help
guide your decision:

- _(CRITICAL)_ items should definitely be high on the consideration list.
  If not implemented, they will introduce high risks to your workstation
  security.
- _(MODERATE)_ items will improve your security posture, but are less
  important, especially if they interfere too much with your workflow.
- _(LOW)_ items may improve the overall security, but may not be worth the
  convenience trade-offs.
- _(PARANOID)_ is reserved for items we feel will dramatically improve your
  workstation security, but will probably require a lot of adjustment to the
  way you interact with your operating system.

Remember, these are only guidelines. If you feel these severity levels do not
reflect your project's commitment to security, you should adjust them as you
see fit.

## Choosing the right hardware

We do not mandate that our admins use a specific vendor or a specific model, so
this section addresses core considerations when choosing a work system.

### Checklist

- [ ] System supports SecureBoot _(CRITICAL)_
- [ ] System has no firewire, thunderbolt or ExpressCard ports _(MODERATE)_
- [ ] System has a TPM chip _(LOW)_

### Considerations

#### SecureBoot

Despite its controversial nature, SecureBoot offers prevention against many
attacks targeting workstations (Rootkits, "Evil Maid," etc), without
introducing too much extra hassle. It will not stop a truly dedicated attacker,
plus there is a pretty high degree of certainty that state security agencies
have ways to defeat it (probably by design), but having SecureBoot is better
than having nothing at all.

Alternatively, you may set up [Anti Evil Maid][1] which offers a more
wholesome protection against the type of attacks that SecureBoot is supposed
to prevent, but it will require more effort to set up and maintain.

#### Firewire, thunderbolt, and ExpressCard ports

Firewire is a standard that, by design, allows any connecting device full
direct memory access to your system ([see Wikipedia][2]). Thunderbolt and
ExpressCard are guilty of the same, though some later implementations of
Thunderbolt attempt to limit the scope of memory access. It is best if the
system you are getting has none of these ports, but it is not critical, as
they usually can be turned off via UEFI or disabled in the kernel itself.

#### TPM Chip

Trusted Platform Module (TPM) is a crypto chip bundled with the motherboard
separately from the core processor, which can be used for additional platform
security (such as to store full-disk encryption keys), but is not normally used
for day-to-day workstation operation. At best, this is a nice-to-have, unless
you have a specific need to use TPM for your workstation security.

## Pre-boot environment

This is a set of recommendations for your workstation before you even start
with OS installation.

### Checklist

- [ ] UEFI boot mode is used (not legacy BIOS) _(CRITICAL)_
- [ ] Password is required to enter UEFI configuration _(CRITICAL)_
- [ ] SecureBoot is enabled _(CRITICAL)_
- [ ] UEFI-level password is required to boot the system _(LOW)_

### Considerations

#### UEFI and SecureBoot

UEFI, with all its warts, offers a lot of goodies that legacy BIOS doesn't,
such as SecureBoot. Most modern systems come with UEFI mode on by default.

Make sure a strong password is required to enter UEFI configuration mode. Pay
attention, as many manufacturers quietly limit the length of the password you
are allowed to use, so you may need to choose high-entropy short passwords vs.
long passphrases (see below for more on passphrases).

Depending on the Linux distribution you decide to use, you may or may not have
to jump through additional hoops in order to import your distribution's
SecureBoot key that would allow you to boot the distro. Many distributions have
partnered with Microsoft to sign their released kernels with a key that is
already recognized by most system manufacturers, therefore saving you the
trouble of having to deal with key importing.

As an extra measure, before someone is allowed to even get to the boot
partition and try some badness there, let's make them enter a password. This
password should be different from your UEFI management password, in order to
prevent shoulder-surfing. If you shut down and start a lot, you may choose to
not bother with this, as you will already have to enter a LUKS passphrase and
this will save you a few extra keystrokes.

## Distro choice considerations

Chances are you'll stick with a fairly widely-used distribution such as Fedora,
Ubuntu, Arch, Debian, or one of their close spin-offs. In any case, this is
what you should consider when picking a distribution to use.

### Checklist

- [ ] Has a robust MAC/RBAC implementation (SELinux/AppArmor/Grsecurity) _(CRITICAL)_
- [ ] Publishes security bulletins _(CRITICAL)_
- [ ] Provides timely security patches _(CRITICAL)_
- [ ] Provides cryptographic verification of packages _(CRITICAL)_
- [ ] Fully supports UEFI and SecureBoot _(CRITICAL)_
- [ ] Has robust native full disk encryption support _(CRITICAL)_

### Considerations

#### SELinux, AppArmor, and GrSecurity/PaX

Mandatory Access Controls (MAC) or Role-Based Access Controls (RBAC) are an
extension of the basic user/group security mechanism used in legacy POSIX
systems. Most distributions these days either already come bundled with a
MAC/RBAC implementation (Fedora, Ubuntu), or provide a mechanism to add it via
an optional post-installation step (Gentoo, Arch, Debian). Obviously, it is
highly advised that you pick a distribution that comes pre-configured with a
MAC/RBAC system, but if you have strong feelings about a distribution that
doesn't have one enabled by default, do plan to configure it
post-installation.

Distributions that do not provide any MAC/RBAC mechanisms should be strongly
avoided, as traditional POSIX user- and group-based security should be
considered insufficient in this day and age. If you would like to start out
with a MAC/RBAC workstation, AppArmor and PaX are generally considered easier
to learn than SELinux. Furthermore, on a workstation, where there are few or
no externally listening daemons, and where user-run applications pose the
highest risk, GrSecurity/PaX will _probably_ offer more security benefits than
SELinux.

#### Distro security bulletins

Most of the widely used distributions have a mechanism to deliver security
bulletins to their users, but if you are fond of something esoteric, check
whether the developers have a documented mechanism of alerting the users about
security vulnerabilities and patches. Absence of such a mechanism is a major
warning sign that the distribution is not mature enough to be considered for a
primary admin workstation.

#### Timely and trusted security updates

Most of the widely used distributions deliver regular security updates, but it
is worth checking to ensure that critical package updates are provided in a
timely fashion. Avoid using spin-offs and "community rebuilds" for this
reason, as they routinely delay security updates due to having to wait for the
upstream distribution to release them first.

You'll be hard-pressed to find a distribution that does not use cryptographic
signatures on packages, update metadata, or both. That being said, fairly
widely used distributions have been known to go for years before introducing
this basic security measure (Arch, I'm looking at you), so this is a thing
worth checking.

#### Distros supporting UEFI and SecureBoot

Check that the distribution supports UEFI and SecureBoot. Find out whether it
requires importing an extra key or whether it signs its boot kernels with a key
already trusted by systems manufacturers (e.g. via an agreement with
Microsoft). Some distributions do not support UEFI/SecureBoot but offer
alternatives to ensure tamper-proof or tamper-evident boot environments
([Qubes-OS][3] uses Anti Evil Maid, mentioned earlier). If a distribution
doesn't support SecureBoot and has no mechanisms to prevent boot-level attacks,
look elsewhere.

#### Full disk encryption

Full disk encryption is a requirement for securing data at rest, and is
supported by most distributions. As an alternative, systems with
self-encrypting hard drives may be used (normally implemented via the on-board
TPM chip) and offer comparable levels of security plus faster operation, but at
a considerably higher cost.

## Distro installation guidelines

All distributions are different, but here are general guidelines:

### Checklist

- [ ] Use full disk encryption (LUKS) with a robust passphrase _(CRITICAL)_
- [ ] Make sure swap is also encrypted _(CRITICAL)_
- [ ] Require a password to edit bootloader (can be same as LUKS) _(CRITICAL)_
- [ ] Set up a robust root password (can be same as LUKS) _(CRITICAL)_
- [ ] Use an unprivileged account, part of administrators group _(CRITICAL)_
- [ ] Set up a robust user-account password, different from root _(CRITICAL)_

### Considerations

#### Full disk encryption

Unless you are using self-encrypting hard drives, it is important to configure
your installer to fully encrypt all the disks that will be used for storing
your data and your system files. It is not sufficient to simply encrypt the
user directory via auto-mounting cryptfs loop files (I'm looking at you, older
versions of Ubuntu), as this offers no protection for system binaries or swap,
which is likely to contain a slew of sensitive data. The recommended
encryption strategy is to encrypt the LVM device, so only one passphrase is
required during the boot process.

The `/boot` partition will always remain unencrypted, as the bootloader needs
to be able to actually boot the kernel before invoking LUKS/dm-crypt. The
kernel image itself should be protected against tampering with a cryptographic
signature checked by SecureBoot.

In other words, `/boot` should always be the only unencrypted partition on your
system.
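
After installation, one hedged way to sanity-check this layout is to inspect the block-device tree. The helper below (the function name is ours) filters `lsblk`-style "name type mountpoint" output and prints any mountpoint that is not backed by a `crypt` device and is not `/boot`:

```shell
# Sketch: print mountpoints that are NOT backed by a dm-crypt device,
# ignoring /boot (expected to be unencrypted). Feed it the output of:
#   lsblk -rno NAME,TYPE,MOUNTPOINT
check_unencrypted_mounts() {
    awk '$2 != "crypt" && $3 != "" && $3 != "/boot" { print $3 }'
}
```

Typical use is `lsblk -rno NAME,TYPE,MOUNTPOINT | check_unencrypted_mounts`; no output means only `/boot` (plus encrypted volumes) is mounted.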

#### Choosing good passphrases

Modern Linux systems have no limitation of password/passphrase length, so the
only real limitation is your level of paranoia and your stubbornness. If you
boot your system a lot, you will probably have to type at least two different
passwords: one to unlock LUKS, and another one to log in, so having long
passphrases will probably get old really fast. Pick passphrases that are 2-3
words long, easy to type, and preferably from rich/mixed vocabularies.

Examples of good passphrases (yes, you can use spaces):

- nature abhors roombas
- 12 in-flight Jebediahs
- perdon, tengo flatulence

You can also stick with non-vocabulary passwords that are at least 10-12
characters long, if you prefer that to typing passphrases.

Unless you have concerns about physical security, it is fine to write down your
passphrases and keep them in a safe place away from your work desk.
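
If you prefer generated secrets, a minimal sketch for producing the random 10-12 character option mentioned above looks like this (the function name is ours; the word-based variant in the comment assumes a system dictionary is installed):

```shell
# Sketch: random 12-character lowercase/digit password from the kernel CSPRNG.
gen_password() {
    LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c "${1:-12}"
    echo
}
# Word-based passphrase variant (assumes /usr/share/dict/words exists):
#   shuf -n 3 /usr/share/dict/words | paste -sd ' '
```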

#### Root, user passwords and the admin group

We recommend that you use the same passphrase for your root password as you
use for your LUKS encryption (unless you share your laptop with other trusted
people who should be able to unlock the drives, but shouldn't be able to
become root). If you are the sole user of the laptop, then having your root
password be different from your LUKS password has no meaningful security
advantages. Generally, you can use the same passphrase for your UEFI
administration, disk encryption, and root account -- knowing any of these will
give an attacker full control of your system anyway, so there is little
security benefit to have them be different on a single-user workstation.

You should have a different, but equally strong password for your regular user
account that you will be using for day-to-day tasks. This user should be a
member of the admin group (e.g. `wheel` or similar, depending on the
distribution), allowing you to perform `sudo` to elevate privileges.

In other words, if you are the sole user on your workstation, you should have 2
distinct, robust, equally strong passphrases you will need to remember:

**Admin-level**, used in the following locations:

- UEFI administration
- Bootloader (GRUB)
- Disk encryption (LUKS)
- Workstation admin (root user)

**User-level**, used for the following:

- User account and sudo
- Master password for the password manager

All of them, obviously, can be different if there is a compelling reason.

## Post-installation hardening

Post-installation security hardening will depend greatly on your distribution
of choice, so it is futile to provide detailed instructions in a general
document such as this one. However, here are some steps you should take:

### Checklist

- [ ] Globally disable firewire and thunderbolt modules _(CRITICAL)_
- [ ] Check your firewalls to ensure all incoming ports are filtered _(CRITICAL)_
- [ ] Make sure root mail is forwarded to an account you check _(CRITICAL)_
- [ ] Check to ensure sshd service is disabled by default _(MODERATE)_
- [ ] Set up an automatic OS update schedule, or update reminders _(MODERATE)_
- [ ] Configure the screensaver to auto-lock after a period of inactivity _(MODERATE)_
- [ ] Set up logwatch _(MODERATE)_
- [ ] Install and use rkhunter _(LOW)_
- [ ] Install an Intrusion Detection System _(PARANOID)_

### Considerations

#### Blacklisting modules

To blacklist the firewire and thunderbolt modules, add the following lines to
the file `/etc/modprobe.d/blacklist-dma.conf`:

    blacklist firewire-core
    blacklist thunderbolt

The modules will be blacklisted upon reboot. It doesn't hurt doing this even if
you don't have these ports (but it doesn't do anything either).
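
As a hedged sanity check (the helper name is ours), you can confirm after a reboot that neither module is loaded by inspecting `/proc/modules`; note the kernel reports the module as `firewire_core`, with an underscore:

```shell
# Sketch: list any DMA-capable modules that are still loaded.
# $1 lets a different modules file be supplied; defaults to /proc/modules.
loaded_dma_modules() {
    grep -E '^(firewire_core|thunderbolt) ' "${1:-/proc/modules}" 2>/dev/null \
        | cut -d' ' -f1
}
# On a properly configured system this prints nothing after a reboot.
```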

#### Root mail

By default, root mail is just saved on the system and tends to never be read.
Make sure you set your `/etc/aliases` to forward root mail to a mailbox that
you actually read, otherwise you may miss important system notifications and
reports:

    # Person who should get root's mail
    root: bob@example.com

Run `newaliases` after this edit and test it out to make sure that it actually
gets delivered, as some email providers will reject email coming in from
nonexistent or non-routable domain names. If that is the case, you will need to
play with your mail forwarding configuration until this actually works.
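
A scripted version of that edit might look like the sketch below (the helper name is ours and the address is the illustrative one from the example above); on a real system you would run it against `/etc/aliases` and then call `newaliases`:

```shell
# Sketch: point the root alias in an aliases file at a real mailbox.
set_root_alias() {
    # $1: path to the aliases file, $2: destination address
    sed -i "s|^root:.*|root: $2|" "$1"
}
# e.g. set_root_alias /etc/aliases bob@example.com && newaliases
```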

#### Firewalls, sshd, and listening daemons

The default firewall settings will depend on your distribution, but many of
them will allow incoming `sshd` ports. Unless you have a compelling legitimate
reason to allow incoming ssh, you should filter that out and disable the `sshd`
daemon:

    systemctl disable sshd.service
    systemctl stop sshd.service

You can always start it temporarily if you need to use it.

In general, your system shouldn't have any listening ports apart from
responding to ping. This will help safeguard you against network-level 0-day
exploits.
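
One hedged way to audit this is to look for sockets bound beyond loopback; the helper below (our name) parses `ss -tlnH`-style output, where the local address is the fourth column:

```shell
# Sketch: print listening sockets that are not bound to loopback.
nonlocal_listeners() {
    # stdin: output of `ss -tlnH` (one socket per line, no header)
    awk '{ print $4 }' | grep -Ev '^(127\.|\[::1\])' || true
}
# Typical use: ss -tlnH | nonlocal_listeners
# An empty result means nothing is reachable from the network.
```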

#### Automatic updates or notifications

It is recommended to turn on automatic updates, unless you have a very good
reason not to do so, such as fear that an automatic update would render your
system unusable (it's happened in the past, so this fear is not unfounded). At
the very least, you should enable automatic notifications of available updates.
Most distributions already have this service automatically running for you, so
chances are you don't have to do anything. Consult your distribution
documentation to find out more.

You should apply all outstanding errata as soon as possible, even if something
isn't specifically labeled as a "security update" or doesn't have an associated
CVE code. All bugs have the potential of being security bugs and erring on the
side of newer, unknown bugs is _generally_ a safer strategy than sticking with
old, known ones.
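
How you enable this is distribution-specific; as one illustration only (package and unit names are for the named distributions and should be verified against their documentation):

```shell
# Illustrative only -- names differ per distribution:
#   Fedora:        dnf install dnf-automatic
#                  systemctl enable --now dnf-automatic-install.timer
#   Debian/Ubuntu: apt install unattended-upgrades
#                  dpkg-reconfigure -plow unattended-upgrades
```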

#### Watching logs

You should have a keen interest in what happens on your system. For this
reason, you should install `logwatch` and configure it to send nightly activity
reports of everything that happens on your system. This won't prevent a
dedicated attacker, but is a good safety-net feature to have in place.

Note that many systemd distros will no longer automatically install a syslog
server that `logwatch` needs (due to systemd relying on its own journal), so
you will need to install and enable `rsyslog` to make sure your `/var/log` is
not empty before logwatch will be of any use.
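
A quick, hedged way to check whether anything is being written there at all (the helper name is ours):

```shell
# Sketch: count regular files under a log directory; zero suggests no
# syslog daemon is writing and logwatch would have nothing to report.
log_file_count() {
    find "${1:-/var/log}" -type f 2>/dev/null | wc -l
}
```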

#### Rkhunter and IDS

Installing `rkhunter` and an intrusion detection system (IDS) like `aide` or
`tripwire` will not be that useful unless you actually understand how they work
and take the necessary steps to set them up properly (such as keeping the
databases on external media, running checks from a trusted environment,
remembering to refresh the hash databases after performing system updates and
configuration changes, etc). If you are not willing to take these steps and
adjust how you do things on your own workstation, these tools will introduce
hassle without any tangible security benefit.

We do recommend that you install `rkhunter` and run it nightly. It's fairly
easy to learn and use, and though it will not deter a sophisticated attacker,
it may help you catch your own mistakes.
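
A nightly run can be wired up with a cron entry along these lines (the path and flags are illustrative and vary by distribution; many packages already ship their own cron job):

```shell
# Hypothetical /etc/cron.d/rkhunter entry -- verify the flags against your
# distro's rkhunter package before using:
#   30 3 * * * root /usr/bin/rkhunter --cronjob --update --report-warnings-only
```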

## Personal workstation backups

Workstation backups tend to be overlooked or done in a haphazard, often unsafe
manner.

### Checklist

- [ ] Set up encrypted workstation backups to external storage _(CRITICAL)_
- [ ] Use zero-knowledge backup tools for cloud backups _(MODERATE)_

### Considerations

#### Full encrypted backups to external storage

It is handy to have an external hard drive where one can dump full backups
without having to worry about such things as bandwidth and upstream speeds
(in this day and age most providers still offer dramatically asymmetric
upload/download speeds). Needless to say, this hard drive needs to be in itself
encrypted (again, via LUKS), or you should use a backup tool that creates
encrypted backups, such as `duplicity` or its GUI companion, `deja-dup`. I
recommend using the latter with a good randomly generated passphrase, stored in
your password manager. If you travel with your laptop, leave this drive at home
to have something to come back to in case your laptop is lost or stolen.

In addition to your home directory, you should also back up `/etc` and
`/var/log` for various forensic purposes.
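
A minimal sketch of bundling those locations for an encrypted target (the function name and the `/mnt/backup` path are illustrative; on a real system the archive would land on LUKS-encrypted storage or be handed to `duplicity`):

```shell
# Sketch: archive the directories worth backing up into a single tarball.
make_backup_archive() {
    # $1: output tar file; remaining args: directories to include
    out=$1; shift
    tar -cf "$out" "$@"
}
# e.g. make_backup_archive /mnt/backup/ws.tar "$HOME" /etc /var/log
```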
|
||||
|
||||
Above all, avoid copying your home directory onto any unencrypted storage, even
|
||||
as a quick way to move your files around between systems, as you will most
|
||||
certainly forget to erase it once you're done, exposing potentially private or
|
||||
otherwise security sensitive data to snooping hands -- especially if you keep
|
||||
that storage media in the same bag with your laptop.

#### Selective zero-knowledge backups off-site

Off-site backups are also extremely important and can be done either to your
employer, if they offer space for it, or to a cloud provider. You can set up a
separate duplicity/deja-dup profile to include only the most important files,
in order to avoid transferring huge amounts of data that you don't really
care to back up off-site (internet cache, music, downloads, etc).

Alternatively, you can use a zero-knowledge backup tool, such as
[SpiderOak][5], which offers an excellent Linux GUI tool and has additional
useful features such as synchronizing content between multiple systems and
platforms.

## Best practices

What follows is a curated list of best practices that we think you should
adopt. It is most certainly non-exhaustive, but rather attempts to offer
practical advice that strikes a workable balance between security and overall
usability.

### Browsing

There is no question that the web browser will be the piece of software with
the largest and most exposed attack surface on your system. It is a tool
written specifically to download and execute untrusted, frequently hostile
code. It attempts to shield you from this danger by employing multiple
mechanisms such as sandboxes and code sanitization, but they have all been
previously defeated on multiple occasions. You should learn to approach
browsing websites as the most insecure activity you'll engage in on any given
day.

There are several ways you can reduce the impact of a compromised browser, but
the truly effective ones will require significant changes in the way you
operate your workstation.

#### 1: Use two different browsers

This is the easiest to do, but only offers minor security benefits. Not all
browser compromises give an attacker full unfettered access to your system --
sometimes they are limited to allowing one to read local browser storage,
steal active sessions from other tabs, capture input entered into the browser,
etc. Using two different browsers, one for work/high security sites and
another for everything else, will help prevent minor compromises from giving
attackers access to the whole cookie jar. The main inconvenience will be the
amount of memory consumed by two different browser processes.

Here's what we recommend:

##### Firefox for work and high security sites

Use Firefox to access work-related sites, where extra care should be taken to
ensure that data like cookies, sessions, login information, keystrokes, etc,
most definitely does not fall into attackers' hands. You should NOT use
this browser for accessing any other sites except a select few.

You should install the following Firefox add-ons:

- [ ] NoScript _(CRITICAL)_
  - NoScript prevents active content from loading, except from user
    whitelisted domains. It is a great hassle to use with your default browser
    (though it offers really good security benefits), so we recommend only
    enabling it on the browser you use to access work-related sites.

- [ ] Privacy Badger _(CRITICAL)_
  - EFF's Privacy Badger will prevent most external trackers and ad platforms
    from being loaded, which will help avoid compromises on these tracking
    sites from affecting your browser (trackers and ad sites are very commonly
    targeted by attackers, as they allow rapid infection of thousands of
    systems worldwide).

- [ ] HTTPS Everywhere _(CRITICAL)_
  - This EFF-developed add-on will ensure that most of your sites are accessed
    over a secure connection, even if a link you click is using http:// (great
    for avoiding a number of attacks, such as [SSL-strip][7]).

- [ ] Certificate Patrol _(MODERATE)_
  - This tool will alert you if the site you're accessing has recently changed
    its TLS certificate -- especially if it wasn't nearing its expiration date
    or if it is now using a different certification authority. It helps
    alert you if someone is trying to man-in-the-middle your connection,
    but generates a lot of benign false positives.

You should leave Firefox as your default browser for opening links, as
NoScript will prevent most active content from loading or executing.

##### Chrome/Chromium for everything else

Chromium developers are ahead of Firefox in adding a lot of nice security
features (at least [on Linux][6]), such as seccomp sandboxes, kernel user
namespaces, etc, which act as an added layer of isolation between the sites
you visit and the rest of your system. Chromium is the upstream open-source
project, and Chrome is Google's proprietary binary build based on it (insert
the usual paranoid caution about not using it for anything you don't want
Google to know about).

It is recommended that you install the **Privacy Badger** and **HTTPS
Everywhere** extensions in Chrome as well, and give it a distinct theme from
Firefox to indicate that this is your "untrusted sites" browser.

#### 2: Use two different browsers, one inside a dedicated VM

This is a similar recommendation to the above, except you will add an extra
step of running Chrome inside a dedicated VM that you access via a fast
protocol allowing you to share clipboards and forward sound events (e.g.
Spice or RDP). This will add an excellent layer of isolation between the
untrusted browser and the rest of your work environment, ensuring that
attackers who manage to fully compromise your browser will then have to
additionally break out of the VM isolation layer in order to get to the rest
of your system.

This is a surprisingly workable configuration, but requires a lot of RAM and
fast processors that can handle the increased load. It will also require a
significant amount of dedication on the part of the admin, who will need to
adjust their work practices accordingly.

#### 3: Fully separate your work and play environments via virtualization

See the [Qubes-OS project][3], which strives to provide a high-security
workstation environment via compartmentalizing your applications into
separate, fully isolated VMs.

### Password managers

#### Checklist

- [ ] Use a password manager _(CRITICAL)_
- [ ] Use unique passwords on unrelated sites _(CRITICAL)_
- [ ] Use a password manager that supports team sharing _(MODERATE)_
- [ ] Use a separate password manager for non-website accounts _(PARANOID)_

#### Considerations

Using good, unique passwords should be a critical requirement for every member
of your team. Credential theft is happening all the time -- either via
compromised computers, stolen database dumps, remote site exploits, or any
number of other means. No credentials should ever be reused across sites,
especially for critical applications.

##### In-browser password manager

Every browser has a mechanism for saving passwords that is fairly secure and
can sync with vendor-maintained cloud storage while keeping the data encrypted
with a user-provided passphrase. However, this mechanism has important
disadvantages:

1. It does not work across browsers
2. It does not offer any way of sharing credentials with team members

There are several well-supported, free-or-cheap password managers that are
well-integrated into multiple browsers, work across platforms, and offer
group sharing (usually as a paid service). Solutions can be easily found via
search engines.

##### Standalone password manager

One of the major drawbacks of any password manager that comes integrated with
the browser is the fact that it's part of the application that is most likely
to be attacked by intruders. If this makes you uncomfortable (and it should),
you may choose to have two different password managers -- one for websites
that is integrated into your browser, and one that runs as a standalone
application. The latter can be used to store high-risk credentials such as
root passwords, database passwords, other shell account credentials, etc.

It may be particularly useful to have such a tool for sharing superuser
account credentials with other members of your team (server root passwords,
ILO passwords, database admin passwords, bootloader passwords, etc).

A few tools can help you:

- [KeePassX][8], which improves team sharing in version 2
- [Pass][9], which uses text files and PGP and integrates with git
- [Django-Pstore][10], which uses GPG to share credentials between admins
- [Hiera-Eyaml][11], which, if you are already using Puppet for your
  infrastructure, may be a handy way to track your server/service credentials
  as part of your encrypted Hiera data store

### Securing SSH and PGP private keys

Personal encryption keys, including SSH and PGP private keys, are going to be
the most prized items on your workstation -- something the attackers will be
most interested in obtaining, as that would allow them to further attack your
infrastructure or impersonate you to other admins. You should take extra steps
to ensure that your private keys are well protected against theft.

#### Checklist

- [ ] Strong passphrases are used to protect private keys _(CRITICAL)_
- [ ] PGP Master key is stored on removable storage _(MODERATE)_
- [ ] Auth, Sign and Encrypt Subkeys are stored on a smartcard device _(MODERATE)_
- [ ] SSH is configured to use the PGP Auth key as its ssh private key _(MODERATE)_

#### Considerations

The best way to prevent private key theft is to use a smartcard to store your
encryption private keys and never copy them onto the workstation. There are
several manufacturers that offer OpenPGP capable devices:

- [Kernel Concepts][12], where you can purchase both the OpenPGP compatible
  smartcards and the USB readers, should you need one.
- [Yubikey NEO][13], which offers OpenPGP smartcard functionality in addition
  to many other cool features (U2F, PIV, HOTP, etc).

It is also important to make sure that the master PGP key is not stored on the
main workstation, and only subkeys are used. The master key will only be
needed when signing someone else's keys or creating new subkeys -- operations
which do not happen very frequently. You may follow the [Debian subkeys][14]
guide to learn how to move your master key to removable storage and how to
create subkeys.

You should then configure your gnupg agent to act as the ssh agent and use the
smartcard-based PGP Auth key as your ssh private key. We publish a
[detailed guide][15] on how to do that using either a smartcard reader or a
Yubikey NEO.
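
The [detailed guide][15] covers the exact steps; the heart of that
configuration is roughly the following sketch (it assumes GnuPG 2.1+, where
the agent exposes an ssh socket; older versions used a different mechanism):

```
# ~/.gnupg/gpg-agent.conf
enable-ssh-support

# In your shell rc file, point ssh at the gpg-agent socket:
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
```

With the smartcard inserted, `ssh-add -L` should then list the card's Auth
key as an available ssh identity.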

If you are not willing to go that far, at least make sure you have a strong
passphrase on both your PGP private key and your SSH private key, which will
make it harder for attackers to steal and use them.

### SELinux on the workstation

If you are using a distribution that comes bundled with SELinux (such as
Fedora), here are some recommendations on how to make the best use of it to
maximize your workstation security.

#### Checklist

- [ ] Make sure SELinux is enforcing on your workstation _(CRITICAL)_
- [ ] Never blindly run `audit2allow -M`, always check _(CRITICAL)_
- [ ] Never `setenforce 0` _(MODERATE)_
- [ ] Switch your account to SELinux user `staff_u` _(MODERATE)_

#### Considerations

SELinux is a Mandatory Access Control (MAC) extension to core POSIX
permissions functionality. It is mature, robust, and has come a long way since
its initial roll-out. Regardless, many sysadmins to this day repeat the
outdated mantra of "just turn it off."

That being said, SELinux will have limited security benefits on the
workstation, as most applications you will be running as a user are going to
be running unconfined. It does provide enough net benefit to warrant leaving
it on, as it will likely help prevent an attacker from escalating privileges
to gain root-level access via a vulnerable daemon service.

Our recommendation is to leave it on and enforcing.

##### Never `setenforce 0`

It's tempting to use `setenforce 0` to flip SELinux into permissive mode
on a temporary basis, but you should avoid doing that. This essentially turns
off SELinux for the entire system, while what you really want is to
troubleshoot a particular application or daemon.

Instead of `setenforce 0` you should be using `semanage permissive -a
[somedomain_t]` to put only that domain into permissive mode. First, find out
which domain is causing troubles by running `ausearch`:

    ausearch -ts recent -m avc

and then look for the `scontext=` (source SELinux context) line, like so:

    scontext=staff_u:staff_r:gpg_pinentry_t:s0-s0:c0.c1023
                             ^^^^^^^^^^^^^^

This tells you that the domain being denied is `gpg_pinentry_t`, so if you
want to troubleshoot the application, you should add it to permissive domains:

    semanage permissive -a gpg_pinentry_t

This will allow you to use the application and collect the rest of the AVCs,
which you can then use in conjunction with `audit2allow` to write a local
policy. Once that is done and you see no new AVC denials, you can remove that
domain from permissive mode by running:

    semanage permissive -d gpg_pinentry_t
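
The `audit2allow` step mentioned above can be sketched as follows; the module
name `local_gpg_pinentry` is illustrative, and you should review the generated
`.te` file before loading anything:

```
# Turn the collected AVC denials into a local policy module, then load it:
ausearch -ts recent -m avc | audit2allow -M local_gpg_pinentry
semodule -i local_gpg_pinentry.pp
```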

##### Use your workstation as SELinux role staff_r

SELinux comes with a native implementation of roles that prohibit or grant
certain privileges based on the role associated with the user account. As an
administrator, you should be using the `staff_r` role, which will restrict
access to many configuration and other security-sensitive files, unless you
first perform `sudo`.

By default, accounts are created as `unconfined_u` and most applications you
execute will run unconfined, without any (or with only very few) SELinux
constraints. To switch your account to the `staff_r` role, run the following
command:

    usermod -Z staff_u [username]

You should log out and log back in to enable the new role, at which point if
you run `id -Z`, you'll see:

    staff_u:staff_r:staff_t:s0-s0:c0.c1023

When performing `sudo`, you should remember to add an extra flag to tell
SELinux to transition to the "sysadmin" role. The command you want is:

    sudo -i -r sysadm_r

At which point `id -Z` will show:

    staff_u:sysadm_r:sysadm_t:s0-s0:c0.c1023

**WARNING**: you should be comfortable using `ausearch` and `audit2allow`
before you make this switch, as it's possible some of your applications will
no longer work when you're running as role `staff_r`. At the time of writing,
the following popular applications are known to not work under `staff_r`
without policy tweaks:

- Chrome/Chromium
- Skype
- VirtualBox

To switch back to `unconfined_u`, run the following command:

    usermod -Z unconfined_u [username]

and then log out and back in to get back into the comfort zone.

## Further reading

The world of IT security is a rabbit hole with no bottom. If you would like to
go deeper, or find out more about security features on your particular
distribution, please check out the following links:

- [Fedora Security Guide](https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html)
- [CESG Ubuntu Security Guide](https://www.gov.uk/government/publications/end-user-devices-security-guidance-ubuntu-1404-lts)
- [Debian Security Manual](https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html)
- [Arch Linux Security Wiki](https://wiki.archlinux.org/index.php/Security)
- [Mac OSX Security](https://www.apple.com/support/security/guides/)

## License
This work is licensed under a
[Creative Commons Attribution-ShareAlike 4.0 International License][0].

--------------------------------------------------------------------------------

via: https://github.com/lfit/itpol/blob/master/linux-workstation-security.md#linux-workstation-security-checklist

Author: [mricon][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://github.com/mricon
[0]: http://creativecommons.org/licenses/by-sa/4.0/
[1]: https://github.com/QubesOS/qubes-antievilmaid
[2]: https://en.wikipedia.org/wiki/IEEE_1394#Security_issues
[3]: https://qubes-os.org/
[4]: https://xkcd.com/936/
[5]: https://spideroak.com/
[6]: https://code.google.com/p/chromium/wiki/LinuxSandboxing
[7]: http://www.thoughtcrime.org/software/sslstrip/
[8]: https://keepassx.org/
[9]: http://www.passwordstore.org/
[10]: https://pypi.python.org/pypi/django-pstore
[11]: https://github.com/TomPoulton/hiera-eyaml
[12]: http://shop.kernelconcepts.de/
[13]: https://www.yubico.com/products/yubikey-hardware/yubikey-neo/
[14]: https://wiki.debian.org/Subkeys
[15]: https://github.com/lfit/ssh-gpg-smartcard-config

@@ -1,5 +1,3 @@
translating by Ezio

Remember sed and awk? All Linux admins should
================================================================================
![](http://images.techhive.com/images/article/2015/03/linux-100573790-primary.idge.jpg)

@@ -1,277 +0,0 @@
10 Tips for 10x Application Performance
================================================================================
Improving web application performance is more critical than ever. The share of economic activity that’s online is growing; more than 5% of the developed world’s economy is now on the Internet (see Resources below for statistics). And our always-on, hyper-connected modern world means that user expectations are higher than ever. If your site does not respond instantly, or if your app does not work without delay, users quickly move on to your competitors.

For example, a study done by Amazon almost 10 years ago proved that, even then, a 100-millisecond decrease in page-loading time translated to a 1% increase in its revenue. Another recent study highlighted the fact that more than half of site owners surveyed said they lost revenue or customers due to poor application performance.

How fast does a website need to be? For each second a page takes to load, about 4% of users abandon it. Top e-commerce sites offer a time to first interaction ranging from one to three seconds, which offers the highest conversion rate. It’s clear that the stakes for web application performance are high and likely to grow.

Wanting to improve performance is easy, but actually seeing results is difficult. To help you on your journey, this blog post offers you ten tips to help you increase your website performance by as much as 10x. It’s the first in a series detailing how you can increase your application performance with the help of some well-tested optimization techniques, and with a little support from NGINX. This series also outlines potential improvements in security that you can gain along the way.

### Tip #1: Accelerate and Secure Applications with a Reverse Proxy Server ###

If your web application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with more processors, more RAM, a fast disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, etc., faster than before. (If your application accesses a database server, the solution might still seem simple: get two faster machines, and a faster connection between them.)

Trouble is, machine speed might not be the problem. Web applications often run slowly because the computer is switching among different kinds of tasks: interacting with users on thousands of connections, accessing files from disk, and running application code, among others. The application server may be thrashing – running out of memory, swapping chunks of memory out to disk, and making many requests wait on a single task such as disk I/O.

Instead of upgrading your hardware, you can take an entirely different approach: adding a reverse proxy server to offload some of these tasks. A [reverse proxy server][1] sits in front of the machine running the application and handles Internet traffic. Only the reverse proxy server is connected directly to the Internet; communication with the application servers is over a fast internal network.

Using a reverse proxy server frees the application server from having to wait for users to interact with the web app and lets it concentrate on building pages for the reverse proxy server to send across the Internet. The application server, which no longer has to wait for client responses, can run at speeds close to those achieved in optimized benchmarks.

Adding a reverse proxy server also adds flexibility to your web server setup. For instance, if a server of a given type is overloaded, another server of the same type can easily be added; if a server is down, it can easily be replaced.

Because of the flexibility it provides, a reverse proxy server is also a prerequisite for many other performance-boosting capabilities, such as:

- **Load balancing** (see [Tip #2][2]) – A load balancer runs on a reverse proxy server to share traffic evenly across a number of application servers. With a load balancer in place, you can add application servers without changing your application at all.
- **Caching static files** (see [Tip #3][3]) – Files that are requested directly, such as image files or code files, can be stored on the reverse proxy server and sent directly to the client, which serves assets more quickly and offloads the application server, allowing the application to run faster.
- **Securing your site** – The reverse proxy server can be configured for high security and monitored for fast recognition and response to attacks, keeping the application servers protected.

NGINX software is specifically designed for use as a reverse proxy server, with the additional capabilities described above. NGINX uses an event-driven processing approach which is more efficient than traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application [health checks][4], specialized request routing, advanced caching, and support.
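
As an illustration (our sketch, not from the original article), a minimal NGINX reverse proxy configuration looks roughly like this; the internal address and port are placeholders:

```nginx
# Runs on the proxy host; only this machine faces the Internet.
server {
    listen 80;
    server_name www.example.com;

    location / {
        # Forward requests to the app server on the fast internal network.
        proxy_pass http://10.0.0.10:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```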

![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png)

### Tip #2: Add a Load Balancer ###

Adding a [load balancer][5] is a relatively easy change which can create a dramatic improvement in the performance and security of your site. Instead of making a core web server bigger and more powerful, you use a load balancer to distribute traffic across a number of servers. Even if an application is poorly written, or has problems with scaling, a load balancer can improve the user experience without any other changes.

A load balancer is, first, a reverse proxy server (see [Tip #1][6]) – it receives Internet traffic and forwards requests to another server. The trick is that the load balancer supports two or more application servers, using [a choice of algorithms][7] to split requests between servers. The simplest load balancing approach is round robin, with each new request sent to the next server on the list. Other methods include sending requests to the server with the fewest active connections. NGINX Plus has [capabilities][8] for continuing a given user session on the same server, which is called session persistence.
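
In NGINX terms (a sketch with placeholder addresses, not taken from the article), the round-robin and fewest-connections approaches look like this:

```nginx
upstream app_servers {
    least_conn;              # omit this line for the default round-robin
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```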

Load balancers can lead to strong improvements in performance because they prevent one server from being overloaded while other servers wait for traffic. They also make it easy to expand your web server capacity, as you can add relatively low-cost servers and be sure they’ll be put to full use.

Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, [FastCGI][9], SCGI, uwsgi, memcached, and several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web applications to determine which you use and where performance is lagging.

The same server or servers used for load balancing can also handle several other tasks, such as SSL termination, support for HTTP/1.x and HTTP/2 use by clients, and caching for static files.

NGINX is often used for load balancing; to learn more, please see our [overview blog post][10], [configuration blog post][11], [ebook][12] and associated [webinar][13], and [documentation][14]. Our commercial version, [NGINX Plus][15], supports more specialized load balancing features such as load routing based on server response time and the ability to load balance on Microsoft’s NTLM protocol.

### Tip #3: Cache Static and Dynamic Content ###

Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing content on faster devices, storing content closer to the client, or a combination.

There are two different types of caching to consider:

- **Caching of static content**. Infrequently changing files, such as image files (JPEG, PNG) and code files (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk.
- **Caching of dynamic content**. Many Web applications generate fresh HTML for each page request. By caching one copy of the generated HTML for a brief period of time, you can dramatically reduce the total number of pages that have to be generated while still delivering content that’s fresh enough to meet your requirements.

If a page gets ten views per second, for instance, and you cache it for one second, 90% of requests for the page will come from the cache. If you separately cache static content, even the freshly generated versions of the page might be made up largely of cached content.

There are three main techniques for caching content generated by web applications:

- **Moving content closer to users**. Keeping a copy of content closer to the user reduces its transmission time.
- **Moving content to faster machines**. Content can be kept on a faster machine for faster retrieval.
- **Moving content off of overused machines**. Machines sometimes operate much slower than their benchmark performance on a particular task because they are busy with other tasks. Caching on a different machine improves performance for the cached resources and also for non-cached resources, because the host machine is less overloaded.

Caching for web applications can be implemented from the inside – the web application server – out. First, caching is used for dynamic content, to reduce the load on application servers. Then, caching is used for static content (including temporary copies of what would otherwise be dynamic content), further off-loading application servers. And caching is then moved off of application servers and onto machines that are faster and/or closer to the user, unburdening the application servers and reducing retrieval and transmission times.

Improved caching can speed up applications tremendously. For many web pages, static data, such as large image files, makes up more than half the content. It might take several seconds to retrieve and transmit such data without caching, but only fractions of a second if the data is cached locally.

As an example of how caching is used in practice, NGINX and NGINX Plus use two directives to [set up caching][16]: proxy_cache_path and proxy_cache. You specify the cache location and size, the maximum time files are kept in the cache, and other parameters. Using a third (and quite popular) directive, proxy_cache_use_stale, you can even direct the cache to supply stale content when the server that supplies fresh content is busy or down, giving the client something rather than nothing. From the user’s perspective, this may strongly improve your site or application’s uptime.
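
A minimal sketch of those directives (the path, zone name, timings, and upstream address are placeholder values):

```nginx
# In the http context: where the cache lives and how big it may grow.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 1s;    # brief caching of dynamic pages
        # Serve stale content if the upstream is erroring or busy updating:
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://10.0.0.10:8080;
    }
}
```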

NGINX Plus has [advanced caching features][17], including support for [cache purging][18] and visualization of cache status on a [dashboard][19] for live activity monitoring.

For more information on caching with NGINX, see the [reference documentation][20] and [NGINX Content Caching][21] in the NGINX Plus Admin Guide.

**Note**: Caching crosses organizational lines between people who develop applications, people who make capital investment decisions, and people who run networks in real time. Sophisticated caching strategies, like those alluded to here, are a good example of the value of a [DevOps perspective][22], in which application developer, architectural, and operations perspectives are merged to help meet goals for site functionality, response time, security, and business results, such as completed transactions or sales.

### Tip #4: Compress Data ###

Compression is a huge potential performance accelerator. There are carefully engineered and highly effective compression standards for photos (JPEG and PNG), videos (MPEG-4), and music (MP3), among others. Each of these standards reduces file size by an order of magnitude or more.

Text data – including HTML (which includes plain text and HTML tags), CSS, and code such as JavaScript – is often transmitted uncompressed. Compressing this data can have a disproportionate impact on perceived web application performance, especially for clients with slow or constrained mobile connections.

That’s because text data is often sufficient for a user to interact with a page, where multimedia data may be more supportive or decorative. Smart content compression can reduce the bandwidth requirements of HTML, JavaScript, CSS and other text-based content, typically by 30% or more, with a corresponding reduction in load time.

If you use SSL, compression reduces the amount of data that has to be SSL-encoded, which offsets some of the CPU time it takes to compress the data.

Methods for compressing text data vary. For example, see the [section on HTTP/2][23] for a novel text compression scheme, adapted specifically for header data. As another example of text compression you can [turn on][24] GZIP compression in NGINX. After you [pre-compress text data][25] on your servers, you can serve the compressed .gz version directly using the gzip_static directive.
|
||||
|
||||
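A hedged sketch of what enabling these two approaches might look like in NGINX configuration; the MIME-type list and compression level are illustrative choices, not recommendations:

```nginx
# On-the-fly compression of text-based responses
gzip            on;
gzip_comp_level 5;     # trades CPU time for compression ratio
gzip_types      text/css application/javascript application/json;

# Serve pre-compressed files: for a request for style.css, NGINX looks
# for style.css.gz first and serves it with Content-Encoding: gzip.
gzip_static     on;
```

Note that gzip_static requires the files to have been compressed ahead of time (for example with the gzip command during your build or deploy step).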
### Tip #5: Optimize SSL/TLS ###

The Secure Sockets Layer ([SSL][26]) protocol and its successor, the Transport Layer Security (TLS) protocol, are being used on more and more websites. SSL/TLS encrypts the data transported from origin servers to users to help improve site security. Part of what may be influencing this trend is that Google now uses the presence of SSL/TLS as a positive influence on search engine rankings.

Despite its rising popularity, the performance hit involved in SSL/TLS is a sticking point for many sites. SSL/TLS slows website performance for two reasons:

1. The initial handshake required to establish encryption keys whenever a new connection is opened. The way that browsers using HTTP/1.x establish multiple connections per server multiplies that hit.
1. Ongoing overhead from encrypting data on the server and decrypting it on the client.

To encourage the use of SSL/TLS, the authors of HTTP/2 and SPDY (described in the [next section][27]) designed these protocols so that browsers need just one connection per browser session. This greatly reduces one of the two major sources of SSL overhead. However, even more can be done today to improve the performance of applications delivered over SSL/TLS.

The mechanism for optimizing SSL/TLS varies by web server. As an example, NGINX uses [OpenSSL][28], running on standard commodity hardware, to provide performance similar to dedicated hardware solutions. NGINX [SSL performance][29] is well documented and minimizes the time and CPU penalty from performing SSL/TLS encryption and decryption.

In addition, see [this blog post][30] for details on ways to increase SSL/TLS performance. To summarize briefly, the techniques are:

- **Session caching**. Uses the [ssl_session_cache][31] directive to cache the parameters used when securing each new connection with SSL/TLS.
- **Session tickets or IDs**. These store information about specific SSL/TLS sessions in a ticket or ID so a connection can be reused smoothly, without new handshaking.
- **OCSP stapling**. Cuts handshaking time by caching SSL/TLS certificate information.

NGINX and NGINX Plus can be used for SSL/TLS termination – handling encryption and decryption for client traffic, while communicating with other servers in clear text. Use [these steps][32] to set up NGINX or NGINX Plus to handle SSL/TLS termination. Also, here are [specific steps][33] for NGINX Plus when used with servers that accept TCP connections.

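A hedged configuration sketch of the first and third techniques in NGINX; the cache size, timeout, and resolver address are placeholder values to adapt to your environment:

```nginx
server {
    listen 443 ssl;

    # Session caching: reuse negotiated TLS parameters across connections
    ssl_session_cache   shared:SSL:10m;   # ~10 MB cache shared by all workers
    ssl_session_timeout 10m;

    # OCSP stapling: staple cached certificate-status information into the
    # handshake instead of making each client query the OCSP responder itself
    ssl_stapling        on;
    ssl_stapling_verify on;
    resolver            8.8.8.8;          # DNS resolver used to reach the OCSP responder
}
```

Session tickets are controlled separately (the ssl_session_tickets directive); whether to prefer tickets or a server-side cache depends on your key-rotation and security requirements.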
### Tip #6: Implement HTTP/2 or SPDY ###

For sites that already use SSL/TLS, HTTP/2 and SPDY are very likely to improve performance, because the single connection requires just one handshake. For sites that don’t yet use SSL/TLS, HTTP/2 and SPDY make a move to SSL/TLS (which normally slows performance) a wash from a responsiveness point of view.

Google introduced SPDY in 2012 as a way to achieve faster performance on top of HTTP/1.x. HTTP/2 is the recently approved IETF standard based on SPDY. SPDY is broadly supported, but is soon to be deprecated, replaced by HTTP/2.

The key feature of SPDY and HTTP/2 is the use of a single connection rather than multiple connections. The single connection is multiplexed, so it can carry pieces of multiple requests and responses at the same time.

By getting the most out of one connection, these protocols avoid the overhead of setting up and managing multiple connections, as required by the way browsers implement HTTP/1.x. The use of a single connection is especially helpful with SSL, because it minimizes the time-consuming handshaking that SSL/TLS needs to set up a secure connection.

The SPDY protocol required the use of SSL/TLS; HTTP/2 does not officially require it, but all browsers so far that support HTTP/2 use it only if SSL/TLS is enabled. That is, a browser that supports HTTP/2 uses it only if the website is using SSL and its server accepts HTTP/2 traffic. Otherwise, the browser communicates over HTTP/1.x.

When you implement SPDY or HTTP/2, you no longer need typical HTTP performance optimizations such as domain sharding, resource merging, and image spriting. These changes make your code and deployments simpler and easier to manage. To learn more about the changes that HTTP/2 is bringing about, read our [white paper][34].

![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png)

As an example of support for these protocols, NGINX has supported SPDY from early on, and [most sites][35] that use SPDY today run on NGINX. NGINX is also [pioneering support][36] for HTTP/2, with [support][37] for HTTP/2 in NGINX open source and NGINX Plus as of September 2015.

Over time, we at NGINX expect most sites to fully enable SSL and to move to HTTP/2. This will lead to increased security and, as new optimizations are found and implemented, simpler code that performs better.

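In NGINX versions with HTTP/2 support, enabling the protocol is, in a minimal sketch, a one-word change to the listen directive; the certificate paths below are placeholders:

```nginx
server {
    # "http2" on the listen directive enables HTTP/2 for TLS clients;
    # browsers that don't support it fall back to HTTP/1.x over TLS.
    listen 443 ssl http2;

    ssl_certificate     /etc/nginx/cert.pem;   # placeholder path
    ssl_certificate_key /etc/nginx/key.pem;    # placeholder path
}
```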
### Tip #7: Update Software Versions ###

One simple way to boost application performance is to select components for your software stack based on their reputation for stability and performance. In addition, because developers of high-quality components are likely to pursue performance enhancements and fix bugs over time, it pays to use the latest stable version of software. New releases receive more attention from developers and the user community. Newer builds also take advantage of new compiler optimizations, including tuning for new hardware.

Stable new releases are typically more compatible and higher-performing than older releases. It’s also easier to keep on top of tuning optimizations, bug fixes, and security alerts when you stay on top of software updates.

Staying with older software can also prevent you from taking advantage of new capabilities. For example, HTTP/2, described above, currently requires OpenSSL 1.0.1. Starting in mid-2016, HTTP/2 will require OpenSSL 1.0.2, which was released in January 2015.

NGINX users can start by moving to the [latest version of the NGINX open source software][38] or [NGINX Plus][39]; they include new capabilities such as socket sharding and thread pools (see below), and both are constantly being tuned for performance. Then look at the software deeper in your stack and move to the most recent version wherever you can.
### Tip #8: Tune Linux for Performance ###

Linux is the underlying operating system for most web server implementations today, and as the foundation of your infrastructure, Linux represents a significant opportunity to improve performance. By default, many Linux systems are conservatively tuned to use few resources and to match a typical desktop workload. This means that web application use cases require at least some degree of tuning for maximum performance.

Linux optimizations are web server-specific. Using NGINX as an example, here are a few highlights of changes you can consider to speed up Linux:

- **Backlog queue**. If you have connections that appear to be stalling, consider increasing net.core.somaxconn, the maximum number of connections that can be queued awaiting attention from NGINX. You will see error messages if the existing connection limit is too small, and you can gradually increase this parameter until the error messages stop.
- **File descriptors**. NGINX uses up to two file descriptors for each connection. If your system is serving a lot of connections, you might need to increase fs.file-max, the system-wide limit for file descriptors, and nofile, the user file descriptor limit, to support the increased load.
- **Ephemeral ports**. When used as a proxy, NGINX creates temporary (“ephemeral”) ports for each upstream server. You can increase the range of port values, set by net.ipv4.ip_local_port_range, to increase the number of ports available. You can also reduce the timeout before an inactive port gets reused with the net.ipv4.tcp_fin_timeout setting, allowing for faster turnover.

For NGINX, check out the [NGINX performance tuning guides][40] to learn how to optimize your Linux system so that it can cope with large volumes of network traffic without breaking a sweat!

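A hedged /etc/sysctl.conf fragment applying the three tips above; the numbers are illustrative starting points to validate against your own error logs and traffic, not recommendations:

```
# Backlog queue: allow more pending connections to be queued
net.core.somaxconn = 1024

# File descriptors: raise the system-wide limit
# (the per-user "nofile" limit is set separately, e.g. in /etc/security/limits.conf)
fs.file-max = 100000

# Ephemeral ports: widen the port range and recycle inactive ports sooner
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 15
```

Apply the settings with `sysctl -p` (or `sysctl -w` for one-off changes), and change one value at a time so you can measure its effect.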
### Tip #9: Tune Your Web Server for Performance ###

Whatever web server you use, you need to tune it for web application performance. The following recommendations apply generally to any web server, but specific settings are given for NGINX. Key optimizations include:

- **Access logging**. Instead of writing a log entry for every request to disk immediately, you can buffer entries in memory and write them to disk as a group. For NGINX, add the *buffer=size* parameter to the *access_log* directive to write log entries to disk when the memory buffer fills up. If you add the *flush=time* parameter, the buffer contents are also written to disk after the specified amount of time.
- **Buffering**. Buffering holds part of a response in memory until the buffer fills, which can make communications with the client more efficient. Responses that don’t fit in memory are written to disk, which can slow performance. When NGINX buffering is [on][42], you use the *proxy_buffer_size* and *proxy_buffers* directives to manage it.
- **Client keepalives**. Keepalive connections reduce overhead, especially when SSL/TLS is in use. For NGINX, you can increase the maximum number of *keepalive_requests* a client can make over a given connection from the default of 100, and you can increase the *keepalive_timeout* to allow the keepalive connection to stay open longer, resulting in faster subsequent requests.
- **Upstream keepalives**. Upstream connections – connections to application servers, database servers, and so on – benefit from keepalive connections as well. For upstream connections, you can increase *keepalive*, the number of idle keepalive connections that remain open for each worker process. This allows for increased connection reuse, cutting down on the need to open brand-new connections. For more information about keepalives, refer to this [blog post][41].
- **Limits**. Limiting the resources that clients use can improve performance and security. For NGINX, the *limit_conn* and *limit_conn_zone* directives restrict the number of connections from a given source, while *limit_rate* constrains bandwidth. These settings can stop a legitimate user from “hogging” resources and also help prevent attacks. The *limit_req* and *limit_req_zone* directives limit client requests. For connections to upstream servers, use the max_conns parameter to the server directive in an upstream configuration block. This limits connections to an upstream server, preventing overloading. The associated queue directive creates a queue that holds a specified number of requests for a specified length of time after the *max_conns* limit is reached.
- **Worker processes**. Worker processes are responsible for the processing of requests. NGINX employs an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes. The recommendation is to set the value of *worker_processes* to one per CPU. The maximum number of worker_connections (512 by default) can safely be raised on most systems if needed; experiment to find the value that works best for your system.
- **Socket sharding**. Typically, a single socket listener distributes new connections to all worker processes. Socket sharding creates a socket listener for each worker process, with the kernel assigning connections to socket listeners as they become available. This can reduce lock contention and improve performance on multicore systems. To enable [socket sharding][43], include the reuseport parameter on the listen directive.
- **Thread pools**. Any computer process can be held up by a single, slow operation. For web server software, disk access can hold up many faster operations, such as calculating or copying information in memory. When a thread pool is used, the slow operation is assigned to a separate set of tasks, while the main processing loop keeps running faster operations. When the disk operation completes, the results go back into the main processing loop. In NGINX, two operations – the read() system call and sendfile() – are offloaded to [thread pools][44].

![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png)

**Tip**. When changing settings for any operating system or supporting service, change a single setting at a time, then test performance. If the change causes problems, or if it doesn’t make your site run faster, change it back.

See this [blog post][45] for more details on tuning NGINX.

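A hedged sketch pulling several of these settings into one configuration; every size and count below is an illustrative placeholder to be validated by your own testing, not a recommendation:

```nginx
worker_processes auto;              # one worker per CPU core

events {
    worker_connections 4096;        # raised from the default
}

http {
    # Buffered access logging: flush when 32k fills or every 5 seconds
    access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

    keepalive_requests 1000;        # raised from the default of 100
    keepalive_timeout  75s;

    upstream backend {
        server 10.0.0.10:8080;      # placeholder upstream address
        keepalive 32;               # idle upstream keepalive connections per worker
    }

    server {
        listen 80 reuseport;        # socket sharding

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # required for upstream keepalives
            proxy_set_header Connection "";  # clear "close" so connections stay open
        }
    }
}
```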
### Tip #10: Monitor Live Activity to Resolve Issues and Bottlenecks ###

The key to a high-performance approach to application development and delivery is watching your application’s real-world performance closely and in real time. You must be able to monitor activity within specific devices and across your web infrastructure.

Monitoring site activity is mostly passive – it tells you what’s going on, and leaves it to you to spot problems and fix them.

Monitoring can catch several different kinds of issues, including:

- A server is down.
- A server is limping, dropping connections.
- A server is suffering from a high proportion of cache misses.
- A server is not sending correct content.

A global application performance monitoring tool like New Relic or Dynatrace helps you monitor page load time from remote locations, while NGINX helps you monitor the application delivery side. Application performance data tells you when your optimizations are making a real difference to your users, and when you need to consider adding capacity to your infrastructure to sustain the traffic.

To help identify and resolve issues quickly, NGINX Plus adds [application-aware health checks][46] – synthetic transactions that are repeated regularly and are used to alert you to problems. NGINX Plus also has [session draining][47], which stops new connections while existing tasks complete, and a slow-start capability, allowing a recovered server to come up to speed within a load-balanced group. When used effectively, health checks allow you to identify issues before they significantly impact the user experience, while session draining and slow start allow you to replace servers and ensure the process does not negatively affect perceived performance or uptime. The figure shows the built-in NGINX Plus [live activity monitoring][48] dashboard for a web infrastructure with servers, TCP connections, and caching.

![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png)

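For open source NGINX, a minimal (and hedged) starting point for self-monitoring is the stub_status module, which exposes basic connection counters that monitoring agents can scrape; the port, path, and access restriction below are illustrative:

```nginx
server {
    listen 8080;

    location /nginx_status {
        stub_status;          # active connections, accepts, handled, requests
        allow 127.0.0.1;      # restrict to local monitoring agents
        deny  all;
    }
}
```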
### Conclusion: Seeing 10x Performance Improvement ###

The performance improvements that are available for any one web application vary tremendously, and actual gains depend on your budget, the time you can invest, and gaps in your existing implementation. So, how might you achieve a 10x performance improvement for your own applications?

To help guide you on the potential impact of each optimization, here are pointers to the improvement that may be possible with each tip detailed above, though your mileage will almost certainly vary:

- **Reverse proxy server and load balancing**. No load balancing, or poor load balancing, can cause episodes of very poor performance. Adding a reverse proxy server, such as NGINX, can prevent web applications from thrashing between memory and disk. Load balancing can move processing from overburdened servers to available ones and make scaling easy. These changes can result in dramatic performance improvement, with a 10x improvement easily achieved compared to the worst moments for your current implementation, and lesser but substantial gains available for overall performance.
- **Caching dynamic and static content**. If you have an overburdened web server that’s doubling as your application server, 10x improvements in peak-time performance can be achieved by caching dynamic content alone. Caching for static files can improve performance by single-digit multiples as well.
- **Compressing data**. Using media file compression such as JPEG for photos, PNG for graphics, MPEG-4 for movies, and MP3 for music files can greatly improve performance. Once these are all in use, then compressing text data (code and HTML) can improve initial page load times by a factor of two.
- **Optimizing SSL/TLS**. Secure handshakes can have a big impact on performance, so optimizing them can lead to perhaps a 2x improvement in initial responsiveness, particularly for text-heavy sites. Optimizing media file transmission under SSL/TLS is likely to yield only small performance improvements.
- **Implementing HTTP/2 and SPDY**. When used with SSL/TLS, these protocols are likely to result in incremental improvements for overall site performance.
- **Tuning Linux and web server software (such as NGINX)**. Fixes such as optimizing buffering, using keepalive connections, and offloading time-intensive tasks to a separate thread pool can significantly boost performance; thread pools, for instance, can speed disk-intensive tasks by [nearly an order of magnitude][49].

We hope you try out these techniques for yourself. We want to hear about the kind of application performance improvements you’re able to achieve. Share your results in the comments below, or tweet your story with the hashtags #NGINX and #webperf!
### Resources for Internet Statistics ###

[Statista.com – Share of the internet economy in the gross domestic product in G-20 countries in 2016][50]

[Load Impact – How Bad Performance Impacts Ecommerce Sales][51]

[Kissmetrics – How Loading Time Affects Your Bottom Line (infographic)][52]

[Econsultancy – Site speed: case studies, tips and tools for improving your conversion rate][53]

--------------------------------------------------------------------------------

via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io

作者:[Floyd Smith][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.nginx.com/blog/author/floyd/
[1]:https://www.nginx.com/resources/glossary/reverse-proxy-server
[2]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip2
[3]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip3
[4]:https://www.nginx.com/products/application-health-checks/
[5]:https://www.nginx.com/solutions/load-balancing/
[6]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip1
[7]:https://www.nginx.com/resources/admin-guide/load-balancer/
[8]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
[9]:https://www.digitalocean.com/community/tutorials/understanding-and-implementing-fastcgi-proxying-in-nginx
[10]:https://www.nginx.com/blog/five-reasons-use-software-load-balancer/
[11]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
[12]:https://www.nginx.com/resources/ebook/five-reasons-choose-software-load-balancer/
[13]:https://www.nginx.com/resources/webinars/choose-software-based-load-balancer-45-min/
[14]:https://www.nginx.com/resources/admin-guide/load-balancer/
[15]:https://www.nginx.com/products/
[16]:https://www.nginx.com/blog/nginx-caching-guide/
[17]:https://www.nginx.com/products/content-caching-nginx-plus/
[18]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.95342300.1348073562.1438712874#proxy_cache_purge
[19]:https://www.nginx.com/products/live-activity-monitoring/
[20]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&&&_ga=1.61156076.1348073562.1438712874#proxy_cache
[21]:https://www.nginx.com/resources/admin-guide/content-caching
[22]:https://www.nginx.com/blog/network-vs-devops-how-to-manage-your-control-issues/
[23]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
[24]:https://www.nginx.com/resources/admin-guide/compression-and-decompression/
[25]:http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
[26]:https://www.digicert.com/ssl.htm
[27]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
[28]:http://openssl.org/
[29]:https://www.nginx.com/blog/nginx-ssl-performance/
[30]:https://www.nginx.com/blog/improve-seo-https-nginx/
[31]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
[32]:https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/
[33]:https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/
[34]:https://www.nginx.com/resources/datasheet/datasheet-nginx-http2-whitepaper/
[35]:http://w3techs.com/blog/entry/25_percent_of_the_web_runs_nginx_including_46_6_percent_of_the_top_10000_sites
[36]:https://www.nginx.com/blog/how-nginx-plans-to-support-http2/
[37]:https://www.nginx.com/blog/nginx-plus-r7-released/
[38]:http://nginx.org/en/download.html
[39]:https://www.nginx.com/products/
[40]:https://www.nginx.com/blog/tuning-nginx/
[41]:https://www.nginx.com/blog/http-keepalives-and-web-performance/
[42]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
[43]:https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
[44]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
[45]:https://www.nginx.com/blog/tuning-nginx/
[46]:https://www.nginx.com/products/application-health-checks/
[47]:https://www.nginx.com/products/session-persistence/#session-draining
[48]:https://www.nginx.com/products/live-activity-monitoring/
[49]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
[50]:http://www.statista.com/statistics/250703/forecast-of-internet-economy-as-percentage-of-gdp-in-g-20-countries/
[51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/
[52]:https://blog.kissmetrics.com/loading-time/?wide=1
[53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/
How to Install Redis Server on CentOS 7
================================================================================

Hi everyone, today Redis is the subject of our article. We are going to install it on CentOS 7: build the source files, install the binaries, and create and install the configuration files. After installing its components, we will set its configuration as well as some operating system parameters to make it more reliable and faster.

![Running Redis](http://blog.linoxide.com/wp-content/uploads/2015/10/run-redis-standalone.jpg)

Redis server

Redis is an open source, multi-platform data store written in ANSI C that uses datasets directly from memory, achieving extremely high performance. It supports various programming languages, including Lua, C, Java, Python, Perl, PHP and many others. It is based on simplicity, with about 30k lines of code that do "few" things, but do them well. Although it works in memory, data can be persisted to disk, and it has fairly reasonable support for high availability and clustering, which helps keep your data safe.
### Building Redis ###

There is no official RPM package available, so we need to build it from source; to do this you will need to install Make and GCC.

Install the GNU Compiler Collection and Make with yum if they are not already installed:

    yum install gcc make

Download the tarball from the [redis download page][1]:

    curl http://download.redis.io/releases/redis-3.0.4.tar.gz -o redis-3.0.4.tar.gz

Extract the tarball contents:

    tar zxvf redis-3.0.4.tar.gz

Enter the Redis directory we have extracted:

    cd redis-3.0.4

Use Make to build the source files:

    make
### Install ###

Enter the src directory:

    cd src

Copy the Redis server and client to /usr/local/bin:

    cp redis-server redis-cli /usr/local/bin

It's also good to copy the sentinel, benchmark and check binaries:

    cp redis-sentinel redis-benchmark redis-check-aof redis-check-dump /usr/local/bin

Make the Redis config directory:

    mkdir /etc/redis

Create a working and data directory under /var/lib/redis:

    mkdir -p /var/lib/redis/6379
#### System parameters ####

For Redis to work correctly you need to set some kernel options.

Set vm.overcommit_memory to 1, which means always; this avoids the risk of data being truncated. Take a look [here][2] for more.

    sysctl -w vm.overcommit_memory=1

Set the maximum number of backlog connections to some value higher than the value of the tcp-backlog option in redis.conf, which defaults to 511. You can find more on sysctl-based IP networking tuning on the [kernel.org][3] website.

    sysctl -w net.core.somaxconn=512

Disable transparent huge pages support, which is known to cause latency and memory access issues with Redis.

    echo never > /sys/kernel/mm/transparent_hugepage/enabled
### redis.conf ###

redis.conf is the Redis configuration file; however, you will see the file named 6379.conf here, where the number is the same as the network port Redis listens on. This naming is recommended if you are going to run more than one Redis instance.

Copy the sample redis.conf to **/etc/redis/6379.conf**:

    cp redis.conf /etc/redis/6379.conf

Now edit the file and set some of its parameters:

    vi /etc/redis/6379.conf
#### daemonize ####

Set daemonize to no; systemd needs the process to stay in the foreground, otherwise Redis will die unexpectedly.

    daemonize no

#### pidfile ####

Set the pidfile to redis_6379.pid under /var/run.

    pidfile /var/run/redis_6379.pid

#### port ####

Change the network port if you are not going to use the default.

    port 6379

#### loglevel ####

Set your loglevel.

    loglevel notice

#### logfile ####

Set the logfile to /var/log/redis_6379.log.

    logfile /var/log/redis_6379.log

#### dir ####

Set the directory to /var/lib/redis/6379.

    dir /var/lib/redis/6379
### Security ###

Here are some actions you can take to enforce security.
#### Unix sockets ####

In many cases, the client application resides on the same machine as the server, so there is no need to listen on network sockets. If this is the case, you may want to use Unix sockets instead; for this you need to set the **port** option to 0, and then enable Unix sockets with the following options.

Set the path to the socket file:

    unixsocket /tmp/redis.sock

Set restricted permissions on the socket file:

    unixsocketperm 700

Now, to get access with redis-cli, use the -s flag pointing to the socket file:

    redis-cli -s /tmp/redis.sock
#### requirepass ####

You may need remote access; if so, you should set a password, which will be required before any operation.

    requirepass "bTFBx1NYYWRMTUEyNHhsCg"
#### rename-command ####

Imagine the output of the next command. Yes, it dumps the configuration of the server, so you should deny access to this kind of information whenever possible.

    CONFIG GET *

You can restrict, or even disable, this and other commands by using **rename-command**. You must provide a command name and a replacement. To disable a command, set the replacement string to "" (blank); this is more secure, as it will prevent someone from guessing the command name.

    rename-command FLUSHDB "FLUSHDB_MY_SALT_G0ES_HERE09u09u"
    rename-command FLUSHALL ""
    rename-command CONFIG "CONFIG_MY_S4LT_GO3S_HERE09u09u"

![Access Redis through unix with password and command changes](http://blog.linoxide.com/wp-content/uploads/2015/10/redis-security-test.jpg)

Access through Unix sockets with password and command changes
#### Snapshots ####

By default Redis periodically dumps its dataset to **dump.rdb** in the data directory we set. You can configure how often the rdb file is updated with the save directive; the first parameter is a timeframe in seconds and the second is the number of changes performed on the data.

Every 15 minutes, if at least 1 key changed:

    save 900 1

Every 5 minutes, if at least 10 keys changed:

    save 300 10

Every minute, if at least 10000 keys changed:

    save 60 10000

The **/var/lib/redis/6379/dump.rdb** file contains a dump of the dataset in memory since the last save. Since Redis creates a temporary file and then replaces the original file, there is no risk of corruption, and you can always copy it directly without fear.
### Starting at boot ###

You may use systemd to add Redis to the system startup.

Copy the sample init script to /etc/init.d; note the port number in the script name:

    cp utils/redis_init_script /etc/init.d/redis_6379

We are going to use systemd, so create a unit file named redis_6379.service under **/etc/systemd/system**:

    vi /etc/systemd/system/redis_6379.service

Put in the following content; try man systemd.service for details:

    [Unit]
    Description=Redis on port 6379

    [Service]
    Type=forking
    ExecStart=/etc/init.d/redis_6379 start
    ExecStop=/etc/init.d/redis_6379 stop

    [Install]
    WantedBy=multi-user.target

Finally, run `systemctl daemon-reload` and `systemctl enable redis_6379` so that the service starts at boot.
Now add the memory overcommit and maximum backlog options we set before to the **/etc/sysctl.conf** file:

    vm.overcommit_memory = 1

    net.core.somaxconn=512

There is no sysctl directive for transparent huge pages support, so put the command at the end of /etc/rc.local:

    echo never > /sys/kernel/mm/transparent_hugepage/enabled
### Conclusion ###

That's enough to get started. With these settings you will be able to deploy a Redis server for many simpler scenarios; however, redis.conf has many more options for complex environments. In some cases, you may use [replication][4] and [Sentinel][5] to provide high availability, or [split the data][6] across servers to create a cluster. Thanks for reading!
--------------------------------------------------------------------------------

via: http://linoxide.com/storage/install-redis-server-centos-7/

作者:[Carlos Alberto][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/carlosal/
[1]:http://redis.io/download
[2]:https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
[3]:https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
[4]:http://redis.io/topics/replication
[5]:http://redis.io/topics/sentinel
[6]:http://redis.io/topics/partitioning
translating by ezio
How to Install SQLite 3.9.1 with JSON Support on Ubuntu 15.04
================================================================================
Hello and welcome to today's article on SQLite, the most widely deployed SQL database engine in the world. SQLite comes with zero configuration: no setup or administration is needed. It is a public-domain software package that provides a relational database management system (RDBMS) used to store user-defined records in large tables. In addition to data storage and management, the database engine processes complex query commands that combine data from multiple tables to generate reports and data summaries.

SQLite is very small and lightweight, and it does not require a separate server process or system to operate. It is available on UNIX, Linux, Mac OS X, Android, iOS, and Windows, and it is used in software such as Opera, Ruby on Rails, Adobe Systems, Mozilla Firefox, Google Chrome, and Skype.
### 1) Basic Requirements: ###

There are no complex requirements for installing SQLite, as it supports all major platforms.

So, log in to your Ubuntu server with sudo or root credentials using your CLI or Secure Shell, then update your system so that your operating system is up to date with the latest packages.

On Ubuntu, use the command below to update the system:

    # apt-get update

If you are deploying SQLite on a fresh Ubuntu server, make sure that you have installed some basic build utilities such as wget, make, unzip, and gcc.

To install the wget, make, and gcc packages on Ubuntu, use the command below, then press "Y" to proceed with the installation of these packages:

    # apt-get install wget make gcc
### 2) Download SQLite ###

To download the latest SQLite package, refer to the official [SQLite Download Page][1], as shown below.

![SQLite download](http://blog.linoxide.com/wp-content/uploads/2015/10/Selection_014.png)

Copy the link of the source package and download it on your Ubuntu server using the wget utility:

    # wget https://www.sqlite.org/2015/sqlite-autoconf-3090100.tar.gz

![wget SQLite](http://blog.linoxide.com/wp-content/uploads/2015/10/23.png)

After the download is complete, extract the package and change your current directory to the extracted SQLite folder with the command below:

    # tar -zxvf sqlite-autoconf-3090100.tar.gz
### 3) Installing SQLite ###

Now we are going to compile and install the SQLite package that we downloaded. To do so, run the configure script within the directory where you extracted the SQLite package, as shown below.

    root@ubuntu-15:~/sqlite-autoconf-3090100# ./configure --prefix=/usr/local

![SQLite Installation](http://blog.linoxide.com/wp-content/uploads/2015/10/35.png)

Once the package is configured under the mentioned prefix, run the make command to compile the package.

    root@ubuntu-15:~/sqlite-autoconf-3090100# make
    source='sqlite3.c' object='sqlite3.lo' libtool=yes \
    DEPDIR=.deps depmode=none /bin/bash ./depcomp \
    /bin/bash ./libtool --tag=CC --mode=compile gcc -DPACKAGE_NAME=\"sqlite\" -DPACKAGE_TARNAME=\"sqlite\" -DPACKAGE_VERSION=\"3.9.1\" -DPACKAGE_STRING=\"sqlite\ 3.9.1\" -DPACKAGE_BUGREPORT=\"http://www.sqlite.org\" -DPACKAGE_URL=\"\" -DPACKAGE=\"sqlite\" -DVERSION=\"3.9.1\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -DHAVE_FDATASYNC=1 -DHAVE_USLEEP=1 -DHAVE_LOCALTIME_R=1 -DHAVE_GMTIME_R=1 -DHAVE_DECL_STRERROR_R=1 -DHAVE_STRERROR_R=1 -DHAVE_POSIX_FALLOCATE=1 -I. -D_REENTRANT=1 -DSQLITE_THREADSAFE=1 -DSQLITE_ENABLE_FTS3 -DSQLITE_ENABLE_RTREE -g -O2 -c -o sqlite3.lo sqlite3.c

After make finishes, complete the installation of SQLite by running the 'make install' command as shown below.

    # make install

![SQLite Make Install](http://blog.linoxide.com/wp-content/uploads/2015/10/44.png)
### 4) Testing SQLite Installation ###

To confirm the successful installation of SQLite 3.9, run the command below in your command line interface.

    # sqlite3

You will see the SQLite version after running the above command, as shown.

![Testing SQLite Installation](http://blog.linoxide.com/wp-content/uploads/2015/10/53.png)
### 5) Using SQLite ###

SQLite is very handy to use. To get detailed information about its usage, simply run the command below in the SQLite console.

    sqlite> .help

This shows the list of all available commands with their descriptions, which will help you start using SQLite.

![SQLite Help](http://blog.linoxide.com/wp-content/uploads/2015/10/62.png)

In this last section, we use a few SQLite commands to create a new database from the SQLite3 command line interface.

To create a new database, run the command below.

    # sqlite3 test.db

To create a table within the new database, run the command below.

    sqlite> create table memos(text, priority INTEGER);

After creating the table, insert some data using the following commands.

    sqlite> insert into memos values('deliver project description', 15);
    sqlite> insert into memos values('writing new articles', 100);

To view the inserted data from the table, run the command below.

    sqlite> select * from memos;
    deliver project description|15
    writing new articles|100

To exit sqlite3, type the command below.

    sqlite> .exit

![Using SQLite3](http://blog.linoxide.com/wp-content/uploads/2015/10/73.png)
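Since the headline feature of the 3.9.x series is the JSON1 extension, it is worth a quick smoke test. Here is a hedged check via Python's sqlite3 module; it assumes the SQLite library Python links against was built with JSON1 enabled, as the build in this article is:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# json_extract() is provided by the JSON1 extension; on builds without it,
# this raises sqlite3.OperationalError ("no such function: json_extract").
value = conn.execute(
    "select json_extract('{\"priority\": 15}', '$.priority')"
).fetchone()[0]
print(value)  # -> 15
conn.close()
```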
### Conclusion ###

In this article you learned how to install the latest version of SQLite, 3.9.1, which includes the JSON1 extension introduced in the 3.9.0 release. It is an amazing library that gets embedded inside the applications that use it, keeping them efficient and lightweight. We hope you found this article helpful; feel free to get back to us if you run into any difficulty.
--------------------------------------------------------------------------------

via: http://linoxide.com/ubuntu-how-to/install-sqlite-json-ubuntu-15-04/

作者:[Kashif Siddique][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/kashifs/
[1]:https://www.sqlite.org/download.html
How to Manage Your To-Do Lists in Ubuntu Using Go For It Application
================================================================================

![](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-featured1.jpg)
Task management is arguably one of the most important and challenging parts of professional, as well as personal, life. Professionally, as you assume more and more responsibility, your performance is directly related to your ability to manage the tasks you’re assigned.

If your job involves working on a computer, you’ll be happy to know that various applications are available that claim to make task management easy for you. While most of them cater to Windows users, there are many options on Linux, too. In this article we will discuss one such application: Go For It.
### Go For It ###

[Go For It][1] (GFI) is developed by Manuel Kehl, who describes it as “a simple and stylish productivity app, featuring a to-do list, merged with a timer that keeps your focus on the current task.” The timer feature, specifically, is interesting, as it also makes sure that you take a break from your current task and relax for some time before proceeding further.
### Download and Installation ###

Users of Debian-based systems, like Ubuntu, can easily install the app by running the following commands in a terminal:

    sudo add-apt-repository ppa:mank319/go-for-it
    sudo apt-get update
    sudo apt-get install go-for-it

Once done, you can launch the application by running the following command:

    go-for-it
### Usage and Configuration ###

Here is how the GFI interface looks when you run the app for the very first time:

![gfi-first-run](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-first-run1.png)

As you can see, the interface consists of three tabs: To-Do, Timer, and Done. The To-Do tab contains a list of tasks (the 4 tasks shown in the image above are there by default – you can delete them by clicking the rectangular box in front of them), the Timer tab contains the task timer, and Done contains a list of tasks that you’ve finished successfully. Right at the bottom is a text box where you can enter the task text and click “+” to add it to the list above.

For example, I added a task named “MTE-research-work” to the list and selected it by clicking on it in the list – see the screenshot below:

![gfi-task-added](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-task-added1.png)

Then I selected the Timer tab. Here I could see a 25-minute timer for the active task, “MTE-research-work.”

![gfi-active-task-timer](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-active-task-timer.png)

Of course, you can change the timer value and set it to any time you want. I, however, didn’t change the value and clicked the Start button below to start the task timer. Once 60 seconds were left, GFI issued a notification indicating the same.

![gfi-first-notification-new](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-first-notification-new.jpg)

And once the time was up, I was asked to take a break of five minutes.

![gfi-time-up-notification-new](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-time-up-notification-new.jpg)

Once those five minutes were over, I could again start the task timer for my task.

![gfi-break-time-up-new](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-break-time-up-new.jpg)

When you’re done with your task, you can click the Done button in the Timer tab. The task is then removed from the To-Do tab and listed in the Done tab.

![gfi-task-done](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-task-done1.png)

GFI also allows you to tweak some of its settings. For example, the settings window shown below contains options to tweak the default task duration, break duration, and reminder time.

![gfi-settings](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-settings1.png)

It’s worth mentioning that GFI stores to-do lists in the Todo.txt format, which simplifies synchronization with mobile devices and makes it possible to edit tasks using other frontends – read more about it [here][2].
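For reference, Todo.txt is a plain-text, one-task-per-line format: an optional priority in parentheses, an `x` plus date for completed tasks, and `+project`/`@context` tags. A hedged example of what such a list file might contain (hypothetical entries, not GFI's defaults):

```
(A) MTE-research-work +articles @computer
Reply to reader comments @email
x 2015-10-20 Draft the Go For It review +articles
```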
You can also see the GFI app in action in the video below.

注:youtube 视频
<iframe frameborder="0" src="http://www.youtube.com/embed/mnw556C9FZQ?autoplay=1&autohide=2&border=1&wmode=opaque&enablejsapi=1&controls=1&showinfo=0" id="youtube-iframe"></iframe>
### Conclusion ###

As you have observed, GFI is an easy-to-understand and simple-to-use task management application. Although it doesn’t offer a plethora of features, it does what it claims – the timer integration is especially useful. If you’re looking for a basic, open-source task management tool for Linux, Go For It is worth trying.
--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/to-do-lists-ubuntu-go-for-it/

作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com/author/himanshu/
[1]:http://manuel-kehl.de/projects/go-for-it/
[2]:http://todotxt.com/
How to Monitor the Progress of a Linux Command Line Operation Using PV Command
================================================================================

![](https://www.maketecheasier.com/assets/uploads/2015/11/pv-featured-1.jpg)
If you’re a Linux system admin, there’s no doubt you spend most of your work time on the command line – installing and removing packages; monitoring system stats; copying, moving, and deleting files; debugging problems; and more. Sometimes you fire a command and it takes a while before the operation completes. Other times the command you executed just hangs, leaving you guessing what’s actually happening behind the scenes.

Usually, Linux commands provide no information about the progress of an ongoing operation – information that is very important when your time is limited. That doesn’t mean you’re helpless, however: there exists a command, dubbed pv, that displays useful progress information for ongoing command line operations. In this article we will discuss this command and its features through some easy-to-understand examples.
### PV Command ###

Developed by Andrew Wood, [PV][1] – which stands for Pipe Viewer – displays information related to the progress of data through a pipeline. The information includes time elapsed, percentage completed (with a progress bar), current throughput rate, total data transferred, and ETA.

> “To use it, insert it in a pipeline between two processes, with the appropriate options. Its standard input will be passed through to its standard output and progress will be shown on standard error,”

explains the command’s man page.
### Download and Installation ###

Users of Debian-based systems like Ubuntu can easily install the utility by running the following command in a terminal:

    sudo apt-get install pv

If you’re using any other Linux distro, you can install the command using your system’s package manager. Once it is installed, you can use the utility in various scenarios (see the following section). Note that pv version 1.2.0 has been used in all the examples in this article.
### Features and Usage ###

A very common scenario that most of us who work on the Linux command line can relate to is copying a movie file from a USB drive to your computer. If you attempt that with the cp command, you have to blindly wait until the copying is complete or an error is thrown.

However, the pv command can be helpful in this case. Here is an example:

    pv /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv

And here’s the output:

![pv-copy](https://www.maketecheasier.com/assets/uploads/2015/10/pv-copy.png)

So, as you can see above, the command shows a lot of useful information about the ongoing operation, including the amount of data transferred, time elapsed, rate of transfer, a progress bar, progress in percentage, and the amount of time left.

The `pv` command provides various display switches. For example, you can use `-p` for displaying percentage, `-t` for the timer, `-r` for rate of transfer, `-e` for ETA, and `-b` for the byte counter. The good thing is that you won’t have to remember any of them, as all of them are enabled by default. However, should you require only a particular field in the output, you can pass the corresponding switch to the pv command.
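The fields pv prints are straightforward arithmetic over bytes moved and elapsed time. A hedged sketch of that bookkeeping (an illustration of the idea, not pv's actual code):

```python
def progress_fields(bytes_done, total_bytes, elapsed_s):
    """Rough sketch of the arithmetic behind pv's display fields."""
    rate = bytes_done / elapsed_s             # throughput in bytes/second
    pct = 100.0 * bytes_done / total_bytes    # percentage completed
    eta = (total_bytes - bytes_done) / rate   # seconds remaining at this rate
    return rate, pct, eta

# Halfway through a 100 MiB copy after 25 seconds:
rate, pct, eta = progress_fields(50 * 1024**2, 100 * 1024**2, 25.0)
print(f"{rate / 1024**2:.0f}MiB/s {pct:.0f}% ETA {eta:.0f}s")  # -> 2MiB/s 50% ETA 25s
```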
There’s also a `-n` display switch that makes the command output an integer percentage, one per line on standard error, instead of the regular visual progress indicator. The following is an example of this switch in action:

    pv -n /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv

![pv-numeric](https://www.maketecheasier.com/assets/uploads/2015/10/pv-numeric.png)

This particular display switch is suitable in scenarios where you want to pipe the output into the [dialog][2] command.
Moving on, there’s also a command line option, `-L`, that lets you limit pv’s data transfer rate. For example, I used -L to limit the transfer rate to 2MB/s:

    pv -L 2m /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv

![pv-ratelimit](https://www.maketecheasier.com/assets/uploads/2015/10/pv-ratelimit.png)

As can be seen in the screenshot above, the data transfer rate was capped as directed.

Another scenario where `pv` can help is compressing files. Here is an example of how you can use this command while compressing files with Gzip:

    pv /media/himanshu/1AC2-A8E3/fnf.mkv | gzip > ./Desktop/fnf.log.gz

![pv-gzip](https://www.maketecheasier.com/assets/uploads/2015/10/pv-gzip.png)
### Conclusion ###

As you have observed, pv is a useful little utility that can save you precious time when a command line operation isn’t behaving as expected. Plus, the information it displays can also be used in shell scripts. I’d strongly recommend this command; it’s worth a try.
--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/monitor-progress-linux-command-line-operation/

作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com/author/himanshu/
[1]:http://linux.die.net/man/1/pv
[2]:http://linux.die.net/man/1/dialog
Install Android On BQ Aquaris Ubuntu Phone In Linux
================================================================================

![How to install Android on Ubuntu Phone](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-on-Ubuntu-Phone.jpg)
If you happen to own the first Ubuntu phone and want to **replace Ubuntu with Android on the bq Aquaris e4.5**, this post is going to help you.

There can be plenty of reasons why you might want to remove Ubuntu and use the mainstream Android OS. One of the foremost reasons is that the OS itself is at an early stage and is aimed at developers and enthusiasts. Whatever your reason, installing Android on the bq Aquaris is a piece of cake, thanks to the tools provided by bq.

Let’s see what we need to do to install Android on the bq Aquaris.
### Prerequisite ###

- A working Internet connection to download the Android factory image and the tools for flashing Android
- A USB data cable
- A system running Linux

This tutorial was performed on Ubuntu 15.10, but the steps should be applicable to most other Linux distributions.
### Replace Ubuntu with Android in bq Aquaris e4.5 ###
#### Step 1: Download Android firmware ####

The first step is to download the Android image for the bq Aquaris e4.5. The good thing is that it is available from bq’s support website. You can download the firmware, around 650 MB in size, from the link below:

- [Download Android for bq Aquaris e4.5][1]

Yes, you will get OTA updates with it. At present the firmware version is 2.0.1, which is based on Android Lollipop. Over time there could be a new firmware based on Marshmallow, and then the above link could be outdated.

I suggest checking the [bq support page][2] and downloading the latest firmware from there.

Once downloaded, extract it. In the extracted directory, look for the **MT6582_Android_scatter.txt** file. We shall be using it later.
#### Step 2: Download flash tool ####

bq has provided its own flash tool, Herramienta MTK Flash Tool, for easier installation of Android or Ubuntu on the device. You can download the tool from the link below:

- [Download MTK Flash Tool][3]

Since the flash tool might be upgraded in the future, you can always get the latest version from the [bq support page][4].

Once downloaded, extract the file. You should see an executable named **flash_tool** in it. We shall be using it later.
#### Step 3: Remove conflicting packages (optional) ####

If you are using a recent version of Ubuntu or an Ubuntu-based Linux distribution, you may encounter “BROM ERROR : S_UNDEFINED_ERROR (1001)” later in this tutorial.

To avoid this error, uninstall the conflicting package with the command below:

    sudo apt-get remove modemmanager

Restart the udev service with the command below:

    sudo service udev restart

Just to check for any possible side effects on the kernel module cdc_acm, run the command below:

    lsmod | grep cdc_acm

If the output of the above command is empty, you’ll have to reload this kernel module:

    sudo modprobe cdc_acm
#### Step 4: Prepare to flash Android ####

Go to the directory where you downloaded and extracted the flash tool (in step 2). Use the command line for this, because you’ll need root privileges here.

Presuming that you saved it in the Downloads directory, use the command below to go to this directory (in case you do not know how to navigate between directories on the command line):

    cd ~/Downloads/SP_Flash*

After that, use the command below to run the flash tool as root:

    sudo ./flash_tool

You’ll see a window pop up like the one below. Don’t bother with the Download Agent field; it will be filled automatically. Just focus on the Scatter-loading field.

![Replace Ubuntu with Android](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-1.jpeg)

Remember the **MT6582_Android_scatter.txt** file from step 1? It is in the extracted directory of the Android firmware you downloaded. Click on Scatter-loading (in the above picture) and point to the MT6582_Android_scatter.txt file.

When you do that, you’ll see several green lines like the ones below:

![Install-Android-bq-aquaris-Ubuntu-2](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-2.jpeg)
#### Step 5: Flashing Android ####

We are almost ready. Switch off your phone and connect it to your computer via a USB cable.

Select Firmware Upgrade from the dropdown, and after that click the big download button.

![flash Android with Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu.jpeg)

If everything is correct, you should see a flash status at the bottom of the tool:

![Replace Ubuntu with Android](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-3.jpeg)

When the procedure is successfully completed, you’ll see a notification like this:

![Successfully flashed Android on bq aquaris Ubuntu Phone](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-bq-aquaris-Ubuntu-4.jpeg)

Unplug your phone and power it on. You should see a white screen with AQUARIS written in the middle and, at the bottom, “powered by Android”. It might take up to 10 minutes before you can configure and start using Android.

Note: If something goes wrong in the process, press the power, volume up, and volume down buttons together to boot into fastboot mode. Turn the phone off again and reconnect the cable. Repeat the firmware upgrade procedure. It should work.
### Conclusion ###

Thanks to the tools provided, it is easy to **flash Android on the bq Ubuntu Phone**. Of course, you can use the same steps to replace Android with Ubuntu: just download the Ubuntu firmware instead of Android.

I hope this tutorial helped you replace Ubuntu with Android on your bq phone. If you have questions or suggestions, feel free to ask in the comment section below.
--------------------------------------------------------------------------------

via: http://itsfoss.com/install-android-ubuntu-phone/

作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:https://storage.googleapis.com/otas/2014/Smartphones/Aquaris_E4.5_L/2.0.1_20150623-1900_bq-FW.zip
[2]:http://www.bq.com/gb/support/aquaris-e4-5
[3]:https://storage.googleapis.com/otas/2014/Smartphones/Aquaris_E4.5/Ubuntu/Web%20version/Web%20version/SP_Flash_Tool_exe_linux_v5.1424.00.zip
[4]:http://www.bq.com/gb/support/aquaris-e4-5-ubuntu-edition
translating by ezio
Going Beyond Hello World Containers is Hard Stuff
================================================================================
In [my previous post][1], I provided the basic concepts behind Linux container technology. I wrote as much for you as I did for me. Containers are new to me. And I figured having the opportunity to blog about the subject would provide the motivation to really learn the stuff.

I intend to learn by doing: first get the concepts down, then get hands-on and write about it as I go. I assumed there must be a lot of Hello World type stuff out there to get me up to speed with the basics. Then, I could take things a bit further and build a microservice container or something.

I mean, it can’t be that hard, right?

Wrong.

Maybe it’s easy for someone who spends a significant amount of their life immersed in operations work. But for me, getting started with this stuff turned out to be hard to the point of posting my frustrations to Facebook...

But, there is good news: I got it to work! And it’s always nice being able to make lemonade from lemons. So I am going to share the story of how I made my first microservice container with you. Maybe my pain will save you some time.

If you've ever found yourself in a situation like this, fear not: folks like me are here to deal with the problems so you don't have to!

Let’s begin.
### A Thumbnail Micro Service ###

The microservice I designed was simple in concept: post a digital image in JPG or PNG format to an HTTP endpoint and get back a 100px-wide thumbnail.

Here’s what that looks like:

![container-diagram-0](https://deis.com/images/blog-images/containers-hard-0.png)

I decided to use NodeJS for my code and a version of [ImageMagick][2] to do the thumbnail transformation.

I built the first version of the service using the logic shown here:

![container-diagram-1](https://deis.com/images/blog-images/containers-hard-1.png)

I downloaded the [Docker Toolbox][3], which installs the Docker Quickstart Terminal. Docker Quickstart Terminal makes creating containers easier. It fires up a Linux virtual machine that has Docker installed, allowing you to run Docker commands from within a terminal.

In my case, I am running on OS X. But there’s a Windows version too.

I am going to use Docker Quickstart Terminal to build a container image for my microservice and run a container from that image.

The Docker Quickstart Terminal runs in your regular terminal, like so:

![container-diagram-2](https://deis.com/images/blog-images/containers-hard-2.png)
### The First Little Problem and the First Big Problem ###

So I fiddled around with NodeJS and ImageMagick and I got the service to work on my local machine.

Then, I created the Dockerfile, which is the configuration script Docker uses to build your container. (I’ll go more into builds and the Dockerfile later on.)
Here’s the build command I ran on the Docker Quickstart Terminal:
|
||||
|
||||
$ docker build -t thumbnailer:0.1
|
||||
|
||||
I got this response:
|
||||
|
||||
docker: "build" requires 1 argument.
|
||||
|
||||
Huh.
|
||||
|
||||
After 15 minutes I realized: I forgot to put a period . as the last argument!
|
||||
|
||||
It needs to be:
|
||||
|
||||
$ docker build -t thumbnailer:0.1 .
|
||||
|
||||
But this wasn’t the end of my problems.
|
||||
|
||||
I got the image to build and then I typed [the the `run` command][4] on the Docker Quickstart Terminal to fire up a container based on the image, called `thumbnailer:0.1`:
|
||||
|
||||
$ docker run -d -p 3001:3000 thumbnailer:0.1
|
||||
|
||||
The `-p 3001:3000` argument makes it so the NodeJS microservice running on port 3000 within the container binds to port 3001 on the host virtual machine.
|
||||
|
||||
Looks so good so far, right?
|
||||
|
||||
Wrong. Things are about to get pretty bad.
|
||||
|
||||
I determined the IP address of the virtual machine created by Docker Quickstart Terminal by running the `docker-machine` command:
|
||||
|
||||
$ docker-machine ip default
|
||||
|
||||
This returns the IP address of the default virtual machine, the one that is run under the Docker Quickstart Terminal. For me, this IP address was 192.168.99.100.
|
||||
|
||||
I browsed to http://192.168.99.100:3001/ and got the file upload page I built:
|
||||
|
||||
![container-diagram-3](https://deis.com/images/blog-images/containers-hard-3.png)
|
||||
|
||||
I selected a file and clicked the Upload Image button.
|
||||
|
||||
But it didn’t work.
|
||||
|
||||
The terminal is telling me it can’t find the `/upload` directory my microservice requires.
|
||||
|
||||
Now, keep in mind, I had been at this for about a day—between the fiddling and research. I’m feeling a little frustrated by this point.
|
||||
|
||||
Then, a brain spark flew. Somewhere along the line remembered reading a microservice should not do any data persistence on its own! Saving data should be the job of another service.
|
||||
|
||||
So what if the container can’t find the `/upload` directory? The real issue is: my microservice has a fundamentally flawed design.
|
||||
|
||||
Let’s take another look:
|
||||
|
||||
![container-diagram-4](https://deis.com/images/blog-images/containers-hard-4.png)
|
||||
|
||||
Why am I saving a file to disk? Microservices are supposed to be fast. Why not do all my work in memory? Using memory buffers will make the "I can’t find no stickin’ directory" error go away and will increase the performance of my app dramatically.
|
||||
|
||||
So that’s what I did. And here’s what the plan was:
|
||||
|
||||
![container-diagram-5](https://deis.com/images/blog-images/containers-hard-5.png)
|
||||
|
||||
Here’s the NodeJS I wrote to do all the in-memory work for creating a thumbnail:
|
||||
|
||||
    // Bind to the packages
    var express = require('express');
    var router = express.Router();
    var path = require('path'); // used for the file path
    var im = require("imagemagick");

    // Simple GET that allows you to test that you can access the thumbnail process
    router.get('/', function (req, res, next) {
        res.status(200).send('Thumbnailer processor is up and running');
    });

    // This is the POST handler. It will take the uploaded file and make a thumbnail from the
    // submitted byte array. I know, it's not rocket science, but it serves a purpose
    router.post('/', function (req, res, next) {
        req.pipe(req.busboy);
        req.busboy.on('file', function (fieldname, file, filename) {
            var ext = path.extname(filename);

            // Make sure that only png and jpg are allowed
            if (ext.toLowerCase() != '.jpg' && ext.toLowerCase() != '.png') {
                res.status(406).send("Service accepts only jpg or png files");
                return; // stop processing the disallowed file
            }

            var bytes = [];

            // Put the bytes from the request into a byte array
            file.on('data', function(data) {
                for (var i = 0; i < data.length; ++i) {
                    bytes.push(data[i]);
                }
                console.log('File [' + fieldname + '] got ' + bytes.length + ' bytes');
            });

            // Once the request is finished pushing the file bytes into the array, put the bytes in
            // a buffer and process that buffer with the imagemagick resize function
            file.on('end', function() {
                var buffer = new Buffer(bytes, 'binary');
                console.log('Bytes got ' + bytes.length + ' bytes');

                // Resize
                im.resize({
                    srcData: buffer,
                    height: 100
                }, function(err, stdout, stderr) {
                    if (err) {
                        throw err;
                    }
                    // Get the extension without the period
                    var typ = path.extname(filename).replace('.', '');
                    res.setHeader("content-type", "image/" + typ);
                    res.status(200);
                    // Send the image back as a response
                    res.send(new Buffer(stdout, 'binary'));
                });
            });
        });
    });

    module.exports = router;
Okay, so we’re back on track and everything is hunky dory on my local machine. I go to sleep.

But before I do, I test the microservice code running as a standard Node app on localhost...

![Containers Hard](https://deis.com/images/blog-images/containers-hard-6.png)

It works fine. Now all I needed to do was get it working in a container.

The next day I woke up, grabbed some coffee, and built an image, not forgetting to put in the period!

    $ docker build -t thumbnailer:0.1 .

I am building from the root directory of my thumbnailer project. The build command uses the Dockerfile that is in the root directory. That’s how it goes: put the Dockerfile in the same place you run the build and the Dockerfile will be used by default.

Here is the text of the Dockerfile I was using:

    FROM ubuntu:latest
    MAINTAINER bob@CogArtTech.com

    RUN apt-get update
    RUN apt-get install -y nodejs nodejs-legacy npm
    RUN apt-get install imagemagick libmagickcore-dev libmagickwand-dev
    RUN apt-get clean

    COPY ./package.json src/

    RUN cd src && npm install

    COPY . /src

    WORKDIR src/

    CMD npm start

What could go wrong?
### The Second Big Problem ###

I ran the `build` command and I got this error:

    Do you want to continue? [Y/n] Abort.

    The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1

I figured something was wrong with the microservice. I went back to my machine, fired up the service on localhost, and uploaded a file.

Then I got this error from NodeJS:

    Error: spawn convert ENOENT

What’s going on? This worked the other night!

I searched and searched, for every permutation of the error I could think of. After about four hours of replacing different node modules here and there, I figured: why not restart the machine?

I did. And guess what? The error went away!

Go figure.

### Putting the Genie Back in the Bottle ###

So, back to the original quest: I needed to get this build working.

I removed all of the containers running on the VM, using [the `rm` command][5]:

    $ docker rm -f $(docker ps -a -q)

The `-f` flag here force-removes containers even if they are running.

Then I removed all of my Docker images, using [the `rmi` command][6]:

    $ docker rmi -f $(docker images | tail -n +2 | awk '{print $3}')

I went through the whole process of rebuilding the image, running the container, and trying to get the microservice working. Then, after about an hour of self-doubt and accompanying frustration, I thought to myself: maybe this isn’t a problem with the microservice.
So, I looked at the error again:

    Do you want to continue? [Y/n] Abort.

    The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1

Then it hit me: the build is looking for a Y input from the keyboard! But this is a non-interactive Dockerfile script. There is no keyboard.

I went back to the Dockerfile, and there it was:

    RUN apt-get update
    RUN apt-get install -y nodejs nodejs-legacy npm
    RUN apt-get install imagemagick libmagickcore-dev libmagickwand-dev
    RUN apt-get clean

The second `apt-get install` command is missing the `-y` flag, which automatically answers "yes" where the command would normally prompt for confirmation.

I added the missing `-y` to the command:

    RUN apt-get update
    RUN apt-get install -y nodejs nodejs-legacy npm
    RUN apt-get install -y imagemagick libmagickcore-dev libmagickwand-dev
    RUN apt-get clean

And guess what: after two days of trial and tribulation, it worked! Two whole days!

So, I did my build:

    $ docker build -t thumbnailer:0.1 .

I fired up the container:

    $ docker run -d -p 3001:3000 thumbnailer:0.1

Got the IP address of the virtual machine:

    $ docker-machine ip default

Went to my browser and entered http://192.168.99.100:3001/ into the address bar.

The upload page loaded.

I selected an image, and this is what I got:

![container-diagram-7](https://deis.com/images/blog-images/containers-hard-7.png)

It worked!

Inside a container, for the first time!
### So What Does It All Mean? ###

A long time ago, I accepted the fact that when it comes to tech, sometimes even the easy stuff is hard. Along with that, I abandoned the desire to be the smartest guy in the room. Still, the last few days spent trying to get basic competency with containers have been, at times, a journey of self-doubt.

But, you wanna know something? It’s 2 AM on an early morning as I write this, and every nerve-wracking hour has been worth it. Why? Because you gotta put in the time. This stuff is hard and it does not come easy for anyone. And don’t forget: you’re learning tech, and tech runs the world!

P.S. Check out this two-part video of Hello World containers, featuring [Raziel Tabib’s][7] excellent work...

(Note: YouTube video)
<iframe width="560" height="315" src="https://www.youtube.com/embed/PJ95WY2DqXo" frameborder="0" allowfullscreen></iframe>

And don't miss part two...

(Note: YouTube video)
<iframe width="560" height="315" src="https://www.youtube.com/embed/lss2rZ3Ppuk" frameborder="0" allowfullscreen></iframe>

--------------------------------------------------------------------------------

via: https://deis.com/blog/2015/beyond-hello-world-containers-hard-stuff

作者:[Bob Reselman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://deis.com/blog
[1]:http://deis.com/blog/2015/developer-journey-linux-containers
[2]:https://github.com/rsms/node-imagemagick
[3]:https://www.docker.com/toolbox
[4]:https://docs.docker.com/reference/commandline/run/
[5]:https://docs.docker.com/reference/commandline/rm/
[6]:https://docs.docker.com/reference/commandline/rmi/
[7]:http://twitter.com/RazielTabib
How to Install Revive Adserver on Ubuntu 15.04 / CentOS 7
================================================================================

Revive Adserver is a free and open source advertisement management system that enables publishers, ad networks, and advertisers to serve ads on websites, apps, and videos, and to manage campaigns for multiple advertisers with many features. Revive Adserver is licensed under the GNU General Public License and was formerly known as OpenX Source. It features an integrated banner management interface, URL targeting, geo-targeting, and a tracking system for gathering statistics. This application enables website owners to manage banners from both in-house advertisement campaigns as well as from paid or third-party sources, such as Google's AdSense. In this tutorial, we'll install Revive Adserver on a machine running Ubuntu 15.04 or CentOS 7.

### 1. Installing the LAMP Stack ###

First of all, as Revive Adserver requires a complete LAMP stack to work, we'll install it. A LAMP stack is the combination of the Apache web server, the MySQL/MariaDB database server, and PHP modules. To run Revive properly, we'll need some PHP modules like apc, zlib, xml, pcre, mysql, and mbstring. To set up the LAMP stack, we'll run the following command for the distribution of Linux we are currently running.

#### On Ubuntu 15.04 ####

    # apt-get install apache2 mariadb-server php5 php5-gd php5-mysql php5-curl php-apc zlibc zlib1g zlib1g-dev libpcre3 libpcre3-dev libapache2-mod-php5 zip

#### On CentOS 7 ####

    # yum install httpd mariadb php php-gd php-mysql php-curl php-mbstring php-xml php-apc zlibc zlib1g zlib1g-dev libpcre3 libpcre3-dev zip

### 2. Starting the Apache and MariaDB Servers ###

We’ll now start our newly installed Apache web server and MariaDB database server. To do so, we'll execute the following commands.

#### On Ubuntu 15.04 ####

Ubuntu 15.04 ships with systemd as its default init system, so we'll execute the following command to start the apache and mariadb daemons.

    # systemctl start apache2 mysql

After they have started, we'll enable them to start automatically at every system boot by running the following command.

    # systemctl enable apache2 mysql

    Synchronizing state for apache2.service with sysvinit using update-rc.d...
    Executing /usr/sbin/update-rc.d apache2 defaults
    Executing /usr/sbin/update-rc.d apache2 enable
    Synchronizing state for mysql.service with sysvinit using update-rc.d...
    Executing /usr/sbin/update-rc.d mysql defaults
    Executing /usr/sbin/update-rc.d mysql enable

#### On CentOS 7 ####

On CentOS 7, systemd is also the default init system, so we'll run the following command to start the daemons.

    # systemctl start httpd mariadb

Next, we'll enable them to start automatically at every boot using the following command.

    # systemctl enable httpd mariadb

    ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
    ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
### 3. Configuring MariaDB ###

#### On CentOS 7/Ubuntu 15.04 ####

Now, as we are starting MariaDB for the first time and no password has been assigned to its root user, we’ll first need to configure a root password. Then, we’ll create a new database so that it can store data for our Revive Adserver installation.

To configure MariaDB and assign a root password, we’ll run the following command.

    # mysql_secure_installation

This will ask us to enter the current password for root, but as we haven’t set any password before and this is our first time installing MariaDB, we’ll simply press enter and go further. Then, we’ll be asked to set a root password; here we’ll hit Y and enter our password for the MariaDB root user. Then, we’ll simply hit enter to accept the default values for the remaining configuration questions.

    ….
    so you should just press enter here.

    Enter current password for root (enter for none):
    OK, successfully used password, moving on…

    Setting the root password ensures that nobody can log into the MariaDB
    root user without the proper authorisation.

    Set root password? [Y/n] y
    New password:
    Re-enter new password:
    Password updated successfully!
    Reloading privilege tables..
    … Success!
    …
    installation should now be secure.
    Thanks for using MariaDB!

![Configuring MariaDB](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-mariadb.png)

### 4. Creating a New Database ###

After we have assigned the password to the root user of our MariaDB server, we'll create a new database for the Revive Adserver application so that it can store its data in the database server. To do so, we'll first log in to the MariaDB console by running the following command.

    # mysql -u root -p

It will ask us to enter the password of the root user, which we just set in the step above. Then we'll be welcomed into the MariaDB console, in which we'll create our new database and database user, assign the user's password, and grant all privileges to create, remove, and edit the tables and data stored in the database.

    > CREATE DATABASE revivedb;
    > CREATE USER 'reviveuser'@'localhost' IDENTIFIED BY 'Pa$$worD123';
    > GRANT ALL PRIVILEGES ON revivedb.* TO 'reviveuser'@'localhost';
    > FLUSH PRIVILEGES;
    > EXIT;

![Creating Mariadb Revive Database](http://blog.linoxide.com/wp-content/uploads/2015/11/creating-mariadb-revive-database.png)
### 5. Downloading the Revive Adserver Package ###

Next, we'll download the latest release of Revive Adserver, i.e. version 3.2.2 at the time of writing this article. First, we'll get the download link from the official download page of Revive Adserver, i.e. [http://www.revive-adserver.com/download/][1], then we'll download the compressed zip file using the wget command under the /tmp/ directory, as shown below.

    # cd /tmp/
    # wget http://download.revive-adserver.com/revive-adserver-3.2.2.zip

    --2015-11-09 17:03:48-- http://download.revive-adserver.com/revive-adserver-3.2.2.zip
    Resolving download.revive-adserver.com (download.revive-adserver.com)... 54.230.119.219, 54.239.132.177, 54.230.116.214, ...
    Connecting to download.revive-adserver.com (download.revive-adserver.com)|54.230.119.219|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 11663620 (11M) [application/zip]
    Saving to: 'revive-adserver-3.2.2.zip'
    revive-adserver-3.2 100%[=====================>] 11.12M 1.80MB/s in 13s
    2015-11-09 17:04:02 (906 KB/s) - 'revive-adserver-3.2.2.zip' saved [11663620/11663620]

After the file is downloaded, we'll simply extract its files and directories using the unzip command.

    # unzip revive-adserver-3.2.2.zip

Then, we'll move the entire Revive directory, including every file, from /tmp to the default webroot of the Apache web server, i.e. the /var/www/html/ directory.

    # mv revive-adserver-3.2.2 /var/www/html/reviveads

### 6. Configuring the Apache Web Server ###

We'll now configure our Apache server so that Revive runs with a proper configuration. To do so, we'll create a new virtual host in a new configuration file named reviveads.conf. The directory for this file may differ from one distribution to another; here is how we create it in the following distributions of Linux.

#### On Ubuntu 15.04 ####

    # touch /etc/apache2/sites-available/reviveads.conf
    # ln -s /etc/apache2/sites-available/reviveads.conf /etc/apache2/sites-enabled/reviveads.conf
    # nano /etc/apache2/sites-available/reviveads.conf

Now, we'll add the following lines of configuration to this file using our favorite text editor.

    <VirtualHost *:80>
        ServerAdmin info@reviveads.linoxide.com
        DocumentRoot /var/www/html/reviveads/
        ServerName reviveads.linoxide.com
        ServerAlias www.reviveads.linoxide.com
        <Directory /var/www/html/reviveads/>
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        ErrorLog /var/log/apache2/reviveads.linoxide.com-error_log
        CustomLog /var/log/apache2/reviveads.linoxide.com-access_log common
    </VirtualHost>

![Configuring Apache2 Ubuntu](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-apache2-ubuntu.png)

When done, we'll save the file, exit our text editor, and restart the Apache web server.

    # systemctl restart apache2
#### On CentOS 7 ####

On CentOS, we'll directly create the file reviveads.conf under the /etc/httpd/conf.d/ directory using our favorite text editor.

    # nano /etc/httpd/conf.d/reviveads.conf

Then, we'll add the following lines of configuration to the file.

    <VirtualHost *:80>
        ServerAdmin info@reviveads.linoxide.com
        DocumentRoot /var/www/html/reviveads/
        ServerName reviveads.linoxide.com
        ServerAlias www.reviveads.linoxide.com
        <Directory /var/www/html/reviveads/>
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        ErrorLog /var/log/httpd/reviveads.linoxide.com-error_log
        CustomLog /var/log/httpd/reviveads.linoxide.com-access_log common
    </VirtualHost>

![Configuring httpd Centos](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-httpd-centos.png)

Once done, we'll simply save the file, exit the editor, and restart the Apache web server.

    # systemctl restart httpd

### 7. Fixing Permissions and Ownership ###

Now, we'll fix the file permissions and ownership of the installation path. First, we'll set the ownership of the installation directory to the Apache process owner so that the Apache web server has full access to the files and directories to edit, create, and delete.

#### On Ubuntu 15.04 ####

    # chown www-data: -R /var/www/html/reviveads

#### On CentOS 7 ####

    # chown apache: -R /var/www/html/reviveads

### 8. Allowing Through the Firewall ###

Now, we'll configure our firewall to allow port 80 (http) so that our Apache web server running Revive Adserver will be accessible from other machines in the network on the default http port, i.e. 80.

#### On Ubuntu 15.04/CentOS 7 ####

As CentOS 7 and Ubuntu 15.04 both ship with systemd by default, they run firewalld as the firewall program. To open port 80 (the http service) in firewalld, we'll execute the following commands.

    # firewall-cmd --permanent --add-service=http

    success

    # firewall-cmd --reload

    success
### 9. Web Installation ###

Finally, after everything is done as expected, we'll be able to access the web interface of the application and continue with the web installation by pointing a web browser at the web server running on our Linux machine. To do so, we'll point our web browser to http://ip-address/ or http://domain.com assigned to our Linux machine. In this tutorial, we'll point our browser to http://reviveads.linoxide.com/ .

Here, we'll see the welcome page of the Revive Adserver installation with the GNU General Public License v2, as Revive Adserver is released under this license. Then, we'll simply click the I agree button to continue the installation.

On the next page, we'll need to enter the required database information so Revive Adserver can connect to the MariaDB database server. Here, we'll need to enter the database name, user, and password that we set in the step above. In this tutorial, we entered the database name, user, and password as revivedb, reviveuser, and Pa$$worD123 respectively; then we set the hostname as localhost and continued further.

![Configuring Revive Adserver](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-revive-adserver.png)

We'll now enter the required information, like the administration username, password, and email address, so that we can use this information to log in to the dashboard of our ad server. Once done, we'll reach the finish page, where we'll see that we have successfully installed Revive Adserver on our server.

Next, we'll be redirected to the Advertiser page, where we'll add new advertisers and manage them. Then, we'll be able to navigate to our dashboard, add new users to the ad server, and add new campaigns for our advertisers, banners, websites, video ads, and everything else it is built for.

For enabling more configurations and access to the administrative settings, we can switch our dashboard user to the Administrator account. This will add new administrative menus to the dashboard, like Plugins and Configuration, through which we can add and manage plugins and configure many features and elements of Revive Adserver.

### Conclusion ###

In this article, we learned what Revive Adserver is and how to set it up on a Linux machine running the Ubuntu 15.04 or CentOS 7 distributions. Though Revive Adserver's initial source code was bought from OpenX, the code bases of OpenX Enterprise and Revive Adserver are now completely separate. To extend its features, we can install more plugins, which we can also find at [http://www.adserverplugins.com/][2]. Really, this piece of software has changed the way ads are managed for websites, apps, and videos, making it easy and efficient. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you!

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/install-revive-adserver-ubuntu-15-04-centos-7/

作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:http://www.revive-adserver.com/download/
[2]:http://www.adserverplugins.com/
Data Structures in the Linux Kernel
================================================================================

Doubly linked list
--------------------------------------------------------------------------------

The Linux kernel provides its own implementation of a doubly linked list, which you can find in [include/linux/list.h](https://github.com/torvalds/linux/blob/master/include/linux/list.h). We will start `Data Structures in the Linux kernel` with the doubly linked list data structure. Why? Because it is very popular in the kernel; just try to [search](http://lxr.free-electrons.com/ident?i=list_head) for uses of `list_head`.

First of all, let's look at the main structure in [include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h):

```C
struct list_head {
	struct list_head *next, *prev;
};
```

You may note that it is different from many implementations of a doubly linked list which you have seen. For example, the doubly linked list structure from the [glib](http://www.gnu.org/software/libc/) library looks like:

```C
struct GList {
	gpointer data;
	GList *next;
	GList *prev;
};
```

Usually a linked list structure contains a pointer to the item. The implementation of the linked list in the Linux kernel does not. So the main question is: `where does the list store the data?` The actual implementation of the linked list in the kernel is an `intrusive list`. An intrusive linked list does not contain data in its nodes: a node just contains pointers to the next and previous nodes, and the list nodes are embedded in the data structures that are added to the list. This makes the data structure generic, so it does not care about the entry's data type anymore.
For example:

```C
struct nmi_desc {
    spinlock_t lock;
    struct list_head head;
};
```

Let's look at some examples to understand how `list_head` is used in the kernel. As I already wrote, there are many, really many different places where lists are used in the kernel. Let's look for an example in the miscellaneous character drivers. The misc character drivers API from [drivers/char/misc.c](https://github.com/torvalds/linux/blob/master/drivers/char/misc.c) is used for writing small drivers that handle simple hardware or virtual devices. Those drivers share the same major number:

```C
#define MISC_MAJOR              10
```

but each has its own minor number. For example, you can see them with:

```
ls -l /dev | grep 10
crw-------   1 root root     10, 235 Mar 21 12:01 autofs
drwxr-xr-x  10 root root         200 Mar 21 12:01 cpu
crw-------   1 root root     10,  62 Mar 21 12:01 cpu_dma_latency
crw-------   1 root root     10, 203 Mar 21 12:01 cuse
drwxr-xr-x   2 root root         100 Mar 21 12:01 dri
crw-rw-rw-   1 root root     10, 229 Mar 21 12:01 fuse
crw-------   1 root root     10, 228 Mar 21 12:01 hpet
crw-------   1 root root     10, 183 Mar 21 12:01 hwrng
crw-rw----+  1 root kvm      10, 232 Mar 21 12:01 kvm
crw-rw----   1 root disk     10, 237 Mar 21 12:01 loop-control
crw-------   1 root root     10, 227 Mar 21 12:01 mcelog
crw-------   1 root root     10,  59 Mar 21 12:01 memory_bandwidth
crw-------   1 root root     10,  61 Mar 21 12:01 network_latency
crw-------   1 root root     10,  60 Mar 21 12:01 network_throughput
crw-r-----   1 root kmem     10, 144 Mar 21 12:01 nvram
brw-rw----   1 root disk      1,  10 Mar 21 12:01 ram10
crw--w----   1 root tty       4,  10 Mar 21 12:01 tty10
crw-rw----   1 root dialout   4,  74 Mar 21 12:01 ttyS10
crw-------   1 root root     10,  63 Mar 21 12:01 vga_arbiter
crw-------   1 root root     10, 137 Mar 21 12:01 vhci
```
Now let's have a close look at how lists are used in the misc device drivers. First of all, let's look on `miscdevice` structure:
|
||||
|
||||
```C
|
||||
struct miscdevice
|
||||
{
|
||||
int minor;
|
||||
const char *name;
|
||||
const struct file_operations *fops;
|
||||
struct list_head list;
|
||||
struct device *parent;
|
||||
struct device *this_device;
|
||||
const char *nodename;
|
||||
mode_t mode;
|
||||
};
|
||||
```
|
||||
|
||||
We can see the fourth field in the `miscdevice` structure - `list` which is a list of registered devices. In the beginning of the source code file we can see the definition of misc_list:
|
||||
|
||||
```C
|
||||
static LIST_HEAD(misc_list);
|
||||
```
|
||||
|
||||
which expands to the definition of variables with `list_head` type:
|
||||
|
||||
```C
|
||||
#define LIST_HEAD(name) \
|
||||
struct list_head name = LIST_HEAD_INIT(name)
|
||||
```
|
||||
|
||||
and initializes it with the `LIST_HEAD_INIT` macro, which sets previous and next entries with the address of variable - name:
|
||||
|
||||
```C
|
||||
#define LIST_HEAD_INIT(name) { &(name), &(name) }
|
||||
```
|
||||
|
||||
Now let's look on the `misc_register` function which registers a miscellaneous device. At the start it initializes `miscdevice->list` with the `INIT_LIST_HEAD` function:

```C
INIT_LIST_HEAD(&misc->list);
```

which does the same as the `LIST_HEAD_INIT` macro:

```C
static inline void INIT_LIST_HEAD(struct list_head *list)
{
	list->next = list;
	list->prev = list;
}
```

In the next step, after a device is created by the `device_create` function, we add it to the miscellaneous devices list with:

```C
list_add(&misc->list, &misc_list);
```

The kernel's `list.h` provides this API for adding a new entry to a list. Let's look at its implementation:

```C
static inline void list_add(struct list_head *new, struct list_head *head)
{
	__list_add(new, head, head->next);
}
```

It just calls the internal function `__list_add` with 3 parameters:

* new - the new entry;
* head - the list head after which the new item will be inserted;
* head->next - the item that currently follows the list head.

The implementation of `__list_add` is pretty simple:

```C
static inline void __list_add(struct list_head *new,
			      struct list_head *prev,
			      struct list_head *next)
{
	next->prev = new;
	new->next = next;
	new->prev = prev;
	prev->next = new;
}
```

Here we insert the new item between `prev` and `next`. So the `misc_list` which we defined at the start with the `LIST_HEAD_INIT` macro will have its previous and next pointers pointing to the `miscdevice->list` of the newly registered device.

There is still one question: how do we get back to the structure that contains a given list entry? There is a special macro for this:

```C
#define list_entry(ptr, type, member) \
	container_of(ptr, type, member)
```

which takes three parameters:

* ptr - the pointer to the structure's `list_head` field;
* type - the type of the containing structure;
* member - the name of the `list_head` field within the structure.

For example:

```C
const struct miscdevice *p = list_entry(v, struct miscdevice, list);
```

After this we can access any `miscdevice` field with `p->minor`, `p->name` and so on. As we can see from its definition above, `list_entry` just calls the `container_of` macro with the same arguments. At first sight, `container_of` looks strange:

```C
#define container_of(ptr, type, member) ({                   \
	const typeof( ((type *)0)->member ) *__mptr = (ptr); \
	(type *)( (char *)__mptr - offsetof(type,member) );})
```

First of all, note that it consists of two expressions in curly brackets. The compiler will evaluate the whole block in the curly braces and use the value of the last expression.

For example:

```C
#include <stdio.h>

int main() {
	int i = 0;
	printf("i = %d\n", ({++i; ++i;}));
	return 0;
}
```

will print `2`.

The next point is `typeof`; it's simple. As you can understand from its name, it just returns the type of the given variable. When I first saw the implementation of the `container_of` macro, the strangest thing I found was the zero in the `((type *)0)` expression. Actually this pointer magic calculates the offset of the given field from the start of the structure, and since we have `0` here, the resulting pointer is just that offset. Let's look at a simple example:

```C
#include <stdio.h>

struct s {
	int field1;
	char field2;
	char field3;
};

int main() {
	printf("%p\n", &((struct s*)0)->field3);
	return 0;
}
```

will print `0x5`.

The next `offsetof` macro calculates the offset from the beginning of the structure to the given structure field. Its implementation is very similar to the previous code:

```C
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
```

Let's summarize the `container_of` macro: given the address of a structure field with the `list_head` type, the name of that field, and the type of the containing structure, it returns the address of the containing structure. On its first line the macro declares the `__mptr` pointer which points to the same field that `ptr` points to and assigns `ptr` to it; now `ptr` and `__mptr` point to the same address. Technically we don't need this line, but it's useful for type checking: it ensures that the given structure (the `type` parameter) really has a member called `member`. On the second line it calculates the offset of the field within the structure with the `offsetof` macro and subtracts it from the structure's address. That's all.

Of course `list_add` and `list_entry` are not the only functions which `<linux/list.h>` provides. The implementation of the doubly linked list provides the following API:

* list_add
* list_add_tail
* list_del
* list_replace
* list_move
* list_is_last
* list_empty
* list_cut_position
* list_splice
* list_for_each
* list_for_each_entry

and many more.

via: https://github.com/0xAX/linux-insides/blob/master/DataStructures/dlist.md

译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

Assign Multiple IP Addresses To One Interface On Ubuntu 15.10
================================================================================

Sometimes you might want to use more than one IP address on your network interface card. What will you do in such cases? Buy an extra network card and assign a new IP? No, it's not necessary (at least in small networks). We can assign multiple IP addresses to one interface on Ubuntu systems. Curious to know how? Well, follow along, it is not that difficult.

This method will work on Debian and its derivatives too.

### Add additional IP addresses temporarily ###

First, let us find the IP address of the network card. In my Ubuntu 15.10 server, I use only one network card.

Run the following command to find out the IP address:

    sudo ip addr

**Sample output:**

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 08:00:27:2a:03:4b brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.103/24 brd 192.168.1.255 scope global enp0s3
           valid_lft forever preferred_lft forever
        inet6 fe80::a00:27ff:fe2a:34e/64 scope link
           valid_lft forever preferred_lft forever

Or

    sudo ifconfig

**Sample output:**

    enp0s3    Link encap:Ethernet  HWaddr 08:00:27:2a:03:4b
              inet addr:192.168.1.103  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fe2a:34e/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:186 errors:0 dropped:0 overruns:0 frame:0
              TX packets:70 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:21872 (21.8 KB)  TX bytes:9666 (9.6 KB)
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:217 errors:0 dropped:0 overruns:0 frame:0
              TX packets:217 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:38793 (38.7 KB)  TX bytes:38793 (38.7 KB)

As you see in the above output, my network card name is **enp0s3**, and its IP address is **192.168.1.103**.

Now let us add an additional IP address, for example **192.168.1.104**, to the interface.

Open your terminal and run the following command to add the additional IP:

    sudo ip addr add 192.168.1.104/24 dev enp0s3

Now, let us check if the IP has been added, using the command:

    sudo ip address show enp0s3

**Sample output:**

    2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 08:00:27:2a:03:4e brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.103/24 brd 192.168.1.255 scope global enp0s3
           valid_lft forever preferred_lft forever
        inet 192.168.1.104/24 scope global secondary enp0s3
           valid_lft forever preferred_lft forever
        inet6 fe80::a00:27ff:fe2a:34e/64 scope link
           valid_lft forever preferred_lft forever

Similarly, you can add as many IP addresses as you want.
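
If you need a whole range of secondary addresses, a small shell loop can generate the commands for you. A sketch (the interface name and address range are just the ones from this example): it only prints the commands, so you can review the output before piping it to `sudo sh`:

```shell
# Print (don't run) one "ip addr add" command per address in the range.
for last in $(seq 104 108); do
    echo "ip addr add 192.168.1.${last}/24 dev enp0s3"
done
```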

Let us ping the IP address to verify it:

    sudo ping 192.168.1.104

**Sample output:**

    PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
    64 bytes from 192.168.1.104: icmp_seq=1 ttl=64 time=0.901 ms
    64 bytes from 192.168.1.104: icmp_seq=2 ttl=64 time=0.571 ms
    64 bytes from 192.168.1.104: icmp_seq=3 ttl=64 time=0.521 ms
    64 bytes from 192.168.1.104: icmp_seq=4 ttl=64 time=0.524 ms

Yeah, it's working!

To remove the IP, just run:

    sudo ip addr del 192.168.1.104/24 dev enp0s3

Let us check if it is removed:

    sudo ip address show enp0s3

**Sample output:**

    2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 08:00:27:2a:03:4e brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.103/24 brd 192.168.1.255 scope global enp0s3
           valid_lft forever preferred_lft forever
        inet6 fe80::a00:27ff:fe2a:34e/64 scope link
           valid_lft forever preferred_lft forever

See, it's gone!

Well, as you may know, the changes will be lost after you reboot your system. How do you make them permanent? That's easy too.

### Add additional IP addresses permanently ###

The network card configuration file of your Ubuntu system is **/etc/network/interfaces**.

Let us check the contents of that file:

    sudo cat /etc/network/interfaces

**Sample output:**

    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    source /etc/network/interfaces.d/*
    # The loopback network interface
    auto lo
    iface lo inet loopback
    # The primary network interface
    auto enp0s3
    iface enp0s3 inet dhcp

As you see in the above output, the interface is DHCP enabled.

Okay, now we will assign an additional address, for example **192.168.1.104/24**.

Edit the file **/etc/network/interfaces**:

    sudo nano /etc/network/interfaces

Add the additional IP address as shown below.

    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    source /etc/network/interfaces.d/*
    # The loopback network interface
    auto lo
    iface lo inet loopback
    # The primary network interface
    auto enp0s3
    iface enp0s3 inet dhcp
    iface enp0s3 inet static
        address 192.168.1.104/24

Save and close the file.

Run the following command for the changes to take effect without rebooting:

    sudo ifdown enp0s3 && sudo ifup enp0s3

**Sample output:**

    Killed old client process
    Internet Systems Consortium DHCP Client 4.3.1
    Copyright 2004-2014 Internet Systems Consortium.
    All rights reserved.
    For info, please visit https://www.isc.org/software/dhcp/
    Listening on LPF/enp0s3/08:00:27:2a:03:4e
    Sending on   LPF/enp0s3/08:00:27:2a:03:4e
    Sending on   Socket/fallback
    DHCPRELEASE on enp0s3 to 192.168.1.1 port 67 (xid=0x225f35)
    Internet Systems Consortium DHCP Client 4.3.1
    Copyright 2004-2014 Internet Systems Consortium.
    All rights reserved.
    For info, please visit https://www.isc.org/software/dhcp/
    Listening on LPF/enp0s3/08:00:27:2a:03:4e
    Sending on   LPF/enp0s3/08:00:27:2a:03:4e
    Sending on   Socket/fallback
    DHCPDISCOVER on enp0s3 to 255.255.255.255 port 67 interval 3 (xid=0xdfb94764)
    DHCPREQUEST of 192.168.1.103 on enp0s3 to 255.255.255.255 port 67 (xid=0x6447b9df)
    DHCPOFFER of 192.168.1.103 from 192.168.1.1
    DHCPACK of 192.168.1.103 from 192.168.1.1
    bound to 192.168.1.103 -- renewal in 35146 seconds.

**Note**: It is **very important** to run the above two commands on **one** line if you are logged into the server remotely, because the first one will drop your connection. Chained this way, the SSH session will survive.

Now, let us check if the IP has been added, using the command:

    sudo ip address show enp0s3

**Sample output:**

    2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 08:00:27:2a:03:4e brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.103/24 brd 192.168.1.255 scope global enp0s3
           valid_lft forever preferred_lft forever
        inet 192.168.1.104/24 brd 192.168.1.255 scope global secondary enp0s3
           valid_lft forever preferred_lft forever
        inet6 fe80::a00:27ff:fe2a:34e/64 scope link
           valid_lft forever preferred_lft forever

Cool! The additional IP has been added.

Well then, let us ping the IP address to verify:

    sudo ping 192.168.1.104

**Sample output:**

    PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
    64 bytes from 192.168.1.104: icmp_seq=1 ttl=64 time=0.137 ms
    64 bytes from 192.168.1.104: icmp_seq=2 ttl=64 time=0.050 ms
    64 bytes from 192.168.1.104: icmp_seq=3 ttl=64 time=0.054 ms
    64 bytes from 192.168.1.104: icmp_seq=4 ttl=64 time=0.067 ms

Voila! It's working. That's it.

Want to know how to add additional IP addresses on CentOS/RHEL/Scientific Linux/Fedora systems? Check the following link.

注:此篇文章以前做过选题:20150205 Linux Basics--Assign Multiple IP Addresses To Single Network Interface Card On CentOS 7.md

- [Assign Multiple IP Addresses To Single Network Interface Card On CentOS 7][1]

Happy weekend!

--------------------------------------------------------------------------------

via: http://www.unixmen.com/assign-multiple-ip-addresses-to-one-interface-on-ubuntu-15-10/

作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.unixmen.com/author/sk/
[1]:http://www.unixmen.com/linux-basics-assign-multiple-ip-addresses-single-network-interface-card-centos-7/

sources/tech/20151123 Data Structures in the Linux Kernel.md

Data Structures in the Linux Kernel
================================================================================

Radix tree
--------------------------------------------------------------------------------

As you already know, the Linux kernel provides many different libraries and functions which implement different data structures and algorithms. In this part we will consider one of these data structures - the [Radix tree](http://en.wikipedia.org/wiki/Radix_tree). There are two files which are related to the `radix tree` implementation and API in the Linux kernel:

* [include/linux/radix-tree.h](https://github.com/torvalds/linux/blob/master/include/linux/radix-tree.h)
* [lib/radix-tree.c](https://github.com/torvalds/linux/blob/master/lib/radix-tree.c)

Let's talk about what a `radix tree` is. A radix tree is a `compressed trie`, where a [trie](http://en.wikipedia.org/wiki/Trie) is a data structure which implements the interface of an associative array and allows storing values as `key-value` pairs. The keys are usually strings, but any data type can be used. A trie is different from an `n-tree` because of its nodes. Nodes of a trie do not store keys; instead, a node of a trie stores single character labels. The key which is related to a given node is derived by traversing from the root of the tree to this node. For example:

```
               +-----------+
               |           |
               |    " "    |
               |           |
        +------+-----------+------+
        |                         |
        |                         |
  +-----v-----+             +-----v-----+
  |           |             |           |
  |     g     |             |     c     |
  |           |             |           |
  +-----------+             +-----------+
        |                         |
        |                         |
  +-----v-----+             +-----v-----+
  |           |             |           |
  |     o     |             |     a     |
  |           |             |           |
  +-----------+             +-----------+
                                  |
                                  |
                            +-----v-----+
                            |           |
                            |     t     |
                            |           |
                            +-----------+
```

So in this example, we can see a `trie` with the keys `go` and `cat`. The compressed trie or `radix tree` differs from a `trie` in that all intermediate nodes which have only one child are removed.

The radix tree in the Linux kernel is a data structure which maps values to integer keys. It is represented by the following structures from the file [include/linux/radix-tree.h](https://github.com/torvalds/linux/blob/master/include/linux/radix-tree.h):

```C
struct radix_tree_root {
	unsigned int height;
	gfp_t gfp_mask;
	struct radix_tree_node __rcu *rnode;
};
```

This structure presents the root of a radix tree and contains three fields:

* `height` - height of the tree;
* `gfp_mask` - tells how memory allocations will be performed;
* `rnode` - pointer to the child node.

The first field we will discuss is `gfp_mask`: low-level kernel memory allocation functions take a set of flags - the `gfp_mask` - which describes how the allocation is to be performed. These `GFP_` flags which control the allocation process can take the following values, among others:

* `GFP_NOIO` - can sleep and wait for memory;
* `__GFP_HIGHMEM` - high memory can be used;
* `GFP_ATOMIC` - the allocation process is high-priority and can't sleep;

etc.
The next field is `rnode`:

```C
struct radix_tree_node {
	unsigned int path;
	unsigned int count;
	union {
		struct {
			struct radix_tree_node *parent;
			void *private_data;
		};
		struct rcu_head rcu_head;
	};
	/* For tree user */
	struct list_head private_list;
	void __rcu *slots[RADIX_TREE_MAP_SIZE];
	unsigned long tags[RADIX_TREE_MAX_TAGS][RADIX_TREE_TAG_LONGS];
};
```

This structure contains information about the offset in the parent and the height from the bottom, the count of child nodes, and fields for accessing and freeing the node. These fields are described below:

* `path` - offset in parent & height from the bottom;
* `count` - count of the child nodes;
* `parent` - pointer to the parent node;
* `private_data` - used by the user of the tree;
* `rcu_head` - used for freeing the node;
* `private_list` - used by the user of the tree.

The last two fields of the `radix_tree_node` - `tags` and `slots` - are important and interesting. Every node can contain a set of slots which store pointers to the data. Empty slots in the Linux kernel radix tree implementation store `NULL`. Radix trees in the Linux kernel also support tags, which are associated with the `tags` field in the `radix_tree_node` structure. Tags allow individual bits to be set on records which are stored in the radix tree.

Now that we know about the radix tree structure, it is time to look at its API.

Linux kernel radix tree API
---------------------------------------------------------------------------------

We start with the data structure initialization. There are two ways to initialize a new radix tree. The first is to use the `RADIX_TREE` macro:

```C
RADIX_TREE(name, gfp_mask);
```

As you can see, we pass the `name` parameter, so with the `RADIX_TREE` macro we can define and initialize a radix tree with the given name. The implementation of `RADIX_TREE` is easy:

```C
#define RADIX_TREE(name, mask) \
	struct radix_tree_root name = RADIX_TREE_INIT(mask)

#define RADIX_TREE_INIT(mask) { \
	.height = 0,            \
	.gfp_mask = (mask),     \
	.rnode = NULL,          \
}
```

At the beginning of the `RADIX_TREE` macro we define an instance of the `radix_tree_root` structure with the given name and call the `RADIX_TREE_INIT` macro with the given mask. The `RADIX_TREE_INIT` macro just initializes the `radix_tree_root` structure with default values and the given mask.

The second way is to define a `radix_tree_root` structure by hand and pass it with a mask to the `INIT_RADIX_TREE` macro:

```C
struct radix_tree_root my_radix_tree;
INIT_RADIX_TREE(my_radix_tree, gfp_mask_for_my_radix_tree);
```

where:

```C
#define INIT_RADIX_TREE(root, mask) \
do {                                \
	(root)->height = 0;         \
	(root)->gfp_mask = (mask);  \
	(root)->rnode = NULL;       \
} while (0)
```

performs the same initialization with default values as the `RADIX_TREE_INIT` macro does.

The next two functions insert and delete records to/from a radix tree:

* `radix_tree_insert`;
* `radix_tree_delete`.

The first, `radix_tree_insert`, takes three parameters:

* root of a radix tree;
* index key;
* data to insert.

The `radix_tree_delete` function takes the same set of parameters as `radix_tree_insert`, but without the data.

Search in a radix tree is implemented in three ways:

* `radix_tree_lookup`;
* `radix_tree_gang_lookup`;
* `radix_tree_lookup_slot`.

The first, `radix_tree_lookup`, takes two parameters:

* root of a radix tree;
* index key.

This function tries to find the given key in the tree and returns the record associated with this key. The second, `radix_tree_gang_lookup`, has the following signature:

```C
unsigned int radix_tree_gang_lookup(struct radix_tree_root *root,
                                    void **results,
                                    unsigned long first_index,
                                    unsigned int max_items);
```

It returns the number of records found, sorted by key, starting from the first index. The number of returned records will not be greater than the `max_items` value.

And the last function, `radix_tree_lookup_slot`, returns the slot which contains the data.

Links
---------------------------------------------------------------------------------

* [Radix tree](http://en.wikipedia.org/wiki/Radix_tree)
* [Trie](http://en.wikipedia.org/wiki/Trie)

--------------------------------------------------------------------------------

via: https://github.com/0xAX/linux-insides/blob/master/DataStructures/radix-tree.md

作者:[0xAX]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

How to Configure Apache Solr on Ubuntu 14 / 15
================================================================================

Hello and welcome to today's article on Apache Solr. In brief, Apache Solr is the most famous open source search platform, with Apache Lucene at the back end, and it enables you to easily create search engines which search websites, databases and files. It can index and search multiple sites and return recommendations for related content based on the searched text.

Solr works over HTTP with Extensible Markup Language (XML) and offers application program interfaces (APIs) for JavaScript Object Notation (JSON), Python, and Ruby. According to the Apache Lucene Project, Solr offers capabilities that have made it popular with administrators, including:

- Full text search
- Faceted navigation
- Snippet generation/highlighting
- Spelling suggestion/autocomplete
- Custom document ranking/ordering

#### Prerequisites: ####

On a fresh Ubuntu 14/15 system with minimal packages installed, you only have to take care of a few prerequisites in order to install Apache Solr.

### 1) System Update ###

Log in to your Ubuntu server with a non-root sudo user, which will be used to perform all the steps to install and use Solr.

After a successful login, issue the following command to update your system with the latest updates and patches:

    $ sudo apt-get update

### 2) JRE Setup ###

The Solr setup needs the Java Runtime Environment to be installed on the system as its basic requirement, because Solr and Tomcat are both Java based applications. So, we need to install and configure the Java home environment with the latest Java.

To install the latest version of Oracle Java 8, we need to install the Python Software Properties package using the command below:

    $ sudo apt-get install python-software-properties

Upon completion, set up the repository for the latest version of Java 8:

    $ sudo add-apt-repository ppa:webupd8team/java

Now you are able to install the latest version of Oracle Java 8 by issuing the commands below, first to update the package source list and then to install Java:

    $ sudo apt-get update

----------

    $ sudo apt-get install oracle-java8-installer

Accept the Oracle Binary Code License Agreement for the Java SE Platform Products and JavaFX, as you will be asked to during the Java installation and configuration process, with a click on the 'OK' button.

When the installation process is complete, run the command below to test the installation of Java and check its version:

    kash@solr:~$ java -version
    java version "1.8.0_66"
    Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
    Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)

The output indicates that we have successfully fulfilled the basic requirement of Solr by installing Java. Now move on to the next step to install Solr.

### Installing Solr ###

Installing Solr on Ubuntu can be done in two different ways, but in this article we prefer to install the latest package from source.

To install Solr from source, download the latest version of the package from its official [web page][1]: copy the link address and fetch it using the 'wget' command.

    $ wget http://www.us.apache.org/dist/lucene/solr/5.3.1/solr-5.3.1.tgz

Run the command below to extract the installation script from the archive:

    $ tar -xzf solr-5.3.1.tgz solr-5.3.1/bin/install_solr_service.sh --strip-components=2

Then run the script, which creates a new 'solr' user and installs Solr as a service:

    $ sudo bash ./install_solr_service.sh solr-5.3.1.tgz

![Solr Installation](http://blog.linoxide.com/wp-content/uploads/2015/11/12.png)

To check the status of the Solr service, use the command below:

    $ service solr status

![Solr Status](http://blog.linoxide.com/wp-content/uploads/2015/11/22.png)

### Creating a Solr Collection ###

Now we can create multiple collections using the Solr user. To do so, just run the command below, mentioning the name of the collection you want to create and specifying its configuration set, as shown:

    $ sudo su - solr -c "/opt/solr/bin/solr create -c myfirstcollection -n data_driven_schema_configs"

![creating collection](http://blog.linoxide.com/wp-content/uploads/2015/11/32.png)

We have successfully created the new core instance directory for our first collection and can now add data to it. The default schema file is in the directory '/opt/solr/server/solr/configsets/data_driven_schema_configs/conf'.

### Using the Solr Web Interface ###

Apache Solr is accessible on its default port, 8983. Open your favorite browser and navigate to http://your_server_ip:8983/solr or http://your-domain.com:8983/solr. Make sure that the port is allowed in your firewall.

    http://172.25.10.171:8983/solr/

![Solr Web Access](http://blog.linoxide.com/wp-content/uploads/2015/11/42.png)

From the Solr web console, click on the 'Core Admin' button in the left bar; you will then see the first collection that we created earlier using the CLI. You can also create new cores by clicking on the 'Add Core' button.

![Adding Core](http://blog.linoxide.com/wp-content/uploads/2015/11/52.png)

You can also add documents, and query them, as shown in the image below, by selecting your particular collection and opening its Documents section. Add the data in the specified format as shown in the box.

    {
    "number": 1,
    "Name": "George Washington",
    "birth_year": 1989,
    "Starting_Job": 2002,
    "End_Job": "2009-04-30",
    "Qualification": "Graduation",
    "skills": "Linux and Virtualization"
    }

After adding the document, click on the 'Submit Document' button.

![adding Document](http://blog.linoxide.com/wp-content/uploads/2015/11/62.png)
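
A document can also be posted from the command line through Solr's JSON update endpoint. A sketch, assuming the default port and the collection name used above: the snippet only writes the document to a file and shows the request as a comment, so nothing is sent until you run the `curl` line yourself on the Solr host:

```shell
# Save the document to a file so it can be inspected or reused.
cat > doc.json <<'EOF'
[{"number": 1, "Name": "George Washington", "skills": "Linux and Virtualization"}]
EOF

# With the Solr service running, send it with:
#   curl -X POST -H 'Content-Type: application/json' \
#        'http://localhost:8983/solr/myfirstcollection/update?commit=true' \
#        --data-binary @doc.json
echo "wrote $(wc -c < doc.json) bytes to doc.json"
```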
### Conclusion ###

Now that Solr is successfully installed on Ubuntu, you are able to insert and query data using its web interface. Go ahead and add more collections, and insert the data and documents that you wish to manage through Solr. We hope you found this article helpful and enjoyed reading it.

--------------------------------------------------------------------------------

via: http://linoxide.com/ubuntu-how-to/configure-apache-solr-ubuntu-14-15/

作者:[Kashif][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/kashifs/
[1]:http://lucene.apache.org/solr/
How to Install Cockpit in Fedora / CentOS / RHEL / Arch Linux
================================================================================
Cockpit is free and open source server management software that makes it easy to administer GNU/Linux servers via a beautiful web interface frontend. Cockpit helps Linux system administrators, system maintainers and DevOps engineers manage their servers and perform simple tasks, such as administering storage, inspecting journals, and starting and stopping services. Its journal interface makes it easy to switch between the terminal and the web interface. Moreover, it lets you manage not just one server but several networked servers from a single place at the same time. It is very lightweight and has an easy to use web based interface. In this tutorial, we'll learn how to set up Cockpit and use it to manage a server running Fedora, CentOS, Arch Linux or RHEL. Some of the benefits of Cockpit on a GNU/Linux server are as follows:

1. It includes a systemd service manager for ease of use.
1. It has a journal log viewer for troubleshooting and log analysis.
1. Storage setup, including LVM, was never easier.
1. Basic network configuration can be applied with Cockpit.
1. We can easily add and remove local users and manage multiple servers.

### 1. Installing Cockpit ###

First of all, we'll need to set up Cockpit on our Linux based server. In most distributions, the cockpit package is already available in the official repositories. Here, in this tutorial, we'll set up Cockpit on Fedora 22, CentOS 7, Arch Linux and RHEL 7 from their official repositories.

#### On CentOS / RHEL ####

Cockpit is available in the official repositories of CentOS and RHEL, so we'll simply install it with the yum package manager. To do so, we'll run the following command with sudo/root access.

    # yum install cockpit

![Install Cockpit Centos](http://blog.linoxide.com/wp-content/uploads/2015/10/install-cockpit-centos.png)

#### On Fedora 22/21 ####

Like CentOS, Fedora also ships it by default in its official repository; we'll simply install cockpit using the dnf package manager.

    # dnf install cockpit

![Install Cockpit Fedora](http://blog.linoxide.com/wp-content/uploads/2015/10/install-cockpit-fedora.png)

#### On Arch Linux ####

Cockpit is currently not available in the official repository of Arch Linux, but it is available in the Arch User Repository, also known as the AUR. So, we'll simply run the following yaourt command to install it.

    # yaourt cockpit

![Install Cockpit Archlinux](http://blog.linoxide.com/wp-content/uploads/2015/10/install-cockpit-archlinux.png)

### 2. Starting and Enabling Cockpit ###

After we have successfully installed it, we'll start the Cockpit server with our service/daemon manager. As of 2015, most Linux distributions have adopted systemd, while some still run SysVinit to manage daemons; Cockpit, however, uses systemd for almost everything, from running daemons to services. So, we can only set up Cockpit on the latest releases of Linux distributions running systemd. To start Cockpit and make it start on every boot of the system, we'll run the following commands in a terminal or console.

    # systemctl start cockpit

    # systemctl enable cockpit.socket

    Created symlink from /etc/systemd/system/sockets.target.wants/cockpit.socket to /usr/lib/systemd/system/cockpit.socket.
### 3. Allowing Cockpit through the Firewall ###

After we have started our Cockpit server and enabled it to start on every boot, we'll now configure the firewall. As we have firewall programs running on our server, we'll need to allow Cockpit's port (9090) in order to make Cockpit accessible from outside of the server.

#### On Firewalld ####

    # firewall-cmd --add-service=cockpit --permanent

    success

    # firewall-cmd --reload

    success

![Cockpit Allowing Firewalld](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-allowing-firewalld.png)

#### On Iptables ####

    # iptables -A INPUT -p tcp -m tcp --dport 9090 -j ACCEPT

    # service iptables save
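To keep the two firewall variants straight, here is a small helper sketch (an illustration, not part of the original setup) that prints the commands for opening Cockpit's default port, 9090, for whichever firewall tool the system uses:

```shell
# Hypothetical helper: print the firewall commands that open Cockpit's
# default port (9090) for the given firewall tool.
cockpit_firewall_hint() {
    local tool="$1" port=9090
    case "$tool" in
        firewalld)
            echo "firewall-cmd --add-service=cockpit --permanent && firewall-cmd --reload"
            ;;
        iptables)
            echo "iptables -A INPUT -p tcp -m tcp --dport ${port} -j ACCEPT && service iptables save"
            ;;
        *)
            echo "unsupported firewall tool: $tool" >&2
            return 1
            ;;
    esac
}

cockpit_firewall_hint firewalld
cockpit_firewall_hint iptables
```

Run the printed command as root; on firewalld systems the predefined `cockpit` service already maps to port 9090.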
### 4. Accessing the Cockpit Web Interface ###

Next, we'll finally access the Cockpit web interface using a web browser. We'll simply need to point our web browser to https://ip-address:9090 or https://server.domain.com:9090, according to the configuration. Here, in our tutorial, we'll point our browser to https://128.199.114.17:9090 as shown in the image below.

![Cockpit Webserver SSL Proceed](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-webserver-ssl-proceed.png)

We'll see an SSL certificate warning, as we are using a self-signed SSL certificate. So, we'll simply ignore it and go forward to the login page; in Chrome/Chromium, we'll need to click on Show Advanced and then click on **Proceed to 128.199.114.17 (unsafe)**.

![Cockpit Login Screen](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-login-screen.png)

Now, we'll be asked to enter the login details in order to enter the dashboard. Here, the username and password are the same login details we use to log in to our Linux server. After we enter the login details and click on the Log In button, we will be welcomed into the Cockpit dashboard.

![Cockpit Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-dashboard.png)

Here, we'll see all the menus and visualizations of the CPU, disk, network and storage usage of the server. We'll see the dashboard as shown above.

#### Services ####

To manage services, we'll need to click on the Services button on the menu situated on the right side of the web page. Then, we'll see the services under 5 categories: Targets, System Services, Sockets, Timers and Paths.

![Cockpit Services](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-services.png)

#### Docker Containers ####

We can even manage Docker containers with Cockpit. It is pretty easy to monitor and administer Docker containers with Cockpit. As Docker isn't installed and running on our server, we'll need to click on Start Docker.

![Cockpit Container](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-container.png)

Cockpit will automatically install and run Docker on our server. Once it is running, we see the following screen. Then, we can manage the Docker images and containers as required.

![Cockpit Containers Management](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-containers-mangement.png)

#### Journal Log Viewer ####

Cockpit has a managed log viewer which separates Errors, Warnings and Notices into different tabs. There is also a tab, All, where we can see them all in a single place.

![Cockpit Journal Logs](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-journal-logs.png)

#### Networking ####

Under the networking section, we see two graphs visualizing the sending and receiving speeds. We can also see the list of available interfaces, with options to Add Bond, Bridge and VLAN. If we need to configure an interface, we can do so by simply clicking on the interface name. Below everything, we can see the journal log viewer for networking.

![Cockpit Network](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-network.png)

#### Storage ####

With Cockpit, it is now easy to see the read/write speed of our hard disk. We can see the journal log of the storage in order to troubleshoot and fix problems. A clear visualization bar of how much space is occupied is shown on the page. We can even unmount, format or delete a partition of a hard disk, and more. Features like creating a RAID device or a volume group are also available.

![Cockpit Storage](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-storage.png)

#### Account Management ####

We can easily create new accounts with the Cockpit web interface. The accounts created in it are applied to the system's user accounts. With it, we can change passwords, specify roles, and delete and rename user accounts.

![Cockpit Accounts](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-accounts.png)

#### Live Terminal ####

This is an awesome feature built into Cockpit: we can execute commands and perform tasks with the live terminal provided by the Cockpit interface. This makes it really easy to switch between the web interface and the terminal according to our needs.

![Cockpit Terminal](http://blog.linoxide.com/wp-content/uploads/2015/10/cockpit-terminal.png)

### Conclusion ###

Cockpit is good free and open source software developed by [Red Hat][1] that makes server management easy and simple. It is best for performing simple system administration tasks and is good for new system administrators. It is still in pre-release, as its stable version hasn't been released yet, so it is not suitable for production. It is currently developed on the latest releases of Fedora, CentOS, Arch Linux and RHEL, where systemd is installed by default. If you are willing to install Cockpit on Ubuntu, you can get PPA access, but it is currently outdated. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you!

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/install-cockpit-fedora-centos-rhel-arch-linux/

作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:http://www.redhat.com/
FSSlc translating

How to access Dropbox from the command line in Linux
================================================================================
Cloud storage is everywhere in today's multi-device environment, where people want to access content across multiple devices wherever they go. Dropbox is the most widely used cloud storage service, thanks to its elegant UI and flawless multi-platform compatibility. The popularity of Dropbox has led to a flurry of official and unofficial Dropbox clients that are available across different operating system platforms.

Linux has its own share of Dropbox clients: CLI clients as well as GUI-based clients. [Dropbox Uploader][1] is an easy-to-use Dropbox CLI client written in Bash. In this tutorial, I describe **how to access Dropbox from the command line in Linux by using Dropbox Uploader**.

### Install and Configure Dropbox Uploader on Linux ###

To use Dropbox Uploader, download the script and make it executable.

    $ wget https://raw.github.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh
    $ chmod +x dropbox_uploader.sh

Make sure that you have installed curl on your system, since Dropbox Uploader runs Dropbox APIs via curl.

To configure Dropbox Uploader, simply run dropbox_uploader.sh. When you run the script for the first time, it will ask you to grant the script access to your Dropbox account.

    $ ./dropbox_uploader.sh

![](https://c2.staticflickr.com/6/5739/22860931599_10c08ff15f_c.jpg)

As instructed above, go to [https://www.dropbox.com/developers/apps][2] in your web browser, and create a new Dropbox app. Fill in the information for the new app as shown below, and enter the app name as generated by Dropbox Uploader.

![](https://c2.staticflickr.com/6/5745/22932921350_4123d2dbee_c.jpg)

After you have created the new app, you will see the app key/secret on the next page. Make a note of them.

![](https://c1.staticflickr.com/1/736/22932962610_7db51aa718_c.jpg)

Enter the app key and secret in the terminal window where dropbox_uploader.sh is running. dropbox_uploader.sh will then generate an OAuth URL (e.g., https://www.dropbox.com/1/oauth/authorize?oauth_token=XXXXXXXXXXXX).

![](https://c1.staticflickr.com/1/563/22601635533_423738baed_c.jpg)

Go to the OAuth URL generated above in your web browser, and allow access to your Dropbox account.

![](https://c1.staticflickr.com/1/675/23202598606_6110c1a31b_c.jpg)

This completes the Dropbox Uploader configuration. To check whether Dropbox Uploader is successfully authenticated, run the following command.

    $ ./dropbox_uploader.sh info

----------

    Dropbox Uploader v0.12

    > Getting info...

    Name: Dan Nanni
    UID: XXXXXXXXXX
    Email: my@email_address
    Quota: 2048 Mb
    Used: 13 Mb
    Free: 2034 Mb
### Dropbox Uploader Examples ###

To list all contents in the top-level directory:

    $ ./dropbox_uploader.sh list

To list all contents in a specific folder:

    $ ./dropbox_uploader.sh list Documents/manuals

To upload a local file to a remote Dropbox folder:

    $ ./dropbox_uploader.sh upload snort.pdf Documents/manuals

To download a remote file from Dropbox to a local file:

    $ ./dropbox_uploader.sh download Documents/manuals/mysql.pdf ./mysql.pdf

To download an entire remote folder from Dropbox to a local folder:

    $ ./dropbox_uploader.sh download Documents/manuals ./manuals

To create a new remote folder on Dropbox:

    $ ./dropbox_uploader.sh mkdir Documents/whitepapers

To delete an entire remote folder (including all its contents) on Dropbox:

    $ ./dropbox_uploader.sh delete Documents/manuals
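These subcommands combine naturally into simple backup scripts. The sketch below is a hypothetical helper (not part of Dropbox Uploader itself) that uploads every file in a local directory; with DRYRUN=1 it only prints the commands it would run, which is handy for checking paths first.

```shell
#!/bin/bash
# Hypothetical helper: upload every regular file in a local directory to
# a Dropbox folder via dropbox_uploader.sh. Set DRYRUN=1 to only print
# the commands instead of running them.
backup_dir_to_dropbox() {
    local src="$1" dest="$2" f
    for f in "$src"/*; do
        [ -f "$f" ] || continue
        if [ "${DRYRUN:-0}" = 1 ]; then
            echo "./dropbox_uploader.sh upload '$f' '$dest/$(basename "$f")'"
        else
            ./dropbox_uploader.sh upload "$f" "$dest/$(basename "$f")"
        fi
    done
}

# Example (dry run; prints one upload command per file found):
DRYRUN=1 backup_dir_to_dropbox /tmp/reports Documents/backups
```

Scheduled from cron, a helper like this gives a rudimentary one-way backup without any GUI client installed.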
--------------------------------------------------------------------------------

via: http://xmodulo.com/access-dropbox-command-line-linux.html

作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/nanni
[1]:http://www.andreafabrizi.it/?dropbox_uploader
[2]:https://www.dropbox.com/developers/apps
How to install Android Studio on Ubuntu 15.04 / CentOS 7
================================================================================
With the advancement of smartphones in recent years, Android has become one of the biggest phone platforms, and all the tools required to build Android applications are freely available. Android Studio is an Integrated Development Environment (IDE) for developing Android applications, based on [IntelliJ IDEA][1]. It is free and open source software by Google, released in 2014, and it succeeds Eclipse as the main IDE.

In this article, we will learn how to install Android Studio on Ubuntu 15.04 and CentOS 7.

### Installation on Ubuntu 15.04 ###

We can install Android Studio in two ways. One is to set up the required repository and install it; the other is to download it from the official Android site and install it locally. In the following example, we will set up the repository using the command line and install it. Before proceeding, we need to make sure that we have JDK version 1.6 or greater installed.

Here, I'm installing JDK 1.8.

    $ sudo add-apt-repository ppa:webupd8team/java

    $ sudo apt-get update

    $ sudo apt-get install oracle-java8-installer oracle-java8-set-default

Verify that the Java installation was successful:

    poornima@poornima-Lenovo:~$ java -version

Now, set up the repository for installing Android Studio:

    $ sudo apt-add-repository ppa:paolorotolo/android-studio

![Android-Studio-repo](http://blog.linoxide.com/wp-content/uploads/2015/11/Android-studio-repo.png)

    $ sudo apt-get update

    $ sudo apt-get install android-studio

The above install command will install android-studio in the /opt directory.

Now, run the following command to start the setup wizard:

    $ /opt/android-studio/bin/studio.sh

This will invoke the setup screen. The following screenshots show the steps to set up Android Studio:

![Android Studio setup](http://blog.linoxide.com/wp-content/uploads/2015/11/Studio-setup.png)

![Install-type](Android Studio setup)

![Emulator Settings](http://blog.linoxide.com/wp-content/uploads/2015/11/Emulator-settings.png)

Once you press the Finish button, the license agreement will be displayed. After you accept the license, it starts downloading the required components.

![Download components](http://blog.linoxide.com/wp-content/uploads/2015/11/Download.png)

The Android Studio installation will be complete after this step. When you relaunch Android Studio, you will be shown the following welcome screen, from which you will be able to start working with your Android Studio.

![Welcome screen](http://blog.linoxide.com/wp-content/uploads/2015/11/Welcome-screen.png)
### Installation on CentOS 7 ###

Let us now learn how to install Android Studio on CentOS 7. Here also, you need to install JDK 1.6 or later. Remember to use 'sudo' before the commands if you are not the root user. You can download the [latest version][2] of the JDK. In case you already have an older version installed, remove it before installing the new one. In the example below, I will be installing JDK version 1.8.0_65 by downloading the required rpm.

    [root@li1260-39 ~]# rpm -ivh jdk-8u65-linux-x64.rpm
    Preparing... ################################# [100%]
    Updating / installing...
    1:jdk1.8.0_65-2000:1.8.0_65-fcs ################################# [100%]
    Unpacking JAR files...
    tools.jar...
    plugin.jar...
    javaws.jar...
    deploy.jar...
    rt.jar...
    jsse.jar...
    charsets.jar...
    localedata.jar...
    jfxrt.jar...

If the Java path is not set properly, you will get error messages. Hence, set the correct path:

    export JAVA_HOME=/usr/java/jdk1.8.0_65/
    export PATH=$PATH:$JAVA_HOME/bin

Check that the correct version has been installed:

    [root@li1260-39 ~]# java -version
    java version "1.8.0_65"
    Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
    Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)

If you notice any error message of the sort "unable-to-run-mksdcard-sdk-tool:" while trying to install Android Studio, you might also have to install the following packages on CentOS 7 64-bit:

    glibc.i686

    glibc-devel.i686

    libstdc++.i686

    zlib-devel.i686

    ncurses-devel.i686

    libX11-devel.i686

    libXrender.i686

    libXrandr.i686

Let us now install Android Studio by downloading the IDE file from the [Android site][3] and unzipping it.

    [root@li1260-39 tmp]# unzip android-studio-ide-141.2343393-linux.zip

Move the android-studio directory to the /opt directory:

    [root@li1260-39 tmp]# mv /tmp/android-studio/ /opt/

You can create a symlink to the studio executable to quickly start it whenever you need it.

    [root@li1260-39 tmp]# ln -s /opt/android-studio/bin/studio.sh /usr/local/bin/android-studio
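As an optional extra step (an assumption on our part, not from the original instructions), you can also generate a desktop launcher so Android Studio appears in the application menu. The paths below assume the /opt install location used above; the function takes a target directory so it can be tested outside /usr/share.

```shell
# Hypothetical convenience step: write a freedesktop.org desktop entry
# for Android Studio. Paths assume the /opt install location used above.
make_studio_launcher() {
    local dir="${1:-/usr/share/applications}"
    cat > "$dir/android-studio.desktop" <<'EOF'
[Desktop Entry]
Name=Android Studio
Exec=/opt/android-studio/bin/studio.sh %f
Icon=/opt/android-studio/bin/studio.png
Type=Application
Categories=Development;IDE;
Terminal=false
EOF
}

# As root: make_studio_launcher
```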
Now launch the studio from a terminal:

    [root@localhost ~]# android-studio

The screens that follow to complete the installation are the same as the ones shown above for Ubuntu. When the installation completes, you can start creating your own Android applications.

### Conclusion ###

Within a year of its release, Android Studio has taken over as the primary IDE for Android development, eclipsing Eclipse. It is the only official IDE that will support future Android SDKs and other Android features provided by Google. So, what are you waiting for? Go install Android Studio and have fun developing Android apps.

--------------------------------------------------------------------------------

via: http://linoxide.com/tools/install-android-studio-ubuntu-15-04-centos-7/

作者:[B N Poornima][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/bnpoornima/
[1]:https://www.jetbrains.com/idea/
[2]:http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
[3]:http://developer.android.com/sdk/index.html
translation by strugglingyouth

How to Install GIMP 2.8.16 in Ubuntu 16.04, 15.10, 14.04
================================================================================
![GIMP 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-icon.png)

GIMP image editor 2.8.16 was released on its 20th birthday. Here’s how to install or upgrade it in Ubuntu 16.04, Ubuntu 15.10, Ubuntu 14.04, Ubuntu 12.04 and their derivatives, e.g., Linux Mint 17.x/13 and Elementary OS Freya.

GIMP 2.8.16 features support for layer groups in OpenRaster files, fixes for layer-group support in PSD, various user interface improvements, OS X build system fixes, translation updates, and more changes. Read the [official announcement][1].

![GIMP image editor 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2014/08/gimp-2-8-14.jpg)

### How to Install or Upgrade: ###

Thanks to Otto Meier, an [Ubuntu PPA][2] with the latest GIMP packages is available for all current Ubuntu releases and derivatives.

**1. Add the GIMP PPA**

Open a terminal from the Unity Dash, an app launcher, or via the Ctrl+Alt+T shortcut. When it opens, paste the command below and hit Enter:

    sudo add-apt-repository ppa:otto-kesselgulasch/gimp

![add GIMP PPA](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-ppa.jpg)

Type in your password when it asks (there is no visual feedback while typing) and hit Enter to continue.
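If you are unsure whether the PPA was already added earlier, a quick check like the sketch below (an illustration, not part of the original steps) scans the apt sources for the Launchpad entry before you add it again:

```shell
# Hypothetical check: report whether a given PPA already appears in the
# apt sources. $1 is the PPA path (e.g. otto-kesselgulasch/gimp); $2 lets
# you point at a different sources directory (defaults to /etc/apt).
ppa_present() {
    local ppa="$1" dir="${2:-/etc/apt}"
    if grep -rqs "launchpad.net/$ppa" "$dir"; then
        echo "PPA $ppa already added"
    else
        echo "PPA $ppa not found"
    fi
}

ppa_present otto-kesselgulasch/gimp
```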
**2. Install or Upgrade the editor.**

After adding the PPA, launch **Software Updater** (or Software Manager in Mint). After checking for updates, you’ll see GIMP in the update list. Click “Install Now” to upgrade it.

![upgrade-gimp2816](http://ubuntuhandbook.org/wp-content/uploads/2015/11/upgrade-gimp2816.jpg)

For those who prefer Linux commands, run the commands below one by one to refresh your repository caches and install GIMP:

    sudo apt-get update

    sudo apt-get install gimp

**3. (Optional) Uninstall.**

Just in case you want to uninstall or downgrade the GIMP image editor, use Software Center to remove it, or run the commands below one by one to purge the PPA and downgrade the software:

    sudo apt-get install ppa-purge

    sudo ppa-purge ppa:otto-kesselgulasch/gimp

That’s it. Enjoy!

--------------------------------------------------------------------------------

via: http://ubuntuhandbook.org/index.php/2015/11/how-to-install-gimp-2-8-16-in-ubuntu-16-04-15-10-14-04/

作者:[Ji m][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://ubuntuhandbook.org/index.php/about/
[1]:http://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/
[2]:https://launchpad.net/~otto-kesselgulasch/+archive/ubuntu/gimp
Running a mainline kernel on a cellphone
================================================================================

One of the biggest freedoms associated with free software is the ability to replace a program with an updated or modified version. Even so, of the many millions of people using Linux-powered phones, few are able to run a mainline kernel on those phones, even if they have the technical skills to do the replacement. The sad fact is that no mainstream phone available runs mainline kernels. A session at the 2015 Kernel Summit, led by Rob Herring, explored this problem and what might be done to address it.

When asked, most of the developers in the room indicated that they would prefer to be able to run mainline kernels on their phones — though a handful did say that they would rather not do so. Rob has been working on this problem for the last year and a half in support of Project Ara (mentioned in this article). But the news is not good.

There is, he said, too much out-of-tree code running on a typical handset; mainline kernels simply lack the drivers needed to make that handset work. A typical phone is running 1-3 million lines of out-of-tree code. Almost all of those phones are stuck on the 3.10 kernel — or something even older. There are all kinds of reasons for this, but the simple fact is that things seem to move too quickly in the handset world for the kernel community to keep up. Is that, he asked, something that we care about?

Tim Bird noted that the Nexus 1, one of the original Android phones, never ran a mainline kernel and never will. It broke the promise of open source, making it impossible for users to put a new kernel onto their devices. At this point, no phone supports that ability. Peter Zijlstra wondered about how much of that out-of-tree code was duplicated functionality from one handset to the next; Rob noted that he has run into three independently developed hotplug governors so far.

Dirk Hohndel suggested that few people care. Of the billion phones out there, he said, approximately 27 of them have owners who care about running mainline kernels. The rest just want to get the phone to work. Perhaps developers who are concerned about running mainline kernels are trying to solve the wrong problem.

Chris Mason said that handset vendors are currently facing the same sorts of problems that distributors dealt with many years ago. They are coping with a lot of inefficient, repeated, duplicated work. Once the distributors decided to put their work into the mainline instead of carrying it themselves, things got a lot better. The key is to help the phone manufacturers to realize that they can benefit in the same way; that, rather than pressure from users, is how the problem will be solved.

Grant Likely raised concerns about security in a world where phones cannot be upgraded. What we need is a real distribution market for phones. But, as long as the vendors are in charge of the operating software, phones will not be upgradeable. We have a big security mess coming, he said. Peter added that, with Stagefright, that mess is already upon us.

Ted Ts'o said that running mainline kernels is not his biggest concern. He would be happy if the phones on sale this holiday season would be running a 3.18 or 4.1 kernel, rather than being stuck on 3.10. That, he suggested, is a more solvable problem. Steve Rostedt said that would not solve the security problem, but Ted remarked that a newer kernel would at least make it easier to backport fixes. Grant replied that, one year from now, it would all just happen again; shipping newer kernels is just an incremental fix. Kees Cook added that there is not much to be gained from backporting fixes; the real problem is that there are no defenses from bugs (he would expand on this theme in a separate session later in the day).

Rob said that any kind of solution would require getting the vendors on board. That, though, will likely run into trouble with the sort of lockdown that vendors like to apply to their devices. Paolo Bonzini asked whether it would be possible to sue vendors over unfixed security vulnerabilities, especially when the devices are still under warranty. Grant said that upgradeability had to become a market requirement or it simply wasn't going to happen. It might be a nasty security issue that causes this to happen, or carriers might start requiring it. Meanwhile, kernel developers need to keep pushing in that direction. Rob noted that, beyond the advantages noted thus far, the ability to run mainline kernels would help developers to test and validate new features on Android devices.

Josh Triplett asked whether the community would be prepared to do what it would take if the industry were to come around to the idea of mainline kernel support. There would be lots of testing and validation of kernels on handsets required; Android Compatibility Test Suite failures would have to be treated as regressions. Rob suggested that this could be discussed next year, after the basic functionality is in place, but Josh insisted that, if the demand were to show up, we would have to be able to give a good answer.

Tim said that there is currently a big disconnect with the vendor world; vendors are not reporting or contributing anything back to the community at all. They are completely disconnected, so there is no forward progress ever. Josh noted that when vendors do report bugs with the old kernels they are using, the reception tends to be less than friendly. Arnd Bergmann said that what was needed was to get one of the big silicon vendors to commit to the idea and get its hardware to a point where running mainline kernels was possible; that would put pressure on the others. But, he added, that would require the existence of one free GPU driver that got shipped with the hardware — something that does not exist currently.

Rob put up a list of problem areas, but there was not much time for discussion of the particulars. WiFi drivers continue to be an issue, especially with the new features being added in the Android world. Johannes Berg agreed that the new features are an issue; the Android developers do not even talk about them until they ship with the hardware. Support for most of those features does eventually land in the mainline kernel, though.

As things wound down, Ben Herrenschmidt reiterated that the key was to get vendors to realize that working with the mainline kernel is in their own best interest; it saves work in the long run. Mark Brown said that, in past years when the kernel version shipped with Android moved forward more reliably, the benefits of working upstream were more apparent to vendors. Now that things seem to be stuck on 3.10, that pressure is not there in the same way. The session ended with developers determined to improve the situation, but without any clear plan for getting there.

--------------------------------------------------------------------------------

via: https://lwn.net/Articles/662147/

作者:[Jonathan Corbet][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:https://lwn.net/Articles/KernelSummit2015/
|
Translating by KnightJoker

How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2
================================================================================

Nginx is a free and open source HTTP server and reverse proxy, as well as a mail proxy server for IMAP/POP3. Nginx is a high-performance web server with rich features, simple configuration, and low memory usage. It was originally written by Igor Sysoev in 2002, and is now used by big technology companies including Netflix, GitHub, Cloudflare, WordPress.com, and others.

In this tutorial we will **install and configure the Nginx web server as a reverse proxy for Apache on FreeBSD 10.2**. Apache will run with PHP on port 8080, and we will configure Nginx to run on port 80 and receive requests from users/visitors. When a user requests a web page from the browser on port 80, Nginx passes the request to the Apache web server and PHP running on port 8080.

#### Prerequisites ####

- FreeBSD 10.2.
- Root privileges.

### Step 1 - Update the System ###

Log in to your FreeBSD server with your SSH credentials and update the system with the commands below:

    freebsd-update fetch
    freebsd-update install

### Step 2 - Install Apache ###

Apache is an open source HTTP server and the most widely used web server. Apache is not installed by default on FreeBSD, but we can install it from the ports tree at "/usr/ports/www/apache24", or install it from the FreeBSD repository with the pkg command. In this tutorial we will use the pkg command to install from the FreeBSD repository:

    pkg install apache24

### Step 3 - Install PHP ###

Once Apache is installed, follow up by installing PHP to handle users' requests for PHP files. We will install PHP with the pkg command as below:

    pkg install php56 mod_php56 php56-mysql php56-mysqli

### Step 4 - Configure Apache and PHP ###

Once everything is installed, we will configure Apache to run on port 8080 and make PHP work with Apache. To configure Apache, we edit the configuration file "httpd.conf"; for PHP we just need to copy the PHP configuration file php.ini into the "/usr/local/etc/" directory.

Go to the "/usr/local/etc/" directory and copy the php.ini-production file to php.ini:

    cd /usr/local/etc/
    cp php.ini-production php.ini

Next, configure Apache by editing the file "httpd.conf" in the Apache directory:

    cd /usr/local/etc/apache24
    nano -c httpd.conf

Port configuration on line **52**:

    Listen 8080

ServerName configuration on line **219**:

    ServerName 127.0.0.1:8080

Add the DirectoryIndex files that Apache will serve when a directory is requested, on line **277**:

    DirectoryIndex index.php index.html

Configure Apache to work with PHP by adding the snippet below under line **287**:

    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php
    </FilesMatch>
    <FilesMatch "\.phps$">
        SetHandler application/x-httpd-php-source
    </FilesMatch>

Save and exit.

Now add Apache to start at boot time with the sysrc command:

    sysrc apache24_enable=yes

And test the Apache configuration with the command below:

    apachectl configtest

If there is no error, start Apache:

    service apache24 start

If all is done, verify that PHP is running well with Apache by creating a phpinfo file in the "/usr/local/www/apache24/data" directory:

    cd /usr/local/www/apache24/data
    echo "<?php phpinfo(); ?>" > info.php

Now visit the FreeBSD server IP: 192.168.1.123:8080/info.php.

![Apache and PHP on Port 8080](http://blog.linoxide.com/wp-content/uploads/2015/11/Apache-and-PHP-on-Port-8080.png)

Apache is working with PHP on port 8080.

### Step 5 - Install Nginx ###

Nginx is a high-performance web server and reverse proxy with low memory consumption. In this step we will use Nginx as a reverse proxy for Apache, so let's install it with the pkg command:

    pkg install nginx

### Step 6 - Configure Nginx ###

Once Nginx is installed, we must configure it by replacing the nginx file "**nginx.conf**" with the new configuration below. Change to the "/usr/local/etc/nginx/" directory and back up the default nginx.conf:

    cd /usr/local/etc/nginx/
    mv nginx.conf nginx.conf.original

Now create the new nginx configuration file:

    nano -c nginx.conf

and paste the configuration below:

    user www;
    worker_processes 1;
    error_log /var/log/nginx/error.log;

    events {
        worker_connections 1024;
    }

    http {
        include mime.types;
        default_type application/octet-stream;

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log;

        sendfile on;
        keepalive_timeout 65;

        # Nginx cache configuration
        proxy_cache_path /var/nginx/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
        proxy_temp_path /var/nginx/cache/tmp;
        proxy_cache_key "$scheme$host$request_uri";

        gzip on;

        server {
            #listen 80;
            server_name _;

            location /nginx_status {
                stub_status on;
                access_log off;
            }

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/local/www/nginx-dist;
            }

            # proxy the PHP scripts to Apache listening on 127.0.0.1:8080
            #
            location ~ \.php$ {
                proxy_pass http://127.0.0.1:8080;
                include /usr/local/etc/nginx/proxy.conf;
            }
        }

        include /usr/local/etc/nginx/vhost/*;

    }

Save and exit.

Next, create a new file called **proxy.conf** for the reverse proxy configuration, in the nginx directory:

    cd /usr/local/etc/nginx/
    nano -c proxy.conf

Paste the configuration below:

    proxy_buffering on;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffers 100 8k;
    add_header X-Cache $upstream_cache_status;

Save and exit.

And last, create a new directory for the nginx cache at "/var/nginx/cache":

    mkdir -p /var/nginx/cache

### Step 7 - Configure an Nginx VirtualHost ###

In this step we will create a new virtualhost for the domain "saitama.me", with the document root in "/usr/local/www/saitama.me" and the log files in the "/var/log/nginx" directory.

The first thing we must do is create a new directory to store the virtualhost files; here we use a new directory called "**vhost**". Let's create it:

    cd /usr/local/etc/nginx/
    mkdir vhost

The vhost directory has been created; now go into it and create a new virtualhost file. Here I will create the new file "**saitama.conf**":

    cd vhost/
    nano -c saitama.conf

Paste the virtualhost configuration below:

    server {
        # Replace with your freebsd IP
        listen 192.168.1.123:80;

        # Document Root
        root /usr/local/www/saitama.me;
        index index.php index.html index.htm;

        # Domain
        server_name www.saitama.me saitama.me;

        # Error and Access log file
        error_log /var/log/nginx/saitama-error.log;
        access_log /var/log/nginx/saitama-access.log main;

        # Reverse Proxy Configuration
        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8080;
            include /usr/local/etc/nginx/proxy.conf;

            # Cache configuration
            proxy_cache my-cache;
            proxy_cache_valid 10s;
            proxy_no_cache $cookie_PHPSESSID;
            proxy_cache_bypass $cookie_PHPSESSID;
            proxy_cache_key "$scheme$host$request_uri";
        }

        # Disable caching for html, xml, and json files
        location ~* \.(?:manifest|appcache|html?|xml|json)$ {
            expires -1;
        }

        # Cache static files for 30 days
        location ~* \.(jpg|png|gif|jpeg|css|mp3|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$ {
            proxy_cache_valid 200 120m;
            expires 30d;
            proxy_cache my-cache;
            access_log off;
        }

    }

Save and exit.
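The two cache-control `location` blocks are ordinary regular expressions over the request path, so you can sanity-check which paths each rule would match before reloading Nginx. Below is a minimal sketch using `grep -E` with plain ERE groups (the sample paths are made up for illustration; note the escaped dot before the extension list):

```shell
# Patterns mirroring the vhost's cache rules
nocache='\.(manifest|appcache|html?|xml|json)$'
longcache='\.(jpg|png|gif|jpeg|css|mp3|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$'

# Paths hitting the "no cache" rule
printf '%s\n' /index.html /app.json /logo.png /style.css | grep -E "$nocache"

# Paths hitting the "30-day cache" rule
printf '%s\n' /index.html /app.json /logo.png /style.css | grep -E "$longcache"
```

The first grep keeps /index.html and /app.json; the second keeps /logo.png and /style.css, which matches how Nginx would dispatch those requests.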

Next, create a new log directory for nginx and the virtualhost in "/var/log/":

    mkdir -p /var/log/nginx/

If all is done, let's create the document root directory for saitama.me:

    cd /usr/local/www/
    mkdir saitama.me

### Step 8 - Testing ###

In this step we just test our nginx configuration and the nginx virtualhost.

Test the nginx configuration with the command below:

    nginx -t

If there is no problem, add nginx to start at boot time with the sysrc command, then start it and restart Apache:

    sysrc nginx_enable=yes
    service nginx start
    service apache24 restart

All is done; now verify that PHP is working by adding a new phpinfo file in the saitama.me directory:

    cd /usr/local/www/saitama.me
    echo "<?php phpinfo(); ?>" > info.php

Visit the domain: **www.saitama.me/info.php**.

![Virtualhost Configured saitamame](http://blog.linoxide.com/wp-content/uploads/2015/11/Virtualhost-Configured-saitamame.png)

Nginx as a reverse proxy for Apache is working, and PHP is working too.

And here are some more results:

Test a .html file with no-cache:

    curl -I www.saitama.me

![html with no-cache](http://blog.linoxide.com/wp-content/uploads/2015/11/html-with-no-cache.png)

Test a .css file with the 30-day cache:

    curl -I www.saitama.me/test.css

![css file 30day cache](http://blog.linoxide.com/wp-content/uploads/2015/11/css-file-30day-cache.png)

Test a .php file with cache:

    curl -I www.saitama.me/info.php

![PHP file cached](http://blog.linoxide.com/wp-content/uploads/2015/11/PHP-file-cached.png)

All is done.

### Conclusion ###

Nginx is a very popular HTTP server and reverse proxy. It has rich features, high performance, and low memory/RAM usage. Nginx can also be used for caching: we can cache static web files to make pages load fast, and cache PHP responses when users request them. Nginx is easy to configure and use, whether as an HTTP server or acting as a reverse proxy for Apache.

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/install-nginx-reverse-proxy-apache-freebsd-10-2/

作者:[Arul][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arulm/

Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux
================================================================================

> Question: I have a text file in which I need to remove all trailing whitespace (e.g., spaces and tabs) in each line for formatting purposes. Is there a quick and easy Linux command-line tool I can use for this?

When you are writing code for your program, you must understand that there are standard coding styles to follow. For example, "trailing whitespace" is typically considered evil because when it gets into a code repository for revision control, it can cause a lot of problems and confusion (e.g., "false diffs"). Many IDEs and text editors are capable of highlighting and automatically trimming trailing whitespace at the end of each line.

Here are a few ways to **remove trailing whitespace in the Linux command-line environment**.

### Method One ###

A simple command-line approach to removing unwanted whitespace is via sed.

The following command deletes all spaces and tabs at the end of each line in input.java:

    $ sed -i 's/[[:space:]]*$//' input.java

If there are multiple files that need trailing whitespace removed, you can use a combination of find and sed. For example, the following command deletes trailing whitespace in all *.java files found recursively in the current directory as well as all its sub-directories:

    $ find . -name "*.java" -type f -print0 | xargs -0 sed -i 's/[[:space:]]*$//'
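To see the sed command in action without touching real sources, you can rehearse it on a throwaway file (the file name below is illustrative):

```shell
# Create a sample file with trailing spaces and tabs
printf 'int a;   \nint b;\t\t\nint c;\n' > /tmp/input.java

# Strip the trailing whitespace in place
sed -i 's/[[:space:]]*$//' /tmp/input.java

# Count lines that still end in a space or tab (prints 0)
grep -c '[[:space:]]$' /tmp/input.java || true
```

The `|| true` only keeps the shell happy about grep's non-zero exit status when nothing matches; the count itself is 0 after the substitution.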

### Method Two ###

The Vim text editor is able to highlight and trim whitespace in a file as well.

To highlight all trailing whitespace in a file, open the file with the Vim editor and enable search highlighting by typing the following in Vim command-line mode:

    :set hlsearch

Then search for trailing whitespace by typing:

    /\s\+$

This will show all trailing spaces and tabs found throughout the file.

![](https://c1.staticflickr.com/1/757/23198657732_bc40e757b4_b.jpg)

Then to clean up all trailing whitespace in a file with Vim, type the following Vim command:

    :%s/\s\+$//

This command means: substitute all whitespace characters found at the end of a line (\s\+$) with nothing.
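The same substitution Vim performs can also be expressed as an awk one-liner, if you prefer to stay on the command line; here it is shown on inline sample input rather than a real file:

```shell
# sub() deletes trailing spaces/tabs on each line; the trailing "1" prints the result
printf 'foo  \nbar\t\n' | awk '{ sub(/[ \t]+$/, "") } 1'
```

This prints `foo` and `bar` with their trailing blanks removed.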

--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/remove-trailing-whitespaces-linux.html

作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://ask.xmodulo.com/author/nanni

15 Useful Linux and Unix Tape Management Commands For Sysadmins
================================================================================

Tape devices should be used on a regular basis only for archiving files or for transferring data from one server to another. Usually, tape devices are hooked up to Unix boxes and controlled with mt or mtx. You should back up all data both to disks (maybe in the cloud) and to a tape device. In this tutorial you will learn about:

- Tape device names
- Basic commands to manage a tape drive
- Basic backup and restore commands

### Why backup? ###

A backup plan is important in order to have:

- The ability to recover from disk failure
- Protection against accidental file deletion
- Protection against file or file system corruption
- A way to survive complete server destruction, including destruction of on-site backups due to fire or other problems.

You can use tape-based archives to back up the whole server and move the tapes off-site.

### Understanding tape file marks and block size ###

![Fig.01: Tape file marks](http://s0.cyberciti.org/uploads/cms/2015/10/tape-format.jpg)

Fig.01: Tape file marks

Each tape device can store multiple tape backup files. Tape backup files are created using cpio, tar, dd, and so on. A tape device can be opened, written to, and closed by various programs, so you can store several backups (tapes) on one physical tape. Between each tape file is a "tape file mark". This is used to indicate where one tape file ends and another begins on the physical tape. You use the mt command to position the tape (wind forward, rewind, and write marks).

#### How data is stored on a tape ####

![Fig.02: How data is stored on a tape](http://s0.cyberciti.org/uploads/cms/2015/10/how-data-is-stored-on-a-tape.jpg)

Fig.02: How data is stored on a tape

All data is stored sequentially in tape archive format using tar. The first tape archive starts at the physical beginning of the tape (tar #0). The next will be tar #1, and so on.

### Tape device names on Unix ###

1. /dev/rmt/0 or /dev/rmt/1 or /dev/rmt/[0-127] : Regular tape device name on Unix. The tape is rewound.
1. /dev/rmt/0n : This is known as "no rewind", i.e. after use it leaves the tape in its current position for the next command.
1. /dev/rmt/0b : Use the magtape interface, i.e. BSD behavior. More readable by a variety of OSes such as AIX, Windows, Linux, FreeBSD, and more.
1. /dev/rmt/0l : Set density to low.
1. /dev/rmt/0m : Set density to medium.
1. /dev/rmt/0u : Set density to high.
1. /dev/rmt/0c : Set density to compressed.
1. /dev/st[0-9] : Linux-specific SCSI tape device name.
1. /dev/sa[0-9] : FreeBSD-specific SCSI tape device name.
1. /dev/esa0 : FreeBSD-specific SCSI tape device name that ejects on close (if capable).

#### Tape device name examples ####

- /dev/rmt/1cn indicates that I'm using unit 1, compressed density, and no rewind.
- /dev/rmt/0hb indicates that I'm using unit 0, high density, and BSD behavior.
- The auto-rewind SCSI tape device name on Linux : /dev/st0
- The non-rewind SCSI tape device name on Linux : /dev/nst0
- The auto-rewind SCSI tape device name on FreeBSD: /dev/sa0
- The non-rewind SCSI tape device name on FreeBSD: /dev/nsa0

#### How do I list installed SCSI tape devices? ####

Type the following commands:

    ## Linux (read man pages for more info) ##
    lsscsi
    lsscsi -g

    ## IBM AIX ##
    lsdev -Cc tape
    lsdev -Cc adsm
    lscfg -vl rmt*

    ## Solaris Unix ##
    cfgadm -a
    cfgadm -al
    luxadm probe
    iostat -En

    ## HP-UX Unix ##
    ioscan -fC tape
    ioscan -funC tape
    ioscan -fnC tape
    ioscan -kfC tape

Sample outputs from my Linux server:

![Fig.03: Installed tape devices on Linux server](http://s0.cyberciti.org/uploads/cms/2015/10/linux-find-tape-devices-command.jpg)

Fig.03: Installed tape devices on Linux server

### mt command examples ###

On Linux and Unix-like systems, the mt command is used to control operations of the tape drive, such as finding status, seeking through files on a tape, or writing tape control marks to the tape. You must run most of the following commands as the root user. The syntax is:

    mt -f /tape/device/name operation

#### Setting up the environment ####

You can set the TAPE shell variable. This is the pathname of the tape drive. The default (if the variable is unset, but not if it is null) is /dev/nsa0 on FreeBSD. It may be overridden with the -f option passed to the mt command, as explained below:

    ## Add to your shell startup file ##
    TAPE=/dev/st1 #Linux
    TAPE=/dev/rmt/2 #Unix
    TAPE=/dev/nsa3 #FreeBSD
    export TAPE

### 1: Display status of the tape/drive ###

    mt status  #Use default
    mt -f /dev/rmt/0 status  #Unix
    mt -f /dev/st0 status  #Linux
    mt -f /dev/nsa0 status  #FreeBSD
    mt -f /dev/rmt/1 status  #Unix unit 1 i.e. tape device no. 1

You can use a shell loop as follows to poll a system and locate all of its tape drives:

    for d in 0 1 2 3 4 5
    do
        mt -f "/dev/rmt/${d}" status
    done

### 2: Rewind the tape ###

    mt rew
    mt rewind
    mt -f /dev/mt/0 rewind
    mt -f /dev/st0 rewind

### 3: Eject the tape ###

    mt off
    mt offline
    mt eject
    mt -f /dev/mt/0 off
    mt -f /dev/st0 eject

### 4: Erase the tape (rewind the tape and, if applicable, unload the tape) ###

    mt erase
    mt -f /dev/st0 erase #Linux
    mt -f /dev/rmt/0 erase #Unix

### 5: Retension a magnetic tape cartridge ###

If errors occur when a tape is being read, you can retension the tape, clean the tape drive, and then try again as follows:

    mt retension
    mt -f /dev/rmt/1 retension #Unix
    mt -f /dev/st0 retension #Linux

### 6: Write n EOF marks at the current position of the tape ###

    mt eof
    mt weof
    mt -f /dev/st0 eof

### 7: Forward space count files, i.e. jump over n EOF marks ###

The tape is positioned on the first block of the next file (see fig.01):

    mt fsf
    mt -f /dev/rmt/0 fsf
    mt -f /dev/rmt/1 fsf 1 #go 1 file forward (see fig.01)

### 8: Backward space count files, i.e. rewind over n EOF marks ###

The tape is positioned after the EOF mark, on the first block of the file (see fig.01):

    mt bsf
    mt -f /dev/rmt/1 bsf
    mt -f /dev/rmt/1 bsf 1 #go 1 file backward (see fig.01)

Here is a list of the tape position commands:

    fsf    Forward space count files. The tape is positioned on the first block of the next file.

    fsfm   Forward space count files. The tape is positioned on the last block of the previous file.

    bsf    Backward space count files. The tape is positioned on the last block of the previous file.

    bsfm   Backward space count files. The tape is positioned on the first block of the next file.

    asf    The tape is positioned at the beginning of the count file. Positioning is done by first
           rewinding the tape and then spacing forward over count filemarks.

    fsr    Forward space count records.

    bsr    Backward space count records.

    fss    (SCSI tapes) Forward space count setmarks.

    bss    (SCSI tapes) Backward space count setmarks.

### Basic backup commands ###

Let us see the commands to back up and restore files.

### 9: Back up a directory (tar format) ###

    tar cvf /dev/rmt/0n /etc
    tar cvf /dev/st0 /etc

### 10: Restore a directory (tar format) ###

    tar xvf /dev/rmt/0n -C /path/to/restore
    tar xvf /dev/st0 -C /tmp
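As far as tar is concerned, a tape is just a sequential stream, so you can rehearse the backup/restore cycle against a regular archive file before pointing the same commands at /dev/st0. A runnable sketch (all paths below are throwaway examples):

```shell
# Prepare a directory to "back up"
mkdir -p /tmp/tapedemo/etc
echo 'hello' > /tmp/tapedemo/etc/motd

# Backup (tar format) to a file standing in for the tape device
tar cf /tmp/tapedemo/backup.tar -C /tmp/tapedemo etc

# List the archive contents, then restore elsewhere
tar tf /tmp/tapedemo/backup.tar
mkdir -p /tmp/tapedemo/restore
tar xf /tmp/tapedemo/backup.tar -C /tmp/tapedemo/restore

cat /tmp/tapedemo/restore/etc/motd   # prints: hello
```

Swapping the archive path for the tape device name (and adding v for verbosity) gives exactly the commands shown above.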

### 11: List or check tape contents (tar format) ###

    mt -f /dev/st0 rewind; dd if=/dev/st0

    ## tar format ##
    tar tvf {DEVICE} {Directory-FileName}
    tar tvf /dev/st0
    tar tvf /dev/st0 desktop
    tar tvf /dev/rmt/0 foo > list.txt

### 12: Back up a partition with dump or ufsdump ###

    ## Unix: back up the c0t0d0s2 partition ##
    ufsdump 0uf /dev/rmt/0 /dev/rdsk/c0t0d0s2

    ## Linux: back up the /home partition ##
    dump 0uf /dev/nst0 /dev/sda5
    dump 0uf /dev/nst0 /home

    ## FreeBSD: back up the /usr partition ##
    dump -0aL -b64 -f /dev/nsa0 /usr

### 13: Restore a partition with ufsrestore or restore ###

    ## Unix ##
    ufsrestore xf /dev/rmt/0
    ## Unix interactive restore ##
    ufsrestore if /dev/rmt/0

    ## Linux ##
    restore rf /dev/nst0
    ## Restore interactively from the 6th backup on the tape media ##
    restore isf 6 /dev/nst0

    ## FreeBSD: restore ufsdump format ##
    restore -i -f /dev/nsa0

### 14: Start writing at the beginning of the tape (see fig.02) ###

    ## This will overwrite all data on the tape ##
    mt -f /dev/st1 rewind

    ## Backup home ##
    tar cvf /dev/st1 /home

    ## Take the drive offline and unload the tape ##
    mt -f /dev/st1 offline

To restore from the beginning of the tape:

    mt -f /dev/st0 rewind
    tar xvf /dev/st0
    mt -f /dev/st0 offline

### 15: Start writing after the last tar (see fig.02) ###

    ## This will keep all data written so far ##
    mt -f /dev/st1 eom

    ## Backup home ##
    tar cvf /dev/st1 /home

    ## Unload ##
    mt -f /dev/st1 offline

### 16: Start writing after tar number 2 (see fig.02) ###

    ## To write after tar number 2 (use 2+1) ##
    mt -f /dev/st0 asf 3
    tar cvf /dev/st0 /usr

    ## asf equivalent done using fsf ##
    mt -f /dev/st0 rewind
    mt -f /dev/st0 fsf 2

To restore the tar at tar number 2:

    mt -f /dev/st0 asf 3
    tar xvf /dev/st0
    mt -f /dev/st0 offline

### How do I verify backup tapes created using tar? ###

It is important that you do regular full system restorations and service testing; it's the only way to know for sure that the entire system is working correctly. See our [tutorial on verifying tar command tape backups][1] for more information.

### Sample shell script ###

    #!/bin/bash
    # A UNIX / Linux shell script to back up dirs to a tape device like /dev/st0 (Linux)
    # This script makes both full and incremental backups.
    # You need at least two sets of five tapes. Label each tape as Mon, Tue, Wed, Thu and Fri.
    # You can run the script at midnight or early morning each day using cron jobs.
    # The operator or sysadmin can replace the tape every day after the script has finished.
    # The script must run as root, or configure permission via sudo.
    # -------------------------------------------------------------------------
    # Copyright (c) 1999 Vivek Gite <vivek@nixcraft.com>
    # This script is licensed under GNU GPL version 2.0 or above
    # -------------------------------------------------------------------------
    # This script is part of nixCraft shell script collection (NSSC)
    # Visit http://bash.cyberciti.biz/ for more information.
    # -------------------------------------------------------------------------
    # Last updated on : March-2003 - Added log file support.
    # Last updated on : Feb-2007 - Added support for excluding files / dirs.
    # -------------------------------------------------------------------------
    LOGBASE=/root/backup/log

    # Backup dirs; do not prefix /
    BACKUP_ROOT_DIR="home sales"

    # Get today's day like Mon, Tue and so on
    NOW=$(date +"%a")

    # Tape device name
    TAPE="/dev/st0"

    # Exclude file
    TAR_ARGS=""
    EXCLUDE_CONF=/root/.backup.exclude.conf

    # Backup log file
    LOGFILE=$LOGBASE/$NOW.backup.log

    # Path to binaries
    TAR=/bin/tar
    MT=/bin/mt
    MKDIR=/bin/mkdir

    # ------------------------------------------------------------------------
    # Excluding files when using tar
    # Create a file called $EXCLUDE_CONF using a text editor
    # Add files matching patterns such as follows (regex allowed):
    # home/vivek/iso
    # home/vivek/*.cpp~
    # ------------------------------------------------------------------------
    [ -f $EXCLUDE_CONF ] && TAR_ARGS="-X $EXCLUDE_CONF"

    #### Custom functions #####
    # Make a full backup
    full_backup(){
        local old=$(pwd)
        cd /
        $TAR $TAR_ARGS -cvpf $TAPE $BACKUP_ROOT_DIR
        $MT -f $TAPE rewind
        $MT -f $TAPE offline
        cd $old
    }

    # Make a partial backup
    partial_backup(){
        local old=$(pwd)
        cd /
        $TAR $TAR_ARGS -cvpf $TAPE -N "$(date -d '1 day ago')" $BACKUP_ROOT_DIR
        $MT -f $TAPE rewind
        $MT -f $TAPE offline
        cd $old
    }

    # Make sure all dirs exist
    verify_backup_dirs(){
        local s=0
        for d in $BACKUP_ROOT_DIR
        do
            if [ ! -d /$d ];
            then
                echo "Error : /$d directory does not exist!"
                s=1
            fi
        done
        # if not, just die
        [ $s -eq 1 ] && exit 1
    }

    #### Main logic ####

    # Make sure the log dir exists
    [ ! -d $LOGBASE ] && $MKDIR -p $LOGBASE

    # Verify dirs
    verify_backup_dirs

    # Okay, let us start the backup procedure
    # If it is Monday, make a full backup;
    # for Tue to Fri make a partial backup;
    # on the weekend, no backups
    case $NOW in
        Mon) full_backup;;
        Tue|Wed|Thu|Fri) partial_backup;;
        *) ;;
    esac > $LOGFILE 2>&1
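The script's exclude mechanism (`TAR_ARGS="-X $EXCLUDE_CONF"`) can be tried in isolation: tar's -X option reads one exclude pattern per line from a file. A runnable sketch with made-up file names:

```shell
# Build a sample tree plus an exclude file
mkdir -p /tmp/exdemo/home/vivek
echo 'keep' > /tmp/exdemo/home/vivek/notes.txt
echo 'skip' > /tmp/exdemo/home/vivek/big.iso
printf 'home/vivek/big.iso\n' > /tmp/exdemo/exclude.conf

# Archive with the exclude list, the same way the script does
cd /tmp/exdemo
tar -X exclude.conf -cf backup.tar home

# Listing the archive shows notes.txt but not big.iso
tar -tf backup.tar
```

In the real script the exclude file lives at /root/.backup.exclude.conf and the archive goes to the tape device instead of backup.tar.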

### A note about third-party backup utilities ###

Both Linux and Unix-like systems provide many third-party utilities which you can use to schedule the creation of backups, including tape backups, such as:

- Amanda
- Bacula
- rsync
- duplicity
- rsnapshot

See also:

- Man pages - [mt(1)][2], [mtx(1)][3], [tar(1)][4], [dump(8)][5], [restore(8)][6]

--------------------------------------------------------------------------------

via: http://www.cyberciti.biz/hardware/unix-linux-basic-tape-management-commands/

作者:Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://www.cyberciti.biz/faq/unix-verify-tape-backup/
[2]:http://www.manpager.com/linux/man1/mt.1.html
[3]:http://www.manpager.com/linux/man1/mtx.1.html
[4]:http://www.manpager.com/linux/man1/tar.1.html
[5]:http://www.manpager.com/linux/man8/dump.8.html
[6]:http://www.manpager.com/linux/man8/restore.8.html

|
Flowsnow translating...

Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper
================================================================================
Last August, the Linux Foundation announced the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, including finally issue escalation, when needed, to engineering support teams.

via: http://www.tecmint.com/linux-package-management/

[2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
[3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/
[4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
HowTo: Use grep Command In Linux / UNIX – Examples
================================================================================
How do I use the grep command on Linux, Apple OS X, and Unix-like operating systems? Can you give me some simple examples of the grep command?

The grep command searches the given file(s) for lines containing a match to the given strings or words. By default, grep displays the matching lines. Use grep to search for lines of text that match one or more regular expressions; it outputs only the matching lines. grep is considered one of the most useful commands on Linux and Unix-like operating systems.

### Did you know? ###

The name "grep" derives from the command used to perform a similar operation, using the Unix/Linux text editor ed:

    g/re/p

### The grep command syntax ###

The syntax is as follows:

    grep 'word' filename
    grep 'word' file1 file2 file3
    grep 'string1 string2' filename
    cat otherfile | grep 'something'
    command | grep 'something'
    command option1 | grep 'data'
    grep --color 'data' fileName

### How do I use grep command to search a file? ###

Search the /etc/passwd file for the boo user, enter:

    $ grep boo /etc/passwd

Sample outputs:

    foo:x:1000:1000:foo,,,:/home/foo:/bin/ksh

You can force grep to ignore word case i.e. match boo, Boo, BOO and all other combinations with the -i option:

    $ grep -i "boo" /etc/passwd

### Use grep recursively ###

You can search recursively i.e. read all files under each directory for a string such as "192.168.1.5":

    $ grep -r "192.168.1.5" /etc/

OR

    $ grep -R "192.168.1.5" /etc/

Sample outputs:

    /etc/ppp/options:# ms-wins 192.168.1.50
    /etc/ppp/options:# ms-wins 192.168.1.51
    /etc/NetworkManager/system-connections/Wired connection 1:addresses1=192.168.1.5;24;192.168.1.2;

You will see the result for 192.168.1.5 on a separate line, preceded by the name of the file (such as /etc/ppp/options) in which it was found. The inclusion of the file names in the output can be suppressed by using the -h option as follows:

    $ grep -h -R "192.168.1.5" /etc/

OR

    $ grep -hR "192.168.1.5" /etc/

Sample outputs:

    # ms-wins 192.168.1.50
    # ms-wins 192.168.1.51
    addresses1=192.168.1.5;24;192.168.1.2;

### Use grep to search words only ###

When you search for boo, grep will match fooboo, boo123, barboo35 and more. You can force the grep command to select only those lines containing matches that form whole words i.e. match only the word boo:

    $ grep -w "boo" file

### Use grep to search 2 different words ###

Use the egrep command as follows:

    $ egrep -w 'word1|word2' /path/to/file
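The same OR search works with plain grep via -E (POSIX extended regular expressions). A small self-contained check, using a throwaway scratch file with invented contents rather than a real path:

```shell
#!/bin/sh
# Build a scratch file to search (contents are made up for the demo).
tmp=$(mktemp)
printf 'alpha word1 here\nno match\nword2 alone\n' > "$tmp"

# grep -E is equivalent to egrep: the alternation matches either word.
grep -E -w 'word1|word2' "$tmp"

rm -f "$tmp"
```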
### Count lines when words have been matched ###

grep can report the number of times that the pattern has been matched for each file using the -c (count) option:

    $ grep -c 'word' /path/to/file

Pass the -n option to precede each line of output with the number of the line in the text file from which it was obtained:

    $ grep -n 'root' /etc/passwd

Sample outputs:

    1:root:x:0:0:root:/root:/bin/bash
    1042:rootdoor:x:0:0:rootdoor:/home/rootdoor:/bin/csh
    3319:initrootapp:x:0:0:initrootapp:/home/initroot:/bin/ksh

### Grep invert match ###

You can use the -v option to invert the match; that is, it matches only those lines that do not contain the given word. For example, print all lines that do not contain the word bar:

    $ grep -v bar /path/to/file

### UNIX / Linux pipes and grep command ###

The grep command is often used with [shell pipes][1]. In this example, show the names of the hard disk devices:

    # dmesg | egrep '(s|h)d[a-z]'

Display the cpu model name:

    # cat /proc/cpuinfo | grep -i 'Model'

However, the above command can also be used as follows without a shell pipe:

    # grep -i 'Model' /proc/cpuinfo

Sample outputs:

    model           : 30
    model name      : Intel(R) Core(TM) i7 CPU       Q 820  @ 1.73GHz
    model           : 30
    model name      : Intel(R) Core(TM) i7 CPU       Q 820  @ 1.73GHz

### How do I list just the names of matching files? ###

Use the -l option to list the names of files whose contents mention main():

    $ grep -l 'main' *.c
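A quick way to see the -l behavior without a real C project is two scratch files (the names and contents below are invented for the demo):

```shell
#!/bin/sh
# Two scratch "source files": only one mentions main().
dir=$(mktemp -d)
printf 'int main(void){return 0;}\n' > "$dir/a.c"
printf 'static int helper;\n'        > "$dir/b.c"

# -l prints only the names of matching files, one per line.
grep -l 'main' "$dir"/*.c

rm -rf "$dir"
```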
Finally, you can force grep to display output in colors, enter:

    $ grep --color vivek /etc/passwd

Sample outputs:

![Grep command in action](http://files.cyberciti.biz/uploads/faq/2007/08/grep_command_examples.png)

Grep command in action

--------------------------------------------------------------------------------

via: http://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/

作者:Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://bash.cyberciti.biz/guide/Pipes
Regular Expressions In grep
================================================================================
How do I use the grep command with regular expressions on Linux and Unix-like operating systems?

Linux comes with GNU grep, which supports extended regular expressions. GNU grep is the default on all Linux systems. The grep command is used to locate information stored anywhere on your server or workstation.

### Regular Expressions ###

A regular expression is nothing but a pattern to match in each input line. A pattern is a sequence of characters. The following are all examples of patterns:

    ^w1
    w1|w2
    [^ ]

#### grep Regular Expressions Examples ####

Search for 'vivek' in /etc/passwd:

    grep vivek /etc/passwd

Sample outputs:

    vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash
    vivekgite:x:1001:1001::/home/vivekgite:/bin/sh
    gitevivek:x:1002:1002::/home/gitevivek:/bin/sh

Search for vivek in any case (i.e. case insensitive search):

    grep -i -w vivek /etc/passwd

Search for vivek or raj in any case:

    grep -E -i -w 'vivek|raj' /etc/passwd

The PATTERN in the last example is used as an extended regular expression.

### Anchors ###

You can use ^ and $ to force a regex to match only at the start or end of a line, respectively. The following example displays only lines starting with vivek:

    grep ^vivek /etc/passwd

Sample outputs:

    vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash
    vivekgite:x:1001:1001::/home/vivekgite:/bin/sh

You can display only lines starting with the word vivek i.e. do not display vivekgite, vivekg etc:

    grep -w ^vivek /etc/passwd

Find lines ending with the word foo:

    grep 'foo$' filename

Match lines containing only foo:

    grep '^foo$' filename

You can search for blank lines with the following example:

    grep '^$' filename

### Character Class ###

Match Vivek or vivek:

    grep '[vV]ivek' filename

OR

    grep '[vV][iI][Vv][Ee][kK]' filename

You can also match digits (i.e. match vivek1 or Vivek2 etc):

    grep -w '[vV]ivek[0-9]' filename

You can match two numeric digits (i.e. match foo11, foo12 etc):

    grep 'foo[0-9][0-9]' filename

You are not limited to digits; you can match at least one letter:

    grep '[A-Za-z]' filename

Display all the lines containing either a "w" or "n" character:

    grep [wn] filename

Within a bracket expression, the name of a character class enclosed in "[:" and ":]" stands for the list of all characters belonging to that class. Standard character class names are:

- [:alnum:] - Alphanumeric characters.
- [:alpha:] - Alphabetic characters.
- [:blank:] - Blank characters: space and tab.
- [:digit:] - Digits: '0 1 2 3 4 5 6 7 8 9'.
- [:lower:] - Lower-case letters: 'a b c d e f g h i j k l m n o p q r s t u v w x y z'.
- [:space:] - Space characters: tab, newline, vertical tab, form feed, carriage return, and space.
- [:upper:] - Upper-case letters: 'A B C D E F G H I J K L M N O P Q R S T U V W X Y Z'.

In this example, match all upper case letters. Note that the class name must itself appear inside a bracket expression, hence the double brackets:

    grep '[[:upper:]]' filename
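The doubled brackets matter: a bare `[:upper:]` is just a bracket expression over the characters `u p e r :`. A quick sanity check on a scratch file (contents invented):

```shell
#!/bin/sh
tmp=$(mktemp)
printf 'all lower here\nHas An Upper\n' > "$tmp"

# [[:upper:]] matches any capital letter, so only the second line matches.
grep '[[:upper:]]' "$tmp"

rm -f "$tmp"
```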
### Wildcards ###

You can use the "." for a single character match. In this example, match all 3-character words starting with "b" and ending in "t":

    grep '\<b.t\>' filename

Where,

- \< Match the empty string at the beginning of a word.
- \> Match the empty string at the end of a word.

Print all lines with exactly two characters:

    grep '^..$' filename

Display any lines starting with a dot and a digit:

    grep '^\.[0-9]' filename

#### Escaping the dot ####

The following regex to find the IP address 192.168.1.254 will not work as intended, because an unescaped dot matches any character:

    grep '192.168.1.254' /etc/hosts

All three dots need to be escaped:

    grep '192\.168\.1\.254' /etc/hosts

The following example will only match an IP address:

    egrep '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' filename
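The difference the escaping makes can be verified on a scratch hosts-style file (the addresses below are made up for the demo):

```shell
#!/bin/sh
tmp=$(mktemp)
printf '192.168.1.254 router\n192x168y1z254 not-an-ip\n' > "$tmp"

# Unescaped dots match ANY character, so both lines match:
grep -c '192.168.1.254' "$tmp"      # prints 2

# Escaped dots match literal dots only:
grep -c '192\.168\.1\.254' "$tmp"   # prints 1

rm -f "$tmp"
```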
The following will match the word Linux or UNIX in any case:

    egrep -i '^(linux|unix)' filename

### How Do I Search a Pattern Which Has a Leading - Symbol? ###

Search for all lines matching '--test--' using the -e option. Without -e, grep would attempt to parse '--test--' as a list of options:

    grep -e '--test--' filename

### How Do I do OR with grep? ###

Use the -E option (extended regular expressions):

    grep -E 'word1|word2' filename

OR escape the alternation operator in a GNU basic regular expression:

    grep 'word1\|word2' filename

### How Do I do AND with grep? ###

Use the following syntax to display all lines that contain both 'word1' and 'word2':

    grep 'word1' filename | grep 'word2'

### How Do I Test Sequences? ###

You can test how often a character must be repeated in sequence using the following syntax:

    {N}
    {N,}
    {min,max}

Match the character "v" two times:

    egrep "v{2}" filename

The following will match both "col" and "cool":

    egrep 'co{1,2}l' filename

The following will match any run of at least three 'c' letters:

    egrep 'c{3,}' filename

The following example will match a mobile number in the format 91-1234567890 (i.e. twodigit-tendigit):

    grep "[[:digit:]]\{2\}[ -]\?[[:digit:]]\{10\}" filename
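The interval operators can be checked the same way, on scratch data. This sketch uses the article's twodigit-tendigit phone format plus a col/cool line (all sample text invented; `\?` in the first pattern is a GNU basic-regex extension):

```shell
#!/bin/sh
tmp=$(mktemp)
printf 'call 91-1234567890 now\nbad 9-12345 number\ncool col coool\n' > "$tmp"

# \{2\} and \{10\} are BRE intervals: only the first line has 2+10 digits.
grep '[[:digit:]]\{2\}[ -]\?[[:digit:]]\{10\}' "$tmp"

# co{1,2}l matches "col" and "cool" but not "coool" (needs at most two o's).
grep -o -E 'co{1,2}l' "$tmp"

rm -f "$tmp"
```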
### How Do I Highlight with grep? ###

Use the following syntax:

    grep --color regex filename

### How Do I Show Only The Matches, Not The Lines? ###

Use the following syntax:

    grep -o regex filename

### Regular Expression Operator ###

注:表格

<table border=1>
<tr>
<th>Regex operator</th>
<th>Meaning</th>
</tr>
<tr>
<td>.</td>
<td>Matches any single character.</td>
</tr>
<tr>
<td>?</td>
<td>The preceding item is optional and will be matched, at most, once.</td>
</tr>
<tr>
<td>*</td>
<td>The preceding item will be matched zero or more times.</td>
</tr>
<tr>
<td>+</td>
<td>The preceding item will be matched one or more times.</td>
</tr>
<tr>
<td>{N}</td>
<td>The preceding item is matched exactly N times.</td>
</tr>
<tr>
<td>{N,}</td>
<td>The preceding item is matched N or more times.</td>
</tr>
<tr>
<td>{N,M}</td>
<td>The preceding item is matched at least N times, but not more than M times.</td>
</tr>
<tr>
<td>-</td>
<td>Represents the range if it's not first or last in a list or the ending point of a range in a list.</td>
</tr>
<tr>
<td>^</td>
<td>Matches the empty string at the beginning of a line; also represents the characters not in the range of a list.</td>
</tr>
<tr>
<td>$</td>
<td>Matches the empty string at the end of a line.</td>
</tr>
<tr>
<td>\b</td>
<td>Matches the empty string at the edge of a word.</td>
</tr>
<tr>
<td>\B</td>
<td>Matches the empty string provided it's not at the edge of a word.</td>
</tr>
<tr>
<td>\&lt;</td>
<td>Match the empty string at the beginning of a word.</td>
</tr>
<tr>
<td>\&gt;</td>
<td>Match the empty string at the end of a word.</td>
</tr>
</table>

#### grep vs egrep ####

egrep is the same as **grep -E**. It interprets PATTERN as an extended regular expression. From the grep man page:

    In basic regular expressions the meta-characters ?, +, {, |, (, and ) lose their special meaning; instead use the backslashed versions \?, \+, \{,
    \|, \(, and \).
    Traditional egrep did not support the { meta-character, and some egrep implementations support \{ instead, so portable scripts should avoid { in
    grep -E patterns and should use [{] to match a literal {.
    GNU grep -E attempts to support traditional usage by assuming that { is not special if it would be the start of an invalid interval specification.
    For example, the command grep -E '{1' searches for the two-character string {1 instead of reporting a syntax error in the regular expression.
    POSIX.2 allows this behavior as an extension, but portable scripts should avoid it.

References:

- man page grep and regex(7)
- info page grep

--------------------------------------------------------------------------------

via: http://www.cyberciti.biz/faq/grep-regular-expressions/

作者:Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
Search Multiple Words / String Pattern Using grep Command
================================================================================
How do I search multiple strings or words using the grep command? For example, I'd like to search word1, word2, word3 and so on within /path/to/file. How do I force grep to search multiple words?

The [grep command supports regular expression][1] patterns. To search multiple words, use the following syntax:

    grep 'word1\|word2\|word3' /path/to/file

In this example, search for the words warning, error, and critical in a text log file called /var/log/messages, enter:

    $ grep 'warning\|error\|critical' /var/log/messages

To match just whole words, add the -w switch:

    $ grep -w 'warning\|error\|critical' /var/log/messages

The egrep command can avoid the backslashes by using the following syntax:

    $ egrep -w 'warning|error|critical' /var/log/messages
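Another way to express the same search is to pass each word with its own -e flag, which avoids quoting alternation entirely; shown on a scratch log file with invented contents:

```shell
#!/bin/sh
tmp=$(mktemp)
printf 'kernel: error on sda\nall quiet\ncritical: disk full\n' > "$tmp"

# Each -e adds one pattern; a line matches if any of the patterns match.
grep -w -e warning -e error -e critical "$tmp"

rm -f "$tmp"
```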
I recommend that you pass the -i (ignore case) and --color options as follows:

    $ egrep -wi --color 'warning|error|critical' /var/log/messages

Sample outputs:

![Fig.01: Linux / Unix egrep Command Search Multiple Words Demo Output](http://s0.cyberciti.org/uploads/faq/2008/04/egrep-words-output.png)

Fig.01: Linux / Unix egrep Command Search Multiple Words Demo Output

--------------------------------------------------------------------------------

via: http://www.cyberciti.biz/faq/searching-multiple-words-string-using-grep/

作者:Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://www.cyberciti.biz/faq/grep-regular-expressions/
Grep Count Lines If a String / Word Matches
================================================================================
How do I count lines if a given word or string matches, for each input file, under Linux or UNIX operating systems?

You need to pass the -c or --count option to suppress normal output. It will display a count of matching lines for each input file:

    $ grep -c vivek /etc/passwd

OR

    $ grep -w -c vivek /etc/passwd

Sample outputs:

    1

However, with the -v or --invert-match option it will count the non-matching lines, enter:

    $ grep -c -v vivek /etc/passwd

Sample outputs:

    45
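All three counting modes can be checked against a scratch file (contents invented for the demo):

```shell
#!/bin/sh
tmp=$(mktemp)
printf 'vivek:x:1000\nroot:x:0\nvivekgite:x:1001\n' > "$tmp"

grep -c vivek "$tmp"        # lines containing vivek: 2
grep -c -w vivek "$tmp"     # whole-word matches only: 1
grep -c -v vivek "$tmp"     # lines NOT containing vivek: 1

rm -f "$tmp"
```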
--------------------------------------------------------------------------------

via: http://www.cyberciti.biz/faq/grep-count-lines-if-a-string-word-matches/

作者:Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
Grep From Files and Display the File Name
================================================================================
How do I grep from a number of files and display the file name only?

When there is more than one file to search, grep will display the file name by default:

    grep "word" filename
    grep root /etc/*

Sample outputs:

    /etc/bash.bashrc:	See "man sudo_root" for details.
    /etc/crontab:17 *	* * *	root    cd / && run-parts --report /etc/cron.hourly
    /etc/crontab:25 6	* * *	root	test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
    /etc/crontab:47 6	* * 7	root	test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
    /etc/crontab:52 6	1 * *	root	test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
    /etc/group:root:x:0:
    grep: /etc/gshadow: Permission denied
    /etc/logrotate.conf:	create 0664 root utmp
    /etc/logrotate.conf:	create 0660 root utmp

The first field of each output line is the file name (e.g., /etc/crontab, /etc/group). The -l option will print only the name of each file that contains a match:

    grep -l "string" filename
    grep -l root /etc/*

Sample outputs:

    /etc/aliases
    /etc/arpwatch.conf
    grep: /etc/at.deny: Permission denied
    /etc/bash.bashrc
    /etc/bash_completion
    /etc/ca-certificates.conf
    /etc/crontab
    /etc/group

You can suppress normal output and instead print the name of each input file from **which no output would normally have been** printed:

    grep -L "word" filename
    grep -L root /etc/*

Sample outputs:

    /etc/apm
    /etc/apparmor
    /etc/apparmor.d
    /etc/apport
    /etc/apt
    /etc/avahi
    /etc/bash_completion.d
    /etc/bindresvport.blacklist
    /etc/blkid.conf
    /etc/bluetooth
    /etc/bogofilter.cf
    /etc/bonobo-activation
    /etc/brlapi.key
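-l and -L are exact complements, which is easy to confirm with two scratch files (names and contents are arbitrary):

```shell
#!/bin/sh
dir=$(mktemp -d)
printf 'root was here\n'    > "$dir/with.txt"
printf 'nothing relevant\n' > "$dir/without.txt"

grep -l root "$dir"/*.txt   # files WITH a match
grep -L root "$dir"/*.txt   # files WITHOUT a match

rm -rf "$dir"
```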
--------------------------------------------------------------------------------

via: http://www.cyberciti.biz/faq/grep-from-files-and-display-the-file-name/

作者:Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
How To Find Files by Content Under UNIX
================================================================================
I had written lots of code in C for my school work and saved it as source code under /home/user/c/*.c and *.h. How do I find files by content, such as a string or word (a function name such as main()), under a UNIX shell prompt?

You need to use the following tools:

[a] **grep command** : print lines matching a pattern.

[b] **find command** : search for files in a directory hierarchy.

### [grep Command To Find Files By][1] Content ###

Type the command as follows:

    grep 'string' *.txt
    grep 'main(' *.c
    grep '#include<example.h>' *.c
    grep 'getChar*' *.c
    grep -i 'ultra' *.conf
    grep -iR 'ultra' *.conf

Where

- **-i** : Ignore case distinctions in both the PATTERN (match valid, VALID, ValID strings) and the input files (match file.c FILE.c FILE.C filenames).
- **-R** : Read all files under each directory, recursively.

### Highlighting searched patterns ###

You can highlight patterns easily while searching a large number of files:

    $ grep --color=auto -iR 'getChar();' *.c

### Displaying file names and line numbers for searched patterns ###

You may also need to display file names and line numbers:

    $ grep --color=auto -iRnH 'getChar();' *.c

Where,

- **-n** : Prefix each line of output with the 1-based line number within its input file.
- **-H** : Print the file name for each match. This is the default when there is more than one file to search.

    $ grep --color=auto -nH 'DIR' *

Sample output:

![Fig.01: grep command displaying searched pattern](http://www.cyberciti.biz/faq/wp-content/uploads/2008/09/grep-command.png)

Fig.01: grep command displaying searched pattern

You can also use the find command:

    $ find . -name "*.c" -print | xargs grep "main("
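If any path may contain spaces, the NUL-separated form of the same pipeline is safer (GNU find/xargs; demonstrated on scratch files with invented names):

```shell
#!/bin/sh
dir=$(mktemp -d)
mkdir "$dir/my src"                                   # directory name with a space
printf 'int main(void){return 0;}\n' > "$dir/my src/a.c"

# -print0 / -0 keep the space-containing path as a single argument.
find "$dir" -name '*.c' -print0 | xargs -0 grep -l 'main('

rm -rf "$dir"
```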
--------------------------------------------------------------------------------

via: http://www.cyberciti.biz/faq/unix-linux-finding-files-by-content/

作者:Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://www.cyberciti.biz/faq/howto-search-find-file-for-text-string/
Linux / UNIX View Only Configuration File Directives ( Uncommented Lines of a Config File )
================================================================================
Most Linux and UNIX-like system configuration files are documented using comments, but sometimes I just need to see the lines of configuration text in a config file. How can I view just the uncommented configuration file directives from the squid.conf or httpd.conf file? How can I strip out comments and blank lines on a Linux or Unix-like system?

To view just the uncommented lines of text in a config file, use grep, sed, awk, perl or any other text processing utility provided by UNIX / BSD / OS X / Linux operating systems.

### grep command example to strip out comments ###

You can use the grep command as follows:

    $ grep -v "^#" /path/to/config/file
    $ grep -v "^#" /etc/apache2/apache2.conf

Sample outputs:

    ServerRoot "/etc/apache2"

    LockFile /var/lock/apache2/accept.lock

    PidFile ${APACHE_PID_FILE}

    Timeout 300

    KeepAlive On

    MaxKeepAliveRequests 100

    KeepAliveTimeout 15


    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients          150
        MaxRequestsPerChild   0
    </IfModule>

    <IfModule mpm_worker_module>
        StartServers          2
        MinSpareThreads      25
        MaxSpareThreads      75
        ThreadLimit          64
        ThreadsPerChild      25
        MaxClients          150
        MaxRequestsPerChild   0
    </IfModule>

    <IfModule mpm_event_module>
        StartServers          2
        MaxClients          150
        MinSpareThreads      25
        MaxSpareThreads      75
        ThreadLimit          64
        ThreadsPerChild      25
        MaxRequestsPerChild   0
    </IfModule>

    User ${APACHE_RUN_USER}
    Group ${APACHE_RUN_GROUP}

    AccessFileName .htaccess

    <Files ~ "^\.ht">
        Order allow,deny
        Deny from all
        Satisfy all
    </Files>

    DefaultType text/plain

    HostnameLookups Off

    ErrorLog /var/log/apache2/error.log

    LogLevel warn

    Include /etc/apache2/mods-enabled/*.load
    Include /etc/apache2/mods-enabled/*.conf

    Include /etc/apache2/httpd.conf

    Include /etc/apache2/ports.conf

    LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
    LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %O" common
    LogFormat "%{Referer}i -> %U" referer
    LogFormat "%{User-agent}i" agent

    CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined

    Include /etc/apache2/conf.d/

    Include /etc/apache2/sites-enabled/

To suppress blank lines too, use the [egrep command][1], run:

    egrep -v "^#|^$" /etc/apache2/apache2.conf
    ## or pass it to a pager such as more or less ##
    egrep -v "^#|^$" /etc/apache2/apache2.conf | less

    ## Bash function ######################################
    ## or create a function or alias and use it as follows ##
    ## viewconfig /etc/squid/squid.conf ##
    #######################################################
    viewconfig(){
        local f="$1"
        [ -f "$1" ] && command egrep -v "^#|^$" "$f" || echo "Error: $1 file not found."
    }

Sample output:

![Fig.01: Unix/Linux Egrep Strip Out Comments Blank Lines](http://s0.cyberciti.org/uploads/faq/2008/05/grep-strip-out-comments-blank-lines.jpg)

Fig.01: Unix/Linux Egrep Strip Out Comments Blank Lines

### Understanding grep/egrep command line options ###

The -v option inverts the sense of matching, to select non-matching lines. This option should work under all POSIX-based systems. The regex ^$ matches and removes all blank lines, and ^# matches and removes all comments that start with a "#".

### sed Command example ###

The GNU sed command can be used as follows:

    $ sed '/ *#/d; /^ *$/d' /path/to/file
    $ sed '/ *#/d; /^ *$/d' /etc/apache2/apache2.conf
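Since the question also mentions awk, here is a one-liner in the same spirit; this is a sketch that, like the grep form above, drops full-line comments (optionally indented) and blank or whitespace-only lines, shown on a scratch file:

```shell
#!/bin/sh
tmp=$(mktemp)
printf '# a comment\nServerRoot "/etc/apache2"\n\nTimeout 300\n' > "$tmp"

# Print lines that are neither comments nor blank/whitespace-only;
# a bare pattern in awk prints matching lines by default.
awk '!/^[[:space:]]*#/ && !/^[[:space:]]*$/' "$tmp"

rm -f "$tmp"
```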
GNU or BSD sed can update your config file too. The syntax is as follows to edit files in-place, saving a backup with the specified extension such as .bak:

    sed -i'.bak.2015.12.27' '/ *#/d; /^ *$/d' /etc/apache2/apache2.conf

For more info see the man pages - [grep(1)][2], [sed(1)][3]

--------------------------------------------------------------------------------

via: http://www.cyberciti.biz/faq/shell-display-uncommented-lines-only/

作者:Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://www.cyberciti.biz/faq/grep-regular-expressions/
[2]:http://www.manpager.com/linux/man1/grep.1.html
[3]:http://www.manpager.com/linux/man1/sed.1.html
@ -2,11 +2,11 @@ Aix, HP-UX, Solaris, BSD, 和 LINUX 简史
|
||||
================================================================================
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png)
|
||||
|
||||
有句话说,当一扇门在你面前关上的时候,另一扇门就会打开。[Ken Thompson][1] 和 [Dennis Richie][2] 两个人就是最好的例子。他们俩是 **20世纪** 最优秀的信息技术专家,因为他们创造了 **UNIX**,最具影响力和创新性的软件之一。
|
||||
要记住,当一扇门在你面前关闭的时候,另一扇门就会打开。[Ken Thompson][1] 和 [Dennis Richie][2] 两个人就是这句名言很好的实例。他们俩是 **20世纪** 最优秀的信息技术专家,因为他们创造了 **UNIX**,最具影响力和创新性的软件之一。
|
||||
|
||||
### UNIX 系统诞生于贝尔实验室 ###
|
||||
|
||||
**UNIX** 最开始的名字是 **UNICS** (**UN**iplexed **I**nformation and **C**omputing **S**ervice),它有一个大家庭,并不是从石头里蹦出来的。UNIX的祖父是 **CTSS** (**C**ompatible **T**ime **S**haring **S**ystem),它的父亲是 **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice),这个系统能支持大量用户通过交互式分时使用大型机。
|
||||
**UNIX** 最开始的名字是 **UNICS** (**UN**iplexed **I**nformation and **C**omputing **S**ervice),它有一个大家庭,并不是从石头缝里蹦出来的。UNIX的祖父是 **CTSS** (**C**ompatible **T**ime **S**haring **S**ystem),它的父亲是 **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice),这个系统能支持大量用户通过交互式分时使用大型机。
|
||||
|
||||
UNIX 诞生于 **1969** 年,由 **Ken Thompson** 以及后来加入的 **Dennis Richie** 共同完成。这两位优秀的研究员和科学家一起在一个**通用电子**和**麻省理工学院**的合作项目里工作,项目目标是开发一个叫 Multics 的交互式分时系统。
|
||||
|
||||
@ -20,71 +20,71 @@ UNIX 的第一声啼哭是在一台 PDP-7 微型机上,它是 Thompson 测试
|
||||
|
||||
> “我们想要的不仅是一个优秀的编程环境,而是能围绕这个系统形成团体。按我们自己的经验,通过远程访问和分时共享主机实现的公共计算,本质上不只是用终端输入程序代替打孔机而已,而是鼓励密切沟通。”Dennis Richie 说。
|
||||
|
||||
UNIX 是第一个靠近理想的系统,在这里程序员可以坐在机器前自由摆弄程序,探索各种可能性并随手测试。在 UNIX 整个生命周期里,因为大量因为其他操作系统限制而投身过来的高手做出的无私贡献,它的功能模型一直保持上升趋势。
|
||||
UNIX 是第一个靠近理想的系统,在这里程序员可以坐在机器前自由摆弄程序,探索各种可能性并随手测试。在 UNIX 整个生命周期里,它吸引了大量因其他操作系统限制而投身过来的高手做出无私贡献,因此它的功能模型一直保持上升趋势。
|
||||
|
||||
UNIX 在 1970 年因为 PDP-11/20 获得了首次资金注入,之后正式更名为 UNIX 并支持在 PDP-11/20 上运行。UNIX 的第一笔实际收获出现在 1971 年:贝尔实验室的专利部门用它来做文字处理。
|
||||
|
||||
### UNIX 上的 C 语言革命 ###
|
||||
|
||||
|
||||
Dennis Richie 在 1972 年发明了一种叫 “**C**” 的高级编程语言 ,之后他和 Ken Thompson 决定用 “C” 重写 UNIX 系统,来支持更好的移植性。他们在那一年里编写和调试了差不多 100,000 行代码。在使用了 “C” 语言后,系统可移植性非常好,只需要修改一小部分机器相关的代码就可以将 UNIX 移植到其他计算机平台上。
|
||||
|
||||
|
||||
UNIX 第一次公开露面是 1973 年 Dennis Ritchie 和 Ken Thompson 在操作系统原理会议上发表的一篇论文。随后 AT&T 发布了 UNIX 系统第 5 版,并授权给教育机构使用,然后在 1976 年第一次以 **$20,000** 的价格授权企业使用 UNIX 第 6 版。应用最广泛的是 1980 年发布的 UNIX 第 7 版,任何人都可以购买授权,只是授权条款非常有限。授权内容包括源代码,以及用 PDP-11 汇编语言写的机器相关内核。各个版本的 UNIX 系统完全是以它的用户手册版本来区分的。
|
||||
|
||||
### AIX 系统 ###
|
||||
|
||||
在 **1983** 年,**Microsoft** 计划开发 **Xenix** 作为 MS-DOS 的多用户版继任者,他们在那一年花了 $8,000 搭建了一台拥有 **512 KB** 内存以及 **10 MB**硬盘并运行 Xenix 的 Altos 586。而到 1984 年为止,全世界 UNIX System V 第二版的安装数量已经超过了 100,000 。在 1986 年发布了包含因特网域名服务的 4.3BSD,而且 **IBM** 宣布 **AIX 系统**的安装数已经超过 250,000。AIX 基于 Unix System V 开发,这套系统拥有 BSD 风格的根文件系统,是两者的结合。
|
||||
|
||||
AIX 第一次引入了 **日志文件系统 (JFS)** 以及集成逻辑卷管理器 (LVM)。IBM 在 1989 年将 AIX 移植到自己的 RS/6000 平台。2001 年发布的 5L 版是一个突破性的版本,提供了 Linux 友好性以及支持 Power4 服务器的逻辑分区。
|
||||
|
||||
在 2004 年发布的 AIX 5.3 引入了支持 Advanced Power Virtualization (APV) 的虚拟化技术,支持对称多线程,微分区,以及可分享的处理器池。
|
||||
|
||||
|
||||
在 2007 年,IBM 同时发布 AIX 6.1 和 Power6 架构,开始加强自己的虚拟化产品。他们还将 Advanced Power Virtualization 重新包装成 PowerVM。
|
||||
|
||||
|
||||
这次改进包括被称为 WPARs 的负载分区形式,类似于 Solaris 的 zones/Containers,但是功能更强。
|
||||
|
||||
### HP-UX 系统 ###
|
||||
|
||||
|
||||
**惠普 UNIX (HP-UX)** 源于 System V 第 3 版。这套系统一开始只支持 PA-RISC HP 9000 平台。HP-UX 第 1 版发布于 1984 年。
|
||||
|
||||
|
||||
|
||||
|
||||
HP-UX 第 9 版引入了 SAM,一个基于字符的图形用户界面 (GUI),用户可以用来管理整个系统。在 1995 年发布的第 10 版,调整了系统文件分布以及目录结构,变得有点类似 AT&T SVR4。
|
||||
|
||||
第 11 版发布于 1997 年。这是 HP 第一个支持 64 位寻址的版本。不过在 2000 年重新发布成 11i,因为 HP 为特定的信息技术目的,引入了操作环境和分级应用的捆绑组。
|
||||
|
||||
|
||||
在 2001 年发布的 11.20 版宣称支持 Itanium 系统。HP-UX 是第一个使用 ACLs(访问控制列表)管理文件权限的 UNIX 系统,也是首先支持内建逻辑卷管理器的系统之一。
|
||||
|
||||
|
||||
如今,HP-UX 因为 HP 和 Veritas 的合作关系使用了 Veritas 作为主文件系统。
|
||||
|
||||
|
||||
|
||||
HP-UX 目前的最新版本是 11iv3, update 4。
|
||||
|
||||
### Solaris 系统 ###
|
||||
|
||||
|
||||
Sun 的 UNIX 版本是 **Solaris**,用来接替 1992 年创建的 **SunOS**。SunOS 一开始基于 BSD(伯克利软件发行版)风格的 UNIX,但是 SunOS 5.0 版以及之后的版本都是基于重新包装成 Solaris 的 Unix System V 第 4 版。
|
||||
|
||||
|
||||
SunOS 1.0 版于 1983 年发布,用于支持 Sun-1 和 Sun-2 平台。随后在 1985 年发布了 2.0 版。在 1987 年,Sun 和 AT&T 宣布合作一个项目以 SVR4 为基础将 System V 和 BSD 合并成一个版本。
|
||||
|
||||
|
||||
Solaris 2.4 是 Sun 发布的第一个 Sparc/x86 版本。1994 年 11 月份发布的 SunOS 4.1.4 版是最后一个版本。Solaris 7 是首个 64 位 Ultra Sparc 版本,加入了对文件系统元数据记录的原生支持。
|
||||
|
||||
|
||||
Solaris 9 发布于 2002 年,支持 Linux 特性以及 Solaris 卷管理器。之后,2005 年发布了 Solaris 10,带来许多创新,比如支持 Solaris Containers,新的 ZFS 文件系统,以及逻辑域。
|
||||
|
||||
|
||||
目前 Solaris 最新的版本是 第 10 版,最后的更新发布于 2008 年。
|
||||
|
||||
### Linux ###
|
||||
|
||||
到了 1991 年,用来替代商业操作系统的免费系统的需求日渐高涨。因此 **Linus Torvalds** 开始构建一个免费的操作系统,最终成为 **Linux**。Linux 最开始只有一些 “C” 文件,并且使用了阻止商业发行的授权。Linux 是一个类 UNIX 系统但又不尽相同。
|
||||
|
||||
2015 年 发布了基于 GNU Public License 授权的 3.18 版。IBM 声称有超过 1800 万行开源代码开放给开发者。
|
||||
|
||||
如今 GNU Public License 是应用最广泛的免费软件授权方式。根据开源软件原则,这份授权允许个人和企业自由分发、运行、通过拷贝共享、学习,以及修改软件源码。
|
||||
|
||||
### UNIX vs. Linux: 技术概要 ###
|
||||
|
||||
||||
|
||||
- Linux 鼓励多样性,Linux 的开发人员有更广阔的背景,有更多不同经验和意见。
|
||||
- Linux 比 UNIX 支持更多的平台和架构。
|
||||
- UNIX 商业版本的开发人员会为他们的操作系统考虑特定目标平台以及用户。
|
||||
- **Linux 比 UNIX 有更好的安全性**,更少受病毒或恶意软件攻击。Linux 上大约有 60-100 种病毒,但是没有任何一种还在传播。另一方面,UNIX 上大约有 85-120 种病毒,但是其中有一些还在传播中。
|
||||
- 通过 UNIX 命令,系统上的工具和元素很少改变,甚至很多接口和命令行参数在后续 UNIX 版本中一直沿用。
|
||||
- 有些 Linux 开发项目以自愿为基础进行资助,比如 Debian。其他项目会维护一个和商业 Linux 的社区版,比如 SUSE 的 openSUSE 以及红帽的 Fedora。
|
||||
- 传统 UNIX 是纵向扩展,而另一方面 Linux 是横向扩展。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -92,7 +92,7 @@ via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/
|
||||
|
||||
作者:[M.el Khamlichi][a]
|
||||
译者:[zpl1025](https://github.com/zpl1025)
|
||||
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
|
@ -1,38 +0,0 @@
|
||||
Nautilus的文件搜索将迎来重大提升
|
||||
================================================================================
|
||||
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/10/nautilus-new-search-filters.jpg)
|
||||
|
||||
**在 Nautilus 中搜索本地文件和文件夹将会变得更简单。**
|
||||
|
||||
[GNOME 文件管理器][1]的一个新的**搜索过滤器**正在开发中。它大量使用 GNOME 风格的弹出式菜单,来缩小搜索结果的范围,精确定位到你关心的内容。
|
||||
|
||||
开发者 Georges Stavracas 正致力于这个新的 UI,并把新界面[描述][2]为“更干净、更理智、更直观”。
|
||||
|
||||
从他[上传到 YouTube][3] 的视频来看(他还没有开放视频嵌入),他说的没错。
|
||||
|
||||
> 他在他的博客中写道:“Nautilus 有着非常复杂但强大的内部机制,允许我们做很多事情,事实上也有提供各种选项的代码。那么,为什么它以前看上去这么糟糕?”
|
||||
|
||||
这个问题多少是反问;新的搜索过滤器界面正是把这些“强大的内部”机制展现给了用户。搜索可以根据类型、名字或者日期范围来过滤。
|
||||
|
||||
对 Nautilus 这类应用的任何修改都可能让一些用户不安,因此即使是像这样有用、直观的新 UI,也可能引起一些争议。
|
||||
|
||||
但不要担心这种不满会影响进度(当然,像[移除“输入即搜索”][4]这样的争议自 2014 年以来一直没有平息)。[上个月发布的][5] GNOME 3.18 给 Nautilus 引入了新的文件操作进度对话框,以及更好的远程共享支持,包括 Google Drive。
|
||||
|
||||
Stavracas的搜索过滤还没被合并进Files的trunk,但是重做的UI已经初步计划在明年春天的GNOME 3.20中实现。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2015/10/new-nautilus-search-filter-ui
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:https://wiki.gnome.org/Apps/Nautilus
|
||||
[2]:http://feaneron.com/2015/10/12/the-new-search-for-gnome-files-aka-nautilus/
|
||||
[3]:https://www.youtube.com/watch?v=X2sPRXDzmUw
|
||||
[4]:http://www.omgubuntu.co.uk/2014/01/ubuntu-14-04-nautilus-type-ahead-patch
|
||||
[5]:http://www.omgubuntu.co.uk/2015/09/gnome-3-18-release-new-features
|
@ -1,202 +0,0 @@
|
||||
如何在 Linux 中从 NetworkManager 切换为 systemd-network
|
||||
How to switch from NetworkManager to systemd-networkd on Linux
|
||||
================================================================================
|
||||
在 Linux 世界里, [systemd][1] 的采用一直是激烈争论的主题,它的支持者和反对者之间的战火仍然在燃烧。到了今天,大部分主流 Linux 发行版都已经采用了 systemd 作为默认初始化系统。
|
||||
In the world of Linux, adoption of [systemd][1] has been a subject of heated controversy, and the debate between its proponents and critics is still going on. As of today, most major Linux distributions have adopted systemd as a default init system.
|
||||
|
||||
正如其作者所说,作为一个 “从未完成、从未完善、但一直追随技术进步” 的系统,systemd 已经不只是一个初始化进程,它被设计为一个更广泛的系统以及服务管理平台,这个平台包括了不断增长的核心系统进程、库和工具的生态系统。
|
||||
Billed as a "never finished, never complete, but tracking progress of technology" by its author, systemd is not just the init daemon, but is designed as a more broad system and service management platform which encompasses the growing ecosystem of core system daemons, libraries and utilities.
|
||||
|
||||
**systemd** 的其中一部分是 **systemd-networkd**,它负责 systemd 生态中的网络配置。使用 systemd-networkd,你可以为网络设备配置基础的 DHCP/静态 IP 网络。它还可以配置虚拟网络功能,例如网桥、隧道和 VLAN。systemd-networkd 目前还不能直接支持无线网络,但你可以使用 wpa_supplicant 服务配置无线适配器,然后用 **systemd-networkd** 挂钩起来。
|
||||
One of many additions to **systemd** is **systemd-networkd**, which is responsible for network configuration within the systemd ecosystem. Using systemd-networkd, you can configure basic DHCP/static IP networking for network devices. It can also configure virtual networking features such as bridges, tunnels or VLANs. Wireless networking is not directly handled by systemd-networkd, but you can use wpa_supplicant service to configure wireless adapters, and then hook it up with **systemd-networkd**.
|
||||
|
||||
在很多 Linux 发行版中,NetworkManager 仍然作为默认的网络配置管理器。和 NetworkManager 相比,**systemd-networkd** 仍处于活跃的开发状态,还缺少一些功能。例如,它还不能像 NetworkManager 那样能在任何时候让你的计算机在多种接口之间保持连接。它还没有为高级脚本提供 ifup/ifdown 钩子函数。但是,systemd-networkd 和其它 systemd 组件(例如用于域名解析的 **resolved**、NTP 的**timesyncd**,用于命名的 udevd)结合的非常好。随着时间增长,**systemd-networkd**只会在 systemd 环境中扮演越来越重要的角色。
|
||||
On many Linux distributions, NetworkManager has been and is still used as a default network configuration manager. Compared to NetworkManager, **systemd-networkd** is still under active development, and missing features. For example, it does not have NetworkManager's intelligence to keep your computer connected across various interfaces at all times. It does not provide ifup/ifdown hooks for advanced scripting. Yet, systemd-networkd is integrated well with the rest of systemd components (e.g., **resolved** for DNS, **timesyncd** for NTP, udevd for naming), and the role of **systemd-networkd** may only grow over time in the systemd environment.
|
||||
|
||||
如果你对 **systemd-networkd** 的进步感到高兴,从 NetworkManager 切换到 systemd-networkd 是值得你考虑的一件事。如果你强烈反对 systemd,对 NetworkManager 或[基础网络服务][2]感到很满意,那也很好。
|
||||
If you are happy with the way **systemd** is evolving, one thing you can consider is to switch from NetworkManager to systemd-networkd. If you are feverishly against systemd, and perfectly happy with NetworkManager or [basic network service][2], that is totally cool.
|
||||
|
||||
但对于那些想尝试 systemd-networkd 的人,可以继续看下去,在这篇指南中学会在 Linux 中怎么从 NetworkManager 切换到 systemd-networkd。
|
||||
But for those of you who want to try out systemd-networkd, you can read on, and find out in this tutorial how to switch from NetworkManager to systemd-networkd on Linux.
|
||||
|
||||
### 需求 ###
|
||||
### Requirement ###
|
||||
|
||||
systemd 210 或更高版本提供了 systemd-networkd。因此诸如 Debian 8 "Jessie" (systemd 215)、 Fedora 21 (systemd 217)、 Ubuntu 15.04 (systemd 219) 或更高版本的 Linux 发行版和 systemd-networkd 兼容。
|
||||
systemd-networkd is available in systemd version 210 and higher. Thus distributions like Debian 8 "Jessie" (systemd 215), Fedora 21 (systemd 217), Ubuntu 15.04 (systemd 219) or later are compatible with systemd-networkd.
|
||||
|
||||
对于其它发行版,在开始下一步之前先检查一下你的 systemd 版本。
|
||||
For other distributions, check the version of your systemd before proceeding.
|
||||
|
||||
$ systemctl --version
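下面是一个仅作示意的 shell 草图,演示如何从版本输出中取出版本号并与最低要求 210 做数值比较。其中 "systemd 219" 只是假设的示例输出;在真实系统上可以用 `systemctl --version | head -n1` 取得首行。

```shell
# 示意:检查 systemd 版本号是否达到 systemd-networkd 要求的 210。
# "systemd 219" 是假设的示例输出;真实系统上可用:
#   sample=$(systemctl --version | head -n1)
sample="systemd 219"
ver=${sample#systemd }        # 去掉前缀,只留版本号
if [ "$ver" -ge 210 ]; then
    echo "systemd-networkd available"
else
    echo "systemd too old"
fi
```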
|
||||
|
||||
### 从 NetworkManager 切换到 Systemd-networkd ###
|
||||
### Switch from Network Manager to Systemd-Networkd ###
|
||||
|
||||
从 NetworkManager 切换到 systemd-networkd 其实非常简单(反过来也一样)。
|
||||
It is relatively straightforward to switch from Network Manager to systemd-networkd (and vice versa).
|
||||
|
||||
首先,按照下面这样先停用 NetworkManager 服务,然后启用 systemd-networkd。
|
||||
First, disable Network Manager service, and enable systemd-networkd as follows.
|
||||
|
||||
$ sudo systemctl disable NetworkManager
|
||||
$ sudo systemctl enable systemd-networkd
|
||||
|
||||
你还要启用 **systemd-resolved** 服务,systemd-networkd用它来进行域名解析。该服务还实现了一个缓存式 DNS 服务器。
|
||||
You also need to enable **systemd-resolved** service, which is used by systemd-networkd for network name resolution. This service implements a caching DNS server.
|
||||
|
||||
$ sudo systemctl enable systemd-resolved
|
||||
$ sudo systemctl start systemd-resolved
|
||||
|
||||
一旦启动,**systemd-resolved** 就会在 /run/systemd 目录下某个地方创建它自己的 resolv.conf。但是,把 DNS 解析信息存放在 /etc/resolv.conf 是更普遍的做法,很多应用程序也会依赖于 /etc/resolv.conf。因此为了兼容性,按照下面的方式创建一个到 /etc/resolv.conf 的符号链接。
|
||||
Once started, **systemd-resolved** will create its own resolv.conf somewhere under /run/systemd directory. However, it is a common practise to store DNS resolver information in /etc/resolv.conf, and many applications still rely on /etc/resolv.conf. Thus for compatibility reason, create a symlink to /etc/resolv.conf as follows.
|
||||
|
||||
$ sudo rm /etc/resolv.conf
|
||||
$ sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
|
||||
|
||||
### 用 systemd-networkd 配置网络连接 ###
|
||||
### Configure Network Connections with Systemd-networkd ###
|
||||
|
||||
要用 systemd-networkd 配置网络设备,你必须把配置信息写在带 .network 扩展名的文本文件中。这些网络配置文件保存在 /etc/systemd/network 目录下并从那里加载。当有多个文件时,systemd-networkd 会按照字典顺序逐个加载并处理它们。
|
||||
To configure network devices with systemd-networkd, you must specify configuration information in text files with .network extension. These network configuration files are then stored and loaded from /etc/systemd/network. When there are multiple files, systemd-networkd loads and processes them one by one in lexical order.
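这里提到的“字典顺序”可以用普通文件直观演示一下。下面是一个仅作示意的草图,用临时目录模拟(真实目录是 /etc/systemd/network):

```shell
# 示意:用普通文件模拟 systemd-networkd 按字典顺序处理配置文件的行为。
d=$(mktemp -d)
touch "$d/20-dhcp.network" "$d/10-static-enp3s0.network"
# ls 默认按字典顺序输出,与 systemd-networkd 的处理顺序一致:
ls "$d"
# → 10-static-enp3s0.network
# → 20-dhcp.network
```

可以看到,数字前缀决定了处理顺序,这正是本文稍后静态配置能覆盖 DHCP 配置的原因。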
|
||||
|
||||
首先创建 /etc/systemd/network 目录。
|
||||
Let's start by creating a folder /etc/systemd/network.
|
||||
|
||||
$ sudo mkdir /etc/systemd/network
|
||||
|
||||
#### DHCP 网络 ####
|
||||
#### DHCP Networking ####
|
||||
|
||||
首先来配置 DHCP 网络。为此,先要创建下面的配置文件。文件名可以任意,但记住文件是按照字典顺序处理的。
|
||||
Let's configure DHCP networking first. For this, create the following configuration file. The name of a file can be arbitrary, but remember that files are processed in lexical order.
|
||||
|
||||
$ sudo vi /etc/systemd/network/20-dhcp.network
|
||||
|
||||
----------
|
||||
|
||||
[Match]
|
||||
Name=enp3*
|
||||
|
||||
[Network]
|
||||
DHCP=yes
|
||||
|
||||
正如你上面看到的,每个网络配置文件包括了一个或多个“节(section)”,每节都以 [XXX] 开头,并包括一个或多个键值对。[Match] 节决定这个配置文件配置哪些网络设备。例如,这个文件匹配所有名称以 enp3 开头的网络设备(例如 enp3s0、enp3s1、enp3s2 等等);然后对匹配到的接口,应用 [Network] 节指定的 DHCP 网络配置。
|
||||
As you can see above, each network configuration file contains one or more "sections" with each section preceded by [XXX] heading. Each section contains one or more key/value pairs. The [Match] section determine which network device(s) are configured by this configuration file. For example, this file matches any network interface whose name starts with ens3 (e.g., enp3s0, enp3s1, enp3s2, etc). For matched interface(s), it then applies DHCP network configuration specified under [Network] section.
|
||||
|
||||
### 静态 IP 网络 ###
|
||||
### Static IP Networking ###
|
||||
|
||||
如果你想给网络设备分配一个静态 IP 地址,那就新建下面的配置文件。
|
||||
If you want to assign a static IP address to a network interface, create the following configuration file.
|
||||
|
||||
$ sudo vi /etc/systemd/network/10-static-enp3s0.network
|
||||
|
||||
----------
|
||||
|
||||
[Match]
|
||||
Name=enp3s0
|
||||
|
||||
[Network]
|
||||
Address=192.168.10.50/24
|
||||
Gateway=192.168.10.1
|
||||
DNS=8.8.8.8
|
||||
|
||||
正如你猜测的,enp3s0 接口的地址会被指定为 192.168.10.50/24,默认网关是 192.168.10.1,DNS 服务器是 8.8.8.8。这里微妙的一点是,接口名 enp3s0 事实上也匹配之前 DHCP 配置中定义的模式规则。但是,按照字典顺序,文件 "10-static-enp3s0.network" 在 "20-dhcp.network" 之前被处理,因此对于 enp3s0 接口,静态配置的优先级高于 DHCP 配置。
|
||||
As you can guess, the interface enp3s0 will be assigned an address 192.168.10.50/24, a default gateway 192.168.10.1, and a DNS server 8.8.8.8. One subtlety here is that the name of an interface enp3s0, in facts, matches the pattern rule defined in the earlier DHCP configuration as well. However, since the file "10-static-enp3s0.network" is processed before "20-dhcp.network" according to lexical order, the static configuration takes priority over DHCP configuration in case of enp3s0 interface.
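如果不想依赖这种字典顺序的优先级,也可以让 [Match] 节的条件更精确,比如按 MAC 地址匹配。下面是一个仅作示意的配置草图,其中的 MAC 地址是虚构的示例值:

```ini
[Match]
MACAddress=00:16:3e:aa:bb:cc

[Network]
DHCP=yes
```

这样该文件只会作用于这一块网卡,而不会被通配符模式意外覆盖或干扰。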
|
||||
|
||||
一旦你完成了创建配置文件,重启 systemd-networkd 服务或者重启机器。
|
||||
Once you are done with creating configuration files, restart systemd-networkd service or reboot.
|
||||
|
||||
$ sudo systemctl restart systemd-networkd
|
||||
|
||||
运行以下命令检查服务状态:
|
||||
Check the status of the service by running:
|
||||
|
||||
$ systemctl status systemd-networkd
|
||||
$ systemctl status systemd-resolved
|
||||
|
||||
![](https://farm1.staticflickr.com/719/21010813392_76abe123ed_c.jpg)
|
||||
|
||||
### 用 systemd-networkd 配置虚拟网络设备 ###
|
||||
### Configure Virtual Network Devices with Systemd-networkd ###
|
||||
|
||||
**systemd-networkd** 同样允许你配置虚拟网络设备,例如网桥、VLAN、隧道、VXLAN、绑定等。你必须在用 .netdev 作为扩展名的文件中配置这些虚拟设备。
|
||||
**systemd-networkd** also allows you to configure virtual network devices such as bridges, VLANs, tunnel, VXLAN, bonding, etc. You must configure these virtual devices in files with .netdev extension.
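除了下文要演示的网桥,同样的 .netdev 机制也适用于其他虚拟设备类型。作为参考,下面是一个仅作示意的 VLAN 草图(名字 vlan10 和 VLAN ID 10 都是假设的示例;还需要在父接口 .network 文件的 [Network] 节里加上 `VLAN=vlan10`,把它挂到物理接口上):

```ini
[NetDev]
Name=vlan10
Kind=vlan

[VLAN]
Id=10
```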
|
||||
|
||||
这里我展示了如何配置一个桥接接口。
|
||||
Here I'll show how to configure a bridge interface.
|
||||
|
||||
#### Linux 网桥 ####
|
||||
#### Linux Bridge ####
|
||||
|
||||
如果你想创建一个 Linux 网桥(br0) 并把物理接口(eth1) 添加到网桥,你可以新建下面的配置。
|
||||
If you want to create a Linux bridge (br0) and add a physical interface (eth1) to the bridge, create the following configuration.
|
||||
|
||||
$ sudo vi /etc/systemd/network/bridge-br0.netdev
|
||||
|
||||
----------
|
||||
|
||||
[NetDev]
|
||||
Name=br0
|
||||
Kind=bridge
|
||||
|
||||
然后按照下面这样用 .network 文件配置网桥接口 br0 和从接口 eth1。
|
||||
Then configure the bridge interface br0 and the slave interface eth1 using .network files as follows.
|
||||
|
||||
$ sudo vi /etc/systemd/network/bridge-br0-slave.network
|
||||
|
||||
----------
|
||||
|
||||
[Match]
|
||||
Name=eth1
|
||||
|
||||
[Network]
|
||||
Bridge=br0
|
||||
|
||||
----------
|
||||
|
||||
$ sudo vi /etc/systemd/network/bridge-br0.network
|
||||
|
||||
----------
|
||||
|
||||
[Match]
|
||||
Name=br0
|
||||
|
||||
[Network]
|
||||
Address=192.168.10.100/24
|
||||
Gateway=192.168.10.1
|
||||
DNS=8.8.8.8
|
||||
|
||||
最后,重启 systemd-networkd。
|
||||
Finally, restart systemd-networkd:
|
||||
|
||||
$ sudo systemctl restart systemd-networkd
|
||||
|
||||
你可以用 [brctl 工具][3] 来验证是否创建了网桥 br0。
|
||||
You can use [brctl tool][3] to verify that a bridge br0 has been created.
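如果没有安装 bridge-utils(提供 brctl),也可以用 iproute2 查看网桥。下面是一个仅作示意的命令草图,br0 创建成功后会出现在输出里:

```shell
# 列出系统上所有 bridge 类型的链路(每个网桥一行;没有网桥时输出为空)
ip -o link show type bridge
```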
|
||||
|
||||
### 总结 ###
|
||||
### Summary ###
|
||||
|
||||
当 systemd 誓言成为 Linux 的系统管理器时,有类似 systemd-networkd 的东西来管理网络配置也就不足为奇。但是在现阶段,systemd-networkd 看起来更适合于网络配置相对稳定的服务器环境。对于桌面/笔记本环境,它们有多种临时有线/无线接口,NetworkManager 仍然是比较好的选择。
|
||||
When systemd promises to be a system manager for Linux, it is no wonder something like systemd-networkd came into being to manage network configurations. At this stage, however, systemd-networkd seems more suitable for a server environment where network configurations are relatively stable. For desktop/laptop environments which involve various transient wired/wireless interfaces, NetworkManager may still be a preferred choice.
|
||||
|
||||
对于想进一步了解 systemd-networkd 的人,可以参考官方[man 手册][4]了解完整的支持列表和关键点。
|
||||
For those who want to check out more on systemd-networkd, refer to the official [man page][4] for a complete list of supported sections and keys.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/switch-from-networkmanager-to-systemd-networkd.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/nanni
|
||||
[1]:http://xmodulo.com/use-systemd-system-administration-debian.html
|
||||
[2]:http://xmodulo.com/disable-network-manager-linux.html
|
||||
[3]:http://xmodulo.com/how-to-configure-linux-bridge-interface.html
|
||||
[4]:http://www.freedesktop.org/software/systemd/man/systemd.network.html
|
487
translated/tech/20150831 Linux workstation security checklist.md
Normal file
487
translated/tech/20150831 Linux workstation security checklist.md
Normal file
@ -0,0 +1,487 @@
|
||||
Linux平台安全备忘录
|
||||
================================================================================
|
||||
这是一组Linux基金会自己系统管理员的推荐规范。所有Linux基金会的雇员都是远程工作,我们使用这套指导方针确保系统管理员的系统通过核心安全需求,降低我们平台成为攻击目标的风险。
|
||||
|
||||
即使你的系统管理员不用远程工作,很有可能的是,很多人的工作是在一个便携的笔记本上完成的,或者在业余时间或紧急时刻他们在工作平台中部署自己的家用系统。不论发生何种情况,你都能对应这个规范匹配到你的环境中。
|
||||
|
||||
这绝不是一份详尽的“工作站加固”文档,而是一份底线规范,力图在不带来太多不便的前提下,避免最常见、最明显的安全错误。读到这份文档,有人也许会觉得它的方法太偏执,也有人会觉得它只是浮于表面。安全就像在高速公路上开车 -- 任何比你开得慢的人都是傻瓜,而任何比你开得快的人都是疯子。这份指南只是一系列核心安全规则,既不详尽,也不能替代经验、警惕和常识。
|
||||
|
||||
每一节都分为两个部分:
|
||||
|
||||
- 核对适合你项目的需求
|
||||
- 相关注意事项的自由说明,解释为什么这样决定
|
||||
|
||||
## 严重级别
|
||||
|
||||
在清单的每一个项目都包括严重级别,这些是我们希望能帮助指导你的决定:
|
||||
|
||||
- _(关键)_ 项目应该在考虑列表上被明确的重视。如果不采取措施,将会导致你的平台安全出现高风险。
|
||||
- _(中等)_ 项目将改善你的安全形态,但不是很重要,尤其是如果他们太多的干涉你的工作流程。
|
||||
- _(低等)_ 项目也许会改善整体安全性,但是在便利权衡下也许并不值得。
|
||||
- _(偏执)_ 项目留给那些我们认为能显著提升平台安全性、但可能需要大幅调整你与操作系统交互方式的措施。
|
||||
|
||||
记住,这些只是参考。如果你觉得这些严重级别不符合你们项目对安全的承诺,你应该按自己的情况加以调整。
|
||||
|
||||
## 选择正确的硬件
|
||||
|
||||
我们并不要求管理员必须使用某个特定供应商或特定型号的机器,所以这一节是选择工作用机时的核心注意事项。
|
||||
|
||||
### 清单
|
||||
|
||||
- [ ] 系统支持安全启动 _(关键)_
|
||||
- [ ] 系统没有火线,雷电或者扩展卡接口 _(中等)_
|
||||
- [ ] 系统有TPM芯片 _(低)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### 安全引导
|
||||
|
||||
尽管安全引导(Secure Boot)的性质存在争议,它能在不带来太多额外麻烦的情况下,抵御很多针对工作站的攻击(Rootkit、“Evil Maid”等等)。它挡不住真正下定决心的攻击者,而且很大程度上,国家级安全机构也有办法绕过它(很可能是设计使然),但是有安全引导总比什么都没有强。
|
||||
|
||||
作为选择,你也许部署了[Anti Evil Maid][1]提供更多健全的保护,对抗安全引导支持的攻击类型,但是它需要更多部署和维护的工作。
|
||||
|
||||
#### 系统没有火线,雷电或者扩展卡接口
|
||||
|
||||
火线(Firewire)标准在设计上就允许任何连接的设备对你的系统进行完全的直接内存访问([查看维基百科][2])。雷电接口和扩展卡同样有问题,虽然后来的一些雷电实现试图限制内存访问的范围。系统上没有这些端口是最好的,但这并不是硬性要求,它们通常可以通过 UEFI 或内核本身禁用。
|
||||
|
||||
#### TPM芯片
|
||||
|
||||
可信平台模块(TPM)是主板上独立于核心处理器的加密芯片,可以用来增强平台安全性(比如保存全盘加密密钥),不过日常的工作站操作通常用不到它。充其量它是锦上添花,除非你有使用 TPM 增强平台安全性的特殊需要。
|
||||
|
||||
## 预引导环境
|
||||
|
||||
这是你开始安装系统前的一系列推荐规范。
|
||||
|
||||
### 清单
|
||||
|
||||
- [ ] 使用UEFI引导模式(不是传统BIOS)_(关键)_
|
||||
- [ ] 进入UEFI配置需要使用密码 _(关键)_
|
||||
- [ ] 使用安全引导 _(关键)_
|
||||
- [ ] 启动系统需要UEFI级别密码 _(低)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### UEFI和安全引导
|
||||
|
||||
UEFI尽管有缺点,还是提供很多传统BIOS没有的好功能,比如安全引导。大多数现代的系统都默认使用UEFI模式。
|
||||
|
||||
设置 UEFI 配置密码时要确保密码强度。注意,很多厂商会悄悄限制密码长度,所以相比长口令短语,你也许应该选择高熵的短密码(关于密码短语,见下文)。
|
||||
|
||||
取决于你选择的发行版,要让安全引导允许你启动该发行版,你也许需要、也许不需要额外折腾一番去导入发行版的安全引导密钥。很多发行版已经与微软合作,用大多数厂商固件默认信任的密钥给他们发布的内核签名,免去了你自己处理密钥导入的麻烦。
|
||||
|
||||
作为一个额外的措施,可以要求输入密码才允许启动系统,这样攻击者要先过这一关,才能接触引导分区并尝试做坏事。为了防止肩窥,这个密码应该跟你的 UEFI 管理密码不同。如果你经常开关机,也许不值得在这上面费心思:反正你还要输入 LUKS 密码,这只是省去几次额外的按键而已。
|
||||
|
||||
## 发行版选择注意事项
|
||||
|
||||
很有可能你会坚持一个广泛使用的发行版如Fedora,Ubuntu,Arch,Debian,或他们的一个类似分支。无论如何,这是你选择使用发行版应该考虑的。
|
||||
|
||||
### 清单
|
||||
|
||||
- [ ] 拥有一个强健的MAC/RBAC系统(SELinux/AppArmor/Grsecurity) _(关键)_
|
||||
- [ ] 公开的安全公告 _(关键)_
|
||||
- [ ] 提供及时的安全补丁 _(关键)_
|
||||
- [ ] 提供密码验证的包 _(关键)_
|
||||
- [ ] 完全支持UEFI和安全引导 _(关键)_
|
||||
- [ ] 拥有健壮的原生全磁盘加密支持 _(关键)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### SELinux,AppArmor,和GrSecurity/PaX
|
||||
|
||||
强制访问控制(MAC)或者基于角色的访问控制(RBAC)是对 POSIX 遗留下来的基于用户和组的安全机制的扩展。如今大多数发行版要么已经默认绑定了 MAC/RBAC 系统(Fedora、Ubuntu),要么提供了可选的安装后步骤来添加它(Gentoo、Arch、Debian)。很明显,强烈建议选择一个预装了 MAC/RBAC 系统的发行版;但如果你对某个没有默认启用它的发行版情有独钟,装完系统后应该计划配置安装它。
|
||||
|
||||
应该坚决避免使用不带任何 MAC/RBAC 机制的发行版,传统的 POSIX 基于用户和组的安全机制在当今时代已经不够用了。如果你想从头搭建一个 MAC/RBAC 工作站,通常会认为 AppArmor 和 PaX 比 SELinux 更容易学习。此外,工作站上监听服务很少甚至没有,最高风险来自用户自己运行的应用,此时 GrSecurity/PaX _可能_会比 SELinux 提供更多的安全收益。
|
||||
|
||||
#### 发行版安全公告
|
||||
|
||||
大多数广泛使用的发行版都有向用户发送安全公告的机制,但如果你钟爱某个小众发行版,要查看其开发人员是否有既定机制向用户通报安全漏洞和补丁。缺乏这样的机制是一个重要的警告信号,说明这个发行版还不够成熟,不能作为主要的管理工作站。
|
||||
|
||||
#### 及时和可靠的安全更新
|
||||
|
||||
多数常用的发行版都提供定期安全更新,但仍值得检查关键软件包的更新是否及时。之所以要避免衍生发行版和“社区重构版”,就是因为它们必须等上游发行版先发布,安全更新经常因此延迟。
|
||||
|
||||
如今,如果还能找到一个不对软件包、更新元数据或两者做加密签名的发行版,那才叫难。话虽如此,常用的发行版也是多年前才明白这个基本安全常识(Arch,说的就是你),所以这一点也值得检查。
|
||||
|
||||
#### 发行版支持UEFI和安全引导
|
||||
|
||||
检查发行版支持UEFI和安全引导。查明它是否需要导入额外的密钥或是否要求启动内核有一个已经被系统厂商信任的密钥签名(例如跟微软达成合作)。一些发行版不支持UEFI或安全启动,但是提供了替代品来确保防篡改或防破坏引导环境([Qubes-OS][3]使用Anti Evil Maid,前面提到的)。如果一个发行版不支持安全引导和没有机制防止引导级别攻击,还是看看别的吧。
|
||||
|
||||
#### 全磁盘加密
|
||||
|
||||
全磁盘加密是保护静态数据的必备要求,大多数发行版都支持。作为替代方案,也可以使用自加密硬盘(通常通过主板 TPM 芯片实现),它能提供类似的安全级别且速度更快,但是花费也更高。
|
||||
|
||||
## 发行版安装指南
|
||||
|
||||
所有发行版都是不同的,但是也有一些一般原则:
|
||||
|
||||
### 清单
|
||||
|
||||
- [ ] 使用健壮的密码全磁盘加密(LUKS) _(关键)_
|
||||
- [ ] 确保交换分区也加密了 _(关键)_
|
||||
- [ ] 确保引导程序设置了密码(可以和LUKS一样) _(关键)_
|
||||
- [ ] 设置健壮的root密码(可以和LUKS一样) _(关键)_
|
||||
- [ ] 使用无特权账户登录,管理员组的一部分 _(关键)_
|
||||
- [ ] 设置强壮的用户登录密码,不同于root密码 _(关键)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### 全磁盘加密
|
||||
|
||||
除非你使用的是自加密硬盘,否则让安装程序对存放数据和系统文件的磁盘做全盘加密非常重要。仅仅通过自动挂载的 cryptfs 循环文件加密用户目录是不够的(说的就是你,旧版 Ubuntu),这既没有保护系统二进制文件,也没有保护可能包含大量敏感数据的交换分区。推荐的加密策略是加密整个 LVM 设备,这样在启动过程中只需要输入一个密码。
|
||||
|
||||
`/boot`分区将一直保持非加密,当引导程序需要引导内核前,调用LUKS/dm-crypt。内核映像本身应该用安全引导加密签名检查防止被篡改。
|
||||
|
||||
换句话说,`/boot`应该是你系统上唯一没有加密的分区。
|
||||
|
||||
#### 选择好密码
|
||||
|
||||
现代 Linux 系统对密码口令长度没有限制,所以唯一的限制是你的偏执程度和倔强程度。每次启动系统,你大概至少要输入两个不同的密码:一个解锁 LUKS,另一个用来登录,所以过长的密码会让你很快厌烦。最好从丰富或混合的词汇中选取 2-3 个单词,组成容易输入的密码短语。
|
||||
|
||||
优秀密码例子(是的,你可以使用空格):
|
||||
- nature abhors roombas
|
||||
- 12 in-flight Jebediahs
|
||||
- perdon, tengo flatulence
|
||||
|
||||
如果相比密码短语你更喜欢输入传统密码,那么至少要使用 10-12 个字符长度的随机密码。
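上面“从词表中选取 2-3 个单词”的做法可以用 shell 粗略示意如下。这只是一个仅作示意的草图:下面的 6 个单词是占位符,真实的词表(例如 diceware 词表)应该有数千个词条,否则熵远远不够。

```shell
# 示意:从一个(此处小得不安全的)词表中随机抽取 3 个单词组成密码短语。
wordlist='nature abhors roombas jebediah perdon flatulence'
pass=$(printf '%s\n' $wordlist | shuf -n 3 | paste -sd ' ' -)
echo "$pass"
```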
|
||||
|
||||
除非你有人身安全的担忧,写下你的密码,并保存在一个远离你办公桌的安全的地方才合适。
|
||||
|
||||
#### Root,用户密码和管理组
|
||||
|
||||
我们建议,你的root密码和你的LUKS加密使用同样的密码(除非你共享你的笔记本给可信的人,他应该能解锁设备,但是不应该能成为root用户)。如果你是笔记本电脑的唯一用户,那么你的root密码与你的LUKS密码不同是没有意义的安全优势。通常,你可以使用同样的密码在你的UEFI管理,磁盘加密,和root登陆 -- 知道这些任意一个都会让攻击者完全控制您的系统,在单用户工作站上使这些密码不同,没有任何安全益处。
|
||||
|
||||
你应该有一个不同的,但同样强健的常规用户帐户密码用来每天工作。这个用户应该是管理组用户(例如`wheel`或者类似,根据分支),允许你执行`sudo`来提升权限。
|
||||
|
||||
换句话说,如果你是工作站的唯一用户,你只需要记住两个独特而强健的密码:
|
||||
|
||||
**管理级别**,用在以下区域:
|
||||
|
||||
- UEFI管理
|
||||
- 引导程序(GRUB)
|
||||
- 磁盘加密(LUKS)
|
||||
- 工作站管理(root用户)
|
||||
|
||||
|
||||
**用户级别**,用在以下:
|
||||
|
||||
- 用户登陆和sudo
|
||||
- 密码管理器的主密码
|
||||
|
||||
当然,如果有令人信服的理由,这些密码也都可以各不相同。
|
||||
|
||||
## 安装后的加强
|
||||
|
||||
安装后的安全性加强在很大程度上取决于你选择的分支,所以在一个通用的文档中提供详细说明是徒劳的,例如这一个。然而,这里有一些你应该采取的步骤:
|
||||
|
||||
### 清单
|
||||
|
||||
- [ ] 在全体范围内禁用火线和雷电模块 _(关键)_
|
||||
- [ ] 检查你的防火墙,确保过滤所有传入端口 _(关键)_
|
||||
- [ ] 确保root邮件转发到一个你可以查看到的账户 _(关键)_
|
||||
- [ ] 检查以确保sshd服务默认情况下是禁用的 _(中等)_
|
||||
- [ ] 建立一个系统自动更新任务,或更新提醒 _(中等)_
|
||||
- [ ] 配置屏幕保护程序在一段时间的不活动后自动锁定 _(中等)_
|
||||
- [ ] 建立日志监控 _(中等)_
|
||||
- [ ] 安装使用rkhunter _(低等)_
|
||||
- [ ] 安装一个入侵检测系统 _(偏执)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### 黑名单模块
|
||||
|
||||
要将火线和雷电模块列入黑名单,把下面几行加入 `/etc/modprobe.d/blacklist-dma.conf` 文件:
|
||||
|
||||
blacklist firewire-core
|
||||
blacklist thunderbolt
|
||||
|
||||
重启后这些模块就会被列入黑名单。即使你没有这些端口,这样做也是无害的(只是不起任何作用)。
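写好后可以简单确认一下文件内容。下面是一个仅作示意的草图,为了能安全演示,这里把文件写进临时目录;在真实系统上目标路径就是上面的 /etc/modprobe.d/blacklist-dma.conf:

```shell
# 示意:生成 DMA 黑名单文件并确认写入了两条 blacklist 规则
conf_dir=$(mktemp -d)
cat > "$conf_dir/blacklist-dma.conf" <<'EOF'
blacklist firewire-core
blacklist thunderbolt
EOF
grep -c '^blacklist' "$conf_dir/blacklist-dma.conf"
# → 2
```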
|
||||
|
||||
#### Root邮件
|
||||
|
||||
默认情况下,root 的邮件只是存储在系统上,基本上没人会去读。确保设置好 `/etc/aliases`,把 root 邮件转发到你确实会查看的邮箱,否则你可能会错过重要的系统通知和报告:
|
||||
|
||||
# Person who should get root's mail
|
||||
root: bob@example.com
|
||||
|
||||
编辑这些内容后运行 `newaliases`,然后测试一下确保邮件确实能投递,因为一些邮件服务商会拒绝来自不存在或不可达域名的邮件。如果是这种情况,你需要调整邮件转发配置,直到确实可用为止。
|
||||
|
||||
#### 防火墙,sshd,和监听进程
|
||||
|
||||
默认的防火墙设置取决于你的发行版,但是大多数都允许 `sshd` 端口连入。除非你有令人信服的合理理由允许 ssh 连入,否则你应该把它过滤掉,并禁用 sshd 守护进程。
|
||||
|
||||
systemctl disable sshd.service
|
||||
systemctl stop sshd.service
|
||||
|
||||
如果你需要使用它,你也可以临时启动它。
|
||||
|
||||
通常,除了响应 ping 之外,你的系统不应该有任何监听端口。这将有助于你对抗网络级别的零日漏洞利用。
|
||||
|
||||
#### 自动更新或通知
|
||||
|
||||
建议打开自动更新,除非你有非常好的理由不这么做,比如担心自动更新会让系统无法使用(过去确实发生过这种事,这种担心并非杞人忧天)。至少,你应该启用可用更新的自动通知。大多数发行版已经自动运行这个服务,所以你多半不需要做任何事。详情请查阅你的发行版文档。
|
||||
|
||||
你应该尽快应用所有明显的勘误更新,即使它们没有特别标注为“安全更新”或关联 CVE 编号。所有错误都是潜在的安全漏洞,而比起保留旧的、已知的错误,换成新的、未知的错误通常是更安全的策略。
|
||||
|
||||
#### 监控日志
|
||||
|
||||
你应该关注你的系统上发生了什么。出于这个原因,你应该安装 `logwatch`,并配置它每晚发送一份系统活动报告。这挡不住专业的攻击者,但是一张不错的安全防护网。
|
||||
|
||||
注意,许多基于 systemd 的发行版不再自动安装 logwatch 需要的 syslog 服务(因为 systemd 依靠自己的 journald 日志),所以你需要安装并启用 rsyslog,确保在使用 logwatch 之前 /var/log 不是空的。
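作为示意,下面是一个假设性的 logwatch 配置片段。路径和取值都只是示例(邮箱沿用上文的 bob@example.com),具体键名和默认值以你的发行版的 logwatch 文档为准:

```ini
# /etc/logwatch/conf/logwatch.conf(示例取值)
MailTo = bob@example.com
Detail = Med
Range = yesterday
```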
|
||||
|
||||
#### Rkhunter和IDS
|
||||
|
||||
安装 `rkhunter` 和入侵检测系统(IDS,如 `aide` 或者 `tripwire`)并没有多大用处,除非你确实理解它们的工作原理,并采取必要的步骤正确设置(例如,把数据库保存在外部介质上、在可信环境中运行检测、记得在系统更新和配置更改后刷新数据库散列,等等)。如果你不愿意执行这些步骤并相应调整自己的工作方式,这些工具只会带来麻烦,而没有任何实际的安全收益。
|
||||
|
||||
我们强烈建议你安装`rkhunter`并每晚运行它。它相当易于学习和使用,虽然它不会阻止一个复杂的攻击者,它也能帮助你捕获你自己的错误。
|
||||
|
||||
## 个人工作站备份
|
||||
|
||||
工作站备份往往被忽视,或无计划的做,常常是不安全的方式。
|
||||
|
||||
### 清单
|
||||
|
||||
- [ ] 设置加密备份工作站到外部存储 _(关键)_
|
||||
- [ ] 使用零认知云备份的备份工具 _(中等)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### 全加密备份存到外部存储
|
||||
|
||||
把全部备份放到一个移动磁盘中比较方便,不用担心带宽和流速(在这个时代,大多数服务商仍然提供明显不对称的上传/下载速度)。不用说,这个移动硬盘本身需要加密(又一次,通过 LUKS),或者你应该使用一个备份工具建立加密备份,例如 `duplicity` 或者它的 GUI 前端 `deja-dup`。我建议使用后者,并使用随机生成的密码,保存到你的密码管理器中。如果你带着笔记本去旅行,把这个磁盘留在家里,以防笔记本丢失或被窃时还能找回备份。
|
||||
|
||||
除了家目录外,你还应该备份 `/etc` 目录,以及出于取证目的备份 `/var/log` 目录。
|
||||
|
||||
最重要的是,避免把家目录拷贝到任何非加密存储上,哪怕只是想在两个系统之间快速转移文件。一旦拷完你肯定会忘了清除,个人隐私或者安全敏感的信息就会暴露给窥探者 -- 尤其是当这个存储设备跟笔记本放在同一个包里被偷的时候。
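“加密备份”的思路可以用常见工具粗略示意。下面是一个仅作示意的草图:上文推荐的工具是 duplicity/deja-dup,这里只用 tar 和 openssl 演示原理,所有路径、文件名和口令都是虚构的示例,真实场景请用密码管理器保存随机口令。

```shell
# 示意:把一个目录打包,再用对称加密生成加密备份,然后验证可以还原。
src=$(mktemp -d); dest=$(mktemp -d)
echo "secret" > "$src/note.txt"                     # 代替真实的 $HOME 数据
tar -C "$src" -czf "$dest/home.tgz" .
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:example-passphrase \
    -in "$dest/home.tgz" -out "$dest/home.tgz.enc"
rm "$dest/home.tgz"                                 # 只保留加密副本
# 还原:
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass pass:example-passphrase \
    -in "$dest/home.tgz.enc" -out "$dest/restored.tgz"
restore=$(mktemp -d)
tar -C "$restore" -xzf "$dest/restored.tgz"
cat "$restore/note.txt"
```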
|
||||
|
||||
#### 零认知站外备份选择性
|
||||
|
||||
站外备份同样重要,无论是利用雇主提供的空间,还是找一家云服务商,都值得去做。你可以建一个单独的 duplicity/deja-dup 配置,只包括重要的文件,以免传输大量你并不想备份的数据(网络缓存、音乐、下载内容等等)。
|
||||
|
||||
作为选择,你可以使用零认知备份工具,例如[SpiderOak][5],它提供一个卓越的Linux GUI工具还有实用的特性,例如在多个系统或平台间同步内容。
|
||||
|
||||
## 最佳实践
|
||||
|
||||
下面是我们认为你应该采用的最佳实践列表。它当然算不上详尽,而是试图提供实用的建议,在整体安全性和可用性之间取得可行的平衡。
|
||||
|
||||
### 浏览
|
||||
|
||||
毫无疑问,Web 浏览器将是你系统上攻击面最大、暴露最多的软件。它就是一个专门下载并执行不可信的、往往是恶意的代码的工具。它采用沙箱、代码净化等多种机制试图保护你免受这种危险,但这些机制此前已多次被攻破。你应该认识到,浏览网站是你每天所做的事情中最不安全的活动。
|
||||
|
||||
有几种方法可以减轻浏览器被攻陷带来的影响,但真正有效的方法需要显著改变你操作工作站的方式。
|
||||
|
||||
#### 1: 使用两个不同的浏览器
|
||||
|
||||
这很容易做到,但只有很少的安全收益。并不是每一次浏览器沦陷都会让攻击者获得对你系统的完全访问 -- 有时攻击者只能读取本地浏览器存储、窃取其他标签页的活动会话、捕获浏览器里的输入等等。使用两个不同的浏览器,一个专用于工作/高安全站点,另一个用于其他用途,有助于防止小的浏览器沦陷让攻击者拿到整个“饼干罐”(cookie)。主要的不便是两个浏览器会消耗大量内存。
|
||||
|
||||
我们建议:
|
||||
|
||||
##### 火狐用来工作和高安全站点
|
||||
|
||||
使用火狐访问工作相关的站点 -- 也就是那些需要额外小心、确保 cookies、会话、登录信息、按键记录等数据绝不能落入攻击者手中的站点。除了这少数几个网站,你不应该用这个浏览器访问其他任何网站。
|
||||
|
||||
你应该安装下面的火狐扩展:
|
||||
|
||||
- [ ] NoScript _(关键)_
|
||||
- NoScript 会阻止加载活动内容,除非其域名在用户白名单里。把它用在默认浏览器上会很麻烦(虽然它提供了真正好的安全收益),所以我们建议只在这个专门访问工作相关网站的浏览器上启用它。
|
||||
|
||||
- [ ] Privacy Badger _(关键)_
|
||||
- EFF 的 Privacy Badger 会在页面加载时拦截大多数外部追踪器和广告平台,一旦这些追踪站点被攻陷,这有助于避免你的浏览器跟着沦陷(追踪器和广告站点通常是攻击者的首选目标,因为借助它们可以迅速感染世界各地成千上万的系统)。
|
||||
|
||||
- [ ] HTTPS Everywhere _(关键)_
|
||||
- 这个EFF开发的扩展将确保你访问的大多数站点都在安全连接上,甚至你点击的连接使用的是http://(有效的避免大多数的攻击,例如[SSL-strip][7])。
|
||||
|
||||
- [ ] Certificate Patrol _(中等)_
|
||||
- 如果你正在访问的站点最近改换了 TLS 证书,这个工具会警告你 -- 尤其是在证书远未到期,或者换用了不同的证书颁发机构的情况下。它有助于提醒你是否有人对你的连接进行中间人攻击,但代价是会产生不少无害的误报。
|
||||
|
||||
你应该让火狐成为你的默认打开连接的浏览器,因为NoScript将在加载或者执行时阻止大多数活动内容。
|
||||
|
||||
##### 其他一切都用Chrome/Chromium
|
||||
|
||||
Chromium 开发者在增加很多很好的安全特性方面走在火狐前面(至少[在 Linux 上][6]),例如 seccomp 沙箱、内核用户命名空间等等,这些在你访问的网站和系统其余部分之间构成了额外的隔离层。Chromium 是上游开源项目,Chrome 是 Google 基于它构建的专有软件包(对于任何你不想让谷歌知道的东西,都要谨慎避免在 Chrome 里输入)。
|
||||
|
||||
建议你在 Chrome 上也安装 **Privacy Badger** 和 **HTTPS Everywhere** 扩展,并给它换一个与火狐不同的主题,以提示自己这是用来访问“不信任站点”的浏览器。
|
||||
|
||||
#### 2: 使用两个不同浏览器,一个在专用的虚拟机里
|
||||
|
||||
这有点像上面的建议,区别是增加了一个步骤:在专用虚拟机里运行 Chrome,并通过支持共享剪贴板和声音转发的快速访问协议(如 Spice 或 RDP)来使用它。这会在不可信浏览器和你的其余工作环境之间增加一个优秀的隔离层,攻击者即使完全攻陷了浏览器,也还必须突破虚拟机的隔离层才能触及系统的其余部分。
|
||||
|
||||
这是一个出乎意料可行的方案,但是需要大量内存和高速处理器来应对增加的负载。它还要求管理员有相当的投入,并相应地调整自己的工作习惯。
|
||||
|
||||
#### 3: 通过虚拟化完全隔离你的工作和娱乐环境
|
||||
|
||||
看[Qubes-OS项目][3],它致力于通过划分你的应用到完全独立分开的VM中,提供高安全工作环境。
|
||||
|
||||
### 密码管理器
|
||||
|
||||
#### 清单
|
||||
|
||||
- [ ] 使用密码管理器 _(关键)_
|
||||
- [ ] 不相关的站点使用不同的密码 _(关键)_
|
||||
- [ ] 使用支持团队共享的密码管理器 _(中等)_
|
||||
- [ ] 给非网站用户使用一个单独的密码管理器 _(偏执)_
|
||||
|
||||
#### 注意事项
|
||||
|
||||
使用优质且互不重复的密码,应该是对你的团队成员的关键要求。凭证盗取无时无刻不在发生:通过被攻破的计算机、被窃取的数据库备份、远程站点漏洞利用,以及其他多种手段。凭证绝不应该跨站点重用,尤其是关键应用。
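作为一个简单的示意(不能替代密码管理器),下面的 shell 片段用 /dev/urandom 生成一个 24 位的随机字母数字密码:

```shell
# 从内核随机源取 256 字节,滤出字母和数字,截取前 24 个字符
head -c 256 /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 24; echo
```

实际使用时,密码管理器自带的密码生成器通常是更好的选择。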
|
||||
|
||||
|
||||
##### 浏览器中的密码管理器
|
||||
|
||||
每个浏览器都内置了相当安全的密码保存机制,可以通过厂商的机制同步到云端,并用用户提供的密码对数据加密。然而,这种机制有严重的劣势:
|
||||
|
||||
|
||||
1. 不能跨浏览器工作
|
||||
2. 不提供任何与团队成员共享凭证的方法
|
||||
|
||||
也有一些支持良好、免费或便宜的密码管理器,能很好地集成到多个浏览器中,可以跨平台工作,并提供团队共享功能(通常是付费服务)。通过搜索引擎可以很容易地找到这类方案。
|
||||
|
||||
##### 独立的密码管理器
|
||||
|
||||
任何与浏览器集成的密码管理器都有一个主要缺点:它是浏览器的一部分,而浏览器正是最容易被入侵者攻击的应用。如果这让你感到不安(理应如此),你应该选择两个不同的密码管理器:一个集成在浏览器里,用于保存网站密码;另一个作为独立应用运行。后者可用于存储高风险凭证,如 root 密码、数据库密码、其他 shell 账户凭证等。
|
||||
|
||||
这样的工具在团队成员之间共享超级用户凭证(服务器 root 密码、ILO 密码、数据库管理员密码、引导加载程序密码等)时尤其有用。
|
||||
|
||||
这几个工具可以帮助你:
|
||||
|
||||
- [KeePassX][8],它在第 2 版中改进了团队共享
|
||||
- [Pass][9],它使用了文本文件和PGP并与git结合
|
||||
- [Django-Pstore][10],它使用 GPG 在管理员之间共享凭证
|
||||
- [Hiera-Eyaml][11],如果你的平台已经在使用 Puppet,它可以把服务器/服务凭证作为 Hiera 加密数据的一部分来便捷地管理。
|
||||
|
||||
### 加固SSH和PGP私钥
|
||||
|
||||
个人加密密钥,包括 SSH 和 PGP 私钥,是你工作站上最重要的资产:攻击者最想得到的就是它们,因为拿到之后就能进一步攻击你的基础设施,或者向其他管理员冒充你。你应该采取额外的措施,确保私钥免遭盗窃。
|
||||
|
||||
#### 清单
|
||||
|
||||
- [ ] 用强壮的密码保护私钥 _(关键)_
|
||||
- [ ] PGP 主密钥保存在移动存储中 _(中等)_
|
||||
- [ ] 用于身份验证、签名和加密的子密钥存储在智能卡设备中 _(中等)_
|
||||
- [ ] SSH配置为使用PGP认证密钥作为ssh私钥 _(中等)_
|
||||
|
||||
#### 注意事项
|
||||
|
||||
防止私钥被偷的最好方式是使用一个智能卡存储你的加密私钥,不要拷贝到工作平台上。有几个厂商提供支持OpenPGP的设备:
|
||||
|
||||
- [Kernel Concepts][12],在这里可以购买支持 OpenPGP 的智能卡,以及(如果你还没有的话)配套的 USB 读卡器。
|
||||
- [Yubikey NEO][13],它在提供 OpenPGP 智能卡功能之外,还提供很多很酷的特性(U2F、PIV、HOTP 等)。
|
||||
|
||||
同样重要的是,确保 PGP 主密钥不存储在工作站上,日常只使用子密钥。主密钥只在为其他密钥签名和创建新的子密钥时才会用到,而这类操作并不常见。你可以照着 [Debian 的子密钥][14]向导,学习如何把主密钥迁移到移动存储并创建子密钥。
|
||||
|
||||
你应该配置 gnupg 代理充当 ssh 代理,并使用智能卡上的 PGP 认证密钥作为你的 ssh 私钥。我们发布了一篇[详细指南][15],介绍如何配合智能卡读卡器或 Yubikey NEO 完成这一设置。
|
||||
|
||||
如果你不想这么麻烦,至少要确保你的 PGP 私钥和 SSH 私钥都设置了强壮的密码,这会让攻击者更难盗用它们。
|
||||
|
||||
### 工作站上的SELinux
|
||||
|
||||
如果你使用的发行版捆绑了 SELinux(如 Fedora),这里有一些建议,帮助你最大限度地发挥它对工作站安全的价值。
|
||||
|
||||
#### 清单
|
||||
|
||||
- [ ] 确保你的工作站强制使用SELinux _(关键)_
|
||||
- [ ] 不要盲目执行 `audit2allow -M`,每次都要检查生成的策略 _(关键)_
|
||||
- [ ] 从不 `setenforce 0` _(中等)_
|
||||
- [ ] 切换你的用户到SELinux用户`staff_u` _(中等)_
|
||||
|
||||
#### 注意事项
|
||||
|
||||
SELinux 是对 POSIX 权限核心功能的强制访问控制(MAC)扩展。它成熟、强健,自推出以来已经走过了很长的路。尽管如此,许多系统管理员至今仍在重复那句过时的口头禅"把它关掉就行"。
|
||||
|
||||
话虽如此,SELinux 在工作站上带来的安全收益有限,因为大多数应用都是以你的用户身份不受限制地运行的。不过,开启它仍然值得:它能为网络服务提供足够的保护,有可能阻止攻击者借助存在漏洞的后台服务把权限提升到 root。
|
||||
|
||||
我们的建议是开启它并强制使用。
|
||||
|
||||
##### 从不`setenforce 0`
|
||||
|
||||
`setenforce 0` 可以临时把 SELinux 切换到宽容(permissive)模式,但你应该避免这样做。你往往只是想排查某个特定应用或程序的问题,这条命令却把整个系统的 SELinux 都关掉了。
|
||||
|
||||
你应该用 `semanage permissive -a [somedomain_t]` 来代替 `setenforce 0`,只把出问题的域切换到宽容模式。首先运行 `ausearch` 查看是哪个域出了问题:
|
||||
|
||||
ausearch -ts recent -m avc
|
||||
|
||||
然后看下`scontext=`(SELinux的上下文)行,像这样:
|
||||
|
||||
scontext=staff_u:staff_r:gpg_pinentry_t:s0-s0:c0.c1023
|
||||
^^^^^^^^^^^^^^
|
||||
|
||||
这告诉你被拒绝操作的是 `gpg_pinentry_t` 域,所以如果你想排查这个应用的故障,应该把该域加入宽容模式:
|
||||
|
||||
    semanage permissive -a gpg_pinentry_t
|
||||
|
||||
这样你就可以继续使用该应用并收集其余的 AVC 信息,然后配合 `audit2allow` 写出一个本地策略。完成之后,如果不再出现新的 AVC 拒绝,就可以把该域从宽容模式中移除,运行:
|
||||
|
||||
semanage permissive -d gpg_pinentry_t
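把上文"收集 AVC、再用 `audit2allow` 生成本地策略"的流程串起来,大致是下面这样(需要 root 权限,模块名 mypolicy 只是假设的示例):

```shell
# 1. 查看最近的 AVC 拒绝记录
ausearch -ts recent -m avc
# 2. 审阅无误后,由这些记录生成一个本地策略模块(模块名为示例)
ausearch -ts recent -m avc | audit2allow -M mypolicy
# 3. 检查生成的 mypolicy.te,确认没有放开不该放开的权限,再加载它
semodule -i mypolicy.pp
```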
|
||||
|
||||
##### 以 SELinux 角色 staff_r 使用你的工作站
|
||||
|
||||
SELinux 自带了一套角色,可以基于角色禁止或授予用户账户的某些特权。作为管理员,你应该使用 `staff_r` 角色,它会限制对许多配置文件和其他安全敏感文件的访问,除非你先执行 `sudo`。
|
||||
|
||||
默认情况下,用户以 `unconfined_r` 角色创建,可以不受(或几乎不受)SELinux 约束地运行大多数应用。要把你的用户切换到 `staff_r` 角色,运行下面的命令:
|
||||
|
||||
usermod -Z staff_u [username]
|
||||
|
||||
你需要注销再重新登录以激活新角色,此时运行 `id -Z`,你会看到:
|
||||
|
||||
staff_u:staff_r:staff_t:s0-s0:c0.c1023
|
||||
|
||||
执行 `sudo` 时,你应该记得加上一个额外的参数,告诉 SELinux 切换到 "sysadmin" 角色。你需要的命令是:
|
||||
|
||||
sudo -i -r sysadm_r
|
||||
|
||||
届时`id -Z`将会显示:
|
||||
|
||||
staff_u:sysadm_r:sysadm_t:s0-s0:c0.c1023
|
||||
|
||||
**警告**:在进行这个切换之前,你应该已经能熟练使用 `ausearch` 和 `audit2allow`,因为以 `staff_r` 角色运行时,你的一些应用可能无法再正常工作。在撰写本文时,以下常用应用已知在 `staff_r` 下不调整策略就无法工作:
|
||||
|
||||
- Chrome/Chromium
|
||||
- Skype
|
||||
- VirtualBox
|
||||
|
||||
要切换回 `unconfined_r`,运行下面的命令:
|
||||
|
||||
usermod -Z unconfined_u [username]
|
||||
|
||||
然后注销并重新登录,回到你熟悉的环境。
|
||||
|
||||
## 延伸阅读
|
||||
|
||||
IT安全的世界是一个没有底的兔子洞。如果你想深入,或者找到你的具体发行版更多的安全特性,请查看下面这些链接:
|
||||
|
||||
- [Fedora Security Guide](https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html)
|
||||
- [CESG Ubuntu Security Guide](https://www.gov.uk/government/publications/end-user-devices-security-guidance-ubuntu-1404-lts)
|
||||
- [Debian Security Manual](https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html)
|
||||
- [Arch Linux Security Wiki](https://wiki.archlinux.org/index.php/Security)
|
||||
- [Mac OSX Security](https://www.apple.com/support/security/guides/)
|
||||
|
||||
## 许可
|
||||
|
||||
本文档以[知识共享署名-相同方式共享 4.0 国际许可证][0]发布。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://github.com/lfit/itpol/blob/master/linux-workstation-security.md#linux-workstation-security-list
|
||||
|
||||
作者:[mricon][a]
|
||||
译者:[wyangsun](https://github.com/wyangsun)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/mricon
|
||||
[0]: http://creativecommons.org/licenses/by-sa/4.0/
|
||||
[1]: https://github.com/QubesOS/qubes-antievilmaid
|
||||
[2]: https://en.wikipedia.org/wiki/IEEE_1394#Security_issues
|
||||
[3]: https://qubes-os.org/
|
||||
[4]: https://xkcd.com/936/
|
||||
[5]: https://spideroak.com/
|
||||
[6]: https://code.google.com/p/chromium/wiki/LinuxSandboxing
|
||||
[7]: http://www.thoughtcrime.org/software/sslstrip/
|
||||
[8]: https://keepassx.org/
|
||||
[9]: http://www.passwordstore.org/
|
||||
[10]: https://pypi.python.org/pypi/django-pstore
|
||||
[11]: https://github.com/TomPoulton/hiera-eyaml
|
||||
[12]: http://shop.kernelconcepts.de/
|
||||
[13]: https://www.yubico.com/products/yubikey-hardware/yubikey-neo/
|
||||
[14]: https://wiki.debian.org/Subkeys
|
||||
[15]: https://github.com/lfit/ssh-gpg-smartcard-config
|
|
||||
开发者的 Linux 容器之旅
|
||||
================================================================================
|
||||
![](https://deis.com/images/blog-images/dev_journey_0.jpg)
|
||||
|
||||
我告诉你一个秘密:让我的应用程序走向全世界的那一整套云计算技术,对我来说仍然有点神秘。但随着时间流逝,我意识到,理解大规模机器配置和应用程序部署的来龙去脉,对开发者来说是非常重要的知识。这就像成为一名专业音乐家:你当然需要知道如何演奏自己的乐器,但如果你不了解录音室是如何运作的,或者不知道如何融入交响乐团,在那样的环境中工作就会非常困难。
|
||||
|
||||
在软件开发的世界里,让你的代码走进更广阔的世界,和写出代码本身同样重要。开发很重要,部署同样重要。
|
||||
|
||||
因此,为了弥合开发和部署之间的间隔,我会从头开始介绍容器技术。为什么是容器?因为有强有力的证据表明,容器是机器抽象的下一步:使计算机成为场所而不再是一个东西。理解容器是我们共同的旅程。
|
||||
|
||||
在这篇文章中,我会介绍容器化背后的概念。容器和虚拟机的区别。以及容器构建背后的逻辑以及它是如何适应应用程序架构的。我会探讨轻量级的 Linux 操作系统是如何适应容器生态系统。我还会讨论使用镜像创建可重用的容器。最后我会介绍容器集群如何使你的应用程序可以快速扩展。
|
||||
|
||||
在后面的文章中,我会一步一步向你介绍容器化一个事例应用程序的过程,以及如何为你的应用程序容器创建一个托管集群。同时,我会向你展示如何使用 Deis 将你的事例应用程序部署到你本地系统以及多种云供应商的虚拟机上。
|
||||
|
||||
让我们开始吧。
|
||||
|
||||
### 虚拟机的好处 ###
|
||||
|
||||
为了理解容器在技术发展中的位置,你首先要了解容器的前身:虚拟机。
|
||||
|
||||
[虚拟机][1] 是运行在物理宿主机上的软件抽象。配置一个虚拟机就像是购买一台计算机:你需要定义你想要的 CPU 数目,RAM 和磁盘存储容量。配置好了机器后,你把它加载到操作系统,然后是你想让虚拟机支持的任何服务器或者应用程序。
|
||||
|
||||
虚拟机允许你在一台硬件主机上运行多个模拟计算机。这是一个简单的示意图:
|
||||
|
||||
![](https://deis.com/images/blog-images/dev_journey_1.png)
|
||||
|
||||
虚拟机让你能充分利用硬件资源。你可以购买一台大型机,然后在上面运行多个虚拟机。你可以有一个数据库虚拟机,以及由很多运行同一版本定制应用程序的虚拟机构成的集群。你可以用有限的硬件资源获得很强的扩展能力。如果你觉得需要更多虚拟机,而宿主硬件还有余量,就可以随时添加;如果不再需要某个虚拟机,可以关闭它并删除其镜像。
|
||||
|
||||
### 虚拟机的局限 ###
|
||||
|
||||
但是,虚拟机确实有局限。
|
||||
|
||||
如上面所示,假如你在一个主机上创建了三个虚拟机。主机有 12 个 CPU,48 GB 内存和 3TB 的存储空间。每个虚拟机配置为有 4 个 CPU,16 GB 内存和 1TB 存储空间。到现在为止,一切都还好。主机有这个容量。
|
||||
|
||||
但这里有个缺陷。分配给虚拟机的所有资源,无论是什么,都是独占的。每台虚拟机都分到了 16 GB 内存。但是,如果第一个虚拟机从未用到超过 1GB 内存,剩余的 15 GB 就浪费在那里了。如果第三个虚拟机只用了所分配的 1TB 存储空间中的 100GB,其余的 900GB 就成了浪费的空间。
|
||||
|
||||
这里没有资源的流动。每台虚拟机拥有分配给它的所有资源。因此,在某种方式上我们又回到了虚拟机之前,把大部分金钱花费在未使用的资源上。
|
||||
|
||||
虚拟机还有*另一个*缺陷。扩展他们需要很长时间。如果你处于基础设施需要快速增长的情形,即使虚拟机配置是自动的,你仍然会发现你的很多时间都浪费在等待机器上线。
|
||||
|
||||
### 来到:容器 ###
|
||||
|
||||
概念上来说,容器是 Linux 中一个以为整个系统只有它自己的进程。该进程只知道别人告诉它的东西。另外,在容器化场景下,这个容器进程还被分配了自己的 IP 地址。这一点很重要,我再重复一次:**在容器化场景下,容器进程有它自己的 IP 地址**。一旦有了 IP 地址,该进程就成为宿主网络中可识别的资源。然后,你可以在容器管理器上运行命令,把容器 IP 映射到宿主机上一个能访问公网的 IP 地址。映射完成之后,这个容器实际上就是网络上一台可访问的独立机器,概念上类似于虚拟机。
|
||||
|
||||
再次说明,容器是拥有不同 IP 地址从而使其成为网络上可识别的独立 Linux 进程。下面是一个示意图:
|
||||
|
||||
![](https://deis.com/images/blog-images/dev_journey_2.png)
|
||||
|
||||
容器/进程以动态合作的方式共享主机上的资源。如果容器只需要 1GB 内存,它就只会使用 1GB。如果它需要 4GB,就会使用 4GB。CPU 和存储空间利用也是如此。CPU,内存和存储空间的分配是动态的,和典型虚拟机的静态方式不同。所有这些资源的共享都由容器管理器管理。
|
||||
|
||||
最后,容器能快速启动。
|
||||
|
||||
因此,容器的好处是:**你获得了虚拟机独立和封装的好处而抛弃了专有静态资源的缺陷**。另外,由于容器能快速加载到内存,在扩展到多个容器时你能获得更好的性能。
|
||||
|
||||
### 容器托管、配置和管理 ###
|
||||
|
||||
托管容器的计算机运行着被剥离的只剩下主要部分的 Linux 版本。现在,宿主计算机流行的底层操作系统是上面提到的 [CoreOS][2]。当然还有其它,例如 [Red Hat Atomic Host][3] 和 [Ubuntu Snappy][4]。
|
||||
|
||||
宿主的 Linux 操作系统由所有容器共享,减少了每个容器占用空间中的重复和冗余。每个容器只包含自己独有的部分。下面是一个示意图:
|
||||
|
||||
![](https://deis.com/images/blog-images/dev_journey_3.png)
|
||||
|
||||
你用容器所需的组件来配置它。容器的一个组成部分称为**层**,一层就是一个容器镜像(后面的小节会更详细地介绍容器镜像)。你从一个基础层开始,这通常是你想在容器里使用的操作系统(容器管理器只提供目标操作系统中宿主操作系统里没有的那部分)。构建容器配置时,你会不断添加层:比如想要 Apache 网络服务器就添加对应的层,容器要运行脚本就再添加 PHP 或 Python 运行时。
|
||||
|
||||
分层非常灵活。如果应用程序或者服务容器需要 PHP 5.2 版本,你相应地配置该容器即可。如果你有另一个应用程序或者服务需要 PHP 5.6 版本,没问题,你可以使用 PHP 5.6 配置该容器。不像虚拟机,更改一个版本的运行时依赖时你需要经过大量的配置和安装过程;对于容器你只需要在容器配置文件中重新定义层。
|
||||
|
||||
所有上面描述的容器多功能性都由一个称为容器管理器的软件控制。现在,最流行的容器管理器是 [Docker][5] 和 [Rocket][6]。上面的示意图展示了容器管理器是 Docker,宿主操作系统是 CentOS 的主机情景。
|
||||
|
||||
### 容器由镜像构成 ###
|
||||
|
||||
当你把应用程序构建进容器时,实际上是在编译镜像。镜像是容器完成工作所需内容的模板(有点像"容器的容器")。镜像保存在网络上的注册表(registry)里。
|
||||
|
||||
从概念上讲,注册表类似于一个使用 Java 的人眼中的 [Maven][7] 仓库,使用 .NET 的人眼中的 [NuGet][8] 服务器。你会创建一个列出了你应用程序所需镜像的容器配置文件。然后你使用容器管理器创建一个包括了你应用程序代码以及从注册表中下载的构成资源的容器。例如,如果你的应用程序包括了一些 PHP 文件,你的容器配置文件会声明你会从注册表中获取 PHP 运行时。另外,你还要使用容器配置文件声明需要复制到容器文件系统中的 .php 文件。容器管理器会封装你应用程序的所有东西为一个独立容器。该容器将会在容器管理器的管理下运行在宿主计算机上。
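作为示意,上一段描述的"配置文件声明运行时层和要复制的文件",在 Docker 中大致对应这样一个 Dockerfile 草图(其中镜像标签和路径都是假设的示例):

```dockerfile
# 基础层:从注册表获取带 Apache 的 PHP 运行时镜像(标签为假设的示例)
FROM php:5.6-apache
# 把应用的 .php 文件复制进容器文件系统
COPY src/ /var/www/html/
# 容器对外提供 80 端口的 web 服务
EXPOSE 80
```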
|
||||
|
||||
这是一个容器创建背后概念的示意图:
|
||||
|
||||
![](https://deis.com/images/blog-images/dev_journey_4.png)
|
||||
|
||||
让我们仔细看看这个示意图。
|
||||
|
||||
(1)表示一个定义了你容器所需东西以及你容器如何构建的容器配置文件。当你在主机上运行容器时,容器管理器会读取配置文件从云上的注册表中获取你需要的容器镜像,(2)作为层将镜像添加到你的容器。
|
||||
|
||||
另外,如果组成镜像需要其它镜像,容器管理器也会获取这些镜像并把它们作为层添加进来。(3)容器管理器会将需要的文件复制到容器中。
|
||||
|
||||
如果你使用了类似 [Deis][9] 的配置服务,刚刚创建的应用程序容器会以镜像的形式存在,(4)配置服务会把它部署到你选择的云供应商上,比如 AWS 和 Rackspace 这样的云供应商。
|
||||
|
||||
### 集群中的容器 ###
|
||||
|
||||
好了。这里有一个很好的例子说明了容器比虚拟机提供了更好的配置灵活性和资源利用率。但是,这并不是全部。
|
||||
|
||||
容器真正的灵活性体现在集群中。记住,每个容器都有一个独立的 IP 地址,因此可以放到负载均衡器后面,而这会把容器的玩法提升到一个新的层次。
|
||||
|
||||
你可以在一个负载均衡容器后运行容器集群以获得更高的性能和高可用计算。这是一个例子:
|
||||
|
||||
![](https://deis.com/images/blog-images/dev_journey_5.png)
|
||||
|
||||
假如你开发了一个进行资源密集型工作(例如图片处理)的应用程序。使用类似 [Deis][9] 的容器配置技术,你可以创建一个容器镜像,其中包含你的图片处理程序及其所需的全部资源。然后,你可以在主机的负载均衡器后面部署一个或多个该镜像的容器实例。容器镜像创建好后可以先放着备用;当系统快要被压垮时,再添加更多容器实例来满足手头的工作负载。
|
||||
|
||||
这里还有更多好消息。每次向环境中添加实例时,你不需要手动配置负载均衡器来接纳你的容器镜像。你可以使用服务发现技术,让容器自己告知均衡器它的可用性。均衡器获知之后,就会把流量分发到新的节点。
|
||||
|
||||
### 全部放在一起 ###
|
||||
|
||||
容器技术完善了虚拟机不包括的部分。类似 CoreOS、RHEL Atomic、和 Ubuntu 的 Snappy 宿主操作系统,和类似 Docker 和 Rocket 的容器管理技术结合起来,使得容器变得日益流行。
|
||||
|
||||
尽管容器变得更加越来越普遍,掌握它们还是需要一段时间。但是,一旦你懂得了它们的窍门,你可以使用类似 [Deis][9] 的配置技术使容器创建和部署变得更加简单。
|
||||
|
||||
概念上理解容器和进一步实际使用它们完成工作一样重要。但我认为不实际动手把想法付诸实践,概念也难以理解。因此,我们该系列的下一阶段就是:创建一些容器。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://deis.com/blog/2015/developer-journey-linux-containers
|
||||
|
||||
作者:[Bob Reselman][a]
|
||||
译者:[ictlyh](http://www.mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://deis.com/blog
|
||||
[1]:https://en.wikipedia.org/wiki/Virtual_machine
|
||||
[2]:https://coreos.com/using-coreos/
|
||||
[3]:http://www.projectatomic.io/
|
||||
[4]:https://developer.ubuntu.com/en/snappy/
|
||||
[5]:https://www.docker.com/
|
||||
[6]:https://coreos.com/blog/rocket/
|
||||
[7]:https://en.wikipedia.org/wiki/Apache_Maven
|
||||
[8]:https://www.nuget.org/
|
||||
[9]:http://deis.com/learn
|
|
||||
在浏览器上使用Docker
|
||||
================================================================================
|
||||
Docker 越来越流行了。在容器而非虚拟机里运行一个完整操作系统,是一项非常棒的技术和理念。Docker 通过节省工作时间,已经帮助了成千上万的系统管理员和开发人员。它是一项开源技术,提供一个平台,把应用程序作为容器来打包、分发、共享和运行,而不用关心宿主机运行的是什么操作系统。它对开发语言、框架或打包系统没有限制,可以在任何时间、任何地点运行,从小型家用电脑到高端服务器都适用。不过,运行和管理 docker 容器可能有些困难和费时,因此现在有了一款基于 web 的应用程序 DockerUI,让容器的管理和运行变得非常简单。DockerUI 对那些不熟悉 Linux 命令行、但又想运行容器化程序的人很有帮助。DockerUI 是一个开源的基于 web 的应用程序,以其华丽的设计和简单易用的 docker 运行、管理界面而著称。
|
||||
|
||||
下面会介绍如何在Linux 上安装配置DockerUI。
|
||||
|
||||
### 1. 安装docker ###
|
||||
|
||||
首先,我们需要安装docker。我们得感谢docker 的开发者,让我们可以简单的在主流linux 发行版上安装docker。为了安装docker,我们得在对应的发行版上使用下面的命令。
|
||||
|
||||
#### Ubuntu/Fedora/CentOS/RHEL/Debian ####
|
||||
|
||||
docker 维护者已经写了一个非常棒的脚本,用它可以在Ubuntu 15.04/14.10/14.04, CentOS 6.x/7, Fedora 22, RHEL 7 和Debian 8.x 这几个linux 发行版上安装docker。这个脚本可以识别出我们的机器上运行的linux 的发行版本,然后将需要的源库添加到文件系统、更新本地的安装源目录,最后安装docker 和依赖库。要使用这个脚本安装docker,我们需要在root 用户或者sudo 权限下运行如下的命令,
|
||||
|
||||
# curl -sSL https://get.docker.com/ | sh
|
||||
|
||||
#### OpenSuse/SUSE Linux 企业版 ####
|
||||
|
||||
要在运行了OpenSuse 13.1/13.2 或者 SUSE Linux Enterprise Server 12 的机器上安装docker,我们只需要简单的执行zypper 命令。运行下面的命令就可以安装最新版本的docker:
|
||||
|
||||
# zypper in docker
|
||||
|
||||
#### ArchLinux ####
|
||||
|
||||
docker 存在于ArchLinux 的官方源和社区维护的AUR 库。所以在ArchLinux 上我们有两条路来安装docker。使用官方源安装,需要执行下面的pacman 命令:
|
||||
|
||||
# pacman -S docker
|
||||
|
||||
如果要从社区源 AUR 安装docker,需要执行下面的命令:
|
||||
|
||||
# yaourt -S docker-git
|
||||
|
||||
### 2. 启动 ###
|
||||
|
||||
安装好 docker 之后,我们需要启动 docker 守护进程,然后才能运行并管理 docker 容器。用下列命令确认 docker 守护进程已经安装并启动运行。
|
||||
|
||||
#### 在 SysVinit 上####
|
||||
|
||||
# service docker start
|
||||
|
||||
#### 在Systemd 上####
|
||||
|
||||
# systemctl start docker
|
||||
|
||||
### 3. 安装DockerUI ###
|
||||
|
||||
安装 DockerUI 比安装 docker 简单很多。我们只需要从 Docker 注册表拉取 dockerui 镜像,然后在容器里运行它。为此,只需执行下面的命令:
|
||||
|
||||
# docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui
|
||||
|
||||
![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png)
|
||||
|
||||
在上面的命令里,dockerui 使用的默认端口是 9000,我们需要用 `-p` 参数映射这个默认端口。`-v` 标志用来指定 docker 的 socket。如果主机启用了 SELinux,就需要加上 `--privileged` 标志。
|
||||
|
||||
执行完上面的命令后,我们用下面的命令检查 dockerui 容器是否在运行:
|
||||
|
||||
# docker ps
|
||||
|
||||
![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png)
|
||||
|
||||
### 4. 拉取docker镜像 ###
|
||||
|
||||
现在我们还不能直接使用dockerui 拉取镜像,所以我们需要在命令行下拉取docker 镜像。要完成这些我们需要执行下面的命令。
|
||||
|
||||
# docker pull ubuntu
|
||||
|
||||
![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png)
|
||||
|
||||
上面的命令会从 docker 官方注册表 [Docker Hub][1] 拉取一个标签为 ubuntu 的镜像。类似地,我们可以从 Hub 拉取需要的其他镜像。
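拉取下来的镜像可以先在命令行里验证一下,比如交互式地启动一个容器(随后它也会出现在 DockerUI 的界面里):

```shell
# 基于刚拉取的 ubuntu 镜像启动一个交互式 shell,退出后自动删除容器
docker run --rm -it ubuntu /bin/bash
```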
|
||||
|
||||
### 5. 管理 ###
|
||||
|
||||
启动 dockerui 容器之后,就可以愉快地用它来执行启动、暂停、终止、删除等 dockerui 提供的各种容器管理操作了。首先,在 web 浏览器里打开 dockerui:根据你的系统配置,在地址栏输入 http://ip-address:9000 或者 http://mydomain.com:9000。默认情况下登录不需要认证,但可以配置 web 服务器来要求登录认证。要启动一个容器,我们需要先有包含所要运行的程序的镜像。
|
||||
|
||||
#### 创建 ####
|
||||
|
||||
要创建容器,我们需要进入 Images 页面,点击想要运行的镜像的 id。然后点击 `Create` 按钮,接着会被要求输入创建容器所需的属性。完成之后,再次点击 `Create` 按钮完成最终的创建。
|
||||
|
||||
![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png)
|
||||
|
||||
#### 停止 ####
|
||||
|
||||
要停止一个容器,我们只需要进入 `Containers` 页面,选取要停止的容器,然后在 Actions 子菜单里点击 Stop 即可。
|
||||
|
||||
![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png)
|
||||
|
||||
#### 暂停与恢复 ####
|
||||
|
||||
要暂停一个容器,只需要简单的选取目标容器,然后点击Pause 就行了。恢复一个容器只需要在Actions 的子菜单里面点击Unpause 就行了。
|
||||
|
||||
#### 删除 ####
|
||||
|
||||
与上面的操作类似,杀掉或删除一个容器或镜像也很简单:只需勾选目标容器或镜像,然后点击 Kill 或 Remove 即可。
|
||||
|
||||
### 结论 ###
|
||||
|
||||
dockerui 利用 docker 远程 API 实现了一个很棒的管理 docker 容器的 web 界面。它的开发者完全用 HTML 和 JS 设计并开发了这个应用。目前这个程序还处于开发之中,还有大量工作要完成,所以我们并不推荐将它用于生产环境。它可以帮助用户轻松地管理容器和镜像,而且只需要很少的操作。如果想参与贡献 dockerui,可以访问它们的 [Github 仓库][2]。如果有问题、建议、反馈,请写在下面的评论框里,以便我们改进和更新内容。谢谢!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[oska874](https://github.com/oska874)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:https://hub.docker.com/
|
||||
[2]:https://github.com/crosbymichael/dockerui/
|
|
||||
10 Tips for 10x Application Performance
|
||||
|
||||
将程序性能提高十倍的10条建议
|
||||
================================================================================
|
||||
|
||||
提高web 应用的性能从来没有比现在更关键过。网络经济的比重一直在增长;全球经济超过5% 的价值是在因特网上产生的(数据参见下面的资料)。我们的永远在线、超级连接的世界意味着用户的期望值也处于历史上的最高点。如果你的网站不能及时的响应,或者你的app 不能无延时的工作,用户会很快的投奔到你的竞争对手那里。
|
||||
|
||||
举个例子,亚马逊十年前做过的一项研究就表明,即使在那个时候,网页加载时间每减少 100 毫秒,收入就会增加 1%。另一项最近的研究特别指出,接受调查的网站所有者中,有超过一半表示他们曾因应用程序的性能问题而流失用户。
|
||||
|
||||
网站到底需要多快呢?页面加载时间每增加 1 秒,就有 4% 的用户放弃。顶级电商站点的首次可交互时间在 1 秒到 3 秒之间,而这个区间的转化率最高。显然,web 应用的性能事关重大,而且其重要性还在不断增加。
|
||||
|
||||
想要提高性能很容易,难的是看到实际效果。为了在这段旅程中帮助你,这篇博客为你提供 10 条建议,最多可以把网站性能提升 10 倍。这是介绍如何提高应用程序性能的系列文章的第一篇,涵盖经过充分验证的优化技术,并借助一点 NGINX 的帮助。这个系列还会顺带介绍可能获得的安全性提升。
|
||||
|
||||
### Tip #1: 通过反向代理来提高性能和增加安全性 ###
|
||||
|
||||
如果你的web 应用运行在单个机器上,那么这个办法会明显的提升性能:只需要添加一个更快的机器,更好的处理器,更多的内存,更快的磁盘阵列,等等。然后新机器就可以更快的运行你的WordPress 服务器, Node.js 程序, Java 程序,以及其它程序。(如果你的程序要访问数据库服务器,那么这个办法还是很简单:添加两个更快的机器,以及在两台电脑之间使用一个更快的链路。)
|
||||
|
||||
问题是,机器速度可能并不是症结所在。web 程序运行慢,常常是因为计算机一直在各种任务之间切换:通过成千上万的连接与用户交互、从磁盘访问文件、运行应用代码,等等。应用服务器可能会出现抖动:内存耗尽、把内存数据交换到磁盘,以及让许多请求等待磁盘 I/O 之类的单个任务。
|
||||
|
||||
你可以采取一个完全不同的方案来替代升级硬件:添加一个反向代理服务器来分担部分任务。[反向代理服务器][1] 位于运行应用的机器的前端,是用来处理网络流量的。只有反向代理服务器是直接连接到互联网的;和程序的通讯都是通过一个快速的内部网络完成的。
|
||||
|
||||
使用反向代理服务器可以把应用服务器从等待用户与 web 程序交互中解放出来,让它专注于为反向代理服务器构建网页,再由后者把页面传输到互联网上。应用服务器不再需要等待客户端的响应,因此可以运行在接近最优的性能水平。
|
||||
|
||||
添加反向代理服务器还能为你的 web 服务器部署带来灵活性。比如,某一类服务器过载了,可以轻松地再添加一台同类服务器;如果某台机器宕机了,也很容易替换。
|
||||
|
||||
由于反向代理带来的这种灵活性,反向代理服务器也是许多性能加速功能的必要前提,比如:
|
||||
|
||||
- **负载均衡**(参见 [Tip #2][2])– 负载均衡器运行在反向代理服务器上,把流量均衡地分配给一组应用服务器。有了合适的负载均衡,你就可以在不改动程序的前提下随时添加应用服务器。
|
||||
- **缓存静态文件**(参见 [Tip #3][3])– 可直接请求的文件,比如图像或代码文件,可以保存在反向代理服务器上并直接发给客户端。这样既能更快地响应,又能分担应用服务器的负载,让应用整体运行得更快。
|
||||
- **网站安全** – 反向代理服务器可以提升网站的安全性,帮助快速发现和响应攻击,让应用服务器始终处于受保护状态。
|
||||
|
||||
NGINX 是一款专门设计的反向代理服务器软件,包含了上述多种功能。NGINX 采用事件驱动的方式处理请求,比传统服务器更高效。NGINX Plus 添加了更多高级的反向代理特性,比如应用[健康度检查][4]、专门的请求路由、高级缓存和相关支持。
|
||||
|
||||
![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png)
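作为示意,下面是一个最小的 NGINX 反向代理配置草图(其中的域名、IP 和端口均为假设的示例):

```nginx
# 假设的示例:NGINX 在 80 端口接收公网流量,
# 并把请求转发给内部网络里的应用服务器
server {
    listen 80;
    server_name example.com;

    location / {
        # 应用服务器地址和端口为假设值
        proxy_pass http://127.0.0.1:8080;
        # 把客户端的原始信息传给后端应用
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```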
|
||||
|
||||
### Tip #2: 添加负载平衡 ###
|
||||
|
||||
添加一个[负载均衡服务器][5]是一种相对简单的提高性能和网站安全性的方法。与其使用一台巨大的高性能 web 服务器,不如用负载均衡把流量分配到多台服务器上。即使程序写得不好,或者在扩容上有困难,负载均衡服务器也能很好地改善用户体验。
|
||||
|
||||
负载均衡服务器首先是一个反向代理服务器(参见 [Tip #1][6]):它接收来自互联网的流量,并把请求转发给其他服务器。诀窍在于负载均衡服务器背后有两台或更多应用服务器,它使用[分配算法][7]把请求分发到不同的服务器上。最简单的负载均衡方法是轮询,即把每个新请求发给列表里的下一台服务器;其他方法还包括把请求发给活动连接数最少的服务器。NGINX Plus 还具备把特定用户的会话持续分配给同一台服务器的[能力][8],即会话保持。
|
||||
|
||||
负载均衡可以很好的提高性能是因为它可以避免某个服务器过载而另一些服务器却没有流量来处理。它也可以简单的扩展服务器规模,因为你可以添加多个价格相对便宜的服务器并且保证它们被充分利用了。
|
||||
|
||||
可以进行负载均衡的协议包括 HTTP、HTTPS、SPDY、HTTP/2、WebSocket、[FastCGI][9]、SCGI、uwsgi、memcached,以及其他几种应用类型,包括基于 TCP(第 4 层)协议的程序。分析你的 web 应用,决定要使用哪些协议,以及哪些地方性能不足。
|
||||
|
||||
同一台服务器或服务器组还可以在负载均衡之外承担其他任务,比如 SSL 终结、根据客户端支持情况提供 HTTP/1.x 或 HTTP/2,以及缓存静态文件。
|
||||
|
||||
NGINX 经常被用来做负载均衡;想了解更多,可以参考我们的[介绍性博文][10]、[配置博文][11]、[电子书][12]及相关的[网络研讨会][13]和[文档][14]。我们的商业版本 [NGINX Plus][15] 支持更多经过优化的负载均衡特性,如基于服务器响应时间的负载路由,以及对 Microsoft NTLM 协议的负载均衡支持。
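上文的轮询分发可以用一个假设的 NGINX 配置草图来表示(服务器地址均为示例值):

```nginx
# 假设的示例:把请求轮询分发给三台应用服务器
upstream app_servers {
    # 不加任何算法指令时默认即为轮询;
    # 改用 least_conn; 则分发给活动连接数最少的服务器
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
    }
}
```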
|
||||
|
||||
### Tip #3: 缓存静态和动态的内容 ###
|
||||
|
||||
缓存通过加速内容的传输来提高 web 应用的性能。它可以采用以下几种策略:在需要之前预处理要传输的内容、把数据保存在更快的设备上、把数据存储在离客户端更近的位置,或者组合使用这些方法。
|
||||
|
||||
下面考虑两种不同类型数据的缓存:
|
||||
|
||||
- **静态内容缓存**。不经常变化的文件,比如图像(JPEG,PNG) 和代码(CSS,JavaScript),可以保存在边缘服务器,这样就可以快速的从内存和磁盘上提取。
|
||||
- **动态内容缓存**。很多 web 应用会为每个页面请求生成全新的 HTML。把生成的 HTML 短暂地缓存一小段时间,就能大幅减少需要生成的页面总量,同时又能保证内容足够新鲜,满足需求。
|
||||
|
||||
举个例子,如果一个页面每秒被浏览 10 次,而你把它缓存 1 秒,那么 99% 的页面请求都会直接命中缓存。如果你把静态内容单独拆分出来缓存,那么即使是新生成的页面,也可能大部分由缓存内容构成。
|
||||
|
||||
web 应用使用的缓存技术主要有三种:
|
||||
|
||||
- **缩短数据与用户的距离**。把一份内容的拷贝放的离用户更近点来减少传输时间。
|
||||
- **提高内容服务器的速度**。内容可以保存在一个更快的服务器上来减少提取文件的时间。
|
||||
- **把数据从过载的服务器上移走**。机器常常因为同时承担其他任务,导致某个任务的执行速度比基准测试要慢。把缓存放到另一台机器上,可以同时提高缓存资源和非缓存资源的性能,因为主机不再过载。
|
||||
|
||||
web 应用的缓存可以由内向外逐层实现。首先,缓存动态内容,减轻应用服务器生成内容的负担。其次,缓存静态内容(包括原本是动态内容的临时副本),进一步分担应用服务器的负载。最后,把缓存从应用服务器迁移到对用户而言更快、更近的机器上,这既减轻了应用服务器的压力,又缩短了数据的提取和传输时间。
|
||||
|
||||
改进过的缓存方案可以极大的提高应用的速度。对于大多数网页来说,静态数据,比如大图像文件,构成了超过一半的内容。如果没有缓存,那么这可能会花费几秒的时间来提取和传输这类数据,但是采用了缓存之后不到1秒就可以完成。
|
||||
|
||||
举一个缓存实际使用的例子,NGINX 和 NGINX Plus 使用两条指令来[设置缓存机制][16]:proxy_cache_path 和 proxy_cache。你可以指定缓存的位置和大小、文件在缓存中的最长保存时间以及其他参数。使用第三条(也是相当受欢迎的一条)指令 proxy_cache_use_stale,你甚至可以在本应提供新鲜内容的服务器过于繁忙或宕机时,让缓存提供旧的内容,这样客户端至少能得到一些东西,而不是一无所获。从用户的角度看,这可以显著改善你的网站或应用的可用时间。
|
||||
|
||||
NGINX plus 拥有[高级缓存特性][17],包括对[缓存清除][18]的支持和在[仪表盘][19]上显示缓存状态信息。
|
||||
|
||||
要想获得更多关于NGINX 的缓存机制的信息可以浏览NGINX Plus 管理员指南中的 [reference documentation][20] 和 [NGINX Content Caching][21] 。
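把正文提到的三条指令放在一起,一个假设的小型代理缓存配置草图如下(路径、大小和后端地址均为示例值):

```nginx
# 假设的示例:用正文提到的三条指令搭建一个小型代理缓存
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache mycache;
        # 对 200 响应缓存 1 秒,对应正文"每秒 10 次浏览"的例子
        proxy_cache_valid 200 1s;
        # 后端出错、超时或缓存正在更新时,允许返回旧内容
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://127.0.0.1:8080;
    }
}
```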
|
||||
|
||||
**注意**:缓存横跨了应用开发者、基础设施决策者和运维人员的职责边界。本文提到的这类复杂的缓存机制,从 [DevOps 的角度][23]来看很有价值:集应用开发、架构和运维职能于一身的工程师,可以借此同时满足站点在功能性、响应时间、安全性和商业结果(如完成的交易数)方面的需求。
|
||||
|
||||
### Tip #4: 压缩数据 ###
|
||||
|
||||
压缩是一种潜力巨大的性能加速手段。如今已经有一些针对照片(JPEG 和 PNG)、视频(MPEG-4)和音乐(MP3)等文件类型精心设计的高压缩率标准,每一项标准都在不同程度上有效地减小了文件体积。
|
||||
|
||||
文本数据,包括 HTML(包含纯文本和 HTML 标签)、CSS,以及 JavaScript 之类的代码,经常是不加压缩就传输的。压缩这类数据对应用程序的感知性能会产生不成比例的巨大影响,对于慢速或受限的移动网络上的客户端尤其如此。
|
||||
|
||||
这是因为文本数据往往是用户与页面交互时真正起作用的部分,而多媒体数据更多是起支持或装饰作用。智能的内容压缩可以把 HTML、Javascript、CSS 和其他文本内容的带宽需求减少 30% 甚至更多,并相应缩短页面加载时间。
|
||||
|
||||
如果你使用 SSL,压缩还能减少需要进行 SSL 加密的数据量,从而弥补加密这类数据所额外消耗的部分 CPU 时间。
|
||||
|
||||
压缩文本数据的方法有很多。举个例子,HTTP/2 中就包含一种新颖的文本压缩模式,专门用于压缩头部数据。另一个例子是可以在 NGINX 里启用 GZIP 来压缩文本。当你在服务里[预先压缩了文本数据][25]后,就可以直接使用 gzip_static 指令来发送压缩好的 .gz 文件。
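一个假设的 NGINX GZIP 配置草图大致如下(阈值和类型列表均为示例值):

```nginx
# 假设的示例:对文本类响应启用 GZIP(放在 http 配置块内)
gzip on;
gzip_types text/plain text/css application/javascript application/json;
# 太小的响应压缩得不偿失
gzip_min_length 1000;
# 若磁盘上已有预压缩的 .gz 文件,直接发送它们
# (需要编译进 ngx_http_gzip_static_module 模块)
gzip_static on;
```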
|
||||
|
||||
### Tip #5: 优化 SSL/TLS ###
|
||||
|
||||
安全套接字层([SSL][26])协议及其后继者传输层安全(TLS)协议正被越来越多的网站采用。SSL/TLS 对从源服务器发往用户的数据进行加密,提高了网站的安全性。推动这一趋势的部分原因是,Google 现在把使用 SSL/TLS 作为搜索引擎排名的正面影响因素。
|
||||
|
||||
尽管 SSL/TLS 越来越流行,但加密对速度的影响仍让很多网站望而却步。SSL/TLS 让网站变慢的原因有二:
|
||||
|
||||
1. 任何一个连接第一次连接时的握手过程都需要传递密钥。而采用HTTP/1.x 协议的浏览器在建立多个连接时会对每个连接重复上述操作。
|
||||
2. 数据在传输过程中需要不断的在服务器加密、在客户端解密。
|
||||
|
||||
为了鼓励使用 SSL/TLS,HTTP/2 和 SPDY(在[下一节][27]介绍)的设计者让浏览器在一个会话中只需要一个连接,这大大减少了上述第一个原因造成的开销。然而,如今能用来提高 SSL/TLS 传输性能的方法还不止这些。
|
||||
|
||||
web 服务器自身也有优化 SSL/TLS 传输的机制。举个例子,NGINX 使用 [OpenSSL][28],在普通硬件上就能提供接近专用硬件的传输性能。NGINX 的 [SSL 性能][29]有详细的文档说明,它把 SSL/TLS 加解密的时间和 CPU 占用都降低了很多。
|
||||
|
||||
更进一步,这篇 [blog][30] 详细说明了如何提升 SSL/TLS 性能,可以总结为以下几点:
|
||||
|
||||
- **会话缓存**。使用 [ssl_session_cache][31] 指令缓存每个新的 SSL/TLS 连接协商出的参数。
|
||||
- **会话票据或 ID**。把某次 SSL/TLS 会话的信息保存在票据或 ID 里,连接就可以平滑地复用,而不需要重新握手。
|
||||
- **OCSP 装订(stapling)**。通过缓存 SSL/TLS 证书的状态信息来缩短握手时间。
|
||||
|
||||
NGINX 和 NGINX Plus 可以用作 SSL/TLS 终结点:负责客户端流量的加解密,同时与其他服务器以明文通信。参照[这几步][32]可以设置 NGINX 和 NGINX Plus 处理 SSL/TLS 终结。对于接收 TCP 连接的服务器,NGINX Plus 还有一些[专门的配置步骤][33]。
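把上面几点落到配置上,一个假设的 NGINX SSL 优化片段大致如下(证书路径、域名和 DNS 服务器均为示例值):

```nginx
# 假设的示例:启用会话缓存与 OCSP 装订,减少重复握手的开销
server {
    listen 443 ssl;
    server_name example.com;

    # 证书路径为假设值
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # 在各工作进程间共享约 10MB 的会话缓存
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # OCSP 装订:缓存证书状态信息,缩短握手时间(需要配置 resolver)
    ssl_stapling        on;
    ssl_stapling_verify on;
    resolver            8.8.8.8;
}
```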
|
||||
|
||||
### Tip #6: 使用 HTTP/2 或 SPDY ###
|
||||
|
||||
对于已经使用了 SSL/TLS 的站点,HTTP/2 和 SPDY 很可能提升性能,因为每个连接只需要一次握手。而对于尚未使用 SSL/TLS 的站点来说,HTTP/2 和 SPDY 在响应速度上反而可能略有损失(通常会稍微降低效率)。
|
||||
|
||||
Google 在 2012 年推出 SPDY,作为一种比 HTTP/1.x 更快的协议。HTTP/2 是新近通过的 IETF 标准,它以 SPDY 为基础。SPDY 得到了广泛支持,但很快就会被 HTTP/2 取代。
|
||||
|
||||
SPDY 和HTTP/2 的关键是用单连接来替代多路连接。单个连接是被复用的,所以它可以同时携带多个请求和响应的分片。
|
||||
|
||||
通过只使用一个连接,这些协议避免了像浏览器实现 HTTP/1.x 那样建立和管理多个连接的开销。单连接对 SSL 尤其有效,因为它把 SSL/TLS 建立安全连接所需的握手时间降到了最低。
|
||||
|
||||
SPDY 协议要求使用 SSL/TLS,HTTP/2 在官方标准上并不要求,但目前所有支持 HTTP/2 的浏览器都只在启用了 SSL/TLS 的情况下才使用它。也就是说,只有当网站使用 SSL 且服务器接受 HTTP/2 流量时,支持 HTTP/2 的浏览器才会启用 HTTP/2;否则,浏览器就退回 HTTP/1.x 协议。
|
||||
|
||||
当你部署了 SPDY 或 HTTP/2 后,就不再需要传统的那些 HTTP 性能优化手段,比如域名分片、资源合并和图像精灵(spriting)。这些改变可以让你的代码和部署更简单、更易于管理。想了解 HTTP/2 带来的这些变化,可以阅读我们的[白皮书][34]。
|
||||
|
||||
![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png)
|
||||
|
||||
作为对这些协议支持情况的一个例子,NGINX 从一开始就支持 SPDY,如今[大部分使用 SPDY 的网站][35]运行的都是 NGINX。NGINX 也[很早][36]就开始支持 HTTP/2,2015 年 9 月起,开源 NGINX 和 NGINX Plus 都已[支持][37]它。
|
||||
|
||||
随着时间推移,我们 NGINX 期望大多数站点全面启用 SSL 并迁移到 HTTP/2。这会带来更高的安全性;同时,随着新的优化手段被发现和实现,更简洁的代码也会表现得更好。
|
||||
|
||||
### Tip #7: 升级软件版本 ###
|
||||
|
||||
一个提高应用性能的简单办法,是依据稳定性和性能来挑选软件栈中的组件。进一步说,高质量组件的开发者更可能持续追求性能提升和修复 bug,所以值得使用最新的稳定版本。新版本会得到开发者和用户社区更多的关注,也会利用新的编译器优化技术,包括针对新硬件的调优。
|
||||
|
||||
与旧版本相比,稳定的新版本通常兼容性更好、性能更高。坚持软件更新,还能让你在调优、bug 修复和安全性方面毫不费力地保持领先。
|
||||
|
||||
一直使用旧版软件还会阻止你利用新的能力。比如上面提到的 HTTP/2,目前要求 OpenSSL 1.0.1;从 2016 年中期开始将要求 OpenSSL 1.0.2,而后者是 2015 年 1 月才发布的。
|
||||
|
||||
NGINX 用户可以从迁移到[最新的开源 NGINX][38] 或 [NGINX Plus][39] 开始;它们都包含了诸如 socket 分片和线程池(见下文)这样的最新能力,而且都为性能做过调优。接下来,好好审视你软件栈里的其他软件,把它们升级到你能升级到的最新版本吧。
|
||||
|
||||
### Tip #8: linux 系统性能调优 ###
|
||||
|
||||
Linux 是当今大多数 web 服务器的底层操作系统,作为整个架构的基石,它蕴含着可观的性能提升空间。默认情况下,很多 Linux 系统的参数都设置得比较保守,以匹配典型的桌面负载。这意味着对 web 应用负载来说,至少需要一定程度的调优才能发挥最大性能。
|
||||
|
||||
Linux 的优化要针对 web 服务器来做。以 NGINX 为例,下面是几处加速 Linux 时值得重点考虑的改动:
|
||||
|
||||
- **缓冲队列**。如果有连接在排队等待,就应该考虑增大 net.core.somaxconn,它代表可以排队等待的最大连接数。如果这个限制太小,你会看到报错,此时可以逐渐增大这个参数,直到错误不再出现。
|
||||
- **文件描述符**。NGINX 为每个连接最多使用两个文件描述符。如果你的系统要服务大量连接,可能就需要调高 sys.fs.file_max,即系统全局的文件描述符上限,才能支撑不断增长的负载。
|
||||
- **临时端口**。用作代理时,NGINX 会为每个上游服务器创建临时端口。你可以设置 net.ipv4.ip_local_port_range 来扩大可用端口的范围。你还可以通过 net.ipv4.tcp_fin_timeout 调低非活动端口被释放复用前的超时时间,从而加快流量周转。
|
||||
|
||||
对于 NGINX,可以查阅 [NGINX 性能调优指南][40],学习如何不费吹灰之力地优化你的 Linux 系统,使其能够承受大规模的网络流量。
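把上面三条建议汇总成一个假设的 /etc/sysctl.conf 片段,大致如下(数值仅为示例,需按实际负载调整,修改后可用 `sysctl -p` 加载):

```ini
# /etc/sysctl.conf 片段(示例值)
# 增大挂起连接的缓冲队列
net.core.somaxconn = 4096
# 提高系统级文件描述符上限(正文中的 sys.fs.file_max 在多数发行版上对应 fs.file-max)
fs.file-max = 100000
# 扩大临时端口范围
net.ipv4.ip_local_port_range = 1024 65000
# 缩短端口释放复用前的等待时间
net.ipv4.tcp_fin_timeout = 15
```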
|
||||
|
||||
### Tip #9: web 服务器性能调优 ###
|
||||
|
||||
无论你使用哪种 web 服务器,都需要对它进行调优来提高性能。下面的推荐手段适用于任何 web 服务器,但其中一些设置是 NGINX 特有的。关键的优化手段包括:
|
||||
|
||||
- **访问日志**。不要把每个请求的日志都立刻写到磁盘,可以先在内存里缓存,再成批写入。对 NGINX 来说,给 *access_log* 指令添加 *buffer=size* 参数,可以让系统在缓冲区写满后才把日志写到磁盘;如果添加了 *flush=time* 参数,缓冲内容还会每隔一段时间写回磁盘。
|
||||
- **缓冲**。缓冲把响应的一部分保存在内存中,直到缓冲区填满,这可以让与客户端的通信更高效。放不进内存缓冲区的响应会被写到磁盘,从而降低性能。当 NGINX [启用][42]缓冲后,可以用 *proxy_buffer_size* 和 *proxy_buffers* 指令来管理它。
|
||||
- **客户端保活**。保活连接可以减少开销,使用 SSL/TLS 时尤其明显。对 NGINX 来说,可以把 *keepalive_requests* 从默认值 100 调高,让单个客户端在一个连接上发出更多请求;也可以增大 *keepalive_timeout*,让保活连接存活更长时间,从而使后续请求处理得更快。
|
||||
- **上游保活**。上游连接,即连接到应用服务器、数据库服务器等机器的连接,同样受益于连接保活。对上游连接来说,你可以增大 *keepalive*,即每个工作进程保持打开的空闲保活连接数。这可以提高连接复用率,减少重新打开全新连接的次数。更多关于保活连接的信息可以参见这篇 [blog][41]。
|
||||
- **限制**。限制客户端使用的资源可以同时提高性能和安全性。对 NGINX 来说,*limit_conn* 和 *limit_conn_zone* 指令限制每个来源的连接数,*limit_rate* 限制带宽。这些限制既能阻止合法用户"攫取"过多资源,也有助于防御攻击。*limit_req* 和 *limit_req_zone* 指令限制客户端的请求速率。对上游服务器,可以在 upstream 配置块里用 max_conns 参数限制到单台上游服务器的连接数,避免其过载;与之配合的 queue 指令会创建一个队列,在达到 *max_conns* 限制时,把指定数量的请求保留指定的时长。
|
||||
- **工作进程**。工作进程负责处理请求。NGINX 采用事件驱动模型,依赖操作系统机制把请求高效地分发给工作进程。建议把 *worker_processes* 设置为每个 CPU 一个。在大多数系统上,如果需要,工作连接的最大数量(默认 512)可以安全地调高;多做实验,找到最适合你系统的值。
|
||||
- **套接字分片**。通常由一个套接字监听器把新连接分发给所有工作进程。套接字分片则为每个工作进程各创建一个套接字监听器,由内核在连接到来时直接分配给某个监听器。这可以减少锁竞争,提升多核系统上的性能。要启用[套接字分片][43],需要在 listen 指令里加上 reuseport 参数。
|
||||
- **线程池**。任何计算机进程都可能被一个缓慢的操作拖住。对 web 服务器软件来说,磁盘访问会拖累许多更快的操作,比如内存中的计算或拷贝。使用线程池后,慢操作被交给一组独立的任务处理,主处理循环继续执行较快的操作;磁盘操作完成后,结果再返回给主处理循环。在 NGINX 里,read() 系统调用和 sendfile() 这两个操作被卸载到了[线程池][44]。
|
||||
|
||||
![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png)
|
||||
|
||||
**技巧**。修改任何操作系统或支撑服务的设置时,一次只改一个参数,然后测试性能。如果修改引发了问题,或者没有让系统变快,就改回去。
|
||||
|
||||
在这篇 [blog][45] 里可以看到更详细的 NGINX 性能调优方法。
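把上面几项调优手段综合起来,一个假设的 NGINX 配置草图大致如下(各数值与地址均为示例,需按自己的系统实验调整):

```nginx
worker_processes auto;          # 每个 CPU 一个工作进程

events {
    worker_connections 4096;    # 按系统能力调高,默认为 512
}

http {
    # 日志先写入 32KB 内存缓冲,至多每 5 秒刷盘一次
    access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

    keepalive_requests 1000;    # 默认 100
    keepalive_timeout  75s;

    upstream app_servers {
        server 10.0.0.1:8080;
        keepalive 32;           # 每个工作进程保留的上游空闲保活连接数
    }

    server {
        listen 80 reuseport;    # 套接字分片
        location / {
            proxy_pass http://app_servers;
            # 上游保活需要 HTTP/1.1 并清除 Connection 头
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
```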
|
||||
|
||||
### Tip #10: 监视系统活动来解决问题和瓶颈 ###
|
||||
|
||||
让系统在真实环境中保持高性能的关键,是监控你的系统在现实世界中的实际表现。你必须能够监控特定设备上以及整个 web 基础设施中的程序活动。
|
||||
|
||||
监控系统活动大体上是被动的:它告诉你发生了什么,而发现和解决问题仍然要靠你自己。
|
||||
|
||||
监控可以发现几类不同的问题,包括:
|
||||
|
||||
- 服务器宕机。
|
||||
- 服务器不稳定,不断丢失连接。
|
||||
- 服务器出现大量的缓存未命中。
|
||||
- 服务器没有发送正确的内容。
|
||||
|
||||
整体应用性能监控工具,比如 New Relic 和 Dynatrace,可以帮助你监控从远端加载网页的时间,而 NGINX 可以帮助你监控应用交付这一侧的时间。当你考虑是否需要为基础设施扩容以满足流量需求时,应用性能数据还能告诉你优化措施是否真的起了作用。
|
||||
|
||||
为了帮助开发者快速发现和解决问题,NGINX Plus 增加了[应用感知的健康度检查][46]:对重复出现的常规事件进行综合分析,并在问题出现时向你发出警告。NGINX Plus 还提供[会话排空][47]功能,可以在已有任务完成之前不再接受新连接;以及慢启动功能,让刚从故障中恢复的服务器逐步赶上负载均衡组里的其他服务器。使用得当时,健康度检查能让你在问题严重到影响用户体验之前就发现它,而会话排空和慢启动让你可以在替换服务器时不影响性能和正常运行时间。下图展示了 NGINX Plus 内建的 web 基础设施[实时活动监控][48]仪表盘,涵盖服务器组、TCP 连接和缓存等信息。
|
||||
|
||||
![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png)
|
||||
|
||||
### 总结: 看看10倍性能提升的效果 ###
|
||||
|
||||
这些性能提升方案适用于几乎所有 web 应用,而且效果都不错;实际效果则取决于你的预算、能投入的时间,以及现有实现中的不足。那么,你该如何为自己的应用实现 10 倍的性能提升呢?
|
||||
|
||||
为了帮助你了解每种优化手段的潜在收益,下面列出了前文详述的各个方法可能达到的效果要点,当然,实际效果肯定因情况而异:
|
||||
|
||||
- **反向代理服务器和负载均衡**。没有负载均衡,或者负载均衡配置很差,都会造成性能间歇性地跌至谷底。增加一个反向代理(比如 NGINX)可以避免 web 应用在内存和磁盘之间来回抖动;负载均衡可以把过载服务器上的任务转移到空闲服务器,还便于扩容。这些改变能带来巨大的性能提升:与现有部署最糟糕的时刻相比,提升 10 倍很容易做到;即使对总体性能而言,提升幅度虽小一些,也是实打实的。
|
||||
- **缓存动态和静态数据**。如果你的 web 服务器同时充当着负担过重的应用服务器,仅缓存动态数据就能在峰值时段带来 10 倍的性能提升;缓存静态文件也能带来数倍的提升。
|
||||
- **压缩数据**。使用各类媒体文件的压缩格式,比如图像的 JPEG 和 PNG、视频的 MPEG-4、音乐的 MP3,本身就能极大地提升性能。在此基础上再压缩文本数据,可以把初始页面加载速度提升约两倍。
|
||||
- **优化 SSL/TLS**。安全握手对性能影响很大,对其优化可以让初始响应速度提升约 2 倍,对文本密集的站点尤其明显。优化 SSL/TLS 下的媒体文件传输带来的性能提升则较小。
|
||||
- **使用 HTTP/2 和 SPDY**。在已经使用 SSL/TLS 的前提下,这些协议很可能为整个站点带来渐进的性能提升。
|
||||
- **对 Linux 和 web 服务器软件进行调优**。比如优化缓冲机制、使用保活连接、把时间敏感的任务卸载到独立的线程池,都能明显提高性能;例如,线程池可以把磁盘密集型任务加速[近一个数量级][49]。
|
||||
|
||||
我们希望你亲自尝试这些技术,并希望它们能切实提升你的应用性能。请在下面的评论区分享你的结果,或者带上 #NGINX 和 #webperf 标签在推特上分享你的故事。
|
||||
### 网上资源 ###
|
||||
|
||||
[Statista.com – Share of the internet economy in the gross domestic product in G-20 countries in 2016][50]
|
||||
|
||||
[Load Impact – How Bad Performance Impacts Ecommerce Sales][51]
|
||||
|
||||
[Kissmetrics – How Loading Time Affects Your Bottom Line (infographic)][52]
|
||||
|
||||
[Econsultancy – Site speed: case studies, tips and tools for improving your conversion rate][53]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io
|
||||
|
||||
作者:[Floyd Smith][a]
|
||||
译者:[Ezio](https://github.com/oska874)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.nginx.com/blog/author/floyd/
|
||||
[1]:https://www.nginx.com/resources/glossary/reverse-proxy-server
|
||||
[2]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip2
|
||||
[3]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip3
|
||||
[4]:https://www.nginx.com/products/application-health-checks/
|
||||
[5]:https://www.nginx.com/solutions/load-balancing/
|
||||
[6]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip1
|
||||
[7]:https://www.nginx.com/resources/admin-guide/load-balancer/
|
||||
[8]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
|
||||
[9]:https://www.digitalocean.com/community/tutorials/understanding-and-implementing-fastcgi-proxying-in-nginx
|
||||
[10]:https://www.nginx.com/blog/five-reasons-use-software-load-balancer/
|
||||
[11]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
|
||||
[12]:https://www.nginx.com/resources/ebook/five-reasons-choose-software-load-balancer/
|
||||
[13]:https://www.nginx.com/resources/webinars/choose-software-based-load-balancer-45-min/
|
||||
[14]:https://www.nginx.com/resources/admin-guide/load-balancer/
|
||||
[15]:https://www.nginx.com/products/
|
||||
[16]:https://www.nginx.com/blog/nginx-caching-guide/
|
||||
[17]:https://www.nginx.com/products/content-caching-nginx-plus/
|
||||
[18]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.95342300.1348073562.1438712874#proxy_cache_purge
|
||||
[19]:https://www.nginx.com/products/live-activity-monitoring/
|
||||
[20]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&&&_ga=1.61156076.1348073562.1438712874#proxy_cache
|
||||
[21]:https://www.nginx.com/resources/admin-guide/content-caching
|
||||
[22]:https://www.nginx.com/blog/network-vs-devops-how-to-manage-your-control-issues/
|
||||
[23]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
|
||||
[24]:https://www.nginx.com/resources/admin-guide/compression-and-decompression/
|
||||
[25]:http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
|
||||
[26]:https://www.digicert.com/ssl.htm
|
||||
[27]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
|
||||
[28]:http://openssl.org/
|
||||
[29]:https://www.nginx.com/blog/nginx-ssl-performance/
|
||||
[30]:https://www.nginx.com/blog/improve-seo-https-nginx/
|
||||
[31]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
|
||||
[32]:https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/
|
||||
[33]:https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/
|
||||
[34]:https://www.nginx.com/resources/datasheet/datasheet-nginx-http2-whitepaper/
|
||||
[35]:http://w3techs.com/blog/entry/25_percent_of_the_web_runs_nginx_including_46_6_percent_of_the_top_10000_sites
|
||||
[36]:https://www.nginx.com/blog/how-nginx-plans-to-support-http2/
|
||||
[37]:https://www.nginx.com/blog/nginx-plus-r7-released/
|
||||
[38]:http://nginx.org/en/download.html
|
||||
[39]:https://www.nginx.com/products/
|
||||
[40]:https://www.nginx.com/blog/tuning-nginx/
|
||||
[41]:https://www.nginx.com/blog/http-keepalives-and-web-performance/
|
||||
[42]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
|
||||
[43]:https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
|
||||
[44]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
|
||||
[45]:https://www.nginx.com/blog/tuning-nginx/
|
||||
[46]:https://www.nginx.com/products/application-health-checks/
|
||||
[47]:https://www.nginx.com/products/session-persistence/#session-draining
|
||||
[48]:https://www.nginx.com/products/live-activity-monitoring/
|
||||
[49]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
|
||||
[50]:http://www.statista.com/statistics/250703/forecast-of-internet-economy-as-percentage-of-gdp-in-g-20-countries/
|
||||
[51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/
|
||||
[52]:https://blog.kissmetrics.com/loading-time/?wide=1
|
||||
[53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/
|
How to Install Redis Server on CentOS 7.md

如何在 CentOS 7 上安装 Redis 服务
================================================================================

大家好,本文的主题是 Redis,我们将要在 CentOS 7 上安装它:从源代码编译出二进制文件并安装,创建所需的目录和配置文件,然后配置 redis 以及相关的操作系统参数,目标是让 redis 运行得更加可靠和快速。

![Runnins Redis](http://blog.linoxide.com/wp-content/uploads/2015/10/run-redis-standalone.jpg)

Redis 服务器

Redis 是一个开源的多平台数据存储软件,使用 ANSI C 编写,直接在内存中使用数据集,这使得它得以实现非常高的效率。Redis 支持多种编程语言,包括 Lua、C、Java、Python、Perl、PHP 和其他很多语言。redis 的代码量很小,只有约 3 万行,它只做很少的事,但是做得很好。尽管是在内存里工作,但是对数据持久化的需求还是存在的,而 redis 的可靠性就很高,同时也支持集群,这些都可以很好地保证你的数据安全。

### 构建 Redis ###

redis 目前没有官方 RPM 安装包,我们需要从源代码编译,而为了要编译就需要安装 Make 和 GCC。

如果没有安装过 GCC 和 Make,那么就使用 yum 安装:

    yum install gcc make

从[官网][1]下载 tar 压缩包:

    curl http://download.redis.io/releases/redis-3.0.4.tar.gz -o redis-3.0.4.tar.gz

解压缩:

    tar zxvf redis-3.0.4.tar.gz

进入解压后的目录:

    cd redis-3.0.4

使用 Make 编译源文件:

    make

### 安装 ###

进入源文件的目录:

    cd src

复制 Redis server 和 client 到 /usr/local/bin:

    cp redis-server redis-cli /usr/local/bin

最好也把 sentinel、benchmark 和 check 复制过去:

    cp redis-sentinel redis-benchmark redis-check-aof redis-check-dump /usr/local/bin

创建 redis 配置文件夹:

    mkdir /etc/redis

在 `/var/lib/redis` 下创建用于保存数据的目录:

    mkdir -p /var/lib/redis/6379

#### 系统参数 ####

为了让 redis 正常工作,需要配置一些内核参数。

设置 vm.overcommit_memory 为 1,意思是总是允许内存过量提交,以避免数据被截断,详情[见此][2]:

    sysctl -w vm.overcommit_memory=1

修改 backlog 连接数的最大值,使之超过 redis.conf 中 tcp-backlog 的值,即默认值 511。你可以在 [kernel.org][3] 找到更多有关基于 sysctl 的 ip 网络参数的信息:

    sysctl -w net.core.somaxconn=512

禁用透明大页支持,因为这会造成 redis 使用过程中产生延时和内存访问问题:

    echo never > /sys/kernel/mm/transparent_hugepage/enabled

### redis.conf ###

redis.conf 是 redis 的配置文件,然而你会看到这个文件的名字是 6379.conf,而这个数字就是 redis 监听的网络端口。如果你想要运行多个 redis 实例,推荐用这样的名字。

复制示例的 redis.conf 到 **/etc/redis/6379.conf**:

    cp redis.conf /etc/redis/6379.conf

现在编辑这个文件并且配置参数:

    vi /etc/redis/6379.conf

#### 守护程序 ####

设置 daemonize 为 no,systemd 需要它运行在前台,否则 redis 会突然挂掉:

    daemonize no

#### pidfile ####

设置 pidfile 为 /var/run/redis_6379.pid:

    pidfile /var/run/redis_6379.pid

#### port ####

如果不准备用默认端口,可以修改:

    port 6379

#### loglevel ####

设置日志级别:

    loglevel notice

#### logfile ####

修改日志文件路径:

    logfile /var/log/redis_6379.log

#### dir ####

设置目录为 /var/lib/redis/6379:

    dir /var/lib/redis/6379

### 安全 ###

下面有几个操作可以提高安全性。

#### Unix sockets ####

在很多情况下,客户端程序和服务器端程序运行在同一个机器上,所以不需要监听网络上的 socket。如果这和你的使用情况类似,你就可以使用 unix socket 替代网络 socket,为此你需要配置 **port** 为 0,然后配置下面的选项来启用 unix socket。

设置 unix socket 的套接字文件:

    unixsocket /tmp/redis.sock

限制 socket 文件的权限:

    unixsocketperm 700

现在为了让 redis-cli 能够访问,应该使用 -s 参数指向该 socket 文件:

    redis-cli -s /tmp/redis.sock

#### 密码 ####

你可能需要远程访问,如果是,那么你应该设置密码,这样每次操作之前都要求输入密码:

    requirepass "bTFBx1NYYWRMTUEyNHhsCg"

#### 重命名命令 ####

想象一下下面这条指令的输出。是的,它会输出服务器的配置,所以你应该在任何可能的情况下拒绝这种信息泄露:

    CONFIG GET *

为了限制甚至禁止这条或者其他指令,可以使用 **rename-command** 命令。你必须提供一个命令名和替代的名字。要禁止一条命令,需要把替代的名字设置为空字符串;把命令重命名为难以猜测的名字也会比较安全:

    rename-command FLUSHDB "FLUSHDB_MY_SALT_G0ES_HERE09u09u"
    rename-command FLUSHALL ""
    rename-command CONFIG "CONFIG_MY_S4LT_GO3S_HERE09u09u"

![Access Redis through unix with password and command changes](http://blog.linoxide.com/wp-content/uploads/2015/10/redis-security-test.jpg)

通过密码和修改过的命令来访问 unix socket。

#### 快照 ####

默认情况下,redis 会周期性地将数据集转储到我们设置的目录下的 **dump.rdb** 文件。你可以使用 save 命令配置转储的频率,它的第一个参数是以秒为单位的时间窗口,第二个参数是在这段时间内被修改的键的数量。

每 900 秒(15 分钟)并且最少修改过 1 次键:

    save 900 1

每 300 秒(5 分钟)并且最少修改过 10 次键:

    save 300 10

每 60 秒(1 分钟)并且最少修改过 10000 次键:

    save 60 10000

文件 **/var/lib/redis/6379/dump.rdb** 包含了上次保存以来内存里数据集的转储数据。因为它会先创建临时文件,写入完成后再替换掉之前的转储文件,所以不存在数据损坏的问题,你可以放心地直接复制这个文件。
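上面三条 save 规则是"或"的关系:只要任何一条规则的时间窗口内修改次数达到了阈值,就会触发一次快照。下面用一段只依赖 POSIX shell 的小示例演示这个判断逻辑(仅作示意,elapsed 和 changes 的数值是假设的,redis 内部会自动完成这个判断):

```shell
#!/bin/sh
# 示意:redis 如何根据 save 规则决定是否触发快照。
# elapsed 和 changes 是假设的当前状态,并非来自真实的 redis。
elapsed=120    # 距离上次快照已过去的秒数
changes=15000  # 距离上次快照被修改过的键数量
fired=no
for rule in "900 1" "300 10" "60 10000"; do
    set -- $rule            # $1=秒数阈值 $2=修改次数阈值
    if [ "$elapsed" -ge "$1" ] && [ "$changes" -ge "$2" ]; then
        fired=yes           # 任意一条规则满足即触发
    fi
done
echo "$fired"
```

运行后输出 `yes`,因为 "save 60 10000" 这条规则已经满足(120 秒 ≥ 60 秒,且 15000 次 ≥ 10000 次)。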
### 开机时启动 ###

你可以使用 systemd 将 redis 添加到系统开机启动列表。

复制示例的 init_script 文件到 /etc/init.d,注意脚本名所代表的端口号:

    cp utils/redis_init_script /etc/init.d/redis_6379

现在我们要使用 systemd,所以在 **/etc/systemd/system** 下创建一个名为 redis_6379.service 的单元文件:

    vi /etc/systemd/system/redis_6379.service

填写下面的内容,详情可见 systemd.service:

    [Unit]
    Description=Redis on port 6379

    [Service]
    Type=forking
    ExecStart=/etc/init.d/redis_6379 start
    ExecStop=/etc/init.d/redis_6379 stop

    [Install]
    WantedBy=multi-user.target

现在把我们之前修改过的内存过量提交和 backlog 最大值的选项添加到 **/etc/sysctl.conf**,使它们在重启后仍然生效:

    vm.overcommit_memory = 1

    net.core.somaxconn=512

对于透明大页支持,并没有直接的 sysctl 命令可以控制,所以需要将下面的命令放到 /etc/rc.local 的结尾:

    echo never > /sys/kernel/mm/transparent_hugepage/enabled

### 总结 ###

这些已经足够启动 redis 了,通过设置这些选项,你就可以在很多简单的场景下部署 redis 服务。然而在 redis.conf 里还有很多为复杂环境准备的选项。在一些情况下,你可以使用 [replication][4] 和 [Sentinel][5] 来提高可用性,或者[将数据分散][6]在多个服务器上,创建服务器集群。谢谢阅读。

--------------------------------------------------------------------------------

via: http://linoxide.com/storage/install-redis-server-centos-7/

作者:[Carlos Alberto][a]
译者:[ezio](https://github.com/oska874)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/carlosal/
[1]:http://redis.io/download
[2]:https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
[3]:https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
[4]:http://redis.io/topics/replication
[5]:http://redis.io/topics/sentinel
[6]:http://redis.io/topics/partitioning
如何在 Ubuntu 15.04 上安装带 JSON 支持的 SQLite 3.9.1
================================================================================

欢迎阅读我们关于 SQLite 的文章。SQLite 是当今世界上使用最广泛的 SQL 数据库引擎,它基本不需要配置,不需要安装或者管理就可以运行。SQLite 是一个开放领域的软件,是一个关系数据库管理系统(RDBMS),用来在大表中存储用户定义的记录。对于数据存储和管理来说,数据库引擎要处理复杂的查询命令,这些命令可能会从多个表获取数据,然后生成报告和数据总结。

SQLite 是一个非常小的、轻量级的库,不需要独立的服务进程或系统。它可以运行在 UNIX、Linux、Mac OS-X、Android、iOS 和 Windows 上,已经被大量的软件程序使用,如 Opera、Ruby On Rails、Adobe System、Mozilla Firefox、Google Chrome 和 Skype。

### 1) 基本需求: ###

在几乎全部支持 SQLite 的平台上安装 SQLite,基本上没有复杂的要求。

所以让我们在 CLI 或者 Secure Shell 上使用 sudo 或者 root 权限登录 Ubuntu 服务器。然后更新系统,这样就可以让操作系统的软件更新到新版本。

在 Ubuntu 上,下面的命令用来更新系统的软件源:

    # apt-get update

如果你要在新安装的 Ubuntu 上部署 SQLite,那么你需要安装一些基础的系统管理工具,如 wget、make、unzip、gcc。

要安装它们,可以使用下面的命令,如果系统提示的话就输入 Y:

    # apt-get install wget make gcc

### 2) 下载 SQLite ###

要下载 SQLite,最好是去 [SQLite 官网][1]下载,如下所示:

![SQLite download](http://blog.linoxide.com/wp-content/uploads/2015/10/Selection_014.png)

你也可以直接复制资源的链接,然后在命令行使用 wget 下载,如下所示:

    # wget https://www.sqlite.org/2015/sqlite-autoconf-3090100.tar.gz

![wget SQLite](http://blog.linoxide.com/wp-content/uploads/2015/10/23.png)

下载完成之后,使用下面的命令解压缩安装包,并切换工作目录到解压缩后的 SQLite 目录:

    # tar -zxvf sqlite-autoconf-3090100.tar.gz

### 3) 安装 SQLite ###

现在我们要开始安装、配置刚才下载的 SQLite。在 Ubuntu 上编译、安装 SQLite,首先运行配置脚本:

    root@ubuntu-15:~/sqlite-autoconf-3090100# ./configure --prefix=/usr/local

![SQLite Installation](http://blog.linoxide.com/wp-content/uploads/2015/10/35.png)

配置好上面的 prefix 之后,运行下面的命令编译安装包:

    root@ubuntu-15:~/sqlite-autoconf-3090100# make
    source='sqlite3.c' object='sqlite3.lo' libtool=yes \
    DEPDIR=.deps depmode=none /bin/bash ./depcomp \
    /bin/bash ./libtool --tag=CC --mode=compile gcc -DPACKAGE_NAME=\"sqlite\" -DPACKAGE_TARNAME=\"sqlite\" -DPACKAGE_VERSION=\"3.9.1\" -DPACKAGE_STRING=\"sqlite\ 3.9.1\" -DPACKAGE_BUGREPORT=\"http://www.sqlite.org\" -DPACKAGE_URL=\"\" -DPACKAGE=\"sqlite\" -DVERSION=\"3.9.1\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -DHAVE_FDATASYNC=1 -DHAVE_USLEEP=1 -DHAVE_LOCALTIME_R=1 -DHAVE_GMTIME_R=1 -DHAVE_DECL_STRERROR_R=1 -DHAVE_STRERROR_R=1 -DHAVE_POSIX_FALLOCATE=1 -I. -D_REENTRANT=1 -DSQLITE_THREADSAFE=1 -DSQLITE_ENABLE_FTS3 -DSQLITE_ENABLE_RTREE -g -O2 -c -o sqlite3.lo sqlite3.c

运行完上面的命令之后,要在 Ubuntu 上完成 SQLite 的安装,还得运行下面的命令:

    # make install

![SQLite Make Install](http://blog.linoxide.com/wp-content/uploads/2015/10/44.png)

### 4) 测试 SQLite 安装 ###

要确认 SQLite 3.9 安装成功,运行下面的命令:

    # sqlite3

SQLite 的版本会显示在命令行:

![Testing SQLite Installation](http://blog.linoxide.com/wp-content/uploads/2015/10/53.png)

### 5) 使用 SQLite ###

SQLite 很容易上手。要获得详细的使用方法,在 SQLite 控制台里输入下面的命令:

    sqlite> .help

这里会显示全部可用的命令和详细说明:

![SQLite Help](http://blog.linoxide.com/wp-content/uploads/2015/10/62.png)

现在进入最后一部分,使用几条 SQLite 命令创建数据库。

要创建一个新的数据库,需要运行下面的命令:

    # sqlite3 test.db

然后创建一张新表:

    sqlite> create table memos(text, priority INTEGER);

接着使用下面的命令插入数据:

    sqlite> insert into memos values('deliver project description', 15);
    sqlite> insert into memos values('writing new artilces', 100);

要查看插入的数据,可以运行下面的命令:

    sqlite> select * from memos;
    deliver project description|15
    writing new artilces|100

或者使用下面的命令离开:

    sqlite> .exit

![Using SQLite3](http://blog.linoxide.com/wp-content/uploads/2015/10/73.png)
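标题里提到的 JSON 支持(JSON1 扩展)也可以顺便验证一下。下面是一个小示例(假设刚编译好的 sqlite3 已在 PATH 中,JSON 字符串的内容是随意举例的),用 json_extract 从一段 JSON 文本中取出一个字段:

```shell
# 通过内存数据库快速验证 JSON1 扩展可用(SQLite >= 3.9.0):
sqlite3 :memory: "SELECT json_extract('{\"name\":\"memo\",\"priority\":15}', '$.priority');"
# 输出:15
```

如果这条命令报 "no such function: json_extract",说明当前的 sqlite3 并未启用 JSON1 扩展。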
### 结论 ###

通过本文你可以了解如何安装支持 JSON1 的最新版的 SQLite(SQLite 从 3.9.0 开始支持 JSON1)。这是一个非常棒的库,可以内嵌到应用程序中,利用它可以很有效而且很轻量地管理资源。我们希望你能觉得本文有所帮助,请随时向我们反馈你遇到的问题和困难。

--------------------------------------------------------------------------------

via: http://linoxide.com/ubuntu-how-to/install-sqlite-json-ubuntu-15-04/

作者:[Kashif Siddique][a]
译者:[译者ID](https://github.com/oska874)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/kashifs/
[1]:https://www.sqlite.org/download.html
如何监控 linux 命令行的命令执行进度
================================================================================

![](https://www.maketecheasier.com/assets/uploads/2015/11/pv-featured-1.jpg)

如果你是一个 linux 系统管理员,那么毫无疑问你必须花费大量的工作时间在命令行上:安装和卸载软件,监视系统状态,复制、移动、删除文件,查错,等等。很多时候都是你输入一个命令,然后等待很长时间直到执行完成。也有的时候你执行的命令挂起了,而你只能猜测命令执行的实际情况。

通常 linux 命令不提供和进度相关的信息,而这些信息特别重要,尤其当你只有有限的时间时。然而这并不意味着你是无助的。现在有一个命令 pv,它会显示当前在命令行执行的命令的进度信息。在本文我们会讨论它,并用几个简单的例子说明其特性。

### PV 命令 ###

[PV][1] 由 Andrew Wood 开发,是 Pipe Viewer 的简称,意思是通过管道显示数据处理进度的信息。这些信息包括已经耗费的时间,完成的百分比(通过进度条显示),当前的速度,要传输的全部数据,以及估计的剩余时间。

>"要使用 PV,需要配合合适的选项,把它放置在两个进程之间的管道上。命令的标准输入会被原样复制到标准输出,而进度信息会被输出到标准错误输出。"

以上引自该命令的手册页。

### 下载和安装 ###

Debian 系的操作系统,如 Ubuntu,可以简单地使用下面的命令安装 PV:

    sudo apt-get install pv

如果你使用了其他发行版本,你可以使用各自的包管理软件在你的系统上安装 PV。一旦 PV 安装好了,你就可以在各种场合使用它(详见下文)。需要注意的是,下面所有例子都可以正常地和 pv 1.2.0 配合工作。

### 特性和用法 ###

我们(在 linux 上使用命令行的用户)的大多数使用场景都会用到的操作,是从一个 USB 驱动器拷贝电影文件到你的电脑。如果你使用 cp 来完成上面的任务,在整个复制过程结束或者出错之前,你对进度一无所知。

然而 pv 命令在这种情景下很有帮助。比如:

    pv /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv

输出如下:

![pv-copy](https://www.maketecheasier.com/assets/uploads/2015/10/pv-copy.png)

所以,如你所见,这个命令显示了很多和操作有关的有用信息,包括已经传输了的数据量,花费的时间,传输速率,进度条,进度的百分比,以及剩余的时间。

`pv` 命令提供了多种显示选项开关。比如,你可以使用 `-p` 来显示百分比,`-t` 来显示时间,`-r` 表示传输速率,`-e` 表示预计的剩余时间(eta)。好消息是你不必记住某一个选项,因为默认这几个选项都是启用的。但是,如果你只需要其中某一类信息,那么可以通过控制这几个选项来完成任务。

这里还有一个 `-n` 选项,它可以让 pv 命令显示整数百分比,在标准错误输出上每行显示一个数字,用来替代通常的视觉进度条。下面是一个例子:

    pv -n /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv

![pv-numeric](https://www.maketecheasier.com/assets/uploads/2015/10/pv-numeric.png)

这个特殊的选项非常适合某些情境下的需求,如你想把输出用管道传给 [dialog][2] 命令。
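`-n` 模式输出的整数百分比,本质上就是"已复制字节数 × 100 / 总字节数"。下面用一段只依赖 coreutils 的纯 shell 小示例模拟这个计算过程(仅作示意,文件大小和分块大小均为随意假设,真实使用中 pv 会自动完成这些统计):

```shell
#!/bin/sh
# 模拟 pv -n:按块复制文件,每复制一块就打印一个整数百分比。
src=$(mktemp); dst=$(mktemp)
head -c 100000 /dev/zero > "$src"   # 100 kB 的测试数据
total=$(wc -c < "$src")
chunk=20000
copied=0
while [ "$copied" -lt "$total" ]; do
    # 逐块复制:skip/seek 以块为单位,conv=notrunc 保留已写入的部分
    dd if="$src" of="$dst" bs="$chunk" skip=$((copied / chunk)) \
       seek=$((copied / chunk)) count=1 conv=notrunc 2>/dev/null
    copied=$((copied + chunk))
    [ "$copied" -le "$total" ] || copied=$total
    echo $((copied * 100 / total))  # 依次打印 20 40 60 80 100
done
rm -f "$src" "$dst"
```

真实场景下不需要自己写这种循环,直接 `pv -n 源文件 > 目标文件` 即可得到同样的逐行百分比输出。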
接下来还有一个命令行选项 `-L`,可以让你限制 pv 命令的传输速率。举个例子,使用 -L 选项来限制传输速率为 2MB/s:

    pv -L 2m /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv

![pv-ratelimit](https://www.maketecheasier.com/assets/uploads/2015/10/pv-ratelimit.png)

如上图所见,数据传输速度按照我们的要求被限制了。

另一个 pv 可以帮上忙的情景是压缩文件。这里有一个例子,向你展示如何将它与压缩软件 Gzip 配合工作:

    pv /media/himanshu/1AC2-A8E3/fnf.mkv | gzip > ./Desktop/fnf.log.gz

![pv-gzip](https://www.maketecheasier.com/assets/uploads/2015/10/pv-gzip.png)

### 结论 ###

如上所述,pv 是一个非常有用的小工具,它可以在命令没有按照预期执行的情况下帮你节省宝贵的时间。而且这些显示的信息还可以用在 shell 脚本里。我强烈地推荐你使用这个命令,它值得你一试。

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/monitor-progress-linux-command-line-operation/

作者:[Himanshu Arora][a]
译者:[ezio](https://github.com/oska874)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com/author/himanshu/
[1]:http://linux.die.net/man/1/pv
[2]:http://linux.die.net/man/1/dialog
Linux 有问必答 - 如何在 Linux 上安装 Node.js
================================================================================

> **问题**: 如何在你的 Linux 发行版上安装 Node.js?

[Node.js][1] 是一个构建在谷歌 V8 JavaScript 引擎之上的服务器端软件平台。在构建高性能的服务器端应用程序上,Node.js 已是 JavaScript 的首选方案。由 Node.js 库和应用程序构成的 [庞大生态系统][2],让使用它开发服务器后台变得十分流行。Node.js 自带一个被称为 npm 的命令行工具,让你可以轻松地安装 Node.js 的库和应用程序,进行版本控制,并通过 npm 的在线仓库来管理它们的依赖关系。

在本教程中,我将介绍 **如何在主流 Linux 发行版上安装 Node.js,包括 Debian、Ubuntu、Fedora 和 CentOS** 。

在一些发行版上(如 Fedora 或 Ubuntu),Node.js 有预构建的程序包,而在其他发行版上你需要从源码安装。由于 Node.js 发展比较快,建议从源码安装最新版,而不是安装一个过时的预构建的程序包。最新的 Node.js 自带 npm(Node.js 的包管理器),让你可以轻松地安装 Node.js 的外部模块。

### 在 Debian 上安装 Node.js ###

从 Debian 8 (Jessie)开始,Node.js 已被纳入官方软件仓库。因此,你可以使用如下方式安装它:

    $ sudo apt-get install npm

在 Debian 7 (Wheezy) 以及更早的版本中,你需要使用下面的方式从源码安装:

    $ sudo apt-get install python g++ make
    $ wget http://nodejs.org/dist/node-latest.tar.gz
    $ tar xvfvz node-latest.tar.gz
    $ cd node-v0.10.21 (将版本号替换为你实际下载的版本)
    $ ./configure
    $ make
    $ sudo make install

### 在 Ubuntu 或 Linux Mint 中安装 Node.js ###

Ubuntu(13.04 及更高版本)中已经包含了 Node.js。因此,安装非常简单。以下方式将安装 Node.js 和 npm:

    $ sudo apt-get install npm
    $ sudo ln -s /usr/bin/nodejs /usr/bin/node

Ubuntu 中的 Node.js 可能版本比较老,你可以从 [其 PPA][3] 中安装最新的版本:

    $ sudo apt-get install python-software-properties python g++ make
    $ sudo add-apt-repository -y ppa:chris-lea/node.js
    $ sudo apt-get update
    $ sudo apt-get install npm

### 在 Fedora 中安装 Node.js ###

Node.js 被包含在 Fedora 的 base 仓库中。因此,你可以在 Fedora 中用 yum 安装 Node.js:

    $ sudo yum install npm

如果你想安装 Node.js 的最新版本,可以按照以下步骤使用源码来安装:

    $ sudo yum groupinstall 'Development Tools'
    $ wget http://nodejs.org/dist/node-latest.tar.gz
    $ tar xvfvz node-latest.tar.gz
    $ cd node-v0.10.21 (将版本号替换为你实际下载的版本)
    $ ./configure
    $ make
    $ sudo make install

### 在 CentOS 或 RHEL 中安装 Node.js ###

在 CentOS 使用 yum 包管理器来安装 Node.js,首先启用 EPEL 软件库,然后运行:

    $ sudo yum install npm

如果你想在 CentOS 中安装最新版的 Node.js,其安装步骤和在 Fedora 中的相同。

### 在 Arch Linux 上安装 Node.js ###

Node.js 在 Arch Linux 的社区仓库中可以找到。所以安装很简单,只要运行:

    $ sudo pacman -S nodejs npm

### 检查 Node.js 的版本 ###

一旦你安装了 Node.js,你可以使用如下所示的方法检查 Node.js 的版本:

    $ node --version
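安装完成后,除了查看版本号,还可以跑一小段代码做个冒烟测试(假设 `node` 已经在 PATH 中):

```shell
# 用 -e 直接执行一行 JavaScript,确认运行时工作正常:
node -e 'console.log("1 + 1 =", 1 + 1)'
# 输出:1 + 1 = 2
```

如果这条命令正常打印结果,说明 Node.js 运行时已经可用,接下来就可以用 npm 安装需要的模块了。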
--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/install-node-js-linux.html

作者:[Dan Nanni][a]
译者:[strugglingyou](https://github.com/strugglingyou)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://ask.xmodulo.com/author/nanni
[1]:http://nodejs.org/
[2]:https://www.npmjs.com/
[3]:https://launchpad.net/~chris-lea/+archive/node.js
在 Ubuntu 15.10 上安装 PostgreSQL 9.4 和 phpPgAdmin
================================================================================

![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2014/05/postgresql.png)

### 简介 ###

[PostgreSQL][1] 是一款强大的、开源的对象关系型数据库系统。它支持所有的主流操作系统,包括 Linux、Unix(AIX、BSD、HP-UX、SGI IRIX、Mac OS、Solaris、Tru64)以及 Windows 操作系统。

下面是 **Ubuntu** 发起者 **Mark Shuttleworth** 对 PostgreSQL 的一段评价:

> PostgreSQL 真的是一款很好的数据库系统。刚开始我们使用它的时候,并不确定它能否胜任工作。但我错的太离谱了。它很强壮、快速,在各个方面都很专业。
>
> — Mark Shuttleworth.

在这篇简短的指南中,让我们来看看如何在 Ubuntu 15.10 服务器中安装 PostgreSQL 9.4。

### 安装 PostgreSQL ###

默认仓库中就有可用的 PostgreSQL。在终端中输入下面的命令安装它:

    sudo apt-get install postgresql postgresql-contrib

如果你需要其它的版本,按照下面那样先添加 PostgreSQL 仓库,然后再安装。

**PostgreSQL apt 仓库** 支持 amd64 和 i386 架构的 Ubuntu 长期支持版(10.04、12.04 和 14.04),以及非长期支持版(14.10)。对于其它非长期支持版,该软件包虽然没有获得完全支持,但使用与之最接近的 LTS 版本的仓库通常也能正常工作。

#### Ubuntu 14.10 系统: ####

新建文件 **/etc/apt/sources.list.d/pgdg.list**:

    sudo vi /etc/apt/sources.list.d/pgdg.list

用下面一行添加仓库:

    deb http://apt.postgresql.org/pub/repos/apt/ utopic-pgdg main

**注意**: 上面的仓库只能用于 Ubuntu 14.10,还没有升级到 Ubuntu 15.04 和 15.10。

**Ubuntu 14.04**,添加下面一行:

    deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main

**Ubuntu 12.04**,添加下面一行:

    deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main

导入仓库签名密钥:

    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

更新软件包列表:

    sudo apt-get update

然后安装需要的版本:

    sudo apt-get install postgresql-9.4

### 访问 PostgreSQL 命令窗口 ###

默认的数据库名称和数据库用户名称都是 "**postgres**"。切换到 postgres 用户进行 postgresql 相关的操作:

    sudo -u postgres psql postgres

#### 示例输出: ####

    psql (9.4.5)
    Type "help" for help.
    postgres=#

要退出 postgresql 窗口,在 **psql** 窗口输入 **\q** 退出到终端。

### 设置 "postgres" 用户密码 ###

登录到 postgresql 窗口:

    sudo -u postgres psql postgres

用下面的命令为用户 postgres 设置密码:

    postgres=# \password postgres
    Enter new password:
    Enter it again:
    postgres=# \q

要安装 PostgreSQL Adminpack,在 postgresql 窗口输入下面的命令:

    sudo -u postgres psql postgres

----------

    postgres=# CREATE EXTENSION adminpack;
    CREATE EXTENSION

在 **psql** 窗口输入 **\q** 从 postgresql 窗口退回到终端。

### 创建新用户和数据库 ###

例如,让我们创建一个新的用户,名为 "**senthil**",密码是 "**ubuntu**",以及名为 "**mydb**" 的数据库:

    sudo -u postgres createuser -D -A -P senthil

----------

    sudo -u postgres createdb -O senthil mydb

### 删除用户和数据库 ###

要删除数据库,首先切换到 postgres 用户:

    sudo -u postgres psql postgres

输入命令:

    postgres=# DROP DATABASE <database-name>;

要删除一个用户,输入下面的命令:

    postgres=# DROP USER <user-name>;

### 配置 PostgreSQL-MD5 验证 ###

**MD5 验证** 要求用户提供一个 MD5 加密的密码用于认证。首先编辑 **/etc/postgresql/9.4/main/pg_hba.conf** 文件:

    sudo vi /etc/postgresql/9.4/main/pg_hba.conf

按照下面所示添加或修改行:

    [...]
    # TYPE DATABASE USER ADDRESS METHOD
    # "local" is for Unix domain socket connections only
    local all all md5
    # IPv4 local connections:
    host all all 127.0.0.1/32 md5
    host all all 192.168.1.0/24 md5
    # IPv6 local connections:
    host all all ::1/128 md5
    [...]

其中,192.168.1.0/24 是我的本地网络 IP 地址,用你自己的地址替换。

重启 postgresql 服务以使更改生效:

    sudo systemctl restart postgresql

或者,

    sudo service postgresql restart
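顺带一提,md5 验证方式下,PostgreSQL 保存的口令形式是字符串 "md5" 拼接 md5(密码 || 用户名)。下面用 coreutils 的 md5sum 演示这个散列的构造(仅作示意,用户名和密码沿用上文的示例值):

```shell
#!/bin/sh
# 计算 PostgreSQL md5 方式存储的口令散列:
# 形式为 "md5" + md5(password 拼接 username) 的 32 位十六进制值。
user=senthil
pass=ubuntu
hash="md5$(printf '%s%s' "$pass" "$user" | md5sum | cut -d' ' -f1)"
echo "$hash"
```

输出总是以 "md5" 开头、共 35 个字符。这也解释了为什么 md5 方式能避免在网络上传输明文口令,但同一用户名加同一密码总会得到同一散列。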
### 配置 PostgreSQL 的 TCP/IP 访问 ###

默认情况下,没有启用 TCP/IP 连接,因此其它计算机的用户不能访问 postgresql。为了允许其它计算机的用户访问,编辑文件 **/etc/postgresql/9.4/main/postgresql.conf:**

    sudo vi /etc/postgresql/9.4/main/postgresql.conf

找到下面几行:

    [...]
    #listen_addresses = 'localhost'
    [...]
    #port = 5432
    [...]

取消这两行的注释,然后设置为你 postgresql 服务器的 IP 地址,或者设置为 '*' 监听所有客户端。允许所有远程用户访问 PostgreSQL 时,你应该谨慎行事:

    [...]
    listen_addresses = '*'
    [...]
    port = 5432
    [...]

重启 postgresql 服务保存更改:

    sudo systemctl restart postgresql

或者,

    sudo service postgresql restart

### 用 phpPgAdmin 管理 PostgreSQL ###

[**phpPgAdmin**][2] 是基于 web 的、用 PHP 写的 PostgreSQL 管理工具。

默认仓库中有可用的 phpPgAdmin。用下面的命令安装 phpPgAdmin:

    sudo apt-get install phppgadmin

默认情况下,你可以在本地系统的 web 浏览器用 **http://localhost/phppgadmin** 访问 phppgadmin。

要访问远程系统,在 Ubuntu 15.10 上做如下操作:

编辑文件 **/etc/apache2/conf-available/phppgadmin.conf**:

    sudo vi /etc/apache2/conf-available/phppgadmin.conf

找到 **Require local** 的一行,在这行前面添加 **#** 注释掉它:

    #Require local

添加下面的一行:

    allow from all

保存并退出文件。

然后重启 apache 服务:

    sudo systemctl restart apache2

对于 Ubuntu 14.10 及之前版本:

编辑 **/etc/apache2/conf.d/phppgadmin**:

    sudo nano /etc/apache2/conf.d/phppgadmin

注释掉下面一行:

    [...]
    #allow from 127.0.0.0/255.0.0.0 ::1/128

取消下面一行的注释,使所有系统都可以访问 phppgadmin:

    allow from all

编辑 **/etc/apache2/apache2.conf**:

    sudo vi /etc/apache2/apache2.conf

添加下面一行:

    Include /etc/apache2/conf.d/phppgadmin

然后重启 apache 服务:

    sudo service apache2 restart

### 配置 phpPgAdmin ###

编辑文件 **/etc/phppgadmin/config.inc.php**,做以下更改。下面大部分选项都带有解释,认真阅读以便了解为什么要更改这些值:

    sudo nano /etc/phppgadmin/config.inc.php

找到下面一行:

    $conf['servers'][0]['host'] = '';

按照下面这样更改:

    $conf['servers'][0]['host'] = 'localhost';

找到这一行:

    $conf['extra_login_security'] = true;

更改值为 **false**:

    $conf['extra_login_security'] = false;

找到这一行:

    $conf['owned_only'] = false;

更改值为 **true**:

    $conf['owned_only'] = true;

保存并关闭文件。重启 postgresql 服务和 Apache 服务:

    sudo systemctl restart postgresql

----------

    sudo systemctl restart apache2

或者,

    sudo service postgresql restart
    sudo service apache2 restart

现在打开你的浏览器并访问 **http://ip-address/phppgadmin**。你会看到以下截图:

![phpPgAdmin – Google Chrome_001](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_001.jpg)

用你之前创建的用户登录。我之前已经创建了一个名为 "**senthil**" 的用户,密码是 "**ubuntu**",因此我以 "senthil" 用户登录。

![phpPgAdmin – Google Chrome_002](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_002.jpg)

然后你就可以访问 phppgadmin 面板了:

![phpPgAdmin – Google Chrome_003](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_003.jpg)

用 postgres 用户登录:

![phpPgAdmin – Google Chrome_004](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_004.jpg)

就是这样。现在你可以用 phppgadmin 可视化地创建、删除或者更改数据库了。

加油!

--------------------------------------------------------------------------------

via: http://www.unixmen.com/install-postgresql-9-4-and-phppgadmin-on-ubuntu-15-10/

作者:[SK][a]
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.twitter.com/ostechnix
[1]:http://www.postgresql.org/
[2]:http://phppgadmin.sourceforge.net/doku.php
@ -0,0 +1,171 @@
|
||||
Linux 101:最有效地使用 Systemd
|
||||
================================================================================
|
||||
干嘛要这么做?
|
||||
|
||||
- 理解现代 Linux 发行版中的显著变化;
|
||||
- 看看 Systemd 是如何取代 SysVinit 的;
|
||||
- 处理好*单元* (unit)和新的 journal 日志。
|
||||
|
||||
吐槽邮件,人身攻击,死亡威胁——Lennart Poettering,Systemd 的作者,对收到这些东西早就习以为常了。这位 Red Hat 公司的员工最近在 Google+ 上怒斥 FOSS 社区([http://tinyurl.com/poorlennart][1])的本质,悲痛且失望地表示:“那真是个令人恶心的地方”。他着重指出 Linus Torvalds 在邮件列表上言辞刻薄的帖子,并谴责这位内核的领导者为在线讨论定下基调,并使得人身攻击及贬抑之辞成为常态。
|
||||
|
||||
但为何 Poettering 会遭受如此多的憎恨?为何就这么个搞搞开源软件的人要忍受这等愤怒?答案就在于他的软件的重要性。如今大多数发行版中,Systemd 是 Linux 内核发起的第一个程序,并且它还扮演多种角色。它会启动系统服务,处理用户登陆,每隔特定的时间执行一些任务,还有很多很多。它在不断地成长,并逐渐成为 Linux 的某种“基础系统”——提供系统启动和发行版维护所需的所有工具。
|
||||
|
||||
如今,在以下几点上 Systemd 颇具争议:它逃避了一些确立好的 Unix 传统,例如纯文本的日志文件;它被看成是个“大一统”的项目,试图接管一切;它还是我们这个操作系统的支柱的重要革新。然而大多数主流发行版已经接受了(或即将接受)它,因此它就保留了下来。而且它确实是有好处的:更快地启动,更简单地管理那些有依赖的服务程序,提供强大且安全的日志系统等。
|
||||
|
||||
因此在这篇教程中,我们将探索 Systemd 的特性,并向您展示如何最有效地利用这些特性。即便您此刻并不是这款软件的粉丝,读完本文后您至少可以更加了解和适应它。
|
||||
|
||||
![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/eating-large.jpg)
|
||||
|
||||
**这部没正经的动画片来自[http://tinyurl.com/m2e7mv8][2],它把 Systemd 塑造成一只狂暴的动物,吞噬它路过的一切。大多数批评者的言辞可不像这只公仔一样柔软。**
|
||||
|
||||
### 启动及服务 ###
|
||||
|
||||
大多数主流发行版要么已经采用 Systemd,要么即将在下个发布中采用(如 Debian 和 Ubuntu)。在本教程中,我们使用 Fedora 21——该发行版已经是 Systemd 的优秀实验场地——的一个预览版进行演示,但不论您用哪个发行版,要用到的命令和注意事项都应该是一样的。这是 Systemd 的一个加分点:它消除了不同发行版之间许多细微且琐碎的区别。
|
||||
|
||||
在终端中输入 **ps ax | grep systemd**,看到第一行,其中的数字 **1** 表示它的进程号是1,也就是说它是 Linux 内核发起的第一个程序。因此,内核一旦检测完硬件并组织好了内存,就会运行 **/usr/lib/systemd/systemd** 可执行程序,这个程序会按顺序依次发起其他程序。(在还没有 Systemd 的日子里,内核会去运行 **/sbin/init**,随后这个程序会在名为 SysVinit 的系统中运行其余的各种启动脚本。)
|
||||
|
||||
Systemd 的核心是一个叫*单元* (unit)的概念,它是一些存有关于服务(在运行在后台的程序),设备,挂载点,和操作系统其他方面信息的配置文件。Systemd 的其中一个目标就是简化这些事物之间的相互作用,因此如果你有程序需要在某个挂载点被创建或某个设备被接入后开始运行,Systemd 可以让这一切正常运作起来变得相当容易。(在没有 Systemd 的日子里,要使用脚本来把这些事情调配好,那可是相当丑陋的。)要列出您 Linux 系统上的所有单元,输入以下命令:
|
||||
|
||||
systemctl list-unit-files
|
||||
|
||||
现在,**systemctl** 是与 Systemd 交互的主要工具,它有不少选项。在单元列表中,您会注意到这儿有一些格式:被使能的单元显示为绿色,被禁用的显示为红色。标记为“static”的单元不能直接启用,它们是其他单元所依赖的对象。若要限制输出列表只包含服务,使用以下命令:
|
||||
|
||||
systemctl list-unit-files --type=service
|
||||
|
||||
注意,一个单元显示为“enabled”,并不等于对应的服务正在运行,而只能说明它可以被开启。要获得某个特定服务的信息,以 GDM (the Gnome Display Manager) 为例,输入以下命令:
|
||||
|
||||
systemctl status gdm.service
|
||||
|
||||
这条命令提供了许多有用的信息:一段人类可读的服务描述,单元配置文件的位置,启动的时间,进程号,以及它所从属的 CGroups (用以限制各组进程的资源开销)。
|
||||
|
||||
如果您去查看位于 **/usr/lib/systemd/system/gdm.service** 的单元配置文件,您可以看到多种选项,包括要被运行的二进制文件(“ExecStart”那一行),相冲突的其他单元(即不能同时进入运行的单元),以及需要在本单元执行前进入运行的单元(“After”那一行)。一些单元有附加的依赖选项,例如“Requires”(必要的依赖)和“Wants”(可选的依赖)。

Another interesting option here is:

    Alias=display-manager.service

Once **gdm.service** is running, you will also be able to check its status with **systemctl status display-manager.service**. This is useful when you know *a* display manager is running and want to do something with it, but don't care whether it is GDM, KDM, XDM or something else.

![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/status-large.jpg)

**Use systemctl status followed by a unit name to see what is going on with the corresponding service.**
### Locking on to targets ###

If you run **ls** in the **/usr/lib/systemd/system** directory, you'll see various files ending in **.target**. A *target* is a way of grouping units together so that they are started at the same time. For instance, most Unix-like operating systems have a "multi-user" state, meaning the system has booted successfully, background services are running, and it is ready for one or more users to log in and work — at least in text mode. (Other states include a single-user state for administrative work, and a reboot state for restarting the machine.)

If you open the **multi-user.target** file for a look, you might expect to find a list of units to be started. Instead you'll find the file almost empty — in fact, a service makes itself a dependency of a target via the **WantedBy** option. So if you look inside **avahi-daemon.service**, **NetworkManager.service** and other **.service** files, you'll see this line in the Install section:

    WantedBy=multi-user.target

So switching to the multi-user target enables the units that contain that line. There are other targets available (such as **emergency.target**, which provides an emergency shell, and **halt.target**, which shuts down the machine), and you can switch between them easily like so:

    systemctl isolate emergency.target

In many ways these resemble *runlevels* in SysVinit: the text-mode **multi-user.target** is akin to runlevel 3, **graphical.target** to runlevel 5, **reboot.target** to runlevel 6, and so on.

![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/unit-large.jpg)

**Unit configuration files may look unfamiliar compared with traditional scripts, but they are not hard to understand.**
### Stopping and starting ###

Now you might be wondering: we've seen all this, but we still haven't seen how to stop and start services! There's a reason for that. From the outside, Systemd can look complicated, like a beast that's hard to tame, so it's worth getting a big-picture view of how it works before you start poking at it. The commands actually used to manage services are very simple:

    systemctl stop cups.service
    systemctl start cups.service

(If a unit is disabled, you can enable it first with **systemctl enable** followed by the unit name. This creates a symbolic link to the unit and places it in the .wants directory of the current target, under **/etc/systemd/system**.)

Two other useful commands are **systemctl restart** and **systemctl reload**, each followed by a unit name; the latter asks the unit to reload its configuration file. Most of Systemd is well documented, so you can check the manual (**man systemctl**) for the details of each command.
> ### Timer units: replacing Cron ###
>
> Beyond system initialisation and service management, Systemd has its fingers in other pies too. To a large extent it can do the job of **cron** — arguably in a more flexible way (and with more readable syntax). **cron** is a program that runs jobs at set intervals — cleaning up temporary files, refreshing caches and so on.
>
> If you go back into the **/usr/lib/systemd/system** directory, you'll see several **.timer** files there. View them with **less** and you'll find they have a similar structure to the **.service** and **.target** files, the difference being a **[Timer]** section. For example:
>
>     [Timer]
>     OnBootSec=1h
>     OnUnitActiveSec=1w
>
> The **OnBootSec** option tells Systemd to start the unit one hour after the system boots. The second option means: after that, start the unit once a week. There are a great many options you can set for timers — enter **man systemd.time** for the full list.
>
> Systemd's default timing accuracy is one minute. That is, it will run the unit within a minute of the requested time, but not necessarily to the exact second. This is done for power-management reasons, but if you need a timer with no slack, accurate down to the microsecond, you can add this line:
>
>     AccuracySec=1us
>
> Also, the **WakeSystem** option (which can be set to true or false) determines whether the timer can wake the machine from suspend.
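
Putting the sidebar's options together, a complete timer might look like the hedged sketch below; by convention a timer unit activates the service unit of the same name (here a hypothetical cleanup.service):

```ini
# cleanup.timer — hypothetical example; activates cleanup.service
[Unit]
Description=Run cleanup.service periodically

[Timer]
# First activation: one hour after boot
OnBootSec=1h
# After that: once a week after each activation
OnUnitActiveSec=1w

[Install]
WantedBy=timers.target
```
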
![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/systemd_gui-large.jpg)

**A graphical interface for Systemd exists, although it hasn't been actively maintained for years.**
### Logging: say hello to journald ###

The second major part of Systemd is the journal. This is a logging system, similar to syslog but with some significant differences. If you're a fan of the Unix way of managing logs, prepare for your blood to boil: this is a binary log, so you can't parse it with the usual command-line text-processing tools. That design decision sparked, unsurprisingly, heated debate online, but it does have its advantages. Logs can be organised more systematically, for instance, and carry more metadata, making it easier to filter messages by executable name, process ID and so on.

To view the whole journal, enter this command:

    journalctl

Like many other Systemd commands, this pipes its output through **less**, so you can scroll down with the space bar, search with the "/" (slash) key, and use other familiar shortcuts. You'll also spot a little colour, such as red for warnings and errors.

That command outputs a lot of information. To restrict it to messages from the current boot, use:

    journalctl -b

This is where Systemd really shines! Want to see all the messages since the previous boot? Try **journalctl -b -1**. The boot before that? Replace **-1** with **-2**. What about all the messages since a specific time, say 16:38 on 24 October 2014?

    journalctl -b --since="2014-10-24 16:38"

Even if you regret the binary format, that's a useful feature, and for many system administrators, building filters like this is a lot easier than writing regular expressions.
We can now pinpoint log messages by time — but can we do it by program? For units, try this:

    journalctl -u gdm.service

(Note: this is a good way to view the logs produced by the X server.) What about a specific process ID?

    journalctl _PID=890

You can even ask for messages produced by a particular executable only:

    journalctl /usr/bin/pulseaudio

To restrict the output to a certain priority, use the **-p** option. With a value of 0 it shows only emergency messages (in other words, time to pray to **\$DEITY**), while 7 shows all messages, including debug ones. See the manual (**man journalctl**) for more on priorities.

It's worth pointing out that you can combine options: to view messages of priority 3 or lower, produced by the GDM service, during the current boot, use:

    journalctl -u gdm.service -p 3 -b

Finally, if you just want an always-updating terminal window following the journal, much as you would have done with the tail command before Systemd, enter **journalctl -f**.
![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/journal-large.jpg)

**Binary logs aren't popular, but the journal does have its benefits, such as remarkably convenient searching and filtering.**
> ### Life without Systemd? ###
>
> If you simply can't accept Systemd, you still have a few choices among the mainstream distributions. Slackware in particular, the longest-lived distribution of them all, hasn't made the switch yet, though its lead developer hasn't ruled it out for the future. Some lesser-known distributions are also sticking with SysVinit.
>
> But how long will that last? Gnome is becoming increasingly dependent on Systemd, and the other major desktop environments are likely to follow suit. This is also what has caused a stir in the BSD community: Systemd is tightly tied to the Linux kernel, which to some extent makes desktop environments less portable. A compromise may arrive in the form of Uselessd ([http://uselessd.darknedgy.net][3]): a trimmed-down Systemd focused purely on starting and supervising processes, without swallowing the whole base system.
>
> ![Image](http://www.linuxvoice.com/wp-content/uploads/2015/10/gentoo-large.jpg)
>
> If you don't like Systemd, try the Gentoo distribution, which offers Systemd as one choice of init system but doesn't force it on users.
--------------------------------------------------------------------------------

via: http://www.linuxvoice.com/linux-101-get-the-most-out-of-systemd/

Author: [Mike Saunders][a]
Translator: [Ricky-Gong](https://github.com/Ricky-Gong)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:http://www.linuxvoice.com/author/mike/
[1]:http://tinyurl.com/poorlennart
[2]:http://tinyurl.com/m2e7mv8
[3]:http://uselessd.darknedgy.net/
LNAV - An Ncurses-based Log File Viewer
================================================================================

The Logfile Navigator, lnav for short, is a curses-based tool for viewing and analysing log files. Compared with a text viewer or editor, lnav's advantage is that it makes full use of the semantic information that can be gleaned from log files, such as timestamps and log levels. With that extra semantic information, lnav can do things like interleave messages from different files, build histograms of messages over time, and provide hotkeys for navigating through the files. The aim is that these features let users locate and resolve problems quickly and efficiently.
### lnav features ###

#### Support for the following log file formats: ####

Syslog, Apache access logs, strace, tcsh history, and generic log files with timestamps. The file format is detected automatically as files are read in.

#### Histogram view: ####

Displays the number of log messages per bucket of time. This is useful for getting an overview of what happened over a long period of time.

#### Filters: ####

Display only lines that match, or do not match, a set of regular expressions. Useful for weeding out the mass of log lines you are not interested in.
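
In lnav, filters are driven from its command prompt, opened with the ":" key; the lines below are a sketch of the filter command syntax (the patterns are made-up examples):

```
:filter-in (error|warning)
:filter-out systemd
```
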

#### Live operation: ####

Searches are performed as you type; new log lines are automatically loaded and searched as they are added; filters are applied as lines are loaded; and SQL queries are checked for correctness as you type.

#### Automatic tailing: ####

The log file view automatically scrolls down to follow new lines as they are added to the files. Simply scroll up to lock the current view in place, then scroll down to the bottom to resume tailing.

#### Time-of-day ordering of lines: ####

Log lines loaded from all files are sorted by date, so you don't need to collate log messages from different files by hand.

#### Syntax highlighting: ####

Errors and warnings are shown in red and yellow respectively. Highlighting is also applied to SQL keywords, XML tags, file and line numbers in Java backtraces, and quoted strings.

#### Navigation: ####

There are hotkeys for jumping to the next or previous error or warning, and for moving forward or backward by a set amount of time.

#### Querying logs with SQL: ####

Each line of a log file is treated as a row in a database that can be queried with SQL. The columns available depend on the type of log file being viewed.
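
For example, lnav's SQL prompt is opened with the ";" key; assuming syslog-format files, a query over the syslog_log table might look like the sketch below (the column names are those lnav exposes for the syslog format):

```sql
;SELECT log_time, log_level, log_body
   FROM syslog_log
  WHERE log_level = 'error'
```
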

#### Command and search history: ####

The commands and searches you enter are saved automatically, so they are available across sessions.

#### Compressed files: ####

Compressed log files are automatically detected and uncompressed on the fly.
### Installing lnav on Ubuntu 15.10 ###

Open a terminal and run the following command:

    sudo apt-get install lnav

### Using lnav ###

If you want to view logs with lnav, you can use the command below; by default it displays the syslogs:

    lnav

![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/51.png)

If you want to view a specific log, you need to specify its path.

For example, to view the CUPS logs, run the following command in your terminal:

    lnav /var/log/cups

![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/6.png)
--------------------------------------------------------------------------------

via: http://www.ubuntugeek.com/lnav-ncurses-based-log-file-viewer.html

Author: [ruchi][a]
Translator: [ictlyh](http://mutouxiaogui.cn/blog/)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:http://www.ubuntugeek.com/author/ubuntufix
The tar Command Explained
================================================================================

The Linux [tar][1] command is a powerful weapon for archiving and distributing files. A GNU tar archive can contain multiple files and directories while preserving permissions, and it supports several compression formats. The name tar stands for "**T**ape **Ar**chiver"; the format is a POSIX standard.
### Tar file formats ###

A short overview of tar compression options:

- **No compression** Uncompressed archives end in .tar.
- **Gzip compression** The Gzip format is the most widely used compression format for tar; it is fast at compressing and extracting files. Files compressed with gzip usually end in .tar.gz or .tgz. Here are some examples of how to [create][2] and [extract][3] tar.gz files.
- **Bzip2 compression** The Bzip2 format offers better compression than Gzip, but creating the compressed files is slower. They usually end in .tar.bz2.
- **Lzip (LZMA) compression** Lzip compression combines the speed of Gzip with a compression ratio similar to (or even better than) Bzip2. Despite these advantages, the format is not widely used.
- **Lzop compression** This is probably the fastest compression option for tar; its compression ratio is similar to gzip's, and it is not widely used either.

The common formats are tar.gz and tar.bz2. If you want fast compression, use gzip; if the size of the archive matters more, use tar.bz2.
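
To see the trade-off in practice, here is a small, self-contained sketch (the file and directory names are made up for the demonstration; bzip2 must be installed for the second archive):

```shell
# Create a small sample directory, then archive it with gzip and with bzip2
mkdir -p tardemo
echo "some sample data" > tardemo/file.txt
tar czf tardemo.tar.gz tardemo    # z = gzip: fast
tar cjf tardemo.tar.bz2 tardemo   # j = bzip2: better ratio on large inputs
ls -l tardemo.tar.gz tardemo.tar.bz2
```

On a tiny input like this the sizes will be similar; the difference only shows on large archives.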
### What is the tar command used for? ###

Here are some common scenarios in which the tar command is used:

- Backing up servers or desktop systems
- Document archiving
- Software distribution
### Installing tar ###

Most Linux systems have tar installed by default. If yours doesn't, here are the commands to install it.

#### CentOS ####

On CentOS, run the following command in a shell as the root user to install tar:

    yum install tar

#### Ubuntu ####

This command installs tar on Ubuntu; the "sudo" command ensures that apt runs with root privileges:

    sudo apt-get install tar

#### Debian ####

The following apt command installs tar on Debian:

    apt-get install tar

#### Windows ####

The tar command is available for Windows as well; you can download it from the GnuWin project at [http://gnuwin32.sourceforge.net/packages/gtar.htm][4].
### Creating tar.gz files ###

Here are some examples of running the [tar command][5] in a shell; I'll explain the command-line options below.

    tar pczf myarchive.tar.gz /home/till/mydocuments
This command creates the archive myarchive.tar.gz containing the files and directories under the path /home/till/mydocuments. **The command-line options explained**:

- **[p]** This option stands for "preserve"; it tells tar to keep file ownership and permission information in the archive.
- **[c]** Stands for create. This option is mandatory when creating an archive.
- **[z]** The z option enables gzip compression.
- **[f]** The file option tells tar to create an archive file. Without it, tar sends its output to stdout.
#### Tar command examples ####

**Example 1: Back up the /etc directory** Create a backup of the /etc configuration directory. The backup is stored in the root directory.

    tar pczvf /root/etc.tar.gz /etc

![Back up the /etc directory with tar](https://www.howtoforge.com/images/linux-tar-command/big/create-tar.png)

Run the command as root to make sure all files in /etc are included in the backup. This time I have added the [v] option to the command. It stands for verbose and tells tar to show the name of every file it adds to the archive.

**Example 2: Back up your /home directory** Create a backup of your home directory. The backup will be stored in the /backup directory.

    tar czf /backup/myuser.tar.gz /home/myuser

Replace myuser with your username. In this command I have omitted the [p] option, so the permissions will not be preserved.

**Example 3: A file-based backup of a MySQL database** On most Linux distributions, MySQL databases are stored under /var/lib/mysql. You can check with this command:

    ls /var/lib/mysql

![File-based MySQL backup with tar](https://www.howtoforge.com/images/linux-tar-command/big/tar_backup_mysql.png)

To keep the backup consistent, stop the database server first before backing up the MySQL files with tar. The backup will be written to the /backup directory.

1) Create the backup directory:

    mkdir /backup
    chmod 600 /backup

2) Stop MySQL, run the backup with tar, then restart the database:

    service mysql stop
    tar pczf /backup/mysql.tar.gz /var/lib/mysql
    service mysql start
    ls -lah /backup

![File-based MySQL backup](https://www.howtoforge.com/images/linux-tar-command/big/tar-backup-mysql2.png)
||||
### 提取 tar.gz 文件###
|
||||
|
||||
提取 tar.gz 文件的命令是:
|
||||
|
||||
tar xzf myarchive.tar.gz
|
||||
|
||||
#### tar 命令选项解释 ####
|
||||
|
||||
- **[x]** x 表示提取,提取 tar 文件时这个命令不可缺少。
|
||||
- **[z]** z 选项告诉 tar 要解压的归档文件时 gzip 格式。
|
||||
- **[f]** 该选项告诉 tar 从一个文件中读取归档内容,本例中是 myarchive.tar.gz。
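
A related option is **t**, which lists an archive's contents without extracting anything — handy for checking what is inside first (using myarchive.tar.gz from the example above):

```shell
# List the contents of a gzip-compressed archive without extracting it
tar tzf myarchive.tar.gz
```
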

The tar command above extracts the tar.gz file quietly, showing error messages only. If you want to see which files are being extracted, add the "v" option:

    tar xzvf myarchive.tar.gz

The **[v]** option stands for verbose; it shows you the names of the files as they are unpacked.

![Extracting a tar.gz file](https://www.howtoforge.com/images/linux-tar-command/big/tar-xfz.png)
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/linux-tar-command/
|
||||
|
||||
作者:[howtoforge][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com/
|
||||
[1]:https://en.wikipedia.org/wiki/Tar_(computing)
|
||||
[2]:http://www.faqforge.com/linux/create-tar-gz/
|
||||
[3]:http://www.faqforge.com/linux/extract-tar-gz/
|
||||
[4]:http://gnuwin32.sourceforge.net/packages/gtar.htm
|
||||
[5]:http://www.faqforge.com/linux/tar-command/