mirror of
https://github.com/LCTT/TranslateProject.git
synced 2024-12-23 21:20:42 +08:00
commit
29c1a03f87
@ -66,6 +66,7 @@ LCTT的组成
|
||||
- CORE @strugglingyouth,
|
||||
- CORE @FSSlc
|
||||
- CORE @zpl1025,
|
||||
- CORE @runningwater,
|
||||
- CORE @bazz2,
|
||||
- CORE @Vic020,
|
||||
- CORE @dongfengweixiao,
|
||||
@ -76,7 +77,6 @@ LCTT的组成
|
||||
- Senior @jasminepeng,
|
||||
- Senior @willqian,
|
||||
- Senior @vizv,
|
||||
- runningwater,
|
||||
- ZTinoZ,
|
||||
- theo-l,
|
||||
- luoxcat,
|
||||
|
64
published/20151013 DFileManager--Cover Flow File Manager.md
Normal file
@ -0,0 +1,64 @@
|
||||
DFileManager:封面流(CoverFlow)文件管理器
|
||||
================================================================================
|
||||
|
||||
这是一个 Ubuntu 标准软件仓库中缺失的、像宝石般有着独特功能的文件管理器。这是 DFileManager 在推特上对自己的宣传。
|
||||
|
||||
有一个不好回答的问题,如何知道到底有多少个 Linux 的开源软件?好奇的话,你可以在 Shell 里输入如下命令:
|
||||
|
||||
~$ for f in /var/lib/apt/lists/*Packages; do printf '%5d %s\n' $(grep '^Package: ' "$f" | wc -l) ${f##*/}; done | sort -rn
|
||||
|
||||
在我的 Ubuntu 15.04 系统上,产生结果如下:
|
||||
|
||||
![Ubuntu 15.04 Packages](http://www.linuxlinks.com/portal/content/reviews/FileManagers/UbuntuPackages.png)
|
||||
|
||||
正如上面的截图所示,在 Universe 仓库中大约有 39000 个包,在 main 仓库中大约有 8500 个包。这听起来很多,但是这些包括了开源应用、工具和库,其中有很多并不是由 Ubuntu 开发者打包的。更重要的是,有很多重要的软件不在仓库中,只能通过源代码编译,DFileManager 就是这样一个软件。它是一个仍处在开发早期的、基于 Qt 的跨平台文件管理器,Qt 提供了单一源码下的跨平台可移植性。
|
||||
|
||||
现在还没有提供二进制软件包,用户需要自行编译源代码。对某些工具来说,这可能是个大问题,特别是当这个应用依赖复杂的库,或者需要某个与系统中已安装软件不兼容的版本时。
|
||||
|
||||
### 安装 ###
|
||||
|
||||
幸运的是,DFileManager 非常容易编译。对于我的老 Ubuntu 机器来说,开发者网站上的安装介绍提供了大部分重要步骤,不过还是有少量基础软件包没有列出(为什么总是这样?哪怕这些依赖库会把文件系统弄得一团糟!)。在我的系统上,要从项目的 Git 仓库下载源代码并编译这个软件,我在 Shell 里输入了以下命令:
|
||||
|
||||
~$ sudo apt-get install qt5-default qt5-qmake libqt5x11extras5-dev
|
||||
~$ git clone git://git.code.sf.net/p/dfilemanager/code dfilemanager-code
|
||||
~$ cd dfilemanager-code
|
||||
~$ mkdir build
|
||||
~$ cd build
|
||||
~$ cmake ../ -DCMAKE_INSTALL_PREFIX=/usr
|
||||
~$ make
|
||||
~$ sudo make install
|
||||
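上面的步骤假设系统里已经装好了编译器、CMake 和 git 这些基础构建工具。如果克隆或编译时报错,可以先补装它们;下面只是一个示意命令,包名以你的 Ubuntu 版本实际提供的为准:

~$ sudo apt-get install build-essential cmake git    # 编译器、CMake 和 git,已安装的会被跳过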
|
||||
你可以在 Shell 中输入如下命令来启动它:
|
||||
|
||||
~$ dfm
|
||||
|
||||
下面是运行中的 DFileManager,完全展示了其最吸引人的地方:封面流(Cover Flow)视图。可以在当前文件夹的项目间滑动,提供了一个相当有吸引力的体验。这是看图片的理想选择。这个文件管理器酷似 Finder(苹果操作系统下的默认文件管理器),可能会吸引你。
|
||||
|
||||
![DFileManager in action](http://www.linuxlinks.com/portal/content/reviews/FileManagers/Screenshot-dfm.png)
|
||||
|
||||
### 特点: ###
|
||||
|
||||
- 4种视图:图标、详情、列视图和封面流
|
||||
- 按位置和设备归类书签
|
||||
- 标签页
|
||||
- 简单的搜索和过滤
|
||||
- 自定义文件类型的缩略图,包括多媒体文件
|
||||
- 信息栏可以移走
|
||||
- 单击打开文件和目录
|
||||
- 可以排队 IO 操作
|
||||
- 记住每个文件夹的视图属性
|
||||
- 显示隐藏文件
|
||||
|
||||
DFileManager 并不打算取代 KDE 的 Dolphin,但是能做相同的事情。它是一个真正能帮助人们浏览文件的文件管理器。还有,别忘了向开发者反馈信息,任何人都可以做出这样的贡献。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://gofk.tumblr.com/post/131014089537/dfilemanager-cover-flow-file-manager-a-real-gem
|
||||
|
||||
作者:[gofk][a]
|
||||
译者:[bestony](https://github.com/bestony)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://gofk.tumblr.com/
|
@ -0,0 +1,32 @@
|
||||
黑客们成功地在土豆上安装了 Linux !
|
||||
================================================================================
|
||||
|
||||
来自荷兰阿姆斯特丹的消息称,LinuxOnAnything.nl 网站的黑客们成功地在土豆上安装了 Linux!这是该操作系统第一次在根用蔬菜(root vegetable)上安装成功(LCTT 译注:root vetetable,一语双关,root 在 Linux 是指超级用户)。
|
||||
|
||||
![Linux Potato](http://www.bbspot.com/Images/News_Features/2008/12/linux-potato.jpg)
|
||||
|
||||
“土豆没有 CPU,内存和存储器,这真的是个挑战,” Linux On Anything (LOA) 小组的 Johan Piest 说。“显然我们不能使用一个像 Fedora 或 Ubuntu 这些体量较大的发行版,所以我们用的是 Damn Small Linux。”
|
||||
|
||||
在尝试了几周之后,LOA 小组的同学们弄出了一个适合土豆的 Linux 内核,这玩艺儿上面可以用 vi 来编辑小的文本文件。这个 Linux 通过一个小型的 U 盘加载到土豆上,并通过一组红黑线以二进制的方式向这个土豆发送命令。
|
||||
|
||||
LOA 小组是一个不断壮大的黑客组织的分支;这个组织致力于将 Linux 安装到所有物体上;他们先是将 Linux 装到Gameboy 和 iPod 等电子产品上,不过最近他们在挑战一些高难度的东西,譬如将Linux安装到灯泡和小狗身上!
|
||||
|
||||
LOA 小组在与另一个黑客小组 Stuttering Monarchs 竞赛,看谁先拿到土豆这一分。“土豆是一种每个人都会接触到的蔬菜,它的用途就像 Linux 一样极其广泛。无论你是想煮捣烹炸还是别的都可以” Piest 说道,“你也许认为我们完成这个挑战是为了获得某些好处,而我们只是追求逼格而已。”
|
||||
|
||||
LOA 是第一个将 Linux 安装到一匹设德兰矮种马上的小组,但这五年来竞争愈演愈烈,其它黑客小组的进度已经反超了他们。
|
||||
|
||||
“我们本来可以成为在饼干上面安装 Linux 的第一个小组,但是那群来自挪威的混蛋把我们击败了。” Piest 说。
|
||||
|
||||
第一个成功安装了 Linux 的蔬菜是一颗卷心菜,它是由土耳其的一个黑客小组完成的。
|
||||
|
||||
(好啦——是不是已经目瞪口呆,事实上,这是一篇好几年前的恶搞文,你看出来了吗?哈哈哈哈)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.bbspot.com/news/2008/12/linux-on-a-potato.html
|
||||
|
||||
作者:[Brian Briggs](briggsb@bbspot.com)
|
||||
译者:[StdioA](https://github.com/StdioA), [hittlle](https://github.com/hittlle)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,18 +1,19 @@
|
||||
Assign Multiple IP Addresses To One Interface On Ubuntu 15.10
|
||||
在 Ubuntu 15.10 上为单个网卡设置多个 IP 地址
|
||||
================================================================================
|
||||
Some times you might want to use more than one IP address for your network interface card. What will you do in such cases? Buy an extra network card and assign new IP? No, It’s not necessary(at least in the small networks). We can now assign multiple IP addresses to one interface on Ubuntu systems. Curious to know how? Well, Follow me, It is not that difficult.
|
||||
|
||||
This method will work on Debian and it’s derivatives too.
|
||||
有时候你可能想在你的网卡上使用多个 IP 地址。遇到这种情况你会怎么办呢?买一个新的网卡并分配一个新的 IP?不,没有这个必要(至少在小型网络中)。现在我们可以在 Ubuntu 系统中为一个网卡分配多个 IP 地址。想知道怎么做到的?跟着我往下看,其实并不难。
|
||||
|
||||
### Add additional IP addresses temporarily ###
|
||||
这个方法也适用于 Debian 以及它的衍生版本。
|
||||
|
||||
First, let us find the IP address of the network card. In my Ubuntu 15.10 server, I use only one network card.
|
||||
### 临时添加 IP 地址 ###
|
||||
|
||||
Run the following command to find out the IP address:
|
||||
首先,让我们找到网卡的 IP 地址。在我的 Ubuntu 15.10 服务器版中,我只使用了一个网卡。
|
||||
|
||||
运行下面的命令找到 IP 地址:
|
||||
|
||||
sudo ip addr
|
||||
|
||||
**Sample output:**
|
||||
**样例输出:**
|
||||
|
||||
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
|
||||
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
||||
@ -27,11 +28,11 @@ Run the following command to find out the IP address:
|
||||
inet6 fe80::a00:27ff:fe2a:34e/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
|
||||
Or
|
||||
或
|
||||
|
||||
sudo ifconfig
|
||||
|
||||
**Sample output:**
|
||||
**样例输出:**
|
||||
|
||||
enp0s3 Link encap:Ethernet HWaddr 08:00:27:2a:03:4b
|
||||
inet addr:192.168.1.103 Bcast:192.168.1.255 Mask:255.255.255.0
|
||||
@ -50,19 +51,19 @@ Or
|
||||
collisions:0 txqueuelen:0
|
||||
RX bytes:38793 (38.7 KB) TX bytes:38793 (38.7 KB)
|
||||
|
||||
As you see in the above output, my network card name is **enp0s3**, and its IP address is **192.168.1.103**.
|
||||
正如你在上面输出中看到的,我的网卡名称是 **enp0s3**,它的 IP 地址是 **192.168.1.103**。
|
||||
|
||||
Now let us add an additional IP address, for example **192.168.1.104**, to the Interface card.
|
||||
现在让我们来为网卡添加一个新的 IP 地址,例如说 **192.168.1.104**。
|
||||
|
||||
Open your Terminal and run the following command to add additional IP.
|
||||
打开你的终端并运行下面的命令添加额外的 IP。
|
||||
|
||||
sudo ip addr add 192.168.1.104/24 dev enp0s3
|
||||
|
||||
Now, let us check if the IP is added using command:
|
||||
用命令检查是否启用了新的 IP:
|
||||
|
||||
sudo ip address show enp0s3
|
||||
|
||||
**Sample output:**
|
||||
**样例输出:**
|
||||
|
||||
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
|
||||
link/ether 08:00:27:2a:03:4e brd ff:ff:ff:ff:ff:ff
|
||||
@ -73,13 +74,13 @@ Now, let us check if the IP is added using command:
|
||||
inet6 fe80::a00:27ff:fe2a:34e/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
|
||||
Similarly, you can add as many IP addresses as you want.
|
||||
类似地,你可以按照需要添加任意数量的 IP 地址。
|
||||
|
||||
Let us ping the IP address to verify it.
|
||||
让我们 ping 一下这个 IP 地址验证一下。
|
||||
|
||||
sudo ping 192.168.1.104
|
||||
|
||||
**Sample output:**
|
||||
**样例输出**
|
||||
|
||||
PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
|
||||
64 bytes from 192.168.1.104: icmp_seq=1 ttl=64 time=0.901 ms
|
||||
@ -87,17 +88,17 @@ Let us ping the IP address to verify it.
|
||||
64 bytes from 192.168.1.104: icmp_seq=3 ttl=64 time=0.521 ms
|
||||
64 bytes from 192.168.1.104: icmp_seq=4 ttl=64 time=0.524 ms
|
||||
|
||||
Yeah, It’s working!!
|
||||
好极了,它能工作!
|
||||
|
||||
To remove the IP, just run:
|
||||
要删除 IP,只需要运行:
|
||||
|
||||
sudo ip addr del 192.168.1.104/24 dev enp0s3
|
||||
|
||||
Let us check if it is removed.
|
||||
再检查一下是否删除了 IP。
|
||||
|
||||
sudo ip address show enp0s3
|
||||
|
||||
**Sample output:**
|
||||
**样例输出:**
|
||||
|
||||
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
|
||||
link/ether 08:00:27:2a:03:4e brd ff:ff:ff:ff:ff:ff
|
||||
@ -106,19 +107,19 @@ Let us check if it is removed.
|
||||
inet6 fe80::a00:27ff:fe2a:34e/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
|
||||
See, It’s gone!!
|
||||
可以看到已经没有了!!
|
||||
|
||||
Well, as you may know, the changes will lost after you reboot your system. How do I make it permanent? That’s easy too.
|
||||
正如你所知,重启系统后这些设置会失效。那么怎么设置才能永久有效呢?这也很简单。
|
||||
|
||||
### Add additional IP addresses permanently ###
|
||||
### 添加永久 IP 地址 ###
|
||||
|
||||
The network card configuration file of your Ubuntu system is **/etc/network/interfaces**.
|
||||
Ubuntu 系统的网卡配置文件是 **/etc/network/interfaces**。
|
||||
|
||||
Let us check the details of the above file.
|
||||
让我们来看看上面文件的具体内容。
|
||||
|
||||
sudo cat /etc/network/interfaces
|
||||
|
||||
**Sample output:**
|
||||
**输出样例:**
|
||||
|
||||
# This file describes the network interfaces available on your system
|
||||
# and how to activate them. For more information, see interfaces(5).
|
||||
@ -130,15 +131,15 @@ Let us check the details of the above file.
|
||||
auto enp0s3
|
||||
iface enp0s3 inet dhcp
|
||||
|
||||
As you see in the above output, the Interface is DHCP enabled.
|
||||
正如你在上面输出中看到的,网卡启用了 DHCP。
|
||||
|
||||
Okay, now we will assign an additional address, for example **192.168.1.104/24**.
|
||||
现在,让我们来分配一个额外的地址,例如 **192.168.1.104/24**。
|
||||
|
||||
Edit file **/etc/network/interfaces**:
|
||||
编辑 **/etc/network/interfaces**:
|
||||
|
||||
sudo nano /etc/network/interfaces
|
||||
|
||||
Add additional IP address as shown in the black letters.
|
||||
如下添加额外的 IP 地址。
|
||||
|
||||
# This file describes the network interfaces available on your system
|
||||
# and how to activate them. For more information, see interfaces(5).
|
||||
@ -152,13 +153,13 @@ Add additional IP address as shown in the black letters.
|
||||
iface enp0s3 inet static
|
||||
address 192.168.1.104/24
|
||||
|
||||
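供参考,按照本文的思路修改之后,/etc/network/interfaces 文件大致会是下面这个样子(这只是一个示意,接口名 enp0s3 和地址请换成你自己的):

# 主接口仍通过 DHCP 获取原有地址
auto enp0s3
iface enp0s3 inet dhcp

# 同一网卡上额外的静态 IP
iface enp0s3 inet static
    address 192.168.1.104/24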
Save and close the file.
|
||||
保存并关闭文件。
|
||||
|
||||
Run the following file to take effect the changes without rebooting.
|
||||
运行下面的命令使更改无需重启即生效。
|
||||
|
||||
sudo ifdown enp0s3 && sudo ifup enp0s3
|
||||
|
||||
**Sample output:**
|
||||
**样例输出:**
|
||||
|
||||
Killed old client process
|
||||
Internet Systems Consortium DHCP Client 4.3.1
|
||||
@ -182,13 +183,13 @@ Run the following file to take effect the changes without rebooting.
|
||||
DHCPACK of 192.168.1.103 from 192.168.1.1
|
||||
bound to 192.168.1.103 -- renewal in 35146 seconds.
|
||||
|
||||
**Note**: It is **very important** to run the above two commands into **one** line if you are remoting into the server because the first one will drop your connection. Given in this way the ssh-session will survive.
|
||||
**注意**:如果你从远程连接到服务器,把上面的两个命令放到**一行**中**非常重要**,因为第一个命令会断掉你的连接。而采用这种方式可以保留你的 ssh 会话。
|
||||
|
||||
Now, let us check if IP is added using command:
|
||||
现在,让我们用下面的命令来检查一下是否添加了新的 IP:
|
||||
|
||||
sudo ip address show enp0s3
|
||||
|
||||
**Sample output:**
|
||||
**输出样例:**
|
||||
|
||||
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
|
||||
link/ether 08:00:27:2a:03:4e brd ff:ff:ff:ff:ff:ff
|
||||
@ -199,13 +200,13 @@ Now, let us check if IP is added using command:
|
||||
inet6 fe80::a00:27ff:fe2a:34e/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
|
||||
Cool! Additional IP has been added.
|
||||
很好!我们已经添加了额外的 IP。
|
||||
|
||||
Well then let us ping the IP address to verify.
|
||||
再次 ping IP 地址进行验证。
|
||||
|
||||
sudo ping 192.168.1.104
|
||||
|
||||
**Sample output:**
|
||||
**样例输出:**
|
||||
|
||||
PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
|
||||
64 bytes from 192.168.1.104: icmp_seq=1 ttl=64 time=0.137 ms
|
||||
@ -213,24 +214,23 @@ Well then let us ping the IP address to verify.
|
||||
64 bytes from 192.168.1.104: icmp_seq=3 ttl=64 time=0.054 ms
|
||||
64 bytes from 192.168.1.104: icmp_seq=4 ttl=64 time=0.067 ms
|
||||
|
||||
Voila! It’s working. That’s it.
|
||||
好极了!它能正常工作。就是这样。
|
||||
|
||||
Want to know how to add additional IP addresses on CentOS/RHEL/Scientific Linux/Fedora systems, check the following link.
|
||||
想知道怎么给 CentOS/RHEL/Scientific Linux/Fedora 系统添加额外的 IP 地址,可以点击下面的链接。
|
||||
|
||||
注:此篇文章以前做过选题:20150205 Linux Basics--Assign Multiple IP Addresses To Single Network Interface Card On CentOS 7.md
|
||||
- [Assign Multiple IP Addresses To Single Network Interface Card On CentOS 7][1]
|
||||
- [在CentOS 7上给一个网卡分配多个IP地址][1]
|
||||
|
||||
Happy weekend!
|
||||
工作愉快!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/assign-multiple-ip-addresses-to-one-interface-on-ubuntu-15-10/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/sk/
|
||||
[1]:http://www.unixmen.com/linux-basics-assign-multiple-ip-addresses-single-network-interface-card-centos-7/
|
||||
[1]:https://linux.cn/article-5127-1.html
|
@ -0,0 +1,131 @@
|
||||
如何在 Ubuntu 14/15 上配置 Apache Solr
|
||||
================================================================================
|
||||
|
||||
大家好,欢迎阅读我们今天这篇 Apache Solr 的文章。简单地说,Apache Solr 是最负盛名的开源搜索平台之一,配合运行在网站后端的 Apache Lucene,能够让你轻松创建搜索引擎来搜索网站、数据库和文件。它能够索引和搜索多个网站,并根据搜索文本的相关内容返回搜索建议。
|
||||
|
||||
Solr 使用 HTTP 可扩展标记语言(XML),可以为 JSON、Python 和 Ruby 等提供应用程序接口(API)。根据Apache Lucene 项目所述,Solr 提供了非常多的功能,很受管理员们的欢迎:
|
||||
|
||||
- 全文检索
|
||||
- 分面导航(Faceted Navigation)
|
||||
- 拼写建议/自动完成
|
||||
- 自定义文档排序/排列
|
||||
|
||||
#### 前提条件: ####
|
||||
|
||||
在一个最小化安装的全新 Ubuntu 14/15 系统上,你只需要做少量的准备,就可以开始安装 Apache Solr 了。
|
||||
|
||||
### 1) 系统更新 ###
|
||||
|
||||
使用一个具有 sudo 权限的非 root 用户登录你的 Ubuntu 服务器,在接下来的所有安装和使用 Solr 的步骤中都会使用它。
|
||||
|
||||
登录成功后,使用下面的命令,升级你的系统到最新的更新及补丁:
|
||||
|
||||
$ sudo apt-get update
|
||||
|
||||
### 2) 安装 JRE###
|
||||
|
||||
要安装 Solr,首先需要安装 JRE(Java Runtime Environment)作为基础环境,因为 Solr 和 Tomcat 都是基于 Java 的。所以,我们需要安装最新版的 Java 并配置 Java 本地环境。
|
||||
|
||||
要想安装最新版的 Java 8,我们需要通过以下命令安装 Python Software Properties 工具包
|
||||
|
||||
$ sudo apt-get install python-software-properties
|
||||
|
||||
完成后,配置最新版 Java 8的仓库
|
||||
|
||||
$ sudo add-apt-repository ppa:webupd8team/java
|
||||
|
||||
现在你可以通过以下命令更新包源列表,使用‘apt-get’来安装最新版本的 Oracle Java 8。
|
||||
|
||||
$ sudo apt-get update
|
||||
|
||||
$ sudo apt-get install oracle-java8-installer
|
||||
|
||||
在安装和配置过程中,点击'OK'按钮接受 Java SE Platform 和 JavaFX 的 Oracle 二进制代码许可协议(Oracle Binary Code License Agreement)。
|
||||
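如果是在脚本里无人值守地安装、没法点击交互界面,也可以预先接受许可协议。下面是一种常见做法的示意,debconf 键名以 webupd8team 安装器的实际情况为准:

$ echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | sudo debconf-set-selections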
|
||||
在安装完成后,运行下面的命令,检查是否安装成功以及查看安装的版本。
|
||||
|
||||
kash@solr:~$ java -version
|
||||
java version "1.8.0_66"
|
||||
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
|
||||
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
|
||||
|
||||
执行结果表明我们已经成功安装了 Java,并达到安装 Solr 最基本的要求了,接着我们进行下一步。
|
||||
|
||||
### 安装 Solr ###
|
||||
|
||||
有两种不同的方式可以在 Ubuntu 上安装 Solr,在本文中我们只用最新的源码包来演示源码安装。
|
||||
|
||||
要使用源码安装 Solr,先要从[官网][1]下载最新的可用安装包。复制以下链接,然后使用 'wget' 命令来下载。
|
||||
|
||||
$ wget http://www.us.apache.org/dist/lucene/solr/5.3.1/solr-5.3.1.tgz
|
||||
|
||||
运行下面的命令,从归档包里解压出用于安装服务的脚本。
|
||||
|
||||
$ tar -xzf solr-5.3.1.tgz solr-5.3.1/bin/install_solr_service.sh --strip-components=2
|
||||
|
||||
运行脚本来启动 Solr 服务,这将会先创建一个 solr 的用户,然后将 Solr 安装成服务。
|
||||
|
||||
$ sudo bash ./install_solr_service.sh solr-5.3.1.tgz
|
||||
|
||||
![Solr 安装](http://blog.linoxide.com/wp-content/uploads/2015/11/12.png)
|
||||
|
||||
使用下面的命令来检查 Solr 服务的状态。
|
||||
|
||||
$ service solr status
|
||||
|
||||
![Solr 状态](http://blog.linoxide.com/wp-content/uploads/2015/11/22.png)
|
||||
|
||||
### 创建 Solr 集合: ###
|
||||
|
||||
我们现在可以使用 Solr 用户添加多个集合。就像下图所示的那样,我们只需要在命令行中指定集合名称和指定其配置集就可以创建多个集合了。
|
||||
|
||||
$ sudo su - solr -c "/opt/solr/bin/solr create -c myfirstcollection -n data_driven_schema_configs"
|
||||
|
||||
![创建集合](http://blog.linoxide.com/wp-content/uploads/2015/11/32.png)
|
||||
|
||||
我们已经成功的为我们的第一个集合创建了新核心实例目录,并可以将数据添加到里面。要查看库中的默认模式文件,可以在这里找到: '/opt/solr/server/solr/configsets/data_driven_schema_configs/conf' 。
|
||||
|
||||
### 使用 Solr Web ###
|
||||
|
||||
可以使用默认的 8983 端口连接 Apache Solr。打开浏览器,输入 http://your\_server\_ip:8983/solr 或者 http://your-domain.com:8983/solr。确保你的防火墙放行了 8983 端口。
|
||||
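如果你的服务器用 ufw 管理防火墙,可以参考下面的示意命令放行该端口(若没有启用防火墙则可以跳过):

$ sudo ufw allow 8983/tcp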
|
||||
http://172.25.10.171:8983/solr/
|
||||
|
||||
![Web访问Solr](http://blog.linoxide.com/wp-content/uploads/2015/11/42.png)
|
||||
|
||||
在 Solr 的 Web 控制台左侧菜单点击 'Core Admin' 按钮,你将会看见我们之前使用命令行方式创建的集合。你可以点击 'Add Core' 按钮来创建新的核心。
|
||||
|
||||
![添加核心](http://blog.linoxide.com/wp-content/uploads/2015/11/52.png)
|
||||
|
||||
就像下图中所示,你可以选择某个集合并指向文档来向里面添加内容或从文档中查询数据。如下显示的那样添加指定格式的数据。
|
||||
|
||||
{
|
||||
"number": 1,
|
||||
"Name": "George Washington",
|
||||
"birth_year": 1989,
|
||||
"Starting_Job": 2002,
|
||||
"End_Job": "2009-04-30",
|
||||
"Qualification": "Graduation",
|
||||
"skills": "Linux and Virtualization"
|
||||
}
|
||||
|
||||
添加文档后点击 'Submit Document' 按钮。
|
||||
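除了 Web 界面,也可以用 curl 直接查询刚才的集合,确认文档确实已经被索引。下面是一个示意查询,集合名沿用前面创建的 myfirstcollection:

$ curl "http://localhost:8983/solr/myfirstcollection/select?q=*:*&wt=json"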
|
||||
![添加文档](http://blog.linoxide.com/wp-content/uploads/2015/11/62.png)
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在 Ubuntu 上安装成功后,你就可以使用 Solr Web 接口插入或查询数据。如果你想通过 Solr 来管理更多的数据和文件,可以创建更多的集合。希望你能喜欢这篇文章并且希望它能够帮到你。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/ubuntu-how-to/configure-apache-solr-ubuntu-14-15/
|
||||
|
||||
作者:[Kashif][a]
|
||||
译者:[taichirain](https://github.com/taichirain)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/kashifs/
|
||||
[1]:http://lucene.apache.org/solr/
|
@ -0,0 +1,327 @@
|
||||
如何在 FreeBSD 10.2 上安装 Nginx 作为 Apache 的反向代理
|
||||
================================================================================
|
||||
|
||||
Nginx 是一款自由开源的 HTTP 和反向代理服务器,也可以用作 POP3/IMAP 的邮件代理服务器。Nginx 是一款高性能的 web 服务器,其特点是功能丰富,结构简单以及内存占用低。 第一个版本由 Igor Sysoev 发布于2002年,到现在有很多大型科技公司在使用,包括 Netflix、 Github、 Cloudflare、 WordPress.com 等等。
|
||||
|
||||
在这篇教程里我们会“**在 freebsd 10.2 系统上,安装和配置 Nginx 网络服务器作为 Apache 的反向代理**”。 Apache 将在8080端口上运行 PHP ,而我们会配置 Nginx 运行在80端口以接收用户/访问者的请求。如果80端口接收到用户浏览器的网页请求,那么 Nginx 会将该请求传递给运行在8080端口上的 Apache 网络服务器和 PHP。
|
||||
|
||||
#### 前提条件 ####
|
||||
|
||||
- FreeBSD 10.2
|
||||
- Root 权限
|
||||
|
||||
### 步骤 1 - 更新系统 ###
|
||||
|
||||
使用 SSH 认证方式登录到你的 FreeBSD 服务器,使用下面命令来更新你的系统:
|
||||
|
||||
freebsd-update fetch
|
||||
freebsd-update install
|
||||
|
||||
### 步骤 2 - 安装 Apache ###
|
||||
|
||||
Apache 是开源的、使用范围最广的 web 服务器。在 FreeBSD 里默认没有安装 Apache, 但是我们可以直接通过 /usr/ports/www/apache24 下的 ports 或软件包来安装,也可以直接使用 pkg 命令从 FreeBSD 软件库中安装。在本教程中,我们将使用 pkg 命令从 FreeBSD 软件库中安装:
|
||||
|
||||
pkg install apache24
|
||||
|
||||
### 步骤 3 - 安装 PHP ###
|
||||
|
||||
一旦成功安装 Apache,接着将会安装 PHP ,它来负责处理用户对 PHP 文件的请求。我们将会用到如下的 pkg 命令来安装 PHP:
|
||||
|
||||
pkg install php56 mod_php56 php56-mysql php56-mysqli
|
||||
|
||||
### 步骤 4 - 配置 Apache 和 PHP ###
|
||||
|
||||
一旦所有都安装好了,我们将会配置 Apache 运行在8080端口上, 并让 PHP 与 Apache 一同工作。 要想配置Apache,我们可以编辑“httpd.conf”这个配置文件, 对于 PHP 我们只需要复制 “/usr/local/etc/”目录下的 PHP 配置文件 php.ini。
|
||||
|
||||
进入到“/usr/local/etc/”目录,并且复制 php.ini-production 文件到 php.ini :
|
||||
|
||||
cd /usr/local/etc/
|
||||
cp php.ini-production php.ini
|
||||
|
||||
下一步,在 Apache 目录下通过编辑“httpd.conf”文件来配置 Apache:
|
||||
|
||||
cd /usr/local/etc/apache24
|
||||
nano -c httpd.conf
|
||||
|
||||
端口配置在第**52**行 :
|
||||
|
||||
Listen 8080
|
||||
|
||||
服务器名称配置在第**219**行:
|
||||
|
||||
ServerName 127.0.0.1:8080
|
||||
|
||||
在第**277**行,添加 DirectoryIndex 文件,Apache 将用它来服务对目录的请求:
|
||||
|
||||
DirectoryIndex index.php index.html
|
||||
|
||||
在第**287**行下,配置 Apache ,添加脚本支持:
|
||||
|
||||
<FilesMatch "\.php$">
|
||||
SetHandler application/x-httpd-php
|
||||
</FilesMatch>
|
||||
<FilesMatch "\.phps$">
|
||||
SetHandler application/x-httpd-php-source
|
||||
</FilesMatch>
|
||||
|
||||
保存并退出。
|
||||
|
||||
现在用 sysrc 命令,来添加 Apache 为开机启动项目:
|
||||
|
||||
sysrc apache24_enable=yes
|
||||
|
||||
然后用下面的命令测试 Apache 的配置:
|
||||
|
||||
apachectl configtest
|
||||
|
||||
如果到这里都没有问题的话,那么就启动 Apache 吧:
|
||||
|
||||
service apache24 start
|
||||
|
||||
如果全部完毕,在“/usr/local/www/apache24/data”目录下创建一个 phpinfo 文件来验证 PHP 在 Apache 下顺利运行:
|
||||
|
||||
cd /usr/local/www/apache24/data
|
||||
echo "<?php phpinfo(); ?>" > info.php
|
||||
|
||||
现在就可以访问 freebsd 的服务器 IP : 192.168.1.123:8080/info.php 。
|
||||
|
||||
![Apache and PHP on Port 8080](http://blog.linoxide.com/wp-content/uploads/2015/11/Apache-and-PHP-on-Port-8080.png)
|
||||
|
||||
Apache 及 PHP 运行在 8080 端口。
|
||||
|
||||
### 步骤 5 - 安装 Nginx ###
|
||||
|
||||
Nginx 可以以较低内存占用提供高性能的 Web 服务器和反向代理服务器。在这个步骤里,我们将会使用 Nginx 作为Apache 的反向代理,因此让我们用 pkg 命令来安装它吧:
|
||||
|
||||
pkg install nginx
|
||||
|
||||
### 步骤 6 - 配置 Nginx ###
|
||||
|
||||
一旦 Nginx 安装完毕,我们需要创建一个新的“**nginx.conf**”配置文件来替换掉原来的 nginx 配置文件。切换到“/usr/local/etc/nginx/”目录下,并备份默认的 nginx.conf 文件:
|
||||
|
||||
cd /usr/local/etc/nginx/
|
||||
mv nginx.conf nginx.conf.original
|
||||
|
||||
现在就可以创建一个新的 nginx 配置文件了:
|
||||
|
||||
nano -c nginx.conf
|
||||
|
||||
然后粘贴下面的配置:
|
||||
|
||||
user www;
|
||||
worker_processes 1;
|
||||
error_log /var/log/nginx/error.log;
|
||||
|
||||
events {
|
||||
worker_connections 1024;
|
||||
}
|
||||
|
||||
http {
|
||||
include mime.types;
|
||||
default_type application/octet-stream;
|
||||
|
||||
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
|
||||
'$status $body_bytes_sent "$http_referer" '
|
||||
'"$http_user_agent" "$http_x_forwarded_for"';
|
||||
access_log /var/log/nginx/access.log;
|
||||
|
||||
sendfile on;
|
||||
keepalive_timeout 65;
|
||||
|
||||
# Nginx cache configuration
|
||||
proxy_cache_path /var/nginx/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
|
||||
proxy_temp_path /var/nginx/cache/tmp;
|
||||
proxy_cache_key "$scheme$host$request_uri";
|
||||
|
||||
gzip on;
|
||||
|
||||
server {
|
||||
#listen 80;
|
||||
server_name _;
|
||||
|
||||
location /nginx_status {
|
||||
|
||||
stub_status on;
|
||||
access_log off;
|
||||
}
|
||||
|
||||
# redirect server error pages to the static page /50x.html
|
||||
#
|
||||
error_page 500 502 503 504 /50x.html;
|
||||
location = /50x.html {
|
||||
root /usr/local/www/nginx-dist;
|
||||
}
|
||||
|
||||
# proxy the PHP scripts to Apache listening on 127.0.0.1:8080
|
||||
#
|
||||
location ~ \.php$ {
|
||||
proxy_pass http://127.0.0.1:8080;
|
||||
include /usr/local/etc/nginx/proxy.conf;
|
||||
}
|
||||
}
|
||||
|
||||
include /usr/local/etc/nginx/vhost/*;
|
||||
|
||||
}
|
||||
|
||||
保存并退出。
|
||||
|
||||
下一步,在 nginx 目录下面,创建一个 **proxy.conf** 文件,使其作为反向代理 :
|
||||
|
||||
cd /usr/local/etc/nginx/
|
||||
nano -c proxy.conf
|
||||
|
||||
粘贴如下配置:
|
||||
|
||||
proxy_buffering on;
|
||||
proxy_redirect off;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
client_max_body_size 10m;
|
||||
client_body_buffer_size 128k;
|
||||
proxy_connect_timeout 90;
|
||||
proxy_send_timeout 90;
|
||||
proxy_read_timeout 90;
|
||||
proxy_buffers 100 8k;
|
||||
add_header X-Cache $upstream_cache_status;
|
||||
|
||||
保存并退出。
|
||||
|
||||
最后一步,为 nginx 的高速缓存创建一个“/var/nginx/cache”的新目录:
|
||||
|
||||
mkdir -p /var/nginx/cache
|
||||
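因为前面 nginx.conf 里配置的 worker 进程是以 www 用户运行的,所以把缓存目录的属主也改给它会更稳妥。下面是一个示意命令(FreeBSD 自带 www 用户和组):

chown -R www:www /var/nginx/cache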
|
||||
### 步骤 7 - 配置 Nginx 的虚拟主机 ###
|
||||
|
||||
在这个步骤里面,我们需要创建一个新的虚拟主机域“saitama.me”,其文档根目录为“/usr/local/www/saitama.me”,日志文件放在“/var/log/nginx”目录下。
|
||||
|
||||
我们必须做的第一件事情就是创建新的目录来存放虚拟主机配置文件,我们创建的新目录名为“**vhost**”。创建它:
|
||||
|
||||
cd /usr/local/etc/nginx/
|
||||
mkdir vhost
|
||||
|
||||
创建好 vhost 目录,然后我们就进入这个目录并创建一个新的虚拟主机文件。这里我取名为“**saitama.conf**”:
|
||||
|
||||
cd vhost/
|
||||
nano -c saitama.conf
|
||||
|
||||
粘贴如下虚拟主机的配置:
|
||||
|
||||
server {
|
||||
# Replace with your freebsd IP
|
||||
listen 192.168.1.123:80;
|
||||
|
||||
# Document Root
|
||||
root /usr/local/www/saitama.me;
|
||||
index index.php index.html index.htm;
|
||||
|
||||
# Domain
|
||||
server_name www.saitama.me saitama.me;
|
||||
|
||||
# Error and Access log file
|
||||
error_log /var/log/nginx/saitama-error.log;
|
||||
access_log /var/log/nginx/saitama-access.log main;
|
||||
|
||||
# Reverse Proxy Configuration
|
||||
location ~ \.php$ {
|
||||
proxy_pass http://127.0.0.1:8080;
|
||||
include /usr/local/etc/nginx/proxy.conf;
|
||||
|
||||
# Cache configuration
|
||||
proxy_cache my-cache;
|
||||
proxy_cache_valid 10s;
|
||||
proxy_no_cache $cookie_PHPSESSID;
|
||||
proxy_cache_bypass $cookie_PHPSESSID;
|
||||
proxy_cache_key "$scheme$host$request_uri";
|
||||
|
||||
}
|
||||
|
||||
# Disable Cache for the file type html, json
|
||||
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
|
||||
expires -1;
|
||||
}
|
||||
|
||||
# Enable Cache the file 30 days
|
||||
location ~* \.(jpg|png|gif|jpeg|css|mp3|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$ {
|
||||
proxy_cache_valid 200 120m;
|
||||
expires 30d;
|
||||
proxy_cache my-cache;
|
||||
access_log off;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
保存并退出。
|
||||
|
||||
下一步,为 nginx 和虚拟主机创建一个新的日志目录“/var/log/”:
|
||||
|
||||
mkdir -p /var/log/nginx/
|
||||
|
||||
一切就绪后,在 web 根目录 /usr/local/www 下创建 saitama.me 目录,用作该虚拟主机的文档根目录:
|
||||
|
||||
cd /usr/local/www/
|
||||
mkdir saitama.me
|
||||
|
||||
### 步骤 8 - 测试 ###
|
||||
|
||||
在这个步骤里面,我们只是测试我们的 nginx 和虚拟主机的配置。
|
||||
|
||||
用如下命令测试 nginx 的配置:
|
||||
|
||||
nginx -t
|
||||
|
||||
如果一切都没有问题,用 sysrc 命令添加 nginx 为开机启动项,并且启动 nginx 和重启 apache:
|
||||
|
||||
sysrc nginx_enable=yes
|
||||
service nginx start
|
||||
service apache24 restart
|
||||
|
||||
一切完毕后,在 saitama.me 目录下,添加一个新的 phpinfo 文件来验证 php 的正常运行:
|
||||
|
||||
cd /usr/local/www/saitama.me
|
||||
echo "<?php phpinfo(); ?>" > info.php
|
||||
|
||||
然后访问这个域名: **www.saitama.me/info.php**。
|
||||
|
||||
![Virtualhost Configured saitamame](http://blog.linoxide.com/wp-content/uploads/2015/11/Virtualhost-Configured-saitamame.png)
|
||||
|
||||
Nginx 作为 Apache 的反向代理运行了,PHP 也同样工作了。
|
||||
|
||||
这是另一个结果:
|
||||
|
||||
测试无缓存的 .html 文件。
|
||||
|
||||
curl -I www.saitama.me
|
||||
|
||||
![html with no-cache](http://blog.linoxide.com/wp-content/uploads/2015/11/html-with-no-cache.png)
|
||||
|
||||
测试有三十天缓存的 .css 文件。
|
||||
|
||||
curl -I www.saitama.me/test.css
|
||||
|
||||
![css file 30day cache](http://blog.linoxide.com/wp-content/uploads/2015/11/css-file-30day-cache.png)
|
||||
|
||||
测试缓存的 .php 文件:
|
||||
|
||||
curl -I www.saitama.me/info.php
|
||||
|
||||
![PHP file cached](http://blog.linoxide.com/wp-content/uploads/2015/11/PHP-file-cached.png)
|
||||
|
||||
全部搞定。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
Nginx 是最受欢迎的 HTTP 和反向代理服务器,拥有丰富的功能、高性能、低内存/RAM 占用。Nginx 也用于缓存, 我们可以在网络上缓存静态文件使得网页加速,并且缓存用户请求的 php 文件。 Nginx 容易配置和使用,可以将它用作 HTTP 服务器或者 apache 的反向代理。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/install-nginx-reverse-proxy-apache-freebsd-10-2/
|
||||
|
||||
作者:[Arul][a]
|
||||
译者:[KnightJoker](https://github.com/KnightJoker)
|
||||
校对:[Caroline](https://github.com/carolinewuyan),[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arulm/
|
@ -0,0 +1,162 @@
|
||||
在 Debian Linux 上安装配置 ISC DHCP 服务器
|
||||
================================================================================
|
||||
|
||||
动态主机配置协议(Dynamic Host Configuration Protocol,DHCP)给网络管理员提供了一种便捷的方式,为不断变化的网络主机或是动态网络提供网络层地址。其中最常用的 DHCP 服务工具是 ISC DHCP Server。DHCP 服务的目的是给主机提供必要的网络信息以便能够和其他连接在网络中的主机互相通信。DHCP 服务提供的信息包括:DNS 服务器信息,网络地址(IP),子网掩码,默认网关信息,主机名等等。
|
||||
|
||||
本教程介绍运行在 Debian 7.7 上 4.2.4 版的 ISC-DHCP-Server 如何管理多个虚拟局域网(VLAN),也可以非常容易应用到单一网络上。
|
||||
|
||||
测试用的网络是通过思科路由器使用传统的方式来管理 DHCP 租约地址的。目前有 12 个 VLAN 需要通过集中式服务器来管理。把 DHCP 的任务转移到一个专用的服务器上,路由器可以收回相应的资源,把资源用到更重要的任务上,比如路由寻址,访问控制列表,流量监测以及网络地址转换等。
|
||||
|
||||
另一个将 DHCP 服务转移到专用服务器的好处,以后会讲到,它可以建立动态域名服务器(DDNS),这样当主机从服务器请求 DHCP 地址的时候,这样新主机的主机名就会被添加到 DNS 系统里面。
|
||||
|
||||
### 安装和配置 ISC DHCP 服务器###
|
||||
|
||||
1、使用 apt 工具用来安装 Debian 软件仓库中的 ISC 软件,来创建这个多宿主服务器。与其他教程一样需要使用 root 或者 sudo 访问权限。请适当的修改,以便使用下面的命令。(LCTT 译注:下面中括号里面是注释,使用的时候请删除,#表示使用的 root 权限)
|
||||
|
||||
# apt-get install isc-dhcp-server [安装 the ISC DHCP Server 软件]
|
||||
# dpkg --get-selections isc-dhcp-server [确认软件已经成功安装]
|
||||
# dpkg -s isc-dhcp-server [用另一种方式确认成功安装]
|
||||
|
||||
![Install ISC DHCP Server in Debian](http://www.tecmint.com/wp-content/uploads/2015/04/Install-ISC-DHCP-Server.jpg)
|
||||
|
||||
2、 确认服务软件已经安装完成,现在需要提供网络信息来配置服务器,这样服务器才能够根据我们的需要来分发网络信息。作为管理员最起码需要了解的 DHCP 信息如下:
|
||||
|
||||
- 网络地址
|
||||
- 子网掩码
|
||||
- 动态分配的地址范围
|
||||
|
||||
其他一些服务器动态分配的有用信息包括:
|
||||
|
||||
- 默认网关
|
||||
- DNS 服务器 IP 地址
|
||||
- 域名
|
||||
- 主机名
|
||||
- 网络广播地址
|
||||
|
||||
这只是能让 ISC DHCP 服务器处理的选项中非常少的一部分。如果你想查看所有选项及其描述需要在安装好软件后输入以下命令:
|
||||
|
||||
# man dhcpd.conf
|
||||
|
||||
3、 一旦管理员已经确定了这台服务器分发的所有必要信息,那么是时候配置服务器并且分配必要的地址池了。在配置任何地址池或服务器配置之前,必须配置 DHCP 服务器侦听这台服务器上面的一个接口。
|
||||
|
||||
在这台特定的服务器上,设置好网卡后,DHCP 会侦听名称名为`'bond0'`的接口。请适根据你的实际情况来更改服务器以及网络环境。下面的配置都是针对本教程的。
|
||||
|
||||
![Configure ISC DHCP Network](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DHCP-Network.jpg)
|
||||
|
||||
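在 Debian 上,这个侦听接口的设置通常位于 /etc/default/isc-dhcp-server 文件中,写出来大致如下(这里的接口名只是示意,请换成你自己的):

# /etc/default/isc-dhcp-server 中指定 DHCP 侦听的接口(示意)
INTERFACES="bond0"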
这行指定的是 DHCP 服务侦听接口(一个或多个)上的 DHCP 流量。修改主配置文件,分配适合的 DHCP 地址池到所需要的网络上。主配置文件在 /etc/dhcp/dhcpd.conf。用文本编辑器打开这个文件
|
||||
|
||||
# nano /etc/dhcp/dhcpd.conf
|
||||
|
||||
这个配置文件可以配置我们所需要的地址池/主机。文件顶部有 ‘ddns-update-style‘ 这样一句,在本教程中它设置为 ‘none‘。在以后的教程中会讲到动态 DNS,ISC-DHCP-Server 将会与 BIND9 集成,它能够使主机名更新指向到 IP 地址。
|
||||
|
||||
4、 接下来的部分是管理员配置全局网络设置,如 DNS 域名,默认的租约时间,IP地址,子网的掩码,以及其它。如果你想了解所有的选项,请阅读 man 手册中的 dhcpd.conf 文件,命令如下:
|
||||
|
||||
# man dhcpd.conf
|
||||
|
||||
对于这台服务器,我们需要在配置文件顶部配置一些全局网络设置,这样就不用到每个地址池中去单独设置了。
|
||||
|
||||
![Configure ISC DDNS](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DDNS.png)
|
||||
|
||||
我们花一点时间来解释一下这些选项,在本教程中虽然它们是一些全局设置,但是也可以单独的为某一个地址池进行配置。
|
||||
|
||||
- option domain-name “comptech.local”; – 所有使用这台 DHCP 服务器的主机,都将成为 DNS 域 “comptech.local” 的一员
|
||||
|
||||
- option domain-name-servers 172.27.10.6; DHCP 向所有配置这台 DHCP 服务器的的网络主机分发 DNS 服务器地址为 172.27.10.6
|
||||
|
||||
- option subnet-mask 255.255.255.0; – 每个网络设备都分配子网掩码 255.255.255.0 或 /24
|
||||
|
||||
- default-lease-time 3600; – 默认有效的地址租约时间(单位是秒)。如果租约时间耗尽,那么主机可以重新申请租约。如果租约完成,那么相应的地址也将被尽快回收。
|
||||
|
||||
- max-lease-time 86400; – 这是一台主机所能租用的最大的租约时间(单位为秒)。
|
||||
|
||||
- ping-check true; – 这是一个额外的测试,以确保服务器分发出的网络地址不是当前网络中另一台主机已使用的网络地址。
|
||||
|
||||
- ping-timeout; – 在判断地址以前没有使用过前,服务器将等待 ping 响应多少秒。
|
||||
|
||||
- ignore client-updates; 现在这个选项是可以忽略的,因为 DDNS 在前面已在配置文件中已经被禁用,但是当 DDNS 运行时,这个选项会忽略主机更新其 DNS 主机名的请求。
|
||||
|
||||
5、 文件中下面一行是权威 DHCP 所在行。这行的意义是如果服务器是为文件中所配置的网络分发地址的服务器,那么取消对该权威关键字(authoritative stanza) 的注释。
|
||||
|
||||
通过去掉关键字 authoritative 前面的 ‘#’,取消注释全局权威关键字。这台服务器将是它所管理网络里面的唯一权威。
|
||||
|
||||
![Enable ISC Authoritative](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-authoritative.png)
|
||||
|
||||
默认情况下服务器被假定为**不是**网络上的权威服务器。之所以这样做是出于安全考虑。如果有人因为不了解 DHCP 服务的配置,导致配置不当或配置到一个不该出现的网络里面,这都将带来非常严重的连接问题。这行还可用在每个网络中单独配置使用。也就是说如果这台服务器不是整个网络的 DHCP 服务器,authoritative 行可以用在每个单独的网络中,而不是像上面截图中那样的全局配置。
|
||||
|
||||
6、 这一步是配置服务器将要管理的所有 DHCP 地址池/网络。简短起见,本教程只讲到配置的地址池之一。作为管理员需要收集一些必要的网络信息(比如域名,网络地址,有多少地址能够被分发等等)
|
||||
|
||||
以下是这个地址池所用到的、由管理员收集整理的信息:网络 ID 172.27.60.0,子网掩码 255.255.255.0 或 /24,默认网关 172.27.60.1,广播地址 172.27.60.255。
|
||||
|
||||
以上这些信息对于构建 dhcpd.conf 文件中新网络非常重要。使用文本编辑器修改配置文件添加新网络进去,这里我们需要使用 root 或 sudo 访问权限。
|
||||
|
||||
# nano /etc/dhcp/dhcpd.conf
|
||||
|
||||
![Configure DHCP Pools and Networks](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-network.png)
|
||||
|
||||
当前这个例子是给用 VMWare 创建的虚拟服务器分配 IP 地址。第一行显示是该网络的子网掩码。括号里面的内容是 DHCP 服务器应该提供给网络上面主机的所有选项。
|
||||
|
||||
第一行, range 172.27.60.50 172.27.60.254; 这一行显示的是,DHCP 服务在这个网络上能够给主机动态分发的地址范围。
|
||||
|
||||
第二行,option routers 172.27.60.1; 这里显示的是给网络里面所有的主机分发的默认网关地址。
|
||||
|
||||
最后一行, option broadcast-address 172.27.60.255; 显示当前网络的广播地址。这个地址不能被包含在要分发放的地址范围内,因为广播地址不能分配到一个主机上面。
|
||||
|
||||
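把上面提到的几行合起来,这个网络在 dhcpd.conf 里的定义大致如下(数值沿用上文给出的示例,仅供参考):

subnet 172.27.60.0 netmask 255.255.255.0 {
    range 172.27.60.50 172.27.60.254;
    option routers 172.27.60.1;
    option broadcast-address 172.27.60.255;
}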
必须要强调的是每行的结尾必须要用(;)来结束,所有创建的网络必须要在 {} 里面。
|
||||
|
||||
7、 如果要创建多个网络,继续创建完它们的相应选项后保存文本文件即可。配置完成以后如果有更改,ISC-DHCP-Server 进程需要重启来使新的更改生效。重启进程可以通过下面的命令来完成:
|
||||
|
||||
# service isc-dhcp-server restart
|
||||
|
||||
这条命令将重启 DHCP 服务,管理员能够使用几种不同的方式来检查服务器是否已经可以处理 dhcp 请求。最简单的方法是通过 [lsof 命令][1]来查看服务器是否在侦听67端口,命令如下:
|
||||
|
||||
# lsof -i :67
|
||||
|
||||
![Check DHCP Listening Port](http://www.tecmint.com/wp-content/uploads/2015/04/lsof.png)
|
||||
|
||||
这里输出的结果表明 dhcpd(DHCP 服务守护进程)正在运行并且侦听67端口。由于在 /etc/services 文件中67端口的映射,所以输出中的67端口实际上被转换成了 “bootps”。
|
||||
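如果系统里没有安装 lsof,也可以用 iproute2 自带的 ss 命令来确认(示意命令):

# ss -lunp | grep ':67'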
|
||||
在大多数的系统中这是非常常见的,现在服务器应该已经为网络连接做好准备,我们可以将一台主机接入网络请求DHCP地址来验证服务是否正常。
|
||||
|
||||
### 测试客户端连接 ###
|
||||
|
||||
8、 现在许多系统使用网络管理器来维护网络连接状态,因此这个设备应该预先配置好的,只要对应的接口处于活跃状态就能够获取 DHCP。
|
||||
|
||||
然而当一台设备无法使用网络管理器时,它可能需要手动获取 DHCP 地址。下面的几步将演示怎样手动获取以及如何查看服务器是否已经按需要分发地址。
|
||||
|
||||
‘[ifconfig][2]‘工具能够用来检查接口的配置。这台被用来测试的 DHCP 服务器的设备,它只有一个网络适配器(网卡),这块网卡被命名为 ‘eth0‘。
|
||||
|
||||
# ifconfig eth0
|
||||
|
||||
![Check Network Interface IP Address](http://www.tecmint.com/wp-content/uploads/2015/04/No-ip.png)
|
||||
|
||||
从输出结果上看,这台设备目前没有 IPv4 地址,这样很便于测试。我们把这台设备连接到 DHCP 服务器并发出一个请求。这台设备上已经安装了一个名为 ‘dhclient‘ 的DHCP客户端工具。因为操作系统各不相同,所以这个客户端软件也是互不一样的。
|
||||
|
||||
# dhclient eth0
|
||||
|
||||
![Request IP Address from DHCP](http://www.tecmint.com/wp-content/uploads/2015/04/IP.png)
|
||||
|
||||
当前 `'inet addr:'` 字段中显示了属于 172.27.60.0 网络地址范围内的 IPv4 地址。值得欣慰的是当前网络还配置了正确的子网掩码并且分发了广播地址。
|
||||
|
||||
到这里看起来还都不错,让我们来测试一下,看看这台设备收到新 IP 地址是不是由服务器发出的。这里我们参照服务器的日志文件来完成这个任务。虽然这个日志的内容有几十万条,但是里面只有几条是用来确定服务器是否正常工作的。这里我们使用一个工具 ‘tail’,它只显示日志文件的最后几行,这样我们就可以不用拿一个文本编辑器去查看所有的日志文件了。命令如下:
|
||||
|
||||
# tail /var/log/syslog
|
||||
|
||||
![Check DHCP Logs](http://www.tecmint.com/wp-content/uploads/2015/04/DHCP-Log.png)
|
||||
|
||||
OK!服务器记录表明它分发了一个地址给这台主机 (HRTDEBXENSRV)。服务器按预期运行,给它充当权威服务器的网络分发了适合的网络地址。至此 DHCP 服务器搭建成功并且运行。如果有需要你可以继续配置其他的网络,排查故障,确保安全。
|
||||
|
||||
在以后的Debian教程中我会讲一些新的 ISC-DHCP-Server 功能。有时间的话我将写一篇关于 Bind9 和 DDNS 的教程,融入到这篇文章里面。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/install-and-configure-multihomed-isc-dhcp-server-on-debian-linux/
|
||||
|
||||
作者:[Rob Turner][a]
|
||||
译者:[ivo-wang](https://github.com/ivo-wang)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/robturner/
|
||||
[1]:http://www.tecmint.com/10-lsof-command-examples-in-linux/
|
||||
[2]:http://www.tecmint.com/ifconfig-command-examples/
|
@ -0,0 +1,435 @@
|
||||
如何在 Ubuntu 15.04 中安装 puppet
|
||||
================================================================================
|
||||
|
||||
大家好,本教程将学习如何在 ubuntu 15.04 上面安装 puppet,它可以用来管理你的服务器基础环境。puppet 是由 puppet 实验室(Puppet Labs)开发并维护的一款开源的配置管理软件,它能够帮我们自动化供给、配置和管理服务器的基础环境。不管我们管理的是几台服务器还是数以千计的计算机组成的业务体系,puppet 都能够使管理员从繁琐的手动配置调整中解放出来,腾出时间和精力去提升系统的整体效率。它能够确保所有自动化流程作业的一致性、可靠性以及稳定性。它让管理员和开发者更紧密地联系在一起,使开发者更容易产出设计良好、简洁清晰的代码。puppet 提供了配置管理和数据中心自动化两个解决方案,分别是 **puppet 开源版** 和 **puppet 企业版**。puppet 开源版以 Apache 2.0 许可证发布,它是一个非常灵活、可定制的解决方案,设计初衷是帮助管理员去完成那些重复性操作工作。puppet 企业版是一个面向复杂 IT 环境的全平台成熟解决方案,它除了拥有开源版本的所有优势以外,还提供移动端 app、只有商业版才有的加强支持,以及模块化和集成管理等。Puppet 使用 SSL 证书来认证主控服务器与代理节点之间的通信。
|
||||
|
||||
本教程将要介绍如何在运行 ubuntu 15.04 的主控服务器和代理节点上面安装开源版的 puppet。在这里,我们用一台服务器做主控服务器(master),管理和控制剩余的当作 puppet 代理节点(agent node)的服务器,这些代理节点将依据主控服务器来进行配置。在 ubuntu 15.04 只需要简单的几步就能安装配置好 puppet,用它来管理我们的服务器基础环境非常的方便。(LCTT 译注:puppet 采用 C/S 架构,所以必须有至少有一台作为服务器,其他作为客户端处理)
|
||||
|
||||
### 1.设置主机文件 ###
|
||||
|
||||
在本教程里,我们将使用2台运行 ubuntu 15.04 “Vivid Vervet" 的主机,一台作为主控服务器,另一台作为 puppet 的代理节点。下面是我们将用到的服务器的基础信息。
|
||||
|
||||
- puppet 主控服务器 IP:44.55.88.6,主机名:puppetmaster
|
||||
- puppet 代理节点 IP: 45.55.86.39 ,主机名: puppetnode
|
||||
|
||||
我们要在代理节点和服务器这两台机器的 hosts 文件里面都添加上相应的条目,使用 root 或是 sudo 访问权限来编辑 /etc/hosts 文件,命令如下:
|
||||
|
||||
# nano /etc/hosts
|
||||
|
||||
45.55.88.6 puppetmaster.example.com puppetmaster
|
||||
45.55.86.39 puppetnode.example.com puppetnode
|
||||
|
||||
注意,puppet 主控服务器必须使用 8140 端口来运行,所以请务必保证 8140 端口是开启的。
|
||||
|
||||
### 2. 用 NTP 更新时间 ###
|
||||
|
||||
puppet 代理节点所使用系统时间必须要准确,这样可以避免代理证书出现问题。如果有时间差异,那么证书将过期失效,所以服务器与代理节点的系统时间必须互相同步。我们使用 NTP(Network Time Protocol,网络时间协议)来同步时间。**在服务器与代理节点上面分别**运行以下命令来同步时间。
|
||||
|
||||
# ntpdate pool.ntp.org
|
||||
|
||||
17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec
|
||||
|
||||
(LCTT 译注:显示类似的输出结果表示运行正常)
|
||||
|
||||
如果没有安装 ntp,请使用下面的命令更新你的软件仓库,安装并运行ntp服务
|
||||
|
||||
# apt-get update && sudo apt-get -y install ntp ; service ntp restart
|
||||
|
||||
### 3. 安装主控服务器软件 ###
|
||||
|
||||
安装开源版本的 puppet 有很多的方法。在本教程中我们在 puppet 实验室官网下载一个名为 puppetlabs-release 的软件包的软件源,安装后它将为我们在软件源里面添加 puppetmaster-passenger。puppetmaster-passenger 包括带有 apache 的 puppet 主控服务器。我们开始下载这个软件包:
|
||||
|
||||
# cd /tmp/
|
||||
# wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
|
||||
|
||||
--2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
|
||||
Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
|
||||
Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
|
||||
HTTP request sent, awaiting response... 200 OK
|
||||
Length: 7384 (7.2K) [application/x-debian-package]
|
||||
Saving to: ‘puppetlabs-release-trusty.deb’
|
||||
|
||||
puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s
|
||||
|
||||
2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384]
|
||||
|
||||
下载完成,我们来安装它:
|
||||
|
||||
# dpkg -i puppetlabs-release-trusty.deb
|
||||
|
||||
Selecting previously unselected package puppetlabs-release.
|
||||
(Reading database ... 85899 files and directories currently installed.)
|
||||
Preparing to unpack puppetlabs-release-trusty.deb ...
|
||||
Unpacking puppetlabs-release (1.0-11) ...
|
||||
Setting up puppetlabs-release (1.0-11) ...
|
||||
|
||||
使用 apt 包管理命令更新一下本地的软件源:
|
||||
|
||||
# apt-get update
|
||||
|
||||
现在我们就可以安装 puppetmaster-passenger 了
|
||||
|
||||
# apt-get install puppetmaster-passenger
|
||||
|
||||
**提示**: 在安装的时候可能会报错:
|
||||
|
||||
Warning: Setting templatedir is deprecated.see http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')
|
||||
|
||||
不过不用担心,忽略掉它就好,我们只需要在设置配置文件的时候把这一项禁用就行了。
|
||||
|
||||
如何来查看puppet 主控服务器是否已经安装成功了呢?非常简单,只需要使用下面的命令查看它的版本就可以了。
|
||||
|
||||
# puppet --version
|
||||
|
||||
3.8.1
|
||||
|
||||
现在我们已经安装好了 puppet 主控服务器。因为我们使用的是配合 apache 的 passenger,由 apache 来控制 puppet 主控服务器,当 apache 运行时 puppet 主控才运行。
|
||||
|
||||
在开始之前,我们需要通过停止 apache 服务来让 puppet 主控服务器停止运行。
|
||||
|
||||
# systemctl stop apache2
|
||||
|
||||
### 4. 使用 Apt 工具锁定主控服务器的版本 ###
|
||||
|
||||
现在已经安装了 3.8.1 版的 puppet,我们锁定这个版本不让它随意升级,因为升级会造成配置文件混乱。 使用 apt 工具来锁定它,这里我们需要使用文本编辑器来创建一个新的文件 **/etc/apt/preferences.d/00-puppet.pref**
|
||||
|
||||
# nano /etc/apt/preferences.d/00-puppet.pref
|
||||
|
||||
在新创建的文件里面添加以下内容:
|
||||
|
||||
# /etc/apt/preferences.d/00-puppet.pref
|
||||
Package: puppet puppet-common puppetmaster-passenger
|
||||
Pin: version 3.8*
|
||||
Pin-Priority: 501
|
||||
|
||||
这样在以后的系统软件升级中, puppet 主控服务器将不会跟随系统软件一起升级。
|
||||
|
||||
### 5. 配置 Puppet 主控服务器###
|
||||
|
||||
Puppet 主控服务器作为一个证书发行机构,需要生成它自己的证书,用于签署所有代理的证书的请求。首先我们要删除所有在该软件包安装过程中创建出来的 ssl 证书。本地默认的 puppet 证书放在 /var/lib/puppet/ssl。因此我们只需要使用 rm 命令来整个移除这些证书就可以了。
|
||||
|
||||
# rm -rf /var/lib/puppet/ssl
|
||||
|
||||
现在来配置该证书,在创建 puppet 主控服务器证书时,我们需要包括代理节点与主控服务器沟通所用的每个 DNS 名称。使用文本编辑器来修改服务器的配置文件 puppet.conf
|
||||
|
||||
# nano /etc/puppet/puppet.conf
|
||||
|
||||
输出的结果像下面这样
|
||||
|
||||
[main]
|
||||
logdir=/var/log/puppet
|
||||
vardir=/var/lib/puppet
|
||||
ssldir=/var/lib/puppet/ssl
|
||||
rundir=/var/run/puppet
|
||||
factpath=$vardir/lib/facter
|
||||
templatedir=$confdir/templates
|
||||
|
||||
[master]
|
||||
# These are needed when the puppetmaster is run by passenger
|
||||
# and can safely be removed if webrick is used.
|
||||
ssl_client_header = SSL_CLIENT_S_DN
|
||||
ssl_client_verify_header = SSL_CLIENT_VERIFY
|
||||
|
||||
在这我们需要注释掉 templatedir 这行使它失效。然后在文件的 `[main]` 小节的结尾添加下面的信息。
|
||||
|
||||
server = puppetmaster
|
||||
environment = production
|
||||
runinterval = 1h
|
||||
strict_variables = true
|
||||
certname = puppetmaster
|
||||
dns_alt_names = puppetmaster, puppetmaster.example.com
|
||||
|
||||
还有很多你可能用的到的配置选项。 如果你有需要,在 Puppet 实验室有一份详细的描述文件供你阅读: [Main Config File (puppet.conf)][1]。
|
||||
|
||||
编辑完成后保存退出。
|
||||
|
||||
使用下面的命令来生成一个新的证书。
|
||||
|
||||
# puppet master --verbose --no-daemonize
|
||||
|
||||
Info: Creating a new SSL key for ca
|
||||
Info: Creating a new SSL certificate request for ca
|
||||
Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78
|
||||
...
|
||||
Notice: puppetmaster has a waiting certificate request
|
||||
Notice: Signed certificate request for puppetmaster
|
||||
Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem'
|
||||
Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem'
|
||||
Notice: Starting Puppet master version 3.8.1
|
||||
^CNotice: Caught INT; storing stop
|
||||
Notice: Processing stop
|
||||
|
||||
至此,证书已经生成。一旦我们看到 **Notice: Starting Puppet master version 3.8.1**,就表明证书已经制作好了。我们按下 CTRL-C 回到 shell 命令行。
|
||||
|
||||
查看新生成证书的信息,可以使用下面的命令。
|
||||
|
||||
# puppet cert list -all
|
||||
|
||||
+ "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
|
||||
|
||||
### 6. 创建一个 Puppet 清单 ###
|
||||
|
||||
默认的主要清单(Manifest)是 /etc/puppet/manifests/site.pp。 这个主要清单文件包括了用于在代理节点执行的配置定义。现在我们来创建一个清单文件:
|
||||
|
||||
# nano /etc/puppet/manifests/site.pp
|
||||
|
||||
在刚打开的文件里面添加下面这几行:
|
||||
|
||||
# execute 'apt-get update'
|
||||
exec { 'apt-update': # exec resource named 'apt-update'
|
||||
command => '/usr/bin/apt-get update' # command this resource will run
|
||||
}
|
||||
|
||||
# install apache2 package
|
||||
package { 'apache2':
|
||||
require => Exec['apt-update'], # require 'apt-update' before installing
|
||||
ensure => installed,
|
||||
}
|
||||
|
||||
# ensure apache2 service is running
|
||||
service { 'apache2':
|
||||
ensure => running,
|
||||
}
|
||||
|
||||
以上这几行的意思是给代理节点部署 apache web 服务。
|
||||
|
||||
### 7. 运行 puppet 主控服务 ###
|
||||
|
||||
已经准备好运行 puppet 主控服务器 了,那么开启 apache 服务来让它启动
|
||||
|
||||
# systemctl start apache2
|
||||
|
||||
我们的 puppet 主控服务器已经运行,不过它还不能管理任何代理节点。现在我们来给 puppet 主控服务器添加代理节点。
|
||||
|
||||
**提示**: 如果报错
|
||||
|
||||
Job for apache2.service failed. see "systemctl status apache2.service" and "journalctl -xe" for details.
|
||||
|
||||
肯定是 apache 服务器有一些问题,我们可以使用 root 或是 sudo 访问权限来运行 **apachectl start**,查看它输出的日志。在本教程执行过程中,我们发现了一个 **/etc/apache2/sites-enabled/puppetmaster.conf** 的证书配置问题。将其中的 **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem** 修改为 **SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem**,然后注释掉后面那行 **SSLCertificateKeyFile**,再在命令行重新启动 apache。
|
||||
|
||||
### 8. 安装 Puppet 代理节点的软件包 ###
|
||||
|
||||
我们已经准备好了 puppet 的服务器,现在需要一个可以管理的代理节点,我们将安装 puppet 代理软件到节点上去。这里我们要给每一个需要管理的节点安装代理软件,并且确保这些节点能够通过 DNS 查询到服务器主机。下面将安装最新的代理软件到节点 puppetnode.example.com 上。
|
||||
|
||||
在代理节点上使用下面的命令下载 puppet 实验室提供的软件包:
|
||||
|
||||
# cd /tmp/
|
||||
# wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
|
||||
|
||||
--2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
|
||||
Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
|
||||
Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
|
||||
HTTP request sent, awaiting response... 200 OK
|
||||
Length: 7384 (7.2K) [application/x-debian-package]
|
||||
Saving to: ‘puppetlabs-release-trusty.deb’
|
||||
|
||||
puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s
|
||||
|
||||
2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384]
|
||||
|
||||
在 ubuntu 15.04 上我们使用debian包管理系统来安装它,命令如下:
|
||||
|
||||
# dpkg -i puppetlabs-release-trusty.deb
|
||||
|
||||
使用 apt 包管理命令更新一下本地的软件源:
|
||||
|
||||
# apt-get update
|
||||
|
||||
通过远程仓库安装:
|
||||
|
||||
# apt-get install puppet
|
||||
|
||||
Puppet 代理默认是不启动的。这里我们需要使用文本编辑器修改 /etc/default/puppet 文件,使它正常工作:
|
||||
|
||||
# nano /etc/default/puppet
|
||||
|
||||
将 **START** 的值改成 "yes"。
|
||||
|
||||
START=yes
|
||||
|
||||
最后保存并退出。
|
||||
|
||||
### 9. 使用 Apt 工具锁定代理软件的版本 ###
|
||||
|
||||
和上面的步骤一样为防止随意升级造成的配置文件混乱,我们要使用 apt 工具来把它锁定。具体做法是使用文本编辑器创建一个文件 **/etc/apt/preferences.d/00-puppet.pref**
|
||||
|
||||
# nano /etc/apt/preferences.d/00-puppet.pref
|
||||
|
||||
在新建的文件里面加入如下内容
|
||||
|
||||
# /etc/apt/preferences.d/00-puppet.pref
|
||||
Package: puppet puppet-common
|
||||
Pin: version 3.8*
|
||||
Pin-Priority: 501
|
||||
|
||||
这样 puppet 就不会随着系统软件升级而随意升级了。
|
||||
|
||||
### 10. 配置 puppet 代理节点 ###
|
||||
|
||||
我们需要编辑一下代理节点的 puppet.conf 文件,来使它运行。
|
||||
|
||||
# nano /etc/puppet/puppet.conf
|
||||
|
||||
它看起来和服务器的配置文件完全一样。同样注释掉**templatedir**这行。不同的是在这里我们需要删除掉所有关于`[master]` 的部分。
|
||||
|
||||
假定主控服务器可以通过名字“puppet-master”访问,我们的客户端应该可以和它相互连接通信。如果不行的话,我们需要使用完整的主机域名 puppetmaster.example.com
|
||||
|
||||
[agent]
|
||||
server = puppetmaster.example.com
|
||||
certname = puppetnode.example.com
|
||||
|
||||
在文件的结尾增加上面3行,增加之后文件内容像下面这样:
|
||||
|
||||
[main]
|
||||
logdir=/var/log/puppet
|
||||
vardir=/var/lib/puppet
|
||||
ssldir=/var/lib/puppet/ssl
|
||||
rundir=/var/run/puppet
|
||||
factpath=$vardir/lib/facter
|
||||
#templatedir=$confdir/templates
|
||||
|
||||
[agent]
|
||||
server = puppetmaster.example.com
|
||||
certname = puppetnode.example.com
|
||||
|
||||
最后保存并退出。
|
||||
|
||||
使用下面的命令来启动客户端软件:
|
||||
|
||||
# systemctl start puppet
|
||||
|
||||
如果一切顺利的话,我们不会看到命令行有任何输出。 第一次运行的时候,代理节点会生成一个 ssl 证书并且给服务器发送一个请求,经过签名确认后,两台机器就可以互相通信了。
|
||||
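如果想在前台直接观察这次证书请求的过程,也可以改用测试模式运行一次代理。下面是一个示意命令,--waitforcert 表示最多等待主控服务器签名 60 秒:

# puppet agent --test --waitforcert 60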
|
||||
**提示**: 如果这是你添加的第一个代理节点,建议你在添加其他节点前先给这个证书签名。一旦能够通过并正常运行,回过头来再添加其他代理节点。
|
||||
|
||||
### 11. 在主控服务器上对证书请求进行签名 ###
|
||||
|
||||
第一次运行的时候,代理节点会生成一个 ssl 证书并且给服务器发送一个签名请求。在主控服务器给代理节点服务器证书签名之后,主服务器才能和代理服务器通信并且控制代理服务器。
|
||||
|
||||
在主控服务器上使用下面的命令来列出当前的证书请求:
|
||||
|
||||
# puppet cert list
|
||||
"puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2
|
||||
|
||||
因为只设置了一台代理节点服务器,所以我们将只看到一个请求。看起来类似如上,代理节点的完整域名即其主机名。
|
||||
|
||||
注意前面有没有“+”号,它代表这个证书是否已经被签名。
|
||||
|
||||
使用带有主机名的**puppet cert sign**这个命令来签署这个签名请求,如下:
|
||||
|
||||
# puppet cert sign puppetnode.example.com
|
||||
Notice: Signed certificate request for puppetnode.example.com
|
||||
Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem'
|
||||
|
||||
主控服务器现在可以通讯和控制它签名过的代理节点了。
|
||||
|
||||
如果想签署所有的当前请求,可以使用 --all 选项,如下所示:
|
||||
|
||||
# puppet cert sign --all
|
||||
|
||||
### 12. 删除一个 Puppet 证书 ###
|
||||
|
||||
如果我们想移除一个主机,或者想重建一个主机然后再添加它。下面的例子里我们将展示如何删除 puppet 主控服务器上面的一个证书。使用的命令如下:
|
||||
|
||||
# puppet cert clean hostname
|
||||
Notice: Revoked certificate with serial 5
|
||||
Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem'
|
||||
Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem'
|
||||
|
||||
如果我们想查看所有的签署和未签署的请求,使用下面这条命令:
|
||||
|
||||
# puppet cert list --all
|
||||
+ "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
|
||||
|
||||
|
||||
### 13. 部署 Puppet 清单 ###
|
||||
|
||||
当配置并完成 puppet 清单后,现在我们需要部署清单到代理节点服务器上。要应用并加载主 puppet 清单,我们可以在代理节点服务器上面使用下面的命令:
|
||||
|
||||
# puppet agent --test
|
||||
|
||||
Info: Retrieving pluginfacts
|
||||
Info: Retrieving plugin
|
||||
Info: Caching catalog for puppetnode.example.com
|
||||
Info: Applying configuration version '1434563858'
|
||||
Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully
|
||||
Notice: Finished catalog run in 10.53 seconds
|
||||
|
||||
这里向我们展示了主清单如何立即影响到了一个单一的服务器。
|
||||
|
||||
如果我们打算运行的 puppet 清单与主清单没有什么关联,我们可以简单使用 puppet apply 带上相应的清单文件的路径即可。它仅将清单应用到我们运行该清单的代理节点上。
|
||||
|
||||
# puppet apply /etc/puppet/manifest/test.pp
|
||||
|
||||
### 14. 为特定节点配置清单 ###
|
||||
|
||||
如果我们想部署一个清单到某个特定的节点,我们需要如下配置清单。
|
||||
|
||||
在主控服务器上面使用文本编辑器编辑 /etc/puppet/manifest/site.pp:
|
||||
|
||||
# nano /etc/puppet/manifest/site.pp
|
||||
|
||||
添加下面的内容进去
|
||||
|
||||
node 'puppetnode', 'puppetnode1' {
|
||||
# execute 'apt-get update'
|
||||
exec { 'apt-update': # exec resource named 'apt-update'
|
||||
command => '/usr/bin/apt-get update' # command this resource will run
|
||||
}
|
||||
|
||||
# install apache2 package
|
||||
package { 'apache2':
|
||||
require => Exec['apt-update'], # require 'apt-update' before installing
|
||||
ensure => installed,
|
||||
}
|
||||
|
||||
# ensure apache2 service is running
|
||||
service { 'apache2':
|
||||
ensure => running,
|
||||
}
|
||||
}
|
||||
|
||||
这里的配置显示我们将在名为 puppetnode 和 puppetnode1 的2个指定的节点上面安装 apache 服务。这里可以添加其他我们需要安装部署的具体节点进去。
|
||||
|
||||
### 15. 配置清单模块 ###
|
||||
|
||||
模块对于组合任务是非常有用的,在 Puppet 社区有很多人贡献了自己的模块组件。
|
||||
|
||||
在主控服务器上, 我们将使用 puppet module 命令来安装 **puppetlabs-apache** 模块。
|
||||
|
||||
# puppet module install puppetlabs-apache
|
||||
|
||||
**警告**: 千万不要在一个已经部署 apache 环境的机器上面使用这个模块,否则它将清空你没有被 puppet 管理的 apache 配置。
|
||||
|
||||
现在用文本编辑器来修改 **site.pp** :
|
||||
|
||||
# nano /etc/puppet/manifest/site.pp
|
||||
|
||||
添加下面的内容进去,在 puppetnode 上面安装 apache 服务。
|
||||
|
||||
node 'puppet-node' {
|
||||
class { 'apache': } # use apache module
|
||||
apache::vhost { 'example.com': # define vhost resource
|
||||
port => '80',
|
||||
docroot => '/var/www/html'
|
||||
}
|
||||
}
|
||||
|
||||
保存退出。然后重新运行该清单来为我们的代理节点部署 apache 配置。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
现在我们已经成功的在 ubuntu 15.04 上面部署并运行 puppet 来管理代理节点服务器的基础运行环境。我们学习了puppet 是如何工作的,编写清单文件,节点与主机间使用 ssl 证书认证的认证过程。使用 puppet 开源软件配置管理工具在众多的代理节点上来控制、管理和配置重复性任务是非常容易的。如果你有任何的问题,建议,反馈,与我们取得联系,我们将第一时间完善更新,谢谢。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[ivo-wang](https://github.com/ivo-wang)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html
|
@ -1,6 +1,7 @@
|
||||
如何在 CentOS 7.x 上安装 Zephyr 测试管理工具
|
||||
================================================================================
|
||||
测试管理工具包括作为测试人员需要的任何东西。测试管理工具用来记录测试执行的结果、计划测试活动以及报告质量保证活动的情况。在这篇文章中我们会向你介绍如何配置 Zephyr 测试管理工具,它包括了管理测试活动需要的所有东西,不需要单独安装测试活动所需要的应用程序从而降低测试人员不必要的麻烦。一旦你安装完它,你就看可以用它跟踪 bug、缺陷,和你的团队成员协作项目任务,因为你可以轻松地共享和访问测试过程中多个项目团队的数据。
|
||||
|
||||
测试管理(Test Management)指测试人员所需要的任何的所有东西。测试管理工具用来记录测试执行的结果、计划测试活动以及汇报质量控制活动的情况。在这篇文章中我们会向你介绍如何配置 Zephyr 测试管理工具,它包括了管理测试活动需要的所有东西,不需要单独安装测试活动所需要的应用程序从而降低测试人员不必要的麻烦。一旦你安装完它,你就看可以用它跟踪 bug 和缺陷,和你的团队成员协作项目任务,因为你可以轻松地共享和访问测试过程中多个项目团队的数据。
|
||||
|
||||
### Zephyr 要求 ###
|
||||
|
||||
@ -19,21 +20,21 @@
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="140"><strong>Packages</strong></td>
|
||||
<td width="312">JDK 7 or above , Oracle JDK 6 update</td>
|
||||
<td width="209">No Prior Tomcat, MySQL installed</td>
|
||||
<td width="312">JDK 7 或更高 , Oracle JDK 6 update</td>
|
||||
<td width="209">没有事先安装的 Tomcat 和 MySQL</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="140"><strong>RAM</strong></td>
|
||||
<td width="312">4 GB</td>
|
||||
<td width="209">Preferred 8 GB</td>
|
||||
<td width="209">推荐 8 GB</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="140"><strong>CPU</strong></td>
|
||||
<td width="521" colspan="2">2.0 GHZ or Higher</td>
|
||||
<td width="521" colspan="2">2.0 GHZ 或更高</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="140"><strong>Hard Disk</strong></td>
|
||||
<td width="521" colspan="2">30 GB , Atleast 5GB must be free</td>
|
||||
<td width="521" colspan="2">30 GB,至少要有 5GB 空闲空间</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
@ -48,8 +49,6 @@
|
||||
|
||||
[root@centos-007 ~]# yum install java-1.7.0-openjdk-1.7.0.79-2.5.5.2.el7_1
|
||||
|
||||
----------
|
||||
|
||||
[root@centos-007 ~]# yum install java-1.7.0-openjdk-devel-1.7.0.85-2.6.1.2.el7_1.x86_64
|
||||
|
||||
安装完 java 和它的所有依赖后,运行下面的命令设置 JAVA_HOME 环境变量。
|
||||
@ -61,8 +60,6 @@
|
||||
|
||||
[root@centos-007 ~]# java –version
|
||||
|
||||
----------
|
||||
|
||||
java version "1.7.0_79"
|
||||
OpenJDK Runtime Environment (rhel-2.5.5.2.el7_1-x86_64 u79-b14)
|
||||
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)
|
||||
@ -71,7 +68,7 @@
|
||||
|
||||
### 安装 MySQL 5.6.x ###
|
||||
|
||||
如果的机器上有其它的 MySQL,建议你先卸载它们并安装这个版本,或者升级它们的模式到指定的版本。因为 Zephyr 前提要求这个指定的主要/最小 MySQL (5.6.x)版本要有 root 用户名。
|
||||
如果的机器上有其它的 MySQL,建议你先卸载它们并安装这个版本,或者升级它们的模式(schemas)到指定的版本。因为 Zephyr 前提要求这个指定的 5.6.x 版本的 MySQL ,要有 root 用户名。
|
||||
|
||||
可以按照下面的步骤在 CentOS-7.1 上安装 MySQL 5.6 :
|
||||
|
||||
@ -93,10 +90,7 @@
|
||||
[root@centos-007 ~]# service mysqld start
|
||||
[root@centos-007 ~]# service mysqld status
|
||||
|
||||
对于全新安装的 MySQL 服务器,MySQL root 用户的密码为空。
|
||||
为了安全起见,我们应该重置 MySQL root 用户的密码。
|
||||
|
||||
用自动生成的空密码连接到 MySQL 并更改 root 用户密码。
|
||||
对于全新安装的 MySQL 服务器,MySQL root 用户的密码为空。为了安全起见,我们应该重置 MySQL root 用户的密码。用自动生成的空密码连接到 MySQL 并更改 root 用户密码。
|
||||
|
||||
[root@centos-007 ~]# mysql
|
||||
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('your_password');
|
||||
@ -224,7 +218,7 @@ via: http://linoxide.com/linux-how-to/setup-zephyr-tool-centos-7-x/
|
||||
|
||||
作者:[Kashif Siddique][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,509 @@
|
||||
来自 Linux 基金会内部的《Linux 工作站安全检查清单》
|
||||
================================================================================
|
||||
|
||||
### 目标受众
|
||||
|
||||
这是一套 Linux 基金会为其系统管理员提供的推荐规范。
|
||||
|
||||
这个文档用于帮助那些使用 Linux 工作站来访问和管理项目的 IT 设施的系统管理员团队。
|
||||
|
||||
如果你的系统管理员是远程员工,你也许可以使用这套指导方针确保系统管理员的系统可以通过核心安全需求,降低你的IT 平台成为攻击目标的风险。
|
||||
|
||||
即使你的系统管理员不是远程员工,很多人也会在工作环境中通过便携笔记本完成工作,或者在家中设置系统以便在业余时间或紧急时刻访问工作平台。不论发生何种情况,你都能调整这个推荐规范来适应你的环境。
|
||||
|
||||
|
||||
### 限制
|
||||
|
||||
但是,这并不是一个详细的“工作站加固”文档,可以说这是一个努力避免大多数明显安全错误而又不会导致太多不便的推荐基线(baseline)。你也许阅读这个文档后会认为它的方法太偏执,而另一些人也许会认为这仅仅是一些肤浅的研究。安全就像在高速公路上开车 -- 任何比你开得慢的都是傻瓜,而任何比你开得快的都是疯子。这个指南仅仅是一系列核心安全规则,既不详细,也不能替代经验、警惕和常识。
|
||||
|
||||
我们分享这篇文档是为了[将开源协作的优势带到 IT 策略文献资料中][18]。如果你发现它有用,我们希望你可以将它用到你自己团体中,并分享你的改进,对它的完善做出你的贡献。
|
||||
|
||||
### 结构
|
||||
|
||||
每一节都分为两个部分:
|
||||
|
||||
- 核对适合你项目的需求
|
||||
- 形式不定的提示内容,解释了为什么这么做
|
||||
|
||||
#### 严重级别
|
||||
|
||||
在清单的每一个项目都包括严重级别,我们希望这些能帮助指导你的决定:
|
||||
|
||||
- **关键(ESSENTIAL)** 该项应该在考虑列表上被明确的重视。如果不采取措施,将会导致你的平台安全出现高风险。
|
||||
- **中等(NICE)** 该项将改善你的安全形势,但是会影响到你的工作环境的流程,可能会要求养成新的习惯,改掉旧的习惯。
|
||||
- **低等(PARANOID)** 这些项目留给那些我们觉得能明显提升平台安全、但可能需要大量调整与操作系统交互方式的措施。
|
||||
|
||||
记住,这些只是参考。如果你觉得这些严重级别不能反映你的工程对安全的承诺,你应该调整它们为你所合适的。
|
||||
|
||||
## 选择正确的硬件
|
||||
|
||||
我们并不会要求管理员使用一个特殊供应商或者一个特殊的型号,所以这一节提供的是选择工作系统时的核心注意事项。
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] 系统支持安全启动(SecureBoot) _(关键)_
|
||||
- [ ] 系统没有火线(Firewire),雷电(thunderbolt)或者扩展卡(ExpressCard)接口 _(中等)_
|
||||
- [ ] 系统有 TPM 芯片 _(中等)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### 安全启动(SecureBoot)
|
||||
|
||||
尽管它还有争议,但是安全引导能够预防很多针对工作站的攻击(Rootkits、“Evil Maid”,等等),而没有太多额外的麻烦。它并不能阻止真正专门的攻击者,加上在很大程度上,国家安全机构有办法应对它(可能是通过设计),但是有安全引导总比什么都没有强。
|
||||
|
||||
作为选择,你也许可以部署 [Anti Evil Maid][1] 提供更多健全的保护,以对抗安全引导所需要阻止的攻击类型,但是它需要更多部署和维护的工作。
|
||||
|
||||
#### 系统没有火线(Firewire),雷电(thunderbolt)或者扩展卡(ExpressCard)接口
|
||||
|
||||
火线是一个标准,其设计上允许任何连接的设备能够完全地直接访问你的系统内存(参见[维基百科][2])。雷电接口和扩展卡同样有问题,虽然一些后来部署的雷电接口试图限制内存访问的范围。如果你没有这些系统端口,那是最好的,但是它并不严重,它们通常可以通过 UEFI 关闭或内核本身禁用。
|
||||
|
||||
#### TPM 芯片
|
||||
|
||||
可信平台模块(Trusted Platform Module ,TPM)是主板上的一个与核心处理器单独分开的加密芯片,它可以用来增加平台的安全性(比如存储全盘加密的密钥),不过通常不会用于日常的平台操作。充其量,这个是一个有则更好的东西,除非你有特殊需求,需要使用 TPM 增加你的工作站安全性。
|
||||
|
||||
## 预引导环境
|
||||
|
||||
这是你开始安装操作系统前的一系列推荐规范。
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] 使用 UEFI 引导模式(不是传统 BIOS)_(关键)_
|
||||
- [ ] 进入 UEFI 配置需要使用密码 _(关键)_
|
||||
- [ ] 使用安全引导 _(关键)_
|
||||
- [ ] 启动系统需要 UEFI 级别密码 _(中等)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### UEFI 和安全引导
|
||||
|
||||
UEFI 尽管有缺点,还是提供了很多传统 BIOS 没有的好功能,比如安全引导。大多数现代的系统都默认使用 UEFI 模式。
|
||||
|
||||
确保进入 UEFI 配置模式要使用高强度密码。注意,很多厂商默默地限制了你使用密码长度,所以相比长口令你也许应该选择高熵值的短密码(关于密码短语请参考下面内容)。
|
||||
|
||||
基于你选择的 Linux 发行版,你也许需要、也许不需要按照 UEFI 的要求,来导入你的发行版的安全引导密钥,从而允许你启动该发行版。很多发行版已经与微软合作,用大多数厂商所支持的密钥给它们已发布的内核签名,因此避免了你必须处理密钥导入的麻烦。
|
||||
|
||||
作为一个额外的措施,在允许某人访问引导分区并尝试做一些不好的事情之前,让他们先输入密码。为了防止肩窥(shoulder-surfing),这个密码应该跟你的 UEFI 管理密码不同。如果你经常关机和开机,你也许不想这么麻烦,因为你反正已经要输入 LUKS 密码了(LUKS 参见下面内容),少设这道密码可以让你少敲几次键盘。
|
||||
|
||||
## 发行版选择注意事项
|
||||
|
||||
很有可能你会坚持一个广泛使用的发行版如 Fedora,Ubuntu,Arch,Debian,或它们的一个类似发行版。无论如何,以下是你选择使用发行版应该考虑的。
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] 拥有一个强健的 MAC/RBAC 系统(SELinux/AppArmor/Grsecurity) _(关键)_
|
||||
- [ ] 发布安全公告 _(关键)_
|
||||
- [ ] 提供及时的安全补丁 _(关键)_
|
||||
- [ ] 提供软件包的加密验证 _(关键)_
|
||||
- [ ] 完全支持 UEFI 和安全引导 _(关键)_
|
||||
- [ ] 拥有健壮的原生全磁盘加密支持 _(关键)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### SELinux,AppArmor,和 GrSecurity/PaX
|
||||
|
||||
强制访问控制(Mandatory Access Controls,MAC)或者基于角色的访问控制(Role-Based Access Controls,RBAC)是对老式 POSIX 系统中基于用户或组的安全机制的一种扩展。现在大多数发行版要么已经捆绑了 MAC/RBAC 系统(Fedora,Ubuntu),要么提供了一种可以在安装后通过可选步骤添加它的机制(Gentoo,Arch,Debian)。显然,强烈建议您选择一个预装 MAC/RBAC 系统的发行版,但是如果你对某个没有默认启用它的发行版情有独钟,就应该计划在装完系统后再配置安装它。
|
||||
|
||||
应该坚决避免使用不带任何 MAC/RBAC 机制的发行版,传统的 POSIX 基于用户和组的安全机制在当今时代已经算是考虑不足了。如果你想建立一个 MAC/RBAC 工作站,通常认为 AppArmor 和 PaX 比 SELinux 更容易掌握。此外,工作站上很少有或者根本没有对外监听的守护进程,风险最高的是针对用户运行的应用,因此 GrSecurity/PaX _可能_ 会比 SELinux 提供更多的安全益处。
|
||||
|
||||
#### 发行版安全公告
|
||||
|
||||
大多数广泛使用的发行版都有给它们的用户发送安全公告的机制,但是如果你钟爱的是某个比较小众的发行版,就要去看看其开发人员是否有见于文档的机制来提醒用户安全漏洞和补丁。缺乏这样的机制是一个重要的警告信号,说明这个发行版不够成熟,不适合用作主要的管理员工作站。
|
||||
|
||||
#### 及时和可靠的安全更新
|
||||
|
||||
多数常用的发行版提供定期安全更新,但应该经常检查以确保及时提供关键包更新。因此应避免使用附属发行版(spin-offs)和“社区重构”,因为它们必须等待上游发行版先发布,它们经常延迟发布安全更新。
|
||||
|
||||
现在,几乎找不到不对软件包、更新元数据、或者对两者都使用加密签名的发行版了。话虽如此,一些相当常用的主流发行版也曾经多年之后才引入这个基本安全机制(Arch,说你呢),所以这还是值得检查一下的。
|
||||
|
||||
#### 发行版支持 UEFI 和安全引导
|
||||
|
||||
检查发行版是否支持 UEFI 和安全引导。查明它是否需要导入额外的密钥或是否要求启动内核有一个已经被系统厂商信任的密钥签名(例如跟微软达成合作)。一些发行版不支持 UEFI 或安全启动,但是提供了替代品来确保防篡改(tamper-proof)或防破坏(tamper-evident)引导环境([Qubes-OS][3] 使用 Anti Evil Maid,前面提到的)。如果一个发行版不支持安全引导,也没有防止引导级别攻击的机制,还是看看别的吧。
|
||||
|
||||
#### 全磁盘加密
|
||||
|
||||
全磁盘加密是保护静止数据的要求,大多数发行版都支持。作为一个选择方案,带有自加密硬盘的系统也可以用(通常通过主板 TPM 芯片实现),并提供了类似安全级别而且操作更快,但是花费也更高。
|
||||
|
||||
## 发行版安装指南
|
||||
|
||||
所有发行版都是不同的,但是也有一些一般原则:
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] 使用健壮的密码全磁盘加密(LUKS) _(关键)_
|
||||
- [ ] 确保交换分区也加密了 _(关键)_
|
||||
- [ ] 确保引导程序设置了密码(可以和LUKS一样) _(关键)_
|
||||
- [ ] 设置健壮的 root 密码(可以和LUKS一样) _(关键)_
|
||||
- [ ] 使用无特权账户登录,作为管理员组的一部分 _(关键)_
|
||||
- [ ] 设置健壮的用户登录密码,不同于 root 密码 _(关键)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### 全磁盘加密
|
||||
|
||||
除非你正在使用自加密硬盘,配置你的安装程序完整地加密所有存储你的数据与系统文件的磁盘很重要。简单地通过自动挂载的 cryptfs 环(loop)文件加密用户目录还不够(说你呢,旧版 Ubuntu),这并没有给系统二进制文件或交换分区提供保护,它可能包含大量的敏感数据。推荐的加密策略是加密 LVM 设备,以便在启动过程中只需要一个密码。
|
||||
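安装完成后,可以用类似下面的命令粗略验证一下全盘加密是否生效(其中 /dev/sda2 只是示例设备名,请换成你实际的加密分区):

lsblk -f                            # 加密分区的文件系统类型应显示为 crypto_LUKS
sudo cryptsetup luksDump /dev/sda2  # 查看 LUKS 头信息:加密算法、密钥槽等
swapon -s                           # 确认交换分区也位于加密的设备映射(/dev/mapper/...)之上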
|
||||
`/boot`分区将一直保持非加密,因为引导程序需要在调用 LUKS/dm-crypt 前能引导内核自身。一些发行版支持加密的`/boot`分区,比如 [Arch][16],可能别的发行版也支持,但是似乎这样增加了系统更新的复杂度。如果你的发行版并没有原生支持加密`/boot`也不用太在意,内核镜像本身并没有什么隐私数据,它会通过安全引导的加密签名检查来防止被篡改。
|
||||
|
||||
#### 选择一个好密码
|
||||
|
||||
现代的 Linux 系统没有限制密码口令长度,所以唯一的限制是你的偏执和倔强。如果你要启动你的系统,你将大概至少要输入两个不同的密码:一个解锁 LUKS ,另一个登录,所以长密码将会使你老的更快。最好从丰富或混合的词汇中选择2-3个单词长度,容易输入的密码。
|
||||
|
||||
优秀密码例子(是的,你可以使用空格):
|
||||
|
||||
- nature abhors roombas
|
||||
- 12 in-flight Jebediahs
|
||||
- perdon, tengo flatulence
|
||||
|
||||
而那些在出版物或者日常生活中随处可见的句子则属于不该使用的弱密码,比如:
|
||||
|
||||
- Mary had a little lamb
|
||||
- you're a wizard, Harry
|
||||
- to infinity and beyond
|
||||
|
||||
如果你愿意的话,你也可以坚持使用长度最少 10-12 个字符的非词汇类密码。
|
||||
|
||||
除非你担心物理安全,你可以写下你的密码,并保存在一个远离你办公桌的安全的地方。
|
||||
|
||||
#### Root,用户密码和管理组
|
||||
|
||||
我们建议,你的 root 密码和你的 LUKS 加密使用同样的密码(除非你共享你的笔记本给信任的人,让他应该能解锁设备,但是不应该能成为 root 用户)。如果你是笔记本电脑的唯一用户,那么你的 root 密码与你的 LUKS 密码不同是没有安全优势上的意义的。通常,你可以使用同样的密码在你的 UEFI 管理,磁盘加密,和 root 登录中 -- 知道这些任意一个都会让攻击者完全控制您的系统,在单用户工作站上使这些密码不同,没有任何安全益处。
|
||||
|
||||
你应该有一个不同的,但同样强健的常规用户帐户密码用来日常工作。这个用户应该是管理组用户(例如`wheel`或者类似,根据发行版不同),允许你执行`sudo`来提升权限。
|
||||
|
||||
换句话说,如果在你的工作站只有你一个用户,你应该有两个独特的、强健(robust)而强壮(strong)的密码需要记住:
|
||||
|
||||
**管理级别**,用在以下方面:
|
||||
|
||||
- UEFI 管理
|
||||
- 引导程序(GRUB)
|
||||
- 磁盘加密(LUKS)
|
||||
- 工作站管理(root 用户)
|
||||
|
||||
**用户级别**,用在以下:
|
||||
|
||||
- 用户登录和 sudo
|
||||
- 密码管理器的主密码
|
||||
|
||||
很明显,如果有一个令人信服的理由的话,它们全都可以不同。
|
||||
|
||||
## 安装后的加固
|
||||
|
||||
安装后的安全加固在很大程度上取决于你选择的发行版,所以在一个像这样的通用文档中提供详细说明是徒劳的。然而,这里有一些你应该采取的步骤:
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] 在全局范围内禁用火线和雷电模块 _(关键)_
|
||||
- [ ] 检查你的防火墙,确保过滤所有传入端口 _(关键)_
|
||||
- [ ] 确保 root 邮件转发到一个你可以收到的账户 _(关键)_
|
||||
- [ ] 建立一个系统自动更新任务,或更新提醒 _(中等)_
|
||||
- [ ] 检查以确保 sshd 服务默认情况下是禁用的 _(中等)_
|
||||
- [ ] 配置屏幕保护程序在一段时间的不活动后自动锁定 _(中等)_
|
||||
- [ ] 设置 logwatch _(中等)_
|
||||
- [ ] 安装使用 rkhunter _(中等)_
|
||||
- [ ] 安装一个入侵检测系统(Intrusion Detection System) _(中等)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### 将模块列入黑名单
|
||||
|
||||
将火线和雷电模块列入黑名单,增加一行到`/etc/modprobe.d/blacklist-dma.conf`文件:
|
||||
|
||||
blacklist firewire-core
|
||||
blacklist thunderbolt
|
||||
|
||||
重启后的这些模块将被列入黑名单。这样做是无害的,即使你没有这些端口(但也不做任何事)。
|
||||
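重启后可以简单验证一下这两类模块确实没有被加载:

lsmod | grep -E 'firewire|thunderbolt'     # 没有任何输出即表示模块未加载
modprobe --showconfig | grep -E 'blacklist (firewire-core|thunderbolt)'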
|
||||
#### Root 邮件
|
||||
|
||||
默认情况下,root 邮件只是存储在系统上,基本没人会去读。确保你设置了`/etc/aliases`,把 root 邮件转发到你确实会去读的邮箱,否则你也许会错过重要的系统通知和报告:
|
||||
|
||||
# Person who should get root's mail
|
||||
root: bob@example.com
|
||||
|
||||
编辑完后运行`newaliases`,然后测试一下,确保邮件确实能投递出去,因为有些邮件供应商会拒绝来自不存在的域名或者不可达的域名的邮件。如果是这种情况,你需要调整你的邮件转发配置,直到确实可用。
|
||||
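一个简单的投递测试(假设系统上装有提供 mail 命令的 mailx 之类的软件包):

echo "root mail forwarding test" | mail -s "test" root
# 然后到 /etc/aliases 里配置的那个邮箱检查是否收到了这封邮件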
|
||||
#### 防火墙,sshd,和监听进程
|
||||
|
||||
默认的防火墙设置将取决于您的发行版,但是大多数都允许`sshd`端口连入。除非你有一个令人信服的合理理由允许连入 ssh,你应该过滤掉它,并禁用 sshd 守护进程。
|
||||
|
||||
systemctl disable sshd.service
|
||||
systemctl stop sshd.service
|
||||
|
||||
如果你需要使用它,你也可以临时启动它。
|
||||
|
||||
通常,你的系统不应该有任何侦听端口,除了响应 ping 之外。这将有助于你对抗网络级的零日漏洞利用。
|
||||
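可以用类似下面的命令检查当前的监听端口和防火墙规则(第二条以使用 firewalld 的发行版为例):

ss -tlnp                        # 列出所有 TCP 监听端口及对应的进程
sudo firewall-cmd --list-all    # 查看当前防火墙区域放行了哪些服务和端口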
|
||||
#### 自动更新或通知
|
||||
|
||||
建议打开自动更新,除非你有非常好的理由不这么做,比如担心自动更新会使你的系统无法使用(这种事以前发生过,所以这种担心并非杞人忧天)。至少,你应该启用可用更新的自动通知。大多数发行版已经自动运行着这样的服务,所以你很可能不需要做任何事。查阅你的发行版文档了解更多。
|
||||
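以 Fedora 为例,一种做法是使用 dnf-automatic 包(其它发行版有各自对应的机制,比如 Debian/Ubuntu 的 unattended-upgrades):

sudo dnf install dnf-automatic
# 在 /etc/dnf/automatic.conf 中选择只发通知还是自动应用更新(apply_updates)
sudo systemctl enable dnf-automatic.timer
sudo systemctl start dnf-automatic.timer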
|
||||
你应该尽快应用所有明显的勘误,即使这些不是特别贴上“安全更新”或有关联的 CVE 编号。所有的问题都有潜在的安全漏洞和新的错误,比起停留在旧的、已知的问题上,未知问题通常是更安全的策略。
|
||||
|
||||
#### 监控日志
|
||||
|
||||
你应该对你的系统上发生了什么保持关注。出于这个原因,你应该安装`logwatch`,并配置它每晚发送你系统上所发生活动的报告。这防不住专业的攻击者,但不失为一个不错的安全保障(safety-net)功能。
|
||||
|
||||
注意,许多 systemd 发行版将不再自动安装一个“logwatch”所需的 syslog 服务(因为 systemd 会放到它自己的日志中),所以你需要安装和启用“rsyslog”来确保在使用 logwatch 之前你的 /var/log 不是空的。
|
||||
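在基于 systemd 的发行版上,大致的安装步骤如下(包名以 Fedora/CentOS 为例,其它发行版请换用对应的包管理器):

sudo yum install rsyslog logwatch
sudo systemctl enable rsyslog.service
sudo systemctl start rsyslog.service
# logwatch 的软件包通常自带每日 cron 任务,报告会发给 root,再经 /etc/aliases 转发给你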
|
||||
#### Rkhunter 和 IDS
|
||||
|
||||
安装`rkhunter`和一个类似`aide`或者`tripwire`入侵检测系统(IDS)并不是那么有用,除非你确实理解它们如何工作,并采取必要的步骤来设置正确(例如,保证数据库在外部介质,从可信的环境运行检测,记住执行系统更新和配置更改后要刷新散列数据库,等等)。如果你不愿在你的工作站执行这些步骤,并调整你如何工作的方式,这些工具只能带来麻烦而没有任何实在的安全益处。
|
||||
|
||||
我们建议你安装`rkhunter`并每晚运行它。它相当易于学习和使用,虽然它不会阻止一个复杂的攻击者,它也能帮助你捕获你自己的错误。
|
||||
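一个最简单的用法示例(具体选项请以 rkhunter 的手册为准):

sudo yum install rkhunter
sudo rkhunter --update          # 更新检测规则
sudo rkhunter --propupd         # 在确信系统干净时初始化文件属性数据库
sudo rkhunter --check --sk      # 运行检测,--sk 表示跳过每项测试后的按键确认
# 多数发行版的软件包会自动安装每晚运行的 cron 任务;如果没有,可以自己加一条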
|
||||
## 个人工作站备份
|
||||
|
||||
工作站备份往往被忽视,或偶尔才做一次,这常常是不安全的方式。
|
||||
|
||||
### 检查清单
|
||||
|
||||
- [ ] 设置加密备份工作站到外部存储 _(关键)_
|
||||
- [ ] 使用零认知(zero-knowledge)备份工具备份到站外或云上 _(中等)_
|
||||
|
||||
### 注意事项
|
||||
|
||||
#### 全加密的备份存到外部存储
|
||||
|
||||
把全部备份放到一个移动磁盘中比较方便,不用担心带宽和上行网速(在这个时代,大多数供应商仍然提供显著的不对称的上传/下载速度)。不用说,这个移动硬盘本身需要加密(再说一次,通过 LUKS),或者你应该使用一个备份工具建立加密备份,例如`duplicity`或者它的 GUI 版本 `deja-dup`。我建议使用后者并使用随机生成的密码,保存到离线的安全地方。如果你带上笔记本去旅行,把这个磁盘留在家,以防你的笔记本丢失或被窃时可以找回备份。
|
||||
|
||||
除了你的家目录外,你还应该备份`/etc`目录和出于取证目的的`/var/log`目录。
|
||||
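用 `duplicity` 做加密备份的一个简单示意(其中的目录和目标路径只是示例,口令应随机生成并离线保存;备份 /etc 和 /var/log 时可能需要 root 权限):

export PASSPHRASE='一个随机生成的长口令'
duplicity /home/youruser file:///mnt/backup/home
duplicity /etc file:///mnt/backup/etc
duplicity /var/log file:///mnt/backup/var-log
unset PASSPHRASE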
|
||||
尤其重要的是,避免拷贝你的家目录到任何非加密存储上,即使是需要快速的在两个系统上移动文件时,一旦完成你肯定会忘了清除它,从而暴露个人隐私或者安全信息到监听者手中 -- 尤其是把这个存储介质跟你的笔记本放到同一个包里。
|
||||
|
||||
#### 有选择的零认知站外备份
|
||||
|
||||
站外备份(Off-site backup)也是相当重要的,是否可以做到要么需要你的老板提供空间,要么找一家云服务商。你可以建一个单独的 duplicity/deja-dup 配置,只包括重要的文件,以免传输大量你不想备份的数据(网络缓存、音乐、下载等等)。
|
||||
|
||||
作为选择,你可以使用零认知(zero-knowledge)备份工具,例如 [SpiderOak][5],它提供一个卓越的 Linux GUI工具还有更多的实用特性,例如在多个系统或平台间同步内容。
|
||||
|
||||
## 最佳实践
|
||||
|
||||
下面是我们认为你应该采用的最佳实践列表。它当然不是非常详细的,而是试图提供实用的建议,来做到可行的整体安全性和可用性之间的平衡。
|
||||
|
||||
### 浏览
|
||||
|
||||
毫无疑问, web 浏览器将是你的系统上最大、最容易暴露的面临攻击的软件。它是专门下载和执行不可信、甚至是恶意代码的一个工具。它试图采用沙箱和代码清洁(code sanitization)等多种机制保护你免受这种危险,但是在之前它们都被击败了多次。你应该知道,在任何时候浏览网站都是你做的最不安全的活动。
|
||||
|
||||
有几种方法可以减少浏览器的影响,但这些真实有效的方法需要你明显改变操作您的工作站的方式。
|
||||
|
||||
#### 1: 使用两个不同的浏览器 _(关键)_
|
||||
|
||||
这很容易做到,但是只能带来少量的安全效益。并不是所有的浏览器攻破都能让攻击者完全自由地访问你的系统 -- 有时它们只能让人读取本地浏览器存储、窃取其它标签页的活动会话、捕获浏览器中的输入等。使用两个不同的浏览器,一个用在工作/高安全站点,另一个用在其它方面,有助于防止小的漏洞让攻击者拿到你全部的 cookie。主要的不便是两个不同的浏览器会消耗较多的内存。
|
||||
|
||||
我们建议:
|
||||
|
||||
##### 火狐用来访问工作和高安全站点
|
||||
|
||||
使用火狐登录工作有关的站点,应该额外关心的是确保数据如 cookies,会话,登录信息,击键等等,明显不应该落入攻击者手中。除了少数的几个网站,你不应该用这个浏览器访问其它网站。
|
||||
|
||||
你应该安装下面的火狐扩展:
|
||||
|
||||
- [ ] NoScript _(关键)_
|
||||
- NoScript 阻止活动内容加载,除非是在用户白名单里的域名。如果用于默认浏览器它会很麻烦(可是提供了真正好的安全效益),所以我们建议只在访问与工作相关的网站的浏览器上开启它。
|
||||
|
||||
- [ ] Privacy Badger _(关键)_
|
||||
- EFF 的 Privacy Badger 将在页面加载时阻止大多数外部追踪器和广告平台,有助于在这些追踪站点影响你的浏览器时避免跪了(追踪器和广告站点通常会成为攻击者的目标,因为它们能会迅速影响世界各地成千上万的系统)。
|
||||
|
||||
- [ ] HTTPS Everywhere _(关键)_
|
||||
- 这个 EFF 开发的扩展将确保你访问的大多数站点都使用安全连接,甚至你点击的连接使用的是 http://(可以有效的避免大多数的攻击,例如[SSL-strip][7])。
|
||||
|
||||
- [ ] Certificate Patrol _(中等)_
|
||||
- 如果你正在访问的站点最近改变了它们的 TLS 证书,这个工具将会警告你 -- 特别是如果不是接近失效期或者现在使用不同的证书颁发机构。它有助于警告你是否有人正尝试中间人攻击你的连接,不过它会产生很多误报。
|
||||
|
||||
你应该让火狐成为你打开连接时的默认浏览器,因为 NoScript 将在加载或者执行时阻止大多数活动内容。
|
||||
|
||||
##### 其它一切都用 Chrome/Chromium
|
||||
|
||||
Chromium 开发者在增加很多很好的安全特性方面走在了火狐前面(至少[在 Linux 上][6]),例如 seccomp 沙箱,内核用户空间等等,这会成为一个你访问的网站与你其它系统之间的额外隔离层。Chromium 是上游开源项目,Chrome 是 Google 基于它构建的专有二进制包(加一句偏执的提醒,如果你有任何不想让谷歌知道的事情都不要使用它)。
|
||||
|
||||
推荐你在 Chrome 上也安装**Privacy Badger** 和 **HTTPS Everywhere** 扩展,然后给它一个与火狐不同的主题,以让它告诉你这是你的“不可信站点”浏览器。
|
||||
|
||||
#### 2: 使用两个不同浏览器,一个在专用的虚拟机里 _(中等)_
|
||||
|
||||
这有点像上面建议的做法,除了您将添加一个通过快速访问协议运行在专用虚拟机内部 Chrome 的额外步骤,它允许你共享剪贴板和转发声音事件(如,Spice 或 RDP)。这将在不可信浏览器和你其它的工作环境之间添加一个优秀的隔离层,确保攻击者完全危害你的浏览器将必须另外打破 VM 隔离层,才能达到系统的其余部分。
|
||||
|
||||
这是一个鲜为人知的可行方式,但是需要大量的 RAM 和高速的处理器来处理多增加的负载。这要求作为管理员的你需要相应地调整自己的工作实践而付出辛苦。
|
||||
|
||||
#### 3: 通过虚拟化完全隔离你的工作和娱乐环境 _(低等)_
|
||||
|
||||
了解下 [Qubes-OS 项目][3],它致力于通过划分你的应用到完全隔离的 VM 中来提供高度安全的工作环境。
|
||||
|
||||
### 密码管理器
|
||||
|
||||
#### 检查清单
|
||||
|
||||
- [ ] 使用密码管理器 _(关键)_
|
||||
- [ ] 不相关的站点使用不同的密码 _(关键)_
|
||||
- [ ] 使用支持团队共享的密码管理器 _(中等)_
|
||||
- [ ] 给非网站类账户使用一个单独的密码管理器 _(低等)_
|
||||
|
||||
#### 注意事项
|
||||
|
||||
使用好的、唯一的密码对你的团队成员来说应该是非常关键的需求。凭证(credential)盗取一直在发生 — 通过被攻破的计算机、盗取数据库备份、远程站点利用、以及任何其它的方式。凭证绝不应该跨站点重用,尤其是关键的应用。
|
||||
|
||||
##### 浏览器中的密码管理器
|
||||
|
||||
每个浏览器都有一个比较安全的密码保存机制,可以把数据同步到供应商维护的云存储上,同时用用户的密码保证数据加密。然而,这个机制有严重的劣势:
|
||||
|
||||
1. 不能跨浏览器工作
|
||||
2. 不提供任何与团队成员共享凭证的方法
|
||||
|
||||
也有一些支持良好、免费或便宜的密码管理器,可以很好的融合到多个浏览器,跨平台工作,提供小组共享(通常是付费服务)。可以很容易地通过搜索引擎找到解决方案。
|
||||
|
||||
##### 独立的密码管理器
|
||||
|
||||
任何与浏览器结合的密码管理器都有一个主要的缺点,它实际上是应用的一部分,这样最有可能被入侵者攻击。如果这让你不放心(应该这样),你应该选择两个不同的密码管理器 -- 一个集成在浏览器中用来保存网站密码,一个作为独立运行的应用。后者可用于存储高风险凭证如 root 密码、数据库密码、其它 shell 账户凭证等。
|
||||
|
||||
这样的工具在团队成员间共享超级用户的凭据方面特别有用(服务器 root 密码、ILO密码、数据库管理密码、引导程序密码等等)。
|
||||
|
||||
这几个工具可以帮助你:
|
||||
|
||||
- [KeePassX][8],在第2版中改进了团队共享
|
||||
- [Pass][9],它使用了文本文件和 PGP,并与 git 结合
|
||||
- [Django-Pstore][10],它使用 GPG 在管理员之间共享凭据
|
||||
- [Hiera-Eyaml][11],如果你已经在你的平台中使用了 Puppet,在你的 Hiera 加密数据的一部分里面,可以便捷的追踪你的服务器/服务凭证。
|
||||
|
||||
### 加固 SSH 与 PGP 的私钥
|
||||
|
||||
个人加密密钥,包括 SSH 和 PGP 私钥,都是你工作站中最重要的物品 -- 这是攻击者最想得到的东西,这可以让他们进一步攻击你的平台或在其它管理员面前冒充你。你应该采取额外的步骤,确保你的私钥免遭盗窃。
|
||||
|
||||
#### 检查清单
|
||||
|
||||
- [ ] 用来保护私钥的强壮密码 _(关键)_
|
||||
- [ ] PGP 的主密码保存在移动存储中 _(中等)_
|
||||
- [ ] 用于身份验证、签名和加密的子密码存储在智能卡设备 _(中等)_
|
||||
- [ ] SSH 配置为以 PGP 认证密钥作为 ssh 私钥 _(中等)_
|
||||
|
||||
#### 注意事项
|
||||
|
||||
防止私钥被偷的最好方式是使用一个智能卡存储你的加密私钥,绝不要拷贝到工作站上。有几个厂商提供支持 OpenPGP 的设备:
|
||||
|
||||
- [Kernel Concepts][12],在这里可以采购支持 OpenPGP 的智能卡和 USB 读取器,你应该需要一个。
|
||||
- [Yubikey NEO][13],它除了提供 OpenPGP 智能卡功能之外,还提供很多很酷的特性(U2F、PIV、HOTP 等等)。
|
||||
|
||||
确保 PGP 主密码没有存储在工作站也很重要,仅使用子密码。主密钥只有在签名其它的密钥和创建新的子密钥时使用 — 不经常发生这种操作。你可以照着 [Debian 的子密钥][14]向导来学习如何将你的主密钥移动到移动存储并创建子密钥。
|
||||
|
||||
你应该配置你的 gnupg 代理作为 ssh 代理,然后使用基于智能卡 PGP 认证密钥作为你的 ssh 私钥。我们发布了一个[详尽的指导][15]如何使用智能卡读取器或 Yubikey NEO。
|
||||
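在较新的 GnuPG(2.1 及以上)上,大体的配置思路如下(只是一个示意,具体写法因 GnuPG 版本而异,细节请参考上面提到的那份指导):

echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
# 在 shell 的启动文件中加入下面两行,让 ssh 使用 gpg-agent 提供的套接字
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
gpg-connect-agent updatestartuptty /bye >/dev/null
ssh-add -l      # 此时应能列出智能卡上的认证密钥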
|
||||
如果你不想那么麻烦,最少要确保你的 PGP 私钥和你的 SSH 私钥有个强健的密码,这将让攻击者很难盗取使用它们。
|
||||
|
||||
### 休眠或关机,不要挂起
|
||||
|
||||
当系统挂起时,内存中的内容仍然保留在内存芯片中,可能会被攻击者读取到(这叫做冷启动攻击(Cold Boot Attack))。如果你离开你的系统的时间较长,比如每天下班结束时,最好关机或者休眠,而不是挂起它或者就那么开着。
|
||||
|
||||
### 工作站上的 SELinux
|
||||
|
||||
如果你使用捆绑了 SELinux 的发行版(如 Fedora),这有些如何使用它的建议,让你的工作站达到最大限度的安全。
|
||||
|
||||
#### 检查清单
|
||||
|
||||
- [ ] 确保你的工作站强制(enforcing)使用 SELinux _(关键)_
|
||||
- [ ] 不要盲目的执行`audit2allow -M`,应该经常检查 _(关键)_
|
||||
- [ ] 绝不要 `setenforce 0` _(中等)_
|
||||
- [ ] 切换你的用户到 SELinux 用户`staff_u` _(中等)_
|
||||
|
||||
#### 注意事项
|
||||
|
||||
SELinux 是强制访问控制(Mandatory Access Controls,MAC),是 POSIX许可核心功能的扩展。它是成熟、强健,自从它推出以来已经有很长的路了。不管怎样,许多系统管理员现在仍旧重复过时的口头禅“关掉它就行”。
|
||||
|
||||
话虽如此,在工作站上 SELinux 会带来一些有限的安全效益,因为大多数你想运行的应用都是可以自由运行的。开启它有益于给网络提供足够的保护,也有可能有助于防止攻击者通过脆弱的后台服务提升到 root 级别的权限用户。
|
||||
|
||||
我们的建议是开启它并强制使用(enforcing)。
|
||||
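检查和开启的方法很简单:

getenforce                             # 输出应为 Enforcing
sudo setenforce 1                      # 临时切换到强制模式
grep '^SELINUX=' /etc/selinux/config   # 永久生效需要这里是 SELINUX=enforcing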
|
||||
##### 绝不`setenforce 0`
|
||||
|
||||
使用`setenforce 0`临时把 SELinux 设置为许可(permissive)模式很有诱惑力,但是你应该避免这样做。当你想查找一个特定应用或者程序的问题时,实际上这样做是把整个系统的 SELinux 给关闭了。
|
||||
|
||||
你应该使用`semanage permissive -a [somedomain_t]`替换`setenforce 0`,只把这个程序放入许可模式。首先运行`ausearch`查看哪个程序发生问题:
|
||||
|
||||
ausearch -ts recent -m avc
|
||||
|
||||
然后看下`scontext=`(源自 SELinux 的上下文)行,像这样:
|
||||
|
||||
scontext=staff_u:staff_r:gpg_pinentry_t:s0-s0:c0.c1023
|
||||
^^^^^^^^^^^^^^
|
||||
|
||||
这告诉你程序`gpg_pinentry_t`被拒绝了,所以你想排查应用的故障,应该增加它到许可域:
|
||||
|
||||
semanage permissive -a gpg_pinentry_t
|
||||
|
||||
这将允许你使用应用然后收集 AVC 的其它数据,你可以结合`audit2allow`来写一个本地策略。一旦完成你就不会看到新的 AVC 的拒绝消息,你就可以通过运行以下命令从许可中删除程序:
|
||||
|
||||
semanage permissive -d gpg_pinentry_t
|
||||
|
||||
##### 用 SELinux 的用户 staff_r 使用你的工作站
|
||||
|
||||
SELinux 带有角色(role)的原生实现,基于用户帐户相关角色来禁止或授予某些特权。作为一个管理员,你应该使用`staff_r`角色,这可以限制访问很多配置和其它安全敏感文件,除非你先执行`sudo`。
|
||||
|
||||
默认情况下,用户以`unconfined_r`创建,你可以自由运行大多数应用,没有任何(或只有一点)SELinux 约束。转换你的用户到`staff_r`角色,运行下面的命令:
|
||||
|
||||
usermod -Z staff_u [username]
|
||||
|
||||
你应该退出然后登录新的角色,届时如果你运行`id -Z`,你将会看到:
|
||||
|
||||
staff_u:staff_r:staff_t:s0-s0:c0.c1023
|
||||
|
||||
在执行`sudo`时,你应该记住增加一个额外标志告诉 SELinux 转换到“sysadmin”角色。你需要用的命令是:
|
||||
|
||||
sudo -i -r sysadm_r
|
||||
|
||||
然后`id -Z`将会显示:
|
||||
|
||||
staff_u:sysadm_r:sysadm_t:s0-s0:c0.c1023
|
||||
|
||||
**警告**:在进行这个切换前你应该能很顺畅的使用`ausearch`和`audit2allow`,当你以`staff_r`角色运行时你的应用有可能不再工作了。在写作本文时,已知以下流行的应用在`staff_r`下没有做策略调整就不会工作:
|
||||
|
||||
- Chrome/Chromium
|
||||
- Skype
|
||||
- VirtualBox
|
||||
|
||||
切换回`unconfined_r`,运行下面的命令:
|
||||
|
||||
usermod -Z unconfined_u [username]
|
||||
|
||||
然后注销再重新回到舒适区。
|
||||
|
||||
## 延伸阅读
|
||||
|
||||
IT 安全的世界是一个没有底的兔子洞。如果你想深入,或者找到你的具体发行版更多的安全特性,请查看下面这些链接:
|
||||
|
||||
- [Fedora 安全指南](https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html)
|
||||
- [CESG Ubuntu 安全指南](https://www.gov.uk/government/publications/end-user-devices-security-guidance-ubuntu-1404-lts)
|
||||
- [Debian 安全手册](https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html)
|
||||
- [Arch Linux 安全维基](https://wiki.archlinux.org/index.php/Security)
|
||||
- [Mac OSX 安全](https://www.apple.com/support/security/guides/)
|
||||
|
||||
## 许可
|
||||
|
||||
这项工作在[创作共用授权4.0国际许可证][0]许可下。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://github.com/lfit/itpol/blob/bbc17d8c69cb8eee07ec41f8fbf8ba32fdb4301b/linux-workstation-security.md
|
||||
|
||||
作者:[mricon][a]
|
||||
译者:[wyangsun](https://github.com/wyangsun)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/mricon
|
||||
[0]: http://creativecommons.org/licenses/by-sa/4.0/
|
||||
[1]: https://github.com/QubesOS/qubes-antievilmaid
|
||||
[2]: https://en.wikipedia.org/wiki/IEEE_1394#Security_issues
|
||||
[3]: https://qubes-os.org/
|
||||
[4]: https://xkcd.com/936/
|
||||
[5]: https://spideroak.com/
|
||||
[6]: https://code.google.com/p/chromium/wiki/LinuxSandboxing
|
||||
[7]: http://www.thoughtcrime.org/software/sslstrip/
|
||||
[8]: https://keepassx.org/
|
||||
[9]: http://www.passwordstore.org/
|
||||
[10]: https://pypi.python.org/pypi/django-pstore
|
||||
[11]: https://github.com/TomPoulton/hiera-eyaml
|
||||
[12]: http://shop.kernelconcepts.de/
|
||||
[13]: https://www.yubico.com/products/yubikey-hardware/yubikey-neo/
|
||||
[14]: https://wiki.debian.org/Subkeys
|
||||
[15]: https://github.com/lfit/ssh-gpg-smartcard-config
|
||||
[16]: http://www.pavelkogan.com/2014/05/23/luks-full-disk-encryption/
|
||||
[17]: https://en.wikipedia.org/wiki/Cold_boot_attack
|
||||
[18]: http://www.linux.com/news/featured-blogs/167-amanda-mcpherson/850607-linux-foundation-sysadmins-open-source-their-it-policies
|
220
published/201512/20150917 A Repository with 44 Years of Unix Evolution.md
Executable file
@ -0,0 +1,220 @@
|
||||
一个涵盖 Unix 44 年进化史的版本仓库
|
||||
=============================================================================
|
||||
|
||||
http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html
|
||||
|
||||
This is an HTML rendering of a working paper draft that led to a publication. The publication should always be cited in preference to this draft using the following reference:
|
||||
|
||||
- **Diomidis Spinellis**. [A repository with 44 years of Unix evolution](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html). In MSR '15: Proceedings of the 12th Working Conference on Mining Software Repositories, pages 13-16. IEEE, 2015. Best Data Showcase Award. ([doi:10.1109/MSR.2015.6](http://dx.doi.org/10.1109/MSR.2015.6))
|
||||
|
||||
This document is also available in [PDF format](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.pdf).
|
||||
|
||||
The document's metadata is available in [BibTeX format](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c-bibtex.html).
|
||||
|
||||
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
|
||||
|
||||
[Diomidis Spinellis Publications](http://www.dmst.aueb.gr/dds/pubs/)
|
||||
|
||||
© 2015 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
|
||||
|
||||
### 摘要 ###
|
||||
|
||||
Unix 操作系统的进化历史,可以从一个版本控制仓库中窥见,时间跨度从 1972 年的 5000 行内核代码开始,到 2015 年成为一个含有 26,000,000 行代码的被广泛使用的系统。该仓库包含 659,000 条提交,和 2306 次合并。仓库部署了被普遍采用的 Git 系统用于储存其代码,并且在时下流行的 GitHub 上建立了存档。它由来自贝尔实验室(Bell Labs),伯克利大学(Berkeley University),386BSD 团队所开发的系统软件的 24 个快照综合定制而成,这包括两个老式仓库和一个开源 FreeBSD 系统的仓库。总的来说,可以确认其中的 850 位个人贡献者,更早些时候的一批人主要做基础研究。这些数据可以用于一些经验性的研究,在软件工程,信息系统和软件考古学领域。
|
||||
|
||||
### 1、介绍 ###
|
||||
|
||||
Unix 操作系统作为一个主要的工程上的突破而脱颖而出,得益于其模范的设计、大量的技术贡献、它的开发模型及广泛的使用。Unix 编程环境的设计已经被视为一个提供非常简洁、强大而优雅的设计 [[1][1]] 。在技术方面,许多对 Unix 有直接贡献的,或者因 Unix 而流行的特性就包括 [[2][2]] :用高级语言编写的可移植部署的内核;一个分层式设计的文件系统;兼容的文件,设备,网络和进程间 I/O;管道和过滤架构;虚拟文件系统;和作为普通进程的可由用户选择的不同 shell。很早的时候,就有一个庞大的社区为 Unix 贡献软件 [[3][3]] ,[[4][4],pp. 65-72] 。随时间流逝,这个社区不断壮大,并且以现在称为开源软件开发的方式在工作着 [[5][5],pp. 440-442] 。Unix 和其睿智的晚辈们也将 C 和 C++ 编程语言、分析程序和词法分析生成器(*yacc*,*lex*)、文档编制工具(*troff*,*eqn*,*tbl*)、脚本语言(*awk*,*sed*,*Perl*)、TCP/IP 网络、和配置管理系统(configuration management system)(*SCSS*,*RCS*,*Subversion*,*Git*)发扬广大了,同时也形成了现代互联网基础设施和网络的最大的部分。
|
||||
|
||||
幸运的是,一些重要的具有历史意义的 Unix 材料已经保存下来了,现在保持对外开放。尽管 Unix 最初是由相对严格的协议发行,但在早期的开发中,很多重要的部分是通过 Unix 的版权拥有者之一(Caldera International) (LCTT 译注:2002年改名为 SCO Group)以一个自由的协议发行。通过将这些部分再结合上由加州大学伯克利分校(University of California, Berkeley)和 FreeBSD 项目组开发或发布的开源软件,贯穿了从 1972 年六月二十日开始到现在的整个系统的开发。
|
||||
|
||||
通过规划和处理这些可用的快照以及或旧或新的配置管理仓库,将这些可用数据的大部分重建到一个新合成的 Git 仓库之中。这个仓库以数字的形式记录了过去44年来最重要的数字时代产物的详细的进化。下列章节描述了该仓库的结构和内容(第[2][6]节)、创建方法(第[3][7]节)和该如何使用(第[4][8]节)。
|
||||
|
||||
### 2、数据概览 ###
|
||||
|
||||
这 1GB 的 Unix 历史仓库可以从 [GitHub][9] 上克隆^[1][10] 。如今^[2][11] ,这个仓库包含来自 850 个贡献者的 659,000 个提交和 2,306 个合并。贡献者有来自贝尔实验室(Bell Labs)的 23 个员工,伯克利大学(Berkeley University)的计算机系统研究组(Computer Systems Research Group)(CSRG)的 158 个人,和 FreeBSD 项目的 660 个成员。
|
||||
|
||||
这个仓库的生命始于一个 *Epoch* 的标签,这里面只包含了证书信息和现在的 README 文件。其后各种各样的标签和分支记录了很多重要的时刻。
|
||||
|
||||
- *Research-VX* 标签对应来自贝尔实验室(Bell Labs)六个研究版本。从 *Research-V1* (4768 行 PDP-11 汇编代码)开始,到以 *Research-V7* (大约 324,000 行代码,1820 个 C 文件)结束。
|
||||
- *Bell-32V* 是第七个版本 Unix 在 DEC/VAX 架构上的移植。
|
||||
- *BSD-X* 标签对应伯克利大学(Berkeley University)释出的 15 个快照。
|
||||
- *386BSD-X* 标签对应该系统的两个开源版本,主要是 Lynne 和 William Jolitz 写的适用于 Intel 386 架构的内核代码。
|
||||
- *FreeBSD-release/X* 标签和分支标记了来自 FreeBSD 项目的 116 个发行版。
|
||||
|
||||
另外,以 *-Snapshot-Development* 为后缀的分支,表示该提交由来自一个以时间排序的快照文件序列而合成;而以一个 *-VCS-Development* 为后缀的标签,标记了有特定发行版出现的历史分支的时刻。
|
||||
|
||||
仓库的历史包含从系统开发早期的一些提交,比如下面这些。
|
||||
|
||||
commit c9f643f59434f14f774d61ee3856972b8c3905b1
|
||||
Author: Dennis Ritchie <research!dmr>
|
||||
Date: Mon Dec 2 18:18:02 1974 -0500
|
||||
Research V5 development
|
||||
Work on file usr/sys/dmr/kl.c
|
||||
|
||||
两个发布之间的合并代表着系统发生了进化,比如 BSD 3 的开发来自 BSD2 和 Unix 32/V,它在 Git 仓库里正是被表示为带两个父节点的图形节点。
|
||||
|
||||
更为重要的是,以这种方式构造的仓库允许 **git blame**,就是可以给源代码行加上注释,如版本、日期和它们第一次出现相关联的作者,这样可以知道任何代码的起源。比如说,检出 **BSD-4** 这个标签,并在内核的 *pipe.c* 文件上运行一下 git blame,就会显示出由 Ken Thompson 写于 1974,1975 和 1979年的代码行,和 Bill Joy 写于 1980 年的。这就可以自动(尽管计算上比较费事)检测出任何时刻出现的代码。
|
||||
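比如,按论文给出的仓库地址,大致可以这样操作(pipe.c 在该标签下的具体路径以实际检出的目录结构为准,可先用 git ls-files 查找):

git clone https://github.com/dspinellis/unix-history-repo
cd unix-history-repo
git checkout BSD-4
git ls-files | grep 'pipe\.c$'      # 找到 pipe.c 的实际路径
git blame -- path/to/pipe.c         # 换成上一步找到的实际路径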
|
||||
![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/provenance.png)
|
||||
|
||||
*图1:各个重大 Unix 发行版的代码来源*
|
||||
|
||||
如[上图][12]所示,现代版本的 Unix(FreeBSD 9)依然有相当部分的来自 BSD 4.3,BSD 4.3 Net/2 和 BSD 2.0 的代码块。有趣的是,这图片显示有部分代码好像没有保留下来,当时激进地要创造一个脱离于伯克利(386BSD 和 FreeBSD 1.0)所释出代码的开源操作系统。FreeBSD 9 中最古老的代码是一个 18 行的队列,在 C 库里面的 timezone.c 文件里,该文件也可以在第七版的 Unix 文件里找到,同样的名字,时间戳是 1979 年一月十日 - 36 年前。
|
||||
|
||||
### 3、数据收集和处理 ###
|
||||
|
||||
这个项目的目的是以某种方式巩固从数据方面说明 Unix 的进化,通过将其并入一个现代的版本仓库,帮助人们对系统进化的研究。项目工作包括收录数据,分类并综合到一个单独的 Git 仓库里。
|
||||
|
||||
![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/branches.png)
|
||||
|
||||
*图2:导入 Unix 快照、仓库及其合并*
|
||||
|
||||
项目以三种数据类型为基础(见[图2][13])。首先,早期发布版本的快照,获取自 [Unix 遗产社会归档(Unix Heritage Society archive)][14]^[3][15] 、包括了 CSRG 全部的源代码归档的 [CD-ROM 镜像][16]^[4][17] , [Oldlinux 网站][18]^[5][19] 和 [FreeBSD 归档][20]^[6][21] 。 其次,以前的和现在的仓库,即 CSRG SCCS [[6][22]] 仓库,FreeBSD 1 CVS 仓库,和[现代 FreeBSD 开发的 Git 镜像][23]^[7][24] 。前两个都是从和快照相同的来源获得的。
|
||||
|
||||
最后,也是最费力的数据源是 **初步研究(primary research)**。释出的快照并没有提供关于它们的源头和每个文件贡献者的信息。因此,这些信息片段需要通过初步研究(primary research)验证。至于作者信息主要通过作者的自传,研究论文,内部备忘录和旧文档扫描件;通过阅读并且自动处理源代码和帮助页面补充;通过与那个年代的人用电子邮件交流;在 *StackExchange* 网站上贴出疑问;查看文件的位置(在早期的内核版本的源代码,分为 `usr/sys/dmr` 和 `/usr/sys/ken` 两个位置);从研究论文和帮助手册披露的作者找到源代码,从一个又一个的发行版中获取。(有趣的是,第一和第二的研究版(Research Edition)帮助页面都有一个 “owner” 部分,列出了作者(比如,*Ken*)及对应的系统命令、文件、系统调用或库函数。在第四版中这个部分就没了,而在 BSD 发行版中又浮现了 “Author” 部分。)关于作者信息更为详细地写在了项目的文件中,这些文件被用于匹配源代码文件和它们的作者和对应的提交信息。最后,关于源代码库之间的合并信息是获取自[ NetBSD 项目所维护的 BSD 家族树][25]^[8][26] 。
|
||||
|
||||
作为本项目的一部分而开发的软件和数据文件,现在可以[在线获取][27]^[9][28] ,并且,如果有合适的网络环境,CPU 和磁盘资源,可以用来从头构建这样一个仓库。关于主要发行版的作者信息,都存储在本项目的 `author-path` 目录下的文件里。它们的内容中带有正则表达式的文件路径后面指出了相符的作者。可以指定多个作者。正则表达式是按线性处理的,所以一个文件末尾的匹配一切的表达式可以指定一个发行版的默认作者。为避免重复,一个以 `.au` 后缀的独立文件专门用于映射作者的识别号(identifier)和他们的名字及 email。这样一个文件为每个与该系统进化相关的社区都建立了一个:贝尔实验室(Bell Labs),伯克利大学(Berkeley University),386BSD 和 FreeBSD。为了真实性的需要,早期贝尔实验室(Bell Labs)发行版的 emails 都以 UUCP 注释(UUCP notation)方式列出(例如, `research!ken`)。FreeBSD 作者的识别映射,需要导入早期的 CVS 仓库,通过从如今项目的 Git 仓库里拆解对应的数据构建。总的来说,由 1107 行构成了注释作者信息的文件(828 个规则),并且另有 640 行用于映射作者的识别号到名字。
|
||||
|
||||
现在项目的数据源被编码成了一个 168 行的 `Makefile`。它包括下面的步骤。
|
||||
|
||||
**Fetching** 从远程站点复制和克隆大约 11GB 的镜像、归档和仓库。
|
||||
|
||||
**Tooling** 从 2.9 BSD 中为旧的 PDP-11 归档获取一个归档器,并调整它以在现代的 Unix 版本下编译;编译 4.3 BSD 的 *compress* 程序来解压 386BSD 发行版,这个程序不再是现代 Unix 系统的组成部分了。
|
||||
|
||||
**Organizing** 用 *tar* 和 *cpio* 解压缩包;合并第六个研究版的三个目录;用旧的 PDP-11 归档器解压全部一个 BSD 归档;挂载 CD-ROM 镜像,这样可以作为文件系统处理;合并第 8 和 62 的 386BSD 磁盘镜像为两个独立的文件。
|
||||
|
||||
**Cleaning** 恢复第一个研究版的内核源代码文件,这个可以通过 OCR 从打印件上得到近似其原始状态的的格式;给第七个研究版的源代码文件打补丁;移除发行后被添加进来的元数据和其他文件,为避免得到错误的时间戳信息;修复毁坏的 SCCS 文件;用一个定制的 Perl 脚本移除指定到多个版本的 CVS 符号、删除与现在冲突的 CVS *Attr* 文件、用 *cvs2svn* 将 CVS 仓库转换为 Git 仓库,以处理早期的 FreeBSD CVS 仓库。
|
||||
|
||||
在仓库再现(representation)中有一个很有意思的部分就是,如何导入那些快照,并以一种方式联系起来,使得 *git blame* 可以发挥它的魔力。快照导入到仓库是基于每个文件的时间戳作为一系列的提交实现的。当所有文件导入后,就被用对应发行版的名字给标记了。然后,可以删除那些文件,并开始导入下一个快照。注意 *git blame* 命令是通过回溯一个仓库的历史来工作的,并使用启发法(heuristics)来检测文件之间或文件内的代码移动和复制。因此,删除掉的快照间会产生中断,以防止它们之间的代码被追踪。
|
||||
|
||||
相反,在下一个快照导入之前,之前快照的所有文件都被移动到了一个隐藏的后备目录里,叫做 `.ref`(引用)。它们保存在那,直到下个快照的所有文件都被导入了,这时候它们就会被删掉。因为 `.ref` 目录下的每个文件都精确对应一个原始文件,*git blame* 可以知道多少源代码通过 `.ref` 文件从一个版本移到了下一个,而不用显示出 `.ref` 文件。为了更进一步帮助检测代码起源,同时增加再现(representation)的真实性,每个发行版都被再现(represented)为一个有增量文件的分支(*-Development*)与之前发行版之间的合并。
|
||||
|
||||
上世纪 80 年代时期,只有伯克利(Berkeley) 开发的文件的一个子集是用 SCCS 版本控制的。在那个期间,我们的统一仓库里包含了来自 SCCS 的提交和快照的增量文件的导入数据。对于每个发行版,可用最近的时间戳找到该 SCCS 提交,并被标记为一个与发行版增量导入分支的合并。这些合并可以在[图2][29] 的中间看到。
|
||||
|
||||
将各种数据资源综合到一个仓库的工作,主要是用两个脚本来完成的。一个 780 行的 Perl 脚本(`import-dir.pl`)可以从一个单独的数据源(快照目录、SCCS 仓库,或者 Git 仓库)中,以 *Git fast export* 格式导出(真实的或者综合的)提交历史。输出是一个简单的文本格式,Git 工具用这个来导入和导出提交。其他方面,这个脚本以一些东西为参数,如文件到贡献者的映射、贡献者登录名和他们的全名间的映射、哪个导入的提交会被合并、哪些文件要处理和忽略、以及“引用”文件的处理。一个 450 行的 Shell 脚本创建 Git 仓库,并调用带适当参数的 Perl 脚本,来导入 27 个可用的历史数据资源。Shell 脚本也会运行 30 个测试,比较特定标签的仓库和对应的数据源,核对查看的目录中出现的和没出现的,并回溯查看分支树和合并的数量,*git blame* 和 *git log* 的输出。最后,调用 *git* 作垃圾收集和仓库压缩,从最初的 6GB 降到分发的 1GB 大小。
|
||||
|
||||
### 4、数据使用 ###
|
||||
|
||||
该数据可以用于软件工程、信息系统和软件考古学(software archeology)领域的经验性研究。鉴于它从不间断而独一无二的存在了超过了 40 年,可以供软件进化和跨代更迭参考。从那时以来,处理速度已经成千倍地增长、存储容量扩大了百万倍,该数据同样可以用于软件和硬件技术交叉进化(co-evolution)的研究。软件开发从研究中心到大学,到开源社区的转移,可以用来研究组织文化对于软件开发的影响。该仓库也可以用于学习著名人物的实际编程,比如 Turing 奖获得者(Dennis Ritchie 和 Ken Thompson)和 IT 产业的大佬(Bill Joy 和 Eric Schmidt)。另一个值得学习的现象是代码的长寿,无论是单行的水平,或是作为那个时代随 Unix 发布的完整的系统(Ingres、 Lisp、 Pascal、 Ratfor、 Snobol、 TMP),和导致代码存活或消亡的因素。最后,因为该数据让 Git 感到了压力,底层的软件仓库存储技术达到了其极限,这会推动版本管理系统领域的工程进度。
|
||||
|
||||
![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/metrics.png)
|
||||
|
||||
*图3:Unix 发行版的代码风格进化*
|
||||
|
||||
[图3][30] 根据 36 个主要 Unix 发行版描述了一些有趣的代码统计的趋势线(用 R 语言的局部多项式回归拟合函数生成),验证了代码风格和编程语言的使用在很长的时间尺度上的进化。这种进化是软硬件技术的需求和支持、软件构筑理论,甚至社会力量所驱动的。图片中的日期计算了出现在一个给定发行版中的所有文件的平均日期。正如可以从中看到,在过去的 40 年中,标示符和文件名字的长度已经稳步从 4 到 6 个字符增长到 7 到 11 个字符。我们也可以看到注释数量的少量稳步增加,以及 *goto* 语句的使用量减少,同时 *register* 这个类型修饰符的消失。
|
||||
|
||||
### 5、未来的工作 ###
|
||||
|
||||
可以做很多事情去提高仓库的正确性和有效性。创建过程以开源代码共享了,通过 GitHub 的拉取请求(pull request),可以很容易地贡献更多代码和修复。最有用的社区贡献将使得导入的快照文件的覆盖面增长,以便归属于某个具体的作者。现在,大约 90,000 个文件(在 160,000 总量之外)通过默认规则指定了作者。类似地,大约有 250 个作者(最初 FreeBSD 那些)仅知道其识别号。两个都列在了 build 仓库的 unmatched 目录里,欢迎贡献数据。进一步,BSD SCCS 和 FreeBSD CVS 的提交共享相同的作者和时间戳,这些可以结合成一个单独的 Git 提交。导入 SCCS 文件提交的支持会被添加进来,以便引入仓库对应的元数据。最后,也是最重要的,开源系统的更多分支会添加进来,比如 NetBSD、 OpenBSD、DragonFlyBSD 和 *illumos*。理想情况下,其他历史上重要的 Unix 发行版,如 System III、System V、 NeXTSTEP 和 SunOS 等的当前版权拥有者,也会在一个允许他们的合作伙伴使用仓库用于研究的协议下释出他们的系统。
|
||||
|
||||
### 鸣谢 ###
|
||||
|
||||
本文作者感谢很多付出努力的人们。 Brian W. Kernighan, Doug McIlroy 和 Arnold D. Robbins 在贝尔实验室(Bell Labs)的登录识别号方面提供了帮助。 Clem Cole, Era Erikson, Mary Ann Horton, Kirk McKusick, Jeremy C. Reed, Ingo Schwarze 和 Anatole Shaw 在 BSD 的登录识别号方面提供了帮助。BSD SCCS 的导入代码是基于 H. Merijn Brand 和 Jonathan Gray 的工作。
|
||||
|
||||
这次研究由欧盟 ( 欧洲社会基金(European Social Fund,ESF)) 和 希腊国家基金(Greek national funds)通过国家战略参考框架( National Strategic Reference Framework ,NSRF) 的 Operational Program " Education and Lifelong Learning" - Research Funding Program: Thalis - Athens University of Economics and Business - Software Engineering Research Platform ,共同出资赞助。
|
||||
|
||||
### 引用 ###
|
||||
|
||||
[[1]][31]
|
||||
M. D. McIlroy, E. N. Pinson, and B. A. Tague, "UNIX time-sharing system: Foreword," *The Bell System Technical Journal*, vol. 57, no. 6, pp. 1899-1904, July-August 1978.
|
||||
|
||||
[[2]][32]
|
||||
D. M. Ritchie and K. Thompson, "The UNIX time-sharing system," *Bell System Technical Journal*, vol. 57, no. 6, pp. 1905-1929, July-August 1978.
|
||||
|
||||
[[3]][33]
|
||||
D. M. Ritchie, "The evolution of the UNIX time-sharing system," *AT&T Bell Laboratories Technical Journal*, vol. 63, no. 8, pp. 1577-1593, Oct. 1984.
|
||||
|
||||
[[4]][34]
|
||||
P. H. Salus, *A Quarter Century of UNIX*. Boston, MA: Addison-Wesley, 1994.
|
||||
|
||||
[[5]][35]
|
||||
E. S. Raymond, *The Art of Unix Programming*. Addison-Wesley, 2003.
|
||||
|
||||
[[6]][36]
|
||||
M. J. Rochkind, "The source code control system," *IEEE Transactions on Software Engineering*, vol. SE-1, no. 4, pp. 255-265, 1975.
|
||||
|
||||
----------
|
||||
|
||||
#### 脚注 ####
|
||||
|
||||
[1][37] - [https://github.com/dspinellis/unix-history-repo][38]
|
||||
|
||||
[2][39] - Updates may add or modify material. To ensure replicability the repository's users are encouraged to fork it or archive it.
|
||||
|
||||
[3][40] - [http://www.tuhs.org/archive_sites.html][41]
|
||||
|
||||
[4][42] - [https://www.mckusick.com/csrg/][43]
|
||||
|
||||
[5][44] - [http://www.oldlinux.org/Linux.old/distributions/386BSD][45]
|
||||
|
||||
[6][46] - [http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/][47]
|
||||
|
||||
[7][48] - [https://github.com/freebsd/freebsd][49]
|
||||
|
||||
[8][50] - [http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree][51]
|
||||
|
||||
[9][52] - [https://github.com/dspinellis/unix-history-make][53]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html
|
||||
|
||||
作者:Diomidis Spinellis
|
||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#MPT78
|
||||
[2]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#RT78
|
||||
[3]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Rit84
|
||||
[4]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Sal94
|
||||
[5]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Ray03
|
||||
[6]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:data
|
||||
[7]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:dev
|
||||
[8]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:use
|
||||
[9]:https://github.com/dspinellis/unix-history-repo
|
||||
[10]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAB
|
||||
[11]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAC
|
||||
[12]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:provenance
|
||||
[13]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches
|
||||
[14]:http://www.tuhs.org/archive_sites.html
|
||||
[15]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAD
|
||||
[16]:https://www.mckusick.com/csrg/
|
||||
[17]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAE
|
||||
[18]:http://www.oldlinux.org/Linux.old/distributions/386BSD
|
||||
[19]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAF
|
||||
[20]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/
|
||||
[21]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAG
|
||||
[22]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#SCCS
|
||||
[23]:https://github.com/freebsd/freebsd
|
||||
[24]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAH
|
||||
[25]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree
|
||||
[26]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAI
|
||||
[27]:https://github.com/dspinellis/unix-history-make
|
||||
[28]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAJ
|
||||
[29]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches
|
||||
[30]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:metrics
|
||||
[31]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITEMPT78
|
||||
[32]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERT78
|
||||
[33]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERit84
|
||||
[34]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESal94
|
||||
[35]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERay03
|
||||
[36]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESCCS
|
||||
[37]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAB
|
||||
[38]:https://github.com/dspinellis/unix-history-repo
|
||||
[39]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAC
|
||||
[40]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAD
|
||||
[41]:http://www.tuhs.org/archive_sites.html
|
||||
[42]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAE
|
||||
[43]:https://www.mckusick.com/csrg/
|
||||
[44]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAF
|
||||
[45]:http://www.oldlinux.org/Linux.old/distributions/386BSD
|
||||
[46]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAG
|
||||
[47]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/
|
||||
[48]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAH
|
||||
[49]:https://github.com/freebsd/freebsd
|
||||
[50]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAI
|
||||
[51]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree
|
||||
[52]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAJ
|
||||
[53]:https://github.com/dspinellis/unix-history-make
|
@ -0,0 +1,101 @@
|
||||
UNIX 家族小史
|
||||
================================================================================
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png)
|
||||
|
||||
要记住,当一扇门在你面前关闭的时候,另一扇门就会打开。肯·汤普森([Ken Thompson][1]) 和丹尼斯·里奇([Dennis Richie][2]) 两个人就是这句名言很好的实例。他们俩是**20世纪**最优秀的信息技术专家之二,因为他们创造了最具影响力和创新性的软件之一: **UNIX**。
|
||||
|
||||
### UNIX 系统诞生于贝尔实验室 ###
|
||||
|
||||
**UNIX** 最开始的名字是 **UNICS** (**UN**iplexed **I**nformation and **C**omputing **S**ervice),它有一个大家庭,并不是从石头缝里蹦出来的。UNIX的祖父是 **CTSS** (**C**ompatible **T**ime **S**haring **S**ystem),它的父亲是 **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice),这个系统能支持大量用户通过交互式分时(timesharing)的方式使用大型机。
|
||||
|
||||
UNIX 诞生于 **1969** 年,由**肯·汤普森**以及后来加入的**丹尼斯·里奇**共同完成。这两位优秀的研究员和科学家在一个**通用电器 GE**和**麻省理工学院**的合作项目里工作,项目目标是开发一个叫 Multics 的交互式分时系统。
|
||||
|
||||
Multics 的目标是整合分时技术以及当时其他先进技术,允许用户在远程终端通过电话(拨号)登录到主机,然后可以编辑文档,阅读电子邮件,运行计算器,等等。
|
||||
|
||||
在之后的五年里,AT&T 公司为 Multics 项目投入了数百万美元。他们购买了 GE-645 大型机,聚集了贝尔实验室的顶级研究人员,例如肯·汤普森、 Stuart Feldman、丹尼斯·里奇、道格拉斯·麦克罗伊(M. Douglas McIlroy)、 Joseph F. Ossanna 以及 Robert Morris。但是项目目标太过激进,进度严重滞后。最后,AT&T 高层决定放弃这个项目。
|
||||
|
||||
贝尔实验室的管理层决定停止这个让许多研究人员无比纠结的操作系统上的所有遗留工作。不过要感谢汤普森,里奇和一些其他研究员,他们把老板的命令丢到一边,并继续在实验室里满怀热心地忘我工作,最终孵化出前无古人后无来者的 UNIX。
|
||||
|
||||
UNIX 的第一声啼哭是在一台 PDP-7 微型机上,它是汤普森测试自己在操作系统设计上的点子的机器,也是汤普森和 里奇一起玩 Space and Travel 游戏的模拟器。
|
||||
|
||||
> “我们想要的不仅是一个优秀的编程环境,而是能围绕这个系统形成团体。按我们自己的经验,通过远程访问和分时主机实现的公共计算,本质上不只是用终端输入程序代替打孔机而已,而是鼓励密切沟通。”丹尼斯·里奇说。
|
||||
|
||||
UNIX 是第一个靠近理想的系统,在这里程序员可以坐在机器前自由摆弄程序,探索各种可能性并随手测试。在 UNIX 整个生命周期里,它吸引了大量因其他操作系统限制而投身过来的高手做出无私贡献,因此它的功能模型一直保持上升趋势。
|
||||
|
||||
UNIX 在 1970 年因为 PDP-11/20 获得了首次资金注入,之后正式更名为 UNIX 并支持在 PDP-11/20 上运行。UNIX 带来的第一次用于实际场景中是在 1971 年,贝尔实验室的专利部门配备来做文字处理。
|
||||
|
||||
### UNIX 上的 C 语言革命 ###
|
||||
|
||||
丹尼斯·里奇在 1972 年发明了一种叫 “**C**” 的高级编程语言 ,之后他和肯·汤普森决定用 “C” 重写 UNIX 系统,来支持更好的移植性。他们在那一年里编写和调试了差不多 100,000 行代码。在迁移到 “C” 语言后,系统可移植性非常好,只需要修改一小部分机器相关的代码就可以将 UNIX 移植到其他计算机平台上。
|
||||
|
||||
UNIX 第一次公开露面是 1973 年丹尼斯·里奇和肯·汤普森在操作系统原理(Operating Systems Principles)上发表的一篇论文,然后 AT&T 发布了 UNIX 系统第 5 版,并授权给教育机构使用,之后在 1975 年第一次以 **$20.000** 的价格授权企业使用 UNIX 第 6 版。应用最广泛的是 1980 年发布的 UNIX 第 7 版,任何人都可以购买授权,只是授权条款非常严格。授权内容包括源代码,以及用 PDP-11 汇编语言写的及其相关内核。反正,各种版本 UNIX 系统完全由它的用户手册确定。
|
||||
|
||||
### AIX 系统 ###
|
||||
|
||||
在 **1983** 年,**微软**计划开发 **Xenix** 作为 MS-DOS 的多用户版继任者,他们在那一年花了 $8,000 搭建了一台拥有 **512 KB** 内存以及 **10 MB**硬盘并运行 Xenix 的 Altos 586。而到 1984 年为止,全世界 UNIX System V 第二版的安装数量已经超过了 100,000 。在 1986 年发布了包含因特网域名服务的 4.3BSD,而且 **IBM** 宣布 **AIX 系统**的安装数已经超过 250,000。AIX 基于 Unix System V 开发,这套系统拥有 BSD 风格的根文件系统,是两者的结合。
|
||||
|
||||
AIX 第一次引入了 **日志文件系统 (JFS)** 以及集成逻辑卷管理器 (Logical Volume Manager ,LVM)。IBM 在 1989 年将 AIX 移植到自己的 RS/6000 平台。2001 年发布的 5L 版是一个突破性的版本,提供了 Linux 友好性以及支持 Power4 服务器的逻辑分区。
|
||||
|
||||
在 2004 年发布的 AIX 5.3 引入了支持高级电源虚拟化( Advanced Power Virtualization,APV)的虚拟化技术,支持对称多线程、微分区,以及共享处理器池。
|
||||
|
||||
在 2007 年,IBM 同时发布 AIX 6.1 和 Power6 架构,开始加强自己的虚拟化产品。他们还将高级电源虚拟化重新包装成 PowerVM。
|
||||
|
||||
这次改进包括被称为 WPARs 的负载分区形式,类似于 Solaris 的 zones/Containers,但是功能更强。
|
||||
|
||||
### HP-UX 系统 ###
|
||||
|
||||
**惠普 UNIX (Hewlett-Packard’s UNIX,HP-UX)** 源于 System V 第 3 版。这套系统一开始只支持 PA-RISC HP 9000 平台。HP-UX 第 1 版发布于 1984 年。
|
||||
|
||||
HP-UX 第 9 版引入了 SAM,一个基于字符的图形用户界面 (GUI),用户可以用来管理整个系统。在 1995 年发布的第 10 版,调整了系统文件分布以及目录结构,变得有点类似 AT&T SVR4。
|
||||
|
||||
第 11 版发布于 1997 年。这是 HP 第一个支持 64 位寻址的版本。不过在 2000 年重新发布成 11i,因为 HP 为特定的信息技术用途,引入了操作环境(operating environments)和分级应用(layered applications)的捆绑组(bundled groups)。
|
||||
|
||||
在 2001 年发布的 11.20 版宣称支持安腾(Itanium)系统。HP-UX 是第一个使用 ACLs(访问控制列表,Access Control Lists)管理文件权限的 UNIX 系统,也是首先支持内建逻辑卷管理器(Logical Volume Manager)的系统之一。
|
||||
|
||||
如今,HP-UX 因为 HP 和 Veritas 的合作关系使用了 Veritas 作为主文件系统。
|
||||
|
||||
HP-UX 目前的最新版本是 11iv3, update 4。
|
||||
|
||||
### Solaris 系统 ###
|
||||
|
||||
Sun 的 UNIX 版本是 **Solaris**,用来接替 1992 年创建的 **SunOS**。SunOS 一开始基于 BSD(伯克利软件发行版,Berkeley Software Distribution)风格的 UNIX,但是 SunOS 5.0 版以及之后的版本都是基于重新包装为 Solaris 的 Unix System V 第 4 版。
|
||||
|
||||
SunOS 1.0 版于 1983 年发布,用于支持 Sun-1 和 Sun-2 平台。随后在 1985 年发布了 2.0 版。在 1987 年,Sun 和 AT&T 宣布合作一个项目以 SVR4 为基础将 System V 和 BSD 合并成一个版本。
|
||||
|
||||
Solaris 2.4 是 Sun 发布的第一个 Sparc/x86 版本。1994 年 11 月份发布的 SunOS 4.1.4 版是最后一个版本。Solaris 7 是首个 64 位 Ultra Sparc 版本,加入了对文件系统元数据记录的原生支持。
|
||||
|
||||
Solaris 9 发布于 2002 年,支持 Linux 特性以及 Solaris 卷管理器(Solaris Volume Manager)。之后,2005 年发布了 Solaris 10,带来许多创新,比如支持 Solaris Containers,新的 ZFS 文件系统,以及逻辑域(Logical Domains)。
|
||||
|
||||
目前 Solaris 最新的版本是 第 10 版,最后的更新发布于 2008 年。
|
||||
|
||||
### Linux ###
|
||||
|
||||
到了 1991 年,用来替代商业操作系统的自由(free)操作系统的需求日渐高涨。因此,**Linus Torvalds** 开始构建一个自由的操作系统,最终成为 **Linux**。Linux 最开始只有一些 “C” 文件,并且使用了阻止商业发行的授权。Linux 是一个类 UNIX 系统但又不尽相同。
|
||||
|
||||
2015 年发布了基于 GNU Public License (GPL)授权的 3.18 版。IBM 声称有超过 1800 万行开源代码开源给开发者。
|
||||
|
||||
如今 GNU Public License 是应用最广泛的自由软件授权方式。根据开源软件原则,这份授权允许个人和企业自由分发、运行、通过拷贝共享、学习,以及修改软件源码。
|
||||
|
||||
### UNIX vs. Linux:技术概要 ###
|
||||
|
||||
- Linux 鼓励多样性,Linux 的开发人员来自各种背景,有更多不同经验和意见。
|
||||
- Linux 比 UNIX 支持更多的平台和架构。
|
||||
- UNIX 商业版本的开发人员针对特定目标平台以及用户设计他们的操作系统。
|
||||
- **Linux 比 UNIX 有更好的安全性**,更少受病毒或恶意软件攻击。截止到现在,Linux 上大约有 60-100 种病毒,但是没有任何一种还在传播。另一方面,UNIX 上大约有 85-120 种病毒,但是其中有一些还在传播中。
|
||||
- 由于 UNIX 命令、工具和元素很少改变,甚至很多接口和命令行参数在后续 UNIX 版本中一直沿用。
|
||||
- 有些 Linux 开发项目以自愿为基础进行资助,比如 Debian。其他项目会维护一个和商业 Linux 的社区版,比如 SUSE 的 openSUSE 以及红帽的 Fedora。
|
||||
- 传统 UNIX 是纵向扩展,而另一方面 Linux 是横向扩展。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/
|
||||
|
||||
作者:[M.el Khamlichi][a]
|
||||
译者:[zpl1025](https://github.com/zpl1025)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/pirat9/
|
||||
[1]:http://www.unixmen.com/ken-thompson-unix-systems-father/
|
||||
[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/
|
@ -8,45 +8,44 @@ Copyright (C) 2015 greenbytes GmbH
|
||||
|
||||
### 源码 ###
|
||||
|
||||
你可以从[这里][1]得到 Apache 发行版。Apache 2.4.17 及其更高版本都支持 HTTP/2。我不会再重复介绍如何构建服务器的指令。在很多地方有很好的指南,例如[这里][2]。
|
||||
你可以从[这里][1]得到 Apache 版本。Apache 2.4.17 及其更高版本都支持 HTTP/2。我不会再重复介绍如何构建该服务器的指令。在很多地方有很好的指南,例如[这里][2]。
|
||||
|
||||
(有任何试验的链接?在 Twitter 上告诉我吧 @icing)
|
||||
(有任何这个试验性软件包的相关链接?在 Twitter 上告诉我吧 @icing)
|
||||
|
||||
#### 编译支持 HTTP/2 ####
|
||||
#### 编译支持 HTTP/2 ####
|
||||
|
||||
在你编译发行版之前,你要进行一些**配置**。这里有成千上万的选项。和 HTTP/2 相关的是:
|
||||
在你编译版本之前,你要进行一些**配置**。这里有成千上万的选项。和 HTTP/2 相关的是:
|
||||
|
||||
- **--enable-http2**
|
||||
|
||||
启用在 Apache 服务器内部实现协议的 ‘http2’ 模块。
|
||||
启用在 Apache 服务器内部实现该协议的 ‘http2’ 模块。
|
||||
|
||||
- **--with-nghttp2=<dir>**
|
||||
- **--with-nghttp2=\<dir>**
|
||||
|
||||
指定 http2 模块需要的 libnghttp2 模块的非默认位置。如果 nghttp2 是在默认的位置,配置过程会自动采用。
|
||||
|
||||
- **--enable-nghttp2-staticlib-deps**
|
||||
|
||||
很少用到的选项,你可能用来静态链接 nghttp2 库到服务器。在大部分平台上,只有在找不到共享 nghttp2 库时才有效。
|
||||
很少用到的选项,你可能想将 nghttp2 库静态链接到服务器里。在大部分平台上,只有在找不到共享 nghttp2 库时才有用。
|
||||
|
||||
如果你想自己编译 nghttp2,你可以到 [nghttp2.org][3] 查看文档。最新的 Fedora 以及其它发行版已经附带了这个库。
|
||||
如果你想自己编译 nghttp2,你可以到 [nghttp2.org][3] 查看文档。最新的 Fedora 以及其它版本已经附带了这个库。
|
||||
|
||||
#### TLS 支持 ####
|
||||
|
||||
大部分人想在浏览器上使用 HTTP/2, 而浏览器只在 TLS 连接(**https:// 开头的 url)时支持它。你需要一些我下面介绍的配置。但首先你需要的是支持 ALPN 扩展的 TLS 库。
|
||||
大部分人想在浏览器上使用 HTTP/2, 而浏览器只在使用 TLS 连接(**https:// 开头的 url)时才支持 HTTP/2。你需要一些我下面介绍的配置。但首先你需要的是支持 ALPN 扩展的 TLS 库。
|
||||
|
||||
ALPN 用来协商(negotiate)服务器和客户端之间的协议。如果你服务器上 TLS 库还没有实现 ALPN,客户端只能通过 HTTP/1.1 通信。那么,可以和 Apache 链接并支持它的是什么库呢?
|
||||
|
||||
ALPN 用来屏蔽服务器和客户端之间的协议。如果你服务器上 TLS 库还没有实现 ALPN,客户端只能通过 HTTP/1.1 通信。那么,和 Apache 连接的到底是什么?又是什么支持它呢?
|
||||
- **OpenSSL 1.0.2** 及其以后。
|
||||
- ??? (别的我也不知道了)
|
||||
|
||||
- **OpenSSL 1.0.2** 即将到来。
|
||||
- ???
|
||||
|
||||
如果你的 OpenSSL 库是 Linux 发行版自带的,这里使用的版本号可能和官方 OpenSSL 发行版的不同。如果不确定的话检查一下你的 Linux 发行版吧。
|
||||
如果你的 OpenSSL 库是 Linux 版本自带的,这里使用的版本号可能和官方 OpenSSL 版本的不同。如果不确定的话检查一下你的 Linux 版本吧。
|
||||
|
||||
### 配置 ###
|
||||
|
||||
另一个给服务器的好建议是为 http2 模块设置合适的日志等级。添加下面的配置:
|
||||
|
||||
# 某个地方有这样一行
|
||||
# 放在某个地方的这样一行
|
||||
LoadModule http2_module modules/mod_http2.so
|
||||
|
||||
<IfModule http2_module>
|
||||
@ -62,38 +61,37 @@ ALPN 用来屏蔽服务器和客户端之间的协议。如果你服务器上 TL
|
||||
|
||||
那么,假设你已经编译部署好了服务器, TLS 库也是最新的,你启动了你的服务器,打开了浏览器。。。你怎么知道它在工作呢?
|
||||
|
||||
如果除此之外你没有添加其它到服务器配置,很可能它没有工作。
|
||||
如果除此之外你没有添加其它的服务器配置,很可能它没有工作。
|
||||
|
||||
你需要告诉服务器在哪里使用协议。默认情况下,你的服务器并没有启动 HTTP/2 协议。因为这是安全路由,你可能要有一套部署了才能继续。
|
||||
你需要告诉服务器在哪里使用该协议。默认情况下,你的服务器并没有启动 HTTP/2 协议。因为这样比较安全,也许才能让你已有的部署可以继续工作。
|
||||
|
||||
你用 **Protocols** 命令启用 HTTP/2 协议:
|
||||
你可以用新的 **Protocols** 指令启用 HTTP/2 协议:
|
||||
|
||||
# for a https server
|
||||
# 对于 https 服务器
|
||||
Protocols h2 http/1.1
|
||||
...
|
||||
|
||||
# for a http server
|
||||
# 对于 http 服务器
|
||||
Protocols h2c http/1.1
|
||||
|
||||
你可以给一般服务器或者指定的 **vhosts** 添加这个配置。
|
||||
你可以给整个服务器或者指定的 **vhosts** 添加这个配置。
|
||||
|
||||
#### SSL 参数 ####
|
||||
|
||||
对于 TLS (SSL),HTTP/2 有一些特殊的要求。阅读 [https:// 连接][4]了解更详细的信息。
|
||||
对于 TLS (SSL),HTTP/2 有一些特殊的要求。阅读下面的“ https:// 连接”一节了解更详细的信息。
|
||||
|
||||
### http:// 连接 (h2c) ###
|
||||
|
||||
尽管现在还没有浏览器支持 HTTP/2 协议, http:// 这样的 url 也能正常工作, 因为有 mod_h[ttp]2 的支持。启用它你只需要做的一件事是在 **httpd.conf** 配置 Protocols :
|
||||
尽管现在还没有浏览器支持,但是 HTTP/2 协议也工作在 http:// 这样的 url 上, 而且 mod_h[ttp]2 也支持。启用它你唯一所要做的是在 Protocols 配置中启用它:
|
||||
|
||||
# for a http server
|
||||
# 对于 http 服务器
|
||||
Protocols h2c http/1.1
|
||||
|
||||
|
||||
这里有一些支持 **h2c** 的客户端(和客户端库)。我会在下面介绍:
|
||||
|
||||
#### curl ####
|
||||
|
||||
Daniel Stenberg 维护的网络资源命令行客户端 curl 当然支持。如果你的系统上有 curl,有一个简单的方法检查它是否支持 http/2:
|
||||
Daniel Stenberg 维护的用于访问网络资源的命令行客户端 curl 当然支持。如果你的系统上有 curl,有一个简单的方法检查它是否支持 http/2:
|
||||
|
||||
sh> curl -V
|
||||
curl 7.43.0 (x86_64-apple-darwin15.0) libcurl/7.43.0 SecureTransport zlib/1.2.5
|
||||
@ -126,11 +124,11 @@ Daniel Stenberg 维护的网络资源命令行客户端 curl 当然支持。如
|
||||
|
||||
恭喜,如果看到了有 **...101 Switching...** 的行就表示它正在工作!
|
||||
|
||||
有一些情况不会发生到 HTTP/2 的 Upgrade 。如果你的第一个请求没有内容,例如你上传一个文件,就不会触发 Upgrade。[h2c 限制][5]部分有详细的解释。
|
||||
有一些情况不会发生 HTTP/2 的升级切换(Upgrade)。如果你的第一个请求有内容数据(body),例如你上传一个文件时,就不会触发升级切换。[h2c 限制][5]部分有详细的解释。
|
||||
|
||||
#### nghttp ####
|
||||
|
||||
nghttp2 有能一起编译的客户端和服务器。如果你的系统中有客户端,你可以简单地通过获取资源验证你的安装:
|
||||
nghttp2 可以一同编译它自己的客户端和服务器。如果你的系统中有该客户端,你可以简单地通过获取一个资源来验证你的安装:
|
||||
|
||||
sh> nghttp -uv http://<yourserver>/
|
||||
[ 0.001] Connected
|
||||
@ -151,7 +149,7 @@ nghttp2 有能一起编译的客户端和服务器。如果你的系统中有客
|
||||
|
||||
这和我们上面 **curl** 例子中看到的 Upgrade 输出很相似。
|
||||
|
||||
在命令行参数中隐藏着一种可以使用 **h2c**:的参数:**-u**。这会指示 **nghttp** 进行 HTTP/1 Upgrade 过程。但如果我们不使用呢?
|
||||
有另外一种在命令行参数中不用 **-u** 参数而使用 **h2c** 的方法。这个参数会指示 **nghttp** 进行 HTTP/1 升级切换过程。但如果我们不使用呢?
|
||||
|
||||
sh> nghttp -v http://<yourserver>/
|
||||
[ 0.002] Connected
|
||||
@ -166,36 +164,33 @@ nghttp2 有能一起编译的客户端和服务器。如果你的系统中有客
|
||||
:scheme: http
|
||||
...
|
||||
|
||||
连接马上显示出了 HTTP/2!这就是协议中所谓的直接模式,当客户端发送一些特殊的 24 字节到服务器时就会发生:
|
||||
连接马上使用了 HTTP/2!这就是协议中所谓的直接(direct)模式,当客户端发送一些特殊的 24 字节到服务器时就会发生:
|
||||
|
||||
0x505249202a20485454502f322e300d0a0d0a534d0d0a0d0a
|
||||
or in ASCII: PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n
|
||||
|
||||
用 ASCII 表示是:
|
||||
|
||||
PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n
|
||||
|
||||
支持 **h2c** 的服务器在一个新的连接中看到这些信息就会马上切换到 HTTP/2。HTTP/1.1 服务器则认为是一个可笑的请求,响应并关闭连接。
|
||||
|
||||
因此 **直接** 模式只适合于那些确定服务器支持 HTTP/2 的客户端。例如,前一个 Upgrade 过程是成功的。
|
||||
因此,**直接**模式只适合于那些确定服务器支持 HTTP/2 的客户端。例如,当前一个升级切换过程成功了的时候。
|
||||
|
||||
**直接** 模式的魅力是零开销,它支持所有请求,即使没有 body 部分(查看[h2c 限制][6])。任何支持 h2c 协议的服务器默认启用了直接模式。如果你想停用它,可以添加下面的配置指令到你的服务器:
|
||||
**直接**模式的魅力是零开销,它支持所有请求,即使带有请求数据部分(查看[h2c 限制][6])。
|
||||
|
||||
注:下面这行打删除线
|
||||
|
||||
H2Direct off
|
||||
|
||||
注:下面这行打删除线
|
||||
|
||||
对于 2.4.17 发行版,默认明文连接时启用 **H2Direct** 。但是有一些模块和这不兼容。因此,在下一发行版中,默认会设置为**off**,如果你希望你的服务器支持它,你需要设置它为:
|
||||
对于 2.4.17 版本,明文连接时默认启用 **H2Direct** 。但是有一些模块和这不兼容。因此,在下一版本中,默认会设置为**off**,如果你希望你的服务器支持它,你需要设置它为:
|
||||
|
||||
H2Direct on
|
||||
|
||||
### https:// 连接 (h2) ###
|
||||
|
||||
一旦你的 mod_h[ttp]2 支持 h2c 连接,就是时候一同启用 **h2**,因为现在的浏览器支持它和 **https:** 一同使用。
|
||||
当你的 mod_h[ttp]2 可以支持 h2c 连接时,那就可以一同启用 **h2** 兄弟了,现在的浏览器仅支持它和 **https:** 一同使用。
|
||||
|
||||
HTTP/2 标准对 https:(TLS)连接增加了一些额外的要求。上面已经提到了 ALNP 扩展。另外的一个要求是不会使用特定[黑名单][7]中的密码。
|
||||
HTTP/2 标准对 https:(TLS)连接增加了一些额外的要求。上面已经提到了 ALNP 扩展。另外的一个要求是不能使用特定[黑名单][7]中的加密算法。
|
||||
|
||||
尽管现在版本的 **mod_h[ttp]2** 不增强这些密码(以后可能会),大部分客户端会这么做。如果你用不切当的密码在浏览器中打开 **h2** 服务器,你会看到模糊警告**INADEQUATE_SECURITY**,浏览器会拒接连接。
|
||||
尽管现在版本的 **mod_h[ttp]2** 不增强这些算法(以后可能会),但大部分客户端会这么做。如果让你的浏览器使用不恰当的算法打开 **h2** 服务器,你会看到不明确的警告**INADEQUATE_SECURITY**,浏览器会拒接连接。
|
||||
|
||||
一个可接受的 Apache SSL 配置类似:
|
||||
一个可行的 Apache SSL 配置类似:
|
||||
|
||||
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
|
||||
SSLProtocol All -SSLv2 -SSLv3
|
||||
@ -203,11 +198,11 @@ HTTP/2 标准对 https:(TLS)连接增加了一些额外的要求。上面已
|
||||
|
||||
(是的,这确实很长。)
|
||||
|
||||
这里还有一些应该调整的 SSL 配置参数,但不是必须:**SSLSessionCache**, **SSLUseStapling** 等,其它地方也有介绍这些。例如 Ilya Grigorik 写的一篇博客 [高性能浏览器网络][8]。
|
||||
这里还有一些应该调整,但不是必须调整的 SSL 配置参数:**SSLSessionCache**, **SSLUseStapling** 等,其它地方也有介绍这些。例如 Ilya Grigorik 写的一篇超赞的博客: [高性能浏览器网络][8]。
|
||||
|
||||
#### curl ####
|
||||
|
||||
再次回到 shell 并使用 curl(查看 [curl h2c 章节][9] 了解要求)你也可以通过 curl 用简单的命令检测你的服务器:
|
||||
再次回到 shell 使用 curl(查看上面的“curl h2c”章节了解要求),你也可以通过 curl 用简单的命令检测你的服务器:
|
||||
|
||||
sh> curl -v --http2 https://<yourserver>/
|
||||
...
|
||||
@ -220,9 +215,9 @@ HTTP/2 标准对 https:(TLS)连接增加了一些额外的要求。上面已
|
||||
|
||||
恭喜你,能正常工作啦!如果还不能,可能原因是:
|
||||
|
||||
- 你的 curl 不支持 HTTP/2。查看[检测][10]。
|
||||
- 你的 curl 不支持 HTTP/2。查看上面的“检测 curl”一节。
|
||||
- 你的 openssl 版本太低不支持 ALPN。
|
||||
- 不能验证你的证书,或者不接受你的密码配置。尝试添加命令行选项 -k 停用 curl 中的检查。如果那能工作,还要重新配置你的 SSL 和证书。
|
||||
- 不能验证你的证书,或者不接受你的算法配置。尝试添加命令行选项 -k 停用 curl 中的这些检查。如果可以工作,就重新配置你的 SSL 和证书。
|
||||
|
||||
#### nghttp ####
|
||||
|
||||
@ -246,11 +241,11 @@ HTTP/2 标准对 https:(TLS)连接增加了一些额外的要求。上面已
|
||||
The negotiated protocol: http/1.1
|
||||
[ERROR] HTTP/2 protocol was not selected. (nghttp2 expects h2)
|
||||
|
||||
这表示 ALPN 能正常工作,但并没有用 h2 协议。你需要像上面介绍的那样在服务器上选中那个协议。如果一开始在 vhost 部分选中不能正常工作,试着在通用部分选中它。
|
||||
这表示 ALPN 能正常工作,但并没有用 h2 协议。你需要像上面介绍的那样检查你服务器上的 Protocols 配置。如果一开始在 vhost 部分设置不能正常工作,试着在通用部分设置它。
|
||||
|
||||
#### Firefox ####
|
||||
|
||||
Update: [Apache Lounge][11] 的 Steffen Land 告诉我 [Firefox HTTP/2 指示插件][12]。你可以看到有多少地方用到了 h2(提示:Apache Lounge 用 h2 已经有一段时间了。。。)
|
||||
更新: [Apache Lounge][11] 的 Steffen Land 告诉我 [Firefox 上有个 HTTP/2 指示插件][12]。你可以看到有多少地方用到了 h2(提示:Apache Lounge 用 h2 已经有一段时间了。。。)
|
||||
|
||||
你可以在 Firefox 浏览器中打开开发者工具,在那里的网络标签页查看 HTTP/2 连接。当你打开了 HTTP/2 并重新刷新 html 页面时,你会看到类似下面的东西:
|
||||
|
||||
@ -260,9 +255,9 @@ Update: [Apache Lounge][11] 的 Steffen Land 告诉我 [Firefox HTTP/2 指示
|
||||
|
||||
#### Google Chrome ####
|
||||
|
||||
在 Google Chrome 中,你在开发者工具中看不到 HTTP/2 指示器。相反,Chrome 用特殊的地址 **chrome://net-internals/#http2** 给出了相关信息。
|
||||
在 Google Chrome 中,你在开发者工具中看不到 HTTP/2 指示器。相反,Chrome 用特殊的地址 **chrome://net-internals/#http2** 给出了相关信息。(LCTT 译注:Chrome 已经有一个 “HTTP/2 and SPDY indicator” 可以很好的在地址栏识别 HTTP/2 连接)
|
||||
|
||||
如果你在服务器中打开了一个页面并在 Chrome 那个页面查看,你可以看到类似下面这样:
|
||||
如果你打开了一个服务器的页面,可以在 Chrome 中查看那个 net-internals 页面,你可以看到类似下面这样:
|
||||
|
||||
![](https://icing.github.io/mod_h2/images/chrome-h2.png)
|
||||
|
||||
@ -276,21 +271,21 @@ Windows 10 中 Internet Explorer 的继任者 Edge 也支持 HTTP/2。你也可
|
||||
|
||||
#### Safari ####
|
||||
|
||||
在 Apple 的 Safari 中,打开开发者工具,那里有个网络标签页。重新加载你的服务器页面并在开发者工具中选择显示了加载的行。如果你启用了在右边显示详细试图,看 **状态** 部分。那里显示了 **HTTP/2.0 200**,类似:
|
||||
在 Apple 的 Safari 中,打开开发者工具,那里有个网络标签页。重新加载你的服务器上的页面,并在开发者工具中选择显示了加载的那行。如果你启用了在右边显示详细视图,看 **Status** 部分。那里显示了 **HTTP/2.0 200**,像这样:
|
||||
|
||||
![](https://icing.github.io/mod_h2/images/safari-h2.png)
|
||||
|
||||
#### 重新协商 ####
|
||||
|
||||
https: 连接重新协商是指正在运行的连接中特定的 TLS 参数会发生变化。在 Apache httpd 中,你可以通过目录中的配置文件修改 TLS 参数。如果一个要获取特定位置资源的请求到来,配置的 TLS 参数会和当前的 TLS 参数进行对比。如果它们不相同,就会触发重新协商。
|
||||
https: 连接重新协商是指正在运行的连接中特定的 TLS 参数会发生变化。在 Apache httpd 中,你可以在 directory 配置中改变 TLS 参数。如果进来一个获取特定位置资源的请求,配置的 TLS 参数会和当前的 TLS 参数进行对比。如果它们不相同,就会触发重新协商。
|
||||
|
||||
这种最常见的情形是密码变化和客户端验证。你可以要求客户访问特定位置时需要通过验证,或者对于特定资源,你可以使用更安全的, CPU 敏感的密码。
|
||||
这种最常见的情形是算法变化和客户端证书。你可以要求客户访问特定位置时需要通过验证,或者对于特定资源,你可以使用更安全的、对 CPU 压力更大的算法。
|
||||
|
||||
不管你的想法有多么好,HTTP/2 中都**不可以**发生重新协商。如果有 100 多个请求到同一个地方,什么时候哪个会发生重新协商呢?
|
||||
但不管你的想法有多么好,HTTP/2 中都**不可以**发生重新协商。在同一个连接上会有 100 多个请求,那么重新协商该什么时候做呢?
|
||||
|
||||
对于这种配置,现有的 **mod_h[ttp]2** 还不能保证你的安全。如果你有一个站点使用了 TLS 重新协商,别在上面启用 h2!
|
||||
对于这种配置,现有的 **mod_h[ttp]2** 还没有办法。如果你有一个站点使用了 TLS 重新协商,别在上面启用 h2!
|
||||
|
||||
当然,我们会在后面的发行版中解决这个问题然后你就可以安全地启用了。
|
||||
当然,我们会在后面的版本中解决这个问题,然后你就可以安全地启用了。
|
||||
|
||||
### 限制 ###
|
||||
|
||||
@ -298,45 +293,45 @@ https: 连接重新协商是指正在运行的连接中特定的 TLS 参数会
|
||||
|
||||
实现除 HTTP 之外协议的模块可能和 **mod_http2** 不兼容。这在其它协议要求服务器首先发送数据时无疑会发生。
|
||||
|
||||
**NNTP** 就是这种协议的一个例子。如果你在服务器中配置了 **mod_nntp_like_ssl**,甚至都不要加载 mod_http2。等待下一个发行版。
|
||||
**NNTP** 就是这种协议的一个例子。如果你在服务器中配置了 **mod\_nntp\_like\_ssl**,那么就不要加载 mod_http2。等待下一个版本。
|
||||
|
||||
#### h2c 限制 ####
|
||||
|
||||
**h2c** 的实现还有一些限制,你应该注意:
|
||||
|
||||
#### 在虚拟主机中拒绝 h2c ####
|
||||
##### 在虚拟主机中拒绝 h2c #####
|
||||
|
||||
你不能对指定的虚拟主机拒绝 **h2c 直连**。连接建立而没有看到请求时会触发**直连**,这使得不可能预先知道 Apache 需要查找哪个虚拟主机。
|
||||
|
||||
#### 升级请求体 ####
|
||||
##### 有请求数据时的升级切换 #####
|
||||
|
||||
对于有 body 部分的请求,**h2c** 升级不能正常工作。那些是 PUT 和 POST 请求(用于提交和上传)。如果你写了一个客户端,你可能会用一个简单的 GET 去处理请求或者用选项 * 去触发升级。
|
||||
对于有数据的请求,**h2c** 升级切换不能正常工作。那些是 PUT 和 POST 请求(用于提交和上传)。如果你写了一个客户端,你可能会用一个简单的 GET 或者 OPTIONS * 来处理那些请求以触发升级切换。
|
||||
|
||||
原因从技术层面来看显而易见,但如果你想知道:升级过程中,连接处于半疯状态。请求按照 HTTP/1.1 的格式,而响应使用 HTTP/2。如果请求有一个 body 部分,服务器在发送响应之前需要读取整个 body。因为响应可能需要从客户端处得到应答用于流控制。但如果仍在发送 HTTP/1.1 请求,客户端就还不能处理 HTTP/2 连接。
|
||||
原因从技术层面来看显而易见,但如果你想知道:在升级切换过程中,连接处于半疯状态。请求按照 HTTP/1.1 的格式,而响应使用 HTTP/2 帧。如果请求有一个数据部分,服务器在发送响应之前需要读取整个数据。因为响应可能需要从客户端处得到应答用于流控制及其它东西。但如果仍在发送 HTTP/1.1 请求,客户端就仍然不能以 HTTP/2 连接。
|
||||
|
||||
为了使行为可预测,几个服务器实现商决定不要在任何请求体中进行升级,即使 body 很小。
|
||||
为了使行为可预测,几个服务器在实现上决定不在任何带有请求数据的请求中进行升级切换,即使请求数据很小。
|
||||
|
||||
#### 升级 302s ####
|
||||
##### 302 时的升级切换 #####
|
||||
|
||||
有重定向发生时当前 h2c 升级也不能工作。看起来 mod_http2 之前的重写有可能发生。这当然不会导致断路,但你测试这样的站点也许会让你迷惑。
|
||||
有重定向发生时,当前的 h2c 升级切换也不能工作。看起来 mod_http2 之前的重写有可能发生。这当然不会导致断路,但你测试这样的站点也许会让你迷惑。
|
||||
|
||||
#### h2 限制 ####
|
||||
|
||||
这里有一些你应该意识到的 h2 实现限制:
|
||||
|
||||
#### 连接重用 ####
|
||||
##### 连接重用 #####
|
||||
|
||||
HTTP/2 协议允许在特定条件下重用 TLS 连接:如果你有带通配符的证书或者多个 AltSubject 名称,浏览器可能会重用现有的连接。例如:
|
||||
|
||||
你有一个 **a.example.org** 的证书,它还有另外一个名称 **b.example.org**。你在浏览器中打开 url **https://a.example.org/**,用另一个标签页加载 **https://b.example.org/**。
|
||||
你有一个 **a.example.org** 的证书,它还有另外一个名称 **b.example.org**。你在浏览器中打开 URL **https://a.example.org/**,用另一个标签页加载 **https://b.example.org/**。
|
||||
|
||||
在重新打开一个新的连接之前,浏览器看到它有一个到 **a.example.org** 的连接并且证书对于 **b.example.org** 也可用。因此,它在第一个连接上面向第二个标签页发送请求。
|
||||
在重新打开一个新的连接之前,浏览器看到它有一个到 **a.example.org** 的连接并且证书对于 **b.example.org** 也可用。因此,它在第一个连接上面发送第二个标签页的请求。
|
||||
|
||||
这种连接重用是刻意设计的,它使得致力于 HTTP/1 切分效率的站点能够不需要太多变化就能利用 HTTP/2。
|
||||
这种连接重用是刻意设计的,它使得使用了 HTTP/1 切分(sharding)来提高效率的站点能够不需要太多变化就能利用 HTTP/2。
|
||||
|
||||
Apache **mod_h[ttp]2** 还没有完全实现这点。如果 **a.example.org** 和 **b.example.org** 是不同的虚拟主机, Apache 不会允许这样的连接重用,并会告知浏览器状态码**421 错误请求**。浏览器会意识到它需要重新打开一个到 **b.example.org** 的连接。这仍然能工作,只是会降低一些效率。
|
||||
Apache **mod_h[ttp]2** 还没有完全实现这点。如果 **a.example.org** 和 **b.example.org** 是不同的虚拟主机, Apache 不会允许这样的连接重用,并会告知浏览器状态码 **421 Misdirected Request**。浏览器会意识到它需要重新打开一个到 **b.example.org** 的连接。这仍然能工作,只是会降低一些效率。
|
||||
|
||||
我们期望下一次的发布中能有切当的检查。
|
||||
我们期望下一次的发布中能有合适的检查。
|
||||
|
||||
Münster, 12.10.2015,
|
||||
|
||||
@ -355,7 +350,7 @@ via: https://icing.github.io/mod_h2/howto.html
|
||||
|
||||
作者:[icing][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,19 +1,18 @@
|
||||
|
||||
提高 WordPress 性能的9个技巧
|
||||
深入浅出讲述提升 WordPress 性能的九大秘笈
|
||||
================================================================================
|
||||
|
||||
关于建站和 web 应用程序交付,WordPress 是全球最大的一个平台。全球大约 [四分之一][1] 的站点现在正在使用开源 WordPress 软件,包括 eBay, Mozilla, RackSpace, TechCrunch, CNN, MTV,纽约时报,华尔街日报。
|
||||
在建站和 web 应用程序交付方面,WordPress 是全球最大的一个平台。全球大约[四分之一][1] 的站点现在正在使用开源 WordPress 软件,包括 eBay、 Mozilla、 RackSpace、 TechCrunch、 CNN、 MTV、纽约时报、华尔街日报 等等。
|
||||
|
||||
WordPress.com,对于用户创建博客平台是最流行的,其也运行在WordPress 开源软件上。[NGINX powers WordPress.com][2]。许多 WordPress 用户刚开始在 WordPress.com 上建站,然后移动到搭载着 WordPress 开源软件的托管主机上;其中大多数站点都使用 NGINX 软件。
|
||||
最流行的个人博客平台 WordPress.com,其也运行在 WordPress 开源软件上。[而 NGINX 则为 WordPress.com 提供了动力][2]。在 WordPress.com 的用户当中,许多站点起步于 WordPress.com,然后换成了自己运行 WordPress 开源软件;它们中越来越多的站点也使用了 NGINX 软件。
|
||||
|
||||
WordPress 的吸引力是它的简单性,无论是安装启动或者对于终端用户的使用。然而,当使用量不断增长时,WordPress 站点的体系结构也存在一定的问题 - 这里几个方法,包括使用缓存以及组合 WordPress 和 NGINX,可以解决这些问题。
|
||||
WordPress 的吸引力源于其简单性,无论是对于最终用户还是安装架设。然而,当使用量不断增长时,WordPress 站点的体系结构也存在一定的问题 - 这里有几个方法,包括使用缓存,以及将 WordPress 和 NGINX 组合起来,可以解决这些问题。
|
||||
|
||||
在这篇博客中,我们提供了9个技巧来进行优化,以帮助你解决 WordPress 中一些常见的性能问题:
|
||||
在这篇博客中,我们提供了九个提速技巧来帮助你解决 WordPress 中一些常见的性能问题:
|
||||
|
||||
- [缓存静态资源][3]
|
||||
- [缓存动态文件][4]
|
||||
- [使用 NGINX][5]
|
||||
- [添加支持 NGINX 的链接][6]
|
||||
- [迁移到 NGINX][5]
|
||||
- [让 NGINX 支持永久链接][6]
|
||||
- [为 NGINX 配置 FastCGI][7]
|
||||
- [为 NGINX 配置 W3_Total_Cache][8]
|
||||
- [为 NGINX 配置 WP-Super-Cache][9]
|
||||
@ -22,39 +21,39 @@ WordPress 的吸引力是它的简单性,无论是安装启动或者对于终
|
||||
|
||||
### 在 LAMP 架构下 WordPress 的性能 ###
|
||||
|
||||
大多数 WordPress 站点都运行在传统的 LAMP 架构下:Linux 操作系统,Apache Web 服务器软件,MySQL 数据库软件 - 通常是一个单独的数据库服务器 - 和 PHP 编程语言。这些都是非常著名的,广泛应用的开源工具。大多数人都将 WordPress “称为” LAMP,并且很容易寻求帮助和支持。
|
||||
大多数 WordPress 站点都运行在传统的 LAMP 架构下:Linux 操作系统,Apache Web 服务器软件,MySQL 数据库软件(通常是一个单独的数据库服务器)和 PHP 编程语言。这些都是非常著名的,广泛应用的开源工具。在 WordPress 世界里,很多人都用的是 LAMP,所以很容易寻求帮助和支持。
|
||||
|
||||
当用户访问 WordPress 站点时,浏览器为每个用户创建六到八个连接来运行 Linux/Apache 的组合。当用户请求连接时,每个页面的 PHP 文件开始飞速的从 MySQL 数据库争夺资源来响应请求。
|
||||
当用户访问 WordPress 站点时,浏览器为每个用户创建六到八个连接来连接到 Linux/Apache 上。当用户请求连接时,PHP 即时生成每个页面,从 MySQL 数据库获取资源来响应请求。
|
||||
|
||||
LAMP 对于数百个并发用户依然能照常工作。然而,流量突然增加是常见的并且 - 通常是 - 一件好事。
|
||||
LAMP 或许对于数百个并发用户依然能照常工作。然而,流量突然增加是常见的,并且通常这应该算是一件好事。
|
||||
|
||||
但是,当 LAMP 站点变得繁忙时,当同时在线的用户达到数千个时,它的瓶颈就会被暴露出来。瓶颈存在主要是两个原因:
|
||||
|
||||
1. Apache Web 服务器 - Apache 为每一个连接需要消耗大量资源。如果 Apache 接受了太多的并发连接,内存可能会耗尽,性能急剧降低,因为数据必须使用磁盘进行交换。如果以限制连接数来提高响应时间,新的连接必须等待,这也导致了用户体验变得很差。
|
||||
1. Apache Web 服务器 - Apache 的每个/每次连接需要消耗大量资源。如果 Apache 接受了太多的并发连接,内存可能会耗尽,从而导致性能急剧降低,因为数据必须交换到磁盘了。如果以限制连接数来提高响应时间,新的连接必须等待,这也导致了用户体验变得很差。
|
||||
|
||||
1. PHP/MySQL 的交互 - 总之,一个运行 PHP 和 MySQL 数据库服务器的应用服务器上每秒的请求量不能超过最大限制。当请求的数量超过最大连接数时,用户必须等待。超过最大连接数时也会增加所有用户的响应时间。超过其两倍以上时会出现明显的性能问题。
|
||||
1. PHP/MySQL 的交互 - 一个运行 PHP 和 MySQL 数据库服务器的应用服务器上每秒的请求量有一个最大限制。当请求的数量超过这个最大限制时,用户必须等待。超过这个最大限制时也会增加所有用户的响应时间。超过其两倍以上时会出现明显的性能问题。
|
||||
|
||||
LAMP 架构的网站一般都会出现性能瓶颈,这时就需要升级硬件了 - 加 CPU,扩大磁盘空间等等。当 Apache 和 PHP/MySQL 的架构负载运行后,在硬件上不断的提升无法保证对系统资源指数增长的需求。
|
||||
LAMP 架构的网站出现性能瓶颈是常见的情况,这时就需要升级硬件了 - 增加 CPU,扩大磁盘空间等等。当 Apache 和 PHP/MySQL 的架构超载后,在硬件上不断的提升却跟不上系统资源指数增长的需求。
|
||||
|
||||
最先取代 LAMP 架构的是 LEMP 架构 – Linux, NGINX, MySQL, 和 PHP。 (这是 LEMP 的缩写,E 代表着 “engine-x.” 的发音。) 我们在 [技巧 3][12] 中会描述 LEMP 架构。
|
||||
首选替代 LAMP 架构的是 LEMP 架构 – Linux, NGINX, MySQL, 和 PHP。 (这是 LEMP 的缩写,E 代表着 “engine-x.” 的发音。) 我们在 [技巧 3][12] 中会描述 LEMP 架构。
|
||||
|
||||
### 技巧 1. 缓存静态资源 ###
|
||||
|
||||
静态资源是指不变的文件,像 CSS,JavaScript 和图片。这些文件往往在网页的数据中占半数以上。页面的其余部分是动态生成的,像在论坛中评论,仪表盘的性能,或个性化的内容(可以看看Amazon.com 产品)。
|
||||
静态资源是指不变的文件,像 CSS,JavaScript 和图片。这些文件往往在网页的数据中占半数以上。页面的其余部分是动态生成的,像在论坛中评论,性能仪表盘,或个性化的内容(可以看看 Amazon.com 产品)。
|
||||
|
||||
缓存静态资源有两大好处:
|
||||
|
||||
- 更快的交付给用户 - 用户从他们浏览器的缓存或者从互联网上离他们最近的缓存服务器获取静态文件。有时候文件较大,因此减少等待时间对他们来说帮助很大。
|
||||
- 更快的交付给用户 - 用户可以从他们浏览器的缓存或者从互联网上离他们最近的缓存服务器获取静态文件。有时候文件较大,因此减少等待时间对他们来说帮助很大。
|
||||
|
||||
- 减少应用服务器的负载 - 从缓存中检索到的每个文件会让 web 服务器少处理一个请求。你的缓存越多,用户等待的时间越短。
|
||||
|
||||
要让浏览器缓存文件,需要早在静态文件中设置正确的 HTTP 首部。当看到 HTTP Cache-Control 首部时,特别设置了 max-age,Expires 首部,以及 Entity 标记。[这里][13] 有详细的介绍。
|
||||
要让浏览器缓存文件,需要在静态文件中设置正确的 HTTP 首部。看看 HTTP Cache-Control 首部,特别是设置了 max-age 参数,Expires 首部,以及 Entity 标记。[这里][13] 有详细的介绍。
|
||||
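
配置好之后,可以用 curl 检查静态资源的响应中是否带上了这些缓存相关的首部,下面是一个示意(URL 仅为占位示例):

    curl -sI https://www.example.com/wp-content/uploads/logo.png \
        | grep -iE 'cache-control|expires|etag|last-modified'
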
|
||||
当启用本地缓存然后用户请求以前访问过的文件时,浏览器首先检查该文件是否在缓存中。如果在,它会询问 Web 服务器该文件是否改变过。如果该文件没有改变,Web 服务器将立即响应一个304状态码(未改变),这意味着该文件没有改变,而不是返回状态码200 OK,然后继续检索并发送已改变的文件。
|
||||
当启用本地缓存,然后用户请求以前访问过的文件时,浏览器首先检查该文件是否在缓存中。如果在,它会询问 Web 服务器该文件是否改变过。如果该文件没有改变,Web 服务器将立即响应一个304状态码(未改变),这意味着该文件没有改变,而不是返回状态码200 OK 并检索和发送已改变的文件。
|
||||
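
下面是一个验证 304 行为的小示例:先取得资源的 ETag,再带上 If-None-Match 重新请求,资源未改变时服务器应当返回 304(URL 仅为占位示例):

    URL="https://www.example.com/wp-content/uploads/logo.png"
    # 从响应头里取出 ETag 值
    ETAG=$(curl -sI "$URL" | awk 'tolower($1)=="etag:"{print $2}' | tr -d '\r')
    # 带上 If-None-Match 重新请求,只看状态行
    curl -sI -H "If-None-Match: $ETAG" "$URL" | head -n 1
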
|
||||
为了支持浏览器以外的缓存,可以考虑下面的方法,内容分发网络(CDN)。CDN 是一种流行且强大的缓存工具,但我们在这里不详细描述它。可以想一下 CDN 背后的支撑技术的实现。此外,当你的站点从 HTTP/1.x 过渡到 HTTP/2 协议时,CDN 的用处可能不太大;根据需要调查和测试,找到你网站需要的正确方法。
|
||||
要在浏览器之外支持缓存,可以考虑下面讲到的技巧,以及考虑使用内容分发网络(CDN)。CDN 是一种流行且强大的缓存工具,但我们在这里不详细描述它。在你实现了这里讲到的其它技术之后可以考虑 CDN。此外,当你的站点从 HTTP/1.x 过渡到 HTTP/2 协议时,CDN 的用处可能不太大;根据需要调查和测试,找到你网站需要的正确方法。
|
||||
|
||||
如果你转向 NGINX Plus 或开源的 NGINX 软件作为架构的一部分,建议你考虑 [技巧 3][14],然后配置 NGINX 缓存静态资源。使用下面的配置,用你 Web 服务器的 URL 替换 www.example.com。
|
||||
如果你转向 NGINX Plus 或将开源的 NGINX 软件作为架构的一部分,建议你考虑 [技巧 3][14],然后配置 NGINX 缓存静态资源。使用下面的配置,用你 Web 服务器的 URL 替换 www.example.com。
|
||||
|
||||
server {
|
||||
# substitute your web server's URL for www.example.com
|
||||
@ -86,63 +85,63 @@ LAMP 对于数百个并发用户依然能照常工作。然而,流量突然增
|
||||
|
||||
### 技巧 2. 缓存动态文件 ###
|
||||
|
||||
WordPress 是动态生成的网页,这意味着每次请求时它都要生成一个给定的网页(即使和前一次的结果相同)。这意味着用户随时获得的是最新内容。
|
||||
WordPress 动态地生成网页,这意味着每次请求时它都要生成一个给定的网页(即使和前一次的结果相同)。这意味着用户随时获得的是最新内容。
|
||||
|
||||
想一下,当用户访问一个帖子时,并在文章底部有用户的评论时。你希望用户能够看到所有的评论 - 即使评论刚刚发布。动态内容就是处理这种情况的。
|
||||
|
||||
但现在,当帖子每秒出现十几二十几个请求时。应用服务器可能每秒需要频繁生成页面导致其压力过大,造成延误。为了给用户提供最新的内容,每个访问理论上都是新的请求,因此他们也不得不在首页等待。
|
||||
但现在,当帖子每秒出现十几二十几个请求时。应用服务器可能每秒需要频繁生成页面导致其压力过大,造成延误。为了给用户提供最新的内容,每个访问理论上都是新的请求,因此它们不得不在原始出处等待很长时间。
|
||||
|
||||
为了防止页面由于负载过大变得缓慢,需要缓存动态文件。这需要减少文件的动态内容来提高整个系统的响应速度。
|
||||
为了防止页面由于不断提升的负载而变得缓慢,需要缓存动态文件。这需要减少文件的动态内容来提高整个系统的响应速度。
|
||||
|
||||
要在 WordPress 中启用缓存中,需要使用一些流行的插件 - 如下所述。WordPress 的缓存插件需要刷新页面,然后将其缓存短暂时间 - 也许只有几秒钟。因此,如果该网站每秒中只有几个请求,那大多数用户获得的页面都是缓存的副本。这也有助于提高所有用户的检索时间:
|
||||
要在 WordPress 中启用缓存中,需要使用一些流行的插件 - 如下所述。WordPress 的缓存插件会请求最新的页面,然后将其缓存短暂时间 - 也许只有几秒钟。因此,如果该网站每秒中会有几个请求,那大多数用户获得的页面都是缓存的副本。这也有助于提高所有用户的检索时间:
|
||||
|
||||
- 大多数用户获得页面的缓存副本。应用服务器没有做任何工作。
|
||||
- 用户很快会得到一个新的副本。应用服务器只需每隔一段时间刷新页面。当服务器产生一个新的页面(对于第一个用户访问后,缓存页过期),它这样做要快得多,因为它的请求不会超载。
|
||||
- 用户会得到一个不久前刚生成的崭新副本。应用服务器只需每隔一段时间生成一个崭新页面。当服务器产生一个崭新页面(即缓存过期后的第一个用户访问)时,它这样做要快得多,因为它的请求并没有超载。
|
||||
|
||||
你可以缓存运行在 LAMP 架构或者 [LEMP 架构][15] 上 WordPress 的动态文件(在 [技巧 3][16] 中说明了)。有几个缓存插件,你可以在 WordPress 中使用。这里有最流行的缓存插件和缓存技术,从最简单到最强大的:
|
||||
你可以缓存运行在 LAMP 架构或者 [LEMP 架构][15] 上 WordPress 的动态文件(在 [技巧 3][16] 中说明了)。有几个缓存插件,你可以在 WordPress 中使用。下面列出了最流行的缓存插件和缓存技术,从最简单的到最强大的:
|
||||
|
||||
- [Hyper-Cache][17] 和 [Quick-Cache][18] – 这两个插件为每个 WordPress 页面创建单个 PHP 文件。它支持的一些动态函数会绕过多个 WordPress 与数据库的连接核心处理,创建一个更快的用户体验。他们不会绕过所有的 PHP 处理,所以使用以下选项他们不能给出相同的性能提升。他们也不需要修改 NGINX 的配置。
|
||||
- [Hyper-Cache][17] 和 [Quick-Cache][18] – 这两个插件为每个 WordPress 页面创建单个 PHP 文件。它支持绕过多个 WordPress 与数据库的连接核心处理的一些动态功能,创建一个更快的用户体验。它们不会绕过所有的 PHP 处理,所以并不会如下面那些取得同样的性能提升。它们也不需要修改 NGINX 的配置。
|
||||
|
||||
- [WP Super Cache][19] – 最流行的 WordPress 缓存插件。它有许多功能,它的界面非常简洁,如下图所示。我们展示了 NGINX 一个简单的配置实例在 [技巧 7][20] 中。
|
||||
- [WP Super Cache][19] – 最流行的 WordPress 缓存插件。它在易用的界面上提供了许多功能,如下图所示。我们在 [技巧 7][20] 中展示了一个简单的 NGINX 配置实例。
|
||||
|
||||
- [W3 Total Cache][21] – 这是第二大最受欢迎的 WordPress 缓存插件。它比 WP Super Cache 的功能更强大,但它有些配置选项比较复杂。一个 NGINX 的简单配置,请看 [技巧 6][22]。
|
||||
- [W3 Total Cache][21] – 这是第二流行的 WordPress 缓存插件。它比 WP Super Cache 的功能更强大,但它有些配置选项比较复杂。样例 NGINX 配置,请看 [技巧 6][22]。
|
||||
|
||||
- [FastCGI][23] – CGI 代表通用网关接口,在因特网上发送请求和接收文件。它不是一个插件只是一种能直接使用缓存的方法。FastCGI 可以被用在 Apache 和 Nginx 上,它也是最流行的动态缓存方法;我们在 [技巧 5][24] 中描述了如何配置 NGINX 来使用它。
|
||||
- [FastCGI][23] – CGI 的意思是通用网关接口(Common Gateway Interface),是在因特网上发送请求和接收文件的一种通用方式。它不是一个插件,而是一种与缓存交互的方法。FastCGI 可以被用在 Apache 和 Nginx 上,它也是最流行的动态缓存方法;我们在 [技巧 5][24] 中描述了如何配置 NGINX 来使用它。
|
||||
|
||||
这些插件的技术文档解释了如何在 LAMP 架构中配置它们。配置选项包括数据库和对象缓存;也包括使用 HTML,CSS 和 JavaScript 来构建 CDN 集成环境。对于 NGINX 的配置,请看列表中的提示技巧。
|
||||
这些插件和技术的文档解释了如何在典型的 LAMP 架构中配置它们。配置方式包括数据库和对象缓存;最小化 HTML、CSS 和 JavaScript;以及与流行的 CDN 集成。对于 NGINX 的配置,请看列表中的提示技巧。
|
||||
|
||||
**注意**:WordPress 不能缓存用户的登录信息,因为它们的 WordPress 页面都是不同的。(对于大多数网站来说,只有一小部分用户可能会登录),大多数缓存不会对刚刚评论过的用户显示缓存页面,只有当用户刷新页面时才会看到他们的评论。若要缓存页面的非个性化内容,如果它对整体性能来说很重要,可以使用一种称为 [fragment caching][25] 的技术。
|
||||
**注意**:缓存不会用于已经登录的 WordPress 用户,因为他们的 WordPress 页面都是不同的。(对于大多数网站来说,只有一小部分用户可能会登录)此外,大多数缓存不会对刚刚评论过的用户显示缓存页面,因为当用户刷新页面时希望看到他们的评论。若要缓存页面的非个性化内容,如果它对整体性能来说很重要,可以使用一种称为 [碎片缓存(fragment caching)][25] 的技术。
|
||||
|
||||
### 技巧 3. 使用 NGINX ###
|
||||
|
||||
如上所述,当并发用户数超过某一值时 Apache 会导致性能问题 – 可能数百个用户同时使用。Apache 对于每一个连接会消耗大量的资源,因而容易耗尽内存。Apache 可以配置连接数的值来避免耗尽内存,但是这意味着,超过限制时,新的连接请求必须等待。
|
||||
如上所述,当并发用户数超过某一数量时 Apache 会导致性能问题 – 可能是数百个用户同时使用。Apache 对于每一个连接会消耗大量的资源,因而容易耗尽内存。Apache 可以配置连接数的值来避免耗尽内存,但是这意味着,超过限制时,新的连接请求必须等待。
|
||||
|
||||
此外,Apache 使用 mod_php 模块将每一个连接加载到内存中,即使只有静态文件(图片,CSS,JavaScript 等)。这使得每个连接消耗更多的资源,从而限制了服务器的性能。
|
||||
此外,Apache 为每个连接加载一个 mod_php 模块副本到内存中,即使只有服务于静态文件(图片,CSS,JavaScript 等)。这使得每个连接消耗更多的资源,从而限制了服务器的性能。
|
||||
|
||||
开始解决这些问题吧,从 LAMP 架构迁到 LEMP 架构 – 使用 NGINX 取代 Apache 。NGINX 仅消耗很少量的内存就能处理成千上万的并发连接数,所以你不必经历颠簸,也不必限制并发连接数。
|
||||
要解决这些问题,从 LAMP 架构迁到 LEMP 架构 – 使用 NGINX 取代 Apache 。NGINX 在一定的内存之下就能处理成千上万的并发连接数,所以你不必经历颠簸,也不必限制并发连接数到很小的数量。
|
||||
|
||||
NGINX 处理静态文件的性能也较好,它有内置的,简单的 [缓存][26] 控制策略。减少应用服务器的负载,你的网站的访问速度会更快,用户体验更好。
|
||||
NGINX 处理静态文件的性能也较好,它有内置的,容易调整的 [缓存][26] 控制策略。减少应用服务器的负载,你的网站的访问速度会更快,用户体验更好。
|
||||
|
||||
你可以在部署的所有 Web 服务器上使用 NGINX,或者你可以把一个 NGINX 服务器作为 Apache 的“前端”来进行反向代理 - NGINX 服务器接收客户端请求,将请求的静态文件直接返回,将 PHP 请求转发到 Apache 上进行处理。
|
||||
你可以在部署环境的所有 Web 服务器上使用 NGINX,或者你可以把一个 NGINX 服务器作为 Apache 的“前端”来进行反向代理 - NGINX 服务器接收客户端请求,将请求的静态文件直接返回,将 PHP 请求转发到 Apache 上进行处理。
|
||||
|
||||
对于动态页面的生成 - WordPress 核心体验 - 选择一个缓存工具,如 [技巧 2][27] 中描述的。在下面的技巧中,你可以看到 FastCGI,W3_Total_Cache 和 WP-Super-Cache 在 NGINX 上的配置示例。 (Hyper-Cache 和 Quick-Cache 不需要改变 NGINX 的配置。)
|
||||
对于动态页面的生成,这是 WordPress 核心体验,可以选择一个缓存工具,如 [技巧 2][27] 中描述的。在下面的技巧中,你可以看到 FastCGI,W3\_Total\_Cache 和 WP-Super-Cache 在 NGINX 上的配置示例。 (Hyper-Cache 和 Quick-Cache 不需要改变 NGINX 的配置。)
|
||||
|
||||
**技巧** 缓存通常会被保存到磁盘上,但你可以用 [tmpfs][28] 将缓存放在内存中来提高性能。
|
||||
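
下面是一个把 NGINX 缓存目录挂载为 tmpfs 的示意(路径与大小均为示例,请按你的实际配置调整):

    # 临时挂载,重启后失效
    sudo mount -t tmpfs -o size=256m tmpfs /var/run/nginx-cache

    # 如需开机自动挂载,可以在 /etc/fstab 中加入类似下面的一行:
    # tmpfs  /var/run/nginx-cache  tmpfs  defaults,size=256m  0  0
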
|
||||
为 WordPress 配置 NGINX 很容易。按照这四个步骤,其详细的描述在指定的技巧中:
|
||||
为 WordPress 配置 NGINX 很容易。仅需四步,其详细的描述在指定的技巧中:
|
||||
|
||||
1.添加永久的支持 - 添加对 NGINX 的永久支持。此步消除了对 **.htaccess** 配置文件的依赖,这是 Apache 特有的。参见 [技巧 4][29]
|
||||
2.配置缓存 - 选择一个缓存工具并安装好它。可选择的有 FastCGI cache,W3 Total Cache, WP Super Cache, Hyper Cache, 和 Quick Cache。请看技巧 [5][30], [6][31], 和 [7][32].
|
||||
3.落实安全防范措施 - 在 NGINX 上采用对 WordPress 最佳安全的做法。参见 [技巧 8][33]。
|
||||
4.配置 WordPress 多站点 - 如果你使用 WordPress 多站点,在 NGINX 下配置子目录,子域,或多个域的结构。见 [技巧9][34]。
|
||||
1. 添加永久链接的支持 - 让 NGINX 支持永久链接。此步消除了对 **.htaccess** 配置文件的依赖,这是 Apache 特有的。参见 [技巧 4][29]。
|
||||
2. 配置缓存 - 选择一个缓存工具并安装好它。可选择的有 FastCGI cache,W3 Total Cache, WP Super Cache, Hyper Cache, 和 Quick Cache。请看技巧 [5][30]、 [6][31] 和 [7][32]。
|
||||
3. 落实安全防范措施 - 在 NGINX 上采用对 WordPress 最佳安全的做法。参见 [技巧 8][33]。
|
||||
4. 配置 WordPress 多站点 - 如果你使用 WordPress 多站点,在 NGINX 下配置子目录,子域,或多域名架构。见 [技巧9][34]。
|
||||
|
||||
### 技巧 4. 添加支持 NGINX 的链接 ###
|
||||
### 技巧 4. 让 NGINX 支持永久链接 ###
|
||||
|
||||
许多 WordPress 网站依靠 **.htaccess** 文件,此文件依赖 WordPress 的多个功能,包括永久支持,插件和文件缓存。NGINX 不支持 **.htaccess** 文件。幸运的是,你可以使用 NGINX 的简单而全面的配置文件来实现大部分相同的功能。
|
||||
许多 WordPress 网站依赖于 **.htaccess** 文件,此文件为 WordPress 的多个功能所需要,包括永久链接支持、插件和文件缓存。NGINX 不支持 **.htaccess** 文件。幸运的是,你可以使用 NGINX 的简单而全面的配置文件来实现大部分相同的功能。
|
||||
|
||||
你可以在使用 NGINX 的 WordPress 中通过在主 [server][36] 块下添加下面的 location 块中启用 [永久链接][35]。(此 location 块在其他代码示例中也会被包括)。
|
||||
你可以通过在你的主 [server][36] 块下添加下面的 location 块,为使用 NGINX 的 WordPress 启用 [永久链接][35]。(此 location 块在其它代码示例中也会被包括)。
|
||||
|
||||
**try_files** 指令告诉 NGINX 检查请求的 URL 在根目录下是作为文件(**$uri**)还是目录(**$uri/**),**/var/www/example.com/htdocs**。如果都不是,NGINX 将重定向到 **/index.php**,通过查询字符串参数判断是否作为参数。
|
||||
**try_files** 指令告诉 NGINX 检查请求的 URL 在文档根目录(**/var/www/example.com/htdocs**)下是作为文件(**$uri**)还是目录(**$uri/**) 存在的。如果都不是,NGINX 将重定向到 **/index.php**,并传递查询字符串参数作为参数。
|
||||
|
||||
server {
|
||||
server_name example.com www.example.com;
|
||||
@ -159,17 +158,17 @@ NGINX 处理静态文件的性能也较好,它有内置的,简单的 [缓存
|
||||
|
||||
### 技巧 5. 在 NGINX 中配置 FastCGI ###
|
||||
|
||||
NGINX 可以从 FastCGI 应用程序中缓存响应,如 PHP 响应。此方法可提供最佳的性能。
|
||||
NGINX 可以缓存来自 FastCGI 应用程序的响应,如 PHP 响应。此方法可提供最佳的性能。
|
||||
|
||||
对于开源的 NGINX,第三方模块 [ngx_cache_purge][37] 提供了缓存清除能力,需要手动编译,配置代码如下所示。NGINX Plus 已经包含了此代码的实现。
|
||||
对于开源的 NGINX,编译入第三方模块 [ngx\_cache\_purge][37] 可以提供缓存清除能力,配置代码如下所示。NGINX Plus 已经包含了它自己实现此代码。
|
||||
|
||||
当使用 FastCGI 时,我们建议你安装 [NGINX 辅助插件][38] 并使用下面的配置文件,尤其是要使用 **fastcgi_cache_key** 并且 location 块下要包括 **fastcgi_cache_purge**。当页面被发布或有改变时,甚至有新评论被发布时,该插件会自动清除你的缓存,你也可以从 WordPress 管理控制台手动清除。
|
||||
当使用 FastCGI 时,我们建议你安装 [NGINX 辅助插件][38] 并使用下面的配置文件,尤其是要注意 **fastcgi\_cache\_key** 的使用和包含 **fastcgi\_cache\_purge** 的 location 块。当页面发布或被修改时,甚至有新评论发布时,该插件会自动清除你的缓存,你也可以从 WordPress 管理控制台手动清除。
|
||||
|
||||
NGINX 的辅助插件还可以添加一个简短的 HTML 代码到你网页的底部,确认缓存是否正常并显示一些统计工作。(你也可以使用 [$upstream_cache_status][39] 确认缓存功能是否正常。)
|
||||
NGINX 的辅助插件还可以在你网页的底部添加一个简短的 HTML 代码,以确认缓存是否正常并显示一些统计数据。(你也可以使用 [$upstream\_cache\_status][39] 确认缓存功能是否正常。)
|
||||
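
如果你在 server 块中加入了类似 `add_header X-Cache $upstream_cache_status;` 的配置(这只是一种常见做法的示意,并不是下文配置的一部分),就可以用 curl 直接观察缓存是否命中:

    # HIT 表示命中缓存,MISS 表示回源生成,BYPASS 表示按规则跳过了缓存
    curl -sI https://www.example.com/ | grep -i x-cache
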
|
||||
fastcgi_cache_path /var/run/nginx-cache levels=1:2
|
||||
fastcgi_cache_path /var/run/nginx-cache levels=1:2
|
||||
keys_zone=WORDPRESS:100m inactive=60m;
|
||||
fastcgi_cache_key "$scheme$request_method$host$request_uri";
|
||||
fastcgi_cache_key "$scheme$request_method$host$request_uri";
|
||||
|
||||
server {
|
||||
server_name example.com www.example.com;
|
||||
@ -181,7 +180,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
|
||||
|
||||
set $skip_cache 0;
|
||||
|
||||
# POST 请求和查询网址的字符串应该交给 PHP
|
||||
# POST 请求和带有查询参数的网址应该交给 PHP
|
||||
if ($request_method = POST) {
|
||||
set $skip_cache 1;
|
||||
}
|
||||
@ -196,7 +195,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
|
||||
set $skip_cache 1;
|
||||
}
|
||||
|
||||
#用户不能使用缓存登录或缓存最近的评论
|
||||
#不要为登录用户或最近的评论者进行缓存
|
||||
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass
|
||||
|wordpress_no_cache|wordpress_logged_in") {
|
||||
set $skip_cache 1;
|
||||
@ -240,13 +239,13 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
|
||||
}
|
||||
}
|
||||
|
||||
### 技巧 6. 为 NGINX 配置 W3_Total_Cache ###
|
||||
### 技巧 6. 为 NGINX 配置 W3\_Total\_Cache ###
|
||||
|
||||
[W3 Total Cache][40], 是 Frederick Townes 的 [W3-Edge][41] 下的, 是一个支持 NGINX 的 WordPress 缓存框架。其有众多选项配置,可以替代 FastCGI 缓存。
|
||||
[W3 Total Cache][40] 是 [W3-Edge][41] 的 Frederick Townes 出品的一个支持 NGINX 的 WordPress 缓存框架。其有众多配置选项,可以替代 FastCGI 缓存。
|
||||
|
||||
缓存插件提供了各种缓存配置,还包括数据库和对象的缓存,对 HTML,CSS 和 JavaScript,可选择性的与流行的 CDN 整合。
|
||||
这个缓存插件提供了各种缓存配置,还包括数据库和对象的缓存,最小化 HTML、CSS 和 JavaScript,并可选与流行的 CDN 整合。
|
||||
|
||||
使用插件时,需要将其配置信息写入位于你的域的根目录的 NGINX 配置文件中。
|
||||
这个插件会通过写入一个位于你的域的根目录的 NGINX 配置文件来控制 NGINX。
|
||||
|
||||
server {
|
||||
server_name example.com www.example.com;
|
||||
@ -271,11 +270,11 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
|
||||
|
||||
### 技巧 7. 为 NGINX 配置 WP Super Cache ###
|
||||
|
||||
[WP Super Cache][42] 是由 Donncha O Caoimh 完成的, [Automattic][43] 上的一个 WordPress 开发者, 这是一个 WordPress 缓存引擎,它可以将 WordPress 的动态页面转变成静态 HTML 文件,以使 NGINX 可以很快的提供服务。它是第一个 WordPress 缓存插件,和其他的相比,它更专注于某一特定的领域。
|
||||
[WP Super Cache][42] 是由 [Automattic][43] 的 WordPress 开发者 Donncha O Caoimh 开发的一个 WordPress 缓存引擎,它可以将 WordPress 的动态页面转变成静态 HTML 文件,以使 NGINX 可以很快地提供服务。它是第一个 WordPress 缓存插件,和其它的相比,它更专注于某一特定的领域。
|
||||
|
||||
配置 NGINX 使用 WP Super Cache 可以根据你的喜好而进行不同的配置。以下是一个示例配置。
|
||||
|
||||
在下面的配置中,location 块中使用了名为 WP Super Cache 的超级缓存中部分配置来工作。代码的其余部分是根据 WordPress 的规则不缓存用户登录信息,不缓存 POST 请求,并对静态资源设置过期首部,再加上标准的 PHP 实现;这部分可以进行定制,来满足你的需求。
|
||||
在下面的配置中,带有名为 supercache 的 location 块是 WP Super Cache 特有的部分。 WordPress 规则的其余代码用于不缓存已登录用户的信息,不缓存 POST 请求,并对静态资源设置过期首部,再加上标准的 PHP 处理;这部分可以根据你的需求进行定制。
|
||||
|
||||
|
||||
server {
|
||||
@ -288,7 +287,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
|
||||
|
||||
set $cache_uri $request_uri;
|
||||
|
||||
# POST 请求和查询网址的字符串应该交给 PHP
|
||||
# POST 请求和带有查询字符串的网址应该交给 PHP
|
||||
if ($request_method = POST) {
|
||||
set $cache_uri 'null cache';
|
||||
}
|
||||
@ -305,13 +304,13 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
|
||||
set $cache_uri 'null cache';
|
||||
}
|
||||
|
||||
#用户不能使用缓存登录或缓存最近的评论
|
||||
#不对已登录用户和最近的评论者使用缓存
|
||||
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+
|
||||
|wp-postpass|wordpress_logged_in") {
|
||||
set $cache_uri 'null cache';
|
||||
}
|
||||
|
||||
#当请求的文件存在时使用缓存,否则将请求转发给WordPress
|
||||
#当请求的文件存在时使用缓存,否则将请求转发给 WordPress
|
||||
location / {
|
||||
try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html
|
||||
$uri $uri/ /index.php;
|
||||
@ -346,7 +345,7 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
|
||||
|
||||
### 技巧 8. 为 NGINX 配置安全防范措施 ###
|
||||
|
||||
为了防止攻击,可以控制对关键资源的访问以及当机器超载时进行登录限制。
|
||||
为了防止攻击,可以控制对关键资源的访问并限制机器人对登录功能的过量攻击。
|
||||
|
||||
只允许特定的 IP 地址访问 WordPress 的仪表盘。
|
||||
|
||||
@ -365,14 +364,14 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
|
||||
deny all;
|
||||
}
|
||||
|
||||
拒绝其他人访问 WordPress 的配置文件 **wp-config.php**。拒绝其他人访问的另一种方法是将该文件的一个目录移到域的根目录下。
|
||||
拒绝其它人访问 WordPress 的配置文件 **wp-config.php**。拒绝其它人访问的另一种方法是将该文件移动到域的根目录之上一级的目录中。
|
||||
|
||||
# 拒绝其他人访问 wp-config.php
|
||||
# 拒绝其它人访问 wp-config.php
|
||||
location ~* wp-config.php {
|
||||
deny all;
|
||||
}
|
||||
|
||||
对 **wp-login.php** 进行限速来防止暴力攻击。
|
||||
对 **wp-login.php** 进行限速来防止暴力破解。
|
||||
|
||||
# 拒绝访问 wp-login.php
|
||||
location = /wp-login.php {
|
||||
@ -383,27 +382,27 @@ fastcgi_cache_key "$scheme$request_method$host$request_uri";
|
||||
|
||||
### 技巧 9. 配置 NGINX 支持 WordPress 多站点 ###
|
||||
|
||||
WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单个实例中允许你管理两个或多个网站。[WordPress.com][44] 运行的就是 WordPress 多站点,其主机为成千上万的用户提供博客服务。
|
||||
WordPress 多站点(WordPress Multisite),顾名思义,这个版本 WordPress 可以让你以单个实例管理两个或多个网站。[WordPress.com][44] 运行的就是 WordPress 多站点,其主机为成千上万的用户提供博客服务。
|
||||
|
||||
你可以从单个域的任何子目录或从不同的子域来运行独立的网站。
|
||||
|
||||
使用此代码块添加对子目录的支持。
|
||||
|
||||
# 在 WordPress 中添加支持子目录结构的多站点
|
||||
# 在 WordPress 多站点中添加对子目录结构的支持
|
||||
if (!-e $request_filename) {
|
||||
rewrite /wp-admin$ $scheme://$host$uri/ permanent;
|
||||
rewrite ^(/[^/]+)?(/wp-.*) $2 last;
|
||||
rewrite ^(/[^/]+)?(/.*\.php) $2 last;
|
||||
}
|
||||
|
||||
使用此代码块来替换上面的代码块以添加对子目录结构的支持,子目录名自定义。
|
||||
使用此代码块来替换上面的代码块以添加对子域名结构的支持,并替换为你自己的域名。
|
||||
|
||||
# 添加支持子域名
|
||||
server_name example.com *.example.com;
|
||||
|
||||
旧版本(3.4以前)的 WordPress 多站点使用 readfile() 来提供静态内容。然而,readfile() 是 PHP 代码,它会导致在执行时性能会显著降低。我们可以用 NGINX 来绕过这个非必要的 PHP 处理。该代码片段在下面被(==============)线分割出来了。
|
||||
|
||||
# 避免 PHP readfile() 在 /blogs.dir/structure 子目录中
|
||||
# 避免对子目录中 /blogs.dir/ 结构执行 PHP readfile()
|
||||
location ^~ /blogs.dir {
|
||||
internal;
|
||||
alias /var/www/example.com/htdocs/wp-content/blogs.dir;
|
||||
@ -414,8 +413,8 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单
|
||||
|
||||
============================================================
|
||||
|
||||
# 避免 PHP readfile() 在 /files/structure 子目录中
|
||||
location ~ ^(/[^/]+/)?files/(?.+) {
|
||||
# 避免对子目录中 /files/ 结构执行 PHP readfile()
|
||||
location ~ ^(/[^/]+/)?files/(?.+) {
|
||||
try_files /wp-content/blogs.dir/$blogid/files/$rt_file /wp-includes/ms-files.php?file=$rt_file;
|
||||
access_log off;
|
||||
log_not_found off;
|
||||
@ -424,7 +423,7 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单
|
||||
|
||||
============================================================
|
||||
|
||||
# WPMU 文件结构的子域路径
|
||||
# 子域路径的WPMU 文件结构
|
||||
location ~ ^/files/(.*)$ {
|
||||
try_files /wp-includes/ms-files.php?file=$1 =404;
|
||||
access_log off;
|
||||
@ -434,7 +433,7 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单
|
||||
|
||||
============================================================
|
||||
|
||||
# 地图博客 ID 在特定的目录下
|
||||
# 映射博客 ID 到特定的目录
|
||||
map $http_host $blogid {
|
||||
default 0;
|
||||
example.com 1;
|
||||
@ -444,15 +443,15 @@ WordPress 多站点,顾名思义,使用同一个版本的 WordPress 从单
|
||||
|
||||
### 结论 ###
|
||||
|
||||
可扩展性对许多站点的开发者来说是一项挑战,因为这会让他们在 WordPress 站点中取得成功。(对于那些想要跨越 WordPress 性能问题的新站点。)为 WordPress 添加缓存,并将 WordPress 和 NGINX 结合,是不错的答案。
|
||||
可扩展性对许多要让他们的 WordPress 站点取得成功的开发者来说是一项挑战。(对于那些想要跨越 WordPress 性能门槛的新站点而言。)为 WordPress 添加缓存,并将 WordPress 和 NGINX 结合,是不错的答案。
|
||||
|
||||
NGINX 不仅对 WordPress 网站是有用的。世界上排名前 1000,10,000和100,000网站中 NGINX 也是作为 [领先的 web 服务器][45] 被使用。
|
||||
NGINX 不仅用于 WordPress 网站。世界上排名前 1000、10000 和 100000 网站中 NGINX 也是 [遥遥领先的 web 服务器][45]。
|
||||
|
||||
欲了解更多有关 NGINX 的性能,请看我们最近的博客,[关于 10x 应用程序的 10 个技巧][46]。
|
||||
欲了解更多有关 NGINX 的性能,请看我们最近的博客,[让应用性能提升 10 倍的 10 个技巧][46]。
|
||||
|
||||
NGINX 软件有两个版本:
|
||||
|
||||
- NGINX 开源的软件 - 像 WordPress 一样,此软件你可以自行下载,配置和编译。
|
||||
- NGINX 开源软件 - 像 WordPress 一样,此软件你可以自行下载,配置和编译。
|
||||
- NGINX Plus - NGINX Plus 包括一个预构建的参考版本的软件,以及服务和技术支持。
|
||||
|
||||
想要开始,先到 [nginx.org][47] 下载开源软件并了解下 [NGINX Plus][48]。
|
||||
@ -463,7 +462,7 @@ via: https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-
|
||||
|
||||
作者:[Floyd Smith][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,16 +1,15 @@
|
||||
|
||||
如何在树莓派2 B型上安装 FreeBSD
|
||||
如何在树莓派 2B 上安装 FreeBSD
|
||||
================================================================================
|
||||
|
||||
在树莓派2 B型上如何安装 FreeBSD 10 或 FreeBSD 11(current)?怎么在 Linux,OS X,FreeBSD 或类 Unix 操作系统上烧录 SD 卡?
|
||||
在树莓派 2B 上如何安装 FreeBSD 10 或 FreeBSD 11(current)?怎么在 Linux,OS X,FreeBSD 或类 Unix 操作系统上烧录 SD 卡?
|
||||
|
||||
在树莓派2 B型上安装 FreeBSD 10或 FreeBSD 11(current)很容易。使用 FreeBSD 操作系统可以打造一个非常易用的 Unix 服务器。FreeBSD-CURRENT 自2012年十一月以来一直支持树莓派,2015年三月份后也开始支持树莓派2了。在这个快速教程中我将介绍如何在 RPI2 上安装 FreeBSD 11 current arm 版。
|
||||
在树莓派 2B 上安装 FreeBSD 10 或 FreeBSD 11(current)很容易。使用 FreeBSD 操作系统可以打造一个非常易用的 Unix 服务器。FreeBSD-CURRENT 自2012年十一月以来一直支持树莓派,2015年三月份后也开始支持树莓派2了。在这个快速教程中我将介绍如何在树莓派 2B 上安装 FreeBSD 11 current arm 版。
|
||||
|
||||
### 1. 下载 FreeBSD-current 的 arm 镜像 ###
|
||||
|
||||
你可以 [访问这个页面来下载][1] 树莓派2的镜像。使用 wget 或 curl 命令来下载镜像:
|
||||
|
||||
|
||||
$ wget ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/arm/armv6/ISO-IMAGES/11.0/FreeBSD-11.0-CURRENT-arm-armv6-RPI2-20151016-r289420.img.xz
|
||||
|
||||
或
|
||||
@ -45,52 +44,51 @@
|
||||
1024+0 records out
|
||||
1073741824 bytes transferred in 661.669584 secs (1622776 bytes/sec)
|
||||
|
||||
#### 使用 Linux/FreeBSD 或者 类 Unix 系统来烧录 FreeBSD-current ####
|
||||
#### 使用 Linux/FreeBSD 或者类 Unix 系统来烧录 FreeBSD-current ####
|
||||
|
||||
语法是这样:
|
||||
|
||||
$ dd if=FreeBSD-11.0-CURRENT-arm-armv6-RPI2-20151016-r289420.img of=/dev/sdb bs=1M
|
||||
|
||||
确保使用实际 SD 卡的设备名称来替换 /dev/sdb 。
|
||||
**确保使用实际的 SD 卡的设备名称来替换 /dev/sdb**(LCTT 译注:千万注意不要写错了)。
|
||||
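
为了避免写错设备,可以在烧录之前先确认 SD 卡对应的设备名,下面是两个常用的检查方法:

    # 查看块设备列表,按容量和型号确认哪个是 SD 卡
    lsblk -o NAME,SIZE,MODEL,MOUNTPOINT

    # 或者查看内核日志中最近识别到的设备
    dmesg | tail -n 20
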
|
||||
### 4. 引导 FreeBSD ###
|
||||
|
||||
在树莓派2 B型上插入 SD 卡。你需要连接键盘,鼠标和显示器。我使用的是 USB 转串口线来连接显示器的:
|
||||
在树莓派 2B 上插入 SD 卡。你需要连接键盘,鼠标和显示器。我使用的是 USB 转串口线来连接显示器的:
|
||||
|
||||
![Fig.01 RPi USB based serial connection](http://s0.cyberciti.org/uploads/faq/2015/10/Raspberry-Pi-2-Model-B.pin-out.jpg)
|
||||
|
||||
|
||||
图01 RPI 基于 USB 的串行连接
|
||||
*图01 基于树莓派 USB 的串行连接*
|
||||
|
||||
在下面的例子中,我使用 screen 命令来连接我的 RPI:
|
||||
|
||||
## Linux version ##
|
||||
## Linux 上 ##
|
||||
screen /dev/tty.USB0 115200
|
||||
|
||||
## OS X version ##
|
||||
## OS X 上 ##
|
||||
screen /dev/cu.usbserial 115200
|
||||
|
||||
## Windows user use Putty.exe ##
|
||||
## Windows 请使用 Putty.exe ##
|
||||
|
||||
FreeBSD RPI 启动输出样例:
|
||||
|
||||
![Gif 01: Booting FreeBSD-current on RPi 2](http://s0.cyberciti.org/uploads/faq/2015/10/freebsd-current-rpi.gif)
|
||||
|
||||
图01: 在 RPi 2上引导 FreeBSD-current
|
||||
*图02: 在树莓派 2上引导 FreeBSD-current*
|
||||
|
||||
### 5. FreeBSD 在 RPi 2上的用户名和密码 ###
|
||||
|
||||
默认的密码是 freebsd/freebsd 和 root/root。
|
||||
|
||||
到此为止, FreeBSD-current 已经安装并运行在 RPi 2上。
|
||||
到此为止, FreeBSD-current 已经安装并运行在树莓派 2上。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.cyberciti.biz/faq/how-to-install-freebsd-on-raspberry-pi-2-model-b/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,239 @@
|
||||
如何在 CentOS 7 上安装 Redis 服务器
|
||||
================================================================================
|
||||
|
||||
大家好,本文的主题是 Redis,我们将要在 CentOS 7 上安装它。编译源代码,安装二进制文件,创建、安装文件。在安装了它的组件之后,我们还会配置 redis ,就像配置操作系统参数一样,目标就是让 redis 运行的更加可靠和快速。
|
||||
|
||||
![Runnins Redis](http://blog.linoxide.com/wp-content/uploads/2015/10/run-redis-standalone.jpg)
|
||||
|
||||
*Redis 服务器*
|
||||
|
||||
Redis 是一个开源的多平台数据存储软件,使用 ANSI C 编写,直接在内存使用数据集,这使得它得以实现非常高的效率。Redis 支持多种编程语言,包括 Lua, C, Java, Python, Perl, PHP 和其他很多语言。redis 的代码量很小,只有约3万行,它只做“很少”的事,但是做的很好。尽管是在内存里工作,但是数据持久化的保存还是有的,而redis 的可靠性就很高,同时也支持集群,这些可以很好的保证你的数据安全。
|
||||
|
||||
### 构建 Redis ###
|
||||
|
||||
redis 目前没有官方 RPM 安装包,我们需要从源代码编译,而为了要编译就需要安装 Make 和 GCC。
|
||||
|
||||
如果没有安装过 GCC 和 Make,那么就使用 yum 安装。
|
||||
|
||||
yum install gcc make
|
||||
|
||||
从[官网][1]下载 tar 压缩包。
|
||||
|
||||
curl http://download.redis.io/releases/redis-3.0.4.tar.gz -o redis-3.0.4.tar.gz
|
||||
|
||||
解压缩。
|
||||
|
||||
tar zxvf redis-3.0.4.tar.gz
|
||||
|
||||
进入解压后的目录。
|
||||
|
||||
cd redis-3.0.4
|
||||
|
||||
使用 Make 编译源文件。
|
||||
|
||||
make
|
||||
|
||||
### 安装 ###
|
||||
|
||||
进入源文件的目录。
|
||||
|
||||
cd src
|
||||
|
||||
复制 Redis 的服务器和客户端到 /usr/local/bin。
|
||||
|
||||
cp redis-server redis-cli /usr/local/bin
|
||||
|
||||
最好也把 sentinel,benchmark 和 check 复制过去。
|
||||
|
||||
cp redis-sentinel redis-benchmark redis-check-aof redis-check-dump /usr/local/bin
|
||||
|
||||
创建redis 配置文件夹。
|
||||
|
||||
mkdir /etc/redis
|
||||
|
||||
在 `/var/lib/redis` 下创建用于保存数据的工作目录。
|
||||
|
||||
mkdir -p /var/lib/redis/6379
|
||||
|
||||
#### 系统参数 ####
|
||||
|
||||
为了让 redis 正常工作需要配置一些内核参数。
|
||||
|
||||
配置 `vm.overcommit_memory` 为1,这可以避免数据被截断,详情[见此][2]。
|
||||
|
||||
sysctl -w vm.overcommit_memory=1
|
||||
|
||||
修改 backlog 连接数的最大值,使其超过 redis.conf 中 `tcp-backlog` 的值(默认为 511)。你可以在 [kernel.org][3] 找到更多有关基于 sysctl 的 IP 网络可调参数的信息。
|
||||
|
||||
sysctl -w net.core.somaxconn=512
|
||||
|
||||
取消对透明巨页内存(transparent huge pages)的支持,因为这会造成 redis 使用过程产生延时和内存访问问题。
|
||||
|
||||
echo never > /sys/kernel/mm/transparent_hugepage/enabled
|
||||
|
||||
### redis.conf ###
|
||||
|
||||
redis.conf 是 redis 的配置文件,不过在这里我们会把该文件命名为 6379.conf,这个数字就是 redis 监听的网络端口。如果你想要运行多个 redis 实例,推荐使用这样的命名方式。
|
||||
|
||||
复制示例的 redis.conf 到 **/etc/redis/6379.conf**。
|
||||
|
||||
cp redis.conf /etc/redis/6379.conf
|
||||
|
||||
现在编辑这个文件并且配置参数。
|
||||
|
||||
vi /etc/redis/6379.conf
|
||||
|
||||
#### daemonize ####
|
||||
|
||||
设置 `daemonize` 为 no,systemd 需要它运行在前台,否则 redis 会突然挂掉。
|
||||
|
||||
daemonize no
|
||||
|
||||
#### pidfile ####
|
||||
|
||||
设置 `pidfile` 为 /var/run/redis_6379.pid。
|
||||
|
||||
pidfile /var/run/redis_6379.pid
|
||||
|
||||
#### port ####
|
||||
|
||||
如果不准备用默认端口,可以修改。
|
||||
|
||||
port 6379
|
||||
|
||||
#### loglevel ####
|
||||
|
||||
设置日志级别。
|
||||
|
||||
loglevel notice
|
||||
|
||||
#### logfile ####
|
||||
|
||||
修改日志文件路径。
|
||||
|
||||
logfile /var/log/redis_6379.log
|
||||
|
||||
#### dir ####
|
||||
|
||||
设置目录为 /var/lib/redis/6379
|
||||
|
||||
dir /var/lib/redis/6379
|
||||
|
||||
### 安全 ###
|
||||
|
||||
下面有几个可以提高安全性的操作。
|
||||
|
||||
#### Unix sockets ####
|
||||
|
||||
在很多情况下,客户端程序和服务器端程序运行在同一个机器上,所以不需要监听网络上的 socket。如果这和你的使用情况类似,你就可以使用 unix socket 替代网络 socket,为此你需要配置 `port` 为0,然后配置下面的选项来启用 unix socket。
|
||||
|
||||
设置 unix socket 的套接字文件。
|
||||
|
||||
unixsocket /tmp/redis.sock
|
||||
|
||||
限制 socket 文件的权限。
|
||||
|
||||
unixsocketperm 700
|
||||
|
||||
现在为了让 redis-cli 可以访问,应该使用 -s 参数指向该 socket 文件。
|
||||
|
||||
redis-cli -s /tmp/redis.sock
|
||||
|
||||
#### requirepass ####
|
||||
|
||||
你可能仍然需要远程访问,如果是这样,那么你应该设置密码,要求客户端在执行任何操作之前先用密码进行认证。
|
||||
|
||||
requirepass "bTFBx1NYYWRMTUEyNHhsCg"
|
||||
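
设置密码之后,客户端需要先认证才能执行命令。下面是一个通过 unix socket 并带密码访问的示意(密码沿用上面的示例值):

    redis-cli -s /tmp/redis.sock -a "bTFBx1NYYWRMTUEyNHhsCg" ping

返回 PONG 即表示连接和认证都正常。
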
|
||||
#### rename-command ####
|
||||
|
||||
想象一下如下指令的输出。是的,这会输出服务器的配置,所以你应该在任何可能的情况下拒绝这种访问。
|
||||
|
||||
CONFIG GET *
|
||||
|
||||
为了限制甚至禁止这条或者其它指令,可以使用 `rename-command` 设置。你必须提供一个命令名和一个替代的名字;要彻底禁用某条命令,把替代的名字设置为空字符串即可。把命令改成别人难以猜到的名字,要比保留原来众所周知的命令名安全。
|
||||
|
||||
rename-command FLUSHDB "FLUSHDB_MY_SALT_G0ES_HERE09u09u"
|
||||
rename-command FLUSHALL ""
|
||||
rename-command CONFIG "CONFIG_MY_S4LT_GO3S_HERE09u09u"
|
||||
|
||||
![Access Redis through unix with password and command changes](http://blog.linoxide.com/wp-content/uploads/2015/10/redis-security-test.jpg)
|
||||
|
||||
*使用密码通过 unix socket 访问,和修改命令*
|
||||
|
||||
#### 快照 ####
|
||||
|
||||
默认情况下,redis 会周期性的将数据集转储到我们设置的目录下的 **dump.rdb** 文件。你可以使用 `save` 命令配置转储的频率,它的第一个参数是以秒为单位的时间帧,第二个参数是在数据文件上进行修改的数量。
|
||||
|
||||
每隔15分钟并且最少修改过一次键。
|
||||
|
||||
save 900 1
|
||||
|
||||
每隔5分钟并且最少修改过10次键。
|
||||
|
||||
save 300 10
|
||||
|
||||
每隔1分钟并且最少修改过10000次键。
|
||||
|
||||
save 60 10000
|
||||
|
||||
文件 `/var/lib/redis/6379/dump.rdb` 包含了从上次保存以来内存里数据集的转储数据。因为它先创建临时文件然后替换之前的转储文件,这里不存在数据破坏的问题,你不用担心,可以直接复制这个文件。
|
||||
|
||||
### 开机时启动 ###
|
||||
|
||||
你可以使用 systemd 将 redis 添加到系统开机启动列表。
|
||||
|
||||
复制示例的 init_script 文件到 `/etc/init.d`,注意脚本名所代表的端口号。
|
||||
|
||||
cp utils/redis_init_script /etc/init.d/redis_6379
|
||||
|
||||
现在我们要使用 systemd,所以在 `/etc/systemd/system` 下创建一个名为 `redis_6379.service` 的单元文件。
|
||||
|
||||
vi /etc/systemd/system/redis_6379.service
|
||||
|
||||
填写下面的内容,详情可见 systemd.service。
|
||||
|
||||
[Unit]
|
||||
Description=Redis on port 6379
|
||||
|
||||
[Service]
|
||||
Type=forking
|
||||
ExecStart=/etc/init.d/redis_6379 start
|
||||
ExecStop=/etc/init.d/redis_6379 stop
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
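
单元文件写好之后,通常还需要让 systemd 重新加载配置,并启用、启动该服务,下面是一组示意命令(单元名与上文创建的文件一致):

    systemctl daemon-reload
    systemctl enable redis_6379.service
    systemctl start redis_6379.service
    # 查看服务状态,确认 redis 已经在运行
    systemctl status redis_6379.service
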
|
||||
现在,把我们之前修改过的内存过量使用和 backlog 最大值这两个选项添加到 `/etc/sysctl.conf` 里面,让它们在重启之后依然生效。
|
||||
|
||||
vm.overcommit_memory = 1
|
||||
|
||||
net.core.somaxconn=512
|
||||
|
||||
对于透明巨页内存支持,并没有直接 sysctl 命令可以控制,所以需要将下面的命令放到 `/etc/rc.local` 的结尾。
|
||||
|
||||
echo never > /sys/kernel/mm/transparent_hugepage/enabled
|
||||
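
作为补充,修改 `/etc/sysctl.conf` 之后可以让配置立即生效;而 `/etc/rc.local` 在某些系统上需要可执行权限才会在开机时运行。下面是两条相应的示意命令(rc.local 的实际路径视发行版而定):

    # 让 /etc/sysctl.conf 中的修改立即生效
    sysctl -p

    # CentOS 7 上通常需要给 rc.local 加上可执行权限
    chmod +x /etc/rc.d/rc.local
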
|
||||
### 总结 ###
|
||||
|
||||
这样就可以启动了,通过设置这些选项你就可以部署 redis 服务到很多简单的场景,然而在 redis.conf 还有很多为复杂环境准备的 redis 选项。在一些情况下,你可以使用 [replication][4] 和 [Sentinel][5] 来提高可用性,或者[将数据分散][6]在多个服务器上,创建服务器集群。
|
||||
|
||||
谢谢阅读。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/storage/install-redis-server-centos-7/
|
||||
|
||||
作者:[Carlos Alberto][a]
|
||||
译者:[ezio](https://github.com/oska874)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/carlosal/
|
||||
[1]:http://redis.io/download
|
||||
[2]:https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
|
||||
[3]:https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
|
||||
[4]:http://redis.io/topics/replication
|
||||
[5]:http://redis.io/topics/sentinel
|
||||
[6]:http://redis.io/topics/partitioning
|
@ -0,0 +1,35 @@
|
||||
开源开发者提交不安全代码,遭 Linus 炮轰
|
||||
================================================================================
|
||||
![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/linus-torvalds.jpg)
|
||||
|
||||
Linus 上个月骂了一个 Linux 开发者,原因是他向 kernel 提交了一份不安全的代码。
|
||||
|
||||
Linus 是个 Linux 内核项目非官方的“仁慈的独裁者(benevolent dictator)”(LCTT译注:英国《卫报》曾将乔布斯评价为‘仁慈的独裁者’),这意味着他有权决定将哪些代码合入内核,哪些代码直接丢掉。
|
||||
|
||||
在10月28号,一个开源开发者提交的代码未能符合 Torvalds 的要求,于是遭来了[一顿臭骂][1]。Torvalds 在他提交的代码下评论道:“你提交的是什么东西。”
|
||||
|
||||
接着他说这个开发者是“毫无能力的神经病”。
|
||||
|
||||
Torvalds 为什么会这么生气?他觉得那段代码可以写得更有效率一点,可读性更强一点,编译器编译后跑得更好一点(编译器的作用就是将让人看的代码翻译成让电脑看的代码)。
|
||||
|
||||
Torvalds 重新写了一版代码将原来的那份替换掉,并建议所有开发者应该像他那种风格来写代码。
|
||||
|
||||
Torvalds 一直在嘲讽那些不符合他观点的人。早在1991年他就攻击过 [Andrew Tanenbaum][2]——那个 Minix 操作系统的作者,而那个 Minix 操作系统被 Torvalds 描述为“脑残”。
|
||||
|
||||
但是 Torvalds 在这次嘲讽中表现得更有战略性了:“我想让*每个人*都知道,像他这种代码是完全不能被接收的。”他说他的目的是提醒每个 Linux 开发者,而不是针对那个开发者。
|
||||
|
||||
Torvalds 也用这个机会强调了烂代码的安全问题。现在的企业对安全问题很重视,所以安全问题需要在开源开发者心中得到足够重视,甚至需要在代码中表现为最高等级(LCTT 译注:操作系统必须权衡许多因素:安全、处理速度、灵活性、易用性等,而这里 Torvalds 将安全提升为最高优先级了)。骂一下那些提交不安全代码的开发者可以帮助提高 Linux 系统的安全性。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://thevarguy.com/open-source-application-software-companies/110415/linus-torvalds-lambasts-open-source-programmers-over-inse
|
||||
|
||||
作者:[Christopher Tozzi][a]
|
||||
译者:[bazz2](https://github.com/bazz2)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://thevarguy.com/author/christopher-tozzi
|
||||
[1]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html
|
||||
[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate
|
@ -0,0 +1,80 @@
|
||||
如何使用 pv 命令监控 linux 命令的执行进度
|
||||
================================================================================
|
||||
|
||||
![](https://www.maketecheasier.com/assets/uploads/2015/11/pv-featured-1.jpg)
|
||||
|
||||
如果你是一个 linux 系统管理员,那么毫无疑问你必须花费大量的工作时间在命令行上:安装和卸载软件,监视系统状态,复制、移动、删除文件,查错,等等。很多时候都是你输入一个命令,然后等待很长时间直到执行完成。也有的时候你执行的命令挂起了,而你只能猜测命令执行的实际情况。
|
||||
|
||||
通常 linux 命令不提供和进度相关的信息,而这些信息特别重要,尤其当你只有有限的时间时。然而这并不意味着你是无助的——现在有一个命令,pv,它会显示当前在命令行执行的命令的进度信息。在本文我们会讨论它并用几个简单的例子说明其特性。
|
||||
|
||||
### PV 命令 ###
|
||||
|
||||
[PV][1] 由Andrew Wood 开发,是 Pipe Viewer 的简称,意思是通过管道显示数据处理进度的信息。这些信息包括已经耗费的时间,完成的百分比(通过进度条显示),当前的速度,全部传输的数据,以及估计剩余的时间。
|
||||
|
||||
> “要使用 PV,需要把它连同合适的选项插入到两个进程之间的管道中。它会把标准输入的数据原样传递到标准输出,而进度信息则会被输出到标准错误输出。”
|
||||
|
||||
上述解释来自该命令的帮助页。
|
||||
|
||||
### 下载和安装 ###
|
||||
|
||||
Debian 系的操作系统,如 Ubuntu,可以简单的使用下面的命令安装 PV:
|
||||
|
||||
sudo apt-get install pv
|
||||
|
||||
如果你使用了其他发行版本,你可以使用各自的包管理软件在你的系统上安装 PV。一旦 PV 安装好了你就可以在各种场合使用它(详见下文)。需要注意的是下面所有例子都使用的是 pv 1.2.0。
|
||||
|
||||
### 特性和用法 ###
|
||||
|
||||
对于我们(在 Linux 上使用命令行的用户)来说,一个非常常见的使用场景就是把一个电影文件从 USB 驱动器拷贝到电脑上。如果你使用 cp 来完成上面的任务,你会什么情况都不清楚,直到整个复制过程结束或者出错。
|
||||
|
||||
然而pv 命令在这种情景下很有帮助。比如:
|
||||
|
||||
pv /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv
|
||||
|
||||
输出如下:
|
||||
|
||||
![pv-copy](https://www.maketecheasier.com/assets/uploads/2015/10/pv-copy.png)
|
||||
|
||||
所以,如你所见,这个命令显示了很多和操作有关的有用信息,包括已经传输了的数据量,花费的时间,传输速率,进度条,进度的百分比,以及剩余的时间。
|
||||
|
||||
`pv` 命令提供了多种显示选项开关。比如,你可以使用`-p` 来显示百分比,`-t` 来显示时间,`-r` 表示传输速率,`-e` 代表eta(LCTT 译注:估计剩余的时间)。好事是你不必记住某一个选项,因为默认这几个选项都是启用的。但是,如果你只要其中某一个信息,那么可以通过控制这几个选项来完成任务。
|
||||
|
||||
这里还有一个`-n` 选项来允许 pv 命令显示整数百分比,在标准错误输出上每行显示一个数字,用来替代通常的可视进度条。下面是一个例子:
|
||||
|
||||
pv -n /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv
|
||||
|
||||
![pv-numeric](https://www.maketecheasier.com/assets/uploads/2015/10/pv-numeric.png)
|
||||
|
||||
这个特殊的选项非常适合某些情境下的需求,比如你想用管道把输出传给 [dialog][2] 命令时(见下面的示例)。
|
||||
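
下面是一个把 `pv -n` 的数字进度通过管道交给 dialog、显示为图形进度条的示意(文件路径沿用上文的示例):

    # pv -n 把百分比写到标准错误输出,2>&1 将其转给 dialog --gauge 绘制进度条
    (pv -n /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv) 2>&1 \
        | dialog --gauge "正在复制文件..." 10 70 0
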
|
||||
接下来还有一个命令行选项,`-L` 可以让你修改 pv 命令的传输速率。举个例子,使用 -L 选项来限制传输速率为2MB/s。
|
||||
|
||||
pv -L 2m /media/himanshu/1AC2-A8E3/fNf.mkv > ./Desktop/fnf.mkv
|
||||
|
||||
![pv-ratelimit](https://www.maketecheasier.com/assets/uploads/2015/10/pv-ratelimit.png)
|
||||
|
||||
如上图所见,数据传输速度按照我们的要求被限制了。
|
||||
|
||||
另一个 pv 可以帮上忙的情景是压缩文件。这里有一个例子,向你展示 pv 如何与压缩软件 Gzip 一起工作。
|
||||
|
||||
pv /media/himanshu/1AC2-A8E3/fnf.mkv | gzip > ./Desktop/fnf.log.gz
|
||||
|
||||
![pv-gzip](https://www.maketecheasier.com/assets/uploads/2015/10/pv-gzip.png)
|
||||
|
||||
### 结论 ###
|
||||
|
||||
如上所述,pv 是一个非常有用的小工具,它可以在命令没有按照预期执行的情况下帮你节省你宝贵的时间。而且这些显示的信息还可以用在 shell 脚本里。我强烈的推荐你使用这个命令,它值得你一试。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/monitor-progress-linux-command-line-operation/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[ezio](https://github.com/oska874)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com/author/himanshu/
|
||||
[1]:http://linux.die.net/man/1/pv
|
||||
[2]:http://linux.die.net/man/1/dialog
|
@ -1,16 +1,14 @@
|
||||
|
||||
如何在 Ubuntu 服务器中配置 AWStats
|
||||
================================================================================
|
||||
![](https://www.maketecheasier.com/assets/uploads/2015/10/Apache_awstats_featured.jpg)
|
||||
|
||||
|
||||
AWStats 是一个开源的网站分析报告工具,自带网络,流媒体,FTP 或邮件服务器统计图。此日志分析器以 CGI 或命令行方式进行工作,并在网页中以图表的形式尽可能的显示你日志中所有的信息。它采用的是部分信息文件,以便能够频繁并快速处理大量的日志文件。它支持绝大多数 Web 服务器日志文件格式,包括 Apache,IIS 等。
|
||||
AWStats 是一个开源的网站分析报告工具,可以生成强大的网站、流媒体、FTP 或邮件服务器的访问统计图。此日志分析器以 CGI 或命令行方式进行工作,并在网页中以图表的形式尽可能的显示你日志中所有的信息。它可以“部分”读取信息文件,以便能够频繁并快速处理大量的日志文件。它支持绝大多数 Web 服务器日志文件格式,包括 Apache,IIS 等。
|
||||
|
||||
本文将帮助你在 Ubuntu 上安装配置 AWStats。
|
||||
|
||||
### 安装 AWStats 包 ###
|
||||
|
||||
默认情况下,AWStats 的包在 Ubuntu 仓库中。
|
||||
默认情况下,AWStats 的包可以在 Ubuntu 仓库中找到。
|
||||
|
||||
可以通过运行下面的命令来安装:
|
||||
|
||||
@ -18,7 +16,7 @@ AWStats 是一个开源的网站分析报告工具,自带网络,流媒体,FT
|
||||
|
||||
接下来,你需要启用 Apache 的 CGI 模块。
|
||||
|
||||
运行以下命令来启动:
|
||||
运行以下命令来启动 CGI:
|
||||
|
||||
sudo a2enmod cgi
|
||||
|
||||
@ -38,7 +36,7 @@ AWStats 是一个开源的网站分析报告工具,自带网络,流媒体,FT
|
||||
|
||||
sudo nano /etc/awstats/awstats.test.com.conf
|
||||
|
||||
像下面这样修改下:
|
||||
像下面这样修改一下:
|
||||
|
||||
# Change to Apache log file, by default it's /var/log/apache2/access.log
|
||||
LogFile="/var/log/apache2/access.log"
|
||||
@ -73,6 +71,7 @@ AWStats 是一个开源的网站分析报告工具,自带网络,流媒体,FT
|
||||
### 测试 AWStats ###
|
||||
|
||||
现在,您可以通过访问 url “http://your-server-ip/cgi-bin/awstats.pl?config=test.com.” 来查看 AWStats 的页面。
|
||||
|
||||
它的页面像下面这样:
|
||||
|
||||
![awstats_page](https://www.maketecheasier.com/assets/uploads/2015/10/awstats_page.jpg)
|
||||
@ -101,7 +100,7 @@ via: https://www.maketecheasier.com/set-up-awstats-ubuntu/
|
||||
|
||||
作者:[Hitesh Jethva][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,14 +1,14 @@
|
||||
在 Ubuntu 15.10 上安装 PostgreSQL 9.4 和 phpPgAdmin
|
||||
在 Ubuntu 上安装世界上最先进的开源数据库 PostgreSQL 9.4 和 phpPgAdmin
|
||||
================================================================================
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2014/05/postgresql.png)
|
||||
|
||||
### 简介 ###
|
||||
|
||||
[PostgreSQL][1] 是一款强大的,开源对象关系型数据库系统。它支持所有的主流操作系统,包括 Linux、Unix(AIX、BSD、HP-UX,SGI IRIX、Mac OS、Solaris、Tru64) 以及 Windows 操作系统。
|
||||
[PostgreSQL][1] 是一款强大的,开源的,对象关系型数据库系统。它支持所有的主流操作系统,包括 Linux、Unix(AIX、BSD、HP-UX,SGI IRIX、Mac OS、Solaris、Tru64) 以及 Windows 操作系统。
|
||||
|
||||
下面是 **Ubuntu** 发起者 **Mark Shuttleworth** 对 PostgreSQL 的一段评价。
|
||||
|
||||
> PostgreSQL 真的是一款很好的数据库系统。刚开始我们使用它的时候,并不确定它能否胜任工作。但我错的太离谱了。它很强壮、快速,在各个方面都很专业。
|
||||
> PostgreSQL 是一款极赞的数据库系统。刚开始我们在 Launchpad 上使用它的时候,并不确定它能否胜任工作。但我是错了。它很强壮、快速,在各个方面都很专业。
|
||||
>
|
||||
> — Mark Shuttleworth.
|
||||
|
||||
@ -22,7 +22,7 @@
|
||||
|
||||
如果你需要其它的版本,按照下面那样先添加 PostgreSQL 仓库然后再安装。
|
||||
|
||||
**PostgreSQL apt 仓库** 支持 amd64 和 i386 架构的 Ubuntu 长期支持版(10.04、12.04 和 14.04),以及非长期支持版(14.04)。对于其它非长期支持版,该软件包虽然不能完全支持,但使用和 LTS 版本近似的也能正常工作。
|
||||
**PostgreSQL apt 仓库** 支持 amd64 和 i386 架构的 Ubuntu 长期支持版(10.04、12.04 和 14.04),以及非长期支持版(14.04)。对于其它非长期支持版,该软件包虽然没有完全支持,但使用和 LTS 版本近似的也能正常工作。
|
||||
|
||||
#### Ubuntu 14.10 系统: ####
|
||||
|
||||
@ -36,11 +36,11 @@
|
||||
|
||||
**注意**: 上面的库只能用于 Ubuntu 14.10。还没有升级到 Ubuntu 15.04 和 15.10。
|
||||
|
||||
**Ubuntu 14.04**,添加下面一行:
|
||||
对于 **Ubuntu 14.04**,添加下面一行:
|
||||
|
||||
deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main
|
||||
|
||||
**Ubuntu 12.04**,添加下面一行:
|
||||
对于 **Ubuntu 12.04**,添加下面一行:
|
||||
|
||||
deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main
|
||||
|
||||
@ -48,8 +48,6 @@
|
||||
|
||||
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc
|
||||
|
||||
----------
|
||||
|
||||
sudo apt-key add -
|
||||
|
||||
更新软件包列表:
|
||||
@ -66,7 +64,7 @@
|
||||
|
||||
sudo -u postgres psql postgres
|
||||
|
||||
#### 事例输出: ####
|
||||
#### 示例输出: ####
|
||||
|
||||
psql (9.4.5)
|
||||
Type "help" for help.
|
||||
@ -87,7 +85,7 @@
|
||||
Enter it again:
|
||||
postgres=# \q
|
||||
|
||||
要安装 PostgreSQL Adminpack,在 postgresql 窗口输入下面的命令:
|
||||
要安装 PostgreSQL Adminpack 扩展,在 postgresql 窗口输入下面的命令:
|
||||
|
||||
sudo -u postgres psql postgres
|
||||
|
||||
@ -165,7 +163,7 @@
|
||||
#port = 5432
|
||||
[...]
|
||||
|
||||
取消改行的注释,然后设置你 postgresql 服务器的 IP 地址,或者设置为 ‘*’ 监听所有用户。你应该谨慎设置所有远程用户都可以访问 PostgreSQL。
|
||||
取消该行的注释,然后设置你 postgresql 服务器的 IP 地址,或者设置为 ‘*’ 监听所有用户。你应该谨慎设置所有远程用户都可以访问 PostgreSQL。
|
||||
|
||||
[...]
|
||||
listen_addresses = '*'
|
||||
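
在完成修改并重启服务(重启命令见下文)之后,可以确认 PostgreSQL 已经在监听对应端口,下面是两条示意命令:

    sudo netstat -plnt | grep 5432

    # 如果系统里没有 netstat,也可以使用 ss
    sudo ss -plnt | grep 5432
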
@ -272,8 +270,6 @@
|
||||
|
||||
sudo systemctl restart postgresql
|
||||
|
||||
----------
|
||||
|
||||
sudo systemctl restart apache2
|
||||
|
||||
或者,
|
||||
@ -284,19 +280,19 @@
|
||||
|
||||
现在打开你的浏览器并导航到 **http://ip-address/phppgadmin**。你会看到以下截图。
|
||||
|
||||
![phpPgAdmin – Google Chrome_001](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_001.jpg)
|
||||
![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_001.jpg)
|
||||
|
||||
用你之前创建的用户登录。我之前已经创建了一个名为 “**senthil**” 的用户,密码是 “**ubuntu**”,因此我以 “senthil” 用户登录。
|
||||
|
||||
![phpPgAdmin – Google Chrome_002](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_002.jpg)
|
||||
![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_002.jpg)
|
||||
|
||||
然后你就可以访问 phppgadmin 面板了。
|
||||
|
||||
![phpPgAdmin – Google Chrome_003](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_003.jpg)
|
||||
![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_003.jpg)
|
||||
|
||||
用 postgres 用户登录:
|
||||
|
||||
![phpPgAdmin – Google Chrome_004](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_004.jpg)
|
||||
![phpPgAdmin ](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/11/phpPgAdmin-Google-Chrome_004.jpg)
|
||||
|
||||
就是这样。现在你可以用 phppgadmin 可视化创建、删除或者更改数据库了。
|
||||
|
||||
@ -308,7 +304,7 @@ via: http://www.unixmen.com/install-postgresql-9-4-and-phppgadmin-on-ubuntu-15-1
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,71 @@
|
||||
黑客利用 Wi-Fi 攻击你的七种方法
|
||||
================================================================================
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/intro_title-100626673-orig.jpg)
|
||||
|
||||
### 黑客利用 Wi-Fi 侵犯你隐私的七种方法 ###
|
||||
|
||||
Wi-Fi — 啊,你是如此的方便,却又如此的危险!
|
||||
|
||||
这里给大家介绍一下通过Wi-Fi连接“慷慨捐赠”你的身份信息的七种方法和反制措施。
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/1_free-hotspots-100626674-orig.jpg)
|
||||
|
||||
### 利用免费热点 ###
|
||||
|
||||
它们似乎无处不在,而且它们的数量会在[接下来四年里增加三倍][1]。但是它们当中很多都是不值得信任的,从你的登录凭证、email 甚至更加敏感的账户,都能被黑客用“嗅探器(sniffers)”软件截获 — 这种软件能截获到任何你通过该连接提交的信息。防止被黑客盯上的最好办法就是使用VPN(虚拟私有网virtual private network),它加密了你所输入的信息,因此能够保护你的数据隐私。
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/2_online-banking-100626675-orig.jpg)
|
||||
|
||||
### 网上银行 ###
|
||||
|
||||
你可能认为没有人需要被提醒不要使用免费 Wi-Fi 来操作网上银行, 但网络安全厂商卡巴斯基实验室表示**[全球超过100家银行因为网络黑客而损失9亿美元][2]**,由此可见还是有很多人因此受害。如果你确信一家咖啡店的免费 Wi-Fi 是正规的,想要连接它,那么你应该向服务员确认网络名称。[其他人在店里用路由器设置一个开放的无线连接][3],并将它的网络名称设置成店名是一件相当简单的事。
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/3_keeping-wifi-on-100626676-orig.jpg)
|
||||
|
||||
### 始终开着 Wi-Fi 开关 ###
|
||||
|
||||
如果你手机的 Wi-Fi 开关一直开着的,你会自动被连接到一个不安全的网络中去,你甚至都没有意识到。你可以利用你手机中[基于位置的 Wi-Fi 功能][4],如果有这种功能的话,那它会在你离开你所保存的网络范围后自动关闭你的 Wi-Fi 开关并在你回去之后再次开启。
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/4_not-using-firewall-100626677-orig.jpg)
|
||||
|
||||
### 不使用防火墙 ###
|
||||
|
||||
防火墙是你的第一道抵御恶意入侵的防线,它能有效地让你的电脑网络保持通畅并阻挡黑客和恶意软件。你应该时刻开启它除非你的杀毒软件有它自己的防火墙。
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/5_browsing-unencrypted-sites-100626678-orig.jpg)
|
||||
|
||||
### 浏览非加密网页 ###
|
||||
|
||||
说起来很难过,**[世界上排名前100万个网站中55%是不加密的][5]**,一个未加密的网站会让一切传输数据暴露在黑客的眼中。如果一个网页是安全的,你的浏览器则会有标明(比如说火狐浏览器是一把灰色的挂锁,Chrome 浏览器则是个绿锁图标)。但是即使是安全的网站也不能完全让你免于被劫持的风险:黑客能通过公共网络窃取你访问网站时产生的 cookies,无论那是不是正规网站。
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/6_updating-security-software-100626679-orig.jpg)
|
||||
|
||||
### 不更新你的安全防护软件 ###
|
||||
|
||||
如果你想要确保你自己的网络是受保护的,就更新路由器固件。你要做的就是进入你的路由器管理页面去检查,通常你能在厂商的官方网页上下载到最新的固件版本。
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/7_securing-home-wifi-100626680-orig.jpg)
|
||||
|
||||
### 不保护你的家用 Wi-Fi ###
|
||||
|
||||
不用说,设置一个复杂的密码和更改无线连接的默认名都是非常重要的。你还可以过滤你的 MAC 地址来让你的路由器只识别那些确认过的设备。
|
||||
|
||||
本文作者 **Josh Althuser** 是一个开源支持者、网络架构师和科技企业家。在过去12年里,他花了很多时间去倡导使用开源软件来管理团队和项目,同时为网络应用程序提供企业级咨询并帮助它们把产品推向市场。你可以通过[他的推特][6]联系他。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.networkworld.com/article/3003170/mobile-security/7-ways-hackers-can-use-wi-fi-against-you.html
|
||||
|
||||
作者:[Josh Althuser][a]
|
||||
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://twitter.com/JoshAlthuser
|
||||
[1]:http://www.pcworld.com/article/243464/number_of_wifi_hotspots_to_quadruple_by_2015_says_study.html
|
||||
[2]:http://www.nytimes.com/2015/02/15/world/bank-hackers-steal-millions-via-malware.html?hp&action=click&pgtype=Homepage&module=first-column-region%C2%AEion=top-news&WT.nav=top-news&_r=3
|
||||
[3]:http://news.yahoo.com/blogs/upgrade-your-life/banking-online-not-hacked-182159934.html
|
||||
[4]:http://pocketnow.com/2014/10/15/should-you-leave-your-smartphones-wifi-on-or-turn-it-off
|
||||
[5]:http://www.cnet.com/news/chrome-becoming-tool-in-googles-push-for-encrypted-web/
|
||||
[6]:https://twitter.com/JoshAlthuser
|
@ -1,8 +1,8 @@
|
||||
Linux 中如何从命令行访问 Dropbox
|
||||
Linux 中如何通过命令行访问 Dropbox
|
||||
================================================================================
|
||||
在当今这个多设备的环境下,云存储无处不在。无论身处何方,人们都想通过多种设备来从云存储中获取所需的内容。由于优雅的 UI 和完美的跨平台兼容性,Dropbox 已成为最为广泛使用的云存储服务。 Dropbox 的流行已引发了一系列官方或非官方 Dropbox 客户端的出现,它们支持不同的操作系统平台。
|
||||
在当今这个多设备的环境下,云存储无处不在。无论身处何方,人们都想通过多种设备来从云存储中获取所需的内容。由于拥有漂亮的 UI 和完美的跨平台兼容性,Dropbox 已成为最为广泛使用的云存储服务。 Dropbox 的流行已引发了一系列官方或非官方 Dropbox 客户端的出现,它们支持不同的操作系统平台。
|
||||
|
||||
当然 Linux 平台下也有着自己的 Dropbox 客户端: 既有命令行的,也有图形界面。[Dropbox Uploader][1] 是一个简单易用的 Dropbox 命令行客户端,它是用 BASH 脚本语言所编写的。在这篇教程中,我将描述 **在 Linux 中如何使用 Dropbox Uploader 通过命令行来访问 Dropbox**。
|
||||
当然 Linux 平台下也有着自己的 Dropbox 客户端: 既有命令行的,也有图形界面客户端。[Dropbox Uploader][1] 是一个简单易用的 Dropbox 命令行客户端,它是用 Bash 脚本语言所编写的(LCTT 译注:对,你没看错, 就是 Bash)。在这篇教程中,我将描述 **在 Linux 中如何使用 Dropbox Uploader 通过命令行来访问 Dropbox**。
|
||||
|
||||
### Linux 中安装和配置 Dropbox Uploader ###
|
||||
|
||||
@ -13,7 +13,7 @@ Linux 中如何从命令行访问 Dropbox
|
||||
|
||||
请确保你已经在系统中安装了 `curl`,因为 Dropbox Uploader 通过 curl 来运行 Dropbox 的 API。
|
||||
|
||||
要配置 Dropbox Uploader,只需运行 dropbox_uploader.sh 即可。当你第一次运行这个脚本时,它将询问你,以使得它可以访问你的 Dropbox 账户。
|
||||
要配置 Dropbox Uploader,只需运行 dropbox_uploader.sh 即可。当你第一次运行这个脚本时,它将请求得到授权以使得脚本可以访问你的 Dropbox 账户。
|
||||
|
||||
$ ./dropbox_uploader.sh
|
||||
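
完成授权之后,这个脚本就可以通过 upload、download、list 等子命令来操作你的 Dropbox,下面是几个典型用法的示意(文件与路径仅为示例):

    # 上传本地文件到 Dropbox
    ./dropbox_uploader.sh upload backup.tar.gz /backups/backup.tar.gz

    # 列出远端目录内容
    ./dropbox_uploader.sh list /backups

    # 把远端文件下载到本地
    ./dropbox_uploader.sh download /backups/backup.tar.gz ./backup.tar.gz
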
|
||||
@ -88,7 +88,7 @@ via: http://xmodulo.com/access-dropbox-command-line-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,129 @@
|
||||
如何在 Ubuntu 15.04 / CentOS 7 上安装 Android Studio
|
||||
================================================================================
|
||||
随着最近几年智能手机的进步,安卓成为了最大的手机平台之一,在开发安卓应用中所用到的所有工具也都可以免费得到。Android Studio 是基于 [IntelliJ IDEA][1] 用于开发安卓应用的集成开发环境(IDE)。它是 Google 2014 年发布的免费开源软件,继 Eclipse 之后成为主要的 IDE。
|
||||
|
||||
在这篇文章,我们一起来学习如何在 Ubuntu 15.04 和 CentOS 7 上安装 Android Studio。
|
||||
|
||||
### 在 Ubuntu 15.04 上安装 ###
|
||||
|
||||
我们可以用两种方式安装 Android Studio。第一种是配置所需的库(软件源)然后再安装它;另一种是从 Android 官方网站下载之后在本地安装。在下面的例子中,我们会使用命令行设置库并安装它。在继续下一步之前,我们需要确保我们已经安装了 JDK 1.6 或者更新版本。
|
||||
|
||||
这里,我打算安装 JDK 1.8。
|
||||
|
||||
$ sudo add-apt-repository ppa:webupd8team/java
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install oracle-java8-installer oracle-java8-set-default
|
||||
|
||||
验证 java 是否安装成功:
|
||||
|
||||
poornima@poornima-Lenovo:~$ java -version
|
||||
|
||||
现在,设置安装 Android Studio 需要的库
|
||||
|
||||
$ sudo apt-add-repository ppa:paolorotolo/android-studio
|
||||
|
||||
![Android-Studio-repo](http://blog.linoxide.com/wp-content/uploads/2015/11/Android-studio-repo.png)
|
||||
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install android-studio
|
||||
|
||||
上面的安装命令会在 /opt 目录下面安装 Android Studio。
|
||||
|
||||
现在,运行下面的命令启动安装向导:
|
||||
|
||||
$ /opt/android-studio/bin/studio.sh
|
||||
|
||||
这会激活安装窗口。下面的截图展示了安装 Android Studio 的过程。
|
||||
|
||||
![安装 Android Studio](http://blog.linoxide.com/wp-content/uploads/2015/11/Studio-setup.png)
|
||||
|
||||
![安装类型](http://blog.linoxide.com/wp-content/uploads/2015/11/Install-type.png)
|
||||
|
||||
![设置模拟器](http://blog.linoxide.com/wp-content/uploads/2015/11/Emulator-settings.png)
|
||||
|
||||
你点击了 Finish 按钮之后,就会显示同意协议页面。当你接受协议之后,它就开始下载需要的组件。
|
||||
|
||||
![下载组件](http://blog.linoxide.com/wp-content/uploads/2015/11/Download.png)
|
||||
|
||||
这一步完成之后就结束了 Android Studio 的安装。当你重启 Android Studio 时,你会看到下面的欢迎界面,从这里你可以开始用 Android Studio 工作了。
|
||||
|
||||
![欢迎界面](http://blog.linoxide.com/wp-content/uploads/2015/11/Welcome-screen.png)
|
||||
|
||||
### 在 CentOS 7 上安装 ###
|
||||
|
||||
现在再让我们来看看如何在 CentOS 7 上安装 Android Studio。这里你同样需要安装 JDK 1.6 或者更新版本。如果你不是 root 用户,记得在命令前面使用 ‘sudo’。你可以下载[最新版本][2]的 JDK。如果你已经安装了一个比较旧的版本,在安装新的版本之前你需要先卸载旧版本。在下面的例子中,我会通过下载需要的 rpm 包安装 JDK 1.8.0_65。
|
||||
|
||||
[root@li1260-39 ~]# rpm -ivh jdk-8u65-linux-x64.rpm
|
||||
Preparing... ################################# [100%]
|
||||
Updating / installing...
|
||||
1:jdk1.8.0_65-2000:1.8.0_65-fcs ################################# [100%]
|
||||
Unpacking JAR files...
|
||||
tools.jar...
|
||||
plugin.jar...
|
||||
javaws.jar...
|
||||
deploy.jar...
|
||||
rt.jar...
|
||||
jsse.jar...
|
||||
charsets.jar...
|
||||
localedata.jar...
|
||||
jfxrt.jar...
|
||||
|
||||
如果没有正确设置 Java 路径,你会看到错误信息。因此,设置正确的路径:
|
||||
|
||||
    export JAVA_HOME=/usr/java/jdk1.8.0_65/   # 与上面实际安装的 JDK 版本保持一致
|
||||
    export PATH=$PATH:$JAVA_HOME/bin
|
||||
|
||||
检查是否安装了正确的版本:
|
||||
|
||||
[root@li1260-39 ~]# java -version
|
||||
java version "1.8.0_65"
|
||||
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
|
||||
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
|
||||
|
||||
如果你安装 Android Studio 的时候看到任何类似 “unable-to-run-mksdcard-sdk-tool:” 的错误信息,你可能要在 CentOS 7 64 位系统中安装以下软件包:
|
||||
|
||||
- glibc.i686
|
||||
- glibc-devel.i686
|
||||
- libstdc++.i686
|
||||
- zlib-devel.i686
|
||||
- ncurses-devel.i686
|
||||
- libX11-devel.i686
|
||||
- libXrender.i686
|
||||
- libXrandr.i686
|
||||
|
||||
通过从 [Android 网站][3] 下载 IDE 文件然后解压安装 studio 也是一样的。
|
||||
|
||||
[root@li1260-39 tmp]# unzip android-studio-ide-141.2343393-linux.zip
|
||||
|
||||
移动 android-studio 目录到 /opt 目录
|
||||
|
||||
[root@li1260-39 tmp]# mv /tmp/android-studio/ /opt/
|
||||
|
||||
需要的话你可以创建一个到 studio 可执行文件的符号链接用于快速启动。
|
||||
|
||||
[root@li1260-39 tmp]# ln -s /opt/android-studio/bin/studio.sh /usr/local/bin/android-studio
|
||||
|
||||
现在在终端中启动 studio:
|
||||
|
||||
[root@localhost ~]#studio
|
||||
|
||||
之后用于完成安装的截图和前面 Ubuntu 安装过程中的是一样的。安装完成后,你就可以开始开发你自己的 Android 应用了。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
虽然发布不到一年,但是 Android Studio 已经替代 Eclipse 成为了 Android 的开发最主要的 IDE。它是唯一能支持 Google 之后将要提供的 Android SDK 和其它 Android 特性的官方 IDE 工具。那么,你还在等什么呢?赶快安装 Android Studio 来体验开发 Android 应用的乐趣吧。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/tools/install-android-studio-ubuntu-15-04-centos-7/
|
||||
|
||||
作者:[B N Poornima][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/bnpoornima/
|
||||
[1]:https://www.jetbrains.com/idea/
|
||||
[2]:http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
|
||||
[3]:http://developer.android.com/sdk/index.html
|
@ -1,6 +1,6 @@
|
||||
LNAV - 基于 Ncurses 的日志文件阅读器
|
||||
LNAV:基于 Ncurses 的日志文件阅读器
|
||||
================================================================================
|
||||
日志文件导航器(Logfile Navigator,简称 lnav),是一个基于 curses 用于查看和分析日志文件的工具。和文本阅读器/编辑器相比, lnav 的好处是它充分利用了可以从日志文件中获取的语义信息,例如时间戳和日志等级。利用这些额外的语义信息, lnav 可以处理类似事情:来自不同文件的交错信息;按照时间生成信息直方图;提供在文件中导航的关键字。它希望使用这些功能可以使得用户可以快速有效地定位和解决问题。
|
||||
日志文件导航器(Logfile Navigator,简称 lnav),是一个基于 curses 的,用于查看和分析日志文件的工具。和文本阅读器/编辑器相比, lnav 的好处是它充分利用了可以从日志文件中获取的语义信息,例如时间戳和日志等级。利用这些额外的语义信息, lnav 可以处理像这样的事情:来自不同文件的交错的信息;按照时间生成信息直方图;支持在文件中导航的快捷键。它希望使用这些功能可以使得用户可以快速有效地定位和解决问题。
|
||||
|
||||
### lnav 功能 ###
|
||||
|
||||
@ -10,15 +10,15 @@ Syslog、Apache 访问日志、strace、tcsh 历史以及常见的带时间戳
|
||||
|
||||
#### 直方图视图: ####
|
||||
|
||||
以时间为桶显示日志信息数量。这对于在一段长时间内大概了解发生了什么非常有用。
|
||||
以时间区划来显示日志信息数量。这对于大概了解在一长段时间内发生了什么非常有用。
|
||||
|
||||
#### 过滤器: ####
|
||||
|
||||
只显示那些匹配或不匹配一些正则表达式的行。对于移除大量你不感兴趣的日志行非常有用。
|
||||
|
||||
#### 及时操作: ####
|
||||
#### 即时操作: ####
|
||||
|
||||
在你输入到时候会同时完成检索;当添加新日志行的时候回自动加载和搜索;加载行的时候会应用过滤器;另外,还会在你输入 SQL 查询的时候检查正确性。
|
||||
在你输入到时候会同时完成检索;当添加了新日志行的时候会自动加载和搜索;加载行的时候会应用过滤器;另外,还会在你输入 SQL 查询的时候检查其正确性。
|
||||
|
||||
#### 自动显示后文: ####
|
||||
|
||||
@ -34,11 +34,11 @@ Syslog、Apache 访问日志、strace、tcsh 历史以及常见的带时间戳
|
||||
|
||||
#### 导航: ####
|
||||
|
||||
有快捷键用于跳转到下一个或上一个错误或警告,按照一定的时间向后或向前移动。
|
||||
有快捷键用于跳转到下一个或上一个错误或警告,按照指定的时间向后或向前翻页。
|
||||
|
||||
#### 用 SQL 查询日志: ####
|
||||
|
||||
每个日志文件行都被认为是数据库中可以使用 SQL 查询的一行。可以使用的列取决于查看的日志文件类型。
|
||||
每个日志文件行都相当于数据库中的一行,可以使用 SQL 进行查询。可以使用的列取决于查看的日志文件类型。
|
||||
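
可用的表名和列名取决于所查看的日志类型。以 Apache 访问日志为例,下面是一个示意(其中的表名 access_log 及列名是 lnav 对这类日志的内置映射,此处仅作演示):

    # 打开一个 Apache 访问日志
    lnav /var/log/apache2/access.log

    # 在 lnav 界面中按 ; 进入 SQL 提示符,然后可以执行类似下面的查询:
    #   SELECT c_ip, count(*) AS hits FROM access_log GROUP BY c_ip ORDER BY hits DESC;
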
|
||||
#### 命令和搜索历史: ####
|
||||
|
||||
@ -62,9 +62,7 @@ Syslog、Apache 访问日志、strace、tcsh 历史以及常见的带时间戳
|
||||
|
||||
![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/51.png)
|
||||
|
||||
如果你想查看特定的日志,那么需要指定路径
|
||||
|
||||
如果你想看 CPU 日志,在你的终端里运行下面的命令
|
||||
如果你想查看特定的日志,那么需要指定路径。如果你想看 CUPS(打印服务)的日志,在你的终端里运行下面的命令
|
||||
|
||||
lnav /var/log/cups
|
||||
|
||||
@ -76,7 +74,7 @@ via: http://www.ubuntugeek.com/lnav-ncurses-based-log-file-viewer.html
|
||||
|
||||
作者:[ruchi][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,59 @@
|
||||
如何在 Ubuntu 16.04,15.10,14.04 中安装 GIMP 2.8.16
|
||||
================================================================================
|
||||
![GIMP 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-icon.png)
|
||||
|
||||
GIMP 图像编辑器 2.8.16 版本在其 20 岁生日时发布了。下面介绍如何在 Ubuntu 16.04、Ubuntu 15.10、Ubuntu 14.04、Ubuntu 12.04 及其衍生版本(如 Linux Mint 17.x/13、Elementary OS Freya)中安装或升级 GIMP。
|
||||
|
||||
GIMP 2.8.16 支持 OpenRaster 文件中的层组,修复了 PSD 中的层组支持以及各种用户界面改进,修复了 OSX 上的构建系统,以及更多新的变化。请阅读 [官方声明][1]。
|
||||
|
||||
![GIMP image editor 2.8,16](http://ubuntuhandbook.org/wp-content/uploads/2014/08/gimp-2-8-14.jpg)
|
||||
|
||||
### 如何安装或升级: ###
|
||||
|
||||
多亏了 Otto Meier,[Ubuntu PPA][2] 中最新的 GIMP 包可用于当前所有的 Ubuntu 版本和其衍生版。
|
||||
|
||||
**1. 添加 GIMP PPA**
|
||||
|
||||
从 Unity Dash 中打开终端,或通过 Ctrl+Alt+T 快捷键打开。在它打开后,粘贴下面的命令并回车:
|
||||
|
||||
sudo add-apt-repository ppa:otto-kesselgulasch/gimp
|
||||
|
||||
![add GIMP PPA](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-ppa.jpg)
|
||||
|
||||
输入你的密码,密码不会在终端显示,然后回车继续。
|
||||
|
||||
**2. 安装或升级编辑器**
|
||||
|
||||
在添加了 PPA 后,启动 **Software Updater**(在 Mint 中是 Software Manager)。检查更新后,你将看到 GIMP 的更新列表。点击 “Install Now” 进行升级。
|
||||
|
||||
![upgrade-gimp2816](http://ubuntuhandbook.org/wp-content/uploads/2015/11/upgrade-gimp2816.jpg)
|
||||
|
||||
对于那些喜欢 Linux 命令的,按顺序执行下面的命令,刷新仓库的缓存然后安装 GIMP:
|
||||
|
||||
sudo apt-get update
|
||||
|
||||
sudo apt-get install gimp
|
||||
|
||||
**3. (可选的) 卸载**
|
||||
|
||||
如果你想卸载或降级 GIMP 图像编辑器。从软件中心直接删除它,或者按顺序运行下面的命令来将 PPA 清除并降级软件:
|
||||
|
||||
sudo apt-get install ppa-purge
|
||||
|
||||
sudo ppa-purge ppa:otto-kesselgulasch/gimp
|
||||
|
||||
就这样。玩的愉快!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ubuntuhandbook.org/index.php/2015/11/how-to-install-gimp-2-8-16-in-ubuntu-16-04-15-10-14-04/
|
||||
|
||||
作者:[Ji m][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ubuntuhandbook.org/index.php/about/
|
||||
[1]:http://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/
|
||||
[2]:https://launchpad.net/~otto-kesselgulasch/+archive/ubuntu/gimp
|
@ -1,16 +1,16 @@
|
||||
tar 命令详解
|
||||
tar 命令使用介绍
|
||||
================================================================================
|
||||
Linux [tar][1] 命令是归档或分发文件时的强大武器。GNU tar 归档包可以包含多个文件和目录,还能保留权限,它还支持多种压缩格式。Tar 表示 "**T**ape **Ar**chiver",这是一种 POSIX 标准。
|
||||
Linux [tar][1] 命令是归档或分发文件时的强大武器。GNU tar 归档包可以包含多个文件和目录,还能保留其文件权限,它还支持多种压缩格式。Tar 表示 "**T**ape **Ar**chiver",这种格式是 POSIX 标准。
|
||||
|
||||
### Tar 文件格式 ###
|
||||
|
||||
tar 压缩等级简介。
|
||||
tar 压缩等级简介:
|
||||
|
||||
- **无压缩** 没有压缩的文件用 .tar 结尾。
|
||||
- **Gzip 压缩** Gzip 格式是 tar 使用最广泛的压缩格式,它能快速压缩和提取文件。用 gzip 压缩的文件通常用 .tar.gz 或 .tgz 结尾。这里有一些如何[创建][2]和[解压][3] tar.gz 文件的例子。
|
||||
- **Bzip2 压缩** 和 Gzip格式相比 Bzip2 提供了更好的压缩比。创建压缩文件也比较慢,通常采用 .tar.bz2 结尾。
|
||||
- **Bzip2 压缩** 和 Gzip 格式相比 Bzip2 提供了更好的压缩比。创建压缩文件也比较慢,通常采用 .tar.bz2 结尾。
|
||||
- **Lzip(LZMA)压缩** Lzip 压缩结合了 Gzip 速度快的优势,以及和 Bzip2 类似(甚至更好)的压缩率。尽管有这些好处,这个格式并没有得到广泛使用。
|
||||
- **Lzop 压缩** 这个压缩选项也许是 tar 最快的压缩格式,它的压缩率和 gzip 类似,也没有广泛使用。
|
||||
- **Lzop 压缩** 这个压缩选项也许是 tar 最快的压缩格式,它的压缩率和 gzip 类似,但也没有广泛使用。
|
||||
|
||||
常见的格式是 tar.gz 和 tar.bz2。如果你想快速压缩,那么就使用 gzip;如果归档文件大小比较重要,就使用 tar.bz2。
|
||||
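下面是一个简单的对比示例(目录名 myfolder 仅作演示),分别创建 gzip 和 bzip2 压缩的归档:

tar czf archive.tar.gz myfolder/
tar cjf archive.tar.bz2 myfolder/

其中 z 选项表示使用 gzip 压缩,j 选项表示使用 bzip2 压缩。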
|
||||
@ -59,11 +59,13 @@ tar 命令在 Windows 也可以使用,你可以从 Gunwin 项目[http://gnuwin
|
||||
- **[p]** 这个选项表示 “preserve”,它指示 tar 在归档文件中保留文件属主和权限信息。
|
||||
- **[c]** 表示创建。要创建文件时不能缺少这个选项。
|
||||
- **[z]** z 选项启用 gzip 压缩。
|
||||
- **[f]** file 选项告诉 tar 创建一个归档文件。如果没有这个选项 tar 会把输出发送到 stdout。
|
||||
- **[f]** file 选项告诉 tar 创建一个归档文件。如果没有这个选项 tar 会把输出发送到标准输出( LCTT 译注:如果没有指定,标准输出默认是屏幕,显然你不会想在屏幕上显示一堆乱码,通常你可以用管道符号送到其它程序去)。
|
||||
|
||||
#### Tar 命令事例 ####
|
||||
#### Tar 命令示例 ####
|
||||
|
||||
**事例 1: 备份 /etc 目录** 创建 /etc 配置目录的一个备份。备份保存在 root 目录。
|
||||
**示例 1: 备份 /etc 目录**
|
||||
|
||||
创建 /etc 配置目录的一个备份。备份保存在 root 目录。
|
||||
|
||||
tar pczvf /root/etc.tar.gz /etc
|
||||
|
||||
@ -71,19 +73,23 @@ tar 命令在 Windows 也可以使用,你可以从 Gunwin 项目[http://gnuwin
|
||||
|
||||
要以 root 用户运行该命令,以确保 /etc 中的所有文件都会被包含在备份中。这次,我在命令中添加了 [v] 选项。这个选项表示 verbose(详细输出),它告诉 tar 显示所有被包含到归档文件中的文件名。
|
||||
|
||||
**事例 2: 备份你的 /home 目录** 创建你的 home 目录的备份。备份会被保存到 /backup 目录。
|
||||
**示例 2: 备份你的 /home 目录**
|
||||
|
||||
创建你的 home 目录的备份。备份会被保存到 /backup 目录。
|
||||
|
||||
tar czf /backup/myuser.tar.gz /home/myuser
|
||||
|
||||
用你的用户名替换 myuser。这个命令中,我省略了 [p] 选项,也就不会保存权限。
|
||||
|
||||
**事例 3: 基于文件的 MySQL 数据库备份** 在大部分 Linux 发行版中,MySQL 数据库保存在 /var/lib/mysql。你可以使用下面的命令检查:
|
||||
**示例 3: 基于文件的 MySQL 数据库备份**
|
||||
|
||||
在大部分 Linux 发行版中,MySQL 数据库保存在 /var/lib/mysql。你可以使用下面的命令来查看:
|
||||
|
||||
ls /var/lib/mysql
|
||||
|
||||
![使用 tar 基于文件备份 MySQL](https://www.howtoforge.com/images/linux-tar-command/big/tar_backup_mysql.png)
|
||||
|
||||
用 tar 备份 MySQL 文件时为了保持一致性,首先停用数据库服务器。备份会被写到 /backup 目录。
|
||||
用 tar 备份 MySQL 数据文件时为了保持数据一致性,首先停用数据库服务器。备份会被写到 /backup 目录。
|
||||
|
||||
1) 创建 backup 目录
|
||||
|
||||
@ -108,10 +114,10 @@ tar 命令在 Windows 也可以使用,你可以从 Gunwin 项目[http://gnuwin
|
||||
#### tar 命令选项解释 ####
|
||||
|
||||
- **[x]** x 表示提取,提取 tar 文件时这个命令不可缺少。
|
||||
- **[z]** z 选项告诉 tar 要解压的归档文件时 gzip 格式。
|
||||
- **[z]** z 选项告诉 tar 要解压的归档文件是 gzip 格式。
|
||||
- **[f]** 该选项告诉 tar 从一个文件中读取归档内容,本例中是 myarchive.tar.gz。
|
||||
|
||||
上面的 tar 命令会安静地提取 tar.gz 文件,它只会显示错误信息。如果你想要看提取了哪些文件,那么添加 “v” 选项。
|
||||
上面的 tar 命令会安静地提取 tar.gz 文件,除非有错误信息。如果你想要看提取了哪些文件,那么添加 “v” 选项。
|
||||
|
||||
tar xzvf myarchive.tar.gz
|
||||
|
||||
@ -125,7 +131,7 @@ via: https://www.howtoforge.com/tutorial/linux-tar-command/
|
||||
|
||||
作者:[howtoforge][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,64 @@
|
||||
eSpeak: Linux 文本转语音工具
|
||||
================================================================================
|
||||
![Text to speech tool in Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Text-to-speech-Linux.jpg)
|
||||
|
||||
[eSpeak][1]是一款 Linux 命令行工具,能把文本转换成语音。它是一款简洁的语音合成器,用C语言编写而成,它支持英语和其它多种语言。
|
||||
|
||||
eSpeak 从标准输入或者输入文件中读取文本。虽然语音输出与真人声音相去甚远,但是,在你项目需要的时候,eSpeak 仍不失为一个简便快捷的工具。
|
||||
|
||||
eSpeak 部分主要特性如下:
|
||||
|
||||
- 提供给 Linux 和 Windows 的命令行工具
|
||||
- 从文件或者标准输入中把文本读出来
|
||||
- 提供给其它程序使用的共享库版本
|
||||
- 为 Windows 提供 SAPI5 版本,所以它能用于 screen-readers 或者其它支持 Windows SAPI5 接口的程序
|
||||
- 可移植到其它平台,包括安卓,OSX等
|
||||
- 提供多种声音特性选择
|
||||
- 语音输出可保存为 [.WAV][2] 格式的文件
|
||||
- 配合 HTML 部分可支持 SSML(语音合成标记语言,[Speech Synthesis Markup Language][3])
|
||||
- 体积小巧,整个程序连同语言支持等占用小于2MB
|
||||
- 可以实现文本到音素编码(phoneme code)的转化,因此可以作为其它语音合成引擎的前端工具
|
||||
- 开发工具可用于生产和调整音素数据
|
||||
|
||||
### 安装 eSpeak ###
|
||||
|
||||
基于 Ubuntu 的系统中,在终端运行以下命令安装 eSpeak:
|
||||
|
||||
sudo apt-get install espeak
|
||||
|
||||
eSpeak 是一个古老的工具,我推测在其它众多 Linux 发行版(比如 Arch、Fedora)的软件仓库中应该也能找到它,使用 dnf、pacman 等命令就能轻松安装。
|
||||
|
||||
eSpeak 用法如下:输入 espeak 运行程序,然后输入字符并按回车(enter),即可转换为语音输出。使用 Ctrl+C 来关闭运行中的程序。
|
||||
|
||||
![eSpeak command line](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/eSpeak-example.png)
|
||||
|
||||
还有一些其他的选项可用,可以通过程序帮助进行查看。
|
||||
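下面是两个简单的示例(文件名仅作演示):第一条直接朗读一段文字,第二条对应上面提到的从文件读取文本并保存为 WAV 文件的特性:

espeak "Hello world"
espeak -f input.txt -w output.wav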
|
||||
### GUI 版本:Gespeaker ###
|
||||
|
||||
如果你更倾向于使用 GUI 版本,可以安装 Gespeaker,它为 eSpeak 提供了 GTK 界面。
|
||||
|
||||
使用以下命令来安装 Gespeaker:
|
||||
|
||||
sudo apt-get install gespeaker
|
||||
|
||||
操作界面简明易用,你完全可以自行探索。
|
||||
|
||||
![eSpeak GUI tool for text to speech in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/eSpeak-GUI.png)
|
||||
|
||||
虽然这些工具在大多数计算任务下用不到,但是当你的项目需要把文本转换成语音时,使用 espeak 还是挺方便的。是否使用 espeak 这款语音合成器,选择权就交给你们啦。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/espeak-text-speech-linux/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[soooogreen](https://github.com/soooogreen)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://espeak.sourceforge.net/
|
||||
[2]:http://en.wikipedia.org/wiki/WAV
|
||||
[3]:http://en.wikipedia.org/wiki/Speech_Synthesis_Markup_Language
|
@ -0,0 +1,73 @@
|
||||
如何在 Ubuntu 中安装最新的 Arduino IDE 1.6.6
|
||||
================================================================================
|
||||
![Install latest Arduino in Ubuntu](http://ubuntuhandbook.org/wp-content/uploads/2015/11/arduino-icon.png)
|
||||
|
||||
> 本篇教程会教你如何在当前的 Ubuntu 发行版中安装最新的 Arduino IDE 1.6.6。
|
||||
|
||||
开源的 Arduino IDE 发布了 1.6.6,并带来了很多的改变。新版本已经切换到 Java 8,它与 IDE 捆绑在一起,并且是编译所必需的。具体见 [发布说明][1]。
|
||||
|
||||
![Arduino 1.6.6 in Ubuntu 15.10](http://ubuntuhandbook.org/wp-content/uploads/2015/11/arduino-ubuntu.jpg)
|
||||
|
||||
对于那些不想使用软件中心的 1.0.5 旧版本的人而言,你可以使用下面的步骤在所有的 Ubuntu 发行版中安装 Arduino。
|
||||
|
||||
> **请用正确版本号替换软件包的版本号**
|
||||
|
||||
**1、** 从下面的官方链接下载最新的包 **Linux 32-bit 或者 Linux 64-bit**。
|
||||
|
||||
- [https://www.arduino.cc/en/Main/Software][2]
|
||||
|
||||
不知道你系统的类型?进入“系统设置 -> 详细 -> 概览”查看。
|
||||
|
||||
**2、** 从Unity Dash、App Launcher 或者使用 Ctrl+Alt+T 打开终端。打开后,一个个运行下面的命令:
|
||||
|
||||
进入下载文件夹:
|
||||
|
||||
cd ~/Downloads
|
||||
|
||||
![navigate-downloads](http://ubuntuhandbook.org/wp-content/uploads/2015/11/navigate-downloads.jpg)
|
||||
|
||||
使用 tar 命令解压:
|
||||
|
||||
tar -xvf arduino-1.6.6-*.tar.xz
|
||||
|
||||
![extract-archive](http://ubuntuhandbook.org/wp-content/uploads/2015/11/extract-archive.jpg)
|
||||
|
||||
将解压后的文件移动到**/opt/**下:
|
||||
|
||||
sudo mv arduino-1.6.6 /opt
|
||||
|
||||
![move-opt](http://ubuntuhandbook.org/wp-content/uploads/2015/11/move-opt.jpg)
|
||||
|
||||
**3、** 现在 IDE 已经与最新的 Java 绑定使用了。但是最好为程序设置一个桌面图标/启动方式:
|
||||
|
||||
进入安装目录:
|
||||
|
||||
cd /opt/arduino-1.6.6/
|
||||
|
||||
在这个目录下给 install.sh 赋予可执行权限:
|
||||
|
||||
chmod +x install.sh
|
||||
|
||||
最后运行脚本同时安装桌面快捷方式和启动图标:
|
||||
|
||||
./install.sh
|
||||
|
||||
下图中我用“&&”同时运行这三个命令:
|
||||
|
||||
![install-desktop-icon](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-desktop-icon.jpg)
|
||||
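如果你也想像截图那样一次执行这三步,合并后的命令大致如下(路径请以你实际解压出的目录为准):

cd /opt/arduino-1.6.6/ && chmod +x install.sh && ./install.sh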
|
||||
最后从 Unity Dash、程序启动器或者桌面快捷方式运行 Arduino IDE。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ubuntuhandbook.org/index.php/2015/11/install-arduino-ide-1-6-6-ubuntu/
|
||||
|
||||
作者:[Ji m][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ubuntuhandbook.org/index.php/about/
|
||||
[1]:https://www.arduino.cc/en/Main/ReleaseNotes
|
||||
[2]:https://www.arduino.cc/en/Main/Software
|
@ -0,0 +1,95 @@
|
||||
使用 netcat [nc] 命令对 Linux 和 Unix 进行端口扫描
|
||||
================================================================================
|
||||
|
||||
我如何在自己的服务器上找出哪些端口是开放的?如何使用 nc 命令进行端口扫描来替换 [Linux 或类 Unix 中的 nmap 命令][1]?
|
||||
|
||||
nmap(“Network Mapper”)是一个用于网络探测和安全审核的开源工具。如果 nmap 没有安装,或者你不希望使用 nmap,那你可以用 netcat/nc 命令进行端口扫描。它对于查看目标计算机上哪些端口是开放的、运行着哪些服务是非常有用的。你也可以使用 [nmap 命令进行端口扫描][2]。
|
||||
|
||||
### 如何使用 nc 来扫描 Linux,UNIX 和 Windows 服务器的端口呢? ###
|
||||
|
||||
如果未安装 nmap,试试 nc/netcat 命令,如下所示。-z 参数用来告诉 nc 报告开放的端口,而不是发起连接。在 nc 命令中使用 -z 参数时,你需要在主机名/IP 后面指定端口范围,以限制扫描范围并加快速度:
|
||||
|
||||
### 语法 ###
|
||||
### nc -z -v {host-name-here} {port-range-here}
|
||||
nc -z -v host-name-here ssh
|
||||
nc -z -v host-name-here 22
|
||||
nc -w 1 -z -v server-name-here port-Number-here
|
||||
|
||||
### 扫描 1 到 1023 端口 ###
|
||||
nc -zv vip-1.vsnl.nixcraft.in 1-1023
|
||||
|
||||
输出示例:
|
||||
|
||||
Connection to localhost 25 port [tcp/smtp] succeeded!
|
||||
Connection to vip-1.vsnl.nixcraft.in 25 port [tcp/smtp] succeeded!
|
||||
Connection to vip-1.vsnl.nixcraft.in 80 port [tcp/http] succeeded!
|
||||
Connection to vip-1.vsnl.nixcraft.in 143 port [tcp/imap] succeeded!
|
||||
Connection to vip-1.vsnl.nixcraft.in 199 port [tcp/smux] succeeded!
|
||||
Connection to vip-1.vsnl.nixcraft.in 783 port [tcp/*] succeeded!
|
||||
Connection to vip-1.vsnl.nixcraft.in 904 port [tcp/vmware-authd] succeeded!
|
||||
Connection to vip-1.vsnl.nixcraft.in 993 port [tcp/imaps] succeeded!
|
||||
|
||||
你也可以扫描单个端口:
|
||||
|
||||
nc -zv v.txvip1 443
|
||||
nc -zv v.txvip1 80
|
||||
nc -zv v.txvip1 22
|
||||
nc -zv v.txvip1 21
|
||||
nc -zv v.txvip1 smtp
|
||||
nc -zvn v.txvip1 ftp
|
||||
|
||||
### 使用1秒的超时值来更快的扫描 ###
|
||||
netcat -v -z -n -w 1 v.txvip1 1-1023
|
||||
|
||||
输出示例:
|
||||
|
||||
![Fig.01: Linux/Unix: Use Netcat to Establish and Test TCP and UDP Connections on a Server](http://s0.cyberciti.org/uploads/faq/2007/07/scan-with-nc.jpg)
|
||||
|
||||
*图01:Linux/Unix:使用 Netcat 与服务器建立并测试 TCP 和 UDP 连接*
|
||||
|
||||
1. -z : 端口扫描模式即零 I/O 模式。
|
||||
1. -v : 显示详细信息 [使用 -vv 来输出更详细的信息]。
|
||||
1. -n : 使用纯数字 IP 地址,即不用 DNS 来解析 IP 地址。
|
||||
1. -w 1 : 将超时值设置为 1 秒。
|
||||
|
||||
更多例子:
|
||||
|
||||
$ netcat -z -vv www.cyberciti.biz http
|
||||
www.cyberciti.biz [75.126.153.206] 80 (http) open
|
||||
sent 0, rcvd 0
|
||||
$ netcat -z -vv google.com https
|
||||
DNS fwd/rev mismatch: google.com != maa03s16-in-f2.1e100.net
|
||||
DNS fwd/rev mismatch: google.com != maa03s16-in-f6.1e100.net
|
||||
DNS fwd/rev mismatch: google.com != maa03s16-in-f5.1e100.net
|
||||
DNS fwd/rev mismatch: google.com != maa03s16-in-f3.1e100.net
|
||||
DNS fwd/rev mismatch: google.com != maa03s16-in-f8.1e100.net
|
||||
DNS fwd/rev mismatch: google.com != maa03s16-in-f0.1e100.net
|
||||
DNS fwd/rev mismatch: google.com != maa03s16-in-f7.1e100.net
|
||||
DNS fwd/rev mismatch: google.com != maa03s16-in-f4.1e100.net
|
||||
google.com [74.125.236.162] 443 (https) open
|
||||
sent 0, rcvd 0
|
||||
$ netcat -v -z -n -w 1 192.168.1.254 1-1023
|
||||
(UNKNOWN) [192.168.1.254] 989 (ftps-data) open
|
||||
(UNKNOWN) [192.168.1.254] 443 (https) open
|
||||
(UNKNOWN) [192.168.1.254] 53 (domain) open
|
||||
|
||||
也可以看看 :
|
||||
|
||||
- [使用 nmap 命令扫描网络中开放的端口][3]。
|
||||
- 手册页 - [nc(1)][4], [nmap(1)][5]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.cyberciti.biz/faq/linux-port-scanning/
|
||||
|
||||
作者:Vivek Gite
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://linux.cn/article-2561-1.html
|
||||
[2]:https://linux.cn/article-2561-1.html
|
||||
[3]:https://linux.cn/article-2561-1.html
|
||||
[4]:http://www.manpager.com/linux/man1/nc.1.html
|
||||
[5]:http://www.manpager.com/linux/man1/nmap.1.html
|
@ -0,0 +1,146 @@
|
||||
如何在命令行中使用 ftp 命令上传和下载文件
|
||||
================================================================================
|
||||
本文介绍如何在 Linux shell 中使用 ftp 命令,包括如何连接 FTP 服务器、上传或下载文件以及创建文件夹。尽管现在有许多不错的 FTP 桌面应用,但是在服务器管理、SSH 远程会话等场景中,命令行的 ftp 命令还是有很多用处的,比如需要让服务器从 ftp 仓库拉取备份。
|
||||
|
||||
### 步骤 1: 建立 FTP 连接 ###
|
||||
|
||||
想要连接 FTP 服务器,在命令行中先输入 `ftp`,然后空一格,跟上 FTP 服务器的域名“domain.com”或者 IP 地址。
|
||||
|
||||
#### 例如: ####
|
||||
|
||||
ftp domain.com
|
||||
|
||||
ftp 192.168.0.1
|
||||
|
||||
ftp user@ftpdomain.com
|
||||
|
||||
**注意: 本例中使用匿名服务器。**
|
||||
|
||||
替换下面例子中 IP 或域名为你的服务器地址。
|
||||
|
||||
![FTP 登录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/ftpanonymous.png)
|
||||
|
||||
### 步骤 2: 使用用户名密码登录 ###
|
||||
|
||||
绝大多数的 FTP 服务器是使用密码保护的,因此这些 FTP 服务器会询问'**username**'和'**password**'.
|
||||
|
||||
如果你连接到被称作匿名 FTP 服务器(LCTT 译注:即,并不需要你有真实的用户信息即可使用的 FTP 服务器称之为匿名 FTP 服务器),可以尝试`anonymous`作为用户名以及使用空密码:
|
||||
|
||||
Name: anonymous
|
||||
|
||||
Password:
|
||||
|
||||
之后,终端会返回如下的信息:
|
||||
|
||||
230 Login successful.
|
||||
Remote system type is UNIX.
|
||||
Using binary mode to transfer files.
|
||||
ftp>
|
||||
|
||||
登录成功。
|
||||
|
||||
![FTP 登录成功](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/login.png)
|
||||
|
||||
### 步骤 3: 目录操作 ###
|
||||
|
||||
FTP 命令可以列出、移动和创建文件夹,如同我们在本地使用我们的电脑一样。`ls`可以打印目录列表,`cd`可以改变目录,`mkdir`可以创建文件夹。
|
||||
|
||||
#### 使用安全设置列出目录 ####
|
||||
|
||||
ftp> ls
|
||||
|
||||
服务器将返回:
|
||||
|
||||
200 PORT command successful. Consider using PASV.
|
||||
150 Here comes the directory listing.
|
||||
directory list
|
||||
....
|
||||
....
|
||||
226 Directory send OK.
|
||||
|
||||
![打印目录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/listing.png)
|
||||
|
||||
#### 改变目录: ####
|
||||
|
||||
改变目录可以输入:
|
||||
|
||||
ftp> cd directory
|
||||
|
||||
服务器将会返回:
|
||||
|
||||
250 Directory successfully changed.
|
||||
|
||||
![FTP中改变目录](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/directory.png)
|
||||
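前面提到的 `mkdir` 命令用法与此类似,例如在服务器当前目录下创建一个文件夹(目录名 newdir 仅作演示,前提是服务器授予了你写权限):

ftp> mkdir newdir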
|
||||
### 步骤 4: 使用 FTP 下载文件 ###
|
||||
|
||||
在下载一个文件之前,我们首先需要使用 `lcd` 命令设定本地接收目录的位置。
|
||||
|
||||
lcd /home/user/yourdirectoryname
|
||||
|
||||
如果你不指定下载目录,文件将会下载到你登录 FTP 时候的工作目录。
|
||||
|
||||
现在,我们可以使用命令 get 来下载文件,比如:
|
||||
|
||||
get file
|
||||
|
||||
文件会保存在使用lcd命令设置的目录位置。
|
||||
|
||||
服务器返回消息:
|
||||
|
||||
local: file remote: file
|
||||
200 PORT command successful. Consider using PASV.
|
||||
150 Opening BINARY mode data connection for file (xxx bytes).
|
||||
226 File send OK.
|
||||
XXX bytes received in x.xx secs (x.xxx MB/s).
|
||||
|
||||
![使用FTP下载文件](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/gettingfile.png)
|
||||
|
||||
下载多个文件可以使用通配符及 `mget` 命令。例如,下面这个例子我打算下载所有以 .xls 结尾的文件。
|
||||
|
||||
mget *.xls
|
||||
|
||||
### 步骤 5: 使用 FTP 上传文件 ###
|
||||
|
||||
建立 FTP 连接后,同样可以通过 FTP 上传文件。
|
||||
|
||||
使用 `put`命令上传文件:
|
||||
|
||||
put file
|
||||
|
||||
当文件不在当前本地目录下的时候,可以使用绝对路径:
|
||||
|
||||
put /path/file
|
||||
|
||||
同样,可以上传多个文件:
|
||||
|
||||
mput *.xls
|
||||
|
||||
### 步骤 6: 关闭 FTP 连接 ###
|
||||
|
||||
完成FTP工作后,为了安全起见需要关闭连接。有三个命令可以关闭连接:
|
||||
|
||||
bye
|
||||
|
||||
exit
|
||||
|
||||
quit
|
||||
|
||||
任意一个命令都可以断开与 FTP 服务器的连接,服务器会返回:
|
||||
|
||||
221 Goodbye
|
||||
|
||||
![](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/goodbye.png)
|
||||
|
||||
需要更多帮助,在使用 ftp 命令连接到服务器后,可以使用`help`获得更多帮助。
|
||||
|
||||
![](https://www.howtoforge.com/images/how-to-use-ftp-in-the-linux-shell/big/helpwindow.png)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/how-to-use-ftp-on-the-linux-shell/
|
||||
|
||||
译者:[VicYu](http://vicyu.net)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,60 @@
|
||||
如何在 CentOS 6/7 上移除被 Fail2ban 禁止的 IP
|
||||
================================================================================
|
||||
![](http://www.ehowstuff.com/wp-content/uploads/2015/12/security-265130_1280.jpg)
|
||||
|
||||
[fail2ban][1] 是一款用于保护你的服务器免于暴力攻击的入侵防护软件。fail2ban 用 python 写成,并广泛用于很多服务器上。fail2ban 会扫描日志文件,封禁表现出恶意迹象的 IP,比如过多的密码失败尝试、利用 web 服务器漏洞、攻击 wordpress 插件等其他漏洞利用行为。如果你已经安装并使用了 fail2ban 来保护你的 web 服务器,你也许会想知道如何在 CentOS 6、CentOS 7、RHEL 6、RHEL 7 和 Oracle Linux 6/7 中找到被 fail2ban 阻止的 IP,或者你想将 IP 从 fail2ban 监狱中移除。
|
||||
|
||||
### 如何列出被禁止的 IP ###
|
||||
|
||||
要查看所有被禁止的 ip 地址,运行下面的命令:
|
||||
|
||||
# iptables -L
|
||||
Chain INPUT (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
f2b-AccessForbidden tcp -- anywhere anywhere tcp dpt:http
|
||||
f2b-WPLogin tcp -- anywhere anywhere tcp dpt:http
|
||||
f2b-ConnLimit tcp -- anywhere anywhere tcp dpt:http
|
||||
f2b-ReqLimit tcp -- anywhere anywhere tcp dpt:http
|
||||
f2b-NoAuthFailures tcp -- anywhere anywhere tcp dpt:http
|
||||
f2b-SSH tcp -- anywhere anywhere tcp dpt:ssh
|
||||
f2b-php-url-open tcp -- anywhere anywhere tcp dpt:http
|
||||
f2b-nginx-http-auth tcp -- anywhere anywhere multiport dports http,https
|
||||
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
|
||||
ACCEPT icmp -- anywhere anywhere
|
||||
ACCEPT all -- anywhere anywhere
|
||||
ACCEPT tcp -- anywhere anywhere tcp dpt:EtherNet/IP-1
|
||||
ACCEPT tcp -- anywhere anywhere tcp dpt:http
|
||||
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
|
||||
|
||||
Chain FORWARD (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
|
||||
|
||||
Chain OUTPUT (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
|
||||
|
||||
Chain f2b-NoAuthFailures (1 references)
|
||||
target prot opt source destination
|
||||
REJECT all -- 64.68.50.128 anywhere reject-with icmp-port-unreachable
|
||||
REJECT all -- 104.194.26.205 anywhere reject-with icmp-port-unreachable
|
||||
RETURN all -- anywhere anywhere
|
||||
|
||||
### 如何从 Fail2ban 中移除 IP ###
|
||||
|
||||
# iptables -D f2b-NoAuthFailures -s banned_ip -j REJECT
|
||||
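例如,按照上面的命令格式,要解除 `iptables -L` 输出中 64.68.50.128 这个地址在 f2b-NoAuthFailures 链中的封禁,可以运行:

# iptables -D f2b-NoAuthFailures -s 64.68.50.128 -j REJECT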
|
||||
我希望这篇教程可以给你在 CentOS 6、CentOS 7、RHEL 6、RHEL 7 和 Oracle Linux 6/7 中移除被禁止的 ip 一些指导。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.ehowstuff.com/how-to-remove-banned-ip-from-fail2ban-on-centos/
|
||||
|
||||
作者:[skytech][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.ehowstuff.com/author/skytech/
|
||||
[1]:http://www.fail2ban.org/wiki/index.php/Main_Page
|
@ -0,0 +1,41 @@
|
||||
可以在 Linux 下试试苹果编程语言 Swift
|
||||
================================================================================
|
||||
![](http://itsfoss.com/wp-content/uploads/2015/12/Apple-Swift-Open-Source.jpg)
|
||||
|
||||
是的,你知道的,苹果编程语言 Swift 已经开源了。其实我们并不应该感到意外,因为[在六个月以前苹果就已经宣布了这个消息][1]。
|
||||
|
||||
苹果宣布推出开源 Swift 社区。一个专用于开源 Swift 社区的[新网站][2]已经就位,网站首页显示以下信息:
|
||||
|
||||
> 我们对 Swift 开源感到兴奋。在苹果推出了编程语言 Swift 之后,它很快成为历史上增长最快的语言之一。Swift 可以编写出难以置信的又快又安全的软件。目前,Swift 是开源的,你可以将这个最好的通用编程语言用在各种地方。
|
||||
|
||||
[swift.org][2] 这个网站将会作为一站式网站,它会提供各种资料的下载,包括各种平台,社区指南,最新消息,入门教程,为开源 Swift 做贡献的说明,文件和一些其他的指南。 如果你正期待着学习 Swift,那么必须收藏这个网站。
|
||||
|
||||
在苹果的这次宣布中,一个用于方便分享和构建代码的包管理器已经可用了。
|
||||
|
||||
对于所有的 Linux 使用者来说,最重要的是,源代码已经可以从 [Github][3] 获得了。你可以从以下链接检出(checkout)它:
|
||||
|
||||
- [苹果 Swift 源代码][3]
|
||||
|
||||
除此之外,对于 ubuntu 14.04 和 15.10 版本还有预编译的二进制文件。
|
||||
|
||||
- [ubuntu 系统的 Swift 二进制文件][4]
|
||||
|
||||
不要急着在产品环境中使用它们,因为这些都是开发分支,并不适合于产品环境。一旦发布了 Linux 下 Swift 的稳定版本,我希望 Ubuntu 会把它包含在 [umake][5] 中,和 [Visual Studio Code][6] 放在一起。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/swift-open-source-linux/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://itsfoss.com/apple-open-sources-swift-programming-language-linux/
|
||||
[2]:https://swift.org/
|
||||
[3]:https://github.com/apple
|
||||
[4]:https://swift.org/download/#latest-development-snapshots
|
||||
[5]:https://wiki.ubuntu.com/ubuntu-make
|
||||
[6]:http://itsfoss.com/install-visual-studio-code-ubuntu/
|
@ -0,0 +1,66 @@
|
||||
如何深度定制 Ubuntu 面板的时间日期显示格式
|
||||
================================================================================
|
||||
![时间日期格式](http://ubuntuhandbook.org/wp-content/uploads/2015/08/ubuntu_tips1.png)
|
||||
|
||||
尽管设置页面里已经有一些选项可以用了,这个快速教程会向你展示如何更加深入地自定义 Ubuntu 面板上的时间和日期指示器。
|
||||
|
||||
![自定义时间日期](http://ubuntuhandbook.org/wp-content/uploads/2015/12/custom-timedate.jpg)
|
||||
|
||||
在开始之前,在 Ubuntu 软件中心搜索并安装 **dconf Editor**。然后启动该软件并按以下步骤执行:
|
||||
|
||||
**1、** 当 dconf Editor 启动后,导航至 **com -> canonical -> indicator -> datetime**。将 **time-format** 的值设置为 **custom**。
|
||||
|
||||
![自定义时间格式](http://ubuntuhandbook.org/wp-content/uploads/2015/12/time-format.jpg)
|
||||
|
||||
你也可以通过终端里的命令完成以上操作:
|
||||
|
||||
gsettings set com.canonical.indicator.datetime time-format 'custom'
|
||||
|
||||
**2、** 现在你可以通过编辑 **custom-time-format** 的值来自定义时间和日期的格式。
|
||||
|
||||
![自定义-时间格式](http://ubuntuhandbook.org/wp-content/uploads/2015/12/customize-timeformat.jpg)
|
||||
|
||||
你也可以通过命令完成:(LCTT 译注:将 FORMAT_VALUE_HERE 替换为所需要的格式值)
|
||||
|
||||
gsettings set com.canonical.indicator.datetime custom-time-format 'FORMAT_VALUE_HERE'
|
||||
|
||||
以下是参数含义:
|
||||
|
||||
- %a = 星期名缩写
|
||||
- %A = 星期名完整拼写
|
||||
- %b = 月份名缩写
|
||||
- %B = 月份名完整拼写
|
||||
- %d = 每月的日期
|
||||
- %l = 小时 ( 1..12), %I = 小时 (01..12)
|
||||
- %k = 小时 ( 1..23), %H = 小时 (01..23)
|
||||
- %M = 分钟 (00..59)
|
||||
- %p = 午别,AM 或 PM, %P = am 或 pm.
|
||||
- %S = 秒 (00..59)
|
||||
|
||||
可以打开终端键入命令 `man date` 并执行以了解更多细节。
|
||||
|
||||
一些自定义时间日期显示格式值的例子:
|
||||
|
||||
**%a %H:%M %m/%d/%Y**
|
||||
|
||||
![%a %H:%M %m/%d/%Y](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-1.jpg)
|
||||
|
||||
**%a %r %b %d or %a %I:%M:%S %p %b %d**
|
||||
|
||||
![%a %r %b %d or %a %I:%M:%S %p %b %d](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-2.jpg)
|
||||
|
||||
**%a %-d %b %l:%M %P %z**
|
||||
|
||||
![%a %-d %b %l:%M %P %z](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-3.jpg)
|
||||
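例如,要直接应用上面第一个示例的格式,可以运行下面的命令(格式字符串可按需替换成其它示例):

gsettings set com.canonical.indicator.datetime custom-time-format '%a %H:%M %m/%d/%Y'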
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ubuntuhandbook.org/index.php/2015/12/time-date-format-ubuntu-panel/
|
||||
|
||||
作者:[Ji m][a]
|
||||
译者:[alim0x](https://github.com/alim0x)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ubuntuhandbook.org/index.php/about/
|
@ -0,0 +1,72 @@
|
||||
在 Centos/RHEL 6.X 上安装 Wetty
|
||||
================================================================================
|
||||
![](http://www.unixmen.com/wp-content/uploads/2015/11/Terminal.png)
|
||||
|
||||
**Wetty 是什么?**
|
||||
|
||||
Wetty = Web + tty
|
||||
|
||||
作为系统管理员,如果你是在 Linux 桌面下,你可以用它像一个 GNOME 终端(或类似的)一样来连接远程服务器;如果你是在 Windows 下,你可以用它像使用 Putty 这样的 SSH 客户端一样来连接远程,然后同时可以在浏览器中上网并查收邮件等其它事情。
|
||||
|
||||
(LCTT 译注:简而言之,这是一个基于 Web 浏览器的远程终端)
|
||||
|
||||
![](https://github.com/krishnasrinivas/wetty/raw/master/terminal.png)
|
||||
|
||||
### 第1步: 安装 epel 源 ###
|
||||
|
||||
# wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
|
||||
# rpm -ivh epel-release-6-8.noarch.rpm
|
||||
|
||||
### 第2步:安装依赖 ###
|
||||
|
||||
# yum install epel-release git nodejs npm -y
|
||||
|
||||
(LCTT 译注:对,没错,是用 node.js 编写的)
|
||||
|
||||
### 第3步:在安装完依赖后,克隆 GitHub 仓库 ###
|
||||
|
||||
# git clone https://github.com/krishnasrinivas/wetty
|
||||
|
||||
### 第4步:运行 Wetty ###
|
||||
|
||||
# cd wetty
|
||||
# npm install
|
||||
|
||||
### 第5步:从 Web 浏览器启动 Wetty 并访问 Linux 终端 ###
|
||||
|
||||
# node app.js -p 8080
|
||||
|
||||
### 第6步:为 Wetty 安装 HTTPS 证书 ###
|
||||
|
||||
# openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -nodes
|
||||
|
||||
(等待完成)
|
||||
|
||||
### 第7步:通过 HTTPS 来使用 Wetty ###
|
||||
|
||||
# nohup node app.js --sslkey key.pem --sslcert cert.pem -p 8080 &
|
||||
|
||||
### 第8步:为 wetty 添加一个用户 ###
|
||||
|
||||
# useradd <username>
|
||||
# passwd <username>
|
||||
|
||||
### 第9步:访问 wetty ###
|
||||
|
||||
http://Your_IP-Address:8080
|
||||
|
||||
如果按第 6、7 步启用了 HTTPS,则改用 https://Your_IP-Address:8080 访问。输入你之前为 wetty 创建的用户的用户名和密码,即可开始使用。
|
||||
|
||||
到此结束!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/install-wetty-centosrhel-6-x/
|
||||
|
||||
作者:[Debojyoti Das][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/debjyoti/
|
@ -0,0 +1,100 @@
|
||||
如何在 CentOS 上启用 软件集 Software Collections(SCL)
|
||||
================================================================================
|
||||
|
||||
红帽企业版 linux(RHEL)和它的社区版分支——CentOS,提供10年的生命周期,这意味着 RHEL/CentOS 的每个版本会提供长达10年的安全更新。虽然这么长的生命周期为企业用户提供了迫切需要的系统兼容性和可靠性,但也存在一个缺点:随着底层的 RHEL/CentOS 版本接近生命周期的结束,核心应用和运行时环境变得陈旧过时。例如 CentOS 6.5,它的生命周期结束时间是2020年11月30日,其所携带的 Python 2.6.6和 MySQL 5.1.73,以今天的标准来看已经非常古老了。
|
||||
|
||||
另一方面,在 RHEL/CentOS 上试图手动升级开发工具链和运行时环境存在使系统崩溃的潜在可能,除非所有依赖都被正确解决。通常情况下,手动升级都是不推荐的,除非你知道你在干什么。
|
||||
|
||||
[软件集(Software Collections)][1](SCL)源出现了,以帮助解决 RHEL/CentOS 下的这种问题。SCL 的创建就是为了给 RHEL/CentOS 用户提供一种以方便、安全地安装和使用应用程序和运行时环境的多个(而且可能是更新的)版本的方式,同时避免把系统搞乱。与之相对的是第三方源,它们可能会在已安装的包之间引起冲突。
|
||||
|
||||
最新的 SCL 提供了:
|
||||
|
||||
- Python 3.3 和 2.7
|
||||
- PHP 5.4
|
||||
- Node.js 0.10
|
||||
- Ruby 1.9.3
|
||||
- Perl 5.16.3
|
||||
- MariaDB 和 MySQL 5.5
|
||||
- Apache httpd 2.4.6
|
||||
|
||||
在这篇教程的剩余部分,我会展示一下如何配置 SCL 源,以及如何安装和启用 SCL 中的包。
|
||||
|
||||
### 配置 SCL 源
|
||||
|
||||
SCL 可用于 CentOS 6.5 及更新的版本。要配置 SCL 源,只需执行:
|
||||
|
||||
$ sudo yum install centos-release-SCL
|
||||
|
||||
要启用和运行 SCL 中的应用,你还需要安装下列包:
|
||||
|
||||
$ sudo yum install scl-utils-build
|
||||
|
||||
执行下面的命令可以查看 SCL 中可用包的完整列表:
|
||||
|
||||
$ yum --disablerepo="*" --enablerepo="scl" list available
|
||||
|
||||
![](https://c2.staticflickr.com/6/5730/23304424250_f5c8a09584_c.jpg)
|
||||
|
||||
### 从 SCL 中安装和启用包
|
||||
|
||||
既然你已配置好了 SCL,你可以继续并从 SCL 中安装包了。
|
||||
|
||||
你可以搜索 SCL 中的包:
|
||||
|
||||
$ yum --disablerepo="*" --enablerepo="scl" search <keyword>
|
||||
|
||||
我们假设你要安装 Python 3.3。
|
||||
|
||||
继续,就像通常安装包那样使用 yum 安装:
|
||||
|
||||
$ sudo yum install python33
|
||||
|
||||
任何时候你都可以查看从 SCL 中安装的包的列表,只需执行:
|
||||
|
||||
$ scl --list
|
||||
|
||||
python33
|
||||
|
||||
SCL 的优点之一是安装其中的包不会覆盖任何系统文件,并且保证不会引起与系统中其它库和应用的冲突。
|
||||
|
||||
例如,如果在安装 python33 包后检查默认的 python 版本,你会发现默认的版本并没有改变:
|
||||
|
||||
$ python --version
|
||||
|
||||
Python 2.6.6
|
||||
|
||||
如果想使用一个已经安装的 SCL 包,你需要在每个命令中使用 `scl` 命令显式启用它(LCTT 译注:即想在哪条命令中使用 SCL 中的包,就得通过`scl`命令执行该命令)
|
||||
|
||||
$ scl enable <scl-package-name> <command>
|
||||
|
||||
例如,要针对`python`命令启用 python33 包:
|
||||
|
||||
$ scl enable python33 'python --version'
|
||||
|
||||
Python 3.3.2
|
||||
|
||||
如果想在启用 python33 包时执行多条命令,你可以像下面那样创建一个启用 SCL 的 bash 会话:
|
||||
|
||||
$ scl enable python33 bash
|
||||
|
||||
在这个 bash 会话中,默认的 python 会被切换为3.3版本,直到你输入`exit`,退出会话。
|
||||
|
||||
![](https://c2.staticflickr.com/6/5642/23491549632_1d08e163cc_c.jpg)
|
||||
|
||||
简而言之,SCL 有几分像 Python 的虚拟环境,但更通用,因为你可以为远比 Python 更多的应用启用/禁用 SCL 会话。
|
||||
|
||||
更详细的 SCL 指南,参考官方的[快速入门指南][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/enable-software-collections-centos.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[bianjp](https://github.com/bianjp)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/nanni
|
||||
[1]:https://www.softwarecollections.org/
|
||||
[2]:https://www.softwarecollections.org/docs/
|
@ -0,0 +1,76 @@
|
||||
Linux/Unix 桌面趣事:让桌面下雪
|
||||
================================================================================
|
||||
|
||||
在这个节日里感到孤独么?试一下 Xsnow 吧。它是一个可以在 Unix/Linux 桌面下下雪的应用。圣诞老人和他的驯鹿会在屏幕中奔跑,伴随着雪片让你感受到节日的感觉。
|
||||
|
||||
我第一次安装它还是在 13、4 年前。它最初是在 1984 年 Macintosh 系统中创造的。你可以用下面的方法来安装:
|
||||
|
||||
### 安装 xsnow ###
|
||||
|
||||
Debian/Ubuntu/Mint 用户用下面的命令:
|
||||
|
||||
$ sudo apt-get install xsnow
|
||||
|
||||
Freebsd 用户输入下面的命令:
|
||||
|
||||
# cd /usr/ports/x11/xsnow/
|
||||
# make install clean
|
||||
|
||||
或者尝试添加包:
|
||||
|
||||
# pkg_add -r xsnow
|
||||
|
||||
#### 其他发行版的方法 ####
|
||||
|
||||
1. Fedora/RHEL/CentOS 在 [rpmfusion][1] 仓库中找找。
|
||||
2. Gentoo 用户试下 Gentoo portage,也就是[emerge -p xsnow][2]
|
||||
3. Opensuse 用户使用 yast 搜索 xsnow
|
||||
|
||||
### 我该如何使用 xsnow? ###
|
||||
|
||||
打开终端(程序 > 附件 > 终端),输入下面的命令启动 xsnow:
|
||||
|
||||
$ xsnow
|
||||
|
||||
示例输出:
|
||||
|
||||
![Fig.01: Snow for your Linux and Unix desktop systems](http://files.cyberciti.biz/uploads/tips/2011/12/application-to-bring-snow-to-desktop_small.png)
|
||||
|
||||
*图01: 在 Linux 和 Unix 桌面中显示雪花*
|
||||
|
||||
你可以设置背景为蓝色,并让它下白雪,输入:
|
||||
|
||||
$ xsnow -bg blue -sc snow
|
||||
|
||||
设置最大的雪片数量,并让它尽可能快地掉下,输入:
|
||||
|
||||
$ xsnow -snowflakes 10000 -delay 0
|
||||
|
||||
如果不想显示圣诞树,也不想让圣诞老人满屏幕地跑,输入:
|
||||
|
||||
$ xsnow -notrees -nosanta
|
||||
|
||||
关于 xsnow 更多的信息和选项,在命令行下输入 man xsnow 查看手册:
|
||||
|
||||
$ man xsnow
|
||||
|
||||
建议阅读
|
||||
|
||||
- 官网[下载 Xsnow][3]
|
||||
- 注意 [MS-Windows][4] 和 [Mac OS X][5] 版本有一次性的共享软件费用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.cyberciti.biz/tips/linux-unix-xsnow.html
|
||||
|
||||
作者:Vivek Gite
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://rpmfusion.org/Configuration
|
||||
[2]:http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?part=2&chap=1
|
||||
[3]:http://dropmix.xs4all.nl/rick/Xsnow/
|
||||
[4]:http://dropmix.xs4all.nl/rick/WinSnow/
|
||||
[5]:http://dropmix.xs4all.nl/rick/MacOSXSnow/
|
@ -0,0 +1,41 @@
|
||||
Linux/Unix 桌面趣事:蒸汽火车
|
||||
================================================================================
|
||||
一个你[经常犯的错误][1]是把 ls 输入成了 sl。我已经设置了[一个别名][2],也就是 `alias sl=ls`。但是这样你也许就错过了这辆带汽笛的蒸汽小火车了。
|
||||
|
||||
sl 是一个搞笑软件,也是一个 Unix 游戏。它会在你错误地把“ls”输入成“sl”(Steam Locomotive)后,显示一辆蒸汽火车穿过你的屏幕。
|
||||
|
||||
### 安装 sl ###
|
||||
|
||||
在 Debian/Ubuntu 下输入下面的命令:
|
||||
|
||||
# apt-get install sl
|
||||
|
||||
它同样也在 Freebsd 和其他类Unix的操作系统上存在。
|
||||
|
||||
下面,让我们把 ls 输错成 sl:
|
||||
|
||||
$ sl
|
||||
|
||||
![Fig.01: Run steam locomotive across the screen if you type "sl" instead of "ls"](http://files.cyberciti.biz/uploads/tips/2011/05/sl_command_steam_locomotive.png)
|
||||
|
||||
*图01: 如果你把 “ls” 输入成 “sl” ,蒸汽火车会穿过你的屏幕。*
|
||||
|
||||
它同样支持下面的选项:
|
||||
|
||||
- **-a** : 似乎发生了意外。你会为那些哭喊求助的人们感到难过。
|
||||
- **-l** : 显示小一点的火车
|
||||
- **-F** : 它居然飞走了
|
||||
- **-e** : 允许被 Ctrl+C 中断
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.cyberciti.biz/tips/displays-animations-when-accidentally-you-type-sl-instead-of-ls.html
|
||||
|
||||
作者:Vivek Gite
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.cyberciti.biz/tips/my-10-unix-command-line-mistakes.html
|
||||
[2]:http://bash.cyberciti.biz/guide/Create_and_use_aliases
|
@ -0,0 +1,67 @@
|
||||
Linux/Unix 桌面趣事:终端 ASCII 水族箱
|
||||
================================================================================
|
||||
|
||||
你可以在你的终端中使用 ASCIIQuarium 安全地欣赏海洋的神秘了。它是一个用 perl 写的 ASCII 艺术水族箱/海洋动画。
|
||||
|
||||
### 安装 Term::Animation ###
|
||||
|
||||
首先你需要安装名为 Term-Animation 的perl模块。打开终端(选择程序 > 附件 > 终端),并输入:
|
||||
|
||||
$ sudo apt-get install libcurses-perl
|
||||
$ cd /tmp
|
||||
$ wget http://search.cpan.org/CPAN/authors/id/K/KB/KBAUCOM/Term-Animation-2.4.tar.gz
|
||||
$ tar -zxvf Term-Animation-2.4.tar.gz
|
||||
$ cd Term-Animation-2.4/
|
||||
$ perl Makefile.PL && make && make test
|
||||
$ sudo make install
|
||||
|
||||
### 下载安装 ASCIIQuarium ###
|
||||
|
||||
接着在终端中输入:
|
||||
|
||||
$ cd /tmp
|
||||
$ wget http://www.robobunny.com/projects/asciiquarium/asciiquarium.tar.gz
|
||||
$ tar -zxvf asciiquarium.tar.gz
|
||||
$ cd asciiquarium_1.0/
|
||||
$ sudo cp asciiquarium /usr/local/bin
|
||||
$ sudo chmod 0755 /usr/local/bin/asciiquarium
|
||||
|
||||
### 我怎么观赏 ASCII 水族箱? ###
|
||||
|
||||
输入下面的命令:
|
||||
|
||||
$ /usr/local/bin/asciiquarium
|
||||
|
||||
或者
|
||||
|
||||
$ perl /usr/local/bin/asciiquarium
|
||||
|
||||
![Fig.01: ASCII Aquarium](http://s0.cyberciti.org/uploads/tips/2011/01/screenshot-ASCIIQuarium.png)
|
||||
|
||||
*ASCII 水族箱*
|
||||
|
||||
### 相关媒体 ###
|
||||
|
||||
注:youtube 视频
|
||||
<iframe width="596" height="335" frameborder="0" allowfullscreen="" src="//www.youtube.com/embed/MzatWgu67ok"></iframe>
|
||||
|
||||
[视频01: ASCIIQuarium - Linux/Unix桌面上的海洋动画][1]
|
||||
|
||||
### 下载:ASCII Aquarium 的 KDE 和 Mac OS X 版本 ###
|
||||
|
||||
[点此下载 asciiquarium][2]。如果你运行的是 Mac OS X,试下这个可以直接使用的已经打包好的[版本][3]。对于 KDE 用户,试试基于 Asciiquarium 的[KDE 屏幕保护程序][4]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.cyberciti.biz/tips/linux-unix-apple-osx-terminal-ascii-aquarium.html
|
||||
|
||||
作者:Vivek Gite
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://youtu.be/MzatWgu67ok
|
||||
[2]:http://www.robobunny.com/projects/asciiquarium/html/
|
||||
[3]:http://habilis.net/macasciiquarium/
|
||||
[4]:http://kde-look.org/content/show.php?content=29207
|
@ -0,0 +1,89 @@
|
||||
Linux/Unix桌面趣事:显示器里的猫和老鼠
|
||||
================================================================================
|
||||
Oneko 是一个有趣的应用。它会把你的光标变成一只老鼠,并创建一只可爱的小猫,始终追逐着这个老鼠光标。单词“neko”在日语中的意思是“猫”。它最初是一位日本人开发的 Macintosh 桌面附件。
|
||||
|
||||
### 安装 oneko ###
|
||||
|
||||
试下下面的命令:
|
||||
|
||||
$ sudo apt-get install oneko
|
||||
|
||||
示例输出:
|
||||
|
||||
[sudo] password for vivek:
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
The following NEW packages will be installed:
|
||||
oneko
|
||||
0 upgraded, 1 newly installed, 0 to remove and 10 not upgraded.
|
||||
Need to get 38.6 kB of archives.
|
||||
After this operation, 168 kB of additional disk space will be used.
|
||||
Get:1 http://debian.osuosl.org/debian/ squeeze/main oneko amd64 1.2.sakura.6-7 [38.6 kB]
|
||||
Fetched 38.6 kB in 1s (25.9 kB/s)
|
||||
Selecting previously deselected package oneko.
|
||||
(Reading database ... 274152 files and directories currently installed.)
|
||||
Unpacking oneko (from .../oneko_1.2.sakura.6-7_amd64.deb) ...
|
||||
Processing triggers for menu ...
|
||||
Processing triggers for man-db ...
|
||||
Setting up oneko (1.2.sakura.6-7) ...
|
||||
Processing triggers for menu ...
|
||||
|
||||
FreeBSD 用户输入下面的命令安装 oneko:
|
||||
|
||||
# cd /usr/ports/games/oneko
|
||||
# make install clean
|
||||
|
||||
### 我该如何使用 oneko? ###
|
||||
|
||||
输入下面的命令:
|
||||
|
||||
$ oneko
|
||||
|
||||
你可以把猫变成 “tora-neko”,一只像白老虎条纹的猫:
|
||||
|
||||
$ oneko -tora
|
||||
|
||||
### 不喜欢猫? ###
|
||||
|
||||
你可以用狗代替猫:
|
||||
|
||||
$ oneko -dog
|
||||
|
||||
下面可以用小樱(Sakura)代替猫:
|
||||
|
||||
$ oneko -sakura
|
||||
|
||||
用大道寺代替猫:
|
||||
|
||||
$ oneko -tomoyo
|
||||
|
||||
### 查看相关媒体 ###
|
||||
|
||||
这个教程同样也有视频格式:
|
||||
|
||||
注:youtube 视频
|
||||
<iframe width="596" height="335" frameborder="0" allowfullscreen="" src="http://www.youtube.com/embed/Nm3SkXThL0s"></iframe>
|
||||
|
||||
(Video.01: 示例 - 在 Linux 下安装和使用 oneko)
|
||||
|
||||
### 其他选项 ###
|
||||
|
||||
你可以传入下面的选项
|
||||
|
||||
1. **-tofocus**:让猫在获得焦点的窗口顶部奔跑。当获得焦点的窗口不在视野中时,猫像平常那样追逐老鼠。
|
||||
2. **-position 坐标** :指定X和Y来调整猫相对老鼠的位置
|
||||
3. **-rv**:将前景色和背景色对调
|
||||
4. **-fg 颜色** : 前景色 (比如 oneko -dog -fg red)。
|
||||
5. **-bg 颜色** : 背景色 (比如 oneko -dog -bg green)。
|
||||
6. 查看 oneko 的手册获取更多信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.cyberciti.biz/open-source/oneko-app-creates-cute-cat-chasing-around-your-mouse/
|
||||
|
||||
作者:Vivek Gite
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,55 @@
|
||||
在 Linux 终端下看《星球大战》
|
||||
================================================================================
|
||||
![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal-2.png)
|
||||
|
||||
《星球大战(Star Wars)》已经席卷世界。最新一期的 [《星球大战》系列, 《星球大战7:原力觉醒》,打破了有史以来的记录][1]。
|
||||
|
||||
虽然我不能帮你得到一张最新的《星球大战》的电影票,但我可以提供给你一种方式,看[星球大战第四集][2],它是非常早期的《星球大战》电影(1977 年)。
|
||||
|
||||
|
||||
不,它不会是高清,也不是蓝光版。相反,它将是 ASCII 版的《星球大战》第四集,你可以在 Linux 终端看它,这才是真正的极客的方式 :)
|
||||
|
||||
### 在 Linux 终端看星球大战 ###
|
||||
|
||||
打开一个终端,使用以下命令:
|
||||
|
||||
telnet towel.blinkenlights.nl
|
||||
|
||||
等待几秒钟,你可以在终端看到类似于以下这样的动画ASCII艺术:
|
||||
|
||||
(LCTT 译注:有时候会解析到效果更好 IPv6 版本上,如果你没有 IPv6 地址,可以重新连接试试;另外似乎线路不稳定,出现卡顿时稍等。)
|
||||
|
||||
![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal.png)
|
||||
|
||||
它将继续播映……
|
||||
|
||||
![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal-1.png)
|
||||
|
||||
![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal-2.png)
|
||||
|
||||
![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal-3.png)
|
||||
|
||||
![](http://itsfoss.com/wp-content/uploads/2015/12/Star-Wars-Linux-Terminal-5.png)
|
||||
|
||||
要停止动画,按 ctrl +],在这之后输入 quit 来退出 telnet 程序。
|
||||
|
||||
### 更多有趣的终端 ###
|
||||
|
||||
事实上,看《星球大战》并不是你在 Linux 终端下唯一能做有趣的事情。您可以运行[终端里的列车][3]或[通过ASCII艺术得到Linux标志][4]。
|
||||
|
||||
希望你能享受在 Linux 下看《星球大战》。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/star-wars-linux/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[zky001](https://github.com/zky001)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.gamespot.com/articles/star-wars-7-breaks-thursday-night-movie-opening-re/1100-6433246/
|
||||
[2]:http://www.imdb.com/title/tt0076759/
|
||||
[3]:http://itsfoss.com/ubuntu-terminal-train/
|
||||
[4]:http://itsfoss.com/display-linux-logo-in-ascii/
|
@ -0,0 +1,164 @@
|
||||
如何在 CentOS 7 / Ubuntu 15.04 上安装 PHP 框架 Laravel
|
||||
================================================================================
|
||||
|
||||
大家好,这篇文章将要讲述如何在 CentOS 7 / Ubuntu 15.04 上安装 Laravel。如果你是一个 PHP Web 开发者,面对琳琅满目的现代 PHP 框架,其实并不需要纠结如何选择:Laravel 是其中最容易上手和运行起来的,它省时省力,能让你享受到 web 开发的乐趣。Laravel 信奉一套普适的开发哲学:把写出简洁、可维护的代码放在最高优先级。这样你既能保持高速的开发效率,又能随时毫不畏惧地更改代码来改进现有功能。
|
||||
|
||||
Laravel 安装并不繁琐,你只要跟着本文章一步步操作就能在 CentOS 7 或者 Ubuntu 15 服务器上安装。
|
||||
|
||||
### 1) 服务器要求 ###
|
||||
|
||||
在安装 Laravel 前需要安装一些它的依赖前提条件,主要是一些基本的参数调整,比如升级系统到最新版本,sudo 权限和安装依赖包。
|
||||
|
||||
当你连接到你的服务器后,请确保能通过以下命令成功地启用 EPEL 仓库,并且升级你的服务器。
|
||||
|
||||
#### CentOS-7 ####
|
||||
|
||||
# yum install epel-release
|
||||
|
||||
# rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
|
||||
# rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
|
||||
|
||||
# yum update
|
||||
|
||||
#### Ubuntu ####
|
||||
|
||||
# apt-get install python-software-properties
|
||||
# add-apt-repository ppa:ondrej/php5
|
||||
|
||||
# apt-get update
|
||||
|
||||
# apt-get install -y php5 mcrypt php5-mcrypt php5-gd
|
||||
|
||||
### 2) 防火墙安装 ###
|
||||
|
||||
系统防火墙和 SELinux 设置对于生产环境中应用的安全来说非常重要。当你使用的是测试服务器时,可以关闭防火墙,并用以下命令将 SELinux 设置成宽容模式(permissive),以保证安装程序不受它们的影响。
|
||||
|
||||
# setenforce 0
|
||||
|
||||
### 3) Apache, MariaDB, PHP 安装 ###
|
||||
|
||||
Laravel 安装程序需要完成安装 LAMP 整个环境,需要额外安装 OpenSSL、PDO,Mbstring 和 Tokenizer 等 PHP 扩展。如果 LAMP 已经运行在你的服务器上你可以跳过这一步,直接确认一些必要的 PHP 插件是否安装好。
|
||||
|
||||
要安装完整的 LAMP 环境,你需要在自己的服务器上运行以下命令。
|
||||
|
||||
#### CentOS ####
|
||||
|
||||
# yum install httpd mariadb-server php56w php56w-mysql php56w-mcrypt php56w-dom php56w-mbstring
|
||||
|
||||
要在 CentOS 7 上启动 Apache 和 MySQL / MariaDB 服务,并实现开机自动启动,你需要运行以下命令。
|
||||
|
||||
# systemctl start httpd
|
||||
# systemctl enable httpd
|
||||
|
||||
#systemctl start mysqld
|
||||
#systemctl enable mysqld
|
||||
|
||||
在启动 MariaDB 服务之后,你需要运行以下命令配置一个足够安全的密码。
|
||||
|
||||
#mysql_secure_installation
|
||||
|
||||
#### Ubuntu ####
|
||||
|
||||
# apt-get install mysql-server apache2 libapache2-mod-php5 php5-mysql
|
||||
|
||||
### 4) 安装 Composer ###
|
||||
|
||||
在我们安装 Laravel 前,先让我们开始安装 composer。安装 composer 是安装 Laravel 的最重要步骤之一,因为 composer 能帮我们安装 Laravel 的各种依赖。
|
||||
|
||||
#### CentOS/Ubuntu ####
|
||||
|
||||
在 CentOS / Ubuntu 下运行以下命令来配置 composer 。
|
||||
|
||||
# curl -sS https://getcomposer.org/installer | php
|
||||
# mv composer.phar /usr/local/bin/composer
|
||||
# chmod +x /usr/local/bin/composer
|
||||
|
||||
![composer installation](http://blog.linoxide.com/wp-content/uploads/2015/11/14.png)
|
||||
|
||||
### 5) 安装 Laravel ###
|
||||
|
||||
我们可以运行以下命令从 github 上下载 Laravel 的安装包。
|
||||
|
||||
# wget https://github.com/laravel/laravel/archive/develop.zip
|
||||
|
||||
运行以下命令解压安装包,并将其移动到网站文档根目录(document root)下。
|
||||
|
||||
# unzip develop.zip
|
||||
|
||||
# mv laravel-develop /var/www/
|
||||
|
||||
现在使用 compose 命令来安装目录下所有 Laravel 所需要的依赖。
|
||||
|
||||
# cd /var/www/laravel-develop/
|
||||
# composer install
|
||||
|
||||
![compose laravel](http://blog.linoxide.com/wp-content/uploads/2015/11/25.png)
|
||||
|
||||
### 6) 密钥 ###
|
||||
|
||||
为了加密服务器,我们使用以下命令来生成一个加密后的 32 位的密钥。
|
||||
|
||||
# php artisan key:generate
|
||||
|
||||
Application key [Lf54qK56s3qDh0ywgf9JdRxO2N0oV9qI] set successfully
|
||||
|
||||
现在把这个密钥放到 'app.php' 文件,如以下所示。
|
||||
|
||||
# vim /var/www/laravel-develop/config/app.php
|
||||
|
||||
![Key encryption](http://blog.linoxide.com/wp-content/uploads/2015/11/45.png)
|
||||
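图中编辑的内容大致对应 config/app.php 里的这一行(仅作示意,密钥请以 artisan 实际生成的值为准):

'key' => env('APP_KEY', 'Lf54qK56s3qDh0ywgf9JdRxO2N0oV9qI'),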
|
||||
### 7) 虚拟主机和所属用户 ###
|
||||
|
||||
在 composer 安装好后,分配 document 根目录的权限和所属用户,如下所示。
|
||||
|
||||
# chmod 775 /var/www/laravel-develop/app/storage
|
||||
|
||||
# chown -R apache:apache /var/www/laravel-develop
|
||||
|
||||
用任意一款编辑器打开 apache 服务器的默认配置文件,在文件最后加上虚拟主机配置。
|
||||
|
||||
# vim /etc/httpd/conf/httpd.conf
|
||||
|
||||
----------
|
||||
|
||||
ServerName laravel-develop
|
||||
DocumentRoot /var/www/laravel/public
|
||||
|
||||
<Directory /var/www/laravel>
|
||||
AllowOverride All
|
||||
</Directory>
|
||||
|
||||
现在我们用以下命令重启 apache 服务器,打开浏览器查看 localhost 页面。
|
||||
|
||||
#### CentOS ####
|
||||
|
||||
# systemctl restart httpd
|
||||
|
||||
#### Ubuntu ####
|
||||
|
||||
# service apache2 restart
|
||||
|
||||
### 8) Laravel 5 网络访问 ###
|
||||
|
||||
打开浏览器然后输入你配置的 IP 地址或者完整域名(Fully qualified domain name)你将会看到 Laravel 5 的默认页面。
|
||||
|
||||
![Laravel Default](http://blog.linoxide.com/wp-content/uploads/2015/11/35.png)
|
||||
|
||||
### 总结 ###
|
||||
|
||||
Laravel 框架对于开发网页应用来说是一个绝好的的工具。所以,看了这篇文章你将学会在 Ubuntu 15 和 CentOS 7 上安装 Laravel, 之后你就可以使用这个超棒的 PHP 框架提供的各种功能和舒适便捷性来进行你的开发工作。
|
||||
|
||||
如果您有什么意见或者建议请在以下评论区中回复,我们将根据您宝贵的反馈来使我们的文章更加浅显易懂。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/install-laravel-php-centos-7-ubuntu-15-04/
|
||||
|
||||
作者:[Kashif][a]
|
||||
译者:[NearTan](https://github.com/NearTan)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/kashifs/
|
92
published/20151222 Turn Tor socks to http.md
Normal file
92
published/20151222 Turn Tor socks to http.md
Normal file
@ -0,0 +1,92 @@
|
||||
将 Tor socks 转换成 http 代理
|
||||
================================================================================
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/12/tor-593x445.jpg)
|
||||
|
||||
你可以通过不同的 Tor 工具来使用 Tor 服务,如 Tor 浏览器、Foxyproxy 和其它东西。但像 wget 和 aria2 这样的下载管理器不能直接使用 Tor socks 进行匿名下载,因此我们需要一些工具来将 Tor socks 转换成 http 代理,这样就能用它来下载了。
|
||||
|
||||
**注意**:本教程基于 Debian ,其他发行版会有些不同,因此如果你的发行版是基于 Debian 的,就可以直接使用下面的配置了。
|
||||
|
||||
### Polipo
|
||||
|
||||
这个服务会使用 8123 端口和 127.0.0.1 的 IP 地址,使用下面的命令来在计算机上安装 Polipo:
|
||||
|
||||
sudo apt install polipo
|
||||
|
||||
现在使用如下命令打开 Polipo 的配置文件:
|
||||
|
||||
sudo nano /etc/polipo/config
|
||||
|
||||
在文件最后加入下面的行:
|
||||
|
||||
proxyAddress = "::0"
|
||||
allowedClients = 192.168.1.0/24
|
||||
socksParentProxy = "localhost:9050"
|
||||
socksProxyType = socks5
|
||||
|
||||
用如下的命令来重启 Polipo:
|
||||
|
||||
sudo service polipo restart
|
||||
|
||||
现在 Polipo 已经安装好了!在匿名的世界里做你想做的吧!下面是使用的例子:
|
||||
|
||||
pdmt -l "link" -i 127.0.0.1 -p 8123
|
||||
|
||||
通过上面的命令 PDMT(Persian 下载器终端)会匿名地下载你的文件。
|
||||
|
||||
### Proxychains
|
||||
|
||||
在此服务中你可以设置使用 Tor 或者 Lantern 代理,但是在使用上它和 Polipo 和 Privoxy 有点不同,它不需要使用任何端口!使用下面的命令来安装:
|
||||
|
||||
sudo apt install proxychains
|
||||
|
||||
用这条命令来打开配置文件:
|
||||
|
||||
sudo nano /etc/proxychains.conf
|
||||
|
||||
现在添加下面的代码到文件底部,这里是 Tor 的端口和 IP:
|
||||
|
||||
socks5 127.0.0.1 9050
|
||||
|
||||
如果你在命令的前面加上“proxychains”并运行,它就能通过 Tor 代理来运行:
|
||||
|
||||
proxychains firefox
|
||||
proxychains aria2c
|
||||
proxychains wget
|
||||
|
||||
### Privoxy
|
||||
|
||||
Privoxy 使用 8118 端口,可以很轻松地通过 privoxy 包来安装:
|
||||
|
||||
sudo apt install privoxy
|
||||
|
||||
我们现在要修改配置文件:
|
||||
|
||||
sudo nano /etc/privoxy/config
|
||||
|
||||
在文件底部加入下面的行:
|
||||
|
||||
forward-socks5 / 127.0.0.1:9050 .
|
||||
forward-socks4a / 127.0.0.1:9050 .
|
||||
forward-socks5t / 127.0.0.1:9050 .
|
||||
forward 192.168.*.*/ .
|
||||
forward 10.*.*.*/ .
|
||||
forward 127.*.*.*/ .
|
||||
forward localhost/ .
|
||||
|
||||
重启服务:
|
||||
|
||||
sudo service privoxy restart
|
||||
|
||||
服务已经好了!端口是 8118,IP 是 127.0.0.1,就尽情使用吧!
|
||||
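例如,wget 可以通过 http_proxy 环境变量来使用这个 http 代理(如果用的是前面的 Polipo,把端口换成 8123 即可;下载链接仅作演示):

http_proxy=http://127.0.0.1:8118 wget http://example.com/file.iso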
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/turn-tor-socks-http/
|
||||
|
||||
作者:[Hossein heydari][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/hossein/
|
@ -0,0 +1,80 @@
|
||||
Ubuntu 里的“间谍软件”将在 Ubuntu 16.04 LTS 中被禁用
|
||||
================================================================================
|
||||
|
||||
出于用户隐私的考虑,Ubuntu 阉割了一个有争议的功能。
|
||||
|
||||
![](http://www.omgubuntu.co.uk/wp-content/uploads/2013/09/as2.jpg)
|
||||
|
||||
**Unity 中有争议的在线搜索功能将在今年四月份发布的 Ubuntu 16.04 LTS 中被默认禁用**
|
||||
|
||||
用户在 Unity 7 的 Dash 搜索栏里将**只能搜索到本地文件、文件夹以及应用**。这样,用户输入的关键词将不会被发送到 Canonical 或任何第三方内容提供商的服务器里。
|
||||
|
||||
> “现在,Unity 的在线搜索在默认状况下是关闭的”
|
||||
|
||||
在目前 ubuntu 的支持版本中,Dash 栏会将用户搜索的关键词发送到 Canonical 运营的远程服务器中。它发送这些数据以用于从50多家在线服务获取搜索结果,这些服务包括维基百科、YouTube 和 The Weather Channel 等。
|
||||
|
||||
我们可以选择去**系统设置 > 隐私控制**关闭这项功能。但是,开源社区所反对的是这个功能默认就是开启的。
|
||||
|
||||
### Ubuntu 在线搜索引发的争议 ###
|
||||
|
||||
> “Richard Stallman 将这个功能描述为 ‘间谍软件’”
|
||||
|
||||
早在 2012 年,在 Ubuntu 搜索中整合了来自亚马逊的内容之后,开源社区就表示为其用户的隐私感到担忧。在 2013 年,“Smart Scopes 服务”全面推出后,开源社区再度表示担忧。
|
||||
|
||||
风波如此之大,以至于开源界大神 [Richard Stallman 都称 Ubuntu 为"间谍软件"][1]。
|
||||
|
||||
[电子前哨基金会 (EFF)][2]也在一系列博文中表达出对此的关注,并且建议 Canonical 将这个功能做成用户自由选择是否开启的功能。Privacy International 比其他的组织走的更远,对于 Ubuntu 的工作,他们给 Ubuntu 的缔造者发了一个“[老大哥奖][3]”。
|
||||
|
||||
[Canonical][4] 坚称,Unity 的在线搜索功能所收集的数据是匿名的,并且“无法识别来自哪个用户”。
|
||||
|
||||
在[2013年 Canoical 发布的博文中][5]他们解释道:“**(我们)会使用户了解我们收集哪些信息以及哪些第三方服务商将会在他们搜索时从 Dash 栏中给出结果。我们只会收集能够提升用户体验的信息。**”
|
||||
|
||||
### Ubuntu 开始严肃对待的用户数据隐私###
|
||||
|
||||
Canonical 在新安装的 Ubuntu 14.04 LTS 及以上版本中禁用了来自亚马逊的产品搜索结果(尽管来自其他服务商的搜索结果仍然会出现,直到你关闭这个选项)。
|
||||
|
||||
在下一个LTS(长期支持)版,也就是 Ubuntu 16.04 中,Canonical 完全关闭了这个有争议的在线搜索功能,这个功能在用户安装完后就是关闭的。就如同 EFF 在2012年建议他们做的那样。
|
||||
|
||||
“你搜索的关键词将不会离开你的计算机。”[Ubuntu 桌面主管 Will Cooke][6] 解释道,Unity 8 所提供的“对搜索内容更精细的控制”和“更有针对性的结果”无法添加到 Unity 7 里。
|
||||
|
||||
这也就是“[Unity 7]的在线搜索功能将会退役”的原因。
|
||||
|
||||
这个变化也会减轻支持 Unity 7 的负担以及 Canonical 基础设施的压力。Unity 需要提供的搜索结果越少,Canonical 就能把时间和工程师投入到更加振奋人心的地方,比如更早地发布 Unity 8 桌面环境。
|
||||
|
||||
### 在 Ubuntu 16.04 中你需要自己开启在线搜索功能 ###
|
||||
|
||||
![Privacy settings in Ubuntu let you opt in to seeing online results](http://www.omgubuntu.co.uk/wp-content/uploads/2013/04/privacy.jpg)
|
||||
|
||||
*在 Ubuntu 隐私设置中你可以打开在线搜索功能*
|
||||
|
||||
禁用 Ubuntu 桌面的在线搜索功能的决定将获得众多开源/免费软件社区的欢呼。但是并不是每一个人都对 Dash 提供的语义搜索功能反感,如果你认为你失去了在搜索时预览天气、查看新闻或其他来自 Dash 在线搜索提供的内容所带来的效率的话,你只需要简单的点几下鼠标就可以**再次打开这个功能**,定位到 Ubuntu 的**系统设置 > 隐私控制 > 搜索**然后将选项调至“**开启**”。
|
||||
|
||||
这个选项不会自动把亚马逊的产品信息加入到搜索结果中。如果你想看产品信息的话,需要打开第二个可选项“shopping lens”,才能看到来自 Amazon(和 Skimlinks)的内容。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
|
||||
- 默认情况下,Ubuntu 16.04 LTS 的 Dash 栏将不会搜索到在线结果
|
||||
- 可以手动打开在线搜索
|
||||
- **系统设置 > 隐私控制 > 搜索**中的第二个可选项允许你看到亚马逊的产品信息
|
||||
- 这个变动只会影响新安装的系统。从老版本升级的将会保留用户的喜好
|
||||
|
||||
你同意这个决定吗?抑或是 Canonical 可能降低了新用户的体验?在评论中告诉我们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2016/01/ubuntu-online-search-feature-disabled-16-04
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[name1e5s](https://github.com/name1e5s)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:http://arstechnica.com/information-technology/2012/12/richard-stallman-calls-ubuntu-spyware-because-it-tracks-searches/?utm_source=omgubuntu
|
||||
[2]:https://www.eff.org/deeplinks/2012/10/privacy-ubuntu-1210-amazon-ads-and-data-leaks?utm_source=omgubuntu
|
||||
[3]:http://www.omgubuntu.co.uk/2013/10/ubuntu-wins-big-brother-austria-privacy-award
|
||||
[4]:http://blog.canonical.com/2012/12/07/searching-in-the-dash-in-ubuntu-13-04/
|
||||
[5]:http://blog.canonical.com/2012/12/07/searching-in-the-dash-in-ubuntu-13-04/?utm_source=omgubuntu
|
||||
[6]:http://www.whizzy.org/2015/12/online-searches-in-the-dash-to-be-off-by-default?utm_source=omgubuntu
|
@ -1,12 +1,13 @@
|
||||
第 10 部分:在 RHEL/CentOS 7 中设置 “NTP(网络时间协议) 服务器”
|
||||
RHCE 系列(十):在 RHEL/CentOS 7 中设置 NTP(网络时间协议)服务器
|
||||
================================================================================
|
||||
网络时间协议 - NTP - 是运行在传输层 123 号端口允许计算机通过网络同步准确时间的协议。随着时间的流逝,计算机内部时间会出现漂移,这会导致时间不一致问题,尤其是对于服务器和客户端日志文件,或者你想要备份服务器资源或数据库。
|
||||
|
||||
网络时间协议 - NTP - 是运行在传输层 123 号端口的 UDP 协议,它允许计算机通过网络同步准确时间。随着时间的流逝,计算机内部时间会出现漂移,这会导致时间不一致问题,尤其是对于服务器和客户端日志文件,或者你想要复制服务器的资源或数据库。
|
||||
|
||||
![在 CentOS 上安装 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Server-Install-in-CentOS.png)
|
||||
|
||||
在 CentOS 和 RHEL 7 上安装 NTP 服务器
|
||||
*在 CentOS 和 RHEL 7 上安装 NTP 服务器*
|
||||
|
||||
#### 要求: ####
|
||||
#### 前置要求: ####
|
||||
|
||||
- [CentOS 7 安装过程][1]
|
||||
- [RHEL 安装过程][2]
|
||||
@ -17,62 +18,62 @@
|
||||
- [在 CentOS/RHCE 7 上配置静态 IP][4]
|
||||
- [在 CentOS/RHEL 7 上停用并移除不需要的服务][5]
|
||||
|
||||
这篇指南会告诉你如何在 CentOS/RHCE 7 上安装和配置 NTP 服务器,并使用 NTP 公共时间服务器池列表中和你服务器地理位置最近的可用节点中同步时间。
|
||||
这篇指南会告诉你如何在 CentOS/RHCE 7 上安装和配置 NTP 服务器,并使用 NTP 公共时间服务器池(NTP Public Pool Time Servers)列表中和你服务器地理位置最近的可用节点中同步时间。
|
||||
|
||||
#### 步骤一:安装和配置 NTP 守护进程 ####
|
||||
|
||||
1. 官方 CentOS /RHEL 7 库默认提供 NTP 服务器安装包,可以通过使用下面的命令安装。
|
||||
1、 官方 CentOS /RHEL 7 库默认提供 NTP 服务器安装包,可以通过使用下面的命令安装。
|
||||
|
||||
# yum install ntp
|
||||
|
||||
![在 CentOS 上安装 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/Install-NTP-in-CentOS.png)
|
||||
|
||||
安装 NTP 服务器
|
||||
*安装 NTP 服务器*
|
||||
|
||||
2. 安装完服务器之后,首先到官方 [NTP 公共时间服务器池][6],选择你服务器物理位置所在的洲,然后搜索你的国家位置,然后会出现 NTP 服务器列表。
|
||||
2、 安装完服务器之后,首先到官方 [NTP 公共时间服务器池(NTP Public Pool Time Servers)][6],选择你服务器物理位置所在的洲,然后搜索你的国家位置,然后会出现 NTP 服务器列表。
|
||||
|
||||
![NTP 服务器池](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Pool-Server.png)
|
||||
|
||||
NTP 服务器池
|
||||
*NTP 服务器池*
|
||||
|
||||
3. 然后打开编辑 NTP 守护进程主要配置文件,从 pool.ntp.org 中注释掉默认的公共服务器列表并用类似下面截图提供给你国家的列表替换。
|
||||
3、 然后打开编辑 NTP 守护进程的主配置文件,注释掉来自 pool.ntp.org 项目的公共服务器默认列表,并用类似下面截图中提供给你所在国家的列表替换。(LCTT 译注:中国使用 0.cn.pool.ntp.org 等)
|
||||
|
||||
![在 CentOS 中配置 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/Configure-NTP-Server.png)
|
||||
|
||||
配置 NTP 服务器
|
||||
*配置 NTP 服务器*
|
||||
|
||||
4. 下一步,你需要允许客户端从你的网络中和这台服务器同步时间。为了做到这点,添加下面一行到 NTP 配置文件,其中限制语句控制允许哪些网络查询和同步时间 - 根据需要替换网络 IP。
|
||||
4、 下一步,你需要允许来自你的网络的客户端和这台服务器同步时间。为了做到这点,添加下面一行到 NTP 配置文件,其中 **restrict** 语句控制允许哪些网络查询和同步时间 - 请根据需要替换网络 IP。
|
||||
|
||||
restrict 192.168.1.0 netmask 255.255.255.0 nomodify notrap
|
||||
|
||||
nomodify notrap 语句意味着不允许你的客户端配置服务器或者作为同步时间的节点。
|
||||
|
||||
5. 如果你需要额外的信息用于错误处理,以防你的 NTP 守护进程出现问题,添加一个 logfile 语句,用于记录所有 NTP 服务器问题到一个指定的日志文件。
|
||||
5、 如果你需要用于错误处理的额外信息,以防你的 NTP 守护进程出现问题,添加一个 logfile 语句,用于记录所有 NTP 服务器问题到一个指定的日志文件。
|
||||
|
||||
logfile /var/log/ntp.log
|
||||
|
||||
![在 CentOS 中启用 NTP 日志](http://www.tecmint.com/wp-content/uploads/2014/09/Enable-NTP-Log.png)
|
||||
|
||||
启用 NTP 日志
|
||||
*启用 NTP 日志*
|
||||
|
||||
6. 你编辑完所有上面解释的配置并保存关闭 ntp.conf 文件后,你最终的配置看起来像下面的截图。
|
||||
6、 在你编辑完所有上面解释的配置并保存关闭 ntp.conf 文件后,你最终的配置看起来像下面的截图。
|
||||
|
||||
![CentOS 中 NTP 服务器的配置](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Server-Configuration.png)
|
||||
|
||||
NTP 服务器配置
|
||||
*NTP 服务器配置*
|
||||
|
||||
### 步骤二:添加防火墙规则并启动 NTP 守护进程 ###
|
||||
|
||||
7. NTP 服务在传输层(第四层)使用 123 号 UDP 端口。它是针对限制可变延迟的影响特别设计的。要在 RHEL/CentOS 7 中开放这个端口,可以对 Firewalld 服务使用下面的命令。
|
||||
7、 NTP 服务使用 OSI 传输层(第四层)的 123 号 UDP 端口。它是为了避免可变延迟的影响所特别设计的。要在 RHEL/CentOS 7 中开放这个端口,可以对 Firewalld 服务使用下面的命令。
|
||||
|
||||
# firewall-cmd --add-service=ntp --permanent
|
||||
# firewall-cmd --reload
|
||||
|
||||
![在 Firewall 中开放 NTP 端口](http://www.tecmint.com/wp-content/uploads/2014/09/Open-NTP-Port.png)
|
||||
|
||||
在 Firewall 中开放 NTP 端口
|
||||
*在 Firewall 中开放 NTP 端口*
|
||||
|
||||
8. 你在防火墙中开放了 123 号端口之后,启动 NTP 服务器并确保系统范围内可用。用下面的命令管理服务。
|
||||
8、 你在防火墙中开放了 123 号端口之后,启动 NTP 服务器并确保系统范围内可用。用下面的命令管理服务。
|
||||
|
||||
# systemctl start ntpd
|
||||
# systemctl enable ntpd
|
||||
@ -80,34 +81,34 @@ NTP 服务器配置
|
||||
|
||||
![启动 NTP 服务](http://www.tecmint.com/wp-content/uploads/2014/09/Start-NTP-Service.png)
|
||||
|
||||
启动 NTP 服务
|
||||
*启动 NTP 服务*
|
||||
|
||||
### 步骤三:验证服务器时间同步 ###
|
||||
|
||||
9. 启动了 NTP 守护进程后,用几分钟等服务器和它的服务器池列表同步时间,然后运行下面的命令验证 NTP 节点同步状态和你的系统时间。
|
||||
9、 启动了 NTP 守护进程后,用几分钟等服务器和它的服务器池列表同步时间,然后运行下面的命令验证 NTP 节点同步状态和你的系统时间。
|
||||
|
||||
# ntpq -p
|
||||
# date -R
|
||||
|
||||
![验证 NTP 服务器时间](http://www.tecmint.com/wp-content/uploads/2014/09/Verify-NTP-Time-Sync.png)
|
||||
|
||||
验证 NTP 时间同步
|
||||
*验证 NTP 时间同步*
|
||||
|
||||
10. 如果你想查询或者和你选择的服务器池同步,你可以使用 ntpdate 命令,后面跟服务器名或服务器地址,类似下面建议的命令行事例。
|
||||
10、 如果你想查询或者和你选择的服务器池同步,你可以使用 ntpdate 命令,后面跟服务器名或服务器地址,类似下面建议的命令行示例。
|
||||
|
||||
# ntpdate -q 0.ro.pool.ntp.org 1.ro.pool.ntp.org
|
||||
|
||||
![同步 NTP 同步](http://www.tecmint.com/wp-content/uploads/2014/09/Synchronize-NTP-Time.png)
|
||||
|
||||
同步 NTP 时间
|
||||
*同步 NTP 时间*
|
||||
|
||||
### 步骤四:设置 Windows NTP 客户端 ###
|
||||
|
||||
11. 如果你的 windows 机器不是域名控制器的一部分,你可以配置 Windows 和你的 NTP服务器同步时间。在任务栏右边 -> 时间 -> 更改日期和时间设置 -> 网络时间标签 -> 更改设置 -> 和一个网络时间服务器检查同步 -> 在 Server 空格输入服务器 IP 或 FQDN -> 马上更新 -> OK。
|
||||
11、 如果你的 Windows 机器不是域控制器(domain controller)的一部分,你可以配置 Windows 和你的 NTP 服务器同步时间。在任务栏右边 -> 时间 -> 更改日期和时间设置 -> 网络时间标签 -> 更改设置 -> 和一个网络时间服务器检查同步 -> 在 Server 空格输入服务器 IP 或 FQDN -> 马上更新 -> OK。
|
||||
|
||||
![和 NTP 同步 Windows 时间](http://www.tecmint.com/wp-content/uploads/2014/09/Synchronize-Windows-Time-with-NTP.png)
|
||||
|
||||
和 NTP 同步 Windows 时间
|
||||
*和 NTP 同步 Windows 时间*
|
||||
|
||||
就是这些。在你的网络中配置一个本地 NTP 服务器能确保你所有的服务器和客户端有相同的时间设置,以防出现网络连接失败,并且它们彼此都相互同步。
|
||||
|
||||
@ -117,7 +118,7 @@ via: http://www.tecmint.com/install-ntp-server-in-centos/
|
||||
|
||||
作者:[Matei Cezar][a]
|
||||
译者:[ictlyh](http://motouxiaogui.cn/blog)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,11 +1,13 @@
|
||||
RHCE 系列: 使用网络安全服务(NSS)为 Apache 通过 TLS 实现 HTTPS
|
||||
RHCE 系列(八):在 Apache 上使用网络安全服务(NSS)实现 HTTPS
|
||||
================================================================================
|
||||
如果你是一个负责维护和确保 web 服务器安全的系统管理员,你不能不花费最大的精力确保服务器中处理和通过的数据任何时候都受到保护。
|
||||
|
||||
如果你是一个负责维护和确保 web 服务器安全的系统管理员,你需要花费最大的精力确保服务器中处理和通过的数据任何时候都受到保护。
|
||||
|
||||
![使用 SSL/TLS 设置 Apache HTTPS](http://www.tecmint.com/wp-content/uploads/2015/09/Setup-Apache-SSL-TLS-Server.png)
|
||||
|
||||
RHCE 系列:第八部分 - 使用网络安全服务(NSS)为 Apache 通过 TLS 实现 HTTPS
|
||||
*RHCE 系列:第八部分 - 使用网络安全服务(NSS)为 Apache 通过 TLS 实现 HTTPS*
|
||||
|
||||
为了在客户端和服务器之间提供更安全的连接,作为 HTTP 和 SSL(安全套接层)或者最近称为 TLS(传输层安全)的组合,产生了 HTTPS 协议。
|
||||
为了在客户端和服务器之间提供更安全的连接,作为 HTTP 和 SSL(Secure Sockets Layer,安全套接层)或者最近称为 TLS(Transport Layer Security,传输层安全)的组合,产生了 HTTPS 协议。
|
||||
|
||||
由于一些严重的安全漏洞,SSL 已经被更健壮的 TLS 替代。由于这个原因,在这篇文章中我们会解析如何通过 TLS 实现你 web 服务器和客户端之间的安全连接。
|
||||
|
||||
@ -22,11 +24,11 @@ RHCE 系列:第八部分 - 使用网络安全服务(NSS)为 Apache 通过
|
||||
# firewall-cmd --permanent --add-service=http
|
||||
# firewall-cmd --permanent --add-service=https
|
||||
|
||||
然后安装一些必须软件包:
|
||||
然后安装一些必需的软件包:
|
||||
|
||||
# yum update && yum install openssl mod_nss crypto-utils
|
||||
|
||||
**重要**:请注意如果你想使用 OpenSSL 库而不是 NSS(网络安全服务)实现 TLS,你可以在上面的命令中用 mod\_ssl 替换 mod\_nss(使用哪一个取决于你,但在这篇文章中由于更加健壮我们会使用 NSS;例如,它支持最新的加密标准,比如 PKCS #11)。
|
||||
**重要**:请注意如果你想使用 OpenSSL 库而不是 NSS(Network Security Service,网络安全服务)实现 TLS,你可以在上面的命令中用 mod\_ssl 替换 mod\_nss(使用哪一个取决于你,但在这篇文章中我们会使用 NSS,因为它更加安全,比如说,它支持最新的加密标准,比如 PKCS #11)。
|
||||
|
||||
如果你使用 mod\_nss,首先要卸载 mod\_ssl,反之如此。
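例如,如果系统中已经装有 mod\_ssl,而你决定按本文使用 mod\_nss,可以先这样移除它(示意命令):

# yum remove mod_ssl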
|
||||
|
||||
@ -54,15 +56,15 @@ nss.conf – 配置文件
|
||||
|
||||
下一步,在 `/etc/httpd/conf.d/nss.conf` 配置文件中做以下更改:
|
||||
|
||||
1. 指定 NSS 数据库目录。你可以使用默认的目录或者新建一个。本文中我们使用默认的:
|
||||
1、 指定 NSS 数据库目录。你可以使用默认的目录或者新建一个。本文中我们使用默认的:
|
||||
|
||||
NSSCertificateDatabase /etc/httpd/alias
|
||||
|
||||
2. 通过保存密码到数据库目录中的 /etc/httpd/nss-db-password.conf 文件避免每次系统启动时要手动输入密码:
|
||||
2、 通过保存密码到数据库目录中的 `/etc/httpd/nss-db-password.conf` 文件来避免每次系统启动时要手动输入密码:
|
||||
|
||||
NSSPassPhraseDialog file:/etc/httpd/nss-db-password.conf
|
||||
|
||||
其中 /etc/httpd/nss-db-password.conf 只包含以下一行,其中 mypassword 是后面你为 NSS 数据库设置的密码:
|
||||
其中 `/etc/httpd/nss-db-password.conf` 只包含以下一行,其中 mypassword 是后面你为 NSS 数据库设置的密码:
|
||||
|
||||
internal:mypassword
|
||||
|
||||
@ -71,27 +73,27 @@ nss.conf – 配置文件
|
||||
# chmod 640 /etc/httpd/nss-db-password.conf
|
||||
# chgrp apache /etc/httpd/nss-db-password.conf
|
||||
|
||||
3. 由于 POODLE SSLv3 漏洞,红帽建议停用 SSL 和 TLSv1.0 之前所有版本的 TLS(更多信息可以查看[这里][2])。
|
||||
3、 由于 POODLE SSLv3 漏洞,红帽建议停用 SSL 和 TLSv1.0 之前所有版本的 TLS(更多信息可以查看[这里][2])。
|
||||
|
||||
确保 NSSProtocol 指令的每个实例都类似下面一样(如果你没有托管其它虚拟主机,很可能只有一条):
|
||||
|
||||
NSSProtocol TLSv1.0,TLSv1.1
|
||||
|
||||
4. 由于这是一个自签名证书,Apache 会拒绝重启,并不会识别为有效发行人。由于这个原因,对于这种特殊情况我们还需要添加:
|
||||
4、 由于这是一个自签名证书,Apache 会拒绝重启,并不会识别为有效发行人。由于这个原因,对于这种特殊情况我们还需要添加:
|
||||
|
||||
NSSEnforceValidCerts off
|
||||
|
||||
5. 虽然并不是严格要求,为 NSS 数据库设置一个密码同样很重要:
|
||||
5、 虽然并不是严格要求,为 NSS 数据库设置一个密码同样很重要:
|
||||
|
||||
# certutil -W -d /etc/httpd/alias
|
||||
|
||||
![为 NSS 数据库设置密码](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Password-for-NSS-Database.png)
|
||||
|
||||
为 NSS 数据库设置密码
|
||||
*为 NSS 数据库设置密码*
|
||||
|
||||
### 创建一个 Apache SSL 自签名证书 ###
|
||||
|
||||
下一步,我们会创建一个自签名证书为我们的客户机识别服务器(请注意这个方法对于生产环境并不是最好的选择;对于生产环境你应该考虑购买第三方可信证书机构验证的证书,例如 DigiCert)。
|
||||
下一步,我们会创建一个自签名证书来让我们的客户机可以识别服务器(请注意这个方法对于生产环境并不是最好的选择;对于生产环境你应该考虑购买第三方可信证书机构验证的证书,例如 DigiCert)。
|
||||
|
||||
我们用 genkey 命令为 box1 创建有效期为 365 天的 NSS 兼容证书。完成这一步后:
|
||||
|
||||
@ -101,19 +103,19 @@ nss.conf – 配置文件
|
||||
|
||||
![创建 Apache SSL 密钥](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Apache-SSL-Key.png)
|
||||
|
||||
创建 Apache SSL 密钥
|
||||
*创建 Apache SSL 密钥*
|
||||
|
||||
你可以使用默认的密钥大小(2048),然后再次选择 Next:
|
||||
|
||||
![选择 Apache SSL 密钥大小](http://www.tecmint.com/wp-content/uploads/2015/09/Select-Apache-SSL-Key-Size.png)
|
||||
|
||||
选择 Apache SSL 密钥大小
|
||||
*选择 Apache SSL 密钥大小*
|
||||
|
||||
等待系统生成随机比特:
|
||||
|
||||
![生成随机密钥比特](http://www.tecmint.com/wp-content/uploads/2015/09/Generating-Random-Bits.png)
|
||||
|
||||
生成随机密钥比特
|
||||
*生成随机密钥比特*
|
||||
|
||||
为了加快速度,会提示你在控制台输入随机字符,正如下面的截图所示。请注意当没有从键盘接收到输入时进度条是如何停止的。然后,会让你选择:
|
||||
|
||||
@ -124,35 +126,35 @@ nss.conf – 配置文件
|
||||
注:youtube 视频
|
||||
<iframe width="720" height="405" frameborder="0" src="//www.youtube.com/embed/mgsfeNfuurA" allowfullscreen="allowfullscreen"></iframe>
|
||||
|
||||
最后,会提示你输入之前设置的密码到 NSS 证书:
|
||||
最后,会提示你输入之前给 NSS 证书设置的密码:
|
||||
|
||||
# genkey --nss --days 365 box1
|
||||
|
||||
![Apache NSS 证书密码](http://www.tecmint.com/wp-content/uploads/2015/09/Apache-NSS-Password.png)
|
||||
|
||||
Apache NSS 证书密码
|
||||
*Apache NSS 证书密码*
|
||||
|
||||
在任何时候你都可以用以下命令列出现有的证书:
|
||||
需要的话,你可以用以下命令列出现有的证书:
|
||||
|
||||
# certutil –L –d /etc/httpd/alias
|
||||
|
||||
![列出 Apache NSS 证书](http://www.tecmint.com/wp-content/uploads/2015/09/List-Apache-Certificates.png)
|
||||
|
||||
列出 Apache NSS 证书
|
||||
*列出 Apache NSS 证书*
|
||||
|
||||
然后通过名字删除(除非严格要求,用你自己的证书名称替换 box1):
|
||||
然后通过名字删除(如果你真的需要删除的,用你自己的证书名称替换 box1):
|
||||
|
||||
# certutil -d /etc/httpd/alias -D -n "box1"
|
||||
|
||||
如果你需要继续的话:
|
||||
如果你需要继续进行的话,请继续阅读。
|
||||
|
||||
### 测试 Apache SSL HTTPS 连接 ###
|
||||
|
||||
最后,是时候测试到我们服务器的安全连接了。当你用浏览器打开 https://<web 服务器 IP 或主机名\>,你会看到著名的信息 “This connection is untrusted”:
|
||||
最后,是时候测试到我们服务器的安全连接了。当你用浏览器打开 https://\<web 服务器 IP 或主机名\>,你会看到著名的信息 “This connection is untrusted”:
|
||||
|
||||
![检查 Apache SSL 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Connection.png)
|
||||
|
||||
检查 Apache SSL 连接
|
||||
*检查 Apache SSL 连接*
|
||||
|
||||
在上面的情况中,你可以点击添加例外(Add Exception) 然后确认安全例外(Confirm Security Exception) - 但先不要这么做。让我们首先来看看证书看它的信息是否和我们之前输入的相符(如截图所示)。
|
||||
|
||||
@ -160,37 +162,37 @@ Apache NSS 证书密码
|
||||
|
||||
![确认 Apache SSL 证书详情](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Certificate-Details.png)
|
||||
|
||||
确认 Apache SSL 证书详情
|
||||
*确认 Apache SSL 证书详情*
|
||||
|
||||
现在你继续,确认例外(限于此次或永久),然后会通过 https 把你带到你 web 服务器的 DocumentRoot 目录,在这里你可以使用你浏览器自带的开发者工具检查连接详情:
|
||||
现在你可以继续,确认例外(限于此次或永久),然后会通过 https 把你带到你 web 服务器的 DocumentRoot 目录,在这里你可以使用你浏览器自带的开发者工具检查连接详情:
|
||||
|
||||
在火狐浏览器中,你可以通过在屏幕中右击然后从上下文菜单中选择检查元素(Inspect Element)启动,尤其是通过网络选项卡:
|
||||
在火狐浏览器中,你可以通过在屏幕中右击,然后从上下文菜单中选择检查元素(Inspect Element)启动开发者工具,尤其要看“网络”选项卡:
|
||||
|
||||
![检查 Apache HTTPS 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Inspect-Apache-HTTPS-Connection.png)
|
||||
|
||||
检查 Apache HTTPS 连接
|
||||
*检查 Apache HTTPS 连接*
|
||||
|
||||
请注意这和之前显示的、在验证过程中输入的信息一致。还有一种方式是使用命令行工具测试连接:
|
||||
|
||||
左边(测试 SSLv3):
|
||||
左图(测试 SSLv3):
|
||||
|
||||
# openssl s_client -connect localhost:443 -ssl3
|
||||
|
||||
右边(测试 TLS):
|
||||
右图(测试 TLS):
|
||||
|
||||
# openssl s_client -connect localhost:443 -tls1
|
||||
|
||||
![测试 Apache SSL 和 TLS 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Testing-Apache-SSL-and-TLS.png)
|
||||
|
||||
测试 Apache SSL 和 TLS 连接
|
||||
*测试 Apache SSL 和 TLS 连接*
|
||||
|
||||
参考上面的截图了解更相信信息。
|
||||
参考上面的截图了解更详细信息。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
我确信你已经知道,使用 HTTPS 会增加会在你站点中输入个人信息的访客的信任(从用户名和密码到任何商业/银行账户信息)。
|
||||
我想你已经知道,使用 HTTPS 会增加会在你站点中输入个人信息的访客的信任(从用户名和密码到任何商业/银行账户信息)。
|
||||
|
||||
在那种情况下,你会希望获得由可信验证机构签名的证书,正如我们之前解释的(启用的步骤和发送 CSR 到 CA 然后获得签名证书的例子相同);另外的情况,就是像我们的例子中一样使用自签名证书。
|
||||
在那种情况下,你会希望获得由可信验证机构签名的证书,正如我们之前解释的(步骤和设置需要启用例外的证书的步骤相同,发送 CSR 到 CA 然后获得返回的签名证书);否则,就像我们的例子中一样使用自签名证书即可。
|
||||
|
||||
要获取更多关于使用 NSS 的详情,可以参考关于 [mod-nss][3] 的在线帮助。如果你有任何疑问或评论,请告诉我们。
|
||||
|
||||
@ -200,11 +202,11 @@ via: http://www.tecmint.com/create-apache-https-self-signed-certificate-using-ns
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[ictlyh](http://www.mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/install-lamp-in-centos-7/
|
||||
[1]:http://www.tecmint.com/author/gacanepa/
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:https://linux.cn/article-5789-1.html
|
||||
[2]:https://access.redhat.com/articles/1232123
|
||||
[3]:https://git.fedorahosted.org/cgit/mod_nss.git/plain/docs/mod_nss.html
|
@ -1,25 +1,25 @@
|
||||
第九部分 - 如果使用零客户端配置 Postfix 邮件服务器(SMTP)
|
||||
RHCE 系列(九):如何使用无客户端配置 Postfix 邮件服务器(SMTP)
|
||||
================================================================================
|
||||
尽管现在有很多在线联系方式,邮件仍然是一个人传递信息给远在世界尽头或办公室里坐在我们旁边的另一个人的有效方式。
|
||||
尽管现在有很多在线联系方式,电子邮件仍然是一个人传递信息给远在世界尽头或办公室里坐在我们旁边的另一个人的有效方式。
|
||||
|
||||
下面的图描述了邮件从发送者发出直到信息到达接收者收件箱的传递过程。
|
||||
下面的图描述了电子邮件从发送者发出直到信息到达接收者收件箱的传递过程。
|
||||
|
||||
![邮件如何工作](http://www.tecmint.com/wp-content/uploads/2015/09/How-Mail-Setup-Works.png)
|
||||
![电子邮件如何工作](http://www.tecmint.com/wp-content/uploads/2015/09/How-Mail-Setup-Works.png)
|
||||
|
||||
邮件如何工作
|
||||
*电子邮件如何工作*
|
||||
|
||||
要使这成为可能,背后发生了好多事情。为了使邮件信息从一个客户端应用程序(例如 [Thunderbird][1]、Outlook,或者网络邮件服务,例如 Gmail 或 Yahoo 邮件)到一个邮件服务器,并从其到目标服务器并最终到目标接收人,每个服务器上都必须有 SMTP(简单邮件传输协议)服务。
|
||||
要实现这一切,背后发生了好多事情。为了使电子邮件信息从一个客户端应用程序(例如 [Thunderbird][1]、Outlook,或者 web 邮件服务,例如 Gmail 或 Yahoo 邮件)投递到一个邮件服务器,并从其投递到目标服务器并最终到目标接收人,每个服务器上都必须有 SMTP(简单邮件传输协议)服务。
|
||||
|
||||
这就是为什么我们要在这篇博文中介绍如何在 RHEL 7 中设置 SMTP 服务器,从中本地用户发送的邮件(甚至发送到本地用户)被转发到一个中央邮件服务器以便于访问。
|
||||
这就是为什么我们要在这篇博文中介绍如何在 RHEL 7 中设置 SMTP 服务器,从本地用户发送的邮件(甚至发送到另外一个本地用户)被转发(forward)到一个中央邮件服务器以便于访问。
|
||||
|
||||
在实际需求中这称为零客户端安装。
|
||||
在这个考试的要求中这称为无客户端(null-client)安装。
|
||||
|
||||
在我们的测试环境中将包括一个原始邮件服务器和一个中央服务器或中继主机。
|
||||
在我们的测试环境中将包括一个起源(originating)邮件服务器和一个中央服务器或中继主机(relayhost)。
|
||||
|
||||
原始邮件服务器: (主机名: box1.mydomain.com / IP: 192.168.0.18)
|
||||
中央邮件服务器: (主机名: mail.mydomain.com / IP: 192.168.0.20)
|
||||
- 起源邮件服务器: (主机名: box1.mydomain.com / IP: 192.168.0.18)
|
||||
- 中央邮件服务器: (主机名: mail.mydomain.com / IP: 192.168.0.20)
|
||||
|
||||
为了域名解析我们在两台机器中都会使用有名的 /etc/hosts 文件:
|
||||
我们在两台机器中都会使用你熟知的 `/etc/hosts` 文件做名字解析:
|
||||
|
||||
192.168.0.18 box1.mydomain.com box1
|
||||
192.168.0.20 mail.mydomain.com mail
|
||||
@ -28,34 +28,29 @@
|
||||
|
||||
首先,我们需要(在两台机器上):
|
||||
|
||||
**1. 安装 Postfix:**
|
||||
**1、 安装 Postfix:**
|
||||
|
||||
# yum update && yum install postfix
|
||||
|
||||
**2. 启动服务并启用开机自动启动:**
|
||||
**2、 启动服务并启用开机自动启动:**
|
||||
|
||||
# systemctl start postfix
|
||||
# systemctl enable postfix
|
||||
|
||||
**3. 允许邮件流量通过防火墙:**
|
||||
**3、 允许邮件流量通过防火墙:**
|
||||
|
||||
# firewall-cmd --permanent --add-service=smtp
|
||||
# firewall-cmd --add-service=smtp
|
||||
|
||||
|
||||
![在防火墙中开通邮件服务器端口](http://www.tecmint.com/wp-content/uploads/2015/09/Allow-Traffic-through-Firewall.png)
|
||||
|
||||
在防火墙中开通邮件服务器端口
|
||||
*在防火墙中开通邮件服务器端口*
|
||||
|
||||
**4. 在 box1.mydomain.com 配置 Postfix**
|
||||
**4、 在 box1.mydomain.com 配置 Postfix**
|
||||
|
||||
Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一个很大的文本,因为其中包含的注释解析了程序设置的目的。
|
||||
Postfix 的主要配置文件是 `/etc/postfix/main.cf`。这个文件本身是一个很大的文本文件,因为其中包含了解释程序设置的用途的注释。
|
||||
|
||||
为了简洁,我们只显示了需要编辑的行(是的,在原始服务器中你需要保留 mydestination 为空;否则邮件会被保存到本地而不是我们实际想要的中央邮件服务器):
|
||||
|
||||
**在 box1.mydomain.com 配置 Postfix**
|
||||
|
||||
----------
|
||||
为了简洁,我们只显示了需要编辑的行(没错,在起源服务器中你需要保留 `mydestination` 为空;否则邮件会被存储到本地,而不是我们实际想要发往的中央邮件服务器):
|
||||
|
||||
myhostname = box1.mydomain.com
|
||||
mydomain = mydomain.com
|
||||
@ -64,11 +59,7 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一
|
||||
mydestination =
|
||||
relayhost = 192.168.0.20
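另外一种做法是用 postconf -e 直接修改配置而不手动编辑文件,下面只是一个示意(仅列出上面显示的几项,取值与前面的配置一致):

# postconf -e 'myhostname = box1.mydomain.com'
# postconf -e 'mydomain = mydomain.com'
# postconf -e 'mydestination ='
# postconf -e 'relayhost = 192.168.0.20'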
|
||||
|
||||
**5. 在 mail.mydomain.com 配置 Postfix**
|
||||
|
||||
** 在 mail.mydomain.com 配置 Postfix **
|
||||
|
||||
----------
|
||||
**5、 在 mail.mydomain.com 配置 Postfix**
|
||||
|
||||
myhostname = mail.mydomain.com
|
||||
mydomain = mydomain.com
|
||||
@ -83,23 +74,23 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一
|
||||
|
||||
![设置 Postfix SELinux 权限](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Postfix-SELinux-Permission.png)
|
||||
|
||||
设置 Postfix SELinux 权限
|
||||
*设置 Postfix SELinux 权限*
|
||||
|
||||
上面的 SELinux 布尔值会允许 Postfix 在中央服务器写入邮件池。
|
||||
上面的 SELinux 布尔值会允许中央服务器上的 Postfix 可以写入邮件池(mail spool)。
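供参考,设置这类布尔值的命令大致如下(下面的布尔值名称只是示意,实际名称请以你系统上 `getsebool -a | grep postfix` 的输出为准):

# getsebool -a | grep postfix
# setsebool -P allow_postfix_local_write_mail_spool on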
|
||||
|
||||
**6. 在两台机子上重启服务以使更改生效:**
|
||||
**6、 在两台机子上重启服务以使更改生效:**
|
||||
|
||||
# systemctl restart postfix
|
||||
|
||||
如果 Postfix 没有正确启动,你可以使用下面的命令进行错误处理。
|
||||
|
||||
# systemctl –l status postfix
|
||||
# journalctl –xn
|
||||
# postconf –n
|
||||
# systemctl -l status postfix
|
||||
# journalctl -xn
|
||||
# postconf -n
|
||||
|
||||
### 测试 Postfix 邮件服务 ###
|
||||
|
||||
为了测试邮件服务器,你可以使用任何邮件用户代理(最常见的简称为 MUA)例如 [mail 或 mutt][2]。
|
||||
要测试邮件服务器,你可以使用任何邮件用户代理(Mail User Agent,常简称为 MUA),例如 [mail 或 mutt][2]。
|
||||
|
||||
由于我个人喜欢 mutt,我会在 box1 中使用它发送邮件给用户 tecmint,并把现有文件(mailbody.txt)作为信息内容:
|
||||
|
||||
@ -107,7 +98,7 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一
|
||||
|
||||
![测试 Postfix 邮件服务器](http://www.tecmint.com/wp-content/uploads/2015/09/Test-Postfix-Mail-Server.png)
|
||||
|
||||
测试 Postfix 邮件服务器
|
||||
*测试 Postfix 邮件服务器*
|
||||
|
||||
现在到中央邮件服务器(mail.mydomain.com)以 tecmint 用户登录,并检查是否收到了邮件:
|
||||
|
||||
@ -116,15 +107,15 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一
|
||||
|
||||
![检查 Postfix 邮件服务器发送](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Postfix-Mail-Server-Delivery.png)
|
||||
|
||||
检查 Postfix 邮件服务器发送
|
||||
*检查 Postfix 邮件服务器发送*
|
||||
|
||||
如果没有收到邮件,检查 root 用户的邮件池查看警告或者错误提示。你也需要使用 [nmap 命令][3]确保两台服务器运行了 SMTP 服务,并在中央邮件服务器中 打开了 25 号端口:
|
||||
如果没有收到邮件,检查 root 用户的邮件池看看是否有警告或者错误提示。你也许需要使用 [nmap 命令][3]确保两台服务器运行了 SMTP 服务,并在中央邮件服务器中打开了 25 号端口:
|
||||
|
||||
# nmap -PN 192.168.0.20
|
||||
|
||||
![Postfix 邮件服务器错误处理](http://www.tecmint.com/wp-content/uploads/2015/09/Troubleshoot-Postfix-Mail-Server.png)
|
||||
|
||||
Postfix 邮件服务器错误处理
|
||||
*Postfix 邮件服务器错误处理*
|
||||
|
||||
### 总结 ###
|
||||
|
||||
@ -134,7 +125,7 @@ Postfix 邮件服务器错误处理
|
||||
|
||||
- [在 CentOS/RHEL 07 上配置仅缓存的 DNS 服务器][4]
|
||||
|
||||
最后,我强烈建议你熟悉 Postfix 的配置文件(main.cf)和这个程序的帮助手册。如果有任何疑问,别犹豫,使用下面的评论框或者我们的论坛 Linuxsay.com 告诉我们吧,你会从世界各地的 Linux 高手中获得几乎及时的帮助。
|
||||
最后,我强烈建议你熟悉 Postfix 的配置文件(main.cf)和这个程序的帮助手册。如果有任何疑问,别犹豫,使用下面的评论框或者我们的论坛 Linuxsay.com 告诉我们吧,你会从世界各地的 Linux 高手中获得几乎是及时的帮助。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -142,7 +133,7 @@ via: http://www.tecmint.com/setup-postfix-mail-server-smtp-using-null-client-on-
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[ictlyh](https://www.mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,228 +0,0 @@
|
||||
Great Open Source Collaborative Editing Tools
|
||||
================================================================================
|
||||
In a nutshell, collaborative writing is writing done by more than one person. There are benefits and risks of collaborative working. Some of the benefits include a more integrated / co-ordinated approach, better use of existing resources, and a stronger, united voice. For me, the greatest advantage is also the most obvious one: being able to take colleagues' views on board. Sending files back and forth between colleagues is inefficient, causes unnecessary delays and leaves people (i.e. me) unhappy with the whole notion of collaboration. With good collaborative software, I can share notes, data and files, and use comments to share thoughts in real-time or asynchronously. Working together on documents, images, video, presentations, and tasks is made less of a chore.
|
||||
|
||||
There are many ways to collaborate online, and it has never been easier. This article highlights my favourite open source tools to collaborate on documents in real time.
|
||||
|
||||
Google Docs is an excellent productivity application with most of the features I need. It serves as a collaborative tool for editing documents in real time. Documents can be shared, opened, and edited by multiple users simultaneously and users can see character-by-character changes as other collaborators make edits. While Google Docs is free for individuals, it is not open source.
|
||||
|
||||
Here is my take on the finest open source collaborative editors which help you focus on writing without interruption, yet work mutually with others.
|
||||
|
||||
----------
|
||||
|
||||
### Hackpad ###
|
||||
|
||||
![Hackpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Hackpad.png)
|
||||
|
||||
Hackpad is an open source web-based realtime wiki, based on the open source EtherPad collaborative document editor.
|
||||
|
||||
Hackpad allows users to share docs in real time, and it uses color coding to show which authors have contributed which content. It also allows inline photos and checklists, and can be used for coding as it offers syntax highlighting.
|
||||
|
||||
While Dropbox acquired Hackpad in April 2014, it is only this month that the software has been released under an open source license. It has been worth the wait.
|
||||
|
||||
Features include:
|
||||
|
||||
- Very rich set of functions, similar to those offered by wikis
|
||||
- Take collaborative notes, share data and files, and use comments to share your thoughts in real-time or asynchronously
|
||||
- Granular privacy permissions enable you to invite a single friend, a dozen teammates, or thousands of Twitter followers
|
||||
- Intelligent execution
|
||||
- Directly embed videos from popular video sharing sites
|
||||
- Tables
|
||||
- Syntax highlighting for most common programming languages including C, C#, CSS, CoffeeScript, Java, and HTML
|
||||
|
||||
- Website: [hackpad.com][1]
|
||||
- Source code: [github.com/dropbox/hackpad][2]
|
||||
- Developer: [Contributors][3]
|
||||
- License: Apache License, Version 2.0
|
||||
- Version Number: -
|
||||
|
||||
----------
|
||||
|
||||
### Etherpad ###
|
||||
|
||||
![Etherpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Etherpad.png)
|
||||
|
||||
Etherpad is an open source web-based collaborative real-time editor, allowing authors to simultaneously edit a text document, leave comments, and interact with others using an integrated chat.
|
||||
|
||||
Etherpad is implemented in JavaScript, on top of the AppJet platform, with the real-time functionality achieved using Comet streaming.
|
||||
|
||||
Features include:
|
||||
|
||||
- Well designed spartan interface
|
||||
- Simple text formatting features
|
||||
- "Time slider" - explore the history of a pad
|
||||
- Download documents in plain text, PDF, Microsoft Word, Open Document, and HTML
|
||||
- Auto-saves the document at regular, short intervals
|
||||
- Highly customizable
|
||||
- Client side plugins extend the editor functionality
|
||||
- Hundreds of plugins extend Etherpad including support for email notifications, pad management, authentication
|
||||
- Accessibility enabled
|
||||
- Interact with Pad contents in real time from within Node and from your CLI
|
||||
|
||||
- Website: [etherpad.org][4]
|
||||
- Source code: [github.com/ether/etherpad-lite][5]
|
||||
- Developer: David Greenspan, Aaron Iba, J.D. Zamfiresc, Daniel Clemens, David Cole
|
||||
- License: Apache License Version 2.0
|
||||
- Version Number: 1.5.7
|
||||
|
||||
----------
|
||||
|
||||
### Firepad ###
|
||||
|
||||
![Firepad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Firepad.png)
|
||||
|
||||
Firepad is an open source, collaborative text editor. It is designed to be embedded inside larger web applications with collaborative code editing added in only a few days.
|
||||
|
||||
Firepad is a full-featured text editor, with capabilities like conflict resolution, cursor synchronization, user attribution, and user presence detection. It uses Firebase as a backend, and doesn't need any server-side code. It can be added to any web app. Firepad can use either the CodeMirror editor or the Ace editor to render documents, and its operational transform code borrows from ot.js.
|
||||
|
||||
If you want to extend your web application's capabilities by adding a simple document and code editor, Firepad is perfect.
|
||||
|
||||
Firepad is used by several editors, including the Atlassian Stash Realtime Editor, Nitrous.IO, LiveMinutes, and Koding.
|
||||
|
||||
Features include:
|
||||
|
||||
- True collaborative editing
|
||||
- Intelligent OT-based merging and conflict resolution
|
||||
- Support for both rich text and code editing
|
||||
- Cursor position synchronization
|
||||
- Undo / redo
|
||||
- Text highlighting
|
||||
- User attribution
|
||||
- Presence detection
|
||||
- Version checkpointing
|
||||
- Images
|
||||
- Extend Firepad through its API
|
||||
- Supports all modern browsers: Chrome, Safari, Opera 11+, IE8+, Firefox 3.6+
|
||||
|
||||
- Website: [www.firepad.io][6]
|
||||
- Source code: [github.com/firebase/firepad][7]
|
||||
- Developer: Michael Lehenbauer and the team at Firebase
|
||||
- License: MIT
|
||||
- Version Number: 1.1.1
|
||||
|
||||
----------
|
||||
|
||||
### OwnCloud Documents ###
|
||||
|
||||
![ownCloud Documents in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-ownCloud.png)
|
||||
|
||||
ownCloud Documents is an ownCloud app to work with office documents alone and/or collaboratively. It allows up to 5 individuals to collaborate editing .odt and .doc files in a web browser.
|
||||
|
||||
ownCloud is a self-hosted file sync and share server. It provides access to your data through a web interface, sync clients or WebDAV while providing a platform to view, sync and share across devices easily.
|
||||
|
||||
Features include:
|
||||
|
||||
- Cooperative edit, with multiple users editing files simultaneously
|
||||
- Document creation within ownCloud
|
||||
- Document upload
|
||||
- Share and edit files in the browser, and then share them inside ownCloud or through a public link
|
||||
- ownCloud features like versioning, local syncing, encryption, undelete
|
||||
- Seamless support for Microsoft Word documents by way of transparent conversion of file formats
|
||||
|
||||
- Website: [owncloud.org][8]
|
||||
- Source code: [github.com/owncloud/documents][9]
|
||||
- Developer: OwnCloud Inc.
|
||||
- License: AGPLv3
|
||||
- Version Number: 8.1.1
|
||||
|
||||
----------
|
||||
|
||||
### Gobby ###
|
||||
|
||||
![Gobby in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Gobby.png)
|
||||
|
||||
Gobby is a collaborative editor supporting multiple documents in one session and a multi-user chat. All users could work on the file simultaneously without the need to lock it. The parts the various users write are highlighted in different colours and it supports syntax highlighting of various programming and markup languages.
|
||||
|
||||
Gobby allows multiple users to edit the same document together over the internet in real-time. It integrates well with the GNOME environment. It features a client-server architecture which supports multiple documents in one session, document synchronisation on request, password protection and an IRC-like chat for communication out of band. Users can choose a colour to highlight the text they have written in a document.
|
||||
|
||||
A dedicated server called infinoted is also provided.
|
||||
|
||||
Features include:
|
||||
|
||||
- Full-fledged text editing capabilities including syntax highlighting using GtkSourceView
|
||||
- Real-time, lock-free collaborative text editing through encrypted connections (including PFS)
|
||||
- Integrated group chat
|
||||
- Local group undo: Undo does not affect changes of remote users
|
||||
- Shows cursors and selections of remote users
|
||||
- Highlights text written by different users with different colors
|
||||
- Syntax highlighting for most programming languages, auto indentation, configurable tab width
|
||||
- Zeroconf support
|
||||
- Encrypted data transfer including perfect forward secrecy (PFS)
|
||||
- Sessions can be password-protected
|
||||
- Sophisticated access control with Access Control Lists (ACLs)
|
||||
- Highly configurable dedicated server
|
||||
- Automatic saving of documents
|
||||
- Advanced search and replace options
|
||||
- Internationalisation
|
||||
- Full Unicode support
|
||||
|
||||
- Website: [gobby.github.io][10]
|
||||
- Source code: [github.com/gobby][11]
|
||||
- Developer: Armin Burgmeier, Philipp Kern and contributors
|
||||
- License: GNU GPLv2+ and ISC
|
||||
- Version Number: 0.5.0
|
||||
|
||||
----------
|
||||
|
||||
### OnlyOffice ###
|
||||
|
||||
![OnlyOffice in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-OnlyOffice.png)
|
||||
|
||||
ONLYOFFICE (formerly known as Teamlab Office) is a multifunctional cloud online office suite integrated with CRM system, document and project management toolset, Gantt chart and email aggregator.
|
||||
|
||||
It allows you to organize business tasks and milestones, store and share your corporate or personal documents, use social networking tools such as blogs and forums, as well as communicate with your team members via corporate IM.
|
||||
|
||||
Manage documents, projects, team and customer relations in one place. OnlyOffice combines text, spreadsheet and presentation editors that include features similar to Microsoft desktop editors (Word, Excel and PowerPoint), but then allow to co-edit, comment and chat in real time.
|
||||
|
||||
OnlyOffice is written in ASP.NET, based on HTML5 Canvas element, and translated to 21 languages.
|
||||
|
||||
Features include:
|
||||
|
||||
- As powerful as a desktop editor when working with large documents, paging and zooming
|
||||
- Document sharing in view / edit modes
|
||||
- Document embedding
|
||||
- Spreadsheet and presentation editors
|
||||
- Co-editing
|
||||
- Commenting
|
||||
- Integrated chat
|
||||
- Mobile applications
|
||||
- Gantt charts
|
||||
- Time management
|
||||
- Access right management
|
||||
- Invoicing system
|
||||
- Calendar
|
||||
- Integration with file storage systems: Google Drive, Box, OneDrive, Dropbox, OwnCloud
|
||||
- Integration with CRM, email aggregator and project management module
|
||||
- Mail server
|
||||
- Mail aggregator
|
||||
- Edit documents, spreadsheets and presentations of the most popular formats: DOC, DOCX, ODT, RTF, TXT, XLS, XLSX, ODS, CSV, PPTX, PPT, ODP
|
||||
|
||||
- Website: [www.onlyoffice.com][12]
|
||||
- Source code: [github.com/ONLYOFFICE/DocumentServer][13]
|
||||
- Developer: Ascensio System SIA
|
||||
- License: GNU GPL v3
|
||||
- Version Number: 7.7
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxlinks.com/article/20150823085112605/CollaborativeEditing.html
|
||||
|
||||
作者:Frazer Kline
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://hackpad.com/
|
||||
[2]:https://github.com/dropbox/hackpad
|
||||
[3]:https://github.com/dropbox/hackpad/blob/master/CONTRIBUTORS
|
||||
[4]:http://etherpad.org/
|
||||
[5]:https://github.com/ether/etherpad-lite
|
||||
[6]:http://www.firepad.io/
|
||||
[7]:https://github.com/firebase/firepad
|
||||
[8]:https://owncloud.org/
|
||||
[9]:http://github.com/owncloud/documents/
|
||||
[10]:https://gobby.github.io/
|
||||
[11]:https://github.com/gobby
|
||||
[12]:https://www.onlyoffice.com/free-edition.aspx
|
||||
[13]:https://github.com/ONLYOFFICE/DocumentServer
|
@ -1,3 +1,4 @@
|
||||
translating by tastynoodle
|
||||
5 best open source board games to play online
|
||||
================================================================================
|
||||
I have always had a fascination with board games, in part because they are a device of social interaction, they challenge the mind and, most importantly, they are great fun to play. In my misspent youth, myself and a group of friends gathered together to escape the horrors of the classroom, and indulge in a little escapism. The time provided an outlet for tension and rivalry. Board games help teach diplomacy, how to make and break alliances, bring families and friends together, and learn valuable lessons.
|
||||
|
@ -1,605 +0,0 @@
|
||||
|
||||
translation by strugglingyouth
|
||||
80 Linux Monitoring Tools for SysAdmins
|
||||
================================================================================
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-monitoring.jpg)
|
||||
|
||||
The industry is hotting up at the moment, and there are more tools than you can shake a stick at. Here lies the most comprehensive list on the Internet (of Tools). Featuring over 80 ways to monitor your machines. Within this article we outline:
|
||||
|
||||
- Command line tools
|
||||
- Network related
|
||||
- System related monitoring
|
||||
- Log monitoring tools
|
||||
- Infrastructure monitoring tools
|
||||
|
||||
It’s hard work monitoring and debugging performance problems, but it’s easier with the right tools at the right time. Here are some tools you’ve probably heard of, some you probably haven’t – and when to use them:
|
||||
|
||||
### Top 10 System Monitoring Tools ###
|
||||
|
||||
#### 1. Top ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/top.jpg)
|
||||
|
||||
This is a small tool which is pre-installed on many unix systems. When you want an overview of all the processes or threads running in the system, top is a good tool. You can order these processes by different criteria, and the default criterion is CPU.
|
||||
|
||||
#### 2. [htop][1] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/htop.jpg)
|
||||
|
||||
Htop is essentially an enhanced version of top. It’s easier to sort by processes. It’s visually easier to understand and has built in commands for common things you would like to do. Plus it’s fully interactive.
|
||||
|
||||
#### 3. [atop][2] ####
|
||||
|
||||
Atop monitors all processes much like top and htop, unlike top and htop however it has daily logging of the processes for long-term analysis. It also shows resource consumption by all processes. It will also highlight resources that have reached a critical load.
|
||||
|
||||
#### 4. [apachetop][3] ####
|
||||
|
||||
Apachetop monitors the overall performance of your apache webserver. It’s largely based on mytop. It displays current number of reads, writes and the overall number of requests processed.
|
||||
|
||||
#### 5. [ftptop][4] ####
|
||||
|
||||
ftptop gives you basic information of all the current ftp connections to your server such as the total amount of sessions, how many are uploading and downloading and who the client is.
|
||||
|
||||
#### 6. [mytop][5] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mytop.jpg)
|
||||
|
||||
mytop is a neat tool for monitoring threads and performance of mysql. It gives you a live look into the database and what queries it’s processing in real time.
|
||||
|
||||
#### 7. [powertop][6] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/powertop.jpg)
|
||||
|
||||
powertop helps you diagnose issues that has to do with power consumption and power management. It can also help you experiment with power management settings to achieve the most efficient settings for your server. You switch tabs with the tab key.
|
||||
|
||||
#### 8. [iotop][7] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iotop.jpg)
|
||||
|
||||
iotop checks the I/O usage information and gives you a top-like interface to that. It displays columns on read and write and each row represents a process. It also displays the percentage of time the process spent while swapping in and while waiting on I/O.
|
||||
|
||||
### Network related monitoring ###
|
||||
|
||||
#### 9. [ntopng][8] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ntopng.jpg)
|
||||
|
||||
ntopng is the next generation of ntop and the tool provides a graphical user interface via the browser for network monitoring. It can do stuff such as: geolocate hosts, get network traffic and show ip traffic distribution and analyze it.
|
||||
|
||||
#### 10. [iftop][9] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iftop.jpg)
|
||||
|
||||
iftop is similar to top, but instead of mainly checking for cpu usage it listens to network traffic on selected network interfaces and displays a table of current usage. It can be handy for answering questions such as “Why on earth is my internet connection so slow?!”.
|
||||
|
||||
#### 11. [jnettop][10] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/jnettop.jpg)
|
||||
|
||||
jnettop visualises network traffic in much the same way as iftop does. It also supports customizable text output and a machine-friendly mode to support further analysis.
|
||||
|
||||
#### 12. [bandwidthd][11] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bandwidthd.jpg)
|
||||
|
||||
BandwidthD tracks usage of TCP/IP network subnets and visualises that in the browser by building a html page with graphs in png. There is a database driven system that supports searching, filtering, multiple sensors and custom reports.
|
||||
|
||||
#### 13. [EtherApe][12] ####
|
||||
|
||||
EtherApe displays network traffic graphically: the more talkative a host, the bigger its node. It either captures live traffic or can read it from a tcpdump capture. The display can also be refined using a network filter with pcap syntax.
|
||||
|
||||
#### 14. [ethtool][13] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ethtool.jpg)
|
||||
|
||||
ethtool is used for displaying and modifying some parameters of the network interface controllers. It can also be used to diagnose Ethernet devices and get more statistics from the devices.
|
||||
|
||||
#### 15. [NetHogs][14] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nethogs.jpg)
|
||||
|
||||
NetHogs breaks down network traffic per protocol or per subnet. It then groups by process. So if there’s a surge in network traffic you can fire up NetHogs and see which process is causing it.
|
||||
|
||||
#### 16. [iptraf][15] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iptraf.jpg)
|
||||
|
||||
iptraf gathers a variety of metrics such as TCP connection packet and byte count, interface statistics and activity indicators, TCP/UDP traffic breakdowns and station packet and byte counts.
|
||||
|
||||
#### 17. [ngrep][16] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ngrep.jpg)
|
||||
|
||||
ngrep is grep but for the network layer. It's pcap aware and will allow you to specify extended regular or hexadecimal expressions to match against the data payloads of packets.
|
||||
|
||||
#### 18. [MRTG][17] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mrtg.jpg)
|
||||
|
||||
MRTG was originally developed to monitor router traffic, but now it's able to monitor other network related things as well. It typically collects data every five minutes and then generates an HTML page. It also has the capability of sending warning emails.
|
||||
|
||||
#### 19. [bmon][18] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bmon.jpg)
|
||||
|
||||
Bmon monitors and helps you debug networks. It captures network related statistics and presents it in human friendly way. You can also interact with bmon through curses or through scripting.
|
||||
|
||||
#### 20. traceroute ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/traceroute.jpg)
|
||||
|
||||
Traceroute is a built-in tool for displaying the route and measuring the delay of packets across a network.
|
||||
|
||||
#### 21. [IPTState][19] ####
|
||||
|
||||
IPTState allows you to watch where traffic that crosses your iptables is going and then sort that by different criteria as you please. The tool also allows you to delete states from the table.
|
||||
|
||||
#### 22. [darkstat][20] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/darkstat.jpg)
|
||||
|
||||
Darkstat captures network traffic and calculates statistics about usage. The reports are served over a simple HTTP server and gives you a nice graphical user interface of the graphs.
|
||||
|
||||
#### 23. [vnStat][21] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vnstat.jpg)
|
||||
|
||||
vnStat is a network traffic monitor that uses statistics provided by the kernel which ensures light use of system resources. The gathered statistics persists through system reboots. It has color options for the artistic sysadmins.
|
||||
|
||||
#### 24. netstat ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/netstat.jpg)
|
||||
|
||||
Netstat is a built-in tool that displays TCP network connections, routing tables and a number of network interfaces. It’s used to find problems in the network.
|
||||
|
||||
#### 25. ss ####
|
||||
|
||||
Instead of netstat, however, it's preferable to use ss. The ss command is capable of showing more information than netstat and is actually faster. If you want summary statistics you can use the command `ss -s`.
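For example, here are a couple of invocations you might try (flags as documented in the ss man page; this is just an illustrative sketch):

ss -s        # summary statistics per protocol
ss -tulpn    # listening TCP/UDP sockets with owning processes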
|
||||
|
||||
#### 26. [nmap][22] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmap.jpg)
|
||||
|
||||
Nmap allows you to scan your server for open ports or detect which OS is being used. But you could also use this for SQL injection vulnerabilities, network discovery and other means related to penetration testing.
|
||||
|
||||
#### 27. [MTR][23] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mtr.jpg)
|
||||
|
||||
MTR combines the functionality of traceroute and the ping tool into a single network diagnostic tool. When using the tool it will limit the number hops individual packets has to travel while also listening to their expiry. It then repeats this every second.
|
||||
|
||||
#### 28. [Tcpdump][24] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/tcpdump.jpg)
|
||||
|
||||
Tcpdump will output a description of the contents of each packet it captures that matches the expression you provided in the command. You can also save this data for further analysis.
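As a hedged example (the interface name eth0 and the file name are assumptions, adjust to your setup), you could capture HTTP traffic to a file and read it back later:

tcpdump -i eth0 -w web.pcap port 80    # capture port 80 traffic to a file
tcpdump -r web.pcap                    # read the capture back for analysis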
|
||||
|
||||
#### 29. [Justniffer][25] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/justniffer.jpg)
|
||||
|
||||
Justniffer is a TCP packet sniffer. You can choose whether you would like to collect low-level data or high-level data with this sniffer. It also allows you to generate logs in a customizable way. You could, for instance, mimic the access log that apache has.
|
||||
|
||||
### System related monitoring ###
|
||||
|
||||
#### 30. [nmon][26] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmon.jpg)
|
||||
|
||||
nmon either outputs the data on screen or saves it in a comma separated file. You can display CPU, memory, network, filesystems, top processes. The data can also be added to a RRD database for further analysis.
|
||||
|
||||
#### 31. [conky][27] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cpulimit.jpg)
|
||||
|
||||
Conky monitors a plethora of different OS stats. It has support for IMAP and POP3 and even support for many popular music players! For the handy person you could extend it with your own scripts or programs using Lua.
|
||||
|
||||
#### 32. [Glances][28] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/glances.jpg)
|
||||
|
||||
Glances monitors your system and aims to present a maximum amount of information in a minimum amount of space. It has the capability to function in a client/server mode as well as monitoring remotely. It also has a web interface.
|
||||
|
||||
#### 33. [saidar][29] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/saidar.jpg)
|
||||
|
||||
Saidar is a very small tool that gives you basic information about your system resources. It displays a full screen of the standard system resources. The emphasis for saidar is being as simple as possible.
|
||||
|
||||
#### 34. [RRDtool][30] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/rrdtool.jpg)
|
||||
|
||||
RRDtool is a tool developed to handle round-robin databases or RRD. RRD aims to handle time-series data like CPU load, temperatures etc. This tool provides a way to extract RRD data in a graphical format.
|
||||
|
||||
#### 35. [monit][31] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/monit.jpg)
|
||||
|
||||
Monit has the capability of sending you alerts as well as restarting services if they run into trouble. It’s possible to perform any type of check you could write a script for with monit and it has a web user interface to ease your eyes.
|
||||
|
||||
#### 36. [Linux process explorer][32] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-process-monitor.jpg)
|
||||
|
||||
Linux process explorer is akin to the activity monitor for OSX or the windows equivalent. It aims to be more usable than top or ps. You can view each process and see how much memory usage or CPU it uses.
|
||||
|
||||
#### 37. df ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/df.jpg)
|
||||
|
||||
df is an abbreviation for disk free and is pre-installed program in all unix systems used to display the amount of available disk space for filesystems which the user have access to.
|
||||
|
||||
#### 38. [discus][33] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/discus.jpg)
|
||||
|
||||
Discus is similar to df; however, it aims to improve on df by making it prettier, using fancy features such as colors, graphs and smart formatting of numbers.
|
||||
|
||||
#### 39. [xosview][34] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/xosview.jpg)
|
||||
|
||||
xosview is a classic system monitoring tool and it gives you a simple overview of all the different parts of the system, including IRQ.
|
||||
|
||||
#### 40. [Dstat][35] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dstat.jpg)
|
||||
|
||||
Dstat aims to be a replacement for vmstat, iostat, netstat and ifstat. It allows you to view all of your system resources in real-time. The data can then be exported into csv. Most importantly dstat allows for plugins and could thus be extended into areas not yet known to mankind.
|
||||
|
||||
#### 41. [Net-SNMP][36] ####
|
||||
|
||||
SNMP is the protocol ‘simple network management protocol’ and the Net-SNMP tool suite helps you collect accurate information about your servers using this protocol.
|
||||
|
||||
#### 42. [incron][37] ####
|
||||
|
||||
Incron allows you to monitor a directory tree and then take action on those changes. If you wanted to copy files to directory ‘b’ once new files appeared in directory ‘a’ that’s exactly what incron does.
|
||||
|
||||
#### 43. [monitorix][38] ####
|
||||
|
||||
Monitorix is lightweight system monitoring tool. It helps you monitor a single machine and gives you a wealth of metrics. It also has a built-in HTTP server to view graphs and a reporting mechanism of all metrics.
|
||||
|
||||
#### 44. vmstat ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vmstat.jpg)
|
||||
|
||||
vmstat or virtual memory statistics is a small built-in tool that monitors and displays a summary about the memory in the machine.
|
||||
|
||||
#### 45. uptime ####
|
||||
|
||||
This small command quickly gives you information about how long the machine has been running, how many users are currently logged on, and the system load average for the past 1, 5 and 15 minutes.
|
||||
|
||||
#### 46. mpstat ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mpstat.jpg)
|
||||
|
||||
mpstat is a built-in tool that monitors cpu usage. The most common command is using `mpstat -P ALL` which gives you the usage of all the cores. You can also get an interval update of the CPU usage.
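For instance, a quick illustrative run that samples all cores every 2 seconds, 5 times (the interval and count here are arbitrary):

mpstat -P ALL 2 5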
|
||||
|
||||
#### 47. pmap ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pmap.jpg)
|
||||
|
||||
pmap is a built-in tool that reports the memory map of a process. You can use this command to find out causes of memory bottlenecks.
|
||||
|
||||
#### 48. ps ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ps.jpg)
|
||||
|
||||
The ps command will give you an overview of all the current processes. You can easily select all processes using the command `ps -A`
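A small illustrative variation (GNU ps syntax) that lists the processes using the most memory:

ps aux --sort=-%mem | head -n 10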
|
||||
|
||||
#### 49. [sar][39] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sar.jpg)
|
||||
|
||||
sar is a part of the sysstat package and helps you to collect, report and save different system metrics. With different commands it will give you CPU, memory and I/O usage among other things.
|
||||
|
||||
#### 50. [collectl][40] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/collectl.jpg)
|
||||
|
||||
Similar to sar collectl collects performance metrics for your machine. By default it shows cpu, network and disk stats but it collects a lot more. The difference to sar is collectl is able to deal with times below 1 second, it can be fed into a plotting tool directly and collectl monitors processes more extensively.
|
||||
|
||||
#### 51. [iostat][41] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iostat.jpg)
|
||||
|
||||
iostat is also part of the sysstat package. This command is used for monitoring system input/output. The reports themselves can be used to change system configurations to better balance input/output load between hard drives in your machine.
|
||||
|
||||
#### 52. free ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/free.jpg)
|
||||
|
||||
This is a built-in command that displays the total amount of free and used physical memory on your machine. It also displays the buffers used by the kernel at that given moment.
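For example (purely illustrative), you can print the figures in megabytes, or refresh them every few seconds with watch:

free -m
watch -n 5 free -m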
|
||||
|
||||
#### 53. /Proc file system ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/procfile.jpg)
|
||||
|
||||
The proc file system gives you a peek into kernel statistics. From these statistics you can get detailed information about the different hardware devices on your machine. Take a look at the [full list of the proc file statistics][42]
|
||||
|
||||
#### 54. [GKrellM][43] ####
|
||||
|
||||
GKrellM is a GUI application that monitors the status of your hardware, such as CPU, main memory, hard disks, network interfaces and many other things. It can also monitor and launch a mail reader of your choice.
|
||||
|
||||
#### 55. [Gnome system monitor][44] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/gnome-system-monitor.jpg)
|
||||
|
||||
Gnome system monitor is a basic system monitoring tool that has features looking at process dependencies from a tree view, kill or renice processes and graphs of all server metrics.
|
||||
|
||||
### Log monitoring tools ###
|
||||
|
||||
#### 56. [GoAccess][45] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/goaccess.jpg)
|
||||
|
||||
GoAccess is a real-time web log analyzer which analyzes the access log from either apache, nginx or amazon cloudfront. It’s also possible to output the data into HTML, JSON or CSV. It will give you general statistics, top visitors, 404s, geolocation and many other things.
|
||||
|
||||
#### 57. [Logwatch][46] ####
|
||||
|
||||
Logwatch is a log analysis system. It parses through your system’s logs and creates a report analyzing the areas that you specify. It can give you daily reports with short digests of the activities taking place on your machine.
|
||||
|
||||
#### 58. [Swatch][47] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/swatch.jpg)
|
||||
|
||||
Much like Logwatch, Swatch also monitors your logs, but instead of giving reports it watches for regular expression matches and notifies you via mail or the console when there is one. It could be used for intruder detection, for example.
|
||||
|
||||
#### 59. [MultiTail][48] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/multitail.jpg)
|
||||
|
||||
MultiTail helps you monitor logfiles in multiple windows. You can merge two or more of these logfiles into one. It will also use colors to display the logfiles for easier reading with the help of regular expressions.
|
||||
|
||||
#### System tools ####
|
||||
|
||||
#### 60. [acct or psacct][49] ####
|
||||
|
||||
acct or psacct (depending on whether you use apt-get or yum) allows you to monitor all the commands a user executes inside the system, including CPU and memory time. Once installed you get that summary with the command `sa`.
|
||||
|
||||
#### 61. [whowatch][50] ####
|
||||
|
||||
Similar to acct this tool monitors users on your system and allows you to see in real time what commands and processes they are using. It gives you a tree structure of all the processes and so you can see exactly what’s happening.
|
||||
|
||||
#### 62. [strace][51] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/strace.jpg)
|
||||
|
||||
strace is used to diagnose, debug and monitor interactions between processes. The most common thing to do is making strace print a list of system calls made by the program which is useful if the program does not behave as expected.
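Two illustrative invocations (the PID 1234 is just a placeholder for a real process ID):

strace -c ls                        # count syscalls made by a short-lived command
strace -p 1234 -e trace=network     # attach to a running process, network calls only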
|
||||
|
||||
#### 63. [DTrace][52] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dtrace.jpg)
|
||||
|
||||
DTrace is the big brother of strace. It dynamically patches live running instructions with instrumentation code. This allows you to do in-depth performance analysis and troubleshooting. However, it’s not for the faint of heart, as there is a 1,200-page book written on the topic.
|
||||
|
||||
#### 64. [webmin][53] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/webmin.jpg)
|
||||
|
||||
Webmin is a web-based system administration tool. It removes the need to manually edit unix configuration files and lets you manage the system remotely if need be. It has a couple of monitoring modules that you can attach to it.
|
||||
|
||||
#### 65. stat ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/stat.jpg)
|
||||
|
||||
Stat is a built-in tool for displaying status information of files and file systems. It will give you information such as when the file was modified, accessed or changed.
|
||||
|
||||
#### 66. ifconfig ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ifconfig.jpg)
|
||||
|
||||
ifconfig is a built-in tool used to configure the network interfaces. Behind the scenes network monitor tools use ifconfig to set it into promiscuous mode to capture all packets. You can do it yourself with `ifconfig eth0 promisc` and return to normal mode with `ifconfig eth0 -promisc`.
|
||||
|
||||
#### 67. [ulimit][54] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/unlimit.jpg)
|
||||
|
||||
ulimit is a built-in tool that monitors system resources and keeps a limit so any of the monitored resources don’t go overboard. For instance making a fork bomb where a properly configured ulimit is in place would be totally fine.
|
||||
|
||||
#### 68. [cpulimit][55] ####
|
||||
|
||||
CPUlimit is a small tool that monitors and then limits the CPU usage of a process. It’s particularly useful to make batch jobs not eat up too many CPU cycles.
|
||||
|
||||
#### 69. lshw ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lshw.jpg)
|
||||
|
||||
lshw is a small built-in tool that extracts detailed information about the hardware configuration of the machine. It can output everything from CPU version and speed to mainboard configuration.
|
||||
|
||||
#### 70. w ####
|
||||
|
||||
W is a built-in command that displays information about the users currently using the machine and their processes.
|
||||
|
||||
#### 71. lsof ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lsof.jpg)
|
||||
|
||||
lsof is a built-in tool that gives you a list of all open files and network connections. From there you can narrow it down to files opened by processes, based on the process name, by a specific user or perhaps kill all processes that belongs to a specific user.
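A couple of hedged examples (the user name and port below are placeholders):

lsof -u tecmint      # files opened by a specific user
lsof -i :80          # processes with port 80 open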
|
||||
|
||||
### Infrastructure monitoring tools ###
|
||||
|
||||
#### 72. Server Density ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/server-density-monitoring.png)
|
||||
|
||||
Our [server monitoring tool][56]! It has a web interface that allows you to set alerts and view graphs for all system and network metrics. You can also set up monitoring of websites whether they are up or down. Server Density allows you to set permissions for users and you can extend your monitoring with our plugin infrastructure or api. The service already supports Nagios plugins.
|
||||
|
||||
#### 73. [OpenNMS][57] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/opennms.jpg)
|
||||
|
||||
OpenNMS has four main functional areas: event management and notifications; discovery and provisioning; service monitoring and data collection. It’s designed to be customizable to work in a variety of network environments.
|
||||
|
||||
#### 74. [SysUsage][58] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sysusage.jpg)
|
||||
|
||||
SysUsage monitors your system continuously via Sar and other system commands. It also allows notifications to alarm you once a threshold is reached. SysUsage itself can be run from a centralized place where all the collected statistics are also being stored. It has a web interface where you can view all the stats.
|
||||
|
||||
#### 75. [brainypdm][59] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/brainypdm.jpg)
|
||||
|
||||
brainypdm is a data management and monitoring tool that can gather data from Nagios or other generic sources to make graphs. It’s cross-platform, has custom graphs and is web based.
|
||||
|
||||
#### 76. [PCP][60] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pcp.jpg)
|
||||
|
||||
PCP has the capability of collating metrics from multiple hosts and does so efficiently. It also has a plugin framework so you can make it collect the specific metrics that are important to you. You can access graph data through either a web interface or a GUI. Good for monitoring large systems.
|
||||
|
||||
#### 77. [KDE system guard][61] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/kdesystemguard.jpg)
|
||||
|
||||
This tool is both a system monitor and task manager. You can view server metrics from several machines through the worksheet, and if a process needs to be killed or started, it can be done within KDE System Guard.
|
||||
|
||||
#### 78. [Munin][62] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/munin.jpg)
|
||||
|
||||
Munin is both a network and a system monitoring tool which offers alerts for when metrics go beyond a given threshold. It uses RRDtool to create the graphs and it has a web interface to display them. Its emphasis is on plug and play capabilities, with a number of plugins available.
|
||||
|
||||
#### 79. [Nagios][63] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nagios.jpg)
|
||||
|
||||
Nagios is a system and network monitoring tool that helps you monitor your many servers. It has support for alerting when things go wrong. It also has many plugins written for the platform.
|
||||
|
||||
#### 80. [Zenoss][64] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zenoss.jpg)
|
||||
|
||||
Zenoss provides a web interface that allows you to monitor all system and network metrics. Moreover it discovers network resources and changes in network configurations. It has alerts for you to take action on and it supports the Nagios plugins.
|
||||
|
||||
#### 81. [Cacti][65] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cacti.jpg)
|
||||
|
||||
(And one for luck!) Cacti is a network graphing solution that uses the RRDtool data storage. It allows a user to poll services at predetermined intervals and graph the results. Cacti can be extended to monitor a source of your choice through shell scripts.
|
||||
|
||||
#### 82. [Zabbix][66] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zabbix-monitoring.png)
|
||||
|
||||
Zabbix is an open source infrastructure monitoring solution. It can use most databases out there to store the monitoring statistics. The Core is written in C and has a frontend in PHP. If you don’t like installing an agent, Zabbix might be an option for you.
|
||||
|
||||
### Bonus section: ###
|
||||
|
||||
Thanks for your suggestions. It’s an oversight on our part that we’ll have to go back through and renumber all the headings. In light of that, here’s a short section at the end for some of the Linux monitoring tools recommended by you:
|
||||
|
||||
#### 83. [collectd][67] ####
|
||||
|
||||
Collectd is a Unix daemon that collects all your monitoring statistics. It uses a modular design and plugins to fill in any niche monitoring. This way collectd stays as lightweight and customizable as possible.
|
||||
|
||||
#### 84. [Observium][68] ####
|
||||
|
||||
Observium is an auto-discovering network monitoring platform supporting a wide range of hardware platforms and operating systems. Observium focuses on providing a beautiful and powerful yet simple and intuitive interface to the health and status of your network.
|
||||
|
||||
#### 85. Nload ####
|
||||
|
||||
It’s a command line tool that monitors network throughput. It’s neat because it visualizes the incoming and outgoing traffic using two graphs, along with some additional useful data like the total amount of transferred data. You can install it with
|
||||
|
||||
yum install nload
|
||||
|
||||
or
|
||||
|
||||
sudo apt-get install nload
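Once installed, it can be pointed at a specific interface, for example (the interface name will vary on your system):

    nload eth0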
|
||||
|
||||
#### 86. [SmokePing][69] ####
|
||||
|
||||
SmokePing keeps track of the network latencies of your network and visualises them too. There is a wide range of latency measurement plugins developed for SmokePing. If a GUI is important to you, there is ongoing development to make that happen.
|
||||
|
||||
#### 87. [MobaXterm][70] ####
|
||||
|
||||
If you’re working in a Windows environment day in and day out, you may feel limited by the terminal Windows provides. MobaXterm comes to the rescue and allows you to use many of the terminal commands commonly found in Linux, which will help you tremendously with your monitoring needs!
|
||||
|
||||
#### 88. [Shinken monitoring][71] ####
|
||||
|
||||
Shinken is a monitoring framework that is a total rewrite of Nagios in Python. It aims to enhance flexibility and the management of large environments, while still keeping all your Nagios configuration and plugins.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.serverdensity.com/80-linux-monitoring-tools-know/
|
||||
|
||||
作者:[Jonathan Sundqvist][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
|
||||
[a]:https://www.serverdensity.com/
|
||||
[1]:http://hisham.hm/htop/
|
||||
[2]:http://www.atoptool.nl/
|
||||
[3]:https://github.com/JeremyJones/Apachetop
|
||||
[4]:http://www.proftpd.org/docs/howto/Scoreboard.html
|
||||
[5]:http://jeremy.zawodny.com/mysql/mytop/
|
||||
[6]:https://01.org/powertop
|
||||
[7]:http://guichaz.free.fr/iotop/
|
||||
[8]:http://www.ntop.org/products/ntop/
|
||||
[9]:http://www.ex-parrot.com/pdw/iftop/
|
||||
[10]:http://jnettop.kubs.info/wiki/
|
||||
[11]:http://bandwidthd.sourceforge.net/
|
||||
[12]:http://etherape.sourceforge.net/
|
||||
[13]:https://www.kernel.org/pub/software/network/ethtool/
|
||||
[14]:http://nethogs.sourceforge.net/
|
||||
[15]:http://iptraf.seul.org/
|
||||
[16]:http://ngrep.sourceforge.net/
|
||||
[17]:http://oss.oetiker.ch/mrtg/
|
||||
[18]:https://github.com/tgraf/bmon/
|
||||
[19]:http://www.phildev.net/iptstate/index.shtml
|
||||
[20]:https://unix4lyfe.org/darkstat/
|
||||
[21]:http://humdi.net/vnstat/
|
||||
[22]:http://nmap.org/
|
||||
[23]:http://www.bitwizard.nl/mtr/
|
||||
[24]:http://www.tcpdump.org/
|
||||
[25]:http://justniffer.sourceforge.net/
|
||||
[26]:http://nmon.sourceforge.net/pmwiki.php
|
||||
[27]:http://conky.sourceforge.net/
|
||||
[28]:https://github.com/nicolargo/glances
|
||||
[29]:https://packages.debian.org/sid/utils/saidar
|
||||
[30]:http://oss.oetiker.ch/rrdtool/
|
||||
[31]:http://mmonit.com/monit
|
||||
[32]:http://sourceforge.net/projects/procexp/
|
||||
[33]:http://packages.ubuntu.com/lucid/utils/discus
|
||||
[34]:http://www.pogo.org.uk/~mark/xosview/
|
||||
[35]:http://dag.wiee.rs/home-made/dstat/
|
||||
[36]:http://www.net-snmp.org/
|
||||
[37]:http://inotify.aiken.cz/?section=incron&page=about&lang=en
|
||||
[38]:http://www.monitorix.org/
|
||||
[39]:http://sebastien.godard.pagesperso-orange.fr/
|
||||
[40]:http://collectl.sourceforge.net/
|
||||
[41]:http://sebastien.godard.pagesperso-orange.fr/
|
||||
[42]:http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html
|
||||
[43]:http://members.dslextreme.com/users/billw/gkrellm/gkrellm.html
|
||||
[44]:http://freecode.com/projects/gnome-system-monitor
|
||||
[45]:http://goaccess.io/
|
||||
[46]:http://sourceforge.net/projects/logwatch/
|
||||
[47]:http://sourceforge.net/projects/swatch/
|
||||
[48]:http://www.vanheusden.com/multitail/
|
||||
[49]:http://www.gnu.org/software/acct/
|
||||
[50]:http://whowatch.sourceforge.net/
|
||||
[51]:http://sourceforge.net/projects/strace/
|
||||
[52]:http://dtrace.org/blogs/about/
|
||||
[53]:http://www.webmin.com/
|
||||
[54]:http://ss64.com/bash/ulimit.html
|
||||
[55]:https://github.com/opsengine/cpulimit
|
||||
[56]:https://www.serverdensity.com/server-monitoring/
|
||||
[57]:http://www.opennms.org/
|
||||
[58]:http://sysusage.darold.net/
|
||||
[59]:http://sourceforge.net/projects/brainypdm/
|
||||
[60]:http://www.pcp.io/
|
||||
[61]:https://userbase.kde.org/KSysGuard
|
||||
[62]:http://munin-monitoring.org/
|
||||
[63]:http://www.nagios.org/
|
||||
[64]:http://www.zenoss.com/
|
||||
[65]:http://www.cacti.net/
|
||||
[66]:http://www.zabbix.com/
|
||||
[67]:https://collectd.org/
|
||||
[68]:http://www.observium.org/
|
||||
[69]:http://oss.oetiker.ch/smokeping/
|
||||
[70]:http://mobaxterm.mobatek.net/
|
||||
[71]:http://www.shinken-monitoring.org/
|
@ -1,195 +0,0 @@
|
||||
Optimize Web Delivery with these Open Source Tools
|
||||
================================================================================
|
||||
Web proxy software forwards HTTP requests without modifying the traffic in any way. It can be configured as a transparent proxy with no client-side configuration required. It can also be used as a reverse proxy front-end to websites; here the cache serves an unlimited number of clients for one or more web servers.
|
||||
|
||||
Web proxies are versatile tools. They have a wide variety of uses, from caching web, DNS and other lookups, to speeding up the delivery of a web server / reducing bandwidth consumption. Web proxy software can also harden security by filtering traffic and anonymizing connections, and offer media-range limitations. This software is used by high-profile, high-traffic websites such as The New York Times, The Guardian, and social media and content sites such as Twitter, Facebook, and Wikipedia.
|
||||
|
||||
Web caches have become a vital mechanism for optimising the amount of data that is delivered in a given period of time. Good web caches also help to minimise latency, serving pages as quickly as possible. This helps to prevent the end user from becoming impatient while waiting for content to be delivered. Web caches optimise the data flow between client and server. They also help to conserve bandwidth by caching frequently-delivered content. If you need to reduce server load and improve delivery speed of your content, it is definitely worth exploring the benefits offered by web cache software.
|
||||
|
||||
To provide an insight into the quality of software available for Linux, I feature below 5 excellent open source web proxy tools. Some of them are full-featured; a couple of them have very modest resource needs.
|
||||
|
||||
### Squid ###
|
||||
|
||||
Squid is a high-performance open source proxy caching server and web cache daemon. It supports FTP, Internet Gopher, HTTPS, TLS, and SSL. It handles all requests in a single, non-blocking, I/O-driven process over IPv4 or IPv6.
|
||||
|
||||
Squid consists of a main server program squid, a Domain Name System lookup program dnsserver, some optional programs for rewriting requests and performing authentication, together with some management and client tools.
|
||||
|
||||
Squid offers a rich access control, authorization and logging environment to develop web proxy and content serving applications.
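As a minimal illustrative sketch of what that configuration can look like (an excerpt only; the port, network range and cache path are placeholders, and this is not a hardened setup):

    # /etc/squid/squid.conf (excerpt)
    http_port 3128
    acl localnet src 192.168.0.0/16
    http_access allow localnet
    http_access deny all
    cache_dir ufs /var/spool/squid 1024 16 256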
|
||||
|
||||
Features include:
|
||||
|
||||
- Web proxy:
|
||||
- Caching to reduce access time and bandwidth use
|
||||
- Keeps meta data and especially hot objects cached in RAM
|
||||
- Caches DNS lookups
|
||||
- Supports non-blocking DNS lookups
|
||||
- Implements negative caching of failed requests
|
||||
- Squid caches can be arranged in a hierarchy or mesh for additional bandwidth savings
|
||||
- Enforce site-usage policies with extensive access controls
|
||||
- Anonymize connections, such as disabling or changing specific header fields in a client's HTTP request
|
||||
- Reverse proxy
|
||||
- Media-range limitations
|
||||
- Supports SSL
|
||||
- Support for IPv6
|
||||
- Error Page Localization - error pages presented by Squid may now be localized per-request to match the visitors local preferred language
|
||||
- Connection Pinning (for NTLM Auth Passthrough) - a workaround which permits Web servers to use Microsoft NTLM Authentication instead of HTTP standard authentication through a web proxy
|
||||
- Quality of Service (QoS) Flow support
|
||||
- Select a TOS/Diffserv value to mark local hits
|
||||
- Select a TOS/Diffserv value to mark peer hits
|
||||
- Selectively mark only sibling or parent requests
|
||||
- Allows any HTTP response towards clients to have the TOS value of the response coming from the remote server preserved
|
||||
- Mask certain bits in the TOS received from the remote server, before copying the value to the TOS send towards clients
|
||||
- SSL Bump (for HTTPS Filtering and Adaptation) - Squid-in-the-middle decryption and encryption of CONNECT tunneled SSL traffic, using configurable client- and server-side certificates
|
||||
- eCAP Adaptation Module support
|
||||
- ICAP Bypass and Retry enhancements - ICAP is now extended with full bypass and dynamic chain routing to handle multiple adaptation services.
|
||||
- ICY streaming protocol support - commonly known as SHOUTcast multimedia streams
|
||||
- Dynamic SSL Certificate Generation
|
||||
- Support for the Internet Content Adaptation Protocol (ICAP)
|
||||
- Full request logging
|
||||
- Anonymize connections
|
||||
|
||||
- Website: [www.squid-cache.org][1]
|
||||
- Developer: National Laboratory for Applied Networking Research (NLANR) and Internet volunteers
|
||||
- License: GNU GPL v2
|
||||
- Version Number: 4.0.1
|
||||
|
||||
### Privoxy ###
|
||||
|
||||
Privoxy (Privacy Enhancing Proxy) is a non-caching Web proxy with advanced filtering capabilities for enhancing privacy, modifying web page data and HTTP headers, controlling access, and removing ads and other obnoxious Internet junk. Privoxy has a flexible configuration and can be customized to suit individual needs and tastes. It supports both stand-alone systems and multi-user networks.
|
||||
|
||||
Privoxy uses the concept of actions in order to manipulate the data stream between the browser and remote sites.
|
||||
|
||||
Features include:
|
||||
|
||||
- Highly configurable - completely personalize your installation
|
||||
- Ad blocking
|
||||
- Cookie management
|
||||
- Supports "Connection: keep-alive". Outgoing connections can be kept alive independently from the client
|
||||
- Supports IPv6
|
||||
- Tagging which allows changing the behaviour based on client and server headers
|
||||
- Run as an "intercepting" proxy
|
||||
- Sophisticated actions and filters for manipulating both server and client headers
|
||||
- Can be chained with other proxies
|
||||
- Integrated browser-based configuration and control utility. Browser-based tracing of rule and filter effects. Remote toggling
|
||||
- Web page filtering (text replacements, removes banners based on size, invisible "web-bugs" and HTML annoyances, etc)
|
||||
- Modularized configuration that allows for standard settings and user settings to reside in separate files, so that installing updated actions files won't overwrite individual user settings
|
||||
- Support for Perl Compatible Regular Expressions in the configuration files, and a more sophisticated and flexible configuration syntax
|
||||
- GIF de-animation
|
||||
- Bypass many click-tracking scripts (avoids script redirection)
|
||||
- User-customizable HTML templates for most proxy-generated pages (e.g. "blocked" page)
|
||||
- Auto-detection and re-reading of config file changes
|
||||
- Most features are controllable on a per-site or per-location basis
|
||||
|
||||
- Website: [www.privoxy.org][2]
|
||||
- Developer: Fabian Keil (lead developer), David Schmidt, and many other contributors
|
||||
- License: GNU GPL v2
|
||||
- Version Number: 3.4.2
|
||||
|
||||
### Varnish Cache ###
|
||||
|
||||
Varnish Cache is a web accelerator written with performance and flexibility in mind. Its modern architecture offers significantly better performance. It typically speeds up delivery by a factor of 300 - 1000x, depending on your architecture. Varnish stores web pages in memory so the web servers do not have to create the same web page repeatedly. The web server only recreates a page when it is changed. When content is served from memory, this happens a lot faster than anything else.
|
||||
|
||||
Additionally, Varnish can serve web pages much faster than any application server is capable of - giving the website a significant speed enhancement.
|
||||
|
||||
For a cost-effective configuration, Varnish Cache needs between 1-16GB of memory and an SSD disk.
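A minimal sketch of running it in front of a backend web server (the addresses, ports and cache size are placeholders):

    # listen on port 6081, proxy to a backend on localhost:8080,
    # and keep up to 1 GB of cached objects in memory
    varnishd -a :6081 -b localhost:8080 -s malloc,1G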
|
||||
|
||||
Features include:
|
||||
|
||||
- Modern design
|
||||
- VCL - a very flexible configuration language. The VCL configuration is translated to C, compiled, loaded and executed giving flexibility and speed
|
||||
- Load balancing using both a round-robin and a random director, both with a per-backend weighting
|
||||
- DNS, Random, Hashing and Client IP based Directors
|
||||
- Load balance between multiple backends
|
||||
- Support for Edge Side Includes including stitching together compressed ESI fragments
|
||||
- Heavily threaded
|
||||
- URL rewriting
|
||||
- Cache multiple vhosts with a single Varnish
|
||||
- Log data is stored in shared memory
|
||||
- Basic health-checking of backends
|
||||
- Graceful handling of "dead" backends
|
||||
- Administered by a command line interface
|
||||
- Use In-line C to extend Varnish
|
||||
- Can be used on the same system as Apache
|
||||
- Run multiple Varnish on the same system
|
||||
- Support for HAProxy's PROXY protocol. This protocol adds a small header to each incoming TCP connection that describes who the real client is, added by (for example) an SSL terminating process
|
||||
- Warm and cold VCL states
|
||||
- Plugin support with Varnish Modules, called VMODs
|
||||
- Backends defined through VMODs
|
||||
- Gzip Compression and Decompression
|
||||
- HTTP Streaming Pass & Fetch
|
||||
- Saint and Grace mode. Saint Mode allows for unhealthy backends to be blacklisted for a period of time, preventing them from serving traffic when using Varnish as a load balancer. Grace mode allows Varnish to serve an expired version of a page or other asset in cases where Varnish is unable to retrieve a healthy response from the backend
|
||||
- Experimental support for Persistent Storage, without LRU eviction
|
||||
|
||||
- Website: [www.varnish-cache.org][3]
|
||||
- Developer: Varnish Software
|
||||
- License: FreeBSD
|
||||
- Version Number: 4.1.0
|
||||
|
||||
### Polipo ###
|
||||
|
||||
Polipo is an open source caching HTTP proxy which has modest resource needs.
|
||||
|
||||
It listens to requests for web pages from your browser and forwards them to web servers, and forwards the servers’ replies to your browser. In the process, it optimises and cleans up the network traffic. It is similar in spirit to WWWOFFLE, but the implementation techniques are more like the ones used by Squid.
|
||||
|
||||
Polipo aims at being a compliant HTTP/1.1 proxy. It should work with any web site that complies with either HTTP/1.1 or the older HTTP/1.0.
|
||||
|
||||
Features include:
|
||||
|
||||
- HTTP 1.1, IPv4 & IPv6, traffic filtering and privacy-enhancement
|
||||
- Uses HTTP/1.1 pipelining if it believes that the remote server supports it, whether the incoming requests are pipelined or come in simultaneously on multiple connections
|
||||
- Cache the initial segment of an instance if the download has been interrupted, and, if necessary, complete it later using Range requests
|
||||
- Upgrade client requests to HTTP/1.1 even if they come in as HTTP/1.0, and up- or downgrade server replies to the client's capabilities
|
||||
- Complete support for IPv6 (except for scoped (link-local) addresses)
|
||||
- Use as a bridge between the IPv4 and IPv6 Internets
|
||||
- Content-filtering
|
||||
- Can use a technique known as Poor Man's Multiplexing to reduce latency
|
||||
- SOCKS 4 and SOCKS 5 protocol support
|
||||
- HTTPS proxying
|
||||
- Behaves as a transparent proxy
|
||||
- Run Polipo together with Privoxy or tor
|
||||
|
||||
- Website: [www.pps.univ-paris-diderot.fr/~jch/software/polipo/][4]
|
||||
- Developer: Juliusz Chroboczek, Christopher Davis
|
||||
- License: MIT License
|
||||
- Version Number: 1.1.1
|
||||
|
||||
### Tinyproxy ###
|
||||
|
||||
Tinyproxy is a lightweight open source web proxy daemon. It is designed to be fast and yet small. It is useful for cases such as embedded deployments where a full featured HTTP proxy is required, but the system resources for a larger proxy are unavailable.
|
||||
|
||||
Tinyproxy is very useful in a small network setting, where a larger proxy would either be too resource intensive, or a security risk. One of the key features of Tinyproxy is the buffering connection concept. In effect, Tinyproxy will buffer a high speed response from a server, and then relay it to a client at the highest speed the client will accept. This feature greatly reduces the problems with sluggishness on the net.
|
||||
|
||||
Features:
|
||||
|
||||
- Easy to modify
|
||||
- Anonymous mode - allows specification of individual HTTP headers that should be allowed through, and which should be blocked
|
||||
- HTTPS support - Tinyproxy allows forwarding of HTTPS connections without modifying traffic in any way through the CONNECT method
|
||||
- Remote monitoring - access proxy statistics from afar, letting you know exactly how busy the proxy is
|
||||
- Load average monitoring - configure software to refuse connections after the server load reaches a certain point
|
||||
- Access control - configure to only allow connections from certain subnets or IP addresses
|
||||
- Secure - run without any special privileges, thus minimizing the chance of system compromise
|
||||
- URL based filtering - allows domain and URL-based black- and whitelisting
|
||||
- Transparent proxying - configure as a transparent proxy, so that a proxy can be used without any client-side configuration
|
||||
- Proxy chaining - use an upstream proxy server for outbound connections, instead of direct connections to the target server, creating a so-called proxy chain
|
||||
- Privacy features - restrict both what data comes to your web browser from the HTTP server (e.g., cookies), and to restrict what data is allowed through from your web browser to the HTTP server (e.g., version information)
|
||||
- Small footprint - the memory footprint is about 2MB with glibc, and the CPU load increases linearly with the number of simultaneous connections (depending on the speed of the connection). Tinyproxy can be run on an old machine without affecting performance
|
||||
|
||||
- Website: [banu.com/tinyproxy][5]
|
||||
- Developer: Robert James Kaes and contributors
|
||||
- License: GNU GPL v2
|
||||
- Version Number: 1.8.3
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxlinks.com/article/20151101020309690/WebDelivery.html
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.squid-cache.org/
|
||||
[2]:http://www.privoxy.org/
|
||||
[3]:https://www.varnish-cache.org/
|
||||
[4]:http://www.pps.univ-paris-diderot.fr/%7Ejch/software/polipo/
|
||||
[5]:https://banu.com/tinyproxy/
|
@ -1,70 +0,0 @@
|
||||
Translating by ZTinoZ
|
||||
7 ways hackers can use Wi-Fi against you
|
||||
================================================================================
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/intro_title-100626673-orig.jpg)
|
||||
|
||||
### 7 ways hackers can use Wi-Fi against you ###
|
||||
|
||||
Wi-Fi — oh so convenient, yet oh so dangerous. Here are seven ways you could be giving away your identity through a Wi-Fi connection and what to do instead.
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/1_free-hotspots-100626674-orig.jpg)
|
||||
|
||||
### Using free hotspots ###
|
||||
|
||||
They seem to be everywhere, and their numbers are expected to [quadruple over the next four years][1]. But many of them are untrustworthy, created just so your login credentials, to email or even more sensitive accounts, can be picked up by hackers using “sniffers” — software that captures any information you submit over the connection. The best defense against sniffing hackers is to use a VPN (virtual private network). A VPN keeps your private data protected because it encrypts what you input.
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/2_online-banking-100626675-orig.jpg)
|
||||
|
||||
### Banking online ###
|
||||
|
||||
You might think that no one needs to be warned against banking online using free Wi-Fi, but cybersecurity firm Kaspersky Lab says that [more than 100 banks worldwide have lost $900 million][2] from cyberhacking, so it would seem that a lot of people are doing it. If you want to use the free Wi-Fi in a coffee shop because you’re confident it will be legitimate, confirm the exact network name with the barista. It’s pretty easy for [someone else in the shop with a router to set up an open connection][3] with a name that seems like it would be the name of the shop’s Wi-Fi.
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/3_keeping-wifi-on-100626676-orig.jpg)
|
||||
|
||||
### Keeping Wi-Fi on all the time ###
|
||||
|
||||
When your phone’s Wi-Fi is automatically enabled, you can be connected to an unsecure network without even realizing it. Use your phone’s [location-based Wi-Fi feature][4], if it’s available. It will turn off your Wi-Fi when you’re away from your saved networks and will turn back on when you’re within range.
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/4_not-using-firewall-100626677-orig.jpg)
|
||||
|
||||
### Not using a firewall ###
|
||||
|
||||
A firewall is your first line of defense against malicious intruders. It’s meant to let good traffic through your computer on a network and keep hackers and malware out. You should turn it off only when your antivirus software has its own firewall.
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/5_browsing-unencrypted-sites-100626678-orig.jpg)
|
||||
|
||||
### Browsing unencrypted websites ###
|
||||
|
||||
Sad to say, [55% of the Web’s top 1 million sites don’t offer encryption][5]. An unencrypted website allows all data transmissions to be viewed by the prying eyes of hackers. Your browser will indicate when a site is secure (you’ll see a gray padlock with Mozilla Firefox, for example, and a green lock icon with Chrome). But even a secure website can’t protect you from sidejackers, who can steal the cookies from a website you visited, whether it’s a valid site or not, through a public network.
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/6_updating-security-software-100626679-orig.jpg)
|
||||
|
||||
### Not updating your security software ###
|
||||
|
||||
If you want to ensure that your own network is well protected, upgrade the firmware of your router. All you have to do is go to your router’s administration page to check. Normally, you can download the newest firmware right from the manufacturer’s site.
|
||||
|
||||
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/7_securing-home-wifi-100626680-orig.jpg)
|
||||
|
||||
### Not securing your home Wi-Fi ###
|
||||
|
||||
Needless to say, it is important to set up a password that is not too easy to guess, and change your connection’s default name. You can also filter your MAC address so your router will recognize only certain devices.
|
||||
|
||||
**Josh Althuser** is an open software advocate, Web architect and tech entrepreneur. Over the past 12 years, he has spent most of his time advocating for open-source software and managing teams and projects, as well as providing enterprise-level consultancy for Web applications and helping bring their products to the market. You may connect with him on [Twitter][6].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.networkworld.com/article/3003170/mobile-security/7-ways-hackers-can-use-wi-fi-against-you.html
|
||||
|
||||
作者:[Josh Althuser][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://twitter.com/JoshAlthuser
|
||||
[1]:http://www.pcworld.com/article/243464/number_of_wifi_hotspots_to_quadruple_by_2015_says_study.html
|
||||
[2]:http://www.nytimes.com/2015/02/15/world/bank-hackers-steal-millions-via-malware.html?hp&action=click&pgtype=Homepage&module=first-column-region%C2%AEion=top-news&WT.nav=top-news&_r=3
|
||||
[3]:http://news.yahoo.com/blogs/upgrade-your-life/banking-online-not-hacked-182159934.html
|
||||
[4]:http://pocketnow.com/2014/10/15/should-you-leave-your-smartphones-wifi-on-or-turn-it-off
|
||||
[5]:http://www.cnet.com/news/chrome-becoming-tool-in-googles-push-for-encrypted-web/
|
||||
[6]:https://twitter.com/JoshAlthuser
|
@ -1,64 +0,0 @@
|
||||
eSpeak: Text To Speech Tool For Linux
|
||||
================================================================================
|
||||
![Text to speech tool in Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Text-to-speech-Linux.jpg)
|
||||
|
||||
[eSpeak][1] is a command line tool for Linux that converts text to speech. This is a compact speech synthesizer that provides support for English and many other languages. It is written in C.
|
||||
|
||||
eSpeak reads the text from the standard input or the input file. The voice generated, however, is nowhere close to a human voice. But it is still a compact and handy tool if you want to use it in your projects.
|
||||
|
||||
Some of the main features of eSpeak are:
|
||||
|
||||
- A command line tool for Linux and Windows
|
||||
- Speaks text from a file or from stdin
|
||||
- Shared library version for use by other programs
|
||||
- SAPI5 version for Windows, so it can be used with screen-readers and other programs that support the Windows SAPI5 interface.
|
||||
- Ported to other platforms, including Android, Mac OSX etc.
|
||||
- Several voice characteristics to choose from
|
||||
- speech output can be saved as [.WAV file][2]
|
||||
- SSML ([Speech Synthesis Markup Language][3]) is supported partially along with HTML
|
||||
- Tiny in size, the complete program with language support etc is under 2 MB.
|
||||
- Can translate text into phoneme codes, so it could be adapted as a front end for another speech synthesis engine.
|
||||
- Development tools available for producing and tuning phoneme data.
|
||||
|
||||
### Install eSpeak ###
|
||||
|
||||
To install eSpeak in Ubuntu based system, use the command below in a terminal:
|
||||
|
||||
sudo apt-get install espeak
|
||||
|
||||
eSpeak is an old tool and I presume that it should be available in the repositories of other Linux distributions such as Arch Linux, Fedora etc. You can install eSpeak easily using dnf, pacman etc.
|
||||
|
||||
To use eSpeak, just run `espeak`, type your text and press enter to hear it read aloud. Use Ctrl+C to close the running program.
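A few example invocations (the file name, voice and speed values below are just illustrative):

    # speak a short phrase
    espeak "Hello from the command line"
    # read a text file aloud
    espeak -f notes.txt
    # use a different voice variant and a slower speaking rate
    espeak -v en+f3 -s 130 "A slightly slower English voice"
    # write the output to a WAV file instead of playing it
    espeak -w output.wav "Saved to a file"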
|
||||
|
||||
![eSpeak command line](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/eSpeak-example.png)
|
||||
|
||||
There are several other options available. You can browse through them through the help section of the program.
|
||||
|
||||
### GUI version: Gespeaker ###
|
||||
|
||||
If you prefer the GUI version over the command line, you can install Gespeaker that provides a GTK front end to eSpeak.
|
||||
|
||||
Use the command below to install Gespeaker:
|
||||
|
||||
sudo apt-get install gespeaker
|
||||
|
||||
The interface is straightforward and easy to use. You can explore it all by yourself.
|
||||
|
||||
![eSpeak GUI tool for text to speech in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/eSpeak-GUI.png)
|
||||
|
||||
While such tools might not be useful for general computing needs, they could be handy if you are working on a project where text to speech conversion is required. I’ll let you decide how to use this speech synthesizer.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/espeak-text-speech-linux/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://espeak.sourceforge.net/
|
||||
[2]:http://en.wikipedia.org/wiki/WAV
|
||||
[3]:http://en.wikipedia.org/wiki/Speech_Synthesis_Markup_Language
|
66
sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md
Normal file
66
sources/share/20151204 Review EXT4 vs. Btrfs vs. XFS.md
Normal file
@ -0,0 +1,66 @@
|
||||
bazz2222222222222222222222222222222222222222222
|
||||
Review EXT4 vs. Btrfs vs. XFS
|
||||
================================================================================
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/09/1385698302_funny_linux_wallpapers-593x445.jpg)
|
||||
|
||||
To be honest, one of the last things people think about is which file system their PC is using. Windows and Mac OS X users have even less reason to look, as they have essentially only one choice for their operating system: NTFS and HFS+ respectively. The Linux operating system, on the other hand, has plenty of file system options, with the current default being the widely used ext4. However, there is a push for changing the file system to something else called btrfs. But what makes btrfs better, what are the other file systems, and when can we expect the distributions to make the change?
|
||||
|
||||
Let’s first have a general look at file systems and what they really do, then we will make a small comparison between famous file systems.
|
||||
|
||||
### So, What Do File Systems Do? ###
|
||||
|
||||
Just in case you are unfamiliar with what file systems really do, it is actually simple when summarized. File systems are mainly used to control how data is stored after a program is no longer using it, how access to the data is controlled, what other information (metadata) is attached to the data itself, and so on. That does not sound like an easy thing to program, and it is definitely not. File systems are continually being revised to include more functionality while becoming more efficient at what they need to do. So, while it is a basic need of all computers, it is not quite as basic as it sounds.
|
||||
|
||||
### Why Partitioning? ###
|
||||
|
||||
Many people have a vague knowledge of what partitions are, since every operating system has the ability to create or remove them. It can seem strange that the Linux operating system uses more than one partition on the same disk even when using the standard installation procedure, so a few explanations are called for. One of the main goals of having different partitions is achieving higher data security in case of disaster.
|
||||
|
||||
By dividing your hard disk into partitions, data can be grouped and separated. When an accident occurs, only the data stored in the partition that took the hit will be damaged, while the data on other partitions will most likely survive. These principles date from the days when the Linux operating system didn’t have a journaled file system and any power failure might have led to disaster.
|
||||
|
||||
The use of partitions remains for security and robustness reasons, so that a breach in one part of the operating system does not automatically mean that the whole computer is at risk. This is currently the most important factor for partitioning. For example, users create scripts, programs or web applications that start filling up the disk. If that disk contains only one big partition, the entire system may stop functioning when the disk is full. If users store data on separate partitions, then only that data partition will be affected, while the system partitions and any other data partitions keep functioning.
|
||||
|
||||
Mind that a journaled file system only provides data security in the case of a power failure or sudden disconnection of the storage devices. It will not protect the data against bad blocks or logical errors in the file system. In such cases, the user should use a Redundant Array of Inexpensive Disks (RAID) solution.
|
||||
|
||||
### Why Switch File Systems? ###
|
||||
|
||||
The ext4 file system was an improvement over ext3, which was itself an improvement over ext2. While ext4 is a very solid file system that has been the default choice for almost all distributions for the past few years, it is built on an aging code base. Additionally, Linux users are seeking many new features in file systems which ext4 does not handle on its own. There is software that takes care of some of these needs, but performance-wise, being able to do such things at the file system level could be faster.
|
||||
|
||||
### Ext4 File System ###
|
||||
|
||||
Ext4 has some limits that are still quite impressive. The maximum file size is 16 tebibytes (roughly 17.6 terabytes), which is much bigger than any hard drive a regular consumer can currently buy, while the largest volume/partition you can make with ext4 is 1 exbibyte (roughly 1,152,921.5 terabytes). Ext4 is known to bring speed improvements over ext3 by using multiple techniques. Like most modern file systems, it is a journaling file system, meaning that it keeps a journal of where files are located on the disk and of any other changes that happen to the disk. Despite all of its features, it doesn’t support transparent compression, data deduplication, or transparent encryption. Snapshots are technically supported, but the feature is experimental at best.
|
||||
|
||||
### Btrfs File System ###
|
||||
|
||||
Btrfs, which many of us pronounce in different ways, for example Better FS, Butter FS, or B-Tree FS, is a file system made completely from scratch. Btrfs exists because its developers wanted to expand file system functionality to include snapshots, pooling, and checksums, among other things. While it is independent of ext4, it also wants to build on the ideas present in ext4 that are great for consumers and businesses alike, and incorporate additional features that will benefit everybody, but enterprises in particular. For enterprises running very large programs with very large databases, having a seemingly continuous file system across multiple hard drives can be very beneficial, as it makes consolidation of data much easier. Data deduplication can reduce the amount of actual space data occupies, and data mirroring also becomes easier with btrfs when there is a single, broad file system that needs to be mirrored.
|
||||
|
||||
The user can certainly still choose to create multiple partitions so that not everything needs to be mirrored. Considering that btrfs is able to span multiple hard drives, it is a very good thing that it can support 16 times more drive space than ext4. The maximum partition size of the btrfs file system is 16 exbibytes, and the maximum file size is 16 exbibytes too.
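As a hedged sketch of what that multi-device pooling can look like in practice (the device names and mount point are placeholders, and these commands would wipe the named drives):

    # create a btrfs volume mirrored across two drives (data and metadata)
    sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    sudo mount /dev/sdb /mnt/pool
    # later, grow the pool with another drive and rebalance the data
    sudo btrfs device add /dev/sdd /mnt/pool
    sudo btrfs balance start /mnt/pool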
|
||||
|
||||
### XFS File System ###
|
||||
|
||||
The XFS file system is an extension of the extent file system. XFS is a high-performance 64-bit journaling file system. Support for XFS was merged into the Linux kernel around 2002, and in 2009 Red Hat Enterprise Linux 5.4 added support for the XFS file system. XFS supports a maximum file system size of 8 exbibytes for the 64-bit file system. On the downside, an XFS file system can’t be shrunk, and it performs poorly when deleting large numbers of files. Currently, RHEL 7.0 uses XFS as the default file system.
|
||||
|
||||
### Final Thoughts ###
|
||||
|
||||
Unfortunately, the arrival date for the btrfs is not quite known. But officially, the next-generation file system is still classified as “unstable”, but if the user downloads the latest version of Ubuntu, he will be able to choose to install on a btrfs partition. When the btrfs will be classified actually as “stable” is still a mystery, but users shouldn’t expect the Ubuntu to use the btrfs by default until it’s indeed considered “stable”. It has been reported that Fedora 18 will use the btrfs as its default file system as by the time of its release a file system checker for the btrfs should exist. There is a good amount of work still left for the btrfs, as not all the features are yet implemented and the performance is a little sluggish if we compare it to the ext4.
|
||||
|
||||
So, which is better to use? Till now, the ext4 will be the winner despite the identical performance. But why? The answer will be the convenience as well as the ubiquity. The ext4 is still excellent file system for the desktop or workstation use. It is provided by default, so the user can install the operating system on it. Also, the ext4 supports volumes up to 1 Exabyte and files up to 16 Terabyte in size, so there’s still a plenty of room for the growth where space is concerned.
|
||||
|
||||
Btrfs might offer greater volumes of up to 16 exabytes and improved fault tolerance, but, for now, it feels more like an add-on file system rather than one integrated into the Linux operating system. For example, the btrfs-tools have to be present before a drive can be formatted with btrfs, which means that btrfs is not an option during Linux installation, though that can vary with the distribution.
|
||||
|
||||
Even though transfer rates are important, there’s more to a file system than just the speed of file transfers. Btrfs has many useful features such as Copy-on-Write (CoW), extensive checksums, snapshots, scrubbing, self-healing data, deduplication, and many other improvements that ensure data integrity. Btrfs lacks the RAID-Z features of ZFS, so RAID is still in an experimental state with btrfs. For pure data storage, however, btrfs is the winner over ext4, but time will still tell.
|
||||
|
||||
For the moment, ext4 seems to be the better choice on a desktop system, since it is offered as the default file system and it is faster than btrfs when transferring files. Btrfs is definitely worth looking into, but completely replacing ext4 on desktop Linux might still be a few years away. Data farms and large storage pools could tell a different story and show the real differences between ext4, XFS, and btrfs.
|
||||
|
||||
If you have a different or additional opinion, kindly let us know by commenting on this article.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/review-ext4-vs-btrfs-vs-xfs/
|
||||
|
||||
作者:[M.el Khamlichi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/pirat9/
|
@ -1,220 +0,0 @@
|
||||
19 Years of KDE History: Step by Step
|
||||
================================================================================
|
||||
注:youtube 视频
|
||||
<iframe width="660" height="371" src="https://www.youtube.com/embed/1UG4lQOMBC4?feature=oembed" frameborder="0" allowfullscreen></iframe>
|
||||
|
||||
### Introduction ###
|
||||
|
||||
KDE is one of the most functional desktop environments ever. It’s open source and free to use. 19 years ago, on 14 October 1996, German programmer Matthias Ettrich started development of this beautiful environment. KDE provides the shell and many applications for everyday use. Today KDE is used by hundreds of thousands of people all over the world, on Unix and Windows operating systems. 19 years is a serious age for a software project. Time to go back and see how it all began.
|
||||
|
||||
K Desktop Environment brought some new aspects: a new design, a good look & feel, consistency, ease of use, and powerful applications for typical desktop work and special use cases. The name “KDE” is an easy wordplay on “Common Desktop Environment”, with “K” standing for “Kool”. The first KDE version used Trolltech’s proprietary Qt framework with dual licensing: the open source QPL (Q Public License) and a proprietary commercial license. In 2000 Trolltech released some Qt libraries under the GPL; Qt 4.5 was released under the LGPL 2.1. Since 2009 KDE has been compiled into three products: Plasma Workspaces (the shell), KDE Applications, and KDE Platform, together forming the KDE Software Compilation.
|
||||
|
||||
### Releases ###
|
||||
|
||||
#### Pre-Release – 14 October 1996 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/0b3.png)
|
||||
|
||||
Kool Desktop Environment. The word “Kool” would be dropped later. In the beginning, all components were released to the developer community separately, without any coordinated timeframe across the overall project. The first KDE communication took place via a mailing list called kde@fiwi02.wiwi.uni-Tubingen.de.
|
||||
|
||||
#### KDE 1.0 – July 12, 1998 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/10.png)
|
||||
|
||||
This version received mixed reception. Many criticized the use of the Qt software framework – back then under the FreeQt license which was claimed to not be compatible with free software – and advised the use of Motif or LessTif instead. Despite that criticism, KDE was well received by many users and made its way into the first Linux distributions.
|
||||
|
||||
![28 January 1999](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/11.png)
|
||||
|
||||
28 January 1999
|
||||
|
||||
An update, **K Desktop Environment 1.1**, was faster, more stable and included many small improvements. It also included a new set of icons, backgrounds and textures. Among this overhauled artwork was a new KDE logo by Torsten Rahn consisting of the letter K in front of a gear which is used in revised form to this day.
|
||||
|
||||
#### KDE 2.0 – October 23, 2000 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/20.png)
|
||||
|
||||
Major updates:

- DCOP (Desktop COmmunication Protocol), a client-to-client communications protocol
- KIO, an application I/O library
- KParts, a component object model
- KHTML, an HTML 4.0 compliant rendering and drawing engine
|
||||
|
||||
![26 February 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/21.png)
|
||||
|
||||
26 February 2001
|
||||
|
||||
**K Desktop Environment 2.1** release inaugurated the media player noatun, which used a modular, plugin design. For development, K Desktop Environment 2.1 was bundled with KDevelop.
|
||||
|
||||
![15 August 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/22.png)
|
||||
|
||||
15 August 2001
|
||||
|
||||
The **KDE 2.2** release featured up to a 50% improvement in application startup time on GNU/Linux systems and increased stability and capabilities for HTML rendering and JavaScript; some new features in KMail.
|
||||
|
||||
#### KDE 3.0 – April 3, 2002 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/30.png)
|
||||
|
||||
K Desktop Environment 3.0 introduced better support for restricted usage, a feature demanded by certain environments such as kiosks, Internet cafes and enterprise deployments, which disallows the user from having full access to all capabilities of a piece of software.
|
||||
|
||||
![28 January 2003](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/31.png)
|
||||
|
||||
28 January 2003
|
||||
|
||||
**K Desktop Environment 3.1** introduced new default window (Keramik) and icon (Crystal) styles as well as several feature enhancements.
|
||||
|
||||
![3 February 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/32.png)
|
||||
|
||||
3 February 2004
|
||||
|
||||
**K Desktop Environment 3.2** included new features, such as inline spell checking for web forms and emails, improved e-mail and calendaring support, tabs in Konqueror and support for Microsoft Windows desktop sharing protocol (RDP).
|
||||
|
||||
![19 August 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/33.png)
|
||||
|
||||
19 August 2004
|
||||
|
||||
**K Desktop Environment 3.3** focused on integrating different desktop components. Kontact was integrated with Kolab, a groupware application, and Kpilot. Konqueror was given better support for instant messaging contacts, with the capability to send files to IM contacts and support for IM protocols (e.g., IRC).
|
||||
|
||||
![16 March 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/34.png)
|
||||
|
||||
16 March 2005
|
||||
|
||||
**K Desktop Environment 3.4** focused on improving accessibility. The update added a text-to-speech system with support for Konqueror, Kate, KPDF, the standalone application KSayIt and text-to-speech synthesis on the desktop.
|
||||
|
||||
![29 November 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/35.png)
|
||||
|
||||
29 November 2005
|
||||
|
||||
**The K Desktop Environment 3.5** release added SuperKaramba, which provides integrated and simple-to-install widgets to the desktop. Konqueror was given an ad-block feature and became the second web browser to pass the Acid2 CSS test.
|
||||
|
||||
#### KDE SC 4.0 – January 11, 2008 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/400.png)
|
||||
|
||||
The majority of development went into implementing most of the new technologies and frameworks of KDE 4. Plasma and the Oxygen style were two of the biggest user-facing changes. Dolphin replaces Konqueror as file manager, Okular – default document viewer.
|
||||
|
||||
![29 July 2008](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/401.png)
|
||||
|
||||
29 July 2008
|
||||
|
||||
**KDE 4.1** includes a shared emoticon theming system which is used in PIM and Kopete, and DXS, a service that lets applications download and install data from the Internet with one click. Also introduced are GStreamer, QuickTime 7, and DirectShow 9 Phonon backends. New applications:

- Dragon Player
- Kontact
- Skanlite – software for scanners
- Step – physics simulator
- New games: Kdiamond, Kollision, KBreakout and others
|
||||
|
||||
![27 January 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/402.png)
|
||||
|
||||
27 January 2009
|
||||
|
||||
**KDE 4.2** is considered a significant improvement beyond KDE 4.1 in nearly all aspects, and a suitable replacement for KDE 3.5 for most users.
|
||||
|
||||
![4 August 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/403.png)
|
||||
|
||||
4 August 2009
|
||||
|
||||
**KDE 4.3** fixed over 10,000 bugs and implemented almost 2,000 feature requests. Integration with other technologies, such as PolicyKit, NetworkManager & Geolocation services, was another focus of this release.
|
||||
|
||||
![9 February 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/404.png)
|
||||
|
||||
9 February 2010
|
||||
|
||||
**KDE SC 4.4** is based on version 4.6 of the Qt 4 toolkit. New application – KAddressBook, first release of Kopete.
|
||||
|
||||
![10 August 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/405.png)
|
||||
|
||||
10 August 2010
|
||||
|
||||
**KDE SC 4.5** has some new features: integration of the WebKit library, an open-source web browser engine, which is used in major browsers such as Apple Safari and Google Chrome. KPackageKit replaced Kpackage.
|
||||
|
||||
![26 January 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/406.png)
|
||||
|
||||
26 January 2011
|
||||
|
||||
**KDE SC 4.6** has better OpenGL compositing along with the usual myriad of fixes and features.
|
||||
|
||||
![27 July 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/407.png)
|
||||
|
||||
27 July 2011
|
||||
|
||||
**KDE SC 4.7** has an updated KWin that is OpenGL ES 2.0 compatible, Qt Quick, a Plasma Desktop with many enhancements, and a lot of new functions in the general applications. 12k bugs were fixed.
|
||||
|
||||
![25 January 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/408.png)
|
||||
|
||||
25 January 2012
|
||||
|
||||
**KDE SC 4.8**: better KWin performance and Wayland support, new design of Dolphin.
|
||||
|
||||
![1 August 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/409.png)
|
||||
|
||||
1 August 2012
|
||||
|
||||
**KDE SC 4.9**: several improvements to the Dolphin file manager, including the reintroduction of in-line file renaming, back and forward mouse buttons, improvement of the places panel and better usage of file metadata.
|
||||
|
||||
![6 February 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/410.png)
|
||||
|
||||
6 February 2013
|
||||
|
||||
**KDE SC 4.10**: many of the default Plasma widgets were rewritten in QML, and Nepomuk, Kontact and Okular received significant speed improvements.
|
||||
|
||||
![14 August 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/411.png)
|
||||
|
||||
14 August 2013
|
||||
|
||||
**KDE SC 4.11**: Kontact and Nepomuk received many optimizations. The first generation Plasma Workspaces entered maintenance-only development mode.
|
||||
|
||||
![18 December 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/412.png)
|
||||
|
||||
18 December 2013
|
||||
|
||||
**KDE SC 4.12**: Kontact received substantial improvements, many small improvements.
|
||||
|
||||
![16 April 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/413.png)
|
||||
|
||||
16 April 2014
|
||||
|
||||
**KDE SC 4.13**: Nepomuk semantic desktop search was replaced with KDE’s in house Baloo. KDE SC 4.13 was released in 53 different translations.
|
||||
|
||||
![20 August 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/414.png)
|
||||
|
||||
20 August 2014
|
||||
|
||||
**KDE SC 4.14**: The release primarily focused on stability, with numerous bugs fixed and few new features added. This was the final KDE SC 4 release.
|
||||
|
||||
#### KDE Plasma 5.0 – July 15, 2014 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/500.png)
|
||||
|
||||
KDE Plasma 5 – the 5th generation of KDE. Massive improvements in design and system, a new default theme – Breeze, a complete migration to QML, better performance with OpenGL, and better support for HiDPI displays.
|
||||
|
||||
![11 November 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/501.png)
|
||||
|
||||
11 November 2014
|
||||
|
||||
**KDE Plasma 5.1**: Ported missing features from Plasma 4.
|
||||
|
||||
![27 January 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/502.png)
|
||||
|
||||
27 January 2015
|
||||
|
||||
**KDE Plasma 5.2**: New components: BlueDevil, KSSHAskPass, Muon, SDDM theme configuration, KScreen, GTK+ style configuration and KDecoration.
|
||||
|
||||
![28 April 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/503.png)
|
||||
|
||||
28 April 2015
|
||||
|
||||
**KDE Plasma 5.3**: Tech preview of Plasma Media Center. New Bluetooth and touchpad applets. Enhanced power management.
|
||||
|
||||
![25 August 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/504.png)
|
||||
|
||||
25 August 2015
|
||||
|
||||
**KDE Plasma 5.4**: Initial Wayland session, new QML-based audio volume applet, and alternative full-screen application launcher.
|
||||
|
||||
Big thanks to the [KDE][1] developers and community, Wikipedia for [descriptions][2] and all my readers. Be free and use the open source software like a KDE.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://tlhp.cf/kde-history/
|
||||
|
||||
作者:[Pavlo RudyiCategories][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://tlhp.cf/author/paul/
|
||||
[1]:https://www.kde.org/
|
||||
[2]:https://en.wikipedia.org/wiki/KDE_Plasma_5
|
@ -1,345 +0,0 @@
|
||||
sevenot translating
|
||||
A Linux User Using ‘Windows 10′ After More than 8 Years – See Comparison
|
||||
================================================================================
|
||||
Windows 10 is the newest member of the Windows NT family; general availability came on July 29, 2015. It is the successor to Windows 8.1. Windows 10 is supported on 32-bit Intel architecture, AMD64 and ARMv7 processors.
|
||||
|
||||
![Windows 10 and Linux Comparison](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-vs-Linux.jpg)
|
||||
|
||||
Windows 10 and Linux Comparison
|
||||
|
||||
As a Linux user for more than 8 continuous years, I thought I would test Windows 10, as it has been making a lot of news these days. This article is a rundown of my observations. I will be looking at everything from the perspective of a Linux user, so you may find it a bit biased towards Linux, but there is absolutely no false information here.
|
||||
|
||||
1. I searched Google with the text “download windows 10” and clicked the first link.
|
||||
|
||||
![Search Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Windows-10.jpg)
|
||||
|
||||
Search Windows 10
|
||||
|
||||
You may directly go to link : [https://www.microsoft.com/en-us/software-download/windows10ISO][1]
|
||||
|
||||
2. I was supposed to select an edition from ‘windows 10‘, ‘windows 10 KN‘, ‘windows 10 N‘ and ‘windows 10 single language‘.
|
||||
|
||||
![Select Windows 10 Edition](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Windows-10-Edition.jpg)
|
||||
|
||||
Select Windows 10 Edition
|
||||
|
||||
For those who want to know the details of the different editions of Windows 10, here are brief details of each edition.
|
||||
|
||||
- Windows 10 – Contains everything offered by Microsoft for this OS.
|
||||
- Windows 10N – This edition comes without Media-player.
|
||||
- Windows 10KN – This edition comes without media playing capabilities.
|
||||
- Windows 10 Single Language – Only one Language Pre-installed.
|
||||
|
||||
3. I selected the first option ‘Windows 10‘ and clicked ‘Confirm‘. Then I was supposed to select a product language. I chose ‘English‘.
|
||||
|
||||
I was provided with two download links, one for 32-bit and the other for 64-bit. I clicked 64-bit, as per my architecture.
|
||||
|
||||
![Download Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Download-Windows-10.jpg)
|
||||
|
||||
Download Windows 10
|
||||
|
||||
With my download speed (15Mbps), it took me 3 long hours to download it. Unfortunately there was no torrent file for the OS, which could otherwise have made the overall process smoother. The ISO image size is 3.8 GB.
|
||||
|
||||
I could not find a smaller image, but then the truth is that nothing like a net-installer image exists for Windows. Also, there is no published hash value to verify the ISO image against after it has been downloaded.
|
||||
|
||||
I wonder why Windows is so careless about such issues. To verify whether the ISO downloaded correctly, I would have to write the image to a disc or a USB flash drive, boot my system, and keep my fingers crossed until the setup finishes.
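On Linux, verification against a vendor-published checksum is a one-liner. The sketch below is only illustrative, since Microsoft does not provide such a file for this download; the `Win10.sha256` reference file named here is hypothetical, and the path matches the download location used later in this article:

    # compute the SHA-256 digest of the downloaded image
    sha256sum /home/avi/Downloads/Win10_English_x64.iso

    # if a reference file existed (hypothetical Win10.sha256 containing a line
    # of the form "<expected-hash>  Win10_English_x64.iso"), the check would be automatic:
    cd /home/avi/Downloads && sha256sum -c Win10.sha256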
|
||||
|
||||
Let's start. I made my USB flash drive bootable with the Windows 10 ISO using the dd command, as:
|
||||
|
||||
# dd if=/home/avi/Downloads/Win10_English_x64.iso of=/dev/sdb1 bs=512M; sync
|
||||
|
||||
It took a few minutes to complete the process. I then rebooted the system and chose to boot from the USB flash drive in my UEFI (BIOS) settings.
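A side note on the dd invocation above: for a raw image copy, the target is usually the whole device node (for example /dev/sdb) rather than a partition such as /dev/sdb1, and the device name differs from system to system. A minimal sketch, assuming the flash drive shows up as the hypothetical /dev/sdX:

    # double-check which device is the USB stick before writing to it
    lsblk -o NAME,SIZE,MODEL

    # write the ISO to the whole device (replace /dev/sdX with the real device), then flush caches
    sudo dd if=/home/avi/Downloads/Win10_English_x64.iso of=/dev/sdX bs=4M && sync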
|
||||
|
||||
#### System Requirements ####
|
||||
|
||||
If you are upgrading
|
||||
|
||||
- Upgrade supported only from Windows 7 SP1 or Windows 8.1
|
||||
|
||||
If you are fresh Installing
|
||||
|
||||
- Processor: 1GHz or faster
|
||||
- RAM : 1GB and Above(32-bit), 2GB and Above(64-bit)
|
||||
- HDD: 16GB and Above(32-bit), 20GB and Above(64-bit)
|
||||
- Graphic card: DirectX 9 or later + WDDM 1.0 Driver
|
||||
|
||||
### Installation of Windows 10 ###
|
||||
|
||||
1. Windows 10 boots. Yet again they have changed the logo. Also, there is no information on what's going on.
|
||||
|
||||
![Windows 10 Logo](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Logo.jpg)
|
||||
|
||||
Windows 10 Logo
|
||||
|
||||
2. Selected Language to install, Time & currency format and keyboard & Input methods before clicking Next.
|
||||
|
||||
![Select Language and Time](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Language-and-Time.jpg)
|
||||
|
||||
Select Language and Time
|
||||
|
||||
3. And then ‘Install Now‘ Menu.
|
||||
|
||||
![Install Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Windows-10.jpg)
|
||||
|
||||
Install Windows 10
|
||||
|
||||
4. The next screen is asking for Product key. I clicked ‘skip’.
|
||||
|
||||
![Windows 10 Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Product-Key.jpg)
|
||||
|
||||
Windows 10 Product Key
|
||||
|
||||
5. Choose from a listed OS. I chose ‘windows 10 pro‘.
|
||||
|
||||
![Select Install Operating System](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Operating-System.jpg)
|
||||
|
||||
Select Install Operating System
|
||||
|
||||
6. oh yes the license agreement. Put a check mark against ‘I accept the license terms‘ and click next.
|
||||
|
||||
![Accept License](http://www.tecmint.com/wp-content/uploads/2015/08/Accept-License.jpg)
|
||||
|
||||
Accept License
|
||||
|
||||
7. Next I could either upgrade (to Windows 10 from a previous version of Windows) or install Windows. I don't know why ‘Custom: Install Windows only’ is labelled as advanced by Windows. Anyway, I chose to install Windows only.
|
||||
|
||||
![Select Installation Type](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Installation-Type.jpg)
|
||||
|
||||
Select Installation Type
|
||||
|
||||
8. Selected the file-system and clicked ‘next’.
|
||||
|
||||
![Select Install Drive](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Drive.jpg)
|
||||
|
||||
Select Install Drive
|
||||
|
||||
9. The installer started copying files, getting files ready for installation, installing features, installing updates and finishing up. It would be better if the installer showed verbose output on the actions it is taking.
|
||||
|
||||
![Installing Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Installing-Windows.jpg)
|
||||
|
||||
Installing Windows
|
||||
|
||||
10. And then windows restarted. They said reboot was needed to continue.
|
||||
|
||||
![Windows Installation Process](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Installation-Process.jpg)
|
||||
|
||||
Windows Installation Process
|
||||
|
||||
11. And then all I got was the below screen which reads “Getting Ready”. It took 5+ minutes at this point. No idea what was going on. No output.
|
||||
|
||||
![Windows Getting Ready](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Getting-Ready.jpg)
|
||||
|
||||
Windows Getting Ready
|
||||
|
||||
12. Yet again, it was time to “Enter Product Key”. I clicked “Do this later” and then used express settings.
|
||||
|
||||
![Enter Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Enter-Product-Key.jpg)
|
||||
|
||||
Enter Product Key
|
||||
|
||||
![Select Express Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Express-Settings.jpg)
|
||||
|
||||
Select Express Settings
|
||||
|
||||
14. And then three more progress screens, where I, as a Linux user, expected the installer to tell me what it was doing, but all in vain.
|
||||
|
||||
![Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Loading-Windows.jpg)
|
||||
|
||||
Loading Windows
|
||||
|
||||
![Getting Updates](http://www.tecmint.com/wp-content/uploads/2015/08/Getting-Updates.jpg)
|
||||
|
||||
Getting Updates
|
||||
|
||||
![Still Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Still-Loading-Windows.jpg)
|
||||
|
||||
Still Loading Windows
|
||||
|
||||
15. And then the installer wanted to know who owns this machine “My organization” or I myself. Chose “I own it” and then next.
|
||||
|
||||
![Select Organization](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Organization.jpg)
|
||||
|
||||
Select Organization
|
||||
|
||||
16. The installer prompted me to join “Azure AD” or “Join a domain” before I could click ‘continue’. I chose the latter option.
|
||||
|
||||
![Connect Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Connect-Windows.jpg)
|
||||
|
||||
Connect Windows
|
||||
|
||||
17. The installer wanted me to create an account. I entered a user_name and clicked ‘Next‘, expecting an error message saying that I must enter a password.
|
||||
|
||||
![Create Account](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Account.jpg)
|
||||
|
||||
Create Account
|
||||
|
||||
18. To my surprise, Windows didn't even show a warning/notification that I must create a password. Such negligence. Anyway, I got my desktop.
|
||||
|
||||
![Windows 10 Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Desktop.jpg)
|
||||
|
||||
Windows 10 Desktop
|
||||
|
||||
#### Experience of a Linux-user (Myself) till now ####
|
||||
|
||||
- No Net-installer Image
|
||||
- Image size too heavy
|
||||
- No way to check the integrity of iso downloaded (no hash check)
|
||||
- The booting and installation remains same as it was in XP, Windows 7 and 8 perhaps.
|
||||
- As usual no output on what windows Installer is doing – What file copying or what package installing.
|
||||
- Installation was straight forward and easy as compared to the installation of a Linux distribution.
|
||||
|
||||
### Windows 10 Testing ###
|
||||
|
||||
19. The default desktop is clean, with a Recycle Bin icon on it, and you can search the web directly from the desktop itself. Additionally, icons for task view, internet browsing, folder browsing and the Microsoft Store are there. As usual, the notification area is present at the bottom right to round out the desktop.
|
||||
|
||||
![Desktop Shortcut Icons](http://www.tecmint.com/wp-content/uploads/2015/08/Deskop-Shortcut-icons.jpg)

Desktop Shortcut Icons
|
||||
|
||||
20. Internet Explorer has been replaced with Microsoft Edge. Windows 10 replaces the legacy web browser Internet Explorer, also known as IE, with Edge, aka Project Spartan.
|
||||
|
||||
![Microsoft Edge Browser](http://www.tecmint.com/wp-content/uploads/2015/08/Edge-browser.jpg)
|
||||
|
||||
Microsoft Edge Browser
|
||||
|
||||
It is fast, at least compared to IE (or so it seems in testing). The user interface is familiar. The home screen contains news feed updates. There is also a search bar whose placeholder reads ‘Where to next?‘. The browser load time is considerably low, which improves overall speed and performance. The memory usage of Edge seems normal.
|
||||
|
||||
![Windows Performance](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Performance.jpg)
|
||||
|
||||
Windows Performance
|
||||
|
||||
Edge has Cortana (an intelligent personal assistant), support for Chrome extensions, Web Note (take notes while browsing) and Share (share right from the tab without opening any other tab).
|
||||
|
||||
#### Experience of a Linux-user (Myself) on this point ####
|
||||
|
||||
21. Microsoft has really improved web browsing. Let's see how stable and polished it remains. It doesn't lag as of now.
|
||||
|
||||
22. Though Edge's RAM usage was fine for me, a lot of users are complaining that Edge is notorious for excessive RAM usage.
|
||||
|
||||
23. It is difficult to say at this point whether Edge is ready to compete with Chrome and/or Firefox. Let's see what the future unfolds.
|
||||
|
||||
#### A few more stops on the virtual tour ####
|
||||
|
||||
24. The Start Menu has been redesigned – it seems clear and effective. Metro icons make it lively. It is populated with the most commonly used applications, viz. Calendar, Mail, Edge, Photos, Contacts, Temperature, Companion Suite, OneNote, Store, Xbox, Music, Movies & TV, Money, News, etc.
|
||||
|
||||
![Windows Look and Feel](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Look.jpg)
|
||||
|
||||
Windows Look and Feel
|
||||
|
||||
On Linux with the GNOME desktop environment, I am used to searching for the applications I need simply by pressing the Windows (Super) key and then typing the name of the application.
|
||||
|
||||
![Search Within Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Within-Desktop.jpg)
|
||||
|
||||
Search Within Desktop
|
||||
|
||||
25. File Explorer – the design seems clean and the edges are sharp. In the left pane there are links to Quick Access folders.
|
||||
|
||||
![Windows File Explorer](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-File-Explorer.jpg)
|
||||
|
||||
Windows File Explorer
|
||||
|
||||
The file manager on the GNOME desktop environment on Linux is equally clean and effective. Removing unnecessary graphics and images from the icons is a plus point.
|
||||
|
||||
![File Browser on Gnome](http://www.tecmint.com/wp-content/uploads/2015/08/File-Browser.jpg)
|
||||
|
||||
File Browser on Gnome
|
||||
|
||||
26. Settings – Though the settings are a bit refined on Windows 10, you may compare it with the settings on a Linux Box.
|
||||
|
||||
**Settings on Windows**
|
||||
|
||||
![Windows 10 Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Settings.jpg)
|
||||
|
||||
Windows 10 Settings
|
||||
|
||||
**Setting on Linux Gnome**
|
||||
|
||||
![Gnome Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Settings.jpg)
|
||||
|
||||
Gnome Settings
|
||||
|
||||
27. List of applications – the application list in Windows 10 is better than what they used to provide (based on my memory from when I was a regular Windows user), but it still stands low compared to how GNOME 3 lists applications.
|
||||
|
||||
**Application Listed by Windows**
|
||||
|
||||
![Application List on Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Application-List-on-Windows-10.jpg)
|
||||
|
||||
Application List on Windows 10
|
||||
|
||||
**Application Listed by Gnome3 on Linux**
|
||||
|
||||
![Gnome Application List on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Application-List-on-Linux.jpg)
|
||||
|
||||
Gnome Application List on Linux
|
||||
|
||||
28. Virtual desktops – the virtual desktop feature of Windows 10 is one of the most talked-about topics these days.
|
||||
|
||||
Here is the virtual Desktop in Windows 10.
|
||||
|
||||
![Windows Virtual Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Virtual-Desktop.jpg)
|
||||
|
||||
Windows Virtual Desktop
|
||||
|
||||
and here are the virtual desktops on Linux, which we have been using for more than two decades.
|
||||
|
||||
![Virtual Desktop on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Virtual-Desktop-on-Linux.jpg)
|
||||
|
||||
Virtual Desktop on Linux
|
||||
|
||||
#### A few other features of Windows 10 ####
|
||||
|
||||
29. Windows 10 comes with Wi-Fi Sense, which shares your password with others. Anyone who is in range of your Wi-Fi and connected to you over Skype, Outlook, Hotmail or Facebook can be granted access to your Wi-Fi network. And mind you, this feature has been added by Microsoft to save time and allow hassle-free connections.
|
||||
|
||||
In a reply to a question raised by Tecmint, Microsoft said that the user has to agree to enable Wi-Fi Sense every time on a new network. Oh! What a poor attitude as far as security is concerned. I am not convinced.
|
||||
|
||||
30. Upgrading from Windows 7 and Windows 8.1 is free, though the retail cost of the Home and Pro editions is approximately $119 and $199 respectively.
|
||||
|
||||
31. Microsoft released the first cumulative update for Windows 10, which is said to put the system into an endless crash loop for a few people. Microsoft perhaps doesn't understand such problems, or doesn't want to work on that part; I don't know why.
|
||||
|
||||
32. Microsoft's inbuilt utility to block/hide unwanted updates doesn't work in my case. This means that if an update is there, there is no way to block or hide it. Sorry, Windows users!
|
||||
|
||||
#### A few features native to Linux that windows 10 have ####
|
||||
|
||||
Windows 10 has a lot of features that were taken directly from Linux. If Linux had not been released under the GNU license, perhaps Microsoft would never have had the features below.
|
||||
|
||||
33. Command-line package management – yup! You heard it right. Windows 10 has built-in package management. It works only in Windows PowerShell. OneGet is the official package manager for Windows. Here is the Windows package manager in action.
|
||||
|
||||
![Windows 10 Package Manager](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Package-Manager.jpg)
|
||||
|
||||
Windows 10 Package Manager
|
||||
|
||||
- Border-less windows
|
||||
- Flat Icons
|
||||
- Virtual Desktop
|
||||
- One search for Online+offline search
|
||||
- Convergence of mobile and desktop OS
|
||||
|
||||
### Overall Conclusion ###
|
||||
|
||||
- Improved responsiveness
|
||||
- Well implemented Animation
|
||||
- low on resource
|
||||
- Improved battery life
|
||||
- Microsoft Edge web-browser is rock solid
|
||||
- Supported on Raspberry pi 2.
|
||||
- It is good because Windows 8/8.1 was not up to the mark and was really bad.
|
||||
- It is the same old wine in a new bottle: almost the same things with brushed-up icons.
|
||||
|
||||
What my testing suggests is that Windows 10 has improved on a few things, like look and feel (as Windows always has): +1 for Project Spartan, virtual desktops, command-line package management, and one search for both online and offline content. It is overall an improved product, but those who think that Windows 10 will prove to be the last nail in the coffin of Linux are mistaken.
|
||||
|
||||
Linux is years ahead of Windows; their approaches are different. In the near future Windows won't stand anywhere near Linux, and there is nothing for which a Linux user needs to go to Windows 10.
|
||||
|
||||
That’s all for now. Hope you liked the post. I will be here again with another interesting post you people will love to read. Provide us with your valuable feedback in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/a-linux-user-using-windows-10-after-more-than-8-years-see-comparison/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:https://www.microsoft.com/en-us/software-download/windows10ISO
|
@ -1,147 +0,0 @@
|
||||
Why did you start using Linux?
|
||||
================================================================================
|
||||
> In today's open source roundup: What got you started with Linux? Plus: IBM's Linux only Mainframe. And why you should skip Windows 10 and go with Linux
|
||||
|
||||
### Why did you start using Linux? ###
|
||||
|
||||
Linux has become quite popular over the years, with many users defecting to it from OS X or Windows. But have you ever wondered what got people started with Linux? A redditor asked that question and got some very interesting answers.
|
||||
|
||||
SilverKnight asked his question on the Linux subreddit:
|
||||
|
||||
> I know this has been asked before, but I wanted to hear more from the younger generation why it is that they started using linux and what keeps them here.
|
||||
>
|
||||
> I dont want to discourage others from giving their linux origin stories, because those are usually pretty good, but I was mostly curious about our younger population since there isn't much out there from them yet.
|
||||
>
|
||||
> I myself am 27 and am a linux dabbler. I have installed quite a few different distros over the years but I haven't made the plunge to full time linux. I guess I am looking for some more reasons/inspiration to jump on the bandwagon.
|
||||
>
|
||||
> [More at Reddit][1]
|
||||
|
||||
Fellow redditors in the Linux subreddit responded with their thoughts:
|
||||
|
||||
> **DoublePlusGood**: "I started using Backtrack Linux (now Kali) at 12 because I wanted to be a "1337 haxor". I've stayed with Linux (Archlinux currently) because it lets me have the endless freedom to make my computer do what I want."
|
||||
>
|
||||
> **Zack**: "I'm a Linux user since, I think, the age of 12 or 13, I'm 15 now.
|
||||
>
|
||||
> It started when I got tired with Windows XP at 11 and the waiting, dammit am I impatient sometimes, but waiting for a basic task such as shutting down just made me tired of Windows all together.
|
||||
>
|
||||
> A few months previously I had started participating in discussions in a channel on the freenode IRC network which was about a game, and as freenode usually goes, it was open source and most of the users used Linux.
|
||||
>
|
||||
> I kept on hearing about this Linux but wasn't that interested in it at the time. However, because the channel (and most of freenode) involved quite a bit of programming I started learning Python.
|
||||
>
|
||||
> A year passed and I was attempting to install GNU/Linux (specifically Ubuntu) on my new (technically old, but I had just got it for my birthday) PC, unfortunately it continually froze, for reasons unknown (probably a bad hard drive, or a lot of dust or something else...).
|
||||
>
|
||||
> Back then I was the type to give up on things, so I just continually nagged my dad to try and install Ubuntu, he couldn't do it for the same reasons.
|
||||
>
|
||||
> After wanting Linux for a while I became determined to get Linux and ditch windows for good. So instead of Ubuntu I tried Linux Mint, being a derivative of Ubuntu(?) I didn't have high hopes, but it worked!
|
||||
>
|
||||
> I continued using it for another 6 months.
|
||||
>
|
||||
> During that time a friend on IRC gave me a virtual machine (which ran Ubuntu) on their server, I kept it for a year a bit until my dad got me my own server.
|
||||
>
|
||||
> After the 6 months I got a new PC (which I still use!) I wanted to try something different.
|
||||
>
|
||||
> I decided to install openSUSE.
|
||||
>
|
||||
> I liked it a lot, and on the same Christmas I obtained a Raspberry Pi, and stuck with Debian on it for a while due to the lack of support other distros had for it."
|
||||
>
|
||||
> **Cqz**: "Was about 9 when the Windows 98 machine handed down to me stopped working for reasons unknown. We had no Windows install disk, but Dad had one of those magazines that comes with demo programs and stuff on CDs. This one happened to have install media for Mandrake Linux, and so suddenly I was a Linux user. Had no idea what I was doing but had a lot of fun doing it, and although in following years I often dual booted with various Windows versions, the FLOSS world always felt like home. Currently only have one Windows installation, which is a virtual machine for games."
|
||||
>
|
||||
> **Tosmarcel**: "I was 15 and was really curious about this new concept called 'programming' and then I stumbled upon this Harvard course, CS50. They told users to install a Linux vm to use the command line. But then I asked myself: "Why doesn't windows have this command line?!". I googled 'linux' and Ubuntu was the top result -Ended up installing Ubuntu and deleted the windows partition accidentally... It was really hard to adapt because I knew nothing about linux. Now I'm 16 and running arch linux, never looked back and I love it!"
|
||||
>
|
||||
> **Micioonthet**: "First heard about Linux in the 5th grade when I went over to a friend's house and his laptop was running MEPIS (an old fork of Debian) instead of Windows XP.
|
||||
>
|
||||
> Turns out his dad was a socialist (in America) and their family didn't trust Microsoft. This was completely foreign to me, and I was confused as to why he would bother using an operating system that didn't support the majority of software that I knew.
|
||||
>
|
||||
> Fast forward to when I was 13 and without a laptop. Another friend of mine was complaining about how slow his laptop was, so I offered to buy it off of him so I could fix it up and use it for myself. I paid $20 and got a virus filled, unusable HP Pavilion with Windows Vista. Instead of trying to clean up the disgusting Windows install, I remembered that Linux was a thing and that it was free. I burned an Ubuntu 12.04 disc and installed it right away, and was absolutely astonished by the performance.
|
||||
>
|
||||
> Minecraft (one of the few early Linux games because it ran on Java), which could barely run at 5 FPS on Vista, ran at an entirely playable 25 FPS on a clean install of Ubuntu.
|
||||
>
|
||||
> I actually still have that old laptop and use it occasionally, because why not? Linux doesn't care how old your hardware is.
|
||||
>
|
||||
> I since converted my dad to Linux and we buy old computers at lawn sales and thrift stores for pennies and throw Linux Mint or some other lightweight distros on them."
|
||||
>
|
||||
> **Webtm**: "My dad had every computer in the house with some distribution on it, I think a couple with OpenSUSE and Debian, and his personal computer had Slackware on it. So I remember being little and playing around with Debian and not really getting into it much. So I had a Windows laptop for a few years and my dad asked me if I wanted to try out Debian. It was a fun experience and ever since then I've been using Debian and trying out distributions. I currently moved away from Linux and have been using FreeBSD for around 5 months now, and I am absolutely happy with it.
|
||||
>
|
||||
> The control over your system is fantastic. There are a lot of cool open source projects. I guess a lot of the fun was figuring out how to do the things I want by myself and tweaking those things in ways to make them do something else. Stability and performance is also a HUGE plus. Not to mention the level of privacy when switching."
|
||||
>
|
||||
> **Wyronaut**: "I'm currently 18, but I first started using Linux when I was 13. Back then my first distro was Ubuntu. The reason why I wanted to check out Linux, was because I was hosting little Minecraft game servers for myself and a couple of friends, back then Minecraft was pretty new-ish. I read that the defacto operating system for hosting servers was Linux.
|
||||
>
|
||||
> I was a big newbie when it came to command line work, so Linux scared me a little, because I had to take care of a lot of things myself. But thanks to google and a few wiki pages I managed to get up a couple of simple servers running on a few older PC's I had lying around. Great use for all that older hardware no one in the house ever uses.
|
||||
>
|
||||
> After running a few game servers I started running a few web servers as well. Experimenting with HTML, CSS and PHP. I worked with those for a year or two. Afterwards, took a look at Java. I made the terrible mistake of watching TheNewBoston video's.
|
||||
>
|
||||
> So after like a week I gave up on Java and went to pick up a book on Python instead. That book was Learn Python The Hard Way by Zed A. Shaw. After I finished that at the fast pace of two weeks, I picked up the book C++ Primer, because at the time I wanted to become a game developer. Went trough about half of the book (~500 pages) and burned out on learning. At that point I was spending a sickening amount of time behind my computer.
|
||||
>
|
||||
> After taking a bit of a break, I decided to pick up JavaScript. Read like 2 books, made like 4 different platformers and called it a day.
|
||||
>
|
||||
> Now we're arriving at the present. I had to go through the horrendous process of finding a school and deciding what job I wanted to strive for when I graduated. I ruled out anything in the gaming sector as I didn't want anything to do with graphics programming anymore, I also got completely sick of drawing and modelling. And I found this bachelor that had something to do with netsec and I instantly fell in love. I picked up a couple books on C to shred this vacation period and brushed up on some maths and I'm now waiting for the new school year to commence.
|
||||
>
|
||||
> Right now, I am having loads of fun with Arch Linux, made couple of different arrangements on different PC's and it's going great!
|
||||
>
|
||||
> In a sense Linux is what also got me into programming and ultimately into what I'm going to study in college starting this september. I probably have my future life to thank for it."
|
||||
>
|
||||
> **Linuxllc**: "You also can learn from old farts like me.
|
||||
>
|
||||
> The crutch, The crutch, The crutch. Getting rid of the crutch will inspired you and have good reason to stick with Linux.
|
||||
>
|
||||
> I got rid of my crutch(Windows XP) back in 2003. Took me only 5 days to get all my computer task back and running at a 100% workflow. Including all my peripheral devices. Minus any Windows games. I just play native Linux games."
|
||||
>
|
||||
> **Highclass**: "Hey I'm 28 not sure if this is the age group you are looking for.
|
||||
>
|
||||
> To be honest, I was always interested in computers and the thought of a free operating system was intriguing even though at the time I didn't fully grasp the free software philosophy, to me it was free as in no cost. I also did not find the CLI too intimidating as from an early age I had exposure to DOS.
|
||||
>
|
||||
> I believe my first distro was Mandrake, I was 11 or 12, I messed up the family computer on several occasions.... I ended up sticking with it always trying to push myself to the next level. Now I work in the industry with Linux everyday.
|
||||
>
|
||||
> /shrug"
|
||||
>
|
||||
> **Matto**: "My computer couldn't run fast enough for XP (got it at a garage sale), so I started looking for alternatives. Ubuntu came up in Google. I was maybe 15 or 16 at the time. Now I'm 23 and have a job working on a product that uses Linux internally."
|
||||
>
|
||||
> [More at Reddit][2]
|
||||
|
||||
### IBM's Linux only Mainframe ###
|
||||
|
||||
IBM has a long history with Linux, and now the company has created a mainframe that features Ubuntu Linux. The new machine is named LinuxONE.
|
||||
|
||||
Ron Miller reports for TechCrunch:
|
||||
|
||||
> The new mainframes come in two flavors, named for penguins (Linux — penguins — get it?). The first is called Emperor and runs on the IBM z13, which we wrote about in January. The other is a smaller mainframe called the Rockhopper designed for a more “entry level” mainframe buyer.
|
||||
>
|
||||
> You may have thought that mainframes went the way of the dinosaur, but they are still alive and well and running in large institutions throughout the world. IBM as part of its broader strategy to promote the cloud, analytics and security is hoping to expand the potential market for mainframes by running Ubuntu Linux and supporting a range of popular open source enterprise software such as Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL and Chef.
|
||||
>
|
||||
> The metered mainframe will still sit inside the customer’s on-premises data center, but billing will be based on how much the customer uses the system, much like a cloud model, Mauri explained.
|
||||
>
|
||||
> ...IBM is looking for ways to increase those sales. Partnering with Canonical and encouraging use of open source tools on a mainframe gives the company a new way to attract customers to a small, but lucrative market.
|
||||
>
|
||||
> [More at TechCrunch][3]
|
||||
|
||||
### Why you should skip Windows 10 and opt for Linux ###
|
||||
|
||||
Since Windows 10 has been released there has been quite a bit of media coverage about its potential to spy on users. ZDNet has listed some reasons why you should skip Windows 10 and opt for Linux instead on your computer.
|
||||
|
||||
SJVN reports for ZDNet:
|
||||
|
||||
> You can try to turn Windows 10's data-sharing ways off, but, bad news: Windows 10 will keep sharing some of your data with Microsoft anyway. There is an alternative: Desktop Linux.
|
||||
>
|
||||
> You can do a lot to keep Windows 10 from blabbing, but you can't always stop it from talking. Cortana, Windows 10's voice activated assistant, for example, will share some data with Microsoft, even when it's disabled. That data includes a persistent computer ID to identify your PC to Microsoft.
|
||||
>
|
||||
> So, if that gives you a privacy panic attack, you can either stick with your old operating system, which is likely Windows 7, or move to Linux. Eventually, when Windows 7 is no longer supported, if you want privacy you'll have no other viable choice but Linux.
|
||||
>
|
||||
> There are other, more obscure desktop operating systems that are also desktop-based and private. These include the BSD Unix family such as FreeBSD, PCBSD, and NetBSD and eComStation, OS/2 for the 21st century. Your best choice, though, is a desktop-based Linux with a low learning curve.
|
||||
>
|
||||
> [More at ZDNet][4]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux.html
|
||||
|
||||
作者:[Jim Lynch][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.itworld.com/author/Jim-Lynch/
|
||||
[1]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/
|
||||
[2]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/
|
||||
[3]:http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on-linux-mainframe/
|
||||
[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/
|
@ -1,72 +0,0 @@
|
||||
14 tips for teaching open source development
|
||||
================================================================================
|
||||
Academia is an excellent platform for training and preparing the open source developers of tomorrow. In research, we occasionally open source software we write. We do this for two reasons. One, to promote the use of the tools we produce. And two, to learn more about the impact and issues other people face when using them. With this background of writing research software, I was tasked with redesigning the undergraduate software engineering course for second-year students at the University of Bradford.
|
||||
|
||||
It was a challenge, as I was faced with 80 students coming for different degrees, including IT, business computing, and software engineering, all in the same course. The hardest part was working with students with a wide range of programming experience levels. Traditionally, the course had involved allowing students to choose their own teams, tasking them with building a garage database system and then submitting a report in the end as part of the assessment.
|
||||
|
||||
I decided to redesign the course to give students insight into the process of working on real-world software teams. I divided the students into teams of five or six, based on their degrees and programming skills. The aim was to have an equal distribution of skills across the teams to prevent any unfair advantage of one team over another.
|
||||
|
||||
### The core lessons ###
|
||||
|
||||
The course format was updated to have both lectures and lab sessions. However, the lab session functioned as mentoring sessions, where instructors visited each team to ask for updates and see how the teams were progressing with the clients and the products. There were traditional lectures on project management, software testing, requirements engineering, and similar topics, supplemented by lab sessions and mentor meetings. These meetings allowed us to check up on students' progress and monitor whether they were following the software engineering methodologies taught in the lecture portion. Topics we taught this year included:
|
||||
|
||||
- Requirements engineering
|
||||
- How to interact with clients and other team members
|
||||
- Software methodologies, such as agile and extreme programming approaches
|
||||
- How to use different software engineering approaches and work through sprints
|
||||
- Team meetings and documentations
|
||||
- Project management and Gantt charts
|
||||
- UML diagrams and system descriptions
|
||||
- Code revisioning using Git
|
||||
- Software testing and bug tracking
|
||||
- Using open source libraries for their tools
|
||||
- Open source licenses and which one to use
|
||||
- Software delivery
|
||||
|
||||
Along with these lectures, we had a few guest speakers from the corporate world talk about their practices in software product deliveries. We also managed to get the university’s intellectual property lawyer to come and talk about IP issues surrounding software in the UK, and how to handle any intellectual properties issues in software.
|
||||
|
||||
### Collaboration tools ###
|
||||
|
||||
To make all of the above possible, a number of tools were introduced. Students were trained on how to use them for their projects. These included:
|
||||
|
||||
- Google Drive folders shared within the team and the tutor, to maintain documents and spreadsheets for project descriptions, requirements gathering, meeting minutes, and time tracking of the project. This was an extremely efficient way to monitor and also provide feedback straight into the folders for each team.
|
||||
- [Basecamp][1] for document sharing as well, and later in the course we considered this as a possible replacement for Google Drive.
|
||||
- Bug reporting tools such as [Mantis][2], which again allows only a limited number of users on its free tier. Later, Git itself was being used for bug reports rather than any separate tools by the testers in the teams.
|
||||
- Remote videoconferencing tools were used as a number of clients were off-campus, and sometimes not even in the same city. The students were regularly using Skype to communicate with them, documenting their meetings and sometimes even recording them for later use.
|
||||
- A number of open source tool kits were also used for students' projects. The students were allowed to choose their own tool kits and languages based on the requirements of the projects. The only condition was that these have to be open source and could be installed in the university labs, which the technical staff was extremely supportive of.
|
||||
- In the end all teams had to deliver their projects to the client, including complete working version of the software, documentation, and open source licenses of their own choosing. Most of the teams chose the GPL version 3 license.
|
||||
|
||||
### Tips and lessons learned ###
|
||||
|
||||
In the end, it was a fun year and nearly all students did very well. Here are some of the lessons I learned which may help improve the course next year:
|
||||
|
||||
1. Give the students a wide variety of choice in projects that are interesting, such as game development or mobile application development, and projects with goals. Working with mundane database systems is not going to keep most students interested. Working with interesting projects, most students became self-learners, and were also helping others in their teams and outside to solve some common issues. The course also had a message list, where students were posting any issues they were encountering, in hopes of receiving advice from others. However, there was a drawback to this approach. The external examiners have advised us to go back to a style of one type of project, and one type of language to help narrow the assessment criteria for the students.
|
||||
1. Give students regular feedback on their performance at every stage. This could be done during the mentoring meetings with the teams, or at other stages, to help them improve the work for next time.
|
||||
1. Students are more than willing to work with clients from outside university! They look forward to working with external company representatives or people outside the university, just because of the new experience. They were all able to display professional behavior when interacting with their mentors, which put the instructors at ease.
|
||||
1. A lot of teams left developing unit testing until the end of the project, which from an extreme programming methodology standpoint was a serious no-no. Maybe testing should be included at the assessments of the various stages to help remind students that they need to be developing unit tests in parallel with the software.
|
||||
1. In the class of 80, there were only four girls, each working in different teams. I observed that boys were very ready to take on roles as team leads, assigning the most interesting code pieces to themselves and the girls were mostly following instructions or doing documentation. For some reason, the girls choose not to show authority or preferred not to code even when they were encouraged by a female instructor. This is still a major issue that needs to be addressed.
|
||||
1. There are different styles of documentation such as using UML, state diagrams, and others. Allow students to learn them all and merge with other courses during the year to improve their learning experience.
|
||||
1. Some students were very good developers, but some doing business computing had very little coding experience. The teams were encouraged to work together to prevent the idea that a developer would get better marks than other team members who were only doing meeting minutes or documentation. Roles were also encouraged to be rotated during mentoring sessions to ensure that everyone was getting a chance to learn how to program.
|
||||
1. Allowing the team to meet with the mentor every week was helpful in monitoring team activities. It also showed who was doing the most work. Usually students who were not participating in their groups would not come to meetings, and could be identified by the work being presented by other members every week.
|
||||
1. We encouraged students to attach licenses to their work and identify intellectual property issues when working with external libraries and clients. This allowed students to think out of the box and learn about real-world software delivery problems.
|
||||
1. Give students room to choose their own technologies.
|
||||
1. Having teaching assistants is key. Managing 80 students was very difficult, especially on the weeks when they were being assessed. Next year I would definitely have teaching assistants helping me with the teams.
|
||||
1. A supportive tech support for the lab is very important. The university tech support was extremely supportive of the course. Next year, they are talking about having virtual machines assigned to teams, so the teams can install any software on their own virtual machine as needed.
|
||||
1. Teamwork helps. Most teams exhibited a supportive nature to other team members, and mentoring also helped.
|
||||
1. Additional support from other staff members is a plus. As a new academic, I needed to learn from experience and also seek advice at multiple points on how to handle certain students and teams if I was confused on how to engage them with the course. Support from senior staff members was very encouraging to me.
|
||||
|
||||
In the end, it was a fun course—not only for me as an instructor, but for the students as well. There were some issues with learning objectives and traditional grading schemes that still need to be ironed out to reduce the workload they produce for the instructors. For next year, I plan to keep this same format, but hope to come up with a better grading scheme and introduce more software tools that can help monitor project activities and code revisions.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://opensource.com/education/15/9/teaching-open-source-development-undergraduates
|
||||
|
||||
作者:[Mariam Kiran][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://opensource.com/users/mariamkiran
|
||||
[1]:https://basecamp.com/
|
||||
[2]:https://www.mantisbt.org/
|
@ -1,199 +0,0 @@
|
||||
18 Years of GNOME Design and Software Evolution: Step by Step
|
||||
================================================================================
|
||||
注:youtube 视频
|
||||
<iframe width="660" height="371" src="https://www.youtube.com/embed/MtmcO5vRNFQ?feature=oembed" frameborder="0" allowfullscreen></iframe>
|
||||
|
||||
[GNOME][1] (GNU Network Object Model Environment) was started on August 15th 1997 by two Mexican programmers – Miguel de Icaza and Federico Mena. GNOME is a free software project that develops a desktop environment and applications through volunteers and paid full-time developers. The whole GNOME desktop environment is open source software and supports Linux, FreeBSD, OpenBSD and others.
|
||||
|
||||
Now we move to 1997 and see the first version of GNOME:
|
||||
|
||||
### GNOME 1 ###
|
||||
|
||||
![GNOME 1.0 - First major GNOME release](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.0/gnome.png)
|
||||
|
||||
**GNOME 1.0** (1997) – First major GNOME release
|
||||
|
||||
![GNOME 1.2 Bongo](https://raw.githubusercontent.com/paulcarroty/Articles/master/GNOME_History/1.2/1361441938.or.86429.png)
|
||||
|
||||
**GNOME 1.2** “Bongo”, 2000
|
||||
|
||||
![GNOME 1.4 Tranquility](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.4/1.png)
|
||||
|
||||
**GNOME 1.4** “Tranquility”, 2001
|
||||
|
||||
### GNOME 2 ###
|
||||
|
||||
![GNOME 2.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.0/1.png)
|
||||
|
||||
**GNOME 2.0**, 2002
|
||||
|
||||
Major upgrade based on GTK+2. Introduction of the Human Interface Guidelines.
|
||||
|
||||
![GNOME 2.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.2/GNOME_2.2_catala.png)
|
||||
|
||||
**GNOME 2.2**, 2003
|
||||
|
||||
Multimedia and file manager improvements.
|
||||
|
||||
![GNOME 2.4 Temujin](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.4/gnome-desktop.png)
|
||||
|
||||
**GNOME 2.4** “Temujin”, 2003
|
||||
|
||||
First release of Epiphany Browser, accessibility support.
|
||||
|
||||
![GNOME 2.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.6/Adam_Hooper.png)
|
||||
|
||||
**GNOME 2.6**, 2004
|
||||
|
||||
Nautilus changes to a spatial file manager, and a new GTK+ file dialog is introduced. A short-lived fork of GNOME, GoneME, is created as a response to the changes in this version.
|
||||
|
||||
![GNOME 2.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.8/3.png)
|
||||
|
||||
**GNOME 2.8**, 2004
|
||||
|
||||
Improved removable device support, adds Evolution
|
||||
|
||||
![GNOME 2.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.10/GNOME-Screenshot-2.10-FC4.png)
|
||||
|
||||
**GNOME 2.10**, 2005
|
||||
|
||||
Lower memory requirements and performance improvements. Adds: new panel applets (modem control, drive mounter and trashcan); and the Totem and Sound Juicer applications.
|
||||
|
||||
![GNOME 2.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.12/gnome-livecd.jpg)
|
||||
|
||||
**GNOME 2.12**, 2005
|
||||
|
||||
Nautilus improvements; improvements in cut/paste between applications and freedesktop.org integration. Adds: Evince PDF viewer; New default theme: Clearlooks; menu editor; keyring manager and admin tools. Based on GTK+ 2.8 with cairo support
|
||||
|
||||
![GNOME 2.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.14/debian4-stable.jpg)
|
||||
|
||||
**GNOME 2.14**, 2006
|
||||
|
||||
Performance improvements (over 100% in some cases); usability improvements in user preferences; GStreamer 0.10 multimedia framework. Adds: Ekiga video conferencing application; Deskbar search tool; Pessulus lockdown editor; Fast user switching; Sabayon system administration tool.
|
||||
|
||||
![GNOME 2.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.16/Gnome-2.16-screenshot.png)
|
||||
|
||||
**GNOME 2.16**, 2006
|
||||
|
||||
Performance improvements. Adds: Tomboy notetaking application; Baobab disk usage analyser; Orca screen reader; GNOME Power Manager (improving laptop battery life); improvements to Totem, Nautilus; compositing support for Metacity; new icon theme. Based on GTK+ 2.10 with new print dialog
|
||||
|
||||
![GNOME 2.18](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.18/Gnome-2.18.1.png)
|
||||
|
||||
**GNOME 2.18**, 2007
|
||||
|
||||
Performance improvements. Adds: Seahorse GPG security application, allowing encryption of emails and local files; Baobab disk usage analyser improved to support ring chart view; Orca screen reader; improvements to Evince, Epiphany and GNOME Power Manager, Volume control; two new games, GNOME Sudoku and glChess. MP3 and AAC audio encoding.
|
||||
|
||||
![GNOME 2.20](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.20/rnintroduction-screenshot.png)
|
||||
|
||||
**GNOME 2.20**, 2007
|
||||
|
||||
Tenth anniversary release. Evolution backup functionality; improvements in Epiphany, EOG, GNOME Power Manager; password keyring management in Seahorse. Adds: PDF forms editing in Evince; integrated search in the file manager dialogs; automatic multimedia codec installer.
|
||||
|
||||
![GNOME 2.22, 2008](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.22/GNOME-2-22-2-Released-2.png)
|
||||
|
||||
**GNOME 2.22**, 2008
|
||||
|
||||
Addition of Cheese, a tool for taking photos from webcams and Remote Desktop Viewer; basic window compositing support in Metacity; introduction of GVFS; improved playback support for DVDs and YouTube, MythTV support in Totem; internationalised clock applet; Google Calendar support and message tagging in Evolution; improvements in Evince, Tomboy, Sound Juicer and Calculator.
|
||||
|
||||
![GNOME 2.24](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.24/gnome-224.jpg)
|
||||
|
||||
**GNOME 2.24**, 2008
|
||||
|
||||
Addition of the Empathy instant messenger client, Ekiga 3.0, tabbed browsing in Nautilus, better multiple screens support and improved digital TV support.
|
||||
|
||||
![GNOME 2.26](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.26/gnome226-large_001.jpg)
|
||||
|
||||
**GNOME 2.26**, 2009
|
||||
|
||||
New optical disc recording application Brasero, simpler file sharing, media player improvements, support for multiple monitors and fingerprint reader support.
|
||||
|
||||
![GNOME 2.28](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.28/1.png)
|
||||
|
||||
**GNOME 2.28**, 2009
|
||||
|
||||
Addition of GNOME Bluetooth module. Improvements to Epiphany web browser, Empathy instant messenger client, Time Tracker, and accessibility. Upgrade to GTK+ version 2.18.
|
||||
|
||||
![GNOME 2.30](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.30/GNOME2.30.png)
|
||||
|
||||
**GNOME 2.30**, 2010
|
||||
|
||||
Improvements to Nautilus file manager, Empathy instant messenger client, Tomboy, Evince, Time Tracker, Epiphany, and Vinagre. iPod and iPod Touch devices are now partially supported via GVFS through libimobiledevice. Uses GTK+ 2.20.
|
||||
|
||||
![GNOME 2.32](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.32/gnome-2-32.png.en_GB.png)
|
||||
|
||||
**GNOME 2.32**, 2010
|
||||
|
||||
Addition of Rygel and GNOME Color Manager. Improvements to Empathy instant messenger client, Evince, Nautilus file manager and others. 3.0 was intended to be released in September 2010, so a large part of the development effort since 2.30 went towards 3.0.
|
||||
|
||||
### GNOME 3 ###
|
||||
|
||||
![GNOME 3.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.0/chat-3-0.png)
|
||||
|
||||
**GNOME 3.0**, 2011
|
||||
|
||||
Introduction of GNOME Shell. A redesigned settings framework with fewer, more focused options. Topic-oriented help based on the Mallard markup language. Side-by-side window tiling. A new visual theme and default font. Adoption of GTK+ 3.0 with its improved language bindings, themes, touch, and multiplatform support. Removal of long-deprecated development APIs.
|
||||
|
||||
![GNOME 3.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.2/gdm.png)
|
||||
|
||||
**GNOME 3.2**, 2011
|
||||
|
||||
Online accounts support; Web applications support; contacts manager; documents and files manager; quick preview of files in the File Manager; greater integration; better documentation; enhanced looks and various performance improvements.
|
||||
|
||||
![GNOME 3.4](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.4/application-view.png)
|
||||
|
||||
**GNOME 3.4**, 2012
|
||||
|
||||
New Look for GNOME 3 Applications: Documents, Epiphany (now called Web), and GNOME Contacts. Search for documents from the Activities overview. Application menus support. Refreshed interface components: New color picker, redesigned scrollbars, easier to use spin buttons, and hideable title bars. Smooth scrolling support. New animated backgrounds. Improved system settings with new Wacom panel. Easier extensions management. Better hardware support. Topic-oriented documentation. Video calling and Live Messenger support in Empathy. Better accessibility: Improved Orca integration, better high contrast mode, and new zoom settings. Plus many other application enhancements and smaller details.
|
||||
|
||||
![GNOME 3.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.6/gnome-3-6.png)
|
||||
|
||||
**GNOME 3.6**, 2012
|
||||
|
||||
Refreshed Core components: New applications button and improved layout in the Activities Overview. A new login and lock screen. Redesigned Message Tray. Notifications are now smarter, more noticeable, easier to dismiss. Improved interface and settings for System Settings. The user menu now shows Power Off by default. Integrated Input Methods. Accessibility is always on. New applications: Boxes, that was introduced as a preview version in GNOME 3.4, and Clocks, an application to handle world times. Updated looks for Disk Usage Analyzer, Empathy and Font Viewer. Improved braille support in Orca. In Web, the previously blank start page was replaced by a grid that holds your most visited pages, plus better full screen mode and a beta of WebKit2. Evolution renders email using WebKit. Major improvements to Disks. Revamped Files application (also known as Nautilus), with new features like Recent files and search.
|
||||
|
||||
![GNOME 3.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.8/applications-view.png)
|
||||
|
||||
**GNOME 3.8**, 2013
|
||||
|
||||
Refreshed Core components: A new applications view with frequently used and all apps. An overhauled window layout. New input methods OSD switcher. The Notifications & Messaging tray now react to the force with which the pointer is pressed against the screen edge. Added Classic mode for those who prefer a more traditional desktop experience. The GNOME Settings application features an updated toolbar design. New Initial Setup assistant. GNOME Online Accounts integrates with more services. Web has been upgraded to use the WebKit2 engine. Web has a new private browsing mode. Documents has gained a new dual page mode & Google Documents integration. Improved user interface of Contacts. GNOME Files, GNOME Boxes and GNOME Disks have received a number of improvements. Integration of ownCloud. New GNOME Core Applications: GNOME Clocks and GNOME Weather.
|
||||
|
||||
![GNOME 3.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.10/GNOME-3-10-Release-Schedule-2.png)
|
||||
|
||||
**GNOME 3.10**, 2013
|
||||
|
||||
A reworked system status area, which gives a more focused overview of the system. A collection of new applications, including GNOME Maps, GNOME Notes, GNOME Music and GNOME Photos. New geolocation features, such as automatic time zones and world clocks. HiDPI support[75] and smart card support. D-Bus activation made possible with GLib 2.38
|
||||
|
||||
![GNOME 3.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.12/app-folders.png)
|
||||
|
||||
**GNOME 3.12**, 2014
|
||||
|
||||
Improved keyboard navigation and window selection in the Overview. Revamped first set-up utility based on usability tests. Wired networking re-added to the system status area. Customizable application folders in the Applications view. Introduction of new GTK+ widgets such as popovers in many applications. New tab style in GTK+. GNOME Videos, GNOME Terminal and gedit were given a fresh look, more consistent with the HIG. A search provider for the terminal emulator is included in GNOME Shell. Improvements to GNOME Software and high-density display support. A new sound recorder application. New desktop notifications API. Progress in the Wayland port has reached a usable state that can be optionally previewed.
|
||||
|
||||
![GNOME 3.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.14/Top-Features-of-GNOME-3-14-Gallery-459893-2.jpg)
|
||||
|
||||
**GNOME 3.14**, 2014
|
||||
|
||||
Improved desktop environment animations. Improved touchscreen support. GNOME Software supports managing installed add-ons. GNOME Photos adds support for Google. Redesigned UI for Evince, Sudoku, Mines and Weather. Hitori is added as part of GNOME Games.
|
||||
|
||||
![GNOME 3.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.16/preview-apps.png)
|
||||
|
||||
**GNOME 3.16**, 2015
|
||||
|
||||
33,000 changes. Major changes include a UI color scheme shift from black to charcoal. Overlay scroll bars added. Improvements to notifications, including integration with the Calendar applet. Tweaks to various apps including Files, Image Viewer, and Maps. Access to Preview Apps. Continued porting from X11 to Wayland.
|
||||
|
||||
Thanks to [Wikipedia][2] for the short changelog summaries, and another big thanks to the GNOME Project! Stay tuned!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://tlhp.cf/18-years-of-gnome-evolution/
|
||||
|
||||
作者:[Pavlo Rudyi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://tlhp.cf/author/paul/
|
||||
[1]:https://www.gnome.org/
|
||||
[2]:https://en.wikipedia.org/wiki/GNOME
|
@ -1,3 +1,5 @@
|
||||
For my dear RMS
|
||||
|
||||
30 Years of Free Software Foundation: Best Quotes of Richard Stallman
|
||||
================================================================================
|
||||
注:youtube 视频
|
||||
@ -167,4 +169,4 @@ via: https://tlhp.cf/fsf-richard-stallman/
|
||||
|
||||
[a]:https://tlhp.cf/author/paul/
|
||||
[1]:http://www.gnu.org/
|
||||
[2]:http://www.fsf.org/
|
||||
[2]:http://www.fsf.org/
|
||||
|
@ -1,35 +0,0 @@
|
||||
Linus Torvalds Lambasts Open Source Programmers over Insecure Code
|
||||
================================================================================
|
||||
![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/linus-torvalds.jpg)
|
||||
|
||||
Linus Torvalds's latest rant underscores the high expectations the Linux developer places on open source programmers—as well as the importance of security for Linux kernel code.
|
||||
|
||||
Torvalds is the unofficial "benevolent dictator" of the Linux kernel project. That means he gets to decide which code contributions go into the kernel, and which ones land in the reject pile.
|
||||
|
||||
On Oct. 28, open source coders whose work did not meet Torvalds's expectations faced an [angry rant][1]. "Christ people," Torvalds wrote about the code. "This is just sh*t."
|
||||
|
||||
He went on to call the coders "just incompetent and out to lunch."
|
||||
|
||||
What made Torvalds so angry? He believed the code could have been written more efficiently. It could have been easier for other programmers to understand and would run better through a compiler, the program that translates human-readable code into the binaries that computers understand.
|
||||
|
||||
Torvalds posted his own substitution for the code in question and suggested that the programmers should have written it his way.
|
||||
|
||||
Torvalds has a history of lashing out against people with whom he disagrees. It stretches back to 1991, when he famously [flamed Andrew Tanenbaum][2]—whose Minix operating system he later described as a series of "brain-damages." No doubt this latest criticism of fellow open source coders will go down as another example of Torvalds's confrontational personality.
|
||||
|
||||
But Torvalds may also have been acting strategically during this latest rant. "I want to make it clear to *everybody* that code like this is completely unacceptable," he wrote, suggesting that his goal was to send a message to all Linux programmers, not just vent his anger at particular ones.
|
||||
|
||||
Torvalds also used the incident as an opportunity to highlight the security concerns that arise from poorly written code. Those are issues dear to open source programmers' hearts in an age when enterprises are finally taking software security seriously, and demanding top-notch performance from their code in this regard. Lambasting open source programmers who write insecure code thus helps Linux's image.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://thevarguy.com/open-source-application-software-companies/110415/linus-torvalds-lambasts-open-source-programmers-over-inse
|
||||
|
||||
作者:[Christopher Tozzi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://thevarguy.com/author/christopher-tozzi
|
||||
[1]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html
|
||||
[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate
|
@ -1,284 +0,0 @@
|
||||
Review: 5 memory debuggers for Linux coding
|
||||
================================================================================
|
||||
![](http://images.techhive.com/images/article/2015/11/penguinadmin-2400px-100627186-primary.idge.jpg)
|
||||
Credit: [Moini][1]
|
||||
|
||||
As a programmer, I'm aware that I tend to make mistakes -- and why not? Even programmers are human. Some errors are detected during code compilation, while others get caught during software testing. However, a category of error exists that usually does not get detected at either of these stages and that may cause the software to behave unexpectedly -- or worse, terminate prematurely.
|
||||
|
||||
If you haven't already guessed it, I am talking about memory-related errors. Manually debugging these errors can be not only time-consuming but difficult to find and correct. Also, it's worth mentioning that these errors are surprisingly common, especially in software written in programming languages like C and C++, which were designed for use with [manual memory management][2].
|
||||
|
||||
Thankfully, several programming tools exist that can help you find memory errors in your software programs. In this roundup, I assess five popular, free and open-source memory debuggers that are available for Linux: Dmalloc, Electric Fence, Memcheck, Memwatch and Mtrace. I've used all five in my day-to-day programming, and so these reviews are based on practical experience.
|
||||
|
||||
### [Dmalloc][3] ###
|
||||
|
||||
**Developer**: Gray Watson
|
||||
**Reviewed version**: 5.5.2
|
||||
**Linux support**: All flavors
|
||||
**License**: Creative Commons Attribution-Share Alike 3.0 License
|
||||
|
||||
Dmalloc is a memory-debugging tool developed by Gray Watson. It is implemented as a library that provides wrappers around standard memory management functions like **malloc(), calloc(), free()** and more, enabling programmers to detect problematic code.
|
||||
|
||||
![cw dmalloc output](http://images.techhive.com/images/article/2015/11/cw_dmalloc-output-100627040-large.idge.png)
|
||||
Dmalloc
|
||||
|
||||
As listed on the tool's Web page, the debugging features it provides include memory-leak tracking, [double free][4] error tracking and [fence-post write detection][5]. Other features include file/line number reporting and general logging of statistics.
|
||||
|
||||
#### What's new ####
|
||||
|
||||
Version 5.5.2 is primarily a [bug-fix release][6] containing corrections for a couple of build and install problems.
|
||||
|
||||
#### What's good about it ####
|
||||
|
||||
The best part about Dmalloc is that it's extremely configurable. For example, you can configure it to include support for C++ programs as well as threaded applications. A useful functionality it provides is runtime configurability, which means that you can easily enable/disable the features the tool provides while it is being executed.
|
||||
|
||||
You can also use Dmalloc with the [GNU Project Debugger (GDB)][7] -- just add the contents of the dmalloc.gdb file (located in the contrib subdirectory in Dmalloc's source package) to the .gdbinit file in your home directory.
|
||||
|
||||
Another thing that I really like about Dmalloc is its extensive documentation. Just head to the [documentation section][8] on its official website, and you'll get everything from how to download, install, run and use the library to detailed descriptions of the features it provides and an explanation of the output file it produces. There's also a section containing solutions to some common problems.
|
||||
|
||||
#### Other considerations ####
|
||||
|
||||
Like Mtrace, Dmalloc requires programmers to make changes to their program's source code. In this case you may, at the very least, want to add the **dmalloc.h** header, because it allows the tool to report the file/line numbers of calls that generate problems, something that is very useful as it saves time while debugging.
|
||||
|
||||
In addition, the Dmalloc library, which is produced after the package is compiled, needs to be linked with your program while the program is being compiled.
|
||||
|
||||
However, complicating things somewhat is the fact that you also need to set an environment variable, dubbed **DMALLOC_OPTIONS**, that the debugging tool uses to configure the memory debugging features -- as well as the location of the output file -- at runtime. While you can manually assign a value to the environment variable, beginners may find that process a bit tough, given that the Dmalloc features you want to enable are listed as part of that value, and are actually represented as a sum of their respective hexadecimal values -- you can read more about it [here][9].
|
||||
|
||||
An easier way to set the environment variable is to use the [Dmalloc Utility Program][10], which was designed for just that purpose.
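
To make the setup described above a little more concrete, here is a minimal sketch of what a Dmalloc-instrumented build might look like. The compile flags, the dmalloc utility invocation and the log file name follow the project's documented quick-start, but treat them as illustrative rather than definitive, and the file name is made up; check the Dmalloc documentation for the exact options on your system.

    /*
     * A minimal sketch of a Dmalloc-instrumented build (illustrative only).
     *
     *   cc -g -DDMALLOC leak.c -o leak -ldmalloc
     *   eval `dmalloc -l dmalloc.log -i 100 low`   # sets DMALLOC_OPTIONS
     *   ./leak && cat dmalloc.log                  # unfreed block is logged
     */
    #include <stdlib.h>
    #include <string.h>

    #ifdef DMALLOC
    #include "dmalloc.h"   /* include last so file/line info gets recorded */
    #endif

    int main(void)
    {
        char *buf = malloc(32);          /* never freed: shows up as a leak */
        strcpy(buf, "hello, dmalloc");
        return 0;
    }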
|
||||
|
||||
#### Bottom line ####
|
||||
|
||||
Dmalloc's real strength lies in the configurability options it provides. It is also highly portable, having been successfully ported to many OSes, including AIX, BSD/OS, DG/UX, Free/Net/OpenBSD, GNU/Hurd, HPUX, Irix, Linux, MS-DOG, NeXT, OSF, SCO, Solaris, SunOS, Ultrix, Unixware and even Unicos (on a Cray T3E). Although the tool has a bit of a learning curve associated with it, the features it provides are worth it.
|
||||
|
||||
### [Electric Fence][15] ###
|
||||
|
||||
**Developer**: Bruce Perens
|
||||
**Reviewed version**: 2.2.3
|
||||
**Linux support**: All flavors
|
||||
**License**: GNU GPL (version 2)
|
||||
|
||||
Electric Fence is a memory-debugging tool developed by Bruce Perens. It is implemented in the form of a library that your program needs to link to, and is capable of detecting overruns of memory allocated on the [heap][11], as well as accesses to memory that has already been released.
|
||||
|
||||
![cw electric fence output](http://images.techhive.com/images/article/2015/11/cw_electric-fence-output-100627041-large.idge.png)
|
||||
Electric Fence
|
||||
|
||||
As the name suggests, Electric Fence creates a virtual fence around each allocated buffer in a way that any illegal memory access results in a [segmentation fault][12]. The tool supports both C and C++ programs.
|
||||
|
||||
#### What's new ####
|
||||
|
||||
Version 2.2.3 contains a fix for the tool's build system, allowing it to actually pass the -fno-builtin-malloc option to the [GNU Compiler Collection (GCC)][13].
|
||||
|
||||
#### What's good about it ####
|
||||
|
||||
The first thing that I liked about Electric Fence is that -- unlike Memwatch, Dmalloc and Mtrace -- it doesn't require you to make any changes in the source code of your program. You just need to link your program with the tool's library during compilation.
|
||||
|
||||
Secondly, the way the debugging tool is implemented makes sure that a segmentation fault is generated on the very first instruction that causes a bounds violation, which is always better than having the problem detected at a later stage.
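
As a quick illustration, the sketch below shows the kind of heap overrun Electric Fence is designed to trap. The build command is an assumption based on the usual library name (libefence); your distribution's packaging may differ, and the file name is hypothetical.

    /*
     * A sketch of a heap overrun that Electric Fence should catch immediately.
     *
     *   cc -g overrun.c -o overrun -lefence
     *   gdb ./overrun      # 'run' should stop with SIGSEGV at the bad write
     */
    #include <stdlib.h>

    int main(void)
    {
        char *buf = malloc(16);
        buf[16] = 'X';     /* one byte past the end of the allocation */
        free(buf);
        return 0;
    }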
|
||||
|
||||
Electric Fence always produces a copyright message in its output irrespective of whether an error was detected or not. This behavior is quite useful, as it also acts as a confirmation that you are actually running an Electric Fence-enabled version of your program.
|
||||
|
||||
#### Other considerations ####
|
||||
|
||||
On the other hand, what I really miss in Electric Fence is the ability to detect memory leaks, as it is one of the most common and potentially serious problems that software written in C/C++ has. In addition, the tool cannot detect overruns of memory allocated on the stack, and is not thread-safe.
|
||||
|
||||
Given that the tool allocates an inaccessible virtual memory page both before and after a user-allocated memory buffer, it ends up consuming a lot of extra memory if your program makes too many dynamic memory allocations.
|
||||
|
||||
Another limitation of the tool is that it cannot explicitly tell exactly where the problem lies in your program's code -- all it does is produce a segmentation fault whenever it detects a memory-related error. To find out the exact line number, you'll have to debug your Electric Fence-enabled program with a tool like [the GNU Project Debugger (GDB)][14], which in turn depends on the -g compiler option to produce line numbers in output.
|
||||
|
||||
Finally, although Electric Fence is capable of detecting most buffer overruns, an exception is the scenario where the allocated buffer size is not a multiple of the word size of the system -- in that case, an overrun (even if it's only a few bytes) won't be detected.
|
||||
|
||||
#### Bottom line ####
|
||||
|
||||
Despite all its limitations, where Electric Fence scores is the ease of use -- just link your program with the tool once, and it'll alert you every time it detects a memory issue it's capable of detecting. However, as already mentioned, the tool requires you to use a source-code debugger like GDB.
|
||||
|
||||
### [Memcheck][16] ###
|
||||
|
||||
**Developer**: [Valgrind Developers][17]
|
||||
**Reviewed version**: 3.10.1
|
||||
**Linux support**: All flavors
|
||||
**License**: GPL
|
||||
|
||||
[Valgrind][18] is a suite that provides several tools for debugging and profiling Linux programs. Although it works with programs written in many different languages -- such as Java, Perl, Python, Assembly code, Fortran, Ada and more -- the tools it provides are largely aimed at programs written in C and C++.
|
||||
|
||||
The most popular Valgrind tool is Memcheck, a memory-error detector that can detect issues such as memory leaks, invalid memory access, uses of undefined values and problems related to allocation and deallocation of heap memory.
|
||||
|
||||
#### What's new ####
|
||||
|
||||
This [release][19] of the suite (3.10.1) is a minor one that primarily contains fixes to bugs reported in version 3.10.0. In addition, it also "backports fixes for all reported missing AArch64 ARMv8 instructions and syscalls from the trunk."
|
||||
|
||||
#### What's good about it ####
|
||||
|
||||
Memcheck, like all other Valgrind tools, is basically a command line utility. It's very easy to use: If you normally run your program on the command line in a form such as prog arg1 arg2, you just need to add a few values, like this: valgrind --leak-check=full prog arg1 arg2.
|
||||
|
||||
![cw memcheck output](http://images.techhive.com/images/article/2015/11/cw_memcheck-output-100627037-large.idge.png)
|
||||
Memcheck
|
||||
|
||||
(Note: You don't need to mention Memcheck anywhere in the command line because it's the default Valgrind tool. However, you do need to initially compile your program with the -g option -- which adds debugging information -- so that Memcheck's error messages include exact line numbers.)
|
||||
|
||||
What I really like about Memcheck is that it provides a lot of command line options (such as the --leak-check option mentioned above), allowing you to not only control how the tool works but also how it produces the output.
|
||||
|
||||
For example, you can enable the --track-origins option to see information on the sources of uninitialized data in your program. Enabling the --show-mismatched-frees option will let Memcheck match the memory allocation and deallocation techniques. For code written in C language, Memcheck will make sure that only the free() function is used to deallocate memory allocated by malloc(), while for code written in C++, the tool will check whether or not the delete and delete[] operators are used to deallocate memory allocated by new and new[], respectively. If a mismatch is detected, an error is reported.
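
To see these options in action, here is a small self-contained sketch that triggers two classic Memcheck reports: a definite leak and a conditional jump on an uninitialised value. The file name is hypothetical, while the flags are the ones already discussed.

    /*
     * A small demo of two common Memcheck findings (illustrative file name).
     *
     *   cc -g memcheck-demo.c -o memcheck-demo
     *   valgrind --leak-check=full --track-origins=yes ./memcheck-demo
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *nums = malloc(4 * sizeof(int));   /* never freed: "definitely lost" */
        int flag;                              /* deliberately left uninitialised */

        if (flag)                              /* reported as a jump on an uninitialised value */
            printf("flag was set\n");

        nums[0] = 42;
        printf("%d\n", nums[0]);
        return 0;
    }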
|
||||
|
||||
But the best part, especially for beginners, is that the tool even produces suggestions about which command line option the user should use to make the output more meaningful. For example, if you do not use the basic --leak-check option, it will produce an output suggesting: "Rerun with --leak-check=full to see details of leaked memory." And if there are uninitialized variables in the program, the tool will generate a message that says, "Use --track-origins=yes to see where uninitialized values come from."
|
||||
|
||||
Another useful feature of Memcheck is that it lets you [create suppression files][20], allowing you to suppress certain errors that you can't fix at the moment -- this way you won't be reminded of them every time the tool is run. It's worth mentioning that there already exists a default suppression file that Memcheck reads to suppress errors in the system libraries, such as the C library, that come pre-installed with your OS. You can either create a new suppression file for your use, or edit the existing one (usually /usr/lib/valgrind/default.supp).
|
||||
|
||||
For those seeking advanced functionality, it's worth knowing that Memcheck can also [detect memory errors][21] in programs that use [custom memory allocators][22]. In addition, it also provides [monitor commands][23] that can be used while working with Valgrind's built-in gdbserver, as well as a [client request mechanism][24] that allows you not only to tell the tool facts about the behavior of your program, but make queries as well.
|
||||
|
||||
#### Other considerations ####
|
||||
|
||||
While there's no denying that Memcheck can save you a lot of debugging time and frustration, the tool uses a lot of memory, and so can make your program execution significantly slower (around 20 to 30 times, [according to the documentation][25]).
|
||||
|
||||
Aside from this, there are some other limitations, too. According to some user comments, Memcheck apparently isn't [thread-safe][26]; it doesn't detect [static buffer overruns][27]. Also, there are some Linux programs, like [GNU Emacs][28], that currently do not work with Memcheck.
|
||||
|
||||
If you're interested in taking a look, an exhaustive list of Valgrind's limitations can be found [here][29].
|
||||
|
||||
#### Bottom line ####
|
||||
|
||||
Memcheck is a handy memory-debugging tool for both beginners as well as those looking for advanced features. While it's very easy to use if all you need is basic debugging and error checking, there's a bit of learning curve if you want to use features like suppression files or monitor commands.
|
||||
|
||||
Although it has a long list of limitations, Valgrind (and hence Memcheck) claims on its site that it is used by [thousands of programmers][30] across the world -- the team behind the tool says it's received feedback from users in over 30 countries, with some of them working on projects with up to a whopping 25 million lines of code.
|
||||
|
||||
### [Memwatch][31] ###
|
||||
|
||||
**Developer**: Johan Lindh
|
||||
**Reviewed version**: 2.71
|
||||
**Linux support**: All flavors
|
||||
**License**: GNU GPL
|
||||
|
||||
Memwatch is a memory-debugging tool developed by Johan Lindh. Although it's primarily a memory-leak detector, it is also capable (according to its Web page) of detecting other memory-related issues like [double-free errors and erroneous frees][32], buffer overflow and underflow, [wild pointer][33] writes, and more.
|
||||
|
||||
The tool works with programs written in C. Although you can also use it with C++ programs, it's not recommended (according to the Q&A file that comes with the tool's source package).
|
||||
|
||||
#### What's new ####
|
||||
|
||||
This version adds ULONG_LONG_MAX to detect whether a program is 32-bit or 64-bit.
|
||||
|
||||
#### What's good about it ####
|
||||
|
||||
Like Dmalloc, Memwatch comes with good documentation. You can refer to the USING file if you want to learn things like how the tool works; how it performs initialization, cleanup and I/O operations; and more. Then there is a FAQ file that is aimed at helping users in case they face any common error while using Memwatch. Finally, there is a test.c file that contains a working example of the tool for your reference.
|
||||
|
||||
![cw memwatch output](http://images.techhive.com/images/article/2015/11/cw_memwatch_output-100627038-large.idge.png)
|
||||
Memwatch
|
||||
|
||||
Unlike Mtrace, the log file to which Memwatch writes the output (usually memwatch.log) is in human-readable form. Also, instead of truncating, Memwatch appends the memory-debugging output to the file each time the tool is run, allowing you to easily refer to the previous outputs should the need arise.
|
||||
|
||||
It's also worth mentioning that when you execute your program with Memwatch enabled, the tool produces a one-line output on [stdout][34] informing you that some errors were found -- you can then head to the log file for details. If no such error message is produced, you can rest assured that the log file won't contain any mistakes -- this actually saves time if you're running the tool several times.
|
||||
|
||||
Another thing that I liked about Memwatch is that it also provides a way through which you can capture the tool's output from within the code, and handle it the way you like (refer to the mwSetOutFunc() function in the Memwatch source code for more on this).
|
||||
|
||||
#### Other considerations ####
|
||||
|
||||
Like Mtrace and Dmalloc, Memwatch requires you to add extra code to your source file -- you have to include the memwatch.h header file in your code. Also, while compiling your program, you need to either compile memwatch.c along with your program's source files or link in the object module produced from it, and define the MEMWATCH and MW_STDIO variables on the command line. Needless to say, the -g compiler option is also required for your program if you want exact line numbers in the output.
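
Putting those build steps together, a Memwatch-enabled program might look roughly like the sketch below. The demo file name is hypothetical; memwatch.c, memwatch.h and the memwatch.log output file are the ones described above and shipped in the tool's source package.

    /*
     * A rough sketch of a Memwatch-enabled build (illustrative file names).
     *
     *   cc -g -DMEMWATCH -DMW_STDIO memwatch-demo.c memwatch.c -o memwatch-demo
     *   ./memwatch-demo && cat memwatch.log
     */
    #include <stdlib.h>
    #include "memwatch.h"   /* maps malloc/free to Memwatch's tracked versions */

    int main(void)
    {
        char *kept  = malloc(20);   /* never freed: should be listed as unfreed memory */
        char *freed = malloc(20);   /* balanced with the free() below: not reported */

        kept[0] = 'x';
        free(freed);
        return 0;
    }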
|
||||
|
||||
There are some features that it doesn't contain. For example, the tool cannot detect attempts to write to an address that has already been freed or read data from outside the allocated memory. Also, it's not thread-safe. Finally, as I've already pointed out in the beginning, there is no guarantee on how the tool will behave if you use it with programs written in C++.
|
||||
|
||||
#### Bottom line ####
|
||||
|
||||
Memwatch can detect many memory-related problems, making it a handy debugging tool when dealing with projects written in C. Given that it has a very small source code, you can learn how the tool works, debug it if the need arises, and even extend or update its functionality as per your requirements.
|
||||
|
||||
### [Mtrace][35] ###
|
||||
|
||||
**Developers**: Roland McGrath and Ulrich Drepper
|
||||
**Reviewed version**: 2.21
|
||||
**Linux support**: All flavors
|
||||
**License**: GNU LGPL
|
||||
|
||||
Mtrace is a memory-debugging tool included in [the GNU C library][36]. It works with both C and C++ programs on Linux, and detects memory leaks caused by unbalanced calls to the malloc() and free() functions.
|
||||
|
||||
![cw mtrace output](http://images.techhive.com/images/article/2015/11/cw_mtrace-output-100627039-large.idge.png)
|
||||
Mtrace
|
||||
|
||||
The tool is implemented in the form of a function called mtrace(), which traces all malloc/free calls made by a program and logs the information in a user-specified file. Because the file contains data in computer-readable format, a Perl script -- also named mtrace -- is used to convert and display it in human-readable form.
|
||||
|
||||
#### What's new ####
|
||||
|
||||
[The Mtrace source][37] and [the Perl file][38] that now come with the GNU C library (version 2.21) add nothing new to the tool aside from an update to the copyright dates.
|
||||
|
||||
#### What's good about it ####
|
||||
|
||||
The best part about Mtrace is that the learning curve for it isn't steep; all you need to understand is how and where to add the mtrace() -- and the corresponding muntrace() -- function in your code, and how to use the Mtrace Perl script. The latter is very straightforward -- all you have to do is run the mtrace <program-executable> <log-file-generated-upon-program-execution> command. (For an example, see the last command in the screenshot above.)
|
||||
|
||||
Another thing that I like about Mtrace is that it's scalable -- which means that you can not only use it to debug a complete program, but can also use it to detect memory leaks in individual modules of the program. Just call the mtrace() and muntrace() functions within each module.
|
||||
|
||||
Finally, since the tool is triggered when the mtrace() function -- which you add in your program's source code -- is executed, you have the flexibility to enable the tool dynamically (during program execution) [using signals][39].
|
||||
|
||||
#### Other considerations ####
|
||||
|
||||
Because the calls to the mtrace() and muntrace() functions -- which are declared in the mcheck.h file that you need to include in your program's source -- are fundamental to Mtrace's operation (the muntrace() function is not [always required][40]), the tool requires programmers to make changes in their code at least once.
|
||||
|
||||
Be aware that you need to compile your program with the -g option (provided by both the [GCC][41] and [G++][42] compilers), which enables the debugging tool to display exact line numbers in the output. In addition, some programs (depending on how big their source code is) can take a long time to compile. Finally, compiling with -g increases the size of the executable (because it produces extra information for debugging), so you have to remember that the program needs to be recompiled without -g after the testing has been completed.
|
||||
|
||||
To use Mtrace, you need to have some basic knowledge of environment variables in Linux, given that the path to the user-specified file -- which the mtrace() function uses to log all the information -- has to be set as a value for the MALLOC_TRACE environment variable before the program is executed.
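
Here is a minimal sketch that ties the pieces together: the mcheck.h header, the mtrace() and muntrace() calls, the MALLOC_TRACE variable and the mtrace Perl script. The file names are examples only.

    /*
     * A minimal Mtrace example (illustrative file names).
     *
     *   cc -g mtrace-demo.c -o mtrace-demo
     *   export MALLOC_TRACE=/tmp/mtrace.log
     *   ./mtrace-demo
     *   mtrace ./mtrace-demo $MALLOC_TRACE   # reports the unfreed block and its line
     */
    #include <mcheck.h>
    #include <stdlib.h>

    int main(void)
    {
        mtrace();                    /* start logging malloc/free calls */

        char *leaked = malloc(64);   /* allocated but never freed */
        char *freed  = malloc(64);
        leaked[0] = 'x';
        free(freed);                 /* balanced pair: not reported */

        muntrace();                  /* stop logging */
        return 0;
    }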
|
||||
|
||||
Feature-wise, Mtrace is limited to detecting memory leaks and attempts to free up memory that was never allocated. It can't detect other memory-related issues such as illegal memory access or use of uninitialized memory. Also, [there have been complaints][43] that it's not [thread-safe][44].
|
||||
|
||||
### Conclusions ###
|
||||
|
||||
Needless to say, each memory debugger that I've discussed here has its own qualities and limitations. So, which one is best suited for you mostly depends on what features you require, although ease of setup and use might also be a deciding factor in some cases.
|
||||
|
||||
Mtrace is best suited for cases where you just want to catch memory leaks in your software program. It can save you some time, too, since the tool comes pre-installed on your Linux system, something which is also helpful in situations where the development machines aren't connected to the Internet or you aren't allowed to download a third party tool for any kind of debugging.
|
||||
|
||||
Dmalloc, on the other hand, can not only detect more error types compared to Mtrace, but also provides more features, such as runtime configurability and GDB integration. Also, unlike any other tool discussed here, Dmalloc is thread-safe. Not to mention that it comes with detailed documentation, making it ideal for beginners.
|
||||
|
||||
Although Memwatch comes with even more comprehensive documentation than Dmalloc, and can detect even more error types, you can only use it with software written in the C programming language. One of its features that stands out is that it lets you handle its output from within the code of your program, something that is helpful in case you want to customize the format of the output.
|
||||
|
||||
If making changes to your program's source code is not what you want, you can use Electric Fence. However, keep in mind that it can only detect a couple of error types, and that doesn't include memory leaks. Plus, you also need to know GDB basics to make the most out of this memory-debugging tool.
|
||||
|
||||
Memcheck is probably the most comprehensive of them all. It detects more error types and provides more features than any other tool discussed here -- and it doesn't require you to make any changes in your program's source code. But be aware that, while the learning curve is not very high for basic usage, if you want to use features like suppression files or monitor commands, a level of expertise is definitely required.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.computerworld.com/article/3003957/linux/review-5-memory-debuggers-for-linux-coding.html
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.computerworld.com/author/Himanshu-Arora/
|
||||
[1]:https://openclipart.org/detail/132427/penguin-admin
|
||||
[2]:https://en.wikipedia.org/wiki/Manual_memory_management
|
||||
[3]:http://dmalloc.com/
|
||||
[4]:https://www.owasp.org/index.php/Double_Free
|
||||
[5]:https://stuff.mit.edu/afs/sipb/project/gnucash-test/src/dmalloc-4.8.2/dmalloc.html#Fence-Post%20Overruns
|
||||
[6]:http://dmalloc.com/releases/notes/dmalloc-5.5.2.html
|
||||
[7]:http://www.gnu.org/software/gdb/
|
||||
[8]:http://dmalloc.com/docs/
|
||||
[9]:http://dmalloc.com/docs/latest/online/dmalloc_26.html#SEC32
|
||||
[10]:http://dmalloc.com/docs/latest/online/dmalloc_23.html#SEC29
|
||||
[11]:https://en.wikipedia.org/wiki/Memory_management#Dynamic_memory_allocation
|
||||
[12]:https://en.wikipedia.org/wiki/Segmentation_fault
|
||||
[13]:https://en.wikipedia.org/wiki/GNU_Compiler_Collection
|
||||
[14]:http://www.gnu.org/software/gdb/
|
||||
[15]:https://launchpad.net/ubuntu/+source/electric-fence/2.2.3
|
||||
[16]:http://valgrind.org/docs/manual/mc-manual.html
|
||||
[17]:http://valgrind.org/info/developers.html
|
||||
[18]:http://valgrind.org/
|
||||
[19]:http://valgrind.org/docs/manual/dist.news.html
|
||||
[20]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles
|
||||
[21]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.mempools
|
||||
[22]:http://stackoverflow.com/questions/4642671/c-memory-allocators
|
||||
[23]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands
|
||||
[24]:http://valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs
|
||||
[25]:http://valgrind.org/docs/manual/valgrind_manual.pdf
|
||||
[26]:http://sourceforge.net/p/valgrind/mailman/message/30292453/
|
||||
[27]:https://msdn.microsoft.com/en-us/library/ee798431%28v=cs.20%29.aspx
|
||||
[28]:http://www.computerworld.com/article/2484425/linux/5-free-linux-text-editors-for-programming-and-word-processing.html?nsdr=true&page=2
|
||||
[29]:http://valgrind.org/docs/manual/manual-core.html#manual-core.limits
|
||||
[30]:http://valgrind.org/info/
|
||||
[31]:http://www.linkdata.se/sourcecode/memwatch/
|
||||
[32]:http://www.cecalc.ula.ve/documentacion/tutoriales/WorkshopDebugger/007-2579-007/sgi_html/ch09.html
|
||||
[33]:http://c2.com/cgi/wiki?WildPointer
|
||||
[34]:https://en.wikipedia.org/wiki/Standard_streams#Standard_output_.28stdout.29
|
||||
[35]:http://www.gnu.org/software/libc/manual/html_node/Tracing-malloc.html
|
||||
[36]:https://www.gnu.org/software/libc/
|
||||
[37]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.c;h=df10128b872b4adc4086cf74e5d965c1c11d35d2;hb=HEAD
|
||||
[38]:https://sourceware.org/git/?p=glibc.git;a=history;f=malloc/mtrace.pl;h=0737890510e9837f26ebee2ba36c9058affb0bf1;hb=HEAD
|
||||
[39]:http://webcache.googleusercontent.com/search?q=cache:s6ywlLtkSqQJ:www.gnu.org/s/libc/manual/html_node/Tips-for-the-Memory-Debugger.html+&cd=1&hl=en&ct=clnk&gl=in&client=Ubuntu
|
||||
[40]:http://www.gnu.org/software/libc/manual/html_node/Using-the-Memory-Debugger.html#Using-the-Memory-Debugger
|
||||
[41]:http://linux.die.net/man/1/gcc
|
||||
[42]:http://linux.die.net/man/1/g++
|
||||
[43]:https://sourceware.org/ml/libc-help/2014-05/msg00008.html
|
||||
[44]:https://en.wikipedia.org/wiki/Thread_safety
|
@ -1,171 +0,0 @@
|
||||
20 Years of GIMP Evolution: Step by Step
|
||||
================================================================================
|
||||
注:youtube 视频
|
||||
<iframe width="660" height="371" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/PSJAzJ6mkVw?feature=oembed"></iframe>
|
||||
|
||||
[GIMP][1] (GNU Image Manipulation Program) is a superb free and open source graphics editor. Development began in 1995 as a student project of Peter Mattis and Spencer Kimball at the University of California, Berkeley. In 1997 the project was renamed “GIMP” and became an official part of the [GNU Project][2]. Over the years GIMP has remained one of the best graphics editors around, and the platinum holy war “GIMP vs Photoshop” one of the most popular.
|
||||
|
||||
The first announcement, 21.11.1995:
|
||||
|
||||
> From: Peter Mattis
|
||||
>
|
||||
> Subject: ANNOUNCE: The GIMP
|
||||
>
|
||||
> Date: 1995-11-21
|
||||
>
|
||||
> Message-ID: <48s543$r7b@agate.berkeley.edu>
|
||||
>
|
||||
> Newsgroups: comp.os.linux.development.apps,comp.os.linux.misc,comp.windows.x.apps
|
||||
>
|
||||
> The GIMP: the General Image Manipulation Program
|
||||
> ------------------------------------------------
|
||||
>
|
||||
> The GIMP is designed to provide an intuitive graphical interface to a
|
||||
> variety of image editing operations. Here is a list of the GIMP's
|
||||
> major features:
|
||||
>
|
||||
> Image viewing
|
||||
> -------------
|
||||
>
|
||||
> * Supports 8, 15, 16 and 24 bit color.
|
||||
> * Ordered and Floyd-Steinberg dithering for 8 bit displays.
|
||||
> * View images as rgb color, grayscale or indexed color.
|
||||
> * Simultaneously edit multiple images.
|
||||
> * Zoom and pan in real-time.
|
||||
> * GIF, JPEG, PNG, TIFF and XPM support.
|
||||
>
|
||||
> Image editing
|
||||
> -------------
|
||||
>
|
||||
> * Selection tools including rectangle, ellipse, free, fuzzy, bezier
|
||||
> and intelligent.
|
||||
> * Transformation tools including rotate, scale, shear and flip.
|
||||
> * Painting tools including bucket, brush, airbrush, clone, convolve,
|
||||
> blend and text.
|
||||
> * Effects filters (such as blur, edge detect).
|
||||
> * Channel & color operations (such as add, composite, decompose).
|
||||
> * Plug-ins which allow for the easy addition of new file formats and
|
||||
> new effect filters.
|
||||
> * Multiple undo/redo.
|
||||
|
||||
### GIMP 0.54, 1996 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/054.png)
|
||||
|
||||
GIMP 0.54 required an X11 display, an X server and the Motif 1.2 widgets, and supported 8, 15, 16 & 24 bit color depths with RGB & grayscale colors. Supported image formats: GIF, JPEG, PNG, TIFF and XPM.
|
||||
|
||||
Basic functionality: rectangle, ellipse, free, fuzzy, bezier and intelligent selection tools, plus tools to rotate, scale, shear, clone, blend and flip images.
|
||||
|
||||
Extended tools: text operations, effects filters, tools for channel and color manipulation, and undo and redo operations. Since the first version GIMP has supported a plugin system.
|
||||
|
||||
GIMP 0.54 could be run on Linux, HP-UX, Solaris and SGI IRIX.
|
||||
|
||||
### GIMP 0.60, 1997 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/060.gif)
|
||||
|
||||
This was a development release, not intended for all users. GIMP got new toolkits – GDK (GIMP Drawing Kit) and GTK (GIMP Toolkit) – and Motif support was deprecated. The GIMP Toolkit was also the beginning of the GTK+ cross-platform widget toolkit. New features:
|
||||
|
||||
- basic layers
|
||||
- sub-pixel sampling
|
||||
- brush spacing
|
||||
- improved airbrush
|
||||
- paint modes
|
||||
|
||||
### GIMP 0.99, 1997 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/099.png)
|
||||
|
||||
Since version 0.99 GIMP has had support for scripting macros (Script-Fu). GTK and GDK, with some improvements, got a new name – GTK+. Other improvements:
|
||||
|
||||
- support for big images (larger than 100 MB)
|
||||
- new native format – XCF
|
||||
- new API – writing plugins and extensions is easy
|
||||
|
||||
### GIMP 1.0, 1998 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/100.gif)
|
||||
|
||||
GIMP and GTK+ were split into separate projects. The official GIMP website was reconstructed and contained new tutorials, plugins and documentation. New features:
|
||||
|
||||
- tile-based memory management
|
||||
- massive changes in plugin API
|
||||
- XCF format now supports layers, guides and selections
|
||||
- web interface
|
||||
- online graphics generation
|
||||
|
||||
### GIMP 1.2, 2000 ###
|
||||
|
||||
New features:
|
||||
|
||||
- translations for non-English languages
|
||||
- fixed many bugs in GTK+ and GIMP
|
||||
- many new plugins
|
||||
- image map
|
||||
- new tools: resize, measure, dodge, burn, smudge, sample colorize and curve bend
|
||||
- image pipes
|
||||
- images preview before saving
|
||||
- scaled brush preview
|
||||
- recursive selection by path
|
||||
- new navigation window
|
||||
- drag’n’drop
|
||||
- watermarks support
|
||||
|
||||
### GIMP 2.0, 2004 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/200.png)
|
||||
|
||||
The biggest change – new GTK+ 2.x toolkit.
|
||||
|
||||
### GIMP 2.2, 2004 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/220.png)
|
||||
|
||||
Many bugfixes and drag’n’drop support.
|
||||
|
||||
### GIMP 2.4, 2007 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/240.png)
|
||||
|
||||
New features:
|
||||
|
||||
- better drag’n’drop support
|
||||
- Tiny-Fu was replaced by Script-Fu – the new script interpreter
|
||||
- new plugins: photocopy, softglow, neon, cartoon, dog, glob and others
|
||||
|
||||
### GIMP 2.6, 2008 ###
|
||||
|
||||
New features:
|
||||
|
||||
- renewed graphics interface
|
||||
- new selection tools
|
||||
- GEGL (GEneric Graphics Library) integration
|
||||
- “The Utility Window Hint” for MDI behavior
|
||||
|
||||
### GIMP 2.8, 2012 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/GIMP%20History/280.png)
|
||||
|
||||
New features:
|
||||
|
||||
- GUI has some visual changes
|
||||
- new save and export menu
|
||||
- renewed text editor
|
||||
- layer groups support
|
||||
- JPEG2000 and export to PDF support
|
||||
- webpage screenshot tool
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://tlhp.cf/20-years-of-gimp-evolution/
|
||||
|
||||
作者:[Pavlo Rudyi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://tlhp.cf/author/paul/
|
||||
[1]:https://gimp.org/
|
||||
[2]:http://www.gnu.org/
|
87
sources/talk/20151201 Cinnamon 2.8 Review.md
Normal file
@ -0,0 +1,87 @@
|
||||
Cinnamon 2.8 Review
|
||||
================================================================================
|
||||
![](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2-8-featured.jpg)
|
||||
|
||||
Other than Gnome and KDE, Cinnamon is another desktop environment that is used by many people. It is made by the same team that produces Linux Mint (and ships with Linux Mint) and can also be installed on several other distributions. The latest version of this DE – Cinnamon 2.8 – was released earlier this month, and it brings a host of bug fixes and improvements as well as some new features.
|
||||
|
||||
I’m going to go over the major improvements made in this release as well as how to update to Cinnamon 2.8 or install it for the first time.
|
||||
|
||||
### Improvements to Applets ###
|
||||
|
||||
There are several improvements to already existing applets for the panel.
|
||||
|
||||
#### Sound Applet ####
|
||||
|
||||
![cinnamon-28-sound-applet](https://www.maketecheasier.com/assets/uploads/2015/11/rsz_cinnamon-28-sound-applet.jpg)
|
||||
|
||||
The Sound applet was revamped and now displays track information as well as the media controls on top of the cover art of the audio file. For music players with seeking support (such as Banshee), a progress bar will be displayed in the same region which you can use to change the position of the audio track. Right-clicking on the applet in the panel will display the options to mute input and output devices.
|
||||
|
||||
#### Power Applet ####
|
||||
|
||||
The Power applet now displays the status of each of the connected batteries and devices using the manufacturer’s data instead of generic names.
|
||||
|
||||
#### Window Thumbnails ####
|
||||
|
||||
![cinnamon-2.8-window-thumbnails](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2.8-window-thumbnails.png)
|
||||
|
||||
Cinnamon 2.8 brings the option to show window thumbnails when hovering over the window list in the panel. You can turn it off if you don’t like it, though.
|
||||
|
||||
#### Workspace Switcher Applet ####
|
||||
|
||||
![cinnamon-2.8-workspace-switcher](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2.8-workspace-switcher.png)
|
||||
|
||||
Adding the Workspace switcher applet to your panel will show you a visual representation of your workspaces with little rectangles embedded inside to show the position of your windows.
|
||||
|
||||
#### System Tray ####
|
||||
|
||||
Cinnamon 2.8 brings support for app indicators in the system tray. You can easily disable this in the settings which will force affected apps to fall back to using status icons instead.
|
||||
|
||||
### Visual Improvements ###
|
||||
|
||||
A host of visual improvements were made in Cinnamon 2.8. The classic and preview Alt + Tab switchers were polished with noticeable improvements, while the Alt + F2 dialog received bug fixes and better auto completion for commands.
|
||||
|
||||
Also, the issue with the traditional animation effect for minimizing windows is now sorted and works with multiple panels.
|
||||
|
||||
### Nemo Improvements ###
|
||||
|
||||
![cinnamon-2.8-nemo](https://www.maketecheasier.com/assets/uploads/2015/11/rsz_cinnamon-28-nemo.jpg)
|
||||
|
||||
The default file manager for Cinnamon also received several bug fixes and has a new “Quick-rename” feature for renaming files and directories. This works by clicking the file or directory twice with a short pause in between to rename the files.
|
||||
|
||||
Nemo also detects issues with thumbnails automatically and prompts you to quickly fix them.
|
||||
|
||||
### Other Notable improvements ###
|
||||
|
||||
- Applets now reload themselves automatically once they are updated.
|
||||
- Support for multiple monitors was improved significantly.
|
||||
- Dialog windows have been improved and now attach themselves to their parent windows.
|
||||
- HiDPI detection has been improved.
|
||||
- QT5 applications now look more native and use the default GTK theme.
|
||||
- Window management and rendering performance has been improved.
|
||||
- There are various bugfixes.
|
||||
|
||||
### How to Get Cinnamon 2.8 ###
|
||||
|
||||
If you’re running Linux Mint you will get Cinnamon 2.8 as part of the upgrade to Linux Mint 17.3 “Rosa” Cinnamon Edition. The BETA release is already out, so you can grab that if you’d like to get your hands on the new software immediately.
|
||||
|
||||
For Arch users, Cinnamon 2.8 is already in the official Arch repositories, so you can just update your packages and do a system-wide upgrade to get the latest version.
|
||||
|
||||
Finally, for Ubuntu users, you can install or upgrade to Cinnamon 2.8 by running the following commands in turn:
|
||||
|
||||
sudo add-apt-repository -y ppa:moorkai/cinnamon
|
||||
sudo apt-get update
|
||||
sudo apt-get install cinnamon
|
||||
|
||||
Have you tried Cinnamon 2.8? What do you think of it?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/cinnamon-2-8-review/
|
||||
|
||||
作者:[Ayo Isaiah][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com/author/ayoisaiah/
|
@ -0,0 +1,98 @@
|
||||
Translating by icecoobe.
|
||||
|
||||
What’s the Best File System for My Linux Install?
|
||||
================================================================================
|
||||
![](https://www.maketecheasier.com/assets/uploads/2015/05/file-systems-feature-image.jpg)
|
||||
|
||||
File systems: they’re not the most exciting things in the world, but important nonetheless. In this article we’ll go over the popular choices for file systems on Linux – what they’re about, what they can do, and who they’re for.
|
||||
|
||||
### Ext4 ###
|
||||
|
||||
![file-systems-ext4](https://www.maketecheasier.com/assets/uploads/2015/05/file-systems-ext4.png)
|
||||
|
||||
If you’ve ever installed Linux before, chances are you’ve seen the “Ext4” during installation. There’s a good reason for that: it’s the file system of choice for just about every Linux distribution available right now. Sure, there are some that choose other options, but there’s no denying that Extended 4 is the file system of choice for almost all Linux users.
|
||||
|
||||
#### What can it do? ####
|
||||
|
||||
Extended 4 has all of the goodness that you’ve come to expect from past file system iterations (Ext2/Ext3) but with enhancements. There’s a lot to dig into, but here are the best parts of what Ext4 can do for you:
|
||||
|
||||
- file system journaling
|
||||
- journal checksums
|
||||
- multi-block file allocation
|
||||
- backwards compatibility support for Extended 2 and 3
|
||||
- persistent pre-allocation of free space
|
||||
- improved file system checking (over previous versions)
|
||||
- and of course, support for larger files
|
||||
|
||||
#### Who is it for? ####
|
||||
|
||||
Extended 4 is for those looking for a super-stable foundation to build upon, or for those looking for something that just works. This file system won’t snapshot your system; it doesn’t even have the greatest SSD support, but If your needs aren’t too extravagant, you’ll get along with it just fine.
|
||||
|
||||
### BtrFS ###
|
||||
|
||||
![file-systems-btrFS](https://www.maketecheasier.com/assets/uploads/2015/05/file-systems-btrFS-e1450065697580.png)
|
||||
|
||||
The B-tree file system (also known as butterFS) is a file system for Linux developed by Oracle. It's a newer file system still in heavy development, and some in the Linux community consider it too unstable for everyday use. BtrFS is built around the principle of copy-on-write. **Copy-on-write** means that when a piece of data is modified, the file system writes the changed copy to a new location instead of overwriting the original in place.
|
||||
|
||||
#### What can it do? ####
|
||||
|
||||
Besides supporting copy-on-write, BtrFS can do many other things – so many things, in fact, that it’d take forever to list everything. Here are the most notable features: The file system supports read-only snapshots, file cloning, subvolumes, transparent compression, offline file system check, in-place conversion from ext3 and 4 to Btrfs, and online defragmentation, and it has support for RAID 0, RAID 1, RAID 5, RAID 6 and RAID 10.
|
||||
|
||||
#### Who is it for? ####
|
||||
|
||||
The developers of BtrFS have promised that this file system is the next-gen replacement for other file systems out there. That much is true, though it certainly is a work in progress. There are many killer features for advanced users and basic users alike (including great performance on SSDs). This file system is for those looking to get a little bit more out of their file system and who want to try the copy-on-write way of doing things.
|
||||
|
||||
### XFS ###
|
||||
|
||||
![file-systems-xfs](https://www.maketecheasier.com/assets/uploads/2015/05/file-systems-xfs.jpg)
|
||||
|
||||
Created by Silicon Graphics, XFS is a high-end file system that specializes in speed and performance. XFS does extremely well when it comes to parallel input and output because of its focus on performance. The XFS file system can handle massive amounts of data, so much so, in fact, that some XFS users store upwards of 300 terabytes of data.
|
||||
|
||||
#### What can it do? ####
|
||||
|
||||
XFS is a well-tested data storage file system created for high performance operations. Its features include:
|
||||
|
||||
- striped allocation of RAID arrays
|
||||
- file system journaling
|
||||
- variable block sizes
|
||||
- direct I/O
|
||||
- guaranteed-rate I/O
|
||||
- snapshots
|
||||
- online defragmentation
|
||||
- online resizing
|
||||
|
||||
#### Who is it for? ####
|
||||
|
||||
XFS is for those looking for a rock-solid file solution. The file system has been around since 1993 and has only gotten better and better with time. If you have a home server and you’re perplexed on where you should go with storage, consider XFS. A lot of the features the file system comes with (like snapshots) could aid in your file storage system. It’s not just for servers, though. If you’re a more advanced user and you’re interested in a lot of what was promised in BtrFS, check out XFS. It does a lot of the same stuff and doesn’t have stability issues.
|
||||
|
||||
### Reiser4 ###
|
||||
|
||||
![file-system-riser4](https://www.maketecheasier.com/assets/uploads/2015/05/file-system-riser4.gif)
|
||||
|
||||
Reiser4, the successor to ReiserFS, is a file system created and developed by Namesys. The creation of Reiser4 was backed by the Linspire project as well as DARPA. What makes Reiser4 special is its multitude of transaction models. There isn’t one single way data can be written; instead, there are many.
|
||||
|
||||
#### What can it do? ####
|
||||
|
||||
Reiser4 has the unique ability to use different transaction models. It can use the copy-on-write model (like BtrFS), write-anywhere, journaling, and the hybrid transaction model. It has a lot of improvements upon ReiserFS, including better file system journaling via wandering logs, better support for smaller files, and faster handling of directories. Reiser4 has a lot to offer. There are a lot more features to talk about, but suffice it to say it’s a huge improvement over ReiserFS with tons of added features.
|
||||
|
||||
#### Who is it for? ####
|
||||
|
||||
Reiser4 is for those looking to stretch one file system across multiple use-cases. Maybe you want to set up one machine with copy-on-write, another with write-anywhere, and another with hybrid transaction, and you don’t want to use different types of file systems to accomplish this task. Reiser4 is perfect for this type of use-case.
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
There are many file systems available on Linux. Each serves a unique purpose for unique users looking to solve different problems. This post focuses on the most popular choices for the platform. There is no doubt there are other choices out there for other use-cases.
|
||||
|
||||
What’s your favorite file system to use on Linux? Tell us why below!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/best-file-system-linux/
|
||||
|
||||
作者:[Derrik Diener][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com/author/derrikdiener/
|
66
sources/talk/20151227 Upheaval in the Debian Live project.md
Normal file
@ -0,0 +1,66 @@
|
||||
While the event had a certain amount of drama surrounding it, the [announcement][1] of the end for the [Debian Live project][2] seems likely to have less of an impact than it first appeared. The loss of the lead developer will certainly be felt—and the treatment he and the project received seems rather baffling—but the project looks like it will continue in some form. So Debian will still have tools to create live CDs and other media going forward, but what appears to be a long-simmering dispute between project founder and leader Daniel Baumann and the Debian CD and installer teams has been "resolved", albeit in an unfortunate fashion.
|
||||
|
||||
The November 9 announcement from Baumann was titled "An abrupt End to Debian Live". In that message, he pointed to a number of different events over the nearly ten years since the [project was founded][3] that indicated to him that his efforts on Debian Live were not being valued, at least by some. The final straw, it seems, was an "intent to package" (ITP) bug [filed][4] by Iain R. Learmonth that impinged on the namespace used by Debian Live.
|
||||
|
||||
Given that one of the main Debian Live packages is called "live-build", the new package's name, "live-build-ng", was fairly confrontational in and of itself. Live-build-ng is meant to be a wrapper around the [vmdebootstrap][5] tool for creating live media (CDs and USB sticks), which is precisely the role Debian Live is filling. But when Baumann [asked][6] Learmonth to choose a different name for his package, he got an "interesting" [reply][7]:
|
||||
|
||||
```
It is worth noting that live-build is not a Debian project, it is an external project that claims to be an official Debian project. This is something that needs to be fixed.

There is no namespace issue, we are building on the existing live-config and live-boot packages that are maintained and bringing these into Debian as native projects. If necessary, these will be forks, but I'm hoping that won't have to happen and that we can integrate these packages into Debian and continue development in a collaborative manner.

live-build has been deprecated by debian-cd, and live-build-ng is replacing it. In a purely Debian context at least, live-build is deprecated. live-build-ng is being developed in collaboration with debian-cd and D-I [Debian Installer].
```
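
For readers who have never touched the tooling at the center of this dispute, live-build drives an image build through a handful of `lb` subcommands. A minimal sketch follows (assuming a Debian system with the live-build package installed; the distribution name and the absence of further options are purely illustrative):

    ~$ sudo apt-get install live-build
    ~$ mkdir live-test && cd live-test
    # Generate the default auto/ and config/ tree that describes the image.
    ~$ lb config --distribution jessie
    # Build the image (needs root); the result is a hybrid ISO in the working directory.
    ~$ sudo lb build
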
Whether or not Debian Live is an "official" Debian project (or even what "official" means in this context) has been disputed in the thread. Beyond that, though, Neil Williams (who is the maintainer of vmdebootstrap) [provided some][8] explanation for the switch away from Debian Live:
```
vmdebootstrap is being extended explicitly to provide support for a replacement for live-build. This work is happening within the debian-cd team to be able to solve the existing problems with live-build. These problems include reliability issues, lack of multiple architecture support and lack of UEFI support. vmdebootstrap has all of these, we do use support from live-boot and live-config as these are out of the scope for vmdebootstrap.
```
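
By way of comparison, vmdebootstrap is a single command-line tool rather than a configuration-tree-driven build system. A rough sketch of building a small image is below; the option names (`--image`, `--size`, `--distribution`) are the commonly documented ones, but verify them against `vmdebootstrap --help` on your system, as they are written from memory rather than taken from the article:

    ~$ sudo apt-get install vmdebootstrap
    # Build a small jessie disk image; root is needed for loop devices and debootstrap.
    ~$ sudo vmdebootstrap --verbose --image=test.img --size=2G --distribution=jessie
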
Those seem like legitimate complaints, but ones that could have been fixed within the existing project. Instead, though, something of a stealth project was evidently undertaken to replace live-build. As Baumann [pointed out][9], nothing was posted to the debian-live mailing list about the plans. The ITP was the first notice that anyone from the Debian Live project got about the plans, so it all looks like a "secret plan"—something that doesn't sit well in a project like Debian.
As might be guessed, there were multiple postings that supported Baumann's request to rename "live-build-ng", followed by many that expressed dismay at his decision to stop working on Debian Live. But Learmonth and Williams were adamant that replacing live-build is needed. Learmonth did [rename][10] live-build-ng to a perhaps less confrontational name: live-wrapper. He noted that his aim had been to add the new tool to the Debian Live project (and "bring the Debian Live project into Debian"), but things did not play out that way.
```
I apologise to everyone that has been upset by the ITP bug. The software is not yet ready for use as a full replacement for live-build, and it was filed to let people know that the work was ongoing and to collect feedback. This sort of worked, but the feedback wasn't the kind I was looking for.
```
The backlash could perhaps have been foreseen. Communication is a key aspect of free-software communities, so a plan to replace the guts of a project seems likely to be controversial—more so if it is kept under wraps. For his part, Baumann has certainly not been perfect—he delayed the "wheezy" release by [uploading an unsuitable syslinux package][11] and [dropped down][12] from a Debian Developer to a Debian Maintainer shortly thereafter—but that doesn't mean he deserves this kind of treatment. There are others involved in the project as well, of course, so it is not just Baumann who is affected.
One of those other people is Ben Armstrong, who has been something of a diplomat during the event and has tried to smooth the waters. He started with a [post][13] that celebrated the project and what Baumann and the team had accomplished over the years. As he noted, the [list of downstream projects][14] for Debian Live is quite impressive. In another post, he also [pointed out][15] that the project is not dead:
```
If the Debian CD team succeeds in their efforts and produces a replacement that is viable, reliable, well-tested, and a suitable candidate to replace live-build, this can only be good for Debian. If they are doing their job, they will not "[replace live-build with] an officially improved, unreliable, little-tested alternative". I've seen no evidence so far that they operate that way. And in the meantime, live-build remains in the archive -- there is no hurry to remove it, so long as it remains in good shape, and there is not yet an improved successor to replace it.
```
On November 24, Armstrong [posted][16] an update on Debian Live to the mailing list (and to [his blog][17]). It shows some good progress made in the two weeks since Baumann's exit; there are even signs of collaboration between the project and the live-wrapper developers. There is also a [to-do list][18], as well as the inevitable call for more help. That gives reason to believe that all of the drama surrounding the project was just a glitch—avoidable, perhaps, but not quite as dire as it might have seemed.
---------------------------------

via: https://lwn.net/Articles/665839/

作者:Jake Edge
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]: https://lwn.net/Articles/666127/
[2]: http://live.debian.net/
[3]: https://www.debian.org/News/weekly/2006/08/
[4]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=804315
[5]: http://liw.fi/vmdebootstrap/
[6]: https://lwn.net/Articles/666173/
[7]: https://lwn.net/Articles/666176/
[8]: https://lwn.net/Articles/666181/
[9]: https://lwn.net/Articles/666208/
[10]: https://lwn.net/Articles/666321/
[11]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=699808
[12]: https://nm.debian.org/public/process/14450
[13]: https://lwn.net/Articles/666336/
[14]: http://live.debian.net/project/downstream/
[15]: https://lwn.net/Articles/666338/
[16]: https://lwn.net/Articles/666340/
[17]: http://syn.theti.ca/2015/11/24/debian-live-after-debian-live/
[18]: https://wiki.debian.org/DebianLive/TODO
@ -1,95 +0,0 @@
alim0x translating
The history of Android
================================================================================
![Another Market design that was nothing like the old one. This lineup shows the categories page, featured, a top apps list, and an app page.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/market-pages.png)
Another Market design that was nothing like the old one. This lineup shows the categories page, featured, a top apps list, and an app page.
Photo by Ron Amadeo
These screenshots give us our first look at the refined version of the Action Bar in Ice Cream Sandwich. Almost every app got a bar at the top of the screen that housed the app icon, the title of the screen, several function buttons, and a menu button on the right. The right-aligned menu button was called the "overflow" button, because it housed items that didn't fit on the main action bar. The overflow menu wasn't static, though: give the action bar more screen real estate—like in horizontal mode or on a tablet—and more of the overflow menu items were shown on the action bar as actual buttons.
New in Ice Cream Sandwich was this design style of "swipe tabs," which replaced the 2×3 interstitial navigation screen Google was previously pushing. A tab bar sat just under the Action Bar, with the center title showing the current tab and the left and right having labels for the pages to the left and right of this screen. A swipe in either direction would change tabs, or you could tap on a title to go to that tab.
One really cool design touch on the individual app screen was that, after the pictures, it would dynamically rearrange the page based on your history with that app. If you never installed the app before, the description would be the first box. If you used the app before, the first section would be the reviews bar, which would either invite you to review the app or remind you what you thought of the app last time you installed it. The second section for a previously used app was “What’s New," since an existing user would most likely be interested in changes.
![Recent apps and the browser were just like Honeycomb, but smaller.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/recentbrowser.png)
Recent apps and the browser were just like Honeycomb, but smaller.
Photo by Ron Amadeo
Recent apps toned the Tron look way down. The blue outline around the thumbnails was removed, along with the eerie, uneven blue glow in the background. It now looked like a neutral UI piece that would be at home in any time period.
The Browser did its best to bring a tabbed experience to phones. Multi-tab browsing was placed front and center, but instead of wasting precious screen space on a tab strip, a tab button would open a Recent Apps-like interface that would show you your open tabs. Functionally, there wasn't much difference between this and the "window" view that was present in past versions of the Browser. The best addition to the Browser was a "Request desktop site" menu item, which would switch from the default mobile view to the normal site. The Browser showed off the flexibility of Google's Action Bar design, which, despite not having a top-left app icon, still functioned like any other top bar design.
![Gmail and Google Talk—they're like Honeycomb, but smaller!](http://cdn.arstechnica.net/wp-content/uploads/2014/03/gmail2.png)
Gmail and Google Talk—they're like Honeycomb, but smaller!
Photo by Ron Amadeo
Gmail and Google Talk both looked like smaller versions of their Honeycomb designs, but with a few tweaks to work better on smaller screens. Gmail featured a dual Action Bar—one on the top of the screen and one on the bottom. The top bar showed your current folder, account, and number of unread messages, and tapping on the bar opened a navigation menu. The bottom featured all the normal buttons you would expect along with the overflow button. This dual layout was used in order to display more buttons on the surface level, but in landscape mode, where vertical space was at a premium, the dual bars merged into a single top bar.
In the message view, the blue bar was "sticky" when you scrolled down. It stuck to the top of the screen, so you could always see who wrote the current message, reply, or star it. Once in a message, the thin, dark gray bar at the bottom showed your current spot in the inbox (or whatever list brought you here), and you could swipe left and right to get to other messages.
Google Talk would let you swipe left and right to change chat windows, just like Gmail, but there the bar was at the top.
![The new dialer and the incoming call screen, both of which we haven't seen since Gingerbread.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/inc-calls.png)
The new dialer and the incoming call screen, both of which we haven't seen since Gingerbread.
Photo by Ron Amadeo
Since Honeycomb was only for tablets, some UI pieces were directly preceded by Gingerbread instead. The new Ice Cream Sandwich dialer was, of course, black and blue, and it used smaller tabs that could be swiped through. While Ice Cream Sandwich finally did the sensible thing and separated the main phone and contacts interfaces, the phone app still had its own contacts tab. There were now two spots to view your contact list—one with a dark theme and one with a light theme. With a hardware search button no longer being a requirement, the bottom row of buttons had the voicemail shortcut swapped out for a search icon.
Google liked to have the incoming call interface mirror the lock screen, which meant Ice Cream Sandwich got a circle-unlock design. Besides the usual decline or accept options, a new button was added to the top of the circle, which would let you decline a call by sending a pre-defined text message to the caller. Swiping up and picking a message like "Can't talk now, call you later" was (and still is) much more informative than an endlessly ringing phone.
![Honeycomb didn't have folders or a texting app, so here's Ice Cream Sandwich versus Gingerbread.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/thenonmessedupversion.png)
Honeycomb didn't have folders or a texting app, so here's Ice Cream Sandwich versus Gingerbread.
Photo by Ron Amadeo
Folders were now much easier to make. In Gingerbread, you had to long press on the screen, pick "folders," and then pick "new folder." In Ice Cream Sandwich, just drag one icon on top of another, and a folder is created containing those two icons. It was dead simple and much easier than finding the hidden long-press command.
The design was much improved, too. Gingerbread used a generic beige folder icon, but Ice Cream Sandwich actually showed you what was in the folder by stacking the first three icons on top of each other, drawing a circle around them, and using that as the folder icon. Open folder containers resized to fit the amount of icons in the folder rather than being a full-screen, mostly empty box. It looked way, way better.
![YouTube switched to a more modern white theme and used a list view instead of the crazy 3D scrolling](http://cdn.arstechnica.net/wp-content/uploads/2014/03/youtubes.png)
YouTube switched to a more modern white theme and used a list view instead of the crazy 3D scrolling
Photo by Ron Amadeo
YouTube was completely redesigned and looked less like something from The Matrix and more like, well, YouTube. It was a simple white list of vertically scrolling videos, just like the website. Making videos on your phone was given prime real estate, with the first button on the action bar dedicated to recording a video. Strangely, different screens used different YouTube logos in the top left, switching between a horizontal YouTube logo and a square one.
YouTube used swipe tabs just about everywhere. They were placed on the main page to browse and view your account and on the video pages to switch between comments, info, and related videos. The 4.0 app showed the first signs of Google+ YouTube integration, placing a "+1" icon next to the traditional rating buttons. Eventually Google+ would completely take over YouTube, turning the comments and author pages into Google+ activity.
![Ice Cream Sandwich tried to make things easier on everyone. Here is a screen for tracking data usage, the new developer options with tons of analytics enabled, and the intro tutorial.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/data.png)
Ice Cream Sandwich tried to make things easier on everyone. Here is a screen for tracking data usage, the new developer options with tons of analytics enabled, and the intro tutorial.
Photo by Ron Amadeo
Data Usage allowed users to easily keep track of and control their data usage. The main page showed a graph of this month's data usage, and users could set thresholds to be warned about data consumption or even set a hard usage limit to avoid overage charges. All of this was done easily by dragging the horizontal orange and red threshold lines higher or lower on the chart. The vertical white bars allowed users to select a slice of time in the graph. At the bottom of the page, the data usage for the selected time was broken down by app, so users could select a spike and easily see what app was sucking up all their data. When times got really tough, in the overflow button was an option to restrict all background data. Then, only apps running in the foreground could have access to the Internet connection.
The Developer Options typically only housed a tiny handful of settings, but in Ice Cream Sandwich the section received a huge expansion. Google added all sorts of on-screen diagnostic overlays to help app developers understand what was happening inside their app. You could view CPU usage, pointer location, and view screen updates. There were also options to change the way the system functioned, like control over animation speed, background processing, and GPU rendering.
One of the biggest differences between Android and iOS is Android's app drawer interface. In Ice Cream Sandwich's quest to be more user-friendly, the initial startup launched a small tutorial showing users where the app drawer was and how to drag icons out of the drawer and onto the homescreen. With the removal of the off-screen menu button and changes like this, Android 4.0 made a big push to be more inviting to new smartphone users and switchers.
![The "touch to beam" NFC support, Google Earth, and App Info, which would let you disable crapware.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/2014-03-06-03.57.png)
The "touch to beam" NFC support, Google Earth, and App Info, which would let you disable crapware.
Built into Ice Cream Sandwich was full support for [NFC][1]. While previous devices like the Nexus S had NFC, support was limited and the OS couldn't do much with the chip. 4.0 added a feature called Android Beam, which would let two NFC-equipped Android 4.0 devices transfer data back and forth. NFC would transmit data related to whatever was on the screen at the time, so tapping when a phone displayed a webpage would send that page to the other phone. You could also send contact information, directions, and YouTube links. When the two phones were put together, the screen zoomed out, and tapping on the zoomed-out display would send the information.
In Android, users are not allowed to uninstall system apps, which are often integral to the function of the device. Carriers and OEMs took advantage of this and started putting crapware in the system partition, sticking users with software they didn't want and couldn't remove. Android 4.0 allowed users to disable any app that couldn't be uninstalled, meaning the app remained on the system but didn't show up in the app drawer and couldn't be run. If users were willing to dig through the settings, this gave them an easy way to take control of their phone.
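
As an aside that is not in the original article, the same on/off switch is also reachable from a computer over adb; the package name below is a made-up example, and on stock 4.0 devices `pm disable` typically needs root (or at least a privileged shell) for apps you did not install yourself:

    # Find the exact package name of the offending app.
    ~$ adb shell pm list packages | grep -i vendor
    # Disable it: the app stays installed but vanishes from the app drawer and cannot run.
    ~$ adb shell pm disable com.example.carrierbloat
    # Bring it back later if something breaks.
    ~$ adb shell pm enable com.example.carrierbloat
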
Android 4.0 can be thought of as the start of the modern Android era. Most of the Google apps released around this time only worked on Android 4.0 and above. There were so many new APIs that Google wanted to take advantage of that—initially at least—support for versions below 4.0 was limited. After Ice Cream Sandwich and Honeycomb, Google was really starting to get serious about software design. In January 2012, the company [finally launched][2] *Android Design*, a design guideline site that taught Android app developers how to create apps to match the look and feel of Android. This was something iOS had offered from the start of third-party app support, and Apple enforced its guidelines so seriously that apps that did not meet them were blocked from the App Store. The fact that Android went three years without any kind of public design documents from Google shows just how bad things used to be. But with Duarte in charge of Android's design revolution, the company was finally addressing basic design needs.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)

[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.

[@RonAmadeo][t]
--------------------------------------------------------------------------------

via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/20/

译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:http://arstechnica.com/gadgets/2011/02/near-field-communications-a-technology-primer/
[2]:http://arstechnica.com/business/2012/01/google-launches-style-guide-for-android-developers/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo
@ -1,103 +0,0 @@
The history of Android
================================================================================
![](http://cdn.arstechnica.net/wp-content/uploads/2014/03/playicons2.png)
Photo by Ron Amadeo
### Google Play and the return of direct-to-consumer device sales ###
On March 6, 2012, Google unified all of its content offerings under the banner of "Google Play." The Android Market became the Google Play Store, Google Books became Google Play Books, Google Music became Google Play Music, and Android Market Movies became Google Play Movies & TV. While the app interfaces didn't change much, all four content apps got new names and icons. Content purchased in the Play Store would be downloaded to the appropriate app, and the Play Store and Play content apps all worked together to provide a fairly organized content experience.
The Google Play update was Google's first big out-of-cycle update. Four packed-in apps were all changed without having to issue a system update—they were all updated through the Android Market/Play Store. Enabling out-of-cycle updates to individual apps was a big focus for Google, and being able to do an update like this was the culmination of an engineering effort that started in the Gingerbread era. Google had been working on "decoupling" the apps from the operating system and making everything portable enough to be distributed through the Android Market/Play Store.
While one or two apps (mostly Maps and Gmail) had previously lived on the Android Market, from here on you'll see a lot more significant updates that have nothing to do with an operating system release. System updates require the cooperation of OEMs and carriers, so they are difficult to push out to every user. Play Store updates are completely controlled by Google, though, providing the company a direct line to users' devices. For the launch of Google Play, the Android Market updated itself to the Google Play Store, and from there, Books, Music, and Movies were all issued Google Play-flavored updates.
The design of the Google Play apps was still all over the place. Each app looked and functioned differently, but for now, a cohesive brand was a good start. And removing "Android" from the branding was necessary because many services were available in the browser and could be used without touching an Android device at all.
In April 2012, Google started [selling devices through the Play Store again][1], reviving the direct-to-customer model it had experimented with for the launch of the Nexus One. While it was only two years after ending the Nexus One sales, Internet shopping was now more commonplace, and buying something before you could hold it didn't seem as crazy as it did in 2010.
Google also saw how price-conscious consumers became when faced with the Nexus One's $530 price tag. The first device for sale was an unlocked, GSM version of the Galaxy Nexus for $399. From there, price would go even lower. $350 has been the entry-level price for the last two Nexus smartphones, and 7-inch Nexus tablets would come in at only $200 to $220.
Today, the Play Store sells eight different Android devices, four Chromebooks, a thermostat, and tons of accessories, and the device store is the de-facto location for a new Google product launch. New phone launches are so popular, the site usually breaks under the load, and new Nexus phones sell out in a few hours.
### Android 4.1, Jelly Bean—Google Now points toward the future ###
![The Asus-made Nexus 7, Android 4.1's launch device.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/ASUS_Google_Nexus_7_4_11.jpg)
The Asus-made Nexus 7, Android 4.1's launch device.
With the release of Android 4.1, Jelly Bean, in July 2012, Google settled into an Android release cadence of about every six months. The platform had matured to the point where a release every three months was unnecessary, and the slower release cycle gave OEMs a chance to catch their breath. Unlike with Honeycomb, point releases were now fairly major updates, with 4.1 bringing major UI and framework changes.
One of the biggest changes in Jelly Bean that you won't be able to see in screenshots is "Project Butter," the name for a concerted effort by Google's engineers to make Android animations run smoothly at 30FPS. Core changes were made, like Vsync and triple buffering, and individual animations were optimized so they could be drawn smoothly. Animation and scrolling smoothness had always been a weak point of Android when compared to iOS. After some work on both the core animation framework and on individual apps, Jelly Bean brought Android a lot closer to iOS' smoothness.
Along with Jelly Bean came the [Nexus][2] 7, a 7-inch tablet manufactured by Asus. Unlike the primarily horizontal Xoom, the Nexus 7 was meant to be used in portrait mode, like a large phone. The Nexus 7 showed that, after almost a year-and-a-half of ecosystem building, Google was ready to commit to the tablet market with a flagship device. Like the Nexus One and GSM Galaxy Nexus, the Nexus 7 was sold online directly by Google. While those earlier devices had shockingly high prices for consumers that were used to carrier subsidies, the Nexus 7 hit a mass market price point of only $200. The price bought you a device with a 7-inch, 1280x800 display, a quad core, 1.2 GHz Tegra 3 processor, 1GB of RAM, and 8GB of storage. The Nexus 7 was such a good value that many wondered if Google was making any money at all on its flagship tablet.
This smaller, lighter, 7-inch form factor would be a huge success for Google, and it put the company in the rare position of being an industry trendsetter. Apple, which started with a 10-inch iPad, was eventually forced to answer the Nexus 7 and tablets like it with the iPad Mini.
![4.1's new lock screen design, wallpaper, and the new on-press highlight on the system buttons.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/picture.png)
4.1's new lock screen design, wallpaper, and the new on-press highlight on the system buttons.
Photo by Ron Amadeo
The Tron look introduced in Honeycomb was toned down a little in Ice Cream Sandwich, and Jelly Bean took things a step further, removing blue from large chunks of the operating system. One hint of this was the on-press highlights on the system buttons, which changed from blue to gray.
![A composite image of the new app lineup and the new notification panel with expandable notifications.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/jb-apps-and-notications.png)
A composite image of the new app lineup and the new notification panel with expandable notifications.
Photo by Ron Amadeo
The Notification panel was completely revamped, and we've finally arrived at the design used today in KitKat. The new panel extended to the top of the screen and covered the usual status icons, meaning the status bar was no longer visible when the panel was open. The time was prominently displayed in the top left corner, along with the date and a settings shortcut. The clear-all-notifications button, which was represented by an "X" in Ice Cream Sandwich, changed to a stairstep icon, symbolizing the staggered sliding animation that cleared the notification panel. The bottom handle changed from a circle to a single line that ran the length of the notification panel. All the typography was changed—the notification panel now used bigger, thinner fonts for everything. This was another screen where the blue introduced in Ice Cream Sandwich and Honeycomb was removed. The notification panel was entirely gray now except for on-touch highlights.
There was new functionality in the panel, too. Notifications were now expandable and could show much more information than the previous two-line design. It now showed up to eight lines of text and could even show buttons at the bottom of the notification. The screenshot notification had a share button at the bottom, and you could call directly from a missed call notification, or you could snooze a ringing alarm all from the notification panel. New notifications were expanded by default, but as they piled up they would collapse back to the traditional size. Dragging down on a notification with two fingers would expand it.
![The new Google Search app, with Google Now cards, voice search, and text search.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/googlenow.png)
The new Google Search app, with Google Now cards, voice search, and text search.
Photo by Ron Amadeo
The biggest feature addition to Jelly Bean, not only for Android but for Google as a whole, was the new version of the Google Search application. This introduced "Google Now," a predictive search feature. Google Now was displayed as several cards that sat below the search box, and it would offer results for searches Google thought you cared about. These were things like Google Maps searches for places you had recently looked at on your desktop computer or calendar appointment locations, the weather, and the time at home while traveling.
The new Google Search app could, of course, be launched with the Google icon, but it could also be accessed from any screen with a swipe up from the system bar. Long pressing on the system bar brought up a ring that worked similarly to the lock screen ring. The card section scrolled vertically, and cards could be swiped away if you didn't want to see them. Voice Search was a big part of the update. Questions weren't just blindly entered into Google; if Google knew the answer, it would also talk back using a text-to-speech engine. And old-school text searches were, of course, still supported. Just tap on the bar and start typing.
Google frequently called Google Now "the future of Google Search." Telling Google what you wanted wasn't good enough. Google wanted to know what you wanted before you did. Google Now put all of Google's data mining knowledge about you to work for you, and it was the company's biggest advantage against rival search services like Bing. Smartphones knew more about you than any other device you own, so the service debuted on Android. But Google slowly worked Google Now into Chrome, and eventually it will likely end up on Google.com.
While the functionality was important, it became clear that Google Now was the most important design work to ever come out of the company, too. The white card aesthetic that this app introduced would become the foundation for Google's design of just about everything. Today, this card style is used in the Google Play Store and in all of the Play content apps, YouTube, Google Maps, Drive, Keep, Gmail, Google+, and many others. It's not just Android apps, either. Many of Google's desktop sites and iOS apps are inspired by this design. Design was historically one of Google's weak areas, but Google Now was the point where the company finally got its act together with a cohesive, company-wide design language.
![Yet another YouTube redesign. Information density went way down.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/yotuube.png)
Yet another YouTube redesign. Information density went way down.
Photo by Ron Amadeo
Another version, another YouTube redesign. This time the list view was primarily thumbnail-based, with giant images taking up most of the screen real estate. Information density tanked with the new list design. Before, YouTube would display around six items per screen; now it could only display three.
YouTube was one of the first apps to add a sliding drawer to the left side of the app, a feature which would become a standard design style across Google's apps. The drawer had links for your account and channel subscriptions, which allowed Google to kill the tabs-on-top design.
![Google Play Services' responsibilities versus the rest of Android.](http://cdn.arstechnica.net/wp-content/uploads/2013/08/playservicesdiagram2.png)
Google Play Services' responsibilities versus the rest of Android.
Photo by Ron Amadeo
### Google Play Services—fragmentation and making OS versions (nearly) obsolete ###
It didn't seem like a big deal at the time, but in September 2012, Google Play Services 1.0 was automatically pushed out to every Android phone running 2.2 and up. It added a few Google+ APIs and support for OAuth 2.0.
While this update might sound boring, Google Play Services would eventually grow to become an integral part of Android. Google Play Services acts as a shim between the normal apps and the installed Android OS, allowing Google to update or replace some core components and add APIs without having to ship out a new Android version.
With Play Services, Google had a direct line to the core of an Android phone without having to go through OEM updates and carrier approval processes. Google used Play Services to add an entirely new location system, a malware scanner, remote wipe capabilities, and new Google Maps APIs, all without shipping an OS update. Like we mentioned at the end of the Gingerbread section, thanks to all the "portable" APIs implemented in Play Services, Gingerbread can still download a modern version of the Play Store and many other Google Apps.
The other big benefit was compatibility with Android's user base. The newest release of an Android OS can take a very long time to get out to the majority of users, which means APIs that get tied to the latest version of the OS won't be any good to developers until the majority of the user base upgrades. Google Play Services is compatible with Froyo and above, which covers 99 percent of active devices, and its updates are pushed directly to phones through the Play Store. By including APIs in Google Play Services instead of Android, Google can push a new API out to almost all users in about a week. It's [a great solution][3] to many of the problems caused by version fragmentation.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)

[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.

[@RonAmadeo][t]
--------------------------------------------------------------------------------

via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/21/

译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:http://arstechnica.com/gadgets/2012/04/unlocked-samsung-galaxy-nexus-can-now-be-purchased-from-google/
[2]:http://arstechnica.com/gadgets/2012/07/divine-intervention-googles-nexus-7-is-a-fantastic-200-tablet/
[3]:http://arstechnica.com/gadgets/2013/09/balky-carriers-and-slow-oems-step-aside-google-is-defragging-android/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo