Merge pull request #17 from LCTT/master

Update 20151115
This commit is contained in:
Chang Liu 2015-11-15 19:41:07 +08:00
commit dd1ced936a
22 changed files with 1582 additions and 1127 deletions


@ -0,0 +1,88 @@
The Strangest and Most Unique Linux Distros
================================================================================
From the consumer-focused distributions most people know, such as Ubuntu, Fedora, Mint or elementary OS, to the more complex, lightweight and enterprise-grade ones such as Slackware, Arch Linux or RHEL, I thought I had seen them all. Is that everything? Hardly: the Linux ecosystem is extremely diverse, and there is something for everyone. Let's look at some of the weird and obscure little Linux distros out there; they represent the true diversity of the open-source platform.
### Puppy Linux
![strangest linux distros](http://2.bp.blogspot.com/--cSL2-6rIgA/VcwNc5hFebI/AAAAAAAAJzk/AgB55mVtJVQ/s1600/Puppy-Linux.png)
Imagine an operating system that is only one tenth the size of an ordinary DVD: that is Puppy Linux. The entire OS is just about 100 MB! And it can run from RAM, which makes it remarkably fast even on older PCs. You can even remove the boot medium after the system has started! Does it get any better than that? The system requirements are tiny, most hardware is detected automatically, and it ships with software that covers your basic needs. [Experience Puppy Linux here][1].
### Suicide Linux
![suicide linux](http://3.bp.blogspot.com/-dfeehRIQKpo/VdMgRVQqIJI/AAAAAAAAJz0/TmBs-n2K9J8/s1600/suicide-linux.jpg)
Does the name scare you? It should. 'Any time - any time - you type any remotely incorrect command, the interpreter creatively resolves it into `rm -rf /` and wipes your hard drive.' It is that simple. I would really love to know who is confident enough to install [Suicide Linux][2] on a production machine. **Warning: never try this on a production machine!** If you are interested, it is now available as a neat [DEB package][3].
### PapyrOS
![top 10 strangest linux distros](http://3.bp.blogspot.com/-Q0hlEMCD9-o/VdMieAiXY1I/AAAAAAAAJ0M/iS_ZjVaZAk8/s1600/papyros.png)
Its 'strangeness' is of the good kind. PapyrOS is trying to bring Android's Material Design language to a new Linux distribution. Although the project is still in its early stages, it already looks very promising. The project page says the system is about 80% complete, and a first alpha release can be expected soon. We gave [PapyrOS][4] a brief mention when the project was announced, and judging by its looks it might even start a trend. Follow the project on [Google+][5] and contribute via [BountySource][6] if you are interested.
### Qubes OS
![10 most unique linux distros](http://3.bp.blogspot.com/-8aOtnTp3Yxk/VdMo_KWs4sI/AAAAAAAAJ0o/3NTqhaw60jM/s1600/qubes-linux.png)
Qubes is an open-source operating system designed to provide strong security through [security by compartmentalization][14]. It starts from the assumption that there is no perfect, bug-free desktop environment, and by implementing a security-by-isolation approach, [Qubes Linux][7] tries to address those problems. Qubes is based on Xen, the X Window System and Linux; it can run most Linux applications and supports most Linux drivers. Qubes was a finalist for the Access Innovation Prize 2014 for Endpoint Security Solution.
### Ubuntu Satanic Edition
![top10 linux distros](http://3.bp.blogspot.com/-2Sqvb_lilC0/VdMq_ceoXnI/AAAAAAAAJ00/kot20ugVJFk/s1600/ubuntu-satanic.jpg)
Ubuntu SE is an Ubuntu-based distribution. 'It brings together the best free software and free metal music' in one comprehensive package of themes, wallpapers and even heavy metal tracks from some talented up-and-coming artists. Although the project no longer appears to be actively developed, Ubuntu Satanic Edition is strange even in its name. [Ubuntu SE (slightly NSFW)][8].
### Tiny Core Linux
![10 strange linux distros](http://2.bp.blogspot.com/-ZtIVjGMqdx0/VdMv136Pz1I/AAAAAAAAJ1E/-q34j-TXyUY/s1600/tiny-core-linux.png)
Puppy Linux still not small enough? Try this. Tiny Core Linux is a 12 MB graphical Linux desktop! Yes, you read that right. One major caveat: it is not a complete desktop and does not fully support all hardware. It contains only the core components needed to boot into a very minimal X desktop with wired network access. There is even a GUI-less version called Micro Core Linux that is just 9 MB. [Tiny Core Linux][9].
### NixOS
![top 10 unique and special linux distros](http://4.bp.blogspot.com/-idmCvIxtxeo/VdcqcggBk1I/AAAAAAAAJ1U/DTQCkiLqlLk/s1600/nixos.png)
This is a distribution for power users, with a unique approach to packaging and configuration management. On other distributions, actions such as upgrades can be risky: upgrading one package can leave others broken, and upgrading the whole system can feel less reliable than reinstalling from scratch. On top of changes whose effects you cannot safely test beforehand, there is usually no 'undo' option either. In NixOS, the whole system is built by the Nix package manager from a description written in a purely functional build language. That means building a new configuration never overwrites the previous one; most of its other distinctive features follow the same model, and Nix stores all packages in isolation from each other. Read more about NixOS [here][10], and see the sketch below for what this looks like in practice.
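As a rough illustration (not from the original article; it assumes a standard NixOS install where system generations are managed by `nixos-rebuild`), the non-destructive upgrade model looks like this:
```
# Build and activate a new system generation from /etc/nixos/configuration.nix;
# the previous generation stays on disk instead of being overwritten.
$ sudo nixos-rebuild switch

# List the system generations that are still available.
$ sudo nix-env --list-generations --profile /nix/var/nix/profiles/system

# If the new configuration misbehaves, roll back to the previous generation.
$ sudo nixos-rebuild switch --rollback
```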
### GoboLinux
![strangest linux distros](http://4.bp.blogspot.com/-rOYfBXg-UiU/VddCF7w_xuI/AAAAAAAAJ1w/Nf11bOheOwM/s1600/gobolinux.jpg)
This is another very peculiar distribution. What makes it so different from other systems is its unique, reorganized file system: it has its own distinctive directory tree in which all files and programs are stored. GoboLinux has no dedicated package database, because the file system itself is the database. In some respects, this reorganization is similar to what you see on OS X.
### Hannah Montana Linux
![strangest linux distros](http://1.bp.blogspot.com/-3P22pYfih6Y/VdcucPOv4LI/AAAAAAAAJ1g/PszZDbe83sQ/s1600/hannah-montana-linux.jpg)
It is a Kubuntu-based Linux distribution that comes with a Hannah Montana themed boot screen, KDM (KDE Display Manager), icon set, ksplash, Plasma theme, color scheme and wallpapers (I'm so sorry). [Here is the link][12]. The project is no longer active.
### RLSD Linux
It is an extremely minimal, small, lightweight and hardened, Linux-based text-mode operating system. The developers say it is 'a unique distribution that provides a selection of console applications and home-grown security features, which might appeal to hackers.' [RLSD Linux][13].
Did we miss any distributions that are even stranger? Let us know.
--------------------------------------------------------------------------------
via: http://www.techdrivein.com/2015/08/the-strangest-most-unique-linux-distros.html
Author: Manuel Jose
Translator: [FSSlc](https://github.com/FSSlc)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[1]:http://puppylinux.org/main/Overview%20and%20Getting%20Started.htm
[2]:http://qntm.org/suicide
[3]:http://sourceforge.net/projects/suicide-linux/files/
[4]:http://www.techdrivein.com/2015/02/papyros-material-design-linux-coming-soon.html
[5]:https://plus.google.com/communities/109966288908859324845/stream/3262a3d3-0797-4344-bbe0-56c3adaacb69
[6]:https://www.bountysource.com/teams/papyros
[7]:https://www.qubes-os.org/
[8]:http://ubuntusatanic.org/
[9]:http://tinycorelinux.net/
[10]:https://nixos.org/
[11]:http://www.gobolinux.org/
[12]:http://hannahmontana.sourceforge.net/
[13]:http://rlsd2.dimakrasner.com/
[14]:https://en.wikipedia.org/wiki/Compartmentalization_(information_security)


@ -0,0 +1,53 @@
Ubuntu Software Centre to Be Replaced in 16.04 LTS
================================================================================
![The USC Will Be Replaced](http://www.omgubuntu.co.uk/wp-content/uploads/2011/09/usc1.jpg)
*The Ubuntu Software Centre will be replaced in Ubuntu 16.04 LTS.*
Users of the Ubuntu Xenial Xerus desktop will find that the familiar (and somewhat cumbersome) Ubuntu Software Centre is no longer available.
According to current plans, GNOME's [Software application][1] will take its place as the default package management tool on the Unity 7-based desktop.
![GNOME Software](http://www.omgubuntu.co.uk/wp-content/uploads/2013/09/gnome-software.jpg)
*GNOME Software*
As a result of the switch, new plugins will be developed to support the Software Centre's ratings, reviews and paid-app features.
The decision was made at a recent desktop sprint held at Canonical's HQ in London.
"We are more confident in our ability to add Snap support to GNOME Software [Centre] (sic) than to the Ubuntu Software Centre. So, right now, it looks like we will be replacing [the USC] with GNOME Software Centre," explained Ubuntu desktop manager Will Cooke at the Ubuntu Online Summit.
The GNOME 3.18 stack will also ship in Ubuntu 16.04, and some applications will be updated to GNOME 3.20 where that makes sense, Will Cooke added.
We recently ran a poll on Twitter asking how you install software on Ubuntu. The results suggest that only a few people will miss the current Software Centre...
Which of these do you use to install software on Ubuntu?
- Software Centre
- Terminal
### Other Applications Are Also Being Dropped in Ubuntu 16.04 ###
The Ubuntu Software Centre is not the only thing being dropped in Xenial Xerus.
The disc-burning tool Brasero and the instant-messaging client **Empathy** will also be removed from the default image.
Although both applications are still under development, with laptops increasingly shipping without optical drives and chat moving to web and mobile services, they look more and more dated.
If you still use them, don't panic: Brasero and Empathy will **still be installable on Ubuntu from the archives**, for example as shown below.
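A quick illustration (not part of the original article) of pulling them back in from the archives with apt:
```
$ sudo apt-get update
$ sudo apt-get install brasero empathy
```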
It is not all removals and replacements: one new desktop application is included by default, GNOME Calendar.
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/11/the-ubuntu-software-centre-is-being-replace-in-16-04-lts
Author: [Sam Tran][a]
Translator: [strugglingyouth](https://github.com/strugglingyouth)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[a]:https://plus.google.com/111008502832304483939?rel=author
[1]:https://wiki.gnome.org/Apps/Software

published/LetsEncrypt.md Normal file

@ -0,0 +1,112 @@
# A New Era for SSL/TLS Encryption - Let's Encrypt
According to the official Let's Encrypt blog, the Let's Encrypt service will officially open to the public next week (November 16).
Let's Encrypt is a new certificate authority (CA) led and developed by the Internet Security Research Group (ISRG). The project aims to build a free and open automated CA suite and to provide free certificate issuance to the public, lowering the financial, technical and educational costs of secure communication. Over the past year, the ISRG has drafted the [ACME protocol][1] and produced the first implementations of it: the server side [Boulder][2] and the client [letsencrypt][3].
For why Let's Encrypt is so exciting, and how HTTPS protects our communications, see [A Brief Introduction to the Background and Basics of HTTPS and SSL/TLS][4].
## The ACME Protocol
Let's Encrypt would not exist without the drafting of ACME (Automated Certificate Management Environment), the protocol behind it.
Speaking of ACME, we first have to mention how traditional CAs validate applicants. The certificates issued by the Let's Encrypt service are domain-validated (DV) certificates, and to obtain one the domain owner must complete at least one of the following challenges to prove control of the domain:
* prove control of the email address listed in the domain's Whois record;
* prove control of a common administrative mailbox for the domain (such as one starting with `admin@` or `postmaster@`);
* publish a CA-provided string in a DNS TXT record;
* publish a CA-provided string at a specific path on a URL under the domain.
It is easy to see that the last option is the most amenable to automation, and the [Simple HTTP][5] challenge in the ACME protocol validates a domain that has never been issued a certificate in a similar way: the protocol requires that a request to `http://<domain>/.well-known/acme-challenge/<token>` return a specific string.
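As a rough sketch (the document root, token name and value below are made-up placeholders, not part of the original article), you can reproduce the idea of this challenge by hand: publish a token under `.well-known/acme-challenge/` in the web root and confirm it is reachable over plain HTTP:
```
# Create the challenge path in the (hypothetical) document root of example.com
$ mkdir -p /var/www/example.com/.well-known/acme-challenge
$ echo "some-token-value" > /var/www/example.com/.well-known/acme-challenge/some-token-name

# The CA fetches the token over plain HTTP; you can verify it the same way
$ curl http://example.com/.well-known/acme-challenge/some-token-name
some-token-value
```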
The client that implements this protocol, [letsencrypt][3], goes even further: not only can it perform standalone domain validation against the [Boulder][2] server via ACME, it can also automatically configure common web servers (currently Nginx and Apache) to complete the validation.
## The Let's Encrypt Free Certificate Service
For most webmasters, encrypting a web server has meant a not-so-small outlay for certificate issuance, plus tricky configuration. The [2010 Internet SSL Survey (PDF)][6] published by SSL Labs some years ago found that more than half of web servers failed to use their certificates correctly; the main problems included certificates not trusted by browsers, certificate/domain mismatches, expired certificates, incorrectly configured trust chains, and use of protocols and algorithms with known weaknesses. Moreover, renewing an expired certificate or revoking a leaked one still required tedious manual work.
Fortunately, after a long period of development and testing, the free Let's Encrypt certificate service has finally arrived. Until the official Let's Encrypt CA is widely trusted, its intermediate CA is cross-signed by an IdenTrust root certificate, so certificates issued by Let's Encrypt are already trusted by most browsers.
## Using letsencrypt
Since the official Let's Encrypt issuance service is not yet open, you can only try the development version, which issues test certificates from a CA identified as `happy hacker fake CA`. Note that these certificates are not trusted.
To get the development version, simply run `$ git clone https://github.com/letsencrypt/letsencrypt`.
The [usage instructions][7] below are taken from the official Let's Encrypt website.
### Issuing certificates
The `letsencrypt` tool helps you handle certificate requests and validation.
#### Automatically configuring the web server
The following command automatically obtains a new certificate and configures it in Nginx or Apache for you.
```
$ letsencrypt run
```
#### Standalone issuance
The following command places the new certificate in the current directory.
```
$ letsencrypt -d example.com auth
```
### Renewing certificates
By default the `letsencrypt` tool keeps track of the current certificate's expiry date and renews it automatically when needed. If you need to renew manually, run the following.
```
$ letsencrypt renew --cert-path example-cert.pem
```
### Revoking certificates
List the currently managed certificates in a menu and choose one to revoke.
```
$ letsencrypt revoke
```
You can also revoke a single certificate, or all certificates belonging to a particular private key.
```
$ letsencrypt revoke --cert-path example-cert.pem
```
```
$ letsencrypt revoke --key-path example-key.pem
```
## letsencrypt in Docker
If you would rather not let letsencrypt configure your web server automatically, running a standalone copy with Docker is a good option. All you have to do on a system with Docker installed is run:
```
$ sudo docker run -it --rm -p 443:443 -p 80:80 --name letsencrypt \
-v "/etc/letsencrypt:/etc/letsencrypt" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
quay.io/letsencrypt/letsencrypt:latest auth
```
and you can quickly issue a free, trusted DV certificate for your own web server!
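If issuance succeeds, the certificate and key end up under `/etc/letsencrypt` on the host (the directory mounted into the container above). A quick way to check, assuming the domain is example.com:
```
$ sudo ls /etc/letsencrypt/live/example.com/
cert.pem  chain.pem  fullchain.pem  privkey.pem
```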
## Caveats about Let's Encrypt
* The DV certificates currently issued by Let's Encrypt only validate ownership of the domain, not the identity of its owner;
* Unlike some other CAs, Let's Encrypt carries no insurance payout for security incidents;
* Let's Encrypt does not currently offer wildcard certificates;
* Let's Encrypt certificates are valid for only 90 days and must be renewed before they expire (renewal can be automated; see the sketch below).
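A minimal sketch of automating renewal with cron, reusing the renew command shown above (the schedule, certificate path and log file are only examples, not from the official documentation):
```
# /etc/cron.d/letsencrypt-renew (hypothetical file): attempt a renewal twice a month
0 3 1,15 * * root letsencrypt renew --cert-path /etc/letsencrypt/live/example.com/cert.pem >> /var/log/letsencrypt-renew.log 2>&1
```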
That concludes this introduction to Let's Encrypt; let's witness this security revolution of the Internet together.
[1]: https://github.com/letsencrypt/acme-spec
[2]: https://github.com/letsencrypt/boulder
[3]: https://github.com/letsencrypt/letsencrypt
[4]: https://linux.cn/article-5175-1.html
[5]: https://letsencrypt.github.io/acme-spec/#simple-http
[6]: https://community.qualys.com/servlet/JiveServlet/download/38-1636/Qualys_SSL_Labs-State_of_SSL_2010-v1.6.pdf
[7]: https://letsencrypt.org/howitworks/


@ -1,57 +0,0 @@
translation by strugglingyouth
Ubuntu Software Centre To Be Replaced in 16.04 LTS
================================================================================
![The USC Will Be Replaced](http://www.omgubuntu.co.uk/wp-content/uploads/2011/09/usc1.jpg)
The USC Will Be Replaced
**The Ubuntu Software Centre is to be replaced in Ubuntu 16.04 LTS.**
Users of the Xenial Xerus desktop will find that the familiar (and somewhat cumbersome) Ubuntu Software Centre is no longer available.
GNOME's [Software application][1] will, according to current plans, take its place as the default package management utility on the Unity 7-based desktop.
![GNOME Software](http://www.omgubuntu.co.uk/wp-content/uploads/2013/09/gnome-software.jpg)
GNOME Software
New plugins will be created to support the Software Centres ratings, reviews and paid app features as a result of the switch.
The decisions were taken at a recent desktop Sprint held at Canonical HQ in London.
“We are more confident in our ability to add support for Snaps to GNOME Software Centre (sic) than we are to Ubuntu Software Centre. And so, right now, it looks like we will be replacing [the USC] with GNOME Software Centre”, explains Ubuntu desktop manager Will Cooke at the Ubuntu Online Summit.
The GNOME 3.18 stack will also be included in Ubuntu 16.04, with select app updates to GNOME 3.20 taken as and when it makes sense, adds Will Cooke.
We recently ran a poll on Twitter asking how you install software on Ubuntu. The results suggest that few of you will mourn the passing of the incumbent Software Centre…
Note: poll items
Which of these do you use to install software on #Ubuntu?
- Software Centre
- Terminal
### Other Apps Being Dropped in Ubuntu 16.04 ###
The Ubuntu Software Centre is not the only app set to be given the heave-ho in Xenial Xerus.
Disc burning utility Brasero and instant messaging app **Empathy** are also to be removed from the default install image.
Neither app is considered to be under active development, and with the march of laptops lacking optical drives and web and mobile-based chat services, they may also be seen as increasingly obsolete.
If you do have use for them don't panic: both Brasero and Empathy will **still be available to install on Ubuntu from the archives**.
It's not all removals and replacements, as one new desktop app is set to be included by default: GNOME Calendar.
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/11/the-ubuntu-software-centre-is-being-replace-in-16-04-lts
Author: [Sam Tran][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[a]:https://plus.google.com/111008502832304483939?rel=author
[1]:https://wiki.gnome.org/Apps/Software


@ -1,125 +0,0 @@
Open Source Alternatives to LastPass
================================================================================
LastPass is a cross-platform password management program. For Linux, it is available as a plugin for Firefox, Chrome, and Opera. LastPass Sesame is available for Ubuntu/Debian and Fedora. There is also a version of LastPass compatible with Firefox Portable for installing on a USB key. And with LastPass Pocket for Ubuntu/Debian, Fedora and openSUSE, there's good coverage. While LastPass is a highly rated service, it is proprietary software. And LastPass has recently been absorbed by LogMeIn. If you're looking for an open source alternative, this article is for you.
We all face information overload. Whether you conduct business online, read for your job, or just read for pleasure, the internet is a vast source of information. Retaining that information on a long-term basis can be difficult. However, it is essential to recall certain items of information immediately. Passwords are one such example.
As a computer user, you face the dilemma of choosing either the same password or a unique password for each service or web site you use. Matters are complicated because some sites place restrictions on the selection of the password. For example, a site may insist on a minimum number of characters, capital letters, numerals, and other characters, which makes choosing the same password for every site impossible. More importantly, there are good security reasons not to duplicate passwords. This inevitably means that individuals will simply have too many passwords to remember. One solution is to keep the passwords in written form. However, this is also highly insecure.
Instead of trying to remember an endless array of passwords, a popular solution is to use password manager software. In fact, this type of software is an essential tool for the active internet user. It makes it easy to retrieve, manage and secure all of your passwords. Most passwords are encrypted, either by the program or the filesystem. Consequently, the user only has to remember a single password. Password managers encourage users to choose unique, non-intuitive strong passwords for each service.
To provide an insight into the quality of software available for Linux, I introduce 4 excellent open source alternatives to LastPass.
### KeePassX ###
![KeePassX in action](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-KeePassX.png)
KeePassX is a multi-platform port of KeePass, an open source and cross-platform password manager. This utility helps you to manage your passwords in a secure way. You can put all your passwords in one database, which is locked with one master key or a key-disk. This means users only need to remember a single master password or insert the key-disk to unlock the whole database.
The databases are encrypted using the algorithms AES (alias Rijndael) or Twofish using a 256 bit key.
Features include:
- Extensive management - a title for each entry for better identification:
- Determine different expiration dates
- Insertion of attachments
- User-defined symbols for groups and entries
- Fast entry duplication
- Sorting entries in groups
- Search function: in specific groups or in the complete database
- Auto-Type, a feature that allows you to e.g. log in to a web page by pressing a single key combination. KeePassX does the rest of the typing for you. Auto-Type reads the title of currently active window on your screen and matches it to the configured database entries
- Database security with access to the KeePassX database being granted either with a password, a key-file (e.g. a CD or a memory-stick) or both
- Automatic generation of secure passwords
- Precaution features: a quality indicator for chosen passwords, and hiding of all passwords behind asterisks
- Encryption - either the Advanced Encryption Standard (AES) or the Twofish algorithm is used, with the database encrypted using a 256 bit key
- Import and export of entries. Import from PwManager (*.pwm) and KWallet (*.xml) files, Export as textfile (*.txt)
- Website: [www.keepassx.org][1]
- Developer: KeePassX Team
- License: GNU GPL v2
- Version Number: 0.4.3
### Encryptr ###
![Encryptr in action](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Encryptr.png)
Encryptr is an open source zero-knowledge cloud-based password manager / e-wallet powered by Crypton. Crypton is a JavaScript library that allows developers to write web applications where the server knows nothing of the contents a user is storing.
Encryptr stores your sensitive data like passwords, credit card data, PINs, or access codes, in the cloud. However, because it was built on the zero-knowledge Crypton framework, Encryptr ensures that only the user has the ability to access or read the confidential information.
Being cross-platform, it allows users to securely access their confidential data from a single account from the cloud, no matter where they are.
Features include:
- Very secure - the zero-knowledge Crypton framework only ever encrypts or decrypts your data locally on your device
- Simple to use
- Cloud based
- Stores three types of data: passwords, credit card numbers and general key/value pairs
- Optional "Notes" field for all entries
- Filtering / searching the entry list
- Local encrypted caching of entries to speed up load time
- Website: [encryptr.org][2]
- Developer: Tommy Williams
- License: GNU GPL v3
- Version Number: 1.2.0
### RatticDB ###
![RatticDB in action](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-RatticDB.png)
RatticDB is an open source Django based password management service.
RatticDB is built to be 'Password Lifecycle Management' and not simply a 'Password Storage Engine'. RatticDB aims to help you keep track of what passwords need to be changed and when. It does not include application level encryption.
Features include:
- Simple ACL scheme
- Change Queue feature that allows users to see when they need to update passwords for the applications they use
- Ansible configurations
- Website: [rattic.org][3]
- Developer: Daniel Hall
- License: GNU GPL v2
- Version Number: 1.3.1
### Seahorse ###
![Seahorse in action](http://www.linuxlinks.com/portal/content/reviews/Security/Screenshot-Seahorse.png)
Seahorse is a Gnome front end for GnuPG - the Gnu Privacy Guard program. Its goal is to provide an easy to use Key Management Tool, along with an easy to use interface for encryption operations.
It is a tool for secure communications and data storage. Data encryption and digital signature creation can easily be performed through a GUI and Key Management operations can easily be carried out through an intuitive interface.
Additionally, Seahorse includes a Gedit plugin, integration for handling files with Nautilus, an applet for managing items placed in the clipboard, an agent for storing private passphrases, and a GnuPG and OpenSSH key manager.
Features include:
- Encrypt/decrypt/sign files and text
- Manage your keys and keyring
- Synchronize your keys and your keyring with key servers
- Sign keys and publish
- Cache your passphrase so you don't have to keep typing it
- Backup your keys and keyring
- Add an image in any GDK-supported format as an OpenPGP photo ID
- Create SSH keys, configure them, cache them
- Internationalization support
- Website: [www.gnome.org/projects/seahorse][4]
- Developer: Jacob Perkins, Jose Carlos, Garcia Sogo, Jean Schurger, Stef Walter, Adam Schreiber
- License: GNU GPL v2
- Version Number: 3.18.0
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20151108125950773/LastPassAlternatives.html
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[1]:http://www.keepassx.org/
[2]:https://encryptr.org/
[3]:http://rattic.org/
[4]:http://www.gnome.org/projects/seahorse/


@ -1,102 +0,0 @@
zpl1025 translating
The Brief History of AIX, HP-UX, Solaris, BSD, and Linux
================================================================================
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png)
Always remember that when one door closes on you, another opens. [Ken Thompson][1] and [Dennis Ritchie][2] are a great example of that saying. They were two of the best information technology specialists of the **20th** century, as they created **UNIX**, which is considered one of the most influential and inspirational pieces of software ever written.
### The UNIX systems beginning at Bell Labs ###
**UNIX**, which was originally called **UNICS** (**UN**iplexed **I**nformation and **C**omputing **S**ervice), comes from a great family and was not born on its own. The grandfather of UNIX was **CTSS** (the **C**ompatible **T**ime-**S**haring **S**ystem) and the father was the **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice) project, which supported interactive timesharing on mainframe computers for huge communities of users.
UNIX was born at **Bell Labs** in **1969**, created by **Ken Thompson** and, later, **Dennis Ritchie**. These two great researchers and scientists had worked on a collaborative project with **General Electric** and the **Massachusetts Institute of Technology** to create an interactive timesharing system called Multics.
Multics was created to combine timesharing with other technological advances, allowing the users to phone the computer from remote terminals, then edit documents, read e-mail, run calculations, and so on.
Over the next five years, AT&T invested millions of dollars in the Multics project. They purchased a mainframe computer called the GE-645 and dedicated to the effort some of the top researchers at Bell Labs, such as Ken Thompson, Stuart Feldman, Dennis Ritchie, M. Douglas McIlroy, Joseph F. Ossanna, and Robert Morris. The project was very ambitious, but it fell troublingly behind schedule. In the end, AT&T's leaders decided to leave the project.
Bell Labs managers decided to stop any further work on operating systems, which left many researchers frustrated and upset. But thanks to Thompson, Ritchie, and some researchers who ignored their bosses' instructions and kept working with love in their labs, UNIX was created as one of the greatest operating systems of all time.
UNIX started its life on a PDP-7 minicomputer, which was a testing machine for Thompson's ideas about operating system design and a platform for Thompson's and Ritchie's game simulation called Space Travel.
> “What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication”, said Dennis Ritchie.
UNIX came very close to being the first system under which a programmer could sit directly at a machine and compose programs on the fly, explore possibilities, and test while composing. Throughout UNIX's lifetime, it has kept growing in capability by attracting skilled volunteer effort from programmers impatient with the limitations of other operating systems.
UNIX received its first funding for a PDP-11/20 in 1970; the UNIX operating system was then officially named and ran on the PDP-11/20. Its first real job, in 1971, was to support word processing for the patent department at Bell Labs.
### The C revolution on UNIX systems ###
Dennis Ritchie invented a higher-level programming language called “**C**” in **1972**, and later he and Ken Thompson decided to rewrite UNIX in “C” to give the system more portability. They wrote and debugged almost 100,000 lines of code that year. The migration to the “C” language resulted in highly portable software that required only a relatively small amount of machine-dependent code to be replaced when porting UNIX to another computing platform.
UNIX was first formally presented to the outside world in 1973 at the Symposium on Operating Systems Principles, where Dennis Ritchie and Ken Thompson delivered a paper. AT&T then released Version 5 of the UNIX system and licensed it to educational institutions, and in 1975 it licensed Version 6 of UNIX to companies for the first time, at a cost of **$20,000**. The most widely used version was Version 7, released in 1980; anybody could purchase a license, but the terms of that license were very restrictive. The license included the source code and the machine-dependent kernel, which was written in PDP-11 assembly language. In general, versions of UNIX were identified by the editions of their user manuals.
### The AIX System ###
In **1983**, **Microsoft** planned to make **Xenix** the multiuser successor to MS-DOS; that year a Xenix-based Altos 586 with **512 KB** of RAM and a **10 MB** hard drive was sold for $8,000. By 1984 there were 100,000 UNIX installations around the world running System V Release 2. In 1986, 4.3BSD was released, including an internet name server, and IBM announced the **AIX system**, with an installed base of over 250,000. AIX is based on UNIX System V; it has BSD roots and is a hybrid of both.
AIX was the first operating system to introduce a **journaled file system (JFS)** and an integrated Logical Volume Manager (LVM). IBM ported AIX to its RS/6000 platform by 1989. Version 5L, introduced in 2001, was a breakthrough release, providing Linux affinity and logical partitioning with the Power4 servers.
AIX introduced virtualization by 2004 in AIX 5.3 with Advanced Power Virtualization (APV) which offered Symmetric multi-threading, micro-partitioning, and shared processor pools.
In 2007, IBM started to enhance its virtualization product, coinciding with the release of AIX 6.1 and the Power6 architecture. It also rebranded Advanced Power Virtualization as PowerVM.
The enhancements included a form of workload partitioning called WPARs, which are similar to Solaris Zones/Containers but with much better functionality.
### The HP-UX System ###
**Hewlett-Packard's UNIX (HP-UX)** was originally based on System V Release 3. The system initially ran exclusively on the PA-RISC HP 9000 platform. Version 1 of HP-UX was released in 1984.
Version 9 introduced SAM, its character-based graphical user interface (GUI), from which one can administer the system. Version 10, introduced in 1995, brought some changes to the layout of the system file and directory structure, making it similar to AT&T SVR4.
Version 11 was introduced in 1997. It was HP's first release to support 64-bit addressing. In 2000, this release was rebranded as 11i, as HP introduced operating environments: bundled groups of layered applications for specific Information Technology purposes.
In 2001, Version 11.20 was introduced with support for Itanium systems. HP-UX was the first UNIX to use Access Control Lists (ACLs) for file permissions, and it was also among the first to introduce built-in support for a Logical Volume Manager.
Nowadays, HP-UX uses Veritas as its primary file system, thanks to a partnership between Veritas and HP.
HP-UX is currently up to release 11i v3, update 4.
### The Solaris System ###
Sun's UNIX version, **Solaris**, was the successor of **SunOS** and was introduced in 1992. SunOS was originally based on the BSD (Berkeley Software Distribution) flavor of UNIX, but SunOS versions 5.0 and later were based on UNIX System V Release 4, which was rebranded as Solaris.
SunOS version 1.0 was introduced with support for the Sun-1 and Sun-2 systems in 1983. Version 2.0 followed in 1985. In 1987, Sun and AT&T announced that they would collaborate on a project to merge System V and BSD into a single release based on SVR4.
Solaris 2.4 was Sun's first unified SPARC/x86 release. The last release of SunOS was version 4.1.4, announced in November 1994. Solaris 7 was the first 64-bit UltraSPARC release, and it added native support for file system metadata logging.
Solaris 9 was introduced in 2002, with support for Linux capabilities and the Solaris Volume Manager. Solaris 10, introduced in 2005, brought a number of innovations, such as Solaris Containers, the new ZFS file system, and Logical Domains.
The Solaris system is presently at version 10, with the latest update released in 2008.
### Linux ###
By 1991 there was a growing demand for a free alternative to commercial UNIX. So **Linus Torvalds** set out to create a new free operating system kernel that eventually became **Linux**. Linux started with a small number of “C” files and under a license which prohibited commercial distribution. Linux is a UNIX-like system, but it is not the same as UNIX.
Version 3.18 was introduced in 2015 under the GNU General Public License. IBM said that more than 18 million lines of code were open source and available to developers.
The GNU General Public License has become the most widely used free software license today. In accordance with open-source principles, this license permits individuals and organizations the freedom to distribute, run, share by copying, study, and also modify the software's code.
### UNIX vs. Linux: Technical Overview ###
- Linux can encourage more diversity, and Linux developers come from wider range of backgrounds with different experiences and opinions.
- Linux can run on a wider range of platforms and architectures than UNIX.
- Developers of UNIX commercial editions have a specific target platform and audience in mind for their operating system.
- **Linux is more secure than UNIX** as it is less affected by virus threats or malware attacks. Linux has had about 60-100 viruses to date, but at the same time none of them are currently spreading. On the other hand, UNIX has had 85-120 viruses but some of them are still spreading.
- UNIX commands, tools and elements rarely change, and some interfaces and command-line arguments still remain in later versions of UNIX.
- Some Linux development projects get funded on a voluntary basis such as Debian. The other projects maintain a community version of commercial Linux distributions such as SUSE with openSUSE and Red Hat with Fedora.
- Traditional UNIX is about scaling up, while Linux, on the other hand, is about scaling out.
--------------------------------------------------------------------------------
via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/
Author: [M.el Khamlichi][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[a]:http://www.unixmen.com/author/pirat9/
[1]:http://www.unixmen.com/ken-thompson-unix-systems-father/
[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/


@ -1,38 +0,0 @@
Nautilus File Search Is About To Get A Big Power Up
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/10/nautilus-new-search-filters.jpg)
**Finding stray files and folders in Nautilus is about to get a whole lot easier. **
A new **search filter** for the default [GNOME file manager][1] is in development. It makes heavy use of GNOME's spiffy pop-over menus in an effort to offer a simpler way to narrow in on search results and find exactly what you're after.
Developer Georges Stavracas is working on the new UI and [describes][2] the new editor as “cleaner, saner and more intuitive”.
Based on a video he's [uploaded to YouTube][3] demoing the new approach (which he hasn't made available for embedding), he's not wrong.
> “Nautilus has very complex but powerful internals, which allows us to do many things. And indeed, there is code for the many options in there. So, why did it used to look so poorly implemented/broken?”, he writes on his blog.
The question is partly rhetorical; the new search filter interface surfaces many of these powerful internals to the user. Searches can be filtered ad hoc based on content type, name or date range.
Changing anything in an app like Nautilus is likely to upset some users, so as helpful and straightforward as the new UI seems it could come in for some heat.
Not that worry of discontent seems to hamper progress (though the outcry at the [removal of type ahead search][4] in 2014 still rings loud in many ears, no doubt). GNOME 3.18, [released last month][5], introduced a new file progress dialog to Nautilus and better integration for remote shares, including Google Drive.
Stavracas' search filter is not yet merged into Files' trunk, but the reworked search UI is tentatively targeted for inclusion in GNOME 3.20, due in the spring of next year.
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/10/new-nautilus-search-filter-ui
Author: [Joey-Elijah Sneddon][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://wiki.gnome.org/Apps/Nautilus
[2]:http://feaneron.com/2015/10/12/the-new-search-for-gnome-files-aka-nautilus/
[3]:https://www.youtube.com/watch?v=X2sPRXDzmUw
[4]:http://www.omgubuntu.co.uk/2014/01/ubuntu-14-04-nautilus-type-ahead-patch
[5]:http://www.omgubuntu.co.uk/2015/09/gnome-3-18-release-new-features


@ -1,114 +0,0 @@
translating by Ezio
How to Setup DockerUI - a Web Interface for Docker
================================================================================
Docker is gaining popularity day by day. The idea of running a complete operating system inside a container rather than inside a virtual machine is an awesome technology. Docker has made the lives of millions of system administrators and developers much easier, letting them get their work done in no time. It is an open source technology that provides an open platform to pack, ship, share and run any application as a lightweight container, without caring which operating system the host is running. It has no boundaries of language support, frameworks or packaging systems, and it can be run anywhere, anytime, from small home computers to high-end servers. Running docker containers and managing them can be a bit difficult and time consuming, so there is a web-based application named DockerUI that makes managing and running containers pretty simple. DockerUI is highly beneficial to people who are not very familiar with the Linux command line but still want to run containerized applications. DockerUI is an open source web-based application best known for its beautiful design and simple interface for running and managing docker containers.
Here are some easy steps on how we can set up Docker Engine with DockerUI on our Linux machine.
### 1. Installing Docker Engine ###
First of all, we'll install the docker engine on our Linux machine. Thanks to its developers, docker is very easy to install on any major Linux distribution. To install the docker engine, we'll need to run the following command for the distribution we are running.
#### On Ubuntu/Fedora/CentOS/RHEL/Debian ####
Docker maintainers have written an awesome script that can be used to install docker engine in Ubuntu 15.04/14.10/14.04, CentOS 6.x/7, Fedora 22, RHEL 7 and Debian 8.x distributions of linux. This script recognizes the distribution of linux installed in our machine, then adds the required repository to the filesystem, updates the local repository index and finally installs docker engine and required dependencies from it. To install docker engine using that script, we'll need to run the following command under root or sudo mode.
# curl -sSL https://get.docker.com/ | sh
#### On OpenSuse/SUSE Linux Enterprise ####
To install the docker engine on a machine running OpenSuse 13.1/13.2 or SUSE Linux Enterprise Server 12, we simply need to execute the zypper command. We'll install docker with zypper, as the latest docker engine is available in the official repository. To do so, we'll run the following command under root/sudo mode.
# zypper in docker
#### On ArchLinux ####
Docker is available in the official repository of Archlinux as well as in the AUR packages maintained by the community. So, we have two options to install docker in archlinux. To install docker using the official arch repository, we'll need to run the following pacman command.
# pacman -S docker
But if we want to install docker from the Archlinux User Repository ie AUR, then we'll need to execute the following command.
# yaourt -S docker-git
### 2. Starting Docker Daemon ###
After docker is installed, we'll now start the docker daemon so that we can run docker containers and manage them. We'll run the following command to start the docker daemon.
#### On SysVinit ####
# service docker start
#### On Systemd ####
# systemctl start docker
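Optionally (this step is not part of the original guide; the commands are the usual ones for each init system rather than anything docker-specific), we can also make the daemon start automatically at boot.
On SysVinit:
# chkconfig docker on
On Systemd:
# systemctl enable docker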
### 3. Installing DockerUI ###
Installing DockerUI is much easier than installing the docker engine. We just need to pull dockerui from the Docker Registry Hub and run it inside a container. To do so, we'll simply need to run the following command.
# docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui
![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png)
Here, in the above command, as the default port of the dockerui web application server is 9000, we simply map it to the host with the -p flag. With the -v flag, we specify the docker socket. The --privileged flag is required for hosts using SELinux.
After executing the above command, we'll now check if the dockerui container is running or not by running the following command.
# docker ps
![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png)
### 4. Pulling an Image ###
Currently, we cannot pull an image directly from DockerUI, so we'll need to pull a docker image from the Linux console/terminal. To do so, we'll need to run the following command.
# docker pull ubuntu
![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png)
The above command will pull an image tagged as ubuntu from the official [Docker Hub][1]. Similarly, we can pull more images that we require and are available in the hub.
### 4. Managing with DockerUI ###
After we have started the dockerui container, we can now use it to start, pause, stop and remove docker containers and images, and to perform the many other actions DockerUI offers. First of all, we'll need to open the web application in our web browser. To do so, we'll need to point our browser to http://ip-address:9000 or http://mydomain.com:9000, depending on the configuration of our system. By default, there is no login authentication needed for user access, but we can configure our web server to add authentication. To start a container, we first need an image of the application we want to run a container from.
#### Create a Container ####
To create a container, we'll need to go to the section named Images and then click on the id of the image we want to create a container from. After clicking on the required image id, we'll need to click on the Create button, and we'll be asked to enter the required properties for our container. Once everything is set, we click the Create button again to finally create the container.
![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png)
#### Stop a Container ####
To stop a container, we'll need to move to the Containers page and then select the container we want to stop. Now we'll click on the Stop option, which we can see under the Actions drop-down menu.
![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png)
#### Pause and Resume ####
To pause a container, we simply select the required container by putting a check mark on it and then click the Pause option under Actions. This will pause the running container, and we can then simply resume it by selecting the Unpause option from the Actions drop-down menu.
#### Kill and Remove ####
Just like the tasks above, it's pretty easy to kill and remove a container or an image. We just need to check/select the required container or image and then select the Kill or Remove button in the application, according to our need.
### Conclusion ###
DockerUI is a beautiful use of the Docker Remote API to build an awesome web interface for managing docker containers. The developers have designed and developed this application purely in HTML and JavaScript. It is currently incomplete and under heavy development, so we don't recommend it for production use at the moment. It makes it pretty easy for users to manage their containers and images with simple clicks, without needing to execute lines of commands to do small jobs. If we want to contribute to DockerUI, we can simply visit its [Github Repository][2]. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you!
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/
Author: [Arun Pyasi][a]
Translator: [oska874](https://github.com/oska874)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[a]:http://linoxide.com/author/arunp/
[1]:https://hub.docker.com/
[2]:https://github.com/crosbymichael/dockerui/


@ -1,513 +0,0 @@
struggling is translating...
9 Tips for Improving WordPress Performance
================================================================================
WordPress is the single largest platform for website creation and web application delivery worldwide. About [a quarter][1] of all sites are now built on open-source WordPress software, including sites for eBay, Mozilla, RackSpace, TechCrunch, CNN, MTV, the New York Times, the Wall Street Journal.
WordPress.com, the most popular site for user-created blogs, also runs on WordPress open source software. [NGINX powers WordPress.com][2]. Among WordPress customers, many sites start on WordPress.com and then move to hosted WordPress open-source software; more and more of these sites use NGINX software as well.
WordPress's appeal is its simplicity, both for end users and for implementation. However, the architecture of a WordPress site presents problems when usage ramps upward, and several steps, including caching and combining WordPress and NGINX, can solve these problems.
In this blog post, we provide nine performance tips to help overcome typical WordPress performance challenges:
- [Cache static resources][3]
- [Cache dynamic files][4]
- [Move to NGINX][5]
- [Add permalink support to NGINX][6]
- [Configure NGINX for FastCGI][7]
- [Configure NGINX for W3_Total_Cache][8]
- [Configure NGINX for WP-Super-Cache][9]
- [Add security precautions to your NGINX configuration][10]
- [Configure NGINX to support WordPress Multisite][11]
### WordPress Performance on LAMP Sites ###
Most WordPress sites are run on a traditional LAMP software stack: the Linux OS, Apache web server software, MySQL database software (often on a separate database server), and the PHP programming language. Each of these is a very well-known, widely used, open source tool. Most people in the WordPress world "speak" LAMP, so it's easy to get help and support.
When a user visits a WordPress site, a browser running the Linux/Apache combination creates six to eight connections per user. As the user moves around the site, PHP assembles each page on the fly, grabbing resources from the MySQL database to answer requests.
LAMP stacks work well for anywhere from a few to, perhaps, hundreds of simultaneous users. However, sudden increases in traffic are common online and usually a good thing.
But when a LAMP-stack site gets busy, with the number of simultaneous users climbing into the many hundreds or thousands, it can develop serious bottlenecks. Two main causes of bottlenecks are:
1. The Apache web server - Apache consumes substantial resources for each and every connection. If Apache accepts too many simultaneous connections, memory can be exhausted and performance slows because data has to be paged back and forth to disk. If connections are limited to protect response time, new connections have to wait, which also leads to a poor user experience.
1. The PHP/MySQL interaction - Together, an application server running PHP and a MySQL database server can serve a maximum number of requests per second. When the number of requests exceeds the maximum, users have to wait. Exceeding the maximum by a relatively small amount can cause a large slowdown in responsiveness for all users. Exceeding it by two or more times can cause significant performance problems.
The performance bottlenecks in a LAMP site are particularly resistant to the usual instinctive response, which is to upgrade to more powerful hardware - more CPUs, more disk space, and so on. Incremental increases in hardware performance can't keep up with the exponential increases in demand for system resources that Apache and the PHP/MySQL combination experience when they get overloaded.
The leading alternative to a LAMP stack is a LEMP stack Linux, NGINX, MySQL, and PHP. (In the LEMP acronym, the E stands for the sound at the start of “engine-x.”) We describe a LEMP stack in [Tip 3][12].
### Tip 1. Cache Static Resources ###
Static resources are unchanging files such as CSS files, JavaScript files, and image files. These files often make up half or more of the data on a web page. The remainder of the page is dynamically generated content like comments in a forum, a performance dashboard, or personalized content (think Amazon.com product recommendations).
Caching static resources has two big benefits:
- Faster delivery to the user - The user gets the static file from their browser cache or a caching server closer to them on the Internet. These are sometimes big files, so reducing latency for them helps a lot.
- Reduced load on the application server - Every file that's retrieved from a cache is one less request the web server has to process. The more you cache, the more you avoid thrashing because resources have run out.
To support browser caching, set the correct HTTP headers for static files. Look into the HTTP Cache-Control header, specifically the max-age setting, the Expires header, and Entity tags. You can find a good introduction [here][13].
When local caching is enabled and a user requests a previously accessed file, the browser first checks whether the file is in the cache. If so, it asks the web server if the file has changed. If the file hasn't changed, the web server can respond immediately with code 304 (Not Modified), meaning that the file is unchanged, instead of returning code 200 OK and then retrieving and delivering the changed file.
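As an illustration (the host name and file below are placeholders, not from the article), you can watch this negotiation with curl: the first request shows the caching headers, and a conditional re-request for an unchanged file comes back as 304 with no body:

    # Inspect the caching headers on a static file
    $ curl -sI http://www.example.com/style.css | egrep -i 'HTTP|cache-control|expires|last-modified'

    # Repeat the request conditionally; an unchanged file yields 304 Not Modified
    $ curl -sI -H 'If-Modified-Since: Mon, 09 Nov 2015 00:00:00 GMT' http://www.example.com/style.css | head -n 1
    HTTP/1.1 304 Not Modified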
To support caching beyond the browser, consider the Tips below, and consider a content delivery network (CDN). CDNs are a popular and powerful tool for caching, but we don't describe them in detail here. Consider a CDN after you implement the other techniques mentioned here. Also, CDNs may be less useful as you transition your site from HTTP/1.x to the new HTTP/2 standard; investigate and test as needed to find the right answer for your site.
If you move to NGINX Plus or the open source NGINX software as part of your software stack, as suggested in [Tip 3][14], then configure NGINX to cache static resources. Use the following configuration, replacing www.example.com with the URL of your web server.
server {
# substitute your web server's URL for www.example.com
server_name www.example.com;
root /var/www/example.com/htdocs;
index index.php;
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
# substitute the socket, or address and port, of your WordPress server
fastcgi_pass unix:/var/run/php5-fpm.sock;
#fastcgi_pass 127.0.0.1:9000;
}
location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
expires max;
log_not_found off;
access_log off;
}
}
### Tip 2. Cache Dynamic Files ###
WordPress generates web pages dynamically, meaning that it generates a given web page every time it is requested (even if the result is the same as the time before). This means that users always get the freshest content.
Think of a user visiting a blog post that has comments enabled at the bottom of the post. You want the user to see all comments even a comment that just came in a moment ago. Dynamic content makes this happen.
But now let's say that the blog post is getting ten or twenty requests per second. The application server might start to thrash under the pressure of trying to regenerate the page so often, causing big delays. The goal of delivering the latest content to new visitors becomes relevant only in theory, because they have to wait so long to get the page in the first place.
To prevent page delivery from slowing down due to increasing load, cache the dynamic file. This makes the file less dynamic, but makes the whole system more responsive.
To enable caching in WordPress, use one of several popular plug-ins described below. A WordPress caching plug-in asks for a fresh page, then caches it for a brief period of time, perhaps just a few seconds. So, if the site is getting several requests a second, most users get their copy of the page from the cache. This helps the retrieval time for all users:
- Most users get a cached copy of the page. The application server does no work at all.
- Users who do get a fresh copy get it fast. The application server only has to generate a fresh page every so often. When the server does generate a fresh page (for the first user to come along after the cached page expires), it does this much faster because it's not overloaded with requests.
You can cache dynamic files for WordPress running on a LAMP stack or on a [LEMP stack][15] (described in [Tip 3][16]). There are several caching plug-ins you can use with WordPress. Here are the most popular caching plug-ins and caching techniques, listed from the simplest to the most powerful:
- [Hyper-Cache][17] and [Quick-Cache][18] - These two plug-ins create a single PHP file for each WordPress page or post. This supports some dynamic functionality while bypassing much WordPress core processing and the database connection, creating a faster user experience. They don't bypass all PHP processing, so they don't give the same performance boost as the following options. They also don't require changes to the NGINX configuration.
- [WP Super Cache][19] - The most popular caching plug-in for WordPress. It has many settings, which are presented through an easy-to-use interface, shown below. We show a sample NGINX configuration in [Tip 7][20].
- [W3 Total Cache][21] - This is the second most popular cache plug-in for WordPress. It has even more option settings than WP Super Cache, making it a powerful but somewhat complex option. For a sample NGINX configuration, see [Tip 6][22].
- [FastCGI][23] - CGI stands for Common Gateway Interface, a language-neutral way to request and receive files on the Internet. FastCGI is not a plug-in but a way to interact with a cache. FastCGI can be used in Apache as well as in NGINX, where it's the most popular dynamic caching approach; we describe how to configure NGINX to use it in [Tip 5][24].
The documentation for these plug-ins and techniques explains how to configure them in a typical LAMP stack. Configuration options include database and object caching; minification for HTML, CSS, and JavaScript files; and integration options for popular CDNs. For NGINX configuration, see the Tips referenced in the list.
**Note**: Caches do not work for users who are logged into WordPress, because their view of WordPress pages is personalized. (For most sites, only a small minority of users are likely to be logged in.) Also, most caches do not show a cached page to users who have recently left a comment, as that user will want to see their comment appear when they refresh the page. To cache the non-personalized content of a page, you can use a technique called [fragment caching][25], if it's important to overall performance.
### Tip 3. Move to NGINX ###
As mentioned above, Apache can cause performance problems when the number of simultaneous users rises above a certain point - perhaps hundreds of simultaneous users. Apache allocates substantial resources to each connection, and therefore tends to run out of memory. Apache can be configured to limit connections to avoid exhausting memory, but that means, when the limit is exceeded, new connection requests have to wait.
In addition, Apache loads another copy of the mod_php module into memory for every connection, even if it's only serving static files (images, CSS, JavaScript, etc.). This consumes even more resources for each connection and limits the capacity of the server further.
To start solving these problems, move from a LAMP stack to a LEMP stack - replace Apache with NGINX (the "E" in LEMP). NGINX handles many thousands of simultaneous connections in a fixed memory footprint, so you don't have to experience thrashing, nor limit simultaneous connections to a small number.
NGINX also deals with static files better, with built-in, easily tuned [caching][26] controls. The load on the application server is reduced, and your site can serve far more traffic with a faster, more enjoyable experience for your users.
You can use NGINX on all the web servers in your deployment, or you can put an NGINX server "in front" of Apache as a reverse proxy - the NGINX server receives client requests, serves static files, and sends PHP requests to Apache, which processes them.
For dynamically generated pages, the core use case for the WordPress experience, choose a caching tool, as described in [Tip 2][27]. In the Tips below, you can find NGINX configuration suggestions for FastCGI, W3_Total_Cache, and WP-Super-Cache. (Hyper-Cache and Quick-Cache don't require changes to the NGINX configuration.)
**Tip.** Caches are typically saved to disk, but you can use [tmpfs][28] to store the cache in memory and increase performance.
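For example (the mount point matches the FastCGI cache path used in Tip 5, and the size is just an illustrative choice), the cache directory can be backed by tmpfs with a one-off mount or an /etc/fstab entry:

    # Mount the NGINX cache directory as tmpfs on the fly, capped at 256 MB
    $ sudo mount -t tmpfs -o size=256m tmpfs /var/run/nginx-cache

    # Or make it persistent with an /etc/fstab entry
    tmpfs  /var/run/nginx-cache  tmpfs  defaults,size=256m  0  0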
Setting up NGINX for WordPress is easy. Just follow these four steps, which are described in further detail in the indicated Tips:
1. Add permalink support - Add permalink support to NGINX. This eliminates dependence on the **.htaccess** configuration file, which is Apache-specific. See [Tip 4][29].
1. Configure for caching - Choose a caching tool and implement it. Choices include FastCGI cache, W3 Total Cache, WP Super Cache, Hyper Cache, and Quick Cache. See Tips [5][30], [6][31], and [7][32].
1. Implement security precautions - Adopt best practices for WordPress security on NGINX. See [Tip 8][33].
1. Configure WordPress Multisite - If you use WordPress Multisite, configure NGINX for a subdirectory, subdomain, or multiple-domain architecture. See [Tip 9][34].
### Tip 4. Add Permalink Support to NGINX ###
Many WordPress sites depend on **.htaccess** files, which are required for several WordPress features, including permalink support, plug-ins, and file caching. NGINX does not support **.htaccess** files. Fortunately, you can use NGINX's simple, yet comprehensive, configuration language to achieve most of the same functionality.
You can enable [Permalinks][35] in WordPress with NGINX by including the following location block in your main [server][36] block. (This location block is also included in other code samples below.)
The **try_files** directive tells NGINX to check whether the requested URL exists as a file ( **$uri**) or directory (**$uri/**) in the document root, **/var/www/example.com/htdocs**. If not, NGINX does a redirect to **/index.php**, passing the query string arguments as parameters.
server {
server_name example.com www.example.com;
root /var/www/example.com/htdocs;
index index.php;
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;
location / {
try_files $uri $uri/ /index.php?$args;
}
}
### Tip 5. Configure NGINX for FastCGI ###
NGINX can cache responses from FastCGI applications like PHP. This method offers the best performance.
For NGINX open source, compile in the third-party module [ngx_cache_purge][37], which provides cache purging capability, and use the configuration code below. NGINX Plus includes its own implementation of this code.
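If you have not added a third-party module to NGINX open source before, a rough sketch of the process follows; the version number, paths, and the omission of your existing ./configure flags are all assumptions to adapt to your environment:
    # Download matching NGINX sources and the ngx_cache_purge module (version is an assumption)
    wget http://nginx.org/download/nginx-1.9.6.tar.gz
    tar -xzf nginx-1.9.6.tar.gz
    git clone https://github.com/FRiCKLE/ngx_cache_purge.git
    # Rebuild NGINX with the extra module, then install it
    cd nginx-1.9.6
    ./configure --add-module=../ngx_cache_purge
    make
    sudo make install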
When using FastCGI, we recommend you install the [Nginx Helper plug-in][38] and use a configuration such as the one below, especially the use of **fastcgi_cache_key** and the location block including **fastcgi_cache_purge**. The plug-in automatically purges your cache when a page or a post is published or modified, a new comment is published, or the cache is manually purged from the WordPress Admin Dashboard.
The Nginx Helper plug-in can also add a short HTML snippet to the bottom of your pages, confirming the cache is working and displaying some statistics. (You can also confirm the cache is functioning properly using the [$upstream_cache_status][39] variable.)
fastcgi_cache_path /var/run/nginx-cache levels=1:2
keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
server {
server_name example.com www.example.com;
root /var/www/example.com/htdocs;
index index.php;
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;
set $skip_cache 0;
# POST requests and urls with a query string should always go to PHP
if ($request_method = POST) {
set $skip_cache 1;
}
if ($query_string != "") {
set $skip_cache 1;
}
# Don't cache uris containing the following segments
if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
set $skip_cache 1;
}
# Don't use the cache for logged in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
set $skip_cache 1;
}
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri /index.php;
include fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache WORDPRESS;
fastcgi_cache_valid 60m;
}
location ~ /purge(/.*) {
fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
}
location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
access_log off;
log_not_found off;
expires max;
}
location = /robots.txt {
access_log off;
log_not_found off;
}
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}
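As a quick way to check cache behavior from a browser or with curl, you can optionally expose the [$upstream_cache_status][39] variable mentioned above as a response header inside the PHP location block; the header name here is only an illustration:
    location ~ \.php$ {
        # ... existing fastcgi_* directives from the block above ...
        add_header X-Fastcgi-Cache $upstream_cache_status;
    }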
### Tip 6. Configure NGINX for W3_Total_Cache ###
[W3 Total Cache][40], by Frederick Townes of [W3-Edge][41], is a WordPress caching framework that supports NGINX. It's an alternative to FastCGI cache with a wide range of option settings.
The caching plug-in offers a variety of caching configurations and also includes options for database and object caching, minification of HTML, CSS, and JavaScript, as well as options to integrate with popular CDNs.
The plug-in handles NGINX configuration by writing to an NGINX configuration file located in the root directory of your domain.
server {
server_name example.com www.example.com;
root /var/www/example.com/htdocs;
index index.php;
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;
include /path/to/wordpress/installation/nginx.conf;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
}
}
### Tip 7. Configure NGINX for WP Super Cache ###
[WP Super Cache][42] by Donncha O Caoimh, a WordPress developer at [Automattic][43], is a WordPress caching engine that turns dynamic WordPress pages into static HTML files that NGINX can serve very quickly. It was one of the first caching plug-ins for WordPress and has a smaller, more focused range of options than others.
NGINX configurations for WP-Super-Cache can vary depending on your preference. One possible configuration follows.
In the configuration below, the location block with supercache named in it is the WP Super Cache-specific part, and is needed for the configuration to work. The rest of the code is made up of WordPress rules for not caching users who are logged into WordPress, not caching POST requests, and setting expires headers for static assets, plus standard PHP implementation; these parts can be customized to fit your needs.
server {
server_name example.com www.example.com;
root /var/www/example.com/htdocs;
index index.php;
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log debug;
set $cache_uri $request_uri;
# POST requests and urls with a query string should always go to PHP
if ($request_method = POST) {
set $cache_uri 'null cache';
}
if ($query_string != "") {
set $cache_uri 'null cache';
}
# Don't cache uris containing the following segments
if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") {
set $cache_uri 'null cache';
}
# Don't use the cache for logged-in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") {
set $cache_uri 'null cache';
}
# Use cached or actual file if it exists, otherwise pass request to WordPress
location / {
try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html
$uri $uri/ /index.php;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
log_not_found off;
access_log off;
}
location ~ \.php$ {
try_files $uri /index.php;
include fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
#fastcgi_pass 127.0.0.1:9000;
}
# Cache static files for as long as possible
location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
expires max;
log_not_found off;
access_log off;
}
}
### Tip 8. Add Security Precautions to Your NGINX Configuration ###
To protect against attacks, you can control access to key resources and limit the ability of bots to overload the login utility.
Allow only specific IP addresses to access the WordPress Dashboard.
# Restrict access to WordPress Dashboard
location /wp-admin {
deny 192.192.9.9;
allow 192.192.1.0/24;
allow 10.1.1.0/16;
deny all;
}
Block script-type files in the uploads directory, to prevent programs with malicious intent from being uploaded and run.
# Deny access to uploads which aren't images, videos, music, etc.
location ~* ^/wp-content/uploads/.*\.(html|htm|shtml|php|js|swf)$ {
deny all;
}
Deny access to **wp-config.php**, the WordPress configuration file. Another way to deny access is to move the file one directory level above the domain root.
# Deny public access to wp-config.php
location ~* wp-config.php {
deny all;
}
Rate limit requests to **wp-login.php** to protect against brute-force attacks.
# Rate limit access to wp-login.php to mitigate brute-force attacks
location = /wp-login.php {
limit_req zone=one burst=1 nodelay;
include fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
#fastcgi_pass 127.0.0.1:9000;
}
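Note that the **zone=one** referenced above must be defined in the **http** context of your NGINX configuration. A minimal sketch, with an assumed 10 MB zone keyed by client IP address and a rate of one request per second:
    # In the http {} context: allow roughly one login attempt per second per client IP
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;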
### Tip 9. Use NGINX with WordPress Multisite ###
WordPress Multisite, as its name implies, is a version of WordPress software that allows you to manage two or more sites from a single WordPress instance. The [WordPress.com][44] service, which hosts thousands of user blogs, is run from WordPress Multisite.
You can run separate sites from either subdirectories of a single domain or from separate subdomains.
Use this code block to add support for a subdirectory structure.
# Add support for subdirectory structure in WordPress Multisite
if (!-e $request_filename) {
rewrite /wp-admin$ $scheme://$host$uri/ permanent;
rewrite ^(/[^/]+)?(/wp-.*) $2 last;
rewrite ^(/[^/]+)?(/.*\.php) $2 last;
}
Use this code block instead of the code block above to add support for a subdomain structure, substituting your own subdomain names.
# Add support for subdomains
server_name example.com *.example.com;
Older versions of WordPress Multisite (3.4 and earlier) use readfile() to serve static content. However, readfile() is PHP code, which causes a significant performance hit when it executes. We can use NGINX to bypass this unnecessary PHP processing. The code snippets below are separated by separator lines (==============).
# Avoid PHP readfile() for /blogs.dir/structure in the subdirectory path.
location ^~ /blogs.dir {
internal;
alias /var/www/example.com/htdocs/wp-content/blogs.dir;
access_log off;
log_not_found off;
expires max;
}
============================================================
# Avoid php readfile() for /files/structure in the subdirectory path
location ~ ^(/[^/]+/)?files/(?<rt_file>.+) {
try_files /wp-content/blogs.dir/$blogid/files/$rt_file /wp-includes/ms-files.php?file=$rt_file;
access_log off;
log_not_found off;
expires max;
}
============================================================
# WPMU files structure for the subdomain path
location ~ ^/files/(.*)$ {
try_files /wp-includes/ms-files.php?file=$1 =404;
access_log off;
log_not_found off;
expires max;
}
============================================================
# Map blog ID to specific directory
map $http_host $blogid {
default 0;
example.com 1;
site1.example.com 2;
site1.com 2;
}
### Conclusion ###
Scalability is a challenge for more and more site developers as they achieve success with their WordPress sites. (And for new sites that want to head WordPress performance problems off at the pass.) Adding WordPress caching, and combining WordPress and NGINX, are solid answers.
NGINX is not only useful with WordPress sites. NGINX is the [leading web server][45] among the busiest 1,000, 10,000, and 100,000 sites in the world.
For more on NGINX performance, see our recent blog post, [10 Tips for 10x Application Performance][46].
NGINX software comes in two versions:
- NGINX open source software: like WordPress, this is software you download, configure, and compile yourself.
- NGINX Plus: NGINX Plus includes a pre-built reference version of the software, as well as service and technical support.
To get started, go to [nginx.org][47] for the open source software or check out [NGINX Plus][48].
--------------------------------------------------------------------------------
via: https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/
作者:[Floyd Smith][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.nginx.com/blog/author/floyd/
[1]:http://w3techs.com/technologies/overview/content_management/all
[2]:https://www.nginx.com/press/choosing-nginx-growth-wordpresscom/
[3]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#cache-static
[4]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#cache-dynamic
[5]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#adopt-nginx
[6]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#permalink
[7]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#fastcgi
[8]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#w3-total-cache
[9]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#wp-super-cache
[10]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#security
[11]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#multisite
[12]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#adopt-nginx
[13]:http://www.mobify.com/blog/beginners-guide-to-http-cache-headers/
[14]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#adopt-nginx
[15]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#lamp
[16]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#adopt-nginx
[17]:https://wordpress.org/plugins/hyper-cache/
[18]:https://wordpress.org/plugins/quick-cache/
[19]:https://wordpress.org/plugins/wp-super-cache/
[20]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#wp-super-cache
[21]:https://wordpress.org/plugins/w3-total-cache/
[22]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#w3-total-cache
[23]:http://www.fastcgi.com/
[24]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#fastcgi
[25]:https://css-tricks.com/wordpress-fragment-caching-revisited/
[26]:https://www.nginx.com/resources/admin-guide/content-caching/
[27]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#cache-dynamic
[28]:https://www.kernel.org/doc/Documentation/filesystems/tmpfs.txt
[29]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#permalink
[30]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#fastcgi
[31]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#w3-total-cache
[32]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#wp-super-cache
[33]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#security
[34]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#multisite
[35]:http://codex.wordpress.org/Using_Permalinks
[36]:http://nginx.org/en/docs/http/ngx_http_core_module.html#server
[37]:https://github.com/FRiCKLE/ngx_cache_purge
[38]:https://wordpress.org/plugins/nginx-helper/
[39]:http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables
[40]:https://wordpress.org/plugins/w3-total-cache/
[41]:http://www.w3-edge.com/
[42]:https://wordpress.org/plugins/wp-super-cache/
[43]:http://automattic.com/
[44]:https://wordpress.com/
[45]:http://w3techs.com/technologies/cross/web_server/ranking
[46]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/
[47]:http://www.nginx.org/en
[48]:https://www.nginx.com/products/
@ -1,3 +1,5 @@
translating by ezio
How to Install SQLite 3.9.1 with JSON Support on Ubuntu 15.04
================================================================================
Hello and welcome to today's article on SQLite, the most widely deployed SQL database engine in the world. It comes with zero configuration, which means no setup or administration is needed. SQLite is a public-domain software package that provides a relational database management system (RDBMS), used to store user-defined records in large tables. In addition to data storage and management, the database engine can process complex query commands that combine data from multiple tables to generate reports and data summaries.
@ -119,4 +121,4 @@ via: http://linoxide.com/ubuntu-how-to/install-sqlite-json-ubuntu-15-04/
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/kashifs/
[1]:https://www.sqlite.org/download.html
[1]:https://www.sqlite.org/download.html
@ -1,3 +1,4 @@
正在翻译zky001
How to Configure Tripwire IDS on Debian
================================================================================
This article is about Tripwire installation and configuration on Debian OS. Tripwire is a host-based intrusion detection system (IDS) for Linux environments. Its prime function is to detect and report any unauthorized changes to files and directories on a Linux system. After Tripwire is installed, a baseline database is created first; Tripwire then monitors and detects changes such as new file creation, file modification, the user who made the change, and so on. If the changes are legitimate, you can accept them to update the Tripwire database.
@ -371,9 +372,9 @@ In this article, we learned installation and basic configuration of open source
via: http://linoxide.com/security/configure-tripwire-ids-debian/
作者:[nido][a]
译者:[译者ID](https://github.com/译者ID)
译者:[译者zky001](https://github.com/zky001)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/naveeda/
[a]:http://linoxide.com/author/naveeda/
@ -1,3 +1,5 @@
FSSlc translting
How to Install GitLab on Ubuntu / Fedora / Debian
================================================================================
Distributed version control was never easy before Git. Git is free and open source software designed to handle everything from small to very large projects with speed and ease. Git was first developed by Linus Torvalds, who is also the creator of the well-known Linux kernel. [GitLab][1] is an awesome development in the field of Git and distributed version control systems. It is a web-based Git repository management application which includes features like code review, wikis, issue tracking and much more. Creating, reviewing and deploying code is very easy, well managed and fast with GitLab. It can be hosted on our own server, though it also provides free repository hosting on its official server, similar to GitHub. GitLab has two different editions, Community Edition and Enterprise Edition. Community Edition is completely free and open source software licensed under the MIT License, whereas Enterprise Edition is under a proprietary license and contains features that are not present in the CE version. Here are some easy steps on how we can install GitLab Community Edition on a machine running Ubuntu, Fedora or Debian as the operating system.
@ -174,4 +176,4 @@ via: http://linoxide.com/linux-how-to/install-gitlab-on-ubuntu-fedora-debian/
[1]:https://about.gitlab.com/
[2]:https://packages.gitlab.com/gitlab/gitlab-ce?filter=debs
[3]:https://packages.gitlab.com/gitlab/gitlab-ce?filter=debs
[4]:https://packages.gitlab.com/gitlab/gitlab-ce?filter=rpms
[4]:https://packages.gitlab.com/gitlab/gitlab-ce?filter=rpms
@ -1,107 +0,0 @@
translation by strugglingyouth
How to Set Up AWStats On Ubuntu Server
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/10/Apache_awstats_featured.jpg)
AWStats is an open-source Web analytics reporting tool that generates advanced web, streaming, FTP or mail server statistics graphically. This log analyser works as a CGI or from command line and shows you all the possible information your log contains in a few graphical web pages. It uses a partial information file to be able to process large log files often and quickly. It supports most web server log file formats including Apache, IIS and many other web server log formats.
This article will help you to install and configure AWStats on Ubuntu.
### Install AWStats Package ###
By default, AWStats package is available in the Ubuntu repository.
You can install it by running:
sudo apt-get install awstats
Next you will need to enable the CGI module in Apache.
You can do this by running:
sudo a2enmod cgi
Now, restart Apache to reflect the changes.
sudo /etc/init.d/apache2 restart
### Configure AWStats ###
You need to create a configuration file for each domain or website you wish to view statistics for. In this example we will create a configuration file for “test.com“.
You can do this by duplicating the AWStats default configuration file to one with your domain name.
sudo cp /etc/awstats/awstats.conf /etc/awstats/awstats.test.com.conf
Now, you need to make some changes in the config file:
sudo nano /etc/awstats/awstats.test.com.conf
Update the settings shown below:
# Change to Apache log file, by default it's /var/log/apache2/access.log
LogFile="/var/log/apache2/access.log"
# Change to the website domain name
SiteDomain="test.com"
HostAliases="www.test.com localhost 127.0.0.1"
# When this parameter is set to 1, AWStats adds a button on report page to allow to "update" statistics from a web browser
AllowToUpdateStatsFromBrowser=1
Save and close the file.
After these changes, you need to build your initial statistics which will be generated from the current logs on your server. You can do this using:
sudo /usr/lib/cgi-bin/awstats.pl -config=test.com -update
The output will look something like this:
![awtstats](https://www.maketecheasier.com/assets/uploads/2015/10/awtstats.png)
### Configure Apache For AWStats ###
Next, you need to configure Apache2 to show these stats. Now copy the content of the “cgi-bin” folder to the default document root directory of your Apache installation. By default, this is in the “/usr/lib/cgi-bin” folder.
You can do this by running:
sudo cp -r /usr/lib/cgi-bin /var/www/html/
sudo chown www-data:www-data /var/www/html/cgi-bin/
sudo chmod -R 755 /var/www/html/cgi-bin/
### Test AWStats ###
Now you can access your AWStats by visiting the url “http://your-server-ip/cgi-bin/awstats.pl?config=test.com.”
It will show you a results page like this:
![awstats_page](https://www.maketecheasier.com/assets/uploads/2015/10/awstats_page.jpg)
### Set Up Cron to Update Logs ###
It is recommended to schedule a cron job to regularly update the AWStats database using newly created log entries, so the stats get updated on a regular basis. This will also save your time.
To do this you need to edit the “/etc/crontab” file:
sudo nano /etc/crontab
Add the following line that tells AWStats to update every ten minutes.
*/10 * * * * root /usr/lib/cgi-bin/awstats.pl -config=test.com -update
Save and close the file.
### Conclusion ###
AWStats is a very useful tool that can give you an overview of what is happening on your website and assist with site analysis. It is very easy to install and configure. Feel free to comment below if you have any questions.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/set-up-awstats-ubuntu/
作者:[Hitesh Jethva][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/hiteshjethva/
@ -1,3 +1,4 @@
zpl1025
Install Android On BQ Aquaris Ubuntu Phone In Linux
================================================================================
![How to install Android on Ubuntu Phone](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/11/Install-Android-on-Ubuntu-Phone.jpg)
@ -122,4 +123,4 @@ via: http://itsfoss.com/install-android-ubuntu-phone/
[1]:https://storage.googleapis.com/otas/2014/Smartphones/Aquaris_E4.5_L/2.0.1_20150623-1900_bq-FW.zip
[2]:http://www.bq.com/gb/support/aquaris-e4-5
[3]:https://storage.googleapis.com/otas/2014/Smartphones/Aquaris_E4.5/Ubuntu/Web%20version/Web%20version/SP_Flash_Tool_exe_linux_v5.1424.00.zip
[4]:http://www.bq.com/gb/support/aquaris-e4-5-ubuntu-edition
[4]:http://www.bq.com/gb/support/aquaris-e4-5-ubuntu-edition
@ -0,0 +1,317 @@
How to Setup Drone - a Continuous Integration Service in Linux
==============================================================
Are you tired of cloning, building, testing, and deploying code time and again? If yes, switch to continuous integration. Continuous Integration, aka CI, is the practice in software engineering of making frequent commits to the code base and building, testing and deploying as we go. CI helps to quickly integrate new code into the existing code base. If this process is automated, it speeds up development, as it reduces the time the developer would otherwise spend building and testing things manually. [Drone][1] is a free and open source project which provides an awesome continuous integration service and is released under the Apache License Version 2.0. It integrates with many repository providers like GitHub, Bitbucket and Google Code, and can pull code from those repositories, enabling us to build source code written in a number of languages, including PHP, Node, Ruby, Go, Dart, Python, C/C++, Java and more. It is such a powerful platform because it uses containers and Docker technology for every build, giving users complete control over their build environment with guaranteed isolation.
### 1. Installing Docker ###
First of all, we'll install Docker, as it's the most vital element in the complete workflow of Drone. Drone makes proper use of Docker for building and testing applications. This container technology speeds up the development of applications. To install Docker, we'll need to run the appropriate commands for our Linux distribution. In this tutorial, we'll cover the steps for the Ubuntu 14.04 and CentOS 7 Linux distributions.
#### On Ubuntu ####
To install Docker in Ubuntu, we can simply run the following commands in a terminal or console.
# apt-get update
# apt-get install docker.io
After the installation is done, we'll restart our docker engine using service command.
# service docker restart
Then, we'll make docker start automatically in every system boot.
# update-rc.d docker defaults
Adding system startup for /etc/init.d/docker ...
/etc/rc0.d/K20docker -> ../init.d/docker
/etc/rc1.d/K20docker -> ../init.d/docker
/etc/rc6.d/K20docker -> ../init.d/docker
/etc/rc2.d/S20docker -> ../init.d/docker
/etc/rc3.d/S20docker -> ../init.d/docker
/etc/rc4.d/S20docker -> ../init.d/docker
/etc/rc5.d/S20docker -> ../init.d/docker
#### On CentOS ####
First, we'll update all the packages installed on our CentOS machine. We can do that by running the following command.
# sudo yum update
To install docker in centos, we can simply run the following commands.
# curl -sSL https://get.docker.com/ | sh
After the Docker engine is installed on our CentOS machine, we'll simply start it by running the following systemctl command, as systemd is the default init system in CentOS 7.
# systemctl start docker
Then, we'll enable docker to start automatically in every system startup.
# systemctl enable docker
ln -s '/usr/lib/systemd/system/docker.service' '/etc/systemd/system/multi-user.target.wants/docker.service'
### 2. Installing SQlite Driver ###
By default, Drone uses the SQLite3 database for storing its data and information. It will automatically create a database file named drone.sqlite under /var/lib/drone/ and will handle the database schema setup and migration itself. To set up the SQLite3 driver, we'll need to follow the steps below.
#### On Ubuntu 14.04 ####
As SQLite3 is available in the default repository of Ubuntu 14.04, we'll simply install it by running the following apt command.
# apt-get install libsqlite3-dev
#### On CentOS 7 ####
To install it on CentOS 7 machine, we'll need to run the following yum command.
# yum install sqlite-devel
### 3. Installing Drone ###
Finally, after we have installed those dependencies successfully, we'll move on to installing Drone itself. In this step, we'll simply download the binary package from the official download link for the respective package format and then install it using the default package manager.
#### On Ubuntu ####
We'll use wget to download the debian package of drone for ubuntu from the [official Debian file download link][2]. Here is the command to download the required debian package of drone.
# wget downloads.drone.io/master/drone.deb
Resolving downloads.drone.io (downloads.drone.io)... 54.231.48.98
Connecting to downloads.drone.io (downloads.drone.io)|54.231.48.98|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7722384 (7.4M) [application/x-debian-package]
Saving to: 'drone.deb'
100%[======================================>] 7,722,384 1.38MB/s in 17s
2015-11-06 14:09:28 (456 KB/s) - 'drone.deb' saved [7722384/7722384]
After it's downloaded, we'll install it with the dpkg package manager.
# dpkg -i drone.deb
Selecting previously unselected package drone.
(Reading database ... 28077 files and directories currently installed.)
Preparing to unpack drone.deb ...
Unpacking drone (0.3.0-alpha-1442513246) ...
Setting up drone (0.3.0-alpha-1442513246) ...
Your system ubuntu 14: using upstart to control Drone
drone start/running, process 9512
#### On CentOS ####
In the machine running CentOS, we'll download the RPM package from the [official download link for RPM][3] using wget command as shown below.
# wget downloads.drone.io/master/drone.rpm
--2015-11-06 11:06:45-- http://downloads.drone.io/master/drone.rpm
Resolving downloads.drone.io (downloads.drone.io)... 54.231.114.18
Connecting to downloads.drone.io (downloads.drone.io)|54.231.114.18|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7763311 (7.4M) [application/x-redhat-package-manager]
Saving to: drone.rpm
100%[======================================>] 7,763,311 1.18MB/s in 20s
2015-11-06 11:07:06 (374 KB/s) - drone.rpm saved [7763311/7763311]
Then, we'll install the downloaded RPM package using the yum package manager.
# yum localinstall drone.rpm
### 4. Configuring Port ###
After the installation is completed, we'll configure Drone to make it work. Drone's configuration lives in the **/etc/drone/drone.toml** file. By default, the Drone web interface is exposed on port 80, the default HTTP port; if we want to change it, we can do so by replacing the value under the server block as shown below.
[server]
port=":80"
### 5. Integrating Github ###
In order to run Drone, we must set up at least one integration point among GitHub, GitHub Enterprise, GitLab, Gogs and Bitbucket. In this tutorial, we'll only integrate GitHub, but if we want to integrate another provider we can do that from the configuration file. To integrate GitHub, we'll need to create a new application in our [github settings][4].
![Registering App Github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-app-github.png)
To create, we'll need to click on Register a New Application then fill out the form as shown in the following image.
![Registering OAuth app github](http://blog.linoxide.com/wp-content/uploads/2015/11/registering-OAuth-app-github.png)
We should make sure that the **Authorization callback URL** looks like http://drone.linoxide.com/api/auth/github.com in the configuration of the application. Then, we'll click on Register application. After that's done, we'll note the Client ID and Client Secret key, as we'll need to add them to our Drone configuration.
![Client ID and Secret Token](http://blog.linoxide.com/wp-content/uploads/2015/11/client-id-secret-token.png)
After that's done, we'll edit our Drone configuration using a text editor by running the following command.
# nano /etc/drone/drone.toml
Then, we'll find the [github] section and fill it in with the values noted above, as shown below.
[github]
client="3dd44b969709c518603c"
secret="4ee261abdb431bdc5e96b19cc3c498403853632a"
# orgs=[]
# open=false
![Configuring Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-github-drone-e1446835124465.png)
### 6. Configuring SMTP server ###
If we want Drone to send notifications via email, we'll need to specify the configuration of our SMTP server. If we already have an SMTP server, we can use its configuration; but as we don't have one here, we'll need to install an MTA, i.e. Postfix, and then specify the SMTP configuration in the Drone configuration.
#### On Ubuntu ####
We can install postfix in ubuntu by running the following apt command.
# apt-get install postfix
#### On CentOS ####
We can install postfix in CentOS by running the following yum command.
# yum install postfix
After installing it, we'll need to edit our Postfix configuration using a text editor.
# nano /etc/postfix/main.cf
Then, we'll need to set the value of the myhostname parameter to our FQDN, i.e. drone.linoxide.com.
myhostname = drone.linoxide.com
Now, we'll finally configure the SMTP section of our Drone configuration file.
# nano /etc/drone/drone.toml
Then, we'll find the [smtp] section and append the settings as follows.
[smtp]
host = "drone.linoxide.com"
port = "587"
from = "root@drone.linoxide.com"
user = "root"
pass = "password"
![Configuring SMTP Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/configuring-smtp-drone.png)
Note: you are strongly recommended to change the **user** and **pass** parameters here according to your own mail server configuration.
### 7. Configuring Worker ###
As Drone utilizes Docker for its build and test tasks, we need to configure Docker as the worker for Drone. To do so, we'll edit the [worker] section in the Drone configuration file.
# nano /etc/drone/drone.toml
Then, we'll uncomment the following lines and append as shown below.
[worker]
nodes=[
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock"
]
Here, we have set only 2 nodes, which means the above configuration is capable of executing only 2 builds at a time. To increase concurrency, we can increase the number of nodes.
[worker]
nodes=[
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock",
"unix:///var/run/docker.sock"
]
Here, in the above configuration, drone is configured to process four builds at a time, using the local docker daemon.
### 8. Restarting Drone ###
Finally, after everything is done regarding the installation and configuration, we'll now start our drone server in our linux machine.
#### On Ubuntu ####
To start Drone on our Ubuntu 14.04 machine, we'll simply run the service command (Ubuntu 14.04 uses Upstart, as shown in the package installation output above).
# service drone restart
To make drone start automatically in every boot of the system, we'll run the following command.
# update-rc.d drone defaults
#### On CentOS ####
To start drone in CentOS machine, we'll simply run systemd command as CentOS 7 is shipped with systemd as init system.
# systemctl restart drone
Then, we'll enable drone to start automatically in every system boot.
# systemctl enable drone
### 9. Allowing Firewalls ###
As Drone uses port 80 by default and we haven't changed it, we'll configure our firewall to allow port 80 (HTTP) so that the web interface is accessible from other machines on the network.
#### On Ubuntu 14.04 ####
iptables is a popular firewall program which is installed on Ubuntu distributions by default. We'll configure iptables to open port 80 so that the Drone web interface is accessible on the network.
# iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
# /etc/init.d/iptables save
#### On CentOS 7 ####
As CentOS 7 has systemd installed by default, it runs firewalld as its firewall program. To open port 80 (the http service) in firewalld, we'll need to execute the following commands.
# firewall-cmd --permanent --add-service=http
success
# firewall-cmd --reload
success
### 10. Accessing Web Interface ###
Now, we'll open the Drone web interface in our favourite web browser. To do so, we'll point the browser at the machine running Drone. As Drone's default port is 80 and we have kept it at 80 in this tutorial, we'll simply browse to http://ip-address/ or http://drone.linoxide.com according to our configuration. Once that is done, we'll see the front page with options to log into our dashboard.
![Login Github Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/login-github-drone-e1446834688394.png)
As we configured GitHub in the step above, we'll simply select GitHub and go through the app authentication process; after it's done, we'll be forwarded to our dashboard.
![Drone Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/11/drone-dashboard.png)
Here, it will synchronize all our GitHub repositories and ask us to activate the repos we want to build with Drone.
![Activate Repository](http://blog.linoxide.com/wp-content/uploads/2015/11/activate-repository-e1446835574595.png)
After a repo is activated, Drone will ask us to add a new file named .drone.yml to the repository, in which we define the build process and configuration, such as which image to fetch and which commands/scripts to run while compiling, etc.
We'll need to configure our .drone.yml as shown below.
image: python
script:
- python helloworld.py
- echo "Build has been completed."
After that's done, we'll be able to build our application using the .drone.yml configuration file in our Drone application. All commits made to the repository are synced in real time; once a commit is made, a build is automatically started in our Drone application.
![Building Application Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/building-application-drone.png)
After the build is completed, we'll be able to see the output of the build with the output console.
![Build Success Drone](http://blog.linoxide.com/wp-content/uploads/2015/11/build-success-drone.png)
### Conclusion ###
In this article, we learned how to completely set up a working Continuous Integration platform with Drone. If we want, we can even get started with the services provided by the official Drone.io project, choosing a free or paid plan according to our requirements. Drone has changed the world of continuous integration with its beautiful web interface and powerful set of features, and it can integrate with many third-party applications and deployment platforms. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you!
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/setup-drone-continuous-integration-linux/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://drone.io/
[2]:http://downloads.drone.io/master/drone.deb
[3]:http://downloads.drone.io/master/drone.rpm
[4]:https://github.com/settings/developers
@ -0,0 +1,123 @@
LastPass的开源替代品
================================================================================
LastPass是一个跨平台的密码管理程序。在Linux平台中它可作为Firefox, Chrome和Opera浏览器的插件使用。LastPass Sesame支持Ubuntu/Debian与Fedora系统。此外LastPass还有安装在Firefox Portable的便携版可将其安装在USB设备上。再加上适用于Ubuntu/Debian, Fedora和openSUSE的LastPass Pocket, 其具有良好的跨平台覆盖性。虽然LastPass备受好评但它是一个专有软件。此外LastPass最近被LogMeIn收购。如果你在找一个开源的替代品这篇文章可能会对你有所帮助。
我们正面临着信息大爆炸。无论你是要在线经营生意,找工作,还是只为了休闲来进行阅读,互联网都是一个广大的信息源。在这种情况下,长期保留信息是很困难的。然而,及时地获取某些特定信息非常重要。密码就是这样的一个例子。
作为一个电脑用户,你可能会面临在不同服务或网站使用相同或不同密码的困境。这个事情非常复杂,因为有些网站会限制你对密码的选择。比如,一个网站可能会限制密码的最小位数,大写字母,数字或者特殊字符,这使得在所有网站使用统一密码变得不可能。更重要的是,不在不同网站中使用同一密码有安全方面的原因。这样就不可避免地意味着人们经常会有很多密码要记。一个解决方案是将所有的密码写下来。然而,这种做法也极度的不安全。
为了解决需要记忆无穷多串密码的问题,目前比较流行的解决方案是使用密码管理软件。事实上,这类软件对于活跃的互联网用户来说极为实用。它使得你获取、管理和安全保存所有密码变得极为容易,而大多数密码都是被软件或文件系统加密过的。因此,用户只需要记住一个简单的密码就可以获取到其它所有密码。密码管理软件鼓励用户对于不同服务去采用独一无二的,非直观的强密码。
为了让大家更深入地了解Linux软件的质量我将介绍4款优秀的、可替代LastPass的开源软件。
### KeePassX ###
![KeePassX软件截图](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-KeePassX.png)
KeePassX 提供了 KeePass 的多平台移植,是一款开源、跨平台的密码管理软件。这款软件可以帮助你以安全的方式保管密码。你可以将所有密码保存在一个数据库中,而这个数据库由一个主密码或一个密钥文件来保护。
密码数据库使用AES(即Rijndael)或者TwoFish算法进行加密密钥长度为256位。
该软件功能包括:
- 多重管理模式 - 使每条密码更容易被识别
- 可设置密码过期时间
- 可插入附件
- 可为不同分组或密码自定义标志
- 在分组中对密码排序
- 搜索函数:可在特定分组或整个数据库中搜索
- Auto-Type: 这个功能允许你在登录网站时只需要按下几个键。KeePassX可以帮助你输入剩下的密码。Auto-Type通过读取当前窗口的标题对密码数据库进行搜索来获取相应的密码
- 数据库安全性强用户可通过密码或一个密钥文件可存储在CD或U盘中访问数据库
- 自动生成安全的密码
- 具有预防措施,获取选中的密码并检查其安全性
- 加密 - 用256位密钥通过AES(高级加密标准)或TwoFish算法加密数据库
- 密码可以导入或导出。可从PwManager文件(*.pwm)或KWallet文件(*.xml)中导入密码,可导出为文本(*.txt)格式。
- 软件官网:[www.keepassx.org][1]
- 开发者KeepassX Team
- 软件许可证GNU GPL V2
- 版本号0.4.3
### Encryptr ###
![Encryptr软件截图](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Encryptr.png)
Encryptr是一个开源的、零知晓的、基于云端的密码管理/电子钱包软件以Crypton为基础开发。Crypton是一个Javascript库允许开发者利用其开发应用上传文件至服务器而服务器无法知道用户所存储的文件内容。
Encryptr可将你的敏感信息比如密码、信用卡数据、PIN码、或认证码存储在云端。然而由于它基于零知晓的Cypton框架开发Encryptr可保证只有用户才拥有访问或读取秘密信息的权限。
由于其跨平台的特性Encryptr允许用户随时随地、安全地通过一个账户从云端获取机密信息。
软件特性包括:
- 使用极安全、零知晓的Crypton框架软件只在本地加密/解密数据
- 易于使用
- 基于云端
- 可存储三种类型的数据:密码、信用卡账号以及通用的键值对
- 可对每条密码设置“备注”项
- 对本地密码进行缓存加密,以节省上传时间
- 软件官网: [encryptr.org][2]
- 开发者: Tommy Williams
- 软件许可证: GNU GPL v3
- 版本号: 1.2.0
### RatticDB ###
![RatticDB软件截图](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-RatticDB.png)
RatticDB是一个开源的、基于Django的密码管理服务。
RatticDB被设计为一个“密码生命周期管理工具”而不是单单一个“密码存储工具”。RatticDB致力于及时提醒用户哪些密码在何时需要更改。它不提供应用层面的密码加密。
软件特性包括:
- 简洁的ACL设计
- 可改变队列功能,可让用户知晓何时需要更改某应用的密码
- Ansible配置
- 软件官网: [rattic.org][3]
- 开发者: Daniel Hall
- 软件许可证: GNU GPL v2
- 版本号: 1.3.1
### Seahorse ###
![Seahorse软件截图](http://www.linuxlinks.com/portal/content/reviews/Security/Screenshot-Seahorse.png)
Seahorse 是 GnuPGGNU 隐私保护软件)的一个 Gnome 前端。它的目标是提供一个易于使用的密钥管理工具,并提供一个易于使用的界面来控制加密操作。
Seahorse 是一个用来提供安全通信和数据存储服务的工具。数据加密和数字密钥生成操作可以轻易地通过 GUI 来完成,密钥管理操作也可以轻易地通过直观的界面来进行。
此外Seahorse包含一个Gedit插件可以使用鹦鹉螺文件管理器管理文件一个管理剪贴板中事物的小程序一个存储私密密码的代理还有一个GnuPG和OpenSSH的密钥管理工具。
软件特性包括:
- 对文本进行加密/解密/签名
- 管理密钥及密钥环
- 将密钥及密钥环于密钥服务器同步
- 密码签名及发布
- 将密码缓存起来,无需多次重复键入
- 对密钥及密钥环进行备份
- 可添加一个GDK支持格式的图片作为OpenGPG图片ID
- 生成SSH密钥对其进行验证及储存
- 多语言支持
- 软件官网: [www.gnome.org/projects/seahorse][4]
- 开发者: Jacob Perkins, Jose Carlos, Garcia Sogo, Jean Schurger, Stef Walter, Adam Schreiber
- 软件许可证: GNU GPL v2
- 版本号: 3.18.0
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20151108125950773/LastPassAlternatives.html
译者:[StdioA](https://github.com/StdioA)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://www.keepassx.org/
[2]:https://encryptr.org/
[3]:http://rattic.org/
[4]:http://www.gnome.org/projects/seahorse/
@ -1,66 +0,0 @@
那些奇特的 Linux 发行版本
================================================================================
从大多数消费者所关注的诸如 UbuntuFedoraMint 或 elementary OS 到更加晦涩、轻量级和企业级的诸如 SlackwareArch Linux 或 RHEL这些发行版本我都已经见识过了。除了这些难道没有其他别的了吗其实 Linux 的生态系统是非常多样化的,对每个人来说,总有一款适合你。下面就让我们讨论一些稀奇古怪的小众 Linux 发行版本吧,它们代表着开源平台真正的多样性。
![strangest linux distros](http://2.bp.blogspot.com/--cSL2-6rIgA/VcwNc5hFebI/AAAAAAAAJzk/AgB55mVtJVQ/s1600/Puppy-Linux.png)
**Puppy Linux** 它是一个仅有一个普通 DVD 光盘容纳十分之一大小的操作系统,这就是 Puppy Linux。整个操作系统仅有 100MB 大小!并且它还可以在内存中运行,这使得它运行极快,甚至是在老式的 PC 机上。 在操作系统启动后,你甚至可以移除安装介质!还有什么比这个更好的吗? 系统所需的资源极小,大多数的硬件都会被自动检测到,并且它预装了能够满足你基本需求的软件。[在这里体验 Puppy Linux 吧][1].
![suicide linux](http://3.bp.blogspot.com/-dfeehRIQKpo/VdMgRVQqIJI/AAAAAAAAJz0/TmBs-n2K9J8/s1600/suicide-linux.jpg)
**Suicide Linux(自杀 Linux)** 这个名字吓到你了吗?我想应该是。 ‘任何时候 -注意是任何时候-一旦你输入不正确的命令,解释器都会创造性地将它重定向为 `rm -rf /` 命令,然后擦除你的硬盘’。它就是这么简单。我真的很想知道那些自信到将[Suicide Linux][2] 安装到生产机上的家伙。 **警告:不要在生产机上尝试这个!** 假如你感兴趣的话,现在可以通过一个简洁的[DEB 包][3]来获取到它。
![top 10 strangest linux distros](http://3.bp.blogspot.com/-Q0hlEMCD9-o/VdMieAiXY1I/AAAAAAAAJ0M/iS_ZjVaZAk8/s1600/papyros.png)
**PapyrOS** 它在好的方面上 “奇怪”。PapyrOS 正尝试着将 Android 的 material design 设计语言应用到新品牌的 Linux 发行版本上。尽管这个项目还处于早期阶段,看起来它已经很有前景。该项目的网页上说该系统已经完成了 80%,随后人们可以期待它的第一个 Alpha 发行版本。在该项目被宣告提出时,我们做了[PapyrOS][4]的小幅报道,从它的外观上看,它甚至可能会引领潮流。假如你感兴趣的话,可在[Google+][5]上关注该项目并可通过[BountySource][6]来贡献出你的力量。
![10 most unique linux distros](http://3.bp.blogspot.com/-8aOtnTp3Yxk/VdMo_KWs4sI/AAAAAAAAJ0o/3NTqhaw60jM/s1600/qubes-linux.png)
**Qubes OS** Qubes 是一个开源的操作系统,通过使用安全划分的方法,被设计用来提供强大的安全性。其前提假设是不存在完美的没有 bug 的桌面环境。并通过实现一个‘安全隔离’ 的方法,[Qubes Linux][7]尝试去弥补那些 bug。Qubes 基于 XenX 视窗系统和 Linux并可运行大多数的 Linux 应用,支持大多数的 Linux 驱动。Qubes 入选了 Access Innovation Prize 2014 for Endpoint Security Solution 决赛名单。
![top10 linux distros](http://3.bp.blogspot.com/-2Sqvb_lilC0/VdMq_ceoXnI/AAAAAAAAJ00/kot20ugVJFk/s1600/ubuntu-satanic.jpg)
**Ubuntu Satanic Edition** Ubuntu SE 是一个基于 Ubuntu 的发行版本。通过一个含有主题、壁纸甚至来源于某些天才新晋艺术家的重金属音乐的综合软件包,“它同时带来了最好的自由软件和免费的金属音乐” 。尽管这个项目看起来不再被活跃地发展了, Ubuntu Satanic Edition 甚至在其名字上都显得奇异。 [Ubuntu SE (Slightly NSFW)][8]。
![10 strange linux distros](http://2.bp.blogspot.com/-ZtIVjGMqdx0/VdMv136Pz1I/AAAAAAAAJ1E/-q34j-TXyUY/s1600/tiny-core-linux.png)
**Tiny Core Linux** Puppy Linux 还不够小?试试这个吧。 Tiny Core Linux 是一个 12MB 大小的图形化 Linux 桌面!是的,你没有读错。一个主要的补充说明:它不是一个完整的桌面,也并不完全支持所有的硬件。它只含有能够启动进入一个非常小巧的 X 桌面,支持有线网络连接的核心部件。它甚至还有一个名为 Micro Core Linux 的没有 GUI 的版本,仅有 9MB 大小。[Tiny Core Linux][9]。
![top 10 unique and special linux distros](http://4.bp.blogspot.com/-idmCvIxtxeo/VdcqcggBk1I/AAAAAAAAJ1U/DTQCkiLqlLk/s1600/nixos.png)
**NixOS** 它是一个非常关注经验用户的 Linux 发行版本,有着独特的方式来打包和配置管理。在其他的发行版本中,诸如升级的操作可能是非常危险的。升级一个软件包可能会引起其他包无法使用,相比于从头安装一个系统,升级整个系统则显得不是那么可信。在那些你不能安全地测试由一个配置的改变所带来的结果的更改之上,它们通常没有“重来”这个选项。在 NixOS 中,整个系统由 Nix 包管理器按照一个纯功能性的构建语言的描述来构建。这意味着一个新的配置不会重写先前的配置。大多数其他的特色功能也遵循着这个模式。Nix 相互分离地存储所有的软件包。有关 NixOS 的更多内容请看[这里][10]。
![strangest linux distros](http://4.bp.blogspot.com/-rOYfBXg-UiU/VddCF7w_xuI/AAAAAAAAJ1w/Nf11bOheOwM/s1600/gobolinux.jpg)
**GoboLinux** 这是另一个非常奇特的 Linux 发行版本。它与其他系统如此不同的原因是它有着独特的重管理文件系统。它有着自己独特的子目录树其中存储着所有的文件和程序。GoboLinux 没有专门的包数据库,因为其文件系统就是它的数据库。在某些方面,这类管理有些类似于 OS X 上所看到的功能。
![strangest linux distros](http://1.bp.blogspot.com/-3P22pYfih6Y/VdcucPOv4LI/AAAAAAAAJ1g/PszZDbe83sQ/s1600/hannah-montana-linux.jpg)
**Hannah Montana Linux** 它是一个基于 Kubuntu 的 Linux 发行版本,它有着 Hannah Montana 主题的开机启动界面、KDM(KDE Display Manager)、图标集、ksplash、plasma、颜色主题和壁纸(请抱歉)。[这是它的链接][12]。这个项目现在不再活跃了。
**RLSD Linux** 它是一个极其精简、小巧、轻量和安全加固的,建立在 Linux 内核上的基于文本的操作系统。开发者称 “它是一个独特的发行版本,提供一系列的控制台应用和本地化的安全特性,对黑客或许有吸引力。” [RLSD Linux][13].
我们还错过了某些更加奇特的发行版本吗?请让我们知晓吧。
--------------------------------------------------------------------------------
via: http://www.techdrivein.com/2015/08/the-strangest-most-unique-linux-distros.html
作者Manuel Jose
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://puppylinux.org/main/Overview%20and%20Getting%20Started.htm
[2]:http://qntm.org/suicide
[3]:http://sourceforge.net/projects/suicide-linux/files/
[4]:http://www.techdrivein.com/2015/02/papyros-material-design-linux-coming-soon.html
[5]:https://plus.google.com/communities/109966288908859324845/stream/3262a3d3-0797-4344-bbe0-56c3adaacb69
[6]:https://www.bountysource.com/teams/papyros
[7]:https://www.qubes-os.org/
[8]:http://ubuntusatanic.org/
[9]:http://tinycorelinux.net/
[10]:https://nixos.org/
[11]:http://www.gobolinux.org/
[12]:http://hannahmontana.sourceforge.net/
[13]:http://rlsd2.dimakrasner.com/
@ -0,0 +1,101 @@
Aix, HP-UX, Solaris, BSD, 和 LINUX 简史
================================================================================
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png)
有句话说,当一扇门在你面前关上的时候,另一扇门就会打开。[Ken Thompson][1] 和 [Dennis Richie][2] 两个人就是最好的例子。他们俩是 **20世纪** 最优秀的信息技术专家,因为他们创造了 **UNIX**,最具影响力和创新性的软件之一。
### UNIX 系统诞生于贝尔实验室 ###
**UNIX** 最开始的名字是 **UNICS** (**UN**iplexed **I**nformation and **C**omputing **S**ervice)它有一个大家庭并不是从石头里蹦出来的。UNIX的祖父是 **CTSS** (**C**ompatible **T**ime **S**haring **S**ystem),它的父亲是 **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice),这个系统能支持大量用户通过交互式分时使用大型机。
UNIX 诞生于 **1969** 年,由 **Ken Thompson** 以及后来加入的 **Dennis Richie** 共同完成。这两位优秀的研究员和科学家一起在一个**通用电气**和**麻省理工学院**的合作项目里工作,项目目标是开发一个叫 Multics 的交互式分时系统。
Multics 的目标是整合分时共享以及当时其他先进技术,允许用户在远程终端通过电话登录到主机,然后可以编辑文档,阅读电子邮件,运行计算器,等等。
在之后的五年里AT&T 公司为 Multics 项目投入了数百万美元。他们购买了 GE-645 大型机,聚集了贝尔实验室的顶级研究人员,例如 Ken Thompson, Stuart Feldman, Dennis Ritchie, M. Douglas McIlroy, Joseph F. Ossanna, 以及 Robert Morris。但是项目目标太过激进进度严重滞后。最后AT&T 高层决定放弃这个项目。
贝尔实验室的管理层决定停止这个让许多研究人员无比纠结的操作系统上的所有遗留工作。不过要感谢 ThompsonRichie 和一些其他研究员,他们把老板的命令丢到一边,并继续在实验室里满怀热心地忘我工作,最终孵化出前无古人后无来者的 UNIX。
UNIX 的第一声啼哭是在一台 PDP-7 微型机上,它是 Thompson 测试自己在操作系统设计上的点子的机器,也是 Thompson 和 Richie 一起玩 Space and Travel 游戏的模拟器。
> “我们想要的不仅是一个优秀的编程环境而是能围绕这个系统形成团体。按我们自己的经验通过远程访问和分时共享主机实现的公共计算本质上不只是用终端输入程序代替打孔机而已而是鼓励密切沟通。”Dennis Richie 说。
UNIX 是第一个靠近理想的系统,在这里程序员可以坐在机器前自由摆弄程序,探索各种可能性并随手测试。在 UNIX 整个生命周期里,因为大量因为其他操作系统限制而投身过来的高手做出的无私贡献,它的功能模型一直保持上升趋势。
UNIX 在 1970 年因为 PDP-11/20 获得了首次资金注入,之后正式更名为 UNIX 并支持在 PDP-11/20 上运行。UNIX 带来的第一次收获是在 1971 年,贝尔实验室的专利部门配备来做文字处理。
### UNIX 上的 C 语言革命 ###
Dennis Richie 在 1972 年发明了一种叫 “**C**” 的高级编程语言,之后他和 Ken Thompson 决定用 “C” 重写 UNIX 系统,来支持更好的移植性。他们在那一年里编写和调试了差不多 100,000 行代码。在使用了 “C” 语言后,系统可移植性非常好,只需要修改一小部分机器相关的代码就可以将 UNIX 移植到其他计算机平台上。
UNIX 第一次公开露面是在 1973 年 Dennis Ritchie 和 Ken Thompson 在操作系统原理会议上发表的一篇论文,然后 AT&T 发布了 UNIX 系统第 5 版,并授权给教育机构使用,之后在 1976 年第一次以 **$20,000** 的价格授权企业使用 UNIX 第 6 版。应用最广泛的是 1980 年发布的 UNIX 第 7 版,任何人都可以购买,只是授权条款非常有限。授权内容包括源代码,以及用 PDP-11 汇编语言写的机器相关内核。各种版本的 UNIX 系统完全由它的用户手册确定。
### AIX 系统 ###
**1983** 年,**Microsoft** 计划开发 **Xenix** 作为 MS-DOS 的多用户版继任者,他们在那一年花了 $8,000 搭建了一台拥有 **512 KB** 内存以及 **10 MB**硬盘并运行 Xenix 的 Altos 586。而到 1984 年为止,全世界已经安装了超过 100,000 份 UNIX System V 第二版。在 1986 年发布了包含因特网域名服务的 4.3BSD,而且 **IBM** 宣布 **AIX 系统**的安装数已经超过 250,000。AIX 基于 Unix System V 开发,这套系统拥有 BSD 风格的根文件系统,是两者的结合。
AIX 第一次引入了 **日志文件系统 (JFS)** 以及集成逻辑卷管理器 (LVM)。IBM 在 1989 年将 AIX 移植到自己的 RS/6000 平台。2001 年发布的 5L 版是一个突破性的版本,提供了 Linux 友好性以及支持 Power4 服务器的逻辑分区。
在 2004 年发布的 AIX 5.3 引入了支持 Advanced Power Virtualization (APV) 的虚拟化技术,支持对称多线程,微分区,以及可分享的处理器池。
在 2007 年IBM 同时发布 AIX 6.1 和 Power6 架构,开始加强自己的虚拟化产品。他们还将 Advanced Power Virtualization 重新包装成 PowerVM。
这次改进包括被称为 WPARs 的负载分区形式,类似于 Solaris 的 zones/Containers但是功能更强。
### HP-UX 系统 ###
**惠普 UNIX (HP-UX)** 源于 System V 第 3 版。这套系统一开始只支持 PA-RISC HP 9000 平台。HP-UX 第 1 版发布于 1984 年。
HP-UX 第 9 版引入了 SAM一个基于角色的图形用户界面 (GUI),用户可以用来管理整个系统。在 1995 年发布的第 10 版,调整了系统文件分布以及目录结构,变得有点类似 AT&T SVR4。
第 11 版发布于 1997 年。这是 HP 第一个支持 64 位寻址的版本。不过在 2000 年重新发布成 11i因为 HP 为特定的信息技术目的,引入了操作环境和分级应用的捆绑组。
在 2001 年发布的 11.20 版宣称支持 Itanium 系统。HP-UX 是第一个使用 ACLs访问控制列表管理文件权限的 UNIX 系统,也是首先支持内建逻辑卷管理器的系统之一。
如今HP-UX 因为 HP 和 Veritas 的合作关系使用了 Veritas 作为主文件系统。
HP-UX 目前最新的版是 11iv3, update 4。
### Solaris 系统 ###
Sun 的 UNIX 版本是 **Solaris**,它在 1992 年取代了 **SunOS**。SunOS 一开始基于 BSD伯克利软件发行版风格的 UNIX但是 SunOS 5.0 版以及之后的版本都是基于重新包装成 Solaris 的 Unix System V 第 4 版。
SunOS 1.0 版于 1983 年发布,用于支持 Sun-1 和 Sun-2 平台。随后在 1985 年发布了 2.0 版。在 1987 年Sun 和 AT&T 宣布合作一个项目以 SVR4 为基础将 System V 和 BSD 合并成一个版本。
Solaris 2.4 是 Sun 发布的第一个 Sparc/x86 版本。1994 年 11 月份发布的 SunOS 4.1.4 版是最后一个版本。Solaris 7 是首个 64 位 Ultra Sparc 版本,加入了对文件系统元数据记录的原生支持。
Solaris 9 发布于 2002 年,支持 Linux 特性以及 Solaris 卷管理器。之后2005 年发布了 Solaris 10带来许多创新比如支持 Solaris Containers新的 ZFS 文件系统,以及逻辑域。
目前 Solaris 最新的版本是 第 10 版,最后的更新发布于 2008 年。
### Linux ###
到了 1991 年,用来替代商业操作系统的免费系统的需求日渐高涨。因此 **Linus Torvalds** 开始构建一个免费操作系统,最终成为 **Linux**。Linux 最开始只有一些 “C” 文件并且使用了阻止商业发行的授权。Linux 是一个类 UNIX 系统但又不尽相同。
2015 年 发布了基于 GNU Public License 授权的 3.18 版。IBM 声称有超过 1800 万行开源代码开放给开发者。
如今 GNU Public License 是应用最广泛的免费软件授权方式。根据开源软件原则,这份授权允许个人和企业自由分发,运行,通过拷贝共享,学习,以及修改软件源码。
### UNIX vs. Linux: 技术概要 ###
- Linux 鼓励多样性Linux 的开发人员有更宽广的背景,有更多不同经验和意见。
- Linux 比 UNIX 支持更多的平台和架构。
- UNIX 商业版本的开发人员会为他们的操作系统考虑特定目标平台以及用户。
- **Linux 比 UNIX 有更好的安全性**更少受病毒或恶意软件攻击。Linux 上大约有 60-100 种病毒但是没有任何一种还在传播。另一方面UNIX 上大约有 85-120 种病毒,但是其中有一些还在传播中。
- 通过 UNIX 命令,系统上的工具和元素很少改变,甚至很多接口和命令行参数在后续 UNIX 版本中一直沿用。
- 有些 Linux 开发项目以自愿为基础进行资助,比如 Debian。其他项目会维护一个和商业 Linux 的社区版,比如 SUSE 的 openSUSE 以及红帽的 Fedora。
- 传统 UNIX 是扩大规模,而另一方面 Linux 是扩大范围。
--------------------------------------------------------------------------------
via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/
作者:[M.el Khamlichi][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/pirat9/
[1]:http://www.unixmen.com/ken-thompson-unix-systems-father/
[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/
@ -0,0 +1,38 @@
Nautilus的文件搜索将迎来重大提升
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/10/nautilus-new-search-filters.jpg)
**在 Nautilus 中查找文件和文件夹将会变得很简单。**
[GNOME 文件管理器][1]中一个新的**搜索过滤器**正在开发中。它大量使用 GNOME 的弹出式菜单来过滤搜索结果,精确定位你关心的内容。
开发者 Georges Stavracas 正致力于这个新的 UI并将新的界面[描述][2]为“更干净、更理智、更直观”。
从他[上传到 Youtube][3] 的视频来看(他还没有把视频嵌入到博客中),他说得没错。
> 他在他的博客中写到“Nautilus有非常复杂但是强大的内部它允许我们做很多事情。事实上这对于很多选项的代码也是这样。那么为何它曾经看上去这么糟糕
这个问题部分是反问;新的搜索过滤器界面向用户展示了这些“强大的内部”。搜索可以根据类型、名字或者日期范围来进行过滤。
对像Nautilus这种app的任何修改有可能让一些用户不安因此像这样有帮助、直接的新UI会带来一些争议。
不过不要担心这种不满会阻碍进度毫无疑问类似[移除输入即搜索][4]这样的争议自 2014 年以来就一直存在)。[上个月发布的][5] GNOME 3.18 给 Nautilus 引入了新的文件操作进度对话框以及更好的远程共享支持包括 Google Drive。
Stavracas的搜索过滤还没被合并进Files的trunk但是重做的UI已经初步计划在明年春天的GNOME 3.20中实现。
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/10/new-nautilus-search-filter-ui
作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://wiki.gnome.org/Apps/Nautilus
[2]:http://feaneron.com/2015/10/12/the-new-search-for-gnome-files-aka-nautilus/
[3]:https://www.youtube.com/watch?v=X2sPRXDzmUw
[4]:http://www.omgubuntu.co.uk/2014/01/ubuntu-14-04-nautilus-type-ahead-patch
[5]:http://www.omgubuntu.co.uk/2015/09/gnome-3-18-release-new-features
@ -0,0 +1,111 @@
在浏览器上使用Docker
================================================================================
Docker 越来越流行了。在一个容器里面而不是虚拟机里运行一个完整的操作系统环境是一个非常棒的技术和想法。Docker 已经通过节省工作时间,拯救了成千上万的系统管理员和开发人员。这是一个开源技术,提供一个平台来把应用程序当作容器来打包、分发、共享和运行,而不用关注主机上运行的是什么操作系统。它没有开发语言、框架或打包系统的限制,并且可以在任何时间、任何地点运行,从小型计算机到高端服务器都可以。运行 docker 容器和管理它们可能会有一点困难和花费时间,所以现在有一款基于 web 的应用程序 DockerUI可以让管理和运行容器变得很简单。DockerUI 对那些不熟悉 Linux 命令行、但又想运行容器化程序的人很有帮助。DockerUI 是一个开源的基于 web 的应用程序,它最为人称道的是其华丽的设计,以及用来运行和管理 docker 容器的简单的操作界面。
下面会介绍如何在Linux 上安装配置DockerUI。
### 1. 安装docker ###
首先我们需要安装docker。我们得感谢docker 的开发者让我们可以简单的在主流linux 发行版上安装docker。为了安装docker我们得在对应的发行版上使用下面的命令。
#### Ubuntu/Fedora/CentOS/RHEL/Debian ####
docker 维护者已经写了一个非常棒的脚本用它可以在Ubuntu 15.04/14.10/14.04, CentOS 6.x/7, Fedora 22, RHEL 7 和Debian 8.x 这几个linux 发行版上安装docker。这个脚本可以识别出我们的机器上运行的linux 的发行版本然后将需要的源库添加到文件系统、更新本地的安装源目录最后安装docker 和依赖库。要使用这个脚本安装docker我们需要在root 用户或者sudo 权限下运行如下的命令,
# curl -sSL https://get.docker.com/ | sh
#### OpenSuse/SUSE Linux 企业版 ####
要在运行了OpenSuse 13.1/13.2 或者 SUSE Linux Enterprise Server 12 的机器上安装docker我们只需要简单的执行zypper 命令。运行下面的命令就可以安装最新版本的docker
# zypper in docker
#### ArchLinux ####
docker 存在于ArchLinux 的官方源和社区维护的AUR 库。所以在ArchLinux 上我们有两条路来安装docker。使用官方源安装需要执行下面的pacman 命令:
# pacman -S docker
如果要从社区源 AUR 安装docker需要执行下面的命令
# yaourt -S docker-git
### 2. 启动 ###
安装好docker 之后我们需要运行docker 监护程序然后再能运行并管理docker 容器。我们需要使用下列命令来确定docker 监护程序已经安装并运行了。
#### 在 SysVinit 上####
# service docker start
#### 在Systemd 上####
# systemctl start docker
### 3. 安装DockerUI ###
安装 DockerUI 比安装 docker 要简单很多。我们仅仅需要从 docker 注册表上拉取 dockerui 镜像,然后在容器里面运行。要完成这些,我们只需要简单地执行下面的命令:
# docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui
![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png)
在上面的命令里dockerui 使用的默认端口是9000我们需要使用`-p` 命令映射默认端口。使用`-v` 标志我们可以指定docker socket。如果主机使用了SELinux那么就得使用`--privileged` 标志。
执行完上面的命令后我们要检查dockerui 容器是否运行了,或者使用下面的命令检查:
# docker ps
![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png)
### 4. 拉取docker镜像 ###
现在我们还不能直接使用dockerui 拉取镜像所以我们需要在命令行下拉取docker 镜像。要完成这些我们需要执行下面的命令。
# docker pull ubuntu
![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png)
上面的命令将会从docker 官方源[Docker Hub][1]拉取一个标志为ubuntu 的镜像。类似的我们可以从Hub 拉取需要的其它镜像。
### 4. 管理 ###
启动了 dockerui 容器之后,我们就可以用它来执行启动、暂停、终止、删除等 dockerui 提供的用来操作 docker 容器的命令。首先,我们需要在 web 浏览器里面打开 dockerui在浏览器里面输入 http://ip-address:9000 或者 http://mydomain.com:9000具体要根据你的系统配置。默认情况下不需要登录认证但是可以配置我们的 web 服务器来要求登录认证。要启动一个容器,我们需要有包含我们要运行的程序的镜像。
#### 创建 ####
要创建容器,我们需要在 Images 页面点击想要运行的镜像的 id然后点击 `Create` 按钮,接下来会被要求填写创建容器所需要的各项属性。全部填写完成之后,再点击 `Create` 按钮完成最终的创建。
![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png)
#### 停止 ####
要停止一个容器,我们只需要跳转到 `Containers` 页面,选取要停止的容器,然后在 Actions 的子菜单里面按下 Stop 就行了。
![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png)
#### 暂停与恢复 ####
要暂停一个容器只需要简单的选取目标容器然后点击Pause 就行了。恢复一个容器只需要在Actions 的子菜单里面点击Unpause 就行了。
#### 删除 ####
类似于我们上面完成的任务,杀掉或者删除一个容器或镜像也是很简单的。只需要勾选相应的容器或镜像,然后点击 Kill 或者 Remove 就行了。
### 结论 ###
dockerui 使用了docker 远程API 完成了一个很棒的管理docker 容器的web 界面。它的开发者们已经使用纯HTML 和JS 设计、开发了这个应用。目前这个程序还处于开发中并且还有大量的工作要完成所以我们并不推荐将它应用在生产环境。它可以帮助用户简单的完成管理容器和镜像而且只需要一点点工作。如果想参与、贡献dockerui我们可以访问它们的[Github 仓库][2]。如果有问题、建议、反馈,请写在下面的评论框,这样我们就可以修改或者更新我们的内容。谢谢。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/
作者:[Arun Pyasi][a]
译者:[oska874](https://github.com/oska874)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://hub.docker.com/
[2]:https://github.com/crosbymichael/dockerui/

View File

@ -0,0 +1,520 @@
提高 WordPress 性能的9个技巧
================================================================================
关于建站和 web 应用程序交付WordPress 是全球最大的一个平台。全球大约 [四分之一][1] 的站点现在正在使用开源 WordPress 软件,包括 eBay, Mozilla, RackSpace, TechCrunch, CNN, MTV,纽约时报,华尔街日报。
对于想要创建博客的用户来说WordPress.com 是最流行的平台,它同样运行在 WordPress 开源软件上。[WordPress.com 由 NGINX 提供支持][2]。许多 WordPress 用户刚开始在 WordPress.com 上建站,然后迁移到搭载 WordPress 开源软件的托管主机上;其中大多数站点都使用 NGINX 软件。
WordPress 的吸引力在于它的简单性,无论是安装、上手还是终端用户的日常使用都很简单。然而,当使用量不断增长时WordPress 站点的体系结构也会暴露出一些问题 - 下面的几个方法,包括使用缓存以及将 WordPress 和 NGINX 组合使用,可以解决这些问题。
在这篇博客中我们提供了9个技巧来进行优化以帮助你解决 WordPress 中一些常见的性能问题:
- [缓存静态资源][3]
- [缓存动态文件][4]
- [使用 NGINX][5]
- [为 NGINX 添加固定链接支持][6]
- [为 NGINX 配置 FastCGI][7]
- [为 NGINX 配置 W3_Total_Cache][8]
- [为 NGINX 配置 WP-Super-Cache][9]
- [为 NGINX 配置安全防范措施][10]
- [配置 NGINX 支持 WordPress 多站点][11]
### 在 LAMP 架构下 WordPress 的性能 ###
大多数 WordPress 站点都运行在传统的 LAMP 架构下Linux 操作系统、Apache Web 服务器软件、MySQL 数据库软件(通常是一个单独的数据库服务器)和 PHP 编程语言。这些都是非常著名、应用广泛的开源工具。WordPress 世界里的大多数人都很熟悉 LAMP因此很容易获得帮助和支持。
当用户访问 WordPress 站点时,浏览器会为每个用户创建六到八个连到 Linux/Apache 组合的连接。对于每个连接请求,该页面的 PHP 代码都会运行一遍,从 MySQL 数据库获取资源来响应请求。
LAMP 对于数百个并发用户依然能照常工作。然而,流量突然增加是常见的并且 - 通常是 - 一件好事。
但是,当 LAMP 站点变得繁忙时,当同时在线的用户达到数千个时,它的瓶颈就会被暴露出来。瓶颈存在主要是两个原因:
1. Apache Web 服务器 - Apache 为每一个连接需要消耗大量资源。如果 Apache 接受了太多的并发连接,内存可能会耗尽,性能急剧降低,因为数据必须使用磁盘进行交换。如果以限制连接数来提高响应时间,新的连接必须等待,这也导致了用户体验变得很差。
1. PHP/MySQL 的交互 - 运行 PHP 的应用服务器和 MySQL 数据库服务器合起来,每秒能处理的请求量存在一个上限。当请求数量接近这个上限时,用户就必须等待,所有用户的响应时间都会变长;超过上限两倍以上时,就会出现明显的性能问题。
当 LAMP 架构的网站出现性能瓶颈时,通常的做法是升级硬件 - 增加 CPU、扩大磁盘空间等等。但是当 Apache 和 PHP/MySQL 架构满负荷运行后,单纯堆硬件已经跟不上系统资源需求的指数级增长。
替代 LAMP 架构的首选是 LEMP 架构 Linux、NGINX、MySQL 和 PHPLEMP 缩写中的 E 来自 NGINX 的发音 “engine-x”。我们在 [技巧 3][12] 中会描述 LEMP 架构。
### 技巧 1. 缓存静态资源 ###
静态资源是指不会变化的文件,像 CSS、JavaScript 和图片。这些文件往往占了网页数据量的一半以上。页面的其余部分是动态生成的,比如论坛中的评论、性能仪表盘或个性化的内容(想想 Amazon.com 的商品推荐)。
缓存静态资源有两大好处:
- 更快的交付给用户 - 用户从他们浏览器的缓存或者从互联网上离他们最近的缓存服务器获取静态文件。有时候文件较大,因此减少等待时间对他们来说帮助很大。
- 减少应用服务器的负载 - 从缓存中检索到的每个文件会让 web 服务器少处理一个请求。你的缓存越多,用户等待的时间越短。
要让浏览器缓存文件,需要在静态文件的响应中设置正确的 HTTP 首部。重点关注 HTTP 的 Cache-Control 首部(特别是其中的 max-age 设置、Expires 首部以及 Entity 标签ETag。[这里][13] 有详细的介绍。
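可以用 curl 查看某个静态文件的响应首部,确认 Cache-Control、Expires 等设置是否生效URL 和主题路径仅为示意,请换成你站点上的真实静态资源):

    curl -I http://www.example.com/wp-content/themes/twentyfifteen/style.css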
启用本地缓存后,当用户请求以前访问过的文件时,浏览器会首先检查该文件是否在缓存中。如果在,它会询问 Web 服务器该文件是否改变过。如果该文件没有改变Web 服务器会立即响应一个 304 状态码(未改变),表示该文件没有变化,而不必返回 200 OK 状态码并重新检索和发送整个文件。
为了支持浏览器之外的缓存可以考虑使用内容分发网络CDN。CDN 是一种流行且强大的缓存工具,但我们在这里不详细介绍它。可以先想一想 CDN 背后的支撑技术是如何实现的。此外,当你的站点从 HTTP/1.x 过渡到 HTTP/2 协议时CDN 的用处可能会变小;请根据需要调查和测试,找到适合你网站的正确方法。
如果你转向 NGINX Plus 或开源的 NGINX 软件作为架构的一部分,建议你考虑 [技巧 3][14],然后配置 NGINX 缓存静态资源。使用下面的配置,用你 Web 服务器的 URL 替换 www.example.com。
server {
# substitute your web server's URL for www.example.com
server_name www.example.com;
root /var/www/example.com/htdocs;
index index.php;
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
# 使用你 WordPress 服务器的套接字,地址和端口来替换
fastcgi_pass unix:/var/run/php5-fpm.sock;
#fastcgi_pass 127.0.0.1:9000;
}
location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
expires max;
log_not_found off;
access_log off;
}
}
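修改好配置之后,建议先检查语法再重新加载 NGINX命令因发行版和安装方式略有差异下面只是常见的一种写法

    sudo nginx -t
    sudo nginx -s reload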
### 技巧 2. 缓存动态文件 ###
WordPress 生成的是动态网页,这意味着每次请求时它都要重新生成页面(即使结果和上一次完全相同)。这样用户随时拿到的都是最新内容。
想一下用户访问一个帖子、而帖子底部有读者评论的情形。你希望用户能看到所有评论 - 包括刚刚发布的那条。动态内容就是用来处理这种情况的。
但是,当这个帖子每秒收到十几二十个请求时,应用服务器就不得不一遍遍地重新生成页面,压力过大,造成延误。为了给每个用户提供最新内容,每次访问理论上都是一个全新的请求,用户因此不得不等待本该很快打开的页面。
为了防止页面在负载升高时变得缓慢,需要对动态文件进行缓存。缓存减少了页面的动态生成工作,从而提高整个系统的响应速度。
要在 WordPress 中启用缓存,需要使用一些流行的插件 - 如下所述。WordPress 缓存插件会先生成一个最新的页面,然后将其缓存一小段时间 - 也许只有几秒钟。因此,只要网站每秒有哪怕几个请求,大多数用户得到的就都是缓存副本。这对所有用户的检索时间都有帮助:
- 大多数用户获得页面的缓存副本。应用服务器没有做任何工作。
- 用户很快就能得到较新的副本。应用服务器只需每隔一段时间生成一次新页面。当服务器在缓存过期后为第一个访问的用户生成新页面时,速度也会快得多,因为它没有被过多的请求压垮。
你可以缓存运行在 LAMP 架构或者 [LEMP 架构][15] 上 WordPress 的动态文件(在 [技巧 3][16] 中说明了)。有几个缓存插件,你可以在 WordPress 中使用。这里有最流行的缓存插件和缓存技术,从最简单到最强大的:
- [Hyper-Cache][17] 和 [Quick-Cache][18] 这两个插件为每个 WordPress 页面创建一个单独的 PHP 文件。它们在支持部分动态功能的同时,绕过了大量 WordPress 核心处理和数据库连接,带来更快的用户体验。它们不会绕过所有的 PHP 处理,所以性能提升不如下面的几个选项,但也不需要修改 NGINX 的配置。
- [WP Super Cache][19] 最流行的 WordPress 缓存插件。它功能很多,并且界面非常简洁,如下图所示。我们在 [技巧 7][20] 中展示了一个简单的 NGINX 配置示例。
- [W3 Total Cache][21] 这是第二受欢迎的 WordPress 缓存插件。它比 WP Super Cache 的功能更强大,但部分配置选项也更复杂。一个简单的 NGINX 配置,请看 [技巧 6][22]。
- [FastCGI][23] CGI 代表通用网关接口,用于在因特网上发送请求和接收文件。它不是一个插件,而是一种直接使用缓存的方式。FastCGI 可以被用在 Apache 和 Nginx 上,它也是最流行的动态缓存方法;我们在 [技巧 5][24] 中描述了如何配置 NGINX 来使用它。
这些插件的技术文档解释了如何在 LAMP 架构中配置它们。配置选项包括数据库缓存和对象缓存,也包括对 HTML、CSS 和 JavaScript 的压缩,以及与流行 CDN 的可选集成。对于 NGINX 的配置,请看列表中对应的技巧。
**注意**:缓存对已登录的用户不起作用,因为他们看到的 WordPress 页面各不相同(好在对大多数网站来说,登录用户只占一小部分)。另外,大多数缓存不会对刚刚发表过评论的用户显示缓存页面,这样用户刷新页面时才能看到自己的评论。若要缓存页面中非个性化的内容,并且这对整体性能很重要,可以使用一种称为[片段缓存fragment caching][25] 的技术。
### 技巧 3. 使用 NGINX ###
如上所述,当并发用户数超过某一值时 Apache 会导致性能问题 可能数百个用户同时使用。Apache 对于每一个连接会消耗大量的资源因而容易耗尽内存。Apache 可以配置连接数的值来避免耗尽内存,但是这意味着,超过限制时,新的连接请求必须等待。
此外Apache 的 mod_php 模块会为每一个连接在内存中加载一份 PHP即使该连接只是请求静态文件图片、CSS、JavaScript 等)。这使得每个连接消耗更多的资源,从而限制了服务器的性能。
要解决这些问题,可以从 LAMP 架构迁移到 LEMP 架构 - 即使用 NGINX 取代 Apache。NGINX 只需很少的内存就能处理成千上万的并发连接,所以你既不会遇到内存抖动,也不必限制并发连接数。
NGINX 处理静态文件的性能也更好,并且自带简单易用的[缓存][26]控制策略。应用服务器的负载降低后,你的网站访问速度会更快,用户体验也更好。
你可以在部署的所有 Web 服务器上使用 NGINX或者你可以把一个 NGINX 服务器作为 Apache 的“前端”来进行反向代理 - NGINX 服务器接收客户端请求,将请求的静态文件直接返回,将 PHP 请求转发到 Apache 上进行处理。
对于动态页面的生成 - WordPress 核心体验 - 选择一个缓存工具,如 [技巧 2][27] 中描述的。在下面的技巧中,你可以看到 FastCGIW3_Total_Cache 和 WP-Super-Cache 在 NGINX 上的配置示例。 Hyper-Cache 和 Quick-Cache 不需要改变 NGINX 的配置。)
**技巧** 缓存通常会被保存到磁盘上,但你可以用 [tmpfs][28] 将缓存放在内存中来提高性能。
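下面是一个把缓存目录挂载到 tmpfs 的最小示例(目录和大小只是假设值;这种挂载重启后会失效,如需持久化可写入 /etc/fstab

    sudo mkdir -p /var/run/nginx-cache
    sudo mount -t tmpfs -o size=100m tmpfs /var/run/nginx-cache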
为 WordPress 配置 NGINX 很容易。按照这四个步骤,其详细的描述在指定的技巧中:
1. 添加固定链接permalink支持 - 让 NGINX 支持固定链接。此步消除了对 **.htaccess** 配置文件的依赖,这是 Apache 特有的。参见 [技巧 4][29]
2. 配置缓存 - 选择一个缓存工具并安装好它。可选择的有 FastCGI 缓存、W3 Total Cache、WP Super Cache、Hyper Cache 和 Quick Cache。请看技巧 [5][30]、[6][31] 和 [7][32]。
3. 落实安全防范措施 - 在 NGINX 上采用针对 WordPress 的安全最佳实践。参见 [技巧 8][33]。
4. 配置 WordPress 多站点 - 如果你使用 WordPress 多站点,在 NGINX 下配置子目录、子域或多域名的结构。见 [技巧 9][34]。
### 技巧 4. 为 NGINX 添加固定链接支持 ###
许多 WordPress 网站依赖 **.htaccess** 文件WordPress 的多项功能都需要它包括固定链接、插件和文件缓存。NGINX 不支持 **.htaccess** 文件。幸运的是,通过 NGINX 简洁而全面的配置,可以实现大部分相同的功能。
要在使用 NGINX 的 WordPress 站点中启用[固定链接][35],只需在主 [server][36] 块下添加下面的 location 块。(这个 location 块在本文其他代码示例中也都包含。)
**try_files** 指令告诉 NGINX 检查所请求的 URL 在根目录 **/var/www/example.com/htdocs** 下是作为文件(**$uri**)还是目录(**$uri/**存在。如果都不是NGINX 会重定向到 **/index.php**,并把查询字符串作为参数传递过去。
server {
server_name example.com www.example.com;
root /var/www/example.com/htdocs;
index index.php;
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;
location / {
try_files $uri $uri/ /index.php?$args;
}
}
### 技巧 5. 在 NGINX 中配置 FastCGI ###
NGINX 可以从 FastCGI 应用程序中缓存响应,如 PHP 响应。此方法可提供最佳的性能。
对于开源的 NGINX第三方模块 [ngx_cache_purge][37] 提供了缓存清除功能需要手动编译配置代码如下所示。NGINX Plus 已经内置了该功能的实现。
当使用 FastCGI 时,我们建议你安装 [NGINX 辅助插件][38] 并使用下面的配置文件,尤其是要使用 **fastcgi_cache_key** 并且 location 块下要包括 **fastcgi_cache_purge**。当页面被发布或有改变时,甚至有新评论被发布时,该插件会自动清除你的缓存,你也可以从 WordPress 管理控制台手动清除。
NGINX 的辅助插件还可以在你网页的底部添加一小段 HTML 代码,用来确认缓存是否正常工作并显示一些统计数据。(你也可以使用 [$upstream_cache_status][39] 变量来确认缓存功能是否正常。)
fastcgi_cache_path /var/run/nginx-cache levels=1:2
keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
server {
server_name example.com www.example.com;
root /var/www/example.com/htdocs;
index index.php;
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;
set $skip_cache 0;
# POST 请求和查询网址的字符串应该交给 PHP
if ($request_method = POST) {
set $skip_cache 1;
}
if ($query_string != "") {
set $skip_cache 1;
}
#以下 uris 中包含的部分不缓存
if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php
|sitemap(_index)?.xml") {
set $skip_cache 1;
}
#用户不能使用缓存登录或缓存最近的评论
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass
|wordpress_no_cache|wordpress_logged_in") {
set $skip_cache 1;
}
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri /index.php;
include fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache WORDPRESS;
fastcgi_cache_valid 60m;
}
location ~ /purge(/.*) {
fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
}
location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png
|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
access_log off;
log_not_found off;
expires max;
}
location = /robots.txt {
access_log off;
log_not_found off;
}
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}
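按上面的配置重新加载 NGINX 后,可以先访问一次站点,再看看缓存目录里是否生成了缓存文件,以此粗略确认 FastCGI 缓存在工作(路径对应上面 fastcgi_cache_path 的设置,域名仅为示例):

    curl -s -o /dev/null http://example.com/
    sudo ls -R /var/run/nginx-cache | head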
### 技巧 6. 为 NGINX 配置 W3_Total_Cache ###
[W3 Total Cache][40] 出自 Frederick Townes 的 [W3-Edge][41],是一个支持 NGINX 的 WordPress 缓存框架。它有众多配置选项,可以用来替代 FastCGI 缓存。
这个缓存插件提供了多种缓存方式,包括数据库缓存和对象缓存,还支持对 HTML、CSS 和 JavaScript 的压缩,并可选择与流行的 CDN 整合。
使用该插件时,需要在你的域名对应的 NGINX 配置文件中 include 它生成的配置文件(如下例所示)。
server {
server_name example.com www.example.com;
root /var/www/example.com/htdocs;
index index.php;
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;
include /path/to/wordpress/installation/nginx.conf;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
}
}
### 技巧 7. 为 NGINX 配置 WP Super Cache ###
[WP Super Cache][42] 由 [Automattic][43] 的 WordPress 开发者 Donncha O Caoimh 开发,它是一个 WordPress 缓存引擎,可以把 WordPress 的动态页面转化成静态 HTML 文件,让 NGINX 能够非常快速地提供服务。它是第一个 WordPress 缓存插件,和其他插件相比,它更专注于把一件事情做好。
配置 NGINX 来使用 WP Super Cache 有多种方式,具体取决于你的偏好,以下是一个示例配置。
在下面的配置中,引用 supercache 缓存目录的 location 块是 WP Super Cache 特有的部分。其余代码按照 WordPress 的规则,不为已登录用户和最近评论者提供缓存页面、不缓存 POST 请求,并为静态资源设置过期首部,再加上标准的 PHP 处理;这部分可以根据你的需求进行定制。
server {
server_name example.com www.example.com;
root /var/www/example.com/htdocs;
index index.php;
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log debug;
set $cache_uri $request_uri;
# POST 请求和查询网址的字符串应该交给 PHP
if ($request_method = POST) {
set $cache_uri 'null cache';
}
if ($query_string != "") {
set $cache_uri 'null cache';
}
#以下 uris 中包含的部分不缓存
if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php
|wp-.*.php|/feed/|index.php|wp-comments-popup.php
|wp-links-opml.php|wp-locations.php |sitemap(_index)?.xml
|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") {
set $cache_uri 'null cache';
}
#用户不能使用缓存登录或缓存最近的评论
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+
|wp-postpass|wordpress_logged_in") {
set $cache_uri 'null cache';
}
#当请求的文件存在时使用缓存否则将请求转发给WordPress
location / {
try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html
$uri $uri/ /index.php;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
log_not_found off;
access_log off;
}
location ~ \.php$ {
try_files $uri /index.php;
include fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
#fastcgi_pass 127.0.0.1:9000;
}
# 尽可能的缓存静态文件
location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css
|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2
|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
expires max;
log_not_found off;
access_log off;
}
}
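WP Super Cache 生成的静态 HTML 文件存放在上面 try_files 引用的 supercache 目录里;访问过若干页面后,可以检查该目录确认缓存已生成(路径与上面配置中的根目录一致):

    ls /var/www/example.com/htdocs/wp-content/cache/supercache/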
### 技巧 8. 为 NGINX 配置安全防范措施 ###
为了防范攻击,可以控制对关键资源的访问,并限制登录尝试的频率,以免机器被暴力登录请求压垮。
只允许特定的 IP 地址访问 WordPress 的仪表盘。
#对访问 WordPress 的仪表盘进行限制
location /wp-admin {
deny 192.192.9.9;
allow 192.192.1.0/24;
allow 10.1.1.0/16;
deny all;
}
禁止直接访问上传目录中的脚本类文件,以防止恶意代码被上传后运行。
#当上传的不是图像,视频,音乐等时,拒绝访问。
location ~* ^/wp-content/uploads/.*.(html|htm|shtml|php|js|swf)$ {
deny all;
}
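配置生效后,可以构造一个指向上传目录中脚本文件的请求来验证它会被拒绝(文件路径只是假设,正常情况下应返回 403

    curl -I http://example.com/wp-content/uploads/2015/10/test.php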
禁止任何人访问 WordPress 的配置文件 **wp-config.php**。另一种禁止访问的办法是把该文件移动到域根目录的上一级目录中。
# 拒绝其他人访问 wp-config.php
location ~* wp-config.php {
deny all;
}
对 **wp-login.php** 进行请求限速,以防止暴力破解。注意:下面用到的 `zone=one` 需要先在 NGINX 配置的 http 块中用 `limit_req_zone` 指令定义(例如 `limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;`,具体速率可按需调整)。
# 对 wp-login.php 进行限速
location = /wp-login.php {
limit_req zone=one burst=1 nodelay;
fastcgi_pass unix:/var/run/php5-fpm.sock;
#fastcgi_pass 127.0.0.1:9000;
}
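如果已经按上面的提示定义了 `zone=one` 的限速区域,可以用连续的快速请求粗略验证限速是否生效:超出速率的请求默认会返回 503域名仅为示例实际结果取决于你定义的速率和 burst 值):

    for i in 1 2 3; do curl -s -o /dev/null -w "%{http_code}\n" http://example.com/wp-login.php; done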
### 技巧 9. 配置 NGINX 支持 WordPress 多站点 ###
WordPress 多站点Multisite顾名思义允许你用同一个 WordPress 实例管理两个或多个网站。[WordPress.com][44] 运行的就是 WordPress 多站点,它为成千上万的用户博客提供托管服务。
你可以从单个域的任何子目录或从不同的子域来运行独立的网站。
使用此代码块添加对子目录的支持。
# 在 WordPress 中添加支持子目录结构的多站点
if (!-e $request_filename) {
rewrite /wp-admin$ $scheme://$host$uri/ permanent;
rewrite ^(/[^/]+)?(/wp-.*) $2 last;
rewrite ^(/[^/]+)?(/.*\.php) $2 last;
}
如果使用的是子域名结构,请用此代码块替换上面的代码块,以添加对子域名的支持(域名请改成你自己的)。
# 添加支持子域名
server_name example.com *.example.com;
旧版本3.4以前)的 WordPress 多站点使用 readfile() 来提供静态内容。然而readfile() 是 PHP 代码,执行它会使性能显著降低。我们可以用 NGINX 来绕过这些非必要的 PHP 处理。下面用(==============)线分隔出了对应的代码片段。
# 避免 PHP readfile() 在 /blogs.dir/structure 子目录中
location ^~ /blogs.dir {
internal;
alias /var/www/example.com/htdocs/wp-content/blogs.dir;
access_log off;
log_not_found off;
expires max;
}
============================================================
# 避免 PHP readfile() 在 /files/structure 子目录中
location ~ ^(/[^/]+/)?files/(?<rt_file>.+) {
try_files /wp-content/blogs.dir/$blogid/files/$rt_file /wp-includes/ms-files.php?file=$rt_file;
access_log off;
log_not_found off;
expires max;
}
============================================================
# WPMU 文件结构的子域路径
location ~ ^/files/(.*)$ {
try_files /wp-includes/ms-files.php?file=$1 =404;
access_log off;
log_not_found off;
expires max;
}
============================================================
# 将域名映射到对应的博客 ID用于定位特定的目录
map $http_host $blogid {
default 0;
example.com 1;
site1.example.com 2;
site1.com 2;
}
### 结论 ###
对许多凭借 WordPress 站点获得成功的开发者来说,可扩展性是一项挑战(对那些想提前避开 WordPress 性能问题的新站点来说也是如此)。为 WordPress 添加缓存,并把 WordPress 和 NGINX 结合起来,是不错的解决方案。
NGINX 并不只对 WordPress 网站有用。在全球排名前 1,000、10,000 和 100,000 的网站中NGINX 也是[领先的 web 服务器][45]。
欲了解更多有关 NGINX 的性能,请看我们最近的博客,[关于 10x 应用程序的 10 个技巧][46]。
NGINX 软件有两个版本:
- NGINX 开源的软件 - 像 WordPress 一样,此软件你可以自行下载,配置和编译。
- NGINX Plus - NGINX Plus 包括一个预构建的参考版本的软件,以及服务和技术支持。
想要开始,先到 [nginx.org][47] 下载开源软件并了解下 [NGINX Plus][48]。
--------------------------------------------------------------------------------
via: https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/
作者:[Floyd Smith][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.nginx.com/blog/author/floyd/
[1]:http://w3techs.com/technologies/overview/content_management/all
[2]:https://www.nginx.com/press/choosing-nginx-growth-wordpresscom/
[3]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#cache-static
[4]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#cache-dynamic
[5]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#adopt-nginx
[6]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#permalink
[7]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#fastcgi
[8]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#w3-total-cache
[9]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#wp-super-cache
[10]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#security
[11]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#multisite
[12]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#adopt-nginx
[13]:http://www.mobify.com/blog/beginners-guide-to-http-cache-headers/
[14]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#adopt-nginx
[15]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#lamp
[16]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#adopt-nginx
[17]:https://wordpress.org/plugins/hyper-cache/
[18]:https://wordpress.org/plugins/quick-cache/
[19]:https://wordpress.org/plugins/wp-super-cache/
[20]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#wp-super-cache
[21]:https://wordpress.org/plugins/w3-total-cache/
[22]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#w3-total-cache
[23]:http://www.fastcgi.com/
[24]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#fastcgi
[25]:https://css-tricks.com/wordpress-fragment-caching-revisited/
[26]:https://www.nginx.com/resources/admin-guide/content-caching/
[27]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#cache-dynamic
[28]:https://www.kernel.org/doc/Documentation/filesystems/tmpfs.txt
[29]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#permalink
[30]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#fastcgi
[31]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#w3-total-cache
[32]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#wp-super-cache
[33]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#security
[34]:https://www.nginx.com/blog/9-tips-for-improving-wordpress-performance-with-nginx/#multisite
[35]:http://codex.wordpress.org/Using_Permalinks
[36]:http://nginx.org/en/docs/http/ngx_http_core_module.html#server
[37]:https://github.com/FRiCKLE/ngx_cache_purge
[38]:https://wordpress.org/plugins/nginx-helper/
[39]:http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables
[40]:https://wordpress.org/plugins/w3-total-cache/
[41]:http://www.w3-edge.com/
[42]:https://wordpress.org/plugins/wp-super-cache/
[43]:http://automattic.com/
[44]:https://wordpress.com/
[45]:http://w3techs.com/technologies/cross/web_server/ranking
[46]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/
[47]:http://www.nginx.org/en
[48]:https://www.nginx.com/products/

View File

@ -0,0 +1,108 @@
如何在 Ubuntu 服务器中配置 AWStats
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/10/Apache_awstats_featured.jpg)
AWStats 是一个开源的网站分析报告工具,可以生成网站、流媒体、FTP 或邮件服务器的统计图表。这个日志分析器以 CGI 或命令行方式工作,并在网页中以图表的形式尽可能直观地展示日志中的全部信息。它采用“部分信息文件”的机制,以便能够频繁且快速地处理大量日志文件。它支持绝大多数 Web 服务器的日志文件格式,包括 Apache、IIS 等。
本文将帮助你在 Ubuntu 上安装配置 AWStats。
### 安装 AWStats 包 ###
默认情况下AWStats 的包在 Ubuntu 仓库中。
可以通过运行下面的命令来安装:
sudo apt-get install awstats
接下来,你需要启用 Apache 的 CGI 模块。
运行以下命令来启用它:
sudo a2enmod cgi
现在,重新启动 Apache 以使改变生效。
sudo /etc/init.d/apache2 restart
### 配置 AWStats ###
你需要为你想要查看统计的每个域或网站创建一个配置文件。在这个例子中,我们将为 “test.com” 创建一个配置文件。
要完成此步,你可以通过复制 AWStats 的默认配置文件来配置你要统计的域。
sudo cp /etc/awstats/awstats.conf /etc/awstats/awstats.test.com.conf
现在,你需要在配置文件中做一些修改:
sudo nano /etc/awstats/awstats.test.com.conf
像下面这样修改下:
# Change to Apache log file, by default it's /var/log/apache2/access.log
LogFile="/var/log/apache2/access.log"
# Change to the website domain name
SiteDomain="test.com"
HostAliases="www.test.com localhost 127.0.0.1"
# When this parameter is set to 1, AWStats adds a button on report page to allow to "update" statistics from a web browser
AllowToUpdateStatsFromBrowser=1
保存并关闭文件。
修改配置文件后,你需要用服务器的当前日志建立初步统计。你可以这样做:
sudo /usr/lib/cgi-bin/awstats.pl -config=test.com -update
输出会是这个样子:
![awtstats](https://www.maketecheasier.com/assets/uploads/2015/10/awtstats.png)
### 为 Apache 配置 AWStats ###
接下来,你需要配置 Apache2 来显示统计数据。现在你需要将 “cgi-bin” 文件夹中的内容复制到 Apache 默认根目录下。默认它是在 “/usr/lib/cgi-bin”。
运行以下命令来完成此步:
sudo cp -r /usr/lib/cgi-bin /var/www/html/
sudo chown www-data:www-data /var/www/html/cgi-bin/
sudo chmod -R 755 /var/www/html/cgi-bin/
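复制完成后,可以先在服务器本机用 curl 检查一下这个 CGI 脚本能否正常输出页面(假设 Apache 监听在本机 80 端口):

    curl -s "http://localhost/cgi-bin/awstats.pl?config=test.com" | head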
### 测试 AWStats ###
现在,你可以通过访问 “http://your-server-ip/cgi-bin/awstats.pl?config=test.com” 来查看 AWStats 的页面。
它的页面像下面这样:
![awstats_page](https://www.maketecheasier.com/assets/uploads/2015/10/awstats_page.jpg)
### 设置定时任务来更新日志 ###
建议你创建一个定时任务,使用新创建的日志条目定期更新 AWStats 的数据库,然后统计会定期更新。这也将节省你的时间。
要做到这一点,你需要编辑 “/etc/crontab” 文件:
sudo nano /etc/crontab
添加下面那一行来让 AWStats 每十分钟更新一次。
*/10 * * * * root /usr/lib/cgi-bin/awstats.pl -config=test.com -update
保存并关闭文件。
### 结论 ###
AWStats 是一个非常有用的工具,可以让你对网站的状况了如指掌,并能协助你分析网站。它非常容易安装和配置。如果你有任何疑问,请在下面发表评论。
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/set-up-awstats-ubuntu/
作者:[Hitesh Jethva][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/hiteshjethva/